AI Exchange @ UVA [2025.1]
A space for exploring AI in teaching, research, personal productivity & operations across Grounds.
Our mission is simple: to keep the UVA community informed, engaged, and inspired as we navigate this transformation together.
Research Spotlight: Podcast with Reza Mousavi


About Reza: Reza Mousavi’s research explores both the inner workings of large language models (LLMs) and how humans interact with them. He studies how to make AI more reliable and controllable, compares strengths across models like GPT and LLaMA, and investigates the differences between human and AI cognition in processing language.
🔑 Key Insights
His work focuses on (1) the inner mechanics of large language models (LLMs) and how to “tune” them, and (2) how humans consume and interact with AI-generated content in organizational and societal contexts.
Generative AI tools like ChatGPT and Gemini are reshaping research workflows—helping with proofreading, idea generation, running complex models, and even writing grant proposals.
Mousavi is investigating how humans and LLMs differ in processing the same inputs, comparing which tokens each focuses on—an effort to better understand both cognition and AI “attention.”
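To make the comparison concrete, here is a minimal, purely illustrative sketch of one way to measure agreement between two "readers" of the same text, such as a human annotator and an LLM attention head. The token weights are made-up numbers, and the top-k overlap metric is an assumption for illustration, not Mousavi's actual method.

```python
# Illustrative only: compare which tokens two "readers" (e.g., a human
# annotator and an LLM attention head) weight most heavily on one input.
# The weights below are invented, not real eye-tracking or attention data.

def top_k_tokens(weights: dict[str, float], k: int = 3) -> set[str]:
    """Return the k tokens with the highest weight."""
    return {tok for tok, _ in sorted(weights.items(), key=lambda kv: -kv[1])[:k]}

def overlap(a: dict[str, float], b: dict[str, float], k: int = 3) -> float:
    """Jaccard overlap of the two readers' top-k token sets."""
    ta, tb = top_k_tokens(a, k), top_k_tokens(b, k)
    return len(ta & tb) / len(ta | tb)

human = {"loan": 0.9, "default": 0.8, "bank": 0.5, "the": 0.1}
model = {"default": 0.7, "loan": 0.6, "the": 0.4, "bank": 0.3}

# Both readers rank "loan" and "default" highly, but diverge on the rest.
print(overlap(human, model, k=3))  # → 0.5
```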
🚀 Takeaways
Researchers in any discipline can leverage AI, but they need to understand the inner workings of models first.
AI will increasingly act as a collaborator in knowledge work, not just an automation tool.
Choosing the “right” AI tool depends heavily on context—there’s no one-size-fits-all model.
Tools Mentioned:
ChatGPT (OpenAI) – his go-to for coding help and general tasks.
Gemini (Google DeepMind) – his preference for proofreading and content polishing.
Anthropic Claude – another strong option for writing and analysis.
Hugging Face Hub – open-source LLMs and model libraries.
👉 Key idea: Understanding how AI models actually work opens the door to more meaningful, responsible, and innovative research.
Research Spotlight
“AI will not replace researchers—but researchers who learn to work with AI will expand the very frontier of knowledge.” - Anton Korinek
UVA’s very own Anton Korinek, recently named one of Time Magazine’s Top 100 AI Influencers, has released a groundbreaking Guide for AI Agents in Economic Research. While rooted in economics, his insights stretch far beyond the discipline—offering valuable lessons for researchers across the social sciences, business, and beyond.
Korinek’s guide is a call for all researchers to rethink how we work with AI.
Here are a few highlights:
1. AI as Semi-Autonomous Research Assistants
Korinek envisions AI agents not as mere tools, but as collaborators. They can search the web, run code, and synthesize results into coherent narratives. For researchers in any field, this means moving beyond simply automating tasks toward treating AI as an intellectual partner that can help spark new ideas and perspectives.
2. From Linear Workflows to Dynamic Research Graphs
Traditional research often follows a linear path. Korinek highlights frameworks like LangGraph, which enable flexible, branching workflows that adapt as new evidence arises. Instead of forcing projects into rigid scripts, researchers can let their inquiries evolve naturally.
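The branching idea can be sketched in a few lines of plain Python: each node is a research step, and a routing function chooses the next node based on the state accumulated so far. This mimics the spirit of frameworks like LangGraph but is not their API; the node names and state fields are hypothetical.

```python
# Minimal sketch of a dynamic research graph: nodes are steps, and a
# routing function branches based on accumulated state. This illustrates
# the idea behind frameworks like LangGraph; it is NOT their actual API.

def search(state):
    state["evidence"] = ["finding A", "finding B"]  # stub web search
    return state

def analyze(state):
    state["supported"] = len(state["evidence"]) >= 2
    return state

def revise_question(state):
    state["question"] += " (revised)"  # loop back with a sharper question
    return state

def write_up(state):
    state["draft"] = f"Answer to: {state['question']}"
    return state

def route(state):
    # The branch point: thin evidence sends the project back for revision.
    return "write_up" if state.get("supported") else "revise_question"

NODES = {"search": search, "analyze": analyze,
         "revise_question": revise_question, "write_up": write_up}
EDGES = {"search": "analyze", "analyze": route,
         "revise_question": "search", "write_up": None}

def run(state, node="search"):
    while node is not None:
        state = NODES[node](state)
        nxt = EDGES[node]
        node = nxt(state) if callable(nxt) else nxt
    return state

result = run({"question": "Does X cause Y?"})
print(result["draft"])
```

The key design point is that `EDGES` can hold either a fixed next step or a function of the state, so the workflow adapts as evidence arrives instead of following a rigid script.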
3. Harnessing Deep Research Agents
Advanced systems can orchestrate multiple specialized sub-agents in parallel—one searching, another analyzing, another synthesizing findings. This allows researchers to explore competing hypotheses at once, dramatically accelerating the pace of discovery.
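A toy version of this orchestration pattern, assuming stub sub-agents in place of real search and analysis tools: competing hypotheses are fanned out to worker threads and the results are merged by a synthesizer step.

```python
# Sketch of "deep research" orchestration: stub sub-agents run in parallel
# threads over competing hypotheses, then a synthesizer merges the results.
# The agents here are placeholders for real search/analysis components.
from concurrent.futures import ThreadPoolExecutor

def search_agent(hypothesis):
    return f"sources for '{hypothesis}'"

def analysis_agent(hypothesis):
    return f"analysis of '{hypothesis}'"

def synthesize(results):
    return " | ".join(results)

hypotheses = ["H1: prices drive demand", "H2: demand drives prices"]

with ThreadPoolExecutor() as pool:
    # Explore both hypotheses at once: each gets a search and an analysis.
    searches = list(pool.map(search_agent, hypotheses))
    analyses = list(pool.map(analysis_agent, hypotheses))

report = synthesize(searches + analyses)
print(report)
```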
4. Guardrails Are Non-Negotiable
AI agents can hallucinate, misapply theories, or compound small errors. Korinek stresses the importance of treating AI like a team of research assistants—always guiding, checking, and verifying their work. Transparency and oversight are critical to ensure credibility.
5. Strategic Choices About Tools and Access
Finally, researchers must make conscious decisions about the tools they use. Open-source models may be safer and more transparent, while proprietary systems offer raw power but less control. Institutions, Korinek argues, should base compliance and access policies on evidence, not outdated assumptions.
AI by the Numbers
In a new NBER working paper (How People Use ChatGPT, 2025), researchers analyzed 1.1 million conversations from May 2024–June 2025. Using a classifier, they mapped messages into broad categories. The breakdown is below.
Breakdown of ChatGPT conversations from 1.1M samples
Takeaway: productivity and learning tasks drive the majority of use, while personal expression and play remain relatively small.
Upcoming UVA AI Events
AI + Environment RIG
🗓 September 25, 2025 | 1:00–2:00 PM | Online
The Environment + AI Research Interest Group (RIG) is an interdisciplinary community of UVA faculty exploring how artificial intelligence can help understand, manage, and mitigate environmental and climate challenges — and how these technologies impact people, places, and the planet.
Coming Soon: Faculty Voices and AI Research Toolkit


AI Tutors in the Classroom: Professors Jill Mitchell and Roger Martin will share how they're weaving AI into the Foundations of Financial Accounting course, and what it reveals about the future of AI in higher education.
Research Excellence Faculty Toolkit: AI research tools help faculty, staff, and students accelerate discovery by supporting data analysis, coding, visualization, literature reviews, and writing. When used responsibly, they enhance productivity, sharpen insights, and expand the scope of inquiry.
Resources
Inside, you’ll find:
✅ Rules of thumb for aligning AI use with UVA’s Honor System.
✅ Principles & practices for keeping your voice and creativity at the center.
✅ How to work with AI as a research partner.
✅ Guardrails to keep results credible.
✅ Flexible workflows for many contexts, from essays and case studies to coding and analysis.
✅ Prompting strategies for sharper, clearer results.