AI Exchange @ UVA [2025.5]
Our mission is simple: to keep the UVA community informed, engaged, and inspired as we navigate this transformation together.
Research Spotlight: Building Fair and Secure AI at UVA


Interdisciplinary Paths to Responsible AI: Steven L. Johnson, a professor at the McIntire School of Commerce, studies how digital technologies and online communities influence people—especially youth—and how algorithms impact information and interaction. Tom Hartvigsen, an assistant professor at the School of Data Science, focuses on making AI systems in healthcare and other critical domains trustworthy, fair, and context-aware. Together, their work bridges business, data science, and ethics to improve how humans and AI interact.
🔑 Key Insights
Interdisciplinary collaboration is essential
Steven and Tom stress that solving “hairy” AI problems—like online toxicity, youth well-being, and healthcare—requires people from different disciplines (business, data science, medicine, etc.) working together. Seed grants and deliberate matchmaking (like the UVA Darden–Data Science award) make it possible to invest the extra time these collaborations need.
Context matters for AI decisions
Their joint work on toxicity shows you can’t judge content by keywords alone—you need social and conversational context, or you risk silencing support communities or missing real harm. Tom extends this idea to “participatory AI,” where domain experts and end-users actively shape and correct models rather than just having tools pushed onto them.
AI is powerful but overhyped, so evaluation must be rigorous
Large language models are genuinely impressive, yet often marketed as near-doctor replacements based on weak tests (like variants of the U.S. medical licensing exam). Tom’s research shows those scores don’t predict real clinical usefulness, and Steven warns that we may be over-investing in current LLM architectures while under-investing in alternative approaches, echoing past tech hype cycles (dot-com, 5G, etc.).
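The context point above can be sketched with a toy example (entirely illustrative, not the researchers’ actual method): a keyword-only filter flags a supportive message that a crude context-aware rule would let stand.

```python
# Toy illustration (NOT the authors' method): keyword-only moderation flags a
# supportive message, while adding conversational context changes the call.

TOXIC_KEYWORDS = {"kill", "hate", "stupid"}

def keyword_flag(message: str) -> bool:
    """Flag a message if it contains any blocklisted keyword."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & TOXIC_KEYWORDS)

def context_flag(message: str, thread_topic: str) -> bool:
    """A crude context-aware rule: suppress keyword flags inside
    support-community threads, where charged words are often
    self-referential or quoted, not aimed at others."""
    if thread_topic == "support":
        return False
    return keyword_flag(message)

msg = "I used to hate myself, but this community helped me stop."
print(keyword_flag(msg))             # True  (keyword-only: would be silenced)
print(context_flag(msg, "support"))  # False (context-aware: allowed)
```

Real systems model context with far richer signals (thread history, community norms, user roles), but even this sketch shows why the same words can warrant opposite moderation decisions.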
👉 Main idea: Thoughtful collaboration across fields is crucial to making AI genuinely safe and useful in people’s lives.
“Decoding the Rulebook: Extracting Hidden Moderation Criteria from Reddit Communities”
Steven L. Johnson and Tom Hartvigsen use natural language processing to uncover the real rules behind online content moderation, revealing how community decisions often differ from their stated guidelines.
AI Research@UVA: How AI rules matured in the U.S., EU, and China
Lo, Leo S. (2025). Artificial intelligence regulation matures: Landscapes of the USA, European Union, and China. IFLA Journal (Essay). https://doi.org/10.1177/03400352251384915
(Leo Lo is Dean of the Library and University Librarian at UVA.)
💡 The Big Idea
Across 2023–July 2025, AI governance in the U.S., EU, and China moved from high‑level principles to concrete rules and tools. Despite different styles, Lo argues they converge on safety, transparency, and inclusion—and that libraries should respond with tighter procurement accountability, lawful training data, published assessment artifacts, and AI literacy.
📈 Key Findings
United States. EO 14110 (10/30/2023) set a coordination baseline, then was revoked on Jan 20, 2025. On July 23, 2025, three new EOs re‑anchored policy:
fast‑tracking > 100 MW AI‑data‑center projects;
promoting a U.S. “AI technology stack” for export;
directing “unbiased AI” criteria for federal procurement.
European Union. The AI Act (Reg. 2024/1689) entered into force Aug 1, 2024;
“unacceptable‑risk” bans applied Feb 2, 2025;
general‑purpose AI obligations began Aug 2, 2025,
supported by a July 2025 Code of Practice and a new European AI Office.
China. Generative‑AI Interim Measures took effect Aug 15, 2023 (registration, safety review, watermarking).
On July 26, 2025, China proposed a UN‑anchored Global AI Governance Action Plan emphasizing standards, capacity‑building, and traceability.
UK. Stayed regulator‑led and “assurance‑first,” with uptake of ISO/IEC 42001 for AI management systems.
⚖️ Why it Matters
Procurement accountability. We should require clear purpose statements, training‑data lineage, bias/robustness metrics, privacy‑by‑design, update notices, and auditability from vendors—mirroring U.S. procurement signals and EU transparency duties.
Lawful, well‑described training data. Curated public‑domain/rights‑cleared corpora reduce risk and improve reproducibility, aligning with EU provenance expectations and domestic rules elsewhere.
Publish assessment artifacts. Share DPIAs, model‑evaluation notes, and method statements to strengthen trust and peer learning.
Make AI literacy a core service. Teach capabilities, limits, and ethics; disclose where AI is used; and maintain human oversight for consequential tasks.
AI Research@UVA: Programmable Virtual Humans Offer a New Path for Drug Discovery
💡 The Big Idea
A team led by You Wu and Lei Xie (Northeastern University) with Philip E. Bourne (UVA) has unveiled a bold new idea: programmable virtual humans — AI-driven digital models that mimic real human biology.
📈 Key Findings
The Problem: Most drugs that look promising in the lab fail in human trials — the dreaded translational gap.
The Innovation: Virtual humans integrate physics-based models, biological networks, and AI to simulate how drugs behave inside the body — from molecules to organs.
Why It Matters: This could mean faster, safer, and cheaper drug development — with fewer animal tests and more precise predictions.
The Vision: A future where scientists can test thousands of therapies virtually before any human trial begins.
AI by the Numbers: Measuring AI’s Real-World Value
💡 The Big Idea
OpenAI just released GDPval, a new benchmark that tests how well AI can perform real, economically valuable work — not just answer trivia or logic puzzles.
Instead of synthetic exams, GDPval uses actual tasks from 44 occupations across the nine sectors that drive most of U.S. GDP. Each task was designed by professionals averaging 14 years of experience.
📈 Key Findings
Real work, not test questions: 1,300+ tasks modeled after real professional deliverables — spreadsheets, slide decks, reports, and more.
Human comparison: AI outputs are graded head-to-head against expert human work.
Progress: Model performance is improving linearly over time — GPT-5 and Claude 4.1 are now approaching human quality.
Productivity boost: With human oversight, AI can complete tasks up to roughly 1.5× faster and cheaper than unaided expert work.
Open source: A public “gold subset” of 220 tasks and an automated grader are live at evals.openai.com.
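As a rough illustration of how head-to-head grading rolls up into a single score (the judgment labels below are invented, not GDPval data):

```python
# Hypothetical sketch of GDPval-style head-to-head grading: each task's model
# output is judged against the expert deliverable as a win, tie, or loss, and
# a common summary metric is the model's win-or-tie rate across tasks.
from collections import Counter

# Invented per-task judgments for illustration only.
judgments = ["win", "loss", "tie", "win", "loss", "loss", "win", "tie"]

def win_or_tie_rate(results: list[str]) -> float:
    """Fraction of tasks where the model's output was judged at least
    as good as the expert deliverable."""
    counts = Counter(results)
    return (counts["win"] + counts["tie"]) / len(results)

print(win_or_tie_rate(judgments))  # 0.625
```

The actual benchmark uses expert human graders (plus an automated grader on the public subset), but the aggregation idea is the same: pairwise comparisons against professional deliverables, summarized as a rate.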
⚖️ Why it matters
GDPval could become the new standard for tracking how AI contributes to the economy — a leading indicator of productivity before it shows up in GDP stats.
Around UVA
Inside UVA’s Innovation Corner: Exploring What’s Next with AI
Since launching in July 2025, the Innovation Corner has become UVA’s community space for exploring how artificial intelligence can enhance teaching, learning, and creative work.
Bringing together faculty, staff, instructional designers, and technologists from across Grounds, the group meets monthly to exchange ideas, test emerging tools, and identify where AI might help address real challenges in our classrooms and workplaces.
Why Start the Innovation Corner...? AI is evolving faster than higher education’s traditional adoption cycles. The Innovation Corner serves as a sandbox for responsible experimentation. It is a place to learn together, share discoveries, and think strategically about how UVA might use (or choose not to use) these technologies.
It’s part discussion forum, part testing lab, and part collaborative network for exploring what is emerging across Grounds. Members explore new tools, talk candidly about what’s working (and what isn’t), and surface the gaps or needs that AI might help fill.
Since July, the group has experimented with a wide range of AI applications—from media creation and image generation to research companions and emerging multi-model platforms that combine several AI systems in one privacy-forward environment.
Each month’s exploration includes guided testing and group discussion, helping members understand how these tools could enhance creativity, efficiency, and student engagement, while keeping privacy, accessibility, and equity at the forefront.
In the months ahead, the Innovation Corner will continue exploring AI-enabled video production tools and other emerging applications. The group is also conducting an inventory of AI-related needs across Grounds to help shape next semester’s review list and ensure that future explorations align with real institutional priorities.
The goal: to stay curious, collaborative, and intentional, focusing on innovation that’s both forward-thinking and practical for UVA’s teaching and learning ecosystem.
If you’re interested in joining the Innovation Corner, we’d love to have you. The group meets virtually each month. Not everyone has time to join another group on Grounds, but we’d still love to hear from you.
If you’re experimenting with AI in your teaching, research, or day-to-day work, tell us what you’re trying.
Are you running an AI pilot in your department?
Testing a new workflow or classroom tool?
Exploring creative or ethical uses of AI with students or colleagues?
We’d love to connect, learn from your experiences, and even feature your work in an upcoming meeting.
To get involved or share what you’re working on, contact Sarah Cochran, Senior Director of Learning Experience and Digital Innovation, at skv2kd@virginia.edu.
Upcoming AI Events @ UVA
TODAY!!!
Datapalooza: School of Data Science
🗓 November 14, 2025 | 1:00 PM–5:00 PM | School of Data Science (1919 Ivy Road)
Explore AI, data, and truth at Datapalooza 2025—UVA’s fall showcase of data science in action across fields and for the public good.
AI + Environment RIG
🗓 November 17, 2025 | 1:00 PM–2:00 PM | Online
The Environment + AI Research Interest Group (RIG) invites faculty from across Grounds to our next session, which will feature two flash talks:
Zezhou Cheng from UVA’s Computer Vision Lab, who will lead a discussion on the intersection of environmental research and AI-driven computer vision.
Mool Gupta will share how AI is being applied to improve critical material recycling from solar cells, advancing the circular economy for clean energy.
Co-Opting AI: Libraries
🗓 November 19, 2025 | 4:00 PM–5:15 PM | Online
On November 19, Mona Sloane will be in conversation with Shauntee Burns-Simpson, Eric Klinenberg, and Peter Muster about the intimate relationship between AI and learning, access to knowledge, and sociality. More information and free registration for this Zoom conversation here.
We would be grateful if you shared this event widely with your networks and particularly your students. The past 50+ Co-Opting AI episodes can be viewed here.
AI Faculty Mixer | LaCross AI Institute
🗓 November 20, 2025 | 4:00 PM–6:00 PM | Forum Hotel
Join the LaCross Institute for Ethical Artificial Intelligence in Business for a faculty mixer uniting colleagues from the Darden School of Business and across UVA who share an interest in AI.
The mixer is free, but please RSVP for the food and beverage count!
UVA AI Conference
🗓 December 5, 2025 | 8:00 AM–6:00 PM | Forum Hotel
Minding the Gap: How AI Drives Performance and What Limits Its Impact
On December 5, 2025, the UVA Conference on Ethical AI in Business will cut through the noise to explore these opportunities and challenges head-on.
UVA LLM Workshop 2025: The UVA Workshop on Large Language Models for Science and Engineering.
🗓 December 5, 2025 | 8:00 AM–4:30 PM | Newcomb Hall Theatre
This workshop aims to bring together students, researchers, and faculty from across UVA to exchange ideas, foster collaboration, and explore the innovative applications of Large Language Models (LLMs) in engineering, education, and scientific research.
Register here: https://forms.gle/51cEWLFtKAU53ZJG8
A preliminary schedule is available on the workshop website. A more detailed schedule will be sent out soon.
AI Fellowship Opportunity
🗓 Application Due December 12, 2025
The UVA Darden LaCross Institute for Ethical Artificial Intelligence in Business (LaCross AI Institute) is requesting proposals for multidisciplinary research fellowships focused on ethical AI in business. The fellowships will cover faculty and student-centric research activities for a period of 12-24 months, up to a total of $100,000.
2026 Fellowships in AI Research (FAIR) Symposium
🗓 January 30, 2026 | Forum Hotel
Join us at The Forum Hotel for the Fellowships in AI Research (FAIR) Research Symposium. Hear from current fellows as they share their latest research and be among the first to meet the 2026 cohort.
Coming Soon: Faculty Voices
From Chatbots to VR—Rethinking Teaching with AI: Marc Santugini, an associate professor of economics at the University of Virginia, uses innovative teaching methods and technologies—from virtual reality to AI-driven feedback—to make economic theory engaging, accessible, and hands-on for students. By experimenting with tools like his Lumiere app, individualized TA sessions, and AI agents, he is reimagining large-class instruction to promote deeper critical thinking and highly personalized learning at scale.
UVA AI Resources
AI for Academic Excellence - Student Toolkit: A comprehensive guide for students on the best uses of AI.
AI Agents in Economic Research (Anton Korinek): A guide for researchers on the use of AI agents.
UVA Claude Builders Student Club: A 250+ strong group for those interested in development via Claude.
UVA Podcasts We Listen to
Co-Opting AI: Public Conversations About Artificial Intelligence and Society:
Prof. Mona Sloane’s series Co-Opting AI is a virtual public speaker series that interrogates and demystifies AI.
UVA Data Points: A podcast from the School of Data Science.
HOOS in STEM: From Prof. Ken Ono, this series showcases the marvelous cornucopia of STEM at UVA, from the latest innovations to growth inside and outside the classroom.
Thanks for reading AI @ UVA Substack! Subscribe for new posts and podcasts.