Daniel S. Schiff

@dschiff.bsky.social

Assist. Professor @purduepolsci & Co-Director of Governance & Responsible AI Lab (GRAIL). Studying #AI policy and #AIEthics. Secretary for @IEEE 7010 standard.

192 Followers · 4 Following · 110 Posts · Joined Feb 2024
3 days ago

5/5 The stakes: AI influences fundamental life outcomes. Without robust, transparent audits, we risk perpetuating harms and undermining trust.

For governance folks: What's your biggest auditing challenge—technical gaps, regulatory clarity, or stakeholder engagement?

doi.org/10.1177/205395

3 days ago

4/5 Auditors face regulatory ambiguity, data governance gaps, and interdisciplinary friction between tech, legal, and leadership teams.

Yet they're ecosystem builders—translating vague laws into actionable frameworks and pushing organizations toward better AI governance.

3 days ago

3/5 Key finding: Most audits focus narrowly on technical metrics.

Broader impacts on vulnerable communities? Often sidelined.
Public reporting of audit results? Almost nonexistent.

Transparency and stakeholder engagement remain major gaps. 📈

3 days ago

2/5 What's driving AI auditing growth?

🔹 Regulation (EU AI Act, NIST frameworks)
🔹 Reputation management (avoiding biased AI headlines)
🔹 Competitive strategy (trustworthy AI advantage)

The ecosystem spans internal teams, Big Four firms, specialized startups. @purduepolsci.bsky.social

3 days ago

1/5 We interviewed 34 AI ethics auditors across 23 organizations in 7 countries. Published in Big Data & Society. @bigdatasociety.bsky.social

The field borrows from financial auditing: planning, validating, analyzing risks, reporting. But it's still figuring out what success looks like. 📊

3 days ago

AI systems decide who gets hired, who gets loans, who receives healthcare. But who's auditing the AI? 🤖

Our new study explores the emerging field of AI ethics auditing—the people and processes trying to make AI accountable. @grailcenter.bsky.social

doi.org/10.1177/205... 🧵

1 week ago

Published in Hastings Center Report. @purduepolsci.bsky.social @GRAILcenter.bsky.social

onlinelibrary.wiley.com/doi/abs/10....

With Daniel Susser, Sara Gerke, Laura Y. Cabrera, I. Glenn Cohen, & team

1 week ago

Synthetic data should complement real-world data, not replace it. The choice ahead: Will we use this technology to bridge healthcare gaps or deepen inequities?

For governance teams & researchers working on AI in healthcare—curious what you're seeing?

#SyntheticData #AIinHealthcare #Bioethics

1 week ago

We argue synthetic data isn't a magic fix—it's a powerful tool that demands robust safeguards 🛡️

Key needs:
• Standards for accuracy & reliability
• Privacy protections
• Transparent policies
• Continued investment in diverse, real-world datasets

1 week ago

But the risks are real:
• Accuracy issues for rare disease algorithms
• Potential privacy leaks despite synthetic nature
• Bias amplification from flawed source data
• Regulatory gaps that let "non-identifiable" status be exploited
• Justice concerns about sidelining real-world diversity

1 week ago

What synthetic data promises:
• Privacy protection through artificial datasets
• Inclusive modeling of rare diseases & underserved groups
• Enhanced AI training capabilities
• Scalable research opportunities

The potential is substantial ⚡

1 week ago

Enter synthetic data: AI-generated datasets that mimic real-world patterns without containing actual patient information

Sounds perfect—private, inclusive, scalable. But our analysis in Hastings Center Report reveals significant ethical complexities 🚨

1 week ago

The challenge: Healthcare research is data-rich but insight-poor 📊

Privacy laws, demographic gaps, and underrepresentation of rare conditions prevent researchers from fully utilizing available EHRs, public datasets, and lab studies

1 week ago

Synthetic data promises to revolutionize healthcare research—solving privacy issues, modeling rare diseases, expanding equity. But it's also an ethical minefield that demands careful navigation 🧵

onlinelibrary.wiley.com/doi/abs/10....

2 weeks ago

#8 For policy practitioners, governance teams, and org leaders: curious what you're seeing in your hiring? Paper below 👇

@purduepolsci.bsky.social @GRAILcenter.bsky.social

doi.org/10.1109/TTS...

2 weeks ago

#7 AI ethics and governance aren't "nice-to-haves"—they're becoming non-negotiable pillars of responsible AI development. As industries adopt AI at scale, these roles will define how society benefits from this technology ⚖️

2 weeks ago

#6 What's driving alignment? New AI regulations demand compliance. Employers recognize public trust is critical for AI adoption. Universities race to create relevant programs. More than 100K professionals needed annually

2 weeks ago

#5 Finance and Information industries dominate demand, with AI ethics/governance roles growing fastest there. Highly regulated sectors can't afford ethical lapses as AI adoption scales 🏦

2 weeks ago

#4 Demand is surging 🚀 AI ethics roles grew from 35K in 2018 to 109K in 2022. Governance roles hit 96K in 2022. Even as overall AI hiring dipped in 2023, these roles remained stable. Results suggest sustained market need

2 weeks ago

#3 Key finding: AI ethics ≠ AI governance. Employers seek distinct skills:

🔹 Ethics: Data privacy, bias mitigation, critical thinking
🔹 Governance: Risk management, policy development, leadership

Both require interdisciplinary knowledge

2 weeks ago

#2 Our study analyzed 4.4M+ AI-related job postings to uncover trends in demand for AI ethics (fairness, transparency) and AI governance (regulatory compliance, risk management) skills. Published in IEEE Transactions on Technology and Society

2 weeks ago

#1 We're seeing an "AI skills gap"—a shortage of professionals equipped with both technical expertise AND the ability to handle ethical dilemmas and regulatory challenges. AI is transforming industries, but with great power comes great responsibility 📊

2 weeks ago

The AI job market is evolving beyond coding. Employers now demand AI ethics and governance skills at unprecedented rates. Our analysis of 4M+ job postings from 2018-2023 reveals what's driving this shift 🧵

doi.org/10.1109/TTS...

3 weeks ago
Development and validation of a short AI literacy test (AILIT-S) for university students: "Fostering AI literacy is an important goal in higher education in many disciplines. Assessing AI literacy can inform researchers and educators on curr…"

7/7 Curious what you think—does this match what you're seeing in AI education assessment?

For researchers and educators working on AI literacy:

www.sciencedirect.com/science/art...

3 weeks ago

6/7 🔬 Next steps: Validation beyond Western university samples, workplace applications, and cross-cultural AI literacy research.

With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work

@purduepolsci.bsky.social @GRAILcenter.bsky.social

3 weeks ago

5/7 🌍 Why this matters for AI governance:
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.

AILIT-S makes systematic evaluation feasible.

3 weeks ago

4/7 🎯 Best use cases:
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research

❌ Avoid for individual diagnostics

The speed enables broader participation and better population-level insights.

3 weeks ago

3/7 ✅ Results show AILIT-S delivers:
• ~5 minutes completion time (vs 12+ for full version)
• 91% congruence with comprehensive assessment
• Strong performance for group-level analysis

Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)

3 weeks ago

2/7 📊 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?

Special emphasis on technical understanding—the foundation of true AI literacy.

3 weeks ago

1/7 ⚡ The challenge: Existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.

Our solution distills a robust 28-item instrument into 10 key questions—validated with 1,465 university students across the US, Germany, and UK.
