@gsbsilab.bsky.social
Led by @susanathey.bsky.social, the Golub Capital Social Impact Lab at the Stanford University Graduate School of Business uses digital technology and social science to improve the effectiveness of social sector organizations.
Watch @Susan_Athey's talk on the implications of AI for the organisation of industry and work at #G20SouthAfrica
www.youtube.com/watch?v=okDG...
"Governments will play a key role…in whether we actually develop the technology that will help lower-skilled workers become more productive by using AI to augment them with expertise that previously was difficult to acquire."
18.07.2025 13:06

The talk will cover:
✔️ How AI is altering industry dynamics & structures
✔️ How these shifts will impact public services such as health and education
✔️ How AI market concentration could tax the global economy
✔️ Why govt policy will be crucial in shaping AI competition and innovation
AI & digitisation are rapidly reshaping the way we work.
Policymakers need to understand how, and what to do about it.
Watch @Susan_Athey speak to G20 leaders about these issues tomorrow 16 July @ 13:30 CET. #G20SouthAfrica
bit.ly/3GyMFgm or bit.ly/44PTXFP
Beyond predictions, @keyonv.bsky.social also worked with @gsbsilab.bsky.social to show how these models can produce better estimates for important quantities, such as the gender wage gap between men and women with the same career histories. Learn more here: bsky.app/profile/gsbs...
30.06.2025 15:39

If we know someone's career history, how well can we predict which jobs they'll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers' career trajectories & better understand labor markets.
medium.com/@gsb_silab/k...
Paper: arxiv.org/abs/2409.09894
30.06.2025 12:15

Analyzing representations tells us where history explains the gap.
Ex: there are two kinds of managers: those who used to be engineers and those who didn't. The first group is paid more and has a higher share of men than the second.
Models that don't use history omit this distinction.
We use these methods to estimate wage gaps adjusted for full job history, following the literature on gender wage gaps.
History explains a substantial fraction of the remaining wage gap when compared to simpler methods. But there's still a lot that history can't account for.
This result motivates new fine-tuning strategies.
We consider 3 strategies similar to methods from the causal estimation literature, e.g., optimizing representations to predict the *difference* in wages between men and women instead of individual wages.
All perform well on synthetic data.
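The "predict the difference" idea can be sketched as a loss function. This is a toy illustration, not the paper's implementation: `embed` and `head` are hypothetical placeholders for the representation network and the wage model, and pairing male/female histories directly is a simplification.

```python
# Toy sketch (not the paper's method): fine-tune a representation to predict
# the male-female wage *difference* rather than individual wages.
# `embed` stands in for the representation network, `head` for the wage model.

def difference_loss(pairs, embed, head):
    """Mean squared error between predicted and observed wage differences.
    `pairs` holds (male_history, female_history, male_wage, female_wage)."""
    total = 0.0
    for hist_m, hist_f, wage_m, wage_f in pairs:
        pred_diff = head(embed(hist_m)) - head(embed(hist_f))
        total += (pred_diff - (wage_m - wage_f)) ** 2
    return total / len(pairs)

# Tiny usage example with linear placeholders.
embed = lambda history: float(sum(history))  # "representation" = summed history
head = lambda z: 0.1 * z                     # "wage model" = linear in the summary
pairs = [([1, 2], [1, 1], 3.0, 2.9)]
print(difference_loss(pairs, embed, head))
```

Gradients of this loss would flow through `embed`, pushing the representation to keep exactly the history information that drives wage differences between groups.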
Two extremes:
A representation that's just the identity function meets condition (1) trivially but not (2).
A representation that uses a very simple summary of history (e.g., # of years worked) should meet (2) but fails (1).
New result: Fast + consistent estimates are possible even if a representation drops info
Two main fine-tuning conditions:
1. Representation only drops info that isn't correlated w/ both wage & gender
2. Representation is simple enough that it's easy to model wage & gender from it
Intuition: If working in job X at some point has a small effect on wages, but men are much likelier to have worked in job X than women, it may be omitted by a model optimized to predict wage.
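This intuition can be checked with a toy simulation (all numbers made up): job X adds only a little to wages but men hold it far more often, so dropping it from the representation shifts the adjusted gap.

```python
# Toy simulation (made-up numbers) of the omitted-variable intuition:
# job X has a small wage effect (0.05) but a large gender skew, and the
# true direct gender gap in wages is 0.10.
import random

random.seed(0)

rows = []
for _ in range(100_000):
    male = random.random() < 0.5
    # Men are much likelier to have worked in job X.
    job_x = random.random() < (0.8 if male else 0.2)
    wage = 3.0 + 0.05 * job_x + 0.10 * male + random.gauss(0.0, 0.2)
    rows.append((male, job_x, wage))

def mean(xs):
    return sum(xs) / len(xs)

# Gap adjusted for the full "history": compare men and women within job_x strata.
adjusted_gap = mean([
    mean([w for m, j, w in rows if m and j == jx])
    - mean([w for m, j, w in rows if not m and j == jx])
    for jx in (False, True)
])

# Gap under a representation that drops job_x: part of job_x's wage effect
# is misattributed to gender.
raw_gap = (mean([w for m, _, w in rows if m])
           - mean([w for m, _, w in rows if not m]))

print(f"adjusted ~ {adjusted_gap:.3f}, dropping job_x ~ {raw_gap:.3f}")
```

With these parameters the unadjusted gap overshoots the direct effect by roughly 0.05 × (0.8 − 0.2) = 0.03.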
30.06.2025 12:15

Foundation models are usually fine-tuned to make predictions (like wages).
But representations fine-tuned this way can induce omitted variable bias: the gap adjusted for full history can be different from the gap adjusted for the representation of job history.
We use CAREER, a foundation model of job histories. It's pretrained on resumes, but its representations can be fine-tuned on the smaller datasets used to estimate wage gaps.
30.06.2025 12:15

But this discards information that's relevant to the wage gap.
In contrast, foundation models learn *representations*: lower-dimensional variables that summarize information.
Consider estimating the wage gap explained by differences in job history.
Job history is high-dimensional since there are many possible sequences of jobs. So most economic models describe histories using hand-selected summary stats (e.g., # of years worked).
Decompositions can inform policy: a large explained gender wage gap can suggest differences in choices or opportunities earlier in a worker's career, while an unexplained gap may arise due to differences in factors such as skill, care responsibilities, or bargaining.
30.06.2025 12:15

Ex.: estimating the gender wage gap between men & women with the same job histories.
A large literature decomposes wage gaps into two parts: the part "explained" by gender gaps in observed characteristics (e.g. education, experience), and the part that's "unexplained."
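A minimal sketch of such a decomposition, Oaxaca-Blinder style, with one observed characteristic and made-up numbers. Real decompositions use many covariates and group-specific coefficients; here a pooled regression coefficient prices the characteristic.

```python
# Toy explained/unexplained wage-gap decomposition with one characteristic
# (years of education). All numbers are illustrative, not real data.

def mean(xs):
    return sum(xs) / len(xs)

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical data: years of education and log wages for the two groups.
edu_m, wage_m = [12, 14, 16, 16, 18], [3.0, 3.2, 3.5, 3.6, 3.9]
edu_f, wage_f = [12, 12, 14, 16, 16], [2.8, 2.9, 3.1, 3.4, 3.5]

raw_gap = mean(wage_m) - mean(wage_f)

# Price education with the pooled return, then split the raw gap.
beta = ols_slope(edu_m + edu_f, wage_m + wage_f)
explained = beta * (mean(edu_m) - mean(edu_f))  # gap due to education gaps
unexplained = raw_gap - explained               # the residual part

print(f"raw {raw_gap:.3f} = explained {explained:.3f} "
      f"+ unexplained {unexplained:.3f}")
```

The two parts sum to the raw gap by construction; what changes across methods is how flexibly the "explained" part conditions on characteristics, which is where job-history representations come in.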
Foundation models make great predictions. How should we use them for estimation problems in social science?
New PNAS paper @susanathey.bsky.social & @keyonv.bsky.social & @Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates possible by fine-tuning models differently 🧵
Full paper (Athey & Palikot, 2024) on arXiv: arxiv.org/abs/2405.00247. We thank Coursera for their collaboration. Follow @gsbsilab.bsky.social for more research insights on the digital economy, education, and policy. #OnlineLearning #JobMarket
22.05.2025 15:03

Takeaway 2: Our experiment isolates, for the least employable learners, the effect of credibly and systematically informing employers about online credentials. The positive finding helps build a case that such credentials may be good investments for workers seeking to transition jobs.
22.05.2025 15:03

Takeaway 1: Online learning platforms and professional networking sites (e.g., LinkedIn) can boost job outcomes with simple features. Even light nudges to encourage skill signaling (like sharing a certificate) can improve employment prospects.
22.05.2025 15:03

Who benefited most? The boost was greatest for learners with the lowest initial job prospects.
22.05.2025 15:03

Nor were the gains simply from sprucing up profiles. Treated learners' LinkedIn pages weren't more complete or active than the control group's. This suggests it was the certificate signal itself, not a general profile update, that made the difference.
22.05.2025 15:03

We checked that this isn't just a fluke of LinkedIn activity. The results held even after excluding any "new" jobs that started within 4 months of the intervention (to ensure we only count jobs found after the credential was shared).
22.05.2025 15:03

And it paid off: nudged learners were ~6% more likely to land a new job within a year than the control group. They were also ~9% more likely to have a job in the same field as their certificate.
22.05.2025 15:03

To dive deeper, we looked closer at the subsample of ~40K learners who were LinkedIn users before the experiment. They were 17% more likely to add their certificate to LinkedIn if they received the treatment. Their profiles also saw more views, signaling higher interest from potential employers.
22.05.2025 15:03

After learners earned a certificate, we randomly assigned a subset to the treatment group, who got a prompt to easily add their new credential to LinkedIn. The nudge worked: treated learners added credentials to their LinkedIn accounts, and these certificates received visits on LinkedIn.
22.05.2025 15:03

We ran a global, randomized trial with ~800,000 Coursera learners who had earned certificates, and who either came from a developing country or had no college degree. Do they get more jobs if they link to their (verified) Coursera certificate on LinkedIn?
22.05.2025 15:03