People have lost trust in institutions and are becoming more insular, which is linked to increased conflict and lost productivity.
The most popular solution is to facilitate trust by having leaders "promote a shared identity and culture" to remind people what unites them.
www.edelman.com/sites/g/file...
Exposing people to creative content believed to have been created by gen-AI (vs. a human peer) increases people’s self-confidence in their own relevant creative abilities.
This effect emerges for jokes, stories, poetry, and visual art, even when it's unwarranted.
psycnet.apa.org/record/2026-...
Liberals are less willing to share messages supporting causes they personally endorse when those messages employ moral rhetoric they perceive as aligned with conservative values relative to rhetoric aligned with liberal values.
www.sciencedirect.com/science/arti...
Although leaders should be concerned with collective success, most organizations — from sports teams to universities to global companies — still focus on rewarding individual performance.
We explain the key incentives for collective success: www.powerofusnewsletter.com/p/entreprene...
Texting daily with a random human peer is more effective at reducing loneliness than texting with a highly supportive chatbot.
Next time you feel lonely, reach out to a human, any human.
www.sciencedirect.com/science/arti...
I'm giving a talk at the Stanford Tech Impact & Policy Center on "Morality in the Anthropocene" May 19th.
Here is the link to attend in person or via zoom: fsi.stanford.edu/events/jay-v...
Thanks! Laura Globig deserves most of the credit, but she's not on Bluesky :(
Read the full working paper: osf.io/8fhwg/
This was led by Laura Globig along with Nadya Hanaveriesa & @sydneylevine.bsky.social
👉 Participants were less sensitive to social norms when interacting with AI than when interacting with human partners.
👉 They were worse at predicting AI behavior.
👉 When AI was framed as intentional & goal-directed, people predicted AI behavior more accurately and the cooperation gap was reduced.
Why do social norms work differently when we interact with AI?
In a new paper (N = 1,108), we found that people are less sensitive to prosocial norms when interacting with AI because they are less accurate at predicting AI behavior.
Read the full working paper: osf.io/8fhwg/
New study. We had adults place historical figures on the left-right ideological spectrum. Folks most often place history's villains (Hitler, Stalin, etc.) as extreme examples of their opponents, but place heroes (Jesus, MLK, Lincoln) on their own team. 1/4 www.tandfonline.com/doi/full/10....
What is “the E-Bike Effect?”
“Those who bought e-bikes increased their average daily bicycle use from 2.1km (1.3 miles) to 9.2km (5.7 miles), a 340% increase. The e-bike share of all their transportation increased dramatically too; from 17% to 49%.”
Fewer car trips.
Via @lloydalter.bsky.social
AI-powered writing tools are increasingly integrated into our e-mails and phones. Now a new study finds biased AI suggestions can sway users' beliefs.
LLMs overemphasize moral concerns common in Western societies and underestimate values more prominent elsewhere. These distortions likely stem from cultural biases in training data and carry societal implications and risks.
www.pnas.org/doi/10.1073/...
These findings demonstrate a reliable, valid, accessible, and cost-effective approach to labeling texts for nuanced expressions of national identity, enabling new insights into its role in contemporary and historical trends.
It was led by @stefanleach.bsky.social & @alekscichocka.bsky.social
An analysis of US presidential addresses reveals that expressions of national identities have doubled over the last century.
Defensive national identities were 5X more prevalent in Republicans' social media posts than in Democrats', and were also frequent in speeches of populist leaders around the globe.
We tested four popular LLMs across 13 million words from social media, surveys, and political speeches in 25 languages. LLMs outperform both dictionary-based approaches and crowd workers, while reducing costs by a factor of 1,000 compared to the latter.
National identity drives a range of actions, from civic engagement to intergroup violence.
Our new paper presents a novel approach using LLMs to code expressions of positive (national identification, patriotism) & defensive (nationalism, national narcissism) identities.
osf.io/preprints/ps...
IMO, the big tech bros have been hiding this research any way they can while taking heed of it for their own families. I know that several admit that they severely limit screen time and social media overall for their own kids.
In this systematic review and meta-analysis of up to 153 longitudinal studies:
Social media use was associated with higher depression, behavioral problems, self-injury, and substance use, and lower self-perception and academic achievement tinyurl.com/47z3anym
📚Preprint📚
Gregson, Nikadon, Formanowicz, @chiarazazzarino.bsky.social, Kitchin, Kosinski, @jayvanbavel.bsky.social @alekscichocka.bsky.social
➤ osf.io/preprints/psyarxiv/9jcvr_v1
Tasked LLMs to label national identities in social media posts and political speeches (25 languages, 13M words)
🧵 1/8
“In the preface, he declares that all royalties will be donated to the Scientific Integrity Fund. His motives for writing seem to be two-fold: firstly, to help himself make sense of how he became embroiled in a scandal about honesty research, and to describe what could have been done differently.”
Jennifer Byrne (@jabyrnesci.bsky.social) reviews Max H. Bazerman’s 'Inside an Academic Scandal', a narrative of research misconduct, institutional response, and the ethical challenges surrounding fraud in academia.
journals.uvic.ca/index.php/pi...
Super interesting study, Victoria.
🧵 New preprint alert! Why do some people refuse to use AI, even when you tell them it's safe and beneficial? We argue it's not about risk perception. It's about moralization. AI has become a moral issue for many people, and that changes everything. [1/7] osf.io/preprints/ps...
The wealthy dominate political contributions. Our study shows the top 0.1% donate 10–15× more frequently than the bottom 90%. The gradient isn't subtle; it's exponential. cup.org/4cfm0Az
An analysis of over 14 million social media posts from accounts in Canada found that 87% of conspiratorial claims come from just 100 influencers.
This minority of users impacts politics, influencing what people view as normal and leading others to self-censor to avoid attacks from conspiracy theorists.
Sycophantic AI distorts reality by returning responses that are biased to reinforce existing beliefs.
"sycophantic AI distorts belief, manufacturing certainty where there should be doubt."
Unbiased sampling produces discovery rates 5X higher! arxiv.org/pdf/2602.14270