"They said it could not be done." We're releasing Pleias 1.0, the first suite of models trained on open data (either permissively licensed or uncopyrighted): Pleias-3b, Pleias-1b and Pleias-350m, all based on the two-trillion-token set from Common Corpus.
05.12.2024 16:39 · 248 · 85 · 11 · 19
[progress bar] ~25% trained
"A painting of a mountain lake with a boat in the foreground, surrounded by lush green grass, trees, and rocks. The sky is filled with white, fluffy clouds, creating a peaceful atmosphere."
06.12.2024 22:28 · 13 · 3 · 2 · 0
Great study on misinformation. Just want to point out that this kind of work is impossible without the fair use doctrine. Massive copying, computational analysis, ...
29.11.2024 22:44 · 33 · 12 · 2 · 1
Hi, so I've spent the past almost-decade studying research uses of public social media data, like e.g. ML researchers using content from Twitter, Reddit, and Mastodon.
Anyway, buckle up, this is about to be a VERY long thread with lots of thoughts and links to papers. 🧵
27.11.2024 15:33 · 964 · 452 · 59 · 123
Making a bsky dataset is a bit like breaking Glaze. It's in users' best interests to know how easy it is, but they'll hate you for it.
27.11.2024 04:10 · 2 · 0 · 0 · 0
Sincerely do not tell anyone in the replies what the fire hose is lmao
15.11.2024 22:14 · 18 · 6 · 3 · 0
100%. And I think the challenge is real not because it requires complicated technology, but because both AI orgs and rights holders see opt-outs as a compromise that they'd need to be forced into.
14.11.2024 03:04 · 2 · 0 · 1 · 0
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
Making AI safer at Google DeepMind
davidlindner.me
Assistant Professor at the Polaris Lab @ Princeton (https://www.polarislab.org/); Researching: RL, Strategic Decision-Making + Exploration; AI + Law
Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/
AI safety at Anthropic, on leave from a faculty job at NYU.
Views not employers'.
I think you should join Giving What We Can.
cims.nyu.edu/~sbowman
AI Safety @ xAI | AI robustness, PhD @ UC Berkeley | normanmu.com
5th year PhD student at UW CSE, working on Security and Privacy for ML
PhD student at ETH Zurich | Student Researcher at Google | Agents security and, more generally, ML security and privacy
edoardo.science
spylab.ai
AI privacy and security | PhD student in the SPY Lab at ETH Zurich | Ask me about coffee ☕
ai safety researcher | phd ETH Zurich | https://danielpaleka.com
3rd year PhD candidate @ Princeton ECE
Faculty at the ELLIS Institute Tübingen and Max Planck Institute for Intelligent Systems. Leading the AI Safety and Alignment group. PhD from EPFL supported by Google & OpenPhil PhD fellowships.
More details: https://www.andriushchenko.me/
Thinking about how/why AI works/doesn't, and how to make it go well for us.
Currently: AI Agent Security @ US AI Safety Institute
benjaminedelman.com
Academic, AI nerd and science nerd more broadly. Currently obsessed with Stravinsky (not sure how that happened).