AI Scientists Have a Problem: AI Bots Are Reviewing Their Work
ChatGPT is wreaking chaos in the field that birthed it.
NEW: I wrote about a problem that's plaguing the artificial intelligence field: AI research is clearly being peer-reviewed by AI chatbots, no one really knows what to do about it ...
... and unfortunately, AI scientists have only themselves to blame. www.chronicle.com/article/ai-s...
22.08.2024 16:27
What the heck's the difference between an "algorithm" and "AI", anyway?
"Algorithm" and "AI" are two terms that often get used interchangeably, but they have important technical differences. I ran across an algorithm labelled as AI and realized that my intuitions about th...
I got confused enough about what does or doesn't (and should or shouldn't) count as "artificial intelligence" that I had to write about it.
What's the difference between an "algorithm" and an "AI"? Honestly, it's more a matter of taste than we might like to think.
wp.me/pfOKhZ-I
20.08.2024 18:09
Good point; I hadn't thought of it outside of tech, but it really is all the same thing over and over, huh?
19.08.2024 16:45
A frustrating part of all these LLMs, etc., is that companies are rewarded for creating untested, unregulated applications & models. If one succeeds, it's affordable to retroactively make it legal. And the negative consequences of the failed apps fall onto the users, not the companies.
19.08.2024 06:44
This is exactly the mentality that has to be eradicated. It is utterly unacceptable.
18.08.2024 16:08
Man, we gotta scale up our willingness to prosecute white-collar crime real fast!
19.08.2024 06:29
I wonder if LLM folks might surpass the high baseline sociopathy of other tech fields. I used to feel like it was "I don't care if I ruin your stuff" and now it's "I'm actively trying to ruin your stuff".
19.08.2024 06:26
I'd forgotten that - was that Betterhelp? But even there, I worry that they didn't actually face meaningful consequences. I still hear their ads on lots of podcasts.
18.08.2024 02:38
Air Canada Has to Honor a Refund Policy Its Chatbot Made Up
The airline tried to argue that it shouldn't be liable for anything its chatbot says.
Oh, good! The strongest consequence I'd seen was Air Canada having to refund a customer who bought a ticket at a rate its chatbot made up -- but that was a trivial consequence compared to the effort the customer undertook to take them to court. www.wired.com/story/air-ca...
13.08.2024 22:47
Aaaah! Maybe I'm misunderstanding (haven't read the paper), but is this saying that the LLMs will generate novel ideas that may or may not actually work? Every researcher has tons of good ideas that they haven't published because often the hard part is testing the ideas, not creating them!
13.08.2024 22:44
Love it!
One thing I wonder about, at the very bottom right, is: has anyone ever actually had to take any significant responsibility for their misplaced trust in ChatGPT?
13.08.2024 22:33
I wonder where the belief that ChatGPT can do everything you'd want it to do comes from? All the rest of our tech relies on specialized programs, apps, or devices -- even web browsers require you to go to multiple websites if you want to perform multiple tasks.
13.08.2024 17:14
Speaking of bots on social media, I found out that the AI hashtag on Bluesky is populated approximately entirely with AI-generated scantily-clad buxom anime babes. Is there something instead of hashtags that people use on here?
12.08.2024 17:54
Definitely fun app to have!
Definitely a good start and need to see more. The comments under posts look really realistic. I love the ability to send any photo you want and comment under other peoples posts. Now here's the negatives, def want the ai to continue posting after you download the app, more personality's for different accounts, ability for ai to send images to you in chats, and hopefully one day the feature to send videos! Looking forward for more updates and features!
(Oops. Too excited. Sorry for the typo!)
There was one review I couldn't fit into the post, so I figure I'll share it here to give you a sense of who's excited about Aspect.
I'm fascinated by the user who is missing out by not being able to send videos to a chatbot.
12.08.2024 17:38
Terrible AI-deas: Aspect, the AI social media space that monetizes your solipsism
A social media system where everyone but you is a bot. It's another Terrible AI-dea.
I found out about a social media app that's entirely populated by bots, intentionally.
I'm excited for the first post in our series on Terrible AI-deas, looking at "Aspect", the social media hellscape that uses #LLM to create #AI friends just for you
hallucinatingparrots.wordpress.com/2024/08/12/t...
12.08.2024 17:22
The poem "Antigonish", which was supposedly written in 1899, but may have been time tunneled into the past after looking at the AI/LLM-powered replies to any social media post today:
Yesterday, upon the stair,
I met a man who wasn't there.
He wasn't there again today,
I wish, I wish he'd go away!
09.08.2024 23:46
How could AI-generated text sneak into your writing without you knowing? It calls to mind the airline check-in question "Have your bags been in your possession this entire time?"
07.08.2024 20:44
Everything you write for school or work should come from you, especially something as important as a research paper. With AI Detector, you can be sure the text you turn in is AI-free.
I'm fascinated by the Möbius-strip logic that some AI text sites come up with. Like, here's one from an AI detector, explaining why you should run an AI detector on your homework before turning it in:
07.08.2024 20:41
Text from a website: "Why Is It Important to Bypass ZeroGPT?
Bypassing ZeroGPT means that instructors and any person on the hunt for AI-created text won't be able to flag your work. This is particularly critical for students to avoid any problems and maintain academic integrity."
There are a lot of good reasons to avoid any program that purports to distinguish AI-generated text from human-generated text. This one, which I think comes from an AI-generated article, is maybe the worst:
07.08.2024 20:26
Fingers crossed for that!
I've been frustrated because there are great potential insights for theoretical linguistics from large language models; we've never gotten computers to handle complex grammar so well! But all the praise gets given to these depressing parlor-trick applications instead.
07.08.2024 20:16
I think these are good signs, but the worst parts of generative AI aren't going to go away just because the big companies stop investing. Like coal-mine runoff, we're going to be stuck with low-quality, low-effort AI text and images polluting our lives for a long time.
07.08.2024 20:05
The problem with plausible answers is that we don't immediately realize that they're wrong. We're more likely to accept a wrong answer when it's plausible than when it's implausible. As a result, in many situations it's better if an AI makes big mistakes than small ones.
Intuitively, you might think that it's better if AIs make small errors rather than large ones. But I'm not convinced that's the case, at least not with public-facing AI applications.
05.08.2024 20:33
OpenAI has had a system for watermarking ChatGPT-created text and a tool to detect the watermark ready for about a year, reports The Wall Street Journal. But the company is divided internally over whether to release it. On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.
Companies won't regulate themselves. There's no incentive to act altruistically if you don't think your competitors will follow suit. If we want #AI technologies to adhere to the public good, we have to be willing to impose regulations.
(h/t @bcmerchant.bsky.social)
www.theverge.com/2024/8/4/242...
05.08.2024 20:01
Using AI for Political Polling β Ash Center
This is from someone who should know better, in a Harvard Kennedy School commentary on AI in lieu of political polling: ash.harvard.edu/articles/usi...
As with much of this stuff: sure, it *could* work, but the most likely outcome is "wrong but plausible". You wouldn't know it failed!
05.08.2024 04:15
"What's so powerful about this system is that it can generalize to new scenarios and survey topics, and spit out a plausible answer, even if its accuracy is not guaranteed. In many cases, it will anticipate those responses at least as well as a human political expert. And if the results don't make sense, the human can immediately prompt the AI with a dozen follow-up questions."
I wonder how much of the hype in the AI bubble consists of people saying something bad as if it is good?
One of the biggest problems with LLMs is that they lie confidently. "Wrong but plausible" is the WORST thing a text can be! And yet here it's sold as a nearly-optimal situation?
05.08.2024 04:09
On the one hand, I can imagine there's a benefit to submitting prompt drafts that don't work until you get the one that does. But that also ends up crafting what you're saying as if the computer is the audience, rather than the human audience you're actually targeting.
05.08.2024 00:18