
James Monk

@twoev.bsky.social

Physics, dodge and burn. Wow, this place really is a rip off of twitter - someone's gonna get sued

68 Followers  |  76 Following  |  886 Posts  |  Joined: 17.08.2024

Latest posts by twoev.bsky.social on Bluesky

Post image

Better to rage at the issue of Sith plots than to try to find a scapegoat in Anakin Skywalker

09.08.2025 10:41 — 👍 0    🔁 0    💬 0    📌 0

Will OpenAI ever be profitable? I don't know! Are people over-invested? Sure! But the actual model is an improvement; it is incrementally more useful in a production setting. Most people are disappointed, though

08.08.2025 21:38 — 👍 0    🔁 0    💬 0    📌 0

Horseshoe theory of AI: the AGI boosters and the perma-skeptics are actually the same. The response to GPT-5 is quite interesting; it isn't AGI (whatever that is), which both the boosters and the skeptics were expecting. OTOH it is quantitatively better than GPT-4 at some useful tasks

08.08.2025 21:35 — 👍 0    🔁 0    💬 1    📌 0

An under-discussed cause of the present confusing times is the differential exposure to - and perception of - advertising. I experience very little advertising now and find it crass, obvious and bordering on offensive. I'd hazard a guess that advertising exposure correlates with a bunch of socioeconomic markers

08.08.2025 21:13 — 👍 0    🔁 0    💬 0    📌 0

I do think, though, that an under-appreciated class divide is how exposed a person is to advertising. Apart from what I see on the tube, which is not very impactful, I basically see no adverts any more. When I do see them I find them vaguely offensive, as a crass form of manipulation

08.08.2025 21:05 — 👍 1    🔁 0    💬 0    📌 0

This is a version of Simpson's paradox I think. The people stressing over the vaccines are not the same people casually going for a BBL

08.08.2025 21:02 — 👍 1    🔁 0    💬 1    📌 0

You can just lean into it and say the cake is a representation of pile-up instead. It looks tasty either way

08.08.2025 11:06 — 👍 1    🔁 0    💬 1    📌 0

If reading didn’t exist and you pitched a new tech that caused a human voice to appear inside the mind of a user that others cannot hear, it would be heralded as akin to magic. VCs would love it

07.08.2025 20:26 — 👍 0    🔁 0    💬 0    📌 0

I am going to patent a new technology. The tech is that a user will experience a human voice in their own mind, but no one else can hear it. Amazingly, we trigger this feature only by requiring the user to briefly look at a simple - but infinitely customisable - pattern. VCs are gonna go wild for it!

07.08.2025 20:23 — 👍 3    🔁 0    💬 0    📌 0
Post image

06.08.2025 22:53 — 👍 0    🔁 0    💬 0    📌 0

Obi-Wan "you were supposed to end homelessness, not create it" meme

06.08.2025 20:01 — 👍 0    🔁 0    💬 0    📌 0

06.08.2025 19:58 — 👍 1    🔁 0    💬 1    📌 0

Sentiment analysis is a real thing; traders have used it for years. But you need a dedicated system - you can't use an off-the-shelf LLM without wrapping it in a specialised framework. You also need to be aware of the biases in your sources

06.08.2025 15:31 — 👍 0    🔁 0    💬 0    📌 0
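A minimal sketch of the "specialised framework" idea in the post above: the LLM is reduced to an injected callable that labels each item, while the surrounding code constrains its free-form output to a bounded score, weights each item by a per-source bias estimate, and aggregates the result into one signal. Every name, prompt and weight here is an illustrative assumption, not a description of any real trading system.

```python
from dataclasses import dataclass
from typing import Callable

# Map the only labels we accept to numeric scores; anything else falls back to 0.
LABELS = {"negative": -1.0, "neutral": 0.0, "positive": 1.0}

@dataclass
class SourcedText:
    source: str   # e.g. "newswire" or "retail_forum" (hypothetical source names)
    text: str

def make_scorer(llm: Callable[[str], str], source_weights: dict[str, float]):
    """Build a scorer: the LLM labels each item, the framework does the rest."""
    def score(batch: list[SourcedText]) -> float:
        total, weight_sum = 0.0, 0.0
        for item in batch:
            label = llm(f"Classify the sentiment (positive/neutral/negative): {item.text}")
            value = LABELS.get(label.strip().lower(), 0.0)  # constrain free-form output
            w = source_weights.get(item.source, 0.5)        # down-weight biased sources
            total += w * value
            weight_sum += w
        return total / weight_sum if weight_sum else 0.0
    return score

if __name__ == "__main__":
    fake_llm = lambda prompt: "positive"  # stand-in for a real model call
    scorer = make_scorer(fake_llm, {"newswire": 1.0, "retail_forum": 0.3})
    print(scorer([SourcedText("newswire", "Earnings beat expectations")]))
```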

Wait, what? Do the people doing that understand that LLM training data is not current, so you can't focus-group current issues? The canonical example is when it says Biden is still president. Of course it can search, but then it is just summarising a search result at best

06.08.2025 13:48 — 👍 0    🔁 0    💬 1    📌 0

If they did a remake of TTOI they would definitely use an LLM-based focus group to generate policy ideas

06.08.2025 13:14 — 👍 0    🔁 0    💬 1    📌 0

We’ve invented DigiBen from The Thick of It

06.08.2025 13:07 — 👍 0    🔁 0    💬 1    📌 0

You are describing tool use by the LLM, which is a thing now. It comes with its own problems: the LLM needs to understand what a computer can do, and there are safety concerns because you are giving the LLM the ability to perform arbitrary functions on the computer

06.08.2025 12:57 — 👍 0    🔁 0    💬 0    📌 0
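A minimal sketch of what "tool use" usually means in practice, assuming the common pattern where the model emits a structured request and a dispatcher executes it: the safety concern in the post above then becomes what is on the allow-list. The model call is stubbed with a hard-coded JSON string, and every function name is hypothetical.

```python
import json
from typing import Callable

# Registry of functions the model is allowed to invoke.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a function on the allow-list."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_clock() -> str:
    from datetime import datetime
    return datetime.now().isoformat()

def dispatch(model_output: str) -> str:
    """Parse the model's tool request; run it only if it is on the allow-list."""
    request = json.loads(model_output)  # e.g. {"tool": "read_clock", "args": {}}
    name = request.get("tool")
    if name not in TOOLS:
        return f"refused: {name!r} is not an allowed tool"
    return TOOLS[name](**request.get("args", {}))

if __name__ == "__main__":
    print(dispatch('{"tool": "read_clock", "args": {}}'))    # allowed
    print(dispatch('{"tool": "delete_files", "args": {}}'))  # blocked by the registry
```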

You are using the wrong type of AI model for this. Words do not have meaning to an image model; they are just pixels. A text-based model won’t produce pretty output, but the words are more likely to come out in the correct order

06.08.2025 12:45 — 👍 0    🔁 0    💬 0    📌 0

I mean, if you just try it for yourself and ask GPT for a timeline overview of European imperialism since 1885, I’m fairly sure it will respond with a passable answer. That’s not actually what historians do, though, so it doesn’t tell us they can be replaced

06.08.2025 12:39 — 👍 1    🔁 0    💬 0    📌 0

The correct argument for why AI (which isn’t even a single thing) can’t replace historians is that LLMs’ only model for truth is “I read it on the internet.” That’s why I think it’s a bad thing when the output of an image model goes viral as proof of AI’s failure

06.08.2025 12:35 — 👍 0    🔁 0    💬 1    📌 0

It’s actually quite hard. What you are describing is a multi-modal model. The programs that turn text into images are not “basic”; they didn’t exist a few years ago. They don’t have a model of what text is - it’s just pixels to them, like any other photo they were trained on

06.08.2025 12:32 — 👍 0    🔁 0    💬 2    📌 0

In the tradition of bsky I feel compelled to spoil the joke by pointing out that an image model made that, and words have no meaning to those models, which is why it is gibberish. Dismissing AI on this basis is just as wrong as saying it will replace historians

06.08.2025 12:10 — 👍 0    🔁 0    💬 1    📌 0

I said back in 2023 that LLMs will not work for everything, but they will be excellent politicians: they tell people what they want to hear, they are good at manipulating language, they are fine-tuned to satisfy the median voter, and they have no model of truth beyond what they read

06.08.2025 11:53 — 👍 2    🔁 0    💬 1    📌 0

You hear the same argument in favour of ID cards or BritCard - we already give away all our information, so it doesn’t matter if the state mandates it. Who are all these people who let themselves be tracked with impunity? It’s quite easy to avoid the worst of it

06.08.2025 10:15 — 👍 2    🔁 0    💬 0    📌 0

Where is the AI capex going? Is it just data centres/NVDA? It feels plausible that at least some of it is going on IP, which ultimately is people and wages. What will that money recycle into?

05.08.2025 13:22 — 👍 0    🔁 0    💬 0    📌 0

Voting is widely understood to be a decision-making mechanism, and that is bad and a source of problems. Electoral reform doesn’t fix that; it likely accelerates it

The change we need is to understand voting as an accountability mechanism

05.08.2025 10:32 — 👍 0    🔁 0    💬 0    📌 0

The most interesting thing is that a financially struggling market trader would have a spare room they could just spontaneously offer up to an estranged relative they only just met

04.08.2025 15:30 — 👍 1    🔁 0    💬 0    📌 0

Bond markets are probably less chill about being clowned with than equities, and equities not enforcing a no-clowning policy makes the likelihood of him clowning bonds much higher. Clown contagion is a real possibility here

04.08.2025 15:21 — 👍 0    🔁 0    💬 0    📌 0

We should return to using the term “deep learning,” to distinguish this type of modelling from AI slop

04.08.2025 10:40 — 👍 1    🔁 0    💬 0    📌 0

To extend this analogy: tinderboxes have been obsolete for nearly 200 years. You don’t need a box of combustible material to start a fire; you only need a match, and you can set fire to most things, even things that are normally safe

04.08.2025 10:36 — 👍 0    🔁 0    💬 0    📌 0
