"There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope." - Mark Twain

For the huge foundational models, there's sort of an educated gamble: you only need a few final, large-enough ones to encode human language concepts. From there you can shrink them into any smaller model you need (whether they act as the "teacher" to pre-set most of the parameters in the smaller models they're training, or you directly "prune" the parameters for the specific behavior you want and anneal them with a few training runs).
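(For concreteness, here's a minimal sketch of the "teacher" part: standard knowledge distillation in PyTorch, not any particular lab's actual pipeline. `teacher`, `student`, and `loader` are hypothetical stand-ins.)

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Training step (sketch): the big model is frozen, the small one
# learns to match its output distribution instead of raw labels.
# for batch in loader:
#     with torch.no_grad():
#         teacher_logits = teacher(batch).logits
#     student_logits = student(batch).logits
#     loss = distillation_loss(student_logits, teacher_logits)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```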
The large 100B+ model? Probably not.
The small sub-1B model trained specifically for semantic search? Yes.
Long before LLMs, Google was already using word2vec to perform basic semantic search.
LLMs are just an extension of that.
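(Rough illustration of what that small dedicated model looks like today: a sentence-embedding model plus cosine similarity. The model name is just one common small example; the documents and query are made up.)

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # ~22M params, well under 1B

docs = [
    "How to reset your account password",
    "Quarterly revenue grew 12% year over year",
    "The office is closed on public holidays",
]
doc_vecs = model.encode(docs, convert_to_tensor=True)

query_vec = model.encode("I forgot my login credentials", convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]  # cosine similarity per doc
best = int(scores.argmax())
print(docs[best], float(scores[best]))  # matches the password doc, no keyword overlap
```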
That's the simpler guideline, yes.
The more technical one is more "what exactly do you want?"
A few points from the technical side. Preface: I don't think translators will go away (at least for creative writing).
1. Google Translate, and most translation AI, hides the logit score (how confident the model is that the next word is the right one). Personally, it would be fascinating to see how well such a model would "score" a human translation (basically, how confident the model would be if it tried to generate that human translation itself).
2. Most publicly available models are naively trained on all text, and when asked to translate they lean toward the most common usage. Great if, say, you just want quick access to information. Bad if you want to translate works with distinctive voices (because the common pattern will bleed through more).
The way I see it, AI is unlikely to be a replacement (or will only replace the lower/simpler end, cases where instant live translation is warranted, or serve as a last resort before human translators get to it). It's one of those "human translators are great, but I will settle for an AI one if none is available" situations.
There is also work being done on adapter systems, where an LLM can "adapt" based on a human translator's work to hone in on their style; from about a third of it, the AI learns enough to fill in the rest with good accuracy. The space is also evolving, and I see the potential for it to become an adaptive tool for human translators.
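(The "score a human translation" idea from point 1 is easy to prototype with an open model: feed the human translation in as the target and read off the loss. A minimal sketch; the Helsinki-NLP checkpoint is a real public en-to-fr model, but any seq2seq translation model works the same way, and the sentences are made up.)

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "Helsinki-NLP/opus-mt-en-fr"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

src = tok("The old house creaked in the wind.", return_tensors="pt")
ref = tok(text_target="La vieille maison craquait dans le vent.", return_tensors="pt")

with torch.no_grad():
    out = model(**src, labels=ref["input_ids"])

# out.loss is mean per-token negative log-likelihood: lower means the model
# finds this exact human translation more "expected". A distinctive voice
# should score worse than a generic rendering.
print(f"per-token NLL: {out.loss.item():.3f}, perplexity: {torch.exp(out.loss).item():.1f}")
```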
More Photoshop
Generative AI generally doesn't render text that neatly. The background is probably AI.
And given how rapidly people seem to be churning these out, someone probably created a workflow somewhere to automate it.
Don't feed it to the public endpoint. Only if your university has its own private instance. If not, forget it.
An LLM is a language model.
Don't use it for lesson plans.
At most, use it as a rough sieve to review course resources (feed it your course resources, ask it the questions you want students to be able to answer, and see if it answers them correctly).
If it frequently can't, there's a decent chance there's some ambiguity in the material that caused it to get things wrong.
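(That sieve is only a few lines of code. A sketch using the OpenAI Python client; point `base_url` at your private instance if you have one. The file name, model name, and questions are all made up.)

```python
from openai import OpenAI

client = OpenAI()  # e.g. OpenAI(base_url="https://llm.your-uni.edu/v1") for a private instance

course_notes = open("week3_notes.txt").read()  # hypothetical course resource
questions = [
    "What is the difference between precision and recall?",
    "When would you prefer an F1 score over accuracy?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the provided course notes:\n" + course_notes},
            {"role": "user", "content": q},
        ],
    )
    print(q, "->", resp.choices[0].message.content)
    # If it repeatedly whiffs a question, go look for ambiguity in the notes.
```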
I'm... surprised at that.
Do you mean the tadpoles thing? I thought that was fundamental, like a language.
I think it's interesting you use running as an example.
We drive to get to places too.
The way I see it, art and music are similar.
In some cases, we want to see a demonstration of human skill (art galleries, concerts, orchestras, stage performances, etc.).
In other cases, we just want it to set a mood or feel, or to occupy an empty space, like one of those mass-produced abstract art sculptures.
For the former, people will seek out human talent.
For the latter, it doesn't really matter whether it's human or AI.
Sure, but since we're on the topic of AI, most of the jobs threatened are the less physically taxing ones.
And the boring jobs implied by the article are all very physical ones.
I would say it is a fascinating look into why it fails.
It failed because the article itself didn't define the term. And that's the definition many laypeople might give if forced to answer based purely on the article.
One thing I would love to know is how it generates that, because I'm fairly certain that the underlying logit score of that block of text is low, and it's only picked because it's the highest score (sort of like being top of the class despite only scoring 20/100).
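(With open models you can actually look at those scores. A sketch with Hugging Face transformers; gpt2 is just a small stand-in, and the prompt is made up. It prints the probability each generated token had at the moment it was picked, showing that "argmax" can still mean "not confident at all".)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The case number for this filing is", return_tensors="pt")
out = model.generate(
    **inputs, max_new_tokens=8, do_sample=False,
    output_scores=True, return_dict_in_generate=True,
)

gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
for tok_id, step_scores in zip(gen_tokens, out.scores):
    p = torch.softmax(step_scores[0], dim=-1)[tok_id]
    # A token can be the highest-scoring option and still carry
    # very little probability mass (the 20/100 top-of-class case).
    print(f"{tok.decode(int(tok_id))!r}: p={p.item():.3f}")
```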
I work closely with LLMs, and I can tell you it's less lying and more being incapable of understanding whether something needs to be "true" (aka mapped to the real world).
As far as the model is concerned, the only thing important about a "case number" is that it's a number that follows a certain pattern (basically, it's grammatically correct).
No amount of training can solve that conclusively.
Most deployments are not fine-tuned, don't have a secondary pass to verify output, nor do they have tool calling set up to give the model a chance to make sure things are accurate.
The problem I constantly encounter is that an LLM by default doesn't understand which parts can be just "creative writing" and which parts need to be accurate.
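(That "secondary pass" doesn't have to be fancy. A hypothetical sketch: every case number the model emits gets checked against ground truth before the text goes anywhere. `KNOWN_CASES` and the pattern are stand-ins for a real court registry.)

```python
import re

KNOWN_CASES = {"2023-CV-04117", "2024-CR-00892"}  # e.g. loaded from a court DB
CASE_PATTERN = re.compile(r"\b\d{4}-[A-Z]{2}-\d{5}\b")

def verify_case_numbers(llm_output: str) -> list[str]:
    """Return case numbers that look valid but don't exist in the registry."""
    cited = CASE_PATTERN.findall(llm_output)
    return [c for c in cited if c not in KNOWN_CASES]

draft = "Per 2023-CV-04117 and the related matter 2021-CV-09344, ..."
fabricated = verify_case_numbers(draft)
if fabricated:
    # Grammatically perfect, factually unverified: exactly the failure mode above.
    print("Unverified case numbers:", fabricated)
```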
Roomba?
Unfortunately, robots are expensive, and humans are cheap.
"Interesting job" just means less physically taxing. And when you don't have a robot in the equation, a computer is starting to be on equal footing as a human.
Probably means that Pokemon card art is soulless corporate slop indistinguishable from AI.
AI is accurate... in the domain it's designed for.
An LLM is a language model; it's extremely good at language-related tasks (mapping natural language to intent, translation, improving transcription, semantic search, etc.).
Asking it to be a doctor is decidedly NOT in its domain.
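("Mapping natural language to intent" in practice can be as simple as a zero-shot classifier. A sketch; the model choice and intent labels are purely illustrative.)

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

utterance = "Can you move my 3pm meeting to tomorrow morning?"
intents = ["reschedule_meeting", "cancel_meeting", "create_meeting", "small_talk"]

result = classifier(utterance, candidate_labels=intents)
# Top intent and its score; downstream code acts on the label, not the prose.
print(result["labels"][0], result["scores"][0])
```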
An LLM isn't trained to lie, as lying would imply an intention.
An LLM doesn't have intention; it's a language processor. It is not capable of understanding truth, only whether something is linguistically correct.
Very useful for a natural language interface/semantic search. Push it further than that and you're asking for trouble.
Probably shove it through an AI and call it a day.
I'm starting to feel like AI is one of those "this is why we can't have nice things" deals, because holy shit, the amount of people misusing it is insane.
Accessible only to those with the money to commission art, those who can build a good following, or those with enough wealth (or few enough other responsibilities) to dedicate themselves to it.
Gen AI is pretty much filling in the expressive needs of those who lack the resources and time.
Is it as unique, distinctive, or human? Obviously not. But hey, we already have that in daily life, where a "human touch" is a luxury most can't afford, like having your own maid, driver, tailor, sommelier, barista, etc.