Did he change his mind, or did his initial grift not work out, so he's pivoting to gain favor with the opposing side of the political spectrum in preparation for the next grift?
02.03.2025 03:17 · @obeytheai.bsky.social

You are a billionaire. What's stopping you from volunteering your money?
02.03.2025 03:13

Forbes literally has this. It's their 50 Over 50 list.
26.02.2025 05:41

Who can I follow here that is funny, entertaining, or insightful? Or is this platform just politi-posting?
26.02.2025 04:36

Soulseek still operates.
25.02.2025 07:44

Not sure FDR is the best example here. His reorganization plan of 1937 proposed to do almost exactly what Trump is doing with his latest executive order: take control of all independent agencies under the executive branch.
Friend just confirmed via text she's putting her cat down and Apple Intelligence just suggested I respond "haha I feel you."
19.02.2025 19:16

I mean… formal reasoning has always been a part of mathematics. I am not denying that.
18.02.2025 23:58

LLMs don't just capture grammatical/syntactical relationships. They also gain capabilities for abstract reasoning.
They can then apply that abstract reasoning to knowledge bases (internet search, for example).
See for example chatgpt.com/share/67787c...
Also, Mark Cuban having a typo in a tweet is not really relevant to the conversation.
If anything, it proves humans can make mistakes just as computer models can.
Does this mean Mark Cuban is not an intelligent being? No, obviously not. All humans are intelligent beings to some extent.
They surely have knowledge. What they don't have is self-awareness or sentience.
Yes, they can be incorrect, but so can humans.
100% factual accuracy is not a prerequisite of intelligence.
How are you defining intelligence?
A commonly accepted definition is:
"the ability to acquire and apply knowledge and skills."
LLM systems certainly have the ability to acquire and apply knowledge.
Ah, okay, to clarify: my point is just that this new AI technology has positive outcomes in other fields and is more far-reaching than simple chat bots.
18.02.2025 21:17

It might not be necessary, but utilizing these techniques they have made substantial progress on protein folding, many times that of the older techniques used in the past 20 years.
18.02.2025 21:11

Yes, AlphaFold, but newer versions use the same transformer technology developed for chat bots, only with tokenizers trained on amino acid sequences rather than English.
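To make that a bit more concrete, here is a minimal sketch of the idea. This is not AlphaFold's actual code, just an illustration assuming PyTorch: a character-level vocabulary over the 20 standard amino acid letters feeding a stock transformer encoder.

    # Illustrative only: character-level "tokenizer" over the 20 standard amino acids,
    # feeding an off-the-shelf transformer encoder.
    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                     # 20 standard residues
    vocab = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for padding

    def tokenize(sequence: str) -> torch.Tensor:
        # One integer token id per residue, with a batch dimension of 1.
        return torch.tensor([[vocab[aa] for aa in sequence]])

    embed = nn.Embedding(len(vocab) + 1, 64, padding_idx=0)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=2,
    )

    tokens = tokenize("MKTAYIAKQR")        # a short, made-up sequence
    features = encoder(embed(tokens))      # per-residue representations
    print(features.shape)                  # torch.Size([1, 10, 64])

Same machinery, different alphabet; that is essentially the only change on the input side.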
18.02.2025 17:38

Agreed, if all you need is Wikipedia and a calculator, then LLMs are expensive and overkill.
The value comes only when you need complex reasoning across a variety of data sources.
There are very few real production systems I've seen where LLMs fully replace humans at this stage.
Most of that is online hype and marketing.
I agree most of these systems need a human in the loop to use the AI as a tool.
It also depends on the model. GPT-4o and Claude Sonnet, in my experience, tend to be the most accurate when combined with a factual knowledge base.
18.02.2025 08:55

It depends on the model and how it's used.
In my experience, if you frame your questions against very specific documents or webpages, it's less likely to hallucinate.
Raw models with no knowledge base will hallucinate more.
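For what it's worth, a minimal sketch of what framing the question against a specific document can look like, assuming the OpenAI Python SDK; the file name, model name, and question are just placeholders.

    # Minimal sketch: ground the question in a specific document so the model
    # answers from it rather than from memory. Names below are placeholders.
    from openai import OpenAI

    client = OpenAI()
    document = open("press_release.txt").read()   # the specific document or webpage text

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided document. "
                        "If the answer is not in it, say you don't know."},
            {"role": "user",
             "content": f"Document:\n{document}\n\nQuestion: When was the product announced?"},
        ],
    )
    print(response.choices[0].message.content)

The point is just to narrow what the model can draw on when it answers.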
It is AI though.
You use the LLM as an orchestration layer, then provide it with any number of tools (calculator, API access, code interpreter, search, etc.).
The AI acts as an agent to recognize intent and sequence the use of tools for a given prompt.
And yes, you are right, you usually wouldn't use an LLM for simple arithmetic.
The value of the calculator comes when you ask it to do multi-step reasoning that requires intermediate arithmetic steps.
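A minimal sketch of that orchestration pattern, assuming the OpenAI function-calling API; the tool name, schema, and prompt are illustrative, not any specific product's implementation.

    # Minimal sketch: the LLM as an orchestration layer that can delegate
    # arithmetic to a calculator tool when it decides one is needed.
    import json
    from openai import OpenAI

    client = OpenAI()

    def calculator(expression: str) -> str:
        # Toy tool for the demo; never eval untrusted input in real code.
        return str(eval(expression, {"__builtins__": {}}))

    tools = [{
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "A 12-person team doubles twice, then loses 5 people. How many remain?"}]
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

    # If the model recognizes that arithmetic is needed, it asks for the tool.
    for call in response.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        print(call.function.name, "->", calculator(args["expression"]))

A real agent loop would feed the tool result back into the messages and ask the model for its final answer; this only shows the delegation step.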
The o-series models use a combination of internet search and chain-of-thought reasoning to get pretty good, factual results.
Of course, it is still searching the internet, so the output will only be as good as the input data it parses.
Newer versions delegate math to a calculator tool, so it should always work.
18.02.2025 08:18

Yah and?
18.02.2025 08:16

But the older techniques can still be very efficient for certain applications.
18.02.2025 07:55

That's a study/technique from a few years ago.
A lot of medical applications are using LLMs now as well.
Including huge advancements in protein structure research.
So many people in these comments are going to get left behind.
18.02.2025 07:50

Which ones are good?
18.02.2025 07:48

What model are you using? The latest ChatGPT gets it right.
18.02.2025 07:47

The closest term is probably "Beefsteak Nazi": en.wikipedia.org/wiki/Beefste...
Not quite what you're getting at, but there was a classification for people who were more left-leaning but joined the Nazi Party out of opportunism.