βš οΈπŸ”§βŒ¨οΈπŸ”₯'s Avatar

βš οΈπŸ”§βŒ¨οΈπŸ”₯

@plausiblyreliable.com.bsky.social

36 Followers  |  235 Following  |  53 Posts  |  Joined: 28.07.2023

Latest posts by plausiblyreliable.com on Bluesky

cuda is done with some wsl2-specific magic passthrough device the runtime libs know how to use - do _not_ try to actually install any linux nvidia drivers inside the distro - that will break it - but things that bundle their own cuda runtime like pytorch should just work out of the box (quick check below)

for gui stuff it kinda speaks wayland now

13.09.2025 20:57 β€” πŸ‘ 8    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
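
A quick sanity check from inside the distro, assuming the NVIDIA driver lives only on the Windows side and torch came from a CUDA-enabled pip wheel (which bundles its own CUDA runtime):

# check_cuda_wsl.py - verify WSL2's CUDA passthrough without installing Linux drivers
import torch

if torch.cuda.is_available():
    print("CUDA OK:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul sum:", (x @ x).sum().item())
else:
    print("CUDA not visible - check the Windows-side NVIDIA driver")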
Get started mounting a Linux disk in WSL 2 Learn how to set up a disk mount in WSL 2 and how to access it.

never found a good way to make a disk visible to both windows and wsl and have it perform well from both - but a disk with a linux fs on it should be attachable to wsl like this: learn.microsoft.com/en-us/window...

13.09.2025 20:50 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
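
For reference, the flow in that doc boils down to wsl --mount run from the Windows side; a rough sketch wrapped in Python (run it elevated - the disk path here is a placeholder you'd look up yourself):

# mount_linux_disk.py - illustrative wrapper around the documented wsl --mount flow
import subprocess

DISK = r"\\.\PHYSICALDRIVE2"  # placeholder - pick the physical disk that carries the linux fs

# attach the disk to WSL2; add "--partition", "1" to mount one partition instead of the whole disk
subprocess.run(["wsl.exe", "--mount", DISK], check=True)

# ...use it from the distro (it shows up under /mnt/wsl/)...

# detach when done
subprocess.run(["wsl.exe", "--unmount", DISK], check=True)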

yeah wsl just sucks at this case

13.09.2025 20:48 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

the root fs on wsl2 should act just like a regular linux fs on a vm - because it is - but permissions _are_ pretty broken on wsl1 generally and when using the wsl2 9p mounts of windows drives

13.09.2025 20:46 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

wsl2 is much better and _almost_ the same as a vm - you mostly just need to remember that the windows fs mounts are not high-iops/mmap-friendly (don't try to run stuff directly off them) and that it doesn't run an actual init by default (but *can* be configured to run systemd - see the sketch below)

13.09.2025 20:38 β€” πŸ‘ 8    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
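
If you want systemd, the switch lives in the distro's /etc/wsl.conf; a minimal sketch (then restart the distro with wsl --shutdown from the Windows side):

# /etc/wsl.conf
[boot]
systemd=true

And keep working trees and anything io-heavy on the ext4 root rather than under /mnt/c, per the post above.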

I would just look for "post training", "supervised fine-tuning" (human-created example responses), "RLHF" (tuning on human rater scores) - "alignment" is a lot more related to "AI Safety" stuff; sometimes it means things like getting the models to reject bad requests and sometimes it means AI doomerism

09.09.2025 16:43 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The base models (rarely released anymore) almost certainly could be; the "personality" comes from the post-training - additional steps at the end with examples in the target style and a bit of tuning from human raters scoring outputs

09.09.2025 13:00 β€” πŸ‘ 9    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

For anything new now we're using modal and having it write back to our own S3

21.08.2025 18:31 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

IME anything on GPU, even small non-LLM models, is hard to run cost-effectively if you have low or difficult-to-predict utilization

21.08.2025 18:29 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, ena...

It seems like for a post-trained model, the best that you can do right now is just ask it: arxiv.org/abs/2305.14975

08.08.2025 23:49 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
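
A minimal sketch of the "just ask" approach from that paper - query_llm() here is a hypothetical stand-in for whatever chat API you're calling, and the exact prompt wording is illustrative:

# verbalized_confidence.py - elicit a self-reported confidence score from a post-trained model
import re

def ask_with_confidence(question, query_llm):
    prompt = (
        question
        + "\n\nAnswer, then on a new line write 'Confidence:' followed by a "
        + "probability between 0 and 1 that your answer is correct."
    )
    reply = query_llm(prompt)
    match = re.search(r"Confidence:\s*([01](?:\.\d+)?)", reply)
    confidence = float(match.group(1)) if match else float("nan")
    answer = reply.split("Confidence:")[0].strip()
    return answer, confidence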
Chart from GPT-4 technical report p. 12 showing calibration suffering after RL post-training

The first place I saw this was the GPT-4 technical report. arxiv.org/pdf/2303.08774 p.12

08.08.2025 23:46 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

One interesting result I've seen is that *base* models' (pure next-token predictors) output probabilities match up pretty well with the likelihood of correctness and can kind of be interpreted as confidence scores, but after the post-training steps, especially RL, that stops working.

08.08.2025 23:41 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
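
One way to check that claim on your own eval set is expected calibration error - bin the model's answer probabilities against whether the answers were actually right. A small sketch; the confs/correct arrays are assumed to come from your own run:

# calibration_check.py - expected calibration error over (confidence, correct) pairs
import numpy as np

def expected_calibration_error(confs, correct, n_bins=10):
    confs = np.asarray(confs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confs > lo) & (confs <= hi)
        if mask.any():
            # weight each bin's |accuracy - mean confidence| gap by its share of samples
            ece += mask.mean() * abs(correct[mask].mean() - confs[mask].mean())
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 1, 0, 1]))  # small = well calibrated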

for the most part you should just be able to bring existing web/html apps into it unmodified, but it also has some escape hatches to get at native stuff if you need to

05.08.2025 17:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Tauri 2.0 The cross-platform app building toolkit

You might be looking for something like Tauri: v2.tauri.app

05.08.2025 17:48 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

They do have the option of just using claude code at API rates, fully usage based - but nobody really likes that either because you have no idea how much it will spend on a task ahead of time (and if you max out the limits subs are *still* much cheaper than API rates)

29.07.2025 14:49 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I like the open-source vibecoding tools like Cline, where you bring your own API keys, better - but paying the raw API prices can be rough; I use Cursor basically *for* the subsidy

09.07.2025 02:44 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Nowadays, after training them on the whole Internet, they do a much shorter post-training phase with chat transcripts (outsourced human workers write these, usually for pennies) to make them chatbots out of the box (ChatGPT), but even those kinds are still fundamentally text-completion systems

09.07.2025 01:45 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The actual math part of an LLM is barely a screen full of code. The behavior really is all in the training data selection and prompting.

09.07.2025 01:12 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
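
For scale, here is roughly that screenful: one causal self-attention layer in plain NumPy, which is the heart of the "actual math" (a toy sketch, not any particular model's code - real models stack many of these plus MLPs):

# toy_attention.py - scaled dot-product self-attention, the core LLM computation
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: learned (d_model, d_model) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # causal mask: each token only attends to itself and earlier tokens
    scores = np.where(np.triu(np.ones(scores.shape, dtype=bool), k=1), -1e9, scores)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=(8, d))  # 8 token embeddings
print(self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3))).shape)  # (8, 16)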

They know what it currently is; they don't know what it used to be

09.07.2025 01:08 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

All of the chatbot ones do. Before chatbots, "LLM" referred to large text-autocompletion systems trained on the Internet (e.g. GPT-2 & 3). Since those can reliably complete all sorts of text, it was figured out that you could make them into chatbots by just prompting them with enough chat transcript (sketch below).

09.07.2025 01:08 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
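
The trick really is just formatting - a sketch, where complete() is a hypothetical stand-in for any base-model text-completion call and the transcript format is illustrative:

# chat_via_completion.py - turn a pure next-token predictor into a "chatbot" via the prompt
def chat_prompt(history, user_msg):
    lines = ["The following is a conversation with a helpful assistant.", ""]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")  # the model just keeps completing from here
    return "\n".join(lines)

# reply = complete(chat_prompt([("User", "hi"), ("Assistant", "Hello!")], "what is wsl?"))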

Realistically though it's probably just as simple as this

bsky.app/profile/ceej...

09.07.2025 00:58 β€” πŸ‘ 7    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I mean, they don't know anything *outside the context window* about themselves - obviously they know about their system prompt, that's just more input. And it's all from the prompts, not the model itself. If the wrong knowledge cutoff date is in the prompt then the answer about it will be wrong.

09.07.2025 00:53 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

It made it up on the spot, if you ask several times you'll get different answers, and there's no way for it to know at all what it would've been in the past, outside consulting web sources. Models themselves have no memory.

09.07.2025 00:46 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We also don't know for sure that these are the live prompts. It is plausible that this is the only cause though: LLMs are known to be bad at conditions and negation - it doesn't work well to prompt a model to respond one way in only some circumstances; if something is in the prompt at all, it always has effects

09.07.2025 00:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

It got that by looking at the news, not from self-introspection; another LLM would've found the same thing

09.07.2025 00:31 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

It makes exactly as much sense to ask ChatGPT (not much) what the fuck happened to grok as it does to ask grok, because they are both doing the same thing when you do - doing a web search, scraping some news sites, and bullshitting a response. It has no special knowledge about itself.

09.07.2025 00:28 β€” πŸ‘ 20    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

LLMs categorically know nothing about themselves - they do not know what training data they were made with, they do not know anything about their training process. It is meaningless to ask an LLM how it works because it cannot introspect. This is all hallucination.

09.07.2025 00:26 β€” πŸ‘ 17    πŸ” 5    πŸ’¬ 2    πŸ“Œ 0

The first guy to invent a chatbot was horrified that it fooled people into thinking it was a nearly human-level intelligence. He concluded that people believe this, in part, because they first think humans are basically machines. Chatbot hype and soullessness are related!

archive.org/details/comp...

05.07.2025 14:18 β€” πŸ‘ 194    πŸ” 26    πŸ’¬ 5    πŸ“Œ 3

they can be given instructions to say they don't know more often - usually RAG systems are told to avoid using information outside the provided sources (sketch below), but you still can't be sure it actually doesn't know, nor should you be that much more confident in its factuality - you _always_ need to double-check LLMs

26.05.2025 05:20 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
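
A typical shape for that instruction, sketched as a prompt template (the wording and source formatting are illustrative, not any particular product's prompt - and none of it is a guarantee):

# rag_prompt.py - ask the model to stay inside the provided sources or say it doesn't know
def rag_prompt(question, sources):
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the answer is not in the sources, reply exactly: I don't know.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )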

> I've used an AI that can say it doesn't know & it doesn't hallucinate or lie.

As far as I know not even any of the commercial AI companies claim to have this capability - and it's not really _possible_ given the way current LLMs are made.

26.05.2025 05:17 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
