What if we could use AI models like Llama 3.2 or Mistral 7B in the browser with JupyterLite? 🤯
Still at a very early stage of course, but making some good progress!
Thanks to WebLLM, which brings hardware-accelerated language model inference to web browsers via WebGPU 🚀
Asked a StackOverflow question recently. The only response was a vague answer that looks LLM-generated, and that was after putting up a bounty; otherwise no one answers at all.
It's going to be fun 10 years from now, when models will have ingested all this as training data.
I'm going to try using BlueSky more consistently for a while. Here are a few thoughts guiding my engagement here, hopefully drawing on our collective experience over at Twitter.
chrisholdgraf.com/blog/2024/bl...