
Robert Porsch

@rmporsch.bsky.social

46 Followers  |  642 Following  |  13 Posts  |  Joined: 03.12.2023

Latest posts by rmporsch.bsky.social on Bluesky

The same can probably also be said about a lot of the compliance and risk functions in major companies.

15.07.2025 11:48 — 👍 1    🔁 0    💬 0    📌 0

There are just too many options out there now. I think most people got desensitized. It's just another LLM in the end.

15.07.2025 11:45 — 👍 0    🔁 0    💬 0    📌 0

Yeah, there isn't that novelty anymore compared to the DeepSeek moment. Maybe even expected by many in the Chinese tech industry.

14.07.2025 16:09 — 👍 1    🔁 0    💬 0    📌 0
Preview
When AI Becomes the Water We Swim In: The Invisible Revolution

When I read these articles about those new browsers from Perplexity or OpenAI, I really want to agree, but hyping it all up like this makes it very difficult. Calling it an extension of consciousness is a bit of a stretch. It's an untested product.

hybridhorizons.substack.com/p/when-ai-be...

14.07.2025 05:30 — 👍 0    🔁 0    💬 0    📌 0

I thought most papers of this size just use the name of the consortium, like the Human Genome Project did with many of its papers.

13.07.2025 15:34 — 👍 3    🔁 0    💬 0    📌 0

A new definition for AGI just dropped, and it is a bad one.

12.07.2025 18:04 — 👍 169    🔁 27    💬 8    📌 5

I genuinely loved this read about GitHub code search! 💻

Read "The technology behind GitHub’s new code search." on their blog!

12.07.2025 09:58 — 👍 6    🔁 1    💬 0    📌 0
Preview
Life of an inference request (vLLM V1): How LLMs are served efficiently at scale vLLM is an open-source inference engine that serves large language models. We deploy vLLM across GPUs and load open weight models like Llama 4 into it. vLLM sits at the intersection of AI and systems ...

Great article explaining how vLLM uses KV caching under the hood.
www.ubicloud.com/blog/life-of...
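For context, a rough way to picture KV caching (a conceptual sketch only, not vLLM's implementation, which layers paged attention, block-based KV memory, and continuous batching on top of the idea): each past token's key and value projections are computed once, cached, and reused, so a decoding step only projects the newest token.

```python
# Minimal sketch of KV caching during autoregressive decoding.
# Illustration only; not vLLM's real engine.
import numpy as np

d_model = 8
rng = np.random.default_rng(0)
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

k_cache, v_cache = [], []  # grow by one entry per generated token


def decode_step(hidden_new, query):
    """Attend over all cached tokens plus the newest one."""
    k_cache.append(hidden_new @ W_k)   # project only the new token
    v_cache.append(hidden_new @ W_v)
    K = np.stack(k_cache)              # (seq_len, d_model), no recomputation
    V = np.stack(v_cache)
    scores = K @ query / np.sqrt(d_model)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                 # attention output for the new token


for _ in range(3):                     # three decoding steps; cache keeps growing
    h = rng.standard_normal(d_model)
    decode_step(h, query=h)
print(len(k_cache))                    # 3 cached key vectors, each computed once
```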

12.07.2025 09:45 — 👍 0    🔁 0    💬 0    📌 0

Do you have the link at hand? :)

31.01.2025 04:56 — 👍 2    🔁 0    💬 1    📌 0
Post image

"They said it could not be done". We're releasing Pleias 1.0, the first suite of models trained on open data (either permissively licensed or uncopyrighted): Pleias-3b, Pleias-1b and Pleias-350m, all based on the two-trillion-token set from Common Corpus.

05.12.2024 16:39 — 👍 248    🔁 85    💬 11    📌 19
Please note that the Deutsche Bundesbank can no longer be reached by fax as of 31.01.2025.

Germany has fallen.

26.11.2024 12:58 — 👍 3328    🔁 718    💬 87    📌 127
A graph titled "Cost of Transport" showing the relationship between body weight (in kilograms) and energy consumption for distance traveled (calories per gram per kilometer) for various animals and machines. It highlights that a person on a bicycle ranks first in efficiency.

A person on a bicycle is by far the most energy-efficient among animals and machines per distance traveled relative to body weight. The bicycle is magic.

www.jstor.org/stable/24923...

24.11.2024 23:12 — 👍 7696    🔁 1672    💬 273    📌 238

I guess asking trusted people is the real definition of a peer review then

25.11.2024 04:30 — 👍 0    🔁 0    💬 0    📌 0

Wondering how many enterprise LLM applications use explicit user feedback to improve, given that only 3% of users actually use those thumbs up/down buttons. I have seen people force users to provide feedback at random times, though. Implicit feedback, such as session length, seems easier to use.
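As a purely hypothetical sketch of what such an implicit signal could look like, assuming an invented event-log schema (session id, event type, ISO timestamp) rather than any particular product's telemetry:

```python
# Hypothetical: derive implicit feedback signals from chat interaction logs.
from datetime import datetime

events = [
    {"session": "s1", "type": "prompt",   "ts": "2024-11-24T15:00:00"},
    {"session": "s1", "type": "response", "ts": "2024-11-24T15:00:05"},
    {"session": "s1", "type": "copy",     "ts": "2024-11-24T15:00:40"},
]


def session_signals(events):
    by_session = {}
    for e in events:
        by_session.setdefault(e["session"], []).append(e)
    signals = {}
    for sid, evs in by_session.items():
        ts = [datetime.fromisoformat(e["ts"]) for e in evs]
        signals[sid] = {
            # longer sessions and copy events are weak proxies for usefulness
            "duration_s": (max(ts) - min(ts)).total_seconds(),
            "copied_response": any(e["type"] == "copy" for e in evs),
        }
    return signals


print(session_signals(events))  # {'s1': {'duration_s': 40.0, 'copied_response': True}}
```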

24.11.2024 15:13 — 👍 1    🔁 0    💬 0    📌 0

Interesting take on the actual adoption of AI-assisted coding. It suggests that only 5% of professional developers with access to GitHub Copilot use the more advanced features; most (63%) use autocomplete only.

youtube.com/watch?v=Up6W...

24.11.2024 12:02 — 👍 1    🔁 0    💬 0    📌 0
Post image

He wanted to test how models perform when subjected to a new tokenization scheme without re-training the model.
He modified Llama 3's tokenizer to tokenize numbers from right to left (R2L) instead of left to right (L2R) with just a few lines of code 🧑‍💻. This affects how numbers are grouped into threes:
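A quick standalone sketch of that grouping difference, assuming a plain digit string; this is not the actual tokenizer patch from the post, it only mimics the L2R vs. R2L chunking into groups of three:

```python
def group_digits(number: str, direction: str = "L2R", size: int = 3):
    """Split a digit string into chunks of `size`, from the left or from the right."""
    if direction == "L2R":
        return [number[i:i + size] for i in range(0, len(number), size)]
    # R2L: group from the right, the way thousands separators are placed
    chunks = []
    while number:
        chunks.append(number[-size:])
        number = number[:-size]
    return chunks[::-1]


print(group_digits("1234567", "L2R"))  # ['123', '456', '7']
print(group_digits("1234567", "R2L"))  # ['1', '234', '567']
```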

24.11.2024 11:05 — 👍 19    🔁 1    💬 1    📌 0

It's just so freaking fast!

24.11.2024 05:15 — 👍 0    🔁 0    💬 0    📌 0

Wish it was also available in Python

24.11.2024 04:33 — 👍 0    🔁 0    💬 0    📌 0

Many of the current LLM eval tools, such as deepeval, mlflow, and evidently.ai, are really great! However, I find it hard to choose, as most of the tests are trivial and many frameworks feel like they want to lock me in without providing much added value.

24.11.2024 04:31 — 👍 2    🔁 0    💬 0    📌 0
