This is the best court live tweeting I have ever seen
04.11.2025 22:05

@rafuse.dev.bsky.social
hi
Incredible work
04.11.2025 22:03

Lansdowne 2.0 is a travesty of a project.
03.11.2025 15:31

What does the new design look like? I haven't seen any renders.
03.11.2025 02:20

At least now I can go back to not caring about sports again.
02.11.2025 04:20

This is cinema
02.11.2025 03:36

KIRK
02.11.2025 03:30

He did
youtu.be/Tp1T7kPEdDY?...
This just isn't an accurate portrayal of this video. It's pretty clearly dismissive (rightfully so) of the claims. The worst thing you can say about it is that it's clickbait.
WaPo sucks but this guy worked alongside Dave before he left.
The guy in the video was also a coworker of Dave's and they collaborated on these shorts. "Some right wing guy" is pretty reductive.
29.10.2025 16:24

This is absolutely disgusting - let alone overriding a birth certificate. Coupled with the fact that facial recognition is documented to have trouble with non-white faces, this is an absolute disaster.
29.10.2025 15:16

When tech co.s & their fans insist that LLMs are trained on 'all of human knowledge', I always think of languages and other forms of knowledge that are not (and often can't be) in LLM training sets, and how they are devalued and marginalized by that claim.
29.10.2025 11:04

This in particular is frustrating when the city seems entirely unwilling to implement full-time bus lanes on Bank. It really feels like a huge miss that some of the most popular bus routes in the city will still be forced to crawl with traffic off peak.
29.10.2025 11:23

Good luck!!
29.10.2025 11:20

@glengower.ca are you willing to expand on your support for Lansdowne 2.0, as reported in this article?
ottawacitizen.com/news/city-co...
CNN is reporting that @zohrankmamdani.bsky.social referred to a voter he was meeting as "my man" at a campaign event today, even though the 13th Amendment to the United States Constitution has made it illegal to own human beings since 1865 and remains the law in states including New York
28.10.2025 20:06

tough to prove*
28.10.2025 16:43

The last thing I'll say is that you seem to readily believe the reports from the CEOs - maybe apply the same skepticism you ask of me. The article itself mentions the 80-90% margin is tough to say. The lack of metrics is part of why this is such a thorny thing to discuss! 🫡
28.10.2025 16:42

If you have any closing articles or stuff let me know and I'll look later!
28.10.2025 16:40

I'm sorry man, I have to get back to work and I honestly don't care enough about this stuff to argue past my lunch break.
I'll do some more reading from the articles you link at some point - I hope you have a good day!
The rate limits aren't low enough to cover the costs. They're stemming the hemorrhaging, is all.
28.10.2025 16:30

* The article you reference
28.10.2025 16:28

Literally in your article. There are coding startups rate limiting users because they use too many tokens.
28.10.2025 16:27

"I just don't buy it" doesn't work. You can see the token costs on some of these coding startups, and the subscription costs don't cover the token costs.
OpenAI is having the same problem with their models - people are still using waaaay more tokens than the subscription allows
They absolutely are though! Ed covers this again and again and again. Even on the highest tier plans, users were using many, many times their subscription fees in token costs. This is why inference costs MUST go down.
28.10.2025 16:25

So inference costs MUST go down to make Cursor profitable. But they aren't, because everyone likes the shiny models. They can't revert to cheap models because they don't work for Cursor's use case.
28.10.2025 16:24

The problem is that Cursor is currently not charging a per-token rate:
"Cursor's Ultra plan exemplified this approach perfectly: charge users $200 while providing at least $400 worth of tokens, essentially operating at -100% gross margin."
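The -100% figure in that quote follows from the standard gross-margin formula, margin = (revenue - cost) / revenue. A quick sanity check, using the $200 subscription price and $400 token cost from the quote (the function name here is just for illustration):

```python
def gross_margin(revenue: float, cost: float) -> float:
    """Gross margin as a fraction of revenue: (revenue - cost) / revenue."""
    return (revenue - cost) / revenue

# Cursor's Ultra plan per the quote: $200 in, at least $400 of tokens out.
margin = gross_margin(revenue=200, cost=400)
print(f"{margin:.0%}")  # -100%
```

Every extra dollar of token cost above the flat price widens the loss, which is the point being argued: higher inference costs hurt, not help, a flat-rate subscription.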
bsky.app/profile/davi...
And it doesn't land. Your entire argument seems to be that higher inference costs are good because they mean more margin, but companies are currently subsidizing users and masking the full cost. So the more expensive the inference is, the more they lose. 1/
28.10.2025 16:22

TVs can still execute their function if they're crappy. Smaller models do not work for my use case (or many of the use cases) at all. I legitimately cannot get good output out of cheaper models.
So to actually use these applications the way they want to be used, you have to use expensive models.
I don't understand your question.
28.10.2025 16:14