Phantasy Star ❤️
13.08.2025 05:26 — 👍 0 🔁 0 💬 0 📌 0
@davidabreu7.bsky.social
Software engineer having fun with AI. ☕️ Java | Kotlin | Typescript LinkedIn: https://www.linkedin.com/in/davidabreu7
13.08.2025 05:26 — 👍 0 🔁 0 💬 0 📌 0
Seeing my TikToks / The Anti-Planner featured as a means of dismantling internalized ableism means so much.
Thanks so much for the shoutout, @howtoadhd.bsky.social! 😭💖
dev.to/dbanieles/ec...
Amazon Elastic Container Service (ECS) is a good choice for containerized workloads on AWS: there is no control plane to pay for, and the setup is simple. When your services running in ECS need to communicate with each other, you have several options for making this work. (1️⃣/3️⃣)
🧵
I wrote a short post on learning the fundamentals of distributed systems, with a few suggested resources to read and a few suggested projects to try.
notes.eatonphil.com/2025-08-09-w...
Gg wp
09.08.2025 08:26 — 👍 0 🔁 0 💬 0 📌 0
Wow, nice astrophotography setup! What’s your equipment?
09.08.2025 08:23 — 👍 0 🔁 0 💬 1 📌 0
you gotta be kidding me 😅
08.08.2025 16:43 — 👍 0 🔁 0 💬 0 📌 0
The 'relief' of @anthropic.com engineers after seeing all that traffic from AI editors suddenly turning to GPT-5 😬
08.08.2025 16:39 — 👍 0 🔁 0 💬 0 📌 0
Want cleaner, more expressive tests in Java?
✅ Avoid endless try-catch
✅ Group failures with elegance
✅ Use AssertJ the right way
Soft assertions can raise your testing game without clutter.
Learn how:
eliasnogueira.com/assert-with-...
#java #quality #test #qualityengineering
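The post above points at AssertJ's soft assertions, which collect every failure in a test and report them together instead of stopping at the first one. As a rough illustration of what that buys you, here is a tiny plain-Java sketch of the pattern; the class name `SoftCheck` is made up for this example, and AssertJ's real entry point is `org.assertj.core.api.SoftAssertions` with `assertThat(...)` and `assertAll()`.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the soft-assertion idea: record failures instead of
// throwing immediately, then report them all at once at the end.
public class SoftCheck {
    private final List<String> failures = new ArrayList<>();

    // Record the failure message if the condition is false, and keep going.
    public SoftCheck check(boolean condition, String message) {
        if (!condition) {
            failures.add(message);
        }
        return this; // fluent style, like AssertJ's chained assertions
    }

    public List<String> failures() {
        return failures;
    }

    // Throw once, with every collected failure grouped in a single message.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " failure(s): " + failures);
        }
    }

    public static void main(String[] args) {
        SoftCheck softly = new SoftCheck();
        softly.check(2 + 2 == 4, "arithmetic is broken")
              .check("java".length() == 4, "unexpected length");
        softly.assertAll(); // no failures collected, so nothing is thrown
        System.out.println("all checks passed");
    }
}
```

The point of the pattern: a test with five independent expectations reports all five mismatches in one run, rather than making you fix and re-run five times.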
OpenAI released their long-promised open weight models today under clean Apache 2 licenses and with benchmarks that put them shockingly close to o3-mini and o4-mini
I've run the smaller (20B) model on my Mac and it's very impressive, despite only using ~15GB of RAM. simonwillison.net/2025/Aug/5/g...
Cávado. Northern 🇵🇹
02.08.2025 18:32 — 👍 15 🔁 1 💬 1 📌 0
@maven.apache.org 4 (from 4.0.0-rc4 on) contains the Maven Upgrade Tool, with which you can automatically upgrade your Maven project. I finally found time to write a small article about its features and how to use it. Please test the tool and give feedback :)
maven.apache.org/tools/mvnup....
You just described every politician
30.07.2025 06:47 — 👍 1 🔁 0 💬 0 📌 0
Architecting Multi-Agent AI Systems in Java with Quarkus and Langchain4j
How to orchestrate scalable AI workflows using message-driven agents and local LLMs with Kafka
buff.ly/qqRHZr8
#Java #Quarkus #AiAgents #LangChain4j #Kafka
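The orchestration idea the linked article describes (agents that consume messages from a topic, do their work, and publish results for the next agent) can be sketched without any framework. Everything below is illustrative, not the Langchain4j or Quarkus API: an in-memory queue stands in for Kafka, and a plain `Function` stands in for an LLM-backed agent.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of message-driven agent orchestration: agents subscribe
// to topics on a bus; each consumed message may produce a follow-up message
// routed to the next agent, or null to end the workflow.
public class AgentBus {
    record Message(String topic, String payload) {}

    private final Deque<Message> queue = new ArrayDeque<>();
    private final Map<String, Function<Message, Message>> agents = new HashMap<>();

    void subscribe(String topic, Function<Message, Message> agent) {
        agents.put(topic, agent);
    }

    void publish(Message m) {
        queue.add(m);
    }

    // Drain the queue, routing each message to the agent registered on its
    // topic; returns the payload of the last message processed.
    String run() {
        String last = null;
        while (!queue.isEmpty()) {
            Message m = queue.poll();
            last = m.payload();
            Function<Message, Message> agent = agents.get(m.topic());
            if (agent == null) continue;
            Message next = agent.apply(m);
            if (next != null) {
                publish(next);
                last = next.payload();
            }
        }
        return last;
    }

    public static void main(String[] args) {
        AgentBus bus = new AgentBus();
        // A "summarizer" agent hands its result to a terminal "reviewer" agent.
        bus.subscribe("summarize", m -> new Message("review", "summary(" + m.payload() + ")"));
        bus.subscribe("review", m -> null);
        bus.publish(new Message("summarize", "doc"));
        System.out.println(bus.run());
    }
}
```

Swapping the in-memory queue for Kafka topics gives you the scalability and decoupling the article is after: each agent becomes an independent consumer group, and the message schema is the only contract between them.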
once again, #rustlang is the most admired programming language on the stack overflow survey: survey.stackoverflow.co/2025/technol...
29.07.2025 18:33 — 👍 95 🔁 12 💬 6 📌 2
Japan’s beloved fusion dishes like curry rice and tonkatsu may seem traditional today, but they’re rooted in surprising Western influences - a story Expo 2025 is bringing to the world’s table.
22.07.2025 21:40 — 👍 8 🔁 2 💬 1 📌 0
🚀 Spring AI Advisors are like AOP for your AI interactions! Intercept, modify, and enhance every AI call without touching your core business logic:
✨Add logging automatically
✨Inject context with RAG
✨Enable chat memory
✨Build custom advisors
youtu.be/1MGiDBI2Ci4
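The "AOP for AI calls" idea above can be shown with a framework-free sketch: a chain of interceptors wraps the model call so logging, context injection, and memory live outside business logic. The names here (`Advisor`, `ChatCall`, `withAdvisors`) are invented for illustration; Spring AI defines its own advisor interfaces, which the video covers.

```java
import java.util.function.UnaryOperator;

// Conceptual sketch of the advisor pattern: each advisor decorates the
// chat call, and the chain is assembled around a base "model" call.
public class AdvisorSketch {
    // The core call: prompt in, answer out.
    interface ChatCall extends UnaryOperator<String> {}

    // An advisor wraps a ChatCall with extra behavior, AOP-style.
    interface Advisor {
        ChatCall around(ChatCall next);
    }

    // Log every prompt and answer without the caller knowing.
    static Advisor logging() {
        return next -> prompt -> {
            System.out.println("[log] prompt: " + prompt);
            String answer = next.apply(prompt);
            System.out.println("[log] answer: " + answer);
            return answer;
        };
    }

    // Crude stand-in for RAG: prepend retrieved context to the prompt.
    static Advisor contextInjector(String context) {
        return next -> prompt -> next.apply(context + "\n" + prompt);
    }

    // Compose advisors so the first listed becomes the outermost wrapper.
    static ChatCall withAdvisors(ChatCall base, Advisor... advisors) {
        ChatCall wrapped = base;
        for (int i = advisors.length - 1; i >= 0; i--) {
            wrapped = advisors[i].around(wrapped);
        }
        return wrapped;
    }

    public static void main(String[] args) {
        ChatCall model = prompt -> "echo: " + prompt; // fake model call
        ChatCall chat = withAdvisors(model, logging(), contextInjector("docs: ..."));
        System.out.println(chat.apply("hello"));
    }
}
```

The business code only ever sees a `ChatCall`; adding memory or swapping the retrieval source means reordering or adding advisors, not editing call sites.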
1982
11.07.2025 23:01 — 👍 104 🔁 31 💬 0 📌 3
Beautiful
12.07.2025 04:17 — 👍 2 🔁 0 💬 0 📌 0
Yep, it’s pretty much it 😅
I even started to get job offerings for “AI code reviewer”
Image alt text: A scatter plot comparing small language models by win rate (%) vs. model size (billions of parameters), evaluated on 12 popular LLM benchmarks.
Axes:
• X-axis (horizontal): model size in billions of parameters, ranging from ~1.7B to 4.5B
• Y-axis (vertical): win rate (%), higher is better, ranging from 2% to 5%
Highlighted insight areas:
• Top-left corner: ideal zone for models that are better (higher win rate) and smaller (faster/cheaper)
• Diagonal line/gray band: the tradeoff baseline; models above it are more efficient per parameter
Models plotted:
• Top-right quadrant (largest, highest win rate): Qwen3 4B with the highest win rate (~5%); Gemma3 4B slightly below it
• Mid-left (smaller but strong): SmolLM3 3B with a strong win rate (~4.4%), outperforming larger models; Qwen2.5 3B with a moderate win rate (~3%); Llama3.2 3B slightly below Qwen2.5 3B
• Lower-left (least performant): Qwen3 1.7B with the lowest win rate (~2%)
Conclusion:
• SmolLM3 3B stands out as the most efficient, achieving a high win rate at a relatively small size.
• Qwen3 4B and Gemma3 4B are the top performers overall but less efficient per parameter.
• Qwen3 1.7B lags significantly behind in both size and win rate.
SmolLM3: a highly detailed look into modern model training
This is amazing. They go into great detail on just about every aspect: the number of stages, algorithms, optimizer settings, datasets, blueprints, recipes, open-source training scripts, ...
huggingface.co/blog/smollm3
"It's Heartbreaking" - The Pokémon Company Tech VP Joins Industry In Criticising Microsoft Layoffs. (Repost)
07.07.2025 23:25 — 👍 56 🔁 7 💬 0 📌 5
I heard JRPG music? I’m all in!
07.07.2025 20:31 — 👍 0 🔁 0 💬 0 📌 0
The anime was amazing :)
07.07.2025 07:16 — 👍 2 🔁 0 💬 1 📌 0
Such a clean (embedded) Rust walkthrough - and a cool use-case!
“Navigating Mars with Rust: Developing an Autonomous Pathfinding Rover” from @adacore.bsky.social
blog.adacore.com/navigating-m...
Tales from the jar side: A rant about Java LTS versions, Scary Awesome AI, and the usual silly social media posts open.substack.com/pub/kenkouse...
06.07.2025 21:53 — 👍 3 🔁 1 💬 1 📌 0
How Do You Teach #ComputerScience in the #AI Era?
www.nytimes.com/2025...
#GenerativeAI
Get the location of the ISS using DNS – Terence Eden’s Blog buff.ly/V3ocDxl
#dns 🫶🛰️