@jakub-sobolewski.bsky.social
I help R developers improve their testing skills. https://jakubsobolewski.com Staff Engineer @ Appsilon.
Users get value. You save time and money.
Don't get lost in code no one asked for. Take control of what gets built.
Ready to change your workflow?
I break down how to start with Behavior-Driven Development and build features your users actually want.
How it works:
✅ Start every feature as a user story, not a wish list.
✅ Write specs from your users' words.
✅ Write only the code users need.
✅ Automate checks along the way.
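A sketch of what the first two steps might produce, in Gherkin-style syntax (the story and wording are hypothetical, written in the user's language rather than UI terms):

```gherkin
# User story: "As an analyst, I want overdue invoices flagged
# so I can follow up before month end."

Feature: Flagging overdue invoices
  Scenario: An invoice passes its due date
    Given an invoice that was due last week
    When the analyst reviews outstanding invoices
    Then that invoice is flagged as overdue
```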
Building code gets easier every day with AI. Building code that truly matters to users remains the real challenge.
At @user-conf.bsky.social virtual, I'm sharing how to ship only what matters.
See it here: youtu.be/e4H28G2J05U?...
#rstats #opensource
Great point.
1. APIs are versioned: you track the API version (or the version of a package that wraps the API). Your contracts should be safe until you upgrade to a new version.
2. Schedule a test run that checks whether real APIs still return the shapes you expect. Run it periodically for extra safety.
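A minimal sketch of such a contract check in R. `fetch_weather()` and the field names are hypothetical stand-ins; in the scheduled run it would call the live API:

```r
# Hypothetical stand-in for a real API call; the scheduled
# contract run would hit the live endpoint instead.
fetch_weather <- function() {
  list(temp_c = 21.5, humidity = 60, city = "Poznan")
}

# The contract: the response must still contain these fields.
check_shape <- function(response, required_fields) {
  missing <- setdiff(required_fields, names(response))
  if (length(missing) > 0) {
    stop("Contract broken, missing fields: ", paste(missing, collapse = ", "))
  }
  invisible(TRUE)
}

check_shape(fetch_weather(), c("temp_c", "humidity", "city"))
```

If the provider renames or drops a field, the scheduled run fails loudly before your fakes drift out of date.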
👉 Result:
Fast tests ✅
Reliable tests ✅
Cheap tests ✅
Better dev experience ✅
Here's the approach:
1. Abstract the dependency with an interface
2. Use fakes or mocks in tests
3. Test your code's behavior *against* the fake
4. Plug in the real dependency only in production
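A minimal R sketch of steps 1 to 3 (all names are hypothetical; the "interface" is simply a list of functions the production code accepts as an argument):

```r
# 1. Abstract the dependency: the code depends on any object
#    that provides fetch_user(), not on a concrete HTTP client.
get_user_summary <- function(user_id, client) {
  user <- client$fetch_user(user_id)
  paste0(user$name, " <", user$email, ">")
}

# 2. A fake with the same interface, returning canned data instantly.
fake_client <- list(
  fetch_user = function(id) list(name = "Ada", email = "ada@example.com")
)

# 3. Test the behavior against the fake: fast, reliable, free.
summary_line <- get_user_summary(1, fake_client)

# 4. In production, pass a real client exposing the same interface.
```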
Test only the code you own.
How? Simulate the external system in tests. You don't need the real thing.
🚫 Why you shouldn't test external dependencies directly:
🐢 Slow (waiting for responses)
🎲 Flaky (unreliable availability/results)
💸 Expensive (API costs add up)
Or worse, tests don't get written at all.
🧪 New pattern in the R Tests Gallery: Testing code that uses LLMs, APIs, or databases
External systems power our code (LLMs, APIs, DBs, libraries), but they don't need to be in our tests.
Check it out 👇 jakubsobolewski.com/r-tests-gall...
#rstats #opensource
Ask AI to draft specs, review them, then refine.
Repeat in minutes what might take days by hand.
Your specs become abstract, clear, and future-proof, no matter how the app evolves.
Give it a go. Cut through legacy fog with AI-powered BDD.
It looks simple, but it hides complexity: how do you write specifications that capture what happens without leaking how?
Check this post to see how to use AI to iterate on writing specifications faster: jakubsobolewski.com/blog/ai-assi...
Imagine this workflow:
👉 Start on a "Data" page with steps: Upload → Filtering → Mapping → Preview
👉 User uploads or picks a default dataset, then moves through steps
👉 Submit variable mappings → data preview appears
👉 "Visualization" unlocks to view plots
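Specs for that workflow might read like this (a hypothetical sketch; note the steps describe behavior, not widgets):

```gherkin
Feature: Preparing data for visualization
  Scenario: User maps variables and previews the result
    Given the user is on the Data page
    And a default dataset is selected
    When the user submits their variable mappings
    Then a preview of the prepared data appears
    And the Visualization page becomes available
```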
Want to write specs for an existing app? Let AI help you iterate faster.
Writing specs after the fact gets messy fast. You're tempted to mention buttons, screens, and other UI stuff, but that only locks you into one way the app works.
#rstats #opensource
The latest issue of @rweekly is now live!
https://rweekly.org/2025-W31.html
Highlights:
📊 Copy the Pros: How to Recreate this NYTimes Chart in R by @MrPecners
⏩ Speed Testing Code: Three Levels by @kellybodwin
🧪 Testing your Plumber APIs from R by @jakub-sobolewski.bsky.social
As always […]
🔔 Follow me or subscribe for updates as new examples land.
🤝 Have a specific case you want covered? Leave a comment or submit a request, and let's build something great together!
What You'll Find
✅ A growing collection of focused R test examples
✅ Step-by-step breakdowns showing what to do and why it works
✅ Real code ready to drop into your project with confidence
So far, there's only one example, but many more are on the way! 🚀
As The R Tests Gallery grows, I hope it becomes a reliable source for test examples, techniques, and practical patterns in live code.
The patterns featured come straight from real projects I've worked on. If they helped me, maybe they'll help you too!
Your legacy Shiny app doesn't have to stay legacy forever.
Want to learn more about testing strategies for R? Check out my packages and resources for comprehensive testing approaches.
jakubsobolewski.com/blog
3️⃣ Refactor and Unit Test
Now you have a safety net. Start refactoring the code base into smaller, testable pieces. The acceptance tests will catch any regressions while you improve the code structure.
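For instance, logic that lives inline in a Shiny server function can be extracted into a plain function that unit tests call directly (a hypothetical sketch; the names are made up):

```r
# Before: this filtering lived inline inside server(), so it was
# untestable in isolation. After extraction it is a pure function
# with no Shiny dependency.
filter_by_threshold <- function(data, threshold) {
  data[data$value >= threshold, , drop = FALSE]
}

# A unit test can now exercise it with plain data:
df <- data.frame(id = 1:3, value = c(5, 10, 15))
result <- filter_by_threshold(df, threshold = 10)
```

The acceptance tests keep guarding the app's behavior while you carve out pieces like this one.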
2️⃣ Create Acceptance Tests
With the behavior documented, make it executable with Cucumber.
To implement the steps you can use:
✅ shinytest2
✅ Playwright
✅ or Cypress
There's a Cucumber implementation available to execute your specifications, whether your steps are written in R or JavaScript.
Format specifications with Given, When, Then syntax to describe preconditions, actions and outcomes.
This creates living documentation that both technical and non-technical stakeholders can understand and validate.
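In Gherkin, that maps onto one keyword per concern (a hypothetical scenario for illustration):

```gherkin
Scenario: Returning user finds their saved analysis
  Given an analysis saved in a previous session    # precondition
  When the user opens the application              # action
  Then the saved analysis is available to resume   # outcome
```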
Write it down.
Don't mix user interface with behavior. Be precise, but abstract enough so that those behaviors are true when the implementation changes. Instead of saying "click a button", try to phrase it as "do something".
Stay focused on the business goal.
1️⃣ Document Current Behavior
Work with previous maintainers and users to understand what the app should do.
Don't assume you know everything. Ask questions like:
✅ What happens when users click this button?
✅ How should the app respond to different inputs?
Your legacy Shiny app needs a makeover, but jumping straight into refactoring is like repainting a room with furniture still inside. Things will get messy.
The safest approach? Write acceptance tests first.
#rstats #rshiny #tests #testing #opensource
Episode 207 of R Weekly Highlights is out! serve.podhome.fm/episodepage/...
🛠️ Generating Quarto syntax (Danielle Navarro)
🤖 Behavior-Driven Development @jakub-sobolewski.bsky.social
🔍 Dive()ing into the Hunt @milesmcbain.bsky.social
🙏 h/t @mike-thomas.bsky.social & @rbyryo.bsky.social
#RStats
Every failing or awkward test is feedback.
Let your tests guide you toward modular, decoupled, focused code.
3๏ธโฃ Tests can warn you when your code does too much.
If a single test checks too many things, has lots of assertions, or uses "and" in its title, your code probably lacks separation of concerns.
Clean code means each part, and each test, has a focused job.
2๏ธโฃ Tests can show if your code is tightly coupled.
If testing requires an elaborate setup or stubbing lots of internal parts, your code is likely tightly coupled.
When tests expose implementation details, it's a warning sign: your code's parts depend on each other too much.
1️⃣ Tests can reveal if your code isn't modular.
If you struggle to run code in tests, needing to pass tons of arguments, set up global state, write files, or mock everything, your code probably isn't modular.
Modular code is easy to use in any context, not just production.
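A hypothetical contrast in R (both functions are made up): the first depends on global options and a file on disk, so tests fight it; the second takes plain inputs and returns a value, so it runs anywhere:

```r
# Hard to test: reaches into global state and reads from disk.
summarize_report <- function() {
  path <- getOption("report.path")
  data <- read.csv(path)
  mean(data$score)
}

# Modular: every dependency arrives as an argument.
summarize_scores <- function(scores) {
  mean(scores)
}

# Usable in any context, including a one-line test:
avg <- summarize_scores(c(80, 90, 100))
```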
3 Things Tests Reveal About Your Code
Ever wondered what your tests are really telling you about your code?
Tests aren't just safety nets; they're feedback loops on your design.
#rstats #tests