Want a roadmap to clean, reliable tests?
Get it here: jakubsobolewski.com/get-roadmap/
@jakub-sobolewski.bsky.social
I help R developers improve their testing skills. https://jakubsobolewski.com Staff Engineer @ Appsilon.
Have lots of tests but still do heavy manual testing?
That's a red flag: your tests don't build confidence. Tests should help you ship faster, not add overhead. Rethink your test strategy to reduce manual effort.
Flaky tests? They waste energy and slow development.
Look for external dependencies like timers, APIs, or asynchronous events. If you own the code, inject and substitute those dependencies. If not, stub them. In Shiny apps, wait properly before asserting.
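The "inject and substitute" advice above can be sketched in base R. This is a minimal, hypothetical example (the function names are illustrative): instead of calling `Sys.time()` directly, the code accepts a clock, so a test can pass a fixed one and never wait on real time.

```r
# Hypothetical example: a function that depends on the current time.
# Injecting the clock (with a real default) lets tests control it.
is_expired <- function(deadline, clock = Sys.time) {
  clock() > deadline
}

# In a test, substitute a fixed clock instead of waiting for real time:
fixed_clock <- function() as.POSIXct("2025-01-01 12:00:00", tz = "UTC")

stopifnot(
  is_expired(as.POSIXct("2025-01-01 00:00:00", tz = "UTC"), clock = fixed_clock),
  !is_expired(as.POSIXct("2025-01-02 00:00:00", tz = "UTC"), clock = fixed_clock)
)
```

Production code simply calls `is_expired(deadline)` and gets the real clock by default; only tests ever pass the substitute.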
If you update tests with every code change, you're wasting time.
Test public interfaces, the ways your code is actually used. This keeps tests stable, meaningful, and cuts down on brittle failures. Focus on testing behavior, not implementation details.
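Here is a small, hypothetical sketch of what "test behavior, not implementation" means in R. The test targets only the public function; the internal helper can be rewritten freely without breaking it.

```r
# Hypothetical example: a word-count feature. The helper is an
# implementation detail; tests target only the public function.
count_words <- function(text) {
  length(split_tokens(text))
}

# Internal helper: free to change or replace without touching tests.
split_tokens <- function(text) {
  strsplit(trimws(text), "\\s+")[[1]]
}

# Test the behavior users rely on, not how tokens are produced:
stopifnot(count_words("testing in R") == 3)
```

Swapping `split_tokens()` for another tokenizer leaves this test green, which is exactly the stability the post describes.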
What's worse than no automated tests? Bad tests.
Automated tests should save time and catch bugs early. But bad tests slow you down, cause frustration, and reduce confidence. Let's unpack three common testing traps and how to avoid them. 🧵
Rweekly issue #38 is out.
This week's highlights include: R6 interfaces (@jakub-sobolewski.bsky.social) and ggplot2 (@nrennie.bsky.social)
#Rstats #code #reading
rweekly.org
Users get value. You save time and money.
Don't get lost in code no one asked for. Take control of what gets built.
Ready to change your workflow?
I break down how to start with Behavior-Driven Development and build features your users actually want.
How it works:
✅ Start every feature as a user story, not a wish list.
✅ Write specs from your users' words.
✅ Write only the code users need.
✅ Automate checks along the way.
Building code gets easier every day with AI. Building code that truly matters to users remains the real challenge.
At @user-conf.bsky.social virtual, Iโm sharing how to ship only what matters.
See it here: youtu.be/e4H28G2J05U?...
#rstats #opensource
Great point.
1. APIs are versioned: you track the API version (or the version of a package that wraps it). Your contracts stay safe until you upgrade to a new version.
2. Schedule a test run that checks if real APIs still return the shapes you expect. Run it periodically for extra safety.
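A scheduled contract check like the one described could look roughly like this in R. It is a sketch under assumptions: the endpoint, fields, and `fetch_user` client are placeholders, not a real API.

```r
# Hypothetical contract check, meant for a scheduled (e.g. nightly) run,
# not the regular unit-test suite. The fields below are placeholders.
check_contract <- function(fetch_user) {
  response <- fetch_user(id = 1)
  expected_fields <- c("id", "name", "email")
  missing <- setdiff(expected_fields, names(response))
  if (length(missing) > 0) {
    stop("API contract broken, missing fields: ",
         paste(missing, collapse = ", "))
  }
  invisible(TRUE)
}

# Demonstrated against a stub here; the scheduled job would pass
# the real API client instead.
stub_fetch_user <- function(id) {
  list(id = id, name = "Ada", email = "ada@example.com")
}
stopifnot(isTRUE(check_contract(stub_fetch_user)))
```

Keeping this check out of the fast suite means everyday tests stay quick, while the periodic run still catches upstream shape changes.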
👉 Result:
Fast tests ✅
Reliable tests ✅
Cheap tests ✅
Better dev experience ✅
Here's the approach:
1. Abstract the dependency with an interface
2. Use fakes or mocks in tests
3. Test your code's behavior *against* the fake
4. Plug in the real dependency only in production
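The four steps above can be sketched in a few lines of R. This is a minimal illustration, assuming a hypothetical "translator" service; none of the names come from a real package.

```r
# 1. Abstract the dependency: the code depends on a function
#    (the interface), not on a concrete API client.
summarise_text <- function(text, translate) {
  paste("Summary:", translate(text))
}

# 2.-3. In tests, plug in a fake and assert on your code's behavior:
fake_translate <- function(text) paste0("<translated ", text, ">")
result <- summarise_text("hello", translate = fake_translate)
stopifnot(result == "Summary: <translated hello>")

# 4. In production, pass the real client instead, e.g.:
# summarise_text(input, translate = real_api_translate)
```

The test never touches the network, so it stays fast, deterministic, and free.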
✅ Test only the code you own.
How? Simulate the external system in tests. You don't need the real thing.
🚫 Why you shouldn't test external dependencies directly:
🐢 Slow (waiting for responses)
🎲 Flaky (unreliable availability/results)
💸 Expensive (API costs add up)
Or worse, tests don't get written at all.
🧪 New pattern in the R Tests Gallery: Testing code that uses LLMs, APIs, or databases
External systems power our code (LLMs, APIs, DBs, libraries), but they don't need to be in our tests.
Check it out 👉 jakubsobolewski.com/r-tests-gall...
#rstats #opensource
Ask AI to draft specs, review them, then refine.
Repeat in minutes what might take days by hand.
Your specs become abstract, clear, and future-proof, no matter how the app evolves.
Give it a go. Cut through legacy fog with AI-powered BDD.
It looks simple but hides complexity. How do you write tests that capture what happens without leaking how?
Check this post to see how to use AI to iterate on writing specifications faster: jakubsobolewski.com/blog/ai-assi...
Imagine this workflow:
👉 Start on a "Data" page with steps: Upload → Filtering → Mapping → Preview
👉 User uploads or picks a default dataset, then moves through steps
👉 Submit variable mappings → data preview appears
👉 "Visualization" unlocks to view plots
Want to write specs for an existing app? Let AI help you iterate faster.
Writing specs after the fact gets messy fast. You're tempted to mention buttons, screens, and other UI stuff, but that only locks you into one way the app works.
#rstats #opensource
The latest issue of @rweekly is now live!
https://rweekly.org/2025-W31.html
Highlights:
📊 Copy the Pros: How to Recreate this NYTimes Chart in R by @MrPecners
⏩ Speed Testing Code: Three Levels by @kellybodwin
🧪 Testing your Plumber APIs from R by @jakub-sobolewski.bsky.social
As always […]
🔔 Follow me or subscribe for updates as new examples land.
🤝 Have a specific case you want covered? Leave a comment or submit a request; let's build something great, together!
What You'll Find
✅ A growing collection of focused R test examples
✅ Step-by-step breakdowns showing what to do and why it works
✅ Real code ready to drop into your project with confidence
So far, there's only one example, but many more are on the way! 🚀
As The R Tests Gallery grows, I hope it becomes a reliable source for test examples, techniques, and practical practices in live code.
The patterns featured come straight from real projects I've worked on; if they helped me, maybe they'll help you too!
Your legacy Shiny app doesn't have to stay legacy forever.
Want to learn more about testing strategies for R? Check out my packages and resources for comprehensive testing approaches.
jakubsobolewski.com/blog
3️⃣ Refactor and Unit Test
Now you have a safety net. Start refactoring the code base into smaller, testable pieces. The acceptance tests will catch any regressions while you improve the code structure.
2️⃣ Create Acceptance Tests
With the behavior documented, make it executable with Cucumber.
To implement the steps you can use:
✅ shinytest2
✅ Playwright
✅ or Cypress
There's a Cucumber implementation available to execute your specifications whether your steps are written in R or JavaScript.
Format specifications with Given, When, Then syntax to describe preconditions, actions and outcomes.
This creates living documentation that both technical and non-technical stakeholders can understand and validate.
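A Given/When/Then spec for the kind of workflow described earlier might look like this. It is a hedged sketch in Gherkin (the format Cucumber executes); the feature and step wording are illustrative, not taken from a real project.

```gherkin
Feature: Data preparation
  Scenario: Mapping variables unlocks the preview
    Given a default dataset is selected
    When the user submits variable mappings
    Then a preview of the data appears
    And the visualization becomes available
```

Note that the steps describe user intent, not widgets, so the spec survives UI changes.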
Write it down.
Don't mix user interface with behavior. Be precise, but abstract enough that the behaviors stay true when the implementation changes. Instead of saying "click the Submit button", phrase it as "submit the form".
Stay focused on the business goal.
1️⃣ Document Current Behavior
Work with previous maintainers and users to understand what the app should do.
Don't assume you know everything. Ask questions like:
✅ What happens when users click this button?
✅ How should the app respond to different inputs?
Your legacy Shiny app needs a makeover, but jumping straight into refactoring is like repainting a room with furniture still inside. Things will get messy.
The safest approach? Write acceptance tests first.
#rstats #rshiny #tests #testing #opensource