You add the Posit marketplace, install r-lib, and invoke the skill at an opportune moment (or Claude Code runs it for you with the right trigger words).
The README has some instructions.
github.com/posit-dev/sk...
Thanks for this!
I admit I'm only using Positron assistant w/ copilot which I don't think can access skills.md. What is your general workflow for Claude code with package development? Or any resources you could point me to?
Please do and report back!
Be extra careful with the Description: -- quote software names, beware of spelling, use proper DOI refs, ...
"Newbies" -- packages, not maintainers -- are put through a special room in CRAN-Hell
use this skill for some extra help on the arcane: github.com/posit-dev/sk...
01.03.2026 20:44
"Newbies" checks (pkgs w/o prior releases on CRAN) are a particularly fussy human-administered set of checks (e.g. do all functions have explicitly documented return values?). Also, things like spell-check false positives (can be fixed via dirk.eddelbuettel.com/blog/2017/08... )
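A sketch of handling those spell-check false positives from within R, assuming the {spelling} package (which the linked post discusses) is installed and you are in your package directory; function names are from {spelling}, the workflow shown is an assumption about how you'd apply it:

library(spelling)

## list the words CRAN's aspell-based check would flag
spell_check_package()

## whitelist the remaining false positives (writes them to inst/WORDLIST,
## which the spell check then ignores)
update_wordlist()

Re-run spell_check_package() afterwards; anything still flagged is a genuine typo to fix in the docs.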
01.03.2026 17:54
Thanks. Valuable to set those expectations. Sounds like a manuscript review process
01.03.2026 15:30
Run R CMD check --as-cran, and read the CRAN Repository Policy document and Writing R Extensions carefully (they are long and dense, and they document their own updates poorly...).
Don't expect perfection. Iterate. Feedback from the review is normal as we cannot run all their checks (their bug, not ours). #rstats
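One way to run that same check from within R, a sketch assuming the {rcmdcheck} package is installed and you are in your package directory (the iterate-on-the-results workflow is the suggestion from the post above, not a CRAN requirement):

library(rcmdcheck)

## runs R CMD check --as-cran on the current package and collects
## the results as an object instead of stopping on the first problem
res <- rcmdcheck(args = "--as-cran", error_on = "never")

## triage before submitting: errors first, then warnings, then notes
res$errors
res$warnings
res$notes

Iterating on res$notes locally catches most of the mechanical feedback, but as noted above, some CRAN checks can't be reproduced on your machine.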
Good point. Possible to de-trend with a polynomial and look for deviations from a circle vs a straight line?
There is information in the rate of change, it just doesn't always happen at discrete breakpoints. Not always an obvious 'corner' on the circle
Thanks, yeah that's exactly why I think implementing something like this is above my current skill ceiling, but could definitely be done
Use manual selection to inform priors on meaningful changepoints, then a Bayesian model to statistically validate?
Dear #rstats mentors, what advice do you wish someone would have given you before submitting to CRAN for the first time?
The obvious, and the less so
Metabolic threshold detection should combine expert manual selection with a locally weighted piecewise fit/changepoint model
Beyond my current time availability and skill limits to develop, but I'd love to see how it works
+ )
# A tibble: 3 × 13
  expression       min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr>  <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 which_apply  50.03ms  116.8ms      8.18    2.49MB     4.03    67    33
2 Find          8.39ms   12.2ms     70.9     9.69KB     2.19    97     3
3 loop          11.7ms     20ms     50.3    26.08KB     2.64    95     5
# ℹ 5 more variables: total_time <bch:tm>, result <list>, memory <list>,
#   time <list>, gc <list>
Find() faster than a loop too
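A minimal self-contained sketch of the loop-vs-Find() comparison, on a toy vector rather than the real {mnirs} data; the first_over() helper is hypothetical, just showing that both variants stop at the first hit:

## loop variant: scan and break out at the first element over the limit
first_over <- function(x, limit) {
  for (i in seq_along(x)) {
    if (x[i] > limit) return(i)
  }
  NA_integer_  ## no element qualified
}

x <- c(3, 8, 2, 15, 4, 21)
first_over(x, 10)                   ## 4
Find(\(i) x[i] > 10, seq_along(x))  ## 4, same early exit in one call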
28.02.2026 00:19
Oh yeah. I'm coming back to base R from tidyverse (purrr, etc.), and I just about feel comfortable with the *apply() functions. Now starting to really understand the power of these higher-order functions
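A tiny illustration of those higher-order functions on a toy data frame (hypothetical names, not the {mnirs} internals): Find() returns the first element satisfying a predicate, Position() returns its index, and both stop scanning at the first match.

## toy stand-ins for the real data and channel names
df <- data.frame(a = c(1, 2, 3, 2),
                 b = c(9, 2, 2, 3))
targets <- c(2, 3)

## predicate: does row .i contain every target value?
has_targets <- \(.i) all(targets %in% df[.i, ])

Find(has_targets, seq_len(nrow(df)))      ## 3, the first matching row index
Position(has_targets, seq_len(nrow(df)))  ## 3, its position in the input

Here the two coincide because the input is already a sequence of row indices; in general Find() gives you the element and Position() the where.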
tidyverse vs base R official #rstats debate I think is planned for next Thursday?
The EPPL is recruiting for a remote (app-based) pilot study examining whether light, fully self-paced swimming may be a tolerable form of movement for people with ME/CFS and related conditions, due to the distinct physiological effects of water immersion. #ME/CFS #ME #POTS #LongCovid
20.02.2026 19:24
We recently went through a similar process for equivalence testing and for standard difference testing. It is a good experience. Although I can't help but feel that it still falls back to 'vibes' when there is incomplete quantitative information to use?
27.02.2026 19:17
That's kinda how I think about potential future users running my code repeatedly. This improvement saved ~30ms... negligible running a single analysis, but adds up on thousands of future runs
Thanks Philip! Ya, there are strong warnings against premature optimisation, but I stumbled on this while solving a bug, so I think that's justified
Attempting to develop a package has completely changed how I code. Have to be much more flexible. It's been such a fun process!
I couldn't imagine rawdogging contrasts like that. The big lesson I learned from Russ is that the SEs around the marginal means aren't necessarily the appropriate SEs to compare marginal contrasts
27.02.2026 18:23
## pre-allocate row search counters
apply_count <- 0L
find_count <- 0L

which_apply_count <- which(apply(data, 1L, \(.row_vec) {
  apply_count <<- apply_count + 1L
  all(nirs_channels %in% .row_vec)
}))

Find_result <- Find(\(.i) {
  find_count <<- find_count + 1L
  all(nirs_channels %in% data[.i, ])
}, seq_len(nrow(data)))

## compare rows checked
data.frame(
  method = c("which(apply())", "Find()"),
  rows_checked = c(apply_count, find_count),
  result = c(which_apply_count, Find_result)
)
#>           method rows_checked result
#> 1 which(apply())        12042     41
#> 2         Find()           41     41
If I know the row I need is somewhere near the top, it's faster to stop searching after 41 rows than continue over 12042 rows...
who knew!
library(mnirs) ## development package
library(bench) ## benchmarking

## read file to a data frame (internal fn)
data <- mnirs:::read_file(example_mnirs("train.red"))

## column name strings to match all
nirs_channels <- c("SmO2 unfiltered", "HBDiff unfiltered")

## return bench::mark results
bench::mark(
  ## previous code searched through all rows
  which_apply = {
    which(apply(data, 1L, \(.row_vec) {
      all(nirs_channels %in% .row_vec)
    }))
  },
  ## TIL about `Find()` which returns the first match and stops
  Find = {
    Find(\(.i) {
      all(nirs_channels %in% data[.i, ])
    }, seq_len(nrow(data)))
  },
  check = TRUE,
  iterations = 100
)
#> # A tibble: 2 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 which_apply  31.5ms  34.89ms      27.1    2.56MB     86.0
#> 2 Find         2.73ms   3.03ms     304.     63.2KB     26.4
TIL `Find()` returns the first match then stops searching.
10x speed and 40x memory improvement in one of my core package functions!
I wonder where else I can implement it?
#rstats #mnirs
OH phew! I interpreted "getting the numbers they want" as something else
26.02.2026 23:32
Say more? I'm tagged in this post
26.02.2026 22:56
This is secretly a NIRS meme
19.02.2026 21:56
A cartoon meme template with a person looking at a computer screen and saying "show me a breakpoint". The computer answers "here is a breakpoint" and the screen shows the classic Jamnick et al., 2018 image of multiple lactate breakpoints on a lactate curve, with obviously no real "breakpoint" per se. The person answers "oh my god", as a commentary that when we ask a computational algorithm to find a breakpoint on a partial circular line, it will give us a breakpoint, whether or not it is physiologically meaningful. The corner of a circle, if you will.
Sport scientists love finding corners on a circle
19.02.2026 21:55
Now working with comma-decimal-separated files thanks to Philip's help
19.02.2026 16:17
You work with NIRS data?
Check out this great R package
I had the chance to test it already; I wish I had it when analyzing the data of my last study
For anyone frustrated by processing #NIRS data, I've been working on {mnirs}, an R package that I think can help with standardisation
Take a look: github.com/jemarnold/mn...
Get in touch if interested. I'm looking for more users to provide feedback and break things. More info to come #mnirs #rstats
Ah, thanks! Another flip in my perspective: don't want to over-manicure the yard down to the dirt, rather mostly leave it wild with a light touch. Oh, I like that
16.02.2026 16:34
Interesting difference in perspective on productivity. Looking at the picture, I assumed the more contributions, the more your grass would grow
16.02.2026 14:47