We will probably become an interplanetary species before pivoting a data.table becomes an easy and logical task.
It also depends on the time of day and server load. You can get a great response followed by an awful one depending on how busy the servers are and, of course, on whether you are using the free or paid version.
Finally, acknowledge that LLMs helped you write the analysis code in your paper.
Once your analysis looks good, you should:
1. Make sure you understand what's happening.
2. Test, test, test.
3. Ask for a critical review from one or two LLMs. Go back if needed. Confirm with a statistician.
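For step 2, "test" can start as simple sanity checks on the objects your analysis produces before you trust any LLM-written code. A minimal sketch in R; `df` and the fitted model `fit` are hypothetical names for illustration:

```r
# Minimal sanity checks before trusting LLM-written analysis code.
# `df` (the analysis data) and `fit` (a fitted glm) are hypothetical names.
stopifnot(nrow(df) > 0)                      # data actually loaded
stopifnot(!any(is.na(df$outcome)))           # no missing outcomes slipped through
stopifnot(fit$converged)                     # the glm actually converged
stopifnot(identical(levels(df$outcome),
                    c("alive", "dead")))     # "dead" is the modelled event, not the reference
```

If any check fails, `stopifnot()` halts with an informative error, which is exactly what you want before results go into a paper.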
b. Go for another LLM. Gemini 2.5 is good and can also write artifacts (called Canvas). You, of course, need to provide all the context to it. Just pasting your R code and saying "Fix this" won't help, unless it is a simple issue like a missing bracket.
a. Open a new conversation and project and supply the broken code. Provide evidence of where it seems to be broken. Ask for a fix. This may help. (+)
5. Using a single LLM. While Claude is great, sometimes it derails and becomes blind to its mistakes. The first thing to try is fixing it yourself, but sometimes that's hard. You can then proceed with two approaches: (+)
4. Not creating a writing style. This can be done easily in Claude. You can even add papers you have written that are open access, so Claude knows what you usually talk about. (+)
I only move to vibe coding when I am sure Claude understood what I am trying to do and I understood what they did on the first pass. They need to grasp the idea of the project and my coding style, and I need to understand what's happening.
3. Failing to provide context. When I start coding with Claude, I usually create a project, upload the code, and the first thing I do is ask Claude if they understand what I am trying to accomplish with that code. Then, if they do, I proceed by asking them to clean it up a bit. (+)
This avoids a lot of "That's fantastic!" or "Good catch!" replies.
For example, my default instructions for Claude include: "Do not flatter me. If I am wrong, say I am wrong. If I try to correct you and you still feel you are right, debate. Do not pay compliments. Provide direct answers whenever possible." (+)
2. Failing to tweak the LLM to taste. In Claude, you can create projects where you can add files for context and write what you plan to achieve and how Claude should behave. In the global Claude settings you can also provide a general guide to how you want the replies. (+)
...Code should use base R and {marginaleffects}. Code should be as succinct as possible. Please confirm you understood before proceeding. Also please make sure the code uses dead as the outcome, and not as the reference." This will produce a much more useful output. (+)
...I need to run a logistic regression and extract the effect as an OR and RD. In addition, I also need a model adjusted for covariates X, Y, and Z, all continuous, and then the marginal effects for that model. I finally need a plot comparing the effects from the adjusted and unadjusted models...
Consider: "Hi Claude, we will be working with R today and I need an artifact for some code I plan to run. I have a data frame loaded, with the outcome labelled outcome (binary, alive or dead) and intervention (binary, 0 for control and 1 for intervention)...
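A prompt with that much context might yield something along these lines. This is only a sketch, assuming a data frame `df` with columns `outcome` and `intervention`, and hypothetical covariate names `X`, `Y`, `Z`; it relies on {marginaleffects}' `avg_comparisons()`:

```r
# Sketch of the kind of output such a detailed prompt could produce.
# Assumes `df` with `outcome` ("alive"/"dead") and `intervention` (0/1);
# X, Y, Z are placeholder covariate names.
library(marginaleffects)

# Make "dead" the second factor level so glm() models P(dead), not P(alive)
df$outcome <- factor(df$outcome, levels = c("alive", "dead"))

m_unadj <- glm(outcome ~ intervention, data = df, family = binomial)
m_adj   <- glm(outcome ~ intervention + X + Y + Z, data = df, family = binomial)

exp(coef(m_unadj))["intervention"]                     # odds ratio, unadjusted
avg_comparisons(m_unadj, variables = "intervention")   # risk difference, unadjusted
avg_comparisons(m_adj,   variables = "intervention")   # marginal effect, adjusted
```

A plot comparing the adjusted and unadjusted estimates could then be built in base R from the two `avg_comparisons()` outputs.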
1. Believing they are talking to Alexa or Siri. This is the most common one. Many people make short, small prompts as if they wanted Alexa to play a song. For example, people will ask Claude for the code for a glm. Claude will give back generic code, if anything. (+)
I've been using LLMs for a bunch of things, especially coding with R. Claude is great. I believe some people fail to see their true value because they make simple mistakes: (+)
Agreed. I guess we didn't try hard enough to further validate the concept. The last 10 years have failed to deliver the next logical step forward: validate prospectively inside a trial. The longer it takes, the more skeptical people will be. Adding more attempts to the mix is not helpful, I guess...
We need fewer phenotyping papers in critical care. The time for exploring the method is long gone. Time to validate.
Discovering evidential (likelihoodist) statistics was probably the coolest academic thing for me over the past 6 months.
That seems like a lot of legwork to say "more evidence is needed," eh?
So nice to hear from you again, Lars!
Perhaps I should start posting again on social media.