No, please do!
No, there are no rules about using your affiliation only in your area of expertise. Mainly because as a prof you can define your research area as you please. For an employee working on specific funding (& using uni resources) there might be some scrutiny but if you work on it on your own time…
How impolite of things to be blooming
While the world’s on fire
Pointless, defiant, simple beauty
In this Economy???
I would be way more worried about Musk/OpenAI kill switches right now than Chinese ones.
Right, but we’re agreeing then. This is saying that AI enables some things by making them faster/doable, like better computing made AOGCM coupling possible. It was both a technical and a scientific challenge. But the insights of coupled modeling didn’t come from the software but from those using it.
Now it’s ML. It will improve some stuff, make some nice tools. I mean I’m trying to use it as well, and figure out if there’s utility as a tool. There’s definitely opportunities for interesting science done with ML as a tool. But massively overblown.
Oh totally agree. And the trend before that was the ultra-high res stuff, which people were also willing to throw money at without a clear understanding of… what for. For many it was just “it makes nice high res plots”. But if you can run 1 year of ultra high res or 1000 of 1 degree resolution…
That’s not necessarily bad. Just like we don’t attribute novel insights to Python, or parallel computing, or Overleaf, as if they were the thing giving us the insight. They’re just tools. Sometimes they help.
Even in the NWP case, there’s no insight or new knowledge gained (per your previous tweet). There’s just a more accurate tool bc you can “solve” the problem with more data. Same reason why ML is helping with land models’ tuning issues. A good instrument in data-rich environments. 0 understanding.
Just like a brilliant grad student, I guess, in multiple ways!
And spring wasn’t looking great either
Very Famous Academic once complained to me he had published a super paper with a super model but no1 was using the underlying data to build upon. When I asked if he knew if data was available, or how others could access it, I just got a blank stare. Your data is useless if you don’t offer support! 🙃
So yes, nobody should be evaluated on raw numbers, but also, let's expand our definition of "quality" beyond some of the usual considerations, which are also heavily skewed towards a few highly elitist metrics. If your paper doesn't have a public methodology, I don't care how smart it sounds.
This would also move us away from evaluating people based on highly influential papers as a measure of quality (usually only within reach for those in "famous" groups, lots of gatekeeping in some journals...), and more towards how, holistically, they're contributing to community knowledge.
I think @egu.eu journals are the best ones at doing this already: reviews and rebuttals are public, a reader can get an idea about what happened as an iterative process. Potentially, post-publication comments (if somewhat moderated) could improve on this, as other disciplines already do.
How does this solve the peer-review crisis? It does if we treat peer-review as a community tool, too. Reviewers can check for reproducibility and robustness, as well as ensure the paper builds upon previous works, without necessarily policing conformity as much as it does now.
they often fail at supporting others' works by thoroughly documenting methodologies, data sources, code, in a way that is not just "we chucked everything in 72 pages of unformatted appendix". So quality can take many forms: usefulness for the community is one that is even more overshadowed now.
...just like a house needs nice windows, and pillars. But a house also needs lots of humble bricks (ok please bear with my european metaphor, no wood here) that rest on other bricks, and that support what can come next. And while superb papers can offer great intellectual foundations...
So no, more incremental advances do not necessarily imply a lapse in quality, if done well, and with an eye to better contributing to the knowledge of the community - rather than seeing publishing as the way to show how brilliant you are compared to everybody else. Big papers are important...
If you wait X years before publishing your magnum opus, are you preventing others from building on your many methodological advances and your maybe smaller findings, that might be significant to somebody else? I find papers with most methodology buried in the appendix to be quite hard to reproduce.
I have a very different take from many other academics here about publishing. Imo, publishing a paper is more like laying a new brick when building a house than creating a masterpiece painting from scratch. You want something robust, but also something operable that others can build upon.
Also, yes, we did it! Boys just need to have fun sometimes, and thanks to the reviewers for being game (one of them did complain you do not “brew” CIDER so we had to rename one of the sections…)
Finally, if you disagree with the current set of scenarios modeling centers have simulated, instead of assuming they are shadily conspiring to make SRM look good because the scenarios look too simple (or that they’re just silly!), you can test your own envisioned scenarios!
… and it works quite well at those! The code and underlying training set are open source and available for the community to reproduce our results and improve upon them. A web-based version is also available here ➡️ simulator.reflective.org Lots of potential future improvements already planned.
CIDER is trained on a large but finite set of pre-existing ESM simulations, but it can emulate novel, out-of-sample scenarios at a small fraction of the cost of one ESM simulation. We test it on multiple models, show its capabilities at reproducing out of sample scenarios (like uncoordinated ones)…
gmd.copernicus.org/articles/19/...
We’ve been working for a few years now on a regional emulator for exploring different SRM strategies, capable of expanding the exploration space beyond the small set of scenarios that Earth System models can provide. It’s here now! Presenting CIDER ➡️
Not closed, you should be able to sign in suggestion mode, so I then can accept signatures after a brief check ;-) otherwise add as comments and I’ll add
Promises promises
We should have a less subtle Friday check up the two of us and talk about all the new shit we learned to make us bitter for the weekend
Might as well try a cutting-edge, novel technique called “Growing a spine” early on. There is nothing to save except all of it.