Who decides what a substantive contribution is? Feels like this makes it a review venue, not a preprint server anymore
bsky.app/profile/grim...
Yes sorry it's here!! Bad habit I keep forgetting to link work on threads
github.com/DrCatHicks/l...
Oh it doesn't surprise me at all (having worked at a learning-at-work company). People are understandably sensitive about feeling watched. I just wanted to be clear that's not what I was doing here
I'm literally on this journey with this game right now
The survey items are just open measures I've tested in our published research, and I've helped loads of teams use them as a reflection exercise. It's just a resource if people have situations in which they would be helpful. Not sure how this is feedback for any part of what I've done!
I'll DM!
I really need to write up the newsletter I have going about this but between this, book copyedits, consulting relaunch, podcast, public speaking ramping up, [secret developer science], health stuff...!
People in technical roles have tremendous power because of their place in the knowledge hierarchy -- because YOU think what *I* do is valuable, the world does. I believe that many developers want to use that power for good and to uplift work that helps them. We just have to keep figuring out how
I have an unfinished paper about how tools shape our thinking, but also, here our thinking shapes our tools. Malleable interfaces offer that. To me, the ways we are able to share strategies with each other & offer each other small experiments in shaping our tool use differently are really remarkable
I'm so glad!
I have seen quite an alarming trend among concerned practitioners to *entirely reject the idea of any truth in self-report of psychological experience at all* and I find that profoundly cynical; as an applied scientist I could never wholesale reject thoughtful self-report and all it offers
Entirely apart from the argument about the merits of Claude Code, I find it absolutely lovely that 550+ people have used my learning skill and shared so many examples of carving out moments to reflect and test themselves on their codebase comprehension throughout their days
sounds like a recipe for influencer success though I gotta say
Evidence over agendas is in my consulting principles for a reason. I want to be evidence-driven because I want to have techniques to self-correct my ideas & to help people who have been mismeasured agentically create better evidence. That could include throwing out bad measures or designing good ones
you know what, I was going to read something else today but I'll make this my paper of today, thank you!
I believe it leaves me with a very sharp gratitude for all that I have now, and an understanding of multiple sides of these problems. You absolutely cannot work the problem if you are not willing to think about many sides at once, if you are not willing to sit with the complexities.
I think I have an interesting perspective here because I was fundamentally UNmeasured for the first couple of decades of my life. Multiple health problems missed, no feedback about my thoughts and skills, no mentoring from teachers, no way to be legible or visible to society. I was invisible
People who have access to a great deal more data over time frequently achieve better outcomes in many, many problem areas. For example: if you have a dynamic health problem you benefit from monitoring over time, not a test during the 15 min a doc can give you months after you call
I believe that our problems are about *evidence* and how evidence is used in the world, and that working to empower more people to understand and use better evidence is a battle worth fighting.
It is easy to attach a moralizing and sweeping generalization to work on "data" and against anyone who works on "data." But data is simply information. Qualitative "data" has the same problems: we could pull up sweeping meta-analyses of bias in teacher judgments, too.
But different forms of measuring have incredible power. Learners who are divergent and creative thinkers are often better represented in their potential and in their achievement by small test moments ACROSS their work; because they are more exploratory but then also have interesting breakthroughs.
Working on those issues doesn't particularly mean I'm a test superfan. In fact, because I was a child who wasn't in school, trying to get into college meant my entire future hung on whether or not *I* could pass standardized assessments. I did and still do think that was pretty grossly unjust
e.g., some learners have a lot more difficulty maintaining their attentional focus during a long standardized test. The length of the standardized test isn't really driven by learner need, honestly, but by the demands of static tests to extract enough information about a learner in a single sitting
It was my belief then and it's still my belief that learners are deeply misrepresented a lot of the time by static tests that come too late, are administered with methods that have immense cultural and historical biases, and where the implementation itself can introduce tremendous bias
This might include the ways that students answer a practice question and get it wrong, and the meaningfulness of whether a learner tries again, versus a learner who goes on to avoid that content area.
During the covid era when I was doing efficacy & effectiveness research at Khan Academy, I was part of a discussion with public school leaders where we talked about the usage of "trace data," its limitations, and its many nuances, when it comes to understanding and modeling learning over time
Useful rundown of issues in the βscience of scalingβ interventions, from Rasul.
www.science.org/doi/10.1126/...