@nickbentley.bsky.social
Posts about game design. Director of Game Design at Dolphin Hat Games (Taco Cat Goat Cheese Pizza), former President of Underdog Games Studio (The Trekking Trilogy), former Director of Online Marketing at North Star Games, former Neuroscientist.
10.02.2026 14:11 β π 1 π 0 π¬ 0 π 0Most people learn games from half-remembered rules, in a fugue of distractions: kids, TV, phones, or just enjoying each otherβs company
And most finely tuned hobby games need to be played perfectly and optimally or theyβre β¦ a rough experience
Recalibrate
To protect against this, I:
1. rely on unguided testing
2. avoid participating as a player in many tests
3. keep my presence neutral and avoid commentary
(19/19)
Designers love games, are good at them, and want playtests to go well.
Without realizing it, they often add enthusiasm, clarity, and momentum that aren't actually in the game itself.
(18/19)
Another common situated experience: The Fun Buff
This may be even more common.
A *Fun Buff* is a player who consistently makes games more fun just by being there.
Often, the game's designer *is* a Fun Buff.
(17/19)
You can read more about split-testing here:
bsky.app/profile/nick...
(16/19)
The most reliable way I've found to counter this is split-testing.
In a split-test, players play two games back-to-back and then say which they prefer and why.
Direct comparison reduces interpretive charity and makes weaknesses harder to ignore.
(15/19)
A wider audience does the opposite. They expect the game to justify itself immediately.
So even when there aren't other hidden contextual dependencies, strong playtests predict market reception poorly.
Interpretive charity props up experiences that won't survive first contact.
(14/19)
A common kind of situated experience: Interpretive Charity
Playtest groups don't just share context; they share *interpretive charity*.
Early playtesters tend to:
- assume the game is worth understanding
- forgive rough edges
- try to infer the designer's intent
(13/19)
To defend against such risks, I now try to catch situated experiences early.
One way I do that is by switching to unguided testing early in development - having players learn directly from the rulebook, without explanation - so I don't accidentally supply context the game itself can't.
(12/19)
It hasn't gone to market yet, so this remains a premortem hypothesis.
But I've seen this kind of thing a bunch of times, both in others' work and (painfully) my own.
So I suspect it's a real possibility in this case too.
(11/19)
It's possible that when company members teach the game, they also unintentionally teach the tempo.
The game itself doesn't clearly signal that it should be played that fast.
Without that shared understanding, the game's less fun.
(10/19)
I don't know for certain why this happened, but one pattern stood out:
Inside the company, the game is played quickly.
In my groups, it's played more slowly, and at that pace, the experience is dull.
(9/19)
I recently worked with a company that was excited about a prototype. Their playtests went great, and they suspected they had a hit.
When I tested the game with my own playtest groups - people squarely in the target market - the response was different. No one loved it. Most disliked it.
(8/19)
The magic depends on those conditions, but the final game can't reproduce them.
When that happens, the designer hasn't found a broken game. They've found a *situated experience* - one that works beautifully in one environment and falls apart outside it.
A possible example:
(7/19)
A game communicates what it is and how it should be played through things like its:
- title
- art
- marketing
- rules and how they're written
The problem is that early test groups often share unspoken assumptions and behaviors the designer doesn't notice - because they're always present.
(6/19)
For a game's magic to travel, two things have to be true:
1. The game has magic.
2. The game can *carry* that magic outside the context in which it was discovered.
That second part is harder than it looks.
(5/19)
There's another common failure mode:
the designer didn't just design a game; they designed an experience that only works in a particular social and contextual setting.
(4/19)
The standard explanation is that the playtesters' tastes didn't match the broader market's: the game was tuned to a niche. That's often true. But...
(3/19)
1. A designer creates a game that feels special.
2. That feeling reliably shows up in the designer's playtest groups.
3. When the game reaches a wider audience, the magic disappears and the game is received poorly.
(2/19)