
James Sinclair

@jrsinclair.bsky.social

I type for a living. Sometimes code, sometimes not.

37 Followers  |  23 Following  |  31 Posts  |  Joined: 15.11.2024

Latest posts by jrsinclair.bsky.social on Bluesky

A maze that exists only as a data structure in memory is a bit useless. We need some way to make it legible to human beings. So I've written an article that addresses how we do that.

https://jrsinclair.com/articles/2025/rendering-mazes-on-the-web/

25.08.2025 08:18 — 👍 0    🔁 0    💬 0    📌 0
The joy of recursion, immutable data, and pure functions: Generating mazes with JavaScript Generating mazes might not be something you do a lot in your typical front-end job. Some might call it a waste of time. Why bother if you're not a game developer? Who needs that kind of thing? Sure, i...

I felt like writing about something fun. So I wrote an article about creating mazes with JavaScript. Things got out of hand and it grew to two articles. Second one will be published soon.

jrsinclair.com/articles/202...

18.08.2025 07:54 — 👍 0    🔁 0    💬 0    📌 0

Thinking about this some more, I do wonder if explore-expand-extract tracks as a specific application of Dave Snowden's Cynefin framework.

25.07.2025 22:28 — 👍 0    🔁 0    💬 0    📌 0
3X Explore, Expand, Extract • Kent Beck • YOW! 2018
YouTube video by GOTO Conferences

I can't believe I haven't come across this talk by @kentbeck.com before. It has so much explanatory power.
www.youtube.com/watch?v=Wazq...

23.07.2025 22:21 — 👍 6    🔁 2    💬 1    📌 1
In Praise of “Normal” Engineers This article was originally commissioned by Luca Rossi (paywalled) for refactoring.fm, on February 11th, 2025. Luca edited a version of it that emphasized the importance of building “10x engi…

@charity.wtf talks a lot of sense, as usual:

> [10x Engineers exist] So what? It doesn't matter. […] What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.

charity.wtf/2025/06/19/i...

09.07.2025 06:40 — 👍 0    🔁 0    💬 0    📌 0
What's the difference between named functions and arrow functions in JavaScript? Arrow functions (also known as 'rocket' functions) are concise and convenient. However, they have subtle differences compared to function declarations and function expressions. So how do you know whic...

I wrote a thing about all the ways you can summon a function in JavaScript. It even includes a flow chart to help check whether you're picking a suitable incantation.

jrsinclair.com/articles/202...

30.06.2025 08:44 — 👍 0    🔁 0    💬 0    📌 0

What do you think? Does this theory make sense, or am I simply defending my biases?

02.04.2025 09:36 — 👍 0    🔁 0    💬 0    📌 0

As a side effect of this structured thinking process, you also generate automated tests that provide immediate feedback on your progress. And if you're following _all_ the steps (red, green, refactor), the code improves with every iteration.

02.04.2025 09:36 — 👍 0    🔁 0    💬 1    📌 0
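To make that loop concrete, here's a minimal sketch of red-green-refactor. The `slugify` function and its tests are hypothetical, and `console.assert` stands in for a real test runner:

```javascript
// Red: write the failing tests first, stating exactly
// what we want the code to do.
const testSlugify = () => {
  console.assert(slugify('Hello World') === 'hello-world');
  console.assert(slugify('  Tidy  up  ') === 'tidy-up');
};

// Green: the simplest code that passes. Refactor: tidy it
// up while the tests keep us honest.
const slugify = (s) => s.trim().toLowerCase().split(/\s+/).join('-');

testSlugify(); // no assertion messages means both tests pass
```

The point isn't the slug logic; it's that the tests pinned down the behaviour before any implementation existed.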

It's because that initial investment of self-discipline pays off. And the return on investment is huge. Problems are easier to solve if you are clear and specific about what the issue actually is.

And that’s not all.

02.04.2025 09:36 — 👍 0    🔁 0    💬 1    📌 0

Instead, TDD forces you to be very specific up front about _what_ you want to achieve. That takes mental effort, and it feels like less fun than diving in and coding a solution.

Why, then, do some people love TDD so much?

02.04.2025 09:36 — 👍 0    🔁 0    💬 1    📌 0

I have a theory. People struggle with TDD because it feels like hard work. This is because many of us code to think. We're assigned a task, and we start hacking away in the IDE. The solution languidly emerges as we learn more by writing code.

But TDD won't let you do that.

02.04.2025 09:36 — 👍 0    🔁 0    💬 1    📌 0

What would you use these for? Well, with large arrays they can (sometimes) be a more efficient alternative to `.filter()`, provided you know the data you want sits at the start (or the end) of the array. They also come in handy when writing parsers.

01.04.2025 03:24 — 👍 0    🔁 0    💬 0    📌 0
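To illustrate the parser use case mentioned above, here's a hedged sketch. The `lexNumber` and `isDigit` names are hypothetical; the helpers follow the takeWhile/dropWhile definitions from this thread, with a guard for the case where every element matches:

```javascript
const takeWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  // If every element satisfies the predicate, keep them all.
  return idx === -1 ? arr.slice() : arr.slice(0, idx);
};

const dropWhile = (pred, arr) => {
  const idx = arr.findIndex((x) => !pred(x));
  // If every element satisfies the predicate, drop them all.
  return idx === -1 ? [] : arr.slice(idx);
};

const isDigit = (c) => c >= '0' && c <= '9';

// Peel a run of digits off the front of the input, returning
// the token and the unconsumed remainder.
const lexNumber = (input) => [
  takeWhile(isDigit, [...input]).join(''),
  dropWhile(isDigit, [...input]).join(''),
];

console.log(lexNumber('42+7')); // 🪵 ['42', '+7']
```

Each lexing step consumes a prefix and hands the rest of the input to the next one, which is exactly the shape takeWhile/dropWhile give you for free.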
Screenshot of some JavaScript code. The code is as follows:

const dropWhile = (pred, arr) => {
  const idx = arr.findIndex(x => !pred(x));
  // If every element satisfies the predicate, drop them all.
  return idx === -1 ? [] : arr.slice(idx);
}

console.log(dropWhile(isVowel, ['a', 'e', 'i', 'o', 'u', 'b']));
// 🪵 ['b']


The second, dropWhile(), does the opposite. It traverses your array and ignores items until a predicate returns false. Then it will give you the rest of the array.

01.04.2025 03:24 — 👍 1    🔁 0    💬 1    📌 0
Screenshot of JavaScript code. The code is as follows:

const isVowel = (c) => ['a', 'e', 'i', 'o', 'u'].includes(c);

const takeWhile = (pred, arr) => {
  const idx = arr.findIndex(x => !pred(x));
  // If every element satisfies the predicate, keep them all.
  return idx === -1 ? arr.slice() : arr.slice(0, idx);
}

console.log(takeWhile(isVowel, ['a', 'e', 'i', 'o', 'u', 'b']));
// 🪵 ['a', 'e', 'i', 'o', 'u']


The first, takeWhile(), traverses your array and keeps adding items to a new array until a predicate returns false.

01.04.2025 03:24 — 👍 0    🔁 0    💬 1    📌 0

A couple of array utilities you won't find in Array.prototype 🧵

01.04.2025 03:24 — 👍 0    🔁 0    💬 1    📌 0
A quote by John Ousterhout, from the book 'A Philosophy of Software Design'.

“Most modules have more users than developers, so it is better for the developers to suffer than the users. As a module developer, you should strive to make life as easy as possible for the users of your module, even if that means extra work for you. Another way of expressing this idea is that it is more important for a module to have a simple interface than a simple implementation.”


In some ways, this is so obvious it shouldn't need stating. But it does. And the tension is real. We want our code to be elegant, simple, concise. Yet we rarely weigh that desire against the complexity our elegance pushes into the interfaces (user interfaces and APIs) we build.

30.03.2025 21:34 — 👍 1    🔁 0    💬 1    📌 0
How to deal with dirty side effects in your pure functional JavaScript If you start learning about functional programming, it won't be long before you come across the idea of pure functions. And as you go on, you will discover functional programmers appear to be obsessed...

From the archives: jrsinclair.com/articles/201...

Functional programmers are obsessed with purity. “Pure functions let you reason about your code”. “They give you referential transparency!” And they have a point. Purity is good. But what do you do with the impure bits of your code?

27.03.2025 21:22 — 👍 0    🔁 0    💬 0    📌 0
// With our Map-based approach, we essentially reimplemented
// the Set data structure. We might as well use the Set
// constructor directly.
const uniq = (xs) => [...new Set(xs)];
const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits)); // 🪵 ['apple', 'banana', 'orange']


All we've really done with the third option, though, is re-implement the Set structure from scratch. So, we might as well use that. The most efficient option also happens to be the most concise.

26.03.2025 22:39 — 👍 1    🔁 0    💬 0    📌 0
// To avoid traversing the array more than we need to,
// we can use a Map to keep track of the elements we've
// seen so far. This way we only traverse the array once.
const uniq = (xs) => Array.from(
  xs.reduce((m, x) => m.set(x, true), new Map()).keys()
);
const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits)); // 🪵 ['apple', 'banana', 'orange']


To avoid traversing the array more than we need to, we can use a Map to keep track of the elements we've seen so far. This way we only traverse the array once. Once we've been through the list, we return the keys of the Map as an array.

26.03.2025 22:39 — 👍 0    🔁 0    💬 1    📌 0
// A more efficient approach is to use .indexOf() to
// check if the current element is the first occurrence
// of that element in the array.
const uniq = (xs) => xs.filter(
  (x, i) => xs.indexOf(x) === i
);
const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits)); // 🪵 ['apple', 'banana', 'orange']


A (slightly) more efficient approach is to use .indexOf() to check if the current element is the first occurrence of that element in the array. But it still traverses the array many more times than it needs to.

26.03.2025 22:39 — 👍 0    🔁 0    💬 1    📌 0
// Naïve solution. This is inefficient because it
// traverses the array multiple times, and creates
// lots of new intermediate arrays. It works, though.
const uniq = (xs) => xs.filter(
  (x, i) => !xs.slice(0, i).includes(x)
);
const fruits = ['apple', 'banana', 'apple', 'orange', 'banana'];
console.log(uniq(fruits)); // 🪵 ['apple', 'banana', 'orange']


Our first approach uses a filter and `.includes()`. This is inefficient because it traverses the array multiple times, and creates lots of new intermediate arrays. It works, though.

26.03.2025 22:39 — 👍 0    🔁 0    💬 1    📌 0

Four ways to remove duplicates from an array in JavaScript. 🧵
P.S. Use the last one.

26.03.2025 22:39 — 👍 2    🔁 0    💬 1    📌 0
Screenshot of a code snippet showing the use of an array tuple as a Maybe construct. Code reads as follows:

// Suppose we want to retrieve the weather conditions
// for a given location. Perhaps we've retrieved some
// data from a weather API and we need to process it.
const sensors = {
  ['Dartmoor']: { value: 10, unit: 'C' },
  ['Baker Street']: { value: 18, unit: 'C' },
  ['Scotland Yard']: { value: 68, unit: 'F' },
}
const conditions = {
  ['Dartmoor']: 'Foggy',
  ['Baker Street']: 'Sunny',
  ['Scotland Yard']: 'Rainy',
}

const toCelsius = (fahrenheit) => (fahrenheit - 32) * (5 / 9);

// Here we define a function that wraps our value in an
// array if it's not nullish. We can then use .map(),
// .flatMap(), .reduce() etc. to safely handle the
// nullish case.
const maybe = (value) => value == null ? [] : [value];

const getWeather = (location) => maybe(sensors[location])
  .map(({unit, value}) => unit === 'F' ? toCelsius(value) : value)
  .flatMap(
    (temperature) => maybe(conditions[location])
      .map((condition) => ({temperature, condition}))
  )
  .map(({temperature, condition}) => `${location}: ${temperature}°C, ${condition}`)
  .reduce((_, x) => x, 'Weather conditions not available');

console.log(getWeather('Dartmoor')); // 🪵 'Dartmoor: 10°C, Foggy'
console.log(getWeather('Scotland Yard')); // 🪵 'Scotland Yard: 20°C, Rainy'
console.log(getWeather('Diogenes Club')); // 🪵 'Weather conditions not available'


Did you know that you can use an array tuple in place of a Maybe structure? It elegantly handles empty values using familiar `.map()`, `.flatMap()`, and `.reduce()` methods. I think it's rather neat. Credit goes to @jmsfbs@pixelfed.social for introducing me to the idea.

25.03.2025 21:48 — 👍 0    🔁 0    💬 0    📌 0

Ousterhout argues that general purpose code tends to be simpler. It doesn't need to handle lots of special cases with copious if-statements and control structures. This kind of code handles those cases "ya ain't gonna need" _without any modification_.

24.03.2025 22:07 — 👍 0    🔁 0    💬 0    📌 0
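A minimal sketch of that idea, loosely inspired by Ousterhout's text-editor example (the names here are hypothetical): instead of writing special-purpose backspace and delete-selection operations, one general deleteRange() handles both without modification.

```javascript
// General-purpose: delete any range of characters.
// No special cases, no extra if-statements.
const deleteRange = (text, start, end) =>
  text.slice(0, start) + text.slice(end);

// Backspace is just a one-character range ending at the cursor...
const backspace = (text, cursor) => deleteRange(text, cursor - 1, cursor);

// ...and deleting a selection is the general case, used directly.
console.log(deleteRange('hello world', 5, 11)); // 🪵 'hello'
console.log(backspace('hello', 5)); // 🪵 'hell'
```

The general function stays simple precisely because it doesn't know about cursors or selections; the callers supply that meaning.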

In case you haven't come across it, YAGNI stands for "ya ain't gonna need it." The idea is that we want to avoid over-complicating things to solve for problems we may never encounter. Applied carelessly, you may end up with an over-specialized design.

24.03.2025 22:07 — 👍 0    🔁 0    💬 1    📌 0

I love this quote from John Ousterhout:

> I have found over and over that specialization leads to complexity; I now think that over-specialization may be the single greatest cause of complexity in software.

On the surface, it appears to contradict YAGNI, but not necessarily.

24.03.2025 22:07 — 👍 0    🔁 0    💬 1    📌 0
Screenshot of some JavaScript code showing two variables being swapped using array destructuring. The code reads as follows:

let a = 'left';
let b = 'right';

console.log([a, b]);
// πŸͺ΅ ["left", "right"]

// Swap two variables with array destructuring.
[b, a] = [a, b]
console.log([a, b]);
// πŸͺ΅ ["right", "left"]


It's old news now, but swapping variables with destructuring still blows my mind every time I see it.

23.03.2025 21:21 — 👍 0    🔁 0    💬 0    📌 0

You're right, it's sad that you can't generally assume that people know ?? is available when they use || for setting defaults. Knowledge about new language features takes some time to disperse.

21.03.2025 00:45 — 👍 0    🔁 0    💬 1    📌 0

There's nothing wrong with using || if it suits your use case. As you suggest, if you're dealing with user-supplied data, that's a slightly different use case from one where a developer omits an optional config parameter.

21.03.2025 00:43 — 👍 1    🔁 0    💬 0    📌 0
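A small sketch of that distinction (displayName and pageSize are hypothetical names): for user-supplied text, an empty string often *should* fall back to a default, so || fits; for an omitted config value, ?? preserves legitimate falsy values like 0.

```javascript
// User-supplied data: treat '' as "no name given", so || is fine.
const displayName = (input) => input || 'Anonymous';

// Optional config: 0 is a legitimate value, so use ?? instead.
const pageSize = (config) => config.pageSize ?? 20;

console.log(displayName('')); // 🪵 'Anonymous'
console.log(pageSize({ pageSize: 0 })); // 🪵 0
console.log(pageSize({})); // 🪵 20
```

The operator choice encodes intent: || means "any falsy value counts as missing", ?? means "only null and undefined count as missing".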
Screenshot of JavaScript code illustrating use of the nullish coalescing operator. The code reads as follows:

function doSomething(configParam) {
  const config = configParam || DEFAULT_VAL; // ❌
  // Rest of code
}

// The code above misbehaves if configParam is a falsy
// primitive such as '', 0, or false. For example:
'' || 'Unexpected' // ➡︎ 'Unexpected'
0 || 'Unexpected' // ➡︎ 'Unexpected'
false || 'Unexpected' // ➡︎ 'Unexpected'

// Instead, use the ?? operator
function doSomething(configParam) {
  const config = configParam ?? DEFAULT_VAL; // ✅
  // Rest of code
}


I still see lots of people setting default values in JS using the || operator. This is fine if the values you're dealing with are always objects. But if you deal with numbers, booleans or strings, this can be problematic. In most cases, the ?? operator is a better choice.

20.03.2025 22:26 — 👍 2    🔁 0    💬 1    📌 0
