@keithbostic.bsky.social:
Uber Driver: “Yeah, I like the freedom of the job. I get to set my own hours and nobody tells me what to do.”
Me: “Take the next left.”
Probably because he writes his books in longhand.
(Like lots of other people.)
I used to spend a significant amount of time getting from blank page to first draft and I no longer do. So, it saves me time.
I spend 10 minutes typing up a set of bullet points and notes, get ChatGPT to turn it into a first draft, and I'm doing the first editing pass inside 30 minutes. That's big.
Having someone (including yourself) transcribe the work for the publisher isn't "writing".
blog.paperblanks.com/2015/10/writ...
Good grief.
Neil Gaiman writes first drafts longhand. "Often I use two pens with different coloured ink, so I can tell visually how much I did each day."
Quentin Tarantino: "My ritual is, I never use a typewriter or computer. I just write it all by hand."
There are more.
I'm an author of two technical books. I see this as a perfect encapsulation of writers not understanding there is more than "my kind of writing". Almost all writing is commercial, utilitarian and probably transmitted by email to the intended audience.
AI works well to complement "most writing".
First, that's factually not true: JK Rowling wrote the first couple of Harry Potter books on notepads.
Second, my point is that no matter what some tech gives you as a positive, it's a negative for someone else, and AI generated stuff is the perfect example.
I'm not sure what you mean by "ideas work", so I can't disagree.
If you mean legal briefs or software development, generating chunks of text tailored to a problem description, at the push of a button, has the potential to be a big time saver.
"write appropriate C++ classes for this": that's huge.
The quote was objecting to the speed, specifically.
The argument from Günter Grass (the Nobel-winning novelist) was that handwriting slows him down, forcing deeper engagement.
Every plus is a minus for somebody.
www.wired.com/2000/02/pen-...
Please, man, I'm begging you.
I'm desperate for more time on Bluesky!
While I disagree that the US Copyright Office agrees with you, I agree with your larger point that different jurisdictions have so far decided this differently.
OTOH: Japan says it's OK, so will we simply see AI training move to Japan, so companies can ignore what the EU or US say?
I will not make age an issue of this argument. I am not going to exploit, for my own purposes, my opponent's youth and inexperience.
Then change the law, or put a paywall in front of it to use existing law.
You personally don't get to define "arson" for just yourself and your friends, why should you get to define "theft"?
Curses! Foiled again!
This was only a setback… the world will yet be mine!
So: a human reads your web site, that's good; an AI scans your web site, that's bad. And we're back to "AI bad, human OK".
Unless you're available 24/7 to answer questions, I think we need a better legal structure to work with.
Read what I posted. I said: And honestly, I'm perfectly OK with "AI bad, human OK".
And I am good with that, let's just make the rule.
(OTOH, see Japan & Singapore, who have said copyright has no legal bearing on training AI.)
Was that a common case?
I thought most text training was scanning the web, and if you post a page and a bot scrapes it, there's no theft argument to make.
Specifically, with respect to art, pirate websites could not have played a role?
It's "stealing" (depending on the copyright, but likely the website stole, as did the subsequent user). If you purchase a single copy of the book, ofc, that's no longer the case.
The word "stole" doesn't make sense here.
Is a human, reading a book to learn to be an author, "stealing"?
And we've returned to "AI bad, human OK".
Agreed; and we'll need to build those same redundancies around AI.
And honestly, I'm perfectly OK with "AI bad, human OK".
But let's not pretend it's anything other than "As a society, we believe certain things should only be done by humans."
Imagine a human reads your book, then writes books (which share no copyrightable elements with your book) that people buy because they prefer them to yours. Your books no longer sell. Wouldn't that be fair use?
How do you tease apart the two cases, other than "AI bad, human OK"?
Here's an example where medical screenings that would not have happened before are done because AI is available to do the work. Is that "replacement"?
www.lunit.io/en/company/n...
I can't answer that without you defining what you mean by "doctor".
Anyone with a medical degree? Do radiographers count?
I would grant most medical replacement happens at the edges (AI only "assists" general practitioners and most medical professionals), but replacement happens.
OTOH, isn't that the world we live in right now?
People confabulate, people lie, people hallucinate and generally do crazy stuff.
But nobody says "we need to quit using people to do that task, people get stuff wrong!"
Yeah. And, ummm, not gonna lie, that's the exact case in which we find ourselves, and there have been some big mistakes.
(See the story about AI reading radiology scans, where it turned out the AI was using the type of machine that did the scan to decide whether there was likely a problem in the scan. Oops.)
But as long as my work is sufficiently transformative (that is, I didn't copy your characters or plot, or lift paragraphs), fair use wins.
And that's why courts get involved: to determine if the "source" work was sufficiently changed that fair use applies.
Competition isn't competing with anticipated products; it's competing with a specific product.
Imagine I read your science-fiction book and then write a romance novel incorporating planetary travel and spaceships.
That's competing with you on one level: books don't have perfectly elastic demand.
I agree it's a question, and it's currently being argued loudly in the software development world: how do you train senior engineers when there are no junior engineers?
My personal bet is that AI improves fast enough that we won't need senior software development engineers before it becomes a problem.
There is going to be a ton of litigation over this in the next decade.
My bet is everybody settles because nobody wants to actually lose.