
Steve Smith

@stevesmithtech.bsky.social

Technology leadership | Books leanpub.com/u/stevesmith | Blog stevesmith.tech | Work equalexperts.com | Conf agileonthebeach.com

1,295 Followers  |  26 Following  |  267 Posts  |  Joined: 29.07.2023

Latest posts by stevesmithtech.bsky.social on Bluesky

Macron's remarks are notable. Some quotes: "We have been incredibly naive in entrusting our democratic space to social networks that are controlled either by large American entrepreneurs or large Chinese companies, whose interests are not at all the survival or proper functioning of our democracies."

04.10.2025 11:57 — 👍 7015    🔁 2264    💬 99    📌 160

OpenAI employees are very excited about how well their new AI tool can create fake videos of people doing crimes and have definitely thought through all the implications of this

30.09.2025 23:24 — 👍 10810    🔁 3293    💬 220    📌 597

Betteridge's Law says hello

27.09.2025 15:59 — 👍 3    🔁 0    💬 0    📌 0
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:

    Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated. 

    AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligence (as mentioned above) AI technologies should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.

    Accountability: only humans have moral and legal agency and AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutes, and governments. AI cannot be granted legal personhood or "rights".

    Life-and-death decisions: AI systems must never be allowed to make life or death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare or judicial decisions.


    Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
    Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance. 

    Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.  

    No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized. 

    No Human Devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable. 

    Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.

    No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.


I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities.

coexistence.global

23.09.2025 17:33 — 👍 282    🔁 110    💬 7    📌 13

for all people mock it, wikipedia is genuinely one of the wonders of the modern world

21.09.2025 19:54 — 👍 9133    🔁 1029    💬 152    📌 111
There isn't an AI bubble—there are three
Here's how to capitalize on them.

This piece argues we're in not one, but three AI bubbles:

1. a financial bubble with inflated valuations,

2. an infrastructure bubble with overbuilt data centers,

3. a hype bubble where AI can’t meet its promises.

Honestly, all three seem highly likely to me.

21.09.2025 18:47 — 👍 530    🔁 124    💬 17    📌 12

When I grow up, I want to be like @meredithmeredith.bsky.social

19.09.2025 09:56 — 👍 13    🔁 1    💬 1    📌 0

at a time when folding for fascism and aligning with authoritarianism is the norm, distinguish yourself by having integrity and convictions and defending them

18.09.2025 11:13 — 👍 120    🔁 32    💬 1    📌 1

Owning code in production changes how we write it. 'You Build It, You Run It' isn't asking too much of developers, imo. It's a great way to build reliable software.
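
As a hedged sketch of how production ownership changes code (the payments example and all names below are illustrative, not from the post), a team that carries the pager tends to write code that logs context, measures latency, and fails safely:

```python
import logging
import time

logger = logging.getLogger("payments")

def charge_customer(gateway, customer_id: str, amount_cents: int) -> bool:
    """Code written by the team that runs it: it logs context, times the
    call, and fails safely, because the author is also the on-call."""
    start = time.monotonic()
    try:
        gateway.charge(customer_id, amount_cents)  # hypothetical gateway client
        return True
    except TimeoutError:
        # The on-call developer sees exactly what to retry, and for whom.
        logger.error("charge timed out customer=%s amount_cents=%s",
                     customer_id, amount_cents)
        return False
    finally:
        logger.info("charge latency_ms=%.0f", (time.monotonic() - start) * 1000)
```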

17.09.2025 10:36 — 👍 30    🔁 3    💬 4    📌 0

youtu.be/SKh9WaKnihU?...

16.09.2025 19:11 — 👍 0    🔁 0    💬 0    📌 0

Developers vs. 'You Build It, You Run It' | PART 3 | @stevesmithtech.bsky.social

📅 TOMORROW NIGHT @ 7PM (UK)

Subscribe & hit the bell to receive notifications whenever we release a video 🔔➡️ youtube.com/@ModernSoftw...

16.09.2025 11:17 — 👍 3    🔁 2    💬 0    📌 0
Returning to Church Won't Save Us from Nihilism
Engaging in ritual for ritual's sake only deepens nihilism.

New article by me for @mitpress.bsky.social on the rise of "Nihilistic Violent Extremists" and why David Brooks is wrong to think that Church and "believing in belief" is the solution to nihilism rather than recognizing such empty rituals as the cause...
thereader.mitpress.mit.edu/returning-to...

15.09.2025 14:46 — 👍 32    🔁 15    💬 2    📌 3
The Washington Post Fired Me — But My Voice Will Not Be Silenced.
I spoke out against hatred and violence in America — and it cost me my job.

Some personal news:

I've been fired from the Washington Post in the aftermath of the Charlie Kirk shooting.

Thread incoming.

substack.com/@karenattiah...

15.09.2025 11:07 — 👍 45494    🔁 15888    💬 2521    📌 2182

Feels like 'extend the bubble' advertising

15.09.2025 16:19 — 👍 1    🔁 0    💬 0    📌 0
What are the Boris Files and what do they reveal about former PM's conduct?
Leaked material from Johnson's private office raises serious questions relating to his time in No 10 and since he resigned.

EXCLUSIVE - Today the Guardian is publishing the Boris Files

A trove of leaked data from the office of Boris Johnson.

It reveals how Johnson is using the relationships forged in the UK's highest elected office to facilitate his personal enrichment.

www.theguardian.com/uk-news/2025...

08.09.2025 18:22 — 👍 1804    🔁 924    💬 136    📌 166
An illustration of me, and the headline: "AI agents are coming for your privacy, warns Meredith Whittaker. The Signal Foundation's president worries they will also blunt competition and undermine cyber-security"

To put it bluntly, the path currently being taken towards agentic AI leads to an elimination of privacy and security at the application layer. It will not be possible for apps like Signal—the messaging app whose foundation I run—to continue to provide strong privacy guarantees, built on robust and openly validated encryption, if device-makers and OS developers insist on puncturing the metaphoric blood-brain barrier between apps and the OS. Feeding your sensitive Signal messages into an undifferentiated data slurry connected to cloud servers in service of their AI-agent aspirations is a dangerous abdication of responsibility.

Happily, it's not too late. There is much that can still be done, particularly when it comes to protecting the sanctity of private data. What's needed is a fundamental shift in how we approach the development and deployment of AI agents. First, privacy must be the default, and control must remain in the hands of application developers exercising agency on behalf of their users. Developers need the ability to designate applications as "sensitive" and mark them as off-limits to agents, at the OS level and otherwise. This cannot be a convoluted workaround buried in settings; it must be a straightforward, well-documented mechanism (similar to Global Privacy Control) that blocks an agent from accessing our data or taking actions within an app.

Second, radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it.
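
The "sensitive" flag described above is a proposal, not a shipped API; a minimal sketch of what a default-deny, developer-controlled registry could look like (every name below is hypothetical, invented to illustrate the idea):

```python
from enum import Enum

class AgentAccess(Enum):
    ALLOWED = "allowed"
    SENSITIVE = "sensitive"  # app is off-limits to AI agents

# Hypothetical OS-level registry: apps declare a policy, agents must honour it.
_policy: dict[str, AgentAccess] = {}

def declare_policy(app_id: str, access: AgentAccess) -> None:
    _policy[app_id] = access

def agent_may_access(app_id: str) -> bool:
    # Privacy as the default: apps that never opted in are treated as sensitive.
    return _policy.get(app_id, AgentAccess.SENSITIVE) is AgentAccess.ALLOWED

declare_policy("org.signal.messenger", AgentAccess.SENSITIVE)
assert not agent_may_access("org.signal.messenger")
```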


📣 NEW -- In The Economist, discussing the privacy perils of AI agents and what AI companies and operating systems need to do--NOW--to protect Signal and much else!

www.economist.com/by-invitatio...

09.09.2025 11:44 — 👍 880    🔁 283    💬 11    📌 31

LLM as Pair?
Could an LLM-human relationship be like pair programming? And if it were .. ?

https://ronjeffries.com/articles/-w025/y/v/

05.09.2025 15:52 — 👍 6    🔁 4    💬 0    📌 0

enshittification | noun | when a digital platform is made worse for users, in order to increase profits

03.09.2025 20:22 — 👍 29358    🔁 8652    💬 511    📌 660

Not as funny as Compaq and HP merging, and Compaq staff receiving a @nonhp.com email address

03.09.2025 20:08 — 👍 9    🔁 0    💬 0    📌 0

basically intrusive surveillance of our intimate spaces... data so lucrative the AI industry will collapse without it

03.09.2025 11:10 — 👍 48    🔁 16    💬 2    📌 0

The president is sending the military to control American cities and if you're a reporter who's framing that illegal power grab as "pushing the boundaries of constitutionality" or "acting boldly to fight crime" or whatever, please go find another line of work where you won't get us all killed.

02.09.2025 20:56 — 👍 33948    🔁 9360    💬 481    📌 306

The TRUTH Behind Your 'You Build It, You Run It' Excuses | PART 2 | @stevesmithtech.bsky.social | TOMORROW NIGHT @ 7PM (UK)

Subscribe & hit the notifications bell so you NEVER miss an upload! 🔔 ➡️ youtube.com/@ModernSoftw...

02.09.2025 14:17 — 👍 5    🔁 3    💬 0    📌 0

I want as many developers as possible to experience the joy of refactoring their code when they have a good test suite by their side.

It's infectious. You'd want that feeling all the time.
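
As a toy illustration of that safety net (example mine, not the author's), the test below pins the observable behaviour, so the function body can be reshaped without fear:

```python
def total_price(items):
    # Before: one dense expression.
    #   return sum(i["price"] * i["qty"] for i in items if i["qty"] > 0)
    # After refactoring into intent-revealing steps; the test never changed.
    valid_items = [i for i in items if i["qty"] > 0]
    return sum(i["price"] * i["qty"] for i in valid_items)

def test_total_price():
    items = [{"price": 5, "qty": 2}, {"price": 3, "qty": 0}]
    assert total_price(items) == 10

test_total_price()  # passes before and after the refactor
```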

01.09.2025 18:07 — 👍 61    🔁 13    💬 0    📌 1

Generating tests based on your implementation is like setting exam questions based on your answers.
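
A sketch of the point (example mine): a test generated from a buggy implementation encodes the bug and passes, while a test written from the requirement catches it.

```python
def apply_discount(price: float, percent: float) -> float:
    return price - percent  # bug: subtracts points instead of a percentage

# A test derived from the implementation restates the bug, so it passes:
assert apply_discount(100, 10) == 90

# A test derived from the requirement ("10% off 200 is 180") catches it:
try:
    assert apply_discount(200, 10) == 180
except AssertionError:
    print("requirement-based test fails, as it should: got", apply_discount(200, 10))
```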

29.08.2025 06:56 — 👍 47    🔁 14    💬 6    📌 2

It was good! Thank you

28.08.2025 06:32 — 👍 0    🔁 0    💬 0    📌 0
Technical Coaching: From One Team to Many Teams
YouTube video by Emily Bache

How do you spread technical coaching to more teams in an organization? New video today: youtu.be/u-oRiXDKIrE

25.08.2025 14:20 — 👍 8    🔁 2    💬 0    📌 1
Why should I write better when a machine can do it for me?
Because actually no one can do it for you, because your voice is unique among all the people on earth. Siri never petted a horse's neck. Alexa has never been ghosted by the captain of the football team. But you have lived, your heart is beating, you have suffered, and you have something important to say. It's a human's job, to use words, and whatever job you give to a machine, that part of your brain goes dark. Maybe it's worth it when it comes to remembering phone numbers and directions, but when that part of your brain that uses words goes dark, that's a vast area that's very close to your soul. Don't let some internet platform convince you that what you have to say and create isn't worthwhile. Words are the echo of your soul. Honing that echo matters.


this iconic advertising copywriter named Kathy Hepinstall Parks died over the weekend and I wanted to share something from her website I thought Bluesky would like

22.08.2025 14:20 — 👍 19015    🔁 8662    💬 37    📌 356

X: When deploying in test, things break for other teams.
Y: The problem is we only have shared test environments.

The problem is not shared test environments. Production is messy, anyway. If we cannot handle messiness in test, production will be downright chaos.

It is a skills and practice problem.
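
One such practice, sketched under my own assumptions rather than the post's: give each test run a unique namespace, so teams can share one environment without trampling each other's data.

```python
import uuid

def make_run_namespace(team: str) -> str:
    """A unique namespace per test run, so a shared environment stays
    shareable: other teams' runs never touch these resources."""
    return f"{team}-{uuid.uuid4().hex[:8]}"

namespace = make_run_namespace("payments")
# Create queues, tables, or topics under this prefix, e.g. f"{namespace}-orders",
# and tear them down after the run; collisions with other teams become vanishingly rare.
print(namespace)
```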

21.08.2025 08:19 — 👍 7    🔁 2    💬 1    📌 0

Why 'You Build It, You Run It' Works (Even If Developers Hate It) | PART 1 | @stevesmithtech.bsky.social

AVAILABLE NOW 📽️

youtu.be/y_W3v-KMPz0

20.08.2025 18:00 — 👍 5    🔁 2    💬 0    📌 0
