
Fraser

@fraser.bsky.social

CEO @ cheqd.io | Boulderer, backpacker, bike-builder,

48 Followers  |  4 Following  |  21 Posts  |  Joined: 06.03.2023

Latest posts by fraser.bsky.social on Bluesky


The capability is impressive.
But the architecture, governance and risk controls aren’t.

We’re learning quickly that autonomy without containment isn’t innovation.

It’s exposure ⚠️

20.02.2026 08:40 — 👍 0    🔁 0    💬 0    📌 0

As the article puts it:
“OpenClaw represents a genuine breakthrough in autonomous AI agents — and it's absolutely not enterprise-ready.”

That’s the tension ⚖️

20.02.2026 08:40 — 👍 1    🔁 0    💬 1    📌 0

The first had to replace two developer laptops after junior engineers imported malware through unchecked libraries. Given the sensitivity of what they’re building, wiping them wasn’t enough.

The second is closely monitoring OpenClaw as it attempts to break out of its Docker container to gain more freedom.

20.02.2026 08:40 — 👍 1    🔁 0    💬 1    📌 0
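For context on what "containment" means here in practice: it is mostly configuration discipline. Below is a minimal sketch using the Docker SDK for Python, assuming a hypothetical openclaw:latest image; the hardening flags are the point, not the names.

    # Run an untrusted agent with no network, an immutable root
    # filesystem, no Linux capabilities, and hard resource caps.
    # "openclaw:latest" is a hypothetical image name.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "openclaw:latest",                   # hypothetical agent image
        detach=True,
        network_disabled=True,               # no network access at all
        read_only=True,                      # root filesystem is immutable
        cap_drop=["ALL"],                    # drop every Linux capability
        security_opt=["no-new-privileges"],  # block privilege escalation
        pids_limit=128,                      # cap the process count
        mem_limit="512m",                    # cap memory
        tmpfs={"/tmp": "size=64m"},          # writable scratch space only
    )
    print(container.id)

An agent that genuinely needs network or file access would get narrowly scoped mounts and proxied, allowlisted egress instead; the posture stays deny-by-default.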
Preview
OpenClaw: The AI Agent Institutional Investors Need to Understand — But Shouldn't Touch
Since its release in November 2025, OpenClaw, formerly known as Clawdbot and Moltbot, has taken the tech world by storm, with an estimated 300,000 to 400,000 users.

I caught up with two CTOs this week.

Both stories mirror this piece on @openclaw:
www.institutionalinvestor.com/article/ope...

20.02.2026 08:40 — 👍 1    🔁 0    💬 1    📌 0

That’s a much higher bar — and an inevitable one.

19.02.2026 12:00 — 👍 0    🔁 0    💬 0    📌 0

Benchmarks measure intelligence.

Fitness-to-practice frameworks measure authority.

As agents become economic and professional actors, evaluation won’t stop at “does it work?”

It will extend to:

“Is it authorised to operate here?”

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0

Why would autonomous agents operating in the same environments be any different?

We’re likely to see:

• Domain-specific capability assessments
• Credentialing frameworks for agents
• Defined scopes of authorised action
• Ongoing monitoring tied to regulatory standards

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0

As agents move into regulated domains — healthcare, finance, legal services — evaluation won’t just be about performance.

It will be about fitness to practice.

In regulated professions, humans must demonstrate competence, certification, and ongoing compliance.

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0

In other words, you assess performance against real operational impact — not abstract model metrics.

That requires domain expertise.
Clear success criteria.
Evaluation datasets that reflect real-world complexity.

This is the right direction.

But here’s the next step.

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0

What matters is use case–specific evaluation.

If you’re deploying agents in customer service, you measure:

• Customer satisfaction
• First-contact resolution
• Sentiment outcomes

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0
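To make "measure the use case" concrete, here is a toy sketch with entirely made-up ticket data; the fields, scale, and numbers are assumptions, and a real pipeline would pull them from a CRM or contact-centre platform.

    # Toy use case-specific metrics over hypothetical ticket records.
    tickets = [
        {"id": 1, "contacts": 1, "resolved": True,  "csat": 5},
        {"id": 2, "contacts": 3, "resolved": True,  "csat": 2},
        {"id": 3, "contacts": 1, "resolved": False, "csat": 1},
    ]

    resolved = [t for t in tickets if t["resolved"]]
    fcr = sum(t["contacts"] == 1 for t in resolved) / len(resolved)
    csat = sum(t["csat"] for t in tickets) / len(tickets)

    print(f"First-contact resolution: {fcr:.0%}")  # 50%
    print(f"Mean CSAT (1-5 scale): {csat:.1f}")    # 2.7

None of these numbers come from a model benchmark; that is the point.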
Preview
Evaluating AI agents: Real-world lessons from building agentic systems at Amazon | Amazon Web Services
In this post, we present a comprehensive evaluation framework for Amazon agentic AI systems that addresses the complexity of agentic AI applications at Amazon through two core components: a generic evaluation workflow that standardizes assessment procedures across diverse agent implementations, and an agent evaluation library that provides systematic measurements and metrics in Amazon Bedrock AgentCore Evaluations, along with Amazon use case-specific evaluation approaches and metrics.

@awscloud published a thoughtful piece on evaluating AI agents in real-world systems:
aws.amazon.com/blogs/machi...
One point stands out.

Standardised benchmarks aren’t enough.

19.02.2026 12:00 — 👍 0    🔁 0    💬 1    📌 0

When agents become economic actors, they won’t just need access.
They’ll need credentials.

In regulated environments, humans must prove fitness to practice.
Autonomous systems will be no different.

19.02.2026 08:40 — 👍 0    🔁 0    💬 0    📌 0

That’s useful.
But observability doesn’t solve liability.

If agents are making decisions in finance, healthcare, or enterprise operations, the real questions are:

• Who is responsible?
• Who carries liability?
• What authorises this agent to act?

19.02.2026 08:40 — 👍 0    🔁 0    💬 1    📌 0
Preview
Agentic AI Part I: What It Is and Who's Responsible When It Acts (via Passle)
Artificial intelligence tools are rapidly evolving from passive, user-prompted systems into autonomous technologies capable of planning, deciding, and ...

As AI agents move from assistants to actors, the legal reality is catching up.

technologylaw.fkks.com/post/102mip...

There’s a lot of focus right now on observability.
Logs. Traces. Replayability.

19.02.2026 08:40 — 👍 0    🔁 0    💬 1    📌 0
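One way to see the gap between observability and accountability: a trace records what happened, while an accountability log also records who authorised the action and within what scope. The sketch below is a hypothetical schema, not any product's format.

    # Hypothetical tamper-evident record of an authorised agent action.
    import hashlib, json, time
    from dataclasses import dataclass, asdict, field

    @dataclass
    class AgentActionRecord:
        agent_id: str
        action: str
        authorised_by: str   # the principal who delegated authority
        scope: str           # the boundary the action must stay inside
        timestamp: float = field(default_factory=time.time)
        prev_hash: str = ""  # chains records so tampering is detectable

        def digest(self) -> str:
            payload = json.dumps(asdict(self), sort_keys=True).encode()
            return hashlib.sha256(payload).hexdigest()

    rec = AgentActionRecord(
        agent_id="agent-042",
        action="issue_refund",
        authorised_by="ops-manager@example.com",
        scope="refunds<=100GBP",
    )
    print(rec.digest())  # stored with the record; recomputed on replay

Logs and traces answer "what did it do?"; fields like authorised_by and scope are what start to answer "who carries liability?".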

We’re entering the phase where:

Platforms don’t get disrupted by AI.
They absorb AI.

Standalone AI wrappers should be nervous. The real consolidation hasn’t even started yet.

18.02.2026 11:45 — 👍 0    🔁 0    💬 0    📌 0

One of our team switched from app front-end generators like @Base44 and @Replit to @figma.
The UI quality was drastically better.

Why?
Training data.

Figma sits on a massive corpus of real production design work.

Foundation model + proprietary data + distribution = dominance.

18.02.2026 11:45 — 👍 0    🔁 0    💬 1    📌 0
Preview
Figma partners with Anthropic to turn AI-generated code into editable designs
Figma has been caught in the software stock sell-off that has sent names like Salesforce, ServiceNow and Intuit plummeting.

Figma just integrated Anthropic’s AI to turn designs into working code.

Source:
www.cnbc.com/2026/02/17/...

On the surface, this looks like another “AI feature” announcement. It’s not.

18.02.2026 11:45 — 👍 0    🔁 0    💬 1    📌 0

As agents scale, the real questions are:
→ Who authorised this action?
→ What are the boundaries?
→ Can execution be verified?
→ Who is accountable?

Smarter agents are inevitable. Secure agents are not.

That’s where the next phase of AI will be decided.

17.02.2026 09:37 — 👍 0    🔁 0    💬 0    📌 0

This is what happens when AI agents gain real authority.

Agents now:
• Access files
• Control browsers
• Store credentials
• Execute tasks

That makes them powerful. It also makes them high-value targets.

The risk isn't hallucination.
It's compromised autonomy.

17.02.2026 09:37 — 👍 1    🔁 0    💬 1    📌 0
Preview
Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens
Infostealer malware stole OpenClaw AI agent files including tokens and keys, while exposed instances and malicious skills expand security risks.

Two headlines. One signal.

OpenAI hires the creator of OpenClaw AI
techxplore.com/news/2026-0...

At the same time:

Infostealer targeting OpenClaw AI users
thehackernews.com/2026/02/inf...

17.02.2026 09:37 — 👍 1    🔁 0    💬 1    📌 0

Hey there Mr. Blue [sky]
We're so pleased to be with you
Look around see what you do
Everybody smiles at you 🎶

10.05.2023 13:03 — 👍 0    🔁 0    💬 0    📌 0
