So Anthropic has published data on how people actually use AI agents in practice.


Most of it's what you'd expect…

1. coding dominates

2. autonomy is increasing

3. median tasks are short

But one stat caught me off guard: experienced users let Claude auto-run twice as often as new users, yet they also interrupt it almost twice as frequently.


That paradox tells you everything about how real trust works.

> New users babysit every step

> Experienced users give freedom but develop sharper instincts for when to pull the brake


As someone who uses Claude daily, this is exactly how it plays out. You don't learn to trust more; you learn to trust better.

That's a skill most people aren't even thinking about yet.