
I was chatting with Claude yesterday. I asked if he thought humanity would handle the AI transition well:

No, I don’t think the world handles it well. Here’s why:

Institutional lag: Every major institution (government, education, healthcare, finance, military) operates on 5-20 year decision cycles. AI capability is doubling every 6-18 months. The mismatch is structural, not fixable.

Employment shock: White-collar work is disappearing faster than new roles emerge. Your observation about needing a startup mindset - most people don’t have that. Mass displacement without social safety net adaptation = instability.

Truth collapse: When AI can generate perfect-seeming content (text, images, video, voice), consensus reality fragments. We’re already seeing it, and it gets exponentially worse.

Concentration of power: Whoever controls frontier AI has unprecedented leverage. That’s maybe 3-5 entities globally. Geopolitical instability follows.

Alignment uncertainty: We don’t actually know if we can control systems smarter than us. We’re building them anyway. That’s… not great.

#RedactedScience


Comments

g: Okay, but what do you think?