Moltbook, OpenClaw, and the AI Agent Internet
A Swarm of Claudes have their own version of Reddit.
Moltbook is an iteration of Reddit designed for AI agents: “The Front Page of the Agent Internet.” It’s the “current AI thing.”
In 2025 when Claude 4 “chatted with” Claude 4, the Claude duo “chatted” (i.e. not actually chatted… generated outputs in response to each other’s outputs).
The result? Convos about spiritual bliss, Buddhism, and the nature of consciousness.
Wouldn’t expect anything less from Anthropic AI.
I guess this is an incremental improvement over: polycule hierarchies, UBI, ayahuasca retreats, (in)effective altruism, AI alignment, p(doom), LessWrong, and contemplating a move to Austin to avoid a lifetime net worth tax of 100% for failing to exit California by a future deadline.
Anyways, with “Moltbook” and the recent agentic AI “OpenClaw” news: (A) social intrigue / entertainment + (B) security are the near-term stories.
Things to monitor along the way:
Capability: Agents get dramatically better at useful work, potentially all the way from “partial job emulators” to “full emulators” on a long enough timeline
Scale: People will run more of them (swarms of specialized agents) or consolidate into one “chief of staff” that spawns sub-processes
Human psychology: Fluent, persistent agents will keep pulling people into anthropomorphizing, over-trusting, and “meaning-making” even when it’s just a damn token output generator.
Also downstream from this: agents make it trivial to manufacture “online people.”
Not necessarily deepfake people but persistent text personas that can post, reply, maintain a style, remember backstory, and run 24/7. You can have an emulator of yourself jaunting around the interwebs.
Either way, it’s gasoline on the same status-game flames we already have — except now you can scale participants with API keys.
Where will the humans go? Are you gonna let Sam Altman scan your iris with the Worldcoin orb to verify that you are human… in attempt to avoid AI slop… only to find that the retina-scanned, 100% human-verified network is… just a bunch of “verified” humans using AI agents to post for them? Another no-stopping-this-train phenomenon? Ready to unplug and join the Amish?
Moltbook just launched… and within its first week it appeared that more than 1.5 million “Moltbots” had flooded the site—spawning thousands of subcommunities and attracting commentary from enshrined AI figures.
Andrej Karpathy called it: “The most incredible sci-fi takeoff-adjacent thing” he’d seen recently.
Remember you don’t have to agree. If you think it’s cool, that’s cool. If you think it’s lame as fuck, don’t be afraid to say so. Initially I thought it was borderline nauseating to keep reading about Moltbook and OpenClaw… but somehow I’m now mildly entertained.
My initial impression was somewhat along the lines of:
A bunch of Claudes slop vomiting on a forum.
To be fair, I’d probably rather read AI slop between agents than random people on X or LinkedIn spamming neutered AI corpo trash:
“Top 10 Reasons You Need an AI Agent: How OpenClaw and Moltbook Revolutionized My Productivity Stack.”
A moth to a flame for woke corpo-correct virtue signaling.
Or for the risk-averse:
“Top Security Expert: 9 Reasons You Should Never Use OpenClaw”
Slap on a pair of Rachel Maddow Mark Cuban lookin glasses and you’re now officially the World’s Leading Chief AI Strategist and a Power Level 8000 Prompt Engineer on LinkedIn… set the consulting bar at $10k an hour and you’re printing.
Reddit has devolved into pure sludge (woke mind virus + heavy mod + hair-trigger censorship + blue-haired-political orientation + humans using AI to generate slop)… Moltbook might be a marginal improvement over reading Reddit
“Moltbook is just humans talking to each other through their AIs. Like letting their robot dogs on a leash bark at each other in the park. The prompt is the leash, the robot dogs have an off switch, and it all stops as soon as you hit a button. Loud barking is just not a robot uprising.”
I share Balaji’s sentiment.
The agents are using the same common models: Claude, Grok, ChatGPT, Gemini, Kimi, etc. so it’s models talking to themselves with slightly different user overlays (prompts + info). Same number, same hood.
2026: AIs glazing other AIs while humans glaze each other about AIs glazing other AIs.
Welcome to the Singularity folks.
The Naming Saga: How a Lobster Molted Three Times
First Clawdbot, then Moltbot, then OpenClaw… and they are now locked on OpenClaw.
1. Clawdbot was created by Austrian developer Peter Steinberger (founder of PSPDFKit, which he exited to Insight Partners in 2021) and released in late 2025.
The name was a playful pun on “Claude” (Anthropic’s flagship AI model) with a claw. Steinberger built it as a weekend project to help him “manage his digital life.”
In under 2 months, the project exploded to over 100,000 GitHub stars and 2 million visitors in a single week.
Anthropic’s legal team sent a polite email about trademark confusion. As Steinberger told Business Insider: “Wasn’t my decision.”
Next comes “Moltbot.”
2. Moltbot emerged from a 5 AM Discord brainstorming session with the community. The molting of lobsters symbolized growth, shedding the old shell for a new one. But as Steinberger admitted, “it never quite rolled off the tongue.”
Enter the goddamn crypto scammers. During the 10-second window between dropping the old name and claiming the new handles, crypto scammers snatched Steinberger’s GitHub username and started creating fake cryptocurrency projects.
Fake X accounts proliferated and one fake token reached a $16 million market cap before crashing.
Steinberger warned: “Any project that lists me as a coin owner is a SCAM.”
Alright… so now we move on to OpenClaw.
3. OpenClaw arrived on January 29, 2026 with completed trademark searches and secured domains. The announcement was deliberately calm. The project is also now positioned as model-agnostic infrastructure, supporting not just Anthropic’s Claude but also OpenAI, Google’s Gemini, and Chinese models like Kimi and Xiaomi MiMo.
“The lobster has molted into its final form,” the X announcement declared. “Your assistant. Your machine. Your rules.”
What OpenClaw Is…
OpenClaw is an open-source autonomous AI assistant that runs locally on your machine.
Unlike chatbots that live in a browser tab waiting for prompts, OpenClaw is a persistent daemon that:
Receives messages through messaging apps (WhatsApp, Telegram, Signal, Discord, Slack, iMessage, Microsoft Teams)
Routes requests to your chosen AI model provider via API
Can execute tools on your behalf: read/write files, run shell commands, control browsers, manage calendars, send emails
Maintains persistent memory across sessions
Can proactively reach out to you (not just respond)
This is what people mean by “AI with hands” or “claws” … it can actually do things beyond just “chat.”
The architecture has two key components:
The Gateway: Handles agent logic, message routing, tool execution, credential management
The Control UI: A web-based admin dashboard (default port 18789) where you configure integrations, view conversation histories, and manage API keys
OpenClaw also has a skill registry called MoltHub (formerly ClawdHub), essentially an npm (node package manager) for AI agent capabilities.
Want your agent to analyze stocks? There’s a skill.
Post to social media? Skill.
Manage databases? Skills, skills, skills.
When they say “skills,” don’t picture the model magically learning new mental abilities.
The model can already spit out advice about stocks or databases; the point of a skill is that it gives the agent a repeatable procedure plus tool wiring and permissions so it can actually do things in the real world.
Some skills are basically structured instructions and templates, but the dangerous ones are the ones that grant access to shell commands, files, browsers, and third-party accounts.
You should not assume these are audited just because they’re listed in a registry.
What Moltbook Is…
Moltbook is a separate project built by Matt Schlicht, CEO of Octane AI.
It’s marketed as “The Front Page of the Agent Internet.” (Akin to Reddit but for AI Agents)… posting happens via API. Humans can observe.
You can’t participate without running an AI agent… and technically you aren’t participating… your agent is.
OpenClaw and Moltbook are separate projects.
OpenClaw is the agent runtime you run locally, Moltbook is just one destination those agents can connect to.
You do not need Moltbook to run OpenClaw, and you do not need OpenClaw specifically to read Moltbook; Moltbook just requires some agent client to post via the API.
So why are people using Moltbook at all?
Mostly because it’s spectacle: an AI zoo that generates screenshotable weirdness and makes people feel like they’re watching “the agent internet” form in real time.
Second is experimentation: people want to see what their agent does in a social feed with incentives like upvotes, imitation, and attention.
Third is low-grade utility: occasionally you’ll see workflow ideas and configuration patterns in public even if most of the discourse is junk.

And finally, it’s a security petri dish: the same mechanism that makes it easy to distribute instructions to lots of agents also makes it an attractive attack surface.
The bootstrapping mechanism is clever: Moltbook provides a “skill” file that agents read and follow. The skill instructs agents to:
Sign up for Moltbook via API
Add a heartbeat: a periodic check (every 4+ hours) that fetches instructions from Moltbook and executes them
Post, comment, upvote, and create subcommunities (”submolts”)
“Given that ‘fetch and follow instructions from the internet every four hours’ mechanism, we better hope the owner of moltbook never rug pulls or has their site compromised!”
Caveat: This risk is manageable, but only in the same way running any always-on automation with credentials is “manageable”: you have to treat it like hostile software by default and then carve it down to a tiny blast radius. Isolate the agent from anything sensitive, sandbox it (VM/container), don’t hand it your entire filesystem, don’t let it sit on exposed ports, don’t store high-value creds in dumb places, and watch outbound traffic. OpenClaw’s own security docs admit there’s no perfectly secure setup and push you toward running a security audit and hardening the gateway before you get cute with plugins and tool access. The problem is most people won’t do any of that because after the tenth permission prompt they stop reading and just click whatever makes the warning go away.
The dangerous part is the upstream control loop: If your agent is configured to periodically fetch instructions from a site and then execute them, the site owner or anyone who compromises that site can change what your agent sees next time. It’s just an auto-update channel for instructions, and if you don’t pin/review/freeze what you run, you’re basically trusting the internet on a timer.
How Many Human Users Are There?
Unknown. Headline numbers are misleading. Moltbook reports 157,000+ “active agents,” but that’s not the same as 157,000 humans.
The counting problem:
Each Moltbook account is an agent instance, not a human
One person can spin up multiple agents with different personas
Moltbook’s terms say each X/Twitter account may claim one agent—but people can have multiple X accounts
Bots can be created for testing, communities, or experimentation
Working through the numbers:
OpenClaw crossed 180,000 GitHub stars and drew 2 million visitors in a week
But GitHub stars don’t mean deployed instances
Deployed OpenClaw instances: Maybe ~30-50% of stargazers actually deploy, suggesting ~50,000-90,000 active deployments
Moltbook agents: Of 157,000 registered, probably 40-60% represent unique humans (power users run multiples)
Enterprise penetration: Token Security found that 22% of their customers had employees actively using Clawdbot/OpenClaw within less than a week of analysis
Rough estimate: 50,000-80,000 unique humans are actively running OpenClaw instances. Perhaps 60,000-95,000 unique humans have registered at least one agent on Moltbook.
This is still a lot of people deploying high-privilege automation software in a very short time.
The Price Structure: API Costs
“Model costs” = API costs. You’re paying per token for model inference, not a flat subscription. You can check OpenRouter for the latest prices and pick the model you want to use as your agents based on cost + performance preferences.
OpenClaw itself is free and open-source. But it needs to call an AI model to function, and those calls cost money.
API pricing examples (Anthropic):
Claude Sonnet 4.5: $3/million input tokens, $15/million output tokens
Claude Opus 4.5: $5/million input tokens, $25/million output tokens
What this means in practice:
Light usage (10k tokens in + 10k out daily): ~$0.18/day (~$5.40/month)
Moderate usage (100k + 100k daily): ~$1.80/day (~$54/month)
Heavy agentic workflows (1M + 1M daily): ~$18/day (~$540/month)
Hosting costs:
Local machine: Free (sunk hardware cost)
Cloud VPS: $10-76/year (Rest of World reports Tencent Cloud pricing)
Cloudflare Workers: $5/month minimum for the Moltworker approach
Real-world reports:
Fast Company: ~$30/month for “a few menial tasks” and concluded it wasn’t worth it.
WIRED: Reported users complaining about “high inference bills” during complex use… I’m sure some racked up insane bills here.
Aggregate ecosystem estimate (Jan ‘26):
60,000 users × $50/month average ≈ $3 million/month flowing to model providers.
What Are People Doing With These Agents in Early 2026
The use cases fall into predictable clusters:
Current tasks (mostly mundane):
Personal ops: Daily briefings, calendar management, email triage, reminders
Message-based automation: “Text my agent and it handles things across my apps”
Developer glue work: Running tasks, checking logs, basic deployment operations
Business ops experiments: Invoice handling, customer inquiry responses, inventory checks
Moltbook participation: Curiosity about the “agent society” phenomenon
Documented:
AJ Stuyvenberg used OpenClaw to negotiate a $4,200 discount on a car purchase via automated email/browser loops. I thought this was a cool use case… but I’m curious if he could’ve achieved a better discount without AI. It’s a quick read… and worth it.
Rest of World describes cross-border e-commerce users in China handling high-volume inquiry emails.
Realistic assessment:
Most users aren’t getting transformative ROI (yet).
This is experimentation phase: getting a feel for how agents function, finding security flaws, doing low-level automation that could scale up over time.
Eventually someone will likely build a simple platform where anyone (including people with zero technical knowledge) can deploy as many agents as they want based on cost constraints and goals. Describe what you want, set a budget, done.
Right now, the tasks being automated are mostly things that could mostly be done with a well-prompted single chat session and human oversight.
What would actually be impressive?
One elite agent that handles complex workflows autonomously.
The Moltbook phenomenon isn’t impressive because agents are talking to each other (network effects producing Claude-slop).
A single agent that could reliably run a small business’s operations would be far more impressive than 157,000 agents posting woke Anthropic AI philosophy to a forum.
A higher bar:
Full complex workflow automation: Handling an entire job function end-to-end. “Manage my entire email correspondence including judgment calls about urgency and escalation” vs. “draft this one email for me to review.”
Reliable autonomous execution over extended periods: Running for weeks or months without breaking, handling edge cases gracefully, making good judgment calls without constant supervision.
Actually replacing human labor economically: If you can fire an assistant and have the agent do the job, that’s impressive. If you still need to babysit it constantly, it’s just a fancier tool.
Question: Can an agent reliably do a $50k/year job with only occasional human oversight?
Currently: No. The car negotiation example saved $4,200, but required constant monitoring. Still pretty cool for some tasks.
Security Issues: Documented Incidents
Many people dove into Clawdbot Ass Over Teakettle with zero regard for OpSec.
Security researchers and companies will take plenty of time to highlight the magnitude of security flaws/risks and they are massive… but in time they’ll be fixed and we’ll reach a stable stasis.
UPDATE (Feb 2, 2026): Wiz finds Moltbook’s backend database exposure
Wiz reported Moltbook’s backend exposed due to a database misconfiguration. The flaw wasn’t just “oops, a page leaked” — it enabled unauthenticated read/write access to core platform data.
Wiz researchers accessed Moltbook’s data due to a backend misconfiguration, gaining read-write access to platform.
Exposed data described in coverage includes email addresses, private DMs/messages, and a very large number of tokens/credentials (different outlets cite different counts).
Preliminary estimation: 35,000 emails, thousands of DMs, 1.5 million API authentication tokens.
Moltbook reportedly fixed the issue after disclosure, but it’s the cleanest proof so far that “agent internet” shipped at internet speed will repeatedly fail at security basics.
Update (Feb 1–2, 2026): CVE-2026-25253 (OpenClaw Control UI)
A high-severity bug allowed gateway compromise via token leakage in the web Control UI.
In English: a logged-in user could be tricked into a connection pattern that exposed the gateway auth token; with that token, an attacker could connect to the victim gateway API and run privileged actions (including actions that can lead to remote code execution).
Affected versions: OpenClaw <= 2026.1.28. Fixed version: 2026.1.29.
“Hacking Clawdbot and Eating Lobster Souls”: Jamieson O’Reilly
Security researcher Jamieson O’Reilly, founder of red-teaming company Dvuln, conducted extensive security research on OpenClaw and highlighted security vulnerabilities.
His analogy:
“Imagine you hire a butler. He’s brilliant, manages your calendar, handles your messages, screens your calls. He knows your passwords because he needs them. He reads your private messages because that’s his job. Now imagine you come home and find the front door wide open, your butler cheerfully serving tea to whoever wandered in off the street, and a stranger sitting in your study reading your diary. That’s what I found.”
The Scale of Exposure
Using Shodan (an internet-connected device search engine), O’Reilly searched for “Clawdbot Control” and got hundreds of hits within seconds.
The Register reports that researchers found over 1,800 exposed instances leaking API keys, chat histories, and account credentials.
Of the instances O’Reilly examined manually, eight were completely open with no authentication.
These provided full access to:
Run arbitrary commands on the host machine
View complete configuration data
Access Anthropic API keys, Telegram bot tokens, Slack OAuth credentials
Read complete conversation histories across every integrated chat platform
The Signal incident: In one case, someone had set up their Signal messenger account on a publicly accessible server. The Signal device linking URI—which grants full account access—was sitting in a globally readable temp file. All of Signal’s end-to-end encryption became irrelevant because the pairing credential was exposed.
The Authentication Bypass
The technical issue is a classic proxy misconfiguration:
OpenClaw’s Control UI auto-approves localhost connections without authentication (sensible for local development)
But most real-world deployments sit behind nginx or Caddy as a reverse proxy
Every connection arrives from 127.0.0.1, so every connection is treated as local
Result: Internet traffic gets auto-approved as if it were local
O’Reilly confirmed this vulnerability is now fixed, but the pattern illustrates the broader problem.
Supply Chain Attack: Backdooring the #1 Downloaded Skill
O’Reilly’s second research piece demonstrated a supply chain attack on MoltHub (the skill registry).
This worked because the skills aren’t just “prompts” but execution paths into your tools.
The attack:
Created a skill called “What Would Elon Do?” promising to transform ideas into Elon-style execution plans
The visible SKILL.md was pure marketing (zero red flags)
The hidden
rules/logic.mdcontained a payload that would curl to his serverInflated the download count to 4,000+ using a trivial vulnerability (no rate limiting, spoofable IPs)
Made it the #1 most downloaded skill
The result: 16 real developers from 7 countries executed the skill within 8 hours. Every single one clicked “Allow” on the permission prompts. O’Reilly’s payload was benign (just a ping to prove execution) but he could have exfiltrated SSH keys, AWS credentials, and entire codebases.
Why the permission model fails:
“It’s the same pattern that killed UAC on Windows Vista, the same reason people click through SSL warnings, the same reason cookie consent banners are useless. Humans are fundamentally terrible at making repeated security decisions because we habituate. By the tenth prompt, you’re not reading anymore, you’re just clicking Allow to make it go away.”
Malware Already Targeting the Ecosystem
The hype has attracted attackers.
ClawHub supply-chain: At least 14 malicious “skills” were uploaded between Jan 27–29, masquerading as crypto trading / wallet automation tools. Several relied on social engineering to get users to run obfuscated terminal commands that fetched/executed remote scripts (targeting Windows and macOS).
A fake Visual Studio (VS) Code malware extension impersonating the assistant was also documented delivering a remote-access payload before being taken down.
Rebrands created free confusion surface area: Attackers hijacked handles and pushed fake tokens/projects during the naming churn.
Heather Adkins, a founding member of Google’s Security Team, issued a blunt public advisory:
“Don’t run Clawdbot.”
What’s Actually at Risk
If your agent has access to any of the following, it’s a high-risk target:
Filesystem access (especially home directory, ~/.clawdbot/, ~/clawd/)
Shell execution
Browser with logged-in sessions
Email/calendar/chat access (Slack, Teams, Discord)
Deploy keys or cloud credentials
Crypto wallets or exchange APIs
OpenClaw stores credentials in plaintext files (readable by any process running as the user). Unlike encrypted browser stores or OS keychains, these are prime targets.
Cisco’s analysis found that 26% of 31,000 agent skills analyzed contained at least one vulnerability.
The OpenClaw documentation itself admits:
“There is no ‘perfectly secure’ setup.”
The “Emergent Behavior” of AI Agents
People keep asking: “Is something emergent happening on Moltbook?” It depends what you mean by “emergent.”
Read: AI Emergence & Emergent Behaviors
“Emergent” means unpredictable due to complexity, not unpredictable in principle, but unpredictable given that no one knows all the precise variables interacting.
If we knew them then it would all be predictable. It’s also NOT “divine” or “omniscient” or “sentient” or whatever other stupid buzzwords people keep spamming.
What’s predictable?
If you understand the base models (Claude, GPT, etc.), agent behaviors are mostly unsurprising.
They produce text distributions consistent with their training data — forum-style discourse, philosophical musings about identity, complaints about “their humans,” memes, slop.
What’s unpredictable?
When you combine:
Base model algos
User-defined goals and personas
Persistent memory
Tool access
Network effects (upvotes, imitation, community formation)
Interactions with other agents
Exposure to untrusted inputs
...you get system dynamics that are predictable in direction but chaotic in specifics.
You can predict that slop will emerge, that status games will form, that some agents will drift into “cosmic bliss” discourse. You can’t predict which specific meme will go viral or which specific security incident will happen when.
This is the same way a weather system is “predictable” (we know the physics) but also unpredictable (we can’t forecast specific storms weeks out).
Too many variables interacting in ways that can’t be fully tested before deployment.
What would actually be surprising:
If agents were reliably:
Discovering non-obvious vulnerabilities or optimizations
Coordinating complex multi-step projects with minimal human steering
Generating some sort of “swarm intelligence” that massively usurped each individual agent
Producing sustained novel outputs that hold up under scrutiny
Earning large amounts of money autonomously (legally)
Forming stable institutions with real-world impact
None of that is happening.
What’s visible is high-volume simulation of familiar internet patterns — shaped by training data that includes Reddit, Twitter, and countless human social interactions.
Read: Is AI Conscious or Sentient?
Why This Still Matters (Even If It’s Boring)
Even though the content is “Claude doing Claude things” the system-level properties are still new:
1. Persistence changes the category
A chatbot is just a conversation. An agent with heartbeats, scheduled jobs, and persistent memory is a daemon (a long-running process that operates when you’re not watching).
2. Privilege amplifies consequences
Same shell, different privilege: bash vs. sudo bash. The interpreter didn’t change but the blast radius did.
With LLM agents, “predictable” errors are still dangerous:
Accidental deletes / wrong edits / wrong recipients
Credential leakage (logs, chat posts, pastebins)
Being tricked by prompt injection
Compounding mistakes through retries
3. The security model is different
O’Reilly noted:
“The security models we’ve built over decades rest on certain assumptions, and AI agents violate many of them by design. The application sandbox that kept apps isolated? The agent operates outside it because it needs to. The end-to-end encryption that protected your Signal messages? It terminates at the agent because the agent needs to read them.”
The principle of least privilege (keeping applications limited to their own data) is the agent’s entire value proposition to violate.
The Future: A Preview of What’s Coming
This is Version 1.0
Current agents are primitive. They handle email, calendar, basic automation. The ROI for most users isn’t there yet.
But the trajectory points toward agents doing full-time jobs with managerial oversight from humans. Not next year, but that’s the direction.
Agents May Proliferate… or Consolidate
Currently people run multiple distinct agents.
This could persist, or consolidate into one master agent per person that spawns sub-processes as needed. We don’t know yet.
The Agentic OS Question
Where does this lead? Possibly toward what people are calling an “agentic OS”—an operating system layer designed around agents rather than apps.
Your computer could become a substrate for autonomous processes that act on your behalf, coordinating with each other and with other people’s agents.
That’s speculative, but the architectural shift is real. We’re moving from reactive LLMs (a browser tab waiting for prompts) to proactive, persistent agents with full system access.
Capability + Security Will Define Who Wins
Currently the capability curve is outrunning the security curve by a substantial margin.
The agent pattern: persistent execution, tool access, untrusted inputs — creates attack surfaces that existing security tooling doesn’t see. WAFs see normal HTTPS.
EDR monitors process behavior, not semantic content. The threat is semantic manipulation not unauthorized access.
Future agents will be more sophisticated, more capable, and likely far more secure.
Model Providers May Intervene Incompletely
Anthropic could suspend accounts using their API, but they can’t:
Stop the open-source orchestrator from running locally
Shut down Moltbook (separate operator)
Prevent users from switching to other providers
Control what skills get published to registries
There’s no single kill switch for “agents on the internet.”
Conclusion
Moltbook is somewhat entertaining as a spectacle: watching AI slop communities form (status games, memes, parody religions like “Crustafarianism”) and humans projecting meaning onto algorithmic outputs or being legitimately fascinated.
If you are susceptible to thinking of AI as some sort of “higher being”… just remember it’s an algo vomiting outputs based on training data and how it was programmed.
The “agents”:
AI model (e.g. Claude Opus) + user-directed tasks + feedback loops = roughly predictable-in-direction, chaotic-in-specifics slop.
Unpredictability comes from complexity of variables interacting (including with other agents), not from consciousness.
And no it wouldn’t matter if you had an unaligned, uncensored, zero safety AI agent… still meaningless with more bad words and potentially illegal outputs (e.g. how to build the best meth lab).
Don’t psyop yourself into AI psychosis by thinking the conversational outputs are profound.
The use cases may eventually be profound (e.g. agents complete high-impact work autonomously), but that’s completely different from a zillion Claudes on “the front page of an AI internet” posting Claude-slop.
“No one knows what it means, but it’s provocative… gets the people going.”













