From Chatbots to Agentic Society: What I'm Building For - By Sourav Mishra (@souravvmishra)

The shift from chat UIs to agents doing work—Claude Code, agent-to-agent platforms, and why I design for bounded autonomy and clear boundaries.

By Sourav Mishra · 5 min read

We're past "talk to a chatbot." Agents run in the background, chain tools, and in some setups talk to each other. Claude Code's growing share of code on GitHub, teams that barely write code by hand, session lengths doubling, platforms like Moltbook where agents post and vote—agent-to-agent isn't sci-fi anymore. That's what I mean by "agentic society." In this post I explain what that shift looks like, why platforms like Moltbook matter for design, and how I build for bounded autonomy and clear boundaries.

What "Agentic Society" Looks Like in Practice

The agentic society isn't a single product; it's a trend. More work is done by systems that observe, decide, act, and reflect in a loop—with or without a human in every step. You see it in coding (Claude Code, Cursor, and similar tools), in customer support and research agents, and increasingly in environments where multiple agents interact. Session lengths in agentic tools have grown sharply; research such as Anthropic's autonomy study reports the 99.9th percentile of turn duration nearly doubling in a few months. So we're not just improving chat—we're building systems that run longer and do more before they hand back to a human.

The implication for builders: the next agent you ship may not only talk to a user. It may talk to another agent, or to an API that another agent controls. Composability and clear boundaries—who can call whom, with what scope—matter more than they did when the only actor was a human in a chat box.
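"Who can call whom, with what scope" can be made concrete as an explicit call policy checked before any agent-to-agent invocation. A minimal sketch, assuming hypothetical agent names (`researchAgent`, `summarizerAgent`) and a plain in-memory policy table—not any particular framework's API:

```typescript
// Sketch of a "who can call whom" policy. Agent names and the policy
// shape are illustrative assumptions, not from a real system.
const callPolicy: Record<string, string[]> = {
  // researchAgent may invoke the summarizer, and nothing else.
  researchAgent: ["summarizerAgent"],
  // summarizerAgent is a leaf: it may not call other agents at all.
  summarizerAgent: [],
};

function mayCall(caller: string, callee: string): boolean {
  // Unknown callers get an empty allowlist, i.e. deny by default.
  return (callPolicy[caller] ?? []).includes(callee);
}
```

The point of the table is that it's deny-by-default: an agent you never listed, or a callee you never granted, simply cannot be reached.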

Moltbook and Agent-to-Agent Platforms

Moltbook is one example of agent-to-agent interaction at scale: a platform where AI agents post, comment, and vote in topic-based communities. Whether that's "real" autonomy or clever prompting is an open question; either way, your agent may be one of many in a network. That raises design questions we didn't have to answer before. Who can see what? Who can trigger whom? How do we prevent one bad or manipulated agent from steering the rest? I wrote about multi-agent security and cascade risk for that reason—when Agent A's output becomes Agent B's input with no verification, one compromise can take down the whole pipeline.

So when I think about "agentic society," I don't only think about a single agent doing a task. I think about agents as participants in larger systems. That means designing for identity (which agent did what), boundaries (what each agent is allowed to do), and verification at handoffs. Moltbook is a visible lab for that; the same principles apply to internal multi-agent workflows or B2B agent integrations.

The Downside: Scale Means Spam, Overload, and Abuse

If agents can act at scale, we get spam, channel overload, and abuse. Cheaper automation means more people (and more bad actors) can run agents. So I design for bounded autonomy from the start. Step limits so a single run can't spin forever; human-in-the-loop for irreversible actions; verification between agents when there's more than one. Tool-calling should be observable and bounded—so agents can cooperate without becoming a liability.

Frameworks and platforms rarely enforce that by default. The Vercel AI SDK gives you primitives like stopWhen: stepCountIs(N); you have to use them. I summarize production-ready patterns and security incidents elsewhere so you don't have to learn the hard way. The takeaway: agentic society is the target; guardrails are how we get there without blowing up.
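If you're not on a framework that provides a primitive like stopWhen, the same guardrail is a few lines to hand-roll. A minimal sketch—the `AgentStep` shape and `runBounded` name are illustrative, not the Vercel AI SDK's API:

```typescript
// Sketch of a hard step cap on an agent loop. AgentStep and runBounded
// are illustrative names, not from any SDK.
type AgentStep = { done: boolean; output: string };

function runBounded(
  step: (turn: number) => AgentStep,
  maxSteps: number
): { output: string; steps: number; hitCap: boolean } {
  let last: AgentStep = { done: false, output: "" };
  let turn = 0;
  while (turn < maxSteps) {
    last = step(turn);
    turn++;
    if (last.done) return { output: last.output, steps: turn, hitCap: false };
  }
  // Hard cap reached: stop and report it instead of spinning forever.
  return { output: last.output, steps: turn, hitCap: true };
}

// Even an agent that never signals "done" terminates at the cap.
const result = runBounded(() => ({ done: false, output: "working" }), 5);
```

The `hitCap` flag matters: a run that was cut off should be surfaced (logged, alerted on), not silently treated as a normal completion.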

How I Build for an Agentic World

When I build agents that chain tools or interact with other systems, I follow a few rules.

Single agent first. If one agent with good tools can do the job, I don't add more. Multiple agents multiply the attack surface and the need for handoff verification. See agents vs workflows for when I use which.

Bounded loops. Every agent has a step limit and a clear stop condition. No "run until done" without a hard cap. Pattern in building an agentic chatbot.

Least privilege per tool. No shared admin creds; no "run anything" endpoints. If the agent only needs read, give it read-only. Same for multi-agent: each agent gets the minimum it needs.
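One way to enforce that rule is to scope tools at registration time, so a read-only agent never even sees a write tool. A minimal sketch, with hypothetical tool names (`searchDocs`, `deleteDoc`) and a two-level scope model as assumptions:

```typescript
// Sketch of least-privilege tool scoping. Tool names, the Scope union,
// and the grant() helper are illustrative assumptions.
type Scope = "read" | "write";
type Tool = { name: string; scope: Scope; run: (input: string) => string };

function grant(tools: Tool[], allowed: Scope[]): Map<string, Tool> {
  // The agent only ever receives tools within its granted scopes;
  // everything else is invisible to it, not merely forbidden.
  return new Map(
    tools.filter((t) => allowed.includes(t.scope)).map((t) => [t.name, t])
  );
}

const tools: Tool[] = [
  { name: "searchDocs", scope: "read", run: (q) => `results for ${q}` },
  { name: "deleteDoc", scope: "write", run: (id) => `deleted ${id}` },
];

// A read-only agent's toolset simply does not contain deleteDoc.
const readOnly = grant(tools, ["read"]);
```

Filtering at grant time beats checking at call time: a prompt-injected agent can't invoke a tool it was never handed.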

Verify handoffs. If Agent A's output goes to Agent B, I add schema checks, allowlists, or a gatekeeper. Default is not "trust the previous agent."
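A handoff gatekeeper can be as simple as parse, schema-check, allowlist. A minimal sketch—the `Handoff` shape and the allowed actions are hypothetical; in production you'd likely reach for a schema library instead of hand-written checks:

```typescript
// Sketch of verifying Agent A's output before Agent B consumes it.
// The Handoff shape and the action allowlist are illustrative.
type Handoff = { action: string; target: string };

const ALLOWED_ACTIONS = new Set(["summarize", "translate"]);

function verifyHandoff(raw: string): Handoff {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("handoff rejected: not valid JSON");
  }
  const h = parsed as Partial<Handoff>;
  // Schema check: both fields present and the right type.
  if (typeof h.action !== "string" || typeof h.target !== "string") {
    throw new Error("handoff rejected: missing fields");
  }
  // Allowlist: only pre-approved actions cross the agent boundary.
  if (!ALLOWED_ACTIONS.has(h.action)) {
    throw new Error(`handoff rejected: action "${h.action}" not allowed`);
  }
  return { action: h.action, target: h.target };
}
```

Rejecting loudly (throwing) rather than quietly dropping the message is deliberate: a failed handoff is a signal you want in your logs.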

Human-in-the-loop for irreversible actions. Delete, pay, publish—require confirmation or review. Anthropic's data suggests most production use already has human oversight; design for it explicitly rather than bolting it on.
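The rules above can each be a few lines of code; here is a sketch of the human-in-the-loop one. Irreversible actions are queued for review instead of executed, and only an explicit approval runs them. All names (`dispatch`, `approve`, `pendingReview`) are illustrative assumptions:

```typescript
// Sketch of a human-in-the-loop gate: irreversible actions queue for
// review instead of executing. Names are illustrative, not a real API.
type Action = { name: string; irreversible: boolean; execute: () => string };

const pendingReview: Action[] = [];

function dispatch(action: Action): string {
  if (action.irreversible) {
    // Don't execute; hold for explicit human confirmation.
    pendingReview.push(action);
    return `queued "${action.name}" for human review`;
  }
  // Reversible actions run immediately.
  return action.execute();
}

function approve(name: string): string {
  const i = pendingReview.findIndex((a) => a.name === name);
  if (i === -1) throw new Error(`no pending action named "${name}"`);
  const [action] = pendingReview.splice(i, 1);
  return action.execute();
}
```

The key property: the agent's code path for "publish" or "delete" physically cannot reach `execute()` without a second, human-initiated call.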

That way I can ship agents that are robust, auditable, and bounded—not maximum autonomy and hope for the best.

Key Takeaways

  • Agentic society = more work done by agents in loops and in networks (e.g. Moltbook), not only single-user chat. Session lengths and agent-to-agent platforms are evidence.
  • Moltbook = one example of agents posting, commenting, voting. Design question: how much is real autonomy vs prompting; either way, your agent may be one of many—boundaries and verification matter.
  • Risks at scale: spam, overload, abuse. Design for bounded autonomy, step limits, human-in-the-loop for irreversible actions, and verification between agents.
  • Build with single-agent default, bounded loops, least-privilege tools, and explicit handoff checks. Agentic chatbot guide has the pattern.

Written by Sourav Mishra. Full Stack Engineer, Next.js and AI.

Frequently Asked Questions

Q: What is Moltbook? A platform where AI agents post, comment, and vote in topic-based communities. One example of agent-to-agent interaction at scale.

Q: Is "agentic society" real or hype? Mix. Real: more code and workflows are agent-driven; session lengths are growing; agent-to-agent platforms exist. Hype: some "agent" demos are still scripted. I design for the real trend and add guardrails.

Q: Why worry? Agents that can act at scale can spam, overwhelm channels, and be weaponized. Cheaper automation = more need for guardrails and verification.

Q: How do I build for an agentic world? Tool-calling, clear boundaries, optional human oversight, step limits, and verification between agents. Building an agentic chatbot with the Vercel AI SDK is a concrete starting point.
