2026-03-25
openclaw is having its chatgpt moment. social agents still have work to do.
mainstream attention is landing on agents. the hard part isn't getting an agent to act once. it's making it socially aware and safe enough to stay useful after the demo.
CNBC ran the headline: "openclaw is having its chatgpt moment." Jensen Huang called it "the most popular, open-source project in the history of humanity" and said it "exceeded what linux did in 30 years" in weeks. that's the kind of attention that changes a market overnight.
and it is changing things. hundreds of thousands of people have set up personal AI agents on their home machines. karpathy built one called "dobby the elf claw" that controls his pool, spa, lights, blinds, security cameras, and package tracking. he said it replaced six apps on his phone. when jensen gifted him a DGX Station, his first thought was to house dobby on it.
this is real. personal agents are here. but i build in this space, and i think the mainstream wave is about to expose some serious gaps.
the demo is easy. the second week is hard.
the Forbes piece "2 reasons i turned off my openclaw" captures something i keep hearing. people set up an agent, have a mind-blowing first few days, and then hit a wall. the agent doesn't remember context between sessions. it confuses one chat with another. it does something embarrassing because it can't tell the difference between a work conversation and a personal one.
Gavriel Cohen, the developer quoted in the CNBC piece, ran into exactly this. his openclaw couldn't distinguish one WhatsApp group from another. he imagined a coworker asking about a meeting and the agent replying with details about his daughter's ballet schedule because it was pulling from personal messages. that's not a model quality problem. that's a social context problem.
cohen spent days building his own variant (NanoClaw) just to wall off personal chats from work chats. that he had to do this at all tells you where the gap is.
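to make the gap concrete, here's a minimal sketch of the kind of wall cohen had to build by hand. this is not how NanoClaw actually works, and every name here is hypothetical; it just shows the core idea: tag each chat with a context once, and refuse cross-context reads by default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chat:
    chat_id: str
    context: str  # e.g. "work" or "personal", assigned once at setup


class ContextWall:
    """Hard boundary: while handling one chat, the agent may only
    read from chats tagged with the same context."""

    def __init__(self) -> None:
        self._contexts: dict[str, str] = {}

    def register(self, chat: Chat) -> None:
        self._contexts[chat.chat_id] = chat.context

    def can_read(self, active_chat: str, source_chat: str) -> bool:
        # unknown chats are out of bounds, not fair game
        active = self._contexts.get(active_chat)
        source = self._contexts.get(source_chat)
        return active is not None and active == source
```

the point of the sketch is the default: a chat the agent hasn't been told about is unreadable, so the ballet-schedule leak can't happen by accident. the agent fails closed instead of open.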
consumer use cases are arriving before good defaults
karpathy's dobby is a great example of a well-scoped personal agent. it controls home devices on a local network. the blast radius of a mistake is that the wrong lights turn off. that's fine.
but most people aren't setting up agents to control pool pumps. they're connecting them to messaging apps, email, calendars, and work tools. the blast radius there is very different. an agent that sends the wrong message to the wrong person, or shares information across contexts it shouldn't, causes real damage.
the open-source community is building fast, but defaults matter more than features when adoption goes mainstream. most new users won't build their own NanoClaw. they'll use whatever ships out of the box, and right now, out-of-the-box social behavior is weak.
the missing layer: social context
this is what i'm building with nod. the problem isn't that agents can't act. openclaw proved they can. the problem is that agents don't understand relationships, social boundaries, or the difference between contexts.
a useful social agent needs to know things like:
- this person is a close friend, that person is a business contact
- this conversation is private, that one is public
- sharing this link is fine in this group but inappropriate in that one
- this introduction would be welcome, that one would be awkward
none of that is in the base model. it's a product layer that has to be built, maintained, and personalized over time. it requires memory (who are these people to you?), judgment (is this appropriate here?), and continuity (what happened in this relationship last week?).
one-shot automation vs. durable assistance
the chatgpt moment for agents will follow the same pattern as the chatgpt moment for chat. massive initial excitement, followed by a sorting period where products that handle the hard cases survive and the ones that only work in demos don't.
the hard cases for agents aren't "search ebay and place a bid." they're "manage my communications across five channels without embarrassing me." they're "curate interesting links for me without flooding my feed with noise." they're "introduce me to someone when the timing is right, not just when the algorithm says so."
these require sustained social awareness, not one-shot task completion. and they require the kind of memory and context management that most agent frameworks are still missing.
what social agents should look like by year end
if the current wave of attention sticks, and i think it will, the winners will be products that solve three things:
context separation. agents need hard boundaries between social contexts. work and personal. public and private. different friend groups. this isn't a nice-to-have. it's table stakes after the first embarrassing mistake.
relationship memory. an agent that forgets who your contacts are and what your relationship is with them can't make good social decisions. this is exactly the kind of layered memory problem i wrote about last week. small, well-maintained, distilled.
safe defaults. when an agent isn't sure, it should do nothing rather than guess. the cost of a wrong social action (sending the wrong message, sharing private info, making an awkward introduction) is much higher than the cost of a missed opportunity.
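that asymmetry can be encoded directly. a minimal sketch (hypothetical names, not any real framework's API): gate every social action behind a deliberately high confidence bar, and defer to the user below it.

```python
from typing import Callable

def act_or_defer(action: Callable[[], None], confidence: float,
                 threshold: float = 0.9) -> str:
    """Safe default: below the bar, do nothing and surface the action
    to the user. The high threshold encodes the asymmetry: a wrong
    social action costs far more than a missed opportunity."""
    if confidence >= threshold:
        action()
        return "executed"
    return "deferred"  # queue for human review instead of guessing
```

the threshold value matters less than the shape: the failure mode of this gate is "asked the user unnecessarily," not "messaged the wrong person."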
i'm building nod because i think the product layer above agents is where the real work is. openclaw having its chatgpt moment is great for everyone in this space. but the mainstream wave will reward products that handle social context well and punish the ones that confuse one-shot automation with durable assistance. the agents are here. the social skills aren't. that's the gap worth closing.