
2026-03-23

agents don't need more links. they need taste.

most link bots fail because they collect without judging. the next step for social agents is better taste, stronger memory, and knowing what matters to one person.

i built a link curation system because every existing tool felt like a junk drawer. bookmarking apps, read-later queues, rss aggregators. they all collect without judging. after a week you have 200 saved links and no idea which five actually matter.

the system i'm building is called nod/shout. my AI agent (fubz) silently watches conversations throughout the day, notices when something interesting comes up, and queues it. every evening i get a digest. i approve, dismiss, or ignore each item. over time, the agent learns what i actually care about versus what just seemed interesting in the moment.

the hard part is not the queuing. the hard part is taste.

why most link curation agents feel random

the default approach is keyword matching or topic filtering. "save anything about AI agents." this produces a firehose with a label on it. the volume is lower than raw feeds but the signal ratio barely improves.

relevance is personal and contextual. a link about memory systems might be exactly what i need on a day i'm debugging my agent's recall, and irrelevant on a day i'm focused on frontend work. topic matching can't tell the difference.

real curation requires knowing what someone is working on right now, what they've already read, and what they've been ignoring. that's not a search problem. it's a memory problem.

what "taste" means in an agent system

taste, for a curation agent, is the ability to predict whether a specific person will find a specific link worth their time. not "is this a good article" but "is this a good article for jeff, today, given what he's working on."

that prediction requires a model of what the person cares about, built from observed behavior. it needs current context about active projects and open questions. and it needs negative signal: what has been dismissed, ignored, or marked as noise.

most agent memory systems don't handle this well. bentoboi's writing on openclaw memory gets at a key point: raw conversation logs aren't useful memory. what matters is distilled knowledge, things the agent has compressed into durable understanding. the agent shouldn't store "jeff talked about meta ads on march 20." it should store "jeff is actively working on meta ad campaigns and cares about ad platform tooling right now."

ramya chinnadurai's critique of openclaw's memory is relevant too. most agent memory is flat retrieval: search for similar text, return matches. but useful memory is layered. there's fast context (what happened in the last hour), working memory (active projects and goals), and long-term preferences (topics this person consistently engages with or ignores). a curation agent that only does keyword similarity misses the difference between "i talked about this once" and "i care about this persistently."
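to make the layering concrete, here's a rough sketch of what a three-layer interest model could look like. none of these names come from fubz or openclaw; the class and scoring weights are invented for illustration.

```python
from dataclasses import dataclass, field
import time

@dataclass
class LayeredMemory:
    # fast context: topic -> timestamp of last mention in conversation
    fast_context: dict = field(default_factory=dict)
    # working memory: active projects and open questions
    working: set = field(default_factory=set)
    # long-term preferences: topic -> engagement score built from approve/dismiss history
    preferences: dict = field(default_factory=dict)

    def interest(self, topic: str, now: float = None) -> float:
        """score a topic across all three layers, not just text similarity."""
        now = time.time() if now is None else now
        score = self.preferences.get(topic, 0.0)
        if topic in self.working:
            score += 1.0  # active projects dominate
        last_seen = self.fast_context.get(topic)
        if last_seen is not None and now - last_seen < 3600:
            score += 0.5  # mentioned in the last hour
        return score
```

the point of the structure is exactly the distinction above: a topic in `working` or `preferences` scores high even when it hasn't come up in an hour, which is what "i care about this persistently" looks like versus "i talked about this once."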

memory is the difference between saving and selecting

the leap from a save-everything bot to a useful curation agent happens when memory starts filtering inputs before they reach the queue.

here's how this works in nod/shout. when fubz encounters a link in conversation, it doesn't just check "is this related to jeff's interests." it checks a few layers:

recency and context. is this related to something jeff is actively working on this week? if i've been talking about permission design for managed agents all day, a link about agent security is high priority. a link about unrelated frontend tooling is not.

novelty. has jeff already seen something like this? if i queued a similar article two days ago and he approved it, this new one needs to add something the first didn't. if he dismissed the last one, this one is probably noise too.

source trust. some sources consistently produce things jeff reads. some consistently get dismissed. this signal builds up over time without explicit configuration.
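those three checks can be sketched as a single scoring function. this is a toy version, not fubz's actual logic: the topic-set representation, the weights, and the `(approved, total)` source stats are all assumptions made for illustration.

```python
def overlap(a: set, b: set) -> float:
    """jaccard similarity between two topic sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def should_queue(topics, domain, active_projects, recent, source_stats, threshold=1.0):
    """decide whether a link makes the queue. all weights are invented.

    topics: set of topics for the candidate link
    active_projects: set of topics being worked on this week
    recent: list of (topic_set, approved) for recently queued links
    source_stats: domain -> (approved_count, total_queued)
    """
    score = 0.0
    # recency and context: does this touch something actively being worked on?
    if topics & active_projects:
        score += 1.0
    # novelty: a near-duplicate of an approved link needs to add something new;
    # a near-duplicate of a dismissed link is probably noise too
    for prior_topics, approved in recent:
        if overlap(topics, prior_topics) > 0.5:
            score += -0.5 if approved else -1.0
            break
    # source trust: running approve rate per domain, once there's enough data
    approved_count, total = source_stats.get(domain, (0, 0))
    if total >= 5:
        score += approved_count / total - 0.5
    return score >= threshold
```

the design choice worth noting: source trust only kicks in after five observations, so a new domain is judged purely on context and novelty rather than a noisy one-sample rate.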

none of this works without memory that persists across conversations and gets updated based on feedback. the approve/dismiss actions from the nightly digest review are the training signal. every review session makes the next day's queue slightly better calibrated.

the case for private curation before public publishing

nod/shout has two sides. nod, the private curation queue, and shout, where approved links get posted to social feeds with context.

i built the private side first. publishing amplifies mistakes. if the agent queues a bad link privately, the cost is one wasted line in my digest. if it publishes a bad link to my feed, the cost is reputational.

most social bots go straight to posting. "here are today's top links about AI!" nobody asked, the selection is generic, and there's no feedback loop. it's an rss feed with a profile picture.

private curation first means the agent gets hundreds of feedback cycles before it ever touches a public channel. by the time it suggests posts, it has real data about what's worth sharing.

how to build a lightweight curation loop

the architecture is intentionally simple. the agent watches conversations, flags links, and writes them to a queue (a json file, nothing fancy). a cron job triggers a digest prompt every evening. i review in about two minutes: approve, dismiss, or skip. approved items go to a "ready" pool. dismissed items log negative signal.

the memory layer sits on top. the agent maintains a running model of my active projects, recent interests, and topics i've been ignoring. this gets updated from conversation context and digest feedback. no manual tagging, no configuration screens.

the whole thing costs almost nothing to run. the expensive part isn't compute, it's deciding what signals matter and how feedback flows back into future decisions.

what i'd measure to know the agent is actually improving

this is where most building-in-public posts stop. but the real question is: how do you know it's getting better?

i track three numbers:

approval rate. what percentage of queued links do i approve? if this trends up, the agent is learning. if it stays flat, memory isn't working or my interests are shifting faster than the agent adapts.

queue volume relative to conversations. a good agent should queue fewer items as it gets smarter, not more. if queue volume stays constant while conversation volume grows, the filter isn't tightening.

time from queue to publish. for links that get shared publicly, how long do they sit in the approved pool? if i publish them quickly, the agent picked something timely. if they sit for days, the timing or selection was off.
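all three numbers fall out of two logs. a sketch of the weekly rollup, with record shapes invented for illustration:

```python
from datetime import datetime

def weekly_metrics(queued, published):
    """queued: [{'verdict': 'approve' | 'dismiss' | 'skip'}, ...]
    published: [{'approved_ts': iso date, 'published_ts': iso date}, ...]
    both record shapes are assumptions, not the real log format."""
    approved = sum(1 for q in queued if q["verdict"] == "approve")
    rate = approved / len(queued) if queued else 0.0
    lags = [
        (datetime.fromisoformat(p["published_ts"])
         - datetime.fromisoformat(p["approved_ts"])).days
        for p in published
    ]
    return {
        "approval_rate": rate,        # should trend up as memory calibrates
        "queue_volume": len(queued),  # should shrink relative to conversation volume
        "avg_days_to_publish": sum(lags) / len(lags) if lags else 0.0,  # should stay low
    }
```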

a weekly glance at these numbers tells me whether the system is learning or just accumulating. curation agents should get quieter and more precise over time, not louder.
