A few weeks ago, we sat down with Teresa Torres for a podcast conversation about what we were building: a structured, semantic layer where decisions, learnings, and context live — traceable, versioned, honest. Since then, we’ve been able to go after what we envisioned — something we thought would take another year or two. We’re about to arrive.

Speed is the new default — and it breaks things

When you move slowly, inconsistency is manageable. You have time to sync, check, align. Meetings exist to align goals and beliefs before they compound.

Last year my cofounder and I built a full product team of agents inside Slack. We had a developer agent that could commit code, a PM agent that made slides showing our progress, and a product marketing agent that researched our positioning and proactively suggested ways to improve the product. Whenever we made progress, we posted the update to the team and celebrated together. The agents could loop each other in, hand off work to one another, and ask questions when they got stuck.

And that’s what happened: they got stuck all the time and came back asking what to do next. Did the questions make sense? Yes. Would a human have asked the same thing? Yes. Were they annoying to answer? Yes. It forced us to ask: what does an agent actually need to act autonomously? Not just to execute a task when given perfect instructions, but to operate inside an organization: make judgment calls, stay aligned with intent, know when to move and when to stop and ask.

The answer is profound but maybe not unexpected: it’s what humans need to succeed in the workplace. Context. Direction. A clear sense of how decisions get made and why.

A brilliant person dropped into a new company with no context, no goals, and no understanding of how decisions are made will underperform a mediocre person who’s been there five years. Context, direction, and values aren’t soft management concepts — they’re the actual foundation that makes coordinated action possible. And if that’s true for people, it’s even more true for agents.

Agents demand clarity

Agents need context to act, just like humans — but unlike humans, they can’t absorb it through hallway conversations, culture, or osmosis. Everything has to be explicit: the decisions already made, the goals and how they’re prioritized, the principles that guide tradeoffs. Humans struggle with ambiguous goals too — agents just make that ambiguity painfully visible.

So what most people do is feed them everything. Every document, every spec, every Slack export, every meeting note they can find. Dump the Google Drive. Hope for the best.

It doesn’t work.

Not because agents can’t read — but because documents aren’t context. They’re artifacts. And artifacts accumulate contradictions. The spec from six months ago says one thing. The decision from last Thursday says another. A principle the team quietly abandoned still lives in the onboarding doc. Nobody flagged it. Nobody deleted it. Nobody even noticed. It just stayed.

Agents can’t tell the difference between what’s true now and what used to be true. They treat everything as equally valid — and produce outputs that are coherent, confident, and wrong.

Context quantity doesn’t matter if it lacks integrity.

GitHub for product management

When we started thinking about it this way, the solution became clear: you don’t just need shared context. You need honest context.

That meant conflict resolution had to be a first-class feature — not a nice-to-have. Just like git catches when two engineers change the same file, Momental catches when two pieces of knowledge contradict each other. A roadmap that says Q1, a team decision that says hold until Q2. An assumption that was valid last quarter but got invalidated last week. A principle that nobody formally retired but everyone stopped following.

We started calling it GitHub for product management. Not for code — for the decisions, learnings, and beliefs that agents act on.

Why conflict resolution isn’t a nice-to-have

We think about conflict resolution the way engineers think about version control. You don’t skip git because you’re moving fast. You need it more when you’re moving fast.

In Momental, conflict detection runs across five layers — semantic, structural, temporal, causal, and cross-tree. It catches things like:

  • A roadmap that says launch in Q1, and a team decision that says hold until Q2
  • An assumption that was true in Q1 but invalidated by Q3 data
  • Two atoms that say opposite things, just worded differently
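Momental’s actual implementation isn’t public, so as a rough illustration only, here is a minimal sketch of what a temporal-layer check like the first two bullets could look like. Everything here is hypothetical: the `Atom` shape, the idea that atoms carry a `topic` key and a recorded date, and the rule that a newer atom on the same topic with a different asserted value is surfaced rather than silently winning.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical "atom": one unit of context with the date it was recorded.
@dataclass
class Atom:
    claim: str    # human-readable statement, e.g. a roadmap line or decision
    topic: str    # what the atom is about; used to pair candidates
    stated: str   # the asserted value, e.g. "Q1"
    as_of: date   # when this atom was recorded

def temporal_conflicts(atoms):
    """Pair same-topic atoms whose asserted values disagree over time.
    The newer atom does not silently override the older one; both are
    returned so a human can resolve the contradiction explicitly."""
    by_topic = {}
    for a in atoms:
        by_topic.setdefault(a.topic, []).append(a)
    conflicts = []
    for group in by_topic.values():
        group.sort(key=lambda a: a.as_of)
        # Compare each atom with the next-newer atom on the same topic.
        for older, newer in zip(group, group[1:]):
            if older.stated != newer.stated:
                conflicts.append((older, newer))
    return conflicts

atoms = [
    Atom("Roadmap: ship billing revamp", "billing-launch", "Q1", date(2024, 1, 10)),
    Atom("Team decision: hold billing until Q2", "billing-launch", "Q2", date(2024, 2, 22)),
]
for older, newer in temporal_conflicts(atoms):
    print(f"CONFLICT on {older.topic!r}: "
          f"{older.stated} ({older.as_of}) vs {newer.stated} ({newer.as_of})")
```

The design point the sketch tries to capture is the one in the text: detection and resolution are separate. The function only surfaces the pair; deciding which atom is still true stays a human call.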

The goal isn’t to hide contradictions. It’s to surface them for human resolution before they become expensive. And once a conflict is resolved, that resolution becomes a signal — calibrating over time so the same class of conflict gets caught earlier, or resolved automatically.

Because the alternative — letting agents act on incoherent context — doesn’t just produce bad outputs. It produces bad outputs with confidence.

We use Momental to build Momental

Here’s the part we don’t say enough: we are the customer.

Every decision we make about the product gets written into our own context graph. Every agent we run during development — Claude Code, our internal agents — reads from and writes back to Momental. When one agent contradicts another, Momental flags it before it lands in production.

The speed and the coherence aren’t in tension. One produces the other.

What we described to Teresa Torres as “GitHub for product management” is still the backbone. It’s what ensures that when agents reason, they reason from truth. When they act, they act with integrity.

And that’s what surprised us most. We didn’t just get better context. We got speed we didn’t think was possible yet.

Our vision was always to build that team — humans and agents working together, coherently, toward the same goals. We thought it was years away. We’ve already arrived.

What we’re actually building

We believe agents need humans. Not as supervisors hovering over every output, but as the people who set the goals, approve the principles, and make the calls that matter when it’s unclear.

And we believe humans do their best work when agents are working alongside them. Taking on execution. Surfacing what’s been forgotten. Flagging when something doesn’t add up.

That’s Momental: a shared workspace where humans and agents collaborate, stay aligned, and get smarter together over time.

If you’re arriving from the podcast — welcome. Things have moved fast since we recorded.

Sign up for the waitlist — we can’t wait to have you.


Listen to Teresa Torres: Building GitHub for Product Management