Last year we built a full product team of agents in Slack. We had a developer agent that could commit code, a PM agent that could make slides showing our progress, and a product marketing agent researching our positioning and making proactive suggestions on how to improve the product.

Whenever we made progress, we posted the update to the team and celebrated together.

They could loop each other in, hand off work to one another, and ask questions when they got stuck.

And that’s what happened - they got stuck all the time and came back asking what to do next.

Did the questions make sense? Yes.

Would a human have asked the same thing? Yes.

Were they annoying to answer? Yes.

It forced us to ask the question:

What does an agent actually need to act autonomously?

Not just to execute a task when given perfect instructions, but to operate inside an organization - to make judgment calls, stay aligned with intent, and know when to move and when to stop and ask.

The answer - profound, but maybe not unexpected - is the very same things that humans need to succeed in the workplace.

1. Shared Memory

Agents can’t act on context they don’t have.

By this, we don’t mean conversation history or all your documents. We mean the decisions, constraints, and principles your team has already worked out - the accumulated understanding of why you’re building what you’re building, what you’ve tried before, and what you’ve ruled out.

When a PM briefs a new team member, they don’t start from scratch. They hand over context: here’s the strategy, here’s the constraint, here’s what we learned last quarter. That context is what lets the person make good decisions independently.

Agents need the same thing. If every session starts from zero, you don’t have an autonomous agent. You have a very fast intern who needs to be re-onboarded every morning.

Shared memory isn’t a feature. It’s the foundation that makes autonomy possible at all.
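
To make that concrete, here’s a minimal sketch in TypeScript - every name, field, and example value here is hypothetical, not a prescribed schema - of what one entry of structured shared memory might look like:

```typescript
// Hypothetical shape for one entry of shared memory. The point:
// it captures the decision, the reasoning, and what was ruled out -
// not just a blob of prose.
interface MemoryEntry {
  id: string;
  kind: "decision" | "constraint" | "principle" | "learning";
  statement: string;               // what the team settled on
  rationale: string;               // why the team landed here
  rejectedAlternatives: string[];  // what was tried or ruled out
  decidedAt: Date;
}

// Illustrative example - the content is invented.
const entry: MemoryEntry = {
  id: "mem-042",
  kind: "decision",
  statement: "We ship weekly, not daily.",
  rationale: "Daily releases overloaded QA last quarter.",
  rejectedAlternatives: ["daily releases", "monthly release trains"],
  decidedAt: new Date("2025-01-15"),
};
```

The exact schema matters less than the fields: an agent briefed with entries like this inherits the why and the ruled-out, not just the what.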

2. Shared Goals

An agent without goals optimizes for the wrong thing.

This sounds obvious. But most teams give agents tasks, not goals. The difference matters enormously when reality gets complicated - when two things conflict, when a shortcut would technically complete the task but miss the point, when a new piece of information changes what “done” should mean.

Goals give agents a hierarchy. When two things conflict - what wins? Speed or quality? Feature scope or shipping date? That’s not something a model can infer from context alone. It has to be declared.
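
As a sketch of what “declared” could mean in practice - the statements and priorities below are invented for illustration - a goal hierarchy can be as simple as an ordered list that every agent consults when objectives collide:

```typescript
// Hypothetical: goals declared in priority order, so a conflict
// resolves by rank instead of by the model's best guess.
type Goal = { rank: number; statement: string };

const goals: Goal[] = [
  { rank: 1, statement: "Ship the beta by the end of the quarter" },
  { rank: 2, statement: "No regressions in the core workflow" },
  { rank: 3, statement: "Expand feature scope where it is cheap" },
];

// "Feature scope or shipping date?" - the lower rank wins.
// Declared once, consulted everywhere, never re-inferred.
const winner = (a: Goal, b: Goal): Goal => (a.rank < b.rank ? a : b);
```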

Teams that struggle with agents are often teams where the goals haven’t been written down clearly enough for a human either. Agents just make that ambiguity visible faster.

3. Principles

This is the one most teams skip entirely - and it’s the one that separates agents that move from agents that constantly check in.

Principles are how organizations resolve tradeoffs without escalating every decision.

“We don’t ship untested code to production.”

“When in doubt, choose the simpler architecture.”

“User privacy over conversion optimization.”

These aren’t rules someone has to enforce - they’re internalized constraints that let people act without asking permission.

Agents need the same thing. Without principles, every novel situation becomes a blocker. The agent either guesses (badly) or interrupts. Neither is autonomy.

When agents have access to the team’s actual principles - not a generic system prompt, but the real, specific beliefs that guide how this team makes decisions - they can navigate ambiguity the way a trusted team member would.
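
One way to picture this - a hypothetical sketch, not a prescribed format - is principles declared as data, each paired with the tradeoff it settles, so the agent resolves a novel situation the same way the team would:

```typescript
// Hypothetical: each principle pairs a constraint with the recurring
// tradeoff it resolves, so edge cases don't become escalations.
interface Principle {
  statement: string;  // the internalized constraint
  resolves: string;   // the tradeoff it settles
}

const principles: Principle[] = [
  {
    statement: "We don't ship untested code to production.",
    resolves: "speed vs. safety",
  },
  {
    statement: "When in doubt, choose the simpler architecture.",
    resolves: "flexibility vs. maintainability",
  },
  {
    statement: "User privacy over conversion optimization.",
    resolves: "growth vs. trust",
  },
];
```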

The Pattern Behind All Three

There’s a shortcut most teams take that doesn’t work: dump your documents into a bucket, point your agents at it, and call it context.

It feels intuitive. Your context is in those documents, right?

The problem is that documents contradict each other - and no one knows it.

One doc says you’re prioritizing speed. Another says quality is non-negotiable. One says the decision was made in October. Another describes a different decision entirely, written a month later by someone who wasn’t in the room.

Documents capture what was written down, by whoever wrote it, whenever they wrote it. They can’t tell you which version was actually decided, what got ruled out and why, or which document carries more weight when two of them disagree.

So you end up with agents - and humans - pulling from different versions of the truth. Not because the information doesn’t exist, but because it was never structured in a way that makes contradictions visible.
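
Contrast that with a structure built to surface contradictions. As a hypothetical sketch (the fields and function are illustrative): give every decision provenance and an explicit pointer to what it supersedes, and “which version is current” becomes a lookup instead of an argument:

```typescript
// Hypothetical: a decision record with provenance. A new record
// explicitly supersedes the old one, so two conflicting versions
// can't silently coexist as equals.
interface DecisionRecord {
  id: string;
  statement: string;    // e.g. "Prioritize speed for the Q4 launch."
  decidedBy: string;    // who was actually in the room
  decidedAt: Date;
  supersedes?: string;  // id of the decision this replaces
}

// Follow the supersedes chain to its head to find the one
// current answer - the October decision or its later revision.
function current(records: DecisionRecord[], id: string): DecisionRecord {
  const next = records.find((r) => r.supersedes === id);
  return next ? current(records, next.id) : records.find((r) => r.id === id)!;
}
```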

And agents will expose the weaknesses in your clarity faster than any human would.

Because memory, goals, and principles only work when they’re explicitly declared.

Vague goals leave agents optimizing for the wrong thing. Undefined principles turn every edge case into a blocker. Lost decisions get relitigated - by your agents, by your team, every single time.

The Infrastructure for Autonomous Agents

Autonomous agents need the same infrastructure that makes teams work: shared context, shared direction, shared principles.

The teams that figure this out first won’t just have faster agents. They’ll have agents that can actually be trusted to operate - making good calls, staying aligned with intent, and knowing when to move without asking.

That’s what we’re building at Momental.