The Centralized AI Team Trap
Creating a centralized AI team repeats a mistake most engineering organizations have already made—and corrected—at least twice.
First it was QA. We centralized testing into a dedicated team, and every feature waited in a queue for validation. Then it was DevOps. We created a platform team that became a ticket-driven bottleneck instead of an enabler. Then data engineering. Same pattern, same outcome: a shared dependency that throttled the entire organization.
Now we’re doing it again with AI.
The structural problem is predictable. A centralized AI team must prioritize across every product domain simultaneously. The payments team needs fraud detection. The marketing team needs personalization. The ops team needs predictive maintenance. One team, one backlog, competing priorities. The result isn’t specialization—it’s a priority queue where most teams lose.
Conway’s Law makes this worse. A centralized AI team naturally produces centralized AI architecture: monolithic inference services, shared feature stores, one-size-fits-all model pipelines. The org structure becomes the system architecture, and neither scales.
The data confirms the bottleneck. Despite 88% of organizations using AI in at least one business function, only about 10% have fully scaled AI across their operations. Meanwhile, only 6% of companies fully trust AI agents with core business processes. The gap between adoption and scale is an organizational problem, not a technical one.
As Conflux, the consultancy extending Team Topologies to AI adoption, puts it: “Leaders expecting autonomous AI while denying team agency face failure.” You can’t automate what you haven’t first empowered humans to own.
AI Through the Team Topologies Lens
Team Topologies defines four fundamental team types. Each maps to a distinct role in AI adoption—and none of them is “the AI team.”
Stream-aligned teams own AI features in their domain. The payments team owns fraud detection AI. The search team owns ranking models. The customer support team owns their chatbot. These teams don’t need AI specialists as permanent members—they need AI literacy. The difference matters: literacy means engineers who can evaluate when to use a foundation model versus a rule engine, who can write effective prompts, who can instrument and monitor AI features in production. This is product engineering with AI as a tool, not a separate discipline.
Platform teams provide AI infrastructure as self-service. Model serving, inference gateways, cost guardrails, GPU scheduling, observability for AI workloads—this is infrastructure, and it follows the X-as-a-Service interaction mode. Stream-aligned teams consume AI platform capabilities the same way they consume compute, storage, or CI/CD: through well-documented APIs with clear SLAs. The platform team never builds domain-specific AI features. They build the roads; product teams drive on them.
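The X-as-a-Service contract can be made concrete with a sketch. Everything here is illustrative, not a real platform API: `InferenceGateway`, `InferenceRequest`, and the per-team token budgets are hypothetical names standing in for whatever your platform team exposes. The point is the shape of the boundary: routing, cost guardrails, and attribution live in the platform, so the payments team calls one method instead of filing a ticket.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model: str       # logical model name from the platform's registry
    prompt: str
    team: str        # cost attribution: the platform tracks spend per team
    max_tokens: int = 256

@dataclass
class InferenceResponse:
    text: str
    tokens_used: int

class InferenceGateway:
    """Platform-owned gateway: routing, budgets, and observability live
    here so stream-aligned teams never re-implement them."""

    def __init__(self, budgets):
        self._budgets = budgets   # token budget per team
        self._spend = {}

    def infer(self, req: InferenceRequest) -> InferenceResponse:
        spent = self._spend.get(req.team, 0)
        if spent + req.max_tokens > self._budgets.get(req.team, 0):
            raise RuntimeError(f"token budget exceeded for team {req.team!r}")
        # In production this would call the serving backend; stubbed here.
        self._spend[req.team] = spent + req.max_tokens
        return InferenceResponse(
            text=f"[{req.model}] response", tokens_used=req.max_tokens
        )
```

The design choice worth noting: the budget check sits in the gateway, not in each product team's code, which is exactly what makes the capability safely self-service.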
Enabling teams provide temporary coaching. Prompt engineering best practices. Model evaluation frameworks. Responsible AI guidelines. RAG architecture patterns. An enabling team works across stream-aligned teams to raise capability—then works itself out of a job in that domain. The key word is temporary. If the enabling team becomes a permanent dependency, you’ve recreated the centralized bottleneck under a different name.
Complicated subsystem teams handle genuine deep specialization. Custom model training on proprietary data. Novel architectures that don’t exist as managed services. Safety-critical inference systems requiring formal verification. The litmus test for whether work belongs here: does it require a PhD and six months of research, or can a competent engineer solve it with an API call and good prompting? If the latter, it belongs in the stream-aligned team. Most AI work today—including most LLM integration work—falls on the API-call side of that line.
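To make the API-call side of that litmus test tangible, here is a minimal sketch of a support-ticket classifier built from nothing but prompt construction and output parsing. `call_llm` is a stand-in for any hosted completion API (stubbed here so the example is self-contained); the category list and prompt wording are illustrative. Nothing in it requires a complicated subsystem team.

```python
CATEGORIES = ["billing", "shipping", "account", "other"]

def call_llm(prompt: str) -> str:
    # Stand-in for a hosted model call; returns a canned label here
    # so the sketch runs without network access or credentials.
    return "billing"

def classify_ticket(text: str) -> str:
    # The "AI work" is a prompt template plus defensive parsing of
    # the model's reply, not model research.
    prompt = (
        "Classify the support ticket into exactly one of: "
        + ", ".join(CATEGORIES)
        + ".\nReply with the category only.\n\nTicket: " + text
    )
    label = call_llm(prompt).strip().lower()
    return label if label in CATEGORIES else "other"
```

If the whole feature fits in a function like this, it belongs on the stream-aligned team's roadmap, not in a specialist queue.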
Conway’s Law for AI Agents
Your AI agent architecture will mirror your org structure—whether you want it to or not.
Siloed departments produce siloed agents. A company with separate customer service, sales, and operations teams will build three disconnected AI agents that can’t share context, can’t hand off workflows, and can’t reason across domain boundaries. The organizational walls become architectural walls.
This is where the Inverse Conway Maneuver becomes critical for AI. Instead of letting your current org chart dictate your agent architecture, design the ideal agent architecture first—then shape teams to support it. What does good look like? Agents that can collaborate across domains, share context through well-defined interfaces, and operate within clear boundaries of autonomy. That requires teams structured the same way.
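One way to see what "well-defined interfaces" between agents might look like is a handoff envelope plus a registry, sketched below. The names (`Handoff`, `AgentRegistry`) and the routing scheme are hypothetical, not any particular framework's API; the point is that if support, sales, and ops agents all speak this contract, workflows can cross domain boundaries instead of dead-ending at an organizational wall.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    from_agent: str
    to_agent: str
    intent: str    # what the receiving agent is being asked to do
    context: dict  # accumulated state the next agent needs

class AgentRegistry:
    """Shared routing layer: any domain agent can hand work to any
    other through one contract, regardless of which team owns it."""

    def __init__(self):
        self._agents = {}

    def register(self, name, handler):
        self._agents[name] = handler

    def route(self, handoff: Handoff):
        handler = self._agents.get(handoff.to_agent)
        if handler is None:
            raise LookupError(f"no agent registered as {handoff.to_agent!r}")
        return handler(handoff)
```

For example, a support agent that detects an upsell opportunity hands off to the sales agent with the conversation context attached, rather than dropping the thread at the department boundary.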
The interaction modes from Team Topologies map directly to AI maturity stages. Early adoption is Collaboration: AI engineers and domain teams working closely together, high bandwidth, lots of discovery. As patterns stabilize, shift to Facilitating: enabling teams coaching stream-aligned teams to become self-sufficient. At maturity, the model is X-as-a-Service: platform capabilities consumed on demand with minimal coordination. This mirrors the Shuhari progression—from learning the rules, to breaking the rules, to transcending them.
As Manuel Pais, co-author of Team Topologies, warns: organizations that locally optimize code generation while ignoring downstream bottlenecks in testing, deployment, and operations will find AI amplifies dysfunction rather than resolving it. Gartner projects 40% of enterprise apps will feature agentic AI by end of 2026. The question isn’t whether agents are coming—it’s whether your org structure will let them deliver value or just create more sophisticated bottlenecks.
Making the Shift
If you currently have a centralized AI team, don’t dissolve it overnight. Migrate deliberately.
1. Audit your AI work. Separate platform infrastructure (model serving, inference pipelines, cost management) from domain-specific features (fraud detection, personalization, forecasting). Most centralized teams are doing both, and the two require fundamentally different ownership models.
2. Build the platform layer first. Before moving anyone, ensure stream-aligned teams have self-service AI infrastructure to build on. This means inference endpoints, model registries, prompt management tools, and observability—all consumable without filing a ticket. If you disband the centralized team before the platform exists, you get chaos.
3. Embed AI engineers into stream-aligned teams. Not as consultants or dotted-line advisors—as full team members who participate in sprint planning, own code in production, and share on-call rotations. They bring AI expertise; the team provides domain context. The combination is what produces AI features that actually solve business problems.
4. Convert part of the former centralized team into an enabling function. These engineers become coaches, not builders. They run workshops on prompt engineering, help teams evaluate models, establish responsible AI practices, and create reusable patterns. Their success metric is how quickly teams become self-sufficient—not how many AI features they personally deliver.
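The audit in step 1 can start as something as simple as a keyword pass over the centralized team's backlog, sketched below. The signal lists are illustrative placeholders, not a real taxonomy, and anything ambiguous still needs a human call; the value is forcing an explicit platform-versus-domain split before anyone is moved.

```python
# Hypothetical signal lists for a first-pass backlog audit.
PLATFORM_SIGNALS = {"serving", "gateway", "gpu", "registry", "observability", "cost"}
DOMAIN_SIGNALS = {"fraud", "personalization", "forecasting", "chatbot", "ranking"}

def audit(item: str) -> str:
    """Tag a backlog item as platform work, stream-aligned work,
    or needing manual review."""
    words = set(item.lower().split())
    if words & PLATFORM_SIGNALS:
        return "platform"
    if words & DOMAIN_SIGNALS:
        return "stream-aligned"
    return "review"  # ambiguous items get a human decision
```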
Signs it’s working: stream-aligned teams ship AI features without waiting on a central backlog. AI capability appears on team roadmaps as a tool, not as a dependency on another team. McKinsey estimates that 2-5 humans can effectively supervise 50-100 AI agents—but only when teams have the autonomy and context to direct those agents toward the right problems.
The Real Question
The question was never “where does the AI team go on the org chart?”
The question is whether your organization’s structure enables AI capabilities to flow—or forces them through a single point of failure.
Look at your org chart. If AI is a box on the side, you’ve designed a bottleneck. If it’s distributed across your teams—owned by stream-aligned teams, powered by platform, coached by enablers, and deepened by specialists—you’ve designed for flow.
The organizations that scale AI won’t be the ones with the biggest centralized AI team. They’ll be the ones that made AI everyone’s job.