Many simple agents working in parallel on variations of a problem, synthesised into one output.
Swarm intelligence trades depth for breadth. Rather than one sophisticated agent analysing a problem thoroughly, many simpler agents each tackle the same problem from a different angle, and a synthesis layer combines their outputs into a more comprehensive and reliable whole.
A swarm architecture assigns the same core task to multiple agents with variations: different system prompts emphasising different analytical frameworks, different seed information, different tool access, or different output formats. The synthesis layer collects all outputs and applies an aggregation strategy — majority vote for categorical decisions, weighted averaging for numerical ones, or a meta-agent that identifies the most credible and internally consistent set of outputs. The critical design choice is diversity: swarm agents that are too similar will produce highly correlated outputs that look like independent verification but aren't. The critical operational choice is cost: N agents running in parallel cost N times as much as a single agent.
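The two simplest aggregation strategies mentioned above can be sketched in a few lines. This is a minimal illustration, not a production synthesis layer; the example labels and weights are invented:

```python
from collections import Counter

def majority_vote(labels):
    """Categorical aggregation: the most common label wins.
    Returns the winning label plus the agreement ratio, which the
    synthesis layer can use as a confidence signal."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(labels)

def weighted_average(values, weights=None):
    """Numerical aggregation: optionally weight agents by their
    historical calibration or credibility."""
    if weights is None:
        weights = [1.0] * len(values)
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Hypothetical swarm outputs: eight agents classifying the same target
outputs = ["high-risk", "high-risk", "medium-risk", "high-risk",
           "high-risk", "medium-risk", "high-risk", "low-risk"]
decision, agreement = majority_vote(outputs)
# decision == "high-risk", agreement == 0.625
```

Note that the agreement ratio is returned alongside the decision rather than discarded: a 5-of-8 majority is a materially weaker result than an 8-of-8 consensus, and downstream consumers should see the difference.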
An investment bank uses a swarm pattern for M&A target assessment. Eight agents are dispatched, each with a different analytical lens: financial performance, ESG profile, regulatory history, competitive positioning, technology stack, management team quality, customer concentration risk, and geographic risk. A synthesis agent reads all eight assessments and produces a consolidated risk scorecard, explicitly noting where agents disagreed and flagging the disagreements for analyst attention. The swarm produces more comprehensive analysis in 12 minutes than a single analyst could produce in 6 hours.
For tasks where completeness and diverse perspectives matter more than speed, swarm intelligence outperforms single-agent approaches by design. It is also a hallucination detection strategy: when five of eight agents agree and three disagree significantly, the disagreement is a signal that the confident majority may be wrong. Swarm patterns surface uncertainty rather than hiding it.
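Treating disagreement as a signal rather than noise can be made mechanical. A sketch, assuming a categorical task and an illustrative review threshold of 75% agreement:

```python
from collections import Counter

def disagreement_signal(labels, threshold=0.75):
    """Flag a swarm result for human review when agreement among
    agents falls below the threshold. Surfaces uncertainty instead
    of hiding it behind a single consolidated answer."""
    counts = Counter(labels)
    _, top = counts.most_common(1)[0]
    agreement = top / len(labels)
    return {"agreement": agreement, "needs_review": agreement < threshold}

# Five of eight agents agree: below the 0.75 threshold, so escalate
signal = disagreement_signal(["A"] * 5 + ["B"] * 3)
# signal == {"agreement": 0.625, "needs_review": True}
```

The threshold is a policy choice, not a constant: a compliance-critical task might escalate anything below unanimity, while a low-stakes triage task might accept a bare majority.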
How this pattern fails in practice — and what to watch for.
The swarm agents are too similar — same base model, similar prompts, same context. Their outputs are highly correlated. The synthesis layer reports confident consensus because all agents agree, but the agreement reflects shared bias, not independent verification. The swarm incurs the cost of parallelism without the benefit of diversity.
The synthesis agent — the component that combines all swarm outputs — fails. All sub-agent work is discarded. If the synthesis failure is detected, the task can be retried at the cost of duplicate sub-agent work. If it is not detected, the pipeline reports the task as complete even though no usable output was ever produced.
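The duplicate-work half of this failure mode has a cheap mitigation: checkpoint sub-agent outputs before synthesis, so a synthesis retry skips the expensive parallel work. A minimal sketch, with `run_swarm_with_checkpoint` and its file-based cache as hypothetical names:

```python
import json
import pathlib

def run_swarm_with_checkpoint(task_id, sub_agents, synthesise,
                              cache_dir="swarm_cache"):
    """Persist sub-agent outputs to disk before synthesis. If the
    synthesis step fails and the task is retried, the cached outputs
    are reused instead of re-running every sub-agent."""
    cache = pathlib.Path(cache_dir) / f"{task_id}.json"
    if cache.exists():
        outputs = json.loads(cache.read_text())  # reuse prior sub-agent work
    else:
        outputs = [agent(task_id) for agent in sub_agents]
        cache.parent.mkdir(parents=True, exist_ok=True)
        cache.write_text(json.dumps(outputs))
    # If synthesise raises here, the cache survives for the retry.
    return synthesise(outputs)
```

A production version would also want a cache-expiry policy and an explicit completion record, so the silent-failure case (task "done" with no output) becomes detectable.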
A swarm of 20 agents running on a premium model for a complex task looks affordable in testing. At production scale — hundreds of swarm activations per day — the inference cost is unsustainable. The pattern was designed without cost modelling at realistic production volume.
Seven things to verify before deploying this pattern in production.
Swarm intelligence is a less common topic in the AIDA exam but appears in CAIG and CAIAUD in the context of accountability: when five agents contribute to an output, who is accountable for it? CAIAUD auditors are expected to identify the governance gap that arises when synthesis agents make decisions based on sub-agent outputs without human review of the synthesis logic.
The AIDA certification covers all 21 agentic design patterns with a focus on deployment safety, governance, and the PSF. Free to attempt.