Capability is occupied
Capability thinking is one of the most powerful instruments in architecture, strategy and planning. Organisations map it, measure it and build frameworks around it. Decisions depend on it. And it earns that reliance: when used with precision, capability thinking clarifies what an organisation can achieve, what it actually does and where the gap between the two lives.
The challenge lies in that precision. Capability is occupied: it does different work in different contexts, and no single definition can hold all of it. That is not a flaw. It is a feature, if you know how to read it.
The Open Group's OAA Standard makes this readable in an unusual way.
The OAA Standard: a signal in two chapters
The Standard defines capability in two places. In chapter 2, once: an ability that an organisation, person, or system possesses. Clean, portable, uncontested. In chapter 17, six times, and then it explains why none of the six is quite right.
That gap is not an editorial inconsistency. It is a signal. The Standard that settled capability in two sentences in its definitions chapter needs six definitions, four paragraphs of critique and a position statement to handle it in the context of operations architecture. Something more is going on.
What the Standard finds wanting is the tendency to define capability as an activity, even a self-contained one. That framing creates two problems: terminology confusion and solution-space closure. If capability means activity, the range of possible solutions narrows before the problem is properly understood.
The Standard's own position is that operational capabilities should refer to outcomes the operating system delivers: what it does well, or should do well. Its examples are deliberately concrete: delivering inexpensive food quickly from a standard menu with a well-defined quality level. Not what the organisation has. Not what it does. What it achieves, reliably, under conditions.
To sharpen this, the Standard cites Amartya Sen: when a person enjoys a capability, it implies they have freedom to exercise it. Having a capability and being able to exercise it are not the same thing. That distinction, introduced here to argue for outcome-orientation, will matter again later.
The OAA is mapping one district. The full territory is wider, and the perspectives below are a guided entry into it.
Six perspectives, six questions
Capability is not confused. It is occupied and doing different work in different contexts. The table below maps six perspectives, each asking a genuinely different question.
| Perspective | Core question |
|---|---|
| Enterprise modelling | What are we structured to do? |
| Strategic planning | What can we achieve? |
| Operational performance | What is demonstrably done? |
| Human development | Can this capability actually be exercised? |
| Systems engineering | What fails when components interact? |
| Adaptive systems | What becomes possible when components interact? |
This is not an exhaustive survey. It covers the perspectives most relevant to enterprise and security architects; it includes the adoption condition because the OAA itself introduces it through Sen; and it highlights stress and complexity because these are where capabilities most visibly fail or surprise in current practice.
Behind these different questions lie two fundamental types of capability claim. Strategic capabilities describe what an organisation is structured to do or could achieve: they are claims about potential. Operational capabilities describe what is demonstrably done: they are claims about reality. Most capability confusion begins when one is mistaken for the other.
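To make the distinction harder to lose, a capability register can carry the nature of each claim explicitly. The sketch below is a minimal illustration in Python (the class names and example entries are hypothetical, not drawn from any cited framework): every claim declares whether it speaks about potential or about demonstrated reality.

```python
from dataclasses import dataclass
from enum import Enum


class Nature(Enum):
    STRATEGIC = "strategic"      # a claim about potential: what we are structured to do or could achieve
    OPERATIONAL = "operational"  # a claim about reality: what is demonstrably done


@dataclass
class CapabilityClaim:
    name: str
    nature: Nature
    statement: str  # what the claim asserts, in the organisation's own words


# Hypothetical register entries. The same capability name can appear twice,
# once as potential and once as demonstrated reality; keeping the two apart
# is what stops a roadmap claim from being read as evidence.
register = [
    CapabilityClaim("Product management", Nature.STRATEGIC,
                    "We could deliver a managed product portfolio"),
    CapabilityClaim("Incident response", Nature.OPERATIONAL,
                    "Incidents are triaged within one hour, evidenced by ticket data"),
]

for claim in register:
    print(f"{claim.name}: {claim.nature.value} claim - {claim.statement}")
```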
What goes wrong
When capability types are confused, the consequences are consistent and predictable. The pattern depends on which type is being misused, but in each case something specific breaks and it breaks in a characteristic way.
The names for those failures will be mapped precisely once the full model is in place. But the shape of the problem is worth stating now: confusion between capability types does not produce random errors. It produces structural ones, the kind that are invisible from inside the frame that created them and only visible from outside it.
That is why the question 'which type of capability is this?' matters before the modelling begins. The answer determines what the capability can claim, how it should be assessed and where it is most likely to fail.
The human condition as criterion
The OAA's citation of Sen is not decorative. It introduces a question that most capability frameworks do not ask: can the people this capability is meant to serve actually exercise it?
A security awareness programme that treats people as the problem rather than as the asset. An incident reporting process that creates fear rather than safety. A zero trust architecture that assumes digital literacy that users do not have. In each case the capability exists formally: it is mapped, documented and assessed. And in each case it fails in practice because adoption was never treated as a condition at all.
This is what Sen means by real freedom. Resources, activities and outcomes are not the same as the freedom to exercise them. A capability that cannot be exercised by the people who need it is not a capability. It is a representation of one.
That observation does not just name a failure mode. It introduces a concept: conditions as a criterion that any capability must be tested against. Adoption is one such condition. It will not be the last.
Conditions
If adoption is a condition, the question follows immediately: what else qualifies?
Adoption
Adoption was introduced through Sen in the previous section, but it deserves its place here as a named condition in its own right. The condition is not whether the capability exists formally; it is whether the people it is meant to serve can actually use it. That question is worth an article on its own.
Stress
A capability that works reliably in a defined operating environment may behave very differently under adversarial pressure. Individual controls are assessed; the system is not. An attack does not present as a single control failure: it presents as a sequence of interactions, a phishing email that exploits a gap between email security and endpoint detection, compounded by an exception workflow in patch management that was never assessed, compounded by an incident response plan that assumes communication channels that have already been compromised. The capability that failed was not visible from any single control's perspective. It only becomes visible at the level of the whole system, under the conditions of the actual attack.
Complexity
As systems are designed to self-organise, to learn and to produce outcomes beyond what their components were individually specified to deliver, emergence is no longer only a risk to manage. It is a capability to design for. Cynefin gives architects a language for this: the distinction between complicated systems, where good practice applies, and complex systems, where emergent practice must be discovered. The failure mode under complexity is not blindness: it is drift. The system produces something real and valuable that nobody has taken responsibility for.
Maturity
There is a condition that is easy to miss because it looks like a scale rather than a criterion. A capability assessed at CMMI level 2 operates under different conditions than one at level 4. The processes are less defined, the repeatability is lower and the confidence warranted by the assessment is correspondingly weaker. The maturity journey from Initial to Optimising is itself a condition: it shapes what the capability can reliably claim and what evidence can be trusted.
This has a practical implication. Maturity models (CMMI, COBIT 2019, ITIL) are excellent instruments for identifying process gaps and creating improvement roadmaps. They are poor instruments for asserting that a capability will hold under conditions it has not been tested against. A level 3 or level 4 assessment is primarily a claim about process: this activity is defined, documented, managed and measured. It is a substantially weaker claim about outcomes. The right use of a maturity model is as a structured baseline, not a capability validator.
But this raises a harder question. If maturity is a condition, how does a capability move through it? Process definition alone does not improve a capability. Something has to close the gap between what the organisation claims it can achieve and what it demonstrably does. That something is a feedback loop.
Conditions, then, is a criterion layer that sits above the perspective table. It asks: does this capability hold in the environment it must actually operate in? Adoption, Stress, Complexity and Maturity are four named instances introduced here. Others follow, including Transformation and Change, and others still exist beyond the table: regulatory, environmental, cultural. The question is always the same.
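As a minimal sketch of how that criterion layer might be carried alongside a capability record (the class name, fields and example are hypothetical, not a prescribed schema; only the condition names come from this section): each capability records which conditions it has actually been tested against, and the gap is what the assessment cannot speak for.

```python
from dataclasses import dataclass, field

# Named conditions from this section; the list is open-ended by design.
CONDITIONS = ("adoption", "stress", "complexity", "maturity", "transformation", "change")


@dataclass
class Capability:
    name: str
    # Map of condition -> has this capability been tested against it?
    tested_under: dict[str, bool] = field(default_factory=dict)

    def untested_conditions(self, required: tuple[str, ...]) -> list[str]:
        """Return the required conditions this capability has no evidence for."""
        return [c for c in required if not self.tested_under.get(c, False)]


# Hypothetical example: incident response assessed for maturity but never for stress or adoption.
incident_response = Capability(
    "Incident response",
    tested_under={"maturity": True, "stress": False},
)

print(incident_response.untested_conditions(("maturity", "stress", "adoption")))
# -> ['stress', 'adoption']: the capability may exist formally, but these are
#    the environments it has not been shown to hold in.
```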
The feedback loop
Strategic capabilities need operational grounding. A capability roadmap that is never tested against operational evidence drifts. An enterprise model that is never updated from operational reality becomes fiction. The feedback loop, operational evidence revising strategic intent, is what keeps the two honest with each other and what makes maturity progression real rather than nominal.
Digital twins, cyber ranges and continuous control monitoring are all instruments that operationalise this loop. They are not a separate category of capability. They are operational capabilities whose specific function is to close the gap between what the organisation claims it can achieve and what it demonstrably does.
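A hedged sketch of that loop in code terms, assuming hypothetical metric names and thresholds rather than any cited framework's: a strategic claim is periodically compared against operational evidence from monitoring, and it is the claim that gets revised when the evidence does not support it.

```python
from dataclasses import dataclass


@dataclass
class StrategicClaim:
    capability: str
    claimed_outcome: str   # what the organisation says it can achieve
    target: float          # e.g. fraction of priority-1 incidents contained within SLA


@dataclass
class OperationalEvidence:
    capability: str
    observed: float        # what monitoring actually measured over the period


def close_the_loop(claim: StrategicClaim, evidence: OperationalEvidence,
                   tolerance: float = 0.05) -> str:
    """Compare claimed potential with demonstrated reality and say what to do next."""
    gap = claim.target - evidence.observed
    if gap <= tolerance:
        return f"{claim.capability}: evidence supports the claim (gap {gap:.2f})"
    # The loop's job: operational evidence revises strategic intent, not the reverse.
    return (f"{claim.capability}: revise the claim or invest - "
            f"claimed '{claim.claimed_outcome}' at {claim.target:.2f}, "
            f"demonstrated {evidence.observed:.2f}")


# Hypothetical figures from a continuous-monitoring feed.
print(close_the_loop(
    StrategicClaim("Incident response", "Contain priority-1 incidents within 4 hours", 0.95),
    OperationalEvidence("Incident response", 0.72),
))
```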
NIST CSF makes this structure visible. The core functions (Govern, Identify, Protect, Detect, Respond, Recover) are strategic capability statements. The implemented controls beneath them are operational. The whole framework exists to answer the conditions question: do these capabilities hold under adversarial pressure? The feedback loop is what makes the answer honest.
Emergence connects here too. You cannot design for emergence you cannot see. The Operational → Strategic arrow in the adaptive systems perspective runs in that direction deliberately: observation comes first, design intent follows. Closing that loop is what makes adaptive capability governable rather than merely accidental.
The model
The sections above have been building toward a model. It has four elements: two capability categories, a feedback loop and a condition criterion. Before the full perspective table is assembled, the backbone is worth stating clearly.
| Element | Role |
|---|---|
| Strategic | What the organisation claims it can achieve |
| Operational | What is demonstrably done |
| Feedback loop | Operational evidence revising strategic intent |
| Conditions | What the capability must hold under |
With that model in place, the full perspective table can be read as it is intended — not as a classification system, but as a diagnostic instrument. Each row names a perspective, the question it asks, the frameworks that operationalise it, its nature, the conditions it assumes or tests against and the failure mode that results when it is misused or confused with another.
| Perspective | Core question | Frameworks | Nature | Conditions | Failure mode |
|---|---|---|---|---|---|
| Enterprise modelling | What are we structured to do? | Zachman, BIZBOK | Strategic | Transformation | Omission |
| Strategic planning | What can we achieve? | TOGAF, Gartner, DoD | Strategic | Change | Illusion |
| Operational performance | What is demonstrably done? | CMMI, COBIT 2019, ITIL | Operational | Maturity | Myopia |
| Human development | Can this capability actually be exercised? | Sen, Nussbaum, UNDP, McKinsey OHI | Operational | Adoption | Fiction |
| Systems engineering | What fails when components interact? | INCOSE | Operational | Stress | Isolation |
| Adaptive systems | What becomes possible when components interact? | Cynefin | Operational → Strategic | Complexity | Drift |
The failure modes now have names:
| Mode | What breaks |
|---|---|
| Omission | The map looks complete, so what is missing is invisible. Enterprise models that have never been tested against operational reality. |
| Illusion | What could be achieved is overstated. Strategic roadmaps that assert capability without tracking whether the conditions for it are in place. |
| Myopia | What exists dominates. Operational processes that score well locally while missing the wider system they belong to. |
| Fiction | Formal capability is not real capability. Programmes that exist on paper and fail in practice because adoption was never tested. |
| Isolation | The system is assessed in pieces and the pieces do not add up. What emerges from interaction is invisible when each component is evaluated alone. |
| Drift | Capability forms and operates without governance. Adaptive systems that produce real outcomes that nobody has taken responsibility for. |
The model as diagnostic instrument
The table does three things. It explains why capability definitions differ: each perspective is asking a genuinely different question. It guides which definition to use, by naming the perspective before the modelling begins. And it diagnoses what goes wrong when the wrong definition is applied, by tracing the failure back to a mismatch between perspective, nature and condition.
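As an illustration of that diagnostic reading (a sketch only: the dictionary below simply transcribes the perspective table, and the function name is hypothetical), naming the perspective yields the nature of the claim, the condition it assumes and the failure mode to watch for.

```python
# Transcription of the perspective table: perspective -> (nature, condition, failure mode).
PERSPECTIVES = {
    "enterprise modelling":    ("strategic",               "transformation", "omission"),
    "strategic planning":      ("strategic",               "change",         "illusion"),
    "operational performance": ("operational",             "maturity",       "myopia"),
    "human development":       ("operational",             "adoption",       "fiction"),
    "systems engineering":     ("operational",             "stress",         "isolation"),
    "adaptive systems":        ("operational → strategic", "complexity",     "drift"),
}


def diagnose(perspective: str) -> str:
    """Name the claim type, assumed condition and characteristic failure mode."""
    nature, condition, failure = PERSPECTIVES[perspective]
    return f"{perspective}: nature={nature}, condition={condition}, failure mode={failure}"


print(diagnose("operational performance"))
# -> operational performance: nature=operational, condition=maturity, failure mode=myopia
```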
Four examples show this in practice. The first two show failures of the feedback loop and conditions. The last two show failures of nature mismatch: Strategic treated as Operational and Operational never grounded in Strategic intent.
Strategic failure
An organisation builds a capability roadmap for digital transformation. Product management is listed as a strategic capability. The roadmap looks complete and well-structured. Eighteen months later delivery is inconsistent, roadmaps drift, and stakeholders have lost confidence. The post-mortem finds the capability was defined and assessed strategically (what we could achieve) but never grounded operationally. Nobody asked what was demonstrably done. The trigger was clear. The direction was set. But the feedback loop was never closed.
| Summary | |
|---|---|
| Capability | Product management |
| Nature | Strategic: a claim about potential, not demonstrated reality |
| Perspective | Strategic planning: what can we achieve? |
| Condition | Change: never tested against operational evidence to confirm the direction was achievable |
| Failure mode | Illusion: what was possible was overstated because reality was never consulted |
| Root cause | Feedback loop: operational evidence never revised the strategic claim |
Operational failure
A security programme achieves CMMI level 3 for incident response. Controls are documented, processes are repeatable, coverage looks complete. Then a supply chain attack produces an incident that each individual control handles correctly — and the system fails anyway. The post-incident review finds the gap was between the controls, not within them. Each process performed as assessed. The system did not.
| Summary | |
|---|---|
| Capability | Incident response |
| Nature | Operational: evidence-grade, but scoped to individual processes |
| Perspective | Operational performance: what is demonstrably done? |
| Condition | Stress: adversarial pressure the assessment was never designed to test |
| Failure mode | Isolation: the interaction between controls was invisible when each was assessed alone |
| Root cause | Conditions: the stress condition was never applied to the assessment |
Nature mismatch: Strategic treated as Operational
A security programme lists real-time threat intelligence as an operational capability. It appears in the maturity assessment at level 3. The process is documented, the tooling is in place and the coverage looks complete. Eighteen months later a significant breach occurs through a threat vector that was well-known in the intelligence community. The post-mortem finds the capability was assessed against process consistency (was the feed ingested, was it documented, was it reviewed) but never against outcomes. The organisation had the process. It did not have the capability.
| Summary | |
|---|---|
| Capability | Real-time threat intelligence |
| Nature | Strategic: a claim about potential, not demonstrated reality |
| Perspective | Operational performance: what is demonstrably done? |
| Condition | Maturity: process consistency measured, outcome delivery never tested |
| Failure mode | Illusion: the maturity score overstated what the capability actually delivered |
| Root cause | Nature mismatch: a Strategic capability treated as Operational evidence |
Nature mismatch: Operational treated as Strategic
A security team has built a mature vulnerability management programme over several years. Scanning is consistent, remediation cycles are tracked and the process scores well on every assessment. But the programme has never been connected to the organisation's strategic risk posture. High-severity vulnerabilities in non-critical systems are remediated ahead of medium-severity vulnerabilities in business-critical ones, because the process optimises for severity score, not for strategic exposure. The capability works. It is working on the wrong thing.
| Summary | |
|---|---|
| Capability | Vulnerability management |
| Nature | Operational: demonstrably done, but never grounded in Strategic intent |
| Perspective | Operational performance: what is demonstrably done? |
| Condition | Change: the strategic risk posture shifted and the operational programme did not follow |
| Failure mode | Myopia: local optimisation, missing the wider strategic picture |
| Root cause | Nature mismatch: Operational capability never connected to Strategic intent |
What experienced architects already know
Experienced architects already navigate this territory. They move between perspectives intuitively, adjust their language to the context and ask the conditions question without always naming it as such. When a senior architect says 'that capability looks good on paper, but I want to see it under pressure', that is the conditions criterion being applied. When they say 'we are mapping what we want to be, not what we are', that is the Strategic nature being distinguished from Operational. When they push back on a maturity assessment by asking 'but does it actually work?', that is the feedback loop being demanded.
What this model offers is not new knowledge for experienced architects. It is a shared language for the conversations they are already having, precise enough to use across disciplines and open enough for the people who work alongside architects to follow the same reasoning. Capability thinking is not the exclusive property of those who know the frameworks. It is available to anyone who knows which question they are asking.
The proliferation of definitions and models across this post is itself a signal: capability resists unification not because the concept is confused, but because it is doing genuinely different work in different contexts. The multiplicity is not a problem to be resolved: it is the structure of the territory.
An architect or strategist who is fluent across these meanings, who can move between Strategic and Operational, who understands what the Conditions column is asking, who knows which maturity model they are reaching for and why and who has asked what each capability must hold under: that person is not navigating confusion. They are working with the full map.
Name the perspective before the modelling begins. Then ask which of your capabilities still hold and under what conditions.
References
Standards and frameworks
- The Open Group Architecture Framework (TOGAF): The Open Group. TOGAF Standard, various editions.
- OAA Standard (Operations Architecture): The Open Group. Operations Architecture Standard.
- Zachman Framework: Zachman, J.A. (1987). "A Framework for Information Systems Architecture." IBM Systems Journal, 26(3).
- BIZBOK Guide: Business Architecture Guild. A Guide to the Business Architecture Body of Knowledge (BIZBOK® Guide).
- CMMI: CMMI Institute. Capability Maturity Model Integration (CMMI).
- COBIT 2019: ISACA. COBIT 2019 Framework.
- ITIL: AXELOS. ITIL 4 Foundation.
- NIST Cybersecurity Framework (CSF): National Institute of Standards and Technology. Cybersecurity Framework, Version 2.0 (2024).
- INCOSE Systems Engineering Handbook: INCOSE. Systems Engineering Handbook, 5th ed. (2023). Wiley.
- Cynefin Framework: Snowden, D.J. and Boone, M.E. (2007). "A Leader's Framework for Decision Making." Harvard Business Review, 85(11), pp.68–76.
- DoD Architecture Framework (DoDAF): U.S. Department of Defense. DoD Architecture Framework, Version 2.02.
Academic and human development
- Sen, Amartya: Sen, A. (1999). Development as Freedom. Oxford University Press.
- Nussbaum, Martha: Nussbaum, M.C. (2011). Creating Capabilities: The Human Development Approach. Harvard University Press.
- UNDP Human Development Reports: United Nations Development Programme. Human Development Reports.
- McKinsey Organizational Health Index (OHI): McKinsey & Company. Organizational Health Index.