In the previous post, I distinguished between two ideas that are often collapsed into one another: what is possible and what is likely. Probability tells us how plausible some future may be, given what we know or believe. Architecture determines which futures remain structurally admissible in the first place.
This distinction is important, but it can still feel abstract. This post provides a small example to help make it more concrete.
The example is intentionally simple. It is not meant to capture every detail of a real engineering program. Its purpose is to show how architectural constraints shape the future of a system before probability, preference, or optimization enters the discussion.
The central lesson is that two development paths may each be locally reasonable while becoming jointly difficult to integrate. In systems engineering terms, this is a reason why integration, verification, architectural governance, and shared conceptual models matter so much. They are not merely process overhead. They are mechanisms for preserving convergence among parallel architectural futures, consistent with the systems engineering emphasis on life-cycle integration, verification, validation, and technical management [7].
The Starting Point
Suppose an organization is developing a software-intensive robotic platform. At an initial architectural state, call it $A_0$, the system has three broad regions:
- a perception subsystem,
- a planning subsystem,
- a control subsystem.
At $A_0$, the architecture still admits several futures. The perception team can improve sensor processing. The planning team can refine behavior generation. The control team can improve execution and stability. These are not yet incompatible directions.
In the notation used earlier in this series, $A_0$ defines an admissible future cone $\mathcal{C}(A_0)$.
This cone is not a probability distribution. It is the set of futures still allowed by the current architectural commitments. Within that cone, some futures may be more plausible than others. But probability only applies after the space of admissibility has already been shaped.
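To make the cone idea tangible, here is a minimal sketch in Python. The state names and transition structure are invented for illustration: architectural states are nodes in a directed graph of currently admissible moves, and the cone is simply the set of states reachable from $A_0$. A probability distribution, if one were wanted, would live *over* this set; it cannot substitute for it.

```python
from collections import deque

def future_cone(transitions, start):
    """Return the admissible future cone of `start`: the set of states
    reachable via currently admissible transitions (breadth-first)."""
    cone, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in cone:
                cone.add(nxt)
                frontier.append(nxt)
    return cone

# Hypothetical transition structure: from A0, perception and planning
# improvements are each admissible, and they can still be combined.
transitions = {
    "A0": ["A0+perc", "A0+plan"],
    "A0+perc": ["A0+perc+plan"],
    "A0+plan": ["A0+perc+plan"],
}
print(sorted(future_cone(transitions, "A0")))
```

Removing one edge from `transitions` shrinks the cone without changing anyone's beliefs about likelihoods, which is exactly the ontic/epistemic distinction at work.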
Two Parallel Chains
Now imagine two teams move forward in parallel. The perception team follows one causal chain

$$P_0 \to P_1 \to P_2 \to P_3.$$

The planning team follows another:

$$Q_0 \to Q_1 \to Q_2 \to Q_3.$$
Note that each step is locally sensible. The perception team improves object detection, adds richer uncertainty estimates, and changes the format of the messages it publishes. The planning team improves route selection, adds new behavioral modes, and changes assumptions about what information it expects from upstream components. Neither team is acting irrationally. Each is moving along a plausible local future.
The problem is that local plausibility is not the same as joint admissibility.
The Integration Frontier
Eventually the two chains must meet at an integration point. Let us call that point $A^\ast$.
The question is not simply whether the perception changes or the planning changes were individually useful. The question is whether the two chains still converge into a mutually admissible architectural future.
Formally, we can ask whether there exists an integration state $A^\ast$ such that

$$A^\ast \in \mathcal{C}(P_3) \quad \text{and} \quad A^\ast \in \mathcal{C}(Q_3).$$
If such an $A^\ast$ exists without major rework, the chains remain convergence-compatible. If such an $A^\ast$ exists only after substantial reinterpretation, mediation, refactoring, or governance intervention, the chains have diverged in a costly way. If no acceptable $A^\ast$ exists within current constraints, the architecture has suffered a convergence failure.
This is the practical meaning of an architectural frontier. It is not merely the next design review. It is the boundary at which independently plausible futures must become jointly coherent. DSM-style dependency reasoning is useful here because it gives a compact way to represent the coupling structure through which such convergence pressures propagate [1, 2].
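The existence check can be sketched as a reachability intersection. The sketch below is a toy model with invented state names: compute the cone of each chain endpoint and ask whether the two cones share any state.

```python
from collections import deque

def reachable(transitions, start):
    """Breadth-first reachable set of `start` (the endpoint's future cone)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def convergence_states(transitions, p_end, q_end):
    """Integration states reachable from both chain endpoints."""
    return reachable(transitions, p_end) & reachable(transitions, q_end)

# Hypothetical admissible moves after each chain's final commitment:
# perception can reach the integration state directly, while planning
# first needs an adapter step.
transitions = {
    "P3": ["P3'", "A*"],
    "Q3": ["Q3'"],
    "Q3'": ["A*"],
}
print(convergence_states(transitions, "P3", "Q3"))
```

An empty intersection is the convergence failure case: no shared future exists under current commitments, regardless of how confident either team feels.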
A Small Diagram
Figure 1 shows the structure of the example.

Figure 1: Two locally plausible development chains must converge at an integration frontier. The difficulty is not only whether each chain is valuable, but whether their endpoint commitments remain jointly admissible.
Where Probability Enters
At any point, managers and engineers may assign plausibilities to possible outcomes, and different people may assign different plausibilities to the same outcome. They may believe that one implementation strategy is more likely to succeed than another. They may estimate schedule risk, technical risk, or market value. Those judgments matter. But they are epistemic. They concern belief, uncertainty, and expectation.
However, the architectural question is prior: what futures are still structurally admissible after these commitments have been made?
Suppose the perception team changes its output from deterministic object labels to probabilistic occupancy fields. At the same time, the planning team assumes a discrete symbolic world model. Each decision may be defensible. But together they may create a semantic mismatch.
The issue is not that integration became improbable in some subjective Bayesian sense. The deeper issue is that the two chains now require a translation layer, a reinterpretation of assumptions, or a change in one of the chains. The future cone has narrowed.
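The mismatch can be made concrete with a small sketch. All names here are invented: perception now publishes probabilistic occupancy values, while the planner still expects discrete labels, so a translation layer must collapse probabilities back into symbols. Note what the adapter does: it introduces a new commitment (the threshold) and discards information (the uncertainty estimate) at the seam.

```python
def perception_output():
    """Hypothetical new perception interface: occupancy probabilities."""
    return {"cell_07": 0.82, "cell_08": 0.35}

def plan(labels):
    """Hypothetical planner: expects {"cell": "occupied" | "free"}."""
    return [cell for cell, label in labels.items() if label == "free"]

def occupancy_to_labels(occupancy, threshold=0.5):
    """Translation layer bridging the two chains. The threshold is a
    new architectural commitment, and the uncertainty information is
    lost in the collapse."""
    return {c: ("occupied" if p >= threshold else "free")
            for c, p in occupancy.items()}

print(plan(occupancy_to_labels(perception_output())))  # ['cell_08']
```

The code runs, so integration "works"; the narrowing of the cone shows up as the threshold parameter and the discarded uncertainty, not as an error message.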
A future that once lay inside $\mathcal{C}(A_0)$ may no longer be directly accessible from the combined state $(P_3, Q_3)$.
Convergence Plausibility
We can describe this situation using a simple idea: convergence plausibility.
Convergence plausibility is not the probability that one local chain succeeds. It is the plausibility that a family of chains can still meet at an acceptable future state. For two chains $P$ and $Q$, the relevant question is:

$$\exists\, A^\ast \ \text{such that} \ A^\ast \in \mathcal{C}(P_3) \ \text{and} \ A^\ast \in \mathcal{C}(Q_3).$$
But in engineering practice, existence alone is not enough. We care about cost, delay, risk, semantic distortion, verification burden, and governance acceptability.
So the practical question becomes: does there exist an integration state $A^\ast$ that is reachable at acceptable cost and preserves the distinctions that matter?
This connects directly to architecture assessment. Integration reviews, interface control, model alignment, and verification planning are not simply checking whether components work. They are checking whether separately evolving causal chains still preserve convergence.
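The cost-aware version of the question can also be sketched. Assuming each admissible move carries a rework cost (the edge weights and state names below are hypothetical), shortest-path costs from each endpoint identify which integration states are reachable within a budget:

```python
import heapq

def costs_from(edges, start):
    """Dijkstra: cheapest cumulative rework cost to each reachable state."""
    dist, heap = {start: 0}, [(0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist[s]:
            continue  # stale heap entry
        for t, w in edges.get(s, ()):
            if t not in dist or d + w < dist[t]:
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return dist

def affordable_integrations(edges, p_end, q_end, budget):
    """States reachable from both endpoints whose combined cost fits the budget."""
    cp, cq = costs_from(edges, p_end), costs_from(edges, q_end)
    return {s: cp[s] + cq[s]
            for s in cp.keys() & cq.keys()
            if cp[s] + cq[s] <= budget}

# Hypothetical rework costs (say, person-weeks) on each admissible move.
edges = {
    "P3": [("A1", 2), ("A2", 8)],
    "Q3": [("A1", 9), ("A2", 5)],
}
print(affordable_integrations(edges, "P3", "Q3", budget=12))
```

Existence and affordability come apart here: `A2` exists as an integration state, but only `A1` fits the budget, which is the difference between bare admissibility and practical convergence plausibility.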
A Roughness Interpretation
This example also illustrates a concept of architectural roughness, analogous to kinetic roughening in models of growing interfaces studied in connection with self-organized criticality (SOC) [3, 4].
If perception has advanced rapidly while planning remains organized around older assumptions, the architectural frontier is no longer even. There is a maturity gradient, a semantic gradient, or a verification gradient across a coupled seam.
Let $h_i$ represent the local architectural frontier position of location $i$ along some dimension $d$, such as semantic clarity, modernization, verification readiness, or operational maturity.
Let $w_{ij}$ denote the coupling weight on the directed architectural edge $i \to j$. Larger values of $w_{ij}$ mean that a change, mismatch, or assumption shift at location $i$ is more strongly transmitted to location $j$. In a practical assessment, $w_{ij}$ could be estimated from DSM dependencies, interface strength, shared data schemas, co-change frequency, verification coupling, or organizational dependency.
For a coupled edge $i \to j$, define edge roughness as

$$r_{ij} = w_{ij}\,\lvert h_i - h_j \rvert.$$
A large value means that two strongly coupled architectural locations have evolved unevenly. This is often where integration trouble appears. The problem is not merely that one part is immature. The problem is that unevenness occurs across a consequential coupling. The analogy to kinetic roughening is deliberate: interface-growth models study how uneven advancing fronts develop scale structure over time [3, 4].
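The roughness measure is easy to compute once frontier positions and coupling weights have been estimated. A minimal sketch, with invented maturity scores and DSM-style weights:

```python
def edge_roughness(coupling, frontier):
    """r_ij = w_ij * |h_i - h_j| for each coupled edge (i, j)."""
    return {(i, j): w * abs(frontier[i] - frontier[j])
            for (i, j), w in coupling.items()}

# Hypothetical frontier positions h_i (0 = legacy, 10 = modernized).
frontier = {"perception": 9.0, "planning": 3.0, "control": 6.0}

# Hypothetical coupling weights w_ij, e.g. estimated from a DSM.
coupling = {("perception", "planning"): 0.9,
            ("planning", "control"): 0.4,
            ("perception", "control"): 0.2}

r = edge_roughness(coupling, frontier)
worst = max(r, key=r.get)
print(worst, r[worst])  # the perception-planning seam dominates
```

In this toy data, the largest maturity gap sits exactly on the strongest coupling, which is the configuration the text identifies as where integration trouble appears.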
A Depinning Interpretation
The same example can also be read through the depinning models studied in connection with SOC [5, 6].
Each architectural location $i$ has accumulated pressure $p_i$ and a resistance threshold $\theta_i$. So long as

$$p_i \le \theta_i,$$

the location remains pinned. But when pressure exceeds the resistance threshold,

$$p_i > \theta_i,$$

the location must move. That movement redistributes pressure through coupling. A local change can then become a cascade. This is the architectural analogue of threshold-driven depinning models, where local motion in a pinned interface can redistribute stress and trigger further motion [5, 6].
In the example, the perception change may force planning to reinterpret its input assumptions. It may force control to handle new timing or uncertainty behavior. It may force diagnostics and verification to change their evidence structures. What began as a local improvement becomes an architectural avalanche.
This is why the size of an architectural change is not measured only by the initiating modification. It is measured by the induced movement across the coupled frontier.
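The threshold-and-redistribution dynamic can be sketched directly. In this toy model (pressures, thresholds, and coupling weights all invented), a location that exceeds its threshold "moves": its accumulated pressure resets, and a coupling-weighted share is pushed onto each downstream neighbour, possibly tipping it as well:

```python
def cascade(pressure, threshold, coupling, trigger, bump):
    """Inject `bump` of pressure at `trigger`; whenever p_i > theta_i,
    location i moves: its pressure resets and a w_ij-weighted share is
    transmitted to each downstream neighbour j. Returns the avalanche
    as the ordered list of locations that moved."""
    pressure = dict(pressure)          # do not mutate the caller's state
    pressure[trigger] += bump
    moved, active = [], [trigger]
    while active:
        i = active.pop()
        if pressure[i] <= threshold[i]:
            continue                   # still pinned
        moved.append(i)
        released, pressure[i] = pressure[i], 0.0
        for j, w in coupling.get(i, ()):
            pressure[j] += w * released
            active.append(j)
    return moved

pressure  = {"perception": 0.9, "planning": 0.7, "control": 0.5}
threshold = {"perception": 1.0, "planning": 1.0, "control": 1.0}
coupling  = {"perception": [("planning", 0.6)],
             "planning":   [("control", 0.8)]}

# A modest local change at perception tips the entire coupled chain.
print(cascade(pressure, threshold, coupling, "perception", bump=0.3))
```

The avalanche size is a property of the accumulated state and the coupling structure, not of the triggering bump, which is the point of measuring change by induced movement across the frontier.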
Why Systems Engineering Cares
This is one way to explain why systems engineering practices place so much emphasis on integration, verification, requirements discipline, interface control, and architecture governance [7]. Without the formal language, these practices can look and feel bureaucratic. With the architectural language, their purpose becomes clearer: they help preserve convergence among parallel chains of development. In practice, techniques such as DSM analysis can help expose where independently evolving work streams remain structurally coupled, even when organizational work packages appear separate [1, 2]. These practices expose roughness before it becomes a crisis and keep ontic admissibility distinct from epistemic confidence. They also provide mechanisms for detecting when local plausibility is undermining global coherence.
This is especially important in large systems where many teams operate under partial knowledge. Each team may make locally rational decisions. But the system does not integrate local rationality. It integrates architectural consequences.
What the Example Shows
The example supports several conclusions.
- Architectural divergence can emerge without anyone making an obviously bad decision. Two chains can each be plausible and still become jointly problematic.
- Integration risk is a property of relations among chains, not merely a property of individual components.
- Probability is not enough. A team may believe a path is likely to succeed, but if that path exits the shared admissible future cone, the belief is irrelevant.
- Roughness matters. Uneven evolution across coupled seams concentrates integration risk.
- Governance matters when it preserves convergence, exposes roughness, clarifies thresholds, and creates controlled release paths.
Bridge to Basins and Entrenchment
This example prepares us for the next major step in our framework: architectural basins.
Once development chains repeatedly reinforce certain commitments, the architecture begins to settle into a region of state space. Some futures become easier to continue. Others become more expensive to recover. The system develops path dependence.
That region is what I will call an architectural basin.
The example in this post shows how divergence appears at the frontier. The next post will examine what happens when such divergence becomes self-reinforcing, creating entrenchment and increasing the cost of change.
Summary
Architecture is not merely a collection of diagrams, interfaces, or design decisions. It is a constraint structure that shapes which futures remain admissible. When multiple teams evolve in parallel, each may follow a locally plausible chain. But architecture determines whether those chains can still converge.
The practical challenge is therefore not only to choose good local futures. It is to preserve the possibility of coherent joint futures. This is why integration is not an afterthought. Integration is where the architecture reveals whether its parallel futures still belong to the same system.
References
[1] Steven D. Eppinger and Tyson R. Browning. Design Structure Matrix Methods and Applications. MIT Press, 2012.
[2] Tyson R. Browning. Design structure matrix extensions and innovations: A survey and new opportunities. IEEE Transactions on Engineering Management, 63(1):27–52, 2016.
[3] Fereydoon Family and Tamás Vicsek. Scaling of the active zone in the Eden process on percolation networks and the ballistic deposition model. Journal of Physics A: Mathematical and General, 18(2):L75–L81, 1985.
[4] Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang. Dynamic scaling of growing interfaces. Physical Review Letters, 56(9):889–892, 1986.
[5] Chao Tang and Heiko Leschhorn. Self-organized criticality and the depinning transition. Physical Review A, 45(12):R8309–R8312, 1992.
[6] Heiko Leschhorn. Cellular automata for driven interfaces in random media. Physical Review E, 48(1):284–292, 1993.
[7] INCOSE. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities. Fifth edition, Wiley, 2023.