Chapter 2 - Architectural Roots
Introduction
In the early days of software, architecture was a word rarely used outside of academic circles or mainframe specialists. Systems were built with limited users, a single deployment target, and a team that often knew every part of the codebase. As businesses grew more digital, and systems became more distributed, the old assumptions started to break.
What began as an effort to better organize code quickly evolved into a search for architectural patterns that could scale—not just in performance, but in people, teams, and change.
In the 1990s, we tried layering. In the 2000s, we tried services. Later, we modeled domains, inverted dependencies, and built systems that could react to load and failure. Each of these moves was a response to pain: systems too fragile, too entangled, too slow to adapt.
And each step in that evolution laid the groundwork for what we now call composable architecture—not just as a pattern, but as a philosophy for building systems that survive and thrive in motion.
To build something composable, you have to understand what you're composing it from. Modern composable systems didn’t emerge out of nowhere—they stand on decades of architectural evolution. Understanding this lineage not only clarifies the “why” behind current best practices, but also helps teams avoid repeating old mistakes under new labels.
This chapter explores the roots of composable architecture through the lenses of Service-Oriented Architecture (SOA), Hexagonal Architecture, Reactive Systems, and Domain-Driven Design (DDD). Each offered solutions to different types of complexity—technical, organizational, behavioral—and their philosophies continue to shape today’s composable patterns.
A Timeline of Influence
| Era | Architectural Movement | Key Contribution |
|---|---|---|
| 1990s | Layered Architecture | Clear separation of concerns, but prone to rigid dependency chains |
| Early 2000s | Service-Oriented Architecture (SOA) | Promoted contract-first design and interoperability |
| Mid 2000s | Hexagonal Architecture (Ports & Adapters) | Advocated for testability and inversion of control |
| 2010s | Domain-Driven Design (DDD) | Introduced bounded contexts and ubiquitous language for modeling business complexity |
| 2014+ | Reactive Systems | Focused on responsiveness, scalability, and resilience through event-driven design |
| 2020s | Composable Architecture | Inherited and recombined these patterns into a more modular, team-aligned, and runtime-adaptive system |
SOA: The First Step Toward Modularity
In the early 2000s, enterprise software was becoming a tangled web. IT departments were drowning in integration projects—ERP systems needed to talk to CRMs, CRMs to billing engines, billing engines to warehouse databases. Every new business process required a custom integration, and those integrations were fragile, slow, and often undocumented.
Service-Oriented Architecture (SOA) emerged as a response to this chaos. It wasn’t a language or a runtime—it was a philosophy. A way to tame the growing sprawl of enterprise systems by promoting contract-first development and loosely coupled services that could be reused across organizational boundaries.
SOA promised to turn spaghetti into lasagna: clearly separated layers of functionality, each exposed through a well-defined service interface. Instead of duplicating logic or wiring things manually, you could publish a WSDL file (Web Services Description Language), and let other systems consume your service—just like that.
In theory, SOA was elegant. In practice, it often became weighed down by bureaucracy, XML verbosity, and heavyweight Enterprise Service Buses (ESBs) that were hard to operate and even harder to evolve.
Despite its challenges, SOA introduced enduring ideas:
Key Contributions:
- Standardized interfaces: Encouraged teams to document and expose functionality in predictable ways.
- Reusability at scale: Business logic could be packaged once and reused across multiple apps or channels.
- Separation of orchestration and logic: Services were composed into workflows without altering the underlying services.
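To make the contract-first idea concrete, here is a minimal sketch in Python rather than WSDL and XML. The `BillingService` interface and its in-memory implementation are hypothetical names invented for illustration; the point is that consumers depend only on the published contract, never on a specific implementation.

```python
from abc import ABC, abstractmethod

# The contract is published first; implementations conform to it.
class BillingService(ABC):
    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> str:
        """Charge an account; returns a transaction id."""

# One possible provider behind the contract (here, a toy in-memory one).
class InMemoryBillingService(BillingService):
    def __init__(self):
        self._transactions = {}
        self._counter = 0

    def charge(self, account_id: str, amount_cents: int) -> str:
        self._counter += 1
        tx_id = f"tx-{self._counter}"
        self._transactions[tx_id] = (account_id, amount_cents)
        return tx_id

# Consumers are written against the contract, not the provider,
# so the provider can be replaced without touching this code.
def run_checkout(billing: BillingService, account_id: str) -> str:
    return billing.charge(account_id, 4999)
```

Swapping `InMemoryBillingService` for a networked provider changes nothing in `run_checkout`, which is the reusability SOA was after, minus the ESB in the middle.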
What Held It Back:
- Heavy middleware: Centralized ESBs introduced tight coupling and became performance bottlenecks.
- Synchronous dependence: Many services assumed real-time availability, creating cascading failures.
- Organizational silos: Teams owned services but not outcomes, leading to interface wars and brittle governance.
Why It Still Matters:
Composable systems inherit SOA’s belief in clean contracts—but reject the centralization and rigidity that SOA architectures often created. Instead of a single orchestration layer, composable systems favor runtime assembly using distributed manifests, APIs, or event meshes. They keep the autonomy that SOA imagined, but decentralize it for today’s faster, federated teams.

---
Hexagonal Architecture: Inversion for Independence
In the mid-2000s, a quiet revolution began to spread among developers frustrated with the entanglement of logic, frameworks, and infrastructure. Software systems were becoming increasingly fragile—too many assumptions about databases, APIs, and UI toolkits were wired deep into core business logic.
Enter Hexagonal Architecture, also known as the Ports and Adapters pattern, introduced by Alistair Cockburn. It offered a deceptively simple proposition: decouple the application’s core logic from the outer layers of technology.
Instead of letting the database dictate how your domain behaves, Hexagonal Architecture places the business logic at the center—clean, isolated, and unaware of infrastructure details. Everything external is an adapter plugged into a port: the database, the UI, the messaging layer, or even test scripts.
Imagine your application as a socket. Whatever plugs into it must conform to your contract, not the other way around.
This inversion of dependency gave developers the freedom to test without standing up infrastructure. It made migrations—from one database to another, or one UI layer to the next—significantly safer. And it brought software architecture closer to the kind of modular, interface-driven thinking seen in mechanical engineering or circuit design.
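A compact sketch of the pattern, with hypothetical names chosen for illustration: the core defines a port (`OrderRepository`), the domain service depends only on that port, and an adapter (here, an in-memory one) plugs in from the outside.

```python
from abc import ABC, abstractmethod

# Port: the core declares the contract; whatever plugs in must conform.
class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id: str, total: int) -> None: ...
    @abstractmethod
    def get(self, order_id: str) -> int: ...

# Core domain logic: imports no database, framework, or UI toolkit.
class OrderService:
    def __init__(self, repo: OrderRepository):
        self._repo = repo

    def place_order(self, order_id: str, total: int) -> None:
        if total <= 0:
            raise ValueError("order total must be positive")
        self._repo.save(order_id, total)

# Adapter: an in-memory stand-in, trivially swappable for a real
# database adapter (or a mock in tests) without changing the core.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self._orders = {}

    def save(self, order_id: str, total: int) -> None:
        self._orders[order_id] = total

    def get(self, order_id: str) -> int:
        return self._orders[order_id]
```

Tests exercise `OrderService` against the in-memory adapter; production wires in a real one. The domain code never knows the difference.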
Key Contributions:
- Inversion of control: The core system no longer depends on infrastructure or third-party services.
- Isolation for testing: Adapters can be mocked or replaced without modifying domain code.
- Flexibility in deployment: Enables swapping or extending outer layers without touching core logic.
Challenges in Practice:
- Cultural adoption: Many teams struggle to restructure their thinking around true inversion.
- No silver bullet: It doesn’t prescribe how to scale deployment or governance.
- Codebase bloat: Without discipline, ports and adapters can multiply without clear ownership.
Why It Still Matters:
Composable architecture inherits Hexagonal thinking at every level:
- In frontend systems, layout shells act as adapters to route and render dynamic components.
- In backend services, clean interface boundaries allow modules to swap APIs or databases without ripple effects.
- In infrastructure, IaC modules become plug-compatible abstractions that let environments evolve independently.
Hexagonal Architecture gave us the language of ports, adapters, and boundaries—a language that composability turns into a foundation for runtime assembly, testability, and independence at scale.

---
DDD: Modeling Complexity with Clarity
By the early 2010s, development teams were beginning to acknowledge that many of their biggest problems weren’t technical—they were linguistic. Teams misunderstood the very things they were trying to build. Customer, account, order, transaction—these terms meant different things to different teams, and systems became filled with translation layers, inconsistent logic, and brittle interfaces.
Enter Domain-Driven Design (DDD), championed by Eric Evans. It was less a technical framework and more a mindset: to tackle complexity by aligning software with deep, shared understanding of the business domain. DDD asked developers to stop thinking in terms of classes and databases and start thinking in terms of the real-world language used by subject matter experts.
“If you can’t describe what your system does in plain language,” Evans argued, “you probably don’t understand it.”
DDD introduced powerful concepts:
- Ubiquitous Language: Shared terms used consistently in both code and conversation.
- Bounded Contexts: Clear conceptual boundaries where terms and rules hold meaning.
- Aggregates: Clusters of objects treated as a single unit for consistency and reasoning.
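The aggregate idea can be sketched in a few lines. This hypothetical `Order` aggregate uses the ubiquitous language directly (orders, lines, placing) and enforces its invariants at a single entry point, rather than scattering the rules across services.

```python
# Hypothetical "Order" aggregate: all changes go through the aggregate
# root, which enforces the domain's invariants in one place.
class Order:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self._lines = []
        self._placed = False

    def add_line(self, sku: str, quantity: int) -> None:
        if self._placed:
            raise ValueError("cannot modify a placed order")
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append((sku, quantity))

    def place(self) -> None:
        # Invariant: an order cannot be placed empty.
        if not self._lines:
            raise ValueError("an order needs at least one line")
        self._placed = True

    @property
    def line_count(self) -> int:
        return len(self._lines)
```

Because the invariants live inside the aggregate, any bounded context that holds an `Order` can reason about it without re-checking the rules.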
These ideas changed how architects model behavior and how teams structure ownership. In large-scale organizations, bounded contexts became the foundation for team assignments, repo boundaries, and even org charts.
Key Contributions:
- Modeling for clarity: Architecture is not just about control flow—it’s about capturing meaning.
- Team alignment: Clear boundaries prevent misunderstanding and reduce accidental coupling.
- Evolvability: Contextual contracts allow different parts of the system to evolve at different speeds.
Challenges in Practice:
- Organizational inertia: DDD requires deep collaboration between developers and domain experts.
- Misuse of patterns: Without real domain knowledge, developers misuse DDD terms as technical artifacts.
- Tooling gap: DDD is often a modeling mindset, not a scaffold enforced by the codebase.
Why It Still Matters:
Composable systems depend on modularity—but modularity without meaning creates chaos. DDD gives composable architectures their semantic structure. Each module becomes a bounded context. Each interface speaks a language that reflects business intent.
When you federate frontend teams across a large product suite, or divide backend services by domain, DDD provides the conceptual map to keep things aligned. It reminds us that interfaces aren’t just APIs—they’re promises made in a shared language, reflecting a shared understanding of the domain in the software architecture.
Reactive Systems: Responsiveness at Scale
As distributed systems became the norm, developers started facing new classes of failure—systems that were technically correct but operationally brittle. Traffic spikes, slow dependencies, and cascading timeouts were turning once-reliable systems into house-of-cards deployments. Scaling horizontally helped, but wasn’t enough. Teams needed systems that didn’t just handle more load—they needed systems that could react to uncertainty.
This gave rise to Reactive Systems, a set of architectural principles formalized by the Reactive Manifesto in 2014. Drawing lessons from actor-based systems like Erlang and modern event-driven paradigms, Reactive Systems offered a blueprint for software that could be resilient, responsive, elastic, and message-driven.
"If your system can’t stay responsive under stress, it doesn’t matter how correct it is."
Reactive Systems embraced asynchronous, non-blocking communication and location transparency. Components communicated via message passing, and resilience wasn’t just a goal—it was a design constraint. Failures weren’t masked; they were managed.
Key Contributions:
- Elasticity and Resilience: Systems could scale out under load and recover from node failure without centralized coordination.
- Message-Driven Design: Asynchronous messaging decoupled producers and consumers, improving fault isolation.
- Backpressure Awareness: Systems could signal when they were overwhelmed, avoiding resource exhaustion.
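Backpressure is easy to demonstrate with a bounded buffer. This is a deliberately simplified, single-process sketch (real reactive systems use streaming protocols for this): when the consumer falls behind, the producer is told to slow down instead of exhausting memory.

```python
import queue

# A bounded queue gives a crude form of backpressure: capacity is the
# signal. When it is full, the producer learns it is overwhelming the
# consumer, rather than silently growing an unbounded backlog.
buffer = queue.Queue(maxsize=3)

def produce(event: str) -> bool:
    """Try to enqueue an event; False means 'overloaded, back off'."""
    try:
        buffer.put_nowait(event)
        return True
    except queue.Full:
        return False

# With nothing draining the queue, only the first three events fit;
# the rest are rejected until the consumer catches up.
accepted = [produce(f"event-{i}") for i in range(5)]
```

The same principle scales up: message brokers and reactive streams replace the boolean return with flow-control signals, but the contract—"don’t send faster than I can absorb"—is identical.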
Challenges in Practice:
- Debugging Complexity: Asynchronous, distributed flows are harder to trace and reason about.
- Operational Tooling: Observability and deployment tools had to evolve to support event-based paradigms.
- Steep Learning Curve: Developers had to unlearn synchronous request-response models.
Why It Still Matters:
In a composable system, autonomy is non-negotiable. But autonomy without coordination can lead to chaos. Reactive systems offer the glue—asynchronous, observable, and fail-tolerant messaging—that lets distributed modules work together.
Frontend platforms use reactive streams to manage UI events. Backend services embrace message brokers like Kafka, NATS, or Redis Streams to orchestrate workflows. Infrastructure stacks adopt reactive scaling strategies to allocate resources dynamically.
In essence, Reactive Systems taught us that in a world of constant change, being modular isn’t enough—you have to be reactive too.
Why These Matter
Composable systems aren’t replacements for these architectural movements—they are their natural evolution. Each of the preceding paradigms brought us closer to a model of software that was more modular, more resilient, and more reflective of real-world complexity. But none of them fully addressed the scale and speed at which today’s organizations operate.
In a world where new teams form quarterly, user expectations shift weekly, and platforms deploy daily, the question has changed: it is no longer “how do we structure our code?” but “how do we structure our ability to change?”
The answer lies in composability. It takes the hard-won lessons of the past:
- From SOA, the importance of stable contracts and clear service boundaries.
- From Hexagonal Architecture, the discipline of isolating logic from delivery mechanisms.
- From DDD, the insight that system boundaries must reflect human understanding and organizational context.
- From Reactive Systems, the capacity to adapt in real-time to stress, load, and failure.
Composable architecture doesn't reinvent the wheel—it puts the wheels on different tracks and lets teams build their own engines.
In a composable system:
- Teams can own capabilities end-to-end—from code to infrastructure to runtime experience.
- Systems are stitched together at runtime, not at compile time.
- Interfaces act as contracts between independently evolving components.
- Observability, deployment, and governance are embedded—not bolted on.
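The runtime-stitching idea can be sketched with a toy capability registry. All names here (`register`, `assemble`, the module names) are hypothetical: modules announce what they provide, and the system is composed from a manifest at startup rather than hard-wired at compile time.

```python
# Hypothetical runtime assembly: modules register their capabilities,
# and an application is composed by looking them up from a manifest.
registry = {}

def register(name: str):
    """Decorator that publishes a module under a capability name."""
    def decorator(fn):
        registry[name] = fn
        return fn
    return decorator

@register("greeting")
def greeting_module():
    return "hello"

@register("catalog")
def catalog_module():
    return ["widget", "gadget"]

def assemble(manifest: list[str]) -> dict:
    """Resolve each requested capability at runtime, not compile time."""
    return {name: registry[name]() for name in manifest}

# The manifest, not the code, decides what this deployment contains.
app = assemble(["greeting", "catalog"])
```

Swap the in-process dictionary for a distributed manifest, API gateway, or event mesh and you have the runtime-assembly model the chapter describes, with each team shipping registrations independently.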
Composability is not a trend. It’s a convergence. It’s what happens when modular thinking meets organizational scale.
This architecture doesn’t just scale with traffic. It scales with people. And that may be the most important lesson of all.