From Middleware to Shared Services
For decades the IT landscape has revolved around a layered architecture in which middleware sits between applications, acting as the glue that translates, routes, and orchestrates data. The promise of this approach was clear: a single platform could connect disparate systems, reduce duplication, and provide a unified view of information. In practice, middleware has often become a costly and brittle bottleneck. Each new module or integration added a new layer of complexity, and the effort to maintain and upgrade the middleware stack grew faster than the value it delivered.
Business units no longer accept a model that forces them to duplicate data, create separate databases, and rely on expensive enterprise application integration (EAI) tools. The reality is that most organizations run a handful of core data sources – from customer relationship management to inventory, from accounting to supply chain – yet the data that flows between them is frequently stale or inconsistent. The cost of reconciling these differences is hidden in the time and effort spent on manual updates, data cleansing, and patching.
A new paradigm emerges when we look at the problem from the opposite direction: instead of forcing systems to speak to each other through a middle layer, we can give each system direct access to a single source of truth. Shared services replace the need for a mediator by providing a set of core, reusable services – authentication, messaging, data storage, and workflow – that all applications consume. This eliminates the need to duplicate customer records, product catalogs, or order histories across multiple databases.
Consider a retail chain that operates stores across the country. Traditionally, each store would maintain its own copy of the product catalog, and any change in pricing or stock levels would need to be replicated manually or through an expensive middleware integration. In a shared‑services model, the product catalog lives in a single, centralized service. When a price update occurs, it propagates instantly to every point of sale, online storefront, and back‑office system. The benefit is immediate: no lag between data updates, no risk of divergent data, and a drastic reduction in maintenance overhead.
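To make the idea concrete, the sketch below uses a deliberately simplified, in-process object as a stand-in for a real shared catalog service. The point is that because every channel reads the same record, a price change needs no replication at all – it is visible everywhere the instant it is written.

```python
from typing import Dict


class CatalogService:
    """The single, centralized product catalog every channel reads from."""

    def __init__(self) -> None:
        self._prices: Dict[str, float] = {"SKU-1042": 24.99}

    def get_price(self, sku: str) -> float:
        return self._prices[sku]

    def update_price(self, sku: str, new_price: float) -> None:
        # One write here is immediately visible to every consumer.
        self._prices[sku] = new_price


catalog = CatalogService()


def point_of_sale_total(sku: str, qty: int) -> float:
    return catalog.get_price(sku) * qty      # no local copy to reconcile


def storefront_display(sku: str) -> str:
    return f"{sku}: {catalog.get_price(sku):.2f}"


catalog.update_price("SKU-1042", 19.99)
print(point_of_sale_total("SKU-1042", 3))    # already reflects the new price
print(storefront_display("SKU-1042"))
```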
Adopting shared services does not mean discarding all legacy systems. Instead, it involves wrapping legacy capabilities in lightweight adapters that expose their functions as services. The adapter can run on the existing hardware, communicate over standard protocols, and surface legacy data through a modern, RESTful API. The legacy system remains untouched, but its functionality is now part of the shared‑services ecosystem, and other systems can use it without the overhead of a full middleware layer.
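An adapter of this kind can be surprisingly small. The sketch below uses Flask as one possible framework; the legacy_inventory_lookup function is a hypothetical stand-in for whatever RPC call, file transfer, or screen interaction the existing system actually exposes.

```python
from flask import Flask, jsonify

app = Flask(__name__)


def legacy_inventory_lookup(sku: str) -> dict:
    # In a real deployment this would call the untouched legacy system.
    return {"sku": sku, "on_hand": 42, "warehouse": "DC-EAST"}


@app.route("/inventory/<sku>")
def get_inventory(sku: str):
    # Translate the legacy record into a modern JSON payload.
    return jsonify(legacy_inventory_lookup(sku))


if __name__ == "__main__":
    app.run(port=8080)
```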
Shared services also empower developers to focus on value‑adding business logic rather than spending time on integration patterns. When every team consumes the same authentication, logging, and data services, the amount of code that needs to be written for each new application drops significantly. The result is a faster time‑to‑market and a lower total cost of ownership.
From the user’s perspective, shared services translate into a seamless experience. A customer who logs into an online portal can instantly access their order history, view loyalty points, and request a return, all backed by real‑time data from the shared services layer. The IT team can deploy updates to a single service and see immediate benefits across all applications. In this way, shared services move IT from a passive provider of infrastructure to an active enabler of business agility.
Real‑Time Enterprise and the Challenge of Legacy Systems
Business operations increasingly demand real‑time visibility and responsiveness. Logistics teams that rely on manual stock checks or outdated inventory feeds struggle to meet service‑level agreements. A single misplaced pallet can ripple across the supply chain, causing delayed shipments, missed sales, and frustrated customers. In such an environment, the data that drives decisions must be fresh, accurate, and instantly accessible.
Traditional logistics workflows often rely on static snapshots of inventory that are updated at set intervals, sometimes days apart. This approach forces warehouse staff to conduct manual reconciliation after every shift. The lag between physical movement and digital representation introduces a window of uncertainty that costs time and money.
Technologies like RFID and mobile location‑based services address this gap by embedding the state of every item directly into the data layer. Each SKU carries an identifier that can be scanned or detected by sensors, and the system receives continuous updates about its location. When a pallet moves from receiving to staging, the shared service capturing the RFID event records the new location in real time. The data becomes live, and any downstream process – such as inventory allocation, picking, or shipping – can react immediately.
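A minimal sketch of such an ingestion service follows. The event fields and the notification hook are assumptions about how a real deployment might be structured, not a specific product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict


@dataclass
class RfidRead:
    tag_id: str       # identifier carried by the pallet or SKU
    reader_zone: str  # e.g. "RECEIVING", "STAGING", "DOCK-3"


class LocationService:
    """Shared service that keeps item locations live as reads arrive."""

    def __init__(self) -> None:
        self._locations: Dict[str, str] = {}

    def on_rfid_read(self, read: RfidRead) -> None:
        # Record the new location the moment the sensor sees it.
        self._locations[read.tag_id] = read.reader_zone
        self._notify_subscribers(read)

    def _notify_subscribers(self, read: RfidRead) -> None:
        # Downstream allocation, picking, and shipping react immediately.
        print(f"{datetime.now(timezone.utc).isoformat()} "
              f"{read.tag_id} is now in {read.reader_zone}")


service = LocationService()
service.on_rfid_read(RfidRead(tag_id="PALLET-7731", reader_zone="STAGING"))
```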
Yet most enterprises still run on legacy platforms such as SAP, Oracle E‑Business, or mainframe systems that were built in an era when real‑time data was not a priority. Upgrading these systems is daunting because they are tightly coupled with business logic, regulatory requirements, and institutional knowledge. The question is how to inject real‑time capabilities into these hardened environments without tearing them apart.
A pragmatic solution is a layered transformation approach. First, assess the hardware that supports the legacy system. If it can run modern virtualization or containerization, it can be isolated into a micro‑service that exposes a REST API. Second, evaluate how the legacy system interoperates with external applications. By replacing brittle point‑to‑point connectors with a shared‑services bus, you can decouple the legacy core from the rest of the ecosystem. Third, modernize the development process: shift from procedural code to declarative workflows that invoke shared services for authentication, data retrieval, and event publishing.
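The second of these dimensions – decoupling through a shared-services bus – can be illustrated with a short sketch. The bus API shown here (publish and subscribe by topic) is a generic assumption rather than any specific product; the point is that the legacy core emits one event and no longer needs to know who consumes it.

```python
from typing import Callable, Dict, List


class ServiceBus:
    def __init__(self) -> None:
        self._topics: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._topics.setdefault(topic, []).append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._topics.get(topic, []):
            handler(event)


bus = ServiceBus()

# Before: the legacy core called billing, CRM, and the warehouse directly.
# After: it publishes one event and stays unaware of who consumes it.
bus.subscribe("order.created", lambda e: print("Billing invoices order", e["order_id"]))
bus.subscribe("order.created", lambda e: print("Warehouse reserves stock for", e["order_id"]))


def legacy_order_posted(order_id: str) -> None:
    bus.publish("order.created", {"order_id": order_id})


legacy_order_posted("ORD-2001")
```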
When these three dimensions align, the legacy system becomes a resilient, adaptable component of the overall architecture. It no longer dictates the pace of change; instead, it participates in a network of services that share a consistent, real‑time view of data. The result is a unified enterprise where a single order in a CRM can trigger inventory checks, shipping arrangements, and billing – all in seconds.
This transformation does more than just make logistics faster. It changes the mindset of IT teams from gatekeepers of data to facilitators of continuous improvement. By making data available in real time, you empower every department to act on current information, reducing waste, cutting costs, and increasing customer satisfaction.
Model‑Driven Federation and Rapid Application Development
Model‑driven development has long promised to reduce the friction between business requirements and IT delivery. By capturing architecture in a high‑level model, teams can generate code, validate design decisions, and maintain a single source of truth for the system. When combined with a federated service architecture, model‑driven approaches become even more powerful.
Federated services refer to a network of loosely coupled components that publish and subscribe to shared information models. Each service declares the data structures it consumes and produces, and the federation ensures that any change in one service propagates consistently across all participating nodes. Think of it as a set of synchronized spreadsheets that automatically update each other when a cell changes.
In practice, a federation can be built around standard data interchange formats such as JSON or XML, using a message bus that routes updates to interested subscribers. When a customer changes their address in a CRM, the federation propagates that new address to every service that needs it – billing, marketing, and support – without any explicit integration code. The result is a single, coherent view of the customer across the enterprise.
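A hedged sketch of this pattern follows, with the CustomerAddressChanged model and the service names invented for illustration: each service declares the event types it consumes, and the federation routes a JSON payload to every interested subscriber.

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict, List, Type


@dataclass
class CustomerAddressChanged:
    """The shared information model, declared once for the whole federation."""
    customer_id: str
    new_address: str


class Service:
    consumes: List[Type] = []

    def __init__(self, name: str) -> None:
        self.name = name

    def receive(self, payload: str) -> None:
        print(f"{self.name} applied update: {payload}")


class BillingService(Service):
    consumes = [CustomerAddressChanged]


class MarketingService(Service):
    consumes = [CustomerAddressChanged]


class Federation:
    def __init__(self) -> None:
        self._consumers: Dict[Type, List[Service]] = {}

    def register(self, service: Service) -> None:
        for event_type in service.consumes:
            self._consumers.setdefault(event_type, []).append(service)

    def publish(self, event) -> None:
        payload = json.dumps(asdict(event))  # standard interchange format
        for service in self._consumers.get(type(event), []):
            service.receive(payload)


federation = Federation()
federation.register(BillingService("billing"))
federation.register(MarketingService("marketing"))
federation.publish(CustomerAddressChanged("CUST-88", "42 Harbour Street"))
```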
Model‑driven federation adds a layer of abstraction that speeds development further. Instead of writing boilerplate integration code for each new business process, developers compose services by wiring them together in a visual editor. The editor references a catalog of reusable service patterns – authentication, messaging, data retrieval – that have been pre‑validated and versioned. When a new workflow is defined, the system automatically generates the necessary glue code, configuration files, and deployment scripts.
One of the most compelling use cases for this approach is user authentication. Instead of each application maintaining its own user store and authentication logic, all applications reference a central identity service. The service authenticates credentials, issues tokens, and manages permissions. A new application can be deployed with a single line of code that points to the identity service, and users immediately gain seamless single sign‑on across all systems.
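In code, "pointing at the identity service" might look like the sketch below. The verification endpoint, token format, and service URL are assumptions for illustration only; the essential idea is that the application delegates the question "who is this?" rather than answering it itself.

```python
import json
import urllib.request

IDENTITY_SERVICE_URL = "https://identity.example.internal"  # hypothetical URL


def verify_token(token: str) -> dict:
    """Ask the shared identity service whether the token is valid."""
    req = urllib.request.Request(
        f"{IDENTITY_SERVICE_URL}/tokens/verify",       # hypothetical endpoint
        data=json.dumps({"token": token}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"subject": "alice", "roles": [...]}


def handle_request(token: str) -> str:
    # The application never stores credentials; it only trusts the claims
    # returned by the central service.
    claims = verify_token(token)
    return f"Hello {claims['subject']}, you are signed in across all systems."
```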
By integrating the model‑driven approach with federated services, enterprises can achieve rapid, repeatable deployment of applications. A new e‑commerce platform can be spun up in hours, not weeks, because it pulls its core services from a shared catalog. Updates to the catalog propagate instantly, ensuring that every customer receives the latest features and security patches. This is the essence of an autonomic business model that can match supply with demand in real time.
Process Programming and Building an Adaptable Enterprise
Process programming elevates application development from code‑centric to business‑centric. Instead of writing imperative code that encodes business logic, developers define workflows that map directly to real‑world processes. These workflows are represented as graphical models that can be edited by business users, reducing the bottleneck of IT teams acting as the sole gatekeepers of change.
Imagine a call‑center agent who needs to resolve a customer’s issue that requires a change in shipping address. In a traditional system, the agent would raise a ticket, a developer would write code to update the address, and an administrator would deploy the change. With process programming, the agent can trigger a workflow that calls the shared address‑update service, passing the new information. The workflow automatically logs the change, notifies relevant stakeholders, and updates all downstream systems. The agent’s request is handled in real time, and no developer intervention is required.
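The sketch below illustrates the principle: the workflow is data rather than imperative code, and each step simply names a shared-service operation that the engine invokes in order. The step and service names here are hypothetical.

```python
from typing import Callable, Dict, List

# Shared services the workflow engine can invoke.
SERVICES: Dict[str, Callable[[dict], None]] = {
    "address.update": lambda ctx: print("Address service updated", ctx["customer_id"]),
    "audit.log": lambda ctx: print("Change logged for", ctx["customer_id"]),
    "notify.stakeholders": lambda ctx: print("Shipping and billing notified"),
}

# The workflow a call-center agent triggers; it is configuration, not code.
CHANGE_ADDRESS_WORKFLOW: List[str] = [
    "address.update",
    "audit.log",
    "notify.stakeholders",
]


def run_workflow(steps: List[str], context: dict) -> None:
    for step in steps:
        SERVICES[step](context)  # no developer intervention required


run_workflow(
    CHANGE_ADDRESS_WORKFLOW,
    {"customer_id": "CUST-88", "new_address": "42 Harbour Street"},
)
```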
Similarly, marketing teams can deploy personalized campaigns on the fly. A campaign designer selects a set of target customers from the shared customer service, applies a rule to exclude those who have opted out, and attaches a dynamic content generator that pulls localized product recommendations. The entire campaign, including data extraction, logic, and deployment, is defined in a visual editor. Once activated, the system distributes the personalized content across email, mobile push, and web channels simultaneously.
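A compact sketch of that campaign logic follows; the customer records, exclusion rule, and channel names are invented for illustration, with a simple list standing in for the shared customer service.

```python
from typing import Dict, List

# Stand-in for the shared customer service.
customers: List[Dict] = [
    {"id": "C1", "opted_out": False, "region": "North"},
    {"id": "C2", "opted_out": True, "region": "South"},
    {"id": "C3", "opted_out": False, "region": "South"},
]


def localized_recommendation(customer: Dict) -> str:
    return f"Top picks for the {customer['region']} region"


def send(channel: str, customer_id: str, content: str) -> None:
    print(f"[{channel}] -> {customer_id}: {content}")


# Exclusion rule: drop customers who have opted out.
targets = [c for c in customers if not c["opted_out"]]

# Fan the personalised content out to every channel at once.
for customer in targets:
    content = localized_recommendation(customer)
    for channel in ("email", "mobile-push", "web"):
        send(channel, customer["id"], content)
```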
Process programming also makes the IT environment inherently adaptable. Because workflows are defined at the business level, changes can be made by non‑technical users in minutes. When market conditions shift – new regulations, competitor launches, or a sudden supply‑chain disruption – the organization can pivot without waiting for a full software release cycle. This agility translates directly into competitive advantage: faster responses to customer needs, quicker time to market for new products, and the ability to experiment with new business models safely.
Underpinning this flexibility is a robust governance model. Every workflow is versioned, auditable, and tied to the shared services that enforce data consistency. Security policies, compliance requirements, and performance metrics are embedded in the service contracts, so even though the front‑end may change frequently, the core guarantees remain stable.
In sum, process programming turns IT from a back‑office function into a frontline partner. By exposing shared services as plug‑and‑play components and empowering business users to orchestrate them, organizations become truly adaptable. The enterprise can evolve continuously, aligning technology with strategy without the delays inherent in legacy EAI approaches.
Neil McEvoy is CEO of the Genesis forum (https://www.webservices-strategy.com), an industry initiative of Service Oriented Architecture vendors describing the business benefits of their technologies. He is the Chief Architect of the On Demand framework, the platform for autonomic business models that match demand and supply perfectly. Neil provides unique consultancy solutions to enterprise end-users and vendor suppliers customised to deliver ROI within the On Demand market. He can be reached at http://www.ondemand-strategy.com.