Promising integration opportunities

Figure 1

In an analysis mapping technology advances to manufacturing opportunities, McKinsey & Company describes the potential benefits available from "currently demonstrated technologies", which they characterize as exhibiting "the level of performance and reliability needed to automate one or more of 18 capabilities involved in carrying out work activities. In some cases, that level of performance has been demonstrated through commercially available products, and in others through research projects." Their promise follows:

We emphasize that the potential for automation described above is created by adapting and integrating currently demonstrated technologies. Moreover, it is notable that recent technological advances have overcome many of the traditional limitations of robotics and automation. A new generation of robots that are more flexible and versatile, and cost far less, than those used in many manufacturing environments today can be “trained” by frontline staff to perform tasks previously thought to be too difficult for machines—tasks such as picking and packing irregularly spaced objects, and resolving wiring conflicts in large-scale projects in, for example, the aerospace industry. Artificial intelligence is also making major strides that are increasing the potential for automating work activities.

Artificial intelligence has indeed been attracting lots of press within selected domains. When evaluated in the context of the 18 capabilities listed in Figure 1, McKinsey's analysis proposes that the domain with the second highest potential - manufacturing - represents an appealing target. McKinsey uses this analysis to conclude that "Just over half of all working hours in the United States are spent on activities that are the most susceptible to automation". Their analysis considered:

  1. Performing physical activities and operating machinery in both predictable and unpredictable environments
  2. Collecting data about the status of these activities and the information and material required to perform them
  3. Processing that data into a form usable by step 4
  4. Applying expertise in planning, creation, and decision-making
  5. Interfacing with stakeholders to perform and communicate the results of step 4
  6. Managing and developing the teams responsible for this work

McKinsey paints this future as a highly automated world in which workers would be largely troubleshooting the 'unpredictable' situations and resolving the inevitable integration challenges that arise as new technology is introduced into common use:

As roles and processes get redefined in these ways, the economic benefits of automation will also include freeing up and repurposing scarce skilled resources. Particularly in the highest-paid occupations, machines can augment human capabilities to a high degree and amplify the value of expertise by freeing employees to focus on work of higher value. In aircraft maintenance, for example - where drones and insect-size robots could someday perform inspections, robots could deliver parts and tools, and automated tugs could move planes in and out of hangars - fewer technicians would be needed on the maintenance hangar floor, but those who remained would spend more time problem solving for non-routine issues. These workers will, however, need continual retraining to keep up with developing technology.

What is lost in their analyses, except by disclaimer, is that similar 'promising visions' have in fact resulted in major disappointments in the past, offering little more than the chance to be the first on the block to use some new phrase for computer programs. In The Seven Deadly Sins of AI Predictions, published in MIT Technology Review, Rodney Brooks warns us of the many risks that emerge from such predictions about technology. The first is the tendency captured by Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." From this shaky foundation, the other sins amplify the initial misunderstanding by:

  • Granting the technology magical properties, without understanding its limitations
  • Assuming that because a machine can perform some narrow task, that achievement can quickly be translated into competence at general problem-solving
  • Adopting 'suitcase words' which can refer to many different types of experience, even though little benefit accrues from one of these types to another
  • Falling prey to expectations that over-reach the possible, which often arise from extrapolating a pattern like Moore's "law", without taking account of the underlying limits which will constrain that pattern's future
  • Invoking Hollywood scenarios where imagination is selectively applied, while all the other changes likely to take place in the same timeframe are ignored, even though they are likely to compensate for or reinforce the original trend envisioned
  • Overlooking the difficulty of displacing incumbents from the positions they occupy, a result of their customers' attitude that "If it ain't broke, don't try to fix it"

From a broader perspective, these flaws are each rooted in our cognitive biases, making them difficult to recognize in ourselves. Yet we can reduce the impact of these biases by comparing McKinsey's vision against the relevant experiences of others. With this end in mind, it is helpful to perform this comparison within the manufacturing domain: examine the state of the practice from 15 years ago, and reflect on the progress from then until today. At the turn of the century, Glen Alleman authored an excellent article on the importance of architecture in achieving the goals of Enterprise Resource Planning systems in this context.

Although machine learning has been applied to specialized applications like salad-making robots and recognition of faces in photographs, in order for the technology to be broadly applicable, the path to harvesting value from it must be straightforward, and so well traveled that the journey will present few risks for those who elect to employ it. Few specific jobs are amenable to replacement by automation within the context of any large firm, unless production volumes are high and a competitive edge can be secured from those investments. Not every specialized salad can be easily served up on demand; economic benefits are likely to be limited at low utilization levels, especially when operators and maintenance personnel are still required to keep the robots operating. Unless deployment and support cost substantially less than existing approaches require, the investment will merely exchange one headache for another.

In this context, manufacturing is far more than just completing a series of assembly steps that can be authored, resourced, and tracked against a cookbook of recipes. Even predictable operations become unpredictable as soon as something goes wrong. Where was the flaw introduced, and where was it detected? The more data one produces, the less meaning it may have to selected audiences, unless a focused effort has been made to curate the data for that purpose. Unless decision-making is carefully designed, the results may be sub-optimal, and the downstream implications may be compromised.

Alleman's article on the best practices of that era is reproduced in its entirety below. While many things have changed since the article was written (such as the names of relevant frameworks, standards, and technologies), the essential ingredient of architecture remains the same: achieving holistic, integrated solutions. This goal is easy to break down into desirable characteristics, as Glen has done. However, translating such an architecture definition into working solutions derived from underlying COTS components remains as challenging today as it was 15 years ago.

Architecture-Centric ERP in the Manufacturing Domain

Architecture is the semantic bridge between the requirements and software

Copyright 2000, 2001, 2002

Glen Alleman

Introduction

Much of the discussion in today's literature is centered around building architectural analogies to the design and deployment of information systems. Many of these architecture analogies are based on mapping the building architecture paradigm to the software architecture paradigm. If this were actually the case, software systems would be built on rigid, immovable foundations, with fixed frameworks and inflexible habitats and user requirements. In fact, software architecture is more analogous to urban planning. The macro-level design of urban spaces is provided by the planner, with infrastructure (utilities, transportation corridors, and other habitat topology) defined before any buildings are constructed. The actual dwelling spaces are built to broad standards and codes. The individual buildings are constructed to meet the specific needs of their inhabitants. The urban planner visualizes the city-scape on which the individual dwellings will be constructed. The dwellings are reusable (through remodeling) structures that are loosely coupled to the urban structure. Using this analogy, dwellings are the reusable components of the city-scape, similar to application components in the system architecture. In both analogies, the infrastructure forms the basis of the architectural guidelines. This includes utilities, building codes, structural limitations, material limitations, and the local style.

These analogies gloss over many of the difficulties involved in formulating, defining, and maintaining the architectural consistency associated with acquiring and integrating Commercial Off The Shelf (COTS) applications. The successful deployment of a COTS based system requires that not only are the current business needs met, but that the foundation for the future needs of the organization be laid.

In many COTS products, the vendor has defined an architecture that may or may not match the architecture of the purchaser's domain. For organizations that have mature business processes and legacy systems in place, it is unlikely the vendor's architecture will match. The result is an over-constrained problem and an impedance mismatch between the business and the solution.

The consequences of this decision may not be known for some time. If the differences between the target architecture of the business and the architecture supplied by the vendor are not determined before the acquisition, these gaps will be revealed during the system's operation, much to the disappointment of the purchaser.

What Is Software Architecture?

Software architecture can be defined as the generation of the plans for information systems, analogous to the plans for an urban dwelling space. Christopher Alexander observed that macro-level architecture is made up of many repeated design patterns. Design Patterns and Pattern Languages are ways to describe best practices and good designs, and to capture experience in a way that makes it possible for others to reuse this experience.

Fundamental to any science or engineering discipline is a common vocabulary for expressing its concepts, and a language for relating them together. The goal of patterns within any community is to create a body of literature to help the members of the community resolve recurring problems encountered during the development of the artifacts of the process. Patterns help create a shared language for communicating insight and experience about these problems and their solutions.

Formally codifying these solutions and their relationships lets us successfully capture the body of knowledge that defines our understanding of good architectures that meet the needs of their users. Forming a common pattern language for conveying the structures and mechanisms of our architectures allows us to intelligibly reason about them. The primary focus is not so much on technology as it is on creating a culture to document and support sound engineering architecture and design.

Software architecture is different from software design in that software architecture is a view of the system as a whole rather than a collection of components assembled into a system. This holistic view forms the basis of the architecture-centric approach to information systems. Architecture becomes the planning process that defines the foundation for the information system.

Distributed object computing and the internet have fundamentally changed the traditional methods of architecting information systems. The consequences of these changes are not yet fully understood by developers as well as consumers of these systems. The current distributed computing and Internet-based systems are complex, vulnerable, and failure-prone when compared to their mainframe predecessors. This complexity is the unavoidable consequence of the demand for ever-increasing flexibility and adaptability. These changing technologies require a different planning mechanism for deployment, based on fundamentally new principles. Because technology is rapidly changing and business requirements are demanding, a process for architecting these new systems is now essential. No longer can systems be simply assembled from components without consideration of the whole.

Architecture is the set of decisions about any system that keeps its implementers and maintainers from exercising needless creativity. Architecture is not the creation of boxes, circles, and lines, laid out in slide presentations. Architecture imposes decisions and constraints on the process of designing, developing, and deploying information systems. Architecture must define the parts, the essential external characteristics of each part, and the relationships between these parts with the goal of assuring a viable outcome.

Architecture Based IT Strategies

Much has been written about software and system architecture. But the question remains, what is architecture and why is it important to the design and deployment of software in the manufacturing domain?

Manufacturing information systems possess a unique set of requirements, which are continuously increasing in complexity. In the past, it was acceptable to provide manufacturing information on a periodic basis. This information was gathered through a labor-intensive and error-prone data entry process. In the current manufacturing environment, the timeliness and accuracy of information has become a critical success factor in the overall business process.

In the past, manufacturing information was usually provided through a monolithic set of applications: mainframe-based Manufacturing Resource Planning systems. The mainframe environment has been tagged with the monolithic label for some time. Now that mature client/server applications have been targeted for replacement, they too have been labeled monolithic. It is not the mainframe environment that creates the monolithic architecture; it is the applications architecture itself that results in a monolithic system being deployed. This behavior occurs when the data used by the application is trapped inside the code. Separating the data from the application is one of the primary goals of good software architecture. This separation, however, must take into account the semantics of the data, that is, the meaning of the data. This meaning is described through a meta-data dictionary which is maintained by the architect. This critical data was inside the applications, which were originally designed to liberate the workforce from mundane data processing tasks. However, without flexibility and adaptability, the users were forced to adapt their behaviors to the behaviors of the system.

The result was a recurring non-recoverable cost burden on the organization. What was originally the responsibility of the software - as defined in the Systems Requirements Analysis - became the burden of the user in the manufacturing environment. This includes multiple and inconsistent data entry tasks, report printing and reentry, inconsistent database information, islands of information and automation, and the inability to adapt to the changing needs of the organization.
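
To make the point about separating data from applications concrete, the sketch below shows one way a meta-data dictionary might live outside any application, recording what each data element means. It is a minimal sketch; the field names and entries are invented for illustration, not taken from any MRP product.

    import java.util.Map;

    // A minimal meta-data dictionary sketch: each entry records the meaning
    // (semantics) of a data element separately from the applications that use it.
    public class MetadataDictionary {

        // The type, units, and business meaning of one data element.
        record Entry(String type, String units, String meaning) {}

        // Hypothetical entries, for illustration only.
        static final Map<String, Entry> DICTIONARY = Map.of(
            "part_number", new Entry("String", "n/a", "Unique identifier of a purchased or manufactured part"),
            "qty_on_hand", new Entry("Integer", "each", "Unreserved stock currently in the warehouse"),
            "lead_time",   new Entry("Integer", "days", "Calendar days from purchase order to receipt"));

        public static void main(String[] args) {
            // Any application can look up what a field means before consuming it.
            DICTIONARY.forEach((name, e) ->
                System.out.printf("%s (%s, %s): %s%n", name, e.type(), e.units(), e.meaning()));
        }
    }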

The approach of scheduling multi-product batch production activities using an MRP system has given way to customer-driven massive customization production with just-in-time everything. It also has become more difficult to predict customer demand for configured products built from standard components or Engineer To Order (ETO) additions to these components. This adds to the complex problem of manufacturing, planning, and production scheduling.

Any replacement of the legacy manufacturing system that currently supports batch production must provide:

  • Support for manufacturing processes that can be modified and expanded as production demand changes
  • A mechanism to address the previously well-defined boundary between product production, product design, and business and financial management
  • A migration path away from the traditional product bill-of-material approach to production scheduling. This previous approach has given way to configured bills-of-material, generated at the time the order is taken, supported by just-in-time component suppliers

Effective manufacturing system architecture must somehow combine production control systems and information systems. Most approaches in the past elected to keep these two systems separate but linked, adapting them to make intersystem communication transparent. At this point, the separation between information systems and manufacturing systems is somewhat artificial. A Manufacturing Control System (MCS) can be defined as the software components that control the scheduling, material planning, and production activities. The information processed by the MCS typically includes Bills-of-Material, Shop Floor Scheduling, Production Planning, Customer Configuration Instructions, Work Instructions, etc. A Manufacturing Information System (MIS) can be defined as the software components that author, distribute, inform, and interact with manufacturing personnel. The information conveyed by these systems is not directly involved in the scheduling and production of products, but rather forms the basis of these activities. The information processed by the MIS typically includes Model and Drawing Information, Planning Bills-of-Materials, Product Support Information, Quality Assurance Information, etc.

However, this strategy fails to address the important problem of how to restructure the manufacturing system to meet the demand of future operations. An alternative approach is to integrate the Manufacturing Control System (MCS) with the Manufacturing Information System (MIS) as a federated heterogeneous system. Production personnel can make use of the MIS maintained information (design and product information) directly on the shop floor. In turn, design and support personnel can then gain direct access to production information.
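
A minimal sketch of this federated arrangement follows. All interface and class names are hypothetical; the point is only that shop-floor code reaches both control-system and information-system data through a single facade rather than through two separately linked systems.

    // A federated MCS/MIS facade, sketched with invented names.
    interface ManufacturingControlSystem {
        String workOrderStatus(String orderId);      // scheduling/production data
    }

    interface ManufacturingInformationSystem {
        String latestDrawingRevision(String partId); // design/support data
    }

    public class FederatedShopFloorService {
        private final ManufacturingControlSystem mcs;
        private final ManufacturingInformationSystem mis;

        public FederatedShopFloorService(ManufacturingControlSystem mcs,
                                         ManufacturingInformationSystem mis) {
            this.mcs = mcs;
            this.mis = mis;
        }

        // One call gives the operator both production and design context.
        public String stationView(String orderId, String partId) {
            return mcs.workOrderStatus(orderId) + " / drawing rev "
                 + mis.latestDrawingRevision(partId);
        }

        public static void main(String[] args) {
            // Stub implementations stand in for the real federated systems.
            FederatedShopFloorService svc = new FederatedShopFloorService(
                order -> "order " + order + ": RUNNING",
                part  -> "C");
            System.out.println(svc.stationView("WO-1001", "P-42"));
        }
    }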

The complexity of this massive customization and just-in-time manufacturing environment means that the software components, and the work processes they support, are in constant flux. For an integrated manufacturing system to function in this way, software systems must be continuously expanded, modified, revised, tested, and repaired. The software components must be integrated quickly and reliably in response to rapidly changing requirements. Finally, such a system must cooperate in addressing these changing objectives. All these requirements define a highly flexible, adaptive architecture capable of operating in rapidly changing conditions.

In order to address these needs, the system architecture can proceed along the following path:

  • Define the goals of the business in a clear and concise manner.
  • Identify existing Information Technology that meets these goals.
  • Identify gaps where the Information Technology fails to meet these goals.
  • Identify the organizational structure needed to support the strategy.
  • Define a layered framework for connecting the system components.

Information Systems in Manufacturing

Although much of this paper is targeted at generic systems architecture, it is useful to outline the manufacturing systems that are subject to these architectural constraints.

  • Operational improvement - the operational information systems providing the tools needed to run the business on a day-to-day basis. They provide real-time information about costs, productivity, and operational efficiency. They include information, work planning, and operational control for:
    • Materials management
    • Flexible manufacturing
    • Machine tool control
    • Automated process control
  • Advanced Manufacturing Technologies - for the control of machinery through automated work instructions, machine tool instructions, and other non-human intervention processes that contribute directly to the bottom line of the business
  • Information Systems - the application software that forms the basis of the operational efficiency and advanced machine control. It is dependent on the order entry, production scheduling, and shop control facilities provided by:
    • Enterprise Resource Planning (ERP) - an accounting-oriented information system for identifying and planning the enterprise-wide resources needed to take, make, and account for customer orders.
    • Product Data Management - a collection of applications that maintain the logical and physical relationships between the various components of a product.
    • Product configuration - provides the management of the configuration processes. A Configurator provides knowledge-based rules and constraints for assembling parts into products or systems to be delivered against a customer order (a minimal sketch of such rule checking follows this list).
    • Document Management System - an infrastructure system which document-enables business processes and applications through workflow and a document repository. The primary function of the DMS is to manage changes to business-critical documents and the delivery of those documents to the proper user at the right time.
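
As a concrete reading of the configurator description above, the sketch below checks an order's selected parts against knowledge-based rules. It is only a sketch: the rules and part names are invented, and a real configurator would draw them from a product knowledge base.

    import java.util.List;
    import java.util.Set;
    import java.util.function.Predicate;

    // A minimal configurator sketch: rules are constraints over the chosen
    // parts, applied when the order is taken. Parts and rules are invented.
    public class ConfiguratorSketch {

        record Rule(String description, Predicate<Set<String>> check) {}

        static final List<Rule> RULES = List.of(
            new Rule("Heavy-duty motor requires reinforced frame",
                     parts -> !parts.contains("HD_MOTOR") || parts.contains("REINFORCED_FRAME")),
            new Rule("Exactly one controller must be selected",
                     parts -> parts.stream().filter(p -> p.startsWith("CTRL_")).count() == 1));

        static List<String> violations(Set<String> parts) {
            return RULES.stream()
                        .filter(r -> !r.check().test(parts))
                        .map(Rule::description)
                        .toList();
        }

        public static void main(String[] args) {
            // An order missing the reinforced frame: the first rule reports it.
            System.out.println(violations(Set.of("HD_MOTOR", "CTRL_A")));
        }
    }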

Characteristics of Manufacturing Technologies

There are several characteristics of manufacturing systems that are shared by all systems with good architectural foundations. These properties may appear abstract and not very useful at first. However, there are measurable attributes of a system that can be used to evaluate how the architecture meets the needs of the user community.

  • Openness - enables portability and inter-networking between components of the system
  • Integration - incorporates various systems and resources into a whole without ad-hoc development
  • Flexibility - supports system evolution, including the existence and continued operation of legacy systems
  • Modularity - the parts of a system are autonomous but interrelated. This property forms the foundation of flexibility
  • Federation - combining systems from different administrative or technical domains to achieve a single objective
  • Manageability - monitoring, controlling, and managing a system's resources in order to support configuration, Quality of Service and accounting policies
  • Security - ensures that the system's facilities and data are protected against unauthorized access
  • Transparency - masks from the applications the details of how the system works

Motivations for Architecture-Centered Design

The application of architecture-centered design to manufacturing systems makes several assumptions about the underlying software and its environment:

  • Large systems need sound architectures. As the system grows in complexity and size, the need for a strong architectural foundation grows as well.
  • Software architecture deals with abstraction, decomposition and composition, style, and aesthetics. With complex heterogeneous systems, the management of the system's architecture provides the means for controlling this complexity and is a critical success factor for any system deployment.
  • Software architecture deals with the design and implementation of systems at the highest level. Postponing the detailed programming and hardware decisions until the architectural foundations are laid is a critical success factor in any system deployment.

Architecture Principles

Software architecture is more of an art than a science. This paper does not attempt to present the subject of software architecture in any depth, since the literature is rich with software architecture material. There are several fundamental principles to hold in mind:

  • Abstraction / Simplicity - the most important architectural quality. Simplicity is the visible characteristic of a software architecture that has successfully managed system complexity.
  • Interoperability - the ability to exchange functionality and interpretable data between two software entities. Interoperability is defined by four enabling requirements:
    • Communication Channel - the mechanism used to communicate between the system components
    • Request Generation Verbs - the actions used in the communication process
    • Data Format Nouns - the syntax of the data being exchanged
    • Semantics - the intended meaning of the verbs and nouns.
  • Extensibility - the characteristics of architecture that supports unforeseen uses and adapts to new requirements. Extensibility is a very important property for long lifecycle architectures where changing requirements will be applied to the system.

    Interoperability and extensibility are sometimes conflicting requirements: interoperability requires constrained relationships between the software entities, which provide guarantees of mutual compatibility, while extensibility requires a flexible relationship, which allows the system to be easily extended into areas of incompatibility.

  • Symmetry - is essential for achieving component integration, interchange, and reconfigurability. Symmetry is the practice of using a common interface for a wide range of software components. It can be realized as a common interface implemented by all subsystems or as a common base class with specializations for each subsystem (see the sketch following this list).
  • Component isolation - the architectural principle that limits the scope of changes as the system evolves. Component isolation means that a change in one subsystem will not require a change in another.
  • Metadata - self-descriptive information which can describe services and information. Metadata is essential for reconfigurability. With Metadata, new services can be added to a system and discovered at run-time.
  • Separation of Hierarchies - good software architecture provides a stable basis for components and system integration. By separating the architecture into pieces, the stability of the whole may sometimes be enhanced.
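
To make the symmetry and component-isolation principles concrete, here is a minimal sketch in which two very different subsystems sit behind one common interface. The interface and subsystem names are assumptions for illustration, not taken from any product.

    import java.util.List;

    // Symmetry: every subsystem implements one common interface, so components
    // can be integrated, interchanged, and reconfigured uniformly.
    interface Subsystem {
        String name();
        void start();  // a lifecycle verb shared by all components
    }

    public class SymmetrySketch {
        public static void main(String[] args) {
            List<Subsystem> subsystems = List.of(
                new Subsystem() {
                    public String name() { return "scheduling"; }
                    public void start() { System.out.println("scheduler up"); }
                },
                new Subsystem() {
                    public String name() { return "document-management"; }
                    public void start() { System.out.println("DMS up"); }
                });
            // Component isolation: the integrator iterates over the common
            // interface and never touches subsystem internals.
            for (Subsystem s : subsystems) {
                System.out.println("starting " + s.name());
                s.start();
            }
        }
    }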

Architectural styles

Architectural style in software is analogous to architectural style in buildings. An architectural style defines a family of systems or system components in terms of their structural organization. An architectural style expresses components and the relationships between these components, with constraints on their application, their associated composition, and the design rules for their construction. Architectural style is determined by:

  • The component types that perform some function at runtime (e.g. a data repository, a process, or a procedure)
  • The topological description of these components indicating their runtime interrelationships (e.g. a repository hosted by a SQL database, processes running on middleware, and procedures created through user interaction with a graphic interface).
  • The semantic constraints that will restrict the system behavior (e.g. a data repository is not allowed to change the values stored in it).
  • The connections that mediate communication, coordination, and cooperation among the components (e.g. protocols, interface standards, and common libraries).

There are several broad architectural styles in use in modern distributed systems, and several detailed sub-styles within each broad grouping. Because practical systems are not constructed from one style, but from a mixture of styles, it is important to understand the interrelationships between styles and their effect on system behavior.

This architectural style analysis:

  • Brings out significant differences that affect the suitability of a style for various tasks, since the architect is empowered to make selections that are more informed.
  • Shows which styles are variations of others, so the architect can be more confident in choosing appropriate combinations of styles.
  • Allows the features used to classify styles to help the designer focus on important design and integration issues by providing a checklist of topics.

4+1 Architecture

In many projects, a single diagram is presented to capture the essence of the system architecture. Looking carefully at the boxes and lines in these diagrams, the reader is not sure of the meaning of the components. Do the boxes represent computers? Blocks of executing code? Application interfaces? Business processes? Or just logical groupings of functionality?

One approach to managing architectural style is to partition the architecture into multiple views. The 4+1 Architecture describes the relationship between the four views of the architecture and Use Cases that connect them. A view is nothing more than a projection of the system description, producing a specific perspective on the system's components.

The system architecture is the structure of a software system. It is described as a set of software components and the relationships between them. For a complete description of an architecture, several views are needed, each describing a different set of structural elements. The 4+1 Architecture provides the following views:

  • Logical - the functional requirements of the system as seen by the user.
  • Process - the non-functional requirements of the system described as abilities.
  • Development - organization of software components and the teams that assemble them.
  • Physical - the system's infrastructure and components that make use of the infrastructure.
  • Scenarios - the Use Cases that describe the sequence of actions between the system and its environment or between the internal objects in a particular execution of the system.

Figure 2

Figure 2 describes the 4+1 architecture as originally defined by Philippe Kruchten. The 4+1 architecture is focused on the development of systems rather than assembly of COTS based solutions. The 4+1 paradigm will be further developed during the architecture planning phase using ISO/IEC 10746 guidelines.

Moving From 4+1 Architectures to Methodologies

Now that the various components of system architecture are established, the development of these four architectural components must be placed within a specific context. This is the role of an architectural methodology.

In the 4+1 architecture, the arrangement of the system components is described in constructive terms - what the components are made of. The next step in the process is to introduce a business requirements architecture process. The business requirements will drive the architecture. Without consideration of these business requirements, the architecture would be context free. By introducing the business requirements, the architecture can be made practical in the context of the business, and therefore it can become generative.

These business requirements are not the business functions, but rather the functional and non-functional requirements of a system to support the business functions.

Structure Matters

From the beginnings of software engineering, structure has been the foundation of good architecture. There are some basic tenets that can be used to guide the architecture-centered deployment:

  • Systems can be built in a cost-effective manner by importing (or generating) large externally developed components.
  • It is possible to predict certain qualities about a system by studying its architecture, even in the absence of detailed design documents.
  • Enterprise-wide systems can be built sharing a common architecture. Large-scale reuse is possible through architectural-level planning.
  • The functionality of a system component can be separated from the component's interaction mechanisms. Separating data and process is a critical success factor of any well-architected system.

The Architecture Process

Figure 3 describes the process by which the architecture of the system is discovered, verified, and deployed. This view may be criticized as a waterfall approach. In fact, it is sequential at the macro-level, with overlapping activities. This methodology is a macro-model for the system. At this level, rapid development, Extreme Programming, and iterative development programming methods are just that - programming methods. They are not system architecture methods.

This methodology provides a broad framework in which systems architecture can be put to work. Depending on the specific needs of the organization, each methodology phase may be adapted to the desired outcome. In some situations, the business case and IT Strategy already exist and the technical aspects of the system dominate the effort. In other situations, the framework for thinking about the problem must first be constructed before any actual system construction can take place.

The methodology guides without overly constraining the solution domain. The methodology is vendor neutral, notation neutral, and adaptive to the organization's short and long-term needs. The methodology has been proven in the field, in a wide variety of manufacturing industry environments, from petrochemicals to discrete parts manufacturing.

Figure 3

The primary components of the methodology are focused on the 4+1 Architecture as shown in Figure 3. This methodology is deployed in the context of:

  • Commercial off the shelf (COTS) - products integrated into a federated system. The external functionality of a COTS product is defined by the vendor. The behavior of the system is usually fixed in some way, with little or no ability to alter the internal functioning. External behaviors can sometimes be tailored to meet business needs, within the framework of the COTS product.
  • Line of Business databases - used to separate the data from the applications. Legacy applications hold information within their boundaries that must be integrated with the system. In many instances, it is not feasible to move this legacy data to a new system.
  • Workflow engine - used to manage the business processes. In many environments the work processes are fluid, changing with the business climate. Adapting the work to the business process is usually done through some form of workflow.

Using this context, the traditional software development approach to system architecture is not appropriate. Writing software in a COTS environment is a rare occurrence. The 4+1 architecture is adapted to describe the behavioral and test practice attributes of the system. In the COTS domain, the 4+1 Architecture descriptions are now:

  • Logical - the functional requirements of the business process that are directly implemented by the system. These include any manual processes or custom components that must be provided to work around gaps in the COTS system.
  • Process - the abilities of the system that are required to meet the business requirements.
  • Development - the vendor, system integrator, business process, and management teams and resources needed to define, acquire, install, deploy, and operate the system.
  • Physical - the infrastructure needed to define, acquire, install, deploy, and operate the system.
  • Scenarios - the Use Cases that describe the sequence of actions between the system and its environment or between the external objects involved in a particular execution of the system.

The design of software systems involves a number of disciplines applied during the various phases of the methodology. In the past, functional decomposition and data modeling were the accepted mechanisms for defining the system architecture. The descriptive and generative aspects of these notations have given way to UML-based notations. Although the methodology described here is technically independent of any notation or requirements methodology style, there are advantages to the current state-of-the-art tools.

These include:

  • Providing assurance through graphical descriptions of the system. Using a layered descriptive language, pictures of the system can be constructed at all levels of detail, from high-level executive diagrams to programmer-level details of data and processes.
  • Providing generative descriptions, which transform functional decompositions into code templates.
  • Strong textual descriptions, since pictures alone are rarely sufficient to convey the meaning of design.
  • Aesthetic renditions that convey good design principles through a simple, clear and concise notation.

Methodology and the Architecture

Figure 4 presents a topology for the methodology. This arrangement maps the architectural principles to the components of the methodology. There are steps in the methodology that are not addressed in Figure 4. Although these steps may be impacted by architecture, they are secondary to the Requirements Analysis, Technical System Design, System Development, and System Deployment.

Figure 4

Methodology for the SRA

The Systems Requirements Analysis (SRA) includes the discovery of the functional and non-functional requirements that influence the architecture of the system. The system requirements analysis described here is for the architecture of the system, rather than the business processes based on this architecture. These requirements are necessary for the system's success but are not sufficient. They form the foundation of the system and are therefore the foundation of the business process as well.

  • Functional requirements - are requirements for the system that can be stated as specific behaviors. The following requirements are for the functional architecture, not the business processes provided by this functional architecture.
    • Abstraction - enables complexity reduction. Without abstraction, the internal behavior of the components of the system become exposed. This exposure binds the components to each other in ways that prevent their rearrangement.
    • Encapsulation - deals with grouping of abstractions.
    • Information hiding - conceals details. Without this hiding, the syntax of the various information components cannot be separated from the semantics. This creates a coupling between the users of the information and the information, preventing its reuse or extension.
    • Modularization - provides meaningful decomposition. Without modularization, there can be no partitioning of the system.
    • Separation - provides separation of collaborating components. By separating components, their functionality can be reused.
    • Coupling / Cohesion - are measurements of system structure.
    • Sufficiency, completeness, and primitiveness - are properties of components.
    • Separation of policy and implementation - data and process are isolated.
    • Separation of interface and implementation - user interfaces and core processes are isolated.
    • Single point references - referential integrity is maintained.
    • Divide and conquer strategies - modular architecture.
  • Non-functional requirements - are usually termed the system's abilities and represent a set of attributes of the software and hardware that can be measured. These include:
    • Reliability - the robustness of the system in the presence of faults.
    • Scalability - the ability to increase the system's performance with little or no impact on the system architecture.
    • Availability - the ability to deliver services when called upon to do so.
    • Maintainability - the ability to repair the software or hardware with no impact on the system's availability.
    • Performance - the ability to deliver services within the expectations of the users.
    • Repairability - the ability to repair the system without causing consequential damage.
    • Upgradability - the ability to make changes to the hardware and software components without influencing availability.

Methodology for the TSD

The Technical System Design (TSD) develops a detailed description of the system in terms of the multiple views. Like the SRA, the purpose of the TSD is to define the technical aspects of the system architecture. Each of the views is a refinement of the functionality of the COTS products laid over the business processes.

  • Enterprise view - describes the scope and policies of business systems across the enterprise. The view considers all the users of the system in an attempt to normalize the requirements so that individual users do not dominate the system architecture. This view is based on providing a system that is considered infrastructure.
  • Information view - describes the semantics (the meaning) of the information and the information processing. This view assumes the information is isolated from the processing and that the processing activities change over time, while the semantics of the information remains static throughout the lifecycle of the system.
  • Computational view - describes the functional decomposition of the system. This view decomposes the system into computational units and reconstructs the system in a manner that best supports the functional organization of the system.
  • Engineering view - describes the infrastructure of the system as seen from the physical components, networks, servers, peripherals, and workstations.
  • Technology view - describes the technology choices that must be made when selecting vendors for the infrastructure and enabling the COTS components.

Steps in the Architecture Process

The process of discovering, defining, and maintaining an architecture for a specific set of requirements, and the applications that support them, is a non-trivial task. There is a distinct difference between architecture and engineering. In the current context, engineering is equivalent to development. Generally speaking, engineering deals with measurables using analytical tools derived from mathematics and the hard sciences - engineering is a deductive process. Architecture deals largely with un-measurables using non-quantitative tools and guidelines based on practical lessons learned - architecture is an inductive process.

The steps taken during the creation of a system architecture include:

  • Vision of the System - the purpose, focus, assumptions, and priorities of the system.
  • Business case analysis - how will the system earn its keep?
  • Requirements analysis - the external behavior and appearance of the system.
  • Architecture planning - the mapping between the requirements and the software.
  • System prototyping - construction of a prototype system, complete with all the components. This prototype can be used to verify the abilities of the final system.
  • Project management - the professional management of the project, including resources, risks, and deliverables.
  • Architecture prototyping - the construction of an architecture platform to verify the abilities of the system components.
  • Incremental deployment - the deployment of the system in a production environment. This deployment must allow for the incremental functionality of the system, while verifying the capabilities of the application.
  • System transition - move the incrementally deployed system into full production.
  • Operation and maintenance - a phase of the system in which all functionality is deployed and the full bookable benefits are being accrued.
  • Continuous migration - an activity that continuously makes improvements to the system, within the architectural guidelines.

The Vision of the System

The purpose, focus, assumptions, and priorities of a software project are essential elements of a vision statement. If any of these elements change during system acquisition and deployment, there is a significant risk that the models used to define the system will be obsolete. The first step in an architecture-centered development methodology is to establish a viable vision statement, with the assumption that it should not be changed once acquisition and deployment have begun. Any changes that occur in the vision must be reflected in the project models and subsequent analysis and design. The vision statement becomes a binding agreement between the software suppliers and the software consumers - the users of the system. This vision statement must be succinct, ranging from a single slide to less than 10 pages of text. The vision statement establishes the context for all subsequent project activities, starting with requirements analysis and ending with the continuous migration of the system.

Business Case Analysis

The creation of a Business Case Analysis is a critical success factor for any system project. Without a clear picture of the costs and benefits of the proposed system architecture, the decision-makers cannot be presented with complete information. The style of the business case, as well as the contents of the analysis, is usually an organization-specific issue.

Requirements analysis

A project's requirements define the external behavior and appearance of the system without specifying its internal structure. External functional behavior, however, includes the internal actions needed to ensure the desired non-functional qualities of the external behavior. The external appearance comprises the layout and navigation of the user interface screens and transaction processing activities, as well as the behavior of the system in terms of its abilities. These abilities are usually defined as adjectives for the properties that the system possesses: Reliability, Availability, Maintainability, Scalability, Performance, Testability, Efficiency, Reusability, as well as other non-functional properties of the system architecture.

An effective approach for capturing behavioral requirements is through Use Cases, which consist of top-level diagrams (see Figure 5) and extensive textual descriptions. The Use Case notation is deceptively simple but has one invaluable quality - it enforces abstraction. It is one of the most effective notations devised for complex concepts and a powerful way to ensure the top-level requirements are represented with simplicity and clarity.

Figure 5

Each circle, or individual Use Case, is accompanied by a description of relevant requirements. It usually takes the form of a list of sequential actions described in domain-specific prose. Use Case definitions are developed jointly with domain experts and provide a domain model of the system for the purpose of architecture. Once system integration and deployment begin, use cases are extended with system-specific scenario diagrams that will be elaborated into workflow, procedural changes, and system tests.

The appearance, functionality, and navigation of the user interface are closely related to the use cases. Low-fidelity prototyping - drawing screens with paper and pencil - can be effective. In all cases, end-user domain experts must be involved in the screen-definition process.

With the use cases and user interfaces defined, the context for architectural planning has been established. In addition to documentation (sketches or output from CASE tools), the contributors acquire a better understanding of the desired system capabilities in the context of the end-user domain.

Use cases provide a visual means of capturing the external behavior of the system. The next step in the requirements analysis is to partition the nouns and noun phrases, verbs and verb phrases generated by the use cases. This activity can be done through Class-Responsibility-Collaboration (CRC) Cards. This technique identifies and specifies the data and processes of the system in an informal manner. The CRC Card method is based on the theory of role-playing, in which the participants have specific knowledge about their own roles and make requests of other participants to gain knowledge of their roles. Through this role-playing, the nouns and verbs of the system are revealed.
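
A CRC card itself is just a small structured record, which is part of why the technique stays informal. The sketch below, with invented card content, shows the three elements a card captures.

    import java.util.List;

    // A Class-Responsibility-Collaboration card as a small data structure.
    // The card content is invented for illustration.
    public class CrcCardSketch {

        record CrcCard(String className,
                       List<String> responsibilities,
                       List<String> collaborators) {}

        public static void main(String[] args) {
            CrcCard card = new CrcCard(
                "WorkOrder",
                List.of("track production status", "hold configured bill of material"),
                List.of("Scheduler", "InventoryLedger"));
            System.out.printf("%s: does %s, asks %s%n",
                card.className(), card.responsibilities(), card.collaborators());
        }
    }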

4+1 and UML

If UML is to be used in the development of the system described by 4+1, then it is useful to understand how the different components of UML can be assigned specific roles:

  • Scenario - Use cases
  • Logical view - Class diagrams, state transition diagrams, collaboration diagrams
  • Development view - Component diagram
  • Physical view - Deployment diagram
  • Process view - Class diagram, deployment diagram

Figure 6

Architectural Planning

Requirements are inherently ambiguous, intuitive, and informal. Requirements are a right-brained activity. Software is logically un-intuitive (i.e. hard to decipher) and meant to be interpreted unambiguously by a machine. Software is a left-brained activity. Architecture bridges the semantic gap between the requirements and software.

Architecture's first role is to define the mapping between the requirements and the software. Architecture captures intuitive decisions in a more formal manner, making it useful to programmers and system integrators, and defines the internal structure of the system before it is turned into code so that current and future requirements can be satisfied.

However, architecture also has another significant role: it defines the organization of the software project. Architecture planning is the missing link in many software projects, processes, and methods, often because no one is quite sure what architecture really is. One framework for defining software architecture is provided by the ISO standard for Open Distributed Processing (ODP), International Standard ISO/IEC 10746:

ODP is a way of thinking about complex systems that simplifies decision-making. It organizes the system architecture in terms of five standard viewpoints:

  • Enterprise viewpoint - the purpose, scope, and policies of the business system as defined by the workflow and business rules.
  • Information viewpoint - the semantics of information and information processing.
  • Computational viewpoint - the functional decomposition of the system in modules, interfaces, and the messages exchanged across the interfaces.
  • Engineering viewpoint - the infrastructure required to support the distributed environment.
  • Technology viewpoint - choice of technology for the implementation of the system.

Each viewpoint defines the conformance to the architectural requirements. Without this conformance to requirements, the architecture is meaningless, because it will have no clear impact upon implementation. ODP facilitates this process by embodying a pervasive conformance approach. Simple conformance checklists are all that are needed to identify conformance points in the architecture.

The ODP+4 methodology - based on the 4+1 Architecture - generates an Open Distributed Processing architecture as well as formal and informal artifacts, including the vision statement, the use case-based requirements, the rationale, and the conformance statements.

Figure 7

Enterprise Viewpoint

The enterprise viewpoint defines the business purpose and policies of the system in terms of high-level objects. These business-object models identify the essential constraints on the system, including the system objective and important policies. Policies for business objectives are divided into three categories:

  • Obligations - what must be performed by the system.
  • Permissions - what may be performed by the system.
  • Restrictions - what must not be performed by the system.

A typical Business Enterprise Architecture comprises a set of logical object diagrams (in UML notation), and prose descriptions of the diagram semantics. The language of the enterprise is concerned with the performable actions that change policy, such as creating an obligation or revoking permission.
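
One way to read the three policy categories is as machine-checkable statements about actions. The sketch below encodes them as data; the actions are invented, and a real enterprise model would express them in UML with prose semantics as described above.

    import java.util.ArrayList;
    import java.util.List;

    // Enterprise-viewpoint policies as data: obligations (must), permissions
    // (may), and restrictions (must not). Actions are invented examples.
    public class PolicySketch {

        enum Kind { OBLIGATION, PERMISSION, RESTRICTION }

        record Policy(Kind kind, String action) {}

        public static void main(String[] args) {
            List<Policy> policies = new ArrayList<>(List.of(
                new Policy(Kind.OBLIGATION,  "record every stock movement"),
                new Policy(Kind.PERMISSION,  "release a work order early"),
                new Policy(Kind.RESTRICTION, "ship without quality sign-off")));

            // The language of the enterprise acts on policy itself,
            // e.g. revoking a permission:
            policies.removeIf(p -> p.kind() == Kind.PERMISSION);

            policies.forEach(p -> System.out.println(p.kind() + ": " + p.action()));
        }
    }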

Information Viewpoint

The information viewpoint identifies what the system must know, expressed as a model, emphasizing attributes that define the system state. Because ODP is an object-oriented approach, the models also include essential information processes encapsulated with attributes, thus following the conventional notation of an object.

Figure 8

Computational Viewpoint

The computational viewpoint defines the top-level application program interfaces (APIs). These are fully engineered interfaces at the subsystem boundaries. During implementation, the system integration team will develop application modules that comply with these boundaries. Architectural control of these interfaces is essential to ensuring a stable system structure that supports change and manages complexity.

The computational viewpoint specification defines the modules (objects, APIs, subsystems) within the ODP system, the activities within these modules, and the interactions that occur among them. Most modules in the computational specification describe the application functionality. These modules are linked through their interface descriptions.

The CORBA Interface Definition Language (IDL), an ISO standard notation for ODP computational architectures, becomes a fundamental notation for software architects at these boundaries. It has no programming-language or operating-system dependencies, and can be translated to most popular programming languages for both CORBA and Microsoft technology based (i.e. COM/DCOM) systems.
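
Because IDL is language-neutral, the boundary it defines can be rendered in any target language. The sketch below shows the kind of subsystem boundary such a contract might translate to, written here in Java with hypothetical names rather than in IDL itself.

    // A computational-viewpoint boundary: the architecture controls only the
    // operation names, parameter types, and semantics. Names are hypothetical.
    interface InventoryBoundary {
        int quantityOnHand(String partNumber);
        void reserve(String partNumber, int quantity);
    }

    public class BoundarySketch {
        public static void main(String[] args) {
            // A stub implementation standing in for a remote subsystem.
            InventoryBoundary inv = new InventoryBoundary() {
                public int quantityOnHand(String p) { return 7; }
                public void reserve(String p, int q) {
                    System.out.println("reserved " + q + " of " + p);
                }
            };
            if (inv.quantityOnHand("P-42") > 0) inv.reserve("P-42", 1);
        }
    }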

Engineering Viewpoint

The engineering viewpoint defines the infrastructure requirements independent of the selected technologies. It resolves some of the complex system decisions, including physical allocation, system scalability, communications qualities of service (QoS), and fault tolerance.

The benefit of using an ODP+4-like framework is that it separates the various concerns (design forces) during the architecture process. The previous viewpoints of ODP+4 resolved other complex issues of less concern to distributed systems architects, such as APIs, system policies, and information schemas. Conversely, these other viewpoints are able to resolve their respective design forces independent of distribution concerns. Decisions must be made regarding system aspects such as object replication, multithreading, and system topology. It is during this activity that the physical architecture of the system is developed.

Technology Viewpoint

The technologies that will be used to implement the system are selected in this view. At this level of detail, all other viewpoints are fully independent of these technology decisions. Since the majority of the architecture design process is independent of the physical hardware, commercial technology evolution can be readily accommodated.

A systematic technology selection process includes initial identification of the conceptual mechanisms (such as persistence or communication). Specific requirements of the conceptual mechanism are gathered from the other viewpoints and concrete mechanisms such as DBMS, OODBMS, and flat files are identified. Then specific candidate mechanisms are selected from available products. Other project factors, such as product price, training needs, and maintenance risks, are considered at this point. It is important to restate the rationale behind these decisions, just as it is important to record the rationales for all viewpoints as future justification of architectural constraints.

Many projects wrongly consider this technology view to be the system architecture. By developing the technology viewpoint before the other ODP architecture views, the project is turned upside down.

Prototyping the system

Screen definitions from the System Requirements Analysis can be used to create an on-line mockup of the system to show to end users and managers. Dummy data and simple file I/O can provide a realistic simulation for the essential parts of the user interface. End users and architects then jointly review the mockups and run through the use cases to validate requirements. Often, new or modified requirements will emerge during this interchange. Printouts of these modified screens can be created and marked up for subsequent development activities. Any modifications to requirements are then incorporated into the other architectural activities.

Through the mockup, management can see visible progress, a politically useful achievement for most projects that reduces both political and requirements-oriented risk. With rapid prototyping technologies such as screen generation wizards, mockups of most systems can be constructed.

Building Block Based Development

There are two distinct approaches to acquiring and deploying software systems:

  • Product-based - solves specific problems with components of individual systems. These components can be integrated to form a complete system, but the resulting integration may or may not possess the attributes of good architecture.
  • Asset-based - solves problems in different contexts using components that are architected to provide services greater than the sum of their parts.

Figure 9

As shown in Figure 9, the architect's role is to keep "must have" requirements that could corrupt the architecture out of the product.

Building Blocks Of the Prototype

Building blocks are an architectural paradigm that governs the means to construct a system along three dimensions:

  • Structure - determines the decomposition of the system into parts and the relationships between those parts.
  • Aspects - models the functional decomposition of the system.
  • Behavior - deals with the processing that takes place within the system.

Structure should be considered the most important of the three, since it is through structure that the system complexity can be reduced.

This is the primary motivation for the architecture-centric management of structure. Without control of structure, the resulting system is simply a collection of parts. Gaining any synergy from the collection is impossible without a structural framework in which to place the components and their interacting interfaces.

The building blocks of a manufacturing system are usually centered on the ERP system, since the Bill of Material is owned by this application. In order to avoid a detailed discussion of ERP and its relationship with other business applications, a set of building blocks can be developed that can be used for all manufacturing applications.
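One way to express such a building block is as a service boundary around the ERP-owned Bill of Material. The Java interface below is a hedged sketch; the names and operations are hypothetical, chosen only to show how other applications can depend on the block rather than on the ERP product behind it.

    import java.util.List;

    // Hypothetical building-block boundary around the ERP-owned Bill of Material.
    // Manufacturing applications program against this interface, so the ERP
    // system behind it can change without rippling through the rest of the system.
    public interface BillOfMaterial {
        // Top-level assemblies known to the ERP system.
        List<String> assemblies();

        // Direct component part numbers for a given assembly.
        List<String> componentsOf(String assemblyPartNumber);
    }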

Managing The Project

As the final step in the pre-development process, the project management team plans and validates the deployment schedule to resolve resource issues including staffing, facilities, equipment, and commercial technology procurement.

At this stage, the external and internal increments performed during incremental development are defined. External increments provide risk reduction with respect to requirements and management support. Internal increments support the efficient use of development resources - for example, back-end services used by multiple subsystems. Current best practices suggest several smaller internal increments that support larger-scale external increments, the so-called V-W approach.

The architecture-centric process provides for the use of parallel increments. Since the system is partitioned into well-defined computational boundaries, integration teams can work independently and in parallel with other teams, each within their assigned boundaries. Integration planning includes increments spanning architectural boundaries.

Figure 10

The plan should be detailed for early increments and include re-planning activities for later in the project, recognizing the reality that project planners do not know everything up front. At this stage, the team should prepare a risk mitigation plan that identifies technical backups. The integration team involved in mockup and architecture prototyping should continue to build experimental prototypes using the relatively higher-risk technologies well in advance of most developers. This run-ahead team is an essential element of risk mitigation.

The final activity in project management planning is the architectural review and startup decision. Up to this point, the enterprise sponsors have made relatively few commitments compared to the full-scale deployment costs (about 25% of system cost). Executive sponsors of the project must make a business decision about whether to proceed with the system. This executive commitment will quickly lead to many other commitments that are nearly impossible to reverse (such as technology lock-in, expenses, and vendor-generated publicity). At this point, the system architects are offering the best possible approach given the current business and technology context.

Prototyping The Architecture

The architecture prototype is a simulation of the system architecture. System API definitions are compiled and stub programs are written to simulate the executing system. This architecture prototype will be used to validate the computational and engineering architectures, including the flow of control and timing across distribution boundaries.

Using technologies such as CORBA, a computational architecture specification can be automatically compiled into a set of programming header files with distributed stubs (on the calling side) and skeletons (on the service side). Processing can be simulated in the skeletons with dummy code. Sample client programs can be written to send invocations across computational boundaries using dummy data. A small number of essential, high-risk Use Cases can be simulated with alternative client programs. At this point, the prototype execution is used to validate conformance with engineering constraints. This is also the time to propose and evaluate changes to the computational view, engineering view, or technology view architectures.
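As a hedged sketch of this step, assume a hypothetical IDL interface InventoryService with a single operation, long onHand(in string part), has been compiled to Java, yielding the generated InventoryServicePOA skeleton and InventoryServiceHelper classes. The skeleton is filled with dummy code, and a sample client times an invocation across the boundary:

    // Service side: a skeleton filled with dummy code, enough to exercise
    // flow of control and timing across the distribution boundary.
    class InventoryServiceImpl extends InventoryServicePOA {
        public int onHand(String part) {
            return 42; // dummy data; real processing is simulated away
        }
    }

    // Client side: a sample invocation across the computational boundary.
    class PrototypeClient {
        public static void main(String[] args) throws Exception {
            org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, null);
            org.omg.CORBA.Object ref = orb.string_to_object(args[0]); // stringified IOR
            InventoryService inventory = InventoryServiceHelper.narrow(ref);
            long start = System.currentTimeMillis();
            int qty = inventory.onHand("PN-1000");
            System.out.println("onHand=" + qty + " took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }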

Incremental Deployment Of The System

Deployment starts with several essential activities. The integrators must learn and internalize the architecture and requirements. An effective way to achieve this is with a multi-day kickoff meeting, which includes detailed tutorials from domain experts and architects. The results of all previous steps can be leveraged to bring the integrators up to speed. Lectures should be videotaped so that replacement staff can be similarly trained.

Each increment involves a complete development process, including design, coding, and testing. Initially, the majority of the increments will focus on individual subsystems. As the project progresses, an increasing number of increments will involve integrating subsystems.

For most of the software integration activity, the architecture is frozen, except at planned points where architectural upgrades can be inserted without disruption. Architectural stability enables parallel development. For example, at the conclusion of a major external increment, an upgrade might be inserted into the computational architecture before the next increment initiates. The next increment starts with a software upgrade that conforms to the changes. In practice, the need for and frequency of such upgrades decreases as the project progresses. The architect's goal is to increase the stability and quality of the solution based on feedback from development experience. A typical project requires two architectural re-factorings (upgrades) to get to a stable configuration that is suitable for deployment.

Transitioning The System To Production

Deploying the system to a pilot group of end users is an integral part of the process. Lessons learned during this initial deployment will be translated to new development iterations. Schedule slips are inevitable, but serious quality defects are intolerable.

Improving quality by refactoring the integration (improving software structure) is an important investment in the system that should not be neglected. At this stage, architectural certification - where the architect confirms that the system implementation conforms to the specifications and properly implements the end users' requirements - becomes extremely important. In effect, the architect acts as an impartial arbitrator between the interests of the end users and the developer of the system. If the end users identify new requirements that affect architectural assumptions, the architect can assess the request and work with both sides to plan feasible solutions.

Operating And Maintaining The System

Operations and Maintenance (O&M) is the proving ground that verifies whether the integration was done right. The majority of system cost will be expended here, and as much as 70% of the O&M cost will be due to extensions. The associated requirements and technology changes are the main drivers of continuing development. Typically, half of each integrator's time will be spent trying to figure out how the system works. Architecture-centered development resolves much of this confusion with a clear, concise set of documentation, i.e., the system architecture itself.

Continuous Migration Of The System Components

System migration to a follow-on target architecture occurs near the end of the system life cycle. Two major processes for system migration are called Big Bang (or Cold Turkey) and Chicken Little. A Big Bang is a complete, overnight replacement of the legacy system. In practice, Big Bang seldom succeeds; it is a common anti-pattern for system migration. The Chicken Little approach is more effective and ultimately more successful. It involves simultaneous, deployed operation of both target and legacy systems.

  • The Cold Turkey approach, in which the legacy systems are replaced in kind with the new systems. There are many impediments to this approach:
    • A better system must be promised - the current user base expects that the replacement system will perform significantly better, since the effort necessary to deploy the new system needs to be paid back many times over
    • Business conditions never stand still - the new system must be capable of evolving with changing business conditions. As time moves on, the requirements themselves change, and the deployed system must be capable of keeping up.
    • Specifications rarely exist for the current system - by definition, legacy systems are poorly documented.
    • Undocumented dependencies frequently exist in the current system - over time, the legacy system has become customized to meet the previous tactical requirements.
    • Legacy systems are usually too big to simply cut over from old to new - millions of database entities and hundreds of legacy applications are tightly coupled to form the legacy system. This complexity becomes a serious burden simply to understand.
    • Management of large projects is difficult and risky - all the problems associated with managing large projects are present.
    • Lateness is rarely tolerated - since the legacy system is mission critical, any delays become exaggerated.
    • Large projects tend to become bloated with new and unjustified features - once the system is opened to migration, all sorts of new features will be demanded.
    • Homeostasis is prevalent - the organization resists change and drifts back toward its familiar processes.
    • Analysis paralysis sets in - with all the issues stated above, the analysis activities become bogged down in details.
  • The Chicken Little approach, in which the system components are incrementally migrated in place until the desired long-term objectives have been reached.
    • Controllable - since the scope of each incremental effort can be managed, the overall architecture can be kept aligned with the system vision. The failure of one step in the process does not affect the preceding deployments; in principle, it does not affect future steps either. Once the failed step has been corrected, the project can proceed as planned.
    • Failure is contained to one step - unlike the Big Bang approach, there is no all-or-nothing result.
    • Incremental over time - effort, budgets, and human resources can be incrementally deployed.
    • Conservatively optimistic - success is always in hand with incremental benefits paving the way.

In the Chicken Little approach, gateways are integrated between the legacy and target systems. Forward gateways give legacy users access to data that has been migrated to the target system. Reverse gateways give target-system users transparent access to legacy data. Data and functionality migrate incrementally from the legacy to the target system. In effect, system migration is a continuous evolution. As time moves on, new users are added to the target system and taken off the legacy environment. In the end, it becomes feasible to switch off the legacy system; by that time, it is likely that the target system has become the legacy in a new system mutation. The target system transition overlaps the legacy system migration. In the Chicken Little approach, transition, operations and maintenance, and continuous migration are part of a continuous process of re-deploying the system to meet the ever-changing needs of the business.
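The following Java fragment is a minimal sketch of a reverse gateway under stated assumptions: PartRepository is a hypothetical interface the target system programs against, and LegacyPartSystem stands in for whatever access path the legacy environment actually provides.

    // Hypothetical interface the target system programs against.
    interface PartRepository {
        String descriptionFor(String partNumber);
    }

    // Stand-in for the legacy access path (screen scraping, RPC, file transfer).
    class LegacyPartSystem {
        String lookup(String key) {
            return "legacy description for " + key;
        }
    }

    // Reverse gateway: target-system users get transparent access to legacy
    // data; as each part family migrates, calls are redirected to the target store.
    class ReversePartGateway implements PartRepository {
        private final LegacyPartSystem legacy = new LegacyPartSystem();

        public String descriptionFor(String partNumber) {
            return legacy.lookup(partNumber);
        }
    }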

Applying the methodology

The methodology described in the previous sections must be deployed against a live system in order to be of any value to the organization. This section applies the methodology in a checklist manner. The architect, the developers, and the deployment team can use these checklists to ask questions about the proposed (or existing) system.

The Role Of The Architect

The architect's role in all of these processes is to maintain the integrity of the vision statement using the guidelines provided in this White Paper. This can be done by:

  • Continually asking architecture questions in response to system requirements, requests for new features, alternative solutions, vendors' offerings, and suggestions on how to improve the system: does this product or feature request fit the architecture that has been defined? If not, does the requested item produce a change in the architecture? If so, does this item actually belong in the system?
  • Continually asking the developers, integrators, and product vendors to describe how their system meets the architectural principles stated in the requirements. These questions are intended to maintain the integrity of the system for future and unforeseen needs, not the immediate needs of the users. Without this integrity, the system will not be able to adapt to those needs.
  • Continually adapting to the needs of the users and the changing technology. The discipline of software architecture is continually changing, and the knowledge of software architecture is expanding through research and practice. The architect must participate in this process as well.

Architecture Management

The management of the system architecture is a continuous improvement process, much in the same way that any quality program continually evaluates the design and production process in the search for improvement opportunities.

  • Architectural evaluation - the architecture of the proposed system is continually compared with other architectural models to confirm its consistency.
  • Architectural management - the architecture is actively managed in the same way the software is.
  • System design processes - there are formal design processes for software architecture, just as there are formal processes for software development. These processes must be used if the architecture is to have an impact on the system integrity.
  • Non-functional design processes - the non-functional requirements must be provided for in the detailed system design. The design process must include the metrics needed to verify that the non-functional requirements are being met in the architecture.

Architecture Evaluation

The ODP framework provides a number of functions to manage the architecture of the system.

  • Management - how are the various components of the system defined, created, and managed? Are these components defined using some framework in which their architectural structure can be validated?
  • Coordination - are the various components and their authors participating in a rigorous process? Can the architectural structure of the system be evaluated across the various components with any consistency? Is there a clear and concise model of each coordinating function in the system?
  • Transactions - are the various transactions in the system clearly defined? Are they visible? Are they recoverable? Do the transactions have permanence?
  • Repository - is the data in the system isolated from the processing?
  • Type management - is there a mechanism for defining and maintaining the metadata for each data and process type?
  • Security - have the security attributes of the system been defined before the data and processing aspects?

Transparency Management

The computational specifications of the system are intended to be distribution-independent. Failure to provide this transparency is the primary cause of difficulty in implementing a physically distributed, heterogeneous system in a multi-organizational environment. Transparency shifts these complexities out of the application domain and into the supporting infrastructure domain, where many more options are available to deal with them. The ODP transparencies are:

  • Access - hides the differences in data representation and procedure calling to enable internetworking between heterogeneous systems.
  • Location - masks the use of addresses, including the distinction between local and remote usage.
  • Relocation - hides the relocation of a service and its interface from other services and the interfaces bound to it.
  • Migration - masks the relocation of a service from that service and the services that interact with it.
  • Persistence - masks the deactivation and reactivation of a service.
  • Failure - masks the failure and possible recovery of services, to enhance the fault tolerance of the system.
  • Transactions - hides the coordination required to satisfy the transactional properties of operations. Transactions have four critical properties:
  • Atomicity, Consistency, Isolation, and Durability - these properties are referred to as ACID. Atomicity means that the transaction executes to completion or not at all. Consistency means that the transaction preserves the internal consistency of the database. Isolation means the transaction executes as if it were running alone, with no other transactions. Durability means the transaction's results will not be lost in a failure.
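A minimal sketch of these properties in practice, using plain JDBC: the connection URL and table names below are hypothetical, and any JDBC-compliant DBMS would behave the same way. Committing makes both updates durable as a unit; any failure rolls both back atomically.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class StockTransfer {
        // Move quantity between two stock locations as one atomic transaction.
        public static void transfer(String url, String from, String to, int qty)
                throws SQLException {
            try (Connection con = DriverManager.getConnection(url)) {
                con.setAutoCommit(false); // group both updates into one transaction
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE stock SET on_hand = on_hand - ? WHERE location = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE stock SET on_hand = on_hand + ? WHERE location = ?")) {
                    debit.setInt(1, qty);
                    debit.setString(2, from);
                    credit.setInt(1, qty);
                    credit.setString(2, to);
                    debit.executeUpdate();
                    credit.executeUpdate();
                    con.commit();   // durability: the paired updates survive failure
                } catch (SQLException e) {
                    con.rollback(); // atomicity: all or nothing
                    throw e;
                }
            }
        }
    }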

System Design Process

The construction of software is based on several fundamental principles, called enabling techniques. All the enabling techniques are independent of a specific software development method, programming language, hardware environment, and, to a large extent, the application domain. These enabling techniques have been known for years; many were developed in the 1970s in connection with publications on structured programming.

Although the importance of these techniques has been recognized in the software community for some time, the strong link between system architecture and these enabling principles is only now becoming clear. Patterns for software architecture are explicitly built on these principles.

  • Abstraction - a fundamental principle used to cope with complexity. Abstraction can be defined as the essential characteristic of an object that distinguishes it from all other kinds of objects and thus provides crisply defined conceptual boundaries relative to the perspective of the viewer. The word object can be replaced by component or module to achieve a broader definition.
  • Encapsulation - deals with grouping the elements of an abstraction that constitute its structure and behavior, and with separating different abstractions from each other. Encapsulation provides explicit barriers between abstractions.
  • Information hiding - involves concealing the details of a component's implementation from its clients, to better handle system complexities and to minimize coupling between components.
  • Modularization - is concerned with the meaningful decomposition of a system design and with its grouping into subsystems and components.
  • Separation - different or unrelated responsibilities should be separated from each other within the system. Collaborating components that contribute to the solution of a specific task should be separated from components that are involved in the computation of other tasks.
  • Coupling and Cohesion - are principles introduced as part of structured design. Coupling focuses on inter-module characteristics: it is a measure of the strength of association established by the connection from one module to another. Strong coupling complicates a system architecture, since a module is harder to understand, change, or correct if it is highly interrelated with other modules. Complexity can be reduced by architecting systems with weak coupling.
  • Cohesion - measures the degree of connectivity between the functions and elements within a single module. There are several forms of cohesion, the most desirable being functional cohesion. The worst is coincidental cohesion, in which unrelated abstractions are thrown into the same module. Other forms of cohesion - logical, temporal, procedural, and informational cohesion - are described in the computer science literature.
  • Sufficiency, completeness, and primitiveness - sufficiency means that a component should capture those characteristics of an abstraction that are necessary to permit a meaningful and efficient interaction with the component. Completeness means that a component should capture all relevant characteristics of its abstraction. Primitiveness means that all the operations a component can perform can be implemented easily. It should be the major goal of every architecture process to be sufficient and complete with respect to the solution to a given problem.
  • Separation of policy and implementation - a component of a system should deal with policy or implementation, but not both. A policy component deals with context-sensitive decisions, knowledge about the semantics and interpretation of information, the assembly of many disjoint computations into a result, or the selection of parameter values. An implementation component deals with the execution of a fully specified algorithm.
  • Separation of interface and implementation - any component in a properly architected system should consist of an interface and an implementation. The interface defines the functionality provided by the component and specifies how to use it. The implementation includes the actual processing for the functionality (a minimal sketch follows this list).
  • Single point of reference - any function within the system should be declared and defined only once. This avoids problems with inconsistency.
  • Divide and conquer - a strategy familiar to both system architects and political strategists. By dividing the problem domain into smaller pieces, the effort necessary to provide a solution can be reduced.
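The sketch below illustrates separation of interface and implementation, together with information hiding, in Java; the names are hypothetical. Clients depend only on the interface, so the internal representation can change without touching them.

    import java.util.HashMap;
    import java.util.Map;

    // The interface defines what the component does and how to use it.
    interface PartCatalog {
        void add(String partNumber, String description);
        String describe(String partNumber);
    }

    // The implementation holds the actual processing. Information hiding:
    // the map is an internal detail that clients never see, so it could be
    // replaced by a database or a remote service without changing clients.
    class InMemoryPartCatalog implements PartCatalog {
        private final Map<String, String> parts = new HashMap<>();

        public void add(String partNumber, String description) {
            parts.put(partNumber, description);
        }

        public String describe(String partNumber) {
            return parts.getOrDefault(partNumber, "unknown part");
        }
    }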

Non-functional Properties

The non-functional properties of a system have the greatest impact on its development, deployment, and maintenance. The overall qualities of the system are a direct result of the non-functional aspects of the architecture.

  • Changeability - since systems usually have a long life span, they will age. The aging process creates new requirements for change. To reduce maintenance costs and the workload involved in changing a system's behavior, it is important to prepare its architecture for modification and evolution.
  • Interoperability - the software systems that result from a specific architecture do not exist independently from other systems in the same environment. To support interoperability, the system architecture must be designed to offer well-defined access to externally visible functionality and data structures.
  • Efficiency - deals with the use of the resources available for the execution of the software, and how this impacts the behavior of the system.
  • Reliability - deals with the ability of the software to maintain its functionality in the face of application or system errors and in situations of unexpected or incorrect usage.
  • Testability - a system needs support from its architecture to ease the evaluation of its correctness.

APPLYING THESE PRINCIPLES

The following checklist can be used as a guideline for applying these principles.

  • Develop a vision statement for the system. This vision statement must be acceptable not only to the management and executive team members, but to the users as well. It is not uncommon to have a wonderfully crafted statement of the system's vision, suitable for the boardroom wall, that the shop floor user considers out of touch with what actually happens in the real world. Developing the vision statement is not a simple task and should be given sufficient time and effort. This statement will form the foundation of the project and will be used to resolve conflicts in the direction of the development effort: "why are we doing this?" is answered by the vision statement.
  • The business case process is common in most organizations. The constraints of the financial calculations will be defined ahead of time. The business case must consist of hard (bookable) savings. After these savings have been identified, soft savings can be considered.
  • The requirements analysis can take place in a variety of forms. The real goal here is to bring out the needs of the user, within the context of good system architecture. The user's needs can be captured through interviews, use cases, CRC cards, or any disciplined requirements development process. The primary goal of the architect is to prevent the requirements from corrupting the underlying system architecture. This tradeoff process requires skill and persistence. In many cases the user will articulate a requirement as a must-have behavior of the system, even though the consequence of that requirement may damage the architectural structure of the system. The architect's role is to incorporate the requirement without upsetting the architecture. This is done by iterating on both the requirements and the architecture until there is a proper fit.
  • Using the ISO/IEC 10746 ODP structure, the architect defines the system from the five viewpoints. The capture of these viewpoints can be done through any appropriate tool. In the current OO paradigm, the UML notation is a powerful means of describing all five viewpoints. The complete UML language, in conjunction with CRC cards, can describe the system's logical and physical behavior.
  • The prototyping process provides a powerful means of trying out parts of the system design without the commitment to production. During the prototyping activity, the system architecture is evaluated against the planned architecture. During this process, the architect must be vigilant to avoid compromising the structural, aspect, and behavioral integrity of the system without good cause. It is the natural inclination of the developers and end users to accept a workaround at this point. This workaround will become a permanent, undesirable feature of the system if it is accepted without a full understanding of the consequences.
  • The incremental process should follow the business case analysis rollout strategy. The business case describes how the benefits will be booked over time and by which component of the organization. The system must follow the benefit stream in order to accrue the savings.
  • The transition to production process is a continuation of the incremental deployment. The full training and operational support activities are now deployed.
  • The operation and maintenance of the system is then turned over to the production staff.
  • The continuous migration of the system is a process not usually considered part of the system architecture. Without architectural influence on this activity, the system will quickly decay into a set of legacy components.

An Example Architecture

There are numerous examples of layered architecture in the current literature, so presenting one here may appear redundant. However, we need to start somewhere, and the EDM/PDM domain provides unique requirements not always found in general business systems. This example, shown in Figure 11, is derived from Software Architecture and Design Patterns in Business Applications, adapted to the manufacturing domain.

Figure 11

In this example architecture the components are partitioned into functional layers:

  • Workflow - a system that implements workflow metaphors for business processes. This component is usually a COTS product, or at least a set of components that adheres to the Workflow Management Coalition specifications. The workflow acts as a layer of glue above all the application systems of the enterprise, integrating them. If there is an atomic process to be performed, the work can be performed externally to the workflow system as part of an application component embedded in the workflow process.
  • Interface objects - are of two types:
    • Dialog Interfaces - provide for the manipulation of kernel objects through interfaces. The interface objects are typically partitioned into presentation objects and dialog controls that manage the flow of control in the dialog.
    • Batch Interfaces - provide for manipulation of application kernel objects using batch processes. This processing is often neglected.
  • Exception handling - a common service provided by the standard architecture. Exceptions rise up through the application layers until they reach the upper layer, where they can be handled by software or made visible to users. In practice, exception handling consists of several exception classes and protocols, plus components to map exceptions to meaningful messages for the users (a minimal sketch follows this list).
  • Application kernel - is a set of components grouped by structural similarity, not by functional requirements
    • Business transaction objects - a meaningful business process may require several steps or dialogs, which in turn interact with lower level business processes. Business transaction objects are used to control the sequencing of these actions.
    • Application kernel objects - provide the core set of functions needed for EDM or PDM. These functions are usually derived from the analysis of the business.
    • Pure functions - provide a core set of functions used by the application. These functions typically have a narrow interface and perform complex calculations or behavior (change management, format conversion, database processing).
    • Application System - consists of an interface and a set of kernel objects that are manipulated through the interface.
  • Component isolation - the architectural principle that limits the scope of changes as the system evolves. Component isolation means that a change in one subsystem will not require a change in another.
  • Metadata - self-descriptive information, which can describe services and information. Metadata is essential for reconfigurability. With Metadata, new services can be added to a system and discovered at run-time.
  • Separation of Hierarchies - good software architecture provides a stable basis for components and system integration. By separating the architecture into independent hierarchies, the stability of the whole may be enhanced.
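Below is the minimal sketch promised in the exception-handling item; the exception class names and the kernel operation are hypothetical. Domain exceptions propagate upward, and the dialog layer maps them to messages meaningful to end users.

    // Root of the hypothetical application exception hierarchy.
    class ApplicationException extends Exception {
        ApplicationException(String message) { super(message); }
    }

    class PartNotFoundException extends ApplicationException {
        PartNotFoundException(String partNumber) {
            super("Part " + partNumber + " is not in the catalog.");
        }
    }

    // Upper layer: maps exceptions rising from the application kernel
    // into messages that are meaningful to the end user.
    class DialogLayer {
        void releaseWorkOrder(String partNumber) {
            try {
                kernelRelease(partNumber); // lower-layer call that may throw
            } catch (ApplicationException e) {
                showToUser(e.getMessage());
            }
        }

        private void kernelRelease(String partNumber) throws ApplicationException {
            throw new PartNotFoundException(partNumber); // simulated kernel failure
        }

        private void showToUser(String message) {
            System.out.println(message);
        }
    }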