Saturday, 26 September 2009

Testable Architecture: A Device for Crafting Complex Communicating Systems

Introduction
It is often argued that software engineering for distributed systems is an engineering of complex systems. Following Chaitin's reading of Gödel's incompleteness theorem, a complex system can be defined as one that can only be modelled by an infinite number of modelling tools (Chai71). The development of distributed systems in domains such as telecommunications, industrial control, supply chain and business process management represents one of the most complex construction tasks undertaken by software engineers (Jenn01), and the complexity is not accidental but an innate property of large systems (Sim96).

In distributed systems we observe emergent behaviour, since a single logical operation may require multi-channel interactions with numerous nodes and the sending of hundreds of messages in parallel. Distributed behaviour is also more varied, because the placement and order of events can differ from one operation to the next. Modelling the interactions of a distributed system is therefore not straightforward; it demands a multi-disciplinary approach and a change in the traditional mindset.

A Multi-disciplinary Class of Problems
The class of problems involved in modelling distributed systems is multidisciplinary: there are several ways of modelling the problem's attributes, and we were required to combine several of these approaches and models, as Figure 1 shows:
Figure 1 Multi-disciplinarism in Modelling

Arguably, there are two types of modelling approach within the field of software engineering: inductive and deductive.
  • Deductive modelling covers structural, functional and collaborative designs and is commonly used in classical software engineering; typical tools are class diagrams, sequence diagrams, object diagrams, entity-relationship diagrams (ERD), data-flow diagrams (DFD), flow charts and use cases.
  • Inductive modelling comprises dynamic modelling techniques that primarily characterise the non-determinism within a system, arising mainly from emergent behaviour and interactions. Commonly used techniques are formal methods (e.g. Testable Architecture), simulations and probabilistic models of the software artefact.

Modelling Concepts and Techniques
Unlike many engineering fields, software engineering is a discipline where the work is mostly done on models and rarely on real, tangible objects (Oud02). According to Shaw (Shaw90), software engineering is not yet a true engineering discipline, but it has the potential to become one. Precisely because software engineers work mainly with models, and with a limited perception of reality, Shaw argues that success in software engineering lies in a solid interaction between science and engineering.

In 1976, Barry Boehm (Boeh76) proposed a definition of the term software engineering: the practical application of scientific knowledge in the design and construction of computer programs and of the associated documentation required to develop, operate and maintain them. This definition is consistent with traditional definitions of engineering, although Boehm noted the shortage of scientific knowledge available to apply.

On the one hand, science brings the discipline and practice of experimentation: the ability to observe a phenomenon in the real world, build a model of the phenomenon, exercise (simulate or prototype) the model, and induce facts about the phenomenon by checking whether the model behaves in a similar way to the phenomenon. In this situation, the specifications of the phenomenon might not be known upfront but induced after knowledge about the phenomenon has been gathered from the model. These specifications or requirements are known a posteriori.

On the other hand, engineering is steered towards observing a phenomenon in reality, deducing facts about it, building concrete blocks, structures (moulds) or clones based on the deduced facts, and reusing these moulds to build a system that mimics the phenomenon. In this situation, the specifications of the phenomenon are known upfront, i.e. deduced while observing the phenomenon, before any model is constructed. The process of specifying facts about the phenomenon is rarely a learning process; requirements are known a priori.

The scientific approach is based on inductive modelling, and the engineering approach on deductive modelling. In software engineering we are usually very familiar with the deductive modelling approach, exploiting modelling paradigms such as UML, ERD and DFD that are well established in the field. Inductive modelling techniques are less familiar in business-critical software engineering, although they are applied extensively in safety-critical software engineering and in academia. Typically, inductive modelling techniques are experiments carried out on prototypes, or simulations of dynamic models based on mathematical (formal methods), statistical and probabilistic models. The quality of the final product lies in the modelling power and the techniques used to express the problem. As mentioned earlier, we believe that the power of modelling lies in the blending of inductive and deductive modelling techniques.

The rationale for integrating inductive modelling techniques within the domain of our study is the presence of non-determinism, emergent behaviours and communication dynamics: those parts of the problem that cannot be known or abstracted upfront, i.e. a priori. These elements differ from those parts of the problem that can be abstracted a priori, based on experience and domain knowledge, and which are normally deduced and translated into structures or models (moulds) using deductive modelling techniques.

Inductive modelling techniques require a different way of addressing the problem's attributes. In these circumstances, we assume the requirements cannot be trusted upfront, and the objective is to validate them against predefined quality attributes. To do so, we build formal models (formal methods) that mimic the functionality of the proposed requirements and run the models dynamically to check whether they conform to the expected output and the agreed quality. These modelling tools are dynamic in nature and lend themselves readily to simulation engines and formal tests that allow system designers to run and exercise the designs, performing model validation and verification. Through several simulation runs, the models are modified, adjusted and reinforced until they match, to a certain level of confidence, the quality attributes.
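To make this validate-by-running loop concrete, here is a minimal Python sketch (our own illustration, not a TA tool; the model, the latency trace and the quality threshold are all invented for the example) of exercising a dynamic model repeatedly and measuring the level of confidence with which it meets a quality attribute:

    import random

    def run_model(seed):
        """Exercise the dynamic model once; here it simply returns a
        random trace of interaction latencies."""
        rng = random.Random(seed)
        return [rng.randint(1, 8) for _ in range(10)]

    def quality_ok(trace, max_latency=6):
        """Hypothetical quality attribute: no interaction in the trace
        may exceed max_latency."""
        return all(latency <= max_latency for latency in trace)

    runs = 100
    failures = [s for s in range(runs) if not quality_ok(run_model(s))]
    print(f"confidence: {1 - len(failures) / runs:.0%}, "
          f"failing runs: {len(failures)}")

Each failing run points at an aspect of the model (or of the requirements) to adjust before the next round of simulation.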

Testable Architecture as a Blended Modelling Approach
For many years, computer scientists have tried to unify both types of modelling technique in order to capture the several facets of distributed communication systems and to demonstrate the power of modelling in developing software artefacts of high quality.

The development of a distributed messaging system is a complex activity, with a large number of quality factors involved in defining success. Although inductive modelling is scientifically rigorous for analysing and building quality-engineered systems, it brings additional cost into the development life cycle. Hence, a development process should be able to blend inductive and deductive modelling techniques, to adjust the equilibrium between cost (time and resources) and quality. As a result, the field of software process simulation has received substantial attention over the last twenty years, with the aims of better understanding the software development process and of mitigating the problems that continue to occur in the software industry, both of which require a process modelling framework.
When it comes to modelling the interaction and communication of distributed systems, the Choreography Description Language (CDL) is one of the most efficient and robust tools. CDL forms part of Testable Architecture, hereafter TA, and is based on pi calculus (Miln99), a formal language for describing the act of communicating.
Many other formal methods exist, such as the B method, Z notation and lambda calculus, that are used to describe software requirements unambiguously. However, when it comes to describing the distributed interactions of several participants they fall short, since they were not designed to do so. Lambda calculus was designed for the parametric description of passing arguments across functions; Z notation was designed to classify and group attributes of the problem domain into logical sets; and the B method was designed to describe requirements as logically consistent machines. Pi calculus is a formal language that uses the concepts of channels and naming to describe interactions, and it fits very well in the problem domain of distributed systems.
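To make the contrast concrete, a one-shot request/reply between two participants can be written in pi calculus roughly as follows (a minimal sketch of ours, not an excerpt from CDL), where \overline{x}\langle y\rangle sends the name y on channel x, x(y) receives it, and \nu creates a fresh name:

    \begin{align*}
    \mathit{Client} &= (\nu\, r)\, \overline{req}\langle r\rangle .\, r(\mathit{reply}) .\, \mathbf{0}\\
    \mathit{Server} &= req(s) .\, \overline{s}\langle \mathit{data}\rangle .\, \mathbf{0}\\
    \mathit{System} &= (\nu\, req)\, (\mathit{Client} \mid \mathit{Server})
    \end{align*}

The client creates a fresh reply channel r and sends it, as a name, to the server over req; because channels are first-class names that can themselves be communicated, the routing of the conversation is part of the model, which is precisely what the description of distributed interactions requires.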
Unlike other modelling frameworks, TA is not limited to deductive and static modelling techniques, as it uses pi calculus based, non-deterministic models that are well known within the academic world but not yet in common use within industry. In fact, TA acts as a natural "glue" blending the various modelling approaches, providing a framework whose primary objective is to remove ad-hocness and ambiguity from the modelling process.
Using TA, the formal description of the requirements can be translated into different types of modelling tools, starting with dynamic modelling tools (inductive modelling) such as Coloured Petri Nets (CPN) and prototypes, moving on to event-based modelling tools such as state chart and sequence diagrams, and finishing with structural modelling tools (deductive modelling) such as class diagrams. Throughout the translation process, the specifications and requirements can be tested, validated and reinforced.

Case Study: Testable Architecture in a Large Communication Model of Business-Critical Systems
In the case study, we focus on the fundamental problem of underwriting within a global insurance group, which involves an Underwriting Workflow System, a Policy Manager, a Document Management System and an Integration Layer.

We demonstrate how TA is used to reinforce the power of modelling by avoiding classical modelling pitfalls, defining traceability across the lifecycle, providing a reference model through the iterations, and addressing defects at an early stage, hence increasing the maturity of the process model.

As mentioned earlier, the design approach employs both deductive and inductive modelling techniques, and TA employs a formal method, pi calculus, that provides the ability to test a given architecture: an unambiguous formal description of a set of components and their ordered interactions, coupled with constraints on their implementation and behaviour. Such a description can be reasoned over to ensure consistency and correctness against the requirements.

The Communication Architecture
The architecture provides communication management and the enablement of external systems deployed over an ESB layer, conforming to the principles and discipline of SOA. The architecture diagram, Figure 2, outlines the communication between an Underwriting Workflow System and a Policy Manager (PM). The communication is handled by the integration layer, which uses BizTalk as its technology, and the Underwriting Workflow System is implemented using Pega PRPC.

The primary use of TA in the given problem domain is to achieve a model of communication that can evolve, allowing BizTalk to move from being purely an EAI tool to having the capability of an ESB, wherein heterogeneous types of communication become possible. Such conversations will be with Document Management Systems, a Claims Repository Service, external Rating Services and others. In our problem domain, BizTalk maps the messages of Pega PRPC, hereafter Pega, to the legacy Policy Manager. This is carried out by transforming the data structure of the Pega messages into the data structure native to the Policy Manager. There are three generic types of communication that describe the conversation between Pega and BizTalk.


Figure 2 Communication Model

The communication model illustrates three communication types, 1) notification, 2) error and 3) data, expressed as CS_Not, CS_Err and CS_Dat respectively, which are channelled from Pega to BizTalk. BizTalk accesses the data mapping schema and transforms the incoming schema into the response schema agreed with the Policy Manager. The Data Mapper is logically represented by the ERD.
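The transformation performed by the Data Mapper can be pictured with a minimal Python sketch (the field names and the mapping table are hypothetical; the real mapping lives in the BizTalk schemas):

    # Hypothetical mapping schema from Pega fields to Policy Manager fields.
    PEGA_TO_PM = {
        "RiskId": "risk_id",
        "InsuredName": "insured_name",
        "SumInsured": "sum_insured",
    }

    def map_message(pega_msg: dict) -> dict:
        """Transform an incoming Pega message into the response
        structure agreed with the Policy Manager."""
        return {pm_field: pega_msg[pega_field]
                for pega_field, pm_field in PEGA_TO_PM.items()
                if pega_field in pega_msg}

    print(map_message({"RiskId": "R-42", "InsuredName": "ACME Ltd"}))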

From BizTalk to the Policy Manager there are two types of communication: 1) notification, CS_Not, and 2) data, CS_Dat. The communication model follows an asynchronous mode, handled by the Request/Reply map repository. The latter holds the state that assigns the corresponding response from the Policy Manager to a request from Pega. A polling mechanism notifies Pega that a response has been received for a corresponding request.
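The behaviour of the Request/Reply map repository can be sketched as follows (a minimal illustration; the submit/complete/poll API is ours, not BizTalk's):

    import uuid

    class RequestReplyMap:
        """Holds the state that pairs a Policy Manager response with
        the originating Pega request."""
        def __init__(self):
            self._pending = {}                 # request_id -> response or None

        def submit(self, request: dict) -> str:
            """Register an outgoing Pega request and return its id."""
            request_id = str(uuid.uuid4())
            self._pending[request_id] = None
            return request_id

        def complete(self, request_id: str, response: dict) -> None:
            """Record the Policy Manager response when it arrives."""
            self._pending[request_id] = response

        def poll(self, request_id: str):
            """Polled on behalf of Pega; None until a response arrives."""
            return self._pending.get(request_id)

    repo = RequestReplyMap()
    rid = repo.submit({"type": "CS_Dat", "risk": "R-42"})
    assert repo.poll(rid) is None              # request still outstanding
    repo.complete(rid, {"type": "CS_Dat", "status": "ok"})
    print(repo.poll(rid))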
There are three return communication types from the Policy Manager to BizTalk: CS_Not, CS_Err and CS_Dat. The latter holds the data required by Pega to update any underwriting transaction. As we modelled the communication using TA, we observed that the existing legacy Policy Manager interface does not differentiate between success and failure responses; there is no separation of identity between error and success, which complicates the design of the integration layer. This design flaw was identified while validating and type-checking the communication model with TA, and it led to a mistake-proofing mechanism within BizTalk to manage errors and trace them back to the presentation layer, i.e. the General Underwriting System. BizTalk also has to transform the Policy Manager schema into a structure agreeable to Pega. The communication medium employed across Pega, BizTalk and the Policy Manager is SOAP.
The process starts at the requirements gathering phase, where TA is used to identify the core participants in the communication, which in our context are the Pega component, the BizTalk component and the Policy Manager (PM), as shown in Figure 3.


Figure 3 Requirement communication model

At a very early stage of design, while validating the communication with TA through formal checking, we observed that the BizTalk component includes two primary modules that must be modelled separately: the Mapper component and the Mediator component. This is a typical problem of separation of concerns. The separation showed that the mediator service is solely concerned with the orchestration of the communication model, whereas the mapper service is concerned with the data modelling, which ought to be abstracted into the Canonical Data Model of an ESB. Using classical modelling techniques, i.e. purely static designs such as sequence diagrams, this dichotomy would have been missed at the requirements stage and only found late in design or coding. It is also possible that the separation would have been missed completely, adding overheads and rework to preserve the extensibility of the architecture.
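The separation can also be expressed directly in code; the following Python sketch (with interfaces invented for illustration) keeps the orchestration concern in the Mediator and the schema concern behind the Mapper interface:

    from typing import Protocol

    class Mapper(Protocol):
        """Data transformation concern only."""
        def transform(self, message: dict) -> dict: ...

    class Mediator:
        """Orchestration concern only; holds no schema knowledge."""
        def __init__(self, mapper: Mapper, policy_manager):
            self._mapper = mapper
            self._pm = policy_manager

        def handle(self, pega_message: dict) -> dict:
            pm_message = self._mapper.transform(pega_message)  # delegate mapping
            return self._pm.send(pm_message)                   # orchestrate the call

A new canonical data model then touches only implementations of Mapper, leaving the orchestration untouched.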


Figure 4 Conversation Model

Whilst requirements are gathered, a model of the conversation within the problem emerges, as shown in Figure 4. This is a static diagram that simply lays out the roles, the swim lanes (see Figure 3) and who can talk to whom. It enables us to manage the conversation in the system, and also to extend the model with new components and test whether the communication model still holds when new participants are added.
The next step is to bind the model in Figure 3 to a choreography, which enables us to type-check the model against the requirements in order to validate it and remove ambiguity from the requirements for the communication model. The choreography is shown in Figure 5.
Figure 5 Architecting the Design

The binding process involves referencing the model in the requirements and binding the interactions. It also has the effect of filling in some of the missing information on identity and business transactions.

With a bound model, the choreography in Figure 5 can be exercised in order to prove the model against the architectural parameters, as shown in Figure 6. The model shows the participants: Pega converses with the BizTalk mediator, then with the mapper (for data transformation), before the message is finally passed to the Policy Manager participant.

Figure 6 Proving the Communication Model

During the test of the architecture, the proof goes green (see Figure 6) if the configuration and parameters, or more precisely the types of the interactions, are correct; should it go red, the proof reveals that the model deviates from the requirements, highlighting the defects.
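The idea behind the green/red proof can be caricatured in a few lines of Python (the channel names are reused from the model; everything else is invented): every interaction must carry a payload whose type matches the declared type of its channel.

    # Declared channel types, as bound in the choreography.
    CHANNEL_TYPES = {"CS_Not": "Notification", "CS_Err": "Error", "CS_Dat": "RiskData"}

    # Interactions observed when the model is exercised.
    interactions = [
        ("Pega", "BizTalk", "CS_Dat", "RiskData"),
        ("BizTalk", "PolicyManager", "CS_Not", "Notification"),
    ]

    for src, dst, channel, payload in interactions:
        assert CHANNEL_TYPES[channel] == payload, f"type error on {src} -> {dst}"
    print("proof is green: all interaction types are correct")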

Thus, for each interaction we can see clearly what the identity is, what we call the type for that identity (the token or tokens), and the XPath expressions which, when executed over the example messages (in our case the Pega risk XML and the Policy Manager Process UW XML), return the appropriate values.
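For instance (a minimal sketch; the element names and the path are invented, standing in for the real Pega risk XML), extracting an identity token with an XPath-style expression looks like this:

    import xml.etree.ElementTree as ET

    RISK_XML = """
    <Risk>
      <Header><RiskId>R-42</RiskId></Header>
      <Insured>ACME Ltd</Insured>
    </Risk>
    """

    root = ET.fromstring(RISK_XML)
    # ElementTree supports a limited XPath subset; ".//RiskId" locates the token.
    print(root.findtext(".//RiskId"))          # -> R-42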

Blended Modelling Approach

Once the proof of the model has been demonstrated, we can trust that the models conform to the predefined requirements, and that many of the ambiguities in the requirements have been detected and consequently resolved in the requirements and design phases of the Software Development Life Cycle (SDLC). Then, exploiting its model generation capabilities, TA provides us with a rich and proven set of artefacts, such as UML designs and state chart diagrams of the model. Figure 7 shows the state charts generated from the proven dynamic models. This is, in essence, the translation of the inductive models (the CDL model) into the more common deductive models (UML and BPMN). The course of the SDLC then resumes along the normal route of classical software engineering processes.

Figure 7 Generated UML Artefacts: State Chart of the Underwriting System

The generated models, along with auto-generated documentation, are compiled into design directives and coding principles that can be handed over to the software designers and developers. The communication to these parties is founded on formal and mathematical checks, which makes the design and development of the system far less error prone.

Conclusion
In employing TA, we were able to identify business and core services easily and to test them against the requirements, namely the mediator business service and the mapper core service. We worked very closely with key decision makers to ensure a full understanding of, and to gain agreement on, the requirements through the inductive modelling of requirements and the collaboration model embodied in TA. This allowed rapid turn-around with the business analysts and reduced the overall design time.

Secondly, we were able to detect errors prior to coding, both as conflicting requirements (reported back and then remediated with the stakeholders) and as technical design errors, the latter exemplified by the legacy Policy Manager's error handling problem. Through TA we were also able to simplify the design by segmenting it, ensuring that it truly represented the requirements.

Finally, TA enabled the generation of implementation artefacts, such as UML designs and state charts, that were guaranteed to meet the requirements and were an order of magnitude more precise, which reduced the communication needed to ensure a high-quality delivery. This is, in essence, the capability of TA to blend inductive with deductive modelling techniques.

References


(Chai71) Chaitin G J, "Computational Complexity and Gödel's Incompleteness Theorem", ACM SIGACT News, No. 9, pp. 11-12, April 1971


(Jenn01) Jennings N R, “An Agent-based approach for building complex software systems”, Communications of the ACM, Vol 44, No. 4, April 2001


(Sim96) Simon H A, “The Sciences of the Artificial”, MIT Press, 1996


(Oud02) Oudrhiri R, "Une approche de l'évolution des systèmes – application aux systèmes d'information", Vuibert, 2002


(Shaw90) Shaw M, "Prospects for an Engineering Discipline of Software", IEEE Software, November 1990


(Boeh76) Boehm B W, "Software Engineering", IEEE Transactions on Computers, pp. 1226-1241, December 1976


(Miln99) Milner R, "Communicating and Mobile Systems: the Pi-Calculus", Cambridge University Press, 1999







