
February 9, 2000 Volume 3 - Number 3
Building the Unbreakable Chain
“Simulate before you build, test before you deploy” should be your guiding principle for supply-chain and B2B e-commerce system development
By Ram Reddy
Skyscrapers aren’t built from scratch. First, architects draft a blueprint and verify the physical characteristics of the proposed structure and the materials involved. Furthermore, building codes require that the building withstand what nature may subject it to: earthquakes, tornadoes, and other natural disasters.
As organizations move from pure “brick and mortar” business models to virtual ones, they must apply the same architectural rigor to business-to-business (B2B) e-commerce and supply-chain integration applications, because the consequences of application failure can ripple across multiple organizations. Here success implies that the system not only function as predicted, but more important, fail gracefully without disrupting other participants upstream and downstream. Indeed, in the digital economy, failed B2B and supply-chain systems have the same attention-grabbing potential as a building collapse or train wreck.
Legendary stories of Web-development shops writing and deploying applications in “Internet time,” working around the clock fueled by caffeine, make entertaining reading. However, many of these same applications are put into production without adequate load and stress testing — and it shows. eBay Inc.’s much-publicized failures in 1999 suggest the importance of load- and stress-testing all application components to their breaking points. Similarly, the U.S. Federal Aviation Administration’s decision to scrap its new air-traffic control system just prior to deployment indicates a case in which proper modeling might have preempted the development of a system that would fail to meet expected performance metrics. As both these high-profile examples attest, we must subject all complex, mission-critical systems to analytically rigorous modeling and testing if we are to improve the chances of successful deployment.
A primary focus of B2B and supply-chain systems is to reduce cycle time and increase the velocity of transactions: typically, cycle times shrink from days to minutes. Thus, system response time measured in seconds is not as important as reliability. It is therefore imperative that B2B and supply-chain systems be engineered for robustness and predictable performance through pre-deployment simulation and testing.
Significance of the B2B Relationship
One popular adage about business-to-consumer e-commerce Web sites goes like this: “If your system is down, that’s bad, but if your system is slow, that’s even worse.” A consumer is just a click away from your competitor when your system slows down. However, the reverse of this statement applies when B2B and supply-chain systems are involved: When the supply-chain system slows down, that’s bad, but when it goes down, that’s a catastrophe.
The reason these consequences are catastrophic is that the entire supply chain can come to a grinding halt, and transaction rollback and recovery may be nontrivial. Compound these effects with the economic loss of idle production facilities while the system is under repair, and the costs mount quickly.
Some organizations may try to console themselves with the thought that, unlike in the business-to-consumer area, significant infrastructure and business process costs will usually prevent their e-commerce business partners from going elsewhere. Even if that’s the case, short-term financial damage and long-term damage to business relationships will be severe.
Software Engineering Practices, or Lack Thereof
Increasingly, the IT trade media are raising red flags about the state of IT infrastructure for B2B e-commerce and supply-chain systems. Indeed, following quality principles in developing IT systems is not common practice in most corporate IT shops. For example, Meta Group Inc.’s “IT Performance Trends ’99” finds that software engineering practices are in decline, and interest in the Software Engineering Institute’s Capability Maturity Model (CMM) is on the downswing. (CMM defines a path for a software organization to transition from ad hoc, immature process orientation to a mature, disciplined one. When followed, CMM practices improve the ability of software development teams to meet goals for cost, schedule, functionality, and product quality.) Compound this mindset with the pressure to “get the product out in Internet time” and the glorification of “spoils to the first mover,” and the foundation for creating B2B systems that can spread havoc across multiple organizations is laid.
This lack of a systematic approach to software engineering is especially troublesome in light of the increasing complexity of application development environments over the past 10 years. Enterprises have made a transition from monolithic mainframe applications, to client/server, and then to n-tier applications. The emergence of B2B e-commerce and supply-chain systems has added further complexity by distributing these n-tier applications across multiple enterprises. Firms now have the choice to distribute various components of the application logic, business logic, and databases across organizations with heterogeneous systems. In addition, the recent trend in mergers and acquisitions among Fortune 500 firms has contributed to internal system-integration challenges. In such an extensive boundary-spanning environment containing distributed and often heterogeneous systems, proper architecture and software engineering are more crucial than ever for successful development and deployment.
However, the ease with which simple Web sites can be built has contributed to the perception among CEOs and senior executives that building complex, Web-based transaction systems is an easy task. (“My eight-year-old grandson has his own homepage with all these neat features. Getting this Web-based B2B system online should be relatively easy — right?”) This attitude fuels frequent changes in user specifications during the development process. Most IT staffers have a difficult time explaining to business users that transaction systems are complex and require careful analysis of user requirements. Given the complex nature of transaction systems that span multiple organizations, it becomes necessary to have a “design freeze” after the analysis and design phase. A design freeze is difficult to obtain for business-to-consumer (B2C) systems, where in many instances the nature of the business ordains constant change: customers and competition dictate system functionality and features in the B2C area.
For B2B and supply-chain systems, however, you are reengineering long-standing, fairly static processes to increase velocity and transaction flow across participating organizations. Changing the design after analysis requires reassessing the impact on upstream and downstream processes, so enforcing a design freeze is crucial once the development effort is underway. Consequently, you must educate business users about the complexity of B2B and supply-chain solutions and involve them in the development process. This effort will help decrease scope creep and avoid frequent specification changes during the build.
Simulate Before You Build
Given the complex nature of application development and deployment in such an environment, it is imperative that we simulate our application and deployment design without actually building and distributing various components. Current practice is to garner performance metrics after development and deployment in the field. Thus, performance or reliability bottlenecks are addressed in a fire-fighting mode. Unfortunately, using this approach, the fires are never completely extinguished, and over time other components will break down — resulting in the need for more fire fighting.
In contrast, you should evaluate design and deployment alternatives interactively when designing B2B and supply-chain systems. The resulting simulation model may highlight performance bottlenecks that can significantly alter your application design and deployment options. Modeling becomes increasingly important in loosely coupled systems because the exogenous variables — network bandwidth, firewall throughput, and the performance of numerous specialized servers (application, database, and messaging servers) — are considerably more complex than those in monolithic systems.
It is important to recognize that meaningful simulation and timely development are not mutually exclusive. Major pitfalls to avoid in building simulation models include “analysis paralysis” and the assumption that baseline metrics have to be 100-percent accurate. Rather, model building and simulation is an iterative process that should become an organic part of software development, deployment, and post-deployment maintenance.
Here’s how it works. As a first step, you should develop a rough model of the B2B or supply-chain system in question. The model will probably comprise existing mainframe applications on mature execution platforms, combinations of LANs and WANs, application and database servers, and some custom code for intra-enterprise application integration functions.
Baseline performance metrics for existing systems should be easy to collect; if you cannot obtain them in a timely fashion, educated guesses from systems engineers will suffice for the initial model. For customized modules yet to be built, or new packaged applications to be integrated, you can use baseline data from existing modules or installations with similar characteristics. These approximations are replaced with real data as it becomes available during development, and overly optimistic performance estimates will be exposed during load testing. Organizations commonly abandon simulation models of proposed systems because the models take too long to build and grow overly complex; don’t fall into that trap.
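As a sketch of what such a first-pass model might look like, the snippet below treats each tier of a hypothetical B2B pipeline as a single-server stage with an estimated per-transaction service time. The stage names and numbers are illustrative guesses of exactly the kind described above, not measurements:

```python
# Rough first-pass capacity model for a B2B transaction pipeline.
# All service times are illustrative "educated guesses" (seconds per
# transaction), to be replaced with measured baselines as they arrive.
STAGES = {
    "firewall":     0.002,
    "web_server":   0.010,
    "app_server":   0.040,
    "database":     0.025,
    "mainframe_gw": 0.060,  # legacy integration module, pure estimate
}

def utilization(arrival_rate_tps):
    """Per-stage utilization: rho = arrival rate x service time."""
    return {name: arrival_rate_tps * t for name, t in STAGES.items()}

def bottleneck(arrival_rate_tps):
    """Return the stage that saturates first and the throughput
    ceiling (transactions/sec) it imposes on the whole chain."""
    rho = utilization(arrival_rate_tps)
    worst = max(rho, key=rho.get)
    return worst, 1.0 / STAGES[worst]

if __name__ == "__main__":
    stage, ceiling = bottleneck(arrival_rate_tps=12.0)
    print(f"bottleneck: {stage}, ceiling ~{ceiling:.1f} tps")
```

Even a crude model like this one, run at a projected peak arrival rate, identifies which stage saturates first — exactly the kind of finding that should feed back into the design before development money is spent.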
Develop this model in collaboration with all firms participating in the proposed system. Specifically, ensure that users, business process owners, and IT system managers participate. It will be difficult to manage such a diverse group, but the effort will pay off with a realistic model. With this approach, business processes or system “land mines” tend to get exposed sooner rather than later, enabling you to address potential showstoppers before making significant investments in development.
FIGURE 1 Application design, development, and deployment.

The initial simulations will highlight potential bottlenecks that can significantly affect your system design. (See 1st Phase in Figure 1.) The payoff here is that you can make design changes to meet your metrics before major investments in development have been made. These initial simulations should also detect whether the proposed system will meet performance expectations using the hardware and software sanctioned in the project budget. Because IT projects typically run over budget and miss performance expectations, the first simulation should also yield rough estimates of additional funding needs. More important, involving business users in the simulation runs helps them understand the complexity of the proposed system and forces them to “take ownership” of system requirements. In my experience, non-IT team members are amazed during simulation runs by the cascading effects of system changes on downstream and upstream processes. Armed with this knowledge, they become very effective in fielding change requests from other members of the organization. For this reason, non-IT staff should manage design change requests.
The true benefits from the modeling, however, will appear during development, deployment, and maintenance. During this second modeling phase, you should update the initial model with real performance data as and when individual modules are built or are integrated. (See 2nd Phase in Figure 1.) Based on the feedback from the live systems, you can then depict some of the components of the model in more detail or abstract them to a higher level. Drawing an analogy from the “data flow diagram” terminology, it may suffice to depict some components at level zero. However, for some hardware and software components that may cause bottlenecks, drilling down to level five or six may be necessary. You should then simulate the refined model in order to get new performance metrics, which you will in turn validate against expected metrics. You can then revise design or deployment options if indicated by the model.
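A second-phase refinement step can be as simple as overlaying measured baselines on the initial guesses and flagging stages whose measured times diverge sharply from the estimates. The sketch below assumes hypothetical stage names and a 25-percent review threshold:

```python
# Second-phase refinement: overlay measured service times (seconds per
# transaction) on the initial estimates as modules come online. Stage
# names and figures are illustrative.
estimates = {"app_server": 0.040, "database": 0.025, "mainframe_gw": 0.060}
measured  = {"app_server": 0.055, "database": 0.022}  # from live modules

def refine(estimates, measured):
    """Return the updated model plus a list of stages whose measured
    time deviates more than 25 percent from the original guess --
    candidates for revisiting design or deployment assumptions."""
    model = dict(estimates)
    flagged = []
    for stage, t in measured.items():
        if stage in model and abs(t - model[stage]) / model[stage] > 0.25:
            flagged.append(stage)
        model[stage] = t
    return model, flagged
```

Each flagged stage is a prompt to drill into more detail in the model; stages that track their estimates closely can stay at a coarser level of abstraction.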
Because no model is perfect, you should verify its predictions carefully before making design or deployment changes. During the second modeling phase, errors more often result from a lack of information or an incorrect technical or business assumption than from true hardware or software failures. Fix any errors in the assumptions used to build the model (there will be many) without assigning blame, but do inform the source of each incorrect assumption so that future estimates can be adjusted. Make sure the person who provided the erroneous information remains comfortable participating in future model-building efforts without slipping into “analysis paralysis.”
FIGURE 2 Post-deployment enhancements and changes.

This iterative process of building and testing individual modules and refining the simulation model should continue across the lifetime of the system. During the third modeling phase (see Figure 2), the major benefit of a simulation model is that it typically reduces long-term maintenance costs. According to the Meta Group report cited earlier, more of a corporation’s IT budget is allocated to maintenance than to new application development. If you cannot reasonably predict the effects of a change to a complex B2B or supply-chain production system before deploying it, constant fire fighting will drive maintenance costs even higher.
Wherever feasible, I recommend that you use automated simulation modeling tools, which could include SES Inc.’s SES Workbench (for system-architecture graphical modeling), Binary Evolution Inc.’s VeloMeter (for Web-server load simulation), or Segue Software’s SilkRealizer (for e-business process modeling). Such tools will significantly reduce the time and overhead associated with building, modifying, and maintaining the models over time. Be sure to assess their suitability individually, however, because capabilities vary widely.
Test Before You Deploy
In the rush to get product out the door, and under corporate pressure to be first to market, IT staffers frequently skip rigorous testing of their new e-commerce or supply-chain systems, or pay it only lip service. As I explained earlier, failure to perform stress and load testing on monolithic applications is not typically disastrous from a customer-relations and credibility standpoint: such failures are usually confined to systems within the organization, and the customer or supplier is shielded from the consequences. In B2B e-commerce systems, however, the impact of failure is felt immediately, both upstream and downstream.
Even a good simulation model cannot predict all failure points, because performance degradation in loosely coupled systems does not always occur gradually; a component can hit its limit abruptly. The impact of SSL connections on application servers, for example, has been well documented. It is therefore critical to stress and load test B2B e-commerce applications to projected maximum loads before deployment, when unexpected performance anomalies are easier to address.
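A dedicated tool is the right way to do this at scale, but the shape of a load test is easy to sketch. The harness below drives a stand-in transaction handler (`handle_order`, a placeholder for the real system entry point) with concurrent workers and reports latency statistics; the request count, worker count, and service time are illustrative and should be set from projected peak loads:

```python
# Minimal load-test harness sketch. `handle_order` is a stand-in for
# the real system entry point; `requests` and `workers` should be set
# from projected peak loads, not the small illustrative values here.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_order(order_id):
    """Placeholder transaction: simulated 1 ms of service time."""
    time.sleep(0.001)
    return order_id

def load_test(requests=200, workers=20):
    latencies = []  # list.append is thread-safe in CPython
    def one(i):
        start = time.perf_counter()
        handle_order(i)
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(one, range(requests)))
    return {"count": len(latencies),
            "p50": statistics.median(latencies),
            "max": max(latencies)}
```

Ramping `workers` and `requests` upward until latency degrades reveals the breaking point — the figure you want to know before deployment, not after.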
Testing various components after they are built or integrated will help further refine your simulation model. The predictions generated by your model should get more accurate as you progress through the development phase. If developers use simulation and testing conscientiously, there should not be any major surprises about system performance after deployment. Any surprises that do occur will likely derive from the fact that some part of the infrastructure or system was not properly reflected in the model. However, in some situations, it may be cost prohibitive to make the testing environment match the deployment environment. In such a case, you should do a cost-benefit analysis of a post-deployment fix vs. recreating a sample of the deployment environment to determine the right course of action.
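That cost-benefit analysis can start as a back-of-envelope expected-cost comparison. All the figures below — failure probabilities, failure cost, and the cost of the sample environment — are hypothetical placeholders to be replaced with your own estimates:

```python
# Back-of-envelope comparison: recreating a sample deployment
# environment for testing vs. accepting the risk of a post-deployment
# fix. All figures are hypothetical placeholders.
def expected_cost(p_failure, cost_of_failure, upfront_cost=0.0):
    """Expected total cost given a failure probability and its cost."""
    return upfront_cost + p_failure * cost_of_failure

# With a sample environment, failures are far likelier to be caught
# before deployment; without one, the failure risk stays high.
with_env    = expected_cost(p_failure=0.05, cost_of_failure=500_000,
                            upfront_cost=80_000)
without_env = expected_cost(p_failure=0.40, cost_of_failure=500_000)
# with_env ~ 105,000 vs. without_env ~ 200,000: under these
# assumptions, the sample environment pays for itself.
```

The point is not the precise numbers but the discipline: making both costs explicit turns the decision from a gut call into an argument you can defend to the other firms in the chain.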
Many automated testing tool sets are available — including Mercury Interactive Inc.’s LoadRunner and Segue Software’s SilkPerformer — yet most organizations continue to use manual testing methods. To make this rigorous approach to systems design, development, and deployment work without significant overhead, however, automated tools are a necessity. Many tools are available for specific needs, with some tool vendors even incorporating model building, simulation, and testing — all symbiotic functions — into a single, integrated tool set.
Improved Reliability
Two major benefits emerge from this rigorous approach to designing and building B2B e-commerce and supply-chain systems. First, you will more reliably and effectively design, build, deploy, and maintain systems spanning multiple organizations, with increased velocity of transactions across the supply chain. Given the costs of fixing system errors across multiple organizations, this benefit should far outweigh the costs of adopting the rigorous approach.
Second, such systems offer improved reliability and significantly decreased long-term maintenance costs. Because the now-mature model makes system behavior predictable, enhancements need not drive up maintenance costs, which in turn reduces the long-term cost of ownership.
In general, most IT shops treat software development as an art rather than a science. As we move increasingly to interenterprise systems and virtual organizations, this lack of rigor in design and development will draw attention to IT software engineering practices, or the lack thereof. B2B or supply-chain system failures that affect multiple organizations will get immediate attention from top management, and in some cases, even from the media. Therefore, you’d better build your B2B and supply-chain systems on solid foundations.
Ram Reddy is the managing member of Tactica Consulting Group (http://www.tacticagroup.com/), a Midwest-area technology and business strategy consulting firm.
Copyright © 2000 CMP Media Inc. ALL RIGHTS RESERVED
No Reproduction without permission