Microsoft TechNet

Chapter 9 - Accessing Legacy Applications and Data

On This Page
Identifying Strategies
Integrating IIS and Legacy Applications
Gaining Access to Legacy File Data
Replicating Legacy Databases
Migrating Transaction Processes
Resources

This chapter describes how you can use Microsoft development tools and production software packages to make legacy applications and data available to Web applications based on Internet Information Server 4.0 (IIS).

Identifying Strategies

To employ Web technology to best advantage, an enterprise must make its business applications and data easily accessible—over the Internet or a company intranet—to its employees, key business partners, and the public. This goal is often difficult to achieve because mission-critical data is stored in host-based file systems and relational databases on IBM mainframes or AS/400 computers (by some estimates, as much as 80 percent of it at many large corporations and government agencies).

Delivering large amounts of legacy data to a wide audience has always been problematic because:

Hardware and system software are expensive. 

Standards are proprietary and not widely supported outside the legacy environment. 

Development costs are high. 

This chapter outlines four strategies for accessing applications and data in MVS (mainframe) and OS/400 (AS/400 minicomputer) systems running within SNA (Systems Network Architecture) environments. You can:

Integrate host applications running in legacy environments with IIS by connecting host transaction processors to Windows NT Server 4.0 by using Microsoft SNA Server 4.0 and Microsoft COM Transaction Integrator for CICS and IMS (COM TI). COM TI is included on the IIS Resource Kit CD. 

Use Microsoft SNA Server 4.0 and Microsoft OLE DB Provider for AS/400 and VSAM (Data Provider) to access legacy files at the record level and send the data to the Windows NT environment. 

Use Microsoft Host Data Replicator (HDR) to acquire host database structures and replicate them for Microsoft SQL Server and IIS 4.0. 

Move the automated processes from the restrictive and expensive legacy environment to the open, more cost-effective Windows NT environment with IIS 4.0 and Microsoft Transaction Server (MTS). 

Connecting to SNA

Each of the legacy access strategies discussed in this chapter requires connections to IBM host computers through SNA. To understand how each strategy is implemented, you need a basic understanding of how the SNA environment is constructed, how to connect to SNA resources, and how to exploit these resources.

The SNA Environment

SNA is IBM's architecture for designing and implementing computer networks. To communicate user data over SNA, a session must be established between two Logical Units (LUs), one on the host system and the other on the client system. Because LU 6.2 is a peer-to-peer protocol, either the mainframe host or the client can initiate a session. By using this protocol, computers running Windows NT Server 4.0 can participate in the SNA environment and gain access to legacy host environments including transaction processing (TP) monitors, VSAM and AS/400 files (both flat and unstructured), and database data structures, such as DB/2 data tables.

 

Figure 9.1 The SNA environment 

Connecting with MS SNA Server

You can use SNA Server 4.0 to connect to the SNA environment from Windows NT 4.0 and IIS. SNA Server translates Windows NT Server 4.0 communications to LU 6.2.

 

Figure 9.2 SNA Server Connects Windows NT to SNA 

Developing and Deploying under Windows NT and IIS

With SNA Server LU 6.2 capabilities, you can develop and deploy applications that access the legacy environment from the Windows NT side of the connection.

Software tools used to gain access to SNA applications and data reside on Windows NT platforms and take advantage of the unified administrative tools and lower-cost resources in the Windows NT environment. 

Application development and modification is accomplished in the Windows NT environment as well. This means that you can avoid the high overhead associated with development and modification of legacy host-side resources. 

Integrating IIS and Legacy Applications

For years, IBM has encouraged its customers to code their business logic into programs that are separate from their terminal access logic. Many Information Services (IS) organizations have responded by coding their business rules into TP programs that execute under CICS or IMS. Gaining access to these programs on the host side from the Windows NT environment can open up the business rules for an entire application, such as inventory control or budgeting, creating new opportunities for distributed applications.

Using tools and techniques to access business logic offers significant advantages over methods such as "screen-scraping" data from terminal emulation programs because:

All the data and processes that the business logic allows are accessible, rather than the limited data and processes accessible to individual terminal access program logic. 

No terminal emulator is required on the Windows NT Server platform, because the processing involves no terminal access software. 

The integration of legacy processes with IIS-based processes is easier to accomplish and less costly to develop. 

The COM Transaction Integrator

The Microsoft COM Transaction Integrator for CICS and IMS (COM TI) is a technology for integrating legacy TPs running on mainframes with Web application and transaction processes running in the Windows NT environment.

COM TI reduces the effort required to develop applications integrating COBOL (COmmon Business Oriented Language) programs running on mainframes with Automation clients running Windows NT Server, Windows NT Workstation, Windows 95, or any other computer that supports Automation. Specifically:

COM TI can automatically create a recordset of the data returned from a mainframe TP program. The recordset data, formatted in a tabular array, can then be accessed by ASP. 

COM TI coordinates legacy TPs on the mainframe with transaction processes managed by Microsoft Transaction Server (MTS), thus extending the MTS transaction environment to include transactions managed by CICS or IMS on an IBM mainframe computer. 

COM TI development tools map COBOL data declarations to Automation data types. 

 

Figure 9.3 COM TI is a proxy for the mainframe. 

Functional Overview of the COM Transaction Integrator

The following list summarizes how COM TI gains access to CICS applications and integrates data returned from CICS TPs with Internet Information Server through ActiveX Data Objects (ADO) and MTS.

Gain access to CICS TPs: COM TI directly supports any TP that executes under CICS or IMS. Because COM TI can access CICS programs, developers can issue application calls to the legacy environment by using CICS to gain access to any program under its control, including programs that work with DB2 databases, VSAM files, and IMS databases. 

Redirect method calls: COM TI is a generic proxy for the mainframe. It intercepts method calls from the client application and redirects those calls to TPs running on the mainframe. For example, when an Internet browser sends data that ASP interprets as requiring COM TI, IIS forwards the data to COM TI. 

Reformat method calls: When COM TI intercepts the method call, it converts and formats the method's parameters from Automation data types into IBM System 390 mainframe data types. 

Handle return values: COM TI handles the return of all output parameters and values from the mainframe, converting and reformatting them for IIS as needed. 

COM TI runs under Windows NT, not on the SNA host. Its processing takes place on a computer running Windows NT Server, and does not require any new executable code to be installed on the mainframe, or on the desktop computer that is running the Internet browser. COM TI communicates through SNA Server and uses standard communication protocols (for example, LU 6.2 provided by Microsoft SNA Server version 4.0) to communicate between the computer running Windows NT Server and the mainframe TP.
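As a sketch of this flow, the following ASP fragment (VBScript) shows how a page might invoke a method on a COM TI component registered in MTS. The ProgID Bank.Account, the method GetBalance, and the form field name are illustrative assumptions, not part of COM TI itself.

```
<%
' Hypothetical COM TI component registered in MTS; the ProgID,
' method name, and parameter are illustrative assumptions.
Dim objAccount, curBalance
Set objAccount = Server.CreateObject("Bank.Account")

' The method call is intercepted by the COM TI run-time, which
' converts the Automation parameters to mainframe data types,
' invokes the CICS TP through SNA Server, and converts the
' output values back for IIS.
curBalance = objAccount.GetBalance(Request.Form("AccountNumber"))

Response.Write "Current balance: " & curBalance
Set objAccount = Nothing
%>
```

To the page author, the component behaves like any other Automation object; all mainframe communication is handled by the COM TI run-time and SNA Server.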

COM TI Development Scenarios

The following two scenarios illustrate how the COM TI environment can be used to develop applications that integrate TPs with ASP.

Scenario One: Integrating Legacy TP Data Using COM Transaction Integrator

This scenario illustrates how to connect a Windows NT–based Web site to an existing COBOL TP. Suppose you want to dynamically add content from a legacy database running under CICS on an IBM mainframe computer to a Web application running under IIS. You can begin by using ASP to interpret user requests and format the data returned by the mainframe application. Next, you can use COM TI to develop a component that will process the method calls from the IIS environment and the mainframe environment.

This scenario involves six main steps:

Step 1 (setup time): Configuring COM TI. 

Step 2 (design time): Defining required methods and parameters. 

Step 3 (design time): Writing the application. 

Step 4 (design time): Testing the application. 

Step 5 (deployment): Deploying the application components. 

Step 6 (post-deployment): Maintaining the application. 

Step 1: Configuring COM TI 

To develop a COM TI component, you must meet the following system requirements:

Microsoft Windows NT Server 4.0 or Windows NT Workstation 4.0 updated with Service Pack 3. 

Microsoft IIS 4.0 with Microsoft Transaction Server 2.0. 

Microsoft Windows NT Client for SNA Server 4.0.

Microsoft Data Access Components 1.5.

Additionally, the following COM TI components must be installed:

The administration component, which collects information about the user's SNA environment. 

The run-time component, which intercepts the method calls to the mainframe and uses the COM TI–created component library to perform the actual conversion and formatting of the method parameters. In addition, the run-time component interacts with SNA Server and builds LU 6.2 packets, which are sent to the mainframe. 

Development tool components, featuring the component builder, a GUI used to create component libraries from mainframe COBOL programs. 

The component builder is installed as an add-in to Microsoft Visual Basic 5.0, and does not need to be installed on the same system as the other components. Developers who are not using Visual Basic 5.0 can use the component builder as a stand-alone tool.

Step 2: Defining Required Methods and Parameters 

To accomplish this step:

Acquire the COBOL source code from the mainframe using a file transfer mechanism, such as the FTP/AFTP gateway that is delivered with SNA Server. 

Use the COBOL Import Wizard to:

1. Select the COBOL source code. 

2. Specify the methods and mainframe TP names. 

3. Select input, output, and return value parameters. 

When necessary, change the mappings between the COBOL and Automation data types. 

Use the component builder to make a COM TI component library (.tlb), a standard library that can be used by client software and MTS. 

If you have changed the data type mapping in the COBOL code, there are two more actions required in this step:

Use the component builder to generate new COBOL declarations. 

Update the mainframe program with the new COBOL data declarations. This is the only instance requiring modifications to the mainframe environment. 

Step 3: Writing the Application 

To accomplish this step:

Write the client in a language that supports referencing of Automation objects, such as Microsoft Visual Basic, Visual C++, or Visual J++. 

Add the appropriate COM TI component library to the references list in the project, and reference the component in the program code. 

Invoke methods as appropriate throughout the application. 

If the existing mainframe TP is to be modified, do one of the following:

Perform the modification on the mainframe. 

Use a Windows-based COBOL development environment, such as Microfocus COBOL, then move the code to the mainframe. 
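The client-side portion of this step can be sketched as follows in an ASP page written in VBScript. The ProgID Inventory.Parts, the method GetPartsByWarehouse, and the field names are hypothetical; the pattern of creating the component, invoking a method, and iterating the returned recordset is the part that carries over.

```
<%
' Sketch only: the component, method, and field names below are
' hypothetical. The component library (.tlb) built in step 2 must
' be registered in MTS before this page will run.
On Error Resume Next

Dim objParts, rsParts
Set objParts = Server.CreateObject("Inventory.Parts")

' COM TI forwards this call to the mainframe TP through SNA Server
' and returns the TP output as a tabular recordset.
Set rsParts = objParts.GetPartsByWarehouse(Request.QueryString("Warehouse"))

If Err.Number <> 0 Then
    Response.Write "The mainframe request failed: " & Err.Description
Else
    Do While Not rsParts.EOF
        Response.Write rsParts("PartNumber") & " - " & rsParts("Quantity") & "<BR>"
        rsParts.MoveNext
    Loop
End If
%>
```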

Step 4: Testing the Application 

If the mainframe TP is unchanged, the TP does not require testing. If the TP has been modified, then the COBOL program should be tested independently to ensure that it runs correctly in its own environment.

Test the new application as follows:

Ensure that the COM TI component library is registered in MTS. 

Test the mainframe TP independently if it has been modified in any way. 

Test the newly developed COM TI component independently to ensure that it is working correctly. 

Test the application, running the mainframe TP. 

Step 5: Deploying Application Components 

To deploy the client-side of the application, the following software components must be installed on each production computer:

Microsoft Windows NT Server 4.0 or Windows NT Workstation 4.0 (or later) updated with Service Pack 3. 

Microsoft Windows NT Client for SNA Server 4.0.

Microsoft Transaction Server 2.0. 

COM TI administration and run-time components. 

COM TI component libraries registered in MTS. 

Client applications accessing COM TI components. 

Step 6: Maintaining the Application 

As changes are made to the mainframe TP program, do one or more of the following, as appropriate:

Acquire the COBOL source code from the mainframe. 

Use the COBOL Import Wizard to re-specify the method names and host TP names, and to re-select the input, output, and return parameter values. 

What If the Required Mainframe TP Does Not Exist?

In this case, you must modify steps 2 and 3 of the scenario by developing a TP to run under CICS on the mainframe host.

In step 2, use the COM TI component builder to:

Enter the methods and parameters for the application. 

Add information about the name and location of the new TP. 

Change the default mappings produced by the component builder, if necessary. 

Create the COM TI component library. 

In step 3:

Write the mainframe TP, either on the mainframe or in the Windows environment using a product such as Microfocus COBOL, then move the program to the mainframe for testing. 

Scenario Two: Extending Transactions with COM TI

When deployed in the Windows NT Server environment, COM TI can extend MTS transactions to include mainframe TPs running under CICS and IMS.

A developer can follow the same steps as in Scenario One to connect a Windows NT–based Web site to an existing COBOL TP program for the purpose of making legacy data available to ASP. In this scenario, additional tasks are needed to extend MTS transactions to the mainframe-based transactions under the control of CICS.

Step 1: Configuring COM TI 

To develop the COM TI object, you must meet the following system requirements:

Microsoft Windows NT Server 4.0 or Windows NT Workstation 4.0 (or later) updated with Service Pack 3. 

Microsoft IIS 4.0. 

Microsoft Windows NT Client for SNA Server 4.0. 

Microsoft Transaction Server 2.0. 

Additionally, the following COM TI components must be installed (for descriptions of each of the components, see Scenario One):

The administration component. 

The run-time component. 

The component builder. 

Step 2: Defining Required Methods and Parameters 

To make the mainframe TP data available to IIS, perform the following tasks:

Acquire the COBOL source code from the mainframe using a file transfer mechanism such as the FTP/AFTP gateway that is delivered with SNA Server. 

Use the COBOL Import Wizard to:

1. Select the COBOL source code. 

2. Specify the methods and mainframe TP names. 

3. Select input, output, and return value parameters. 

When necessary, change the mappings between the COBOL and Automation data types. 

Use the component builder to make a COM TI component library (.tlb). This is a standard library that can be used by client software and MTS. 

If you have changed the data type mapping in the COBOL code, there are two more actions required in this step:

Use the component builder to generate new COBOL declarations. 

Update the mainframe program with the new COBOL data declarations. This is the only instance requiring modifications to the mainframe environment. 

Step 3: Writing the Application 

To accomplish this step:

Write the client in a language that supports referencing of Automation objects, such as Microsoft Visual Basic, Visual C++, or Visual J++. 

Add the appropriate COM TI component library to the references list in the project, and reference the component in the program code. 

Invoke methods as appropriate throughout the application. 

Define any transaction-related attributes in the COM TI component. The attributes will handle transactions in a manner transparent to the client application (for example, an IIS application using Active Server Pages). The COM TI component will call both MTS/DTC (distributed transaction coordinator) and the TP running under CICS. 
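A transactional page can be sketched as follows. IIS 4.0 supports the @ TRANSACTION directive and the OnTransactionCommit and OnTransactionAbort event handlers; the component ProgID Bank.Transfer and its method are hypothetical. Because the page declares the transaction, MTS/DTC coordinates the work, and a COM TI component marked with transactional attributes enlists the CICS TP in the same transaction.

```
<%@ TRANSACTION=Required LANGUAGE=VBScript %>
<%
' Sketch of a transactional ASP page; the component and method
' names are hypothetical assumptions.
Dim objTransfer
Set objTransfer = Server.CreateObject("Bank.Transfer")
objTransfer.MoveFunds Request.Form("FromAcct"), _
                      Request.Form("ToAcct"), _
                      CCur(Request.Form("Amount"))
%>
<%
' These event handlers fire when the distributed transaction,
' including the mainframe TP work, commits or aborts.
Sub OnTransactionCommit()
    Response.Write "Transfer committed."
End Sub

Sub OnTransactionAbort()
    Response.Write "Transfer aborted; no changes were made."
End Sub
%>
```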

Step 4: Testing the Application 

If the mainframe TP is unchanged, it does not require testing. If the TP has been modified, then the COBOL program should be tested independently to ensure that it runs correctly in its own environment.

Test the new application as follows:

Test the mainframe TP independently if it has been modified in any way. 

Test the newly developed COM TI component independently to ensure that it is working correctly. 

Test the application completely, driving the COM TI object with the client application and running the mainframe TP. 

Carry out a transaction test—test the COM TI object with the transactions made available to check operation between COM TI and MTS in conjunction with COM TI and the TP running under CICS. 

Step 5: Deploying Application Components 

Each of the following applications must be installed on the production computer before you deploy the client side of the application:

Microsoft Windows NT Server 4.0 or Windows NT Workstation 4.0 updated with Service Pack 3. 

Microsoft Windows NT Client for SNA Server 4.0. 

Microsoft Transaction Server 2.0. 

COM TI administration and run-time components. 

COM TI component library. 

Client application accessing COM TI components. 

Step 6: Maintaining the Application 

If you change the mainframe TP application, you must do at least one of the following:

Acquire the COBOL source from the mainframe. 

Use the COBOL Import Wizard to re-specify the method names and host TP names, and re-select the input, output, and return parameter values. 

Using COM TI with IMS

Current versions of COM TI do not support transactional semantics (also known as a two-phase commit) under the IMS subsystem. However, you can access an IMS/DB database transaction through a CICS subsystem front-end TP program. That is, if the mainframe environment supports CICS transaction processing against IMS/DB, you can extend MTS transactional semantics to the IMS/DB database. In this case COM TI provides the same services as any other TP running under CICS. If you do not require transactional semantics, and just want to gain access to your data, you can access IMS directly.

Gaining Access to Legacy File Data

The following section describes how you can incorporate legacy file systems into your Web applications by employing a data provider to access files at the record level and move the data to the IIS environment.

Legacy File Data and IIS

To develop Web applications that deliver data stored in VSAM and AS/400 files, you need to be able to gain access to VSAM and AS/400 files from the Windows NT environment. You can do this by making the data available to data consumer applications running under ASP:

Access legacy file systems running under MVS and OS/400 to retrieve the business data stored in them. 

Integrate legacy data with applications and data in the IIS environment using the OLE DB Provider for AS/400 and VSAM. 

Gaining Access to VSAM and AS/400 files with OLE DB and ADO

The OLE DB Provider for AS/400 and VSAM (Data Provider) is the first application to make record-level mainframe VSAM and AS/400 file systems available to ASP applications. The Data Provider makes it possible for consumer ASP applications to gain access to the mission-critical data available in those file systems. The Data Provider ships with Microsoft SNA Server 4.0, Windows NT Client for SNA Server 4.0, and the SNA Server SDK 4.0.

For more information about developing ASP applications, see Chapter 5, "Developing Web Applications."

The Data Provider and the Demand for Legacy File Data

Microsoft released a beta-test version of the OLE DB Provider for AS/400 and VSAM in the summer of 1997, and the Microsoft SNA site received over 600 registrations and download requests during the first four weeks that the beta test kits were available.

This response is not surprising. There are over 400,000 AS/400 computers and about 30,000 mainframe computers deployed worldwide. Some run database management systems, but virtually every one of them stores information in VSAM data sets, and nearly every AS/400 site stores data in conventional file structures—all accessible with the Data Provider.

Functional Overview of the Data Provider

The OLE DB Provider for AS/400 and VSAM (Data Provider) comprises two core components:

An OLE DB–compatible data provider that insulates the complexities of APPC (LU 6.2) programming from the OLE DB or ADO programmer 

An SNA Distributed Data Management (DDM) transaction management program that runs as a Windows NT service under Windows NT 4.0, or as an application under Windows 95. 

The following list summarizes the uses of the Data Provider:

From Windows NT, you can gain access to VSAM and AS/400 file systems through the IBM DDM protocol server components installed on many IBM host systems. There is no need to install Microsoft software on the host system. 

You can use customizable applications to read and write to VSAM and AS/400 files that are in place on IBM host computers. There is no need to migrate the files to the Windows NT environment. 

You can gain access to fixed and variable logical record-length classes and file and record locking, while preserving file and record attributes. 

You can gain access to most AS/400 file types (both physical and logical) and most popular mainframe dataset types: sequential (SAM); VSAM key-sequenced (KSDS), VSAM entry-sequenced (ESDS), VSAM relative record (RRDS), and partitioned (PDS/PDSE). 

Capitalize on Development with the OLE DB/DDM Driver

The Data Provider makes it possible to integrate unstructured legacy file data with data in the Windows NT environment.

The DDM protocol provides program-to-program communications through SNA Server (version 4.0 or later), and native host protocols (such as LU 6.2). No custom development is required on the host for SNA communications. 

IBM DDM servers are available on host systems supporting record-level access to files. For example, Distributed FileManager, a component of IBM DFSMS (Data Facility Storage Management Subsystem) V1R2 or later, is one target DDM server installed on many mainframes running under MVS or OS/390. On AS/400 computers, OS/400 (V2R2 or later) acts as a DDM server. The Data Provider communicates with Distributed FileManager and OS/400 through Advanced Program-to-Program Communications (APPC). 

The Data Provider makes it easy for developers to gain access to high-level component interfaces such as OLE DB or ADO. It supports development in Visual Basic, Visual C++, VBScript, and JScript. Web developers don't need to know SNA, APPC, or LU 6.2. 
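A minimal sketch of this access path from an ASP page follows. The provider name SNAOLEDB, the data source name, the credentials, and the data set name are assumptions for illustration; consult the SNA Server documentation for the exact connection string your configuration requires.

```
<%
' Sketch: open a VSAM data set at the record level through the
' Data Provider with ADO. Provider name, data source, credentials,
' and data set name are hypothetical.
Const adOpenDynamic    = 2
Const adLockOptimistic = 3
Const adCmdTableDirect = 512

Dim cnHost, rsHost
Set cnHost = Server.CreateObject("ADODB.Connection")
cnHost.Open "Provider=SNAOLEDB;Data Source=MVSDDM;" & _
            "User ID=TSOUSER;Password=secret"

Set rsHost = Server.CreateObject("ADODB.Recordset")
' Open the host file by its fully qualified data set name.
rsHost.Open "SAMPLE.PARTS.KSDS", cnHost, adOpenDynamic, _
            adLockOptimistic, adCmdTableDirect

Do While Not rsHost.EOF
    Response.Write rsHost("PARTNO") & "<BR>"
    rsHost.MoveNext
Loop

rsHost.Close
cnHost.Close
%>
```

No SNA, APPC, or LU 6.2 programming appears in the script; the DDM service and SNA Server handle the host communication behind the ADO calls.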

Scenario: Using the Data Provider to Gain Access Host Files

With the Data Provider, you can gain access to file data on an IBM host from a Windows NT–based Web application. Suppose that you want to add content from a legacy file stored on an IBM mainframe or AS/400 computer to an ASP application running under IIS. ASP can be used to interpret user requests and format the return of data to the user via Web pages. The Data Provider can process calls from the IIS environment and pass data returned from the mainframe environment to IIS.

This scenario requires six main steps:

Step 1 (setup and configuration): Configuring the OLE DB/DDM Driver. 

Step 2 (design time): Defining the application requirements. 

Step 3 (design time): Writing the application. 

Step 4 (design time): Testing the application. 

Step 5 (deployment): Deploying the application components. 

Step 6 (post-deployment): Maintaining the application. 

Step 1: Configuring the Data Provider 

To develop an application using the Data Provider, you must meet the following system installation requirements:

Microsoft Windows NT Server 4.0 or Windows NT Workstation 4.0 or later updated with Service Pack 3 or later. 

Microsoft IIS 4.0 or later. This includes ADO 1.5, the ActiveX Data Objects release supported by the Data Provider. 

Microsoft Windows NT Client for SNA Server 4.0. Configure it to connect to SNA Server 4.0. 

Additionally, the following packages must be installed and configured:

Microsoft OLE DB Provider for AS/400 and VSAM. 

Microsoft OLE DB Provider for AS/400 and VSAM snap-in for Microsoft Management Console (MMC). Configure the DDM service for the target host and PC locale. Optionally, configure the Data Sources if you are not passing data source information through from the ADO consumer application. Configure the mainframe data column description to OLE DB data type mappings. 

Step 2: Defining Application Requirements 

To accomplish this step:

Compile a list of target host files, keys, and alternate index paths. Define the subset of records to be read by the target Web application. 

Specify the ADO objects, methods, properties, and collections supported by the Data Provider, to be used in the application. 

Consider using Recordset.Filter to define recordsets based on logical search criteria and to search for records based on application program and user input. 

Use the ADO Errors collection to surface errors in formats the program can respond to, avoiding passing unnecessary error conditions to the Web browser. 

Use either the automatic AS/400-to-OLE DB data transformation, or a custom mapping using a DDM service host data column description file. 

Decide whether or not to map Windows NT logon user IDs obtained from the Web browser to host user IDs automatically. 

Choose a deployment option and decide whether to run the DDM service on the computer running SNA Server or the computer running Windows NT Client for SNA Server 4.0. 

Step 3: Writing the Application 

To accomplish this step:

Write scripts to gain access to ADO 1.5 from an ASP page in a language that supports referencing of Automation objects, such as VBScript or JScript. 

Cast data to match the OLE DB and host data types. Refer to the recordset schema to determine which host data types are supported. Ensure that the host data is valid before writing to the host files, especially if a host application concurrently gains access to host data files. 

Check the syntax of supported OLE DB methods and properties. Pay special attention to the connection string and the Recordset.Open parameters. These are unique to each OLE DB Provider. 

If appropriate, use the MS$SAME placeholder to pass the user ID and password to the SNA Server host security feature. 

Program some loops to ensure that target recordsets contain data before invoking recordset methods, to allow for delays caused by network conditions and the remoteness of target hosts. 
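The defensive techniques above can be sketched in one fragment. The connection string, file name, and field names are hypothetical; the Errors-collection check and the empty-recordset test reflect the guidelines in this step.

```
<%
' Sketch: defensive handling when reading a remote host file.
' Connection details, file names, and fields are hypothetical.
Const adCmdTableDirect = 512

Dim cn, rs, objErr
Set cn = Server.CreateObject("ADODB.Connection")
On Error Resume Next
cn.Open "Provider=SNAOLEDB;Data Source=AS400DDM"
Set rs = cn.Execute("PARTSLIB/PARTFILE", , adCmdTableDirect)

If cn.Errors.Count > 0 Then
    ' Surface provider errors in a form the program can respond to,
    ' rather than passing raw error conditions to the browser.
    For Each objErr In cn.Errors
        Response.Write "Host error " & objErr.Number & ": " & _
                       objErr.Description & "<BR>"
    Next
ElseIf rs.BOF And rs.EOF Then
    Response.Write "No records were returned from the host."
Else
    ' Work with a logical subset of the records, as suggested
    ' by the Recordset.Filter guideline above.
    rs.Filter = "QUANTITY > 0"
    Do While Not rs.EOF
        Response.Write rs("PARTNO") & "<BR>"
        rs.MoveNext
    Loop
End If
%>
```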

Step 4: Testing the Application 

Test the new application to make sure that:

The ASP pages run correctly. 

There are clean communications between ADO and the DDM Service on the Windows NT side. Consider starting the DDM Service on Windows NT automatically to ensure the timely availability of a PC-to-host connection with minimal session startup time. 

There are reliable, efficient operations between DDM Service and the host by way of SNA Server. Consider keeping the connection between SNA Server and the host connection active to reduce session startup time. 

There is proper data display in the Web applications. Ensure the data integrity of host files, because there can be a loss of precision when you move data from the host to the PC and back again. 

Step 5: Deploying Application Components 

Before deploying the application, the following packages must be installed on each production computer:

Microsoft Windows NT Server 4.0 updated with Service Pack 3, or Windows NT Workstation version 4.0, updated with Service Pack 3. 

Microsoft Windows NT Client for SNA Server 4.0. 

Microsoft IIS 4.0. 

OLE DB Provider for AS/400 and VSAM. 

The ASP pages requesting data from the legacy files. 

DDM Service running with Windows NT Client for SNA Server 4.0 or SNA Server. Consider starting the DDM Service automatically when the system restarts to improve responsiveness. 

Step 6: Maintaining the Application 

If you modify the ASP scripts, you need to re-test the application using the following guidelines:

Test the application fully if you add new scripts or script fragments to existing ASP pages that request data from new data sources. 

If the target host data files are restructured, or new host tables are added, these changes need to be incorporated into the Web application by modifying ADO methods and creating new recordsets as needed. 

If the host connectivity changes, you should verify the Windows NT Client for SNA Server 4.0 configuration or the Data Provider data sources. 

Replicating Legacy Databases

Much of the data stored in legacy systems resides in relational databases. In addition to gaining access to legacy data in place on the host, you can replicate database tables from a legacy application to SQL Server.

Why Replication?

Legacy database replication is a conversion process that copies, reformats, and migrates database tables for use in relational databases running under Windows NT 4.0. Using data replicated from a legacy database, developers and systems engineers can:

Integrate the legacy data with data from the Windows NT side. If the data is replicated for storage in a Windows NT–side database, IIS 4.0 connects Internet or intranet clients to dynamically created Web pages retrieving the data through ODBC. 

Readily subject the data to new business logic. For new processes involving the database, developing in Windows is less costly than the development of legacy systems. 

Manage the data efficiently. Windows NT Server 4.0 with IIS provides a common set of system management tools, database management tools, Web application servers, and transaction servers. 

Replicating DB2 Tables using the Host Data Replicator

The HDR (Host Data Replicator) is a database replication software product that copies pre-defined data from IBM DB2 database tables to Microsoft SQL Server database tables. It can do so on demand, at a scheduled time, or according to a recurring schedule. HDR has the capability to reverse the process as well, replicating SQL Server tables for use in a DB2 database.

HDR is composed of the data replicator service (a Windows NT operating system service) and Data Replicator Manager (a Windows NT operating system application for administration). The Data Replicator Manager has a user interface similar to those used by SQL Enterprise Manager and the scheduling portions of SQL Executive.

Replication Type

The Host Data Replicator performs bi-directional replication with full refresh. A complete "snapshot" of the source table is copied into the target database table using either BCP (when copying to Microsoft SQL Server) or ODBC "inserts." All of the records of the target table are overwritten each time replication occurs. Optionally, you can append data to the end of the existing table provided you do not change the table schema.
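The full-refresh model can be illustrated with a short Python sketch. In-memory SQLite databases stand in for the DB2 source and SQL Server target; the table, columns, and function names are hypothetical and do not reflect HDR's actual interface:

```python
import sqlite3

# In-memory databases stand in for the DB2 source and SQL Server target;
# the orders table and its columns are invented for illustration.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])
source.commit()

def full_refresh(src, dst, table):
    """Copy a complete snapshot of the source table, overwriting the target."""
    rows = src.execute(f"SELECT * FROM {table}").fetchall()
    dst.execute(f"DROP TABLE IF EXISTS {table}")
    dst.execute(f"CREATE TABLE {table} (id INTEGER, amount REAL)")
    dst.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    dst.commit()

full_refresh(source, target, "orders")
print(target.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

Because every replication run drops and rebuilds the target, the target always reflects a single consistent snapshot of the source, which is the essential trade-off of full refresh versus incremental replication.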

The following sections outline the replication features supported by HDR.

Data Processing and Filtering 

Replication of selected columns ("vertical partitioning"). 

Replication of selected rows defined by SQL queries ("horizontal partitioning"). 

Replication of selected columns from selected rows (combined vertical and horizontal partitioning). 

Construction of destination columns calculated ("derived") from source data. 

Use of SQL expressions to alter data in destination tables before or after replication. 

Changes to column names, column data types, or column order between source and destination. 
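The filtering options above amount to shaping the replication query. The following sketch (Python with SQLite as a stand-in; the table and data are invented for illustration) shows how each option maps to a SELECT shape:

```python
import sqlite3

# Illustrative table; the SELECT shapes mirror HDR's filtering options.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, item TEXT, qty INTEGER, price REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)",
               [("EU", "disk", 4, 100.0), ("US", "tape", 2, 50.0)])

# Vertical partitioning: replicate selected columns only.
vertical = db.execute("SELECT region, item FROM sales").fetchall()

# Horizontal partitioning: replicate rows selected by a SQL query.
horizontal = db.execute("SELECT * FROM sales WHERE region = 'EU'").fetchall()

# Derived column: a destination value calculated from source data.
derived = db.execute("SELECT item, qty * price AS total FROM sales").fetchall()
print(derived)
```

Combining the column list and the WHERE clause in one statement gives the combined vertical and horizontal partitioning described above.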

Scheduling 

Single replication on demand (repeatable at will, or through a programmatic interface such as SP_RUNTASK). 

Single replication at a pre-defined time. 

Recurring replication at pre-defined times. 

Statistics 

Throughput for each replication operation. 

Number of bytes transferred for each replication operation. 

Elapsed time for each replication operation. 

Statistics are available through the Data Replicator Manager or Windows NT Performance Monitor. 

Security 

The Data Replicator Manager prompts the administrator to supply a valid Microsoft SQL Server account and password each time it establishes a connection to a Data Replicator Service. If a correct account and password are not provided, the Data Replicator Manager closes the connection, preventing administration of the associated service and its subscriptions. (A subscription is a description of a replication operation involving one source table and one destination table. A single Data Replicator Service can handle many subscriptions).

For DB2: During subscription setup, an administrator must supply a valid DB2 account and password. HDR will also support the SNA Server version 3.0 Single Sign On option. 

Microsoft SQL Server destination table ownership can be defined during subscription setup. Access to replicated data is then controlled through normal Microsoft SQL Server security measures. 

Performance 

Source table names can be filtered to reduce network traffic and improve performance during the setup of full refresh subscriptions in environments with large numbers of possible source tables. 

Connections to source and destination servers can be pooled to avoid performance costs of reestablishing connections unnecessarily. Pool sizes can be adjusted as needed. 

The Data Replicator Service caches subscription information to avoid the performance costs of obtaining the information from the data replicator control database when scheduled replication times arrive. 

HDR is a loosely coupled product, meaning that it does not support two-phase commit. 

Supported Platforms

HDR is supported on the following platforms:

Microsoft SNA Server 3.0 or later. 

Windows NT Server 3.51 or later (Intel and Alpha). 

Microsoft SQL Server 6.5 or later. 

IBM DB2 including DB2 (MVS), DB2/VM (SQL/DS), DB2/400, and the common family (DB2/2, DB2/6000, and DB2/2 Windows NT through APPC). 

Migrating Transaction Processes

Microsoft Transaction Server (MTS) is a transaction management server that provides reliable, secure transaction management for Web applications. The following section provides information that can help you plan a migration of transaction-driven applications from any legacy environment to the Windows NT 4.0 environment where transactions requested by IIS are managed by MTS.

Why Use Transactions?

Two changes in the use of information technology make the increased deployment of transaction management systems compelling for many organizations:

The growing demand to use the Internet and intranets for exchanging secure information, including financial exchanges through online commerce. 

The increasing trend of running multiple reusable software components within one application, including components used to access databases. 

A transaction is a multi-part update process in which the update is committed—made final—only if all of the parts of the transaction are completed successfully. An online transaction is an update process that is initiated and carried out over a data network.

The explosive growth of the Internet and organizational intranets has presented new opportunities for doing business over data networks. The elemental expression of doing business, the exchange of money for goods or services, requires updates to more than one database for each exchange.

Software design is increasingly shifting toward a component model in which applications are made up of many code segments operating independently of each other. Often, an application allows more than one component to concurrently update a database, or more than one database. Concurrent updates require a transaction manager to ensure transaction integrity while optimizing performance.
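The all-or-nothing property of a transaction can be sketched as follows. This is a minimal, generic illustration (Python, with SQLite standing in for the databases; the accounts table and transfer logic are invented), not MTS itself:

```python
import sqlite3

# A single in-memory database stands in for the resources being updated;
# the accounts table and transfer logic are illustrative only.
db = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions explicitly
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])

def transfer(conn, src, dst, amount):
    """A multi-part update: both parts commit together, or neither does."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        balance = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                               (src,)).fetchone()[0]
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")  # undo the partial update

transfer(db, "alice", "bob", 60)  # commits: alice 40, bob 60
transfer(db, "alice", "bob", 70)  # aborts: balances unchanged
```

The second call fails partway through, so the debit already applied is rolled back; no client ever observes a half-completed exchange.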

For more information, see Chapter 6, "Data Access and Transactions."

Migrating to MS Transaction Server (MTS)

There is a growing need for transaction management on the Web, and there are many existing transaction systems on legacy networks processing business-critical data. Hence, many organizations need to manage transactions in both environments. One serious obstacle is that legacy transaction systems do not extend across platform boundaries, such as the boundary between SNA legacy networks and TCP/IP-based intranets with Windows NT–based servers. In other words, a legacy TP running under CICS or IMS cannot account for a database update on the Windows NT network. Additionally, the costs of development, hosting, and scaling up are higher in the legacy environment than on Windows NT–based networks.

The best solution is a Windows NT–based transaction management system that coordinates IIS-based Web transactions and legacy TPs. Any transaction can then involve updates of databases running under Windows NT, running under a mainframe TP, or both at once. Transaction processes can be selectively migrated to Windows NT–based database management software such as SQL Server, and any transaction processes left on the mainframe can be managed from Windows NT as well.

MTS Features and Capabilities

Microsoft Transaction Server (MTS) expands the capabilities of IIS to include Web-based transaction management. MTS is a component-based transaction management solution that provides a programming model, a run-time environment, and graphical server administration tools—everything required to design and develop a transaction application and migrate a legacy process to it. With MTS, Web developers using ASP can develop full transaction management capabilities for deployment on the Web.

By deploying MTS as your transaction management system, you can profit from the advantages of the Windows NT environment and migrate selected legacy transactional processes. In the meantime, MTS extends transaction management to include processes left running in the legacy environment.

In addition to full transaction monitoring and management, and the low cost of scaling up Windows NT–based systems and software, MTS offers advantages over its mainframe-based counterparts at design time and run-time, as well as for maintenance and administration.

MTS Design Time

The MTS programming model provides the framework for developing components that encapsulate business logic.

MTS fits perfectly in a three-tier programming model (see the sidebar, "Three-Tier Applications and Middleware"). MTS acts as middleware, managing the components that make it possible for a Web application to handle many clients. It also provides developers with a great deal of flexibility:

The model emphasizes a logical architecture for applications, rather than a physical one. Any service can invoke any component. 

MTS connects requests for transactions (calls from ASP pages) to business logic and to database applications, so that you are not required to develop these processes. 

The applications are distributed, which means you can run the right components in the right places, benefiting users and optimizing use of network and computer resources. 

Three-Tier Applications and Middleware

A three-tier application divides a networked application into three logical areas. Middleware, such as MTS, connects the three tiers.

Tier 1 handles presentation. In a Web application, data is requested by the browser and is sent there from the Web server for display.

Tier 2 processes business logic, meaning the set of rules for processing business information. In an IIS-based Web application, Tier 2 processing is carried out in IIS components.

Tier 3 processes the data—associated databases and files where the data is stored. In a Web application, Tier 3 consists of a back-end database management system or file access system with its associated data.

Three-tier systems are easier to modify and to maintain than two-tier systems because presentation, business logic, and data processing are separated by design. This architecture permits re-development to proceed in one tier, without affecting the others.

Middleware, such as MTS, manages the connections between the tiers, and makes efficient use of resources so that Web application programmers can concentrate on business logic. MTS can connect the browser request (Tier 1) to the business logic (Tier 2). In Tier 3, it can connect business logic to the databases and manage all activities of the transaction.

Application programming interfaces and resource dispensers make applications scalable and robust. Resource dispensers are services that manage non-durable shared state on behalf of the application components within a process. This way, you don't have to undertake traditional programming tasks associated with state maintenance.

MTS works with any application development tool capable of producing ActiveX dynamic-link libraries (DLLs). For example, you can use Microsoft Visual Basic, Microsoft Visual C++, Microsoft Visual J++, or any other ActiveX tool to develop MTS applications.

MTS is designed to work with a wide variety of resource managers, including relational database systems, file systems, and document storage systems. Developers and independent software vendors can select from a wide range of resource managers and use two or more resource managers within a single application.

The MTS programming model makes migration easier by making transaction application development simpler and faster than traditional programming models allow. For more information on developing MTS applications, see Chapter 6, "Data Access and Transactions."

MTS Run Time

The MTS run-time environment is a second-tier platform for running MTS components. This environment provides a comprehensive set of system services including:

Distributed transactions. 

Automatic management of processes and threads. 

Object instance and connection pool management to improve the scalability and performance of applications. 

A distributed security service that controls object invocation and use. 

A graphical interface that supports system administration and component management. 

This run-time infrastructure makes application development, deployment, and the management of applications much easier by making applications scalable and robust.

Overall performance is optimized by managing component instantiation and the connection pool. MTS instantiates components just in time for transactions, purges state information from the instance when a transaction completes, and reuses the instance for the next transaction. For example, users can enter transaction requests from their browsers to an ASP page containing the code needed to call an MTS component. As these requests are received by MTS, the transactions are managed using components already instantiated. This minimizes the proliferation of object instantiation and connection-making that often inhibits the performance of systems supporting transactions.
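The just-in-time instantiation and pooling described above can be sketched generically. This is an illustrative model only; the class and method names are invented and do not reflect the MTS API:

```python
# Hypothetical component pool illustrating just-in-time activation:
# instances are reused across transactions, and per-transaction state
# is purged when each transaction completes.
class OrderComponent:
    def __init__(self):
        self.state = {}          # non-durable, per-transaction state

    def execute(self, request):
        self.state["request"] = request
        return f"processed {request}"

class InstancePool:
    def __init__(self, factory, size=2):
        self.factory = factory
        self.free = [factory() for _ in range(size)]

    def run(self, request):
        # Reuse a pooled instance if one is free; create one only if needed.
        inst = self.free.pop() if self.free else self.factory()
        try:
            return inst.execute(request)
        finally:
            inst.state.clear()   # purge state before returning to the pool
            self.free.append(inst)

pool = InstancePool(OrderComponent)
print(pool.run("order-1"))  # processed order-1
```

Because state is cleared between calls, a small fixed pool of instances can serve an arbitrary stream of transactions, which is the scalability property the paragraph above describes.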

MTS Administration Tools

MTS Explorer is a graphical administration tool used to register, deploy, and manage components executing in the MTS run-time environment. With MTS Explorer, you can script administration objects to automate component deployment.

Planning a Migration to MTS

As you migrate processes and databases from a legacy environment (for example, TPs running under CICS on a mainframe) to MTS, TPs will continue to run on the mainframe for a while. To migrate to MTS:

Use MTS and COM TI to extend transaction management to include all the parts of each transaction. This includes updates that take place on databases running under Windows NT, and updates that take place on the mainframe. 

Script the ASP pages so that IIS calls the MTS components that execute the transaction. 

You can migrate parts of the legacy transaction infrastructure to SQL Server and use MTS to manage the parts of the transaction on the legacy host. All of the data can be accessed by IIS using ASP scripts.
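The coordination idea behind these steps, a single transaction spanning a Windows NT database update and a legacy TP call, where failure of either part aborts both, can be sketched as follows. All names here are hypothetical; this is a conceptual model, not the COM TI interface:

```python
# Conceptual coordinator: each Resource stands in for one participant
# (a SQL Server table, a legacy TP reached through a gateway).
class Resource:
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail
        self.done, self.committed = False, False

    def do_work(self):
        if self.fail:
            raise RuntimeError(f"{self.name} failed")
        self.done = True

    def commit(self):
        self.committed = True

    def rollback(self):
        self.done = False

def run_transaction(resources):
    """Commit every participant, or roll back all of them."""
    try:
        for r in resources:
            r.do_work()
    except RuntimeError:
        for r in resources:
            r.rollback()
        return "aborted"
    for r in resources:
        r.commit()
    return "committed"

sql = Resource("SQL Server table")
tp = Resource("CICS TP via gateway")
print(run_transaction([sql, tp]))  # committed
```

If either participant raises during its work, every participant is rolled back and the transaction reports "aborted", mirroring the cross-boundary guarantee that MTS with COM TI provides.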

Mapping Transaction Tasks to Windows NT–based Applications

The following table maps transaction-related functions to the applications used to support functions in the Windows NT environment.

Table 9.1 Transaction-Related Functions and Windows NT–Based Applications 

Transaction-related task             Windows NT–based application
Manage transactions                  Microsoft Transaction Server
Manage data resources                SQL Server 6.5
Call transactions from Web pages     Active Server Pages
Connect to a legacy network          SNA Server 4.0
Extend transactions to legacy TPs    COM TI

Resources

The following resources provide additional information on accessing legacy applications and data.

Web Links

http://www.microsoft.com/sna/ This Web site provides information about Microsoft SNA Server, the COM Transaction Integrator for CICS and IMS, the OLE DB Provider for VSAM and AS/400, and the Host Data Replicator.

Software Product Documentation

For information about Microsoft SNA Server, COM Transaction Integrator for CICS and IMS, OLE DB Provider for VSAM and AS/400, and the Host Data Replicator, see the SDK documentation included with Microsoft SNA Server 4.0.


