An Introduction to Microsoft Transaction Server

Microsoft Corporation

Introduction

The Microsoft® Transaction Server (MTS) represents a new category of product that makes it easier to develop and deploy high-performance, scalable, and reliable distributed applications. It achieves this by combining a component-based development and deployment environment with the reliability and scalability of transaction processing monitors.

Why Use Microsoft Transaction Server?

Microsoft Transaction Server is designed to make it easier to build high-performance, scalable, and reliable intranet and Internet applications. It has been possible to build such applications for years, but doing so required talent and investment beyond the reach of most companies.

MTS is based on proven transaction-processing methods, but its significance transcends the domain of transaction processing monitors. It defines a simple programming model and execution environment for distributed, component-based server applications.

Applications are composed of collections of Microsoft ActiveX® components that provide the business-application function. These components are developed as if for a single user. When they are installed to execute within the MTS environment, the server application automatically scales to support many concurrent clients with high performance and reliability.

MTS is specifically designed to allow server applications to scale over a very wide range of users—from small single-user systems to high-volume Internet servers. It provides the robustness and integrity traditionally associated only with high-end transaction processing systems.

This section takes a brief look at the complexities of developing good application servers. It looks at the issues from three different perspectives. First, it highlights what a network server must do to provide a reasonable level of service. Then, it discusses the issues that arise when you build component-based applications. Finally, it describes how crucial it is to maintain application integrity, even when failures occur.

MTS provides an application programming model that shields application developers from these complexities, allowing the developer to focus on application function, and lowering the cost and time required to build applications for intranets and the Internet.

Server Infrastructure

Servers require a sophisticated infrastructure, and building a network application server from scratch is no easy task. Implementing the actual business function, such as handling orders for an online bookstore, is only a small fraction of the work; most of the effort goes into the infrastructure a server needs to attain acceptable levels of performance and scale.

Application server developers must usually build much of this infrastructure themselves. Even with the rich services provided by remote-procedure-call (RPC) systems, developers are still left to handle concerns such as concurrency, resource pooling, security, and administration on their own.

MTS provides an application-server infrastructure that satisfies these requirements.

Building Component-Based Applications

Building applications from components has tremendous appeal and was one of the early promises of object-oriented computing. It is particularly attractive for server applications because it provides a natural way to encapsulate business functions. However, engineering applications from components was harder than it first appeared. A fundamental weakness of the early object systems was the lack of a common framework that allowed developers to integrate objects created by different parties into one application, either in the same process or across processes. The Component Object Model (COM) addresses this problem.

However, a common component object model alone is not sufficient for building server applications from components; the components must also share a common server framework. Developers who build their own server frameworks have limited opportunities to use components developed by other parties.

The MTS application architecture and programming interfaces provide a common framework for building component-based server applications.

Maintaining Application Integrity

It is critical that business systems accurately maintain the state of the business. For example, an online bookstore must reliably track orders; if it does not, major revenue losses can result. Current orders could be lost, there could be delays in taking and filling orders, and dissatisfied customers might take their business elsewhere.

Maintaining the integrity of business systems has never been easy, especially after failures. Ironically, even as individual computers become more reliable, systems as a whole are becoming less reliable. Failures are common in systems composed of hundreds, thousands, or millions of desktop machines, connected via intranets and the Internet to tens, hundreds, or potentially hundreds of thousands of server machines.

The problem is compounded by the demand for distributed applications. Business transactions, such as ordering a book, increasingly involve multiple servers. Credit must be verified, books must be shipped, inventory must be managed, and customers must be billed. Updates must occur in multiple databases on multiple servers. Developers of distributed applications must anticipate that some parts of the application may continue to run even after other parts have failed. These failure scenarios are orders of magnitude more complicated than those of monolithic applications, which fail as a whole.

Business applications are frequently required to coordinate multiple pieces of work as part of a single business transaction. An online bookstore certainly wouldn't want to schedule the shipment of books without doing the proper billing, and it would be equally wrong to bill a customer without scheduling delivery. Coordinating the work so that it all happens, or none of it happens, is very difficult without special support from the system.

Guaranteeing atomic updates, even in the face of failures, is not easy. It is especially difficult when an application is distributed across multiple databases or systems. Using multiple components, which by design hide their implementations, compounds the problem.

Applications must also provide consistent behavior when multiple clients are accessing a component. Concurrent orders of the same book title should not result in attempting to send a single copy of the book to two customers. Unless the application is properly written, race conditions will eventually cause inconsistencies. These problems are difficult and expensive to resolve, and are more likely to occur as volume and concurrency increase. Again, using components compounds the problem.

MTS integrates transactions with component-based programming so that you can develop robust, distributed, component-based applications.

Microsoft Transaction Server Architecture

This section provides a brief introduction to the major architectural elements of MTS: application components, the Transaction Server Executive, server processes, resource managers, resource dispensers, and the Microsoft Distributed Transaction Coordinator.

Microsoft Transaction Server Components

Application components model the activity of a business. These components implement the business rules, providing views and transformations of the application state. Consider, for example, the case of an online bookstore. The durable state of the business—such as the pending orders, the inventory on hand, and the accounts receivable—is represented by records in one or more database systems. The application components update that state to reflect changes, such as new orders and the delivery of inventory.

MTS application components are ActiveX in-process servers (DLLs). You can create and implement these components with Microsoft Visual Basic®, Visual C++®, Visual J++®, or any ActiveX-compatible development tool. ActiveX, which is based on COM, supplies the standard mechanisms for packaging components, exposing their interfaces, and invoking them within a process or across process and machine boundaries.

MTS extends COM to provide a general server application framework. In addition to these inherent COM capabilities, MTS supplies the run-time services that server applications need, including transaction support, concurrency control, context management, and resource pooling, as described below.

Microsoft Transaction Server shelters you from these complex server issues, allowing you to focus on implementing business functions. Because components running under MTS can take advantage of transactions, you can write applications as if they run in isolation; MTS handles the concurrency, resource pooling, context management, and other system-level complexities. The transaction system, working in cooperation with database servers and other types of resource managers, ensures that concurrent transactions are atomic and consistent, that they are properly isolated, and that, once committed, their changes are durable.
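
To make the programming model concrete, the following is a minimal sketch of a component method written in Visual Basic. The Order component and its PlaceOrder method are hypothetical, and the code assumes a project reference to the Microsoft Transaction Server Type Library, which supplies the GetObjectContext function and the ObjectContext interface. The component obtains its context from MTS, performs its work, and then votes on the outcome by calling SetComplete or SetAbort:

' Hypothetical method on an MTS component (an ActiveX DLL class).
Public Sub PlaceOrder(ByVal CustomerID As Long, ByVal ISBN As String)
    Dim ctx As ObjectContext
    Set ctx = GetObjectContext()   ' context supplied by the MTS run-time environment

    On Error GoTo ErrHandler

    ' ... update orders, inventory, and billing here ...

    ctx.SetComplete                ' vote to commit the work done under this transaction
    Exit Sub

ErrHandler:
    ctx.SetAbort                   ' vote to roll back all work done under this transaction
    Err.Raise Err.Number, , Err.Description
End Sub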

Applications are deployed as collections of ActiveX components, called packages. Packages define both fault isolation and trust boundaries.

Transaction Server Executive

The Transaction Server Executive is a dynamic-link library (DLL) that provides the run-time services for Transaction Server components. These services include thread and context management. This DLL is loaded into the processes that host application components and runs transparently in the background.

Server Processes

A server process is a system process that hosts application component execution. Each server process hosts a package of components and services tens, hundreds, or potentially thousands of clients. You can configure multiple server processes to execute on a single computer. Each server process represents a separate trust boundary and fault-isolation domain.

Other process environments can also host application components. This way you can deploy applications that meet varying distribution, performance, and fault isolation requirements. For example, you can configure MTS components to load directly into Microsoft SQL Server™ or the Microsoft Internet Information Server (IIS). You can also configure them to load directly into client processes.

Resource Managers

A resource manager is a system service that manages durable data. Server applications use resource managers to maintain the durable state of the application, such as the record of inventory on hand, pending orders, and accounts receivable. The resource managers work in cooperation with the transaction manager to provide the application with a guarantee of atomicity and isolation. Microsoft SQL Server, durable message queues, and transactional file systems are all examples of resource managers.

Atomicity ensures that either all of the updates completed under a specific transaction are committed (and made durable) or all of them are aborted and rolled back to their previous state.

Consistency means that a transaction is a correct transformation of the system state, preserving the state invariants.

Isolation protects concurrent transactions from seeing each other's partial and uncommitted results, which might create inconsistencies in the application state. Resource managers use transaction-based synchronization protocols to isolate the uncommitted work of active transactions.

Durability means that committed updates to managed resources (such as a database record) survive failures, including communication failures, process failures, and server system failures. Transactional logging even allows you to recover the durable state after disk-media failures.

Atomicity and isolation work together to give the appearance that transactions happen instantly. The intermediate states of a transaction are not visible outside the transaction, and either all the work happens or none of it does. This allows application components to be written as if each transaction executes sequentially and without regard to concurrency—a tremendous simplification for application developers.

MTS supports resource managers that implement either the OLE Transactions protocol or the X/Open XA protocol. A toolkit is provided for developing resource managers.

Resource Dispensers

A resource dispenser is a service that manages nondurable shared state on behalf of the application components within a process. Resource dispensers are similar to resource managers, but without the guarantee of durability. MTS provides two resource dispensers, described below: the ODBC Resource Dispenser and the Shared Property Manager.

A toolkit is provided for developing resource dispensers.

ODBC Resource Dispenser

The ODBC Resource Dispenser manages pools of database connections for Transaction Server components that use the standard Open Database Connectivity (ODBC) interfaces. It allocates connections to objects quickly and efficiently, automatically enlists each connection on the object's transaction, and reclaims and reuses connections when objects release them. The ODBC Resource Dispenser is a DLL that provides this functionality transparently and is a built-in feature of MTS.
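
To show how little of this surfaces in application code, here is a hedged sketch in Visual Basic that uses ADO over an ODBC data source; the data source name, table, and method are hypothetical. The component simply opens a connection, uses it, and closes it on every call; the resource dispenser satisfies the Open from its pool whenever possible and enlists the connection on the object's transaction:

' Hypothetical method on an MTS component; "Bookstore" names an ODBC data source.
Public Sub ReserveCopy(ByVal ISBN As String)
    Dim conn As Object
    Set conn = CreateObject("ADODB.Connection")
    conn.Open "DSN=Bookstore"      ' satisfied from the connection pool when one is available
    conn.Execute "UPDATE Inventory SET OnHand = OnHand - 1 WHERE ISBN = '" & ISBN & "'"
    conn.Close                     ' returns the connection to the pool for reuse
    Set conn = Nothing
End Sub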

Shared Property Manager

The Shared Property Manager provides synchronized access to application-defined, process-wide properties (variables). You might use it to maintain a Web-page hit counter, to cache invariant data, or to provide smart caching that avoids database hot spots (for example, when generating unique receipt numbers).
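
For example, the Web-page hit counter mentioned above might look like the following Visual Basic sketch. The group and property names are illustrative, and the code assumes a reference to the Shared Property Manager Type Library, which supplies the SharedPropertyGroupManager, SharedPropertyGroup, and SharedProperty types:

' Inside an MTS component method.
Dim spmMgr As SharedPropertyGroupManager
Dim spmGroup As SharedPropertyGroup
Dim spmHits As SharedProperty
Dim bExists As Boolean

Set spmMgr = CreateObject("MTxSpm.SharedPropertyGroupManager.1")
Set spmGroup = spmMgr.CreatePropertyGroup("WebStats", LockSetGet, Process, bExists)
Set spmHits = spmGroup.CreateProperty("HitCount", bExists)
spmHits.Value = spmHits.Value + 1   ' the Shared Property Manager synchronizes concurrent access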

Microsoft Distributed Transaction Coordinator

Microsoft Distributed Transaction Coordinator is a system service that coordinates transactions that span multiple resource managers. Work can be committed as an atomic transaction even if it spans multiple resource managers on potentially separate machines.

Microsoft Distributed Transaction Coordinator was first released as part of Microsoft SQL Server 6.5 and is included in MTS. It implements a two-phase commit protocol that ensures that the transaction outcome (either commit or abort) is consistent across all resource managers involved in a transaction. The Microsoft Distributed Transaction Coordinator ensures atomicity regardless of failures (a node crash, a network failure, or a misbehaving resource manager or application), race conditions (a transaction that starts to commit while one resource manager initiates an abort), or availability problems (a resource manager that prepares a transaction but never returns).
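
As a hedged sketch of what this means to the component author, the following Visual Basic method updates two hypothetical ODBC data sources, which could reside on separate servers, within one MTS transaction; the data source names, tables, and method are assumptions for illustration. The component only votes on the outcome, while MTS and the Microsoft Distributed Transaction Coordinator drive the two-phase commit so that both updates commit or neither does:

Public Sub ShipAndBill(ByVal OrderID As Long)
    Dim ctx As ObjectContext
    Dim connShip As Object, connBill As Object
    Set ctx = GetObjectContext()

    On Error GoTo ErrHandler
    Set connShip = CreateObject("ADODB.Connection")
    Set connBill = CreateObject("ADODB.Connection")
    connShip.Open "DSN=Shipping"   ' enlisted on this object's transaction
    connBill.Open "DSN=Billing"    ' enlisted on the same transaction
    connShip.Execute "INSERT INTO Shipments (OrderID) VALUES (" & OrderID & ")"
    connBill.Execute "INSERT INTO Invoices (OrderID) VALUES (" & OrderID & ")"
    connShip.Close
    connBill.Close
    ctx.SetComplete                ' the Distributed Transaction Coordinator commits both updates
    Exit Sub

ErrHandler:
    ctx.SetAbort                   ' both updates are rolled back
End Sub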

The Microsoft Distributed Transaction Coordinator supports resource managers that implement either the OLE Transactions protocol or the X/Open XA protocol.

Conclusion

The Microsoft Transaction Server will change the way people develop business applications. The combination of component-based, object-oriented technologies with time-proven techniques for distributed, online transaction processing will allow the easy deployment of applications composed of purchased and custom-built components. The economic advantages will create a new marketplace for business components, which in turn will make business solutions affordable where they previously were not.

Microsoft Transaction Server has rolled out in two phases. The Distributed Transaction Coordinator shipped first, in April 1996, as part of Microsoft SQL Server version 6.5. This technology provides distributed two-phase commit across heterogeneous data stores.

The Microsoft Transaction Server itself shipped in December 1996. It provides the programming model and run-time execution environment for running ActiveX components in a reliable, scalable, and distributed fashion.

For More Information

For the latest information on Microsoft Transaction Server, see the Microsoft Transaction Server Web site (http://www.microsoft.com/transaction/).

Also, refer to Transaction Processing: Concepts and Techniques by Jim Gray and Andreas Reuter; Morgan Kaufmann Publishers, 1993.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.