Database Architectures (Part 2)

Introduction

This two-part article gives readers a better understanding of the various database architectures available for application development. In part one, I described the personal and workgroup database architectures. In part two, I describe the enterprise and mainframe architectures. In addition, I have included formal guidelines and checklists to help you select the correct architecture for a particular application. If you have any questions on selecting the correct database architecture for an application, please feel free to contact me at christopher.foot@alcoa.com.

As I stated in part one, choosing the correct architecture (hardware platform, operating system, database) is critical to the success of any new database application. This decision was simple when the mainframe was the only architecture available. But architecture selection is not as clear-cut now that enterprise, departmental and personal architectures as well as the mainframe are available. Application developers and end users have more hardware and software choices available to them than ever before. The key to success is choosing the right ones. This document does not favor one architecture over another; its intent is to help readers choose the architecture tier that best fits an individual application’s requirements. The final architecture decision should not be made in a vacuum or based on personal preference. It is important for all large IT shops to create a part-time architecture selection team consisting of members drawn from the following areas: LAN/WAN communications, O/S support, architectures and database administration. This team should be responsible for determining the best architecture for a particular application.

Document Contents

The major focus of this two-part article is database architecture tiers. It contains an in-depth description of each of the four database architecture tiers (mainframe, enterprise, departmental, personal) followed by several pages of database architecture tier selection criteria. The information regarding each tier will be broken up into the following categories:

  • General description of the architecture, including hardware and software
  • Types of applications that fit the architecture being described
  • Benefits and drawbacks
  • Detailed discussions of cost, expected performance, reliability and ease of use, each with an evaluation rating from 0 to 10 (10 being the most favorable) for the architecture being discussed

Attachments

The following attachments are included:

  • Attachment 1 contains additional architecture evaluation criteria
  • Attachment 2 contains a worksheet used to help determine the best architecture for a given application
  • Attachment 3 contains a description of the Transaction Processing Performance Council benchmarks
  • Attachment 4 contains a list of acronyms used in this document

TPC Benchmarks

I make numerous references to the TPC (Transaction Processing Performance Council) and TPC benchmarks. The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. For more information on the TPC and the TPC-C and TPC-D benchmarks, please refer to Attachment 3.

Enterprise Tier

Description

The enterprise and mainframe tiers are the most mature, reliable environments currently available for application development. Unlike the departmental tier, which has only recently become popular, enterprise architectures have been the perennial favorites of shops desiring to build non-mainframe applications. Enterprise architectures were the only alternative to mainframes before the departmental tier evolved into a viable solution. Sun, HP, Data General and Sequent were competing for the corporate consumer’s dollar years before the client/server revolution popularized non-mainframe architectures. As a result, there is a large number of third party tools and applications available for enterprise architectures. In addition, database vendor competition in this tier is fierce, resulting in database products that offer a large number of performance options (parallel query, parallel load, bit-mapped indexes, asynchronous read ahead) and database options (advanced text search, spatial data management, video/audio storage). The enterprise server tier leads all other tiers in database software options and add-ons available to consumers. Because of this competition, the majority of new product enhancements are released for this tier first, making this tier the most technically advanced of the four. Performance and reliability are the key advantages of this environment, while cost and complex administration are the drawbacks.

Hardware

  • 4 to 64 CPUs
  • 200 megabytes to 4 gigabytes (and up) of RAM
  • 2 gigabytes to 750 gigabytes of DASD (some configurations into terabytes)
  • Typical Vendor products – IBM, HP, Sun, Sequent

Operating System

  • Graphical User Interfaces only recently becoming available from vendors
  • Pre-emptive multi-tasking (architected for multiple, concurrent users)
  • Typical vendor products – Microsoft NT Server (debatable), UNIX variations

Database Software

  • Runs as multiple processes on the operating system
  • Graphical User Interface
  • Typical vendor products – Oracle Enterprise, Sybase SQL Server, DB2

Benefits/Drawbacks

  • Hardware
+ Architected for high availability and high performance
+ Good I/O throughput
+ Large number of third party hardware options and add-ons available
+ Robust upgrade path
– High initial startup costs
– Complex to upgrade and administer
– Enterprise-wide administrative tool sets are somewhat immature at this time
– Vendor support varies widely according to platform chosen
  • Operating System
+ Architected for high availability and high performance
+ Leads all operating system tiers for third party applications
+ Robust upgrade path
– Third party software can be expensive
– Complex to administer, upgrade and install third party software
  • Database Software
+ Easy to install, maintain, upgrade and use third party applications
+ Enterprise wide administrative tool sets are available (enterprise database managers)
+ Leads all database tiers for third party database applications
+ Robust set of performance options (parallel query, load, bit mapped indexes, asynchronous read ahead) available from vendors
+ Robust set of database options available (web access, text search, spatial data management, video/audio storage and retrieval)
– Can become expensive
  • Database Environment (hardware, operating system, database software viewed as a single entity)
+ Somewhat easy to use, maintain, upgrade and install third party applications
+ Multiple application environment limits the number of environment implementations (limited number of operating systems, hardware platforms and databases to be supported)
+ Large number of performance options and database add-ons available
+ Disaster recovery options available
– Can become costly
– Complex to administer

Typical Applications

  • Multi-user database with multiple query and update sources is acceptable
  • Number of concurrent users ranging from 5 or 6 to hundreds (depending on database access)
  • Architecture supports both moderate/heavy OLTP as well as moderate/heavy decision support applications

Numerical Ratings (0 – least favorable, 10 – most favorable)

  • Cost (Rating 5)
Hardware, operating system and database costs are oftentimes higher in this environment than other architecture tiers. Enterprise servers offer high performance, high availability and the largest number of database performance options and database add-ons. Advanced functionality, performance and availability options are built into the hardware, database and operating system making them more expensive than their departmental and personal counterparts. In addition, enterprise applications are unable to take advantage of the economies of scale that the mainframe architecture offers, resulting in somewhat higher support charges.
  • Performance (Rating 8)
Vendor competition is the most fierce in this tier. Performance is a key selling point for vendors in this environment and as a result, enterprise tier hardware, operating systems and database software are all architected for high performance. Enterprise servers have the capability of supporting hundreds of gigabytes of disk and can certainly support many hundreds of users, but they do reach a limit of performance capability. There is no magical number of concurrent users that determines when to use enterprise servers versus mainframe servers. The cutoff point can only be determined by an in-depth review of the application, its processing requirements and the user’s performance expectations.
Overall architecture performance must be evaluated when comparing enterprise hardware platforms. One system may have RISC chips that outperform a competitor’s Pentium chips, but the Pentium box may have a faster I/O architecture. Platform scalability is a fundamental performance selling point for enterprise hardware vendors. We must understand that scalability is not totally dependent upon the hardware platform itself, but dependent on the combination of hardware platform, operating system and database.
UNIX operating systems are the performance leaders of enterprise operating systems. Until recently, all enterprise tier TPC-C and TPC-D benchmark records were held by hardware platforms running some flavor of the UNIX operating system. NT is just beginning to challenge UNIX’s position as performance leader, recently capturing several TPC-C and TPC-D benchmarks. But NT Server’s well known scalability problems continue to plague UNIX’s only competitor. Initial reports of the latest release of NT Server (NT Server 5.0) indicate that scalability may have improved beyond its previous maximum of 4 processors. Developers trying to determine which operating system best fits their application must determine the number of processors the application will require. With Intel P-5 chip speeds reaching 200 MHZ and P-6 chip speeds running at 250 MHZ to 300 MHZ, is scalability over 8 to 12 chips required?
The key to a high performing enterprise database is to make sure that a sufficient amount of CPU and memory is available for its use. All of the leading database vendors in this tier (DB2/6000, Oracle, Sybase) are able to address multiple gigabytes of memory as well as spread the processing workload among available chips. As stated previously, database vendor competition in this tier is fierce, resulting in database products that have a large number of performance options available.
  • Reliability (Rating 9)
While performance and TPC benchmarks are the traditional battleground for departmental tier vendors, performance and reliability must share the spotlight in the enterprise server market arena. Enterprise hardware platforms, operating systems and databases must be fully capable of supporting mission critical applications if they are to be viable competitors for the corporate consumer’s business. Although not as easily measured as performance, reliability must be considered when evaluating enterprise vendors. One method of evaluating an enterprise tier product’s reliability is to ask the vendor to provide a list of satisfied customers running mission critical 24 X 7 applications.
Enterprise hardware platform vendors offer varying degrees of system availability. These degrees of availability range from mirrored disks to double (or triple) redundant clustering architectures that can be repaired or replaced without bringing the entire system off-line. But as the reliability of a system increases, so does its cost. The bottom line is – the more reliable a platform is, the more expensive it is. The challenge that consumers face is balancing systems availability with total systems cost. The first priority for a business unit is to determine the cost of downtime and use it to set a budget for building the hardware platform. Business units must determine how much customer goodwill and money will be lost if the system is off-line. The business unit can then use that information to determine the level of availability the application requires. The least expensive option can mean downtime of possibly hours (perhaps many) under certain circumstances. At the other extreme, fault-tolerant systems can provide applications with true 24 X 7 availability but with costs that can range from six to seven figures. If high availability is a requirement, the mainframe must be considered as an alternative. The mainframe architecture is able to offer applications a high degree of availability at a reasonable cost.
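The downtime-budgeting exercise described above can be sketched with a little arithmetic. All of the figures below (platform prices, availability levels, hourly cost of downtime) are invented assumptions for illustration, not vendor numbers:

```python
# Illustrative sketch: balance platform cost against the expected cost of
# downtime, so the business unit can pick the availability level it can afford.
# All figures are hypothetical.

HOURS_PER_YEAR = 24 * 365  # 8760

# Candidate configurations: (platform cost, expected availability %)
platforms = {
    "mirrored disks":        (250_000, 99.5),
    "redundant cluster":     (600_000, 99.9),
    "fault-tolerant system": (1_500_000, 99.99),
}

def total_annual_cost(platform_cost, availability_pct, downtime_cost_per_hour,
                      years=3):
    """Amortized platform cost plus expected yearly downtime losses."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability_pct / 100.0)
    return platform_cost / years + downtime_hours * downtime_cost_per_hour

# If downtime costs the business $10,000 per hour, which option is cheapest?
costs = {name: total_annual_cost(price, avail, 10_000)
         for name, (price, avail) in platforms.items()}
best = min(costs, key=costs.get)
print(best, round(costs[best]))
```

Note how the answer flips as the hourly cost of downtime changes: a business that loses little per hour off-line should buy the cheapest platform, while one that loses heavily can justify the fault-tolerant system.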
UNIX is the current operating system of choice for most enterprise server vendors. The reason for this market leadership is performance as well as reliability. But the reliability of NT versus UNIX is still very much an issue with corporate consumers. NT’s use for mission critical applications is still relatively unproven when compared to the different variations of the UNIX operating system. Six-year-old NT is years behind twenty-year-old UNIX in supporting mission critical 24 X 7 applications. As a result, the majority of consumers rarely consider NT to be a viable alternative in this arena. This lack of faith will undoubtedly change as the NT operating system matures.
All enterprise database vendors must stress reliability to be competitive in the enterprise marketplace. In addition, many offer additional products that take advantage of UNIX clustering servers, also known as Asymmetric Highly Available Servers. Asymmetric HA consists of two identical servers: one is the primary, or live, system through which users access data and services; the other, the secondary or backup server, closely monitors the operations of the primary and automatically takes over the role of primary in the event of a failure. Database vendors are able to provide consumers with automatic switch-over from the primary server to the secondary when a failure occurs.
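The monitor-and-take-over behavior described above can be sketched as a small simulation. This is an illustrative sketch only, not any vendor’s actual failover product or API; the class and method names are my own:

```python
# Illustrative simulation of an asymmetric HA pair: the secondary polls the
# primary's heartbeat and promotes itself after several missed heartbeats.

class Server:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.role = "secondary"

class HAPair:
    def __init__(self, primary, secondary, missed_limit=3):
        primary.role = "primary"
        self.primary = primary
        self.secondary = secondary
        self.missed_limit = missed_limit
        self.missed = 0

    def heartbeat(self):
        """One monitoring cycle: the secondary checks on the primary."""
        if self.primary.alive:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= self.missed_limit:
                self.fail_over()

    def fail_over(self):
        """Automatic switch-over: the secondary assumes the primary role."""
        self.primary.role = "failed"
        self.secondary.role = "primary"
        self.primary, self.secondary = self.secondary, self.primary
        self.missed = 0

pair = HAPair(Server("db-a"), Server("db-b"))
pair.primary.alive = False        # simulate a crash of the live system
for _ in range(3):
    pair.heartbeat()              # three missed heartbeats trigger failover
print(pair.primary.name)          # db-b is now the live system
```

Real products layer database concerns on top of this loop (redirecting client connections, rolling back in-flight transactions), but the heartbeat-and-promote cycle is the core idea.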
  • Ease of use (Rating 6)
UNIX, unlike its major competitor (NT Server), is well known to be somewhat complex to administer. There are over 20 different flavors of UNIX being offered by a variety of hardware and operating system vendors. Although alike in many ways, each UNIX operating system has its own variations of system commands, administrative processes and procedures. This complicates application development and administration for shops that run different versions of the UNIX operating system. Developers are unable to make a seamless transition from one environment to another and administrators must be trained in each operating system to effectively administer it. Some flavors of UNIX have GUI administrative tools, but the majority of administrative duties are still performed via command line interface using rather cryptic operating system commands (grep, awk, ps -ef, cat).
All popular enterprise databases are administered through a point and click Graphical User Interface. Installation is more complex for enterprise databases than their departmental counterparts, but this additional complexity can be attributed to the operating system being used and not the database product itself.

Mainframe Tier

Description

Mainframes are in the process of being transformed from monolithic, character-oriented, centralized servers to up-to-date, open and highly flexible client/server systems. There is no doubt that the dramatic growth of enterprise, departmental and personal architectures is playing an enormous role in causing (as well as accelerating) these changes. Mainframe hardware and software vendors are reducing prices and accelerating the level of product enhancements to prevent competitors from eroding their market share. The inability to access mainframe data via Graphical User Interface is no longer an issue. Advances in middleware technology are providing mainframe users with graphical development environments that lend themselves to rapid application prototyping and development. Although it is clear that NT and UNIX servers are capturing an increasingly larger share of the computing market, neither platform will entirely overtake the scalability and reliable performance of the mainframe. In addition, IBM’s CMOS mainframes are much lower in both initial and support costs than their water-cooled counterparts. With the newly announced MVS/Open Edition, UNIX and NT applications will run natively, with all the performance, reliability and security inherent to the mainframe architecture. MVS/Open Edition will provide consumers with an attractive selection of operating systems (UNIX, NT, OS/390), databases (DB2, Oracle, Sybase, Informix) and third party applications from which to choose. The failure of high-end enterprise servers to displace mainframes on a combination of price/performance and manageability has ensured the continued growth of the mainframe architecture. Mainframe performance and reliability are (and will be for the foreseeable future) the standards by which vendors in other tiers measure themselves.

Hardware

  • 1 to 10 central processors
  • 100s – 1000s of megabytes of central and expanded memory available
  • Gigabytes to terabytes of DASD
  • Typical Vendor products – IBM 9000

Operating System

  • No Graphical User Interface
  • Typical vendor products – IBM OS/390

Database Software

  • Runs as multiple processes on the operating system
  • No Graphical User Interface
  • Typical vendor products – IBM DB2

Benefits/Drawbacks

  • Hardware
+ Architected for high availability and high performance (standard by which other architectures are measured)
+ Large number of third party hardware options and add-ons available
+ Robust upgrade path
+ Good vendor support
+ Disaster recovery options available
– High initial hardware purchase costs
– High environmental costs (less for CMOS, more for water-cooled)
– Multiple applications can be affected by hardware failures (Sysplex will correct this)
  • Operating System
+ Architected for high availability and high performance (standard by which other architectures are measured)
+ Able to support thousands of concurrent users
+ Large number of third party applications available
+ Good vendor support
+ Excellent security mechanisms
– Complex to install, administer and upgrade third party software
– Third party applications and tools can be very costly
  • Database Software
+ Architected for high availability
+ Tight integration with operating system and transaction monitors
+ Large number of administrative tool sets are available
+ Large number of third party database applications available
– Somewhat inflexible
– Complex to administer
– DB2 lags behind enterprise tier competitors (Oracle, Sybase) in advanced performance options (parallel query, bit map indexing)
– DB2 lags behind enterprise tier competitors (Oracle, Sybase) in advanced database options (web access, text search, spatial data management, video/audio storage and retrieval)
– Oracle unable to take advantage of large database buffers
– DB2 known for poorly performing locking mechanisms
– DB2 requires that data be kept in sorted (clustering) order for good performance
  • Database Environment (hardware, operating system, database software viewed as a single entity)
+ Multiple application environment limits the number of environment implementations (limited number of operating systems, hardware platforms and databases to be supported)
+ Lower startup costs. Mainframe applications are usually charged on a monthly basis for resource utilization as opposed to other architectures which require that hardware and software be purchased up front.
+ Provides better security, reliability and availability than all other architectures.
+ Able to support a large number of concurrent users
– Complex to administer
– Large number of support units sometimes complicates communications
– Limited number of database performance options and database add-ons available
– Memory available to applications is shared by all users

Typical Applications

  • Multi-user database with multiple query and update sources is acceptable
  • Number of concurrent users ranging from hundreds to thousands
  • Architecture easily supports heavy batch, OLTP and DSS applications
  • Applications that require moderate/heavy communication with existing mainframe applications

Numerical Ratings (0 – least favorable, 10 – most favorable)

  • Cost (Rating 8)
The initial startup costs for developing applications on the mainframe are lower than for applications built on other architecture tiers. Business units building mainframe applications do not have to purchase hardware and database licenses before starting development. Mainframe applications are charged monthly for the resources (CPU, DASD) they consume. Monthly IT support costs for the mainframe architecture tier are usually lower than for enterprise architectures, but higher than for departmental architectures. Applications running on enterprise architectures are unable to take advantage of the economies of scale that the mainframe architecture offers, resulting in higher support charges. Departmental applications are sometimes owned and administered entirely by the business unit and are not charged monthly IT support fees.
  • Performance (Rating 8)
Having a single vendor manufacture the operating system, hardware platform, database and transaction monitor allows the product interfaces to be tightly integrated for high performance. As a result, mainframe architectures can easily accommodate up to thousands of concurrent users as well as multiple terabytes of stored data. Mainframe hardware platforms lead most other architectures in performance and scalability.
Memory constraints can have a negative impact on some mainframe applications. Mainframe memory is costly and shared by all applications, while enterprise and departmental architectures are able to offer applications hundreds of megabytes of memory at low cost. Typical mainframe DB2 applications must share a limited amount of database buffers. The key to database performance is using memory to decrease the number of disk accesses. Although OLTP applications do not normally access large amounts of data, most decision support applications do, and as the volume of data being accessed increases, so does the impact additional memory has on database performance. The more memory that is allocated to the database buffers, the less disk access is required, and memory access is much faster than the mechanical operations required by a DASD device. Under some circumstances, large memory allocations allow enterprise and departmental servers to compete with the much larger mainframe systems.
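The memory-versus-disk trade-off described above can be made concrete with the standard effective-access-time calculation. The latency figures below are illustrative assumptions, not vendor measurements:

```python
# Illustrative sketch: effective page access time falls sharply as the buffer
# hit ratio rises, because a memory hit is orders of magnitude faster than a
# mechanical DASD read. Latency figures are assumptions for illustration.

def effective_access_ms(hit_ratio, memory_ms=0.001, disk_ms=10.0):
    """Average time to fetch a page given the fraction served from buffers."""
    return hit_ratio * memory_ms + (1 - hit_ratio) * disk_ms

for hit_ratio in (0.50, 0.90, 0.99):
    print(f"hit ratio {hit_ratio:.0%}: {effective_access_ms(hit_ratio):.4f} ms/page")
```

Moving the hit ratio from 50% to 99% cuts the average access time by roughly a factor of ten under these assumptions, which is why generous buffer memory lets smaller servers compete with larger ones on data-intensive workloads.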
OS/390 has a stellar reputation as a high performance operating system. It is widely known that OS/390 has the capability to handle an enormous number of concurrent user requests from a multitude of different applications. In addition, robust online transaction monitors (CICS, IMS, etc.) are available to help coordinate the concurrent user process workload. Although a few database vendors have products that can serve as transaction processing surrogates (Oracle Multi-Threaded Server is an example), these products do not scale well enough to handle large, corporate-wide OLTP applications.
The mainframe database of choice, DB2, is also architected to accommodate a large number of concurrent users. DB2’s tight integration with CICS and OS/390 make it the database of choice for mainframe applications. Although DB2 does lag behind other vendors in performance enhancements (sequence generators, low CPU locks, etc.) the mainframe environment is still able to provide a high level of performance for the majority of database applications. In addition, DB2 data sharing, recently available in DB2 Version 4.1, increases the performance capabilities of DB2 by allowing applications running on more than one DB2 subsystem to read and write to the same set of data concurrently. A single application can now run on multiple DB2 subsystems and therefore multiple hardware platforms. This increases the total potential capacity of mainframe architectures as database servers by putting more processors in between the application programs and the data.
  • Reliability (Rating 10)
Recent Gartner Group studies have shown that a majority of organizations are unwilling to risk their business processes on new technologies, no matter how promising they may appear. 80% of the respondents surveyed stated that the risk new technologies add to the business was the primary reason mission critical applications were kept on mainframe architectures. A study by the Information Technology Group, a consulting company based in Los Gatos, California, reinforces these findings, stating that mainframe users can expect an average up-time of 99.6%, while LAN based applications can expect an average up-time of less than 92%. The risk of moving to a non-mainframe architecture to achieve additional functionality may not be worthwhile for core applications.
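The up-time figures quoted from the Information Technology Group study translate into strikingly different amounts of annual downtime. A quick conversion makes the gap concrete:

```python
# Convert the up-time percentages quoted above into hours of downtime per
# year: 99.6% and 92% sound close until expressed in hours.

HOURS_PER_YEAR = 24 * 365  # 8760

def downtime_hours_per_year(uptime_pct):
    """Expected hours per year the system is unavailable."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100.0)

print(f"mainframe at 99.6% up-time: {downtime_hours_per_year(99.6):.0f} hours down per year")
print(f"LAN-based at 92.0% up-time: {downtime_hours_per_year(92.0):.0f} hours down per year")
```

That is roughly 35 hours per year for the mainframe versus about 700 hours (nearly a month) for the LAN-based figure.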
  • Ease of use (Rating 6)
Mainframes lag behind other architectures in ease of use. Business areas must deal with a number of different support units to accomplish a given task and this tends to separate them from the technology. A successful mainframe environment is run by consensus of opinion. Mainframe support units must place the needs and wants of the entire organization above the needs and wants of individual applications. Prioritization of job scheduling and processing, systems configuration and the application and system software used is determined by central IT management. Business units often have a larger share of ownership in non-mainframe architectures and as a result are able to participate more in the application of the technologies. Non-mainframe architectures can be more easily tailored to provide the specific processing environment the business area requires.

Attachment 1 – Architecture Tiers Comparison Table

The table below can be used to quickly compare the different architecture tiers. Each architecture is graded from 0 to 10, with 10 being the most favorable grade; a rating of 0 means that the architecture is unable to meet that criterion.

The importance of a particular criterion depends on the application being evaluated. For example, is the ability to use a transaction monitor important to a departmental application that has 60 concurrent users? Probably not. Is it important to an application that has hundreds of concurrent users? Probably so. Each criterion must be judged on a case-by-case basis for the application being evaluated. During evaluation, rank the criteria that are important to the success of the application higher than the others.
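The ranking exercise described above can be sketched as a simple weighted score: multiply each tier’s grade by the weight the application places on that criterion, then total the results. The grades below are taken from a few rows of the comparison table; the criteria subset and the weights are hypothetical choices for illustration:

```python
# Hypothetical sketch of weighted architecture scoring. Grades come from the
# comparison table; the weights are invented for an example application.

grades = {  # criterion -> (personal, departmental, enterprise, mainframe)
    "concurrent users": (1, 5, 8, 10),
    "reliability":      (2, 6, 8, 10),
    "initial cost":     (9, 7, 5, 9),
    "ease of use":      (10, 8, 6, 6),
}

# Example weights for an application that prizes low cost and ease of use.
weights = {"concurrent users": 2, "reliability": 2,
           "initial cost": 3, "ease of use": 3}

tiers = ("personal", "departmental", "enterprise", "mainframe")
totals = {tier: 0 for tier in tiers}
for criterion, row in grades.items():
    for tier, grade in zip(tiers, row):
        totals[tier] += weights[criterion] * grade

best = max(totals, key=totals.get)
print(totals, "->", best)
```

The tier with the highest weighted total is only a starting point for discussion; the architecture selection team should still review the result against the disqualifying requirements in the Attachment 2 worksheet.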

Criteria / Architecture | Personal | Departmental | Enterprise | Mainframe
--- | --- | --- | --- | ---
Ability to accommodate large number of concurrent users | 1 | 5 | 8 | 10
Ability to accommodate large amount of stored data | 1 | 7 | 8 | 10
Disaster recovery | 0 | 0 | 8 | 9
Database ratings | Access (7), Oracle (7) | Oracle (9), MS SQL Server (8), DB2 (8) | Oracle (9), MS SQL Server (6), DB2 (8) | DB2 (9)
Ease of use (overall) | 10 | 8 | 6 | 6
Ease of data access | 10 | 9 | 9 | 7
Ease of administration | 10 | 9 | 7 | 8
Flexibility | 9 | 9 | 7 | 7
Initial architecture cost (hardware, O/S, RDBMS) to user | 9 | 7 | 5 | 9
Number of third party applications available | 5 | 7 | 9 | 8
Number of third party tools available | 2 | 6 | 8 | 10
Operating system ratings | Win 3.1 (6), Win 95/97 (7), NT Client (9) | UNIX (8), NT Server (9), Netware (4) | UNIX (8), NT Server (5) | OS/390 (10)
Performance (overall) | 2 | 7 | 8 | 9
Performance (Online Transaction Processing) | 2 | 7 | 8 | 10
Performance (Decision Support) | 2 | 5 | 8 | 7
Reliability | 2 | 6 | 8 | 10
Security | 2 | 5 | 7 | 9
Scalability | 2 | 6 | 8 | 9
Transaction monitor ratings | 0 | 7 | 7 | 9
Vendor Support | 6 | 8 | 9 | 9

Attachment 2 – Database Architecture Worksheet

This worksheet can be used to help determine the optimal architecture for a given application. The final architecture decision should not be made in a vacuum or based on personal preference. As I stated previously, the safest and easiest way for any application to choose the correct environment is to enlist the help of an Application Architectures Team. This part-time team should contain individuals who have the expertise necessary to choose the correct architecture for any given application.

A check mark (✓) is placed under each architecture that generally meets the requirement listed on the left. A blank means that although the architecture may not be the one that best meets the requirement, it should still be considered a viable alternative. An ✗ under an architecture means that if the requirement being evaluated is truly important to the application, that architecture must no longer be considered a viable alternative and should be removed from the evaluation process. Add up the check marks for the requirements that are important to the success of the application to get a general ranking of the different architectures.

Requirement / Architecture | Personal | Departmental | Enterprise | Mainframe
--- | --- | --- | --- | ---
1 to 4 or 5 users with single update source | ✓ | ✓ | ✓ | ✓
2 to 40 or 50 concurrent users | ✗ | ✓ | ✓ | ✓
20 to 600-700 concurrent users | ✗ |  | ✓ | ✓
Thousands of concurrent users | ✗ | ✗ |  | ✓
MEGs to 1 GIG of DASD | ✓ | ✓ | ✓ | ✓
MEGs to 50 GIG of DASD |  | ✓ | ✓ | ✓
100s GIG of DASD | ✗ |  | ✓ | ✓
1000s GIG of DASD | ✗ | ✗ | ✓ | ✓
Extensive access to other mainframe applications |  |  |  | ✓
Heavy decision support | ✗ |  | ✓ | ✓
High performance/availability is major concern |  |  | ✓ | ✓
Initial costs primary factor | ✓ | ✓ |  | ✓
Application performance being impacted by month end | ✓ | ✓ | ✓ |
Large amount of batch processing required | ✗ | ✗ | ✓ | ✓
No. of third party tools available |  | ✓ | ✓ | ✓
Application wants to share administrative duties with IT | ✓ |  | ✗ | ✗
Low costs for database and third party applications | ✓ | ✓ |  |
Mission critical application | ✗ | UNIX ✓, NT blank | ✓ | ✓
Take advantage of leading edge database technology |  | ✓ | ✓ |
Require database support | ✗ | ✓ | ✓ | ✓
Require disaster recovery backup | ✗ |  | ✓ | ✓
Robust systems management | ✗ |  |  | ✓
Robust security required | ✗ |  |  | ✓
Scalability |  | ✓ | ✓ | ✓
Flexibility/overall ease of use | ✓ | ✓ |  |
Transaction monitor required | ✗ |  | ✓ | ✓

Attachment 3 – TPC (Transaction Processing Performance Council) Benchmarks

This document contains many references to the TPC (Transaction Processing Performance Council) when discussing database architecture tier performance. The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry.

Throughput, in TPC terms, is a measure of maximum sustained system performance. In TPC-C, throughput is defined as the number of new-order transactions per minute a system generates while it is also executing four other transaction types (payment, order-status, delivery, stock-level). In TPC-D, throughput is defined as the number of decision support transactions per minute a system generates while processing other decision support requests. In general, TPC benchmarks are system-wide benchmarks, encompassing almost all cost dimensions of the entire system environment the user might purchase, including terminals, communications equipment, software (transaction monitors and database software), the computer system or host, backup storage, and three years of maintenance. Depending on application requirements, either raw performance or price/performance may be the more important metric. If the application environment demands very high, mission-critical performance, then you must give more weight to the TPC’s throughput metric. Generally, the best TPC results combine high throughput with low price/performance. TPC-C and TPC-D results for many popular platforms are available from the Transaction Processing Performance Council’s home page at www.tpc.org.
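The price/performance metric mentioned above is simply total system cost divided by benchmark throughput. The two systems and all of their figures below are invented for illustration:

```python
# Illustrative sketch of TPC-style price/performance: total system cost
# (hardware, software and maintenance) divided by benchmark throughput.
# Both systems and all figures are hypothetical.

def price_performance(total_system_cost, throughput_tpm):
    """Dollars per transaction-per-minute: lower is better."""
    return total_system_cost / throughput_tpm

# A big high-throughput system versus a smaller, cheaper one.
system_a = price_performance(1_500_000, 20_000)   # large system
system_b = price_performance(400_000, 8_000)      # small system
print(system_a, system_b)
```

Here the smaller system wins on price/performance ($50 versus $75 per transaction-per-minute) even though the larger one wins on raw throughput, which is exactly the trade-off the text says must be weighed against application requirements.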

Attachment 4 – List of Acronyms

CICS – Customer Information Control System
CMOS – Complementary Metal Oxide Semiconductor
CPU – Central Processing Unit
DASD – Direct Access Storage Device
DSS – Decision Support System
GIG – Gigabyte
GUI – Graphical User Interface
I/O – Input/Output
LAN – Local Area Network
MEG – Megabyte
MHZ – Megahertz
ODBC – Open Database Connectivity
OLTP – Online Transaction Processing
O/S – Operating System
PC – Personal Computer
RAM – Random Access Memory
RDBMS – Relational Database Management System
RISC – Reduced Instruction Set Computing
SMP – Symmetric Multi-Processing
TPC – Transaction Processing Performance Council
TPC-C – TPC online transaction processing benchmark
TPC-D – TPC decision support benchmark
WAN – Wide Area Network


Chris Foot


Chris Foot is currently working for Contemporary Technologies as a certified Oracle trainer and remote database administration consultant. He has worked as a database administrator and distributed technology strategist for the Mellon Bank corporation and was the Senior Database and Server Architect for Alcoa. Chris has written several articles for Database Programming and Design, The Data Administration Newsletter and Data Management Review. Chris has also worked part-time for Platinum Technology as a client/server courseware creator and certification instructor. In addition, Chris has presented several times at the International DB2 Users Group, International Oracle Users Group and the Open Technology Forum. 
