By Kai Kaltenbach
ABSTRACT: A primer on SCSI interfaces in Microsoft Windows 95 and Windows NT, this article lists and explains the basic facts about SCSI: host adapters, bandwidth use in various configurations, hard drives, devices in general, cabling, and drive array strategies. A final section lists other sources of information.
The industry debate over SCSI versus IDE/ATAPI interfaces has been going on for some time. Rather than attempt to address SCSI versus IDE performance issues, this article focuses on the concrete differences between SCSI and IDE/ATAPI that tend to make SCSI the corporate business choice for servers and many power user workstations. Those differences are examined in the sections that follow.
Here is a brief overview of the SCSI standards to date. See the Reference Documents section for information on the specifications themselves.
Data Transfer Rates
Interface Standard      Data Transfer Rate
SCSI-1                  5 MB/sec
SCSI-2 Narrow           Under 5 MB/sec asynchronous; 5 or 10 MB/sec synchronous ("Fast SCSI-2")
SCSI-2 Wide (16-bit)    10 MB/sec asynchronous; 20 MB/sec synchronous ("Fast-Wide SCSI-2")
SCSI-2 Wide (32-bit)    Specified but never implemented
SCSI-3                  Various transfer rates, up to and including Fibre Channel at 100 MB/sec
UltraSCSI               20 MB/sec Narrow ("Fast-20"), 40 MB/sec Wide, 100 MB/sec Fibre Channel
Cabling Types
Cable                    Description
SCSI-1 External          50-pin Centronics connector (sometimes 25-pin DIN is used)
SCSI-1/SCSI-2 Internal   50-pin ribbon cable
SCSI-2 External          50-pin high-density mini-DIN connector
SCSI-2 Wide Internal     68-pin flat or 34-signal twisted cable
SCSI-2 Wide External     68-pin high-density mini-DIN connector
The SCSI host adapter should ideally be Plug and Play compatible. The Microsoft Hardware Design Guide for Windows 95 lists the specific Plug and Play capabilities the adapter should support.
Plug and Play is not currently a consideration for Windows NT, although it will be in the future.
The SCSI host adapter market has generally settled down to supporting two types of data transfers: PIO (Programmed Input/Output, sometimes known as Processor I/O) and Bus Mastering DMA (Direct Memory Access).
Early AT hard disk systems, and most of today's IDE disk systems, use PIO transfers, wherein the disk controller places a block of data (from 512 bytes to 64 K) into a transfer location in low memory, and the processor moves the data to its destination. This process is relatively inefficient and consumes a lot of processor time, so it is said to have high CPU overhead.
Bus-mastering support allows the host adapter to take over the system bus, and move data into or out of system memory directly. All the CPU has to do is program the operation, and the host adapter takes it from there.
When used with a relatively slow SCSI device, such as a CD-ROM drive that transfers only 300 to 600 KB/sec, a benchmark might not show a performance difference between a PIO host adapter and a bus-mastering host adapter; the raw data transfer rate may be the same. But the difference can be dramatic when the test uses CPU time and disk access simultaneously. A good example is an AVI video file: to play an AVI movie, the system not only has to read the file from the disk, but the CPU also has to decompress and display the video on the fly. SCSI CD-ROM benchmarks using AVI or MPEG video files show a considerable difference (in dropped video frames, for example) between PIO and bus-mastering SCSI host adapters.
Another example would be the Windows NT SQL database server, which requires simultaneous CPU and disk performance to service many queries. If disk access is taking a lot of CPU time, the system has fewer resources available to process the CPU-intensive portion of the database query. This makes the DMA bus-mastering feature critical for balanced performance.
Note: Do not use an ISA bus-mastering SCSI host adapter in a machine with more than 16 MB of RAM. The 16-bit ISA bus can perform DMA only to memory locations under 16 MB. Windows provides a special buffering layer that allows bus-mastering ISA adapters to work in machines with more than 16 MB, but it greatly reduces potential performance. In machines with more than 16 MB of RAM, EISA/PCI/VLB adapters should always be used.
SCSI is a language, just like any other computer language. As an interface between the computer's bus and the SCSI bus, the host adapter must process a lot of SCSI commands and move a lot of data.
While some host adapter processors are proprietary chips, many host adapters use standard processors, such as the Motorola 68k series. It's not uncommon to see a high-performance modern SCSI host adapter with more processing power than a typical Macintosh computer of just a few years ago. Many use RISC processors.
Some host adapter manufacturers differentiate their models by processor speed. For example, one large host adapter manufacturer offers the same adapter with either the Motorola 68000 chip (as the standard model) or the 68020 chip (as the high-performance model).
Host adapter processor speed does not usually affect the speed of a single SCSI device in your system; rather, it affects system performance as a whole. In other words, if the system hard disk physically delivers a sustained data transfer rate of 2.5 MB/sec, a faster host adapter isn't magically going to make it transfer data faster. But a fast host adapter processor can significantly reduce SCSI command overhead when there are many accesses to multiple devices.
Host adapter processor speed is more significant in a multi-user system with multiple devices accessed simultaneously than in a single-user system.
The SCSI host adapter used to run Windows 95 or Windows NT must be on the appropriate Windows 95 or Windows NT Hardware Compatibility List (HCL), as must SCSI devices other than hard disks (such as scanners and tape drives).
The host adapter vendor may also bundle, or sell separately, other software drivers or utilities for Windows 95 or Windows NT. One example of a useful tool is a utility for viewing the devices on the SCSI bus. Manufacturers handle support issues for the drivers and utilities they provide.
Ideally, the SCSI host adapter should use the newer, high-density shielded 50-pin SCSI-2 external connector, or the high-density 68-pin Wide SCSI connector, rather than the full-size 50-pin SCSI-1 Centronics connector.
The SCSI bus must be terminated at both ends. If there are only internal devices, the host adapter terminates one end of the bus and the last internal device on the end of the internal cable terminates the other.
Auto termination is important if there is an external SCSI device that is frequently added and removed. Without auto termination, each addition or removal of the external device requires opening the computer case and adding or removing resistor banks and/or changing a switch or jumper on the host adapter board. Auto termination automatically senses the presence of an external SCSI device and adjusts the host adapter termination accordingly. Most auto-terminating host adapters support auto termination only at power-on; others support auto termination at any time and dynamically scan the SCSI bus for new devices.
SCAM stands for SCSI Configured AutoMatically. (This used to be known as SCSI Configured AutoMagically. See the Reference Documents section at the end of this article for more information.)
SCAM allows the host adapter BIOS and/or the driver software to assign SCSI ID numbers to devices on the SCSI bus automatically. Without SCAM, you manually have to configure all SCSI devices with SCSI ID numbers that do not conflict. Every device uses a different method for setting the ID number. Some external devices have SCSI ID thumbwheels, while most hard disks and CD-ROM drives use pin jumpers. In every case, you must consult the device's documentation to set the ID number correctly, and be aware at all times of the device numbers used by all other devices in the system. SCAM makes this process completely transparent and hassle-free, but there are not many SCAM-compatible devices or host adapters on the market yet.
Since AT-compatible systems do not directly support SCSI, booting from a SCSI device requires enabling the host adapter BIOS. The host adapter should allow you to disable the BIOS if you don't need it.
If you are running a Windows 95 system that runs large MS-DOS applications, investigate the memory footprint of the host adapter BIOS. The host adapter BIOS occupies a footprint in the upper memory area (between 640 K and 1024 K, specifically C800 through EFFF in shortened hex notation). Since this area also supplies the Upper Memory Blocks (UMBs) used to load programs and drivers out of conventional MS-DOS memory, the size of the host adapter's footprint is important; footprints range from 4 K to 64 K.
The host adapter BIOS configuration method is another issue to investigate. The BIOS location has to be set so that it doesn't conflict with any other system devices. A Plug and Play host adapter sets the BIOS location automatically; non-P&P adapters require that you use DIP switches or jumpers to set the BIOS location, and that you know the location of any other option (adapter) ROMs in the system.
SCSI host adapter BIOS code usually goes through several revisions during the useful life of the adapter. New revisions may contain bug fixes or performance optimizations. The BIOS code on host adapters with a Flash BIOS can be updated electronically with a flash update program supplied by the vendor, either on a floppy or through the vendor's bulletin board system or Internet FTP site. Without a Flash BIOS, you need to get a new ROM chip from the adapter vendor, pull the card out of the machine and switch chips yourself. In other cases, you can return the host adapter to the vendor to have the change performed. Clearly a Flash BIOS is a useful convenience, although it adds cost and is still fairly rare among host adapter designs.
The first fundamental of SCSI performance is: eliminate bottlenecks wherever possible. The links in the performance chain are, in order: system memory, the system bus, the SCSI host adapter, the SCSI bus, and the SCSI devices themselves.
The SCSI performance chain runs only as fast as its slowest link. Maximum SCSI bandwidth is determined by whichever of these is lower: system bus bandwidth, SCSI bus bandwidth, or total SCSI device bandwidth.
The examples below ignore system memory and the host adapter. System memory can be ignored because the computer can run only as fast as its memory, no matter how fast the disk subsystem is. The SCSI host adapter can be ignored because ideally it runs at maximum system bus speed (as long as it is a well-designed, bus-mastering adapter).
The examples begin with a simple case, then grow more complex.
Example 1 uses an ISA bus, an ISA bus-mastering SCSI host adapter, a Fast SCSI-2 connection, and an average modern SCSI hard disk:
Example 1 - ISA bus system
Device                      Speed         Description
System Bus & Host Adapter   2.5 MB/sec    ISA, about 2.5 MB/sec
SCSI Bus                    10 MB/sec     Fast SCSI-2
SCSI Device                 4.5 MB/sec    Typical 5400 rpm modern GB hard disk
Here, the system bus is clearly the bottleneck. The next example uses a local bus system instead. Both VESA Local Bus and PCI Local Bus operate at approximately 132 MB/sec in burst mode, with a throughput of about 32 MB/sec.
Example 2 - Local bus system with single device

Device                      Speed         Description
System Bus & Host Adapter   32 MB/sec     PCI/VLB bus
SCSI Bus                    10 MB/sec     Fast SCSI-2
SCSI Device                 4.5 MB/sec    Typical 5400 rpm modern GB hard disk
In this example, with other components unchanged, the bottleneck becomes the hard disk itself. Notice what happens when three hard disks operate simultaneously on the same system:
Example 3 - Local bus system with multiple hard disks

Device                      Speed          Description
System Bus & Host Adapter   32 MB/sec      PCI/VLB bus
SCSI Bus                    10 MB/sec      Fast SCSI-2
SCSI Device 1               4.5 MB/sec     Hard disk #1
SCSI Device 2               4.5 MB/sec     Hard disk #2
SCSI Device 3               4.5 MB/sec     Hard disk #3
Total SCSI Devices          13.5 MB/sec    Total with simultaneous operation
SCSI allows multiple devices to operate simultaneously. In the above example, the total bandwidth required for simultaneous operation of all three hard disks exceeds the bandwidth of the SCSI bus itself, and the SCSI bus becomes the bottleneck. This is more common in multiuser systems, and can be avoided by going to Fast-Wide SCSI-2 devices and host adapter as shown below.
Example 4 - Fast-Wide SCSI-2 with multiple hard disks

Device                      Speed          Description
System Bus & Host Adapter   32 MB/sec      PCI/VLB bus
SCSI Bus                    20 MB/sec      Fast-Wide SCSI-2
SCSI Device 1               4.5 MB/sec     Hard disk #1
SCSI Device 2               4.5 MB/sec     Hard disk #2
SCSI Device 3               4.5 MB/sec     Hard disk #3
Total SCSI Devices          13.5 MB/sec    Total with simultaneous operation
Notice that the slowest link has again become the hard disks. This shows how important a balanced SCSI subsystem is to optimal performance.
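The bottleneck analysis in the examples above is easy to automate. The following Python sketch is purely illustrative (the function name and figures are this article's hypothetical examples, not part of any SCSI utility); it reports whichever link limits simultaneous throughput:

    # Illustrative sketch: effective throughput is limited by the slowest link.
    # All figures are in MB/sec; the device rates are the hypothetical 4.5 MB/sec
    # disks used in Examples 2 through 4.

    def effective_throughput(system_bus, scsi_bus, device_rates):
        """Return (bottleneck, MB/sec) for simultaneous operation of all devices."""
        links = {
            "system bus": system_bus,
            "SCSI bus": scsi_bus,
            "SCSI devices (total)": sum(device_rates),
        }
        bottleneck = min(links, key=links.get)
        return bottleneck, links[bottleneck]

    # Example 3: PCI/VLB (32), Fast SCSI-2 (10), three 4.5 MB/sec disks
    print(effective_throughput(32, 10, [4.5, 4.5, 4.5]))  # ('SCSI bus', 10)

    # Example 4: the same disks on Fast-Wide SCSI-2 (20)
    print(effective_throughput(32, 20, [4.5, 4.5, 4.5]))  # ('SCSI devices (total)', 13.5)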
To calculate the bandwidth required for a given system, add up the data transfer rates of every SCSI device that will operate simultaneously, then compare the total against the bandwidth of the SCSI bus and the system bus.
Data transfer rate is specified in several ways: MB/sec, megabits/sec (mb/sec; divide by 10 for a rough estimate of MB/sec), and megahertz (the same as megabits/sec).
If no existing single SCSI bus satisfies the bandwidth needs, multiple SCSI host adapters and/or multi-channel host adapters must be combined until total bandwidth is sufficient. For example, two Fast-Wide SCSI-2 adapters at 20 MB/sec bandwidth each add up to a total of 40 MB/sec of bandwidth.
Adapter bus type          Bandwidth (MB/sec)
PCMCIA (PC Card)          0.2-2
ISA non-bus-mastering     0.5-2
ISA bus-mastering         2.5-3
EISA bus-mastering        5-12
VLB or PCI local bus      32-132
Unfortunately for users planning a system, the computer industry tends to focus on a single performance measurement for a given variety of hardware: computer systems, for instance, are advertised (and bought) as having the fastest chip currently available, without regard to overall performance. In the same way, hard drives are advertised (and bought) solely on the basis of average head access time, the average number of milliseconds it takes for the drive heads to move from one track to another.
Specifying hard drives only by average head access time is like talking about a race car only in terms of cornering. Its true performance includes top speed, average speed, etc. Consider two major factors when determining hard drive performance: true data access time and data transfer rate.
To calculate a hard drive's true data access time, add the average latency to the drive's average head access time. Head access time measures only the time it takes for the disk head to get from one track to another. Once the head is repositioned, it has to wait for the desired data to come around on the disk platter. Latency is the average time required for the part of the disk that contains the desired data to come under the disk head and for the read process to begin. Average latency is determined by the platter's rotational speed: the faster the platters turn, the faster the required data comes under the disk heads.
Average Access Time + Average Latency = True Data Access Time
True data access time is a much better measurement of performance than average access time because it reflects the amount of time required for the drive to start reading the requested data, rather than just move the heads from one place to another.
The true data access time specification is not commonly used, mostly because, until recently, most hard disks had the same rotational speed (3600 rpm) and thus the same latency. Today's hard disks are available in several speeds, the most common being 3600 rpm, 4500 rpm, 5400 rpm and 7200 rpm.
Disk manufacturers haven't started using true data access time because the figure is higher than average head access time and makes the drive look slower-an image no manufacturer has yet been willing to create.
It is easy to calculate true data access time by examining the drive specifications and adding the numbers. The example below shows true data access times for two 2-GB SCSI hard drives that are identical except for rotational speeds:
Drive "A", 5400 rpm    Avg. Access 8.5 ms + Avg. Latency 5.56 ms = True Access 14.1 ms
Drive "B", 7200 rpm    Avg. Access 8.5 ms + Avg. Latency 4.17 ms = True Access 12.7 ms
The true data access times differ by more than 10%, although both of these drives are advertised simply as "2-GB 8.5ms."
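These figures are easy to verify. The short sketch below (illustrative only; the drive figures are the ones quoted above) derives average latency from the spindle speed and adds it to the published average access time:

    # Illustrative sketch: true data access time = average access time + average
    # latency, where average latency is half of one revolution:
    # (60,000 ms per minute / rpm) / 2.

    def true_access_ms(avg_access_ms, rpm):
        avg_latency_ms = (60000.0 / rpm) / 2.0
        return avg_access_ms + avg_latency_ms

    print(round(true_access_ms(8.5, 5400), 1))  # Drive "A": 14.1 ms (latency ~5.56 ms)
    print(round(true_access_ms(8.5, 7200), 1))  # Drive "B": 12.7 ms (latency ~4.17 ms)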
An equally important, and often overlooked, measurement of drive performance is the data transfer rate: how fast the drive actually transfers data from the platter onto the SCSI bus. As mentioned above, data transfer rate is specified in MB/sec, mb/sec, and megahertz (same as mb/sec). Although the rule of thumb for converting mb/sec to MB/sec is to divide by 10, some industry insiders say it is more accurate to divide by 8 and then multiply by 85%.
A drive actually has two data transfer rate specifications: minimum and maximum. Since hard disks use a single recording density, and constant angular velocity (CAV), the data transfer rate varies widely from the disk's inner tracks to its outer tracks. For example, one manufacturer's 2-GB SCSI drive delivers 34.5 mb/sec on the inner tracks, and 67.7 mb/sec on the outer tracks. Average transfer rate is the average of the minimum and maximum figures:
Average Transfer Rate = Minimum + (Maximum - Minimum) / 2
The average transfer rate for the above example is 34.5 + (67.7 - 34.5) / 2, or 51.1 mb/sec (approximately 5.1 MB/sec).
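The same arithmetic, together with the two rules of thumb for converting megabits to megabytes mentioned earlier, is shown in this illustrative sketch (the 34.5 and 67.7 mb/sec figures are the example drive's published minimum and maximum):

    # Illustrative sketch: average platter transfer rate and mb/sec-to-MB/sec conversion.

    def avg_transfer_mbit(minimum, maximum):
        return minimum + (maximum - minimum) / 2.0

    avg = avg_transfer_mbit(34.5, 67.7)
    print(avg)             # 51.1 mb/sec
    print(avg / 10)        # rough rule of thumb: about 5.1 MB/sec
    print(avg / 8 * 0.85)  # "divide by 8, multiply by 85%": about 5.4 MB/sec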
Data transfer rate is determined by two factors: recording density and rotational speed. Recording density is usually specified as kb/inch (kbpi). High-performance SCSI drives may have recording densities of 50-75 kbpi or higher. The amount of data transferred in a second can be calculated by the number of linear inches of disk platter that pass under the drive head in a second, times the recording density.
Data Transfer Rate = inches/second * kb/inch
The inches/second figure is determined by the diameter of the disk and the rotational speed (spindle speed) of the drive.
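As a rough illustration of that relationship (the track diameter and recording density below are assumed figures for a hypothetical 3.5-inch drive, not specifications for any particular product):

    # Illustrative sketch: data transfer rate = inches/second passing the head * kb/inch.
    import math

    def transfer_rate_kbit_per_sec(track_diameter_inches, rpm, density_kbpi):
        inches_per_second = math.pi * track_diameter_inches * (rpm / 60.0)
        return inches_per_second * density_kbpi

    # Assumed outer track of ~3.3 inches diameter, 5400 rpm, 65 kbpi recording density
    rate = transfer_rate_kbit_per_sec(3.3, 5400, 65)
    print(round(rate / 1000, 1), "mb/sec")  # roughly 60.6 mb/sec, about 6 MB/sec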
Data transfer rates vary widely, and two drives that appear otherwise similar may have radically different transfer rates. Given a particular drive capacity and form factor (such as 3.5-inch), the highest data transfer rates are generally on the drives with the highest spindle speed and the fewest platters (thus the highest recording density). For example, a 3.5-inch half-height (1.6-inch) 5400 rpm 1-GB drive probably has a faster data transfer rate than an otherwise equivalent 3.5-inch 1-inch height drive.
Mean Time Between Failure: Mean Time Between Failure, or MTBF, is the manufacturer's estimate of the drive's long-term reliability. Manufacturers tend to measure MTBF differently, but those that conform to ISO manufacturing standards generally use the same or similar methods. Evaluating the MTBF can help in purchasing decisions, but don't rely on it as an absolute. Drive array combinations can significantly alter MTBF figures. See the Drive Arrays: Fault-Tolerant section for more information.
Noise Level: Drive noise level is generally a consideration only for single-user systems. Measured in decibels (dB), drive noise levels should be specified by the manufacturer for both idle and maximum seek conditions. The decibel scale is logarithmic, with each increase of 3 dB representing approximately a doubling of the sound level.
Cache Buffer Size: All modern drives contain their own cache buffer. In today's SCSI drives, the buffer size generally ranges from 64 K to 1024 K. In general, larger buffers are better, but a good cache controller on the drive can make a small buffer perform better than another drive's large buffer. It's difficult to make any objective evaluation of cache buffer performance without reference to independently published drive benchmarks.
Power Consumption: Drive power consumption is an important factor for laptop computers and multi-drive systems. There are several power draw figures to consider depending upon the application. Power draw for power-on drive spin-up is measured in amps. The total power-on spin-up amperage draw for all installed devices in a system must not exceed the total amperage capacity of the system's power supply. Some drives and controllers allow you to set a drive jumper to disable drive spin-up at power-on, allowing the controller to spin-up the drives individually with a delay between each spin-up. Ongoing power consumption at drive idle is measured in watts. This can give an indication of how fast the drive consumes battery life in a portable system.
To find the fastest hard drive, bar none, simply examine the published specifications for the drive with the lowest average true access time and the highest average transfer rate. Many buyers want the best price/performance ratio, and the meaning of that ratio varies from person to person and from application to application. To buy the drives best suited for a particular application, examine the specifications as described above, and compare them to price, service and warranty information. Another key relationship is that between the dollar-per-megabyte ratio ($/MB) and the average true access and transfer rates.
A device must support synchronous data transfers to take full advantage of SCSI bus bandwidth. Without synchronous transfers, a device can transfer less than 5 MB/sec over a Narrow SCSI-2 bus, rather than the maximum 5-10 MB/sec.
The SCSI host adapter determines if a device supports synchronous transfers through a process known as synchronous negotiation. Some older SCSI devices don't recognize the synchronous negotiation process and hang up if the process is attempted. For this reason, many SCSI host adapters default to not performing synchronous negotiation with any device. Optimal performance requires that a SCSI host adapter be configured to negotiate synchronous transfers with any devices supporting them. Usually, this is configured through the host adapter BIOS.
Synchronous transfer support is less important for low-speed SCSI devices such as most CD-ROM drives, scanners, etc.
See the Host Adapter Considerations section for more SCAM information. Ideally, SCSI devices should support SCAM Level 2, although only SCAM Level 1 is required for Windows logo compliance.
Removable devices should support software-initiated media ejection.
SCSI devices are manufactured specifically to connect to a particular SCSI bus type. Buy the type of device appropriate for the host adapter. Usually, a Narrow SCSI device will not connect to a Wide SCSI connector, or vice versa, although many Wide SCSI host adapters offer both Narrow and Wide connectors, and some actually take Narrow and Wide devices on the same connector.
External devices should always have two connectors for loop-through support; preferably SCSI-2 high-density 50-pin Narrow or 68-pin Wide shielded connectors. The connectors should be clearly marked "SCSI In" and "SCSI Out."
If a SCSI device supports disconnection, it can detach itself from the SCSI bus temporarily to perform a task, freeing up bus bandwidth for that period of time, then reconnect when that task is completed.
SCSI devices can live in peaceful coexistence with IDE drives. Most systems require that you boot from the IDE drive, and any SCSI hard drives show up as subsequent drive letters. Under Windows NT, you can reorder drive letters, but only after the system has booted into NT.
Some new system board BIOS code now allows you to boot from a SCSI drive prior to an IDE drive, and then load the IDE drive(s) as subsequent drive letters. This feature should become more common over time.
The standard type of SCSI connection is known as single-ended. Another connection type, differential, is used for much longer cable lengths. Single-ended Fast SCSI-2 supports cables up to 6 meters, although lengths over 3 meters are not recommended. Differential SCSI supports up to 25 meters, and is generally used in cases where a large drive array must be located away from the main system.
Probably 90% of all SCSI hardware problems stem from improper termination or cabling. This section examines five areas where cabling problems can develop.
Standard Narrow SCSI-2 systems use an internal 50-pin flat ribbon cable. Wide SCSI-2 uses an internal 68-pin flat ribbon cable, or a 34-signal twisted-pair conductor. Cables should have a typical impedance of 90 ohms. The SCSI-2 specification allows impedance of 90-132 ohms, and recommends a minimum conductor size of 28 AWG with connectors spaced at least 12 inches apart for optimal performance.
Make sure that the flat internal ribbon cable does not contact the flat plane of the metal system case: that greatly reduces impedance.
Avoid using external SCSI devices if possible: it's much easier to put together a solid SCSI subsystem using only internal devices, and external SCSI cables are notoriously variable in quality. If external SCSI is necessary, get a good, name-brand external SCSI cable and keep it as short as possible.
The SCSI bus must be terminated at both ends. There are two types of SCSI termination: active and passive. Active is superior, although some say it is best to combine passive and active termination on the same bus. The host adapter frequently provides termination for one end of the bus, and you should find out what type of termination it provides.
At least one device on the bus must provide termination power. Ideally, termination power should be supplied at the terminators, not in the middle of the bus, but this is not always possible. Suppose, for example, that a single external device (such as a scanner) terminates its end of the bus. Whenever the scanner is not powered-on, it can't provide termination power.
When providing termination, SCSI host adapters generally provide termination power as well. With an all-internal-device system, the host adapter provides termination power for its end of the bus, and only the SCSI device terminating the other end of the bus should be configured to provide termination power. This isn't always the case, though. Depending on the host adapter, the manufacturer may recommend that the host adapter, another device on the bus, or some combination thereof be set to provide termination power. Consult the vendor and host adapter documentation for recommendations.
The Windows 95 Hardware Design Guide recommends installing a permanent internal active terminator at the end of the internal SCSI cable. This is required for SCSI Plug and Play compliance. System manufacturers with Windows 95 logo compliance provide this type of termination; however, internal ribbon cable active terminators are rare and difficult to find at retail outlets. One source is Digi-Key Electronics, at 800-DIGI-KEY.
Improper cabling and termination can have an impact on performance. Signal problems, for instance, may cause non-fatal data resends, reducing performance without producing an observable problem at the operating system level. Such problems can be discovered only by trial and error, or with a SCSI bus analyzer. Very simple LED-indicator SCSI bus-analyzing active terminators are available from vendors such as Granite Data in California (510-471-6442). Microsoft does not guarantee their usefulness.
Spanning allows two drives or partitions to join their capacity into a single logical volume. For example, two 1-GB drives can be combined to form a single 2-GB volume, the first half of which is located on the first drive and the second half on the second drive. Theoretically, spanning can be performed on any number of drives, limited only by the spanning software or hardware.
Spanning has very few performance implications. In most cases it neither increases nor decreases performance.
RAID 0 also allows multiple disk drives or partitions to combine their capacity into a single volume. Rather than split the data into large chunks the size of each drive (the way spanning does), RAID 0 divides the data into much smaller segments, called stripes, that are distributed round-robin across all drives in the array. Stripes can be anywhere from 512 bytes to several MB. Implementing RAID 0 requires a minimum of two drives; Windows NT's software RAID 0 implementation supports a maximum of 32 drives.
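To make the round-robin layout concrete, here is a minimal sketch of the stripe-to-drive mapping (an illustration only, not the Windows NT implementation; the function name and 64 K stripe size are arbitrary):

    # Illustrative sketch: map a logical byte offset to (drive, offset on that drive)
    # for striping without parity.

    def locate_stripe(logical_offset, stripe_size, n_drives):
        stripe_number = logical_offset // stripe_size
        drive = stripe_number % n_drives            # round-robin across the array
        stripe_on_drive = stripe_number // n_drives
        return drive, stripe_on_drive * stripe_size + logical_offset % stripe_size

    # With 64 K stripes on three drives, consecutive 64 K chunks land on drives 0, 1,
    # and 2, so a large sequential transfer keeps all three disks busy at once.
    for offset in (0, 65536, 131072):
        print(locate_stripe(offset, 65536, 3))  # (0, 0), (1, 0), (2, 0)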
Striping without parity can increase performance significantly by allowing drives in the array to operate simultaneously. This has performance implications for multi-user and single-user systems.
For multi-user systems, especially if the stripe size is greater than the average record size being accessed, simultaneous drive access means that multiple records can be read from the drive array at the same time. This delivers the benefits of multiple discrete drive volumes, without having to distribute data across different volumes.
For single-user systems, striping without parity can greatly increase transfer speed for large sequential data, but only if the stripe size is very small (on the order of 512 bytes) and the drive spindles are synchronized.
The disadvantage of RAID 0 is that it is "anti-fault-tolerant" as opposed to non-fault-tolerant. If any drive in the array fails, the entire array fails, and the data cannot be recovered.
Mirroring configures a pair of matched-size drives or partitions into a single volume, or mirror. All data is written simultaneously to both devices in the mirror. If one of the devices fails, the mirror is said to be broken but all of the data is still intact and accessible on the other half of the mirrored pair.
Mirroring decreases write performance somewhat, because all data must be written twice. Usually both drives can write the data simultaneously, but still the data must be sent twice over the SCSI bus. In hardware mirroring, the fault-tolerant SCSI host adapter receives one copy of the data, and sends it out twice to the pair of drives in the mirror. The only performance hit is on SCSI bus bandwidth. With software mirroring, the operating system has to write the data to the host adapter twice. The overhead occurs not just on the SCSI bus, but across the system bus and in the operating system as well.
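The difference between the two approaches can be expressed as bus traffic per logical write. The sketch below is illustrative only (the function and figures are hypothetical), following the description above:

    # Illustrative sketch: megabytes moved across each bus when writing data_mb
    # megabytes to a RAID 1 mirror.

    def mirror_write_traffic(data_mb, hardware_mirroring):
        """Return (system_bus_MB, scsi_bus_MB) for one logical write."""
        if hardware_mirroring:
            # The host adapter duplicates the data, so it crosses the system bus once.
            return data_mb, 2 * data_mb
        # Software mirroring: the operating system sends the data to the adapter twice.
        return 2 * data_mb, 2 * data_mb

    print(mirror_write_traffic(100, hardware_mirroring=True))   # (100, 200)
    print(mirror_write_traffic(100, hardware_mirroring=False))  # (200, 200)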
Mirroring increases read performance, because although data has to be written simultaneously to both drives in the mirror, data can be read independently from each drive. Each mirrored drive can seek and read different data simultaneously. Read bandwidth is almost doubled.
Mirroring and duplexing (discussed next) are the only fault tolerance options available in 2-drive configurations. RAID 1 storage overhead is always 50%. Two 1-GB drives in a mirrored configuration yield 1-GB total user storage.
Duplexing is the same process as mirroring, but it uses a separate SCSI host adapter for each half of the mirror, providing additional fault tolerance in case of host adapter failure.
Like striping without parity (RAID 0), RAID 5 distributes data in stripes across the drives in the array, but it also dedicates a portion of the array to storing parity information. In the event of a single drive failure, the parity information can be used to reconstruct the lost data. RAID 5 requires a minimum of three drives, and Windows NT Server's software RAID 5 implementation supports a maximum of 32 devices. The storage overhead required for RAID 5 is calculated this way:
Number of drives in RAID 5 array = n, Parity overhead = 1/n
For example, if you have a RAID 5 array consisting of five 1-GB drives, the parity overhead is 1/5, or 1 GB. Total storage is total drive size minus overhead; in this example, that is 5 GB minus 1 GB, or 4 GB of total user storage.
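Worked as a quick sketch (illustrative only, assuming n identical drives):

    # Illustrative sketch: usable capacity of a RAID 5 array of n identical drives.

    def raid5_usable_gb(n_drives, drive_size_gb):
        total = n_drives * drive_size_gb
        parity_overhead = total / n_drives  # 1/n of the array holds parity
        return total - parity_overhead

    print(raid5_usable_gb(5, 1))  # five 1-GB drives -> 4.0 GB of user storage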
Performance implications for RAID 5 are the same as those for RAID 0, except for the extra overhead created by the parity calculations. If a hardware RAID controller performs these calculations, the overhead is transparent to the operating system. In Windows NT's software RAID 5 implementation, the CPU has to calculate the parity information, and the parity data has to be written over the system bus. With hardware RAID 5, the operating system writes only the original data over the system bus; the RAID controller then calculates the parity information and writes all of the data over just the SCSI bus to the devices in the array. In general, depending on implementation, RAID 5 disk write performance should be 30% to 60% of the speed of RAID 1, but it costs much less.
To implement drive arrays in Windows 95, you must have a hardware disk array controller. Windows NT Workstation provides built-in spanning and striping without parity capabilities, and Windows NT Server adds software mirroring, duplexing and striping with parity. Both types of Windows NT can also be used with a hardware disk array. See the Windows NT Resource Kit for more information.
Since SCSI drives can operate simultaneously, configure Windows so that it can page to a different drive than the one it's currently accessing. Windows NT allows setting up separate paging files on each SCSI device, and tries to page to the device not in use. Windows 95 allows a single drive to be specified as the paging destination. Determine drive usage patterns, and set the paging location to the drive used least.
There are many SCSI hardware performance characteristics, potentials and limitations, but by working through them logically it is possible to assemble and configure a system that derives the maximum advantage from SCSI technology. This paper has explained the basics and provided the framework for a thorough, logical evaluation. See the next section for references that explain concepts and specifics in greater detail.
Available from Microsoft:
Available from Global Engineering Documents:
15 Inverness Way East
Englewood, CO 80112-5704
Phone: (800) 854-7179 Outside USA and Canada: (303) 792-2181
FAX: (303) 792-2192
Available from SCSI BBS 719-574-0424:
Available from Adaptec BBS 408-945-7727:
Available from the Distributed Processing Technology BBS at 407-831-6432:
Available from the Plug and Play forum on CompuServe-Go plugplay:
About the Author
Kai Kaltenbach is a consultant with Microsoft Premier Corporate Support.
Revision 1.0, 9/13/95