White Paper




October 1997


Solid State Disks (SSDs) are I/O subsystem storage devices that use semiconductors as the storage medium to deliver outstanding performance in I/O-intensive applications by eliminating the latency associated with disk rotation and head seeks. Their near-instantaneous access times and high transfer rates supply data quickly, bridging the growing performance gap between today's powerful CPUs and storage.

SSDs operate exactly like magnetic disks and can be configured as a RAID rank for the ultimate in performance. Most I/O bottlenecks are caused by unusually active files such as indexes, authorization files, job controllers, common code libraries, and operating system commands, which receive a disproportionate share of a system's overall I/O requests. According to a study by Princeton University and Digital Equipment Corporation, less than 5% of the data is responsible for 50% of the disk accesses. These files are easily identifiable and finite in number. Because of their access frequency, these files tend to reside in cache a large portion of the time. If the combined size of these files exceeds the cache, data is constantly purged from and reloaded into the cache, lowering performance. Moving these files to SSD can dramatically improve user-level response time.
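The skew described above makes hot files straightforward to find: rank files by access count and take the smallest set that covers half of all requests. The sketch below illustrates the idea; the file names and counts are invented for illustration, not figures from the study cited above.

```python
# Hypothetical sketch: rank files by access count and find the smallest
# set responsible for at least half of all I/O requests. Names and
# counts are illustrative only.
access_counts = {
    "index.dat": 5200, "authfile": 3100, "joblog": 2400,
    "libcode": 1800, "cmds": 900, "userdata1": 300,
    "userdata2": 200, "archive1": 60, "archive2": 40,
}

total = sum(access_counts.values())
hot_files, cumulative = [], 0
for name, count in sorted(access_counts.items(), key=lambda kv: -kv[1]):
    hot_files.append(name)
    cumulative += count
    if cumulative >= total / 2:
        break  # this small set handles >= 50% of accesses

print(hot_files)  # the candidates to move to SSD
```

In this made-up sample, two of nine files account for over half the accesses; those are the candidates to move to SSD.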

Figure SS3e5 shows the performance gains of adding SSD to hold hot files alongside RAID subsystems. For typical workloads, the combination of RAID and SSD outperforms either technology when used by itself.

Figure SS3e5 - Performance of SSD combined with RAID

Another use of SSD is as a cache in RAID subsystems; for installed RAID systems with limited cache capacity, this requires only a simple firmware change to the RAID system. As a read cache, SSD allows fast cache-hit access times and provides higher hit rates, leading to higher performance. As a write cache, a non-volatile SSD can improve RAID5 performance by buffering write operations, which no longer have to wait for modification of the parity drive; data can be written to the magnetic disks later as a background activity.
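The write-cache behavior just described can be modeled in a few lines. This is a minimal, illustrative sketch, not any vendor's design: writes are acknowledged as soon as they land in the non-volatile buffer, and a background flush later destages them to the RAID's disks, where the RAID5 parity update cost is actually paid.

```python
# Minimal write-back cache sketch (illustrative only): writes complete
# at SSD speed; destaging to magnetic disk happens in the background.
class WriteBackCache:
    def __init__(self, backing_store):
        self.buffer = {}            # block number -> data, held in NV SSD
        self.backing = backing_store

    def write(self, block, data):
        self.buffer[block] = data   # fast: no parity read-modify-write here

    def read(self, block):
        if block in self.buffer:    # read hit served at SSD speed
            return self.buffer[block]
        return self.backing.get(block)

    def flush(self):
        # background activity: destage buffered writes to magnetic disk
        self.backing.update(self.buffer)
        self.buffer.clear()

disks = {}
cache = WriteBackCache(disks)
cache.write(7, b"payload")
assert cache.read(7) == b"payload" and 7 not in disks  # still buffered
cache.flush()
assert disks[7] == b"payload"                          # now destaged
```

Because the buffer is non-volatile, acknowledged writes survive a power loss, which is what makes deferring the parity update safe.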

Figure SS3e6 shows a RAID subsystem consisting of disk drives and SSDs. In this case, critical data, or hot files, are stored on the SSD devices for maximum performance. The SSD can be connected directly to the host interconnect for minimum latency, or to the SCSI ports on the back end of the RAID subsystem if the host interconnect is something other than SCSI. SSDs can be integrated into subsystems exactly like hard disk drives. As SCSI-2 devices, the SSDs conform to the same rules that apply to magnetic disks. They can be mirrored, striped, and bound in volumes. Information is recorded using the same SCSI commands as magnetic disks.
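Since SSDs respond to the same SCSI commands as magnetic disks, the usual striping arithmetic applies to them unchanged. The sketch below shows one common mapping from a logical block address to a (device, block-within-device) pair; the stripe unit and device count are assumed parameters for illustration.

```python
# Illustrative striping math: map a logical block address (LBA) to a
# (device, block-within-device) pair across N identical SCSI devices.
def stripe_map(lba, num_devices, stripe_unit):
    chunk = lba // stripe_unit            # which fixed-size chunk holds the block
    device = chunk % num_devices          # chunks rotate across devices
    offset = (chunk // num_devices) * stripe_unit + lba % stripe_unit
    return device, offset

# With 4 devices and a 16-block stripe unit, consecutive chunks
# land on consecutive devices:
assert stripe_map(0, 4, 16) == (0, 0)
assert stripe_map(16, 4, 16) == (1, 0)
assert stripe_map(64, 4, 16) == (0, 16)
```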

Figure SS3e6 - Using SSD along with RAID

SSDs are aimed at applications whose performance is limited by I/O. They are particularly effective in write-intensive applications, or applications where data locality is poor. SSDs are most often used in commercial processing, customer service applications, and on-line transaction processing, where the tangible costs of lost business or wasted time readily justify the investment. SSDs may not be the optimum choice for applications that require high bandwidth for large, sequential I/O requests; striping of high-performance magnetic disks may be a lower-cost way to meet these requirements.

SSD vendors include Quantum, Imperial Technologies, Seek Systems, and SolidData.



By: Farid Neema

This Report was produced by:
351 Hitchcock Way, Suite #B-200
Santa Barbara, California, 93105
Tel: (805) 563-9491