
Tests of RAID 6, 5, 1 and 0 Arrays with Hitachi SAS-2 Drives

Gone are the days when a decent professional 8-port RAID controller cost serious money. Nowadays, solutions for the Serial Attached SCSI (SAS) interface have appeared that are very attractive in price, functionality, and performance alike. One of them is the subject of this review.

LSI MegaRAID SAS 9260-8i Controller

We have previously written about the second-generation SAS interface with a transfer rate of 6 Gb/s, and about the very cheap 8-port LSI SAS 9211-8i HBA controller, designed for building entry-level storage systems from the simplest RAID arrays of SAS and SATA drives. The LSI MegaRAID SAS 9260-8i is a class higher: it is equipped with a more powerful processor with hardware processing of level 5, 6, 50 and 60 arrays (ROC technology - RAID On Chip), as well as a tangible amount (512 MB) of onboard SDRAM for efficient data caching. This controller also supports 6 Gb/s SAS and SATA interfaces and is designed for PCI Express x8 v2.0 (5 GT/s per lane), which is theoretically almost enough to serve the needs of eight high-speed SAS ports. And all this at a retail price of around $500, that is, only a couple of hundred dollars more than the budget LSI SAS 9211-8i. The manufacturer, incidentally, places this solution in its MegaRAID Value Line, that is, among the cost-effective solutions.




The 8-port LSI MegaRAID SAS 9260-8i controller and its SAS2108 processor with DDR2 memory

The LSI SAS 9260-8i board has a low profile (MD2 form factor), is equipped with two internal Mini-SAS 4X connectors (each allows connecting up to 4 SAS drives directly, or more via expanders), is designed for the PCI Express x8 2.0 bus, and supports RAID levels 0, 1, 5, 6, 10, 50 and 60, dynamic SAS functionality, and so on. The LSI SAS 9260-8i can be installed both in 1U and 2U rack servers (mid-range and high-end servers) and in ATX and Slim-ATX cases (for workstations). RAID support is provided in hardware by the built-in LSI SAS2108 processor (PowerPC core at 800 MHz), supplemented with 512 MB of DDR2-800 memory with ECC support. LSI promises processor data throughput of up to 2.8 GB/s for reads and up to 1.8 GB/s for writes. Among the adapter's rich functionality it is worth noting Online Capacity Expansion (OCE) and Online RAID Level Migration (RLM) (expanding the volume and changing the type of arrays "on the fly"), SafeStore Encryption Services and Instant Secure Erase (encrypting data on disks and securely deleting it), support for solid-state drives (SSD Guard technology), and more. A battery module is optionally available for this controller (with it, the maximum operating temperature should not exceed +44.5 degrees Celsius).

LSI SAS 9260-8i Controller Key Specifications

System interface: PCI Express x8 2.0 (5 GT/s), Bus Master DMA
Disk interface: SAS-2, 6 Gb/s (supports SSP, SMP, STP and SATA protocols)
SAS ports: 8 (two x4 Mini-SAS SFF8087 connectors); up to 128 drives via expanders
RAID support: levels 0, 1, 5, 6, 10, 50, 60
CPU: LSI SAS2108 ROC (PowerPC @ 800 MHz)
Built-in cache memory: 512 MB ECC DDR2-800
Power consumption: no more than 24 W (+3.3 V and +12 V from the PCIe slot)
Operating / storage temperature range: 0...+60 °C / -45...+105 °C
Form factor, dimensions: MD2 low-profile, 168 x 64.4 mm
MTBF: > 2 million hours
Manufacturer's warranty: 3 years

The manufacturer lists typical applications of the LSI MegaRAID SAS 9260-8i as follows: various video stations (video on demand, video surveillance, video creation and editing, medical imaging), high-performance computing and digital data archives, and diverse servers (file, web, mail, database). In short, the vast majority of tasks encountered in small and medium-sized businesses.

The white-orange box with a frivolously smiling, toothy lady's face on the cover (apparently to better lure bearded sysadmins and stern system builders) contains the controller board, brackets for installing it in ATX, Slim-ATX and similar cases, two 4-drive cables with a Mini-SAS connector on one end and regular SATA connectors (without power) on the other (for connecting up to 8 drives to the controller), and a CD with PDF documentation and drivers for numerous versions of Windows, Linux (SuSE and RedHat), Solaris and VMware.


Scope of delivery for the boxed version of the LSI MegaRAID SAS 9260-8i controller (the MegaRAID Advanced Services Hardware Key mini-card is available on request)

With a special hardware key (supplied separately), the LSI MegaRAID SAS 9260-8i controller gains access to the LSI MegaRAID Advanced Services software technologies: MegaRAID Recovery, MegaRAID CacheCade, MegaRAID FastPath and LSI SafeStore Encryption Services (these are beyond the scope of this article). In particular, MegaRAID CacheCade is useful for raising the performance of an array of traditional hard drives (HDDs) with the help of a solid-state drive (SSD) added to the system: the SSD acts as a second-level cache for the HDD array (an analogue of a hybrid HDD), in some cases increasing the performance of the disk subsystem by up to 50 times. Also of interest is MegaRAID FastPath, which reduces the latency of the SAS2108 processor when handling I/O operations (by disabling optimizations specific to hard disk drives), making it possible to speed up an array of several solid-state drives connected directly to the SAS 9260-8i ports.

It is more convenient to configure, set up and maintain the controller and its arrays in the proprietary manager running in the operating system (the controller's own BIOS Setup menu is not particularly rich - only basic functions are available). In particular, in the manager any array can be organized and its operating policies (caching, etc.) set in a few mouse clicks - see the screenshots.




Sample screenshots of the Windows Manager for Configuring RAID Levels 5 (above) and 1 (below).
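
The same operations can also be scripted. As a hedged sketch (not taken from the article): LSI's MegaCLI command-line utility exposes the same array-creation and cache-policy settings as the GUI manager. The adapter number, enclosure and slot IDs below are illustrative placeholders for a typical single-controller system.

```python
# Minimal sketch: creating a RAID 5 logical drive with write-back (WB) and
# read-ahead (RA) caching via LSI's MegaCLI utility, driven from Python.
# Assumptions: MegaCli64 is on PATH, the 9260-8i is adapter 0, and the five
# drives sit in enclosure 252, slots 0-4 (placeholders - check your system).
import subprocess

def megacli(*args):
    """Run one MegaCLI command and return its text output."""
    cmd = ["MegaCli64", *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# RAID 5 across five drives, with cache policies matching those set in the GUI.
print(megacli("-CfgLdAdd", "-r5", "[252:0,252:1,252:2,252:3,252:4]",
              "WB", "RA", "Cached", "-a0"))
# Verify the new logical drive and its cache policy.
print(megacli("-LDInfo", "-Lall", "-a0"))
```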

Testing

To assess the basic performance of the LSI MegaRAID SAS 9260-8i (without the MegaRAID Advanced Services hardware key and the related technologies), we used five high-performance SAS drives with a 15,000 rpm spindle speed and SAS-2 (6 Gb/s) support - the 300 GB Hitachi Ultrastar 15K600 HUS156030VLS600.


Hitachi Ultrastar 15K600 hard drive without top cover

This allows us to test all the basic array levels - RAID 6, 5, 10, 0 and 1 - not only with the minimum number of disks for each of them, but also "with room to grow", that is, with a disk added on the second of the ROC chip's two 4-channel SAS ports. Note that the hero of this article has a simplified analogue - the 4-port LSI MegaRAID SAS 9260-4i controller built on the same component base - so our tests of 4-disk arrays apply equally to it.

The maximum sequential read/write speed of the payload area for the Hitachi HUS156030VLS600 is about 200 MB/s (see the graph). The average read access time (per the specs) is 5.4 ms. The built-in buffer is 64 MB.


Hitachi Ultrastar 15K600 HUS156030VLS600 sequential read / write speed graph
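
As a quick sanity check of these spec-sheet numbers: for a 15,000 rpm drive, the average rotational latency is half a revolution, and subtracting it from the quoted 5.4 ms access time gives an estimate of the average seek time.

```python
# Back-of-the-envelope check of the Hitachi 15K600 spec-sheet numbers.
rpm = 15000
avg_rotational_latency_ms = 60_000 / rpm / 2   # half a revolution = 2.0 ms
avg_access_ms = 5.4                            # quoted average read access time
avg_seek_ms = avg_access_ms - avg_rotational_latency_ms
print(avg_rotational_latency_ms, avg_seek_ms)  # 2.0 ms and ~3.4 ms
```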

The test system was based on an Intel Xeon 3120 processor, a motherboard with an Intel P45 chipset, and 2 GB of DDR2-800 memory. The SAS controller was installed in a PCI Express x16 v2.0 slot. The tests were carried out under Windows XP SP3 Professional and Windows 7 Ultimate SP1 x86 (clean English-language versions), since their server counterparts (Windows 2003 and 2008, respectively) do not allow some of the benchmarks and scripts we use to run. The tests used were AIDA64, ATTO Disk Benchmark 2.46, Intel IOmeter 2006, Intel NAS Performance Toolkit 1.7.1, C'T H2BenchW 4.13/4.16, HD Tach RW 3.0.4.0, and Futuremark's PCMark Vantage and PCMark05. The tests were carried out both on unallocated volumes (IOmeter, H2BenchW, AIDA64) and on formatted partitions. In the latter case (for NASPT and PCMark), results were taken both at the physical beginning of the array and at its middle (array volumes of the maximum available capacity were divided into two equal logical partitions). This lets us assess the performance of the solutions more adequately, since the fastest initial sections of volumes, on which most reviewers run their file benchmarks, often do not reflect the situation on the rest of the disk, which can also be used very actively in real work.

All tests were performed five times and the results were averaged. We will consider our updated methodology for evaluating professional disk solutions in more detail in a separate article.

It remains to add that for this testing we used controller firmware version 12.12.0-0036 and driver version 4.32.0.32. Write and read caching was enabled for all arrays and disks. Perhaps the use of more recent firmware and drivers spared us the oddities noticed in early tests of this same controller; in our case, no such incidents were observed. However, we also do not use the FC-Test 1.0 script in our suite - a test highly questionable in terms of result reliability (whose results, in certain cases, those same colleagues "would like to call confusion, vacillation and unpredictability") - since we have repeatedly noticed its inconsistency on some file patterns (in particular, sets of many small files of less than 100 KB).

The diagrams below show the results for 8 array configurations:

  1. RAID 0 of 5 disks;
  2. RAID 0 of 4 disks;
  3. RAID 5 of 5 disks;
  4. RAID 5 of 4 disks;
  5. RAID 6 of 5 disks;
  6. RAID 6 of 4 disks;
  7. RAID 1 of 4 disks;
  8. RAID 1 of 2 disks.

LSI obviously treats a four-disk RAID 1 array (see the screenshot above) as a stripe-plus-mirror array, usually called RAID 10 (the test results confirm this too).

Test results

In order not to overload the review page with an innumerable set of diagrams - sometimes uninformative and tiring (something certain "rabid colleagues" often indulge in :)) - we have summarized the detailed results of some tests in a table. Those who wish to analyze the fine points of our results (for example, to find out how the contenders behave in the tasks most critical to them) can do so on their own. We will focus on the most important and key test results, as well as on average indicators.

Let's first look at the results of the "purely physical" tests.

The average random-access read time on a single Hitachi Ultrastar 15K600 HUS156030VLS600 disk is 5.5 ms. However, when the drives are organized into arrays, this figure changes slightly: it decreases (thanks to efficient caching in the LSI SAS9260 controller) for the "mirrored" arrays and increases for all the others. The largest increase (about 6%) is seen for the level 6 arrays, since the controller there has to access the largest number of disks at once (three for RAID 6, two for RAID 5, and one for RAID 0, because accesses in this test are made in blocks of only 512 bytes, which is much smaller than the arrays' stripe size).

The situation with random access during writing (in 512-byte blocks) is much more interesting. For a single disk this parameter is about 2.9 ms (without caching by the host controller), but in the arrays on the LSI SAS9260 controller we observe a significant reduction of this figure thanks to good write caching in the controller's 512 MB SDRAM buffer. Interestingly, the most dramatic effect is obtained for the RAID 0 arrays (random write access time drops by almost an order of magnitude compared to a single drive)! This should undoubtedly benefit the performance of such arrays in a number of server tasks. At the same time, even on the arrays with XOR calculations (that is, with a high load on the SAS2108 processor), random write access does not cause an obvious slowdown - again thanks to the powerful controller cache. Naturally, RAID 6 is slightly slower here than RAID 5, but the difference between them is essentially insignificant. What somewhat surprised me in this test was the behavior of the single "mirror", which showed the slowest random write access (perhaps a "feature" of this controller's microcode).

The linear (sequential) read and write graphs (in large blocks) show no peculiarities for any of the arrays (with controller write caching enabled, the read and write graphs are almost identical), and they all scale according to the number of disks participating in the "useful" process in parallel. That is, for a five-disk RAID 0 the speed quintuples relative to a single disk (reaching 1 GB/s!), for a five-disk RAID 5 it quadruples, for RAID 6 it triples (triples, of course :)), for a RAID 1 of four disks it doubles (no fuss! :)), and a simple mirror replicates the graphs of a single disk. This pattern is clearly visible, in particular, in the maximum read and write speeds of real large (256 MB) files in large blocks (from 256 KB to 2 MB), which we illustrate with the diagram of the ATTO Disk Benchmark 2.46 test (the results of this test under Windows 7 and XP are almost identical).
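
This scaling is easy to capture in a back-of-the-envelope model (a sketch, not measured data): each RAID level simply multiplies the ~200 MB/s of a single drive by the number of drives carrying payload in parallel.

```python
# Theoretical sequential throughput of the tested arrays, scaled from the
# ~200 MB/s of a single Hitachi 15K600 described above.
SINGLE_DRIVE_MBPS = 200

def sequential_mbps(level, n_drives):
    data_drives = {
        0: n_drives,        # striping: every drive carries payload
        5: n_drives - 1,    # one drive's worth of capacity goes to parity
        6: n_drives - 2,    # two drives' worth of parity
        1: n_drives // 2,   # mirror / RAID 10: half the drives hold copies
    }[level]
    return data_drives * SINGLE_DRIVE_MBPS

for level, n in [(0, 5), (5, 5), (6, 5), (1, 4), (1, 2)]:
    print(f"RAID {level} of {n} disks: ~{sequential_mbps(level, n)} MB/s")
# RAID 0 of 5 disks gives ~1000 MB/s - the 1 GB/s seen in the ATTO test.
```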

Here, only the case of reading files on the five-disk RAID 6 array unexpectedly fell out of the overall picture (the results were rechecked many times). When reading in 64 KB blocks, however, this array's speed climbs to 600 MB/s, so we will write this off as a "feature" of the current firmware. Note also that when writing real files, the speed is slightly higher thanks to caching in the controller's large buffer, and the difference from reading is the more noticeable, the lower the array's real linear speed.

As for the interface speed, usually measured by writing and reading the buffer (repeated accesses to the same address of the disk volume), here it turned out to be practically the same for almost all arrays because the controller cache was enabled for them (see the table). Thus, the write figures for all participants in our test were approximately 2430 MB/s. Note that the PCI Express x8 2.0 bus theoretically gives 40 Gbit/s, that is, 5 GB/s; in terms of payload, however, the theoretical ceiling is lower - 4 GB/s - which means that in our case the controller really was operating as a PCIe 2.0 device. Thus, the 2.4 GB/s we measured is evidently the real bandwidth of the controller's onboard memory (DDR2-800 memory with a 32-bit data bus, as the configuration of the ECC chips on the board suggests, theoretically gives up to 3.2 GB/s). When reading, caching is not as "all-embracing" as when writing, so the "interface" speed measured by utilities is, as a rule, lower than the read speed of the controller cache (typically 2.1 GB/s for the level 5 and 6 arrays), and in some cases it "drops" to the read speed of the hard drives' own buffers (about 400 MB/s for a single drive, see the graph above) multiplied by the number of "sequential" drives in the array (these are precisely the RAID 0 and 1 cases in our results).
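
The two ceilings mentioned here are simple arithmetic, reproduced below as a sketch (the 32-bit memory bus width is the inference from the board's ECC chip layout made above).

```python
# Back-of-the-envelope check of the bus and cache-memory ceilings quoted above.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding, i.e. 80% of raw bits are payload.
pcie_payload_gb_per_s = 8 * 5e9 * (8 / 10) / 8 / 1e9   # x8 link -> 4.0 GB/s
# DDR2-800 on a 32-bit (4-byte) data bus:
ddr2_gb_per_s = 800e6 * 4 / 1e9                        # -> 3.2 GB/s
print(pcie_payload_gb_per_s, ddr2_gb_per_s)
# 4.0 and 3.2 GB/s: the measured ~2.4 GB/s is thus limited by the onboard
# memory, not by the PCIe link.
```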

Well, with the "physics" we in the first approximation figured out, it's time to move on to the "lyrics", that is, to the tests of "real" kids applications. By the way, it will be interesting to find out whether the performance of arrays scales when performing complex user tasks as linearly as it scales when reading and writing large files (see the diagram of the ATTO test just above). An inquisitive reader, I hope, has already been able to predict the answer to this question.

As a "starter" for the "lyrical" part of our meal, we serve the desktop disk tests from the PCMark Vantage and PCMark05 packages (under Windows 7 and XP, respectively), plus the similar "track-based" application test from the H2BenchW 4.13 package of the authoritative German magazine C'T. Yes, these benchmarks were originally created to evaluate hard drives in desktop PCs and inexpensive workstations; they emulate the execution of typical tasks of an advanced personal computer - working with video, audio, Photoshop, antivirus software, games, a swap file, installing applications, copying and writing files, and so on. In the context of this article, therefore, their results should not be taken as the ultimate truth - after all, different tasks are often run on multi-disk arrays. Nevertheless, given that the manufacturer itself positions this RAID controller for relatively inexpensive solutions among others, this class of test tasks can well characterize a fair share of the applications that will actually run on such arrays (the same work with video, professional graphics processing, OS and application swapping, file copying, antivirus scans, etc.). So the importance of these three comprehensive benchmarks in our overall suite should not be underestimated.

In the popular PCMark Vantage we observe, on average (see the diagram), a very remarkable fact: the performance of this multi-disk solution is almost independent of the type of array used! Within certain limits, this conclusion also holds for all the individual test tracks (task types) of the PCMark Vantage and PCMark05 packages (see the table for details). This can mean either that the controller's firmware algorithms (with its cache and the disks) hardly account for the specifics of this type of application, or that the bulk of these tasks execute in the controller's own cache memory - most likely we are seeing a combination of the two. However, for the latter case (execution of the tracks largely in the RAID controller's cache), the average performance of the solutions is not that impressive: compare these data with the results of some "desktop" ("chipset") 4- and 5-disk RAID 0 arrays and of inexpensive single SSDs on the SATA 3 Gb/s bus (see our overview). While the arrays on the LSI SAS9260 are less than twice as fast in the PCMark tests as a simple "chipset" 4-disk RAID 0 (built, moreover, on hard drives half as fast as the Hitachi Ultrastar 15K600 used here), they all definitely lose even to a far-from-fastest "budget" single SSD! The PCMark05 disk test paints a similar picture (see the table; there is no point in a separate diagram for it).

A similar picture (with some reservations) can be observed for the arrays on the LSI SAS9260 in another "track-based" application benchmark - C'T H2BenchW 4.13. Here only the two structurally slowest arrays (the RAID 6 of 4 disks and the simple "mirror") noticeably lag behind the rest, whose performance evidently reaches the "sufficient" level at which it no longer rests on the disk subsystem but on the efficiency of the SAS2108 processor and the controller's cache with these complex request sequences. And in this context we can be pleased that on the LSI SAS9260 the performance of arrays in tasks of this class is almost independent of the array type (RAID 0, 5, 6 or 10), which allows more reliable solutions to be used without sacrificing final performance.

However, "Maslenitsa is not all for the cat" - if we change the tests and check the operation of arrays with real files on the NTFS file system, the picture will change dramatically. So, in the Intel NASPT 1.7 test, many of the "pre-installed" scenarios of which are quite directly related to tasks typical for computers equipped with an LSI MegaRAID SAS9260-8i controller, the array disposition is similar to that which we observed in the ATTO test when reading and writing large files - the performance increases proportionally as the "linear" speed of the arrays grows.

This diagram shows the average over all NASPT tests and patterns; the detailed results are in the table. Let me emphasize that we ran NASPT both under Windows XP (as numerous reviewers usually do) and under Windows 7 (which, owing to certain peculiarities of this test, is done less often). The point is that "Seven" (and its "big brother" Windows 2008 Server) uses more aggressive native file-caching algorithms than XP. In addition, large files in Windows 7 are copied mostly in 1 MB blocks (XP, as a rule, operates in 64 KB blocks). As a result, the figures of the "file" test Intel NASPT differ substantially between Windows XP and Windows 7 - in the latter they are much higher, sometimes more than twice! Incidentally, we compared the results of NASPT (and of the other tests in our suite) under Windows 7 with 1 GB and with 2 GB of installed system memory (there are reports that with large amounts of system memory the caching of disk operations in Windows 7 increases and NASPT results climb even higher), but we found no difference beyond the measurement error.

We leave the debates about which OS (in terms of caching policies, etc.) is "better" for testing disks and RAID controllers to the discussion thread of this article. We believe that drives and solutions based on them should be tested in conditions as close as possible to the real situations in which they will operate. That is why, in our opinion, the results we obtained under both OSes are of equal value.

But back to the chart of average NASPT performance. As you can see, the difference between the fastest and the slowest of the arrays we tested averages a little less than threefold here. That is not the fivefold gap seen when reading and writing large files, of course, but it is still quite noticeable. The arrays rank essentially in proportion to their linear speed, and that is good news: it means the LSI SAS2108 processor handles data quite briskly, creating almost no bottlenecks even when the level 5 and 6 arrays are under active load.

To be fair, NASPT does contain patterns (2 of the 12) in which we see the same picture as in PCMark and H2BenchW, namely that the performance of all the tested arrays is practically the same! These are Office Productivity and Dir Copy to NAS (see the table). This is especially evident under Windows 7, although the tendency toward "convergence" is obvious under Windows XP as well (compared with the other patterns). Conversely, PCMark and H2BenchW have patterns where array performance grows in proportion to linear speed. So things are not as simple and unambiguous as some might like.

At first, I wanted to discuss a diagram with the overall performance of the arrays, averaged over all application tests (PCMark + H2BenchW + NASPT + ATTO), that is, this one:

However, there is nothing special to discuss here: we see that the behavior of arrays on the LSI SAS9260 controller in tests emulating particular applications can differ radically depending on the scenarios used. Conclusions about the benefit of a particular configuration are therefore best drawn from the specific tasks you intend to run. And here another professional tool can noticeably help us - synthetic IOmeter patterns emulating one load or another on the storage system.

Tests in IOmeter

Here we will omit the discussion of the numerous patterns that painstakingly measure speed as a function of access block size, percentage of write operations, percentage of random accesses, and so on. That is, in essence, pure synthetics, yielding little practical information and mostly of theoretical interest; we have already clarified the main practical points regarding the "physics" above. It is more important for us to focus on the patterns that emulate real work - servers of various types, and file operations.

To emulate File Server, Web Server and Database servers, we used the well-known patterns of the same name proposed by Intel and StorageReview.com. In all cases we tested the arrays with command queue depths (QD) from 1 to 256, doubling at each step.
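
For reference, a sketch of the commonly cited parameters of these Intel/StorageReview patterns (the article itself does not list them, so treat the exact block-size mixes as an assumption):

```python
# Commonly cited parameters of the classic IOmeter server patterns.
# name: (block sizes, % read, % random)
SERVER_PATTERNS = {
    "Database":    ("8 KB only",                     67, 100),
    "File Server": ("512 B to 64 KB weighted mix",   80, 100),
    "Web Server":  ("512 B to 512 KB weighted mix", 100, 100),
}
QUEUE_DEPTHS = [2**k for k in range(9)]  # 1, 2, 4, ... 256, doubling each step
```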

In the "Database" pattern, which uses random disk accesses in blocks of 8 KB within the entire array size, one can observe a significant advantage of arrays without parity (that is, RAID 0 and 1) with a command queue depth of 4 and higher, while all arrays with parity (RAID 5 and 6) demonstrate very similar performance (despite the twofold difference between them in linear access speed). The situation can be easily explained: all parity-checked arrays showed similar values \u200b\u200bin tests for the average random access time (see the diagram above), and this parameter mainly determines the performance in this test. It is interesting that the performance of all arrays grows almost linearly with increasing command queue depth up to 128, and only at QD \u003d 256, in some cases, one can see a hint of saturation. The maximum performance of arrays with parity at QD \u003d 256 was about 1100 IOps (operations per second), that is, the LSI SAS2108 processor spends less than 1 ms to process one piece of data in 8 KB (about 10 million single-byte XOR operations per second for RAID 6 ; of course, the processor simultaneously performs other tasks for data input-output and working with cache memory).

In the file server pattern, which uses blocks of various sizes for random read and write accesses across an array's entire volume, we observe a picture similar to Database, with the difference that here the five-disk parity arrays (RAID 5 and 6) noticeably outpace their 4-disk counterparts and demonstrate almost identical performance (about 1200 IOps at QD = 256)! Apparently, adding a fifth disk on the second of the controller's two 4-channel SAS ports somehow optimizes the computational load on the processor (through the I/O operations?). It may be worth comparing the speed of 4-disk arrays with the drives connected in pairs to the controller's different Mini-SAS connectors to find the optimal configuration for arrays on the LSI SAS9260, but that is a task for another article.

In the Web Server pattern, where, as its creators intended, there are no disk write operations at all (and hence no XOR calculations for writing), the picture becomes even more interesting. The fact is that all three five-disk arrays in our set (RAID 0, 5 and 6) show identical performance here, despite the noticeable differences between them in linear read speed and in parity computation! The same three arrays of 4 disks are likewise identical in speed to one another! Only RAID 1 (and 10) breaks the pattern. Why this happens is hard to judge. Perhaps the controller has very efficient algorithms for picking the "luckier" drive (that is, whichever of the five or four drives delivers the needed data first), which in the case of RAID 5 and 6 raises the probability of data arriving earlier from the platters, preparing the processor in advance for the necessary computations (remember the deep command queue and the large DDR2-800 buffer). That could compensate for the latency of the XOR calculations and even the "chances" against a "plain" RAID 0. In any case, the LSI SAS9260 controller can only be praised for the extremely high results of its parity arrays in the Web Server pattern (about 1700 IOps for the 5-disk arrays at QD = 256). Unfortunately, the fly in the ointment is the very low performance of the two-disk mirror in all these server patterns.

The Web Server pattern is echoed by our own pattern, which emulates random reading of small (64 KB) files within the entire array space.

Again the results cluster into groups: all the 5-disk arrays are identical to one another in speed and lead our "race", the 4-disk RAID 0, 5 and 6 are likewise indistinguishable from one another, and only the "mirrors" stand apart (incidentally, the 4-disk "mirror", that is, RAID 10, is faster than all the other 4-disk arrays - apparently thanks to the same "lucky drive" selection algorithm). We emphasize that these regularities hold only at large command queue depths; at a small queue (QD = 1-2), the situation and the leaders can be entirely different.

Everything changes when servers work with large files. In the era of modern "heavy" content and new "optimized" OSes such as Windows 7, 2008 Server and the like, working with megabyte-sized files and 1 MB data blocks is becoming ever more important. In this situation our new pattern, which emulates random reading of 1 MB files across the entire disk (the details of the new patterns will be described in a separate article on methodology), proves very useful for a fuller assessment of the server potential of the LSI SAS9260 controller.

As you can see, the 4-disk "mirror" here leaves no one any hope of leadership, clearly dominating at any command queue depth. Its performance also grows linearly with queue depth at first, but at QD = 16 it saturates for RAID 1 (at about 200 MB/s). A little "later" (at QD = 32) performance saturates for the arrays that are slower in this test, among which the "silver" and "bronze" go to the RAID 0s, while the parity arrays end up as outsiders, losing even to the two-disk RAID 1, which performs surprisingly well here. This leads us to conclude that even on reads, the computational XOR load on the LSI SAS2108 processor when working with large, randomly located files and blocks is very burdensome for it, and for RAID 6, where it effectively doubles, at times even exorbitant: the performance of those solutions barely exceeds 100 MB/s, that is, 6-8 times lower than on linear reads! "Redundant" RAID 10 is clearly the more advantageous choice here.

With random writing of small files, the picture is again strikingly different from what we saw earlier.

The fact is that here array performance hardly depends on the depth of the command queue (evidently thanks to the huge cache of the LSI SAS9260 controller and the rather large caches of the hard drives themselves), but it changes radically with the array type! The unconditional leaders here are the RAID 0s, undemanding of the processor, while the "bronze" goes to RAID 10, trailing the leaders by more than a factor of two. All the parity arrays form a very tight group with the two-disk mirror (details for them are shown in a separate diagram below the main one), losing threefold to the leaders. Yes, this is definitely a heavy load on the controller's processor - but, frankly, I did not expect such a "failure" from the SAS2108. At times even a software RAID 5 on a "chipset" SATA controller (with Windows-based caching and the PC's CPU doing the math) can work faster... Still, the controller steadily delivers "its own" 440-500 IOps; compare this with the chart of average write access times at the beginning of the results section.

Moving to random writing of large 1 MB files raises the absolute speeds (for RAID 0, almost to its values for random reading of such files, that is, 180-190 MB/s), but the overall picture barely changes: the parity arrays are many times slower than RAID 0.

The picture for RAID 10 is curious - its performance drops with increasing command queue depth, although not much. There is no such effect for other arrays. The two-disk "mirror" looks modest here again.

Now let's look at patterns in which files are read and written to disk in equal numbers. Such loads are typical, in particular, for some video servers or during active copying / duplication / backup of files within the same array, as well as in the case of defragmentation.

First - files of 64 KB randomly throughout the array.

Here some similarity to the results of the Database pattern is obvious, although the absolute speeds of the arrays are about three times higher, and even at QD = 256 a certain saturation of performance is already noticeable. The higher share of write operations (compared with the Database pattern) makes the parity arrays and the two-disk "mirror" obvious outsiders here, significantly slower than the RAID 0 and 10 arrays.

When switching to 1 MB files, this pattern is generally preserved, although the absolute speeds approximately triple, and RAID 10 becomes as fast as a 4-disk "stripe", which is good news.

The last pattern in this article will be the case of sequential (as opposed to random) reading and writing of large files.

And here many of the arrays manage to accelerate to very decent speeds in the region of 300 MB/s. And although the more than twofold gap between the leader (RAID 0) and the outsider (the two-disk RAID 1) remains (note that with purely linear reading OR writing this gap is fivefold!), the RAID 5 that made the top three, and the other XOR arrays that have closed ranks behind it, cannot fail to encourage. After all, judging by the list of applications for this controller that LSI itself provides (see the beginning of the article), many target tasks will use exactly this pattern of array access. And that is definitely worth bearing in mind.

In conclusion, here is a final diagram averaging the indicators of all the IOmeter test patterns above (geometrically over all patterns and command queue depths, without weights). It is curious that if, within each pattern, these results are instead averaged arithmetically with weights of 0.8, 0.6, 0.4 and 0.2 for command queues of 32, 64, 128 and 256 respectively (which nominally accounts for the falling share of deep-queue operations in drives' overall work), the final normalized performance index (across all patterns) coincides with the geometric mean to within 1%.
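
A sketch of the two averaging schemes being compared (the IOps values here are placeholders, not measured results):

```python
# Plain geometric mean over all queue depths versus an arithmetic mean that
# down-weights deep queues (weights 0.8/0.6/0.4/0.2 for QD 32/64/128/256).
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def weighted_mean(by_qd, weights={32: 0.8, 64: 0.6, 128: 0.4, 256: 0.2}):
    total = sum(by_qd[qd] * w for qd, w in weights.items())
    return total / sum(weights.values())

sample = {32: 900.0, 64: 1000.0, 128: 1080.0, 256: 1100.0}  # hypothetical IOps
print(geometric_mean(list(sample.values())), weighted_mean(sample))
```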

So, the average "temperature in the hospital" in our patterns for the IOmeter test shows that there is no way to get away from "physics with mathematics" - RAID 0 and 10 are definitely in the lead. in some cases, decent performance, in general, such arrays cannot "reach" the level of a simple "stripe". At the same time, it is interesting that 5-disk configurations clearly add up compared to 4-disk ones. In particular, 5-disk RAID 6 is definitely faster than 4-disk RAID 5, although in terms of "physics" (random access time and linear access speed) they are virtually identical. The two-disk "mirror" also disappointed me (on average, it is equivalent to a 4-disk RAID 6, although for a mirror two XOR calculations are not required for each bit of data). However, a simple "mirror" is obviously not the target array for a sufficiently powerful 8-port SAS controller with a large cache and a powerful onboard processor. :)

Price information

The 8-port LSI MegaRAID SAS 9260-8i controller with its full retail package is priced in the region of $500, which can be considered quite attractive. Its simplified 4-port counterpart is even cheaper. The current average retail prices of the devices in Moscow, as of the time you read this article:

LSI SAS 9260-8i: $571
LSI SAS 9260-4i: $386

Conclusion

Summing up, we will not venture uniform recommendations "for everyone" regarding the 8-port LSI MegaRAID SAS9260-8i controller. Everyone should decide for themselves whether it is needed and which arrays to configure with it, based strictly on the class of tasks to be run. The fact is that in some cases (in some tasks) this inexpensive "mega-monster" can show outstanding performance even on double-parity arrays (RAID 6 and 60), while in other situations the speed of its RAID 5 and 6 clearly leaves much to be desired. The almost universal lifesaver is a RAID 10 array, which can be organized with nearly the same success on cheaper controllers; yet here, often thanks precisely to the SAS9260-8i's processor and cache, a RAID 10 array behaves no slower than a stripe of the same number of disks, while guaranteeing high reliability. What should definitely be avoided with the SAS9260-8i are the two-disk mirror and the four-disk RAID 6 and 5 - obviously suboptimal configurations for this controller.

Thanks to Hitachi Global Storage Technologies
for the hard drives provided for tests.

Today's file server or web server cannot do without a RAID array: only this mode of operation can provide the required bandwidth and speed of the storage subsystem. Until recently, the only hard drives suitable for this job were SCSI drives with spindle speeds of 10-15 thousand rpm, which required a separate SCSI controller to operate. SCSI offered data transfer rates of up to 320 MB/s, but it is an ordinary parallel interface, with all its drawbacks.

More recently, a new disk interface has appeared, called SAS (Serial Attached SCSI). Today, many companies already have controllers for this interface in their product lines, with support for all RAID levels. In our mini-review, we take a look at two members of Adaptec's new family of SAS controllers: the 8-port ASR-4800SAS and the 4+4-port ASR-48300 12C.

Introducing SAS

So what kind of interface is SAS? SAS is in fact a hybrid of SATA and SCSI: the technology has absorbed the advantages of both interfaces. SATA contributes the serial interface with two independent read and write channels, with each SATA device connected to its own channel. SCSI contributes a very efficient and reliable enterprise data transfer protocol, its drawback being the parallel interface with a bus shared by several devices. SAS is thus free of SCSI's shortcomings, has SATA's advantages, and provides speeds of up to 300 MB/s per channel. The diagram below gives a rough picture of the SCSI and SAS connection topologies.

The bidirectional interface virtually eliminates latency, since there is no switching of the channel between reads and writes.

An interesting and positive feature of Serial Attached SCSI is that the interface supports both SAS and SATA drives, and both types can be connected to one controller at the same time. SAS drives, however, cannot be connected to a SATA controller: first, they require special SSP (Serial SCSI Protocol) commands to operate, and second, they are physically incompatible with a SATA connector. Each SAS drive connects to its own port, yet it is still possible to attach more drives than the controller has ports - this is what the SAS expander provides.

The fundamental difference between the SAS and SATA drive connectors is the additional data port: each Serial Attached SCSI drive has two SAS ports, each with its own ID. The technology thereby provides redundant paths, which increases reliability.

SAS cables differ slightly from SATA cables; a special cable harness is included with a SAS controller. As with SCSI, drives of the new standard can be connected not only inside the server case but also outside it, for which special cables and accessories are provided. To connect hot-swappable drives, special backplane boards are used that carry all the necessary connectors and ports for attaching drives and controllers.

Typically, the backplane is housed in a special drive-sled enclosure that holds the RAID array's disks and provides cooling. If one or more disks fail, the faulty HDD can be replaced quickly, and the replacement does not interrupt the array's operation: swap the disk, and the array is fully functional again.

Adaptec SAS Adapters

Adaptec has presented two rather interesting RAID controller models. The first is a representative of the budget class of devices for building RAID in inexpensive entry-level servers: the eight-port ASR-48300 12C. The second is considerably more advanced and intended for more serious tasks, with eight SAS channels on board: the ASR-4800SAS. Let's take a closer look at each of them, starting with the simpler and cheaper model.

Adaptec ASR-48300 12C

The ASR-48300 12C controller is designed for building small RAID arrays of levels 0, 1 and 10 - that is, the main types of disk arrays can be built with it. The model ships in an ordinary cardboard box in blue and black tones; on the front of the package is a stylized image of a controller flying out of a computer, meant to inspire thoughts of how fast a computer with this device inside will be.

The package is minimal but includes everything needed to start working with the controller. The kit contains:

  • ASR-48300 12C controller
  • Low-profile bracket
  • Storage Manager software disc
  • Quick-start manual
  • Connecting cable (SFF8484 to 4x SFF8482 with power connectors), 0.5 m

The controller is designed for the 133 MHz PCI-X bus, which remains widespread in server platforms. The adapter provides eight SAS ports, but only four of them are implemented as an SFF8484 connector for drives inside the case; the remaining four channels are brought out as an SFF8470 connector, so some of the drives have to be connected externally - for example, an external box with four drives inside.

When using an expander, the controller has the ability to work with 128 disks in the array. In addition, the controller is capable of operating in a 64-bit environment and supports the corresponding commands. The card can be installed in a 2U low-profile server with the included low-profile bracket. The general characteristics of the board are as follows.

Benefits

Cost effective Serial Attached SCSI controller with Adaptec HostRAID ™ technology for high-performance storage of mission-critical data.

Customer needs

Ideal for supporting entry-level, mid-range and workgroup server applications that require high-performance storage and robust protection, such as backup applications, web content, email, databases, and data sharing.

System Environment - Departmental and Workgroup Servers

System bus interface type - PCI-X 64 bit / 133 MHz, PCI 33/66

External Connections - One x 4 Infiniband / Serial Attached SCSI (SFF8470)

Internal Connections - One 32 pin x 4 Serial Attached SCSI (SFF8484)

System Requirements - Servers Type IA-32, AMD-32, EM64T and AMD-64

32/64-bit PCI 2.2 or 32/64-bit PCI-X 133 slot

Warranty - 3 years

RAID levels - Adaptec HostRAID 0, 1, and 10

Key RAID Features

  • Boot Array Support
  • Automatic recovery
  • Management with Adaptec Storage Manager software
  • Background initialization

Board dimensions - 6.35cm x 17.78cm (including external connector)

Operating temperature - 0 ° to 50 ° C

Power Dissipation - 4 W

Mean Time Between Failures (MTBF) - 1,692,573 hours @ 40 °C

Adaptec ASR-4800SAS

The 4800 adapter is functionally more advanced. This model is positioned for faster servers and workstations. It supports nearly every RAID array type: everything available in the younger model, plus RAID 5, 50 and JBOD, as well as the Adaptec Advanced Data Protection Suite with RAID 1E, 5EE, 6, 60 and Copyback Hot Spare, with an optional Snapshot Backup feature, for tower servers and high-density rack servers.

The model comes in the same packaging as the junior model with the design in the same "aviation" style.

The bundle contains almost the same items as the younger card's:

  • ASR-4800SAS controller
  • Full-length bracket
  • CD with drivers and the complete manual
  • Storage Manager software disc
  • Quick-start manual
  • Two cables (SFF8484 to 4x SFF8482 with power connectors), 1 m each

The controller supports the 133 MHz PCI-X bus; there is also the functionally similar 4805 model, which uses a PCI-E x8 bus instead. The adapter provides the same eight SAS ports, but all eight are implemented internally, so the board carries two SFF8484 connectors (for the two supplied cables). There is also an external SFF8470 connector for four channels; when it is used, one of the internal connectors is disabled.

As in the junior device, the number of disks can be expanded to 128 with expanders. The main difference of the ASR-4800SAS from the ASR-48300 12C, however, is the presence of 128 MB of DDR2 ECC memory used as a cache, which speeds up work with the disk array and optimizes work with small files. An optional battery pack is available to preserve the cache contents during power outages. The general characteristics of the board are as follows.

Benefits - Connect high-performance storage and data protection devices for servers and workstations

Customer Needs - Ideal for supporting server and workgroup applications that require consistently high read / write performance such as streaming video, web content, video on demand, fixed content and reference data storage.

  • System Environment - Departmental and Workgroup Servers and Workstations
  • System Bus Interface Type - PCI-X Host Interface 64-bit / 133 MHz
  • External Connections - SAS connector one x4
  • Internal Connections - SAS connectors two x4
  • Data Transfer Rate - Up to 3 Gb/s per port
  • System Requirements - Intel or AMD architecture with free 64-bit 3.3v PCI-X slot
  • Supports EM64T and AMD64 architectures
  • Warranty - 3 years
  • Standard RAID Levels - RAID 0, 1, 10, 5, 50
  • Standard RAID Capabilities - Hot Spare, RAID Level Migration, Online Capacity Expansion, Optimized Disk Utilization, S.M.A.R.T. and SNMP support, plus capabilities from the Adaptec Advanced Data Protection Suite, including:
  1. Hot Space (RAID 5EE)
  2. Striped Mirror (RAID 1E)
  3. Dual Drive Failure Protection (RAID 6)
  4. Copyback hot spare
  • Additional RAID Features - Snapshot Backup
  • Board dimensions - 24cm x 11.5cm
  • Operating temperature - 0 to 55 degrees C
  • Mean Time Between Failures (MTBF) - 931,924 hours @ 40 °C

Testing

Testing adapters is not easy, the more so since we have not yet accumulated much experience with SAS. We therefore decided to compare the performance of SAS hard drives against SATA drives. For this we used our 73 GB Hitachi HUS151473VLS300 SAS drives (15,000 rpm, 16 MB buffer) and the 150 GB WD Raptor WD1500ADFD SATA150 (10,000 rpm, 16 MB buffer) - a direct comparison of two fast drives with different interfaces on the two controllers. The drives were tested in the HD Tach program, with the following results.

Adaptec ASR-48300 12C

Adaptec ASR-4800SAS

It was logical to expect a SAS hard drive to be faster than SATA, even though for the comparison we took the fastest WD Raptor, which can easily compete with many 15,000 rpm SCSI drives. As for the differences between the controllers, they are minimal. The older model, of course, provides more functions, but the need for them arises only in corporate use of such devices: the dedicated RAID levels and the additional onboard cache are enterprise features. The average home user is unlikely to cram eight hard disks in a redundant RAID array into a home PC, however heavily modified - more likely, four disks will go into a 0+1 array and the rest will be used for data. This is where the ASR-48300 12C comes in handy. Moreover, some enthusiast motherboards do have a PCI-X interface. The model's advantages for home use are its relatively affordable price of $350 (compared with the cost of eight hard drives) and its ease of use (insert and connect). In addition, 10K 2.5-inch hard drives are of particular interest: they consume less power, run cooler and take up less space.

Conclusions

This review is unusual for our site and is partly aimed at gauging reader interest in dedicated hardware reviews. Today we looked at two interesting RAID controllers from Adaptec, a well-known and well-established manufacturer of server equipment; it is also an attempt at the first analytical article on our website.

As for today's heroes, the Adaptec SAS controllers, we can say that both products have turned out well. The younger model, the ASR-48300 at $350, could well take root in a high-performance home computer, and all the more so in an entry-level server (or a computer playing that role). It has all the prerequisites: convenient Adaptec Storage Manager software, support for 8 to 128 disks, and the basic RAID levels.

The older model is designed for serious tasks. It can, of course, also be used in inexpensive servers, but only where there are particular requirements for small-file performance and storage reliability, because the card supports all enterprise-class redundant RAID levels and carries 128 MB of fast DDR2 cache with error-correcting code (ECC). The controller costs $950.

ASR-48300 12C

Pros of the model

  • Availability
  • Supports 8 to 128 drives
  • Ease of use
  • Stable work
  • Adaptec's reputation
  • PCI-X slot (all that is missing for wider popularity is support for the more common PCI-E)

ASR-4800SAS

  • Stable work
  • Manufacturer reputation
  • Good functionality
  • Upgrade availability (software and hardware)
  • PCI-E version available
  • Ease of use
  • Supports 8 to 128 drives
  • 8 internal SAS channels
  • Not very suitable for budget and home applications.

Over the past two years, a number of changes have accumulated:

  • Supermicro is ditching the proprietary "flipped" UIO form factor for controllers. Details will be below.
  • LSI 2108 (SAS2 RAID with 512MB cache) and LSI 2008 (SAS2 HBA with optional RAID support) are still in service. Products based on these chips, both from LSI and from OEM partners, are fairly well debugged and still relevant.
  • The LSI 2208 has appeared (the same SAS2 RAID with the LSI MegaRAID stack, only with a dual-core processor and 1024 MB of cache), as well as the LSI 2308 (an improved version of the LSI 2008 with a faster processor and PCI-E 3.0 support).

Moving from UIO to WIO

As you remember, UIO cards are ordinary PCI-E x8 cards with the entire component base on the reverse side, i.e. facing up when installed in the left riser. This form factor was needed to install cards in the lowest slot of the server, allowing four cards to be placed in the left riser. UIO is not only a form factor of expansion cards; it also means cases designed for risers, the risers themselves, and motherboards of a special form factor with a cutout for the bottom expansion slot and slots for installing risers.
This solution had two problems. First, the non-standard form factor of the expansion cards limited the customer's choice: only a few SAS, InfiniBand and Ethernet controllers exist in the UIO form factor. Second, there are not enough PCI-E lanes in the riser slots - only 36, of which just 24 go to the left riser, which is clearly insufficient for four cards with PCI-E x8.
So what is WIO? First, it became possible to place four cards in the left riser without having to "flip the sandwich butter-side up", and risers for ordinary cards appeared (RSC-R2UU-A4E8+). Then the shortage of lanes (there are now 80) was solved by using slots with a higher contact density.
UIO riser RSC-R2UU-UA3E8 +
WIO riser RSC-R2UW-4E8

Results:
  • WIO risers cannot be installed on UIO motherboards (such as X8DTU-F).
  • UIO risers cannot be installed on new boards that are designed for WIO.
  • There are risers for WIO (on the motherboard) that have a UIO slot for cards. In case you still have UIO controllers. They are used in platforms for Socket B2 (6027B-URF, 1027B-URF, 6017B-URF).
  • There will be no new controllers in the UIO form factor. For example, the USAS2LP-H8iR controller on the LSI 2108 chip will be the last one, and there will be no LSI 2208 in UIO - just a regular MD2 card with PCI-E x8.

PCI-E controllers

At the moment, three types are relevant: RAID controllers based on the LSI 2108/2208 and HBAs based on the LSI 2308. There is also a mysterious SAS2 HBA, the AOC-SAS2LP-MV8 on a Marvell 9480 chip, but owing to its exotic nature we will not cover it here. Most use cases for internal SAS HBAs are ZFS storage under FreeBSD and various Solaris flavors; because these operating systems pose no support problems, the choice falls on the LSI 2008/2308 in 100% of cases.
LSI 2108
In addition to the UIO AOC-USAS2LP-H8iR mentioned above, two more controllers have appeared:

AOC-SAS2LP-H8iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 8 internal ports (2 SFF-8087 connectors). An analogue of the LSI 9260-8i controller, but manufactured by Supermicro; there are minor differences in board layout, and the price is $40-50 lower than LSI's. All additional LSI options are supported: activation of FastPath and CacheCade 2.0, and battery protection of the cache with the LSIiBBU07 and LSIiBBU08 (the BBU08 is now preferable: it has an extended temperature range and comes with a cable for remote mounting).
Despite the introduction of more efficient controllers based on the LSI 2208, the LSI 2108 remains relevant thanks to its reduced price. Its performance with conventional HDDs is sufficient in any scenario, and its IOPS ceiling with SSDs of 150,000 is more than enough for most budget solutions.

AOC-SAS2LP-H4iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 4 internal + 4 external ports. An analogue of the LSI 9280-4i4e controller. Convenient in expander cases, since there is no need to route the expander's output outside to connect additional JBODs, and in 1U enclosures for 4 disks when the option of expanding the number of disks must be kept open.
LSI 2208

AOC-S2208L-H8iR
LSI 2208, SAS2 RAID 0/1/5/6/10/50/60, 1024 MB cache, 8 internal ports (2 SFF-8087 connectors). An analogue of the LSI 9271-8i controller. The LSI 2208 is a further development of the LSI 2108: the processor became dual-core, which raised the IOPS ceiling to 465,000; PCI-E 3.0 support was added; and the cache was increased to 1 GB.
The controller supports BBU09 battery protection of the cache and CacheVault flash protection. Supermicro supplies them under part numbers BTR-0022L-LSI00279 and BTR-0024L-LSI00297, but it is easier to buy them from us through the LSI sales channel (the second half of each part number is the native LSI part number). MegaRAID Advanced Software Options activation keys are also supported, part numbers AOC-SAS2-FSPT-ESW (FastPath) and AOCCHCD-PRO2-KEY (CacheCade Pro 2.0).
LSI 2308 (HBA)

AOC-S2308L-L8i and AOC-S2308L-L8e
LSI 2308, SAS2 HBA (with IR firmware: RAID 0/1/1E), 8 internal ports (2 SFF-8087 connectors). These are the same controller shipped with different firmware: the AOC-S2308L-L8e carries IT firmware (a pure HBA), the AOC-S2308L-L8i carries IR firmware (with RAID 0/1/1E support). The difference is that the L8i can work with both IR and IT firmware, while the L8e works only with IT (the IR firmware is locked). An analogue of the LSI 9207-8i controller. Differences from the LSI 2008: a faster chip (800 MHz, raising the IOPS ceiling to 650 thousand) and PCI-E 3.0 support. Applications: software RAID (ZFS, for example) and budget servers.
There will be no cheap RAID-5-capable controllers based on this chip (the iMR stack; among off-the-shelf controllers, the LSI 9240).
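
Since the stated use case for these IT-firmware HBAs is software RAID such as ZFS, here is a minimal sketch of what that looks like. The FreeBSD-style device names da0..da7 are placeholders, not taken from the article; adjust them for your system.

```python
# Hedged sketch: building a ZFS raidz2 pool on eight drives exposed by an
# IT-firmware HBA such as the AOC-S2308L-L8e.
import subprocess

disks = [f"da{i}" for i in range(8)]   # placeholder FreeBSD device names
# raidz2 tolerates two drive failures - the ZFS analogue of RAID 6; parity
# is computed by the host CPU, which is why a plain HBA suffices here.
subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```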

Onboard controllers

In its latest products (X9 boards and the platforms built on them), Supermicro marks the presence of an LSI SAS2 controller with the digit "7" in the part number, and a chipset SAS controller (Intel C600) with the digit "3". However, no distinction is made between the LSI 2208 and 2308, so be careful when choosing a board.
  • The LSI 2208-based controller soldered onto motherboards supports a maximum of 16 disks. When a 17th is added, it simply is not detected, and the MSM log shows the message "PD is not supported". The compensation is a significantly lower price: for example, the bundle "X9DRHi-F + external LSI 9271-8i controller" costs about $500 more than the X9DRH-7F with the LSI 2208 on board. Bypassing this limitation by reflashing the controller into an LSI 9271 does not work: flashing a different SBR block, as was done with the LSI 2108, does not help.
  • Another feature is the lack of support for CacheVault modules, the boards simply lack space for a special connector, so only BBU09 is supported. The possibility of installing the BBU09 depends on the enclosure used. For example, LSI 2208 is used in 7127R-S6 blade servers, there is a BBU connector there, but to mount the module itself, you need an additional MCP-640-00068-0N Battery Holder Bracket.
  • The SAS HBA (LSI 2308) firmware will be needed now, since in DOS on any of the boards with LSI 2308 sas2flash.exe does not start with the error "Failed to initialize PAL".

Controllers in Twin and FatTwin platforms

Some 2U Twin 2 platforms are available in several versions, with different kinds of controllers. For example:
  • 2027TR-HTRF + - SATA chipset
  • 2027TR-H70RF + - LSI 2008
  • 2027TR-H71RF + - LSI 2108
  • 2027TR-H72RF + - LSI 2208
Such variety is possible because the controllers are located on a special adapter board that connects to a dedicated slot on the motherboard and to the disk backplane.
BPN-ADP-SAS2-H6IR (LSI 2108)


BPN-ADP-S2208L-H6iR (LSI 2208)

BPN-ADP-SAS2-L6i (LSI 2008)

Supermicro xxxBE16 / xxxBE26 cases

Another topic directly related to controllers is the updating of the enclosures themselves. Versions have appeared with an additional cage for two 2.5" drives on the rear panel of the chassis, intended for a dedicated boot disk (or boot mirror). Of course, the system can also be booted from a small volume carved out of another disk group, or from extra disks mounted inside the chassis (in 846 chassis you can install additional fasteners for one 3.5" or two 2.5" drives), but the updated modifications are much more convenient:




Moreover, these additional disks do not have to be connected to the chipset SATA controller. Using an SFF8087->4xSATA cable, they can be connected to the main SAS controller through a SAS expander output.
P.S. Hope the information was helpful. Keep in mind that the most complete information and technical support for Supermicro, LSI, Adaptec by PMC and other vendors can be obtained from True System.

Briefly about modern RAID controllers

Currently, RAID controllers as standalone solutions are focused exclusively on the specialized server segment of the market. Indeed, all modern motherboards for consumer PCs (not server boards) have integrated firmware SATA RAID controllers, which are more than enough for PC users. However, keep in mind that these controllers are oriented exclusively toward the Windows operating system. In Linux-family operating systems, RAID arrays are created in software, and all calculations are shifted from the RAID controller to the central processor.

Servers traditionally use either software / hardware or purely hardware RAID controllers. A hardware RAID controller allows you to create and maintain a RAID array without the need for an operating system or CPU. Such RAID arrays are seen by the operating system as a single disk (SCSI disk). In this case, no specialized driver is needed - the standard (included in the operating system) SCSI disk driver is used. In this regard, hardware controllers are platform independent, and the RAID array is configured through the controller BIOS. A hardware RAID controller does not use the central processor when calculating all checksums, etc., since it uses its own specialized processor and RAM for calculations.

Software and hardware controllers require a dedicated driver that replaces the standard SCSI disk driver. In addition, software and hardware controllers are equipped with management utilities. In this regard, software and hardware controllers are tied to a specific operating system. All necessary calculations in this case are also performed by the processor of the RAID controller itself, but the use of a software driver and management utility allows you to control the controller through the operating system, and not only through the controller BIOS.

Since SAS drives have already replaced SCSI drives in servers, all modern server RAID controllers are designed to support either SAS or SATA drives, which are also used in servers.

Last year, drives with the new SATA 3 (SATA 6 Gb/s) interface began to appear on the market, gradually displacing the SATA 2 (SATA 3 Gb/s) interface. Similarly, SAS (3 Gb/s) drives have given way to SAS 2.0 (6 Gb/s) drives. Naturally, the new SAS 2.0 standard is fully compatible with the old one.

Accordingly, RAID controllers supporting the SAS 2.0 standard appeared. It might seem pointless to move to SAS 2.0 when even the fastest SAS disks read and write at no more than 200 MB/s, making the 3 Gb/s (300 MB/s) bandwidth of the SAS protocol sufficient for them.

Indeed, when each drive is connected to a separate port on the RAID controller, 3 Gb/s of bandwidth (300 MB/s in theory) is sufficient. However, not only individual disks but whole disk enclosures can be connected to each port of the RAID controller. In that case one SAS channel is shared by several drives at once, and 3 Gb/s is no longer enough. In addition, SSDs must be taken into account, whose read and write speeds have already passed the 300 MB/s bar: for example, the new Intel SSD 510 offers sequential read speeds of up to 500 MB/s and sequential write speeds of up to 315 MB/s.

After taking a quick look at the current situation in the server RAID controller market, let's take a look at the characteristics of the LSI 3ware SAS 9750-8i controller.

3ware SAS 9750-8i RAID Controller Specifications

This RAID controller is based on the specialized LSI SAS2108 XOR processor (PowerPC architecture, 800 MHz clock) paired with 512 MB of 800 MHz DDRII memory with error correction (ECC).

The LSI 3ware SAS 9750-8i controller is compatible with SATA and SAS drives (both HDDs and SSDs are supported) and allows up to 96 devices to be connected using SAS expanders. Importantly, the controller supports drives with the SATA 600 MB/s (SATA III) and SAS 2 interfaces.

For connecting drives, the controller has eight ports, physically combined into two Mini-SAS SFF-8087 connectors (four ports per connector). That is, with disks connected directly to the ports, a total of eight disks can be attached to the controller, and with disk cages connected to each port, the total number of disks can be increased to 96. Each of the eight ports has a bandwidth of 6 Gb/s, corresponding to the SAS 2 and SATA III standards.

Naturally, connecting drives or disk cages to this controller requires specialized cables with an internal Mini-SAS SFF-8087 connector on one end and, on the other end, a connector that depends on what exactly is being attached. For example, when connecting SAS disks directly to the controller, you must use a cable that has a Mini-SAS SFF-8087 connector on one end and four SFF-8482 connectors on the other, which plug directly into the SAS disks. Note that the cables themselves are not included in the package and must be purchased separately.

The LSI 3ware SAS 9750-8i controller has a PCI Express 2.0 x8 interface, which provides 64 Gbit/s of bandwidth (32 Gbit/s in each direction) - clearly sufficient for eight fully loaded SAS ports at 6 Gb/s each. Also note that the controller has a special connector for the optional LSIiBBU07 backup battery.
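As a rough sanity check of these numbers, here is a small sketch (ours, not the vendor's; it uses only the figures quoted in this article and assumes 8b/10b encoding on the PCIe 2.0 link):

```python
# PCI Express 2.0 x8: 5 GT/s per lane with 8b/10b encoding.
pcie_payload_gbit = 8 * 5.0 * 8 / 10   # 32 Gbit/s of payload per direction
pcie_gbyte_s = pcie_payload_gbit / 8   # = 4.0 GB/s

# The fastest SAS HDDs stream no more than ~200 MB/s (see above), so eight
# of them generate at most:
hdd_stream_gbyte_s = 8 * 0.2           # = 1.6 GB/s

print(pcie_gbyte_s, hdd_stream_gbyte_s)  # 4.0 vs 1.6 GB/s
```

In other words, mechanical drives cannot saturate the slot even in the best case; only SSDs change this picture.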

It is important to note that this controller requires a driver, that is, it is a software-hardware RAID controller. Supported operating systems include Windows Vista, Windows Server 2008, Windows Server 2003 x64, Windows 7, Windows 2003 Server, Mac OS X, Fedora Core 11, Red Hat Enterprise Linux 5.4, OpenSuSE 11.1, SuSE Linux Enterprise Server (SLES) 11, OpenSolaris 2009.06, VMware ESX/ESXi 4.0/4.0 update-1 and other Linux systems. The package also includes 3ware Disk Manager 2 software for managing RAID arrays from within the operating system.

The LSI 3ware SAS 9750-8i controller supports the standard RAID types: RAID 0, 1, 5, 6, 10 and 50. Perhaps the only array type not supported is RAID 60. This is because the controller requires at least five disks to create a RAID 6 array (theoretically, RAID 6 can be built on four disks). A RAID 60 array on this controller would accordingly require at least ten disks, more than its eight ports can accept with directly attached drives.

Support for RAID 1 is hardly relevant for such a controller, since this type of array is built on just two disks, and using an eight-port controller for only two disks is illogical and extremely wasteful. Support for RAID 0, 5, 6, 10 and 50, by contrast, is very relevant. Perhaps we are too quick to include RAID 0: this array has no redundancy and therefore does not provide reliable data storage, so it is rarely used in servers; in theory, however, it is the fastest in terms of read and write speed. Let us recall how the different types of RAID arrays differ from one another and what they are.

RAID levels

The term "RAID array" appeared in 1987, when the American researchers Patterson, Gibson and Katz of the University of California at Berkeley described in their article "A Case for Redundant Arrays of Inexpensive Disks (RAID)" how multiple low-cost hard drives can be combined into a single logical device so that system capacity and performance increase, while the failure of individual drives does not lead to failure of the entire system. Almost 25 years have passed since that article was published, but the technology of building RAID arrays has not lost its relevance today. The only thing that has changed since then is the meaning of the RAID acronym: since in practice RAID arrays were not built on cheap disks, the word Inexpensive was changed to Independent, which was more in line with reality.

Fault tolerance in RAID arrays is achieved through redundancy, that is, part of the disk space is allocated for service purposes, becoming inaccessible to the user.

The increase in the performance of the disk subsystem is provided by the simultaneous operation of several disks, and in this sense, the more disks in the array (up to a certain limit), the better.

Disk drives in an array can be used with either parallel or independent access. With parallel access, disk space is divided into blocks (stripes) for recording data, and the information to be written is divided into blocks of the same size. When writing, separate blocks go to different disks, and several blocks are written to different disks simultaneously, which increases write performance. The necessary information is likewise read in separate blocks simultaneously from several disks, which also increases performance in proportion to the number of disks in the array.

Note that the parallel access model is realized only if the size of a write request is larger than the block size itself; otherwise, writing multiple blocks in parallel is practically impossible. Imagine that an individual block is 8 KB and a write request is 64 KB. The original information is then cut into eight 8 KB blocks. With a four-disk array, four blocks - 32 KB - can be written at a time. Obviously, in this example the write and read speeds are four times higher than with a single disk. This holds only for the ideal situation, however: the request size is not always a multiple of the block size and the number of disks in the array.

If the size of the data being written is less than the block size, then a fundamentally different model is implemented - independent access. Moreover, this model can also be used when the size of the recorded data is greater than the size of one block. With independent access, all the data of a single request is written to a separate disk, that is, the situation is identical to working with one disk. The advantage of the independent access model is that if multiple write (read) requests are received at the same time, they will all be executed on separate disks independently of each other. This situation is typical, for example, for servers.

Different types of access give rise to different types of RAID arrays, usually characterized by RAID levels. Besides the access type, RAID levels differ in how redundant information is placed and generated. Redundant information can either be stored on a dedicated disk or distributed across all disks.

Currently, several RAID levels are in wide use: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60. Previously, RAID 2, RAID 3 and RAID 4 levels also existed, but they are no longer used and modern RAID controllers do not support them. Note that all modern RAID controllers also support the JBOD (Just a Bunch Of Disks) mode, which is not a RAID array but simply the connection of individual disks to the controller.

RAID 0

RAID 0, or striping, is not, strictly speaking, a RAID array, since it is not redundant and does not provide reliable data storage. Historically, however, it is also called a RAID array. A RAID 0 array (Fig. 1) can be built on two or more disks and is used where high disk subsystem performance is needed and data storage reliability is not critical. When a RAID 0 array is created, information is split into blocks (called stripes) that are written to separate disks simultaneously, that is, a system with parallel access is created (if, of course, the block size allows it). By enabling simultaneous I/O on multiple disks, RAID 0 provides the fastest transfer rates and the most efficient use of disk space, since no room is needed for checksums. Its implementation is very simple. RAID 0 is mainly used where fast transfer of large amounts of data is required.

Fig. 1. RAID 0 array

In theory, the increase in read and write speed should be a multiple of the number of disks in the array.
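To make the striping model concrete, here is a minimal sketch of how RAID 0 maps logical stripes onto disks (the helper function is ours, purely for illustration):

```python
# RAID 0: logical stripe i lands on disk (i mod n) at row (i div n),
# so n consecutive stripes can be transferred in parallel.

def raid0_target(stripe_index: int, n_disks: int) -> tuple[int, int]:
    """Return (disk, row): which disk holds the stripe and at what offset."""
    return stripe_index % n_disks, stripe_index // n_disks

# The earlier example: a 64 KB request split into eight 8 KB stripes on a
# four-disk array touches each disk exactly twice, hence the ~4x speed-up.
for i in range(8):
    print(i, raid0_target(i, 4))
```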

The reliability of a RAID 0 array is obviously lower than the reliability of any of the disks individually and decreases with an increase in the number of disks included in the array, since the failure of any of them leads to the inoperability of the entire array. If the mean time between failures of each disk is MTTF disk, then the mean time between failures of a RAID 0 array consisting of n disks is equal to:

MTTF(RAID 0) = MTTF(disk) / n.

If we denote the probability of failure of one disk over a certain period of time by p, then for a RAID 0 array of n disks the probability that at least one disk fails (the probability of array failure) is:

P(array failure) = 1 - (1 - p)^n.

For example, if the probability of a single disk failing within three years of operation is 5%, then the probability of a two-disk RAID 0 array failing is already 9.75%, and of an eight-disk array, 33.7%.

RAID 1

A RAID 1 array (Figure 2), also called a mirror, is a 100 percent redundant array of two drives. That is, the data is completely duplicated (mirrored), due to which a very high level of reliability (as well as cost) is achieved. Note that RAID 1 does not require pre-partitioning of disks and data into blocks. In the simplest case, two drives contain the same information and are one logical drive. If one disk fails, its functions are performed by another (which is absolutely transparent to the user). The array is restored by simple copying. In addition, in theory, a RAID 1 array should double the read speed, since this operation can be performed simultaneously from two disks. This information storage scheme is used mainly in cases where the cost of data security is much higher than the cost of implementing the storage system.

Fig. 2. RAID 1 array

If, as before, we denote the probability of failure of one disk over a certain period of time by p, then for a RAID 1 array the probability that both disks fail simultaneously (the probability of array failure) is:

P(array failure) = p^2.

For example, if the probability of failure of one disk within three years of operation is 5%, then the probability of simultaneous failure of two disks is already 0.25%.
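Both formulas are easy to check numerically; the short sketch below reproduces the figures given above:

```python
# Failure probabilities for RAID 0 and RAID 1, with p = 0.05
# (a 5% chance that a single disk fails within three years).

p = 0.05

def p_fail_raid0(n: int) -> float:
    """P = 1 - (1 - p)^n: the array dies if any of the n disks dies."""
    return 1 - (1 - p) ** n

def p_fail_raid1() -> float:
    """P = p^2: the mirror dies only if both disks die."""
    return p ** 2

print(f"RAID 0, 2 disks: {p_fail_raid0(2):.2%}")  # 9.75%
print(f"RAID 0, 8 disks: {p_fail_raid0(8):.2%}")  # 33.66%
print(f"RAID 1, 2 disks: {p_fail_raid1():.2%}")   # 0.25%
```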

RAID 5

A RAID 5 array (Figure 3) is a fault-tolerant disk array with distributed checksum storage. When writing, the data stream is divided into blocks (stripes), which are written simultaneously to all disks of the array in circular order.

Fig. 3. RAID 5 array

Suppose the array contains n disks and the stripe size is d. For each portion of (n-1) stripes, a checksum p is calculated.

Stripe d_1 is written to the first disk, stripe d_2 to the second, and so on up to stripe d_(n-1), which is written to the (n-1)-th disk. The checksum p_n is then written to the n-th disk, and the process repeats cyclically from the first disk, on which stripe d_n is written.

Writing the (n-1) stripes and their checksum is performed simultaneously across all n disks.

The checksum is calculated with a bitwise exclusive OR (XOR) operation on the data blocks being written. So, if there are n hard drives and d denotes a data block (stripe), the checksum is calculated by the formula:

p_n = d_1 ⊕ d_2 ⊕ ... ⊕ d_(n-1).

If any disk fails, the data on it can be recovered from the checksum and from the data remaining on the healthy disks. Indeed, using the identities (a ⊕ b) ⊕ b = a and a ⊕ a = 0, we get:

d_k = d_1 ⊕ d_2 ⊕ ... ⊕ d_(k-1) ⊕ d_(k+1) ⊕ ... ⊕ d_(n-1) ⊕ p_n.

Thus, if the disk holding block d_k fails, the block can be restored from the values of the remaining blocks and the checksum.
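The entire mechanism fits in a few lines; below is a minimal sketch with toy two-byte stripes (illustrative code, not controller firmware):

```python
# RAID 5 parity in miniature: the checksum is the bytewise XOR of the data
# stripes, and a lost stripe is the XOR of everything that survived.

from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripes = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # d1..d3 on three disks
parity = xor_blocks(stripes)                        # p, on the fourth disk

# Disk 2 fails and takes stripe d2 with it; recover it from the survivors.
restored = xor_blocks([stripes[0], stripes[2], parity])
assert restored == stripes[1]
```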

In a RAID 5 array, all disks must be the same size, but the total capacity of the disk subsystem available for data is smaller by exactly one disk. For example, if five disks of 100 GB each are used, the actual array size is 400 GB, because 100 GB is set aside for parity information.

A RAID 5 array can be built on three or more hard drives. As the number of hard drives in an array increases, its redundancy decreases. Note also that a RAID 5 array can be recovered if only one drive fails. If two drives fail at the same time (or if a second drive fails while rebuilding the array), then the array cannot be recovered.

RAID 6

As shown, a RAID 5 array can be rebuilt if a single disk fails. But sometimes a higher level of reliability is needed than RAID 5 provides. In that case a RAID 6 array can be used (Figure 4), which allows the array to be recovered even if two drives fail simultaneously.

Fig. 4. RAID 6 array

RAID 6 is similar to RAID 5, except that it uses not one but two checksums, cyclically distributed across the drives. The first checksum, p, is calculated by the same algorithm as in RAID 5, that is, as an XOR operation over the data blocks written to the different disks:

p_n = d_1 ⊕ d_2 ⊕ ... ⊕ d_(n-1).

The second checksum is calculated by a different algorithm. Without going into the mathematical details: it is also an XOR operation over the data blocks, but each data block is first multiplied by a polynomial coefficient:

q_n = g_1·d_1 ⊕ g_2·d_2 ⊕ ... ⊕ g_(n-1)·d_(n-1).

Accordingly, the capacity of two disks in the array is given over to checksums. In theory, a RAID 6 array can be created on four or more drives, but many controllers require a minimum of five.
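For the curious, here is a hedged sketch of how such a second checksum can be computed. Taking the coefficients g_i as powers of 2 in the Galois field GF(2^8) is one common choice (used, for example, by Linux md RAID 6); actual controllers may differ in the details:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        overflow = a & 0x80
        a = (a << 1) & 0xFF
        if overflow:
            a ^= 0x1D          # reduce by the field polynomial
    return product

def p_and_q(data: list[int]) -> tuple[int, int]:
    """P and Q checksums for one byte position across the data disks."""
    p = q = 0
    g = 1                      # g_i = 2^i, starting with 2^0 = 1
    for d in data:
        p ^= d                 # plain XOR parity, as in RAID 5
        q ^= gf_mul(g, d)      # each block pre-multiplied by its coefficient
        g = gf_mul(g, 2)
    return p, q

print(p_and_q([0x11, 0x22, 0x33]))
```

Because p and q are computed independently, any two simultaneous disk failures leave a solvable system of two equations, which is what makes double-failure recovery possible.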

Keep in mind that the performance of a RAID 6 array is usually 10-15% lower than that of a RAID 5 array with the same number of disks, owing to the larger volume of calculations performed by the controller: the second checksum must be computed, and more disk blocks must be read and rewritten on every block write.

RAID 10

RAID 10 (Figure 5) is a combination of levels 0 and 1. A minimum of four drives is required for this level. In a four-disk RAID 10 array, the disks are combined in pairs into RAID 1 arrays, and both of these arrays are then combined as logical disks into a RAID 0 array. Another approach is also possible: the disks are first combined into RAID 0 arrays, and the resulting logical disks into a RAID 1 array; a sketch of the block placement for the first approach follows the figure below.

Fig. 5. RAID 10 array
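To illustrate the first approach, here is a minimal sketch of RAID 10 block placement (the helper is ours and assumes disks 2k and 2k+1 form mirror pair k):

```python
# RAID 10 address mapping: blocks are striped over mirror pairs, and every
# block is written to both disks of its pair.

def raid10_targets(block_index: int, n_disks: int) -> tuple[int, int, int]:
    """Return (row, primary_disk, mirror_disk) for a logical block."""
    n_pairs = n_disks // 2                # n_disks is assumed to be even
    pair = block_index % n_pairs          # which mirror pair gets the block
    row = block_index // n_pairs          # stripe position within the pair
    return row, 2 * pair, 2 * pair + 1

# Four disks: logical block 5 goes to row 2 on disks 2 and 3.
print(raid10_targets(5, 4))               # (2, 2, 3)
```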

RAID 50

RAID 50 is a mix of levels 0 and 5 (Figure 6). The minimum required for this level is six disks. In a RAID 50 array, two RAID 5 arrays are first created (at least three disks in each), which are then combined as logical drives into a RAID 0 array.

Fig. 6. RAID 50 array

LSI 3ware SAS 9750-8i Controller Test Methodology

To test the LSI 3ware SAS 9750-8i RAID controller, we used a specialized test suite IOmeter 1.1.0 (version 2010.12.02). The test bench had the following configuration:

  • processor - Intel Core i7-990 (Gulftown);
  • motherboard - GIGABYTE GA-EX58-UD4;
  • memory - DDR3-1066 (3 GB, three-channel operation mode);
  • system drive - WD Caviar SE16 WD3200AAKS;
  • video card - GIGABYTE GeForce GTX480 SOC;
  • RAID controller - LSI 3ware SAS 9750-8i;
  • SAS drives attached to the RAID controller are Seagate Cheetah 15K.7 ST3300657SS.

Testing was carried out under Microsoft Windows 7 Ultimate (32-bit) operating system.

We used the Windows RAID controller driver version 5.12.00.007, and also updated the controller firmware to version 5.12.00.007.

The system drive was connected to SATA, implemented through a controller integrated into the south bridge of the Intel X58 chipset, and SAS drives were connected directly to the ports of the RAID controller using two Mini-SAS SFF-8087 -> 4 SAS cables.

The RAID controller was installed in a PCI Express x8 slot on the motherboard.

The controller was tested with the following RAID arrays: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10 and RAID 50. The number of disks combined in a RAID array varied for each type of array from a minimum value to eight.

The stripe size on all RAID arrays did not change and was 256 KB.

Recall that the IOmeter package allows you to work both with disks on which a logical partition is created, and with disks without a logical partition. If a disk is tested without a logical partition created on it, then IOmeter works at the level of logical data blocks, that is, instead of the operating system, it sends commands to the controller to write or read LBA blocks.

If a logical partition is created on the disk, then initially the IOmeter utility creates a file on the disk that occupies the entire logical partition by default (in principle, the size of this file can be changed by specifying it in the number of 512 byte sectors), and then it already works with this file, that is, it reads or writes (overwrites) individual LBAs within this file. But again, IOmeter bypasses the operating system, that is, it directly sends requests to the controller to read / write data.

In general, when testing HDDs, practice shows there is almost no difference between the results for a disk with a logical partition and one without. At the same time, we consider it more correct to test without a logical partition, since then the results do not depend on the file system used (NTFS, FAT, ext, etc.). That is why we performed testing without creating logical partitions.

In addition, the IOmeter utility allows you to set the Transfer Request Size for writing / reading data, and the test can be performed both for sequential (Sequential) reads and writes, when LBA blocks are read and written sequentially one after another, and for random, when LBA blocks are read and written in random order. When generating a load scenario, you can set the test time, the percentage ratio between sequential and random operations (Percent Random / Sequential Distribution), as well as the percentage ratio between read and write operations (Percent Read / Write Distribution). In addition, the IOmeter utility automates the entire testing process and saves all results to a CSV file, which can then be easily exported to an Excel spreadsheet.
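Aggregating many runs straight from that CSV file is straightforward; in the sketch below the column names "Target" and "MBps" are assumptions, since the exact layout differs between IOmeter versions - adjust them to match your result file:

```python
import csv

def average_mbps(path: str, target: str) -> float:
    """Average throughput over all result rows for one test target."""
    with open(path, newline="") as f:
        rows = [r for r in csv.DictReader(f) if r.get("Target") == target]
    return sum(float(r["MBps"]) for r in rows) / len(rows)

# Hypothetical usage:
# print(average_mbps("results.csv", "RAID0-8disks"))
```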

Another setting the IOmeter utility allows is the alignment of I/O requests to hard disk sector boundaries (Align I/Os on). By default, IOmeter aligns request blocks to 512-byte sector boundaries, but an arbitrary alignment can also be specified. Most hard drives have a sector size of 512 bytes; only recently have drives with 4 KB sectors begun to appear. Recall that in HDDs a sector is the smallest addressable unit of data that can be written to or read from the disk.

When conducting testing, it is necessary to set the alignment of the blocks of data transfer requests by the size of the disk sector. Since Seagate Cheetah 15K.7 ST3300657SS drives have a sector size of 512 bytes, we used 512-byte sector alignment.

Using the IOmeter test suite, we measured the sequential read and write speed, as well as the random read and write speed of the created RAID array. The sizes of the transmitted data blocks were 512 bytes, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024 KB.

In the listed load scenarios, the test time for each data-block size was 5 minutes. Note also that in all tests we set the queue depth (# of Outstanding I/Os) to 4 in the IOmeter settings, which is typical for user applications.

Test results

Having reviewed the test results, we were surprised by the performance of the LSI 3ware SAS 9750-8i RAID controller - so much so that we began looking through our scripts for errors, and then repeated the testing many times with different RAID controller settings, changing the stripe size and the cache behavior. This, of course, affected the results, but did not change the general character of the dependence of data transfer rate on block size, a dependence we simply could not explain. The behavior of this controller seems completely illogical to us. First, the results are unstable: for each fixed data block size the speed changes periodically, and the averaged result has a large error. Note that the results of testing disks and controllers with the IOmeter utility are usually stable and vary only slightly.

Second, as the block size increases, the data rate should increase or, once saturated (when the rate reaches its maximum value), remain unchanged. With the LSI 3ware SAS 9750-8i, however, there is a sharp drop in data rate at certain block sizes. In addition, it remains a mystery to us why, with the same number of disks, the write speed of RAID 5 and RAID 6 arrays is higher than the read speed. In short, we cannot explain the behavior of the LSI 3ware SAS 9750-8i controller - all that remains is to state the facts.

Test results can be grouped in various ways: by load scenario, where for each load type results are given for all possible RAID arrays with different numbers of connected disks; by RAID type, where for each array type results with different numbers of disks are given for the sequential read, sequential write, random read and random write scenarios; or by the number of disks, where for each number of disks connected to the controller the results are given for all RAID arrays possible with that number of disks, in the same four scenarios.

We decided to group the results by array type since, in our opinion, despite the rather large number of graphs, this presentation is the clearest.

RAID 0

A RAID 0 array can be created with two to eight drives. The test results for a RAID 0 array are shown in Fig. 7-15.

Fig. 7. Sequential read and write speed with eight disks in a RAID 0 array

Fig. 8. Sequential read and write speed with seven disks in a RAID 0 array

Fig. 9. Sequential read and write speed with six disks in a RAID 0 array

Fig. 10. Sequential read and write speed with five disks in a RAID 0 array

Fig. 11. Sequential read and write speed with four disks in a RAID 0 array

Fig. 12. Sequential read and write speed with three disks in a RAID 0 array

Fig. 13. Sequential read and write speed with two disks in a RAID 0 array

Fig. 14. Random read speed in a RAID 0 array

Fig. 15. Random write speed in a RAID 0 array

It is clear that the fastest sequential read and write speeds in a RAID 0 array are achieved with eight disks. Note that with eight and seven disks the sequential read and write speeds are almost identical, while with fewer disks the sequential write speed becomes higher than the read speed.

There are also characteristic dips in sequential read and write speed at certain block sizes. For example, with eight and six disks in the array such dips are observed at data block sizes of 1 KB and 64 KB, and with seven disks at 1, 2 and 128 KB. Similar dips, at different block sizes, also occur with four, three and two disks in the array.

In terms of sequential read and write speeds (as a characteristic averaged over all block sizes), RAID 0 outperforms all other possible arrays in a configuration with eight, seven, six, five, four, three, and two drives.

Random access in a RAID 0 array is also quite interesting. The random read speed for each data block size is proportional to the number of disks in the array, which is logical. Moreover, at a block size of 512 KB there is a characteristic dip in random read speed for any number of disks in the array.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no speed drops. At the same time, it should be noted that the highest speed in this case is achieved not with eight, but with seven disks in the array. Next in terms of random write speed is an array of six disks, then five, and only then eight disks. Moreover, in terms of random write speed, an array of eight disks is almost identical to an array of four disks.

In terms of random write speed, RAID 0 outperforms all other possible arrays in configurations with eight, seven, six, five, four, three, and two drives. On the other hand, in terms of random read speed in a configuration with eight disks, RAID 0 is inferior to RAID 10 and RAID 50, but in a configuration with fewer disks, RAID 0 is the leader in random read speed.

RAID 5

A RAID 5 array can be created with three to eight drives. The test results for a RAID 5 array are shown in Fig. 16-23.

Fig. 16. Sequential read and write speed with eight disks in a RAID 5 array

Fig. 17. Sequential read and write speed with seven disks in a RAID 5 array

Fig. 18. Sequential read and write speed with six disks in a RAID 5 array

Fig. 19. Sequential read and write speed with five disks in a RAID 5 array

Fig. 20. Sequential read and write speed with four disks in a RAID 5 array

Fig. 21. Sequential read and write speed with three disks in a RAID 5 array

Fig. 22. Random read speed in a RAID 5 array

Fig. 23. Random write speed in a RAID 5 array

It is clear that the highest read and write speed is achieved with eight disks. Note that for a RAID 5 array, the sequential write speed is on average faster than the read speed. However, for a given request size, the sequential read speed can exceed the sequential write speed.

Note also the characteristic dips in sequential read and write speed at certain block sizes, for any number of disks in the array.

In sequential read and write speeds in a configuration with eight drives, RAID 5 is inferior to RAID 0 and RAID 50, but outperforms RAID 10 and RAID 6. In configurations with seven drives, RAID 5 is inferior in sequential read and write speed to RAID 0 and outperforms RAID 6 (other types of arrays are not possible with a given number of disks).

In six-drive configurations, RAID 5 is inferior to RAID 0 and RAID 50 in sequential read speed, and only to RAID 0 in sequential write speed.

In configurations with five, four, and three drives, RAID 5 is second only to RAID 0 in sequential read and write speeds.

Random access in a RAID 5 array is similar to that in RAID 0: at a block size of 512 KB there is a characteristic dip in random read speed for any number of disks in the array. Note, however, that in RAID 5 the random read speed depends only weakly on the number of disks in the array - it is roughly the same for any disk count.

In terms of random read speed, RAID 5 in a configuration with eight, seven, six, four and three drives is inferior to all other arrays. And only in a configuration with five drives does it slightly outperform a RAID 6 array.

In terms of random write speed, RAID 5 in an eight-disk configuration is second only to RAID 0 and RAID 50, and in configurations with seven, five, four and three disks, only to RAID 0.

In a six-drive configuration, RAID 5 is inferior in random write speed to RAID 0, RAID 50, and RAID 10.

RAID 6

The LSI 3ware SAS 9750-8i controller allows you to create a RAID 6 array with five to eight drives. The test results for a RAID 6 array are shown in Fig. 24-29.

Fig. 24. Sequential read and write speed with eight disks in a RAID 6 array

Fig. 25. Sequential read and write speed with seven disks in a RAID 6 array

We also note the characteristic dips in sequential read and write speed at certain block sizes, for any number of disks in the array.

In terms of sequential read speed, RAID 6 is inferior to all other arrays in configurations with any (from eight to five) number of disks.

In terms of sequential write speed, the situation is somewhat better. In a configuration with eight disks, RAID 6 outperforms RAID 10, and in a configuration with six disks, arrays of RAID 10 and RAID 50. However, in configurations with seven and five disks, when the creation of RAID 10 and RAID 50 arrays is not possible, this array is in last place for sequential write speed.

Random access in a RAID 6 array is similar to that in RAID 0 and RAID 5: at a block size of 512 KB there is a characteristic dip in random read speed for any number of disks in the array. Note that the maximum random read speed is achieved with six disks in the array, while with seven and eight disks the random read speed is almost the same.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no speed drops. In addition, the random write speed is proportional to the number of disks in the array, but the speed difference is insignificant.

In terms of random read speed, RAID 6 in a configuration with eight and seven drives is ahead of only RAID 5 and is inferior to all other possible arrays.

In a six-drive configuration, RAID 6 is inferior to RAID 10 and RAID 50 in random read speed, and in a five-drive configuration, it is inferior to RAID 0 and RAID 5.

In terms of random write speed, a RAID 6 array is inferior to all other possible arrays with any number of connected drives.

In general, we can state that the RAID 6 array is inferior in performance to the RAID 0, RAID 5, RAID 50 and RAID 10 arrays; in terms of performance, this array type comes last.

RAID 10

Fig. 33. Random read speed in a RAID 10 array

Fig. 34. Random write speed in a RAID 10 array

Typically, in arrays of eight and six disks, the sequential read speed is higher than the write speed, and in an array of four disks, these speeds are practically the same for any data block size.

For a RAID 10 array, as well as for all other considered arrays, a drop in sequential read and write speed is typical for certain sizes of data blocks for any number of disks in the array.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no speed drops. In addition, the random write speed is proportional to the number of disks in the array.

In terms of sequential read speed, the RAID 10 array trails the RAID 0, RAID 50 and RAID 5 arrays in configurations with eight, six and four disks, and in sequential write speed it is inferior even to RAID 6 - that is, it trails the RAID 0, RAID 50, RAID 5 and RAID 6 arrays.

On the other hand, in terms of random read speed, the RAID 10 array outperforms all other arrays in the configuration with eight, six and four disks. But in terms of random write speed, this array loses to RAID 0, RAID 50 and RAID 5 arrays in a configuration with eight disks, RAID 0 and RAID 50 arrays in a six-disk configuration, and RAID 0 and RAID 5 arrays in a four-disk configuration.

RAID 50

A RAID 50 array can be built on six or eight drives. The test results for a RAID 50 array are shown in Fig. 35-38.

In the random read scenario, as in all the other considered arrays, there is a characteristic drop in performance at a block size of 512 KB.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no speed drops. In addition, the random write speed is proportional to the number of disks in the array, but the difference in speed is insignificant and is observed only with a large (over 256 KB) data block size.

In terms of sequential read speed, the RAID 50 array is second only to the RAID 0 array (in a configuration with eight and six drives). In terms of sequential write speed, RAID 50 is also second only to RAID 0 in a configuration with eight drives, and in a configuration with six drives, it loses to RAID 0, RAID 5, and RAID 6.

On the other hand, in terms of random read and write speed, the RAID 50 array is second only to the RAID 0 array and is ahead of all other arrays with eight and six disks.

RAID 1

As we have already noted, a RAID 1 array, which can be built on only two disks, is impractical to use on such a controller. However, for the sake of completeness, we present the results for a RAID 1 array on two disks. The test results for a RAID 1 array are shown in Fig. 39 and 40.

Fig. 39. Sequential read and write speed in a RAID 1 array

Fig. 40. Random read and write speed in a RAID 1 array

For the RAID 1 array, as for all the other arrays considered, dips in sequential read and write speed are typical at certain data block sizes.

In the random read scenario, as well as for other arrays, there is a characteristic drop in performance with a block size of 512 KB.

In case of random writing, the speed increases with the size of the data block and there are no speed dips.

A RAID 1 array can only be compared with a RAID 0 array, since no other arrays are possible on two disks. It should be noted that the RAID 1 array outperforms the two-disk RAID 0 in all load scenarios except random read.

Conclusions

Our impressions from testing the LSI 3ware SAS 9750-8i controller together with Seagate Cheetah 15K.7 ST3300657SS SAS drives are rather mixed. On the one hand, it has excellent functionality; on the other, the speed dips at certain data block sizes are alarming, since they inevitably affect the performance of RAID arrays in real environments.

With the advent of a sufficiently large number of Serial Attached SCSI (SAS) peripherals, we can speak of the beginning of the corporate environment's transition to the new technology. SAS is not only the designated successor to UltraSCSI: it also enables new uses, taking system scalability to previously unimaginable heights. We decided to demonstrate the potential of SAS by taking a close look at the technology, host adapters, hard drives, and storage systems.

SAS is not an entirely new technology: it takes the best of both worlds. The first part of SAS is serial communication, which requires fewer physical wires and pins; the shift from parallel to serial transmission also eliminated the shared bus. Although the current SAS specification defines a throughput of 300 MB/s per port, less than the 320 MB/s of UltraSCSI, replacing the shared bus with point-to-point connections is a significant advantage. The second part of SAS is the SCSI protocol, which remains powerful and popular.

SAS can use a wide range of RAID varieties. Giants such as Adaptec and LSI Logic offer advanced features for expansion, migration and other capabilities in their products, including distributed RAID arrays spanning multiple controllers and drives.

Finally, most of the operations mentioned are performed on the fly. Here we should highlight the excellent products of AMCC/3Ware, Areca and Broadcom/RaidCore, which bring enterprise-class functions to the SATA space.

Compared to SATA, the traditional SCSI implementation is losing ground on all fronts except high-end enterprise solutions. SATA offers suitable hard drives at a good price and a wide range of solutions. And let's not forget another "smart" SAS feature: it fits easily into existing SATA infrastructures, because SAS host adapters work seamlessly with SATA drives. But you cannot connect a SAS drive to a SATA adapter.


Source: Adaptec.

First, it seems to us, we should turn to the history of SAS. The SCSI standard ("Small Computer System Interface") has always been regarded as a professional bus for connecting storage and certain other devices to computers. Hard drives for servers and workstations still use SCSI technology. Unlike the mainstream ATA standard, which allows only two drives per port, SCSI allows up to 15 devices on a single bus and offers a powerful command protocol. Devices must have a unique SCSI ID, which can be assigned either manually or via SCAM (SCSI Configured AutoMatically). Because device IDs on the buses of two or more SCSI adapters may not be unique, Logical Unit Numbers (LUNs) were added to help identify devices in complex SCSI environments.

SCSI hardware is more flexible and reliable than ATA (also called IDE, Integrated Drive Electronics). Devices can be connected both inside the computer and outside, and the cable length can be up to 12 m, provided the bus is correctly terminated (to avoid signal reflections). With the evolution of SCSI, numerous standards emerged stipulating different bus widths, clock speeds, connectors and signal voltages (Fast, Wide, Ultra, Ultra Wide, Ultra2, Ultra2 Wide, Ultra3, Ultra320 SCSI). Fortunately, they all share the same command set.

Any SCSI communication takes place between an initiator (the host adapter) sending commands and a target (the drive) responding to them. Immediately after receiving a set of commands, the target sends a so-called sense code (status: busy, error or free), from which the initiator knows whether it will receive the desired response.

The SCSI protocol specifies almost 60 different commands. They are divided into four categories: non-data, bi-directional, read data, and write data.

The limitations of SCSI begin to show when you add drives to the bus. Today you can hardly find a hard drive capable of fully utilizing the 320 MB/s bandwidth of Ultra320 SCSI, but five or more drives on one bus is another matter entirely. One option is to add a second host adapter for load balancing, but that comes at a cost. Cables are a problem too: twisted 80-wire cables are very expensive. And if you also want hot-swap drives, that is, easy replacement of a failed drive, special hardware (a backplane) is required.

Of course, it is best to place the drives in separate rigs or modules, which are usually hot-swappable, along with other nice control features. As a result, there are more professional SCSI solutions on the market. But they all cost a lot, which is why the SATA standard has developed so rapidly in recent years. While SATA will never meet the needs of high-end enterprise systems, it complements SAS perfectly to create scalable new solutions for next generation networking environments.


SAS does not share a bus across multiple devices. Source: Adaptec.

SATA


On the left is the SATA connector for data transfer. On the right is the power supply connector. There are enough pins to supply 3.3V, 5V and 12V to each SATA drive.

The SATA standard has been on the market for several years and is now in its second generation. SATA I offered 1.5 Gbit/s throughput over two serial differential pairs using low-voltage differential signaling. At the physical layer, 8b/10b encoding is used (10 line bits for every 8 data bits), which explains the maximum interface bandwidth of 150 MB/s. After SATA moved to 300 MB/s, many began to call the new standard SATA II, although SATA-IO (the Serial ATA International Organization) had planned to add more features first and only then call it SATA II. Hence the latest specification is called SATA 2.5; it includes SATA extensions such as Native Command Queuing (NCQ), eSATA (external SATA), port multipliers (up to four drives per port) and so on. But the additional SATA functions are optional both for the controller and for the hard drive itself.
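The arithmetic behind those figures is simple; a one-line sketch:

```python
# 8b/10b encoding carries 8 data bits in every 10 line bits, so a
# 1.5 Gbit/s SATA link yields 150 MB/s of payload (and 3 Gbit/s, 300 MB/s).

def payload_mb_s(line_rate_gbit: float) -> float:
    return line_rate_gbit * 1e9 * 8 / 10 / 8 / 1e6

print(payload_mb_s(1.5), payload_mb_s(3.0))   # 150.0 300.0
```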

Let's hope that in 2007 SATA III at 600 MB / s will still be released.

While parallel ATA (UltraATA) cables were limited to 46 cm, SATA cables can be up to 1 m long, and eSATA cables twice that. Instead of 40 or 80 wires, serial transmission needs only a few conductors. SATA cables are therefore very narrow, easy to route inside the computer case, and do not obstruct airflow as much. Each SATA port serves a single device, which classifies the interface as point-to-point.


SATA connectors for data and power are provided with separate plugs.

SAS


The signaling protocol is the same as that of SATA. Source: Adaptec.

A nice feature of Serial Attached SCSI is that the technology supports both SCSI and SATA, so you can connect SAS or SATA drives (or both) to SAS controllers. However, SAS drives cannot work with SATA controllers, because they use the Serial SCSI Protocol (SSP). Like SATA, SAS uses point-to-point connections for drives (300 MB/s today), and thanks to SAS expanders more drives can be connected than there are SAS ports. SAS hard drives have two ports, each with its own unique SAS ID, so you can use two physical connections for redundancy by connecting the drive to two different hosts. Thanks to the STP (SATA Tunneling Protocol), SAS controllers can communicate with SATA drives connected to an expander.


Source: Adaptec.



Source: Adaptec.



Source: Adaptec.

Of course, a single physical connection between a SAS expander and the host controller could become a bottleneck, so the standard provides for wide SAS ports. A wide port groups multiple SAS connections into a single link between any two SAS devices (usually between a host controller and an expander). The number of connections within a link can be increased as requirements dictate, but redundant connections are not supported, nor are loops or rings allowed.


Source: Adaptec.

Future SAS implementations will add 600 and 1200 MB / s per port throughput. Of course, the performance of hard drives will not increase in the same proportion, but it will be more convenient to use expanders on a small number of ports.



The devices labeled "Fan Out" and "Edge" are expanders, but only the main Fan Out expander can tie the SAS domain together (see the 4x link in the center of the diagram). Up to 128 physical connections are allowed per Edge expander, and you can use wide ports and/or connect further expanders and drives. The topology can be quite complex, yet flexible and powerful at the same time. Source: Adaptec.



Source: Adaptec.

The backplane is the basic building block of any storage system that needs hot-plug capability, so SAS expanders often come combined with drive cages (in a single package or separately). Typically a single link connects a simple drive cage to the host adapter; expanders with built-in cages naturally rely on multilane connections.

There are three types of cables and connectors designed for SAS. SFF-8484 is a multilane internal cable that connects the host adapter to the drive cage; the same cable can also be split at one end into several separate SAS connectors (see the illustration below). SFF-8482 is the connector that attaches a drive to a single SAS interface. Finally, SFF-8470 is an external multilane cable up to six meters long.


Source: Adaptec.


SFF-8470 cable for external SAS multichannel connections.


SFF-8484 multilane cable. Four SAS channels/ports pass through one connector.


SFF-8484 cable allowing connection of four SATA drives.

SAS as part of SAN solutions

Why do we need all this information? Most users will never come close to the SAS topologies discussed above. But SAS is more than a next-generation interface for professional hard drives, although it is ideal for building simple and complex RAID arrays based on one or more RAID controllers. SAS can do more: it is a point-to-point serial interface that scales easily as you add links between any two SAS devices. SAS drives come with two ports, so you can connect one port through an expander to a host system and then create a backup path to another host system (or another expander).

The communication between SAS adapters and expanders (and also between two expanders) can be as wide as there are SAS ports available. Expanders are usually rack systems that can accommodate a large number of drives, and the possible connection of SAS to an upstream device in a hierarchy (for example, a host controller) is limited only by the capabilities of the expander.

With its rich and functional infrastructure, SAS allows you to create complex storage topologies rather than just dedicated hard drives or separate network storage. Here "complex" does not mean difficult to work with: SAS configurations consist of simple disk cages or use expanders. Any SAS link can be widened or narrowed depending on bandwidth requirements. You can use both powerful SAS hard drives and high-capacity SATA models. Together with powerful RAID controllers, you can easily configure, expand or reconfigure data arrays - both in terms of RAID level and on the hardware side.

All of this becomes all the more important when you consider how quickly corporate storage is growing. Today everyone is talking about the SAN - the storage area network - which implies a decentralized storage subsystem, with traditional servers using physically remote storage. Over existing Gigabit Ethernet or Fibre Channel networks, a slightly modified SCSI protocol is run, encapsulated in Ethernet packets (iSCSI - Internet SCSI). A system ranging from a single hard drive to complex nested RAID arrays becomes a so-called target and is tied to an initiator (the host system), which treats the target as if it were simply a physical drive.

iSCSI, of course, allows you to create a strategy for storage growth, data organization and access control. We gain another level of flexibility by moving away from directly attached storage, allowing any storage subsystem to become an iSCSI target. Moving storage off the server makes the system independent of storage servers (a critical point of failure) and improves hardware manageability. From the software's point of view, the storage is still "inside" the server. The target and the iSCSI initiator can be located side by side, on different floors, or in different rooms or buildings - it all depends on the quality and speed of the IP connection between them. From this point of view, it is important to note that a SAN is not well suited to the requirements of online applications like databases.

2.5 "SAS hard drives

2.5 "professional hard drives are still considered new. We've been looking at the first such drive from Seagate for quite some time now - 2.5 "Ultra320 Savvio which left a good impression. All 2.5 "SCSI drives use 10,000 RPM spindle speeds, but they fall short of the performance level of 3.5" drives at the same spindle speed. The fact is that the outer tracks of the 3.5 "models rotate at a higher linear speed, which provides a higher data transfer rate.

Nor is capacity the advantage of small hard drives: today their maximum is still 73 GB, while 3.5" enterprise-class drives already offer 300 GB. What matters in many areas is the ratio of performance to physical volume, or energy efficiency: the more hard drives you use, the more performance you reap - paired with the appropriate infrastructure, of course - and 2.5" drives consume almost half the power of their 3.5" competitors. Measured as performance per watt (I/O operations per watt), the 2.5" form factor gives very good results.

If capacity is your primary concern, 3.5" 10,000 RPM drives are unlikely to be the best choice: 3.5" SATA drives provide 66% more capacity (500 GB instead of 300 GB per drive) while keeping performance acceptable. Many hard drive manufacturers offer SATA models rated for 24/7 operation, and drive prices have fallen to a minimum. Reliability concerns can be addressed by purchasing additional (spare) drives for immediate replacement in the array.

The MAY line represents the current generation of Fujitsu 2.5" drives for the professional sector: a spindle speed of 10,025 RPM and capacities of 36.7 GB and 73.5 GB. All drives come with 8 MB of cache and deliver average seek times of 4.0 ms for reads and 4.5 ms for writes. As we have already mentioned, a nice feature of 2.5" hard drives is their reduced power consumption: typically, a 2.5" drive saves at least 60% of the energy used by a 3.5" drive.

3.5 "SAS hard drives

Behind the MAX name is Fujitsu's current line of high-performance 15,000 RPM hard drives, so the name is quite fitting. Unlike the 2.5" drives, we get a whopping 16 MB of cache and short average seek times of 3.3 ms for reads and 3.8 ms for writes. Fujitsu offers 36.7 GB, 73.4 GB and 147 GB models (with one, two and four platters).

Hydrodynamic bearings have made their way to enterprise-class hard drives, so the new models run significantly quieter than the previous ones at 15,000 rpm. Of course, these hard drives should be properly cooled, and the hardware provides this too.

Hitachi Global Storage Technologies also offers its own line of high-performance solutions. The UltraStar 15K147 runs at 15,000 RPM and has 16 MB of cache, just like the Fujitsu drives, but the platter configuration is different: the 36.7 GB model uses two platters rather than one, and the 73.4 GB model three rather than two. This implies a lower data density, but the design effectively avoids the innermost, slowest areas of the platters. As a result, the heads travel less, which gives a better average access time.

Hitachi also offers 36.7GB, 73.4GB, and 147GB models with a timed seek (read) time of 3.7ms.

Although Maxtor has already become part of Seagate, the company's product lines remain intact for now. The manufacturer offers 36, 73 and 147 GB models, all with 15,000 RPM spindle speeds and 16 MB of cache. The company claims average seek times of 3.4 ms for reads and 3.8 ms for writes.

Cheetah has long been associated with high-performance hard drives. Seagate was able to instill a similar association with the Barracuda in the desktop segment with its first 7200 RPM desktop drive in 2000.

The Cheetah is available in 36.7 GB, 73.4 GB and 146.8 GB models, all with a 15,000 RPM spindle speed and an 8 MB cache. The stated average seek time is 3.5 ms for reads and 4.0 ms for writes.

Host adapters

Unlike SATA controllers, SAS components are found only on server-grade motherboards or as expansion cards for PCI-X or PCI Express. If we take it a step further and consider RAID (Redundant Array of Inexpensive Disks) controllers, they are mostly sold as separate cards due to their complexity. RAID cards contain not only the controller itself but also a chip that accelerates the calculation of redundancy information (the XOR engine), as well as cache memory. A small amount of memory is often soldered to the card (most commonly 128 MB), but some cards allow expansion using DIMM or SO-DIMM modules.

When choosing a host adapter or RAID controller, you should be clear about what you need, as the range of new devices is growing before our eyes. Simple multiport host adapters are comparatively cheap, while powerful RAID cards are expensive. Consider where you will place your drives: external storage requires at least one external connector, and rack servers usually require low-profile cards.

If you need RAID, decide whether you will use hardware acceleration. Some RAID cards burn host CPU cycles on the XOR calculations behind RAID 5 or 6; others use their own hardware XOR engine. Hardware RAID acceleration is recommended wherever the server does more than store data, for example database or web servers.
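
To make that XOR workload concrete, here is a minimal sketch of the parity arithmetic a RAID 5 engine performs, whether in dedicated silicon or on the host CPU; the stripe layout and block contents are simplified assumptions for illustration.

```python
# Minimal sketch of RAID 5 parity arithmetic: the parity block is the
# byte-wise XOR of the data blocks in a stripe. Block size and contents
# are simplified assumptions for illustration.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together; used for parity and rebuild."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# One stripe on a four-drive RAID 5 array: three data blocks + parity.
d0, d1, d2 = b"\x10\x20\x30", b"\x01\x02\x03", b"\xff\x00\xff"
parity = xor_blocks([d0, d1, d2])

# If the drive holding d1 fails, XORing the survivors with the parity
# block reconstructs the lost data.
assert xor_blocks([d0, d2, parity]) == d1
```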

All the host adapter cards shown in this article support 300 MB/s per SAS port and allow a very flexible storage infrastructure. External ports no longer surprise anyone, and all the cards handle both SAS and SATA hard drives. All three use the PCI-X interface, but PCI Express versions are already in development.

In this article we concentrated on eight-port cards, but the number of attached hard drives is not limited to that: with an external SAS expander you can connect further storage, and as long as a four-lane connection suffices, the drive count can grow to 122. Because calculating the parity information for RAID 5 or RAID 6 costs performance, a typical external RAID enclosure will not saturate the bandwidth of a four-lane link even with a large number of drives.
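
For a sense of scale, here is a back-of-the-envelope sketch of what a four-lane wide link offers; the per-drive streaming rate of 80 MB/s is an assumption for illustration, not a measured figure.

```python
# Back-of-the-envelope sketch: throughput of a four-lane SAS wide link
# and how many streaming drives it would take to fill it. The per-drive
# rate of 80 MB/s is an assumed figure for illustration.

LANE_MBPS = 300                       # payload rate per SAS lane, MB/s
lanes = 4
link_mbps = lanes * LANE_MBPS         # 1,200 MB/s for a 4x wide port

drive_stream_mbps = 80                # assumed sequential rate per drive
drives_to_saturate = link_mbps / drive_stream_mbps

print(f"4x wide link: {link_mbps} MB/s")
print(f"~{drives_to_saturate:.0f} streaming drives to saturate it")
# RAID 5/6 parity overhead pushes the practical figure lower still.
```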

The Adaptec 48300 is a SAS host adapter for the PCI-X bus. PCI-X continues to dominate the server market, although more and more motherboards come equipped with PCI Express interfaces.

The Adaptec SAS 48300 uses a 133 MHz PCI-X interface for 1.06 GB/s of bandwidth, which is fast enough as long as the PCI-X bus is not loaded by other devices. If a slower device sits on the bus, all other PCI-X cards slow down to its speed; for this reason, boards sometimes carry several PCI-X bus controllers.
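
The 1.06 GB/s figure follows directly from the bus width and clock, as the minimal sketch below shows, along with what happens to a shared segment when a slower card joins it.

```python
# Sketch of the PCI-X bandwidth arithmetic: a 64-bit bus moving data
# at the bus clock. The clock rates used are the standard PCI-X speeds.

def pcix_bandwidth_mbps(clock_mhz: float, bus_bits: int = 64) -> float:
    """Peak transfer rate = bus width (bytes) x clock (MHz), in MB/s."""
    return bus_bits / 8 * clock_mhz

print(pcix_bandwidth_mbps(133))  # 1064 MB/s, the 1.06 GB/s quoted above
print(pcix_bandwidth_mbps(100))  # 800 MB/s
print(pcix_bandwidth_mbps(66))   # 528 MB/s: the whole segment drops to
                                 # this if a 66 MHz card shares the bus
```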

Adaptec positions the SAS 48300 at mid-range to entry-level servers and workstations, and the MSRP of $360 is quite reasonable. Adaptec HostRAID support provides the simplest RAID arrays, in this case levels 0, 1 and 10. The card offers an external four-lane SFF-8470 connector as well as an internal SFF-8484 connector paired with a cable for four SAS devices, giving eight ports in total.

With its low-profile slot bracket, the card fits into a 2U rack server. The package also includes a driver CD, a quick installation guide and an internal SAS cable that connects up to four drives to the card.

SAS veteran LSI Logic sent us its SAS3442X PCI-X host adapter, a direct competitor to the Adaptec SAS 48300. It provides eight SAS ports split between two four-lane interfaces; the heart of the card is the LSI SAS1068 chip. One interface serves internal devices, the other external DAS (Direct Attached Storage). The board uses the PCI-X 133 bus interface.

As usual, 300 MB/s is supported for both SATA and SAS drives. The controller board carries 16 LEDs: eight are simple activity indicators, and the other eight report faults.

The LSI SAS3442X is a low-profile card, so it fits easily into any 2U rack server.

Drivers are available for Linux, NetWare 5.1 and 6, Windows 2000 and Server 2003 (including x64), Windows XP (including x64) and Solaris up to 2.10. Unlike Adaptec, LSI decided not to add support for any RAID modes.

RAID adapters

The SAS RAID 4800SAS is Adaptec's solution for more complex SAS environments and can serve application servers, streaming servers and the like. Again we have an eight-port card, with one external four-lane SAS connector and two internal four-lane interfaces; if the external connection is used, only one of the internal four-lane interfaces remains available.

The card is also designed for PCI-X 133, which provides sufficient bandwidth for even the most demanding RAID configurations.

As for RAID modes, the SAS RAID 4800 easily overtakes its "little brother": levels 0, 1, 5, 10 and 50 are supported out of the box, given a sufficient number of drives. Unlike the 48300, Adaptec includes two SAS cables, so you can connect eight hard drives to the controller right away; on the other hand, the card requires a full-length PCI-X slot.

If you upgrade the card with the Adaptec Advanced Data Protection Suite, you gain dual-redundancy RAID modes (6 and 60) and a range of enterprise-class features: striped mirror (RAID 1E), hot space (RAID 5EE) and copyback hot spare. Adaptec Storage Manager, a browser-based utility, manages all Adaptec adapters.

Adaptec offers drivers for Windows Server 2003 (including x64), Windows 2000 Server, Windows XP (x64), Novell NetWare, Red Hat Enterprise Linux 3 and 4, SuSE Linux Enterprise Server 8 and 9, and FreeBSD.

SAS enclosures

The 335SAS is an enclosure for four SAS or SATA drives; it must be attached to a SAS controller. A 120 mm fan keeps the drives cool, and the enclosure also requires two Molex power plugs.

Adaptec includes an I2C cable through which a suitable controller can manage the enclosure, although this does not work with SAS drives. An additional LED cable signals drive activity, but again only for SATA drives. An internal SAS cable for four drives is also part of the package, so a single external four-lane cable is enough to attach the drives. If you want to use SATA drives, you will need SAS-to-SATA adapters.

The retail price of $369 is not cheap, but you get a solid and reliable solution.

SAS storage

The SANbloc S50 is an enterprise-grade 12-drive solution: a 2U rackmount enclosure that attaches to SAS controllers, and one of the best examples of how well SAS scales. The 12 drives can be SAS, SATA, or a mixture of both. The built-in expander uses one or two four-lane SAS interfaces to connect the S50 to a host adapter or RAID controller. As befits a clearly professional solution, it comes with two redundant power supplies.

If you have already purchased an Adaptec SAS host adapter, you can simply connect it to the S50 and use Adaptec Storage Manager to manage the drives. With twelve 500 GB SATA hard drives you get 6 TB of storage; with 300 GB SAS drives, the capacity is 3.6 TB. Since the expander connects to the host controller over two four-lane interfaces, the available bandwidth is 2.4 GB/s, more than enough for any type of array: even twelve drives in a RAID 0 array peak at only about 1.1 GB/s. In the middle of this year, Adaptec promises a slightly modified version with two independent SAS I/O units.
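
The capacity and bandwidth figures above are simple multiplication, as this short sketch spells out using the numbers from the text.

```python
# The S50's capacity and host bandwidth follow from simple
# multiplication; the figures are taken from the text above.

BAYS = 12
print(BAYS * 500, "GB with 500 GB SATA drives")  # 6000 GB = 6 TB
print(BAYS * 300, "GB with 300 GB SAS drives")   # 3600 GB = 3.6 TB

LANE_MBPS = 300                      # payload rate per SAS lane, MB/s
links, lanes_per_link = 2, 4
print(links * lanes_per_link * LANE_MBPS, "MB/s to the host")  # 2400
```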

The SANbloc S50 provides automatic monitoring and automatic fan speed control. The device is decidedly loud, though, so we were relieved to send it out of the lab once testing was complete. Drive failure messages are sent to the controller via SES-2 (SCSI Enclosure Services) or over the I2C physical interface.

The operating temperature range is 5-55°C for the drives and 0-40°C for the rest of the enclosure.

At the start of our tests we measured a peak bandwidth of just 610 MB/s. After swapping the cable between the S50 and the Adaptec host controller, we managed 760 MB/s. We used seven hard drives in RAID 0 to load the system; adding more drives did not raise throughput any further.

Test configuration

System hardware
Processors: 2x Intel Xeon (Nocona core), 3.6 GHz, FSB800, 1 MB L2 cache
Platform: Asus NCL-DS (Socket 604), Intel E7520 chipset, BIOS 1005
Memory: Corsair CM72DD512AR-400 (DDR2-400 ECC, registered), 2x 512 MB, CL3-3-3-10
System hard drive: Western Digital Caviar WD1200JB, 120 GB, 7,200 RPM, 8 MB cache, UltraATA/100
Storage controllers:
Intel 82801EB UltraATA/100 controller (ICH5)
Promise SATA 300TX4, driver 1.0.0.33
Adaptec AIC-7902B Ultra320, driver 3.0
Adaptec 48300 8-port PCI-X SAS, driver 1.1.5472
Adaptec 4800 8-port PCI-X SAS, driver 5.1.0.8360, firmware 5.1.0.8375
LSI Logic SAS3442X 8-port PCI-X SAS, driver 1.21.05, BIOS 6.01
Enclosures:
Hot-swappable 4-bay internal enclosure
2U 12-drive SAS/SATA JBOD
Network: Broadcom BCM5721 Gigabit Ethernet
Video card: integrated ATi RageXL, 8 MB
Tests
Performance measurement: c't h2benchw 3.6
I/O performance: IOMeter 2003.05.10 (Fileserver, Webserver, Database and Workstation benchmarks)
System software and drivers
OS: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
Platform driver: Intel Chipset Installation Utility 7.0.0.1025

After examining several new SAS hard drives, three matching controllers and two enclosures, it is clear that SAS is indeed a promising technology. A look at the SAS white papers shows why: it is not just a serial successor to SCSI (fast, convenient and easy to use), it also brings a level of infrastructure flexibility and scalability that makes Ultra320 SCSI solutions look like the Stone Age.

Compatibility is also excellent. If you are planning to buy professional SATA hardware for your server, you should take a closer look at SAS: any SAS controller or enclosure works with both SAS and SATA hard drives. You can therefore build high-performance SAS environments, high-capacity SATA environments, or both at once.

Convenient support for external storage is another major benefit of SAS. Where SATA storage relies on proprietary solutions or a single SATA/eSATA link, the SAS interface lets throughput scale in groups of four links. Bandwidth can thus grow with the needs of the application instead of stopping at 320 MB/s for UltraSCSI or 300 MB/s for SATA. Moreover, SAS expanders make it possible to build an entire hierarchy of SAS devices, giving administrators far more freedom.

The evolution of SAS devices does not end here. The UltraSCSI interface, it seems to us, can be considered obsolete and slowly written off: the industry is unlikely to improve it further beyond supporting existing UltraSCSI installations. New hard drives, new storage models and accessories, and the interface speed increases to 600 MB/s and later 1,200 MB/s are all destined for SAS.

What should a modern storage infrastructure look like? With SAS available, the days of UltraSCSI are numbered: the serial version is the logical step forward and handles every task better than its predecessor, so the choice between UltraSCSI and SAS is obvious. Choosing between SAS and SATA is a little harder, but looking ahead, SAS components remain the better bet. For maximum performance or scalability, there is simply no alternative to SAS today.

