   Linux Driver for Mylex DAC960/AcceleRAID/eXtremeRAID PCI RAID Controllers

                        Version 2.2.11 for Linux 2.2.19
                        Version 2.4.11 for Linux 2.4.12

                              PRODUCTION RELEASE

                                11 October 2001

                               Leonard N. Zubkoff
                               Dandelion Digital
                               lnz@dandelion.com

         Copyright 1998-2001 by Leonard N. Zubkoff <lnz@dandelion.com>


                                 INTRODUCTION

Mylex, Inc. designs and manufactures a variety of high performance PCI RAID
controllers.  Mylex Corporation is located at 34551 Ardenwood Blvd., Fremont,
California 94555, USA and can be reached at 510.796.6100 or on the World Wide
Web at http://www.mylex.com.  Mylex Technical Support can be reached by
electronic mail at mylexsup@us.ibm.com, by voice at 510.608.2400, or by FAX at
510.745.7715.  Contact information for offices in Europe and Japan is available
on their Web site.

The latest information on Linux support for DAC960 PCI RAID Controllers, as
well as the most recent release of this driver, will always be available from
my Linux Home Page at URL "http://www.dandelion.com/Linux/".  The Linux DAC960
driver supports all current Mylex PCI RAID controllers, including the newer
eXtremeRAID 2000/3000 and AcceleRAID 352/170/160 models, which have an
entirely new firmware interface that differs from that of the older
eXtremeRAID 1100, AcceleRAID 150/200/250, and DAC960PJ/PG/PU/PD/PL.  See below
for a complete controller list as well as minimum firmware version
requirements.  For simplicity, in most places this documentation refers to
DAC960 generically rather than explicitly listing all the supported models.

Driver bug reports should be sent via electronic mail to "lnz@dandelion.com".
Please include with the bug report the complete configuration messages reported
by the driver at startup, along with any subsequent system messages relevant to
the controller's operation, and a detailed description of your system's
hardware configuration.  Driver bugs are actually quite rare; if you encounter
problems with disks being marked offline, for example, please contact Mylex
Technical Support as the problem is related to the hardware configuration
rather than the Linux driver.

Please consult the RAID controller documentation for detailed information
regarding installation and configuration of the controllers.  This document
primarily provides information specific to the Linux support.


                                DRIVER FEATURES

The DAC960 RAID controllers are supported solely as high performance RAID
controllers, not as interfaces to arbitrary SCSI devices.  The Linux DAC960
driver operates at the block device level, the same level as the SCSI and IDE
drivers.  Unlike other RAID controllers currently supported on Linux, the
DAC960 driver is not dependent on the SCSI subsystem, and hence avoids all the
complexity and unnecessary code that would be associated with an implementation
as a SCSI driver.  The DAC960 driver is designed for as high a performance as
possible with no compromises or extra code for compatibility with lower
performance devices.  The DAC960 driver includes extensive error logging and
online configuration management capabilities.  Except for initial configuration
of the controller and adding new disk drives, almost everything can be handled
from Linux while the system is operational.

The DAC960 driver is architected to support up to 8 controllers per system.
Each DAC960 parallel SCSI controller can support up to 15 disk drives per
channel, for a maximum of 60 drives on a four channel controller; the fibre
channel eXtremeRAID 3000 controller supports up to 125 disk drives per loop for
a total of 250 drives.  The drives installed on a controller are divided into
one or more "Drive Groups", and then each Drive Group is subdivided further
into 1 to 32 "Logical Drives".  Each Logical Drive has a specific RAID Level
and caching policy associated with it, and it appears to Linux as a single
block device.  Logical Drives are further subdivided into up to 7 partitions
through the normal Linux and PC disk partitioning schemes.  Logical Drives are
also known as "System Drives", and Drive Groups are also called "Packs".  Both
terms are in use in the Mylex documentation; I have chosen to standardize on
the more generic "Logical Drive" and "Drive Group".

DAC960 RAID disk devices are named in the style of the Device File System
(DEVFS).  The device corresponding to Logical Drive D on Controller C is
referred to as /dev/rd/cCdD, and the partitions are called /dev/rd/cCdDp1
through /dev/rd/cCdDp7.  For example, partition 3 of Logical Drive 5 on
Controller 2 is referred to as /dev/rd/c2d5p3.  Note that unlike with SCSI
disks the device names will not change in the event of a disk drive failure.
The DAC960 driver is assigned major numbers 48 - 55 with one major number per
controller.  The 8 bits of minor number are divided into 5 bits for the Logical
Drive and 3 bits for the partition.
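
For example, the device node for /dev/rd/c2d5p3 could be created by hand under
this scheme as follows (the "make_rd" script mentioned below handles this
automatically; the sketch is purely illustrative):

  C=2 D=5 P=3
  mkdir -p /dev/rd
  mknod /dev/rd/c${C}d${D}p${P} b $((48 + C)) $((D * 8 + P))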


          SUPPORTED DAC960/AcceleRAID/eXtremeRAID PCI RAID CONTROLLERS

The following list comprises the supported DAC960, AcceleRAID, and eXtremeRAID
PCI RAID Controllers as of the date of this document.  It is recommended that
anyone purchasing a Mylex PCI RAID Controller not in the following table
contact the author beforehand to verify that it is or will be supported.

eXtremeRAID 3000
            1 Wide Ultra-2/LVD SCSI channel
            2 External Fibre FC-AL channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

eXtremeRAID 2000
            4 Wide Ultra-160 LVD SCSI channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

AcceleRAID 352
            2 Wide Ultra-160 LVD SCSI channels
            100MHz Intel i960RN RISC Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            32MB/64MB ECC SDRAM Memory

AcceleRAID 170
            1 Wide Ultra-160 LVD SCSI channel
            100MHz Intel i960RM RISC Processor
            16MB/32MB/64MB ECC SDRAM Memory
AcceleRAID 160 (AcceleRAID 170LP)
            1 Wide Ultra-160 LVD SCSI channel
            100MHz Intel i960RS RISC Processor
            Built in 16MB ECC SDRAM Memory
            PCI Low Profile Form Factor - fits 2U height

eXtremeRAID 1100 (DAC1164P)
            3 Wide Ultra-2/LVD SCSI channels
            233MHz StrongARM SA 110 Processor
            64 Bit 33MHz PCI (backward compatible with 32 Bit PCI slots)
            16MB/32MB/64MB Parity SDRAM Memory with Battery Backup

AcceleRAID 250 (DAC960PTL1)
            Uses onboard Symbios SCSI chips on certain motherboards
            Also includes one onboard Wide Ultra-2/LVD SCSI Channel
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 200 (DAC960PTL0)
            Uses onboard Symbios SCSI chips on certain motherboards
            Includes no onboard SCSI Channels
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

AcceleRAID 150 (DAC960PRL)
            Uses onboard Symbios SCSI chips on certain motherboards
            Also includes one onboard Wide Ultra-2/LVD SCSI Channel
            33MHz Intel i960RP RISC Processor
            4MB Parity EDO Memory

DAC960PJ    1/2/3 Wide Ultra SCSI-3 Channels
            66MHz Intel i960RD RISC Processor
            4MB/8MB/16MB/32MB/64MB/128MB ECC EDO Memory

DAC960PG    1/2/3 Wide Ultra SCSI-3 Channels
            33MHz Intel i960RP RISC Processor
            4MB/8MB ECC EDO Memory

DAC960PU    1/2/3 Wide Ultra SCSI-3 Channels
            Intel i960CF RISC Processor
            4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PD    1/2/3 Wide Fast SCSI-2 Channels
            Intel i960CF RISC Processor
            4MB/8MB EDRAM or 2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960PL    1/2/3 Wide Fast SCSI-2 Channels
            Intel i960 RISC Processor
            2MB/4MB/8MB/16MB/32MB DRAM Memory

DAC960P     1/2/3 Wide Fast SCSI-2 Channels
            Intel i960 RISC Processor
            2MB/4MB/8MB/16MB/32MB DRAM Memory

For the eXtremeRAID 2000/3000 and AcceleRAID 352/170/160, firmware version
6.00-01 or above is required.

For the eXtremeRAID 1100, firmware version 5.06-0-52 or above is required.

For the AcceleRAID 250, 200, and 150, firmware version 4.06-0-57 or above is
required.

For the DAC960PJ and DAC960PG, firmware version 4.06-0-00 or above is required.

For the DAC960PU, DAC960PD, DAC960PL, and DAC960P, either firmware version
3.51-0-04 or above is required (for dual Flash ROM controllers), or firmware
version 2.73-0-00 or above is required (for single Flash ROM controllers).

Please note that not all SCSI disk drives are suitable for use with DAC960
controllers, and only particular firmware versions of any given model may
actually function correctly.  Similarly, not all motherboards have a BIOS that
properly initializes the AcceleRAID 250, AcceleRAID 200, AcceleRAID 150,
DAC960PJ, and DAC960PG because the Intel i960RD/RP is a multi-function device.
If in doubt, contact Mylex RAID Technical Support (mylexsup@us.ibm.com) to
verify compatibility.  Mylex makes available a hard disk compatibility list at
http://www.mylex.com/support/hdcomp/hd-lists.html.


                              DRIVER INSTALLATION

This distribution was prepared for Linux kernel version 2.2.19 or 2.4.12.

To install the DAC960 RAID driver, you may use the following commands,
replacing "/usr/src" with wherever you keep your Linux kernel source tree:

  cd /usr/src
  tar -xvzf DAC960-2.2.11.tar.gz (or DAC960-2.4.11.tar.gz)
  mv README.DAC960 linux/Documentation
  mv DAC960.[ch] linux/drivers/block
  patch -p0 < DAC960.patch (if DAC960.patch is included)
  cd linux
  make config
  make depend
  make bzImage (or zImage)

Then install "arch/i386/boot/bzImage" or "arch/i386/boot/zImage" as your
standard kernel, run lilo if appropriate, and reboot.
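
During "make config", the driver is enabled by answering Y to the Mylex
DAC960 option in the block devices section, which in the 2.2/2.4
configuration system corresponds to the following .config entry (the exact
prompt and option name may differ between kernel versions, so verify it
against your kernel's configuration menus):

  CONFIG_BLK_DEV_DAC960=y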

To create the necessary devices in /dev, the "make_rd" script included in
"DAC960-Utilities.tar.gz" from http://www.dandelion.com/Linux/ may be used.
LILO 21 and FDISK v2.9 include DAC960 support; also included in this archive
are patches to LILO 20 and FDISK v2.8 that add DAC960 support, along with
statically linked executables of LILO and FDISK.  This modified version of LILO
will allow booting from a DAC960 controller and/or mounting the root file
system from a DAC960.
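
With a DAC960-aware LILO, an /etc/lilo.conf fragment for booting from the
first logical drive might look like the following (a hypothetical sketch;
adjust the device names and image path to your installation):

  boot = /dev/rd/c0d0
  image = /boot/vmlinuz
      label = linux
      root = /dev/rd/c0d0p1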

Red Hat Linux 6.0 and SuSE Linux 6.1 include support for Mylex PCI RAID
controllers.  Installing directly onto a DAC960 may be problematic from other
Linux distributions until their installation utilities are updated.


                              INSTALLATION NOTES

Before installing Linux or adding DAC960 logical drives to an existing Linux
system, the controller must first be configured to provide one or more logical
drives using the BIOS Configuration Utility or DACCF.  Please note that since
there are only at most 6 usable partitions on each logical drive, systems
requiring more partitions should subdivide a drive group into multiple logical
drives, each of which can have up to 6 usable partitions.  Also, note that with
large disk arrays it is advisable to enable the 8GB BIOS Geometry (255/63)
rather than accepting the default 2GB BIOS Geometry (128/32); failing to do so
will cause the logical drive geometry to have more than 65535 cylinders, which
will make it impossible for FDISK to be used properly.  The 8GB BIOS Geometry
can be enabled by configuring the DAC960 BIOS, which is accessible via Alt-M
during the BIOS initialization sequence.
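
The arithmetic behind this advice can be checked with a quick shell
calculation; for example, a hypothetical 150GB logical drive exceeds the
65535-cylinder limit under the 2GB geometry but not under the 8GB geometry:

  BLOCKS=$((150 * 1024 * 1024 * 2))    # 150GB in 512-byte sectors
  echo "2GB geometry (128/32): $((BLOCKS / (128 * 32))) cylinders"
  echo "8GB geometry (255/63): $((BLOCKS / (255 * 63))) cylinders"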

For maximum performance and the most efficient E2FSCK performance, it is
recommended that EXT2 file systems be built with a 4KB block size and 16 block
stride to match the DAC960 controller's 64KB default stripe size.  The command
"mke2fs -b 4096 -R stride=16 <device>" is appropriate.  Unless there will be a
large number of small files on the file systems, it is also beneficial to add
the "-i 16384" option to increase the bytes per inode parameter thereby
reducing the file system metadata.  Finally, on systems that will only be run
with Linux 2.2 or later kernels it is beneficial to enable sparse superblocks
with the "-s 1" option.
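
Combining all three recommendations on, say, the first partition of Logical
Drive 0 on Controller 0 (a hypothetical target device) gives:

  mke2fs -b 4096 -R stride=16 -i 16384 -s 1 /dev/rd/c0d0p1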


                      DAC960 ANNOUNCEMENTS MAILING LIST

The DAC960 Announcements Mailing List provides a forum for informing Linux
users of new driver releases and other announcements regarding Linux support
for DAC960 PCI RAID Controllers.  To join the mailing list, send a message to
"dac960-announce-request@dandelion.com" with the line "subscribe" in the
message body.
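
On a system with a BSD-style mail command installed, one way to send this is:

  echo "subscribe" | mail dac960-announce-request@dandelion.com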


                CONTROLLER CONFIGURATION AND STATUS MONITORING

The DAC960 RAID controllers running firmware 4.06 or above include a Background
Initialization facility so that system downtime is minimized both for initial
installation and subsequent configuration of additional storage.  The BIOS
Configuration Utility (accessible via Alt-R during the BIOS initialization
sequence) is used to quickly configure the controller, and then the logical
drives that have been created are available for immediate use even while they
are still being initialized by the controller.  The primary need for online
configuration and status monitoring is then to avoid system downtime when disk
drives fail and must be replaced.  Mylex's online monitoring and configuration
utilities are being ported to Linux and will become available at some point in
the future.  Note that with a SAF-TE (SCSI Accessed Fault-Tolerant Enclosure)
enclosure, the controller is able to rebuild failed drives automatically as
soon as a drive replacement is made available.

The primary interfaces for controller configuration and status monitoring are
special files created in the /proc/rd/... hierarchy along with the normal
system console logging mechanism.  Whenever the system is operating, the DAC960
driver queries each controller for status information every 10 seconds, and
checks for additional conditions every 60 seconds.  The initial status of each
controller is always available for controller N in /proc/rd/cN/initial_status,
and the current status as of the last status monitoring query is available in
/proc/rd/cN/current_status.  In addition, status changes are also logged by the
driver to the system console and will appear in the log files maintained by
syslog.  The progress of asynchronous rebuild or consistency check operations
is also available in /proc/rd/cN/current_status, and progress messages are
logged to the system console at most every 60 seconds.
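
Since the two status files share the same format, anything that has changed
since boot can be spotted by simply comparing them; for example, for
Controller 0:

  diff /proc/rd/c0/initial_status /proc/rd/c0/current_status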

Starting with the 2.2.3/2.0.3 versions of the driver, the status information
available in /proc/rd/cN/initial_status and /proc/rd/cN/current_status has been
augmented to include the vendor, model, revision, and serial number (if
available) for each physical device found connected to the controller:

***** DAC960 RAID Driver Version 2.2.3 of 19 August 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PRL PCI RAID Controller
  Firmware Version: 4.07-0-07, Channels: 1, Memory Size: 16MB
  PCI Bus: 1, Device: 4, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFE300000 mapped at 0xA0800000, IRQ Channel: 21
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  SAF-TE Enclosure Management Enabled
  Physical Devices:
    0:0  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       68016775HA
         Disk Status: Online, 17928192 blocks
    0:1  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       68004E53HA
         Disk Status: Online, 17928192 blocks
    0:2  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       13013935HA
         Disk Status: Online, 17928192 blocks
    0:3  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       13016897HA
         Disk Status: Online, 17928192 blocks
    0:4  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       68019905HA
         Disk Status: Online, 17928192 blocks
    0:5  Vendor: IBM       Model: DRVS09D           Revision: 0270
         Serial Number:       68012753HA
         Disk Status: Online, 17928192 blocks
    0:6  Vendor: ESG-SHV   Model: SCA HSBP M6       Revision: 0.61
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 89640960 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

To simplify the monitoring process for custom software, the special file
/proc/rd/status returns "OK" when all DAC960 controllers in the system are
operating normally and no failures have occurred, or "ALERT" if any logical
drives are offline or critical or any non-standby physical drives are dead.
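
A minimal monitoring loop built on this file (an illustrative sketch, not one
of the Mylex utilities) might simply poll it at the driver's own 10 second
interval and raise a syslog alert on any change from "OK":

  #!/bin/sh
  # Poll the DAC960 summary status and log an alert when it degrades.
  while sleep 10; do
      STATUS=`cat /proc/rd/status`
      if [ "$STATUS" != "OK" ]; then
          logger -p daemon.alert "DAC960 status is $STATUS"
      fi
  done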

Configuration commands for controller N are available via the special file
/proc/rd/cN/user_command.  A human-readable command can be written to this
special file to initiate a configuration operation, and the results of the
operation can then be read back from the special file in addition to being
logged to the system console.  The shell command sequence

  echo "<configuration-command>" > /proc/rd/c0/user_command
  cat /proc/rd/c0/user_command

is typically used to execute configuration commands (a small helper wrapping
this sequence is sketched after the command list below).  The configuration
commands are:

  flush-cache

    The "flush-cache" command flushes the controller's cache.  The system
    automatically flushes the cache at shutdown or if the driver module is
    unloaded, so this command is only needed to be certain a write back cache
    is flushed to disk before the system is powered off by a command to a UPS.
    Note that the flush-cache command also stops an asynchronous rebuild or
    consistency check, so it should not be used except when the system is being
    halted.

  kill <channel>:<target-id>

    The "kill" command marks the physical drive <channel>:<target-id> as DEAD.
    This command is provided primarily for testing, and should not be used
    during normal system operation.

  make-online <channel>:<target-id>

    The "make-online" command changes the physical drive <channel>:<target-id>
    from status DEAD to status ONLINE.  In cases where multiple physical drives
    have been killed simultaneously, this command may be used to bring all but
    one of them back online, after which a rebuild to the final drive is
    necessary.

    Warning: make-online should only be used on a dead physical drive that is
    an active part of a drive group, never on a standby drive.  The command
    should never be used on a dead drive that is part of a critical logical
    drive; rebuild should be used if only a single drive is dead.

  make-standby <channel>:<target-id>

    The "make-standby" command changes physical drive <channel>:<target-id>
    from status DEAD to status STANDBY.  It should only be used in cases where
    a dead drive was replaced after an automatic rebuild was performed onto a
    standby drive.  It cannot be used to add a standby drive to the controller
    configuration if one was not created initially; the BIOS Configuration
    Utility must be used for that currently.

  rebuild <channel>:<target-id>

    The "rebuild" command initiates an asynchronous rebuild onto physical drive
    <channel>:<target-id>.  It should only be used when a dead drive has been
    replaced.

  check-consistency <logical-drive-number>

    The "check-consistency" command initiates an asynchronous consistency check
    of <logical-drive-number> with automatic restoration.  It can be used
    whenever it is desired to verify the consistency of the redundancy
    information.

  cancel-rebuild
  cancel-consistency-check

    The "cancel-rebuild" and "cancel-consistency-check" commands cancel any
    rebuild or consistency check operations previously initiated.
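
The helper mentioned above might look like this (a hypothetical convenience
function, not part of the driver or the Mylex utilities):

  # Usage: dac960_cmd <controller-number> "<configuration-command>"
  # Writes the command to the controller's user_command file and
  # echoes back the resulting status message.
  dac960_cmd() {
      echo "$2" > /proc/rd/c$1/user_command
      cat /proc/rd/c$1/user_command
  }

  # Example: start a consistency check of Logical Drive 0 on Controller 0.
  dac960_cmd 0 "check-consistency 0"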


               EXAMPLE I - DRIVE FAILURE WITHOUT A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of a
DAC960PJ controller.  The physical drives are configured into a single drive
group without a standby drive, and the drive group has been configured into two
logical drives, one RAID-5 and one RAID-6.  Note that these logs are from an
earlier version of the driver and the messages have changed somewhat with newer
releases, but the functionality remains similar.  First, here is the current
status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical Drive
1:1 is now disconnected, simulating a drive failure.  The failure is noted by
the driver within 10 seconds of the controller's having detected it, and the
driver logs the following console status messages indicating that Logical
Drives 0 and 1 are now CRITICAL as a result of Physical Drive 1:1 being DEAD:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:1 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:1 is now DEAD
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

The Sense Keys logged here are just Check Condition / Unit Attention conditions
arising from a SCSI bus reset that is forced by the controller during its error
recovery procedures.  Concurrently with the above, the driver status available
from /proc/rd also reflects the drive failure.  The status message in
/proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Dead, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

Since there are no standby drives configured, the system can continue to access
the logical drives in a performance degraded mode until the failed drive is
replaced and a rebuild operation completed to restore the redundancy of the
logical drives.  Once Physical Drive 1:1 is replaced with a properly
functioning drive, or if the physical drive was killed without having failed
(e.g., due to electrical problems on the SCSI bus), the user can instruct the
controller to initiate a rebuild operation onto the newly replaced drive:

gwynedd:/u/lnz# echo "rebuild 1:1" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Rebuild of Physical Drive 1:1 Initiated

The echo command instructs the controller to initiate an asynchronous rebuild
operation onto Physical Drive 1:1, and the status message that results from the
operation is then available for reading from /proc/rd/c0/user_command, as well
as being logged to the console by the driver.

Within 10 seconds of this command the driver logs the initiation of the
asynchronous rebuild operation:

DAC960#0: Rebuild of Physical Drive 1:1 Initiated
DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:1 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 1% completed

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 6% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status is
updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Write-Only, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 3305472 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 15% completed

and every minute a progress message is logged to the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 32% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 63% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 94% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 94% completed

Finally, the rebuild completes successfully.  The driver logs the status of the
logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:1 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 5498880 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 3305472 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK


                EXAMPLE II - DRIVE FAILURE WITH A STANDBY DRIVE

The following annotated logs demonstrate the controller configuration and
online status monitoring capabilities of the Linux DAC960 Driver.  The test
configuration comprises 6 1GB Quantum Atlas I disk drives on two channels of a
DAC960PJ controller.  The physical drives are configured into a single drive
group with a standby drive, and the drive group has been configured into two
logical drives, one RAID-5 and one RAID-6.  Note that these logs are from an
earlier version of the driver and the messages have changed somewhat with newer
releases, but the functionality remains similar.  First, here is the current
status of the RAID configuration:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Online, 2201600 blocks
    1:3 - Disk: Standby, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  No Rebuild or Consistency Check in Progress

gwynedd:/u/lnz# cat /proc/rd/status
OK

The above messages indicate that everything is healthy, and /proc/rd/status
returns "OK" indicating that there are no problems with any DAC960 controller
in the system.  For demonstration purposes, while I/O is active Physical Drive
1:2 is now disconnected, simulating a drive failure.  The failure is noted by
the driver within 10 seconds of the controller's having detected it, and the
driver logs the following console status messages:

DAC960#0: Physical Drive 1:1 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:3 Error Log: Sense Key = 6, ASC = 29, ASCQ = 02
DAC960#0: Physical Drive 1:2 killed because of timeout on SCSI command
DAC960#0: Physical Drive 1:2 is now DEAD
DAC960#0: Physical Drive 1:2 killed because it was removed
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now CRITICAL
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now CRITICAL

Since a standby drive is configured, the controller automatically begins
rebuilding onto the standby drive:

DAC960#0: Physical Drive 1:3 is now WRITE-ONLY
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

Concurrently with the above, the driver status available from /proc/rd also
reflects the drive failure and automatic rebuild.  The status message in
/proc/rd/status has changed from "OK" to "ALERT":

gwynedd:/u/lnz# cat /proc/rd/status
ALERT

and /proc/rd/c0/current_status has been updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 4% completed

As the rebuild progresses, the current status in /proc/rd/c0/current_status is
updated every 10 seconds:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Write-Only, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Critical, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Critical, 2754560 blocks, Write Thru
  Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed

and every minute a progress message is logged on the console by the driver:

DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 40% completed
DAC960#0: Rebuild in Progress: Logical Drive 0 (/dev/rd/c0d0) 76% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 66% completed
DAC960#0: Rebuild in Progress: Logical Drive 1 (/dev/rd/c0d1) 84% completed

Finally, the rebuild completes successfully.  The driver logs the status of the
logical and physical drives and the rebuild completion:

DAC960#0: Rebuild Completed Successfully
DAC960#0: Physical Drive 1:3 is now ONLINE
DAC960#0: Logical Drive 0 (/dev/rd/c0d0) is now ONLINE
DAC960#0: Logical Drive 1 (/dev/rd/c0d1) is now ONLINE

/proc/rd/c0/current_status is updated:

***** DAC960 RAID Driver Version 2.0.0 of 23 March 1999 *****
Copyright 1998-1999 by Leonard N. Zubkoff <lnz@dandelion.com>
Configuring Mylex DAC960PJ PCI RAID Controller
  Firmware Version: 4.06-0-08, Channels: 3, Memory Size: 8MB
  PCI Bus: 0, Device: 19, Function: 1, I/O Address: Unassigned
  PCI Address: 0xFD4FC000 mapped at 0x8807000, IRQ Channel: 9
  Controller Queue Depth: 128, Maximum Blocks per Command: 128
  Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
  Stripe Size: 64KB, Segment Size: 8KB, BIOS Geometry: 255/63
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Dead, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully

and /proc/rd/status indicates that everything is healthy once again:

gwynedd:/u/lnz# cat /proc/rd/status
OK

Note that the absence of a viable standby drive does not create an "ALERT"
status.  Once dead Physical Drive 1:2 has been replaced, the controller must be
told that this has occurred and that the newly replaced drive should become the
new standby drive:

gwynedd:/u/lnz# echo "make-standby 1:2" > /proc/rd/c0/user_command
gwynedd:/u/lnz# cat /proc/rd/c0/user_command
Make Standby of Physical Drive 1:2 Succeeded

The echo command instructs the controller to make Physical Drive 1:2 into a
standby drive, and the status message that results from the operation is then
available for reading from /proc/rd/c0/user_command, as well as being logged to
the console by the driver.  Within 60 seconds of this command the driver logs:

DAC960#0: Physical Drive 1:2 Error Log: Sense Key = 6, ASC = 29, ASCQ = 01
DAC960#0: Physical Drive 1:2 is now STANDBY
DAC960#0: Make Standby of Physical Drive 1:2 Succeeded

and /proc/rd/c0/current_status is updated:

gwynedd:/u/lnz# cat /proc/rd/c0/current_status
  ...
  Physical Devices:
    0:1 - Disk: Online, 2201600 blocks
    0:2 - Disk: Online, 2201600 blocks
    0:3 - Disk: Online, 2201600 blocks
    1:1 - Disk: Online, 2201600 blocks
    1:2 - Disk: Standby, 2201600 blocks
    1:3 - Disk: Online, 2201600 blocks
  Logical Drives:
    /dev/rd/c0d0: RAID-5, Online, 4399104 blocks, Write Thru
    /dev/rd/c0d1: RAID-6, Online, 2754560 blocks, Write Thru
  Rebuild Completed Successfully
