GIT 4ebabc9e2933736180f18e12c95e56d4244585fa git+ssh://master.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev.git#ALL

commit ef2824073fba9def3cf122e89cc485f66dd71f70
Author: Borislav Petkov
Date: Mon May 29 01:06:23 2006 -0400

libata debugging: set initial dbg value

This patch sets the prerequisites for the new debugging scheme that more or less resembles Donald Becker's net driver example. This one-liner doesn't change any functionality besides setting the appropriate debug level for the msg_enable control in the ata_port struct, which will later be used by the ata_msg_* macros to control the amount of debug information sent to printk.

Signed-off-by: Jeff Garzik

commit b83b8e340218a5e4453cd09be32867e89a5697b7
Author: Alan Cox
Date: Fri May 26 22:19:33 2006 -0400

[libata] add pata_netcell driver

commit 75e995855f45a83afdae34d50c0b3ee14fb23b7a
Author: Alan Cox
Date: Wed May 24 14:14:41 2006 +0100

[PATCH] libata: add pio_data_xfer_noirq

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit 622b20fcb8b42aa4c3c87c0a036f2ad0927b64bc
Author: Alan Cox
Date: Wed May 24 14:06:11 2006 +0100

[PATCH] PCI identifiers for the pata_via update

These IDs are also used by the drivers/ide/pci changes submitted by VIA.

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit 89bad5892abca55f56d0ea713c7306d1f845ac8e
Author: Albert Lee
Date: Fri May 26 13:49:18 2006 +0800

[PATCH] libata: add back ->data_xfer to ata_piix.c

Add back ->data_xfer and ->mode_filter to ata_piix.c.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit f4eba0f89e56e2790a46a75e07517c93a9495c09
Author: Alan Cox
Date: Fri May 26 21:21:38 2006 -0400

[libata] Add pata_ali, pata_sis drivers

commit 0864b5abbfb900e97497894dc4b78ddc36c38e5a
Author: Jeff Garzik
Date: Wed May 24 02:13:31 2006 -0400

[libata pata] trim trailing whitespace

commit 6da2d19cb989f58fc93ba774a73b72b41514c1dd
Author: Alan Cox
Date: Mon May 22 22:56:37 2006 +0100

[PATCH] pata_via update

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit 957d2df1801865eb1e63864bc63b970aa9c460ba
Author: Alan Cox
Date: Tue May 23 13:18:57 2006 +0100

[PATCH] libata: Remove obsolete flag

ATA_FLAG_IRQ_MASK was added when I did the original data transfer with IRQ masked bits for PIO. It has since been replaced by the ->pio_data_xfer methods, so it should be removed so nobody uses it by mistake thinking it still works.

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit 31a34fe75906ba5f61606eaed01da313f29ca4b1
Author: Alan Cox
Date: Mon May 22 22:58:14 2006 +0100

[PATCH] ata_piix formatting

"if(" spacing fix for Garzik-compliant formatting.

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit bce573d0ad90bdf2979267aad041b261008caf0f
Author: Alan Cox
Date: Wed May 24 02:06:49 2006 -0400

[libata pata] update AMD, mpiix, old piix, opti, sil680, triflex drivers

commit e14698745dd0de1ddbf5cd0cca4313a90f8c1cc1
Author: Mark Lord
Date: Mon May 22 19:02:03 2006 -0400

[PATCH] sata_mv: endian annotations

Signed-off-by: Mark Lord
Signed-off-by: Jeff Garzik

commit a6b2c5d4754dc539a560fdf0d3fb78a14174394a
Author: Alan Cox
Date: Mon May 22 16:59:59 2006 +0100

[PATCH] libata: Add ->data_xfer method

We need to pass the device in order to do per-device checks such as 32bit I/O enables. With the changes to include dev->ap we no longer have to add parameters, just clean them up. Also add data_xfer methods to the existing drivers except ata_piix (which is in the other block of patches). If you reject the piix one just add a data_xfer to it...

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik
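As a rough illustration of the msg_enable scheme mentioned in the "libata debugging: set initial dbg value" commit above, here is a minimal, self-contained sketch of how a per-port message-level bitmask can gate printk-style output. The flag names, the struct and the port_printk() macro are illustrative stand-ins, not the actual libata definitions.

#include <stdio.h>

/* Illustrative message classes; the real ATA_MSG_* values may differ. */
enum {
    MSG_DRV   = 0x0001,    /* driver core messages */
    MSG_PROBE = 0x0002,    /* device probe messages */
    MSG_ERR   = 0x0008,    /* error messages */
};

struct port {
    unsigned int msg_enable;    /* like ata_port.msg_enable */
};

/* Only emit the message when its class is enabled for this port. */
#define port_printk(p, cls, fmt, ...) \
    do { \
        if ((p)->msg_enable & (cls)) \
            printf(fmt, ##__VA_ARGS__); \
    } while (0)

int main(void)
{
    struct port ap = { .msg_enable = MSG_DRV | MSG_ERR };

    port_printk(&ap, MSG_ERR, "error: command timed out\n");  /* printed */
    port_printk(&ap, MSG_PROBE, "probe: found device\n");     /* suppressed */
    return 0;
}

Raising or lowering msg_enable at init time (which is all the commit above does) then directly controls how chatty the driver is.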
commit 8190bdb9291758f3b8c436ec1154c9923ddb57ea
Author: Jeff Garzik
Date: Wed May 24 01:53:39 2006 -0400

[libata] libata-scsi, sata_mv: trim trailing whitespace

commit f79d409fae879d135d1aaca6d83451f2787aec07
Author: Alan Cox
Date: Mon May 22 16:55:11 2006 +0100

[PATCH] libata - fix bracketing and DMA oops

The upstream tree has the ATA_DFLAG_PIO bug fixed but does not have the pass-through bug fix.

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit b6079ca409bf88c248992e96510dd6f610f7ed89
Author: Alan Cox
Date: Mon May 22 16:52:06 2006 +0100

[PATCH] libata: PIO 0

Ensure the pio_mode is always set up. Don't do any setup on the controller, just ensure the mode reporting is valid to avoid tons of special cases in PATA driver code when mode switching on the fly.

Signed-off-by: Alan Cox
Signed-off-by: Jeff Garzik

commit 1f3461a72619fcd70a0fcb563306c91f753b4620
Author: Albert Lee
Date: Tue May 23 18:12:30 2006 +0800

[PATCH] libata: minor fix for irq-pio merge

Minor fix to put the ATA_FLAG_NO_ATAPI flag back.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit d3fb4e8dddebbf7d6c0b02842c619bfd4fa199f5
Author: Jeff Garzik
Date: Wed May 24 01:43:25 2006 -0400

[libata sata_promise] Add PATA cable detection.

Original patch from Phillip Jordan. Cleanups and fixes by me.

commit 4c5c81613b0eb0dba97a8f312a2f1162f39fd47b
Author: Andrew Chew
Date: Thu Apr 20 15:54:26 2006 -0700

[PATCH] sata_nv: Add MCP61 support

Added MCP61 SATA support to sata_nv.

Signed-off-by: Jeff Garzik

commit b74ba22f030eb7ab88f7d8954ad18ecc0ac5ce3c
Author: Thomas Glanzmann
Date: Fri May 12 10:00:41 2006 +0200

[PATCH] Add PCI ID for the Intel IDE Controller which is in the Intel Mac Minis shipped in first quarter 2006

Signed-off-by: Thomas Glanzmann
Signed-off-by: Jeff Garzik

commit f8bbfc247efb0e5fa69094614380768ce79afe17
Author: Tejun Heo
Date: Fri May 19 21:07:05 2006 +0900

[PATCH] SCSI: make scsi_implement_eh() generic API for SCSI transports

libata implemented a feature to schedule EH without an associated command by manipulating shost->host_eh_scheduled in ata_scsi_schedule_eh() directly. Move this function to scsi_error.c and rename it to scsi_schedule_eh(). It is now an exported API for SCSI transports, exported via the new header file drivers/scsi/scsi_transport_api.h. This patch also de-exports scsi_eh_wakeup(), which was exported specifically for ata_scsi_schedule_eh().

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit a20f33ffde8ba5fb27666aa1e228a45b7e3b8dcb
Author: Tejun Heo
Date: Tue May 16 12:58:24 2006 +0900

[PATCH] libata: enforce default EH actions

LLDDs rely on libata to take certain EH actions automatically on some errors. If the port is frozen or one or more qc's have failed with HSM violation or timeout, softreset is enforced (the LLDD can ask for stronger EH action at will). If any other error condition exists, libata EH always revalidates. This behavior existed in earlier revisions of new EH but was lost during the development process. This patch restores it.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit eec4c3f317991dc85c786ffccd9c1a8620c41b18
Author: Albert Lee
Date: Thu May 18 17:51:10 2006 +0800

[PATCH] libata: use qc->result_tf for temp taskfile storage

Use qc->result_tf for temp taskfile storage.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik
commit 3655d1d323386e001c786af10f0a3f39f438f03b
Author: Albert Lee
Date: Fri May 19 11:43:04 2006 +0800

[PATCH] libata: Fix the HSM error_mask mapping (was: Re: libata-tj and SMART)

Fix the HSM error_mask mapping. Changes:
- Better mapping in ac_err_mask()
- In HSM_ST_FIRST and HSM_ST state, check ATA_ERR|ATA_DF and map it to AC_ERR_DEV instead of AC_ERR_HSM.
- In HSM_ST_FIRST and HSM_ST state, map DRQ=1 ERR=1 to AC_ERR_HSM.
- For PIO data in and DRQ=1 ERR=1, add a check after the junk data block is read.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit aee10a03eb3e240bfd1a6f91e06ce82df47c5c58
Author: Tejun Heo
Date: Mon May 15 21:03:56 2006 +0900

[PATCH] sata_sil24: implement NCQ support

Implement NCQ support. Sil24 has 31 command slots and all of them are used for NCQ command queueing. libata guarantees that no other command is in progress when it issues an internal command, so always use tag 0 for internal commands.

Signed-off-by: Tejun Heo

commit 12fad3f965830d71f6454f02b2af002a64cec4d3
Author: Tejun Heo
Date: Mon May 15 21:03:55 2006 +0900

[PATCH] ahci: implement NCQ support

Implement NCQ support. Original implementation is from Jens Axboe.

Signed-off-by: Tejun Heo

commit a9764c2bb5b6d3c9df91f2977a2a640f55de0dc2
Author: Tejun Heo
Date: Mon May 15 21:03:53 2006 +0900

[PATCH] ahci: kill pp->cmd_tbl_sg

With NCQ, there are multiple sg tables, so pp->cmd_tbl_sg doesn't cut it. Directly calculate the sg table address from pp->cmd_tbl.

Signed-off-by: Tejun Heo

commit 979db803b8fd120d4ed5216520e9d1815e18bc38
Author: Tejun Heo
Date: Mon May 15 21:03:52 2006 +0900

[PATCH] ahci: add HOST_CAP_NCQ constant

Add HOST_CAP_NCQ.

Signed-off-by: Tejun Heo

commit dd410ff12925fc49df3174b18e43598b2b2b621f
Author: Tejun Heo
Date: Mon May 15 21:03:50 2006 +0900

[PATCH] ahci: clean up AHCI constants in preparation for NCQ

* Rename CMD_TBL_HDR to CMD_TBL_HDR_SZ as it's a size, not an offset.
* Define MAX_CMDS and CMD_SZ and use them in calculation of other constants.
* Define CMD_TBL_AR_SZ as the product of CMD_TBL_SZ and MAX_CMDS, and use it when calculating PRIV_DMA_SZ.
* CMD_SLOT_SZ is also dependent on MAX_CMDS but hasn't been changed because I didn't want to change the value used by the original code (32 commands). A later NCQ change will bump MAX_CMDS to 32 anyway and the hard-coded 32 can be changed to MAX_CMDS then.
* Reorder HOST_CAP_* flags.

Signed-off-by: Tejun Heo

commit a6e6ce8e8dc907a2cf2b994b0ea4099423f046bf
Author: Tejun Heo
Date: Mon May 15 21:03:48 2006 +0900

[PATCH] libata-ncq: implement NCQ device configuration

Now that all NCQ-related pieces are in place, implement NCQ device configuration and bump ATA_MAX_QUEUE to 32, thus activating NCQ support. Original implementation is from Jens Axboe.

Signed-off-by: Tejun Heo

commit e8ee84518c159a663c07bf691ace187527380f61
Author: Tejun Heo
Date: Mon May 15 21:03:46 2006 +0900

[PATCH] libata-ncq: update EH to handle NCQ

Update EH to handle NCQ. ata_eh_autopsy() is updated to call ata_eh_analyze_ncq_error() which reads log page 10h on NCQ device error and updates eh_context accordingly. ata_eh_report() is updated to report SActive.

Signed-off-by: Tejun Heo

commit 3dc1d88193b9c65b01b64fb2dc730e486306649f
Author: Tejun Heo
Date: Mon May 15 21:03:45 2006 +0900

[PATCH] libata-ncq: implement NCQ command translation and exclusion

This patch implements NCQ command translation and exclusion. Note that NCQ commands don't use ata_rwcmd_protocol() to choose the ATA command. This is because, unlike non-NCQ RW commands, NCQ commands can only be used for the NCQ protocol and FUA handling is done with a flag rather than a separate command. An NCQ-enabled device will have a queue depth larger than one, but no two non-NCQ commands can be issued simultaneously, nor can a non-NCQ command and NCQ commands. This patch makes ata_scsi_translate() return SCSI_MLQUEUE_DEVICE_BUSY if such exclusion is necessary. The SCSI midlayer will retry the command later. As the SCSI midlayer always retries once a command completes, this doesn't incur unnecessary delays and, as most commands will be NCQ ones for an NCQ device, the overhead should be negligible. Initial implementation is from Jens Axboe and using SCSI_MLQUEUE_DEVICE_BUSY for exclusion was suggested by Jeff Garzik.

Signed-off-by: Tejun Heo
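The exclusion rule described in the NCQ command translation commit above can be illustrated with a small, self-contained sketch; the struct, field layout and must_defer() helper are hypothetical simplifications (the real code defers by returning SCSI_MLQUEUE_DEVICE_BUSY from ata_scsi_translate()).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified view of a port's queueing state. */
struct port_state {
    uint32_t qc_active;    /* all active commands (NCQ and non-NCQ) */
    uint32_t sactive;      /* active NCQ commands only */
};

/*
 * NCQ commands may be queued together, but a non-NCQ command must run
 * alone: defer it if anything is active, and defer an NCQ command if a
 * non-NCQ command is in flight (qc_active has bits outside sactive).
 */
static bool must_defer(const struct port_state *ps, bool new_is_ncq)
{
    uint32_t non_ncq_active = ps->qc_active & ~ps->sactive;

    if (!new_is_ncq)
        return ps->qc_active != 0;
    return non_ncq_active != 0;
}

int main(void)
{
    struct port_state ps = { .qc_active = 0x3, .sactive = 0x3 };

    printf("new NCQ cmd deferred? %d\n", must_defer(&ps, true));   /* 0 */
    printf("new non-NCQ deferred? %d\n", must_defer(&ps, false));  /* 1 */
    return 0;
}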
commit dedaf2b0365ccec50714fbde0b3215e7e94fa47c
Author: Tejun Heo
Date: Mon May 15 21:03:43 2006 +0900

[PATCH] libata-ncq: implement ap->qc_active, ap->sactive and complete helper

Add ap->qc_active and ap->sactive, the mask of all active qcs and libata's view of the SActive register, respectively. Also, implement ata_qc_complete_multiple() which takes a new qc_active mask and completes multiple qcs according to the mask. These will be used to track NCQ commands and complete them. The distinction between ap->qc_active and ap->sactive is also useful for later PM implementation.

Signed-off-by: Tejun Heo

commit 6cec4a3943bdfe46e2952bc246f17670f747be8d
Author: Tejun Heo
Date: Mon May 15 21:03:41 2006 +0900

[PATCH] libata-ncq: rename ap->qactive to ap->qc_allocated

Rename ap->qactive to ap->qc_allocated. This is to accommodate the addition of ap->qc_active, the mask of active qcs.

Signed-off-by: Tejun Heo

commit 2115ea94a2d11fbd228b049e667ec2d3e91ca371
Author: Tejun Heo
Date: Mon May 15 21:03:39 2006 +0900

[PATCH] libata-ncq: pass ata_scsi_translate() return value to SCSI midlayer

ata_scsi_translate() will need to return SCSI_MLQUEUE_DEVICE_BUSY to achieve exclusion between NCQ and non-NCQ commands or among non-NCQ commands. Pass its return value upward to the SCSI midlayer.

Signed-off-by: Tejun Heo

commit 88e490340ea4c3a2ebc0187a4339912e2fc1a081
Author: Tejun Heo
Date: Mon May 15 21:03:38 2006 +0900

[PATCH] libata-ncq: add NCQ related ATA/libata constants and macros

Add NCQ related ATA/libata constants and macros.

Signed-off-by: Tejun Heo

commit c17ea20d9a689d7335e97e09354865cdd9f873e1
Author: Tejun Heo
Date: Mon May 15 20:59:29 2006 +0900

[PATCH] libata: fix irq-pio merge

* kill ata_poll_qc_complete() and implement/use ata_hsm_qc_complete() which completes qcs in a new-EH-compliant manner from HSM
* don't print error messages from ata_hsm_move(); it's the responsibility of EH
* kill ATA_FLAG_NOINTR usage in bmdma EH

Signed-off-by: Tejun Heo

commit 88ce7550c38f46c8697f53727a571bf838bee398
Author: Tejun Heo
Date: Mon May 15 20:58:32 2006 +0900

[PATCH] sata_sil24: convert to new EH

Convert sata_sil24 to new EH.

* When the port is frozen, IRQ for the port is masked.
* sil24_softreset() doesn't need to mangle with the IRQ mask anymore. libata ensures that the port is frozen during reset.
* Only turn on interrupts which are handled by the interrupt handler and EH. As we don't handle SDB notify yet, turn it off. DEV_XCHG and UNK_FIS are handled by EH and thus turned on.
* sil24_softreset() usually fails to recover the port after DEV_XCHG. ATA_PORT_HARDRESET is used as the recovery action for DEV_XCHG.
* sil24 may be invoked without any active command. e.g. a DEV_XCHG irq occurring while no qc is in progress still triggers EH and will reset the port and revalidate the attached device.

Signed-off-by: Tejun Heo
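The core of ata_qc_complete_multiple() described in commit dedaf2b0 above is a mask diff: complete every tag whose bit just dropped out of the active mask. A minimal sketch of that idea (with simplified names, and with the real code's error checking for unexpectedly appearing bits omitted):

#include <stdint.h>
#include <stdio.h>

#define MAX_TAGS 32

/* Illustrative completion hook; the real code completes struct ata_queued_cmd. */
static void complete_tag(unsigned int tag)
{
    printf("completing qc with tag %u\n", tag);
}

/*
 * Given the previous active mask and the controller's new view of which
 * tags are still active, complete every tag that just went inactive and
 * return how many were completed.
 */
static int complete_multiple(uint32_t *qc_active, uint32_t new_active)
{
    uint32_t done_mask = *qc_active ^ new_active;
    int nr_done = 0;
    unsigned int tag;

    for (tag = 0; tag < MAX_TAGS; tag++) {
        if (done_mask & (1u << tag)) {
            complete_tag(tag);
            nr_done++;
        }
    }
    *qc_active = new_active;
    return nr_done;
}

int main(void)
{
    uint32_t qc_active = 0x0000000d;    /* tags 0, 2, 3 in flight */

    /* Hardware reports only tag 3 still active: tags 0 and 2 are done. */
    printf("%d commands completed\n", complete_multiple(&qc_active, 0x8));
    return 0;
}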
commit 2a3917a8bb40a2cb75b458da9c356e8557e8fbed
Author: Tejun Heo
Date: Mon May 15 20:58:30 2006 +0900

[PATCH] ahci: add PIOS interim interrupt handling

During multiblock PIO, multiple PIOS interrupts are generated before qc completion. The current code prints an unnecessary message for such cases. This is exposed when new EH slows the attached device down into PIO mode.

Signed-off-by: Tejun Heo

commit 78cd52d02fa0735949a9fa30a6b79bf02c94c250
Author: Tejun Heo
Date: Mon May 15 20:58:29 2006 +0900

[PATCH] ahci: convert to new EH

Convert AHCI to new EH. Unfortunately, ICH7 AHCI reacts badly if the IRQ mask is diddled during operation. So, freezing is implemented by unconditionally clearing interrupt conditions while frozen.

* Interrupts are categorized according to the required action. e.g. connection status or unknown FIS error requires freezing the port while TF or HBUS_DATA don't.
* Only the CONNECT (reflects SErr.X) interrupt is taken into account, not PHYRDY (SErr.N), as CONNECT is a better cue for starting EH.
* AHCI may be invoked without any active command. e.g. a CONNECT irq occurring while no qc is in progress still triggers EH and will reset the port and revalidate the attached device.

Signed-off-by: Tejun Heo

commit f6aae27ed002ba9c0a98aff811dbde32ce749d28
Author: Tejun Heo
Date: Mon May 15 20:58:27 2006 +0900

[PATCH] sata_sil: convert to new EH

Convert sata_sil to new EH. As these controllers have a hardware interrupt mask and are known to have screaming interrupt issues, use hardware IRQ masking for freezing. sil_freeze() masks interrupts for the port and sil_thaw() unmasks them. As ports are automatically frozen before probing reset, there is no need to initialize interrupt masks in sil_init_one(). Remove the related code. Other than freezing, sata_sil uses stock BMDMA EH routines.

Signed-off-by: Tejun Heo

commit 3f037db0ba043022e43e8e7266e698d4af264851
Author: Tejun Heo
Date: Mon May 15 20:58:25 2006 +0900

[PATCH] ata_piix: convert to new EH

ata_piix can use stock BMDMA EH routines. Convert to new EH.

Signed-off-by: Tejun Heo

commit 6d97dbd72da31a0e334f251fa9df4be9fab6fde2
Author: Tejun Heo
Date: Mon May 15 20:58:24 2006 +0900

[PATCH] libata-eh: implement BMDMA EH

Implement stock BMDMA error handling methods.

Signed-off-by: Tejun Heo

commit 022bdb075b9e1f224088a0b268de56268d7bc5b6
Author: Tejun Heo
Date: Mon May 15 20:58:22 2006 +0900

[PATCH] libata-eh: implement new EH

Implement new EH. The exported interface is ata_do_eh() which is to be called from ->error_handler and performs the following steps to recover the failed port.

ata_eh_autopsy() : analyze SError/TF, determine the cause of failure and required recovery actions and record it in ap->eh_context
ata_eh_report()  : report the failure to the user
ata_eh_recover() : perform recovery actions described in ap->eh_context
ata_eh_finish()  : finish failed qcs

LLDDs can customize error handling by modifying eh_context before calling ata_do_eh() or, if necessary, doing so in between the major steps by calling each step explicitly.

Signed-off-by: Tejun Heo

commit f3e81b19aac23c0e8c55d5961324ef7de44c23bb
Author: Tejun Heo
Date: Mon May 15 20:58:21 2006 +0900

[PATCH] libata-eh: implement ata_eh_info and ata_eh_context

struct ata_eh_info serves as the communication channel between the execution path and EH. The execution path describes the detected error condition in ap->eh_info and EH recovers the port using it. To avoid missing error conditions detected during EH, EH makes its own copy of eh_info and clears it on entry, allowing error info to accumulate during EH. Most EH states, including EH's copy of eh_info, are stored in ap->eh_context (struct ata_eh_context) which is owned by EH and thus doesn't require any synchronization to access and alter. This standardized context makes it easy to integrate various parts of EH and extend EH to handle multiple links (for PM).

Signed-off-by: Tejun Heo
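The hardware-IRQ-mask style of freezing used by the sata_sil conversion above boils down to clearing the port's bits in an interrupt-mask register on freeze and restoring them on thaw. A tiny, self-contained sketch of that pattern follows; the register, the bit values and the function names are invented for illustration only.

#include <stdint.h>
#include <stdio.h>

/* Pretend interrupt-mask register and the bits for one port (illustrative). */
static uint32_t irq_mask_reg;
#define PORT_IRQ_BITS 0x3u

/* Freezing masks the port's interrupts so a wedged device can't storm us. */
static void port_freeze(void)
{
    irq_mask_reg &= ~PORT_IRQ_BITS;
    /* a real driver would also flush the write and clear pending status */
}

/* Thawing, done after a successful reset, re-enables them. */
static void port_thaw(void)
{
    irq_mask_reg |= PORT_IRQ_BITS;
}

int main(void)
{
    irq_mask_reg = PORT_IRQ_BITS;
    port_freeze();
    printf("frozen: irq mask = %#x\n", (unsigned)irq_mask_reg);
    port_thaw();
    printf("thawed: irq mask = %#x\n", (unsigned)irq_mask_reg);
    return 0;
}

Controllers that cannot safely touch their IRQ mask at runtime (such as the AHCI case above) instead leave the mask alone and unconditionally clear interrupt conditions while the port is marked frozen.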
commit 0c247c559cd70f85ba9f0764ce13ae00e20fcad8
Author: Tejun Heo
Date: Mon May 15 20:58:19 2006 +0900

[PATCH] libata-eh: implement dev->ering

This patch implements ata_ering and uses it to define dev->ering. ata_ering is a ring buffer which records libata errors - whether a command was for a normal IO request, its err_mask and a timestamp. Errors are recorded per-device in dev->ering. This will be used by EH to determine recovery actions.

Signed-off-by: Tejun Heo

commit 9be1e979f2e1e57a091a658fa88dac266f9fd6fe
Author: Tejun Heo
Date: Mon May 15 20:58:17 2006 +0900

[PATCH] libata-eh: add ATA and libata flags for new EH

Add ATA and libata flags to be used by new EH.

Signed-off-by: Tejun Heo

commit 246619da308c6910a3ae30e7e5fbf46139619efe
Author: Tejun Heo
Date: Mon May 15 20:58:16 2006 +0900

[PATCH] libata-eh-fw: update SCSI command completion path for new EH

The SCSI command completion path used to do some part of EH including printing messages and obtaining sense data. With new EH, all these are responsibilities of the EH; update the SCSI command completion path to reflect this.

Signed-off-by: Tejun Heo

commit d95a717f579e81061830a308125c89f5858f740a
Author: Tejun Heo
Date: Mon May 15 20:58:14 2006 +0900

[PATCH] libata-eh-fw: update ata_exec_internal() for new EH

Update ata_exec_internal() such that it uses the new EH framework. ->post_internal_cmd() is always invoked regardless of completion status. Also, when ata_exec_internal() detects a timeout condition and new EH is in place, it freezes the port as a timeout for normal commands would do. Note that ata_port_flush_task() is called regardless of wait_for_completion status. This is necessary as exceptions unrelated to the qc can abort the qc, in which case the PIO task could still be running after the wait for completion returns.

Signed-off-by: Tejun Heo

commit ad9e27624479bd167dd7eac0cea4bb3ad13bc926
Author: Tejun Heo
Date: Mon May 15 20:58:12 2006 +0900

[PATCH] libata-eh-fw: update ata_scsi_error() for new EH

Update ata_scsi_error() for new EH. ata_scsi_error() is responsible for claiming timed-out qcs and invoking ->error_handler in a safe and synchronized manner. As the state of the controller is unknown if a qc has timed out, the port is frozen in such cases. Note that ata_scsi_timed_out() isn't used for new EH. This is because a timed-out qc cannot be claimed by EH without freezing the port, and freezing the port in ata_scsi_timed_out() results in unnecessary aborting of other active qcs. ata_scsi_timed_out() can be removed once all drivers are converted to new EH. While at it, add 'TODO: kill' comments to old EH functions.

Signed-off-by: Tejun Heo

commit dafadcde8d4dc5ea8c742faa7ff4403336b542b8
Author: Tejun Heo
Date: Mon May 15 20:58:11 2006 +0900

[PATCH] libata-eh-fw: implement new EH scheduling from PIO

PIO executes without holding the host_set lock, so it cannot be synchronized using the same mechanism as interrupt-driven execution. The port_task framework makes sure that EH is not entered until the PIO task is flushed, so the PIO task can be sure the qc in progress won't go away underneath it. One thing it cannot be sure of is whether the qc has already been scheduled for EH by another exception condition while the host_set lock was released. This patch makes ata_poll_qc_complete() handle such conditions properly and makes it freeze the port if an HSM violation is detected during PIO execution.

Signed-off-by: Tejun Heo
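The dev->ering commit above describes a per-device error ring buffer. A minimal sketch of such a structure is shown below; the sizes, field names and helpers are illustrative only, not the ata_ering layout itself.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <time.h>

#define ERING_SIZE 32    /* illustrative ring size */

struct ering_entry {
    bool is_io;              /* was this a normal IO request? */
    unsigned int err_mask;   /* what went wrong */
    time_t when;             /* timestamp */
};

struct ering {
    unsigned int cursor;
    struct ering_entry ring[ERING_SIZE];
};

/* Record a new error, overwriting the oldest entry once the ring wraps. */
static void ering_record(struct ering *e, bool is_io, unsigned int err_mask)
{
    struct ering_entry *ent = &e->ring[e->cursor % ERING_SIZE];

    ent->is_io = is_io;
    ent->err_mask = err_mask;
    ent->when = time(NULL);
    e->cursor++;
}

/* Most recent entry, or NULL if nothing has been recorded yet. */
static struct ering_entry *ering_top(struct ering *e)
{
    if (e->cursor == 0)
        return NULL;
    return &e->ring[(e->cursor - 1) % ERING_SIZE];
}

int main(void)
{
    struct ering e = { 0 };

    ering_record(&e, true, 0x4);
    printf("last err_mask: %#x\n", ering_top(&e)->err_mask);
    return 0;
}

EH can then walk the recent entries to decide, for example, whether repeated failures justify stepping down the transfer mode.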
commit e318049949b07152d851dbfebbd93e560af45ebe
Author: Tejun Heo
Date: Mon May 15 20:58:09 2006 +0900

[PATCH] libata-eh-fw: implement freeze/thaw

Freezing is performed atomically w.r.t. host_set->lock and once frozen the LLDD is not allowed to access the port or any qc on it. Also, libata makes sure that no new qc gets issued to a frozen port. A frozen port is thawed after a reset operation completes successfully, so reset methods must do their job while the port is frozen. During initialization all ports get frozen before requesting IRQ, so reset methods are always invoked on a frozen port. Optional ->freeze and ->thaw operations notify the LLDD that the port is being frozen and thawed, respectively. The LLDD can disable/enable hardware interrupts in these callbacks if the controller's IRQ mask can be changed dynamically. If the controller doesn't allow such operation, the LLDD can check for frozen state in the interrupt handler and ack/clear interrupts unconditionally while frozen.

Signed-off-by: Tejun Heo

commit 7b70fc039824bc7303e4007a5f758f832de56611
Author: Tejun Heo
Date: Mon May 15 20:58:07 2006 +0900

[PATCH] libata-eh-fw: implement ata_port_schedule_eh() and ata_port_abort()

ata_port_schedule_eh() directly schedules EH for @ap without an associated qc. Once EH is scheduled, no further qc is allowed and EH kicks in as soon as all currently active qc's are drained. ata_port_abort() schedules all currently active commands for EH by qc_completing them with ATA_QCFLAG_FAILED set. If ata_port_abort() doesn't find any qc to abort, it directly schedules EH using ata_port_schedule_eh(). These two functions provide ways to invoke EH for conditions which aren't directly related to any specific qc.

Signed-off-by: Tejun Heo

commit f686bcb8078ac7505ec88818886c2c72639f4fc5
Author: Tejun Heo
Date: Mon May 15 20:58:05 2006 +0900

[PATCH] libata-eh-fw: implement new EH scheduling via error completion

There are several ways a qc can get scheduled for EH in new EH. This patch implements one of them - completing a qc with ATA_QCFLAG_FAILED set or with non-zero qc->err_mask. ALL such qc's are examined by EH. New EH schedules a qc for EH from completion iff ->error_handler is implemented, the qc is marked as failed or qc->err_mask is non-zero, and the command is not an internal command (internal commands are handled via ->post_internal_cmd). The EH scheduling itself is performed by asking the SCSI midlayer to schedule EH for the specified scmd. For drivers implementing old EH, nothing changes. As this change makes ata_qc_complete() rather large, it's not inlined anymore and __ata_qc_complete() is exported to other parts of libata for later use.

Signed-off-by: Tejun Heo

commit f69499f42caf74194df678c9c293f2ee0fe90bc3
Author: Tejun Heo
Date: Mon May 15 20:58:03 2006 +0900

[PATCH] libata-eh-fw: update ata_qc_from_tag() to enforce normal/EH qc ownership

The new EH framework has a clear distinction about who owns a qc. Every qc starts owned by the normal execution path - PIO, interrupt or whatever. When an exception condition occurs which affects the qc, the qc gets scheduled for EH. Note that some events (say, link lost and regained, command timeout) may schedule for EH qc's which are not directly related but could have been affected too. Scheduling for EH is atomic w.r.t. ap->host_set->lock and once scheduled for EH, the normal execution path is not allowed to access the qc in any way. (PIO synchronization acts a bit differently and will be dealt with later.) This patch makes ata_qc_from_tag() check whether a qc is active and owned by the normal path before returning it. If the conditions don't match, NULL is returned and thus access to the qc is denied. __ata_qc_from_tag() is the original ata_qc_from_tag() and is used by libata core/EH layers to access inactive/failed qc's. This change is applied only if the associated LLDD implements new EH as indicated by a non-NULL ->error_handler.

Signed-off-by: Tejun Heo
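A compact sketch of the ata_port_abort()/ata_port_schedule_eh() pair described above: fail every active command so EH will examine it, and fall back to scheduling EH directly when nothing is in flight. The structures and flag values are simplified stand-ins, not the libata definitions.

#include <stdbool.h>
#include <stdio.h>

#define MAX_TAGS 32
#define QCFLAG_ACTIVE 0x1
#define QCFLAG_FAILED 0x2    /* stand-in for ATA_QCFLAG_FAILED */

struct qcmd {
    unsigned int flags;
};

struct port {
    bool eh_pending;             /* stand-in for ATA_FLAG_EH_PENDING */
    struct qcmd qcmd[MAX_TAGS];
};

/* Ask for EH directly, with no qc attached (cf. ata_port_schedule_eh()). */
static void port_schedule_eh(struct port *ap)
{
    ap->eh_pending = true;
    printf("EH scheduled for port\n");
}

static void qc_complete(struct port *ap, unsigned int tag)
{
    ap->qcmd[tag].flags &= ~QCFLAG_ACTIVE;
    printf("qc %u completed (flags=%#x)\n", tag, ap->qcmd[tag].flags);
}

/*
 * Fail every active qc so EH will examine it; if nothing was active,
 * fall back to scheduling EH without a qc (cf. ata_port_abort()).
 */
static int port_abort(struct port *ap)
{
    int nr_aborted = 0;
    unsigned int tag;

    for (tag = 0; tag < MAX_TAGS; tag++) {
        struct qcmd *qc = &ap->qcmd[tag];

        if (qc->flags & QCFLAG_ACTIVE) {
            qc->flags |= QCFLAG_FAILED;
            qc_complete(ap, tag);
            nr_aborted++;
        }
    }
    if (!nr_aborted)
        port_schedule_eh(ap);
    return nr_aborted;
}

int main(void)
{
    struct port ap = { 0 };

    ap.qcmd[2].flags = QCFLAG_ACTIVE;
    printf("aborted %d commands\n", port_abort(&ap));
    return 0;
}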
commit 2ab7db1ff1d64a2ba389d0692d532f42a15f1f72
Author: Tejun Heo
Date: Mon May 15 20:58:02 2006 +0900

[PATCH] libata-eh-fw: use special reserved tag and qc for internal commands

New EH may issue internal commands to recover from errors while failed qc's are still hanging around. To allow such usage, reserve tag ATA_MAX_QUEUE-1 for internal commands. This also makes it easy to tell whether a qc is for an internal command or not; the ata_tag_internal() test implements this. To avoid breaking existing drivers, ata_exec_internal() uses ATA_TAG_INTERNAL only for drivers which implement ->error_handler. For drivers using old EH, tag 0 is used. Note that this makes the ata_tag_internal() test valid only when ->error_handler is implemented. This is okay as drivers on old EH should not and do not have any reason to use ata_tag_internal().

Signed-off-by: Tejun Heo

commit dc2b3515868a254b3d653d77844bff93c5d4c095
Author: Tejun Heo
Date: Mon May 15 20:58:00 2006 +0900

[PATCH] libata-eh-fw: clear SError in ata_std_postreset()

Clear SError in ata_std_postreset(). This is to clear SError bits which get set during reset.

Signed-off-by: Tejun Heo

commit 9ec957f2002bd2994be659bbc0ec28397fa251ee
Author: Tejun Heo
Date: Mon May 15 20:57:58 2006 +0900

[PATCH] libata-eh-fw: add flags and operations for new EH

Add ATA_FLAG_EH_{PENDING|FROZEN}, ATA_QCFLAG_{FAILED|SENSE_VALID} and ops->freeze, thaw, error_handler, post_internal_cmd() for new EH.

Signed-off-by: Tejun Heo

commit f15a1dafed22d5037e0feea7528e1eeb28a1a7a3
Author: Tejun Heo
Date: Mon May 15 20:57:56 2006 +0900

[PATCH] libata: use ATA printk helpers

Use ATA printk helpers.

Signed-off-by: Tejun Heo

commit 61440db61fe4945ad9f7b32b4d6a22b17174aa1f
Author: Tejun Heo
Date: Mon May 15 20:57:55 2006 +0900

[PATCH] libata: implement ATA printk helpers

Implement ata_{port|dev}_printk() which prefixes the message with a proper identification string. This change is necessary for later PM support because devices and links should be identified differently depending on how they are attached. This also helps unify device id strings. Currently, there are two forms in use (P is the port number, D the device number) - 'ataP(D):' and 'ataP: dev D '. These macros also make it harder to forget the proper ID string (e.g. printing only the port number when a device is in question). Debug message handling can be integrated into these printk macros by passing debug type and level via @lv.

Signed-off-by: Tejun Heo

commit 3373efd89dead4ce7818d685729e0431448357c9
Author: Tejun Heo
Date: Mon May 15 20:57:53 2006 +0900

[PATCH] libata: use dev->ap

Use dev->ap where possible and eliminate superfluous @ap from functions and structures.

Signed-off-by: Tejun Heo
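The reserved internal tag and the ownership-checking lookup from the two EH framework commits above combine naturally. Below is a hedged, simplified sketch of the idea (checked vs. unchecked lookup plus an internal-tag test); the flag values, struct layout and the exact ownership condition are stand-ins for the real libata code.

#include <stdio.h>

#define MAX_QUEUE    32
#define TAG_INTERNAL (MAX_QUEUE - 1)    /* reserved tag, cf. ATA_TAG_INTERNAL */

#define QCFLAG_ACTIVE 0x1
#define QCFLAG_FAILED 0x2    /* owned by EH once set */

struct qcmd {
    unsigned int tag;
    unsigned int flags;
};

struct port {
    int new_eh;    /* non-NULL ->error_handler in the real code */
    struct qcmd qcmd[MAX_QUEUE];
};

static int tag_internal(unsigned int tag)
{
    return tag == TAG_INTERNAL;
}

/* Unchecked lookup, like __ata_qc_from_tag(), for core/EH use. */
static struct qcmd *__qc_from_tag(struct port *ap, unsigned int tag)
{
    return tag < MAX_QUEUE ? &ap->qcmd[tag] : NULL;
}

/*
 * Checked lookup for the normal execution path: once a qc has been
 * scheduled for EH (FAILED set) or is no longer active, deny access.
 */
static struct qcmd *qc_from_tag(struct port *ap, unsigned int tag)
{
    struct qcmd *qc = __qc_from_tag(ap, tag);

    if (!ap->new_eh)
        return qc;
    if (qc && (qc->flags & QCFLAG_ACTIVE) && !(qc->flags & QCFLAG_FAILED))
        return qc;
    return NULL;
}

int main(void)
{
    struct port ap = { .new_eh = 1 };

    ap.qcmd[5] = (struct qcmd){ .tag = 5, .flags = QCFLAG_ACTIVE | QCFLAG_FAILED };
    printf("normal path sees tag 5? %s\n", qc_from_tag(&ap, 5) ? "yes" : "no");
    printf("tag %d internal? %d\n", TAG_INTERNAL, tag_internal(TAG_INTERNAL));
    return 0;
}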
commit 38d87234d6c47ca487fc6344100323d5adc6f32c
Author: Tejun Heo
Date: Mon May 15 20:57:51 2006 +0900

[PATCH] libata: add dev->ap

Add dev->ap which points back to the port the device belongs to. This makes it unnecessary to pass @ap for silly reasons (e.g. printks). Also, this change is necessary to accommodate later PM support which will introduce an ATA link in between port and device.

Signed-off-by: Tejun Heo

commit a0ab51cefc95cb7756c4914603fea2b1a0f813c5
Author: Tejun Heo
Date: Mon May 15 20:57:49 2006 +0900

[PATCH] libata: kill old SCR functions and sata_dev_present()

Kill the now unused scr_{read|write|write_flush}() and sata_dev_present().

Signed-off-by: Tejun Heo

commit 81952c5497b40ae56835bd0d6537f8c6bdea07e7
Author: Tejun Heo
Date: Mon May 15 20:57:47 2006 +0900

[PATCH] libata: use new SCR and on/offline functions

Use the new SCR and on/offline functions. Note that for LLDDs which know they implement SCR callbacks, SCR functions are guaranteed to succeed and ata_port_online() == !ata_port_offline().

Signed-off-by: Tejun Heo

commit 34bf21704c848fe00c516d1c8f163db08b70b137
Author: Tejun Heo
Date: Mon May 15 20:57:46 2006 +0900

[PATCH] libata: implement new SCR handling and port on/offline functions

Implement ata_scr_{valid|read|write|write_flush}() and ata_port_{online|offline}(). These functions replace scr_{read|write}() and sata_dev_present(). The major difference between the new SCR functions and the old ones is that the new ones have a way to signal errors to the caller. This makes handling SCR-available and SCR-unavailable cases in the same path easier. Also, it eases later PM implementation where SCR access can fail due to various reasons. The ata_port_{online|offline}() functions return 1 only when they are affirmative of the condition. e.g. if SCR is inaccessible or presence cannot be determined for other reasons, these functions return 0. So, ata_port_online() != !ata_port_offline(). This distinction is useful in many exception handling cases.

Signed-off-by: Tejun Heo

commit 838df6284c54447efae956fb9c243d8ba4ab0f47
Author: Tejun Heo
Date: Mon May 15 20:57:44 2006 +0900

[PATCH] libata: init ap->cbl to ATA_CBL_SATA early

Init ap->cbl to ATA_CBL_SATA in ata_host_init(). This is necessary for the soon-to-follow SCR handling function changes. LLDDs are free to change ap->cbl during probing.

Signed-off-by: Tejun Heo

commit ce5f7f3d0cab82d6c16fcb64def8bfc0a3a85dd6
Author: Tejun Heo
Date: Mon May 15 20:57:42 2006 +0900

[PATCH] sata_sil24: update TF image only when necessary

Update the TF image (pp->tf) only when necessary.

Signed-off-by: Tejun Heo

commit e61e067227bc76b4d9411a50d735c9d87f27b0e2
Author: Tejun Heo
Date: Mon May 15 20:57:40 2006 +0900

[PATCH] libata: implement qc->result_tf

Add qc->result_tf and ATA_QCFLAG_RESULT_TF. This moves the responsibility of loading the result TF from the post-completion path to the qc execution path. qc->result_tf is loaded if explicitly requested or the qc fails. This allows a more efficient completion implementation and correct handling of the result TF for controllers which don't have a global TF representation such as sil3124/32.

Signed-off-by: Tejun Heo

commit 96bd39ec295e49443c8b0c25a6b69fdace18780f
Author: Tejun Heo
Date: Mon May 15 20:57:38 2006 +0900

[PATCH] libata: remove postreset handling from ata_do_reset()

Make ata_do_reset() deal only with reset. postreset is now the responsibility of the caller. This is simpler and eases later prereset addition.

Signed-off-by: Tejun Heo
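The new SCR and on/offline helpers above differ from the old ones mainly in that they can report failure and that online/offline are only affirmative answers. A small sketch of that contract, assuming the usual SStatus DET encoding (3 = device present and phy up, 0 = nothing detected); the struct and return values are simplified and not the kernel signatures.

#include <stdio.h>

enum { SCR_STATUS = 0, SCR_ERROR = 1, SCR_CONTROL = 2 };

/* Hypothetical, simplified port: SCRs may or may not be reachable. */
struct port {
    int has_scr;
    unsigned int scr[3];
};

/* Unlike the old void-returning helpers, report failure to the caller. */
static int scr_read(struct port *ap, unsigned int reg, unsigned int *val)
{
    if (!ap->has_scr || reg > SCR_CONTROL)
        return -1;    /* kernel code would return a proper -errno here */
    *val = ap->scr[reg];
    return 0;
}

/* Return 1 only when we are positive about the condition. */
static int port_online(struct port *ap)
{
    unsigned int sstatus;

    if (scr_read(ap, SCR_STATUS, &sstatus) == 0 && (sstatus & 0xf) == 0x3)
        return 1;    /* DET == 3: device present and phy communication up */
    return 0;
}

static int port_offline(struct port *ap)
{
    unsigned int sstatus;

    if (scr_read(ap, SCR_STATUS, &sstatus) == 0 && (sstatus & 0xf) == 0x0)
        return 1;    /* DET == 0: no device detected */
    return 0;
}

int main(void)
{
    struct port pata = { .has_scr = 0 };    /* SCRs unavailable */

    /* Neither helper can be affirmative, so both return 0. */
    printf("online=%d offline=%d\n", port_online(&pata), port_offline(&pata));
    return 0;
}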
commit 3adcebb2b59d590d572844815c906ca30477b14a
Author: Tejun Heo
Date: Mon May 15 20:57:37 2006 +0900

[PATCH] libata: move ->set_mode() handling into ata_set_mode()

Move ->set_mode() handling into ata_set_mode().

Signed-off-by: Tejun Heo

commit fe635c7e91036282e4fd0cc5b4eebc712e43270d
Author: Tejun Heo
Date: Mon May 15 20:57:35 2006 +0900

[PATCH] libata: use preallocated buffers

It's not a very good idea to allocate memory during EH. Use a statically allocated buffer for dev->id[] and add a 512-byte buffer ap->sector_buf. This buffer is owned by EH (or probing) and is to be used as a temporary buffer for various purposes (IDENTIFY, NCQ log page 10h, PM GSCR block).

Signed-off-by: Tejun Heo

commit 158693031d7c58a355ec1852052a4fca75fd3bda
Author: Tejun Heo
Date: Mon May 15 20:57:33 2006 +0900

[PATCH] libata: hold host_set lock while finishing internal qc

Hold the host_set lock while finishing an internal qc.

Signed-off-by: Tejun Heo

commit 7401abf2f44695ef44eef47d5deba1c20214a063
Author: Tejun Heo
Date: Mon May 15 20:57:32 2006 +0900

[PATCH] libata: clear ap->active_tag atomically w.r.t. command completion

ap->active_tag was cleared in ata_qc_free(). This left ap->active_tag dangling after ata_qc_complete(). Spurious interrupts in between could incorrectly access the qc. Clear active_tag in ata_qc_complete(). This change is necessary for later EH changes.

Signed-off-by: Tejun Heo

commit f8c2c4202d86e14ca03b7adc7ebcb30fc74b24e1
Author: Tejun Heo
Date: Mon May 15 20:57:30 2006 +0900

[PATCH] libata: fix ->phy_reset class code handling in ata_bus_probe()

ata_bus_probe() doesn't clear dev->class after ->phy_reset(). This can result in falsely enabled devices if probing fails. Clear dev->class to ATA_DEV_UNKNOWN after fetching it.

Signed-off-by: Tejun Heo

commit 6cd727b14f1a6cdcb088d1067c1ba0ba124806a7
Author: Tejun Heo
Date: Mon May 15 20:57:28 2006 +0900

[PATCH] libata: kill duplicate prototypes

Kill duplicate prototypes for ata_eh_qc_complete/retry() in libata.h.

Signed-off-by: Tejun Heo

commit e23befe9018319dc218e2e51c20ce480e6b45eeb
Author: Tejun Heo
Date: Mon May 15 20:57:27 2006 +0900

[PATCH] libata: unexport ata_scsi_error()

While moving ata_scsi_error() from the LLDD sht to the libata transportt, the EXPORT_SYMBOL_GPL() entry was left behind. Kill it.

Signed-off-by: Tejun Heo

commit e4fac92ae744cd5d5c3c1774825788e6b91a5965
Author: Tejun Heo
Date: Mon May 15 20:57:25 2006 +0900

[PATCH] ahci: hardreset classification fix

AHCI calls ata_dev_classify() even when no device is attached, which results in a false class code. Fix it.

Signed-off-by: Tejun Heo

commit 3c567b7d1137633f3ff67cd1df94abc5fd497a85
Author: Tejun Heo
Date: Mon May 15 20:57:23 2006 +0900

[PATCH] libata: rename ata_down_sata_spd_limit() and friends

Rename ata_down_sata_spd_limit() and friends to sata_down_spd_limit() and likewise for simplicity & consistency.

Signed-off-by: Tejun Heo

commit c44078c03f018c8cc9d7463b0db4c6c7fb316792
Author: Tejun Heo
Date: Mon May 15 20:57:21 2006 +0900

[PATCH] libata: silly fix in ata_scsi_start_stop_xlat()

Don't directly access &qc->tf when tf == &qc->tf.

Signed-off-by: Tejun Heo

commit ee7863bc68fa6ad6fe7cfcc0e5ebe9efe0c0664e
Author: Tejun Heo
Date: Mon May 15 20:57:20 2006 +0900

[PATCH] SCSI: implement shost->host_eh_scheduled

libata needs to invoke EH without an scmd. This patch adds shost->host_eh_scheduled to implement such behavior. Currently the only user of this feature is libata and no general interface is defined. This patch simply adds handling for host_eh_scheduled where needed and exports scsi_eh_wakeup() to modules. The rest is up to libata. This is the result of the following discussion.

http://thread.gmane.org/gmane.linux.scsi/23853/focus=9760

In short, the SCSI host is not supposed to know about exceptions unrelated to a specific device or command. Such exceptions should be handled by the transport layer proper. However, the distinction is not essential to ATA and libata is planning to depart from SCSI, so, for the time being, libata will be using SCSI EH to handle such exceptions.

Signed-off-by: Tejun Heo
commit 89f48c4d67dd875cf2216d4402bf77eda41fbdd9
Author: Luben Tuikov
Date: Mon May 15 20:57:18 2006 +0900

[PATCH] SCSI: Introduce scsi_req_abort_cmd (REPOST)

Introduce scsi_req_abort_cmd(struct scsi_cmnd *). This function requests that SCSI Core start recovery for the command by deleting the timer and adding the command to the eh queue. It can be called by either LLDDs or SCSI Core. LLDDs who implement their own error recovery MAY ignore the timeout event if they generated scsi_req_abort_cmd.

First post: http://marc.theaimsgroup.com/?l=linux-scsi&m=113833937421677&w=2

Signed-off-by: Luben Tuikov
Signed-off-by: Tejun Heo

commit bf2af2a2027e52b653882fbca840620e896ae081
Author: Bastiaan Jacques
Date: Mon Apr 17 14:17:59 2006 +0200

[PATCH] ahci: add support for VIA VT8251

Adds AHCI support for the VIA VT8251. Includes a workaround for a hardware bug which requires a Command List Override before softreset.

Signed-off-by: Bastiaan Jacques
Signed-off-by: Jeff Garzik

commit e8cfde81bd0e2e0800eff6fce5147a097437b8b4
Author: Jeff Garzik
Date: Wed Apr 12 17:38:48 2006 -0400

[libata pata] update for removal of .eh_strategy_handler from Scsi_Host_Template

commit 26ec634c31a11a003040e10b4d650495158632fd
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: enable 64bit

Enable 64bit.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit bad28a37f5e4ab1db5c5f01b77664597b02b257f
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: fix on-memory structure byteorder

Data structures residing in memory and fetched by the controller should have LE ordering. Fix it.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit a5b4c47a2731f1dd685f28b79464e4442f3682ec
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: don't do hardreset during driver initialization

There's no need to perform hardreset during driver initialization. It's already done during host reset and even if the controller is in some wacky state, we now have a proper hardreset to back us up.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit ecc2e2b9c97719592b3078d5a5a8666551c91115
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: reimplement hardreset

Reimplement hardreset according to the datasheet. The old hardreset didn't reset controller status and the controller might not be ready after reset. Also, as SStatus is a bit flaky after hardreset, sata_std_hardreset() didn't use to wait long enough before proceeding. Note that as we're not depending on SStatus, the DET==1 condition cannot be used to wait for the link, so use a shorter timeout for the no-device case.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
commit 0eaa6058a6a664ce692e3dc38c6891a74ca47f59
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: kill 10ms sleep in softreset

Nothing, neither the datasheet nor the errata, says this delay is necessary, and with the previous PORT_CS_INIT change, we know the controller is in a good state. Kill the 10ms sleep.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 2555d6c268240fb3f5f335bd62d0518025343c0f
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: put port into known state before softresetting

Make sure the controller has no pending commands and is ready for a command before issuing SRST.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit b5bc421c96ca56a9abaad4619da01fe0071904a2
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: implement sil24_init_port()

Implement sil24_init_port() which performs port initialization via PORT_CS_INIT. To be used later by softreset and EH.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 37024e8ee0d8dbcd0c2634192cb3836549db054e
Author: Tejun Heo
Date: Tue Apr 11 22:32:19 2006 +0900

[PATCH] sata_sil24: implement loss-of-completion-interrupt-on-PCI-X errata fix

When operating in PCI-X mode, SiI3124 might lose a completion interrupt if it occurs shortly after the SLOT_STAT register is read for the previous completion interrupt. This currently doesn't trigger as libata never queues more than one command, but it will with the NCQ changes. This patch implements the workaround - turning on WoC and explicitly clearing the interrupt.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 9466d85bb2c80b1aa169dda638b535f2f19714e4
Author: Tejun Heo
Date: Tue Apr 11 22:32:18 2006 +0900

[PATCH] sata_sil24: consolidate host flags into SIL24_COMMON_FLAGS

All sil24 controllers share the same host flags except for NPORTS. Consolidate them into SIL24_COMMON_FLAGS.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 7dafc3fd9a9e34e4a02ee6d141fd391ad5bdcd90
Author: Tejun Heo
Date: Tue Apr 11 22:32:18 2006 +0900

[PATCH] sata_sil24: add more constants

Add HOST_CTRL_* and more PORT_IRQ_* bits.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 3b9f1d0fb3fd68419ee3fa3e0833c6a05462150d
Author: Tejun Heo
Date: Tue Apr 11 22:32:18 2006 +0900

[PATCH] sata_sil24: rename PORT_IRQ_SDB_FIS to PORT_IRQ_SDB_NOTIFY

Rename PORT_IRQ_SDB_FIS to the more proper PORT_IRQ_SDB_NOTIFY.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 640088024dbf19553bb4b53d81e919cdf570f3b0
Author: Tejun Heo
Date: Tue Apr 11 22:32:18 2006 +0900

[PATCH] sata_sil24: typo fix

Typo fix.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 499a86af41cf5a4bf811726841bbc49c0e96fd35
Author: Tejun Heo
Date: Tue Apr 11 22:32:18 2006 +0900

[PATCH] libata: export ata_set_sata_spd()

This will be used by LLDD hardreset implementations.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 51713d359ae274fa4dd4b199ba3a6b0c21ef99e0
Author: Tejun Heo
Date: Tue Apr 11 22:26:29 2006 +0900

[PATCH] libata: cosmetic update to ata_bus_probe()

Move ata_set_mode() failure handling outside of the ap->ops->set_mode if clause such that it can handle ap->ops->set_mode failures after it's updated.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
commit ec573755fcd7975aae6b0d536dbcd74a6eed029c
Author: Tejun Heo
Date: Tue Apr 11 22:26:29 2006 +0900

[PATCH] libata: disable failed devices only once in ata_bus_probe()

Devices which consumed all their chances used to be disabled every iteration. This causes unnecessary noise in the console output. Disable once and leave alone.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 7dd29dd629bd5a4e6d8a164a9886da01f291ecf2
Author: Tejun Heo
Date: Tue Apr 11 22:22:30 2006 +0900

[PATCH] sata_sil24: use ata_wait_register()

Replace hard-coded waiting loops in sata_sil24 with ata_wait_register().

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 75fe18069a55c78f553643b4e3a24c6864d71d87
Author: Tejun Heo
Date: Tue Apr 11 22:22:29 2006 +0900

[PATCH] ahci: use ata_wait_register()

Replace ahci_poll_register() with ata_wait_register().

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit c22daff41001e9ccead87179ac0547f85447139e
Author: Tejun Heo
Date: Tue Apr 11 22:22:29 2006 +0900

[PATCH] libata: implement ata_wait_register()

As waiting for some register bits to change seems to be a common operation shared by some controllers, implement the helper function ata_wait_register(). This function also takes care of register write flushing. Note that the condition is inverted: the wait is over when the masked value does NOT match @val. As we're waiting for bits to change, this test is more powerful and allows the function to be used in more places.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 643be977f9feba8c3c1e768fc06cac84596ec6f8
Author: Tejun Heo
Date: Tue Apr 11 22:22:29 2006 +0900

[PATCH] sata_sil24: better error message from softreset

Improve the softreset error message.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 1c1d832cea1ab851a3f9b9d83245f5bc8b5b04b6
Author: Tejun Heo
Date: Tue Apr 11 22:22:29 2006 +0900

[PATCH] sata_sil24: fix timeout calculation in sil24_softreset

sil24_softreset calculated the timeout by adding ATA_TMOUT_BOOT * HZ to jiffies; however, as ATA_TMOUT_BOOT is already in jiffies, multiplying by HZ makes the value way off. Fix it.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 987d2f05b396760517eef7cba66b2f415ac484f5
Author: Tejun Heo
Date: Tue Apr 11 22:16:45 2006 +0900

[PATCH] libata: make reset methods complain when they fail

Make reset methods complain loudly when they fail.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 2bf2cb26b2512c6a609bb152982c388329bedff6
Author: Tejun Heo
Date: Tue Apr 11 22:16:45 2006 +0900

[PATCH] libata: kill @verbose from ata_reset_fn_t

@verbose was added to ata_reset_fn_t because AHCI complained during probing if no device was attached to the port. However, muting the failure message isn't the correct approach. Reset methods are responsible for detecting the no-device condition and finishing successfully. Now that AHCI softreset is fixed, kill @verbose.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit db70fef0750e5f8dbb64f9fadb333d2c7caf26a1
Author: Tejun Heo
Date: Tue Apr 11 22:16:44 2006 +0900

[PATCH] libata: set default cbl in probeinit

Make setting the CBL type the responsibility of probeinit. This allows using only the ap->cbl == ATA_CBL_SATA test in all other parts. Without this, ata_down_sata_spd_limit() doesn't work during probe reset.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
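The wait-until-register-changes helper described in the ata_wait_register() commit above can be sketched as follows. This is a simplified stand-in: the register read is faked, the poll loop just counts iterations, and the real helper additionally flushes a preceding write and works with jiffies-based intervals and timeouts.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a memory-mapped register read: busy for the first 3 polls. */
static uint32_t read_reg(void)
{
    static int calls;
    return ++calls < 4 ? 0x80000000u : 0x00000050u;
}

/*
 * Poll until (reg & mask) != val or the poll budget runs out. Note the
 * inverted condition: we wait for the masked value to CHANGE away from
 * @val, which is what makes the helper reusable for "wait for busy to
 * clear" as well as "wait for ready to appear".
 */
static uint32_t wait_register(uint32_t mask, uint32_t val, unsigned int max_polls)
{
    uint32_t tmp = read_reg();

    while ((tmp & mask) == val && max_polls--)
        tmp = read_reg();

    return tmp;    /* the caller re-checks the condition to detect a timeout */
}

int main(void)
{
    uint32_t v = wait_register(0x80000000u, 0x80000000u, 100);
    printf("register settled at %#x\n", (unsigned)v);
    return 0;
}

The sil24 timeout bug fixed above is a reminder of why such a shared helper matters: with a common routine, the "is this value already in jiffies?" arithmetic lives in exactly one place.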
commit 35bb94b116e1fd4959ef0d3187458b5820eac8c4
Author: Jeff Garzik
Date: Tue Apr 11 13:12:34 2006 -0400

libata: Add helper ata_shost_to_port()

commit 381544bba3ae6f2f1004b267da34f840b469033c
Author: Jeff Garzik
Date: Tue Apr 11 13:04:39 2006 -0400

libata: Fix EH merge difference between this branch and upstream.

commit c2a6585296009379e0f4eff39cdcb108b457ebf2
Author: Tejun Heo
Date: Mon Apr 3 01:58:06 2006 +0900

[PATCH] ahci: do not fail softreset if PHY reports no device

All softreset methods are responsible for detecting device presence and succeeding softreset in such cases. AHCI didn't use to check for device presence before proceeding with softreset and this caused unnecessary reset retries during probing. This patch adds presence detection to AHCI softreset.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 95de719adc94392a95c3c4d0a2d6b8b1ea39d236
Author: Albert Lee
Date: Tue Apr 4 10:57:18 2006 +0800

[PATCH] libata: convert ATAPI_ENABLE_DMADIR to module parameter

Convert the ATAPI_ENABLE_DMADIR compile-time option needed by some SATA-PATA bridges to a runtime module parameter.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit 31ce6daefe2d312e31ee06b0b3301b1cb7878c04
Author: Albert Lee
Date: Mon Apr 3 18:31:44 2006 +0800

[PATCH] libata-dev: irq-pio minor fix 2

irq-pio minor fix 2:
- Use qc as data for ata_pio_task().

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit 4332a771f4d2f23a6d3beff3dd5405e79775a211
Author: Albert Lee
Date: Mon Apr 3 17:43:24 2006 +0800

[PATCH] libata-dev: irq-pio minor fix

irq-pio minor fix:
- remove the redundant hsm_task_state = HSM_ST_IDLE
- add devno to printk() as done in upstream

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik

commit af64371ada9452632c349563d688d30d94e918ba
Author: Jeff Garzik
Date: Sun Apr 2 20:41:36 2006 -0400

[libata] bump versions

commit 4bced2d40555eebf8d685f174aa6d58ace353655
Author: Jeff Garzik
Date: Sun Apr 2 20:17:48 2006 -0400

[libata] kill bogus cut-n-pasted comments in three drivers

commit 6d5f9732a16a74d75f8cdba5b00557662e83f466
Author: Tejun Heo
Date: Mon Apr 3 00:09:41 2006 +0900

[PATCH] libata: print SControl in SATA link status info message

Now that libata mangles with SControl, it's helpful to print out SControl in the link status message. Add it.

Signed-off-by: Jeff Garzik

commit c13b56a1130bbfacfb588de100e5a248383805a6
Author: Jeff Garzik
Date: Sun Apr 2 10:34:24 2006 -0400

[libata] irq-pio: Fix merge mistake

commit df3ab408a1d5e6c51732d97f4c06ce1bd52aa238
Author: Jeff Garzik
Date: Sun Apr 2 10:17:51 2006 -0400

[libata] s/ata_dev_present/ata_dev_enabled/ for several PATA drivers

commit 1ad8e7f9eb051b040880e45337ed74bfd916ef7f
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: make some libata-core routines extern

Make libata-core routines which will be used by the EH implementation extern.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit ece1d63619df010b8c4f08e43755e2a03f3b6eed
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: separate out libata-eh.c

A lot of EH code is about to be added to libata. Separate out libata-eh.c. ata_scsi_timed_out(), ata_scsi_error(), ata_qc_timeout(), ata_eng_timeout(), ata_eh_qc_complete() and ata_eh_qc_retry() are moved. No code is changed by this patch.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
commit 35e86b53b1a38e78ff0d70dae4aeb25f4572e433
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: dec scmd->retries for qcs with zero err_mask

qcs might get retried because of an unrelated failure. e.g. an NCQ command failure causes the whole command set to be aborted. Decrement scmd->retries for such retries to avoid unnecessarily failing commands. Note that scmd->retries will be incremented the first time.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit d69cf37d5387801914bbf5297f070c7d2ee0206f
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: add @cdb to ata_exec_internal()

Add @cdb to ata_exec_internal(). It will be used by new EH.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 74e6c8c394ca2126a60e97bc1142ec2d91761e9a
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: don't read TF directly from sense generation functions

The TF register might not be directly accessible depending on errors. e.g. the TF of a failed NCQ command is in log page 10h. Make reading the TF the responsibility of error handlers. For the current EH, simply push TF reading into qc completion functions as they are practically part of EH. New EH will fill qc->tf with the status registers before completing qcs.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 058e55e120ca59d37392f9aa753da2d9ead24505
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: always generate sense if qc->err_mask is non-zero

The current sense generation code does not generate a sense error if the status register value doesn't indicate an error condition. However, LLDDs may indicate errors which don't show up in the status register. Completing such qc's without generating sense results in successful completion of failed commands. Invoke ata_to_sense_error() regardless of the status register if qc->err_mask is not zero such that ata_to_sense_error() generates a default sense error.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit c91af2c87e4048cdefcfc9f16fed8d728243c92d
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: pass qc around instead of ap during PIO

The current code passes a pointer to ap around and repeatedly performs ata_qc_from_tag() to access the ongoing qc. This is unnatural and makes EH synchronization cumbersome. Make PIO code deal with the qc instead of ap.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 2719736779da2c7fbb17d3de16c817b429bfeb9c
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: add ATA_QCFLAG_IO

Add a new qc flag ATA_QCFLAG_IO. This flag gets set for normal IO commands originating from the SCSI midlayer. This information will be used by EH to determine transfer speed reconfiguration.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit e8384607d4f395985e3cc5f82d75fc73efc2ecf0
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: clear ATA_DFLAG_PIO before setting it

ata_dev_set_mode() is now responsible for managing ATA_DFLAG_PIO. Clear it before setting it.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit ea1dd4e13010eb9dd5ffb4bfabbb472bc238bebb
Author: Tejun Heo
Date: Sun Apr 2 18:51:53 2006 +0900

[PATCH] libata: clear only affected flags during ata_dev_configure()

ata_dev_configure() should not clear dynamic device flags determined elsewhere. The lower eight bits are reserved for feature flags; define ATA_DFLAG_CFG_MASK and clear only those bits before configuring the device. Without this patch, ATA_DFLAG_PIO gets turned off during revalidation, making PIO mode unusable.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
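The ATA_DFLAG_CFG_MASK idea in the commit directly above is a simple bit-partitioning trick: configuration-determined feature flags live in the low eight bits and only those get wiped on reconfiguration, while dynamic flags from bit 8 up survive. A tiny sketch with illustrative flag names and values (not the real ATA_DFLAG_* definitions):

#include <stdio.h>

/* Feature flags (re)determined by device configuration: lower 8 bits. */
#define DFLAG_LBA      (1u << 0)
#define DFLAG_LBA48    (1u << 1)
#define DFLAG_CFG_MASK ((1u << 8) - 1)

/* Dynamic flags live from bit 8 up and must survive reconfiguration. */
#define DFLAG_PIO      (1u << 8)

struct device {
    unsigned int flags;
};

/* Clear only the configuration-determined bits before re-probing. */
static void dev_configure(struct device *dev, unsigned int new_cfg_flags)
{
    dev->flags &= ~DFLAG_CFG_MASK;
    dev->flags |= new_cfg_flags & DFLAG_CFG_MASK;
}

int main(void)
{
    struct device dev = { .flags = DFLAG_LBA | DFLAG_PIO };

    dev_configure(&dev, DFLAG_LBA | DFLAG_LBA48);
    printf("PIO flag preserved? %s\n", (dev.flags & DFLAG_PIO) ? "yes" : "no");
    return 0;
}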
commit 198e0fed9e59461fc1890dd8b75ec72d14638873
Author: Tejun Heo
Date: Sun Apr 2 18:51:52 2006 +0900

[PATCH] libata: rename ATA_FLAG_PORT_DISABLED to ATA_FLAG_DISABLED

Rename ATA_FLAG_PORT_DISABLED to ATA_FLAG_DISABLED for consistency. (ATA_FLAG_* are always about ports.)

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 949b38af40a0b88b7267908b1554a45b97b5b737
Author: Tejun Heo
Date: Sun Apr 2 18:51:52 2006 +0900

[PATCH] libata: clean up constants

* Reorder ATA_DFLAG_* such that feature flags determined by ata_dev_configure() are on the lower bits. Reserve the lower eight bits for this purpose and allocate dynamic flags from bit 8.
* Reorder ATA_FLAG_* such that feature flags determined during driver initialization are on bits 0:15, dynamic flags on 16:23 and LLDD-specific flags on 24:31.
* Kill trailing whitespace and lower-case a one-line comment for consistency.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit c43c555c3a6db7f0b55fd9b66d7ecff16e827d4e
Author: Tejun Heo
Date: Sun Apr 2 18:51:52 2006 +0900

[PATCH] libata: ATA_FLAG_IN_EH is not used, kill it

Kill the unused flag ATA_FLAG_IN_EH.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 5eb45c02a9944e813a0b82457607557a1f2c64b5
Author: Tejun Heo
Date: Sun Apr 2 18:51:52 2006 +0900

[PATCH] libata: ata_dev_revalidate() printk update

Make sure ata_dev_revalidate() complains on failures and kill the revalidation failure message printed from ata_dev_set_mode().

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit d63cb4a6365aa161341fc365df1edc87cd00c9c0
Author: Tejun Heo
Date: Sun Apr 2 18:51:52 2006 +0900

[PATCH] libata: report device number when PIO fails

Report the device number on PIO failure.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 565083e1f14e7771aa6bac2d3d4aae0b08d48d78
Author: Tejun Heo
Date: Sun Apr 2 17:54:47 2006 +0900

[PATCH] libata: consider disabled devices in ata_dev_xfermask()

ata_bus_probe() now marks failed devices properly and leaves meaningful transfer mode masks. This patch makes ata_dev_xfermask() consider disabled devices when determining PIO mode to avoid violating device selection timing. While at it, move the port-wide restriction out of the device iteration loop and try to make the function look a bit prettier.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 14d2bac1877ed4e2cc940d1680db1a4f29225811
Author: Tejun Heo
Date: Sun Apr 2 17:54:46 2006 +0900

[PATCH] libata: improve ata_bus_probe()

Improve ata_bus_probe() such that configuration failures are handled better. Each device is given ATA_PROBE_MAX_TRIES chances, but any non-transient error (revalidation failure with -ENODEV, configuration failure with -EINVAL...) disables the device directly. Any IO error results in SATA PHY speed down and ata_set_mode() failure lowers the transfer mode. The last try always puts a device into PIO-0. After each failure, the whole port is reset to make sure that the controller and all the devices are in a known and stable state. The reset also applies SATA SPD configuration if necessary.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit cf176e1aa92eb2a3faea8409e841396a66413937
Author: Tejun Heo
Date: Sun Apr 2 17:54:46 2006 +0900

[PATCH] libata: implement ata_down_xfermask_limit()

Implement ata_down_xfermask_limit(). This function manipulates @dev->pio/mwdma/udma_mask such that the next lower transfer mode is selected. This will be used to improve ata_bus_probe() failure handling and later by EH.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik
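The "knock out the current top transfer mode" behaviour of ata_down_xfermask_limit() described above can be sketched with a single packed mask. The bit layout below is invented for illustration (the real code packs separate pio/mwdma/udma masks), but the highest-bit-clearing logic is the essence of the technique.

#include <stdio.h>

/*
 * Simplified: pack PIO modes in bits 0-6 and MWDMA in bits 7-11, find
 * the highest mode currently allowed and knock it out, so the next mode
 * selection picks something slower.
 */
static unsigned int highest_bit(unsigned int mask)
{
    unsigned int bit = 0;

    while (mask >>= 1)
        bit++;
    return bit;
}

static int down_xfermask_limit(unsigned int *xfer_mask)
{
    if (*xfer_mask == 0)
        return -1;    /* nothing left to step down to */
    *xfer_mask &= ~(1u << highest_bit(*xfer_mask));
    return 0;
}

int main(void)
{
    unsigned int mask = 0x1f | (0x7 << 7);    /* PIO0-4 + MWDMA0-2, illustrative */

    while (!down_xfermask_limit(&mask))
        printf("xfer_mask now %#x\n", mask);
    return 0;
}

Each failed configuration attempt in the improved ata_bus_probe() can call such a helper until only PIO-0 remains.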
commit edbabd8679a39faef67def4438c9cbccb5c05c5d
Author: Tejun Heo
Date: Sun Apr 2 20:55:02 2006 +0900

[PATCH] libata: add 5s sleep between resets

Some devices react badly if resets are performed back-to-back. Give devices some time to breathe and tell the user that we're taking a nap.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 90dac02c08dabd471927f151b8393eb51e3e020e
Author: Tejun Heo
Date: Sun Apr 2 17:54:46 2006 +0900

[PATCH] libata: use SATA speed down in ata_drive_probe_reset()

Make ata_drive_probe_reset() use SATA SPD configuration. Hardreset will be forced if speed renegotiation is necessary. Also, if a hardreset fails, PHY speed is stepped down and hardreset is retried until the lowest speed is reached.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 1c3fae4d7eb121933341443c37d3bbee43c0fb68
Author: Tejun Heo
Date: Sun Apr 2 20:53:28 2006 +0900

[PATCH] libata: implement ap->sata_spd_limit and helpers

ap->sata_spd_limit constrains the SATA PHY speed of the port. It is initialized to the configured value prior to probing, thus preserving the BIOS-configured value. hardreset is responsible for applying the SPD limit and sata_std_hardreset() is updated to do that. The SATA SPD limit will be used to enhance failure handling during probing and later by EH. This patch also normalizes some comments around the affected code.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 002c8054fa8d0f1afce2b0c728be32d338b9293a
Author: Tejun Heo
Date: Sun Apr 2 17:54:46 2006 +0900

[PATCH] libata: implement ata_dev_absent()

For the time being we cannot use ata_dev_present() as it was renamed to ata_dev_enabled(), but we still need a presence test. Implement the negation of the test. Conveniently, the negated result is needed in more places. This was suggested by Jeff Garzik.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit 852ee16a914fb3ada2f81e222677c04defc2f15f
Author: Tejun Heo
Date: Sat Apr 1 01:38:18 2006 +0900

[PATCH] libata: preserve SATA SPD setting over hard resets

Don't overwrite the SPD setting during hard reset. This change has the (intended) side effect of honoring the BIOS configuration.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit e82cbdb9a3791f781462c9d00e3486e8fb7e58a8
Author: Tejun Heo
Date: Sat Apr 1 01:38:18 2006 +0900

[PATCH] libata: don't disable devices from ata_set_mode()

When ata_set_mode() fails on a device, make ata_set_mode() return an error code and a pointer to the device instead of disabling it directly. This gives more control to the higher-level driving logic. This patch does not change the end result (configured transfer mode) although it may make libata repeat mode configuration to the peer of a failing device. A later ata_bus_probe() rewrite will make full use of this change.

Signed-off-by: Tejun Heo
Signed-off-by: Jeff Garzik

commit eee6c32f5f114f9b9f2d94862f0dc0d3ff523864
Author: Albert Lee
Date: Sat Apr 1 17:38:43 2006 +0800

[PATCH] libata-dev: handle DRQ=1 ERR=1 (revised)

Handle the DRQ=1 ERR=1 situation. Revised according to what IDE try_to_flush_leftover_data() does. Changes:
- For ATA PIO writes and ATAPI devices, just stop the HSM and let EH handle it.
- For ATA PIO reads, read only one block of junk data and then let EH handle it.

Signed-off-by: Albert Lee
Signed-off-by: Jeff Garzik
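Stepping down the PHY speed limit, as described in the ap->sata_spd_limit commit above, treats the limit as a mask of allowed link speeds and masks off the current speed and everything above it. The sketch below assumes the usual encoding (bit 0 = 1.5 Gbps, bit 1 = 3.0 Gbps, SStatus SPD field giving the current generation); the helper name and error handling are simplified.

#include <stdio.h>

/*
 * sata_spd_limit as a mask of allowed link speeds: bit 0 = 1.5 Gbps,
 * bit 1 = 3.0 Gbps. Stepping down means masking off the current speed
 * and everything above it, so the next hardreset negotiates slower.
 */
static int down_spd_limit(unsigned int *spd_limit, unsigned int cur_spd)
{
    unsigned int mask;

    if (cur_spd == 0)    /* speed unknown, nothing to base the limit on */
        return -1;
    mask = *spd_limit & ((1u << (cur_spd - 1)) - 1);
    if (!mask)
        return -1;    /* already at the lowest speed */
    *spd_limit = mask;
    return 0;
}

int main(void)
{
    unsigned int limit = 0x3;    /* both 1.5 and 3.0 Gbps allowed */

    /* Link currently at 3.0 Gbps (SStatus SPD field == 2). */
    if (!down_spd_limit(&limit, 2))
        printf("new spd_limit mask: %#x\n", limit);    /* 0x1 */
    return 0;
}

The hardreset path then translates the remaining mask into an SControl SPD setting, which is why preserving the BIOS-configured value at init time matters.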
Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit e8e0619f68bff8f39d98c46aac85ed1d4557ccfd Author: Tejun Heo Date: Sat Apr 1 01:38:18 2006 +0900 [PATCH] libata: reorganize ata_set_mode() Merge ata_host_set_pio() and ata_host_set_dma() into ata_set_mode() and use function-level *dev to iterate over devices. This eases soon-to-follow ata_set_mode() interface change. While at it, kill an unnecessary comment and normalize others. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 4f65977df0b9a667fdcd85b95d457a220c94113f Author: Tejun Heo Date: Sat Apr 1 01:38:18 2006 +0900 [PATCH] libata: make ata_set_mode() handle no-device case properly Make ata_set_mode() return without doing anything if there is no device on the port. This is in preparation for ata_bus_probe() changes. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit e1211e3fa7fd05ff0d4f597fd37e40de8acc6784 Author: Tejun Heo Date: Sat Apr 1 01:38:18 2006 +0900 [PATCH] libata: implement ata_dev_enabled and disabled() This patch renames ata_dev_present() to ata_dev_enabled() and adds ata_dev_disabled(). This is to discern the state where a device is present but disabled from not-present state. This disctinction is necessary when configuring transfer mode because device selection timing must not be violated even if a device fails to configure. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 9974e7cc6c8b8ea796c92cdf28db93e4579a4000 Author: Tejun Heo Date: Sat Apr 1 01:38:17 2006 +0900 [PATCH] libata: convert do_probe_reset() to ata_do_reset() Make do_probe_reset() generic by pushing classification check into ata_drive_probe_reset() and rename it to ata_do_reset(). This will be used by EH reset. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 4c360c81a6f8d4253f7fc2e69852305676fdfa72 Author: Tejun Heo Date: Sat Apr 1 01:38:17 2006 +0900 [PATCH] libata: separate out ata_spd_string() Separate out ata_spd_string() from sata_print_link_status(). This will be used by SATA spd configuration routines. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 96072e699be08af8e7c33c56759582ea89088a02 Author: Tejun Heo Date: Sat Apr 1 01:38:17 2006 +0900 [PATCH] libata: make ata_bus_probe() return negative errno on failure ata_bus_probe() uses unsigned int rc to receive negative errno and returns the converted unsigned int value. Convert temporary variables to int and make ata_bus_probe() return negative errno on failure. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 5bbc53f4cfd28bf1d0e476ed23bc3a094eff718a Author: Tejun Heo Date: Sat Apr 1 01:38:17 2006 +0900 [PATCH] libata: fix ata_set_mode() return value Make ata_set_mode() return correct error value when ata_dev_set_mode() fails. 
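The enabled/disabled/absent distinction introduced above is easier to see as a tiny state model. The enum and helpers below are a simplified illustration only, not the libata definitions (libata derives these states from the device's probe and configuration results):

        /* Simplified illustration of the three device states discussed above. */
        enum demo_dev_state {
                DEMO_DEV_ABSENT,        /* nothing answered the probe        */
                DEMO_DEV_DISABLED,      /* present, but configuration failed */
                DEMO_DEV_ENABLED,       /* present and fully configured      */
        };

        static int demo_dev_enabled(enum demo_dev_state s)
        {
                return s == DEMO_DEV_ENABLED;
        }

        static int demo_dev_disabled(enum demo_dev_state s)
        {
                /*
                 * A disabled device is still physically on the bus, so
                 * transfer mode selection must keep honouring its timing
                 * constraints even though no commands will be sent to it.
                 */
                return s == DEMO_DEV_DISABLED;
        }

        static int demo_dev_absent(enum demo_dev_state s)
        {
                return s == DEMO_DEV_ABSENT;
        }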
Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit 08a556db919f67e1e4d33ae8d40f7222da34d994 Author: Albert Lee Date: Fri Mar 31 13:29:04 2006 +0800 [PATCH] libata-dev: print out information for ATAPI devices with CDB interrupts print out information for ATAPI devices with CDB interrupts Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit ad98a3e7915134bfa21cba546e667eb1d084b0b4 Author: Albert Lee Date: Thu Mar 30 11:12:01 2006 +0800 [PATCH] libata: convert pata_pdc2027x to the new reset mechanism - Convert pata_pdc2027x to the new reset mechanism (->probe_reset) - Increase the version to 0.74 Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 52a3220599647ba429fcbca2388ec35b850fa72f Author: Albert Lee Date: Sat Mar 25 18:18:15 2006 +0800 [PATCH] libata-dev: wait idle after reading the last data block Some CD-ROM drives are slow to clear DRQ after the last data block is read by PIO. Use ata_wait_idle() after reading the last data block. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 71601958f73b952281f2b02e16d1f11c99ee0a8b Author: Albert Lee Date: Sat Mar 25 18:11:12 2006 +0800 [PATCH] libata-dev: fix the device err check sequence (respin) The current irq-pio code checks the ERR bit and stops on ERR before doing anything else. This behavior doesn't look right. The DRQ bit should take higher precedence than the ERR bit. Changes: - Let the HSM do the data transfer whenever the device asserts the DRQ bit, even if the ERR bit is set. - For DRQ=1 ERR=1, don't trust the data Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 999bb6f4260f9499fdeab3f8fdcd8b9013eca39e Author: Albert Lee Date: Sat Mar 25 18:07:48 2006 +0800 [PATCH] libata-dev: irq-pio minor fixes (respin) irq-pio minor fixes for printk() and comments. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit c234fb00ea8999076728137d96603b713ad8b53f Author: Albert Lee Date: Sat Mar 25 17:58:38 2006 +0800 [PATCH] libata-dev: Make the in_wq check an inline function Make the in_wq check easier to read as an inline function. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit c2bbc551615c21a4c280c797987dbb50f2701594 Author: Albert Lee Date: Sat Mar 25 17:56:55 2006 +0800 [PATCH] libata-dev: ata_check_atapi_dma() fix for ATA_FLAG_PIO_POLLING LLDDs ata_check_atapi_dma() fix for LLDDs with the ATA_FLAG_PIO_POLLING flag. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 27cdadef6dfe0d0614653919a110fc75ab1650ce Author: Albert Lee Date: Sat Mar 25 17:53:57 2006 +0800 [PATCH] libata-dev: Cleanup unused enums/functions Clean up the following unused functions: - ata_pio_poll() - ata_pio_complete() - ata_pio_first_block() - ata_pio_block() - ata_pio_error() as well as ap->pio_task_timeout and other enums. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit a1af37344f669d0fefa8c8a9e37eb6a7c086a2c2 Author: Albert Lee Date: Sat Mar 25 17:50:15 2006 +0800 [PATCH] libata-dev: Convert ata_pio_task() to use the new ata_hsm_move() Convert ata_pio_task() to use the new ata_hsm_move(). Changes: - refactor ata_pio_task() to poll the device status register, and - call the new ata_hsm_move() when the device indicates it is not BSY. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit bb5cb290f095f17f88c912e3da35adf5b2d9500b Author: Albert Lee Date: Sat Mar 25 17:48:02 2006 +0800 [PATCH] libata-dev: Let ata_hsm_move() work with both irq-pio and polling pio Let ata_hsm_move() work with both irq-pio and polling pio codepaths.
Changes: - add a new parameter "in_wq" for polling pio - add return value "poll_next" to tell polling pio task whether the HSM is finished - merge code from ata_pio_first_block() to the HSM_ST_FIRST state. - call ata_poll_qc_complete() if called from the workqueue Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 6912ccd5a4a095d71fd7215ef2abea877c8fed6f Author: Albert Lee Date: Sat Mar 25 17:45:49 2006 +0800 [PATCH] libata-dev: Minor fix for ata_hsm_move() to work with ata_host_intr() Minor fix for ata_hsm_move() to work with ata_host_intr(). Changes: - WARN_ON() and comment fix - Make the HSM_ST_LAST device status checking more rigid. - Treat unknown HSM state as BUG(). Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit e2cec77117a6e4fcbac2601e2f7b0b3f4f5a4c84 Author: Albert Lee Date: Sat Mar 25 17:43:49 2006 +0800 [PATCH] libata-dev: Move out the HSM code from ata_host_intr() Move out the irq-pio HSM code from ata_host_intr() to the new ata_hsm_move() function verbatim. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 19d5d7309a928eb86f58b37165a2d501621ae3c0 Author: Albert Lee Date: Sat Mar 25 17:41:43 2006 +0800 [PATCH] libata-dev: Remove atapi_packet_task() atapi_packet_task() was replaced by ata_pio_task(). Remove the unused atapi_packet_task(). Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 13ee4628ce68d1e54df01cc31239c8887c59e65d Author: Albert Lee Date: Sat Mar 25 17:39:34 2006 +0800 [PATCH] libata-dev: Fix merge problem with upstream Fix merge problem with upstream. Changes: 1. add missing dev->cdb_len = 16 for ATA devices 2. use ata_pio_task instead of atapi_packet_task Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit cf2f7689f94ee02e52d5331bc1a87421a67a882c Author: Albert Lee Date: Sat Mar 25 17:37:53 2006 +0800 [PATCH] libata-dev: Remove trailing whitespaces Remove trailing whitespaces. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 84ac69e8bf9f36eb0166817373336d14fa58f5cc Author: Jeff Garzik Date: Fri Mar 24 09:27:49 2006 -0500 [libata] irq-pio: fix build breakage commit bbbd62abb51c5804e113859e5bdccb0c9e84250a Author: Jeff Garzik Date: Wed Mar 22 23:38:48 2006 -0500 [libata pata_pdc2027x] use pci_iomap(), kzalloc() where appropriate commit 4eebac4a424444712011039c6f95933f9e362976 Author: Jeff Garzik Date: Wed Mar 22 08:00:36 2006 -0500 [libata sata_promise] fix build breakage due to bad merge commit ee124d992f68912ee78c3deb6a4f01035c7e45d0 Author: Tejun Heo Date: Wed Mar 1 15:13:50 2006 +0900 [PATCH] sata_sil: remove unaffected drives from m15w blacklist m15w blacklist overgrew by attributing unrelated problems to m15w including R_ERR on DMA activate FIS errata. This patch shrinks sata_sil m15w blacklist such that it's as reported by Silicon Image. Signed-off-by: Tejun Heo Cc: Carlos Pardo Signed-off-by: Jeff Garzik commit 200d5a7684cc49ef4be40e832daf3f217e70dfbb Author: Tejun Heo Date: Wed Feb 15 16:24:49 2006 +0900 [PATCH] libata: increase LBA48 max sectors to 65535 max_hw_sectors/max_sectors separation patch made into the tree, increase max_sectors to its hardware limit. Signed-off-by: Tejun Heo Signed-off-by: Jeff Garzik commit f90509718fa29362f49d43f296041613374664fb Author: Jeff Garzik Date: Thu Feb 9 04:50:51 2006 -0500 [libata pata] fix lingering old-style qc_issue_prot function declarations commit abddede3b93146089505932747ded2767ad0c81a Author: Andrew Morton Date: Tue Jan 17 23:28:18 2006 -0800 [PATCH] pata_opti needs PCI This driver explodes if !CONFIG_PCI. 
drivers/scsi/pata_opti.c: In function `opti_phy_reset': drivers/scsi/pata_opti.c:54: error: elements of array `opti_enable_bits' have incomplete type drivers/scsi/pata_opti.c:55: warning: excess elements in struct initializer drivers/scsi/pata_opti.c:55: warning: (near initialization for `opti_enable_bits[0]') drivers/scsi/pata_opti.c:55: warning: excess elements in struct initializer drivers/scsi/pata_opti.c:55: warning: (near initialization for `opti_enable_bits[0]') drivers/scsi/pata_opti.c:55: warning: excess elements in struct initializer drivers/scsi/pata_opti.c:55: warning: (near initialization for `opti_enable_bits[0]') drivers/scsi/pata_opti.c:55: warning: excess elements in struct initializer drivers/scsi/pata_opti.c:55: warning: (near initialization for `opti_enable_bits[0]') Signed-off-by: Andrew Morton Signed-off-by: Jeff Garzik commit a45cc796eae4d43bf43e1fb84e3f80a02114eac6 Author: Jeff Garzik Date: Tue Jan 17 10:44:00 2006 -0500 [libata] remove 'ordered_flush' member from PATA drivers commit 95f1d4389c49a65d2949e0c793468b605be09eab Author: Alan Cox Date: Mon Jan 9 17:55:26 2006 +0000 [PATCH] libata: Updates to the MPIIX driver Testing on a thinkpad 560 showed up some features I forgot to deal with. The chip itself is a PCI bridge with an ISA IDE controller built into it that is configured via the bridge PCI space. This means if the class bits happen to look like PCI native and we pass it to the usual pci setup functions then mucho shit happens. Rather than put crap for 'well its PCI but always legacy' special cases into the core code I've done a legacy initialize in the driver itself. This also avoids the second problem that you call pci_disable_device when the ATA setup fails or no disk is found. As the MPIIX is the root bridge and memory controller this is deeply ungood. Now passes the full test suite I can manage on all MPIIX systems I have access to. Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 9e6ca92c37564ddc99399815f68799f393a9daa2 Author: Alan Cox Date: Mon Jan 9 17:03:28 2006 +0000 [PATCH] libata: Update TODO list for pata_amd I forgot to do this before submitting that revision to Jeff Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit ad1bdec7576269109aab462fada2e3c8556ef38e Author: Jeff Garzik Date: Tue Dec 13 01:54:43 2005 -0500 [libata pata drivers] trim trailing whitespace commit 17ff43f91979cf64c73b99bed51e91675827e63a Author: Jeff Garzik Date: Mon Dec 12 23:50:42 2005 -0500 libata: Add Intel MPIIX and "old PIIX" PATA drivers. Contributed by Alan Cox, with very minor cleanups from me. commit 30063a5ec04f5a6e009bea6b6f7b0c84114130d5 Author: Jeff Garzik Date: Sat Nov 19 18:15:53 2005 -0500 [libata pata_via] fix warning In function `via_init_one': 378: warning: 'type' might be used uninitialized in this function commit 950381ace29d10d7014025a2386e56964712a99d Author: Alan Cox Date: Sat Nov 19 22:14:47 2005 +0000 [PATCH] libata: Clean up and fix the VIA PATA libata driver This fixes the enable bits bug for the VIA and also uses the change I sent and you merged which allows a pci probe to pass private_data through to the final host_set. This allows the driver to dramatically shrink in size and replaces the set of methods for each chip type with a single set of methods and config structure. Adds a couple of printks to help debug some reports of hangs. 
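Several of the PATA fixes in this series revolve around "enable bits": a byte in the chipset's PCI configuration space tells the driver whether the BIOS left each IDE channel enabled, and probing a disabled channel is pointless or even harmful. A minimal sketch of the idea follows; the register offset, masks, and names are hypothetical, chosen only to illustrate the table-plus-test pattern:

        /* Hypothetical enable-bits table; offsets and masks are made up. */
        struct demo_enable_bits {
                unsigned int  reg;      /* PCI config register holding the bits */
                unsigned char mask;     /* bits that matter for this channel    */
                unsigned char val;      /* value meaning "channel enabled"      */
        };

        static const struct demo_enable_bits demo_bits[2] = {
                { 0x40, 0x02, 0x02 },   /* primary channel   (illustrative) */
                { 0x40, 0x08, 0x08 },   /* secondary channel (illustrative) */
        };

        /* cfg is the byte read from PCI config space at demo_bits[ch].reg */
        static int demo_channel_enabled(unsigned char cfg, int ch)
        {
                return (cfg & demo_bits[ch].mask) == demo_bits[ch].val;
        }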
Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 8b8315701242cf3f86772a97ccfd14f1a583b0d2 Author: Alan Cox Date: Sat Nov 19 22:12:14 2005 +0000 [PATCH] libata: Fix enable bits for triflex Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 4a493f3f6b5adec3e4bc5ddb17a9b1072b904655 Author: Alan Cox Date: Sat Nov 19 22:06:28 2005 +0000 [PATCH] libata: Fix opti pci enable bits as with the AMD bug Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 1507c115b104f1cd40684006b9ec364ecec5445f Author: Alan Cox Date: Sat Nov 19 21:58:14 2005 +0000 [PATCH] libata: AMD pata fixes I managed to send you the pata file versions before I did the last fix (forgot to rsync them back off a build box). This corrects the obvious problem for the AMD driver and fixes drive detection. Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit df27c80c8df5b40076e9b7bbaa3570eb508d3eba Author: Jeff Garzik Date: Thu Nov 10 07:46:20 2005 -0500 [libata] constify PCI tables in PATA drivers commit b653dbeb2bb63882c42b220a0ee625fab709cc9b Author: Alan Cox Date: Thu Nov 10 07:44:15 2005 -0500 [libata] Add new PATA driver pata_opti commit bc97863d537ae67f1fba7bbe19282e3779b466d7 Author: Jeff Garzik Date: Thu Nov 10 07:22:45 2005 -0500 [libata] minor updates to PATA drivers - s/Scsi_Host_Template/struct scsi_host_template/ - remove inclusion of drivers/scsi/scsi.h - print out version on init - trim trailing whitespace commit c8fa3b95b7a0cc0d9dff00b6000afdde42f8504b Author: Albert Lee Date: Thu Nov 10 14:04:22 2005 +0800 [PATCH] libata: pata_pdc2027x minor fix Minor fix for pata_pdc2027x to compile with recent changes, etc. Changes: - eliminate use of drivers/scsi/scsi.h compatibility header - use dev_printk() where appropriate - add MODULE_VERSION(DRV_VERSION); Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit c33812bbd2c20ab2edcf547cfe3e738c5f7b7a8b Author: Alan Cox Date: Tue Nov 8 14:15:37 2005 +0000 [PATCH] libata: Add enablebits to via driver Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 69ce93ff25037f5f3c983490ddf7a80bc621372d Author: Alan Cox Date: Tue Nov 8 14:14:28 2005 +0000 [PATCH] libata: Add enablebits support to the triflex driver Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 675c80030d0ca96d4cb43ae6db5aa8516b112be3 Author: Alan Cox Date: Tue Nov 8 14:12:22 2005 +0000 [PATCH] libata: Update the AMD driver to support the AMD CS5536. This brings it back into line with the drivers/ide code. The full 5536 PCI identifiers are included. Trim to preference obviously. Also add enablebit support. Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit 91797d9e0a6ef8208855a5aa9a3e329754e4aed3 Author: Alan Cox Date: Wed Oct 26 12:25:13 2005 -0400 [libata] Add driver for PATA AMD/NVIDIA chips. commit c79a8dff70155ac707c14680e8e3632d78eaa8d2 Author: Jeff Garzik Date: Tue Oct 25 14:52:27 2005 -0400 libata: Add makefile rules for pata_via driver. commit ff8ac44ab2695f13ba5eef90a61357d92b9b99fd Author: Alan Cox Date: Tue Oct 25 14:35:09 2005 -0400 [libata] Add PATA VIA driver.
Signed-off-by: Alan Cox (Although please note that 75% of the driver code and 99% of the hard work are from Vojtech Pavlik's drivers/ide driver) commit 77f34f568d3d74d8baed644a9ce4a67359f24438 Author: Alan Cox Date: Mon Oct 24 23:36:08 2005 -0400 [libata] Add PATA driver for Compaq Triflex commit 58ee99d59ae273ae5c4ffe57ac0c3db1f1f1ab62 Author: Jeff Garzik Date: Fri Oct 21 19:15:14 2005 -0400 [libata pata_sil680] add to Makefile/Kconfig commit 398651116c251e00f31ebf525c83912f3bcbcc31 Author: Alan Cox Date: Fri Oct 21 14:11:38 2005 +0100 [PATCH] Add libata CMD/SI680 driver You now seem to have all the stuff necessary for this to work merged. Signed-off-by: Alan Cox Signed-off-by: Jeff Garzik commit d95fec041aa48063804ef60a980f1fdec2181e30 Author: Jeff Garzik Date: Mon Aug 29 19:05:42 2005 -0400 [libata pata_pdc2027x] add documentation ref in header; trim trailing whitespace commit 7bc0628a6db9c90d68feea8c1d693274e4bfc17b Author: Albert Lee Date: Fri Aug 12 15:23:39 2005 +0800 [PATCH] libata-dev: pdc2027x ATAPI DMA lost irq problem workaround Patch 3/3: pdc2027x ATAPI DMA lost irq problem workaround Description: Sometimes pdc2027x will lose the irq after an ATAPI DMA data transfer. With the previous workaround (cmd->request_bufflen % 256), the ATAPI DMA lost-irq problem still occurs during the test. The root cause of the lost irq is still unknown. I've tried your ATAPI DMA alignment patch, but the problem still occurs, even when the buffer is aligned. I guess it is a pdc2027x hardware problem. The following workarounds are adapted from the Promise pdc618 GPL driver. They seem to know about the problem and have the workarounds. In the Promise driver, there are 2 workarounds: 1. Only turn on ATAPI DMA for READ, WRITE, READ_CD and READ_DVD_STRUCTURE commands in the white list. 2. For WRITE_10, if the LBA is in the range -45150 (FFFF4FA2h) to -1 (FFFFFFFFh), then use PIO mode. However, I've done some tests. The negative LBA check does not seem to be needed for pdc2027x. So, only the command white list workaround is included in this patch. Changes: - Only turn on ATAPI DMA for READ, WRITE, READ_CD and READ_DVD_STRUCTURE commands. Tested OK on x86 + pdc20275 + Liteon CD-RW SOHR-5238S and LG DVDRAM GSA-4163B. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 5cbe20f1146bd2ca4bbc9a31a520a18e1a183c2c Author: Albert Lee Date: Fri Aug 12 15:21:08 2005 +0800 [PATCH] libata-dev: pdc2027x use "long" for counter data type Patch 2/3: pdc2027x use "long" for counter data type Description: The pdc2027x byte counter value is only 30-bit, so "long" should be big enough. Changes: - Change the counter data type from "unsigned long" to "long". Patch tested OK on x86, power4 and power5 machines. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 6b39a907cddbaeaa3a6b6a165ac8b1a07f44dbee Author: Albert Lee Date: Fri Aug 12 15:19:42 2005 +0800 [PATCH] libata-dev: Convert pdc2027x from PIO to MMIO Patch 1/3: Convert pdc2027x from PIO to MMIO Description: Indexed registers need two PIO accesses: one access writes to the index register and the other access reads/writes the indexed register. Using MMIO allows the register to be accessed directly and simplifies the code. Changes: - Use MMIO instead of PIO and indexed registers. - Minor fixes such as adding the ata_host_stop hook, indent beautification, etc. - Revised to use static inline and readx() to flush writes to the PCI bus instead of wmb().
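The pdc2027x ATAPI workaround above is a plain command white-list: DMA is attempted only for packet commands that are known to survive it, and everything else falls back to PIO. The sketch below illustrates the shape of such a check; the opcode set mirrors the commands named in the changelog, but the exact list and the function name are assumptions for illustration rather than the driver's actual hook:

        /* Illustrative white-list check; opcodes are standard SCSI/MMC values. */
        static int demo_atapi_cmd_may_use_dma(unsigned char opcode)
        {
                switch (opcode) {
                case 0x28:              /* READ(10)            */
                case 0xa8:              /* READ(12)            */
                case 0x2a:              /* WRITE(10)           */
                case 0xaa:              /* WRITE(12)           */
                case 0xbe:              /* READ CD             */
                case 0xad:              /* READ DVD STRUCTURE  */
                        return 1;       /* DMA believed safe   */
                default:
                        return 0;       /* use PIO instead     */
                }
        }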
Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 4c156bce30ec56ab6f985c22d14fbde00d926566 Author: Jeff Garzik Date: Tue May 31 12:09:37 2005 -0400 [libata] pata_pdc2027x: update for recent ->host_stop() API changes commit 6c358a0d05f9cfa3b6113c426036b99eb4e17930 Author: Albert Lee Date: Mon Apr 11 18:28:59 2005 +0800 [PATCH] libata-dev-2.6: pdc2027x PLL input clock detection fix Desc: pdc2027x: fix the PLL input clock detection problem: Sometimes the pdc2027x driver reads an incorrect value from the counter and initializes the hardware with an incorrect clock rate. Root Cause: - The PLL input clock counter on the pdc2027x is a 30-bit decreasing counter. The driver reads the decreasing counter byte by byte and assembles back the whole 30-bit counter when detecting the PLL input clock rate on the fly. - The driver reads the counter from LSB then MSB, byte by byte. For example, for 7D030093, the driver reads 93, 00, 03 and 7D, piece by piece. (This is a pdc2027x hardware limitation.) - The counter is running when the driver reads it. Ex. When 7D030093 is decreasing to 7D027FFF, sometimes the driver will read 7D020093. That is, when the driver reads the LSB, LSB 0093 is quite close to zero. Later when the driver reads the MSB, the MSB has already changed from 7D03 to 7D02. => The driver assembles the wrong bytes and gets 7D020093. Changes: - Verify whether the driver is reading while the MSB bits of the counter are changing. If yes, reread the counter. (Incorrect LSB bits only affect the result by less than 5% under 100ms delay time and can be ignored.) - Change the delay time from 1ms to 100ms. This can make the detected clock rate more accurate. The extra time spent on rereading the counter is less than 1% compared to 100ms. Signed-off-by: Albert Lee commit 65d0c5280c8e0323822b852631fc66c095da343a Author: Albert Lee Date: Mon Apr 11 18:06:46 2005 +0800 [PATCH] libata-dev-2.6: pdc2027x move the PLL counter reading code Change: pdc2027x: move the PLL counter reading code to a separate function. Signed-off-by: Albert Lee commit e7cfdb1671417a6ccdb0906be3eda0dd28dc2012 Author: Albert Lee Date: Mon Apr 11 18:04:38 2005 +0800 [PATCH] libata-dev-2.6: pdc2027x change comments Change: pdc2027x: change comments, indent, etc. Signed-off-by: Albert Lee commit 58636a95bf5dfa20e502f8885d3fb08814f9f1b7 Author: Albert Lee Date: Mon Apr 11 18:02:09 2005 +0800 [PATCH] libata-dev-2.6: pdc2027x add ata_scsi_ioctl Change: pdc2027x: add ata_scsi_ioctl. Signed-off-by: Albert Lee commit 3a67adafd0a2060bf4797dfb613a5cf7ac4f40c4 Author: Albert Lee Date: Thu May 12 15:49:21 2005 -0400 [libata] add driver for Promise PATA 2027x commit 7e5c057444be5fc82355e766044ef103f247ee9a Author: Erik Benada Date: Thu May 12 15:53:28 2005 -0400 [libata sata_promise] support PATA ports on SATA controllers add support for the PATA port on Promise PDC2037x controllers. I tried to minimize changes to the libata code. I just added flags for each port to the ata_probe_ent structure and modified the ata_host_init() function. The Promise SATA driver was changed to use the new ata_probe_ent->port_flags and check for the presence of a PATA port, and pdc_phy_reset will use different reset code for PATA and SATA ports. commit 46e202ec1feeac3cb722cd3410d62a9a00891388 Author: Jeff Garzik Date: Sat Mar 11 19:25:47 2006 -0500 libata: irq-pio build fixes commit c2956a3b0d1c17b38da369811a6ce93eb7a01a04 Author: Albert Lee Date: Fri Mar 3 10:34:05 2006 +0800 [PATCH] libata-dev: recognize WRITE_MULTI_FUA_EXT for r/w multiple Recognize ATA_CMD_WRITE_MULTI_FUA_EXT as r/w multiple commands.
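The PLL-counter fix described above is the classic "read the halves, then re-read and retry if the high half moved" technique for sampling a multi-byte counter that keeps running while it is read. Below is a sketch of that idea; demo_read_counter_byte() stands in for the byte-wide hardware access and is purely hypothetical, as is the byte ordering:

        /*
         * demo_read_counter_byte(i) returns byte i of the running counter
         * (0 = least significant); it is a hypothetical stand-in for the
         * hardware access.
         */
        extern unsigned int demo_read_counter_byte(int i);

        static unsigned long demo_read_pll_counter(void)
        {
                unsigned long lo, hi, hi2;

                do {
                        hi  = ((unsigned long)demo_read_counter_byte(3) << 24) |
                              ((unsigned long)demo_read_counter_byte(2) << 16);
                        lo  = ((unsigned long)demo_read_counter_byte(1) << 8) |
                               (unsigned long)demo_read_counter_byte(0);
                        hi2 = ((unsigned long)demo_read_counter_byte(3) << 24) |
                              ((unsigned long)demo_read_counter_byte(2) << 16);
                } while (hi != hi2);    /* high half changed under us: retry */

                return (hi | lo) & 0x3fffffff;  /* counter is only 30 bits wide */
        }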
Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit a5fd79ccd60b7c9cc0221dfaaa950933eff6af99 Author: Jeff Garzik Date: Mon Feb 20 05:21:14 2006 -0500 sata_vsc build fix commit db024d5398cd332023896caf70530564b15ec88e Author: Jeff Garzik Date: Mon Feb 13 00:23:57 2006 -0500 [libata] build fix after cdb_len move commit 587005de144acd3007b8e7f2a2a7c6add157c155 Author: Jeff Garzik Date: Sat Feb 11 18:17:32 2006 -0500 [libata irq-pio] s/assert/WARN_ON/ commit 41232d3ecac9df8eb94ff27330eb84b8baccc6b7 Author: Jeff Garzik Date: Thu Feb 9 04:52:55 2006 -0500 [libata] build fix after merging some pre-packet_task-removal code commit 332b5a52f2f96bc2d13bbe594a430318c0ee4425 Author: Albert Lee Date: Wed Feb 8 16:51:34 2006 +0800 [PATCH] libata-dev: Minor comment fix Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 555a8965069b8e34292cbccc3ad8f619b96815fe Author: Albert Lee Date: Wed Feb 8 16:50:29 2006 +0800 [PATCH] libata-dev: Use new AC_ERR_* flags - Use new AC_ERR_* flags as done in Tejun's patches - In ata_qc_timeout(), replace ac_err_mask(drv_stat) with AC_ERR_TIMEOUT. This makes time out handler always report error to upper layer. Otherwise if the drv_stat looks good, libata might falsely report OK to the upper layer. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 20ea079e5883ab2b82fa5e576957f52e4e5c252c Author: Albert Lee Date: Wed Feb 8 16:48:49 2006 +0800 [PATCH] libata-dev: Use new ata_queue_pio_task() for PIO polling task - Use new ata_queue_pio_task() for PIO polling task. - Remove the unused ata_queue_packet_task() function. (irq-pio had merged the 2 queues into one.) Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit aef9d533da2eb7a5700a8c8700f2597c35cc50d1 Author: Albert Lee Date: Wed Feb 8 16:37:43 2006 +0800 [PATCH] libata-dev: Fix array index value in ata_rwcmd_protocol() Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 000080c3499cd5037e60c08a8053efb9e48aa9c0 Author: Albert Lee Date: Mon Dec 26 16:48:00 2005 +0800 [PATCH] libata-dev: filter out noisy ATAPI error messages Changes: - Filter out ATAPI packet command error messages in ata_pio_error() - Filter out ATAPI packet command error messages in ata_host_intr() Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit a4f16610081001640ffe0314024bc31c10f69757 Author: Albert Lee Date: Mon Dec 26 16:40:53 2005 +0800 [PATCH] libata-dev: determine err_mask when error is found Changes: - Determine err_mask directly when an error is found.
- Remove "qc->err_mask |= __ac_err_mask(status);" in ata_host_intr() Signed-off-by: Albert Lee ============ Signed-off-by: Jeff Garzik commit 3d0a59c02303df01848537b3bf938dc11e9a0ded Author: Jeff Garzik Date: Tue Dec 13 22:28:19 2005 -0500 [libata sata_promise] irq_pio: fix merge bug commit 278efe950988e72e2d0cea35059438fc27035d13 Author: Jeff Garzik Date: Tue Dec 6 05:01:27 2005 -0500 [libata] irq-pio: fix breakage related to err_mask merge commit d67e7ebb2a15bf0429c55f2fc1c4b02808367874 Author: Jeff Garzik Date: Fri Nov 18 11:55:00 2005 -0500 [libata sata_mv] IRQ PIO build fix commit 07f6f7d074e68d56d82e7cc5c65096033ac8dc56 Author: Albert Lee Date: Tue Nov 1 19:33:20 2005 +0800 [PATCH] libata irq-pio: add read/write multiple support - add is_multi_taskfile() to ata.h - initialize ata_device->multi_count with device identify data - use ata_pio_sectors() to support r/w multiple commands Signed-off-by: Albert Lee ======== Signed-off-by: Jeff Garzik commit fbcdd80b0d5bde06f3483b9a13f9599a0452431c Author: Albert Lee Date: Tue Nov 1 19:30:05 2005 +0800 [PATCH] libata irq-pio: eliminate unnecessary queuing in ata_pio_first_block() - change the return value of ata_pio_complete() 0 <-> 1 - add return value for ata_pio_first_block() - rename variable "qc_completed" to "has_next" in ata_pio_task() - use has_next to eliminate unnecessary queuing in ata_pio_first_block() Signed-off-by: Albert Lee ========== Signed-off-by: Jeff Garzik commit e27486db89ef04d5df1727c52362fa3d50cff241 Author: Albert Lee Date: Tue Nov 1 19:24:49 2005 +0800 [PATCH] libata irq-pio: merge the ata_dataout_task workqueue with ata_pio_task workqueue - remove ap->dataout_task from struct ata_port - let ata_pio_task() handle the HSM_ST_FIRST state. - rename ata_dataout_task() to ata_pio_first_block() - replace the ata_dataout_task workqueue with ata_pio_task workqueue Signed-off-by: Albert Lee ======== Signed-off-by: Jeff Garzik commit 467b16d4bebe8d251ca974eaa5da50b315206e9d Author: Albert Lee Date: Tue Nov 1 19:19:01 2005 +0800 [PATCH] libata irq-pio: misc fixes - ata_pio_block(): add ata_altstatus(ap) to prevent reading device status before it is valid - remove the unnecessary HSM_ST_IDLE state from ata_pio_task() - raise BUG() when unknown state is found in ata_pio_task() Signed-off-by: Albert Lee ============ Signed-off-by: Jeff Garzik commit 94ec1ef1cf29e137e5c79372e432b040c6604be6 Author: Jeff Garzik Date: Sun Oct 30 02:15:08 2005 -0500 [libata pdc_adma] fix for new irq-driven PIO code commit be697c3f137c9ed808753bbbc5d7751c6e5303fc Author: Jeff Garzik Date: Tue Oct 18 21:27:34 2005 -0400 [libata pdc_adma] update for removal of ATA_FLAG_NOINTR commit e33b9dfa3008fcaa908dc0c8c472a812c400f839 Author: Jeff Garzik Date: Sun Oct 9 09:51:46 2005 -0400 [libata irq-pio] build fix commit 91b8b3132e1870bfe3c4d3a999f13f20fc4e9726 Author: Albert Lee Date: Sun Oct 9 09:48:44 2005 -0400 [libata irq-pio] use PageHighMem() to optimize the kmap_atomic() usage as done in ide-scsi.c Signed-off-by: Albert Lee commit 083958d313f886dc7d00522f2972f90f55c40041 Author: Albert Lee Date: Sun Oct 9 09:47:31 2005 -0400 [libata irq-pio] reorganize "buf + offset" in ata_pio_sector() and __atapi_pio_bytes() - relocate DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? 
"write" : "read"); - buf + offset, buf - offset tidy up Signed-off-by: Albert Lee commit 7282aa4b49d08254ff1dcefdf3a2fb01b02ebbe2 Author: Albert Lee Date: Sun Oct 9 09:46:07 2005 -0400 [libata irq-pio] reorganize ata_pio_sector() and __atapi_pio_bytes() - move some code out of the kmap_atomic() / kunmap_atomic() zone - remove the redundant "do_write = (qc->tf.flags & ATA_TFLAG_WRITE);" Signed-off-by: Albert Lee commit c71c18576d0d8aa4db876c737c3c597c724cf02f Author: Albert Lee Date: Tue Oct 4 06:03:45 2005 -0400 libata: move atapi_send_cdb() and ata_dataout_task() to be near ata_pio_*() functions commit 54f00389563c80fa1de250a21256313ba01ca07d Author: Albert Lee Date: Fri Sep 30 19:14:19 2005 +0800 [PATCH] libata irq-pio: cleanup ata_qc_issue_prot() ata_qc_issue_prot(): - cleanup and let the PIO data out case always go through the ata_dataout_task() codepath. (Previously for PIO data out case, 2 code pathes were used - irq case goes through ata_data_out_task() codepath. - polling case jumps over the HSM_ST_FIRST state and goes to HSM_ST and ata_pio_task() directly.) ata_dataout_task(): - rearrange the queue_work() code to handle the PIO data out + polling case. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 86a7397cda08a65bc4f306e812c846e2437b5347 Author: Albert Lee Date: Fri Sep 30 19:11:35 2005 +0800 [PATCH] libata irq-pio: simplify if condition in ata_dataout_task() - Use if (qc->tf.protocol == ATA_PROT_PIO) instead of if(is_atapi_taskfile()) in ata_dataout_task() Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit f9997be974be40e884e9e8157ded2f2f9aed454c Author: Albert Lee Date: Fri Sep 30 19:09:31 2005 +0800 [PATCH] libata irq-pio: rename atapi_packet_task() and comments Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit c56b14d2a3e32695e13cd49b417da889da744d1c Author: Albert Lee Date: Fri Sep 30 19:07:39 2005 +0800 [PATCH] libata irq-pio: add comments and cleanup Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit e50362eccd8809a224cda5f71714a088ba37b2ab Author: Albert Lee Date: Tue Sep 27 17:39:50 2005 +0800 [PATCH] libata: interrupt driven pio for LLD libata.h: libata-core: Add ATA_FLAG_PIO_POLLING flag for LLDs that expect interrupt for command completion only. sata_nv.c: sata_vsc.c: irq handler is wrapper around ata_host_intr(), can handle PIO interrupts. sata_promise.c: sata_sx4.c: sata_qstor.c: sata_mv.c: Private irq handler. Polling mode ATA_FLAG_PIO_POLLING used for compatibility. Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik commit 312f7da2824c82800ee78d6190f12854456957af Author: Albert Lee Date: Tue Sep 27 17:38:03 2005 +0800 [PATCH] libata: interrupt driven pio for libata-core - add PIO_ST_FIRST for the state before sending ATAPI CDB or sending "ATA PIO data out" first data block. 
- add ATA_TFLAG_POLLING and ATA_DFLAG_CDB_INTR flags - remove the ATA_FLAG_NOINTR flag since the interrupt handler is now aware of the states - modify ata_pio_sector() and atapi_pio_bytes() to work in the interrupt context - modify the ata_host_intr() to handle PIO interrupts - modify ata_qc_issue_prot() to initialize states - atapi_packet_task() changed to handle "ATA PIO data out" first data block - support the pre-ATA4 ATAPI device which raise interrupt when ready to receive CDB Signed-off-by: Albert Lee Signed-off-by: Jeff Garzik --- Signed-off-by: Andrew Morton --- drivers/ide/pci/amd74xx.c | 7 drivers/scsi/Kconfig | 94 + drivers/scsi/Makefile | 13 drivers/scsi/ahci.c | 434 ++-- drivers/scsi/ata_piix.c | 28 drivers/scsi/libata-bmdma.c | 143 + drivers/scsi/libata-core.c | 2525 ++++++++++++++++++---------- drivers/scsi/libata-eh.c | 1561 +++++++++++++++++ drivers/scsi/libata-scsi.c | 410 ++-- drivers/scsi/libata.h | 24 drivers/scsi/pata_ali.c | 659 +++++++ drivers/scsi/pata_amd.c | 688 +++++++ drivers/scsi/pata_mpiix.c | 316 +++ drivers/scsi/pata_netcell.c | 176 + drivers/scsi/pata_oldpiix.c | 338 +++ drivers/scsi/pata_opti.c | 292 +++ drivers/scsi/pata_pdc2027x.c | 868 +++++++++ drivers/scsi/pata_sil680.c | 380 ++++ drivers/scsi/pata_sis.c | 1022 +++++++++++ drivers/scsi/pata_triflex.c | 283 +++ drivers/scsi/pata_via.c | 564 ++++++ drivers/scsi/pdc_adma.c | 10 drivers/scsi/sata_mv.c | 70 drivers/scsi/sata_nv.c | 13 drivers/scsi/sata_promise.c | 69 drivers/scsi/sata_qstor.c | 14 drivers/scsi/sata_sil.c | 70 drivers/scsi/sata_sil24.c | 615 ++++-- drivers/scsi/sata_sis.c | 3 drivers/scsi/sata_svw.c | 5 drivers/scsi/sata_sx4.c | 20 drivers/scsi/sata_uli.c | 3 drivers/scsi/sata_via.c | 3 drivers/scsi/sata_vsc.c | 16 drivers/scsi/scsi.c | 18 drivers/scsi/scsi_error.c | 24 drivers/scsi/scsi_lib.c | 2 drivers/scsi/scsi_transport_api.h | 6 include/linux/ata.h | 34 include/linux/libata.h | 389 +++- include/linux/pci_ids.h | 9 include/scsi/scsi_cmnd.h | 1 include/scsi/scsi_host.h | 1 43 files changed, 10504 insertions(+), 1716 deletions(-) diff -puN drivers/ide/pci/amd74xx.c~git-libata-all drivers/ide/pci/amd74xx.c --- devel/drivers/ide/pci/amd74xx.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/ide/pci/amd74xx.c 2006-06-09 15:21:16.000000000 -0700 @@ -74,6 +74,7 @@ static struct amd_ide_chip { { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, 0x50, AMD_UDMA_133 }, { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, 0x50, AMD_UDMA_133 }, { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE, 0x50, AMD_UDMA_133 }, + { PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE, 0x50, AMD_UDMA_133 }, { PCI_DEVICE_ID_AMD_CS5536_IDE, 0x40, AMD_UDMA_100 }, { 0 } }; @@ -488,7 +489,8 @@ static ide_pci_device_t amd74xx_chipsets /* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"), /* 15 */ DECLARE_NV_DEV("NFORCE-MCP51"), /* 16 */ DECLARE_NV_DEV("NFORCE-MCP55"), - /* 17 */ DECLARE_AMD_DEV("AMD5536"), + /* 17 */ DECLARE_NV_DEV("NFORCE-MCP61"), + /* 18 */ DECLARE_AMD_DEV("AMD5536"), }; static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id) @@ -525,7 +527,8 @@ static struct pci_device_id amd74xx_pci_ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 }, { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15 }, { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 16 }, - { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 17 }, + { PCI_VENDOR_ID_NVIDIA, 
PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 17 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 18 }, { 0, }, }; MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl); diff -puN drivers/scsi/ahci.c~git-libata-all drivers/scsi/ahci.c --- devel/drivers/scsi/ahci.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/ahci.c 2006-06-09 15:21:16.000000000 -0700 @@ -48,7 +48,7 @@ #include #define DRV_NAME "ahci" -#define DRV_VERSION "1.2" +#define DRV_VERSION "1.3" enum { @@ -56,12 +56,15 @@ enum { AHCI_MAX_SG = 168, /* hardware max is 64K */ AHCI_DMA_BOUNDARY = 0xffffffff, AHCI_USE_CLUSTERING = 0, - AHCI_CMD_SLOT_SZ = 32 * 32, + AHCI_MAX_CMDS = 32, + AHCI_CMD_SZ = 32, + AHCI_CMD_SLOT_SZ = AHCI_MAX_CMDS * AHCI_CMD_SZ, AHCI_RX_FIS_SZ = 256, - AHCI_CMD_TBL_HDR = 0x80, AHCI_CMD_TBL_CDB = 0x40, - AHCI_CMD_TBL_SZ = AHCI_CMD_TBL_HDR + (AHCI_MAX_SG * 16), - AHCI_PORT_PRIV_DMA_SZ = AHCI_CMD_SLOT_SZ + AHCI_CMD_TBL_SZ + + AHCI_CMD_TBL_HDR_SZ = 0x80, + AHCI_CMD_TBL_SZ = AHCI_CMD_TBL_HDR_SZ + (AHCI_MAX_SG * 16), + AHCI_CMD_TBL_AR_SZ = AHCI_CMD_TBL_SZ * AHCI_MAX_CMDS, + AHCI_PORT_PRIV_DMA_SZ = AHCI_CMD_SLOT_SZ + AHCI_CMD_TBL_AR_SZ + AHCI_RX_FIS_SZ, AHCI_IRQ_ON_SG = (1 << 31), AHCI_CMD_ATAPI = (1 << 5), @@ -71,8 +74,10 @@ enum { AHCI_CMD_CLR_BUSY = (1 << 10), RX_FIS_D2H_REG = 0x40, /* offset of D2H Register FIS data */ + RX_FIS_UNK = 0x60, /* offset of Unknown FIS data */ board_ahci = 0, + board_ahci_vt8251 = 1, /* global controller registers */ HOST_CAP = 0x00, /* host capabilities */ @@ -87,8 +92,9 @@ enum { HOST_AHCI_EN = (1 << 31), /* AHCI enabled */ /* HOST_CAP bits */ - HOST_CAP_64 = (1 << 31), /* PCI DAC (64-bit DMA) support */ HOST_CAP_CLO = (1 << 24), /* Command List Override support */ + HOST_CAP_NCQ = (1 << 30), /* Native Command Queueing */ + HOST_CAP_64 = (1 << 31), /* PCI DAC (64-bit DMA) support */ /* registers for each SATA port */ PORT_LST_ADDR = 0x00, /* command list DMA addr */ @@ -127,15 +133,16 @@ enum { PORT_IRQ_PIOS_FIS = (1 << 1), /* PIO Setup FIS rx'd */ PORT_IRQ_D2H_REG_FIS = (1 << 0), /* D2H Register FIS rx'd */ - PORT_IRQ_FATAL = PORT_IRQ_TF_ERR | - PORT_IRQ_HBUS_ERR | - PORT_IRQ_HBUS_DATA_ERR | - PORT_IRQ_IF_ERR, - DEF_PORT_IRQ = PORT_IRQ_FATAL | PORT_IRQ_PHYRDY | - PORT_IRQ_CONNECT | PORT_IRQ_SG_DONE | - PORT_IRQ_UNK_FIS | PORT_IRQ_SDB_FIS | - PORT_IRQ_DMAS_FIS | PORT_IRQ_PIOS_FIS | - PORT_IRQ_D2H_REG_FIS, + PORT_IRQ_FREEZE = PORT_IRQ_HBUS_ERR | + PORT_IRQ_IF_ERR | + PORT_IRQ_CONNECT | + PORT_IRQ_UNK_FIS, + PORT_IRQ_ERROR = PORT_IRQ_FREEZE | + PORT_IRQ_TF_ERR | + PORT_IRQ_HBUS_DATA_ERR, + DEF_PORT_IRQ = PORT_IRQ_ERROR | PORT_IRQ_SG_DONE | + PORT_IRQ_SDB_FIS | PORT_IRQ_DMAS_FIS | + PORT_IRQ_PIOS_FIS | PORT_IRQ_D2H_REG_FIS, /* PORT_CMD bits */ PORT_CMD_ATAPI = (1 << 24), /* Device is ATAPI */ @@ -153,6 +160,9 @@ enum { /* hpriv->flags bits */ AHCI_FLAG_MSI = (1 << 0), + + /* ap->flags bits */ + AHCI_FLAG_RESET_NEEDS_CLO = (1 << 24), }; struct ahci_cmd_hdr { @@ -181,7 +191,6 @@ struct ahci_port_priv { dma_addr_t cmd_slot_dma; void *cmd_tbl; dma_addr_t cmd_tbl_dma; - struct ahci_sg *cmd_tbl_sg; void *rx_fis; dma_addr_t rx_fis_dma; }; @@ -193,13 +202,15 @@ static unsigned int ahci_qc_issue(struct static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs); static int ahci_probe_reset(struct ata_port *ap, unsigned int *classes); static void ahci_irq_clear(struct ata_port *ap); -static void ahci_eng_timeout(struct ata_port *ap); static int ahci_port_start(struct ata_port *ap); static 
void ahci_port_stop(struct ata_port *ap); static void ahci_tf_read(struct ata_port *ap, struct ata_taskfile *tf); static void ahci_qc_prep(struct ata_queued_cmd *qc); static u8 ahci_check_status(struct ata_port *ap); -static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc); +static void ahci_freeze(struct ata_port *ap); +static void ahci_thaw(struct ata_port *ap); +static void ahci_error_handler(struct ata_port *ap); +static void ahci_post_internal_cmd(struct ata_queued_cmd *qc); static void ahci_remove_one (struct pci_dev *pdev); static struct scsi_host_template ahci_sht = { @@ -207,7 +218,8 @@ static struct scsi_host_template ahci_sh .name = DRV_NAME, .ioctl = ata_scsi_ioctl, .queuecommand = ata_scsi_queuecmd, - .can_queue = ATA_DEF_QUEUE, + .change_queue_depth = ata_scsi_change_queue_depth, + .can_queue = AHCI_MAX_CMDS - 1, .this_id = ATA_SHT_THIS_ID, .sg_tablesize = AHCI_MAX_SG, .cmd_per_lun = ATA_SHT_CMD_PER_LUN, @@ -233,14 +245,18 @@ static const struct ata_port_operations .qc_prep = ahci_qc_prep, .qc_issue = ahci_qc_issue, - .eng_timeout = ahci_eng_timeout, - .irq_handler = ahci_interrupt, .irq_clear = ahci_irq_clear, .scr_read = ahci_scr_read, .scr_write = ahci_scr_write, + .freeze = ahci_freeze, + .thaw = ahci_thaw, + + .error_handler = ahci_error_handler, + .post_internal_cmd = ahci_post_internal_cmd, + .port_start = ahci_port_start, .port_stop = ahci_port_stop, }; @@ -255,6 +271,16 @@ static const struct ata_port_info ahci_p .udma_mask = 0x7f, /* udma0-6 ; FIXME */ .port_ops = &ahci_ops, }, + /* board_ahci_vt8251 */ + { + .sht = &ahci_sht, + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | + ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | + AHCI_FLAG_RESET_NEEDS_CLO, + .pio_mask = 0x1f, /* pio0-4 */ + .udma_mask = 0x7f, /* udma0-6 ; FIXME */ + .port_ops = &ahci_ops, + }, }; static const struct pci_device_id ahci_pci_tbl[] = { @@ -296,6 +322,8 @@ static const struct pci_device_id ahci_p board_ahci }, /* ATI SB600 non-raid */ { PCI_VENDOR_ID_ATI, 0x4381, PCI_ANY_ID, PCI_ANY_ID, 0, 0, board_ahci }, /* ATI SB600 raid */ + { PCI_VENDOR_ID_VIA, 0x3349, PCI_ANY_ID, PCI_ANY_ID, 0, 0, + board_ahci_vt8251 }, /* VIA VT8251 */ { } /* terminate list */ }; @@ -374,8 +402,6 @@ static int ahci_port_start(struct ata_po pp->cmd_tbl = mem; pp->cmd_tbl_dma = mem_dma; - pp->cmd_tbl_sg = mem + AHCI_CMD_TBL_HDR; - ap->private_data = pp; if (hpriv->cap & HOST_CAP_64) @@ -508,46 +534,60 @@ static unsigned int ahci_dev_classify(st return ata_dev_classify(&tf); } -static void ahci_fill_cmd_slot(struct ahci_port_priv *pp, u32 opts) +static void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag, + u32 opts) { - pp->cmd_slot[0].opts = cpu_to_le32(opts); - pp->cmd_slot[0].status = 0; - pp->cmd_slot[0].tbl_addr = cpu_to_le32(pp->cmd_tbl_dma & 0xffffffff); - pp->cmd_slot[0].tbl_addr_hi = cpu_to_le32((pp->cmd_tbl_dma >> 16) >> 16); + dma_addr_t cmd_tbl_dma; + + cmd_tbl_dma = pp->cmd_tbl_dma + tag * AHCI_CMD_TBL_SZ; + + pp->cmd_slot[tag].opts = cpu_to_le32(opts); + pp->cmd_slot[tag].status = 0; + pp->cmd_slot[tag].tbl_addr = cpu_to_le32(cmd_tbl_dma & 0xffffffff); + pp->cmd_slot[tag].tbl_addr_hi = cpu_to_le32((cmd_tbl_dma >> 16) >> 16); } -static int ahci_poll_register(void __iomem *reg, u32 mask, u32 val, - unsigned long interval_msec, - unsigned long timeout_msec) +static int ahci_clo(struct ata_port *ap) { - unsigned long timeout; + void __iomem *port_mmio = (void __iomem *) ap->ioaddr.cmd_addr; + struct ahci_host_priv *hpriv = ap->host_set->private_data; u32 tmp; - timeout = jiffies 
+ (timeout_msec * HZ) / 1000; - do { - tmp = readl(reg); - if ((tmp & mask) == val) - return 0; - msleep(interval_msec); - } while (time_before(jiffies, timeout)); + if (!(hpriv->cap & HOST_CAP_CLO)) + return -EOPNOTSUPP; - return -1; + tmp = readl(port_mmio + PORT_CMD); + tmp |= PORT_CMD_CLO; + writel(tmp, port_mmio + PORT_CMD); + + tmp = ata_wait_register(port_mmio + PORT_CMD, + PORT_CMD_CLO, PORT_CMD_CLO, 1, 500); + if (tmp & PORT_CMD_CLO) + return -EIO; + + return 0; } -static int ahci_softreset(struct ata_port *ap, int verbose, unsigned int *class) +static int ahci_softreset(struct ata_port *ap, unsigned int *class) { - struct ahci_host_priv *hpriv = ap->host_set->private_data; struct ahci_port_priv *pp = ap->private_data; void __iomem *mmio = ap->host_set->mmio_base; void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); const u32 cmd_fis_len = 5; /* five dwords */ const char *reason = NULL; struct ata_taskfile tf; + u32 tmp; u8 *fis; int rc; DPRINTK("ENTER\n"); + if (ata_port_offline(ap)) { + DPRINTK("PHY reports no device\n"); + *class = ATA_DEV_NONE; + return 0; + } + /* prepare for SRST (AHCI-1.1 10.4.1) */ rc = ahci_stop_engine(ap); if (rc) { @@ -558,23 +598,13 @@ static int ahci_softreset(struct ata_por /* check BUSY/DRQ, perform Command List Override if necessary */ ahci_tf_read(ap, &tf); if (tf.command & (ATA_BUSY | ATA_DRQ)) { - u32 tmp; + rc = ahci_clo(ap); - if (!(hpriv->cap & HOST_CAP_CLO)) { - rc = -EIO; - reason = "port busy but no CLO"; + if (rc == -EOPNOTSUPP) { + reason = "port busy but CLO unavailable"; goto fail_restart; - } - - tmp = readl(port_mmio + PORT_CMD); - tmp |= PORT_CMD_CLO; - writel(tmp, port_mmio + PORT_CMD); - readl(port_mmio + PORT_CMD); /* flush */ - - if (ahci_poll_register(port_mmio + PORT_CMD, PORT_CMD_CLO, 0x0, - 1, 500)) { - rc = -EIO; - reason = "CLO failed"; + } else if (rc) { + reason = "port busy but CLO failed"; goto fail_restart; } } @@ -582,20 +612,21 @@ static int ahci_softreset(struct ata_por /* restart engine */ ahci_start_engine(ap); - ata_tf_init(ap, &tf, 0); + ata_tf_init(ap->device, &tf); fis = pp->cmd_tbl; /* issue the first D2H Register FIS */ - ahci_fill_cmd_slot(pp, cmd_fis_len | AHCI_CMD_RESET | AHCI_CMD_CLR_BUSY); + ahci_fill_cmd_slot(pp, 0, + cmd_fis_len | AHCI_CMD_RESET | AHCI_CMD_CLR_BUSY); tf.ctl |= ATA_SRST; ata_tf_to_fis(&tf, fis, 0); fis[1] &= ~(1 << 7); /* turn off Command FIS bit */ writel(1, port_mmio + PORT_CMD_ISSUE); - readl(port_mmio + PORT_CMD_ISSUE); /* flush */ - if (ahci_poll_register(port_mmio + PORT_CMD_ISSUE, 0x1, 0x0, 1, 500)) { + tmp = ata_wait_register(port_mmio + PORT_CMD_ISSUE, 0x1, 0x1, 1, 500); + if (tmp & 0x1) { rc = -EIO; reason = "1st FIS failed"; goto fail; @@ -605,7 +636,7 @@ static int ahci_softreset(struct ata_por msleep(1); /* issue the second D2H Register FIS */ - ahci_fill_cmd_slot(pp, cmd_fis_len); + ahci_fill_cmd_slot(pp, 0, cmd_fis_len); tf.ctl &= ~ATA_SRST; ata_tf_to_fis(&tf, fis, 0); @@ -625,7 +656,7 @@ static int ahci_softreset(struct ata_por msleep(150); *class = ATA_DEV_NONE; - if (sata_dev_present(ap)) { + if (ata_port_online(ap)) { if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { rc = -EIO; reason = "device not ready"; @@ -640,25 +671,21 @@ static int ahci_softreset(struct ata_por fail_restart: ahci_start_engine(ap); fail: - if (verbose) - printk(KERN_ERR "ata%u: softreset failed (%s)\n", - ap->id, reason); - else - DPRINTK("EXIT, rc=%d reason=\"%s\"\n", rc, reason); + ata_port_printk(ap, KERN_ERR, "softreset failed (%s)\n", reason); return rc; } -static 
int ahci_hardreset(struct ata_port *ap, int verbose, unsigned int *class) +static int ahci_hardreset(struct ata_port *ap, unsigned int *class) { int rc; DPRINTK("ENTER\n"); ahci_stop_engine(ap); - rc = sata_std_hardreset(ap, verbose, class); + rc = sata_std_hardreset(ap, class); ahci_start_engine(ap); - if (rc == 0) + if (rc == 0 && ata_port_online(ap)) *class = ahci_dev_classify(ap); if (*class == ATA_DEV_UNKNOWN) *class = ATA_DEV_NONE; @@ -688,6 +715,12 @@ static void ahci_postreset(struct ata_po static int ahci_probe_reset(struct ata_port *ap, unsigned int *classes) { + if ((ap->flags & AHCI_FLAG_RESET_NEEDS_CLO) && + (ata_busy_wait(ap, ATA_BUSY, 1000) & ATA_BUSY)) { + /* ATA_BUSY hasn't cleared, so send a CLO */ + ahci_clo(ap); + } + return ata_drive_probe_reset(ap, ata_std_probeinit, ahci_softreset, ahci_hardreset, ahci_postreset, classes); @@ -708,9 +741,8 @@ static void ahci_tf_read(struct ata_port ata_tf_from_fis(d2h_fis, tf); } -static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc) +static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl) { - struct ahci_port_priv *pp = qc->ap->private_data; struct scatterlist *sg; struct ahci_sg *ahci_sg; unsigned int n_sg = 0; @@ -720,7 +752,7 @@ static unsigned int ahci_fill_sg(struct /* * Next, the S/G list. */ - ahci_sg = pp->cmd_tbl_sg; + ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ; ata_for_each_sg(sg, qc) { dma_addr_t addr = sg_dma_address(sg); u32 sg_len = sg_dma_len(sg); @@ -741,6 +773,7 @@ static void ahci_qc_prep(struct ata_queu struct ata_port *ap = qc->ap; struct ahci_port_priv *pp = ap->private_data; int is_atapi = is_atapi_taskfile(&qc->tf); + void *cmd_tbl; u32 opts; const u32 cmd_fis_len = 5; /* five dwords */ unsigned int n_elem; @@ -749,16 +782,17 @@ static void ahci_qc_prep(struct ata_queu * Fill in command table information. First, the header, * a SATA Register - Host to Device command FIS. */ - ata_tf_to_fis(&qc->tf, pp->cmd_tbl, 0); + cmd_tbl = pp->cmd_tbl + qc->tag * AHCI_CMD_TBL_SZ; + + ata_tf_to_fis(&qc->tf, cmd_tbl, 0); if (is_atapi) { - memset(pp->cmd_tbl + AHCI_CMD_TBL_CDB, 0, 32); - memcpy(pp->cmd_tbl + AHCI_CMD_TBL_CDB, qc->cdb, - qc->dev->cdb_len); + memset(cmd_tbl + AHCI_CMD_TBL_CDB, 0, 32); + memcpy(cmd_tbl + AHCI_CMD_TBL_CDB, qc->cdb, qc->dev->cdb_len); } n_elem = 0; if (qc->flags & ATA_QCFLAG_DMAMAP) - n_elem = ahci_fill_sg(qc); + n_elem = ahci_fill_sg(qc, cmd_tbl); /* * Fill in command slot information. 
@@ -769,112 +803,123 @@ static void ahci_qc_prep(struct ata_queu if (is_atapi) opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH; - ahci_fill_cmd_slot(pp, opts); + ahci_fill_cmd_slot(pp, qc->tag, opts); } -static void ahci_restart_port(struct ata_port *ap, u32 irq_stat) +static void ahci_error_intr(struct ata_port *ap, u32 irq_stat) { - void __iomem *mmio = ap->host_set->mmio_base; - void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); - u32 tmp; + struct ahci_port_priv *pp = ap->private_data; + struct ata_eh_info *ehi = &ap->eh_info; + unsigned int err_mask = 0, action = 0; + struct ata_queued_cmd *qc; + u32 serror; - if ((ap->device[0].class != ATA_DEV_ATAPI) || - ((irq_stat & PORT_IRQ_TF_ERR) == 0)) - printk(KERN_WARNING "ata%u: port reset, " - "p_is %x is %x pis %x cmd %x tf %x ss %x se %x\n", - ap->id, - irq_stat, - readl(mmio + HOST_IRQ_STAT), - readl(port_mmio + PORT_IRQ_STAT), - readl(port_mmio + PORT_CMD), - readl(port_mmio + PORT_TFDATA), - readl(port_mmio + PORT_SCR_STAT), - readl(port_mmio + PORT_SCR_ERR)); + ata_ehi_clear_desc(ehi); - /* stop DMA */ - ahci_stop_engine(ap); + /* AHCI needs SError cleared; otherwise, it might lock up */ + serror = ahci_scr_read(ap, SCR_ERROR); + ahci_scr_write(ap, SCR_ERROR, serror); - /* clear SATA phy error, if any */ - tmp = readl(port_mmio + PORT_SCR_ERR); - writel(tmp, port_mmio + PORT_SCR_ERR); + /* analyze @irq_stat */ + ata_ehi_push_desc(ehi, "irq_stat 0x%08x", irq_stat); - /* if DRQ/BSY is set, device needs to be reset. - * if so, issue COMRESET - */ - tmp = readl(port_mmio + PORT_TFDATA); - if (tmp & (ATA_BUSY | ATA_DRQ)) { - writel(0x301, port_mmio + PORT_SCR_CTL); - readl(port_mmio + PORT_SCR_CTL); /* flush */ - udelay(10); - writel(0x300, port_mmio + PORT_SCR_CTL); - readl(port_mmio + PORT_SCR_CTL); /* flush */ + if (irq_stat & PORT_IRQ_TF_ERR) + err_mask |= AC_ERR_DEV; + + if (irq_stat & (PORT_IRQ_HBUS_ERR | PORT_IRQ_HBUS_DATA_ERR)) { + err_mask |= AC_ERR_HOST_BUS; + action |= ATA_EH_SOFTRESET; } - /* re-start DMA */ - ahci_start_engine(ap); -} + if (irq_stat & PORT_IRQ_IF_ERR) { + err_mask |= AC_ERR_ATA_BUS; + action |= ATA_EH_SOFTRESET; + ata_ehi_push_desc(ehi, ", interface fatal error"); + } -static void ahci_eng_timeout(struct ata_port *ap) -{ - struct ata_host_set *host_set = ap->host_set; - void __iomem *mmio = host_set->mmio_base; - void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); - struct ata_queued_cmd *qc; - unsigned long flags; + if (irq_stat & (PORT_IRQ_CONNECT | PORT_IRQ_PHYRDY)) { + err_mask |= AC_ERR_ATA_BUS; + action |= ATA_EH_SOFTRESET; + ata_ehi_push_desc(ehi, ", %s", irq_stat & PORT_IRQ_CONNECT ? 
+ "connection status changed" : "PHY RDY changed"); + } - printk(KERN_WARNING "ata%u: handling error/timeout\n", ap->id); + if (irq_stat & PORT_IRQ_UNK_FIS) { + u32 *unk = (u32 *)(pp->rx_fis + RX_FIS_UNK); - spin_lock_irqsave(&host_set->lock, flags); + err_mask |= AC_ERR_HSM; + action |= ATA_EH_SOFTRESET; + ata_ehi_push_desc(ehi, ", unknown FIS %08x %08x %08x %08x", + unk[0], unk[1], unk[2], unk[3]); + } - ahci_restart_port(ap, readl(port_mmio + PORT_IRQ_STAT)); - qc = ata_qc_from_tag(ap, ap->active_tag); - qc->err_mask |= AC_ERR_TIMEOUT; + /* okay, let's hand over to EH */ + ehi->serror |= serror; + ehi->action |= action; - spin_unlock_irqrestore(&host_set->lock, flags); + qc = ata_qc_from_tag(ap, ap->active_tag); + if (qc) + qc->err_mask |= err_mask; + else + ehi->err_mask |= err_mask; - ata_eh_qc_complete(qc); + if (irq_stat & PORT_IRQ_FREEZE) + ata_port_freeze(ap); + else + ata_port_abort(ap); } -static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc) +static void ahci_host_intr(struct ata_port *ap) { void __iomem *mmio = ap->host_set->mmio_base; void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); - u32 status, serr, ci; - - serr = readl(port_mmio + PORT_SCR_ERR); - writel(serr, port_mmio + PORT_SCR_ERR); + struct ata_eh_info *ehi = &ap->eh_info; + u32 status, qc_active; + int rc; status = readl(port_mmio + PORT_IRQ_STAT); writel(status, port_mmio + PORT_IRQ_STAT); - ci = readl(port_mmio + PORT_CMD_ISSUE); - if (likely((ci & 0x1) == 0)) { - if (qc) { - WARN_ON(qc->err_mask); - ata_qc_complete(qc); - qc = NULL; - } + if (unlikely(status & PORT_IRQ_ERROR)) { + ahci_error_intr(ap, status); + return; } - if (status & PORT_IRQ_FATAL) { - unsigned int err_mask; - if (status & PORT_IRQ_TF_ERR) - err_mask = AC_ERR_DEV; - else if (status & PORT_IRQ_IF_ERR) - err_mask = AC_ERR_ATA_BUS; - else - err_mask = AC_ERR_HOST_BUS; - - /* command processing has stopped due to error; restart */ - ahci_restart_port(ap, status); - - if (qc) { - qc->err_mask |= err_mask; - ata_qc_complete(qc); - } + if (ap->sactive) + qc_active = readl(port_mmio + PORT_SCR_ACT); + else + qc_active = readl(port_mmio + PORT_CMD_ISSUE); + + rc = ata_qc_complete_multiple(ap, qc_active, NULL); + if (rc > 0) + return; + if (rc < 0) { + ehi->err_mask |= AC_ERR_HSM; + ehi->action |= ATA_EH_SOFTRESET; + ata_port_freeze(ap); + return; } - return 1; + /* hmmm... 
a spurious interupt */ + + /* some devices send D2H reg with I bit set during NCQ command phase */ + if (ap->sactive && status & PORT_IRQ_D2H_REG_FIS) + return; + + /* ignore interim PIO setup fis interrupts */ + if (ata_tag_valid(ap->active_tag)) { + struct ata_queued_cmd *qc = + ata_qc_from_tag(ap, ap->active_tag); + + if (qc && qc->tf.protocol == ATA_PROT_PIO && + (status & PORT_IRQ_PIOS_FIS)) + return; + } + + if (ata_ratelimit()) + ata_port_printk(ap, KERN_INFO, "spurious interrupt " + "(irq_stat 0x%x active_tag %d sactive 0x%x)\n", + status, ap->active_tag, ap->sactive); } static void ahci_irq_clear(struct ata_port *ap) @@ -882,7 +927,7 @@ static void ahci_irq_clear(struct ata_po /* TODO */ } -static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs) +static irqreturn_t ahci_interrupt(int irq, void *dev_instance, struct pt_regs *regs) { struct ata_host_set *host_set = dev_instance; struct ahci_host_priv *hpriv; @@ -911,14 +956,7 @@ static irqreturn_t ahci_interrupt (int i ap = host_set->ports[i]; if (ap) { - struct ata_queued_cmd *qc; - qc = ata_qc_from_tag(ap, ap->active_tag); - if (!ahci_host_intr(ap, qc)) - if (ata_ratelimit()) - dev_printk(KERN_WARNING, host_set->dev, - "unhandled interrupt on port %u\n", - i); - + ahci_host_intr(ap); VPRINTK("port %u\n", i); } else { VPRINTK("port %u (no irq)\n", i); @@ -935,7 +973,7 @@ static irqreturn_t ahci_interrupt (int i handled = 1; } - spin_unlock(&host_set->lock); + spin_unlock(&host_set->lock); VPRINTK("EXIT\n"); @@ -947,12 +985,64 @@ static unsigned int ahci_qc_issue(struct struct ata_port *ap = qc->ap; void __iomem *port_mmio = (void __iomem *) ap->ioaddr.cmd_addr; - writel(1, port_mmio + PORT_CMD_ISSUE); + if (qc->tf.protocol == ATA_PROT_NCQ) + writel(1 << qc->tag, port_mmio + PORT_SCR_ACT); + writel(1 << qc->tag, port_mmio + PORT_CMD_ISSUE); readl(port_mmio + PORT_CMD_ISSUE); /* flush */ return 0; } +static void ahci_freeze(struct ata_port *ap) +{ + void __iomem *mmio = ap->host_set->mmio_base; + void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); + + /* turn IRQ off */ + writel(0, port_mmio + PORT_IRQ_MASK); +} + +static void ahci_thaw(struct ata_port *ap) +{ + void __iomem *mmio = ap->host_set->mmio_base; + void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); + u32 tmp; + + /* clear IRQ */ + tmp = readl(port_mmio + PORT_IRQ_STAT); + writel(tmp, port_mmio + PORT_IRQ_STAT); + writel(1 << ap->id, mmio + HOST_IRQ_STAT); + + /* turn IRQ back on */ + writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK); +} + +static void ahci_error_handler(struct ata_port *ap) +{ + if (!(ap->flags & ATA_FLAG_FROZEN)) { + /* restart engine */ + ahci_stop_engine(ap); + ahci_start_engine(ap); + } + + /* perform recovery */ + ata_do_eh(ap, ahci_softreset, ahci_hardreset, ahci_postreset); +} + +static void ahci_post_internal_cmd(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + + if (qc->flags & ATA_QCFLAG_FAILED) + qc->err_mask |= AC_ERR_OTHER; + + if (qc->err_mask) { + /* make DMA engine forget about the failed command */ + ahci_stop_engine(ap); + ahci_start_engine(ap); + } +} + static void ahci_setup_port(struct ata_ioports *port, unsigned long base, unsigned int port_idx) { @@ -1097,9 +1187,6 @@ static int ahci_host_init(struct ata_pro writel(tmp, port_mmio + PORT_IRQ_STAT); writel(1 << i, mmio + HOST_IRQ_STAT); - - /* set irq mask (enables interrupts) */ - writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK); } tmp = readl(mmio + HOST_CTL); @@ -1197,6 +1284,8 @@ static int ahci_init_one (struct pci_dev 
VPRINTK("ENTER\n"); + WARN_ON(ATA_MAX_QUEUE > AHCI_MAX_CMDS); + if (!printed_version++) dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); @@ -1264,6 +1353,9 @@ static int ahci_init_one (struct pci_dev if (rc) goto err_out_hpriv; + if (hpriv->cap & HOST_CAP_NCQ) + probe_ent->host_flags |= ATA_FLAG_NCQ; + ahci_print_info(probe_ent); /* FIXME: check ata_device_add return value */ diff -puN drivers/scsi/ata_piix.c~git-libata-all drivers/scsi/ata_piix.c --- devel/drivers/scsi/ata_piix.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/ata_piix.c 2006-06-09 15:21:16.000000000 -0700 @@ -93,7 +93,7 @@ #include #define DRV_NAME "ata_piix" -#define DRV_VERSION "1.05" +#define DRV_VERSION "1.10" enum { PIIX_IOCFG = 0x54, /* IDE I/O configuration register */ @@ -159,6 +159,7 @@ static const struct pci_device_id piix_p { 0x8086, 0x7111, PCI_ANY_ID, PCI_ANY_ID, 0, 0, piix4_pata }, { 0x8086, 0x24db, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, { 0x8086, 0x25a2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, + { 0x8086, 0x27df, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, #endif /* NOTE: The following PCI ids must be kept in sync with the @@ -227,6 +228,7 @@ static const struct ata_port_operations .port_disable = ata_port_disable, .set_piomode = piix_set_piomode, .set_dmamode = piix_set_dmamode, + .mode_filter = ata_pci_default_filter, .tf_load = ata_tf_load, .tf_read = ata_tf_read, @@ -242,8 +244,12 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, - .eng_timeout = ata_eng_timeout, + .freeze = ata_bmdma_freeze, + .thaw = ata_bmdma_thaw, + .error_handler = ata_bmdma_error_handler, + .post_internal_cmd = ata_bmdma_post_internal_cmd, .irq_handler = ata_interrupt, .irq_clear = ata_bmdma_irq_clear, @@ -270,8 +276,12 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, - .eng_timeout = ata_eng_timeout, + .freeze = ata_bmdma_freeze, + .thaw = ata_bmdma_thaw, + .error_handler = ata_bmdma_error_handler, + .post_internal_cmd = ata_bmdma_post_internal_cmd, .irq_handler = ata_interrupt, .irq_clear = ata_bmdma_irq_clear, @@ -484,7 +494,7 @@ static int piix_pata_probe_reset(struct struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); if (!pci_test_config_bits(pdev, &piix_enable_bits[ap->hard_port_no])) { - printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + ata_port_printk(ap, KERN_INFO, "port disabled. 
ignoring.\n"); return 0; } @@ -565,7 +575,7 @@ static unsigned int piix_sata_probe (str static int piix_sata_probe_reset(struct ata_port *ap, unsigned int *classes) { if (!piix_sata_probe(ap)) { - printk(KERN_INFO "ata%u: SATA port has no device.\n", ap->id); + ata_port_printk(ap, KERN_INFO, "SATA port has no device.\n"); return 0; } @@ -760,15 +770,15 @@ static int __devinit piix_check_450nx_er pci_read_config_byte(pdev, PCI_REVISION_ID, &rev); pci_read_config_word(pdev, 0x41, &cfg); /* Only on the original revision: IDE DMA can hang */ - if(rev == 0x00) + if (rev == 0x00) no_piix_dma = 1; /* On all revisions below 5 PXB bus lock must be disabled for IDE */ - else if(cfg & (1<<14) && rev < 5) + else if (cfg & (1<<14) && rev < 5) no_piix_dma = 2; } - if(no_piix_dma) + if (no_piix_dma) dev_printk(KERN_WARNING, &ata_dev->dev, "450NX errata present, disabling IDE DMA.\n"); - if(no_piix_dma == 2) + if (no_piix_dma == 2) dev_printk(KERN_WARNING, &ata_dev->dev, "A BIOS update may resolve this.\n"); return no_piix_dma; } diff -puN drivers/scsi/Kconfig~git-libata-all drivers/scsi/Kconfig --- devel/drivers/scsi/Kconfig~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/Kconfig 2006-06-09 15:21:16.000000000 -0700 @@ -488,6 +488,24 @@ config SCSI_SATA_AHCI If unsure, say N. +config SCSI_PATA_ALI + tristate "ALi PATA support (Experimental)" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + This option enables support for the ALi ATA interfaces + found on the many ALi chipsets. + + If unsure, say N. + +config SCSI_PATA_AMD + tristate "AMD/NVidia PATA support (Raving Lunatic)" + depends on SCSI_SATA && PCI + help + This option enables support for the AMD and NVidia PATA + interfaces found on the chipsets for Athlon/Athlon64. + + If unsure, say N. + config SCSI_SATA_SVW tristate "ServerWorks Frodo / Apple K2 SATA support" depends on SCSI_SATA && PCI @@ -497,6 +515,31 @@ config SCSI_SATA_SVW If unsure, say N. +config SCSI_PATA_TRIFLEX + tristate "Compaq Triflex PATA support (Raving Lunatic)" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + Enable support for the Compaq 'Triflex' IDE controller as found + on many Compaq Pentium-Pro systems, via the new ATA layer. + + If unsure, say N. + +config SCSI_PATA_MPIIX + tristate "Intel PATA MPIIX support (Raving Lunatic)" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + This option enables support for MPIIX PATA support. + + If unsure, say N. + +config SCSI_PATA_OLDPIIX + tristate "Intel PATA old PIIX support (Raving Lunatic)" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + This option enables support for old(?) PIIX PATA support. + + If unsure, say N. + config SCSI_ATA_PIIX tristate "Intel PIIX/ICH SATA support" depends on SCSI_SATA && PCI @@ -516,6 +559,15 @@ config SCSI_SATA_MV If unsure, say N. +config SCSI_PATA_NETCELL + tristate "NETCELL Revolution RAID support" + depends on SCSI_SATA && PCI + help + This option enables support for the Netcell Revolution RAID + PATA controller. + + If unsure, say N. + config SCSI_SATA_NV tristate "NVIDIA SATA support" depends on SCSI_SATA && PCI && EXPERIMENTAL @@ -524,6 +576,15 @@ config SCSI_SATA_NV If unsure, say N. +config SCSI_PATA_OPTI + tristate "OPTI621/6215 PATA support (Raving Lunatic)" + depends on SCSI_SATA && PCI + help + This option enables full PIO support for the early Opti ATA + controllers found on some old motherboards. + + If unsure, say N. 
+ config SCSI_PDC_ADMA tristate "Pacific Digital ADMA support" depends on SCSI_SATA && PCI @@ -540,6 +601,14 @@ config SCSI_SATA_QSTOR If unsure, say N. +config SCSI_PATA_PDC2027X + tristate "Promise PATA 2027x support" + depends on SCSI_SATA && PCI + help + This option enables support for Promise PATA pdc20268 to pdc20277 host adapters. + + If unsure, say N. + config SCSI_SATA_PROMISE tristate "Promise SATA TX2/TX4 support" depends on SCSI_SATA && PCI @@ -572,6 +641,22 @@ config SCSI_SATA_SIL24 If unsure, say N. +config SCSI_PATA_SIL680 + tristate "CMD / Silicon Image 680 PATA support" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + This option enables support for CMD / Silicon Image 680 PATA. + + If unsure, say N. + +config SCSI_PATA_SIS + tristate "SiS PATA support (Experimental)" + depends on SCSI_SATA && PCI && EXPERIMENTAL + help + This option enables support for SiS PATA controllers + + If unsure, say N. + config SCSI_SATA_SIS tristate "SiS 964/180 SATA support" depends on SCSI_SATA && PCI && EXPERIMENTAL @@ -588,6 +673,15 @@ config SCSI_SATA_ULI If unsure, say N. +config SCSI_PATA_VIA + tristate "VIA PATA support" + depends on SCSI_SATA && PCI + help + This option enables support for the VIA PATA interfaces + found on the many VIA chipsets. + + If unsure, say N. + config SCSI_SATA_VIA tristate "VIA SATA support" depends on SCSI_SATA && PCI diff -puN drivers/scsi/libata-bmdma.c~git-libata-all drivers/scsi/libata-bmdma.c --- devel/drivers/scsi/libata-bmdma.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/libata-bmdma.c 2006-06-09 15:21:16.000000000 -0700 @@ -652,6 +652,149 @@ void ata_bmdma_stop(struct ata_queued_cm ata_altstatus(ap); /* dummy read */ } +/** + * ata_bmdma_freeze - Freeze BMDMA controller port + * @ap: port to freeze + * + * Freeze BMDMA controller port. + * + * LOCKING: + * Inherited from caller. + */ +void ata_bmdma_freeze(struct ata_port *ap) +{ + struct ata_ioports *ioaddr = &ap->ioaddr; + + ap->ctl |= ATA_NIEN; + ap->last_ctl = ap->ctl; + + if (ap->flags & ATA_FLAG_MMIO) + writeb(ap->ctl, (void __iomem *)ioaddr->ctl_addr); + else + outb(ap->ctl, ioaddr->ctl_addr); +} + +/** + * ata_bmdma_thaw - Thaw BMDMA controller port + * @ap: port to thaw + * + * Thaw BMDMA controller port. + * + * LOCKING: + * Inherited from caller. + */ +void ata_bmdma_thaw(struct ata_port *ap) +{ + /* clear & re-enable interrupts */ + ata_chk_status(ap); + ap->ops->irq_clear(ap); + if (ap->ioaddr.ctl_addr) /* FIXME: hack. create a hook instead */ + ata_irq_on(ap); +} + +/** + * ata_bmdma_drive_eh - Perform EH with given methods for BMDMA controller + * @ap: port to handle error for + * @softreset: softreset method (can be NULL) + * @hardreset: hardreset method (can be NULL) + * @postreset: postreset method (can be NULL) + * + * Handle error for ATA BMDMA controller. It can handle both + * PATA and SATA controllers. Many controllers should be able to + * use this EH as-is or with some added handling before and + * after. + * + * This function is intended to be used for constructing + * ->error_handler callback by low level drivers. 
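The new ata_bmdma_freeze()/ata_bmdma_thaw() helpers earlier in this libata-bmdma hunk quiesce a port by setting nIEN in the device control register and later clear pending status and re-enable the IRQ. Below is a minimal sketch of that register-level effect, assuming the conventional ATA_NIEN value of 0x02; it is an illustration, not driver code.

#include <stdio.h>

#define ATA_NIEN 0x02u   /* nIEN (interrupt disable) bit in device control, per ATA spec */

static unsigned int freeze_ctl(unsigned int ctl) { return ctl | ATA_NIEN; }
static unsigned int thaw_ctl(unsigned int ctl)   { return ctl & ~ATA_NIEN; }

int main(void)
{
        unsigned int ctl = 0x08;   /* example device control value */
        printf("frozen ctl 0x%02x, thawed ctl 0x%02x\n",
               freeze_ctl(ctl), thaw_ctl(ctl));
        return 0;
}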
+ * + * LOCKING: + * Kernel thread context (may sleep) + */ +void ata_bmdma_drive_eh(struct ata_port *ap, ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, ata_postreset_fn_t postreset) +{ + struct ata_host_set *host_set = ap->host_set; + struct ata_eh_context *ehc = &ap->eh_context; + struct ata_queued_cmd *qc; + unsigned long flags; + int thaw = 0; + + qc = __ata_qc_from_tag(ap, ap->active_tag); + if (qc && !(qc->flags & ATA_QCFLAG_FAILED)) + qc = NULL; + + /* reset PIO HSM and stop DMA engine */ + spin_lock_irqsave(&host_set->lock, flags); + + ap->hsm_task_state = HSM_ST_IDLE; + + if (qc && (qc->tf.protocol == ATA_PROT_DMA || + qc->tf.protocol == ATA_PROT_ATAPI_DMA)) { + u8 host_stat; + + host_stat = ata_bmdma_status(ap); + + ata_ehi_push_desc(&ehc->i, "BMDMA stat 0x%x", host_stat); + + /* BMDMA controllers indicate host bus error by + * setting DMA_ERR bit and timing out. As it wasn't + * really a timeout event, adjust error mask and + * cancel frozen state. + */ + if (qc->err_mask == AC_ERR_TIMEOUT && host_stat & ATA_DMA_ERR) { + qc->err_mask = AC_ERR_HOST_BUS; + thaw = 1; + } + + ap->ops->bmdma_stop(qc); + } + + ata_altstatus(ap); + ata_chk_status(ap); + ap->ops->irq_clear(ap); + + spin_unlock_irqrestore(&host_set->lock, flags); + + if (thaw) + ata_eh_thaw_port(ap); + + /* PIO and DMA engines have been stopped, perform recovery */ + ata_do_eh(ap, softreset, hardreset, postreset); +} + +/** + * ata_bmdma_error_handler - Stock error handler for BMDMA controller + * @ap: port to handle error for + * + * Stock error handler for BMDMA controller. + * + * LOCKING: + * Kernel thread context (may sleep) + */ +void ata_bmdma_error_handler(struct ata_port *ap) +{ + ata_reset_fn_t hardreset; + + hardreset = NULL; + if (sata_scr_valid(ap)) + hardreset = sata_std_hardreset; + + ata_bmdma_drive_eh(ap, ata_std_softreset, hardreset, ata_std_postreset); +} + +/** + * ata_bmdma_post_internal_cmd - Stock post_internal_cmd for + * BMDMA controller + * @qc: internal command to clean up + * + * LOCKING: + * Kernel thread context (may sleep) + */ +void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc) +{ + ata_bmdma_stop(qc); +} + #ifdef CONFIG_PCI static struct ata_probe_ent * ata_probe_ent_alloc(struct device *dev, const struct ata_port_info *port) diff -puN drivers/scsi/libata-core.c~git-libata-all drivers/scsi/libata-core.c --- devel/drivers/scsi/libata-core.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/libata-core.c 2006-06-09 15:21:16.000000000 -0700 @@ -61,14 +61,10 @@ #include "libata.h" -static unsigned int ata_dev_init_params(struct ata_port *ap, - struct ata_device *dev, - u16 heads, - u16 sectors); -static void ata_set_mode(struct ata_port *ap); -static unsigned int ata_dev_set_xfermode(struct ata_port *ap, - struct ata_device *dev); -static void ata_dev_xfermask(struct ata_port *ap, struct ata_device *dev); +static unsigned int ata_dev_init_params(struct ata_device *dev, + u16 heads, u16 sectors); +static unsigned int ata_dev_set_xfermode(struct ata_device *dev); +static void ata_dev_xfermask(struct ata_device *dev); static unsigned int ata_unique_id = 1; static struct workqueue_struct *ata_wq; @@ -77,6 +73,10 @@ int atapi_enabled = 1; module_param(atapi_enabled, int, 0444); MODULE_PARM_DESC(atapi_enabled, "Enable discovery of ATAPI devices (0=off, 1=on)"); +int atapi_dmadir = 0; +module_param(atapi_dmadir, int, 0444); +MODULE_PARM_DESC(atapi_dmadir, "Enable ATAPI DMADIR bridge support (0=off, 1=on)"); + int libata_fua = 0; module_param_named(fua, 
libata_fua, int, 0444); MODULE_PARM_DESC(fua, "FUA support (0=off, 1=on)"); @@ -397,11 +397,22 @@ static const char *ata_mode_string(unsig return ""; } -static void ata_dev_disable(struct ata_port *ap, struct ata_device *dev) +static const char *sata_spd_string(unsigned int spd) +{ + static const char * const spd_str[] = { + "1.5 Gbps", + "3.0 Gbps", + }; + + if (spd == 0 || (spd - 1) >= ARRAY_SIZE(spd_str)) + return ""; + return spd_str[spd - 1]; +} + +void ata_dev_disable(struct ata_device *dev) { - if (ata_dev_present(dev)) { - printk(KERN_WARNING "ata%u: dev %u disabled\n", - ap->id, dev->devno); + if (ata_dev_enabled(dev)) { + ata_dev_printk(dev, KERN_WARNING, "disabled\n"); dev->class++; } } @@ -943,15 +954,14 @@ void ata_qc_complete_internal(struct ata { struct completion *waiting = qc->private_data; - qc->ap->ops->tf_read(qc->ap, &qc->tf); complete(waiting); } /** * ata_exec_internal - execute libata internal command - * @ap: Port to which the command is sent * @dev: Device to which the command is sent * @tf: Taskfile registers for the command and the result + * @cdb: CDB for packet command * @dma_dir: Data tranfer direction of the command * @buf: Data buffer of the command * @buflen: Length of data buffer @@ -966,23 +976,62 @@ void ata_qc_complete_internal(struct ata * None. Should be called with kernel context, might sleep. */ -static unsigned -ata_exec_internal(struct ata_port *ap, struct ata_device *dev, - struct ata_taskfile *tf, - int dma_dir, void *buf, unsigned int buflen) +unsigned ata_exec_internal(struct ata_device *dev, + struct ata_taskfile *tf, const u8 *cdb, + int dma_dir, void *buf, unsigned int buflen) { + struct ata_port *ap = dev->ap; u8 command = tf->command; struct ata_queued_cmd *qc; + unsigned int tag, preempted_tag; + u32 preempted_sactive, preempted_qc_active; DECLARE_COMPLETION(wait); unsigned long flags; unsigned int err_mask; + int rc; spin_lock_irqsave(&ap->host_set->lock, flags); - qc = ata_qc_new_init(ap, dev); - BUG_ON(qc == NULL); + /* no internal command while frozen */ + if (ap->flags & ATA_FLAG_FROZEN) { + spin_unlock_irqrestore(&ap->host_set->lock, flags); + return AC_ERR_SYSTEM; + } + + /* initialize internal qc */ + + /* XXX: Tag 0 is used for drivers with legacy EH as some + * drivers choke if any other tag is given. This breaks + * ata_tag_internal() test for those drivers. Don't use new + * EH stuff without converting to it. + */ + if (ap->ops->error_handler) + tag = ATA_TAG_INTERNAL; + else + tag = 0; + + if (test_and_set_bit(tag, &ap->qc_allocated)) + BUG(); + qc = __ata_qc_from_tag(ap, tag); + + qc->tag = tag; + qc->scsicmd = NULL; + qc->ap = ap; + qc->dev = dev; + ata_qc_reinit(qc); + + preempted_tag = ap->active_tag; + preempted_sactive = ap->sactive; + preempted_qc_active = ap->qc_active; + ap->active_tag = ATA_TAG_POISON; + ap->sactive = 0; + ap->qc_active = 0; + /* prepare & issue qc */ qc->tf = *tf; + if (cdb) + memcpy(qc->cdb, cdb, ATAPI_CDB_LEN); + qc->flags |= ATA_QCFLAG_RESULT_TF; qc->dma_dir = dma_dir; if (dma_dir != DMA_NONE) { ata_sg_init_one(qc, buf, buflen); @@ -996,31 +1045,53 @@ ata_exec_internal(struct ata_port *ap, s spin_unlock_irqrestore(&ap->host_set->lock, flags); - if (!wait_for_completion_timeout(&wait, ATA_TMOUT_INTERNAL)) { - ata_port_flush_task(ap); + rc = wait_for_completion_timeout(&wait, ATA_TMOUT_INTERNAL); + ata_port_flush_task(ap); + + if (!rc) { spin_lock_irqsave(&ap->host_set->lock, flags); /* We're racing with irq here. If we lose, the * following test prevents us from completing the qc - * again. 
If completion irq occurs after here but - * before the caller cleans up, it will result in a - * spurious interrupt. We can live with that. + * twice. If we win, the port is frozen and will be + * cleaned up by ->post_internal_cmd(). */ if (qc->flags & ATA_QCFLAG_ACTIVE) { - qc->err_mask = AC_ERR_TIMEOUT; - ata_qc_complete(qc); - printk(KERN_WARNING "ata%u: qc timeout (cmd 0x%x)\n", - ap->id, command); + qc->err_mask |= AC_ERR_TIMEOUT; + + if (ap->ops->error_handler) + ata_port_freeze(ap); + else + ata_qc_complete(qc); + + ata_dev_printk(dev, KERN_WARNING, + "qc timeout (cmd 0x%x)\n", command); } spin_unlock_irqrestore(&ap->host_set->lock, flags); } - *tf = qc->tf; + /* do post_internal_cmd */ + if (ap->ops->post_internal_cmd) + ap->ops->post_internal_cmd(qc); + + if (qc->flags & ATA_QCFLAG_FAILED && !qc->err_mask) { + ata_dev_printk(dev, KERN_WARNING, "zero err_mask for failed " + "internal command, assuming AC_ERR_OTHER\n"); + qc->err_mask |= AC_ERR_OTHER; + } + + /* finish up */ + spin_lock_irqsave(&ap->host_set->lock, flags); + + *tf = qc->result_tf; err_mask = qc->err_mask; ata_qc_free(qc); + ap->active_tag = preempted_tag; + ap->sactive = preempted_sactive; + ap->qc_active = preempted_qc_active; /* XXX - Some LLDDs (sata_mv) disable port on command failure. * Until those drivers are fixed, we detect the condition @@ -1033,11 +1104,13 @@ ata_exec_internal(struct ata_port *ap, s * * Kill the following code as soon as those drivers are fixed. */ - if (ap->flags & ATA_FLAG_PORT_DISABLED) { + if (ap->flags & ATA_FLAG_DISABLED) { err_mask |= AC_ERR_SYSTEM; ata_port_probe(ap); } + spin_unlock_irqrestore(&ap->host_set->lock, flags); + return err_mask; } @@ -1076,11 +1149,10 @@ unsigned int ata_pio_need_iordy(const st /** * ata_dev_read_id - Read ID data from the specified device - * @ap: port on which target device resides * @dev: target device * @p_class: pointer to class of the target device (may be changed) * @post_reset: is this read ID post-reset? - * @p_id: read IDENTIFY page (newly allocated) + * @id: buffer to read IDENTIFY data into * * Read ID data from the specified device. ATA_CMD_ID_ATA is * performed on ATA devices and ATA_CMD_ID_ATAPI on ATAPI @@ -1093,13 +1165,13 @@ unsigned int ata_pio_need_iordy(const st * RETURNS: * 0 on success, -errno otherwise. */ -static int ata_dev_read_id(struct ata_port *ap, struct ata_device *dev, - unsigned int *p_class, int post_reset, u16 **p_id) +static int ata_dev_read_id(struct ata_device *dev, unsigned int *p_class, + int post_reset, u16 *id) { + struct ata_port *ap = dev->ap; unsigned int class = *p_class; struct ata_taskfile tf; unsigned int err_mask = 0; - u16 *id; const char *reason; int rc; @@ -1107,15 +1179,8 @@ static int ata_dev_read_id(struct ata_po ata_dev_select(ap, dev->devno, 1, 1); /* select device 0/1 */ - id = kmalloc(sizeof(id[0]) * ATA_ID_WORDS, GFP_KERNEL); - if (id == NULL) { - rc = -ENOMEM; - reason = "out of memory"; - goto err_out; - } - retry: - ata_tf_init(ap, &tf, dev->devno); + ata_tf_init(dev, &tf); switch (class) { case ATA_DEV_ATA: @@ -1132,7 +1197,7 @@ static int ata_dev_read_id(struct ata_po tf.protocol = ATA_PROT_PIO; - err_mask = ata_exec_internal(ap, dev, &tf, DMA_FROM_DEVICE, + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_FROM_DEVICE, id, sizeof(id[0]) * ATA_ID_WORDS); if (err_mask) { rc = -EIO; @@ -1159,7 +1224,7 @@ static int ata_dev_read_id(struct ata_po * Some drives were very specific about that exact sequence. 
*/ if (ata_id_major_version(id) < 4 || !ata_id_has_lba(id)) { - err_mask = ata_dev_init_params(ap, dev, id[3], id[6]); + err_mask = ata_dev_init_params(dev, id[3], id[6]); if (err_mask) { rc = -EIO; reason = "INIT_DEV_PARAMS failed"; @@ -1175,25 +1240,44 @@ static int ata_dev_read_id(struct ata_po } *p_class = class; - *p_id = id; + return 0; err_out: - printk(KERN_WARNING "ata%u: dev %u failed to IDENTIFY (%s)\n", - ap->id, dev->devno, reason); - kfree(id); + ata_dev_printk(dev, KERN_WARNING, "failed to IDENTIFY " + "(%s, err_mask=0x%x)\n", reason, err_mask); return rc; } -static inline u8 ata_dev_knobble(const struct ata_port *ap, - struct ata_device *dev) +static inline u8 ata_dev_knobble(struct ata_device *dev) +{ + return ((dev->ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(dev->id))); +} + +static void ata_dev_config_ncq(struct ata_device *dev, + char *desc, size_t desc_sz) { - return ((ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(dev->id))); + struct ata_port *ap = dev->ap; + int hdepth = 0, ddepth = ata_id_queue_depth(dev->id); + + if (!ata_id_has_ncq(dev->id)) { + desc[0] = '\0'; + return; + } + + if (ap->flags & ATA_FLAG_NCQ) { + hdepth = min(ap->host->can_queue, ATA_MAX_QUEUE - 1); + dev->flags |= ATA_DFLAG_NCQ; + } + + if (hdepth >= ddepth) + snprintf(desc, desc_sz, "NCQ (depth %d)", ddepth); + else + snprintf(desc, desc_sz, "NCQ (depth %d/%d)", hdepth, ddepth); } /** * ata_dev_configure - Configure the specified ATA/ATAPI device - * @ap: Port on which target device resides * @dev: Target device to configure * @print_info: Enable device info printout * @@ -1206,14 +1290,14 @@ static inline u8 ata_dev_knobble(const s * RETURNS: * 0 on success, -errno otherwise */ -static int ata_dev_configure(struct ata_port *ap, struct ata_device *dev, - int print_info) +static int ata_dev_configure(struct ata_device *dev, int print_info) { + struct ata_port *ap = dev->ap; const u16 *id = dev->id; unsigned int xfer_mask; int i, rc; - if (!ata_dev_present(dev)) { + if (!ata_dev_enabled(dev)) { DPRINTK("ENTER/EXIT (host %u, dev %u) -- nodev\n", ap->id, dev->devno); return 0; @@ -1223,13 +1307,13 @@ static int ata_dev_configure(struct ata_ /* print device capabilities */ if (print_info) - printk(KERN_DEBUG "ata%u: dev %u cfg 49:%04x 82:%04x 83:%04x " - "84:%04x 85:%04x 86:%04x 87:%04x 88:%04x\n", - ap->id, dev->devno, id[49], id[82], id[83], - id[84], id[85], id[86], id[87], id[88]); + ata_dev_printk(dev, KERN_DEBUG, "cfg 49:%04x 82:%04x 83:%04x " + "84:%04x 85:%04x 86:%04x 87:%04x 88:%04x\n", + id[49], id[82], id[83], id[84], + id[85], id[86], id[87], id[88]); /* initialize to-be-configured parameters */ - dev->flags = 0; + dev->flags &= ~ATA_DFLAG_CFG_MASK; dev->max_sectors = 0; dev->cdb_len = 0; dev->n_sectors = 0; @@ -1252,6 +1336,7 @@ static int ata_dev_configure(struct ata_ if (ata_id_has_lba(id)) { const char *lba_desc; + char ncq_desc[20]; lba_desc = "LBA"; dev->flags |= ATA_DFLAG_LBA; @@ -1260,15 +1345,17 @@ static int ata_dev_configure(struct ata_ lba_desc = "LBA48"; } + /* config NCQ */ + ata_dev_config_ncq(dev, ncq_desc, sizeof(ncq_desc)); + /* print device info to dmesg */ if (print_info) - printk(KERN_INFO "ata%u: dev %u ATA-%d, " - "max %s, %Lu sectors: %s\n", - ap->id, dev->devno, - ata_id_major_version(id), - ata_mode_string(xfer_mask), - (unsigned long long)dev->n_sectors, - lba_desc); + ata_dev_printk(dev, KERN_INFO, "ATA-%d, " + "max %s, %Lu sectors: %s %s\n", + ata_id_major_version(id), + ata_mode_string(xfer_mask), + (unsigned long long)dev->n_sectors, + lba_desc, ncq_desc); } 
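ata_dev_config_ncq(), added just above, reports either the device's advertised queue depth or a "host/device" pair when the host will queue less than the drive supports. A hedged standalone sketch of that formatting follows; ATA_MAX_QUEUE and the sample depths are assumptions for the example.

#include <stdio.h>

#define ATA_MAX_QUEUE 32   /* assumed host-side limit for the example */

static void ncq_desc(char *buf, size_t sz, int host_can_queue,
                     int dev_queue_depth, int host_has_ncq)
{
        int hdepth = 0;

        if (host_has_ncq)
                hdepth = host_can_queue < ATA_MAX_QUEUE - 1 ?
                         host_can_queue : ATA_MAX_QUEUE - 1;

        if (hdepth >= dev_queue_depth)
                snprintf(buf, sz, "NCQ (depth %d)", dev_queue_depth);
        else
                snprintf(buf, sz, "NCQ (depth %d/%d)", hdepth, dev_queue_depth);
}

int main(void)
{
        char desc[24];

        ncq_desc(desc, sizeof(desc), 31, 32, 1);
        printf("%s\n", desc);   /* e.g. "NCQ (depth 31/32)" */
        return 0;
}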
else { /* CHS */ @@ -1286,13 +1373,18 @@ static int ata_dev_configure(struct ata_ /* print device info to dmesg */ if (print_info) - printk(KERN_INFO "ata%u: dev %u ATA-%d, " - "max %s, %Lu sectors: CHS %u/%u/%u\n", - ap->id, dev->devno, - ata_id_major_version(id), - ata_mode_string(xfer_mask), - (unsigned long long)dev->n_sectors, - dev->cylinders, dev->heads, dev->sectors); + ata_dev_printk(dev, KERN_INFO, "ATA-%d, " + "max %s, %Lu sectors: CHS %u/%u/%u\n", + ata_id_major_version(id), + ata_mode_string(xfer_mask), + (unsigned long long)dev->n_sectors, + dev->cylinders, dev->heads, dev->sectors); + } + + if (dev->id[59] & 0x100) { + dev->multi_count = dev->id[59] & 0xff; + DPRINTK("ata%u: dev %u multi count %u\n", + ap->id, dev->devno, dev->multi_count); } dev->cdb_len = 16; @@ -1300,18 +1392,27 @@ static int ata_dev_configure(struct ata_ /* ATAPI-specific feature tests */ else if (dev->class == ATA_DEV_ATAPI) { + char *cdb_intr_string = ""; + rc = atapi_cdb_len(id); if ((rc < 12) || (rc > ATAPI_CDB_LEN)) { - printk(KERN_WARNING "ata%u: unsupported CDB len\n", ap->id); + ata_dev_printk(dev, KERN_WARNING, + "unsupported CDB len\n"); rc = -EINVAL; goto err_out_nosup; } dev->cdb_len = (unsigned int) rc; + if (ata_id_cdb_intr(dev->id)) { + dev->flags |= ATA_DFLAG_CDB_INTR; + cdb_intr_string = ", CDB intr"; + } + /* print device info to dmesg */ if (print_info) - printk(KERN_INFO "ata%u: dev %u ATAPI, max %s\n", - ap->id, dev->devno, ata_mode_string(xfer_mask)); + ata_dev_printk(dev, KERN_INFO, "ATAPI, max %s%s\n", + ata_mode_string(xfer_mask), + cdb_intr_string); } ap->host->max_cmd_len = 0; @@ -1321,10 +1422,10 @@ static int ata_dev_configure(struct ata_ ap->device[i].cdb_len); /* limit bridge transfers to udma5, 200 sectors */ - if (ata_dev_knobble(ap, dev)) { + if (ata_dev_knobble(dev)) { if (print_info) - printk(KERN_INFO "ata%u(%u): applying bridge limits\n", - ap->id, dev->devno); + ata_dev_printk(dev, KERN_INFO, + "applying bridge limits\n"); dev->udma_mask &= ATA_UDMA5; dev->max_sectors = ATA_MAX_SECTORS; } @@ -1352,16 +1453,24 @@ err_out_nosup: * PCI/etc. bus probe sem. * * RETURNS: - * Zero on success, non-zero on error. + * Zero on success, negative errno otherwise. */ static int ata_bus_probe(struct ata_port *ap) { unsigned int classes[ATA_MAX_DEVICES]; - unsigned int i, rc, found = 0; + int tries[ATA_MAX_DEVICES]; + int i, rc, down_xfermask; + struct ata_device *dev; ata_port_probe(ap); + for (i = 0; i < ATA_MAX_DEVICES; i++) + tries[i] = ATA_PROBE_MAX_TRIES; + + retry: + down_xfermask = 0; + /* reset and determine device classes */ for (i = 0; i < ATA_MAX_DEVICES; i++) classes[i] = ATA_DEV_UNKNOWN; @@ -1369,15 +1478,18 @@ static int ata_bus_probe(struct ata_port if (ap->ops->probe_reset) { rc = ap->ops->probe_reset(ap, classes); if (rc) { - printk("ata%u: reset failed (errno=%d)\n", ap->id, rc); + ata_port_printk(ap, KERN_ERR, + "reset failed (errno=%d)\n", rc); return rc; } } else { ap->ops->phy_reset(ap); - if (!(ap->flags & ATA_FLAG_PORT_DISABLED)) - for (i = 0; i < ATA_MAX_DEVICES; i++) + for (i = 0; i < ATA_MAX_DEVICES; i++) { + if (!(ap->flags & ATA_FLAG_DISABLED)) classes[i] = ap->device[i].class; + ap->device[i].class = ATA_DEV_UNKNOWN; + } ata_port_probe(ap); } @@ -1386,45 +1498,69 @@ static int ata_bus_probe(struct ata_port if (classes[i] == ATA_DEV_UNKNOWN) classes[i] = ATA_DEV_NONE; + /* after the reset the device state is PIO 0 and the controller + state is undefined. 
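The hunk above also starts honouring IDENTIFY word 59: when bit 8 is set, the low byte is the sector count used for READ/WRITE MULTIPLE and is recorded as dev->multi_count. A small illustrative parser (not patch code) follows.

#include <stdint.h>
#include <stdio.h>

static unsigned int id_multi_count(const uint16_t *id)
{
        /* bit 8 = "current multiple setting valid", low byte = count */
        return (id[59] & 0x100) ? (id[59] & 0xff) : 0;
}

int main(void)
{
        uint16_t id[256] = { 0 };

        id[59] = 0x0110;   /* valid bit set, multi count 16 */
        printf("multi count %u\n", id_multi_count(id));
        return 0;
}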
Record the mode */ + + for (i = 0; i < ATA_MAX_DEVICES; i++) + ap->device[i].pio_mode = XFER_PIO_0; + /* read IDENTIFY page and configure devices */ for (i = 0; i < ATA_MAX_DEVICES; i++) { - struct ata_device *dev = &ap->device[i]; + dev = &ap->device[i]; - dev->class = classes[i]; + if (tries[i]) + dev->class = classes[i]; - if (!ata_dev_present(dev)) + if (!ata_dev_enabled(dev)) continue; - WARN_ON(dev->id != NULL); - if (ata_dev_read_id(ap, dev, &dev->class, 1, &dev->id)) { - dev->class = ATA_DEV_NONE; - continue; - } + rc = ata_dev_read_id(dev, &dev->class, 1, dev->id); + if (rc) + goto fail; - if (ata_dev_configure(ap, dev, 1)) { - ata_dev_disable(ap, dev); - continue; - } + rc = ata_dev_configure(dev, 1); + if (rc) + goto fail; + } - found = 1; + /* configure transfer mode */ + rc = ata_set_mode(ap, &dev); + if (rc) { + down_xfermask = 1; + goto fail; } - if (!found) - goto err_out_disable; + for (i = 0; i < ATA_MAX_DEVICES; i++) + if (ata_dev_enabled(&ap->device[i])) + return 0; - if (ap->ops->set_mode) - ap->ops->set_mode(ap); - else - ata_set_mode(ap); + /* no device present, disable port */ + ata_port_disable(ap); + ap->ops->port_disable(ap); + return -ENODEV; - if (ap->flags & ATA_FLAG_PORT_DISABLED) - goto err_out_disable; + fail: + switch (rc) { + case -EINVAL: + case -ENODEV: + tries[dev->devno] = 0; + break; + case -EIO: + sata_down_spd_limit(ap); + /* fall through */ + default: + tries[dev->devno]--; + if (down_xfermask && + ata_down_xfermask_limit(dev, tries[dev->devno] == 1)) + tries[dev->devno] = 0; + } - return 0; + if (!tries[dev->devno]) { + ata_down_xfermask_limit(dev, 1); + ata_dev_disable(dev); + } -err_out_disable: - ap->ops->port_disable(ap); - return -1; + goto retry; } /** @@ -1440,7 +1576,7 @@ err_out_disable: void ata_port_probe(struct ata_port *ap) { - ap->flags &= ~ATA_FLAG_PORT_DISABLED; + ap->flags &= ~ATA_FLAG_DISABLED; } /** @@ -1454,27 +1590,21 @@ void ata_port_probe(struct ata_port *ap) */ static void sata_print_link_status(struct ata_port *ap) { - u32 sstatus, tmp; - const char *speed; + u32 sstatus, scontrol, tmp; - if (!ap->ops->scr_read) + if (sata_scr_read(ap, SCR_STATUS, &sstatus)) return; + sata_scr_read(ap, SCR_CONTROL, &scontrol); - sstatus = scr_read(ap, SCR_STATUS); - - if (sata_dev_present(ap)) { + if (ata_port_online(ap)) { tmp = (sstatus >> 4) & 0xf; - if (tmp & (1 << 0)) - speed = "1.5"; - else if (tmp & (1 << 1)) - speed = "3.0"; - else - speed = ""; - printk(KERN_INFO "ata%u: SATA link up %s Gbps (SStatus %X)\n", - ap->id, speed, sstatus); + ata_port_printk(ap, KERN_INFO, + "SATA link up %s (SStatus %X SControl %X)\n", + sata_spd_string(tmp), sstatus, scontrol); } else { - printk(KERN_INFO "ata%u: SATA link down (SStatus %X)\n", - ap->id, sstatus); + ata_port_printk(ap, KERN_INFO, + "SATA link down (SStatus %X SControl %X)\n", + sstatus, scontrol); } } @@ -1497,17 +1627,18 @@ void __sata_phy_reset(struct ata_port *a if (ap->flags & ATA_FLAG_SATA_RESET) { /* issue phy wake/reset */ - scr_write_flush(ap, SCR_CONTROL, 0x301); + sata_scr_write_flush(ap, SCR_CONTROL, 0x301); /* Couldn't find anything in SATA I/II specs, but * AHCI-1.1 10.4.2 says at least 1 ms. 
*/ mdelay(1); } - scr_write_flush(ap, SCR_CONTROL, 0x300); /* phy wake/clear reset */ + /* phy wake/clear reset */ + sata_scr_write_flush(ap, SCR_CONTROL, 0x300); /* wait for phy to become ready, if necessary */ do { msleep(200); - sstatus = scr_read(ap, SCR_STATUS); + sata_scr_read(ap, SCR_STATUS, &sstatus); if ((sstatus & 0xf) != 1) break; } while (time_before(jiffies, timeout)); @@ -1516,12 +1647,12 @@ void __sata_phy_reset(struct ata_port *a sata_print_link_status(ap); /* TODO: phy layer with polling, timeouts, etc. */ - if (sata_dev_present(ap)) + if (!ata_port_offline(ap)) ata_port_probe(ap); else ata_port_disable(ap); - if (ap->flags & ATA_FLAG_PORT_DISABLED) + if (ap->flags & ATA_FLAG_DISABLED) return; if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { @@ -1546,24 +1677,24 @@ void __sata_phy_reset(struct ata_port *a void sata_phy_reset(struct ata_port *ap) { __sata_phy_reset(ap); - if (ap->flags & ATA_FLAG_PORT_DISABLED) + if (ap->flags & ATA_FLAG_DISABLED) return; ata_bus_reset(ap); } /** * ata_dev_pair - return other device on cable - * @ap: port * @adev: device * * Obtain the other device on the same cable, or if none is * present NULL is returned */ -struct ata_device *ata_dev_pair(struct ata_port *ap, struct ata_device *adev) +struct ata_device *ata_dev_pair(struct ata_device *adev) { + struct ata_port *ap = adev->ap; struct ata_device *pair = &ap->device[1 - adev->devno]; - if (!ata_dev_present(pair)) + if (!ata_dev_enabled(pair)) return NULL; return pair; } @@ -1585,7 +1716,122 @@ void ata_port_disable(struct ata_port *a { ap->device[0].class = ATA_DEV_NONE; ap->device[1].class = ATA_DEV_NONE; - ap->flags |= ATA_FLAG_PORT_DISABLED; + ap->flags |= ATA_FLAG_DISABLED; +} + +/** + * sata_down_spd_limit - adjust SATA spd limit downward + * @ap: Port to adjust SATA spd limit for + * + * Adjust SATA spd limit of @ap downward. Note that this + * function only adjusts the limit. The change must be applied + * using sata_set_spd(). + * + * LOCKING: + * Inherited from caller. + * + * RETURNS: + * 0 on success, negative errno on failure + */ +int sata_down_spd_limit(struct ata_port *ap) +{ + u32 sstatus, spd, mask; + int rc, highbit; + + rc = sata_scr_read(ap, SCR_STATUS, &sstatus); + if (rc) + return rc; + + mask = ap->sata_spd_limit; + if (mask <= 1) + return -EINVAL; + highbit = fls(mask) - 1; + mask &= ~(1 << highbit); + + spd = (sstatus >> 4) & 0xf; + if (spd <= 1) + return -EINVAL; + spd--; + mask &= (1 << spd) - 1; + if (!mask) + return -EINVAL; + + ap->sata_spd_limit = mask; + + ata_port_printk(ap, KERN_WARNING, "limiting SATA link speed to %s\n", + sata_spd_string(fls(mask))); + + return 0; +} + +static int __sata_set_spd_needed(struct ata_port *ap, u32 *scontrol) +{ + u32 spd, limit; + + if (ap->sata_spd_limit == UINT_MAX) + limit = 0; + else + limit = fls(ap->sata_spd_limit); + + spd = (*scontrol >> 4) & 0xf; + *scontrol = (*scontrol & ~0xf0) | ((limit & 0xf) << 4); + + return spd != limit; +} + +/** + * sata_set_spd_needed - is SATA spd configuration needed + * @ap: Port in question + * + * Test whether the spd limit in SControl matches + * @ap->sata_spd_limit. This function is used to determine + * whether hardreset is necessary to apply SATA spd + * configuration. + * + * LOCKING: + * Inherited from caller. + * + * RETURNS: + * 1 if SATA spd configuration is needed, 0 otherwise. 
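sata_down_spd_limit(), introduced above, lowers the allowed SATA generation mask by clearing its highest bit and additionally capping it below the speed currently reported in SStatus. The sketch below reproduces that arithmetic in plain C for illustration; fls32() stands in for the kernel's fls().

#include <stdio.h>

static int fls32(unsigned int x)   /* like the kernel's fls() */
{
        int r = 0;

        while (x) { r++; x >>= 1; }
        return r;
}

/* Returns the new limit mask, or 0 if the limit cannot go any lower. */
static unsigned int down_spd_limit(unsigned int limit, unsigned int sstatus)
{
        unsigned int spd = (sstatus >> 4) & 0xf;   /* current negotiated speed */
        unsigned int mask = limit;

        if (mask <= 1 || spd <= 1)
                return 0;
        mask &= ~(1u << (fls32(mask) - 1));        /* drop highest allowed generation */
        mask &= (1u << (spd - 1)) - 1;             /* stay below the current speed */
        return mask;
}

int main(void)
{
        /* limit allows gen1+gen2 (0x3); SStatus 0x123 means the link is at gen2 */
        printf("new limit 0x%x\n", down_spd_limit(0x3, 0x123));
        return 0;
}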
+ */ +int sata_set_spd_needed(struct ata_port *ap) +{ + u32 scontrol; + + if (sata_scr_read(ap, SCR_CONTROL, &scontrol)) + return 0; + + return __sata_set_spd_needed(ap, &scontrol); +} + +/** + * sata_set_spd - set SATA spd according to spd limit + * @ap: Port to set SATA spd for + * + * Set SATA spd of @ap according to sata_spd_limit. + * + * LOCKING: + * Inherited from caller. + * + * RETURNS: + * 0 if spd doesn't need to be changed, 1 if spd has been + * changed. Negative errno if SCR registers are inaccessible. + */ +int sata_set_spd(struct ata_port *ap) +{ + u32 scontrol; + int rc; + + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) + return rc; + + if (!__sata_set_spd_needed(ap, &scontrol)) + return 0; + + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) + return rc; + + return 1; } /* @@ -1736,151 +1982,196 @@ int ata_timing_compute(struct ata_device return 0; } -static int ata_dev_set_mode(struct ata_port *ap, struct ata_device *dev) +/** + * ata_down_xfermask_limit - adjust dev xfer masks downward + * @dev: Device to adjust xfer masks + * @force_pio0: Force PIO0 + * + * Adjust xfer masks of @dev downward. Note that this function + * does not apply the change. Invoking ata_set_mode() afterwards + * will apply the limit. + * + * LOCKING: + * Inherited from caller. + * + * RETURNS: + * 0 on success, negative errno on failure + */ +int ata_down_xfermask_limit(struct ata_device *dev, int force_pio0) +{ + unsigned long xfer_mask; + int highbit; + + xfer_mask = ata_pack_xfermask(dev->pio_mask, dev->mwdma_mask, + dev->udma_mask); + + if (!xfer_mask) + goto fail; + /* don't gear down to MWDMA from UDMA, go directly to PIO */ + if (xfer_mask & ATA_MASK_UDMA) + xfer_mask &= ~ATA_MASK_MWDMA; + + highbit = fls(xfer_mask) - 1; + xfer_mask &= ~(1 << highbit); + if (force_pio0) + xfer_mask &= 1 << ATA_SHIFT_PIO; + if (!xfer_mask) + goto fail; + + ata_unpack_xfermask(xfer_mask, &dev->pio_mask, &dev->mwdma_mask, + &dev->udma_mask); + + ata_dev_printk(dev, KERN_WARNING, "limiting speed to %s\n", + ata_mode_string(xfer_mask)); + + return 0; + + fail: + return -EINVAL; +} + +static int ata_dev_set_mode(struct ata_device *dev) { unsigned int err_mask; int rc; + dev->flags &= ~ATA_DFLAG_PIO; if (dev->xfer_shift == ATA_SHIFT_PIO) dev->flags |= ATA_DFLAG_PIO; - err_mask = ata_dev_set_xfermode(ap, dev); + err_mask = ata_dev_set_xfermode(dev); if (err_mask) { - printk(KERN_ERR - "ata%u: failed to set xfermode (err_mask=0x%x)\n", - ap->id, err_mask); + ata_dev_printk(dev, KERN_ERR, "failed to set xfermode " + "(err_mask=0x%x)\n", err_mask); return -EIO; } - rc = ata_dev_revalidate(ap, dev, 0); - if (rc) { - printk(KERN_ERR - "ata%u: failed to revalidate after set xfermode\n", - ap->id); + rc = ata_dev_revalidate(dev, 0); + if (rc) return rc; - } DPRINTK("xfer_shift=%u, xfer_mode=0x%x\n", dev->xfer_shift, (int)dev->xfer_mode); - printk(KERN_INFO "ata%u: dev %u configured for %s\n", - ap->id, dev->devno, - ata_mode_string(ata_xfer_mode2mask(dev->xfer_mode))); - return 0; -} - -static int ata_host_set_pio(struct ata_port *ap) -{ - int i; - - for (i = 0; i < ATA_MAX_DEVICES; i++) { - struct ata_device *dev = &ap->device[i]; - - if (!ata_dev_present(dev)) - continue; - - if (!dev->pio_mode) { - printk(KERN_WARNING "ata%u: no PIO support for device %d.\n", ap->id, i); - return -1; - } - - dev->xfer_mode = dev->pio_mode; - dev->xfer_shift = ATA_SHIFT_PIO; - if (ap->ops->set_piomode) - ap->ops->set_piomode(ap, dev); - } - + ata_dev_printk(dev, KERN_INFO, "configured for %s\n", + 
ata_mode_string(ata_xfer_mode2mask(dev->xfer_mode))); return 0; } -static void ata_host_set_dma(struct ata_port *ap) -{ - int i; - - for (i = 0; i < ATA_MAX_DEVICES; i++) { - struct ata_device *dev = &ap->device[i]; - - if (!ata_dev_present(dev) || !dev->dma_mode) - continue; - - dev->xfer_mode = dev->dma_mode; - dev->xfer_shift = ata_xfer_mode2shift(dev->dma_mode); - if (ap->ops->set_dmamode) - ap->ops->set_dmamode(ap, dev); - } -} - /** * ata_set_mode - Program timings and issue SET FEATURES - XFER * @ap: port on which timings will be programmed + * @r_failed_dev: out paramter for failed device * - * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). + * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If + * ata_set_mode() fails, pointer to the failing device is + * returned in @r_failed_dev. * * LOCKING: * PCI/etc. bus probe sem. + * + * RETURNS: + * 0 on success, negative errno otherwise */ -static void ata_set_mode(struct ata_port *ap) +int ata_set_mode(struct ata_port *ap, struct ata_device **r_failed_dev) { - int i, rc, used_dma = 0; + struct ata_device *dev; + int i, rc = 0, used_dma = 0, found = 0; + + /* has private set_mode? */ + if (ap->ops->set_mode) { + /* FIXME: make ->set_mode handle no device case and + * return error code and failing device on failure. + */ + for (i = 0; i < ATA_MAX_DEVICES; i++) { + if (ata_dev_enabled(&ap->device[i])) { + ap->ops->set_mode(ap); + break; + } + } + return 0; + } /* step 1: calculate xfer_mask */ for (i = 0; i < ATA_MAX_DEVICES; i++) { - struct ata_device *dev = &ap->device[i]; unsigned int pio_mask, dma_mask; - if (!ata_dev_present(dev)) - continue; + dev = &ap->device[i]; - ata_dev_xfermask(ap, dev); + if (!ata_dev_enabled(dev)) + continue; - /* TODO: let LLDD filter dev->*_mask here */ + ata_dev_xfermask(dev); pio_mask = ata_pack_xfermask(dev->pio_mask, 0, 0); dma_mask = ata_pack_xfermask(0, dev->mwdma_mask, dev->udma_mask); dev->pio_mode = ata_xfer_mask2mode(pio_mask); dev->dma_mode = ata_xfer_mask2mode(dma_mask); + found = 1; if (dev->dma_mode) used_dma = 1; } + if (!found) + goto out; /* step 2: always set host PIO timings */ - rc = ata_host_set_pio(ap); - if (rc) - goto err_out; + for (i = 0; i < ATA_MAX_DEVICES; i++) { + dev = &ap->device[i]; + if (!ata_dev_enabled(dev)) + continue; + + if (!dev->pio_mode) { + ata_dev_printk(dev, KERN_WARNING, "no PIO support\n"); + rc = -EINVAL; + goto out; + } + + dev->xfer_mode = dev->pio_mode; + dev->xfer_shift = ATA_SHIFT_PIO; + if (ap->ops->set_piomode) + ap->ops->set_piomode(ap, dev); + } /* step 3: set host DMA timings */ - ata_host_set_dma(ap); + for (i = 0; i < ATA_MAX_DEVICES; i++) { + dev = &ap->device[i]; + + if (!ata_dev_enabled(dev) || !dev->dma_mode) + continue; + + dev->xfer_mode = dev->dma_mode; + dev->xfer_shift = ata_xfer_mode2shift(dev->dma_mode); + if (ap->ops->set_dmamode) + ap->ops->set_dmamode(ap, dev); + } /* step 4: update devices' xfer mode */ for (i = 0; i < ATA_MAX_DEVICES; i++) { - struct ata_device *dev = &ap->device[i]; + dev = &ap->device[i]; - if (!ata_dev_present(dev)) + if (!ata_dev_enabled(dev)) continue; - if (ata_dev_set_mode(ap, dev)) - goto err_out; + rc = ata_dev_set_mode(dev); + if (rc) + goto out; } - /* - * Record simplex status. If we selected DMA then the other - * host channels are not permitted to do so. + /* Record simplex status. If we selected DMA then the other + * host channels are not permitted to do so. 
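ata_down_xfermask_limit(), added earlier in this libata-core hunk, is the gear-down used by the new ata_bus_probe() retry loop: it drops the fastest remaining transfer mode, skipping from UDMA straight past MWDMA, and can clamp to PIO0. The standalone sketch below mirrors that mask arithmetic; the bit-field layout (SHIFT_*) is assumed for the example rather than copied from libata.

#include <stdio.h>

#define SHIFT_PIO    0
#define SHIFT_MWDMA  8
#define SHIFT_UDMA  16
#define MASK_MWDMA  (0xffu << SHIFT_MWDMA)
#define MASK_UDMA   (0xffu << SHIFT_UDMA)

static int fls32(unsigned int x) { int r = 0; while (x) { r++; x >>= 1; } return r; }

/* Returns 0 and updates *xfer_mask on success, -1 if no mode is left. */
static int down_xfermask_limit(unsigned int *xfer_mask, int force_pio0)
{
        unsigned int mask = *xfer_mask;

        if (!mask)
                return -1;
        if (mask & MASK_UDMA)                /* don't gear down into MWDMA */
                mask &= ~MASK_MWDMA;
        mask &= ~(1u << (fls32(mask) - 1));  /* drop the fastest remaining mode */
        if (force_pio0)
                mask &= 1u << SHIFT_PIO;
        if (!mask)
                return -1;
        *xfer_mask = mask;
        return 0;
}

int main(void)
{
        unsigned int mask = (0x7u << SHIFT_UDMA) | (0x7u << SHIFT_MWDMA) | 0x1f;

        while (down_xfermask_limit(&mask, 0) == 0)
                printf("limited to 0x%06x\n", mask);
        return 0;
}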
*/ - if (used_dma && (ap->host_set->flags & ATA_HOST_SIMPLEX)) ap->host_set->simplex_claimed = 1; - /* - * Chip specific finalisation - */ + /* step5: chip specific finalisation */ if (ap->ops->post_set_mode) ap->ops->post_set_mode(ap); - return; - -err_out: - ata_port_disable(ap); + out: + if (rc) + *r_failed_dev = dev; + return rc; } /** @@ -1930,8 +2221,8 @@ unsigned int ata_busy_sleep (struct ata_ } if (status & ATA_BUSY) - printk(KERN_WARNING "ata%u is slow to respond, " - "please be patient\n", ap->id); + ata_port_printk(ap, KERN_WARNING, + "port is slow to respond, please be patient\n"); timeout = timer_start + tmout; while ((status & ATA_BUSY) && (time_before(jiffies, timeout))) { @@ -1940,8 +2231,8 @@ unsigned int ata_busy_sleep (struct ata_ } if (status & ATA_BUSY) { - printk(KERN_ERR "ata%u failed to respond (%lu secs)\n", - ap->id, tmout / HZ); + ata_port_printk(ap, KERN_ERR, "port failed to respond " + "(%lu secs)\n", tmout / HZ); return 1; } @@ -2033,8 +2324,10 @@ static unsigned int ata_bus_softreset(st * the bus shows 0xFF because the odd clown forgets the D7 * pulldown resistor. */ - if (ata_check_status(ap) == 0xFF) + if (ata_check_status(ap) == 0xFF) { + ata_port_printk(ap, KERN_ERR, "SRST failed (status 0xFF)\n"); return AC_ERR_OTHER; + } ata_bus_post_reset(ap, devmask); @@ -2058,7 +2351,7 @@ static unsigned int ata_bus_softreset(st * Obtains host_set lock. * * SIDE EFFECTS: - * Sets ATA_FLAG_PORT_DISABLED if bus reset fails. + * Sets ATA_FLAG_DISABLED if bus reset fails. */ void ata_bus_reset(struct ata_port *ap) @@ -2126,7 +2419,7 @@ void ata_bus_reset(struct ata_port *ap) return; err_out: - printk(KERN_ERR "ata%u: disabling port\n", ap->id); + ata_port_printk(ap, KERN_ERR, "disabling port\n"); ap->ops->port_disable(ap); DPRINTK("EXIT\n"); @@ -2135,19 +2428,27 @@ err_out: static int sata_phy_resume(struct ata_port *ap) { unsigned long timeout = jiffies + (HZ * 5); - u32 sstatus; + u32 scontrol, sstatus; + int rc; + + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) + return rc; + + scontrol = (scontrol & 0x0f0) | 0x300; - scr_write_flush(ap, SCR_CONTROL, 0x300); + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) + return rc; /* Wait for phy to become ready, if necessary. */ do { msleep(200); - sstatus = scr_read(ap, SCR_STATUS); + if ((rc = sata_scr_read(ap, SCR_STATUS, &sstatus))) + return rc; if ((sstatus & 0xf) != 1) return 0; } while (time_before(jiffies, timeout)); - return -1; + return -EBUSY; } /** @@ -2165,17 +2466,25 @@ static int sata_phy_resume(struct ata_po */ void ata_std_probeinit(struct ata_port *ap) { - if ((ap->flags & ATA_FLAG_SATA) && ap->ops->scr_read) { - sata_phy_resume(ap); - if (sata_dev_present(ap)) - ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT); + u32 scontrol; + + /* resume link */ + sata_phy_resume(ap); + + /* init sata_spd_limit to the current value */ + if (sata_scr_read(ap, SCR_CONTROL, &scontrol) == 0) { + int spd = (scontrol >> 4) & 0xf; + ap->sata_spd_limit &= (1 << spd) - 1; } + + /* wait for device */ + if (ata_port_online(ap)) + ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT); } /** * ata_std_softreset - reset host port via ATA SRST * @ap: port to reset - * @verbose: fail verbosely * @classes: resulting classes of attached devices * * Reset host port using ATA SRST. This function is to be used @@ -2187,7 +2496,7 @@ void ata_std_probeinit(struct ata_port * * RETURNS: * 0 on success, -errno otherwise. 
*/ -int ata_std_softreset(struct ata_port *ap, int verbose, unsigned int *classes) +int ata_std_softreset(struct ata_port *ap, unsigned int *classes) { unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS; unsigned int devmask = 0, err_mask; @@ -2195,7 +2504,7 @@ int ata_std_softreset(struct ata_port *a DPRINTK("ENTER\n"); - if (ap->ops->scr_read && !sata_dev_present(ap)) { + if (ata_port_offline(ap)) { classes[0] = ATA_DEV_NONE; goto out; } @@ -2213,11 +2522,7 @@ int ata_std_softreset(struct ata_port *a DPRINTK("about to softreset, devmask=%x\n", devmask); err_mask = ata_bus_softreset(ap, devmask); if (err_mask) { - if (verbose) - printk(KERN_ERR "ata%u: SRST failed (err_mask=0x%x)\n", - ap->id, err_mask); - else - DPRINTK("EXIT, softreset failed (err_mask=0x%x)\n", + ata_port_printk(ap, KERN_ERR, "SRST failed (err_mask=0x%x)\n", err_mask); return -EIO; } @@ -2235,7 +2540,6 @@ int ata_std_softreset(struct ata_port *a /** * sata_std_hardreset - reset host port via SATA phy reset * @ap: port to reset - * @verbose: fail verbosely * @class: resulting class of attached device * * SATA phy-reset host port using DET bits of SControl register. @@ -2248,35 +2552,57 @@ int ata_std_softreset(struct ata_port *a * RETURNS: * 0 on success, -errno otherwise. */ -int sata_std_hardreset(struct ata_port *ap, int verbose, unsigned int *class) +int sata_std_hardreset(struct ata_port *ap, unsigned int *class) { + u32 scontrol; + int rc; + DPRINTK("ENTER\n"); - /* Issue phy wake/reset */ - scr_write_flush(ap, SCR_CONTROL, 0x301); + if (sata_set_spd_needed(ap)) { + /* SATA spec says nothing about how to reconfigure + * spd. To be on the safe side, turn off phy during + * reconfiguration. This works for at least ICH7 AHCI + * and Sil3124. + */ + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) + return rc; + + scontrol = (scontrol & 0x0f0) | 0x302; - /* - * Couldn't find anything in SATA I/II specs, but AHCI-1.1 + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) + return rc; + + sata_set_spd(ap); + } + + /* issue phy wake/reset */ + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) + return rc; + + scontrol = (scontrol & 0x0f0) | 0x301; + + if ((rc = sata_scr_write_flush(ap, SCR_CONTROL, scontrol))) + return rc; + + /* Couldn't find anything in SATA I/II specs, but AHCI-1.1 * 10.4.2 says at least 1 ms. */ msleep(1); - /* Bring phy back */ + /* bring phy back */ sata_phy_resume(ap); /* TODO: phy layer with polling, timeouts, etc. */ - if (!sata_dev_present(ap)) { + if (ata_port_offline(ap)) { *class = ATA_DEV_NONE; DPRINTK("EXIT, link offline\n"); return 0; } if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { - if (verbose) - printk(KERN_ERR "ata%u: COMRESET failed " - "(device not ready)\n", ap->id); - else - DPRINTK("EXIT, device not ready\n"); + ata_port_printk(ap, KERN_ERR, + "COMRESET failed (device not ready)\n"); return -EIO; } @@ -2305,19 +2631,23 @@ int sata_std_hardreset(struct ata_port * */ void ata_std_postreset(struct ata_port *ap, unsigned int *classes) { - DPRINTK("ENTER\n"); + u32 serror; - /* set cable type if it isn't already set */ - if (ap->cbl == ATA_CBL_NONE && ap->flags & ATA_FLAG_SATA) - ap->cbl = ATA_CBL_SATA; + DPRINTK("ENTER\n"); /* print link status */ - if (ap->cbl == ATA_CBL_SATA) - sata_print_link_status(ap); + sata_print_link_status(ap); + + /* clear SError */ + if (sata_scr_read(ap, SCR_ERROR, &serror) == 0) + sata_scr_write(ap, SCR_ERROR, serror); /* re-enable interrupts */ - if (ap->ioaddr.ctl_addr) /* FIXME: hack. 
create a hook instead */ - ata_irq_on(ap); + if (!ap->ops->error_handler) { + /* FIXME: hack. create a hook instead */ + if (ap->ioaddr.ctl_addr) + ata_irq_on(ap); + } /* is double-select really necessary? */ if (classes[0] != ATA_DEV_NONE) @@ -2360,7 +2690,7 @@ int ata_std_probe_reset(struct ata_port ata_reset_fn_t hardreset; hardreset = NULL; - if (ap->flags & ATA_FLAG_SATA && ap->ops->scr_read) + if (sata_scr_valid(ap)) hardreset = sata_std_hardreset; return ata_drive_probe_reset(ap, ata_std_probeinit, @@ -2368,16 +2698,15 @@ int ata_std_probe_reset(struct ata_port ata_std_postreset, classes); } -static int do_probe_reset(struct ata_port *ap, ata_reset_fn_t reset, - ata_postreset_fn_t postreset, - unsigned int *classes) +int ata_do_reset(struct ata_port *ap, ata_reset_fn_t reset, + unsigned int *classes) { int i, rc; for (i = 0; i < ATA_MAX_DEVICES; i++) classes[i] = ATA_DEV_UNKNOWN; - rc = reset(ap, 0, classes); + rc = reset(ap, classes); if (rc) return rc; @@ -2394,10 +2723,7 @@ static int do_probe_reset(struct ata_por if (classes[i] == ATA_DEV_UNKNOWN) classes[i] = ATA_DEV_NONE; - if (postreset) - postreset(ap, classes); - - return classes[0] != ATA_DEV_UNKNOWN ? 0 : -ENODEV; + return 0; } /** @@ -2421,8 +2747,6 @@ static int do_probe_reset(struct ata_por * - If classification is supported, fill classes[] with * recognized class codes. * - If classification is not supported, leave classes[] alone. - * - If verbose is non-zero, print error message on failure; - * otherwise, shut up. * * LOCKING: * Kernel thread context (may sleep) @@ -2438,31 +2762,63 @@ int ata_drive_probe_reset(struct ata_por { int rc = -EINVAL; + ata_eh_freeze_port(ap); + if (probeinit) probeinit(ap); - if (softreset) { - rc = do_probe_reset(ap, softreset, postreset, classes); - if (rc == 0) - return 0; + if (softreset && !sata_set_spd_needed(ap)) { + rc = ata_do_reset(ap, softreset, classes); + if (rc == 0 && classes[0] != ATA_DEV_UNKNOWN) + goto done; + ata_port_printk(ap, KERN_INFO, "softreset failed, " + "will try hardreset in 5 secs\n"); + ssleep(5); } if (!hardreset) - return rc; + goto done; - rc = do_probe_reset(ap, hardreset, postreset, classes); - if (rc == 0 || rc != -ENODEV) - return rc; + while (1) { + rc = ata_do_reset(ap, hardreset, classes); + if (rc == 0) { + if (classes[0] != ATA_DEV_UNKNOWN) + goto done; + break; + } + + if (sata_down_spd_limit(ap)) + goto done; + + ata_port_printk(ap, KERN_INFO, "hardreset failed, " + "will retry in 5 secs\n"); + ssleep(5); + } + + if (softreset) { + ata_port_printk(ap, KERN_INFO, + "hardreset succeeded without classification, " + "will retry softreset in 5 secs\n"); + ssleep(5); + + rc = ata_do_reset(ap, softreset, classes); + } + + done: + if (rc == 0) { + if (postreset) + postreset(ap, classes); - if (softreset) - rc = do_probe_reset(ap, softreset, postreset, classes); + ata_eh_thaw_port(ap); + if (classes[0] == ATA_DEV_UNKNOWN) + rc = -ENODEV; + } return rc; } /** * ata_dev_same_device - Determine whether new ID matches configured device - * @ap: port on which the device to compare against resides * @dev: device to compare against * @new_class: class of the new device * @new_id: IDENTIFY page of the new device @@ -2477,17 +2833,16 @@ int ata_drive_probe_reset(struct ata_por * RETURNS: * 1 if @dev matches @new_class and @new_id, 0 otherwise. 
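ata_dev_same_device(), documented just above, decides after revalidation whether the re-read IDENTIFY data still describes the same device by comparing class, model string, serial string and, for disks, sector count. A simplified, self-contained sketch of that comparison follows; the struct and sample values are invented for illustration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dev_ident {
        int class_code;
        char model[41];
        char serial[21];
        uint64_t n_sectors;
};

static int same_device(const struct dev_ident *old, const struct dev_ident *cur)
{
        if (old->class_code != cur->class_code)
                return 0;
        if (strcmp(old->model, cur->model))
                return 0;
        if (strcmp(old->serial, cur->serial))
                return 0;
        if (old->n_sectors != cur->n_sectors)
                return 0;
        return 1;
}

int main(void)
{
        struct dev_ident a = { 1, "EXAMPLE DISK", "SN0001", 1000 };
        struct dev_ident b = a;

        b.n_sectors = 2000;   /* capacity changed: treat as a different device */
        printf("same? %d\n", same_device(&a, &b));
        return 0;
}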
*/ -static int ata_dev_same_device(struct ata_port *ap, struct ata_device *dev, - unsigned int new_class, const u16 *new_id) +static int ata_dev_same_device(struct ata_device *dev, unsigned int new_class, + const u16 *new_id) { const u16 *old_id = dev->id; unsigned char model[2][41], serial[2][21]; u64 new_n_sectors; if (dev->class != new_class) { - printk(KERN_INFO - "ata%u: dev %u class mismatch %d != %d\n", - ap->id, dev->devno, dev->class, new_class); + ata_dev_printk(dev, KERN_INFO, "class mismatch %d != %d\n", + dev->class, new_class); return 0; } @@ -2498,24 +2853,22 @@ static int ata_dev_same_device(struct at new_n_sectors = ata_id_n_sectors(new_id); if (strcmp(model[0], model[1])) { - printk(KERN_INFO - "ata%u: dev %u model number mismatch '%s' != '%s'\n", - ap->id, dev->devno, model[0], model[1]); + ata_dev_printk(dev, KERN_INFO, "model number mismatch " + "'%s' != '%s'\n", model[0], model[1]); return 0; } if (strcmp(serial[0], serial[1])) { - printk(KERN_INFO - "ata%u: dev %u serial number mismatch '%s' != '%s'\n", - ap->id, dev->devno, serial[0], serial[1]); + ata_dev_printk(dev, KERN_INFO, "serial number mismatch " + "'%s' != '%s'\n", serial[0], serial[1]); return 0; } if (dev->class == ATA_DEV_ATA && dev->n_sectors != new_n_sectors) { - printk(KERN_INFO - "ata%u: dev %u n_sectors mismatch %llu != %llu\n", - ap->id, dev->devno, (unsigned long long)dev->n_sectors, - (unsigned long long)new_n_sectors); + ata_dev_printk(dev, KERN_INFO, "n_sectors mismatch " + "%llu != %llu\n", + (unsigned long long)dev->n_sectors, + (unsigned long long)new_n_sectors); return 0; } @@ -2524,7 +2877,6 @@ static int ata_dev_same_device(struct at /** * ata_dev_revalidate - Revalidate ATA device - * @ap: port on which the device to revalidate resides * @dev: device to revalidate * @post_reset: is this revalidation after reset? * @@ -2537,40 +2889,37 @@ static int ata_dev_same_device(struct at * RETURNS: * 0 on success, negative errno otherwise */ -int ata_dev_revalidate(struct ata_port *ap, struct ata_device *dev, - int post_reset) +int ata_dev_revalidate(struct ata_device *dev, int post_reset) { - unsigned int class; - u16 *id; + unsigned int class = dev->class; + u16 *id = (void *)dev->ap->sector_buf; int rc; - if (!ata_dev_present(dev)) - return -ENODEV; - - class = dev->class; - id = NULL; + if (!ata_dev_enabled(dev)) { + rc = -ENODEV; + goto fail; + } - /* allocate & read ID data */ - rc = ata_dev_read_id(ap, dev, &class, post_reset, &id); + /* read ID data */ + rc = ata_dev_read_id(dev, &class, post_reset, id); if (rc) goto fail; /* is the device still there? */ - if (!ata_dev_same_device(ap, dev, class, id)) { + if (!ata_dev_same_device(dev, class, id)) { rc = -ENODEV; goto fail; } - kfree(dev->id); - dev->id = id; + memcpy(dev->id, id, sizeof(id[0]) * ATA_ID_WORDS); /* configure device according to the new ID */ - return ata_dev_configure(ap, dev, 0); + rc = ata_dev_configure(dev, 0); + if (rc == 0) + return 0; fail: - printk(KERN_ERR "ata%u: dev %u revalidation failed (errno=%d)\n", - ap->id, dev->devno, rc); - kfree(id); + ata_dev_printk(dev, KERN_ERR, "revalidation failed (errno=%d)\n", rc); return rc; } @@ -2646,7 +2995,6 @@ static int ata_dma_blacklisted(const str /** * ata_dev_xfermask - Compute supported xfermask of the given device - * @ap: Port on which the device to compute xfermask for resides * @dev: Device to compute xfermask for * * Compute supported xfermask of @dev and store it in @@ -2661,49 +3009,61 @@ static int ata_dma_blacklisted(const str * LOCKING: * None. 
*/ -static void ata_dev_xfermask(struct ata_port *ap, struct ata_device *dev) +static void ata_dev_xfermask(struct ata_device *dev) { + struct ata_port *ap = dev->ap; struct ata_host_set *hs = ap->host_set; unsigned long xfer_mask; int i; - xfer_mask = ata_pack_xfermask(ap->pio_mask, ap->mwdma_mask, - ap->udma_mask); + xfer_mask = ata_pack_xfermask(ap->pio_mask, + ap->mwdma_mask, ap->udma_mask); + + /* Apply cable rule here. Don't apply it early because when + * we handle hot plug the cable type can itself change. + */ + if (ap->cbl == ATA_CBL_PATA40) + xfer_mask &= ~(0xF8 << ATA_SHIFT_UDMA); /* FIXME: Use port-wide xfermask for now */ for (i = 0; i < ATA_MAX_DEVICES; i++) { struct ata_device *d = &ap->device[i]; - if (!ata_dev_present(d)) + + if (ata_dev_absent(d)) continue; - xfer_mask &= ata_pack_xfermask(d->pio_mask, d->mwdma_mask, - d->udma_mask); + + if (ata_dev_disabled(d)) { + /* to avoid violating device selection timing */ + xfer_mask &= ata_pack_xfermask(d->pio_mask, + UINT_MAX, UINT_MAX); + continue; + } + + xfer_mask &= ata_pack_xfermask(d->pio_mask, + d->mwdma_mask, d->udma_mask); xfer_mask &= ata_id_xfermask(d->id); if (ata_dma_blacklisted(d)) xfer_mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); - /* Apply cable rule here. Don't apply it early because when - we handle hot plug the cable type can itself change */ - if (ap->cbl == ATA_CBL_PATA40) - xfer_mask &= ~(0xF8 << ATA_SHIFT_UDMA); } if (ata_dma_blacklisted(dev)) - printk(KERN_WARNING "ata%u: dev %u is on DMA blacklist, " - "disabling DMA\n", ap->id, dev->devno); + ata_dev_printk(dev, KERN_WARNING, + "device is on DMA blacklist, disabling DMA\n"); if (hs->flags & ATA_HOST_SIMPLEX) { if (hs->simplex_claimed) xfer_mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); } + if (ap->ops->mode_filter) xfer_mask = ap->ops->mode_filter(ap, dev, xfer_mask); - ata_unpack_xfermask(xfer_mask, &dev->pio_mask, &dev->mwdma_mask, - &dev->udma_mask); + ata_unpack_xfermask(xfer_mask, &dev->pio_mask, + &dev->mwdma_mask, &dev->udma_mask); } /** * ata_dev_set_xfermode - Issue SET FEATURES - XFER MODE command - * @ap: Port associated with device @dev * @dev: Device to which command will be sent * * Issue SET FEATURES - XFER MODE command to device @dev @@ -2716,8 +3076,7 @@ static void ata_dev_xfermask(struct ata_ * 0 on success, AC_ERR_* mask otherwise. */ -static unsigned int ata_dev_set_xfermode(struct ata_port *ap, - struct ata_device *dev) +static unsigned int ata_dev_set_xfermode(struct ata_device *dev) { struct ata_taskfile tf; unsigned int err_mask; @@ -2725,14 +3084,14 @@ static unsigned int ata_dev_set_xfermode /* set up set-features taskfile */ DPRINTK("set features - xfer mode\n"); - ata_tf_init(ap, &tf, dev->devno); + ata_tf_init(dev, &tf); tf.command = ATA_CMD_SET_FEATURES; tf.feature = SETFEATURES_XFER; tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; tf.protocol = ATA_PROT_NODATA; tf.nsect = dev->xfer_mode; - err_mask = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); DPRINTK("EXIT, err_mask=%x\n", err_mask); return err_mask; @@ -2740,7 +3099,6 @@ static unsigned int ata_dev_set_xfermode /** * ata_dev_init_params - Issue INIT DEV PARAMS command - * @ap: Port associated with device @dev * @dev: Device to which command will be sent * @heads: Number of heads (taskfile parameter) * @sectors: Number of sectors (taskfile parameter) @@ -2751,11 +3109,8 @@ static unsigned int ata_dev_set_xfermode * RETURNS: * 0 on success, AC_ERR_* mask otherwise. 
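The cable rule that moves into ata_dev_xfermask() above caps 40-wire PATA cables at UDMA2 by clearing 0xF8 << ATA_SHIFT_UDMA from the packed transfer mask. The one-liner below shows the same mask operation with an assumed shift value, purely as illustration.

#include <stdio.h>

#define SHIFT_UDMA 16   /* assumed position of the UDMA bits in the packed mask */

static unsigned int apply_40wire_rule(unsigned int xfer_mask)
{
        return xfer_mask & ~(0xF8u << SHIFT_UDMA);   /* keep UDMA0..UDMA2 only */
}

int main(void)
{
        unsigned int udma5_capable = 0x3fu << SHIFT_UDMA;

        printf("after 40-wire rule: 0x%06x\n", apply_40wire_rule(udma5_capable));
        return 0;
}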
*/ - -static unsigned int ata_dev_init_params(struct ata_port *ap, - struct ata_device *dev, - u16 heads, - u16 sectors) +static unsigned int ata_dev_init_params(struct ata_device *dev, + u16 heads, u16 sectors) { struct ata_taskfile tf; unsigned int err_mask; @@ -2767,14 +3122,14 @@ static unsigned int ata_dev_init_params( /* set up init dev params taskfile */ DPRINTK("init dev params \n"); - ata_tf_init(ap, &tf, dev->devno); + ata_tf_init(dev, &tf); tf.command = ATA_CMD_INIT_DEV_PARAMS; tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; tf.protocol = ATA_PROT_NODATA; tf.nsect = sectors; tf.device |= (heads - 1) & 0x0f; /* max head = num. of heads - 1 */ - err_mask = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); DPRINTK("EXIT, err_mask=%x\n", err_mask); return err_mask; @@ -2912,6 +3267,15 @@ int ata_check_atapi_dma(struct ata_queue if (ap->ops->check_atapi_dma) rc = ap->ops->check_atapi_dma(qc); + /* We don't support polling DMA. + * Use PIO if the LLDD handles only interrupts in + * the HSM_ST_LAST state and the ATAPI device + * generates CDB interrupts. + */ + if ((ap->flags & ATA_FLAG_PIO_POLLING) && + (qc->dev->flags & ATA_DFLAG_CDB_INTR)) + rc = 1; + return rc; } /** @@ -3140,134 +3504,6 @@ skip_map: } /** - * ata_poll_qc_complete - turn irq back on and finish qc - * @qc: Command to complete - * @err_mask: ATA status register content - * - * LOCKING: - * None. (grabs host lock) - */ - -void ata_poll_qc_complete(struct ata_queued_cmd *qc) -{ - struct ata_port *ap = qc->ap; - unsigned long flags; - - spin_lock_irqsave(&ap->host_set->lock, flags); - ap->flags &= ~ATA_FLAG_NOINTR; - ata_irq_on(ap); - ata_qc_complete(qc); - spin_unlock_irqrestore(&ap->host_set->lock, flags); -} - -/** - * ata_pio_poll - poll using PIO, depending on current state - * @ap: the target ata_port - * - * LOCKING: - * None. (executing in kernel thread context) - * - * RETURNS: - * timeout value to use - */ - -static unsigned long ata_pio_poll(struct ata_port *ap) -{ - struct ata_queued_cmd *qc; - u8 status; - unsigned int poll_state = HSM_ST_UNKNOWN; - unsigned int reg_state = HSM_ST_UNKNOWN; - - qc = ata_qc_from_tag(ap, ap->active_tag); - WARN_ON(qc == NULL); - - switch (ap->hsm_task_state) { - case HSM_ST: - case HSM_ST_POLL: - poll_state = HSM_ST_POLL; - reg_state = HSM_ST; - break; - case HSM_ST_LAST: - case HSM_ST_LAST_POLL: - poll_state = HSM_ST_LAST_POLL; - reg_state = HSM_ST_LAST; - break; - default: - BUG(); - break; - } - - status = ata_chk_status(ap); - if (status & ATA_BUSY) { - if (time_after(jiffies, ap->pio_task_timeout)) { - qc->err_mask |= AC_ERR_TIMEOUT; - ap->hsm_task_state = HSM_ST_TMOUT; - return 0; - } - ap->hsm_task_state = poll_state; - return ATA_SHORT_PAUSE; - } - - ap->hsm_task_state = reg_state; - return 0; -} - -/** - * ata_pio_complete - check if drive is busy or idle - * @ap: the target ata_port - * - * LOCKING: - * None. (executing in kernel thread context) - * - * RETURNS: - * Non-zero if qc completed, zero otherwise. - */ - -static int ata_pio_complete (struct ata_port *ap) -{ - struct ata_queued_cmd *qc; - u8 drv_stat; - - /* - * This is purely heuristic. This is a fast path. Sometimes when - * we enter, BSY will be cleared in a chk-status or two. If not, - * the drive is probably seeking or something. Snooze for a couple - * msecs, then chk-status again. If still busy, fall back to - * HSM_ST_POLL state. 
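ata_check_atapi_dma() above gains an extra reason to refuse DMA: a port that only polls in HSM_ST_LAST (ATA_FLAG_PIO_POLLING) cannot service the CDB interrupt of an ATA_DFLAG_CDB_INTR device, so such commands fall back to PIO. A sketch of that decision, with assumed flag values, follows.

#include <stdio.h>

#define FLAG_PIO_POLLING (1u << 0)   /* assumed port flag value */
#define DFLAG_CDB_INTR   (1u << 1)   /* assumed device flag value */

static int atapi_must_use_pio(unsigned int port_flags, unsigned int dev_flags,
                              int lldd_says_no_dma)
{
        if (lldd_says_no_dma)
                return 1;
        /* polling-only port + CDB-interrupting device: DMA cannot be used */
        if ((port_flags & FLAG_PIO_POLLING) && (dev_flags & DFLAG_CDB_INTR))
                return 1;
        return 0;
}

int main(void)
{
        printf("force PIO? %d\n",
               atapi_must_use_pio(FLAG_PIO_POLLING, DFLAG_CDB_INTR, 0));
        return 0;
}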
- */ - drv_stat = ata_busy_wait(ap, ATA_BUSY, 10); - if (drv_stat & ATA_BUSY) { - msleep(2); - drv_stat = ata_busy_wait(ap, ATA_BUSY, 10); - if (drv_stat & ATA_BUSY) { - ap->hsm_task_state = HSM_ST_LAST_POLL; - ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; - return 0; - } - } - - qc = ata_qc_from_tag(ap, ap->active_tag); - WARN_ON(qc == NULL); - - drv_stat = ata_wait_idle(ap); - if (!ata_ok(drv_stat)) { - qc->err_mask |= __ac_err_mask(drv_stat); - ap->hsm_task_state = HSM_ST_ERR; - return 0; - } - - ap->hsm_task_state = HSM_ST_IDLE; - - WARN_ON(qc->err_mask); - ata_poll_qc_complete(qc); - - /* another command may start at this point */ - - return 1; -} - - -/** * swap_buf_le16 - swap halves of 16-bit words in place * @buf: Buffer to swap * @buf_words: Number of 16-bit words in buffer. @@ -3291,7 +3527,7 @@ void swap_buf_le16(u16 *buf, unsigned in /** * ata_mmio_data_xfer - Transfer data by MMIO - * @ap: port to read/write + * @dev: device for this I/O * @buf: data buffer * @buflen: buffer length * @write_data: read/write @@ -3302,9 +3538,10 @@ void swap_buf_le16(u16 *buf, unsigned in * Inherited from caller. */ -static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, - unsigned int buflen, int write_data) +void ata_mmio_data_xfer(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data) { + struct ata_port *ap = adev->ap; unsigned int i; unsigned int words = buflen >> 1; u16 *buf16 = (u16 *) buf; @@ -3336,7 +3573,7 @@ static void ata_mmio_data_xfer(struct at /** * ata_pio_data_xfer - Transfer data by PIO - * @ap: port to read/write + * @adev: device to target * @buf: data buffer * @buflen: buffer length * @write_data: read/write @@ -3347,9 +3584,10 @@ static void ata_mmio_data_xfer(struct at * Inherited from caller. */ -static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, - unsigned int buflen, int write_data) +void ata_pio_data_xfer(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data) { + struct ata_port *ap = adev->ap; unsigned int words = buflen >> 1; /* Transfer multiple of 2 bytes */ @@ -3374,38 +3612,29 @@ static void ata_pio_data_xfer(struct ata } /** - * ata_data_xfer - Transfer data from/to the data register. - * @ap: port to read/write + * ata_pio_data_xfer_noirq - Transfer data by PIO + * @adev: device to target * @buf: data buffer * @buflen: buffer length - * @do_write: read/write + * @write_data: read/write * - * Transfer data from/to the device data register. + * Transfer data from/to the device data register by PIO. Do the + * transfer with interrupts disabled. * * LOCKING: * Inherited from caller. */ -static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, - unsigned int buflen, int do_write) +void ata_pio_data_xfer_noirq(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data) { - /* Make the crap hardware pay the costs not the good stuff */ - if (unlikely(ap->flags & ATA_FLAG_IRQ_MASK)) { - unsigned long flags; - local_irq_save(flags); - if (ap->flags & ATA_FLAG_MMIO) - ata_mmio_data_xfer(ap, buf, buflen, do_write); - else - ata_pio_data_xfer(ap, buf, buflen, do_write); - local_irq_restore(flags); - } else { - if (ap->flags & ATA_FLAG_MMIO) - ata_mmio_data_xfer(ap, buf, buflen, do_write); - else - ata_pio_data_xfer(ap, buf, buflen, do_write); - } + unsigned long flags; + local_irq_save(flags); + ata_pio_data_xfer(adev, buf, buflen, write_data); + local_irq_restore(flags); } + /** * ata_pio_sector - Transfer ATA_SECT_SIZE (512 bytes) of data. 
* @qc: Command on going @@ -3435,7 +3664,24 @@ static void ata_pio_sector(struct ata_qu page = nth_page(page, (offset >> PAGE_SHIFT)); offset %= PAGE_SIZE; - buf = kmap(page) + offset; + DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); + + if (PageHighMem(page)) { + unsigned long flags; + + /* FIXME: use a bounce buffer */ + local_irq_save(flags); + buf = kmap_atomic(page, KM_IRQ0); + + /* do the actual data transfer */ + ap->ops->data_xfer(qc->dev, buf + offset, ATA_SECT_SIZE, do_write); + + kunmap_atomic(buf, KM_IRQ0); + local_irq_restore(flags); + } else { + buf = page_address(page); + ap->ops->data_xfer(qc->dev, buf + offset, ATA_SECT_SIZE, do_write); + } qc->cursect++; qc->cursg_ofs++; @@ -3444,14 +3690,68 @@ static void ata_pio_sector(struct ata_qu qc->cursg++; qc->cursg_ofs = 0; } +} - DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); +/** + * ata_pio_sectors - Transfer one or many 512-byte sectors. + * @qc: Command on going + * + * Transfer one or many ATA_SECT_SIZE of data from/to the + * ATA device for the DRQ request. + * + * LOCKING: + * Inherited from caller. + */ + +static void ata_pio_sectors(struct ata_queued_cmd *qc) +{ + if (is_multi_taskfile(&qc->tf)) { + /* READ/WRITE MULTIPLE */ + unsigned int nsect; + + WARN_ON(qc->dev->multi_count == 0); + + nsect = min(qc->nsect - qc->cursect, qc->dev->multi_count); + while (nsect--) + ata_pio_sector(qc); + } else + ata_pio_sector(qc); +} + +/** + * atapi_send_cdb - Write CDB bytes to hardware + * @ap: Port to which ATAPI device is attached. + * @qc: Taskfile currently active + * + * When device has indicated its readiness to accept + * a CDB, this function is called. Send the CDB. + * + * LOCKING: + * caller. + */ + +static void atapi_send_cdb(struct ata_port *ap, struct ata_queued_cmd *qc) +{ + /* send SCSI cdb */ + DPRINTK("send cdb\n"); + WARN_ON(qc->dev->cdb_len < 12); - /* do the actual data transfer */ - do_write = (qc->tf.flags & ATA_TFLAG_WRITE); - ata_data_xfer(ap, buf, ATA_SECT_SIZE, do_write); + ap->ops->data_xfer(qc->dev, qc->cdb, qc->dev->cdb_len, 1); + ata_altstatus(ap); /* flush */ - kunmap(page); + switch (qc->tf.protocol) { + case ATA_PROT_ATAPI: + ap->hsm_task_state = HSM_ST; + break; + case ATA_PROT_ATAPI_NODATA: + ap->hsm_task_state = HSM_ST_LAST; + break; + case ATA_PROT_ATAPI_DMA: + ap->hsm_task_state = HSM_ST_LAST; + /* initiate bmdma */ + ap->ops->bmdma_start(qc); + break; + } } /** @@ -3492,11 +3792,11 @@ next_sg: unsigned int i; if (words) /* warning if bytes > 1 */ - printk(KERN_WARNING "ata%u: %u bytes trailing data\n", - ap->id, bytes); + ata_dev_printk(qc->dev, KERN_WARNING, + "%u bytes trailing data\n", bytes); for (i = 0; i < words; i++) - ata_data_xfer(ap, (unsigned char*)pad_buf, 2, do_write); + ap->ops->data_xfer(qc->dev, (unsigned char*)pad_buf, 2, do_write); ap->hsm_task_state = HSM_ST_LAST; return; @@ -3517,7 +3817,24 @@ next_sg: /* don't cross page boundaries */ count = min(count, (unsigned int)PAGE_SIZE - offset); - buf = kmap(page) + offset; + DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? 
"write" : "read"); + + if (PageHighMem(page)) { + unsigned long flags; + + /* FIXME: use bounce buffer */ + local_irq_save(flags); + buf = kmap_atomic(page, KM_IRQ0); + + /* do the actual data transfer */ + ap->ops->data_xfer(qc->dev, buf + offset, count, do_write); + + kunmap_atomic(buf, KM_IRQ0); + local_irq_restore(flags); + } else { + buf = page_address(page); + ap->ops->data_xfer(qc->dev, buf + offset, count, do_write); + } bytes -= count; qc->curbytes += count; @@ -3528,13 +3845,6 @@ next_sg: qc->cursg_ofs = 0; } - DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); - - /* do the actual data transfer */ - ata_data_xfer(ap, buf, count, do_write); - - kunmap(page); - if (bytes) goto next_sg; } @@ -3556,10 +3866,16 @@ static void atapi_pio_bytes(struct ata_q unsigned int ireason, bc_lo, bc_hi, bytes; int i_write, do_write = (qc->tf.flags & ATA_TFLAG_WRITE) ? 1 : 0; - ap->ops->tf_read(ap, &qc->tf); - ireason = qc->tf.nsect; - bc_lo = qc->tf.lbam; - bc_hi = qc->tf.lbah; + /* Abuse qc->result_tf for temp storage of intermediate TF + * here to save some kernel stack usage. + * For normal completion, qc->result_tf is not relevant. For + * error, qc->result_tf is later overwritten by ata_qc_complete(). + * So, the correctness of qc->result_tf is not affected. + */ + ap->ops->tf_read(ap, &qc->result_tf); + ireason = qc->result_tf.nsect; + bc_lo = qc->result_tf.lbam; + bc_hi = qc->result_tf.lbah; bytes = (bc_hi << 8) | bc_lo; /* shall be cleared to zero, indicating xfer of data */ @@ -3571,307 +3887,366 @@ static void atapi_pio_bytes(struct ata_q if (do_write != i_write) goto err_out; + VPRINTK("ata%u: xfering %d bytes\n", ap->id, bytes); + __atapi_pio_bytes(qc, bytes); return; err_out: - printk(KERN_INFO "ata%u: dev %u: ATAPI check failed\n", - ap->id, dev->devno); + ata_dev_printk(dev, KERN_INFO, "ATAPI check failed\n"); qc->err_mask |= AC_ERR_HSM; ap->hsm_task_state = HSM_ST_ERR; } /** - * ata_pio_block - start PIO on a block + * ata_hsm_ok_in_wq - Check if the qc can be handled in the workqueue. * @ap: the target ata_port + * @qc: qc on going * - * LOCKING: - * None. (executing in kernel thread context) + * RETURNS: + * 1 if ok in workqueue, 0 otherwise. */ -static void ata_pio_block(struct ata_port *ap) +static inline int ata_hsm_ok_in_wq(struct ata_port *ap, struct ata_queued_cmd *qc) { - struct ata_queued_cmd *qc; - u8 status; + if (qc->tf.flags & ATA_TFLAG_POLLING) + return 1; - /* - * This is purely heuristic. This is a fast path. - * Sometimes when we enter, BSY will be cleared in - * a chk-status or two. If not, the drive is probably seeking - * or something. Snooze for a couple msecs, then - * chk-status again. If still busy, fall back to - * HSM_ST_POLL state. 
- */ - status = ata_busy_wait(ap, ATA_BUSY, 5); - if (status & ATA_BUSY) { - msleep(2); - status = ata_busy_wait(ap, ATA_BUSY, 10); - if (status & ATA_BUSY) { - ap->hsm_task_state = HSM_ST_POLL; - ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; - return; - } + if (ap->hsm_task_state == HSM_ST_FIRST) { + if (qc->tf.protocol == ATA_PROT_PIO && + (qc->tf.flags & ATA_TFLAG_WRITE)) + return 1; + + if (is_atapi_taskfile(&qc->tf) && + !(qc->dev->flags & ATA_DFLAG_CDB_INTR)) + return 1; } - qc = ata_qc_from_tag(ap, ap->active_tag); - WARN_ON(qc == NULL); + return 0; +} - /* check error */ - if (status & (ATA_ERR | ATA_DF)) { - qc->err_mask |= AC_ERR_DEV; - ap->hsm_task_state = HSM_ST_ERR; - return; - } +/** + * ata_hsm_qc_complete - finish a qc running on standard HSM + * @qc: Command to complete + * @in_wq: 1 if called from workqueue, 0 otherwise + * + * Finish @qc which is running on standard HSM. + * + * LOCKING: + * If @in_wq is zero, spin_lock_irqsave(host_set lock). + * Otherwise, none on entry and grabs host lock. + */ +static void ata_hsm_qc_complete(struct ata_queued_cmd *qc, int in_wq) +{ + struct ata_port *ap = qc->ap; + unsigned long flags; - /* transfer data if any */ - if (is_atapi_taskfile(&qc->tf)) { - /* DRQ=0 means no more data to transfer */ - if ((status & ATA_DRQ) == 0) { - ap->hsm_task_state = HSM_ST_LAST; - return; - } + if (ap->ops->error_handler) { + if (in_wq) { + spin_lock_irqsave(&ap->host_set->lock, flags); - atapi_pio_bytes(qc); - } else { - /* handle BSY=0, DRQ=0 as error */ - if ((status & ATA_DRQ) == 0) { - qc->err_mask |= AC_ERR_HSM; - ap->hsm_task_state = HSM_ST_ERR; - return; - } + /* EH might have kicked in while host_set lock + * is released. + */ + qc = ata_qc_from_tag(ap, qc->tag); + if (qc) { + if (likely(!(qc->err_mask & AC_ERR_HSM))) { + ata_irq_on(ap); + ata_qc_complete(qc); + } else + ata_port_freeze(ap); + } - ata_pio_sector(qc); + spin_unlock_irqrestore(&ap->host_set->lock, flags); + } else { + if (likely(!(qc->err_mask & AC_ERR_HSM))) + ata_qc_complete(qc); + else + ata_port_freeze(ap); + } + } else { + if (in_wq) { + spin_lock_irqsave(&ap->host_set->lock, flags); + ata_irq_on(ap); + ata_qc_complete(qc); + spin_unlock_irqrestore(&ap->host_set->lock, flags); + } else + ata_qc_complete(qc); } ata_altstatus(ap); /* flush */ } -static void ata_pio_error(struct ata_port *ap) -{ - struct ata_queued_cmd *qc; +/** + * ata_hsm_move - move the HSM to the next state. + * @ap: the target ata_port + * @qc: qc on going + * @status: current device status + * @in_wq: 1 if called from workqueue, 0 otherwise + * + * RETURNS: + * 1 when poll next status needed, 0 otherwise. + */ - qc = ata_qc_from_tag(ap, ap->active_tag); - WARN_ON(qc == NULL); +static int ata_hsm_move(struct ata_port *ap, struct ata_queued_cmd *qc, + u8 status, int in_wq) +{ + unsigned long flags = 0; + int poll_next; - if (qc->tf.command != ATA_CMD_PACKET) - printk(KERN_WARNING "ata%u: PIO error\n", ap->id); + WARN_ON((qc->flags & ATA_QCFLAG_ACTIVE) == 0); - /* make sure qc->err_mask is available to - * know what's wrong and recover + /* Make sure ata_qc_issue_prot() does not throw things + * like DMA polling into the workqueue. Notice that + * in_wq is not equivalent to (qc->tf.flags & ATA_TFLAG_POLLING). 
*/ - WARN_ON(qc->err_mask == 0); - - ap->hsm_task_state = HSM_ST_IDLE; - - ata_poll_qc_complete(qc); -} - -static void ata_pio_task(void *_data) -{ - struct ata_port *ap = _data; - unsigned long timeout; - int qc_completed; + WARN_ON(in_wq != ata_hsm_ok_in_wq(ap, qc)); fsm_start: - timeout = 0; - qc_completed = 0; + DPRINTK("ata%u: protocol %d task_state %d (dev_stat 0x%X)\n", + ap->id, qc->tf.protocol, ap->hsm_task_state, status); switch (ap->hsm_task_state) { - case HSM_ST_IDLE: - return; + case HSM_ST_FIRST: + /* Send first data block or PACKET CDB */ - case HSM_ST: - ata_pio_block(ap); - break; - - case HSM_ST_LAST: - qc_completed = ata_pio_complete(ap); - break; - - case HSM_ST_POLL: - case HSM_ST_LAST_POLL: - timeout = ata_pio_poll(ap); - break; + /* If polling, we will stay in the work queue after + * sending the data. Otherwise, interrupt handler + * takes over after sending the data. + */ + poll_next = (qc->tf.flags & ATA_TFLAG_POLLING); - case HSM_ST_TMOUT: - case HSM_ST_ERR: - ata_pio_error(ap); - return; - } + /* check device status */ + if (unlikely((status & ATA_DRQ) == 0)) { + /* handle BSY=0, DRQ=0 as error */ + if (likely(status & (ATA_ERR | ATA_DF))) + /* device stops HSM for abort/error */ + qc->err_mask |= AC_ERR_DEV; + else + /* HSM violation. Let EH handle this */ + qc->err_mask |= AC_ERR_HSM; - if (timeout) - ata_port_queue_task(ap, ata_pio_task, ap, timeout); - else if (!qc_completed) - goto fsm_start; -} + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } -/** - * atapi_packet_task - Write CDB bytes to hardware - * @_data: Port to which ATAPI device is attached. - * - * When device has indicated its readiness to accept - * a CDB, this function is called. Send the CDB. - * If DMA is to be performed, exit immediately. - * Otherwise, we are in polling mode, so poll - * status under operation succeeds or fails. - * - * LOCKING: - * Kernel thread context (may sleep) - */ + /* Device should not ask for data transfer (DRQ=1) + * when it finds something wrong. + * We ignore DRQ here and stop the HSM by + * changing hsm_task_state to HSM_ST_ERR and + * let the EH abort the command or reset the device. + */ + if (unlikely(status & (ATA_ERR | ATA_DF))) { + printk(KERN_WARNING "ata%d: DRQ=1 with device error, dev_stat 0x%X\n", + ap->id, status); + qc->err_mask |= AC_ERR_HSM; + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } -static void atapi_packet_task(void *_data) -{ - struct ata_port *ap = _data; - struct ata_queued_cmd *qc; - u8 status; + /* Send the CDB (atapi) or the first data block (ata pio out). + * During the state transition, interrupt handler shouldn't + * be invoked before the data transfer is complete and + * hsm_task_state is changed. Hence, the following locking. + */ + if (in_wq) + spin_lock_irqsave(&ap->host_set->lock, flags); - qc = ata_qc_from_tag(ap, ap->active_tag); - WARN_ON(qc == NULL); - WARN_ON(!(qc->flags & ATA_QCFLAG_ACTIVE)); + if (qc->tf.protocol == ATA_PROT_PIO) { + /* PIO data out protocol. + * send first data block. + */ - /* sleep-wait for BSY to clear */ - DPRINTK("busy wait\n"); - if (ata_busy_sleep(ap, ATA_TMOUT_CDB_QUICK, ATA_TMOUT_CDB)) { - qc->err_mask |= AC_ERR_TIMEOUT; - goto err_out; - } + /* ata_pio_sectors() might change the state + * to HSM_ST_LAST. so, the state is changed here + * before ata_pio_sectors(). 
+ */ + ap->hsm_task_state = HSM_ST; + ata_pio_sectors(qc); + ata_altstatus(ap); /* flush */ + } else + /* send CDB */ + atapi_send_cdb(ap, qc); - /* make sure DRQ is set */ - status = ata_chk_status(ap); - if ((status & (ATA_BUSY | ATA_DRQ)) != ATA_DRQ) { - qc->err_mask |= AC_ERR_HSM; - goto err_out; - } + if (in_wq) + spin_unlock_irqrestore(&ap->host_set->lock, flags); - /* send SCSI cdb */ - DPRINTK("send cdb\n"); - WARN_ON(qc->dev->cdb_len < 12); + /* if polling, ata_pio_task() handles the rest. + * otherwise, interrupt handler takes over from here. + */ + break; - if (qc->tf.protocol == ATA_PROT_ATAPI_DMA || - qc->tf.protocol == ATA_PROT_ATAPI_NODATA) { - unsigned long flags; + case HSM_ST: + /* complete command or read/write the data register */ + if (qc->tf.protocol == ATA_PROT_ATAPI) { + /* ATAPI PIO protocol */ + if ((status & ATA_DRQ) == 0) { + /* No more data to transfer or device error. + * Device error will be tagged in HSM_ST_LAST. + */ + ap->hsm_task_state = HSM_ST_LAST; + goto fsm_start; + } - /* Once we're done issuing command and kicking bmdma, - * irq handler takes over. To not lose irq, we need - * to clear NOINTR flag before sending cdb, but - * interrupt handler shouldn't be invoked before we're - * finished. Hence, the following locking. - */ - spin_lock_irqsave(&ap->host_set->lock, flags); - ap->flags &= ~ATA_FLAG_NOINTR; - ata_data_xfer(ap, qc->cdb, qc->dev->cdb_len, 1); - ata_altstatus(ap); /* flush */ + /* Device should not ask for data transfer (DRQ=1) + * when it finds something wrong. + * We ignore DRQ here and stop the HSM by + * changing hsm_task_state to HSM_ST_ERR and + * let the EH abort the command or reset the device. + */ + if (unlikely(status & (ATA_ERR | ATA_DF))) { + printk(KERN_WARNING "ata%d: DRQ=1 with device error, dev_stat 0x%X\n", + ap->id, status); + qc->err_mask |= AC_ERR_HSM; + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } - if (qc->tf.protocol == ATA_PROT_ATAPI_DMA) - ap->ops->bmdma_start(qc); /* initiate bmdma */ - spin_unlock_irqrestore(&ap->host_set->lock, flags); - } else { - ata_data_xfer(ap, qc->cdb, qc->dev->cdb_len, 1); - ata_altstatus(ap); /* flush */ + atapi_pio_bytes(qc); - /* PIO commands are handled by polling */ - ap->hsm_task_state = HSM_ST; - ata_port_queue_task(ap, ata_pio_task, ap, 0); - } + if (unlikely(ap->hsm_task_state == HSM_ST_ERR)) + /* bad ireason reported by device */ + goto fsm_start; - return; + } else { + /* ATA PIO protocol */ + if (unlikely((status & ATA_DRQ) == 0)) { + /* handle BSY=0, DRQ=0 as error */ + if (likely(status & (ATA_ERR | ATA_DF))) + /* device stops HSM for abort/error */ + qc->err_mask |= AC_ERR_DEV; + else + /* HSM violation. Let EH handle this */ + qc->err_mask |= AC_ERR_HSM; -err_out: - ata_poll_qc_complete(qc); -} + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } -/** - * ata_qc_timeout - Handle timeout of queued command - * @qc: Command that timed out - * - * Some part of the kernel (currently, only the SCSI layer) - * has noticed that the active command on port @ap has not - * completed after a specified length of time. Handle this - * condition by disabling DMA (if necessary) and completing - * transactions, with error if necessary. - * - * This also handles the case of the "lost interrupt", where - * for some reason (possibly hardware bug, possibly driver bug) - * an interrupt was not delivered to the driver, even though the - * transaction completed successfully. 
- * - * LOCKING: - * Inherited from SCSI layer (none, can sleep) - */ + /* For PIO reads, some devices may ask for + * data transfer (DRQ=1) alone with ERR=1. + * We respect DRQ here and transfer one + * block of junk data before changing the + * hsm_task_state to HSM_ST_ERR. + * + * For PIO writes, ERR=1 DRQ=1 doesn't make + * sense since the data block has been + * transferred to the device. + */ + if (unlikely(status & (ATA_ERR | ATA_DF))) { + /* data might be corrputed */ + qc->err_mask |= AC_ERR_DEV; + + if (!(qc->tf.flags & ATA_TFLAG_WRITE)) { + ata_pio_sectors(qc); + ata_altstatus(ap); + status = ata_wait_idle(ap); + } + + if (status & (ATA_BUSY | ATA_DRQ)) + qc->err_mask |= AC_ERR_HSM; + + /* ata_pio_sectors() might change the + * state to HSM_ST_LAST. so, the state + * is changed after ata_pio_sectors(). + */ + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } -static void ata_qc_timeout(struct ata_queued_cmd *qc) -{ - struct ata_port *ap = qc->ap; - struct ata_host_set *host_set = ap->host_set; - u8 host_stat = 0, drv_stat; - unsigned long flags; + ata_pio_sectors(qc); - DPRINTK("ENTER\n"); + if (ap->hsm_task_state == HSM_ST_LAST && + (!(qc->tf.flags & ATA_TFLAG_WRITE))) { + /* all data read */ + ata_altstatus(ap); + status = ata_wait_idle(ap); + goto fsm_start; + } + } - ap->hsm_task_state = HSM_ST_IDLE; + ata_altstatus(ap); /* flush */ + poll_next = 1; + break; - spin_lock_irqsave(&host_set->lock, flags); + case HSM_ST_LAST: + if (unlikely(!ata_ok(status))) { + qc->err_mask |= __ac_err_mask(status); + ap->hsm_task_state = HSM_ST_ERR; + goto fsm_start; + } - switch (qc->tf.protocol) { + /* no more data to transfer */ + DPRINTK("ata%u: dev %u command complete, drv_stat 0x%x\n", + ap->id, qc->dev->devno, status); - case ATA_PROT_DMA: - case ATA_PROT_ATAPI_DMA: - host_stat = ap->ops->bmdma_status(ap); + WARN_ON(qc->err_mask); - /* before we do anything else, clear DMA-Start bit */ - ap->ops->bmdma_stop(qc); + ap->hsm_task_state = HSM_ST_IDLE; - /* fall through */ + /* complete taskfile transaction */ + ata_hsm_qc_complete(qc, in_wq); - default: - ata_altstatus(ap); - drv_stat = ata_chk_status(ap); + poll_next = 0; + break; - /* ack bmdma irq events */ - ap->ops->irq_clear(ap); + case HSM_ST_ERR: + /* make sure qc->err_mask is available to + * know what's wrong and recover + */ + WARN_ON(qc->err_mask == 0); - printk(KERN_ERR "ata%u: command 0x%x timeout, stat 0x%x host_stat 0x%x\n", - ap->id, qc->tf.command, drv_stat, host_stat); + ap->hsm_task_state = HSM_ST_IDLE; /* complete taskfile transaction */ - qc->err_mask |= ac_err_mask(drv_stat); + ata_hsm_qc_complete(qc, in_wq); + + poll_next = 0; break; + default: + poll_next = 0; + BUG(); } - spin_unlock_irqrestore(&host_set->lock, flags); - - ata_eh_qc_complete(qc); - - DPRINTK("EXIT\n"); + return poll_next; } -/** - * ata_eng_timeout - Handle timeout of queued command - * @ap: Port on which timed-out command is active - * - * Some part of the kernel (currently, only the SCSI layer) - * has noticed that the active command on port @ap has not - * completed after a specified length of time. Handle this - * condition by disabling DMA (if necessary) and completing - * transactions, with error if necessary. - * - * This also handles the case of the "lost interrupt", where - * for some reason (possibly hardware bug, possibly driver bug) - * an interrupt was not delivered to the driver, even though the - * transaction completed successfully. 
- * - * LOCKING: - * Inherited from SCSI layer (none, can sleep) - */ - -void ata_eng_timeout(struct ata_port *ap) +static void ata_pio_task(void *_data) { - DPRINTK("ENTER\n"); + struct ata_queued_cmd *qc = _data; + struct ata_port *ap = qc->ap; + u8 status; + int poll_next; - ata_qc_timeout(ata_qc_from_tag(ap, ap->active_tag)); +fsm_start: + WARN_ON(ap->hsm_task_state == HSM_ST_IDLE); - DPRINTK("EXIT\n"); + /* + * This is purely heuristic. This is a fast path. + * Sometimes when we enter, BSY will be cleared in + * a chk-status or two. If not, the drive is probably seeking + * or something. Snooze for a couple msecs, then + * chk-status again. If still busy, queue delayed work. + */ + status = ata_busy_wait(ap, ATA_BUSY, 5); + if (status & ATA_BUSY) { + msleep(2); + status = ata_busy_wait(ap, ATA_BUSY, 10); + if (status & ATA_BUSY) { + ata_port_queue_task(ap, ata_pio_task, qc, ATA_SHORT_PAUSE); + return; + } + } + + /* move the HSM */ + poll_next = ata_hsm_move(ap, qc, status, 1); + + /* another command or interrupt handler + * may be running at this point. + */ + if (poll_next) + goto fsm_start; } /** @@ -3888,9 +4263,14 @@ static struct ata_queued_cmd *ata_qc_new struct ata_queued_cmd *qc = NULL; unsigned int i; - for (i = 0; i < ATA_MAX_QUEUE; i++) - if (!test_and_set_bit(i, &ap->qactive)) { - qc = ata_qc_from_tag(ap, i); + /* no command while frozen */ + if (unlikely(ap->flags & ATA_FLAG_FROZEN)) + return NULL; + + /* the last tag is reserved for internal command. */ + for (i = 0; i < ATA_MAX_QUEUE - 1; i++) + if (!test_and_set_bit(i, &ap->qc_allocated)) { + qc = __ata_qc_from_tag(ap, i); break; } @@ -3902,16 +4282,15 @@ static struct ata_queued_cmd *ata_qc_new /** * ata_qc_new_init - Request an available ATA command, and initialize it - * @ap: Port associated with device @dev * @dev: Device from whom we request an available command structure * * LOCKING: * None. */ -struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, - struct ata_device *dev) +struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev) { + struct ata_port *ap = dev->ap; struct ata_queued_cmd *qc; qc = ata_qc_new(ap); @@ -3946,36 +4325,153 @@ void ata_qc_free(struct ata_queued_cmd * qc->flags = 0; tag = qc->tag; if (likely(ata_tag_valid(tag))) { - if (tag == ap->active_tag) - ap->active_tag = ATA_TAG_POISON; qc->tag = ATA_TAG_POISON; - clear_bit(tag, &ap->qactive); + clear_bit(tag, &ap->qc_allocated); } } void __ata_qc_complete(struct ata_queued_cmd *qc) { + struct ata_port *ap = qc->ap; + WARN_ON(qc == NULL); /* ata_qc_from_tag _might_ return NULL */ WARN_ON(!(qc->flags & ATA_QCFLAG_ACTIVE)); if (likely(qc->flags & ATA_QCFLAG_DMAMAP)) ata_sg_clean(qc); + /* command should be marked inactive atomically with qc completion */ + if (qc->tf.protocol == ATA_PROT_NCQ) + ap->sactive &= ~(1 << qc->tag); + else + ap->active_tag = ATA_TAG_POISON; + /* atapi: mark qc as inactive to prevent the interrupt handler * from completing the command twice later, before the error handler * is called. (when rc != 0 and atapi request sense is needed) */ qc->flags &= ~ATA_QCFLAG_ACTIVE; + ap->qc_active &= ~(1 << qc->tag); /* call completion callback */ qc->complete_fn(qc); } +/** + * ata_qc_complete - Complete an active ATA command + * @qc: Command to complete + * @err_mask: ATA Status register contents + * + * Indicate to the mid and upper layers that an ATA + * command has completed, with either an ok or not-ok status. 
+ * + * LOCKING: + * spin_lock_irqsave(host_set lock) + */ +void ata_qc_complete(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + + /* XXX: New EH and old EH use different mechanisms to + * synchronize EH with regular execution path. + * + * In new EH, a failed qc is marked with ATA_QCFLAG_FAILED. + * Normal execution path is responsible for not accessing a + * failed qc. libata core enforces the rule by returning NULL + * from ata_qc_from_tag() for failed qcs. + * + * Old EH depends on ata_qc_complete() nullifying completion + * requests if ATA_QCFLAG_EH_SCHEDULED is set. Old EH does + * not synchronize with interrupt handler. Only PIO task is + * taken care of. + */ + if (ap->ops->error_handler) { + WARN_ON(ap->flags & ATA_FLAG_FROZEN); + + if (unlikely(qc->err_mask)) + qc->flags |= ATA_QCFLAG_FAILED; + + if (unlikely(qc->flags & ATA_QCFLAG_FAILED)) { + if (!ata_tag_internal(qc->tag)) { + /* always fill result TF for failed qc */ + ap->ops->tf_read(ap, &qc->result_tf); + ata_qc_schedule_eh(qc); + return; + } + } + + /* read result TF if requested */ + if (qc->flags & ATA_QCFLAG_RESULT_TF) + ap->ops->tf_read(ap, &qc->result_tf); + + __ata_qc_complete(qc); + } else { + if (qc->flags & ATA_QCFLAG_EH_SCHEDULED) + return; + + /* read result TF if failed or requested */ + if (qc->err_mask || qc->flags & ATA_QCFLAG_RESULT_TF) + ap->ops->tf_read(ap, &qc->result_tf); + + __ata_qc_complete(qc); + } +} + +/** + * ata_qc_complete_multiple - Complete multiple qcs successfully + * @ap: port in question + * @qc_active: new qc_active mask + * @finish_qc: LLDD callback invoked before completing a qc + * + * Complete in-flight commands. This functions is meant to be + * called from low-level driver's interrupt routine to complete + * requests normally. ap->qc_active and @qc_active is compared + * and commands are completed accordingly. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + * + * RETURNS: + * Number of completed commands on success, -errno otherwise. + */ +int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active, + void (*finish_qc)(struct ata_queued_cmd *)) +{ + int nr_done = 0; + u32 done_mask; + int i; + + done_mask = ap->qc_active ^ qc_active; + + if (unlikely(done_mask & qc_active)) { + ata_port_printk(ap, KERN_ERR, "illegal qc_active transition " + "(%08x->%08x)\n", ap->qc_active, qc_active); + return -EINVAL; + } + + for (i = 0; i < ATA_MAX_QUEUE; i++) { + struct ata_queued_cmd *qc; + + if (!(done_mask & (1 << i))) + continue; + + if ((qc = ata_qc_from_tag(ap, i))) { + if (finish_qc) + finish_qc(qc); + ata_qc_complete(qc); + nr_done++; + } + } + + return nr_done; +} + static inline int ata_should_dma_map(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; switch (qc->tf.protocol) { + case ATA_PROT_NCQ: case ATA_PROT_DMA: case ATA_PROT_ATAPI_DMA: return 1; @@ -4010,8 +4506,22 @@ void ata_qc_issue(struct ata_queued_cmd { struct ata_port *ap = qc->ap; - qc->ap->active_tag = qc->tag; + /* Make sure only one non-NCQ command is outstanding. The + * check is skipped for old EH because it reuses active qc to + * request ATAPI sense. 
+ */ + WARN_ON(ap->ops->error_handler && ata_tag_valid(ap->active_tag)); + + if (qc->tf.protocol == ATA_PROT_NCQ) { + WARN_ON(ap->sactive & (1 << qc->tag)); + ap->sactive |= 1 << qc->tag; + } else { + WARN_ON(ap->sactive); + ap->active_tag = qc->tag; + } + qc->flags |= ATA_QCFLAG_ACTIVE; + ap->qc_active |= 1 << qc->tag; if (ata_should_dma_map(qc)) { if (qc->flags & ATA_QCFLAG_SG) { @@ -4061,43 +4571,105 @@ unsigned int ata_qc_issue_prot(struct at { struct ata_port *ap = qc->ap; + /* Use polling pio if the LLD doesn't handle + * interrupt driven pio and atapi CDB interrupt. + */ + if (ap->flags & ATA_FLAG_PIO_POLLING) { + switch (qc->tf.protocol) { + case ATA_PROT_PIO: + case ATA_PROT_ATAPI: + case ATA_PROT_ATAPI_NODATA: + qc->tf.flags |= ATA_TFLAG_POLLING; + break; + case ATA_PROT_ATAPI_DMA: + if (qc->dev->flags & ATA_DFLAG_CDB_INTR) + /* see ata_check_atapi_dma() */ + BUG(); + break; + default: + break; + } + } + + /* select the device */ ata_dev_select(ap, qc->dev->devno, 1, 0); + /* start the command */ switch (qc->tf.protocol) { case ATA_PROT_NODATA: + if (qc->tf.flags & ATA_TFLAG_POLLING) + ata_qc_set_polling(qc); + ata_tf_to_host(ap, &qc->tf); + ap->hsm_task_state = HSM_ST_LAST; + + if (qc->tf.flags & ATA_TFLAG_POLLING) + ata_port_queue_task(ap, ata_pio_task, qc, 0); + break; case ATA_PROT_DMA: + WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING); + ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ ap->ops->bmdma_setup(qc); /* set up bmdma */ ap->ops->bmdma_start(qc); /* initiate bmdma */ + ap->hsm_task_state = HSM_ST_LAST; break; - case ATA_PROT_PIO: /* load tf registers, initiate polling pio */ - ata_qc_set_polling(qc); - ata_tf_to_host(ap, &qc->tf); - ap->hsm_task_state = HSM_ST; - ata_port_queue_task(ap, ata_pio_task, ap, 0); - break; + case ATA_PROT_PIO: + if (qc->tf.flags & ATA_TFLAG_POLLING) + ata_qc_set_polling(qc); - case ATA_PROT_ATAPI: - ata_qc_set_polling(qc); ata_tf_to_host(ap, &qc->tf); - ata_port_queue_task(ap, atapi_packet_task, ap, 0); + + if (qc->tf.flags & ATA_TFLAG_WRITE) { + /* PIO data out protocol */ + ap->hsm_task_state = HSM_ST_FIRST; + ata_port_queue_task(ap, ata_pio_task, qc, 0); + + /* always send first data block using + * the ata_pio_task() codepath. + */ + } else { + /* PIO data in protocol */ + ap->hsm_task_state = HSM_ST; + + if (qc->tf.flags & ATA_TFLAG_POLLING) + ata_port_queue_task(ap, ata_pio_task, qc, 0); + + /* if polling, ata_pio_task() handles the rest. + * otherwise, interrupt handler takes over from here. 
+ */ + } + break; + case ATA_PROT_ATAPI: case ATA_PROT_ATAPI_NODATA: - ap->flags |= ATA_FLAG_NOINTR; + if (qc->tf.flags & ATA_TFLAG_POLLING) + ata_qc_set_polling(qc); + ata_tf_to_host(ap, &qc->tf); - ata_port_queue_task(ap, atapi_packet_task, ap, 0); + + ap->hsm_task_state = HSM_ST_FIRST; + + /* send cdb by polling if no cdb interrupt */ + if ((!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) || + (qc->tf.flags & ATA_TFLAG_POLLING)) + ata_port_queue_task(ap, ata_pio_task, qc, 0); break; case ATA_PROT_ATAPI_DMA: - ap->flags |= ATA_FLAG_NOINTR; + WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING); + ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ ap->ops->bmdma_setup(qc); /* set up bmdma */ - ata_port_queue_task(ap, atapi_packet_task, ap, 0); + ap->hsm_task_state = HSM_ST_FIRST; + + /* send cdb by polling if no cdb interrupt */ + if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) + ata_port_queue_task(ap, ata_pio_task, qc, 0); break; default: @@ -4127,52 +4699,66 @@ unsigned int ata_qc_issue_prot(struct at inline unsigned int ata_host_intr (struct ata_port *ap, struct ata_queued_cmd *qc) { - u8 status, host_stat; + u8 status, host_stat = 0; - switch (qc->tf.protocol) { + VPRINTK("ata%u: protocol %d task_state %d\n", + ap->id, qc->tf.protocol, ap->hsm_task_state); - case ATA_PROT_DMA: - case ATA_PROT_ATAPI_DMA: - case ATA_PROT_ATAPI: - /* check status of DMA engine */ - host_stat = ap->ops->bmdma_status(ap); - VPRINTK("ata%u: host_stat 0x%X\n", ap->id, host_stat); - - /* if it's not our irq... */ - if (!(host_stat & ATA_DMA_INTR)) - goto idle_irq; - - /* before we do anything else, clear DMA-Start bit */ - ap->ops->bmdma_stop(qc); - - /* fall through */ - - case ATA_PROT_ATAPI_NODATA: - case ATA_PROT_NODATA: - /* check altstatus */ - status = ata_altstatus(ap); - if (status & ATA_BUSY) - goto idle_irq; + /* Check whether we are expecting interrupt in this state */ + switch (ap->hsm_task_state) { + case HSM_ST_FIRST: + /* Some pre-ATAPI-4 devices assert INTRQ + * at this state when ready to receive CDB. + */ - /* check main status, clearing INTRQ */ - status = ata_chk_status(ap); - if (unlikely(status & ATA_BUSY)) + /* Check the ATA_DFLAG_CDB_INTR flag is enough here. + * The flag was turned on only for atapi devices. + * No need to check is_atapi_taskfile(&qc->tf) again. + */ + if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) goto idle_irq; - DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n", - ap->id, qc->tf.protocol, status); - - /* ack bmdma irq events */ - ap->ops->irq_clear(ap); - - /* complete taskfile transaction */ - qc->err_mask |= ac_err_mask(status); - ata_qc_complete(qc); break; - + case HSM_ST_LAST: + if (qc->tf.protocol == ATA_PROT_DMA || + qc->tf.protocol == ATA_PROT_ATAPI_DMA) { + /* check status of DMA engine */ + host_stat = ap->ops->bmdma_status(ap); + VPRINTK("ata%u: host_stat 0x%X\n", ap->id, host_stat); + + /* if it's not our irq... 
*/ + if (!(host_stat & ATA_DMA_INTR)) + goto idle_irq; + + /* before we do anything else, clear DMA-Start bit */ + ap->ops->bmdma_stop(qc); + + if (unlikely(host_stat & ATA_DMA_ERR)) { + /* error when transfering data to/from memory */ + qc->err_mask |= AC_ERR_HOST_BUS; + ap->hsm_task_state = HSM_ST_ERR; + } + } + break; + case HSM_ST: + break; default: goto idle_irq; } + /* check altstatus */ + status = ata_altstatus(ap); + if (status & ATA_BUSY) + goto idle_irq; + + /* check main status, clearing INTRQ */ + status = ata_chk_status(ap); + if (unlikely(status & ATA_BUSY)) + goto idle_irq; + + /* ack bmdma irq events */ + ap->ops->irq_clear(ap); + + ata_hsm_move(ap, qc, status, 0); return 1; /* irq handled */ idle_irq: @@ -4181,7 +4767,7 @@ idle_irq: #ifdef ATA_IRQ_TRAP if ((ap->stats.idle_irq % 1000) == 0) { ata_irq_ack(ap, 0); /* debug trap */ - printk(KERN_WARNING "ata%d: irq trap\n", ap->id); + ata_port_printk(ap, KERN_WARNING, "irq trap\n"); return 1; } #endif @@ -4219,11 +4805,11 @@ irqreturn_t ata_interrupt (int irq, void ap = host_set->ports[i]; if (ap && - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { + !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN)) && + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING)) && (qc->flags & ATA_QCFLAG_ACTIVE)) handled |= ata_host_intr(ap, qc); } @@ -4234,32 +4820,168 @@ irqreturn_t ata_interrupt (int irq, void return IRQ_RETVAL(handled); } +/** + * sata_scr_valid - test whether SCRs are accessible + * @ap: ATA port to test SCR accessibility for + * + * Test whether SCRs are accessible for @ap. + * + * LOCKING: + * None. + * + * RETURNS: + * 1 if SCRs are accessible, 0 otherwise. + */ +int sata_scr_valid(struct ata_port *ap) +{ + return ap->cbl == ATA_CBL_SATA && ap->ops->scr_read; +} + +/** + * sata_scr_read - read SCR register of the specified port + * @ap: ATA port to read SCR for + * @reg: SCR to read + * @val: Place to store read value + * + * Read SCR register @reg of @ap into *@val. This function is + * guaranteed to succeed if the cable type of the port is SATA + * and the port implements ->scr_read. + * + * LOCKING: + * None. + * + * RETURNS: + * 0 on success, negative errno on failure. + */ +int sata_scr_read(struct ata_port *ap, int reg, u32 *val) +{ + if (sata_scr_valid(ap)) { + *val = ap->ops->scr_read(ap, reg); + return 0; + } + return -EOPNOTSUPP; +} + +/** + * sata_scr_write - write SCR register of the specified port + * @ap: ATA port to write SCR for + * @reg: SCR to write + * @val: value to write + * + * Write @val to SCR register @reg of @ap. This function is + * guaranteed to succeed if the cable type of the port is SATA + * and the port implements ->scr_read. + * + * LOCKING: + * None. + * + * RETURNS: + * 0 on success, negative errno on failure. + */ +int sata_scr_write(struct ata_port *ap, int reg, u32 val) +{ + if (sata_scr_valid(ap)) { + ap->ops->scr_write(ap, reg, val); + return 0; + } + return -EOPNOTSUPP; +} + +/** + * sata_scr_write_flush - write SCR register of the specified port and flush + * @ap: ATA port to write SCR for + * @reg: SCR to write + * @val: value to write + * + * This function is identical to sata_scr_write() except that this + * function performs flush after writing to the register. + * + * LOCKING: + * None. + * + * RETURNS: + * 0 on success, negative errno on failure. 
+ */ +int sata_scr_write_flush(struct ata_port *ap, int reg, u32 val) +{ + if (sata_scr_valid(ap)) { + ap->ops->scr_write(ap, reg, val); + ap->ops->scr_read(ap, reg); + return 0; + } + return -EOPNOTSUPP; +} + +/** + * ata_port_online - test whether the given port is online + * @ap: ATA port to test + * + * Test whether @ap is online. Note that this function returns 0 + * if online status of @ap cannot be obtained, so + * ata_port_online(ap) != !ata_port_offline(ap). + * + * LOCKING: + * None. + * + * RETURNS: + * 1 if the port online status is available and online. + */ +int ata_port_online(struct ata_port *ap) +{ + u32 sstatus; + + if (!sata_scr_read(ap, SCR_STATUS, &sstatus) && (sstatus & 0xf) == 0x3) + return 1; + return 0; +} + +/** + * ata_port_offline - test whether the given port is offline + * @ap: ATA port to test + * + * Test whether @ap is offline. Note that this function returns + * 0 if offline status of @ap cannot be obtained, so + * ata_port_online(ap) != !ata_port_offline(ap). + * + * LOCKING: + * None. + * + * RETURNS: + * 1 if the port offline status is available and offline. + */ +int ata_port_offline(struct ata_port *ap) +{ + u32 sstatus; + + if (!sata_scr_read(ap, SCR_STATUS, &sstatus) && (sstatus & 0xf) != 0x3) + return 1; + return 0; +} /* * Execute a 'simple' command, that only consists of the opcode 'cmd' itself, * without filling any other registers */ -static int ata_do_simple_cmd(struct ata_port *ap, struct ata_device *dev, - u8 cmd) +static int ata_do_simple_cmd(struct ata_device *dev, u8 cmd) { struct ata_taskfile tf; int err; - ata_tf_init(ap, &tf, dev->devno); + ata_tf_init(dev, &tf); tf.command = cmd; tf.flags |= ATA_TFLAG_DEVICE; tf.protocol = ATA_PROT_NODATA; - err = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); + err = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); if (err) - printk(KERN_ERR "%s: ata command failed: %d\n", - __FUNCTION__, err); + ata_dev_printk(dev, KERN_ERR, "%s: ata command failed: %d\n", + __FUNCTION__, err); return err; } -static int ata_flush_cache(struct ata_port *ap, struct ata_device *dev) +static int ata_flush_cache(struct ata_device *dev) { u8 cmd; @@ -4271,22 +4993,21 @@ static int ata_flush_cache(struct ata_po else cmd = ATA_CMD_FLUSH; - return ata_do_simple_cmd(ap, dev, cmd); + return ata_do_simple_cmd(dev, cmd); } -static int ata_standby_drive(struct ata_port *ap, struct ata_device *dev) +static int ata_standby_drive(struct ata_device *dev) { - return ata_do_simple_cmd(ap, dev, ATA_CMD_STANDBYNOW1); + return ata_do_simple_cmd(dev, ATA_CMD_STANDBYNOW1); } -static int ata_start_drive(struct ata_port *ap, struct ata_device *dev) +static int ata_start_drive(struct ata_device *dev) { - return ata_do_simple_cmd(ap, dev, ATA_CMD_IDLEIMMEDIATE); + return ata_do_simple_cmd(dev, ATA_CMD_IDLEIMMEDIATE); } /** * ata_device_resume - wakeup a previously suspended devices - * @ap: port the device is connected to * @dev: the device to resume * * Kick the drive back into action, by sending it an idle immediate @@ -4294,39 +5015,46 @@ static int ata_start_drive(struct ata_po * and host. 
* */ -int ata_device_resume(struct ata_port *ap, struct ata_device *dev) +int ata_device_resume(struct ata_device *dev) { + struct ata_port *ap = dev->ap; + if (ap->flags & ATA_FLAG_SUSPENDED) { + struct ata_device *failed_dev; + ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 200000); + ap->flags &= ~ATA_FLAG_SUSPENDED; - ata_set_mode(ap); + while (ata_set_mode(ap, &failed_dev)) + ata_dev_disable(failed_dev); } - if (!ata_dev_present(dev)) + if (!ata_dev_enabled(dev)) return 0; if (dev->class == ATA_DEV_ATA) - ata_start_drive(ap, dev); + ata_start_drive(dev); return 0; } /** * ata_device_suspend - prepare a device for suspend - * @ap: port the device is connected to * @dev: the device to suspend * @state: target power management state * * Flush the cache on the drive, if appropriate, then issue a * standbynow command. */ -int ata_device_suspend(struct ata_port *ap, struct ata_device *dev, pm_message_t state) +int ata_device_suspend(struct ata_device *dev, pm_message_t state) { - if (!ata_dev_present(dev)) + struct ata_port *ap = dev->ap; + + if (!ata_dev_enabled(dev)) return 0; if (dev->class == ATA_DEV_ATA) - ata_flush_cache(ap, dev); + ata_flush_cache(dev); if (state.event != PM_EVENT_FREEZE) - ata_standby_drive(ap, dev); + ata_standby_drive(dev); ap->flags |= ATA_FLAG_SUSPENDED; return 0; } @@ -4440,7 +5168,7 @@ static void ata_host_init(struct ata_por host->unique_id = ata_unique_id++; host->max_cmd_len = 12; - ap->flags = ATA_FLAG_PORT_DISABLED; + ap->flags = ATA_FLAG_DISABLED; ap->id = host->unique_id; ap->host = host; ap->ctl = ATA_DEVCTL_OBS; @@ -4453,16 +5181,24 @@ static void ata_host_init(struct ata_por ap->mwdma_mask = ent->mwdma_mask; ap->udma_mask = ent->udma_mask; ap->flags |= ent->host_flags; + ap->flags |= ent->port_flags[port_no]; ap->ops = ent->port_ops; - ap->cbl = ATA_CBL_NONE; + ap->sata_spd_limit = UINT_MAX; ap->active_tag = ATA_TAG_POISON; ap->last_ctl = 0xFF; + ap->msg_enable = ATA_MSG_DRV; INIT_WORK(&ap->port_task, NULL, NULL); INIT_LIST_HEAD(&ap->eh_done_q); + /* set cable type */ + ap->cbl = ATA_CBL_NONE; + if (ap->flags & ATA_FLAG_SATA) + ap->cbl = ATA_CBL_SATA; + for (i = 0; i < ATA_MAX_DEVICES; i++) { struct ata_device *dev = &ap->device[i]; + dev->ap = ap; dev->devno = i; dev->pio_mask = UINT_MAX; dev->mwdma_mask = UINT_MAX; @@ -4515,7 +5251,7 @@ static struct ata_port * ata_host_add(co host->transportt = &ata_scsi_transport_template; - ap = (struct ata_port *) &host->hostdata[0]; + ap = ata_shost_to_port(host); ata_host_init(ap, host, host_set, ent, port_no); @@ -4586,18 +5322,18 @@ int ata_device_add(const struct ata_prob (ap->pio_mask << ATA_SHIFT_PIO); /* print per-port info to dmesg */ - printk(KERN_INFO "ata%u: %cATA max %s cmd 0x%lX ctl 0x%lX " - "bmdma 0x%lX irq %lu\n", - ap->id, - ap->flags & ATA_FLAG_SATA ? 'S' : 'P', - ata_mode_string(xfer_mode_mask), - ap->ioaddr.cmd_addr, - ap->ioaddr.ctl_addr, - ap->ioaddr.bmdma_addr, - ent->irq); + ata_port_printk(ap, KERN_INFO, "%cATA max %s cmd 0x%lX " + "ctl 0x%lX bmdma 0x%lX irq %lu\n", + ap->flags & ATA_FLAG_SATA ? 
'S' : 'P', + ata_mode_string(xfer_mode_mask), + ap->ioaddr.cmd_addr, + ap->ioaddr.ctl_addr, + ap->ioaddr.bmdma_addr, + ent->irq); ata_chk_status(ap); host_set->ops->irq_clear(ap); + ata_eh_freeze_port(ap); /* freeze port before requesting IRQ */ count++; } @@ -4632,8 +5368,7 @@ int ata_device_add(const struct ata_prob rc = scsi_add_host(ap->host, dev); if (rc) { - printk(KERN_ERR "ata%u: scsi_add_host failed\n", - ap->id); + ata_port_printk(ap, KERN_ERR, "scsi_add_host failed\n"); /* FIXME: do something useful here */ /* FIXME: handle unconditional calls to * scsi_scan_host and ata_host_remove, below, @@ -4728,15 +5463,12 @@ void ata_host_set_remove(struct ata_host int ata_scsi_release(struct Scsi_Host *host) { - struct ata_port *ap = (struct ata_port *) &host->hostdata[0]; - int i; + struct ata_port *ap = ata_shost_to_port(host); DPRINTK("ENTER\n"); ap->ops->port_disable(ap); ata_host_remove(ap, 0); - for (i = 0; i < ATA_MAX_DEVICES; i++) - kfree(ap->device[i].id); DPRINTK("EXIT\n"); return 1; @@ -4895,6 +5627,52 @@ int ata_ratelimit(void) return rc; } +/** + * ata_wait_register - wait until register value changes + * @reg: IO-mapped register + * @mask: Mask to apply to read register value + * @val: Wait condition + * @interval_msec: polling interval in milliseconds + * @timeout_msec: timeout in milliseconds + * + * Waiting for some bits of register to change is a common + * operation for ATA controllers. This function reads 32bit LE + * IO-mapped register @reg and tests for the following condition. + * + * (*@reg & mask) != val + * + * If the condition is met, it returns; otherwise, the process is + * repeated after @interval_msec until timeout. + * + * LOCKING: + * Kernel thread context (may sleep) + * + * RETURNS: + * The final register value. + */ +u32 ata_wait_register(void __iomem *reg, u32 mask, u32 val, + unsigned long interval_msec, + unsigned long timeout_msec) +{ + unsigned long timeout; + u32 tmp; + + tmp = ioread32(reg); + + /* Calculate timeout _after_ the first read to make sure + * preceding writes reach the controller before starting to + * eat away the timeout. + */ + timeout = jiffies + (timeout_msec * HZ) / 1000; + + while ((tmp & mask) == val && time_before(jiffies, timeout)) { + msleep(interval_msec); + tmp = ioread32(reg); + } + + return tmp; +} + /* * libata is essentially a library of internal helper functions for * low-level ATA host controller drivers. 
As such, the API/ABI is @@ -4908,9 +5686,9 @@ EXPORT_SYMBOL_GPL(ata_device_add); EXPORT_SYMBOL_GPL(ata_host_set_remove); EXPORT_SYMBOL_GPL(ata_sg_init); EXPORT_SYMBOL_GPL(ata_sg_init_one); -EXPORT_SYMBOL_GPL(__ata_qc_complete); +EXPORT_SYMBOL_GPL(ata_qc_complete); +EXPORT_SYMBOL_GPL(ata_qc_complete_multiple); EXPORT_SYMBOL_GPL(ata_qc_issue_prot); -EXPORT_SYMBOL_GPL(ata_eng_timeout); EXPORT_SYMBOL_GPL(ata_tf_load); EXPORT_SYMBOL_GPL(ata_tf_read); EXPORT_SYMBOL_GPL(ata_noop_dev_select); @@ -4924,6 +5702,9 @@ EXPORT_SYMBOL_GPL(ata_port_start); EXPORT_SYMBOL_GPL(ata_port_stop); EXPORT_SYMBOL_GPL(ata_host_stop); EXPORT_SYMBOL_GPL(ata_interrupt); +EXPORT_SYMBOL_GPL(ata_mmio_data_xfer); +EXPORT_SYMBOL_GPL(ata_pio_data_xfer); +EXPORT_SYMBOL_GPL(ata_pio_data_xfer_noirq); EXPORT_SYMBOL_GPL(ata_qc_prep); EXPORT_SYMBOL_GPL(ata_noop_qc_prep); EXPORT_SYMBOL_GPL(ata_bmdma_setup); @@ -4931,7 +5712,13 @@ EXPORT_SYMBOL_GPL(ata_bmdma_start); EXPORT_SYMBOL_GPL(ata_bmdma_irq_clear); EXPORT_SYMBOL_GPL(ata_bmdma_status); EXPORT_SYMBOL_GPL(ata_bmdma_stop); +EXPORT_SYMBOL_GPL(ata_bmdma_freeze); +EXPORT_SYMBOL_GPL(ata_bmdma_thaw); +EXPORT_SYMBOL_GPL(ata_bmdma_drive_eh); +EXPORT_SYMBOL_GPL(ata_bmdma_error_handler); +EXPORT_SYMBOL_GPL(ata_bmdma_post_internal_cmd); EXPORT_SYMBOL_GPL(ata_port_probe); +EXPORT_SYMBOL_GPL(sata_set_spd); EXPORT_SYMBOL_GPL(sata_phy_reset); EXPORT_SYMBOL_GPL(__sata_phy_reset); EXPORT_SYMBOL_GPL(ata_bus_reset); @@ -4946,18 +5733,24 @@ EXPORT_SYMBOL_GPL(ata_dev_classify); EXPORT_SYMBOL_GPL(ata_dev_pair); EXPORT_SYMBOL_GPL(ata_port_disable); EXPORT_SYMBOL_GPL(ata_ratelimit); +EXPORT_SYMBOL_GPL(ata_wait_register); EXPORT_SYMBOL_GPL(ata_busy_sleep); EXPORT_SYMBOL_GPL(ata_port_queue_task); EXPORT_SYMBOL_GPL(ata_scsi_ioctl); EXPORT_SYMBOL_GPL(ata_scsi_queuecmd); EXPORT_SYMBOL_GPL(ata_scsi_slave_config); +EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth); EXPORT_SYMBOL_GPL(ata_scsi_release); EXPORT_SYMBOL_GPL(ata_host_intr); +EXPORT_SYMBOL_GPL(sata_scr_valid); +EXPORT_SYMBOL_GPL(sata_scr_read); +EXPORT_SYMBOL_GPL(sata_scr_write); +EXPORT_SYMBOL_GPL(sata_scr_write_flush); +EXPORT_SYMBOL_GPL(ata_port_online); +EXPORT_SYMBOL_GPL(ata_port_offline); EXPORT_SYMBOL_GPL(ata_id_string); EXPORT_SYMBOL_GPL(ata_id_c_string); EXPORT_SYMBOL_GPL(ata_scsi_simulate); -EXPORT_SYMBOL_GPL(ata_eh_qc_complete); -EXPORT_SYMBOL_GPL(ata_eh_qc_retry); EXPORT_SYMBOL_GPL(ata_pio_need_iordy); EXPORT_SYMBOL_GPL(ata_timing_compute); @@ -4979,3 +5772,13 @@ EXPORT_SYMBOL_GPL(ata_device_suspend); EXPORT_SYMBOL_GPL(ata_device_resume); EXPORT_SYMBOL_GPL(ata_scsi_device_suspend); EXPORT_SYMBOL_GPL(ata_scsi_device_resume); + +EXPORT_SYMBOL_GPL(ata_eng_timeout); +EXPORT_SYMBOL_GPL(ata_port_schedule_eh); +EXPORT_SYMBOL_GPL(ata_port_abort); +EXPORT_SYMBOL_GPL(ata_port_freeze); +EXPORT_SYMBOL_GPL(ata_eh_freeze_port); +EXPORT_SYMBOL_GPL(ata_eh_thaw_port); +EXPORT_SYMBOL_GPL(ata_eh_qc_complete); +EXPORT_SYMBOL_GPL(ata_eh_qc_retry); +EXPORT_SYMBOL_GPL(ata_do_eh); diff -puN /dev/null drivers/scsi/libata-eh.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/libata-eh.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,1561 @@ +/* + * libata-eh.c - libata error handling + * + * Maintained by: Jeff Garzik + * Please ALWAYS copy linux-ide@vger.kernel.org + * on emails. 
+ * + * Copyright 2006 Tejun Heo + * + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; see the file COPYING. If not, write to + * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, + * USA. + * + * + * libata documentation is available via 'make {ps|pdf}docs', + * as Documentation/DocBook/libata.* + * + * Hardware documentation available from http://www.t13.org/ and + * http://www.sata-io.org/ + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include "scsi_transport_api.h" + +#include + +#include "libata.h" + +static void __ata_port_freeze(struct ata_port *ap); + +static void ata_ering_record(struct ata_ering *ering, int is_io, + unsigned int err_mask) +{ + struct ata_ering_entry *ent; + + WARN_ON(!err_mask); + + ering->cursor++; + ering->cursor %= ATA_ERING_SIZE; + + ent = &ering->ring[ering->cursor]; + ent->is_io = is_io; + ent->err_mask = err_mask; + ent->timestamp = get_jiffies_64(); +} + +static struct ata_ering_entry * ata_ering_top(struct ata_ering *ering) +{ + struct ata_ering_entry *ent = &ering->ring[ering->cursor]; + if (!ent->err_mask) + return NULL; + return ent; +} + +static int ata_ering_map(struct ata_ering *ering, + int (*map_fn)(struct ata_ering_entry *, void *), + void *arg) +{ + int idx, rc = 0; + struct ata_ering_entry *ent; + + idx = ering->cursor; + do { + ent = &ering->ring[idx]; + if (!ent->err_mask) + break; + rc = map_fn(ent, arg); + if (rc) + break; + idx = (idx - 1 + ATA_ERING_SIZE) % ATA_ERING_SIZE; + } while (idx != ering->cursor); + + return rc; +} + +/** + * ata_scsi_timed_out - SCSI layer time out callback + * @cmd: timed out SCSI command + * + * Handles SCSI layer timeout. We race with normal completion of + * the qc for @cmd. If the qc is already gone, we lose and let + * the scsi command finish (EH_HANDLED). Otherwise, the qc has + * timed out and EH should be invoked. Prevent ata_qc_complete() + * from finishing it by setting EH_SCHEDULED and return + * EH_NOT_HANDLED. + * + * TODO: kill this function once old EH is gone. 
+ * + * LOCKING: + * Called from timer context + * + * RETURNS: + * EH_HANDLED or EH_NOT_HANDLED + */ +enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd) +{ + struct Scsi_Host *host = cmd->device->host; + struct ata_port *ap = ata_shost_to_port(host); + unsigned long flags; + struct ata_queued_cmd *qc; + enum scsi_eh_timer_return ret; + + DPRINTK("ENTER\n"); + + if (ap->ops->error_handler) { + ret = EH_NOT_HANDLED; + goto out; + } + + ret = EH_HANDLED; + spin_lock_irqsave(&ap->host_set->lock, flags); + qc = ata_qc_from_tag(ap, ap->active_tag); + if (qc) { + WARN_ON(qc->scsicmd != cmd); + qc->flags |= ATA_QCFLAG_EH_SCHEDULED; + qc->err_mask |= AC_ERR_TIMEOUT; + ret = EH_NOT_HANDLED; + } + spin_unlock_irqrestore(&ap->host_set->lock, flags); + + out: + DPRINTK("EXIT, ret=%d\n", ret); + return ret; +} + +/** + * ata_scsi_error - SCSI layer error handler callback + * @host: SCSI host on which error occurred + * + * Handles SCSI-layer-thrown error events. + * + * LOCKING: + * Inherited from SCSI layer (none, can sleep) + * + * RETURNS: + * Zero. + */ +void ata_scsi_error(struct Scsi_Host *host) +{ + struct ata_port *ap = ata_shost_to_port(host); + spinlock_t *hs_lock = &ap->host_set->lock; + int i, repeat_cnt = ATA_EH_MAX_REPEAT; + unsigned long flags; + + DPRINTK("ENTER\n"); + + /* synchronize with port task */ + ata_port_flush_task(ap); + + /* synchronize with host_set lock and sort out timeouts */ + + /* For new EH, all qcs are finished in one of three ways - + * normal completion, error completion, and SCSI timeout. + * Both cmpletions can race against SCSI timeout. When normal + * completion wins, the qc never reaches EH. When error + * completion wins, the qc has ATA_QCFLAG_FAILED set. + * + * When SCSI timeout wins, things are a bit more complex. + * Normal or error completion can occur after the timeout but + * before this point. In such cases, both types of + * completions are honored. A scmd is determined to have + * timed out iff its associated qc is active and not failed. + */ + if (ap->ops->error_handler) { + struct scsi_cmnd *scmd, *tmp; + int nr_timedout = 0; + + spin_lock_irqsave(hs_lock, flags); + + list_for_each_entry_safe(scmd, tmp, &host->eh_cmd_q, eh_entry) { + struct ata_queued_cmd *qc; + + for (i = 0; i < ATA_MAX_QUEUE; i++) { + qc = __ata_qc_from_tag(ap, i); + if (qc->flags & ATA_QCFLAG_ACTIVE && + qc->scsicmd == scmd) + break; + } + + if (i < ATA_MAX_QUEUE) { + /* the scmd has an associated qc */ + if (!(qc->flags & ATA_QCFLAG_FAILED)) { + /* which hasn't failed yet, timeout */ + qc->err_mask |= AC_ERR_TIMEOUT; + qc->flags |= ATA_QCFLAG_FAILED; + nr_timedout++; + } + } else { + /* Normal completion occurred after + * SCSI timeout but before this point. + * Successfully complete it. + */ + scmd->retries = scmd->allowed; + scsi_eh_finish_cmd(scmd, &ap->eh_done_q); + } + } + + /* If we have timed out qcs. They belong to EH from + * this point but the state of the controller is + * unknown. Freeze the port to make sure the IRQ + * handler doesn't diddle with those qcs. This must + * be done atomically w.r.t. setting QCFLAG_FAILED. 
+ */ + if (nr_timedout) + __ata_port_freeze(ap); + + spin_unlock_irqrestore(hs_lock, flags); + } else + spin_unlock_wait(hs_lock); + + repeat: + /* invoke error handler */ + if (ap->ops->error_handler) { + /* fetch & clear EH info */ + spin_lock_irqsave(hs_lock, flags); + + memset(&ap->eh_context, 0, sizeof(ap->eh_context)); + ap->eh_context.i = ap->eh_info; + memset(&ap->eh_info, 0, sizeof(ap->eh_info)); + + ap->flags &= ~ATA_FLAG_EH_PENDING; + + spin_unlock_irqrestore(hs_lock, flags); + + /* invoke EH */ + ap->ops->error_handler(ap); + + /* Exception might have happend after ->error_handler + * recovered the port but before this point. Repeat + * EH in such case. + */ + spin_lock_irqsave(hs_lock, flags); + + if (ap->flags & ATA_FLAG_EH_PENDING) { + if (--repeat_cnt) { + ata_port_printk(ap, KERN_INFO, + "EH pending after completion, " + "repeating EH (cnt=%d)\n", repeat_cnt); + spin_unlock_irqrestore(hs_lock, flags); + goto repeat; + } + ata_port_printk(ap, KERN_ERR, "EH pending after %d " + "tries, giving up\n", ATA_EH_MAX_REPEAT); + } + + /* this run is complete, make sure EH info is clear */ + memset(&ap->eh_info, 0, sizeof(ap->eh_info)); + + /* Clear host_eh_scheduled while holding hs_lock such + * that if exception occurs after this point but + * before EH completion, SCSI midlayer will + * re-initiate EH. + */ + host->host_eh_scheduled = 0; + + spin_unlock_irqrestore(hs_lock, flags); + } else { + WARN_ON(ata_qc_from_tag(ap, ap->active_tag) == NULL); + ap->ops->eng_timeout(ap); + } + + /* finish or retry handled scmd's and clean up */ + WARN_ON(host->host_failed || !list_empty(&host->eh_cmd_q)); + + scsi_eh_flush_done_q(&ap->eh_done_q); + + /* clean up */ + spin_lock_irqsave(hs_lock, flags); + + if (ap->flags & ATA_FLAG_RECOVERED) + ata_port_printk(ap, KERN_INFO, "EH complete\n"); + ap->flags &= ~ATA_FLAG_RECOVERED; + + spin_unlock_irqrestore(hs_lock, flags); + + DPRINTK("EXIT\n"); +} + +/** + * ata_qc_timeout - Handle timeout of queued command + * @qc: Command that timed out + * + * Some part of the kernel (currently, only the SCSI layer) + * has noticed that the active command on port @ap has not + * completed after a specified length of time. Handle this + * condition by disabling DMA (if necessary) and completing + * transactions, with error if necessary. + * + * This also handles the case of the "lost interrupt", where + * for some reason (possibly hardware bug, possibly driver bug) + * an interrupt was not delivered to the driver, even though the + * transaction completed successfully. + * + * TODO: kill this function once old EH is gone. 
+ * + * LOCKING: + * Inherited from SCSI layer (none, can sleep) + */ +static void ata_qc_timeout(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct ata_host_set *host_set = ap->host_set; + u8 host_stat = 0, drv_stat; + unsigned long flags; + + DPRINTK("ENTER\n"); + + ap->hsm_task_state = HSM_ST_IDLE; + + spin_lock_irqsave(&host_set->lock, flags); + + switch (qc->tf.protocol) { + + case ATA_PROT_DMA: + case ATA_PROT_ATAPI_DMA: + host_stat = ap->ops->bmdma_status(ap); + + /* before we do anything else, clear DMA-Start bit */ + ap->ops->bmdma_stop(qc); + + /* fall through */ + + default: + ata_altstatus(ap); + drv_stat = ata_chk_status(ap); + + /* ack bmdma irq events */ + ap->ops->irq_clear(ap); + + ata_dev_printk(qc->dev, KERN_ERR, "command 0x%x timeout, " + "stat 0x%x host_stat 0x%x\n", + qc->tf.command, drv_stat, host_stat); + + /* complete taskfile transaction */ + qc->err_mask |= AC_ERR_TIMEOUT; + break; + } + + spin_unlock_irqrestore(&host_set->lock, flags); + + ata_eh_qc_complete(qc); + + DPRINTK("EXIT\n"); +} + +/** + * ata_eng_timeout - Handle timeout of queued command + * @ap: Port on which timed-out command is active + * + * Some part of the kernel (currently, only the SCSI layer) + * has noticed that the active command on port @ap has not + * completed after a specified length of time. Handle this + * condition by disabling DMA (if necessary) and completing + * transactions, with error if necessary. + * + * This also handles the case of the "lost interrupt", where + * for some reason (possibly hardware bug, possibly driver bug) + * an interrupt was not delivered to the driver, even though the + * transaction completed successfully. + * + * TODO: kill this function once old EH is gone. + * + * LOCKING: + * Inherited from SCSI layer (none, can sleep) + */ +void ata_eng_timeout(struct ata_port *ap) +{ + DPRINTK("ENTER\n"); + + ata_qc_timeout(ata_qc_from_tag(ap, ap->active_tag)); + + DPRINTK("EXIT\n"); +} + +/** + * ata_qc_schedule_eh - schedule qc for error handling + * @qc: command to schedule error handling for + * + * Schedule error handling for @qc. EH will kick in as soon as + * other commands are drained. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + */ +void ata_qc_schedule_eh(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + + WARN_ON(!ap->ops->error_handler); + + qc->flags |= ATA_QCFLAG_FAILED; + qc->ap->flags |= ATA_FLAG_EH_PENDING; + + /* The following will fail if timeout has already expired. + * ata_scsi_error() takes care of such scmds on EH entry. + * Note that ATA_QCFLAG_FAILED is unconditionally set after + * this function completes. + */ + scsi_req_abort_cmd(qc->scsicmd); +} + +/** + * ata_port_schedule_eh - schedule error handling without a qc + * @ap: ATA port to schedule EH for + * + * Schedule error handling for @ap. EH will kick in as soon as + * all commands are drained. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + */ +void ata_port_schedule_eh(struct ata_port *ap) +{ + WARN_ON(!ap->ops->error_handler); + + ap->flags |= ATA_FLAG_EH_PENDING; + scsi_schedule_eh(ap->host); + + DPRINTK("port EH scheduled\n"); +} + +/** + * ata_port_abort - abort all qc's on the port + * @ap: ATA port to abort qc's for + * + * Abort all active qc's of @ap and schedule EH. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + * + * RETURNS: + * Number of aborted qc's. 
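As a usage note, a minimal sketch (assumed usage, not part of this patch) of how an LLDD might exploit ata_port_schedule_eh() when no command is involved, e.g. after noticing a PHY event; example_phy_event() is an invented name, only the libata calls are real:

static void example_phy_event(struct ata_port *ap)
{
	/* caller holds ap->host_set->lock; no qc is involved, so just
	 * record the desired action and kick the SCSI EH thread, which
	 * will end up in ata_scsi_error() above.
	 */
	ap->eh_info.action |= ATA_EH_HARDRESET;
	ata_port_schedule_eh(ap);
}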
+ */ +int ata_port_abort(struct ata_port *ap) +{ + int tag, nr_aborted = 0; + + WARN_ON(!ap->ops->error_handler); + + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); + + if (qc) { + qc->flags |= ATA_QCFLAG_FAILED; + ata_qc_complete(qc); + nr_aborted++; + } + } + + if (!nr_aborted) + ata_port_schedule_eh(ap); + + return nr_aborted; +} + +/** + * __ata_port_freeze - freeze port + * @ap: ATA port to freeze + * + * This function is called when HSM violation or some other + * condition disrupts normal operation of the port. A frozen port + * is not allowed to perform any operation until the port is + * thawed, which usually follows a successful reset. + * + * ap->ops->freeze() callback can be used for freezing the port + * hardware-wise (e.g. mask interrupt and stop DMA engine). If a + * port cannot be frozen hardware-wise, the interrupt handler + * must ack and clear interrupts unconditionally while the port + * is frozen. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + */ +static void __ata_port_freeze(struct ata_port *ap) +{ + WARN_ON(!ap->ops->error_handler); + + if (ap->ops->freeze) + ap->ops->freeze(ap); + + ap->flags |= ATA_FLAG_FROZEN; + + DPRINTK("ata%u port frozen\n", ap->id); +} + +/** + * ata_port_freeze - abort & freeze port + * @ap: ATA port to freeze + * + * Abort and freeze @ap. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + * + * RETURNS: + * Number of aborted commands. + */ +int ata_port_freeze(struct ata_port *ap) +{ + int nr_aborted; + + WARN_ON(!ap->ops->error_handler); + + nr_aborted = ata_port_abort(ap); + __ata_port_freeze(ap); + + return nr_aborted; +} + +/** + * ata_eh_freeze_port - EH helper to freeze port + * @ap: ATA port to freeze + * + * Freeze @ap. + * + * LOCKING: + * None. + */ +void ata_eh_freeze_port(struct ata_port *ap) +{ + unsigned long flags; + + if (!ap->ops->error_handler) + return; + + spin_lock_irqsave(&ap->host_set->lock, flags); + __ata_port_freeze(ap); + spin_unlock_irqrestore(&ap->host_set->lock, flags); +} + +/** + * ata_eh_thaw_port - EH helper to thaw port + * @ap: ATA port to thaw + * + * Thaw frozen port @ap. + * + * LOCKING: + * None. + */ +void ata_eh_thaw_port(struct ata_port *ap) +{ + unsigned long flags; + + if (!ap->ops->error_handler) + return; + + spin_lock_irqsave(&ap->host_set->lock, flags); + + ap->flags &= ~ATA_FLAG_FROZEN; + + if (ap->ops->thaw) + ap->ops->thaw(ap); + + spin_unlock_irqrestore(&ap->host_set->lock, flags); + + DPRINTK("ata%u port thawed\n", ap->id); +} + +static void ata_eh_scsidone(struct scsi_cmnd *scmd) +{ + /* nada */ +} + +static void __ata_eh_qc_complete(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct scsi_cmnd *scmd = qc->scsicmd; + unsigned long flags; + + spin_lock_irqsave(&ap->host_set->lock, flags); + qc->scsidone = ata_eh_scsidone; + __ata_qc_complete(qc); + WARN_ON(ata_tag_valid(qc->tag)); + spin_unlock_irqrestore(&ap->host_set->lock, flags); + + scsi_eh_finish_cmd(scmd, &ap->eh_done_q); +} + +/** + * ata_eh_qc_complete - Complete an active ATA command from EH + * @qc: Command to complete + * + * Indicate to the mid and upper layers that an ATA command has + * completed. To be used from EH.
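For context, a hedged sketch (not taken from this patch) of how the interrupt handler of a new-EH capable LLDD might use the freeze API above when it hits a fatal condition; example_host_intr() and EXAMPLE_IRQ_FATAL are invented names, the libata helpers are the ones defined in this file:

static void example_host_intr(struct ata_port *ap, u32 irq_stat)
{
	struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag);

	/* called with the host_set lock held */
	if (irq_stat & EXAMPLE_IRQ_FATAL) {
		if (qc)
			qc->err_mask |= AC_ERR_HSM;
		ap->eh_info.err_mask |= AC_ERR_HSM;
		ap->eh_info.action |= ATA_EH_SOFTRESET;
		ata_port_freeze(ap);	/* aborts active qcs, then freezes */
		return;
	}

	if (qc)
		ata_qc_complete(qc);
}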
+ */ +void ata_eh_qc_complete(struct ata_queued_cmd *qc) +{ + struct scsi_cmnd *scmd = qc->scsicmd; + scmd->retries = scmd->allowed; + __ata_eh_qc_complete(qc); +} + +/** + * ata_eh_qc_retry - Tell midlayer to retry an ATA command after EH + * @qc: Command to retry + * + * Indicate to the mid and upper layers that an ATA command + * should be retried. To be used from EH. + * + * SCSI midlayer limits the number of retries to scmd->allowed. + * scmd->retries is decremented for commands which get retried + * due to unrelated failures (qc->err_mask is zero). + */ +void ata_eh_qc_retry(struct ata_queued_cmd *qc) +{ + struct scsi_cmnd *scmd = qc->scsicmd; + if (!qc->err_mask && scmd->retries) + scmd->retries--; + __ata_eh_qc_complete(qc); +} + +/** + * ata_eh_about_to_do - about to perform eh_action + * @ap: target ATA port + * @action: action about to be performed + * + * Called just before performing EH actions to clear related bits + * in @ap->eh_info such that eh actions are not unnecessarily + * repeated. + * + * LOCKING: + * None. + */ +static void ata_eh_about_to_do(struct ata_port *ap, unsigned int action) +{ + unsigned long flags; + + spin_lock_irqsave(&ap->host_set->lock, flags); + ap->eh_info.action &= ~action; + ap->flags |= ATA_FLAG_RECOVERED; + spin_unlock_irqrestore(&ap->host_set->lock, flags); +} + +/** + * ata_err_string - convert err_mask to descriptive string + * @err_mask: error mask to convert to string + * + * Convert @err_mask to descriptive string. Errors are + * prioritized according to severity and only the most severe + * error is reported. + * + * LOCKING: + * None. + * + * RETURNS: + * Descriptive string for @err_mask + */ +static const char * ata_err_string(unsigned int err_mask) +{ + if (err_mask & AC_ERR_HOST_BUS) + return "host bus error"; + if (err_mask & AC_ERR_ATA_BUS) + return "ATA bus error"; + if (err_mask & AC_ERR_TIMEOUT) + return "timeout"; + if (err_mask & AC_ERR_HSM) + return "HSM violation"; + if (err_mask & AC_ERR_SYSTEM) + return "internal error"; + if (err_mask & AC_ERR_MEDIA) + return "media error"; + if (err_mask & AC_ERR_INVALID) + return "invalid argument"; + if (err_mask & AC_ERR_DEV) + return "device error"; + return "unknown error"; +} + +/** + * ata_read_log_page - read a specific log page + * @dev: target device + * @page: page to read + * @buf: buffer to store read page + * @sectors: number of sectors to read + * + * Read log page using READ_LOG_EXT command. + * + * LOCKING: + * Kernel thread context (may sleep). + * + * RETURNS: + * 0 on success, AC_ERR_* mask otherwise. + */ +static unsigned int ata_read_log_page(struct ata_device *dev, + u8 page, void *buf, unsigned int sectors) +{ + struct ata_taskfile tf; + unsigned int err_mask; + + DPRINTK("read log page - page %d\n", page); + + ata_tf_init(dev, &tf); + tf.command = ATA_CMD_READ_LOG_EXT; + tf.lbal = page; + tf.nsect = sectors; + tf.hob_nsect = sectors >> 8; + tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_LBA48 | ATA_TFLAG_DEVICE; + tf.protocol = ATA_PROT_PIO; + + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_FROM_DEVICE, + buf, sectors * ATA_SECT_SIZE); + + DPRINTK("EXIT, err_mask=%x\n", err_mask); + return err_mask; +} + +/** + * ata_eh_read_log_10h - Read log page 10h for NCQ error details + * @dev: Device to read log page 10h from + * @tag: Resulting tag of the failed command + * @tf: Resulting taskfile registers of the failed command + * + * Read log page 10h to obtain NCQ error details and clear error + * condition. + * + * LOCKING: + * Kernel thread context (may sleep). 
+ * + * RETURNS: + * 0 on success, -errno otherwise. + */ +static int ata_eh_read_log_10h(struct ata_device *dev, + int *tag, struct ata_taskfile *tf) +{ + u8 *buf = dev->ap->sector_buf; + unsigned int err_mask; + u8 csum; + int i; + + err_mask = ata_read_log_page(dev, ATA_LOG_SATA_NCQ, buf, 1); + if (err_mask) + return -EIO; + + csum = 0; + for (i = 0; i < ATA_SECT_SIZE; i++) + csum += buf[i]; + if (csum) + ata_dev_printk(dev, KERN_WARNING, + "invalid checksum 0x%x on log page 10h\n", csum); + + if (buf[0] & 0x80) + return -ENOENT; + + *tag = buf[0] & 0x1f; + + tf->command = buf[2]; + tf->feature = buf[3]; + tf->lbal = buf[4]; + tf->lbam = buf[5]; + tf->lbah = buf[6]; + tf->device = buf[7]; + tf->hob_lbal = buf[8]; + tf->hob_lbam = buf[9]; + tf->hob_lbah = buf[10]; + tf->nsect = buf[12]; + tf->hob_nsect = buf[13]; + + return 0; +} + +/** + * atapi_eh_request_sense - perform ATAPI REQUEST_SENSE + * @dev: device to perform REQUEST_SENSE to + * @sense_buf: result sense data buffer (SCSI_SENSE_BUFFERSIZE bytes long) + * + * Perform ATAPI REQUEST_SENSE after the device reported CHECK + * SENSE. This function is EH helper. + * + * LOCKING: + * Kernel thread context (may sleep). + * + * RETURNS: + * 0 on success, AC_ERR_* mask on failure + */ +static unsigned int atapi_eh_request_sense(struct ata_device *dev, + unsigned char *sense_buf) +{ + struct ata_port *ap = dev->ap; + struct ata_taskfile tf; + u8 cdb[ATAPI_CDB_LEN]; + + DPRINTK("ATAPI request sense\n"); + + ata_tf_init(dev, &tf); + + /* FIXME: is this needed? */ + memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE); + + /* XXX: why tf_read here? */ + ap->ops->tf_read(ap, &tf); + + /* fill these in, for the case where they are -not- overwritten */ + sense_buf[0] = 0x70; + sense_buf[2] = tf.feature >> 4; + + memset(cdb, 0, ATAPI_CDB_LEN); + cdb[0] = REQUEST_SENSE; + cdb[4] = SCSI_SENSE_BUFFERSIZE; + + tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; + tf.command = ATA_CMD_PACKET; + + /* is it pointless to prefer PIO for "safety reasons"? */ + if (ap->flags & ATA_FLAG_PIO_DMA) { + tf.protocol = ATA_PROT_ATAPI_DMA; + tf.feature |= ATAPI_PKT_DMA; + } else { + tf.protocol = ATA_PROT_ATAPI; + tf.lbam = (8 * 1024) & 0xff; + tf.lbah = (8 * 1024) >> 8; + } + + return ata_exec_internal(dev, &tf, cdb, DMA_FROM_DEVICE, + sense_buf, SCSI_SENSE_BUFFERSIZE); +} + +/** + * ata_eh_analyze_serror - analyze SError for a failed port + * @ap: ATA port to analyze SError for + * + * Analyze SError if available and further determine cause of + * failure. + * + * LOCKING: + * None. + */ +static void ata_eh_analyze_serror(struct ata_port *ap) +{ + struct ata_eh_context *ehc = &ap->eh_context; + u32 serror = ehc->i.serror; + unsigned int err_mask = 0, action = 0; + + if (serror & SERR_PERSISTENT) { + err_mask |= AC_ERR_ATA_BUS; + action |= ATA_EH_HARDRESET; + } + if (serror & + (SERR_DATA_RECOVERED | SERR_COMM_RECOVERED | SERR_DATA)) { + err_mask |= AC_ERR_ATA_BUS; + action |= ATA_EH_SOFTRESET; + } + if (serror & SERR_PROTOCOL) { + err_mask |= AC_ERR_HSM; + action |= ATA_EH_SOFTRESET; + } + if (serror & SERR_INTERNAL) { + err_mask |= AC_ERR_SYSTEM; + action |= ATA_EH_SOFTRESET; + } + if (serror & (SERR_PHYRDY_CHG | SERR_DEV_XCHG)) { + err_mask |= AC_ERR_ATA_BUS; + action |= ATA_EH_HARDRESET; + } + + ehc->i.err_mask |= err_mask; + ehc->i.action |= action; +} + +/** + * ata_eh_analyze_ncq_error - analyze NCQ error + * @ap: ATA port to analyze NCQ error for + * + * Read log page 10h, determine the offending qc and acquire + * error status TF. 
For NCQ device errors, all LLDDs have to do + * is set AC_ERR_DEV in ehi->err_mask. This function takes + * care of the rest. + * + * LOCKING: + * Kernel thread context (may sleep). + */ +static void ata_eh_analyze_ncq_error(struct ata_port *ap) +{ + struct ata_eh_context *ehc = &ap->eh_context; + struct ata_device *dev = ap->device; + struct ata_queued_cmd *qc; + struct ata_taskfile tf; + int tag, rc; + + /* if frozen, we can't do much */ + if (ap->flags & ATA_FLAG_FROZEN) + return; + + /* is it NCQ device error? */ + if (!ap->sactive || !(ehc->i.err_mask & AC_ERR_DEV)) + return; + + /* has LLDD analyzed already? */ + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + qc = __ata_qc_from_tag(ap, tag); + + if (!(qc->flags & ATA_QCFLAG_FAILED)) + continue; + + if (qc->err_mask) + return; + } + + /* okay, this error is ours */ + rc = ata_eh_read_log_10h(dev, &tag, &tf); + if (rc) { + ata_port_printk(ap, KERN_ERR, "failed to read log page 10h " + "(errno=%d)\n", rc); + return; + } + + if (!(ap->sactive & (1 << tag))) { + ata_port_printk(ap, KERN_ERR, "log page 10h reported " + "inactive tag %d\n", tag); + return; + } + + /* we've got the perpetrator, condemn it */ + qc = __ata_qc_from_tag(ap, tag); + memcpy(&qc->result_tf, &tf, sizeof(tf)); + qc->err_mask |= AC_ERR_DEV; + ehc->i.err_mask &= ~AC_ERR_DEV; +} + +/** + * ata_eh_analyze_tf - analyze taskfile of a failed qc + * @qc: qc to analyze + * @tf: Taskfile registers to analyze + * + * Analyze taskfile of @qc and further determine cause of + * failure. This function also requests ATAPI sense data if + * available. + * + * LOCKING: + * Kernel thread context (may sleep). + * + * RETURNS: + * Determined recovery action + */ +static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc, + const struct ata_taskfile *tf) +{ + unsigned int tmp, action = 0; + u8 stat = tf->command, err = tf->feature; + + if ((stat & (ATA_BUSY | ATA_DRQ | ATA_DRDY)) != ATA_DRDY) { + qc->err_mask |= AC_ERR_HSM; + return ATA_EH_SOFTRESET; + } + + if (!(qc->err_mask & AC_ERR_DEV)) + return 0; + + switch (qc->dev->class) { + case ATA_DEV_ATA: + if (err & ATA_ICRC) + qc->err_mask |= AC_ERR_ATA_BUS; + if (err & ATA_UNC) + qc->err_mask |= AC_ERR_MEDIA; + if (err & ATA_IDNF) + qc->err_mask |= AC_ERR_INVALID; + break; + + case ATA_DEV_ATAPI: + tmp = atapi_eh_request_sense(qc->dev, + qc->scsicmd->sense_buffer); + if (!tmp) { + /* ATA_QCFLAG_SENSE_VALID is used to tell + * atapi_qc_complete() that sense data is + * already valid. + * + * TODO: interpret sense data and set + * appropriate err_mask.
+ */ + qc->flags |= ATA_QCFLAG_SENSE_VALID; + } else + qc->err_mask |= tmp; + } + + if (qc->err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT | AC_ERR_ATA_BUS)) + action |= ATA_EH_SOFTRESET; + + return action; +} + +static int ata_eh_categorize_ering_entry(struct ata_ering_entry *ent) +{ + if (ent->err_mask & (AC_ERR_ATA_BUS | AC_ERR_TIMEOUT)) + return 1; + + if (ent->is_io) { + if (ent->err_mask & AC_ERR_HSM) + return 1; + if ((ent->err_mask & + (AC_ERR_DEV|AC_ERR_MEDIA|AC_ERR_INVALID)) == AC_ERR_DEV) + return 2; + } + + return 0; +} + +struct speed_down_needed_arg { + u64 since; + int nr_errors[3]; +}; + +static int speed_down_needed_cb(struct ata_ering_entry *ent, void *void_arg) +{ + struct speed_down_needed_arg *arg = void_arg; + + if (ent->timestamp < arg->since) + return -1; + + arg->nr_errors[ata_eh_categorize_ering_entry(ent)]++; + return 0; +} + +/** + * ata_eh_speed_down_needed - Determine whether speed down is necessary + * @dev: Device of interest + * + * This function examines the error ring of @dev and determines + * whether speed down is necessary. Speed down is necessary if + * there have been more than 3 Cat-1 errors or 10 Cat-2 + * errors during the last 15 minutes. + * + * Cat-1 errors are ATA_BUS, TIMEOUT for any command and HSM + * violation for known supported commands. + * + * Cat-2 errors are unclassified DEV errors for known supported + * commands. + * + * LOCKING: + * Inherited from caller. + * + * RETURNS: + * 1 if speed down is necessary, 0 otherwise + */ +static int ata_eh_speed_down_needed(struct ata_device *dev) +{ + const u64 interval = 15LLU * 60 * HZ; + static const int err_limits[3] = { -1, 3, 10 }; + struct speed_down_needed_arg arg; + struct ata_ering_entry *ent; + int err_cat; + u64 j64; + + ent = ata_ering_top(&dev->ering); + if (!ent) + return 0; + + err_cat = ata_eh_categorize_ering_entry(ent); + if (err_cat == 0) + return 0; + + memset(&arg, 0, sizeof(arg)); + + j64 = get_jiffies_64(); + if (j64 >= interval) + arg.since = j64 - interval; + else + arg.since = 0; + + ata_ering_map(&dev->ering, speed_down_needed_cb, &arg); + + return arg.nr_errors[err_cat] > err_limits[err_cat]; +} + +/** + * ata_eh_speed_down - record error and speed down if necessary + * @dev: Failed device + * @is_io: Did the device fail during normal IO? + * @err_mask: err_mask of the error + * + * Record error and examine error history to determine whether + * adjusting transmission speed is necessary. It also sets + * transmission limits appropriately if such adjustment is + * necessary. + * + * LOCKING: + * Kernel thread context (may sleep). + * + * RETURNS: + * Recommended ATA_EH_* action, or 0 if no speed down is necessary. + */ +static int ata_eh_speed_down(struct ata_device *dev, int is_io, + unsigned int err_mask) +{ + if (!err_mask) + return 0; + + /* record error and determine whether speed down is necessary */ + ata_ering_record(&dev->ering, is_io, err_mask); + + if (!ata_eh_speed_down_needed(dev)) + return 0; + + /* speed down SATA link speed if possible */ + if (sata_down_spd_limit(dev->ap) == 0) + return ATA_EH_HARDRESET; + + /* lower transfer mode */ + if (ata_down_xfermask_limit(dev, 0) == 0) + return ATA_EH_SOFTRESET; + + ata_dev_printk(dev, KERN_ERR, + "speed down requested but no transfer mode left\n"); + return 0; +} + +/** + * ata_eh_autopsy - analyze error and determine recovery action + * @ap: ATA port to perform autopsy on + * + * Analyze why @ap failed and determine which recovery action is + * needed. This function also sets more detailed AC_ERR_* values + * and fills sense data for ATAPI CHECK SENSE.
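To make the speed-down policy above concrete, a worked illustration (numbers chosen here, not taken from the patch): suppose a disk's error ring holds, within the last 15 minutes, two AC_ERR_TIMEOUT entries (Cat-1) and eleven plain AC_ERR_DEV entries from normal I/O (Cat-2). The category of the most recent entry selects which limit is checked, so if the newest entry is one of the Cat-2 device errors, nr_errors[2] = 11 exceeds err_limits[2] = 10 and ata_eh_speed_down_needed() returns 1; had the newest entry been a Cat-1 timeout instead, nr_errors[1] = 2 would still be under the limit of 3 and no speed down would be requested.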
+ * + * LOCKING: + * Kernel thread context (may sleep). + */ +static void ata_eh_autopsy(struct ata_port *ap) +{ + struct ata_eh_context *ehc = &ap->eh_context; + unsigned int action = ehc->i.action; + struct ata_device *failed_dev = NULL; + unsigned int all_err_mask = 0; + int tag, is_io = 0; + u32 serror; + int rc; + + DPRINTK("ENTER\n"); + + /* obtain and analyze SError */ + rc = sata_scr_read(ap, SCR_ERROR, &serror); + if (rc == 0) { + ehc->i.serror |= serror; + ata_eh_analyze_serror(ap); + } else if (rc != -EOPNOTSUPP) + action |= ATA_EH_HARDRESET; + + /* analyze NCQ failure */ + ata_eh_analyze_ncq_error(ap); + + /* any real error trumps AC_ERR_OTHER */ + if (ehc->i.err_mask & ~AC_ERR_OTHER) + ehc->i.err_mask &= ~AC_ERR_OTHER; + + all_err_mask |= ehc->i.err_mask; + + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + + if (!(qc->flags & ATA_QCFLAG_FAILED)) + continue; + + /* inherit upper level err_mask */ + qc->err_mask |= ehc->i.err_mask; + + /* analyze TF */ + action |= ata_eh_analyze_tf(qc, &qc->result_tf); + + /* DEV errors are probably spurious in case of ATA_BUS error */ + if (qc->err_mask & AC_ERR_ATA_BUS) + qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_MEDIA | + AC_ERR_INVALID); + + /* any real error trumps unknown error */ + if (qc->err_mask & ~AC_ERR_OTHER) + qc->err_mask &= ~AC_ERR_OTHER; + + /* SENSE_VALID trumps dev/unknown error and revalidation */ + if (qc->flags & ATA_QCFLAG_SENSE_VALID) { + qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_OTHER); + action &= ~ATA_EH_REVALIDATE; + } + + /* accumulate error info */ + failed_dev = qc->dev; + all_err_mask |= qc->err_mask; + if (qc->flags & ATA_QCFLAG_IO) + is_io = 1; + } + + /* speed down iff command was in progress */ + if (failed_dev) + action |= ata_eh_speed_down(failed_dev, is_io, all_err_mask); + + /* enforce default EH actions */ + if (ap->flags & ATA_FLAG_FROZEN || + all_err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT)) + action |= ATA_EH_SOFTRESET; + else if (all_err_mask) + action |= ATA_EH_REVALIDATE; + + /* record autopsy result */ + ehc->i.dev = failed_dev; + ehc->i.action = action; + + DPRINTK("EXIT\n"); +} + +/** + * ata_eh_report - report error handling to user + * @ap: ATA port EH is going on + * + * Report EH to user. + * + * LOCKING: + * None. 
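For orientation, a hedged sketch (assumed wiring, not part of this patch) of how a new-EH LLDD is expected to hook into these pieces: ->freeze/->thaw typically just mask and unmask the port's interrupt sources, and ->error_handler delegates to ata_do_eh(), which runs the autopsy/report/recover/finish sequence defined in the remainder of this file. The example_* helpers are made up; the reset methods named are libata's stock ones.

static void example_freeze(struct ata_port *ap)
{
	example_mask_port_irq(ap);		/* hypothetical hardware masking */
}

static void example_thaw(struct ata_port *ap)
{
	example_clear_and_unmask_port_irq(ap);	/* hypothetical */
}

static void example_error_handler(struct ata_port *ap)
{
	ata_do_eh(ap, ata_std_softreset, sata_std_hardreset,
		  ata_std_postreset);
}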
+ */ +static void ata_eh_report(struct ata_port *ap) +{ + struct ata_eh_context *ehc = &ap->eh_context; + const char *frozen, *desc; + int tag, nr_failed = 0; + + desc = NULL; + if (ehc->i.desc[0] != '\0') + desc = ehc->i.desc; + + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + + if (!(qc->flags & ATA_QCFLAG_FAILED)) + continue; + if (qc->flags & ATA_QCFLAG_SENSE_VALID && !qc->err_mask) + continue; + + nr_failed++; + } + + if (!nr_failed && !ehc->i.err_mask) + return; + + frozen = ""; + if (ap->flags & ATA_FLAG_FROZEN) + frozen = " frozen"; + + if (ehc->i.dev) { + ata_dev_printk(ehc->i.dev, KERN_ERR, "exception Emask 0x%x " + "SAct 0x%x SErr 0x%x action 0x%x%s\n", + ehc->i.err_mask, ap->sactive, ehc->i.serror, + ehc->i.action, frozen); + if (desc) + ata_dev_printk(ehc->i.dev, KERN_ERR, "(%s)\n", desc); + } else { + ata_port_printk(ap, KERN_ERR, "exception Emask 0x%x " + "SAct 0x%x SErr 0x%x action 0x%x%s\n", + ehc->i.err_mask, ap->sactive, ehc->i.serror, + ehc->i.action, frozen); + if (desc) + ata_port_printk(ap, KERN_ERR, "(%s)\n", desc); + } + + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + + if (!(qc->flags & ATA_QCFLAG_FAILED) || !qc->err_mask) + continue; + + ata_dev_printk(qc->dev, KERN_ERR, "tag %d cmd 0x%x " + "Emask 0x%x stat 0x%x err 0x%x (%s)\n", + qc->tag, qc->tf.command, qc->err_mask, + qc->result_tf.command, qc->result_tf.feature, + ata_err_string(qc->err_mask)); + } +} + +static int ata_eh_reset(struct ata_port *ap, ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, ata_postreset_fn_t postreset) +{ + struct ata_eh_context *ehc = &ap->eh_context; + unsigned int classes[ATA_MAX_DEVICES]; + int tries = ATA_EH_RESET_TRIES; + ata_reset_fn_t reset; + int rc; + + if (softreset && (!hardreset || (!sata_set_spd_needed(ap) && + !(ehc->i.action & ATA_EH_HARDRESET)))) + reset = softreset; + else + reset = hardreset; + + retry: + ata_port_printk(ap, KERN_INFO, "%s resetting port\n", + reset == softreset ? "soft" : "hard"); + + /* reset */ + ata_eh_about_to_do(ap, ATA_EH_RESET_MASK); + ehc->i.flags |= ATA_EHI_DID_RESET; + + rc = ata_do_reset(ap, reset, classes); + + if (rc && --tries) { + ata_port_printk(ap, KERN_WARNING, + "%sreset failed, retrying in 5 secs\n", + reset == softreset ? 
"soft" : "hard"); + ssleep(5); + + if (reset == hardreset) + sata_down_spd_limit(ap); + if (hardreset) + reset = hardreset; + goto retry; + } + + if (rc == 0) { + if (postreset) + postreset(ap, classes); + + /* reset successful, schedule revalidation */ + ehc->i.dev = NULL; + ehc->i.action &= ~ATA_EH_RESET_MASK; + ehc->i.action |= ATA_EH_REVALIDATE; + } + + return rc; +} + +static int ata_eh_revalidate(struct ata_port *ap, + struct ata_device **r_failed_dev) +{ + struct ata_eh_context *ehc = &ap->eh_context; + struct ata_device *dev; + int i, rc = 0; + + DPRINTK("ENTER\n"); + + for (i = 0; i < ATA_MAX_DEVICES; i++) { + dev = &ap->device[i]; + + if (ehc->i.action & ATA_EH_REVALIDATE && ata_dev_enabled(dev) && + (!ehc->i.dev || ehc->i.dev == dev)) { + if (ata_port_offline(ap)) { + rc = -EIO; + break; + } + + ata_eh_about_to_do(ap, ATA_EH_REVALIDATE); + rc = ata_dev_revalidate(dev, + ehc->i.flags & ATA_EHI_DID_RESET); + if (rc) + break; + + ehc->i.action &= ~ATA_EH_REVALIDATE; + } + } + + if (rc) + *r_failed_dev = dev; + + DPRINTK("EXIT\n"); + return rc; +} + +static int ata_port_nr_enabled(struct ata_port *ap) +{ + int i, cnt = 0; + + for (i = 0; i < ATA_MAX_DEVICES; i++) + if (ata_dev_enabled(&ap->device[i])) + cnt++; + return cnt; +} + +/** + * ata_eh_recover - recover host port after error + * @ap: host port to recover + * @softreset: softreset method (can be NULL) + * @hardreset: hardreset method (can be NULL) + * @postreset: postreset method (can be NULL) + * + * This is the alpha and omega, eum and yang, heart and soul of + * libata exception handling. On entry, actions required to + * recover each devices are recorded in eh_context. This + * function executes all the operations with appropriate retrials + * and fallbacks to resurrect failed devices. + * + * LOCKING: + * Kernel thread context (may sleep). + * + * RETURNS: + * 0 on success, -errno on failure. + */ +static int ata_eh_recover(struct ata_port *ap, ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, + ata_postreset_fn_t postreset) +{ + struct ata_eh_context *ehc = &ap->eh_context; + struct ata_device *dev; + int down_xfermask, i, rc; + + DPRINTK("ENTER\n"); + + /* prep for recovery */ + for (i = 0; i < ATA_MAX_DEVICES; i++) { + dev = &ap->device[i]; + + ehc->tries[dev->devno] = ATA_EH_DEV_TRIES; + } + + retry: + down_xfermask = 0; + rc = 0; + + /* skip EH if possible. */ + if (!ata_port_nr_enabled(ap) && !(ap->flags & ATA_FLAG_FROZEN)) + ehc->i.action = 0; + + /* reset */ + if (ehc->i.action & ATA_EH_RESET_MASK) { + ata_eh_freeze_port(ap); + + rc = ata_eh_reset(ap, softreset, hardreset, postreset); + if (rc) { + ata_port_printk(ap, KERN_ERR, + "reset failed, giving up\n"); + goto out; + } + + ata_eh_thaw_port(ap); + } + + /* revalidate existing devices */ + rc = ata_eh_revalidate(ap, &dev); + if (rc) + goto dev_fail; + + /* configure transfer mode if the port has been reset */ + if (ehc->i.flags & ATA_EHI_DID_RESET) { + rc = ata_set_mode(ap, &dev); + if (rc) { + down_xfermask = 1; + goto dev_fail; + } + } + + goto out; + + dev_fail: + switch (rc) { + case -ENODEV: + case -EINVAL: + ehc->tries[dev->devno] = 0; + break; + case -EIO: + sata_down_spd_limit(ap); + default: + ehc->tries[dev->devno]--; + if (down_xfermask && + ata_down_xfermask_limit(dev, ehc->tries[dev->devno] == 1)) + ehc->tries[dev->devno] = 0; + } + + /* disable device if it has used up all its chances */ + if (ata_dev_enabled(dev) && !ehc->tries[dev->devno]) + ata_dev_disable(dev); + + /* soft didn't work? 
be haaaaard */ + if (ehc->i.flags & ATA_EHI_DID_RESET) + ehc->i.action |= ATA_EH_HARDRESET; + else + ehc->i.action |= ATA_EH_SOFTRESET; + + if (ata_port_nr_enabled(ap)) { + ata_port_printk(ap, KERN_WARNING, "failed to recover some " + "devices, retrying in 5 secs\n"); + ssleep(5); + } else { + /* no device left, repeat fast */ + msleep(500); + } + + goto retry; + + out: + if (rc) { + for (i = 0; i < ATA_MAX_DEVICES; i++) + ata_dev_disable(&ap->device[i]); + } + + DPRINTK("EXIT, rc=%d\n", rc); + return rc; +} + +/** + * ata_eh_finish - finish up EH + * @ap: host port to finish EH for + * + * Recovery is complete. Clean up EH states and retry or finish + * failed qcs. + * + * LOCKING: + * None. + */ +static void ata_eh_finish(struct ata_port *ap) +{ + int tag; + + /* retry or finish qcs */ + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + + if (!(qc->flags & ATA_QCFLAG_FAILED)) + continue; + + if (qc->err_mask) { + /* FIXME: Once EH migration is complete, + * generate sense data in this function, + * considering both err_mask and tf. + */ + if (qc->err_mask & AC_ERR_INVALID) + ata_eh_qc_complete(qc); + else + ata_eh_qc_retry(qc); + } else { + if (qc->flags & ATA_QCFLAG_SENSE_VALID) { + ata_eh_qc_complete(qc); + } else { + /* feed zero TF to sense generation */ + memset(&qc->result_tf, 0, sizeof(qc->result_tf)); + ata_eh_qc_retry(qc); + } + } + } +} + +/** + * ata_do_eh - do standard error handling + * @ap: host port to handle error for + * @softreset: softreset method (can be NULL) + * @hardreset: hardreset method (can be NULL) + * @postreset: postreset method (can be NULL) + * + * Perform standard error handling sequence. + * + * LOCKING: + * Kernel thread context (may sleep). + */ +void ata_do_eh(struct ata_port *ap, ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, ata_postreset_fn_t postreset) +{ + ata_eh_autopsy(ap); + ata_eh_report(ap); + ata_eh_recover(ap, softreset, hardreset, postreset); + ata_eh_finish(ap); +} diff -puN drivers/scsi/libata.h~git-libata-all drivers/scsi/libata.h --- devel/drivers/scsi/libata.h~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/libata.h 2006-06-09 15:21:16.000000000 -0700 @@ -29,10 +29,9 @@ #define __LIBATA_H__ #define DRV_NAME "libata" -#define DRV_VERSION "1.20" /* must be exactly four chars */ +#define DRV_VERSION "1.30" /* must be exactly four chars */ struct ata_scsi_args { - struct ata_port *ap; struct ata_device *dev; u16 *id; struct scsi_cmnd *cmd; @@ -41,13 +40,24 @@ struct ata_scsi_args { /* libata-core.c */ extern int atapi_enabled; +extern int atapi_dmadir; extern int libata_fua; -extern struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, - struct ata_device *dev); +extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev); extern int ata_rwcmd_protocol(struct ata_queued_cmd *qc); +extern void ata_dev_disable(struct ata_device *dev); extern void ata_port_flush_task(struct ata_port *ap); +extern unsigned ata_exec_internal(struct ata_device *dev, + struct ata_taskfile *tf, const u8 *cdb, + int dma_dir, void *buf, unsigned int buflen); +extern int sata_down_spd_limit(struct ata_port *ap); +extern int sata_set_spd_needed(struct ata_port *ap); +extern int ata_down_xfermask_limit(struct ata_device *dev, int force_pio0); +extern int ata_set_mode(struct ata_port *ap, struct ata_device **r_failed_dev); +extern int ata_do_reset(struct ata_port *ap, ata_reset_fn_t reset, + unsigned int *classes); extern void ata_qc_free(struct ata_queued_cmd 
*qc); extern void ata_qc_issue(struct ata_queued_cmd *qc); +extern void __ata_qc_complete(struct ata_queued_cmd *qc); extern int ata_check_atapi_dma(struct ata_queued_cmd *qc); extern void ata_dev_select(struct ata_port *ap, unsigned int device, unsigned int wait, unsigned int can_sleep); @@ -88,5 +98,11 @@ extern void ata_scsi_set_sense(struct sc extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args, unsigned int (*actor) (struct ata_scsi_args *args, u8 *rbuf, unsigned int buflen)); +extern void ata_schedule_scsi_eh(struct Scsi_Host *shost); + +/* libata-eh.c */ +extern enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd); +extern void ata_scsi_error(struct Scsi_Host *host); +extern void ata_qc_schedule_eh(struct ata_queued_cmd *qc); #endif /* __LIBATA_H__ */ diff -puN drivers/scsi/libata-scsi.c~git-libata-all drivers/scsi/libata-scsi.c --- devel/drivers/scsi/libata-scsi.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/libata-scsi.c 2006-06-09 15:21:16.000000000 -0700 @@ -41,6 +41,7 @@ #include #include #include +#include #include #include #include @@ -53,8 +54,6 @@ typedef unsigned int (*ata_xlat_func_t)(struct ata_queued_cmd *qc, const u8 *scsicmd); static struct ata_device * ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev); -static void ata_scsi_error(struct Scsi_Host *host); -enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd); #define RW_RECOVERY_MPAGE 0x1 #define RW_RECOVERY_MPAGE_LEN 12 @@ -304,7 +303,6 @@ int ata_scsi_ioctl(struct scsi_device *s /** * ata_scsi_qc_new - acquire new ata_queued_cmd reference - * @ap: ATA port to which the new command is attached * @dev: ATA device to which the new command is attached * @cmd: SCSI command that originated this ATA command * @done: SCSI command completion function @@ -323,14 +321,13 @@ int ata_scsi_ioctl(struct scsi_device *s * RETURNS: * Command allocated, or %NULL if none available. */ -struct ata_queued_cmd *ata_scsi_qc_new(struct ata_port *ap, - struct ata_device *dev, +struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev, struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) { struct ata_queued_cmd *qc; - qc = ata_qc_new_init(ap, dev); + qc = ata_qc_new_init(dev); if (qc) { qc->scsicmd = cmd; qc->scsidone = done; @@ -397,18 +394,18 @@ void ata_dump_status(unsigned id, struct int ata_scsi_device_resume(struct scsi_device *sdev) { - struct ata_port *ap = (struct ata_port *) &sdev->host->hostdata[0]; + struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_device *dev = &ap->device[sdev->id]; - return ata_device_resume(ap, dev); + return ata_device_resume(dev); } int ata_scsi_device_suspend(struct scsi_device *sdev, pm_message_t state) { - struct ata_port *ap = (struct ata_port *) &sdev->host->hostdata[0]; + struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_device *dev = &ap->device[sdev->id]; - return ata_device_suspend(ap, dev, state); + return ata_device_suspend(dev, state); } /** @@ -419,6 +416,7 @@ int ata_scsi_device_suspend(struct scsi_ * @sk: the sense key we'll fill out * @asc: the additional sense code we'll fill out * @ascq: the additional sense code qualifier we'll fill out + * @verbose: be verbose * * Converts an ATA error into a SCSI error. 
Fill out pointers to * SK, ASC, and ASCQ bytes for later use in fixed or descriptor @@ -428,7 +426,7 @@ int ata_scsi_device_suspend(struct scsi_ * spin_lock_irqsave(host_set lock) */ void ata_to_sense_error(unsigned id, u8 drv_stat, u8 drv_err, u8 *sk, u8 *asc, - u8 *ascq) + u8 *ascq, int verbose) { int i; @@ -493,8 +491,9 @@ void ata_to_sense_error(unsigned id, u8 } } /* No immediate match */ - printk(KERN_WARNING "ata%u: no sense translation for " - "error 0x%02x\n", id, drv_err); + if (verbose) + printk(KERN_WARNING "ata%u: no sense translation for " + "error 0x%02x\n", id, drv_err); } /* Fall back to interpreting status bits */ @@ -507,8 +506,9 @@ void ata_to_sense_error(unsigned id, u8 } } /* No error? Undecoded? */ - printk(KERN_WARNING "ata%u: no sense translation for status: 0x%02x\n", - id, drv_stat); + if (verbose) + printk(KERN_WARNING "ata%u: no sense translation for " + "status: 0x%02x\n", id, drv_stat); /* We need a sensible error return here, which is tricky, and one that won't cause people to do things like return a disk wrongly */ @@ -517,9 +517,10 @@ void ata_to_sense_error(unsigned id, u8 *ascq = 0x00; translate_done: - printk(KERN_ERR "ata%u: translated ATA stat/err 0x%02x/%02x to " - "SCSI SK/ASC/ASCQ 0x%x/%02x/%02x\n", id, drv_stat, drv_err, - *sk, *asc, *ascq); + if (verbose) + printk(KERN_ERR "ata%u: translated ATA stat/err 0x%02x/%02x " + "to SCSI SK/ASC/ASCQ 0x%x/%02x/%02x\n", + id, drv_stat, drv_err, *sk, *asc, *ascq); return; } @@ -539,27 +540,23 @@ void ata_to_sense_error(unsigned id, u8 void ata_gen_ata_desc_sense(struct ata_queued_cmd *qc) { struct scsi_cmnd *cmd = qc->scsicmd; - struct ata_taskfile *tf = &qc->tf; + struct ata_taskfile *tf = &qc->result_tf; unsigned char *sb = cmd->sense_buffer; unsigned char *desc = sb + 8; + int verbose = qc->ap->ops->error_handler == NULL; memset(sb, 0, SCSI_SENSE_BUFFERSIZE); cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION; /* - * Read the controller registers. - */ - WARN_ON(qc->ap->ops->tf_read == NULL); - qc->ap->ops->tf_read(qc->ap, tf); - - /* * Use ata_to_sense_error() to map status register bits * onto sense key, asc & ascq. */ - if (tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { + if (qc->err_mask || + tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { ata_to_sense_error(qc->ap->id, tf->command, tf->feature, - &sb[1], &sb[2], &sb[3]); + &sb[1], &sb[2], &sb[3], verbose); sb[1] &= 0x0f; } @@ -615,26 +612,22 @@ void ata_gen_ata_desc_sense(struct ata_q void ata_gen_fixed_sense(struct ata_queued_cmd *qc) { struct scsi_cmnd *cmd = qc->scsicmd; - struct ata_taskfile *tf = &qc->tf; + struct ata_taskfile *tf = &qc->result_tf; unsigned char *sb = cmd->sense_buffer; + int verbose = qc->ap->ops->error_handler == NULL; memset(sb, 0, SCSI_SENSE_BUFFERSIZE); cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION; /* - * Read the controller registers. - */ - WARN_ON(qc->ap->ops->tf_read == NULL); - qc->ap->ops->tf_read(qc->ap, tf); - - /* * Use ata_to_sense_error() to map status register bits * onto sense key, asc & ascq. 
*/ - if (tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { + if (qc->err_mask || + tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { ata_to_sense_error(qc->ap->id, tf->command, tf->feature, - &sb[2], &sb[12], &sb[13]); + &sb[2], &sb[12], &sb[13], verbose); sb[2] &= 0x0f; } @@ -677,7 +670,7 @@ static void ata_scsi_dev_config(struct s */ max_sectors = ATA_MAX_SECTORS; if (dev->flags & ATA_DFLAG_LBA48) - max_sectors = 2048; + max_sectors = ATA_MAX_SECTORS_LBA48; if (dev->max_sectors) max_sectors = dev->max_sectors; @@ -692,6 +685,14 @@ static void ata_scsi_dev_config(struct s request_queue_t *q = sdev->request_queue; blk_queue_max_hw_segments(q, q->max_hw_segments - 1); } + + if (dev->flags & ATA_DFLAG_NCQ) { + int depth; + + depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id)); + depth = min(ATA_MAX_QUEUE - 1, depth); + scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, depth); + } } /** @@ -716,7 +717,7 @@ int ata_scsi_slave_config(struct scsi_de struct ata_port *ap; struct ata_device *dev; - ap = (struct ata_port *) &sdev->host->hostdata[0]; + ap = ata_shost_to_port(sdev->host); dev = &ap->device[sdev->id]; ata_scsi_dev_config(sdev, dev); @@ -726,134 +727,40 @@ int ata_scsi_slave_config(struct scsi_de } /** - * ata_scsi_timed_out - SCSI layer time out callback - * @cmd: timed out SCSI command + * ata_scsi_change_queue_depth - SCSI callback for queue depth config + * @sdev: SCSI device to configure queue depth for + * @queue_depth: new queue depth * - * Handles SCSI layer timeout. We race with normal completion of - * the qc for @cmd. If the qc is already gone, we lose and let - * the scsi command finish (EH_HANDLED). Otherwise, the qc has - * timed out and EH should be invoked. Prevent ata_qc_complete() - * from finishing it by setting EH_SCHEDULED and return - * EH_NOT_HANDLED. + * This is libata standard hostt->change_queue_depth callback. + * SCSI will call into this callback when user tries to set queue + * depth via sysfs. * * LOCKING: - * Called from timer context + * SCSI layer (we don't care) * * RETURNS: - * EH_HANDLED or EH_NOT_HANDLED + * Newly configured queue depth. */ -enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd) +int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth) { - struct Scsi_Host *host = cmd->device->host; - struct ata_port *ap = (struct ata_port *) &host->hostdata[0]; - unsigned long flags; - struct ata_queued_cmd *qc; - enum scsi_eh_timer_return ret = EH_HANDLED; - - DPRINTK("ENTER\n"); - - spin_lock_irqsave(&ap->host_set->lock, flags); - qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc) { - WARN_ON(qc->scsicmd != cmd); - qc->flags |= ATA_QCFLAG_EH_SCHEDULED; - qc->err_mask |= AC_ERR_TIMEOUT; - ret = EH_NOT_HANDLED; - } - spin_unlock_irqrestore(&ap->host_set->lock, flags); - - DPRINTK("EXIT, ret=%d\n", ret); - return ret; -} - -/** - * ata_scsi_error - SCSI layer error handler callback - * @host: SCSI host on which error occurred - * - * Handles SCSI-layer-thrown error events. 
- * - * LOCKING: - * Inherited from SCSI layer (none, can sleep) - */ - -static void ata_scsi_error(struct Scsi_Host *host) -{ - struct ata_port *ap; - unsigned long flags; - - DPRINTK("ENTER\n"); - - ap = (struct ata_port *) &host->hostdata[0]; - - spin_lock_irqsave(&ap->host_set->lock, flags); - WARN_ON(ap->flags & ATA_FLAG_IN_EH); - ap->flags |= ATA_FLAG_IN_EH; - WARN_ON(ata_qc_from_tag(ap, ap->active_tag) == NULL); - spin_unlock_irqrestore(&ap->host_set->lock, flags); - - ata_port_flush_task(ap); - - ap->ops->eng_timeout(ap); - - WARN_ON(host->host_failed || !list_empty(&host->eh_cmd_q)); - - scsi_eh_flush_done_q(&ap->eh_done_q); - - spin_lock_irqsave(&ap->host_set->lock, flags); - ap->flags &= ~ATA_FLAG_IN_EH; - spin_unlock_irqrestore(&ap->host_set->lock, flags); - - DPRINTK("EXIT\n"); -} - -static void ata_eh_scsidone(struct scsi_cmnd *scmd) -{ - /* nada */ -} - -static void __ata_eh_qc_complete(struct ata_queued_cmd *qc) -{ - struct ata_port *ap = qc->ap; - struct scsi_cmnd *scmd = qc->scsicmd; - unsigned long flags; - - spin_lock_irqsave(&ap->host_set->lock, flags); - qc->scsidone = ata_eh_scsidone; - __ata_qc_complete(qc); - WARN_ON(ata_tag_valid(qc->tag)); - spin_unlock_irqrestore(&ap->host_set->lock, flags); + struct ata_port *ap = ata_shost_to_port(sdev->host); + struct ata_device *dev; + int max_depth; - scsi_eh_finish_cmd(scmd, &ap->eh_done_q); -} + if (queue_depth < 1) + return sdev->queue_depth; -/** - * ata_eh_qc_complete - Complete an active ATA command from EH - * @qc: Command to complete - * - * Indicate to the mid and upper layers that an ATA command has - * completed. To be used from EH. - */ -void ata_eh_qc_complete(struct ata_queued_cmd *qc) -{ - struct scsi_cmnd *scmd = qc->scsicmd; - scmd->retries = scmd->allowed; - __ata_eh_qc_complete(qc); -} + dev = ata_scsi_find_dev(ap, sdev); + if (!dev || !ata_dev_enabled(dev)) + return sdev->queue_depth; + + max_depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id)); + max_depth = min(ATA_MAX_QUEUE - 1, max_depth); + if (queue_depth > max_depth) + queue_depth = max_depth; -/** - * ata_eh_qc_retry - Tell midlayer to retry an ATA command after EH - * @qc: Command to retry - * - * Indicate to the mid and upper layers that an ATA command - * should be retried. To be used from EH. - * - * SCSI midlayer limits the number of retries to scmd->allowed. - * This function might need to adjust scmd->retries for commands - * which get retried due to unrelated NCQ failures. 
- */ -void ata_eh_qc_retry(struct ata_queued_cmd *qc) -{ - __ata_eh_qc_complete(qc); + scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, queue_depth); + return queue_depth; } /** @@ -891,7 +798,7 @@ static unsigned int ata_scsi_start_stop_ tf->nsect = 1; /* 1 sector, lba=0 */ if (qc->dev->flags & ATA_DFLAG_LBA) { - qc->tf.flags |= ATA_TFLAG_LBA; + tf->flags |= ATA_TFLAG_LBA; tf->lbah = 0x0; tf->lbam = 0x0; @@ -1195,6 +1102,7 @@ static unsigned int ata_scsi_rw_xlat(str u64 block; u32 n_block; + qc->flags |= ATA_QCFLAG_IO; tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; if (scsicmd[0] == WRITE_10 || scsicmd[0] == WRITE_6 || @@ -1241,7 +1149,36 @@ static unsigned int ata_scsi_rw_xlat(str */ goto nothing_to_do; - if (dev->flags & ATA_DFLAG_LBA) { + if ((dev->flags & (ATA_DFLAG_PIO | ATA_DFLAG_NCQ)) == ATA_DFLAG_NCQ) { + /* yay, NCQ */ + if (!lba_48_ok(block, n_block)) + goto out_of_range; + + tf->protocol = ATA_PROT_NCQ; + tf->flags |= ATA_TFLAG_LBA | ATA_TFLAG_LBA48; + + if (tf->flags & ATA_TFLAG_WRITE) + tf->command = ATA_CMD_FPDMA_WRITE; + else + tf->command = ATA_CMD_FPDMA_READ; + + qc->nsect = n_block; + + tf->nsect = qc->tag << 3; + tf->hob_feature = (n_block >> 8) & 0xff; + tf->feature = n_block & 0xff; + + tf->hob_lbah = (block >> 40) & 0xff; + tf->hob_lbam = (block >> 32) & 0xff; + tf->hob_lbal = (block >> 24) & 0xff; + tf->lbah = (block >> 16) & 0xff; + tf->lbam = (block >> 8) & 0xff; + tf->lbal = block & 0xff; + + tf->device = 1 << 6; + if (tf->flags & ATA_TFLAG_FUA) + tf->device |= 1 << 7; + } else if (dev->flags & ATA_DFLAG_LBA) { tf->flags |= ATA_TFLAG_LBA; if (lba_28_ok(block, n_block)) { @@ -1356,10 +1293,8 @@ static void ata_scsi_qc_complete(struct } } - if (need_sense) { - /* The ata_gen_..._sense routines fill in tf */ - ata_dump_status(qc->ap->id, &qc->tf); - } + if (need_sense && !qc->ap->ops->error_handler) + ata_dump_status(qc->ap->id, &qc->result_tf); qc->scsidone(cmd); @@ -1367,8 +1302,40 @@ static void ata_scsi_qc_complete(struct } /** + * ata_scmd_need_defer - Check whether we need to defer scmd + * @dev: ATA device to which the command is addressed + * @is_io: Is the command IO (and thus possibly NCQ)? + * + * NCQ and non-NCQ commands cannot run together. As upper layer + * only knows the queue depth, we are responsible for maintaining + * exclusion. This function checks whether a new command can be + * issued to @dev. + * + * LOCKING: + * spin_lock_irqsave(host_set lock) + * + * RETURNS: + * 1 if deferring is needed, 0 otherwise. + */ +static int ata_scmd_need_defer(struct ata_device *dev, int is_io) +{ + struct ata_port *ap = dev->ap; + + if (!(dev->flags & ATA_DFLAG_NCQ)) + return 0; + + if (is_io) { + if (!ata_tag_valid(ap->active_tag)) + return 0; + } else { + if (!ata_tag_valid(ap->active_tag) && !ap->sactive) + return 0; + } + return 1; +} + +/** * ata_scsi_translate - Translate then issue SCSI command to ATA device - * @ap: ATA port to which the command is addressed * @dev: ATA device to which the command is addressed * @cmd: SCSI command to execute * @done: SCSI command completion function @@ -1389,19 +1356,25 @@ static void ata_scsi_qc_complete(struct * * LOCKING: * spin_lock_irqsave(host_set lock) + * + * RETURNS: + * 0 on success, SCSI_ML_QUEUE_DEVICE_BUSY if the command + * needs to be deferred. 
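As a concrete illustration of the NCQ translation added above (numbers worked out here, not taken from the patch): an FPDMA READ of 16 sectors at LBA 0x12345678 issued with tag 5 is encoded as tf->command = ATA_CMD_FPDMA_READ, tf->nsect = 5 << 3 = 0x28 (the tag), tf->feature = 0x10 and tf->hob_feature = 0x00 (the sector count), tf->lbal/lbam/lbah = 0x78/0x56/0x34 with tf->hob_lbal = 0x12, and tf->device = 0x40, or 0xC0 when FUA is requested.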
*/ - -static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev, - struct scsi_cmnd *cmd, +static int ata_scsi_translate(struct ata_device *dev, struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *), ata_xlat_func_t xlat_func) { struct ata_queued_cmd *qc; u8 *scsicmd = cmd->cmnd; + int is_io = xlat_func == ata_scsi_rw_xlat; VPRINTK("ENTER\n"); - qc = ata_scsi_qc_new(ap, dev, cmd, done); + if (unlikely(ata_scmd_need_defer(dev, is_io))) + goto defer; + + qc = ata_scsi_qc_new(dev, cmd, done); if (!qc) goto err_mem; @@ -1409,8 +1382,8 @@ static void ata_scsi_translate(struct at if (cmd->sc_data_direction == DMA_FROM_DEVICE || cmd->sc_data_direction == DMA_TO_DEVICE) { if (unlikely(cmd->request_bufflen < 1)) { - printk(KERN_WARNING "ata%u(%u): WARNING: zero len r/w req\n", - ap->id, dev->devno); + ata_dev_printk(dev, KERN_WARNING, + "WARNING: zero len r/w req\n"); goto err_did; } @@ -1432,13 +1405,13 @@ static void ata_scsi_translate(struct at ata_qc_issue(qc); VPRINTK("EXIT\n"); - return; + return 0; early_finish: ata_qc_free(qc); done(cmd); DPRINTK("EXIT - early finish (good or error)\n"); - return; + return 0; err_did: ata_qc_free(qc); @@ -1446,7 +1419,11 @@ err_mem: cmd->result = (DID_ERROR << 16); done(cmd); DPRINTK("EXIT - internal\n"); - return; + return 0; + +defer: + DPRINTK("EXIT - defer\n"); + return SCSI_MLQUEUE_DEVICE_BUSY; } /** @@ -1944,7 +1921,7 @@ unsigned int ata_scsiop_mode_sense(struc return 0; dpofua = 0; - if (ata_dev_supports_fua(args->id) && dev->flags & ATA_DFLAG_LBA48 && + if (ata_dev_supports_fua(args->id) && (dev->flags & ATA_DFLAG_LBA48) && (!(dev->flags & ATA_DFLAG_PIO) || dev->multi_count)) dpofua = 1 << 4; @@ -2137,13 +2114,14 @@ void ata_scsi_badcmd(struct scsi_cmnd *c static void atapi_sense_complete(struct ata_queued_cmd *qc) { - if (qc->err_mask && ((qc->err_mask & AC_ERR_DEV) == 0)) + if (qc->err_mask && ((qc->err_mask & AC_ERR_DEV) == 0)) { /* FIXME: not quite right; we don't want the * translation of taskfile registers into * a sense descriptors, since that's only * correct for ATA, not ATAPI */ ata_gen_ata_desc_sense(qc); + } qc->scsidone(qc->scsicmd); ata_qc_free(qc); @@ -2207,21 +2185,38 @@ static void atapi_qc_complete(struct ata VPRINTK("ENTER, err_mask 0x%X\n", err_mask); + /* handle completion from new EH */ + if (unlikely(qc->ap->ops->error_handler && + (err_mask || qc->flags & ATA_QCFLAG_SENSE_VALID))) { + + if (!(qc->flags & ATA_QCFLAG_SENSE_VALID)) { + /* FIXME: not quite right; we don't want the + * translation of taskfile registers into a + * sense descriptors, since that's only + * correct for ATA, not ATAPI + */ + ata_gen_ata_desc_sense(qc); + } + + qc->scsicmd->result = SAM_STAT_CHECK_CONDITION; + qc->scsidone(cmd); + ata_qc_free(qc); + return; + } + + /* successful completion or old EH failure path */ if (unlikely(err_mask & AC_ERR_DEV)) { cmd->result = SAM_STAT_CHECK_CONDITION; atapi_request_sense(qc); return; - } - - else if (unlikely(err_mask)) + } else if (unlikely(err_mask)) { /* FIXME: not quite right; we don't want the * translation of taskfile registers into * a sense descriptors, since that's only * correct for ATA, not ATAPI */ ata_gen_ata_desc_sense(qc); - - else { + } else { u8 *scsicmd = cmd->cmnd; if ((scsicmd[0] == INQUIRY) && ((scsicmd[1] & 0x03) == 0)) { @@ -2303,11 +2298,9 @@ static unsigned int atapi_xlat(struct at qc->tf.protocol = ATA_PROT_ATAPI_DMA; qc->tf.feature |= ATAPI_PKT_DMA; -#ifdef ATAPI_ENABLE_DMADIR - /* some SATA bridges need us to indicate data xfer direction */ - if 
(cmd->sc_data_direction != DMA_TO_DEVICE) + if (atapi_dmadir && (cmd->sc_data_direction != DMA_TO_DEVICE)) + /* some SATA bridges need us to indicate data xfer direction */ qc->tf.feature |= ATAPI_DMADIR; -#endif } qc->nbytes = cmd->bufflen; @@ -2347,13 +2340,14 @@ ata_scsi_find_dev(struct ata_port *ap, c (scsidev->lun != 0))) return NULL; - if (unlikely(!ata_dev_present(dev))) + if (unlikely(!ata_dev_enabled(dev))) return NULL; if (!atapi_enabled || (ap->flags & ATA_FLAG_NO_ATAPI)) { if (unlikely(dev->class == ATA_DEV_ATAPI)) { - printk(KERN_WARNING "ata%u(%u): WARNING: ATAPI is %s, device ignored.\n", - ap->id, dev->devno, atapi_enabled ? "not supported with this driver" : "disabled"); + ata_dev_printk(dev, KERN_WARNING, + "WARNING: ATAPI is %s, device ignored.\n", + atapi_enabled ? "not supported with this driver" : "disabled"); return NULL; } } @@ -2414,10 +2408,15 @@ ata_scsi_pass_thru(struct ata_queued_cmd { struct ata_taskfile *tf = &(qc->tf); struct scsi_cmnd *cmd = qc->scsicmd; + struct ata_device *dev = qc->dev; if ((tf->protocol = ata_scsi_map_proto(scsicmd[1])) == ATA_PROT_UNKNOWN) goto invalid_fld; + /* We may not issue DMA commands if no DMA mode is set */ + if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0) + goto invalid_fld; + if (scsicmd[1] & 0xe0) /* PIO multi not supported yet */ goto invalid_fld; @@ -2502,6 +2501,9 @@ ata_scsi_pass_thru(struct ata_queued_cmd */ qc->nsect = cmd->bufflen / ATA_SECT_SIZE; + /* request result TF */ + qc->flags |= ATA_QCFLAG_RESULT_TF; + return 0; invalid_fld: @@ -2578,19 +2580,24 @@ static inline void ata_scsi_dump_cdb(str #endif } -static inline void __ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *), - struct ata_port *ap, struct ata_device *dev) +static inline int __ata_scsi_queuecmd(struct scsi_cmnd *cmd, + void (*done)(struct scsi_cmnd *), + struct ata_device *dev) { + int rc = 0; + if (dev->class == ATA_DEV_ATA) { ata_xlat_func_t xlat_func = ata_get_xlat_func(dev, cmd->cmnd[0]); if (xlat_func) - ata_scsi_translate(ap, dev, cmd, done, xlat_func); + rc = ata_scsi_translate(dev, cmd, done, xlat_func); else - ata_scsi_simulate(ap, dev, cmd, done); + ata_scsi_simulate(dev, cmd, done); } else - ata_scsi_translate(ap, dev, cmd, done, atapi_xlat); + rc = ata_scsi_translate(dev, cmd, done, atapi_xlat); + + return rc; } /** @@ -2609,17 +2616,18 @@ static inline void __ata_scsi_queuecmd(s * Releases scsi-layer-held lock, and obtains host_set lock. * * RETURNS: - * Zero. + * Return value from __ata_scsi_queuecmd() if @cmd can be queued, + * 0 otherwise. */ - int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) { struct ata_port *ap; struct ata_device *dev; struct scsi_device *scsidev = cmd->device; struct Scsi_Host *shost = scsidev->host; + int rc = 0; - ap = (struct ata_port *) &shost->hostdata[0]; + ap = ata_shost_to_port(shost); spin_unlock(shost->host_lock); spin_lock(&ap->host_set->lock); @@ -2628,7 +2636,7 @@ int ata_scsi_queuecmd(struct scsi_cmnd * dev = ata_scsi_find_dev(ap, scsidev); if (likely(dev)) - __ata_scsi_queuecmd(cmd, done, ap, dev); + rc = __ata_scsi_queuecmd(cmd, done, dev); else { cmd->result = (DID_BAD_TARGET << 16); done(cmd); @@ -2636,12 +2644,11 @@ int ata_scsi_queuecmd(struct scsi_cmnd * spin_unlock(&ap->host_set->lock); spin_lock(shost->host_lock); - return 0; + return rc; } /** * ata_scsi_simulate - simulate SCSI command on ATA device - * @ap: port the device is connected to * @dev: the target device * @cmd: SCSI command being sent to device. 
* @done: SCSI command completion function. @@ -2653,14 +2660,12 @@ int ata_scsi_queuecmd(struct scsi_cmnd * * spin_lock_irqsave(host_set lock) */ -void ata_scsi_simulate(struct ata_port *ap, struct ata_device *dev, - struct scsi_cmnd *cmd, +void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) { struct ata_scsi_args args; const u8 *scsicmd = cmd->cmnd; - args.ap = ap; args.dev = dev; args.id = dev->id; args.cmd = cmd; @@ -2735,14 +2740,13 @@ void ata_scsi_scan_host(struct ata_port struct ata_device *dev; unsigned int i; - if (ap->flags & ATA_FLAG_PORT_DISABLED) + if (ap->flags & ATA_FLAG_DISABLED) return; for (i = 0; i < ATA_MAX_DEVICES; i++) { dev = &ap->device[i]; - if (ata_dev_present(dev)) + if (ata_dev_enabled(dev)) scsi_scan_target(&ap->host->shost_gendev, 0, i, 0, 0); } } - diff -puN drivers/scsi/Makefile~git-libata-all drivers/scsi/Makefile --- devel/drivers/scsi/Makefile~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/Makefile 2006-06-09 15:21:16.000000000 -0700 @@ -123,16 +123,27 @@ obj-$(CONFIG_SCSI_NSP32) += nsp32.o obj-$(CONFIG_SCSI_IPR) += ipr.o obj-$(CONFIG_SCSI_IBMVSCSI) += ibmvscsi/ obj-$(CONFIG_SCSI_SATA_AHCI) += libata.o ahci.o +obj-$(CONFIG_SCSI_PATA_ALI) += libata.o pata_ali.o +obj-$(CONFIG_SCSI_PATA_AMD) += libata.o pata_amd.o obj-$(CONFIG_SCSI_SATA_SVW) += libata.o sata_svw.o +obj-$(CONFIG_SCSI_PATA_OPTI) += libata.o pata_opti.o +obj-$(CONFIG_SCSI_PATA_MPIIX) += libata.o pata_mpiix.o +obj-$(CONFIG_SCSI_PATA_OLDPIIX) += libata.o pata_oldpiix.o obj-$(CONFIG_SCSI_ATA_PIIX) += libata.o ata_piix.o +obj-$(CONFIG_SCSI_PATA_NETCELL) += libata.o pata_netcell.o +obj-$(CONFIG_SCSI_PATA_PDC2027X)+= libata.o pata_pdc2027x.o obj-$(CONFIG_SCSI_SATA_PROMISE) += libata.o sata_promise.o obj-$(CONFIG_SCSI_SATA_QSTOR) += libata.o sata_qstor.o obj-$(CONFIG_SCSI_SATA_SIL) += libata.o sata_sil.o obj-$(CONFIG_SCSI_SATA_SIL24) += libata.o sata_sil24.o +obj-$(CONFIG_SCSI_PATA_SIL680) += libata.o pata_sil680.o obj-$(CONFIG_SCSI_SATA_VIA) += libata.o sata_via.o +obj-$(CONFIG_SCSI_PATA_VIA) += libata.o pata_via.o obj-$(CONFIG_SCSI_SATA_VITESSE) += libata.o sata_vsc.o +obj-$(CONFIG_SCSI_PATA_SIS) += libata.o pata_sis.o obj-$(CONFIG_SCSI_SATA_SIS) += libata.o sata_sis.o obj-$(CONFIG_SCSI_SATA_SX4) += libata.o sata_sx4.o +obj-$(CONFIG_SCSI_PATA_TRIFLEX) += libata.o pata_triflex.o obj-$(CONFIG_SCSI_SATA_NV) += libata.o sata_nv.o obj-$(CONFIG_SCSI_SATA_ULI) += libata.o sata_uli.o obj-$(CONFIG_SCSI_SATA_MV) += libata.o sata_mv.o @@ -165,7 +176,7 @@ ncr53c8xx-flags-$(CONFIG_SCSI_ZALON) \ CFLAGS_ncr53c8xx.o := $(ncr53c8xx-flags-y) $(ncr53c8xx-flags-m) zalon7xx-objs := zalon.o ncr53c8xx.o NCR_Q720_mod-objs := NCR_Q720.o ncr53c8xx.o -libata-objs := libata-core.o libata-scsi.o libata-bmdma.o +libata-objs := libata-core.o libata-scsi.o libata-bmdma.o libata-eh.o oktagon_esp_mod-objs := oktagon_esp.o oktagon_io.o # Files generated that shall be removed upon make clean diff -puN /dev/null drivers/scsi/pata_ali.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_ali.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,659 @@ +/* + * pata_ali.c - ALI 15x3 PATA for new ATA layer + * (C) 2005 Red Hat Inc + * Alan Cox + * + * based in part upon + * linux/drivers/ide/pci/alim15x3.c Version 0.17 2003/01/02 + * + * Copyright (C) 1998-2000 Michel Aubry, Maintainer + * Copyright (C) 1998-2000 Andrzej Krzysztofowicz, Maintainer + * Copyright (C) 1999-2000 CJ, cjtsai@ali.com.tw, Maintainer + * + * Copyright (C) 
1998-2000 Andre Hedrick (andre@linux-ide.org) + * May be copied or modified under the terms of the GNU General Public License + * Copyright (C) 2002 Alan Cox + * ALi (now ULi M5228) support by Clear Zhang + * + * Documentation + * Chipset documentation available under NDA only + * + * TODO/CHECK + * Cannot have ATAPI on both master & slave for rev < c2 (???) but + * otherwise should do atapi DMA. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_ali" +#define DRV_VERSION "0.5.5" + +/* + * Cable special cases + */ + +static struct dmi_system_id cable_dmi_table[] = { + { + .ident = "HP Pavilion N5430", + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), + DMI_MATCH(DMI_BOARD_NAME, "OmniBook N32N-736"), + }, + }, + { } +}; + +static int ali_cable_override(struct pci_dev *pdev) +{ + /* Fujitsu P2000 */ + if (pdev->subsystem_vendor == 0x10CF && pdev->subsystem_device == 0x10AF) + return 1; + /* Systems by DMI */ + if (dmi_check_system(cable_dmi_table)) + return 1; + return 0; +} + +/** + * ali_c2_cable_detect - cable detection + * @ap: ATA port + * + * Perform cable detection for C2 and later revisions + */ + +static int ali_c2_cable_detect(struct ata_port *ap) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u8 ata66; + + /* Certain laptops use short but suitable cables and don't + implement the detect logic */ + + if (ali_cable_override(pdev)) + return ATA_CBL_PATA80; + + /* Host view cable detect 0x4A bit 0 primary bit 1 secondary + Bit set for 40 pin */ + pci_read_config_byte(pdev, 0x4A, &ata66); + if (ata66 & (1 << ap->hard_port_no)) + return ATA_CBL_PATA40; + else + return ATA_CBL_PATA80; +} + +/** + * ali_c2_probe_init - probe setup for cable detect chips + * @ap: ATA port + * + * Handle the probe setup for the later chips with cable detect + */ + +static void ali_c2_probe_init(struct ata_port *ap) +{ + ap->cbl = ali_c2_cable_detect(ap); + ata_std_probeinit(ap); +} + +static int ali_c2_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + return ata_drive_probe_reset(ap, ali_c2_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * ali_early_cable_detect - cable detection + * @ap: ATA port + * + * Perform cable detection for older chipsets. This turns out to be + * rather easy to implement + */ + +static int ali_early_cable_detect(struct ata_port *ap) +{ + return ATA_CBL_PATA40; +} + +/** + * ali_early_probe_init - probe setup for early chip + * @ap: ATA port + * + * Handle the probe setup for the early (pre cable detect) chips. + */ + +static void ali_early_probe_init(struct ata_port *ap) +{ + ap->cbl = ali_early_cable_detect(ap); + ata_std_probeinit(ap); +} + +static int ali_early_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + return ata_drive_probe_reset(ap, ali_early_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * ali_20_filter - filter for earlier ALI DMA + * @ap: ALi ATA port + * @adev: attached device + * + * Ensure that we do not do DMA on CD devices. We may be able to + * fix that later on.
Also ensure we do not do UDMA on WDC drives + */ + +static unsigned long ali_20_filter(const struct ata_port *ap, struct ata_device *adev, unsigned long mask) +{ + char model_num[40]; + /* No DMA on anything but a disk for now */ + if (adev->class != ATA_DEV_ATA) + mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); + ata_id_string(adev->id, model_num, ATA_ID_PROD_OFS, sizeof(model_num)); + if (strstr(model_num, "WDC")) + return mask &= ~ATA_MASK_UDMA; + return ata_pci_default_filter(ap, adev, mask); +} + +/** + * ali_fifo_control - FIFO manager + * @ap: ALi channel to control + * @adev: device for FIFO control + * @on: 0 for off 1 for on + * + * Enable or disable the FIFO on a given device. Because of the way the + * ALi FIFO works it provides a boost on ATA disk but can be confused by + * ATAPI and we must therefore manage it. + */ + +static void ali_fifo_control(struct ata_port *ap, struct ata_device *adev, int on) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int pio_fifo = 0x54 + ap->hard_port_no; + u8 fifo; + int shift = 4 * adev->devno; + + /* Bits 3:2 (7:6 for slave) control the PIO. 00 is off 01 + is on. The FIFO must not be used for ATAPI. We preserve + BIOS set thresholds */ + pci_read_config_byte(pdev, pio_fifo, &fifo); + fifo &= ~(0x0C << shift); + if (on) + fifo |= (on << shift); + pci_write_config_byte(pdev, pio_fifo, fifo); +} + +/** + * ali_program_modes - load mode registers + * @ap: ALi channel to load + * @adev: Device the timing is for + * @cmd: Command timing + * @data: Data timing + * @udma: UDMA timing or zero for off + * + * Loads the timing registers for cmd/data and disable UDMA if + * udma is zero. If udma is set then load and enable the UDMA + * timing but do not touch the command/data timing. + */ + +static void ali_program_modes(struct ata_port *ap, struct ata_device *adev, struct ata_timing *t, u8 ultra) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int cas = 0x58 + 4 * ap->hard_port_no; /* Command timing */ + int cbt = 0x59 + 4 * ap->hard_port_no; /* Command timing */ + int drwt = 0x5A + 4 * ap->hard_port_no + adev->devno; /* R/W timing */ + int udmat = 0x56 + ap->hard_port_no; /* UDMA timing */ + int shift = 4 * adev->devno; + u8 udma; + + if (t != NULL) { + t->setup = FIT(t->setup, 1, 8) & 7; + t->act8b = FIT(t->act8b, 1, 8) & 7; + t->rec8b = FIT(t->rec8b, 1, 16) & 15; + t->active = FIT(t->active, 1, 8) & 7; + t->recover = FIT(t->recover, 1, 16) & 15; + + pci_write_config_byte(pdev, cas, t->setup); + pci_write_config_byte(pdev, cbt, (t->act8b << 4) | t->rec8b); + pci_write_config_byte(pdev, drwt, (t->active << 4) | t->recover); + } + + /* Set up the UDMA enable */ + pci_read_config_byte(pdev, udmat, &udma); + udma &= ~(0x0F << shift); + udma |= ultra << shift; + pci_write_config_byte(pdev, udmat, udma); +} + +/** + * ali_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the ALi registers for PIO mode. FIXME: add timings for + * PIO5. 
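[Editor's aside, not part of the patch] The FIFO byte that ali_fifo_control() above rewrites packs the master setting into bits 3:2 and the slave setting into bits 7:6, leaving the BIOS threshold bits alone. A minimal user-space sketch of just that packing follows; ali_fifo_pack() and the sample values are invented for illustration and are not driver code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: models the ALi 0x54/0x55 FIFO byte packing.
 * Bits 3:2 carry the master setting, bits 7:6 the slave setting;
 * the BIOS-programmed threshold bits below them are preserved. */
static uint8_t ali_fifo_pack(uint8_t fifo, int devno, uint8_t on)
{
        int shift = 4 * devno;          /* 0 for master, 4 for slave */

        fifo &= ~(0x0C << shift);       /* clear this device's field */
        fifo |= on << shift;            /* e.g. 0x04 = FIFO on, 0 = off */
        return fifo;
}

int main(void)
{
        /* keep BIOS threshold bits, enable the slave FIFO: 0x03 -> 0x43 */
        printf("0x%02x\n", ali_fifo_pack(0x03, 1, 0x04));
        return 0;
}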
+ */ + +static void ali_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + struct ata_device *pair = ata_dev_pair(adev); + struct ata_timing t; + unsigned long T = 1000000000 / 33333; /* PCI clock based */ + + ata_timing_compute(adev, adev->pio_mode, &t, T, 1); + if (pair) { + struct ata_timing p; + ata_timing_compute(pair, pair->pio_mode, &p, T, 1); + ata_timing_merge(&p, &t, &t, ATA_TIMING_SETUP|ATA_TIMING_8BIT); + if (pair->dma_mode) { + ata_timing_compute(pair, pair->dma_mode, &p, T, 1); + ata_timing_merge(&p, &t, &t, ATA_TIMING_SETUP|ATA_TIMING_8BIT); + } + } + + /* PIO FIFO is only permitted on ATA disk */ + if (adev->class != ATA_DEV_ATA) + ali_fifo_control(ap, adev, 0); + ali_program_modes(ap, adev, &t, 0); + if (adev->class == ATA_DEV_ATA) + ali_fifo_control(ap, adev, 0x04); + +} + +/** + * ali_set_dmamode - set initial DMA mode data + * @ap: ATA interface + * @adev: ATA device + * + * FIXME: MWDMA timings + */ + +static void ali_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + static u8 udma_timing[7] = { 0xC, 0xB, 0xA, 0x9, 0x8, 0xF, 0xD }; + struct ata_device *pair = ata_dev_pair(adev); + struct ata_timing t; + unsigned long T = 1000000000 / 33333; /* PCI clock based */ + + + if (adev->class == ATA_DEV_ATA) + ali_fifo_control(ap, adev, 0x08); + + if (adev->dma_mode >= XFER_UDMA_0) { + ali_program_modes(ap, adev, NULL, udma_timing[adev->dma_mode - XFER_UDMA_0]); + } else { + ata_timing_compute(adev, adev->dma_mode, &t, T, 1); + if (pair) { + struct ata_timing p; + ata_timing_compute(pair, pair->pio_mode, &p, T, 1); + ata_timing_merge(&p, &t, &t, ATA_TIMING_SETUP|ATA_TIMING_8BIT); + if (pair->dma_mode) { + ata_timing_compute(pair, pair->dma_mode, &p, T, 1); + ata_timing_merge(&p, &t, &t, ATA_TIMING_SETUP|ATA_TIMING_8BIT); + } + } + ali_program_modes(ap, adev, &t, 0); + } +} + +/** + * ali_lock_sectors - Keep older devices to 255 sector mode + * @ap: ATA port + * @adev: Device + * + * Called during the bus probe for each device that is found. We use + * this call to lock the sector count of the device to 255 or less on + * older ALi controllers. If we didn't do this then large I/O's would + * require LBA48 commands which the older ALi requires are issued by + * slower PIO methods + */ + +static void ali_lock_sectors(struct ata_port *ap, struct ata_device *adev) +{ + adev->max_sectors = 255; +} + +static struct scsi_host_template ali_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + /* Keep LBA28 counts so large I/O's don't turn LBA48 and PIO + with older controllers. 
Not locked so will grow on C5 or later */ + .max_sectors = 255, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +/* + * Port operations for PIO only ALi + */ + +static struct ata_port_operations ali_early_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = ali_set_piomode, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = ali_early_probe_reset, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +/* + * Port operations for DMA capable ALi without cable + * detect + */ +static struct ata_port_operations ali_20_port_ops = { + .port_disable = ata_port_disable, + + .set_piomode = ali_set_piomode, + .set_dmamode = ali_set_dmamode, + .mode_filter = ali_20_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + .dev_config = ali_lock_sectors, + + .probe_reset = ali_early_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +/* + * Port operations for DMA capable ALi with cable detect + */ +static struct ata_port_operations ali_c2_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = ali_set_piomode, + .set_dmamode = ali_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + .dev_config = ali_lock_sectors, + + .probe_reset = ali_c2_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +/* + * Port operations for DMA capable ALi with cable detect and LBA48 + */ +static struct ata_port_operations ali_c5_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = ali_set_piomode, + .set_dmamode = ali_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = ali_c2_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + 
.eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +/** + * ali_init_one - discovery callback + * @pdev: PCI device ID + * @id: PCI table info + * + * An ALi IDE interface has been discovered. Figure out what revision + * and perform configuration work before handing it to the ATA layer + */ + +static int ali_init_one(struct pci_dev *pdev, const struct pci_device_id *id) +{ + static struct ata_port_info info_early = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .port_ops = &ali_early_port_ops + }; + /* Revision 0x20 added DMA */ + static struct ata_port_info info_20 = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST | ATA_FLAG_PIO_LBA48, + .pio_mask = 0x1f, + /*.mwdma_mask = 0x07,*/ + .port_ops = &ali_20_port_ops + }; + /* Revision 0x20 with support logic added UDMA */ + static struct ata_port_info info_20_udma = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST | ATA_FLAG_PIO_LBA48, + .pio_mask = 0x1f, + /*.mwdma_mask = 0x07, */ + .udma_mask = 0x07, /* UDMA33 */ + .port_ops = &ali_20_port_ops + }; + /* Revision 0xC2 adds UDMA66 */ + static struct ata_port_info info_c2 = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST | ATA_FLAG_PIO_LBA48, + .pio_mask = 0x1f, + /*.mwdma_mask = 0x07, */ + .udma_mask = 0x1f, + .port_ops = &ali_c2_port_ops + }; + /* Revision 0xC3 is UDMA100 */ + static struct ata_port_info info_c3 = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST | ATA_FLAG_PIO_LBA48, + .pio_mask = 0x1f, + /* .mwdma_mask = 0x07, */ + .udma_mask = 0x3f, + .port_ops = &ali_c2_port_ops + }; + /* Revision 0xC4 is UDMA133 */ + static struct ata_port_info info_c4 = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST | ATA_FLAG_PIO_LBA48, + .pio_mask = 0x1f, + /* .mwdma_mask = 0x07, */ + .udma_mask = 0x7f, + .port_ops = &ali_c2_port_ops + }; + /* Revision 0xC5 is UDMA133 with LBA48 DMA */ + static struct ata_port_info info_c5 = { + .sht = &ali_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + /* .mwdma_mask = 0x07, */ + .udma_mask = 0x7f, + .port_ops = &ali_c5_port_ops + }; + + static struct ata_port_info *port_info[2]; + u8 rev, tmp; + struct pci_dev *north, *isa_bridge; + + pci_read_config_byte(pdev, PCI_REVISION_ID, &rev); + + /* + * The chipset revision selects the driver operations and + * mode data. 
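[Editor's aside, not part of the patch] The port_info tables above spell out a revision ladder; a compact restatement may help when reading ali_init_one(). The helper below is illustrative only and simply repeats the udma_mask values quoted in the patch (revisions below 0xC2 get UDMA33 only when the companion bridge allows it).

#include <stdio.h>

/* Illustrative restatement of the ALi revision -> UDMA mask ladder. */
static unsigned int ali_udma_mask(unsigned char rev, int udma_capable_bridge)
{
        if (rev < 0x20)
                return 0x00;            /* PIO only */
        if (rev < 0xC2)
                return udma_capable_bridge ? 0x07 : 0x00;   /* UDMA33 at best */
        if (rev == 0xC2)
                return 0x1f;            /* UDMA66 */
        if (rev == 0xC3)
                return 0x3f;            /* UDMA100 */
        return 0x7f;                    /* 0xC4/0xC5: UDMA133 */
}

int main(void)
{
        printf("rev 0xC3 -> udma_mask 0x%02x\n", ali_udma_mask(0xC3, 0));
        return 0;
}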
+ */ + + if (rev < 0x20) { + port_info[0] = port_info[1] = &info_early; + } else if (rev < 0xC2) { + /* 1543-E/F, 1543C-C, 1543C-D, 1543C-E */ + pci_read_config_byte(pdev, 0x4B, &tmp); + /* Clear CD-ROM DMA write bit */ + tmp &= 0x7F; + pci_write_config_byte(pdev, 0x4B, tmp); + port_info[0] = port_info[1] = &info_20; + } else if (rev == 0xC2) { + port_info[0] = port_info[1] = &info_c2; + } else if (rev == 0xC3) { + port_info[0] = port_info[1] = &info_c3; + } else if (rev == 0xC4) { + port_info[0] = port_info[1] = &info_c4; + } else + port_info[0] = port_info[1] = &info_c5; + + if (rev >= 0xC2) { + /* Enable cable detection logic */ + pci_read_config_byte(pdev, 0x4B, &tmp); + pci_write_config_byte(pdev, 0x4B, tmp | 0x08); + } + + north = pci_get_slot(pdev->bus, PCI_DEVFN(0,0)); + isa_bridge = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, NULL); + + if (north && north->vendor == PCI_VENDOR_ID_AL) { + /* Configure the ALi bridge logic. For non ALi rely on BIOS. + Set the south bridge enable bit */ + pci_read_config_byte(isa_bridge, 0x79, &tmp); + if (rev == 0xC2) + pci_write_config_byte(isa_bridge, 0x79, tmp | 0x04); + else if (rev > 0xC2) + pci_write_config_byte(isa_bridge, 0x79, tmp | 0x02); + } + + if (rev >= 0x20) { + if (rev < 0xC2) { + /* Are we paired with a UDMA capable chip */ + pci_read_config_byte(isa_bridge, 0x5E, &tmp); + if ((tmp & 0x1E) == 0x12) + port_info[0] = port_info[1] = &info_20_udma; + } + /* + * CD_ROM DMA on (0x53 bit 0). Enable this even if we want + * to use PIO. 0x53 bit 1 (rev 20 only) - enable FIFO control + * via 0x54/55. + */ + pci_read_config_byte(pdev, 0x53, &tmp); + if (rev == 0x20) + tmp &= ~0x02; + if (rev == 0xc7) + tmp |= 0x03; + else + tmp |= 0x01; /* CD_ROM enable for DMA */ + pci_write_config_byte(pdev, 0x53, tmp); + } + + pci_dev_put(isa_bridge); + pci_dev_put(north); + + ata_pci_clear_simplex(pdev); + return ata_pci_init_one(pdev, port_info, 2); +} + +static struct pci_device_id ali[] = { + { PCI_DEVICE(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5228), }, + { PCI_DEVICE(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229), }, + { 0, }, +}; + +static struct pci_driver ali_pci_driver = { + .name = DRV_NAME, + .id_table = ali, + .probe = ali_init_one, + .remove = ata_pci_remove_one +}; + +static int __init ali_init(void) +{ + return pci_register_driver(&ali_pci_driver); +} + + +static void __exit ali_exit(void) +{ + pci_unregister_driver(&ali_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for ALi PATA"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, ali); +MODULE_VERSION(DRV_VERSION); + +module_init(ali_init); +module_exit(ali_exit); diff -puN /dev/null drivers/scsi/pata_amd.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_amd.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,688 @@ +/* + * pata_amd.c - AMD PATA for new ATA layer + * (C) 2005-2006 Red Hat Inc + * Alan Cox + * + * Based on pata-sil680. Errata information is taken from data sheets + * and the amd74xx.c driver by Vojtech Pavlik. Nvidia SATA devices are + * claimed by sata-nv.c. + * + * TODO: + * Nvidia support here or seperated ? + * Variable system clock when/if it makes sense + * Power management on ports + * + * + * Documentation publically available. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_amd" +#define DRV_VERSION "0.1.7" + +/** + * timing_setup - shared timing computation and load + * @ap: ATA port being set up + * @adev: drive being configured + * @offset: port offset + * @speed: target speed + * @clock: clock multiplier (number of times 33MHz for this part) + * + * Perform the actual timing set up for Nvidia or AMD PATA devices. + * The actual devices vary so they all call into this helper function + * providing the clock multipler and offset (because AMD and Nvidia put + * the ports at different locations). + */ + +static void timing_setup(struct ata_port *ap, struct ata_device *adev, int offset, int speed, int clock) +{ + static const unsigned char amd_cyc2udma[] = { + 6, 6, 5, 4, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 7 + }; + + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + struct ata_device *peer = ata_dev_pair(adev); + int dn = ap->hard_port_no * 2 + adev->devno; + struct ata_timing at, apeer; + int T, UT; + const int amd_clock = 33333; /* KHz. */ + u8 t; + + T = 1000000000 / amd_clock; + UT = T / min_t(int, max_t(int, clock, 1), 2); + + if (ata_timing_compute(adev, speed, &at, T, UT) < 0) { + dev_printk(KERN_ERR, &pdev->dev, "unknown mode %d.\n", speed); + return; + } + + if (peer) { + /* This may be over conservative */ + if (peer->dma_mode) { + ata_timing_compute(peer, peer->dma_mode, &apeer, T, UT); + ata_timing_merge(&apeer, &at, &at, ATA_TIMING_8BIT); + } + ata_timing_compute(peer, peer->pio_mode, &apeer, T, UT); + ata_timing_merge(&apeer, &at, &at, ATA_TIMING_8BIT); + } + + if (speed == XFER_UDMA_5 && amd_clock <= 33333) at.udma = 1; + if (speed == XFER_UDMA_6 && amd_clock <= 33333) at.udma = 15; + + /* + * Now do the setup work + */ + + /* Configure the address set up timing */ + pci_read_config_byte(pdev, offset + 0x0C, &t); + t = (t & ~(3 << ((3 - dn) << 1))) | ((FIT(at.setup, 1, 4) - 1) << ((3 - dn) << 1)); + pci_write_config_byte(pdev, offset + 0x0C , t); + + /* Configure the 8bit I/O timing */ + pci_write_config_byte(pdev, offset + 0x0E + (1 - (dn >> 1)), + ((FIT(at.act8b, 1, 16) - 1) << 4) | (FIT(at.rec8b, 1, 16) - 1)); + + /* Drive timing */ + pci_write_config_byte(pdev, offset + 0x08 + (3 - dn), + ((FIT(at.active, 1, 16) - 1) << 4) | (FIT(at.recover, 1, 16) - 1)); + + switch (clock) { + case 1: + t = at.udma ? (0xc0 | (FIT(at.udma, 2, 5) - 2)) : 0x03; + break; + + case 2: + t = at.udma ? (0xc0 | amd_cyc2udma[FIT(at.udma, 2, 10)]) : 0x03; + break; + + case 3: + t = at.udma ? (0xc0 | amd_cyc2udma[FIT(at.udma, 1, 10)]) : 0x03; + break; + + case 4: + t = at.udma ? (0xc0 | amd_cyc2udma[FIT(at.udma, 1, 15)]) : 0x03; + break; + + default: + return; + } + + /* UDMA timing */ + pci_write_config_byte(pdev, offset + 0x10 + (3 - dn), t); +} + +/** + * amd_probe_init - cable detection + * @ap: ATA port + * + * Perform cable detection. The BIOS stores this in PCI config + * space for us. 
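[Editor's aside, not part of the patch] The divisor arithmetic in timing_setup() above is easier to follow with numbers plugged in. The sketch below is a user-space illustration only: it assumes the 33333 kHz base clock and the amd_cyc2udma[] table quoted in the patch, and my reading is that the timing code quantises in thousandths of a nanosecond, so 30000 corresponds to the roughly 30 ns PCI cycle. The FIT()/min_t/max_t clamping of the real code is omitted.

#include <stdio.h>

int main(void)
{
        /* quoted from the patch: UDMA cycle count -> register value */
        static const unsigned char amd_cyc2udma[] = {
                6, 6, 5, 4, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 7
        };
        const int amd_clock = 33333;            /* kHz, as in the patch */
        int clock = 2;                          /* multiplier for a 66 MHz UDMA part */
        int T = 1000000000 / amd_clock;         /* base cycle: 30000 (about 30 ns) */
        int UT = T / clock;                     /* UDMA cycle: 15000 (about 15 ns) */

        /* a UDMA timing of about 60 ns works out to 4 UT cycles -> register 0 */
        int cycles = 60000 / UT;
        printf("T=%d UT=%d cycles=%d reg=%d\n", T, UT, cycles, amd_cyc2udma[cycles]);
        return 0;
}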
+ */ + +static void amd_probe_init(struct ata_port *ap) +{ + static u32 bitmask[2] = {0x03, 0xC0}; + u8 ata66; + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + + pci_read_config_byte(pdev, 0x42, &ata66); + if (ata66 & bitmask[ap->hard_port_no]) + ap->cbl = ATA_CBL_PATA80; + else + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); + +} + +static int amd_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static struct pci_bits amd_enable_bits[] = { + { 0x40, 1, 0x02, 0x02 }, + { 0x40, 1, 0x01, 0x01 } + }; + + if (!pci_test_config_bits(pdev, &amd_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, amd_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +static void amd_early_probe_init(struct ata_port *ap) +{ + /* No host side cable detection */ + ap->cbl = ATA_CBL_PATA80; + ata_std_probeinit(ap); + +} + +static int amd_early_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static struct pci_bits amd_enable_bits[] = { + { 0x40, 1, 0x02, 0x02 }, + { 0x40, 1, 0x01, 0x01 } + }; + + if (!pci_test_config_bits(pdev, &amd_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, amd_early_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * amd33_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the AMD registers for PIO mode. + */ + +static void amd33_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->pio_mode, 1); +} + +static void amd66_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->pio_mode, 2); +} + +static void amd100_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->pio_mode, 3); +} + +static void amd133_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->pio_mode, 4); +} + +/** + * amd33_set_dmamode - set initial DMA mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the MWDMA/UDMA modes for the AMD and Nvidia + * chipset. + */ + +static void amd33_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->dma_mode, 1); +} + +static void amd66_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->dma_mode, 2); +} + +static void amd100_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->dma_mode, 3); +} + +static void amd133_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x40, adev->dma_mode, 4); +} + + +/** + * nv_probe_init - cable detection + * @ap: ATA port + * + * Perform cable detection. The BIOS stores this in PCI config + * space for us. 
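[Editor's aside, not part of the patch] amd_probe_init() above reads one config byte at 0x42 and tests a per-port mask (0x03 for the primary, 0xC0 for the secondary) to pick the cable type. A tiny user-space model of that decode; amd_cable() and the sample byte are invented for illustration.

#include <stdio.h>

/* Illustrative model of the AMD cable-detect decode at config offset 0x42. */
static const char *amd_cable(unsigned char ata66, int port)
{
        static const unsigned char bitmask[2] = { 0x03, 0xC0 };

        return (ata66 & bitmask[port]) ? "80-wire" : "40-wire";
}

int main(void)
{
        unsigned char ata66 = 0x01;     /* example: primary 80-wire, secondary 40-wire */
        printf("primary: %s, secondary: %s\n",
               amd_cable(ata66, 0), amd_cable(ata66, 1));
        return 0;
}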
+ */ + +static void nv_probe_init(struct ata_port *ap) { + static u8 bitmask[2] = {0x03, 0xC0}; + u8 ata66; + u16 udma; + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + + pci_read_config_byte(pdev, 0x52, &ata66); + if (ata66 & bitmask[ap->hard_port_no]) + ap->cbl = ATA_CBL_PATA80; + else + ap->cbl = ATA_CBL_PATA40; + + /* We now have to double check because the Nvidia boxes BIOS + doesn't always set the cable bits but does set mode bits */ + + pci_read_config_word(pdev, 0x62 - 2 * ap->hard_port_no, &udma); + if ((udma & 0xC4) == 0xC4 || (udma & 0xC400) == 0xC400) + ap->cbl = ATA_CBL_PATA80; + ata_std_probeinit(ap); +} + +static int nv_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + return ata_drive_probe_reset(ap, nv_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} +/** + * nv100_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the AMD registers for PIO mode. + */ + +static void nv100_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x50, adev->pio_mode, 3); +} + +static void nv133_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x50, adev->pio_mode, 4); +} + +/** + * nv100_set_dmamode - set initial DMA mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the MWDMA/UDMA modes for the AMD and Nvidia + * chipset. + */ + +static void nv100_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x50, adev->dma_mode, 3); +} + +static void nv133_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + timing_setup(ap, adev, 0x50, adev->dma_mode, 4); +} + +static struct scsi_host_template amd_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations amd33_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = amd33_set_piomode, + .set_dmamode = amd33_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = amd_early_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations amd66_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = amd66_set_piomode, + .set_dmamode = amd66_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = amd_early_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = 
ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations amd100_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = amd100_set_piomode, + .set_dmamode = amd100_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = amd_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations amd133_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = amd133_set_piomode, + .set_dmamode = amd133_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = amd_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations nv100_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = nv100_set_piomode, + .set_dmamode = nv100_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = nv_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations nv133_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = nv133_set_piomode, + .set_dmamode = nv133_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = nv_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = 
ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id) +{ + static struct ata_port_info info[10] = { + { /* 0: AMD 7401 */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, /* No SWDMA */ + .udma_mask = 0x07, /* UDMA 33 */ + .port_ops = &amd33_port_ops + }, + { /* 1: Early AMD7409 - no swdma */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x1f, /* UDMA 66 */ + .port_ops = &amd66_port_ops + }, + { /* 2: AMD 7409, no swdma errata */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x1f, /* UDMA 66 */ + .port_ops = &amd66_port_ops + }, + { /* 3: AMD 7411 */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, /* UDMA 100 */ + .port_ops = &amd100_port_ops + }, + { /* 4: AMD 7441 */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, /* UDMA 100 */ + .port_ops = &amd100_port_ops + }, + { /* 5: AMD 8111*/ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x7f, /* UDMA 133, no swdma */ + .port_ops = &amd133_port_ops + }, + { /* 6: AMD 8111 UDMA 100 (Serenade) */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, /* UDMA 100, no swdma */ + .port_ops = &amd133_port_ops + }, + { /* 7: Nvidia Nforce */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, /* UDMA 100 */ + .port_ops = &nv100_port_ops + }, + { /* 8: Nvidia Nforce2 and later */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x7f, /* UDMA 133, no swdma */ + .port_ops = &nv133_port_ops + }, + { /* 9: AMD CS5536 (Geode companion) */ + .sht = &amd_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, /* UDMA 100 */ + .port_ops = &amd100_port_ops + } + }; + static struct ata_port_info *port_info[2]; + static int printed_version; + int type = id->driver_data; + u8 rev; + u8 fifo; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); + + pci_read_config_byte(pdev, PCI_REVISION_ID, &rev); + pci_read_config_byte(pdev, 0x41, &fifo); + + /* Check for AMD7409 without swdma errata and if found adjust type */ + if (type == 1 && rev > 0x7) + type = 2; + + /* Check for AMD7411 */ + if (type == 3) + /* FIFO is broken */ + pci_write_config_byte(pdev, 0x41, fifo & 0x0F); + else + pci_write_config_byte(pdev, 0x41, fifo | 0xF0); + + /* Serenade ? 
*/ + if (type == 5 && pdev->subsystem_vendor == PCI_VENDOR_ID_AMD && + pdev->subsystem_device == PCI_DEVICE_ID_AMD_SERENADE) + type = 6; /* UDMA 100 only */ + + if (type < 3) + ata_pci_clear_simplex(pdev); + + /* And fire it up */ + + port_info[0] = port_info[1] = &info[type]; + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id amd[] = { + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_COBRA_7401, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7409, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_VIPER_7411, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_OPUS_7441, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8111_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 7 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2S_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3S_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 8 }, + { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 9 }, + { 0, }, +}; + +static struct pci_driver amd_pci_driver = { + .name = DRV_NAME, + .id_table = amd, + .probe = amd_init_one, + .remove = ata_pci_remove_one +}; + +static int __init amd_init(void) +{ + return pci_register_driver(&amd_pci_driver); +} + +static void __exit amd_exit(void) +{ + pci_unregister_driver(&amd_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for AMD PATA IDE"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, amd); +MODULE_VERSION(DRV_VERSION); + +module_init(amd_init); +module_exit(amd_exit); diff -puN /dev/null drivers/scsi/pata_mpiix.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_mpiix.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,316 @@ +/* + * pata_mpiix.c - Intel MPIIX PATA for new ATA layer + * (C) 2005-2006 Red Hat Inc + * Alan Cox + * + * The MPIIX is different enough to the PIIX4 and friends that we give it + * a separate driver. The old ide/pci code handles this by just not tuning + * MPIIX at all. + * + * The MPIIX also differs in another important way from the majority of PIIX + * devices. The chip is a bridge (pardon the pun) between the old world of + * ISA IDE and PCI IDE. Although the ATA timings are PCI configured the actual + * IDE controller is not decoded in PCI space and the chip does not claim to + * be IDE class PCI. This requires slightly non-standard probe logic compared + * with PCI IDE and also that we do not disable the device when our driver is + * unloaded (as it has many other functions). + * + * The driver conciously keeps this logic internally to avoid pushing quirky + * PATA history into the clean libata layer. 
+ * + * Thinkpad specific note: If you boot an MPIIX using thinkpad with a PCMCIA + * hard disk present this driver will not detect it. This is not a bug. In this + * configuration the secondary port of the MPIIX is disabled and the addresses + * are decoded by the PCMCIA bridge and therefore are for a generic IDE driver + * to operate. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_mpiix" +#define DRV_VERSION "0.6.1" + +enum { + IDETIM = 0x6C, /* IDE control register */ + IORDY = (1 << 1), + PPE = (1 << 2), + FTIM = (1 << 0), + ENABLED = (1 << 15), + SECONDARY = (1 << 14) +}; + +static void mpiix_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + +/** + * mpiix_probe_reset - probe reset + * @ap: ATA port + * @classes: + * + * Perform the ATA probe and bus reset sequence plus specific handling + * for this hardware. The MPIIX has the enable bits in a different place + * to PIIX4 and friends. As a pure PIO device it has no cable detect + */ + +static int mpiix_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits mpiix_enable_bits[] = { + { 0x6D, 1, 0x80, 0x80 }, + { 0x6F, 1, 0x80, 0x80 } + }; + + if (!pci_test_config_bits(pdev, &mpiix_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, mpiix_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * mpiix_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Called to do the PIO mode setup. The MPIIX allows us to program the + * IORDY sample point (2-5 clocks), recovery 1-4 clocks and whether + * prefetching or iordy are used. + * + * This would get very ugly because we can only program timing for one + * device at a time, the other gets PIO0. Fortunately libata calls + * our qc_issue_prot command before a command is issued so we can + * flip the timings back and forth to reduce the pain. + */ + +static void mpiix_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + int control = 0; + int pio = adev->pio_mode - XFER_PIO_0; + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u16 idetim; + static const /* ISP RTC */ + u8 timings[][2] = { { 0, 0 }, + { 0, 0 }, + { 1, 0 }, + { 2, 1 }, + { 2, 3 }, }; + + pci_read_config_word(pdev, IDETIM, &idetim); + /* Mask the IORDY/TIME/PPE0 bank for this device */ + if (adev->class == ATA_DEV_ATA) + control |= PPE; /* PPE enable for disk */ + if (ata_pio_need_iordy(adev)) + control |= IORDY; /* IORDY */ + if (pio > 0) + control |= FTIM; /* This drive is on the fast timing bank */ + + /* Mask out timing and clear both TIME bank selects */ + idetim &= 0xCCEE; + idetim &= ~(0x07 << (2 * adev->devno)); + idetim |= (control << (2 * adev->devno)); + + idetim |= (timings[pio][0] << 12) | (timings[pio][1] << 8); + pci_write_config_word(pdev, IDETIM, idetim); + + /* We use ap->private_data as a pointer to the device currently + loaded for timing */ + ap->private_data = adev; +} + +/** + * mpiix_qc_issue_prot - command issue + * @qc: command pending + * + * Called when the libata layer is about to issue a command. We wrap + * this interface so that we can load the correct ATA timings if + * neccessary. 
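[Editor's aside, not part of the patch] The IDETIM packing performed by mpiix_set_piomode() above is compact enough to model in user space: the low byte holds per-device control bits (PPE, IORDY, fast-timing bank), the ISP/RTC counts land just below the ENABLED/SECONDARY bits, which are left alone. mpiix_pack_idetim() below is an invented, illustrative helper, not the driver code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the MPIIX IDETIM (0x6C) word built above. */
static uint16_t mpiix_pack_idetim(uint16_t idetim, int devno, int pio,
                                  int is_disk, int needs_iordy)
{
        static const uint8_t timings[][2] = {   /* ISP, RTC per PIO mode */
                { 0, 0 }, { 0, 0 }, { 1, 0 }, { 2, 1 }, { 2, 3 },
        };
        int control = 0;

        if (is_disk)
                control |= 4;                   /* PPE */
        if (needs_iordy)
                control |= 2;                   /* IORDY */
        if (pio > 0)
                control |= 1;                   /* fast timing bank */

        idetim &= 0xCCEE;                       /* drop timing, clear TIME banks */
        idetim &= ~(0x07 << (2 * devno));
        idetim |= control << (2 * devno);
        idetim |= (timings[pio][0] << 12) | (timings[pio][1] << 8);
        return idetim;
}

int main(void)
{
        /* PIO4 disk on the master of an enabled channel: prints 0xa307 */
        printf("IDETIM = 0x%04x\n", mpiix_pack_idetim(0x8000, 0, 4, 1, 1));
        return 0;
}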
Our logic also clears TIME0/TIME1 for the other device so + * that, even if we get this wrong, cycles to the other device will + * be made PIO0. + */ + +static unsigned int mpiix_qc_issue_prot(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct ata_device *adev = qc->dev; + + /* If modes have been configured and the channel data is not loaded + then load it. We have to check if pio_mode is set as the core code + does not set adev->pio_mode to XFER_PIO_0 while probing as would be + logical */ + + if (adev->pio_mode && adev != ap->private_data) + mpiix_set_piomode(ap, adev); + + return ata_qc_issue_prot(qc); +} + +static struct scsi_host_template mpiix_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations mpiix_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = mpiix_set_piomode, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = mpiix_probe_reset, + + .qc_prep = ata_qc_prep, + .qc_issue = mpiix_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static int mpiix_init_one(struct pci_dev *dev, const struct pci_device_id *id) +{ + /* Single threaded by the PCI probe logic */ + static struct ata_probe_ent probe[2]; + static int printed_version; + u16 idetim; + int enabled; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &dev->dev, "version " DRV_VERSION "\n"); + + /* MPIIX has many functions which can be turned on or off according + to other devices present. Make sure IDE is enabled before we try + and use it */ + + pci_read_config_word(dev, IDETIM, &idetim); + if (!(idetim & ENABLED)) + return -ENODEV; + + /* We do our own plumbing to avoid leaking special cases for whacko + ancient hardware into the core code. There are two issues to + worry about. #1 The chip is a bridge so if in legacy mode and + without BARs set fools the setup. 
#2 If you pci_disable_device + the MPIIX your box goes castors up */ + + INIT_LIST_HEAD(&probe[0].node); + probe[0].dev = pci_dev_to_dev(dev); + probe[0].port_ops = &mpiix_port_ops; + probe[0].sht = &mpiix_sht; + probe[0].pio_mask = 0x1F; + probe[0].irq = 14; + probe[0].irq_flags = SA_SHIRQ; + probe[0].host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST; + probe[0].legacy_mode = 1; + probe[0].hard_port_no = 0; + probe[0].n_ports = 1; + probe[0].port[0].cmd_addr = 0x1F0; + probe[0].port[0].ctl_addr = 0x3F6; + probe[0].port[0].altstatus_addr = 0x3F6; + + /* The secondary lurks at different addresses but is otherwise + the same beastie */ + + INIT_LIST_HEAD(&probe[1].node); + probe[1] = probe[0]; + probe[1].irq = 15; + probe[1].hard_port_no = 1; + probe[1].port[0].cmd_addr = 0x170; + probe[1].port[0].ctl_addr = 0x376; + probe[1].port[0].altstatus_addr = 0x376; + + /* Let libata fill in the port details */ + ata_std_ports(&probe[0].port[0]); + ata_std_ports(&probe[1].port[0]); + + /* Now add the port that is active */ + enabled = (idetim & SECONDARY) ? 1 : 0; + + if (ata_device_add(&probe[enabled])) + return 0; + return -ENODEV; +} + +/** + * mpiix_remove_one - device unload + * @pdev: PCI device being removed + * + * Handle an unplug/unload event for a PCI device. Unload the + * PCI driver but do not use the default handler as we *MUST NOT* + * disable the device as it has other functions. + */ + +static void __devexit mpiix_remove_one(struct pci_dev *pdev) +{ + struct device *dev = pci_dev_to_dev(pdev); + struct ata_host_set *host_set = dev_get_drvdata(dev); + + ata_host_set_remove(host_set); + dev_set_drvdata(dev, NULL); +} + + + +static const struct pci_device_id mpiix[] = { + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371MX), }, + { 0, }, +}; + +static struct pci_driver mpiix_pci_driver = { + .name = DRV_NAME, + .id_table = mpiix, + .probe = mpiix_init_one, + .remove = mpiix_remove_one +}; + +static int __init mpiix_init(void) +{ + return pci_register_driver(&mpiix_pci_driver); +} + + +static void __exit mpiix_exit(void) +{ + pci_unregister_driver(&mpiix_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for Intel MPIIX"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, mpiix); +MODULE_VERSION(DRV_VERSION); + +module_init(mpiix_init); +module_exit(mpiix_exit); diff -puN /dev/null drivers/scsi/pata_netcell.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_netcell.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,176 @@ +/* + * pata_netcell.c - Netcell PATA driver + * + * (c) 2006 Red Hat + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_netcell" +#define DRV_VERSION "0.1.2" + +/** + * netcell_probe_init - check for 40/80 pin + * @ap: Port + * + * Cables are handled by the RAID controller. Report 80 pin. + */ + +static void netcell_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA80; + ata_std_probeinit(ap); +} + +/** + * netcell_probe_reset - Probe specified port on PATA host controller + * @ap: Port to probe + * @classes + * + * LOCKING: + * None (inherited from caller). 
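[Editor's aside, not part of the patch] Because the MPIIX is not decoded as a PCI IDE class device, mpiix_init_one() above fills in the classic legacy resources by hand. For quick reference those fixed values are restated below; the table is illustrative, not driver code.

#include <stdio.h>

/* The fixed legacy IDE resources used by the MPIIX probe above. */
struct legacy_ide { unsigned int cmd, ctl; int irq; };

static const struct legacy_ide legacy[2] = {
        { 0x1F0, 0x3F6, 14 },   /* primary channel */
        { 0x170, 0x376, 15 },   /* secondary channel */
};

int main(void)
{
        for (int i = 0; i < 2; i++)
                printf("channel %d: cmd 0x%03X ctl 0x%03X irq %d\n",
                       i, legacy[i].cmd, legacy[i].ctl, legacy[i].irq);
        return 0;
}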
+ */ + +static int netcell_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + return ata_drive_probe_reset(ap, netcell_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/* No PIO or DMA methods needed for this device */ + +static struct scsi_host_template netcell_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + /* Special handling needed if you have sector or LBA48 limits */ + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + /* Use standard CHS mapping rules */ + .bios_param = ata_std_bios_param, +}; + +static const struct ata_port_operations netcell_ops = { + .port_disable = ata_port_disable, + + /* Task file is PCI ATA format, use helpers */ + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = netcell_probe_reset, + + /* BMDMA handling is PCI ATA format, use helpers */ + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + /* Timeout handling. Special recovery hooks here */ + .eng_timeout = ata_eng_timeout, + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + /* Generic PATA PCI ATA helpers */ + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop, +}; + + +/** + * netcell_init_one - Register Netcell ATA PCI device with kernel services + * @pdev: PCI device to register + * @ent: Entry in netcell_pci_tbl matching with @pdev + * + * Called from kernel PCI layer. + * + * LOCKING: + * Inherited from PCI layer (may sleep). + * + * RETURNS: + * Zero on success, or -ERRNO value. 
+ */ + +static int netcell_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) +{ + static int printed_version; + static struct ata_port_info info = { + .sht = &netcell_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + /* Actually we don't really care about these as the + firmware deals with it */ + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, /* mwdma0-2 */ + .udma_mask = 0x3f, /* UDMA 133 */ + .port_ops = &netcell_ops, + }; + static struct ata_port_info *port_info[2] = { &info, &info }; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, + "version " DRV_VERSION "\n"); + + /* Any chip specific setup/optimisation/messages here */ + ata_pci_clear_simplex(pdev); + + /* And let the library code do the work */ + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id netcell_pci_tbl[] = { + { 0x169C, 0x0044, PCI_ANY_ID, PCI_ANY_ID, }, + { } /* terminate list */ +}; + +static struct pci_driver netcell_pci_driver = { + .name = DRV_NAME, + .id_table = netcell_pci_tbl, + .probe = netcell_init_one, + .remove = ata_pci_remove_one, +}; + +static int __init netcell_init(void) +{ + return pci_register_driver(&netcell_pci_driver); +} + +static void __exit netcell_exit(void) +{ + pci_unregister_driver(&netcell_pci_driver); +} + + +module_init(netcell_init); +module_exit(netcell_exit); + +MODULE_AUTHOR(""); +MODULE_DESCRIPTION("SCSI low-level driver for Netcell PATA RAID"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, netcell_pci_tbl); +MODULE_VERSION(DRV_VERSION); + diff -puN /dev/null drivers/scsi/pata_oldpiix.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_oldpiix.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,338 @@ +/* + * pata_oldpiix.c - Intel PATA/SATA controllers + * + * (C) 2005 Red Hat + * + * Some parts based on ata_piix.c by Jeff Garzik and others. + * + * Early PIIX differs significantly from the later PIIX as it lacks + * SITRE and the slave timing registers. This means that you have to + * set timing per channel, or be clever. Libata tells us whenever it + * does drive selection and we use this to reload the timings. + * + * Because of these behaviour differences PIIX gets its own driver module. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_oldpiix" +#define DRV_VERSION "0.4.3" + +/** + * oldpiix_probe_init - probe begin + * @ap: ATA port + * + * Set up cable type and use generic probe init + */ + +static void oldpiix_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + +/** + * oldpiix_pata_probe_reset - Probe specified port on PATA host controller + * @ap: Port to probe + * @classes: + * + * LOCKING: + * None (inherited from caller). + */ + +static int oldpiix_pata_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits oldpiix_enable_bits[] = { + { 0x41U, 1U, 0x80UL, 0x80UL }, /* port 0 */ + { 0x43U, 1U, 0x80UL, 0x80UL }, /* port 1 */ + }; + + if (!pci_test_config_bits(pdev, &oldpiix_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. 
ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, oldpiix_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * oldpiix_set_piomode - Initialize host controller PATA PIO timings + * @ap: Port whose timings we are configuring + * @adev: um + * + * Set PIO mode for device, in host controller PCI config space. + * + * LOCKING: + * None (inherited from caller). + */ + +static void oldpiix_set_piomode (struct ata_port *ap, struct ata_device *adev) +{ + unsigned int pio = adev->pio_mode - XFER_PIO_0; + struct pci_dev *dev = to_pci_dev(ap->host_set->dev); + unsigned int idetm_port= ap->hard_port_no ? 0x42 : 0x40; + u16 idetm_data; + int control = 0; + + /* + * See Intel Document 298600-004 for the timing programing rules + * for PIIX/ICH. Note that the early PIIX does not have the slave + * timing port at 0x44. + */ + + static const /* ISP RTC */ + u8 timings[][2] = { { 0, 0 }, + { 0, 0 }, + { 1, 0 }, + { 2, 1 }, + { 2, 3 }, }; + + if (pio > 2) + control |= 1; /* TIME1 enable */ + if (ata_pio_need_iordy(adev)) + control |= 2; /* IE IORDY */ + + /* Intel specifies that the PPE functionality is for disk only */ + if (adev->class == ATA_DEV_ATA) + control |= 4; /* PPE enable */ + + pci_read_config_word(dev, idetm_port, &idetm_data); + + /* Enable PPE, IE and TIME as appropriate. Clear the other + drive timing bits */ + if (adev->devno == 0) { + idetm_data &= 0xCCE0; + idetm_data |= control; + } else { + idetm_data &= 0xCC0E; + idetm_data |= (control << 4); + } + idetm_data |= (timings[pio][0] << 12) | + (timings[pio][1] << 8); + pci_write_config_word(dev, idetm_port, idetm_data); + + /* Track which port is configured */ + ap->private_data = adev; +} + +/** + * oldpiix_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * @isich: True if the device is an ICH and has IOCFG registers + * + * Set MWDMA mode for device, in host controller PCI config space. + * + * LOCKING: + * None (inherited from caller). + */ + +static void oldpiix_set_dmamode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *dev = to_pci_dev(ap->host_set->dev); + u8 idetm_port = ap->hard_port_no ? 0x42 : 0x40; + u16 idetm_data; + + static const /* ISP RTC */ + u8 timings[][2] = { { 0, 0 }, + { 0, 0 }, + { 1, 0 }, + { 2, 1 }, + { 2, 3 }, }; + + /* + * MWDMA is driven by the PIO timings. We must also enable + * IORDY unconditionally along with TIME1. PPE has already + * been set when the PIO timing was set. + */ + + unsigned int mwdma = adev->dma_mode - XFER_MW_DMA_0; + unsigned int control; + const unsigned int needed_pio[3] = { + XFER_PIO_0, XFER_PIO_3, XFER_PIO_4 + }; + int pio = needed_pio[mwdma] - XFER_PIO_0; + + pci_read_config_word(dev, idetm_port, &idetm_data); + + control = 3; /* IORDY|TIME0 */ + /* Intel specifies that the PPE functionality is for disk only */ + if (adev->class == ATA_DEV_ATA) + control |= 4; /* PPE enable */ + + /* If the drive MWDMA is faster than it can do PIO then + we must force PIO into PIO0 */ + + if (adev->pio_mode < needed_pio[mwdma]) + /* Enable DMA timing only */ + control |= 8; /* PIO cycles in PIO0 */ + + /* Mask out the relevant control and timing bits we will load. 
Also + clear the other drive TIME register as a precaution */ + if (adev->devno == 0) { + idetm_data &= 0xCCE0; + idetm_data |= control; + } else { + idetm_data &= 0xCC0E; + idetm_data |= (control << 4); + } + idetm_data |= (timings[pio][0] << 12) | (timings[pio][1] << 8); + pci_write_config_word(dev, idetm_port, idetm_data); + + /* Track which port is configured */ + ap->private_data = adev; +} + +/** + * oldpiix_qc_issue_prot - command issue + * @qc: command pending + * + * Called when the libata layer is about to issue a command. We wrap + * this interface so that we can load the correct ATA timings if + * neccessary. Our logic also clears TIME0/TIME1 for the other device so + * that, even if we get this wrong, cycles to the other device will + * be made PIO0. + */ + +static unsigned int oldpiix_qc_issue_prot(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + struct ata_device *adev = qc->dev; + + if (adev != ap->private_data) { + if (adev->dma_mode) + oldpiix_set_dmamode(ap, adev); + else if (adev->pio_mode) + oldpiix_set_piomode(ap, adev); + } + return ata_qc_issue_prot(qc); +} + + +static struct scsi_host_template oldpiix_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static const struct ata_port_operations oldpiix_pata_ops = { + .port_disable = ata_port_disable, + .set_piomode = oldpiix_set_piomode, + .set_dmamode = oldpiix_set_dmamode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = oldpiix_pata_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = oldpiix_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop, +}; + + +/** + * oldpiix_init_one - Register PIIX ATA PCI device with kernel services + * @pdev: PCI device to register + * @ent: Entry in oldpiix_pci_tbl matching with @pdev + * + * Called from kernel PCI layer. We probe for combined mode (sigh), + * and then hand over control to libata, for it to do the rest. + * + * LOCKING: + * Inherited from PCI layer (may sleep). + * + * RETURNS: + * Zero on success, or -ERRNO value. 
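[Editor's aside, not part of the patch] The MWDMA/PIO coupling described in oldpiix_set_dmamode() above reduces to a small rule: each MWDMA mode borrows the timings of a particular PIO mode (MWDMA0 uses PIO0, MWDMA1 uses PIO3, MWDMA2 uses PIO4), and if the drive's own PIO mode is slower than that, plain PIO cycles are forced back to PIO0. A user-space sketch of that decision; oldpiix_plan() is an invented name and works on bare mode numbers.

#include <stdio.h>

/* Illustrative: which PIO timing each MWDMA mode borrows on early PIIX,
 * and whether ordinary PIO must then be locked down to PIO0. */
static void oldpiix_plan(int mwdma, int drive_pio)
{
        static const int needed_pio[3] = { 0, 3, 4 };   /* for MWDMA0..2 */
        int pio_for_dma = needed_pio[mwdma];
        int force_pio0 = drive_pio < pio_for_dma;

        printf("MWDMA%d uses PIO%d timings%s\n", mwdma, pio_for_dma,
               force_pio0 ? ", drive PIO too slow so PIO runs as PIO0" : "");
}

int main(void)
{
        oldpiix_plan(2, 4);     /* modern disk: MWDMA2 with PIO4 */
        oldpiix_plan(2, 2);     /* older drive: MWDMA2 but only PIO2 */
        return 0;
}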
+ */ + +static int oldpiix_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) +{ + static int printed_version; + static struct ata_port_info info = { + .sht = &oldpiix_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, /* mwdma0-2 */ + .port_ops = &oldpiix_pata_ops, + }; + static struct ata_port_info *port_info[2] = { &info, &info }; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, + "version " DRV_VERSION "\n"); + + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id oldpiix_pci_tbl[] = { + { PCI_DEVICE(0x8086, 0x1230), }, + { } /* terminate list */ +}; + +static struct pci_driver oldpiix_pci_driver = { + .name = DRV_NAME, + .id_table = oldpiix_pci_tbl, + .probe = oldpiix_init_one, + .remove = ata_pci_remove_one, +}; + +static int __init oldpiix_init(void) +{ + return pci_register_driver(&oldpiix_pci_driver); +} + +static void __exit oldpiix_exit(void) +{ + pci_unregister_driver(&oldpiix_pci_driver); +} + + +module_init(oldpiix_init); +module_exit(oldpiix_exit); + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("SCSI low-level driver for early PIIX series controllers"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, oldpiix_pci_tbl); +MODULE_VERSION(DRV_VERSION); + diff -puN /dev/null drivers/scsi/pata_opti.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_opti.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,292 @@ +/* + * pata_opti.c - Opti PATA for new ATA layer + * (C) 2005 Red Hat Inc + * Alan Cox + * + * Based on + * linux/drivers/ide/pci/opti621.c Version 0.7 Sept 10, 2002 + * + * Copyright (C) 1996-1998 Linus Torvalds & authors (see below) + * + * Authors: + * Jaromir Koutek , + * Jan Harkes , + * Mark Lord + * Some parts of code are from ali14xx.c and from rz1000.c. + * + * Also consulted the FreeBSD prototype driver by Kevin Day to try + * and resolve some confusions. Further documentation can be found in + * Ralf Brown's interrupt list + * + * If you have other variants of the Opti range (Viper/Vendetta) please + * try this driver with those PCI idents and report back. For the later + * chips see the pata_optidma driver + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_opti" +#define DRV_VERSION "0.2.3" + +enum { + READ_REG = 0, /* index of Read cycle timing register */ + WRITE_REG = 1, /* index of Write cycle timing register */ + CNTRL_REG = 3, /* index of Control register */ + STRAP_REG = 5, /* index of Strap register */ + MISC_REG = 6 /* index of Miscellaneous register */ +}; + +/** + * opti_probe_init - probe begin + * @ap: ATA port + * + * Set up cable type and use generic probe init + */ + +static void opti_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + +/** + * opti_probe_reset - probe reset + * @ap: ATA port + * @classes: Resulting classes of attached devices + * + * Perform the ATA probe and bus reset sequence plus specific handling + * for this hardware. The Opti needs little handling - we have no UDMA66 + * capability that needs cable detection. All we must do is check the port + * is enabled.
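+ * The per-channel enable bits are checked in PCI config space (see opti_enable_bits below).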
+ */ + +static int opti_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits opti_enable_bits[] = { + { 0x45, 1, 0x80, 0x00 }, + { 0x40, 1, 0x08, 0x00 } + }; + + if (!pci_test_config_bits(pdev, &opti_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, opti_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * opti_write_reg - control register setup + * @ap: ATA port + * @value: value + * @reg: control register number + * + * The Opti uses magic 'trapdoor' register accesses to do configuration + * rather than using PCI space as other controllers do. The double inw + * on the error register activates configuration mode. We can then write + * the control register + */ + +static void opti_write_reg(struct ata_port *ap, u8 val, int reg) +{ + unsigned long regio = ap->ioaddr.cmd_addr; + + /* These 3 unlock the control register access */ + inw(regio + 1); + inw(regio + 1); + outb(3, regio + 2); + + /* Do the I/O */ + outb(val, regio + reg); + + /* Relock */ + outb(0x83, regio + 2); +} + +#if 0 +/** + * opti_read_reg - control register read + * @ap: ATA port + * @reg: control register number + * + * The Opti uses magic 'trapdoor' register accesses to do configuration + * rather than using PCI space as other controllers do. The double inw + * on the error register activates configuration mode. We can then read + * the control register + */ + +static u8 opti_read_reg(struct ata_port *ap, int reg) +{ + unsigned long regio = ap->ioaddr.cmd_addr; + u8 ret; + inw(regio + 1); + inw(regio + 1); + outb(3, regio + 2); + ret = inb(regio + reg); + outb(0x83, regio + 2); +} +#endif + +/** + * opti_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Called to do the PIO mode setup. Timing numbers are taken from + * the FreeBSD driver then pre computed to keep the code clean. There + * are two tables depending on the hardware clock speed. + */ + +static void opti_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + struct ata_device *pair = ata_dev_pair(adev); + int clock; + int pio = adev->pio_mode - XFER_PIO_0; + unsigned long regio = ap->ioaddr.cmd_addr; + u8 addr; + + /* Address table precomputed with prefetch off and a DCLK of 2 */ + static const u8 addr_timing[2][5] = { + { 0x30, 0x20, 0x20, 0x10, 0x10 }, + { 0x20, 0x20, 0x10, 0x10, 0x10 } + }; + static const u8 data_rec_timing[2][5] = { + { 0x6B, 0x56, 0x42, 0x32, 0x31 }, + { 0x58, 0x44, 0x32, 0x22, 0x21 } + }; + + outb(0xff, regio + 5); + clock = inw(regio + 5) & 1; + + /* + * As with many controllers the address setup time is shared + * and must suit both devices if present. 
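+ * We therefore program the slower (numerically larger) of the two setup values below.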
+ */ + + addr = addr_timing[clock][pio]; + if (pair) { + /* Hardware constraint */ + u8 pair_addr = addr_timing[clock][pair->pio_mode - XFER_PIO_0]; + if (pair_addr > addr) + addr = pair_addr; + } + + /* Commence primary programming sequence */ + opti_write_reg(ap, adev->devno, MISC_REG); + opti_write_reg(ap, data_rec_timing[clock][pio], READ_REG); + opti_write_reg(ap, data_rec_timing[clock][pio], WRITE_REG); + opti_write_reg(ap, addr, MISC_REG); + + /* Programming sequence complete, override strapping */ + opti_write_reg(ap, 0x85, CNTRL_REG); +} + +static struct scsi_host_template opti_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations opti_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = opti_set_piomode, +/* .set_dmamode = opti_set_dmamode, */ + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = opti_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static int opti_init_one(struct pci_dev *dev, const struct pci_device_id *id) +{ + static struct ata_port_info info = { + .sht = &opti_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .port_ops = &opti_port_ops + }; + static struct ata_port_info *port_info[2] = { &info, &info }; + static int printed_version; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &dev->dev, "version " DRV_VERSION "\n"); + + return ata_pci_init_one(dev, port_info, 2); +} + +static const struct pci_device_id opti[] = { + { PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C621, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, + { PCI_VENDOR_ID_OPTI, PCI_DEVICE_ID_OPTI_82C825, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1}, + { 0, }, +}; + +static struct pci_driver opti_pci_driver = { + .name = DRV_NAME, + .id_table = opti, + .probe = opti_init_one, + .remove = ata_pci_remove_one +}; + +static int __init opti_init(void) +{ + return pci_register_driver(&opti_pci_driver); +} + + +static void __exit opti_exit(void) +{ + pci_unregister_driver(&opti_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for Opti 621/621X"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, opti); +MODULE_VERSION(DRV_VERSION); + +module_init(opti_init); +module_exit(opti_exit); diff -puN /dev/null drivers/scsi/pata_pdc2027x.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_pdc2027x.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,868 @@ +/* + * Promise PATA TX2/TX4/TX2000/133 IDE driver for pdc20268 to pdc20277. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + * + * Ported to libata by: + * Albert Lee IBM Corporation + * + * Copyright (C) 1998-2002 Andre Hedrick + * Portions Copyright (C) 1999 Promise Technology, Inc. + * + * Author: Frank Tiernan (frankt@promise.com) + * Released under terms of General Public License + * + * + * libata documentation is available via 'make {ps|pdf}docs', + * as Documentation/DocBook/libata.* + * + * Hardware information only available under NDA. + * + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_pdc2027x" +#define DRV_VERSION "0.74" +#undef PDC_DEBUG + +#ifdef PDC_DEBUG +#define PDPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args) +#else +#define PDPRINTK(fmt, args...) +#endif + +enum { + PDC_UDMA_100 = 0, + PDC_UDMA_133 = 1, + + PDC_100_MHZ = 100000000, + PDC_133_MHZ = 133333333, + + PDC_SYS_CTL = 0x1100, + PDC_ATA_CTL = 0x1104, + PDC_GLOBAL_CTL = 0x1108, + PDC_CTCR0 = 0x110C, + PDC_CTCR1 = 0x1110, + PDC_BYTE_COUNT = 0x1120, + PDC_PLL_CTL = 0x1202, +}; + +static int pdc2027x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent); +static void pdc2027x_remove_one(struct pci_dev *pdev); +static int pdc2027x_probe_reset(struct ata_port *ap, unsigned int *classes); +static void pdc2027x_set_piomode(struct ata_port *ap, struct ata_device *adev); +static void pdc2027x_set_dmamode(struct ata_port *ap, struct ata_device *adev); +static void pdc2027x_post_set_mode(struct ata_port *ap); +static int pdc2027x_check_atapi_dma(struct ata_queued_cmd *qc); + +/* + * ATA Timing Tables based on 133MHz controller clock. + * These tables are only used when the controller is in 133MHz clock. + * If the controller is in 100MHz clock, the ASIC hardware will + * set the timing registers automatically when "set feature" command + * is issued to the device. However, if the controller clock is 133MHz, + * the following tables must be used. 
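+ * Each entry in the tables below holds the raw timing register values for one mode.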
+ */ +static struct pdc2027x_pio_timing { + u8 value0, value1, value2; +} pdc2027x_pio_timing_tbl [] = { + { 0xfb, 0x2b, 0xac }, /* PIO mode 0 */ + { 0x46, 0x29, 0xa4 }, /* PIO mode 1 */ + { 0x23, 0x26, 0x64 }, /* PIO mode 2 */ + { 0x27, 0x0d, 0x35 }, /* PIO mode 3, IORDY on, Prefetch off */ + { 0x23, 0x09, 0x25 }, /* PIO mode 4, IORDY on, Prefetch off */ +}; + +static struct pdc2027x_mdma_timing { + u8 value0, value1; +} pdc2027x_mdma_timing_tbl [] = { + { 0xdf, 0x5f }, /* MDMA mode 0 */ + { 0x6b, 0x27 }, /* MDMA mode 1 */ + { 0x69, 0x25 }, /* MDMA mode 2 */ +}; + +static struct pdc2027x_udma_timing { + u8 value0, value1, value2; +} pdc2027x_udma_timing_tbl [] = { + { 0x4a, 0x0f, 0xd5 }, /* UDMA mode 0 */ + { 0x3a, 0x0a, 0xd0 }, /* UDMA mode 1 */ + { 0x2a, 0x07, 0xcd }, /* UDMA mode 2 */ + { 0x1a, 0x05, 0xcd }, /* UDMA mode 3 */ + { 0x1a, 0x03, 0xcd }, /* UDMA mode 4 */ + { 0x1a, 0x02, 0xcb }, /* UDMA mode 5 */ + { 0x1a, 0x01, 0xcb }, /* UDMA mode 6 */ +}; + +static const struct pci_device_id pdc2027x_pci_tbl[] = { +#ifdef ATA_ENABLE_PATA + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20268, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_100 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20269, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_133 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20270, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_100 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20271, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_133 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20275, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_133 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20276, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_133 }, + { PCI_VENDOR_ID_PROMISE, PCI_DEVICE_ID_PROMISE_20277, PCI_ANY_ID, PCI_ANY_ID, 0, 0, PDC_UDMA_133 }, +#endif + { } /* terminate list */ +}; + +static struct pci_driver pdc2027x_pci_driver = { + .name = DRV_NAME, + .id_table = pdc2027x_pci_tbl, + .probe = pdc2027x_init_one, + .remove = __devexit_p(pdc2027x_remove_one), +}; + +static struct scsi_host_template pdc2027x_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations pdc2027x_pata100_ops = { + .port_disable = ata_port_disable, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = pdc2027x_probe_reset, + + .check_atapi_dma = pdc2027x_check_atapi_dma, + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_pci_host_stop, +}; + +static struct ata_port_operations pdc2027x_pata133_ops = { + .port_disable = ata_port_disable, + .set_piomode = pdc2027x_set_piomode, + .set_dmamode = pdc2027x_set_dmamode, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = 
ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = pdc2027x_probe_reset, + .post_set_mode = pdc2027x_post_set_mode, + + .check_atapi_dma = pdc2027x_check_atapi_dma, + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_pci_host_stop, +}; + +static struct ata_port_info pdc2027x_port_info[] = { + /* PDC_UDMA_100 */ + { + .sht = &pdc2027x_sht, + .host_flags = ATA_FLAG_NO_LEGACY | ATA_FLAG_SLAVE_POSS | + ATA_FLAG_MMIO, + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, /* mwdma0-2 */ + .udma_mask = ATA_UDMA5, /* udma0-5 */ + .port_ops = &pdc2027x_pata100_ops, + }, + /* PDC_UDMA_133 */ + { + .sht = &pdc2027x_sht, + .host_flags = ATA_FLAG_NO_LEGACY | ATA_FLAG_SLAVE_POSS | + ATA_FLAG_MMIO, + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, /* mwdma0-2 */ + .udma_mask = ATA_UDMA6, /* udma0-6 */ + .port_ops = &pdc2027x_pata133_ops, + }, +}; + +MODULE_AUTHOR("Andre Hedrick, Frank Tiernan, Albert Lee"); +MODULE_DESCRIPTION("libata driver module for Promise PDC20268 to PDC20277"); +MODULE_LICENSE("GPL"); +MODULE_VERSION(DRV_VERSION); +MODULE_DEVICE_TABLE(pci, pdc2027x_pci_tbl); + +/** + * port_mmio - Get the MMIO address of PDC2027x extended registers + * @ap: Port + * @offset: offset from mmio base + */ +static inline void* port_mmio(struct ata_port *ap, unsigned int offset) +{ + return ap->host_set->mmio_base + ap->port_no * 0x100 + offset; +} + +/** + * dev_mmio - Get the MMIO address of PDC2027x extended registers + * @ap: Port + * @adev: device + * @offset: offset from mmio base + */ +static inline void* dev_mmio(struct ata_port *ap, struct ata_device *adev, unsigned int offset) +{ + u8 adj = (adev->devno) ? 0x08 : 0x00; + return port_mmio(ap, offset) + adj; +} + +/** + * pdc2027x_pata_cbl_detect - Probe host controller cable detect info + * @ap: Port for which cable detect info is desired + * + * Read 80c cable indicator from Promise extended register. + * This register is latched when the system is reset. + * + * LOCKING: + * None (inherited from caller). + */ +static void pdc2027x_cbl_detect(struct ata_port *ap) +{ + u32 cgcr; + + /* check cable detect results */ + cgcr = readl(port_mmio(ap, PDC_GLOBAL_CTL)); + if (cgcr & (1 << 26)) + goto cbl40; + + PDPRINTK("No cable or 80-conductor cable on port %d\n", ap->port_no); + + ap->cbl = ATA_CBL_PATA80; + return; + +cbl40: + printk(KERN_INFO DRV_NAME ": 40-conductor cable detected on port %d\n", ap->port_no); + ap->cbl = ATA_CBL_PATA40; + ap->udma_mask &= ATA_UDMA_MASK_40C; +} + +/** + * pdc2027x_port_enabled - Check PDC ATA control register to see whether the port is enabled. + * @ap: Port to check + */ +static inline int pdc2027x_port_enabled(struct ata_port *ap) +{ + return readb(port_mmio(ap, PDC_ATA_CTL)) & 0x02; +} + +/** + * pdc2027x_probeinit - probeinit for PATA host controller + * @ap: Target port + * + * Probeinit including cable detection. + * + * LOCKING: + * None (inherited from caller). 
+ */ + +static void pdc2027x_probeinit(struct ata_port *ap) +{ + pdc2027x_cbl_detect(ap); + ata_std_probeinit(ap); +} + +/** + * pdc2027x_probe_reset - Perform reset on PATA port and classify + * @ap: Port to reset + * @classes: Resulting classes of attached devices + * + * Reset PATA phy and classify attached devices. + * + * LOCKING: + * None (inherited from caller). + */ + +static int pdc2027x_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + /* Check whether port enabled */ + if (!pdc2027x_port_enabled(ap)) { + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + + return ata_drive_probe_reset(ap, pdc2027x_probeinit, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * pdc2027x_set_piomode - Initialize host controller PATA PIO timings + * @ap: Port to configure + * @adev: Device to program + * + * Set PIO mode for device. + * + * LOCKING: + * None (inherited from caller). + */ +static void pdc2027x_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + unsigned int pio = adev->pio_mode - XFER_PIO_0; + u32 ctcr0, ctcr1; + + PDPRINTK("adev->pio_mode[%X]\n", adev->pio_mode); + + /* Sanity check */ + if (pio > 4) { + printk(KERN_ERR DRV_NAME ": Unknown pio mode [%d] ignored\n", pio); + return; + + } + + /* Set the PIO timing registers using value table for 133MHz */ + PDPRINTK("Set pio regs... \n"); + + ctcr0 = readl(dev_mmio(ap, adev, PDC_CTCR0)); + ctcr0 &= 0xffff0000; + ctcr0 |= pdc2027x_pio_timing_tbl[pio].value0 | + (pdc2027x_pio_timing_tbl[pio].value1 << 8); + writel(ctcr0, dev_mmio(ap, adev, PDC_CTCR0)); + + ctcr1 = readl(dev_mmio(ap, adev, PDC_CTCR1)); + ctcr1 &= 0x00ffffff; + ctcr1 |= (pdc2027x_pio_timing_tbl[pio].value2 << 24); + writel(ctcr1, dev_mmio(ap, adev, PDC_CTCR1)); + + PDPRINTK("Set pio regs done\n"); + + PDPRINTK("Set to pio mode[%u] \n", pio); +} + +/** + * pdc2027x_set_dmamode - Initialize host controller PATA UDMA timings + * @ap: Port to configure + * @adev: Device to program + * + * Set UDMA mode for device. + * + * LOCKING: + * None (inherited from caller). + */ +static void pdc2027x_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + unsigned int dma_mode = adev->dma_mode; + u32 ctcr0, ctcr1; + + if ((dma_mode >= XFER_UDMA_0) && + (dma_mode <= XFER_UDMA_6)) { + /* Set the UDMA timing registers with value table for 133MHz */ + unsigned int udma_mode = dma_mode & 0x07; + + if (dma_mode == XFER_UDMA_2) { + /* + * Turn off tHOLD. + * If tHOLD is '1', the hardware will add half clock for data hold time. + * This code segment seems to have no effect. tHOLD will be overwritten below. + */ + ctcr1 = readl(dev_mmio(ap, adev, PDC_CTCR1)); + writel(ctcr1 & ~(1 << 7), dev_mmio(ap, adev, PDC_CTCR1)); + } + + PDPRINTK("Set udma regs... \n"); + + ctcr1 = readl(dev_mmio(ap, adev, PDC_CTCR1)); + ctcr1 &= 0xff000000; + ctcr1 |= pdc2027x_udma_timing_tbl[udma_mode].value0 | + (pdc2027x_udma_timing_tbl[udma_mode].value1 << 8) | + (pdc2027x_udma_timing_tbl[udma_mode].value2 << 16); + writel(ctcr1, dev_mmio(ap, adev, PDC_CTCR1)); + + PDPRINTK("Set udma regs done\n"); + + PDPRINTK("Set to udma mode[%u] \n", udma_mode); + + } else if ((dma_mode >= XFER_MW_DMA_0) && + (dma_mode <= XFER_MW_DMA_2)) { + /* Set the MDMA timing registers with value table for 133MHz */ + unsigned int mdma_mode = dma_mode & 0x07; + + PDPRINTK("Set mdma regs... 
\n"); + ctcr0 = readl(dev_mmio(ap, adev, PDC_CTCR0)); + + ctcr0 &= 0x0000ffff; + ctcr0 |= (pdc2027x_mdma_timing_tbl[mdma_mode].value0 << 16) | + (pdc2027x_mdma_timing_tbl[mdma_mode].value1 << 24); + + writel(ctcr0, dev_mmio(ap, adev, PDC_CTCR0)); + PDPRINTK("Set mdma regs done\n"); + + PDPRINTK("Set to mdma mode[%u] \n", mdma_mode); + } else { + printk(KERN_ERR DRV_NAME ": Unknown dma mode [%u] ignored\n", dma_mode); + } +} + +/** + * pdc2027x_post_set_mode - Set the timing registers back to correct values. + * @ap: Port to configure + * + * The pdc2027x hardware will look at "SET FEATURES" and change the timing registers + * automatically. The values set by the hardware might be incorrect, under 133Mhz PLL. + * This function overwrites the possibly incorrect values set by the hardware to be correct. + */ +static void pdc2027x_post_set_mode(struct ata_port *ap) +{ + int i; + + for (i = 0; i < ATA_MAX_DEVICES; i++) { + struct ata_device *dev = &ap->device[i]; + + if (ata_dev_enabled(dev)) { + + pdc2027x_set_piomode(ap, dev); + + /* + * Enable prefetch if the device support PIO only. + */ + if (dev->xfer_shift == ATA_SHIFT_PIO) { + u32 ctcr1 = readl(dev_mmio(ap, dev, PDC_CTCR1)); + ctcr1 |= (1 << 25); + writel(ctcr1, dev_mmio(ap, dev, PDC_CTCR1)); + + PDPRINTK("Turn on prefetch\n"); + } else { + pdc2027x_set_dmamode(ap, dev); + } + } + } +} + +/** + * pdc2027x_check_atapi_dma - Check whether ATAPI DMA can be supported for this command + * @qc: Metadata associated with taskfile to check + * + * LOCKING: + * None (inherited from caller). + * + * RETURNS: 0 when ATAPI DMA can be used + * 1 otherwise + */ +static int pdc2027x_check_atapi_dma(struct ata_queued_cmd *qc) +{ + struct scsi_cmnd *cmd = qc->scsicmd; + u8 *scsicmd = cmd->cmnd; + int rc = 1; /* atapi dma off by default */ + + /* + * This workaround is from Promise's GPL driver. + * If ATAPI DMA is used for commands not in the + * following white list, say MODE_SENSE and REQUEST_SENSE, + * pdc2027x might hit the irq lost problem. + */ + switch (scsicmd[0]) { + case READ_10: + case WRITE_10: + case READ_12: + case WRITE_12: + case READ_6: + case WRITE_6: + case 0xad: /* READ_DVD_STRUCTURE */ + case 0xbe: /* READ_CD */ + /* ATAPI DMA is ok */ + rc = 0; + break; + default: + ; + } + + return rc; +} + +/** + * pdc_read_counter - Read the ctr counter + * @probe_ent: for the port address + */ + +static long pdc_read_counter(struct ata_probe_ent *probe_ent) +{ + long counter; + int retry = 1; + u32 bccrl, bccrh, bccrlv, bccrhv; + +retry: + bccrl = readl(probe_ent->mmio_base + PDC_BYTE_COUNT) & 0xffff; + bccrh = readl(probe_ent->mmio_base + PDC_BYTE_COUNT + 0x100) & 0xffff; + rmb(); + + /* Read the counter values again for verification */ + bccrlv = readl(probe_ent->mmio_base + PDC_BYTE_COUNT) & 0xffff; + bccrhv = readl(probe_ent->mmio_base + PDC_BYTE_COUNT + 0x100) & 0xffff; + rmb(); + + counter = (bccrh << 15) | bccrl; + + PDPRINTK("bccrh [%X] bccrl [%X]\n", bccrh, bccrl); + PDPRINTK("bccrhv[%X] bccrlv[%X]\n", bccrhv, bccrlv); + + /* + * The 30-bit decreasing counter are read by 2 pieces. + * Incorrect value may be read when both bccrh and bccrl are changing. + * Ex. When 7900 decrease to 78FF, wrong value 7800 might be read. + */ + if (retry && !(bccrh == bccrhv && bccrl >= bccrlv)) { + retry--; + PDPRINTK("rereading counter\n"); + goto retry; + } + + return counter; +} + +/** + * adjust_pll - Adjust the PLL input clock in Hz. 
+ * + * @probe_ent: For the port address + * @pll_clock: The PLL input clock in Hz + * @board_idx: Selects the 100MHz or 133MHz PLL output + */ +static void pdc_adjust_pll(struct ata_probe_ent *probe_ent, long pll_clock, unsigned int board_idx) +{ + + u16 pll_ctl; + long pll_clock_khz = pll_clock / 1000; + long pout_required = board_idx ? PDC_133_MHZ : PDC_100_MHZ; + long ratio = pout_required / pll_clock_khz; + int F, R; + + /* Sanity check */ + if (unlikely(pll_clock_khz < 5000L || pll_clock_khz > 70000L)) { + printk(KERN_ERR DRV_NAME ": Invalid PLL input clock %ldkHz, give up!\n", pll_clock_khz); + return; + } + +#ifdef PDC_DEBUG + PDPRINTK("pout_required is %ld\n", pout_required); + + /* Show the current clock value of PLL control register + * (maybe already configured by the firmware) + */ + pll_ctl = readw(probe_ent->mmio_base + PDC_PLL_CTL); + + PDPRINTK("pll_ctl[%X]\n", pll_ctl); +#endif + + /* + * Calculate the ratio of F, R and OD + * POUT = (F + 2) / (( R + 2) * NO) + */ + if (ratio < 8600L) { /* 8.6x */ + /* Using NO = 0x01, R = 0x0D */ + R = 0x0d; + } else if (ratio < 12900L) { /* 12.9x */ + /* Using NO = 0x01, R = 0x08 */ + R = 0x08; + } else if (ratio < 16100L) { /* 16.1x */ + /* Using NO = 0x01, R = 0x06 */ + R = 0x06; + } else if (ratio < 64000L) { /* 64x */ + R = 0x00; + } else { + /* Invalid ratio */ + printk(KERN_ERR DRV_NAME ": Invalid ratio %ld, give up!\n", ratio); + return; + } + + F = (ratio * (R+2)) / 1000 - 2; + + if (unlikely(F < 0 || F > 127)) { + /* Invalid F */ + printk(KERN_ERR DRV_NAME ": F[%d] invalid!\n", F); + return; + } + + PDPRINTK("F[%d] R[%d] ratio*1000[%ld]\n", F, R, ratio); + + pll_ctl = (R << 8) | F; + + PDPRINTK("Writing pll_ctl[%X]\n", pll_ctl); + + writew(pll_ctl, probe_ent->mmio_base + PDC_PLL_CTL); + readw(probe_ent->mmio_base + PDC_PLL_CTL); /* flush */ + + /* Wait for the PLL circuit to stabilize */ + mdelay(30); + +#ifdef PDC_DEBUG + /* + * Show the current clock value of PLL control register + * (maybe configured by the firmware) + */ + pll_ctl = readw(probe_ent->mmio_base + PDC_PLL_CTL); + + PDPRINTK("pll_ctl[%X]\n", pll_ctl); +#endif + + return; +} + +/** + * pdc_detect_pll_input_clock - Detect the PLL input clock in Hz. + * @probe_ent: for the port address + * Ex. 16949000 on a 33MHz PCI bus for pdc20275 (half of the PCI clock). + */ +static long pdc_detect_pll_input_clock(struct ata_probe_ent *probe_ent) +{ + u32 scr; + long start_count, end_count; + long pll_clock; + + /* Read current counter value */ + start_count = pdc_read_counter(probe_ent); + + /* Start the test mode */ + scr = readl(probe_ent->mmio_base + PDC_SYS_CTL); + PDPRINTK("scr[%X]\n", scr); + writel(scr | (0x01 << 14), probe_ent->mmio_base + PDC_SYS_CTL); + readl(probe_ent->mmio_base + PDC_SYS_CTL); /* flush */ + + /* Let the counter run for 100 ms. */ + mdelay(100); + + /* Read the counter values again */ + end_count = pdc_read_counter(probe_ent); + + /* Stop the test mode */ + scr = readl(probe_ent->mmio_base + PDC_SYS_CTL); + PDPRINTK("scr[%X]\n", scr); + writel(scr & ~(0x01 << 14), probe_ent->mmio_base + PDC_SYS_CTL); + readl(probe_ent->mmio_base + PDC_SYS_CTL); /* flush */ + + /* calculate the input clock in Hz */ + pll_clock = (start_count - end_count) * 10; + + PDPRINTK("start[%ld] end[%ld] \n", start_count, end_count); + PDPRINTK("PLL input clock[%ld]Hz\n", pll_clock); + + return pll_clock; +} + +/** + * pdc_hardware_init - Initialize the hardware.
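+ * Detects the PLL input clock and reprograms the PLL for the 100MHz or 133MHz output the board requires.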
+ * @pdev: instance of pci_dev found + * @pe: for the port address + * @board_idx: board index, selects the target PLL output (100 or 133MHz) + */ +static int pdc_hardware_init(struct pci_dev *pdev, struct ata_probe_ent *pe, unsigned int board_idx) +{ + long pll_clock; + + /* + * Detect the PLL input clock rate. + * On some systems the PCI bus runs at a non-standard clock rate, + * e.g. 25MHz or 40MHz, so we have to adjust the cycle_time. + * The pdc20275 controller employs a PLL circuit to help correct the timing register settings. + */ + pll_clock = pdc_detect_pll_input_clock(pe); + + if (pll_clock < 0) /* counter overflow? Try again. */ + pll_clock = pdc_detect_pll_input_clock(pe); + + dev_printk(KERN_INFO, &pdev->dev, "PLL input clock %ld kHz\n", pll_clock/1000); + + /* Adjust PLL control register */ + pdc_adjust_pll(pe, pll_clock, board_idx); + + return 0; +} + +/** + * pdc_ata_setup_port - setup the mmio address + * @port: ata ioports to setup + * @base: base address + */ +static void pdc_ata_setup_port(struct ata_ioports *port, unsigned long base) +{ + port->cmd_addr = + port->data_addr = base; + port->feature_addr = + port->error_addr = base + 0x05; + port->nsect_addr = base + 0x0a; + port->lbal_addr = base + 0x0f; + port->lbam_addr = base + 0x10; + port->lbah_addr = base + 0x15; + port->device_addr = base + 0x1a; + port->command_addr = + port->status_addr = base + 0x1f; + port->altstatus_addr = + port->ctl_addr = base + 0x81a; +} + +/** + * pdc2027x_init_one - PCI probe function + * Called when an instance of the PCI adapter is inserted. + * This function checks whether the hardware is supported, + * initializes the hardware and registers an instance of ata_host_set + * with libata by providing a struct ata_probe_ent and calling ata_device_add(). + * (implements struct pci_driver.probe() ) + * + * @pdev: instance of pci_dev found + * @ent: matching entry in the id_tbl[] + */ +static int __devinit pdc2027x_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) +{ + static int printed_version; + unsigned int board_idx = (unsigned int) ent->driver_data; + + struct ata_probe_ent *probe_ent = NULL; + unsigned long base; + void *mmio_base; + int rc; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); + + rc = pci_enable_device(pdev); + if (rc) + return rc; + + rc = pci_request_regions(pdev, DRV_NAME); + if (rc) + goto err_out; + + rc = pci_set_dma_mask(pdev, ATA_DMA_MASK); + if (rc) + goto err_out_regions; + + rc = pci_set_consistent_dma_mask(pdev, ATA_DMA_MASK); + if (rc) + goto err_out_regions; + + /* Prepare the probe entry */ + probe_ent = kzalloc(sizeof(*probe_ent), GFP_KERNEL); + if (probe_ent == NULL) { + rc = -ENOMEM; + goto err_out_regions; + } + + probe_ent->dev = pci_dev_to_dev(pdev); + INIT_LIST_HEAD(&probe_ent->node); + + mmio_base = pci_iomap(pdev, 5, 0); + if (!mmio_base) { + rc = -ENOMEM; + goto err_out_free_ent; + } + + base = (unsigned long) mmio_base; + + probe_ent->sht = pdc2027x_port_info[board_idx].sht; + probe_ent->host_flags = pdc2027x_port_info[board_idx].host_flags; + probe_ent->pio_mask = pdc2027x_port_info[board_idx].pio_mask; + probe_ent->mwdma_mask = pdc2027x_port_info[board_idx].mwdma_mask; + probe_ent->udma_mask = pdc2027x_port_info[board_idx].udma_mask; + probe_ent->port_ops = pdc2027x_port_info[board_idx].port_ops; + + probe_ent->irq = pdev->irq; + probe_ent->irq_flags = SA_SHIRQ; + probe_ent->mmio_base = mmio_base; + + pdc_ata_setup_port(&probe_ent->port[0], base + 0x17c0); + probe_ent->port[0].bmdma_addr = base + 0x1000; + 
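/* The second channel mirrors the first, with its own taskfile and BMDMA register offsets */ +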
pdc_ata_setup_port(&probe_ent->port[1], base + 0x15c0); + probe_ent->port[1].bmdma_addr = base + 0x1008; + + probe_ent->n_ports = 2; + + pci_set_master(pdev); + //pci_enable_intx(pdev); + + /* initialize adapter */ + if (pdc_hardware_init(pdev, probe_ent, board_idx) != 0) + goto err_out_free_ent; + + ata_device_add(probe_ent); + kfree(probe_ent); + + return 0; + +err_out_free_ent: + kfree(probe_ent); +err_out_regions: + pci_release_regions(pdev); +err_out: + pci_disable_device(pdev); + return rc; +} + +/** + * pdc2027x_remove_one - Called to remove a single instance of the + * adapter. + * + * @pdev: The PCI device to remove. + * FIXME: module load/unload not working yet + */ +static void __devexit pdc2027x_remove_one(struct pci_dev *pdev) +{ + ata_pci_remove_one(pdev); +} + +/** + * pdc2027x_init - Called after this module is loaded into the kernel. + */ +static int __init pdc2027x_init(void) +{ + return pci_module_init(&pdc2027x_pci_driver); +} + +/** + * pdc2027x_exit - Called before this module is unloaded from the kernel + */ +static void __exit pdc2027x_exit(void) +{ + pci_unregister_driver(&pdc2027x_pci_driver); +} + +module_init(pdc2027x_init); +module_exit(pdc2027x_exit); diff -puN /dev/null drivers/scsi/pata_sil680.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_sil680.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,380 @@ +/* + * pata_sil680.c - SIL680 PATA for new ATA layer + * (C) 2005 Red Hat Inc + * Alan Cox + * + * based upon + * + * linux/drivers/ide/pci/siimage.c Version 1.07 Nov 30, 2003 + * + * Copyright (C) 2001-2002 Andre Hedrick + * Copyright (C) 2003 Red Hat + * + * May be copied or modified under the terms of the GNU General Public License + * + * Documentation publicly available. + * + * If you have strange problems with nVidia chipset systems please + * see the SI support documentation and update your system BIOS + * if necessary + * + * TODO + * If we know all our devices are LBA28 (or LBA28 sized) we could use + * the command fifo mode. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_sil680" +#define DRV_VERSION "0.3.1" + +/** + * sil680_selreg - return register base + * @ap: ATA port + * @r: config offset + * + * Turn a config register offset into the right address in either + * PCI space or MMIO space to access the control register in question + * Thankfully this is a configuration operation so it isn't performance + * critical. + */ + +static unsigned long sil680_selreg(struct ata_port *ap, int r) +{ + unsigned long base = 0xA0 + r; + base += (ap->hard_port_no << 4); + return base; +} + +/** + * sil680_seldev - return register base + * @ap: ATA port + * @adev: ATA device + * @r: config offset + * + * Turn a config register offset into the right address in either + * PCI space or MMIO space to access the control register in question + * including accounting for the unit shift. + */ + +static unsigned long sil680_seldev(struct ata_port *ap, struct ata_device *adev, int r) +{ + unsigned long base = 0xA0 + r; + base += (ap->hard_port_no << 4); + base |= adev->devno ? 2 : 0; + return base; +} + + +/** + * sil680_cable_detect - cable detection + * @ap: ATA port + * + * Perform cable detection. The SIL680 stores this in PCI config + * space for us.
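+ * Bit 0 of the per-channel register at 0xA0/0xB0 reads as 1 when an 80-wire cable is present.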
+ */ + +static int sil680_cable_detect(struct ata_port *ap) { + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + unsigned long addr = sil680_selreg(ap, 0); + u8 ata66; + pci_read_config_byte(pdev, addr, &ata66); + if (ata66 & 1) + return ATA_CBL_PATA80; + else + return ATA_CBL_PATA40; +} + +static void sil680_probe_init(struct ata_port *ap) +{ + ap->cbl = sil680_cable_detect(ap); + ata_std_probeinit(ap); +} + +/** + * sil680_bus_reset - reset the SIL680 bus + * @ap: ATA port to reset + * + * Perform the SIL680 housekeeping when doing an ATA bus reset + */ + +static int sil680_bus_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + unsigned long addr = sil680_selreg(ap, 0); + u8 reset; + + pci_read_config_byte(pdev, addr, &reset); + pci_write_config_byte(pdev, addr, reset | 0x03); + udelay(25); + pci_write_config_byte(pdev, addr, reset); + return ata_std_softreset(ap, classes); +} + +static int sil680_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + return ata_drive_probe_reset(ap, sil680_probe_init, + sil680_bus_reset, NULL, + ata_std_postreset, classes); +} + +/** + * sil680_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the SIL680 registers for PIO mode. Note that the task speed + * registers are shared between the devices so we must pick the lowest + * mode for command work. + */ + +static void sil680_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + static u16 speed_p[5] = { 0x328A, 0x2283, 0x1104, 0x10C3, 0x10C1 }; + static u16 speed_t[5] = { 0x328A, 0x1281, 0x1281, 0x10C3, 0x10C1 }; + + unsigned long tfaddr = sil680_selreg(ap, 0x02); + unsigned long addr = sil680_seldev(ap, adev, 0x04); + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int pio = adev->pio_mode - XFER_PIO_0; + int lowest_pio = pio; + u16 reg; + + struct ata_device *pair = ata_dev_pair(adev); + + if (pair != NULL && adev->pio_mode > pair->pio_mode) + lowest_pio = pair->pio_mode - XFER_PIO_0; + + pci_write_config_word(pdev, addr, speed_p[pio]); + pci_write_config_word(pdev, tfaddr, speed_t[lowest_pio]); + + pci_read_config_word(pdev, tfaddr-2, &reg); + reg &= ~0x0200; /* Clear IORDY */ + if (ata_pio_need_iordy(adev)) + reg |= 0x0200; /* Enable IORDY */ + pci_write_config_word(pdev, tfaddr-2, reg); +} + +/** + * sil680_set_dmamode - set initial DMA mode data + * @ap: ATA interface + * @adev: ATA device + * + * Program the MWDMA/UDMA modes for the sil680 + * chipset. The MWDMA mode values are pulled from a lookup table + * while the chipset uses the mode number directly for UDMA. + */ + +static void sil680_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + static u8 ultra_table[2][7] = { + { 0x0C, 0x07, 0x05, 0x04, 0x02, 0x01, 0xFF }, /* 100MHz */ + { 0x0F, 0x0B, 0x07, 0x05, 0x03, 0x02, 0x01 }, /* 133MHz */ + }; + static u16 dma_table[3] = { 0x2208, 0x10C2, 0x10C1 }; + + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + unsigned long ma = sil680_seldev(ap, adev, 0x08); + unsigned long ua = sil680_seldev(ap, adev, 0x0C); + unsigned long addr_mask = 0x80 + 4 * ap->hard_port_no; + int port_shift = adev->devno * 4; + u8 scsc, mode; + u16 multi, ultra; + + pci_read_config_byte(pdev, 0x8A, &scsc); + pci_read_config_byte(pdev, addr_mask, &mode); + pci_read_config_word(pdev, ma, &multi); + pci_read_config_word(pdev, ua, &ultra); + + /* Mask timing bits */ + ultra &= ~0x3F; + mode &= ~(0x03 << port_shift); + + /* Extract scsc */ + scsc = (scsc & 0x30) ? 
1: 0; + + if (adev->dma_mode >= XFER_UDMA_0) { + multi = 0x10C1; + ultra |= ultra_table[scsc][adev->dma_mode - XFER_UDMA_0]; + mode |= (0x03 << port_shift); + } else { + multi = dma_table[adev->dma_mode - XFER_MW_DMA_0]; + mode |= (0x02 << port_shift); + } + pci_write_config_byte(pdev, addr_mask, mode); + pci_write_config_word(pdev, ma, multi); + pci_write_config_word(pdev, ua, ultra); +} + +static struct scsi_host_template sil680_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations sil680_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = sil680_set_piomode, + .set_dmamode = sil680_set_dmamode, + .mode_filter = ata_pci_default_filter, + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = sil680_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static int sil680_init_one(struct pci_dev *pdev, const struct pci_device_id *id) +{ + static struct ata_port_info info = { + .sht = &sil680_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x7f, + .port_ops = &sil680_port_ops + }; + static struct ata_port_info info_slow = { + .sht = &sil680_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, + .port_ops = &sil680_port_ops + }; + static struct ata_port_info *port_info[2] = {&info, &info}; + static int printed_version; + u32 class_rev = 0; + u8 tmpbyte = 0; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); + + pci_read_config_dword(pdev, PCI_CLASS_REVISION, &class_rev); + class_rev &= 0xff; + /* FIXME: double check */ + pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, (class_rev) ? 
1 : 255); + + pci_write_config_byte(pdev, 0x80, 0x00); + pci_write_config_byte(pdev, 0x84, 0x00); + + pci_read_config_byte(pdev, 0x8A, &tmpbyte); + + printk(KERN_INFO "sil680: BA5_EN = %d clock = %02X\n", + tmpbyte & 1, tmpbyte & 0x30); + + switch(tmpbyte & 0x30) { + case 0x00: + /* 133 clock attempt to force it on */ + pci_write_config_byte(pdev, 0x8A, tmpbyte|0x10); + break; + case 0x30: + /* if clocking is disabled */ + /* 133 clock attempt to force it on */ + pci_write_config_byte(pdev, 0x8A, tmpbyte & ~0x20); + break; + case 0x10: + /* 133 already */ + break; + case 0x20: + /* BIOS set PCI x2 clocking */ + break; + } + + pci_read_config_byte(pdev, 0x8A, &tmpbyte); + printk(KERN_INFO "sil680: BA5_EN = %d clock = %02X\n", + tmpbyte & 1, tmpbyte & 0x30); + if ((tmpbyte & 0x30) == 0) + port_info[0] = port_info[1] = &info_slow; + + pci_write_config_byte(pdev, 0xA1, 0x72); + pci_write_config_word(pdev, 0xA2, 0x328A); + pci_write_config_dword(pdev, 0xA4, 0x62DD62DD); + pci_write_config_dword(pdev, 0xA8, 0x43924392); + pci_write_config_dword(pdev, 0xAC, 0x40094009); + pci_write_config_byte(pdev, 0xB1, 0x72); + pci_write_config_word(pdev, 0xB2, 0x328A); + pci_write_config_dword(pdev, 0xB4, 0x62DD62DD); + pci_write_config_dword(pdev, 0xB8, 0x43924392); + pci_write_config_dword(pdev, 0xBC, 0x40094009); + + switch(tmpbyte & 0x30) { + case 0x00: printk(KERN_INFO "sil680: 100MHz clock.\n");break; + case 0x10: printk(KERN_INFO "sil680: 133MHz clock.\n");break; + case 0x20: printk(KERN_INFO "sil680: Using PCI clock.\n");break; + /* This last case is _NOT_ ok */ + case 0x30: printk(KERN_ERR "sil680: Clock disabled ?\n"); + return -EIO; + } + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id sil680[] = { + { PCI_DEVICE(PCI_VENDOR_ID_CMD, PCI_DEVICE_ID_SII_680), }, + { 0, }, +}; + +static struct pci_driver sil680_pci_driver = { + .name = DRV_NAME, + .id_table = sil680, + .probe = sil680_init_one, + .remove = ata_pci_remove_one +}; + +static int __init sil680_init(void) +{ + return pci_register_driver(&sil680_pci_driver); +} + + +static void __exit sil680_exit(void) +{ + pci_unregister_driver(&sil680_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for SI680 PATA"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, sil680); +MODULE_VERSION(DRV_VERSION); + +module_init(sil680_init); +module_exit(sil680_exit); diff -puN /dev/null drivers/scsi/pata_sis.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_sis.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,1022 @@ +/* + * pata_sis.c - SiS ATA driver + * + * (C) 2005 Red Hat + * + * Based upon linux/drivers/ide/pci/sis5513.c + * Copyright (C) 1999-2000 Andre Hedrick + * Copyright (C) 2002 Lionel Bouton , Maintainer + * Copyright (C) 2003 Vojtech Pavlik + * SiS Taiwan : for direct support and hardware. + * Daniela Engert : for initial ATA100 advices and numerous others. + * John Fremlin, Manfred Spraul, Dave Morgan, Peter Kjellerstedt : + * for checking code correctness, providing patches. + * Original tests and design on the SiS620 chipset. + * ATA100 tests and design on the SiS735 chipset. + * ATA16/33 support from specs + * ATA133 support for SiS961/962 by L.C. Chang + * + * + * TODO + * Check MWDMA on drives that don't support MWDMA speed pio cycles ? 
+ * More Testing + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_sis" +#define DRV_VERSION "0.3.2" + +struct sis_chipset { + u16 device; /* PCI host ID */ + struct ata_port_info *info; /* Info block */ + /* Probably add family, cable detect type etc here to clean + up code later */ +}; + +/** + * sis_port_base - return PCI configuration base for dev + * @adev: device + * + * Returns the base of the PCI configuration registers for this port + * number. + */ + +static int sis_port_base(struct ata_device *adev) +{ + return 0x40 + (4 * adev->ap->hard_port_no) + (2 * adev->devno); +} + +/** + * sis_133_probe_init - check for 40/80 pin + * @ap: Port + * + * Perform cable detection for the later UDMA133 capable + * SiS chipset. + */ + +static void sis_133_probe_init(struct ata_port *ap) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u16 tmp; + + /* The top bit of this register is the cable detect bit */ + pci_read_config_word(pdev, 0x50 + 2 * ap->hard_port_no, &tmp); + if (tmp & 0x8000) + ap->cbl = ATA_CBL_PATA40; + else + ap->cbl = ATA_CBL_PATA80; + + ata_std_probeinit(ap); +} + +/** + * sis_probe_reset - Probe specified port on PATA host controller + * @ap: Port to probe + * + * LOCKING: + * None (inherited from caller). + */ + +static int sis_133_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits sis_enable_bits[] = { + { 0x4aU, 1U, 0x02UL, 0x02UL }, /* port 0 */ + { 0x4aU, 1U, 0x04UL, 0x04UL }, /* port 1 */ + }; + + if (!pci_test_config_bits(pdev, &sis_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, sis_133_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + + +/** + * sis_66_probe_init - check for 40/80 pin + * @ap: Port + * + * Perform cable detection on the UDMA66, UDMA100 and early UDMA133 + * SiS IDE controllers. + */ + +static void sis_66_probe_init(struct ata_port *ap) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u8 tmp; + + /* Older chips keep cable detect in bits 4/5 of reg 0x48 */ + pci_read_config_byte(pdev, 0x48, &tmp); + tmp >>= ap->hard_port_no; + if (tmp & 0x10) + ap->cbl = ATA_CBL_PATA40; + else + ap->cbl = ATA_CBL_PATA80; + + ata_std_probeinit(ap); +} + +/** + * sis_66_probe_reset - Probe specified port on PATA host controller + * @ap: Port to probe + * @classes: + * + * LOCKING: + * None (inherited from caller). + */ + +static int sis_66_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits sis_enable_bits[] = { + { 0x4aU, 1U, 0x02UL, 0x02UL }, /* port 0 */ + { 0x4aU, 1U, 0x04UL, 0x04UL }, /* port 1 */ + }; + + if (!pci_test_config_bits(pdev, &sis_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. 
ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, sis_66_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * sis_old_probe_init - probe begin + * @ap: ATA port + * + * Set up cable type and use generic probe init + */ + +static void sis_old_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + + +/** + * sis_old_probe_reset - Probe specified port on PATA host controller + * @ap: Port to probe + * + * LOCKING: + * None (inherited from caller). + */ + +static int sis_old_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits sis_enable_bits[] = { + { 0x4aU, 1U, 0x02UL, 0x02UL }, /* port 0 */ + { 0x4aU, 1U, 0x04UL, 0x04UL }, /* port 1 */ + }; + + if (!pci_test_config_bits(pdev, &sis_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, sis_old_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * sis_set_fifo - Set RWP fifo bits for this device + * @ap: Port + * @adev: Device + * + * SIS chipsets implement prefetch/postwrite bits for each device + * on both channels. This functionality is not ATAPI compatible and + * must be configured according to the class of device present + */ + +static void sis_set_fifo(struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u8 fifoctrl; + u8 mask = 0x11; + + mask <<= (2 * ap->hard_port_no); + mask <<= adev->devno; + + /* This holds various bits including the FIFO control */ + pci_read_config_byte(pdev, 0x4B, &fifoctrl); + fifoctrl &= ~mask; + + /* Enable for ATA (disk) only */ + if (adev->class == ATA_DEV_ATA) + fifoctrl |= mask; + pci_write_config_byte(pdev, 0x4B, fifoctrl); +} + +/** + * sis_old_set_piomode - Initialize host controller PATA PIO timings + * @ap: Port whose timings we are configuring + * @adev: Device we are configuring for. + * + * Set PIO mode for device, in host controller PCI config space. This + * function handles PIO set up for all chips that are pre ATA100 and + * also early ATA100 devices. + * + * LOCKING: + * None (inherited from caller). + */ + +static void sis_old_set_piomode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int port = sis_port_base(adev); + u8 t1, t2; + int speed = adev->pio_mode - XFER_PIO_0; + + const u8 active[] = { 0x00, 0x07, 0x04, 0x03, 0x01 }; + const u8 recovery[] = { 0x00, 0x06, 0x04, 0x03, 0x03 }; + + sis_set_fifo(ap, adev); + + pci_read_config_byte(pdev, port, &t1); + pci_read_config_byte(pdev, port + 1, &t2); + + t1 &= ~0x0F; /* Clear active/recovery timings */ + t2 &= ~0x07; + + t1 |= active[speed]; + t2 |= recovery[speed]; + + pci_write_config_byte(pdev, port, t1); + pci_write_config_byte(pdev, port + 1, t2); +} + +/** + * sis_100_set_pioode - Initialize host controller PATA PIO timings + * @ap: Port whose timings we are configuring + * @adev: Device we are configuring for. + * + * Set PIO mode for device, in host controller PCI config space. This + * function handles PIO set up for ATA100 devices and early ATA133. + * + * LOCKING: + * None (inherited from caller). 
+ */ + +static void sis_100_set_piomode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int port = sis_port_base(adev); + int speed = adev->pio_mode - XFER_PIO_0; + + const u8 actrec[] = { 0x00, 0x67, 0x44, 0x33, 0x31 }; + + sis_set_fifo(ap, adev); + + pci_write_config_byte(pdev, port, actrec[speed]); +} + +/** + * sis_133_set_piomode - Initialize host controller PATA PIO timings + * @ap: Port whose timings we are configuring + * @adev: Device we are configuring for. + * + * Set PIO mode for device, in host controller PCI config space. This + * function handles PIO set up for the later ATA133 devices. + * + * LOCKING: + * None (inherited from caller). + */ + +static void sis_133_set_piomode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int port = 0x40; + u32 t1; + u32 reg54; + int speed = adev->pio_mode - XFER_PIO_0; + + const u32 timing133[] = { + 0x28269000, /* Recovery << 24 | Act << 16 | Ini << 12 */ + 0x0C266000, + 0x04263000, + 0x0C0A3000, + 0x05093000 + }; + const u32 timing100[] = { + 0x1E1C6000, /* Recovery << 24 | Act << 16 | Ini << 12 */ + 0x091C4000, + 0x031C2000, + 0x09072000, + 0x04062000 + }; + + sis_set_fifo(ap, adev); + + /* If bit 14 is set then the registers are mapped at 0x70 not 0x40 */ + pci_read_config_dword(pdev, 0x54, &reg54); + if (reg54 & 0x40000000) + port = 0x70; + port += 8 * ap->hard_port_no + 4 * adev->devno; + + pci_read_config_dword(pdev, port, &t1); + t1 &= 0xC0C00FFF; /* Mask out timing */ + + if (t1 & 0x08) /* 100 or 133 ? */ + t1 |= timing133[speed]; + else + t1 |= timing100[speed]; + pci_write_config_dword(pdev, port, t1); +} + +/** + * sis_old_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * + * Set UDMA/MWDMA mode for device, in host controller PCI config space. + * Handles pre UDMA and UDMA33 devices. Supports MWDMA as well unlike + * the old ide/pci driver. + * + * LOCKING: + * None (inherited from caller). + */ + +static void sis_old_set_dmamode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int speed = adev->dma_mode - XFER_MW_DMA_0; + int drive_pci = sis_port_base(adev); + u16 timing; + + const u16 mwdma_bits[] = { 0x707, 0x202, 0x202 }; + const u16 udma_bits[] = { 0xE000, 0xC000, 0xA000 }; + + pci_read_config_word(pdev, drive_pci, &timing); + + if (adev->dma_mode < XFER_UDMA_0) { + /* bits 3-0 hold recovery timing, bits 8-10 active timing and + the higher bits are dependent on the device */ + timing &= ~0x870F; + timing |= mwdma_bits[speed]; + } else { + /* Bit 15 is UDMA on/off, bit 13-14 are cycle time */ + speed = adev->dma_mode - XFER_UDMA_0; + timing &= ~0x6000; + timing |= udma_bits[speed]; + } + /* Write the updated timing back for both the MWDMA and UDMA paths */ + pci_write_config_word(pdev, drive_pci, timing); +} + +/** + * sis_66_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * + * Set UDMA/MWDMA mode for device, in host controller PCI config space. + * Handles UDMA66 and early UDMA100 devices. Supports MWDMA as well unlike + * the old ide/pci driver. + * + * LOCKING: + * None (inherited from caller).
+ */ + +static void sis_66_set_dmamode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int speed = adev->dma_mode - XFER_MW_DMA_0; + int drive_pci = sis_port_base(adev); + u16 timing; + + const u16 mwdma_bits[] = { 0x707, 0x202, 0x202 }; + const u16 udma_bits[] = { 0xF000, 0xD000, 0xB000, 0xA000, 0x9000}; + + pci_read_config_word(pdev, drive_pci, &timing); + + if (adev->dma_mode < XFER_UDMA_0) { + /* bits 3-0 hold recovery timing, bits 8-10 active timing and + the higher bits are dependent on the device, bit 15 udma */ + timing &= ~0x870F; + timing |= mwdma_bits[speed]; + } else { + /* Bit 15 is UDMA on/off, bit 12-14 are cycle time */ + speed = adev->dma_mode - XFER_UDMA_0; + timing &= ~0x6000; + timing |= udma_bits[speed]; + } + pci_write_config_word(pdev, drive_pci, timing); +} + +/** + * sis_100_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * + * Set UDMA/MWDMA mode for device, in host controller PCI config space. + * Handles UDMA66 and early UDMA100 devices. + * + * LOCKING: + * None (inherited from caller). + */ + +static void sis_100_set_dmamode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int speed = adev->dma_mode - XFER_MW_DMA_0; + int drive_pci = sis_port_base(adev); + u16 timing; + + const u16 udma_bits[] = { 0x8B00, 0x8700, 0x8500, 0x8300, 0x8200, 0x8100}; + + pci_read_config_word(pdev, drive_pci, &timing); + + if (adev->dma_mode < XFER_UDMA_0) { + /* NOT SUPPORTED YET: NEED DATA SHEET. DITTO IN OLD DRIVER */ + } else { + /* Bit 15 is UDMA on/off, bit 12-14 are cycle time */ + speed = adev->dma_mode - XFER_UDMA_0; + timing &= ~0x0F00; + timing |= udma_bits[speed]; + } + pci_write_config_word(pdev, drive_pci, timing); +} + +/** + * sis_133_early_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * + * Set UDMA mode for device, in host controller PCI config space. + * Handles early SiS 961 bridges. MWDMA timings are not yet + * implemented here (see the note in the code). + * + * LOCKING: + * None (inherited from caller). + */ + +static void sis_133_early_set_dmamode (struct ata_port *ap, struct ata_device *adev) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + int speed = adev->dma_mode - XFER_MW_DMA_0; + int drive_pci = sis_port_base(adev); + u16 timing; + + const u16 udma_bits[] = { 0x8F00, 0x8A00, 0x8700, 0x8500, 0x8300, 0x8200, 0x8100}; + + pci_read_config_word(pdev, drive_pci, &timing); + + if (adev->dma_mode < XFER_UDMA_0) { + /* NOT SUPPORTED YET: NEED DATA SHEET. DITTO IN OLD DRIVER */ + } else { + /* Bit 15 is UDMA on/off, bit 12-14 are cycle time */ + speed = adev->dma_mode - XFER_UDMA_0; + timing &= ~0x0F00; + timing |= udma_bits[speed]; + } + pci_write_config_word(pdev, drive_pci, timing); +} + +/** + * sis_133_set_dmamode - Initialize host controller PATA DMA timings + * @ap: Port whose timings we are configuring + * @adev: Device to program + * + * Set UDMA mode for device, in host controller PCI config space. + * Handles the later, fully ATA133 capable chipsets. MWDMA timings + * are not yet implemented here (see the FIXME in the code). + * + * LOCKING: + * None (inherited from caller).
+ */
+
+static void sis_133_set_dmamode (struct ata_port *ap, struct ata_device *adev)
+{
+	struct pci_dev *pdev = to_pci_dev(ap->host_set->dev);
+	int speed = adev->dma_mode - XFER_MW_DMA_0;
+	int port = 0x40;
+	u32 t1;
+	u32 reg54;
+
+	/* bits 4- cycle time 8 - cvs time */
+	const u32 timing_u100[] = { 0x6B0, 0x470, 0x350, 0x140, 0x120, 0x110, 0x000 };
+	const u32 timing_u133[] = { 0x9F0, 0x6A0, 0x470, 0x250, 0x230, 0x220, 0x210 };
+
+	/* If bit 14 is set then the registers are mapped at 0x70 not 0x40 */
+	pci_read_config_dword(pdev, 0x54, &reg54);
+	if (reg54 & 0x40000000)
+		port = 0x70;
+	port += (8 * ap->hard_port_no) + (4 * adev->devno);
+
+	pci_read_config_dword(pdev, port, &t1);
+
+	if (adev->dma_mode < XFER_UDMA_0) {
+		t1 &= ~0x00000004;
+		/* FIXME: need data sheet to add MWDMA here. Also lacking on
+		   ide/pci driver */
+	} else {
+		speed = adev->dma_mode - XFER_UDMA_0;
+		/* if & 8 no UDMA133 - need info for ... */
+		t1 &= ~0x00000FF0;
+		t1 |= 0x00000004;
+		if (t1 & 0x08)
+			t1 |= timing_u133[speed];
+		else
+			t1 |= timing_u100[speed];
+	}
+	pci_write_config_dword(pdev, port, t1);
+}
+
+static struct scsi_host_template sis_sht = {
+	.module			= THIS_MODULE,
+	.name			= DRV_NAME,
+	.ioctl			= ata_scsi_ioctl,
+	.queuecommand		= ata_scsi_queuecmd,
+	.can_queue		= ATA_DEF_QUEUE,
+	.this_id		= ATA_SHT_THIS_ID,
+	.sg_tablesize		= LIBATA_MAX_PRD,
+	.max_sectors		= ATA_MAX_SECTORS,
+	.cmd_per_lun		= ATA_SHT_CMD_PER_LUN,
+	.emulated		= ATA_SHT_EMULATED,
+	.use_clustering		= ATA_SHT_USE_CLUSTERING,
+	.proc_name		= DRV_NAME,
+	.dma_boundary		= ATA_DMA_BOUNDARY,
+	.slave_configure	= ata_scsi_slave_config,
+	.bios_param		= ata_std_bios_param,
+//	.ordered_flush		= 1,
+};
+
+static const struct ata_port_operations sis_133_ops = {
+	.port_disable	= ata_port_disable,
+	.set_piomode	= sis_133_set_piomode,
+	.set_dmamode	= sis_133_set_dmamode,
+	.mode_filter	= ata_pci_default_filter,
+
+	.tf_load	= ata_tf_load,
+	.tf_read	= ata_tf_read,
+	.check_status	= ata_check_status,
+	.exec_command	= ata_exec_command,
+	.dev_select	= ata_std_dev_select,
+
+	.probe_reset	= sis_133_probe_reset,
+
+	.bmdma_setup	= ata_bmdma_setup,
+	.bmdma_start	= ata_bmdma_start,
+	.bmdma_stop	= ata_bmdma_stop,
+	.bmdma_status	= ata_bmdma_status,
+	.qc_prep	= ata_qc_prep,
+	.qc_issue	= ata_qc_issue_prot,
+	.data_xfer	= ata_pio_data_xfer,
+
+	.eng_timeout	= ata_eng_timeout,
+
+	.irq_handler	= ata_interrupt,
+	.irq_clear	= ata_bmdma_irq_clear,
+
+	.port_start	= ata_port_start,
+	.port_stop	= ata_port_stop,
+	.host_stop	= ata_host_stop,
+};
+
+static const struct ata_port_operations sis_133_early_ops = {
+	.port_disable	= ata_port_disable,
+	.set_piomode	= sis_100_set_piomode,
+	.set_dmamode	= sis_133_early_set_dmamode,
+	.mode_filter	= ata_pci_default_filter,
+
+	.tf_load	= ata_tf_load,
+	.tf_read	= ata_tf_read,
+	.check_status	= ata_check_status,
+	.exec_command	= ata_exec_command,
+	.dev_select	= ata_std_dev_select,
+
+	.probe_reset	= sis_66_probe_reset,
+
+	.bmdma_setup	= ata_bmdma_setup,
+	.bmdma_start	= ata_bmdma_start,
+	.bmdma_stop	= ata_bmdma_stop,
+	.bmdma_status	= ata_bmdma_status,
+	.qc_prep	= ata_qc_prep,
+	.qc_issue	= ata_qc_issue_prot,
+	.data_xfer	= ata_pio_data_xfer,
+
+	.eng_timeout	= ata_eng_timeout,
+
+	.irq_handler	= ata_interrupt,
+	.irq_clear	= ata_bmdma_irq_clear,
+
+	.port_start	= ata_port_start,
+	.port_stop	= ata_port_stop,
+	.host_stop	= ata_host_stop,
+};
+
+static const struct ata_port_operations sis_100_ops = {
+	.port_disable	= ata_port_disable,
+	.set_piomode	= sis_100_set_piomode,
+	.set_dmamode	= sis_100_set_dmamode,
+
.mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = sis_66_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + .eng_timeout = ata_eng_timeout, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop, +}; + +static const struct ata_port_operations sis_66_ops = { + .port_disable = ata_port_disable, + .set_piomode = sis_old_set_piomode, + .set_dmamode = sis_66_set_dmamode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = sis_66_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + .eng_timeout = ata_eng_timeout, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop, +}; + +static const struct ata_port_operations sis_old_ops = { + .port_disable = ata_port_disable, + .set_piomode = sis_old_set_piomode, + .set_dmamode = sis_old_set_dmamode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = sis_old_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, + + .eng_timeout = ata_eng_timeout, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop, +}; + +static struct ata_port_info sis_info = { + .sht = &sis_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, + .udma_mask = 0, + .port_ops = &sis_old_ops, +}; +static struct ata_port_info sis_info33 = { + .sht = &sis_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, /* pio0-4 */ + .mwdma_mask = 0x07, + .udma_mask = ATA_UDMA2, /* UDMA 33 */ + .port_ops = &sis_old_ops, +}; +static struct ata_port_info sis_info66 = { + .sht = &sis_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, /* pio0-4 */ + .udma_mask = ATA_UDMA4, /* UDMA 66 */ + .port_ops = &sis_66_ops, +}; +static struct ata_port_info sis_info100 = { + .sht = &sis_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, /* pio0-4 */ + .udma_mask = ATA_UDMA5, + .port_ops = &sis_100_ops, +}; +static struct ata_port_info sis_info100_early = { + .sht = &sis_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .udma_mask = ATA_UDMA5, + .pio_mask = 0x1f, /* pio0-4 */ + .port_ops = &sis_66_ops, +}; +static struct ata_port_info sis_info133 = { + .sht = &sis_sht, + .host_flags = 
ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST,
+	.pio_mask	= 0x1f,	/* pio0-4 */
+	.udma_mask	= ATA_UDMA6,
+	.port_ops	= &sis_133_ops,
+};
+static struct ata_port_info sis_info133_early = {
+	.sht		= &sis_sht,
+	.host_flags	= ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST,
+	.pio_mask	= 0x1f,	/* pio0-4 */
+	.udma_mask	= ATA_UDMA6,
+	.port_ops	= &sis_133_early_ops,
+};
+
+
+static void sis_fixup(struct pci_dev *pdev, struct sis_chipset *sis)
+{
+	u16 regw;
+	u8 reg;
+
+	if (sis->info == &sis_info133) {
+		pci_read_config_word(pdev, 0x50, &regw);
+		if (regw & 0x08)
+			pci_write_config_word(pdev, 0x50, regw & ~0x08);
+		pci_read_config_word(pdev, 0x52, &regw);
+		if (regw & 0x08)
+			pci_write_config_word(pdev, 0x52, regw & ~0x08);
+		return;
+	}
+
+	if (sis->info == &sis_info133_early || sis->info == &sis_info100) {
+		/* Fix up latency */
+		pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0x80);
+		/* Set compatibility bit */
+		pci_read_config_byte(pdev, 0x49, &reg);
+		if (!(reg & 0x01))
+			pci_write_config_byte(pdev, 0x49, reg | 0x01);
+		return;
+	}
+
+	if (sis->info == &sis_info66 || sis->info == &sis_info100_early) {
+		/* Fix up latency */
+		pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0x80);
+		/* Set compatibility bit */
+		pci_read_config_byte(pdev, 0x52, &reg);
+		if (!(reg & 0x04))
+			pci_write_config_byte(pdev, 0x52, reg | 0x04);
+		return;
+	}
+
+	if (sis->info == &sis_info33) {
+		pci_read_config_byte(pdev, PCI_CLASS_PROG, &reg);
+		if ((reg & 0x0F) != 0x00)
+			pci_write_config_byte(pdev, PCI_CLASS_PROG, reg & 0xF0);
+		/* Fall through to ATA16 fixup below */
+	}
+
+	if (sis->info == &sis_info || sis->info == &sis_info33) {
+		/* force per drive recovery and active timings
+		   needed on ATA_33 and below chips */
+		pci_read_config_byte(pdev, 0x52, &reg);
+		if (!(reg & 0x08))
+			pci_write_config_byte(pdev, 0x52, reg | 0x08);
+		return;
+	}
+
+	BUG();
+}
+
+/**
+ *	sis_init_one - Register SiS ATA PCI device with kernel services
+ *	@pdev: PCI device to register
+ *	@ent: Entry in sis_pci_tbl matching with @pdev
+ *
+ *	Called from kernel PCI layer. We probe for combined mode (sigh),
+ *	and then hand over control to libata, for it to do the rest.
+ *
+ *	LOCKING:
+ *	Inherited from PCI layer (may sleep).
+ *
+ *	RETURNS:
+ *	Zero on success, or -ERRNO value.
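sis_fixup above is a series of conditional read-modify-write pokes at byte-wide config registers: set a latency or compatibility bit only when it is not already set. Here is a small stand-alone model of that pattern; the array stands in for PCI config space and the helper name is invented.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the sis_fixup pattern above: read a byte-wide
 * config register and set a flag bit only if it is not already set.
 * pci_cfg[] and set_cfg_bit() stand in for real PCI config accessors. */
static uint8_t pci_cfg[256];

static void set_cfg_bit(int reg, uint8_t bit)
{
	uint8_t val = pci_cfg[reg];		/* pci_read_config_byte()  */

	if (!(val & bit))
		pci_cfg[reg] = val | bit;	/* pci_write_config_byte() */
}

int main(void)
{
	pci_cfg[0x49] = 0x40;
	set_cfg_bit(0x49, 0x01);		/* mirrors the 133_early/100 case */
	printf("0x49 = 0x%02x\n", pci_cfg[0x49]);
	return 0;
}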
+ */ + +static int sis_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) +{ + static int printed_version; + static struct ata_port_info *port_info[2]; + struct ata_port_info *port; + struct pci_dev *host = NULL; + struct sis_chipset *chipset = NULL; + + static struct sis_chipset sis_chipsets[] = { + { 0x0745, &sis_info100 }, + { 0x0735, &sis_info100 }, + { 0x0733, &sis_info100 }, + { 0x0635, &sis_info100 }, + { 0x0633, &sis_info100 }, + + { 0x0730, &sis_info100_early }, /* 100 with ATA 66 layout */ + { 0x0550, &sis_info100_early }, /* 100 with ATA 66 layout */ + + { 0x0640, &sis_info66 }, + { 0x0630, &sis_info66 }, + { 0x0620, &sis_info66 }, + { 0x0540, &sis_info66 }, + { 0x0530, &sis_info66 }, + + { 0x5600, &sis_info33 }, + { 0x5598, &sis_info33 }, + { 0x5597, &sis_info33 }, + { 0x5591, &sis_info33 }, + { 0x5582, &sis_info33 }, + { 0x5581, &sis_info33 }, + + { 0x5596, &sis_info }, + { 0x5571, &sis_info }, + { 0x5517, &sis_info }, + { 0x5511, &sis_info }, + + {0} + }; + static struct sis_chipset sis133_early = { + 0x0, &sis_info133_early + }; + static struct sis_chipset sis133 = { + 0x0, &sis_info133 + }; + static struct sis_chipset sis100_early = { + 0x0, &sis_info100_early + }; + static struct sis_chipset sis100 = { + 0x0, &sis_info100 + }; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, + "version " DRV_VERSION "\n"); + + /* We have to find the bridge first */ + + for (chipset = &sis_chipsets[0]; chipset->device; chipset++) { + host = pci_get_device(PCI_VENDOR_ID_SI, chipset->device, NULL); + if (host != NULL) { + if (chipset->device == 0x630) { /* SIS630 */ + u8 host_rev; + pci_read_config_byte(host, PCI_REVISION_ID, &host_rev); + if (host_rev >= 0x30) /* 630 ET */ + chipset = &sis100_early; + } + break; + } + } + + /* Look for concealed bridges */ + if (host == NULL) { + /* Second check */ + u32 idemisc; + u16 trueid; + + /* Disable ID masking and register remapping then + see what the real ID is */ + + pci_read_config_dword(pdev, 0x54, &idemisc); + pci_write_config_dword(pdev, 0x54, idemisc & 0x7fffffff); + pci_read_config_word(pdev, PCI_DEVICE_ID, &trueid); + pci_write_config_dword(pdev, 0x54, idemisc); + + switch(trueid) { + case 0x5518: /* SIS 962/963 */ + chipset = &sis133; + if ((idemisc & 0x40000000) == 0) { + pci_write_config_dword(pdev, 0x54, idemisc | 0x40000000); + printk(KERN_INFO "SIS5513: Switching to 5513 register mapping\n"); + } + break; + case 0x0180: /* SIS 965/965L */ + chipset = &sis133; + break; + case 0x1180: /* SIS 966/966L */ + chipset = &sis133; + break; + } + } + + /* Further check */ + if (chipset == NULL) { + struct pci_dev *lpc_bridge; + u16 trueid; + u8 prefctl; + u8 idecfg; + u8 sbrev; + + /* Try the second unmasking technique */ + pci_read_config_byte(pdev, 0x4a, &idecfg); + pci_write_config_byte(pdev, 0x4a, idecfg | 0x10); + pci_read_config_word(pdev, PCI_DEVICE_ID, &trueid); + pci_write_config_byte(pdev, 0x4a, idecfg); + + switch(trueid) { + case 0x5517: + lpc_bridge = pci_get_slot(0x00, 0x10); /* Bus 0 Dev 2 Fn 0 */ + if (lpc_bridge == NULL) + break; + pci_read_config_byte(lpc_bridge, PCI_REVISION_ID, &sbrev); + pci_read_config_byte(pdev, 0x49, &prefctl); + pci_dev_put(lpc_bridge); + + if (sbrev == 0x10 && (prefctl & 0x80)) { + chipset = &sis133_early; + break; + } + chipset = &sis100; + break; + } + } + pci_dev_put(host); + + /* No chipset info, no support */ + if (chipset == NULL) + return -ENODEV; + + port = chipset->info; + port->private_data = chipset; + + sis_fixup(pdev, chipset); + + port_info[0] = port_info[1] 
= port; + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id sis_pci_tbl[] = { + { PCI_DEVICE(PCI_VENDOR_ID_SI, 0x5513), }, /* SiS 5513 */ + { PCI_DEVICE(PCI_VENDOR_ID_SI, 0x5518), }, /* SiS 5518 */ + { } +}; + +static struct pci_driver sis_pci_driver = { + .name = DRV_NAME, + .id_table = sis_pci_tbl, + .probe = sis_init_one, + .remove = ata_pci_remove_one, +}; + +static int __init sis_init(void) +{ + return pci_register_driver(&sis_pci_driver); +} + +static void __exit sis_exit(void) +{ + pci_unregister_driver(&sis_pci_driver); +} + + +module_init(sis_init); +module_exit(sis_exit); + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("SCSI low-level driver for SiS ATA"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, sis_pci_tbl); +MODULE_VERSION(DRV_VERSION); + diff -puN /dev/null drivers/scsi/pata_triflex.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_triflex.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,283 @@ +/* + * pata_triflex.c - Compaq PATA for new ATA layer + * (C) 2005 Red Hat Inc + * Alan Cox + * + * based upon + * + * triflex.c + * + * IDE Chipset driver for the Compaq TriFlex IDE controller. + * + * Known to work with the Compaq Workstation 5x00 series. + * + * Copyright (C) 2002 Hewlett-Packard Development Group, L.P. + * Author: Torben Mathiasen + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + * Loosely based on the piix & svwks drivers. + * + * Documentation: + * Not publically available. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_triflex" +#define DRV_VERSION "0.2.3" + +/** + * triflex_probe_init - probe begin + * @ap: ATA port + * + * Set up cable type and use generic probe init + */ + +static void triflex_probe_init(struct ata_port *ap) +{ + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + + + +static int triflex_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static struct pci_bits triflex_enable_bits[] = { + { 0x80, 1, 0x01, 0x01 }, + { 0x80, 1, 0x02, 0x02 } + }; + + if (!pci_test_config_bits(pdev, &triflex_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + return ata_drive_probe_reset(ap, triflex_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * triflex_load_timing - timing configuration + * @ap: ATA interface + * @adev: Device on the bus + * @speed: speed to configure + * + * The Triflex has one set of timings per device per channel. This + * means we must do some switching. As the PIO and DMA timings don't + * match we have to do some reloading unlike PIIX devices where tuning + * tricks can avoid it. 
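As the comment above explains, the TriFlex keeps one 32-bit timing register per channel with 16 bits for each drive, so reprogramming one drive is a shifted mask-and-merge, which triflex_load_timing below performs. This stand-alone sketch models just that packing; the wrapper name is invented and the example timing values are taken from the driver's table.

#include <stdint.h>
#include <stdio.h>

/* Stand-alone model of the 16-bits-per-drive packing used below:
 * drive 0 (master) lives in the low half, drive 1 (slave) in the
 * high half of the per-channel dword. */
static uint32_t triflex_merge(uint32_t channel_timing, int is_slave,
			      uint16_t drive_timing)
{
	int shift = 16 * is_slave;

	channel_timing &= ~(0xFFFFu << shift);
	channel_timing |= (uint32_t)drive_timing << shift;
	return channel_timing;
}

int main(void)
{
	uint32_t t = 0x08080808;			/* both drives at PIO0  */

	t = triflex_merge(t, 1, 0x0202);		/* slave to PIO4 timing */
	printf("0x%08x\n", (unsigned)t);		/* -> 0x02020808        */
	return 0;
}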
+ */ + +static void triflex_load_timing(struct ata_port *ap, struct ata_device *adev, int speed) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u32 timing = 0; + u32 triflex_timing, old_triflex_timing; + int channel_offset = ap->hard_port_no ? 0x74: 0x70; + unsigned int is_slave = (adev->devno != 0); + + + pci_read_config_dword(pdev, channel_offset, &old_triflex_timing); + triflex_timing = old_triflex_timing; + + switch(speed) + { + case XFER_MW_DMA_2: + timing = 0x0103;break; + case XFER_MW_DMA_1: + timing = 0x0203;break; + case XFER_MW_DMA_0: + timing = 0x0808;break; + case XFER_SW_DMA_2: + case XFER_SW_DMA_1: + case XFER_SW_DMA_0: + timing = 0x0F0F;break; + case XFER_PIO_4: + timing = 0x0202;break; + case XFER_PIO_3: + timing = 0x0204;break; + case XFER_PIO_2: + timing = 0x0404;break; + case XFER_PIO_1: + timing = 0x0508;break; + case XFER_PIO_0: + timing = 0x0808;break; + default: + BUG(); + } + triflex_timing &= ~ (0xFFFF << (16 * is_slave)); + triflex_timing |= (timing << (16 * is_slave)); + + if (triflex_timing != old_triflex_timing) + pci_write_config_dword(pdev, channel_offset, triflex_timing); +} + +/** + * triflex_set_piomode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * + * Use the timing loader to set up the PIO mode. We have to do this + * because DMA start/stop will only be called once DMA occurs. If there + * has been no DMA then the PIO timings are still needed. + */ +static void triflex_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + triflex_load_timing(ap, adev, adev->pio_mode); +} + +/** + * triflex_dma_start - DMA start callback + * @qc: Command in progress + * + * Usually drivers set the DMA timing at the point the set_dmamode call + * is made. Triflex however requires we load new timings on the + * transition or keep matching PIO/DMA pairs (ie MWDMA2/PIO4 etc). + * We load the DMA timings just before starting DMA and then restore + * the PIO timing when the DMA is finished. + */ + +static void triflex_bmdma_start(struct ata_queued_cmd *qc) +{ + triflex_load_timing(qc->ap, qc->dev, qc->dev->dma_mode); + ata_bmdma_start(qc); +} + +/** + * triflex_dma_stop - DMA stop callback + * @ap: ATA interface + * @adev: ATA device + * + * We loaded new timings in dma_start, as a result we need to restore + * the PIO timings in dma_stop so that the next command issue gets the + * right clock values. 
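A toy model of the bracketing described above: DMA timings go in right before the transfer and the PIO timings are restored immediately afterwards, because the two share one register. Everything here is a stand-in written for illustration, not driver code.

#include <stdio.h>

/* Toy model of the load-around-DMA bracketing; the functions and the
 * struct are invented stand-ins. */
struct toy_dev { int pio_timing, dma_timing; };

static void load_timing(int t)    { printf("load timing 0x%04x\n", t); }
static void do_dma_transfer(void) { printf("  ...DMA burst...\n"); }

static void toy_bmdma_start(struct toy_dev *d)
{
	load_timing(d->dma_timing);	/* switch the shared register to DMA */
	do_dma_transfer();
}

static void toy_bmdma_stop(struct toy_dev *d)
{
	load_timing(d->pio_timing);	/* restore PIO timings for taskfile I/O */
}

int main(void)
{
	struct toy_dev d = { .pio_timing = 0x0202, .dma_timing = 0x0103 };

	toy_bmdma_start(&d);
	toy_bmdma_stop(&d);
	return 0;
}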
+ */ + +static void triflex_bmdma_stop(struct ata_queued_cmd *qc) +{ + ata_bmdma_stop(qc); + triflex_load_timing(qc->ap, qc->dev, qc->dev->pio_mode); +} + +static struct scsi_host_template triflex_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations triflex_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = triflex_set_piomode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = triflex_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = triflex_bmdma_start, + .bmdma_stop = triflex_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static int triflex_init_one(struct pci_dev *dev, const struct pci_device_id *id) +{ + static struct ata_port_info info = { + .sht = &triflex_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .port_ops = &triflex_port_ops + }; + static struct ata_port_info *port_info[2] = { &info, &info }; + static int printed_version; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &dev->dev, "version " DRV_VERSION "\n"); + + return ata_pci_init_one(dev, port_info, 2); +} + +static const struct pci_device_id triflex[] = { + { PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_TRIFLEX_IDE, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, + { 0, }, +}; + +static struct pci_driver triflex_pci_driver = { + .name = DRV_NAME, + .id_table = triflex, + .probe = triflex_init_one, + .remove = ata_pci_remove_one +}; + +static int __init triflex_init(void) +{ + return pci_register_driver(&triflex_pci_driver); +} + + +static void __exit triflex_exit(void) +{ + pci_unregister_driver(&triflex_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for Compaq Triflex"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, triflex); +MODULE_VERSION(DRV_VERSION); + +module_init(triflex_init); +module_exit(triflex_exit); diff -puN /dev/null drivers/scsi/pata_via.c --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/pata_via.c 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,564 @@ +/* + * pata_via.c - VIA PATA for new ATA layer + * (C) 2005-2006 Red Hat Inc + * Alan Cox + * + * Documentation + * Most chipset documentation available under NDA only + * + * VIA version guide + * VIA VT82C561 - early design, uses ata_generic currently + * VIA VT82C576 - MWDMA, 33Mhz + * VIA VT82C586 - MWDMA, 33Mhz + * VIA VT82C586a - Added UDMA to 33Mhz + * VIA VT82C586b - UDMA33 + * VIA VT82C596a - Nonfunctional UDMA66 + * VIA VT82C596b - Working UDMA66 + * VIA VT82C686 - Nonfunctional UDMA66 + * VIA VT82C686a - Working UDMA66 + * VIA VT82C686b - Updated to UDMA100 + 
* VIA VT8231 - UDMA100 + * VIA VT8233 - UDMA100 + * VIA VT8233a - UDMA133 + * VIA VT8233c - UDMA100 + * VIA VT8235 - UDMA133 + * VIA VT8237 - UDMA133 + * + * Most registers remain compatible across chips. Others start reserved + * and acquire sensible semantics if set to 1 (eg cable detect). A few + * exceptions exist, notably around the FIFO settings. + * + * One additional quirk of the VIA design is that like ALi they use few + * PCI IDs for a lot of chips. + * + * Based heavily on: + * + * Version 3.38 + * + * VIA IDE driver for Linux. Supported southbridges: + * + * vt82c576, vt82c586, vt82c586a, vt82c586b, vt82c596a, vt82c596b, + * vt82c686, vt82c686a, vt82c686b, vt8231, vt8233, vt8233c, vt8233a, + * vt8235, vt8237 + * + * Copyright (c) 2000-2002 Vojtech Pavlik + * + * Based on the work of: + * Michel Aubry + * Jeff Garzik + * Andre Hedrick + + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#define DRV_NAME "pata_via" +#define DRV_VERSION "0.1.11" + +/* + * The following comes directly from Vojtech Pavlik's ide/pci/via82cxxx + * driver. + */ + +enum { + VIA_UDMA = 0x007, + VIA_UDMA_NONE = 0x000, + VIA_UDMA_33 = 0x001, + VIA_UDMA_66 = 0x002, + VIA_UDMA_100 = 0x003, + VIA_UDMA_133 = 0x004, + VIA_BAD_PREQ = 0x010, /* Crashes if PREQ# till DDACK# set */ + VIA_BAD_CLK66 = 0x020, /* 66 MHz clock doesn't work correctly */ + VIA_SET_FIFO = 0x040, /* Needs to have FIFO split set */ + VIA_NO_UNMASK = 0x080, /* Doesn't work with IRQ unmasking on */ + VIA_BAD_ID = 0x100, /* Has wrong vendor ID (0x1107) */ + VIA_BAD_AST = 0x200, /* Don't touch Address Setup Timing */ + VIA_NO_ENABLES = 0x400, /* Has no enablebits */ +}; + +/* + * VIA SouthBridge chips. + */ + +static const struct via_isa_bridge { + const char *name; + u16 id; + u8 rev_min; + u8 rev_max; + u16 flags; +} via_isa_bridges[] = { + { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, + { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES}, + { "vt8237a", PCI_DEVICE_ID_VIA_8237A, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, + { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, + { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, + { "vt8233a", PCI_DEVICE_ID_VIA_8233A, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, + { "vt8233c", PCI_DEVICE_ID_VIA_8233C_0, 0x00, 0x2f, VIA_UDMA_100 }, + { "vt8233", PCI_DEVICE_ID_VIA_8233_0, 0x00, 0x2f, VIA_UDMA_100 }, + { "vt8231", PCI_DEVICE_ID_VIA_8231, 0x00, 0x2f, VIA_UDMA_100 }, + { "vt82c686b", PCI_DEVICE_ID_VIA_82C686, 0x40, 0x4f, VIA_UDMA_100 }, + { "vt82c686a", PCI_DEVICE_ID_VIA_82C686, 0x10, 0x2f, VIA_UDMA_66 }, + { "vt82c686", PCI_DEVICE_ID_VIA_82C686, 0x00, 0x0f, VIA_UDMA_33 | VIA_BAD_CLK66 }, + { "vt82c596b", PCI_DEVICE_ID_VIA_82C596, 0x10, 0x2f, VIA_UDMA_66 }, + { "vt82c596a", PCI_DEVICE_ID_VIA_82C596, 0x00, 0x0f, VIA_UDMA_33 | VIA_BAD_CLK66 }, + { "vt82c586b", PCI_DEVICE_ID_VIA_82C586_0, 0x47, 0x4f, VIA_UDMA_33 | VIA_SET_FIFO }, + { "vt82c586b", PCI_DEVICE_ID_VIA_82C586_0, 0x40, 0x46, VIA_UDMA_33 | VIA_SET_FIFO | VIA_BAD_PREQ }, + { "vt82c586b", PCI_DEVICE_ID_VIA_82C586_0, 0x30, 0x3f, VIA_UDMA_33 | VIA_SET_FIFO }, + { "vt82c586a", PCI_DEVICE_ID_VIA_82C586_0, 0x20, 0x2f, VIA_UDMA_33 | VIA_SET_FIFO }, + { "vt82c586", PCI_DEVICE_ID_VIA_82C586_0, 0x00, 0x0f, VIA_UDMA_NONE | VIA_SET_FIFO }, + { "vt82c576", PCI_DEVICE_ID_VIA_82C576, 0x00, 0x2f, VIA_UDMA_NONE | VIA_SET_FIFO | VIA_NO_UNMASK }, + { "vt82c576", PCI_DEVICE_ID_VIA_82C576, 0x00, 0x2f, 
VIA_UDMA_NONE | VIA_SET_FIFO | VIA_NO_UNMASK | VIA_BAD_ID }, + { NULL } +}; + +/** + * via_cable_detect - cable detection + * @ap: ATA port + * + * Perform cable detection. Actually for the VIA case the BIOS + * already did this for us. We read the values provided by the + * BIOS. If you are using an 8235 in a non-PC configuration you + * may need to update this code. + * + * Hotplug also impacts on this. + */ + +static int via_cable_detect(struct ata_port *ap) { + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + u32 ata66; + + pci_read_config_dword(pdev, 0x50, &ata66); + /* Check both the drive cable reporting bits, we might not have + two drives */ + if (ata66 & (0x10100000 >> (16 * ap->hard_port_no))) + return ATA_CBL_PATA80; + else + return ATA_CBL_PATA40; +} + +static void via_probe_init(struct ata_port *ap) +{ + const struct via_isa_bridge *config = ap->host_set->private_data; + + if ((config->flags & VIA_UDMA) >= VIA_UDMA_66) + ap->cbl = via_cable_detect(ap); + else + ap->cbl = ATA_CBL_PATA40; + ata_std_probeinit(ap); +} + + +/** + * via_probe_reset - reset for VIA chips + * @ap: ATA port + * + * Handle the reset callback for the later chips with cable detect + */ + +static int via_probe_reset(struct ata_port *ap, unsigned int *classes) +{ + const struct via_isa_bridge *config = ap->host_set->private_data; + + if (!(config->flags & VIA_NO_ENABLES)) { + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + static const struct pci_bits via_enable_bits[] = { + { 0x40, 1, 0x02, 0x02 }, + { 0x40, 1, 0x01, 0x01 } + }; + + if (!pci_test_config_bits(pdev, &via_enable_bits[ap->hard_port_no])) { + ata_port_disable(ap); + printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); + return 0; + } + } + return ata_drive_probe_reset(ap, via_probe_init, + ata_std_softreset, NULL, + ata_std_postreset, classes); +} + +/** + * via_do_set_mode - set initial PIO mode data + * @ap: ATA interface + * @adev: ATA device + * @mode: ATA mode being programmed + * @tdiv: Clocks per PCI clock + * @set_ast: Set to program address setup + * @udma_type: UDMA mode/format of registers + * + * Program the VIA registers for DMA and PIO modes. Uses the ata timing + * support in order to compute modes. + * + * FIXME: Hotplug will require we serialize multiple mode changes + * on the two channels. 
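via_do_set_mode below computes timings with the generic ata_timing helpers and then clamps each value into its register field with FIT() before packing active and recovery clocks into one byte. The stand-alone model that follows shows only that clamp-and-pack step; clamp_range() plays the role of FIT() and the field layout mirrors the register writes in the function.

#include <stdint.h>
#include <stdio.h>

/* Model of the clamp-and-pack step: active clocks in the high nibble,
 * recovery clocks in the low nibble, both stored minus one. */
static int clamp_range(int v, int lo, int hi)
{
	if (v < lo)
		return lo;
	if (v > hi)
		return hi;
	return v;
}

static uint8_t pack_act_rec(int active_clocks, int recover_clocks)
{
	return ((clamp_range(active_clocks, 1, 16) - 1) << 4) |
		(clamp_range(recover_clocks, 1, 16) - 1);
}

int main(void)
{
	/* e.g. 3 active clocks, 1 recovery clock -> 0x20 */
	printf("0x%02x\n", pack_act_rec(3, 1));
	return 0;
}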
+ */ + +static void via_do_set_mode(struct ata_port *ap, struct ata_device *adev, int mode, int tdiv, int set_ast, int udma_type) +{ + struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); + struct ata_device *peer = ata_dev_pair(adev); + struct ata_timing t, p; + static int via_clock = 33333; /* Bus clock in kHZ - ought to be tunable one day */ + unsigned long T = 1000000000 / via_clock; + unsigned long UT = T/tdiv; + int ut; + int offset = 3 - (2*ap->hard_port_no) - adev->devno; + + + /* Calculate the timing values we require */ + ata_timing_compute(adev, mode, &t, T, UT); + + /* We share 8bit timing so we must merge the constraints */ + if (peer) { + if (peer->pio_mode) { + ata_timing_compute(peer, peer->pio_mode, &p, T, UT); + ata_timing_merge(&p, &t, &t, ATA_TIMING_8BIT); + } + } + + /* Address setup is programmable but breaks on UDMA133 setups */ + if (set_ast) { + u8 setup; /* 2 bits per drive */ + int shift = 2 * offset; + + pci_read_config_byte(pdev, 0x4C, &setup); + setup &= ~(3 << shift); + setup |= FIT(t.setup, 1, 4) << shift; /* 1,4 or 1,4 - 1 FIXME */ + pci_write_config_byte(pdev, 0x4C, setup); + } + + /* Load the PIO mode bits */ + pci_write_config_byte(pdev, 0x4F - ap->hard_port_no, + ((FIT(t.act8b, 1, 16) - 1) << 4) | (FIT(t.rec8b, 1, 16) - 1)); + pci_write_config_byte(pdev, 0x48 + offset, + ((FIT(t.active, 1, 16) - 1) << 4) | (FIT(t.recover, 1, 16) - 1)); + + /* Load the UDMA bits according to type */ + switch(udma_type) { + default: + /* BUG() ? */ + /* fall through */ + case 33: + ut = t.udma ? (0xe0 | (FIT(t.udma, 2, 5) - 2)) : 0x03; + break; + case 66: + ut = t.udma ? (0xe8 | (FIT(t.udma, 2, 9) - 2)) : 0x0f; + break; + case 100: + ut = t.udma ? (0xe0 | (FIT(t.udma, 2, 9) - 2)) : 0x07; + break; + case 133: + ut = t.udma ? (0xe0 | (FIT(t.udma, 2, 9) - 2)) : 0x07; + break; + } + /* Set UDMA unless device is not UDMA capable */ + if (udma_type) + pci_write_config_byte(pdev, 0x50 + offset, ut); +} + +static void via_set_piomode(struct ata_port *ap, struct ata_device *adev) +{ + const struct via_isa_bridge *config = ap->host_set->private_data; + int set_ast = (config->flags & VIA_BAD_AST) ? 0 : 1; + int mode = config->flags & VIA_UDMA; + static u8 tclock[5] = { 1, 1, 2, 3, 4 }; + static u8 udma[5] = { 0, 33, 66, 100, 133 }; + + via_do_set_mode(ap, adev, adev->pio_mode, tclock[mode], set_ast, udma[mode]); +} + +static void via_set_dmamode(struct ata_port *ap, struct ata_device *adev) +{ + const struct via_isa_bridge *config = ap->host_set->private_data; + int set_ast = (config->flags & VIA_BAD_AST) ? 
0 : 1; + int mode = config->flags & VIA_UDMA; + static u8 tclock[5] = { 1, 1, 2, 3, 4 }; + static u8 udma[5] = { 0, 33, 66, 100, 133 }; + + via_do_set_mode(ap, adev, adev->dma_mode, tclock[mode], set_ast, udma[mode]); +} + +static struct scsi_host_template via_sht = { + .module = THIS_MODULE, + .name = DRV_NAME, + .ioctl = ata_scsi_ioctl, + .queuecommand = ata_scsi_queuecmd, + .can_queue = ATA_DEF_QUEUE, + .this_id = ATA_SHT_THIS_ID, + .sg_tablesize = LIBATA_MAX_PRD, + .max_sectors = ATA_MAX_SECTORS, + .cmd_per_lun = ATA_SHT_CMD_PER_LUN, + .emulated = ATA_SHT_EMULATED, + .use_clustering = ATA_SHT_USE_CLUSTERING, + .proc_name = DRV_NAME, + .dma_boundary = ATA_DMA_BOUNDARY, + .slave_configure = ata_scsi_slave_config, + .bios_param = ata_std_bios_param, +}; + +static struct ata_port_operations via_port_ops = { + .port_disable = ata_port_disable, + .set_piomode = via_set_piomode, + .set_dmamode = via_set_dmamode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = via_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +static struct ata_port_operations via_port_ops_noirq = { + .port_disable = ata_port_disable, + .set_piomode = via_set_piomode, + .set_dmamode = via_set_dmamode, + .mode_filter = ata_pci_default_filter, + + .tf_load = ata_tf_load, + .tf_read = ata_tf_read, + .check_status = ata_check_status, + .exec_command = ata_exec_command, + .dev_select = ata_std_dev_select, + + .probe_reset = via_probe_reset, + + .bmdma_setup = ata_bmdma_setup, + .bmdma_start = ata_bmdma_start, + .bmdma_stop = ata_bmdma_stop, + .bmdma_status = ata_bmdma_status, + + .qc_prep = ata_qc_prep, + .qc_issue = ata_qc_issue_prot, + .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer_noirq, + + .irq_handler = ata_interrupt, + .irq_clear = ata_bmdma_irq_clear, + + .port_start = ata_port_start, + .port_stop = ata_port_stop, + .host_stop = ata_host_stop +}; + +/** + * via_init_one - discovery callback + * @pdev: PCI device ID + * @id: PCI table info + * + * A VIA IDE interface has been discovered. 
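The probe routine below identifies the chip by scanning a table of ISA-bridge IDs with revision ranges and taking the first entry that matches. Here is a stand-alone sketch of that match; the three-entry table is a trimmed stand-in for via_isa_bridges, not the full list.

#include <stdint.h>
#include <stdio.h>

/* Entries are scanned in order; the first one whose device ID matches
 * and whose [rev_min, rev_max] range contains the bridge revision wins. */
struct bridge { const char *name; uint16_t id; uint8_t rev_min, rev_max; };

static const struct bridge bridges[] = {
	{ "vt82c686b", 0x0686, 0x40, 0x4f },
	{ "vt82c686a", 0x0686, 0x10, 0x2f },
	{ "vt82c686",  0x0686, 0x00, 0x0f },
	{ NULL, 0, 0, 0 }
};

static const struct bridge *match(uint16_t id, uint8_t rev)
{
	const struct bridge *b;

	for (b = bridges; b->name; b++)
		if (b->id == id && rev >= b->rev_min && rev <= b->rev_max)
			return b;
	return NULL;
}

int main(void)
{
	const struct bridge *b = match(0x0686, 0x23);

	printf("%s\n", b ? b->name : "unknown");	/* -> vt82c686a */
	return 0;
}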
Figure out what revision + * and perform configuration work before handing it to the ATA layer + */ + +static int via_init_one(struct pci_dev *pdev, const struct pci_device_id *id) +{ + /* Early VIA without UDMA support */ + static struct ata_port_info via_mwdma_info = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .port_ops = &via_port_ops + }; + /* Ditto with IRQ masking required */ + static struct ata_port_info via_mwdma_info_borked = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .port_ops = &via_port_ops_noirq, + }; + /* VIA UDMA 33 devices (and borked 66) */ + static struct ata_port_info via_udma33_info = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x7, + .port_ops = &via_port_ops + }; + /* VIA UDMA 66 devices */ + static struct ata_port_info via_udma66_info = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x1f, + .port_ops = &via_port_ops + }; + /* VIA UDMA 100 devices */ + static struct ata_port_info via_udma100_info = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x3f, + .port_ops = &via_port_ops + }; + /* UDMA133 with bad AST (All current 133) */ + static struct ata_port_info via_udma133_info = { + .sht = &via_sht, + .host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST, + .pio_mask = 0x1f, + .mwdma_mask = 0x07, + .udma_mask = 0x7f, /* FIXME: should check north bridge */ + .port_ops = &via_port_ops + }; + struct ata_port_info *port_info[2], *type; + struct pci_dev *isa = NULL; + const struct via_isa_bridge *config; + static int printed_version; + u8 t; + u8 enable; + u32 timing; + + if (!printed_version++) + dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); + + /* To find out how the IDE will behave and what features we + actually have to look at the bridge not the IDE controller */ + for (config = via_isa_bridges; config->id; config++) + if ((isa = pci_get_device(PCI_VENDOR_ID_VIA + + !!(config->flags & VIA_BAD_ID), + config->id, NULL))) { + + pci_read_config_byte(isa, PCI_REVISION_ID, &t); + if (t >= config->rev_min && + t <= config->rev_max) + break; + pci_dev_put(isa); + } + + if (!config->id) { + printk(KERN_WARNING "via: Unknown VIA SouthBridge, disabling.\n"); + return -ENODEV; + } + pci_dev_put(isa); + + /* 0x40 low bits indicate enabled channels */ + pci_read_config_byte(pdev, 0x40 , &enable); + enable &= 3; + if (enable == 0) { + return -ENODEV; + } + + /* Initialise the FIFO for the enabled channels. 
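The FIFO setup that follows reads the two channel-enable bits from register 0x40 and uses them to index a table of FIFO-split values written to register 0x43. A stand-alone model of that lookup; the table values are copied from the driver and the helper name is invented.

#include <stdint.h>
#include <stdio.h>

/* The 2-bit channel-enable value selects the FIFO-split bits to OR into
 * config register 0x43. */
static uint8_t via_fifo_bits(unsigned int enable /* 0..3 */)
{
	static const uint8_t fifo_setting[4] = { 0x00, 0x60, 0x00, 0x20 };

	return fifo_setting[enable & 3];
}

int main(void)
{
	printf("both channels -> 0x%02x\n", via_fifo_bits(3));	/* 0x20 */
	return 0;
}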
*/ + if (config->flags & VIA_SET_FIFO) { + u8 fifo_setting[4] = {0x00, 0x60, 0x00, 0x20}; + u8 fifo; + + pci_read_config_byte(pdev, 0x43, &fifo); + + /* Clear PREQ# until DDACK# for errata */ + if (config->flags & VIA_BAD_PREQ) + fifo &= 0x7F; + else + fifo &= 0x9f; + /* Turn on FIFO for enabled channels */ + fifo |= fifo_setting[enable]; + pci_write_config_byte(pdev, 0x43, fifo); + } + /* Clock set up */ + switch(config->flags & VIA_UDMA) { + case VIA_UDMA_NONE: + if (config->flags & VIA_NO_UNMASK) + type = &via_mwdma_info_borked; + else + type = &via_mwdma_info; + break; + case VIA_UDMA_33: + type = &via_udma33_info; + break; + case VIA_UDMA_66: + type = &via_udma66_info; + /* The 66 MHz devices require we enable the clock */ + pci_read_config_dword(pdev, 0x50, &timing); + timing |= 0x80008; + pci_write_config_dword(pdev, 0x50, timing); + break; + case VIA_UDMA_100: + type = &via_udma100_info; + break; + case VIA_UDMA_133: + type = &via_udma133_info; + break; + default: + WARN_ON(1); + return -ENODEV; + } + + if (config->flags & VIA_BAD_CLK66) { + /* Disable the 66MHz clock on problem devices */ + pci_read_config_dword(pdev, 0x50, &timing); + timing &= ~0x80008; + pci_write_config_dword(pdev, 0x50, timing); + } + + /* We have established the device type, now fire it up */ + type->private_data = (void *)config; + + port_info[0] = port_info[1] = type; + return ata_pci_init_one(pdev, port_info, 2); +} + +static const struct pci_device_id via[] = { + { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576_1), }, + { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_1), }, + { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_6410), }, + { PCI_DEVICE(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_SATA_EIDE), }, + { 0, }, +}; + +static struct pci_driver via_pci_driver = { + .name = DRV_NAME, + .id_table = via, + .probe = via_init_one, + .remove = ata_pci_remove_one +}; + +static int __init via_init(void) +{ + return pci_register_driver(&via_pci_driver); +} + + +static void __exit via_exit(void) +{ + pci_unregister_driver(&via_pci_driver); +} + + +MODULE_AUTHOR("Alan Cox"); +MODULE_DESCRIPTION("low-level driver for VIA PATA"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, via); +MODULE_VERSION(DRV_VERSION); + +module_init(via_init); +module_exit(via_exit); diff -puN drivers/scsi/pdc_adma.c~git-libata-all drivers/scsi/pdc_adma.c --- devel/drivers/scsi/pdc_adma.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/pdc_adma.c 2006-06-09 15:21:16.000000000 -0700 @@ -46,7 +46,7 @@ #include #define DRV_NAME "pdc_adma" -#define DRV_VERSION "0.03" +#define DRV_VERSION "0.04" /* macro to calculate base address for ATA regs */ #define ADMA_ATA_REGS(base,port_no) ((base) + ((port_no) * 0x40)) @@ -455,13 +455,13 @@ static inline unsigned int adma_intr_pkt continue; handled = 1; adma_enter_reg_mode(ap); - if (ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)) + if (ap->flags & ATA_FLAG_DISABLED) continue; pp = ap->private_data; if (!pp || pp->state != adma_state_pkt) continue; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { if ((status & (aPERR | aPSD | aUIRQ))) qc->err_mask |= AC_ERR_OTHER; else if (pp->pkt[0] != cDONE) @@ -480,13 +480,13 @@ static inline unsigned int adma_intr_mmi for (port_no = 0; port_no < host_set->n_ports; ++port_no) { struct ata_port *ap; ap = host_set->ports[port_no]; - if (ap && (!(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)))) { + if (ap && (!(ap->flags 
& ATA_FLAG_DISABLED))) { struct ata_queued_cmd *qc; struct adma_port_priv *pp = ap->private_data; if (!pp || pp->state != adma_state_mmio) continue; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { /* check main status, clearing INTRQ */ u8 status = ata_check_status(ap); diff -puN drivers/scsi/sata_mv.c~git-libata-all drivers/scsi/sata_mv.c --- devel/drivers/scsi/sata_mv.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_mv.c 2006-06-09 15:21:16.000000000 -0700 @@ -93,7 +93,7 @@ enum { MV_FLAG_IRQ_COALESCE = (1 << 29), /* IRQ coalescing capability */ MV_COMMON_FLAGS = (ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO | - ATA_FLAG_NO_ATAPI), + ATA_FLAG_NO_ATAPI | ATA_FLAG_PIO_POLLING), MV_6XXX_FLAGS = MV_FLAG_IRQ_COALESCE, CRQB_FLAG_READ = (1 << 0), @@ -272,33 +272,33 @@ enum chip_type { /* Command ReQuest Block: 32B */ struct mv_crqb { - u32 sg_addr; - u32 sg_addr_hi; - u16 ctrl_flags; - u16 ata_cmd[11]; + __le32 sg_addr; + __le32 sg_addr_hi; + __le16 ctrl_flags; + __le16 ata_cmd[11]; }; struct mv_crqb_iie { - u32 addr; - u32 addr_hi; - u32 flags; - u32 len; - u32 ata_cmd[4]; + __le32 addr; + __le32 addr_hi; + __le32 flags; + __le32 len; + __le32 ata_cmd[4]; }; /* Command ResPonse Block: 8B */ struct mv_crpb { - u16 id; - u16 flags; - u32 tmstmp; + __le16 id; + __le16 flags; + __le32 tmstmp; }; /* EDMA Physical Region Descriptor (ePRD); A.K.A. SG */ struct mv_sg { - u32 addr; - u32 flags_size; - u32 addr_hi; - u32 reserved; + __le32 addr; + __le32 flags_size; + __le32 addr_hi; + __le32 reserved; }; struct mv_port_priv { @@ -406,6 +406,7 @@ static const struct ata_port_operations .qc_prep = mv_qc_prep, .qc_issue = mv_qc_issue, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = mv_eng_timeout, @@ -433,6 +434,7 @@ static const struct ata_port_operations .qc_prep = mv_qc_prep, .qc_issue = mv_qc_issue, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = mv_eng_timeout, @@ -683,7 +685,7 @@ static void mv_stop_dma(struct ata_port } if (EDMA_EN & reg) { - printk(KERN_ERR "ata%u: Unable to stop eDMA\n", ap->id); + ata_port_printk(ap, KERN_ERR, "Unable to stop eDMA\n"); /* FIXME: Consider doing a reset here to recover */ } } @@ -1028,7 +1030,7 @@ static inline unsigned mv_inc_q_index(un return (index + 1) & MV_MAX_Q_DEPTH_MASK; } -static inline void mv_crqb_pack_cmd(u16 *cmdw, u8 data, u8 addr, unsigned last) +static inline void mv_crqb_pack_cmd(__le16 *cmdw, u8 data, u8 addr, unsigned last) { u16 tmp = data | (addr << CRQB_CMD_ADDR_SHIFT) | CRQB_CMD_CS | (last ? CRQB_CMD_LAST : 0); @@ -1051,7 +1053,7 @@ static void mv_qc_prep(struct ata_queued { struct ata_port *ap = qc->ap; struct mv_port_priv *pp = ap->private_data; - u16 *cw; + __le16 *cw; struct ata_taskfile *tf; u16 flags = 0; unsigned in_index; @@ -1307,8 +1309,8 @@ static void mv_err_intr(struct ata_port edma_err_cause = readl(port_mmio + EDMA_ERR_IRQ_CAUSE_OFS); if (EDMA_ERR_SERR & edma_err_cause) { - serr = scr_read(ap, SCR_ERROR); - scr_write_flush(ap, SCR_ERROR, serr); + sata_scr_read(ap, SCR_ERROR, &serr); + sata_scr_write_flush(ap, SCR_ERROR, serr); } if (EDMA_ERR_SELF_DIS & edma_err_cause) { struct mv_port_priv *pp = ap->private_data; @@ -1377,7 +1379,7 @@ static void mv_host_intr(struct ata_host /* Note that DEV_IRQ might happen spuriously during EDMA, * and should be ignored in such cases. * The cause of this is still under investigation. 
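The hunks around here all converge on the same guard in the interrupt handlers: skip ports now marked ATA_FLAG_DISABLED and skip IRQ-side completion for commands flagged ATA_TFLAG_POLLING. Below is a toy stand-alone model of that predicate; the structures and flag values are invented stand-ins, not libata's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Arbitrary stand-ins for the real ATA_FLAG_DISABLED / ATA_TFLAG_POLLING bits. */
#define TOY_PORT_DISABLED	(1u << 0)
#define TOY_TF_POLLING		(1u << 1)

struct toy_port { uint32_t flags; };
struct toy_qc   { uint32_t tf_flags; };

/* Skip disabled ports, and skip IRQ completion for polled commands. */
static bool irq_should_handle(const struct toy_port *ap, const struct toy_qc *qc)
{
	if (ap->flags & TOY_PORT_DISABLED)
		return false;
	if (qc == NULL || (qc->tf_flags & TOY_TF_POLLING))
		return false;
	return true;
}

int main(void)
{
	struct toy_port ap = { .flags = 0 };
	struct toy_qc qc = { .tf_flags = TOY_TF_POLLING };

	printf("%d\n", irq_should_handle(&ap, &qc));	/* 0: polled command */
	return 0;
}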
- */ + */ if (pp->pp_flags & MV_PP_FLAG_EDMA_EN) { /* EDMA: check for response queue interrupt */ if ((CRPB_DMA_DONE << hard_port) & hc_irq_cause) { @@ -1398,7 +1400,7 @@ static void mv_host_intr(struct ata_host } } - if (ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)) + if (ap && (ap->flags & ATA_FLAG_DISABLED)) continue; err_mask = ac_err_mask(ata_status); @@ -1419,7 +1421,7 @@ static void mv_host_intr(struct ata_host VPRINTK("port %u IRQ found for qc, " "ata_status 0x%x\n", port,ata_status); /* mark qc status appropriately */ - if (!(qc->tf.ctl & ATA_NIEN)) { + if (!(qc->tf.flags & ATA_TFLAG_POLLING)) { qc->err_mask |= err_mask; ata_qc_complete(qc); } @@ -1949,15 +1951,16 @@ static void __mv_phy_reset(struct ata_po /* Issue COMRESET via SControl */ comreset_retry: - scr_write_flush(ap, SCR_CONTROL, 0x301); + sata_scr_write_flush(ap, SCR_CONTROL, 0x301); __msleep(1, can_sleep); - scr_write_flush(ap, SCR_CONTROL, 0x300); + sata_scr_write_flush(ap, SCR_CONTROL, 0x300); __msleep(20, can_sleep); timeout = jiffies + msecs_to_jiffies(200); do { - sstatus = scr_read(ap, SCR_STATUS) & 0x3; + sata_scr_read(ap, SCR_STATUS, &sstatus); + sstatus &= 0x3; if ((sstatus == 3) || (sstatus == 0)) break; @@ -1974,11 +1977,12 @@ comreset_retry: "SCtrl 0x%08x\n", mv_scr_read(ap, SCR_STATUS), mv_scr_read(ap, SCR_ERROR), mv_scr_read(ap, SCR_CONTROL)); - if (sata_dev_present(ap)) { + if (ata_port_online(ap)) { ata_port_probe(ap); } else { - printk(KERN_INFO "ata%u: no device found (phy stat %08x)\n", - ap->id, scr_read(ap, SCR_STATUS)); + sata_scr_read(ap, SCR_STATUS, &sstatus); + ata_port_printk(ap, KERN_INFO, + "no device found (phy stat %08x)\n", sstatus); ata_port_disable(ap); return; } @@ -2005,7 +2009,7 @@ comreset_retry: tf.nsect = readb((void __iomem *) ap->ioaddr.nsect_addr); dev->class = ata_dev_classify(&tf); - if (!ata_dev_present(dev)) { + if (!ata_dev_enabled(dev)) { VPRINTK("Port disabled post-sig: No device present.\n"); ata_port_disable(ap); } @@ -2036,7 +2040,7 @@ static void mv_eng_timeout(struct ata_po { struct ata_queued_cmd *qc; - printk(KERN_ERR "ata%u: Entering mv_eng_timeout\n",ap->id); + ata_port_printk(ap, KERN_ERR, "Entering mv_eng_timeout\n"); DPRINTK("All regs @ start of eng_timeout\n"); mv_dump_all_regs(ap->host_set->mmio_base, ap->port_no, to_pci_dev(ap->host_set->dev)); diff -puN drivers/scsi/sata_nv.c~git-libata-all drivers/scsi/sata_nv.c --- devel/drivers/scsi/sata_nv.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_nv.c 2006-06-09 15:21:16.000000000 -0700 @@ -44,7 +44,7 @@ #include #define DRV_NAME "sata_nv" -#define DRV_VERSION "0.8" +#define DRV_VERSION "0.9" enum { NV_PORTS = 2, @@ -140,6 +140,12 @@ static const struct pci_device_id nv_pci PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA3, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC }, @@ -228,6 +234,7 @@ static const struct ata_port_operations .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, .eng_timeout = ata_eng_timeout, + .data_xfer = ata_pio_data_xfer, .irq_handler = nv_interrupt, .irq_clear 
= ata_bmdma_irq_clear, .scr_read = nv_scr_read, @@ -279,11 +286,11 @@ static irqreturn_t nv_interrupt (int irq ap = host_set->ports[i]; if (ap && - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { + !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) handled += ata_host_intr(ap, qc); else // No request pending? Clear interrupt status diff -puN drivers/scsi/sata_promise.c~git-libata-all drivers/scsi/sata_promise.c --- devel/drivers/scsi/sata_promise.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_promise.c 2006-06-09 15:21:16.000000000 -0700 @@ -76,7 +76,8 @@ enum { PDC_RESET = (1 << 11), /* HDMA reset */ PDC_COMMON_FLAGS = ATA_FLAG_NO_LEGACY | ATA_FLAG_SRST | - ATA_FLAG_MMIO | ATA_FLAG_NO_ATAPI, + ATA_FLAG_MMIO | ATA_FLAG_NO_ATAPI | + ATA_FLAG_PIO_POLLING, }; @@ -136,6 +137,7 @@ static const struct ata_port_operations .qc_prep = pdc_qc_prep, .qc_issue = pdc_qc_issue_prot, .eng_timeout = pdc_eng_timeout, + .data_xfer = ata_mmio_data_xfer, .irq_handler = pdc_interrupt, .irq_clear = pdc_irq_clear, @@ -158,6 +160,7 @@ static const struct ata_port_operations .qc_prep = pdc_qc_prep, .qc_issue = pdc_qc_issue_prot, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = pdc_eng_timeout, .irq_handler = pdc_interrupt, .irq_clear = pdc_irq_clear, @@ -171,7 +174,7 @@ static const struct ata_port_info pdc_po /* board_2037x */ { .sht = &pdc_ata_sht, - .host_flags = PDC_COMMON_FLAGS | ATA_FLAG_SATA, + .host_flags = PDC_COMMON_FLAGS /* | ATA_FLAG_SATA */, .pio_mask = 0x1f, /* pio0-4 */ .mwdma_mask = 0x07, /* mwdma0-2 */ .udma_mask = 0x7f, /* udma0-6 ; FIXME */ @@ -360,17 +363,32 @@ static void pdc_reset_port(struct ata_po static void pdc_sata_phy_reset(struct ata_port *ap) { pdc_reset_port(ap); - sata_phy_reset(ap); + if (ap->flags & ATA_FLAG_SATA) + sata_phy_reset(ap); + else + pdc_pata_phy_reset(ap); +} + +static void pdc_pata_cbl_detect(struct ata_port *ap) +{ + u8 tmp; + void __iomem *mmio = (void *) ap->ioaddr.cmd_addr + PDC_CTLSTAT + 0x03; + + tmp = readb(mmio); + + if (tmp & 0x01) { + ap->cbl = ATA_CBL_PATA40; + ap->udma_mask &= ATA_UDMA_MASK_40C; + } else + ap->cbl = ATA_CBL_PATA80; } static void pdc_pata_phy_reset(struct ata_port *ap) { - /* FIXME: add cable detect. 
Don't assume 40-pin cable */ - ap->cbl = ATA_CBL_PATA40; - ap->udma_mask &= ATA_UDMA_MASK_40C; + pdc_pata_cbl_detect(ap); - pdc_reset_port(ap); ata_port_probe(ap); + ata_bus_reset(ap); } @@ -435,7 +453,7 @@ static void pdc_eng_timeout(struct ata_p switch (qc->tf.protocol) { case ATA_PROT_DMA: case ATA_PROT_NODATA: - printk(KERN_ERR "ata%u: command timeout\n", ap->id); + ata_port_printk(ap, KERN_ERR, "command timeout\n"); drv_stat = ata_wait_idle(ap); qc->err_mask |= __ac_err_mask(drv_stat); break; @@ -443,8 +461,9 @@ static void pdc_eng_timeout(struct ata_p default: drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000); - printk(KERN_ERR "ata%u: unknown timeout, cmd 0x%x stat 0x%x\n", - ap->id, qc->tf.command, drv_stat); + ata_port_printk(ap, KERN_ERR, + "unknown timeout, cmd 0x%x stat 0x%x\n", + qc->tf.command, drv_stat); qc->err_mask |= ac_err_mask(drv_stat); break; @@ -533,11 +552,11 @@ static irqreturn_t pdc_interrupt (int ir ap = host_set->ports[i]; tmp = mask & (1 << (i + 1)); if (tmp && ap && - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { + !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) handled += pdc_host_intr(ap, qc); } } @@ -672,14 +691,11 @@ static int pdc_ata_init_one (struct pci_ unsigned int board_idx = (unsigned int) ent->driver_data; int pci_dev_busy = 0; int rc; + u8 tmp; if (!printed_version++) dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); - /* - * If this driver happens to only be useful on Apple's K2, then - * we should check that here as it has a normal Serverworks ID - */ rc = pci_enable_device(pdev); if (rc) return rc; @@ -740,6 +756,9 @@ static int pdc_ata_init_one (struct pci_ probe_ent->port[0].scr_addr = base + 0x400; probe_ent->port[1].scr_addr = base + 0x500; + probe_ent->port_flags[0] = ATA_FLAG_SATA; + probe_ent->port_flags[1] = ATA_FLAG_SATA; + /* notice 4-port boards */ switch (board_idx) { case board_40518: @@ -754,13 +773,29 @@ static int pdc_ata_init_one (struct pci_ probe_ent->port[2].scr_addr = base + 0x600; probe_ent->port[3].scr_addr = base + 0x700; + + probe_ent->port_flags[2] = ATA_FLAG_SATA; + probe_ent->port_flags[3] = ATA_FLAG_SATA; break; case board_2057x: /* Override hotplug offset for SATAII150 */ hp->hotplug_offset = PDC2_SATA_PLUG_CSR; /* Fall through */ case board_2037x: - probe_ent->n_ports = 2; + /* Some boards have also PATA port */ + tmp = readb(mmio_base + PDC_FLASH_CTL+1); + if (!(tmp & 0x80)) + { + probe_ent->n_ports = 3; + + pdc_ata_setup_port(&probe_ent->port[2], base + 0x300); + + probe_ent->port_flags[2] = ATA_FLAG_SLAVE_POSS; + + printk(KERN_INFO DRV_NAME " PATA port found\n"); + } + else + probe_ent->n_ports = 2; break; case board_20771: probe_ent->n_ports = 2; diff -puN drivers/scsi/sata_qstor.c~git-libata-all drivers/scsi/sata_qstor.c --- devel/drivers/scsi/sata_qstor.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_qstor.c 2006-06-09 15:21:16.000000000 -0700 @@ -41,7 +41,7 @@ #include #define DRV_NAME "sata_qstor" -#define DRV_VERSION "0.05" +#define DRV_VERSION "0.06" enum { QS_PORTS = 4, @@ -156,6 +156,7 @@ static const struct ata_port_operations .phy_reset = qs_phy_reset, .qc_prep = qs_qc_prep, .qc_issue = qs_qc_issue, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = qs_eng_timeout, .irq_handler = qs_intr, .irq_clear = qs_irq_clear, @@ -175,7 +176,7 @@ static const struct ata_port_info qs_por 
.host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | ATA_FLAG_SATA_RESET | //FIXME ATA_FLAG_SRST | - ATA_FLAG_MMIO, + ATA_FLAG_MMIO | ATA_FLAG_PIO_POLLING, .pio_mask = 0x10, /* pio4 */ .udma_mask = 0x7f, /* udma0-6 */ .port_ops = &qs_ata_ops, @@ -394,14 +395,13 @@ static inline unsigned int qs_intr_pkt(s DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n", sff1, sff0, port_no, sHST, sDST); handled = 1; - if (ap && !(ap->flags & - (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; struct qs_port_priv *pp = ap->private_data; if (!pp || pp->state != qs_state_pkt) continue; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { switch (sHST) { case 0: /* successful CPB */ case 3: /* device error */ @@ -428,13 +428,13 @@ static inline unsigned int qs_intr_mmio( struct ata_port *ap; ap = host_set->ports[port_no]; if (ap && - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { + !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; struct qs_port_priv *pp = ap->private_data; if (!pp || pp->state != qs_state_mmio) continue; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { /* check main status, clearing INTRQ */ u8 status = ata_check_status(ap); diff -puN drivers/scsi/sata_sil24.c~git-libata-all drivers/scsi/sata_sil24.c --- devel/drivers/scsi/sata_sil24.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_sil24.c 2006-06-09 15:21:16.000000000 -0700 @@ -31,7 +31,7 @@ #include #define DRV_NAME "sata_sil24" -#define DRV_VERSION "0.23" +#define DRV_VERSION "0.24" /* * Port request block (PRB) 32 bytes @@ -86,6 +86,13 @@ enum { /* HOST_SLOT_STAT bits */ HOST_SSTAT_ATTN = (1 << 31), + /* HOST_CTRL bits */ + HOST_CTRL_M66EN = (1 << 16), /* M66EN PCI bus signal */ + HOST_CTRL_TRDY = (1 << 17), /* latched PCI TRDY */ + HOST_CTRL_STOP = (1 << 18), /* latched PCI STOP */ + HOST_CTRL_DEVSEL = (1 << 19), /* latched PCI DEVSEL */ + HOST_CTRL_REQ64 = (1 << 20), /* latched PCI REQ64 */ + /* * Port registers * (8192 bytes @ +0x0000, +0x2000, +0x4000 and +0x6000 @ BAR2) @@ -142,8 +149,15 @@ enum { PORT_IRQ_PWR_CHG = (1 << 3), /* power management change */ PORT_IRQ_PHYRDY_CHG = (1 << 4), /* PHY ready change */ PORT_IRQ_COMWAKE = (1 << 5), /* COMWAKE received */ - PORT_IRQ_UNK_FIS = (1 << 6), /* Unknown FIS received */ - PORT_IRQ_SDB_FIS = (1 << 11), /* SDB FIS received */ + PORT_IRQ_UNK_FIS = (1 << 6), /* unknown FIS received */ + PORT_IRQ_DEV_XCHG = (1 << 7), /* device exchanged */ + PORT_IRQ_8B10B = (1 << 8), /* 8b/10b decode error threshold */ + PORT_IRQ_CRC = (1 << 9), /* CRC error threshold */ + PORT_IRQ_HANDSHAKE = (1 << 10), /* handshake error threshold */ + PORT_IRQ_SDB_NOTIFY = (1 << 11), /* SDB notify received */ + + DEF_PORT_IRQ = PORT_IRQ_COMPLETE | PORT_IRQ_ERROR | + PORT_IRQ_DEV_XCHG | PORT_IRQ_UNK_FIS, /* bits[27:16] are unmasked (raw) */ PORT_IRQ_RAW_SHIFT = 16, @@ -174,7 +188,7 @@ enum { PORT_CERR_CMD_PCIPERR = 27, /* ctrl[15:13] 110 - PCI parity err while fetching PRB */ PORT_CERR_XFR_UNDEF = 32, /* PSD ecode 00 - undefined */ PORT_CERR_XFR_TGTABRT = 33, /* PSD ecode 01 - target abort */ - PORT_CERR_XFR_MSGABRT = 34, /* PSD ecode 10 - master abort */ + PORT_CERR_XFR_MSTABRT = 34, /* PSD ecode 10 - master abort */ PORT_CERR_XFR_PCIPERR = 35, /* PSD ecode 11 - PCI prity err during transfer */ PORT_CERR_SENDSERVICE = 
36, /* FIS received while sending service */ @@ -202,11 +216,19 @@ enum { SGE_DRD = (1 << 29), /* discard data read (/dev/null) data address ignored */ + SIL24_MAX_CMDS = 31, + /* board id */ BID_SIL3124 = 0, BID_SIL3132 = 1, BID_SIL3131 = 2, + /* host flags */ + SIL24_COMMON_FLAGS = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | + ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | + ATA_FLAG_NCQ, + SIL24_FLAG_PCIX_IRQ_WOC = (1 << 24), /* IRQ loss errata on PCI-X */ + IRQ_STAT_4PORTS = 0xf, }; @@ -226,6 +248,58 @@ union sil24_cmd_block { struct sil24_atapi_block atapi; }; +static struct sil24_cerr_info { + unsigned int err_mask, action; + const char *desc; +} sil24_cerr_db[] = { + [0] = { AC_ERR_DEV, ATA_EH_REVALIDATE, + "device error" }, + [PORT_CERR_DEV] = { AC_ERR_DEV, ATA_EH_REVALIDATE, + "device error via D2H FIS" }, + [PORT_CERR_SDB] = { AC_ERR_DEV, ATA_EH_REVALIDATE, + "device error via SDB FIS" }, + [PORT_CERR_DATA] = { AC_ERR_ATA_BUS, ATA_EH_SOFTRESET, + "error in data FIS" }, + [PORT_CERR_SEND] = { AC_ERR_ATA_BUS, ATA_EH_SOFTRESET, + "failed to transmit command FIS" }, + [PORT_CERR_INCONSISTENT] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "protocol mismatch" }, + [PORT_CERR_DIRECTION] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "data directon mismatch" }, + [PORT_CERR_UNDERRUN] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "ran out of SGEs while writing" }, + [PORT_CERR_OVERRUN] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "ran out of SGEs while reading" }, + [PORT_CERR_PKT_PROT] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "invalid data directon for ATAPI CDB" }, + [PORT_CERR_SGT_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_SOFTRESET, + "SGT no on qword boundary" }, + [PORT_CERR_SGT_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI target abort while fetching SGT" }, + [PORT_CERR_SGT_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI master abort while fetching SGT" }, + [PORT_CERR_SGT_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI parity error while fetching SGT" }, + [PORT_CERR_CMD_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_SOFTRESET, + "PRB not on qword boundary" }, + [PORT_CERR_CMD_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI target abort while fetching PRB" }, + [PORT_CERR_CMD_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI master abort while fetching PRB" }, + [PORT_CERR_CMD_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI parity error while fetching PRB" }, + [PORT_CERR_XFR_UNDEF] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "undefined error while transferring data" }, + [PORT_CERR_XFR_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI target abort while transferring data" }, + [PORT_CERR_XFR_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI master abort while transferring data" }, + [PORT_CERR_XFR_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, + "PCI parity error while transferring data" }, + [PORT_CERR_SENDSERVICE] = { AC_ERR_HSM, ATA_EH_SOFTRESET, + "FIS received while sending service FIS" }, +}; + /* * ap->private_data * @@ -253,8 +327,11 @@ static int sil24_probe_reset(struct ata_ static void sil24_qc_prep(struct ata_queued_cmd *qc); static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc); static void sil24_irq_clear(struct ata_port *ap); -static void sil24_eng_timeout(struct ata_port *ap); static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs); +static void sil24_freeze(struct ata_port *ap); +static void sil24_thaw(struct ata_port *ap); +static void sil24_error_handler(struct ata_port *ap); +static void sil24_post_internal_cmd(struct ata_queued_cmd *qc); static int 
sil24_port_start(struct ata_port *ap); static void sil24_port_stop(struct ata_port *ap); static void sil24_host_stop(struct ata_host_set *host_set); @@ -281,7 +358,8 @@ static struct scsi_host_template sil24_s .name = DRV_NAME, .ioctl = ata_scsi_ioctl, .queuecommand = ata_scsi_queuecmd, - .can_queue = ATA_DEF_QUEUE, + .change_queue_depth = ata_scsi_change_queue_depth, + .can_queue = SIL24_MAX_CMDS, .this_id = ATA_SHT_THIS_ID, .sg_tablesize = LIBATA_MAX_PRD, .cmd_per_lun = ATA_SHT_CMD_PER_LUN, @@ -309,14 +387,17 @@ static const struct ata_port_operations .qc_prep = sil24_qc_prep, .qc_issue = sil24_qc_issue, - .eng_timeout = sil24_eng_timeout, - .irq_handler = sil24_interrupt, .irq_clear = sil24_irq_clear, .scr_read = sil24_scr_read, .scr_write = sil24_scr_write, + .freeze = sil24_freeze, + .thaw = sil24_thaw, + .error_handler = sil24_error_handler, + .post_internal_cmd = sil24_post_internal_cmd, + .port_start = sil24_port_start, .port_stop = sil24_port_stop, .host_stop = sil24_host_stop, @@ -333,9 +414,8 @@ static struct ata_port_info sil24_port_i /* sil_3124 */ { .sht = &sil24_sht, - .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | - SIL24_NPORTS2FLAG(4), + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(4) | + SIL24_FLAG_PCIX_IRQ_WOC, .pio_mask = 0x1f, /* pio0-4 */ .mwdma_mask = 0x07, /* mwdma0-2 */ .udma_mask = 0x3f, /* udma0-5 */ @@ -344,9 +424,7 @@ static struct ata_port_info sil24_port_i /* sil_3132 */ { .sht = &sil24_sht, - .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | - SIL24_NPORTS2FLAG(2), + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(2), .pio_mask = 0x1f, /* pio0-4 */ .mwdma_mask = 0x07, /* mwdma0-2 */ .udma_mask = 0x3f, /* udma0-5 */ @@ -355,9 +433,7 @@ static struct ata_port_info sil24_port_i /* sil_3131/sil_3531 */ { .sht = &sil24_sht, - .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | - SIL24_NPORTS2FLAG(1), + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(1), .pio_mask = 0x1f, /* pio0-4 */ .mwdma_mask = 0x07, /* mwdma0-2 */ .udma_mask = 0x3f, /* udma0-5 */ @@ -365,6 +441,13 @@ static struct ata_port_info sil24_port_i }, }; +static int sil24_tag(int tag) +{ + if (unlikely(ata_tag_internal(tag))) + return 0; + return tag; +} + static void sil24_dev_config(struct ata_port *ap, struct ata_device *dev) { void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; @@ -426,56 +509,65 @@ static void sil24_tf_read(struct ata_por *tf = pp->tf; } -static int sil24_softreset(struct ata_port *ap, int verbose, - unsigned int *class) +static int sil24_init_port(struct ata_port *ap) +{ + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; + u32 tmp; + + writel(PORT_CS_INIT, port + PORT_CTRL_STAT); + ata_wait_register(port + PORT_CTRL_STAT, + PORT_CS_INIT, PORT_CS_INIT, 10, 100); + tmp = ata_wait_register(port + PORT_CTRL_STAT, + PORT_CS_RDY, 0, 10, 100); + + if ((tmp & (PORT_CS_INIT | PORT_CS_RDY)) != PORT_CS_RDY) + return -EIO; + return 0; +} + +static int sil24_softreset(struct ata_port *ap, unsigned int *class) { void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; struct sil24_port_priv *pp = ap->private_data; struct sil24_prb *prb = &pp->cmd_block[0].ata.prb; dma_addr_t paddr = pp->cmd_block_dma; - unsigned long timeout = jiffies + ATA_TMOUT_BOOT * HZ; - u32 irq_enable, irq_stat; + u32 mask, irq_stat; + const char *reason; DPRINTK("ENTER\n"); - if (!sata_dev_present(ap)) { + if (ata_port_offline(ap)) { DPRINTK("PHY reports no device\n"); 
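/*
 * (Illustrative sketch, not part of the patch.)  The presence test just
 * above now goes through ata_port_offline() instead of the old
 * sata_dev_present() helper that this patch removes from libata.h.  Going
 * by the sata_scr_read() interface declared later in this diff and by the
 * old "(SStatus & 0xf) == 0x3" test, ata_port_offline() presumably amounts
 * to something like the following; the helper name here is made up and the
 * real body lives in libata-core.c, which this hunk does not show:
 *
 *	static int example_port_offline(struct ata_port *ap)
 *	{
 *		u32 sstatus;
 *
 *		 // offline when SStatus is readable and DET != 3
 *		if (sata_scr_read(ap, SCR_STATUS, &sstatus) == 0 &&
 *		    (sstatus & 0xf) != 0x3)
 *			return 1;
 *		return 0;
 *	}
 */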
*class = ATA_DEV_NONE; goto out; } - /* temporarily turn off IRQs during SRST */ - irq_enable = readl(port + PORT_IRQ_ENABLE_SET); - writel(irq_enable, port + PORT_IRQ_ENABLE_CLR); - - /* - * XXX: Not sure whether the following sleep is needed or not. - * The original driver had it. So.... - */ - msleep(10); + /* put the port into known state */ + if (sil24_init_port(ap)) { + reason ="port not ready"; + goto err; + } - prb->ctrl = PRB_CTRL_SRST; + /* do SRST */ + prb->ctrl = cpu_to_le16(PRB_CTRL_SRST); prb->fis[1] = 0; /* no PM yet */ writel((u32)paddr, port + PORT_CMD_ACTIVATE); + writel((u64)paddr >> 32, port + PORT_CMD_ACTIVATE + 4); - do { - irq_stat = readl(port + PORT_IRQ_STAT); - writel(irq_stat, port + PORT_IRQ_STAT); /* clear irq */ - - irq_stat >>= PORT_IRQ_RAW_SHIFT; - if (irq_stat & (PORT_IRQ_COMPLETE | PORT_IRQ_ERROR)) - break; - - msleep(100); - } while (time_before(jiffies, timeout)); + mask = (PORT_IRQ_COMPLETE | PORT_IRQ_ERROR) << PORT_IRQ_RAW_SHIFT; + irq_stat = ata_wait_register(port + PORT_IRQ_STAT, mask, 0x0, + 100, ATA_TMOUT_BOOT / HZ * 1000); - /* restore IRQs */ - writel(irq_enable, port + PORT_IRQ_ENABLE_SET); + writel(irq_stat, port + PORT_IRQ_STAT); /* clear IRQs */ + irq_stat >>= PORT_IRQ_RAW_SHIFT; if (!(irq_stat & PORT_IRQ_COMPLETE)) { - DPRINTK("EXIT, srst failed\n"); - return -EIO; + if (irq_stat & PORT_IRQ_ERROR) + reason = "SRST command error"; + else + reason = "timeout"; + goto err; } sil24_update_tf(ap); @@ -487,15 +579,55 @@ static int sil24_softreset(struct ata_po out: DPRINTK("EXIT, class=%u\n", *class); return 0; + + err: + ata_port_printk(ap, KERN_ERR, "softreset failed (%s)\n", reason); + return -EIO; } -static int sil24_hardreset(struct ata_port *ap, int verbose, - unsigned int *class) +static int sil24_hardreset(struct ata_port *ap, unsigned int *class) { - unsigned int dummy_class; + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; + const char *reason; + int tout_msec; + u32 tmp; + + /* sil24 does the right thing(tm) without any protection */ + sata_set_spd(ap); + + tout_msec = 100; + if (ata_port_online(ap)) + tout_msec = 5000; - /* sil24 doesn't report device signature after hard reset */ - return sata_std_hardreset(ap, verbose, &dummy_class); + writel(PORT_CS_DEV_RST, port + PORT_CTRL_STAT); + tmp = ata_wait_register(port + PORT_CTRL_STAT, + PORT_CS_DEV_RST, PORT_CS_DEV_RST, 10, tout_msec); + + /* SStatus oscillates between zero and valid status for short + * duration after DEV_RST, give it time to settle. + */ + msleep(100); + + if (tmp & PORT_CS_DEV_RST) { + if (ata_port_offline(ap)) + return 0; + reason = "link not ready"; + goto err; + } + + if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { + reason = "device not ready"; + goto err; + } + + /* sil24 doesn't report device class code after hardreset, + * leave *class alone. 
+ */ + return 0; + + err: + ata_port_printk(ap, KERN_ERR, "hardreset failed (%s)\n", reason); + return -EIO; } static int sil24_probe_reset(struct ata_port *ap, unsigned int *classes) @@ -528,17 +660,20 @@ static void sil24_qc_prep(struct ata_que { struct ata_port *ap = qc->ap; struct sil24_port_priv *pp = ap->private_data; - union sil24_cmd_block *cb = pp->cmd_block + qc->tag; + union sil24_cmd_block *cb; struct sil24_prb *prb; struct sil24_sge *sge; + u16 ctrl = 0; + + cb = &pp->cmd_block[sil24_tag(qc->tag)]; switch (qc->tf.protocol) { case ATA_PROT_PIO: case ATA_PROT_DMA: + case ATA_PROT_NCQ: case ATA_PROT_NODATA: prb = &cb->ata.prb; sge = cb->ata.sge; - prb->ctrl = 0; break; case ATA_PROT_ATAPI: @@ -551,12 +686,10 @@ static void sil24_qc_prep(struct ata_que if (qc->tf.protocol != ATA_PROT_ATAPI_NODATA) { if (qc->tf.flags & ATA_TFLAG_WRITE) - prb->ctrl = PRB_CTRL_PACKET_WRITE; + ctrl = PRB_CTRL_PACKET_WRITE; else - prb->ctrl = PRB_CTRL_PACKET_READ; - } else - prb->ctrl = 0; - + ctrl = PRB_CTRL_PACKET_READ; + } break; default: @@ -565,6 +698,7 @@ static void sil24_qc_prep(struct ata_que BUG(); } + prb->ctrl = cpu_to_le16(ctrl); ata_tf_to_fis(&qc->tf, prb->fis, 0); if (qc->flags & ATA_QCFLAG_DMAMAP) @@ -574,11 +708,18 @@ static void sil24_qc_prep(struct ata_que static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; - void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; struct sil24_port_priv *pp = ap->private_data; - dma_addr_t paddr = pp->cmd_block_dma + qc->tag * sizeof(*pp->cmd_block); + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; + unsigned int tag = sil24_tag(qc->tag); + dma_addr_t paddr; + void __iomem *activate; + + paddr = pp->cmd_block_dma + tag * sizeof(*pp->cmd_block); + activate = port + PORT_CMD_ACTIVATE + tag * 8; + + writel((u32)paddr, activate); + writel((u64)paddr >> 32, activate + 4); - writel((u32)paddr, port + PORT_CMD_ACTIVATE); return 0; } @@ -587,162 +728,141 @@ static void sil24_irq_clear(struct ata_p /* unused */ } -static int __sil24_restart_controller(void __iomem *port) +static void sil24_freeze(struct ata_port *ap) { - u32 tmp; - int cnt; - - writel(PORT_CS_INIT, port + PORT_CTRL_STAT); - - /* Max ~10ms */ - for (cnt = 0; cnt < 10000; cnt++) { - tmp = readl(port + PORT_CTRL_STAT); - if (tmp & PORT_CS_RDY) - return 0; - udelay(1); - } + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; - return -1; + /* Port-wide IRQ mask in HOST_CTRL doesn't really work, clear + * PORT_IRQ_ENABLE instead. + */ + writel(0xffff, port + PORT_IRQ_ENABLE_CLR); } -static void sil24_restart_controller(struct ata_port *ap) +static void sil24_thaw(struct ata_port *ap) { - if (__sil24_restart_controller((void __iomem *)ap->ioaddr.cmd_addr)) - printk(KERN_ERR DRV_NAME - " ata%u: failed to restart controller\n", ap->id); + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; + u32 tmp; + + /* clear IRQ */ + tmp = readl(port + PORT_IRQ_STAT); + writel(tmp, port + PORT_IRQ_STAT); + + /* turn IRQ back on */ + writel(DEF_PORT_IRQ, port + PORT_IRQ_ENABLE_SET); } -static int __sil24_reset_controller(void __iomem *port) +static void sil24_error_intr(struct ata_port *ap) { - int cnt; - u32 tmp; + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; + struct ata_eh_info *ehi = &ap->eh_info; + int freeze = 0; + u32 irq_stat; - /* Reset controller state. Is this correct? 
*/ - writel(PORT_CS_DEV_RST, port + PORT_CTRL_STAT); - readl(port + PORT_CTRL_STAT); /* sync */ + /* on error, we need to clear IRQ explicitly */ + irq_stat = readl(port + PORT_IRQ_STAT); + writel(irq_stat, port + PORT_IRQ_STAT); - /* Max ~100ms */ - for (cnt = 0; cnt < 1000; cnt++) { - udelay(100); - tmp = readl(port + PORT_CTRL_STAT); - if (!(tmp & PORT_CS_DEV_RST)) - break; - } + /* first, analyze and record host port events */ + ata_ehi_clear_desc(ehi); - if (tmp & PORT_CS_DEV_RST) - return -1; + ata_ehi_push_desc(ehi, "irq_stat 0x%08x", irq_stat); - if (tmp & PORT_CS_RDY) - return 0; + if (irq_stat & PORT_IRQ_DEV_XCHG) { + ehi->err_mask |= AC_ERR_ATA_BUS; + /* sil24 doesn't recover very well from phy + * disconnection with a softreset. Force hardreset. + */ + ehi->action |= ATA_EH_HARDRESET; + ata_ehi_push_desc(ehi, ", device_exchanged"); + freeze = 1; + } - return __sil24_restart_controller(port); -} + if (irq_stat & PORT_IRQ_UNK_FIS) { + ehi->err_mask |= AC_ERR_HSM; + ehi->action |= ATA_EH_SOFTRESET; + ata_ehi_push_desc(ehi , ", unknown FIS"); + freeze = 1; + } -static void sil24_reset_controller(struct ata_port *ap) -{ - printk(KERN_NOTICE DRV_NAME - " ata%u: resetting controller...\n", ap->id); - if (__sil24_reset_controller((void __iomem *)ap->ioaddr.cmd_addr)) - printk(KERN_ERR DRV_NAME - " ata%u: failed to reset controller\n", ap->id); -} + /* deal with command error */ + if (irq_stat & PORT_IRQ_ERROR) { + struct sil24_cerr_info *ci = NULL; + unsigned int err_mask = 0, action = 0; + struct ata_queued_cmd *qc; + u32 cerr; + + /* analyze CMD_ERR */ + cerr = readl(port + PORT_CMD_ERR); + if (cerr < ARRAY_SIZE(sil24_cerr_db)) + ci = &sil24_cerr_db[cerr]; + + if (ci && ci->desc) { + err_mask |= ci->err_mask; + action |= ci->action; + ata_ehi_push_desc(ehi, ", %s", ci->desc); + } else { + err_mask |= AC_ERR_OTHER; + action |= ATA_EH_SOFTRESET; + ata_ehi_push_desc(ehi, ", unknown command error %d", + cerr); + } -static void sil24_eng_timeout(struct ata_port *ap) -{ - struct ata_queued_cmd *qc; + /* record error info */ + qc = ata_qc_from_tag(ap, ap->active_tag); + if (qc) { + sil24_update_tf(ap); + qc->err_mask |= err_mask; + } else + ehi->err_mask |= err_mask; - qc = ata_qc_from_tag(ap, ap->active_tag); + ehi->action |= action; + } - printk(KERN_ERR "ata%u: command timeout\n", ap->id); - qc->err_mask |= AC_ERR_TIMEOUT; - ata_eh_qc_complete(qc); + /* freeze or abort */ + if (freeze) + ata_port_freeze(ap); + else + ata_port_abort(ap); +} - sil24_reset_controller(ap); +static void sil24_finish_qc(struct ata_queued_cmd *qc) +{ + if (qc->flags & ATA_QCFLAG_RESULT_TF) + sil24_update_tf(qc->ap); } -static void sil24_error_intr(struct ata_port *ap, u32 slot_stat) +static inline void sil24_host_intr(struct ata_port *ap) { - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); - struct sil24_port_priv *pp = ap->private_data; void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; - u32 irq_stat, cmd_err, sstatus, serror; - unsigned int err_mask; + u32 slot_stat, qc_active; + int rc; - irq_stat = readl(port + PORT_IRQ_STAT); - writel(irq_stat, port + PORT_IRQ_STAT); /* clear irq */ + slot_stat = readl(port + PORT_SLOT_STAT); - if (!(irq_stat & PORT_IRQ_ERROR)) { - /* ignore non-completion, non-error irqs for now */ - printk(KERN_WARNING DRV_NAME - "ata%u: non-error exception irq (irq_stat %x)\n", - ap->id, irq_stat); + if (unlikely(slot_stat & HOST_SSTAT_ATTN)) { + sil24_error_intr(ap); return; } - cmd_err = readl(port + PORT_CMD_ERR); - sstatus = readl(port + PORT_SSTATUS); - 
serror = readl(port + PORT_SERROR); - if (serror) - writel(serror, port + PORT_SERROR); + if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) + writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); - /* - * Don't log ATAPI device errors. They're supposed to happen - * and any serious errors will be logged using sense data by - * the SCSI layer. - */ - if (ap->device[0].class != ATA_DEV_ATAPI || cmd_err > PORT_CERR_SDB) - printk("ata%u: error interrupt on port%d\n" - " stat=0x%x irq=0x%x cmd_err=%d sstatus=0x%x serror=0x%x\n", - ap->id, ap->port_no, slot_stat, irq_stat, cmd_err, sstatus, serror); - - if (cmd_err == PORT_CERR_DEV || cmd_err == PORT_CERR_SDB) { - /* - * Device is reporting error, tf registers are valid. - */ - sil24_update_tf(ap); - err_mask = ac_err_mask(pp->tf.command); - sil24_restart_controller(ap); - } else { - /* - * Other errors. libata currently doesn't have any - * mechanism to report these errors. Just turn on - * ATA_ERR. - */ - err_mask = AC_ERR_OTHER; - sil24_reset_controller(ap); + qc_active = slot_stat & ~HOST_SSTAT_ATTN; + rc = ata_qc_complete_multiple(ap, qc_active, sil24_finish_qc); + if (rc > 0) + return; + if (rc < 0) { + struct ata_eh_info *ehi = &ap->eh_info; + ehi->err_mask |= AC_ERR_HSM; + ehi->action |= ATA_EH_SOFTRESET; + ata_port_freeze(ap); + return; } - if (qc) { - qc->err_mask |= err_mask; - ata_qc_complete(qc); - } -} - -static inline void sil24_host_intr(struct ata_port *ap) -{ - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); - void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; - u32 slot_stat; - - slot_stat = readl(port + PORT_SLOT_STAT); - if (!(slot_stat & HOST_SSTAT_ATTN)) { - struct sil24_port_priv *pp = ap->private_data; - /* - * !HOST_SSAT_ATTN guarantees successful completion, - * so reading back tf registers is unnecessary for - * most commands. TODO: read tf registers for - * commands which require these values on successful - * completion (EXECUTE DEVICE DIAGNOSTIC, CHECK POWER, - * DEVICE RESET and READ PORT MULTIPLIER (any more?). 
- */ - sil24_update_tf(ap); - - if (qc) { - qc->err_mask |= ac_err_mask(pp->tf.command); - ata_qc_complete(qc); - } - } else - sil24_error_intr(ap, slot_stat); + if (ata_ratelimit()) + ata_port_printk(ap, KERN_INFO, "spurious interrupt " + "(slot_stat 0x%x active_tag %d sactive 0x%x)\n", + slot_stat, ap->active_tag, ap->sactive); } static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs) @@ -769,7 +889,7 @@ static irqreturn_t sil24_interrupt(int i for (i = 0; i < host_set->n_ports; i++) if (status & (1 << i)) { struct ata_port *ap = host_set->ports[i]; - if (ap && !(ap->flags & ATA_FLAG_PORT_DISABLED)) { + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { sil24_host_intr(host_set->ports[i]); handled++; } else @@ -782,9 +902,34 @@ static irqreturn_t sil24_interrupt(int i return IRQ_RETVAL(handled); } +static void sil24_error_handler(struct ata_port *ap) +{ + struct ata_eh_context *ehc = &ap->eh_context; + + if (sil24_init_port(ap)) { + ata_eh_freeze_port(ap); + ehc->i.action |= ATA_EH_HARDRESET; + } + + /* perform recovery */ + ata_do_eh(ap, sil24_softreset, sil24_hardreset, ata_std_postreset); +} + +static void sil24_post_internal_cmd(struct ata_queued_cmd *qc) +{ + struct ata_port *ap = qc->ap; + + if (qc->flags & ATA_QCFLAG_FAILED) + qc->err_mask |= AC_ERR_OTHER; + + /* make DMA engine forget about the failed command */ + if (qc->err_mask) + sil24_init_port(ap); +} + static inline void sil24_cblk_free(struct sil24_port_priv *pp, struct device *dev) { - const size_t cb_size = sizeof(*pp->cmd_block); + const size_t cb_size = sizeof(*pp->cmd_block) * SIL24_MAX_CMDS; dma_free_coherent(dev, cb_size, pp->cmd_block, pp->cmd_block_dma); } @@ -794,7 +939,7 @@ static int sil24_port_start(struct ata_p struct device *dev = ap->host_set->dev; struct sil24_port_priv *pp; union sil24_cmd_block *cb; - size_t cb_size = sizeof(*cb); + size_t cb_size = sizeof(*cb) * SIL24_MAX_CMDS; dma_addr_t cb_dma; int rc = -ENOMEM; @@ -858,6 +1003,7 @@ static int sil24_init_one(struct pci_dev void __iomem *host_base = NULL; void __iomem *port_base = NULL; int i, rc; + u32 tmp; if (!printed_version++) dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); @@ -910,35 +1056,51 @@ static int sil24_init_one(struct pci_dev /* * Configure the device */ - /* - * FIXME: This device is certainly 64-bit capable. We just - * don't know how to use it. After fixing 32bit activation in - * this function, enable 64bit masks here. 
- */ - rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK); - if (rc) { - dev_printk(KERN_ERR, &pdev->dev, - "32-bit DMA enable failed\n"); - goto out_free; - } - rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); - if (rc) { - dev_printk(KERN_ERR, &pdev->dev, - "32-bit consistent DMA enable failed\n"); - goto out_free; + if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) { + rc = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK); + if (rc) { + rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "64-bit DMA enable failed\n"); + goto out_free; + } + } + } else { + rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "32-bit DMA enable failed\n"); + goto out_free; + } + rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); + if (rc) { + dev_printk(KERN_ERR, &pdev->dev, + "32-bit consistent DMA enable failed\n"); + goto out_free; + } } /* GPIO off */ writel(0, host_base + HOST_FLASH_CMD); - /* Mask interrupts during initialization */ + /* Apply workaround for completion IRQ loss on PCI-X errata */ + if (probe_ent->host_flags & SIL24_FLAG_PCIX_IRQ_WOC) { + tmp = readl(host_base + HOST_CTRL); + if (tmp & (HOST_CTRL_TRDY | HOST_CTRL_STOP | HOST_CTRL_DEVSEL)) + dev_printk(KERN_INFO, &pdev->dev, + "Applying completion IRQ loss on PCI-X " + "errata fix\n"); + else + probe_ent->host_flags &= ~SIL24_FLAG_PCIX_IRQ_WOC; + } + + /* clear global reset & mask interrupts during initialization */ writel(0, host_base + HOST_CTRL); for (i = 0; i < probe_ent->n_ports; i++) { void __iomem *port = port_base + i * PORT_REGS_SIZE; unsigned long portu = (unsigned long)port; - u32 tmp; - int cnt; probe_ent->port[i].cmd_addr = portu + PORT_PRB; probe_ent->port[i].scr_addr = portu + PORT_SCONTROL; @@ -952,18 +1114,20 @@ static int sil24_init_one(struct pci_dev tmp = readl(port + PORT_CTRL_STAT); if (tmp & PORT_CS_PORT_RST) { writel(PORT_CS_PORT_RST, port + PORT_CTRL_CLR); - readl(port + PORT_CTRL_STAT); /* sync */ - for (cnt = 0; cnt < 10; cnt++) { - msleep(10); - tmp = readl(port + PORT_CTRL_STAT); - if (!(tmp & PORT_CS_PORT_RST)) - break; - } + tmp = ata_wait_register(port + PORT_CTRL_STAT, + PORT_CS_PORT_RST, + PORT_CS_PORT_RST, 10, 100); if (tmp & PORT_CS_PORT_RST) dev_printk(KERN_ERR, &pdev->dev, "failed to clear port RST\n"); } + /* Configure IRQ WoC */ + if (probe_ent->host_flags & SIL24_FLAG_PCIX_IRQ_WOC) + writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_STAT); + else + writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_CLR); + /* Zero error counters. */ writel(0x8000, port + PORT_DECODE_ERR_THRESH); writel(0x8000, port + PORT_CRC_ERR_THRESH); @@ -972,26 +1136,11 @@ static int sil24_init_one(struct pci_dev writel(0x0000, port + PORT_CRC_ERR_CNT); writel(0x0000, port + PORT_HSHK_ERR_CNT); - /* FIXME: 32bit activation? 
*/ - writel(0, port + PORT_ACTIVATE_UPPER_ADDR); - writel(PORT_CS_32BIT_ACTV, port + PORT_CTRL_STAT); - - /* Configure interrupts */ - writel(0xffff, port + PORT_IRQ_ENABLE_CLR); - writel(PORT_IRQ_COMPLETE | PORT_IRQ_ERROR | PORT_IRQ_SDB_FIS, - port + PORT_IRQ_ENABLE_SET); - - /* Clear interrupts */ - writel(0x0fff0fff, port + PORT_IRQ_STAT); - writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_CLR); + /* Always use 64bit activation */ + writel(PORT_CS_32BIT_ACTV, port + PORT_CTRL_CLR); /* Clear port multiplier enable and resume bits */ writel(PORT_CS_PM_EN | PORT_CS_RESUME, port + PORT_CTRL_CLR); - - /* Reset itself */ - if (__sil24_reset_controller(port)) - dev_printk(KERN_ERR, &pdev->dev, - "failed to reset controller\n"); } /* Turn on interrupts */ diff -puN drivers/scsi/sata_sil.c~git-libata-all drivers/scsi/sata_sil.c --- devel/drivers/scsi/sata_sil.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_sil.c 2006-06-09 15:21:16.000000000 -0700 @@ -46,7 +46,7 @@ #include #define DRV_NAME "sata_sil" -#define DRV_VERSION "0.9" +#define DRV_VERSION "1.0" enum { /* @@ -96,6 +96,8 @@ static void sil_dev_config(struct ata_po static u32 sil_scr_read (struct ata_port *ap, unsigned int sc_reg); static void sil_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val); static void sil_post_set_mode (struct ata_port *ap); +static void sil_freeze(struct ata_port *ap); +static void sil_thaw(struct ata_port *ap); static const struct pci_device_id sil_pci_tbl[] = { @@ -119,12 +121,8 @@ static const struct sil_drivelist { { "ST330013AS", SIL_QUIRK_MOD15WRITE }, { "ST340017AS", SIL_QUIRK_MOD15WRITE }, { "ST360015AS", SIL_QUIRK_MOD15WRITE }, - { "ST380013AS", SIL_QUIRK_MOD15WRITE }, { "ST380023AS", SIL_QUIRK_MOD15WRITE }, { "ST3120023AS", SIL_QUIRK_MOD15WRITE }, - { "ST3160023AS", SIL_QUIRK_MOD15WRITE }, - { "ST3120026AS", SIL_QUIRK_MOD15WRITE }, - { "ST3200822AS", SIL_QUIRK_MOD15WRITE }, { "ST340014ASL", SIL_QUIRK_MOD15WRITE }, { "ST360014ASL", SIL_QUIRK_MOD15WRITE }, { "ST380011ASL", SIL_QUIRK_MOD15WRITE }, @@ -174,7 +172,11 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, - .eng_timeout = ata_eng_timeout, + .data_xfer = ata_mmio_data_xfer, + .freeze = sil_freeze, + .thaw = sil_thaw, + .error_handler = ata_bmdma_error_handler, + .post_internal_cmd = ata_bmdma_post_internal_cmd, .irq_handler = ata_interrupt, .irq_clear = ata_bmdma_irq_clear, .scr_read = sil_scr_read, @@ -263,7 +265,7 @@ static void sil_post_set_mode (struct at for (i = 0; i < 2; i++) { dev = &ap->device[i]; - if (!ata_dev_present(dev)) + if (!ata_dev_enabled(dev)) dev_mode[i] = 0; /* PIO0/1/2 */ else if (dev->flags & ATA_DFLAG_PIO) dev_mode[i] = 1; /* PIO3/4 */ @@ -314,6 +316,33 @@ static void sil_scr_write (struct ata_po writel(val, mmio); } +static void sil_freeze(struct ata_port *ap) +{ + void __iomem *mmio_base = ap->host_set->mmio_base; + u32 tmp; + + /* plug IRQ */ + tmp = readl(mmio_base + SIL_SYSCFG); + tmp |= SIL_MASK_IDE0_INT << ap->port_no; + writel(tmp, mmio_base + SIL_SYSCFG); + readl(mmio_base + SIL_SYSCFG); /* flush */ +} + +static void sil_thaw(struct ata_port *ap) +{ + void __iomem *mmio_base = ap->host_set->mmio_base; + u32 tmp; + + /* clear IRQ */ + ata_chk_status(ap); + ata_bmdma_irq_clear(ap); + + /* turn on IRQ */ + tmp = readl(mmio_base + SIL_SYSCFG); + tmp &= ~(SIL_MASK_IDE0_INT << ap->port_no); + writel(tmp, mmio_base + SIL_SYSCFG); +} + /** * sil_dev_config - Apply device/host-specific errata fixups * 
@ap: Port containing device to be examined @@ -360,16 +389,16 @@ static void sil_dev_config(struct ata_po if (slow_down || ((ap->flags & SIL_FLAG_MOD15WRITE) && (quirks & SIL_QUIRK_MOD15WRITE))) { - printk(KERN_INFO "ata%u(%u): applying Seagate errata fix (mod15write workaround)\n", - ap->id, dev->devno); + ata_dev_printk(dev, KERN_INFO, "applying Seagate errata fix " + "(mod15write workaround)\n"); dev->max_sectors = 15; return; } /* limit to udma5 */ if (quirks & SIL_QUIRK_UDMA5MAX) { - printk(KERN_INFO "ata%u(%u): applying Maxtor errata fix %s\n", - ap->id, dev->devno, model_num); + ata_dev_printk(dev, KERN_INFO, + "applying Maxtor errata fix %s\n", model_num); dev->udma_mask &= ATA_UDMA5; return; } @@ -384,16 +413,12 @@ static int sil_init_one (struct pci_dev int rc; unsigned int i; int pci_dev_busy = 0; - u32 tmp, irq_mask; + u32 tmp; u8 cls; if (!printed_version++) dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); - /* - * If this driver happens to only be useful on Apple's K2, then - * we should check that here as it has a normal Serverworks ID - */ rc = pci_enable_device(pdev); if (rc) return rc; @@ -478,24 +503,11 @@ static int sil_init_one (struct pci_dev } if (ent->driver_data == sil_3114) { - irq_mask = SIL_MASK_4PORT; - /* flip the magic "make 4 ports work" bit */ tmp = readl(mmio_base + sil_port[2].bmdma); if ((tmp & SIL_INTR_STEERING) == 0) writel(tmp | SIL_INTR_STEERING, mmio_base + sil_port[2].bmdma); - - } else { - irq_mask = SIL_MASK_2PORT; - } - - /* make sure IDE0/1/2/3 interrupts are not masked */ - tmp = readl(mmio_base + SIL_SYSCFG); - if (tmp & irq_mask) { - tmp &= ~irq_mask; - writel(tmp, mmio_base + SIL_SYSCFG); - readl(mmio_base + SIL_SYSCFG); /* flush */ } /* mask all SATA phy-related interrupts */ diff -puN drivers/scsi/sata_sis.c~git-libata-all drivers/scsi/sata_sis.c --- devel/drivers/scsi/sata_sis.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_sis.c 2006-06-09 15:21:16.000000000 -0700 @@ -43,7 +43,7 @@ #include #define DRV_NAME "sata_sis" -#define DRV_VERSION "0.5" +#define DRV_VERSION "0.6" enum { sis_180 = 0, @@ -113,6 +113,7 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, .eng_timeout = ata_eng_timeout, .irq_handler = ata_interrupt, .irq_clear = ata_bmdma_irq_clear, diff -puN drivers/scsi/sata_svw.c~git-libata-all drivers/scsi/sata_svw.c --- devel/drivers/scsi/sata_svw.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_svw.c 2006-06-09 15:21:16.000000000 -0700 @@ -54,7 +54,7 @@ #endif /* CONFIG_PPC_OF */ #define DRV_NAME "sata_svw" -#define DRV_VERSION "1.07" +#define DRV_VERSION "1.8" enum { /* Taskfile registers offsets */ @@ -257,7 +257,7 @@ static int k2_sata_proc_info(struct Scsi int len, index; /* Find the ata_port */ - ap = (struct ata_port *) &shost->hostdata[0]; + ap = ata_shost_to_port(shost); if (ap == NULL) return 0; @@ -320,6 +320,7 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = ata_eng_timeout, .irq_handler = ata_interrupt, .irq_clear = ata_bmdma_irq_clear, diff -puN drivers/scsi/sata_sx4.c~git-libata-all drivers/scsi/sata_sx4.c --- devel/drivers/scsi/sata_sx4.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_sx4.c 2006-06-09 15:21:16.000000000 -0700 @@ -46,7 +46,7 @@ 
#include "sata_promise.h" #define DRV_NAME "sata_sx4" -#define DRV_VERSION "0.8" +#define DRV_VERSION "0.9" enum { @@ -204,6 +204,7 @@ static const struct ata_port_operations .phy_reset = pdc_20621_phy_reset, .qc_prep = pdc20621_qc_prep, .qc_issue = pdc20621_qc_issue_prot, + .data_xfer = ata_mmio_data_xfer, .eng_timeout = pdc_eng_timeout, .irq_handler = pdc20621_interrupt, .irq_clear = pdc20621_irq_clear, @@ -218,7 +219,7 @@ static const struct ata_port_info pdc_po .sht = &pdc_sata_sht, .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | ATA_FLAG_SRST | ATA_FLAG_MMIO | - ATA_FLAG_NO_ATAPI, + ATA_FLAG_NO_ATAPI | ATA_FLAG_PIO_POLLING, .pio_mask = 0x1f, /* pio0-4 */ .mwdma_mask = 0x07, /* mwdma0-2 */ .udma_mask = 0x7f, /* udma0-6 ; FIXME */ @@ -833,11 +834,11 @@ static irqreturn_t pdc20621_interrupt (i tmp = mask & (1 << i); VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp); if (tmp && ap && - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { + !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) handled += pdc20621_host_intr(ap, qc, (i > 4), mmio_base); } @@ -868,15 +869,16 @@ static void pdc_eng_timeout(struct ata_p switch (qc->tf.protocol) { case ATA_PROT_DMA: case ATA_PROT_NODATA: - printk(KERN_ERR "ata%u: command timeout\n", ap->id); + ata_port_printk(ap, KERN_ERR, "command timeout\n"); qc->err_mask |= __ac_err_mask(ata_wait_idle(ap)); break; default: drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000); - printk(KERN_ERR "ata%u: unknown timeout, cmd 0x%x stat 0x%x\n", - ap->id, qc->tf.command, drv_stat); + ata_port_printk(ap, KERN_ERR, + "unknown timeout, cmd 0x%x stat 0x%x\n", + qc->tf.command, drv_stat); qc->err_mask |= ac_err_mask(drv_stat); break; @@ -1375,10 +1377,6 @@ static int pdc_sata_init_one (struct pci if (!printed_version++) dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); - /* - * If this driver happens to only be useful on Apple's K2, then - * we should check that here as it has a normal Serverworks ID - */ rc = pci_enable_device(pdev); if (rc) return rc; diff -puN drivers/scsi/sata_uli.c~git-libata-all drivers/scsi/sata_uli.c --- devel/drivers/scsi/sata_uli.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_uli.c 2006-06-09 15:21:16.000000000 -0700 @@ -37,7 +37,7 @@ #include #define DRV_NAME "sata_uli" -#define DRV_VERSION "0.5" +#define DRV_VERSION "0.6" enum { uli_5289 = 0, @@ -110,6 +110,7 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, .eng_timeout = ata_eng_timeout, diff -puN drivers/scsi/sata_via.c~git-libata-all drivers/scsi/sata_via.c --- devel/drivers/scsi/sata_via.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_via.c 2006-06-09 15:21:16.000000000 -0700 @@ -47,7 +47,7 @@ #include #define DRV_NAME "sata_via" -#define DRV_VERSION "1.1" +#define DRV_VERSION "1.2" enum board_ids_enum { vt6420, @@ -124,6 +124,7 @@ static const struct ata_port_operations .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, .eng_timeout = ata_eng_timeout, diff -puN drivers/scsi/sata_vsc.c~git-libata-all drivers/scsi/sata_vsc.c --- devel/drivers/scsi/sata_vsc.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/sata_vsc.c 2006-06-09 15:21:16.000000000 
-0700 @@ -221,14 +221,21 @@ static irqreturn_t vsc_sata_interrupt (i ap = host_set->ports[i]; - if (ap && !(ap->flags & - (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { + if (is_vsc_sata_int_err(i, int_status)) { + u32 err_status; + printk(KERN_DEBUG "%s: ignoring interrupt(s)\n", __FUNCTION__); + err_status = ap ? vsc_sata_scr_read(ap, SCR_ERROR) : 0; + vsc_sata_scr_write(ap, SCR_ERROR, err_status); + handled++; + } + + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { struct ata_queued_cmd *qc; qc = ata_qc_from_tag(ap, ap->active_tag); - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) handled += ata_host_intr(ap, qc); - } else if (is_vsc_sata_int_err(i, int_status)) { + else if (is_vsc_sata_int_err(i, int_status)) { /* * On some chips (i.e. Intel 31244), an error * interrupt will sneak in at initialization @@ -290,6 +297,7 @@ static const struct ata_port_operations .bmdma_status = ata_bmdma_status, .qc_prep = ata_qc_prep, .qc_issue = ata_qc_issue_prot, + .data_xfer = ata_pio_data_xfer, .eng_timeout = ata_eng_timeout, .irq_handler = vsc_sata_interrupt, .irq_clear = ata_bmdma_irq_clear, diff -puN drivers/scsi/scsi.c~git-libata-all drivers/scsi/scsi.c --- devel/drivers/scsi/scsi.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/scsi.c 2006-06-09 15:21:16.000000000 -0700 @@ -720,6 +720,24 @@ void scsi_init_cmd_from_req(struct scsi_ static DEFINE_PER_CPU(struct list_head, scsi_done_q); /** + * scsi_req_abort_cmd -- Request command recovery for the specified command + * cmd: pointer to the SCSI command of interest + * + * This function requests that SCSI Core start recovery for the + * command by deleting the timer and adding the command to the eh + * queue. It can be called by either LLDDs or SCSI Core. LLDDs who + * implement their own error recovery MAY ignore the timeout event if + * they generated scsi_req_abort_cmd. + */ +void scsi_req_abort_cmd(struct scsi_cmnd *cmd) +{ + if (!scsi_delete_timer(cmd)) + return; + scsi_times_out(cmd); +} +EXPORT_SYMBOL(scsi_req_abort_cmd); + +/** * scsi_done - Enqueue the finished SCSI command into the done queue. * @cmd: The SCSI Command for which a low-level device driver (LLDD) gives * ownership back to SCSI Core -- i.e. the LLDD has finished with it. diff -puN drivers/scsi/scsi_error.c~git-libata-all drivers/scsi/scsi_error.c --- devel/drivers/scsi/scsi_error.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/scsi_error.c 2006-06-09 15:21:16.000000000 -0700 @@ -58,6 +58,28 @@ void scsi_eh_wakeup(struct Scsi_Host *sh } /** + * scsi_schedule_eh - schedule EH for SCSI host + * @shost: SCSI host to invoke error handling on. + * + * Schedule SCSI EH without scmd. + **/ +void scsi_schedule_eh(struct Scsi_Host *shost) +{ + unsigned long flags; + + spin_lock_irqsave(shost->host_lock, flags); + + if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 || + scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) { + shost->host_eh_scheduled++; + scsi_eh_wakeup(shost); + } + + spin_unlock_irqrestore(shost->host_lock, flags); +} +EXPORT_SYMBOL_GPL(scsi_schedule_eh); + +/** * scsi_eh_scmd_add - add scsi cmd to error handling. * @scmd: scmd to run eh on. * @eh_flag: optional SCSI_EH flag. 
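/*
 * (Usage sketch; every example_* name below is hypothetical.)  The two
 * exports added above give transports two entry points into SCSI EH:
 * scsi_req_abort_cmd() when a specific command should be pushed into
 * recovery, and scsi_schedule_eh() when the host needs recovery but there
 * is no failed scmd to hand to scsi_eh_scmd_add().  Only those calls and
 * the new scsi_transport_api.h header come from this patch; the functions
 * below, and the assumption that the caller lives under drivers/scsi so
 * the private header is reachable by its plain name, are illustration only.
 */
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>
#include "scsi_transport_api.h"

/* per-command path: delete the timer and queue the command for EH */
static void example_fail_command(struct scsi_cmnd *cmd)
{
        scsi_req_abort_cmd(cmd);
}

/* host-wide path: wake the EH thread without an associated command */
static void example_request_port_recovery(struct Scsi_Host *shost)
{
        scsi_schedule_eh(shost);
}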
@@ -1517,7 +1539,7 @@ int scsi_error_handler(void *data) */ set_current_state(TASK_INTERRUPTIBLE); while (!kthread_should_stop()) { - if (shost->host_failed == 0 || + if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) || shost->host_failed != shost->host_busy) { SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler scsi_eh_%d sleeping\n", diff -puN drivers/scsi/scsi_lib.c~git-libata-all drivers/scsi/scsi_lib.c --- devel/drivers/scsi/scsi_lib.c~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/drivers/scsi/scsi_lib.c 2006-06-09 15:21:16.000000000 -0700 @@ -566,7 +566,7 @@ void scsi_device_unbusy(struct scsi_devi spin_lock_irqsave(shost->host_lock, flags); shost->host_busy--; if (unlikely(scsi_host_in_recovery(shost) && - shost->host_failed)) + (shost->host_failed || shost->host_eh_scheduled))) scsi_eh_wakeup(shost); spin_unlock(shost->host_lock); spin_lock(sdev->request_queue->queue_lock); diff -puN /dev/null drivers/scsi/scsi_transport_api.h --- /dev/null 2006-06-03 22:34:36.282200750 -0700 +++ devel-akpm/drivers/scsi/scsi_transport_api.h 2006-06-09 15:21:16.000000000 -0700 @@ -0,0 +1,6 @@ +#ifndef _SCSI_TRANSPORT_API_H +#define _SCSI_TRANSPORT_API_H + +void scsi_schedule_eh(struct Scsi_Host *shost); + +#endif /* _SCSI_TRANSPORT_API_H */ diff -puN include/linux/ata.h~git-libata-all include/linux/ata.h --- devel/include/linux/ata.h~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/include/linux/ata.h 2006-06-09 15:21:16.000000000 -0700 @@ -97,6 +97,9 @@ enum { ATA_DRQ = (1 << 3), /* data request i/o */ ATA_ERR = (1 << 0), /* have an error */ ATA_SRST = (1 << 2), /* software reset */ + ATA_ICRC = (1 << 7), /* interface CRC error */ + ATA_UNC = (1 << 6), /* uncorrectable media error */ + ATA_IDNF = (1 << 4), /* ID not found */ ATA_ABORTED = (1 << 2), /* command aborted */ /* ATA command block registers */ @@ -130,6 +133,8 @@ enum { ATA_CMD_WRITE = 0xCA, ATA_CMD_WRITE_EXT = 0x35, ATA_CMD_WRITE_FUA_EXT = 0x3D, + ATA_CMD_FPDMA_READ = 0x60, + ATA_CMD_FPDMA_WRITE = 0x61, ATA_CMD_PIO_READ = 0x20, ATA_CMD_PIO_READ_EXT = 0x24, ATA_CMD_PIO_WRITE = 0x30, @@ -148,6 +153,10 @@ enum { ATA_CMD_INIT_DEV_PARAMS = 0x91, ATA_CMD_READ_NATIVE_MAX = 0xF8, ATA_CMD_READ_NATIVE_MAX_EXT = 0x27, + ATA_CMD_READ_LOG_EXT = 0x2f, + + /* READ_LOG_EXT pages */ + ATA_LOG_SATA_NCQ = 0x10, /* SETFEATURES stuff */ SETFEATURES_XFER = 0x03, @@ -192,6 +201,16 @@ enum { SCR_ACTIVE = 3, SCR_NOTIFICATION = 4, + /* SError bits */ + SERR_DATA_RECOVERED = (1 << 0), /* recovered data error */ + SERR_COMM_RECOVERED = (1 << 1), /* recovered comm failure */ + SERR_DATA = (1 << 8), /* unrecovered data error */ + SERR_PERSISTENT = (1 << 9), /* persistent data/comm error */ + SERR_PROTOCOL = (1 << 10), /* protocol violation */ + SERR_INTERNAL = (1 << 11), /* host internal error */ + SERR_PHYRDY_CHG = (1 << 16), /* PHY RDY changed */ + SERR_DEV_XCHG = (1 << 26), /* device exchanged */ + /* struct ata_taskfile flags */ ATA_TFLAG_LBA48 = (1 << 0), /* enable 48-bit LBA and "HOB" */ ATA_TFLAG_ISADDR = (1 << 1), /* enable r/w to nsect/lba regs */ @@ -199,6 +218,7 @@ enum { ATA_TFLAG_WRITE = (1 << 3), /* data dir: host->dev==1 (write) */ ATA_TFLAG_LBA = (1 << 4), /* enable LBA */ ATA_TFLAG_FUA = (1 << 5), /* enable FUA */ + ATA_TFLAG_POLLING = (1 << 6), /* set nIEN to 1 and use polling */ }; enum ata_tf_protocols { @@ -207,6 +227,7 @@ enum ata_tf_protocols { ATA_PROT_NODATA, /* no data */ ATA_PROT_PIO, /* PIO single sector */ ATA_PROT_DMA, /* DMA */ + ATA_PROT_NCQ, /* NCQ */ ATA_PROT_ATAPI, /* packet command, 
PIO data xfer*/ ATA_PROT_ATAPI_NODATA, /* packet command, no data */ ATA_PROT_ATAPI_DMA, /* packet command with special DMA sauce */ @@ -262,6 +283,8 @@ struct ata_taskfile { #define ata_id_has_pm(id) ((id)[82] & (1 << 3)) #define ata_id_has_lba(id) ((id)[49] & (1 << 9)) #define ata_id_has_dma(id) ((id)[49] & (1 << 8)) +#define ata_id_has_ncq(id) ((id)[76] & (1 << 8)) +#define ata_id_queue_depth(id) (((id)[75] & 0x1f) + 1) #define ata_id_removeable(id) ((id)[0] & (1 << 7)) #define ata_id_has_dword_io(id) ((id)[50] & (1 << 0)) #define ata_id_u32(id,n) \ @@ -272,6 +295,8 @@ struct ata_taskfile { ((u64) (id)[(n) + 1] << 16) | \ ((u64) (id)[(n) + 0]) ) +#define ata_id_cdb_intr(id) (((id)[0] & 0x60) == 0x20) + static inline unsigned int ata_id_major_version(const u16 *id) { unsigned int mver; @@ -311,6 +336,15 @@ static inline int is_atapi_taskfile(cons (tf->protocol == ATA_PROT_ATAPI_DMA); } +static inline int is_multi_taskfile(struct ata_taskfile *tf) +{ + return (tf->command == ATA_CMD_READ_MULTI) || + (tf->command == ATA_CMD_WRITE_MULTI) || + (tf->command == ATA_CMD_READ_MULTI_EXT) || + (tf->command == ATA_CMD_WRITE_MULTI_EXT) || + (tf->command == ATA_CMD_WRITE_MULTI_FUA_EXT); +} + static inline int ata_ok(u8 status) { return ((status & (ATA_BUSY | ATA_DRDY | ATA_DF | ATA_DRQ | ATA_ERR)) diff -puN include/linux/libata.h~git-libata-all include/linux/libata.h --- devel/include/linux/libata.h~git-libata-all 2006-06-09 15:21:16.000000000 -0700 +++ devel-akpm/include/linux/libata.h 2006-06-09 15:21:16.000000000 -0700 @@ -33,6 +33,7 @@ #include #include #include +#include /* * compile-time options: to be removed as soon as all the drivers are @@ -44,7 +45,6 @@ #undef ATA_NDEBUG /* define to disable quick runtime checks */ #undef ATA_ENABLE_PATA /* define to enable PATA support in some * low-level drivers */ -#undef ATAPI_ENABLE_DMADIR /* enables ATAPI DMADIR bridge support */ /* note: prints function name for you */ @@ -108,8 +108,11 @@ enum { LIBATA_MAX_PRD = ATA_MAX_PRD / 2, ATA_MAX_PORTS = 8, ATA_DEF_QUEUE = 1, - ATA_MAX_QUEUE = 1, + /* tag ATA_MAX_QUEUE - 1 is reserved for internal commands */ + ATA_MAX_QUEUE = 32, + ATA_TAG_INTERNAL = ATA_MAX_QUEUE - 1, ATA_MAX_SECTORS = 200, /* FIXME */ + ATA_MAX_SECTORS_LBA48 = 65535, ATA_MAX_BUS = 2, ATA_DEF_BUSY_WAIT = 10000, ATA_SHORT_PAUSE = (HZ >> 6) + 1, @@ -120,9 +123,13 @@ enum { ATA_SHT_USE_CLUSTERING = 1, /* struct ata_device stuff */ - ATA_DFLAG_LBA48 = (1 << 0), /* device supports LBA48 */ - ATA_DFLAG_PIO = (1 << 1), /* device currently in PIO mode */ - ATA_DFLAG_LBA = (1 << 2), /* device supports LBA */ + ATA_DFLAG_LBA = (1 << 0), /* device supports LBA */ + ATA_DFLAG_LBA48 = (1 << 1), /* device supports LBA48 */ + ATA_DFLAG_CDB_INTR = (1 << 2), /* device asserts INTRQ when ready for CDB */ + ATA_DFLAG_NCQ = (1 << 3), /* device supports NCQ */ + ATA_DFLAG_CFG_MASK = (1 << 8) - 1, + + ATA_DFLAG_PIO = (1 << 8), /* device currently in PIO mode */ ATA_DEV_UNKNOWN = 0, /* unknown device */ ATA_DEV_ATA = 1, /* ATA device */ @@ -132,43 +139,50 @@ enum { ATA_DEV_NONE = 5, /* no device */ /* struct ata_port flags */ - ATA_FLAG_SLAVE_POSS = (1 << 1), /* host supports slave dev */ + ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */ /* (doesn't imply presence) */ - ATA_FLAG_PORT_DISABLED = (1 << 2), /* port is disabled, ignore it */ - ATA_FLAG_SATA = (1 << 3), - ATA_FLAG_NO_LEGACY = (1 << 4), /* no legacy mode check */ - ATA_FLAG_SRST = (1 << 5), /* (obsolete) use ATA SRST, not E.D.D. 
*/ - ATA_FLAG_MMIO = (1 << 6), /* use MMIO, not PIO */ - ATA_FLAG_SATA_RESET = (1 << 7), /* (obsolete) use COMRESET */ - ATA_FLAG_PIO_DMA = (1 << 8), /* PIO cmds via DMA */ - ATA_FLAG_NOINTR = (1 << 9), /* FIXME: Remove this once - * proper HSM is in place. */ - ATA_FLAG_DEBUGMSG = (1 << 10), - ATA_FLAG_NO_ATAPI = (1 << 11), /* No ATAPI support */ - - ATA_FLAG_SUSPENDED = (1 << 12), /* port is suspended */ - - ATA_FLAG_PIO_LBA48 = (1 << 13), /* Host DMA engine is LBA28 only */ - ATA_FLAG_IRQ_MASK = (1 << 14), /* Mask IRQ in PIO xfers */ - - ATA_FLAG_FLUSH_PORT_TASK = (1 << 15), /* Flush port task */ - ATA_FLAG_IN_EH = (1 << 16), /* EH in progress */ - - ATA_QCFLAG_ACTIVE = (1 << 1), /* cmd not yet ack'd to scsi lyer */ - ATA_QCFLAG_SG = (1 << 3), /* have s/g table? */ - ATA_QCFLAG_SINGLE = (1 << 4), /* no s/g, just a single buffer */ + ATA_FLAG_SATA = (1 << 1), + ATA_FLAG_NO_LEGACY = (1 << 2), /* no legacy mode check */ + ATA_FLAG_MMIO = (1 << 3), /* use MMIO, not PIO */ + ATA_FLAG_SRST = (1 << 4), /* (obsolete) use ATA SRST, not E.D.D. */ + ATA_FLAG_SATA_RESET = (1 << 5), /* (obsolete) use COMRESET */ + ATA_FLAG_NO_ATAPI = (1 << 6), /* No ATAPI support */ + ATA_FLAG_PIO_DMA = (1 << 7), /* PIO cmds via DMA */ + ATA_FLAG_PIO_LBA48 = (1 << 8), /* Host DMA engine is LBA28 only */ + ATA_FLAG_PIO_POLLING = (1 << 10), /* use polling PIO if LLD + * doesn't handle PIO interrupts */ + ATA_FLAG_NCQ = (1 << 11), /* host supports NCQ */ + + ATA_FLAG_DEBUGMSG = (1 << 14), + ATA_FLAG_FLUSH_PORT_TASK = (1 << 15), /* flush port task */ + + ATA_FLAG_EH_PENDING = (1 << 16), /* EH pending */ + ATA_FLAG_FROZEN = (1 << 17), /* port is frozen */ + ATA_FLAG_RECOVERED = (1 << 18), /* recovery action performed */ + + ATA_FLAG_DISABLED = (1 << 22), /* port is disabled, ignore it */ + ATA_FLAG_SUSPENDED = (1 << 23), /* port is suspended (power) */ + + /* bits 24:31 of ap->flags are reserved for LLDD specific flags */ + + /* struct ata_queued_cmd flags */ + ATA_QCFLAG_ACTIVE = (1 << 0), /* cmd not yet ack'd to scsi lyer */ + ATA_QCFLAG_SG = (1 << 1), /* have s/g table? 
*/ + ATA_QCFLAG_SINGLE = (1 << 2), /* no s/g, just a single buffer */ ATA_QCFLAG_DMAMAP = ATA_QCFLAG_SG | ATA_QCFLAG_SINGLE, - ATA_QCFLAG_EH_SCHEDULED = (1 << 5), /* EH scheduled */ + ATA_QCFLAG_IO = (1 << 3), /* standard IO command */ + ATA_QCFLAG_RESULT_TF = (1 << 4), /* result TF requested */ + + ATA_QCFLAG_FAILED = (1 << 16), /* cmd failed and is owned by EH */ + ATA_QCFLAG_SENSE_VALID = (1 << 17), /* sense data valid */ + ATA_QCFLAG_EH_SCHEDULED = (1 << 18), /* EH scheduled (obsolete) */ /* host set flags */ ATA_HOST_SIMPLEX = (1 << 0), /* Host is simplex, one DMA channel per host_set only */ /* various lengths of time */ - ATA_TMOUT_PIO = 30 * HZ, ATA_TMOUT_BOOT = 30 * HZ, /* heuristic */ ATA_TMOUT_BOOT_QUICK = 7 * HZ, /* heuristic */ - ATA_TMOUT_CDB = 30 * HZ, - ATA_TMOUT_CDB_QUICK = 5 * HZ, ATA_TMOUT_INTERNAL = 30 * HZ, ATA_TMOUT_INTERNAL_QUICK = 5 * HZ, @@ -207,21 +221,44 @@ enum { /* size of buffer to pad xfers ending on unaligned boundaries */ ATA_DMA_PAD_SZ = 4, ATA_DMA_PAD_BUF_SZ = ATA_DMA_PAD_SZ * ATA_MAX_QUEUE, - - /* Masks for port functions */ + + /* masks for port functions */ ATA_PORT_PRIMARY = (1 << 0), ATA_PORT_SECONDARY = (1 << 1), + + /* ering size */ + ATA_ERING_SIZE = 32, + + /* desc_len for ata_eh_info and context */ + ATA_EH_DESC_LEN = 80, + + /* reset / recovery action types */ + ATA_EH_REVALIDATE = (1 << 0), + ATA_EH_SOFTRESET = (1 << 1), + ATA_EH_HARDRESET = (1 << 2), + + ATA_EH_RESET_MASK = ATA_EH_SOFTRESET | ATA_EH_HARDRESET, + + /* ata_eh_info->flags */ + ATA_EHI_DID_RESET = (1 << 0), /* already reset this port */ + + /* max repeat if error condition is still set after ->error_handler */ + ATA_EH_MAX_REPEAT = 5, + + /* how hard are we gonna try to probe/recover devices */ + ATA_PROBE_MAX_TRIES = 3, + ATA_EH_RESET_TRIES = 3, + ATA_EH_DEV_TRIES = 3, }; enum hsm_task_states { - HSM_ST_UNKNOWN, - HSM_ST_IDLE, - HSM_ST_POLL, - HSM_ST_TMOUT, - HSM_ST, - HSM_ST_LAST, - HSM_ST_LAST_POLL, - HSM_ST_ERR, + HSM_ST_UNKNOWN, /* state unknown */ + HSM_ST_IDLE, /* no command on going */ + HSM_ST, /* (waiting the device to) transfer data */ + HSM_ST_LAST, /* (waiting the device to) complete command */ + HSM_ST_ERR, /* error */ + HSM_ST_FIRST, /* (waiting the device to) + write CDB or first data block */ }; enum ata_completion_errors { @@ -245,7 +282,7 @@ struct ata_queued_cmd; /* typedefs */ typedef void (*ata_qc_cb_t) (struct ata_queued_cmd *qc); typedef void (*ata_probeinit_fn_t)(struct ata_port *); -typedef int (*ata_reset_fn_t)(struct ata_port *, int, unsigned int *); +typedef int (*ata_reset_fn_t)(struct ata_port *, unsigned int *); typedef void (*ata_postreset_fn_t)(struct ata_port *ap, unsigned int *); struct ata_ioports { @@ -281,6 +318,7 @@ struct ata_probe_ent { unsigned long irq; unsigned int irq_flags; unsigned long host_flags; + unsigned long port_flags[ATA_MAX_PORTS]; unsigned long host_set_flags; void __iomem *mmio_base; void *private_data; @@ -336,7 +374,7 @@ struct ata_queued_cmd { struct scatterlist *__sg; unsigned int err_mask; - + struct ata_taskfile result_tf; ata_qc_cb_t complete_fn; void *private_data; @@ -348,12 +386,24 @@ struct ata_host_stats { unsigned long rw_reqbuf; }; +struct ata_ering_entry { + int is_io; + unsigned int err_mask; + u64 timestamp; +}; + +struct ata_ering { + int cursor; + struct ata_ering_entry ring[ATA_ERING_SIZE]; +}; + struct ata_device { + struct ata_port *ap; u64 n_sectors; /* size of device, if ATA */ unsigned long flags; /* ATA_DFLAG_xxx */ unsigned int class; /* ATA_DEV_xxx */ unsigned int devno; /* 0 or 1 */ - u16 *id; 
/* IDENTIFY xxx DEVICE data */ + u16 id[ATA_ID_WORDS]; /* IDENTIFY xxx DEVICE data */ u8 pio_mode; u8 dma_mode; u8 xfer_mode; @@ -373,6 +423,24 @@ struct ata_device { u16 cylinders; /* Number of cylinders */ u16 heads; /* Number of heads */ u16 sectors; /* Number of sectors per track */ + + /* error history */ + struct ata_ering ering; +}; + +struct ata_eh_info { + struct ata_device *dev; /* offending device */ + u32 serror; /* SError from LLDD */ + unsigned int err_mask; /* port-wide err_mask */ + unsigned int action; /* ATA_EH_* action mask */ + unsigned int flags; /* ATA_EHI_* flags */ + char desc[ATA_EH_DESC_LEN]; + int desc_len; +}; + +struct ata_eh_context { + struct ata_eh_info i; + int tries[ATA_MAX_DEVICES]; }; struct ata_port { @@ -397,12 +465,21 @@ struct ata_port { unsigned int mwdma_mask; unsigned int udma_mask; unsigned int cbl; /* cable type; ATA_CBL_xxx */ + unsigned int sata_spd_limit; /* SATA PHY speed limit */ + + /* record runtime error info, protected by host_set lock */ + struct ata_eh_info eh_info; + /* EH context owned by EH */ + struct ata_eh_context eh_context; struct ata_device device[ATA_MAX_DEVICES]; struct ata_queued_cmd qcmd[ATA_MAX_QUEUE]; - unsigned long qactive; + unsigned long qc_allocated; + unsigned int qc_active; + unsigned int active_tag; + u32 sactive; struct ata_host_stats stats; struct ata_host_set *host_set; @@ -411,12 +488,13 @@ struct ata_port { struct work_struct port_task; unsigned int hsm_task_state; - unsigned long pio_task_timeout; u32 msg_enable; struct list_head eh_done_q; void *private_data; + + u8 sector_buf[ATA_SECT_SIZE]; /* owned by EH */ }; struct ata_port_operations { @@ -447,10 +525,20 @@ struct ata_port_operations { void (*bmdma_setup) (struct ata_queued_cmd *qc); void (*bmdma_start) (struct ata_queued_cmd *qc); + void (*data_xfer) (struct ata_device *, unsigned char *, unsigned int, int); + void (*qc_prep) (struct ata_queued_cmd *qc); unsigned int (*qc_issue) (struct ata_queued_cmd *qc); - void (*eng_timeout) (struct ata_port *ap); + /* Error handlers. ->error_handler overrides ->eng_timeout and + * indicates that new-style EH is in place. 
+ */ + void (*eng_timeout) (struct ata_port *ap); /* obsolete */ + + void (*freeze) (struct ata_port *ap); + void (*thaw) (struct ata_port *ap); + void (*error_handler) (struct ata_port *ap); + void (*post_internal_cmd) (struct ata_queued_cmd *qc); irqreturn_t (*irq_handler)(int, void *, struct pt_regs *); void (*irq_clear) (struct ata_port *); @@ -496,18 +584,16 @@ extern void ata_port_probe(struct ata_po extern void __sata_phy_reset(struct ata_port *ap); extern void sata_phy_reset(struct ata_port *ap); extern void ata_bus_reset(struct ata_port *ap); +extern int sata_set_spd(struct ata_port *ap); extern int ata_drive_probe_reset(struct ata_port *ap, ata_probeinit_fn_t probeinit, ata_reset_fn_t softreset, ata_reset_fn_t hardreset, ata_postreset_fn_t postreset, unsigned int *classes); extern void ata_std_probeinit(struct ata_port *ap); -extern int ata_std_softreset(struct ata_port *ap, int verbose, - unsigned int *classes); -extern int sata_std_hardreset(struct ata_port *ap, int verbose, - unsigned int *class); +extern int ata_std_softreset(struct ata_port *ap, unsigned int *classes); +extern int sata_std_hardreset(struct ata_port *ap, unsigned int *class); extern void ata_std_postreset(struct ata_port *ap, unsigned int *classes); -extern int ata_dev_revalidate(struct ata_port *ap, struct ata_device *dev, - int post_reset); +extern int ata_dev_revalidate(struct ata_device *dev, int post_reset); extern void ata_port_disable(struct ata_port *); extern void ata_std_ports(struct ata_ioports *ioaddr); #ifdef CONFIG_PCI @@ -523,20 +609,27 @@ extern void ata_host_set_remove(struct a extern int ata_scsi_detect(struct scsi_host_template *sht); extern int ata_scsi_ioctl(struct scsi_device *dev, int cmd, void __user *arg); extern int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)); -extern void ata_eh_qc_complete(struct ata_queued_cmd *qc); -extern void ata_eh_qc_retry(struct ata_queued_cmd *qc); extern int ata_scsi_release(struct Scsi_Host *host); extern unsigned int ata_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc); +extern int sata_scr_valid(struct ata_port *ap); +extern int sata_scr_read(struct ata_port *ap, int reg, u32 *val); +extern int sata_scr_write(struct ata_port *ap, int reg, u32 val); +extern int sata_scr_write_flush(struct ata_port *ap, int reg, u32 val); +extern int ata_port_online(struct ata_port *ap); +extern int ata_port_offline(struct ata_port *ap); extern int ata_scsi_device_resume(struct scsi_device *); extern int ata_scsi_device_suspend(struct scsi_device *, pm_message_t state); -extern int ata_device_resume(struct ata_port *, struct ata_device *); -extern int ata_device_suspend(struct ata_port *, struct ata_device *, pm_message_t state); +extern int ata_device_resume(struct ata_device *); +extern int ata_device_suspend(struct ata_device *, pm_message_t state); extern int ata_ratelimit(void); extern unsigned int ata_busy_sleep(struct ata_port *ap, unsigned long timeout_pat, unsigned long timeout); extern void ata_port_queue_task(struct ata_port *ap, void (*fn)(void *), void *data, unsigned long delay); +extern u32 ata_wait_register(void __iomem *reg, u32 mask, u32 val, + unsigned long interval_msec, + unsigned long timeout_msec); /* * Default driver ops implementations @@ -555,6 +648,12 @@ extern int ata_port_start (struct ata_po extern void ata_port_stop (struct ata_port *ap); extern void ata_host_stop (struct ata_host_set *host_set); extern irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs); +extern void 
ata_mmio_data_xfer(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data); +extern void ata_pio_data_xfer(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data); +extern void ata_pio_data_xfer_noirq(struct ata_device *adev, unsigned char *buf, + unsigned int buflen, int write_data); extern void ata_qc_prep(struct ata_queued_cmd *qc); extern void ata_noop_qc_prep(struct ata_queued_cmd *qc); extern unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc); @@ -572,17 +671,26 @@ extern void ata_bmdma_start (struct ata_ extern void ata_bmdma_stop(struct ata_queued_cmd *qc); extern u8 ata_bmdma_status(struct ata_port *ap); extern void ata_bmdma_irq_clear(struct ata_port *ap); -extern void __ata_qc_complete(struct ata_queued_cmd *qc); -extern void ata_eng_timeout(struct ata_port *ap); -extern void ata_scsi_simulate(struct ata_port *ap, struct ata_device *dev, - struct scsi_cmnd *cmd, +extern void ata_bmdma_freeze(struct ata_port *ap); +extern void ata_bmdma_thaw(struct ata_port *ap); +extern void ata_bmdma_drive_eh(struct ata_port *ap, + ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, + ata_postreset_fn_t postreset); +extern void ata_bmdma_error_handler(struct ata_port *ap); +extern void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc); +extern void ata_qc_complete(struct ata_queued_cmd *qc); +extern int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active, + void (*finish_qc)(struct ata_queued_cmd *)); +extern void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)); extern int ata_std_bios_param(struct scsi_device *sdev, struct block_device *bdev, sector_t capacity, int geom[]); extern int ata_scsi_slave_config(struct scsi_device *sdev); -extern struct ata_device *ata_dev_pair(struct ata_port *ap, - struct ata_device *adev); +extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, + int queue_depth); +extern struct ata_device *ata_dev_pair(struct ata_device *adev); /* * Timing helpers @@ -628,7 +736,50 @@ extern int pci_test_config_bits(struct p extern unsigned long ata_pci_default_filter(const struct ata_port *, struct ata_device *, unsigned long); #endif /* CONFIG_PCI */ +/* + * EH + */ +extern void ata_eng_timeout(struct ata_port *ap); + +extern void ata_port_schedule_eh(struct ata_port *ap); +extern int ata_port_abort(struct ata_port *ap); +extern int ata_port_freeze(struct ata_port *ap); + +extern void ata_eh_freeze_port(struct ata_port *ap); +extern void ata_eh_thaw_port(struct ata_port *ap); + +extern void ata_eh_qc_complete(struct ata_queued_cmd *qc); +extern void ata_eh_qc_retry(struct ata_queued_cmd *qc); + +extern void ata_do_eh(struct ata_port *ap, ata_reset_fn_t softreset, + ata_reset_fn_t hardreset, ata_postreset_fn_t postreset); + +/* + * printk helpers + */ +#define ata_port_printk(ap, lv, fmt, args...) \ + printk(lv"ata%u: "fmt, (ap)->id , ##args) + +#define ata_dev_printk(dev, lv, fmt, args...) \ + printk(lv"ata%u.%02u: "fmt, (dev)->ap->id, (dev)->devno , ##args) + +/* + * ata_eh_info helpers + */ +#define ata_ehi_push_desc(ehi, fmt, args...) 
do { \ + (ehi)->desc_len += scnprintf((ehi)->desc + (ehi)->desc_len, \ + ATA_EH_DESC_LEN - (ehi)->desc_len, \ + fmt , ##args); \ +} while (0) + +#define ata_ehi_clear_desc(ehi) do { \ + (ehi)->desc[0] = '\0'; \ + (ehi)->desc_len = 0; \ +} while (0) +/* + * qc helpers + */ static inline int ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc) { @@ -671,14 +822,39 @@ static inline unsigned int ata_tag_valid return (tag < ATA_MAX_QUEUE) ? 1 : 0; } -static inline unsigned int ata_class_present(unsigned int class) +static inline unsigned int ata_tag_internal(unsigned int tag) +{ + return tag == ATA_MAX_QUEUE - 1; +} + +static inline unsigned int ata_class_enabled(unsigned int class) { return class == ATA_DEV_ATA || class == ATA_DEV_ATAPI; } -static inline unsigned int ata_dev_present(const struct ata_device *dev) +static inline unsigned int ata_class_disabled(unsigned int class) +{ + return class == ATA_DEV_ATA_UNSUP || class == ATA_DEV_ATAPI_UNSUP; +} + +static inline unsigned int ata_class_absent(unsigned int class) +{ + return !ata_class_enabled(class) && !ata_class_disabled(class); +} + +static inline unsigned int ata_dev_enabled(const struct ata_device *dev) +{ + return ata_class_enabled(dev->class); +} + +static inline unsigned int ata_dev_disabled(const struct ata_device *dev) +{ + return ata_class_disabled(dev->class); +} + +static inline unsigned int ata_dev_absent(const struct ata_device *dev) { - return ata_class_present(dev->class); + return ata_class_absent(dev->class); } static inline u8 ata_chk_status(struct ata_port *ap) @@ -759,20 +935,35 @@ static inline void ata_qc_set_polling(st qc->tf.ctl |= ATA_NIEN; } -static inline struct ata_queued_cmd *ata_qc_from_tag (struct ata_port *ap, - unsigned int tag) +static inline struct ata_queued_cmd *__ata_qc_from_tag(struct ata_port *ap, + unsigned int tag) { if (likely(ata_tag_valid(tag))) return &ap->qcmd[tag]; return NULL; } -static inline void ata_tf_init(struct ata_port *ap, struct ata_taskfile *tf, unsigned int device) +static inline struct ata_queued_cmd *ata_qc_from_tag(struct ata_port *ap, + unsigned int tag) +{ + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); + + if (unlikely(!qc) || !ap->ops->error_handler) + return qc; + + if ((qc->flags & (ATA_QCFLAG_ACTIVE | + ATA_QCFLAG_FAILED)) == ATA_QCFLAG_ACTIVE) + return qc; + + return NULL; +} + +static inline void ata_tf_init(struct ata_device *dev, struct ata_taskfile *tf) { memset(tf, 0, sizeof(*tf)); - tf->ctl = ap->ctl; - if (device == 0) + tf->ctl = dev->ap->ctl; + if (dev->devno == 0) tf->device = ATA_DEVICE_OBS; else tf->device = ATA_DEVICE_OBS | ATA_DEV1; @@ -787,26 +978,11 @@ static inline void ata_qc_reinit(struct qc->nbytes = qc->curbytes = 0; qc->err_mask = 0; - ata_tf_init(qc->ap, &qc->tf, qc->dev->devno); -} - -/** - * ata_qc_complete - Complete an active ATA command - * @qc: Command to complete - * @err_mask: ATA Status register contents - * - * Indicate to the mid and upper layers that an ATA - * command has completed, with either an ok or not-ok status. 
diff -puN include/linux/pci_ids.h~git-libata-all include/linux/pci_ids.h
--- devel/include/linux/pci_ids.h~git-libata-all        2006-06-09 15:21:16.000000000 -0700
+++ devel-akpm/include/linux/pci_ids.h  2006-06-09 15:21:16.000000000 -0700
@@ -1187,6 +1187,10 @@
 #define PCI_DEVICE_ID_NVIDIA_QUADRO_FX_1100    0x034E
 #define PCI_DEVICE_ID_NVIDIA_NVENET_14         0x0372
 #define PCI_DEVICE_ID_NVIDIA_NVENET_15         0x0373
+#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA 0x03E7
+#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE  0x03EC
+#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA2        0x03F6
+#define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA3        0x03F7
 
 #define PCI_VENDOR_ID_IMS              0x10e0
 #define PCI_DEVICE_ID_IMS_TT128        0x9128
@@ -1240,6 +1244,7 @@
 #define PCI_DEVICE_ID_VIA_PX8X0_0      0x0259
 #define PCI_DEVICE_ID_VIA_3269_0       0x0269
 #define PCI_DEVICE_ID_VIA_K8T800PRO_0  0x0282
+#define PCI_DEVICE_ID_VIA_3296_0       0x0296
 #define PCI_DEVICE_ID_VIA_8363_0       0x0305
 #define PCI_DEVICE_ID_VIA_P4M800CE     0x0314
 #define PCI_DEVICE_ID_VIA_8371_0       0x0391
@@ -1247,6 +1252,7 @@
 #define PCI_DEVICE_ID_VIA_82C561       0x0561
 #define PCI_DEVICE_ID_VIA_82C586_1     0x0571
 #define PCI_DEVICE_ID_VIA_82C576       0x0576
+#define PCI_DEVICE_ID_VIA_SATA_EIDE    0x0581
 #define PCI_DEVICE_ID_VIA_82C586_0     0x0586
 #define PCI_DEVICE_ID_VIA_82C596       0x0596
 #define PCI_DEVICE_ID_VIA_82C597_0     0x0597
@@ -1287,10 +1293,11 @@
 #define PCI_DEVICE_ID_VIA_8783_0       0x3208
 #define PCI_DEVICE_ID_VIA_8237         0x3227
 #define PCI_DEVICE_ID_VIA_8251         0x3287
-#define PCI_DEVICE_ID_VIA_3296_0       0x0296
+#define PCI_DEVICE_ID_VIA_8237A        0x3337
 #define PCI_DEVICE_ID_VIA_8231         0x8231
 #define PCI_DEVICE_ID_VIA_8231_4       0x8235
 #define PCI_DEVICE_ID_VIA_8365_1       0x8305
+#define PCI_DEVICE_ID_VIA_CX700        0x8324
 #define PCI_DEVICE_ID_VIA_8371_1       0x8391
 #define PCI_DEVICE_ID_VIA_82C598_1     0x8598
 #define PCI_DEVICE_ID_VIA_838X_1       0xB188
diff -puN include/scsi/scsi_cmnd.h~git-libata-all include/scsi/scsi_cmnd.h
--- devel/include/scsi/scsi_cmnd.h~git-libata-all       2006-06-09 15:21:16.000000000 -0700
+++ devel-akpm/include/scsi/scsi_cmnd.h 2006-06-09 15:21:16.000000000 -0700
@@ -151,6 +151,7 @@ extern struct scsi_cmnd *scsi_get_comman
 extern void scsi_put_command(struct scsi_cmnd *);
 extern void scsi_io_completion(struct scsi_cmnd *, unsigned int, unsigned int);
 extern void scsi_finish_command(struct scsi_cmnd *cmd);
+extern void scsi_req_abort_cmd(struct scsi_cmnd *cmd);
 extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
                                  size_t *offset, size_t *len);
diff -puN include/scsi/scsi_host.h~git-libata-all include/scsi/scsi_host.h
--- devel/include/scsi/scsi_host.h~git-libata-all       2006-06-09 15:21:16.000000000 -0700
+++ devel-akpm/include/scsi/scsi_host.h 2006-06-09 15:21:16.000000000 -0700
@@ -472,6 +472,7 @@ struct Scsi_Host {
         */
        unsigned int host_busy;            /* commands actually active on low-level */
        unsigned int host_failed;          /* commands that failed. */
+       unsigned int host_eh_scheduled;    /* EH scheduled without command */
        unsigned short host_no;  /* Used for IOCTL_GET_IDLUN, /proc/scsi et al. */
        int resetting;   /* if set, it means that last_reset is a valid value */
_
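The pci_ids.h constants added above only take effect once a driver's PCI match table references them. As a hedged illustration (not part of this patch), a SATA driver's ID table could pick up the new MCP61 defines as below; the table name example_sata_pci_tbl is hypothetical, and real entries would normally carry driver_data selecting a board type.

#include <linux/module.h>
#include <linux/pci.h>

/* Hypothetical match table, illustration only. */
static const struct pci_device_id example_sata_pci_tbl[] = {
        { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
        { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA2,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
        { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA3,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
        { }     /* terminating entry */
};
MODULE_DEVICE_TABLE(pci, example_sata_pci_tbl);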