GIT a55f9d62627636ec533de4b426b6563983476453 git://lost.foo-projects.org/~sln/linux-2.6#dma-upstream

commit a55f9d62627636ec533de4b426b6563983476453
Author: Shannon Nelson
Date:   Tue Aug 21 09:57:28 2007 -0700

    IOAT: Fix ioat_dma descriptor cache miss (in -mm)

    (we've fixed it in 2.6.23-rc3, let's get the same fix into -mm)

    The layout for struct ioat_desc_sw is non-optimal and causes an extra
    cache miss for every descriptor processed.  By tightening up the struct
    layout and removing one item, we pull in the fields that get used in
    the speedpath and get a little better performance.

    Before:
    -------
    struct ioat_desc_sw {
            struct ioat_dma_descriptor * hw;                 /*     0     8 */
            struct list_head           node;                 /*     8    16 */
            int                        tx_cnt;               /*    24     4 */

            /* XXX 4 bytes hole, try to pack */

            dma_addr_t                 src;                  /*    32     8 */
            __u32                      src_len;              /*    40     4 */

            /* XXX 4 bytes hole, try to pack */

            dma_addr_t                 dst;                  /*    48     8 */
            __u32                      dst_len;              /*    56     4 */

            /* XXX 4 bytes hole, try to pack */

            /* --- cacheline 1 boundary (64 bytes) --- */
            struct dma_async_tx_descriptor async_tx;         /*    64   144 */
            /* --- cacheline 3 boundary (192 bytes) was 16 bytes ago --- */

            /* size: 208, cachelines: 4 */
            /* sum members: 196, holes: 3, sum holes: 12 */
            /* last cacheline: 16 bytes */
    };      /* definitions: 1 */

    After:
    ------
    struct ioat_desc_sw {
            struct ioat_dma_descriptor * hw;                 /*     0     8 */
            struct list_head           node;                 /*     8    16 */
            int                        tx_cnt;               /*    24     4 */
            __u32                      len;                  /*    28     4 */
            dma_addr_t                 src;                  /*    32     8 */
            dma_addr_t                 dst;                  /*    40     8 */
            struct dma_async_tx_descriptor async_tx;         /*    48   144 */
            /* --- cacheline 3 boundary (192 bytes) --- */

            /* size: 192, cachelines: 3 */
    };      /* definitions: 1 */

    Signed-off-by: Shannon Nelson
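The effect of the reordering is easy to reproduce outside the kernel. The
stand-alone C11 sketch below uses simplified stand-in types rather than the
kernel's real structs, and assumes an LP64 target (8-byte pointers, 64-byte
cachelines); the struct and field names here are illustrative only:

    #include <stddef.h>
    #include <stdint.h>

    /* 144 bytes, 8-byte aligned; stands in for dma_async_tx_descriptor */
    struct tx_stub { uint64_t words[18]; };

    struct desc_before {                /* mirrors the "Before" layout */
            void *hw;                   /* offset  0, size  8 */
            char node[16];              /* offset  8, size 16 (list_head) */
            int tx_cnt;                 /* offset 24, size  4 -> 4-byte hole */
            uint64_t src;               /* offset 32 */
            uint32_t src_len;           /* offset 40 -> 4-byte hole */
            uint64_t dst;               /* offset 48 */
            uint32_t dst_len;           /* offset 56 -> 4-byte hole */
            struct tx_stub async_tx;    /* offset 64, spills into cacheline 3 */
    };

    struct desc_after {                 /* mirrors the "After" layout */
            void *hw;
            char node[16];
            int tx_cnt;
            uint32_t len;               /* one shared length fills the hole */
            uint64_t src;
            uint64_t dst;
            struct tx_stub async_tx;    /* offset 48, ends exactly at 192 */
    };

    _Static_assert(sizeof(struct desc_before) == 208, "3 holes -> 4 cachelines");
    _Static_assert(sizeof(struct desc_after) == 192, "no holes -> 3 cachelines");
    _Static_assert(offsetof(struct desc_after, async_tx) == 48,
                   "speedpath fields pulled into earlier cachelines");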
commit dfcd8f5674459a6bf84777cda1800dc922b71297
Author: Shannon Nelson
Date:   Tue Aug 21 09:56:59 2007 -0700

    IOAT: fix for UP use of cpu_physical_id()

    Make sure we can use cpu_physical_id() even when compiled for
    uni-processor.

    Cc: "Luck, Tony"
    Cc: Andi Kleen
    Cc: Chris Leech
    Signed-off-by: Andrew Morton
    Signed-off-by: Shannon Nelson

commit 635d007c091c494daf5d4f9867268c7fc8944685
Author: Adrian Bunk
Date:   Wed Aug 15 15:39:45 2007 -0700

    DMA engine kconfig improvements (rev2)

    This patch contains the following changes to the DMA engine menus:
    - switch to menuconfig
    - INTEL_IOATDMA must depend on X86
    - INTEL_IOATDMA must select DCA
    - device drivers shouldn't "default m"
    - DCA shouldn't be a user-visible option
    - make it clear in the INTEL_IOATDMA help text that this driver is for
      rare hardware the user most likely doesn't have
    - let DMA_ENGINE be select'ed by the DMA devices, making it less likely
      for a user to accidentally enable NET_DMA

    [dan.j.williams@intel.com: fix up Kconfig text]
    Signed-off-by: Adrian Bunk
    Signed-off-by: Dan Williams
    Signed-off-by: Shannon Nelson

commit c39c5c37e8f7e35bbc056250ff578b94dfdb478b
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:31 2007 -0700

    I/OAT: Add DCA services

    Add code to connect to the DCA driver and provide cpu tags for use by
    drivers that would like to use Direct Cache Access hints.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit 0119b5f4da374e58ca59c57b69824d8ce1dabb69
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:30 2007 -0700

    DCA: Add Direct Cache Access driver

    Direct Cache Access (DCA) is a method for warming the CPU cache before
    data is used, with the intent of lessening the impact of cache misses.
    This patch adds a manager and interface for matching up client requests
    for DCA services with devices that offer DCA services.

    In order to use DCA, a module must do bus writes with the appropriate
    tag bits set to trigger a cache read for a specific CPU.  However,
    different CPUs and chipsets can require different sets of tag bits, and
    the methods for determining the correct bits may be simple hardcoding
    or may be a hardware-specific magic incantation.  This interface is a
    way for DCA clients to find the correct tag bits for the targeted CPU
    without needing to know the specifics.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit 03f2090172cbb3121b43c551e2a184b2e083d97e
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:29 2007 -0700

    I/OAT: Add support for MSI and MSI-X

    Add support for MSI and MSI-X interrupt handling, including the ability
    to choose the desired interrupt method.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit 949d8bdb4706b0f206583f9ba0667bc88d6fd695
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:28 2007 -0700

    I/OAT: Split PCI startup from DMA handling code

    Split the general PCI startup from the DMA handling code in order to
    prepare for adding support for DCA services and future versions of the
    ioatdma device.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit 79804ed77e65deb57b8551159e28a6e001e3d1a5
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:27 2007 -0700

    I/OAT: code cleanup from checkpatch output

    Take care of a bunch of little code nits in the ioatdma files.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit 4d043005f5c868b022e7687755d87f4882e5bab7
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:26 2007 -0700

    I/OAT: Rename the source file

    Rename the ioatdma.c file in preparation for splitting it into multiple
    files, which will make it easier to add new functionality.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller

commit c5072b0333e5cddba58fea3ebc7812ecb267f1fa
Author: Shannon Nelson
Date:   Tue Aug 14 11:09:25 2007 -0700

    I/OAT: New device ids

    Add device ids for new revs of the Intel I/OAT DMA engine.

    Signed-off-by: Shannon Nelson
    Acked-by: David S. Miller
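For reviewers who want to see how the two halves of the new interface meet,
here is a minimal, hypothetical DCA client sketch built only on the requester
API declared in include/linux/dca.h further down in this patch. The
my_driver_* names and the error-handling policy are invented for illustration
and are not part of the series itself:

    #include <linux/dca.h>
    #include <linux/pci.h>

    /* Hypothetical client: register as a DCA requester and fetch a tag. */
    static int my_driver_enable_dca(struct pci_dev *pdev, int cpu)
    {
            int err;
            u8 tag;

            /* attach this device to the (single, global) DCA provider */
            err = dca_add_requester(&pdev->dev);
            if (err)
                    return err;     /* -ENODEV when no provider is registered */

            /* ask the provider which tag bits target this CPU's cache */
            tag = dca_get_tag(cpu);

            /*
             * A real driver would now program 'tag' into its device so that
             * its bus writes carry the prefetch hint for the targeted CPU.
             */
            dev_info(&pdev->dev, "using DCA tag 0x%02x for cpu %d\n", tag, cpu);
            return 0;
    }

    static void my_driver_disable_dca(struct pci_dev *pdev)
    {
            dca_remove_requester(&pdev->dev);
    }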
 drivers/Kconfig                 |    2 
 drivers/Makefile                |    1 
 drivers/dca/Kconfig             |    8 
 drivers/dca/Makefile            |    2 
 drivers/dca/dca-core.c          |  170 ++++++++
 drivers/dca/dca-sysfs.c         |   88 ++++
 drivers/dma/Kconfig             |   60 ++-
 drivers/dma/Makefile            |    1 
 drivers/dma/ioat.c              |  196 +++++++++
 drivers/dma/ioat_dca.c          |  267 +++++++++++++
 drivers/dma/ioat_dma.c          |  Unmerged
 drivers/dma/ioatdma.c           |  827 ---------------------------------------
 drivers/dma/ioatdma.h           |   30 +
 drivers/dma/ioatdma_hw.h        |    2 
 drivers/dma/ioatdma_registers.h |    6 
 include/linux/dca.h             |   47 ++
 include/linux/pci_ids.h         |    2 
 17 files changed, 851 insertions(+), 858 deletions(-)

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 3e1c442..b447e60 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -82,6 +82,8 @@ source "drivers/rtc/Kconfig"
 
 source "drivers/dma/Kconfig"
 
+source "drivers/dca/Kconfig"
+
 source "drivers/auxdisplay/Kconfig"
 
 source "drivers/kvm/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index f0878b2..dbd9fa5 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -85,6 +85,7 @@ obj-$(CONFIG_CRYPTO)	+= crypto/
 obj-$(CONFIG_SUPERH)		+= sh/
 obj-$(CONFIG_GENERIC_TIME)	+= clocksource/
 obj-$(CONFIG_DMA_ENGINE)	+= dma/
+obj-$(CONFIG_DCA)		+= dca/
 obj-$(CONFIG_HID)		+= hid/
 obj-$(CONFIG_PPC_PS3)		+= ps3/
 obj-$(CONFIG_OF)		+= of/
diff --git a/drivers/dca/Kconfig b/drivers/dca/Kconfig
new file mode 100644
index 0000000..c2ed77c
--- /dev/null
+++ b/drivers/dca/Kconfig
@@ -0,0 +1,8 @@
+#
+# DCA server configuration
+#
+
+config DCA
+	tristate
+
+
diff --git a/drivers/dca/Makefile b/drivers/dca/Makefile
new file mode 100644
index 0000000..b2db56b
--- /dev/null
+++ b/drivers/dca/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_DCA) += dca.o
+dca-objs := dca-core.o dca-sysfs.o
diff --git a/drivers/dca/dca-core.c b/drivers/dca/dca-core.c
new file mode 100644
index 0000000..864f469
--- /dev/null
+++ b/drivers/dca/dca-core.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright(c) 2007 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/*
+ * This driver supports an interface for DCA clients and providers to meet.
+ */
+
+#include <linux/kernel.h>
+#include <linux/notifier.h>
+#include <linux/device.h>
+#include <linux/dca.h>
+
+MODULE_LICENSE("GPL");
+
+/* For now we're assuming a single, global, DCA provider for the system. */
+
+static spinlock_t dca_lock;
+
+struct dca_provider *global_dca = NULL;
+
+int dca_add_requester(struct device *dev)
+{
+	int err, slot;
+
+	if (!global_dca)
+		return -ENODEV;
+
+	spin_lock(&dca_lock);
+	slot = global_dca->ops->add_requester(global_dca, dev);
+	spin_unlock(&dca_lock);
+	if (slot < 0)
+		return slot;
+
+	err = dca_sysfs_add_req(global_dca, dev, slot);
+	if (err) {
+		spin_lock(&dca_lock);
+		global_dca->ops->remove_requester(global_dca, dev);
+		spin_unlock(&dca_lock);
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(dca_add_requester);
+
+int dca_remove_requester(struct device *dev)
+{
+	int slot;
+	if (!global_dca)
+		return -ENODEV;
+
+	spin_lock(&dca_lock);
+	slot = global_dca->ops->remove_requester(global_dca, dev);
+	spin_unlock(&dca_lock);
+	if (slot < 0)
+		return slot;
+
+	dca_sysfs_remove_req(global_dca, slot);
+	return 0;
+}
+EXPORT_SYMBOL(dca_remove_requester);
+
+u8 dca_get_tag(int cpu)
+{
+	if (!global_dca)
+		return -ENODEV;
+	return global_dca->ops->get_tag(global_dca, cpu);
+}
+EXPORT_SYMBOL(dca_get_tag);
+
+struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size)
+{
+	struct dca_provider *dca;
+	int alloc_size;
+
+	alloc_size = (sizeof(*dca) + priv_size);
+	dca = kzalloc(alloc_size, GFP_KERNEL);
+	if (!dca)
+		return NULL;
+	dca->ops = ops;
+
+	return dca;
+}
+EXPORT_SYMBOL(alloc_dca_provider);
+
+void free_dca_provider(struct dca_provider *dca)
+{
+	kfree(dca);
+}
+EXPORT_SYMBOL(free_dca_provider);
+
+static BLOCKING_NOTIFIER_HEAD(dca_provider_chain);
+
+int register_dca_provider(struct dca_provider *dca, struct device *dev)
+{
+	int err;
+
+	if (global_dca)
+		return -EEXIST;
+	err = dca_sysfs_add_provider(dca, dev);
+	if (err)
+		return err;
+	global_dca = dca;
+	blocking_notifier_call_chain(&dca_provider_chain,
+				     DCA_PROVIDER_ADD, NULL);
+	return 0;
+}
+EXPORT_SYMBOL(register_dca_provider);
+
+void unregister_dca_provider(struct dca_provider *dca)
+{
+	if (!global_dca)
+		return;
+	blocking_notifier_call_chain(&dca_provider_chain,
+				     DCA_PROVIDER_REMOVE, NULL);
+	global_dca = NULL;
+	dca_sysfs_remove_provider(dca);
+}
+EXPORT_SYMBOL(unregister_dca_provider);
+
+void dca_register_notify(struct notifier_block *nb)
+{
+	blocking_notifier_chain_register(&dca_provider_chain, nb);
+}
+EXPORT_SYMBOL(dca_register_notify);
+
+void dca_unregister_notify(struct notifier_block *nb)
+{
+	blocking_notifier_chain_unregister(&dca_provider_chain, nb);
+}
+EXPORT_SYMBOL(dca_unregister_notify);
+
+static int __init dca_init(void)
+{
+	int err;
+
+	spin_lock_init(&dca_lock);
+
+	err = dca_sysfs_init();
+	if (err)
+		return err;
+	return 0;
+}
+
+static void __exit dca_exit(void)
+{
+	dca_sysfs_exit();
+}
+
+module_init(dca_init);
+module_exit(dca_exit);
+
diff --git a/drivers/dca/dca-sysfs.c b/drivers/dca/dca-sysfs.c
new file mode 100644
index 0000000..24a263b
--- /dev/null
+++ b/drivers/dca/dca-sysfs.c
@@ -0,0 +1,88 @@
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/kdev_t.h>
+#include <linux/err.h>
+#include <linux/dca.h>
+
+static struct class *dca_class;
+static struct idr dca_idr;
+static spinlock_t dca_idr_lock;
+
+int dca_sysfs_add_req(struct dca_provider *dca, struct device *dev, int slot)
+{
+	struct class_device *cd;
+
+	cd = class_device_create(dca_class, dca->cd, MKDEV(0, slot + 1),
+				 dev, "requester%d", slot);
+	if (IS_ERR(cd))
+		return PTR_ERR(cd);
+	return 0;
+}
+
+void dca_sysfs_remove_req(struct dca_provider *dca, int slot)
+{
+	class_device_destroy(dca_class, MKDEV(0, slot + 1));
+}
+
+int dca_sysfs_add_provider(struct dca_provider *dca, struct device *dev)
+{
+	struct class_device *cd;
+	int err = 0;
+
+idr_try_again:
+	if (!idr_pre_get(&dca_idr, GFP_KERNEL))
+		return -ENOMEM;
+	spin_lock(&dca_idr_lock);
+	err = idr_get_new(&dca_idr, dca, &dca->id);
+	spin_unlock(&dca_idr_lock);
+	switch (err) {
+	case 0:
+		break;
+	case -EAGAIN:
+		goto idr_try_again;
+	default:
+		return err;
+	}
+
+	cd = class_device_create(dca_class, NULL, MKDEV(0, 0),
+				 dev, "dca%d", dca->id);
+	if (IS_ERR(cd)) {
+		spin_lock(&dca_idr_lock);
+		idr_remove(&dca_idr, dca->id);
+		spin_unlock(&dca_idr_lock);
+		return PTR_ERR(cd);
+	}
+	dca->cd = cd;
+	return 0;
+}
+
+void dca_sysfs_remove_provider(struct dca_provider *dca)
+{
+	class_device_unregister(dca->cd);
+	dca->cd = NULL;
+	spin_lock(&dca_idr_lock);
+	idr_remove(&dca_idr, dca->id);
+	spin_unlock(&dca_idr_lock);
+}
+
+int __init dca_sysfs_init(void)
+{
+	idr_init(&dca_idr);
+	spin_lock_init(&dca_idr_lock);
+
+	dca_class = class_create(THIS_MODULE, "dca");
+	if (IS_ERR(dca_class)) {
+		idr_destroy(&dca_idr);
+		return PTR_ERR(dca_class);
+	}
+	return 0;
+}
+
+void __exit dca_sysfs_exit(void)
+{
+	class_destroy(dca_class);
+	idr_destroy(&dca_idr);
+}
+
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 8f670da..955a99b 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -2,42 +2,52 @@
 #
 # DMA engine configuration
 #
-menu "DMA Engine support"
-	depends on HAS_DMA
+menuconfig DMADEVICES
+	bool "DMA Offload Engine support"
+	depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX
+	help
+	  Intel(R) offload engines enable offloading memory copies in the
+	  network stack and RAID operations in the MD driver.
+
+if DMADEVICES
+
+comment "DMA Devices"
+
+config INTEL_IOATDMA
+	tristate "Intel I/OAT DMA support"
+	depends on PCI && X86
+	select DMA_ENGINE
+	select DCA
+	help
+	  Enable support for the Intel(R) I/OAT DMA engine present
+	  in recent chipsets.
+
+	  Say Y here if you have such a chipset.
+
+	  If unsure, say N.
+
+config INTEL_IOP_ADMA
+	tristate "Intel IOP ADMA support"
+	depends on ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX
+	select ASYNC_CORE
+	select DMA_ENGINE
+	help
+	  Enable support for the Intel(R) IOP Series RAID engines.
 
 config DMA_ENGINE
-	bool "Support for DMA engines"
-	---help---
-	  DMA engines offload bulk memory operations from the CPU to dedicated
-	  hardware, allowing the operations to happen asynchronously.
+	bool
 
 comment "DMA Clients"
+	depends on DMA_ENGINE
 
 config NET_DMA
 	bool "Network: TCP receive copy offload"
 	depends on DMA_ENGINE && NET
 	default y
-	---help---
+	help
 	  This enables the use of DMA engines in the network stack to
 	  offload receive copy-to-user operations, freeing CPU cycles.
 	  Since this is the main user of the DMA engine, it should be
 	  enabled; say Y here.
 
-comment "DMA Devices"
-
-config INTEL_IOATDMA
-	tristate "Intel I/OAT DMA support"
-	depends on DMA_ENGINE && PCI
-	default m
-	---help---
-	  Enable support for the Intel(R) I/OAT DMA engine.
-
-config INTEL_IOP_ADMA
-	tristate "Intel IOP ADMA support"
-	depends on DMA_ENGINE && (ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX)
-	select ASYNC_CORE
-	default m
-	---help---
-	  Enable support for the Intel(R) IOP Series RAID engines.
-
-endmenu
+endif
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index b3839b6..b152cd8 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
 obj-$(CONFIG_NET_DMA) += iovlock.o
 obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
+ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o
 obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
diff --git a/drivers/dma/ioat.c b/drivers/dma/ioat.c
new file mode 100644
index 0000000..8ae8c53
--- /dev/null
+++ b/drivers/dma/ioat.c
@@ -0,0 +1,196 @@
+/*
+ * Intel I/OAT DMA Linux driver
+ * Copyright(c) 2007 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ */
+
+/*
+ * This driver supports an Intel I/OAT DMA engine, which does asynchronous
+ * copy operations.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/dca.h>
+#include "ioatdma.h"
+#include "ioatdma_registers.h"
+#include "ioatdma_hw.h"
+
+MODULE_VERSION("1.24");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+
+static struct pci_device_id ioat_pci_tbl[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SCNB) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS, PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
+	{ 0, }
+};
+
+struct ioat_device {
+	struct pci_dev		*pdev;
+	void __iomem		*iobase;
+	struct ioatdma_device	*dma;
+	struct dca_provider	*dca;
+};
+
+static int __devinit ioat_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id);
+static void __devexit ioat_remove(struct pci_dev *pdev);
+
+static int ioat_setup_functionality(struct pci_dev *pdev, void __iomem *iobase)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+	u8 version;
+	int err = 0;
+
+	version = readb(iobase + IOAT_VER_OFFSET);
+	switch (version) {
+	case IOAT_VER_1_2:
+		device->dma = ioat_dma_probe(pdev, iobase);
+		device->dca = ioat_dca_init(pdev, iobase);
+		break;
+	default:
+		err = -ENODEV;
+		break;
+	}
+	return err;
+}
+
+static void ioat_shutdown_functionality(struct pci_dev *pdev)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+
+	if (device->dma) {
+		ioat_dma_remove(device->dma);
+		device->dma = NULL;
+	}
+
+	if (device->dca) {
+		unregister_dca_provider(device->dca);
+		free_dca_provider(device->dca);
+		device->dca = NULL;
+	}
+
+}
+
+static struct pci_driver ioat_pci_drv = {
+	.name		= "ioatdma",
+	.id_table	= ioat_pci_tbl,
+	.probe		= ioat_probe,
+	.shutdown	= ioat_shutdown_functionality,
+	.remove		= __devexit_p(ioat_remove),
+};
+
+static int __devinit ioat_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id)
+{
+	void __iomem *iobase;
+	struct ioat_device *device;
+	unsigned long mmio_start, mmio_len;
+	int err;
+
+	err = pci_enable_device(pdev);
+	if (err)
+		goto err_enable_device;
+
+	err = pci_request_regions(pdev, ioat_pci_drv.name);
+	if (err)
+		goto err_request_regions;
+
+	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
+	if (err)
+		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+	if (err)
+		goto err_set_dma_mask;
+
+	err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
+	if (err)
+		err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
+	if (err)
+		goto err_set_dma_mask;
+
+	mmio_start = pci_resource_start(pdev, 0);
+	mmio_len = pci_resource_len(pdev, 0);
+	iobase = ioremap(mmio_start, mmio_len);
+	if (!iobase) {
+		err = -ENOMEM;
+		goto err_ioremap;
+	}
+
+	device = kzalloc(sizeof(*device), GFP_KERNEL);
+	if (!device) {
+		err = -ENOMEM;
+		goto err_kzalloc;
+	}
+	device->pdev = pdev;
+	pci_set_drvdata(pdev, device);
+	device->iobase = iobase;
+
+	pci_set_master(pdev);
+
+	err = ioat_setup_functionality(pdev, iobase);
+	if (err)
+		goto err_version;
+
+	return 0;
+
+err_version:
+	kfree(device);
+err_kzalloc:
+	iounmap(iobase);
+err_ioremap:
+err_set_dma_mask:
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+err_request_regions:
+err_enable_device:
+	return err;
+}
+
+static void __devexit ioat_remove(struct pci_dev *pdev)
+{
+	struct ioat_device *device = pci_get_drvdata(pdev);
+
+	ioat_shutdown_functionality(pdev);
+
+	kfree(device);
+
+	iounmap(device->iobase);
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
+}
+
+static int __init ioat_init_module(void)
+{
+	/* it's currently unsafe to unload this module */
+	/* if forced, worst case is that rmmod hangs */
+	__unsafe(THIS_MODULE);
+
+	return pci_register_driver(&ioat_pci_drv);
+}
+module_init(ioat_init_module);
+
+static void __exit ioat_exit_module(void)
+{
+	pci_unregister_driver(&ioat_pci_drv);
+}
+module_exit(ioat_exit_module);
diff --git a/drivers/dma/ioat_dca.c b/drivers/dma/ioat_dca.c
new file mode 100644
index 0000000..b1af49c
--- /dev/null
+++ b/drivers/dma/ioat_dca.c
@@ -0,0 +1,267 @@
+/*
+ * Intel I/OAT DMA Linux driver
+ * Copyright(c) 2007 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/smp.h>
+#include <linux/interrupt.h>
+#include <linux/dca.h>
+
+// either a kernel change is needed, or we need something like this in kernel
+#ifndef CONFIG_SMP
+#include <asm/smp.h>
+#undef cpu_physical_id
+#define cpu_physical_id(cpu) (cpuid_ebx(1) >> 24)
+#endif
+
+#include "ioatdma.h"
+#include "ioatdma_registers.h"
+
+/* THIS STUFF NEEDS TO LIVE SOMEWHERE ELSE */
+#define X86_FEATURE_DCA	(4*32+18) /* Direct Cache Access */
+/* / THIS STUFF NEEDS TO LIVE SOMEWHERE ELSE */
+
+/*
+ * Bit 7 of a tag map entry is the "valid" bit, if it is set then bits 0:6
+ * contain the bit number of the APIC ID to map into the DCA tag.  If the valid
+ * bit is not set, then the value must be 0 or 1 and defines the bit in the tag.
+ */
+#define DCA_TAG_MAP_VALID 0x80
+
+/*
+ * "Legacy" DCA systems do not implement the DCA register set in the
+ * I/OAT device.  Software needs direct support for their tag mappings.
+ */
+
+#define APICID_BIT(x)		(DCA_TAG_MAP_VALID | (x))
+#define IOAT_TAG_MAP_LEN	8
+
+static u8 ioat_tag_map_BNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(2), APICID_BIT(2), };
+static u8 ioat_tag_map_SCNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(2), APICID_BIT(2), };
+static u8 ioat_tag_map_CNB[IOAT_TAG_MAP_LEN] = {
+	1, APICID_BIT(1), APICID_BIT(3), APICID_BIT(4), APICID_BIT(2), };
+static u8 ioat_tag_map_UNISYS[IOAT_TAG_MAP_LEN] = { 0 };
+
+/* pack PCI B/D/F into a u16 */
+static inline u16 dcaid_from_pcidev(struct pci_dev *pci)
+{
+	return (pci->bus->number << 8) | pci->devfn;
+}
+
+static int dca_enabled_in_bios(void)
+{
+	/* CPUID level 9 returns DCA configuration */
+	/* Bit 0 indicates DCA enabled by the BIOS */
+	unsigned long cpuid_level_9;
+	int res;
+
+	cpuid_level_9 = cpuid_eax(9);
+	res = test_bit(0, &cpuid_level_9);
+	if (!res)
+		printk(KERN_ERR "ioat dma: DCA is disabled in BIOS\n");
+
+	return res;
+}
+
+static int system_has_dca_enabled(void)
+{
+	if (boot_cpu_has(X86_FEATURE_DCA))
+		return dca_enabled_in_bios();
+
+	printk(KERN_ERR "ioat dma: boot cpu doesn't have X86_FEATURE_DCA\n");
+	return 0;
+}
+
+struct ioat_dca_slot {
+	struct pci_dev *pdev;	/* requester device */
+	u16 rid;		/* requester id, as used by IOAT */
+};
+
+#define IOAT_DCA_MAX_REQ 6
+
+struct ioat_dca_priv {
+	void __iomem		*iobase;
+	void			*dca_base;
+	int			 max_requesters;
+	int			 requester_count;
+	u8			 tag_map[IOAT_TAG_MAP_LEN];
+	struct ioat_dca_slot	 req_slots[0];
+};
+
+/* 5000 series chipset DCA Port Requester ID Table Entry Format
+ * [15:8]	PCI-Express Bus Number
+ * [7:3]	PCI-Express Device Number
+ * [2:0]	PCI-Express Function Number
+ *
+ * 5000 series chipset DCA control register format
+ * [7:1]	Reserved (0)
+ * [0]		Ignore Function Number
+ */
+
+static int ioat_dca_add_requester(struct dca_provider *dca, struct device *dev)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	struct pci_dev *pdev;
+	int i;
+	u16 id;
+
+	/* This implementation only supports PCI-Express */
+	if (dev->bus != &pci_bus_type)
+		return -ENODEV;
+	pdev = to_pci_dev(dev);
+	id = dcaid_from_pcidev(pdev);
+
+	if (ioatdca->requester_count == ioatdca->max_requesters)
+		return -ENODEV;
+
+	for (i = 0; i < ioatdca->max_requesters; i++) {
+		if (ioatdca->req_slots[i].pdev == NULL) {
+			/* found an empty slot */
+			ioatdca->requester_count++;
+			ioatdca->req_slots[i].pdev = pdev;
+			ioatdca->req_slots[i].rid = id;
+			writew(id, ioatdca->dca_base + (i * 4));
+			/* make sure the ignore function bit is off */
+			writeb(0, ioatdca->dca_base + (i * 4) + 2);
+			return i;
+		}
+	}
+	/* Error, ioatdma->requester_count is out of whack */
+	return -EFAULT;
+}
+
+static int ioat_dca_remove_requester(struct dca_provider *dca,
+				     struct device *dev)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	struct pci_dev *pdev;
+	int i;
+
+	/* This implementation only supports PCI-Express */
+	if (dev->bus != &pci_bus_type)
+		return -ENODEV;
+	pdev = to_pci_dev(dev);
+
+	for (i = 0; i < ioatdca->max_requesters; i++) {
+		if (ioatdca->req_slots[i].pdev == pdev) {
+			writew(0, ioatdca->dca_base + (i * 4));
+			ioatdca->req_slots[i].pdev = NULL;
+			ioatdca->req_slots[i].rid = 0;
+			ioatdca->requester_count--;
+			return i;
+		}
+	}
+	return -ENODEV;
+}
+
+static u8 ioat_dca_get_tag(struct dca_provider *dca, int cpu)
+{
+	struct ioat_dca_priv *ioatdca = dca_priv(dca);
+	int i, apic_id, bit, value;
+	u8 entry, tag;
+
+	tag = 0;
+	apic_id = cpu_physical_id(cpu);
+
+	for (i = 0; i < IOAT_TAG_MAP_LEN; i++) {
+		entry = ioatdca->tag_map[i];
+		if (entry & DCA_TAG_MAP_VALID) {
+			bit = entry & ~DCA_TAG_MAP_VALID;
+			value = (apic_id & (1 << bit)) ? 1 : 0;
+		} else {
+			value = entry ? 1 : 0;
+		}
+		tag |= (value << i);
+	}
+	return tag;
+}
+
+static struct dca_ops ioat_dca_ops = {
+	.add_requester		= ioat_dca_add_requester,
+	.remove_requester	= ioat_dca_remove_requester,
+	.get_tag		= ioat_dca_get_tag,
+};
+
+
+struct dca_provider *ioat_dca_init(struct pci_dev *pdev, void __iomem *iobase)
+{
+	struct dca_provider *dca;
+	struct ioat_dca_priv *ioatdca;
+	u8 *tag_map = NULL;
+	int i;
+	int err;
+
+	if (!system_has_dca_enabled())
+		return NULL;
+
+	/* I/OAT v1 systems must have a known tag_map to support DCA */
+	switch (pdev->vendor) {
+	case PCI_VENDOR_ID_INTEL:
+		switch (pdev->device) {
+		case PCI_DEVICE_ID_INTEL_IOAT:
+			tag_map = ioat_tag_map_BNB;
+			break;
+		case PCI_DEVICE_ID_INTEL_IOAT_CNB:
+			tag_map = ioat_tag_map_CNB;
+			break;
+		case PCI_DEVICE_ID_INTEL_IOAT_SCNB:
+			tag_map = ioat_tag_map_SCNB;
+			break;
+		}
+		break;
+	case PCI_VENDOR_ID_UNISYS:
+		switch (pdev->device) {
+		case PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR:
+			tag_map = ioat_tag_map_UNISYS;
+			break;
+		}
+		break;
+	}
+	if (tag_map == NULL)
+		return NULL;
+
+	dca = alloc_dca_provider(&ioat_dca_ops,
+				 sizeof(*ioatdca) +
+				 (sizeof(struct ioat_dca_slot) * IOAT_DCA_MAX_REQ));
+	if (!dca)
+		return NULL;
+
+	ioatdca = dca_priv(dca);
+	ioatdca->max_requesters = IOAT_DCA_MAX_REQ;
+
+	ioatdca->dca_base = iobase + 0x54;
+
+	/* copy over the APIC ID to DCA tag mapping */
+	for (i = 0; i < IOAT_TAG_MAP_LEN; i++)
+		ioatdca->tag_map[i] = tag_map[i];
+
+	err = register_dca_provider(dca, &pdev->dev);
+	if (err) {
+		free_dca_provider(dca);
+		return NULL;
+	}
+
+	return dca;
+}
* Unmerged path drivers/dma/ioat_dma.c
diff --git a/drivers/dma/ioatdma.c b/drivers/dma/ioatdma.c
deleted file mode 100644
index 2d1f178..0000000
--- a/drivers/dma/ioatdma.c
+++ /dev/null
@@ -1,827 +0,0 @@
-/*
- * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59
- * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
- *
- * The full GNU General Public License is included in this distribution in the
- * file called COPYING.
- */
-
-/*
- * This driver supports an Intel I/OAT DMA engine, which does asynchronous
- * copy operations.
- */
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/dmaengine.h>
-#include <linux/delay.h>
-#include <linux/dma-mapping.h>
-#include "ioatdma.h"
-#include "ioatdma_registers.h"
-#include "ioatdma_hw.h"
-
-#define to_ioat_chan(chan) container_of(chan, struct ioat_dma_chan, common)
-#define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
-#define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
-#define tx_to_ioat_desc(tx) container_of(tx, struct ioat_desc_sw, async_tx)
-
-/* internal functions */
-static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
-static void ioat_shutdown(struct pci_dev *pdev);
-static void __devexit ioat_remove(struct pci_dev *pdev);
-
-static int enumerate_dma_channels(struct ioat_device *device)
-{
-	u8 xfercap_scale;
-	u32 xfercap;
-	int i;
-	struct ioat_dma_chan *ioat_chan;
-
-	device->common.chancnt = readb(device->reg_base + IOAT_CHANCNT_OFFSET);
-	xfercap_scale = readb(device->reg_base + IOAT_XFERCAP_OFFSET);
-	xfercap = (xfercap_scale == 0 ? -1 : (1UL << xfercap_scale));
-
-	for (i = 0; i < device->common.chancnt; i++) {
-		ioat_chan = kzalloc(sizeof(*ioat_chan), GFP_KERNEL);
-		if (!ioat_chan) {
-			device->common.chancnt = i;
-			break;
-		}
-
-		ioat_chan->device = device;
-		ioat_chan->reg_base = device->reg_base + (0x80 * (i + 1));
-		ioat_chan->xfercap = xfercap;
-		spin_lock_init(&ioat_chan->cleanup_lock);
-		spin_lock_init(&ioat_chan->desc_lock);
-		INIT_LIST_HEAD(&ioat_chan->free_desc);
-		INIT_LIST_HEAD(&ioat_chan->used_desc);
-		/* This should be made common somewhere in dmaengine.c */
-		ioat_chan->common.device = &device->common;
-		list_add_tail(&ioat_chan->common.device_node,
-			      &device->common.channels);
-	}
-	return device->common.chancnt;
-}
-
-static void
-ioat_set_src(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
-{
-	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-
-	pci_unmap_addr_set(desc, src, addr);
-
-	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
-		iter->hw->src_addr = addr;
-		addr += ioat_chan->xfercap;
-	}
-
-}
-
-static void
-ioat_set_dest(dma_addr_t addr, struct dma_async_tx_descriptor *tx, int index)
-{
-	struct ioat_desc_sw *iter, *desc = tx_to_ioat_desc(tx);
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-
-	pci_unmap_addr_set(desc, dst, addr);
-
-	list_for_each_entry(iter, &desc->async_tx.tx_list, node) {
-		iter->hw->dst_addr = addr;
-		addr += ioat_chan->xfercap;
-	}
-}
-
-static dma_cookie_t
-ioat_tx_submit(struct dma_async_tx_descriptor *tx)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(tx->chan);
-	struct ioat_desc_sw *desc = tx_to_ioat_desc(tx);
-	int append = 0;
-	dma_cookie_t cookie;
-	struct ioat_desc_sw *group_start;
-
-	group_start = list_entry(desc->async_tx.tx_list.next,
-				 struct ioat_desc_sw, node);
-	spin_lock_bh(&ioat_chan->desc_lock);
-	/* cookie incr and addition to used_list must be atomic */
-	cookie = ioat_chan->common.cookie;
-	cookie++;
-	if (cookie < 0)
-		cookie = 1;
-	ioat_chan->common.cookie = desc->async_tx.cookie = cookie;
-
-	/* write address into NextDescriptor field of last desc in chain */
-	to_ioat_desc(ioat_chan->used_desc.prev)->hw->next =
-		group_start->async_tx.phys;
-	list_splice_init(&desc->async_tx.tx_list, ioat_chan->used_desc.prev);
-
-	ioat_chan->pending += desc->tx_cnt;
-	if (ioat_chan->pending >= 4) {
-		append = 1;
-		ioat_chan->pending = 0;
-	}
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	if (append)
-		writeb(IOAT_CHANCMD_APPEND,
-		       ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-
-	return cookie;
-}
-
-static struct ioat_desc_sw *ioat_dma_alloc_descriptor(
-	struct ioat_dma_chan *ioat_chan,
-	gfp_t flags)
-{
-	struct ioat_dma_descriptor *desc;
-	struct ioat_desc_sw *desc_sw;
-	struct ioat_device *ioat_device;
-	dma_addr_t phys;
-
-	ioat_device = to_ioat_device(ioat_chan->common.device);
-	desc = pci_pool_alloc(ioat_device->dma_pool, flags, &phys);
-	if (unlikely(!desc))
-		return NULL;
-
-	desc_sw = kzalloc(sizeof(*desc_sw), flags);
-	if (unlikely(!desc_sw)) {
-		pci_pool_free(ioat_device->dma_pool, desc, phys);
-		return NULL;
-	}
-
-	memset(desc, 0, sizeof(*desc));
-	dma_async_tx_descriptor_init(&desc_sw->async_tx, &ioat_chan->common);
-	desc_sw->async_tx.tx_set_src = ioat_set_src;
-	desc_sw->async_tx.tx_set_dest = ioat_set_dest;
-	desc_sw->async_tx.tx_submit = ioat_tx_submit;
-	INIT_LIST_HEAD(&desc_sw->async_tx.tx_list);
-	desc_sw->hw = desc;
-	desc_sw->async_tx.phys = phys;
-
-	return desc_sw;
-}
-
-#define INITIAL_IOAT_DESC_COUNT 128
-
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan);
-
-/* returns the actual number of allocated descriptors */
-static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_desc_sw *desc = NULL;
-	u16 chanctrl;
-	u32 chanerr;
-	int i;
-	LIST_HEAD(tmp_list);
-
-	/*
-	 * In-use bit automatically set by reading chanctrl
-	 * If 0, we got it, if 1, someone else did
-	 */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	if (chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE)
-		return -EBUSY;
-
-	/* Setup register to interrupt and write completion status on error */
-	chanctrl = IOAT_CHANCTRL_CHANNEL_IN_USE |
-		IOAT_CHANCTRL_ERR_INT_EN |
-		IOAT_CHANCTRL_ANY_ERR_ABORT_EN |
-		IOAT_CHANCTRL_ERR_COMPLETION_EN;
-	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-
-	chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
-	if (chanerr) {
-		printk("IOAT: CHANERR = %x, clearing\n", chanerr);
-		writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
-	}
-
-	/* Allocate descriptors */
-	for (i = 0; i < INITIAL_IOAT_DESC_COUNT; i++) {
-		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
-		if (!desc) {
-			printk(KERN_ERR "IOAT: Only %d initial descriptors\n", i);
-			break;
-		}
-		list_add_tail(&desc->node, &tmp_list);
-	}
-	spin_lock_bh(&ioat_chan->desc_lock);
-	list_splice(&tmp_list, &ioat_chan->free_desc);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	/* allocate a completion writeback area */
-	/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
-	ioat_chan->completion_virt =
-		pci_pool_alloc(ioat_chan->device->completion_pool,
-			       GFP_KERNEL,
-			       &ioat_chan->completion_addr);
-	memset(ioat_chan->completion_virt, 0,
-	       sizeof(*ioat_chan->completion_virt));
-	writel(((u64) ioat_chan->completion_addr) & 0x00000000FFFFFFFF,
-	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
-	writel(((u64) ioat_chan->completion_addr) >> 32,
-	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
-
-	ioat_start_null_desc(ioat_chan);
-	return i;
-}
-
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *ioat_chan);
-
-static void ioat_dma_free_chan_resources(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_device *ioat_device = to_ioat_device(chan->device);
-	struct ioat_desc_sw *desc, *_desc;
-	u16 chanctrl;
-	int in_use_descs = 0;
-
-	ioat_dma_memcpy_cleanup(ioat_chan);
-
-	writeb(IOAT_CHANCMD_RESET, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &ioat_chan->used_desc, node) {
-		in_use_descs++;
-		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
-			      desc->async_tx.phys);
-		kfree(desc);
-	}
-	list_for_each_entry_safe(desc, _desc, &ioat_chan->free_desc, node) {
-		list_del(&desc->node);
-		pci_pool_free(ioat_device->dma_pool, desc->hw,
-			      desc->async_tx.phys);
-		kfree(desc);
-	}
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	pci_pool_free(ioat_device->completion_pool,
-		      ioat_chan->completion_virt,
-		      ioat_chan->completion_addr);
-
-	/* one is ok since we left it on there on purpose */
-	if (in_use_descs > 1)
-		printk(KERN_ERR "IOAT: Freeing %d in use descriptors!\n",
-		       in_use_descs - 1);
-
-	ioat_chan->last_completion = ioat_chan->completion_addr = 0;
-
-	/* Tell hw the chan is free */
-	chanctrl = readw(ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-	chanctrl &= ~IOAT_CHANCTRL_CHANNEL_IN_USE;
-	writew(chanctrl, ioat_chan->reg_base + IOAT_CHANCTRL_OFFSET);
-}
-
-static struct dma_async_tx_descriptor *
-ioat_dma_prep_memcpy(struct dma_chan *chan, size_t len, int int_en)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	struct ioat_desc_sw *first, *prev, *new;
-	LIST_HEAD(new_chain);
-	u32 copy;
-	size_t orig_len;
-	int desc_count = 0;
-
-	if (!len)
-		return NULL;
-
-	orig_len = len;
-
-	first = NULL;
-	prev = NULL;
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-	while (len) {
-		if (!list_empty(&ioat_chan->free_desc)) {
-			new = to_ioat_desc(ioat_chan->free_desc.next);
-			list_del(&new->node);
-		} else {
-			/* try to get another desc */
-			new = ioat_dma_alloc_descriptor(ioat_chan, GFP_ATOMIC);
-			/* will this ever happen? */
-			/* TODO add upper limit on these */
-			BUG_ON(!new);
-		}
-
-		copy = min((u32) len, ioat_chan->xfercap);
-
-		new->hw->size = copy;
-		new->hw->ctl = 0;
-		new->async_tx.cookie = 0;
-		new->async_tx.ack = 1;
-
-		/* chain together the physical address list for the HW */
-		if (!first)
-			first = new;
-		else
-			prev->hw->next = (u64) new->async_tx.phys;
-
-		prev = new;
-		len -= copy;
-		list_add_tail(&new->node, &new_chain);
-		desc_count++;
-	}
-
-	list_splice(&new_chain, &new->async_tx.tx_list);
-
-	new->hw->ctl = IOAT_DMA_DESCRIPTOR_CTL_CP_STS;
-	new->hw->next = 0;
-	new->tx_cnt = desc_count;
-	new->async_tx.ack = 0; /* client is in control of this ack */
-	new->async_tx.cookie = -EBUSY;
-
-	pci_unmap_len_set(new, len, orig_len);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	return new ? &new->async_tx : NULL;
-}
-
-
-/**
- * ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
- * @chan: DMA channel handle
- */
-
-static void ioat_dma_memcpy_issue_pending(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-
-	if (ioat_chan->pending != 0) {
-		ioat_chan->pending = 0;
-		writeb(IOAT_CHANCMD_APPEND,
-		       ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-	}
-}
-
-static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
-{
-	unsigned long phys_complete;
-	struct ioat_desc_sw *desc, *_desc;
-	dma_cookie_t cookie = 0;
-
-	prefetch(chan->completion_virt);
-
-	if (!spin_trylock(&chan->cleanup_lock))
-		return;
-
-	/* The completion writeback can happen at any time,
-	   so reads by the driver need to be atomic operations
-	   The descriptor physical addresses are limited to 32-bits
-	   when the CPU can only do a 32-bit mov */
-
-#if (BITS_PER_LONG == 64)
-	phys_complete =
-		chan->completion_virt->full & IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR;
-#else
-	phys_complete = chan->completion_virt->low & IOAT_LOW_COMPLETION_MASK;
-#endif
-
-	if ((chan->completion_virt->full & IOAT_CHANSTS_DMA_TRANSFER_STATUS) ==
-	    IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED) {
-		printk("IOAT: Channel halted, chanerr = %x\n",
-		       readl(chan->reg_base + IOAT_CHANERR_OFFSET));
-
-		/* TODO do something to salvage the situation */
-	}
-
-	if (phys_complete == chan->last_completion) {
-		spin_unlock(&chan->cleanup_lock);
-		return;
-	}
-
-	spin_lock_bh(&chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &chan->used_desc, node) {
-
-		/*
-		 * Incoming DMA requests may use multiple descriptors, due to
-		 * exceeding xfercap, perhaps. If so, only the last one will
-		 * have a cookie, and require unmapping.
-		 */
-		if (desc->async_tx.cookie) {
-			cookie = desc->async_tx.cookie;
-
-			/* yes we are unmapping both _page and _single alloc'd
-			   regions with unmap_page. Is this *really* that bad?
-			*/
-			pci_unmap_page(chan->device->pdev,
-				       pci_unmap_addr(desc, dst),
-				       pci_unmap_len(desc, len),
-				       PCI_DMA_FROMDEVICE);
-			pci_unmap_page(chan->device->pdev,
-				       pci_unmap_addr(desc, src),
-				       pci_unmap_len(desc, len),
-				       PCI_DMA_TODEVICE);
-		}
-
-		if (desc->async_tx.phys != phys_complete) {
-			/* a completed entry, but not the last, so cleanup
-			 * if the client is done with the descriptor
-			 */
-			if (desc->async_tx.ack) {
-				list_del(&desc->node);
-				list_add_tail(&desc->node, &chan->free_desc);
-			} else
-				desc->async_tx.cookie = 0;
-		} else {
-			/* last used desc. Do not remove, so we can append from
-			   it, but don't look at it next time, either */
-			desc->async_tx.cookie = 0;
-
-			/* TODO check status bits? */
-			break;
-		}
-	}
-
-	spin_unlock_bh(&chan->desc_lock);
-
-	chan->last_completion = phys_complete;
-	if (cookie != 0)
-		chan->completed_cookie = cookie;
-
-	spin_unlock(&chan->cleanup_lock);
-}
-
-static void ioat_dma_dependency_added(struct dma_chan *chan)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	spin_lock_bh(&ioat_chan->desc_lock);
-	if (ioat_chan->pending == 0) {
-		spin_unlock_bh(&ioat_chan->desc_lock);
-		ioat_dma_memcpy_cleanup(ioat_chan);
-	} else
-		spin_unlock_bh(&ioat_chan->desc_lock);
-}
-
-/**
- * ioat_dma_is_complete - poll the status of a IOAT DMA transaction
- * @chan: IOAT DMA channel handle
- * @cookie: DMA transaction identifier
- * @done: if not %NULL, updated with last completed transaction
- * @used: if not %NULL, updated with last used transaction
- */
-
-static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
-					    dma_cookie_t cookie,
-					    dma_cookie_t *done,
-					    dma_cookie_t *used)
-{
-	struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
-	dma_cookie_t last_used;
-	dma_cookie_t last_complete;
-	enum dma_status ret;
-
-	last_used = chan->cookie;
-	last_complete = ioat_chan->completed_cookie;
-
-	if (done)
-		*done = last_complete;
-	if (used)
-		*used = last_used;
-
-	ret = dma_async_is_complete(cookie, last_complete, last_used);
-	if (ret == DMA_SUCCESS)
-		return ret;
-
-	ioat_dma_memcpy_cleanup(ioat_chan);
-
-	last_used = chan->cookie;
-	last_complete = ioat_chan->completed_cookie;
-
-	if (done)
-		*done = last_complete;
-	if (used)
-		*used = last_used;
-
-	return dma_async_is_complete(cookie, last_complete, last_used);
-}
-
-/* PCI API */
-
-static struct pci_device_id ioat_pci_tbl[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
-	{ PCI_DEVICE(PCI_VENDOR_ID_UNISYS,
-		     PCI_DEVICE_ID_UNISYS_DMA_DIRECTOR) },
-	{ 0, }
-};
-
-static struct pci_driver ioat_pci_driver = {
-	.name	= "ioatdma",
-	.id_table = ioat_pci_tbl,
-	.probe	= ioat_probe,
-	.shutdown = ioat_shutdown,
-	.remove	= __devexit_p(ioat_remove),
-};
-
-static irqreturn_t ioat_do_interrupt(int irq, void *data)
-{
-	struct ioat_device *instance = data;
-	unsigned long attnstatus;
-	u8 intrctrl;
-
-	intrctrl = readb(instance->reg_base + IOAT_INTRCTRL_OFFSET);
-
-	if (!(intrctrl & IOAT_INTRCTRL_MASTER_INT_EN))
-		return IRQ_NONE;
-
-	if (!(intrctrl & IOAT_INTRCTRL_INT_STATUS)) {
-		writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-		return IRQ_NONE;
-	}
-
-	attnstatus = readl(instance->reg_base + IOAT_ATTNSTATUS_OFFSET);
-
-	printk(KERN_ERR "ioatdma error: interrupt! status %lx\n", attnstatus);
-
-	writeb(intrctrl, instance->reg_base + IOAT_INTRCTRL_OFFSET);
-	return IRQ_HANDLED;
-}
-
-static void ioat_start_null_desc(struct ioat_dma_chan *ioat_chan)
-{
-	struct ioat_desc_sw *desc;
-
-	spin_lock_bh(&ioat_chan->desc_lock);
-
-	if (!list_empty(&ioat_chan->free_desc)) {
-		desc = to_ioat_desc(ioat_chan->free_desc.next);
-		list_del(&desc->node);
-	} else {
-		/* try to get another desc */
-		spin_unlock_bh(&ioat_chan->desc_lock);
-		desc = ioat_dma_alloc_descriptor(ioat_chan, GFP_KERNEL);
-		spin_lock_bh(&ioat_chan->desc_lock);
-		/* will this ever happen? */
-		BUG_ON(!desc);
-	}
-
-	desc->hw->ctl = IOAT_DMA_DESCRIPTOR_NUL;
-	desc->hw->next = 0;
-	desc->async_tx.ack = 1;
-
-	list_add_tail(&desc->node, &ioat_chan->used_desc);
-	spin_unlock_bh(&ioat_chan->desc_lock);
-
-	writel(((u64) desc->async_tx.phys) & 0x00000000FFFFFFFF,
-	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_LOW);
-	writel(((u64) desc->async_tx.phys) >> 32,
-	       ioat_chan->reg_base + IOAT_CHAINADDR_OFFSET_HIGH);
-
-	writeb(IOAT_CHANCMD_START, ioat_chan->reg_base + IOAT_CHANCMD_OFFSET);
-}
-
-/*
- * Perform a IOAT transaction to verify the HW works.
- */
-#define IOAT_TEST_SIZE 2000
-
-static int ioat_self_test(struct ioat_device *device)
-{
-	int i;
-	u8 *src;
-	u8 *dest;
-	struct dma_chan *dma_chan;
-	struct dma_async_tx_descriptor *tx;
-	dma_addr_t addr;
-	dma_cookie_t cookie;
-	int err = 0;
-
-	src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
-	if (!src)
-		return -ENOMEM;
-	dest = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
-	if (!dest) {
-		kfree(src);
-		return -ENOMEM;
-	}
-
-	/* Fill in src buffer */
-	for (i = 0; i < IOAT_TEST_SIZE; i++)
-		src[i] = (u8)i;
-
-	/* Start copy, using first DMA channel */
-	dma_chan = container_of(device->common.channels.next,
-				struct dma_chan,
-				device_node);
-	if (ioat_dma_alloc_chan_resources(dma_chan) < 1) {
-		err = -ENODEV;
-		goto out;
-	}
-
-	tx = ioat_dma_prep_memcpy(dma_chan, IOAT_TEST_SIZE, 0);
-	async_tx_ack(tx);
-	addr = dma_map_single(dma_chan->device->dev, src, IOAT_TEST_SIZE,
-			      DMA_TO_DEVICE);
-	ioat_set_src(addr, tx, 0);
-	addr = dma_map_single(dma_chan->device->dev, dest, IOAT_TEST_SIZE,
-			      DMA_FROM_DEVICE);
-	ioat_set_dest(addr, tx, 0);
-	cookie = ioat_tx_submit(tx);
-	ioat_dma_memcpy_issue_pending(dma_chan);
-	msleep(1);
-
-	if (ioat_dma_is_complete(dma_chan, cookie, NULL, NULL) != DMA_SUCCESS) {
-		printk(KERN_ERR "ioatdma: Self-test copy timed out, disabling\n");
-		err = -ENODEV;
-		goto free_resources;
-	}
-	if (memcmp(src, dest, IOAT_TEST_SIZE)) {
-		printk(KERN_ERR "ioatdma: Self-test copy failed compare, disabling\n");
-		err = -ENODEV;
-		goto free_resources;
-	}
-
-free_resources:
-	ioat_dma_free_chan_resources(dma_chan);
-out:
-	kfree(src);
-	kfree(dest);
-	return err;
-}
-
-static int __devinit ioat_probe(struct pci_dev *pdev,
-				const struct pci_device_id *ent)
-{
-	int err;
-	unsigned long mmio_start, mmio_len;
-	void __iomem *reg_base;
-	struct ioat_device *device;
-
-	err = pci_enable_device(pdev);
-	if (err)
-		goto err_enable_device;
-
-	err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
-	if (err)
-		err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
-	if (err)
-		goto err_set_dma_mask;
-
-	err = pci_request_regions(pdev, ioat_pci_driver.name);
-	if (err)
-		goto err_request_regions;
-
-	mmio_start = pci_resource_start(pdev, 0);
-	mmio_len = pci_resource_len(pdev, 0);
-
-	reg_base = ioremap(mmio_start, mmio_len);
-	if (!reg_base) {
-		err = -ENOMEM;
-		goto err_ioremap;
-	}
-
-	device = kzalloc(sizeof(*device), GFP_KERNEL);
-	if (!device) {
-		err = -ENOMEM;
-		goto err_kzalloc;
-	}
-
-	/* DMA coherent memory pool for DMA descriptor allocations */
-	device->dma_pool = pci_pool_create("dma_desc_pool", pdev,
-		sizeof(struct ioat_dma_descriptor), 64, 0);
-	if (!device->dma_pool) {
-		err = -ENOMEM;
-		goto err_dma_pool;
-	}
-
-	device->completion_pool = pci_pool_create("completion_pool", pdev, sizeof(u64), SMP_CACHE_BYTES, SMP_CACHE_BYTES);
-	if (!device->completion_pool) {
-		err = -ENOMEM;
-		goto err_completion_pool;
-	}
-
-	device->pdev = pdev;
-	pci_set_drvdata(pdev, device);
-#ifdef CONFIG_PCI_MSI
-	if (pci_enable_msi(pdev) == 0) {
-		device->msi = 1;
-	} else {
-		device->msi = 0;
-	}
-#endif
-	err = request_irq(pdev->irq, &ioat_do_interrupt, IRQF_SHARED, "ioat",
-			  device);
-	if (err)
-		goto err_irq;
-
-	device->reg_base = reg_base;
-
-	writeb(IOAT_INTRCTRL_MASTER_INT_EN, device->reg_base + IOAT_INTRCTRL_OFFSET);
-	pci_set_master(pdev);
-
-	INIT_LIST_HEAD(&device->common.channels);
-	enumerate_dma_channels(device);
-
-	dma_cap_set(DMA_MEMCPY, device->common.cap_mask);
-	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
-	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
-	device->common.device_prep_dma_memcpy = ioat_dma_prep_memcpy;
-	device->common.device_is_tx_complete = ioat_dma_is_complete;
-	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
-	device->common.device_dependency_added = ioat_dma_dependency_added;
-	device->common.dev = &pdev->dev;
-	printk(KERN_INFO "Intel(R) I/OAT DMA Engine found, %d channels\n",
-	       device->common.chancnt);
-
-	err = ioat_self_test(device);
-	if (err)
-		goto err_self_test;
-
-	dma_async_device_register(&device->common);
-
-	return 0;
-
-err_self_test:
-err_irq:
-	pci_pool_destroy(device->completion_pool);
-err_completion_pool:
-	pci_pool_destroy(device->dma_pool);
-err_dma_pool:
-	kfree(device);
-err_kzalloc:
-	iounmap(reg_base);
-err_ioremap:
-	pci_release_regions(pdev);
-err_request_regions:
-err_set_dma_mask:
-	pci_disable_device(pdev);
-err_enable_device:
-
-	printk(KERN_ERR "Intel(R) I/OAT DMA Engine initialization failed\n");
-
-	return err;
-}
-
-static void ioat_shutdown(struct pci_dev *pdev)
-{
-	struct ioat_device *device;
-	device = pci_get_drvdata(pdev);
-
-	dma_async_device_unregister(&device->common);
-}
-
-static void __devexit ioat_remove(struct pci_dev *pdev)
-{
-	struct ioat_device *device;
-	struct dma_chan *chan, *_chan;
-	struct ioat_dma_chan *ioat_chan;
-
-	device = pci_get_drvdata(pdev);
-	dma_async_device_unregister(&device->common);
-
-	free_irq(device->pdev->irq, device);
-#ifdef CONFIG_PCI_MSI
-	if (device->msi)
-		pci_disable_msi(device->pdev);
-#endif
-	pci_pool_destroy(device->dma_pool);
-	pci_pool_destroy(device->completion_pool);
-	iounmap(device->reg_base);
-	pci_release_regions(pdev);
-	pci_disable_device(pdev);
-	list_for_each_entry_safe(chan, _chan, &device->common.channels, device_node) {
-		ioat_chan = to_ioat_chan(chan);
-		list_del(&chan->device_node);
-		kfree(ioat_chan);
-	}
-	kfree(device);
-}
-
-/* MODULE API */
-MODULE_VERSION("1.9");
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Intel Corporation");
-
-static int __init ioat_init_module(void)
-{
-	/* it's currently unsafe to unload this module */
-	/* if forced, worst case is that rmmod hangs */
-	__unsafe(THIS_MODULE);
-
-	return pci_register_driver(&ioat_pci_driver);
-}
-
-module_init(ioat_init_module);
-
-static void __exit ioat_exit_module(void)
-{
-	pci_unregister_driver(&ioat_pci_driver);
-}
-
-module_exit(ioat_exit_module);
diff --git a/drivers/dma/ioatdma.h b/drivers/dma/ioatdma.h
index bf4dad7..d59c90b 100644
--- a/drivers/dma/ioatdma.h
+++ b/drivers/dma/ioatdma.h
@@ -28,10 +28,18 @@
 #include <linux/dmapool.h>
 #include <linux/cache.h>
 #include <linux/pci_ids.h>
+enum ioat_interrupt {
+	none = 0,
+	msix_multi_vector = 1,
+	msix_single_vector = 2,
+	msi = 3,
+	intx = 4,
+};
+
 #define IOAT_LOW_COMPLETION_MASK	0xffffffc0
 
 /**
- * struct ioat_device - internal representation of a IOAT device
+ * struct ioatdma_device - internal representation of a IOAT device
  * @pdev: PCI-Express device
  * @reg_base: MMIO register space base address
  * @dma_pool: for allocating DMA descriptors
@@ -39,14 +47,17 @@ #define IOAT_LOW_COMPLETION_MASK 0xfffff
  * @msi: Message Signaled Interrupt number
  */
-struct ioat_device {
+struct ioatdma_device {
 	struct pci_dev *pdev;
 	void __iomem *reg_base;
 	struct pci_pool *dma_pool;
 	struct pci_pool *completion_pool;
 	struct dma_device common;
-	u8 msi;
+	u8 version;
+	enum ioat_interrupt irq_mode;
+	struct msix_entry msix_entries[4];
+	struct ioat_dma_chan *idx[4];
 };
 
 /**
@@ -84,7 +95,7 @@ struct ioat_dma_chan {
 
 	int pending;
 
-	struct ioat_device *device;
+	struct ioatdma_device *device;
 	struct dma_chan common;
 
 	dma_addr_t completion_addr;
@@ -95,6 +106,7 @@ struct ioat_dma_chan {
 		u32 high;
 		};
 	} *completion_virt;
+	struct tasklet_struct cleanup_task;
 };
 
 /* wrapper around hardware descriptor format + additional software fields */
@@ -117,4 +129,14 @@ struct ioat_desc_sw {
 	struct dma_async_tx_descriptor async_tx;
 };
 
+#if defined(CONFIG_INTEL_IOATDMA) || defined(CONFIG_INTEL_IOATDMA_MODULE)
+struct ioatdma_device *ioat_dma_probe(struct pci_dev *, void __iomem *);
+void ioat_dma_remove(struct ioatdma_device *device);
+struct dca_provider *ioat_dca_init(struct pci_dev *, void __iomem *);
+#else
+#define ioat_dma_probe(pdev, io)	NULL
+#define ioat_dma_remove(dev)		do { } while (0)
+#define ioat_dca_init(pdev, io)		NULL
+#endif
+
 #endif /* IOATDMA_H */
diff --git a/drivers/dma/ioatdma_hw.h b/drivers/dma/ioatdma_hw.h
index 4d7a128..9e7434e 100644
--- a/drivers/dma/ioatdma_hw.h
+++ b/drivers/dma/ioatdma_hw.h
@@ -27,7 +27,7 @@ #define IOAT_PCI_DID 0x1A38
 #define IOAT_PCI_RID		0x00
 #define IOAT_PCI_SVID		0x8086
 #define IOAT_PCI_SID		0x8086
-#define IOAT_VER		0x12	/* Version 1.2 */
+#define IOAT_VER_1_2		0x12	/* Version 1.2 */
 
 struct ioat_dma_descriptor {
 	uint32_t size;
diff --git a/drivers/dma/ioatdma_registers.h b/drivers/dma/ioatdma_registers.h
index a30c734..baaab5e 100644
--- a/drivers/dma/ioatdma_registers.h
+++ b/drivers/dma/ioatdma_registers.h
@@ -1,5 +1,5 @@
 /*
- * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+ * Copyright(c) 2004 - 2007 Intel Corporation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the Free
@@ -21,6 +21,9 @@
 #ifndef _IOAT_REGISTERS_H_
 #define _IOAT_REGISTERS_H_
 
+#define IOAT_PCI_DMACTRL_OFFSET		0x48
+#define IOAT_PCI_DMACTRL_DMA_EN		0x00000001
+#define IOAT_PCI_DMACTRL_MSI_EN		0x00000002
 
 /* MMIO Device Registers */
 #define IOAT_CHANCNT_OFFSET		0x00	/*  8-bit */
@@ -39,6 +42,7 @@ #define IOAT_INTRCTRL_OFFSET		0x03	/*
 #define IOAT_INTRCTRL_MASTER_INT_EN	0x01	/* Master Interrupt Enable */
 #define IOAT_INTRCTRL_INT_STATUS	0x02	/* ATTNSTATUS -or- Channel Int */
 #define IOAT_INTRCTRL_INT		0x04	/* INT_STATUS -and- MASTER_INT_EN */
+#define IOAT_INTRCTRL_MSIX_VECTOR_CONTROL	0x08	/* Enable all MSI-X vectors */
 
 #define IOAT_ATTNSTATUS_OFFSET		0x04	/* Each bit is a channel */
diff --git a/include/linux/dca.h b/include/linux/dca.h
new file mode 100644
index 0000000..83eaecc
--- /dev/null
+++ b/include/linux/dca.h
@@ -0,0 +1,47 @@
+#ifndef DCA_H
+#define DCA_H
+/* DCA Provider API */
+
+/* DCA Notifier Interface */
+void dca_register_notify(struct notifier_block *nb);
+void dca_unregister_notify(struct notifier_block *nb);
+
+#define DCA_PROVIDER_ADD	0x0001
+#define DCA_PROVIDER_REMOVE	0x0002
+
+struct dca_provider {
+	struct dca_ops		*ops;
+	struct class_device	*cd;
+	int			 id;
+};
+
+struct dca_ops {
+	int	(*add_requester)    (struct dca_provider *, struct device *);
+	int	(*remove_requester) (struct dca_provider *, struct device *);
+	u8	(*get_tag)	    (struct dca_provider *, int cpu);
+};
+
+struct dca_provider *alloc_dca_provider(struct dca_ops *ops, int priv_size);
+void free_dca_provider(struct dca_provider *dca);
+int register_dca_provider(struct dca_provider *dca, struct device *dev);
+void unregister_dca_provider(struct dca_provider *dca);
+
+static inline void *dca_priv(struct dca_provider *dca)
+{
+	return (void *)dca + sizeof(struct dca_provider);
+}
+
+/* Requester API */
+int dca_add_requester(struct device *dev);
+int dca_remove_requester(struct device *dev);
+u8 dca_get_tag(int cpu);
+
+/* internal stuff */
+int __init dca_sysfs_init(void);
+void __exit dca_sysfs_exit(void);
+int dca_sysfs_add_provider(struct dca_provider *dca, struct device *dev);
+void dca_sysfs_remove_provider(struct dca_provider *dca);
+int dca_sysfs_add_req(struct dca_provider *dca, struct device *dev, int slot);
+void dca_sysfs_remove_req(struct dca_provider *dca, int slot);
+
+#endif /* DCA_H */
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 07fc574..d4883bd 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2287,6 +2287,8 @@ #define PCI_DEVICE_ID_INTEL_MCH_PB1 0x35
 #define PCI_DEVICE_ID_INTEL_MCH_PC	0x3599
 #define PCI_DEVICE_ID_INTEL_MCH_PC1	0x359a
 #define PCI_DEVICE_ID_INTEL_E7525_MCH	0x359e
+#define PCI_DEVICE_ID_INTEL_IOAT_CNB	0x360b
+#define PCI_DEVICE_ID_INTEL_IOAT_SCNB	0x65ff
 #define PCI_DEVICE_ID_INTEL_82371SB_0	0x7000
 #define PCI_DEVICE_ID_INTEL_82371SB_1	0x7010
 #define PCI_DEVICE_ID_INTEL_82371SB_2	0x7020
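
As a closing illustration of the tag-map scheme in ioat_dca.c above: each of
the eight tag bits is either a literal 0/1, or, when the valid bit (0x80) is
set, a copy of one bit of the target CPU's APIC ID. The stand-alone,
user-space rendering of the ioat_dca_get_tag() loop below uses the BNB map
from the patch and an arbitrary sample APIC ID, both chosen purely for
demonstration:

    #include <stdio.h>
    #include <stdint.h>

    #define DCA_TAG_MAP_VALID 0x80
    #define APICID_BIT(x)     (DCA_TAG_MAP_VALID | (x))
    #define IOAT_TAG_MAP_LEN  8

    /* the BNB tag map from the patch; entries 4..7 default to literal 0 */
    static uint8_t tag_map_bnb[IOAT_TAG_MAP_LEN] = {
            1, APICID_BIT(1), APICID_BIT(2), APICID_BIT(2),
    };

    static uint8_t get_tag(const uint8_t *map, int apic_id)
    {
            uint8_t tag = 0;
            for (int i = 0; i < IOAT_TAG_MAP_LEN; i++) {
                    uint8_t entry = map[i];
                    int value;
                    if (entry & DCA_TAG_MAP_VALID)
                            /* copy the named APIC ID bit into tag bit i */
                            value = (apic_id >> (entry & ~DCA_TAG_MAP_VALID)) & 1;
                    else
                            /* literal 0 or 1 */
                            value = entry ? 1 : 0;
                    tag |= value << i;
            }
            return tag;
    }

    int main(void)
    {
            /* sample APIC ID 5 = 0b101: bit 1 is 0, bit 2 is 1, so the tag is
               bit0=1 (literal), bit1=0, bit2=1, bit3=1 -> 0b1101 = 0x0d */
            printf("tag = 0x%02x\n", get_tag(tag_map_bnb, 5));
            return 0;
    }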