GIT 5c1bbdd36a28a8e2aa0d4e9143bbc97f7a1976a2 git+ssh://master.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git#drm-mm

commit
Author: Dave Airlie
Date:   Fri Jan 25 11:36:39 2008 +1000

    drm/i915: final i915 interface change - make reloc use copy from user

    Life is a lot simpler, especially with the relocation avoidance code, if we
    simply store the relocations in a malloced buffer and copy from user instead
    of bringing up a BO every time.

    Signed-off-by: Dave Airlie

commit 6d32e25c83dd5da1da7820cd62f3f7ebbe560da2
Author: Dave Airlie
Date:   Fri Jan 25 11:05:59 2008 +1000

    drm: whitespace and some label cleanups.

    Signed-off-by: Dave Airlie

commit 710ffb9b3db9a154a690281ba5cb55f8d969d38b
Author: Kyle McMartin
Date:   Thu Jan 24 16:48:24 2008 +1000

    i915: fix invalid opcode exception on cpus without clflush

    i915_flush_ttm was unconditionally executing a clflush instruction to
    (obviously) flush the cache. Instead, check if the cpu supports clflush,
    and if not, fall back to calling wbinvd to flush the entire cache.

    Signed-off-by: Kyle McMartin
    Signed-off-by: Dave Airlie

commit a75b6e259acef004d61dd11130eb3b3a34976a02
Author: Eric Anholt
Date:   Thu Jan 24 16:47:28 2008 +1000

    drm: Add additional explanation of DRM_BO_FLAG_CACHED_MAPPED

    Signed-off-by: Dave Airlie

commit 267e18a9e23b947789b60287dadfa9cdac1af9d7
Author: Zhenyu Wang
Date:   Thu Jan 24 16:46:36 2008 +1000

    i915: Add chipset id for Intel Integrated Graphics Device

    This adds a new chipset id in drm.

    Signed-off-by: Zhenyu Wang
    Signed-off-by: Dave Airlie

commit dc09e17ca052ff6428a8beaa21058f0345ba1878
Author: Thomas Hellstrom
Date:   Thu Jan 24 16:44:25 2008 +1000

    drm/ttm: Properly propagate the user-space fence flags.

    This avoids a sync flush when user-space has already programmed an MI_FLUSH
    in the batchbuffer.

    Signed-off-by: Dave Airlie

commit fcd863795159c0c3d49e66e9837391aa44dbe03c
Author: Keith Packard
Date:   Thu Jan 24 16:42:46 2008 +1000

    drm: Change drm_bo_type_dc to drm_bo_type_device and comment usage of this value.

    I couldn't figure out what drm_bo_type_dc was for; Dave Airlie finally clued
    me in that it was the 'normal' buffer objects with kernel allocated pages
    that could be mmapped from the drm device file. I thought that
    'drm_bo_type_device' was a more descriptive name. I also added a bunch of
    comments describing the use of the type enum values and the functions that
    use them.

    Signed-off-by: Dave Airlie

commit 74baf618637c59138b21f9338be630d60c530de0
Author: Keith Packard
Date:   Thu Jan 24 16:41:36 2008 +1000

    drm/ttm: Rename inappropriately named 'mask' fields to 'proposed_flags' instead.

    Flags pending validation were stored in a misleadingly named field, 'mask'.
    As 'mask' is already used to indicate pieces of a flags field which are
    changing, it seems better to use a name reflecting the actual purpose of
    this field. I chose 'proposed_flags' as they may not actually end up in
    'flags', and in any case will be modified when they are moved over. This
    affects the API, but not the ABI, of the user-mode interface.

    Signed-off-by: Dave Airlie

commit a19b7294f4458aea93bc1ac98ae211273a26e129
Author: Keith Packard
Date:   Thu Jan 24 16:40:25 2008 +1000

    drm: Use dummy_read_page for unpopulated kernel-allocated ttm pages.

    Previously, dummy_read_page was used only for read-only user allocations;
    it filled in pages that were not present in the user address map
    (presumably, pages that were allocated but never written to).
    This patch allows them to be used for read-only ttms allocated from the
    kernel, so that applications can over-allocate buffers without forcing
    every page to be allocated.

    Signed-off-by: Dave Airlie

commit 60f483b3a32c84603dbe4cb29b90645a60ad7d77
Author: Keith Packard
Date:   Thu Jan 24 16:39:42 2008 +1000

    drm: Move dummy_read_page from drm_ttm_set_user to drm_ttm_create.

    I'm hoping to use the dummy_read_page for kernel allocated buffers to avoid
    allocating extra pages for read-only buffers (like vertex and batch
    buffers). This also eliminates the 'write' parameter to drm_ttm_set_user
    and just has DRM_TTM_PAGE_WRITE passed into drm_ttm_create.

    Signed-off-by: Dave Airlie

commit 2aaf2b4bd9b779f7f63c2452d0b19c377cb4e105
Author: Keith Packard
Date:   Thu Jan 24 16:36:31 2008 +1000

    drm/ttm: Clean up and document drm_ttm.c APIs. drm_bind_ttm -> drm_ttm_bind.

    Aside from changing drm_bind_ttm to drm_ttm_bind, this patch adds only
    documentation and fixes the functions inside drm_ttm.c to all be prefixed
    with drm_ttm_.

    Signed-off-by: Dave Airlie

commit b6512a4bd10826b9bdfd675a1bd5f159ae863bfe
Author: Keith Packard
Date:   Thu Jan 24 16:35:31 2008 +1000

    drm/ttm: Document drm_ttm_set_user.

    Add a comment explaining the parameters for this function.

    Signed-off-by: Dave Airlie

commit bfc399261b8a250c9563acef9ece481abfc5863d
Author: Keith Packard
Date:   Thu Jan 24 16:34:54 2008 +1000

    drm/ttm: Document drm_buffer_object_validate function.

    Just add documentation for this function, no code changes.

    Signed-off-by: Dave Airlie

commit 54422bf36b625738b70da63fd175f235d6d19771
Author: Keith Packard
Date:   Thu Jan 24 16:34:16 2008 +1000

    drm/ttm: Document fence_class mess in drm_bo_setstatus_ioctl

    drmBOSetStatus does not bother to set the fence_class parameter.
    Fortunately, drm_bo_setstatus_ioctl doesn't end up using it as it calls
    drm_bo_handle_validate with use_old_fence_class = 1.

    Signed-off-by: Dave Airlie

commit 18f1eda75f7c6f8e3a0c22cb5363ac8fffced7d6
Author: Keith Packard
Date:   Thu Jan 24 16:33:23 2008 +1000

    drm/ttm: Document drm_bo_handle_validate. Match drm_bo_do_validate parameter order.

    Document parameters and usage for drm_bo_handle_validate. Change parameter
    order to match drm_bo_do_validate (fence_class has been moved to after the
    flags, hint and mask values). Existing users of this function have been
    changed, but out-of-tree users must be modified separately.

    Signed-off-by: Dave Airlie

commit 702a8ba7b1fa06311d66648791ea0fc7428d7ce8
Author: Keith Packard
Date:   Thu Jan 24 16:32:43 2008 +1000

    drm/ttm: Document drm_bo_do_validate. Remove spurious 'do_wait' parameter

    Add comments about the parameters to drm_bo_do_validate, along with
    comments for the DRM_BO_HINT options. Remove the 'do_wait' parameter as it
    is duplicated by DRM_BO_HINT_DONT_BLOCK.

    Signed-off-by: Dave Airlie

commit 6d9198f4675fca980271cdff92cd3be054e8ebe2
Author: Keith Packard
Date:   Thu Jan 24 16:31:45 2008 +1000

    drm/ttm: Make ttm create/destroy APIs consistent. Pass page_flags in create.

    Creating a ttm was done with drm_ttm_init while destruction was done with
    drm_destroy_ttm. Renaming these to drm_ttm_create and drm_ttm_destroy makes
    their use clearer. Passing page_flags to the create function will allow it
    to know whether user or kernel pages are needed, with the goal of allowing
    kernel ttms to be saved for later reuse.

    Signed-off-by: Dave Airlie
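A minimal sketch of the renamed create/destroy pairing described in the commit
above; the prototype shapes are inferred from the drm_bo_add_ttm() hunk later
in this patch and are not quoted from the ttm change itself:

	/* Assumed prototypes for the renamed TTM API; an actual call site
	 * appears in the drm_bo.c hunk below (drm_bo_add_ttm()). */
	struct drm_ttm *drm_ttm_create(struct drm_device *dev, unsigned long size,
				       uint32_t page_flags,
				       struct page *dummy_read_page);
	int drm_ttm_destroy(struct drm_ttm *ttm);

	/* A kernel-backed, writable ttm passes DRM_TTM_PAGE_WRITE at create
	 * time instead of a separate 'write' argument to drm_ttm_set_user(). */
	bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT,
				 DRM_TTM_PAGE_WRITE, dev->bm.dummy_read_page);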
commit c12fcaa431f1c95356d157bd5999c90024b52a52
Author: Alan Hourihane
Date:   Thu Jan 24 16:29:48 2008 +1000

    drm/bo: catch an out of memory condition

    Signed-off-by: Dave Airlie

commit 3ee7790e68664e8953775d85d0a372952de03bc1
Author: Keith Packard
Date:   Thu Jan 24 16:28:38 2008 +1000

    drm/i915: Make relocation validate client computed values when debugging

    Signed-off-by: Dave Airlie

commit d9642e0487c3f120e4d7d59b2ee97610fb17b996
Author: Keith Packard
Date:   Thu Jan 24 16:26:40 2008 +1000

    i915: wait for buffer idle before writing relocations

    When writing a relocation entry, make sure the target buffer is idle,
    otherwise the GPU may see inconsistent data.

    Signed-off-by: Dave Airlie

commit 21de92a93d41fb047de2861cf4ed248193798f43
Author: Keith Packard
Date:   Thu Jan 24 16:25:57 2008 +1000

    drm/i915: Allow relocation to be skipped when buffers don't move.

    One of the costs of superioctl has been the need to perform relocations
    inside the kernel. The cost of mapping the buffers to the CPU and writing
    data is fairly high, especially if those buffers have been mapped and read
    by the GPU. If we assume that buffers don't move around very often, we can
    have the client compute the relocations itself using the previous GPU
    address. When that object doesn't move, the kernel can skip computing and
    writing the updated data.

    Here's a patch which adds a new field to struct drm_bo_info_req called
    'presumed_offset', and a new DRM_BO_HINT_PRESUMED_OFFSET that is set when
    this field has been filled in by the client. There are two separate
    optimizations performed when the presumed_offset is correct:

    1. i915_exec_reloc checks to see if all previous buffer offsets were
       guessed correctly. If so, there's no need for it to look at *any* of
       the relocations for a buffer. When this happens, it skips the whole
       relocation process, simply returning success.

    2. i915_apply_reloc checks to see if the target buffer offset was guessed
       correctly. If so, it skips mapping the relocatee, computing the
       relocation and writing the value.

    If no relocations are needed, the relocatee should never be mapped to the
    CPU, and so the kernel shouldn't need to wait for any fences to pass.

    Signed-off-by: Dave Airlie

commit b1593b85f1a1420513ef1b11593ae6494d1fede9
Author: Márton Németh
Date:   Thu Jan 24 15:58:57 2008 +1000

    drm: cleanup DRM_DEBUG() parameters

    As the DRM_DEBUG macro already prints out the __FUNCTION__ string (see
    drivers/char/drm/drmP.h), it is not worth doing this again. In some other
    places the ending "\n" was added.

    airlied:- I cleaned up a few that this patch missed also

    Signed-off-by: Dave Airlie

commit f20c5cff51c78d37f17989cb0676cf27322fe8cc
Author: Carlos Martín
Date:   Wed Jan 23 16:41:17 2008 +1000

    drm/i915: add support for E7221 chipset

    The E7221 chipset is a server version of the i915.

    Signed-off-by: Dave Airlie

commit 673d3ee45bd0c7d1bebddf3ac7781068fb880f7a
Author: Zou Nan hai
Date:   Wed Jan 23 15:54:04 2008 +1000

    drm: fix a possible deadlock on 32-bit ttm based systems.

    This fixes an infinite loop in the drm that can hang the system.
    (1 << bits) is an undefined value when bits == 32. gcc may generate 1 for
    this expression, which will lead to an infinite retry loop in
    drm_ht_just_insert_please.

    Signed-off-by: Dave Airlie

commit 3c75a6d3e03df4d6d1ca70515463945e5a119a99
Author: Li Zefan
Date:   Mon Dec 17 09:47:19 2007 +1000

    drm: don't cast a pointer to pointer of list_head

    The casting is safe only when the list_head member is the first member of
    the structure.

    Signed-off-by: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Dave Airlie
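A minimal illustration of the point above, using a hypothetical structure: the
cast only happens to work when the list_head is the first member, whereas
list_entry() is correct for any member position:

	#include <linux/list.h>

	struct example {
		int foo;
		struct list_head head;	/* deliberately not the first member */
	};

	static void walk(struct list_head *list)
	{
		struct list_head *p;

		list_for_each(p, list) {
			/* Wrong: only valid if 'head' were the first member. */
			/* struct example *e = (struct example *)p; */

			/* Correct regardless of where 'head' sits. */
			struct example *e = list_entry(p, struct example, head);
			(void)e;
		}
	}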
commit 1218a86c75e6cd0fb6fea202f6ab64fabcd1de75
Author: Jesper Juhl
Date:   Mon Dec 17 09:47:17 2007 +1000

    mga_dma: return 'err' not just zero from mga_do_cleanup_dma()

    While reading some code I stumbled across the use of 'err' in
    drivers/char/drm/mga_dma.c::mga_do_cleanup_dma() and I think there's a
    small problem. The variable is only used inside #if __OS_HAS_AGP, which is
    fine, but all that ever happens is an assignment to the variable - it is
    never actually used for anything. The variable is nicely initialized to
    zero, which is also what the return statement at the end of the function
    returns (always, at the moment). It looks to me like that function should
    be returning 'err' instead of always just returning 0. Here's a patch to
    do that.

    Signed-off-by: Jesper Juhl
    Signed-off-by: Andrew Morton
    Signed-off-by: Dave Airlie

commit df766dfb3926f378e161a29656298399fc178ca9
Author: Dave Airlie
Date:   Mon Dec 17 09:41:56 2007 +1000

    drm: add _DRM_DRIVER flag, and re-order unload.

    Allow drivers to add maps that won't be removed by lastclose or unload.
    The unload needs to be re-ordered to avoid removing the hashes before the
    driver has removed the final maps.

    Signed-off-by: Dave Airlie

commit 59dd30b1cfe9bc77f11fce9075fc79b84014c847
Author: Jiri Slaby
Date:   Thu Nov 29 09:57:16 2007 +1000

    drm: i915, fix pointer strip

    Besides the emitted warning, the added cast (u64 -> unsigned) strips out
    part of the address on 64-bit. Cast to unsigned long instead.

    Signed-off-by: Jiri Slaby
    Cc: Thomas Hellstrom
    Signed-off-by: Andrew Morton
    Signed-off-by: Dave Airlie

commit e55fc6036d5a56fba9f29a0b868484fdfe710da9
Author: Dave Airlie
Date:   Thu Nov 29 09:48:20 2007 +1000

    drm: enable udev node creation

    Signed-off-by: Dave Airlie

commit a28f7af29040fd736fbfdb8a707e2837872002ee
Author: Dave Airlie
Date:   Thu Nov 22 18:57:08 2007 +1000

    drm: enable TTM chipset flushing + i915 superioctl

    This requires patches found in the AGP queue to implement the chipset
    flush.

    Signed-off-by: Dave Airlie

commit fc847e15af7df497e7e9eb7f1986cc0a9d15ab5f
Author: Dave Airlie
Date:   Thu Nov 22 18:53:36 2007 +1000

    drm/i915: add some missing pieces of ttm superioctl

commit 2cff60f2fc54b87452d49f2a56397e3ed3cad09c
Author: Eric Anholt
Date:   Thu Nov 22 18:46:54 2007 +1000

    drm: Make DRM_IOCTL_GET_CLIENT return EINVAL when it can't find client #idx.

    Fixes the getclient test and dritest -c.

    Signed-off-by: Dave Airlie

commit 58f9986ee9fb26413173d5f97a88a992aa62b10f
Author: Dave Airlie
Date:   Thu Nov 22 18:43:46 2007 +1000

    drm: move drm_mem_init to proper place in startup sequence

    For TTM this needs to be called later.

    Signed-off-by: Dave Airlie

commit 8eb8d17bc65bbf804886b6a404e586cabbea2bd1
Author: Dave Airlie
Date:   Thu Nov 22 18:23:13 2007 +1000

    drm: call driver load function after initialising AGP

    Needed for intel chipset flushing.

    Signed-off-by: Dave Airlie

commit 389a886fbf7fdb92e61869ed52cefb62c72f93c0
Author: Ian Romanick
Date:   Thu Nov 22 17:02:08 2007 +1000

    drm: Fix ioc32 compat layer

    Previously any ioctls that weren't explicitly listed in the compat ioctl
    table would fail with ENOTTY. If the incoming ioctl number is outside the
    range of the table, assume that it Just Works, and pass it off to
    drm_ioctl. This makes the fence related ioctls work on 64-bit PowerPC.

    Signed-off-by: Dave Airlie

commit e4828c319bfa369c21a0c4c96c0c87e090522e4a
Author: Eric Anholt
Date:   Thu Nov 22 16:55:15 2007 +1000

    drm: fd.o bug #11895: Only add the AGP base to map offset if the caller didn't.
    The i830 and newer intel 2D code adds the AGP base to map offsets already,
    because it wasn't doing the AGP enable which used to set dev->agp->base.
    Credit goes to Zhenyu for finding the issue.

    Signed-off-by: Dave Airlie

commit 24066d1582a8e57c7c4be553cc8dc74a359adff3
Author: Michel Dänzer
Date:   Thu Nov 22 14:49:12 2007 +1000

    i915: Add page flip, triple buffer and cleaned up pipe/plane code

    This amalgamates the changes to the drm git tree around the pageflip/pipe
    control area. These changes had numerous fixes at various stages.

    Author: Michel Dänzer
    i915: Improved page flipping support, including triple buffering.
    i915: Add support for scheduled buffer swaps to be done as flips.
    i915: Fix test for synchronous flip affecting both pipes.
    i915: Eliminate dev_priv->current_page.

    Author: Jesse Barnes
    Disambiguate planes & pipes for swap operations
    Remove plane->pipe mapping from SAREA private after all

    Signed-off-by: Dave Airlie

commit 359e2edb960c15d26ebd071ca56c7f173ad52e2e
Author: Zou Nan hai
Date:   Thu Nov 22 14:28:52 2007 +1000

    i915: ARB_Occlusion_query (MMIO ioctl) support.

    This adds a new ioctl for passing counter information from the chip back
    to applications; these counters include the data needed to perform OC.

    Signed-off-by: Dave Airlie

commit 0983b0ce6a6b9eaec93a1b98d8cf67b406eb3c39
Author: Jesse Barnes
Date:   Thu Nov 22 14:14:14 2007 +1000

    i915: add suspend/resume support

    Add suspend/resume support to the i915 driver. Moves some of the
    initialization into the driver load routine, and fixes up places where we
    assumed no dev_private existed in some of the cleanup paths. This allows
    us to suspend/resume properly even if X isn't running.

    Signed-off-by: Dave Airlie

commit 45531cfec68a91c32f46279634dd089687ba1708
Author: Jesse Barnes
Date:   Thu Nov 22 14:02:38 2007 +1000

    drm: update DRM sysfs support

    Make DRM devices use real Linux devices instead of class devices, which
    are going away. While we're at it, clean up some of the interfaces to take
    struct drm_device * or struct device * and use the global drm_class where
    needed instead of passing it around.

    Signed-off-by: Dave Airlie

commit 505eb6e6d82a3e4a6225301b3a10acad4e1fa33b
Author: Eric Anholt
Date:   Thu Nov 22 16:40:37 2007 +1000

    drm: Initialize the AGP structure's base address at init rather than enable.

    Not all drivers call enable (intel), but they would still like to use this
    member in driver code.

    Signed-off-by: Dave Airlie

commit 25992d81383595f07ec888b7898edd055f5fb755
Author: Thomas Hellstrom
Date:   Thu Nov 22 13:30:34 2007 +1000

    drm: ttm fixes from upstream mm fixups

    Avoid buffers not ending up on a list in some cases.
    Fix a user-buffer check.

    Signed-off-by: Dave Airlie

commit 9b5e2267ecfaf673c656fb187bc99689f3563d8c
Author: Jerome Glisse
Date:   Thu Nov 22 13:18:53 2007 +1000

    drm: fix deadlock in drm_buffer_object_transfer

    This fixes a deadlock in the buffer moving code. The current i915 driver
    doesn't use these codepaths yet.

    Signed-off-by: Dave Airlie

commit eeaccde806a92687f78451395f58d1e61b2d539a
Author: Dave Airlie
Date:   Mon Nov 5 16:59:20 2007 +1000

    drm/ttm: fix build with AGP disabled

    Signed-off-by: Dave Airlie

commit 01b894a1d5b32af470123ca15242ccc0614b190f
Author: Thomas Hellstrom
Date:   Mon Nov 5 16:53:59 2007 +1000

    drm/i915: add support for buffer objects and fencing to the i915 driver

    This adds support for the new memory manager buffer objects and fencing to
    the i915 driver.

    Signed-off-by: Dave Airlie
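A rough sketch of how a driver plugs into the new memory manager: the
fence_driver, bo_driver, suspend and resume hooks are the struct drm_driver
fields added in the drmP.h hunk below, while the i915_* identifiers here are
placeholders rather than the actual i915 hunk:

	/* Placeholder wiring; only the fence_driver/bo_driver/suspend/resume
	 * fields come from this patch set, the identifiers are illustrative. */
	static struct drm_driver driver = {
		.driver_features = DRIVER_USE_AGP | DRIVER_HAVE_IRQ,
		.load = i915_driver_load,
		.suspend = i915_suspend,
		.resume = i915_resume,
		.fence_driver = &i915_fence_driver,
		.bo_driver = &i915_bo_driver,
	};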
commit d613bdf787821c79c458c87676f57ab088ad969a
Author: Dave Airlie
Date:   Mon Nov 5 16:39:19 2007 +1000

    drm/ttm: Add AGP backend for TTM objects.

    This adds a DRI memory manager backend to control AGP memory allocations.

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Dave Airlie

commit 05baec7c7b865e38ceac69424447ce1ffec4692c
Author: Thomas Hellstrom
Date:   Mon Nov 5 14:54:31 2007 +1000

    drm: add buffer object and TTM support to DRI memory manager

    This is the main commit of the DRM memory manager. It adds buffer objects,
    which are objects that can live in various memory areas, and TTM
    (Translation Table Maps), which allow buffer objects to be moved in/out of
    the GART dynamically.

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Dave Airlie

commit 562410ab84799b686d7812c9e686edd5ef862f23
Author: Dave Airlie
Date:   Mon Nov 5 14:13:19 2007 +1000

    drm: add fencing code for memory manager.

    This adds the generic fence code required to support the memory manager.
    Fences are user objects and are used to improve synchronization between
    CPU and GPU. The CPU drops fences into the GPU's command stream, and when
    the GPU execution has passed that point the fence is signaled. Other
    objects are attached to fences and are used to stop objects being
    interacted with on the CPU while the GPU is busy processing them.

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Dave Airlie

commit da85972d8aded4c9a54920a9b97cc5d275410693
Author: Thomas Hellstrom
Date:   Mon Nov 5 13:51:07 2007 +1000

    drm: add file offset memory manager

    This "memory" manager instance is to be used to allocate VM space at
    non-overlapping offsets on the DRM file descriptor.

    Signed-off-by: Dave Airlie

commit 6e834481e357974dfa386eebdfa6a90264e884e3
Author: Thomas Hellstrom
Date:   Mon Nov 5 13:45:22 2007 +1000

    drm: add user object manager for tracking kernel objects

    A user object is a structure that helps the drm give out user handles to
    kernel internal objects and keep track of these objects so they can be
    destroyed, e.g. when the user space process exits.

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Dave Airlie

commit 693f24fa77cb21bc3ae318ebef0fe3ae79ebed0a
Author: Thomas Hellstrom
Date:   Mon Nov 5 13:21:19 2007 +1000

    drm: add basic memory accounting for userspace controlled objects.

    This adds basic memory accounting in preparation for the new memory
    manager. Objects that are allocated from userspace should be allocated via
    the new drm_ctl functions so that the DRM can limit how much RAM users can
    allocate using the drm interfaces. This code is possibly naive in the
    heuristic it uses to arrive at the memory limit. However, this shouldn't
    cause a problem at this stage, as fixing it properly is a larger problem
    and the drm will co-operate with other projects that require the same
    issues solved (IBM's SPU support, mainly).

    Signed-off-by: Thomas Hellstrom
    Signed-off-by: Dave Airlie

commit 325fb0705328f7149f6296050aff5527107181f8
Author: Dave Airlie
Date:   Mon Nov 5 13:07:28 2007 +1000

    drm: move two function externs into the correct block

commit e59b88798f78ee3cac1b751fa85b3abb6de82560
Author: Dave Airlie
Date:   Mon Nov 5 12:50:58 2007 +1000

    drm: run cleanfile across drm tree

    Signed-off-by: Dave Airlie

commit cfb7ab76e606aae0206d82e46726ee9e63a9e2cf
Author: Dave Airlie
Date:   Mon Nov 5 12:37:41 2007 +1000

    drm: some minor cleanups and changes to make memory manager merging easier.
Signed-off-by: Dave Airlie drivers/char/drm/Kconfig | 9 +- drivers/char/drm/Makefile | 7 +- drivers/char/drm/README.drm | 1 - drivers/char/drm/ati_pcigart.c | 13 +- drivers/char/drm/drm.h | 337 +++++- drivers/char/drm/drmP.h | 162 ++- drivers/char/drm/drm_agpsupport.c | 199 +++- drivers/char/drm/drm_auth.c | 8 +- drivers/char/drm/drm_bo.c | 2669 +++++++++++++++++++++++++++++++++++++ drivers/char/drm/drm_bo_lock.c | 175 +++ drivers/char/drm/drm_bo_move.c | 576 ++++++++ drivers/char/drm/drm_bufs.c | 65 +- drivers/char/drm/drm_context.c | 6 +- drivers/char/drm/drm_drv.c | 95 ++- drivers/char/drm/drm_fence.c | 847 ++++++++++++ drivers/char/drm/drm_fops.c | 59 +- drivers/char/drm/drm_hashtab.c | 9 +- drivers/char/drm/drm_hashtab.h | 1 - drivers/char/drm/drm_ioc32.c | 6 +- drivers/char/drm/drm_ioctl.c | 25 +- drivers/char/drm/drm_irq.c | 6 +- drivers/char/drm/drm_lock.c | 2 +- drivers/char/drm/drm_memory.c | 70 +- drivers/char/drm/drm_mm.c | 13 +- drivers/char/drm/drm_object.c | 293 ++++ drivers/char/drm/drm_objects.h | 700 ++++++++++ drivers/char/drm/drm_os_linux.h | 4 +- drivers/char/drm/drm_pciids.h | 2 +- drivers/char/drm/drm_proc.c | 94 ++- drivers/char/drm/drm_sarea.h | 2 +- drivers/char/drm/drm_scatter.c | 12 +- drivers/char/drm/drm_stub.c | 39 +- drivers/char/drm/drm_sysfs.c | 146 ++- drivers/char/drm/drm_ttm.c | 468 +++++++ drivers/char/drm/drm_vm.c | 207 +++- drivers/char/drm/i810_dma.c | 24 +- drivers/char/drm/i810_drv.h | 52 +- drivers/char/drm/i830_dma.c | 2 +- drivers/char/drm/i830_drm.h | 8 +- drivers/char/drm/i830_drv.h | 51 +- drivers/char/drm/i830_irq.c | 2 +- drivers/char/drm/i915_buffer.c | 195 +++ drivers/char/drm/i915_dma.c | 822 ++++++++++-- drivers/char/drm/i915_drm.h | 115 ++- drivers/char/drm/i915_drv.c | 493 +++++++- drivers/char/drm/i915_drv.h | 925 ++++++++++++- drivers/char/drm/i915_fence.c | 158 +++ drivers/char/drm/i915_irq.c | 304 ++++-- drivers/char/drm/i915_mem.c | 11 +- drivers/char/drm/mga_dma.c | 10 +- drivers/char/drm/mga_drv.h | 123 +- drivers/char/drm/mga_state.c | 24 +- drivers/char/drm/r128_cce.c | 6 +- drivers/char/drm/r128_drv.h | 5 +- drivers/char/drm/r128_state.c | 43 +- drivers/char/drm/r300_cmdbuf.c | 36 +- drivers/char/drm/r300_reg.h | 32 +- drivers/char/drm/radeon_cp.c | 16 +- drivers/char/drm/radeon_drm.h | 12 +- drivers/char/drm/radeon_drv.h | 15 +- drivers/char/drm/radeon_irq.c | 6 +- drivers/char/drm/radeon_mem.c | 6 +- drivers/char/drm/radeon_state.c | 15 +- drivers/char/drm/savage_state.c | 6 +- drivers/char/drm/sis_mm.c | 6 +- drivers/char/drm/via_dma.c | 20 +- drivers/char/drm/via_dmablit.c | 184 ++-- drivers/char/drm/via_dmablit.h | 84 +- drivers/char/drm/via_drm.h | 4 +- drivers/char/drm/via_drv.c | 2 +- drivers/char/drm/via_irq.c | 26 +- drivers/char/drm/via_map.c | 5 +- drivers/char/drm/via_mm.c | 6 +- drivers/char/drm/via_video.c | 4 +- 74 files changed, 10261 insertions(+), 924 deletions(-) diff --git a/drivers/char/drm/Kconfig b/drivers/char/drm/Kconfig index ba3058d..610d6fd 100644 --- a/drivers/char/drm/Kconfig +++ b/drivers/char/drm/Kconfig @@ -38,7 +38,7 @@ config DRM_RADEON Choose this option if you have an ATI Radeon graphics card. There are both PCI and AGP versions. You don't need to choose this to run the Radeon in plain VGA mode. - + If M is selected, the module will be called radeon. config DRM_I810 @@ -71,9 +71,9 @@ config DRM_I915 852GM, 855GM 865G or 915G integrated graphics. If M is selected, the module will be called i915. AGP support is required for this driver to work. 
This driver is used by the Intel driver in X.org 6.8 and - XFree86 4.4 and above. If unsure, build this and i830 as modules and + XFree86 4.4 and above. If unsure, build this and i830 as modules and the X server will load the correct one. - + endchoice config DRM_MGA @@ -88,7 +88,7 @@ config DRM_SIS tristate "SiS video cards" depends on DRM && AGP help - Choose this option if you have a SiS 630 or compatible video + Choose this option if you have a SiS 630 or compatible video chipset. If M is selected the module will be called sis. AGP support is required for this driver to work. @@ -105,4 +105,3 @@ config DRM_SAVAGE help Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister chipset. If M is selected the module will be called savage. - diff --git a/drivers/char/drm/Makefile b/drivers/char/drm/Makefile index 6915a05..85c4f9e 100644 --- a/drivers/char/drm/Makefile +++ b/drivers/char/drm/Makefile @@ -6,14 +6,15 @@ drm-objs := drm_auth.o drm_bufs.o drm_context.o drm_dma.o drm_drawable.o \ drm_drv.o drm_fops.o drm_ioctl.o drm_irq.o \ drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \ drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \ - drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o + drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o drm_object.o \ + drm_fence.o drm_ttm.o drm_bo.o drm_bo_move.o drm_bo_lock.o tdfx-objs := tdfx_drv.o r128-objs := r128_drv.o r128_cce.o r128_state.o r128_irq.o mga-objs := mga_drv.o mga_dma.o mga_state.o mga_warp.o mga_irq.o i810-objs := i810_drv.o i810_dma.o i830-objs := i830_drv.o i830_dma.o i830_irq.o -i915-objs := i915_drv.o i915_dma.o i915_irq.o i915_mem.o +i915-objs := i915_drv.o i915_dma.o i915_irq.o i915_mem.o i915_fence.o i915_buffer.o radeon-objs := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o radeon_irq.o r300_cmdbuf.o sis-objs := sis_drv.o sis_mm.o savage-objs := savage_drv.o savage_bci.o savage_state.o @@ -38,5 +39,3 @@ obj-$(CONFIG_DRM_I915) += i915.o obj-$(CONFIG_DRM_SIS) += sis.o obj-$(CONFIG_DRM_SAVAGE)+= savage.o obj-$(CONFIG_DRM_VIA) +=via.o - - diff --git a/drivers/char/drm/README.drm b/drivers/char/drm/README.drm index af74cd7..b5b3327 100644 --- a/drivers/char/drm/README.drm +++ b/drivers/char/drm/README.drm @@ -41,4 +41,3 @@ For specific information about kernel-level support, see: A Security Analysis of the Direct Rendering Infrastructure http://dri.sourceforge.net/doc/security_low_level.html - diff --git a/drivers/char/drm/ati_pcigart.c b/drivers/char/drm/ati_pcigart.c index 3345641..617abab 100644 --- a/drivers/char/drm/ati_pcigart.c +++ b/drivers/char/drm/ati_pcigart.c @@ -41,20 +41,19 @@ static void *drm_ati_alloc_pcigart_table(int order) struct page *page; int i; - DRM_DEBUG("%s: alloc %d order\n", __FUNCTION__, order); + DRM_DEBUG("%d order\n", order); address = __get_free_pages(GFP_KERNEL | __GFP_COMP, order); - if (address == 0UL) { + if (address == 0UL) return NULL; - } page = virt_to_page(address); for (i = 0; i < order; i++, page++) SetPageReserved(page); - DRM_DEBUG("%s: returning 0x%08lx\n", __FUNCTION__, address); + DRM_DEBUG("returning 0x%08lx\n", address); return (void *)address; } @@ -63,7 +62,7 @@ static void drm_ati_free_pcigart_table(void *address, int order) struct page *page; int i; int num_pages = 1 << order; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); page = virt_to_page((unsigned long)address); @@ -197,7 +196,7 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga page_base = (u32) entry->busaddr[i]; for (j = 0; j < (PAGE_SIZE / 
ATI_PCIGART_PAGE_SIZE); j++) { - switch(gart_info->gart_reg_if) { + switch (gart_info->gart_reg_if) { case DRM_ATI_GART_IGP: *pci_gart = cpu_to_le32((page_base) | 0xc); break; @@ -222,7 +221,7 @@ int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *ga mb(); #endif - done: +done: gart_info->addr = address; gart_info->bus_addr = bus_address; return ret; diff --git a/drivers/char/drm/drm.h b/drivers/char/drm/drm.h index 82fb3d0..8fe8ac1 100644 --- a/drivers/char/drm/drm.h +++ b/drivers/char/drm/drm.h @@ -190,6 +190,7 @@ enum drm_map_type { _DRM_AGP = 3, /**< AGP/GART */ _DRM_SCATTER_GATHER = 4, /**< Scatter/gather memory for PCI DMA */ _DRM_CONSISTENT = 5, /**< Consistent memory for PCI DMA */ + _DRM_TTM = 6 }; /** @@ -202,7 +203,8 @@ enum drm_map_flags { _DRM_KERNEL = 0x08, /**< kernel requires access */ _DRM_WRITE_COMBINING = 0x10, /**< use write-combining if available */ _DRM_CONTAINS_LOCK = 0x20, /**< SHM page that contains lock */ - _DRM_REMOVABLE = 0x40 /**< Removable mapping */ + _DRM_REMOVABLE = 0x40, /**< Removable mapping */ + _DRM_DRIVER = 0x80 /**< Managed by driver */ }; struct drm_ctx_priv_map { @@ -470,6 +472,7 @@ struct drm_irq_busid { enum drm_vblank_seq_type { _DRM_VBLANK_ABSOLUTE = 0x0, /**< Wait for specific vblank sequence number */ _DRM_VBLANK_RELATIVE = 0x1, /**< Wait for given number of vblanks */ + _DRM_VBLANK_FLIP = 0x8000000, /**< Scheduled buffer swap should flip */ _DRM_VBLANK_NEXTONMISS = 0x10000000, /**< If missed, wait for next vblank */ _DRM_VBLANK_SECONDARY = 0x20000000, /**< Secondary display controller */ _DRM_VBLANK_SIGNAL = 0x40000000 /**< Send signal instead of blocking */ @@ -572,6 +575,315 @@ struct drm_set_version { int drm_dd_minor; }; +#define DRM_FENCE_FLAG_EMIT 0x00000001 +#define DRM_FENCE_FLAG_SHAREABLE 0x00000002 +#define DRM_FENCE_FLAG_WAIT_LAZY 0x00000004 +#define DRM_FENCE_FLAG_WAIT_IGNORE_SIGNALS 0x00000008 +#define DRM_FENCE_FLAG_NO_USER 0x00000010 + +/* Reserved for driver use */ +#define DRM_FENCE_MASK_DRIVER 0xFF000000 + +#define DRM_FENCE_TYPE_EXE 0x00000001 + +struct drm_fence_arg { + unsigned int handle; + unsigned int fence_class; + unsigned int type; + unsigned int flags; + unsigned int signaled; + unsigned int error; + unsigned int sequence; + unsigned int pad64; + uint64_t expand_pad[2]; /*Future expansion */ +}; + +/* Buffer permissions, referring to how the GPU uses the buffers. + * these translate to fence types used for the buffers. + * Typically a texture buffer is read, A destination buffer is write and + * a command (batch-) buffer is exe. Can be or-ed together. + */ + +#define DRM_BO_FLAG_READ (1ULL << 0) +#define DRM_BO_FLAG_WRITE (1ULL << 1) +#define DRM_BO_FLAG_EXE (1ULL << 2) + +/* + * All of the bits related to access mode + */ +#define DRM_BO_MASK_ACCESS (DRM_BO_FLAG_READ | DRM_BO_FLAG_WRITE | DRM_BO_FLAG_EXE) +/* + * Status flags. Can be read to determine the actual state of a buffer. + * Can also be set in the buffer mask before validation. + */ + +/* + * Mask: Never evict this buffer. Not even with force. + * This type of buffer is only available to root and must be manually + * removed before buffer manager shutdown or lock. + * Flags: Acknowledge + */ +#define DRM_BO_FLAG_NO_EVICT (1ULL << 4) + +/* + * Mask: Require that the buffer is placed in mappable memory when validated. + * If not set the buffer may or may not be in mappable memory when validated. + * Flags: If set, the buffer is in mappable memory. 
+ */ +#define DRM_BO_FLAG_MAPPABLE (1ULL << 5) + +/* Mask: The buffer should be shareable with other processes. + * Flags: The buffer is shareable with other processes. + */ +#define DRM_BO_FLAG_SHAREABLE (1ULL << 6) + +/* Mask: If set, place the buffer in cache-coherent memory if available. + * If clear, never place the buffer in cache coherent memory if validated. + * Flags: The buffer is currently in cache-coherent memory. + */ +#define DRM_BO_FLAG_CACHED (1ULL << 7) + +/* Mask: Make sure that every time this buffer is validated, + * it ends up on the same location provided that the memory mask + * is the same. + * The buffer will also not be evicted when claiming space for + * other buffers. Basically a pinned buffer but it may be thrown out as + * part of buffer manager shutdown or locking. + * Flags: Acknowledge. + */ +#define DRM_BO_FLAG_NO_MOVE (1ULL << 8) + +/* Mask: Make sure the buffer is in cached memory when mapped. In conjunction + * with DRM_BO_FLAG_CACHED it also allows the buffer to be bound into the GART + * with unsnooped PTEs instead of snooped, by using chipset-specific cache + * flushing at bind time. A better name might be DRM_BO_FLAG_TT_UNSNOOPED, + * as the eviction to local memory (TTM unbind) on map is just a side effect + * to prevent aggressive cache prefetch from the GPU disturbing the cache + * management that the DRM is doing. + * + * Flags: Acknowledge. + * Buffers allocated with this flag should not be used for suballocators + * This type may have issues on CPUs with over-aggressive caching + * http://marc.info/?l=linux-kernel&m=102376926732464&w=2 + */ +#define DRM_BO_FLAG_CACHED_MAPPED (1ULL << 19) + + +/* Mask: Force DRM_BO_FLAG_CACHED flag strictly also if it is set. + * Flags: Acknowledge. + */ +#define DRM_BO_FLAG_FORCE_CACHING (1ULL << 13) + +/* + * Mask: Force DRM_BO_FLAG_MAPPABLE flag strictly also if it is clear. + * Flags: Acknowledge. + */ +#define DRM_BO_FLAG_FORCE_MAPPABLE (1ULL << 14) +#define DRM_BO_FLAG_TILE (1ULL << 15) + +/* + * Memory type flags that can be or'ed together in the mask, but only + * one appears in flags. + */ + +/* System memory */ +#define DRM_BO_FLAG_MEM_LOCAL (1ULL << 24) +/* Translation table memory */ +#define DRM_BO_FLAG_MEM_TT (1ULL << 25) +/* Vram memory */ +#define DRM_BO_FLAG_MEM_VRAM (1ULL << 26) +/* Up to the driver to define. */ +#define DRM_BO_FLAG_MEM_PRIV0 (1ULL << 27) +#define DRM_BO_FLAG_MEM_PRIV1 (1ULL << 28) +#define DRM_BO_FLAG_MEM_PRIV2 (1ULL << 29) +#define DRM_BO_FLAG_MEM_PRIV3 (1ULL << 30) +#define DRM_BO_FLAG_MEM_PRIV4 (1ULL << 31) +/* We can add more of these now with a 64-bit flag type */ + +/* + * This is a mask covering all of the memory type flags; easier to just + * use a single constant than a bunch of | values. It covers + * DRM_BO_FLAG_MEM_LOCAL through DRM_BO_FLAG_MEM_PRIV4 + */ +#define DRM_BO_MASK_MEM 0x00000000FF000000ULL +/* + * This adds all of the CPU-mapping options in with the memory + * type to label all bits which change how the page gets mapped + */ +#define DRM_BO_MASK_MEMTYPE (DRM_BO_MASK_MEM | \ + DRM_BO_FLAG_CACHED_MAPPED | \ + DRM_BO_FLAG_CACHED | \ + DRM_BO_FLAG_MAPPABLE) + +/* Driver-private flags */ +#define DRM_BO_MASK_DRIVER 0xFFFF000000000000ULL + +/* + * Don't block on validate and map. Instead, return EBUSY. + */ +#define DRM_BO_HINT_DONT_BLOCK 0x00000002 +/* + * Don't place this buffer on the unfenced list. 
This means + * that the buffer will not end up having a fence associated + * with it as a result of this operation + */ +#define DRM_BO_HINT_DONT_FENCE 0x00000004 +/* + * Sleep while waiting for the operation to complete. + * Without this flag, the kernel will, instead, spin + * until this operation has completed. I'm not sure + * why you would ever want this, so please always + * provide DRM_BO_HINT_WAIT_LAZY to any operation + * which may block + */ +#define DRM_BO_HINT_WAIT_LAZY 0x00000008 +/* + * The client has compute relocations refering to this buffer using the + * offset in the presumed_offset field. If that offset ends up matching + * where this buffer lands, the kernel is free to skip executing those + * relocations + */ +#define DRM_BO_HINT_PRESUMED_OFFSET 0x00000010 + + +#define DRM_BO_INIT_MAGIC 0xfe769812 +#define DRM_BO_INIT_MAJOR 1 +#define DRM_BO_INIT_MINOR 0 +#define DRM_BO_INIT_PATCH 0 + + +struct drm_bo_info_req { + uint64_t mask; + uint64_t flags; + unsigned int handle; + unsigned int hint; + unsigned int fence_class; + unsigned int desired_tile_stride; + unsigned int tile_info; + unsigned int pad64; + uint64_t presumed_offset; +}; + +struct drm_bo_create_req { + uint64_t flags; + uint64_t size; + uint64_t buffer_start; + unsigned int hint; + unsigned int page_alignment; +}; + + +/* + * Reply flags + */ + +#define DRM_BO_REP_BUSY 0x00000001 + +struct drm_bo_info_rep { + uint64_t flags; + uint64_t proposed_flags; + uint64_t size; + uint64_t offset; + uint64_t arg_handle; + uint64_t buffer_start; + unsigned int handle; + unsigned int fence_flags; + unsigned int rep_flags; + unsigned int page_alignment; + unsigned int desired_tile_stride; + unsigned int hw_tile_stride; + unsigned int tile_info; + unsigned int pad64; + uint64_t expand_pad[4]; /*Future expansion */ +}; + +struct drm_bo_arg_rep { + struct drm_bo_info_rep bo_info; + int ret; + unsigned int pad64; +}; + +struct drm_bo_create_arg { + union { + struct drm_bo_create_req req; + struct drm_bo_info_rep rep; + } d; +}; + +struct drm_bo_handle_arg { + unsigned int handle; +}; + +struct drm_bo_reference_info_arg { + union { + struct drm_bo_handle_arg req; + struct drm_bo_info_rep rep; + } d; +}; + +struct drm_bo_map_wait_idle_arg { + union { + struct drm_bo_info_req req; + struct drm_bo_info_rep rep; + } d; +}; + +struct drm_bo_op_req { + enum { + drm_bo_validate, + drm_bo_fence, + drm_bo_ref_fence, + } op; + unsigned int arg_handle; + struct drm_bo_info_req bo_req; +}; + + +struct drm_bo_op_arg { + uint64_t next; + union { + struct drm_bo_op_req req; + struct drm_bo_arg_rep rep; + } d; + int handled; + unsigned int pad64; +}; + + +#define DRM_BO_MEM_LOCAL 0 +#define DRM_BO_MEM_TT 1 +#define DRM_BO_MEM_VRAM 2 +#define DRM_BO_MEM_PRIV0 3 +#define DRM_BO_MEM_PRIV1 4 +#define DRM_BO_MEM_PRIV2 5 +#define DRM_BO_MEM_PRIV3 6 +#define DRM_BO_MEM_PRIV4 7 + +#define DRM_BO_MEM_TYPES 8 /* For now. 
*/ + +#define DRM_BO_LOCK_UNLOCK_BM (1 << 0) +#define DRM_BO_LOCK_IGNORE_NO_EVICT (1 << 1) + +struct drm_bo_version_arg { + uint32_t major; + uint32_t minor; + uint32_t patchlevel; +}; + +struct drm_mm_type_arg { + unsigned int mem_type; + unsigned int lock_flags; +}; + +struct drm_mm_init_arg { + unsigned int magic; + unsigned int major; + unsigned int minor; + unsigned int mem_type; + uint64_t p_offset; + uint64_t p_size; +}; + #define DRM_IOCTL_BASE 'd' #define DRM_IO(nr) _IO(DRM_IOCTL_BASE,nr) #define DRM_IOR(nr,type) _IOR(DRM_IOCTL_BASE,nr,type) @@ -634,6 +946,29 @@ struct drm_set_version { #define DRM_IOCTL_UPDATE_DRAW DRM_IOW(0x3f, struct drm_update_draw) +#define DRM_IOCTL_MM_INIT DRM_IOWR(0xc0, struct drm_mm_init_arg) +#define DRM_IOCTL_MM_TAKEDOWN DRM_IOWR(0xc1, struct drm_mm_type_arg) +#define DRM_IOCTL_MM_LOCK DRM_IOWR(0xc2, struct drm_mm_type_arg) +#define DRM_IOCTL_MM_UNLOCK DRM_IOWR(0xc3, struct drm_mm_type_arg) + +#define DRM_IOCTL_FENCE_CREATE DRM_IOWR(0xc4, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_REFERENCE DRM_IOWR(0xc6, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_UNREFERENCE DRM_IOWR(0xc7, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_SIGNALED DRM_IOWR(0xc8, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_FLUSH DRM_IOWR(0xc9, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_WAIT DRM_IOWR(0xca, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_EMIT DRM_IOWR(0xcb, struct drm_fence_arg) +#define DRM_IOCTL_FENCE_BUFFERS DRM_IOWR(0xcc, struct drm_fence_arg) + +#define DRM_IOCTL_BO_CREATE DRM_IOWR(0xcd, struct drm_bo_create_arg) +#define DRM_IOCTL_BO_MAP DRM_IOWR(0xcf, struct drm_bo_map_wait_idle_arg) +#define DRM_IOCTL_BO_UNMAP DRM_IOWR(0xd0, struct drm_bo_handle_arg) +#define DRM_IOCTL_BO_REFERENCE DRM_IOWR(0xd1, struct drm_bo_reference_info_arg) +#define DRM_IOCTL_BO_UNREFERENCE DRM_IOWR(0xd2, struct drm_bo_handle_arg) +#define DRM_IOCTL_BO_SETSTATUS DRM_IOWR(0xd3, struct drm_bo_map_wait_idle_arg) +#define DRM_IOCTL_BO_INFO DRM_IOWR(0xd4, struct drm_bo_reference_info_arg) +#define DRM_IOCTL_BO_WAIT_IDLE DRM_IOWR(0xd5, struct drm_bo_map_wait_idle_arg) +#define DRM_IOCTL_BO_VERSION DRM_IOR(0xd6, struct drm_bo_version_arg) /** * Device specific ioctls should only be in their respective headers * The device specific ioctl range is from 0x40 to 0x99. diff --git a/drivers/char/drm/drmP.h b/drivers/char/drm/drmP.h index dde02a1..d8b3bcd 100644 --- a/drivers/char/drm/drmP.h +++ b/drivers/char/drm/drmP.h @@ -55,6 +55,7 @@ #include #include /* For (un)lock_kernel */ #include +#include #include #include #if defined(__alpha__) || defined(__powerpc__) @@ -66,6 +67,7 @@ #ifdef CONFIG_MTRR #include #endif +#include #if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) #include #include @@ -144,9 +146,22 @@ struct drm_device; #define DRM_MEM_CTXLIST 21 #define DRM_MEM_MM 22 #define DRM_MEM_HASHTAB 23 +#define DRM_MEM_OBJECTS 24 +#define DRM_MEM_FENCE 25 +#define DRM_MEM_TTM 26 +#define DRM_MEM_BUFOBJ 27 #define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8) #define DRM_MAP_HASH_OFFSET 0x10000000 +#define DRM_MAP_HASH_ORDER 12 +#define DRM_OBJECT_HASH_ORDER 12 +#define DRM_FILE_PAGE_OFFSET_START ((0xFFFFFFFFUL >> PAGE_SHIFT) + 1) +#define DRM_FILE_PAGE_OFFSET_SIZE ((0xFFFFFFFFUL >> PAGE_SHIFT) * 16) +/* + * This should be small enough to allow the use of kmalloc for hash tables + * instead of vmalloc. 
+ */ +#define DRM_FILE_HASH_ORDER 8 /*@}*/ @@ -292,7 +307,6 @@ struct drm_magic_entry { struct list_head head; struct drm_hash_item hash_item; struct drm_file *priv; - struct drm_magic_entry *next; }; struct drm_vma_entry { @@ -375,6 +389,12 @@ struct drm_buf_entry { struct drm_freelist freelist; }; +enum drm_ref_type { + _DRM_REF_USE = 0, + _DRM_REF_TYPE1, + _DRM_NO_REF_TYPES +}; + /** File private data */ struct drm_file { int authenticated; @@ -388,8 +408,16 @@ struct drm_file { struct drm_head *head; int remove_auth_on_close; unsigned long lock_count; - void *driver_priv; + /* + * The user object hash table is global and resides in the + * drm_device structure. We protect the lists and hash tables with the + * device struct_mutex. A bit coarse-grained but probably the best + * option. + */ + struct list_head refd_objects; + struct drm_open_hash refd_object_hash[_DRM_NO_REF_TYPES]; struct file *filp; + void *driver_priv; }; /** Wait queue */ @@ -401,11 +429,9 @@ struct drm_queue { wait_queue_head_t read_queue; /**< Processes waiting on block_read */ atomic_t block_write; /**< Queue blocked for writes */ wait_queue_head_t write_queue; /**< Processes waiting on block_write */ -#if 1 atomic_t total_queued; /**< Total queued statistic */ atomic_t total_flushed; /**< Total flushes statistic */ atomic_t total_locks; /**< Total locks statistics */ -#endif enum drm_ctx_flags flags; /**< Context preserving and 2D-only */ struct drm_waitlist waitlist; /**< Pending buffers */ wait_queue_head_t flush_queue; /**< Processes waiting until flush */ @@ -416,7 +442,8 @@ struct drm_queue { */ struct drm_lock_data { struct drm_hw_lock *hw_lock; /**< Hardware lock */ - struct drm_file *file_priv; /**< File descr of lock holder (0=kernel) */ + /** Private of lock holder's file (NULL=kernel) */ + struct drm_file *file_priv; wait_queue_head_t lock_queue; /**< Queue of blocked processes */ unsigned long lock_time; /**< Time of last lock in jiffies */ spinlock_t spinlock; @@ -491,6 +518,27 @@ struct drm_sigdata { struct drm_hw_lock *lock; }; + +/* + * Generic memory manager structs + */ + +struct drm_mm_node { + struct list_head fl_entry; + struct list_head ml_entry; + int free; + unsigned long start; + unsigned long size; + struct drm_mm *mm; + void *private; +}; + +struct drm_mm { + struct list_head fl_entry; + struct list_head ml_entry; +}; + + /** * Mappings list */ @@ -498,7 +546,8 @@ struct drm_map_list { struct list_head head; /**< list head */ struct drm_hash_item hash; struct drm_map *map; /**< mapping */ - unsigned int user_token; + uint64_t user_token; + struct drm_mm_node *file_offset_node; }; typedef struct drm_map drm_local_map_t; @@ -536,23 +585,7 @@ struct drm_ati_pcigart_info { int table_size; }; -/* - * Generic memory manager structs - */ -struct drm_mm_node { - struct list_head fl_entry; - struct list_head ml_entry; - int free; - unsigned long start; - unsigned long size; - struct drm_mm *mm; - void *private; -}; - -struct drm_mm { - struct list_head fl_entry; - struct list_head ml_entry; -}; +#include "drm_objects.h" /** * DRM driver structure. 
This structure represent the common code for @@ -567,6 +600,8 @@ struct drm_driver { void (*postclose) (struct drm_device *, struct drm_file *); void (*lastclose) (struct drm_device *); int (*unload) (struct drm_device *); + int (*suspend) (struct drm_device *); + int (*resume) (struct drm_device *); int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv); void (*dma_ready) (struct drm_device *); int (*dma_quiescent) (struct drm_device *); @@ -609,6 +644,9 @@ struct drm_driver { void (*set_version) (struct drm_device *dev, struct drm_set_version *sv); + struct drm_fence_driver *fence_driver; + struct drm_bo_driver *bo_driver; + int major; int minor; int patchlevel; @@ -642,6 +680,7 @@ struct drm_head { * may contain multiple heads. */ struct drm_device { + struct device dev; /**< Linux device */ char *unique; /**< Unique identifier: e.g., busid */ int unique_len; /**< Length of unique field */ char *devname; /**< For /proc/interrupts */ @@ -683,6 +722,10 @@ struct drm_device { struct list_head maplist; /**< Linked list of regions */ int map_count; /**< Number of mappable regions */ struct drm_open_hash map_hash; /**< User token hash table for maps */ + struct drm_mm offset_manager; /**< User token manager */ + struct drm_open_hash object_hash; /**< User token hash table for objects */ + struct address_space *dev_mapping; /**< For unmap_mapping_range() */ + struct page *ttm_dummy_page; /** \name Context handle management */ /*@{ */ @@ -750,7 +793,6 @@ struct drm_device { struct pci_controller *hose; #endif struct drm_sg_mem *sg; /**< Scatter gather memory */ - unsigned long *ctx_bitmap; /**< context bitmap */ void *dev_private; /**< device private data */ struct drm_sigdata sigdata; /**< For block_all_signals */ sigset_t sigmask; @@ -760,6 +802,9 @@ struct drm_device { unsigned int agp_buffer_token; struct drm_head primary; /**< primary screen head */ + struct drm_fence_manager fm; + struct drm_buffer_manager bm; + /** \name Drawable information */ /*@{ */ spinlock_t drw_lock; @@ -767,6 +812,15 @@ struct drm_device { /*@} */ }; +#if __OS_HAS_AGP +struct drm_agp_ttm_backend { + struct drm_ttm_backend backend; + DRM_AGP_MEM *mem; + struct agp_bridge_data *bridge; + int populated; +}; +#endif + static __inline__ int drm_core_check_feature(struct drm_device *dev, int feature) { @@ -847,6 +901,8 @@ extern int drm_release(struct inode *inode, struct file *filp); /* Mapping support (drm_vm.h) */ extern int drm_mmap(struct file *filp, struct vm_area_struct *vma); +extern unsigned long drm_core_get_map_ofs(struct drm_map * map); +extern unsigned long drm_core_get_reg_ofs(struct drm_device *dev); extern unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait); /* Memory management support (drm_memory.h) */ @@ -861,6 +917,15 @@ extern int drm_free_agp(DRM_AGP_MEM * handle, int pages); extern int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start); extern int drm_unbind_agp(DRM_AGP_MEM * handle); +extern void drm_free_memctl(size_t size); +extern int drm_alloc_memctl(size_t size); +extern void drm_query_memctl(uint64_t *cur_used, + uint64_t *low_threshold, + uint64_t *high_threshold); +extern void drm_init_memctl(size_t low_threshold, + size_t high_threshold, + size_t unit_size); + /* Misc. 
IOCTL support (drm_ioctl.h) */ extern int drm_irq_by_busid(struct drm_device *dev, void *data, struct drm_file *file_priv); @@ -1018,7 +1083,8 @@ extern DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data *bridge, size extern int drm_agp_free_memory(DRM_AGP_MEM * handle); extern int drm_agp_bind_memory(DRM_AGP_MEM * handle, off_t start); extern int drm_agp_unbind_memory(DRM_AGP_MEM * handle); - +extern struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev); +extern void drm_agp_chipset_flush(struct drm_device *dev); /* Stub support (drm_stub.h) */ extern int drm_get_dev(struct pci_dev *pdev, const struct pci_device_id *ent, struct drm_driver *driver); @@ -1061,11 +1127,11 @@ extern void __drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah); extern void drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah); /* sysfs support (drm_sysfs.c) */ +struct drm_sysfs_class; extern struct class *drm_sysfs_create(struct module *owner, char *name); -extern void drm_sysfs_destroy(struct class *cs); -extern struct class_device *drm_sysfs_device_add(struct class *cs, - struct drm_head *head); -extern void drm_sysfs_device_remove(struct class_device *class_dev); +extern void drm_sysfs_destroy(void); +extern int drm_sysfs_device_add(struct drm_device *dev, struct drm_head *head); +extern void drm_sysfs_device_remove(struct drm_device *dev); /* * Basic memory manager support (drm_mm.c) @@ -1073,7 +1139,7 @@ extern void drm_sysfs_device_remove(struct class_device *class_dev); extern struct drm_mm_node *drm_mm_get_block(struct drm_mm_node * parent, unsigned long size, unsigned alignment); -void drm_mm_put_block(struct drm_mm_node * cur); +extern void drm_mm_put_block(struct drm_mm_node * cur); extern struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm, unsigned long size, unsigned alignment, int best_match); extern int drm_mm_init(struct drm_mm *mm, unsigned long start, unsigned long size); @@ -1142,10 +1208,40 @@ extern void drm_free(void *pt, size_t size, int area); extern void *drm_calloc(size_t nmemb, size_t size, int area); #endif -/*@}*/ +/* + * Accounting variants of standard calls. + */ -extern unsigned long drm_core_get_map_ofs(struct drm_map * map); -extern unsigned long drm_core_get_reg_ofs(struct drm_device *dev); +static inline void *drm_ctl_alloc(size_t size, int area) +{ + void *ret; + if (drm_alloc_memctl(size)) + return NULL; + ret = drm_alloc(size, area); + if (!ret) + drm_free_memctl(size); + return ret; +} + +static inline void *drm_ctl_calloc(size_t nmemb, size_t size, int area) +{ + void *ret; + + if (drm_alloc_memctl(nmemb*size)) + return NULL; + ret = drm_calloc(nmemb, size, area); + if (!ret) + drm_free_memctl(nmemb*size); + return ret; +} + +static inline void drm_ctl_free(void *pt, size_t size, int area) +{ + drm_free(pt, size, area); + drm_free_memctl(size); +} + +/*@}*/ #endif /* __KERNEL__ */ #endif diff --git a/drivers/char/drm/drm_agpsupport.c b/drivers/char/drm/drm_agpsupport.c index 214f4fb..c96a2a3 100644 --- a/drivers/char/drm/drm_agpsupport.c +++ b/drivers/char/drm/drm_agpsupport.c @@ -68,7 +68,6 @@ int drm_agp_info(struct drm_device *dev, struct drm_agp_info *info) return 0; } - EXPORT_SYMBOL(drm_agp_info); int drm_agp_info_ioctl(struct drm_device *dev, void *data, @@ -93,7 +92,7 @@ int drm_agp_info_ioctl(struct drm_device *dev, void *data, * Verifies the AGP device hasn't been acquired before and calls * \c agp_backend_acquire. 
*/ -int drm_agp_acquire(struct drm_device * dev) +int drm_agp_acquire(struct drm_device *dev) { if (!dev->agp) return -ENODEV; @@ -104,7 +103,6 @@ int drm_agp_acquire(struct drm_device * dev) dev->agp->acquired = 1; return 0; } - EXPORT_SYMBOL(drm_agp_acquire); /** @@ -133,7 +131,7 @@ int drm_agp_acquire_ioctl(struct drm_device *dev, void *data, * * Verifies the AGP device has been acquired and calls \c agp_backend_release. */ -int drm_agp_release(struct drm_device * dev) +int drm_agp_release(struct drm_device *dev) { if (!dev->agp || !dev->agp->acquired) return -EINVAL; @@ -159,18 +157,16 @@ int drm_agp_release_ioctl(struct drm_device *dev, void *data, * Verifies the AGP device has been acquired but not enabled, and calls * \c agp_enable. */ -int drm_agp_enable(struct drm_device * dev, struct drm_agp_mode mode) +int drm_agp_enable(struct drm_device *dev, struct drm_agp_mode mode) { if (!dev->agp || !dev->agp->acquired) return -EINVAL; dev->agp->mode = mode.mode; agp_enable(dev->agp->bridge, mode.mode); - dev->agp->base = dev->agp->agp_info.aper_base; dev->agp->enabled = 1; return 0; } - EXPORT_SYMBOL(drm_agp_enable); int drm_agp_enable_ioctl(struct drm_device *dev, void *data, @@ -245,7 +241,7 @@ int drm_agp_alloc_ioctl(struct drm_device *dev, void *data, * * Walks through drm_agp_head::memory until finding a matching handle. */ -static struct drm_agp_mem *drm_agp_lookup_entry(struct drm_device * dev, +static struct drm_agp_mem *drm_agp_lookup_entry(struct drm_device *dev, unsigned long handle) { struct drm_agp_mem *entry; @@ -417,19 +413,19 @@ struct drm_agp_head *drm_agp_init(struct drm_device *dev) INIT_LIST_HEAD(&head->memory); head->cant_use_aperture = head->agp_info.cant_use_aperture; head->page_mask = head->agp_info.page_mask; - + head->base = head->agp_info.aper_base; return head; } /** Calls agp_allocate_memory() */ -DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data * bridge, +DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data *bridge, size_t pages, u32 type) { return agp_allocate_memory(bridge, pages, type); } /** Calls agp_free_memory() */ -int drm_agp_free_memory(DRM_AGP_MEM * handle) +int drm_agp_free_memory(DRM_AGP_MEM *handle) { if (!handle) return 0; @@ -438,7 +434,7 @@ int drm_agp_free_memory(DRM_AGP_MEM * handle) } /** Calls agp_bind_memory() */ -int drm_agp_bind_memory(DRM_AGP_MEM * handle, off_t start) +int drm_agp_bind_memory(DRM_AGP_MEM *handle, off_t start) { if (!handle) return -EINVAL; @@ -446,11 +442,188 @@ int drm_agp_bind_memory(DRM_AGP_MEM * handle, off_t start) } /** Calls agp_unbind_memory() */ -int drm_agp_unbind_memory(DRM_AGP_MEM * handle) +int drm_agp_unbind_memory(DRM_AGP_MEM *handle) { if (!handle) return -EINVAL; return agp_unbind_memory(handle); } + +/* + * AGP ttm backend interface. + */ + +#ifndef AGP_USER_TYPES +#define AGP_USER_TYPES (1 << 16) +#define AGP_USER_MEMORY (AGP_USER_TYPES) +#define AGP_USER_CACHED_MEMORY (AGP_USER_TYPES + 1) +#endif +#define AGP_REQUIRED_MAJOR 0 +#define AGP_REQUIRED_MINOR 102 + +static int drm_agp_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) +{ + return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 
0 : 1); +} + + +static int drm_agp_populate(struct drm_ttm_backend *backend, + unsigned long num_pages, struct page **pages, + struct page *dummy_read_page) +{ + struct drm_agp_ttm_backend *agp_be = + container_of(backend, struct drm_agp_ttm_backend, backend); + struct page **cur_page, **last_page = pages + num_pages; + DRM_AGP_MEM *mem; + int dummy_page_count = 0; + + if (drm_alloc_memctl(num_pages * sizeof(void *))) + return -1; + + DRM_DEBUG("drm_agp_populate_ttm\n"); + mem = drm_agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY); + if (!mem) { + drm_free_memctl(num_pages * sizeof(void *)); + return -1; + } + + DRM_DEBUG("Current page count is %ld\n", (long) mem->page_count); + mem->page_count = 0; + for (cur_page = pages; cur_page < last_page; ++cur_page) { + struct page *page = *cur_page; + if (!page) { + page = dummy_read_page; + ++dummy_page_count; + } + mem->memory[mem->page_count++] = phys_to_gart(page_to_phys(page)); + } + if (dummy_page_count) + DRM_DEBUG("Mapped %d dummy pages\n", dummy_page_count); + agp_be->mem = mem; + return 0; +} + +static int drm_agp_bind_ttm(struct drm_ttm_backend *backend, + struct drm_bo_mem_reg *bo_mem) +{ + struct drm_agp_ttm_backend *agp_be = + container_of(backend, struct drm_agp_ttm_backend, backend); + DRM_AGP_MEM *mem = agp_be->mem; + int ret; + int snooped = (bo_mem->flags & DRM_BO_FLAG_CACHED) && !(bo_mem->flags & DRM_BO_FLAG_CACHED_MAPPED); + + DRM_DEBUG("drm_agp_bind_ttm\n"); + mem->is_flushed = TRUE; + mem->type = AGP_USER_MEMORY; + /* CACHED MAPPED implies not snooped memory */ + if (snooped) + mem->type = AGP_USER_CACHED_MEMORY; + + ret = drm_agp_bind_memory(mem, bo_mem->mm_node->start); + if (ret) + DRM_ERROR("AGP Bind memory failed\n"); + + DRM_FLAG_MASKED(backend->flags, (bo_mem->flags & DRM_BO_FLAG_CACHED) ? 
+ DRM_BE_FLAG_BOUND_CACHED : 0, + DRM_BE_FLAG_BOUND_CACHED); + return ret; +} + +static int drm_agp_unbind_ttm(struct drm_ttm_backend *backend) +{ + struct drm_agp_ttm_backend *agp_be = + container_of(backend, struct drm_agp_ttm_backend, backend); + + DRM_DEBUG("drm_agp_unbind_ttm\n"); + if (agp_be->mem->is_bound) + return drm_agp_unbind_memory(agp_be->mem); + else + return 0; +} + +static void drm_agp_clear_ttm(struct drm_ttm_backend *backend) +{ + struct drm_agp_ttm_backend *agp_be = + container_of(backend, struct drm_agp_ttm_backend, backend); + DRM_AGP_MEM *mem = agp_be->mem; + + DRM_DEBUG("drm_agp_clear_ttm\n"); + if (mem) { + unsigned long num_pages = mem->page_count; + backend->func->unbind(backend); + agp_free_memory(mem); + drm_free_memctl(num_pages * sizeof(void *)); + } + agp_be->mem = NULL; +} + +static void drm_agp_destroy_ttm(struct drm_ttm_backend *backend) +{ + struct drm_agp_ttm_backend *agp_be; + + if (backend) { + DRM_DEBUG("drm_agp_destroy_ttm\n"); + agp_be = container_of(backend, struct drm_agp_ttm_backend, backend); + if (agp_be) { + if (agp_be->mem) + backend->func->clear(backend); + drm_ctl_free(agp_be, sizeof(*agp_be), DRM_MEM_TTM); + } + } +} + +static struct drm_ttm_backend_func agp_ttm_backend = { + .needs_ub_cache_adjust = drm_agp_needs_unbind_cache_adjust, + .populate = drm_agp_populate, + .clear = drm_agp_clear_ttm, + .bind = drm_agp_bind_ttm, + .unbind = drm_agp_unbind_ttm, + .destroy = drm_agp_destroy_ttm, +}; + +struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev) +{ + struct drm_agp_ttm_backend *agp_be; + struct agp_kern_info *info; + + if (!dev->agp) { + DRM_ERROR("AGP is not initialized.\n"); + return NULL; + } + info = &dev->agp->agp_info; + + if (info->version.major != AGP_REQUIRED_MAJOR || + info->version.minor < AGP_REQUIRED_MINOR) { + DRM_ERROR("Wrong agpgart version %d.%d\n" + "\tYou need at least version %d.%d.\n", + info->version.major, + info->version.minor, + AGP_REQUIRED_MAJOR, + AGP_REQUIRED_MINOR); + return NULL; + } + + + agp_be = drm_ctl_calloc(1, sizeof(*agp_be), DRM_MEM_TTM); + if (!agp_be) + return NULL; + + agp_be->mem = NULL; + + agp_be->bridge = dev->agp->bridge; + agp_be->populated = FALSE; + agp_be->backend.func = &agp_ttm_backend; + agp_be->backend.dev = dev; + + return &agp_be->backend; +} +EXPORT_SYMBOL(drm_agp_init_ttm); + +void drm_agp_chipset_flush(struct drm_device *dev) +{ + agp_flush_chipset(dev->agp->bridge); +} +EXPORT_SYMBOL(drm_agp_chipset_flush); + #endif /* __OS_HAS_AGP */ diff --git a/drivers/char/drm/drm_auth.c b/drivers/char/drm/drm_auth.c index a734627..5cd9e8d 100644 --- a/drivers/char/drm/drm_auth.c +++ b/drivers/char/drm/drm_auth.c @@ -45,7 +45,7 @@ * the one with matching magic number, while holding the drm_device::struct_mutex * lock. */ -static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic) +static struct drm_file *drm_find_file(struct drm_device *dev, drm_magic_t magic) { struct drm_file *retval = NULL; struct drm_magic_entry *pt; @@ -71,7 +71,7 @@ static struct drm_file *drm_find_file(struct drm_device * dev, drm_magic_t magic * associated the magic number hash key in drm_device::magiclist, while holding * the drm_device::struct_mutex lock. 
*/ -static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, +static int drm_add_magic(struct drm_device *dev, struct drm_file *priv, drm_magic_t magic) { struct drm_magic_entry *entry; @@ -102,7 +102,7 @@ static int drm_add_magic(struct drm_device * dev, struct drm_file * priv, * Searches and unlinks the entry in drm_device::magiclist with the magic * number hash key, while holding the drm_device::struct_mutex lock. */ -static int drm_remove_magic(struct drm_device * dev, drm_magic_t magic) +static int drm_remove_magic(struct drm_device *dev, drm_magic_t magic) { struct drm_magic_entry *pt; struct drm_hash_item *hash; @@ -139,7 +139,7 @@ static int drm_remove_magic(struct drm_device * dev, drm_magic_t magic) */ int drm_getmagic(struct drm_device *dev, void *data, struct drm_file *file_priv) { - static drm_magic_t sequence = 0; + static drm_magic_t sequence; static DEFINE_SPINLOCK(lock); struct drm_auth *auth = data; diff --git a/drivers/char/drm/drm_bo.c b/drivers/char/drm/drm_bo.c new file mode 100644 index 0000000..3145328 --- /dev/null +++ b/drivers/char/drm/drm_bo.c @@ -0,0 +1,2669 @@ +/************************************************************************** + * + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include "drmP.h" + +/* + * Locking may look a bit complicated but isn't really: + * + * The buffer usage atomic_t needs to be protected by dev->struct_mutex + * when there is a chance that it can be zero before or after the operation. + * + * dev->struct_mutex also protects all lists and list heads, + * Hash tables and hash heads. + * + * bo->mutex protects the buffer object itself excluding the usage field. + * bo->mutex does also protect the buffer list heads, so to manipulate those, + * we need both the bo->mutex and the dev->struct_mutex. + * + * Locking order is bo->mutex, dev->struct_mutex. Therefore list traversal + * is a bit complicated. When dev->struct_mutex is released to grab bo->mutex, + * the list traversal will, in general, need to be restarted. 
+ * + */ + +static void drm_bo_destroy_locked(struct drm_buffer_object *bo); +static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo); +static void drm_bo_takedown_vm_locked(struct drm_buffer_object *bo); +static void drm_bo_unmap_virtual(struct drm_buffer_object *bo); + +static inline uint64_t drm_bo_type_flags(unsigned type) +{ + return (1ULL << (24 + type)); +} + +/* + * bo locked. dev->struct_mutex locked. + */ + +void drm_bo_add_to_pinned_lru(struct drm_buffer_object *bo) +{ + struct drm_mem_type_manager *man; + + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); + DRM_ASSERT_LOCKED(&bo->mutex); + + man = &bo->dev->bm.man[bo->pinned_mem_type]; + list_add_tail(&bo->pinned_lru, &man->pinned); +} + +void drm_bo_add_to_lru(struct drm_buffer_object *bo) +{ + struct drm_mem_type_manager *man; + + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); + + if (!(bo->mem.proposed_flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)) + || bo->mem.mem_type != bo->pinned_mem_type) { + man = &bo->dev->bm.man[bo->mem.mem_type]; + list_add_tail(&bo->lru, &man->lru); + } else { + INIT_LIST_HEAD(&bo->lru); + } +} + +static int drm_bo_vm_pre_move(struct drm_buffer_object *bo, int old_is_pci) +{ + if (!bo->map_list.map) + return 0; + + drm_bo_unmap_virtual(bo); + return 0; +} + +static void drm_bo_vm_post_move(struct drm_buffer_object *bo) +{ +} + +/* + * Call bo->mutex locked. + */ + +static int drm_bo_add_ttm(struct drm_buffer_object *bo) +{ + struct drm_device *dev = bo->dev; + int ret = 0; + uint32_t page_flags = 0; + + DRM_ASSERT_LOCKED(&bo->mutex); + bo->ttm = NULL; + + if (bo->mem.proposed_flags & DRM_BO_FLAG_WRITE) + page_flags |= DRM_TTM_PAGE_WRITE; + + switch (bo->type) { + case drm_bo_type_device: + case drm_bo_type_kernel: + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, + page_flags, dev->bm.dummy_read_page); + if (!bo->ttm) + ret = -ENOMEM; + break; + case drm_bo_type_user: + bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, + page_flags | DRM_TTM_PAGE_USER, + dev->bm.dummy_read_page); + if (!bo->ttm) + ret = -ENOMEM; + + ret = drm_ttm_set_user(bo->ttm, current, + bo->buffer_start, + bo->num_pages); + if (ret) + return ret; + + break; + default: + DRM_ERROR("Illegal buffer object type\n"); + ret = -EINVAL; + break; + } + + return ret; +} + +static int drm_bo_handle_move_mem(struct drm_buffer_object *bo, + struct drm_bo_mem_reg *mem, + int evict, int no_wait) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + int old_is_pci = drm_mem_reg_is_pci(dev, &bo->mem); + int new_is_pci = drm_mem_reg_is_pci(dev, mem); + struct drm_mem_type_manager *old_man = &bm->man[bo->mem.mem_type]; + struct drm_mem_type_manager *new_man = &bm->man[mem->mem_type]; + int ret = 0; + + if (old_is_pci || new_is_pci || + ((mem->flags ^ bo->mem.flags) & DRM_BO_FLAG_CACHED)) + ret = drm_bo_vm_pre_move(bo, old_is_pci); + if (ret) + return ret; + + /* + * Create and bind a ttm if required. 
+ */ + + if (!(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm == NULL)) { + ret = drm_bo_add_ttm(bo); + if (ret) + goto out_err; + + if (mem->mem_type != DRM_BO_MEM_LOCAL) { + ret = drm_ttm_bind(bo->ttm, mem); + if (ret) + goto out_err; + } + } + + if ((bo->mem.mem_type == DRM_BO_MEM_LOCAL) && bo->ttm == NULL) { + + struct drm_bo_mem_reg *old_mem = &bo->mem; + uint64_t save_flags = old_mem->flags; + uint64_t save_proposed_flags = old_mem->proposed_flags; + + *old_mem = *mem; + mem->mm_node = NULL; + old_mem->proposed_flags = save_proposed_flags; + DRM_FLAG_MASKED(save_flags, mem->flags, DRM_BO_MASK_MEMTYPE); + + } else if (!(old_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && + !(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED)) { + + ret = drm_bo_move_ttm(bo, evict, no_wait, mem); + + } else if (dev->driver->bo_driver->move) { + ret = dev->driver->bo_driver->move(bo, evict, no_wait, mem); + + } else { + + ret = drm_bo_move_memcpy(bo, evict, no_wait, mem); + + } + + if (ret) + goto out_err; + + if (old_is_pci || new_is_pci) + drm_bo_vm_post_move(bo); + + if (bo->priv_flags & _DRM_BO_FLAG_EVICTED) { + ret = + dev->driver->bo_driver->invalidate_caches(dev, + bo->mem.flags); + if (ret) + DRM_ERROR("Can not flush read caches\n"); + } + + DRM_FLAG_MASKED(bo->priv_flags, + (evict) ? _DRM_BO_FLAG_EVICTED : 0, + _DRM_BO_FLAG_EVICTED); + + if (bo->mem.mm_node) + bo->offset = (bo->mem.mm_node->start << PAGE_SHIFT) + + bm->man[bo->mem.mem_type].gpu_offset; + + + return 0; + +out_err: + if (old_is_pci || new_is_pci) + drm_bo_vm_post_move(bo); + + new_man = &bm->man[bo->mem.mem_type]; + if ((new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && bo->ttm) { + drm_ttm_unbind(bo->ttm); + drm_ttm_destroy(bo->ttm); + bo->ttm = NULL; + } + + return ret; +} + +/* + * Call bo->mutex locked. + * Wait until the buffer is idle. + */ + +int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int ignore_signals, + int no_wait) +{ + int ret; + + DRM_ASSERT_LOCKED(&bo->mutex); + + if (bo->fence) { + if (drm_fence_object_signaled(bo->fence, bo->fence_type, 0)) { + drm_fence_usage_deref_unlocked(&bo->fence); + return 0; + } + if (no_wait) + return -EBUSY; + + ret = drm_fence_object_wait(bo->fence, lazy, ignore_signals, + bo->fence_type); + if (ret) + return ret; + + drm_fence_usage_deref_unlocked(&bo->fence); + } + return 0; +} +EXPORT_SYMBOL(drm_bo_wait); + +static int drm_bo_expire_fence(struct drm_buffer_object *bo, int allow_errors) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + + if (bo->fence) { + if (bm->nice_mode) { + unsigned long _end = jiffies + 3 * DRM_HZ; + int ret; + do { + ret = drm_bo_wait(bo, 0, 1, 0); + if (ret && allow_errors) + return ret; + + } while (ret && !time_after_eq(jiffies, _end)); + + if (bo->fence) { + bm->nice_mode = 0; + DRM_ERROR("Detected GPU lockup or " + "fence driver was taken down. " + "Evicting buffer.\n"); + } + } + if (bo->fence) + drm_fence_usage_deref_unlocked(&bo->fence); + } + return 0; +} + +/* + * Call dev->struct_mutex locked. + * Attempts to remove all private references to a buffer by expiring its + * fence object and removing from lru lists and memory managers. 
+ */ + +static void drm_bo_cleanup_refs(struct drm_buffer_object *bo, int remove_all) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + + atomic_inc(&bo->usage); + mutex_unlock(&dev->struct_mutex); + mutex_lock(&bo->mutex); + + DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); + + if (bo->fence && drm_fence_object_signaled(bo->fence, + bo->fence_type, 0)) + drm_fence_usage_deref_unlocked(&bo->fence); + + if (bo->fence && remove_all) + (void)drm_bo_expire_fence(bo, 0); + + mutex_lock(&dev->struct_mutex); + + if (!atomic_dec_and_test(&bo->usage)) + goto out; + + if (!bo->fence) { + list_del_init(&bo->lru); + if (bo->mem.mm_node) { + drm_mm_put_block(bo->mem.mm_node); + if (bo->pinned_node == bo->mem.mm_node) + bo->pinned_node = NULL; + bo->mem.mm_node = NULL; + } + list_del_init(&bo->pinned_lru); + if (bo->pinned_node) { + drm_mm_put_block(bo->pinned_node); + bo->pinned_node = NULL; + } + list_del_init(&bo->ddestroy); + mutex_unlock(&bo->mutex); + drm_bo_destroy_locked(bo); + return; + } + + if (list_empty(&bo->ddestroy)) { + drm_fence_object_flush(bo->fence, bo->fence_type); + list_add_tail(&bo->ddestroy, &bm->ddestroy); + schedule_delayed_work(&bm->wq, + ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100); + } + +out: + mutex_unlock(&bo->mutex); + return; +} + +/* + * Verify that refcount is 0 and that there are no internal references + * to the buffer object. Then destroy it. + */ + +static void drm_bo_destroy_locked(struct drm_buffer_object *bo) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + + if (list_empty(&bo->lru) && bo->mem.mm_node == NULL && + list_empty(&bo->pinned_lru) && bo->pinned_node == NULL && + list_empty(&bo->ddestroy) && atomic_read(&bo->usage) == 0) { + if (bo->fence != NULL) { + DRM_ERROR("Fence was non-zero.\n"); + drm_bo_cleanup_refs(bo, 0); + return; + } + + + if (bo->ttm) { + drm_ttm_unbind(bo->ttm); + drm_ttm_destroy(bo->ttm); + bo->ttm = NULL; + } + + atomic_dec(&bm->count); + + drm_ctl_free(bo, sizeof(*bo), DRM_MEM_BUFOBJ); + + return; + } + + /* + * Some stuff is still trying to reference the buffer object. + * Get rid of those references. + */ + + drm_bo_cleanup_refs(bo, 0); + + return; +} + +/* + * Call dev->struct_mutex locked. + */ + +static void drm_bo_delayed_delete(struct drm_device *dev, int remove_all) +{ + struct drm_buffer_manager *bm = &dev->bm; + + struct drm_buffer_object *entry, *nentry; + struct list_head *list, *next; + + list_for_each_safe(list, next, &bm->ddestroy) { + entry = list_entry(list, struct drm_buffer_object, ddestroy); + + nentry = NULL; + if (next != &bm->ddestroy) { + nentry = list_entry(next, struct drm_buffer_object, + ddestroy); + atomic_inc(&nentry->usage); + } + + drm_bo_cleanup_refs(entry, remove_all); + + if (nentry) + atomic_dec(&nentry->usage); + } +} + +static void drm_bo_delayed_workqueue(struct work_struct *work) +{ + struct drm_buffer_manager *bm = + container_of(work, struct drm_buffer_manager, wq.work); + struct drm_device *dev = container_of(bm, struct drm_device, bm); + + DRM_DEBUG("Delayed delete Worker\n"); + + mutex_lock(&dev->struct_mutex); + if (!bm->initialized) { + mutex_unlock(&dev->struct_mutex); + return; + } + drm_bo_delayed_delete(dev, 0); + if (bm->initialized && !list_empty(&bm->ddestroy)) { + schedule_delayed_work(&bm->wq, + ((DRM_HZ / 100) < 1) ? 
1 : DRM_HZ / 100); + } + mutex_unlock(&dev->struct_mutex); +} + +void drm_bo_usage_deref_locked(struct drm_buffer_object **bo) +{ + struct drm_buffer_object *tmp_bo = *bo; + bo = NULL; + + DRM_ASSERT_LOCKED(&tmp_bo->dev->struct_mutex); + + if (atomic_dec_and_test(&tmp_bo->usage)) + drm_bo_destroy_locked(tmp_bo); +} +EXPORT_SYMBOL(drm_bo_usage_deref_locked); + +static void drm_bo_base_deref_locked(struct drm_file *file_priv, + struct drm_user_object *uo) +{ + struct drm_buffer_object *bo = + drm_user_object_entry(uo, struct drm_buffer_object, base); + + DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); + + drm_bo_takedown_vm_locked(bo); + drm_bo_usage_deref_locked(&bo); +} + +void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo) +{ + struct drm_buffer_object *tmp_bo = *bo; + struct drm_device *dev = tmp_bo->dev; + + *bo = NULL; + if (atomic_dec_and_test(&tmp_bo->usage)) { + mutex_lock(&dev->struct_mutex); + if (atomic_read(&tmp_bo->usage) == 0) + drm_bo_destroy_locked(tmp_bo); + mutex_unlock(&dev->struct_mutex); + } +} +EXPORT_SYMBOL(drm_bo_usage_deref_unlocked); + +void drm_putback_buffer_objects(struct drm_device *dev) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct list_head *list = &bm->unfenced; + struct drm_buffer_object *entry, *next; + + mutex_lock(&dev->struct_mutex); + list_for_each_entry_safe(entry, next, list, lru) { + atomic_inc(&entry->usage); + mutex_unlock(&dev->struct_mutex); + + mutex_lock(&entry->mutex); + BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED)); + mutex_lock(&dev->struct_mutex); + + list_del_init(&entry->lru); + DRM_FLAG_MASKED(entry->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); + DRM_WAKEUP(&entry->event_queue); + + /* + * FIXME: Might want to put back on head of list + * instead of tail here. + */ + + drm_bo_add_to_lru(entry); + mutex_unlock(&entry->mutex); + drm_bo_usage_deref_locked(&entry); + } + mutex_unlock(&dev->struct_mutex); +} +EXPORT_SYMBOL(drm_putback_buffer_objects); + + +/* + * Note. The caller has to register (if applicable) + * and deregister fence object usage. 
+ */ + +int drm_fence_buffer_objects(struct drm_device *dev, + struct list_head *list, + uint32_t fence_flags, + struct drm_fence_object *fence, + struct drm_fence_object **used_fence) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_buffer_object *entry; + uint32_t fence_type = 0; + uint32_t fence_class = ~0; + int count = 0; + int ret = 0; + struct list_head *l; + + mutex_lock(&dev->struct_mutex); + + if (!list) + list = &bm->unfenced; + + if (fence) + fence_class = fence->fence_class; + + list_for_each_entry(entry, list, lru) { + BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED)); + fence_type |= entry->new_fence_type; + if (fence_class == ~0) + fence_class = entry->new_fence_class; + else if (entry->new_fence_class != fence_class) { + DRM_ERROR("Unmatching fence classes on unfenced list: " + "%d and %d.\n", + fence_class, + entry->new_fence_class); + ret = -EINVAL; + goto out; + } + count++; + } + + if (!count) { + ret = -EINVAL; + goto out; + } + + if (fence) { + if ((fence_type & fence->type) != fence_type || + (fence->fence_class != fence_class)) { + DRM_ERROR("Given fence doesn't match buffers " + "on unfenced list.\n"); + ret = -EINVAL; + goto out; + } + } else { + mutex_unlock(&dev->struct_mutex); + ret = drm_fence_object_create(dev, fence_class, fence_type, + fence_flags | DRM_FENCE_FLAG_EMIT, + &fence); + mutex_lock(&dev->struct_mutex); + if (ret) + goto out; + } + + count = 0; + l = list->next; + while (l != list) { + prefetch(l->next); + entry = list_entry(l, struct drm_buffer_object, lru); + atomic_inc(&entry->usage); + mutex_unlock(&dev->struct_mutex); + mutex_lock(&entry->mutex); + mutex_lock(&dev->struct_mutex); + list_del_init(l); + if (entry->priv_flags & _DRM_BO_FLAG_UNFENCED) { + count++; + if (entry->fence) + drm_fence_usage_deref_locked(&entry->fence); + entry->fence = drm_fence_reference_locked(fence); + entry->fence_class = entry->new_fence_class; + entry->fence_type = entry->new_fence_type; + DRM_FLAG_MASKED(entry->priv_flags, 0, + _DRM_BO_FLAG_UNFENCED); + DRM_WAKEUP(&entry->event_queue); + drm_bo_add_to_lru(entry); + } + mutex_unlock(&entry->mutex); + drm_bo_usage_deref_locked(&entry); + l = list->next; + } + DRM_DEBUG("Fenced %d buffers\n", count); +out: + mutex_unlock(&dev->struct_mutex); + *used_fence = fence; + return ret; +} +EXPORT_SYMBOL(drm_fence_buffer_objects); + +/* + * bo->mutex locked + */ + +static int drm_bo_evict(struct drm_buffer_object *bo, unsigned mem_type, + int no_wait) +{ + int ret = 0; + struct drm_device *dev = bo->dev; + struct drm_bo_mem_reg evict_mem; + + /* + * Someone might have modified the buffer before we took the + * buffer mutex. 
+ */ + + if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) + goto out; + if (bo->mem.mem_type != mem_type) + goto out; + + ret = drm_bo_wait(bo, 0, 0, no_wait); + + if (ret && ret != -EAGAIN) { + DRM_ERROR("Failed to expire fence before " + "buffer eviction.\n"); + goto out; + } + + evict_mem = bo->mem; + evict_mem.mm_node = NULL; + + evict_mem = bo->mem; + evict_mem.proposed_flags = dev->driver->bo_driver->evict_flags(bo); + ret = drm_bo_mem_space(bo, &evict_mem, no_wait); + + if (ret) { + if (ret != -EAGAIN) + DRM_ERROR("Failed to find memory space for " + "buffer 0x%p eviction.\n", bo); + goto out; + } + + ret = drm_bo_handle_move_mem(bo, &evict_mem, 1, no_wait); + + if (ret) { + if (ret != -EAGAIN) + DRM_ERROR("Buffer eviction failed\n"); + goto out; + } + + mutex_lock(&dev->struct_mutex); + if (evict_mem.mm_node) { + if (evict_mem.mm_node != bo->pinned_node) + drm_mm_put_block(evict_mem.mm_node); + evict_mem.mm_node = NULL; + } + list_del(&bo->lru); + drm_bo_add_to_lru(bo); + mutex_unlock(&dev->struct_mutex); + + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_EVICTED, + _DRM_BO_FLAG_EVICTED); + +out: + return ret; +} + +/** + * Repeatedly evict memory from the LRU for @mem_type until we create enough + * space, or we've evicted everything and there isn't enough space. + */ +static int drm_bo_mem_force_space(struct drm_device *dev, + struct drm_bo_mem_reg *mem, + uint32_t mem_type, int no_wait) +{ + struct drm_mm_node *node; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_buffer_object *entry; + struct drm_mem_type_manager *man = &bm->man[mem_type]; + struct list_head *lru; + unsigned long num_pages = mem->num_pages; + int ret; + + mutex_lock(&dev->struct_mutex); + do { + node = drm_mm_search_free(&man->manager, num_pages, + mem->page_alignment, 1); + if (node) + break; + + lru = &man->lru; + if (lru->next == lru) + break; + + entry = list_entry(lru->next, struct drm_buffer_object, lru); + atomic_inc(&entry->usage); + mutex_unlock(&dev->struct_mutex); + mutex_lock(&entry->mutex); + BUG_ON(entry->mem.flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)); + + ret = drm_bo_evict(entry, mem_type, no_wait); + mutex_unlock(&entry->mutex); + drm_bo_usage_deref_unlocked(&entry); + if (ret) + return ret; + mutex_lock(&dev->struct_mutex); + } while (1); + + if (!node) { + mutex_unlock(&dev->struct_mutex); + return -ENOMEM; + } + + node = drm_mm_get_block(node, num_pages, mem->page_alignment); + mutex_unlock(&dev->struct_mutex); + mem->mm_node = node; + mem->mem_type = mem_type; + return 0; +} + +static int drm_bo_mt_compatible(struct drm_mem_type_manager *man, + int disallow_fixed, + uint32_t mem_type, + uint64_t mask, uint32_t *res_mask) +{ + uint64_t cur_flags = drm_bo_type_flags(mem_type); + uint64_t flag_diff; + + if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && disallow_fixed) + return 0; + if (man->flags & _DRM_FLAG_MEMTYPE_CACHED) + cur_flags |= DRM_BO_FLAG_CACHED; + if (man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE) + cur_flags |= DRM_BO_FLAG_MAPPABLE; + if (man->flags & _DRM_FLAG_MEMTYPE_CSELECT) + DRM_FLAG_MASKED(cur_flags, mask, DRM_BO_FLAG_CACHED); + + if ((cur_flags & mask & DRM_BO_MASK_MEM) == 0) + return 0; + + if (mem_type == DRM_BO_MEM_LOCAL) { + *res_mask = cur_flags; + return 1; + } + + flag_diff = (mask ^ cur_flags); + if (flag_diff & DRM_BO_FLAG_CACHED_MAPPED) + cur_flags |= DRM_BO_FLAG_CACHED_MAPPED; + + if ((flag_diff & DRM_BO_FLAG_CACHED) && + (!(mask & DRM_BO_FLAG_CACHED) || + (mask & DRM_BO_FLAG_FORCE_CACHING))) + return 0; + + if ((flag_diff & DRM_BO_FLAG_MAPPABLE) && + ((mask 
& DRM_BO_FLAG_MAPPABLE) || + (mask & DRM_BO_FLAG_FORCE_MAPPABLE))) + return 0; + + *res_mask = cur_flags; + return 1; +} + +/** + * Creates space for memory region @mem according to its type. + * + * This function first searches for free space in compatible memory types in + * the priority order defined by the driver. If free space isn't found, then + * drm_bo_mem_force_space is attempted in priority order to evict and find + * space. + */ +int drm_bo_mem_space(struct drm_buffer_object *bo, + struct drm_bo_mem_reg *mem, int no_wait) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man; + + uint32_t num_prios = dev->driver->bo_driver->num_mem_type_prio; + const uint32_t *prios = dev->driver->bo_driver->mem_type_prio; + uint32_t i; + uint32_t mem_type = DRM_BO_MEM_LOCAL; + uint32_t cur_flags; + int type_found = 0; + int type_ok = 0; + int has_eagain = 0; + struct drm_mm_node *node = NULL; + int ret; + + mem->mm_node = NULL; + for (i = 0; i < num_prios; ++i) { + mem_type = prios[i]; + man = &bm->man[mem_type]; + + type_ok = drm_bo_mt_compatible(man, + bo->type == drm_bo_type_user, + mem_type, mem->proposed_flags, + &cur_flags); + + if (!type_ok) + continue; + + if (mem_type == DRM_BO_MEM_LOCAL) + break; + + if ((mem_type == bo->pinned_mem_type) && + (bo->pinned_node != NULL)) { + node = bo->pinned_node; + break; + } + + mutex_lock(&dev->struct_mutex); + if (man->has_type && man->use_type) { + type_found = 1; + node = drm_mm_search_free(&man->manager, mem->num_pages, + mem->page_alignment, 1); + if (node) + node = drm_mm_get_block(node, mem->num_pages, + mem->page_alignment); + } + mutex_unlock(&dev->struct_mutex); + if (node) + break; + } + + if ((type_ok && (mem_type == DRM_BO_MEM_LOCAL)) || node) { + mem->mm_node = node; + mem->mem_type = mem_type; + mem->flags = cur_flags; + return 0; + } + + if (!type_found) + return -EINVAL; + + num_prios = dev->driver->bo_driver->num_mem_busy_prio; + prios = dev->driver->bo_driver->mem_busy_prio; + + for (i = 0; i < num_prios; ++i) { + mem_type = prios[i]; + man = &bm->man[mem_type]; + + if (!man->has_type) + continue; + + if (!drm_bo_mt_compatible(man, + bo->type == drm_bo_type_user, + mem_type, + mem->proposed_flags, + &cur_flags)) + continue; + + ret = drm_bo_mem_force_space(dev, mem, mem_type, no_wait); + + if (ret == 0 && mem->mm_node) { + mem->flags = cur_flags; + return 0; + } + + if (ret == -EAGAIN) + has_eagain = 1; + } + + ret = (has_eagain) ? 
-EAGAIN : -ENOMEM; + return ret; +} +EXPORT_SYMBOL(drm_bo_mem_space); + +/* + * drm_bo_propose_flags: + * + * @bo: the buffer object getting new flags + * + * @new_flags: the new set of proposed flag bits + * + * @new_mask: the mask of bits changed in new_flags + * + * Modify the proposed_flag bits in @bo + */ +static int drm_bo_modify_proposed_flags(struct drm_buffer_object *bo, + uint64_t new_flags, uint64_t new_mask) +{ + uint32_t new_access; + + /* Copy unchanging bits from existing proposed_flags */ + DRM_FLAG_MASKED(new_flags, bo->mem.proposed_flags, ~new_mask); + + if (bo->type == drm_bo_type_user && + ((new_flags & (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING)) != + (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING))) { + DRM_ERROR("User buffers require cache-coherent memory.\n"); + return -EINVAL; + } + + if ((new_mask & DRM_BO_FLAG_NO_EVICT) && !DRM_SUSER(DRM_CURPROC)) { + DRM_ERROR("DRM_BO_FLAG_NO_EVICT is only available to priviliged processes.\n"); + return -EPERM; + } + + if ((new_flags & DRM_BO_FLAG_NO_MOVE)) { + DRM_ERROR("DRM_BO_FLAG_NO_MOVE is not properly implemented yet.\n"); + return -EPERM; + } + + new_access = new_flags & (DRM_BO_FLAG_EXE | DRM_BO_FLAG_WRITE | + DRM_BO_FLAG_READ); + + if (new_access == 0) { + DRM_ERROR("Invalid buffer object rwx properties\n"); + return -EINVAL; + } + + bo->mem.proposed_flags = new_flags; + return 0; +} + +/* + * Call dev->struct_mutex locked. + */ + +struct drm_buffer_object *drm_lookup_buffer_object(struct drm_file *file_priv, + uint32_t handle, int check_owner) +{ + struct drm_user_object *uo; + struct drm_buffer_object *bo; + + uo = drm_lookup_user_object(file_priv, handle); + + if (!uo || (uo->type != drm_buffer_type)) { + DRM_ERROR("Could not find buffer object 0x%08x\n", handle); + return NULL; + } + + if (check_owner && file_priv != uo->owner) { + if (!drm_lookup_ref_object(file_priv, uo, _DRM_REF_USE)) + return NULL; + } + + bo = drm_user_object_entry(uo, struct drm_buffer_object, base); + atomic_inc(&bo->usage); + return bo; +} +EXPORT_SYMBOL(drm_lookup_buffer_object); + +/* + * Call bo->mutex locked. + * Returns 1 if the buffer is currently rendered to or from. 0 otherwise. + * Doesn't do any fence flushing as opposed to the drm_bo_busy function. + */ + +static int drm_bo_quick_busy(struct drm_buffer_object *bo) +{ + struct drm_fence_object *fence = bo->fence; + + BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + if (fence) { + if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { + drm_fence_usage_deref_unlocked(&bo->fence); + return 0; + } + return 1; + } + return 0; +} + +/* + * Call bo->mutex locked. + * Returns 1 if the buffer is currently rendered to or from. 0 otherwise. + */ + +static int drm_bo_busy(struct drm_buffer_object *bo) +{ + struct drm_fence_object *fence = bo->fence; + + BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + if (fence) { + if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { + drm_fence_usage_deref_unlocked(&bo->fence); + return 0; + } + drm_fence_object_flush(fence, DRM_FENCE_TYPE_EXE); + if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { + drm_fence_usage_deref_unlocked(&bo->fence); + return 0; + } + return 1; + } + return 0; +} + +static int drm_bo_evict_cached(struct drm_buffer_object *bo) +{ + int ret = 0; + + BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + if (bo->mem.mm_node) + ret = drm_bo_evict(bo, DRM_BO_MEM_TT, 1); + return ret; +} + +/* + * Wait until a buffer is unmapped. 
+ */ + +static int drm_bo_wait_unmapped(struct drm_buffer_object *bo, int no_wait) +{ + int ret = 0; + + if ((atomic_read(&bo->mapped) >= 0) && no_wait) + return -EBUSY; + + DRM_WAIT_ON(ret, bo->event_queue, 3 * DRM_HZ, + atomic_read(&bo->mapped) == -1); + + if (ret == -EINTR) + ret = -EAGAIN; + + return ret; +} + +static int drm_bo_check_unfenced(struct drm_buffer_object *bo) +{ + int ret; + + mutex_lock(&bo->mutex); + ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + mutex_unlock(&bo->mutex); + return ret; +} + +/* + * Wait until a buffer, scheduled to be fenced moves off the unfenced list. + * Until then, we cannot really do anything with it except delete it. + */ + +static int drm_bo_wait_unfenced(struct drm_buffer_object *bo, int no_wait, + int eagain_if_wait) +{ + int ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + + if (ret && no_wait) + return -EBUSY; + else if (!ret) + return 0; + + ret = 0; + mutex_unlock(&bo->mutex); + DRM_WAIT_ON(ret, bo->event_queue, 3 * DRM_HZ, + !drm_bo_check_unfenced(bo)); + mutex_lock(&bo->mutex); + if (ret == -EINTR) + return -EAGAIN; + ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); + if (ret) { + DRM_ERROR("Timeout waiting for buffer to become fenced\n"); + return -EBUSY; + } + if (eagain_if_wait) + return -EAGAIN; + + return 0; +} + +/* + * Fill in the ioctl reply argument with buffer info. + * Bo locked. + */ + +static void drm_bo_fill_rep_arg(struct drm_buffer_object *bo, + struct drm_bo_info_rep *rep) +{ + if (!rep) + return; + + rep->handle = bo->base.hash.key; + rep->flags = bo->mem.flags; + rep->size = bo->num_pages * PAGE_SIZE; + rep->offset = bo->offset; + + /* + * drm_bo_type_device buffers have user-visible + * handles which can be used to share across + * processes. Hand that back to the application + */ + if (bo->type == drm_bo_type_device) + rep->arg_handle = bo->map_list.user_token; + else + rep->arg_handle = 0; + + rep->proposed_flags = bo->mem.proposed_flags; + rep->buffer_start = bo->buffer_start; + rep->fence_flags = bo->fence_type; + rep->rep_flags = 0; + rep->page_alignment = bo->mem.page_alignment; + + if ((bo->priv_flags & _DRM_BO_FLAG_UNFENCED) || drm_bo_quick_busy(bo)) { + DRM_FLAG_MASKED(rep->rep_flags, DRM_BO_REP_BUSY, + DRM_BO_REP_BUSY); + } +} + +/* + * Wait for buffer idle and register that we've mapped the buffer. + * Mapping is registered as a drm_ref_object with type _DRM_REF_TYPE1, + * so that if the client dies, the mapping is automatically + * unregistered. + */ + +static int drm_buffer_object_map(struct drm_file *file_priv, uint32_t handle, + uint32_t map_flags, unsigned hint, + struct drm_bo_info_rep *rep) +{ + struct drm_buffer_object *bo; + struct drm_device *dev = file_priv->head->dev; + int ret = 0; + int no_wait = hint & DRM_BO_HINT_DONT_BLOCK; + + mutex_lock(&dev->struct_mutex); + bo = drm_lookup_buffer_object(file_priv, handle, 1); + mutex_unlock(&dev->struct_mutex); + + if (!bo) + return -EINVAL; + + mutex_lock(&bo->mutex); + ret = drm_bo_wait_unfenced(bo, no_wait, 0); + if (ret) + goto out; + + /* + * If this returns true, we are currently unmapped. + * We need to do this test, because unmapping can + * be done without the bo->mutex held. 
+ */ + + while (1) { + if (atomic_inc_and_test(&bo->mapped)) { + if (no_wait && drm_bo_busy(bo)) { + atomic_dec(&bo->mapped); + ret = -EBUSY; + goto out; + } + ret = drm_bo_wait(bo, 0, 0, no_wait); + if (ret) { + atomic_dec(&bo->mapped); + goto out; + } + + if (bo->mem.flags & DRM_BO_FLAG_CACHED_MAPPED) + drm_bo_evict_cached(bo); + + break; + } else if (bo->mem.flags & DRM_BO_FLAG_CACHED_MAPPED) { + + /* + * We are already mapped with different flags. + * need to wait for unmap. + */ + + ret = drm_bo_wait_unmapped(bo, no_wait); + if (ret) + goto out; + + continue; + } + break; + } + + mutex_lock(&dev->struct_mutex); + ret = drm_add_ref_object(file_priv, &bo->base, _DRM_REF_TYPE1); + mutex_unlock(&dev->struct_mutex); + if (ret) { + if (atomic_add_negative(-1, &bo->mapped)) + DRM_WAKEUP(&bo->event_queue); + + } else + drm_bo_fill_rep_arg(bo, rep); +out: + mutex_unlock(&bo->mutex); + drm_bo_usage_deref_unlocked(&bo); + return ret; +} + +static int drm_buffer_object_unmap(struct drm_file *file_priv, uint32_t handle) +{ + struct drm_device *dev = file_priv->head->dev; + struct drm_buffer_object *bo; + struct drm_ref_object *ro; + int ret = 0; + + mutex_lock(&dev->struct_mutex); + + bo = drm_lookup_buffer_object(file_priv, handle, 1); + if (!bo) { + ret = -EINVAL; + goto out; + } + + ro = drm_lookup_ref_object(file_priv, &bo->base, _DRM_REF_TYPE1); + if (!ro) { + ret = -EINVAL; + goto out; + } + + drm_remove_ref_object(file_priv, ro); + drm_bo_usage_deref_locked(&bo); +out: + mutex_unlock(&dev->struct_mutex); + return ret; +} + +/* + * Call struct-sem locked. + */ + +static void drm_buffer_user_object_unmap(struct drm_file *file_priv, + struct drm_user_object *uo, + enum drm_ref_type action) +{ + struct drm_buffer_object *bo = + drm_user_object_entry(uo, struct drm_buffer_object, base); + + /* + * We DON'T want to take the bo->lock here, because we want to + * hold it when we wait for unmapped buffer. + */ + + BUG_ON(action != _DRM_REF_TYPE1); + + if (atomic_add_negative(-1, &bo->mapped)) + DRM_WAKEUP(&bo->event_queue); +} + +/* + * bo->mutex locked. + * Note that new_mem_flags are NOT transferred to the bo->mem.proposed_flags. + */ + +int drm_bo_move_buffer(struct drm_buffer_object *bo, uint64_t new_mem_flags, + int no_wait, int move_unfenced) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + int ret = 0; + struct drm_bo_mem_reg mem; + /* + * Flush outstanding fences. + */ + + drm_bo_busy(bo); + + /* + * Wait for outstanding fences. + */ + + ret = drm_bo_wait(bo, 0, 0, no_wait); + if (ret) + return ret; + + mem.num_pages = bo->num_pages; + mem.size = mem.num_pages << PAGE_SHIFT; + mem.proposed_flags = new_mem_flags; + mem.page_alignment = bo->mem.page_alignment; + + mutex_lock(&bm->evict_mutex); + mutex_lock(&dev->struct_mutex); + list_del_init(&bo->lru); + mutex_unlock(&dev->struct_mutex); + + /* + * Determine where to move the buffer. 
+ */ + ret = drm_bo_mem_space(bo, &mem, no_wait); + if (ret) + goto out_unlock; + + ret = drm_bo_handle_move_mem(bo, &mem, 0, no_wait); + +out_unlock: + mutex_lock(&dev->struct_mutex); + if (ret || !move_unfenced) { + if (mem.mm_node) { + if (mem.mm_node != bo->pinned_node) + drm_mm_put_block(mem.mm_node); + mem.mm_node = NULL; + } + drm_bo_add_to_lru(bo); + if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { + DRM_WAKEUP(&bo->event_queue); + DRM_FLAG_MASKED(bo->priv_flags, 0, + _DRM_BO_FLAG_UNFENCED); + } + } else { + list_add_tail(&bo->lru, &bm->unfenced); + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, + _DRM_BO_FLAG_UNFENCED); + } + mutex_unlock(&dev->struct_mutex); + mutex_unlock(&bm->evict_mutex); + return ret; +} + +static int drm_bo_mem_compat(struct drm_bo_mem_reg *mem) +{ + uint32_t flag_diff = (mem->proposed_flags ^ mem->flags); + + if ((mem->proposed_flags & mem->flags & DRM_BO_MASK_MEM) == 0) + return 0; + if ((flag_diff & DRM_BO_FLAG_CACHED) && + (/* !(mem->proposed_flags & DRM_BO_FLAG_CACHED) ||*/ + (mem->proposed_flags & DRM_BO_FLAG_FORCE_CACHING))) + return 0; + + if ((flag_diff & DRM_BO_FLAG_MAPPABLE) && + ((mem->proposed_flags & DRM_BO_FLAG_MAPPABLE) || + (mem->proposed_flags & DRM_BO_FLAG_FORCE_MAPPABLE))) + return 0; + return 1; +} + +/** + * drm_buffer_object_validate: + * + * @bo: the buffer object to modify + * + * @fence_class: the new fence class covering this buffer + * + * @move_unfenced: a boolean indicating whether switching the + * memory space of this buffer should cause the buffer to + * be placed on the unfenced list. + * + * @no_wait: whether this function should return -EBUSY instead + * of waiting. + * + * Change buffer access parameters. This can involve moving + * the buffer to the correct memory type, pinning the buffer + * or changing the class/type of fence covering this buffer + * + * Must be called with bo locked. + */ + +static int drm_buffer_object_validate(struct drm_buffer_object *bo, + uint32_t fence_class, + int move_unfenced, int no_wait) +{ + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_bo_driver *driver = dev->driver->bo_driver; + uint32_t ftype; + int ret; + + DRM_DEBUG("Proposed flags 0x%016llx, Old flags 0x%016llx\n", + (unsigned long long) bo->mem.proposed_flags, + (unsigned long long) bo->mem.flags); + + ret = driver->fence_type(bo, &fence_class, &ftype); + + if (ret) { + DRM_ERROR("Driver did not support given buffer permissions\n"); + return ret; + } + + /* + * We're switching command submission mechanism, + * or cannot simply rely on the hardware serializing for us. + * + * Wait for buffer idle. + */ + + if ((fence_class != bo->fence_class) || + ((ftype ^ bo->fence_type) & bo->fence_type)) { + + ret = drm_bo_wait(bo, 0, 0, no_wait); + + if (ret) + return ret; + + } + + bo->new_fence_class = fence_class; + bo->new_fence_type = ftype; + + ret = drm_bo_wait_unmapped(bo, no_wait); + if (ret) { + DRM_ERROR("Timed out waiting for buffer unmap.\n"); + return ret; + } + + /* + * Check whether we need to move buffer. + */ + + if (!drm_bo_mem_compat(&bo->mem)) { + ret = drm_bo_move_buffer(bo, bo->mem.proposed_flags, no_wait, + move_unfenced); + if (ret) { + if (ret != -EAGAIN) + DRM_ERROR("Failed moving buffer.\n"); + return ret; + } + } + + /* + * Pinned buffers. 
+ */ + + if (bo->mem.proposed_flags & (DRM_BO_FLAG_NO_EVICT | DRM_BO_FLAG_NO_MOVE)) { + bo->pinned_mem_type = bo->mem.mem_type; + mutex_lock(&dev->struct_mutex); + list_del_init(&bo->pinned_lru); + drm_bo_add_to_pinned_lru(bo); + + if (bo->pinned_node != bo->mem.mm_node) { + if (bo->pinned_node != NULL) + drm_mm_put_block(bo->pinned_node); + bo->pinned_node = bo->mem.mm_node; + } + + mutex_unlock(&dev->struct_mutex); + + } else if (bo->pinned_node != NULL) { + + mutex_lock(&dev->struct_mutex); + + if (bo->pinned_node != bo->mem.mm_node) + drm_mm_put_block(bo->pinned_node); + + list_del_init(&bo->pinned_lru); + bo->pinned_node = NULL; + mutex_unlock(&dev->struct_mutex); + + } + + /* + * We might need to add a TTM. + */ + + if (bo->mem.mem_type == DRM_BO_MEM_LOCAL && bo->ttm == NULL) { + ret = drm_bo_add_ttm(bo); + if (ret) + return ret; + } + /* + * Validation has succeeded, move the access and other + * non-mapping-related flag bits from the proposed flags to + * the active flags + */ + + DRM_FLAG_MASKED(bo->mem.flags, bo->mem.proposed_flags, ~DRM_BO_MASK_MEMTYPE); + + /* + * Finally, adjust lru to be sure. + */ + + mutex_lock(&dev->struct_mutex); + list_del(&bo->lru); + if (move_unfenced) { + list_add_tail(&bo->lru, &bm->unfenced); + DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, + _DRM_BO_FLAG_UNFENCED); + } else { + drm_bo_add_to_lru(bo); + if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { + DRM_WAKEUP(&bo->event_queue); + DRM_FLAG_MASKED(bo->priv_flags, 0, + _DRM_BO_FLAG_UNFENCED); + } + } + mutex_unlock(&dev->struct_mutex); + + return 0; +} + +/** + * drm_bo_do_validate: + * + * @bo: the buffer object + * + * @flags: access rights, mapping parameters and cacheability. See + * the DRM_BO_FLAG_* values in drm.h + * + * @mask: Which flag values to change; this allows callers to modify + * things without knowing the current state of other flags. + * + * @hint: changes the proceedure for this operation, see the DRM_BO_HINT_* + * values in drm.h. + * + * @fence_class: a driver-specific way of doing fences. Presumably, + * this would be used if the driver had more than one submission and + * fencing mechanism. At this point, there isn't any use of this + * from the user mode code. + * + * @rep: To be stuffed with the reply from validation + * + * 'validate' a buffer object. This changes where the buffer is + * located, along with changing access modes. + */ + +int drm_bo_do_validate(struct drm_buffer_object *bo, + uint64_t flags, uint64_t mask, uint32_t hint, + uint32_t fence_class, + struct drm_bo_info_rep *rep) +{ + int ret; + int no_wait = (hint & DRM_BO_HINT_DONT_BLOCK) != 0; + + mutex_lock(&bo->mutex); + ret = drm_bo_wait_unfenced(bo, no_wait, 0); + + if (ret) + goto out; + + ret = drm_bo_modify_proposed_flags (bo, flags, mask); + if (ret) + goto out; + + ret = drm_buffer_object_validate(bo, + fence_class, + !(hint & DRM_BO_HINT_DONT_FENCE), + no_wait); +out: + if (rep) + drm_bo_fill_rep_arg(bo, rep); + + mutex_unlock(&bo->mutex); + return ret; +} +EXPORT_SYMBOL(drm_bo_do_validate); + +/** + * drm_bo_handle_validate + * + * @file_priv: the drm file private, used to get a handle to the user context + * + * @handle: the buffer object handle + * + * @flags: access rights, mapping parameters and cacheability. See + * the DRM_BO_FLAG_* values in drm.h + * + * @mask: Which flag values to change; this allows callers to modify + * things without knowing the current state of other flags. + * + * @hint: changes the proceedure for this operation, see the DRM_BO_HINT_* + * values in drm.h. 
+ * + * @fence_class: a driver-specific way of doing fences. Presumably, + * this would be used if the driver had more than one submission and + * fencing mechanism. At this point, there isn't any use of this + * from the user mode code. + * + * @use_old_fence_class: don't change fence class, pull it from the buffer object + * + * @rep: To be stuffed with the reply from validation + * + * @bp_rep: To be stuffed with the buffer object pointer + * + * Perform drm_bo_do_validate on a buffer referenced by a user-space handle. + * Some permissions checking is done on the parameters, otherwise this + * is a thin wrapper. + */ + +int drm_bo_handle_validate(struct drm_file *file_priv, uint32_t handle, + uint64_t flags, uint64_t mask, + uint32_t hint, + uint32_t fence_class, + int use_old_fence_class, + struct drm_bo_info_rep *rep, + struct drm_buffer_object **bo_rep) +{ + struct drm_device *dev = file_priv->head->dev; + struct drm_buffer_object *bo; + int ret; + + mutex_lock(&dev->struct_mutex); + bo = drm_lookup_buffer_object(file_priv, handle, 1); + mutex_unlock(&dev->struct_mutex); + + if (!bo) + return -EINVAL; + + if (use_old_fence_class) + fence_class = bo->fence_class; + + /* + * Only allow creator to change shared buffer mask. + */ + + if (bo->base.owner != file_priv) + mask &= ~(DRM_BO_FLAG_NO_EVICT | DRM_BO_FLAG_NO_MOVE); + + + ret = drm_bo_do_validate(bo, flags, mask, hint, fence_class, rep); + + if (!ret && bo_rep) + *bo_rep = bo; + else + drm_bo_usage_deref_unlocked(&bo); + + return ret; +} +EXPORT_SYMBOL(drm_bo_handle_validate); + +static int drm_bo_handle_info(struct drm_file *file_priv, uint32_t handle, + struct drm_bo_info_rep *rep) +{ + struct drm_device *dev = file_priv->head->dev; + struct drm_buffer_object *bo; + + mutex_lock(&dev->struct_mutex); + bo = drm_lookup_buffer_object(file_priv, handle, 1); + mutex_unlock(&dev->struct_mutex); + + if (!bo) + return -EINVAL; + + mutex_lock(&bo->mutex); + if (!(bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) + (void)drm_bo_busy(bo); + drm_bo_fill_rep_arg(bo, rep); + mutex_unlock(&bo->mutex); + drm_bo_usage_deref_unlocked(&bo); + return 0; +} + +static int drm_bo_handle_wait(struct drm_file *file_priv, uint32_t handle, + uint32_t hint, + struct drm_bo_info_rep *rep) +{ + struct drm_device *dev = file_priv->head->dev; + struct drm_buffer_object *bo; + int no_wait = hint & DRM_BO_HINT_DONT_BLOCK; + int ret; + + mutex_lock(&dev->struct_mutex); + bo = drm_lookup_buffer_object(file_priv, handle, 1); + mutex_unlock(&dev->struct_mutex); + + if (!bo) + return -EINVAL; + + mutex_lock(&bo->mutex); + ret = drm_bo_wait_unfenced(bo, no_wait, 0); + if (ret) + goto out; + ret = drm_bo_wait(bo, hint & DRM_BO_HINT_WAIT_LAZY, 0, no_wait); + if (ret) + goto out; + + drm_bo_fill_rep_arg(bo, rep); + +out: + mutex_unlock(&bo->mutex); + drm_bo_usage_deref_unlocked(&bo); + return ret; +} + +int drm_buffer_object_create(struct drm_device *dev, + unsigned long size, + enum drm_bo_type type, + uint64_t flags, + uint32_t hint, + uint32_t page_alignment, + unsigned long buffer_start, + struct drm_buffer_object **buf_obj) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_buffer_object *bo; + int ret = 0; + unsigned long num_pages; + + size += buffer_start & ~PAGE_MASK; + num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; + if (num_pages == 0) { + DRM_ERROR("Illegal buffer object size.\n"); + return -EINVAL; + } + + bo = drm_ctl_calloc(1, sizeof(*bo), DRM_MEM_BUFOBJ); + + if (!bo) + return -ENOMEM; + + mutex_init(&bo->mutex); + mutex_lock(&bo->mutex); + + 
atomic_set(&bo->usage, 1); + atomic_set(&bo->mapped, -1); + DRM_INIT_WAITQUEUE(&bo->event_queue); + INIT_LIST_HEAD(&bo->lru); + INIT_LIST_HEAD(&bo->pinned_lru); + INIT_LIST_HEAD(&bo->ddestroy); + bo->dev = dev; + bo->type = type; + bo->num_pages = num_pages; + bo->mem.mem_type = DRM_BO_MEM_LOCAL; + bo->mem.num_pages = bo->num_pages; + bo->mem.mm_node = NULL; + bo->mem.page_alignment = page_alignment; + bo->buffer_start = buffer_start & PAGE_MASK; + bo->priv_flags = 0; + bo->mem.flags = (DRM_BO_FLAG_MEM_LOCAL | DRM_BO_FLAG_CACHED | + DRM_BO_FLAG_MAPPABLE); + bo->mem.proposed_flags = 0; + atomic_inc(&bm->count); + /* + * Use drm_bo_modify_proposed_flags to error-check the proposed flags + */ + ret = drm_bo_modify_proposed_flags(bo, flags, flags); + if (ret) + goto out_err; + + /* + * For drm_bo_type_device buffers, allocate + * address space from the device so that applications + * can mmap the buffer from there + */ + if (bo->type == drm_bo_type_device) { + mutex_lock(&dev->struct_mutex); + ret = drm_bo_setup_vm_locked(bo); + mutex_unlock(&dev->struct_mutex); + if (ret) + goto out_err; + } + + ret = drm_buffer_object_validate(bo, 0, 0, hint & DRM_BO_HINT_DONT_BLOCK); + if (ret) + goto out_err; + + mutex_unlock(&bo->mutex); + *buf_obj = bo; + return 0; + +out_err: + mutex_unlock(&bo->mutex); + + drm_bo_usage_deref_unlocked(&bo); + return ret; +} +EXPORT_SYMBOL(drm_buffer_object_create); + + +static int drm_bo_add_user_object(struct drm_file *file_priv, + struct drm_buffer_object *bo, int shareable) +{ + struct drm_device *dev = file_priv->head->dev; + int ret; + + mutex_lock(&dev->struct_mutex); + ret = drm_add_user_object(file_priv, &bo->base, shareable); + if (ret) + goto out; + + bo->base.remove = drm_bo_base_deref_locked; + bo->base.type = drm_buffer_type; + bo->base.ref_struct_locked = NULL; + bo->base.unref = drm_buffer_user_object_unmap; + +out: + mutex_unlock(&dev->struct_mutex); + return ret; +} + +int drm_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_create_arg *arg = data; + struct drm_bo_create_req *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + struct drm_buffer_object *entry; + enum drm_bo_type bo_type; + int ret = 0; + + DRM_DEBUG("drm_bo_create_ioctl: %dkb, %dkb align\n", + (int)(req->size / 1024), req->page_alignment * 4); + + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + /* + * If the buffer creation request comes in with a starting address, + * that points at the desired user pages to map. Otherwise, create + * a drm_bo_type_device buffer, which uses pages allocated from the kernel + */ + bo_type = (req->buffer_start) ? 
drm_bo_type_user : drm_bo_type_device; + + /* + * User buffers cannot be shared + */ + if (bo_type == drm_bo_type_user) + req->flags &= ~DRM_BO_FLAG_SHAREABLE; + + ret = drm_buffer_object_create(file_priv->head->dev, + req->size, bo_type, req->flags, + req->hint, req->page_alignment, + req->buffer_start, &entry); + if (ret) + goto out; + + ret = drm_bo_add_user_object(file_priv, entry, + req->flags & DRM_BO_FLAG_SHAREABLE); + if (ret) { + drm_bo_usage_deref_unlocked(&entry); + goto out; + } + + mutex_lock(&entry->mutex); + drm_bo_fill_rep_arg(entry, rep); + mutex_unlock(&entry->mutex); + +out: + return ret; +} + +int drm_bo_setstatus_ioctl(struct drm_device *dev, + void *data, struct drm_file *file_priv) +{ + struct drm_bo_map_wait_idle_arg *arg = data; + struct drm_bo_info_req *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + int ret; + + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_bo_read_lock(&dev->bm.bm_lock); + if (ret) + return ret; + + /* + * validate the buffer. note that 'fence_class' will be unused + * as we pass use_old_fence_class=1 here. Note also that + * the libdrm API doesn't pass fence_class to the kernel, + * so it's a good thing it isn't used here. + */ + ret = drm_bo_handle_validate(file_priv, req->handle, + req->flags, + req->mask, + req->hint | DRM_BO_HINT_DONT_FENCE, + req->fence_class, 1, + rep, NULL); + + (void) drm_bo_read_unlock(&dev->bm.bm_lock); + if (ret) + return ret; + + return 0; +} + +int drm_bo_map_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_map_wait_idle_arg *arg = data; + struct drm_bo_info_req *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + int ret; + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_buffer_object_map(file_priv, req->handle, req->mask, + req->hint, rep); + if (ret) + return ret; + + return 0; +} + +int drm_bo_unmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_handle_arg *arg = data; + int ret; + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_buffer_object_unmap(file_priv, arg->handle); + return ret; +} + + +int drm_bo_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_reference_info_arg *arg = data; + struct drm_bo_handle_arg *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + struct drm_user_object *uo; + int ret; + + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_user_object_ref(file_priv, req->handle, + drm_buffer_type, &uo); + if (ret) + return ret; + + ret = drm_bo_handle_info(file_priv, req->handle, rep); + if (ret) + return ret; + + return 0; +} + +int drm_bo_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_handle_arg *arg = data; + int ret = 0; + + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_user_object_unref(file_priv, arg->handle, drm_buffer_type); + return ret; +} + +int drm_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_reference_info_arg *arg = data; + struct drm_bo_handle_arg *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + int ret; + + if 
(!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_bo_handle_info(file_priv, req->handle, rep); + if (ret) + return ret; + + return 0; +} + +int drm_bo_wait_idle_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_bo_map_wait_idle_arg *arg = data; + struct drm_bo_info_req *req = &arg->d.req; + struct drm_bo_info_rep *rep = &arg->d.rep; + int ret; + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized.\n"); + return -EINVAL; + } + + ret = drm_bo_handle_wait(file_priv, req->handle, + req->hint, rep); + if (ret) + return ret; + + return 0; +} + +static int drm_bo_leave_list(struct drm_buffer_object *bo, + uint32_t mem_type, + int free_pinned, + int allow_errors) +{ + struct drm_device *dev = bo->dev; + int ret = 0; + + mutex_lock(&bo->mutex); + + ret = drm_bo_expire_fence(bo, allow_errors); + if (ret) + goto out; + + if (free_pinned) { + DRM_FLAG_MASKED(bo->mem.flags, 0, DRM_BO_FLAG_NO_MOVE); + mutex_lock(&dev->struct_mutex); + list_del_init(&bo->pinned_lru); + if (bo->pinned_node == bo->mem.mm_node) + bo->pinned_node = NULL; + if (bo->pinned_node != NULL) { + drm_mm_put_block(bo->pinned_node); + bo->pinned_node = NULL; + } + mutex_unlock(&dev->struct_mutex); + } + + if (bo->mem.flags & DRM_BO_FLAG_NO_EVICT) { + DRM_ERROR("A DRM_BO_NO_EVICT buffer present at " + "cleanup. Removing flag and evicting.\n"); + bo->mem.flags &= ~DRM_BO_FLAG_NO_EVICT; + bo->mem.proposed_flags &= ~DRM_BO_FLAG_NO_EVICT; + } + + if (bo->mem.mem_type == mem_type) + ret = drm_bo_evict(bo, mem_type, 0); + + if (ret) { + if (allow_errors) { + goto out; + } else { + ret = 0; + DRM_ERROR("Cleanup eviction failed\n"); + } + } + +out: + mutex_unlock(&bo->mutex); + return ret; +} + + +static struct drm_buffer_object *drm_bo_entry(struct list_head *list, + int pinned_list) +{ + if (pinned_list) + return list_entry(list, struct drm_buffer_object, pinned_lru); + else + return list_entry(list, struct drm_buffer_object, lru); +} + +/* + * dev->struct_mutex locked. + */ + +static int drm_bo_force_list_clean(struct drm_device *dev, + struct list_head *head, + unsigned mem_type, + int free_pinned, + int allow_errors, + int pinned_list) +{ + struct list_head *list, *next, *prev; + struct drm_buffer_object *entry, *nentry; + int ret; + int do_restart; + + /* + * The list traversal is a bit odd here, because an item may + * disappear from the list when we release the struct_mutex or + * when we decrease the usage count. Also we're not guaranteed + * to drain pinned lists, so we can't always restart. + */ + +restart: + nentry = NULL; + list_for_each_safe(list, next, head) { + prev = list->prev; + + entry = (nentry != NULL) ? nentry: drm_bo_entry(list, pinned_list); + atomic_inc(&entry->usage); + if (nentry) { + atomic_dec(&nentry->usage); + nentry = NULL; + } + + /* + * Protect the next item from destruction, so we can check + * its list pointers later on. + */ + + if (next != head) { + nentry = drm_bo_entry(next, pinned_list); + atomic_inc(&nentry->usage); + } + mutex_unlock(&dev->struct_mutex); + + ret = drm_bo_leave_list(entry, mem_type, free_pinned, + allow_errors); + mutex_lock(&dev->struct_mutex); + + drm_bo_usage_deref_locked(&entry); + if (ret) + return ret; + + /* + * Has the next item disappeared from the list? 
+ */ + + do_restart = ((next->prev != list) && (next->prev != prev)); + + if (nentry != NULL && do_restart) + drm_bo_usage_deref_locked(&nentry); + + if (do_restart) + goto restart; + } + return 0; +} + +int drm_bo_clean_mm(struct drm_device *dev, unsigned mem_type) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man = &bm->man[mem_type]; + int ret = -EINVAL; + + if (mem_type >= DRM_BO_MEM_TYPES) { + DRM_ERROR("Illegal memory type %d\n", mem_type); + return ret; + } + + if (!man->has_type) { + DRM_ERROR("Trying to take down uninitialized " + "memory manager type %u\n", mem_type); + return ret; + } + man->use_type = 0; + man->has_type = 0; + + ret = 0; + if (mem_type > 0) { + BUG_ON(!list_empty(&bm->unfenced)); + drm_bo_force_list_clean(dev, &man->lru, mem_type, 1, 0, 0); + drm_bo_force_list_clean(dev, &man->pinned, mem_type, 1, 0, 1); + + if (drm_mm_clean(&man->manager)) { + drm_mm_takedown(&man->manager); + } else { + ret = -EBUSY; + } + } + + return ret; +} +EXPORT_SYMBOL(drm_bo_clean_mm); + +/** + *Evict all buffers of a particular mem_type, but leave memory manager + *regions for NO_MOVE buffers intact. New buffers cannot be added at this + *point since we have the hardware lock. + */ + +static int drm_bo_lock_mm(struct drm_device *dev, unsigned mem_type) +{ + int ret; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man = &bm->man[mem_type]; + + if (mem_type == 0 || mem_type >= DRM_BO_MEM_TYPES) { + DRM_ERROR("Illegal memory manager memory type %u.\n", mem_type); + return -EINVAL; + } + + if (!man->has_type) { + DRM_ERROR("Memory type %u has not been initialized.\n", + mem_type); + return 0; + } + + ret = drm_bo_force_list_clean(dev, &man->lru, mem_type, 0, 1, 0); + if (ret) + return ret; + ret = drm_bo_force_list_clean(dev, &man->pinned, mem_type, 0, 1, 1); + + return ret; +} + +int drm_bo_init_mm(struct drm_device *dev, + unsigned type, + unsigned long p_offset, unsigned long p_size) +{ + struct drm_buffer_manager *bm = &dev->bm; + int ret = -EINVAL; + struct drm_mem_type_manager *man; + + if (type >= DRM_BO_MEM_TYPES) { + DRM_ERROR("Illegal memory type %d\n", type); + return ret; + } + + man = &bm->man[type]; + if (man->has_type) { + DRM_ERROR("Memory manager already initialized for type %d\n", + type); + return ret; + } + + ret = dev->driver->bo_driver->init_mem_type(dev, type, man); + if (ret) + return ret; + + ret = 0; + if (type != DRM_BO_MEM_LOCAL) { + if (!p_size) { + DRM_ERROR("Zero size memory manager type %d\n", type); + return ret; + } + ret = drm_mm_init(&man->manager, p_offset, p_size); + if (ret) + return ret; + } + man->has_type = 1; + man->use_type = 1; + + INIT_LIST_HEAD(&man->lru); + INIT_LIST_HEAD(&man->pinned); + + return 0; +} +EXPORT_SYMBOL(drm_bo_init_mm); + +/* + * This function is intended to be called on drm driver unload. + * If you decide to call it from lastclose, you must protect the call + * from a potentially racing drm_bo_driver_init in firstopen. + * (This may happen on X server restart). 
+ */ + +int drm_bo_driver_finish(struct drm_device *dev) +{ + struct drm_buffer_manager *bm = &dev->bm; + int ret = 0; + unsigned i = DRM_BO_MEM_TYPES; + struct drm_mem_type_manager *man; + + mutex_lock(&dev->struct_mutex); + + if (!bm->initialized) + goto out; + bm->initialized = 0; + + while (i--) { + man = &bm->man[i]; + if (man->has_type) { + man->use_type = 0; + if ((i != DRM_BO_MEM_LOCAL) && drm_bo_clean_mm(dev, i)) { + ret = -EBUSY; + DRM_ERROR("DRM memory manager type %d " + "is not clean.\n", i); + } + man->has_type = 0; + } + } + mutex_unlock(&dev->struct_mutex); + + if (!cancel_delayed_work(&bm->wq)) + flush_scheduled_work(); + + mutex_lock(&dev->struct_mutex); + drm_bo_delayed_delete(dev, 1); + if (list_empty(&bm->ddestroy)) + DRM_DEBUG("Delayed destroy list was clean\n"); + + if (list_empty(&bm->man[0].lru)) + DRM_DEBUG("Swap list was clean\n"); + + if (list_empty(&bm->man[0].pinned)) + DRM_DEBUG("NO_MOVE list was clean\n"); + + if (list_empty(&bm->unfenced)) + DRM_DEBUG("Unfenced list was clean\n"); + + __free_page(bm->dummy_read_page); + +out: + mutex_unlock(&dev->struct_mutex); + return ret; +} + +/* + * This function is intended to be called on drm driver load. + * If you decide to call it from firstopen, you must protect the call + * from a potentially racing drm_bo_driver_finish in lastclose. + * (This may happen on X server restart). + */ + +int drm_bo_driver_init(struct drm_device *dev) +{ + struct drm_bo_driver *driver = dev->driver->bo_driver; + struct drm_buffer_manager *bm = &dev->bm; + int ret = -EINVAL; + + bm->dummy_read_page = NULL; + drm_bo_init_lock(&bm->bm_lock); + mutex_lock(&dev->struct_mutex); + if (!driver) + goto out_unlock; + + bm->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); + if (!bm->dummy_read_page) { + ret = -ENOMEM; + goto out_unlock; + } + + /* + * Initialize the system memory buffer type. + * Other types need to be driver / IOCTL initialized. + */ + ret = drm_bo_init_mm(dev, DRM_BO_MEM_LOCAL, 0, 0); + if (ret) + goto out_unlock; + + INIT_DELAYED_WORK(&bm->wq, drm_bo_delayed_workqueue); + bm->initialized = 1; + bm->nice_mode = 1; + atomic_set(&bm->count, 0); + bm->cur_pages = 0; + INIT_LIST_HEAD(&bm->unfenced); + INIT_LIST_HEAD(&bm->ddestroy); +out_unlock: + mutex_unlock(&dev->struct_mutex); + return ret; +} +EXPORT_SYMBOL(drm_bo_driver_init); + +int drm_mm_init_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_mm_init_arg *arg = data; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_bo_driver *driver = dev->driver->bo_driver; + int ret; + + if (!driver) { + DRM_ERROR("Buffer objects are not supported by this driver\n"); + return -EINVAL; + } + + ret = drm_bo_write_lock(&bm->bm_lock, file_priv); + if (ret) + return ret; + + ret = -EINVAL; + if (arg->magic != DRM_BO_INIT_MAGIC) { + DRM_ERROR("You are using an old libdrm that is not compatible with\n" + "\tthe kernel DRM module. Please upgrade your libdrm.\n"); + return -EINVAL; + } + if (arg->major != DRM_BO_INIT_MAJOR) { + DRM_ERROR("libdrm and kernel DRM buffer object interface major\n" + "\tversion don't match. 
Got %d, expected %d.\n", + arg->major, DRM_BO_INIT_MAJOR); + return -EINVAL; + } + + mutex_lock(&dev->struct_mutex); + if (!bm->initialized) { + DRM_ERROR("DRM memory manager was not initialized.\n"); + goto out; + } + if (arg->mem_type == 0) { + DRM_ERROR("System memory buffers already initialized.\n"); + goto out; + } + ret = drm_bo_init_mm(dev, arg->mem_type, + arg->p_offset, arg->p_size); + +out: + mutex_unlock(&dev->struct_mutex); + (void) drm_bo_write_unlock(&bm->bm_lock, file_priv); + + if (ret) + return ret; + + return 0; +} + +int drm_mm_takedown_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_mm_type_arg *arg = data; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_bo_driver *driver = dev->driver->bo_driver; + int ret; + + if (!driver) { + DRM_ERROR("Buffer objects are not supported by this driver\n"); + return -EINVAL; + } + + ret = drm_bo_write_lock(&bm->bm_lock, file_priv); + if (ret) + return ret; + + mutex_lock(&dev->struct_mutex); + ret = -EINVAL; + if (!bm->initialized) { + DRM_ERROR("DRM memory manager was not initialized\n"); + goto out; + } + if (arg->mem_type == 0) { + DRM_ERROR("No takedown for System memory buffers.\n"); + goto out; + } + ret = 0; + if (drm_bo_clean_mm(dev, arg->mem_type)) { + DRM_ERROR("Memory manager type %d not clean. " + "Delaying takedown\n", arg->mem_type); + } +out: + mutex_unlock(&dev->struct_mutex); + (void) drm_bo_write_unlock(&bm->bm_lock, file_priv); + + if (ret) + return ret; + + return 0; +} + +int drm_mm_lock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + struct drm_mm_type_arg *arg = data; + struct drm_bo_driver *driver = dev->driver->bo_driver; + int ret; + + if (!driver) { + DRM_ERROR("Buffer objects are not supported by this driver\n"); + return -EINVAL; + } + + if (arg->lock_flags & DRM_BO_LOCK_IGNORE_NO_EVICT) { + DRM_ERROR("Lock flag DRM_BO_LOCK_IGNORE_NO_EVICT not supported yet.\n"); + return -EINVAL; + } + + if (arg->lock_flags & DRM_BO_LOCK_UNLOCK_BM) { + ret = drm_bo_write_lock(&dev->bm.bm_lock, file_priv); + if (ret) + return ret; + } + + mutex_lock(&dev->struct_mutex); + ret = drm_bo_lock_mm(dev, arg->mem_type); + mutex_unlock(&dev->struct_mutex); + if (ret) { + (void) drm_bo_write_unlock(&dev->bm.bm_lock, file_priv); + return ret; + } + + return 0; +} + +int drm_mm_unlock_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file_priv) +{ + struct drm_mm_type_arg *arg = data; + struct drm_bo_driver *driver = dev->driver->bo_driver; + int ret; + + if (!driver) { + DRM_ERROR("Buffer objects are not supported by this driver\n"); + return -EINVAL; + } + + if (arg->lock_flags & DRM_BO_LOCK_UNLOCK_BM) { + ret = drm_bo_write_unlock(&dev->bm.bm_lock, file_priv); + if (ret) + return ret; + } + + return 0; +} + +/* + * buffer object vm functions. + */ + +int drm_mem_reg_is_pci(struct drm_device *dev, struct drm_bo_mem_reg *mem) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; + + if (!(man->flags & _DRM_FLAG_MEMTYPE_FIXED)) { + if (mem->mem_type == DRM_BO_MEM_LOCAL) + return 0; + + if (man->flags & _DRM_FLAG_MEMTYPE_CMA) + return 0; + + if (mem->flags & DRM_BO_FLAG_CACHED) + return 0; + } + return 1; +} +EXPORT_SYMBOL(drm_mem_reg_is_pci); + +/** + * \c Get the PCI offset for the buffer object memory. + * + * \param bo The buffer object. 
+ * \param bus_base On return the base of the PCI region + * \param bus_offset On return the byte offset into the PCI region + * \param bus_size On return the byte size of the buffer object or zero if + * the buffer object memory is not accessible through a PCI region. + * \return Failure indication. + * + * Returns -EINVAL if the buffer object is currently not mappable. + * Otherwise returns zero. + */ + +int drm_bo_pci_offset(struct drm_device *dev, + struct drm_bo_mem_reg *mem, + unsigned long *bus_base, + unsigned long *bus_offset, unsigned long *bus_size) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; + + *bus_size = 0; + if (!(man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE)) + return -EINVAL; + + if (drm_mem_reg_is_pci(dev, mem)) { + *bus_offset = mem->mm_node->start << PAGE_SHIFT; + *bus_size = mem->num_pages << PAGE_SHIFT; + *bus_base = man->io_offset; + } + + return 0; +} + +/** + * \c Kill all user-space virtual mappings of this buffer object. + * + * \param bo The buffer object. + * + * Call bo->mutex locked. + */ + +void drm_bo_unmap_virtual(struct drm_buffer_object *bo) +{ + struct drm_device *dev = bo->dev; + loff_t offset = ((loff_t) bo->map_list.hash.key) << PAGE_SHIFT; + loff_t holelen = ((loff_t) bo->mem.num_pages) << PAGE_SHIFT; + + if (!dev->dev_mapping) + return; + + unmap_mapping_range(dev->dev_mapping, offset, holelen, 1); +} + +/** + * drm_bo_takedown_vm_locked: + * + * @bo: the buffer object to remove any drm device mapping + * + * Remove any associated vm mapping on the drm device node that + * would have been created for a drm_bo_type_device buffer + */ +static void drm_bo_takedown_vm_locked(struct drm_buffer_object *bo) +{ + struct drm_map_list *list; + drm_local_map_t *map; + struct drm_device *dev = bo->dev; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + if (bo->type != drm_bo_type_device) + return; + + list = &bo->map_list; + if (list->user_token) { + drm_ht_remove_item(&dev->map_hash, &list->hash); + list->user_token = 0; + } + if (list->file_offset_node) { + drm_mm_put_block(list->file_offset_node); + list->file_offset_node = NULL; + } + + map = list->map; + if (!map) + return; + + drm_ctl_free(map, sizeof(*map), DRM_MEM_BUFOBJ); + list->map = NULL; + list->user_token = 0ULL; + drm_bo_usage_deref_locked(&bo); +} + +/** + * drm_bo_setup_vm_locked: + * + * @bo: the buffer to allocate address space for + * + * Allocate address space in the drm device so that applications + * can mmap the buffer and access the contents. This only + * applies to drm_bo_type_device objects as others are not + * placed in the drm device address space. 
+ */ +static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo) +{ + struct drm_map_list *list = &bo->map_list; + drm_local_map_t *map; + struct drm_device *dev = bo->dev; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + list->map = drm_ctl_calloc(1, sizeof(*map), DRM_MEM_BUFOBJ); + if (!list->map) + return -ENOMEM; + + map = list->map; + map->offset = 0; + map->type = _DRM_TTM; + map->flags = _DRM_REMOVABLE; + map->size = bo->mem.num_pages * PAGE_SIZE; + atomic_inc(&bo->usage); + map->handle = (void *)bo; + + list->file_offset_node = drm_mm_search_free(&dev->offset_manager, + bo->mem.num_pages, 0, 0); + + if (!list->file_offset_node) { + drm_bo_takedown_vm_locked(bo); + return -ENOMEM; + } + + list->file_offset_node = drm_mm_get_block(list->file_offset_node, + bo->mem.num_pages, 0); + + list->hash.key = list->file_offset_node->start; + if (drm_ht_insert_item(&dev->map_hash, &list->hash)) { + drm_bo_takedown_vm_locked(bo); + return -ENOMEM; + } + + list->user_token = ((uint64_t) list->hash.key) << PAGE_SHIFT; + + return 0; +} + +int drm_bo_version_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct drm_bo_version_arg *arg = (struct drm_bo_version_arg *)data; + + arg->major = DRM_BO_INIT_MAJOR; + arg->minor = DRM_BO_INIT_MINOR; + arg->patchlevel = DRM_BO_INIT_PATCH; + + return 0; +} diff --git a/drivers/char/drm/drm_bo_lock.c b/drivers/char/drm/drm_bo_lock.c new file mode 100644 index 0000000..e4f159d --- /dev/null +++ b/drivers/char/drm/drm_bo_lock.c @@ -0,0 +1,175 @@ +/************************************************************************** + * + * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +/* + * This file implements a simple replacement for the buffer manager use + * of the heavyweight hardware lock. + * The lock is a read-write lock. Taking it in read mode is fast, and + * intended for in-kernel use only. + * Taking it in write mode is slow. + * + * The write mode is used only when there is a need to block all + * user-space processes from allocating a + * new memory area. + * Typical use in write mode is X server VT switching, and it's allowed + * to leave kernel space with the write lock held. 
If a user-space process + * dies while having the write-lock, it will be released during the file + * descriptor release. + * + * The read lock is typically placed at the start of an IOCTL- or + * user-space callable function that may end up allocating a memory area. + * This includes setstatus, super-ioctls and no_pfn; the latter may move + * unmappable regions to mappable. It's a bug to leave kernel space with the + * read lock held. + * + * Both read- and write lock taking is interruptible for low signal-delivery + * latency. The locking functions will return -EAGAIN if interrupted by a + * signal. + * + * Locking order: The lock should be taken BEFORE any kernel mutexes + * or spinlocks. + */ + +#include "drmP.h" + +void drm_bo_init_lock(struct drm_bo_lock *lock) +{ + DRM_INIT_WAITQUEUE(&lock->queue); + atomic_set(&lock->write_lock_pending, 0); + atomic_set(&lock->readers, 0); +} + +void drm_bo_read_unlock(struct drm_bo_lock *lock) +{ + if (unlikely(atomic_add_negative(-1, &lock->readers))) + BUG(); + if (atomic_read(&lock->readers) == 0) + wake_up_interruptible(&lock->queue); +} +EXPORT_SYMBOL(drm_bo_read_unlock); + +int drm_bo_read_lock(struct drm_bo_lock *lock) +{ + while (unlikely(atomic_read(&lock->write_lock_pending) != 0)) { + int ret; + ret = wait_event_interruptible + (lock->queue, atomic_read(&lock->write_lock_pending) == 0); + if (ret) + return -EAGAIN; + } + + while (unlikely(!atomic_add_unless(&lock->readers, 1, -1))) { + int ret; + ret = wait_event_interruptible + (lock->queue, atomic_add_unless(&lock->readers, 1, -1)); + if (ret) + return -EAGAIN; + } + return 0; +} +EXPORT_SYMBOL(drm_bo_read_lock); + +static int __drm_bo_write_unlock(struct drm_bo_lock *lock) +{ + if (unlikely(atomic_cmpxchg(&lock->readers, -1, 0) != -1)) + return -EINVAL; + if (unlikely(atomic_cmpxchg(&lock->write_lock_pending, 1, 0) != 1)) + return -EINVAL; + wake_up_interruptible(&lock->queue); + return 0; +} + +static void drm_bo_write_lock_remove(struct drm_file *file_priv, + struct drm_user_object *item) +{ + struct drm_bo_lock *lock = container_of(item, struct drm_bo_lock, base); + int ret; + + ret = __drm_bo_write_unlock(lock); + BUG_ON(ret); +} + +int drm_bo_write_lock(struct drm_bo_lock *lock, struct drm_file *file_priv) +{ + int ret = 0; + struct drm_device *dev; + + if (unlikely(atomic_cmpxchg(&lock->write_lock_pending, 0, 1) != 0)) + return -EINVAL; + + while (unlikely(atomic_cmpxchg(&lock->readers, 0, -1) != 0)) { + ret = wait_event_interruptible + (lock->queue, atomic_cmpxchg(&lock->readers, 0, -1) == 0); + + if (ret) { + atomic_set(&lock->write_lock_pending, 0); + wake_up_interruptible(&lock->queue); + return -EAGAIN; + } + } + + /* + * Add a dummy user-object, the destructor of which will + * make sure the lock is released if the client dies + * while holding it. 
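+ *
+ * (The remove callback installed below, drm_bo_write_lock_remove,
+ * calls __drm_bo_write_unlock, so a write lock held by a client that
+ * exits without unlocking is dropped when its ref objects are torn
+ * down at file release.)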
+ */ + + dev = file_priv->head->dev; + mutex_lock(&dev->struct_mutex); + ret = drm_add_user_object(file_priv, &lock->base, 0); + lock->base.remove = &drm_bo_write_lock_remove; + lock->base.type = drm_lock_type; + if (ret) + (void)__drm_bo_write_unlock(lock); + + mutex_unlock(&dev->struct_mutex); + + return ret; +} + +int drm_bo_write_unlock(struct drm_bo_lock *lock, struct drm_file *file_priv) +{ + struct drm_device *dev = file_priv->head->dev; + struct drm_ref_object *ro; + + mutex_lock(&dev->struct_mutex); + + if (lock->base.owner != file_priv) { + mutex_unlock(&dev->struct_mutex); + return -EINVAL; + } + ro = drm_lookup_ref_object(file_priv, &lock->base, _DRM_REF_USE); + BUG_ON(!ro); + drm_remove_ref_object(file_priv, ro); + lock->base.owner = NULL; + + mutex_unlock(&dev->struct_mutex); + return 0; +} diff --git a/drivers/char/drm/drm_bo_move.c b/drivers/char/drm/drm_bo_move.c new file mode 100644 index 0000000..70e8d80 --- /dev/null +++ b/drivers/char/drm/drm_bo_move.c @@ -0,0 +1,576 @@ +/************************************************************************** + * + * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include "drmP.h" + +/** + * Free the old memory node unless it's a pinned region and we + * have not been requested to free also pinned regions. 
+ */ + +static void drm_bo_free_old_node(struct drm_buffer_object *bo) +{ + struct drm_bo_mem_reg *old_mem = &bo->mem; + + if (old_mem->mm_node && (old_mem->mm_node != bo->pinned_node)) { + mutex_lock(&bo->dev->struct_mutex); + drm_mm_put_block(old_mem->mm_node); + old_mem->mm_node = NULL; + mutex_unlock(&bo->dev->struct_mutex); + } + old_mem->mm_node = NULL; +} + +int drm_bo_move_ttm(struct drm_buffer_object *bo, + int evict, int no_wait, struct drm_bo_mem_reg *new_mem) +{ + struct drm_ttm *ttm = bo->ttm; + struct drm_bo_mem_reg *old_mem = &bo->mem; + uint64_t save_flags = old_mem->flags; + uint64_t save_proposed_flags = old_mem->proposed_flags; + int ret; + + if (old_mem->mem_type == DRM_BO_MEM_TT) { + if (evict) + drm_ttm_evict(ttm); + else + drm_ttm_unbind(ttm); + + drm_bo_free_old_node(bo); + DRM_FLAG_MASKED(old_mem->flags, + DRM_BO_FLAG_CACHED | DRM_BO_FLAG_MAPPABLE | + DRM_BO_FLAG_MEM_LOCAL, DRM_BO_MASK_MEMTYPE); + old_mem->mem_type = DRM_BO_MEM_LOCAL; + save_flags = old_mem->flags; + } + if (new_mem->mem_type != DRM_BO_MEM_LOCAL) { + ret = drm_ttm_bind(ttm, new_mem); + if (ret) + return ret; + } + + *old_mem = *new_mem; + new_mem->mm_node = NULL; + old_mem->proposed_flags = save_proposed_flags; + DRM_FLAG_MASKED(save_flags, new_mem->flags, DRM_BO_MASK_MEMTYPE); + return 0; +} +EXPORT_SYMBOL(drm_bo_move_ttm); + +/** + * \c Return a kernel virtual address to the buffer object PCI memory. + * + * \param bo The buffer object. + * \return Failure indication. + * + * Returns -EINVAL if the buffer object is currently not mappable. + * Returns -ENOMEM if the ioremap operation failed. + * Otherwise returns zero. + * + * After a successfull call, bo->iomap contains the virtual address, or NULL + * if the buffer object content is not accessible through PCI space. + * Call bo->mutex locked. + */ + +int drm_mem_reg_ioremap(struct drm_device *dev, struct drm_bo_mem_reg *mem, + void **virtual) +{ + struct drm_buffer_manager *bm = &dev->bm; + struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; + unsigned long bus_offset; + unsigned long bus_size; + unsigned long bus_base; + int ret; + void *addr; + + *virtual = NULL; + ret = drm_bo_pci_offset(dev, mem, &bus_base, &bus_offset, &bus_size); + if (ret || bus_size == 0) + return ret; + + if (!(man->flags & _DRM_FLAG_NEEDS_IOREMAP)) + addr = (void *)(((u8 *) man->io_addr) + bus_offset); + else { + addr = ioremap_nocache(bus_base + bus_offset, bus_size); + if (!addr) + return -ENOMEM; + } + *virtual = addr; + return 0; +} +EXPORT_SYMBOL(drm_mem_reg_ioremap); + +/** + * \c Unmap mapping obtained using drm_bo_ioremap + * + * \param bo The buffer object. + * + * Call bo->mutex locked. 
+ */ + +void drm_mem_reg_iounmap(struct drm_device *dev, struct drm_bo_mem_reg *mem, + void *virtual) +{ + struct drm_buffer_manager *bm; + struct drm_mem_type_manager *man; + + bm = &dev->bm; + man = &bm->man[mem->mem_type]; + + if (virtual && (man->flags & _DRM_FLAG_NEEDS_IOREMAP)) + iounmap(virtual); +} + +static int drm_copy_io_page(void *dst, void *src, unsigned long page) +{ + uint32_t *dstP = + (uint32_t *) ((unsigned long)dst + (page << PAGE_SHIFT)); + uint32_t *srcP = + (uint32_t *) ((unsigned long)src + (page << PAGE_SHIFT)); + + int i; + for (i = 0; i < PAGE_SIZE / sizeof(uint32_t); ++i) + iowrite32(ioread32(srcP++), dstP++); + return 0; +} + +static int drm_copy_io_ttm_page(struct drm_ttm *ttm, void *src, + unsigned long page) +{ + struct page *d = drm_ttm_get_page(ttm, page); + void *dst; + + if (!d) + return -ENOMEM; + + src = (void *)((unsigned long)src + (page << PAGE_SHIFT)); + dst = kmap(d); + if (!dst) + return -ENOMEM; + + memcpy_fromio(dst, src, PAGE_SIZE); + kunmap(d); + return 0; +} + +static int drm_copy_ttm_io_page(struct drm_ttm *ttm, void *dst, unsigned long page) +{ + struct page *s = drm_ttm_get_page(ttm, page); + void *src; + + if (!s) + return -ENOMEM; + + dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT)); + src = kmap(s); + if (!src) + return -ENOMEM; + + memcpy_toio(dst, src, PAGE_SIZE); + kunmap(s); + return 0; +} + +int drm_bo_move_memcpy(struct drm_buffer_object *bo, + int evict, int no_wait, struct drm_bo_mem_reg *new_mem) +{ + struct drm_device *dev = bo->dev; + struct drm_mem_type_manager *man = &dev->bm.man[new_mem->mem_type]; + struct drm_ttm *ttm = bo->ttm; + struct drm_bo_mem_reg *old_mem = &bo->mem; + struct drm_bo_mem_reg old_copy = *old_mem; + void *old_iomap; + void *new_iomap; + int ret; + uint64_t save_flags = old_mem->flags; + uint64_t save_proposed_flags = old_mem->proposed_flags; + unsigned long i; + unsigned long page; + unsigned long add = 0; + int dir; + + ret = drm_mem_reg_ioremap(dev, old_mem, &old_iomap); + if (ret) + return ret; + ret = drm_mem_reg_ioremap(dev, new_mem, &new_iomap); + if (ret) + goto out; + + if (old_iomap == NULL && new_iomap == NULL) + goto out2; + if (old_iomap == NULL && ttm == NULL) + goto out2; + + add = 0; + dir = 1; + + if ((old_mem->mem_type == new_mem->mem_type) && + (new_mem->mm_node->start < + old_mem->mm_node->start + old_mem->mm_node->size)) { + dir = -1; + add = new_mem->num_pages - 1; + } + + for (i = 0; i < new_mem->num_pages; ++i) { + page = i * dir + add; + if (old_iomap == NULL) + ret = drm_copy_ttm_io_page(ttm, new_iomap, page); + else if (new_iomap == NULL) + ret = drm_copy_io_ttm_page(ttm, old_iomap, page); + else + ret = drm_copy_io_page(new_iomap, old_iomap, page); + if (ret) + goto out1; + } + mb(); +out2: + drm_bo_free_old_node(bo); + + *old_mem = *new_mem; + new_mem->mm_node = NULL; + old_mem->proposed_flags = save_proposed_flags; + DRM_FLAG_MASKED(save_flags, new_mem->flags, DRM_BO_MASK_MEMTYPE); + + if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (ttm != NULL)) { + drm_ttm_unbind(ttm); + drm_ttm_destroy(ttm); + bo->ttm = NULL; + } + +out1: + drm_mem_reg_iounmap(dev, new_mem, new_iomap); +out: + drm_mem_reg_iounmap(dev, &old_copy, old_iomap); + return ret; +} +EXPORT_SYMBOL(drm_bo_move_memcpy); + +/* + * Transfer a buffer object's memory and LRU status to a newly + * created object. User-space references remains with the old + * object. Call bo->mutex locked. 
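+ *
+ * (drm_bo_move_accel_cleanup() below uses this to pipeline moves: the
+ * old memory node and fence are hung on the transfer object, which is
+ * put back on the LRU and freed only once the GPU operation has
+ * completed, while the original object takes over the new region
+ * immediately.)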
+ */ + +int drm_buffer_object_transfer(struct drm_buffer_object *bo, + struct drm_buffer_object **new_obj) +{ + struct drm_buffer_object *fbo; + struct drm_device *dev = bo->dev; + struct drm_buffer_manager *bm = &dev->bm; + + fbo = drm_ctl_calloc(1, sizeof(*fbo), DRM_MEM_BUFOBJ); + if (!fbo) + return -ENOMEM; + + *fbo = *bo; + mutex_init(&fbo->mutex); + mutex_lock(&fbo->mutex); + mutex_lock(&dev->struct_mutex); + + DRM_INIT_WAITQUEUE(&bo->event_queue); + INIT_LIST_HEAD(&fbo->ddestroy); + INIT_LIST_HEAD(&fbo->lru); + INIT_LIST_HEAD(&fbo->pinned_lru); + + fbo->fence = drm_fence_reference_locked(bo->fence); + fbo->pinned_node = NULL; + fbo->mem.mm_node->private = (void *)fbo; + atomic_set(&fbo->usage, 1); + atomic_inc(&bm->count); + mutex_unlock(&dev->struct_mutex); + mutex_unlock(&fbo->mutex); + + *new_obj = fbo; + return 0; +} + +/* + * Since move is underway, we need to block signals in this function. + * We cannot restart until it has finished. + */ + +int drm_bo_move_accel_cleanup(struct drm_buffer_object *bo, + int evict, int no_wait, uint32_t fence_class, + uint32_t fence_type, uint32_t fence_flags, + struct drm_bo_mem_reg *new_mem) +{ + struct drm_device *dev = bo->dev; + struct drm_mem_type_manager *man = &dev->bm.man[new_mem->mem_type]; + struct drm_bo_mem_reg *old_mem = &bo->mem; + int ret; + uint64_t save_flags = old_mem->flags; + uint64_t save_proposed_flags = old_mem->proposed_flags; + struct drm_buffer_object *old_obj; + + if (bo->fence) + drm_fence_usage_deref_unlocked(&bo->fence); + ret = drm_fence_object_create(dev, fence_class, fence_type, + fence_flags | DRM_FENCE_FLAG_EMIT, + &bo->fence); + bo->fence_type = fence_type; + if (ret) + return ret; + + if (evict || ((bo->mem.mm_node == bo->pinned_node) && + bo->mem.mm_node != NULL)) { + ret = drm_bo_wait(bo, 0, 1, 0); + if (ret) + return ret; + + drm_bo_free_old_node(bo); + + if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm != NULL)) { + drm_ttm_unbind(bo->ttm); + drm_ttm_destroy(bo->ttm); + bo->ttm = NULL; + } + } else { + + /* This should help pipeline ordinary buffer moves. + * + * Hang old buffer memory on a new buffer object, + * and leave it to be released when the GPU + * operation has completed. + */ + + ret = drm_buffer_object_transfer(bo, &old_obj); + + if (ret) + return ret; + + if (!(man->flags & _DRM_FLAG_MEMTYPE_FIXED)) + old_obj->ttm = NULL; + else + bo->ttm = NULL; + + mutex_lock(&dev->struct_mutex); + list_del_init(&old_obj->lru); + DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); + drm_bo_add_to_lru(old_obj); + + drm_bo_usage_deref_locked(&old_obj); + mutex_unlock(&dev->struct_mutex); + + } + + *old_mem = *new_mem; + new_mem->mm_node = NULL; + old_mem->proposed_flags = save_proposed_flags; + DRM_FLAG_MASKED(save_flags, new_mem->flags, DRM_BO_MASK_MEMTYPE); + return 0; +} +EXPORT_SYMBOL(drm_bo_move_accel_cleanup); + +int drm_bo_same_page(unsigned long offset, + unsigned long offset2) +{ + return (offset & PAGE_MASK) == (offset2 & PAGE_MASK); +} +EXPORT_SYMBOL(drm_bo_same_page); + +unsigned long drm_bo_offset_end(unsigned long offset, + unsigned long end) +{ + offset = (offset + PAGE_SIZE) & PAGE_MASK; + return (end < offset) ? 
end : offset;
+}
+EXPORT_SYMBOL(drm_bo_offset_end);
+
+static pgprot_t drm_kernel_io_prot(uint32_t map_type)
+{
+	pgprot_t tmp = PAGE_KERNEL;
+
+#if defined(__i386__) || defined(__x86_64__)
+#ifdef USE_PAT_WC
+#warning using pat
+	if (drm_use_pat() && map_type == _DRM_TTM) {
+		pgprot_val(tmp) |= _PAGE_PAT;
+		return tmp;
+	}
+#endif
+	if (boot_cpu_data.x86 > 3 && map_type != _DRM_AGP) {
+		pgprot_val(tmp) |= _PAGE_PCD;
+		pgprot_val(tmp) &= ~_PAGE_PWT;
+	}
+#elif defined(__powerpc__)
+	pgprot_val(tmp) |= _PAGE_NO_CACHE;
+	if (map_type == _DRM_REGISTERS)
+		pgprot_val(tmp) |= _PAGE_GUARDED;
+#endif
+#if defined(__ia64__)
+	if (map_type == _DRM_TTM)
+		tmp = pgprot_writecombine(tmp);
+	else
+		tmp = pgprot_noncached(tmp);
+#endif
+	return tmp;
+}
+
+static int drm_bo_ioremap(struct drm_buffer_object *bo, unsigned long bus_base,
+			  unsigned long bus_offset, unsigned long bus_size,
+			  struct drm_bo_kmap_obj *map)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_bo_mem_reg *mem = &bo->mem;
+	struct drm_mem_type_manager *man = &dev->bm.man[mem->mem_type];
+
+	if (!(man->flags & _DRM_FLAG_NEEDS_IOREMAP)) {
+		map->bo_kmap_type = bo_map_premapped;
+		map->virtual = (void *)(((u8 *) man->io_addr) + bus_offset);
+	} else {
+		map->bo_kmap_type = bo_map_iomap;
+		map->virtual = ioremap_nocache(bus_base + bus_offset, bus_size);
+	}
+	return (!map->virtual) ? -ENOMEM : 0;
+}
+
+static int drm_bo_kmap_ttm(struct drm_buffer_object *bo,
+			   unsigned long start_page, unsigned long num_pages,
+			   struct drm_bo_kmap_obj *map)
+{
+	struct drm_device *dev = bo->dev;
+	struct drm_bo_mem_reg *mem = &bo->mem;
+	struct drm_mem_type_manager *man = &dev->bm.man[mem->mem_type];
+	pgprot_t prot;
+	struct drm_ttm *ttm = bo->ttm;
+	struct page *d;
+	int i;
+
+	BUG_ON(!ttm);
+
+	if (num_pages == 1 && (mem->flags & DRM_BO_FLAG_CACHED)) {
+
+		/*
+		 * We're mapping a single page, and the desired
+		 * page protection is consistent with the bo.
+		 */
+
+		map->bo_kmap_type = bo_map_kmap;
+		map->page = drm_ttm_get_page(ttm, start_page);
+		map->virtual = kmap(map->page);
+	} else {
+		/*
+		 * Populate the part we're mapping.
+		 */
+
+		for (i = start_page; i < start_page + num_pages; ++i) {
+			d = drm_ttm_get_page(ttm, i);
+			if (!d)
+				return -ENOMEM;
+		}
+
+		/*
+		 * We need to use vmap to get the desired page protection
+		 * or to make the buffer object look contiguous.
+		 */
+
+		prot = (mem->flags & DRM_BO_FLAG_CACHED) ?
+			PAGE_KERNEL :
+			drm_kernel_io_prot(man->drm_bus_maptype);
+		map->bo_kmap_type = bo_map_vmap;
+		map->virtual = vmap(ttm->pages + start_page,
+				    num_pages, 0, prot);
+	}
+	return (!map->virtual) ? -ENOMEM : 0;
+}
+
+/*
+ * This function is to be used for kernel mapping of buffer objects.
+ * It chooses the appropriate mapping method depending on the memory type
+ * and caching policy the buffer currently has.
+ * Mapping multiple pages or buffers that live in io memory is a bit slow and
+ * consumes vmalloc space. Be restrictive with such mappings.
+ * Mapping single pages usually returns the logical kernel address
+ * (which is fast), but may use slower temporary mappings for high memory
+ * pages or uncached / write-combined pages.
+ *
+ * The function fills in a drm_bo_kmap_obj which can be used to return the
+ * kernel virtual address of the buffer.
+ *
+ * Code servicing a non-privileged user request is only allowed to map one
+ * page at a time. We might need to implement a better scheme to stop such
+ * processes from consuming all vmalloc space.
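+ *
+ * A minimal, illustrative in-kernel use (assuming a cached buffer object
+ * that is already idle; variable names here are examples, not taken from
+ * this patch):
+ *
+ *	struct drm_bo_kmap_obj map;
+ *	int ret = drm_bo_kmap(bo, 0, 1, &map);
+ *
+ *	if (!ret) {
+ *		memset(map.virtual, 0, PAGE_SIZE);
+ *		drm_bo_kunmap(&map);
+ *	}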
+ */ + +int drm_bo_kmap(struct drm_buffer_object *bo, unsigned long start_page, + unsigned long num_pages, struct drm_bo_kmap_obj *map) +{ + int ret; + unsigned long bus_base; + unsigned long bus_offset; + unsigned long bus_size; + + map->virtual = NULL; + + if (num_pages > bo->num_pages) + return -EINVAL; + if (start_page > bo->num_pages) + return -EINVAL; + ret = drm_bo_pci_offset(bo->dev, &bo->mem, &bus_base, + &bus_offset, &bus_size); + + if (ret) + return ret; + + if (bus_size == 0) { + return drm_bo_kmap_ttm(bo, start_page, num_pages, map); + } else { + bus_offset += start_page << PAGE_SHIFT; + bus_size = num_pages << PAGE_SHIFT; + return drm_bo_ioremap(bo, bus_base, bus_offset, bus_size, map); + } +} +EXPORT_SYMBOL(drm_bo_kmap); + +void drm_bo_kunmap(struct drm_bo_kmap_obj *map) +{ + if (!map->virtual) + return; + + switch (map->bo_kmap_type) { + case bo_map_iomap: + iounmap(map->virtual); + break; + case bo_map_vmap: + vunmap(map->virtual); + break; + case bo_map_kmap: + kunmap(map->page); + break; + case bo_map_premapped: + break; + default: + BUG(); + } + map->virtual = NULL; + map->page = NULL; +} +EXPORT_SYMBOL(drm_bo_kunmap); diff --git a/drivers/char/drm/drm_bufs.c b/drivers/char/drm/drm_bufs.c index d24a6c2..6da493f 100644 --- a/drivers/char/drm/drm_bufs.c +++ b/drivers/char/drm/drm_bufs.c @@ -46,7 +46,6 @@ unsigned long drm_get_resource_len(struct drm_device *dev, unsigned int resource { return pci_resource_len(dev->pdev, resource); } - EXPORT_SYMBOL(drm_get_resource_len); static struct drm_map_list *drm_find_matching_map(struct drm_device *dev, @@ -56,7 +55,7 @@ static struct drm_map_list *drm_find_matching_map(struct drm_device *dev, list_for_each_entry(entry, &dev->maplist, head) { if (entry->map && map->type == entry->map->type && ((entry->map->offset == map->offset) || - (map->type == _DRM_SHM && map->flags==_DRM_CONTAINS_LOCK))) { + (map->type == _DRM_SHM && map->flags == _DRM_CONTAINS_LOCK))) { return entry; } } @@ -101,10 +100,10 @@ static int drm_map_handle(struct drm_device *dev, struct drm_hash_item *hash, * type. Adds the map to the map list drm_device::maplist. Adds MTRR's where * applicable and if supported by the kernel. */ -static int drm_addmap_core(struct drm_device * dev, unsigned int offset, +static int drm_addmap_core(struct drm_device *dev, unsigned int offset, unsigned int size, enum drm_map_type type, enum drm_map_flags flags, - struct drm_map_list ** maplist) + struct drm_map_list **maplist) { struct drm_map *map; struct drm_map_list *list; @@ -184,12 +183,12 @@ static int drm_addmap_core(struct drm_device * dev, unsigned int offset, return -ENOMEM; } } - + break; case _DRM_SHM: list = drm_find_matching_map(dev, map); if (list != NULL) { - if(list->map->size != map->size) { + if (list->map->size != map->size) { DRM_DEBUG("Matching maps of type %d with " "mismatched sizes, (%ld vs %ld)\n", map->type, map->size, list->map->size); @@ -229,11 +228,17 @@ static int drm_addmap_core(struct drm_device * dev, unsigned int offset, #ifdef __alpha__ map->offset += dev->hose->mem_space->start; #endif - /* Note: dev->agp->base may actually be 0 when the DRM - * is not in control of AGP space. But if user space is - * it should already have added the AGP base itself. + /* In some cases (i810 driver), user space may have already + * added the AGP base itself, because dev->agp->base previously + * only got set during AGP enable. So, only add the base + * address if the map's offset isn't already within the + * aperture. 
*/ - map->offset += dev->agp->base; + if (map->offset < dev->agp->base || + map->offset > dev->agp->base + + dev->agp->agp_info.aper_size * 1024 * 1024 - 1) { + map->offset += dev->agp->base; + } map->mtrr = dev->agp->agp_mtrr; /* for getmap */ /* This assumes the DRM is in total control of AGP space. @@ -317,9 +322,9 @@ static int drm_addmap_core(struct drm_device * dev, unsigned int offset, return 0; } -int drm_addmap(struct drm_device * dev, unsigned int offset, +int drm_addmap(struct drm_device *dev, unsigned int offset, unsigned int size, enum drm_map_type type, - enum drm_map_flags flags, drm_local_map_t ** map_ptr) + enum drm_map_flags flags, drm_local_map_t **map_ptr) { struct drm_map_list *list; int rc; @@ -329,7 +334,6 @@ int drm_addmap(struct drm_device * dev, unsigned int offset, *map_ptr = list->map; return rc; } - EXPORT_SYMBOL(drm_addmap); int drm_addmap_ioctl(struct drm_device *dev, void *data, @@ -413,6 +417,8 @@ int drm_rmmap_locked(struct drm_device *dev, drm_local_map_t *map) dmah.size = map->size; __drm_pci_free(dev, &dmah); break; + case _DRM_TTM: + BUG_ON(1); } drm_free(map, sizeof(*map), DRM_MEM_MAPS); @@ -429,6 +435,7 @@ int drm_rmmap(struct drm_device *dev, drm_local_map_t *map) return ret; } +EXPORT_SYMBOL(drm_rmmap); /* The rmmap ioctl appears to be unnecessary. All mappings are torn down on * the last close of the device, and this is necessary for cleanup when things @@ -486,16 +493,15 @@ int drm_rmmap_ioctl(struct drm_device *dev, void *data, * * Frees any pages and buffers associated with the given entry. */ -static void drm_cleanup_buf_error(struct drm_device * dev, - struct drm_buf_entry * entry) +static void drm_cleanup_buf_error(struct drm_device *dev, + struct drm_buf_entry *entry) { int i; if (entry->seg_count) { for (i = 0; i < entry->seg_count; i++) { - if (entry->seglist[i]) { + if (entry->seglist[i]) drm_pci_free(dev, entry->seglist[i]); - } } drm_free(entry->seglist, entry->seg_count * @@ -532,7 +538,7 @@ static void drm_cleanup_buf_error(struct drm_device * dev, * reallocates the buffer list of the same size order to accommodate the new * buffers. */ -int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request) +int drm_addbufs_agp(struct drm_device *dev, struct drm_buf_desc *request) { struct drm_device_dma *dma = dev->dma; struct drm_buf_entry *entry; @@ -677,9 +683,8 @@ int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request) } dma->buflist = temp_buflist; - for (i = 0; i < entry->buf_count; i++) { + for (i = 0; i < entry->buf_count; i++) dma->buflist[i + dma->buf_count] = &entry->buflist[i]; - } dma->buf_count += entry->buf_count; dma->seg_count += entry->seg_count; @@ -702,7 +707,7 @@ int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request) EXPORT_SYMBOL(drm_addbufs_agp); #endif /* __OS_HAS_AGP */ -int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request) +int drm_addbufs_pci(struct drm_device *dev, struct drm_buf_desc *request) { struct drm_device_dma *dma = dev->dma; int count; @@ -814,9 +819,9 @@ int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request) page_count = 0; while (entry->buf_count < count) { - + dmah = drm_pci_alloc(dev, PAGE_SIZE << page_order, 0x1000, 0xfffffffful); - + if (!dmah) { /* Set count correctly so we free the proper amount. 
*/ entry->buf_count = count; @@ -895,9 +900,8 @@ int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request) } dma->buflist = temp_buflist; - for (i = 0; i < entry->buf_count; i++) { + for (i = 0; i < entry->buf_count; i++) dma->buflist[i + dma->buf_count] = &entry->buflist[i]; - } /* No allocations failed, so now we can replace the orginal pagelist * with the new one. @@ -1090,7 +1094,7 @@ static int drm_addbufs_sg(struct drm_device * dev, struct drm_buf_desc * request return 0; } -static int drm_addbufs_fb(struct drm_device * dev, struct drm_buf_desc * request) +static int drm_addbufs_fb(struct drm_device *dev, struct drm_buf_desc *request) { struct drm_device_dma *dma = dev->dma; struct drm_buf_entry *entry; @@ -1227,9 +1231,8 @@ static int drm_addbufs_fb(struct drm_device * dev, struct drm_buf_desc * request } dma->buflist = temp_buflist; - for (i = 0; i < entry->buf_count; i++) { + for (i = 0; i < entry->buf_count; i++) dma->buflist[i + dma->buf_count] = &entry->buflist[i]; - } dma->buf_count += entry->buf_count; dma->seg_count += entry->seg_count; @@ -1480,7 +1483,7 @@ int drm_freebufs(struct drm_device *dev, void *data, * drm_mmap_dma(). */ int drm_mapbufs(struct drm_device *dev, void *data, - struct drm_file *file_priv) + struct drm_file *file_priv) { struct drm_device_dma *dma = dev->dma; int retcode = 0; @@ -1563,7 +1566,7 @@ int drm_mapbufs(struct drm_device *dev, void *data, } } } - done: +done: request->count = dma->buf_count; DRM_DEBUG("%d buffers, retcode = %d\n", request->count, retcode); @@ -1592,5 +1595,3 @@ int drm_order(unsigned long size) return order; } EXPORT_SYMBOL(drm_order); - - diff --git a/drivers/char/drm/drm_context.c b/drivers/char/drm/drm_context.c index 17fe69e..8a8b963 100644 --- a/drivers/char/drm/drm_context.c +++ b/drivers/char/drm/drm_context.c @@ -159,7 +159,7 @@ int drm_getsareactx(struct drm_device *dev, void *data, request->handle = NULL; list_for_each_entry(_entry, &dev->maplist, head) { if (_entry->map == map) { - request->handle = + request->handle = (void *)(unsigned long)_entry->user_token; break; } @@ -195,11 +195,11 @@ int drm_setsareactx(struct drm_device *dev, void *data, && r_list->user_token == (unsigned long) request->handle) goto found; } - bad: +bad: mutex_unlock(&dev->struct_mutex); return -EINVAL; - found: +found: map = r_list->map; if (!map) goto bad; diff --git a/drivers/char/drm/drm_drv.c b/drivers/char/drm/drm_drv.c index 44a4626..798d85b 100644 --- a/drivers/char/drm/drm_drv.c +++ b/drivers/char/drm/drm_drv.c @@ -117,6 +117,34 @@ static struct drm_ioctl_desc drm_ioctls[] = { DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, 0), DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_update_drawable_info, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), + + DRM_IOCTL_DEF(DRM_IOCTL_MM_INIT, drm_mm_init_ioctl, + DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), + DRM_IOCTL_DEF(DRM_IOCTL_MM_TAKEDOWN, drm_mm_takedown_ioctl, + DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), + DRM_IOCTL_DEF(DRM_IOCTL_MM_LOCK, drm_mm_lock_ioctl, + DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), + DRM_IOCTL_DEF(DRM_IOCTL_MM_UNLOCK, drm_mm_unlock_ioctl, + DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), + + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_CREATE, drm_fence_create_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_REFERENCE, drm_fence_reference_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_UNREFERENCE, drm_fence_unreference_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_SIGNALED, drm_fence_signaled_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_FLUSH, drm_fence_flush_ioctl, DRM_AUTH), + 
DRM_IOCTL_DEF(DRM_IOCTL_FENCE_WAIT, drm_fence_wait_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_EMIT, drm_fence_emit_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_FENCE_BUFFERS, drm_fence_buffers_ioctl, DRM_AUTH), + + DRM_IOCTL_DEF(DRM_IOCTL_BO_CREATE, drm_bo_create_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_MAP, drm_bo_map_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_UNMAP, drm_bo_unmap_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_REFERENCE, drm_bo_reference_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_UNREFERENCE, drm_bo_unreference_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_SETSTATUS, drm_bo_setstatus_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_INFO, drm_bo_info_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_WAIT_IDLE, drm_bo_wait_idle_ioctl, DRM_AUTH), + DRM_IOCTL_DEF(DRM_IOCTL_BO_VERSION, drm_bo_version_ioctl, 0), }; #define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls ) @@ -139,6 +167,8 @@ int drm_lastclose(struct drm_device * dev) DRM_DEBUG("\n"); + drm_bo_driver_finish(dev); + if (dev->driver->lastclose) dev->driver->lastclose(dev); DRM_DEBUG("driver lastclose completed\n"); @@ -196,12 +226,14 @@ int drm_lastclose(struct drm_device * dev) /* Clear vma list (only built for debugging) */ list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) { list_del(&vma->head); - drm_free(vma, sizeof(*vma), DRM_MEM_VMAS); + drm_ctl_free(vma, sizeof(*vma), DRM_MEM_VMAS); } list_for_each_entry_safe(r_list, list_t, &dev->maplist, head) { - drm_rmmap_locked(dev, r_list->map); - r_list = NULL; + if (!(r_list->map->flags & _DRM_DRIVER)) { + drm_rmmap_locked(dev, r_list->map); + r_list = NULL; + } } if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE) && dev->queuelist) { @@ -228,6 +260,7 @@ int drm_lastclose(struct drm_device * dev) dev->lock.file_priv = NULL; wake_up_interruptible(&dev->lock.lock_queue); } + dev->dev_mapping = NULL; mutex_unlock(&dev->struct_mutex); DRM_DEBUG("lastclose completed\n"); @@ -255,8 +288,6 @@ int drm_init(struct drm_driver *driver) DRM_DEBUG("\n"); - drm_mem_init(); - for (i = 0; driver->pci_driver.id_table[i].vendor != 0; i++) { pid = (struct pci_device_id *)&driver->pci_driver.id_table[i]; @@ -292,10 +323,7 @@ static void drm_cleanup(struct drm_device * dev) } drm_lastclose(dev); - - drm_ht_remove(&dev->map_hash); - - drm_ctxbitmap_cleanup(dev); + drm_fence_manager_takedown(dev); if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && dev->agp && dev->agp->agp_mtrr >= 0) { @@ -314,6 +342,11 @@ static void drm_cleanup(struct drm_device * dev) if (dev->driver->unload) dev->driver->unload(dev); + drm_ht_remove(&dev->map_hash); + drm_mm_takedown(&dev->offset_manager); + drm_ht_remove(&dev->object_hash); + drm_ctxbitmap_cleanup(dev); + drm_put_head(&dev->primary); if (drm_put_dev(dev)) DRM_ERROR("Cannot unload module\n"); @@ -356,8 +389,32 @@ static const struct file_operations drm_stub_fops = { static int __init drm_core_init(void) { - int ret = -ENOMEM; + int ret; + struct sysinfo si; + unsigned long avail_memctl_mem; + unsigned long max_memctl_mem; + + si_meminfo(&si); + + /* + * AGP only allows low / DMA32 memory ATM. + */ + + avail_memctl_mem = si.totalram - si.totalhigh; + + /* + * Avoid overflows + */ + + max_memctl_mem = 1UL << (32 - PAGE_SHIFT); + max_memctl_mem = (max_memctl_mem / si.mem_unit) * PAGE_SIZE; + if (avail_memctl_mem >= max_memctl_mem) + avail_memctl_mem = max_memctl_mem; + + drm_init_memctl(avail_memctl_mem/2, avail_memctl_mem*3/4, si.mem_unit); + + ret = -ENOMEM; drm_cards_limit = (drm_cards_limit < DRM_MAX_MINOR + 1 ? 
drm_cards_limit : DRM_MAX_MINOR + 1); @@ -383,22 +440,24 @@ static int __init drm_core_init(void) goto err_p3; } + drm_mem_init(); + DRM_INFO("Initialized %s %d.%d.%d %s\n", CORE_NAME, CORE_MAJOR, CORE_MINOR, CORE_PATCHLEVEL, CORE_DATE); return 0; - err_p3: - drm_sysfs_destroy(drm_class); - err_p2: +err_p3: + drm_sysfs_destroy(); +err_p2: unregister_chrdev(DRM_MAJOR, "drm"); drm_free(drm_heads, sizeof(*drm_heads) * drm_cards_limit, DRM_MEM_STUB); - err_p1: +err_p1: return ret; } static void __exit drm_core_exit(void) { remove_proc_entry("dri", NULL); - drm_sysfs_destroy(drm_class); + drm_sysfs_destroy(); unregister_chrdev(DRM_MAJOR, "drm"); @@ -514,15 +573,13 @@ int drm_ioctl(struct inode *inode, struct file *filp, } } - err_i1: - if (kdata) - kfree(kdata); +err_i1: + kfree(kdata); atomic_dec(&dev->ioctl_count); if (retcode) DRM_DEBUG("ret = %x\n", retcode); return retcode; } - EXPORT_SYMBOL(drm_ioctl); drm_local_map_t *drm_getsarea(struct drm_device *dev) diff --git a/drivers/char/drm/drm_fence.c b/drivers/char/drm/drm_fence.c new file mode 100644 index 0000000..a224e18 --- /dev/null +++ b/drivers/char/drm/drm_fence.c @@ -0,0 +1,847 @@ +/************************************************************************** + * + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include "drmP.h" + +/* + * Typically called by the IRQ handler. 
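+ *
+ * A driver's interrupt handler would typically read back the latest
+ * breadcrumb/sequence number completed by the hardware and report it
+ * here, along these lines (illustrative only; "last_sequence" is not a
+ * name from this patch):
+ *
+ *	drm_fence_handler(dev, 0, last_sequence, DRM_FENCE_TYPE_EXE, 0);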
+ */ + +void drm_fence_handler(struct drm_device *dev, uint32_t fence_class, + uint32_t sequence, uint32_t type, uint32_t error) +{ + int wake = 0; + uint32_t diff; + uint32_t relevant; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fc = &fm->fence_class[fence_class]; + struct drm_fence_driver *driver = dev->driver->fence_driver; + struct list_head *head; + struct drm_fence_object *fence, *next; + int found = 0; + int is_exe = (type & DRM_FENCE_TYPE_EXE); + int ge_last_exe; + + + diff = (sequence - fc->exe_flush_sequence) & driver->sequence_mask; + + if (fc->pending_exe_flush && is_exe && diff < driver->wrap_diff) + fc->pending_exe_flush = 0; + + diff = (sequence - fc->last_exe_flush) & driver->sequence_mask; + ge_last_exe = diff < driver->wrap_diff; + + if (is_exe && ge_last_exe) + fc->last_exe_flush = sequence; + + if (list_empty(&fc->ring)) + return; + + list_for_each_entry(fence, &fc->ring, ring) { + diff = (sequence - fence->sequence) & driver->sequence_mask; + if (diff > driver->wrap_diff) { + found = 1; + break; + } + } + + fc->pending_flush &= ~type; + head = (found) ? &fence->ring : &fc->ring; + + list_for_each_entry_safe_reverse(fence, next, head, ring) { + if (&fence->ring == &fc->ring) + break; + + if (error) { + fence->error = error; + fence->signaled = fence->type; + fence->submitted_flush = fence->type; + fence->flush_mask = fence->type; + list_del_init(&fence->ring); + wake = 1; + break; + } + + if (is_exe) + type |= fence->native_type; + + relevant = type & fence->type; + + if ((fence->signaled | relevant) != fence->signaled) { + fence->signaled |= relevant; + fence->flush_mask |= relevant; + fence->submitted_flush |= relevant; + DRM_DEBUG("Fence 0x%08lx signaled 0x%08x\n", + fence->base.hash.key, fence->signaled); + wake = 1; + } + + relevant = fence->flush_mask & + ~(fence->submitted_flush | fence->signaled); + + fc->pending_flush |= relevant; + fence->submitted_flush |= relevant; + + if (!(fence->type & ~fence->signaled)) { + DRM_DEBUG("Fence completely signaled 0x%08lx\n", + fence->base.hash.key); + list_del_init(&fence->ring); + } + + } + + /* + * Reinstate lost flush flags. 
+ */ + + if ((fc->pending_flush & type) != type) { + head = head->prev; + list_for_each_entry(fence, head, ring) { + if (&fence->ring == &fc->ring) + break; + diff = (fc->last_exe_flush - fence->sequence) & + driver->sequence_mask; + if (diff > driver->wrap_diff) + break; + + relevant = fence->submitted_flush & ~fence->signaled; + fc->pending_flush |= relevant; + } + } + + if (wake) { + DRM_WAKEUP(&fc->fence_queue); + } +} +EXPORT_SYMBOL(drm_fence_handler); + +static void drm_fence_unring(struct drm_device *dev, struct list_head *ring) +{ + struct drm_fence_manager *fm = &dev->fm; + unsigned long flags; + + write_lock_irqsave(&fm->lock, flags); + list_del_init(ring); + write_unlock_irqrestore(&fm->lock, flags); +} + +void drm_fence_usage_deref_locked(struct drm_fence_object **fence) +{ + struct drm_fence_object *tmp_fence = *fence; + struct drm_device *dev = tmp_fence->dev; + struct drm_fence_manager *fm = &dev->fm; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + *fence = NULL; + if (atomic_dec_and_test(&tmp_fence->usage)) { + drm_fence_unring(dev, &tmp_fence->ring); + DRM_DEBUG("Destroyed a fence object 0x%08lx\n", + tmp_fence->base.hash.key); + atomic_dec(&fm->count); + BUG_ON(!list_empty(&tmp_fence->base.list)); + drm_ctl_free(tmp_fence, sizeof(*tmp_fence), DRM_MEM_FENCE); + } +} +EXPORT_SYMBOL(drm_fence_usage_deref_locked); + +void drm_fence_usage_deref_unlocked(struct drm_fence_object **fence) +{ + struct drm_fence_object *tmp_fence = *fence; + struct drm_device *dev = tmp_fence->dev; + struct drm_fence_manager *fm = &dev->fm; + + *fence = NULL; + if (atomic_dec_and_test(&tmp_fence->usage)) { + mutex_lock(&dev->struct_mutex); + if (atomic_read(&tmp_fence->usage) == 0) { + drm_fence_unring(dev, &tmp_fence->ring); + atomic_dec(&fm->count); + BUG_ON(!list_empty(&tmp_fence->base.list)); + drm_ctl_free(tmp_fence, sizeof(*tmp_fence), DRM_MEM_FENCE); + } + mutex_unlock(&dev->struct_mutex); + } +} +EXPORT_SYMBOL(drm_fence_usage_deref_unlocked); + +struct drm_fence_object +*drm_fence_reference_locked(struct drm_fence_object *src) +{ + DRM_ASSERT_LOCKED(&src->dev->struct_mutex); + + atomic_inc(&src->usage); + return src; +} + +void drm_fence_reference_unlocked(struct drm_fence_object **dst, + struct drm_fence_object *src) +{ + mutex_lock(&src->dev->struct_mutex); + *dst = src; + atomic_inc(&src->usage); + mutex_unlock(&src->dev->struct_mutex); +} +EXPORT_SYMBOL(drm_fence_reference_unlocked); + +static void drm_fence_object_destroy(struct drm_file *priv, + struct drm_user_object *base) +{ + struct drm_fence_object *fence = + drm_user_object_entry(base, struct drm_fence_object, base); + + drm_fence_usage_deref_locked(&fence); +} + +int drm_fence_object_signaled(struct drm_fence_object *fence, + uint32_t mask, int poke_flush) +{ + unsigned long flags; + int signaled; + struct drm_device *dev = fence->dev; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_driver *driver = dev->driver->fence_driver; + + if (poke_flush) + driver->poke_flush(dev, fence->fence_class); + read_lock_irqsave(&fm->lock, flags); + signaled = + (fence->type & mask & fence->signaled) == (fence->type & mask); + read_unlock_irqrestore(&fm->lock, flags); + + return signaled; +} +EXPORT_SYMBOL(drm_fence_object_signaled); + +static void drm_fence_flush_exe(struct drm_fence_class_manager *fc, + struct drm_fence_driver *driver, + uint32_t sequence) +{ + uint32_t diff; + + if (!fc->pending_exe_flush) { + fc->exe_flush_sequence = sequence; + fc->pending_exe_flush = 1; + } else { + diff = (sequence - fc->exe_flush_sequence) & 
driver->sequence_mask; + if (diff < driver->wrap_diff) + fc->exe_flush_sequence = sequence; + } +} + +int drm_fence_object_flush(struct drm_fence_object *fence, + uint32_t type) +{ + struct drm_device *dev = fence->dev; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fc = &fm->fence_class[fence->fence_class]; + struct drm_fence_driver *driver = dev->driver->fence_driver; + unsigned long flags; + + if (type & ~fence->type) { + DRM_ERROR("Flush trying to extend fence type, " + "0x%x, 0x%x\n", type, fence->type); + return -EINVAL; + } + + write_lock_irqsave(&fm->lock, flags); + fence->flush_mask |= type; + if ((fence->submitted_flush & fence->signaled) + == fence->submitted_flush) { + if ((fence->type & DRM_FENCE_TYPE_EXE) && + !(fence->submitted_flush & DRM_FENCE_TYPE_EXE)) { + drm_fence_flush_exe(fc, driver, fence->sequence); + fence->submitted_flush |= DRM_FENCE_TYPE_EXE; + } else { + fc->pending_flush |= (fence->flush_mask & + ~fence->submitted_flush); + fence->submitted_flush = fence->flush_mask; + } + } + write_unlock_irqrestore(&fm->lock, flags); + driver->poke_flush(dev, fence->fence_class); + return 0; +} + +/* + * Make sure old fence objects are signaled before their fence sequences are + * wrapped around and reused. + */ + +void drm_fence_flush_old(struct drm_device *dev, uint32_t fence_class, + uint32_t sequence) +{ + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fc = &fm->fence_class[fence_class]; + struct drm_fence_driver *driver = dev->driver->fence_driver; + uint32_t old_sequence; + unsigned long flags; + struct drm_fence_object *fence; + uint32_t diff; + + write_lock_irqsave(&fm->lock, flags); + old_sequence = (sequence - driver->flush_diff) & driver->sequence_mask; + diff = (old_sequence - fc->last_exe_flush) & driver->sequence_mask; + + if ((diff < driver->wrap_diff) && !fc->pending_exe_flush) { + fc->pending_exe_flush = 1; + fc->exe_flush_sequence = sequence - (driver->flush_diff / 2); + } + write_unlock_irqrestore(&fm->lock, flags); + + mutex_lock(&dev->struct_mutex); + read_lock_irqsave(&fm->lock, flags); + + if (list_empty(&fc->ring)) { + read_unlock_irqrestore(&fm->lock, flags); + mutex_unlock(&dev->struct_mutex); + return; + } + fence = drm_fence_reference_locked(list_entry(fc->ring.next, struct drm_fence_object, ring)); + mutex_unlock(&dev->struct_mutex); + diff = (old_sequence - fence->sequence) & driver->sequence_mask; + read_unlock_irqrestore(&fm->lock, flags); + if (diff < driver->wrap_diff) + drm_fence_object_flush(fence, fence->type); + drm_fence_usage_deref_unlocked(&fence); +} +EXPORT_SYMBOL(drm_fence_flush_old); + +static int drm_fence_lazy_wait(struct drm_fence_object *fence, + int ignore_signals, + uint32_t mask) +{ + struct drm_device *dev = fence->dev; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fc = &fm->fence_class[fence->fence_class]; + int signaled; + unsigned long _end = jiffies + 3*DRM_HZ; + int ret = 0; + + do { + DRM_WAIT_ON(ret, fc->fence_queue, 3 * DRM_HZ, + (signaled = drm_fence_object_signaled(fence, mask, 1))); + if (signaled) + return 0; + if (time_after_eq(jiffies, _end)) + break; + } while (ret == -EINTR && ignore_signals); + if (drm_fence_object_signaled(fence, mask, 0)) + return 0; + if (time_after_eq(jiffies, _end)) + ret = -EBUSY; + if (ret) { + if (ret == -EBUSY) { + DRM_ERROR("Fence timeout. " + "GPU lockup or fence driver was " + "taken down. 
%d 0x%08x 0x%02x 0x%02x 0x%02x\n", + fence->fence_class, + fence->sequence, + fence->type, + mask, + fence->signaled); + DRM_ERROR("Pending exe flush %d 0x%08x\n", + fc->pending_exe_flush, + fc->exe_flush_sequence); + } + return ((ret == -EINTR) ? -EAGAIN : ret); + } + return 0; +} + +int drm_fence_object_wait(struct drm_fence_object *fence, + int lazy, int ignore_signals, uint32_t mask) +{ + struct drm_device *dev = fence->dev; + struct drm_fence_driver *driver = dev->driver->fence_driver; + int ret = 0; + unsigned long _end; + int signaled; + + if (mask & ~fence->type) { + DRM_ERROR("Wait trying to extend fence type" + " 0x%08x 0x%08x\n", mask, fence->type); + BUG(); + return -EINVAL; + } + + if (drm_fence_object_signaled(fence, mask, 0)) + return 0; + + _end = jiffies + 3 * DRM_HZ; + + drm_fence_object_flush(fence, mask); + + if (lazy && driver->lazy_capable) { + + ret = drm_fence_lazy_wait(fence, ignore_signals, mask); + if (ret) + return ret; + + } else { + + if (driver->has_irq(dev, fence->fence_class, + DRM_FENCE_TYPE_EXE)) { + ret = drm_fence_lazy_wait(fence, ignore_signals, + DRM_FENCE_TYPE_EXE); + if (ret) + return ret; + } + + if (driver->has_irq(dev, fence->fence_class, + mask & ~DRM_FENCE_TYPE_EXE)) { + ret = drm_fence_lazy_wait(fence, ignore_signals, + mask); + if (ret) + return ret; + } + } + if (drm_fence_object_signaled(fence, mask, 0)) + return 0; + + /* + * Avoid kernel-space busy-waits. + */ + if (!ignore_signals) + return -EAGAIN; + + do { + schedule(); + signaled = drm_fence_object_signaled(fence, mask, 1); + } while (!signaled && !time_after_eq(jiffies, _end)); + + if (!signaled) + return -EBUSY; + + return 0; +} +EXPORT_SYMBOL(drm_fence_object_wait); + +int drm_fence_object_emit(struct drm_fence_object *fence, uint32_t fence_flags, + uint32_t fence_class, uint32_t type) +{ + struct drm_device *dev = fence->dev; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_driver *driver = dev->driver->fence_driver; + struct drm_fence_class_manager *fc = &fm->fence_class[fence->fence_class]; + unsigned long flags; + uint32_t sequence; + uint32_t native_type; + int ret; + + drm_fence_unring(dev, &fence->ring); + ret = driver->emit(dev, fence_class, fence_flags, &sequence, + &native_type); + if (ret) + return ret; + + write_lock_irqsave(&fm->lock, flags); + fence->fence_class = fence_class; + fence->type = type; + fence->flush_mask = 0x00; + fence->submitted_flush = 0x00; + fence->signaled = 0x00; + fence->sequence = sequence; + fence->native_type = native_type; + if (list_empty(&fc->ring)) + fc->last_exe_flush = sequence - 1; + list_add_tail(&fence->ring, &fc->ring); + write_unlock_irqrestore(&fm->lock, flags); + return 0; +} +EXPORT_SYMBOL(drm_fence_object_emit); + +static int drm_fence_object_init(struct drm_device *dev, uint32_t fence_class, + uint32_t type, + uint32_t fence_flags, + struct drm_fence_object *fence) +{ + int ret = 0; + unsigned long flags; + struct drm_fence_manager *fm = &dev->fm; + + mutex_lock(&dev->struct_mutex); + atomic_set(&fence->usage, 1); + mutex_unlock(&dev->struct_mutex); + + write_lock_irqsave(&fm->lock, flags); + INIT_LIST_HEAD(&fence->ring); + + /* + * Avoid hitting BUG() for kernel-only fence objects. 
+ */ + + INIT_LIST_HEAD(&fence->base.list); + fence->fence_class = fence_class; + fence->type = type; + fence->flush_mask = 0; + fence->submitted_flush = 0; + fence->signaled = 0; + fence->sequence = 0; + fence->dev = dev; + write_unlock_irqrestore(&fm->lock, flags); + if (fence_flags & DRM_FENCE_FLAG_EMIT) { + ret = drm_fence_object_emit(fence, fence_flags, + fence->fence_class, type); + } + return ret; +} + +int drm_fence_add_user_object(struct drm_file *priv, + struct drm_fence_object *fence, int shareable) +{ + struct drm_device *dev = priv->head->dev; + int ret; + + mutex_lock(&dev->struct_mutex); + ret = drm_add_user_object(priv, &fence->base, shareable); + if (ret) + goto out; + atomic_inc(&fence->usage); + fence->base.type = drm_fence_type; + fence->base.remove = &drm_fence_object_destroy; + DRM_DEBUG("Fence 0x%08lx created\n", fence->base.hash.key); +out: + mutex_unlock(&dev->struct_mutex); + return ret; +} +EXPORT_SYMBOL(drm_fence_add_user_object); + +int drm_fence_object_create(struct drm_device *dev, uint32_t fence_class, + uint32_t type, unsigned flags, + struct drm_fence_object **c_fence) +{ + struct drm_fence_object *fence; + int ret; + struct drm_fence_manager *fm = &dev->fm; + + fence = drm_ctl_calloc(1, sizeof(*fence), DRM_MEM_FENCE); + if (!fence) + return -ENOMEM; + ret = drm_fence_object_init(dev, fence_class, type, flags, fence); + if (ret) { + drm_fence_usage_deref_unlocked(&fence); + return ret; + } + *c_fence = fence; + atomic_inc(&fm->count); + + return 0; +} +EXPORT_SYMBOL(drm_fence_object_create); + +void drm_fence_manager_init(struct drm_device *dev) +{ + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fence_class; + struct drm_fence_driver *fed = dev->driver->fence_driver; + int i; + unsigned long flags; + + rwlock_init(&fm->lock); + write_lock_irqsave(&fm->lock, flags); + fm->initialized = 0; + if (!fed) + goto out_unlock; + + fm->initialized = 1; + fm->num_classes = fed->num_classes; + BUG_ON(fm->num_classes > _DRM_FENCE_CLASSES); + + for (i = 0; i < fm->num_classes; ++i) { + fence_class = &fm->fence_class[i]; + + INIT_LIST_HEAD(&fence_class->ring); + fence_class->pending_flush = 0; + DRM_INIT_WAITQUEUE(&fence_class->fence_queue); + } + + atomic_set(&fm->count, 0); + out_unlock: + write_unlock_irqrestore(&fm->lock, flags); +} + +void drm_fence_fill_arg(struct drm_fence_object *fence, + struct drm_fence_arg *arg) +{ + struct drm_device *dev = fence->dev; + struct drm_fence_manager *fm = &dev->fm; + unsigned long irq_flags; + + read_lock_irqsave(&fm->lock, irq_flags); + arg->handle = fence->base.hash.key; + arg->fence_class = fence->fence_class; + arg->type = fence->type; + arg->signaled = fence->signaled; + arg->error = fence->error; + arg->sequence = fence->sequence; + read_unlock_irqrestore(&fm->lock, irq_flags); +} +EXPORT_SYMBOL(drm_fence_fill_arg); + +void drm_fence_manager_takedown(struct drm_device *dev) +{ +} + +struct drm_fence_object *drm_lookup_fence_object(struct drm_file *priv, + uint32_t handle) +{ + struct drm_device *dev = priv->head->dev; + struct drm_user_object *uo; + struct drm_fence_object *fence; + + mutex_lock(&dev->struct_mutex); + uo = drm_lookup_user_object(priv, handle); + if (!uo || (uo->type != drm_fence_type)) { + mutex_unlock(&dev->struct_mutex); + return NULL; + } + fence = drm_fence_reference_locked(drm_user_object_entry(uo, struct drm_fence_object, base)); + mutex_unlock(&dev->struct_mutex); + return fence; +} + +int drm_fence_create_ioctl(struct drm_device *dev, void *data, struct drm_file 
*file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + if (arg->flags & DRM_FENCE_FLAG_EMIT) + LOCK_TEST_WITH_RETURN(dev, file_priv); + ret = drm_fence_object_create(dev, arg->fence_class, + arg->type, arg->flags, &fence); + if (ret) + return ret; + ret = drm_fence_add_user_object(file_priv, fence, + arg->flags & + DRM_FENCE_FLAG_SHAREABLE); + if (ret) { + drm_fence_usage_deref_unlocked(&fence); + return ret; + } + + /* + * usage > 0. No need to lock dev->struct_mutex; + */ + + arg->handle = fence->base.hash.key; + + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + +int drm_fence_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + struct drm_user_object *uo; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + ret = drm_user_object_ref(file_priv, arg->handle, drm_fence_type, &uo); + if (ret) + return ret; + fence = drm_lookup_fence_object(file_priv, arg->handle); + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + + +int drm_fence_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + return drm_user_object_unref(file_priv, arg->handle, drm_fence_type); +} + +int drm_fence_signaled_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + fence = drm_lookup_fence_object(file_priv, arg->handle); + if (!fence) + return -EINVAL; + + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + +int drm_fence_flush_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + fence = drm_lookup_fence_object(file_priv, arg->handle); + if (!fence) + return -EINVAL; + ret = drm_fence_object_flush(fence, arg->type); + + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + + +int drm_fence_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + fence = drm_lookup_fence_object(file_priv, arg->handle); + if (!fence) + return -EINVAL; + ret = drm_fence_object_wait(fence, + arg->flags & DRM_FENCE_FLAG_WAIT_LAZY, + 0, arg->type); + + drm_fence_fill_arg(fence, arg); + 
drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + + +int drm_fence_emit_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + LOCK_TEST_WITH_RETURN(dev, file_priv); + fence = drm_lookup_fence_object(file_priv, arg->handle); + if (!fence) + return -EINVAL; + ret = drm_fence_object_emit(fence, arg->flags, arg->fence_class, + arg->type); + + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} + +int drm_fence_buffers_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + int ret; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_arg *arg = data; + struct drm_fence_object *fence; + ret = 0; + + if (!fm->initialized) { + DRM_ERROR("The DRM driver does not support fencing.\n"); + return -EINVAL; + } + + if (!dev->bm.initialized) { + DRM_ERROR("Buffer object manager is not initialized\n"); + return -EINVAL; + } + LOCK_TEST_WITH_RETURN(dev, file_priv); + ret = drm_fence_buffer_objects(dev, NULL, arg->flags, + NULL, &fence); + if (ret) + return ret; + + if (!(arg->flags & DRM_FENCE_FLAG_NO_USER)) { + ret = drm_fence_add_user_object(file_priv, fence, + arg->flags & + DRM_FENCE_FLAG_SHAREABLE); + if (ret) + return ret; + } + + arg->handle = fence->base.hash.key; + + drm_fence_fill_arg(fence, arg); + drm_fence_usage_deref_unlocked(&fence); + + return ret; +} diff --git a/drivers/char/drm/drm_fops.c b/drivers/char/drm/drm_fops.c index 3992f73..a5920dc 100644 --- a/drivers/char/drm/drm_fops.c +++ b/drivers/char/drm/drm_fops.c @@ -147,11 +147,18 @@ int drm_open(struct inode *inode, struct file *filp) spin_lock(&dev->count_lock); if (!dev->open_count++) { spin_unlock(&dev->count_lock); - return drm_setup(dev); + retcode = drm_setup(dev); + goto out; } spin_unlock(&dev->count_lock); } - +out: + mutex_lock(&dev->struct_mutex); + BUG_ON((dev->dev_mapping != NULL) && + (dev->dev_mapping != inode->i_mapping)); + if (dev->dev_mapping == NULL) + dev->dev_mapping = inode->i_mapping; + mutex_unlock(&dev->struct_mutex); return retcode; } EXPORT_SYMBOL(drm_open); @@ -228,6 +235,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp, int minor = iminor(inode); struct drm_file *priv; int ret; + int i, j; if (filp->f_flags & O_EXCL) return -EBUSY; /* No exclusive opens */ @@ -253,6 +261,20 @@ static int drm_open_helper(struct inode *inode, struct file *filp, priv->lock_count = 0; INIT_LIST_HEAD(&priv->lhead); + INIT_LIST_HEAD(&priv->refd_objects); + + for (i = 0; i < _DRM_NO_REF_TYPES; ++i) { + ret = drm_ht_create(&priv->refd_object_hash[i], + DRM_FILE_HASH_ORDER); + if (ret) + break; + } + + if (ret) { + for (j = 0; j < i; ++j) + drm_ht_remove(&priv->refd_object_hash[j]); + goto out_free; + } if (dev->driver->open) { ret = dev->driver->open(dev, priv); @@ -287,7 +309,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp, #endif return 0; - out_free: +out_free: drm_free(priv, sizeof(*priv), DRM_MEM_FILES); filp->private_data = NULL; return ret; @@ -309,6 +331,32 @@ int drm_fasync(int fd, struct file *filp, int on) } EXPORT_SYMBOL(drm_fasync); +static void drm_object_release(struct file *filp) +{ + struct drm_file *priv = filp->private_data; + struct list_head *head; + struct drm_ref_object *ref_object; + int i; + + /* + * Free leftover ref 
objects created by me. Note that we cannot use + * list_for_each() here, as the struct_mutex may be temporarily + * released by the remove_() functions, and thus the lists may be + * altered. + * Also, a drm_remove_ref_object() will not remove it + * from the list unless its refcount is 1. + */ + head = &priv->refd_objects; + while (head->next != head) { + ref_object = list_entry(head->next, struct drm_ref_object, list); + drm_remove_ref_object(priv, ref_object); + head = &priv->refd_objects; + } + + for (i = 0; i < _DRM_NO_REF_TYPES; ++i) + drm_ht_remove(&priv->refd_object_hash[i]); +} + /** * Release file. * @@ -347,7 +395,7 @@ int drm_release(struct inode *inode, struct file *filp) if (drm_i_have_hw_lock(dev, file_priv)) { dev->driver->reclaim_buffers_locked(dev, file_priv); } else { - unsigned long _end=jiffies + 3*DRM_HZ; + unsigned long _end = jiffies + 3*DRM_HZ; int locked = 0; drm_idlelock_take(&dev->lock); @@ -356,7 +404,7 @@ int drm_release(struct inode *inode, struct file *filp) * Wait for a while. */ - do{ + do { spin_lock(&dev->lock.spinlock); locked = dev->lock.idle_has_lock; spin_unlock(&dev->lock.spinlock); @@ -422,6 +470,7 @@ int drm_release(struct inode *inode, struct file *filp) mutex_unlock(&dev->ctxlist_mutex); mutex_lock(&dev->struct_mutex); + drm_object_release(filp); if (file_priv->remove_auth_on_close == 1) { struct drm_file *temp; diff --git a/drivers/char/drm/drm_hashtab.c b/drivers/char/drm/drm_hashtab.c index 4b8e7db..72a783e 100644 --- a/drivers/char/drm/drm_hashtab.c +++ b/drivers/char/drm/drm_hashtab.c @@ -57,7 +57,7 @@ int drm_ht_create(struct drm_open_hash *ht, unsigned int order) DRM_ERROR("Out of memory for hash table\n"); return -ENOMEM; } - for (i=0; i< ht->size; ++i) { + for (i = 0; i < ht->size; ++i) { INIT_HLIST_HEAD(&ht->table[i]); } return 0; @@ -80,7 +80,7 @@ void drm_ht_verbose_list(struct drm_open_hash *ht, unsigned long key) } } -static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht, +static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht, unsigned long key) { struct drm_hash_item *entry; @@ -129,7 +129,7 @@ int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item) } /* - * Just insert an item and return any "bits" bit key that hasn't been + * Just insert an item and return any "bits" bit key that hasn't been * used before. */ int drm_ht_just_insert_please(struct drm_open_hash *ht, struct drm_hash_item *item, @@ -147,7 +147,7 @@ int drm_ht_just_insert_please(struct drm_open_hash *ht, struct drm_hash_item *it ret = drm_ht_insert_item(ht, item); if (ret) unshifted_key = (unshifted_key + 1) & mask; - } while(ret && (unshifted_key != first)); + } while (ret && (unshifted_key != first)); if (ret) { DRM_ERROR("Available key bit space exhausted\n"); @@ -200,4 +200,3 @@ void drm_ht_remove(struct drm_open_hash *ht) ht->table = NULL; } } - diff --git a/drivers/char/drm/drm_hashtab.h b/drivers/char/drm/drm_hashtab.h index 573e333..cd2b189 100644 --- a/drivers/char/drm/drm_hashtab.h +++ b/drivers/char/drm/drm_hashtab.h @@ -65,4 +65,3 @@ extern void drm_ht_remove(struct drm_open_hash *ht); #endif - diff --git a/drivers/char/drm/drm_ioc32.c b/drivers/char/drm/drm_ioc32.c index 2286f33..90f5a8d 100644 --- a/drivers/char/drm/drm_ioc32.c +++ b/drivers/char/drm/drm_ioc32.c @@ -1051,8 +1051,12 @@ long drm_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) drm_ioctl_compat_t *fn; int ret; + /* Assume that ioctls without an explicit compat routine will just + * work. 
This may not always be a good assumption, but it's better + * than always failing. + */ if (nr >= ARRAY_SIZE(drm_compat_ioctls)) - return -ENOTTY; + return drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); fn = drm_compat_ioctls[nr]; diff --git a/drivers/char/drm/drm_ioctl.c b/drivers/char/drm/drm_ioctl.c index 3cbebf8..16829fb 100644 --- a/drivers/char/drm/drm_ioctl.c +++ b/drivers/char/drm/drm_ioctl.c @@ -234,26 +234,23 @@ int drm_getclient(struct drm_device *dev, void *data, idx = client->idx; mutex_lock(&dev->struct_mutex); - - if (list_empty(&dev->filelist)) { - mutex_unlock(&dev->struct_mutex); - return -EINVAL; - } i = 0; list_for_each_entry(pt, &dev->filelist, lhead) { - if (i++ >= idx) - break; + if (i++ >= idx) { + client->auth = pt->authenticated; + client->pid = pt->pid; + client->uid = pt->uid; + client->magic = pt->magic; + client->iocs = pt->ioctl_count; + mutex_unlock(&dev->struct_mutex); + + return 0; + } } - - client->auth = pt->authenticated; - client->pid = pt->pid; - client->uid = pt->uid; - client->magic = pt->magic; - client->iocs = pt->ioctl_count; mutex_unlock(&dev->struct_mutex); - return 0; + return -EINVAL; } /** diff --git a/drivers/char/drm/drm_irq.c b/drivers/char/drm/drm_irq.c index 05eae63..3b1a382 100644 --- a/drivers/char/drm/drm_irq.c +++ b/drivers/char/drm/drm_irq.c @@ -107,7 +107,7 @@ static int drm_irq_install(struct drm_device * dev) dev->irq_enabled = 1; mutex_unlock(&dev->struct_mutex); - DRM_DEBUG("%s: irq=%d\n", __FUNCTION__, dev->irq); + DRM_DEBUG("irq=%d\n", dev->irq); if (drm_core_check_feature(dev, DRIVER_IRQ_VBL)) { init_waitqueue_head(&dev->vbl_queue); @@ -164,7 +164,7 @@ int drm_irq_uninstall(struct drm_device * dev) if (!irq_enabled) return -EINVAL; - DRM_DEBUG("%s: irq=%d\n", __FUNCTION__, dev->irq); + DRM_DEBUG("irq=%d\n", dev->irq); dev->driver->irq_uninstall(dev); @@ -340,7 +340,7 @@ int drm_wait_vblank(struct drm_device *dev, void *data, struct drm_file *file_pr vblwait->reply.tval_usec = now.tv_usec; } - done: +done: return ret; } diff --git a/drivers/char/drm/drm_lock.c b/drivers/char/drm/drm_lock.c index bea2a7d..316ab73 100644 --- a/drivers/char/drm/drm_lock.c +++ b/drivers/char/drm/drm_lock.c @@ -175,7 +175,7 @@ int drm_unlock(struct drm_device *dev, void *data, struct drm_file *file_priv) if (dev->driver->kernel_context_switch_unlock) dev->driver->kernel_context_switch_unlock(dev); else { - if (drm_lock_free(&dev->lock,lock->context)) { + if (drm_lock_free(&dev->lock, lock->context)) { /* FIXME: Should really bail out here. */ } } diff --git a/drivers/char/drm/drm_memory.c b/drivers/char/drm/drm_memory.c index 9301990..9a01a91 100644 --- a/drivers/char/drm/drm_memory.c +++ b/drivers/char/drm/drm_memory.c @@ -36,6 +36,75 @@ #include #include "drmP.h" +static struct { + spinlock_t lock; + uint64_t cur_used; + uint64_t low_threshold; + uint64_t high_threshold; +} drm_memctl = { + .lock = __SPIN_LOCK_UNLOCKED(drm_memctl.lock) +}; + +static inline size_t drm_size_align(size_t size) +{ + size_t tmpSize = 4; + if (size > PAGE_SIZE) + return PAGE_ALIGN(size); + + while (tmpSize < size) + tmpSize <<= 1; + + return (size_t) tmpSize; +} + +int drm_alloc_memctl(size_t size) +{ + int ret; + unsigned long a_size = drm_size_align(size); + + spin_lock(&drm_memctl.lock); + ret = ((drm_memctl.cur_used + a_size) > drm_memctl.high_threshold) ? 
+ -ENOMEM : 0; + if (!ret) + drm_memctl.cur_used += a_size; + spin_unlock(&drm_memctl.lock); + return ret; +} +EXPORT_SYMBOL(drm_alloc_memctl); + +void drm_free_memctl(size_t size) +{ + unsigned long a_size = drm_size_align(size); + + spin_lock(&drm_memctl.lock); + drm_memctl.cur_used -= a_size; + spin_unlock(&drm_memctl.lock); +} +EXPORT_SYMBOL(drm_free_memctl); + +void drm_query_memctl(uint64_t *cur_used, + uint64_t *low_threshold, + uint64_t *high_threshold) +{ + spin_lock(&drm_memctl.lock); + *cur_used = drm_memctl.cur_used; + *low_threshold = drm_memctl.low_threshold; + *high_threshold = drm_memctl.high_threshold; + spin_unlock(&drm_memctl.lock); +} +EXPORT_SYMBOL(drm_query_memctl); + +void drm_init_memctl(size_t p_low_threshold, + size_t p_high_threshold, + size_t unit_size) +{ + spin_lock(&drm_memctl.lock); + drm_memctl.cur_used = 0; + drm_memctl.low_threshold = p_low_threshold * unit_size; + drm_memctl.high_threshold = p_high_threshold * unit_size; + spin_unlock(&drm_memctl.lock); +} + #ifdef DEBUG_MEMORY #include "drm_memory_debug.h" #else @@ -179,4 +248,3 @@ void drm_core_ioremapfree(struct drm_map *map, struct drm_device *dev) iounmap(map->handle); } EXPORT_SYMBOL(drm_core_ioremapfree); - diff --git a/drivers/char/drm/drm_mm.c b/drivers/char/drm/drm_mm.c index 86f4eb6..369a052 100644 --- a/drivers/char/drm/drm_mm.c +++ b/drivers/char/drm/drm_mm.c @@ -82,7 +82,7 @@ static int drm_mm_create_tail_node(struct drm_mm *mm, struct drm_mm_node *child; child = (struct drm_mm_node *) - drm_alloc(sizeof(*child), DRM_MEM_MM); + drm_ctl_alloc(sizeof(*child), DRM_MEM_MM); if (!child) return -ENOMEM; @@ -118,7 +118,7 @@ static struct drm_mm_node *drm_mm_split_at_start(struct drm_mm_node *parent, struct drm_mm_node *child; child = (struct drm_mm_node *) - drm_alloc(sizeof(*child), DRM_MEM_MM); + drm_ctl_alloc(sizeof(*child), DRM_MEM_MM); if (!child) return NULL; @@ -200,8 +200,8 @@ void drm_mm_put_block(struct drm_mm_node * cur) prev_node->size += next_node->size; list_del(&next_node->ml_entry); list_del(&next_node->fl_entry); - drm_free(next_node, sizeof(*next_node), - DRM_MEM_MM); + drm_ctl_free(next_node, sizeof(*next_node), + DRM_MEM_MM); } else { next_node->size += cur->size; next_node->start = cur->start; @@ -214,7 +214,7 @@ void drm_mm_put_block(struct drm_mm_node * cur) list_add(&cur->fl_entry, &mm->fl_entry); } else { list_del(&cur->ml_entry); - drm_free(cur, sizeof(*cur), DRM_MEM_MM); + drm_ctl_free(cur, sizeof(*cur), DRM_MEM_MM); } } @@ -291,6 +291,5 @@ void drm_mm_takedown(struct drm_mm * mm) list_del(&entry->fl_entry); list_del(&entry->ml_entry); - drm_free(entry, sizeof(*entry), DRM_MEM_MM); + drm_ctl_free(entry, sizeof(*entry), DRM_MEM_MM); } - diff --git a/drivers/char/drm/drm_object.c b/drivers/char/drm/drm_object.c new file mode 100644 index 0000000..57a777e --- /dev/null +++ b/drivers/char/drm/drm_object.c @@ -0,0 +1,293 @@ +/************************************************************************** + * + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include "drmP.h" + +int drm_add_user_object(struct drm_file *priv, struct drm_user_object *item, + int shareable) +{ + struct drm_device *dev = priv->head->dev; + int ret; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + + /* The refcount will be bumped to 1 when we add the ref object below. */ + atomic_set(&item->refcount, 0); + item->shareable = shareable; + item->owner = priv; + + ret = drm_ht_just_insert_please(&dev->object_hash, &item->hash, + (unsigned long)item, 31, 0, 0); + if (ret) + return ret; + + ret = drm_add_ref_object(priv, item, _DRM_REF_USE); + if (ret) + ret = drm_ht_remove_item(&dev->object_hash, &item->hash); + + return ret; +} +EXPORT_SYMBOL(drm_add_user_object); + +struct drm_user_object *drm_lookup_user_object(struct drm_file *priv, uint32_t key) +{ + struct drm_device *dev = priv->head->dev; + struct drm_hash_item *hash; + int ret; + struct drm_user_object *item; + + DRM_ASSERT_LOCKED(&dev->struct_mutex); + + ret = drm_ht_find_item(&dev->object_hash, key, &hash); + if (ret) + return NULL; + + item = drm_hash_entry(hash, struct drm_user_object, hash); + + if (priv != item->owner) { + struct drm_open_hash *ht = &priv->refd_object_hash[_DRM_REF_USE]; + ret = drm_ht_find_item(ht, (unsigned long)item, &hash); + if (ret) { + DRM_ERROR("Object not registered for usage\n"); + return NULL; + } + } + return item; +} +EXPORT_SYMBOL(drm_lookup_user_object); + +static void drm_deref_user_object(struct drm_file *priv, struct drm_user_object *item) +{ + struct drm_device *dev = priv->head->dev; + int ret; + + if (atomic_dec_and_test(&item->refcount)) { + ret = drm_ht_remove_item(&dev->object_hash, &item->hash); + BUG_ON(ret); + item->remove(priv, item); + } +} + +static int drm_object_ref_action(struct drm_file *priv, struct drm_user_object *ro, + enum drm_ref_type action) +{ + int ret = 0; + + switch (action) { + case _DRM_REF_USE: + atomic_inc(&ro->refcount); + break; + default: + if (!ro->ref_struct_locked) { + break; + } else { + ro->ref_struct_locked(priv, ro, action); + } + } + return ret; +} + +int drm_add_ref_object(struct drm_file *priv, struct drm_user_object *referenced_object, + enum drm_ref_type ref_action) +{ + int ret = 0; + struct drm_ref_object *item; + struct drm_open_hash *ht = 
&priv->refd_object_hash[ref_action]; + + DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); + if (!referenced_object->shareable && priv != referenced_object->owner) { + DRM_ERROR("Not allowed to reference this object\n"); + return -EINVAL; + } + + /* + * If this is not a usage reference, Check that usage has been registered + * first. Otherwise strange things may happen on destruction. + */ + + if ((ref_action != _DRM_REF_USE) && priv != referenced_object->owner) { + item = + drm_lookup_ref_object(priv, referenced_object, + _DRM_REF_USE); + if (!item) { + DRM_ERROR + ("Object not registered for usage by this client\n"); + return -EINVAL; + } + } + + if (NULL != + (item = + drm_lookup_ref_object(priv, referenced_object, ref_action))) { + atomic_inc(&item->refcount); + return drm_object_ref_action(priv, referenced_object, + ref_action); + } + + item = drm_ctl_calloc(1, sizeof(*item), DRM_MEM_OBJECTS); + if (item == NULL) { + DRM_ERROR("Could not allocate reference object\n"); + return -ENOMEM; + } + + atomic_set(&item->refcount, 1); + item->hash.key = (unsigned long)referenced_object; + ret = drm_ht_insert_item(ht, &item->hash); + item->unref_action = ref_action; + + if (ret) + goto out; + + list_add(&item->list, &priv->refd_objects); + ret = drm_object_ref_action(priv, referenced_object, ref_action); +out: + return ret; +} + +struct drm_ref_object *drm_lookup_ref_object(struct drm_file *priv, + struct drm_user_object *referenced_object, + enum drm_ref_type ref_action) +{ + struct drm_hash_item *hash; + int ret; + + DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); + ret = drm_ht_find_item(&priv->refd_object_hash[ref_action], + (unsigned long)referenced_object, &hash); + if (ret) + return NULL; + + return drm_hash_entry(hash, struct drm_ref_object, hash); +} +EXPORT_SYMBOL(drm_lookup_ref_object); + +static void drm_remove_other_references(struct drm_file *priv, + struct drm_user_object *ro) +{ + int i; + struct drm_open_hash *ht; + struct drm_hash_item *hash; + struct drm_ref_object *item; + + for (i = _DRM_REF_USE + 1; i < _DRM_NO_REF_TYPES; ++i) { + ht = &priv->refd_object_hash[i]; + while (!drm_ht_find_item(ht, (unsigned long)ro, &hash)) { + item = drm_hash_entry(hash, struct drm_ref_object, hash); + drm_remove_ref_object(priv, item); + } + } +} + +void drm_remove_ref_object(struct drm_file *priv, struct drm_ref_object *item) +{ + int ret; + struct drm_user_object *user_object = (struct drm_user_object *) item->hash.key; + struct drm_open_hash *ht = &priv->refd_object_hash[item->unref_action]; + enum drm_ref_type unref_action; + + DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); + unref_action = item->unref_action; + if (atomic_dec_and_test(&item->refcount)) { + ret = drm_ht_remove_item(ht, &item->hash); + BUG_ON(ret); + list_del_init(&item->list); + if (unref_action == _DRM_REF_USE) + drm_remove_other_references(priv, user_object); + drm_ctl_free(item, sizeof(*item), DRM_MEM_OBJECTS); + } + + switch (unref_action) { + case _DRM_REF_USE: + drm_deref_user_object(priv, user_object); + break; + default: + BUG_ON(!user_object->unref); + user_object->unref(priv, user_object, unref_action); + break; + } + +} + +int drm_user_object_ref(struct drm_file *priv, uint32_t user_token, + enum drm_object_type type, struct drm_user_object **object) +{ + struct drm_device *dev = priv->head->dev; + struct drm_user_object *uo; + struct drm_hash_item *hash; + int ret; + + mutex_lock(&dev->struct_mutex); + ret = drm_ht_find_item(&dev->object_hash, user_token, &hash); + if (ret) { + DRM_ERROR("Could not 
find user object to reference.\n"); + goto out_err; + } + uo = drm_hash_entry(hash, struct drm_user_object, hash); + if (uo->type != type) { + ret = -EINVAL; + goto out_err; + } + ret = drm_add_ref_object(priv, uo, _DRM_REF_USE); + if (ret) + goto out_err; + mutex_unlock(&dev->struct_mutex); + *object = uo; + return 0; +out_err: + mutex_unlock(&dev->struct_mutex); + return ret; +} + +int drm_user_object_unref(struct drm_file *priv, uint32_t user_token, + enum drm_object_type type) +{ + struct drm_device *dev = priv->head->dev; + struct drm_user_object *uo; + struct drm_ref_object *ro; + int ret; + + mutex_lock(&dev->struct_mutex); + uo = drm_lookup_user_object(priv, user_token); + if (!uo || (uo->type != type)) { + ret = -EINVAL; + goto out_err; + } + ro = drm_lookup_ref_object(priv, uo, _DRM_REF_USE); + if (!ro) { + ret = -EINVAL; + goto out_err; + } + drm_remove_ref_object(priv, ro); + mutex_unlock(&dev->struct_mutex); + return 0; +out_err: + mutex_unlock(&dev->struct_mutex); + return ret; +} diff --git a/drivers/char/drm/drm_objects.h b/drivers/char/drm/drm_objects.h new file mode 100644 index 0000000..dad11f2 --- /dev/null +++ b/drivers/char/drm/drm_objects.h @@ -0,0 +1,700 @@ +/************************************************************************** + * + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#ifndef _DRM_OBJECTS_H +#define _DRM_OBJECTS_H + +struct drm_device; +struct drm_bo_mem_reg; + +/*************************************************** + * User space objects. (drm_object.c) + */ + +#define drm_user_object_entry(_ptr, _type, _member) container_of(_ptr, _type, _member) + +enum drm_object_type { + drm_fence_type, + drm_buffer_type, + drm_lock_type, + /* + * Add other user space object types here. + */ + drm_driver_type0 = 256, + drm_driver_type1, + drm_driver_type2, + drm_driver_type3, + drm_driver_type4 +}; + +/* + * A user object is a structure that helps the drm give out user handles + * to kernel internal objects and to keep track of these objects so that + * they can be destroyed, for example when the user space process exits. + * Designed to be accessible using a user space 32-bit handle. 
+ */ + +struct drm_user_object { + struct drm_hash_item hash; + struct list_head list; + enum drm_object_type type; + atomic_t refcount; + int shareable; + struct drm_file *owner; + void (*ref_struct_locked) (struct drm_file *priv, + struct drm_user_object *obj, + enum drm_ref_type ref_action); + void (*unref) (struct drm_file *priv, struct drm_user_object *obj, + enum drm_ref_type unref_action); + void (*remove) (struct drm_file *priv, struct drm_user_object *obj); +}; + +/* + * A ref object is a structure which is used to + * keep track of references to user objects and to keep track of these + * references so that they can be destroyed for example when the user space + * process exits. Designed to be accessible using a pointer to the _user_ object. + */ + +struct drm_ref_object { + struct drm_hash_item hash; + struct list_head list; + atomic_t refcount; + enum drm_ref_type unref_action; +}; + +/** + * Must be called with the struct_mutex held. + */ + +extern int drm_add_user_object(struct drm_file *priv, struct drm_user_object *item, + int shareable); +/** + * Must be called with the struct_mutex held. + */ + +extern struct drm_user_object *drm_lookup_user_object(struct drm_file *priv, + uint32_t key); + +/* + * Must be called with the struct_mutex held. May temporarily release it. + */ + +extern int drm_add_ref_object(struct drm_file *priv, + struct drm_user_object *referenced_object, + enum drm_ref_type ref_action); + +/* + * Must be called with the struct_mutex held. + */ + +struct drm_ref_object *drm_lookup_ref_object(struct drm_file *priv, + struct drm_user_object *referenced_object, + enum drm_ref_type ref_action); +/* + * Must be called with the struct_mutex held. + * If "item" has been obtained by a call to drm_lookup_ref_object. You may not + * release the struct_mutex before calling drm_remove_ref_object. + * This function may temporarily release the struct_mutex. + */ + +extern void drm_remove_ref_object(struct drm_file *priv, struct drm_ref_object *item); +extern int drm_user_object_ref(struct drm_file *priv, uint32_t user_token, + enum drm_object_type type, + struct drm_user_object **object); +extern int drm_user_object_unref(struct drm_file *priv, uint32_t user_token, + enum drm_object_type type); + +/*************************************************** + * Fence objects. (drm_fence.c) + */ + +struct drm_fence_object { + struct drm_user_object base; + struct drm_device *dev; + atomic_t usage; + + /* + * The below three fields are protected by the fence manager spinlock. 
+ */ + + struct list_head ring; + int fence_class; + uint32_t native_type; + uint32_t type; + uint32_t signaled; + uint32_t sequence; + uint32_t flush_mask; + uint32_t submitted_flush; + uint32_t error; +}; + +#define _DRM_FENCE_CLASSES 8 +#define _DRM_FENCE_TYPE_EXE 0x00 + +struct drm_fence_class_manager { + struct list_head ring; + uint32_t pending_flush; + wait_queue_head_t fence_queue; + int pending_exe_flush; + uint32_t last_exe_flush; + uint32_t exe_flush_sequence; +}; + +struct drm_fence_manager { + int initialized; + rwlock_t lock; + struct drm_fence_class_manager fence_class[_DRM_FENCE_CLASSES]; + uint32_t num_classes; + atomic_t count; +}; + +struct drm_fence_driver { + uint32_t num_classes; + uint32_t wrap_diff; + uint32_t flush_diff; + uint32_t sequence_mask; + int lazy_capable; + int (*has_irq) (struct drm_device *dev, uint32_t fence_class, + uint32_t flags); + int (*emit) (struct drm_device *dev, uint32_t fence_class, + uint32_t flags, uint32_t *breadcrumb, + uint32_t *native_type); + void (*poke_flush) (struct drm_device *dev, uint32_t fence_class); +}; + +extern void drm_fence_handler(struct drm_device *dev, uint32_t fence_class, + uint32_t sequence, uint32_t type, + uint32_t error); +extern void drm_fence_manager_init(struct drm_device *dev); +extern void drm_fence_manager_takedown(struct drm_device *dev); +extern void drm_fence_flush_old(struct drm_device *dev, uint32_t fence_class, + uint32_t sequence); +extern int drm_fence_object_flush(struct drm_fence_object *fence, + uint32_t type); +extern int drm_fence_object_signaled(struct drm_fence_object *fence, + uint32_t type, int flush); +extern void drm_fence_usage_deref_locked(struct drm_fence_object **fence); +extern void drm_fence_usage_deref_unlocked(struct drm_fence_object **fence); +extern struct drm_fence_object *drm_fence_reference_locked(struct drm_fence_object *src); +extern void drm_fence_reference_unlocked(struct drm_fence_object **dst, + struct drm_fence_object *src); +extern int drm_fence_object_wait(struct drm_fence_object *fence, + int lazy, int ignore_signals, uint32_t mask); +extern int drm_fence_object_create(struct drm_device *dev, uint32_t type, + uint32_t fence_flags, uint32_t fence_class, + struct drm_fence_object **c_fence); +extern int drm_fence_object_emit(struct drm_fence_object *fence, + uint32_t fence_flags, uint32_t class, + uint32_t type); +extern void drm_fence_fill_arg(struct drm_fence_object *fence, + struct drm_fence_arg *arg); + +extern int drm_fence_add_user_object(struct drm_file *priv, + struct drm_fence_object *fence, + int shareable); + +extern int drm_fence_create_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_destroy_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_reference_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_unreference_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_signaled_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_flush_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_wait_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_emit_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +extern int drm_fence_buffers_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); + 
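[Illustrative aside, not part of the patch] To show how the fence API declared above fits together, here is a minimal sketch of a driver-side helper that creates, emits and then waits on a fence. The helper name is made up, the EXE-only type mask is an assumption, and the argument order follows the drm_fence.c definition of drm_fence_object_create() (the prototype above happens to list the same uint32_t arguments in a different order).

static int example_emit_and_wait(struct drm_device *dev, uint32_t fence_class)
{
	struct drm_fence_object *fence;
	int ret;

	/* Create a fence covering only the EXE type and emit it immediately. */
	ret = drm_fence_object_create(dev, fence_class, DRM_FENCE_TYPE_EXE,
				      DRM_FENCE_FLAG_EMIT, &fence);
	if (ret)
		return ret;

	/* Lazy wait: prefer the IRQ/flush path over polling the ring. */
	ret = drm_fence_object_wait(fence, 1, 0, DRM_FENCE_TYPE_EXE);

	/* Drop the reference obtained from create. */
	drm_fence_usage_deref_unlocked(&fence);
	return ret;
}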
+/************************************************** + *TTMs + */ + +/* + * The ttm backend GTT interface. (In our case AGP). + * Any similar type of device (PCIE?) + * needs only to implement these functions to be usable with the TTM interface. + * The AGP backend implementation lives in drm_agpsupport.c + * basically maps these calls to available functions in agpgart. + * Each drm device driver gets an + * additional function pointer that creates these types, + * so that the device can choose the correct aperture. + * (Multiple AGP apertures, etc.) + * Most device drivers will let this point to the standard AGP implementation. + */ + +#define DRM_BE_FLAG_NEEDS_FREE 0x00000001 +#define DRM_BE_FLAG_BOUND_CACHED 0x00000002 + +struct drm_ttm_backend; +struct drm_ttm_backend_func { + int (*needs_ub_cache_adjust) (struct drm_ttm_backend *backend); + int (*populate) (struct drm_ttm_backend *backend, + unsigned long num_pages, struct page **pages, + struct page *dummy_read_page); + void (*clear) (struct drm_ttm_backend *backend); + int (*bind) (struct drm_ttm_backend *backend, + struct drm_bo_mem_reg *bo_mem); + int (*unbind) (struct drm_ttm_backend *backend); + void (*destroy) (struct drm_ttm_backend *backend); +}; + + +struct drm_ttm_backend { + struct drm_device *dev; + uint32_t flags; + struct drm_ttm_backend_func *func; +}; + +struct drm_ttm { + struct page *dummy_read_page; + struct page **pages; + uint32_t page_flags; + unsigned long num_pages; + atomic_t vma_count; + struct drm_device *dev; + int destroy; + uint32_t mapping_offset; + struct drm_ttm_backend *be; + enum { + ttm_bound, + ttm_evicted, + ttm_unbound, + ttm_unpopulated, + } state; + +}; + +extern struct drm_ttm *drm_ttm_create(struct drm_device *dev, unsigned long size, + uint32_t page_flags, + struct page *dummy_read_page); +extern int drm_ttm_bind(struct drm_ttm *ttm, struct drm_bo_mem_reg *bo_mem); +extern void drm_ttm_unbind(struct drm_ttm *ttm); +extern void drm_ttm_evict(struct drm_ttm *ttm); +extern void drm_ttm_fixup_caching(struct drm_ttm *ttm); +extern struct page *drm_ttm_get_page(struct drm_ttm *ttm, int index); +extern void drm_ttm_cache_flush(void); +extern int drm_ttm_populate(struct drm_ttm *ttm); +extern int drm_ttm_set_user(struct drm_ttm *ttm, + struct task_struct *tsk, + unsigned long start, + unsigned long num_pages); + +/* + * Destroy a ttm. The user normally calls drmRmMap or a similar IOCTL to do + * this which calls this function iff there are no vmas referencing it anymore. + * Otherwise it is called when the last vma exits. + */ + +extern int drm_ttm_destroy(struct drm_ttm *ttm); + +#define DRM_FLAG_MASKED(_old, _new, _mask) {\ +(_old) ^= (((_old) ^ (_new)) & (_mask)); \ +} + +#define DRM_TTM_MASK_FLAGS ((1 << PAGE_SHIFT) - 1) +#define DRM_TTM_MASK_PFN (0xFFFFFFFFU - DRM_TTM_MASK_FLAGS) + +/* + * Page flags. + */ + +/* + * This ttm should not be cached by the CPU + */ +#define DRM_TTM_PAGE_UNCACHED (1 << 0) +/* + * This flag is not used at this time; I don't know what the + * intent was + */ +#define DRM_TTM_PAGE_USED (1 << 1) +/* + * This flag is not used at this time; I don't know what the + * intent was + */ +#define DRM_TTM_PAGE_BOUND (1 << 2) +/* + * This flag is not used at this time; I don't know what the + * intent was + */ +#define DRM_TTM_PAGE_PRESENT (1 << 3) +/* + * The array of page pointers was allocated with vmalloc + * instead of drm_calloc.
+ */ +#define DRM_TTM_PAGE_VMALLOC (1 << 4) +/* + * This ttm is mapped from user space + */ +#define DRM_TTM_PAGE_USER (1 << 5) +/* + * This ttm will be written to by the GPU + */ +#define DRM_TTM_PAGE_WRITE (1 << 6) +/* + * This ttm was mapped to the GPU, and so the contents may have + * been modified + */ +#define DRM_TTM_PAGE_USER_DIRTY (1 << 7) +/* + * This flag is not used at this time; I don't know what the + * intent was. + */ +#define DRM_TTM_PAGE_USER_DMA (1 << 8) + +/*************************************************** + * Buffer objects. (drm_bo.c, drm_bo_move.c) + */ + +struct drm_bo_mem_reg { + struct drm_mm_node *mm_node; + unsigned long size; + unsigned long num_pages; + uint32_t page_alignment; + uint32_t mem_type; + /* + * Current buffer status flags, indicating + * where the buffer is located and which + * access modes are in effect + */ + uint64_t flags; + /** + * These are the flags proposed for + * a validate operation. If the + * validate succeeds, they'll get moved + * into the flags field + */ + uint64_t proposed_flags; + + uint32_t desired_tile_stride; + uint32_t hw_tile_stride; +}; + +enum drm_bo_type { + /* + * drm_bo_type_device are 'normal' drm allocations, + * pages are allocated from within the kernel automatically + * and the objects can be mmap'd from the drm device. Each + * drm_bo_type_device object has a unique name which can be + * used by other processes to share access to the underlying + * buffer. + */ + drm_bo_type_device, + /* + * drm_bo_type_user are buffers of pages that already exist + * in the process address space. They are more limited than + * drm_bo_type_device buffers in that they must always + * remain cached (as we assume the user pages are mapped cached), + * and they are not sharable to other processes through DRM + * (although, regular shared memory should still work fine). + */ + drm_bo_type_user, + /* + * drm_bo_type_kernel are buffers that exist solely for use + * within the kernel. The pages cannot be mapped into the + * process. One obvious use would be for the ring + * buffer where user access would not (ideally) be required. + */ + drm_bo_type_kernel, +}; + +struct drm_buffer_object { + struct drm_device *dev; + struct drm_user_object base; + + /* + * If there is a possibility that the usage variable is zero, + * then dev->struct_mutex should be locked before incrementing it.
+ */ + + atomic_t usage; + unsigned long buffer_start; + enum drm_bo_type type; + unsigned long offset; + atomic_t mapped; + struct drm_bo_mem_reg mem; + + struct list_head lru; + struct list_head ddestroy; + + uint32_t fence_type; + uint32_t fence_class; + uint32_t new_fence_type; + uint32_t new_fence_class; + struct drm_fence_object *fence; + uint32_t priv_flags; + wait_queue_head_t event_queue; + struct mutex mutex; + unsigned long num_pages; + + /* For pinned buffers */ + struct drm_mm_node *pinned_node; + uint32_t pinned_mem_type; + struct list_head pinned_lru; + + /* For vm */ + struct drm_ttm *ttm; + struct drm_map_list map_list; + uint32_t memory_type; + unsigned long bus_offset; + uint32_t vm_flags; + void *iomap; + + +}; + +#define _DRM_BO_FLAG_UNFENCED 0x00000001 +#define _DRM_BO_FLAG_EVICTED 0x00000002 + +struct drm_mem_type_manager { + int has_type; + int use_type; + struct drm_mm manager; + struct list_head lru; + struct list_head pinned; + uint32_t flags; + uint32_t drm_bus_maptype; + unsigned long gpu_offset; + unsigned long io_offset; + unsigned long io_size; + void *io_addr; +}; + +struct drm_bo_lock { + struct drm_user_object base; + wait_queue_head_t queue; + atomic_t write_lock_pending; + atomic_t readers; +}; + +#define _DRM_FLAG_MEMTYPE_FIXED 0x00000001 /* Fixed (on-card) PCI memory */ +#define _DRM_FLAG_MEMTYPE_MAPPABLE 0x00000002 /* Memory mappable */ +#define _DRM_FLAG_MEMTYPE_CACHED 0x00000004 /* Cached binding */ +#define _DRM_FLAG_NEEDS_IOREMAP 0x00000008 /* Fixed memory needs ioremap + before kernel access. */ +#define _DRM_FLAG_MEMTYPE_CMA 0x00000010 /* Can't map aperture */ +#define _DRM_FLAG_MEMTYPE_CSELECT 0x00000020 /* Select caching */ + +struct drm_buffer_manager { + struct drm_bo_lock bm_lock; + struct mutex evict_mutex; + int nice_mode; + int initialized; + struct drm_file *last_to_validate; + struct drm_mem_type_manager man[DRM_BO_MEM_TYPES]; + struct list_head unfenced; + struct list_head ddestroy; + struct delayed_work wq; + uint32_t fence_type; + unsigned long cur_pages; + atomic_t count; + struct page *dummy_read_page; +}; + +struct drm_bo_driver { + const uint32_t *mem_type_prio; + const uint32_t *mem_busy_prio; + uint32_t num_mem_type_prio; + uint32_t num_mem_busy_prio; + struct drm_ttm_backend *(*create_ttm_backend_entry) + (struct drm_device *dev); + int (*fence_type) (struct drm_buffer_object *bo, uint32_t *fclass, + uint32_t *type); + int (*invalidate_caches) (struct drm_device *dev, uint64_t flags); + int (*init_mem_type) (struct drm_device *dev, uint32_t type, + struct drm_mem_type_manager *man); + /* + * evict_flags: + * + * @bo: the buffer object to be evicted + * + * Return the bo flags for a buffer which is not mapped to the hardware. + * These will be placed in proposed_flags so that when the move is + * finished, they'll end up in bo->mem.flags + */ + uint64_t(*evict_flags) (struct drm_buffer_object *bo); + /* + * move: + * + * @bo: the buffer to move + * + * @evict: whether this motion is evicting the buffer from + * the graphics address space + * + * @no_wait: whether this should give up and return -EBUSY + * if this move would require sleeping + * + * @new_mem: the new memory region receiving the buffer + * + * Move a buffer between two memory regions. 
+ */ + int (*move) (struct drm_buffer_object *bo, + int evict, int no_wait, struct drm_bo_mem_reg *new_mem); + /* + * ttm_cache_flush + */ + void (*ttm_cache_flush)(struct drm_ttm *ttm); +}; + +/* + * buffer objects (drm_bo.c) + */ + +extern int drm_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_destroy_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_map_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_unmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_wait_idle_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_setstatus_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_mm_init_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_mm_takedown_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_mm_lock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_mm_unlock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_version_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int drm_bo_driver_finish(struct drm_device *dev); +extern int drm_bo_driver_init(struct drm_device *dev); +extern int drm_bo_pci_offset(struct drm_device *dev, + struct drm_bo_mem_reg *mem, + unsigned long *bus_base, + unsigned long *bus_offset, + unsigned long *bus_size); +extern int drm_mem_reg_is_pci(struct drm_device *dev, struct drm_bo_mem_reg *mem); + +extern void drm_bo_usage_deref_locked(struct drm_buffer_object **bo); +extern void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo); +extern void drm_putback_buffer_objects(struct drm_device *dev); +extern int drm_fence_buffer_objects(struct drm_device *dev, + struct list_head *list, + uint32_t fence_flags, + struct drm_fence_object *fence, + struct drm_fence_object **used_fence); +extern void drm_bo_add_to_lru(struct drm_buffer_object *bo); +extern int drm_buffer_object_create(struct drm_device *dev, unsigned long size, + enum drm_bo_type type, uint64_t flags, + uint32_t hint, uint32_t page_alignment, + unsigned long buffer_start, + struct drm_buffer_object **bo); +extern int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int ignore_signals, + int no_wait); +extern int drm_bo_mem_space(struct drm_buffer_object *bo, + struct drm_bo_mem_reg *mem, int no_wait); +extern int drm_bo_move_buffer(struct drm_buffer_object *bo, + uint64_t new_mem_flags, + int no_wait, int move_unfenced); +extern int drm_bo_clean_mm(struct drm_device *dev, unsigned mem_type); +extern int drm_bo_init_mm(struct drm_device *dev, unsigned type, + unsigned long p_offset, unsigned long p_size); +extern int drm_bo_handle_validate(struct drm_file *file_priv, uint32_t handle, + uint64_t flags, uint64_t mask, uint32_t hint, + uint32_t fence_class, int use_old_fence_class, + struct drm_bo_info_rep *rep, + struct drm_buffer_object **bo_rep); +extern struct drm_buffer_object *drm_lookup_buffer_object(struct drm_file *file_priv, + uint32_t handle, + int check_owner); +extern int 
drm_bo_do_validate(struct drm_buffer_object *bo, + uint64_t flags, uint64_t mask, uint32_t hint, + uint32_t fence_class, + struct drm_bo_info_rep *rep); + +/* + * Buffer object memory move- and map helpers. + * drm_bo_move.c + */ + +extern int drm_bo_move_ttm(struct drm_buffer_object *bo, + int evict, int no_wait, + struct drm_bo_mem_reg *new_mem); +extern int drm_bo_move_memcpy(struct drm_buffer_object *bo, + int evict, + int no_wait, struct drm_bo_mem_reg *new_mem); +extern int drm_bo_move_accel_cleanup(struct drm_buffer_object *bo, + int evict, int no_wait, + uint32_t fence_class, uint32_t fence_type, + uint32_t fence_flags, + struct drm_bo_mem_reg *new_mem); +extern int drm_bo_same_page(unsigned long offset, unsigned long offset2); +extern unsigned long drm_bo_offset_end(unsigned long offset, + unsigned long end); + +struct drm_bo_kmap_obj { + void *virtual; + struct page *page; + enum { + bo_map_iomap, + bo_map_vmap, + bo_map_kmap, + bo_map_premapped, + } bo_kmap_type; +}; + +static inline void *drm_bmo_virtual(struct drm_bo_kmap_obj *map, int *is_iomem) +{ + *is_iomem = (map->bo_kmap_type == bo_map_iomap || + map->bo_kmap_type == bo_map_premapped); + return map->virtual; +} +extern void drm_bo_kunmap(struct drm_bo_kmap_obj *map); +extern int drm_bo_kmap(struct drm_buffer_object *bo, unsigned long start_page, + unsigned long num_pages, struct drm_bo_kmap_obj *map); + +/* + * drm_bo_lock.c + * Simple replacement for the hardware lock on buffer manager init and clean. + */ + + +extern void drm_bo_init_lock(struct drm_bo_lock *lock); +extern void drm_bo_read_unlock(struct drm_bo_lock *lock); +extern int drm_bo_read_lock(struct drm_bo_lock *lock); +extern int drm_bo_write_lock(struct drm_bo_lock *lock, + struct drm_file *file_priv); + +extern int drm_bo_write_unlock(struct drm_bo_lock *lock, + struct drm_file *file_priv); + +#ifdef CONFIG_DEBUG_MUTEXES +#define DRM_ASSERT_LOCKED(_mutex) \ + BUG_ON(!mutex_is_locked(_mutex) || \ + ((_mutex)->owner != current_thread_info())) +#else +#define DRM_ASSERT_LOCKED(_mutex) +#endif +#endif diff --git a/drivers/char/drm/drm_os_linux.h b/drivers/char/drm/drm_os_linux.h index daa69c9..8dbd257 100644 --- a/drivers/char/drm/drm_os_linux.h +++ b/drivers/char/drm/drm_os_linux.h @@ -69,9 +69,9 @@ static __inline__ int mtrr_del(int reg, unsigned long base, unsigned long size) #define DRM_COPY_TO_USER(arg1, arg2, arg3) \ copy_to_user(arg1, arg2, arg3) /* Macros for copyfrom user, but checking readability only once */ -#define DRM_VERIFYAREA_READ( uaddr, size ) \ +#define DRM_VERIFYAREA_READ( uaddr, size ) \ (access_ok( VERIFY_READ, uaddr, size ) ? 
0 : -EFAULT) -#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \ +#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \ __copy_from_user(arg1, arg2, arg3) #define DRM_COPY_TO_USER_UNCHECKED(arg1, arg2, arg3) \ __copy_to_user(arg1, arg2, arg3) diff --git a/drivers/char/drm/drm_pciids.h b/drivers/char/drm/drm_pciids.h index 43d3c42..c69a4d0 100644 --- a/drivers/char/drm/drm_pciids.h +++ b/drivers/char/drm/drm_pciids.h @@ -311,5 +311,5 @@ {0x8086, 0x29d2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ {0x8086, 0x2a02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ {0x8086, 0x2a12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ + {0x8086, 0x2a42, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ {0, 0, 0} - diff --git a/drivers/char/drm/drm_proc.c b/drivers/char/drm/drm_proc.c index 12dfea8..a644b18 100644 --- a/drivers/char/drm/drm_proc.c +++ b/drivers/char/drm/drm_proc.c @@ -49,6 +49,8 @@ static int drm_queues_info(char *buf, char **start, off_t offset, int request, int *eof, void *data); static int drm_bufs_info(char *buf, char **start, off_t offset, int request, int *eof, void *data); +static int drm_objects_info(char *buf, char **start, off_t offset, + int request, int *eof, void *data); #if DRM_DEBUG_CODE static int drm_vma_info(char *buf, char **start, off_t offset, int request, int *eof, void *data); @@ -67,6 +69,7 @@ static struct drm_proc_list { {"clients", drm_clients_info}, {"queues", drm_queues_info}, {"bufs", drm_bufs_info}, + {"objects", drm_objects_info}, #if DRM_DEBUG_CODE {"vma", drm_vma_info}, #endif @@ -236,11 +239,11 @@ static int drm__vm_info(char *buf, char **start, off_t offset, int request, type = "??"; else type = types[map->type]; - DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08x ", + DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08lx ", i, map->offset, map->size, type, map->flags, - r_list->user_token); + (unsigned long) r_list->user_token); if (map->mtrr < 0) { DRM_PROC_PRINT("none\n"); } else { @@ -416,6 +419,93 @@ static int drm_bufs_info(char *buf, char **start, off_t offset, int request, } /** + * Called when "/proc/dri/.../objects" is read. + * + * \param buf output buffer. + * \param start start of output data. + * \param offset requested start offset. + * \param request requested number of bytes. + * \param eof whether there is no more data to return. + * \param data private data. + * \return number of written bytes. 
+ */ +static int drm__objects_info(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + struct drm_device *dev = (struct drm_device *) data; + int len = 0; + struct drm_buffer_manager *bm = &dev->bm; + struct drm_fence_manager *fm = &dev->fm; + uint64_t used_mem; + uint64_t low_mem; + uint64_t high_mem; + + + if (offset > DRM_PROC_LIMIT) { + *eof = 1; + return 0; + } + + *start = &buf[offset]; + *eof = 0; + + DRM_PROC_PRINT("Object accounting:\n\n"); + if (fm->initialized) { + DRM_PROC_PRINT("Number of active fence objects: %d.\n", + atomic_read(&fm->count)); + } else { + DRM_PROC_PRINT("Fence objects are not supported by this driver\n"); + } + + if (bm->initialized) { + DRM_PROC_PRINT("Number of active buffer objects: %d.\n\n", + atomic_read(&bm->count)); + } + DRM_PROC_PRINT("Memory accounting:\n\n"); + if (bm->initialized) { + DRM_PROC_PRINT("Number of locked GATT pages: %lu.\n", bm->cur_pages); + } else { + DRM_PROC_PRINT("Buffer objects are not supported by this driver.\n"); + } + + drm_query_memctl(&used_mem, &low_mem, &high_mem); + + if (used_mem > 16*PAGE_SIZE) { + DRM_PROC_PRINT("Used object memory is %lu pages.\n", + (unsigned long) (used_mem >> PAGE_SHIFT)); + } else { + DRM_PROC_PRINT("Used object memory is %lu bytes.\n", + (unsigned long) used_mem); + } + DRM_PROC_PRINT("Soft object memory usage threshold is %lu pages.\n", + (unsigned long) (low_mem >> PAGE_SHIFT)); + DRM_PROC_PRINT("Hard object memory usage threshold is %lu pages.\n", + (unsigned long) (high_mem >> PAGE_SHIFT)); + + DRM_PROC_PRINT("\n"); + + if (len > request + offset) + return request; + *eof = 1; + return len - offset; +} + +/** + * Simply calls _objects_info() while holding the drm_device::struct_mutex lock. + */ +static int drm_objects_info(char *buf, char **start, off_t offset, int request, + int *eof, void *data) +{ + struct drm_device *dev = (struct drm_device *) data; + int ret; + + mutex_lock(&dev->struct_mutex); + ret = drm__objects_info(buf, start, offset, request, eof, data); + mutex_unlock(&dev->struct_mutex); + return ret; +} + +/** * Called when "/proc/dri/.../clients" is read. * * \param buf output buffer. 
diff --git a/drivers/char/drm/drm_sarea.h b/drivers/char/drm/drm_sarea.h index e040f47..4800373 100644 --- a/drivers/char/drm/drm_sarea.h +++ b/drivers/char/drm/drm_sarea.h @@ -45,7 +45,7 @@ #endif /** Maximum number of drawables in the SAREA */ -#define SAREA_MAX_DRAWABLES 256 +#define SAREA_MAX_DRAWABLES 256 #define SAREA_DRAWABLE_CLAIMED_ENTRY 0x80000000 diff --git a/drivers/char/drm/drm_scatter.c b/drivers/char/drm/drm_scatter.c index eb7fa43..56c6eaa 100644 --- a/drivers/char/drm/drm_scatter.c +++ b/drivers/char/drm/drm_scatter.c @@ -67,7 +67,7 @@ int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request) struct drm_sg_mem *entry; unsigned long pages, i, j; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); if (!drm_core_check_feature(dev, DRIVER_SG)) return -EINVAL; @@ -81,7 +81,7 @@ int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request) memset(entry, 0, sizeof(*entry)); pages = (request->size + PAGE_SIZE - 1) / PAGE_SIZE; - DRM_DEBUG("sg size=%ld pages=%ld\n", request->size, pages); + DRM_DEBUG("size=%ld pages=%ld\n", request->size, pages); entry->pages = pages; entry->pagelist = drm_alloc(pages * sizeof(*entry->pagelist), @@ -122,8 +122,8 @@ int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request) entry->handle = ScatterHandle((unsigned long)entry->virtual); - DRM_DEBUG("sg alloc handle = %08lx\n", entry->handle); - DRM_DEBUG("sg alloc virtual = %p\n", entry->virtual); + DRM_DEBUG("handle = %08lx\n", entry->handle); + DRM_DEBUG("virtual = %p\n", entry->virtual); for (i = (unsigned long)entry->virtual, j = 0; j < pages; i += PAGE_SIZE, j++) { @@ -179,7 +179,7 @@ int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request) return 0; - failed: +failed: drm_sg_cleanup(entry); return -ENOMEM; } @@ -210,7 +210,7 @@ int drm_sg_free(struct drm_device *dev, void *data, if (!entry || entry->handle != request->handle) return -EINVAL; - DRM_DEBUG("sg free virtual = %p\n", entry->virtual); + DRM_DEBUG("virtual = %p\n", entry->virtual); drm_sg_cleanup(entry); diff --git a/drivers/char/drm/drm_stub.c b/drivers/char/drm/drm_stub.c index ee83ff9..4010365 100644 --- a/drivers/char/drm/drm_stub.c +++ b/drivers/char/drm/drm_stub.c @@ -71,6 +71,7 @@ static int drm_fill_in_dev(struct drm_device * dev, struct pci_dev *pdev, init_timer(&dev->timer); mutex_init(&dev->struct_mutex); mutex_init(&dev->ctxlist_mutex); + mutex_init(&dev->bm.evict_mutex); idr_init(&dev->drw_idr); @@ -83,7 +84,19 @@ static int drm_fill_in_dev(struct drm_device * dev, struct pci_dev *pdev, #endif dev->irq = pdev->irq; - if (drm_ht_create(&dev->map_hash, 12)) { + if (drm_ht_create(&dev->map_hash, DRM_MAP_HASH_ORDER)) { + return -ENOMEM; + } + + if (drm_mm_init(&dev->offset_manager, DRM_FILE_PAGE_OFFSET_START, + DRM_FILE_PAGE_OFFSET_SIZE)) { + drm_ht_remove(&dev->map_hash); + return -ENOMEM; + } + + if (drm_ht_create(&dev->object_hash, DRM_OBJECT_HASH_ORDER)) { + drm_ht_remove(&dev->map_hash); + drm_mm_takedown(&dev->offset_manager); return -ENOMEM; } @@ -98,10 +111,6 @@ static int drm_fill_in_dev(struct drm_device * dev, struct pci_dev *pdev, dev->driver = driver; - if (dev->driver->load) - if ((retcode = dev->driver->load(dev, ent->driver_data))) - goto error_out_unreg; - if (drm_core_has_AGP(dev)) { if (drm_device_is_agp(dev)) dev->agp = drm_agp_init(dev); @@ -120,15 +129,20 @@ static int drm_fill_in_dev(struct drm_device * dev, struct pci_dev *pdev, } } + if (dev->driver->load) + if ((retcode = dev->driver->load(dev, ent->driver_data))) + goto 
error_out_unreg; + retcode = drm_ctxbitmap_init(dev); if (retcode) { DRM_ERROR("Cannot allocate memory for context bitmap.\n"); goto error_out_unreg; } + drm_fence_manager_init(dev); return 0; - error_out_unreg: +error_out_unreg: drm_lastclose(dev); return retcode; } @@ -168,11 +182,10 @@ static int drm_get_head(struct drm_device * dev, struct drm_head * head) goto err_g1; } - head->dev_class = drm_sysfs_device_add(drm_class, head); - if (IS_ERR(head->dev_class)) { + ret = drm_sysfs_device_add(dev, head); + if (ret) { printk(KERN_ERR "DRM: Error sysfs_device_add.\n"); - ret = PTR_ERR(head->dev_class); goto err_g2; } *heads = head; @@ -183,9 +196,9 @@ static int drm_get_head(struct drm_device * dev, struct drm_head * head) } DRM_ERROR("out of minors\n"); return -ENOMEM; - err_g2: +err_g2: drm_proc_cleanup(minor, drm_proc_root, head->dev_root); - err_g1: +err_g1: *head = (struct drm_head) { .dev = NULL}; return ret; @@ -224,7 +237,7 @@ int drm_get_dev(struct pci_dev *pdev, const struct pci_device_id *ent, } if ((ret = drm_get_head(dev, &dev->primary))) goto err_g2; - + DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n", driver->name, driver->major, driver->minor, driver->patchlevel, driver->date, dev->primary.minor); @@ -283,7 +296,7 @@ int drm_put_head(struct drm_head * head) DRM_DEBUG("release secondary minor %d\n", minor); drm_proc_cleanup(minor, drm_proc_root, head->dev_root); - drm_sysfs_device_remove(head->dev_class); + drm_sysfs_device_remove(head->dev); *head = (struct drm_head) {.dev = NULL}; diff --git a/drivers/char/drm/drm_sysfs.c b/drivers/char/drm/drm_sysfs.c index cf4349b..fa36153 100644 --- a/drivers/char/drm/drm_sysfs.c +++ b/drivers/char/drm/drm_sysfs.c @@ -19,6 +19,45 @@ #include "drm_core.h" #include "drmP.h" +#define to_drm_device(d) container_of(d, struct drm_device, dev) + +/** + * drm_sysfs_suspend - DRM class suspend hook + * @dev: Linux device to suspend + * @state: power state to enter + * + * Just figures out what the actual struct drm_device associated with + * @dev is and calls its suspend hook, if present. + */ +static int drm_sysfs_suspend(struct device *dev, pm_message_t state) +{ + struct drm_device *drm_dev = to_drm_device(dev); + + printk(KERN_ERR "%s\n", __FUNCTION__); + + if (drm_dev->driver->suspend) + return drm_dev->driver->suspend(drm_dev); + + return 0; +} + +/** + * drm_sysfs_resume - DRM class resume hook + * @dev: Linux device to resume + * + * Just figures out what the actual struct drm_device associated with + * @dev is and calls its resume hook, if present. + */ +static int drm_sysfs_resume(struct device *dev) +{ + struct drm_device *drm_dev = to_drm_device(dev); + + if (drm_dev->driver->resume) + return drm_dev->driver->resume(drm_dev); + + return 0; +} + /* Display the version of drm_core. This doesn't work right in current design */ static ssize_t version_show(struct class *dev, char *buf) { @@ -33,7 +72,7 @@ static CLASS_ATTR(version, S_IRUGO, version_show, NULL); * @owner: pointer to the module that is to "own" this struct drm_sysfs_class * @name: pointer to a string for the name of this class. * - * This is used to create a struct drm_sysfs_class pointer that can then be used + * This is used to create DRM class pointer that can then be used * in calls to drm_sysfs_device_add(). 
* * Note, the pointer created here is to be destroyed when finished by making a @@ -50,6 +89,9 @@ struct class *drm_sysfs_create(struct module *owner, char *name) goto err_out; } + class->suspend = drm_sysfs_suspend; + class->resume = drm_sysfs_resume; + err = class_create_file(class, &class_attr_version); if (err) goto err_out_class; @@ -63,94 +105,100 @@ err_out: } /** - * drm_sysfs_destroy - destroys a struct drm_sysfs_class structure - * @cs: pointer to the struct drm_sysfs_class that is to be destroyed + * drm_sysfs_destroy - destroys DRM class * - * Note, the pointer to be destroyed must have been created with a call to - * drm_sysfs_create(). + * Destroy the DRM device class. */ -void drm_sysfs_destroy(struct class *class) +void drm_sysfs_destroy(void) { - if ((class == NULL) || (IS_ERR(class))) + if ((drm_class == NULL) || (IS_ERR(drm_class))) return; - - class_remove_file(class, &class_attr_version); - class_destroy(class); + class_remove_file(drm_class, &class_attr_version); + class_destroy(drm_class); } -static ssize_t show_dri(struct class_device *class_device, char *buf) +static ssize_t show_dri(struct device *device, struct device_attribute *attr, + char *buf) { - struct drm_device * dev = ((struct drm_head *)class_get_devdata(class_device))->dev; + struct drm_device *dev = to_drm_device(device); if (dev->driver->dri_library_name) return dev->driver->dri_library_name(dev, buf); return snprintf(buf, PAGE_SIZE, "%s\n", dev->driver->pci_driver.name); } -static struct class_device_attribute class_device_attrs[] = { +static struct device_attribute device_attrs[] = { __ATTR(dri_library_name, S_IRUGO, show_dri, NULL), }; /** + * drm_sysfs_device_release - do nothing + * @dev: Linux device + * + * Normally, this would free the DRM device associated with @dev, along + * with cleaning up any other stuff. But we do that in the DRM core, so + * this function can just return and hope that the core does its job. + */ +static void drm_sysfs_device_release(struct device *dev) +{ + return; +} + +/** * drm_sysfs_device_add - adds a class device to sysfs for a character driver - * @cs: pointer to the struct class that this device should be registered to. - * @dev: the dev_t for the device to be added. - * @device: a pointer to a struct device that is assiociated with this class device. - * @fmt: string for the class device's name + * @dev: DRM device to be added + * @head: DRM head in question * - * A struct class_device will be created in sysfs, registered to the specified - * class. A "dev" file will be created, showing the dev_t for the device. The - * pointer to the struct class_device will be returned from the call. Any further - * sysfs files that might be required can be created using this pointer. - * Note: the struct class passed to this function must have previously been - * created with a call to drm_sysfs_create(). + * Add a DRM device to the DRM's device model class. We use @dev's PCI device + * as the parent for the Linux device, and make sure it has a file containing + * the driver we're using (for userspace compatibility). 
*/ -struct class_device *drm_sysfs_device_add(struct class *cs, struct drm_head *head) +int drm_sysfs_device_add(struct drm_device *dev, struct drm_head *head) { - struct class_device *class_dev; - int i, j, err; - - class_dev = class_device_create(cs, NULL, - MKDEV(DRM_MAJOR, head->minor), - &(head->dev->pdev)->dev, - "card%d", head->minor); - if (IS_ERR(class_dev)) { - err = PTR_ERR(class_dev); + int err; + int i, j; + + dev->dev.parent = &dev->pdev->dev; + dev->dev.class = drm_class; + dev->dev.release = drm_sysfs_device_release; + dev->dev.devt = head->device; + snprintf(dev->dev.bus_id, BUS_ID_SIZE, "card%d", head->minor); + + err = device_register(&dev->dev); + if (err) { + DRM_ERROR("device add failed: %d\n", err); goto err_out; } - class_set_devdata(class_dev, head); - - for (i = 0; i < ARRAY_SIZE(class_device_attrs); i++) { - err = class_device_create_file(class_dev, - &class_device_attrs[i]); + for (i = 0; i < ARRAY_SIZE(device_attrs); i++) { + err = device_create_file(&dev->dev, &device_attrs[i]); if (err) goto err_out_files; } - return class_dev; + return 0; err_out_files: if (i > 0) for (j = 0; j < i; j++) - class_device_remove_file(class_dev, - &class_device_attrs[i]); - class_device_unregister(class_dev); + device_remove_file(&dev->dev, &device_attrs[i]); + device_unregister(&dev->dev); err_out: - return ERR_PTR(err); + + return err; } /** - * drm_sysfs_device_remove - removes a class device that was created with drm_sysfs_device_add() - * @dev: the dev_t of the device that was previously registered. + * drm_sysfs_device_remove - remove DRM device + * @dev: DRM device to remove * * This call unregisters and cleans up a class device that was created with a * call to drm_sysfs_device_add() */ -void drm_sysfs_device_remove(struct class_device *class_dev) +void drm_sysfs_device_remove(struct drm_device *dev) { int i; - for (i = 0; i < ARRAY_SIZE(class_device_attrs); i++) - class_device_remove_file(class_dev, &class_device_attrs[i]); - class_device_unregister(class_dev); + for (i = 0; i < ARRAY_SIZE(device_attrs); i++) + device_remove_file(&dev->dev, &device_attrs[i]); + device_unregister(&dev->dev); } diff --git a/drivers/char/drm/drm_ttm.c b/drivers/char/drm/drm_ttm.c new file mode 100644 index 0000000..c9de50b --- /dev/null +++ b/drivers/char/drm/drm_ttm.c @@ -0,0 +1,468 @@ +/************************************************************************** + * + * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + **************************************************************************/ +/* + * Authors: Thomas Hellstrom + */ + +#include "drmP.h" + +static void drm_ttm_ipi_handler(void *null) +{ + flush_agp_cache(); +} + +void drm_ttm_cache_flush(void) +{ + if (on_each_cpu(drm_ttm_ipi_handler, NULL, 1, 1) != 0) + DRM_ERROR("Timed out waiting for drm cache flush.\n"); +} +EXPORT_SYMBOL(drm_ttm_cache_flush); + +/* + * Use kmalloc if possible. Otherwise fall back to vmalloc. + */ + +static void drm_ttm_alloc_pages(struct drm_ttm *ttm) +{ + unsigned long size = ttm->num_pages * sizeof(*ttm->pages); + ttm->pages = NULL; + + if (drm_alloc_memctl(size)) + return; + + if (size <= PAGE_SIZE) + ttm->pages = drm_calloc(1, size, DRM_MEM_TTM); + + if (!ttm->pages) { + ttm->pages = vmalloc_user(size); + if (ttm->pages) + ttm->page_flags |= DRM_TTM_PAGE_VMALLOC; + } + if (!ttm->pages) + drm_free_memctl(size); +} + +static void drm_ttm_free_pages(struct drm_ttm *ttm) +{ + unsigned long size = ttm->num_pages * sizeof(*ttm->pages); + + if (ttm->page_flags & DRM_TTM_PAGE_VMALLOC) { + vfree(ttm->pages); + ttm->page_flags &= ~DRM_TTM_PAGE_VMALLOC; + } else { + drm_free(ttm->pages, size, DRM_MEM_TTM); + } + drm_free_memctl(size); + ttm->pages = NULL; +} + +static struct page *drm_ttm_alloc_page(void) +{ + struct page *page; + + if (drm_alloc_memctl(PAGE_SIZE)) + return NULL; + + page = alloc_page(GFP_KERNEL | __GFP_ZERO | GFP_DMA32); + if (!page) { + drm_free_memctl(PAGE_SIZE); + return NULL; + } + return page; +} + +/* + * Change caching policy for the linear kernel map + * for range of pages in a ttm. + */ + +static int drm_ttm_set_caching(struct drm_ttm *ttm, int noncached) +{ + int i; + struct page **cur_page; + int do_tlbflush = 0; + + if ((ttm->page_flags & DRM_TTM_PAGE_UNCACHED) == noncached) + return 0; + + if (noncached) + drm_ttm_cache_flush(); + + for (i = 0; i < ttm->num_pages; ++i) { + cur_page = ttm->pages + i; + if (*cur_page) { + if (!PageHighMem(*cur_page)) { + if (noncached) { + map_page_into_agp(*cur_page); + } else { + unmap_page_from_agp(*cur_page); + } + do_tlbflush = 1; + } + } + } + if (do_tlbflush) + flush_agp_mappings(); + + DRM_FLAG_MASKED(ttm->page_flags, noncached, DRM_TTM_PAGE_UNCACHED); + + return 0; +} + + +static void drm_ttm_free_user_pages(struct drm_ttm *ttm) +{ + int write; + int dirty; + struct page *page; + int i; + + BUG_ON(!(ttm->page_flags & DRM_TTM_PAGE_USER)); + write = ((ttm->page_flags & DRM_TTM_PAGE_WRITE) != 0); + dirty = ((ttm->page_flags & DRM_TTM_PAGE_USER_DIRTY) != 0); + + for (i = 0; i < ttm->num_pages; ++i) { + page = ttm->pages[i]; + if (page == NULL) + continue; + + if (page == ttm->dummy_read_page) { + BUG_ON(write); + continue; + } + + if (write && dirty && !PageReserved(page)) + set_page_dirty_lock(page); + + ttm->pages[i] = NULL; + put_page(page); + } +} + +static void drm_ttm_free_alloced_pages(struct drm_ttm *ttm) +{ + int i; + struct drm_buffer_manager *bm = &ttm->dev->bm; + struct page **cur_page; + + for (i = 0; i < ttm->num_pages; ++i) { + cur_page = ttm->pages + i; + if (*cur_page) { + if (page_count(*cur_page) != 1) + DRM_ERROR("Erroneous page count. Leaking pages.\n"); + if (page_mapped(*cur_page)) + DRM_ERROR("Erroneous map count. 
Leaking page mappings.\n"); + __free_page(*cur_page); + drm_free_memctl(PAGE_SIZE); + --bm->cur_pages; + } + } +} + +/* + * Free all resources associated with a ttm. + */ + +int drm_ttm_destroy(struct drm_ttm *ttm) +{ + struct drm_ttm_backend *be; + + if (!ttm) + return 0; + + be = ttm->be; + if (be) { + be->func->destroy(be); + ttm->be = NULL; + } + + if (ttm->pages) { + if (ttm->page_flags & DRM_TTM_PAGE_UNCACHED) + drm_ttm_set_caching(ttm, 0); + + if (ttm->page_flags & DRM_TTM_PAGE_USER) + drm_ttm_free_user_pages(ttm); + else + drm_ttm_free_alloced_pages(ttm); + + drm_ttm_free_pages(ttm); + } + + drm_ctl_free(ttm, sizeof(*ttm), DRM_MEM_TTM); + return 0; +} + +struct page *drm_ttm_get_page(struct drm_ttm *ttm, int index) +{ + struct page *p; + struct drm_buffer_manager *bm = &ttm->dev->bm; + + p = ttm->pages[index]; + if (!p) { + p = drm_ttm_alloc_page(); + if (!p) + return NULL; + ttm->pages[index] = p; + ++bm->cur_pages; + } + return p; +} +EXPORT_SYMBOL(drm_ttm_get_page); + +/** + * drm_ttm_set_user: + * + * @ttm: the ttm to map pages to. This must always be + * a freshly created ttm. + * + * @tsk: a pointer to the address space from which to map + * pages. + * + * @write: a boolean indicating that write access is desired + * + * start: the starting address + * + * Map a range of user addresses to a new ttm object. This + * provides access to user memory from the graphics device. + */ +int drm_ttm_set_user(struct drm_ttm *ttm, + struct task_struct *tsk, + unsigned long start, + unsigned long num_pages) +{ + struct mm_struct *mm = tsk->mm; + int ret; + int write = (ttm->page_flags & DRM_TTM_PAGE_WRITE) != 0; + + BUG_ON(num_pages != ttm->num_pages); + BUG_ON((ttm->page_flags & DRM_TTM_PAGE_USER) == 0); + + down_read(&mm->mmap_sem); + ret = get_user_pages(tsk, mm, start, num_pages, + write, 0, ttm->pages, NULL); + up_read(&mm->mmap_sem); + + if (ret != num_pages && write) { + drm_ttm_free_user_pages(ttm); + return -ENOMEM; + } + + return 0; +} + + + +/** + * drm_ttm_populate: + * + * @ttm: the object to allocate pages for + * + * Allocate pages for all unset page entries, then + * call the backend to create the hardware mappings + */ +int drm_ttm_populate(struct drm_ttm *ttm) +{ + struct page *page; + unsigned long i; + struct drm_ttm_backend *be; + + if (ttm->state != ttm_unpopulated) + return 0; + + be = ttm->be; + if (ttm->page_flags & DRM_TTM_PAGE_WRITE) { + for (i = 0; i < ttm->num_pages; ++i) { + page = drm_ttm_get_page(ttm, i); + if (!page) + return -ENOMEM; + } + } + be->func->populate(be, ttm->num_pages, ttm->pages, ttm->dummy_read_page); + ttm->state = ttm_unbound; + return 0; +} + +/** + * drm_ttm_create: + * + * @dev: the drm_device + * + * @size: The size (in bytes) of the desired object + * + * @page_flags: various DRM_TTM_PAGE_* flags. See drm_object.h. + * + * Allocate and initialize a ttm, leaving it unpopulated at this time + */ + +struct drm_ttm *drm_ttm_create(struct drm_device *dev, unsigned long size, + uint32_t page_flags, struct page *dummy_read_page) +{ + struct drm_bo_driver *bo_driver = dev->driver->bo_driver; + struct drm_ttm *ttm; + + if (!bo_driver) + return NULL; + + ttm = drm_ctl_calloc(1, sizeof(*ttm), DRM_MEM_TTM); + if (!ttm) + return NULL; + + ttm->dev = dev; + atomic_set(&ttm->vma_count, 0); + + ttm->destroy = 0; + ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; + + ttm->page_flags = page_flags; + + ttm->dummy_read_page = dummy_read_page; + + /* + * Account also for AGP module memory usage. 
+ */ + + drm_ttm_alloc_pages(ttm); + if (!ttm->pages) { + drm_ttm_destroy(ttm); + DRM_ERROR("Failed allocating page table\n"); + return NULL; + } + ttm->be = bo_driver->create_ttm_backend_entry(dev); + if (!ttm->be) { + drm_ttm_destroy(ttm); + DRM_ERROR("Failed creating ttm backend entry\n"); + return NULL; + } + ttm->state = ttm_unpopulated; + return ttm; +} + +/** + * drm_ttm_evict: + * + * @ttm: the object to be unbound from the aperture. + * + * Transition a ttm from bound to evicted, where it + * isn't present in the aperture, but various caches may + * not be consistent. + */ +void drm_ttm_evict(struct drm_ttm *ttm) +{ + struct drm_ttm_backend *be = ttm->be; + int ret; + + if (ttm->state == ttm_bound) { + ret = be->func->unbind(be); + BUG_ON(ret); + } + + ttm->state = ttm_evicted; +} + +/** + * drm_ttm_fixup_caching: + * + * @ttm: the object to set unbound + * + * XXX this function is misnamed. Transition a ttm from evicted to + * unbound, flushing caches as appropriate. + */ +void drm_ttm_fixup_caching(struct drm_ttm *ttm) +{ + + if (ttm->state == ttm_evicted) { + struct drm_ttm_backend *be = ttm->be; + if (be->func->needs_ub_cache_adjust(be)) + drm_ttm_set_caching(ttm, 0); + ttm->state = ttm_unbound; + } +} + +/** + * drm_ttm_unbind: + * + * @ttm: the object to unbind from the graphics device + * + * Unbind an object from the aperture. This removes the mappings + * from the graphics device and flushes caches if necessary. + */ +void drm_ttm_unbind(struct drm_ttm *ttm) +{ + if (ttm->state == ttm_bound) + drm_ttm_evict(ttm); + + drm_ttm_fixup_caching(ttm); +} + +/** + * drm_ttm_bind: + * + * @ttm: the ttm object to bind to the graphics device + * + * @bo_mem: the aperture memory region which will hold the object + * + * Bind a ttm object to the aperture. 
This ensures that the necessary + * pages are allocated, flushes CPU caches as needed and marks the + * ttm as DRM_TTM_PAGE_USER_DIRTY to indicate that it may have been + * modified by the GPU + */ +int drm_ttm_bind(struct drm_ttm *ttm, struct drm_bo_mem_reg *bo_mem) +{ + struct drm_bo_driver *bo_driver = ttm->dev->driver->bo_driver; + int ret = 0; + struct drm_ttm_backend *be; + + if (!ttm) + return -EINVAL; + if (ttm->state == ttm_bound) + return 0; + + be = ttm->be; + + ret = drm_ttm_populate(ttm); + if (ret) + return ret; + + if (ttm->state == ttm_unbound && !(bo_mem->flags & DRM_BO_FLAG_CACHED)) + drm_ttm_set_caching(ttm, DRM_TTM_PAGE_UNCACHED); + else if ((bo_mem->flags & DRM_BO_FLAG_CACHED_MAPPED) && + bo_driver->ttm_cache_flush) + bo_driver->ttm_cache_flush(ttm); + + ret = be->func->bind(be, bo_mem); + if (ret) { + ttm->state = ttm_evicted; + DRM_ERROR("Couldn't bind backend.\n"); + return ret; + } + + ttm->state = ttm_bound; + if (ttm->page_flags & DRM_TTM_PAGE_USER) + ttm->page_flags |= DRM_TTM_PAGE_USER_DIRTY; + return 0; +} +EXPORT_SYMBOL(drm_ttm_bind); diff --git a/drivers/char/drm/drm_vm.c b/drivers/char/drm/drm_vm.c index e8d50af..9bb85f4 100644 --- a/drivers/char/drm/drm_vm.c +++ b/drivers/char/drm/drm_vm.c @@ -40,6 +40,10 @@ static void drm_vm_open(struct vm_area_struct *vma); static void drm_vm_close(struct vm_area_struct *vma); +static int drm_bo_mmap_locked(struct vm_area_struct *vma, + struct file *filp, + drm_local_map_t *map); + static pgprot_t drm_io_prot(uint32_t map_type, struct vm_area_struct *vma) { @@ -139,7 +143,7 @@ static __inline__ struct page *drm_do_vm_nopage(struct vm_area_struct *vma, return page; } - vm_nopage_error: +vm_nopage_error: return NOPAGE_SIGBUS; /* Disallow mremap */ } #else /* __OS_HAS_AGP */ @@ -180,7 +184,7 @@ static __inline__ struct page *drm_do_vm_shm_nopage(struct vm_area_struct *vma, return NOPAGE_SIGBUS; get_page(page); - DRM_DEBUG("shm_nopage 0x%lx\n", address); + DRM_DEBUG("0x%lx\n", address); return page; } @@ -213,7 +217,7 @@ static void drm_vm_shm_close(struct vm_area_struct *vma) found_maps++; if (pt->vma == vma) { list_del(&pt->head); - drm_free(pt, sizeof(*pt), DRM_MEM_VMAS); + drm_ctl_free(pt, sizeof(*pt), DRM_MEM_VMAS); } } @@ -255,6 +259,9 @@ static void drm_vm_shm_close(struct vm_area_struct *vma) dmah.size = map->size; __drm_pci_free(dev, &dmah); break; + case _DRM_TTM: + BUG_ON(1); + break; } drm_free(map, sizeof(*map), DRM_MEM_MAPS); } @@ -294,7 +301,7 @@ static __inline__ struct page *drm_do_vm_dma_nopage(struct vm_area_struct *vma, get_page(page); - DRM_DEBUG("dma_nopage 0x%lx (page %lu)\n", address, page_nr); + DRM_DEBUG("0x%lx (page %lu)\n", address, page_nr); return page; } @@ -413,7 +420,7 @@ static void drm_vm_open_locked(struct vm_area_struct *vma) vma->vm_start, vma->vm_end - vma->vm_start); atomic_inc(&dev->vma_count); - vma_entry = drm_alloc(sizeof(*vma_entry), DRM_MEM_VMAS); + vma_entry = drm_ctl_alloc(sizeof(*vma_entry), DRM_MEM_VMAS); if (vma_entry) { vma_entry->vma = vma; vma_entry->pid = current->pid; @@ -453,7 +460,7 @@ static void drm_vm_close(struct vm_area_struct *vma) list_for_each_entry_safe(pt, temp, &dev->vmalist, head) { if (pt->vma == vma) { list_del(&pt->head); - drm_free(pt, sizeof(*pt), DRM_MEM_VMAS); + drm_ctl_free(pt, sizeof(*pt), DRM_MEM_VMAS); break; } } @@ -651,6 +658,8 @@ static int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma) vma->vm_private_data = (void *)map; vma->vm_flags |= VM_RESERVED; break; + case _DRM_TTM: + return drm_bo_mmap_locked(vma, filp, map); 
default: return -EINVAL; /* This should never happen. */ } @@ -674,3 +683,189 @@ int drm_mmap(struct file *filp, struct vm_area_struct *vma) return ret; } EXPORT_SYMBOL(drm_mmap); + +/** + * buffer object vm functions. + */ + +/** + * \c Pagefault method for buffer objects. + * + * \param vma Virtual memory area. + * \param address File offset. + * \return Error or refault. The pfn is manually inserted. + * + * It's important that pfns are inserted while holding the bo->mutex lock. + * otherwise we might race with unmap_mapping_range() which is always + * called with the bo->mutex lock held. + * + * We're modifying the page attribute bits of the vma->vm_page_prot field, + * without holding the mmap_sem in write mode. Only in read mode. + * These bits are not used by the mm subsystem code, and we consider them + * protected by the bo->mutex lock. + */ + +static unsigned long drm_bo_vm_nopfn(struct vm_area_struct *vma, + unsigned long address) +{ + struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; + unsigned long page_offset; + struct page *page = NULL; + struct drm_ttm *ttm; + struct drm_device *dev; + unsigned long pfn; + int err; + unsigned long bus_base; + unsigned long bus_offset; + unsigned long bus_size; + unsigned long ret = NOPFN_REFAULT; + + if (address > vma->vm_end) + return NOPFN_SIGBUS; + + dev = bo->dev; + err = drm_bo_read_lock(&dev->bm.bm_lock); + if (err) + return NOPFN_REFAULT; + + err = mutex_lock_interruptible(&bo->mutex); + if (err) { + drm_bo_read_unlock(&dev->bm.bm_lock); + return NOPFN_REFAULT; + } + + err = drm_bo_wait(bo, 0, 0, 0); + if (err) { + ret = (err != -EAGAIN) ? NOPFN_SIGBUS : NOPFN_REFAULT; + goto out_unlock; + } + + /* + * If buffer happens to be in a non-mappable location, + * move it to a mappable. + */ + + if (!(bo->mem.flags & DRM_BO_FLAG_MAPPABLE)) { + uint32_t new_flags = bo->mem.proposed_flags | + DRM_BO_FLAG_MAPPABLE | + DRM_BO_FLAG_FORCE_MAPPABLE; + err = drm_bo_move_buffer(bo, new_flags, 0, 0); + if (err) { + ret = (err != -EAGAIN) ? NOPFN_SIGBUS : NOPFN_REFAULT; + goto out_unlock; + } + } + + err = drm_bo_pci_offset(dev, &bo->mem, &bus_base, &bus_offset, + &bus_size); + + if (err) { + ret = NOPFN_SIGBUS; + goto out_unlock; + } + + page_offset = (address - vma->vm_start) >> PAGE_SHIFT; + + if (bus_size) { + struct drm_mem_type_manager *man = &dev->bm.man[bo->mem.mem_type]; + + pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) + page_offset; + vma->vm_page_prot = drm_io_prot(man->drm_bus_maptype, vma); + } else { + ttm = bo->ttm; + + drm_ttm_fixup_caching(ttm); + page = drm_ttm_get_page(ttm, page_offset); + if (!page) { + ret = NOPFN_OOM; + goto out_unlock; + } + pfn = page_to_pfn(page); + vma->vm_page_prot = (bo->mem.flags & DRM_BO_FLAG_CACHED) ? + vm_get_page_prot(vma->vm_flags) : + drm_io_prot(_DRM_TTM, vma); + } + + err = vm_insert_pfn(vma, address, pfn); + if (err) { + ret = (err != -EAGAIN) ? NOPFN_OOM : NOPFN_REFAULT; + goto out_unlock; + } +out_unlock: + mutex_unlock(&bo->mutex); + drm_bo_read_unlock(&dev->bm.bm_lock); + return ret; +} + +static void drm_bo_vm_open_locked(struct vm_area_struct *vma) +{ + struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; + + drm_vm_open_locked(vma); + atomic_inc(&bo->usage); +} + +/** + * \c vma open method for buffer objects. + * + * \param vma virtual memory area. 
+ */ + +static void drm_bo_vm_open(struct vm_area_struct *vma) +{ + struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; + struct drm_device *dev = bo->dev; + + mutex_lock(&dev->struct_mutex); + drm_bo_vm_open_locked(vma); + mutex_unlock(&dev->struct_mutex); +} + +/** + * \c vma close method for buffer objects. + * + * \param vma virtual memory area. + */ + +static void drm_bo_vm_close(struct vm_area_struct *vma) +{ + struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; + struct drm_device *dev = bo->dev; + + drm_vm_close(vma); + if (bo) { + mutex_lock(&dev->struct_mutex); + drm_bo_usage_deref_locked((struct drm_buffer_object **) + &vma->vm_private_data); + mutex_unlock(&dev->struct_mutex); + } + return; +} + +static struct vm_operations_struct drm_bo_vm_ops = { + .nopfn = drm_bo_vm_nopfn, + .open = drm_bo_vm_open, + .close = drm_bo_vm_close, +}; + +/** + * mmap buffer object memory. + * + * \param vma virtual memory area. + * \param file_priv DRM file private. + * \param map The buffer object drm map. + * \return zero on success or a negative number on failure. + */ + +int drm_bo_mmap_locked(struct vm_area_struct *vma, + struct file *filp, + drm_local_map_t *map) +{ + vma->vm_ops = &drm_bo_vm_ops; + vma->vm_private_data = map->handle; + vma->vm_file = filp; + vma->vm_flags |= VM_RESERVED | VM_IO; + vma->vm_flags |= VM_PFNMAP; + drm_bo_vm_open_locked(vma); + return 0; +} diff --git a/drivers/char/drm/i810_dma.c b/drivers/char/drm/i810_dma.c index eb381a7..8d7ea81 100644 --- a/drivers/char/drm/i810_dma.c +++ b/drivers/char/drm/i810_dma.c @@ -40,7 +40,7 @@ #define I810_BUF_FREE 2 #define I810_BUF_CLIENT 1 -#define I810_BUF_HARDWARE 0 +#define I810_BUF_HARDWARE 0 #define I810_BUF_UNMAPPED 0 #define I810_BUF_MAPPED 1 @@ -570,7 +570,7 @@ static void i810EmitState(struct drm_device * dev) drm_i810_sarea_t *sarea_priv = dev_priv->sarea_priv; unsigned int dirty = sarea_priv->dirty; - DRM_DEBUG("%s %x\n", __FUNCTION__, dirty); + DRM_DEBUG("%x\n", dirty); if (dirty & I810_UPLOAD_BUFFERS) { i810EmitDestVerified(dev, sarea_priv->BufferState); @@ -802,8 +802,7 @@ static void i810_dma_dispatch_flip(struct drm_device * dev) int pitch = dev_priv->pitch; RING_LOCALS; - DRM_DEBUG("%s: page=%d pfCurrentPage=%d\n", - __FUNCTION__, + DRM_DEBUG("page=%d pfCurrentPage=%d\n", dev_priv->current_page, dev_priv->sarea_priv->pf_current_page); @@ -848,8 +847,6 @@ static void i810_dma_quiescent(struct drm_device * dev) drm_i810_private_t *dev_priv = dev->dev_private; RING_LOCALS; -/* printk("%s\n", __FUNCTION__); */ - i810_kernel_lost_context(dev); BEGIN_LP_RING(4); @@ -869,8 +866,6 @@ static int i810_flush_queue(struct drm_device * dev) int i, ret = 0; RING_LOCALS; -/* printk("%s\n", __FUNCTION__); */ - i810_kernel_lost_context(dev); BEGIN_LP_RING(2); @@ -949,7 +944,7 @@ static int i810_dma_vertex(struct drm_device *dev, void *data, LOCK_TEST_WITH_RETURN(dev, file_priv); - DRM_DEBUG("i810 dma vertex, idx %d used %d discard %d\n", + DRM_DEBUG("idx %d used %d discard %d\n", vertex->idx, vertex->used, vertex->discard); if (vertex->idx < 0 || vertex->idx > dma->buf_count) @@ -987,7 +982,7 @@ static int i810_clear_bufs(struct drm_device *dev, void *data, static int i810_swap_bufs(struct drm_device *dev, void *data, struct drm_file *file_priv) { - DRM_DEBUG("i810_swap_bufs\n"); + DRM_DEBUG("\n"); LOCK_TEST_WITH_RETURN(dev, file_priv); @@ -1068,11 +1063,10 @@ static void i810_dma_dispatch_mc(struct drm_device * dev, struct drm_buf * buf, sarea_priv->dirty = 0x7f; - 
DRM_DEBUG("dispatch mc addr 0x%lx, used 0x%x\n", address, used); + DRM_DEBUG("addr 0x%lx, used 0x%x\n", address, used); dev_priv->counter++; DRM_DEBUG("dispatch counter : %ld\n", dev_priv->counter); - DRM_DEBUG("i810_dma_dispatch_mc\n"); DRM_DEBUG("start : %lx\n", start); DRM_DEBUG("used : %d\n", used); DRM_DEBUG("start + used - 4 : %ld\n", start + used - 4); @@ -1179,7 +1173,7 @@ static void i810_do_init_pageflip(struct drm_device * dev) { drm_i810_private_t *dev_priv = dev->dev_private; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); dev_priv->page_flipping = 1; dev_priv->current_page = 0; dev_priv->sarea_priv->pf_current_page = dev_priv->current_page; @@ -1189,7 +1183,7 @@ static int i810_do_cleanup_pageflip(struct drm_device * dev) { drm_i810_private_t *dev_priv = dev->dev_private; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); if (dev_priv->current_page != 0) i810_dma_dispatch_flip(dev); @@ -1202,7 +1196,7 @@ static int i810_flip_bufs(struct drm_device *dev, void *data, { drm_i810_private_t *dev_priv = dev->dev_private; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); LOCK_TEST_WITH_RETURN(dev, file_priv); diff --git a/drivers/char/drm/i810_drv.h b/drivers/char/drm/i810_drv.h index 0af4587..0118849 100644 --- a/drivers/char/drm/i810_drv.h +++ b/drivers/char/drm/i810_drv.h @@ -25,7 +25,7 @@ * DEALINGS IN THE SOFTWARE. * * Authors: Rickard E. (Rik) Faith - * Jeff Hartmann + * Jeff Hartmann * */ @@ -134,7 +134,7 @@ extern int i810_max_ioctl; #define I810_ADDR(reg) (I810_BASE(reg) + reg) #define I810_DEREF(reg) *(__volatile__ int *)I810_ADDR(reg) #define I810_READ(reg) I810_DEREF(reg) -#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0) +#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0) #define I810_DEREF16(reg) *(__volatile__ u16 *)I810_ADDR(reg) #define I810_READ16(reg) I810_DEREF16(reg) #define I810_WRITE16(reg,val) do { I810_DEREF16(reg) = val; } while (0) @@ -145,7 +145,7 @@ extern int i810_max_ioctl; #define BEGIN_LP_RING(n) do { \ if (I810_VERBOSE) \ - DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", n, __FUNCTION__); \ + DRM_DEBUG("BEGIN_LP_RING(%d)\n", n); \ if (dev_priv->ring.space < n*4) \ i810_wait_ring(dev, n*4); \ dev_priv->ring.space -= n*4; \ @@ -155,19 +155,19 @@ extern int i810_max_ioctl; } while (0) #define ADVANCE_LP_RING() do { \ - if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \ - dev_priv->ring.tail = outring; \ + if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \ + dev_priv->ring.tail = outring; \ I810_WRITE(LP_RING + RING_TAIL, outring); \ } while(0) -#define OUT_RING(n) do { \ +#define OUT_RING(n) do { \ if (I810_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \ *(volatile unsigned int *)(virt + outring) = n; \ outring += 4; \ outring &= ringmask; \ } while (0) -#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) +#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) #define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23)) #define CMD_REPORT_HEAD (7<<23) #define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1) @@ -184,28 +184,28 @@ extern int i810_max_ioctl; #define I810REG_HWSTAM 0x02098 #define I810REG_INT_IDENTITY_R 0x020a4 -#define I810REG_INT_MASK_R 0x020a8 +#define I810REG_INT_MASK_R 0x020a8 #define I810REG_INT_ENABLE_R 0x020a0 -#define LP_RING 0x2030 -#define HP_RING 0x2040 -#define RING_TAIL 0x00 +#define LP_RING 0x2030 +#define HP_RING 0x2040 +#define RING_TAIL 0x00 #define TAIL_ADDR 0x000FFFF8 -#define RING_HEAD 0x04 -#define HEAD_WRAP_COUNT 0xFFE00000 -#define HEAD_WRAP_ONE 0x00200000 -#define HEAD_ADDR 0x001FFFFC -#define 
RING_START 0x08 -#define START_ADDR 0x00FFFFF8 -#define RING_LEN 0x0C -#define RING_NR_PAGES 0x000FF000 -#define RING_REPORT_MASK 0x00000006 -#define RING_REPORT_64K 0x00000002 -#define RING_REPORT_128K 0x00000004 -#define RING_NO_REPORT 0x00000000 -#define RING_VALID_MASK 0x00000001 -#define RING_VALID 0x00000001 -#define RING_INVALID 0x00000000 +#define RING_HEAD 0x04 +#define HEAD_WRAP_COUNT 0xFFE00000 +#define HEAD_WRAP_ONE 0x00200000 +#define HEAD_ADDR 0x001FFFFC +#define RING_START 0x08 +#define START_ADDR 0x00FFFFF8 +#define RING_LEN 0x0C +#define RING_NR_PAGES 0x000FF000 +#define RING_REPORT_MASK 0x00000006 +#define RING_REPORT_64K 0x00000002 +#define RING_REPORT_128K 0x00000004 +#define RING_NO_REPORT 0x00000000 +#define RING_VALID_MASK 0x00000001 +#define RING_VALID 0x00000001 +#define RING_INVALID 0x00000000 #define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19)) #define SC_UPDATE_SCISSOR (0x1<<1) diff --git a/drivers/char/drm/i830_dma.c b/drivers/char/drm/i830_dma.c index 69a363e..379cbda 100644 --- a/drivers/char/drm/i830_dma.c +++ b/drivers/char/drm/i830_dma.c @@ -42,7 +42,7 @@ #define I830_BUF_FREE 2 #define I830_BUF_CLIENT 1 -#define I830_BUF_HARDWARE 0 +#define I830_BUF_HARDWARE 0 #define I830_BUF_UNMAPPED 0 #define I830_BUF_MAPPED 1 diff --git a/drivers/char/drm/i830_drm.h b/drivers/char/drm/i830_drm.h index 968a6d9..4b00d2d 100644 --- a/drivers/char/drm/i830_drm.h +++ b/drivers/char/drm/i830_drm.h @@ -12,9 +12,9 @@ #define _I830_DEFINES_ #define I830_DMA_BUF_ORDER 12 -#define I830_DMA_BUF_SZ (1< - * Jeff Hartmann + * Jeff Hartmann * */ @@ -156,8 +156,7 @@ extern int i830_driver_device_is_agp(struct drm_device * dev); #define BEGIN_LP_RING(n) do { \ if (I830_VERBOSE) \ - printk("BEGIN_LP_RING(%d) in %s\n", \ - n, __FUNCTION__); \ + printk("BEGIN_LP_RING(%d)\n", (n)); \ if (dev_priv->ring.space < n*4) \ i830_wait_ring(dev, n*4, __FUNCTION__); \ outcount = 0; \ @@ -183,7 +182,7 @@ extern int i830_driver_device_is_agp(struct drm_device * dev); extern int i830_wait_ring(struct drm_device * dev, int n, const char *caller); -#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) +#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) #define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23)) #define CMD_REPORT_HEAD (7<<23) #define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1) @@ -203,30 +202,30 @@ extern int i830_wait_ring(struct drm_device * dev, int n, const char *caller); #define I830REG_HWSTAM 0x02098 #define I830REG_INT_IDENTITY_R 0x020a4 -#define I830REG_INT_MASK_R 0x020a8 +#define I830REG_INT_MASK_R 0x020a8 #define I830REG_INT_ENABLE_R 0x020a0 #define I830_IRQ_RESERVED ((1<<13)|(3<<2)) -#define LP_RING 0x2030 -#define HP_RING 0x2040 -#define RING_TAIL 0x00 +#define LP_RING 0x2030 +#define HP_RING 0x2040 +#define RING_TAIL 0x00 #define TAIL_ADDR 0x001FFFF8 -#define RING_HEAD 0x04 -#define HEAD_WRAP_COUNT 0xFFE00000 -#define HEAD_WRAP_ONE 0x00200000 -#define HEAD_ADDR 0x001FFFFC -#define RING_START 0x08 -#define START_ADDR 0x0xFFFFF000 -#define RING_LEN 0x0C -#define RING_NR_PAGES 0x001FF000 -#define RING_REPORT_MASK 0x00000006 -#define RING_REPORT_64K 0x00000002 -#define RING_REPORT_128K 0x00000004 -#define RING_NO_REPORT 0x00000000 -#define RING_VALID_MASK 0x00000001 -#define RING_VALID 0x00000001 -#define RING_INVALID 0x00000000 +#define RING_HEAD 0x04 +#define HEAD_WRAP_COUNT 0xFFE00000 +#define HEAD_WRAP_ONE 0x00200000 +#define HEAD_ADDR 0x001FFFFC +#define RING_START 0x08 +#define START_ADDR 0x0xFFFFF000 +#define RING_LEN 0x0C +#define RING_NR_PAGES 0x001FF000 +#define RING_REPORT_MASK 
0x00000006 +#define RING_REPORT_64K 0x00000002 +#define RING_REPORT_128K 0x00000004 +#define RING_NO_REPORT 0x00000000 +#define RING_VALID_MASK 0x00000001 +#define RING_VALID 0x00000001 +#define RING_INVALID 0x00000000 #define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19)) #define SC_UPDATE_SCISSOR (0x1<<1) @@ -279,9 +278,9 @@ extern int i830_wait_ring(struct drm_device * dev, int n, const char *caller); #define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21) #define XY_SRC_COPY_BLT_WRITE_RGB (1<<20) -#define MI_BATCH_BUFFER ((0x30<<23)|1) -#define MI_BATCH_BUFFER_START (0x31<<23) -#define MI_BATCH_BUFFER_END (0xA<<23) +#define MI_BATCH_BUFFER ((0x30<<23)|1) +#define MI_BATCH_BUFFER_START (0x31<<23) +#define MI_BATCH_BUFFER_END (0xA<<23) #define MI_BATCH_NON_SECURE (1) #define MI_WAIT_FOR_EVENT ((0x3<<23)) diff --git a/drivers/char/drm/i830_irq.c b/drivers/char/drm/i830_irq.c index 76403f4..a33db5f 100644 --- a/drivers/char/drm/i830_irq.c +++ b/drivers/char/drm/i830_irq.c @@ -144,7 +144,7 @@ int i830_irq_wait(struct drm_device *dev, void *data, struct drm_file *file_priv) { drm_i830_private_t *dev_priv = dev->dev_private; - drm_i830_irq_wait_t *irqwait = data; + drm_i830_irq_wait_t *irqwait = data; if (!dev_priv) { DRM_ERROR("%s called with no initialization\n", __FUNCTION__); diff --git a/drivers/char/drm/i915_buffer.c b/drivers/char/drm/i915_buffer.c new file mode 100644 index 0000000..632c0ad --- /dev/null +++ b/drivers/char/drm/i915_buffer.c @@ -0,0 +1,195 @@ +/************************************************************************** + * + * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. + * + * + **************************************************************************/ +/* + * Authors: Thomas Hellström + */ + +#include "drmP.h" +#include "i915_drm.h" +#include "i915_drv.h" + +struct drm_ttm_backend *i915_create_ttm_backend_entry(struct drm_device *dev) +{ + return drm_agp_init_ttm(dev); +} + +int i915_fence_type(struct drm_buffer_object *bo, + uint32_t *fclass, + uint32_t *type) +{ + if (bo->mem.proposed_flags & (DRM_BO_FLAG_READ | DRM_BO_FLAG_WRITE)) + *type = 3; + else + *type = 1; + return 0; +} + +int i915_invalidate_caches(struct drm_device *dev, uint64_t flags) +{ + /* + * FIXME: Only emit once per batchbuffer submission. 
+ */ + + uint32_t flush_cmd = MI_NO_WRITE_FLUSH; + + if (flags & DRM_BO_FLAG_READ) + flush_cmd |= MI_READ_FLUSH; + if (flags & DRM_BO_FLAG_EXE) + flush_cmd |= MI_EXE_FLUSH; + + return i915_emit_mi_flush(dev, flush_cmd); +} + +int i915_init_mem_type(struct drm_device *dev, uint32_t type, + struct drm_mem_type_manager *man) +{ + switch (type) { + case DRM_BO_MEM_LOCAL: + man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | + _DRM_FLAG_MEMTYPE_CACHED; + man->drm_bus_maptype = 0; + man->gpu_offset = 0; + break; + case DRM_BO_MEM_TT: + if (!(drm_core_has_AGP(dev) && dev->agp)) { + DRM_ERROR("AGP is not enabled for memory type %u\n", + (unsigned)type); + return -EINVAL; + } + man->io_offset = dev->agp->agp_info.aper_base; + man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024; + man->io_addr = NULL; + man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | + _DRM_FLAG_MEMTYPE_CSELECT | _DRM_FLAG_NEEDS_IOREMAP; + man->drm_bus_maptype = _DRM_AGP; + man->gpu_offset = 0; + break; + case DRM_BO_MEM_PRIV0: + if (!(drm_core_has_AGP(dev) && dev->agp)) { + DRM_ERROR("AGP is not enabled for memory type %u\n", + (unsigned)type); + return -EINVAL; + } + man->io_offset = dev->agp->agp_info.aper_base; + man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024; + man->io_addr = NULL; + man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | + _DRM_FLAG_MEMTYPE_FIXED | _DRM_FLAG_NEEDS_IOREMAP; + man->drm_bus_maptype = _DRM_AGP; + man->gpu_offset = 0; + break; + default: + DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); + return -EINVAL; + } + return 0; +} + +/* + * i915_evict_flags: + * + * @bo: the buffer object to be evicted + * + * Return the bo flags for a buffer which is not mapped to the hardware. + * These will be placed in proposed_flags so that when the move is + * finished, they'll end up in bo->mem.flags + */ +uint64_t i915_evict_flags(struct drm_buffer_object *bo) +{ + switch (bo->mem.mem_type) { + case DRM_BO_MEM_LOCAL: + case DRM_BO_MEM_TT: + return DRM_BO_FLAG_MEM_LOCAL; + default: + return DRM_BO_FLAG_MEM_TT | DRM_BO_FLAG_CACHED; + } +} + + +/* + * Disable i915_move_flip for now, since we can't guarantee that the hardware lock + * is held here. To re-enable we need to make sure either + * a) The X server is using DRM to submit commands to the ring, or + * b) DRM can use the HP ring for these blits. This means i915 needs to + * implement a new ring submission mechanism and fence class. 
+ */ + +int i915_move(struct drm_buffer_object *bo, + int evict, int no_wait, struct drm_bo_mem_reg *new_mem) +{ + struct drm_bo_mem_reg *old_mem = &bo->mem; + + if (old_mem->mem_type == DRM_BO_MEM_LOCAL) { + return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); + } else if (new_mem->mem_type == DRM_BO_MEM_LOCAL) { + if (0 /*i915_move_flip(bo, evict, no_wait, new_mem)*/) + return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); + } else { + if (0 /*i915_move_blit(bo, evict, no_wait, new_mem)*/) + return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); + } + return 0; +} + + +static inline void drm_cache_flush_addr(void *virt) +{ + int i; + + for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size) + clflush(virt+i); +} + +static inline void drm_cache_flush_page(struct page *p) +{ + drm_cache_flush_addr(page_address(p)); +} + +void i915_flush_ttm(struct drm_ttm *ttm) +{ + int i; + + if (!ttm) + return; + + DRM_MEMORYBARRIER(); + +#ifdef CONFIG_X86_32 + /* Hopefully nobody has built an x86-64 processor without clflush */ + if (!cpu_has_clflush) { + wbinvd(); + DRM_MEMORYBARRIER(); + return; + } +#endif + + for (i = ttm->num_pages - 1; i >= 0; i--) + drm_cache_flush_page(drm_ttm_get_page(ttm, i)); + + DRM_MEMORYBARRIER(); +} diff --git a/drivers/char/drm/i915_dma.c b/drivers/char/drm/i915_dma.c index e61a43e..0ac7fd3 100644 --- a/drivers/char/drm/i915_dma.c +++ b/drivers/char/drm/i915_dma.c @@ -31,23 +31,12 @@ #include "i915_drm.h" #include "i915_drv.h" -#define IS_I965G(dev) (dev->pci_device == 0x2972 || \ - dev->pci_device == 0x2982 || \ - dev->pci_device == 0x2992 || \ - dev->pci_device == 0x29A2 || \ - dev->pci_device == 0x2A02 || \ - dev->pci_device == 0x2A12) - -#define IS_G33(dev) (dev->pci_device == 0x29b2 || \ - dev->pci_device == 0x29c2 || \ - dev->pci_device == 0x29d2) - /* Really want an OS-independent resettable timer. Would like to have * this loop run for (eg) 3 sec, but have the timer reset every time * the head pointer changes, so that EBUSY only happens if the ring * actually stalls for (eg) 3 seconds. */ -int i915_wait_ring(struct drm_device * dev, int n, const char *caller) +int i915_wait_ring(struct drm_device *dev, int n, const char *caller) { drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_ring_buffer_t *ring = &(dev_priv->ring); @@ -73,7 +62,7 @@ int i915_wait_ring(struct drm_device * dev, int n, const char *caller) return -EBUSY; } -void i915_kernel_lost_context(struct drm_device * dev) +void i915_kernel_lost_context(struct drm_device *dev) { drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_ring_buffer_t *ring = &(dev_priv->ring); @@ -88,8 +77,9 @@ void i915_kernel_lost_context(struct drm_device * dev) dev_priv->sarea_priv->perf_boxes |= I915_BOX_RING_EMPTY; } -static int i915_dma_cleanup(struct drm_device * dev) +static int i915_dma_cleanup(struct drm_device *dev) { + drm_i915_private_t *dev_priv = dev->dev_private; /* Make sure interrupts are disabled here because the uninstall ioctl * may not have been called from userspace and after dev_private * is freed, it's too late. 
@@ -97,57 +87,49 @@ static int i915_dma_cleanup(struct drm_device * dev) if (dev->irq) drm_irq_uninstall(dev); - if (dev->dev_private) { - drm_i915_private_t *dev_priv = - (drm_i915_private_t *) dev->dev_private; - - if (dev_priv->ring.virtual_start) { - drm_core_ioremapfree(&dev_priv->ring.map, dev); - } - - if (dev_priv->status_page_dmah) { - drm_pci_free(dev, dev_priv->status_page_dmah); - /* Need to rewrite hardware status page */ - I915_WRITE(0x02080, 0x1ffff000); - } - - if (dev_priv->status_gfx_addr) { - dev_priv->status_gfx_addr = 0; - drm_core_ioremapfree(&dev_priv->hws_map, dev); - I915_WRITE(0x2080, 0x1ffff000); - } + if (dev_priv->ring.virtual_start) { + drm_core_ioremapfree(&dev_priv->ring.map, dev); + dev_priv->ring.virtual_start = 0; + dev_priv->ring.map.handle = 0; + dev_priv->ring.map.size = 0; + } - drm_free(dev->dev_private, sizeof(drm_i915_private_t), - DRM_MEM_DRIVER); + if (dev_priv->status_page_dmah) { + drm_pci_free(dev, dev_priv->status_page_dmah); + dev_priv->status_page_dmah = NULL; + /* Need to rewrite hardware status page */ + I915_WRITE(0x02080, 0x1ffff000); + } - dev->dev_private = NULL; + if (dev_priv->status_gfx_addr) { + dev_priv->status_gfx_addr = 0; + drm_core_ioremapfree(&dev_priv->hws_map, dev); + I915_WRITE(0x2080, 0x1ffff000); } return 0; } -static int i915_initialize(struct drm_device * dev, - drm_i915_private_t * dev_priv, - drm_i915_init_t * init) +static int i915_initialize(struct drm_device *dev, drm_i915_init_t * init) { - memset(dev_priv, 0, sizeof(drm_i915_private_t)); + drm_i915_private_t *dev_priv = dev->dev_private; dev_priv->sarea = drm_getsarea(dev); if (!dev_priv->sarea) { DRM_ERROR("can not find sarea!\n"); - dev->dev_private = (void *)dev_priv; i915_dma_cleanup(dev); return -EINVAL; } dev_priv->mmio_map = drm_core_findmap(dev, init->mmio_offset); if (!dev_priv->mmio_map) { - dev->dev_private = (void *)dev_priv; i915_dma_cleanup(dev); DRM_ERROR("can not find mmio map!\n"); return -EINVAL; } + dev_priv->max_validate_buffers = I915_MAX_VALIDATE_BUFFERS; + dev_priv->sarea_priv = (drm_i915_sarea_t *) ((u8 *) dev_priv->sarea->handle + init->sarea_priv_offset); @@ -165,7 +147,6 @@ static int i915_initialize(struct drm_device * dev, drm_core_ioremap(&dev_priv->ring.map, dev); if (dev_priv->ring.map.handle == NULL) { - dev->dev_private = (void *)dev_priv; i915_dma_cleanup(dev); DRM_ERROR("can not ioremap virtual address for" " ring buffer\n"); @@ -175,10 +156,7 @@ static int i915_initialize(struct drm_device * dev, dev_priv->ring.virtual_start = dev_priv->ring.map.handle; dev_priv->cpp = init->cpp; - dev_priv->back_offset = init->back_offset; - dev_priv->front_offset = init->front_offset; - dev_priv->current_page = 0; - dev_priv->sarea_priv->pf_current_page = dev_priv->current_page; + dev_priv->sarea_priv->pf_current_page = 0; /* We are using separate values as placeholders for mechanisms for * private backbuffer/depthbuffer usage. 
@@ -191,13 +169,16 @@ static int i915_initialize(struct drm_device * dev, */ dev_priv->allow_batchbuffer = 1; + /* Enable vblank on pipe A for older X servers + */ + dev_priv->vblank_pipe = DRM_I915_VBLANK_PIPE_A; + /* Program Hardware Status Page */ if (!IS_G33(dev)) { dev_priv->status_page_dmah = drm_pci_alloc(dev, PAGE_SIZE, PAGE_SIZE, 0xffffffff); if (!dev_priv->status_page_dmah) { - dev->dev_private = (void *)dev_priv; i915_dma_cleanup(dev); DRM_ERROR("Can not allocate hardware status page\n"); return -ENOMEM; @@ -209,11 +190,11 @@ static int i915_initialize(struct drm_device * dev, I915_WRITE(0x02080, dev_priv->dma_status_page); } DRM_DEBUG("Enabled hardware status page\n"); - dev->dev_private = (void *)dev_priv; + mutex_init(&dev_priv->cmdbuf_mutex); return 0; } -static int i915_dma_resume(struct drm_device * dev) +static int i915_dma_resume(struct drm_device *dev) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; @@ -254,17 +235,12 @@ static int i915_dma_resume(struct drm_device * dev) static int i915_dma_init(struct drm_device *dev, void *data, struct drm_file *file_priv) { - drm_i915_private_t *dev_priv; drm_i915_init_t *init = data; int retcode = 0; switch (init->func) { case I915_INIT_DMA: - dev_priv = drm_alloc(sizeof(drm_i915_private_t), - DRM_MEM_DRIVER); - if (dev_priv == NULL) - return -ENOMEM; - retcode = i915_initialize(dev, dev_priv, init); + retcode = i915_initialize(dev, init); break; case I915_CLEANUP_DMA: retcode = i915_dma_cleanup(dev); @@ -351,12 +327,12 @@ static int validate_cmd(int cmd) { int ret = do_validate_cmd(cmd); -/* printk("validate_cmd( %x ): %d\n", cmd, ret); */ +/* printk("validate_cmd( %x ): %d\n", cmd, ret); */ return ret; } -static int i915_emit_cmds(struct drm_device * dev, int __user * buffer, int dwords) +static int i915_emit_cmds(struct drm_device *dev, int __user * buffer, int dwords) { drm_i915_private_t *dev_priv = dev->dev_private; int i; @@ -395,7 +371,7 @@ static int i915_emit_cmds(struct drm_device * dev, int __user * buffer, int dwor return 0; } -static int i915_emit_box(struct drm_device * dev, +static int i915_emit_box(struct drm_device *dev, struct drm_clip_rect __user * boxes, int i, int DR1, int DR4) { @@ -438,15 +414,17 @@ static int i915_emit_box(struct drm_device * dev, * emit. 
For now, do it in both places: */ -static void i915_emit_breadcrumb(struct drm_device *dev) +void i915_emit_breadcrumb(struct drm_device *dev) { drm_i915_private_t *dev_priv = dev->dev_private; RING_LOCALS; - dev_priv->sarea_priv->last_enqueue = ++dev_priv->counter; + if (++dev_priv->counter > BREADCRUMB_MASK) { + dev_priv->counter = 1; + DRM_DEBUG("Breadcrumb counter wrapped around\n"); + } - if (dev_priv->counter > 0x7FFFFFFFUL) - dev_priv->sarea_priv->last_enqueue = dev_priv->counter = 1; + dev_priv->sarea_priv->last_enqueue = dev_priv->counter; BEGIN_LP_RING(4); OUT_RING(CMD_STORE_DWORD_IDX); @@ -456,9 +434,32 @@ static void i915_emit_breadcrumb(struct drm_device *dev) ADVANCE_LP_RING(); } -static int i915_dispatch_cmdbuffer(struct drm_device * dev, + +int i915_emit_mi_flush(struct drm_device *dev, uint32_t flush) +{ + drm_i915_private_t *dev_priv = dev->dev_private; + uint32_t flush_cmd = CMD_MI_FLUSH; + RING_LOCALS; + + flush_cmd |= flush; + + i915_kernel_lost_context(dev); + + BEGIN_LP_RING(4); + OUT_RING(flush_cmd); + OUT_RING(0); + OUT_RING(0); + OUT_RING(0); + ADVANCE_LP_RING(); + + return 0; +} + + +static int i915_dispatch_cmdbuffer(struct drm_device *dev, drm_i915_cmdbuffer_t * cmd) { + drm_i915_private_t *dev_priv = dev->dev_private; int nbox = cmd->num_cliprects; int i = 0, count, ret; @@ -485,10 +486,11 @@ static int i915_dispatch_cmdbuffer(struct drm_device * dev, } i915_emit_breadcrumb(dev); + drm_fence_flush_old(dev, 0, dev_priv->counter); return 0; } -static int i915_dispatch_batchbuffer(struct drm_device * dev, +static int i915_dispatch_batchbuffer(struct drm_device *dev, drm_i915_batchbuffer_t * batch) { drm_i915_private_t *dev_priv = dev->dev_private; @@ -535,59 +537,83 @@ static int i915_dispatch_batchbuffer(struct drm_device * dev, } i915_emit_breadcrumb(dev); - + drm_fence_flush_old(dev, 0, dev_priv->counter); return 0; } -static int i915_dispatch_flip(struct drm_device * dev) +static void i915_do_dispatch_flip(struct drm_device *dev, int plane, int sync) { drm_i915_private_t *dev_priv = dev->dev_private; + u32 num_pages, current_page, next_page, dspbase; + int shift = 2 * plane, x, y; RING_LOCALS; - DRM_DEBUG("%s: page=%d pfCurrentPage=%d\n", - __FUNCTION__, - dev_priv->current_page, - dev_priv->sarea_priv->pf_current_page); - - i915_kernel_lost_context(dev); + /* Calculate display base offset */ + num_pages = dev_priv->sarea_priv->third_handle ? 
3 : 2; + current_page = (dev_priv->sarea_priv->pf_current_page >> shift) & 0x3; + next_page = (current_page + 1) % num_pages; - BEGIN_LP_RING(2); - OUT_RING(INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE); - OUT_RING(0); - ADVANCE_LP_RING(); + switch (next_page) { + default: + case 0: + dspbase = dev_priv->sarea_priv->front_offset; + break; + case 1: + dspbase = dev_priv->sarea_priv->back_offset; + break; + case 2: + dspbase = dev_priv->sarea_priv->third_offset; + break; + } - BEGIN_LP_RING(6); - OUT_RING(CMD_OP_DISPLAYBUFFER_INFO | ASYNC_FLIP); - OUT_RING(0); - if (dev_priv->current_page == 0) { - OUT_RING(dev_priv->back_offset); - dev_priv->current_page = 1; + if (plane == 0) { + x = dev_priv->sarea_priv->planeA_x; + y = dev_priv->sarea_priv->planeA_y; } else { - OUT_RING(dev_priv->front_offset); - dev_priv->current_page = 0; + x = dev_priv->sarea_priv->planeB_x; + y = dev_priv->sarea_priv->planeB_y; } - OUT_RING(0); - ADVANCE_LP_RING(); - BEGIN_LP_RING(2); - OUT_RING(MI_WAIT_FOR_EVENT | MI_WAIT_FOR_PLANE_A_FLIP); - OUT_RING(0); - ADVANCE_LP_RING(); + dspbase += (y * dev_priv->sarea_priv->pitch + x) * dev_priv->cpp; - dev_priv->sarea_priv->last_enqueue = dev_priv->counter++; + DRM_DEBUG("plane=%d current_page=%d dspbase=0x%x\n", plane, current_page, + dspbase); BEGIN_LP_RING(4); - OUT_RING(CMD_STORE_DWORD_IDX); - OUT_RING(20); - OUT_RING(dev_priv->counter); - OUT_RING(0); + OUT_RING(sync ? 0 : + (MI_WAIT_FOR_EVENT | (plane ? MI_WAIT_FOR_PLANE_B_FLIP : + MI_WAIT_FOR_PLANE_A_FLIP))); + OUT_RING(CMD_OP_DISPLAYBUFFER_INFO | (sync ? 0 : ASYNC_FLIP) | + (plane ? DISPLAY_PLANE_B : DISPLAY_PLANE_A)); + OUT_RING(dev_priv->sarea_priv->pitch * dev_priv->cpp); + OUT_RING(dspbase); ADVANCE_LP_RING(); - dev_priv->sarea_priv->pf_current_page = dev_priv->current_page; - return 0; + dev_priv->sarea_priv->pf_current_page &= ~(0x3 << shift); + dev_priv->sarea_priv->pf_current_page |= next_page << shift; } -static int i915_quiescent(struct drm_device * dev) +void i915_dispatch_flip(struct drm_device *dev, int planes, int sync) +{ + drm_i915_private_t *dev_priv = dev->dev_private; + int i; + + DRM_DEBUG("%s: planes=0x%x pfCurrentPage=%d\n", + __FUNCTION__, + planes, dev_priv->sarea_priv->pf_current_page); + + i915_emit_mi_flush(dev, MI_READ_FLUSH | MI_EXE_FLUSH); + + for (i = 0; i < 2; i++) + if (planes & (1 << i)) + i915_do_dispatch_flip(dev, i, sync); + + i915_emit_breadcrumb(dev); + if (!sync) + drm_fence_flush_old(dev, 0, dev_priv->counter); +} + +static int i915_quiescent(struct drm_device *dev) { drm_i915_private_t *dev_priv = dev->dev_private; @@ -607,7 +633,6 @@ static int i915_batchbuffer(struct drm_device *dev, void *data, struct drm_file *file_priv) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; - u32 *hw_status = dev_priv->hw_status_page; drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *) dev_priv->sarea_priv; drm_i915_batchbuffer_t *batch = data; @@ -630,7 +655,7 @@ static int i915_batchbuffer(struct drm_device *dev, void *data, ret = i915_dispatch_batchbuffer(dev, batch); - sarea_priv->last_dispatch = (int)hw_status[5]; + sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); return ret; } @@ -638,7 +663,6 @@ static int i915_cmdbuffer(struct drm_device *dev, void *data, struct drm_file *file_priv) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; - u32 *hw_status = dev_priv->hw_status_page; drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *) dev_priv->sarea_priv; drm_i915_cmdbuffer_t *cmdbuf = data; @@ -663,18 +687,477 @@ static int 
i915_cmdbuffer(struct drm_device *dev, void *data, return ret; } - sarea_priv->last_dispatch = (int)hw_status[5]; + sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); return 0; } -static int i915_flip_bufs(struct drm_device *dev, void *data, - struct drm_file *file_priv) +#if DRM_DEBUG_CODE +#define DRM_DEBUG_RELOCATION (drm_debug != 0) +#else +#define DRM_DEBUG_RELOCATION 0 +#endif + +struct i915_relocatee_info { + struct drm_buffer_object *buf; + unsigned long offset; + u32 *data_page; + unsigned page_offset; + struct drm_bo_kmap_obj kmap; + int is_iomem; +}; + +struct drm_i915_validate_buffer { + struct drm_buffer_object *buffer; + int presumed_offset_correct; +}; + +static void i915_dereference_buffers_locked(struct drm_i915_validate_buffer *buffers, + unsigned num_buffers) +{ + while (num_buffers--) + drm_bo_usage_deref_locked(&buffers[num_buffers].buffer); +} + +int i915_apply_reloc(struct drm_file *file_priv, int num_buffers, + struct drm_i915_validate_buffer *buffers, + struct i915_relocatee_info *relocatee, + uint32_t *reloc) +{ + unsigned index; + unsigned long new_cmd_offset; + u32 val; + int ret, i; + int buf_index = -1; + + for (i = 0; i <= num_buffers; i++) + if (buffers[i].buffer) + if (reloc[2] == buffers[i].buffer->base.hash.key) + buf_index = i; + + if (buf_index == -1) { + DRM_ERROR("Illegal relocation buffer %08X\n", reloc[2]); + return -EINVAL; + } + + /* + * Short-circuit relocations that were correctly + * guessed by the client + */ + if (buffers[buf_index].presumed_offset_correct && !DRM_DEBUG_RELOCATION) + return 0; + + new_cmd_offset = reloc[0]; + if (!relocatee->data_page || + !drm_bo_same_page(relocatee->offset, new_cmd_offset)) { + drm_bo_kunmap(&relocatee->kmap); + relocatee->offset = new_cmd_offset; + mutex_lock (&relocatee->buf->mutex); + ret = drm_bo_wait (relocatee->buf, 0, 0, FALSE); + mutex_unlock (&relocatee->buf->mutex); + if (ret) { + DRM_ERROR("Could not wait for buffer to apply relocs\n %08lx", new_cmd_offset); + return ret; + } + ret = drm_bo_kmap(relocatee->buf, new_cmd_offset >> PAGE_SHIFT, + 1, &relocatee->kmap); + if (ret) { + DRM_ERROR("Could not map command buffer to apply relocs\n %08lx", new_cmd_offset); + return ret; + } + + relocatee->data_page = drm_bmo_virtual(&relocatee->kmap, + &relocatee->is_iomem); + relocatee->page_offset = (relocatee->offset & PAGE_MASK); + } + + val = buffers[buf_index].buffer->offset; + index = (reloc[0] - relocatee->page_offset) >> 2; + + /* add in validate */ + val = val + reloc[1]; + + if (DRM_DEBUG_RELOCATION) { + if (buffers[buf_index].presumed_offset_correct && + relocatee->data_page[index] != val) { + DRM_DEBUG ("Relocation mismatch source %d target %d buffer %d user %08x kernel %08x\n", + reloc[0], reloc[1], buf_index, relocatee->data_page[index], val); + } + } + relocatee->data_page[index] = val; + return 0; +} + +int i915_process_relocs(struct drm_file *file_priv, + uint32_t buf_handle, + uint32_t __user **reloc_user_ptr, + struct i915_relocatee_info *relocatee, + struct drm_i915_validate_buffer *buffers, + uint32_t num_buffers) +{ + int ret, reloc_stride; + uint32_t cur_offset; + uint32_t reloc_count; + uint32_t reloc_type; + uint32_t reloc_buf_size; + uint32_t *reloc_buf = NULL; + int i; + + /* do a copy from user from the user ptr */ + ret = get_user(reloc_count, *reloc_user_ptr); + if (ret) { + DRM_ERROR("Could not map relocation buffer.\n"); + goto out; + } + + ret = get_user(reloc_type, (*reloc_user_ptr)+1); + if (ret) { + DRM_ERROR("Could not map relocation buffer.\n"); + goto out; + } + + 
if (reloc_type != 0) { + DRM_ERROR("Unsupported relocation type requested\n"); + ret = -EINVAL; + goto out; + } + + reloc_buf_size = (I915_RELOC_HEADER + (reloc_count * I915_RELOC0_STRIDE)) * sizeof(uint32_t); + reloc_buf = kmalloc(reloc_buf_size, GFP_KERNEL); + if (!reloc_buf) { + DRM_ERROR("Out of memory for reloc buffer\n"); + ret = -ENOMEM; + goto out; + } + + if (copy_from_user(reloc_buf, *reloc_user_ptr, reloc_buf_size)) { + ret = -EFAULT; + goto out; + } + + /* get next relocate buffer handle */ + *reloc_user_ptr = (uint32_t *)*(unsigned long *)&reloc_buf[2]; + + reloc_stride = I915_RELOC0_STRIDE * sizeof(uint32_t); /* may be different for other types of relocs */ + + DRM_DEBUG("num relocs is %d, next is %p\n", reloc_count, *reloc_user_ptr); + + for (i = 0; i < reloc_count; i++) { + cur_offset = I915_RELOC_HEADER + (i * I915_RELOC0_STRIDE); + + ret = i915_apply_reloc(file_priv, num_buffers, buffers, + relocatee, reloc_buf + cur_offset); + if (ret) + goto out; + } + +out: + + if (reloc_buf) + kfree(reloc_buf); + drm_bo_kunmap(&relocatee->kmap); + relocatee->data_page = NULL; + + return ret; +} + +static int i915_exec_reloc(struct drm_file *file_priv, drm_handle_t buf_handle, + uint32_t __user *reloc_user_ptr, + struct drm_i915_validate_buffer *buffers, + uint32_t buf_count) +{ + struct drm_device *dev = file_priv->head->dev; + struct i915_relocatee_info relocatee; + int ret = 0; + int b; + + /* + * Short circuit relocations when all previous + * buffers offsets were correctly guessed by + * the client + */ + if (!DRM_DEBUG_RELOCATION) { + for (b = 0; b < buf_count; b++) + if (!buffers[b].presumed_offset_correct) + break; + + if (b == buf_count) + return 0; + } + + memset(&relocatee, 0, sizeof(relocatee)); + + mutex_lock(&dev->struct_mutex); + relocatee.buf = drm_lookup_buffer_object(file_priv, buf_handle, 1); + mutex_unlock(&dev->struct_mutex); + if (!relocatee.buf) { + DRM_DEBUG("relocatee buffer invalid %08x\n", buf_handle); + ret = -EINVAL; + goto out_err; + } + + while (reloc_user_ptr) { + ret = i915_process_relocs(file_priv, buf_handle, &reloc_user_ptr, &relocatee, buffers, buf_count); + if (ret) { + DRM_ERROR("process relocs failed\n"); + break; + } + } + + mutex_lock(&dev->struct_mutex); + drm_bo_usage_deref_locked(&relocatee.buf); + mutex_unlock(&dev->struct_mutex); + +out_err: + return ret; +} + +/* + * Validate, add fence and relocate a block of bos from a userspace list + */ +int i915_validate_buffer_list(struct drm_file *file_priv, + unsigned int fence_class, uint64_t data, + struct drm_i915_validate_buffer *buffers, + uint32_t *num_buffers) { + struct drm_i915_op_arg arg; + struct drm_bo_op_req *req = &arg.d.req; + struct drm_bo_arg_rep rep; + unsigned long next = 0; + int ret = 0; + unsigned buf_count = 0; + struct drm_device *dev = file_priv->head->dev; + uint32_t buf_handle; + uint32_t __user *reloc_user_ptr; + + do { + if (buf_count >= *num_buffers) { + DRM_ERROR("Buffer count exceeded %d\n.", *num_buffers); + ret = -EINVAL; + goto out_err; + } + + buffers[buf_count].buffer = NULL; + buffers[buf_count].presumed_offset_correct = 0; + + if (copy_from_user(&arg, (void __user *)(unsigned long)data, sizeof(arg))) { + ret = -EFAULT; + goto out_err; + } + + if (arg.handled) { + data = arg.next; + mutex_lock(&dev->struct_mutex); + buffers[buf_count].buffer = drm_lookup_buffer_object(file_priv, req->arg_handle, 1); + mutex_unlock(&dev->struct_mutex); + buf_count++; + continue; + } + + rep.ret = 0; + if (req->op != drm_bo_validate) { + DRM_ERROR + ("Buffer object operation 
wasn't \"validate\".\n"); + rep.ret = -EINVAL; + goto out_err; + } + + buf_handle = req->bo_req.handle; + reloc_user_ptr = (uint32_t *)(unsigned long)arg.reloc_ptr; + + if (reloc_user_ptr) { + ret = i915_exec_reloc(file_priv, buf_handle, reloc_user_ptr, buffers, buf_count); + if (ret) + goto out_err; + DRM_MEMORYBARRIER(); + } + + rep.ret = drm_bo_handle_validate(file_priv, req->bo_req.handle, + req->bo_req.flags, req->bo_req.mask, + req->bo_req.hint, + req->bo_req.fence_class, 0, + &rep.bo_info, + &buffers[buf_count].buffer); + + if (rep.ret) { + DRM_ERROR("error on handle validate %d\n", rep.ret); + goto out_err; + } + /* + * If the user provided a presumed offset hint, check whether + * the buffer is in the same place, if so, relocations relative to + * this buffer need not be performed + */ + if ((req->bo_req.hint & DRM_BO_HINT_PRESUMED_OFFSET) && + buffers[buf_count].buffer->offset == req->bo_req.presumed_offset) + buffers[buf_count].presumed_offset_correct = 1; + + next = arg.next; + arg.handled = 1; + arg.d.rep = rep; + + if (copy_to_user((void __user *)(unsigned long)data, &arg, sizeof(arg))) + return -EFAULT; + + data = next; + buf_count++; + + } while (next != 0); + *num_buffers = buf_count; + return 0; +out_err: + mutex_lock(&dev->struct_mutex); + i915_dereference_buffers_locked(buffers, buf_count); + mutex_unlock(&dev->struct_mutex); + *num_buffers = 0; + return (ret) ? ret : rep.ret; +} + +static int i915_execbuffer(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; + drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *) + dev_priv->sarea_priv; + struct drm_i915_execbuffer *exec_buf = data; + struct _drm_i915_batchbuffer *batch = &exec_buf->batch; + struct drm_fence_arg *fence_arg = &exec_buf->fence_arg; + int num_buffers; + int ret; + struct drm_i915_validate_buffer *buffers; + struct drm_fence_object *fence; + + if (!dev_priv->allow_batchbuffer) { + DRM_ERROR("Batchbuffer ioctl disabled\n"); + return -EINVAL; + } + + + if (batch->num_cliprects && DRM_VERIFYAREA_READ(batch->cliprects, + batch->num_cliprects * + sizeof(struct drm_clip_rect))) + return -EFAULT; + + if (exec_buf->num_buffers > dev_priv->max_validate_buffers) + return -EINVAL; + + + ret = drm_bo_read_lock(&dev->bm.bm_lock); + if (ret) + return ret; + + /* + * The cmdbuf_mutex makes sure the validate-submit-fence + * operation is atomic. 
+ */ + + ret = mutex_lock_interruptible(&dev_priv->cmdbuf_mutex); + if (ret) { + drm_bo_read_unlock(&dev->bm.bm_lock); + return -EAGAIN; + } + + num_buffers = exec_buf->num_buffers; + + buffers = drm_calloc(num_buffers, sizeof(struct drm_i915_validate_buffer), DRM_MEM_DRIVER); + if (!buffers) { + drm_bo_read_unlock(&dev->bm.bm_lock); + mutex_unlock(&dev_priv->cmdbuf_mutex); + return -ENOMEM; + } + + /* validate buffer list + fixup relocations */ + ret = i915_validate_buffer_list(file_priv, 0, exec_buf->ops_list, + buffers, &num_buffers); + if (ret) + goto out_free; + + /* make sure all previous memory operations have passed */ + DRM_MEMORYBARRIER(); + drm_agp_chipset_flush(dev); + + /* submit buffer */ + batch->start = buffers[num_buffers-1].buffer->offset; + + DRM_DEBUG("i915 exec batchbuffer, start %x used %d cliprects %d\n", + batch->start, batch->used, batch->num_cliprects); + + ret = i915_dispatch_batchbuffer(dev, batch); + if (ret) + goto out_err0; + + sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); + + /* fence */ + ret = drm_fence_buffer_objects(dev, NULL, fence_arg->flags, + NULL, &fence); + if (ret) + goto out_err0; + + if (!(fence_arg->flags & DRM_FENCE_FLAG_NO_USER)) { + ret = drm_fence_add_user_object(file_priv, fence, fence_arg->flags & DRM_FENCE_FLAG_SHAREABLE); + if (!ret) { + fence_arg->handle = fence->base.hash.key; + fence_arg->fence_class = fence->fence_class; + fence_arg->type = fence->type; + fence_arg->signaled = fence->signaled; + } + } + drm_fence_usage_deref_unlocked(&fence); +out_err0: + + /* handle errors */ + mutex_lock(&dev->struct_mutex); + i915_dereference_buffers_locked(buffers, num_buffers); + mutex_unlock(&dev->struct_mutex); + +out_free: + drm_free(buffers, (exec_buf->num_buffers * sizeof(struct drm_buffer_object *)), DRM_MEM_DRIVER); + + mutex_unlock(&dev_priv->cmdbuf_mutex); + drm_bo_read_unlock(&dev->bm.bm_lock); + return ret; +} + +static int i915_do_cleanup_pageflip(struct drm_device *dev) +{ + drm_i915_private_t *dev_priv = dev->dev_private; + int i, planes, num_pages = dev_priv->sarea_priv->third_handle ? 
3 : 2; + + DRM_DEBUG("%s\n", __FUNCTION__); + + for (i = 0, planes = 0; i < 2; i++) + if (dev_priv->sarea_priv->pf_current_page & (0x3 << (2 * i))) { + dev_priv->sarea_priv->pf_current_page = + (dev_priv->sarea_priv->pf_current_page & + ~(0x3 << (2 * i))) | (num_pages - 1) << (2 * i); + + planes |= 1 << i; + } + + if (planes) + i915_dispatch_flip(dev, planes, 0); + + return 0; +} + +static int i915_flip_bufs(struct drm_device *dev, void *data, struct drm_file *file_priv) +{ + drm_i915_flip_t *param = data; + DRM_DEBUG("%s\n", __FUNCTION__); LOCK_TEST_WITH_RETURN(dev, file_priv); - return i915_dispatch_flip(dev); + /* This is really planes */ + if (param->pipes & ~0x3) { + DRM_ERROR("Invalid planes 0x%x, only <= 0x3 is valid\n", + param->pipes); + return -EINVAL; + } + + i915_dispatch_flip(dev, param->pipes, 0); + + return 0; } static int i915_getparam(struct drm_device *dev, void *data, @@ -685,7 +1168,7 @@ static int i915_getparam(struct drm_device *dev, void *data, int value; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -719,7 +1202,7 @@ static int i915_setparam(struct drm_device *dev, void *data, drm_i915_setparam_t *param = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -742,6 +1225,63 @@ static int i915_setparam(struct drm_device *dev, void *data, return 0; } +drm_i915_mmio_entry_t mmio_table[] = { + [MMIO_REGS_PS_DEPTH_COUNT] = { + I915_MMIO_MAY_READ|I915_MMIO_MAY_WRITE, + 0x2350, + 8 + } +}; + +static int mmio_table_size = sizeof(mmio_table)/sizeof(drm_i915_mmio_entry_t); + +static int i915_mmio(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + uint32_t buf[8]; + drm_i915_private_t *dev_priv = dev->dev_private; + drm_i915_mmio_entry_t *e; + drm_i915_mmio_t *mmio = data; + void __iomem *base; + int i; + + if (!dev_priv) { + DRM_ERROR("called with no initialization\n"); + return -EINVAL; + } + + if (mmio->reg >= mmio_table_size) + return -EINVAL; + + e = &mmio_table[mmio->reg]; + base = (u8 *) dev_priv->mmio_map->handle + e->offset; + + switch (mmio->read_write) { + case I915_MMIO_READ: + if (!(e->flag & I915_MMIO_MAY_READ)) + return -EINVAL; + for (i = 0; i < e->size / 4; i++) + buf[i] = I915_READ(e->offset + i * 4); + if (DRM_COPY_TO_USER(mmio->data, buf, e->size)) { + DRM_ERROR("DRM_COPY_TO_USER failed\n"); + return -EFAULT; + } + break; + + case I915_MMIO_WRITE: + if (!(e->flag & I915_MMIO_MAY_WRITE)) + return -EINVAL; + if (DRM_COPY_FROM_USER(buf, mmio->data, e->size)) { + DRM_ERROR("DRM_COPY_TO_USER failed\n"); + return -EFAULT; + } + for (i = 0; i < e->size / 4; i++) + I915_WRITE(e->offset + i * 4, buf[i]); + break; + } + return 0; +} + static int i915_set_status_page(struct drm_device *dev, void *data, struct drm_file *file_priv) { @@ -749,7 +1289,7 @@ static int i915_set_status_page(struct drm_device *dev, void *data, drm_i915_hws_addr_t *hws = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -757,7 +1297,7 @@ static int i915_set_status_page(struct drm_device *dev, void *data, dev_priv->status_gfx_addr = hws->addr & (0x1ffff<<12); - dev_priv->hws_map.offset = dev->agp->agp_info.aper_base + hws->addr; + dev_priv->hws_map.offset = dev->agp->base + hws->addr; dev_priv->hws_map.size = 4*1024; dev_priv->hws_map.type = 0; 
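For reference, the page-flip bookkeeping above keeps a two-bit current-page index per plane inside pf_current_page (plane 0 in bits 1:0, plane 1 in bits 3:2) and advances it modulo the number of pages, which is 3 when a third buffer exists and 2 otherwise. The standalone sketch below reproduces just that arithmetic plus the display-base computation from the plane origin (pitch in pixels, as in the code above); the helper names and example values are illustrative, not taken from the driver.

    #include <stdint.h>
    #include <stdio.h>

    /* Advance the per-plane page index stored in *pf (2 bits per plane). */
    static unsigned advance_plane_page(uint32_t *pf, int plane, int num_pages)
    {
        int shift = 2 * plane;
        unsigned cur  = (*pf >> shift) & 0x3;
        unsigned next = (cur + 1) % num_pages;

        *pf = (*pf & ~(0x3u << shift)) | (next << shift);
        return next;
    }

    /* Byte address of the scanout origin for a plane at (x, y). */
    static uint32_t plane_base(uint32_t page_offset, int x, int y,
                               int pitch, int cpp)
    {
        return page_offset + (uint32_t)(y * pitch + x) * cpp;
    }

    int main(void)
    {
        uint32_t pf = 0;                                /* both planes on page 0 */
        unsigned next = advance_plane_page(&pf, 0, 3);  /* triple buffered */

        printf("plane A now on page %u, pf_current_page=0x%x\n", next, pf);
        printf("base=0x%x\n", plane_base(0x100000, 0, 0, 1024, 4));
        return 0;
    }
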
dev_priv->hws_map.flags = 0; @@ -765,7 +1305,6 @@ static int i915_set_status_page(struct drm_device *dev, void *data, drm_core_ioremap(&dev_priv->hws_map, dev); if (dev_priv->hws_map.handle == NULL) { - dev->dev_private = (void *)dev_priv; i915_dma_cleanup(dev); dev_priv->status_gfx_addr = 0; DRM_ERROR("can not ioremap virtual address for" @@ -784,6 +1323,10 @@ static int i915_set_status_page(struct drm_device *dev, void *data, int i915_driver_load(struct drm_device *dev, unsigned long flags) { + struct drm_i915_private *dev_priv = dev->dev_private; + unsigned long base, size; + int ret = 0, mmio_bar = IS_I9XX(dev) ? 0 : 1; + /* i915 has 4 more counters */ dev->counters += 4; dev->types[6] = _DRM_STAT_IRQ; @@ -791,24 +1334,53 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) dev->types[8] = _DRM_STAT_SECONDARY; dev->types[9] = _DRM_STAT_DMA; + dev_priv = drm_alloc(sizeof(drm_i915_private_t), DRM_MEM_DRIVER); + if (dev_priv == NULL) + return -ENOMEM; + + memset(dev_priv, 0, sizeof(drm_i915_private_t)); + + dev->dev_private = (void *)dev_priv; + + /* Add register map (needed for suspend/resume) */ + base = drm_get_resource_start(dev, mmio_bar); + size = drm_get_resource_len(dev, mmio_bar); + + ret = drm_addmap(dev, base, size, _DRM_REGISTERS, + _DRM_KERNEL | _DRM_DRIVER, + &dev_priv->mmio_map); + return ret; +} + +int i915_driver_unload(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + + if (dev_priv->mmio_map) + drm_rmmap(dev, dev_priv->mmio_map); + + drm_free(dev->dev_private, sizeof(drm_i915_private_t), + DRM_MEM_DRIVER); + return 0; } -void i915_driver_lastclose(struct drm_device * dev) +void i915_driver_lastclose(struct drm_device *dev) { - if (dev->dev_private) { - drm_i915_private_t *dev_priv = dev->dev_private; + drm_i915_private_t *dev_priv = dev->dev_private; + + if (drm_getsarea(dev) && dev_priv->sarea_priv) + i915_do_cleanup_pageflip(dev); + if (dev_priv->agp_heap) i915_mem_takedown(&(dev_priv->agp_heap)); - } + i915_dma_cleanup(dev); } -void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) +void i915_driver_preclose(struct drm_device *dev, struct drm_file *file_priv) { - if (dev->dev_private) { - drm_i915_private_t *dev_priv = dev->dev_private; - i915_mem_release(dev, file_priv, dev_priv->agp_heap); - } + drm_i915_private_t *dev_priv = dev->dev_private; + i915_mem_release(dev, file_priv, dev_priv->agp_heap); } struct drm_ioctl_desc i915_ioctls[] = { @@ -828,7 +1400,9 @@ struct drm_ioctl_desc i915_ioctls[] = { DRM_IOCTL_DEF(DRM_I915_SET_VBLANK_PIPE, i915_vblank_pipe_set, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY ), DRM_IOCTL_DEF(DRM_I915_GET_VBLANK_PIPE, i915_vblank_pipe_get, DRM_AUTH ), DRM_IOCTL_DEF(DRM_I915_VBLANK_SWAP, i915_vblank_swap, DRM_AUTH), + DRM_IOCTL_DEF(DRM_I915_MMIO, i915_mmio, DRM_AUTH), DRM_IOCTL_DEF(DRM_I915_HWS_ADDR, i915_set_status_page, DRM_AUTH), + DRM_IOCTL_DEF(DRM_I915_EXECBUFFER, i915_execbuffer, DRM_AUTH), }; int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls); @@ -844,7 +1418,13 @@ int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls); * \returns * A value of 1 is always retured to indictate every i9x5 is AGP. 
*/ -int i915_driver_device_is_agp(struct drm_device * dev) +int i915_driver_device_is_agp(struct drm_device *dev) { return 1; } + +int i915_driver_firstopen(struct drm_device *dev) +{ + drm_bo_driver_init(dev); + return 0; +} diff --git a/drivers/char/drm/i915_drm.h b/drivers/char/drm/i915_drm.h index 05c66cf..f0bde6b 100644 --- a/drivers/char/drm/i915_drm.h +++ b/drivers/char/drm/i915_drm.h @@ -105,16 +105,32 @@ typedef struct _drm_i915_sarea { unsigned int rotated_tiled; unsigned int rotated2_tiled; - int pipeA_x; - int pipeA_y; - int pipeA_w; - int pipeA_h; - int pipeB_x; - int pipeB_y; - int pipeB_w; - int pipeB_h; + int planeA_x; + int planeA_y; + int planeA_w; + int planeA_h; + int planeB_x; + int planeB_y; + int planeB_w; + int planeB_h; + + /* Triple buffering */ + drm_handle_t third_handle; + int third_offset; + int third_size; + unsigned int third_tiled; } drm_i915_sarea_t; +/* Driver specific fence types and classes. + */ + +/* The only fence class we support */ +#define DRM_I915_FENCE_CLASS_ACCEL 0 +/* Fence type that guarantees read-write flush */ +#define DRM_I915_FENCE_TYPE_RW 2 +/* MI_FLUSH programmed just before the fence */ +#define DRM_I915_FENCE_FLAG_FLUSHED 0x01000000 + /* Flags for perf_boxes */ #define I915_BOX_RING_EMPTY 0x1 @@ -142,11 +158,13 @@ typedef struct _drm_i915_sarea { #define DRM_I915_SET_VBLANK_PIPE 0x0d #define DRM_I915_GET_VBLANK_PIPE 0x0e #define DRM_I915_VBLANK_SWAP 0x0f +#define DRM_I915_MMIO 0x10 #define DRM_I915_HWS_ADDR 0x11 +#define DRM_I915_EXECBUFFER 0x12 #define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t) #define DRM_IOCTL_I915_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH) -#define DRM_IOCTL_I915_FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLIP) +#define DRM_IOCTL_I915_FLIP DRM_IOW( DRM_COMMAND_BASE + DRM_I915_FLIP, drm_i915_flip_t) #define DRM_IOCTL_I915_BATCHBUFFER DRM_IOW( DRM_COMMAND_BASE + DRM_I915_BATCHBUFFER, drm_i915_batchbuffer_t) #define DRM_IOCTL_I915_IRQ_EMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_IRQ_EMIT, drm_i915_irq_emit_t) #define DRM_IOCTL_I915_IRQ_WAIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_IRQ_WAIT, drm_i915_irq_wait_t) @@ -160,6 +178,20 @@ typedef struct _drm_i915_sarea { #define DRM_IOCTL_I915_SET_VBLANK_PIPE DRM_IOW( DRM_COMMAND_BASE + DRM_I915_SET_VBLANK_PIPE, drm_i915_vblank_pipe_t) #define DRM_IOCTL_I915_GET_VBLANK_PIPE DRM_IOR( DRM_COMMAND_BASE + DRM_I915_GET_VBLANK_PIPE, drm_i915_vblank_pipe_t) #define DRM_IOCTL_I915_VBLANK_SWAP DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_VBLANK_SWAP, drm_i915_vblank_swap_t) +#define DRM_IOCTL_I915_MMIO DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_MMIO, drm_i915_vblank_swap_t) +#define DRM_IOCTL_I915_EXECBUFFER DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_EXECBUFFER, struct drm_i915_execbuffer) + +/* Asynchronous page flipping: + */ +typedef struct drm_i915_flip { + /* + * This is really talking about planes, and we could rename it + * except for the fact that some of the duplicated i915_drm.h files + * out there check for HAVE_I915_FLIP and so might pick up this + * version. + */ + int pipes; +} drm_i915_flip_t; /* Allow drivers to submit batchbuffers directly to hardware, relying * on the security mechanisms provided by hardware. 
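With the typedef above, the flip ioctl carries a plane mask rather than a single pipe number: bit 0 requests a flip of plane A, bit 1 of plane B, and any bits outside 0x3 are rejected with -EINVAL. A minimal user-space call might look like the following, assuming libdrm's drmCommandWrite() and the i915_drm.h from this patch are available; the file descriptor setup and error handling are only sketched.

    #include <string.h>
    #include <xf86drm.h>
    #include "i915_drm.h"

    /* Queue an asynchronous flip of plane A (bit 0); use 0x3 for both planes. */
    static int i915_flip_plane_a(int fd)
    {
        drm_i915_flip_t flip;

        memset(&flip, 0, sizeof(flip));
        flip.pipes = 0x1;   /* historical field name; this is really a plane mask */

        return drmCommandWrite(fd, DRM_I915_FLIP, &flip, sizeof(flip));
    }
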
@@ -263,8 +295,73 @@ typedef struct drm_i915_vblank_swap { unsigned int sequence; } drm_i915_vblank_swap_t; +#define I915_MMIO_READ 0 +#define I915_MMIO_WRITE 1 + +#define I915_MMIO_MAY_READ 0x1 +#define I915_MMIO_MAY_WRITE 0x2 + +#define MMIO_REGS_IA_PRIMATIVES_COUNT 0 +#define MMIO_REGS_IA_VERTICES_COUNT 1 +#define MMIO_REGS_VS_INVOCATION_COUNT 2 +#define MMIO_REGS_GS_PRIMITIVES_COUNT 3 +#define MMIO_REGS_GS_INVOCATION_COUNT 4 +#define MMIO_REGS_CL_PRIMITIVES_COUNT 5 +#define MMIO_REGS_CL_INVOCATION_COUNT 6 +#define MMIO_REGS_PS_INVOCATION_COUNT 7 +#define MMIO_REGS_PS_DEPTH_COUNT 8 + +typedef struct drm_i915_mmio_entry { + unsigned int flag; + unsigned int offset; + unsigned int size; +} drm_i915_mmio_entry_t; + +typedef struct drm_i915_mmio { + unsigned int read_write:1; + unsigned int reg:31; + void __user *data; +} drm_i915_mmio_t; + typedef struct drm_i915_hws_addr { uint64_t addr; } drm_i915_hws_addr_t; +/* + * Relocation header is 4 uint32_ts + * 0 - 32 bit reloc count + * 1 - 32-bit relocation type + * 2-3 - 64-bit user buffer handle ptr for another list of relocs. + */ +#define I915_RELOC_HEADER 4 + +/* + * type 0 relocation has 4-uint32_t stride + * 0 - offset into buffer + * 1 - delta to add in + * 2 - buffer handle + * 3 - reserved (for optimisations later). + */ +#define I915_RELOC_TYPE_0 0 +#define I915_RELOC0_STRIDE 4 + +struct drm_i915_op_arg { + uint64_t next; + uint64_t reloc_ptr; + int handled; + union { + struct drm_bo_op_req req; + struct drm_bo_arg_rep rep; + } d; + +}; + +struct drm_i915_execbuffer { + uint64_t ops_list; + uint32_t num_buffers; + struct _drm_i915_batchbuffer batch; + drm_context_t context; /* for lockless use in the future */ + struct drm_fence_arg fence_arg; +}; + #endif /* _I915_DRM_H_ */ diff --git a/drivers/char/drm/i915_drv.c b/drivers/char/drm/i915_drv.c index 85bcc27..72beff1 100644 --- a/drivers/char/drm/i915_drv.c +++ b/drivers/char/drm/i915_drv.c @@ -37,6 +37,492 @@ static struct pci_device_id pciidlist[] = { i915_PCI_IDS }; +static struct drm_fence_driver i915_fence_driver = { + .num_classes = 1, + .wrap_diff = (1U << (BREADCRUMB_BITS - 1)), + .flush_diff = (1U << (BREADCRUMB_BITS - 2)), + .sequence_mask = BREADCRUMB_MASK, + .lazy_capable = 1, + .emit = i915_fence_emit_sequence, + .poke_flush = i915_poke_flush, + .has_irq = i915_fence_has_irq, +}; + +static uint32_t i915_mem_prios[] = {DRM_BO_MEM_PRIV0, DRM_BO_MEM_TT, DRM_BO_MEM_LOCAL}; +static uint32_t i915_busy_prios[] = {DRM_BO_MEM_TT, DRM_BO_MEM_PRIV0, DRM_BO_MEM_LOCAL}; + +static struct drm_bo_driver i915_bo_driver = { + .mem_type_prio = i915_mem_prios, + .mem_busy_prio = i915_busy_prios, + .num_mem_type_prio = sizeof(i915_mem_prios)/sizeof(uint32_t), + .num_mem_busy_prio = sizeof(i915_busy_prios)/sizeof(uint32_t), + .create_ttm_backend_entry = i915_create_ttm_backend_entry, + .fence_type = i915_fence_type, + .invalidate_caches = i915_invalidate_caches, + .init_mem_type = i915_init_mem_type, + .evict_flags = i915_evict_flags, + .move = i915_move, + .ttm_cache_flush = i915_flush_ttm, +}; + +enum pipe { + PIPE_A = 0, + PIPE_B, +}; + +static bool i915_pipe_enabled(struct drm_device *dev, enum pipe pipe) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + + if (pipe == PIPE_A) + return (I915_READ(DPLL_A) & DPLL_VCO_ENABLE); + else + return (I915_READ(DPLL_B) & DPLL_VCO_ENABLE); +} + +static void i915_save_palette(struct drm_device *dev, enum pipe pipe) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + unsigned long reg = (pipe == PIPE_A ? 
PALETTE_A : PALETTE_B); + u32 *array; + int i; + + if (!i915_pipe_enabled(dev, pipe)) + return; + + if (pipe == PIPE_A) + array = dev_priv->save_palette_a; + else + array = dev_priv->save_palette_b; + + for(i = 0; i < 256; i++) + array[i] = I915_READ(reg + (i << 2)); +} + +static void i915_restore_palette(struct drm_device *dev, enum pipe pipe) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + unsigned long reg = (pipe == PIPE_A ? PALETTE_A : PALETTE_B); + u32 *array; + int i; + + if (!i915_pipe_enabled(dev, pipe)) + return; + + if (pipe == PIPE_A) + array = dev_priv->save_palette_a; + else + array = dev_priv->save_palette_b; + + for(i = 0; i < 256; i++) + I915_WRITE(reg + (i << 2), array[i]); +} + +static u8 i915_read_indexed(u16 index_port, u16 data_port, u8 reg) +{ + outb(reg, index_port); + return inb(data_port); +} + +static u8 i915_read_ar(u16 st01, u8 reg, u16 palette_enable) +{ + inb(st01); + outb(palette_enable | reg, VGA_AR_INDEX); + return inb(VGA_AR_DATA_READ); +} + +static void i915_write_ar(u8 st01, u8 reg, u8 val, u16 palette_enable) +{ + inb(st01); + outb(palette_enable | reg, VGA_AR_INDEX); + outb(val, VGA_AR_DATA_WRITE); +} + +static void i915_write_indexed(u16 index_port, u16 data_port, u8 reg, u8 val) +{ + outb(reg, index_port); + outb(val, data_port); +} + +static void i915_save_vga(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + int i; + u16 cr_index, cr_data, st01; + + /* VGA color palette registers */ + dev_priv->saveDACMASK = inb(VGA_DACMASK); + /* DACCRX automatically increments during read */ + outb(0, VGA_DACRX); + /* Read 3 bytes of color data from each index */ + for (i = 0; i < 256 * 3; i++) + dev_priv->saveDACDATA[i] = inb(VGA_DACDATA); + + /* MSR bits */ + dev_priv->saveMSR = inb(VGA_MSR_READ); + if (dev_priv->saveMSR & VGA_MSR_CGA_MODE) { + cr_index = VGA_CR_INDEX_CGA; + cr_data = VGA_CR_DATA_CGA; + st01 = VGA_ST01_CGA; + } else { + cr_index = VGA_CR_INDEX_MDA; + cr_data = VGA_CR_DATA_MDA; + st01 = VGA_ST01_MDA; + } + + /* CRT controller regs */ + i915_write_indexed(cr_index, cr_data, 0x11, + i915_read_indexed(cr_index, cr_data, 0x11) & + (~0x80)); + for (i = 0; i < 0x24; i++) + dev_priv->saveCR[i] = + i915_read_indexed(cr_index, cr_data, i); + /* Make sure we don't turn off CR group 0 writes */ + dev_priv->saveCR[0x11] &= ~0x80; + + /* Attribute controller registers */ + inb(st01); + dev_priv->saveAR_INDEX = inb(VGA_AR_INDEX); + for (i = 0; i < 20; i++) + dev_priv->saveAR[i] = i915_read_ar(st01, i, 0); + inb(st01); + outb(dev_priv->saveAR_INDEX, VGA_AR_INDEX); + + /* Graphics controller registers */ + for (i = 0; i < 9; i++) + dev_priv->saveGR[i] = + i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, i); + + dev_priv->saveGR[0x10] = + i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x10); + dev_priv->saveGR[0x11] = + i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x11); + dev_priv->saveGR[0x18] = + i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x18); + + /* Sequencer registers */ + for (i = 0; i < 8; i++) + dev_priv->saveSR[i] = + i915_read_indexed(VGA_SR_INDEX, VGA_SR_DATA, i); +} + +static void i915_restore_vga(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + int i; + u16 cr_index, cr_data, st01; + + /* MSR bits */ + outb(dev_priv->saveMSR, VGA_MSR_WRITE); + if (dev_priv->saveMSR & VGA_MSR_CGA_MODE) { + cr_index = VGA_CR_INDEX_CGA; + cr_data = VGA_CR_DATA_CGA; + st01 = VGA_ST01_CGA; + } else { + cr_index = VGA_CR_INDEX_MDA; + cr_data = VGA_CR_DATA_MDA; + st01 = VGA_ST01_MDA; + } + + 
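As an aside on the access pattern used here: i915_save_vga() and i915_restore_vga() drive the legacy VGA register file entirely through index/data port pairs, writing the register number to the index port and then reading or writing the value through the matching data port (the attribute controller additionally needs a status-register read to reset its index/data flip-flop). The fragment below shows the same pattern from user space, assuming an x86 machine with ioperm() privileges and the CGA-colour port pair; it illustrates the protocol and is not code from the driver.

    #include <stdio.h>
    #include <sys/io.h>         /* ioperm, inb, outb (x86, requires root) */

    #define VGA_CR_INDEX_CGA 0x3d4
    #define VGA_CR_DATA_CGA  0x3d5

    static unsigned char vga_read_indexed(unsigned short idx_port,
                                          unsigned short data_port,
                                          unsigned char reg)
    {
        outb(reg, idx_port);        /* select the register... */
        return inb(data_port);      /* ...then read it through the data port */
    }

    int main(void)
    {
        if (ioperm(VGA_CR_INDEX_CGA, 2, 1) < 0) {
            perror("ioperm");
            return 1;
        }
        /* CR11 holds the CRTC write-protect bit the driver clears before saving. */
        printf("CR11 = 0x%02x\n",
               vga_read_indexed(VGA_CR_INDEX_CGA, VGA_CR_DATA_CGA, 0x11));
        return 0;
    }
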
/* Sequencer registers, don't write SR07 */ + for (i = 0; i < 7; i++) + i915_write_indexed(VGA_SR_INDEX, VGA_SR_DATA, i, + dev_priv->saveSR[i]); + + /* CRT controller regs */ + /* Enable CR group 0 writes */ + i915_write_indexed(cr_index, cr_data, 0x11, dev_priv->saveCR[0x11]); + for (i = 0; i < 0x24; i++) + i915_write_indexed(cr_index, cr_data, i, dev_priv->saveCR[i]); + + /* Graphics controller regs */ + for (i = 0; i < 9; i++) + i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, i, + dev_priv->saveGR[i]); + + i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x10, + dev_priv->saveGR[0x10]); + i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x11, + dev_priv->saveGR[0x11]); + i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x18, + dev_priv->saveGR[0x18]); + + /* Attribute controller registers */ + for (i = 0; i < 20; i++) + i915_write_ar(st01, i, dev_priv->saveAR[i], 0); + inb(st01); /* switch back to index mode */ + outb(dev_priv->saveAR_INDEX | 0x20, VGA_AR_INDEX); + + /* VGA color palette registers */ + outb(dev_priv->saveDACMASK, VGA_DACMASK); + /* DACCRX automatically increments during read */ + outb(0, VGA_DACWX); + /* Read 3 bytes of color data from each index */ + for (i = 0; i < 256 * 3; i++) + outb(dev_priv->saveDACDATA[i], VGA_DACDATA); + +} + +static int i915_suspend(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + int i; + + if (!dev || !dev_priv) { + printk(KERN_ERR "dev: %p, dev_priv: %p\n", dev, dev_priv); + printk(KERN_ERR "DRM not initialized, aborting suspend.\n"); + return -ENODEV; + } + + pci_save_state(dev->pdev); + pci_read_config_byte(dev->pdev, LBB, &dev_priv->saveLBB); + + /* Pipe & plane A info */ + dev_priv->savePIPEACONF = I915_READ(PIPEACONF); + dev_priv->savePIPEASRC = I915_READ(PIPEASRC); + dev_priv->saveFPA0 = I915_READ(FPA0); + dev_priv->saveFPA1 = I915_READ(FPA1); + dev_priv->saveDPLL_A = I915_READ(DPLL_A); + if (IS_I965G(dev)) + dev_priv->saveDPLL_A_MD = I915_READ(DPLL_A_MD); + dev_priv->saveHTOTAL_A = I915_READ(HTOTAL_A); + dev_priv->saveHBLANK_A = I915_READ(HBLANK_A); + dev_priv->saveHSYNC_A = I915_READ(HSYNC_A); + dev_priv->saveVTOTAL_A = I915_READ(VTOTAL_A); + dev_priv->saveVBLANK_A = I915_READ(VBLANK_A); + dev_priv->saveVSYNC_A = I915_READ(VSYNC_A); + dev_priv->saveBCLRPAT_A = I915_READ(BCLRPAT_A); + + dev_priv->saveDSPACNTR = I915_READ(DSPACNTR); + dev_priv->saveDSPASTRIDE = I915_READ(DSPASTRIDE); + dev_priv->saveDSPASIZE = I915_READ(DSPASIZE); + dev_priv->saveDSPAPOS = I915_READ(DSPAPOS); + dev_priv->saveDSPABASE = I915_READ(DSPABASE); + if (IS_I965G(dev)) { + dev_priv->saveDSPASURF = I915_READ(DSPASURF); + dev_priv->saveDSPATILEOFF = I915_READ(DSPATILEOFF); + } + i915_save_palette(dev, PIPE_A); + + /* Pipe & plane B info */ + dev_priv->savePIPEBCONF = I915_READ(PIPEBCONF); + dev_priv->savePIPEBSRC = I915_READ(PIPEBSRC); + dev_priv->saveFPB0 = I915_READ(FPB0); + dev_priv->saveFPB1 = I915_READ(FPB1); + dev_priv->saveDPLL_B = I915_READ(DPLL_B); + if (IS_I965G(dev)) + dev_priv->saveDPLL_B_MD = I915_READ(DPLL_B_MD); + dev_priv->saveHTOTAL_B = I915_READ(HTOTAL_B); + dev_priv->saveHBLANK_B = I915_READ(HBLANK_B); + dev_priv->saveHSYNC_B = I915_READ(HSYNC_B); + dev_priv->saveVTOTAL_B = I915_READ(VTOTAL_B); + dev_priv->saveVBLANK_B = I915_READ(VBLANK_B); + dev_priv->saveVSYNC_B = I915_READ(VSYNC_B); + dev_priv->saveBCLRPAT_A = I915_READ(BCLRPAT_A); + + dev_priv->saveDSPBCNTR = I915_READ(DSPBCNTR); + dev_priv->saveDSPBSTRIDE = I915_READ(DSPBSTRIDE); + dev_priv->saveDSPBSIZE = I915_READ(DSPBSIZE); + dev_priv->saveDSPBPOS = 
I915_READ(DSPBPOS); + dev_priv->saveDSPBBASE = I915_READ(DSPBBASE); + if (IS_I965GM(dev) || IS_IGD_GM(dev)) { + dev_priv->saveDSPBSURF = I915_READ(DSPBSURF); + dev_priv->saveDSPBTILEOFF = I915_READ(DSPBTILEOFF); + } + i915_save_palette(dev, PIPE_B); + + /* CRT state */ + dev_priv->saveADPA = I915_READ(ADPA); + + /* LVDS state */ + dev_priv->savePP_CONTROL = I915_READ(PP_CONTROL); + dev_priv->savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS); + dev_priv->saveBLC_PWM_CTL = I915_READ(BLC_PWM_CTL); + if (IS_I965G(dev)) + dev_priv->saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_CTL2); + if (IS_MOBILE(dev) && !IS_I830(dev)) + dev_priv->saveLVDS = I915_READ(LVDS); + if (!IS_I830(dev) && !IS_845G(dev)) + dev_priv->savePFIT_CONTROL = I915_READ(PFIT_CONTROL); + dev_priv->saveLVDSPP_ON = I915_READ(LVDSPP_ON); + dev_priv->saveLVDSPP_OFF = I915_READ(LVDSPP_OFF); + dev_priv->savePP_CYCLE = I915_READ(PP_CYCLE); + + /* FIXME: save TV & SDVO state */ + + /* FBC state */ + dev_priv->saveFBC_CFB_BASE = I915_READ(FBC_CFB_BASE); + dev_priv->saveFBC_LL_BASE = I915_READ(FBC_LL_BASE); + dev_priv->saveFBC_CONTROL2 = I915_READ(FBC_CONTROL2); + dev_priv->saveFBC_CONTROL = I915_READ(FBC_CONTROL); + + /* VGA state */ + dev_priv->saveVCLK_DIVISOR_VGA0 = I915_READ(VCLK_DIVISOR_VGA0); + dev_priv->saveVCLK_DIVISOR_VGA1 = I915_READ(VCLK_DIVISOR_VGA1); + dev_priv->saveVCLK_POST_DIV = I915_READ(VCLK_POST_DIV); + dev_priv->saveVGACNTRL = I915_READ(VGACNTRL); + + /* Scratch space */ + for (i = 0; i < 16; i++) { + dev_priv->saveSWF0[i] = I915_READ(SWF0 + (i << 2)); + dev_priv->saveSWF1[i] = I915_READ(SWF10 + (i << 2)); + } + for (i = 0; i < 3; i++) + dev_priv->saveSWF2[i] = I915_READ(SWF30 + (i << 2)); + + i915_save_vga(dev); + + /* Shut down the device */ + pci_disable_device(dev->pdev); + pci_set_power_state(dev->pdev, PCI_D3hot); + + return 0; +} + +static int i915_resume(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + int i; + + pci_set_power_state(dev->pdev, PCI_D0); + pci_restore_state(dev->pdev); + if (pci_enable_device(dev->pdev)) + return -1; + + pci_write_config_byte(dev->pdev, LBB, dev_priv->saveLBB); + + /* Pipe & plane A info */ + /* Prime the clock */ + if (dev_priv->saveDPLL_A & DPLL_VCO_ENABLE) { + I915_WRITE(DPLL_A, dev_priv->saveDPLL_A & + ~DPLL_VCO_ENABLE); + udelay(150); + } + I915_WRITE(FPA0, dev_priv->saveFPA0); + I915_WRITE(FPA1, dev_priv->saveFPA1); + /* Actually enable it */ + I915_WRITE(DPLL_A, dev_priv->saveDPLL_A); + udelay(150); + if (IS_I965G(dev)) + I915_WRITE(DPLL_A_MD, dev_priv->saveDPLL_A_MD); + udelay(150); + + /* Restore mode */ + I915_WRITE(HTOTAL_A, dev_priv->saveHTOTAL_A); + I915_WRITE(HBLANK_A, dev_priv->saveHBLANK_A); + I915_WRITE(HSYNC_A, dev_priv->saveHSYNC_A); + I915_WRITE(VTOTAL_A, dev_priv->saveVTOTAL_A); + I915_WRITE(VBLANK_A, dev_priv->saveVBLANK_A); + I915_WRITE(VSYNC_A, dev_priv->saveVSYNC_A); + I915_WRITE(BCLRPAT_A, dev_priv->saveBCLRPAT_A); + + /* Restore plane info */ + I915_WRITE(DSPASIZE, dev_priv->saveDSPASIZE); + I915_WRITE(DSPAPOS, dev_priv->saveDSPAPOS); + I915_WRITE(PIPEASRC, dev_priv->savePIPEASRC); + I915_WRITE(DSPABASE, dev_priv->saveDSPABASE); + I915_WRITE(DSPASTRIDE, dev_priv->saveDSPASTRIDE); + if (IS_I965G(dev)) { + I915_WRITE(DSPASURF, dev_priv->saveDSPASURF); + I915_WRITE(DSPATILEOFF, dev_priv->saveDSPATILEOFF); + } + + if ((dev_priv->saveDPLL_A & DPLL_VCO_ENABLE) && + (dev_priv->saveDPLL_A & DPLL_VGA_MODE_DIS)) + I915_WRITE(PIPEACONF, dev_priv->savePIPEACONF); + + i915_restore_palette(dev, PIPE_A); + /* Enable the plane */ + 
I915_WRITE(DSPACNTR, dev_priv->saveDSPACNTR); + I915_WRITE(DSPABASE, I915_READ(DSPABASE)); + + /* Pipe & plane B info */ + if (dev_priv->saveDPLL_B & DPLL_VCO_ENABLE) { + I915_WRITE(DPLL_B, dev_priv->saveDPLL_B & + ~DPLL_VCO_ENABLE); + udelay(150); + } + I915_WRITE(FPB0, dev_priv->saveFPB0); + I915_WRITE(FPB1, dev_priv->saveFPB1); + /* Actually enable it */ + I915_WRITE(DPLL_B, dev_priv->saveDPLL_B); + udelay(150); + if (IS_I965G(dev)) + I915_WRITE(DPLL_B_MD, dev_priv->saveDPLL_B_MD); + udelay(150); + + /* Restore mode */ + I915_WRITE(HTOTAL_B, dev_priv->saveHTOTAL_B); + I915_WRITE(HBLANK_B, dev_priv->saveHBLANK_B); + I915_WRITE(HSYNC_B, dev_priv->saveHSYNC_B); + I915_WRITE(VTOTAL_B, dev_priv->saveVTOTAL_B); + I915_WRITE(VBLANK_B, dev_priv->saveVBLANK_B); + I915_WRITE(VSYNC_B, dev_priv->saveVSYNC_B); + I915_WRITE(BCLRPAT_B, dev_priv->saveBCLRPAT_B); + + /* Restore plane info */ + I915_WRITE(DSPBSIZE, dev_priv->saveDSPBSIZE); + I915_WRITE(DSPBPOS, dev_priv->saveDSPBPOS); + I915_WRITE(PIPEBSRC, dev_priv->savePIPEBSRC); + I915_WRITE(DSPBBASE, dev_priv->saveDSPBBASE); + I915_WRITE(DSPBSTRIDE, dev_priv->saveDSPBSTRIDE); + if (IS_I965G(dev)) { + I915_WRITE(DSPBSURF, dev_priv->saveDSPBSURF); + I915_WRITE(DSPBTILEOFF, dev_priv->saveDSPBTILEOFF); + } + + if ((dev_priv->saveDPLL_B & DPLL_VCO_ENABLE) && + (dev_priv->saveDPLL_B & DPLL_VGA_MODE_DIS)) + I915_WRITE(PIPEBCONF, dev_priv->savePIPEBCONF); + i915_restore_palette(dev, PIPE_A); + /* Enable the plane */ + I915_WRITE(DSPBCNTR, dev_priv->saveDSPBCNTR); + I915_WRITE(DSPBBASE, I915_READ(DSPBBASE)); + + /* CRT state */ + I915_WRITE(ADPA, dev_priv->saveADPA); + + /* LVDS state */ + if (IS_I965G(dev)) + I915_WRITE(BLC_PWM_CTL2, dev_priv->saveBLC_PWM_CTL2); + if (IS_MOBILE(dev) && !IS_I830(dev)) + I915_WRITE(LVDS, dev_priv->saveLVDS); + if (!IS_I830(dev) && !IS_845G(dev)) + I915_WRITE(PFIT_CONTROL, dev_priv->savePFIT_CONTROL); + + I915_WRITE(PFIT_PGM_RATIOS, dev_priv->savePFIT_PGM_RATIOS); + I915_WRITE(BLC_PWM_CTL, dev_priv->saveBLC_PWM_CTL); + I915_WRITE(LVDSPP_ON, dev_priv->saveLVDSPP_ON); + I915_WRITE(LVDSPP_OFF, dev_priv->saveLVDSPP_OFF); + I915_WRITE(PP_CYCLE, dev_priv->savePP_CYCLE); + I915_WRITE(PP_CONTROL, dev_priv->savePP_CONTROL); + + /* FIXME: restore TV & SDVO state */ + + /* FBC info */ + I915_WRITE(FBC_CFB_BASE, dev_priv->saveFBC_CFB_BASE); + I915_WRITE(FBC_LL_BASE, dev_priv->saveFBC_LL_BASE); + I915_WRITE(FBC_CONTROL2, dev_priv->saveFBC_CONTROL2); + I915_WRITE(FBC_CONTROL, dev_priv->saveFBC_CONTROL); + + /* VGA state */ + I915_WRITE(VGACNTRL, dev_priv->saveVGACNTRL); + I915_WRITE(VCLK_DIVISOR_VGA0, dev_priv->saveVCLK_DIVISOR_VGA0); + I915_WRITE(VCLK_DIVISOR_VGA1, dev_priv->saveVCLK_DIVISOR_VGA1); + I915_WRITE(VCLK_POST_DIV, dev_priv->saveVCLK_POST_DIV); + udelay(150); + + for (i = 0; i < 16; i++) { + I915_WRITE(SWF0 + (i << 2), dev_priv->saveSWF0[i]); + I915_WRITE(SWF10 + (i << 2), dev_priv->saveSWF1[i+7]); + } + for (i = 0; i < 3; i++) + I915_WRITE(SWF30 + (i << 2), dev_priv->saveSWF2[i]); + + i915_restore_vga(dev); + + return 0; +} static struct drm_driver driver = { /* don't use mtrr's here, the Xserver or user space app should @@ -47,8 +533,12 @@ static struct drm_driver driver = { DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_IRQ_VBL | DRIVER_IRQ_VBL2, .load = i915_driver_load, + .unload = i915_driver_unload, + .firstopen = i915_driver_firstopen, .lastclose = i915_driver_lastclose, .preclose = i915_driver_preclose, + .suspend = i915_suspend, + .resume = i915_resume, .device_is_agp = i915_driver_device_is_agp, .vblank_wait = 
i915_driver_vblank_wait, .vblank_wait2 = i915_driver_vblank_wait2, @@ -77,7 +567,8 @@ static struct drm_driver driver = { .name = DRIVER_NAME, .id_table = pciidlist, }, - + .fence_driver = &i915_fence_driver, + .bo_driver = &i915_bo_driver, .name = DRIVER_NAME, .desc = DRIVER_DESC, .date = DRIVER_DATE, diff --git a/drivers/char/drm/i915_drv.h b/drivers/char/drm/i915_drv.h index e064292..1d96241 100644 --- a/drivers/char/drm/i915_drv.h +++ b/drivers/char/drm/i915_drv.h @@ -37,7 +37,7 @@ #define DRIVER_NAME "i915" #define DRIVER_DESC "Intel Graphics" -#define DRIVER_DATE "20060119" +#define DRIVER_DATE "20071122" /* Interface history: * @@ -48,11 +48,18 @@ * 1.5: Add vblank pipe configuration * 1.6: - New ioctl for scheduling buffer swaps on vertical blank * - Support vertical blank on secondary display pipe + * 1.8: New ioctl for ARB_Occlusion_Query + * 1.9: Usable page flipping and triple buffering + * 1.10: Plane/pipe disentangling + * 1.11: TTM superioctl + * 1.12: TTM relocation optimization */ #define DRIVER_MAJOR 1 -#define DRIVER_MINOR 6 +#define DRIVER_MINOR 12 #define DRIVER_PATCHLEVEL 0 +#define I915_MAX_VALIDATE_BUFFERS 4096 + typedef struct _drm_i915_ring_buffer { int tail_mask; unsigned long Start; @@ -76,8 +83,9 @@ struct mem_block { typedef struct _drm_i915_vbl_swap { struct list_head head; drm_drawable_t drw_id; - unsigned int pipe; + unsigned int plane; unsigned int sequence; + int flip; } drm_i915_vbl_swap_t; typedef struct drm_i915_private { @@ -90,15 +98,11 @@ typedef struct drm_i915_private { drm_dma_handle_t *status_page_dmah; void *hw_status_page; dma_addr_t dma_status_page; - unsigned long counter; + uint32_t counter; unsigned int status_gfx_addr; drm_local_map_t hws_map; unsigned int cpp; - int back_offset; - int front_offset; - int current_page; - int page_flipping; int use_mi_batchbuffer_start; wait_queue_head_t irq_queue; @@ -110,10 +114,102 @@ typedef struct drm_i915_private { struct mem_block *agp_heap; unsigned int sr01, adpa, ppcr, dvob, dvoc, lvds; int vblank_pipe; + spinlock_t user_irq_lock; + int user_irq_refcount; + int fence_irq_on; + uint32_t irq_enable_reg; + int irq_enabled; + + uint32_t flush_sequence; + uint32_t flush_flags; + uint32_t flush_pending; + uint32_t saved_flush_status; + void *agp_iomap; + unsigned int max_validate_buffers; + struct mutex cmdbuf_mutex; spinlock_t swaps_lock; drm_i915_vbl_swap_t vbl_swaps; unsigned int swaps_pending; + + /* Register state */ + u8 saveLBB; + u32 saveDSPACNTR; + u32 saveDSPBCNTR; + u32 savePIPEACONF; + u32 savePIPEBCONF; + u32 savePIPEASRC; + u32 savePIPEBSRC; + u32 saveFPA0; + u32 saveFPA1; + u32 saveDPLL_A; + u32 saveDPLL_A_MD; + u32 saveHTOTAL_A; + u32 saveHBLANK_A; + u32 saveHSYNC_A; + u32 saveVTOTAL_A; + u32 saveVBLANK_A; + u32 saveVSYNC_A; + u32 saveBCLRPAT_A; + u32 saveDSPASTRIDE; + u32 saveDSPASIZE; + u32 saveDSPAPOS; + u32 saveDSPABASE; + u32 saveDSPASURF; + u32 saveDSPATILEOFF; + u32 savePFIT_PGM_RATIOS; + u32 saveBLC_PWM_CTL; + u32 saveBLC_PWM_CTL2; + u32 saveFPB0; + u32 saveFPB1; + u32 saveDPLL_B; + u32 saveDPLL_B_MD; + u32 saveHTOTAL_B; + u32 saveHBLANK_B; + u32 saveHSYNC_B; + u32 saveVTOTAL_B; + u32 saveVBLANK_B; + u32 saveVSYNC_B; + u32 saveBCLRPAT_B; + u32 saveDSPBSTRIDE; + u32 saveDSPBSIZE; + u32 saveDSPBPOS; + u32 saveDSPBBASE; + u32 saveDSPBSURF; + u32 saveDSPBTILEOFF; + u32 saveVCLK_DIVISOR_VGA0; + u32 saveVCLK_DIVISOR_VGA1; + u32 saveVCLK_POST_DIV; + u32 saveVGACNTRL; + u32 saveADPA; + u32 saveLVDS; + u32 saveLVDSPP_ON; + u32 saveLVDSPP_OFF; + u32 saveDVOA; + u32 saveDVOB; + u32 
saveDVOC; + u32 savePP_ON; + u32 savePP_OFF; + u32 savePP_CONTROL; + u32 savePP_CYCLE; + u32 savePFIT_CONTROL; + u32 save_palette_a[256]; + u32 save_palette_b[256]; + u32 saveFBC_CFB_BASE; + u32 saveFBC_LL_BASE; + u32 saveFBC_CONTROL; + u32 saveFBC_CONTROL2; + u32 saveSWF0[16]; + u32 saveSWF1[16]; + u32 saveSWF2[3]; + u8 saveMSR; + u8 saveSR[8]; + u8 saveGR[24]; + u8 saveAR_INDEX; + u8 saveAR[20]; + u8 saveDACMASK; + u8 saveDACDATA[256*3]; /* 256 3-byte colors */ + u8 saveCR[36]; } drm_i915_private_t; extern struct drm_ioctl_desc i915_ioctls[]; @@ -122,12 +218,17 @@ extern int i915_max_ioctl; /* i915_dma.c */ extern void i915_kernel_lost_context(struct drm_device * dev); extern int i915_driver_load(struct drm_device *, unsigned long flags); +extern int i915_driver_unload(struct drm_device *); extern void i915_driver_lastclose(struct drm_device * dev); extern void i915_driver_preclose(struct drm_device *dev, struct drm_file *file_priv); extern int i915_driver_device_is_agp(struct drm_device * dev); extern long i915_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); +extern void i915_emit_breadcrumb(struct drm_device *dev); +extern void i915_dispatch_flip(struct drm_device * dev, int pipes, int sync); +extern int i915_emit_mi_flush(struct drm_device *dev, uint32_t flush); +extern int i915_driver_firstopen(struct drm_device *dev); /* i915_irq.c */ extern int i915_irq_emit(struct drm_device *dev, void *data, @@ -145,6 +246,9 @@ extern int i915_vblank_pipe_set(struct drm_device *dev, void *data, struct drm_file *file_priv); extern int i915_vblank_pipe_get(struct drm_device *dev, void *data, struct drm_file *file_priv); +extern int i915_emit_irq(struct drm_device *dev); +extern void i915_user_irq_on(drm_i915_private_t *dev_priv); +extern void i915_user_irq_off(drm_i915_private_t *dev_priv); extern int i915_vblank_swap(struct drm_device *dev, void *data, struct drm_file *file_priv); @@ -159,11 +263,37 @@ extern int i915_mem_destroy_heap(struct drm_device *dev, void *data, struct drm_file *file_priv); extern void i915_mem_takedown(struct mem_block **heap); extern void i915_mem_release(struct drm_device * dev, - struct drm_file *file_priv, struct mem_block *heap); + struct drm_file *file_priv, + struct mem_block *heap); +/* i915_fence.c */ + + +extern void i915_fence_handler(struct drm_device *dev); +extern int i915_fence_emit_sequence(struct drm_device *dev, uint32_t class, + uint32_t flags, + uint32_t *sequence, + uint32_t *native_type); +extern void i915_poke_flush(struct drm_device *dev, uint32_t class); +extern int i915_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags); + +/* i915_buffer.c */ +extern struct drm_ttm_backend *i915_create_ttm_backend_entry(struct drm_device *dev); +extern int i915_fence_type(struct drm_buffer_object *bo, uint32_t *fclass, + uint32_t *type); +extern int i915_invalidate_caches(struct drm_device *dev, uint64_t buffer_flags); +extern int i915_init_mem_type(struct drm_device *dev, uint32_t type, + struct drm_mem_type_manager *man); +extern uint64_t i915_evict_flags(struct drm_buffer_object *bo); +extern int i915_move(struct drm_buffer_object *bo, int evict, + int no_wait, struct drm_bo_mem_reg *new_mem); +void i915_flush_ttm(struct drm_ttm *ttm); + +extern void intel_init_chipset_flush_compat(struct drm_device *dev); +extern void intel_fini_chipset_flush_compat(struct drm_device *dev); #define I915_READ(reg) DRM_READ32(dev_priv->mmio_map, (reg)) #define I915_WRITE(reg,val) DRM_WRITE32(dev_priv->mmio_map, (reg), (val)) -#define 
I915_READ16(reg) DRM_READ16(dev_priv->mmio_map, (reg)) +#define I915_READ16(reg) DRM_READ16(dev_priv->mmio_map, (reg)) #define I915_WRITE16(reg,val) DRM_WRITE16(dev_priv->mmio_map, (reg), (val)) #define I915_VERBOSE 0 @@ -173,9 +303,8 @@ extern void i915_mem_release(struct drm_device * dev, #define BEGIN_LP_RING(n) do { \ if (I915_VERBOSE) \ - DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", \ - (n), __FUNCTION__); \ - if (dev_priv->ring.space < (n)*4) \ + DRM_DEBUG("BEGIN_LP_RING(%d)\n", (n)); \ + if (dev_priv->ring.space < (n)*4) \ i915_wait_ring(dev, (n)*4, __FUNCTION__); \ outcount = 0; \ outring = dev_priv->ring.tail; \ @@ -200,25 +329,115 @@ extern void i915_mem_release(struct drm_device * dev, extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); -#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) +/* Extended config space */ +#define LBB 0xf4 + +/* VGA stuff */ + +#define VGA_ST01_MDA 0x3ba +#define VGA_ST01_CGA 0x3da + +#define VGA_MSR_WRITE 0x3c2 +#define VGA_MSR_READ 0x3cc +#define VGA_MSR_MEM_EN (1<<1) +#define VGA_MSR_CGA_MODE (1<<0) + +#define VGA_SR_INDEX 0x3c4 +#define VGA_SR_DATA 0x3c5 + +#define VGA_AR_INDEX 0x3c0 +#define VGA_AR_VID_EN (1<<5) +#define VGA_AR_DATA_WRITE 0x3c0 +#define VGA_AR_DATA_READ 0x3c1 + +#define VGA_GR_INDEX 0x3ce +#define VGA_GR_DATA 0x3cf +/* GR05 */ +#define VGA_GR_MEM_READ_MODE_SHIFT 3 +#define VGA_GR_MEM_READ_MODE_PLANE 1 +/* GR06 */ +#define VGA_GR_MEM_MODE_MASK 0xc +#define VGA_GR_MEM_MODE_SHIFT 2 +#define VGA_GR_MEM_A0000_AFFFF 0 +#define VGA_GR_MEM_A0000_BFFFF 1 +#define VGA_GR_MEM_B0000_B7FFF 2 +#define VGA_GR_MEM_B0000_BFFFF 3 + +#define VGA_DACMASK 0x3c6 +#define VGA_DACRX 0x3c7 +#define VGA_DACWX 0x3c8 +#define VGA_DACDATA 0x3c9 + +#define VGA_CR_INDEX_MDA 0x3b4 +#define VGA_CR_DATA_MDA 0x3b5 +#define VGA_CR_INDEX_CGA 0x3d4 +#define VGA_CR_DATA_CGA 0x3d5 + +#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) #define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23)) #define CMD_REPORT_HEAD (7<<23) #define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1) #define CMD_OP_BATCH_BUFFER ((0x0<<29)|(0x30<<23)|0x1) -#define INST_PARSER_CLIENT 0x00000000 -#define INST_OP_FLUSH 0x02000000 -#define INST_FLUSH_MAP_CACHE 0x00000001 +#define CMD_MI_FLUSH (0x04 << 23) +#define MI_NO_WRITE_FLUSH (1 << 2) +#define MI_READ_FLUSH (1 << 0) +#define MI_EXE_FLUSH (1 << 1) +#define MI_END_SCENE (1 << 4) /* flush binner and incr scene count */ +#define MI_SCENE_COUNT (1 << 3) /* just increment scene count */ + +/* Packet to load a register value from the ring/batch command stream: + */ +#define CMD_MI_LOAD_REGISTER_IMM ((0x22 << 23)|0x1) #define BB1_START_ADDR_MASK (~0x7) #define BB1_PROTECTED (1<<0) #define BB1_UNPROTECTED (0<<0) #define BB2_END_ADDR_MASK (~0x7) +/* Framebuffer compression */ +#define FBC_CFB_BASE 0x03200 /* 4k page aligned */ +#define FBC_LL_BASE 0x03204 /* 4k page aligned */ +#define FBC_CONTROL 0x03208 +#define FBC_CTL_EN (1<<31) +#define FBC_CTL_PERIODIC (1<<30) +#define FBC_CTL_INTERVAL_SHIFT (16) +#define FBC_CTL_UNCOMPRESSIBLE (1<<14) +#define FBC_CTL_STRIDE_SHIFT (5) +#define FBC_CTL_FENCENO (1<<0) +#define FBC_COMMAND 0x0320c +#define FBC_CMD_COMPRESS (1<<0) +#define FBC_STATUS 0x03210 +#define FBC_STAT_COMPRESSING (1<<31) +#define FBC_STAT_COMPRESSED (1<<30) +#define FBC_STAT_MODIFIED (1<<29) +#define FBC_STAT_CURRENT_LINE (1<<0) +#define FBC_CONTROL2 0x03214 +#define FBC_CTL_FENCE_DBL (0<<4) +#define FBC_CTL_IDLE_IMM (0<<2) +#define FBC_CTL_IDLE_FULL (1<<2) +#define FBC_CTL_IDLE_LINE (2<<2) +#define FBC_CTL_IDLE_DEBUG (3<<2) +#define 
FBC_CTL_CPU_FENCE (1<<1) +#define FBC_CTL_PLANEA (0<<0) +#define FBC_CTL_PLANEB (1<<0) +#define FBC_FENCE_OFF 0x0321b + +#define FBC_LL_SIZE (1536) +#define FBC_LL_PAD (32) + +/* Interrupt bits: + */ +#define USER_INT_FLAG (1<<1) +#define VSYNC_PIPEB_FLAG (1<<5) +#define VSYNC_PIPEA_FLAG (1<<7) +#define HWB_OOM_FLAG (1<<13) /* binner out of memory */ + #define I915REG_HWSTAM 0x02098 #define I915REG_INT_IDENTITY_R 0x020a4 -#define I915REG_INT_MASK_R 0x020a8 +#define I915REG_INT_MASK_R 0x020a8 #define I915REG_INT_ENABLE_R 0x020a0 +#define I915REG_INSTPM 0x020c0 #define I915REG_PIPEASTAT 0x70024 #define I915REG_PIPEBSTAT 0x71024 @@ -229,7 +448,7 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); #define SRX_INDEX 0x3c4 #define SRX_DATA 0x3c5 #define SR01 1 -#define SR01_SCREEN_OFF (1<<5) +#define SR01_SCREEN_OFF (1<<5) #define PPCR 0x61204 #define PPCR_ON (1<<0) @@ -249,31 +468,129 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); #define ADPA_DPMS_OFF (3<<10) #define NOPID 0x2094 -#define LP_RING 0x2030 -#define HP_RING 0x2040 -#define RING_TAIL 0x00 +#define LP_RING 0x2030 +#define HP_RING 0x2040 +/* The binner has its own ring buffer: + */ +#define HWB_RING 0x2400 + +#define RING_TAIL 0x00 #define TAIL_ADDR 0x001FFFF8 -#define RING_HEAD 0x04 -#define HEAD_WRAP_COUNT 0xFFE00000 -#define HEAD_WRAP_ONE 0x00200000 -#define HEAD_ADDR 0x001FFFFC -#define RING_START 0x08 -#define START_ADDR 0x0xFFFFF000 -#define RING_LEN 0x0C -#define RING_NR_PAGES 0x001FF000 -#define RING_REPORT_MASK 0x00000006 -#define RING_REPORT_64K 0x00000002 -#define RING_REPORT_128K 0x00000004 -#define RING_NO_REPORT 0x00000000 -#define RING_VALID_MASK 0x00000001 -#define RING_VALID 0x00000001 -#define RING_INVALID 0x00000000 +#define RING_HEAD 0x04 +#define HEAD_WRAP_COUNT 0xFFE00000 +#define HEAD_WRAP_ONE 0x00200000 +#define HEAD_ADDR 0x001FFFFC +#define RING_START 0x08 +#define START_ADDR 0x0xFFFFF000 +#define RING_LEN 0x0C +#define RING_NR_PAGES 0x001FF000 +#define RING_REPORT_MASK 0x00000006 +#define RING_REPORT_64K 0x00000002 +#define RING_REPORT_128K 0x00000004 +#define RING_NO_REPORT 0x00000000 +#define RING_VALID_MASK 0x00000001 +#define RING_VALID 0x00000001 +#define RING_INVALID 0x00000000 + +/* Instruction parser error reg: + */ +#define IPEIR 0x2088 + +/* Scratch pad debug 0 reg: + */ +#define SCPD0 0x209c + +/* Error status reg: + */ +#define ESR 0x20b8 + +/* Secondary DMA fetch address debug reg: + */ +#define DMA_FADD_S 0x20d4 + +/* Cache mode 0 reg. + * - Manipulating render cache behaviour is central + * to the concept of zone rendering, tuning this reg can help avoid + * unnecessary render cache reads and even writes (for z/stencil) + * at beginning and end of scene. + * + * - To change a bit, write to this reg with a mask bit set and the + * bit of interest either set or cleared. EG: (BIT<<16) | BIT to set. + */ +#define Cache_Mode_0 0x2120 +#define CM0_MASK_SHIFT 16 +#define CM0_IZ_OPT_DISABLE (1<<6) +#define CM0_ZR_OPT_DISABLE (1<<5) +#define CM0_DEPTH_EVICT_DISABLE (1<<4) +#define CM0_COLOR_EVICT_DISABLE (1<<3) +#define CM0_DEPTH_WRITE_DISABLE (1<<1) +#define CM0_RC_OP_FLUSH_DISABLE (1<<0) + + +/* Graphics flush control. A CPU write flushes the GWB of all writes. + * The data is discarded. + */ +#define GFX_FLSH_CNTL 0x2170 + +/* Binner control. Defines the location of the bin pointer list: + */ +#define BINCTL 0x2420 +#define BC_MASK (1 << 9) + +/* Binned scene info. 
+ */ +#define BINSCENE 0x2428 +#define BS_OP_LOAD (1 << 8) +#define BS_MASK (1 << 22) + +/* Bin command parser debug reg: + */ +#define BCPD 0x2480 + +/* Bin memory control debug reg: + */ +#define BMCD 0x2484 + +/* Bin data cache debug reg: + */ +#define BDCD 0x2488 + +/* Binner pointer cache debug reg: + */ +#define BPCD 0x248c + +/* Binner scratch pad debug reg: + */ +#define BINSKPD 0x24f0 + +/* HWB scratch pad debug reg: + */ +#define HWBSKPD 0x24f4 + +/* Binner memory pool reg: + */ +#define BMP_BUFFER 0x2430 +#define BMP_PAGE_SIZE_4K (0 << 10) +#define BMP_BUFFER_SIZE_SHIFT 1 +#define BMP_ENABLE (1 << 0) + +/* Get/put memory from the binner memory pool: + */ +#define BMP_GET 0x2438 +#define BMP_PUT 0x2440 +#define BMP_OFFSET_SHIFT 5 + +/* 3D state packets: + */ +#define GFX_OP_RASTER_RULES ((0x3<<29)|(0x7<<24)) #define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19)) #define SC_UPDATE_SCISSOR (0x1<<1) #define SC_ENABLE_MASK (0x1<<0) #define SC_ENABLE (0x1<<0) +#define GFX_OP_LOAD_INDIRECT ((0x3<<29)|(0x1d<<24)|(0x7<<16)) + #define GFX_OP_SCISSOR_INFO ((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1)) #define SCI_YMIN_MASK (0xffff<<16) #define SCI_XMIN_MASK (0xffff<<0) @@ -288,19 +605,21 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); #define GFX_OP_DESTBUFFER_VARS ((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0) #define GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3)) -#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) +#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) +#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4) #define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6) #define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21) #define XY_SRC_COPY_BLT_WRITE_RGB (1<<20) -#define MI_BATCH_BUFFER ((0x30<<23)|1) -#define MI_BATCH_BUFFER_START (0x31<<23) -#define MI_BATCH_BUFFER_END (0xA<<23) +#define MI_BATCH_BUFFER ((0x30<<23)|1) +#define MI_BATCH_BUFFER_START (0x31<<23) +#define MI_BATCH_BUFFER_END (0xA<<23) #define MI_BATCH_NON_SECURE (1) #define MI_BATCH_NON_SECURE_I965 (1<<8) #define MI_WAIT_FOR_EVENT ((0x3<<23)) +#define MI_WAIT_FOR_PLANE_B_FLIP (1<<6) #define MI_WAIT_FOR_PLANE_A_FLIP (1<<2) #define MI_WAIT_FOR_PLANE_A_SCANLINES (1<<1) @@ -308,9 +627,531 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); #define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2) #define ASYNC_FLIP (1<<22) +#define DISPLAY_PLANE_A (0<<20) +#define DISPLAY_PLANE_B (1<<20) + +/* Display regs */ +#define DSPACNTR 0x70180 +#define DSPBCNTR 0x71180 +#define DISPPLANE_SEL_PIPE_MASK (1<<24) + +/* Define the region of interest for the binner: + */ +#define CMD_OP_BIN_CONTROL ((0x3<<29)|(0x1d<<24)|(0x84<<16)|4) #define CMD_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1) -#define READ_BREADCRUMB(dev_priv) (((u32 *)(dev_priv->hw_status_page))[5]) +#define BREADCRUMB_BITS 31 +#define BREADCRUMB_MASK ((1U << BREADCRUMB_BITS) - 1) + +#define READ_BREADCRUMB(dev_priv) (((volatile u32*)(dev_priv->hw_status_page))[5]) +#define READ_HWSP(dev_priv, reg) (((volatile u32*)(dev_priv->hw_status_page))[reg]) + +#define BLC_PWM_CTL 0x61254 +#define BACKLIGHT_MODULATION_FREQ_SHIFT (17) + +#define BLC_PWM_CTL2 0x61250 +/** + * This is the most significant 15 bits of the number of backlight cycles in a + * complete cycle of the modulated backlight control. + * + * The actual value is this field multiplied by two. 
+ */ +#define BACKLIGHT_MODULATION_FREQ_MASK (0x7fff << 17) +#define BLM_LEGACY_MODE (1 << 16) +/** + * This is the number of cycles out of the backlight modulation cycle for which + * the backlight is on. + * + * This field must be no greater than the number of cycles in the complete + * backlight modulation cycle. + */ +#define BACKLIGHT_DUTY_CYCLE_SHIFT (0) +#define BACKLIGHT_DUTY_CYCLE_MASK (0xffff) + +#define I915_GCFGC 0xf0 +#define I915_LOW_FREQUENCY_ENABLE (1 << 7) +#define I915_DISPLAY_CLOCK_190_200_MHZ (0 << 4) +#define I915_DISPLAY_CLOCK_333_MHZ (4 << 4) +#define I915_DISPLAY_CLOCK_MASK (7 << 4) + +#define I855_HPLLCC 0xc0 +#define I855_CLOCK_CONTROL_MASK (3 << 0) +#define I855_CLOCK_133_200 (0 << 0) +#define I855_CLOCK_100_200 (1 << 0) +#define I855_CLOCK_100_133 (2 << 0) +#define I855_CLOCK_166_250 (3 << 0) + +/* p317, 319 + */ +#define VCLK2_VCO_M 0x6008 /* treat as 16 bit? (includes msbs) */ +#define VCLK2_VCO_N 0x600a +#define VCLK2_VCO_DIV_SEL 0x6012 + +#define VCLK_DIVISOR_VGA0 0x6000 +#define VCLK_DIVISOR_VGA1 0x6004 +#define VCLK_POST_DIV 0x6010 +/** Selects a post divisor of 4 instead of 2. */ +# define VGA1_PD_P2_DIV_4 (1 << 15) +/** Overrides the p2 post divisor field */ +# define VGA1_PD_P1_DIV_2 (1 << 13) +# define VGA1_PD_P1_SHIFT 8 +/** P1 value is 2 greater than this field */ +# define VGA1_PD_P1_MASK (0x1f << 8) +/** Selects a post divisor of 4 instead of 2. */ +# define VGA0_PD_P2_DIV_4 (1 << 7) +/** Overrides the p2 post divisor field */ +# define VGA0_PD_P1_DIV_2 (1 << 5) +# define VGA0_PD_P1_SHIFT 0 +/** P1 value is 2 greater than this field */ +# define VGA0_PD_P1_MASK (0x1f << 0) + +/* I830 CRTC registers */ +#define HTOTAL_A 0x60000 +#define HBLANK_A 0x60004 +#define HSYNC_A 0x60008 +#define VTOTAL_A 0x6000c +#define VBLANK_A 0x60010 +#define VSYNC_A 0x60014 +#define PIPEASRC 0x6001c +#define BCLRPAT_A 0x60020 +#define VSYNCSHIFT_A 0x60028 + +#define HTOTAL_B 0x61000 +#define HBLANK_B 0x61004 +#define HSYNC_B 0x61008 +#define VTOTAL_B 0x6100c +#define VBLANK_B 0x61010 +#define VSYNC_B 0x61014 +#define PIPEBSRC 0x6101c +#define BCLRPAT_B 0x61020 +#define VSYNCSHIFT_B 0x61028 + +#define PP_STATUS 0x61200 +# define PP_ON (1 << 31) +/** + * Indicates that all dependencies of the panel are on: + * + * - PLL enabled + * - pipe enabled + * - LVDS/DVOB/DVOC on + */ +# define PP_READY (1 << 30) +# define PP_SEQUENCE_NONE (0 << 28) +# define PP_SEQUENCE_ON (1 << 28) +# define PP_SEQUENCE_OFF (2 << 28) +# define PP_SEQUENCE_MASK 0x30000000 +#define PP_CONTROL 0x61204 +# define POWER_TARGET_ON (1 << 0) + +#define LVDSPP_ON 0x61208 +#define LVDSPP_OFF 0x6120c +#define PP_CYCLE 0x61210 + +#define PFIT_CONTROL 0x61230 +# define PFIT_ENABLE (1 << 31) +# define PFIT_PIPE_MASK (3 << 29) +# define PFIT_PIPE_SHIFT 29 +# define VERT_INTERP_DISABLE (0 << 10) +# define VERT_INTERP_BILINEAR (1 << 10) +# define VERT_INTERP_MASK (3 << 10) +# define VERT_AUTO_SCALE (1 << 9) +# define HORIZ_INTERP_DISABLE (0 << 6) +# define HORIZ_INTERP_BILINEAR (1 << 6) +# define HORIZ_INTERP_MASK (3 << 6) +# define HORIZ_AUTO_SCALE (1 << 5) +# define PANEL_8TO6_DITHER_ENABLE (1 << 3) + +#define PFIT_PGM_RATIOS 0x61234 +# define PFIT_VERT_SCALE_MASK 0xfff00000 +# define PFIT_HORIZ_SCALE_MASK 0x0000fff0 + +#define PFIT_AUTO_RATIOS 0x61238 + + +#define DPLL_A 0x06014 +#define DPLL_B 0x06018 +# define DPLL_VCO_ENABLE (1 << 31) +# define DPLL_DVO_HIGH_SPEED (1 << 30) +# define DPLL_SYNCLOCK_ENABLE (1 << 29) +# define DPLL_VGA_MODE_DIS (1 << 28) +# define DPLLB_MODE_DAC_SERIAL (1 << 26) /* i915 */ +# 
define DPLLB_MODE_LVDS (2 << 26) /* i915 */ +# define DPLL_MODE_MASK (3 << 26) +# define DPLL_DAC_SERIAL_P2_CLOCK_DIV_10 (0 << 24) /* i915 */ +# define DPLL_DAC_SERIAL_P2_CLOCK_DIV_5 (1 << 24) /* i915 */ +# define DPLLB_LVDS_P2_CLOCK_DIV_14 (0 << 24) /* i915 */ +# define DPLLB_LVDS_P2_CLOCK_DIV_7 (1 << 24) /* i915 */ +# define DPLL_P2_CLOCK_DIV_MASK 0x03000000 /* i915 */ +# define DPLL_FPA01_P1_POST_DIV_MASK 0x00ff0000 /* i915 */ +/** + * The i830 generation, in DAC/serial mode, defines p1 as two plus this + * bitfield, or just 2 if PLL_P1_DIVIDE_BY_TWO is set. + */ +# define DPLL_FPA01_P1_POST_DIV_MASK_I830 0x001f0000 +/** + * The i830 generation, in LVDS mode, defines P1 as the bit number set within + * this field (only one bit may be set). + */ +# define DPLL_FPA01_P1_POST_DIV_MASK_I830_LVDS 0x003f0000 +# define DPLL_FPA01_P1_POST_DIV_SHIFT 16 +# define PLL_P2_DIVIDE_BY_4 (1 << 23) /* i830, required in DVO non-gang */ +# define PLL_P1_DIVIDE_BY_TWO (1 << 21) /* i830 */ +# define PLL_REF_INPUT_DREFCLK (0 << 13) +# define PLL_REF_INPUT_TVCLKINA (1 << 13) /* i830 */ +# define PLL_REF_INPUT_TVCLKINBC (2 << 13) /* SDVO TVCLKIN */ +# define PLLB_REF_INPUT_SPREADSPECTRUMIN (3 << 13) +# define PLL_REF_INPUT_MASK (3 << 13) +# define PLL_LOAD_PULSE_PHASE_SHIFT 9 +/* + * Parallel to Serial Load Pulse phase selection. + * Selects the phase for the 10X DPLL clock for the PCIe + * digital display port. The range is 4 to 13; 10 or more + * is just a flip delay. The default is 6 + */ +# define PLL_LOAD_PULSE_PHASE_MASK (0xf << PLL_LOAD_PULSE_PHASE_SHIFT) +# define DISPLAY_RATE_SELECT_FPA1 (1 << 8) + +/** + * SDVO multiplier for 945G/GM. Not used on 965. + * + * \sa DPLL_MD_UDI_MULTIPLIER_MASK + */ +# define SDVO_MULTIPLIER_MASK 0x000000ff +# define SDVO_MULTIPLIER_SHIFT_HIRES 4 +# define SDVO_MULTIPLIER_SHIFT_VGA 0 + +/** @defgroup DPLL_MD + * @{ + */ +/** Pipe A SDVO/UDI clock multiplier/divider register for G965. */ +#define DPLL_A_MD 0x0601c +/** Pipe B SDVO/UDI clock multiplier/divider register for G965. */ +#define DPLL_B_MD 0x06020 +/** + * UDI pixel divider, controlling how many pixels are stuffed into a packet. + * + * Value is pixels minus 1. Must be set to 1 pixel for SDVO. + */ +# define DPLL_MD_UDI_DIVIDER_MASK 0x3f000000 +# define DPLL_MD_UDI_DIVIDER_SHIFT 24 +/** UDI pixel divider for VGA, same as DPLL_MD_UDI_DIVIDER_MASK. */ +# define DPLL_MD_VGA_UDI_DIVIDER_MASK 0x003f0000 +# define DPLL_MD_VGA_UDI_DIVIDER_SHIFT 16 +/** + * SDVO/UDI pixel multiplier. + * + * SDVO requires that the bus clock rate be between 1 and 2 Ghz, and the bus + * clock rate is 10 times the DPLL clock. At low resolution/refresh rate + * modes, the bus rate would be below the limits, so SDVO allows for stuffing + * dummy bytes in the datastream at an increased clock rate, with both sides of + * the link knowing how many bytes are fill. + * + * So, for a mode with a dotclock of 65Mhz, we would want to double the clock + * rate to 130Mhz to get a bus rate of 1.30Ghz. The DPLL clock rate would be + * set to 130Mhz, and the SDVO multiplier set to 2x in this register and + * through an SDVO command. + * + * This register field has values of multiplication factor minus 1, with + * a maximum multiplier of 5 for SDVO. + */ +# define DPLL_MD_UDI_MULTIPLIER_MASK 0x00003f00 +# define DPLL_MD_UDI_MULTIPLIER_SHIFT 8 +/** SDVO/UDI pixel multiplier for VGA, same as DPLL_MD_UDI_MULTIPLIER_MASK. + * This best be set to the default value (3) or the CRT won't work. No, + * I don't entirely understand what this does... 
+ */ +# define DPLL_MD_VGA_UDI_MULTIPLIER_MASK 0x0000003f +# define DPLL_MD_VGA_UDI_MULTIPLIER_SHIFT 0 +/** @} */ + +#define DPLL_TEST 0x606c +# define DPLLB_TEST_SDVO_DIV_1 (0 << 22) +# define DPLLB_TEST_SDVO_DIV_2 (1 << 22) +# define DPLLB_TEST_SDVO_DIV_4 (2 << 22) +# define DPLLB_TEST_SDVO_DIV_MASK (3 << 22) +# define DPLLB_TEST_N_BYPASS (1 << 19) +# define DPLLB_TEST_M_BYPASS (1 << 18) +# define DPLLB_INPUT_BUFFER_ENABLE (1 << 16) +# define DPLLA_TEST_N_BYPASS (1 << 3) +# define DPLLA_TEST_M_BYPASS (1 << 2) +# define DPLLA_INPUT_BUFFER_ENABLE (1 << 0) + +#define ADPA 0x61100 +#define ADPA_DAC_ENABLE (1<<31) +#define ADPA_DAC_DISABLE 0 +#define ADPA_PIPE_SELECT_MASK (1<<30) +#define ADPA_PIPE_A_SELECT 0 +#define ADPA_PIPE_B_SELECT (1<<30) +#define ADPA_USE_VGA_HVPOLARITY (1<<15) +#define ADPA_SETS_HVPOLARITY 0 +#define ADPA_VSYNC_CNTL_DISABLE (1<<11) +#define ADPA_VSYNC_CNTL_ENABLE 0 +#define ADPA_HSYNC_CNTL_DISABLE (1<<10) +#define ADPA_HSYNC_CNTL_ENABLE 0 +#define ADPA_VSYNC_ACTIVE_HIGH (1<<4) +#define ADPA_VSYNC_ACTIVE_LOW 0 +#define ADPA_HSYNC_ACTIVE_HIGH (1<<3) +#define ADPA_HSYNC_ACTIVE_LOW 0 + +#define FPA0 0x06040 +#define FPA1 0x06044 +#define FPB0 0x06048 +#define FPB1 0x0604c +# define FP_N_DIV_MASK 0x003f0000 +# define FP_N_DIV_SHIFT 16 +# define FP_M1_DIV_MASK 0x00003f00 +# define FP_M1_DIV_SHIFT 8 +# define FP_M2_DIV_MASK 0x0000003f +# define FP_M2_DIV_SHIFT 0 + + +#define PORT_HOTPLUG_EN 0x61110 +# define SDVOB_HOTPLUG_INT_EN (1 << 26) +# define SDVOC_HOTPLUG_INT_EN (1 << 25) +# define TV_HOTPLUG_INT_EN (1 << 18) +# define CRT_HOTPLUG_INT_EN (1 << 9) +# define CRT_HOTPLUG_FORCE_DETECT (1 << 3) + +#define PORT_HOTPLUG_STAT 0x61114 +# define CRT_HOTPLUG_INT_STATUS (1 << 11) +# define TV_HOTPLUG_INT_STATUS (1 << 10) +# define CRT_HOTPLUG_MONITOR_MASK (3 << 8) +# define CRT_HOTPLUG_MONITOR_COLOR (3 << 8) +# define CRT_HOTPLUG_MONITOR_MONO (2 << 8) +# define CRT_HOTPLUG_MONITOR_NONE (0 << 8) +# define SDVOC_HOTPLUG_INT_STATUS (1 << 7) +# define SDVOB_HOTPLUG_INT_STATUS (1 << 6) + +#define SDVOB 0x61140 +#define SDVOC 0x61160 +#define SDVO_ENABLE (1 << 31) +#define SDVO_PIPE_B_SELECT (1 << 30) +#define SDVO_STALL_SELECT (1 << 29) +#define SDVO_INTERRUPT_ENABLE (1 << 26) +/** + * 915G/GM SDVO pixel multiplier. + * + * Programmed value is multiplier - 1, up to 5x. + * + * \sa DPLL_MD_UDI_MULTIPLIER_MASK + */ +#define SDVO_PORT_MULTIPLY_MASK (7 << 23) +#define SDVO_PORT_MULTIPLY_SHIFT 23 +#define SDVO_PHASE_SELECT_MASK (15 << 19) +#define SDVO_PHASE_SELECT_DEFAULT (6 << 19) +#define SDVO_CLOCK_OUTPUT_INVERT (1 << 18) +#define SDVOC_GANG_MODE (1 << 16) +#define SDVO_BORDER_ENABLE (1 << 7) +#define SDVOB_PCIE_CONCURRENCY (1 << 3) +#define SDVO_DETECTED (1 << 2) +/* Bits to be preserved when writing */ +#define SDVOB_PRESERVE_MASK ((1 << 17) | (1 << 16) | (1 << 14)) +#define SDVOC_PRESERVE_MASK (1 << 17) + +/** @defgroup LVDS + * @{ + */ +/** + * This register controls the LVDS output enable, pipe selection, and data + * format selection. + * + * All of the clock/data pairs are force powered down by power sequencing. + */ +#define LVDS 0x61180 +/** + * Enables the LVDS port. This bit must be set before DPLLs are enabled, as + * the DPLL semantics change when the LVDS is assigned to that pipe. + */ +# define LVDS_PORT_EN (1 << 31) +/** Selects pipe B for LVDS data. Must be set on pre-965. */ +# define LVDS_PIPEB_SELECT (1 << 30) + +/** + * Enables the A0-A2 data pairs and CLKA, containing 18 bits of color data per + * pixel. 
+ */ +# define LVDS_A0A2_CLKA_POWER_MASK (3 << 8) +# define LVDS_A0A2_CLKA_POWER_DOWN (0 << 8) +# define LVDS_A0A2_CLKA_POWER_UP (3 << 8) +/** + * Controls the A3 data pair, which contains the additional LSBs for 24 bit + * mode. Only enabled if LVDS_A0A2_CLKA_POWER_UP also indicates it should be + * on. + */ +# define LVDS_A3_POWER_MASK (3 << 6) +# define LVDS_A3_POWER_DOWN (0 << 6) +# define LVDS_A3_POWER_UP (3 << 6) +/** + * Controls the CLKB pair. This should only be set when LVDS_B0B3_POWER_UP + * is set. + */ +# define LVDS_CLKB_POWER_MASK (3 << 4) +# define LVDS_CLKB_POWER_DOWN (0 << 4) +# define LVDS_CLKB_POWER_UP (3 << 4) + +/** + * Controls the B0-B3 data pairs. This must be set to match the DPLL p2 + * setting for whether we are in dual-channel mode. The B3 pair will + * additionally only be powered up when LVDS_A3_POWER_UP is set. + */ +# define LVDS_B0B3_POWER_MASK (3 << 2) +# define LVDS_B0B3_POWER_DOWN (0 << 2) +# define LVDS_B0B3_POWER_UP (3 << 2) + +#define PIPEACONF 0x70008 +#define PIPEACONF_ENABLE (1<<31) +#define PIPEACONF_DISABLE 0 +#define PIPEACONF_DOUBLE_WIDE (1<<30) +#define I965_PIPECONF_ACTIVE (1<<30) +#define PIPEACONF_SINGLE_WIDE 0 +#define PIPEACONF_PIPE_UNLOCKED 0 +#define PIPEACONF_PIPE_LOCKED (1<<25) +#define PIPEACONF_PALETTE 0 +#define PIPEACONF_GAMMA (1<<24) +#define PIPECONF_FORCE_BORDER (1<<25) +#define PIPECONF_PROGRESSIVE (0 << 21) +#define PIPECONF_INTERLACE_W_FIELD_INDICATION (6 << 21) +#define PIPECONF_INTERLACE_FIELD_0_ONLY (7 << 21) + +#define PIPEBCONF 0x71008 +#define PIPEBCONF_ENABLE (1<<31) +#define PIPEBCONF_DISABLE 0 +#define PIPEBCONF_DOUBLE_WIDE (1<<30) +#define PIPEBCONF_DISABLE 0 +#define PIPEBCONF_GAMMA (1<<24) +#define PIPEBCONF_PALETTE 0 + +#define PIPEBGCMAXRED 0x71010 +#define PIPEBGCMAXGREEN 0x71014 +#define PIPEBGCMAXBLUE 0x71018 +#define PIPEBSTAT 0x71024 +#define PIPEBFRAMEHIGH 0x71040 +#define PIPEBFRAMEPIXEL 0x71044 + +#define DSPACNTR 0x70180 +#define DSPBCNTR 0x71180 +#define DISPLAY_PLANE_ENABLE (1<<31) +#define DISPLAY_PLANE_DISABLE 0 +#define DISPPLANE_GAMMA_ENABLE (1<<30) +#define DISPPLANE_GAMMA_DISABLE 0 +#define DISPPLANE_PIXFORMAT_MASK (0xf<<26) +#define DISPPLANE_8BPP (0x2<<26) +#define DISPPLANE_15_16BPP (0x4<<26) +#define DISPPLANE_16BPP (0x5<<26) +#define DISPPLANE_32BPP_NO_ALPHA (0x6<<26) +#define DISPPLANE_32BPP (0x7<<26) +#define DISPPLANE_STEREO_ENABLE (1<<25) +#define DISPPLANE_STEREO_DISABLE 0 +#define DISPPLANE_SEL_PIPE_MASK (1<<24) +#define DISPPLANE_SEL_PIPE_A 0 +#define DISPPLANE_SEL_PIPE_B (1<<24) +#define DISPPLANE_SRC_KEY_ENABLE (1<<22) +#define DISPPLANE_SRC_KEY_DISABLE 0 +#define DISPPLANE_LINE_DOUBLE (1<<20) +#define DISPPLANE_NO_LINE_DOUBLE 0 +#define DISPPLANE_STEREO_POLARITY_FIRST 0 +#define DISPPLANE_STEREO_POLARITY_SECOND (1<<18) +/* plane B only */ +#define DISPPLANE_ALPHA_TRANS_ENABLE (1<<15) +#define DISPPLANE_ALPHA_TRANS_DISABLE 0 +#define DISPPLANE_SPRITE_ABOVE_DISPLAYA 0 +#define DISPPLANE_SPRITE_ABOVE_OVERLAY (1) + +#define DSPABASE 0x70184 +#define DSPASTRIDE 0x70188 + +#define DSPBBASE 0x71184 +#define DSPBADDR DSPBBASE +#define DSPBSTRIDE 0x71188 + +#define DSPAKEYVAL 0x70194 +#define DSPAKEYMASK 0x70198 + +#define DSPAPOS 0x7018C /* reserved */ +#define DSPASIZE 0x70190 +#define DSPBPOS 0x7118C +#define DSPBSIZE 0x71190 + +#define DSPASURF 0x7019C +#define DSPATILEOFF 0x701A4 + +#define DSPBSURF 0x7119C +#define DSPBTILEOFF 0x711A4 + +#define VGACNTRL 0x71400 +# define VGA_DISP_DISABLE (1 << 31) +# define VGA_2X_MODE (1 << 30) +# define VGA_PIPE_B_SELECT (1 << 29) + +/* + * Some 
BIOS scratch area registers. The 845 (and 830?) store the amount + * of video memory available to the BIOS in SWF1. + */ + +#define SWF0 0x71410 + +/* + * 855 scratch registers. + */ +#define SWF10 0x70410 + +#define SWF30 0x72414 + +/* + * Overlay registers. These are overlay registers accessed via MMIO. + * Those loaded via the overlay register page are defined in i830_video.c. + */ +#define OVADD 0x30000 + +#define DOVSTA 0x30008 +#define OC_BUF (0x3<<20) + +#define OGAMC5 0x30010 +#define OGAMC4 0x30014 +#define OGAMC3 0x30018 +#define OGAMC2 0x3001c +#define OGAMC1 0x30020 +#define OGAMC0 0x30024 +/* + * Palette registers + */ +#define PALETTE_A 0x0a000 +#define PALETTE_B 0x0a800 + +#define IS_I830(dev) ((dev)->pci_device == 0x3577) +#define IS_845G(dev) ((dev)->pci_device == 0x2562) +#define IS_I85X(dev) ((dev)->pci_device == 0x3582) +#define IS_I855(dev) ((dev)->pci_device == 0x3582) +#define IS_I865G(dev) ((dev)->pci_device == 0x2572) + +#define IS_I915G(dev) ((dev)->pci_device == 0x2582 || (dev)->pci_device == 0x258a) +#define IS_I915GM(dev) ((dev)->pci_device == 0x2592) +#define IS_I945G(dev) ((dev)->pci_device == 0x2772) +#define IS_I945GM(dev) ((dev)->pci_device == 0x27A2) + +#define IS_I965G(dev) ((dev)->pci_device == 0x2972 || \ + (dev)->pci_device == 0x2982 || \ + (dev)->pci_device == 0x2992 || \ + (dev)->pci_device == 0x29A2 || \ + (dev)->pci_device == 0x2A02 || \ + (dev)->pci_device == 0x2A12 || \ + (dev)->pci_device == 0x2A42) + +#define IS_I965GM(dev) ((dev)->pci_device == 0x2A02) + +#define IS_IGD_GM(dev) ((dev)->pci_device == 0x2A42) + +#define IS_G33(dev) ((dev)->pci_device == 0x29C2 || \ + (dev)->pci_device == 0x29B2 || \ + (dev)->pci_device == 0x29D2) + +#define IS_I9XX(dev) (IS_I915G(dev) || IS_I915GM(dev) || IS_I945G(dev) || \ + IS_I945GM(dev) || IS_I965G(dev) || IS_G33(dev)) + +#define IS_MOBILE(dev) (IS_I830(dev) || IS_I85X(dev) || IS_I915GM(dev) || \ + IS_I945GM(dev) || IS_I965GM(dev) || IS_IGD_GM(dev)) + +#define PRIMARY_RINGBUFFER_SIZE (128*1024) #endif diff --git a/drivers/char/drm/i915_fence.c b/drivers/char/drm/i915_fence.c new file mode 100644 index 0000000..7f93978 --- /dev/null +++ b/drivers/char/drm/i915_fence.c @@ -0,0 +1,158 @@ +/************************************************************************** + * + * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA + * All Rights Reserved. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the + * "Software"), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sub license, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * USE OR OTHER DEALINGS IN THE SOFTWARE. + * + * The above copyright notice and this permission notice (including the + * next paragraph) shall be included in all copies or substantial portions + * of the Software. 
+ * + * + **************************************************************************/ +/* + * Authors: Thomas Hellström + */ + +#include "drmP.h" +#include "drm.h" +#include "i915_drm.h" +#include "i915_drv.h" + +/* + * Implements an intel sync flush operation. + */ + +static void i915_perform_flush(struct drm_device *dev) +{ + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; + struct drm_fence_manager *fm = &dev->fm; + struct drm_fence_class_manager *fc = &fm->fence_class[0]; + struct drm_fence_driver *driver = dev->driver->fence_driver; + uint32_t flush_flags = 0; + uint32_t flush_sequence = 0; + uint32_t i_status; + uint32_t diff; + uint32_t sequence; + int rwflush; + + if (!dev_priv) + return; + + if (fc->pending_exe_flush) { + sequence = READ_BREADCRUMB(dev_priv); + + /* + * First update fences with the current breadcrumb. + */ + + diff = (sequence - fc->last_exe_flush) & BREADCRUMB_MASK; + if (diff < driver->wrap_diff && diff != 0) + drm_fence_handler(dev, 0, sequence, + DRM_FENCE_TYPE_EXE, 0); + + if (dev_priv->fence_irq_on && !fc->pending_exe_flush) { + i915_user_irq_off(dev_priv); + dev_priv->fence_irq_on = 0; + } else if (!dev_priv->fence_irq_on && fc->pending_exe_flush) { + i915_user_irq_on(dev_priv); + dev_priv->fence_irq_on = 1; + } + } + + if (dev_priv->flush_pending) { + i_status = READ_HWSP(dev_priv, 0); + if ((i_status & (1 << 12)) != + (dev_priv->saved_flush_status & (1 << 12))) { + flush_flags = dev_priv->flush_flags; + flush_sequence = dev_priv->flush_sequence; + dev_priv->flush_pending = 0; + drm_fence_handler(dev, 0, flush_sequence, flush_flags, 0); + } + } + + rwflush = fc->pending_flush & DRM_I915_FENCE_TYPE_RW; + if (rwflush && !dev_priv->flush_pending) { + dev_priv->flush_sequence = (uint32_t) READ_BREADCRUMB(dev_priv); + dev_priv->flush_flags = fc->pending_flush; + dev_priv->saved_flush_status = READ_HWSP(dev_priv, 0); + I915_WRITE(I915REG_INSTPM, (1 << 5) | (1 << 21)); + dev_priv->flush_pending = 1; + fc->pending_flush &= ~DRM_I915_FENCE_TYPE_RW; + } + + if (dev_priv->flush_pending) { + i_status = READ_HWSP(dev_priv, 0); + if ((i_status & (1 << 12)) != + (dev_priv->saved_flush_status & (1 << 12))) { + flush_flags = dev_priv->flush_flags; + flush_sequence = dev_priv->flush_sequence; + dev_priv->flush_pending = 0; + drm_fence_handler(dev, 0, flush_sequence, flush_flags, 0); + } + } + +} + +void i915_poke_flush(struct drm_device *dev, uint32_t class) +{ + struct drm_fence_manager *fm = &dev->fm; + unsigned long flags; + + write_lock_irqsave(&fm->lock, flags); + i915_perform_flush(dev); + write_unlock_irqrestore(&fm->lock, flags); +} + +int i915_fence_emit_sequence(struct drm_device *dev, uint32_t class, + uint32_t flags, uint32_t *sequence, + uint32_t *native_type) +{ + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; + if (!dev_priv) + return -EINVAL; + + i915_emit_irq(dev); + *sequence = (uint32_t) dev_priv->counter; + *native_type = DRM_FENCE_TYPE_EXE; + if (flags & DRM_I915_FENCE_FLAG_FLUSHED) + *native_type |= DRM_I915_FENCE_TYPE_RW; + + return 0; +} + +void i915_fence_handler(struct drm_device *dev) +{ + struct drm_fence_manager *fm = &dev->fm; + + write_lock(&fm->lock); + i915_perform_flush(dev); + write_unlock(&fm->lock); +} + +int i915_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags) +{ + /* + * We have an irq that tells us when we have a new breadcrumb. 
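/*
 * A small self-contained illustration of the wrap-safe breadcrumb test used
 * in i915_perform_flush() above.  The mask and wrap window below are assumed
 * for the example (a full 32-bit counter and a 2^30 window), not the
 * driver's actual values; the point is only that unsigned masked subtraction
 * keeps reporting a small forward delta even after the counter wraps past
 * zero.
 */
#include <stdio.h>
#include <stdint.h>

#define EXAMPLE_BREADCRUMB_MASK 0xffffffffu
#define EXAMPLE_WRAP_DIFF (1u << 30)

static int breadcrumb_advanced(uint32_t sequence, uint32_t last_exe_flush)
{
	uint32_t diff = (sequence - last_exe_flush) & EXAMPLE_BREADCRUMB_MASK;

	return diff != 0 && diff < EXAMPLE_WRAP_DIFF;
}

int main(void)
{
	printf("%d\n", breadcrumb_advanced(3, 0xfffffffeu));	/* wrapped: prints 1 */
	printf("%d\n", breadcrumb_advanced(5, 5));		/* no progress: prints 0 */
	return 0;
}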
+ */ + + if (class == 0 && flags == DRM_FENCE_TYPE_EXE) + return 1; + + return 0; +} diff --git a/drivers/char/drm/i915_irq.c b/drivers/char/drm/i915_irq.c index a443f4a..34b63b6 100644 --- a/drivers/char/drm/i915_irq.c +++ b/drivers/char/drm/i915_irq.c @@ -38,6 +38,71 @@ #define MAX_NOPID ((u32)~0) /** + * i915_get_pipe - return the the pipe associated with a given plane + * @dev: DRM device + * @plane: plane to look for + * + * We need to get the pipe associated with a given plane to correctly perform + * vblank driven swapping, and they may not always be equal. So look up the + * pipe associated with @plane here. + */ +static int +i915_get_pipe(struct drm_device *dev, int plane) +{ + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; + u32 dspcntr; + + dspcntr = plane ? I915_READ(DSPBCNTR) : I915_READ(DSPACNTR); + + return dspcntr & DISPPLANE_SEL_PIPE_MASK ? 1 : 0; +} + +/** + * Emit a synchronous flip. + * + * This function must be called with the drawable spinlock held. + */ +static void +i915_dispatch_vsync_flip(struct drm_device *dev, struct drm_drawable_info *drw, + int plane) +{ + drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; + drm_i915_sarea_t *sarea_priv = dev_priv->sarea_priv; + u16 x1, y1, x2, y2; + int pf_planes = 1 << plane; + + /* If the window is visible on the other plane, we have to flip on that + * plane as well. + */ + if (plane == 1) { + x1 = sarea_priv->planeA_x; + y1 = sarea_priv->planeA_y; + x2 = x1 + sarea_priv->planeA_w; + y2 = y1 + sarea_priv->planeA_h; + } else { + x1 = sarea_priv->planeB_x; + y1 = sarea_priv->planeB_y; + x2 = x1 + sarea_priv->planeB_w; + y2 = y1 + sarea_priv->planeB_h; + } + + if (x2 > 0 && y2 > 0) { + int i, num_rects = drw->num_rects; + struct drm_clip_rect *rect = drw->rects; + + for (i = 0; i < num_rects; i++) + if (!(rect[i].x1 >= x2 || rect[i].y1 >= y2 || + rect[i].x2 <= x1 || rect[i].y2 <= y1)) { + pf_planes = 0x3; + + break; + } + } + + i915_dispatch_flip(dev, pf_planes, 1); +} + +/** * Emit blits for scheduled buffer swaps. * * This function will be called with the HW lock held. @@ -45,14 +110,13 @@ static void i915_vblank_tasklet(struct drm_device *dev) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; - unsigned long irqflags; struct list_head *list, *tmp, hits, *hit; - int nhits, nrects, slice[2], upper[2], lower[2], i; + int nhits, nrects, slice[2], upper[2], lower[2], i, num_pages; unsigned counter[2] = { atomic_read(&dev->vbl_received), atomic_read(&dev->vbl_received2) }; struct drm_drawable_info *drw; drm_i915_sarea_t *sarea_priv = dev_priv->sarea_priv; - u32 cpp = dev_priv->cpp; + u32 cpp = dev_priv->cpp, offsets[3]; u32 cmd = (cpp == 4) ? (XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB) @@ -67,14 +131,20 @@ static void i915_vblank_tasklet(struct drm_device *dev) nhits = nrects = 0; - spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); + /* No irqsave/restore necessary. This tasklet may be run in an + * interrupt context or normal context, but we don't have to worry + * about getting interrupted by something acquiring the lock, because + * we are the interrupt context thing that acquires the lock. 
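/*
 * Standalone check of the rectangle-overlap test that
 * i915_dispatch_vsync_flip() uses in the hunk above to decide whether the
 * drawable is also visible on the other plane (in which case both planes
 * get flipped).  The struct mirrors drm_clip_rect's fields; the test harness
 * itself is only an illustration with made-up coordinates.
 */
#include <stdio.h>

struct clip_rect {
	unsigned short x1, y1, x2, y2;
};

static int rect_overlaps(const struct clip_rect *r,
			 unsigned short x1, unsigned short y1,
			 unsigned short x2, unsigned short y2)
{
	/* overlap iff the rect is not entirely left of, right of, above, or below the box */
	return !(r->x1 >= x2 || r->y1 >= y2 || r->x2 <= x1 || r->y2 <= y1);
}

int main(void)
{
	struct clip_rect r = { 100, 100, 200, 200 };

	printf("%d\n", rect_overlaps(&r, 150, 150, 300, 300));	/* 1: flip both planes */
	printf("%d\n", rect_overlaps(&r, 200, 200, 300, 300));	/* 0: edges only touch */
	return 0;
}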
+ */ + spin_lock(&dev_priv->swaps_lock); /* Find buffer swaps scheduled for this vertical blank */ list_for_each_safe(list, tmp, &dev_priv->vbl_swaps.head) { drm_i915_vbl_swap_t *vbl_swap = list_entry(list, drm_i915_vbl_swap_t, head); + int pipe = i915_get_pipe(dev, vbl_swap->plane); - if ((counter[vbl_swap->pipe] - vbl_swap->sequence) > (1<<23)) + if ((counter[pipe] - vbl_swap->sequence) > (1<<23)) continue; list_del(list); @@ -116,33 +186,22 @@ static void i915_vblank_tasklet(struct drm_device *dev) spin_lock(&dev_priv->swaps_lock); } - if (nhits == 0) { - spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); - return; - } - spin_unlock(&dev_priv->swaps_lock); + if (nhits == 0) + return; i915_kernel_lost_context(dev); - BEGIN_LP_RING(6); - - OUT_RING(GFX_OP_DRAWRECT_INFO); - OUT_RING(0); - OUT_RING(0); - OUT_RING(sarea_priv->width | sarea_priv->height << 16); - OUT_RING(sarea_priv->width | sarea_priv->height << 16); - OUT_RING(0); - - ADVANCE_LP_RING(); - - sarea_priv->ctxOwner = DRM_KERNEL_CONTEXT; - upper[0] = upper[1] = 0; - slice[0] = max(sarea_priv->pipeA_h / nhits, 1); - slice[1] = max(sarea_priv->pipeB_h / nhits, 1); - lower[0] = sarea_priv->pipeA_y + slice[0]; - lower[1] = sarea_priv->pipeB_y + slice[0]; + slice[0] = max(sarea_priv->planeA_h / nhits, 1); + slice[1] = max(sarea_priv->planeB_h / nhits, 1); + lower[0] = sarea_priv->planeA_y + slice[0]; + lower[1] = sarea_priv->planeB_y + slice[0]; + + offsets[0] = sarea_priv->front_offset; + offsets[1] = sarea_priv->back_offset; + offsets[2] = sarea_priv->third_offset; + num_pages = sarea_priv->third_handle ? 3 : 2; spin_lock(&dev->drw_lock); @@ -154,6 +213,8 @@ static void i915_vblank_tasklet(struct drm_device *dev) for (i = 0; i++ < nhits; upper[0] = lower[0], lower[0] += slice[0], upper[1] = lower[1], lower[1] += slice[1]) { + int init_drawrect = 1; + if (i == nhits) lower[0] = lower[1] = sarea_priv->height; @@ -161,7 +222,7 @@ static void i915_vblank_tasklet(struct drm_device *dev) drm_i915_vbl_swap_t *swap_hit = list_entry(hit, drm_i915_vbl_swap_t, head); struct drm_clip_rect *rect; - int num_rects, pipe; + int num_rects, plane, front, back; unsigned short top, bottom; drw = drm_get_drawable_info(dev, swap_hit->drw_id); @@ -169,10 +230,37 @@ static void i915_vblank_tasklet(struct drm_device *dev) if (!drw) continue; + plane = swap_hit->plane; + + if (swap_hit->flip) { + i915_dispatch_vsync_flip(dev, drw, plane); + continue; + } + + if (init_drawrect) { + BEGIN_LP_RING(6); + + OUT_RING(GFX_OP_DRAWRECT_INFO); + OUT_RING(0); + OUT_RING(0); + OUT_RING(sarea_priv->width | sarea_priv->height << 16); + OUT_RING(sarea_priv->width | sarea_priv->height << 16); + OUT_RING(0); + + ADVANCE_LP_RING(); + + sarea_priv->ctxOwner = DRM_KERNEL_CONTEXT; + + init_drawrect = 0; + } + rect = drw->rects; - pipe = swap_hit->pipe; - top = upper[pipe]; - bottom = lower[pipe]; + top = upper[plane]; + bottom = lower[plane]; + + front = (dev_priv->sarea_priv->pf_current_page >> + (2 * plane)) & 0x3; + back = (front + 1) % num_pages; for (num_rects = drw->num_rects; num_rects--; rect++) { int y1 = max(rect->y1, top); @@ -187,17 +275,17 @@ static void i915_vblank_tasklet(struct drm_device *dev) OUT_RING(pitchropcpp); OUT_RING((y1 << 16) | rect->x1); OUT_RING((y2 << 16) | rect->x2); - OUT_RING(sarea_priv->front_offset); + OUT_RING(offsets[front]); OUT_RING((y1 << 16) | rect->x1); OUT_RING(pitchropcpp & 0xffff); - OUT_RING(sarea_priv->back_offset); + OUT_RING(offsets[back]); ADVANCE_LP_RING(); } } } - spin_unlock_irqrestore(&dev->drw_lock, irqflags); + 
spin_unlock(&dev->drw_lock); list_for_each_safe(hit, tmp, &hits) { drm_i915_vbl_swap_t *swap_hit = @@ -220,10 +308,7 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) pipeb_stats = I915_READ(I915REG_PIPEBSTAT); temp = I915_READ16(I915REG_INT_IDENTITY_R); - - temp &= (USER_INT_FLAG | VSYNC_PIPEA_FLAG | VSYNC_PIPEB_FLAG); - - DRM_DEBUG("%s flag=%08x\n", __FUNCTION__, temp); + temp &= (dev_priv->irq_enable_reg | USER_INT_FLAG); if (temp == 0) return IRQ_NONE; @@ -234,8 +319,10 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) dev_priv->sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); - if (temp & USER_INT_FLAG) + if (temp & USER_INT_FLAG) { DRM_WAKEUP(&dev_priv->irq_queue); + i915_fence_handler(dev); + } if (temp & (VSYNC_PIPEA_FLAG | VSYNC_PIPEB_FLAG)) { int vblank_pipe = dev_priv->vblank_pipe; @@ -269,38 +356,52 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) return IRQ_HANDLED; } -static int i915_emit_irq(struct drm_device * dev) +int i915_emit_irq(struct drm_device *dev) { drm_i915_private_t *dev_priv = dev->dev_private; RING_LOCALS; i915_kernel_lost_context(dev); - DRM_DEBUG("%s\n", __FUNCTION__); - - dev_priv->sarea_priv->last_enqueue = ++dev_priv->counter; + DRM_DEBUG("\n"); - if (dev_priv->counter > 0x7FFFFFFFUL) - dev_priv->sarea_priv->last_enqueue = dev_priv->counter = 1; + i915_emit_breadcrumb(dev); - BEGIN_LP_RING(6); - OUT_RING(CMD_STORE_DWORD_IDX); - OUT_RING(20); - OUT_RING(dev_priv->counter); - OUT_RING(0); + BEGIN_LP_RING(2); OUT_RING(0); OUT_RING(GFX_OP_USER_INTERRUPT); ADVANCE_LP_RING(); - + return dev_priv->counter; } +void i915_user_irq_on(drm_i915_private_t *dev_priv) +{ + spin_lock(&dev_priv->user_irq_lock); + if (dev_priv->irq_enabled && (++dev_priv->user_irq_refcount == 1)) { + dev_priv->irq_enable_reg |= USER_INT_FLAG; + I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg); + } + spin_unlock(&dev_priv->user_irq_lock); +} + +void i915_user_irq_off(drm_i915_private_t *dev_priv) +{ + spin_lock(&dev_priv->user_irq_lock); + if (dev_priv->irq_enabled && (--dev_priv->user_irq_refcount == 0)) { + /* dev_priv->irq_enable_reg &= ~USER_INT_FLAG; + I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg);*/ + } + spin_unlock(&dev_priv->user_irq_lock); +} + + static int i915_wait_irq(struct drm_device * dev, int irq_nr) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; int ret = 0; - DRM_DEBUG("%s irq_nr=%d breadcrumb=%d\n", __FUNCTION__, irq_nr, + DRM_DEBUG("irq_nr=%d breadcrumb=%d\n", irq_nr, READ_BREADCRUMB(dev_priv)); if (READ_BREADCRUMB(dev_priv) >= irq_nr) @@ -308,12 +409,13 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr) dev_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT; + i915_user_irq_on(dev_priv); DRM_WAIT_ON(ret, dev_priv->irq_queue, 3 * DRM_HZ, READ_BREADCRUMB(dev_priv) >= irq_nr); + i915_user_irq_off(dev_priv); if (ret == -EBUSY) { - DRM_ERROR("%s: EBUSY -- rec: %d emitted: %d\n", - __FUNCTION__, + DRM_ERROR("EBUSY -- rec: %d emitted: %d\n", READ_BREADCRUMB(dev_priv), (int)dev_priv->counter); } @@ -321,7 +423,8 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr) return ret; } -static int i915_driver_vblank_do_wait(struct drm_device *dev, unsigned int *sequence, +static int i915_driver_vblank_do_wait(struct drm_device *dev, + unsigned int *sequence, atomic_t *counter) { drm_i915_private_t *dev_priv = dev->dev_private; @@ -329,14 +432,14 @@ static int i915_driver_vblank_do_wait(struct drm_device *dev, unsigned int *sequ int ret = 0; if (!dev_priv) { - DRM_ERROR("%s called with no 
initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } DRM_WAIT_ON(ret, dev->vbl_queue, 3 * DRM_HZ, (((cur_vblank = atomic_read(counter)) - *sequence) <= (1<<23))); - + *sequence = cur_vblank; return ret; @@ -365,7 +468,7 @@ int i915_irq_emit(struct drm_device *dev, void *data, LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -388,7 +491,7 @@ int i915_irq_wait(struct drm_device *dev, void *data, drm_i915_irq_wait_t *irqwait = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -398,15 +501,15 @@ int i915_irq_wait(struct drm_device *dev, void *data, static void i915_enable_interrupt (struct drm_device *dev) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; - u16 flag; - flag = 0; + dev_priv->irq_enable_reg = USER_INT_FLAG; if (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_A) - flag |= VSYNC_PIPEA_FLAG; + dev_priv->irq_enable_reg |= VSYNC_PIPEA_FLAG; if (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_B) - flag |= VSYNC_PIPEB_FLAG; + dev_priv->irq_enable_reg |= VSYNC_PIPEB_FLAG; - I915_WRITE16(I915REG_INT_ENABLE_R, USER_INT_FLAG | flag); + I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg); + dev_priv->irq_enabled = 1; } /* Set the vblank monitor pipe @@ -418,13 +521,12 @@ int i915_vblank_pipe_set(struct drm_device *dev, void *data, drm_i915_vblank_pipe_t *pipe = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } if (pipe->pipe & ~(DRM_I915_VBLANK_PIPE_A|DRM_I915_VBLANK_PIPE_B)) { - DRM_ERROR("%s called with invalid pipe 0x%x\n", - __FUNCTION__, pipe->pipe); + DRM_ERROR("called with invalid pipe 0x%x\n", pipe->pipe); return -EINVAL; } @@ -443,7 +545,7 @@ int i915_vblank_pipe_get(struct drm_device *dev, void *data, u16 flag; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -466,7 +568,7 @@ int i915_vblank_swap(struct drm_device *dev, void *data, drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_vblank_swap_t *swap = data; drm_i915_vbl_swap_t *vbl_swap; - unsigned int pipe, seqtype, curseq; + unsigned int pipe, seqtype, curseq, plane; unsigned long irqflags; struct list_head *list; @@ -481,12 +583,14 @@ int i915_vblank_swap(struct drm_device *dev, void *data, } if (swap->seqtype & ~(_DRM_VBLANK_RELATIVE | _DRM_VBLANK_ABSOLUTE | - _DRM_VBLANK_SECONDARY | _DRM_VBLANK_NEXTONMISS)) { + _DRM_VBLANK_SECONDARY | _DRM_VBLANK_NEXTONMISS | + _DRM_VBLANK_FLIP)) { DRM_ERROR("Invalid sequence type 0x%x\n", swap->seqtype); return -EINVAL; } - pipe = (swap->seqtype & _DRM_VBLANK_SECONDARY) ? 1 : 0; + plane = (swap->seqtype & _DRM_VBLANK_SECONDARY) ? 1 : 0; + pipe = i915_get_pipe(dev, plane); seqtype = swap->seqtype & (_DRM_VBLANK_RELATIVE | _DRM_VBLANK_ABSOLUTE); @@ -497,6 +601,11 @@ int i915_vblank_swap(struct drm_device *dev, void *data, spin_lock_irqsave(&dev->drw_lock, irqflags); + /* It makes no sense to schedule a swap for a drawable that doesn't have + * valid information at this point. E.g. this could mean that the X + * server is too old to push drawable information to the DRM, in which + * case all such swaps would become ineffective. 
+ */ if (!drm_get_drawable_info(dev, swap->drawable)) { spin_unlock_irqrestore(&dev->drw_lock, irqflags); DRM_DEBUG("Invalid drawable ID %d\n", swap->drawable); @@ -519,14 +628,43 @@ int i915_vblank_swap(struct drm_device *dev, void *data, } } + if (swap->seqtype & _DRM_VBLANK_FLIP) { + swap->sequence--; + + if ((curseq - swap->sequence) <= (1<<23)) { + struct drm_drawable_info *drw; + + LOCK_TEST_WITH_RETURN(dev, file_priv); + + spin_lock_irqsave(&dev->drw_lock, irqflags); + + drw = drm_get_drawable_info(dev, swap->drawable); + + if (!drw) { + spin_unlock_irqrestore(&dev->drw_lock, + irqflags); + DRM_DEBUG("Invalid drawable ID %d\n", + swap->drawable); + return -EINVAL; + } + + i915_dispatch_vsync_flip(dev, drw, plane); + + spin_unlock_irqrestore(&dev->drw_lock, irqflags); + + return 0; + } + } + spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); list_for_each(list, &dev_priv->vbl_swaps.head) { vbl_swap = list_entry(list, drm_i915_vbl_swap_t, head); if (vbl_swap->drw_id == swap->drawable && - vbl_swap->pipe == pipe && + vbl_swap->plane == plane && vbl_swap->sequence == swap->sequence) { + vbl_swap->flip = (swap->seqtype & _DRM_VBLANK_FLIP); spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); DRM_DEBUG("Already scheduled\n"); return 0; @@ -550,12 +688,16 @@ int i915_vblank_swap(struct drm_device *dev, void *data, DRM_DEBUG("\n"); vbl_swap->drw_id = swap->drawable; - vbl_swap->pipe = pipe; + vbl_swap->plane = plane; vbl_swap->sequence = swap->sequence; + vbl_swap->flip = (swap->seqtype & _DRM_VBLANK_FLIP); + + if (vbl_swap->flip) + swap->sequence++; spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); - list_add_tail((struct list_head *)vbl_swap, &dev_priv->vbl_swaps.head); + list_add_tail(&vbl_swap->head, &dev_priv->vbl_swaps.head); dev_priv->swaps_pending++; spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); @@ -569,7 +711,7 @@ void i915_driver_irq_preinstall(struct drm_device * dev) { drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; - I915_WRITE16(I915REG_HWSTAM, 0xfffe); + I915_WRITE16(I915REG_HWSTAM, 0xeffe); I915_WRITE16(I915REG_INT_MASK_R, 0x0); I915_WRITE16(I915REG_INT_ENABLE_R, 0x0); } @@ -582,10 +724,17 @@ void i915_driver_irq_postinstall(struct drm_device * dev) INIT_LIST_HEAD(&dev_priv->vbl_swaps.head); dev_priv->swaps_pending = 0; - if (!dev_priv->vblank_pipe) - dev_priv->vblank_pipe = DRM_I915_VBLANK_PIPE_A; + spin_lock_init(&dev_priv->user_irq_lock); + dev_priv->user_irq_refcount = 0; + i915_enable_interrupt(dev); DRM_INIT_WAITQUEUE(&dev_priv->irq_queue); + + /* + * Initialize the hardware status page IRQ location. 
+ */ + + I915_WRITE(I915REG_INSTPM, (1 << 5) | (1 << 21)); } void i915_driver_irq_uninstall(struct drm_device * dev) @@ -596,6 +745,7 @@ void i915_driver_irq_uninstall(struct drm_device * dev) if (!dev_priv) return; + dev_priv->irq_enabled = 0; I915_WRITE16(I915REG_HWSTAM, 0xffff); I915_WRITE16(I915REG_INT_MASK_R, 0xffff); I915_WRITE16(I915REG_INT_ENABLE_R, 0x0); diff --git a/drivers/char/drm/i915_mem.c b/drivers/char/drm/i915_mem.c index 56fb9b3..6126a60 100644 --- a/drivers/char/drm/i915_mem.c +++ b/drivers/char/drm/i915_mem.c @@ -276,7 +276,7 @@ int i915_mem_alloc(struct drm_device *dev, void *data, struct mem_block *block, **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -314,7 +314,7 @@ int i915_mem_free(struct drm_device *dev, void *data, struct mem_block *block, **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -342,7 +342,7 @@ int i915_mem_init_heap(struct drm_device *dev, void *data, struct mem_block **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -366,7 +366,7 @@ int i915_mem_destroy_heap( struct drm_device *dev, void *data, struct mem_block **heap; if ( !dev_priv ) { - DRM_ERROR( "%s called with no initialization\n", __FUNCTION__ ); + DRM_ERROR( "called with no initialization\n" ); return -EINVAL; } @@ -375,7 +375,7 @@ int i915_mem_destroy_heap( struct drm_device *dev, void *data, DRM_ERROR("get_heap failed"); return -EFAULT; } - + if (!*heap) { DRM_ERROR("heap not initialized?"); return -EFAULT; @@ -384,4 +384,3 @@ int i915_mem_destroy_heap( struct drm_device *dev, void *data, i915_mem_takedown( heap ); return 0; } - diff --git a/drivers/char/drm/mga_dma.c b/drivers/char/drm/mga_dma.c index c567c34..c1d12db 100644 --- a/drivers/char/drm/mga_dma.c +++ b/drivers/char/drm/mga_dma.c @@ -493,7 +493,7 @@ static int mga_do_agp_dma_bootstrap(struct drm_device * dev, dma_bs->agp_size); return err; } - + dev_priv->agp_size = agp_size; dev_priv->agp_handle = agp_req.handle; @@ -550,7 +550,7 @@ static int mga_do_agp_dma_bootstrap(struct drm_device * dev, { struct drm_map_list *_entry; unsigned long agp_token = 0; - + list_for_each_entry(_entry, &dev->maplist, head) { if (_entry->map == dev->agp_buffer_map) agp_token = _entry->user_token; @@ -964,7 +964,7 @@ static int mga_do_cleanup_dma(struct drm_device *dev, int full_cleanup) free_req.handle = dev_priv->agp_handle; drm_agp_free(dev, &free_req); - + dev_priv->agp_textures = NULL; dev_priv->agp_size = 0; dev_priv->agp_handle = 0; @@ -998,7 +998,7 @@ static int mga_do_cleanup_dma(struct drm_device *dev, int full_cleanup) } } - return 0; + return err; } int mga_dma_init(struct drm_device *dev, void *data, @@ -1050,7 +1050,7 @@ int mga_dma_flush(struct drm_device *dev, void *data, #if MGA_DMA_DEBUG int ret = mga_do_wait_for_idle(dev_priv); if (ret < 0) - DRM_INFO("%s: -EBUSY\n", __FUNCTION__); + DRM_INFO("-EBUSY\n"); return ret; #else return mga_do_wait_for_idle(dev_priv); diff --git a/drivers/char/drm/mga_drv.h b/drivers/char/drm/mga_drv.h index cd94c04..f6ebd24 100644 --- a/drivers/char/drm/mga_drv.h +++ b/drivers/char/drm/mga_drv.h @@ -216,8 +216,8 @@ static inline u32 _MGA_READ(u32 * addr) #define MGA_WRITE( reg, val ) DRM_WRITE32(dev_priv->mmio, (reg), (val)) #endif -#define DWGREG0 0x1c00 -#define DWGREG0_END 
0x1dff +#define DWGREG0 0x1c00 +#define DWGREG0_END 0x1dff #define DWGREG1 0x2c00 #define DWGREG1_END 0x2dff @@ -249,7 +249,7 @@ do { \ } else if ( dev_priv->prim.space < \ dev_priv->prim.high_mark ) { \ if ( MGA_DMA_DEBUG ) \ - DRM_INFO( "%s: wrap...\n", __FUNCTION__ ); \ + DRM_INFO( "wrap...\n"); \ return -EBUSY; \ } \ } \ @@ -260,7 +260,7 @@ do { \ if ( test_bit( 0, &dev_priv->prim.wrapped ) ) { \ if ( mga_do_wait_for_idle( dev_priv ) < 0 ) { \ if ( MGA_DMA_DEBUG ) \ - DRM_INFO( "%s: wrap...\n", __FUNCTION__ ); \ + DRM_INFO( "wrap...\n"); \ return -EBUSY; \ } \ mga_do_dma_wrap_end( dev_priv ); \ @@ -280,8 +280,7 @@ do { \ #define BEGIN_DMA( n ) \ do { \ if ( MGA_VERBOSE ) { \ - DRM_INFO( "BEGIN_DMA( %d ) in %s\n", \ - (n), __FUNCTION__ ); \ + DRM_INFO( "BEGIN_DMA( %d )\n", (n) ); \ DRM_INFO( " space=0x%x req=0x%Zx\n", \ dev_priv->prim.space, (n) * DMA_BLOCK_SIZE ); \ } \ @@ -292,7 +291,7 @@ do { \ #define BEGIN_DMA_WRAP() \ do { \ if ( MGA_VERBOSE ) { \ - DRM_INFO( "BEGIN_DMA() in %s\n", __FUNCTION__ ); \ + DRM_INFO( "BEGIN_DMA()\n" ); \ DRM_INFO( " space=0x%x\n", dev_priv->prim.space ); \ } \ prim = dev_priv->prim.start; \ @@ -311,7 +310,7 @@ do { \ #define FLUSH_DMA() \ do { \ if ( 0 ) { \ - DRM_INFO( "%s:\n", __FUNCTION__ ); \ + DRM_INFO( "\n" ); \ DRM_INFO( " tail=0x%06x head=0x%06lx\n", \ dev_priv->prim.tail, \ MGA_READ( MGA_PRIMADDRESS ) - \ @@ -394,22 +393,22 @@ do { \ #define MGA_VINTCLR (1 << 4) #define MGA_VINTEN (1 << 5) -#define MGA_ALPHACTRL 0x2c7c -#define MGA_AR0 0x1c60 -#define MGA_AR1 0x1c64 -#define MGA_AR2 0x1c68 -#define MGA_AR3 0x1c6c -#define MGA_AR4 0x1c70 -#define MGA_AR5 0x1c74 -#define MGA_AR6 0x1c78 +#define MGA_ALPHACTRL 0x2c7c +#define MGA_AR0 0x1c60 +#define MGA_AR1 0x1c64 +#define MGA_AR2 0x1c68 +#define MGA_AR3 0x1c6c +#define MGA_AR4 0x1c70 +#define MGA_AR5 0x1c74 +#define MGA_AR6 0x1c78 #define MGA_CXBNDRY 0x1c80 -#define MGA_CXLEFT 0x1ca0 +#define MGA_CXLEFT 0x1ca0 #define MGA_CXRIGHT 0x1ca4 -#define MGA_DMAPAD 0x1c54 -#define MGA_DSTORG 0x2cb8 -#define MGA_DWGCTL 0x1c00 +#define MGA_DMAPAD 0x1c54 +#define MGA_DSTORG 0x2cb8 +#define MGA_DWGCTL 0x1c00 # define MGA_OPCOD_MASK (15 << 0) # define MGA_OPCOD_TRAP (4 << 0) # define MGA_OPCOD_TEXTURE_TRAP (6 << 0) @@ -455,27 +454,27 @@ do { \ # define MGA_CLIPDIS (1 << 31) #define MGA_DWGSYNC 0x2c4c -#define MGA_FCOL 0x1c24 -#define MGA_FIFOSTATUS 0x1e10 -#define MGA_FOGCOL 0x1cf4 +#define MGA_FCOL 0x1c24 +#define MGA_FIFOSTATUS 0x1e10 +#define MGA_FOGCOL 0x1cf4 #define MGA_FXBNDRY 0x1c84 -#define MGA_FXLEFT 0x1ca8 +#define MGA_FXLEFT 0x1ca8 #define MGA_FXRIGHT 0x1cac -#define MGA_ICLEAR 0x1e18 +#define MGA_ICLEAR 0x1e18 # define MGA_SOFTRAPICLR (1 << 0) # define MGA_VLINEICLR (1 << 5) -#define MGA_IEN 0x1e1c +#define MGA_IEN 0x1e1c # define MGA_SOFTRAPIEN (1 << 0) # define MGA_VLINEIEN (1 << 5) -#define MGA_LEN 0x1c5c +#define MGA_LEN 0x1c5c #define MGA_MACCESS 0x1c04 -#define MGA_PITCH 0x1c8c -#define MGA_PLNWT 0x1c1c -#define MGA_PRIMADDRESS 0x1e58 +#define MGA_PITCH 0x1c8c +#define MGA_PLNWT 0x1c1c +#define MGA_PRIMADDRESS 0x1e58 # define MGA_DMA_GENERAL (0 << 0) # define MGA_DMA_BLIT (1 << 0) # define MGA_DMA_VECTOR (2 << 0) @@ -487,43 +486,43 @@ do { \ # define MGA_PRIMPTREN0 (1 << 0) # define MGA_PRIMPTREN1 (1 << 1) -#define MGA_RST 0x1e40 +#define MGA_RST 0x1e40 # define MGA_SOFTRESET (1 << 0) # define MGA_SOFTEXTRST (1 << 1) -#define MGA_SECADDRESS 0x2c40 -#define MGA_SECEND 0x2c44 -#define MGA_SETUPADDRESS 0x2cd0 -#define MGA_SETUPEND 0x2cd4 +#define MGA_SECADDRESS 0x2c40 +#define MGA_SECEND 0x2c44 
+#define MGA_SETUPADDRESS 0x2cd0 +#define MGA_SETUPEND 0x2cd4 #define MGA_SGN 0x1c58 #define MGA_SOFTRAP 0x2c48 -#define MGA_SRCORG 0x2cb4 +#define MGA_SRCORG 0x2cb4 # define MGA_SRMMAP_MASK (1 << 0) # define MGA_SRCMAP_FB (0 << 0) # define MGA_SRCMAP_SYSMEM (1 << 0) # define MGA_SRCACC_MASK (1 << 1) # define MGA_SRCACC_PCI (0 << 1) # define MGA_SRCACC_AGP (1 << 1) -#define MGA_STATUS 0x1e14 +#define MGA_STATUS 0x1e14 # define MGA_SOFTRAPEN (1 << 0) # define MGA_VSYNCPEN (1 << 4) # define MGA_VLINEPEN (1 << 5) # define MGA_DWGENGSTS (1 << 16) # define MGA_ENDPRDMASTS (1 << 17) #define MGA_STENCIL 0x2cc8 -#define MGA_STENCILCTL 0x2ccc +#define MGA_STENCILCTL 0x2ccc -#define MGA_TDUALSTAGE0 0x2cf8 -#define MGA_TDUALSTAGE1 0x2cfc -#define MGA_TEXBORDERCOL 0x2c5c -#define MGA_TEXCTL 0x2c30 +#define MGA_TDUALSTAGE0 0x2cf8 +#define MGA_TDUALSTAGE1 0x2cfc +#define MGA_TEXBORDERCOL 0x2c5c +#define MGA_TEXCTL 0x2c30 #define MGA_TEXCTL2 0x2c3c # define MGA_DUALTEX (1 << 7) # define MGA_G400_TC2_MAGIC (1 << 15) # define MGA_MAP1_ENABLE (1 << 31) -#define MGA_TEXFILTER 0x2c58 -#define MGA_TEXHEIGHT 0x2c2c -#define MGA_TEXORG 0x2c24 +#define MGA_TEXFILTER 0x2c58 +#define MGA_TEXHEIGHT 0x2c2c +#define MGA_TEXORG 0x2c24 # define MGA_TEXORGMAP_MASK (1 << 0) # define MGA_TEXORGMAP_FB (0 << 0) # define MGA_TEXORGMAP_SYSMEM (1 << 0) @@ -534,45 +533,45 @@ do { \ #define MGA_TEXORG2 0x2ca8 #define MGA_TEXORG3 0x2cac #define MGA_TEXORG4 0x2cb0 -#define MGA_TEXTRANS 0x2c34 -#define MGA_TEXTRANSHIGH 0x2c38 -#define MGA_TEXWIDTH 0x2c28 - -#define MGA_WACCEPTSEQ 0x1dd4 -#define MGA_WCODEADDR 0x1e6c -#define MGA_WFLAG 0x1dc4 -#define MGA_WFLAG1 0x1de0 +#define MGA_TEXTRANS 0x2c34 +#define MGA_TEXTRANSHIGH 0x2c38 +#define MGA_TEXWIDTH 0x2c28 + +#define MGA_WACCEPTSEQ 0x1dd4 +#define MGA_WCODEADDR 0x1e6c +#define MGA_WFLAG 0x1dc4 +#define MGA_WFLAG1 0x1de0 #define MGA_WFLAGNB 0x1e64 -#define MGA_WFLAGNB1 0x1e08 +#define MGA_WFLAGNB1 0x1e08 #define MGA_WGETMSB 0x1dc8 -#define MGA_WIADDR 0x1dc0 +#define MGA_WIADDR 0x1dc0 #define MGA_WIADDR2 0x1dd8 # define MGA_WMODE_SUSPEND (0 << 0) # define MGA_WMODE_RESUME (1 << 0) # define MGA_WMODE_JUMP (2 << 0) # define MGA_WMODE_START (3 << 0) # define MGA_WAGP_ENABLE (1 << 2) -#define MGA_WMISC 0x1e70 +#define MGA_WMISC 0x1e70 # define MGA_WUCODECACHE_ENABLE (1 << 0) # define MGA_WMASTER_ENABLE (1 << 1) # define MGA_WCACHEFLUSH_ENABLE (1 << 3) #define MGA_WVRTXSZ 0x1dcc -#define MGA_YBOT 0x1c9c -#define MGA_YDST 0x1c90 +#define MGA_YBOT 0x1c9c +#define MGA_YDST 0x1c90 #define MGA_YDSTLEN 0x1c88 #define MGA_YDSTORG 0x1c94 -#define MGA_YTOP 0x1c98 +#define MGA_YTOP 0x1c98 -#define MGA_ZORG 0x1c0c +#define MGA_ZORG 0x1c0c /* This finishes the current batch of commands */ -#define MGA_EXEC 0x0100 +#define MGA_EXEC 0x0100 /* AGP PLL encoding (for G200 only). 
*/ -#define MGA_AGP_PLL 0x1e4c +#define MGA_AGP_PLL 0x1e4c # define MGA_AGP2XPLL_DISABLE (0 << 0) # define MGA_AGP2XPLL_ENABLE (1 << 0) diff --git a/drivers/char/drm/mga_state.c b/drivers/char/drm/mga_state.c index 5ec8b61..d3f8aad 100644 --- a/drivers/char/drm/mga_state.c +++ b/drivers/char/drm/mga_state.c @@ -150,8 +150,8 @@ static __inline__ void mga_g400_emit_tex0(drm_mga_private_t * dev_priv) drm_mga_texture_regs_t *tex = &sarea_priv->tex_state[0]; DMA_LOCALS; -/* printk("mga_g400_emit_tex0 %x %x %x\n", tex->texorg, */ -/* tex->texctl, tex->texctl2); */ +/* printk("mga_g400_emit_tex0 %x %x %x\n", tex->texorg, */ +/* tex->texctl, tex->texctl2); */ BEGIN_DMA(6); @@ -190,8 +190,8 @@ static __inline__ void mga_g400_emit_tex1(drm_mga_private_t * dev_priv) drm_mga_texture_regs_t *tex = &sarea_priv->tex_state[1]; DMA_LOCALS; -/* printk("mga_g400_emit_tex1 %x %x %x\n", tex->texorg, */ -/* tex->texctl, tex->texctl2); */ +/* printk("mga_g400_emit_tex1 %x %x %x\n", tex->texorg, */ +/* tex->texctl, tex->texctl2); */ BEGIN_DMA(5); @@ -256,7 +256,7 @@ static __inline__ void mga_g400_emit_pipe(drm_mga_private_t * dev_priv) unsigned int pipe = sarea_priv->warp_pipe; DMA_LOCALS; -/* printk("mga_g400_emit_pipe %x\n", pipe); */ +/* printk("mga_g400_emit_pipe %x\n", pipe); */ BEGIN_DMA(10); @@ -619,7 +619,7 @@ static void mga_dma_dispatch_swap(struct drm_device * dev) FLUSH_DMA(); - DRM_DEBUG("%s... done.\n", __FUNCTION__); + DRM_DEBUG("... done.\n"); } static void mga_dma_dispatch_vertex(struct drm_device * dev, struct drm_buf * buf) @@ -631,7 +631,7 @@ static void mga_dma_dispatch_vertex(struct drm_device * dev, struct drm_buf * bu u32 length = (u32) buf->used; int i = 0; DMA_LOCALS; - DRM_DEBUG("vertex: buf=%d used=%d\n", buf->idx, buf->used); + DRM_DEBUG("buf=%d used=%d\n", buf->idx, buf->used); if (buf->used) { buf_priv->dispatched = 1; @@ -678,7 +678,7 @@ static void mga_dma_dispatch_indices(struct drm_device * dev, struct drm_buf * b u32 address = (u32) buf->bus_address; int i = 0; DMA_LOCALS; - DRM_DEBUG("indices: buf=%d start=%d end=%d\n", buf->idx, start, end); + DRM_DEBUG("buf=%d start=%d end=%d\n", buf->idx, start, end); if (start != end) { buf_priv->dispatched = 1; @@ -955,7 +955,7 @@ static int mga_dma_iload(struct drm_device *dev, void *data, struct drm_file *fi #if 0 if (mga_do_wait_for_idle(dev_priv) < 0) { if (MGA_DMA_DEBUG) - DRM_INFO("%s: -EBUSY\n", __FUNCTION__); + DRM_INFO("-EBUSY\n"); return -EBUSY; } #endif @@ -1014,7 +1014,7 @@ static int mga_getparam(struct drm_device *dev, void *data, struct drm_file *fil int value; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -1046,7 +1046,7 @@ static int mga_set_fence(struct drm_device *dev, void *data, struct drm_file *fi DMA_LOCALS; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -1075,7 +1075,7 @@ file_priv) u32 *fence = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } diff --git a/drivers/char/drm/r128_cce.c b/drivers/char/drm/r128_cce.c index 7d550ab..892e0a5 100644 --- a/drivers/char/drm/r128_cce.c +++ b/drivers/char/drm/r128_cce.c @@ -1,4 +1,4 @@ -/* r128_cce.c -- ATI Rage 128 driver -*- linux-c -*- +/* r128_cce.c -- ATI Rage 128 driver -*- linux-c -*- * Created: Wed Apr 5 19:24:19 2000 by kevin@precisioninsight.com */ /* @@ 
-651,7 +651,7 @@ int r128_cce_start(struct drm_device *dev, void *data, struct drm_file *file_pri LOCK_TEST_WITH_RETURN(dev, file_priv); if (dev_priv->cce_running || dev_priv->cce_mode == R128_PM4_NONPM4) { - DRM_DEBUG("%s while CCE running\n", __FUNCTION__); + DRM_DEBUG("while CCE running\n"); return 0; } @@ -710,7 +710,7 @@ int r128_cce_reset(struct drm_device *dev, void *data, struct drm_file *file_pri LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_DEBUG("%s called before init done\n", __FUNCTION__); + DRM_DEBUG("called before init done\n"); return -EINVAL; } diff --git a/drivers/char/drm/r128_drv.h b/drivers/char/drm/r128_drv.h index 5041bd8..011105e 100644 --- a/drivers/char/drm/r128_drv.h +++ b/drivers/char/drm/r128_drv.h @@ -462,8 +462,7 @@ do { \ #define BEGIN_RING( n ) do { \ if ( R128_VERBOSE ) { \ - DRM_INFO( "BEGIN_RING( %d ) in %s\n", \ - (n), __FUNCTION__ ); \ + DRM_INFO( "BEGIN_RING( %d )\n", (n)); \ } \ if ( dev_priv->ring.space <= (n) * sizeof(u32) ) { \ COMMIT_RING(); \ @@ -493,7 +492,7 @@ do { \ write * sizeof(u32) ); \ } \ if (((dev_priv->ring.tail + _nr) & tail_mask) != write) { \ - DRM_ERROR( \ + DRM_ERROR( \ "ADVANCE_RING(): mismatch: nr: %x write: %x line: %d\n", \ ((dev_priv->ring.tail + _nr) & tail_mask), \ write, __LINE__); \ diff --git a/drivers/char/drm/r128_state.c b/drivers/char/drm/r128_state.c index b7f483c..51a9afc 100644 --- a/drivers/char/drm/r128_state.c +++ b/drivers/char/drm/r128_state.c @@ -42,7 +42,7 @@ static void r128_emit_clip_rects(drm_r128_private_t * dev_priv, { u32 aux_sc_cntl = 0x00000000; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING((count < 3 ? count : 3) * 5 + 2); @@ -85,7 +85,7 @@ static __inline__ void r128_emit_core(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; drm_r128_context_regs_t *ctx = &sarea_priv->context_state; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(2); @@ -100,7 +100,7 @@ static __inline__ void r128_emit_context(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; drm_r128_context_regs_t *ctx = &sarea_priv->context_state; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(13); @@ -126,7 +126,7 @@ static __inline__ void r128_emit_setup(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; drm_r128_context_regs_t *ctx = &sarea_priv->context_state; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(3); @@ -142,7 +142,7 @@ static __inline__ void r128_emit_masks(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; drm_r128_context_regs_t *ctx = &sarea_priv->context_state; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(5); @@ -161,7 +161,7 @@ static __inline__ void r128_emit_window(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; drm_r128_context_regs_t *ctx = &sarea_priv->context_state; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(2); @@ -178,7 +178,7 @@ static __inline__ void r128_emit_tex0(drm_r128_private_t * dev_priv) drm_r128_texture_regs_t *tex = &sarea_priv->tex_state[0]; int i; RING_LOCALS; - DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(7 + R128_MAX_TEXTURE_LEVELS); @@ -204,7 +204,7 @@ static __inline__ void r128_emit_tex1(drm_r128_private_t * dev_priv) drm_r128_texture_regs_t *tex = &sarea_priv->tex_state[1]; int i; RING_LOCALS; - 
DRM_DEBUG(" %s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(5 + R128_MAX_TEXTURE_LEVELS); @@ -226,7 +226,7 @@ static void r128_emit_state(drm_r128_private_t * dev_priv) drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; unsigned int dirty = sarea_priv->dirty; - DRM_DEBUG("%s: dirty=0x%08x\n", __FUNCTION__, dirty); + DRM_DEBUG("dirty=0x%08x\n", dirty); if (dirty & R128_UPLOAD_CORE) { r128_emit_core(dev_priv); @@ -362,7 +362,7 @@ static void r128_cce_dispatch_clear(struct drm_device * dev, unsigned int flags = clear->flags; int i; RING_LOCALS; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); if (dev_priv->page_flipping && dev_priv->current_page == 1) { unsigned int tmp = flags; @@ -466,7 +466,7 @@ static void r128_cce_dispatch_swap(struct drm_device * dev) struct drm_clip_rect *pbox = sarea_priv->boxes; int i; RING_LOCALS; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); #if R128_PERFORMANCE_BOXES /* Do some trivial performance monitoring... @@ -528,8 +528,7 @@ static void r128_cce_dispatch_flip(struct drm_device * dev) { drm_r128_private_t *dev_priv = dev->dev_private; RING_LOCALS; - DRM_DEBUG("%s: page=%d pfCurrentPage=%d\n", - __FUNCTION__, + DRM_DEBUG("page=%d pfCurrentPage=%d\n", dev_priv->current_page, dev_priv->sarea_priv->pfCurrentPage); #if R128_PERFORMANCE_BOXES @@ -1156,7 +1155,7 @@ static int r128_cce_dispatch_read_pixels(struct drm_device * dev, int count, *x, *y; int i, xbuf_size, ybuf_size; RING_LOCALS; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); count = depth->n; if (count > 4096 || count <= 0) @@ -1226,7 +1225,7 @@ static void r128_cce_dispatch_stipple(struct drm_device * dev, u32 * stipple) drm_r128_private_t *dev_priv = dev->dev_private; int i; RING_LOCALS; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); BEGIN_RING(33); @@ -1309,7 +1308,7 @@ static int r128_do_cleanup_pageflip(struct drm_device * dev) static int r128_cce_flip(struct drm_device *dev, void *data, struct drm_file *file_priv) { drm_r128_private_t *dev_priv = dev->dev_private; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); LOCK_TEST_WITH_RETURN(dev, file_priv); @@ -1328,7 +1327,7 @@ static int r128_cce_swap(struct drm_device *dev, void *data, struct drm_file *fi { drm_r128_private_t *dev_priv = dev->dev_private; drm_r128_sarea_t *sarea_priv = dev_priv->sarea_priv; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); LOCK_TEST_WITH_RETURN(dev, file_priv); @@ -1356,7 +1355,7 @@ static int r128_cce_vertex(struct drm_device *dev, void *data, struct drm_file * LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -1412,7 +1411,7 @@ static int r128_cce_indices(struct drm_device *dev, void *data, struct drm_file LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -1557,11 +1556,11 @@ static int r128_cce_indirect(struct drm_device *dev, void *data, struct drm_file LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } - DRM_DEBUG("indirect: idx=%d s=%d e=%d d=%d\n", + DRM_DEBUG("idx=%d s=%d e=%d d=%d\n", indirect->idx, indirect->start, indirect->end, indirect->discard); @@ -1622,7 +1621,7 @@ static int r128_getparam(struct drm_device *dev, void *data, struct drm_file *fi int value; 
if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } diff --git a/drivers/char/drm/r300_cmdbuf.c b/drivers/char/drm/r300_cmdbuf.c index 59b2944..8cd8271 100644 --- a/drivers/char/drm/r300_cmdbuf.c +++ b/drivers/char/drm/r300_cmdbuf.c @@ -486,7 +486,7 @@ static __inline__ int r300_emit_bitblt_multi(drm_radeon_private_t *dev_priv, if (cmd[0] & 0x8000) { u32 offset; - if (cmd[1] & (RADEON_GMC_SRC_PITCH_OFFSET_CNTL + if (cmd[1] & (RADEON_GMC_SRC_PITCH_OFFSET_CNTL | RADEON_GMC_DST_PITCH_OFFSET_CNTL)) { offset = cmd[2] << 10; ret = !radeon_check_offset(dev_priv, offset); @@ -504,7 +504,7 @@ static __inline__ int r300_emit_bitblt_multi(drm_radeon_private_t *dev_priv, DRM_ERROR("Invalid bitblt second offset is %08X\n", offset); return -EINVAL; } - + } } @@ -723,54 +723,54 @@ static int r300_scratch(drm_radeon_private_t *dev_priv, u32 *ref_age_base; u32 i, buf_idx, h_pending; RING_LOCALS; - - if (cmdbuf->bufsz < + + if (cmdbuf->bufsz < (sizeof(u64) + header.scratch.n_bufs * sizeof(buf_idx))) { return -EINVAL; } - + if (header.scratch.reg >= 5) { return -EINVAL; } - + dev_priv->scratch_ages[header.scratch.reg]++; - + ref_age_base = (u32 *)(unsigned long)*((uint64_t *)cmdbuf->buf); - + cmdbuf->buf += sizeof(u64); cmdbuf->bufsz -= sizeof(u64); - + for (i=0; i < header.scratch.n_bufs; i++) { buf_idx = *(u32 *)cmdbuf->buf; buf_idx *= 2; /* 8 bytes per buf */ - + if (DRM_COPY_TO_USER(ref_age_base + buf_idx, &dev_priv->scratch_ages[header.scratch.reg], sizeof(u32))) { return -EINVAL; } - + if (DRM_COPY_FROM_USER(&h_pending, ref_age_base + buf_idx + 1, sizeof(u32))) { return -EINVAL; } - + if (h_pending == 0) { return -EINVAL; } - + h_pending--; - + if (DRM_COPY_TO_USER(ref_age_base + buf_idx + 1, &h_pending, sizeof(u32))) { return -EINVAL; } - + cmdbuf->buf += sizeof(buf_idx); cmdbuf->bufsz -= sizeof(buf_idx); } - + BEGIN_RING(2); OUT_RING( CP_PACKET0( RADEON_SCRATCH_REG0 + header.scratch.reg * 4, 0 ) ); OUT_RING( dev_priv->scratch_ages[header.scratch.reg] ); ADVANCE_RING(); - + return 0; } @@ -919,7 +919,7 @@ int r300_do_cp_cmdbuf(struct drm_device *dev, goto cleanup; } break; - + default: DRM_ERROR("bad cmd_type %i at %p\n", header.header.cmd_type, diff --git a/drivers/char/drm/r300_reg.h b/drivers/char/drm/r300_reg.h index 3ae57ec..9a71457 100644 --- a/drivers/char/drm/r300_reg.h +++ b/drivers/char/drm/r300_reg.h @@ -853,13 +853,13 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. # define R300_TX_FORMAT_W8Z8Y8X8 0xC # define R300_TX_FORMAT_W2Z10Y10X10 0xD # define R300_TX_FORMAT_W16Z16Y16X16 0xE -# define R300_TX_FORMAT_DXT1 0xF -# define R300_TX_FORMAT_DXT3 0x10 -# define R300_TX_FORMAT_DXT5 0x11 +# define R300_TX_FORMAT_DXT1 0xF +# define R300_TX_FORMAT_DXT3 0x10 +# define R300_TX_FORMAT_DXT5 0x11 # define R300_TX_FORMAT_D3DMFT_CxV8U8 0x12 /* no swizzle */ -# define R300_TX_FORMAT_A8R8G8B8 0x13 /* no swizzle */ -# define R300_TX_FORMAT_B8G8_B8G8 0x14 /* no swizzle */ -# define R300_TX_FORMAT_G8R8_G8B8 0x15 /* no swizzle */ +# define R300_TX_FORMAT_A8R8G8B8 0x13 /* no swizzle */ +# define R300_TX_FORMAT_B8G8_B8G8 0x14 /* no swizzle */ +# define R300_TX_FORMAT_G8R8_G8B8 0x15 /* no swizzle */ /* 0x16 - some 16 bit green format.. ?? */ # define R300_TX_FORMAT_UNK25 (1 << 25) /* no swizzle */ # define R300_TX_FORMAT_CUBIC_MAP (1 << 26) @@ -867,19 +867,19 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. 
/* gap */ /* Floating point formats */ /* Note - hardware supports both 16 and 32 bit floating point */ -# define R300_TX_FORMAT_FL_I16 0x18 -# define R300_TX_FORMAT_FL_I16A16 0x19 +# define R300_TX_FORMAT_FL_I16 0x18 +# define R300_TX_FORMAT_FL_I16A16 0x19 # define R300_TX_FORMAT_FL_R16G16B16A16 0x1A -# define R300_TX_FORMAT_FL_I32 0x1B -# define R300_TX_FORMAT_FL_I32A32 0x1C +# define R300_TX_FORMAT_FL_I32 0x1B +# define R300_TX_FORMAT_FL_I32A32 0x1C # define R300_TX_FORMAT_FL_R32G32B32A32 0x1D /* alpha modes, convenience mostly */ /* if you have alpha, pick constant appropriate to the number of channels (1 for I8, 2 for I8A8, 4 for R8G8B8A8, etc */ -# define R300_TX_FORMAT_ALPHA_1CH 0x000 -# define R300_TX_FORMAT_ALPHA_2CH 0x200 -# define R300_TX_FORMAT_ALPHA_4CH 0x600 -# define R300_TX_FORMAT_ALPHA_NONE 0xA00 +# define R300_TX_FORMAT_ALPHA_1CH 0x000 +# define R300_TX_FORMAT_ALPHA_2CH 0x200 +# define R300_TX_FORMAT_ALPHA_4CH 0x600 +# define R300_TX_FORMAT_ALPHA_NONE 0xA00 /* Swizzling */ /* constants */ # define R300_TX_FORMAT_X 0 @@ -1360,11 +1360,11 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. # define R300_RB3D_Z_DISABLED_2 0x00000014 # define R300_RB3D_Z_TEST 0x00000012 # define R300_RB3D_Z_TEST_AND_WRITE 0x00000016 -# define R300_RB3D_Z_WRITE_ONLY 0x00000006 +# define R300_RB3D_Z_WRITE_ONLY 0x00000006 # define R300_RB3D_Z_TEST 0x00000012 # define R300_RB3D_Z_TEST_AND_WRITE 0x00000016 -# define R300_RB3D_Z_WRITE_ONLY 0x00000006 +# define R300_RB3D_Z_WRITE_ONLY 0x00000006 # define R300_RB3D_STENCIL_ENABLE 0x00000001 #define R300_RB3D_ZSTENCIL_CNTL_1 0x4F04 diff --git a/drivers/char/drm/radeon_cp.c b/drivers/char/drm/radeon_cp.c index 24fca8e..e16294c 100644 --- a/drivers/char/drm/radeon_cp.c +++ b/drivers/char/drm/radeon_cp.c @@ -1127,7 +1127,7 @@ static void radeon_cp_init_ring_buffer(struct drm_device * dev, { u32 ring_start, cur_read_ptr; u32 tmp; - + /* Initialize the memory controller. With new memory map, the fb location * is not changed, it should have been properly initialized already. 
Part * of the problem is that the code below is bogus, assuming the GART is @@ -1358,7 +1358,7 @@ static void radeon_set_pcigart(drm_radeon_private_t * dev_priv, int on) return; } - tmp = RADEON_READ(RADEON_AIC_CNTL); + tmp = RADEON_READ(RADEON_AIC_CNTL); if (on) { RADEON_WRITE(RADEON_AIC_CNTL, @@ -1583,7 +1583,7 @@ static int radeon_do_init_cp(struct drm_device * dev, drm_radeon_init_t * init) dev_priv->fb_location = (RADEON_READ(RADEON_MC_FB_LOCATION) & 0xffff) << 16; - dev_priv->fb_size = + dev_priv->fb_size = ((RADEON_READ(RADEON_MC_FB_LOCATION) & 0xffff0000u) + 0x10000) - dev_priv->fb_location; @@ -1630,7 +1630,7 @@ static int radeon_do_init_cp(struct drm_device * dev, drm_radeon_init_t * init) ((base + dev_priv->gart_size) & 0xfffffffful) < base) base = dev_priv->fb_location - dev_priv->gart_size; - } + } dev_priv->gart_vm_start = base & 0xffc00000u; if (dev_priv->gart_vm_start != base) DRM_INFO("GART aligned down from 0x%08x to 0x%08x\n", @@ -1852,12 +1852,12 @@ int radeon_cp_start(struct drm_device *dev, void *data, struct drm_file *file_pr LOCK_TEST_WITH_RETURN(dev, file_priv); if (dev_priv->cp_running) { - DRM_DEBUG("%s while CP running\n", __FUNCTION__); + DRM_DEBUG("while CP running\n"); return 0; } if (dev_priv->cp_mode == RADEON_CSQ_PRIDIS_INDDIS) { - DRM_DEBUG("%s called with bogus CP mode (%d)\n", - __FUNCTION__, dev_priv->cp_mode); + DRM_DEBUG("called with bogus CP mode (%d)\n", + dev_priv->cp_mode); return 0; } @@ -1962,7 +1962,7 @@ int radeon_cp_reset(struct drm_device *dev, void *data, struct drm_file *file_pr LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_DEBUG("%s called before init done\n", __FUNCTION__); + DRM_DEBUG("called before init done\n"); return -EINVAL; } diff --git a/drivers/char/drm/radeon_drm.h b/drivers/char/drm/radeon_drm.h index 5a8e23f..5f8d042 100644 --- a/drivers/char/drm/radeon_drm.h +++ b/drivers/char/drm/radeon_drm.h @@ -223,10 +223,10 @@ typedef union { #define R300_CMD_CP_DELAY 5 #define R300_CMD_DMA_DISCARD 6 #define R300_CMD_WAIT 7 -# define R300_WAIT_2D 0x1 -# define R300_WAIT_3D 0x2 -# define R300_WAIT_2D_CLEAN 0x3 -# define R300_WAIT_3D_CLEAN 0x4 +# define R300_WAIT_2D 0x1 +# define R300_WAIT_3D 0x2 +# define R300_WAIT_2D_CLEAN 0x3 +# define R300_WAIT_3D_CLEAN 0x4 #define R300_CMD_SCRATCH 8 typedef union { @@ -722,7 +722,7 @@ typedef struct drm_radeon_surface_free { unsigned int address; } drm_radeon_surface_free_t; -#define DRM_RADEON_VBLANK_CRTC1 1 -#define DRM_RADEON_VBLANK_CRTC2 2 +#define DRM_RADEON_VBLANK_CRTC1 1 +#define DRM_RADEON_VBLANK_CRTC2 2 #endif diff --git a/drivers/char/drm/radeon_drv.h b/drivers/char/drm/radeon_drv.h index bfbb60a..8ce4940 100644 --- a/drivers/char/drm/radeon_drv.h +++ b/drivers/char/drm/radeon_drv.h @@ -429,7 +429,7 @@ extern int r300_do_cp_cmdbuf(struct drm_device * dev, #define RADEON_PCIE_INDEX 0x0030 #define RADEON_PCIE_DATA 0x0034 #define RADEON_PCIE_TX_GART_CNTL 0x10 -# define RADEON_PCIE_TX_GART_EN (1 << 0) +# define RADEON_PCIE_TX_GART_EN (1 << 0) # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_PASS_THRU (0<<1) # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_CLAMP_LO (1<<1) # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD (3<<1) @@ -439,7 +439,7 @@ extern int r300_do_cp_cmdbuf(struct drm_device * dev, # define RADEON_PCIE_TX_GART_INVALIDATE_TLB (1<<8) #define RADEON_PCIE_TX_DISCARD_RD_ADDR_LO 0x11 #define RADEON_PCIE_TX_DISCARD_RD_ADDR_HI 0x12 -#define RADEON_PCIE_TX_GART_BASE 0x13 +#define RADEON_PCIE_TX_GART_BASE 0x13 #define RADEON_PCIE_TX_GART_START_LO 0x14 #define 
RADEON_PCIE_TX_GART_START_HI 0x15 #define RADEON_PCIE_TX_GART_END_LO 0x16 @@ -512,12 +512,12 @@ extern int r300_do_cp_cmdbuf(struct drm_device * dev, #define RADEON_GEN_INT_STATUS 0x0044 # define RADEON_CRTC_VBLANK_STAT (1 << 0) -# define RADEON_CRTC_VBLANK_STAT_ACK (1 << 0) +# define RADEON_CRTC_VBLANK_STAT_ACK (1 << 0) # define RADEON_CRTC2_VBLANK_STAT (1 << 9) -# define RADEON_CRTC2_VBLANK_STAT_ACK (1 << 9) +# define RADEON_CRTC2_VBLANK_STAT_ACK (1 << 9) # define RADEON_GUI_IDLE_INT_TEST_ACK (1 << 19) # define RADEON_SW_INT_TEST (1 << 25) -# define RADEON_SW_INT_TEST_ACK (1 << 25) +# define RADEON_SW_INT_TEST_ACK (1 << 25) # define RADEON_SW_INT_FIRE (1 << 26) #define RADEON_HOST_PATH_CNTL 0x0130 @@ -1114,8 +1114,7 @@ do { \ #define BEGIN_RING( n ) do { \ if ( RADEON_VERBOSE ) { \ - DRM_INFO( "BEGIN_RING( %d ) in %s\n", \ - n, __FUNCTION__ ); \ + DRM_INFO( "BEGIN_RING( %d )\n", (n)); \ } \ if ( dev_priv->ring.space <= (n) * sizeof(u32) ) { \ COMMIT_RING(); \ @@ -1133,7 +1132,7 @@ do { \ write, dev_priv->ring.tail ); \ } \ if (((dev_priv->ring.tail + _nr) & mask) != write) { \ - DRM_ERROR( \ + DRM_ERROR( \ "ADVANCE_RING(): mismatch: nr: %x write: %x line: %d\n", \ ((dev_priv->ring.tail + _nr) & mask), \ write, __LINE__); \ diff --git a/drivers/char/drm/radeon_irq.c b/drivers/char/drm/radeon_irq.c index 84f5bc3..009af38 100644 --- a/drivers/char/drm/radeon_irq.c +++ b/drivers/char/drm/radeon_irq.c @@ -154,7 +154,7 @@ static int radeon_driver_vblank_do_wait(struct drm_device * dev, int ack = 0; atomic_t *counter; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -205,7 +205,7 @@ int radeon_irq_emit(struct drm_device *dev, void *data, struct drm_file *file_pr LOCK_TEST_WITH_RETURN(dev, file_priv); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -227,7 +227,7 @@ int radeon_irq_wait(struct drm_device *dev, void *data, struct drm_file *file_pr drm_radeon_irq_wait_t *irqwait = data; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } diff --git a/drivers/char/drm/radeon_mem.c b/drivers/char/drm/radeon_mem.c index a29acfe..78b34fa 100644 --- a/drivers/char/drm/radeon_mem.c +++ b/drivers/char/drm/radeon_mem.c @@ -224,7 +224,7 @@ int radeon_mem_alloc(struct drm_device *dev, void *data, struct drm_file *file_p struct mem_block *block, **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -259,7 +259,7 @@ int radeon_mem_free(struct drm_device *dev, void *data, struct drm_file *file_pr struct mem_block *block, **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -285,7 +285,7 @@ int radeon_mem_init_heap(struct drm_device *dev, void *data, struct drm_file *fi struct mem_block **heap; if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } diff --git a/drivers/char/drm/radeon_state.c b/drivers/char/drm/radeon_state.c index f824f2f..1dffdc3 100644 --- a/drivers/char/drm/radeon_state.c +++ b/drivers/char/drm/radeon_state.c @@ -898,7 +898,7 @@ static void radeon_cp_dispatch_clear(struct drm_device * dev, int w = 
pbox[i].x2 - x; int h = pbox[i].y2 - y; - DRM_DEBUG("dispatch clear %d,%d-%d,%d flags 0x%x\n", + DRM_DEBUG("%d,%d-%d,%d flags 0x%x\n", x, y, w, h, flags); if (flags & RADEON_FRONT) { @@ -1368,7 +1368,7 @@ static void radeon_cp_dispatch_swap(struct drm_device * dev) int w = pbox[i].x2 - x; int h = pbox[i].y2 - y; - DRM_DEBUG("dispatch swap %d,%d-%d,%d\n", x, y, w, h); + DRM_DEBUG("%d,%d-%d,%d\n", x, y, w, h); BEGIN_RING(9); @@ -1422,8 +1422,7 @@ static void radeon_cp_dispatch_flip(struct drm_device * dev) int offset = (dev_priv->sarea_priv->pfCurrentPage == 1) ? dev_priv->front_offset : dev_priv->back_offset; RING_LOCALS; - DRM_DEBUG("%s: pfCurrentPage=%d\n", - __FUNCTION__, + DRM_DEBUG("pfCurrentPage=%d\n", dev_priv->sarea_priv->pfCurrentPage); /* Do some trivial performance monitoring... @@ -1562,7 +1561,7 @@ static void radeon_cp_dispatch_indirect(struct drm_device * dev, { drm_radeon_private_t *dev_priv = dev->dev_private; RING_LOCALS; - DRM_DEBUG("indirect: buf=%d s=0x%x e=0x%x\n", buf->idx, start, end); + DRM_DEBUG("buf=%d s=0x%x e=0x%x\n", buf->idx, start, end); if (start != end) { int offset = (dev_priv->gart_buffers_offset @@ -1758,7 +1757,7 @@ static int radeon_cp_dispatch_texture(struct drm_device * dev, buf = radeon_freelist_get(dev); } if (!buf) { - DRM_DEBUG("radeon_cp_dispatch_texture: EAGAIN\n"); + DRM_DEBUG("EAGAIN\n"); if (DRM_COPY_TO_USER(tex->image, image, sizeof(*image))) return -EFAULT; return -EAGAIN; @@ -2413,7 +2412,7 @@ static int radeon_cp_indirect(struct drm_device *dev, void *data, struct drm_fil LOCK_TEST_WITH_RETURN(dev, file_priv); - DRM_DEBUG("indirect: idx=%d s=%d e=%d d=%d\n", + DRM_DEBUG("idx=%d s=%d e=%d d=%d\n", indirect->idx, indirect->start, indirect->end, indirect->discard); @@ -2779,7 +2778,7 @@ static int radeon_emit_wait(struct drm_device * dev, int flags) drm_radeon_private_t *dev_priv = dev->dev_private; RING_LOCALS; - DRM_DEBUG("%s: %x\n", __FUNCTION__, flags); + DRM_DEBUG("%x\n", flags); switch (flags) { case RADEON_WAIT_2D: BEGIN_RING(2); diff --git a/drivers/char/drm/savage_state.c b/drivers/char/drm/savage_state.c index bf8e0e1..5f6238f 100644 --- a/drivers/char/drm/savage_state.c +++ b/drivers/char/drm/savage_state.c @@ -512,7 +512,7 @@ static int savage_dispatch_vb_prim(drm_savage_private_t * dev_priv, DMA_DRAW_PRIMITIVE(count, prim, skip); if (vb_stride == vtx_size) { - DMA_COPY(&vtxbuf[vb_stride * start], + DMA_COPY(&vtxbuf[vb_stride * start], vtx_size * count); } else { for (i = start; i < start + count; ++i) { @@ -742,7 +742,7 @@ static int savage_dispatch_vb_idx(drm_savage_private_t * dev_priv, while (n != 0) { /* Can emit up to 255 vertices (85 triangles) at once. */ unsigned int count = n > 255 ? 
255 : n; - + /* Check indices */ for (i = 0; i < count; ++i) { if (idx[i] > vb_size / (vb_stride * 4)) { @@ -933,7 +933,7 @@ static int savage_dispatch_draw(drm_savage_private_t * dev_priv, /* j was check in savage_bci_cmdbuf */ ret = savage_dispatch_vb_idx(dev_priv, &cmd_header, (const uint16_t *)cmdbuf, - (const uint32_t *)vtxbuf, vb_size, + (const uint32_t *)vtxbuf, vb_size, vb_stride); cmdbuf += j; break; diff --git a/drivers/char/drm/sis_mm.c b/drivers/char/drm/sis_mm.c index a6b7ccd..b387877 100644 --- a/drivers/char/drm/sis_mm.c +++ b/drivers/char/drm/sis_mm.c @@ -115,7 +115,7 @@ static int sis_fb_init(struct drm_device *dev, void *data, struct drm_file *file dev_priv->vram_offset = fb->offset; mutex_unlock(&dev->struct_mutex); - DRM_DEBUG("offset = %u, size = %u", fb->offset, fb->size); + DRM_DEBUG("offset = %u, size = %u\n", fb->offset, fb->size); return 0; } @@ -205,7 +205,7 @@ static int sis_ioctl_agp_init(struct drm_device *dev, void *data, dev_priv->agp_offset = agp->offset; mutex_unlock(&dev->struct_mutex); - DRM_DEBUG("offset = %u, size = %u", agp->offset, agp->size); + DRM_DEBUG("offset = %u, size = %u\n", agp->offset, agp->size); return 0; } @@ -249,7 +249,7 @@ int sis_idle(struct drm_device *dev) return 0; } } - + /* * Implement a device switch here if needed */ diff --git a/drivers/char/drm/via_dma.c b/drivers/char/drm/via_dma.c index 75d6b74..f895cbf 100644 --- a/drivers/char/drm/via_dma.c +++ b/drivers/char/drm/via_dma.c @@ -179,14 +179,12 @@ static int via_initialize(struct drm_device * dev, } if (dev_priv->ring.virtual_start != NULL) { - DRM_ERROR("%s called again without calling cleanup\n", - __FUNCTION__); + DRM_ERROR("called again without calling cleanup\n"); return -EFAULT; } if (!dev->agp || !dev->agp->base) { - DRM_ERROR("%s called with no agp memory available\n", - __FUNCTION__); + DRM_ERROR("called with no agp memory available\n"); return -EFAULT; } @@ -267,8 +265,7 @@ static int via_dispatch_cmdbuffer(struct drm_device * dev, drm_via_cmdbuffer_t * dev_priv = (drm_via_private_t *) dev->dev_private; if (dev_priv->ring.virtual_start == NULL) { - DRM_ERROR("%s called without initializing AGP ring buffer.\n", - __FUNCTION__); + DRM_ERROR("called without initializing AGP ring buffer.\n"); return -EFAULT; } @@ -337,8 +334,7 @@ static int via_cmdbuffer(struct drm_device *dev, void *data, struct drm_file *fi LOCK_TEST_WITH_RETURN(dev, file_priv); - DRM_DEBUG("via cmdbuffer, buf %p size %lu\n", cmdbuf->buf, - cmdbuf->size); + DRM_DEBUG("buf %p size %lu\n", cmdbuf->buf, cmdbuf->size); ret = via_dispatch_cmdbuffer(dev, cmdbuf); if (ret) { @@ -379,8 +375,7 @@ static int via_pci_cmdbuffer(struct drm_device *dev, void *data, struct drm_file LOCK_TEST_WITH_RETURN(dev, file_priv); - DRM_DEBUG("via_pci_cmdbuffer, buf %p size %lu\n", cmdbuf->buf, - cmdbuf->size); + DRM_DEBUG("buf %p size %lu\n", cmdbuf->buf, cmdbuf->size); ret = via_dispatch_pci_cmdbuffer(dev, cmdbuf); if (ret) { @@ -648,14 +643,13 @@ static int via_cmdbuf_size(struct drm_device *dev, void *data, struct drm_file * uint32_t tmp_size, count; drm_via_private_t *dev_priv; - DRM_DEBUG("via cmdbuf_size\n"); + DRM_DEBUG("\n"); LOCK_TEST_WITH_RETURN(dev, file_priv); dev_priv = (drm_via_private_t *) dev->dev_private; if (dev_priv->ring.virtual_start == NULL) { - DRM_ERROR("%s called without initializing AGP ring buffer.\n", - __FUNCTION__); + DRM_ERROR("called without initializing AGP ring buffer.\n"); return -EFAULT; } diff --git a/drivers/char/drm/via_dmablit.c b/drivers/char/drm/via_dmablit.c index c6fd16f..33c5197 
100644 --- a/drivers/char/drm/via_dmablit.c +++ b/drivers/char/drm/via_dmablit.c @@ -1,5 +1,5 @@ /* via_dmablit.c -- PCI DMA BitBlt support for the VIA Unichrome/Pro - * + * * Copyright (C) 2005 Thomas Hellstrom, All Rights Reserved. * * Permission is hereby granted, free of charge, to any person obtaining a @@ -16,22 +16,22 @@ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE * USE OR OTHER DEALINGS IN THE SOFTWARE. * - * Authors: + * Authors: * Thomas Hellstrom. * Partially based on code obtained from Digeo Inc. */ /* - * Unmaps the DMA mappings. - * FIXME: Is this a NoOp on x86? Also - * FIXME: What happens if this one is called and a pending blit has previously done - * the same DMA mappings? + * Unmaps the DMA mappings. + * FIXME: Is this a NoOp on x86? Also + * FIXME: What happens if this one is called and a pending blit has previously done + * the same DMA mappings? */ #include "drmP.h" @@ -65,7 +65,7 @@ via_unmap_blit_from_device(struct pci_dev *pdev, drm_via_sg_info_t *vsg) int num_desc = vsg->num_desc; unsigned cur_descriptor_page = num_desc / vsg->descriptors_per_page; unsigned descriptor_this_page = num_desc % vsg->descriptors_per_page; - drm_via_descriptor_t *desc_ptr = vsg->desc_pages[cur_descriptor_page] + + drm_via_descriptor_t *desc_ptr = vsg->desc_pages[cur_descriptor_page] + descriptor_this_page; dma_addr_t next = vsg->chain_start; @@ -73,7 +73,7 @@ via_unmap_blit_from_device(struct pci_dev *pdev, drm_via_sg_info_t *vsg) if (descriptor_this_page-- == 0) { cur_descriptor_page--; descriptor_this_page = vsg->descriptors_per_page - 1; - desc_ptr = vsg->desc_pages[cur_descriptor_page] + + desc_ptr = vsg->desc_pages[cur_descriptor_page] + descriptor_this_page; } dma_unmap_single(&pdev->dev, next, sizeof(*desc_ptr), DMA_TO_DEVICE); @@ -93,7 +93,7 @@ via_unmap_blit_from_device(struct pci_dev *pdev, drm_via_sg_info_t *vsg) static void via_map_blit_for_device(struct pci_dev *pdev, const drm_via_dmablit_t *xfer, - drm_via_sg_info_t *vsg, + drm_via_sg_info_t *vsg, int mode) { unsigned cur_descriptor_page = 0; @@ -110,7 +110,7 @@ via_map_blit_for_device(struct pci_dev *pdev, dma_addr_t next = 0 | VIA_DMA_DPR_EC; drm_via_descriptor_t *desc_ptr = NULL; - if (mode == 1) + if (mode == 1) desc_ptr = vsg->desc_pages[cur_descriptor_page]; for (cur_line = 0; cur_line < xfer->num_lines; ++cur_line) { @@ -118,24 +118,24 @@ via_map_blit_for_device(struct pci_dev *pdev, line_len = xfer->line_length; cur_fb = fb_addr; cur_mem = mem_addr; - + while (line_len > 0) { remaining_len = min(PAGE_SIZE-VIA_PGOFF(cur_mem), line_len); line_len -= remaining_len; if (mode == 1) { - desc_ptr->mem_addr = - dma_map_page(&pdev->dev, - vsg->pages[VIA_PFN(cur_mem) - + desc_ptr->mem_addr = + dma_map_page(&pdev->dev, + vsg->pages[VIA_PFN(cur_mem) - VIA_PFN(first_addr)], - VIA_PGOFF(cur_mem), remaining_len, + VIA_PGOFF(cur_mem), remaining_len, vsg->direction); desc_ptr->dev_addr = 
cur_fb; - + desc_ptr->size = remaining_len; desc_ptr->next = (uint32_t) next; - next = dma_map_single(&pdev->dev, desc_ptr, sizeof(*desc_ptr), + next = dma_map_single(&pdev->dev, desc_ptr, sizeof(*desc_ptr), DMA_TO_DEVICE); desc_ptr++; if (++num_descriptors_this_page >= vsg->descriptors_per_page) { @@ -143,12 +143,12 @@ via_map_blit_for_device(struct pci_dev *pdev, desc_ptr = vsg->desc_pages[++cur_descriptor_page]; } } - + num_desc++; cur_mem += remaining_len; cur_fb += remaining_len; } - + mem_addr += xfer->mem_stride; fb_addr += xfer->fb_stride; } @@ -161,14 +161,14 @@ via_map_blit_for_device(struct pci_dev *pdev, } /* - * Function that frees up all resources for a blit. It is usable even if the + * Function that frees up all resources for a blit. It is usable even if the * blit info has only been partially built as long as the status enum is consistent * with the actual status of the used resources. */ static void -via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) +via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) { struct page *page; int i; @@ -185,7 +185,7 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) case dr_via_pages_locked: for (i=0; i<vsg->num_pages; ++i) { if ( NULL != (page = vsg->pages[i])) { - if (! PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction)) + if (! PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction)) SetPageDirty(page); page_cache_release(page); } @@ -200,7 +200,7 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) vsg->bounce_buffer = NULL; } vsg->free_on_sequence = 0; -} +} /* * Fire a blit engine. @@ -213,7 +213,7 @@ via_fire_dmablit(struct drm_device *dev, drm_via_sg_info_t *vsg, int engine) VIA_WRITE(VIA_PCI_DMA_MAR0 + engine*0x10, 0); VIA_WRITE(VIA_PCI_DMA_DAR0 + engine*0x10, 0); - VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_DD | VIA_DMA_CSR_TD | + VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_DD | VIA_DMA_CSR_TD | VIA_DMA_CSR_DE); VIA_WRITE(VIA_PCI_DMA_MR0 + engine*0x04, VIA_DMA_MR_CM | VIA_DMA_MR_TDIE); VIA_WRITE(VIA_PCI_DMA_BCR0 + engine*0x10, 0); @@ -233,9 +233,9 @@ via_lock_all_dma_pages(drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer) { int ret; unsigned long first_pfn = VIA_PFN(xfer->mem_addr); - vsg->num_pages = VIA_PFN(xfer->mem_addr + (xfer->num_lines * xfer->mem_stride -1)) - + vsg->num_pages = VIA_PFN(xfer->mem_addr + (xfer->num_lines * xfer->mem_stride -1)) - first_pfn + 1; - + if (NULL == (vsg->pages = vmalloc(sizeof(struct page *) * vsg->num_pages))) return -ENOMEM; memset(vsg->pages, 0, sizeof(struct page *) * vsg->num_pages); @@ -248,7 +248,7 @@ via_lock_all_dma_pages(drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer) up_read(&current->mm->mmap_sem); if (ret != vsg->num_pages) { - if (ret < 0) + if (ret < 0) return ret; vsg->state = dr_via_pages_locked; return -EINVAL; @@ -264,21 +264,21 @@ via_lock_all_dma_pages(drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer) * quite large for some blits, and pages don't need to be contingous.
*/ -static int +static int via_alloc_desc_pages(drm_via_sg_info_t *vsg) { int i; - + vsg->descriptors_per_page = PAGE_SIZE / sizeof( drm_via_descriptor_t); - vsg->num_desc_pages = (vsg->num_desc + vsg->descriptors_per_page - 1) / + vsg->num_desc_pages = (vsg->num_desc + vsg->descriptors_per_page - 1) / vsg->descriptors_per_page; if (NULL == (vsg->desc_pages = kcalloc(vsg->num_desc_pages, sizeof(void *), GFP_KERNEL))) return -ENOMEM; - + vsg->state = dr_via_desc_pages_alloc; for (i=0; i<vsg->num_desc_pages; ++i) { - if (NULL == (vsg->desc_pages[i] = + if (NULL == (vsg->desc_pages[i] = (drm_via_descriptor_t *) __get_free_page(GFP_KERNEL))) return -ENOMEM; } @@ -286,7 +286,7 @@ via_alloc_desc_pages(drm_via_sg_info_t *vsg) vsg->num_desc); return 0; } - + static void via_abort_dmablit(struct drm_device *dev, int engine) { @@ -300,7 +300,7 @@ via_dmablit_engine_off(struct drm_device *dev, int engine) { drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; - VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_TD | VIA_DMA_CSR_DD); + VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_TD | VIA_DMA_CSR_DD); } @@ -311,7 +311,7 @@ via_dmablit_engine_off(struct drm_device *dev, int engine) * task. Basically the task of the interrupt handler is to submit a new blit to the engine, while * the workqueue task takes care of processing associated with the old blit. */ - + void via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) { @@ -331,19 +331,19 @@ via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) spin_lock_irqsave(&blitq->blit_lock, irqsave); } - done_transfer = blitq->is_active && + done_transfer = blitq->is_active && (( status = VIA_READ(VIA_PCI_DMA_CSR0 + engine*0x04)) & VIA_DMA_CSR_TD); - done_transfer = done_transfer || ( blitq->aborting && !(status & VIA_DMA_CSR_DE)); + done_transfer = done_transfer || ( blitq->aborting && !(status & VIA_DMA_CSR_DE)); cur = blitq->cur; if (done_transfer) { blitq->blits[cur]->aborted = blitq->aborting; blitq->done_blit_handle++; - DRM_WAKEUP(blitq->blit_queue + cur); + DRM_WAKEUP(blitq->blit_queue + cur); cur++; - if (cur >= VIA_NUM_BLIT_SLOTS) + if (cur >= VIA_NUM_BLIT_SLOTS) cur = 0; blitq->cur = cur; @@ -355,7 +355,7 @@ via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) blitq->is_active = 0; blitq->aborting = 0; - schedule_work(&blitq->wq); + schedule_work(&blitq->wq); } else if (blitq->is_active && time_after_eq(jiffies, blitq->end)) { @@ -367,7 +367,7 @@ via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) blitq->aborting = 1; blitq->end = jiffies + DRM_HZ; } - + if (!blitq->is_active) { if (blitq->num_outstanding) { via_fire_dmablit(dev, blitq->blits[cur], engine); @@ -383,14 +383,14 @@ via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) } via_dmablit_engine_off(dev, engine); } - } + } if (from_irq) { spin_unlock(&blitq->blit_lock); } else { spin_unlock_irqrestore(&blitq->blit_lock, irqsave); } -} +} @@ -426,13 +426,13 @@ via_dmablit_active(drm_via_blitq_t *blitq, int engine, uint32_t handle, wait_que return active; } - + /* * Sync. Wait for at least three seconds for the blit to be performed.
*/ static int -via_dmablit_sync(struct drm_device *dev, uint32_t handle, int engine) +via_dmablit_sync(struct drm_device *dev, uint32_t handle, int engine) { drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; @@ -441,12 +441,12 @@ via_dmablit_sync(struct drm_device *dev, uint32_t handle, int engine) int ret = 0; if (via_dmablit_active(blitq, engine, handle, &queue)) { - DRM_WAIT_ON(ret, *queue, 3 * DRM_HZ, + DRM_WAIT_ON(ret, *queue, 3 * DRM_HZ, !via_dmablit_active(blitq, engine, handle, NULL)); } DRM_DEBUG("DMA blit sync handle 0x%x engine %d returned %d\n", handle, engine, ret); - + return ret; } @@ -468,12 +468,12 @@ via_dmablit_timer(unsigned long data) struct drm_device *dev = blitq->dev; int engine = (int) (blitq - ((drm_via_private_t *)dev->dev_private)->blit_queues); - - DRM_DEBUG("Polling timer called for engine %d, jiffies %lu\n", engine, + + DRM_DEBUG("Polling timer called for engine %d, jiffies %lu\n", engine, (unsigned long) jiffies); via_dmablit_handler(dev, engine, 0); - + if (!timer_pending(&blitq->poll_timer)) { mod_timer(&blitq->poll_timer, jiffies + 1); @@ -497,7 +497,7 @@ via_dmablit_timer(unsigned long data) */ -static void +static void via_dmablit_workqueue(struct work_struct *work) { drm_via_blitq_t *blitq = container_of(work, drm_via_blitq_t, wq); @@ -505,38 +505,38 @@ via_dmablit_workqueue(struct work_struct *work) unsigned long irqsave; drm_via_sg_info_t *cur_sg; int cur_released; - - - DRM_DEBUG("Workqueue task called for blit engine %ld\n",(unsigned long) + + + DRM_DEBUG("Workqueue task called for blit engine %ld\n",(unsigned long) (blitq - ((drm_via_private_t *)dev->dev_private)->blit_queues)); spin_lock_irqsave(&blitq->blit_lock, irqsave); - + while(blitq->serviced != blitq->cur) { cur_released = blitq->serviced++; DRM_DEBUG("Releasing blit slot %d\n", cur_released); - if (blitq->serviced >= VIA_NUM_BLIT_SLOTS) + if (blitq->serviced >= VIA_NUM_BLIT_SLOTS) blitq->serviced = 0; - + cur_sg = blitq->blits[cur_released]; blitq->num_free++; - + spin_unlock_irqrestore(&blitq->blit_lock, irqsave); - + DRM_WAKEUP(&blitq->busy_queue); - + via_free_sg_info(dev->pdev, cur_sg); kfree(cur_sg); - + spin_lock_irqsave(&blitq->blit_lock, irqsave); } spin_unlock_irqrestore(&blitq->blit_lock, irqsave); } - + /* * Init all blit engines. Currently we use two, but some hardware have 4. @@ -550,8 +550,8 @@ via_init_dmablit(struct drm_device *dev) drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; drm_via_blitq_t *blitq; - pci_set_master(dev->pdev); - + pci_set_master(dev->pdev); + for (i=0; i< VIA_NUM_BLIT_ENGINES; ++i) { blitq = dev_priv->blit_queues + i; blitq->dev = dev; @@ -572,20 +572,20 @@ via_init_dmablit(struct drm_device *dev) INIT_WORK(&blitq->wq, via_dmablit_workqueue); setup_timer(&blitq->poll_timer, via_dmablit_timer, (unsigned long)blitq); - } + } } /* * Build all info and do all mappings required for a blit. */ - + static int via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer) { int draw = xfer->to_fb; int ret = 0; - + vsg->direction = (draw) ? DMA_TO_DEVICE : DMA_FROM_DEVICE; vsg->bounce_buffer = NULL; @@ -599,7 +599,7 @@ via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmabli /* * Below check is a driver limitation, not a hardware one. We * don't want to lock unused pages, and don't want to incoporate the - * extra logic of avoiding them. Make sure there are no. + * extra logic of avoiding them. Make sure there are no. * (Not a big limitation anyway.) 
*/ @@ -625,11 +625,11 @@ via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmabli if (xfer->num_lines > 2048 || (xfer->num_lines*xfer->mem_stride > (2048*2048*4))) { DRM_ERROR("Too large PCI DMA bitblt.\n"); return -EINVAL; - } + } - /* + /* * we allow a negative fb stride to allow flipping of images in - * transfer. + * transfer. */ if (xfer->mem_stride < xfer->line_length || @@ -653,11 +653,11 @@ via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmabli #else if ((((unsigned long)xfer->mem_addr & 15) || ((unsigned long)xfer->fb_addr & 3)) || - ((xfer->num_lines > 1) && + ((xfer->num_lines > 1) && ((xfer->mem_stride & 15) || (xfer->fb_stride & 3)))) { DRM_ERROR("Invalid DRM bitblt alignment.\n"); return -EINVAL; - } + } #endif if (0 != (ret = via_lock_all_dma_pages(vsg, xfer))) { @@ -673,17 +673,17 @@ via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmabli return ret; } via_map_blit_for_device(dev->pdev, xfer, vsg, 1); - + return 0; } - + /* * Reserve one free slot in the blit queue. Will wait for one second for one * to become available. Otherwise -EBUSY is returned. */ -static int +static int via_dmablit_grab_slot(drm_via_blitq_t *blitq, int engine) { int ret=0; @@ -698,10 +698,10 @@ via_dmablit_grab_slot(drm_via_blitq_t *blitq, int engine) if (ret) { return (-EINTR == ret) ? -EAGAIN : ret; } - + spin_lock_irqsave(&blitq->blit_lock, irqsave); } - + blitq->num_free--; spin_unlock_irqrestore(&blitq->blit_lock, irqsave); @@ -712,7 +712,7 @@ via_dmablit_grab_slot(drm_via_blitq_t *blitq, int engine) * Hand back a free slot if we changed our mind. */ -static void +static void via_dmablit_release_slot(drm_via_blitq_t *blitq) { unsigned long irqsave; @@ -728,8 +728,8 @@ via_dmablit_release_slot(drm_via_blitq_t *blitq) */ -static int -via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) +static int +via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) { drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; drm_via_sg_info_t *vsg; @@ -760,15 +760,15 @@ via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) spin_lock_irqsave(&blitq->blit_lock, irqsave); blitq->blits[blitq->head++] = vsg; - if (blitq->head >= VIA_NUM_BLIT_SLOTS) + if (blitq->head >= VIA_NUM_BLIT_SLOTS) blitq->head = 0; blitq->num_outstanding++; - xfer->sync.sync_handle = ++blitq->cur_blit_handle; + xfer->sync.sync_handle = ++blitq->cur_blit_handle; spin_unlock_irqrestore(&blitq->blit_lock, irqsave); xfer->sync.engine = engine; - via_dmablit_handler(dev, engine, 0); + via_dmablit_handler(dev, engine, 0); return 0; } @@ -776,7 +776,7 @@ via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) /* * Sync on a previously submitted blit. Note that the X server use signals extensively, and * that there is a very big probability that this IOCTL will be interrupted by a signal. In that - * case it returns with -EAGAIN for the signal to be delivered. + * case it returns with -EAGAIN for the signal to be delivered. * The caller should then reissue the IOCTL. This is similar to what is being done for drmGetLock(). 
*/ @@ -786,7 +786,7 @@ via_dma_blit_sync( struct drm_device *dev, void *data, struct drm_file *file_pri drm_via_blitsync_t *sync = data; int err; - if (sync->engine >= VIA_NUM_BLIT_ENGINES) + if (sync->engine >= VIA_NUM_BLIT_ENGINES) return -EINVAL; err = via_dmablit_sync(dev, sync->sync_handle, sync->engine); @@ -796,15 +796,15 @@ via_dma_blit_sync( struct drm_device *dev, void *data, struct drm_file *file_pri return err; } - + /* * Queue a blit and hand back a handle to be used for sync. This IOCTL may be interrupted by a signal - * while waiting for a free slot in the blit queue. In that case it returns with -EAGAIN and should + * while waiting for a free slot in the blit queue. In that case it returns with -EAGAIN and should * be reissued. See the above IOCTL code. */ -int +int via_dma_blit( struct drm_device *dev, void *data, struct drm_file *file_priv ) { drm_via_dmablit_t *xfer = data; diff --git a/drivers/char/drm/via_dmablit.h b/drivers/char/drm/via_dmablit.h index 6f6a513..7408a54 100644 --- a/drivers/char/drm/via_dmablit.h +++ b/drivers/char/drm/via_dmablit.h @@ -1,5 +1,5 @@ /* via_dmablit.h -- PCI DMA BitBlt support for the VIA Unichrome/Pro - * + * * Copyright 2005 Thomas Hellstrom. * All Rights Reserved. * @@ -17,12 +17,12 @@ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE * USE OR OTHER DEALINGS IN THE SOFTWARE. * - * Authors: + * Authors: * Thomas Hellstrom. * Register info from Digeo Inc. */ @@ -67,7 +67,7 @@ typedef struct _drm_via_blitq { unsigned cur; unsigned num_free; unsigned num_outstanding; - unsigned long end; + unsigned long end; int aborting; int is_active; drm_via_sg_info_t *blits[VIA_NUM_BLIT_SLOTS]; @@ -77,46 +77,46 @@ typedef struct _drm_via_blitq { struct work_struct wq; struct timer_list poll_timer; } drm_via_blitq_t; - -/* + +/* * PCI DMA Registers * Channels 2 & 3 don't seem to be implemented in hardware. 
*/ - -#define VIA_PCI_DMA_MAR0 0xE40 /* Memory Address Register of Channel 0 */ -#define VIA_PCI_DMA_DAR0 0xE44 /* Device Address Register of Channel 0 */ -#define VIA_PCI_DMA_BCR0 0xE48 /* Byte Count Register of Channel 0 */ -#define VIA_PCI_DMA_DPR0 0xE4C /* Descriptor Pointer Register of Channel 0 */ - -#define VIA_PCI_DMA_MAR1 0xE50 /* Memory Address Register of Channel 1 */ -#define VIA_PCI_DMA_DAR1 0xE54 /* Device Address Register of Channel 1 */ -#define VIA_PCI_DMA_BCR1 0xE58 /* Byte Count Register of Channel 1 */ -#define VIA_PCI_DMA_DPR1 0xE5C /* Descriptor Pointer Register of Channel 1 */ - -#define VIA_PCI_DMA_MAR2 0xE60 /* Memory Address Register of Channel 2 */ -#define VIA_PCI_DMA_DAR2 0xE64 /* Device Address Register of Channel 2 */ -#define VIA_PCI_DMA_BCR2 0xE68 /* Byte Count Register of Channel 2 */ -#define VIA_PCI_DMA_DPR2 0xE6C /* Descriptor Pointer Register of Channel 2 */ - -#define VIA_PCI_DMA_MAR3 0xE70 /* Memory Address Register of Channel 3 */ -#define VIA_PCI_DMA_DAR3 0xE74 /* Device Address Register of Channel 3 */ -#define VIA_PCI_DMA_BCR3 0xE78 /* Byte Count Register of Channel 3 */ -#define VIA_PCI_DMA_DPR3 0xE7C /* Descriptor Pointer Register of Channel 3 */ - -#define VIA_PCI_DMA_MR0 0xE80 /* Mode Register of Channel 0 */ -#define VIA_PCI_DMA_MR1 0xE84 /* Mode Register of Channel 1 */ -#define VIA_PCI_DMA_MR2 0xE88 /* Mode Register of Channel 2 */ -#define VIA_PCI_DMA_MR3 0xE8C /* Mode Register of Channel 3 */ - -#define VIA_PCI_DMA_CSR0 0xE90 /* Command/Status Register of Channel 0 */ -#define VIA_PCI_DMA_CSR1 0xE94 /* Command/Status Register of Channel 1 */ -#define VIA_PCI_DMA_CSR2 0xE98 /* Command/Status Register of Channel 2 */ -#define VIA_PCI_DMA_CSR3 0xE9C /* Command/Status Register of Channel 3 */ - -#define VIA_PCI_DMA_PTR 0xEA0 /* Priority Type Register */ - -/* Define for DMA engine */ + +#define VIA_PCI_DMA_MAR0 0xE40 /* Memory Address Register of Channel 0 */ +#define VIA_PCI_DMA_DAR0 0xE44 /* Device Address Register of Channel 0 */ +#define VIA_PCI_DMA_BCR0 0xE48 /* Byte Count Register of Channel 0 */ +#define VIA_PCI_DMA_DPR0 0xE4C /* Descriptor Pointer Register of Channel 0 */ + +#define VIA_PCI_DMA_MAR1 0xE50 /* Memory Address Register of Channel 1 */ +#define VIA_PCI_DMA_DAR1 0xE54 /* Device Address Register of Channel 1 */ +#define VIA_PCI_DMA_BCR1 0xE58 /* Byte Count Register of Channel 1 */ +#define VIA_PCI_DMA_DPR1 0xE5C /* Descriptor Pointer Register of Channel 1 */ + +#define VIA_PCI_DMA_MAR2 0xE60 /* Memory Address Register of Channel 2 */ +#define VIA_PCI_DMA_DAR2 0xE64 /* Device Address Register of Channel 2 */ +#define VIA_PCI_DMA_BCR2 0xE68 /* Byte Count Register of Channel 2 */ +#define VIA_PCI_DMA_DPR2 0xE6C /* Descriptor Pointer Register of Channel 2 */ + +#define VIA_PCI_DMA_MAR3 0xE70 /* Memory Address Register of Channel 3 */ +#define VIA_PCI_DMA_DAR3 0xE74 /* Device Address Register of Channel 3 */ +#define VIA_PCI_DMA_BCR3 0xE78 /* Byte Count Register of Channel 3 */ +#define VIA_PCI_DMA_DPR3 0xE7C /* Descriptor Pointer Register of Channel 3 */ + +#define VIA_PCI_DMA_MR0 0xE80 /* Mode Register of Channel 0 */ +#define VIA_PCI_DMA_MR1 0xE84 /* Mode Register of Channel 1 */ +#define VIA_PCI_DMA_MR2 0xE88 /* Mode Register of Channel 2 */ +#define VIA_PCI_DMA_MR3 0xE8C /* Mode Register of Channel 3 */ + +#define VIA_PCI_DMA_CSR0 0xE90 /* Command/Status Register of Channel 0 */ +#define VIA_PCI_DMA_CSR1 0xE94 /* Command/Status Register of Channel 1 */ +#define VIA_PCI_DMA_CSR2 0xE98 /* Command/Status Register of Channel 2 
*/ +#define VIA_PCI_DMA_CSR3 0xE9C /* Command/Status Register of Channel 3 */ + +#define VIA_PCI_DMA_PTR 0xEA0 /* Priority Type Register */ + +/* Define for DMA engine */ /* DPR */ #define VIA_DMA_DPR_EC (1<<1) /* end of chain */ #define VIA_DMA_DPR_DDIE (1<<2) /* descriptor done interrupt enable */ diff --git a/drivers/char/drm/via_drm.h b/drivers/char/drm/via_drm.h index 8f53c76..a3b5c10 100644 --- a/drivers/char/drm/via_drm.h +++ b/drivers/char/drm/via_drm.h @@ -35,7 +35,7 @@ #include "via_drmclient.h" #endif -#define VIA_NR_SAREA_CLIPRECTS 8 +#define VIA_NR_SAREA_CLIPRECTS 8 #define VIA_NR_XVMC_PORTS 10 #define VIA_NR_XVMC_LOCKS 5 #define VIA_MAX_CACHELINE_SIZE 64 @@ -259,7 +259,7 @@ typedef struct drm_via_blitsync { typedef struct drm_via_dmablit { uint32_t num_lines; uint32_t line_length; - + uint32_t fb_addr; uint32_t fb_stride; diff --git a/drivers/char/drm/via_drv.c b/drivers/char/drm/via_drv.c index 2d4957a..80c01cd 100644 --- a/drivers/char/drm/via_drv.c +++ b/drivers/char/drm/via_drv.c @@ -71,7 +71,7 @@ static struct drm_driver driver = { .name = DRIVER_NAME, .id_table = pciidlist, }, - + .name = DRIVER_NAME, .desc = DRIVER_DESC, .date = DRIVER_DATE, diff --git a/drivers/char/drm/via_irq.c b/drivers/char/drm/via_irq.c index 9c1d52b..c6bb978 100644 --- a/drivers/char/drm/via_irq.c +++ b/drivers/char/drm/via_irq.c @@ -169,9 +169,9 @@ int via_driver_vblank_wait(struct drm_device * dev, unsigned int *sequence) unsigned int cur_vblank; int ret = 0; - DRM_DEBUG("viadrv_vblank_wait\n"); + DRM_DEBUG("\n"); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } @@ -201,24 +201,23 @@ via_driver_irq_wait(struct drm_device * dev, unsigned int irq, int force_sequenc maskarray_t *masks; int real_irq; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); if (!dev_priv) { - DRM_ERROR("%s called with no initialization\n", __FUNCTION__); + DRM_ERROR("called with no initialization\n"); return -EINVAL; } if (irq >= drm_via_irq_num) { - DRM_ERROR("%s Trying to wait on unknown irq %d\n", __FUNCTION__, - irq); + DRM_ERROR("Trying to wait on unknown irq %d\n", irq); return -EINVAL; } real_irq = dev_priv->irq_map[irq]; if (real_irq < 0) { - DRM_ERROR("%s Video IRQ %d not available on this hardware.\n", - __FUNCTION__, irq); + DRM_ERROR("Video IRQ %d not available on this hardware.\n", + irq); return -EINVAL; } @@ -251,7 +250,7 @@ void via_driver_irq_preinstall(struct drm_device * dev) drm_via_irq_t *cur_irq; int i; - DRM_DEBUG("driver_irq_preinstall: dev_priv: %p\n", dev_priv); + DRM_DEBUG("dev_priv: %p\n", dev_priv); if (dev_priv) { cur_irq = dev_priv->via_irqs; @@ -298,7 +297,7 @@ void via_driver_irq_postinstall(struct drm_device * dev) drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; u32 status; - DRM_DEBUG("via_driver_irq_postinstall\n"); + DRM_DEBUG("\n"); if (dev_priv) { status = VIA_READ(VIA_REG_INTERRUPT); VIA_WRITE(VIA_REG_INTERRUPT, status | VIA_IRQ_GLOBAL @@ -317,7 +316,7 @@ void via_driver_irq_uninstall(struct drm_device * dev) drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; u32 status; - DRM_DEBUG("driver_irq_uninstall)\n"); + DRM_DEBUG("\n"); if (dev_priv) { /* Some more magic, oh for some data sheets ! 
*/ @@ -344,7 +343,7 @@ int via_wait_irq(struct drm_device *dev, void *data, struct drm_file *file_priv) return -EINVAL; if (irqwait->request.irq >= dev_priv->num_irqs) { - DRM_ERROR("%s Trying to wait on unknown irq %d\n", __FUNCTION__, + DRM_ERROR("Trying to wait on unknown irq %d\n", irqwait->request.irq); return -EINVAL; } @@ -362,8 +361,7 @@ int via_wait_irq(struct drm_device *dev, void *data, struct drm_file *file_priv) } if (irqwait->request.type & VIA_IRQ_SIGNAL) { - DRM_ERROR("%s Signals on Via IRQs not implemented yet.\n", - __FUNCTION__); + DRM_ERROR("Signals on Via IRQs not implemented yet.\n"); return -EINVAL; } diff --git a/drivers/char/drm/via_map.c b/drivers/char/drm/via_map.c index 1009150..a967556 100644 --- a/drivers/char/drm/via_map.c +++ b/drivers/char/drm/via_map.c @@ -29,7 +29,7 @@ static int via_do_init_map(struct drm_device * dev, drm_via_init_t * init) { drm_via_private_t *dev_priv = dev->dev_private; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); dev_priv->sarea = drm_getsarea(dev); if (!dev_priv->sarea) { @@ -79,7 +79,7 @@ int via_map_init(struct drm_device *dev, void *data, struct drm_file *file_priv) { drm_via_init_t *init = data; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); switch (init->func) { case VIA_INIT_MAP: @@ -121,4 +121,3 @@ int via_driver_unload(struct drm_device *dev) return 0; } - diff --git a/drivers/char/drm/via_mm.c b/drivers/char/drm/via_mm.c index 3ffbf86..e640949 100644 --- a/drivers/char/drm/via_mm.c +++ b/drivers/char/drm/via_mm.c @@ -53,7 +53,7 @@ int via_agp_init(struct drm_device *dev, void *data, struct drm_file *file_priv) dev_priv->agp_offset = agp->offset; mutex_unlock(&dev->struct_mutex); - DRM_DEBUG("offset = %u, size = %u", agp->offset, agp->size); + DRM_DEBUG("offset = %u, size = %u\n", agp->offset, agp->size); return 0; } @@ -77,7 +77,7 @@ int via_fb_init(struct drm_device *dev, void *data, struct drm_file *file_priv) dev_priv->vram_offset = fb->offset; mutex_unlock(&dev->struct_mutex); - DRM_DEBUG("offset = %u, size = %u", fb->offset, fb->size); + DRM_DEBUG("offset = %u, size = %u\n", fb->offset, fb->size); return 0; @@ -113,7 +113,7 @@ void via_lastclose(struct drm_device *dev) dev_priv->vram_initialized = 0; dev_priv->agp_initialized = 0; mutex_unlock(&dev->struct_mutex); -} +} int via_mem_alloc(struct drm_device *dev, void *data, struct drm_file *file_priv) diff --git a/drivers/char/drm/via_video.c b/drivers/char/drm/via_video.c index c15e75b..6ec04ac 100644 --- a/drivers/char/drm/via_video.c +++ b/drivers/char/drm/via_video.c @@ -33,7 +33,7 @@ void via_init_futex(drm_via_private_t * dev_priv) { unsigned int i; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); for (i = 0; i < VIA_NR_XVMC_LOCKS; ++i) { DRM_INIT_WAITQUEUE(&(dev_priv->decoder_queue[i])); @@ -73,7 +73,7 @@ int via_decoder_futex(struct drm_device *dev, void *data, struct drm_file *file_ drm_via_sarea_t *sAPriv = dev_priv->sarea_priv; int ret = 0; - DRM_DEBUG("%s\n", __FUNCTION__); + DRM_DEBUG("\n"); if (fx->lock > VIA_NR_XVMC_LOCKS) return -EFAULT;
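Most of the DRM_DEBUG/DRM_ERROR hunks above simply drop an explicit "%s", __FUNCTION__ argument from the format string. That is only safe if the logging macro itself already prefixes each message with the calling function's name, which is what the drmP.h macros are assumed to do here. A minimal, hypothetical userspace sketch of that idea follows; MOCK_DRM_DEBUG and radeon_cp_start_demo are made-up illustrative names, not the kernel's actual definitions.

#include <stdio.h>

/*
 * Userspace mock only: the real DRM_DEBUG lives in drmP.h and is assumed
 * to prepend the calling function name in roughly this fashion.
 */
#define MOCK_DRM_DEBUG(fmt, ...) \
	printf("[drm:%s] " fmt, __func__, ##__VA_ARGS__)

/* Hypothetical caller mirroring the cleaned-up call sites in the patch. */
static void radeon_cp_start_demo(int cp_running)
{
	if (cp_running)
		MOCK_DRM_DEBUG("while CP running\n");
}

int main(void)
{
	/* Prints "[drm:radeon_cp_start_demo] while CP running" */
	radeon_cp_start_demo(1);
	return 0;
}

Keeping the function name inside the macro shortens every call site and keeps the log prefix consistent across drivers, which is presumably why the per-call __FUNCTION__ arguments could be removed wholesale.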