GIT e78ef23c1e2fff14b7992b1380b98bec241f6fc6 git+ssh://master.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6.git#test

commit 4d2fafd17a325b3f4f5f9edb1211bc7f4c311269
Author: Tear
Date: Wed May 23 14:12:30 2007 -0700

ACPI: Remove Dell Optiplex GX240 from the ACPI blacklist

I have a Dell Optiplex GX240, and when I boot Linux, ACPI is set up with only acpi=ht. dmesg shows the following line:

DELL GX240 detected: force use of acpi=ht

Everything seemed to be fine. However, I discovered that everything is not fine: the USB controller works so slowly that copying a few (uncached) 1-megabyte photos from a USB-enabled digital camera takes many minutes instead of a couple of seconds. I am using Linux 2.6.21.1 on a Debian 4.0 ("Etch") system.

I thought this might be related to ACPI, so I tried to boot with only "acpi=force" appended to the kernel command line. Voila, the USB controller started to work at full speed and copying photos from my digital camera took only seconds. I tested the system with "acpi=force" and could not find anything that did not work.

I thought this might be related to interrupts and the APIC as well. (Note that this is APIC, not ACPI.) I tried booting with only "noapic" and "nolapic" appended to the command line. Again, the USB controller started to work at full speed.

Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit 3f8698d4d3f72252980575fb8d7b4cafeb5dd0a2
Author: Kristen Carlson Accardi
Date: Wed May 23 14:12:29 2007 -0700

ACPI: bay: send envp with uevent

Make the bay driver send env information on bay events. Upon any bay event, we will send the string "BAY_EVENT=%d" along with KOBJ_CHANGE, and report the event number. What the event number means is platform specific. Event 3 is always an eject request, but an insert may be either event 1 or event 0, and event 1 may also be a remove request. It is best to check the number of your event with udevmonitor before writing any udev scripts for inserting and removing drive bays.

Signed-off-by: Kristen Carlson Accardi
Cc: Stephan Berberig
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit ff55a9cebab02403f942121e2f898bb06ecfffbb
Author: Len Brown
Date: Sat Jun 2 00:15:25 2007 -0400

ACPI: Lindent processor throttling code

Signed-off-by: Len Brown

commit 01854e697a77a434104b2f7e6d7fd463a978af32
Author: Luming Yu
Date: Sat May 26 22:49:58 2007 +0800

ACPI: add ACPI 3.0 _TPC _TSS _PTC throttling support

Adds support for _TPC (Throttling Present Capabilities), _TSS (Throttling Supported States), and _PTC (Processor Throttling Control).

Signed-off-by: Luming Yu
Signed-off-by: Len Brown

commit 0157896199e2b0e65b5be1d423aded91634f2861
Author: Len Brown
Date: Thu May 31 22:51:43 2007 -0400

cpuidle: build fix - cpuidle vs ipw2100 module

ERROR: "acpi_set_cstate_limit" [drivers/net/wireless/ipw2100.ko] undefined!

Signed-off-by: Len Brown

commit 66132bd12d63a88881b31e40afa6926b2877ef67
Author: Luck, Tony
Date: Thu May 24 13:57:40 2007 -0700

ACPI: Section mismatch ... acpi_map_pxm_to_node

Last of the "Section mismatch" errors from ia64 builds! acpi_map_pxm_to_node() is defined with attribute __cpuinit, but is called by "normal" kernel functions acpi_get_node() and acpi_map_cpu2node(). Commit f363d16fbb9374c0bd7f2757d412c287169094c9 moved the data structures on which this routine operates from __cpuinitdata to regular memory, so this routine can also move out of init space.

Signed-off-by: Tony Luck
Signed-off-by: Len Brown
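For readers unfamiliar with "Section mismatch" warnings, the pattern being fixed looks roughly like the following. This is illustrative only; the function names are invented and this is not the actual ia64 code.

        /*
         * Illustrative only.  A function marked __cpuinit may be placed in a
         * section that is discarded once CPU bringup is finished (depending
         * on CONFIG_HOTPLUG_CPU), so calling it from ordinary kernel text
         * produces a section-mismatch warning.
         */
        static int __cpuinit pxm_lookup(int pxm)        /* init-time section */
        {
                return pxm;                             /* ...table lookup... */
        }

        int regular_caller(int pxm)                     /* ordinary .text */
        {
                return pxm_lookup(pxm);                 /* mismatch reported here */
        }

Dropping the __cpuinit annotation (as this commit does) keeps the callee in regular text for the lifetime of the kernel, which is safe now that its data also lives in regular memory.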
commit 40bf36fed3f990cad3c54175528a975f265659ce
Author: Adam Belay
Date: Sat Mar 24 03:47:07 2007 -0400

cpuidle: add the 'menu' governor

Here is my first take at implementing an idle PM governor that takes full advantage of NO_HZ. I call it the 'menu' governor because it considers the full list of idle states before each entry.

I've kept the implementation fairly simple. It attempts to guess the next residency time and then chooses a state that would meet at least the break-even point between power savings and entry cost. To this end, it selects the deepest idle state that satisfies the following constraints:

1. If the idle time elapsed since bus master activity was detected is below a threshold (currently 20 ms), then limit the selection to C2-type or above.
2. Do not choose a state with a break-even residency that exceeds the expected time remaining until the next timer interrupt.
3. Do not choose a state with a break-even residency that exceeds the elapsed time between the last pair of break events, excluding timer interrupts.

This governor has an advantage over the "ladder" governor because it proactively checks how much time remains until the next timer interrupt using the tick infrastructure. It also handles device interrupt activity more intelligently by not including timer interrupts in break-event calculations. Finally, it doesn't make policy decisions using the number of state entries, which can have variable residency times (NO_HZ makes these potentially very large), and instead only considers sleep-time deltas.

The menu governor can be selected at runtime through the cpuidle sysfs interface, like so:

echo "menu" > /sys/devices/system/cpu/cpuidle/current_governor

Signed-off-by: Adam Belay
Signed-off-by: Len Brown
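A minimal sketch of the selection heuristic described above, in plain C. The structure and names are invented for illustration and do not correspond to the actual drivers/cpuidle/governors/menu.c added by this series; states are assumed to be ordered from shallowest to deepest, with break-even residency growing with depth.

        struct example_state {
                unsigned int break_even_us;     /* residency needed to amortize entry cost */
                int deeper_than_c2;             /* avoided while bus-master activity is recent */
        };

        static int example_select(const struct example_state *s, int nr_states,
                                  unsigned int us_to_next_timer,
                                  unsigned int last_break_interval_us,
                                  int bm_activity_recent)
        {
                int i, choice = 0;              /* state 0 is always acceptable */

                for (i = 1; i < nr_states; i++) {
                        /* 1. recent bus-master activity: stay at C2-type or above */
                        if (bm_activity_recent && s[i].deeper_than_c2)
                                break;
                        /* 2. must break even before the next expected timer interrupt */
                        if (s[i].break_even_us > us_to_next_timer)
                                break;
                        /* 3. must break even within the last observed break-event interval */
                        if (s[i].break_even_us > last_break_interval_us)
                                break;
                        choice = i;             /* deepest state meeting all three constraints */
                }
                return choice;
        }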
commit 85b2afe395e09ffb6a0bdb88d8e0fe80f039ef30
Author: Adam Belay
Date: Sat Mar 24 03:47:03 2007 -0400

cpuidle: export time until next timer interrupt using NO_HZ

Expose information about the time remaining until the next timer interrupt expires by utilizing the dynticks infrastructure. Also modify the main idle loop to allow dynticks to handle non-interrupt break events (e.g. DMA). Finally, expose sleep-ticks information to external code. Thomas Gleixner is responsible for much of the code in this patch. However, I've made some additional changes, so I'm probably responsible if there are any bugs or oversights :)

Signed-off-by: Adam Belay
Signed-off-by: Len Brown

commit ddb5ec5c0a7ef163c1efe2468bcc005a57187dfa
Author: Adam Belay
Date: Sat Mar 24 03:46:58 2007 -0400

cpuidle: governor API changes

This patch prepares cpuidle for the menu governor. It adds an optional stage after idle-state entry to give the governor an opportunity to check why the state was exited. It also makes sure the idle loop returns after each state entry, allowing the appropriate dynticks code to run.

Signed-off-by: Adam Belay
Signed-off-by: Len Brown

commit 1e5e8c1042335e3dddc86a977ca2cd3a660cb8bc
Author: Alexey Starikovskiy
Date: Thu May 31 20:40:07 2007 -0400

ACPI: remove recursion from thermal notify handler

Threshold changes occur in response to trip-point events, in order to implement hysteresis. Thus, it is not necessary to re-evaluate _TMP and compare it to the trip points on threshold changes, because we have already just done that. This is important because thermal_check() executes _TMP, which on some systems may itself cause an additional event.

http://bugzilla.kernel.org/show_bug.cgi?id=8385

Signed-off-by: Alexey Starikovskiy
Signed-off-by: Len Brown

commit bef1c09c47c7d2d93392096b8105eb1370fd515d
Author: Thomas Renninger
Date: Thu May 31 17:20:39 2007 +0200

ACPI: create CONFIG_ACPI_DEBUG_FUNC_TRACE

Split ACPI_DEBUG into function trace enabled and not enabled. The function trace accounts for most of the ACPI_DEBUG cost, but is not of much use for kernel ACPI debugging. Size of the kernel image increased on a test compile:

+ 48k (full ACPI_DEBUG)
+ 35k (ACPI_DEBUG with function trace compiled out)

Performance without the function trace is also much better. Also remove ACPI_LV_DEBUG_OBJECT from the default debug level, as a lot of vendors leave Store (value, Debug) statements in their code and this might confuse users when it pops up in syslog.

Signed-off-by: Thomas Renninger
Signed-off-by: Len Brown

commit cb917ad56e4c903ae6d7740922a9590515535cf5
Author: Len Brown
Date: Wed May 30 00:28:13 2007 -0400

ACPI: disable _OSI(Linux) by default

Per the notes in the previous commit, Linux can not continue to enable _OSI(Linux) by default -- it exposes BIOS bugs. Disable it by default now.

Signed-off-by: Len Brown

commit 39c31576c2610337ec04f01939dfab72847750ed
Author: Bob Moore
Date: Wed May 2 15:51:37 2007 -0400

ACPICA: Clear reserved fields for incoming ACPI 1.0 FADTs

Fixed a problem with the internal FADT conversion where ACPI 1.0 FADTs that contained invalid non-zero values in reserved fields could cause later failures, because these fields have meaning in later revisions of the FADT. For incoming ACPI 1.0 FADTs, these fields are now always zeroed (Preferred_PM_Profile, PSTATE_CNT, CST_CNT, IAPC_BOOT_FLAGS).

Signed-off-by: Bob Moore
Signed-off-by: Len Brown

commit 3fb364812c61294ecec0c09d546f3c35be3353a8
Author: Bob Moore
Date: Tue Apr 3 20:00:29 2007 -0400

ACPICA: Fixed possible corruption of global GPE list

Fixed a problem in acpi_ev_delete_gpe_xrupt where the global interrupt list could be corrupted if the interrupt being removed was at the head of the list. Reported by Linn Crosetto.

Signed-off-by: Bob Moore
Signed-off-by: Len Brown

commit 22455d0ecdf2b178a081b05de3dad1801baf0957
Author: Bob Moore
Date: Tue Apr 3 19:59:37 2007 -0400

ACPICA: Support for external package objects as method arguments

Implemented support to allow Package objects to be passed as method arguments to the acpi_evaluate_object interface. Previously, this would return an AE_NOT_IMPLEMENTED exception.

Signed-off-by: Bob Moore
Signed-off-by: Len Brown

commit 7dcb691fa4224c3c74095ffabcc7d992e69221f6
Author: Bob Moore
Date: Tue Mar 27 20:25:46 2007 -0400

ACPICA: Changes for Cygwin compatibility

Allow generation of ACPICA apps on Cygwin.

Signed-off-by: Bob Moore
Signed-off-by: Len Brown

commit 9da86eb6ffdbc92f2d977663d46d361d18fd9e68
Author: Bob Moore
Date: Mon Mar 26 22:10:34 2007 -0400

ACPICA: Update _OSI string list

Latest update for the Windows strings, with comments. Removed unused strings.

Signed-off-by: Bob Moore
Signed-off-by: Len Brown

commit 78490d82129f7331d1366737c8704c1c053221a3
Author: Alexey Starikovskiy
Date: Fri May 11 13:18:55 2007 -0400

ACPI: battery: syntax cleanup

In response to review comments from Andrew Morton.

Signed-off-by: Alexey Starikovskiy
Signed-off-by: Len Brown

commit 1f9767df1346c9ce09d6e51b9f34b851e3d94fad
Author: Kristen Carlson Accardi
Date: Wed May 9 15:55:53 2007 -0700

ACPI: bay: unsuppress uevents

Since platform devices have uevents suppressed by default, manually unsuppress them for the bay device, since we want to be able to send uevents.

Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown
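Taken together with the earlier "bay: send envp with uevent" patch, the bay driver's uevent handling amounts to the following pattern. This is a condensed sketch of the drivers/acpi/bay.c changes shown in the diff below; the helper names here are invented, the real code lives in bay_add() and bay_notify().

        #include <linux/platform_device.h>
        #include <linux/kobject.h>

        /* at bay_add() time: platform devices suppress uevents by default */
        static void bay_enable_uevents(struct platform_device *pdev)
        {
                pdev->dev.uevent_suppress = 0;
        }

        /* on any bay notify: forward the raw event number to userspace */
        static void bay_report_event(struct device *dev, u32 event)
        {
                char event_string[12];
                char *envp[] = { event_string, NULL };

                sprintf(event_string, "BAY_EVENT=%d", event);
                kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);
        }

Userspace can then match on the BAY_EVENT environment variable in udev rules, after confirming the event numbers on a given platform with udevmonitor as the commit message suggests.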
commit 79a8f70b4b9127eacfc91dd1436c4a7be05e62ab
Author: Kristen Carlson Accardi
Date: Wed May 9 15:10:22 2007 -0700

ACPI: dock: send envp with uevent

Send an env along with our KOBJ_CHANGE uevent so that user space has the option of checking for it to see whether a dock or undock has occurred.

Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown

commit 9ef2a9a9f08722998540ed2ff38bccd0c54344c8
Author: Kristen Carlson Accardi
Date: Wed May 9 15:09:12 2007 -0700

ACPI: dock: unsuppress uevents

Platform devices may not send uevents by default - override the setting so that we can send uevents on dock/undock.

Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown

commit a0cd35fdca0bb711854edeaf016cec6cdf82eeca
Author: Kristen Carlson Accardi
Date: Wed May 9 15:08:15 2007 -0700

ACPI: dock: add immediate_undock option

Allow the driver to be loaded with an option that lets userspace control whether the laptop is ejected immediately when the user presses the button, or only when the sysfs undock file is written.

If immediate_undock == 1, then when the user presses the undock button, the laptop will send an event to notify userspace of the undock, but then immediately undock without waiting for userspace. This is the current behavior, and I set it to be the default.

If immediate_undock == 0, then when the user presses the undock button, the laptop will send an event to userspace and do nothing. User space can query the "flags" sysfs entry to determine whether an undock request has been made by the user (bit 1 is set). User space will then need to write the undock sysfs entry to complete the undocking process.

Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown
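Condensed from the drivers/acpi/dock.c changes in the diff below, the eject-request path now branches on immediate_undock as sketched here. The wrapper name is invented for the sketch; begin_undock(), handle_eject_request(), dock_event() and UNDOCK_EVENT are the names used in the patch itself.

        static void dock_notify_sketch(struct dock_station *ds, u32 event)
        {
                switch (event) {
                case ACPI_NOTIFY_EJECT_REQUEST:
                        begin_undock(ds);       /* sets DOCK_UNDOCKING in ds->flags */
                        if (immediate_undock)
                                /* default: eject immediately, as before */
                                handle_eject_request(ds, event);
                        else
                                /* only notify userspace ("UNDOCK" uevent);
                                 * userspace completes the undock by writing
                                 * the undock sysfs file */
                                dock_event(ds, event, UNDOCK_EVENT);
                        break;
                }
        }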
commit 0f6f2804563eee64f0fc7cbcb009b98b6f332af6
Author: Kristen Carlson Accardi
Date: Wed May 9 15:07:04 2007 -0700

ACPI: dock: use dynamically allocated platform device

Get rid of "no release function" warnings by switching to dynamically allocating the platform_device and using the platform device release routine in the base driver.

Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown

commit 22fe4c2114e29477ca6738729c074ee8f60d3b73
Author: Chuck Ebbert
Date: Wed May 9 15:05:48 2007 -0700

ACPI: dock: fix oops after dock driver fails to initialize

The driver tests the dock_station pointer for non-null to check whether it has initialized properly. But in some cases dock_station will be non-null after being freed when driver init fails. Fix by zeroing the pointer after freeing.

Signed-off-by: Chuck Ebbert
Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Len Brown

commit 38ff4ffc039ba5a5878f2dcbb03d87c3a1f02f1b
Author: Kristen Carlson Accardi
Date: Wed May 9 15:04:24 2007 -0700

ACPI: dock: cleanup the uid patch

Make the uid sysfs file error path free memory, and clean up the sysfs file when removing the driver. Also fix CodingStyle violations.

Signed-off-by: Kristen Carlson Accardi
Cc: Illya A. Volynets-Evenbakh
Signed-off-by: Len Brown

commit 23b0f015bf2c050b8b5399430ca64e1b3398cf76
Author: Luming Yu
Date: Wed May 9 21:07:05 2007 +0800

ACPI: video: output switch sysfs support

Requires CONFIG_VIDEO_OUTPUT_CONTROL and CONFIG_ACPI_VIDEO. After loading output.ko and video.ko, you will have /sys/class/video_output and several acpi_videoN devices there. For example, I got acpi_video0, acpi_video1, acpi_video2, and acpi_video3 under /sys/class/video_output on my T40.

I can query the status of output device 0 by running "cat /sys/class/video_output/acpi_video0". The return value is defined in ACPI spec B.5.5 _DCS (Return the Status of Output Device). You can also turn off video1 and turn on video0 with "echo 0 > acpi_video1; echo 0x80000000 > acpi_video0". Please see ACPI spec B.5.7 _DSS for the parameter definition. Note that this may or may not work, depending purely on whether your vendor provides correct ACPI video extension support in the BIOS; the drivers output.ko and video.ko just act as an interface to invoke the BIOS.

Signed-off-by: Luming Yu
Signed-off-by: Len Brown

commit e1d4b76cbe138f7c93a051010362d0f9add57566
Author: Venki Pallipadi
Date: Thu Apr 26 00:03:59 2007 -0700

cpuidle: hang fix

Prevent a hang on x86-64 when the ACPI processor driver is added as a module on a system that does not support C-states. x86-64 expects all idle handlers to enable interrupts before returning from the idle handler, due to enter_idle()/exit_idle() races. Make cpuidle_idle_call() conform to this when there is no pm_idle_old. Also, cpuidle now looks at the return values of attach_driver() and sets current_driver to NULL if attach fails on all CPUs.

Signed-off-by: Venkatesh Pallipadi
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit ff0e02b9a13936484eea6a713e99df0204d99732
Author: Shaohua Li
Date: Thu Apr 26 10:40:09 2007 +0800

cpuidle: add support for max_cstate limit

With the CPUIDLE framework, the max_cstate parameter (to limit the maximum CPU C-state) is ignored. Some systems require it to avoid C2/C3, and some drivers like ipw require it too.

Signed-off-by: Shaohua Li
Signed-off-by: Len Brown

commit 4d91bed5b98905d0399466e6500385c7c9aa4675
Author: Shaohua Li
Date: Thu Apr 26 10:40:13 2007 +0800

cpuidle: add cpuidle_force_redetect_devices API

Add the cpuidle_force_redetect_devices API, which forces all CPUs to redetect their idle states. The next patch will use it.

Signed-off-by: Shaohua Li
Signed-off-by: Len Brown

commit 55705c005660bf8c5e83359f3d1ef009702af0da
Author: Shaohua Li
Date: Thu Apr 26 10:40:01 2007 +0800

cpuidle: fix sysfs related issue

Fix the cpuidle sysfs issues:
a. make the kobject dynamically allocated
b. fix a sysfs init issue to avoid a suspend/resume problem

Signed-off-by: Shaohua Li
Signed-off-by: Len Brown

commit 6eedeef73e7fff32eb5fa25178c3c77b1db0ec0f
Author: Vladimir Lebedev
Date: Sat Apr 21 22:41:48 2007 -0400

process reading battery status hangs

http://bugzilla.kernel.org/show_bug.cgi?id=8351

Signed-off-by: Vladimir Lebedev
Signed-off-by: Len Brown

commit 4d8f36e7d95a98fe7099a76e1fb766a77d26efff
Author: Randy Dunlap
Date: Wed Mar 28 22:52:53 2007 -0400

cpuidle: 1-bit field must be unsigned

A 1-bit bitfield has no room for a sign bit.

drivers/cpuidle/governors/ladder.c:54:16: error: dubious bitfield without explicit `signed' or `unsigned'

Signed-off-by: Randy Dunlap
Cc: Venkatesh Pallipadi
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit 6e122e5e891d5e66fe3df37474e00b6ab241b80c
Author: Venkatesh Pallipadi
Date: Wed Mar 28 22:52:41 2007 -0400

cpuidle: fix boot hang

Patch for the cpuidle boot hang reported by Larry Finger here:
http://www.ussg.iu.edu/hypermail/linux/kernel/0703.2/2025.html

Signed-off-by: Venkatesh Pallipadi
Cc: Larry Finger
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit 8eecab24ab74a9b565888f62ab172b5993b85784
Author: Len Brown
Date: Wed Mar 7 04:37:53 2007 -0500

cpuidle: ladder does not depend on ACPI

Build fix for CONFIG_ACPI=n:

In file included from drivers/cpuidle/governors/ladder.c:21:
include/acpi/processor.h:88: error: expected specifier-qualifier-list before ‘acpi_integer’
include/acpi/processor.h:106: error: expected specifier-qualifier-list before ‘acpi_integer’
include/acpi/processor.h:168: error: expected specifier-qualifier-list before ‘acpi_handle’

Signed-off-by: Len Brown

commit ed6a8fc4d67a601706e72442a0568852e20aa7e7
Author: Adrian Bunk
Date: Tue Mar 6 02:29:40 2007 -0800

cpuidle: make code static

This patch makes the following needlessly global code static:
- driver.c: __cpuidle_find_driver()
- governor.c: __cpuidle_find_governor()
- ladder.c: struct ladder_governor

Signed-off-by: Adrian Bunk
Cc: Venkatesh Pallipadi
Cc: Adam Belay
Cc: Shaohua Li
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit 0b7b292b9e893f21f275ab40457e0a6a9fb3302a
Author: Venkatesh Pallipadi
Date: Wed Mar 7 02:38:22 2007 -0500

cpu_idle: fix build break

This patch fixes a build breakage with !CONFIG_HOTPLUG_CPU and CONFIG_CPU_IDLE.

Signed-off-by: Venkatesh Pallipadi
Signed-off-by: Adrian Bunk
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit 5520c1cdb3154acb2c1250c395bd7d6f3c6f01b5
Author: Venkatesh Pallipadi
Date: Tue Mar 6 02:29:39 2007 -0800

cpuidle: build fix for !CPU_IDLE

Fix the compile issues when CPU_IDLE is not configured.

Signed-off-by: Venkatesh Pallipadi
Cc: Adam Belay
Cc: Shaohua Li
Signed-off-by: Andrew Morton
Signed-off-by: Len Brown

commit bd5951fdfdef243135ea275c27e6ff5ba20a3d7d
Author: Venkatesh Pallipadi
Date: Thu Feb 22 13:54:57 2007 -0800

cpuidle take2: Basic documentation for cpuidle

Documentation for the cpuidle infrastructure.

Signed-off-by: Venkatesh Pallipadi
Signed-off-by: Adam Belay
Signed-off-by: Shaohua Li
Signed-off-by: Len Brown

commit 33741f25522cf4e660e263106df951b9cbe02df1
Author: Venkatesh Pallipadi
Date: Thu Feb 22 13:54:03 2007 -0800

cpuidle take2: Hookup ACPI C-states driver with cpuidle

Hook the ACPI C-states onto the generic cpuidle infrastructure. drivers/acpi/processor_idle.c is now an ACPI C-states driver that registers with the cpuidle infrastructure, and the policy part is removed from drivers/acpi/processor_idle.c; we use cpuidle governors instead.

Signed-off-by: Shaohua Li
Signed-off-by: Venkatesh Pallipadi
Signed-off-by: Adam Belay
Signed-off-by: Len Brown

commit b89790e9968a77c6cdc9fa08c5260d73face5487
Author: Venkatesh Pallipadi
Date: Thu Feb 22 13:52:57 2007 -0800

cpuidle take2: Core cpuidle infrastructure

Announcing 'cpuidle', a new CPU power management infrastructure to manage idle CPUs in a clean and efficient manner. cpuidle separates the drivers that can provide support for multiple types of idle states from the policy governors that decide which idle state to use at run time.

A cpuidle driver can support multiple idle states based on parameters like varying power consumption, wakeup latency, etc. (ACPI C-states, for example). A cpuidle governor can be usage-model specific (laptop, server, laptop on battery, etc.). The main advantage of the infrastructure is that it allows independent development of drivers and governors and allows for better CPU power management.

A huge thanks to Adam Belay and Shaohua Li, who were part of this mini-project since its beginning and are greatly responsible for this patchset.

This patch: Core cpuidle infrastructure. It introduces a new abstraction layer for cpuidle which:
* manages drivers that can support multiple idle states; drivers can be generic or particular to specific hardware/platforms
* allows plugging in multiple policy governors that can make idle-state policy decisions
* provides a set of sysfs interfaces with which administrators can see the supported drivers and governors and switch them at run time

Signed-off-by: Adam Belay
Signed-off-by: Shaohua Li
Signed-off-by: Venkatesh Pallipadi
Signed-off-by: Len Brown
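As a rough orientation to the new interfaces, a governor registers itself with the core roughly as sketched below. The callback names (select_state, prepare_idle, scan) and the cpuidle_register_governor() entry point are taken from the Documentation/cpuidle/governor.txt added later in this diff; the actual struct cpuidle_governor layout lives in include/linux/cpuidle.h and is not reproduced in this summary, so treat the exact field names and prototypes here as assumptions.

        #include <linux/init.h>
        #include <linux/cpuidle.h>

        /*
         * Skeleton only: callback names follow governor.txt below; the real
         * prototypes are defined in include/linux/cpuidle.h.
         */
        static int example_select_state(struct cpuidle_device *dev)
        {
                /* a trivial policy: always pick the shallowest state */
                return 0;
        }

        static struct cpuidle_governor example_governor = {
                .name           = "example",
                .select_state   = example_select_state,
                /* .prepare_idle and .scan are further optional hooks */
        };

        static int __init example_governor_init(void)
        {
                return cpuidle_register_governor(&example_governor);
        }

At run time, a registered governor is selected through /sys/devices/system/cpu/cpuidle/current_governor, as noted in the menu governor entry above.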
commit 9ea7d57576f40c6af03c8c9fa7a069f2222b498b
Author: Vladimir Lebedev
Date: Tue Feb 20 15:48:06 2007 +0300

ACPI: battery: Lindent

Signed-off-by: Vladimir Lebedev
Signed-off-by: Len Brown

commit b6ce4083ed8e2a01a3a59301eabe0fc1e68a8a84
Author: Vladimir Lebedev
Date: Tue Feb 20 15:48:06 2007 +0300

ACPI: Cache battery status instead of re-evaluating AML

/proc exports _BST in a single file, and _BST is re-evaluated whenever that file is read. Sometimes user space reads this file frequently, and on some systems _BST takes a long time to evaluate due to a slow EC. Further, when we move to sysfs, the values returned from _BST will be in multiple files, and evaluating _BST for each file read would make matters worse.

Here code is added to support caching the results of _BST. A new module parameter, "update_time", tells how many seconds the cached _BST should be used before it is re-evaluated. Currently, update_time defaults to 0, so the existing behaviour of re-evaluating on each read is retained.

Signed-off-by: Vladimir Lebedev
Signed-off-by: Len Brown

commit a1f0eff21edac1bd87e397f56c4258b9611b5a50
Author: Vladimir Lebedev
Date: Tue Feb 20 15:48:06 2007 +0300

ACPI: battery: make internal names consistent with battery "state"

Cleanup -- no functional changes. Battery state is currently exported in a proc "state" file. Update the associated #defines and routines to be consistent.
Signed-off-by: Vladimir Lebedev Signed-off-by: Len Brown Documentation/cpuidle/core.txt | 17 + Documentation/cpuidle/driver.txt | 24 + Documentation/cpuidle/governor.txt | 24 + Documentation/cpuidle/sysfs.txt | 27 + arch/i386/Kconfig | 2 arch/i386/kernel/acpi/boot.c | 8 arch/i386/kernel/process.c | 2 arch/x86_64/Kconfig | 2 drivers/Makefile | 1 drivers/acpi/Kconfig | 10 drivers/acpi/battery.c | 673 +++++++++++++++++--------- drivers/acpi/bay.c | 19 - drivers/acpi/dock.c | 119 ++++- drivers/acpi/events/evgpeblk.c | 4 drivers/acpi/osl.c | 12 drivers/acpi/processor_core.c | 11 drivers/acpi/processor_idle.c | 889 ++++++++++++++--------------------- drivers/acpi/processor_throttling.c | 408 +++++++++++++++- drivers/acpi/tables/tbfadt.c | 44 +- drivers/acpi/thermal.c | 1 drivers/acpi/utilities/uteval.c | 17 - drivers/acpi/video.c | 40 ++ drivers/cpuidle/Kconfig | 39 ++ drivers/cpuidle/Makefile | 5 drivers/cpuidle/cpuidle.c | 307 ++++++++++++ drivers/cpuidle/cpuidle.h | 50 ++ drivers/cpuidle/driver.c | 276 +++++++++++ drivers/cpuidle/governor.c | 160 ++++++ drivers/cpuidle/governors/Makefile | 6 drivers/cpuidle/governors/ladder.c | 227 +++++++++ drivers/cpuidle/governors/menu.c | 152 ++++++ drivers/cpuidle/sysfs.c | 373 +++++++++++++++ drivers/video/Kconfig | 7 drivers/video/Makefile | 3 include/acpi/acmacros.h | 23 + include/acpi/acoutput.h | 4 include/acpi/platform/acenv.h | 2 include/acpi/platform/aclinux.h | 3 include/acpi/processor.h | 50 ++ include/linux/acpi.h | 7 include/linux/cpuidle.h | 189 +++++++ include/linux/tick.h | 10 kernel/softirq.c | 5 kernel/time/tick-sched.c | 24 + 44 files changed, 3398 insertions(+), 878 deletions(-) diff --git a/Documentation/cpuidle/core.txt b/Documentation/cpuidle/core.txt new file mode 100644 index 0000000..e686cfc --- /dev/null +++ b/Documentation/cpuidle/core.txt @@ -0,0 +1,17 @@ + + Supporting multiple CPU idle levels in kernel + + cpuidle + +General Information: + +Various CPUs today support multiple idle levels that are differentiated +by varying exit latencies and power consumption during idle. +cpuidle is a generic in-kernel infrastructure that separates +idle policy (governor) from idle mechanism (driver) and provides a +standardized infrastructure to support independent development of +governors and drivers. + +cpuidle resides under /drivers/cpuidle. + + diff --git a/Documentation/cpuidle/driver.txt b/Documentation/cpuidle/driver.txt new file mode 100644 index 0000000..2dbee4b --- /dev/null +++ b/Documentation/cpuidle/driver.txt @@ -0,0 +1,24 @@ + + + Supporting multiple CPU idle levels in kernel + + cpuidle drivers + + + + +cpuidle driver supports capability detection for a particular system. The +init and exit routines will be called for each online CPU, with a percpu +cpuidle_driver object and driver should fill in cpuidle_states inside +cpuidle_driver depending on the CPU capability. + +Driver can handle dynamic state changes (like battery<->AC), by calling +force_redetect interface. + +It is possible to have more than one driver registered at the same time and +user can switch between drivers using /sysfs interface. 
+ +Interfaces: +int cpuidle_register_driver(struct cpuidle_driver *drv); +void cpuidle_unregister_driver(struct cpuidle_driver *drv); +int cpuidle_force_redetect(struct cpuidle_device *dev); diff --git a/Documentation/cpuidle/governor.txt b/Documentation/cpuidle/governor.txt new file mode 100644 index 0000000..a0fc3e5 --- /dev/null +++ b/Documentation/cpuidle/governor.txt @@ -0,0 +1,24 @@ + + + + Supporting multiple CPU idle levels in kernel + + cpuidle governors + + + + +cpuidle governor is policy routine that decides what idle state to enter at +any given time. cpuidle core uses different callbacks to governor while +handling idle entry. +* select_state callback where governor can determine next idle state to enter +* prepare_idle callback is called before entering an idle state +* scan callback is called after a driver forces redetection of the states + +More than one governor can be registered at the same time and +user can switch between drivers using /sysfs interface. + +Interfaces: +int cpuidle_register_governor(struct cpuidle_governor *gov); +void cpuidle_unregister_governor(struct cpuidle_governor *gov); + diff --git a/Documentation/cpuidle/sysfs.txt b/Documentation/cpuidle/sysfs.txt new file mode 100644 index 0000000..7fbf644 --- /dev/null +++ b/Documentation/cpuidle/sysfs.txt @@ -0,0 +1,27 @@ + + + Supporting multiple CPU idle levels in kernel + + cpuidle sysfs + +System global cpuidle information are under +/sys/devices/system/cpu/cpuidle + +The current interfaces in this directory has self-explanatory names: +* available_drivers +* available_governors +* current_driver +* current_governor + +Per logical CPU specific cpuidle information are under +/sys/devices/system/cpu/cpuX/cpuidle +for each online cpu X + +Under this percpu directory, there is a directory for each idle state supported +by the driver, which in turn has +* latency +* power +* time +* usage + + diff --git a/arch/i386/Kconfig b/arch/i386/Kconfig index 8770a5d..8ccb93f 100644 --- a/arch/i386/Kconfig +++ b/arch/i386/Kconfig @@ -1053,6 +1053,8 @@ endif # APM source "arch/i386/kernel/cpu/cpufreq/Kconfig" +source "drivers/cpuidle/Kconfig" + endmenu menu "Bus options (PCI, PCMCIA, EISA, MCA, ISA)" diff --git a/arch/i386/kernel/acpi/boot.c b/arch/i386/kernel/acpi/boot.c index 280898b..a2c8b9e 100644 --- a/arch/i386/kernel/acpi/boot.c +++ b/arch/i386/kernel/acpi/boot.c @@ -971,14 +971,6 @@ static struct dmi_system_id __initdata a }, { .callback = force_acpi_ht, - .ident = "DELL GX240", - .matches = { - DMI_MATCH(DMI_BOARD_VENDOR, "Dell Computer Corporation"), - DMI_MATCH(DMI_BOARD_NAME, "OptiPlex GX240"), - }, - }, - { - .callback = force_acpi_ht, .ident = "HP VISUALIZE NT Workstation", .matches = { DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), diff --git a/arch/i386/kernel/process.c b/arch/i386/kernel/process.c index 06dfa65..857484b 100644 --- a/arch/i386/kernel/process.c +++ b/arch/i386/kernel/process.c @@ -179,13 +179,13 @@ void cpu_idle(void) /* endless idle loop with no priority at all */ while (1) { - tick_nohz_stop_sched_tick(); while (!need_resched()) { void (*idle)(void); if (__get_cpu_var(cpu_idle_state)) __get_cpu_var(cpu_idle_state) = 0; + tick_nohz_stop_sched_tick(); check_pgt_cache(); rmb(); idle = pm_idle; diff --git a/arch/x86_64/Kconfig b/arch/x86_64/Kconfig index 5ce9443..1937466 100644 --- a/arch/x86_64/Kconfig +++ b/arch/x86_64/Kconfig @@ -698,6 +698,8 @@ source "drivers/acpi/Kconfig" source "arch/x86_64/kernel/cpufreq/Kconfig" +source "drivers/cpuidle/Kconfig" + endmenu menu "Bus options (PCI 
etc.)" diff --git a/drivers/Makefile b/drivers/Makefile index adad2f3..4ddaf85 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -70,6 +70,7 @@ obj-$(CONFIG_EDAC) += edac/ obj-$(CONFIG_MCA) += mca/ obj-$(CONFIG_EISA) += eisa/ obj-$(CONFIG_CPU_FREQ) += cpufreq/ +obj-$(CONFIG_CPU_IDLE) += cpuidle/ obj-$(CONFIG_MMC) += mmc/ obj-$(CONFIG_NEW_LEDS) += leds/ obj-$(CONFIG_INFINIBAND) += infiniband/ diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index 139f41f..b3f9518 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -124,7 +124,7 @@ config ACPI_BUTTON config ACPI_VIDEO tristate "Video" - depends on X86 && BACKLIGHT_CLASS_DEVICE + depends on X86 && BACKLIGHT_CLASS_DEVICE && VIDEO_OUTPUT_CONTROL help This driver implement the ACPI Extensions For Display Adapters for integrated graphics devices on motherboard, as specified in @@ -280,6 +280,14 @@ config ACPI_DEBUG of verbosity. Saying Y enables these statements. This will increase your kernel size by around 50K. +config ACPI_DEBUG_FUNC_TRACE + bool "Additionally enable ACPI function tracing" + default n + depends on ACPI_DEBUG + help + ACPI Debug Statements slow down ACPI processing. Function trace + is about half of the penalty and is rarely useful. + config ACPI_EC bool default y diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c index e64c76c..cad932d 100644 --- a/drivers/acpi/battery.c +++ b/drivers/acpi/battery.c @@ -43,21 +43,30 @@ #define ACPI_BATTERY_COMPONENT 0x000400 #define ACPI_BATTERY_CLASS "battery" #define ACPI_BATTERY_HID "PNP0C0A" #define ACPI_BATTERY_DEVICE_NAME "Battery" -#define ACPI_BATTERY_FILE_INFO "info" -#define ACPI_BATTERY_FILE_STATUS "state" -#define ACPI_BATTERY_FILE_ALARM "alarm" #define ACPI_BATTERY_NOTIFY_STATUS 0x80 #define ACPI_BATTERY_NOTIFY_INFO 0x81 #define ACPI_BATTERY_UNITS_WATTS "mW" #define ACPI_BATTERY_UNITS_AMPS "mA" #define _COMPONENT ACPI_BATTERY_COMPONENT + +#define ACPI_BATTERY_UPDATE_TIME 0 + +#define ACPI_BATTERY_NONE_UPDATE 0 +#define ACPI_BATTERY_EASY_UPDATE 1 +#define ACPI_BATTERY_INIT_UPDATE 2 + ACPI_MODULE_NAME("battery"); MODULE_AUTHOR("Paul Diefenbaugh"); MODULE_DESCRIPTION("ACPI Battery Driver"); MODULE_LICENSE("GPL"); +static unsigned int update_time = ACPI_BATTERY_UPDATE_TIME; + +/* 0 - every time, > 0 - by update_time */ +module_param(update_time, uint, 0644); + extern struct proc_dir_entry *acpi_lock_battery_dir(void); extern void *acpi_unlock_battery_dir(struct proc_dir_entry *acpi_battery_dir); @@ -76,7 +85,7 @@ static struct acpi_driver acpi_battery_d }, }; -struct acpi_battery_status { +struct acpi_battery_state { acpi_integer state; acpi_integer present_rate; acpi_integer remaining_capacity; @@ -99,33 +108,111 @@ struct acpi_battery_info { acpi_string oem_info; }; -struct acpi_battery_flags { - u8 present:1; /* Bay occupied? */ - u8 power_unit:1; /* 0=watts, 1=apms */ - u8 alarm:1; /* _BTP present? 
*/ - u8 reserved:5; +enum acpi_battery_files{ + ACPI_BATTERY_INFO = 0, + ACPI_BATTERY_STATE, + ACPI_BATTERY_ALARM, + ACPI_BATTERY_NUMFILES, }; -struct acpi_battery_trips { - unsigned long warning; - unsigned long low; +struct acpi_battery_flags { + u8 battery_present_prev; + u8 alarm_present; + u8 init_update; + u8 update[ACPI_BATTERY_NUMFILES]; + u8 power_unit; }; struct acpi_battery { - struct acpi_device * device; + struct mutex mutex; + struct acpi_device *device; struct acpi_battery_flags flags; - struct acpi_battery_trips trips; + struct acpi_buffer bif_data; + struct acpi_buffer bst_data; unsigned long alarm; - struct acpi_battery_info *info; + unsigned long update_time[ACPI_BATTERY_NUMFILES]; }; +inline int acpi_battery_present(struct acpi_battery *battery) +{ + return battery->device->status.battery_present; +} +inline char *acpi_battery_power_units(struct acpi_battery *battery) +{ + if (battery->flags.power_unit) + return ACPI_BATTERY_UNITS_AMPS; + else + return ACPI_BATTERY_UNITS_WATTS; +} + +inline acpi_handle acpi_battery_handle(struct acpi_battery *battery) +{ + return battery->device->handle; +} + /* -------------------------------------------------------------------------- Battery Management -------------------------------------------------------------------------- */ -static int -acpi_battery_get_info(struct acpi_battery *battery, - struct acpi_battery_info **bif) +static void acpi_battery_check_result(struct acpi_battery *battery, int result) +{ + if (!battery) + return; + + if (result) { + battery->flags.init_update = 1; + } +} + +static int acpi_battery_extract_package(struct acpi_battery *battery, + union acpi_object *package, + struct acpi_buffer *format, + struct acpi_buffer *data, + char *package_name) +{ + acpi_status status = AE_OK; + struct acpi_buffer data_null = { 0, NULL }; + + status = acpi_extract_package(package, format, &data_null); + if (status != AE_BUFFER_OVERFLOW) { + ACPI_EXCEPTION((AE_INFO, status, "Extracting size %s", + package_name)); + return -ENODEV; + } + + if (data_null.length != data->length) { + kfree(data->pointer); + data->pointer = kzalloc(data_null.length, GFP_KERNEL); + if (!data->pointer) { + ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY, "kzalloc()")); + return -ENOMEM; + } + data->length = data_null.length; + } + + status = acpi_extract_package(package, format, data); + if (ACPI_FAILURE(status)) { + ACPI_EXCEPTION((AE_INFO, status, "Extracting %s", + package_name)); + return -ENODEV; + } + + return 0; +} + +static int acpi_battery_get_status(struct acpi_battery *battery) +{ + int result = 0; + + result = acpi_bus_get_status(battery->device); + if (result) { + ACPI_EXCEPTION((AE_INFO, AE_ERROR, "Evaluating _STA")); + return -ENODEV; + } + return result; +} + +static int acpi_battery_get_info(struct acpi_battery *battery) { int result = 0; acpi_status status = 0; @@ -133,16 +220,20 @@ acpi_battery_get_info(struct acpi_batter struct acpi_buffer format = { sizeof(ACPI_BATTERY_FORMAT_BIF), ACPI_BATTERY_FORMAT_BIF }; - struct acpi_buffer data = { 0, NULL }; union acpi_object *package = NULL; + struct acpi_buffer *data = NULL; + struct acpi_battery_info *bif = NULL; + battery->update_time[ACPI_BATTERY_INFO] = get_seconds(); - if (!battery || !bif) - return -EINVAL; + if (!acpi_battery_present(battery)) + return 0; - /* Evalute _BIF */ + /* Evaluate _BIF */ - status = acpi_evaluate_object(battery->device->handle, "_BIF", NULL, &buffer); + status = + acpi_evaluate_object(acpi_battery_handle(battery), "_BIF", NULL, + &buffer); if 
(ACPI_FAILURE(status)) { ACPI_EXCEPTION((AE_INFO, status, "Evaluating _BIF")); return -ENODEV; @@ -150,41 +241,29 @@ acpi_battery_get_info(struct acpi_batter package = buffer.pointer; - /* Extract Package Data */ - - status = acpi_extract_package(package, &format, &data); - if (status != AE_BUFFER_OVERFLOW) { - ACPI_EXCEPTION((AE_INFO, status, "Extracting _BIF")); - result = -ENODEV; - goto end; - } + data = &battery->bif_data; - data.pointer = kzalloc(data.length, GFP_KERNEL); - if (!data.pointer) { - result = -ENOMEM; - goto end; - } + /* Extract Package Data */ - status = acpi_extract_package(package, &format, &data); - if (ACPI_FAILURE(status)) { - ACPI_EXCEPTION((AE_INFO, status, "Extracting _BIF")); - kfree(data.pointer); - result = -ENODEV; + result = + acpi_battery_extract_package(battery, package, &format, data, + "_BIF"); + if (result) goto end; - } end: + kfree(buffer.pointer); - if (!result) - (*bif) = data.pointer; + if (!result) { + bif = data->pointer; + battery->flags.power_unit = bif->power_unit; + } return result; } -static int -acpi_battery_get_status(struct acpi_battery *battery, - struct acpi_battery_status **bst) +static int acpi_battery_get_state(struct acpi_battery *battery) { int result = 0; acpi_status status = 0; @@ -192,16 +271,19 @@ acpi_battery_get_status(struct acpi_batt struct acpi_buffer format = { sizeof(ACPI_BATTERY_FORMAT_BST), ACPI_BATTERY_FORMAT_BST }; - struct acpi_buffer data = { 0, NULL }; union acpi_object *package = NULL; + struct acpi_buffer *data = NULL; + battery->update_time[ACPI_BATTERY_STATE] = get_seconds(); - if (!battery || !bst) - return -EINVAL; + if (!acpi_battery_present(battery)) + return 0; - /* Evalute _BST */ + /* Evaluate _BST */ - status = acpi_evaluate_object(battery->device->handle, "_BST", NULL, &buffer); + status = + acpi_evaluate_object(acpi_battery_handle(battery), "_BST", NULL, + &buffer); if (ACPI_FAILURE(status)) { ACPI_EXCEPTION((AE_INFO, status, "Evaluating _BST")); return -ENODEV; @@ -209,55 +291,49 @@ acpi_battery_get_status(struct acpi_batt package = buffer.pointer; - /* Extract Package Data */ - - status = acpi_extract_package(package, &format, &data); - if (status != AE_BUFFER_OVERFLOW) { - ACPI_EXCEPTION((AE_INFO, status, "Extracting _BST")); - result = -ENODEV; - goto end; - } + data = &battery->bst_data; - data.pointer = kzalloc(data.length, GFP_KERNEL); - if (!data.pointer) { - result = -ENOMEM; - goto end; - } + /* Extract Package Data */ - status = acpi_extract_package(package, &format, &data); - if (ACPI_FAILURE(status)) { - ACPI_EXCEPTION((AE_INFO, status, "Extracting _BST")); - kfree(data.pointer); - result = -ENODEV; + result = + acpi_battery_extract_package(battery, package, &format, data, + "_BST"); + if (result) goto end; - } end: kfree(buffer.pointer); - if (!result) - (*bst) = data.pointer; - return result; } -static int -acpi_battery_set_alarm(struct acpi_battery *battery, unsigned long alarm) +static int acpi_battery_get_alarm(struct acpi_battery *battery) +{ + battery->update_time[ACPI_BATTERY_ALARM] = get_seconds(); + + return 0; +} + +static int acpi_battery_set_alarm(struct acpi_battery *battery, + unsigned long alarm) { acpi_status status = 0; union acpi_object arg0 = { ACPI_TYPE_INTEGER }; struct acpi_object_list arg_list = { 1, &arg0 }; + battery->update_time[ACPI_BATTERY_ALARM] = get_seconds(); - if (!battery) - return -EINVAL; + if (!acpi_battery_present(battery)) + return -ENODEV; - if (!battery->flags.alarm) + if (!battery->flags.alarm_present) return -ENODEV; arg0.integer.value = 
alarm; - status = acpi_evaluate_object(battery->device->handle, "_BTP", &arg_list, NULL); + status = + acpi_evaluate_object(acpi_battery_handle(battery), "_BTP", + &arg_list, NULL); if (ACPI_FAILURE(status)) return -ENODEV; @@ -268,65 +344,114 @@ acpi_battery_set_alarm(struct acpi_batte return 0; } -static int acpi_battery_check(struct acpi_battery *battery) +static int acpi_battery_init_alarm(struct acpi_battery *battery) { int result = 0; acpi_status status = AE_OK; acpi_handle handle = NULL; - struct acpi_device *device = NULL; - struct acpi_battery_info *bif = NULL; + struct acpi_battery_info *bif = battery->bif_data.pointer; + unsigned long alarm = battery->alarm; + /* See if alarms are supported, and if so, set default */ - if (!battery) - return -EINVAL; - - device = battery->device; + status = acpi_get_handle(acpi_battery_handle(battery), "_BTP", &handle); + if (ACPI_SUCCESS(status)) { + battery->flags.alarm_present = 1; + if (!alarm && bif) { + alarm = bif->design_capacity_warning; + } + result = acpi_battery_set_alarm(battery, alarm); + if (result) + goto end; + } else { + battery->flags.alarm_present = 0; + } - result = acpi_bus_get_status(device); - if (result) - return result; + end: - /* Insertion? */ + return result; +} - if (!battery->flags.present && device->status.battery_present) { +static int acpi_battery_init_update(struct acpi_battery *battery) +{ + int result = 0; - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Battery inserted\n")); + result = acpi_battery_get_status(battery); + if (result) + return result; - /* Evalute _BIF to get certain static information */ + battery->flags.battery_present_prev = acpi_battery_present(battery); - result = acpi_battery_get_info(battery, &bif); + if (acpi_battery_present(battery)) { + result = acpi_battery_get_info(battery); + if (result) + return result; + result = acpi_battery_get_state(battery); if (result) return result; - battery->flags.power_unit = bif->power_unit; - battery->trips.warning = bif->design_capacity_warning; - battery->trips.low = bif->design_capacity_low; - kfree(bif); + acpi_battery_init_alarm(battery); + } + + return result; +} - /* See if alarms are supported, and if so, set default */ +static int acpi_battery_update(struct acpi_battery *battery, + int update, int *update_result_ptr) +{ + int result = 0; + int update_result = ACPI_BATTERY_NONE_UPDATE; + + if (!acpi_battery_present(battery)) { + update = 1; + } - status = acpi_get_handle(battery->device->handle, "_BTP", &handle); - if (ACPI_SUCCESS(status)) { - battery->flags.alarm = 1; - acpi_battery_set_alarm(battery, battery->trips.warning); + if (battery->flags.init_update) { + result = acpi_battery_init_update(battery); + if (result) + goto end; + update_result = ACPI_BATTERY_INIT_UPDATE; + } else if (update) { + result = acpi_battery_get_status(battery); + if (result) + goto end; + if ((!battery->flags.battery_present_prev & acpi_battery_present(battery)) + || (battery->flags.battery_present_prev & !acpi_battery_present(battery))) { + result = acpi_battery_init_update(battery); + if (result) + goto end; + update_result = ACPI_BATTERY_INIT_UPDATE; + } else { + update_result = ACPI_BATTERY_EASY_UPDATE; } } - /* Removal? 
*/ + end: - else if (battery->flags.present && !device->status.battery_present) { - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Battery removed\n")); - } + battery->flags.init_update = (result != 0); - battery->flags.present = device->status.battery_present; + *update_result_ptr = update_result; return result; } -static void acpi_battery_check_present(struct acpi_battery *battery) +static void acpi_battery_notify_update(struct acpi_battery *battery) { - if (!battery->flags.present) { - acpi_battery_check(battery); + acpi_battery_get_status(battery); + + if (battery->flags.init_update) { + return; + } + + if ((!battery->flags.battery_present_prev & + acpi_battery_present(battery)) || + (battery->flags.battery_present_prev & + !acpi_battery_present(battery))) { + battery->flags.init_update = 1; + } else { + battery->flags.update[ACPI_BATTERY_INFO] = 1; + battery->flags.update[ACPI_BATTERY_STATE] = 1; + battery->flags.update[ACPI_BATTERY_ALARM] = 1; } } @@ -335,37 +460,33 @@ static void acpi_battery_check_present(s -------------------------------------------------------------------------- */ static struct proc_dir_entry *acpi_battery_dir; -static int acpi_battery_read_info(struct seq_file *seq, void *offset) + +static int acpi_battery_print_info(struct seq_file *seq, int result) { - int result = 0; struct acpi_battery *battery = seq->private; struct acpi_battery_info *bif = NULL; char *units = "?"; - - if (!battery) + if (result) goto end; - acpi_battery_check_present(battery); - - if (battery->flags.present) + if (acpi_battery_present(battery)) seq_printf(seq, "present: yes\n"); else { seq_printf(seq, "present: no\n"); goto end; } - /* Battery Info (_BIF) */ - - result = acpi_battery_get_info(battery, &bif); - if (result || !bif) { - seq_printf(seq, "ERROR: Unable to read battery information\n"); + bif = battery->bif_data.pointer; + if (!bif) { + ACPI_EXCEPTION((AE_INFO, AE_ERROR, "BIF buffer is NULL")); + result = -ENODEV; goto end; } - units = - bif-> - power_unit ? ACPI_BATTERY_UNITS_AMPS : ACPI_BATTERY_UNITS_WATTS; + /* Battery Units */ + + units = acpi_battery_power_units(battery); if (bif->design_capacity == ACPI_BATTERY_VALUE_UNKNOWN) seq_printf(seq, "design capacity: unknown\n"); @@ -396,7 +517,6 @@ static int acpi_battery_read_info(struct else seq_printf(seq, "design voltage: %d mV\n", (u32) bif->design_voltage); - seq_printf(seq, "design capacity warning: %d %sh\n", (u32) bif->design_capacity_warning, units); seq_printf(seq, "design capacity low: %d %sh\n", @@ -411,50 +531,40 @@ static int acpi_battery_read_info(struct seq_printf(seq, "OEM info: %s\n", bif->oem_info); end: - kfree(bif); - return 0; -} + if (result) + seq_printf(seq, "ERROR: Unable to read battery info\n"); -static int acpi_battery_info_open_fs(struct inode *inode, struct file *file) -{ - return single_open(file, acpi_battery_read_info, PDE(inode)->data); + return result; } -static int acpi_battery_read_state(struct seq_file *seq, void *offset) +static int acpi_battery_print_state(struct seq_file *seq, int result) { - int result = 0; struct acpi_battery *battery = seq->private; - struct acpi_battery_status *bst = NULL; + struct acpi_battery_state *bst = NULL; char *units = "?"; - - if (!battery) + if (result) goto end; - acpi_battery_check_present(battery); - - if (battery->flags.present) + if (acpi_battery_present(battery)) seq_printf(seq, "present: yes\n"); else { seq_printf(seq, "present: no\n"); goto end; } - /* Battery Units */ - - units = - battery->flags. - power_unit ? 
ACPI_BATTERY_UNITS_AMPS : ACPI_BATTERY_UNITS_WATTS; - - /* Battery Status (_BST) */ - - result = acpi_battery_get_status(battery, &bst); - if (result || !bst) { - seq_printf(seq, "ERROR: Unable to read battery status\n"); + bst = battery->bst_data.pointer; + if (!bst) { + ACPI_EXCEPTION((AE_INFO, AE_ERROR, "BST buffer is NULL")); + result = -ENODEV; goto end; } + /* Battery Units */ + + units = acpi_battery_power_units(battery); + if (!(bst->state & 0x04)) seq_printf(seq, "capacity state: ok\n"); else @@ -490,48 +600,43 @@ static int acpi_battery_read_state(struc (u32) bst->present_voltage); end: - kfree(bst); - return 0; -} + if (result) { + seq_printf(seq, "ERROR: Unable to read battery state\n"); + } -static int acpi_battery_state_open_fs(struct inode *inode, struct file *file) -{ - return single_open(file, acpi_battery_read_state, PDE(inode)->data); + return result; } -static int acpi_battery_read_alarm(struct seq_file *seq, void *offset) +static int acpi_battery_print_alarm(struct seq_file *seq, int result) { struct acpi_battery *battery = seq->private; char *units = "?"; - - if (!battery) + if (result) goto end; - acpi_battery_check_present(battery); - - if (!battery->flags.present) { + if (!acpi_battery_present(battery)) { seq_printf(seq, "present: no\n"); goto end; } /* Battery Units */ - units = - battery->flags. - power_unit ? ACPI_BATTERY_UNITS_AMPS : ACPI_BATTERY_UNITS_WATTS; - - /* Battery Alarm */ + units = acpi_battery_power_units(battery); seq_printf(seq, "alarm: "); if (!battery->alarm) seq_printf(seq, "unsupported\n"); else - seq_printf(seq, "%d %sh\n", (u32) battery->alarm, units); + seq_printf(seq, "%lu %sh\n", battery->alarm, units); end: - return 0; + + if (result) + seq_printf(seq, "ERROR: Unable to read battery alarm\n"); + + return result; } static ssize_t @@ -543,27 +648,113 @@ acpi_battery_write_alarm(struct file *fi char alarm_string[12] = { '\0' }; struct seq_file *m = file->private_data; struct acpi_battery *battery = m->private; - + int update_result = ACPI_BATTERY_NONE_UPDATE; if (!battery || (count > sizeof(alarm_string) - 1)) return -EINVAL; - acpi_battery_check_present(battery); + mutex_lock(&battery->mutex); - if (!battery->flags.present) - return -ENODEV; + result = acpi_battery_update(battery, 1, &update_result); + if (result) { + result = -ENODEV; + goto end; + } + + if (!acpi_battery_present(battery)) { + result = -ENODEV; + goto end; + } - if (copy_from_user(alarm_string, buffer, count)) - return -EFAULT; + if (copy_from_user(alarm_string, buffer, count)) { + result = -EFAULT; + goto end; + } alarm_string[count] = '\0'; result = acpi_battery_set_alarm(battery, simple_strtoul(alarm_string, NULL, 0)); if (result) - return result; + goto end; + + end: - return count; + acpi_battery_check_result(battery, result); + + if (!result) + result = count; + + mutex_unlock(&battery->mutex); + + return result; +} + +typedef int(*print_func)(struct seq_file *seq, int result); +typedef int(*get_func)(struct acpi_battery *battery); + +static struct acpi_read_mux { + print_func print; + get_func get; +} acpi_read_funcs[ACPI_BATTERY_NUMFILES] = { + {.get = acpi_battery_get_info, .print = acpi_battery_print_info}, + {.get = acpi_battery_get_state, .print = acpi_battery_print_state}, + {.get = acpi_battery_get_alarm, .print = acpi_battery_print_alarm}, +}; + +static int acpi_battery_read(int fid, struct seq_file *seq) +{ + struct acpi_battery *battery = seq->private; + int result = 0; + int update_result = ACPI_BATTERY_NONE_UPDATE; + int update = 0; + + 
mutex_lock(&battery->mutex); + + update = (get_seconds() - battery->update_time[fid] >= update_time); + update = (update | battery->flags.update[fid]); + + result = acpi_battery_update(battery, update, &update_result); + if (result) + goto end; + + if (update_result == ACPI_BATTERY_EASY_UPDATE) { + result = acpi_read_funcs[fid].get(battery); + if (result) + goto end; + } + + end: + result = acpi_read_funcs[fid].print(seq, result); + acpi_battery_check_result(battery, result); + battery->flags.update[fid] = result; + mutex_unlock(&battery->mutex); + return result; +} + +static int acpi_battery_read_info(struct seq_file *seq, void *offset) +{ + return acpi_battery_read(ACPI_BATTERY_INFO, seq); +} + +static int acpi_battery_read_state(struct seq_file *seq, void *offset) +{ + return acpi_battery_read(ACPI_BATTERY_STATE, seq); +} + +static int acpi_battery_read_alarm(struct seq_file *seq, void *offset) +{ + return acpi_battery_read(ACPI_BATTERY_ALARM, seq); +} + +static int acpi_battery_info_open_fs(struct inode *inode, struct file *file) +{ + return single_open(file, acpi_battery_read_info, PDE(inode)->data); +} + +static int acpi_battery_state_open_fs(struct inode *inode, struct file *file) +{ + return single_open(file, acpi_battery_read_state, PDE(inode)->data); } static int acpi_battery_alarm_open_fs(struct inode *inode, struct file *file) @@ -571,35 +762,51 @@ static int acpi_battery_alarm_open_fs(st return single_open(file, acpi_battery_read_alarm, PDE(inode)->data); } -static const struct file_operations acpi_battery_info_ops = { +static struct battery_file { + struct file_operations ops; + mode_t mode; + char *name; +} acpi_battery_file[] = { + { + .name = "info", + .mode = S_IRUGO, + .ops = { .open = acpi_battery_info_open_fs, .read = seq_read, .llseek = seq_lseek, .release = single_release, .owner = THIS_MODULE, -}; - -static const struct file_operations acpi_battery_state_ops = { + }, + }, + { + .name = "state", + .mode = S_IRUGO, + .ops = { .open = acpi_battery_state_open_fs, .read = seq_read, .llseek = seq_lseek, .release = single_release, .owner = THIS_MODULE, -}; - -static const struct file_operations acpi_battery_alarm_ops = { + }, + }, + { + .name = "alarm", + .mode = S_IFREG | S_IRUGO | S_IWUSR, + .ops = { .open = acpi_battery_alarm_open_fs, .read = seq_read, .write = acpi_battery_write_alarm, .llseek = seq_lseek, .release = single_release, .owner = THIS_MODULE, + }, + }, }; static int acpi_battery_add_fs(struct acpi_device *device) { struct proc_dir_entry *entry = NULL; - + int i; if (!acpi_device_dir(device)) { acpi_device_dir(device) = proc_mkdir(acpi_device_bid(device), @@ -609,38 +816,16 @@ static int acpi_battery_add_fs(struct ac acpi_device_dir(device)->owner = THIS_MODULE; } - /* 'info' [R] */ - entry = create_proc_entry(ACPI_BATTERY_FILE_INFO, - S_IRUGO, acpi_device_dir(device)); - if (!entry) - return -ENODEV; - else { - entry->proc_fops = &acpi_battery_info_ops; - entry->data = acpi_driver_data(device); - entry->owner = THIS_MODULE; - } - - /* 'status' [R] */ - entry = create_proc_entry(ACPI_BATTERY_FILE_STATUS, - S_IRUGO, acpi_device_dir(device)); - if (!entry) - return -ENODEV; - else { - entry->proc_fops = &acpi_battery_state_ops; - entry->data = acpi_driver_data(device); - entry->owner = THIS_MODULE; - } - - /* 'alarm' [R/W] */ - entry = create_proc_entry(ACPI_BATTERY_FILE_ALARM, - S_IFREG | S_IRUGO | S_IWUSR, - acpi_device_dir(device)); - if (!entry) - return -ENODEV; - else { - entry->proc_fops = &acpi_battery_alarm_ops; - entry->data = 
acpi_driver_data(device); - entry->owner = THIS_MODULE; + for (i = 0; i < ACPI_BATTERY_NUMFILES; ++i) { + entry = create_proc_entry(acpi_battery_file[i].name, + acpi_battery_file[i].mode, acpi_device_dir(device)); + if (!entry) + return -ENODEV; + else { + entry->proc_fops = &acpi_battery_file[i].ops; + entry->data = acpi_driver_data(device); + entry->owner = THIS_MODULE; + } } return 0; @@ -648,15 +833,12 @@ static int acpi_battery_add_fs(struct ac static int acpi_battery_remove_fs(struct acpi_device *device) { - + int i; if (acpi_device_dir(device)) { - remove_proc_entry(ACPI_BATTERY_FILE_ALARM, + for (i = 0; i < ACPI_BATTERY_NUMFILES; ++i) { + remove_proc_entry(acpi_battery_file[i].name, acpi_device_dir(device)); - remove_proc_entry(ACPI_BATTERY_FILE_STATUS, - acpi_device_dir(device)); - remove_proc_entry(ACPI_BATTERY_FILE_INFO, - acpi_device_dir(device)); - + } remove_proc_entry(acpi_device_bid(device), acpi_battery_dir); acpi_device_dir(device) = NULL; } @@ -673,7 +855,6 @@ static void acpi_battery_notify(acpi_han struct acpi_battery *battery = data; struct acpi_device *device = NULL; - if (!battery) return; @@ -684,8 +865,10 @@ static void acpi_battery_notify(acpi_han case ACPI_BATTERY_NOTIFY_INFO: case ACPI_NOTIFY_BUS_CHECK: case ACPI_NOTIFY_DEVICE_CHECK: - acpi_battery_check(battery); - acpi_bus_generate_event(device, event, battery->flags.present); + device = battery->device; + acpi_battery_notify_update(battery); + acpi_bus_generate_event(device, event, + acpi_battery_present(battery)); break; default: ACPI_DEBUG_PRINT((ACPI_DB_INFO, @@ -702,7 +885,6 @@ static int acpi_battery_add(struct acpi_ acpi_status status = 0; struct acpi_battery *battery = NULL; - if (!device) return -EINVAL; @@ -710,15 +892,21 @@ static int acpi_battery_add(struct acpi_ if (!battery) return -ENOMEM; + mutex_init(&battery->mutex); + + mutex_lock(&battery->mutex); + battery->device = device; strcpy(acpi_device_name(device), ACPI_BATTERY_DEVICE_NAME); strcpy(acpi_device_class(device), ACPI_BATTERY_CLASS); acpi_driver_data(device) = battery; - result = acpi_battery_check(battery); + result = acpi_battery_get_status(battery); if (result) goto end; + battery->flags.init_update = 1; + result = acpi_battery_add_fs(device); if (result) goto end; @@ -727,6 +915,7 @@ static int acpi_battery_add(struct acpi_ ACPI_ALL_NOTIFY, acpi_battery_notify, battery); if (ACPI_FAILURE(status)) { + ACPI_EXCEPTION((AE_INFO, status, "Installing notify handler")); result = -ENODEV; goto end; } @@ -736,11 +925,14 @@ static int acpi_battery_add(struct acpi_ device->status.battery_present ? 
"present" : "absent"); end: + if (result) { acpi_battery_remove_fs(device); kfree(battery); } + mutex_unlock(&battery->mutex); + return result; } @@ -749,18 +941,27 @@ static int acpi_battery_remove(struct ac acpi_status status = 0; struct acpi_battery *battery = NULL; - if (!device || !acpi_driver_data(device)) return -EINVAL; battery = acpi_driver_data(device); + mutex_lock(&battery->mutex); + status = acpi_remove_notify_handler(device->handle, ACPI_ALL_NOTIFY, acpi_battery_notify); acpi_battery_remove_fs(device); + kfree(battery->bif_data.pointer); + + kfree(battery->bst_data.pointer); + + mutex_unlock(&battery->mutex); + + mutex_destroy(&battery->mutex); + kfree(battery); return 0; @@ -775,7 +976,10 @@ static int acpi_battery_resume(struct ac return -EINVAL; battery = device->driver_data; - return acpi_battery_check(battery); + + battery->flags.init_update = 1; + + return 0; } static int __init acpi_battery_init(void) @@ -800,7 +1004,6 @@ static int __init acpi_battery_init(void static void __exit acpi_battery_exit(void) { - acpi_bus_unregister_driver(&acpi_battery_driver); acpi_unlock_battery_dir(acpi_battery_dir); diff --git a/drivers/acpi/bay.c b/drivers/acpi/bay.c index fb3f31b..56a5b3f 100644 --- a/drivers/acpi/bay.c +++ b/drivers/acpi/bay.c @@ -288,6 +288,11 @@ static int bay_add(acpi_handle handle, i new_bay->pdev = pdev; platform_set_drvdata(pdev, new_bay); + /* + * we want the bay driver to be able to send uevents + */ + pdev->dev.uevent_suppress = 0; + if (acpi_bay_add_fs(new_bay)) { platform_device_unregister(new_bay->pdev); goto bay_add_err; @@ -328,18 +333,12 @@ static void bay_notify(acpi_handle handl { struct bay *bay_dev = (struct bay *)data; struct device *dev = &bay_dev->pdev->dev; + char event_string[12]; + char *envp[] = { event_string, NULL }; bay_dprintk(handle, "Bay event"); - - switch(event) { - case ACPI_NOTIFY_BUS_CHECK: - case ACPI_NOTIFY_DEVICE_CHECK: - case ACPI_NOTIFY_EJECT_REQUEST: - kobject_uevent(&dev->kobj, KOBJ_CHANGE); - break; - default: - printk(KERN_ERR PREFIX "Bay: unknown event %d\n", event); - } + sprintf(event_string, "BAY_EVENT=%d\n", event); + kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp); } static acpi_status diff --git a/drivers/acpi/dock.c b/drivers/acpi/dock.c index 4546bf8..dc3df93 100644 --- a/drivers/acpi/dock.c +++ b/drivers/acpi/dock.c @@ -40,8 +40,15 @@ MODULE_AUTHOR("Kristen Carlson Accardi") MODULE_DESCRIPTION(ACPI_DOCK_DRIVER_DESCRIPTION); MODULE_LICENSE("GPL"); +static int immediate_undock = 1; +module_param(immediate_undock, bool, 0644); +MODULE_PARM_DESC(immediate_undock, "1 (default) will cause the driver to " + "undock immediately when the undock button is pressed, 0 will cause" + " the driver to wait for userspace to write the undock sysfs file " + " before undocking"); + static struct atomic_notifier_head dock_notifier_list; -static struct platform_device dock_device; +static struct platform_device *dock_device; static char dock_device_name[] = "dock"; struct dock_station { @@ -63,6 +70,7 @@ struct dock_dependent_device { }; #define DOCK_DOCKING 0x00000001 +#define DOCK_UNDOCKING 0x00000002 #define DOCK_EVENT 3 #define UNDOCK_EVENT 2 @@ -327,12 +335,20 @@ static void hotplug_dock_devices(struct static void dock_event(struct dock_station *ds, u32 event, int num) { - struct device *dev = &dock_device.dev; + struct device *dev = &dock_device->dev; + char event_string[7]; + char *envp[] = { event_string, NULL }; + + if (num == UNDOCK_EVENT) + sprintf(event_string, "UNDOCK"); + else + sprintf(event_string, "DOCK"); + /* * 
Indicate that the status of the dock station has * changed. */ - kobject_uevent(&dev->kobj, KOBJ_CHANGE); + kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp); } /** @@ -420,6 +436,16 @@ static inline void complete_dock(struct ds->last_dock_time = jiffies; } +static inline void begin_undock(struct dock_station *ds) +{ + ds->flags |= DOCK_UNDOCKING; +} + +static inline void complete_undock(struct dock_station *ds) +{ + ds->flags &= ~(DOCK_UNDOCKING); +} + /** * dock_in_progress - see if we are in the middle of handling a dock event * @ds: the dock station @@ -550,7 +576,7 @@ static int handle_eject_request(struct d printk(KERN_ERR PREFIX "Unable to undock!\n"); return -EBUSY; } - + complete_undock(ds); return 0; } @@ -594,7 +620,11 @@ static void dock_notify(acpi_handle hand * to the driver who wish to hotplug. */ case ACPI_NOTIFY_EJECT_REQUEST: - handle_eject_request(ds, event); + begin_undock(ds); + if (immediate_undock) + handle_eject_request(ds, event); + else + dock_event(ds, event, UNDOCK_EVENT); break; default: printk(KERN_ERR PREFIX "Unknown dock event %d\n", event); @@ -653,6 +683,17 @@ static ssize_t show_docked(struct device DEVICE_ATTR(docked, S_IRUGO, show_docked, NULL); /* + * show_flags - read method for flags file in sysfs + */ +static ssize_t show_flags(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%d\n", dock_station->flags); + +} +DEVICE_ATTR(flags, S_IRUGO, show_flags, NULL); + +/* * write_undock - write method for "undock" file in sysfs */ static ssize_t write_undock(struct device *dev, struct device_attribute *attr, @@ -675,16 +716,15 @@ static ssize_t show_dock_uid(struct devi struct device_attribute *attr, char *buf) { unsigned long lbuf; - acpi_status status = acpi_evaluate_integer(dock_station->handle, "_UID", NULL, &lbuf); - if(ACPI_FAILURE(status)) { + acpi_status status = acpi_evaluate_integer(dock_station->handle, + "_UID", NULL, &lbuf); + if (ACPI_FAILURE(status)) return 0; - } + return snprintf(buf, PAGE_SIZE, "%lx\n", lbuf); } DEVICE_ATTR(uid, S_IRUGO, show_dock_uid, NULL); - - /** * dock_add - add a new dock station * @handle: the dock station handle @@ -711,33 +751,53 @@ static int dock_add(acpi_handle handle) ATOMIC_INIT_NOTIFIER_HEAD(&dock_notifier_list); /* initialize platform device stuff */ - dock_device.name = dock_device_name; - ret = platform_device_register(&dock_device); + dock_device = + platform_device_register_simple(dock_device_name, 0, NULL, 0); + if (IS_ERR(dock_device)) { + kfree(dock_station); + dock_station = NULL; + return PTR_ERR(dock_device); + } + + /* we want the dock device to send uevents */ + dock_device->dev.uevent_suppress = 0; + + ret = device_create_file(&dock_device->dev, &dev_attr_docked); if (ret) { - printk(KERN_ERR PREFIX "Error %d registering dock device\n", ret); + printk("Error %d adding sysfs file\n", ret); + platform_device_unregister(dock_device); kfree(dock_station); + dock_station = NULL; return ret; } - ret = device_create_file(&dock_device.dev, &dev_attr_docked); + ret = device_create_file(&dock_device->dev, &dev_attr_undock); if (ret) { printk("Error %d adding sysfs file\n", ret); - platform_device_unregister(&dock_device); + device_remove_file(&dock_device->dev, &dev_attr_docked); + platform_device_unregister(dock_device); kfree(dock_station); + dock_station = NULL; return ret; } - ret = device_create_file(&dock_device.dev, &dev_attr_undock); + ret = device_create_file(&dock_device->dev, &dev_attr_uid); if (ret) { printk("Error %d adding sysfs file\n", 
ret); - device_remove_file(&dock_device.dev, &dev_attr_docked); - platform_device_unregister(&dock_device); + device_remove_file(&dock_device->dev, &dev_attr_docked); + device_remove_file(&dock_device->dev, &dev_attr_undock); + platform_device_unregister(dock_device); kfree(dock_station); + dock_station = NULL; return ret; } - ret = device_create_file(&dock_device.dev, &dev_attr_uid); + ret = device_create_file(&dock_device->dev, &dev_attr_flags); if (ret) { printk("Error %d adding sysfs file\n", ret); - platform_device_unregister(&dock_device); + device_remove_file(&dock_device->dev, &dev_attr_docked); + device_remove_file(&dock_device->dev, &dev_attr_undock); + device_remove_file(&dock_device->dev, &dev_attr_uid); + platform_device_unregister(dock_device); kfree(dock_station); + dock_station = NULL; return ret; } @@ -750,6 +810,7 @@ static int dock_add(acpi_handle handle) dd = alloc_dock_dependent_device(handle); if (!dd) { kfree(dock_station); + dock_station = NULL; ret = -ENOMEM; goto dock_add_err_unregister; } @@ -773,10 +834,13 @@ static int dock_add(acpi_handle handle) dock_add_err: kfree(dd); dock_add_err_unregister: - device_remove_file(&dock_device.dev, &dev_attr_docked); - device_remove_file(&dock_device.dev, &dev_attr_undock); - platform_device_unregister(&dock_device); + device_remove_file(&dock_device->dev, &dev_attr_docked); + device_remove_file(&dock_device->dev, &dev_attr_undock); + device_remove_file(&dock_device->dev, &dev_attr_uid); + device_remove_file(&dock_device->dev, &dev_attr_flags); + platform_device_unregister(dock_device); kfree(dock_station); + dock_station = NULL; return ret; } @@ -804,12 +868,15 @@ static int dock_remove(void) printk(KERN_ERR "Error removing notify handler\n"); /* cleanup sysfs */ - device_remove_file(&dock_device.dev, &dev_attr_docked); - device_remove_file(&dock_device.dev, &dev_attr_undock); - platform_device_unregister(&dock_device); + device_remove_file(&dock_device->dev, &dev_attr_docked); + device_remove_file(&dock_device->dev, &dev_attr_undock); + device_remove_file(&dock_device->dev, &dev_attr_uid); + device_remove_file(&dock_device->dev, &dev_attr_flags); + platform_device_unregister(dock_device); /* free dock station memory */ kfree(dock_station); + dock_station = NULL; return 0; } diff --git a/drivers/acpi/events/evgpeblk.c b/drivers/acpi/events/evgpeblk.c index 902c287..361ebe6 100644 --- a/drivers/acpi/events/evgpeblk.c +++ b/drivers/acpi/events/evgpeblk.c @@ -586,6 +586,10 @@ acpi_ev_delete_gpe_xrupt(struct acpi_gpe flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock); if (gpe_xrupt->previous) { gpe_xrupt->previous->next = gpe_xrupt->next; + } else { + /* No previous, update list head */ + + acpi_gbl_gpe_xrupt_list_head = gpe_xrupt->next; } if (gpe_xrupt->next) { diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index 58ceb18..a84e9e7 100644 --- a/drivers/acpi/osl.c +++ b/drivers/acpi/osl.c @@ -77,7 +77,6 @@ static struct workqueue_struct *kacpi_no #define OSI_STRING_LENGTH_MAX 64 /* arbitrary */ static char osi_additional_string[OSI_STRING_LENGTH_MAX]; -#define OSI_LINUX_ENABLED #ifdef OSI_LINUX_ENABLED int osi_linux = 1; /* enable _OSI(Linux) by default */ #else @@ -1056,6 +1055,17 @@ unsigned int max_cstate = ACPI_PROCESSOR EXPORT_SYMBOL(max_cstate); +void (*acpi_do_set_cstate_limit)(void); +EXPORT_SYMBOL(acpi_do_set_cstate_limit); + +void acpi_set_cstate_limit(unsigned int new_limit) +{ + max_cstate = new_limit; + if (acpi_do_set_cstate_limit) + acpi_do_set_cstate_limit(); +} +EXPORT_SYMBOL(acpi_set_cstate_limit); + /* * 
Acquire a spinlock. * diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c index f7de02a..7dedf3b 100644 --- a/drivers/acpi/processor_core.c +++ b/drivers/acpi/processor_core.c @@ -44,6 +44,7 @@ #include #include #include #include +#include #include #include @@ -66,6 +67,7 @@ #define ACPI_PROCESSOR_FILE_THROTTLING " #define ACPI_PROCESSOR_FILE_LIMIT "limit" #define ACPI_PROCESSOR_NOTIFY_PERFORMANCE 0x80 #define ACPI_PROCESSOR_NOTIFY_POWER 0x81 +#define ACPI_PROCESSOR_NOTIFY_THROTTLING 0x82 #define ACPI_PROCESSOR_LIMIT_USER 0 #define ACPI_PROCESSOR_LIMIT_THERMAL 1 @@ -84,6 +86,8 @@ static int acpi_processor_info_open_fs(s static void acpi_processor_notify(acpi_handle handle, u32 event, void *data); static acpi_status acpi_processor_hotadd_init(acpi_handle handle, int *p_cpu); static int acpi_processor_handle_eject(struct acpi_processor *pr); +extern int acpi_processor_tstate_has_changed(struct acpi_processor *pr); + static struct acpi_driver acpi_processor_driver = { .name = "processor", @@ -699,6 +703,9 @@ static void acpi_processor_notify(acpi_h acpi_processor_cst_has_changed(pr); acpi_bus_generate_event(device, event, 0); break; + case ACPI_PROCESSOR_NOTIFY_THROTTLING: + acpi_processor_tstate_has_changed(pr); + acpi_bus_generate_event(device, event, 0); default: ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Unsupported event [0x%x]\n", event)); @@ -1022,11 +1029,15 @@ #endif acpi_processor_ppc_init(); + cpuidle_register_driver(&acpi_idle_driver); + acpi_do_set_cstate_limit = acpi_max_cstate_changed; return 0; } static void __exit acpi_processor_exit(void) { + acpi_do_set_cstate_limit = NULL; + cpuidle_unregister_driver(&acpi_idle_driver); acpi_processor_ppc_exit(); diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index ee5759b..2c6a3cb 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -40,6 +40,7 @@ #include #include /* need_resched() */ #include #include +#include /* * Include the apic definitions for x86 to have the APIC timer related defines @@ -62,25 +63,34 @@ #define ACPI_PROCESSOR_CLASS #define _COMPONENT ACPI_PROCESSOR_COMPONENT ACPI_MODULE_NAME("processor_idle"); #define ACPI_PROCESSOR_FILE_POWER "power" -#define US_TO_PM_TIMER_TICKS(t) ((t * (PM_TIMER_FREQUENCY/1000)) / 1000) -#define C2_OVERHEAD 4 /* 1us (3.579 ticks per us) */ -#define C3_OVERHEAD 4 /* 1us (3.579 ticks per us) */ -static void (*pm_idle_save) (void) __read_mostly; -module_param(max_cstate, uint, 0644); +#define PM_TIMER_TICKS_TO_US(p) (((p) * 1000)/(PM_TIMER_FREQUENCY/1000)) +#define C2_OVERHEAD 1 /* 1us */ +#define C3_OVERHEAD 1 /* 1us */ + +void acpi_max_cstate_changed(void) +{ + /* Driver will reset devices' max cstate limit */ + cpuidle_force_redetect_devices(&acpi_idle_driver); +} + +static int change_max_cstate(const char *val, struct kernel_param *kp) +{ + int max; + + max = simple_strtol(val, NULL, 0); + if (!max) + return -EINVAL; + max_cstate = max; + if (acpi_do_set_cstate_limit) + acpi_do_set_cstate_limit(); + return 0; +} + +module_param_call(max_cstate, change_max_cstate, param_get_uint, &max_cstate, 0644); static unsigned int nocst __read_mostly; module_param(nocst, uint, 0000); -/* - * bm_history -- bit-mask with a bit per jiffy of bus-master activity - * 1000 HZ: 0xFFFFFFFF: 32 jiffies = 32ms - * 800 HZ: 0xFFFFFFFF: 32 jiffies = 40ms - * 100 HZ: 0x0000000F: 4 jiffies = 40ms - * reduce history for more aggressive entry into C3 - */ -static unsigned int bm_history __read_mostly = - (HZ >= 800 ? 
0xFFFFFFFF : ((1U << (HZ / 25)) - 1)); -module_param(bm_history, uint, 0644); /* -------------------------------------------------------------------------- Power Management -------------------------------------------------------------------------- */ @@ -166,88 +176,6 @@ static struct dmi_system_id __cpuinitdat {}, }; -static inline u32 ticks_elapsed(u32 t1, u32 t2) -{ - if (t2 >= t1) - return (t2 - t1); - else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER)) - return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF); - else - return ((0xFFFFFFFF - t1) + t2); -} - -static void -acpi_processor_power_activate(struct acpi_processor *pr, - struct acpi_processor_cx *new) -{ - struct acpi_processor_cx *old; - - if (!pr || !new) - return; - - old = pr->power.state; - - if (old) - old->promotion.count = 0; - new->demotion.count = 0; - - /* Cleanup from old state. */ - if (old) { - switch (old->type) { - case ACPI_STATE_C3: - /* Disable bus master reload */ - if (new->type != ACPI_STATE_C3 && pr->flags.bm_check) - acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0); - break; - } - } - - /* Prepare to use new state. */ - switch (new->type) { - case ACPI_STATE_C3: - /* Enable bus master reload */ - if (old->type != ACPI_STATE_C3 && pr->flags.bm_check) - acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 1); - break; - } - - pr->power.state = new; - - return; -} - -static void acpi_safe_halt(void) -{ - current_thread_info()->status &= ~TS_POLLING; - /* - * TS_POLLING-cleared state must be visible before we - * test NEED_RESCHED: - */ - smp_mb(); - if (!need_resched()) - safe_halt(); - current_thread_info()->status |= TS_POLLING; -} - -static atomic_t c3_cpu_count; - -/* Common C-state entry for C2, C3, .. */ -static void acpi_cstate_enter(struct acpi_processor_cx *cstate) -{ - if (cstate->space_id == ACPI_CSTATE_FFH) { - /* Call into architectural FFH based C-state */ - acpi_processor_ffh_cstate_enter(cstate); - } else { - int unused; - /* IO port based C-state */ - inb(cstate->address); - /* Dummy wait op - must do something useless after P_LVL2 read - because chipsets cannot guarantee that STPCLK# signal - gets asserted in time to freeze execution properly. */ - unused = inl(acpi_gbl_FADT.xpm_timer_block.address); - } -} - #ifdef ARCH_APICTIMER_STOPS_ON_C3 /* @@ -324,375 +252,6 @@ static void acpi_state_timer_broadcast(s #endif -static void acpi_processor_idle(void) -{ - struct acpi_processor *pr = NULL; - struct acpi_processor_cx *cx = NULL; - struct acpi_processor_cx *next_state = NULL; - int sleep_ticks = 0; - u32 t1, t2 = 0; - - pr = processors[smp_processor_id()]; - if (!pr) - return; - - /* - * Interrupts must be disabled during bus mastering calculations and - * for C2/C3 transitions. - */ - local_irq_disable(); - - /* - * Check whether we truly need to go idle, or should - * reschedule: - */ - if (unlikely(need_resched())) { - local_irq_enable(); - return; - } - - cx = pr->power.state; - if (!cx) { - if (pm_idle_save) - pm_idle_save(); - else - acpi_safe_halt(); - return; - } - - /* - * Check BM Activity - * ----------------- - * Check for bus mastering activity (if required), record, and check - * for demotion. 
- */ - if (pr->flags.bm_check) { - u32 bm_status = 0; - unsigned long diff = jiffies - pr->power.bm_check_timestamp; - - if (diff > 31) - diff = 31; - - pr->power.bm_activity <<= diff; - - acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS, &bm_status); - if (bm_status) { - pr->power.bm_activity |= 0x1; - acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS, 1); - } - /* - * PIIX4 Erratum #18: Note that BM_STS doesn't always reflect - * the true state of bus mastering activity; forcing us to - * manually check the BMIDEA bit of each IDE channel. - */ - else if (errata.piix4.bmisx) { - if ((inb_p(errata.piix4.bmisx + 0x02) & 0x01) - || (inb_p(errata.piix4.bmisx + 0x0A) & 0x01)) - pr->power.bm_activity |= 0x1; - } - - pr->power.bm_check_timestamp = jiffies; - - /* - * If bus mastering is or was active this jiffy, demote - * to avoid a faulty transition. Note that the processor - * won't enter a low-power state during this call (to this - * function) but should upon the next. - * - * TBD: A better policy might be to fallback to the demotion - * state (use it for this quantum only) istead of - * demoting -- and rely on duration as our sole demotion - * qualification. This may, however, introduce DMA - * issues (e.g. floppy DMA transfer overrun/underrun). - */ - if ((pr->power.bm_activity & 0x1) && - cx->demotion.threshold.bm) { - local_irq_enable(); - next_state = cx->demotion.state; - goto end; - } - } - -#ifdef CONFIG_HOTPLUG_CPU - /* - * Check for P_LVL2_UP flag before entering C2 and above on - * an SMP system. We do it here instead of doing it at _CST/P_LVL - * detection phase, to work cleanly with logical CPU hotplug. - */ - if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && - !pr->flags.has_cst && !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) - cx = &pr->power.states[ACPI_STATE_C1]; -#endif - - /* - * Sleep: - * ------ - * Invoke the current Cx state to put the processor to sleep. - */ - if (cx->type == ACPI_STATE_C2 || cx->type == ACPI_STATE_C3) { - current_thread_info()->status &= ~TS_POLLING; - /* - * TS_POLLING-cleared state must be visible before we - * test NEED_RESCHED: - */ - smp_mb(); - if (need_resched()) { - current_thread_info()->status |= TS_POLLING; - local_irq_enable(); - return; - } - } - - switch (cx->type) { - - case ACPI_STATE_C1: - /* - * Invoke C1. - * Use the appropriate idle routine, the one that would - * be used without acpi C-states. - */ - if (pm_idle_save) - pm_idle_save(); - else - acpi_safe_halt(); - - /* - * TBD: Can't get time duration while in C1, as resumes - * go to an ISR rather than here. Need to instrument - * base interrupt handler. 
- */ - sleep_ticks = 0xFFFFFFFF; - break; - - case ACPI_STATE_C2: - /* Get start time (ticks) */ - t1 = inl(acpi_gbl_FADT.xpm_timer_block.address); - /* Invoke C2 */ - acpi_state_timer_broadcast(pr, cx, 1); - acpi_cstate_enter(cx); - /* Get end time (ticks) */ - t2 = inl(acpi_gbl_FADT.xpm_timer_block.address); - -#ifdef CONFIG_GENERIC_TIME - /* TSC halts in C2, so notify users */ - mark_tsc_unstable("possible TSC halt in C2"); -#endif - /* Re-enable interrupts */ - local_irq_enable(); - current_thread_info()->status |= TS_POLLING; - /* Compute time (ticks) that we were actually asleep */ - sleep_ticks = - ticks_elapsed(t1, t2) - cx->latency_ticks - C2_OVERHEAD; - acpi_state_timer_broadcast(pr, cx, 0); - break; - - case ACPI_STATE_C3: - - if (pr->flags.bm_check) { - if (atomic_inc_return(&c3_cpu_count) == - num_online_cpus()) { - /* - * All CPUs are trying to go to C3 - * Disable bus master arbitration - */ - acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1); - } - } else { - /* SMP with no shared cache... Invalidate cache */ - ACPI_FLUSH_CPU_CACHE(); - } - - /* Get start time (ticks) */ - t1 = inl(acpi_gbl_FADT.xpm_timer_block.address); - /* Invoke C3 */ - acpi_state_timer_broadcast(pr, cx, 1); - acpi_cstate_enter(cx); - /* Get end time (ticks) */ - t2 = inl(acpi_gbl_FADT.xpm_timer_block.address); - if (pr->flags.bm_check) { - /* Enable bus master arbitration */ - atomic_dec(&c3_cpu_count); - acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0); - } - -#ifdef CONFIG_GENERIC_TIME - /* TSC halts in C3, so notify users */ - mark_tsc_unstable("TSC halts in C3"); -#endif - /* Re-enable interrupts */ - local_irq_enable(); - current_thread_info()->status |= TS_POLLING; - /* Compute time (ticks) that we were actually asleep */ - sleep_ticks = - ticks_elapsed(t1, t2) - cx->latency_ticks - C3_OVERHEAD; - acpi_state_timer_broadcast(pr, cx, 0); - break; - - default: - local_irq_enable(); - return; - } - cx->usage++; - if ((cx->type != ACPI_STATE_C1) && (sleep_ticks > 0)) - cx->time += sleep_ticks; - - next_state = pr->power.state; - -#ifdef CONFIG_HOTPLUG_CPU - /* Don't do promotion/demotion */ - if ((cx->type == ACPI_STATE_C1) && (num_online_cpus() > 1) && - !pr->flags.has_cst && !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) { - next_state = cx; - goto end; - } -#endif - - /* - * Promotion? - * ---------- - * Track the number of longs (time asleep is greater than threshold) - * and promote when the count threshold is reached. Note that bus - * mastering activity may prevent promotions. - * Do not promote above max_cstate. - */ - if (cx->promotion.state && - ((cx->promotion.state - pr->power.states) <= max_cstate)) { - if (sleep_ticks > cx->promotion.threshold.ticks && - cx->promotion.state->latency <= system_latency_constraint()) { - cx->promotion.count++; - cx->demotion.count = 0; - if (cx->promotion.count >= - cx->promotion.threshold.count) { - if (pr->flags.bm_check) { - if (! - (pr->power.bm_activity & cx-> - promotion.threshold.bm)) { - next_state = - cx->promotion.state; - goto end; - } - } else { - next_state = cx->promotion.state; - goto end; - } - } - } - } - - /* - * Demotion? - * --------- - * Track the number of shorts (time asleep is less than time threshold) - * and demote when the usage threshold is reached. 
- */ - if (cx->demotion.state) { - if (sleep_ticks < cx->demotion.threshold.ticks) { - cx->demotion.count++; - cx->promotion.count = 0; - if (cx->demotion.count >= cx->demotion.threshold.count) { - next_state = cx->demotion.state; - goto end; - } - } - } - - end: - /* - * Demote if current state exceeds max_cstate - * or if the latency of the current state is unacceptable - */ - if ((pr->power.state - pr->power.states) > max_cstate || - pr->power.state->latency > system_latency_constraint()) { - if (cx->demotion.state) - next_state = cx->demotion.state; - } - - /* - * New Cx State? - * ------------- - * If we're going to start using a new Cx state we must clean up - * from the previous and prepare to use the new. - */ - if (next_state != pr->power.state) - acpi_processor_power_activate(pr, next_state); -} - -static int acpi_processor_set_power_policy(struct acpi_processor *pr) -{ - unsigned int i; - unsigned int state_is_set = 0; - struct acpi_processor_cx *lower = NULL; - struct acpi_processor_cx *higher = NULL; - struct acpi_processor_cx *cx; - - - if (!pr) - return -EINVAL; - - /* - * This function sets the default Cx state policy (OS idle handler). - * Our scheme is to promote quickly to C2 but more conservatively - * to C3. We're favoring C2 for its characteristics of low latency - * (quick response), good power savings, and ability to allow bus - * mastering activity. Note that the Cx state policy is completely - * customizable and can be altered dynamically. - */ - - /* startup state */ - for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) { - cx = &pr->power.states[i]; - if (!cx->valid) - continue; - - if (!state_is_set) - pr->power.state = cx; - state_is_set++; - break; - } - - if (!state_is_set) - return -ENODEV; - - /* demotion */ - for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) { - cx = &pr->power.states[i]; - if (!cx->valid) - continue; - - if (lower) { - cx->demotion.state = lower; - cx->demotion.threshold.ticks = cx->latency_ticks; - cx->demotion.threshold.count = 1; - if (cx->type == ACPI_STATE_C3) - cx->demotion.threshold.bm = bm_history; - } - - lower = cx; - } - - /* promotion */ - for (i = (ACPI_PROCESSOR_MAX_POWER - 1); i > 0; i--) { - cx = &pr->power.states[i]; - if (!cx->valid) - continue; - - if (higher) { - cx->promotion.state = higher; - cx->promotion.threshold.ticks = cx->latency_ticks; - if (cx->type >= ACPI_STATE_C2) - cx->promotion.threshold.count = 4; - else - cx->promotion.threshold.count = 10; - if (higher->type == ACPI_STATE_C3) - cx->promotion.threshold.bm = bm_history; - } - - higher = cx; - } - - return 0; -} - static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr) { @@ -910,7 +469,7 @@ static void acpi_processor_power_verify_ * Normalize the C2 latency to expidite policy */ cx->valid = 1; - cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency); + cx->latency_ticks = cx->latency; return; } @@ -984,7 +543,7 @@ static void acpi_processor_power_verify_ * use this in our C3 policy */ cx->valid = 1; - cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency); + cx->latency_ticks = cx->latency; return; } @@ -1050,18 +609,6 @@ static int acpi_processor_get_power_info pr->power.count = acpi_processor_power_verify(pr); /* - * Set Default Policy - * ------------------ - * Now that we know which states are supported, set the default - * policy. Note that this policy can be changed dynamically - * (e.g. encourage deeper sleeps to conserve battery life when - * not on AC). 
- */ - result = acpi_processor_set_power_policy(pr); - if (result) - return result; - - /* * if one state of type C2 or C3 is available, mark this * CPU as being "idle manageable" */ @@ -1078,9 +625,6 @@ static int acpi_processor_get_power_info int acpi_processor_cst_has_changed(struct acpi_processor *pr) { - int result = 0; - - if (!pr) return -EINVAL; @@ -1091,16 +635,9 @@ int acpi_processor_cst_has_changed(struc if (!pr->flags.power_setup_done) return -ENODEV; - /* Fall back to the default idle loop */ - pm_idle = pm_idle_save; - synchronize_sched(); /* Relies on interrupts forcing exit from idle. */ - - pr->flags.power = 0; - result = acpi_processor_get_power_info(pr); - if ((pr->flags.power == 1) && (pr->flags.power_setup_done)) - pm_idle = acpi_processor_idle; - - return result; + acpi_processor_get_power_info(pr); + return cpuidle_force_redetect(per_cpu(cpuidle_devices, pr->id), + &acpi_idle_driver); } /* proc interface */ @@ -1186,30 +723,6 @@ static const struct file_operations acpi .release = single_release, }; -#ifdef CONFIG_SMP -static void smp_callback(void *v) -{ - /* we already woke the CPU up, nothing more to do */ -} - -/* - * This function gets called when a part of the kernel has a new latency - * requirement. This means we need to get all processors out of their C-state, - * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that - * wakes them all right up. - */ -static int acpi_processor_latency_notify(struct notifier_block *b, - unsigned long l, void *v) -{ - smp_call_function(smp_callback, NULL, 0, 1); - return NOTIFY_OK; -} - -static struct notifier_block acpi_processor_latency_notifier = { - .notifier_call = acpi_processor_latency_notify, -}; -#endif - int __cpuinit acpi_processor_power_init(struct acpi_processor *pr, struct acpi_device *device) { @@ -1226,9 +739,6 @@ int __cpuinit acpi_processor_power_init( "ACPI: processor limited to max C-state %d\n", max_cstate); first_run++; -#ifdef CONFIG_SMP - register_latency_notifier(&acpi_processor_latency_notifier); -#endif } if (!pr) @@ -1245,6 +755,7 @@ #endif acpi_processor_get_power_info(pr); + /* * Install the idle handler if processor power management is supported. * Note that we use previously set idle handler will be used on @@ -1257,11 +768,6 @@ #endif printk(" C%d[C%d]", i, pr->power.states[i].type); printk(")\n"); - - if (pr->id == 0) { - pm_idle_save = pm_idle; - pm_idle = acpi_processor_idle; - } } /* 'power' [R] */ @@ -1289,21 +795,332 @@ int acpi_processor_power_exit(struct acp if (acpi_device_dir(device)) remove_proc_entry(ACPI_PROCESSOR_FILE_POWER, acpi_device_dir(device)); + return 0; +} - /* Unregister the idle handler when processor #0 is removed. */ - if (pr->id == 0) { - pm_idle = pm_idle_save; +/** + * ticks_elapsed - a helper function that determines how many ticks (in US) + * have elapsed between two PM Timer timestamps + * @t1: the start time + * @t2: the end time + */ +static inline u32 ticks_elapsed(u32 t1, u32 t2) +{ + if (t2 >= t1) + return PM_TIMER_TICKS_TO_US(t2 - t1); + else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER)) + return PM_TIMER_TICKS_TO_US(((0x00FFFFFF - t1) + t2) & 0x00FFFFFF); + else + return PM_TIMER_TICKS_TO_US((0xFFFFFFFF - t1) + t2); +} - /* - * We are about to unload the current idle thread pm callback - * (pm_idle), Wait for all processors to update cached/local - * copies of pm_idle before proceeding. 
- */ - cpu_idle_wait(); -#ifdef CONFIG_SMP - unregister_latency_notifier(&acpi_processor_latency_notifier); +/** + * acpi_idle_update_bm_rld - updates the BM_RLD bit depending on target state + * @pr: the processor + * @target: the new target state + */ +static inline void acpi_idle_update_bm_rld(struct acpi_processor *pr, + struct acpi_processor_cx *target) +{ + if (pr->flags.bm_rld_set && target->type != ACPI_STATE_C3) { + acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0); + pr->flags.bm_rld_set = 0; + } + + if (!pr->flags.bm_rld_set && target->type == ACPI_STATE_C3) { + acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 1); + pr->flags.bm_rld_set = 1; + } +} + +/** + * acpi_idle_do_entry - a helper function that does C2 and C3 type entry + * @cx: cstate data + */ +static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx) +{ + if (cx->space_id == ACPI_CSTATE_FFH) { + /* Call into architectural FFH based C-state */ + acpi_processor_ffh_cstate_enter(cx); + } else { + int unused; + /* IO port based C-state */ + inb(cx->address); + /* Dummy wait op - must do something useless after P_LVL2 read + because chipsets cannot guarantee that STPCLK# signal + gets asserted in time to freeze execution properly. */ + unused = inl(acpi_gbl_FADT.xpm_timer_block.address); + } +} + +/** + * acpi_idle_enter_c1 - enters an ACPI C1 state-type + * @dev: the target CPU + * @state: the state data + * + * This is equivalent to the HALT instruction. + */ +static int acpi_idle_enter_c1(struct cpuidle_device *dev, + struct cpuidle_state *state) +{ + struct acpi_processor *pr; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + pr = processors[smp_processor_id()]; + + if (unlikely(!pr)) + return 0; + + if (pr->flags.bm_check) + acpi_idle_update_bm_rld(pr, cx); + + current_thread_info()->status &= ~TS_POLLING; + /* + * TS_POLLING-cleared state must be visible before we test + * NEED_RESCHED: + */ + smp_mb(); + if (!need_resched()) + safe_halt(); + current_thread_info()->status |= TS_POLLING; + + cx->usage++; + + return 0; +} + +/** + * acpi_idle_enter_c2 - enters an ACPI C2 state-type + * @dev: the target CPU + * @state: the state data + */ +static int acpi_idle_enter_c2(struct cpuidle_device *dev, + struct cpuidle_state *state) +{ + struct acpi_processor *pr; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + u32 t1, t2; + pr = processors[smp_processor_id()]; + + if (unlikely(!pr)) + return 0; + + if (pr->flags.bm_check) + acpi_idle_update_bm_rld(pr, cx); + + local_irq_disable(); + current_thread_info()->status &= ~TS_POLLING; + /* + * TS_POLLING-cleared state must be visible before we test + * NEED_RESCHED: + */ + smp_mb(); + + if (unlikely(need_resched())) { + current_thread_info()->status |= TS_POLLING; + local_irq_enable(); + return 0; + } + + t1 = inl(acpi_gbl_FADT.xpm_timer_block.address); + acpi_state_timer_broadcast(pr, cx, 1); + acpi_idle_do_entry(cx); + t2 = inl(acpi_gbl_FADT.xpm_timer_block.address); + +#ifdef CONFIG_GENERIC_TIME + /* TSC halts in C2, so notify users */ + mark_tsc_unstable("possible TSC halt in C2"); #endif + + local_irq_enable(); + current_thread_info()->status |= TS_POLLING; + + cx->usage++; + + acpi_state_timer_broadcast(pr, cx, 0); + return ticks_elapsed(t1, t2); +} + +static int c3_cpu_count; +static DEFINE_SPINLOCK(c3_lock); + +/** + * acpi_idle_enter_c3 - enters an ACPI C3 state-type + * @dev: the target CPU + * @state: the state data + * + * Similar to C2 entry, except special bus master handling is needed. 
+ */ +static int acpi_idle_enter_c3(struct cpuidle_device *dev, + struct cpuidle_state *state) +{ + struct acpi_processor *pr; + struct acpi_processor_cx *cx = cpuidle_get_statedata(state); + u32 t1, t2; + pr = processors[smp_processor_id()]; + + if (unlikely(!pr)) + return 0; + + if (pr->flags.bm_check) + acpi_idle_update_bm_rld(pr, cx); + + local_irq_disable(); + current_thread_info()->status &= ~TS_POLLING; + /* + * TS_POLLING-cleared state must be visible before we test + * NEED_RESCHED: + */ + smp_mb(); + + if (unlikely(need_resched())) { + current_thread_info()->status |= TS_POLLING; + local_irq_enable(); + return 0; } + /* disable bus master */ + if (pr->flags.bm_check) { + spin_lock(&c3_lock); + c3_cpu_count++; + if (c3_cpu_count == num_online_cpus()) { + /* + * All CPUs are trying to go to C3 + * Disable bus master arbitration + */ + acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1); + } + spin_unlock(&c3_lock); + } else { + /* SMP with no shared cache... Invalidate cache */ + ACPI_FLUSH_CPU_CACHE(); + } + + /* Get start time (ticks) */ + t1 = inl(acpi_gbl_FADT.xpm_timer_block.address); + acpi_state_timer_broadcast(pr, cx, 1); + acpi_idle_do_entry(cx); + t2 = inl(acpi_gbl_FADT.xpm_timer_block.address); + + if (pr->flags.bm_check) { + spin_lock(&c3_lock); + /* Enable bus master arbitration */ + if (c3_cpu_count == num_online_cpus()) + acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0); + c3_cpu_count--; + spin_unlock(&c3_lock); + } + +#ifdef CONFIG_GENERIC_TIME + /* TSC halts in C3, so notify users */ + mark_tsc_unstable("TSC halts in C3"); +#endif + + local_irq_enable(); + current_thread_info()->status |= TS_POLLING; + + cx->usage++; + + acpi_state_timer_broadcast(pr, cx, 0); + return ticks_elapsed(t1, t2); +} + +/** + * acpi_idle_bm_check - checks if bus master activity was detected + */ +static int acpi_idle_bm_check(void) +{ + u32 bm_status = 0; + + acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS, &bm_status); + if (bm_status) + acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS, 1); + /* + * PIIX4 Erratum #18: Note that BM_STS doesn't always reflect + * the true state of bus mastering activity; forcing us to + * manually check the BMIDEA bit of each IDE channel. 
+ */ + else if (errata.piix4.bmisx) { + if ((inb_p(errata.piix4.bmisx + 0x02) & 0x01) + || (inb_p(errata.piix4.bmisx + 0x0A) & 0x01)) + bm_status = 1; + } + return bm_status; +} + +/** + * acpi_idle_init - attaches the driver to a CPU + * @dev: the CPU + */ +static int acpi_idle_init(struct cpuidle_device *dev) +{ + int cpu = dev->cpu; + int i, count = 0; + struct acpi_processor_cx *cx; + struct cpuidle_state *state; + + struct acpi_processor *pr = processors[cpu]; + + if (!pr->flags.power_setup_done) + return -EINVAL; + + if (pr->flags.power == 0) { + return -EINVAL; + } + + for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) { + cx = &pr->power.states[i]; + state = &dev->states[count]; + + if (!cx->valid) + continue; + +#ifdef CONFIG_HOTPLUG_CPU + if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) && + !pr->flags.has_cst && + !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) + continue; +#endif + cpuidle_set_statedata(state, cx); + + state->exit_latency = cx->latency; + state->target_residency = cx->latency * 6; + state->power_usage = cx->power; + + state->flags = 0; + switch (cx->type) { + case ACPI_STATE_C1: + state->flags |= CPUIDLE_FLAG_SHALLOW; + state->enter = acpi_idle_enter_c1; + break; + + case ACPI_STATE_C2: + state->flags |= CPUIDLE_FLAG_BALANCED; + state->flags |= CPUIDLE_FLAG_TIME_VALID; + state->enter = acpi_idle_enter_c2; + break; + + case ACPI_STATE_C3: + state->flags |= CPUIDLE_FLAG_DEEP; + state->flags |= CPUIDLE_FLAG_TIME_VALID; + state->flags |= CPUIDLE_FLAG_CHECK_BM; + state->enter = acpi_idle_enter_c3; + break; + } + + count++; + } + + if (!count) + return -EINVAL; + + dev->state_count = count; return 0; } + +struct cpuidle_driver acpi_idle_driver = { + .name = "acpi_idle", + .init = acpi_idle_init, + .redetect = acpi_idle_init, + .bm_check = acpi_idle_bm_check, + .owner = THIS_MODULE, +}; diff --git a/drivers/acpi/processor_throttling.c b/drivers/acpi/processor_throttling.c index b334860..3a2e9a6 100644 --- a/drivers/acpi/processor_throttling.c +++ b/drivers/acpi/processor_throttling.c @@ -44,17 +44,231 @@ #define ACPI_PROCESSOR_CLASS #define _COMPONENT ACPI_PROCESSOR_COMPONENT ACPI_MODULE_NAME("processor_throttling"); +static int acpi_processor_get_throttling(struct acpi_processor *pr); +int acpi_processor_set_throttling(struct acpi_processor *pr, int state); + +static int acpi_processor_get_platform_limit(struct acpi_processor *pr) +{ + acpi_status status = 0; + unsigned long tpc = 0; + + if (!pr) + return -EINVAL; + status = acpi_evaluate_integer(pr->handle, "_TPC", NULL, &tpc); + if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { + ACPI_EXCEPTION((AE_INFO, status, "Evaluating _TPC")); + return -ENODEV; + } + pr->throttling_platform_limit = (int)tpc; + return 0; +} + +int acpi_processor_tstate_has_changed(struct acpi_processor *pr) +{ + return acpi_processor_get_platform_limit(pr); +} + +/* -------------------------------------------------------------------------- + _PTC, _TSS, _TSD support + -------------------------------------------------------------------------- */ +static int acpi_processor_get_throttling_control(struct acpi_processor *pr) +{ + int result = 0; + acpi_status status = 0; + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; + union acpi_object *ptc = NULL; + union acpi_object obj = { 0 }; + + status = acpi_evaluate_object(pr->handle, "_PTC", NULL, &buffer); + if (ACPI_FAILURE(status)) { + ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PTC")); + return -ENODEV; + } + + ptc = (union acpi_object 
*)buffer.pointer; + if (!ptc || (ptc->type != ACPI_TYPE_PACKAGE) + || (ptc->package.count != 2)) { + printk(KERN_ERR PREFIX "Invalid _PTC data\n"); + result = -EFAULT; + goto end; + } + + /* + * control_register + */ + + obj = ptc->package.elements[0]; + + if ((obj.type != ACPI_TYPE_BUFFER) + || (obj.buffer.length < sizeof(struct acpi_ptc_register)) + || (obj.buffer.pointer == NULL)) { + printk(KERN_ERR PREFIX + "Invalid _PTC data (control_register)\n"); + result = -EFAULT; + goto end; + } + memcpy(&pr->throttling.control_register, obj.buffer.pointer, + sizeof(struct acpi_ptc_register)); + + /* + * status_register + */ + + obj = ptc->package.elements[1]; + + if ((obj.type != ACPI_TYPE_BUFFER) + || (obj.buffer.length < sizeof(struct acpi_ptc_register)) + || (obj.buffer.pointer == NULL)) { + printk(KERN_ERR PREFIX "Invalid _PTC data (status_register)\n"); + result = -EFAULT; + goto end; + } + + memcpy(&pr->throttling.status_register, obj.buffer.pointer, + sizeof(struct acpi_ptc_register)); + + end: + kfree(buffer.pointer); + + return result; +} +static int acpi_processor_get_throttling_states(struct acpi_processor *pr) +{ + int result = 0; + acpi_status status = AE_OK; + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; + struct acpi_buffer format = { sizeof("NNNNN"), "NNNNN" }; + struct acpi_buffer state = { 0, NULL }; + union acpi_object *tss = NULL; + int i; + + status = acpi_evaluate_object(pr->handle, "_TSS", NULL, &buffer); + if (ACPI_FAILURE(status)) { + ACPI_EXCEPTION((AE_INFO, status, "Evaluating _TSS")); + return -ENODEV; + } + + tss = buffer.pointer; + if (!tss || (tss->type != ACPI_TYPE_PACKAGE)) { + printk(KERN_ERR PREFIX "Invalid _TSS data\n"); + result = -EFAULT; + goto end; + } + + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d throttling states\n", + tss->package.count)); + + pr->throttling.state_count = tss->package.count; + pr->throttling.states_tss = + kmalloc(sizeof(struct acpi_processor_tx_tss) * tss->package.count, + GFP_KERNEL); + if (!pr->throttling.states_tss) { + result = -ENOMEM; + goto end; + } + + for (i = 0; i < pr->throttling.state_count; i++) { + + struct acpi_processor_tx_tss *tx = + (struct acpi_processor_tx_tss *)&(pr->throttling. 
+ states_tss[i]); + + state.length = sizeof(struct acpi_processor_tx_tss); + state.pointer = tx; + + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Extracting state %d\n", i)); + + status = acpi_extract_package(&(tss->package.elements[i]), + &format, &state); + if (ACPI_FAILURE(status)) { + ACPI_EXCEPTION((AE_INFO, status, "Invalid _TSS data")); + result = -EFAULT; + kfree(pr->throttling.states_tss); + goto end; + } + + if (!tx->freqpercentage) { + printk(KERN_ERR PREFIX + "Invalid _TSS data: freq is zero\n"); + result = -EFAULT; + kfree(pr->throttling.states_tss); + goto end; + } + } + + end: + kfree(buffer.pointer); + + return result; +} +static int acpi_processor_get_tsd(struct acpi_processor *pr) +{ + int result = 0; + acpi_status status = AE_OK; + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; + struct acpi_buffer format = { sizeof("NNNNN"), "NNNNN" }; + struct acpi_buffer state = { 0, NULL }; + union acpi_object *tsd = NULL; + struct acpi_tsd_package *pdomain; + + status = acpi_evaluate_object(pr->handle, "_TSD", NULL, &buffer); + if (ACPI_FAILURE(status)) { + return -ENODEV; + } + + tsd = buffer.pointer; + if (!tsd || (tsd->type != ACPI_TYPE_PACKAGE)) { + ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _TSD data\n")); + result = -EFAULT; + goto end; + } + + if (tsd->package.count != 1) { + ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _TSD data\n")); + result = -EFAULT; + goto end; + } + + pdomain = &(pr->throttling.domain_info); + + state.length = sizeof(struct acpi_tsd_package); + state.pointer = pdomain; + + status = acpi_extract_package(&(tsd->package.elements[0]), + &format, &state); + if (ACPI_FAILURE(status)) { + ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _TSD data\n")); + result = -EFAULT; + goto end; + } + + if (pdomain->num_entries != ACPI_TSD_REV0_ENTRIES) { + ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Unknown _TSD:num_entries\n")); + result = -EFAULT; + goto end; + } + + if (pdomain->revision != ACPI_TSD_REV0_REVISION) { + ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Unknown _TSD:revision\n")); + result = -EFAULT; + goto end; + } + + end: + kfree(buffer.pointer); + return result; +} + /* -------------------------------------------------------------------------- Throttling Control -------------------------------------------------------------------------- */ -static int acpi_processor_get_throttling(struct acpi_processor *pr) +static int acpi_processor_get_throttling_fadt(struct acpi_processor *pr) { int state = 0; u32 value = 0; u32 duty_mask = 0; u32 duty_value = 0; - if (!pr) return -EINVAL; @@ -94,13 +308,114 @@ static int acpi_processor_get_throttling return 0; } -int acpi_processor_set_throttling(struct acpi_processor *pr, int state) +static int acpi_read_throttling_status(struct acpi_processor_throttling + *throttling) +{ + int value = -1; + switch (throttling->status_register.space_id) { + case ACPI_ADR_SPACE_SYSTEM_IO: + acpi_os_read_port((acpi_io_address) throttling->status_register. 
+ address, &value, + (u32) throttling->status_register.bit_width * + 8); + break; + case ACPI_ADR_SPACE_FIXED_HARDWARE: + printk(KERN_ERR PREFIX + "HARDWARE addr space,NOT supported yet\n"); + break; + default: + printk(KERN_ERR PREFIX "Unknown addr space %d\n", + (u32) (throttling->status_register.space_id)); + } + return value; +} + +static int acpi_write_throttling_state(struct acpi_processor_throttling + *throttling, int value) +{ + int ret = -1; + + switch (throttling->control_register.space_id) { + case ACPI_ADR_SPACE_SYSTEM_IO: + acpi_os_write_port((acpi_io_address) throttling-> + control_register.address, value, + (u32) throttling->control_register. + bit_width * 8); + ret = 0; + break; + case ACPI_ADR_SPACE_FIXED_HARDWARE: + printk(KERN_ERR PREFIX + "HARDWARE addr space,NOT supported yet\n"); + break; + default: + printk(KERN_ERR PREFIX "Unknown addr space %d\n", + (u32) (throttling->control_register.space_id)); + } + return ret; +} + +static int acpi_get_throttling_state(struct acpi_processor *pr, int value) +{ + int i; + + for (i = 0; i < pr->throttling.state_count; i++) { + struct acpi_processor_tx_tss *tx = + (struct acpi_processor_tx_tss *)&(pr->throttling. + states_tss[i]); + if (tx->control == value) + break; + } + if (i > pr->throttling.state_count) + i = -1; + return i; +} + +static int acpi_get_throttling_value(struct acpi_processor *pr, int state) +{ + int value = -1; + if (state >= 0 && state <= pr->throttling.state_count) { + struct acpi_processor_tx_tss *tx = + (struct acpi_processor_tx_tss *)&(pr->throttling. + states_tss[state]); + value = tx->control; + } + return value; +} + +static int acpi_processor_get_throttling_ptc(struct acpi_processor *pr) +{ + int state = 0; + u32 value = 0; + + if (!pr) + return -EINVAL; + + if (!pr->flags.throttling) + return -ENODEV; + + pr->throttling.state = 0; + local_irq_disable(); + value = acpi_read_throttling_status(&pr->throttling); + if (value >= 0) { + state = acpi_get_throttling_state(pr, value); + pr->throttling.state = state; + } + local_irq_enable(); + + return 0; +} + +static int acpi_processor_get_throttling(struct acpi_processor *pr) +{ + return pr->throttling.acpi_processor_get_throttling(pr); +} + +int acpi_processor_set_throttling_fadt(struct acpi_processor *pr, int state) { u32 value = 0; u32 duty_mask = 0; u32 duty_value = 0; - if (!pr) return -EINVAL; @@ -113,6 +428,8 @@ int acpi_processor_set_throttling(struct if (state == pr->throttling.state) return 0; + if (state < pr->throttling_platform_limit) + return -EPERM; /* * Calculate the duty_value and duty_mask. 
*/ @@ -165,12 +482,50 @@ int acpi_processor_set_throttling(struct return 0; } +int acpi_processor_set_throttling_ptc(struct acpi_processor *pr, int state) +{ + u32 value = 0; + + if (!pr) + return -EINVAL; + + if ((state < 0) || (state > (pr->throttling.state_count - 1))) + return -EINVAL; + + if (!pr->flags.throttling) + return -ENODEV; + + if (state == pr->throttling.state) + return 0; + + if (state < pr->throttling_platform_limit) + return -EPERM; + + local_irq_disable(); + + value = acpi_get_throttling_value(pr, state); + if (value >= 0) { + acpi_write_throttling_state(&pr->throttling, value); + pr->throttling.state = state; + } + local_irq_enable(); + + return 0; +} + +int acpi_processor_set_throttling(struct acpi_processor *pr, int state) +{ + return pr->throttling.acpi_processor_set_throttling(pr, state); +} + int acpi_processor_get_throttling_info(struct acpi_processor *pr) { int result = 0; int step = 0; int i = 0; - + int no_ptc = 0; + int no_tss = 0; + int no_tsd = 0; ACPI_DEBUG_PRINT((ACPI_DB_INFO, "pblk_address[0x%08x] duty_offset[%d] duty_width[%d]\n", @@ -182,6 +537,21 @@ int acpi_processor_get_throttling_info(s return -EINVAL; /* TBD: Support ACPI 2.0 objects */ + no_ptc = acpi_processor_get_throttling_control(pr); + no_tss = acpi_processor_get_throttling_states(pr); + no_tsd = acpi_processor_get_tsd(pr); + + if (no_ptc || no_tss) { + pr->throttling.acpi_processor_get_throttling = + &acpi_processor_get_throttling_fadt; + pr->throttling.acpi_processor_set_throttling = + &acpi_processor_set_throttling_fadt; + } else { + pr->throttling.acpi_processor_get_throttling = + &acpi_processor_get_throttling_ptc; + pr->throttling.acpi_processor_set_throttling = + &acpi_processor_set_throttling_ptc; + } if (!pr->throttling.address) { ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No throttling register\n")); @@ -262,7 +632,6 @@ static int acpi_processor_throttling_seq int i = 0; int result = 0; - if (!pr) goto end; @@ -280,15 +649,25 @@ static int acpi_processor_throttling_seq } seq_printf(seq, "state count: %d\n" - "active state: T%d\n", - pr->throttling.state_count, pr->throttling.state); + "active state: T%d\n" + "state available: T%d to T%d\n", + pr->throttling.state_count, pr->throttling.state, + pr->throttling_platform_limit, + pr->throttling.state_count - 1); seq_puts(seq, "states:\n"); - for (i = 0; i < pr->throttling.state_count; i++) - seq_printf(seq, " %cT%d: %02d%%\n", - (i == pr->throttling.state ? '*' : ' '), i, - (pr->throttling.states[i].performance ? pr-> - throttling.states[i].performance / 10 : 0)); + if (acpi_processor_get_throttling == acpi_processor_get_throttling_fadt) + for (i = 0; i < pr->throttling.state_count; i++) + seq_printf(seq, " %cT%d: %02d%%\n", + (i == pr->throttling.state ? '*' : ' '), i, + (pr->throttling.states[i].performance ? pr-> + throttling.states[i].performance / 10 : 0)); + else + for (i = 0; i < pr->throttling.state_count; i++) + seq_printf(seq, " %cT%d: %02d%%\n", + (i == pr->throttling.state ? '*' : ' '), i, + (int)pr->throttling.states_tss[i]. 
+ freqpercentage); end: return 0; @@ -301,7 +680,7 @@ static int acpi_processor_throttling_ope PDE(inode)->data); } -static ssize_t acpi_processor_write_throttling(struct file * file, +static ssize_t acpi_processor_write_throttling(struct file *file, const char __user * buffer, size_t count, loff_t * data) { @@ -310,7 +689,6 @@ static ssize_t acpi_processor_write_thro struct acpi_processor *pr = m->private; char state_string[12] = { '\0' }; - if (!pr || (count > sizeof(state_string) - 1)) return -EINVAL; diff --git a/drivers/acpi/tables/tbfadt.c b/drivers/acpi/tables/tbfadt.c index 1285e91..002bb33 100644 --- a/drivers/acpi/tables/tbfadt.c +++ b/drivers/acpi/tables/tbfadt.c @@ -211,14 +211,17 @@ void acpi_tb_parse_fadt(acpi_native_uint * DESCRIPTION: Get a local copy of the FADT and convert it to a common format. * Performs validation on some important FADT fields. * + * NOTE: We create a local copy of the FADT regardless of the version. + * ******************************************************************************/ void acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 length) { /* - * Check if the FADT is larger than what we know about (ACPI 2.0 version). - * Truncate the table, but make some noise. + * Check if the FADT is larger than the largest table that we expect + * (the ACPI 2.0/3.0 version). If so, truncate the table, and issue + * a warning. */ if (length > sizeof(struct acpi_table_fadt)) { ACPI_WARNING((AE_INFO, @@ -227,10 +230,12 @@ void acpi_tb_create_local_fadt(struct ac sizeof(struct acpi_table_fadt))); } - /* Copy the entire FADT locally. Zero first for tb_convert_fadt */ + /* Clear the entire local FADT */ ACPI_MEMSET(&acpi_gbl_FADT, 0, sizeof(struct acpi_table_fadt)); + /* Copy the original FADT, up to sizeof (struct acpi_table_fadt) */ + ACPI_MEMCPY(&acpi_gbl_FADT, table, ACPI_MIN(length, sizeof(struct acpi_table_fadt))); @@ -251,7 +256,7 @@ void acpi_tb_create_local_fadt(struct ac * RETURN: None * * DESCRIPTION: Converts all versions of the FADT to a common internal format. - * -> Expand all 32-bit addresses to 64-bit. + * Expand all 32-bit addresses to 64-bit. * * NOTE: acpi_gbl_FADT must be of size (struct acpi_table_fadt), * and must contain a copy of the actual FADT. @@ -292,8 +297,23 @@ static void acpi_tb_convert_fadt(void) } /* - * Expand the 32-bit V1.0 addresses to the 64-bit "X" generic address - * structures as necessary. + * For ACPI 1.0 FADTs (revision 1 or 2), ensure that reserved fields which + * should be zero are indeed zero. This will workaround BIOSs that + * inadvertently place values in these fields. + * + * The ACPI 1.0 reserved fields that will be zeroed are the bytes located at + * offset 45, 55, 95, and the word located at offset 109, 110. + */ + if (acpi_gbl_FADT.header.revision < 3) { + acpi_gbl_FADT.preferred_profile = 0; + acpi_gbl_FADT.pstate_control = 0; + acpi_gbl_FADT.cst_control = 0; + acpi_gbl_FADT.boot_flags = 0; + } + + /* + * Expand the ACPI 1.0 32-bit V1.0 addresses to the ACPI 2.0 64-bit "X" + * generic address structures as necessary. */ for (i = 0; i < ACPI_FADT_INFO_ENTRIES; i++) { target = @@ -349,18 +369,6 @@ static void acpi_tb_convert_fadt(void) acpi_gbl_FADT.xpm1a_event_block.space_id; } - - /* - * For ACPI 1.0 FADTs, ensure that reserved fields (which should be zero) - * are indeed zero. This will workaround BIOSs that inadvertently placed - * values in these fields. 
- */ - if (acpi_gbl_FADT.header.revision < 3) { - acpi_gbl_FADT.preferred_profile = 0; - acpi_gbl_FADT.pstate_control = 0; - acpi_gbl_FADT.cst_control = 0; - acpi_gbl_FADT.boot_flags = 0; - } } /****************************************************************************** diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c index 194ecfe..e6de8a8 100644 --- a/drivers/acpi/thermal.c +++ b/drivers/acpi/thermal.c @@ -1107,7 +1107,6 @@ static void acpi_thermal_notify(acpi_han break; case ACPI_THERMAL_NOTIFY_THRESHOLDS: acpi_thermal_get_trip_points(tz); - acpi_thermal_check(tz); acpi_bus_generate_event(device, event, 0); break; case ACPI_THERMAL_NOTIFY_DEVICES: diff --git a/drivers/acpi/utilities/uteval.c b/drivers/acpi/utilities/uteval.c index 8ec6f8e..f112af4 100644 --- a/drivers/acpi/utilities/uteval.c +++ b/drivers/acpi/utilities/uteval.c @@ -62,16 +62,13 @@ acpi_ut_translate_one_cid(union acpi_ope static char *acpi_interfaces_supported[] = { /* Operating System Vendor Strings */ - "Windows 2000", - "Windows 2001", - "Windows 2001 SP0", - "Windows 2001 SP1", - "Windows 2001 SP2", - "Windows 2001 SP3", - "Windows 2001 SP4", - "Windows 2001.1", - "Windows 2001.1 SP1", /* Added 03/2006 */ - "Windows 2006", /* Added 03/2006 */ + "Windows 2000", /* Windows 2000 */ + "Windows 2001", /* Windows XP */ + "Windows 2001 SP1", /* Windows XP SP1 */ + "Windows 2001 SP2", /* Windows XP SP2 */ + "Windows 2001.1", /* Windows Server 2003 */ + "Windows 2001.1 SP1", /* Windows Server 2003 SP1 - Added 03/2006 */ + "Windows 2006", /* Windows Vista - Added 03/2006 */ /* Feature Group Strings */ diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c index 00d25b3..39273da 100644 --- a/drivers/acpi/video.c +++ b/drivers/acpi/video.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -169,6 +170,7 @@ struct acpi_video_device { struct acpi_device *dev; struct acpi_video_device_brightness *brightness; struct backlight_device *backlight; + struct output_device *output_dev; }; /* bus */ @@ -272,6 +274,10 @@ static int acpi_video_get_next_level(str u32 level_current, u32 event); static void acpi_video_switch_brightness(struct acpi_video_device *device, int event); +static int acpi_video_device_get_state(struct acpi_video_device *device, + unsigned long *state); +static int acpi_video_output_get(struct output_device *od); +static int acpi_video_device_set_state(struct acpi_video_device *device, int state); /*backlight device sysfs support*/ static int acpi_video_get_brightness(struct backlight_device *bd) @@ -297,6 +303,28 @@ static struct backlight_ops acpi_backlig .update_status = acpi_video_set_brightness, }; +/*video output device sysfs support*/ +static int acpi_video_output_get(struct output_device *od) +{ + unsigned long state; + struct acpi_video_device *vd = + (struct acpi_video_device *)class_get_devdata(&od->class_dev); + acpi_video_device_get_state(vd, &state); + return (int)state; +} + +static int acpi_video_output_set(struct output_device *od) +{ + unsigned long state = od->request_state; + struct acpi_video_device *vd= + (struct acpi_video_device *)class_get_devdata(&od->class_dev); + return acpi_video_device_set_state(vd, state); +} + +static struct output_properties acpi_output_properties = { + .set_state = acpi_video_output_set, + .get_status = acpi_video_output_get, +}; /* -------------------------------------------------------------------------- Video Management -------------------------------------------------------------------------- */ @@ -626,6 +654,17 @@ static 
void acpi_video_device_find_cap(s kfree(name); } + if (device->cap._DCS && device->cap._DSS){ + static int count = 0; + char *name; + name = kzalloc(MAX_NAME_LEN, GFP_KERNEL); + if (!name) + return; + sprintf(name, "acpi_video%d", count++); + device->output_dev = video_output_register(name, + NULL, device, &acpi_output_properties); + kfree(name); + } return; } @@ -1669,6 +1708,7 @@ static int acpi_video_bus_put_one_device ACPI_DEVICE_NOTIFY, acpi_video_device_notify); backlight_device_unregister(device->backlight); + video_output_unregister(device->output_dev); return 0; } diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig new file mode 100644 index 0000000..1497ffc --- /dev/null +++ b/drivers/cpuidle/Kconfig @@ -0,0 +1,39 @@ +menu "CPU idle PM support" + +config CPU_IDLE + bool "CPU idle PM support" + help + CPU idle is a generic framework for supporting software-controlled + idle processor power management. It includes modular cross-platform + governors that can be swapped during runtime. + + If you're using a mobile platform that supports CPU idle PM (e.g. + an ACPI-capable notebook), you should say Y here. + +if CPU_IDLE + +comment "Governors" + +config CPU_IDLE_GOV_LADDER + tristate "'ladder' governor" + depends on CPU_IDLE + default y + help + This cpuidle governor promotes and demotes through the supported idle + states using residency time and bus master activity as metrics. This + algorithm was originally introduced in the old ACPI processor driver. + +config CPU_IDLE_GOV_MENU + tristate "'menu' governor" + depends on CPU_IDLE && NO_HZ + default y + help + This cpuidle governor evaluates all available states and chooses the + deepest state that meets all of the following constraints: BM activity, + expected time until next timer interrupt, and last break event time + delta. It is designed to minimize power consumption. Currently + dynticks is required. + +endif # CPU_IDLE + +endmenu diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile new file mode 100644 index 0000000..5634f88 --- /dev/null +++ b/drivers/cpuidle/Makefile @@ -0,0 +1,5 @@ +# +# Makefile for cpuidle. +# + +obj-y += cpuidle.o driver.o governor.o sysfs.o governors/ diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c new file mode 100644 index 0000000..ca98f7d --- /dev/null +++ b/drivers/cpuidle/cpuidle.c @@ -0,0 +1,307 @@ +/* + * cpuidle.c - core cpuidle infrastructure + * + * (C) 2006-2007 Venkatesh Pallipadi + * Shaohua Li + * Adam Belay + * + * This code is licenced under the GPL. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include "cpuidle.h" + +DEFINE_PER_CPU(struct cpuidle_device *, cpuidle_devices); +EXPORT_PER_CPU_SYMBOL_GPL(cpuidle_devices); + +DEFINE_MUTEX(cpuidle_lock); +LIST_HEAD(cpuidle_detected_devices); +static void (*pm_idle_old)(void); + + +/** + * cpuidle_idle_call - the main idle loop + * + * NOTE: no locks or semaphores should be used here + */ +static void cpuidle_idle_call(void) +{ + struct cpuidle_device *dev = __get_cpu_var(cpuidle_devices); + struct cpuidle_state *target_state; + int next_state; + + /* check if the device is ready */ + if (!dev || dev->status != CPUIDLE_STATUS_DOIDLE) { + if (pm_idle_old) + pm_idle_old(); + else + local_irq_enable(); + return; + } + + /* ask the governor for the next state */ + next_state = cpuidle_curr_governor->select(dev); + if (need_resched()) + return; + target_state = &dev->states[next_state]; + + /* enter the state and update stats */ + dev->last_residency = target_state->enter(dev, target_state); + dev->last_state = target_state; + target_state->time += dev->last_residency; + target_state->usage++; + + /* give the governor an opportunity to reflect on the outcome */ + if (cpuidle_curr_governor->reflect) + cpuidle_curr_governor->reflect(dev); +} + +/** + * cpuidle_install_idle_handler - installs the cpuidle idle loop handler + */ +void cpuidle_install_idle_handler(void) +{ + if (pm_idle != cpuidle_idle_call) { + /* Make sure all changes finished before we switch to new idle */ + smp_wmb(); + pm_idle = cpuidle_idle_call; + } +} + +/** + * cpuidle_uninstall_idle_handler - uninstalls the cpuidle idle loop handler + */ +void cpuidle_uninstall_idle_handler(void) +{ + if (pm_idle != pm_idle_old) { + pm_idle = pm_idle_old; + cpu_idle_wait(); + } +} + +/** + * cpuidle_rescan_device - prepares for a new state configuration + * @dev: the target device + * + * Must be called with cpuidle_lock aquired. 
+ */ +void cpuidle_rescan_device(struct cpuidle_device *dev) +{ + int i; + + if (cpuidle_curr_governor->scan) + cpuidle_curr_governor->scan(dev); + + for (i = 0; i < dev->state_count; i++) { + dev->states[i].usage = 0; + dev->states[i].time = 0; + } +} + +/** + * cpuidle_add_device - attaches the driver to a CPU instance + * @sys_dev: the system device (driver model CPU representation) + */ +static int cpuidle_add_device(struct sys_device *sys_dev) +{ + int cpu = sys_dev->id; + struct cpuidle_device *dev; + + dev = per_cpu(cpuidle_devices, cpu); + + mutex_lock(&cpuidle_lock); + if (cpu_is_offline(cpu)) { + mutex_unlock(&cpuidle_lock); + return 0; + } + + if (!dev) { + dev = kzalloc(sizeof(struct cpuidle_device), GFP_KERNEL); + if (!dev) { + mutex_unlock(&cpuidle_lock); + return -ENOMEM; + } + init_completion(&dev->kobj_unregister); + per_cpu(cpuidle_devices, cpu) = dev; + } + dev->cpu = cpu; + + if (dev->status & CPUIDLE_STATUS_DETECTED) { + mutex_unlock(&cpuidle_lock); + return 0; + } + + cpuidle_add_sysfs(sys_dev); + + if (cpuidle_curr_driver) { + if (cpuidle_attach_driver(dev)) + goto err_ret; + } + + if (cpuidle_curr_governor) { + if (cpuidle_attach_governor(dev)) { + cpuidle_detach_driver(dev); + goto err_ret; + } + } + + if (cpuidle_device_can_idle(dev)) + cpuidle_install_idle_handler(); + + list_add(&dev->device_list, &cpuidle_detected_devices); + dev->status |= CPUIDLE_STATUS_DETECTED; + +err_ret: + mutex_unlock(&cpuidle_lock); + + return 0; +} + +/** + * __cpuidle_remove_device - detaches the driver from a CPU instance + * @sys_dev: the system device (driver model CPU representation) + * + * Must be called with cpuidle_lock aquired. + */ +static int __cpuidle_remove_device(struct sys_device *sys_dev) +{ + struct cpuidle_device *dev; + + dev = per_cpu(cpuidle_devices, sys_dev->id); + + if (!(dev->status & CPUIDLE_STATUS_DETECTED)) { + return 0; + } + dev->status &= ~CPUIDLE_STATUS_DETECTED; + /* NOTE: we don't wait because the cpu is already offline */ + if (cpuidle_curr_governor) + cpuidle_detach_governor(dev); + if (cpuidle_curr_driver) + cpuidle_detach_driver(dev); + cpuidle_remove_sysfs(sys_dev); + list_del(&dev->device_list); + wait_for_completion(&dev->kobj_unregister); + per_cpu(cpuidle_devices, sys_dev->id) = NULL; + kfree(dev); + + return 0; +} + +/** + * cpuidle_remove_device - detaches the driver from a CPU instance + * @sys_dev: the system device (driver model CPU representation) + */ +static int cpuidle_remove_device(struct sys_device *sys_dev) +{ + int ret; + mutex_lock(&cpuidle_lock); + ret = __cpuidle_remove_device(sys_dev); + mutex_unlock(&cpuidle_lock); + + return ret; +} + +static struct sysdev_driver cpuidle_sysdev_driver = { + .add = cpuidle_add_device, + .remove = cpuidle_remove_device, +}; + +static int cpuidle_cpu_callback(struct notifier_block *nfb, + unsigned long action, void *hcpu) +{ + struct sys_device *sys_dev; + + sys_dev = get_cpu_sysdev((unsigned long)hcpu); + + switch (action) { + case CPU_ONLINE: + cpuidle_add_device(sys_dev); + break; + case CPU_DOWN_PREPARE: + mutex_lock(&cpuidle_lock); + break; + case CPU_DEAD: + __cpuidle_remove_device(sys_dev); + mutex_unlock(&cpuidle_lock); + break; + case CPU_DOWN_FAILED: + mutex_unlock(&cpuidle_lock); + break; + } + + return NOTIFY_OK; +} + +static struct notifier_block __cpuinitdata cpuidle_cpu_notifier = +{ + .notifier_call = cpuidle_cpu_callback, +}; + +#ifdef CONFIG_SMP + +static void smp_callback(void *v) +{ + /* we already woke the CPU up, nothing more to do */ +} + +/* + * This function gets called 
when a part of the kernel has a new latency + * requirement. This means we need to get all processors out of their C-state, + * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that + * wakes them all right up. + */ +static int cpuidle_latency_notify(struct notifier_block *b, + unsigned long l, void *v) +{ + smp_call_function(smp_callback, NULL, 0, 1); + return NOTIFY_OK; +} + +static struct notifier_block cpuidle_latency_notifier = { + .notifier_call = cpuidle_latency_notify, +}; + +#define latency_notifier_init(x) do { register_latency_notifier(x); } while (0) + +#else /* CONFIG_SMP */ + +#define latency_notifier_init(x) do { } while (0) + +#endif /* CONFIG_SMP */ + +/** + * cpuidle_init - core initializer + */ +static int __init cpuidle_init(void) +{ + int ret; + + pm_idle_old = pm_idle; + + ret = cpuidle_add_class_sysfs(&cpu_sysdev_class); + if (ret) + return ret; + + register_hotcpu_notifier(&cpuidle_cpu_notifier); + + ret = sysdev_driver_register(&cpu_sysdev_class, &cpuidle_sysdev_driver); + + if (ret) { + cpuidle_remove_class_sysfs(&cpu_sysdev_class); + printk(KERN_ERR "cpuidle: failed to initialize\n"); + return ret; + } + + latency_notifier_init(&cpuidle_latency_notifier); + + return 0; +} + +core_initcall(cpuidle_init); diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h new file mode 100644 index 0000000..8bbc090 --- /dev/null +++ b/drivers/cpuidle/cpuidle.h @@ -0,0 +1,50 @@ +/* + * cpuidle.h - The internal header file + */ + +#ifndef __DRIVER_CPUIDLE_H +#define __DRIVER_CPUIDLE_H + +#include + +/* For internal use only */ +extern struct cpuidle_governor *cpuidle_curr_governor; +extern struct cpuidle_driver *cpuidle_curr_driver; +extern struct list_head cpuidle_drivers; +extern struct list_head cpuidle_governors; +extern struct list_head cpuidle_detected_devices; +extern struct mutex cpuidle_lock; + +/* idle loop */ +extern void cpuidle_install_idle_handler(void); +extern void cpuidle_uninstall_idle_handler(void); +extern void cpuidle_rescan_device(struct cpuidle_device *dev); + +/* drivers */ +extern int cpuidle_attach_driver(struct cpuidle_device *dev); +extern void cpuidle_detach_driver(struct cpuidle_device *dev); +extern int cpuidle_switch_driver(struct cpuidle_driver *drv); + +/* governors */ +extern int cpuidle_attach_governor(struct cpuidle_device *dev); +extern void cpuidle_detach_governor(struct cpuidle_device *dev); +extern int cpuidle_switch_governor(struct cpuidle_governor *gov); + +/* sysfs */ +extern int cpuidle_add_class_sysfs(struct sysdev_class *cls); +extern void cpuidle_remove_class_sysfs(struct sysdev_class *cls); +extern int cpuidle_add_driver_sysfs(struct cpuidle_device *device); +extern void cpuidle_remove_driver_sysfs(struct cpuidle_device *device); +extern int cpuidle_add_sysfs(struct sys_device *sysdev); +extern void cpuidle_remove_sysfs(struct sys_device *sysdev); + +/** + * cpuidle_device_can_idle - determines if a CPU can utilize the idle loop + * @dev: the target CPU + */ +static inline int cpuidle_device_can_idle(struct cpuidle_device *dev) +{ + return (dev->status == CPUIDLE_STATUS_DOIDLE); +} + +#endif /* __DRIVER_CPUIDLE_H */ diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c new file mode 100644 index 0000000..20978ba --- /dev/null +++ b/drivers/cpuidle/driver.c @@ -0,0 +1,276 @@ +/* + * driver.c - driver support + * + * (C) 2006-2007 Venkatesh Pallipadi + * Shaohua Li + * Adam Belay + * + * This code is licenced under the GPL. 
+ */ + +#include +#include +#include + +#include "cpuidle.h" + +LIST_HEAD(cpuidle_drivers); +struct cpuidle_driver *cpuidle_curr_driver; + + +/** + * cpuidle_attach_driver - attaches a driver to a CPU + * @dev: the target CPU + * + * Must be called with cpuidle_lock aquired. + */ +int cpuidle_attach_driver(struct cpuidle_device *dev) +{ + int ret; + + if (dev->status & CPUIDLE_STATUS_DRIVER_ATTACHED) + return -EIO; + + if (!try_module_get(cpuidle_curr_driver->owner)) + return -EINVAL; + + ret = cpuidle_curr_driver->init(dev); + if (ret) { + module_put(cpuidle_curr_driver->owner); + printk(KERN_INFO "cpuidle: driver %s failed to attach to " + "cpu %d\n", cpuidle_curr_driver->name, dev->cpu); + } else { + if (dev->status & CPUIDLE_STATUS_GOVERNOR_ATTACHED) + cpuidle_rescan_device(dev); + smp_wmb(); + dev->status |= CPUIDLE_STATUS_DRIVER_ATTACHED; + cpuidle_add_driver_sysfs(dev); + } + + return ret; +} + +/** + * cpuidle_detach_govenor - detaches a driver from a CPU + * @dev: the target CPU + * + * Must be called with cpuidle_lock aquired. + */ +void cpuidle_detach_driver(struct cpuidle_device *dev) +{ + if (dev->status & CPUIDLE_STATUS_DRIVER_ATTACHED) { + cpuidle_remove_driver_sysfs(dev); + dev->status &= ~CPUIDLE_STATUS_DRIVER_ATTACHED; + if (cpuidle_curr_driver->exit) + cpuidle_curr_driver->exit(dev); + module_put(cpuidle_curr_driver->owner); + } +} + +/** + * __cpuidle_find_driver - finds a driver of the specified name + * @str: the name + * + * Must be called with cpuidle_lock aquired. + */ +static struct cpuidle_driver * __cpuidle_find_driver(const char *str) +{ + struct cpuidle_driver *drv; + + list_for_each_entry(drv, &cpuidle_drivers, driver_list) + if (!strnicmp(str, drv->name, CPUIDLE_NAME_LEN)) + return drv; + + return NULL; +} + +/** + * cpuidle_switch_driver - changes the driver + * @drv: the new target driver + * + * NOTE: "drv" can be NULL to specify disabled + * Must be called with cpuidle_lock aquired. 
+ */ +int cpuidle_switch_driver(struct cpuidle_driver *drv) +{ + struct cpuidle_device *dev; + + if (drv == cpuidle_curr_driver) + return -EINVAL; + + cpuidle_uninstall_idle_handler(); + + if (cpuidle_curr_driver) + list_for_each_entry(dev, &cpuidle_detected_devices, device_list) + cpuidle_detach_driver(dev); + + cpuidle_curr_driver = drv; + + if (drv) { + int ret = 1; + list_for_each_entry(dev, &cpuidle_detected_devices, device_list) + if (cpuidle_attach_driver(dev) == 0) + ret = 0; + + /* If attach on all devices fail, switch to NULL driver */ + if (ret) + cpuidle_curr_driver = NULL; + + if (cpuidle_curr_driver && cpuidle_curr_governor) { + printk(KERN_INFO "cpuidle: using driver %s\n", + drv->name); + cpuidle_install_idle_handler(); + } + } + + return 0; +} + +/** + * cpuidle_register_driver - registers a driver + * @drv: the driver + */ +int cpuidle_register_driver(struct cpuidle_driver *drv) +{ + int ret = -EEXIST; + + if (!drv || !drv->init) + return -EINVAL; + + mutex_lock(&cpuidle_lock); + if (__cpuidle_find_driver(drv->name) == NULL) { + ret = 0; + list_add_tail(&drv->driver_list, &cpuidle_drivers); + if (!cpuidle_curr_driver) + cpuidle_switch_driver(drv); + } + mutex_unlock(&cpuidle_lock); + + return ret; +} + +EXPORT_SYMBOL_GPL(cpuidle_register_driver); + +/** + * cpuidle_unregister_driver - unregisters a driver + * @drv: the driver + */ +void cpuidle_unregister_driver(struct cpuidle_driver *drv) +{ + if (!drv) + return; + + mutex_lock(&cpuidle_lock); + if (drv == cpuidle_curr_driver) + cpuidle_switch_driver(NULL); + list_del(&drv->driver_list); + mutex_unlock(&cpuidle_lock); +} + +EXPORT_SYMBOL_GPL(cpuidle_unregister_driver); + +static void __cpuidle_force_redetect(struct cpuidle_device *dev) +{ + cpuidle_remove_driver_sysfs(dev); + cpuidle_curr_driver->redetect(dev); + cpuidle_add_driver_sysfs(dev); +} + +/** + * cpuidle_force_redetect - redetects the idle states of a CPU + * + * @dev: the CPU to redetect + * @drv: the target driver + * + * Generally, the driver will call this when the supported states set has + * changed. (e.g. as the result of an ACPI transition to battery power) + */ +int cpuidle_force_redetect(struct cpuidle_device *dev, + struct cpuidle_driver *drv) +{ + int uninstalled = 0; + + mutex_lock(&cpuidle_lock); + + if (drv != cpuidle_curr_driver) { + mutex_unlock(&cpuidle_lock); + return 0; + } + + if (!(dev->status & CPUIDLE_STATUS_DRIVER_ATTACHED) || + !cpuidle_curr_driver->redetect) { + mutex_unlock(&cpuidle_lock); + return -EIO; + } + + if (cpuidle_device_can_idle(dev)) { + uninstalled = 1; + cpuidle_uninstall_idle_handler(); + } + + __cpuidle_force_redetect(dev); + + if (cpuidle_device_can_idle(dev)) { + cpuidle_rescan_device(dev); + cpuidle_install_idle_handler(); + } + + /* other devices are still ok */ + if (uninstalled) + cpuidle_install_idle_handler(); + + mutex_unlock(&cpuidle_lock); + + return 0; +} + +EXPORT_SYMBOL_GPL(cpuidle_force_redetect); + +/** + * cpuidle_force_redetect_devices - redetects the idle states of all CPUs + * + * @drv: the target driver + * + * Generally, the driver will call this when the supported states set has + * changed. (e.g. 
as the result of an ACPI transition to battery power) + */ +int cpuidle_force_redetect_devices(struct cpuidle_driver *drv) +{ + struct cpuidle_device *dev; + int ret = 0; + + mutex_lock(&cpuidle_lock); + + if (drv != cpuidle_curr_driver) + goto out; + + if (!cpuidle_curr_driver->redetect) { + ret = -EIO; + goto out; + } + + cpuidle_uninstall_idle_handler(); + + list_for_each_entry(dev, &cpuidle_detected_devices, device_list) + __cpuidle_force_redetect(dev); + + cpuidle_install_idle_handler(); +out: + mutex_unlock(&cpuidle_lock); + return ret; +} + +EXPORT_SYMBOL_GPL(cpuidle_force_redetect_devices); + +/** + * cpuidle_get_bm_activity - determines if BM activity has occured + */ +int cpuidle_get_bm_activity(void) +{ + if (cpuidle_curr_driver->bm_check) + return cpuidle_curr_driver->bm_check(); + else + return 0; +} +EXPORT_SYMBOL_GPL(cpuidle_get_bm_activity); + diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c new file mode 100644 index 0000000..1c7c384 --- /dev/null +++ b/drivers/cpuidle/governor.c @@ -0,0 +1,160 @@ +/* + * governor.c - governor support + * + * (C) 2006-2007 Venkatesh Pallipadi + * Shaohua Li + * Adam Belay + * + * This code is licenced under the GPL. + */ + +#include +#include +#include + +#include "cpuidle.h" + +LIST_HEAD(cpuidle_governors); +struct cpuidle_governor *cpuidle_curr_governor; + + +/** + * cpuidle_attach_governor - attaches a governor to a CPU + * @dev: the target CPU + * + * Must be called with cpuidle_lock aquired. + */ +int cpuidle_attach_governor(struct cpuidle_device *dev) +{ + int ret = 0; + + if(dev->status & CPUIDLE_STATUS_GOVERNOR_ATTACHED) + return -EIO; + + if (!try_module_get(cpuidle_curr_governor->owner)) + return -EINVAL; + + if (cpuidle_curr_governor->init) + ret = cpuidle_curr_governor->init(dev); + if (ret) { + module_put(cpuidle_curr_governor->owner); + printk(KERN_ERR "cpuidle: governor %s failed to attach to cpu %d\n", + cpuidle_curr_governor->name, dev->cpu); + } else { + if (dev->status & CPUIDLE_STATUS_DRIVER_ATTACHED) + cpuidle_rescan_device(dev); + smp_wmb(); + dev->status |= CPUIDLE_STATUS_GOVERNOR_ATTACHED; + } + + return ret; +} + +/** + * cpuidle_detach_govenor - detaches a governor from a CPU + * @dev: the target CPU + * + * Must be called with cpuidle_lock aquired. + */ +void cpuidle_detach_governor(struct cpuidle_device *dev) +{ + if (dev->status & CPUIDLE_STATUS_GOVERNOR_ATTACHED) { + dev->status &= ~CPUIDLE_STATUS_GOVERNOR_ATTACHED; + if (cpuidle_curr_governor->exit) + cpuidle_curr_governor->exit(dev); + module_put(cpuidle_curr_governor->owner); + } +} + +/** + * __cpuidle_find_governor - finds a governor of the specified name + * @str: the name + * + * Must be called with cpuidle_lock aquired. + */ +static struct cpuidle_governor * __cpuidle_find_governor(const char *str) +{ + struct cpuidle_governor *gov; + + list_for_each_entry(gov, &cpuidle_governors, governor_list) + if (!strnicmp(str, gov->name, CPUIDLE_NAME_LEN)) + return gov; + + return NULL; +} + +/** + * cpuidle_switch_governor - changes the governor + * @gov: the new target governor + * + * NOTE: "gov" can be NULL to specify disabled + * Must be called with cpuidle_lock aquired. 
+ */ +int cpuidle_switch_governor(struct cpuidle_governor *gov) +{ + struct cpuidle_device *dev; + + if (gov == cpuidle_curr_governor) + return -EINVAL; + + cpuidle_uninstall_idle_handler(); + + if (cpuidle_curr_governor) + list_for_each_entry(dev, &cpuidle_detected_devices, device_list) + cpuidle_detach_governor(dev); + + cpuidle_curr_governor = gov; + + if (gov) { + list_for_each_entry(dev, &cpuidle_detected_devices, device_list) + cpuidle_attach_governor(dev); + if (cpuidle_curr_driver) + cpuidle_install_idle_handler(); + printk(KERN_INFO "cpuidle: using governor %s\n", gov->name); + } + + return 0; +} + +/** + * cpuidle_register_governor - registers a governor + * @gov: the governor + */ +int cpuidle_register_governor(struct cpuidle_governor *gov) +{ + int ret = -EEXIST; + + if (!gov || !gov->select) + return -EINVAL; + + mutex_lock(&cpuidle_lock); + if (__cpuidle_find_governor(gov->name) == NULL) { + ret = 0; + list_add_tail(&gov->governor_list, &cpuidle_governors); + if (!cpuidle_curr_governor) + cpuidle_switch_governor(gov); + } + mutex_unlock(&cpuidle_lock); + + return ret; +} + +EXPORT_SYMBOL_GPL(cpuidle_register_governor); + +/** + * cpuidle_unregister_governor - unregisters a governor + * @gov: the governor + */ +void cpuidle_unregister_governor(struct cpuidle_governor *gov) +{ + if (!gov) + return; + + mutex_lock(&cpuidle_lock); + if (gov == cpuidle_curr_governor) + cpuidle_switch_governor(NULL); + list_del(&gov->governor_list); + mutex_unlock(&cpuidle_lock); +} + +EXPORT_SYMBOL_GPL(cpuidle_unregister_governor); diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile new file mode 100644 index 0000000..1b51272 --- /dev/null +++ b/drivers/cpuidle/governors/Makefile @@ -0,0 +1,6 @@ +# +# Makefile for cpuidle governors. +# + +obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o +obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c new file mode 100644 index 0000000..c1fd3ee --- /dev/null +++ b/drivers/cpuidle/governors/ladder.c @@ -0,0 +1,227 @@ +/* + * ladder.c - the residency ladder algorithm + * + * Copyright (C) 2001, 2002 Andy Grover + * Copyright (C) 2001, 2002 Paul Diefenbaugh + * Copyright (C) 2004, 2005 Dominik Brodowski + * + * (C) 2006-2007 Venkatesh Pallipadi + * Shaohua Li + * Adam Belay + * + * This code is licenced under the GPL. + */ + +#include +#include +#include +#include +#include + +#include +#include + +#define PROMOTION_COUNT 4 +#define DEMOTION_COUNT 1 + +/* + * bm_history -- bit-mask with a bit per jiffy of bus-master activity + * 1000 HZ: 0xFFFFFFFF: 32 jiffies = 32ms + * 800 HZ: 0xFFFFFFFF: 32 jiffies = 40ms + * 100 HZ: 0x0000000F: 4 jiffies = 40ms + * reduce history for more aggressive entry into C3 + */ +static unsigned int bm_history __read_mostly = + (HZ >= 800 ? 
0xFFFFFFFF : ((1U << (HZ / 25)) - 1)); +module_param(bm_history, uint, 0644); + +struct ladder_device_state { + struct { + u32 promotion_count; + u32 demotion_count; + u32 promotion_time; + u32 demotion_time; + u32 bm; + } threshold; + struct { + int promotion_count; + int demotion_count; + } stats; +}; + +struct ladder_device { + struct ladder_device_state states[CPUIDLE_STATE_MAX]; + unsigned int bm_check:1; + unsigned long bm_check_timestamp; + unsigned long bm_activity; /* FIXME: bm activity should be global */ + int last_state_idx; +}; + +/** + * ladder_do_selection - prepares private data for a state change + * @ldev: the ladder device + * @old_idx: the current state index + * @new_idx: the new target state index + */ +static inline void ladder_do_selection(struct ladder_device *ldev, + int old_idx, int new_idx) +{ + ldev->states[old_idx].stats.promotion_count = 0; + ldev->states[old_idx].stats.demotion_count = 0; + ldev->last_state_idx = new_idx; +} + +/** + * ladder_select_state - selects the next state to enter + * @dev: the CPU + */ +static int ladder_select_state(struct cpuidle_device *dev) +{ + struct ladder_device *ldev = dev->governor_data; + struct ladder_device_state *last_state; + int last_residency, last_idx = ldev->last_state_idx; + + if (unlikely(!ldev)) + return 0; + + last_state = &ldev->states[last_idx]; + + /* demote if within BM threshold */ + if (ldev->bm_check) { + unsigned long diff; + + diff = jiffies - ldev->bm_check_timestamp; + if (diff > 31) + diff = 31; + + ldev->bm_activity <<= diff; + if (cpuidle_get_bm_activity()) + ldev->bm_activity |= ((1 << diff) - 1); + + ldev->bm_check_timestamp = jiffies; + if ((last_idx > 0) && + (last_state->threshold.bm & ldev->bm_activity)) { + ladder_do_selection(ldev, last_idx, last_idx - 1); + return last_idx - 1; + } + } + + if (dev->states[last_idx].flags & CPUIDLE_FLAG_TIME_VALID) + last_residency = cpuidle_get_last_residency(dev) - dev->states[last_idx].exit_latency; + else + last_residency = last_state->threshold.promotion_time + 1; + + /* consider promotion */ + if (last_idx < dev->state_count - 1 && + last_residency > last_state->threshold.promotion_time && + dev->states[last_idx + 1].exit_latency <= system_latency_constraint()) { + last_state->stats.promotion_count++; + last_state->stats.demotion_count = 0; + if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) { + ladder_do_selection(ldev, last_idx, last_idx + 1); + return last_idx + 1; + } + } + + /* consider demotion */ + if (last_idx > 0 && + last_residency < last_state->threshold.demotion_time) { + last_state->stats.demotion_count++; + last_state->stats.promotion_count = 0; + if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) { + ladder_do_selection(ldev, last_idx, last_idx - 1); + return last_idx - 1; + } + } + + /* otherwise remain at the current state */ + return last_idx; +} + +/** + * ladder_scan_device - scans a CPU's states and does setup + * @dev: the CPU + */ +static void ladder_scan_device(struct cpuidle_device *dev) +{ + int i, bm_check = 0; + struct ladder_device *ldev = dev->governor_data; + struct ladder_device_state *lstate; + struct cpuidle_state *state; + + ldev->last_state_idx = 0; + ldev->bm_check_timestamp = 0; + ldev->bm_activity = 0; + + for (i = 0; i < dev->state_count; i++) { + state = &dev->states[i]; + lstate = &ldev->states[i]; + + lstate->stats.promotion_count = 0; + lstate->stats.demotion_count = 0; + + lstate->threshold.promotion_count = PROMOTION_COUNT; + 
lstate->threshold.demotion_count = DEMOTION_COUNT; + + if (i < dev->state_count - 1) + lstate->threshold.promotion_time = state->exit_latency; + if (i > 0) + lstate->threshold.demotion_time = state->exit_latency; + if (state->flags & CPUIDLE_FLAG_CHECK_BM) { + lstate->threshold.bm = bm_history; + bm_check = 1; + } else + lstate->threshold.bm = 0; + } + + ldev->bm_check = bm_check; +} + +/** + * ladder_init_device - initializes a CPU-instance + * @dev: the CPU + */ +static int ladder_init_device(struct cpuidle_device *dev) +{ + dev->governor_data = kmalloc(sizeof(struct ladder_device), GFP_KERNEL); + + return !dev->governor_data; +} + +/** + * ladder_exit_device - exits a CPU-instance + * @dev: the CPU + */ +static void ladder_exit_device(struct cpuidle_device *dev) +{ + kfree(dev->governor_data); +} + +static struct cpuidle_governor ladder_governor = { + .name = "ladder", + .init = ladder_init_device, + .exit = ladder_exit_device, + .scan = ladder_scan_device, + .select = ladder_select_state, + .owner = THIS_MODULE, +}; + +/** + * init_ladder - initializes the governor + */ +static int __init init_ladder(void) +{ + return cpuidle_register_governor(&ladder_governor); +} + +/** + * exit_ladder - exits the governor + */ +static void __exit exit_ladder(void) +{ + cpuidle_unregister_governor(&ladder_governor); +} + +MODULE_LICENSE("GPL"); +module_init(init_ladder); +module_exit(exit_ladder); diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c new file mode 100644 index 0000000..f00dfc6 --- /dev/null +++ b/drivers/cpuidle/governors/menu.c @@ -0,0 +1,152 @@ +/* + * menu.c - the menu idle governor + * + * Copyright (C) 2006-2007 Adam Belay + * + * This code is licenced under the GPL. + */ + +#include +#include +#include +#include +#include +#include +#include + +#define BM_HOLDOFF 20000 /* 20 ms */ + +struct menu_device { + int last_state_idx; + int deepest_bm_state; + + int break_last_us; + int break_elapsed_us; + + int bm_elapsed_us; + int bm_holdoff_us; + + unsigned long idle_jiffies; +}; + +static DEFINE_PER_CPU(struct menu_device, menu_devices); + +/** + * menu_select - selects the next idle state to enter + * @dev: the CPU + */ +static int menu_select(struct cpuidle_device *dev) +{ + struct menu_device *data = &__get_cpu_var(menu_devices); + int i, expected_us, max_state = dev->state_count; + + /* discard BM history because it is sticky */ + cpuidle_get_bm_activity(); + + /* determine the expected residency time */ + expected_us = (s32) ktime_to_ns(tick_nohz_get_sleep_length()) / 1000; + expected_us = min(expected_us, data->break_last_us); + + /* determine the maximum state compatible with current BM status */ + if (cpuidle_get_bm_activity()) + data->bm_elapsed_us = 0; + if (data->bm_elapsed_us <= data->bm_holdoff_us) + max_state = data->deepest_bm_state + 1; + + /* find the deepest idle state that satisfies our constraints */ + for (i = 1; i < max_state; i++) { + struct cpuidle_state *s = &dev->states[i]; + if (s->target_residency > expected_us) + break; + if (s->exit_latency > system_latency_constraint()) + break; + } + + data->last_state_idx = i - 1; + data->idle_jiffies = tick_nohz_get_idle_jiffies(); + return i - 1; +} + +/** + * menu_reflect - attempts to guess what happened after entry + * @dev: the CPU + * + * NOTE: it's important to be fast here because this operation will add to + * the overall exit latency. 
+ */ +static void menu_reflect(struct cpuidle_device *dev) +{ + struct menu_device *data = &__get_cpu_var(menu_devices); + int last_idx = data->last_state_idx; + int measured_us = cpuidle_get_last_residency(dev); + struct cpuidle_state *target = &dev->states[last_idx]; + + /* + * Ugh, this idle state doesn't support residency measurements, so we + * are basically lost in the dark. As a compromise, assume we slept + * for one full standard timer tick. However, be aware that this + * could potentially result in a suboptimal state transition. + */ + if (!(target->flags & CPUIDLE_FLAG_TIME_VALID)) + measured_us = USEC_PER_SEC / HZ; + + data->bm_elapsed_us += measured_us; + data->break_elapsed_us += measured_us; + + /* + * Did something other than the timer interrupt cause the break event? + */ + if (tick_nohz_get_idle_jiffies() == data->idle_jiffies) { + data->break_last_us = data->break_elapsed_us; + data->break_elapsed_us = 0; + } +} + +/** + * menu_scan_device - scans a CPU's states and does setup + * @dev: the CPU + */ +static void menu_scan_device(struct cpuidle_device *dev) +{ + struct menu_device *data = &per_cpu(menu_devices, dev->cpu); + int i; + + data->last_state_idx = 0; + data->break_last_us = 0; + data->break_elapsed_us = 0; + data->bm_elapsed_us = 0; + data->bm_holdoff_us = BM_HOLDOFF; + + for (i = 1; i < dev->state_count; i++) + if (dev->states[i].flags & CPUIDLE_FLAG_CHECK_BM) + break; + data->deepest_bm_state = i - 1; +} + +struct cpuidle_governor menu_governor = { + .name = "menu", + .scan = menu_scan_device, + .select = menu_select, + .reflect = menu_reflect, + .owner = THIS_MODULE, +}; + +/** + * init_menu - initializes the governor + */ +static int __init init_menu(void) +{ + return cpuidle_register_governor(&menu_governor); +} + +/** + * exit_menu - exits the governor + */ +static void __exit exit_menu(void) +{ + cpuidle_unregister_governor(&menu_governor); +} + +MODULE_LICENSE("GPL"); +module_init(init_menu); +module_exit(exit_menu); diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c new file mode 100644 index 0000000..5010762 --- /dev/null +++ b/drivers/cpuidle/sysfs.c @@ -0,0 +1,373 @@ +/* + * sysfs.c - sysfs support + * + * (C) 2006-2007 Shaohua Li + * + * This code is licenced under the GPL. 
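 *
 * For orientation, the attributes registered below end up at the
 * following sysfs paths (assuming the usual sysdev layout, where the
 * cpu sysdev class lives at /sys/devices/system/cpu):
 *
 *   /sys/devices/system/cpu/cpuidle/{available_drivers,current_driver}
 *   /sys/devices/system/cpu/cpuidle/{available_governors,current_governor}
 *   /sys/devices/system/cpu/cpuN/cpuidle/stateM/{latency,power,usage,time}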
+ */ + +#include +#include +#include +#include + +#include "cpuidle.h" + +static ssize_t show_available_drivers(struct sys_device *dev, char *buf) +{ + ssize_t i = 0; + struct cpuidle_driver *tmp; + + mutex_lock(&cpuidle_lock); + list_for_each_entry(tmp, &cpuidle_drivers, driver_list) { + if (i >= (ssize_t)((PAGE_SIZE/sizeof(char)) - CPUIDLE_NAME_LEN - 2)) + goto out; + i += scnprintf(&buf[i], CPUIDLE_NAME_LEN, "%s ", tmp->name); + } +out: + i+= sprintf(&buf[i], "\n"); + mutex_unlock(&cpuidle_lock); + return i; +} + +static ssize_t show_available_governors(struct sys_device *dev, char *buf) +{ + ssize_t i = 0; + struct cpuidle_governor *tmp; + + mutex_lock(&cpuidle_lock); + list_for_each_entry(tmp, &cpuidle_governors, governor_list) { + if (i >= (ssize_t)((PAGE_SIZE/sizeof(char)) - CPUIDLE_NAME_LEN - 2)) + goto out; + i += scnprintf(&buf[i], CPUIDLE_NAME_LEN, "%s ", tmp->name); + } + if (list_empty(&cpuidle_governors)) + i+= sprintf(&buf[i], "no governors"); +out: + i+= sprintf(&buf[i], "\n"); + mutex_unlock(&cpuidle_lock); + return i; +} + +static ssize_t show_current_driver(struct sys_device *dev, char *buf) +{ + ssize_t ret; + + mutex_lock(&cpuidle_lock); + ret = sprintf(buf, "%s\n", cpuidle_curr_driver->name); + mutex_unlock(&cpuidle_lock); + return ret; +} + +static ssize_t store_current_driver(struct sys_device *dev, + const char *buf, size_t count) +{ + char str[CPUIDLE_NAME_LEN]; + int len = count; + struct cpuidle_driver *tmp, *found = NULL; + + if (len > CPUIDLE_NAME_LEN) + len = CPUIDLE_NAME_LEN; + + if (sscanf(buf, "%s", str) != 1) + return -EINVAL; + + mutex_lock(&cpuidle_lock); + list_for_each_entry(tmp, &cpuidle_drivers, driver_list) { + if (strncmp(tmp->name, str, CPUIDLE_NAME_LEN) == 0) { + found = tmp; + break; + } + } + if (found) + cpuidle_switch_driver(found); + mutex_unlock(&cpuidle_lock); + + return count; +} + +static ssize_t show_current_governor(struct sys_device *dev, char *buf) +{ + ssize_t i; + + mutex_lock(&cpuidle_lock); + if (cpuidle_curr_governor) + i = sprintf(buf, "%s\n", cpuidle_curr_governor->name); + else + i = sprintf(buf, "no governor\n"); + mutex_unlock(&cpuidle_lock); + + return i; +} + +static ssize_t store_current_governor(struct sys_device *dev, + const char *buf, size_t count) +{ + char str[CPUIDLE_NAME_LEN]; + int len = count; + struct cpuidle_governor *tmp, *found = NULL; + + if (len > CPUIDLE_NAME_LEN) + len = CPUIDLE_NAME_LEN; + + if (sscanf(buf, "%s", str) != 1) + return -EINVAL; + + mutex_lock(&cpuidle_lock); + list_for_each_entry(tmp, &cpuidle_governors, governor_list) { + if (strncmp(tmp->name, str, CPUIDLE_NAME_LEN) == 0) { + found = tmp; + break; + } + } + if (found) + cpuidle_switch_governor(found); + mutex_unlock(&cpuidle_lock); + + return count; +} + +static SYSDEV_ATTR(available_drivers, 0444, show_available_drivers, NULL); +static SYSDEV_ATTR(available_governors, 0444, show_available_governors, NULL); +static SYSDEV_ATTR(current_driver, 0644, show_current_driver, + store_current_driver); +static SYSDEV_ATTR(current_governor, 0644, show_current_governor, + store_current_governor); + +static struct attribute *cpuclass_default_attrs[] = { + &attr_available_drivers.attr, + &attr_available_governors.attr, + &attr_current_driver.attr, + &attr_current_governor.attr, + NULL +}; + +static struct attribute_group cpuclass_attr_group = { + .attrs = cpuclass_default_attrs, + .name = "cpuidle", +}; + +/** + * cpuidle_add_class_sysfs - add CPU global sysfs attributes + */ +int cpuidle_add_class_sysfs(struct sysdev_class *cls) +{ + return 
sysfs_create_group(&cls->kset.kobj, &cpuclass_attr_group); +} + +/** + * cpuidle_remove_class_sysfs - remove CPU global sysfs attributes + */ +void cpuidle_remove_class_sysfs(struct sysdev_class *cls) +{ + sysfs_remove_group(&cls->kset.kobj, &cpuclass_attr_group); +} + +struct cpuidle_attr { + struct attribute attr; + ssize_t (*show)(struct cpuidle_device *, char *); + ssize_t (*store)(struct cpuidle_device *, const char *, size_t count); +}; + +#define define_one_ro(_name, show) \ + static struct cpuidle_attr attr_##_name = __ATTR(_name, 0444, show, NULL) +#define define_one_rw(_name, show, store) \ + static struct cpuidle_attr attr_##_name = __ATTR(_name, 0644, show, store) + +#define kobj_to_cpuidledev(k) container_of(k, struct cpuidle_device, kobj) +#define attr_to_cpuidleattr(a) container_of(a, struct cpuidle_attr, attr) +static ssize_t cpuidle_show(struct kobject * kobj, struct attribute * attr ,char * buf) +{ + int ret = -EIO; + struct cpuidle_device *dev = kobj_to_cpuidledev(kobj); + struct cpuidle_attr * cattr = attr_to_cpuidleattr(attr); + + if (cattr->show) { + mutex_lock(&cpuidle_lock); + ret = cattr->show(dev, buf); + mutex_unlock(&cpuidle_lock); + } + return ret; +} + +static ssize_t cpuidle_store(struct kobject * kobj, struct attribute * attr, + const char * buf, size_t count) +{ + int ret = -EIO; + struct cpuidle_device *dev = kobj_to_cpuidledev(kobj); + struct cpuidle_attr * cattr = attr_to_cpuidleattr(attr); + + if (cattr->store) { + mutex_lock(&cpuidle_lock); + ret = cattr->store(dev, buf, count); + mutex_unlock(&cpuidle_lock); + } + return ret; +} + +static struct sysfs_ops cpuidle_sysfs_ops = { + .show = cpuidle_show, + .store = cpuidle_store, +}; + +static void cpuidle_sysfs_release(struct kobject *kobj) +{ + struct cpuidle_device *dev = kobj_to_cpuidledev(kobj); + + complete(&dev->kobj_unregister); +} + +static struct kobj_type ktype_cpuidle = { + .sysfs_ops = &cpuidle_sysfs_ops, + .release = cpuidle_sysfs_release, +}; + +struct cpuidle_state_attr { + struct attribute attr; + ssize_t (*show)(struct cpuidle_state *, char *); + ssize_t (*store)(struct cpuidle_state *, const char *, size_t); +}; + +#define define_one_state_ro(_name, show) \ +static struct cpuidle_state_attr attr_##_name = __ATTR(_name, 0444, show, NULL) + +#define define_show_state_function(_name) \ +static ssize_t show_state_##_name(struct cpuidle_state *state, char *buf) \ +{ \ + return sprintf(buf, "%d\n", state->_name);\ +} + +define_show_state_function(exit_latency) +define_show_state_function(power_usage) +define_show_state_function(usage) +define_show_state_function(time) +define_one_state_ro(latency, show_state_exit_latency); +define_one_state_ro(power, show_state_power_usage); +define_one_state_ro(usage, show_state_usage); +define_one_state_ro(time, show_state_time); + +static struct attribute *cpuidle_state_default_attrs[] = { + &attr_latency.attr, + &attr_power.attr, + &attr_usage.attr, + &attr_time.attr, + NULL +}; + +#define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj) +#define kobj_to_state(k) (kobj_to_state_obj(k)->state) +#define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr) +static ssize_t cpuidle_state_show(struct kobject * kobj, + struct attribute * attr ,char * buf) +{ + int ret = -EIO; + struct cpuidle_state *state = kobj_to_state(kobj); + struct cpuidle_state_attr * cattr = attr_to_stateattr(attr); + + if (cattr->show) + ret = cattr->show(state, buf); + + return ret; +} + +static struct sysfs_ops cpuidle_state_sysfs_ops = { + .show = 
cpuidle_state_show, +}; + +static void cpuidle_state_sysfs_release(struct kobject *kobj) +{ + struct cpuidle_state_kobj *state_obj = kobj_to_state_obj(kobj); + + complete(&state_obj->kobj_unregister); +} + +static struct kobj_type ktype_state_cpuidle = { + .sysfs_ops = &cpuidle_state_sysfs_ops, + .default_attrs = cpuidle_state_default_attrs, + .release = cpuidle_state_sysfs_release, +}; + +static void inline cpuidle_free_state_kobj(struct cpuidle_device *device, int i) +{ + kobject_unregister(&device->kobjs[i]->kobj); + wait_for_completion(&device->kobjs[i]->kobj_unregister); + kfree(device->kobjs[i]); + device->kobjs[i] = NULL; +} + +/** + * cpuidle_add_driver_sysfs - adds driver-specific sysfs attributes + * @device: the target device + */ +int cpuidle_add_driver_sysfs(struct cpuidle_device *device) +{ + int i, ret; + struct cpuidle_state_kobj *kobj; + + /* state statistics */ + for (i = 0; i < device->state_count; i++) { + kobj = kzalloc(sizeof(struct cpuidle_state_kobj), GFP_KERNEL); + if (!kobj) + goto error_state; + kobj->state = &device->states[i]; + init_completion(&kobj->kobj_unregister); + + kobj->kobj.parent = &device->kobj; + kobj->kobj.ktype = &ktype_state_cpuidle; + kobject_set_name(&kobj->kobj, "state%d", i); + ret = kobject_register(&kobj->kobj); + if (ret) { + kfree(kobj); + goto error_state; + } + device->kobjs[i] = kobj; + } + + return 0; + +error_state: + for (i = i - 1; i >= 0; i--) + cpuidle_free_state_kobj(device, i); + return ret; +} + +/** + * cpuidle_remove_driver_sysfs - removes driver-specific sysfs attributes + * @device: the target device + */ +void cpuidle_remove_driver_sysfs(struct cpuidle_device *device) +{ + int i; + + for (i = 0; i < device->state_count; i++) + cpuidle_free_state_kobj(device, i); +} + +/** + * cpuidle_add_sysfs - creates a sysfs instance for the target device + * @sysdev: the target device + */ +int cpuidle_add_sysfs(struct sys_device *sysdev) +{ + int cpu = sysdev->id; + struct cpuidle_device *dev; + + dev = per_cpu(cpuidle_devices, cpu); + dev->kobj.parent = &sysdev->kobj; + dev->kobj.ktype = &ktype_cpuidle; + kobject_set_name(&dev->kobj, "%s", "cpuidle"); + return kobject_register(&dev->kobj); +} + +/** + * cpuidle_remove_sysfs - deletes a sysfs instance on the target device + * @sysdev: the target device + */ +void cpuidle_remove_sysfs(struct sys_device *sysdev) +{ + int cpu = sysdev->id; + struct cpuidle_device *dev; + + dev = per_cpu(cpuidle_devices, cpu); + kobject_unregister(&dev->kobj); +} diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig index 403dac7..7b91ef7 100644 --- a/drivers/video/Kconfig +++ b/drivers/video/Kconfig @@ -12,6 +12,13 @@ config VGASTATE tristate default n +config VIDEO_OUTPUT_CONTROL + tristate "Lowlevel video output switch controls" + default m + help + This framework adds support for low-level control of the video + output switch. 
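As a rough illustration of how a driver is expected to consume this class (modelled on the acpi_video hunk earlier in this patch set), the sketch below registers one output device with a pair of state callbacks and tears it down on exit. Everything named demo_* is hypothetical, the <linux/video_output.h> header name and the request_state field are assumptions, and error handling is elided just as it is in the acpi_video hunk; only video_output_register(), video_output_unregister() and the output_properties callbacks are taken from this series.

#include <linux/module.h>
#include <linux/video_output.h>	/* header name assumed */

static int demo_state;			/* hypothetical backing state */

/* Called when user space writes the output class "state" attribute. */
static int demo_set_state(struct output_device *od)
{
	demo_state = od->request_state;	/* request_state: assumed field */
	return 0;
}

/* Called when user space reads the current status. */
static int demo_get_status(struct output_device *od)
{
	return demo_state;
}

static struct output_properties demo_output_properties = {
	.set_state	= demo_set_state,
	.get_status	= demo_get_status,
};

static struct output_device *demo_od;

static int __init demo_init(void)
{
	/* Mirrors the acpi_video usage: name, parent device, devdata, props. */
	demo_od = video_output_register("demo_output", NULL, NULL,
					&demo_output_properties);
	return 0;
}

static void __exit demo_exit(void)
{
	video_output_unregister(demo_od);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");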
+ config FB tristate "Support for frame buffer devices" ---help--- diff --git a/drivers/video/Makefile b/drivers/video/Makefile index bd8b052..d955074 100644 --- a/drivers/video/Makefile +++ b/drivers/video/Makefile @@ -122,3 +122,6 @@ obj-$(CONFIG_FB_OF) += off # the test framebuffer is last obj-$(CONFIG_FB_VIRTUAL) += vfb.o + +#video output switch sysfs driver +obj-$(CONFIG_VIDEO_OUTPUT_CONTROL) += output.o diff --git a/include/acpi/acmacros.h b/include/acpi/acmacros.h index 8948a64..c22f6da 100644 --- a/include/acpi/acmacros.h +++ b/include/acpi/acmacros.h @@ -486,6 +486,8 @@ #else #define ACPI_FUNCTION_NAME(name) #endif +#ifdef DEBUG_FUNC_TRACE + #define ACPI_FUNCTION_TRACE(a) ACPI_FUNCTION_NAME(a) \ acpi_ut_trace(ACPI_DEBUG_PARAMETERS) #define ACPI_FUNCTION_TRACE_PTR(a,b) ACPI_FUNCTION_NAME(a) \ @@ -563,6 +565,27 @@ #define return_UINT32(s) #endif /* ACPI_SIMPLE_RETURN_MACROS */ +#else /* !DEBUG_FUNC_TRACE */ + +#define ACPI_FUNCTION_TRACE(a) +#define ACPI_FUNCTION_TRACE_PTR(a,b) +#define ACPI_FUNCTION_TRACE_U32(a,b) +#define ACPI_FUNCTION_TRACE_STR(a,b) +#define ACPI_FUNCTION_EXIT +#define ACPI_FUNCTION_STATUS_EXIT(s) +#define ACPI_FUNCTION_VALUE_EXIT(s) +#define ACPI_FUNCTION_TRACE(a) +#define ACPI_FUNCTION_ENTRY() + +#define return_VOID return +#define return_ACPI_STATUS(s) return(s) +#define return_VALUE(s) return(s) +#define return_UINT8(s) return(s) +#define return_UINT32(s) return(s) +#define return_PTR(s) return(s) + +#endif /* DEBUG_FUNC_TRACE */ + /* Conditional execution */ #define ACPI_DEBUG_EXEC(a) a diff --git a/include/acpi/acoutput.h b/include/acpi/acoutput.h index 7812267..c090a8b 100644 --- a/include/acpi/acoutput.h +++ b/include/acpi/acoutput.h @@ -178,8 +178,8 @@ #define ACPI_DB_ALL ACPI /* Defaults for debug_level, debug and normal */ -#define ACPI_DEBUG_DEFAULT (ACPI_LV_INIT | ACPI_LV_WARN | ACPI_LV_ERROR | ACPI_LV_DEBUG_OBJECT) -#define ACPI_NORMAL_DEFAULT (ACPI_LV_INIT | ACPI_LV_WARN | ACPI_LV_ERROR | ACPI_LV_DEBUG_OBJECT) +#define ACPI_DEBUG_DEFAULT (ACPI_LV_INIT | ACPI_LV_WARN | ACPI_LV_ERROR) +#define ACPI_NORMAL_DEFAULT (ACPI_LV_INIT | ACPI_LV_WARN | ACPI_LV_ERROR) #define ACPI_DEBUG_ALL (ACPI_LV_AML_DISASSEMBLE | ACPI_LV_ALL_EXCEPTIONS | ACPI_LV_ALL) #endif /* __ACOUTPUT_H__ */ diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h index dab2ec5..c785485 100644 --- a/include/acpi/platform/acenv.h +++ b/include/acpi/platform/acenv.h @@ -136,7 +136,7 @@ #endif /*! [Begin] no source code translation */ -#if defined(__linux__) +#if defined(_LINUX) || defined(__linux__) #include "aclinux.h" #elif defined(_AED_EFI) diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h index a568717..6ed15a0 100644 --- a/include/acpi/platform/aclinux.h +++ b/include/acpi/platform/aclinux.h @@ -91,7 +91,10 @@ #define COMPILER_DEPENDENT_UINT64 unsi #define ACPI_USE_NATIVE_DIVIDE #endif +#ifndef __cdecl #define __cdecl +#endif + #define ACPI_FLUSH_CPU_CACHE() #endif /* __KERNEL__ */ diff --git a/include/acpi/processor.h b/include/acpi/processor.h index b4b0ffd..1cd6030 100644 --- a/include/acpi/processor.h +++ b/include/acpi/processor.h @@ -21,6 +21,8 @@ #define ACPI_PDC_REVISION_ID 0x1 #define ACPI_PSD_REV0_REVISION 0 /* Support for _PSD as in ACPI 3.0 */ #define ACPI_PSD_REV0_ENTRIES 5 +#define ACPI_TSD_REV0_REVISION 0 /* Support for _PSD as in ACPI 3.0 */ +#define ACPI_TSD_REV0_ENTRIES 5 /* * Types of coordination defined in ACPI 3.0. 
Same macros can be used across * P, C and T states @@ -125,17 +127,53 @@ struct acpi_processor_performance { /* Throttling Control */ +struct acpi_tsd_package { + acpi_integer num_entries; + acpi_integer revision; + acpi_integer domain; + acpi_integer coord_type; + acpi_integer num_processors; +} __attribute__ ((packed)); + +struct acpi_ptc_register { + u8 descriptor; + u16 length; + u8 space_id; + u8 bit_width; + u8 bit_offset; + u8 reserved; + u64 address; +} __attribute__ ((packed)); + +struct acpi_processor_tx_tss { + acpi_integer freqpercentage; /* */ + acpi_integer power; /* milliWatts */ + acpi_integer transition_latency; /* microseconds */ + acpi_integer control; /* control value */ + acpi_integer status; /* success indicator */ +}; struct acpi_processor_tx { u16 power; u16 performance; }; +struct acpi_processor; struct acpi_processor_throttling { - int state; + unsigned int state; + unsigned int platform_limit; + struct acpi_pct_register control_register; + struct acpi_pct_register status_register; + unsigned int state_count; + struct acpi_processor_tx_tss *states_tss; + struct acpi_tsd_package domain_info; + cpumask_t shared_cpu_map; + int (*acpi_processor_get_throttling) (struct acpi_processor * pr); + int (*acpi_processor_set_throttling) (struct acpi_processor * pr, + int state); + u32 address; u8 duty_offset; u8 duty_width; - int state_count; struct acpi_processor_tx states[ACPI_PROCESSOR_MAX_THROTTLING]; }; @@ -161,6 +199,7 @@ struct acpi_processor_flags { u8 bm_check:1; u8 has_cst:1; u8 power_setup_done:1; + u8 bm_rld_set:1; }; struct acpi_processor { @@ -169,6 +208,9 @@ struct acpi_processor { u32 id; u32 pblk; int performance_platform_limit; + int throttling_platform_limit; + /* 0 - states 0..n-th state available */ + struct acpi_processor_flags flags; struct acpi_processor_power power; struct acpi_processor_performance *performance; @@ -270,7 +312,7 @@ #endif /* CONFIG_CPU_FREQ */ /* in processor_throttling.c */ int acpi_processor_get_throttling_info(struct acpi_processor *pr); -int acpi_processor_set_throttling(struct acpi_processor *pr, int state); +extern int acpi_processor_set_throttling(struct acpi_processor *pr, int state); extern struct file_operations acpi_processor_throttling_fops; /* in processor_idle.c */ @@ -279,6 +321,8 @@ int acpi_processor_power_init(struct acp int acpi_processor_cst_has_changed(struct acpi_processor *pr); int acpi_processor_power_exit(struct acpi_processor *pr, struct acpi_device *device); +extern struct cpuidle_driver acpi_idle_driver; +void acpi_max_cstate_changed(void); /* in processor_thermal.c */ int acpi_processor_get_limit_info(struct acpi_processor *pr); diff --git a/include/linux/acpi.h b/include/linux/acpi.h index fccd8b5..137baad 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -206,11 +206,8 @@ static inline unsigned int acpi_get_csta { return max_cstate; } -static inline void acpi_set_cstate_limit(unsigned int new_limit) -{ - max_cstate = new_limit; - return; -} +extern void (*acpi_do_set_cstate_limit)(void); +extern void acpi_set_cstate_limit(unsigned int new_limit); #else static inline unsigned int acpi_get_cstate_limit(void) { return 0; } static inline void acpi_set_cstate_limit(unsigned int new_limit) { return; } diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h new file mode 100644 index 0000000..3e37e38 --- /dev/null +++ b/include/linux/cpuidle.h @@ -0,0 +1,189 @@ +/* + * cpuidle.h - a generic framework for CPU idle power management + * + * (C) 2007 Venkatesh Pallipadi + * Shaohua Li + * Adam 
Belay + * + * This code is licenced under the GPL. + */ + +#ifndef _LINUX_CPUIDLE_H +#define _LINUX_CPUIDLE_H + +#include +#include +#include +#include +#include + +#define CPUIDLE_STATE_MAX 8 +#define CPUIDLE_NAME_LEN 16 + +struct cpuidle_device; + + +/**************************** + * CPUIDLE DEVICE INTERFACE * + ****************************/ + +struct cpuidle_state { + char name[CPUIDLE_NAME_LEN]; + void *driver_data; + + unsigned int flags; + unsigned int exit_latency; /* in US */ + unsigned int power_usage; /* in mW */ + unsigned int target_residency; /* in US */ + + unsigned int usage; + unsigned int time; /* in US */ + + int (*enter) (struct cpuidle_device *dev, + struct cpuidle_state *state); +}; + +/* Idle State Flags */ +#define CPUIDLE_FLAG_TIME_VALID (0x01) /* is residency time measurable? */ +#define CPUIDLE_FLAG_CHECK_BM (0x02) /* BM activity will exit state */ +#define CPUIDLE_FLAG_SHALLOW (0x10) /* low latency, minimal savings */ +#define CPUIDLE_FLAG_BALANCED (0x20) /* medium latency, moderate savings */ +#define CPUIDLE_FLAG_DEEP (0x40) /* high latency, large savings */ + +#define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000) + +/** + * cpuidle_get_statedata - retrieves private driver state data + * @state: the state + */ +static inline void * cpuidle_get_statedata(struct cpuidle_state *state) +{ + return state->driver_data; +} + +/** + * cpuidle_set_statedata - stores private driver state data + * @state: the state + * @data: the private data + */ +static inline void +cpuidle_set_statedata(struct cpuidle_state *state, void *data) +{ + state->driver_data = data; +} + +struct cpuidle_state_kobj { + struct cpuidle_state *state; + struct completion kobj_unregister; + struct kobject kobj; +}; + +struct cpuidle_device { + unsigned int status; + int cpu; + + int last_residency; + int state_count; + struct cpuidle_state states[CPUIDLE_STATE_MAX]; + struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX]; + struct cpuidle_state *last_state; + + struct list_head device_list; + struct kobject kobj; + struct completion kobj_unregister; + void *governor_data; +}; + +DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices); + +/* Device Status Flags */ +#define CPUIDLE_STATUS_DETECTED (0x1) +#define CPUIDLE_STATUS_DRIVER_ATTACHED (0x2) +#define CPUIDLE_STATUS_GOVERNOR_ATTACHED (0x4) +#define CPUIDLE_STATUS_DOIDLE (CPUIDLE_STATUS_DETECTED | \ + CPUIDLE_STATUS_DRIVER_ATTACHED | \ + CPUIDLE_STATUS_GOVERNOR_ATTACHED) + +/** + * cpuidle_get_last_residency - retrieves the last state's residency time + * @dev: the target CPU + * + * NOTE: this value is invalid if CPUIDLE_FLAG_TIME_VALID isn't set + */ +static inline int cpuidle_get_last_residency(struct cpuidle_device *dev) +{ + return dev->last_residency; +} + + +/**************************** + * CPUIDLE DRIVER INTERFACE * + ****************************/ + +struct cpuidle_driver { + char name[CPUIDLE_NAME_LEN]; + struct list_head driver_list; + + int (*init) (struct cpuidle_device *dev); + void (*exit) (struct cpuidle_device *dev); + int (*redetect) (struct cpuidle_device *dev); + + int (*bm_check) (void); + + struct module *owner; +}; + +#ifdef CONFIG_CPU_IDLE + +extern int cpuidle_register_driver(struct cpuidle_driver *drv); +extern void cpuidle_unregister_driver(struct cpuidle_driver *drv); +extern int cpuidle_force_redetect(struct cpuidle_device *dev, struct cpuidle_driver *drv); +extern int cpuidle_force_redetect_devices(struct cpuidle_driver *drv); + +#else + +static inline int cpuidle_register_driver(struct cpuidle_driver *drv) +{return 0;} 
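The driver-side interface declared above is easiest to see with a concrete, if contrived, registration. The sketch below is a minimal single-state driver: everything prefixed demo_ is hypothetical, the latency and residency numbers are placeholders, and the body of the enter callback is stubbed out because real low-power entry is platform specific; only the cpuidle_driver/cpuidle_state layout, the CPUIDLE_FLAG_* values, and cpuidle_register_driver()/cpuidle_unregister_driver() come from this patch. Per the core code earlier in the series, enter() reports the time spent in the state, in microseconds.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/ktime.h>
#include <linux/irqflags.h>
#include <linux/cpuidle.h>

/* Hypothetical single-state idle driver. */
static int demo_idle_enter(struct cpuidle_device *dev,
			   struct cpuidle_state *state)
{
	ktime_t before = ktime_get();

	/* Platform-specific low-power entry would go here; as a stand-in
	 * we only re-enable interrupts, as the shallowest states do. */
	local_irq_enable();

	/* Report the measured residency in microseconds. */
	return (int) (ktime_to_ns(ktime_sub(ktime_get(), before)) / 1000);
}

static int demo_idle_init(struct cpuidle_device *dev)
{
	struct cpuidle_state *s = &dev->states[0];

	strcpy(s->name, "DEMO1");
	s->flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_SHALLOW;
	s->exit_latency = 1;		/* us, placeholder */
	s->target_residency = 1;	/* us, placeholder */
	s->enter = demo_idle_enter;
	dev->state_count = 1;

	return 0;
}

static struct cpuidle_driver demo_idle_driver = {
	.name	= "demo_idle",
	.init	= demo_idle_init,
	.owner	= THIS_MODULE,
};

static int __init demo_idle_register(void)
{
	return cpuidle_register_driver(&demo_idle_driver);
}

static void __exit demo_idle_unregister(void)
{
	cpuidle_unregister_driver(&demo_idle_driver);
}

module_init(demo_idle_register);
module_exit(demo_idle_unregister);
MODULE_LICENSE("GPL");

Once attached, a governor registered through the interface below would see this state via dev->states[0] and pick it from its select() callback.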
+static inline void cpuidle_unregister_driver(struct cpuidle_driver *drv) { } +static inline int cpuidle_force_redetect(struct cpuidle_device *dev, struct cpuidle_driver *drv) +{return 0;} +static inline int cpuidle_force_redetect_devices(struct cpuidle_driver *drv) +{return 0;} + +#endif + +/****************************** + * CPUIDLE GOVERNOR INTERFACE * + ******************************/ + +struct cpuidle_governor { + char name[CPUIDLE_NAME_LEN]; + struct list_head governor_list; + + int (*init) (struct cpuidle_device *dev); + void (*exit) (struct cpuidle_device *dev); + void (*scan) (struct cpuidle_device *dev); + + int (*select) (struct cpuidle_device *dev); + void (*reflect) (struct cpuidle_device *dev); + + struct module *owner; +}; + +#ifdef CONFIG_CPU_IDLE + +extern int cpuidle_register_governor(struct cpuidle_governor *gov); +extern void cpuidle_unregister_governor(struct cpuidle_governor *gov); +extern int cpuidle_get_bm_activity(void); + +#else + +static inline int cpuidle_register_governor(struct cpuidle_governor *gov) +{return 0;} +static inline void cpuidle_unregister_governor(struct cpuidle_governor *gov) { } +static inline int cpuidle_get_bm_activity(void) +{return 0;} + +#endif + +#endif /* _LINUX_CPUIDLE_H */ diff --git a/include/linux/tick.h b/include/linux/tick.h index 9a7252e..319b8c9 100644 --- a/include/linux/tick.h +++ b/include/linux/tick.h @@ -40,6 +40,7 @@ enum tick_nohz_mode { * @idle_sleeps: Number of idle calls, where the sched tick was stopped * @idle_entrytime: Time when the idle call was entered * @idle_sleeptime: Sum of the time slept in idle with sched tick stopped + * @sleep_length: Duration of the current idle sleep */ struct tick_sched { struct hrtimer sched_timer; @@ -52,6 +53,7 @@ struct tick_sched { unsigned long idle_sleeps; ktime_t idle_entrytime; ktime_t idle_sleeptime; + ktime_t sleep_length; unsigned long last_jiffies; unsigned long next_jiffies; ktime_t idle_expires; @@ -100,10 +102,18 @@ # ifdef CONFIG_NO_HZ extern void tick_nohz_stop_sched_tick(void); extern void tick_nohz_restart_sched_tick(void); extern void tick_nohz_update_jiffies(void); +extern ktime_t tick_nohz_get_sleep_length(void); +extern unsigned long tick_nohz_get_idle_jiffies(void); # else static inline void tick_nohz_stop_sched_tick(void) { } static inline void tick_nohz_restart_sched_tick(void) { } static inline void tick_nohz_update_jiffies(void) { } +static inline ktime_t tick_nohz_get_sleep_length(void) +{ + ktime_t len = { .tv64 = NSEC_PER_SEC/HZ }; + + return len; +} # endif /* !NO_HZ */ #endif diff --git a/kernel/softirq.c b/kernel/softirq.c index 0b9886a..3de1cb4 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -303,11 +303,6 @@ void irq_exit(void) if (!in_interrupt() && local_softirq_pending()) invoke_softirq(); -#ifdef CONFIG_NO_HZ - /* Make sure that timer wheel updates are propagated */ - if (!in_interrupt() && idle_cpu(smp_processor_id()) && !need_resched()) - tick_nohz_stop_sched_tick(); -#endif preempt_enable_no_resched(); } diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c index 52db9e3..5a50329 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -153,6 +153,7 @@ void tick_nohz_stop_sched_tick(void) unsigned long seq, last_jiffies, next_jiffies, delta_jiffies, flags; struct tick_sched *ts; ktime_t last_update, expires, now, delta; + struct clock_event_device *dev = __get_cpu_var(tick_cpu_device).evtdev; int cpu; local_irq_save(flags); @@ -290,11 +291,34 @@ void tick_nohz_stop_sched_tick(void) out: ts->next_jiffies = 
next_jiffies; ts->last_jiffies = last_jiffies; + ts->sleep_length = ktime_sub(dev->next_event, now); end: local_irq_restore(flags); } /** + * tick_nohz_get_sleep_length - return the length of the current sleep + * + * Called from power state control code with interrupts disabled + */ +ktime_t tick_nohz_get_sleep_length(void) +{ + struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched); + + return ts->sleep_length; +} + +/** + * tick_nohz_get_idle_jiffies - returns the current idle jiffie count + */ +unsigned long tick_nohz_get_idle_jiffies(void) +{ + struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched); + + return ts->idle_jiffies; +} + +/** * nohz_restart_sched_tick - restart the idle tick from the idle task * * Restart the idle tick when the CPU is woken up from idle