path: root/kernel
2013-12-09  posix-timers: Remove useless clock sample on timers cleanup  (Frederic Weisbecker)
a0b2062b0904ef07944c4a6e4d0f88ee44f1e9f2 ("posix_timers: fix racy timer delta caching on task exit") forgot to remove the arguments used for timer caching. Fix this leftover. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Andrew Morton <akpm@linux-foundation.org>
2013-12-09  posix-timers: Remove dead task special case  (Frederic Weisbecker)
Now that we've removed all the optimizations that could result in NULL timer's targets, we can remove all the associated special case handling. Also add some warnings on NULL targets to spot any possible leftover. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Andrew Morton <akpm@linux-foundation.org>
2013-12-09  posix-timers: Cleanup reaped target handling  (Frederic Weisbecker)
When a timer's target is seen to be buried, for example on calls to timer_gettime(), the posix cpu timers code behaves a bit like a garbage collector and releases the reference to the task early. Then again, this optimization complicates the code for little value: it's up to the user to release the timer and its associated resources by calling timer_delete() after the target task has been reaped. Remove this to simplify the code. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Andrew Morton <akpm@linux-foundation.org>
2013-12-09  posix-timers: Remove dead process posix cpu timers caching  (Frederic Weisbecker)
Now that we removed dead thread posix cpu timers caching, let's remove the dead process wide version. This caching is similar to the per thread version but it should be even more rare: * If the process is dead, we are not reading its timers status from a thread belonging to its group since they are all dead. So this caching only concerns remote process timer reads. Now posix cpu timers using itimers or timer_settime() can't do remote process timers anyway, so it's not even clear if there is actually a user for this caching. * Unlike per thread timers caching, this only applies to zombie targets. Buried targets' process wide timers return 0 values. But then again, timer_gettime() can't read remote process timers, so if the process is dead, there can't be any reader left anyway. Then again this caching seems to complicate the code for corner cases that are probably not worth it. So let's get rid of it. Also remove the sample snapshot on the dying process timer that is now useless, as suggested by Kosaki. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Andrew Morton <akpm@linux-foundation.org>
2013-12-09  posix-timers: Remove dead thread posix cpu timers caching  (Frederic Weisbecker)
When a task is exiting or has exited, its posix cpu timers don't tick anymore and won't elapse further. It's too late for them to expire. So any further call to timer_gettime() on these timers will return the same remaining expiry time. The current code optimizes this by caching the remaining delta and storing it where we used to save the absolute expiration time. This way, future calls to timer_gettime() won't need to compute the difference between the absolute expiration time and the current time anymore. Now this optimization doesn't seem to bring much value. Computing the timer's remaining delta is not very costly. Fetching the timer value OTOH can be costly in two ways: * A CPUCLOCK_SCHED read requires locking the target's rq. But some optimizations are on the way to make task_sched_runtime() not hold the rq lock of a non-running target. * A CPUCLOCK_VIRT/CPUCLOCK_PROF read simply consists of fetching current->utime/current->stime, except when the system uses full dynticks cputime accounting. The latter requires a per task lock in order to correctly compute user and system time. But once the target is dead, this lock shouldn't be contended anyway. All in all this caching doesn't seem to be justified. Given that it complicates the code significantly for few wins, let's remove it on single thread timers. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Andrew Morton <akpm@linux-foundation.org>
2013-12-07  PCI: Disable Bus Master only on kexec reboot  (Khalid Aziz)
Add a flag to tell the PCI subsystem that kernel is shutting down in preparation to kexec a kernel. Add code in PCI subsystem to use this flag to clear Bus Master bit on PCI devices only in case of kexec reboot. This fixes a power-off problem on Acer Aspire V5-573G and likely other machines and avoids any other issues caused by clearing Bus Master bit on PCI devices in normal shutdown path. The problem was introduced by b566a22c2332 ("PCI: disable Bus Master on PCI device shutdown"). This patch is based on discussion at http://marc.info/?l=linux-pci&m=138425645204355&w=2 Link: https://bugzilla.kernel.org/show_bug.cgi?id=63861 Reported-by: Chang Liu <cl91tp@gmail.com> Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Konstantin Khlebnikov <koct9i@gmail.com> Cc: stable@vger.kernel.org # v3.5+
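A minimal sketch of the approach described above (kernel C, not the exact upstream diff): the PCI shutdown path clears the Bus Master bit only when the kernel is shutting down for a kexec reboot. The flag name kexec_in_progress and the surrounding function body are assumptions made for illustration.

    #include <linux/pci.h>
    #include <linux/kexec.h>

    static void pci_device_shutdown(struct device *dev)
    {
            struct pci_dev *pci_dev = to_pci_dev(dev);
            struct pci_driver *drv = pci_dev->driver;

            if (drv && drv->shutdown)
                    drv->shutdown(pci_dev);

            /*
             * Clearing Bus Master on every shutdown broke power-off on some
             * machines, so restrict it to the kexec reboot case where it is
             * actually needed to stop DMA into the new kernel's memory.
             */
            if (kexec_in_progress)
                    pci_clear_master(pci_dev);
    }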
2013-12-06  cgroup: remove for_each_root_subsys()  (Tejun Heo)
After the previous patch which introduced for_each_css(), for_each_root_subsys() only has two users left. This patch replaces it with for_each_subsys() + explicit subsys_mask testing and removes for_each_root_subsys() along with cgroupfs_root->subsys_list handling. This patch doesn't introduce any behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-06  cgroup: implement for_each_css()  (Tejun Heo)
There are enough places where css's of a cgroup are iterated, which currently use for_each_root_subsys() + explicit cgroup_css(). This patch implements for_each_css() and replaces the above combination with it. This patch doesn't introduce any behavior changes. v2: Updated to apply cleanly on top of v2 of "cgroup: fix css leaks on online_css() failure" Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-06  cgroup: factor out cgroup_subsys_state creation into create_css()  (Tejun Heo)
Now that all operations to create a css (cgroup_subsys_state) are collected into a single loop in cgroup_create(), it's easy to factor it out into its own function. Factor out css creation into create_css(). This makes the code easier to follow and will enable decoupling css creation from cgroup creation which is necessary for the planned unified hierarchy. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-06  cgroup: combine css handling loops in cgroup_create()  (Tejun Heo)
Now that css operations in cgroup_create() are back-to-back, there isn't much point in allocating css's in one loop and onlining them in another. Merge the two loops so that a css is allocated and onlined on each iteration. css_ar[] is no longer necessary and replaced with a single pointer. This also simplifies the error handling path. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-06  cgroup: reorder operations in cgroup_create()  (Tejun Heo)
cgroup_create() currently does the following: 1. alloc cgroup 2. alloc css's 3. create the directory and commit to cgroup creation 4. online css's 5. create cgroup and css files The sequence performs allocations before other operations but it doesn't buy anything because each of the above steps may fail and should be unrollable. Reorganize the sequence such that cgroup operations are done before css operations: 1. alloc cgroup 2. create the directory and files and commit to cgroup creation 3. alloc css's 4. create files for and online css's This simplifies the code a bit and enables further simplification and separating out css creation from cgroup creation which is necessary for the planned unified hierarchy where css's will be created and destroyed dynamically across the lifetime of a cgroup. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-06  cgroup: make for_each_subsys() useable under cgroup_root_mutex  (Tejun Heo)
We want to use for_each_subsys() in cgroupfs_root handling where only cgroup_root_mutex is held. The only way cgroup_subsys[] can change is through module load/unload, so make cgroup_[un]load_subsys() grab cgroup_root_mutex too and update the lockdep annotation in for_each_subsys() to allow either cgroup_mutex or cgroup_root_mutex. * The lockdep annotation is moved from the inner 'if' condition to the outer 'for' init clause. There's no reason to execute the assertion every loop. * Loop index @i is renamed to @ssid. Indices iterating through subsys will be [re]named to @ssid gradually. v2: cgroup_assert_mutex_or_root_locked() caused a build failure if !CONFIG_LOCKDEP. Conditionalize its definition. The build failure was reported by the kbuild test bot. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com> Cc: kbuild test robot <fengguang.wu@intel.com>
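For illustration, the assertion mentioned in the v2 note plausibly ends up shaped like this once it is conditionalized (a sketch; the exact upstream macro may differ):

    #ifdef CONFIG_LOCKDEP
    #define cgroup_assert_mutex_or_root_locked()                            \
            WARN_ON_ONCE(debug_locks &&                                     \
                         (!lockdep_is_held(&cgroup_mutex) &&                \
                          !lockdep_is_held(&cgroup_root_mutex)))
    #else
    #define cgroup_assert_mutex_or_root_locked()    do { } while (0)
    #endif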
2013-12-06  cgroup: css iterations and css_from_dir() are safe under cgroup_mutex  (Tejun Heo)
Currently, all css iterations and css_from_dir() require RCU read lock whether the caller is holding cgroup_mutex or not, which is unnecessarily restrictive. They are all safe to use under cgroup_mutex without holding RCU read lock. Factor out cgroup_assert_mutex_or_rcu_locked() from css_from_id() and apply it to all css iteration functions and css_from_dir(). v2: cgroup_assert_mutex_or_rcu_locked() definition doesn't need to be inside CONFIG_PROVE_RCU ifdef as rcu_lockdep_assert() is always defined and conditionalized. Move it outside of the ifdef block. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
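A sketch of the factored-out helper, assuming it is built on rcu_lockdep_assert() as the v2 note implies:

    #define cgroup_assert_mutex_or_rcu_locked()                             \
            rcu_lockdep_assert(rcu_read_lock_held() ||                      \
                               lockdep_is_held(&cgroup_mutex),              \
                               "cgroup_mutex or RCU read lock required")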
2013-12-06  Merge branch 'for-3.13-fixes' into for-3.14  (Tejun Heo)
Pulling in as patches depending on 266ccd505e8a ("cgroup: fix cgroup_create() error handling path") are scheduled. Signed-off-by: Tejun Heo <tj@kernel.org>
2013-12-06  cgroup: fix cgroup_create() error handling path  (Tejun Heo)
ae7f164a09 ("cgroup: move cgroup->subsys[] assignment to online_css()") moved cgroup->subsys[] assignements later in cgroup_create() but didn't update error handling path accordingly leading to the following oops and leaking later css's after an online_css() failure. The oops is from cgroup destruction path being invoked on the partially constructed cgroup which is not ready to handle empty slots in cgrp->subsys[] array. BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 IP: [<ffffffff810eeaa8>] cgroup_destroy_locked+0x118/0x2f0 PGD a780a067 PUD aadbe067 PMD 0 Oops: 0000 [#1] SMP Modules linked in: CPU: 6 PID: 7360 Comm: mkdir Not tainted 3.13.0-rc2+ #69 Hardware name: task: ffff8800b9dbec00 ti: ffff8800a781a000 task.ti: ffff8800a781a000 RIP: 0010:[<ffffffff810eeaa8>] [<ffffffff810eeaa8>] cgroup_destroy_locked+0x118/0x2f0 RSP: 0018:ffff8800a781bd98 EFLAGS: 00010282 RAX: ffff880586903878 RBX: ffff880586903800 RCX: ffff880586903820 RDX: ffff880586903860 RSI: ffff8800a781bdb0 RDI: ffff880586903820 RBP: ffff8800a781bde8 R08: ffff88060e0b8048 R09: ffffffff811d7bc1 R10: 000000000000008c R11: 0000000000000001 R12: ffff8800a72286c0 R13: 0000000000000000 R14: ffffffff81cf7a40 R15: 0000000000000001 FS: 00007f60ecda57a0(0000) GS:ffff8806272c0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000a7a03000 CR4: 00000000000007e0 Stack: ffff880586903860 ffff880586903910 ffff8800a72286c0 ffff880586903820 ffffffff81cf7a40 ffff880586903800 ffff88060e0b8018 ffffffff81cf7a40 ffff8800b9dbec00 ffff8800b9dbf098 ffff8800a781bec8 ffffffff810ef5bf Call Trace: [<ffffffff810ef5bf>] cgroup_mkdir+0x55f/0x5f0 [<ffffffff811c90ae>] vfs_mkdir+0xee/0x140 [<ffffffff811cb07e>] SyS_mkdirat+0x6e/0xf0 [<ffffffff811c6a19>] SyS_mkdir+0x19/0x20 [<ffffffff8169e569>] system_call_fastpath+0x16/0x1b This patch moves reference bumping inside online_css() loop, clears css_ar[] as css's are brought online successfully, and updates err_destroy path so that either a css is fully online and destroyed by cgroup_destroy_locked() or the error path frees it. This creates a duplicate css free logic in the error path but it will be cleaned up soon. v2: Li pointed out that cgroup_destroy_locked() would do NULL-deref if invoked with a cgroup which doesn't have all css's populated. Update cgroup_destroy_locked() so that it skips NULL css's. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com> Reported-by: Vladimir Davydov <vdavydov@parallels.com> Cc: stable@vger.kernel.org # v3.12+
2013-12-06  Merge tag 'trace-fixes-3.13-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  (Linus Torvalds)
Pull tracing fix from Steven Rostedt: "A regression showed up that there's a large delay when enabling all events. This was prevalent when FTRACE_SELFTEST was enabled which enables all events several times, and caused the system bootup to pause for over a minute. This was tracked down to an addition of a synchronize_sched() performed when system call tracepoints are unregistered. The synchronize_sched() is needed between the unregistering of the system call tracepoint and a deletion of a tracing instance buffer. But placing the synchronize_sched() in the unreg of *every* system call tracepoint is a bit overboard. A single synchronize_sched() before the deletion of the instance is sufficient" * tag 'trace-fixes-3.13-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: tracing: Only run synchronize_sched() at instance deletion time
2013-12-05  tracing: Only run synchronize_sched() at instance deletion time  (Steven Rostedt)
It has been reported that boot up with FTRACE_SELFTEST enabled can take a very long time. There can be stalls of over a minute. This was tracked down to the synchronize_sched() called when a system call event is disabled. As the self tests enable and disable thousands of events, this makes the synchronize_sched() get called thousands of times. The synchronize_sched() was added with d562aff93bfb53 "tracing: Add support for SOFT_DISABLE to syscall events" which caused this regression (added in 3.13-rc1). The synchronize_sched() is to protect against the events being accessed when a tracer instance is being deleted. When an instance is being deleted all the events associated with it are unregistered. The synchronize_sched() makes sure that no more users are running when it finishes. Instead of calling synchronize_sched() for all syscall events, we only need to call it once, after the events are unregistered and before the instance is deleted. The event_mutex is held during this action to prevent new users from enabling events. Link: http://lkml.kernel.org/r/20131203124120.427b9661@gandalf.local.home Reported-by: Petr Mladek <pmladek@suse.cz> Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com> Acked-by: Petr Mladek <pmladek@suse.cz> Tested-by: Petr Mladek <pmladek@suse.cz> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
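A rough sketch of the resulting shape of the instance-deletion path; the function and helper names below are hypothetical, not the actual trace_events.c symbols:

    static int trace_instance_delete(struct trace_array *tr)       /* hypothetical */
    {
            mutex_lock(&event_mutex);       /* blocks new users from enabling events */

            unregister_all_trace_events(tr);        /* hypothetical helper; no per-event
                                                     * synchronize_sched() in here anymore */

            /*
             * A single grace period is enough: after this returns, no syscall
             * tracepoint probe can still be touching this instance's buffers.
             */
            synchronize_sched();

            free_instance_buffers(tr);              /* hypothetical helper */
            mutex_unlock(&event_mutex);
            return 0;
    }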
2013-12-05  cgroup: unify pidlist and other file handling  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is updated so that it can be easily mapped to kernfs. With the previous changes, the difference between pidlist and other files is very small. Both are served by seq_file in a pretty standard way with the only difference being that !pidlist files use single_open(). This patch adds cftype->seq_start(), ->seq_next and ->seq_stop() and implements the matching cgroup_seqfile_start/next/stop() which either emulate single_open() behavior or invoke cftype->seq_*() operations if specified. This allows using a single seq_operations for both pidlist and other files and makes cgroup_pidlist_operations and cgroup_pidlist_open() no longer necessary. As cgroup_pidlist_open() was the only user of cftype->open(), the method is dropped as well. This brings the cftype file interface very close to the kernfs interface and mapping shouldn't be too difficult. Once converted to kernfs, most of the plumbing code including cgroup_seqfile_*() will be removed as kernfs provides those facilities. This patch does not introduce any behavior changes. v2: Refreshed on top of the updated "cgroup: introduce struct cgroup_pidlist_open_file". v3: Refreshed on top of the updated "cgroup: attach cgroup_open_file to all cgroup files". Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cgroup: replace cftype->read_seq_string() with cftype->seq_show()  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is updated so that it can be easily mapped to kernfs. This patch replaces cftype->read_seq_string() with cftype->seq_show() which is not limited to single_open() operation and will map directly to the kernfs seq_file interface. The conversions are mechanical. As ->seq_show() doesn't have @css and @cft, the functions which make use of them are converted to use seq_css() and seq_cft() respectively. In several places, e.g. if it has seq_string in its name, the function name is updated to fit the new method better. This patch does not introduce any behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Aristeu Rozanski <arozansk@redhat.com> Acked-by: Vivek Goyal <vgoyal@redhat.com> Acked-by: Michal Hocko <mhocko@suse.cz> Acked-by: Daniel Wagner <daniel.wagner@bmw-carit.de> Acked-by: Li Zefan <lizefan@huawei.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Balbir Singh <bsingharora@gmail.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Neil Horman <nhorman@tuxdriver.com>
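A hypothetical before/after of the mechanical conversion, using an invented handler name and value accessor purely for illustration:

    /* Before: the handler receives @css and @cft directly. */
    static int foo_read_seq_string(struct cgroup_subsys_state *css,
                                   struct cftype *cft, struct seq_file *sf)
    {
            seq_printf(sf, "%llu\n", foo_value_of(css));    /* foo_value_of() is made up */
            return 0;
    }

    /* After: ->seq_show() only gets the seq_file; @css is recovered from it
     * via seq_css() (and @cft via seq_cft() where needed). */
    static int foo_seq_show(struct seq_file *sf, void *v)
    {
            seq_printf(sf, "%llu\n", foo_value_of(seq_css(sf)));
            return 0;
    }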
2013-12-05  cgroup: attach cgroup_open_file to all cgroup files  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is updated so that it can be easily mapped to kernfs. This patch attaches cgroup_open_file, which used to be attached to pidlist files, to all cgroup files, introduces seq_css/cft() accessors to determine the cgroup_subsys_state and cftype associated with a given cgroup seq_file, exports them as public interface. This doesn't cause any behavior changes but unifies cgroup file handling across different file types and will help converting them to kernfs seq_show() interface. v2: Li pointed out that the original patch was using single_open_size() incorrectly assuming that the size param is private data size. Fix it by allocating @of separately and passing it to single_open() and explicitly freeing it in the release path. This isn't the prettiest but this path is gonna be restructured by the following patches pretty soon. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cgroup: generalize cgroup_pidlist_open_file  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is updated so that it can be easily mapped to kernfs. This patch renames cgroup_pidlist_open_file to cgroup_open_file and updates it so that it only contains a field to identify the specific file, ->cfe, and an opaque ->priv pointer. When cgroup is converted to kernfs, this will be replaced by kernfs_open_file which contains about the same information. As whether the file is "cgroup.procs" or "tasks" should now be determined from cgroup_open_file->cfe, the cftype->private for the two files now carry the file type and cgroup_pidlist_start() reads the type through cfe->type->private. This makes the distinction between cgroup_tasks_open() and cgroup_procs_open() unnecessary. cgroup_pidlist_open() is now directly used as the open method. This patch doesn't make any behavior changes. v2: Refreshed on top of the updated "cgroup: introduce struct cgroup_pidlist_open_file". Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cgroup: unify read path so that seq_file is always used  (Tejun Heo)
With the recent removal of cftype->read() and ->read_map(), only three operations are remaining, ->read_u64(), ->read_s64() and ->read_seq_string(). Currently, the first two are handled directly while the last is handled through seq_file. It is trivial to serve the first two through the seq_file path too. This patch restructures read path so that all operations are served through cgroup_seqfile_show(). This makes all cgroup files seq_file - single_open/release() are now used by default, cgroup_seqfile_operations is dropped, and cgroup_file_operations uses seq_read() for read. This simplifies the code and makes the read path easy to convert to use kernfs. Note that, while cgroup_file_operations uses seq_read() for read, it still uses generic_file_llseek() for seeking instead of seq_lseek(). This is different from cgroup_seqfile_operations but shouldn't break anything and brings the seeking behavior aligned with kernfs. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cgroup: unify cgroup_write_X64() and cgroup_write_string()  (Tejun Heo)
cgroup_write_X64() and cgroup_write_string() both implement about the same buffering logic. Unify the two into cgroup_file_write() which always allocates a dynamic buffer for simplicity and uses kstrto*() instead of simple_strto*(). This patch doesn't make any visible behavior changes except for a possibly different error value from kstrto*(). Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
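A simplified sketch of the unified write path; the argument list is trimmed compared to the real file write handler (which also takes the file and offset), so treat this as the idea rather than the exact function:

    static ssize_t cgroup_file_write(struct cgroup_subsys_state *css,
                                     struct cftype *cft,
                                     const char __user *userbuf, size_t nbytes)
    {
            char *buf;
            int ret;

            buf = kmalloc(nbytes + 1, GFP_KERNEL);   /* always dynamic, for simplicity */
            if (!buf)
                    return -ENOMEM;
            if (copy_from_user(buf, userbuf, nbytes)) {
                    kfree(buf);
                    return -EFAULT;
            }
            buf[nbytes] = '\0';

            if (cft->write_string) {
                    ret = cft->write_string(css, cft, strstrip(buf));
            } else if (cft->write_u64) {
                    u64 v;
                    ret = kstrtoull(strstrip(buf), 0, &v);   /* was simple_strtoull() */
                    if (!ret)
                            ret = cft->write_u64(css, cft, v);
            } else if (cft->write_s64) {
                    s64 v;
                    ret = kstrtoll(strstrip(buf), 0, &v);    /* was simple_strtoll() */
                    if (!ret)
                            ret = cft->write_s64(css, cft, v);
            } else {
                    ret = -EINVAL;
            }

            kfree(buf);
            return ret ?: nbytes;
    }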
2013-12-05  cgroup: remove cftype->read(), ->read_map() and ->write()  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is being consolidated so that it can be easily mapped to the seq_file based interface of kernfs. After recent updates, ->read() and ->read_map() don't have any user left and ->write() never had any user. Remove them. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cpuset: convert away from cftype->read()  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is being consolidated so that it can be easily mapped to the seq_file based interface of kernfs. All users of cftype->read() can be easily served, usually better, by seq_file and other methods. Rename cpuset_common_file_read() to cpuset_common_read_seq_string() and convert it to use read_seq_string() interface instead. This not only simplifies the code but also makes it more versatile. Before, the file couldn't output if the result is longer than PAGE_SIZE. After the conversion, seq_file automatically grows the buffer until the output can fit. This patch doesn't make any visible behavior changes except for being able to handle output larger than PAGE_SIZE. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-12-05  cgroup, sched: convert away from cftype->read_map()  (Tejun Heo)
In preparation of conversion to kernfs, cgroup file handling is being consolidated so that it can be easily mapped to the seq_file based interface of kernfs. cftype->read_map() doesn't add any value and is being replaced with ->read_seq_string(). Update cpu_stats_show() and cpuacct_stats_show() accordingly. This patch doesn't make any visible behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
2013-12-05  padata: Fix wrong usage of rcu_dereference()  (Mathias Krause)
A kernel with enabled lockdep complains about the wrong usage of rcu_dereference() under a rcu_read_lock_bh() protected region. =============================== [ INFO: suspicious RCU usage. ] 3.13.0-rc1+ #126 Not tainted ------------------------------- linux/kernel/padata.c:115 suspicious rcu_dereference_check() usage! other info that might help us debug this: rcu_scheduler_active = 1, debug_locks = 1 1 lock held by cryptomgr_test/153: #0: (rcu_read_lock_bh){.+....}, at: [<ffffffff8115c235>] padata_do_parallel+0x5/0x270 Fix that by using rcu_dereference_bh() instead. Signed-off-by: Mathias Krause <minipli@googlemail.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
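The fix boils down to using the _bh flavor of the accessor inside the rcu_read_lock_bh() section of padata_do_parallel(); a sketch of the relevant lines:

    rcu_read_lock_bh();

    /* Before: pd = rcu_dereference(pinst->pd);  -- triggers the lockdep splat
     * because only the BH-flavored RCU read lock is held here. */
    pd = rcu_dereference_bh(pinst->pd);

    rcu_read_unlock_bh();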
2013-12-05  sched/numa: Drop idx field of task_numa_env struct  (Wanpeng Li)
Drop unused idx field of task_numa_env struct. Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1386241817-5051-2-git-send-email-liwanp@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-12-04  Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull timer fixes from Thomas Gleixner: - timekeeping: Cure a subtle drift issue on GENERIC_TIME_VSYSCALL_OLD - nohz: Make CONFIG_NO_HZ=n and nohz=off command line option behave the same way. Fixes a long standing load accounting wreckage. - clocksource/ARM: Kconfig update to avoid ARM=n wreckage - clocksource/ARM: Fixlets for the AT91 and SH clocksource/clockevents - Trivial documentation update and kzalloc conversion from akpms pile * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: nohz: Fix another inconsistency between CONFIG_NO_HZ=n and nohz=off time: Fix 1ns/tick drift w/ GENERIC_TIME_VSYSCALL_OLD clocksource: arm_arch_timer: Hide eventstream Kconfig on non-ARM clocksource: sh_tmu: Add clk_prepare/unprepare support clocksource: sh_tmu: Release clock when sh_tmu_register() fails clocksource: sh_mtu2: Add clk_prepare/unprepare support clocksource: sh_mtu2: Release clock when sh_mtu2_register() fails ARM: at91: rm9200: switch back to clockevents_config_and_register tick: Document tick_do_timer_cpu timer: Convert kmalloc_node(...GFP_ZERO...) to kzalloc_node(...) NOHZ: Check for nohz active instead of nohz enabled
2013-12-04  Merge branch 'timers/core-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks into timers/core  (Ingo Molnar)
Pull dynticks updates from Frederic Weisbecker: * Fix a bug where posix cpu timers requeued due to interval got ignored on full dynticks CPUs (not a regression though as it only impacts full dynticks and the bug is there since we merged full dynticks). * Optimizations and cleanups on the use of per CPU APIs to improve code readability, performance and debuggability in the nohz subsystem; * Optimize posix cpu timer by sparing stub workqueue queue with full dynticks off case * Rename some functions to extend with *_this_cpu() suffix for clarity * Refine the naming of some context tracking subsystem state accessors * Trivial spelling fix by Paul Gortmaker Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-12-04  params: improve standard definitions  (Felipe Contreras)
We are repeating the functionality of kstrtol in param_set_long, and the same for kstrtoint. We can get rid of the extra code by using the right functions. Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
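A sketch of what the simplification means for one of the setters, assuming the standard param_set_long() signature: the hand-rolled parsing is replaced by a direct kstrtol() call.

    int param_set_long(const char *val, const struct kernel_param *kp)
    {
            /* kstrtol() already handles base detection, overflow and
             * trailing garbage, so no local reimplementation is needed. */
            return kstrtol(val, 0, (long *)kp->arg);
    }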
2013-12-03  rcu: Let the world know when RCU adjusts its geometry  (Paul E. McKenney)
Some RCU bugs have been specific to the layout of the rcu_node tree, but RCU will silently adjust the tree at boot time if appropriate. This obscures valuable debugging information, so print a message when this happens. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2013-12-03  rcu: Fix srcu_barrier() docbook header  (Paul E. McKenney)
The srcu_barrier() docbook header left out the "sp" argument, so this commit adds that argument's docbook text. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2013-12-03  rcu: Allow task-level idle entry/exit nesting  (Paul E. McKenney)
The current task-level idle entry/exit code forces an entry/exit on each call, regardless of the nesting level. This commit therefore properly accounts for nesting. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
2013-12-03  rcu: Break call_rcu() deadlock involving scheduler and perf  (Paul E. McKenney)
Dave Jones got the following lockdep splat: > ====================================================== > [ INFO: possible circular locking dependency detected ] > 3.12.0-rc3+ #92 Not tainted > ------------------------------------------------------- > trinity-child2/15191 is trying to acquire lock: > (&rdp->nocb_wq){......}, at: [<ffffffff8108ff43>] __wake_up+0x23/0x50 > > but task is already holding lock: > (&ctx->lock){-.-...}, at: [<ffffffff81154c19>] perf_event_exit_task+0x109/0x230 > > which lock already depends on the new lock. > > > the existing dependency chain (in reverse order) is: > > -> #3 (&ctx->lock){-.-...}: > [<ffffffff810cc243>] lock_acquire+0x93/0x200 > [<ffffffff81733f90>] _raw_spin_lock+0x40/0x80 > [<ffffffff811500ff>] __perf_event_task_sched_out+0x2df/0x5e0 > [<ffffffff81091b83>] perf_event_task_sched_out+0x93/0xa0 > [<ffffffff81732052>] __schedule+0x1d2/0xa20 > [<ffffffff81732f30>] preempt_schedule_irq+0x50/0xb0 > [<ffffffff817352b6>] retint_kernel+0x26/0x30 > [<ffffffff813eed04>] tty_flip_buffer_push+0x34/0x50 > [<ffffffff813f0504>] pty_write+0x54/0x60 > [<ffffffff813e900d>] n_tty_write+0x32d/0x4e0 > [<ffffffff813e5838>] tty_write+0x158/0x2d0 > [<ffffffff811c4850>] vfs_write+0xc0/0x1f0 > [<ffffffff811c52cc>] SyS_write+0x4c/0xa0 > [<ffffffff8173d4e4>] tracesys+0xdd/0xe2 > > -> #2 (&rq->lock){-.-.-.}: > [<ffffffff810cc243>] lock_acquire+0x93/0x200 > [<ffffffff81733f90>] _raw_spin_lock+0x40/0x80 > [<ffffffff810980b2>] wake_up_new_task+0xc2/0x2e0 > [<ffffffff81054336>] do_fork+0x126/0x460 > [<ffffffff81054696>] kernel_thread+0x26/0x30 > [<ffffffff8171ff93>] rest_init+0x23/0x140 > [<ffffffff81ee1e4b>] start_kernel+0x3f6/0x403 > [<ffffffff81ee1571>] x86_64_start_reservations+0x2a/0x2c > [<ffffffff81ee1664>] x86_64_start_kernel+0xf1/0xf4 > > -> #1 (&p->pi_lock){-.-.-.}: > [<ffffffff810cc243>] lock_acquire+0x93/0x200 > [<ffffffff8173419b>] _raw_spin_lock_irqsave+0x4b/0x90 > [<ffffffff810979d1>] try_to_wake_up+0x31/0x350 > [<ffffffff81097d62>] default_wake_function+0x12/0x20 > [<ffffffff81084af8>] autoremove_wake_function+0x18/0x40 > [<ffffffff8108ea38>] __wake_up_common+0x58/0x90 > [<ffffffff8108ff59>] __wake_up+0x39/0x50 > [<ffffffff8110d4f8>] __call_rcu_nocb_enqueue+0xa8/0xc0 > [<ffffffff81111450>] __call_rcu+0x140/0x820 > [<ffffffff81111b8d>] call_rcu+0x1d/0x20 > [<ffffffff81093697>] cpu_attach_domain+0x287/0x360 > [<ffffffff81099d7e>] build_sched_domains+0xe5e/0x10a0 > [<ffffffff81efa7fc>] sched_init_smp+0x3b7/0x47a > [<ffffffff81ee1f4e>] kernel_init_freeable+0xf6/0x202 > [<ffffffff817200be>] kernel_init+0xe/0x190 > [<ffffffff8173d22c>] ret_from_fork+0x7c/0xb0 > > -> #0 (&rdp->nocb_wq){......}: > [<ffffffff810cb7ca>] __lock_acquire+0x191a/0x1be0 > [<ffffffff810cc243>] lock_acquire+0x93/0x200 > [<ffffffff8173419b>] _raw_spin_lock_irqsave+0x4b/0x90 > [<ffffffff8108ff43>] __wake_up+0x23/0x50 > [<ffffffff8110d4f8>] __call_rcu_nocb_enqueue+0xa8/0xc0 > [<ffffffff81111450>] __call_rcu+0x140/0x820 > [<ffffffff81111bb0>] kfree_call_rcu+0x20/0x30 > [<ffffffff81149abf>] put_ctx+0x4f/0x70 > [<ffffffff81154c3e>] perf_event_exit_task+0x12e/0x230 > [<ffffffff81056b8d>] do_exit+0x30d/0xcc0 > [<ffffffff8105893c>] do_group_exit+0x4c/0xc0 > [<ffffffff810589c4>] SyS_exit_group+0x14/0x20 > [<ffffffff8173d4e4>] tracesys+0xdd/0xe2 > > other info that might help us debug this: > > Chain exists of: > &rdp->nocb_wq --> &rq->lock --> &ctx->lock > > Possible unsafe locking scenario: > > CPU0 CPU1 > ---- ---- > lock(&ctx->lock); > lock(&rq->lock); > lock(&ctx->lock); > lock(&rdp->nocb_wq); > > *** 
DEADLOCK *** > > 1 lock held by trinity-child2/15191: > #0: (&ctx->lock){-.-...}, at: [<ffffffff81154c19>] perf_event_exit_task+0x109/0x230 > > stack backtrace: > CPU: 2 PID: 15191 Comm: trinity-child2 Not tainted 3.12.0-rc3+ #92 > ffffffff82565b70 ffff880070c2dbf8 ffffffff8172a363 ffffffff824edf40 > ffff880070c2dc38 ffffffff81726741 ffff880070c2dc90 ffff88022383b1c0 > ffff88022383aac0 0000000000000000 ffff88022383b188 ffff88022383b1c0 > Call Trace: > [<ffffffff8172a363>] dump_stack+0x4e/0x82 > [<ffffffff81726741>] print_circular_bug+0x200/0x20f > [<ffffffff810cb7ca>] __lock_acquire+0x191a/0x1be0 > [<ffffffff810c6439>] ? get_lock_stats+0x19/0x60 > [<ffffffff8100b2f4>] ? native_sched_clock+0x24/0x80 > [<ffffffff810cc243>] lock_acquire+0x93/0x200 > [<ffffffff8108ff43>] ? __wake_up+0x23/0x50 > [<ffffffff8173419b>] _raw_spin_lock_irqsave+0x4b/0x90 > [<ffffffff8108ff43>] ? __wake_up+0x23/0x50 > [<ffffffff8108ff43>] __wake_up+0x23/0x50 > [<ffffffff8110d4f8>] __call_rcu_nocb_enqueue+0xa8/0xc0 > [<ffffffff81111450>] __call_rcu+0x140/0x820 > [<ffffffff8109bc8f>] ? local_clock+0x3f/0x50 > [<ffffffff81111bb0>] kfree_call_rcu+0x20/0x30 > [<ffffffff81149abf>] put_ctx+0x4f/0x70 > [<ffffffff81154c3e>] perf_event_exit_task+0x12e/0x230 > [<ffffffff81056b8d>] do_exit+0x30d/0xcc0 > [<ffffffff810c9af5>] ? trace_hardirqs_on_caller+0x115/0x1e0 > [<ffffffff810c9bcd>] ? trace_hardirqs_on+0xd/0x10 > [<ffffffff8105893c>] do_group_exit+0x4c/0xc0 > [<ffffffff810589c4>] SyS_exit_group+0x14/0x20 > [<ffffffff8173d4e4>] tracesys+0xdd/0xe2 The underlying problem is that perf is invoking call_rcu() with the scheduler locks held, but in NOCB mode, call_rcu() will with high probability invoke the scheduler -- which just might want to use its locks. The reason that call_rcu() needs to invoke the scheduler is to wake up the corresponding rcuo callback-offload kthread, which does the job of starting up a grace period and invoking the callbacks afterwards. One solution (championed on a related problem by Lai Jiangshan) is to simply defer the wakeup to some point where scheduler locks are no longer held. Since we don't want to unnecessarily incur the cost of such deferral, the task before us is threefold: 1. Determine when it is likely that a relevant scheduler lock is held. 2. Defer the wakeup in such cases. 3. Ensure that all deferred wakeups eventually happen, preferably sooner rather than later. We use irqs_disabled_flags() as a proxy for relevant scheduler locks being held. This works because the relevant locks are always acquired with interrupts disabled. We may defer more often than needed, but that is at least safe. The wakeup deferral is tracked via a new field in the per-CPU and per-RCU-flavor rcu_data structure, namely ->nocb_defer_wakeup. This flag is checked by the RCU core processing. The __rcu_pending() function now checks this flag, which causes rcu_check_callbacks() to initiate RCU core processing at each scheduling-clock interrupt where this flag is set. Of course this is not sufficient because scheduling-clock interrupts are often turned off (the things we used to be able to count on!). So the flags are also checked on entry to any state that RCU considers to be idle, which includes both NO_HZ_IDLE idle state and NO_HZ_FULL user-mode-execution state. This approach should allow call_rcu() to be invoked regardless of what locks you might be holding, the key word being "should". Reported-by: Dave Jones <davej@redhat.com> Signed-off-by: Paul E. 
McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org>
2013-12-03  rcu: Fix and comment ordering around wait_event()  (Paul E. McKenney)
It is all too easy to forget that wait_event() does not necessarily imply a full memory barrier. The case where it does not is where the condition transitions to true just as wait_event() starts execution. This is actually a feature: The standard use of wait_event() involves locking, in which case the locks provide the needed ordering (you hold a lock across the wake_up() and acquire that same lock after wait_event() returns). Given that I did forget that wait_event() does not necessarily imply a full memory barrier in one case, this commit fixes that case. This commit also adds comments calling out the placement of existing memory barriers relied on by wait_event() calls. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
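A small sketch of the pattern the commit message describes, with hypothetical names: the waker publishes state and wakes under a lock, and the waiter takes the same lock after wait_event() returns, so it is the lock (not wait_event() itself) that provides the full ordering.

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);          /* hypothetical example */
    static DEFINE_SPINLOCK(my_lock);
    static bool my_done;

    static void waker(void)
    {
            spin_lock(&my_lock);
            my_done = true;                         /* publish under the lock */
            wake_up(&my_wq);
            spin_unlock(&my_lock);
    }

    static void waiter(void)
    {
            /*
             * If my_done is already true, wait_event() returns immediately
             * without sleeping and without implying a full memory barrier.
             */
            wait_event(my_wq, my_done);

            spin_lock(&my_lock);                    /* acquiring the same lock
                                                     * supplies the ordering */
            /* ... safely read data published by the waker ... */
            spin_unlock(&my_lock);
    }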
2013-12-03  rcu: Kick CPU halfway to RCU CPU stall warning  (Paul E. McKenney)
When an RCU CPU stall warning occurs, the CPU invokes resched_cpu() on itself. This can help move the grace period forward in some situations, but it would be even better to do this -before- the RCU CPU stall warning. This commit therefore causes resched_cpu() to be called every five jiffies once the system is halfway to an RCU CPU stall warning. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2013-12-02  posix-timers: Fix full dynticks CPUs kick on timer rescheduling  (Frederic Weisbecker)
A posix CPU timer can be rearmed while it is firing or after it is notified with a signal. This can happen for example with timers that were set with a non-zero interval in timer_settime(). This rearming can happen in two places: 1) On timer firing time, which happens on the target's tick. If the timer can't trigger a signal because it is ignored, it reschedules itself to honour the timer interval. 2) On signal handling from the timer's notification target. This one can be a different task than the timer's target itself. Once the signal is notified, the notification target rearms the timer, again to honour the timer interval. When a timer is rearmed, we need to notify the full dynticks CPUs such that they restart their tick in case they are running tasks that may have a share in elapsing this timer. Now the 1st case above handles full dynticks CPUs with a call to posix_cpu_timer_kick_nohz() from the posix cpu timer firing code. But the second case ignores the fact that some CPUs may run non-idle tasks with their tick off. As a result, when a timer is rescheduled after its signal notification, the full dynticks CPUs may completely ignore it and not tick on the timer as expected. This patch fixes this bug by handling both cases in one place. All we need is to move the kick to the rearming common code in posix_cpu_timer_schedule(). Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Olivier Langlois <olivier@olivierlanglois.net>
2013-12-02  posix-timers: Spare workqueue if there is no full dynticks CPU to kick  (Frederic Weisbecker)
After a posix cpu timer is set, a workqueue is scheduled in order to kick the full dynticks CPUs and let them restart their tick if necessary in case the task they are running is concerned by the new timer. This kick is implemented by way of IPIs, which require interrupts to be enabled, hence the need for a workqueue to raise them because the posix cpu timer set path has interrupts disabled. Now if there is no full dynticks CPU on the system, the workqueue is still scheduled but it simply won't send any IPI and will return immediately. So let's spare that workqueue when it is not needed. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org>
2013-12-02  context_tracking: Wrap static key check into more intuitive function name  (Frederic Weisbecker)
Use a function with a meaningful name to check the global context tracking state. static_key_false() is a bit confusing for reviewers. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org>
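Presumably the wrapper is a thin static inline around the static key; something along these lines (the exact name is assumed from how it is used later in this series):

    static inline bool context_tracking_is_enabled(void)
    {
            /* Reads better at call sites than a bare static_key_false() test. */
            return static_key_false(&context_tracking_enabled);
    }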
2013-12-02  nohz: Convert a few places to use local per cpu accesses  (Frederic Weisbecker)
A few functions use remote per CPU access APIs when they deal with local values. Just do the right conversion to improve performance, code readability and debug checks. While at it, let's extend some of these function names with a *_this_cpu() suffix in order to display their purpose more clearly. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org>
2013-12-02  Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull irq fixes from Thomas Gleixner: - Correction of fuzzy and fragile IRQ_RETVAL macro - IRQ related resume fix affecting only XEN - ARM/GIC fix for chained GIC controllers * 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: irqchip: Gic: fix boot for chained gics irq: Enable all irqs unconditionally in irq_resume genirq: Correct fuzzy and fragile IRQ_RETVAL() definition
2013-12-02  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull scheduler fixes from Ingo Molnar: "Various smaller fixlets, all over the place" * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/doc: Fix generation of device-drivers sched: Expose preempt_schedule_irq() sched: Fix a trivial typo in comments sched: Remove unused variable in 'struct sched_domain' sched: Avoid NULL dereference on sd_busy sched: Check sched_domain before computing group power MAINTAINERS: Update file patterns in the lockdep and scheduler entries
2013-12-02  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull perf fixes from Ingo Molnar: "Misc kernel and tooling fixes" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: tools lib traceevent: Fix conversion of pointer to integer of different size perf/trace: Properly use u64 to hold event_id perf: Remove fragile swevent hlist optimization ftrace, perf: Avoid infinite event generation loop tools lib traceevent: Fix use of multiple options in processing field perf header: Fix possible memory leaks in process_group_desc() perf header: Fix bogus group name perf tools: Tag thread comm as overriden
2013-11-30  PM / hibernate: export hibernation_set_ops  (Leonardo Potenza)
To support implementing PM hibernation code as modules, the hibernation_set_ops function needs to be exported. A similar solution is already available for suspend_set_ops (see commit a5e4fd8783a2bec861ecf1138cdc042269ff59aa). Signed-off-by: Leonardo Potenza <leonardo.potenza@intel.com> Signed-off-by: Edwin Verplanke <edwin.verplanke@intel.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-11-29  Merge branch 'for-3.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  (Linus Torvalds)
Pull workqueue fixes from Tejun Heo: "This contains one important fix. The NUMA support added a while back broke ordering guarantees on ordered workqueues. It was enforced by having a single frontend interface with @max_active == 1, but the NUMA support puts multiple interfaces on unbound workqueues on NUMA machines, thus breaking the ordered guarantee. This is fixed by disabling NUMA support on ordered workqueues. The above and a couple other patches were sitting in for-3.12-fixes but I forgot to push that out, so they ended up waiting a bit too long. My apologies. Other fixes are minor" * 'for-3.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: workqueue: fix pool ID allocation leakage and remove BUILD_BUG_ON() in init_workqueues workqueue: fix comment typo for __queue_work() workqueue: fix ordered workqueues in NUMA setups workqueue: swap set_cpus_allowed_ptr() and PF_NO_SETAFFINITY
2013-11-29  Merge branch 'for-3.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup  (Linus Torvalds)
Pull cgroup fixes from Tejun Heo: "Fixes for three issues. - cgroup destruction path could swamp system_wq possibly leading to deadlock. This actually seems to happen in the wild with memcg because memcg destruction path adds nested dependency on system_wq. Resolved by isolating cgroup destruction work items on its dedicated workqueue. - Possible locking context deadlock through seqcount reported by lockdep - Memory leak under certain conditions" * 'for-3.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: cgroup: fix cgroup_subsys_state leak for seq_files cpuset: Fix memory allocator deadlock cgroup: use a dedicated workqueue for cgroup destruction
2013-11-29  cgroup: don't guarantee cgroup.procs is sorted if sane_behavior  (Tejun Heo)
For some reason, tasks and cgroup.procs guarantee that the result is sorted. This is the only reason this whole pidlist logic is necessary instead of just iterating through sorted member tasks. We can't do anything about the existing interface but at least ensure that such expectation doesn't exist for the new interface so that pidlist logic may be removed in the distant future. This patch scrambles the sort order if sane_behavior so that the output is usually not sorted in the new interface. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-11-29  cgroup: remove cgroup_pidlist->use_count  (Tejun Heo)
After the recent changes, the pidlist ref is held only between cgroup_pidlist_start() and cgroup_pidlist_stop(), during which cgroup->pidlist_mutex is also held. IOW, the reference count is redundant now. While in use, it's always one and pidlist_mutex is held - holding the mutex provides exactly the same protection. This patch collapses destroy_dwork queueing into cgroup_pidlist_stop() so that pidlist_mutex is not released in between and drops pidlist->use_count. This patch shouldn't introduce any behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>
2013-11-29  cgroup: load and release pidlists from seq_file start and stop respectively  (Tejun Heo)
Currently, pidlists are reference counted from file open and release methods. This means that holding onto an open file may waste memory and reads may return data which is very stale. Both aren't critical because pidlists are keyed and shared per namespace and, well, the user isn't supposed to have large delay between open and reads. cgroup is planned to be converted to use kernfs and it'd be best if we can stick to just the seq_file operations - start, next, stop and show. This can be achieved by loading pidlist on demand from start and release with time delay from stop, so that consecutive reads don't end up reloading the pidlist on each iteration. This would remove the need for hooking into open and release while also avoiding issues with holding onto pidlist for too long. The previous patches implemented delayed release and restructured pidlist handling so that pidlists can be loaded and released from seq_file start / stop. This patch actually moves pidlist load to start and release to stop. This means that pidlist is pinned only between start and stop and may go away between two consecutive read calls if the two calls are apart by more than CGROUP_PIDLIST_DESTROY_DELAY. cgroup_pidlist_start() thus can't re-use the stored cgroup_pid_list_open_file->pidlist directly. During start, it's only used as a hint indicating whether this is the first start after open or not and pidlist is always looked up or created. pidlist_mutex locking and reference counting are moved out of pidlist_array_load() so that pidlist_array_load() can perform lookup and creation atomically. While this enlarges the area covered by pidlist_mutex, given how the lock is used, it's highly unlikely to be noticeable. v2: Refreshed on top of the updated "cgroup: introduce struct cgroup_pidlist_open_file". Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Li Zefan <lizefan@huawei.com>