[PATCH 1/2] Adds a read-only "procs" file similar to "tasks" that shows only unique tgids
menage at google.com
Thu Jul 2 21:16:15 PDT 2009
On Thu, Jul 2, 2009 at 7:08 PM, Andrew Morton <akpm at linux-foundation.org> wrote:
> Why are we doing all this anyway? To avoid presenting duplicated pids
> to userspace? Nothing else?
To present the pids or tgids in sorted order. Removing duplicates is
only for the case of the "procs" file; that could certainly be left to
userspace, but it wouldn't by itself remove the existing requirement
for a contiguous array.
The seq_file iterator for these files relies on them being sorted so
that it can pick up where it left off even if the pid set changes
between reads: it does a binary search to find the first pid greater
than the last one returned. This guarantees that we return every pid
that was in the cgroup before the scan started and remained in the
cgroup until after the scan finished; there are no guarantees about
pids that enter or leave the cgroup during the scan.
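The resume step described above can be sketched as a plain binary
search over a sorted pid array (hypothetical helper names, not the
actual kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Find the index of the first pid strictly greater than 'last' in a
 * sorted pid array; a seq_file ->start() can resume from here even if
 * the pid set changed between reads.  Illustrative sketch only. */
static size_t resume_index(const pid_t *pids, size_t len, pid_t last)
{
        size_t lo = 0, hi = len;        /* answer lies in [lo, hi] */

        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (pids[mid] <= last)
                        lo = mid + 1;   /* skip everything <= last */
                else
                        hi = mid;
        }
        return lo;                      /* == len if no pid > last */
}
```

Because the search keys on "greater than the last pid returned", a pid
that was removed and re-added between reads is neither skipped nor
returned twice.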
> Or we can do it the other way? Create an initially-empty local IDR
> tree or radix tree and, within that, mark off any pids which we've
> already emitted? That'll have a worst-case memory consumption of
> approximately PID_MAX_LIMIT bits -- presently that's half a megabyte.
> With no large allocations needed?
But that would be half a megabyte per open fd? That's a lot of memory
that userspace can pin down by opening fds. The point of the current
pid array approach is to ensure that there's only ever one pid array
allocated at a time per cgroup, rather than one per open fd.
There's actually an existing structure for doing that - cgroup_scanner
- which uses a high-watermark and a priority heap to provide a similar
guarantee with a constant memory overhead (typically one page).
But it can take O(n^2) time to scan a large cgroup, as would, I
suspect, using an IDR, so it's only used for cases where we really
can't avoid it due to locking reasons. I'd rather have something that
accumulates unsorted pids in page-size chunks as we iterate through
the cgroup, and then sorts them using something like Lai Jiangshan's
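The accumulate-then-sort idea could be sketched as follows (chunk size,
struct layout, and function names are all illustrative, not a proposed
kernel API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Append pids to a chain of fixed-size chunks (page-sized in the
 * kernel), then flatten and sort once at the end.  Sketch only. */
#define CHUNK_PIDS 1024                 /* roughly one 4K page of pid_t */

struct pid_chunk {
        struct pid_chunk *next;
        size_t used;
        pid_t pids[CHUNK_PIDS];
};

static int cmp_pid(const void *a, const void *b)
{
        pid_t pa = *(const pid_t *)a, pb = *(const pid_t *)b;
        return (pa > pb) - (pa < pb);
}

/* Flatten the chunk chain into one array and sort it; the caller
 * frees the result.  O(n log n), unlike the O(n^2) heap rescans. */
static pid_t *collect_and_sort(struct pid_chunk *head, size_t *out_len)
{
        size_t n = 0, off = 0;
        struct pid_chunk *c;

        for (c = head; c; c = c->next)
                n += c->used;

        pid_t *out = malloc(n * sizeof(*out));
        for (c = head; c; c = c->next) {
                memcpy(out + off, c->pids, c->used * sizeof(pid_t));
                off += c->used;
        }
        qsort(out, n, sizeof(pid_t), cmp_pid);
        *out_len = n;
        return out;
}
```

The point is that the per-chunk allocations stay small even for a huge
cgroup, and the sort happens once rather than on every insertion.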
> btw, did pidlist_uniq() actually needs to allocate new memory for the
> output array? Could it have done the filtering in-place?
Yes - or we could just omit duplicates in the seq_file iterator, I guess.
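In-place filtering on an already-sorted array needs no second
allocation; a sketch (not the actual pidlist_uniq() from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Remove duplicates from a sorted pid array in place and return the
 * new length.  Illustrative sketch, not the patch's pidlist_uniq(). */
static size_t uniq_inplace(pid_t *pids, size_t len)
{
        size_t dst = 0, src;

        if (len == 0)
                return 0;

        for (src = 1; src < len; src++)
                if (pids[src] != pids[dst])
                        pids[++dst] = pids[src];
        return dst + 1;
}
```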