RFC: Audit Kernel Container IDs
Richard Guy Briggs
rgb at redhat.com
Fri Sep 15 10:19:11 UTC 2017
On 2017-09-14 01:30, Richard Guy Briggs wrote:
> On 2017-09-13 14:33, Carlos O'Donell wrote:
> > On 09/13/2017 12:13 PM, Richard Guy Briggs wrote:
> > > Containers are a userspace concept. The kernel knows nothing of them.
> > I am looking at this RFC from a userspace perspective, particularly from
> > the loader's point of view and the unshare syscall and the semantics that
> > arise from the use of it.
> > At a high level what you are doing is providing a way to group, without
> > hierarchy, processes and namespaces. The processes can move between
> > containers if they have CAP_CONTAINER_ADMIN and can open and write to
> > a special proc file.
I should clarify: it wasn't intended that a process be able to see or
modify its own or a peer's special proc container file, either to set
it or to discover its value. That was only meant for its orchestrator
or delegated agents to do, and it can't be left only to CAP_CONTAINER_ADMIN.
This may require a container to have its own mount namespace if the
trigger mechanism is a proc file write. Other methods (additional
namespaces?) may be needed to restrict it for other trigger methods.
> > * With unshare a thread may dissociate part of its execution context and
> > therefore see a distinct mount namespace. When you say "process" in this
> > particular RFC do you exclude the fact that a thread might be in a
> > distinct container from the rest of the threads in the process?
> > > The Linux audit system needs a way to be able to track the container
> > > provenance of events and actions. Audit needs the kernel's help to do
> > > this.
> > * Why does the Linux audit system need to track container provenance?
> - ability to filter unwanted, irrelevant or unimportant messages before
> they fill the queue so important messages don't get lost. This is a
> certification requirement.
> - ability to make security claims about containers, require tracking of
> actions within those containers to ensure compliance with established
> security policies.
> - ability to route messages from events to relevant audit daemon
> instance or host audit daemon instance or both, as required or
> determined by user-initiated rules
> > - How does it help to provide better audit messages?
> > - Is it enough to list the namespaces that a process occupies?
> We started with that approach back more than 4 years ago and found it
> helped, but didn't go far enough in terms of quick and inexpensive
> record filtering and left some doubt about provenance of events in the
> case of non-user context events (incoming network packets).
> > * Why does it need the kernel's help?
> > - Is there a race condition that is only fixable with kernel support?
> This was a concern, but relatively minor compared with the other benefits.
> > - Or is it easier with kernel help but not required?
> It is much easier and much less expensive.
> > Providing background on these questions would help clarify the
> > design requirements.
> Here are some references that should help provide some background:
> RFE: add namespace IDs to audit records
> SPEC Virtualization Manager Guest Lifecycle Events
> Audit, namespaces, and containers
> Containers as kernel objects
> (my reply, with references: https://lkml.org/lkml/2017/8/14/15 )
> audit: add namespace IDs to log records
> > > Since the concept of a container is entirely a userspace concept, a
> > > trigger signal from the userspace container orchestration system
> > > initiates this. This will define a point in time and a set of resources
> > > associated with a particular container with an audit container ID.
> > Please don't use the word 'signal', I suggest 'register' since you are
> > writing to a filesystem.
> Ok, that's a very reasonable request. 'signal' has a previous meaning.
> > > The trigger is a pseudo filesystem (proc, since PID tree already exists)
> > > write of a u64 representing the container ID to a file representing a
> > > process that will become the first process in a new container.
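To make the proposed trigger concrete, here is a rough userspace sketch of
what the orchestrator-side registration might look like. The file name
"containerid" under /proc/<pid> is purely an assumption for illustration;
no such file exists in the kernel today, so the demo writes to a stand-in
directory tree instead of the real procfs:

```python
import os, tempfile

def register_container_id(pid, container_id, proc_root="/proc"):
    """Sketch: write a u64 container ID, as decimal text, to the
    (hypothetical) /proc/<pid>/containerid file of the target process."""
    if not 0 <= container_id < 2**64:
        raise ValueError("container ID must fit in a u64")
    path = os.path.join(proc_root, str(pid), "containerid")
    with open(path, "w") as f:
        f.write("%d" % container_id)

# Demo against a stand-in tree, since the real proc file doesn't exist:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "1234"))
register_container_id(1234, 0xDEADBEEF, proc_root=root)
print(open(os.path.join(root, "1234", "containerid")).read())
```

The write would be performed by the orchestrator against the target
process's proc entry, never by the process against its own.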
> > > This might place restrictions on mount namespaces required to define a
> > > container, or at least careful checking of namespaces in the kernel to
> > > verify permissions of the orchestrator so it can't change its own
> > > container ID.
> > > A bind mount of nsfs may be necessary in the container orchestrator's
> > > mntNS.
> > >
> > > Require a new CAP_CONTAINER_ADMIN to be able to write to the pseudo
> > > filesystem to have this action permitted. At that time, record the
> > > child container's user-supplied 64-bit container identifier along with
> > What is a "child container?" Containers don't have any hierarchy.
> Maybe some don't, but that's not likely to last long given the
> abstraction and nesting of orchestration tools. This must be nestable.
This is why we can't rely only on CAP_CONTAINER_ADMIN to restrict a
process's ability to modify or discover its own container ID.
> > I assume that if you don't have CAP_CONTAINER_ADMIN, that nothing prevents
> > your continued operation as we have today?
> Correct. It won't prevent processes that otherwise have permission
> today from creating all the namespaces they wish.
> > > the child container's first process (which may become the container's
> > > "init" process) process ID (referenced from the initial PID namespace),
> > > all namespace IDs (in the form of a nsfs device number and inode number
> > > tuple) in a new auxiliary record AUDIT_CONTAINER with a qualifying
> > > op=$action field.
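For reference, the nsfs device/inode tuples mentioned above are already
observable from userspace by stat(2)ing the /proc/<pid>/ns/* symlinks; a
quick sketch of collecting them as they might appear in the proposed
record (Linux-only; the record itself is of course not emitted here):

```python
import os

def namespace_ids(pid="self"):
    """Collect the (device, inode) tuple for each of a process's
    namespaces from /proc/<pid>/ns, keyed by namespace name."""
    ns_dir = "/proc/%s/ns" % pid
    ids = {}
    for name in sorted(os.listdir(ns_dir)):
        st = os.stat(os.path.join(ns_dir, name))  # stat follows the nsfs link
        ids[name] = (st.st_dev, st.st_ino)
    return ids

for name, (dev, ino) in namespace_ids().items():
    print("%s dev=%d ino=%d" % (name, dev, ino))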
> > What kind of requirement is there on the first tid/pid registering
> > the container ID? What if the 8th tid/pid does the registration?
> > Would that mean that the first process of the container did not
> > register? It seems like you are suggesting that the registration
> > by the 8th tid/pid causes a cascading registration progress,
> > registering all tid/pids in the same grouping? Is that true?
> Ah, good question, I forgot to address that. The intent is that
> either a process that has already started threading will not have
> permission to execute this, or all the processes in the thread group
> will be forced into the same container. I don't have a strong opinion
> on whether or not it must be the thread group leader that receives
> the registration, but I suspect that would be wise.
> > > Issue a new auxiliary record AUDIT_CONTAINER_INFO for each valid
> > > container ID present on an auditable action or event.
> > >
> > > Forked and cloned processes inherit their parent's container ID,
> > > referenced in the process' audit_context struct.
> > So a cloned process with CLONE_NEWNS has the same container ID
> > as the parent process that called clone, at least until the clone
> > has time to change to a new container ID?
And as pointed to above, it isn't the process itself that is able to
change to a new container, but its orchestrator to move/assign it.
> > Do you foresee any case where someone might need a semantic that is
> > slightly different? For example wanting to set the container ID on
> > clone?
> I could envision that situation and I think it might be workable, were
> it not for the difficulty of synchronizing a mechanism initiated by a
> specific syscall with one initiated by a /proc write.
The ability to clone while providing a containerID would work really
well, but I'm hesitant to extend or duplicate the clone call. This
actually sounds like a potentially sane way of approaching it.
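In the meantime, the window between a clone/fork and a /proc-write
registration can be closed in userspace by having the orchestrator hold
the child on a pipe until the registration completes. A minimal sketch,
with the registration itself stubbed out by a callback since the kernel
side doesn't exist yet:

```python
import os

def spawn_registered(register, child_fn):
    """Fork a child and invoke register(pid) before the child is
    allowed to run child_fn; 'register' stands in for the proposed
    /proc/<pid>/containerid write by the orchestrator."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:              # child: block until the go-ahead byte arrives
        os.close(w)
        os.read(r, 1)
        os.close(r)
        child_fn()
        os._exit(0)
    os.close(r)
    register(pid)             # parent: register before releasing the child
    os.write(w, b"x")         # now let the child proceed
    os.close(w)
    os.waitpid(pid, 0)
    return pid

registered = []
spawn_registered(lambda pid: registered.append(pid), lambda: None)
print("registered pids:", registered)
```

A clone flag carrying the ID would make this atomic in the kernel instead;
the sketch only shows that the ordering is enforceable from userspace.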
> > > Log the creation of every namespace, inheriting/adding its spawning
> > > process' containerID(s), if applicable. Include the spawning and
> > > spawned namespace IDs (device and inode number tuples).
> > > [AUDIT_NS_CREATE, AUDIT_NS_DESTROY] [clone(2), unshare(2), setns(2)]
> > > Note: At this point it appears only network namespaces may need to track
> > > container IDs apart from processes since incoming packets may cause an
> > > auditable event before being associated with a process.
> > OK.
> > > Log the destruction of every namespace when it is no longer used by any
> > > process, include the namespace IDs (device and inode number tuples).
> > > [AUDIT_NS_DESTROY] [process exit, unshare(2), setns(2)]
> > >
> > > Issue a new auxiliary record AUDIT_NS_CHANGE listing (opt: op=$action)
> > > the parent and child namespace IDs for any changes to a process'
> > > namespaces. [setns(2)]
> > > Note: It may be possible to combine AUDIT_NS_* record formats and
> > > distinguish them with an op=$action field depending on the fields
> > > required for each message type.
> > >
> > > A process can be moved from one container to another by using the
> > > container assignment method outlined above a second time.
> > OK.
> > > When a container ceases to exist because the last process in that
> > > container has exited, and hence the last namespace has been destroyed
> > > and its refcount has dropped to zero, log the fact.
> > > (This latter is likely needed for certification accountability.) A
> > > container object may need a list of processes and/or namespaces.
> > OK.
> > > A namespace cannot directly migrate from one container to another but
> > > could be assigned to a newly spawned container. A namespace can be
> > > moved from one container to another indirectly by having that namespace
> > > used in a second process in another container and then ending all the
> > > processes in the first container.
> > OK.
> > > Feedback please.
> Thank you sir!
> > Carlos.
> - RGB
> Richard Guy Briggs <rgb at redhat.com>
> Sr. S/W Engineer, Kernel Security, Base Operating Systems
> Remote, Ottawa, Red Hat Canada
> IRC: rgb, SunRaycer
> Voice: +1.647.777.2635, Internal: (81) 32635
> Linux-audit mailing list
> Linux-audit at redhat.com