[RFC PATCH net-next v2 0/5] netns: allow to identify peer netns

Eric W. Biederman ebiederm at xmission.com
Fri Sep 26 18:57:44 UTC 2014


Andy Lutomirski <luto at amacapital.net> writes:

> On Fri, Sep 26, 2014 at 11:10 AM, Eric W. Biederman
> <ebiederm at xmission.com> wrote:
>> Nicolas Dichtel <nicolas.dichtel at 6wind.com> writes:
>>
>>> The goal of this series is to be able to multicast netlink messages with an
>>> attribute that identifies a peer netns.
>>> This is needed by userland to interpret some information contained in
>>> netlink messages (like the IFLA_LINK value, but also some other attributes in
>>> the case of x-netns netdevices (see also
>>> http://thread.gmane.org/gmane.linux.network/315933/focus=316064 and
>>> http://thread.gmane.org/gmane.linux.kernel.containers/28301/focus=4239)).
>>
>> I want to say that the problem addressed by patch 3/5 of this series is a
>> fundamentally valid problem.  We have network objects spanning network
>> namespaces and it would be very nice to be able to talk about them in
>> netlink, and file descriptors are too local and arguably too heavyweight
>> for netlink queries and especially for netlink broadcast messages.
>>
>> Furthermore, the internal concept of peernet2id seems valid.
>>
>> However what you do not address is a way for CRIU (aka process
>> migration) to be able to restore these ids after process migration.
>> Going farther it looks like you are actively breaking process migration
>> at this time, making this set of patches a no-go.
>>
>> When adding a new form of namespace id, CRIU patches are just about
>> as necessary as iproute patches.
>>
>>> Ids are stored in the parent user namespace. These ids are valid only inside
>>> this user namespace. The user can retrieve these ids via a new netlink messages,
>>> but only if peer netns are in the same user namespace.
>>
>> That does not describe what you have actually implemented in the
>> patches.
>>
>> I see two ways to go with this.
>>
>> - A per-network-namespace table in which you can store ids for ``peer''
>>   network namespaces.  The table would need to be populated manually by
>>   the likes of ip netns add.
>>
>>   That flips the order of assignment and makes this idea solid.
>>
>>   Unfortunately in the case of a fully referencing mesh of N network
>>   namespaces such a mesh winds up taking O(N^2) space, which seems
>>   undesirable.
>>
>> - Add a netlink attribute that says this network element is in a peer
>>   network namespace.
>>
>>   Add a unicast query message that lets you ask if the remote
>>   end of a tunnel is in a network namespace specified by file
>>   descriptor.
>>
>> I personally lean towards the second version as it is fundamentally
>> simpler, generally scales better, and reuses the existing visibility
>> controls.  The only downside is that it requires a query after
>> receiving a netlink broadcast message in the cases we care about.
>
> The downside of that approach, and all the similar kcmp stuff, is that
> it scales poorly for applications using it.  This is probably not the
> end of the world, but it's not ideal.

Agreed, the efficiency is not ideal and there is plenty of room for
optimization.  We could certainly adopt some of kcmp's ordering
infrastructure to make it suck less, or even potentially work out how
to return a file descriptor to the network namespace in question.
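For context, here is a rough sketch of what comparing namespace references
from userspace looks like today, on any modern Linux: two file descriptors
(or processes) refer to the same network namespace exactly when their
/proc/<pid>/ns/net entries share the same device and inode numbers.
kcmp(2)'s KCMP_FILE mode provides an ordered comparison in the same
spirit on kernels that support it.  This is only an illustration of the
existing mechanism, not the interface proposed in this thread; the helper
names are made up.

```python
# Sketch: identifying a process's network namespace from userspace by
# stat()ing the /proc/<pid>/ns/net symlink.  Two processes are in the
# same netns iff their (st_dev, st_ino) pairs match.  kcmp(2) offers an
# ordered comparison with a similar flavor, but needs kernel support.
import os

def netns_id(pid="self"):
    """Return an identity token for a process's network namespace."""
    st = os.stat(f"/proc/{pid}/ns/net")
    return (st.st_dev, st.st_ino)

def same_netns(pid_a, pid_b):
    """True iff the two processes share a network namespace."""
    return netns_id(pid_a) == netns_id(pid_b)

if __name__ == "__main__":
    # A process is trivially in the same netns as itself.
    print(same_netns("self", os.getpid()))  # prints True
```

Note that these (dev, ino) tokens are only meaningful while something
holds the namespace alive, which is part of why a stable, restorable id
(the subject of this series) is attractive in the first place.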

The key insight of my second proposal is that we can get out of the
broadcast message business, and only care about the remote namespace for
unicast messages.  Putting the work in an infrequently used slow path
instead of a comparatively common path gives us much more freedom in
the implementation.

Eric


More information about the Containers mailing list