[Ksummit-discuss] [CORE TOPIC] Redesign Memory Management layer and more core subsystems

Christoph Lameter cl at gentwo.org
Fri Jun 13 17:02:54 UTC 2014


On Thu, 12 Jun 2014, Phillip Lougher wrote:

> > 1. The need to use larger order pages, and the resulting problems with
> > fragmentation. Memory sizes grow, and with them the number of page structs
> > whose state has to be maintained. Maybe there is something different? If
> > we use hugepages then we have 511 useless page structs. Some apps need
> > linear memory, where we have trouble and are creating numerous memory
> > allocators (recently the new bootmem allocator and CMA, plus lots of
> > specialized allocators in various subsystems).
> >
>
> This was never solved to my knowledge; there is no panacea here.
> Even in the 90s we had video subsystems wanting to allocate in units
> of 1Mbyte, and others in units of 4k.  The "solution" was so called
> split-level allocators, each specialised to deal with a particular
> "first class media", with them giving back memory to the underlying
> allocator when memory got tight in another specialised allocator.
> Not much different to the ad-hoc solutions being adopted in Linux,
> except the general idea was each specialised allocator had the same
> API.

It is solvable if the objects are inherently movable. If every allocated
object provides a function that allows it to be moved, then defragmentation
is possible and large contiguous areas of memory can be created at any
time.
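
Roughly what I mean, as a minimal userspace sketch (the names
movable_ops/relocate are invented, not an existing kernel interface):
each allocation registers a callback that can move the object and fix up
the owner's reference, which is all a defragmenter needs in order to
compact memory at will.

/*
 * Userspace sketch only: every allocation registers a callback that can
 * relocate the object, so a defragmenter may move it at any time to
 * create contiguous space.  All names here are invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct movable_ops {
	/* Copy the object from old to new and fix up all references to it. */
	void (*relocate)(void *old, void *new, size_t size, void *owner);
};

struct movable_obj {
	void *mem;
	size_t size;
	void *owner;			/* whoever holds pointers to mem */
	const struct movable_ops *ops;
};

/* Simplest owner: a single pointer that relocation rewrites. */
static void simple_relocate(void *old, void *new, size_t size, void *owner)
{
	memcpy(new, old, size);
	*(void **)owner = new;		/* repoint the owner's reference */
}

static const struct movable_ops simple_ops = { .relocate = simple_relocate };

/* Defragmenter side: move an object into a compacted destination. */
static void compact_move(struct movable_obj *obj, void *dest)
{
	obj->ops->relocate(obj->mem, dest, obj->size, obj->owner);
	free(obj->mem);
	obj->mem = dest;
}

int main(void)
{
	char *data = calloc(1, 32);
	strcpy(data, "payload");

	struct movable_obj obj = {
		.mem = data, .size = 32, .owner = &data, .ops = &simple_ops,
	};

	void *dest = calloc(1, 32);	/* stands in for a freed-up area */
	compact_move(&obj, dest);

	printf("moved to %p: %s\n", (void *)data, data);
	free(data);
	return 0;
}

Page migration already does something comparable for pages that are known
to be movable; the question is whether we can require such a callback for
(nearly) every object.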


> > Can we develop the notion that subsystems own certain cores so that their
> > execution is restricted to a subset of the system avoiding data
> > replication and keeping subsystem data hot? I.e. have a device driver
> > and subsystems driving those devices just run on the NUMA node to which
> > the PCI-E root complex is attached. Restricting to a NUMA node reduces data
> > locality complexity and increases performance due to cache-hot data.
>
> Lots of academic hot-air was expended here when designing distributed
> systems which could scale seamlessly across heterogeneous CPUs connected
> via different levels of interconnects (bus, ATM, ethernet etc.), zoning,
> migration, replication etc.  The "solution" is probably out there somewhere
> forgotten about.

We have this issue even with homogeneous CPUs due to the proliferation of
cores on today's processors. Maybe that is solvable?
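
To illustrate the device-local idea, a toy userspace sketch (libnuma, link
with -lnuma; the PCI address is a placeholder, and a kernel-side version
would use dev_to_node() and a per-node cpumask instead of sysfs): look up
the NUMA node of a device and restrict the calling thread and its buffer
to that node.

#include <stdio.h>
#include <numa.h>

/* Read the NUMA node of a PCI device from sysfs. */
static int pci_dev_node(const char *bdf)
{
	char path[256];
	int node = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/bus/pci/devices/%s/numa_node", bdf);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);
	return node;
}

int main(void)
{
	const char *bdf = "0000:03:00.0";	/* placeholder device */
	int node;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	node = pci_dev_node(bdf);
	if (node < 0)
		node = 0;		/* unknown or legacy: fall back */

	/* Run this thread only on the CPUs of that node ... */
	numa_run_on_node(node);
	/* ... and keep its working data on the same node. */
	void *buf = numa_alloc_onnode(1 << 20, node);

	printf("restricted to node %d, buffer at %p\n", node, buf);
	numa_free(buf, 1 << 20);
	return 0;
}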

> Case in point, many years ago I was the lead Linux guy for a company
> designing a SOC for digital TV.  Just before I left I had an interesting
> "conversation" with the chief hardware guy of the team who designed the SOC.
> Turns out they'd budgeted for the RAM bandwidth needed to decode a typical
> MPEG stream, but they'd not reckoned on all the memcopies Linux needs to do
> between its "separate address space" processes.  He'd been used to embedded
> OSes which run in a single address space.

Well, maybe that is appropriate for some processes? And we could carve out
subsections of the hardware where single-address-space operation is possible?
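
On the memcpy point: even with separate address spaces the data need not
be copied if both sides map the same buffer. A trivial userspace sketch
(an anonymous shared mapping inherited across fork(); a real media
pipeline would export the mapping through a driver or a shared-memory
file instead):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUF_SZ (1 << 20)	/* e.g. one decoded frame */

int main(void)
{
	/* Anonymous shared mapping, visible to both processes after fork(). */
	char *buf = mmap(NULL, BUF_SZ, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (fork() == 0) {
		/* "Decoder" child: produce data directly into the buffer. */
		strcpy(buf, "frame data");
		return 0;
	}

	wait(NULL);
	/* "Display" parent: consume in place, no copy across processes. */
	printf("consumer sees: %s\n", buf);
	munmap(buf, BUF_SZ);
	return 0;
}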

