[Ce-android-mainline] [RFC][PATCH 1/1] gpu: ion: Add IOMMU heap allocator with IOMMU API

Arnd Bergmann arnd at arndb.de
Thu Jan 19 12:04:34 UTC 2012

On Wednesday 18 January 2012, Hiroshi Doyu wrote:
> Hi,
> Recently we've implemented IOMMU heap as an attachment which is one of
> the ION memory manager(*1) heap/backend. This implementation is
> completely independent of any SoC, and this can be used for other SoC
> as well. If our implementation is not totally wrong, it would be nice
> to share some experience/code here since Ion is still not so clear to
> me yet.
> I found that Linaro also seems to have started some ION work(*2). I
> think that some of Ion feature could be supported/replaced with Linaro
> UMM. For example, presently "ion_iommu_heap" is implemented with the
> standard IOMMU API, but it could be also implemented with the coming
> DMA API? Also DMABUF can be used in Ion core part as well, I guess.
> Currently there's no Ion memmgr code in the upstream
> "drivers/staging/android"(*3). Is there any plan to support this? Or
> is this something considered as a completely _temporary_ solution, and
> never going to be added?
> It would be nice if we can share some of our effort here since not
> small Android users need Ion, even temporary.
> Any comment would be really appreciated.


Your implementation looks quite nice overall, but on the high level,
I don't see what the benefit of using the IOMMU API is here instead
of the dma-mapping API. I believe that all device drivers should
by default use the dma-mapping interfaces, which may in turn
be based on a linear mapping (possibly using CMA) or an IOMMU.
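As a rough sketch of what I mean (function and variable names here are
hypothetical, not from the patch): with the dma-mapping API, the driver
never needs to know whether an IOMMU is present at all.

```c
/* Sketch only: an allocation routed through the dma-mapping API
 * instead of the raw IOMMU API.  The dma-mapping layer decides
 * whether the backing is a linear mapping, CMA, or an IOMMU. */
#include <linux/dma-mapping.h>

static int example_alloc(struct device *dev, size_t size)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... program the device with dma_handle ... */

	dma_free_coherent(dev, size, cpu_addr, dma_handle);
	return 0;
}
```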

The main advantage of the IOMMU API is that it is able to separate
multiple contexts, per user, so one application that is able to
program a DMA engine cannot access memory that another application
has mapped. Is that what you do with ion_iommu_heap_create(), i.e.
that it gets called for each GPU context?

>+#define NUM_PAGES(buf) (PAGE_ALIGN((buf)->size) >> PAGE_SHIFT)
>+#define GFP_ION                (GFP_KERNEL | __GFP_HIGHMEM | __GFP_NOWARN)

I find macros like these more confusing than helpful because to the
casual reader of one driver they don't mean anything. It's better to
just open-code them.
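For instance, the two macros could simply be written out at their
(single) points of use:

```c
/* Open-coded equivalents of NUM_PAGES() and GFP_ION: the reader
 * immediately sees what is being computed and which GFP flags apply. */
int npages = PAGE_ALIGN(buf->size) >> PAGE_SHIFT;
struct page *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_NOWARN);
```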

>+static int ion_buffer_allocate(struct ion_buffer *buf)
>+       int i, npages = NUM_PAGES(buf);
>+       buf->pages = kmalloc(npages * sizeof(*buf->pages), GFP_KERNEL);
>+       if (!buf->pages)
>+               goto err_pages;
>+       buf->sglist = vzalloc(npages * sizeof(*buf->sglist));
>+       if (!buf->sglist)
>+               goto err_sgl;

Using vzalloc here probably goes a bit too far: it's fairly expensive
compared with kzalloc. Not only do you have to maintain the page table
entries for the new mapping, it also means you use up precious space
in the vmalloc area and have to use small pages for accessing the
data in it.

Why do you need two arrays here? The sglist already contains
pointers to each page, right?
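To illustrate: the struct page pointer stored by sg_set_page() can be
recovered later with sg_page(), so one scatterlist should be enough
(sketch only, reusing the names from the patch):

```c
/* Sketch: walk the single scatterlist instead of keeping a
 * separate buf->pages[] array in parallel. */
struct scatterlist *sg;
int i;

for_each_sg(buf->sglist, sg, npages, i) {
	struct page *page = sg_page(sg);
	/* ... use page, e.g. for vm_insert_page() ... */
}
```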

>+       sg_init_table(buf->sglist, npages);
>+       for (i = 0; i < npages; i++) {
>+               struct page *page;
>+               phys_addr_t pa;
>+               page = alloc_page(GFP_ION);
>+               if (!page)
>+                       goto err_pgalloc;
>+               pa = page_to_phys(page);
>+               sg_set_page(&buf->sglist[i], page, PAGE_SIZE, 0);
>+               flush_dcache_page(page);
>+               outer_flush_range(pa, pa + PAGE_SIZE);

This allocates pages that are contiguous only by page. Note
that a lot of IOMMUs work only (or at least much better) with
larger allocations, such as 64KB at a time.
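A sketch of what allocating in larger chunks might look like, with a
fallback to single pages when higher-order allocations fail (the exact
chunk size would depend on what the IOMMU supports):

```c
/* Sketch: try a 64KB chunk first (order 4 with 4KB pages) so the
 * IOMMU can use a larger mapping, then fall back to single pages. */
struct page *page;

page = alloc_pages(GFP_KERNEL | __GFP_HIGHMEM | __GFP_NOWARN,
		   get_order(SZ_64K));
if (!page)
	page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_NOWARN);
```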

Further, the code you have here is completely ARM-specific:
outer_flush_range() is only available on ARM, and most architectures
don't require flushing caches here. In fact, even on ARM I see no
reason to flush a newly allocated page; invalidating should be
sufficient.

>+static void *iommu_heap_map_kernel(struct ion_heap *heap,
>+                                  struct ion_buffer *buf)
>+       int npages = NUM_PAGES(buf);
>+       BUG_ON(!buf->pages);
>+       buf->vaddr = vm_map_ram(buf->pages, npages, -1,
>+                               pgprot_noncached(pgprot_kernel));
>+       pr_debug("va:%p\n", buf->vaddr);
>+       WARN_ON(!buf->vaddr);
>+       return buf->vaddr;

You can't do this on ARMv6 or ARMv7: There is already a cacheable
mapping of all entries in buf->pages, so you must not install
a second noncached mapping. Either make sure you have only
noncached pages, or only cached ones.
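One way to avoid the double mapping (sketch only, assuming the heap
has a struct device to hand) is to let the dma-mapping layer create
the kernel mapping with consistent attributes in the first place,
rather than installing a second noncached mapping by hand:

```c
/* Sketch: on ARM, dma_alloc_writecombine() returns a kernel virtual
 * address whose attributes are consistent, so no conflicting second
 * mapping of the same pages is created. */
dma_addr_t dma_handle;
void *vaddr = dma_alloc_writecombine(dev, size, &dma_handle, GFP_KERNEL);
```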

>+static int iommu_heap_map_user(struct ion_heap *mapper,
>+                              struct ion_buffer *buf,
>+                              struct vm_area_struct *vma)
>+       int i = vma->vm_pgoff >> PAGE_SHIFT;
>+       unsigned long uaddr = vma->vm_start;
>+       unsigned long usize = vma->vm_end - vma->vm_start;
>+       pr_debug("vma:%08lx-%08lx\n", vma->vm_start, vma->vm_end);
>+       BUG_ON(!buf->pages);
>+       vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>+       do {
>+               int ret;
>+               struct page *page = buf->pages[i++];
>+               ret = vm_insert_page(vma, uaddr, page);
>+               if (ret)
>+                       return ret;

Same here: If you want to have an uncached mapping in user space,
the pages must not be in a cacheable kernel mapping.
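If the buffer came from the dma-mapping API as suggested above, the
user-space mapping can be created with matching attributes in one call
(a hypothetical sketch, not the patch's code):

```c
/* Sketch: map a buffer obtained from dma_alloc_writecombine() into
 * user space with the same (writecombining) attributes, instead of
 * inserting cacheable pages under a noncached vma by hand. */
static int example_mmap(struct device *dev, struct vm_area_struct *vma,
			void *cpu_addr, dma_addr_t dma_handle, size_t size)
{
	return dma_mmap_writecombine(dev, vma, cpu_addr, dma_handle, size);
}
```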