[RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remapping table entry for watchdog

Jacob Pan jacob.jun.pan at intel.com
Fri Jun 21 18:39:38 UTC 2019


On Fri, 21 Jun 2019 10:31:26 -0700
Jacob Pan <jacob.jun.pan at intel.com> wrote:

> On Fri, 21 Jun 2019 17:33:28 +0200 (CEST)
> Thomas Gleixner <tglx at linutronix.de> wrote:
> 
> > On Wed, 19 Jun 2019, Jacob Pan wrote:  
> > > On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> > > Thomas Gleixner <tglx at linutronix.de> wrote:    
> > > > 
> > > > Unless this problem is solved (and I doubt it can be solved,
> > > > after talking to IOMMU people and studying manuals),
> > >
> > > I agree. Modifying the IRTE might be done with cmpxchg_double()
> > > (see the sketch below), but the queued invalidation interface for
> > > the IRTE cache flush is shared with DMA and requires holding a
> > > spinlock for enqueueing descriptors, the QI tail update, etc.
> > > 
> > > Also, reserving & manipulating an IRTE slot for the HPET via a
> > > backdoor might not be needed if the HPET PCI BDF (found in ACPI)
> > > can be utilized. But it might need more work to add a fake PCI
> > > device for the HPET.
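
For the record, the lockless IRTE update itself looks doable; the
actual modify_irte() in drivers/iommu/intel_irq_remapping.c already
uses cmpxchg_double() for the posted-interrupt case. A minimal,
untested sketch of just that part, assuming the two-u64 layout of
struct irte from include/linux/dmar.h (the helper name is made up):

#include <linux/dmar.h>

/* Atomically swap both 64-bit halves of a 128-bit IRTE. */
static bool irte_replace_atomic(struct irte *entry,
				struct irte *old, struct irte *new)
{
	return cmpxchg_double(&entry->low, &entry->high,
			      old->low, old->high,
			      new->low, new->high);
}

The hard part is what follows the update: the IRTE cache flush
(qi_flush_iec()) goes through the shared queued-invalidation ring,
which takes a spinlock and therefore cannot run in NMI context.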
> > 
> > What would PCI/BDF solve?  
> I was thinking that if the HPET were a PCI device, it could naturally
> gain slots in the IOMMU interrupt remapping table (IRTEs) via the PCI
> MSI code. Then perhaps it could use the IRQ subsystem to set affinity
> etc. without directly adding additional helper functions in the IRQ
> remapping code. I have not followed all the discussions, just a
> thought.
> 
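
To be clear on the BDF point: the VT-d driver already learns the
HPET's source-id (bus/devfn) from the ACPI DMAR device scope and
programs it into the IRTE. This is roughly what set_hpet_sid() in
drivers/iommu/intel_irq_remapping.c does today (abridged, locking
omitted; ir_hpet[] is populated from the DMAR tables at boot):

/* Program the HPET's requester-id into the IRTE for SID checks. */
static int set_hpet_sid(struct irte *irte, u8 id)
{
	u16 sid = 0;
	int i;

	for (i = 0; i < MAX_HPET_TBS; i++) {
		if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
			sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
			break;
		}
	}
	if (!sid)
		return -1;

	set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_13_IGNORE_3, sid);
	return 0;
}

So a fake PCI device may not be needed just to get a valid source-id;
the HPET MSI path already supplies one.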
I looked at the code again; it seems the per-CPU HPET code already
takes care of HPET MSI management. Why can't we use the IR-HPET-MSI
irq chip and domain to allocate the interrupt, set affinity, etc.?
Most APIC timers have ARAT, and there are not enough HPET channels for
one per CPU anyway, so the per-CPU HPET path is mostly unused.
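
A minimal sketch of what I mean, reusing hpet_create_irq_domain() and
hpet_assign_irq() from arch/x86/kernel/apic/msi.c (the watchdog-side
helper name and dev_num handling are made up for illustration):

#include <linux/irq.h>
#include <linux/irqdomain.h>

/*
 * Allocate an HPET MSI interrupt through the (remapping-aware) HPET
 * MSI irq domain. Affinity changes on the returned irq then go
 * through the normal irq core, which updates the IRTE with proper
 * locking in process context.
 */
static int hpet_hld_setup_irq(int hpet_id, struct hpet_dev *hdev,
			      int dev_num)
{
	struct irq_domain *domain;

	domain = hpet_create_irq_domain(hpet_id);
	if (!domain)
		return -ENODEV;

	return hpet_assign_irq(domain, hdev, dev_num);
}

Then irq_set_affinity() on that irq does the IRTE update in the right
context, instead of the watchdog poking the remapping code directly.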


Jacob

