[PATCH 2/5] ftrace: use code patching for ftrace graph tracer

Steven Rostedt rostedt at goodmis.org
Wed Nov 26 08:46:58 PST 2008



On Tue, 25 Nov 2008, Andrew Morton wrote:

> On Wed, 26 Nov 2008 00:16:24 -0500 Steven Rostedt <rostedt at goodmis.org> wrote:
> 
> > From: Steven Rostedt <rostedt at goodmis.org>
> > 
> > Impact: more efficient code for ftrace graph tracer
> > 
> > This patch uses dynamic code patching, when available, to patch
> > the function graph code into the kernel.
> > 
> > This patch will ease the way for letting both function tracing
> > and function graph tracing run together.
> > 
> > ...
> >
> > +static int ftrace_mod_jmp(unsigned long ip,
> > +			  int old_offset, int new_offset)
> > +{
> > +	unsigned char code[MCOUNT_INSN_SIZE];
> > +
> > +	if (probe_kernel_read(code, (void *)ip, MCOUNT_INSN_SIZE))
> > +		return -EFAULT;
> > +
> > +	if (code[0] != 0xe9 || old_offset != *(int *)(&code[1]))
> 
> erk.  I suspect that there's a nicer way of doing this amongst our
> forest of get_unaligned_foo() interfaces.  Harvey will know.

Hmm, I may be able to make a struct out of "code":

  struct {
	unsigned char op;
	int           offset;
  } __attribute__((packed)) code;

Would that look better?

> 
> > +		return -EINVAL;
> > +
> > +	*(int *)(&code[1]) = new_offset;
> 
> Might be able to use put_unaligned_foo() here.
> 
> The problem is that these functions use sizeof(*ptr) to work out what
> to do, so a cast is still needed.  A get_unaligned32(ptr) would be
> nice.  One which takes a void* and assumes CPU ordering.

Is there a correctness concern here? This is arch-specific code, so I'm
not worried about other archs.

-- Steve

> 
> > +	if (do_ftrace_mod_code(ip, &code))
> > +		return -EPERM;
> > +
> > +	return 0;
> > +}
> > +


More information about the Containers mailing list