Merge branch 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Issue at least one memory barrier in stop_machine_text_poke()
  perf probe: Correct probe syntax on command line help
  perf probe: Add lazy line matching support
  perf probe: Show more lines after last line
  perf probe: Check function address range strictly in line finder
  perf probe: Use libdw callback routines
  perf probe: Use elfutils-libdw for analyzing debuginfo
  perf probe: Rename probe finder functions
  perf probe: Fix bugs in line range finder
  perf probe: Update perf probe document
  perf probe: Do not show --line option without dwarf support
  kprobes: Add documents of jump optimization
  kprobes/x86: Support kprobes jump optimization on x86
  x86: Add text_poke_smp for SMP cross modifying code
  kprobes/x86: Cleanup save/restore registers
  kprobes/x86: Boost probes when reentering
  kprobes: Jump optimization sysctl interface
  kprobes: Introduce kprobes jump optimization
  kprobes: Introduce generic insn_slot framework
  kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE
This commit is contained in:
Linus Torvalds 2010-03-05 10:50:22 -08:00
commit 660f6a360b
18 changed files with 2091 additions and 863 deletions


@ -1,6 +1,7 @@
Title	: Kernel Probes (Kprobes)
Authors	: Jim Keniston <jkenisto@us.ibm.com>
	: Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
	: Masami Hiramatsu <mhiramat@redhat.com>

CONTENTS
@ -15,6 +16,7 @@ CONTENTS
9. Jprobes Example
10. Kretprobes Example
Appendix A: The kprobes debugfs interface
Appendix B: The kprobes sysctl interface

1. Concepts: Kprobes, Jprobes, Return Probes
@ -42,13 +44,13 @@ registration/unregistration of a group of *probes. These functions
can speed up unregistration process when you have to unregister
a lot of probes at once.

The next four subsections explain how the different types of
probes work and how jump optimization works. They explain certain
things that you'll need to know in order to make the best use of
Kprobes -- e.g., the difference between a pre_handler and
a post_handler, and how to use the maxactive and nmissed fields of
a kretprobe. But if you're in a hurry to start using Kprobes, you
can skip ahead to section 2.

1.1 How Does a Kprobe Work?
@ -161,13 +163,125 @@ In case probed function is entered but there is no kretprobe_instance
object available, then in addition to incrementing the nmissed count,
the user entry_handler invocation is also skipped.
1.4 How Does Jump Optimization Work?
If you configured your kernel with CONFIG_OPTPROBES=y (currently
this option is supported on x86/x86-64, non-preemptive kernel) and
the "debug.kprobes_optimization" kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.
1.4.1 Init a Kprobe
When a probe is registered, before attempting this optimization,
Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
address. So, even if it's not possible to optimize this particular
probepoint, there'll be a probe there.
1.4.2 Safety Check
Before optimizing a probe, Kprobes performs the following safety checks:
- Kprobes verifies that the region that will be replaced by the jump
instruction (the "optimized region") lies entirely within one function.
(A jump instruction is multiple bytes, and so may overlay multiple
instructions.)
- Kprobes analyzes the entire function and verifies that there is no
jump into the optimized region. Specifically:
- the function contains no indirect jump;
- the function contains no instruction that causes an exception (since
the fixup code triggered by the exception could jump back into the
optimized region -- Kprobes checks the exception tables to verify this);
and
- there is no near jump to the optimized region (other than to the first
byte).
- For each instruction in the optimized region, Kprobes verifies that
the instruction can be executed out of line.
1.4.3 Preparing Detour Buffer
Next, Kprobes prepares a "detour" buffer, which contains the following
instruction sequence:
- code to push the CPU's registers (emulating a breakpoint trap)
- a call to the trampoline code which calls user's probe handlers.
- code to restore registers
- the instructions from the optimized region
- a jump back to the original execution path.
1.4.4 Pre-optimization
After preparing the detour buffer, Kprobes verifies that none of the
following situations exist:
- The probe has either a break_handler (i.e., it's a jprobe) or a
post_handler.
- Other instructions in the optimized region are probed.
- The probe is disabled.
In any of the above cases, Kprobes won't start optimizing the probe.
Since these are temporary situations, Kprobes tries to start
optimizing it again once the situation changes.
If the kprobe can be optimized, Kprobes enqueues the kprobe to an
optimizing list, and kicks the kprobe-optimizer workqueue to optimize
it. If the to-be-optimized probepoint is hit before being optimized,
Kprobes returns control to the original instruction path by setting
the CPU's instruction pointer to the copied code in the detour buffer
-- thus at least avoiding the single-step.
1.4.5 Optimization
The Kprobe-optimizer doesn't insert the jump instruction immediately;
rather, it calls synchronize_sched() for safety first, because it's
possible for a CPU to be interrupted in the middle of executing the
optimized region(*). As you know, synchronize_sched() can ensure
that all interruptions that were active when synchronize_sched()
was called are done, but only if CONFIG_PREEMPT=n. So, this version
of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)
After that, the Kprobe-optimizer calls stop_machine() to replace
the optimized region with a jump instruction to the detour buffer,
using text_poke_smp().
1.4.6 Unoptimization
When an optimized kprobe is unregistered, disabled, or blocked by
another kprobe, it will be unoptimized. If this happens before
the optimization is complete, the kprobe is just dequeued from the
optimized list. If the optimization has been done, the jump is
replaced with the original code (except for an int3 breakpoint in
the first byte) by using text_poke_smp().
(*)Please imagine that the 2nd instruction is interrupted and then
the optimizer replaces the 2nd instruction with the jump *address*
while the interrupt handler is running. When the interrupt handler
returns to the original address, there is no valid instruction there,
and the result is unpredictable.
(**)This optimization-safety checking may be replaced with the
stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
kernel.
NOTE for geeks:
The jump optimization changes the kprobe's pre_handler behavior.
Without optimization, the pre_handler can change the kernel's execution
path by changing regs->ip and returning 1. However, when the probe
is optimized, that modification is ignored. Thus, if you want to
tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe's post_handler or break_handler.
or
- Config CONFIG_OPTPROBES=n.
or
- Execute 'sysctl -w debug.kprobes_optimization=n'
2. Architectures Supported 2. Architectures Supported
Kprobes, jprobes, and return probes are implemented on the following
architectures:

- i386 (Supports jump optimization)
- x86_64 (AMD-64, EM64T) (Supports jump optimization)
- ppc64
- ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.)
@ -193,6 +307,10 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.
If you want to reduce probing overhead, set "Kprobes jump optimization
support" (CONFIG_OPTPROBES) to "y". You can find this option under the
"Kprobes" line.
4. API Reference

The Kprobes API includes a "register" function and an "unregister"
@ -389,7 +507,10 @@ the probe which has been registered.
Kprobes allows multiple probes at the same address. Currently,
however, there cannot be multiple jprobes on the same function at
the same time. Also, a probepoint for which there is a jprobe or
a post_handler cannot be optimized. So if you install a jprobe,
or a kprobe with a post_handler, at an optimized probepoint, the
probepoint will be unoptimized automatically.

In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
@ -453,6 +574,38 @@ reason, Kprobes doesn't support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.
On x86/x86-64, since the Jump Optimization of Kprobes modifies
instructions widely, there are some limitations to optimization. To
explain it, we introduce some terminology. Imagine a 3-instruction
sequence consisting of two 2-byte instructions and one 3-byte
instruction.
        IA
         |
[-2][-1][0][1][2][3][4][5][6][7]
        [ins1][ins2][  ins3  ]
        [<-      DCR       ->]
        [<- JTPR ->]
[<- JTPR ->]
ins1: 1st Instruction
ins2: 2nd Instruction
ins3: 3rd Instruction
IA: Insertion Address
JTPR: Jump Target Prohibition Region
DCR: Detoured Code Region
The instructions in DCR are copied to the out-of-line buffer
of the kprobe, because the bytes in DCR are replaced by
a 5-byte jump instruction. So there are several limitations.
a) The instructions in DCR must be relocatable.
b) The instructions in DCR must not include a call instruction.
c) JTPR must not be targeted by any jump or call instruction.
d) DCR must not straddle the border between functions.
Anyway, these limitations are checked by the in-kernel instruction
decoder, so you don't need to worry about that.
6. Probe Overhead

On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
@ -476,6 +629,19 @@ k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
6.1 Optimized Probe Overhead
Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
process. Here are sample overhead figures (in usec) for x86 architectures.
k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
7. TODO

a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
@ -523,7 +689,8 @@ is also specified. Following columns show probe status. If the probe is on
a virtual address that is no longer valid (module init sections, module
virtual addresses that correspond to modules that've been unloaded),
such probes are marked with [GONE]. If the probe is temporarily disabled,
such probes are marked with [DISABLED]. If the probe is optimized, it is
marked with [OPTIMIZED].
/sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.
@ -533,3 +700,19 @@ registered probes will be disarmed, till such time a "1" is echoed to this
file. Note that this knob just disarms and arms all kprobes and doesn't
change each probe's disabling state. This means that disabled kprobes (marked
[DISABLED]) will not be enabled if you turn ON all kprobes by this knob.
Appendix B: The kprobes sysctl interface
/proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.
When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
a knob to globally and forcibly turn jump optimization (see section
1.4) ON or OFF. By default, jump optimization is allowed (ON).
If you echo "0" to this file or set "debug.kprobes_optimization" to
0 via sysctl, all optimized probes will be unoptimized, and any new
probes registered after that will not be optimized. Note that this
knob *changes* the optimized state. This means that optimized probes
(marked [OPTIMIZED]) will be unoptimized ([OPTIMIZED] tag will be
removed). If the knob is turned on, they will be optimized again.
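A typical session with this knob might look as follows. This is a hedged illustration, not output captured from a real machine; it assumes a kernel built with CONFIG_OPTPROBES=y and root privileges.

```shell
# Check whether jump optimization is currently allowed (1 = ON).
cat /proc/sys/debug/kprobes-optimization

# Unoptimize all optimized probes and forbid new optimizations.
sysctl -w debug.kprobes_optimization=0

# Allow optimization again; eligible probes are re-optimized.
sysctl -w debug.kprobes_optimization=1
```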


@ -41,6 +41,17 @@ config KPROBES
	  for kernel debugging, non-intrusive instrumentation and testing.

	  If in doubt, say "N".
config OPTPROBES
bool "Kprobes jump optimization support (EXPERIMENTAL)"
default y
depends on KPROBES
depends on !PREEMPT
depends on HAVE_OPTPROBES
select KALLSYMS_ALL
help
This option will allow kprobes to optimize breakpoint to
a jump for reducing its overhead.
config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
@ -83,6 +94,8 @@ config HAVE_KPROBES
config HAVE_KRETPROBES
	bool
config HAVE_OPTPROBES
bool
#
# An arch should select this if it provides all these things:
#


@ -31,6 +31,7 @@ config X86
	select ARCH_WANT_FRAME_POINTERS
	select HAVE_DMA_ATTRS
	select HAVE_KRETPROBES
	select HAVE_OPTPROBES
	select HAVE_FTRACE_MCOUNT_RECORD
	select HAVE_DYNAMIC_FTRACE
	select HAVE_FUNCTION_TRACER


@ -165,10 +165,12 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
 * invalid instruction possible) or if the instructions are changed from a
 * consistent state to another consistent state atomically.
 * More care must be taken when modifying code in the SMP case because of
 * Intel's errata. text_poke_smp() takes care of that errata, but still
 * doesn't support NMI/MCE handler code modifying.
 * On the local CPU you need to be protected against NMI or MCE handlers
 * seeing an inconsistent instruction while you patch.
 */
extern void *text_poke(void *addr, const void *opcode, size_t len);
extern void *text_poke_smp(void *addr, const void *opcode, size_t len);

#endif /* _ASM_X86_ALTERNATIVE_H */


@ -32,7 +32,10 @@ struct kprobe;
typedef u8 kprobe_opcode_t;

#define BREAKPOINT_INSTRUCTION	0xcc
#define RELATIVEJUMP_OPCODE	0xe9
#define RELATIVEJUMP_SIZE	5
#define RELATIVECALL_OPCODE	0xe8
#define RELATIVE_ADDR_SIZE	4
#define MAX_INSN_SIZE		16
#define MAX_STACK_SIZE		64
#define MIN_STACK_SIZE(ADDR) \
@ -44,6 +47,17 @@ typedef u8 kprobe_opcode_t;
#define flush_insn_slot(p)	do { } while (0)
/* optinsn template addresses */
extern kprobe_opcode_t optprobe_template_entry;
extern kprobe_opcode_t optprobe_template_val;
extern kprobe_opcode_t optprobe_template_call;
extern kprobe_opcode_t optprobe_template_end;
#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
#define MAX_OPTINSN_SIZE \
(((unsigned long)&optprobe_template_end - \
(unsigned long)&optprobe_template_entry) + \
MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
extern const int kretprobe_blacklist_size;

void arch_remove_kprobe(struct kprobe *p);
@ -64,6 +78,21 @@ struct arch_specific_insn {
	int boostable;
};
struct arch_optimized_insn {
/* copy of the original instructions */
kprobe_opcode_t copied_insn[RELATIVE_ADDR_SIZE];
/* detour code buffer */
kprobe_opcode_t *insn;
/* the size of instructions copied to detour code buffer */
size_t size;
};
/* Return true (!0) if optinsn is prepared for optimization. */
static inline int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
{
return optinsn->size;
}
struct prev_kprobe {
	struct kprobe *kp;
	unsigned long status;


@ -7,6 +7,7 @@
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/memory.h>
#include <linux/stop_machine.h>
#include <asm/alternative.h>
#include <asm/sections.h>
#include <asm/pgtable.h>
@ -572,3 +573,62 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
	local_irq_restore(flags);
	return addr;
}
/*
* Cross-modifying kernel text with stop_machine().
* This code originally comes from immediate value.
*/
static atomic_t stop_machine_first;
static int wrote_text;

struct text_poke_params {
	void *addr;
	const void *opcode;
	size_t len;
};

static int __kprobes stop_machine_text_poke(void *data)
{
	struct text_poke_params *tpp = data;

	if (atomic_dec_and_test(&stop_machine_first)) {
		text_poke(tpp->addr, tpp->opcode, tpp->len);
		smp_wmb();	/* Make sure other cpus see that this has run */
		wrote_text = 1;
	} else {
		while (!wrote_text)
			cpu_relax();
		smp_mb();	/* Load wrote_text before following execution */
	}

	flush_icache_range((unsigned long)tpp->addr,
			   (unsigned long)tpp->addr + tpp->len);
	return 0;
}
/**
* text_poke_smp - Update instructions on a live kernel on SMP
* @addr: address to modify
* @opcode: source of the copy
* @len: length to copy
*
* Modify multi-byte instruction by using stop_machine() on SMP. This allows
* user to poke/set multi-byte text on SMP. Only non-NMI/MCE code modifying
* should be allowed, since stop_machine() does _not_ protect code against
* NMI and MCE.
*
* Note: Must be called under get_online_cpus() and text_mutex.
*/
void *__kprobes text_poke_smp(void *addr, const void *opcode, size_t len)
{
	struct text_poke_params tpp;

	tpp.addr = addr;
	tpp.opcode = opcode;
	tpp.len = len;
	atomic_set(&stop_machine_first, 1);
	wrote_text = 0;
	stop_machine(stop_machine_text_poke, (void *)&tpp, NULL);
	return addr;
}


@ -49,6 +49,7 @@
#include <linux/module.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
#include <linux/ftrace.h>

#include <asm/cacheflush.h>
#include <asm/desc.h>
@ -106,16 +107,22 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
};
const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op)
{
	struct __arch_relative_insn {
		u8 op;
		s32 raddr;
	} __attribute__((packed)) *insn;

	insn = (struct __arch_relative_insn *)from;
	insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
	insn->op = op;
}

/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
static void __kprobes synthesize_reljump(void *from, void *to)
{
	__synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
}
/*
@ -202,7 +209,7 @@ static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
/*
 * Basically, kp->ainsn.insn has an original instruction.
 * However, RIP-relative instruction can not do single-stepping
 * at different place, __copy_instruction() tweaks the displacement of
 * that instruction. In that case, we can't recover the instruction
 * from the kp->ainsn.insn.
 *
@ -284,21 +291,37 @@ static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
}
/*
 * Copy an instruction and adjust the displacement if the instruction
 * uses the %rip-relative addressing mode.
 * If it does, Return the address of the 32-bit displacement word.
 * If not, return null.
 * Only applicable to 64-bit x86.
 */
static int __kprobes __copy_instruction(u8 *dest, u8 *src, int recover)
{
	struct insn insn;
	int ret;
	kprobe_opcode_t buf[MAX_INSN_SIZE];

	kernel_insn_init(&insn, src);
	if (recover) {
		insn_get_opcode(&insn);
		if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION) {
			ret = recover_probed_instruction(buf,
							 (unsigned long)src);
			if (ret)
				return 0;
			kernel_insn_init(&insn, buf);
		}
	}
	insn_get_length(&insn);
	memcpy(dest, insn.kaddr, insn.length);

#ifdef CONFIG_X86_64
	if (insn_rip_relative(&insn)) {
		s64 newdisp;
		u8 *disp;
		kernel_insn_init(&insn, dest);
		insn_get_displacement(&insn);
		/*
		 * The copied instruction uses the %rip-relative addressing
@ -312,20 +335,23 @@ static void __kprobes fix_riprel(struct kprobe *p)
		 * extension of the original signed 32-bit displacement would
		 * have given.
		 */
		newdisp = (u8 *) src + (s64) insn.displacement.value -
			  (u8 *) dest;
		BUG_ON((s64) (s32) newdisp != newdisp); /* Sanity check.  */
		disp = (u8 *) dest + insn_offset_displacement(&insn);
		*(s32 *) disp = (s32) newdisp;
	}
#endif
	return insn.length;
}
static void __kprobes arch_copy_kprobe(struct kprobe *p)
{
	/*
	 * Copy an instruction without recovering int3, because it will be
	 * put by another subsystem.
	 */
	__copy_instruction(p->ainsn.insn, p->addr, 0);

	if (can_boost(p->addr))
		p->ainsn.boostable = 0;
@ -406,18 +432,6 @@ static void __kprobes restore_btf(void)
	update_debugctlmsr(current->thread.debugctlmsr);
}
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
clear_btf();
regs->flags |= X86_EFLAGS_TF;
regs->flags &= ~X86_EFLAGS_IF;
/* single step inline if the instruction is an int3 */
if (p->opcode == BREAKPOINT_INSTRUCTION)
regs->ip = (unsigned long)p->addr;
else
regs->ip = (unsigned long)p->ainsn.insn;
}
void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
				      struct pt_regs *regs)
{
@ -429,20 +443,50 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
	*sara = (unsigned long) &kretprobe_trampoline;
}
#ifdef CONFIG_OPTPROBES
static int __kprobes setup_detour_execution(struct kprobe *p,
struct pt_regs *regs,
int reenter);
#else
#define setup_detour_execution(p, regs, reenter) (0)
#endif
static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
				       struct kprobe_ctlblk *kcb, int reenter)
{
	if (setup_detour_execution(p, regs, reenter))
		return;

#if !defined(CONFIG_PREEMPT)
	if (p->ainsn.boostable == 1 && !p->post_handler) {
		/* Boost up -- we can execute copied instructions directly */
		if (!reenter)
			reset_current_kprobe();
		/*
		 * Reentering boosted probe doesn't reset current_kprobe,
		 * nor set current_kprobe, because it doesn't use single
		 * stepping.
		 */
		regs->ip = (unsigned long)p->ainsn.insn;
		preempt_enable_no_resched();
		return;
	}
#endif
	if (reenter) {
		save_previous_kprobe(kcb);
		set_current_kprobe(p, regs, kcb);
		kcb->kprobe_status = KPROBE_REENTER;
	} else
		kcb->kprobe_status = KPROBE_HIT_SS;
	/* Prepare real single stepping */
	clear_btf();
	regs->flags |= X86_EFLAGS_TF;
	regs->flags &= ~X86_EFLAGS_IF;
	/* single step inline if the instruction is an int3 */
	if (p->opcode == BREAKPOINT_INSTRUCTION)
		regs->ip = (unsigned long)p->addr;
	else
		regs->ip = (unsigned long)p->ainsn.insn;
}
/*
@ -456,11 +500,8 @@ static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
	switch (kcb->kprobe_status) {
	case KPROBE_HIT_SSDONE:
	case KPROBE_HIT_ACTIVE:
		kprobes_inc_nmissed_count(p);
		setup_singlestep(p, regs, kcb, 1);
		break;
	case KPROBE_HIT_SS:
		/* A probe has been hit in the codepath leading up to, or just
@ -535,13 +576,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
			 * more here.
			 */
			if (!p->pre_handler || !p->pre_handler(p, regs))
				setup_singlestep(p, regs, kcb, 0);
			return 1;
		}
	} else if (kprobe_running()) {
		p = __get_cpu_var(current_kprobe);
		if (p->break_handler && p->break_handler(p, regs)) {
			setup_singlestep(p, regs, kcb, 0);
			return 1;
		}
	} /* else: not a kprobe fault; let the kernel handle it */
@ -550,6 +591,69 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
	return 0;
}
#ifdef CONFIG_X86_64
#define SAVE_REGS_STRING \
/* Skip cs, ip, orig_ax. */ \
" subq $24, %rsp\n" \
" pushq %rdi\n" \
" pushq %rsi\n" \
" pushq %rdx\n" \
" pushq %rcx\n" \
" pushq %rax\n" \
" pushq %r8\n" \
" pushq %r9\n" \
" pushq %r10\n" \
" pushq %r11\n" \
" pushq %rbx\n" \
" pushq %rbp\n" \
" pushq %r12\n" \
" pushq %r13\n" \
" pushq %r14\n" \
" pushq %r15\n"
#define RESTORE_REGS_STRING \
" popq %r15\n" \
" popq %r14\n" \
" popq %r13\n" \
" popq %r12\n" \
" popq %rbp\n" \
" popq %rbx\n" \
" popq %r11\n" \
" popq %r10\n" \
" popq %r9\n" \
" popq %r8\n" \
" popq %rax\n" \
" popq %rcx\n" \
" popq %rdx\n" \
" popq %rsi\n" \
" popq %rdi\n" \
/* Skip orig_ax, ip, cs */ \
" addq $24, %rsp\n"
#else
#define SAVE_REGS_STRING \
/* Skip cs, ip, orig_ax and gs. */ \
" subl $16, %esp\n" \
" pushl %fs\n" \
" pushl %ds\n" \
" pushl %es\n" \
" pushl %eax\n" \
" pushl %ebp\n" \
" pushl %edi\n" \
" pushl %esi\n" \
" pushl %edx\n" \
" pushl %ecx\n" \
" pushl %ebx\n"
#define RESTORE_REGS_STRING \
" popl %ebx\n" \
" popl %ecx\n" \
" popl %edx\n" \
" popl %esi\n" \
" popl %edi\n" \
" popl %ebp\n" \
" popl %eax\n" \
/* Skip ds, es, fs, gs, orig_ax, and ip. Note: don't pop cs here*/\
" addl $24, %esp\n"
#endif
/*
 * When a retprobed function returns, this code saves registers and
 * calls trampoline_handler(), which calls the kretprobe's handler.
@ -563,65 +667,16 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
	/* We don't bother saving the ss register */
	"	pushq %rsp\n"
	"	pushfq\n"
	SAVE_REGS_STRING
	"	movq %rsp, %rdi\n"
	"	call trampoline_handler\n"
	/* Replace saved sp with true return address. */
	"	movq %rax, 152(%rsp)\n"
" popq %r15\n" RESTORE_REGS_STRING
" popq %r14\n"
" popq %r13\n"
" popq %r12\n"
" popq %rbp\n"
" popq %rbx\n"
" popq %r11\n"
" popq %r10\n"
" popq %r9\n"
" popq %r8\n"
" popq %rax\n"
" popq %rcx\n"
" popq %rdx\n"
" popq %rsi\n"
" popq %rdi\n"
/* Skip orig_ax, ip, cs */
" addq $24, %rsp\n"
" popfq\n" " popfq\n"
#else #else
" pushf\n" " pushf\n"
/* SAVE_REGS_STRING
* Skip cs, ip, orig_ax and gs.
* trampoline_handler() will plug in these values
*/
" subl $16, %esp\n"
" pushl %fs\n"
" pushl %es\n"
" pushl %ds\n"
" pushl %eax\n"
" pushl %ebp\n"
" pushl %edi\n"
" pushl %esi\n"
" pushl %edx\n"
" pushl %ecx\n"
" pushl %ebx\n"
" movl %esp, %eax\n" " movl %esp, %eax\n"
" call trampoline_handler\n" " call trampoline_handler\n"
/* Move flags to cs */ /* Move flags to cs */
@ -629,15 +684,7 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
" movl %edx, 52(%esp)\n" " movl %edx, 52(%esp)\n"
/* Replace saved flags with true return address. */ /* Replace saved flags with true return address. */
" movl %eax, 56(%esp)\n" " movl %eax, 56(%esp)\n"
" popl %ebx\n" RESTORE_REGS_STRING
" popl %ecx\n"
" popl %edx\n"
" popl %esi\n"
" popl %edi\n"
" popl %ebp\n"
" popl %eax\n"
/* Skip ds, es, fs, gs, orig_ax and ip */
" addl $24, %esp\n"
" popf\n" " popf\n"
#endif #endif
" ret\n"); " ret\n");
@@ -805,8 +852,8 @@ static void __kprobes resume_execution(struct kprobe *p,
		 * These instructions can be executed directly if it
		 * jumps back to correct address.
		 */
		synthesize_reljump((void *)regs->ip,
				   (void *)orig_ip + (regs->ip - copy_ip));
		p->ainsn.boostable = 1;
	} else {
		p->ainsn.boostable = -1;
@@ -1033,6 +1080,358 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
	return 0;
}
#ifdef CONFIG_OPTPROBES
/* Insert a call instruction at address 'from', which calls address 'to'. */
static void __kprobes synthesize_relcall(void *from, void *to)
{
__synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
}
/* Insert a move instruction which sets a pointer to eax/rdi (1st arg). */
static void __kprobes synthesize_set_arg1(kprobe_opcode_t *addr,
unsigned long val)
{
#ifdef CONFIG_X86_64
*addr++ = 0x48;
*addr++ = 0xbf;
#else
*addr++ = 0xb8;
#endif
*(unsigned long *)addr = val;
}
void __kprobes kprobes_optinsn_template_holder(void)
{
asm volatile (
".global optprobe_template_entry\n"
"optprobe_template_entry: \n"
#ifdef CONFIG_X86_64
/* We don't bother saving the ss register */
" pushq %rsp\n"
" pushfq\n"
SAVE_REGS_STRING
" movq %rsp, %rsi\n"
".global optprobe_template_val\n"
"optprobe_template_val: \n"
ASM_NOP5
ASM_NOP5
".global optprobe_template_call\n"
"optprobe_template_call: \n"
ASM_NOP5
/* Move flags to rsp */
" movq 144(%rsp), %rdx\n"
" movq %rdx, 152(%rsp)\n"
RESTORE_REGS_STRING
/* Skip flags entry */
" addq $8, %rsp\n"
" popfq\n"
#else /* CONFIG_X86_32 */
" pushf\n"
SAVE_REGS_STRING
" movl %esp, %edx\n"
".global optprobe_template_val\n"
"optprobe_template_val: \n"
ASM_NOP5
".global optprobe_template_call\n"
"optprobe_template_call: \n"
ASM_NOP5
RESTORE_REGS_STRING
" addl $4, %esp\n" /* skip cs */
" popf\n"
#endif
".global optprobe_template_end\n"
"optprobe_template_end: \n");
}
#define TMPL_MOVE_IDX \
((long)&optprobe_template_val - (long)&optprobe_template_entry)
#define TMPL_CALL_IDX \
((long)&optprobe_template_call - (long)&optprobe_template_entry)
#define TMPL_END_IDX \
((long)&optprobe_template_end - (long)&optprobe_template_entry)
#define INT3_SIZE sizeof(kprobe_opcode_t)
/* Optimized kprobe call back function: called from optinsn */
static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
/* Save skipped registers */
#ifdef CONFIG_X86_64
regs->cs = __KERNEL_CS;
#else
regs->cs = __KERNEL_CS | get_kernel_rpl();
regs->gs = 0;
#endif
regs->ip = (unsigned long)op->kp.addr + INT3_SIZE;
regs->orig_ax = ~0UL;
__get_cpu_var(current_kprobe) = &op->kp;
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
opt_pre_handler(&op->kp, regs);
__get_cpu_var(current_kprobe) = NULL;
}
preempt_enable_no_resched();
}
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
{
int len = 0, ret;
while (len < RELATIVEJUMP_SIZE) {
ret = __copy_instruction(dest + len, src + len, 1);
if (!ret || !can_boost(dest + len))
return -EINVAL;
len += ret;
}
/* Check whether the address range is reserved */
if (ftrace_text_reserved(src, src + len - 1) ||
alternatives_text_reserved(src, src + len - 1))
return -EBUSY;
return len;
}
/* Check whether insn is indirect jump */
static int __kprobes insn_is_indirect_jump(struct insn *insn)
{
return ((insn->opcode.bytes[0] == 0xff &&
(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
insn->opcode.bytes[0] == 0xea); /* Segment based jump */
}
/* Check whether insn jumps into specified address range */
static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
{
unsigned long target = 0;
switch (insn->opcode.bytes[0]) {
case 0xe0: /* loopne */
case 0xe1: /* loope */
case 0xe2: /* loop */
case 0xe3: /* jcxz */
case 0xe9: /* near relative jump */
case 0xeb: /* short relative jump */
break;
case 0x0f:
if ((insn->opcode.bytes[1] & 0xf0) == 0x80) /* jcc near */
break;
return 0;
default:
		if ((insn->opcode.bytes[0] & 0xf0) == 0x70) /* jcc short */
break;
return 0;
}
target = (unsigned long)insn->next_byte + insn->immediate.value;
return (start <= target && target <= start + len);
}
/* Decode the whole function to ensure no instruction jumps into the target */
static int __kprobes can_optimize(unsigned long paddr)
{
int ret;
unsigned long addr, size = 0, offset = 0;
struct insn insn;
kprobe_opcode_t buf[MAX_INSN_SIZE];
/* Dummy buffers for lookup_symbol_attrs */
static char __dummy_buf[KSYM_NAME_LEN];
/* Lookup symbol including addr */
if (!kallsyms_lookup(paddr, &size, &offset, NULL, __dummy_buf))
return 0;
/* Check there is enough space for a relative jump. */
if (size - offset < RELATIVEJUMP_SIZE)
return 0;
/* Decode instructions */
addr = paddr - offset;
while (addr < paddr - offset + size) { /* Decode until function end */
if (search_exception_tables(addr))
/*
 * Since some fixup code will jump into this function,
 * we can't optimize any kprobe in this function.
*/
return 0;
kernel_insn_init(&insn, (void *)addr);
insn_get_opcode(&insn);
if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION) {
ret = recover_probed_instruction(buf, addr);
if (ret)
return 0;
kernel_insn_init(&insn, buf);
}
insn_get_length(&insn);
/* Recover address */
insn.kaddr = (void *)addr;
insn.next_byte = (void *)(addr + insn.length);
		/* Check that no instruction jumps into the target */
if (insn_is_indirect_jump(&insn) ||
insn_jump_into_range(&insn, paddr + INT3_SIZE,
RELATIVE_ADDR_SIZE))
return 0;
addr += insn.length;
}
return 1;
}
/* Check optimized_kprobe can actually be optimized. */
int __kprobes arch_check_optimized_kprobe(struct optimized_kprobe *op)
{
int i;
struct kprobe *p;
for (i = 1; i < op->optinsn.size; i++) {
p = get_kprobe(op->kp.addr + i);
if (p && !kprobe_disabled(p))
return -EEXIST;
}
return 0;
}
/* Check the addr is within the optimized instructions. */
int __kprobes arch_within_optimized_kprobe(struct optimized_kprobe *op,
unsigned long addr)
{
return ((unsigned long)op->kp.addr <= addr &&
(unsigned long)op->kp.addr + op->optinsn.size > addr);
}
/* Free optimized instruction slot */
static __kprobes
void __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
{
if (op->optinsn.insn) {
free_optinsn_slot(op->optinsn.insn, dirty);
op->optinsn.insn = NULL;
op->optinsn.size = 0;
}
}
void __kprobes arch_remove_optimized_kprobe(struct optimized_kprobe *op)
{
__arch_remove_optimized_kprobe(op, 1);
}
/*
* Copy replacing target instructions
* Target instructions MUST be relocatable (checked inside)
*/
int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
{
u8 *buf;
int ret;
long rel;
if (!can_optimize((unsigned long)op->kp.addr))
return -EILSEQ;
op->optinsn.insn = get_optinsn_slot();
if (!op->optinsn.insn)
return -ENOMEM;
/*
	 * Verify that the address gap is within the ±2GB range, because
	 * this uses a relative jump.
*/
rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
if (abs(rel) > 0x7fffffff)
return -ERANGE;
buf = (u8 *)op->optinsn.insn;
/* Copy instructions into the out-of-line buffer */
ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
if (ret < 0) {
__arch_remove_optimized_kprobe(op, 0);
return ret;
}
op->optinsn.size = ret;
/* Copy arch-dep-instance from template */
memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
/* Set probe information */
synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op);
/* Set probe function call */
synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback);
/* Set returning jmp instruction at the tail of out-of-line buffer */
synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size,
(u8 *)op->kp.addr + op->optinsn.size);
flush_icache_range((unsigned long) buf,
(unsigned long) buf + TMPL_END_IDX +
op->optinsn.size + RELATIVEJUMP_SIZE);
return 0;
}
/* Replace a breakpoint (int3) with a relative jump. */
int __kprobes arch_optimize_kprobe(struct optimized_kprobe *op)
{
unsigned char jmp_code[RELATIVEJUMP_SIZE];
s32 rel = (s32)((long)op->optinsn.insn -
((long)op->kp.addr + RELATIVEJUMP_SIZE));
/* Backup instructions which will be replaced by jump address */
memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
RELATIVE_ADDR_SIZE);
jmp_code[0] = RELATIVEJUMP_OPCODE;
*(s32 *)(&jmp_code[1]) = rel;
/*
* text_poke_smp doesn't support NMI/MCE code modifying.
* However, since kprobes itself also doesn't support NMI/MCE
* code probing, it's not a problem.
*/
text_poke_smp(op->kp.addr, jmp_code, RELATIVEJUMP_SIZE);
return 0;
}
/* Replace a relative jump with a breakpoint (int3). */
void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
{
u8 buf[RELATIVEJUMP_SIZE];
/* Set int3 to first byte for kprobes */
buf[0] = BREAKPOINT_INSTRUCTION;
memcpy(buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
text_poke_smp(op->kp.addr, buf, RELATIVEJUMP_SIZE);
}
static int __kprobes setup_detour_execution(struct kprobe *p,
struct pt_regs *regs,
int reenter)
{
struct optimized_kprobe *op;
if (p->flags & KPROBE_FLAG_OPTIMIZED) {
/* This kprobe is really able to run optimized path. */
op = container_of(p, struct optimized_kprobe, kp);
/* Detour through copied instructions */
regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
if (!reenter)
reset_current_kprobe();
preempt_enable_no_resched();
return 1;
}
return 0;
}
#endif
int __init arch_init_kprobes(void)
{
	return 0;

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h

@@ -122,6 +122,11 @@ struct kprobe {
/* Kprobe status flags */
#define KPROBE_FLAG_GONE	1 /* breakpoint has already gone */
#define KPROBE_FLAG_DISABLED	2 /* probe is temporarily disabled */
#define KPROBE_FLAG_OPTIMIZED 4 /*
* probe is really optimized.
* NOTE:
* this flag is only for optimized_kprobe.
*/
/* Has this kprobe gone ? */
static inline int kprobe_gone(struct kprobe *p)
@@ -134,6 +139,12 @@ static inline int kprobe_disabled(struct kprobe *p)
{
	return p->flags & (KPROBE_FLAG_DISABLED | KPROBE_FLAG_GONE);
}
/* Is this kprobe really running optimized path ? */
static inline int kprobe_optimized(struct kprobe *p)
{
return p->flags & KPROBE_FLAG_OPTIMIZED;
}
/*
 * Special probe type that uses setjmp-longjmp type tricks to resume
 * execution at a specified entry with a matching prototype corresponding
@@ -249,6 +260,39 @@ extern kprobe_opcode_t *get_insn_slot(void);
extern void free_insn_slot(kprobe_opcode_t *slot, int dirty);
extern void kprobes_inc_nmissed_count(struct kprobe *p);
#ifdef CONFIG_OPTPROBES
/*
* Internal structure for direct jump optimized probe
*/
struct optimized_kprobe {
struct kprobe kp;
struct list_head list; /* list for optimizing queue */
struct arch_optimized_insn optinsn;
};
/* Architecture dependent functions for direct jump optimization */
extern int arch_prepared_optinsn(struct arch_optimized_insn *optinsn);
extern int arch_check_optimized_kprobe(struct optimized_kprobe *op);
extern int arch_prepare_optimized_kprobe(struct optimized_kprobe *op);
extern void arch_remove_optimized_kprobe(struct optimized_kprobe *op);
extern int arch_optimize_kprobe(struct optimized_kprobe *op);
extern void arch_unoptimize_kprobe(struct optimized_kprobe *op);
extern kprobe_opcode_t *get_optinsn_slot(void);
extern void free_optinsn_slot(kprobe_opcode_t *slot, int dirty);
extern int arch_within_optimized_kprobe(struct optimized_kprobe *op,
unsigned long addr);
extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
#ifdef CONFIG_SYSCTL
extern int sysctl_kprobes_optimization;
extern int proc_kprobes_optimization_handler(struct ctl_table *table,
int write, void __user *buffer,
size_t *length, loff_t *ppos);
#endif
#endif /* CONFIG_OPTPROBES */
/* Get the kprobe at this addr (if any) - called with preemption disabled */
struct kprobe *get_kprobe(void *addr);
void kretprobe_hash_lock(struct task_struct *tsk,

diff --git a/kernel/kprobes.c b/kernel/kprobes.c

@@ -42,9 +42,11 @@
#include <linux/freezer.h>
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/sysctl.h>
#include <linux/kdebug.h>
#include <linux/memory.h>
#include <linux/ftrace.h>
#include <linux/cpu.h>

#include <asm-generic/sections.h>
#include <asm/cacheflush.h>
@@ -105,57 +107,74 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
 * stepping on the instruction on a vmalloced/kmalloced/data page
 * is a recipe for disaster
 */
struct kprobe_insn_page {
	struct list_head list;
	kprobe_opcode_t *insns;		/* Page of instruction slots */
	int nused;
	int ngarbage;
	char slot_used[];
};
#define KPROBE_INSN_PAGE_SIZE(slots) \
(offsetof(struct kprobe_insn_page, slot_used) + \
(sizeof(char) * (slots)))
struct kprobe_insn_cache {
struct list_head pages; /* list of kprobe_insn_page */
size_t insn_size; /* size of instruction slot */
int nr_garbage;
};
static int slots_per_page(struct kprobe_insn_cache *c)
{
return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
}
enum kprobe_slot_state {
	SLOT_CLEAN = 0,
	SLOT_DIRTY = 1,
	SLOT_USED = 2,
};

static DEFINE_MUTEX(kprobe_insn_mutex);	/* Protects kprobe_insn_slots */
static struct kprobe_insn_cache kprobe_insn_slots = {
	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
	.insn_size = MAX_INSN_SIZE,
	.nr_garbage = 0,
};

static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c);
/**
 * __get_insn_slot() - Find a slot on an executable page for an instruction.
 * We allocate an executable page if there's no room on existing ones.
 */
static kprobe_opcode_t __kprobes *__get_insn_slot(struct kprobe_insn_cache *c)
{
	struct kprobe_insn_page *kip;

 retry:
	list_for_each_entry(kip, &c->pages, list) {
		if (kip->nused < slots_per_page(c)) {
			int i;
			for (i = 0; i < slots_per_page(c); i++) {
				if (kip->slot_used[i] == SLOT_CLEAN) {
					kip->slot_used[i] = SLOT_USED;
					kip->nused++;
					return kip->insns + (i * c->insn_size);
				}
			}
			/* kip->nused is broken. Fix it. */
			kip->nused = slots_per_page(c);
			WARN_ON(1);
		}
	}

	/* If there are any garbage slots, collect it and try again. */
	if (c->nr_garbage && collect_garbage_slots(c) == 0)
		goto retry;

	/* All out of space.  Need to allocate a new page. */
	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
	if (!kip)
		return NULL;
@@ -170,20 +189,23 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
		return NULL;
	}
	INIT_LIST_HEAD(&kip->list);
	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
	kip->slot_used[0] = SLOT_USED;
	kip->nused = 1;
	kip->ngarbage = 0;
	list_add(&kip->list, &c->pages);
	return kip->insns;
}

kprobe_opcode_t __kprobes *get_insn_slot(void)
{
	kprobe_opcode_t *ret = NULL;

	mutex_lock(&kprobe_insn_mutex);
	ret = __get_insn_slot(&kprobe_insn_slots);
	mutex_unlock(&kprobe_insn_mutex);

	return ret;
}
@@ -199,7 +221,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
		 * so as not to have to set it up again the
		 * next time somebody inserts a probe.
		 */
		if (!list_is_singular(&kip->list)) {
			list_del(&kip->list);
			module_free(NULL, kip->insns);
			kfree(kip);
@@ -209,51 +231,84 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
	return 0;
}

static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c)
{
	struct kprobe_insn_page *kip, *next;

	/* Ensure no-one is interrupted on the garbages */
	synchronize_sched();

	list_for_each_entry_safe(kip, next, &c->pages, list) {
		int i;
		if (kip->ngarbage == 0)
			continue;
		kip->ngarbage = 0;	/* we will collect all garbages */
		for (i = 0; i < slots_per_page(c); i++) {
			if (kip->slot_used[i] == SLOT_DIRTY &&
			    collect_one_slot(kip, i))
				break;
		}
	}
	c->nr_garbage = 0;
	return 0;
}
static void __kprobes __free_insn_slot(struct kprobe_insn_cache *c,
kprobe_opcode_t *slot, int dirty)
{
struct kprobe_insn_page *kip;
list_for_each_entry(kip, &c->pages, list) {
long idx = ((long)slot - (long)kip->insns) / c->insn_size;
if (idx >= 0 && idx < slots_per_page(c)) {
WARN_ON(kip->slot_used[idx] != SLOT_USED);
if (dirty) {
kip->slot_used[idx] = SLOT_DIRTY;
kip->ngarbage++;
if (++c->nr_garbage > slots_per_page(c))
collect_garbage_slots(c);
} else
collect_one_slot(kip, idx);
return;
}
}
/* Could not free this slot. */
WARN_ON(1);
}
void __kprobes free_insn_slot(kprobe_opcode_t *slot, int dirty)
{
	mutex_lock(&kprobe_insn_mutex);
	__free_insn_slot(&kprobe_insn_slots, slot, dirty);
	mutex_unlock(&kprobe_insn_mutex);
}
#ifdef CONFIG_OPTPROBES
/* For optimized_kprobe buffer */
static DEFINE_MUTEX(kprobe_optinsn_mutex); /* Protects kprobe_optinsn_slots */
static struct kprobe_insn_cache kprobe_optinsn_slots = {
.pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
/* .insn_size is initialized later */
.nr_garbage = 0,
};
/* Get a slot for optimized_kprobe buffer */
kprobe_opcode_t __kprobes *get_optinsn_slot(void)
{
kprobe_opcode_t *ret = NULL;
mutex_lock(&kprobe_optinsn_mutex);
ret = __get_insn_slot(&kprobe_optinsn_slots);
mutex_unlock(&kprobe_optinsn_mutex);
return ret;
}
void __kprobes free_optinsn_slot(kprobe_opcode_t * slot, int dirty)
{
mutex_lock(&kprobe_optinsn_mutex);
__free_insn_slot(&kprobe_optinsn_slots, slot, dirty);
mutex_unlock(&kprobe_optinsn_mutex);
}
#endif
#endif

/* We have preemption disabled.. so it is safe to use __ versions */
@@ -284,23 +339,401 @@ struct kprobe __kprobes *get_kprobe(void *addr)
		if (p->addr == addr)
			return p;
	}

	return NULL;
}
static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs);
/* Return true if the kprobe is an aggregator */
static inline int kprobe_aggrprobe(struct kprobe *p)
{
return p->pre_handler == aggr_pre_handler;
}
/*
* Keep all fields in the kprobe consistent
*/
static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
{
memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
}
#ifdef CONFIG_OPTPROBES
/* NOTE: change this value only with kprobe_mutex held */
static bool kprobes_allow_optimization;
/*
 * Call all pre_handlers on the list, but ignore their return values.
 * This must be called from the arch-dependent optimized caller.
*/
void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct kprobe *kp;
list_for_each_entry_rcu(kp, &p->list, list) {
if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
set_kprobe_instance(kp);
kp->pre_handler(kp, regs);
}
reset_kprobe_instance();
}
}
/* Return true(!0) if the kprobe is ready for optimization. */
static inline int kprobe_optready(struct kprobe *p)
{
struct optimized_kprobe *op;
if (kprobe_aggrprobe(p)) {
op = container_of(p, struct optimized_kprobe, kp);
return arch_prepared_optinsn(&op->optinsn);
}
return 0;
}
/*
* Return an optimized kprobe whose optimizing code replaces
* instructions including addr (exclude breakpoint).
*/
struct kprobe *__kprobes get_optimized_kprobe(unsigned long addr)
{
int i;
struct kprobe *p = NULL;
struct optimized_kprobe *op;
/* Don't check i == 0, since that is a breakpoint case. */
for (i = 1; !p && i < MAX_OPTIMIZED_LENGTH; i++)
p = get_kprobe((void *)(addr - i));
if (p && kprobe_optready(p)) {
op = container_of(p, struct optimized_kprobe, kp);
if (arch_within_optimized_kprobe(op, addr))
return p;
}
return NULL;
}
/* Optimization staging list, protected by kprobe_mutex */
static LIST_HEAD(optimizing_list);
static void kprobe_optimizer(struct work_struct *work);
static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
#define OPTIMIZE_DELAY 5
/* Kprobe jump optimizer */
static __kprobes void kprobe_optimizer(struct work_struct *work)
{
struct optimized_kprobe *op, *tmp;
/* Lock modules while optimizing kprobes */
mutex_lock(&module_mutex);
mutex_lock(&kprobe_mutex);
if (kprobes_all_disarmed || !kprobes_allow_optimization)
goto end;
/*
	 * Wait for a quiescence period to ensure all running interrupts
	 * are done. Because an optprobe may modify multiple instructions,
	 * there is a chance that an interrupt hits between them; the
	 * interrupted context could then return into the 2nd-Nth byte of
	 * the new jump instruction. This wait avoids that.
*/
synchronize_sched();
/*
	 * The optimization/unoptimization refers to online_cpus via
	 * stop_machine(), while cpu-hotplug modifies online_cpus. At the
	 * same time, text_mutex is held both in cpu-hotplug and here.
	 * This combination can cause a deadlock (cpu-hotplug tries to lock
	 * text_mutex, but stop_machine() cannot proceed because online_cpus
	 * has been changed).
	 * To avoid this deadlock, we call get_online_cpus() to prevent
	 * cpu-hotplug outside of the text_mutex locking.
*/
get_online_cpus();
mutex_lock(&text_mutex);
list_for_each_entry_safe(op, tmp, &optimizing_list, list) {
WARN_ON(kprobe_disabled(&op->kp));
if (arch_optimize_kprobe(op) < 0)
op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
list_del_init(&op->list);
}
mutex_unlock(&text_mutex);
put_online_cpus();
end:
mutex_unlock(&kprobe_mutex);
mutex_unlock(&module_mutex);
}
/* Optimize kprobe if p is ready to be optimized */
static __kprobes void optimize_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
/* Check if the kprobe is disabled or not ready for optimization. */
if (!kprobe_optready(p) || !kprobes_allow_optimization ||
(kprobe_disabled(p) || kprobes_all_disarmed))
return;
/* Both of break_handler and post_handler are not supported. */
if (p->break_handler || p->post_handler)
return;
op = container_of(p, struct optimized_kprobe, kp);
	/* Check that there are no other kprobes at the optimized instructions */
if (arch_check_optimized_kprobe(op) < 0)
return;
/* Check if it is already optimized. */
if (op->kp.flags & KPROBE_FLAG_OPTIMIZED)
return;
op->kp.flags |= KPROBE_FLAG_OPTIMIZED;
list_add(&op->list, &optimizing_list);
if (!delayed_work_pending(&optimizing_work))
schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY);
}
/* Unoptimize a kprobe if p is optimized */
static __kprobes void unoptimize_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
if ((p->flags & KPROBE_FLAG_OPTIMIZED) && kprobe_aggrprobe(p)) {
op = container_of(p, struct optimized_kprobe, kp);
if (!list_empty(&op->list))
/* Dequeue from the optimization queue */
list_del_init(&op->list);
else
/* Replace jump with break */
arch_unoptimize_kprobe(op);
op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
}
}
/* Remove optimized instructions */
static void __kprobes kill_optimized_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
op = container_of(p, struct optimized_kprobe, kp);
if (!list_empty(&op->list)) {
/* Dequeue from the optimization queue */
list_del_init(&op->list);
op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
}
/* Don't unoptimize, because the target code will be freed. */
arch_remove_optimized_kprobe(op);
}
/* Try to prepare optimized instructions */
static __kprobes void prepare_optimized_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
op = container_of(p, struct optimized_kprobe, kp);
arch_prepare_optimized_kprobe(op);
}
/* Free optimized instructions and optimized_kprobe */
static __kprobes void free_aggr_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
op = container_of(p, struct optimized_kprobe, kp);
arch_remove_optimized_kprobe(op);
kfree(op);
}
/* Allocate new optimized_kprobe and try to prepare optimized instructions */
static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;
op = kzalloc(sizeof(struct optimized_kprobe), GFP_KERNEL);
if (!op)
return NULL;
INIT_LIST_HEAD(&op->list);
op->kp.addr = p->addr;
arch_prepare_optimized_kprobe(op);
return &op->kp;
}
static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p);
/*
* Prepare an optimized_kprobe and optimize it
* NOTE: p must be a normal registered kprobe
*/
static __kprobes void try_to_optimize_kprobe(struct kprobe *p)
{
struct kprobe *ap;
struct optimized_kprobe *op;
ap = alloc_aggr_kprobe(p);
if (!ap)
return;
op = container_of(ap, struct optimized_kprobe, kp);
if (!arch_prepared_optinsn(&op->optinsn)) {
/* If failed to setup optimizing, fallback to kprobe */
free_aggr_kprobe(ap);
return;
}
init_aggr_kprobe(ap, p);
optimize_kprobe(ap);
}
#ifdef CONFIG_SYSCTL
static void __kprobes optimize_all_kprobes(void)
{
struct hlist_head *head;
struct hlist_node *node;
struct kprobe *p;
unsigned int i;
/* If optimization is already allowed, just return */
if (kprobes_allow_optimization)
return;
kprobes_allow_optimization = true;
mutex_lock(&text_mutex);
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
hlist_for_each_entry_rcu(p, node, head, hlist)
if (!kprobe_disabled(p))
optimize_kprobe(p);
}
mutex_unlock(&text_mutex);
printk(KERN_INFO "Kprobes globally optimized\n");
}
static void __kprobes unoptimize_all_kprobes(void)
{
struct hlist_head *head;
struct hlist_node *node;
struct kprobe *p;
unsigned int i;
/* If optimization is already prohibited, just return */
if (!kprobes_allow_optimization)
return;
kprobes_allow_optimization = false;
printk(KERN_INFO "Kprobes globally unoptimized\n");
get_online_cpus(); /* For avoiding text_mutex deadlock */
mutex_lock(&text_mutex);
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
hlist_for_each_entry_rcu(p, node, head, hlist) {
if (!kprobe_disabled(p))
unoptimize_kprobe(p);
}
}
mutex_unlock(&text_mutex);
put_online_cpus();
/* Allow all currently running kprobes to complete */
synchronize_sched();
}
int sysctl_kprobes_optimization;
int proc_kprobes_optimization_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length,
loff_t *ppos)
{
int ret;
mutex_lock(&kprobe_mutex);
sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
if (sysctl_kprobes_optimization)
optimize_all_kprobes();
else
unoptimize_all_kprobes();
mutex_unlock(&kprobe_mutex);
return ret;
}
#endif /* CONFIG_SYSCTL */
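The handler above backs a runtime knob; per the documentation added in this series (Appendix B of the kprobes document), it is exposed under /proc/sys/debug. A usage sketch (needs root plus CONFIG_OPTPROBES and CONFIG_SYSCTL; the exact path is assumed from those docs):

```shell
# 1 = jump optimization allowed (default), 0 = all probes use int3
cat /proc/sys/debug/kprobes-optimization

# Globally unoptimize: optimized probes fall back to breakpoints
echo 0 > /proc/sys/debug/kprobes-optimization

# Or toggle via sysctl(8)
sysctl -w debug.kprobes-optimization=1
```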
static void __kprobes __arm_kprobe(struct kprobe *p)
{
	struct kprobe *old_p;

	/* Check collision with other optimized kprobes */
	old_p = get_optimized_kprobe((unsigned long)p->addr);
	if (unlikely(old_p))
		unoptimize_kprobe(old_p); /* Fallback to unoptimized kprobe */

	arch_arm_kprobe(p);
	optimize_kprobe(p);	/* Try to optimize (add kprobe to a list) */
}

static void __kprobes __disarm_kprobe(struct kprobe *p)
{
	struct kprobe *old_p;

	unoptimize_kprobe(p);	/* Try to unoptimize */
	arch_disarm_kprobe(p);

	/* If another kprobe was blocked, optimize it. */
	old_p = get_optimized_kprobe((unsigned long)p->addr);
	if (unlikely(old_p))
		optimize_kprobe(old_p);
}

#else /* !CONFIG_OPTPROBES */

#define optimize_kprobe(p)		do {} while (0)
#define unoptimize_kprobe(p)		do {} while (0)
#define kill_optimized_kprobe(p)	do {} while (0)
#define prepare_optimized_kprobe(p)	do {} while (0)
#define try_to_optimize_kprobe(p)	do {} while (0)
#define __arm_kprobe(p)			arch_arm_kprobe(p)
#define __disarm_kprobe(p)		arch_disarm_kprobe(p)

static __kprobes void free_aggr_kprobe(struct kprobe *p)
{
	kfree(p);
}

static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
{
	return kzalloc(sizeof(struct kprobe), GFP_KERNEL);
}
#endif /* CONFIG_OPTPROBES */
 /* Arm a kprobe with text_mutex */
 static void __kprobes arm_kprobe(struct kprobe *kp)
 {
+	/*
+	 * Here, since __arm_kprobe() doesn't use stop_machine(),
+	 * this doesn't cause deadlock on text_mutex. So, we don't
+	 * need get_online_cpus().
+	 */
 	mutex_lock(&text_mutex);
-	arch_arm_kprobe(kp);
+	__arm_kprobe(kp);
 	mutex_unlock(&text_mutex);
 }

 /* Disarm a kprobe with text_mutex */
 static void __kprobes disarm_kprobe(struct kprobe *kp)
 {
+	get_online_cpus();	/* For avoiding text_mutex deadlock */
 	mutex_lock(&text_mutex);
-	arch_disarm_kprobe(kp);
+	__disarm_kprobe(kp);
 	mutex_unlock(&text_mutex);
+	put_online_cpus();
 }

 /*
@@ -369,7 +802,7 @@ static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs)
 void __kprobes kprobes_inc_nmissed_count(struct kprobe *p)
 {
 	struct kprobe *kp;
-	if (p->pre_handler != aggr_pre_handler) {
+	if (!kprobe_aggrprobe(p)) {
 		p->nmissed++;
 	} else {
 		list_for_each_entry_rcu(kp, &p->list, list)
@@ -492,15 +925,6 @@ static void __kprobes cleanup_rp_inst(struct kretprobe *rp)
 	free_rp_inst(rp);
 }

-/*
- * Keep all fields in the kprobe consistent
- */
-static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
-{
-	memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
-	memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
-}
-
 /*
  * Add the new probe to ap->list. Fail if this is the
  * second jprobe at the address - two jprobes can't coexist
@@ -508,6 +932,10 @@ static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
 static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 {
 	BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
+
+	if (p->break_handler || p->post_handler)
+		unoptimize_kprobe(ap);	/* Fall back to normal kprobe */
+
 	if (p->break_handler) {
 		if (ap->break_handler)
 			return -EEXIST;
@@ -522,7 +950,7 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
 		ap->flags &= ~KPROBE_FLAG_DISABLED;
 		if (!kprobes_all_disarmed)
 			/* Arm the breakpoint again. */
-			arm_kprobe(ap);
+			__arm_kprobe(ap);
 	}
 	return 0;
 }
@@ -531,12 +959,13 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
  * Fill in the required fields of the "manager kprobe". Replace the
  * earlier kprobe in the hlist with the manager kprobe
  */
-static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
+static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
 {
+	/* Copy p's insn slot to ap */
 	copy_kprobe(p, ap);
 	flush_insn_slot(ap);
 	ap->addr = p->addr;
-	ap->flags = p->flags;
+	ap->flags = p->flags & ~KPROBE_FLAG_OPTIMIZED;
 	ap->pre_handler = aggr_pre_handler;
 	ap->fault_handler = aggr_fault_handler;
 	/* We don't care the kprobe which has gone. */
@@ -546,8 +975,9 @@ static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
 		ap->break_handler = aggr_break_handler;

 	INIT_LIST_HEAD(&ap->list);
-	list_add_rcu(&p->list, &ap->list);
+	INIT_HLIST_NODE(&ap->hlist);

+	list_add_rcu(&p->list, &ap->list);
 	hlist_replace_rcu(&p->hlist, &ap->hlist);
 }

@@ -561,12 +991,12 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 	int ret = 0;
 	struct kprobe *ap = old_p;

-	if (old_p->pre_handler != aggr_pre_handler) {
-		/* If old_p is not an aggr_probe, create new aggr_kprobe. */
-		ap = kzalloc(sizeof(struct kprobe), GFP_KERNEL);
+	if (!kprobe_aggrprobe(old_p)) {
+		/* If old_p is not an aggr_kprobe, create new aggr_kprobe. */
+		ap = alloc_aggr_kprobe(old_p);
 		if (!ap)
 			return -ENOMEM;
-		add_aggr_kprobe(ap, old_p);
+		init_aggr_kprobe(ap, old_p);
 	}

 	if (kprobe_gone(ap)) {
@@ -585,6 +1015,9 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 			 */
 			return ret;

+		/* Prepare optimized instructions if possible. */
+		prepare_optimized_kprobe(ap);
+
 		/*
 		 * Clear gone flag to prevent allocating new slot again, and
 		 * set disabled flag because it is not armed yet.
@@ -593,6 +1026,7 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
 			    | KPROBE_FLAG_DISABLED;
 	}

+	/* Copy ap's insn slot to p */
 	copy_kprobe(ap, p);
 	return add_new_kprobe(ap, p);
 }
@@ -743,27 +1177,34 @@ int __kprobes register_kprobe(struct kprobe *p)
 	p->nmissed = 0;
 	INIT_LIST_HEAD(&p->list);
 	mutex_lock(&kprobe_mutex);
+
+	get_online_cpus();	/* For avoiding text_mutex deadlock. */
+	mutex_lock(&text_mutex);
+
 	old_p = get_kprobe(p->addr);
 	if (old_p) {
+		/* Since this may unoptimize old_p, locking text_mutex. */
 		ret = register_aggr_kprobe(old_p, p);
 		goto out;
 	}

-	mutex_lock(&text_mutex);
 	ret = arch_prepare_kprobe(p);
 	if (ret)
-		goto out_unlock_text;
+		goto out;

 	INIT_HLIST_NODE(&p->hlist);
 	hlist_add_head_rcu(&p->hlist,
		       &kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]);

 	if (!kprobes_all_disarmed && !kprobe_disabled(p))
-		arch_arm_kprobe(p);
+		__arm_kprobe(p);
+
+	/* Try to optimize kprobe */
+	try_to_optimize_kprobe(p);

-out_unlock_text:
-	mutex_unlock(&text_mutex);
 out:
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
 	mutex_unlock(&kprobe_mutex);

 	if (probed_mod)
@@ -785,7 +1226,7 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
 		return -EINVAL;

 	if (old_p == p ||
-	    (old_p->pre_handler == aggr_pre_handler &&
+	    (kprobe_aggrprobe(old_p) &&
 	     list_is_singular(&old_p->list))) {
 		/*
 		 * Only probe on the hash list. Disarm only if kprobes are
@@ -793,7 +1234,7 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
 		 * already have been removed. We save on flushing icache.
 		 */
 		if (!kprobes_all_disarmed && !kprobe_disabled(old_p))
-			disarm_kprobe(p);
+			disarm_kprobe(old_p);
 		hlist_del_rcu(&old_p->hlist);
 	} else {
 		if (p->break_handler && !kprobe_gone(p))
@@ -809,8 +1250,13 @@ noclean:
 		list_del_rcu(&p->list);
 		if (!kprobe_disabled(old_p)) {
 			try_to_disable_aggr_kprobe(old_p);
-			if (!kprobes_all_disarmed && kprobe_disabled(old_p))
-				disarm_kprobe(old_p);
+			if (!kprobes_all_disarmed) {
+				if (kprobe_disabled(old_p))
+					disarm_kprobe(old_p);
+				else
+					/* Try to optimize this probe again */
+					optimize_kprobe(old_p);
+			}
 		}
 	}
 	return 0;
@@ -827,7 +1273,7 @@ static void __kprobes __unregister_kprobe_bottom(struct kprobe *p)
 		old_p = list_entry(p->list.next, struct kprobe, list);
 		list_del(&p->list);
 		arch_remove_kprobe(old_p);
-		kfree(old_p);
+		free_aggr_kprobe(old_p);
 	}
 }
@@ -1123,7 +1569,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
 	struct kprobe *kp;

 	p->flags |= KPROBE_FLAG_GONE;
-	if (p->pre_handler == aggr_pre_handler) {
+	if (kprobe_aggrprobe(p)) {
 		/*
 		 * If this is an aggr_kprobe, we have to list all the
 		 * chained probes and mark them GONE.
@@ -1132,6 +1578,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
 			kp->flags |= KPROBE_FLAG_GONE;
 		p->post_handler = NULL;
 		p->break_handler = NULL;
+		kill_optimized_kprobe(p);
 	}
 	/*
 	 * Here, we can remove insn_slot safely, because no thread calls
@@ -1241,6 +1688,15 @@ static int __init init_kprobes(void)
 		}
 	}

+#if defined(CONFIG_OPTPROBES)
+#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
+	/* Init kprobe_optinsn_slots */
+	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+#endif
+	/* By default, kprobes can be optimized */
+	kprobes_allow_optimization = true;
+#endif
+
 	/* By default, kprobes are armed */
 	kprobes_all_disarmed = false;

@@ -1259,7 +1715,7 @@ static int __init init_kprobes(void)

 #ifdef CONFIG_DEBUG_FS
 static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
-		const char *sym, int offset,char *modname)
+		const char *sym, int offset, char *modname, struct kprobe *pp)
 {
 	char *kprobe_type;

@@ -1269,19 +1725,21 @@ static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
 		kprobe_type = "j";
 	else
 		kprobe_type = "k";
+
 	if (sym)
-		seq_printf(pi, "%p %s %s+0x%x %s %s%s\n",
+		seq_printf(pi, "%p %s %s+0x%x %s ",
 			p->addr, kprobe_type, sym, offset,
-			(modname ? modname : " "),
-			(kprobe_gone(p) ? "[GONE]" : ""),
-			((kprobe_disabled(p) && !kprobe_gone(p)) ?
-			 "[DISABLED]" : ""));
+			(modname ? modname : " "));
 	else
-		seq_printf(pi, "%p %s %p %s%s\n",
-			p->addr, kprobe_type, p->addr,
-			(kprobe_gone(p) ? "[GONE]" : ""),
-			((kprobe_disabled(p) && !kprobe_gone(p)) ?
-			 "[DISABLED]" : ""));
+		seq_printf(pi, "%p %s %p ",
+			p->addr, kprobe_type, p->addr);
+
+	if (!pp)
+		pp = p;
+	seq_printf(pi, "%s%s%s\n",
+		(kprobe_gone(p) ? "[GONE]" : ""),
+		((kprobe_disabled(p) && !kprobe_gone(p)) ? "[DISABLED]" : ""),
+		(kprobe_optimized(pp) ? "[OPTIMIZED]" : ""));
 }

 static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos)
@@ -1317,11 +1775,11 @@ static int __kprobes show_kprobe_addr(struct seq_file *pi, void *v)
 	hlist_for_each_entry_rcu(p, node, head, hlist) {
 		sym = kallsyms_lookup((unsigned long)p->addr, NULL,
					&offset, &modname, namebuf);
-		if (p->pre_handler == aggr_pre_handler) {
+		if (kprobe_aggrprobe(p)) {
 			list_for_each_entry_rcu(kp, &p->list, list)
-				report_probe(pi, kp, sym, offset, modname);
+				report_probe(pi, kp, sym, offset, modname, p);
 		} else
-			report_probe(pi, p, sym, offset, modname);
+			report_probe(pi, p, sym, offset, modname, NULL);
 	}
 	preempt_enable();
 	return 0;
@@ -1399,12 +1857,13 @@ int __kprobes enable_kprobe(struct kprobe *kp)
 		goto out;
 	}

-	if (!kprobes_all_disarmed && kprobe_disabled(p))
-		arm_kprobe(p);
-
-	p->flags &= ~KPROBE_FLAG_DISABLED;
 	if (p != kp)
 		kp->flags &= ~KPROBE_FLAG_DISABLED;
+
+	if (!kprobes_all_disarmed && kprobe_disabled(p)) {
+		p->flags &= ~KPROBE_FLAG_DISABLED;
+		arm_kprobe(p);
+	}
 out:
 	mutex_unlock(&kprobe_mutex);
 	return ret;
@@ -1424,12 +1883,13 @@ static void __kprobes arm_all_kprobes(void)
 	if (!kprobes_all_disarmed)
 		goto already_enabled;

+	/* Arming kprobes doesn't optimize kprobe itself */
 	mutex_lock(&text_mutex);
 	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
 		head = &kprobe_table[i];
 		hlist_for_each_entry_rcu(p, node, head, hlist)
			if (!kprobe_disabled(p))
-				arch_arm_kprobe(p);
+				__arm_kprobe(p);
 	}
 	mutex_unlock(&text_mutex);
@@ -1456,16 +1916,23 @@ static void __kprobes disarm_all_kprobes(void)
 	kprobes_all_disarmed = true;
 	printk(KERN_INFO "Kprobes globally disabled\n");

+	/*
+	 * Here we call get_online_cpus() for avoiding text_mutex deadlock,
+	 * because disarming may also unoptimize kprobes.
+	 */
+	get_online_cpus();
 	mutex_lock(&text_mutex);
 	for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
 		head = &kprobe_table[i];
 		hlist_for_each_entry_rcu(p, node, head, hlist) {
 			if (!arch_trampoline_kprobe(p) && !kprobe_disabled(p))
-				arch_disarm_kprobe(p);
+				__disarm_kprobe(p);
 		}
 	}
 	mutex_unlock(&text_mutex);
+	put_online_cpus();
 	mutex_unlock(&kprobe_mutex);

 	/* Allow all currently running kprobes to complete */
 	synchronize_sched();


@@ -50,6 +50,7 @@
 #include <linux/ftrace.h>
 #include <linux/slow-work.h>
 #include <linux/perf_event.h>
+#include <linux/kprobes.h>

 #include <asm/uaccess.h>
 #include <asm/processor.h>
@@ -1449,6 +1450,17 @@ static struct ctl_table debug_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec
 	},
+#endif
+#if defined(CONFIG_OPTPROBES)
+	{
+		.procname	= "kprobes-optimization",
+		.data		= &sysctl_kprobes_optimization,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_kprobes_optimization_handler,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
 #endif
 	{ }
 };


@@ -41,7 +41,8 @@ OPTIONS
 -d::
 --del=::
-	Delete a probe event.
+	Delete probe events. This accepts glob wildcards('*', '?') and character
+	classes(e.g. [a-z], [!A-Z]).
 -l::
 --list::
@@ -50,17 +51,29 @@ OPTIONS
 -L::
 --line=::
 	Show source code lines which can be probed. This needs an argument
-	which specifies a range of the source code.
+	which specifies a range of the source code. (see LINE SYNTAX for detail)
+
+-f::
+--force::
+	Forcibly add events with existing name.

 PROBE SYNTAX
 ------------
 Probe points are defined by following syntax.

-  "[EVENT=]FUNC[+OFFS|:RLN|%return][@SRC]|SRC:ALN [ARG ...]"
+ 1) Define event based on function name
+  [EVENT=]FUNC[@SRC][:RLN|+OFFS|%return|;PTN] [ARG ...]
+
+ 2) Define event based on source file with line number
+  [EVENT=]SRC:ALN [ARG ...]
+
+ 3) Define event based on source file with lazy pattern
+  [EVENT=]SRC;PTN [ARG ...]
+

 'EVENT' specifies the name of new event, if omitted, it will be set the name of the probed function. Currently, event group name is set as 'probe'.
-'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, 'RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. In addition, 'SRC' specifies a source file which has that function.
+'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, ':RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. And ';PTN' means lazy matching pattern (see LAZY MATCHING). Note that ';PTN' must be the end of the probe point definition. In addition, '@SRC' specifies a source file which has that function.
-It is also possible to specify a probe point by the source line number by using 'SRC:ALN' syntax, where 'SRC' is the source file path and 'ALN' is the line number.
+It is also possible to specify a probe point by the source line number or lazy matching by using 'SRC:ALN' or 'SRC;PTN' syntax, where 'SRC' is the source file path, ':ALN' is the line number and ';PTN' is the lazy matching pattern.
 'ARG' specifies the arguments of this probe point. You can use the name of local variable, or kprobe-tracer argument format (e.g. $retval, %ax, etc).

 LINE SYNTAX
@@ -76,6 +89,41 @@ and 'ALN2' is end line number in the file. It is also possible to specify how
 many lines to show by using 'NUM'.
 So, "source.c:100-120" shows lines between 100th to l20th in source.c file. And "func:10+20" shows 20 lines from 10th line of func function.

+LAZY MATCHING
+-------------
+The lazy line matching is similar to glob matching but ignoring spaces in both of pattern and target. So this accepts wildcards('*', '?') and character classes(e.g. [a-z], [!A-Z]).
+
+e.g.
+ 'a=*' can matches 'a=b', 'a = b', 'a == b' and so on.
+
+This provides some sort of flexibility and robustness to probe point definitions against minor code changes. For example, actual 10th line of schedule() can be moved easily by modifying schedule(), but the same line matching 'rq=cpu_rq*' may still exist in the function.)
+
+EXAMPLES
+--------
+Display which lines in schedule() can be probed:
+
+ ./perf probe --line schedule
+
+Add a probe on schedule() function 12th line with recording cpu local variable:
+
+ ./perf probe schedule:12 cpu
+or
+ ./perf probe --add='schedule:12 cpu'
+
+this will add one or more probes which has the name start with "schedule".
+
+Add probes on lines in schedule() function which calls update_rq_clock().
+
+ ./perf probe 'schedule;update_rq_clock*'
+or
+ ./perf probe --add='schedule;update_rq_clock*'
+
+Delete all probes on schedule().
+
+ ./perf probe --del='schedule*'
+
 SEE ALSO
 --------
 linkperf:perf-trace[1], linkperf:perf-record[1]

View File

@@ -500,12 +500,12 @@
 else
	msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]);
 endif

-ifneq ($(shell sh -c "(echo '\#ifndef _MIPS_SZLONG'; echo '\#define _MIPS_SZLONG 0'; echo '\#endif'; echo '\#include <dwarf.h>'; echo '\#include <libdwarf.h>'; echo 'int main(void) { Dwarf_Debug dbg; Dwarf_Error err; Dwarf_Ranges *rng; dwarf_init(0, DW_DLC_READ, 0, 0, &dbg, &err); dwarf_get_ranges(dbg, 0, &rng, 0, 0, &err); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/libdwarf -ldwarf -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y)
-	msg := $(warning No libdwarf.h found or old libdwarf.h found, disables dwarf support. Please install libdwarf-dev/libdwarf-devel >= 20081231);
-	BASIC_CFLAGS += -DNO_LIBDWARF
+ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo 'int main(void) { Dwarf *dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y)
+	msg := $(warning No libdw.h found or old libdw.h found, disables dwarf support. Please install elfutils-devel/elfutils-dev);
+	BASIC_CFLAGS += -DNO_DWARF_SUPPORT
 else
-	BASIC_CFLAGS += -I/usr/include/libdwarf
-	EXTLIBS += -lelf -ldwarf
+	BASIC_CFLAGS += -I/usr/include/elfutils
+	EXTLIBS += -lelf -ldw
	LIB_OBJS += util/probe-finder.o
 endif
View File

@@ -128,7 +128,7 @@ static void evaluate_probe_point(struct probe_point *pp)
			   pp->function);
 }

-#ifndef NO_LIBDWARF
+#ifndef NO_DWARF_SUPPORT
 static int open_vmlinux(void)
 {
	if (map__load(session.kmaps[MAP__FUNCTION], NULL) < 0) {
@@ -156,14 +156,16 @@ static const char * const probe_usage[] = {
	"perf probe [<options>] --add 'PROBEDEF' [--add 'PROBEDEF' ...]",
	"perf probe [<options>] --del '[GROUP:]EVENT' ...",
	"perf probe --list",
+#ifndef NO_DWARF_SUPPORT
	"perf probe --line 'LINEDESC'",
+#endif
	NULL
 };

 static const struct option options[] = {
	OPT_BOOLEAN('v', "verbose", &verbose,
		    "be more verbose (show parsed arguments, etc)"),
-#ifndef NO_LIBDWARF
+#ifndef NO_DWARF_SUPPORT
	OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
		   "file", "vmlinux pathname"),
 #endif
@@ -172,30 +174,32 @@ static const struct option options[] = {
	OPT_CALLBACK('d', "del", NULL, "[GROUP:]EVENT", "delete a probe event.",
		opt_del_probe_event),
	OPT_CALLBACK('a', "add", NULL,
-#ifdef NO_LIBDWARF
-		"[EVENT=]FUNC[+OFFS|%return] [ARG ...]",
+#ifdef NO_DWARF_SUPPORT
+		"[EVENT=]FUNC[+OFF|%return] [ARG ...]",
 #else
-		"[EVENT=]FUNC[+OFFS|%return|:RLN][@SRC]|SRC:ALN [ARG ...]",
+		"[EVENT=]FUNC[@SRC][+OFF|%return|:RL|;PT]|SRC:AL|SRC;PT"
+		" [ARG ...]",
 #endif
		"probe point definition, where\n"
		"\t\tGROUP:\tGroup name (optional)\n"
		"\t\tEVENT:\tEvent name\n"
		"\t\tFUNC:\tFunction name\n"
-		"\t\tOFFS:\tOffset from function entry (in byte)\n"
+		"\t\tOFF:\tOffset from function entry (in byte)\n"
		"\t\t%return:\tPut the probe at function return\n"
-#ifdef NO_LIBDWARF
+#ifdef NO_DWARF_SUPPORT
		"\t\tARG:\tProbe argument (only \n"
 #else
		"\t\tSRC:\tSource code path\n"
-		"\t\tRLN:\tRelative line number from function entry.\n"
-		"\t\tALN:\tAbsolute line number in file.\n"
+		"\t\tRL:\tRelative line number from function entry.\n"
+		"\t\tAL:\tAbsolute line number in file.\n"
+		"\t\tPT:\tLazy expression of line code.\n"
		"\t\tARG:\tProbe argument (local variable name or\n"
 #endif
		"\t\t\tkprobe-tracer argument format.)\n",
		opt_add_probe_event),
	OPT_BOOLEAN('f', "force", &session.force_add, "forcibly add events"
		    " with existing name"),
-#ifndef NO_LIBDWARF
+#ifndef NO_DWARF_SUPPORT
	OPT_CALLBACK('L', "line", NULL,
		     "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]",
		     "Show source code lines.", opt_show_lines),
@@ -223,7 +227,7 @@ static void init_vmlinux(void)
 int cmd_probe(int argc, const char **argv, const char *prefix __used)
 {
	int i, ret;
-#ifndef NO_LIBDWARF
+#ifndef NO_DWARF_SUPPORT
	int fd;
 #endif
	struct probe_point *pp;
@@ -259,7 +263,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
		return 0;
	}

-#ifndef NO_LIBDWARF
+#ifndef NO_DWARF_SUPPORT
	if (session.show_lines) {
		if (session.nr_probe != 0 || session.dellist) {
			pr_warning(" Error: Don't use --line with"
@@ -290,9 +294,9 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
	init_vmlinux();

	if (session.need_dwarf)
-#ifdef NO_LIBDWARF
+#ifdef NO_DWARF_SUPPORT
		die("Debuginfo-analysis is not supported");
-#else	/* !NO_LIBDWARF */
+#else	/* !NO_DWARF_SUPPORT */
		pr_debug("Some probes require debuginfo.\n");

	fd = open_vmlinux();
@@ -312,7 +316,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
			continue;

		lseek(fd, SEEK_SET, 0);
-		ret = find_probepoint(fd, pp);
+		ret = find_probe_point(fd, pp);
		if (ret > 0)
			continue;
		if (ret == 0) {	/* No error but failed to find probe point. */
@@ -333,7 +337,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
	close(fd);

 end_dwarf:
-#endif /* !NO_LIBDWARF */
+#endif /* !NO_DWARF_SUPPORT */

	/* Synthesize probes without dwarf */
	for (i = 0; i < session.nr_probe; i++) {
View File

@@ -119,14 +119,14 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
	char c, nc = 0;
	/*
	 * <Syntax>
-	 * perf probe [EVENT=]SRC:LN
-	 * perf probe [EVENT=]FUNC[+OFFS|%return][@SRC]
+	 * perf probe [EVENT=]SRC[:LN|;PTN]
+	 * perf probe [EVENT=]FUNC[@SRC][+OFFS|%return|:LN|;PAT]
	 *
	 * TODO:Group name support
	 */

-	ptr = strchr(arg, '=');
-	if (ptr) {	/* Event name */
+	ptr = strpbrk(arg, ";=@+%");
+	if (ptr && *ptr == '=') {	/* Event name */
		*ptr = '\0';
		tmp = ptr + 1;
		ptr = strchr(arg, ':');
@@ -139,7 +139,7 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
		arg = tmp;
	}

-	ptr = strpbrk(arg, ":+@%");
+	ptr = strpbrk(arg, ";:+@%");
	if (ptr) {
		nc = *ptr;
		*ptr++ = '\0';
@@ -156,7 +156,11 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
	while (ptr) {
		arg = ptr;
		c = nc;
-		ptr = strpbrk(arg, ":+@%");
+		if (c == ';') {	/* Lazy pattern must be the last part */
+			pp->lazy_line = strdup(arg);
+			break;
+		}
+		ptr = strpbrk(arg, ";:+@%");
		if (ptr) {
			nc = *ptr;
			*ptr++ = '\0';
@@ -165,13 +169,13 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
		case ':':	/* Line number */
			pp->line = strtoul(arg, &tmp, 0);
			if (*tmp != '\0')
-				semantic_error("There is non-digit charactor"
+				semantic_error("There is non-digit char"
						" in line number.");
			break;
		case '+':	/* Byte offset from a symbol */
			pp->offset = strtoul(arg, &tmp, 0);
			if (*tmp != '\0')
-				semantic_error("There is non-digit charactor"
+				semantic_error("There is non-digit character"
						" in offset.");
			break;
		case '@':	/* File name */
@@ -179,9 +183,6 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
				semantic_error("SRC@SRC is not allowed.");
			pp->file = strdup(arg);
			DIE_IF(pp->file == NULL);
-			if (ptr)
-				semantic_error("@SRC must be the last "
-					       "option.");
			break;
		case '%':	/* Probe places */
			if (strcmp(arg, "return") == 0) {
@ -196,11 +197,18 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
} }
/* Exclusion check */ /* Exclusion check */
if (pp->lazy_line && pp->line)
semantic_error("Lazy pattern can't be used with line number.");
if (pp->lazy_line && pp->offset)
semantic_error("Lazy pattern can't be used with offset.");
if (pp->line && pp->offset) if (pp->line && pp->offset)
 		semantic_error("Offset can't be used with line number.");
-	if (!pp->line && pp->file && !pp->function)
-		semantic_error("File always requires line number.");
+	if (!pp->line && !pp->lazy_line && pp->file && !pp->function)
+		semantic_error("File always requires line number or "
+			       "lazy pattern.");
 	if (pp->offset && !pp->function)
 		semantic_error("Offset requires an entry function.");
@@ -208,11 +216,13 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
 	if (pp->retprobe && !pp->function)
 		semantic_error("Return probe requires an entry function.");
-	if ((pp->offset || pp->line) && pp->retprobe)
-		semantic_error("Offset/Line can't be used with return probe.");
+	if ((pp->offset || pp->line || pp->lazy_line) && pp->retprobe)
+		semantic_error("Offset/Line/Lazy pattern can't be used with "
+			       "return probe.");
 
-	pr_debug("symbol:%s file:%s line:%d offset:%d, return:%d\n",
-		 pp->function, pp->file, pp->line, pp->offset, pp->retprobe);
+	pr_debug("symbol:%s file:%s line:%d offset:%d return:%d lazy:%s\n",
+		 pp->function, pp->file, pp->line, pp->offset, pp->retprobe,
+		 pp->lazy_line);
 }
 
 /* Parse perf-probe event definition */
@@ -458,6 +468,8 @@ static void clear_probe_point(struct probe_point *pp)
 		free(pp->function);
 	if (pp->file)
 		free(pp->file);
+	if (pp->lazy_line)
+		free(pp->lazy_line);
 	for (i = 0; i < pp->nr_args; i++)
 		free(pp->args[i]);
 	if (pp->args)
@@ -719,6 +731,7 @@ void del_trace_kprobe_events(struct strlist *dellist)
 }
 
 #define LINEBUF_SIZE 256
+#define NR_ADDITIONAL_LINES 2
 
 static void show_one_line(FILE *fp, unsigned int l, bool skip, bool show_num)
 {
@@ -779,5 +792,11 @@ void show_line_range(struct line_range *lr)
 			show_one_line(fp, (l++) - lr->offset, false, false);
 		show_one_line(fp, (l++) - lr->offset, false, true);
 	}
+	if (lr->end == INT_MAX)
+		lr->end = l + NR_ADDITIONAL_LINES;
+	while (l < lr->end && !feof(fp))
+		show_one_line(fp, (l++) - lr->offset, false, false);
 	fclose(fp);
 }

File diff suppressed because it is too large


@@ -1,6 +1,7 @@
 #ifndef _PROBE_FINDER_H
 #define _PROBE_FINDER_H
 
+#include <stdbool.h>
 #include "util.h"
 
 #define MAX_PATH_LEN 256
@@ -20,6 +21,7 @@ struct probe_point {
 	/* Inputs */
 	char		*file;		/* File name */
 	int		line;		/* Line number */
+	char		*lazy_line;	/* Lazy line pattern */
 
 	char		*function;	/* Function name */
 	int		offset;		/* Offset bytes */
@@ -46,53 +48,46 @@ struct line_range {
 	char		*function;	/* Function name */
 	unsigned int	start;		/* Start line number */
 	unsigned int	end;		/* End line number */
-	unsigned int	offset;		/* Start line offset */
+	int		offset;		/* Start line offset */
 	char		*path;		/* Real path name */
 	struct list_head line_list;	/* Visible lines */
 };
 
-#ifndef NO_LIBDWARF
-extern int find_probepoint(int fd, struct probe_point *pp);
+#ifndef NO_DWARF_SUPPORT
+extern int find_probe_point(int fd, struct probe_point *pp);
 extern int find_line_range(int fd, struct line_range *lr);
 
-/* Workaround for undefined _MIPS_SZLONG bug in libdwarf.h: */
-#ifndef _MIPS_SZLONG
-# define _MIPS_SZLONG 0
-#endif
-
 #include <dwarf.h>
-#include <libdwarf.h>
+#include <libdw.h>
 
 struct probe_finder {
 	struct probe_point	*pp;		/* Target probe point */
 
 	/* For function searching */
 	Dwarf_Addr		addr;		/* Address */
-	Dwarf_Unsigned		fno;		/* File number */
-	Dwarf_Unsigned		lno;		/* Line number */
-	Dwarf_Off		inl_offs;	/* Inline offset */
+	const char		*fname;		/* File name */
+	int			lno;		/* Line number */
 	Dwarf_Die		cu_die;		/* Current CU */
 
 	/* For variable searching */
-	Dwarf_Addr		cu_base;	/* Current CU base address */
-	Dwarf_Locdesc		fbloc;		/* Location of Current Frame Base */
+	Dwarf_Op		*fb_ops;	/* Frame base attribute */
+	Dwarf_Addr		cu_base;	/* Current CU base address */
 	const char		*var;		/* Current variable name */
 	char			*buf;		/* Current output buffer */
 	int			len;		/* Length of output buffer */
+	struct list_head	lcache;		/* Line cache for lazy match */
 };
 
 struct line_finder {
 	struct line_range	*lr;		/* Target line range */
 
-	Dwarf_Unsigned		fno;		/* File number */
-	Dwarf_Unsigned		lno_s;		/* Start line number */
-	Dwarf_Unsigned		lno_e;		/* End line number */
-	Dwarf_Addr		addr_s;		/* Start address */
-	Dwarf_Addr		addr_e;		/* End address */
+	const char		*fname;		/* File name */
+	int			lno_s;		/* Start line number */
+	int			lno_e;		/* End line number */
 	Dwarf_Die		cu_die;		/* Current CU */
 	int			found;
 };
-#endif /* NO_LIBDWARF */
+#endif /* NO_DWARF_SUPPORT */
 
 #endif /*_PROBE_FINDER_H */


@@ -265,21 +265,21 @@ error:
 	return false;
 }
 
-/**
- * strglobmatch - glob expression pattern matching
- * @str: the target string to match
- * @pat: the pattern string to match
- *
- * This returns true if the @str matches @pat. @pat can includes wildcards
- * ('*','?') and character classes ([CHARS], complementation and ranges are
- * also supported). Also, this supports escape character ('\') to use special
- * characters as normal character.
- *
- * Note: if @pat syntax is broken, this always returns false.
- */
-bool strglobmatch(const char *str, const char *pat)
+/* Glob/lazy pattern matching */
+static bool __match_glob(const char *str, const char *pat, bool ignore_space)
 {
 	while (*str && *pat && *pat != '*') {
+		if (ignore_space) {
+			/* Ignore spaces for lazy matching */
+			if (isspace(*str)) {
+				str++;
+				continue;
+			}
+			if (isspace(*pat)) {
+				pat++;
+				continue;
+			}
+		}
 		if (*pat == '?') {	/* Matches any single character */
 			str++;
 			pat++;
@@ -308,3 +308,32 @@ bool strglobmatch(const char *str, const char *pat)
 	return !*str && !*pat;
 }
+
+/**
+ * strglobmatch - glob expression pattern matching
+ * @str: the target string to match
+ * @pat: the pattern string to match
+ *
+ * This returns true if the @str matches @pat. @pat can includes wildcards
+ * ('*','?') and character classes ([CHARS], complementation and ranges are
+ * also supported). Also, this supports escape character ('\') to use special
+ * characters as normal character.
+ *
+ * Note: if @pat syntax is broken, this always returns false.
+ */
+bool strglobmatch(const char *str, const char *pat)
+{
+	return __match_glob(str, pat, false);
+}
+
+/**
+ * strlazymatch - matching pattern strings lazily with glob pattern
+ * @str: the target string to match
+ * @pat: the pattern string to match
+ *
+ * This is similar to strglobmatch, except this ignores spaces in
+ * the target string.
+ */
+bool strlazymatch(const char *str, const char *pat)
+{
+	return __match_glob(str, pat, true);
+}


@@ -10,6 +10,7 @@ s64 perf_atoll(const char *str);
 char **argv_split(const char *str, int *argcp);
 void argv_free(char **argv);
 bool strglobmatch(const char *str, const char *pat);
+bool strlazymatch(const char *str, const char *pat);
 
 #define _STR(x) #x
 #define STR(x) _STR(x)