rseq: Introduce restartable sequences system call
Expose a new system call allowing each thread to register one userspace
memory area to be used as an ABI between kernel and user-space for two
purposes: user-space restartable sequences and quick access to read the
current CPU number value from user-space.

* Restartable sequences (per-cpu atomics)

Restartable sequences allow user-space to perform update operations on
per-cpu data without requiring heavy-weight atomic operations.

The restartable critical sections (percpu atomics) work has been
started by Paul Turner and Andrew Hunter. It lets the kernel handle
restart of critical sections. [1] [2] The re-implementation proposed
here brings a few simplifications to the ABI which facilitate porting
to other architectures and speed up the user-space fast path.

Here are benchmarks of various rseq use-cases.

Test hardware:

arm32:  ARMv7 Processor rev 4 (v7l) "Cubietruck", 2-core
x86-64: Intel E5-2630 v3@2.40GHz, 16-core, hyperthreading

The following benchmarks were all performed on a single thread.

* Per-CPU statistic counter increment

            getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:             344.0                 31.4          11.0
x86-64:             15.3                  2.0           7.7

* LTTng-UST: write event 32-bit header, 32-bit payload into tracer
  per-cpu buffer

            getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:            2502.0               2250.0           1.1
x86-64:            117.4                 98.0           1.2

* liburcu percpu: lock-unlock pair, dereference, read/compare word

            getcpu+atomic (ns/op)    rseq (ns/op)    speedup
arm32:             751.0                128.5           5.8
x86-64:             53.4                 28.6           1.9

* jemalloc memory allocator adapted to use rseq

Using rseq with per-cpu memory pools in jemalloc at Facebook (based on
the rseq 2016 implementation): the production workload response-time
gains 1-2% in average latency, and the P99 overall latency drops by
2-3%.

* Reading the current CPU number

Speeding up reading the current CPU number on which the caller thread
is running is done by keeping the current CPU number up to date within
the cpu_id field of the memory area registered by the thread. This is
done by making scheduler preemption set the TIF_NOTIFY_RESUME flag on
the current thread. Upon return to user-space, a notify-resume handler
updates the current CPU value within the registered user-space memory
area. User-space can then read the current CPU number directly from
memory.

Keeping the current cpu id in a memory area shared between kernel and
user-space is an improvement over the current mechanisms available to
read the CPU number, with the following benefits over alternative
approaches:

- 35x speedup on ARM vs system call through glibc,
- 20x speedup on x86 compared to calling glibc, which calls vdso
  executing a "lsl" instruction,
- 14x speedup on x86 compared to an inlined "lsl" instruction,
- Unlike vdso approaches, this cpu_id value can be read from inline
  assembly, which makes it a useful building block for restartable
  sequences,
- The approach of reading the cpu id through a memory mapping shared
  between kernel and user-space is portable (e.g. ARM), which is not
  the case for the lsl-based x86 vdso.

On x86, yet another possible approach would be to use the gs segment
selector to point to user-space per-cpu data. This approach performs
similarly to the cpu id cache, but it has two disadvantages: it is not
portable, and it is incompatible with existing applications already
using the gs segment selector for other purposes.
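[Editor's illustration, not part of this patch: a minimal user-space
sketch of the registration and CPU-number fast path described above.
The RSEQ_SIG value and the thread-local variable name are arbitrary,
and it assumes the uapi headers from this series are installed and that
nothing else (e.g. a future libc) has already registered an rseq area
for this thread.]

#define _GNU_SOURCE
#include <linux/rseq.h>         /* struct rseq, from this series' uapi headers */
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>        /* __NR_rseq, with matching kernel headers */
#include <unistd.h>

#define RSEQ_SIG 0x53053053     /* arbitrary; must be reused on unregister/abort */

/* One registered struct rseq per thread, aligned as the ABI requires. */
static __thread struct rseq rseq_area
		__attribute__((aligned(4 * sizeof(uint64_t))));

int main(void)
{
	if (syscall(__NR_rseq, &rseq_area, sizeof(rseq_area), 0, RSEQ_SIG)) {
		perror("rseq registration");
		return 1;
	}
	/*
	 * From here on, the kernel refreshes cpu_id via the notify-resume
	 * handler on every return to user-space, so reading the current
	 * CPU number is a plain memory load.
	 */
	printf("running on CPU %d\n",
	       (int)*(volatile uint32_t *)&rseq_area.cpu_id);
	return 0;
}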
Benchmarking various approaches for reading the current CPU number:

ARMv7 Processor rev 4 (v7l)
Machine model: Cubietruck
- Baseline (empty loop):                       8.4 ns
- Read CPU from rseq cpu_id:                  16.7 ns
- Read CPU from rseq cpu_id (lazy register):  19.8 ns
- glibc 2.19-0ubuntu6.6 getcpu:              301.8 ns
- getcpu system call:                        234.9 ns

x86-64 Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz:
- Baseline (empty loop):                       0.8 ns
- Read CPU from rseq cpu_id:                   0.8 ns
- Read CPU from rseq cpu_id (lazy register):   0.8 ns
- Read using gs segment selector:              0.8 ns
- "lsl" inline assembly:                      13.0 ns
- glibc 2.19-0ubuntu6 getcpu:                 16.6 ns
- getcpu system call:                         53.9 ns

- Speed (benchmark taken on v8 of patchset)

Running 10 runs of hackbench -l 100000 seems to indicate, contrary to
expectations, that enabling CONFIG_RSEQ slightly accelerates the
scheduler:

Configuration: 2 sockets * 8-core Intel(R) Xeon(R) CPU E5-2630 v3
@ 2.40GHz (directly on hardware, hyperthreading disabled in BIOS,
energy saving disabled in BIOS, turboboost disabled in BIOS,
cpuidle.off=1 kernel parameter), with a Linux v4.6
defconfig+localyesconfig, restartable sequences series applied.

* CONFIG_RSEQ=n
avg.:      41.37 s
std.dev.:   0.36 s

* CONFIG_RSEQ=y
avg.:      40.46 s
std.dev.:   0.33 s

- Size

On x86-64, between CONFIG_RSEQ=n/y, the text size increase of vmlinux
is 567 bytes, and the data size increase of vmlinux is 5696 bytes.

[1] https://lwn.net/Articles/650333/
[2] http://www.linuxplumbersconf.org/2013/ocw/system/presentations/1695/original/LPC%20-%20PerCpu%20Atomics.pdf

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Watson <davejwatson@fb.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Chris Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Andrew Hunter <ahh@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Maurer <bmaurer@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20151027235635.16059.11630.stgit@pjt-glaptop.roam.corp.google.com
Link: http://lkml.kernel.org/r/20150624222609.6116.86035.stgit@kitami.mtv.corp.google.com
Link: https://lkml.kernel.org/r/20180602124408.8430-3-mathieu.desnoyers@efficios.com
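[Editor's illustration: numbers like the "read CPU" figures above come
from a tight read loop. A hedged sketch of such a micro-benchmark; the
clock choice and iteration count are arbitrary, and rseq_area is the
registered area from the sketch earlier.]

#include <linux/rseq.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

extern __thread struct rseq rseq_area;	/* registered earlier; see sketch above */

/* Time N reads of the rseq cpu_id field, reporting ns per operation. */
static void bench_read_cpu(void)
{
	enum { N = 100 * 1000 * 1000 };
	struct timespec t0, t1;
	uint32_t sink = 0;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < N; i++)
		sink += *(volatile uint32_t *)&rseq_area.cpu_id;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("read cpu_id: %.1f ns/op (sink=%u)\n", ns / N, sink);
}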
This commit is contained in:
parent b575e83721
commit d7822b1e24
MAINTAINERS | 11

@@ -11976,6 +11976,17 @@ F:	include/dt-bindings/reset/
 F:	include/linux/reset.h
 F:	include/linux/reset-controller.h
 
+RESTARTABLE SEQUENCES SUPPORT
+M:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+M:	Peter Zijlstra <peterz@infradead.org>
+M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+M:	Boqun Feng <boqun.feng@gmail.com>
+L:	linux-kernel@vger.kernel.org
+S:	Supported
+F:	kernel/rseq.c
+F:	include/uapi/linux/rseq.h
+F:	include/trace/events/rseq.h
+
 RFKILL
 M:	Johannes Berg <johannes@sipsolutions.net>
 L:	linux-wireless@vger.kernel.org
arch/Kconfig

@@ -272,6 +272,13 @@ config HAVE_REGS_AND_STACK_ACCESS_API
 	  declared in asm/ptrace.h
 	  For example the kprobes-based event tracer needs this API.
 
+config HAVE_RSEQ
+	bool
+	depends on HAVE_REGS_AND_STACK_ACCESS_API
+	help
+	  This symbol should be selected by an architecture if it
+	  supports an implementation of restartable sequences.
+
 config HAVE_CLK
 	bool
 	help
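[Editor's note: an architecture opts in by selecting the new symbol
from its own Kconfig; the per-architecture enablement lands in separate
patches of this series, along the lines of this sketch:]

# Sketch only; the actual x86/arm enablement is in follow-up patches.
config X86
	...
	select HAVE_RSEQ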
fs/exec.c

@@ -1822,6 +1822,7 @@ static int do_execveat_common(int fd, struct filename *filename,
 	current->fs->in_exec = 0;
 	current->in_execve = 0;
 	membarrier_execve(current);
+	rseq_execve(current);
 	acct_update_integrals(current);
 	task_numa_free(current);
 	free_bprm(bprm);
include/linux/sched.h

@@ -27,6 +27,7 @@
 #include <linux/signal_types.h>
 #include <linux/mm_types_task.h>
 #include <linux/task_io_accounting.h>
+#include <linux/rseq.h>
 
 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
@@ -1047,6 +1048,17 @@ struct task_struct {
 	unsigned long numa_pages_migrated;
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_RSEQ
+	struct rseq __user *rseq;
+	u32 rseq_len;
+	u32 rseq_sig;
+	/*
+	 * RmW on rseq_event_mask must be performed atomically
+	 * with respect to preemption.
+	 */
+	unsigned long rseq_event_mask;
+#endif
+
 	struct tlbflush_unmap_batch tlb_ubc;
 
 	struct rcu_head rcu;
@ -1757,4 +1769,126 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
|
||||
#define TASK_SIZE_OF(tsk) TASK_SIZE
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_RSEQ
|
||||
|
||||
/*
|
||||
* Map the event mask on the user-space ABI enum rseq_cs_flags
|
||||
* for direct mask checks.
|
||||
*/
|
||||
enum rseq_event_mask_bits {
|
||||
RSEQ_EVENT_PREEMPT_BIT = RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT,
|
||||
RSEQ_EVENT_SIGNAL_BIT = RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT,
|
||||
RSEQ_EVENT_MIGRATE_BIT = RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT,
|
||||
};
|
||||
|
||||
enum rseq_event_mask {
|
||||
RSEQ_EVENT_PREEMPT = (1U << RSEQ_EVENT_PREEMPT_BIT),
|
||||
RSEQ_EVENT_SIGNAL = (1U << RSEQ_EVENT_SIGNAL_BIT),
|
||||
RSEQ_EVENT_MIGRATE = (1U << RSEQ_EVENT_MIGRATE_BIT),
|
||||
};
|
||||
|
||||
static inline void rseq_set_notify_resume(struct task_struct *t)
|
||||
{
|
||||
if (t->rseq)
|
||||
set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
|
||||
}
|
||||
|
||||
void __rseq_handle_notify_resume(struct pt_regs *regs);
|
||||
|
||||
static inline void rseq_handle_notify_resume(struct pt_regs *regs)
|
||||
{
|
||||
if (current->rseq)
|
||||
__rseq_handle_notify_resume(regs);
|
||||
}
|
||||
|
||||
static inline void rseq_signal_deliver(struct pt_regs *regs)
|
||||
{
|
||||
preempt_disable();
|
||||
__set_bit(RSEQ_EVENT_SIGNAL_BIT, ¤t->rseq_event_mask);
|
||||
preempt_enable();
|
||||
rseq_handle_notify_resume(regs);
|
||||
}
|
||||
|
||||
/* rseq_preempt() requires preemption to be disabled. */
|
||||
static inline void rseq_preempt(struct task_struct *t)
|
||||
{
|
||||
__set_bit(RSEQ_EVENT_PREEMPT_BIT, &t->rseq_event_mask);
|
||||
rseq_set_notify_resume(t);
|
||||
}
|
||||
|
||||
/* rseq_migrate() requires preemption to be disabled. */
|
||||
static inline void rseq_migrate(struct task_struct *t)
|
||||
{
|
||||
__set_bit(RSEQ_EVENT_MIGRATE_BIT, &t->rseq_event_mask);
|
||||
rseq_set_notify_resume(t);
|
||||
}
|
||||
|
||||
/*
|
||||
* If parent process has a registered restartable sequences area, the
|
||||
* child inherits. Only applies when forking a process, not a thread. In
|
||||
* case a parent fork() in the middle of a restartable sequence, set the
|
||||
* resume notifier to force the child to retry.
|
||||
*/
|
||||
static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags)
|
||||
{
|
||||
if (clone_flags & CLONE_THREAD) {
|
||||
t->rseq = NULL;
|
||||
t->rseq_len = 0;
|
||||
t->rseq_sig = 0;
|
||||
t->rseq_event_mask = 0;
|
||||
} else {
|
||||
t->rseq = current->rseq;
|
||||
t->rseq_len = current->rseq_len;
|
||||
t->rseq_sig = current->rseq_sig;
|
||||
t->rseq_event_mask = current->rseq_event_mask;
|
||||
rseq_preempt(t);
|
||||
}
|
||||
}
|
||||
|
||||
static inline void rseq_execve(struct task_struct *t)
|
||||
{
|
||||
t->rseq = NULL;
|
||||
t->rseq_len = 0;
|
||||
t->rseq_sig = 0;
|
||||
t->rseq_event_mask = 0;
|
||||
}
|
||||
|
||||
#else
|
||||
|
||||
static inline void rseq_set_notify_resume(struct task_struct *t)
|
||||
{
|
||||
}
|
||||
static inline void rseq_handle_notify_resume(struct pt_regs *regs)
|
||||
{
|
||||
}
|
||||
static inline void rseq_signal_deliver(struct pt_regs *regs)
|
||||
{
|
||||
}
|
||||
static inline void rseq_preempt(struct task_struct *t)
|
||||
{
|
||||
}
|
||||
static inline void rseq_migrate(struct task_struct *t)
|
||||
{
|
||||
}
|
||||
static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags)
|
||||
{
|
||||
}
|
||||
static inline void rseq_execve(struct task_struct *t)
|
||||
{
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_DEBUG_RSEQ
|
||||
|
||||
void rseq_syscall(struct pt_regs *regs);
|
||||
|
||||
#else
|
||||
|
||||
static inline void rseq_syscall(struct pt_regs *regs)
|
||||
{
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
||||
|
include/linux/syscalls.h

@@ -66,6 +66,7 @@ struct old_linux_dirent;
 struct perf_event_attr;
 struct file_handle;
 struct sigaltstack;
+struct rseq;
 union bpf_attr;
 
 #include <linux/types.h>
@@ -897,7 +898,8 @@ asmlinkage long sys_pkey_alloc(unsigned long flags, unsigned long init_val);
 asmlinkage long sys_pkey_free(int pkey);
 asmlinkage long sys_statx(int dfd, const char __user *path, unsigned flags,
 			  unsigned mask, struct statx __user *buffer);
-
+asmlinkage long sys_rseq(struct rseq __user *rseq, uint32_t rseq_len,
+			 int flags, uint32_t sig);
 
 /*
  * Architecture-specific system calls
include/trace/events/rseq.h | 57 (new file)

@@ -0,0 +1,57 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM rseq

#if !defined(_TRACE_RSEQ_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_RSEQ_H

#include <linux/tracepoint.h>
#include <linux/types.h>

TRACE_EVENT(rseq_update,

	TP_PROTO(struct task_struct *t),

	TP_ARGS(t),

	TP_STRUCT__entry(
		__field(s32, cpu_id)
	),

	TP_fast_assign(
		__entry->cpu_id = raw_smp_processor_id();
	),

	TP_printk("cpu_id=%d", __entry->cpu_id)
);

TRACE_EVENT(rseq_ip_fixup,

	TP_PROTO(unsigned long regs_ip, unsigned long start_ip,
		 unsigned long post_commit_offset, unsigned long abort_ip),

	TP_ARGS(regs_ip, start_ip, post_commit_offset, abort_ip),

	TP_STRUCT__entry(
		__field(unsigned long, regs_ip)
		__field(unsigned long, start_ip)
		__field(unsigned long, post_commit_offset)
		__field(unsigned long, abort_ip)
	),

	TP_fast_assign(
		__entry->regs_ip = regs_ip;
		__entry->start_ip = start_ip;
		__entry->post_commit_offset = post_commit_offset;
		__entry->abort_ip = abort_ip;
	),

	TP_printk("regs_ip=0x%lx start_ip=0x%lx post_commit_offset=%lu abort_ip=0x%lx",
		  __entry->regs_ip, __entry->start_ip,
		  __entry->post_commit_offset, __entry->abort_ip)
);

#endif /* _TRACE_SOCK_H */

/* This part must be outside protection */
#include <trace/define_trace.h>
include/uapi/linux/rseq.h | 133 (new file)

@@ -0,0 +1,133 @@
/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
#ifndef _UAPI_LINUX_RSEQ_H
#define _UAPI_LINUX_RSEQ_H

/*
 * linux/rseq.h
 *
 * Restartable sequences system call API
 *
 * Copyright (c) 2015-2018 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 */

#ifdef __KERNEL__
# include <linux/types.h>
#else
# include <stdint.h>
#endif

#include <linux/types_32_64.h>

enum rseq_cpu_id_state {
	RSEQ_CPU_ID_UNINITIALIZED		= -1,
	RSEQ_CPU_ID_REGISTRATION_FAILED		= -2,
};

enum rseq_flags {
	RSEQ_FLAG_UNREGISTER = (1 << 0),
};

enum rseq_cs_flags_bit {
	RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT	= 0,
	RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT	= 1,
	RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT	= 2,
};

enum rseq_cs_flags {
	RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT	=
		(1U << RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT),
	RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL	=
		(1U << RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT),
	RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE	=
		(1U << RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT),
};

/*
 * struct rseq_cs is aligned on 4 * 8 bytes to ensure it is always
 * contained within a single cache-line. It is usually declared as
 * link-time constant data.
 */
struct rseq_cs {
	/* Version of this structure. */
	__u32 version;
	/* enum rseq_cs_flags */
	__u32 flags;
	LINUX_FIELD_u32_u64(start_ip);
	/* Offset from start_ip. */
	LINUX_FIELD_u32_u64(post_commit_offset);
	LINUX_FIELD_u32_u64(abort_ip);
} __attribute__((aligned(4 * sizeof(__u64))));

/*
 * struct rseq is aligned on 4 * 8 bytes to ensure it is always
 * contained within a single cache-line.
 *
 * A single struct rseq per thread is allowed.
 */
struct rseq {
	/*
	 * Restartable sequences cpu_id_start field. Updated by the
	 * kernel, and read by user-space with single-copy atomicity
	 * semantics. Aligned on 32-bit. Always contains a value in the
	 * range of possible CPUs, although the value may not be the
	 * actual current CPU (e.g. if rseq is not initialized). This
	 * CPU number value should always be compared against the value
	 * of the cpu_id field before performing a rseq commit or
	 * returning a value read from a data structure indexed using
	 * the cpu_id_start value.
	 */
	__u32 cpu_id_start;
	/*
	 * Restartable sequences cpu_id field. Updated by the kernel,
	 * and read by user-space with single-copy atomicity semantics.
	 * Aligned on 32-bit. Values RSEQ_CPU_ID_UNINITIALIZED and
	 * RSEQ_CPU_ID_REGISTRATION_FAILED have a special semantic: the
	 * former means "rseq uninitialized", and latter means "rseq
	 * initialization failed". This value is meant to be read within
	 * rseq critical sections and compared with the cpu_id_start
	 * value previously read, before performing the commit instruction,
	 * or read and compared with the cpu_id_start value before returning
	 * a value loaded from a data structure indexed using the
	 * cpu_id_start value.
	 */
	__u32 cpu_id;
	/*
	 * Restartable sequences rseq_cs field.
	 *
	 * Contains NULL when no critical section is active for the current
	 * thread, or holds a pointer to the currently active struct rseq_cs.
	 *
	 * Updated by user-space, which sets the address of the currently
	 * active rseq_cs at the beginning of assembly instruction sequence
	 * block, and set to NULL by the kernel when it restarts an assembly
	 * instruction sequence block, as well as when the kernel detects that
	 * it is preempting or delivering a signal outside of the range
	 * targeted by the rseq_cs. Also needs to be set to NULL by user-space
	 * before reclaiming memory that contains the targeted struct rseq_cs.
	 *
	 * Read and set by the kernel with single-copy atomicity semantics.
	 * Set by user-space with single-copy atomicity semantics. Aligned
	 * on 64-bit.
	 */
	LINUX_FIELD_u32_u64(rseq_cs);
	/*
	 * - RSEQ_DISABLE flag:
	 *
	 * Fallback fast-track flag for single-stepping.
	 * Set by user-space if lack of progress is detected.
	 * Cleared by user-space after rseq finish.
	 * Read by the kernel.
	 * - RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT
	 *   Inhibit instruction sequence block restart and event
	 *   counter increment on preemption for this thread.
	 * - RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL
	 *   Inhibit instruction sequence block restart and event
	 *   counter increment on signal delivery for this thread.
	 * - RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE
	 *   Inhibit instruction sequence block restart and event
	 *   counter increment on migration for this thread.
	 */
	__u32 flags;
} __attribute__((aligned(4 * sizeof(__u64))));

#endif /* _UAPI_LINUX_RSEQ_H */
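[Editor's illustration of the cpu_id_start/cpu_id contract documented
above, as hedged pseudocode: in real code the loads and the compare sit
inside the registered assembly critical section, and my_rseq is a
hypothetical thread-local area registered via sys_rseq().]

#include <linux/rseq.h>	/* the header above, installed as a uapi header */
#include <stdint.h>

extern __thread struct rseq my_rseq;	/* hypothetical, registered elsewhere */

long read_percpu_slot(const long *slots)
{
	uint32_t cpu = my_rseq.cpu_id_start;	/* always a plausible CPU index */
	long v = slots[cpu];

	/*
	 * Validate against cpu_id before trusting the value. In a real
	 * rseq critical section this compare is part of the registered
	 * assembly block, so the kernel aborts the section on preemption,
	 * signal delivery, or migration instead of relying on this
	 * after-the-fact check alone.
	 */
	if (cpu != my_rseq.cpu_id)
		return -1;	/* caller retries or falls back to locking */
	return v;
}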
init/Kconfig | 23

@@ -1417,6 +1417,29 @@ config ARCH_HAS_MEMBARRIER_CALLBACKS
 config ARCH_HAS_MEMBARRIER_SYNC_CORE
 	bool
 
+config RSEQ
+	bool "Enable rseq() system call" if EXPERT
+	default y
+	depends on HAVE_RSEQ
+	select MEMBARRIER
+	help
+	  Enable the restartable sequences system call. It provides a
+	  user-space cache for the current CPU number value, which
+	  speeds up getting the current CPU number from user-space,
+	  as well as an ABI to speed up user-space operations on
+	  per-CPU data.
+
+	  If unsure, say Y.
+
+config DEBUG_RSEQ
+	default n
+	bool "Enabled debugging of rseq() system call" if EXPERT
+	depends on RSEQ && DEBUG_KERNEL
+	help
+	  Enable extra debugging checks for the rseq system call.
+
+	  If unsure, say N.
+
 config EMBEDDED
 	bool "Embedded system"
 	option allnoconfig_y
kernel/Makefile

@@ -113,6 +113,7 @@ obj-$(CONFIG_CONTEXT_TRACKING) += context_tracking.o
 obj-$(CONFIG_TORTURE_TEST) += torture.o
 
 obj-$(CONFIG_HAS_IOMEM) += memremap.o
+obj-$(CONFIG_RSEQ) += rseq.o
 
 $(obj)/configs.o: $(obj)/config_data.h
kernel/fork.c

@@ -1899,6 +1899,8 @@ static __latent_entropy struct task_struct *copy_process(
 	 */
 	copy_seccomp(p);
 
+	rseq_fork(p, clone_flags);
+
 	/*
 	 * Process group and session signals need to be delivered to just the
 	 * parent before the fork or both the parent and the child after the
kernel/rseq.c | 357 (new file)

@@ -0,0 +1,357 @@
// SPDX-License-Identifier: GPL-2.0+
/*
 * Restartable sequences system call
 *
 * Copyright (C) 2015, Google, Inc.,
 * Paul Turner <pjt@google.com> and Andrew Hunter <ahh@google.com>
 * Copyright (C) 2015-2018, EfficiOS Inc.,
 * Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 */

#include <linux/sched.h>
#include <linux/uaccess.h>
#include <linux/syscalls.h>
#include <linux/rseq.h>
#include <linux/types.h>
#include <asm/ptrace.h>

#define CREATE_TRACE_POINTS
#include <trace/events/rseq.h>

#define RSEQ_CS_PREEMPT_MIGRATE_FLAGS (RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE | \
				       RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT)

/*
 *
 * Restartable sequences are a lightweight interface that allows
 * user-level code to be executed atomically relative to scheduler
 * preemption and signal delivery. Typically used for implementing
 * per-cpu operations.
 *
 * It allows user-space to perform update operations on per-cpu data
 * without requiring heavy-weight atomic operations.
 *
 * Detailed algorithm of rseq user-space assembly sequences:
 *
 *                     init(rseq_cs)
 *                     cpu = TLS->rseq::cpu_id_start
 *   [1]               TLS->rseq::rseq_cs = rseq_cs
 *   [start_ip]        ----------------------------
 *   [2]               if (cpu != TLS->rseq::cpu_id)
 *                             goto abort_ip;
 *   [3]               <last_instruction_in_cs>
 *   [post_commit_ip]  ----------------------------
 *
 *   The address of jump target abort_ip must be outside the critical
 *   region, i.e.:
 *
 *     [abort_ip] < [start_ip]  || [abort_ip] >= [post_commit_ip]
 *
 *   Steps [2]-[3] (inclusive) need to be a sequence of instructions in
 *   userspace that can handle being interrupted between any of those
 *   instructions, and then resumed to the abort_ip.
 *
 *   1.  Userspace stores the address of the struct rseq_cs assembly
 *       block descriptor into the rseq_cs field of the registered
 *       struct rseq TLS area. This update is performed through a single
 *       store within the inline assembly instruction sequence.
 *       [start_ip]
 *
 *   2.  Userspace tests to check whether the current cpu_id field match
 *       the cpu number loaded before start_ip, branching to abort_ip
 *       in case of a mismatch.
 *
 *       If the sequence is preempted or interrupted by a signal
 *       at or after start_ip and before post_commit_ip, then the kernel
 *       clears TLS->__rseq_abi::rseq_cs, and sets the user-space return
 *       ip to abort_ip before returning to user-space, so the preempted
 *       execution resumes at abort_ip.
 *
 *   3.  Userspace critical section final instruction before
 *       post_commit_ip is the commit. The critical section is
 *       self-terminating.
 *       [post_commit_ip]
 *
 *   4.  <success>
 *
 *   On failure at [2], or if interrupted by preempt or signal delivery
 *   between [1] and [3]:
 *
 *       [abort_ip]
 *   F1. <failure>
 */

static int rseq_update_cpu_id(struct task_struct *t)
{
	u32 cpu_id = raw_smp_processor_id();

	if (__put_user(cpu_id, &t->rseq->cpu_id_start))
		return -EFAULT;
	if (__put_user(cpu_id, &t->rseq->cpu_id))
		return -EFAULT;
	trace_rseq_update(t);
	return 0;
}

static int rseq_reset_rseq_cpu_id(struct task_struct *t)
{
	u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED;

	/*
	 * Reset cpu_id_start to its initial state (0).
	 */
	if (__put_user(cpu_id_start, &t->rseq->cpu_id_start))
		return -EFAULT;
	/*
	 * Reset cpu_id to RSEQ_CPU_ID_UNINITIALIZED, so any user coming
	 * in after unregistration can figure out that rseq needs to be
	 * registered again.
	 */
	if (__put_user(cpu_id, &t->rseq->cpu_id))
		return -EFAULT;
	return 0;
}

static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
{
	struct rseq_cs __user *urseq_cs;
	unsigned long ptr;
	u32 __user *usig;
	u32 sig;
	int ret;

	ret = __get_user(ptr, &t->rseq->rseq_cs);
	if (ret)
		return ret;
	if (!ptr) {
		memset(rseq_cs, 0, sizeof(*rseq_cs));
		return 0;
	}
	urseq_cs = (struct rseq_cs __user *)ptr;
	if (copy_from_user(rseq_cs, urseq_cs, sizeof(*rseq_cs)))
		return -EFAULT;
	if (rseq_cs->version > 0)
		return -EINVAL;

	/* Ensure that abort_ip is not in the critical section. */
	if (rseq_cs->abort_ip - rseq_cs->start_ip < rseq_cs->post_commit_offset)
		return -EINVAL;

	usig = (u32 __user *)(rseq_cs->abort_ip - sizeof(u32));
	ret = get_user(sig, usig);
	if (ret)
		return ret;

	if (current->rseq_sig != sig) {
		printk_ratelimited(KERN_WARNING
			"Possible attack attempt. Unexpected rseq signature 0x%x, expecting 0x%x (pid=%d, addr=%p).\n",
			sig, current->rseq_sig, current->pid, usig);
		return -EPERM;
	}
	return 0;
}

static int rseq_need_restart(struct task_struct *t, u32 cs_flags)
{
	u32 flags, event_mask;
	int ret;

	/* Get thread flags. */
	ret = __get_user(flags, &t->rseq->flags);
	if (ret)
		return ret;

	/* Take critical section flags into account. */
	flags |= cs_flags;

	/*
	 * Restart on signal can only be inhibited when restart on
	 * preempt and restart on migrate are inhibited too. Otherwise,
	 * a preempted signal handler could fail to restart the prior
	 * execution context on sigreturn.
	 */
	if (unlikely((flags & RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL) &&
		     (flags & RSEQ_CS_PREEMPT_MIGRATE_FLAGS) !=
		     RSEQ_CS_PREEMPT_MIGRATE_FLAGS))
		return -EINVAL;

	/*
	 * Load and clear event mask atomically with respect to
	 * scheduler preemption.
	 */
	preempt_disable();
	event_mask = t->rseq_event_mask;
	t->rseq_event_mask = 0;
	preempt_enable();

	return !!(event_mask & ~flags);
}

static int clear_rseq_cs(struct task_struct *t)
{
	/*
	 * The rseq_cs field is set to NULL on preemption or signal
	 * delivery on top of rseq assembly block, as well as on top
	 * of code outside of the rseq assembly block. This performs
	 * a lazy clear of the rseq_cs field.
	 *
	 * Set rseq_cs to NULL with single-copy atomicity.
	 */
	return __put_user(0UL, &t->rseq->rseq_cs);
}

/*
 * Unsigned comparison will be true when ip >= start_ip, and when
 * ip < start_ip + post_commit_offset.
 */
static bool in_rseq_cs(unsigned long ip, struct rseq_cs *rseq_cs)
{
	return ip - rseq_cs->start_ip < rseq_cs->post_commit_offset;
}

static int rseq_ip_fixup(struct pt_regs *regs)
{
	unsigned long ip = instruction_pointer(regs);
	struct task_struct *t = current;
	struct rseq_cs rseq_cs;
	int ret;

	ret = rseq_get_rseq_cs(t, &rseq_cs);
	if (ret)
		return ret;

	/*
	 * Handle potentially not being within a critical section.
	 * If not nested over a rseq critical section, restart is useless.
	 * Clear the rseq_cs pointer and return.
	 */
	if (!in_rseq_cs(ip, &rseq_cs))
		return clear_rseq_cs(t);
	ret = rseq_need_restart(t, rseq_cs.flags);
	if (ret <= 0)
		return ret;
	ret = clear_rseq_cs(t);
	if (ret)
		return ret;
	trace_rseq_ip_fixup(ip, rseq_cs.start_ip, rseq_cs.post_commit_offset,
			    rseq_cs.abort_ip);
	instruction_pointer_set(regs, (unsigned long)rseq_cs.abort_ip);
	return 0;
}

/*
 * This resume handler must always be executed between any of:
 * - preemption,
 * - signal delivery,
 * and return to user-space.
 *
 * This is how we can ensure that the entire rseq critical section,
 * consisting of both the C part and the assembly instruction sequence,
 * will issue the commit instruction only if executed atomically with
 * respect to other threads scheduled on the same CPU, and with respect
 * to signal handlers.
 */
void __rseq_handle_notify_resume(struct pt_regs *regs)
{
	struct task_struct *t = current;
	int ret;

	if (unlikely(t->flags & PF_EXITING))
		return;
	if (unlikely(!access_ok(VERIFY_WRITE, t->rseq, sizeof(*t->rseq))))
		goto error;
	ret = rseq_ip_fixup(regs);
	if (unlikely(ret < 0))
		goto error;
	if (unlikely(rseq_update_cpu_id(t)))
		goto error;
	return;

error:
	force_sig(SIGSEGV, t);
}

#ifdef CONFIG_DEBUG_RSEQ

/*
 * Terminate the process if a syscall is issued within a restartable
 * sequence.
 */
void rseq_syscall(struct pt_regs *regs)
{
	unsigned long ip = instruction_pointer(regs);
	struct task_struct *t = current;
	struct rseq_cs rseq_cs;

	if (!t->rseq)
		return;
	if (!access_ok(VERIFY_READ, t->rseq, sizeof(*t->rseq)) ||
	    rseq_get_rseq_cs(t, &rseq_cs) || in_rseq_cs(ip, &rseq_cs))
		force_sig(SIGSEGV, t);
}

#endif

/*
 * sys_rseq - setup restartable sequences for caller thread.
 */
SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
		int, flags, u32, sig)
{
	int ret;

	if (flags & RSEQ_FLAG_UNREGISTER) {
		/* Unregister rseq for current thread. */
		if (current->rseq != rseq || !current->rseq)
			return -EINVAL;
		if (current->rseq_len != rseq_len)
			return -EINVAL;
		if (current->rseq_sig != sig)
			return -EPERM;
		ret = rseq_reset_rseq_cpu_id(current);
		if (ret)
			return ret;
		current->rseq = NULL;
		current->rseq_len = 0;
		current->rseq_sig = 0;
		return 0;
	}

	if (unlikely(flags))
		return -EINVAL;

	if (current->rseq) {
		/*
		 * If rseq is already registered, check whether
		 * the provided address differs from the prior
		 * one.
		 */
		if (current->rseq != rseq || current->rseq_len != rseq_len)
			return -EINVAL;
		if (current->rseq_sig != sig)
			return -EPERM;
		/* Already registered. */
		return -EBUSY;
	}

	/*
	 * If there was no rseq previously registered,
	 * ensure the provided rseq is properly aligned and valid.
	 */
	if (!IS_ALIGNED((unsigned long)rseq, __alignof__(*rseq)) ||
	    rseq_len != sizeof(*rseq))
		return -EINVAL;
	if (!access_ok(VERIFY_WRITE, rseq, rseq_len))
		return -EFAULT;
	current->rseq = rseq;
	current->rseq_len = rseq_len;
	current->rseq_sig = sig;
	/*
	 * If rseq was previously inactive, and has just been
	 * registered, ensure the cpu_id_start and cpu_id fields
	 * are updated before returning to user-space.
	 */
	rseq_set_notify_resume(current);

	return 0;
}
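[Editor's illustration: the algorithm comment at the top of kernel/rseq.c
maps directly onto a user-space fast path. Below is a hedged,
self-contained x86-64 sketch of such a sequence, modelled on the style
of the later kernel selftests and not part of this patch. rseq_area is
the thread's registered struct rseq, RSEQ_SIG is the signature passed
at registration, and GCC's asm goto is assumed. Step [1] publishes the
descriptor, step [2] checks the CPU, step [3] is the single committing
store.]

#include <linux/rseq.h>
#include <stdint.h>

#define RSEQ_SIG 0x53053053		/* must match the sig given to sys_rseq() */

extern __thread struct rseq rseq_area;	/* hypothetical; registered elsewhere */

/* Add 'count' to the slot for 'cpu'; returns -1 if aborted (caller retries). */
static int rseq_percpu_add(intptr_t *slot, intptr_t count, int cpu)
{
	__asm__ __volatile__ goto (
		/* struct rseq_cs descriptor: version, flags, start_ip,
		 * post_commit_offset, abort_ip (32 bytes, 32-byte aligned). */
		".pushsection __rseq_cs, \"aw\"\n\t"
		".balign 32\n\t"
		"3:\n\t"
		".long 0x0, 0x0\n\t"
		".quad 1f, 2f - 1f, 4f\n\t"
		".popsection\n\t"
		/* [1] Publish the descriptor in TLS->rseq::rseq_cs. */
		"leaq 3b(%%rip), %%rax\n\t"
		"movq %%rax, %[rseq_cs]\n\t"
		"1:\n\t"
		/* [2] Abort if we are no longer on the CPU we snapshotted. */
		"cmpl %[cpu], %[cpu_id]\n\t"
		"jnz 4f\n\t"
		/* [3] Commit: the last instruction before post_commit_ip. */
		"addq %[count], %[slot]\n\t"
		"2:\n\t"
		/* Abort trampoline; the signature (== RSEQ_SIG) must sit
		 * immediately before abort_ip, as rseq_get_rseq_cs() checks. */
		".pushsection __rseq_failure, \"ax\"\n\t"
		".long 0x53053053\n\t"
		"4:\n\t"
		"jmp %l[abort]\n\t"
		".popsection\n\t"
		: /* no outputs: asm goto; memory clobber covers the store */
		: [cpu_id]  "m" (rseq_area.cpu_id),
		  [rseq_cs] "m" (rseq_area.rseq_cs),
		  [cpu]     "r" (cpu),
		  [slot]    "m" (*slot),
		  [count]   "er" (count)
		: "memory", "cc", "rax"
		: abort);
	return 0;
abort:
	return -1;
}

[A caller would snapshot cpu from the cpu_id_start field, invoke the
helper, and retry (or fall back to locking) whenever it returns -1.]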
kernel/sched/core.c

@@ -1191,6 +1191,7 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 		if (p->sched_class->migrate_task_rq)
 			p->sched_class->migrate_task_rq(p);
 		p->se.nr_migrations++;
+		rseq_migrate(p);
 		perf_event_task_migrate(p);
 	}
 
@@ -2634,6 +2635,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
 {
 	sched_info_switch(rq, prev, next);
 	perf_event_task_sched_out(prev, next);
+	rseq_preempt(prev);
 	fire_sched_out_preempt_notifiers(prev, next);
 	prepare_task(next);
 	prepare_arch_switch(next);
kernel/sys_ni.c

@@ -432,3 +432,6 @@ COND_SYSCALL(setresgid16);
 COND_SYSCALL(setresuid16);
 COND_SYSCALL(setreuid16);
 COND_SYSCALL(setuid16);
+
+/* restartable sequence */
+COND_SYSCALL(rseq);
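[Editor's note: because the syscall is wired up through COND_SYSCALL, a
kernel built with CONFIG_RSEQ=n returns -ENOSYS, while an rseq-capable
kernel rejects this deliberately bogus call with -EINVAL. A hedged probe
sketch; the helper name is hypothetical:]

#include <errno.h>
#include <stdbool.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Probe whether the running kernel implements rseq: a NULL area with
 * rseq_len == 0 never registers, but fails with EINVAL when the syscall
 * exists and with ENOSYS when it was compiled out (COND_SYSCALL above). */
static bool kernel_has_rseq(void)
{
	return syscall(__NR_rseq, NULL, 0, 0, 0) == 0 || errno != ENOSYS;
}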