Bug 1186934 - update jemalloc to upstream HEAD; r=glandium

parent 056852f555
commit 1ef4f45198
@@ -4,24 +4,101 @@ brevity. Much more detail can be found in the git revision history:

    https://github.com/jemalloc/jemalloc

* 4.0.1 (XXX)
* 4.0.4 (October 24, 2015)

  This bugfix release fixes another xallocx() regression. No other regressions
  have come to light in over a month, so this is likely a good starting point
  for people who prefer to wait for "dot one" releases with all the major issues
  shaken out.

  Bug fixes:
  - Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
    allocations that have been randomly assigned an offset of 0 when
    --enable-cache-oblivious configure option is enabled.

* 4.0.3 (September 24, 2015)

  This bugfix release continues the trend of xallocx() and heap profiling fixes.

  Bug fixes:
  - Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large
    allocations when --enable-cache-oblivious configure option is enabled.
  - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations
    when resizing from/to a size class that is not a multiple of the chunk size.
  - Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
    profile dumping started.
  - Work around a potentially bad thread-specific data initialization
    interaction with NPTL (glibc's pthreads implementation).

* 4.0.2 (September 21, 2015)

  This bugfix release addresses a few bugs specific to heap profiling.

  Bug fixes:
  - Fix ixallocx_prof_sample() to never modify nor create sampled small
    allocations. xallocx() is in general incapable of moving small allocations,
    so this fix removes buggy code without loss of generality.
  - Fix irallocx_prof_sample() to always allocate large regions, even when
    alignment is non-zero.
  - Fix prof_alloc_rollback() to read tdata from thread-specific data rather
    than dereferencing a potentially invalid tctx.

* 4.0.1 (September 15, 2015)

  This is a bugfix release that is somewhat high risk due to the amount of
  refactoring required to address deep xallocx() problems. As a side effect of
  these fixes, xallocx() now tries harder to partially fulfill requests for
  optional extra space. Note that a couple of minor heap profiling
  optimizations are included, but these are better thought of as performance
  fixes that were integral to discovering most of the other bugs.

  Optimizations:
  - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the
    fast path when heap profiling is enabled. Additionally, split a special
    case out into arena_prof_tctx_reset(), which also avoids chunk metadata
    reads.
  - Optimize irallocx_prof() to optimistically update the sampler state. The
    prior implementation appears to have been a holdover from when
    rallocx()/xallocx() functionality was combined as rallocm().

  Bug fixes:
  - Fix TLS configuration such that it is enabled by default for platforms on
    which it works correctly.
  - Fix arenas_cache_cleanup() and arena_get_hard() to handle
    allocation/deallocation within the application's thread-specific data
    cleanup functions even after arenas_cache is torn down.
  - Don't bitshift by negative amounts when encoding/decoding run sizes in chunk
    header maps. This affected systems with page sizes greater than 8 KiB.
  - Rename index_t to szind_t to avoid an existing type on Solaris.
  - Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
    match glibc and avoid compilation errors when including both
    jemalloc/jemalloc.h and malloc.h in C++ code.
  - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.
  - Fix chunk purge hook calls for in-place huge shrinking reallocation to
    specify the old chunk size rather than the new chunk size. This bug caused
    no correctness issues for the default chunk purge function, but was
    visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
  - Fix TLS configuration such that it is enabled by default for platforms on
    which it works correctly.
  - Fix heap profiling bugs:
    + Fix heap profiling to distinguish among otherwise identical sample sites
      with interposed resets (triggered via the "prof.reset" mallctl). This bug
      could cause data structure corruption that would most likely result in a
      segfault.
    + Fix irealloc_prof() to prof_alloc_rollback() on OOM.
    + Make one call to prof_active_get_unlocked() per allocation event, and use
      the result throughout the relevant functions that handle an allocation
      event. Also add a missing check in prof_realloc(). These fixes protect
      allocation events against concurrent prof_active changes.
    + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
      in the correct order.
    + Fix prof_realloc() to call prof_free_sampled_object() after calling
      prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were
      the same, the tctx could have been prematurely destroyed.
  - Fix portability bugs:
    + Don't bitshift by negative amounts when encoding/decoding run sizes in
      chunk header maps. This affected systems with page sizes greater than 8
      KiB.
    + Rename index_t to szind_t to avoid an existing type on Solaris.
    + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to
      match glibc and avoid compilation errors when including both
      jemalloc/jemalloc.h and malloc.h in C++ code.
    + Don't assume that /bin/sh is appropriate when running size_classes.sh
      during configuration.
    + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.
    + Link tests to librt if it contains clock_gettime(2).

* 4.0.0 (August 17, 2015)
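For context, a minimal sketch of the public-API call the xallocx() fixes above concern; it assumes jemalloc 4.x's non-prefixed API (mallocx/xallocx/dallocx) is available:

#include <jemalloc/jemalloc.h>
#include <stdio.h>

int
main(void)
{
	/* A large allocation (well above the small size classes). */
	void *p = mallocx(32 * 1024, 0);
	if (p == NULL)
		return (1);
	/*
	 * Attempt an in-place resize to 64 KiB, asking for newly committed
	 * trailing bytes to be zeroed.  The 4.0.2-4.0.4 fixes address cases
	 * where MALLOCX_ZERO left trailing bytes/pages unzeroed.
	 */
	size_t usize = xallocx(p, 64 * 1024, 0, MALLOCX_ZERO);
	printf("usable size after xallocx: %zu\n", usize);
	dallocx(p, 0);
	return (0);
}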
@@ -28,6 +28,7 @@ CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
@@ -265,15 +266,15 @@ $(STATIC_LIBS):

$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
	@mkdir -p $(@D)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
	@mkdir -p $(@D)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(EXTRA_LDFLAGS)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
	@mkdir -p $(@D)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)
	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
@@ -343,22 +344,23 @@ check_unit_dir:
	@mkdir -p $(objroot)test/unit
check_integration_dir:
	@mkdir -p $(objroot)test/integration
check_stress_dir:
stress_dir:
	@mkdir -p $(objroot)test/stress
check_dir: check_unit_dir check_integration_dir check_stress_dir
check_dir: check_unit_dir check_integration_dir

check_unit: tests_unit check_unit_dir
	$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
	$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
	$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
check_integration: tests_integration check_integration_dir
	$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
check_stress: tests_stress check_stress_dir
stress: tests_stress stress_dir
	$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
check: tests check_dir check_integration_prof
	$(SHELL) $(objroot)test/test.sh $(TESTS:$(srcroot)%.c=$(objroot)%)
	$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)

ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit
@@ -372,7 +374,7 @@ coverage_integration: check_integration
	$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src integration $(C_TESTLIB_INTEGRATION_OBJS)
	$(SHELL) $(srcroot)coverage.sh $(srcroot)test/integration integration $(TESTS_INTEGRATION_OBJS)

coverage_stress: check_stress
coverage_stress: stress
	$(SHELL) $(srcroot)coverage.sh $(srcroot)src pic $(C_PIC_OBJS)
	$(SHELL) $(srcroot)coverage.sh $(srcroot)src jet $(C_JET_OBJS)
	$(SHELL) $(srcroot)coverage.sh $(srcroot)test/src stress $(C_TESTLIB_STRESS_OBJS)
@@ -1 +1 @@
4.0.0-12-ged4883285e111b426e5769b24dad164ebacaa5b9
4.0.4-12-g3a92319ddc5610b755f755cbbbd12791ca9d0c3d
@@ -1160,8 +1160,21 @@ sub PrintSymbolizedProfile {
  }
  print '---', "\n";

  $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
  my $profile_marker = $&;
  my $profile_marker;
  if ($main::profile_type eq 'heap') {
    $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
    $profile_marker = $&;
  } elsif ($main::profile_type eq 'growth') {
    $GROWTH_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
    $profile_marker = $&;
  } elsif ($main::profile_type eq 'contention') {
    $CONTENTION_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
    $profile_marker = $&;
  } else { # elsif ($main::profile_type eq 'cpu')
    $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
    $profile_marker = $&;
  }

  print '--- ', $profile_marker, "\n";
  if (defined($main::collected_profile)) {
    # if used with remote fetch, simply dump the collected profile to output.
@@ -1171,6 +1184,12 @@ sub PrintSymbolizedProfile {
    }
    close(SRC);
  } else {
    # --raw/http: For everything to work correctly for non-remote profiles, we
    # would need to extend PrintProfileData() to handle all possible profile
    # types, re-enable the code that is currently disabled in ReadCPUProfile()
    # and FixCallerAddresses(), and remove the remote profile dumping code in
    # the block above.
    die "--raw/http: jeprof can only dump remote profiles for --raw\n";
    # dump a cpu-format profile to standard out
    PrintProfileData($profile);
  }
@@ -3427,12 +3446,22 @@ sub FetchDynamicProfile {
    }
    $url .= sprintf("seconds=%d", $main::opt_seconds);
    $fetch_timeout = $main::opt_seconds * 1.01 + 60;
    # Set $profile_type for consumption by PrintSymbolizedProfile.
    $main::profile_type = 'cpu';
  } else {
    # For non-CPU profiles, we add a type-extension to
    # the target profile file name.
    my $suffix = $path;
    $suffix =~ s,/,.,g;
    $profile_file .= $suffix;
    # Set $profile_type for consumption by PrintSymbolizedProfile.
    if ($path =~ m/$HEAP_PAGE/) {
      $main::profile_type = 'heap';
    } elsif ($path =~ m/$GROWTH_PAGE/) {
      $main::profile_type = 'growth';
    } elsif ($path =~ m/$CONTENTION_PAGE/) {
      $main::profile_type = 'contention';
    }
  }

  my $profile_dir = $ENV{"JEPROF_TMPDIR"} || ($ENV{HOME} . "/jeprof");
@@ -3730,6 +3759,8 @@ sub ReadProfile {
  my $symbol_marker = $&;
  $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
  my $profile_marker = $&;
  $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
  my $heap_marker = $&;

  # Look at first line to see if it is a heap or a CPU profile.
  # CPU profile may start with no header at all, and just binary data
@@ -3756,7 +3787,13 @@ sub ReadProfile {
    $header = ReadProfileHeader(*PROFILE) || "";
  }

  if ($header =~ m/^--- *($heap_marker|$growth_marker)/o) {
    # Skip "--- ..." line for profile types that have their own headers.
    $header = ReadProfileHeader(*PROFILE) || "";
  }

  $main::profile_type = '';

  if ($header =~ m/^heap profile:.*$growth_marker/o) {
    $main::profile_type = 'growth';
    $result = ReadHeapProfile($prog, *PROFILE, $header);
@@ -3808,9 +3845,9 @@ sub ReadProfile {
# independent implementation.
sub FixCallerAddresses {
  my $stack = shift;
  if ($main::use_symbolized_profile) {
    return $stack;
  } else {
  # --raw/http: Always subtract one from pc's, because PrintSymbolizedProfile()
  # dumps unadjusted profiles.
  {
    $stack =~ /(\s)/;
    my $delimiter = $1;
    my @addrs = split(' ', $stack);
@@ -3878,12 +3915,7 @@ sub ReadCPUProfile {
    for (my $j = 0; $j < $d; $j++) {
      my $pc = $slots->get($i+$j);
      # Subtract one from caller pc so we map back to call instr.
      # However, don't do this if we're reading a symbolized profile
      # file, in which case the subtract-one was done when the file
      # was written.
      if ($j > 0 && !$main::use_symbolized_profile) {
        $pc--;
      }
      $pc--;
      $pc = sprintf("%0*x", $address_length, $pc);
      $pcs->{$pc} = 1;
      push @k, $pc;
memory/jemalloc/src/configure (vendored; 140 lines changed)
@@ -628,6 +628,7 @@ cfghdrs_in
enable_zone_allocator
enable_tls
enable_lazy_lock
TESTLIBS
jemalloc_version_gid
jemalloc_version_nrev
jemalloc_version_bugfix
@@ -728,7 +729,6 @@ infodir
docdir
oldincludedir
includedir
runstatedir
localstatedir
sharedstatedir
sysconfdir
@@ -832,7 +832,6 @@ datadir='${datarootdir}'
sysconfdir='${prefix}/etc'
sharedstatedir='${prefix}/com'
localstatedir='${prefix}/var'
runstatedir='${localstatedir}/run'
includedir='${prefix}/include'
oldincludedir='/usr/include'
docdir='${datarootdir}/doc/${PACKAGE}'
@@ -1085,15 +1084,6 @@ do
  | -silent | --silent | --silen | --sile | --sil)
    silent=yes ;;

  -runstatedir | --runstatedir | --runstatedi | --runstated \
  | --runstate | --runstat | --runsta | --runst | --runs \
  | --run | --ru | --r)
    ac_prev=runstatedir ;;
  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
  | --run=* | --ru=* | --r=*)
    runstatedir=$ac_optarg ;;

  -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
    ac_prev=sbindir ;;
  -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1231,7 +1221,7 @@ fi
for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \
    datadir sysconfdir sharedstatedir localstatedir includedir \
    oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
    libdir localedir mandir runstatedir
    libdir localedir mandir
do
  eval ac_val=\$$ac_var
  # Remove trailing slashes.
@@ -1384,7 +1374,6 @@ Fine tuning of the installation directories:
  --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
  --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
  --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
  --libdir=DIR            object code libraries [EPREFIX/lib]
  --includedir=DIR        C header files [PREFIX/include]
  --oldincludedir=DIR     C header files for non-gcc [/usr/include]
@@ -2495,6 +2484,36 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu



ac_aux_dir=
for ac_dir in build-aux "$srcdir"/build-aux; do
  if test -f "$ac_dir/install-sh"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/install-sh -c"
    break
  elif test -f "$ac_dir/install.sh"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/install.sh -c"
    break
  elif test -f "$ac_dir/shtool"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/shtool install -c"
    break
  fi
done
if test -z "$ac_aux_dir"; then
  as_fn_error $? "cannot find install-sh, install.sh, or shtool in build-aux \"$srcdir\"/build-aux" "$LINENO" 5
fi

# These three variables are undocumented and unsupported,
# and are intended to be withdrawn in a future Autoconf release.
# They can cause serious problems if a builder's source tree is in a directory
# whose full name contains unusual characters.
ac_config_guess="$SHELL $ac_aux_dir/config.guess"  # Please don't use this var.
ac_config_sub="$SHELL $ac_aux_dir/config.sub"  # Please don't use this var.
ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.



@@ -4624,35 +4643,6 @@ cat >>confdefs.h <<_ACEOF
_ACEOF


ac_aux_dir=
for ac_dir in "$srcdir" "$srcdir/.." "$srcdir/../.."; do
  if test -f "$ac_dir/install-sh"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/install-sh -c"
    break
  elif test -f "$ac_dir/install.sh"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/install.sh -c"
    break
  elif test -f "$ac_dir/shtool"; then
    ac_aux_dir=$ac_dir
    ac_install_sh="$ac_aux_dir/shtool install -c"
    break
  fi
done
if test -z "$ac_aux_dir"; then
  as_fn_error $? "cannot find install-sh, install.sh, or shtool in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" "$LINENO" 5
fi

# These three variables are undocumented and unsupported,
# and are intended to be withdrawn in a future Autoconf release.
# They can cause serious problems if a builder's source tree is in a directory
# whose full name contains unusual characters.
ac_config_guess="$SHELL $ac_aux_dir/config.guess"  # Please don't use this var.
ac_config_sub="$SHELL $ac_aux_dir/config.sub"  # Please don't use this var.
ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.


# Make sure we can run config.sub.
$SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 ||
  as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5
@@ -7135,6 +7125,67 @@ fi

CPPFLAGS="$CPPFLAGS -D_REENTRANT"

SAVED_LIBS="${LIBS}"
LIBS=
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime" >&5
$as_echo_n "checking for library containing clock_gettime... " >&6; }
if ${ac_cv_search_clock_gettime+:} false; then :
  $as_echo_n "(cached) " >&6
else
  ac_func_search_save_LIBS=$LIBS
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */

/* Override any GCC internal prototype to avoid an error.
   Use char because int might match the return type of a GCC
   builtin and then its argument prototype would still apply.  */
#ifdef __cplusplus
extern "C"
#endif
char clock_gettime ();
int
main ()
{
return clock_gettime ();
  ;
  return 0;
}
_ACEOF
for ac_lib in '' rt; do
  if test -z "$ac_lib"; then
    ac_res="none required"
  else
    ac_res=-l$ac_lib
    LIBS="-l$ac_lib $ac_func_search_save_LIBS"
  fi
  if ac_fn_c_try_link "$LINENO"; then :
  ac_cv_search_clock_gettime=$ac_res
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext
  if ${ac_cv_search_clock_gettime+:} false; then :
  break
fi
done
if ${ac_cv_search_clock_gettime+:} false; then :

else
  ac_cv_search_clock_gettime=no
fi
rm conftest.$ac_ext
LIBS=$ac_func_search_save_LIBS
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime" >&5
$as_echo "$ac_cv_search_clock_gettime" >&6; }
ac_res=$ac_cv_search_clock_gettime
if test "$ac_res" != no; then :
  test "$ac_res" = "none required" || LIBS="$ac_res $LIBS"
  TESTLIBS="${LIBS}"
fi


LIBS="${SAVED_LIBS}"
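This probe exists because the test code calls clock_gettime(), which glibc shipped in librt until version 2.17; the test binaries therefore need $(TESTLIBS) (typically -lrt) at link time. A minimal sketch of the call the probe links against:

#include <stdio.h>
#include <time.h>

int
main(void)
{
	struct timespec ts;

	/* On older glibc this symbol resolves from librt (-lrt). */
	if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
		return (1);
	printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
	return (0);
}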

ac_fn_c_check_func "$LINENO" "secure_getenv" "ac_cv_func_secure_getenv"
if test "x$ac_cv_func_secure_getenv" = xyes; then :
  have_secure_getenv="1"
@@ -8859,6 +8910,7 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
  objroot="${objroot}"


  SHELL="${SHELL}"
  srcdir="${srcdir}"
  objroot="${objroot}"
  LG_QUANTA="${LG_QUANTA}"
@@ -9502,7 +9554,7 @@ $as_echo "$as_me: executing $ac_file commands" >&6;}
 ;;
    "include/jemalloc/internal/size_classes.h":C)
  mkdir -p "${objroot}include/jemalloc/internal"
  "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
  "${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
 ;;
    "include/jemalloc/jemalloc_protos_jet.h":C)
  mkdir -p "${objroot}include/jemalloc"
@@ -9585,6 +9637,8 @@ $as_echo "LDFLAGS : ${LDFLAGS}" >&6; }
$as_echo "EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: LIBS : ${LIBS}" >&5
$as_echo "LIBS : ${LIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: TESTLIBS : ${TESTLIBS}" >&5
$as_echo "TESTLIBS : ${TESTLIBS}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: RPATH_EXTRA : ${RPATH_EXTRA}" >&5
$as_echo "RPATH_EXTRA : ${RPATH_EXTRA}" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: " >&5
@@ -1,6 +1,8 @@
dnl Process this file with autoconf to produce a configure script.
AC_INIT([Makefile.in])

AC_CONFIG_AUX_DIR([build-aux])

dnl ============================================================================
dnl Custom macro definitions.

@@ -1190,6 +1192,14 @@ fi

CPPFLAGS="$CPPFLAGS -D_REENTRANT"

dnl Check whether clock_gettime(2) is in libc or librt.  This function is only
dnl used in test code, so save the result to TESTLIBS to avoid polluting LIBS.
SAVED_LIBS="${LIBS}"
LIBS=
AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
AC_SUBST([TESTLIBS])
LIBS="${SAVED_LIBS}"

dnl Check if the GNU-specific secure_getenv function exists.
AC_CHECK_FUNC([secure_getenv],
  [have_secure_getenv="1"],
@@ -1621,8 +1631,9 @@ AC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [
])
AC_CONFIG_COMMANDS([include/jemalloc/internal/size_classes.h], [
  mkdir -p "${objroot}include/jemalloc/internal"
  "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
  "${SHELL}" "${srcdir}/include/jemalloc/internal/size_classes.sh" "${LG_QUANTA}" ${LG_TINY_MIN} "${LG_PAGE_SIZES}" ${LG_SIZE_CLASS_GROUP} > "${objroot}include/jemalloc/internal/size_classes.h"
], [
  SHELL="${SHELL}"
  srcdir="${srcdir}"
  objroot="${objroot}"
  LG_QUANTA="${LG_QUANTA}"
@@ -1693,6 +1704,7 @@ AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
AC_MSG_RESULT([LIBS : ${LIBS}])
AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
AC_MSG_RESULT([])
AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
@@ -1418,8 +1418,8 @@ malloc_conf = "xmalloc:true";]]></programlisting>
    can cause asynchronous string deallocation.  Furthermore, each
    invocation of this interface can only read or write; simultaneous
    read/write is not supported due to string lifetime limitations.  The
    name string must nil-terminated and comprised only of characters in the
    sets recognized
    name string must be nil-terminated and comprised only of characters in
    the sets recognized
    by <citerefentry><refentrytitle>isgraph</refentrytitle>
    <manvolnum>3</manvolnum></citerefentry> and
    <citerefentry><refentrytitle>isblank</refentrytitle>
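A short example of the mallctl() name strings this documentation passage constrains; it uses the public jemalloc API and the standard "opt.junk" option:

#include <jemalloc/jemalloc.h>
#include <stdio.h>

int
main(void)
{
	const char *junk;
	size_t len = sizeof(junk);

	/* The nil-terminated name selects a node in the mallctl tree. */
	if (mallctl("opt.junk", &junk, &len, NULL, 0) == 0)
		printf("opt.junk: %s\n", junk);
	return (0);
}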
@@ -424,7 +424,7 @@ extern arena_bin_info_t arena_bin_info[NBINS];
extern size_t	map_bias; /* Number of arena chunk header pages. */
extern size_t	map_misc_offset;
extern size_t	arena_maxrun; /* Max run size for arenas. */
extern size_t	arena_maxclass; /* Max size class for arenas. */
extern size_t	large_maxclass; /* Max large size class. */
extern unsigned	nlclasses; /* Number of large size classes. */
extern unsigned	nhclasses; /* Number of huge size classes. */

@@ -461,8 +461,10 @@ extern arena_dalloc_junk_small_t *arena_dalloc_junk_small;
void	arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info);
#endif
void	arena_quarantine_junk_small(void *ptr, size_t usize);
void	*arena_malloc_small(arena_t *arena, size_t size, bool zero);
void	*arena_malloc_large(arena_t *arena, size_t size, bool zero);
void	*arena_malloc_small(arena_t *arena, size_t size, szind_t ind,
    bool zero);
void	*arena_malloc_large(arena_t *arena, size_t size, szind_t ind,
    bool zero);
void	*arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize,
    size_t alignment, bool zero, tcache_t *tcache);
void	arena_prof_promoted(const void *ptr, size_t size);
@@ -488,7 +490,7 @@ extern arena_ralloc_junk_large_t *arena_ralloc_junk_large;
bool	arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
    size_t extra, bool zero);
void	*arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
    size_t size, size_t extra, size_t alignment, bool zero, tcache_t *tcache);
    size_t size, size_t alignment, bool zero, tcache_t *tcache);
dss_prec_t	arena_dss_prec_get(arena_t *arena);
bool	arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);
ssize_t	arena_lg_dirty_mult_default_get(void);
@@ -512,7 +514,7 @@ arena_chunk_map_bits_t *arena_bitselm_get(arena_chunk_t *chunk,
    size_t pageind);
arena_chunk_map_misc_t	*arena_miscelm_get(arena_chunk_t *chunk,
    size_t pageind);
size_t	arena_miscelm_to_pageind(arena_chunk_map_misc_t *miscelm);
size_t	arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm);
void	*arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm);
arena_chunk_map_misc_t	*arena_rd_to_miscelm(arena_runs_dirty_link_t *rd);
arena_chunk_map_misc_t	*arena_run_to_miscelm(arena_run_t *run);
@@ -556,11 +558,13 @@ unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info,
    const void *ptr);
prof_tctx_t	*arena_prof_tctx_get(const void *ptr);
void	arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
void	*arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
    tcache_t *tcache);
void	arena_prof_tctx_reset(const void *ptr, size_t usize,
    const void *old_ptr, prof_tctx_t *old_tctx);
void	*arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind,
    bool zero, tcache_t *tcache, bool slow_path);
arena_t	*arena_aalloc(const void *ptr);
size_t	arena_salloc(const void *ptr, bool demote);
void	arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
void	arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path);
void	arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
#endif

@@ -588,7 +592,7 @@ arena_miscelm_get(arena_chunk_t *chunk, size_t pageind)
}

JEMALLOC_ALWAYS_INLINE size_t
arena_miscelm_to_pageind(arena_chunk_map_misc_t *miscelm)
arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm)
{
	arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm);
	size_t pageind = ((uintptr_t)miscelm - ((uintptr_t)chunk +
@@ -1105,8 +1109,8 @@ arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)

	assert(arena_mapbits_allocated_get(chunk, pageind) != 0);

	if (unlikely(usize > SMALL_MAXCLASS || tctx >
	    (prof_tctx_t *)(uintptr_t)1U)) {
	if (unlikely(usize > SMALL_MAXCLASS || (uintptr_t)tctx >
	    (uintptr_t)1U)) {
		arena_chunk_map_misc_t *elm;

		assert(arena_mapbits_large_get(chunk, pageind) != 0);
@@ -1126,35 +1130,64 @@ arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
		huge_prof_tctx_set(ptr, tctx);
}

JEMALLOC_INLINE void
arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
    prof_tctx_t *old_tctx)
{

	cassert(config_prof);
	assert(ptr != NULL);

	if (unlikely(usize > SMALL_MAXCLASS || (ptr == old_ptr &&
	    (uintptr_t)old_tctx > (uintptr_t)1U))) {
		arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
		if (likely(chunk != ptr)) {
			size_t pageind;
			arena_chunk_map_misc_t *elm;

			pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >>
			    LG_PAGE;
			assert(arena_mapbits_allocated_get(chunk, pageind) !=
			    0);
			assert(arena_mapbits_large_get(chunk, pageind) != 0);

			elm = arena_miscelm_get(chunk, pageind);
			atomic_write_p(&elm->prof_tctx_pun,
			    (prof_tctx_t *)(uintptr_t)1U);
		} else
			huge_prof_tctx_reset(ptr);
	}
}

JEMALLOC_ALWAYS_INLINE void *
arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
    tcache_t *tcache)
arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind,
    bool zero, tcache_t *tcache, bool slow_path)
{

	assert(size != 0);

	if (likely(tcache != NULL)) {
		if (likely(size <= SMALL_MAXCLASS)) {
			return (tcache_alloc_small(tsd, arena, tcache, size,
			    ind, zero, slow_path));
		}
		if (likely(size <= tcache_maxclass)) {
			return (tcache_alloc_large(tsd, arena, tcache, size,
			    ind, zero, slow_path));
		}
		/* (size > tcache_maxclass) case falls through. */
		assert(size > tcache_maxclass);
	}

	arena = arena_choose(tsd, arena);
	if (unlikely(arena == NULL))
		return (NULL);

	if (likely(size <= SMALL_MAXCLASS)) {
		if (likely(tcache != NULL)) {
			return (tcache_alloc_small(tsd, arena, tcache, size,
			    zero));
		} else
			return (arena_malloc_small(arena, size, zero));
	} else if (likely(size <= arena_maxclass)) {
		/*
		 * Initialize tcache after checking size in order to avoid
		 * infinite recursion during tcache initialization.
		 */
		if (likely(tcache != NULL) && size <= tcache_maxclass) {
			return (tcache_alloc_large(tsd, arena, tcache, size,
			    zero));
		} else
			return (arena_malloc_large(arena, size, zero));
	} else
		return (huge_malloc(tsd, arena, size, zero, tcache));
	if (likely(size <= SMALL_MAXCLASS))
		return (arena_malloc_small(arena, size, ind, zero));
	if (likely(size <= large_maxclass))
		return (arena_malloc_large(arena, size, ind, zero));
	return (huge_malloc(tsd, arena, size, zero, tcache));
}
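A sketch (not actual jemalloc code) of what the new arena_malloc() signature asks of callers: the size class index is computed once with size2index() and threaded through, and the slow_path flag lets the compiler specialize the common case away from the junk-fill/quarantine path:

static void *
example_alloc(tsd_t *tsd, size_t size)
{
	/* Precompute the size class index once, at the outermost call. */
	szind_t ind = size2index(size);

	return (arena_malloc(tsd, NULL, size, ind, false,
	    tcache_get(tsd, true), true));
}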

JEMALLOC_ALWAYS_INLINE arena_t *
@@ -1220,7 +1253,7 @@ arena_salloc(const void *ptr, bool demote)
}

JEMALLOC_ALWAYS_INLINE void
arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
{
	arena_chunk_t *chunk;
	size_t pageind, mapbits;
@@ -1237,7 +1270,8 @@ arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
		if (likely(tcache != NULL)) {
			szind_t binind = arena_ptr_small_binind_get(ptr,
			    mapbits);
			tcache_dalloc_small(tsd, tcache, ptr, binind);
			tcache_dalloc_small(tsd, tcache, ptr, binind,
			    slow_path);
		} else {
			arena_dalloc_small(extent_node_arena_get(
			    &chunk->node), chunk, ptr, pageind);
@@ -1252,7 +1286,7 @@ arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
		if (likely(tcache != NULL) && size - large_pad <=
		    tcache_maxclass) {
			tcache_dalloc_large(tsd, tcache, ptr, size -
			    large_pad);
			    large_pad, slow_path);
		} else {
			arena_dalloc_large(extent_node_arena_get(
			    &chunk->node), chunk, ptr);
@@ -1288,7 +1322,7 @@ arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
		/* Small allocation. */
		if (likely(tcache != NULL)) {
			szind_t binind = size2index(size);
			tcache_dalloc_small(tsd, tcache, ptr, binind);
			tcache_dalloc_small(tsd, tcache, ptr, binind, true);
		} else {
			size_t pageind = ((uintptr_t)ptr -
			    (uintptr_t)chunk) >> LG_PAGE;
@@ -1300,7 +1334,7 @@ arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
		    PAGE_MASK) == 0);

		if (likely(tcache != NULL) && size <= tcache_maxclass)
			tcache_dalloc_large(tsd, tcache, ptr, size);
			tcache_dalloc_large(tsd, tcache, ptr, size, true);
		else {
			arena_dalloc_large(extent_node_arena_get(
			    &chunk->node), chunk, ptr);
memory/jemalloc/src/include/jemalloc/internal/assert.h (new file; 45 lines)
@@ -0,0 +1,45 @@
/*
 * Define a custom assert() in order to reduce the chances of deadlock during
 * assertion failure.
 */
#ifndef assert
#define	assert(e) do { \
	if (unlikely(config_debug && !(e))) { \
		malloc_printf( \
		    "<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
		    __FILE__, __LINE__, #e); \
		abort(); \
	} \
} while (0)
#endif

#ifndef not_reached
#define	not_reached() do { \
	if (config_debug) { \
		malloc_printf( \
		    "<jemalloc>: %s:%d: Unreachable code reached\n", \
		    __FILE__, __LINE__); \
		abort(); \
	} \
	unreachable(); \
} while (0)
#endif

#ifndef not_implemented
#define	not_implemented() do { \
	if (config_debug) { \
		malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
		    __FILE__, __LINE__); \
		abort(); \
	} \
} while (0)
#endif

#ifndef assert_not_implemented
#define	assert_not_implemented(e) do { \
	if (unlikely(config_debug && !(e))) \
		not_implemented(); \
} while (0)
#endif
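A hypothetical caller showing how the new internal macros are meant to be used: assert() fires only when config_debug is set, and not_reached() marks impossible switch arms, aborting in debug builds and hinting unreachability otherwise:

static int
state_to_code(int state)
{

	assert(state >= 0 && state <= 2);
	switch (state) {
	case 0:	return (10);
	case 1:	return (11);
	case 2:	return (12);
	default:
		not_reached();
	}
}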
@@ -13,11 +13,10 @@ void *huge_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
    tcache_t *tcache);
void	*huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
    bool zero, tcache_t *tcache);
bool	huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
    size_t extra, bool zero);
bool	huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
    size_t usize_max, bool zero);
void	*huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
    size_t size, size_t extra, size_t alignment, bool zero,
    tcache_t *tcache);
    size_t usize, size_t alignment, bool zero, tcache_t *tcache);
#ifdef JEMALLOC_JET
typedef void (huge_dalloc_junk_t)(void *, size_t);
extern huge_dalloc_junk_t *huge_dalloc_junk;
@@ -27,6 +26,7 @@ arena_t *huge_aalloc(const void *ptr);
size_t	huge_salloc(const void *ptr);
prof_tctx_t	*huge_prof_tctx_get(const void *ptr);
void	huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
void	huge_prof_tctx_reset(const void *ptr);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
@@ -232,7 +232,7 @@ typedef unsigned szind_t;
#  ifdef __alpha__
#    define LG_QUANTUM		4
#  endif
#  ifdef __sparc64__
#  if (defined(__sparc64__) || defined(__sparcv9))
#    define LG_QUANTUM		4
#  endif
#  if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64))
@@ -317,6 +317,10 @@ typedef unsigned szind_t;
#define	PAGE		((size_t)(1U << LG_PAGE))
#define	PAGE_MASK	((size_t)(PAGE - 1))

/* Return the page base address for the page containing address a. */
#define	PAGE_ADDR2BASE(a) \
	((void *)((uintptr_t)(a) & ~PAGE_MASK))

/* Return the smallest pagesize multiple that is >= s. */
#define	PAGE_CEILING(s) \
	(((s) + PAGE_MASK) & ~PAGE_MASK)
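A worked example of the two new page macros, assuming 4 KiB pages (LG_PAGE == 12, so PAGE == 4096 and PAGE_MASK == 0xfff):

	void *a = (void *)0x12345678;
	void *base = PAGE_ADDR2BASE(a);	/* 0x12345000: low 12 bits masked off */
	size_t s = PAGE_CEILING(5000);	/* 8192: rounded up to a page multiple */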
@@ -433,7 +437,7 @@ extern unsigned ncpus;
 * index2size_tab encodes the same information as could be computed (at
 * unacceptable cost in some code paths) by index2size_compute().
 */
extern size_t const	index2size_tab[NSIZES];
extern size_t const	index2size_tab[NSIZES+1];
/*
 * size2index_tab is a compact lookup table that rounds request sizes up to
 * size classes.  In order to reduce cache footprint, the table is compressed,
@@ -620,7 +624,7 @@ JEMALLOC_ALWAYS_INLINE size_t
index2size(szind_t index)
{

	assert(index < NSIZES);
	assert(index <= NSIZES);
	return (index2size_lookup(index));
}

@@ -705,7 +709,7 @@ sa2u(size_t size, size_t alignment)
	}

	/* Try for a large size class. */
	if (likely(size <= arena_maxclass) && likely(alignment < chunksize)) {
	if (likely(size <= large_maxclass) && likely(alignment < chunksize)) {
		/*
		 * We can't achieve subpage alignment, so round up alignment
		 * to the minimum that can actually be supported.
@@ -819,12 +823,14 @@ arena_get(tsd_t *tsd, unsigned ind, bool init_if_missing,
#ifndef JEMALLOC_ENABLE_INLINE
arena_t	*iaalloc(const void *ptr);
size_t	isalloc(const void *ptr, bool demote);
void	*iallocztm(tsd_t *tsd, size_t size, bool zero, tcache_t *tcache,
    bool is_metadata, arena_t *arena);
void	*imalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena);
void	*imalloc(tsd_t *tsd, size_t size);
void	*icalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena);
void	*icalloc(tsd_t *tsd, size_t size);
void	*iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero,
    tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path);
void	*imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache,
    arena_t *arena);
void	*imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path);
void	*icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache,
    arena_t *arena);
void	*icalloc(tsd_t *tsd, size_t size, szind_t ind);
void	*ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, bool is_metadata, arena_t *arena);
void	*ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
@@ -833,10 +839,11 @@ void *ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero);
size_t	ivsalloc(const void *ptr, bool demote);
size_t	u2rz(size_t usize);
size_t	p2rz(const void *ptr);
void	idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata);
void	idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata,
    bool slow_path);
void	idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache);
void	idalloc(tsd_t *tsd, void *ptr);
void	iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
void	iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path);
void	isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
void	isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
void	*iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
@@ -877,14 +884,14 @@ isalloc(const void *ptr, bool demote)
}

JEMALLOC_ALWAYS_INLINE void *
iallocztm(tsd_t *tsd, size_t size, bool zero, tcache_t *tcache, bool is_metadata,
    arena_t *arena)
iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero, tcache_t *tcache,
    bool is_metadata, arena_t *arena, bool slow_path)
{
	void *ret;

	assert(size != 0);

	ret = arena_malloc(tsd, arena, size, zero, tcache);
	ret = arena_malloc(tsd, arena, size, ind, zero, tcache, slow_path);
	if (config_stats && is_metadata && likely(ret != NULL)) {
		arena_metadata_allocated_add(iaalloc(ret), isalloc(ret,
		    config_prof));
@@ -893,31 +900,33 @@ iallocztm(tsd_t *tsd, size_t size, bool zero, tcache_t *tcache, bool is_metadata
}

JEMALLOC_ALWAYS_INLINE void *
imalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena)
imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena)
{

	return (iallocztm(tsd, size, false, tcache, false, arena));
	return (iallocztm(tsd, size, ind, false, tcache, false, arena, true));
}

JEMALLOC_ALWAYS_INLINE void *
imalloc(tsd_t *tsd, size_t size)
imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path)
{

	return (iallocztm(tsd, size, false, tcache_get(tsd, true), false, NULL));
	return (iallocztm(tsd, size, ind, false, tcache_get(tsd, true), false,
	    NULL, slow_path));
}

JEMALLOC_ALWAYS_INLINE void *
icalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena)
icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena)
{

	return (iallocztm(tsd, size, true, tcache, false, arena));
	return (iallocztm(tsd, size, ind, true, tcache, false, arena, true));
}

JEMALLOC_ALWAYS_INLINE void *
icalloc(tsd_t *tsd, size_t size)
icalloc(tsd_t *tsd, size_t size, szind_t ind)
{

	return (iallocztm(tsd, size, true, tcache_get(tsd, true), false, NULL));
	return (iallocztm(tsd, size, ind, true, tcache_get(tsd, true), false,
	    NULL, true));
}

JEMALLOC_ALWAYS_INLINE void *
@@ -993,7 +1002,8 @@ p2rz(const void *ptr)
}

JEMALLOC_ALWAYS_INLINE void
idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata)
idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata,
    bool slow_path)
{

	assert(ptr != NULL);
@@ -1002,31 +1012,31 @@ idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata)
	    config_prof));
	}

	arena_dalloc(tsd, ptr, tcache);
	arena_dalloc(tsd, ptr, tcache, slow_path);
}

JEMALLOC_ALWAYS_INLINE void
idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache)
{

	idalloctm(tsd, ptr, tcache, false);
	idalloctm(tsd, ptr, tcache, false, true);
}

JEMALLOC_ALWAYS_INLINE void
idalloc(tsd_t *tsd, void *ptr)
{

	idalloctm(tsd, ptr, tcache_get(tsd, false), false);
	idalloctm(tsd, ptr, tcache_get(tsd, false), false, true);
}

JEMALLOC_ALWAYS_INLINE void
iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
{

	if (config_fill && unlikely(opt_quarantine))
	if (slow_path && config_fill && unlikely(opt_quarantine))
		quarantine(tsd, ptr);
	else
		idalloctm(tsd, ptr, tcache, false);
		idalloctm(tsd, ptr, tcache, false, slow_path);
}

JEMALLOC_ALWAYS_INLINE void
@@ -1096,7 +1106,7 @@ iralloct(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment,
	    zero, tcache, arena));
	}

	return (arena_ralloc(tsd, arena, ptr, oldsize, size, 0, alignment, zero,
	return (arena_ralloc(tsd, arena, ptr, oldsize, size, alignment, zero,
	    tcache));
}
|
@ -58,7 +58,6 @@ arena_mapbits_unallocated_set
|
||||
arena_mapbits_unallocated_size_get
|
||||
arena_mapbits_unallocated_size_set
|
||||
arena_mapbits_unzeroed_get
|
||||
arena_maxclass
|
||||
arena_maxrun
|
||||
arena_maybe_purge
|
||||
arena_metadata_allocated_add
|
||||
@ -81,6 +80,7 @@ arena_prof_accum_impl
|
||||
arena_prof_accum_locked
|
||||
arena_prof_promoted
|
||||
arena_prof_tctx_get
|
||||
arena_prof_tctx_reset
|
||||
arena_prof_tctx_set
|
||||
arena_ptr_small_binind_get
|
||||
arena_purge_all
|
||||
@ -251,6 +251,7 @@ huge_dalloc_junk
|
||||
huge_malloc
|
||||
huge_palloc
|
||||
huge_prof_tctx_get
|
||||
huge_prof_tctx_reset
|
||||
huge_prof_tctx_set
|
||||
huge_ralloc
|
||||
huge_ralloc_no_move
|
||||
@ -285,6 +286,7 @@ ixalloc
|
||||
jemalloc_postfork_child
|
||||
jemalloc_postfork_parent
|
||||
jemalloc_prefork
|
||||
large_maxclass
|
||||
lg_floor
|
||||
malloc_cprintf
|
||||
malloc_mutex_init
|
||||
@ -379,6 +381,7 @@ prof_reset
|
||||
prof_sample_accum_update
|
||||
prof_sample_threshold_update
|
||||
prof_tctx_get
|
||||
prof_tctx_reset
|
||||
prof_tctx_set
|
||||
prof_tdata_cleanup
|
||||
prof_tdata_get
|
||||
|
@ -90,10 +90,11 @@ struct prof_tctx_s {
|
||||
prof_tdata_t *tdata;
|
||||
|
||||
/*
|
||||
* Copy of tdata->thr_uid, necessary because tdata may be defunct during
|
||||
* teardown.
|
||||
* Copy of tdata->thr_{uid,discrim}, necessary because tdata may be
|
||||
* defunct during teardown.
|
||||
*/
|
||||
uint64_t thr_uid;
|
||||
uint64_t thr_discrim;
|
||||
|
||||
/* Profiling counters, protected by tdata->lock. */
|
||||
prof_cnt_t cnts;
|
||||
@ -330,14 +331,18 @@ bool prof_gdump_get_unlocked(void);
|
||||
prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create);
|
||||
bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit,
|
||||
prof_tdata_t **tdata_out);
|
||||
prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool update);
|
||||
prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active,
|
||||
bool update);
|
||||
prof_tctx_t *prof_tctx_get(const void *ptr);
|
||||
void prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
|
||||
void prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
|
||||
prof_tctx_t *tctx);
|
||||
void prof_malloc_sample_object(const void *ptr, size_t usize,
|
||||
prof_tctx_t *tctx);
|
||||
void prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx);
|
||||
void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize,
|
||||
prof_tctx_t *tctx, bool updated, size_t old_usize, prof_tctx_t *old_tctx);
|
||||
prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr,
|
||||
size_t old_usize, prof_tctx_t *old_tctx);
|
||||
void prof_free(tsd_t *tsd, const void *ptr, size_t usize);
|
||||
#endif
|
||||
|
||||
@ -411,6 +416,17 @@ prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
|
||||
arena_prof_tctx_set(ptr, usize, tctx);
|
||||
}
|
||||
|
||||
JEMALLOC_ALWAYS_INLINE void
|
||||
prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
|
||||
prof_tctx_t *old_tctx)
|
||||
{
|
||||
|
||||
cassert(config_prof);
|
||||
assert(ptr != NULL);
|
||||
|
||||
arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
|
||||
}
|
||||
|
||||
JEMALLOC_ALWAYS_INLINE bool
|
||||
prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
|
||||
prof_tdata_t **tdata_out)
|
||||
@ -420,16 +436,16 @@ prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
|
||||
cassert(config_prof);
|
||||
|
||||
tdata = prof_tdata_get(tsd, true);
|
||||
if ((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX)
|
||||
if (unlikely((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX))
|
||||
tdata = NULL;
|
||||
|
||||
if (tdata_out != NULL)
|
||||
*tdata_out = tdata;
|
||||
|
||||
if (tdata == NULL)
|
||||
if (unlikely(tdata == NULL))
|
||||
return (true);
|
||||
|
||||
if (tdata->bytes_until_sample >= usize) {
|
||||
if (likely(tdata->bytes_until_sample >= usize)) {
|
||||
if (update)
|
||||
tdata->bytes_until_sample -= usize;
|
||||
return (true);
|
||||
@ -442,7 +458,7 @@ prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
|
||||
}
|
||||
|
||||
JEMALLOC_ALWAYS_INLINE prof_tctx_t *
|
||||
prof_alloc_prep(tsd_t *tsd, size_t usize, bool update)
|
||||
prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update)
|
||||
{
|
||||
prof_tctx_t *ret;
|
||||
prof_tdata_t *tdata;
|
||||
@ -450,8 +466,8 @@ prof_alloc_prep(tsd_t *tsd, size_t usize, bool update)
|
||||
|
||||
assert(usize == s2u(usize));
|
||||
|
||||
if (!prof_active_get_unlocked() || likely(prof_sample_accum_update(tsd,
|
||||
usize, update, &tdata)))
|
||||
if (!prof_active || likely(prof_sample_accum_update(tsd, usize, update,
|
||||
&tdata)))
|
||||
ret = (prof_tctx_t *)(uintptr_t)1U;
|
||||
else {
|
||||
bt_init(&bt, tdata->vec);
|
||||
@ -478,17 +494,19 @@ prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx)
|
||||
|
||||
JEMALLOC_ALWAYS_INLINE void
|
||||
prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
|
||||
bool updated, size_t old_usize, prof_tctx_t *old_tctx)
|
||||
bool prof_active, bool updated, const void *old_ptr, size_t old_usize,
|
||||
prof_tctx_t *old_tctx)
|
||||
{
|
||||
bool sampled, old_sampled;
|
||||
|
||||
cassert(config_prof);
|
||||
assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);
|
||||
|
||||
if (!updated && ptr != NULL) {
|
||||
if (prof_active && !updated && ptr != NULL) {
|
||||
assert(usize == isalloc(ptr, true));
|
||||
if (prof_sample_accum_update(tsd, usize, true, NULL)) {
|
||||
/*
|
||||
* Don't sample. The usize passed to PROF_ALLOC_PREP()
|
||||
* Don't sample. The usize passed to prof_alloc_prep()
|
||||
* was larger than what actually got allocated, so a
|
||||
* backtrace was captured for this allocation, even
|
||||
* though its actual usize was insufficient to cross the
|
||||
@ -498,12 +516,16 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
|
||||
}
|
||||
}
|
||||
|
||||
if (unlikely((uintptr_t)old_tctx > (uintptr_t)1U))
|
||||
prof_free_sampled_object(tsd, old_usize, old_tctx);
|
||||
if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
|
||||
sampled = ((uintptr_t)tctx > (uintptr_t)1U);
|
||||
old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U);
|
||||
|
||||
if (unlikely(sampled))
|
||||
prof_malloc_sample_object(ptr, usize, tctx);
|
||||
else
|
||||
prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
|
||||
prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
|
||||
|
||||
if (unlikely(old_sampled))
|
||||
prof_free_sampled_object(tsd, old_usize, old_tctx);
|
||||
}
|
||||
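The reordering here is the 4.0.1 fix noted in the ChangeLog: when tctx == old_tctx, registering the new sampled object before retiring the old one keeps the shared tctx referenced throughout, so it cannot be destroyed prematurely. Schematically (an illustration of the ordering, not a third variant of the code):

	/* Take the new reference on the (possibly shared) tctx first ... */
	if (unlikely(sampled))
		prof_malloc_sample_object(ptr, usize, tctx);
	/* ... and only then drop the old reference. */
	if (unlikely(old_sampled))
		prof_free_sampled_object(tsd, old_usize, old_tctx);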

JEMALLOC_ALWAYS_INLINE void
@ -79,6 +79,15 @@ struct { \
|
||||
(a_node)->a_field.rbn_right_red = (a_type *) (((intptr_t) \
|
||||
(a_node)->a_field.rbn_right_red) & ((ssize_t)-2)); \
|
||||
} while (0)
|
||||
|
||||
/* Node initializer. */
|
||||
#define rbt_node_new(a_type, a_field, a_rbt, a_node) do { \
|
||||
/* Bookkeeping bit cannot be used by node pointer. */ \
|
||||
assert(((uintptr_t)(a_node) & 0x1) == 0); \
|
||||
rbtn_left_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \
|
||||
rbtn_right_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \
|
||||
rbtn_red_set(a_type, a_field, (a_node)); \
|
||||
} while (0)
|
||||
#else
|
||||
/* Right accessors. */
|
||||
#define rbtn_right_get(a_type, a_field, a_node) \
|
||||
@ -99,7 +108,6 @@ struct { \
|
||||
#define rbtn_black_set(a_type, a_field, a_node) do { \
|
||||
(a_node)->a_field.rbn_red = false; \
|
||||
} while (0)
|
||||
#endif
|
||||
|
||||
/* Node initializer. */
|
||||
#define rbt_node_new(a_type, a_field, a_rbt, a_node) do { \
|
||||
@ -107,6 +115,7 @@ struct { \
|
||||
rbtn_right_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \
|
||||
rbtn_red_set(a_type, a_field, (a_node)); \
|
||||
} while (0)
|
||||
#endif
|
||||
|
||||
/* Tree initializer. */
|
||||
#define rb_new(a_type, a_field, a_rbt) do { \
|
||||
@ -169,11 +178,11 @@ a_prefix##next(a_rbt_type *rbtree, a_type *node); \
|
||||
a_attr a_type * \
|
||||
a_prefix##prev(a_rbt_type *rbtree, a_type *node); \
|
||||
a_attr a_type * \
|
||||
a_prefix##search(a_rbt_type *rbtree, a_type *key); \
|
||||
a_prefix##search(a_rbt_type *rbtree, const a_type *key); \
|
||||
a_attr a_type * \
|
||||
a_prefix##nsearch(a_rbt_type *rbtree, a_type *key); \
|
||||
a_prefix##nsearch(a_rbt_type *rbtree, const a_type *key); \
|
||||
a_attr a_type * \
|
||||
a_prefix##psearch(a_rbt_type *rbtree, a_type *key); \
|
||||
a_prefix##psearch(a_rbt_type *rbtree, const a_type *key); \
|
||||
a_attr void \
|
||||
a_prefix##insert(a_rbt_type *rbtree, a_type *node); \
|
||||
a_attr void \
|
||||
@ -183,7 +192,10 @@ a_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)( \
|
||||
a_rbt_type *, a_type *, void *), void *arg); \
|
||||
a_attr a_type * \
|
||||
a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
|
||||
a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg);
|
||||
a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg); \
|
||||
a_attr void \
|
||||
a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
|
||||
void *arg);
|
||||
|
||||
/*
|
||||
* The rb_gen() macro generates a type-specific red-black tree implementation,
|
||||
@ -254,7 +266,7 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
|
||||
* last/first.
|
||||
*
|
||||
* static ex_node_t *
|
||||
* ex_search(ex_t *tree, ex_node_t *key);
|
||||
* ex_search(ex_t *tree, const ex_node_t *key);
|
||||
* Description: Search for node that matches key.
|
||||
* Args:
|
||||
* tree: Pointer to an initialized red-black tree object.
|
||||
@ -262,9 +274,9 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
|
||||
* Ret: Node in tree that matches key, or NULL if no match.
|
||||
*
|
||||
* static ex_node_t *
|
||||
* ex_nsearch(ex_t *tree, ex_node_t *key);
|
||||
* ex_nsearch(ex_t *tree, const ex_node_t *key);
|
||||
* static ex_node_t *
|
||||
* ex_psearch(ex_t *tree, ex_node_t *key);
|
||||
* ex_psearch(ex_t *tree, const ex_node_t *key);
|
||||
* Description: Search for node that matches key. If no match is found,
|
||||
* return what would be key's successor/predecessor, were
|
||||
* key in tree.
|
||||
@ -312,6 +324,20 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
|
||||
* arg : Opaque pointer passed to cb().
|
||||
* Ret: NULL if iteration completed, or the non-NULL callback return value
|
||||
* that caused termination of the iteration.
|
||||
*
|
||||
* static void
|
||||
* ex_destroy(ex_t *tree, void (*cb)(ex_node_t *, void *), void *arg);
|
||||
* Description: Iterate over the tree with post-order traversal, remove
|
||||
* each node, and run the callback if non-null. This is
|
||||
* used for destroying a tree without paying the cost to
|
||||
* rebalance it. The tree must not be otherwise altered
|
||||
* during traversal.
|
||||
* Args:
|
||||
* tree: Pointer to an initialized red-black tree object.
|
||||
* cb : Callback function, which, if non-null, is called for each node
|
||||
* during iteration. There is no way to stop iteration once it has
|
||||
* begun.
|
||||
* arg : Opaque pointer passed to cb().
|
||||
*/
|
||||
#define rb_gen(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp) \
|
||||
a_attr void \
|
||||
@ -397,7 +423,7 @@ a_prefix##prev(a_rbt_type *rbtree, a_type *node) { \
|
||||
return (ret); \
|
||||
} \
|
||||
a_attr a_type * \
|
||||
a_prefix##search(a_rbt_type *rbtree, a_type *key) { \
|
||||
a_prefix##search(a_rbt_type *rbtree, const a_type *key) { \
|
||||
a_type *ret; \
|
||||
int cmp; \
|
||||
ret = rbtree->rbt_root; \
|
||||
@ -415,7 +441,7 @@ a_prefix##search(a_rbt_type *rbtree, a_type *key) { \
|
||||
return (ret); \
|
||||
} \
|
||||
a_attr a_type * \
|
||||
a_prefix##nsearch(a_rbt_type *rbtree, a_type *key) { \
|
||||
a_prefix##nsearch(a_rbt_type *rbtree, const a_type *key) { \
|
||||
a_type *ret; \
|
||||
a_type *tnode = rbtree->rbt_root; \
|
||||
ret = &rbtree->rbt_nil; \
|
||||
@ -437,7 +463,7 @@ a_prefix##nsearch(a_rbt_type *rbtree, a_type *key) { \
|
||||
return (ret); \
|
||||
} \
|
||||
a_attr a_type * \
|
||||
a_prefix##psearch(a_rbt_type *rbtree, a_type *key) { \
|
||||
a_prefix##psearch(a_rbt_type *rbtree, const a_type *key) { \
|
||||
a_type *ret; \
|
||||
a_type *tnode = rbtree->rbt_root; \
|
||||
ret = &rbtree->rbt_nil; \
|
||||
@ -976,6 +1002,28 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
|
||||
ret = NULL; \
|
||||
} \
|
||||
return (ret); \
|
||||
} \
|
||||
a_attr void \
|
||||
a_prefix##destroy_recurse(a_rbt_type *rbtree, a_type *node, void (*cb)( \
|
||||
a_type *, void *), void *arg) { \
|
||||
if (node == &rbtree->rbt_nil) { \
|
||||
return; \
|
||||
} \
|
||||
a_prefix##destroy_recurse(rbtree, rbtn_left_get(a_type, a_field, \
|
||||
node), cb, arg); \
|
||||
rbtn_left_set(a_type, a_field, (node), &rbtree->rbt_nil); \
|
||||
a_prefix##destroy_recurse(rbtree, rbtn_right_get(a_type, a_field, \
|
||||
node), cb, arg); \
|
||||
rbtn_right_set(a_type, a_field, (node), &rbtree->rbt_nil); \
|
||||
if (cb) { \
|
||||
cb(node, arg); \
|
||||
} \
|
||||
} \
|
||||
a_attr void \
|
||||
a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
|
||||
void *arg) { \
|
||||
a_prefix##destroy_recurse(rbtree, rbtree->rbt_root, cb, arg); \
|
||||
rbtree->rbt_root = &rbtree->rbt_nil; \
|
||||
}
|
||||
|
||||
#endif /* RB_H_ */
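The destroy path added above is easiest to see from a caller's perspective. Below is a minimal sketch built from the ex_* API documented in the comment block; the node type, comparison function, include path, and malloc/free bookkeeping are illustrative, not part of rb.h:

#include <stdlib.h>
#include "jemalloc/internal/rb.h"	/* illustrative include path */

typedef struct ex_node_s ex_node_t;
struct ex_node_s {
	rb_node(ex_node_t)	link;	/* intrusive tree linkage */
	int			key;
};
typedef rb_tree(ex_node_t) ex_t;

static int
ex_cmp(const ex_node_t *a, const ex_node_t *b)
{
	return ((a->key > b->key) - (a->key < b->key));
}

/* Generates ex_new(), ex_insert(), ex_search(), ex_destroy(), ... */
rb_gen(static, ex_, ex_t, ex_node_t, link, ex_cmp)

static void
ex_free_cb(ex_node_t *node, void *arg)
{
	(void)arg;
	free(node);	/* node is already unlinked; just release it */
}

/* Usage: ex_new(&tree); ... ex_destroy(&tree, ex_free_cb, NULL);
 * tears the whole tree down in post-order with no rebalancing work. */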
@ -167,6 +167,8 @@ size_classes() {
lg_large_minclass=$((${lg_grp} + 2))
fi
fi
# Final written value is correct:
huge_maxclass="((((size_t)1) << ${lg_grp}) + (((size_t)${ndelta}) << ${lg_delta}))"
index=$((${index} + 1))
ndelta=$((${ndelta} + 1))
done

@ -185,6 +187,7 @@ size_classes() {
# - lookup_maxclass
# - small_maxclass
# - lg_large_minclass
# - huge_maxclass
}

cat <<EOF

@ -215,6 +218,7 @@ cat <<EOF
* LOOKUP_MAXCLASS: Maximum size class included in lookup table.
* SMALL_MAXCLASS: Maximum small size class.
* LG_LARGE_MINCLASS: Lg of minimum large size class.
* HUGE_MAXCLASS: Maximum (huge) size class.
*/

#define LG_SIZE_CLASS_GROUP ${lg_g}

@ -238,6 +242,7 @@ for lg_z in ${lg_zarr} ; do
echo "#define LOOKUP_MAXCLASS ${lookup_maxclass}"
echo "#define SMALL_MAXCLASS ${small_maxclass}"
echo "#define LG_LARGE_MINCLASS ${lg_large_minclass}"
echo "#define HUGE_MAXCLASS ${huge_maxclass}"
echo "#endif"
echo
done
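Every constant the script emits has the same shape: a power-of-two group base plus ndelta deltas, exactly as in the huge_maxclass expression above. A self-contained C sketch of how such an expression evaluates; the (lg_grp, lg_delta, ndelta) triple is invented for illustration and not taken from any real size_classes.h:

#include <stdio.h>

static size_t
sc_size(unsigned lg_grp, unsigned lg_delta, unsigned ndelta)
{
	return ((((size_t)1) << lg_grp) + (((size_t)ndelta) << lg_delta));
}

int
main(void)
{
	/* 2^31 + 3*2^29 = 3758096384 bytes (3.5 GiB) */
	printf("%zu\n", sc_size(31, 29, 3));
	return (0);
}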
@ -70,6 +70,13 @@ struct tcache_bin_s {
int low_water; /* Min # cached since last GC. */
unsigned lg_fill_div; /* Fill (ncached_max >> lg_fill_div). */
unsigned ncached; /* # of cached objects. */
/*
* To make use of adjacent cacheline prefetch, the items in the avail
* stack go to higher addresses for newer allocations. avail points
* just above the available space, which means that
* avail[-ncached, ... -1] are available items and the lowest item will
* be allocated first.
*/
void **avail; /* Stack of available objects. */
};

@ -126,7 +133,7 @@ extern tcaches_t *tcaches;
size_t tcache_salloc(const void *ptr);
void tcache_event_hard(tsd_t *tsd, tcache_t *tcache);
void *tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
tcache_bin_t *tbin, szind_t binind);
tcache_bin_t *tbin, szind_t binind, bool *tcache_success);
void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
szind_t binind, unsigned rem);
void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,

@ -155,15 +162,15 @@ void tcache_flush(void);
bool tcache_enabled_get(void);
tcache_t *tcache_get(tsd_t *tsd, bool create);
void tcache_enabled_set(bool enabled);
void *tcache_alloc_easy(tcache_bin_t *tbin);
void *tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success);
void *tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
size_t size, bool zero);
size_t size, szind_t ind, bool zero, bool slow_path);
void *tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
size_t size, bool zero);
size_t size, szind_t ind, bool zero, bool slow_path);
void tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr,
szind_t binind);
szind_t binind, bool slow_path);
void tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr,
size_t size);
size_t size, bool slow_path);
tcache_t *tcaches_get(tsd_t *tsd, unsigned ind);
#endif

@ -247,44 +254,69 @@ tcache_event(tsd_t *tsd, tcache_t *tcache)
}

JEMALLOC_ALWAYS_INLINE void *
tcache_alloc_easy(tcache_bin_t *tbin)
tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success)
{
void *ret;

if (unlikely(tbin->ncached == 0)) {
tbin->low_water = -1;
*tcache_success = false;
return (NULL);
}
/*
* tcache_success (instead of ret) should be checked upon the return of
* this function. We avoid checking (ret == NULL) because there is
* never a null stored on the avail stack (which is unknown to the
* compiler), and eagerly checking ret would cause pipeline stall
* (waiting for the cacheline).
*/
*tcache_success = true;
ret = *(tbin->avail - tbin->ncached);
tbin->ncached--;

if (unlikely((int)tbin->ncached < tbin->low_water))
tbin->low_water = tbin->ncached;
ret = tbin->avail[tbin->ncached];

return (ret);
}

JEMALLOC_ALWAYS_INLINE void *
tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
bool zero)
szind_t binind, bool zero, bool slow_path)
{
void *ret;
szind_t binind;
size_t usize;
tcache_bin_t *tbin;
bool tcache_success;
size_t usize JEMALLOC_CC_SILENCE_INIT(0);

binind = size2index(size);
assert(binind < NBINS);
tbin = &tcache->tbins[binind];
usize = index2size(binind);
ret = tcache_alloc_easy(tbin);
if (unlikely(ret == NULL)) {
ret = tcache_alloc_small_hard(tsd, arena, tcache, tbin, binind);
if (ret == NULL)
ret = tcache_alloc_easy(tbin, &tcache_success);
assert(tcache_success == (ret != NULL));
if (unlikely(!tcache_success)) {
bool tcache_hard_success;
arena = arena_choose(tsd, arena);
if (unlikely(arena == NULL))
return (NULL);

ret = tcache_alloc_small_hard(tsd, arena, tcache, tbin, binind,
&tcache_hard_success);
if (tcache_hard_success == false)
return (NULL);
}
assert(tcache_salloc(ret) == usize);

assert(ret);
/*
* Only compute usize if required. The checks in the following if
* statement are all static.
*/
if (config_prof || (slow_path && config_fill) || unlikely(zero)) {
usize = index2size(binind);
assert(tcache_salloc(ret) == usize);
}

if (likely(!zero)) {
if (config_fill) {
if (slow_path && config_fill) {
if (unlikely(opt_junk_alloc)) {
arena_alloc_junk_small(ret,
&arena_bin_info[binind], false);

@ -292,7 +324,7 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
memset(ret, 0, usize);
}
} else {
if (config_fill && unlikely(opt_junk_alloc)) {
if (slow_path && config_fill && unlikely(opt_junk_alloc)) {
arena_alloc_junk_small(ret, &arena_bin_info[binind],
true);
}

@ -309,28 +341,38 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,

JEMALLOC_ALWAYS_INLINE void *
tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
bool zero)
szind_t binind, bool zero, bool slow_path)
{
void *ret;
szind_t binind;
size_t usize;
tcache_bin_t *tbin;
bool tcache_success;
size_t usize JEMALLOC_CC_SILENCE_INIT(0);

binind = size2index(size);
usize = index2size(binind);
assert(usize <= tcache_maxclass);
assert(binind < nhbins);
tbin = &tcache->tbins[binind];
ret = tcache_alloc_easy(tbin);
if (unlikely(ret == NULL)) {
ret = tcache_alloc_easy(tbin, &tcache_success);
assert(tcache_success == (ret != NULL));
if (unlikely(!tcache_success)) {
/*
* Only allocate one large object at a time, because it's quite
* expensive to create one and not use it.
*/
ret = arena_malloc_large(arena, usize, zero);
arena = arena_choose(tsd, arena);
if (unlikely(arena == NULL))
return (NULL);

usize = index2size(binind);
assert(usize <= tcache_maxclass);
ret = arena_malloc_large(arena, usize, binind, zero);
if (ret == NULL)
return (NULL);
} else {
/* Only compute usize on demand */
if (config_prof || (slow_path && config_fill) || unlikely(zero)) {
usize = index2size(binind);
assert(usize <= tcache_maxclass);
}

if (config_prof && usize == LARGE_MINCLASS) {
arena_chunk_t *chunk =
(arena_chunk_t *)CHUNK_ADDR2BASE(ret);

@ -340,7 +382,7 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
BININD_INVALID);
}
if (likely(!zero)) {
if (config_fill) {
if (slow_path && config_fill) {
if (unlikely(opt_junk_alloc))
memset(ret, 0xa5, usize);
else if (unlikely(opt_zero))

@ -360,14 +402,15 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
}

JEMALLOC_ALWAYS_INLINE void
tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind)
tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,
bool slow_path)
{
tcache_bin_t *tbin;
tcache_bin_info_t *tbin_info;

assert(tcache_salloc(ptr) <= SMALL_MAXCLASS);

if (config_fill && unlikely(opt_junk_free))
if (slow_path && config_fill && unlikely(opt_junk_free))
arena_dalloc_junk_small(ptr, &arena_bin_info[binind]);

tbin = &tcache->tbins[binind];

@ -377,14 +420,15 @@ tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind)
(tbin_info->ncached_max >> 1));
}
assert(tbin->ncached < tbin_info->ncached_max);
tbin->avail[tbin->ncached] = ptr;
tbin->ncached++;
*(tbin->avail - tbin->ncached) = ptr;

tcache_event(tsd, tcache);
}

JEMALLOC_ALWAYS_INLINE void
tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size)
tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size,
bool slow_path)
{
szind_t binind;
tcache_bin_t *tbin;

@ -396,7 +440,7 @@ tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size)

binind = size2index(size);

if (config_fill && unlikely(opt_junk_free))
if (slow_path && config_fill && unlikely(opt_junk_free))
arena_dalloc_junk_large(ptr, size);

tbin = &tcache->tbins[binind];

@ -406,8 +450,8 @@ tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size)
(tbin_info->ncached_max >> 1), tcache);
}
assert(tbin->ncached < tbin_info->ncached_max);
tbin->avail[tbin->ncached] = ptr;
tbin->ncached++;
*(tbin->avail - tbin->ncached) = ptr;

tcache_event(tsd, tcache);
}
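The core of these hunks is the new avail-stack indexing: live items occupy avail[-ncached..-1] and the lowest-addressed item is handed out first, per the struct comment above. A toy, self-contained model of the push/pop arithmetic, with plain ints standing in for cached objects:

#include <stdio.h>

static void *slots[4];
static void **avail = &slots[4];	/* points just above the space */
static unsigned ncached = 0;

static void
tbin_push(void *ptr)
{
	ncached++;
	*(avail - ncached) = ptr;	/* each freed object goes one slot deeper */
}

static void *
tbin_pop(void)
{
	void *ret;

	if (ncached == 0)
		return (NULL);
	ret = *(avail - ncached);	/* lowest item is allocated first */
	ncached--;
	return (ret);
}

int
main(void)
{
	int a = 1, b = 2;

	tbin_push(&a);
	tbin_push(&b);
	printf("%d %d\n", *(int *)tbin_pop(), *(int *)tbin_pop()); /* 2 1 */
	return (0);
}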
@ -190,7 +190,7 @@ a_name##tsd_boot0(void) \
return (false); \
} \
a_attr void \
a_name##tsd_boot1() \
a_name##tsd_boot1(void) \
{ \
\
/* Do nothing. */ \

@ -235,7 +235,7 @@ a_name##tsd_boot0(void) \
return (false); \
} \
a_attr void \
a_name##tsd_boot1() \
a_name##tsd_boot1(void) \
{ \
\
/* Do nothing. */ \

@ -345,7 +345,7 @@ a_name##tsd_boot0(void) \
return (false); \
} \
a_attr void \
a_name##tsd_boot1() \
a_name##tsd_boot1(void) \
{ \
a_name##tsd_wrapper_t *wrapper; \
wrapper = (a_name##tsd_wrapper_t *) \

@ -467,7 +467,7 @@ a_name##tsd_boot0(void) \
return (false); \
} \
a_attr void \
a_name##tsd_boot1() \
a_name##tsd_boot1(void) \
{ \
a_name##tsd_wrapper_t *wrapper; \
wrapper = (a_name##tsd_wrapper_t *) \
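The only change in these four hunks is `()` becoming `(void)`, which in C are different declarations, not a style preference. A minimal standalone illustration:

void boot1_old();	/* old-style declarator: parameter list left unspecified */
void boot1_new(void);	/* prototype: the function takes no arguments */

int
main(void)
{
	/* boot1_new(1) is a constraint violation the compiler must reject;
	 * boot1_old(1) compiles silently under pre-C23 rules. */
	return (0);
}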
@ -81,49 +81,7 @@
# define unreachable()
#endif

/*
* Define a custom assert() in order to reduce the chances of deadlock during
* assertion failure.
*/
#ifndef assert
#define assert(e) do { \
if (unlikely(config_debug && !(e))) { \
malloc_printf( \
"<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
__FILE__, __LINE__, #e); \
abort(); \
} \
} while (0)
#endif

#ifndef not_reached
#define not_reached() do { \
if (config_debug) { \
malloc_printf( \
"<jemalloc>: %s:%d: Unreachable code reached\n", \
__FILE__, __LINE__); \
abort(); \
} \
unreachable(); \
} while (0)
#endif

#ifndef not_implemented
#define not_implemented() do { \
if (config_debug) { \
malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
__FILE__, __LINE__); \
abort(); \
} \
} while (0)
#endif

#ifndef assert_not_implemented
#define assert_not_implemented(e) do { \
if (unlikely(config_debug && !(e))) \
not_implemented(); \
} while (0)
#endif
#include "jemalloc/internal/assert.h"

/* Use to assert a particular configuration, e.g., cassert(config_debug). */
#define cassert(c) do { \
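The macros deleted above move to assert.h unchanged, and the pattern is simple enough to demonstrate standalone. A sketch with a stand-in config_debug flag and fprintf in place of malloc_printf (neither is the real jemalloc symbol):

#include <stdio.h>
#include <stdlib.h>

static const int config_debug = 1;	/* stand-in for the build flag */

#define my_assert(e) do {						\
	if (config_debug && !(e)) {					\
		fprintf(stderr, "%s:%d: Failed assertion: \"%s\"\n",	\
		    __FILE__, __LINE__, #e);				\
		abort();						\
	}								\
} while (0)

int
main(void)
{
	my_assert(1 + 1 == 2);	/* passes silently */
	return (0);
}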
@ -36,32 +36,7 @@
# define JEMALLOC_CXX_THROW
#endif

#ifdef JEMALLOC_HAVE_ATTR
# define JEMALLOC_ATTR(s) __attribute__((s))
# define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s))
# ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
# define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))
# define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2))
# else
# define JEMALLOC_ALLOC_SIZE(s)
# define JEMALLOC_ALLOC_SIZE2(s1, s2)
# endif
# ifndef JEMALLOC_EXPORT
# define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default"))
# endif
# ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF
# define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i))
# elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF)
# define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i))
# else
# define JEMALLOC_FORMAT_PRINTF(s, i)
# endif
# define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)
# define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)
# define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s))
# define JEMALLOC_RESTRICT_RETURN
# define JEMALLOC_ALLOCATOR
#elif _MSC_VER
#if _MSC_VER
# define JEMALLOC_ATTR(s)
# define JEMALLOC_ALIGNED(s) __declspec(align(s))
# define JEMALLOC_ALLOC_SIZE(s)

@ -87,6 +62,31 @@
# else
# define JEMALLOC_ALLOCATOR
# endif
#elif defined(JEMALLOC_HAVE_ATTR)
# define JEMALLOC_ATTR(s) __attribute__((s))
# define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s))
# ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
# define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))
# define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2))
# else
# define JEMALLOC_ALLOC_SIZE(s)
# define JEMALLOC_ALLOC_SIZE2(s1, s2)
# endif
# ifndef JEMALLOC_EXPORT
# define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default"))
# endif
# ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF
# define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i))
# elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF)
# define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i))
# else
# define JEMALLOC_FORMAT_PRINTF(s, i)
# endif
# define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)
# define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)
# define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s))
# define JEMALLOC_RESTRICT_RETURN
# define JEMALLOC_ALLOCATOR
#else
# define JEMALLOC_ATTR(s)
# define JEMALLOC_ALIGNED(s)
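The hunk above reorders detection so MSVC is tested before the generic attribute path. The shape of the cascade, reduced to one macro and a self-contained program (the detection order and macro names here are illustrative, not jemalloc's):

#if defined(_MSC_VER)
# define MY_NOINLINE __declspec(noinline)
#elif defined(__GNUC__)
# define MY_NOINLINE __attribute__((noinline))
#else
# define MY_NOINLINE
#endif

MY_NOINLINE static int
add(int a, int b)
{
	return (a + b);
}

int
main(void)
{
	return (add(1, 2) - 3);
}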
@ -11,7 +11,7 @@ arena_bin_info_t arena_bin_info[NBINS];
size_t map_bias;
size_t map_misc_offset;
size_t arena_maxrun; /* Max run size for arenas. */
size_t arena_maxclass; /* Max size class for arenas. */
size_t large_maxclass; /* Max large size class. */
static size_t small_maxrun; /* Max run size used for small size classes. */
static bool *small_run_tab; /* Valid small run page multiples. */
unsigned nlclasses; /* Number of large size classes. */

@ -62,7 +62,7 @@ arena_miscelm_key_size_get(const arena_chunk_map_misc_t *miscelm)
}

JEMALLOC_INLINE_C size_t
arena_miscelm_size_get(arena_chunk_map_misc_t *miscelm)
arena_miscelm_size_get(const arena_chunk_map_misc_t *miscelm)
{
arena_chunk_t *chunk;
size_t pageind, mapbits;

@ -76,7 +76,7 @@ arena_miscelm_size_get(arena_chunk_map_misc_t *miscelm)
}

JEMALLOC_INLINE_C int
arena_run_comp(arena_chunk_map_misc_t *a, arena_chunk_map_misc_t *b)
arena_run_comp(const arena_chunk_map_misc_t *a, const arena_chunk_map_misc_t *b)
{
uintptr_t a_miscelm = (uintptr_t)a;
uintptr_t b_miscelm = (uintptr_t)b;

@ -169,7 +169,8 @@ run_quantize_first(size_t size)
}

JEMALLOC_INLINE_C int
arena_avail_comp(arena_chunk_map_misc_t *a, arena_chunk_map_misc_t *b)
arena_avail_comp(const arena_chunk_map_misc_t *a,
const arena_chunk_map_misc_t *b)
{
int ret;
uintptr_t a_miscelm = (uintptr_t)a;

@ -425,7 +426,7 @@ arena_run_split_large_helper(arena_t *arena, arena_run_t *run, size_t size,
{
arena_chunk_t *chunk;
arena_chunk_map_misc_t *miscelm;
size_t flag_dirty, flag_decommitted, run_ind, need_pages, i;
size_t flag_dirty, flag_decommitted, run_ind, need_pages;
size_t flag_unzeroed_mask;

chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run);

@ -459,6 +460,7 @@ arena_run_split_large_helper(arena_t *arena, arena_run_t *run, size_t size,
* The run is clean, so some pages may be zeroed (i.e.
* never before touched).
*/
size_t i;
for (i = 0; i < need_pages; i++) {
if (arena_mapbits_unzeroed_get(chunk, run_ind+i)
!= 0)

@ -1659,18 +1661,6 @@ arena_run_size_get(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run,
return (size);
}

static bool
arena_run_decommit(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run)
{
arena_chunk_map_misc_t *miscelm = arena_run_to_miscelm(run);
size_t run_ind = arena_miscelm_to_pageind(miscelm);
size_t offset = run_ind << LG_PAGE;
size_t length = arena_run_size_get(arena, chunk, run, run_ind);

return (arena->chunk_hooks.decommit(chunk, chunksize, offset, length,
arena->ind));
}

static void
arena_run_dalloc(arena_t *arena, arena_run_t *run, bool dirty, bool cleaned,
bool decommitted)

@ -1929,7 +1919,6 @@ arena_bin_nonfull_run_get(arena_t *arena, arena_bin_t *bin)
static void *
arena_bin_malloc_hard(arena_t *arena, arena_bin_t *bin)
{
void *ret;
szind_t binind;
arena_bin_info_t *bin_info;
arena_run_t *run;

@ -1943,6 +1932,7 @@ arena_bin_malloc_hard(arena_t *arena, arena_bin_t *bin)
* Another thread updated runcur while this one ran without the
* bin lock in arena_bin_nonfull_run_get().
*/
void *ret;
assert(bin->runcur->nfree > 0);
ret = arena_run_reg_alloc(bin->runcur, bin_info);
if (run != NULL) {

@ -1981,8 +1971,6 @@ arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, szind_t binind,
{
unsigned i, nfill;
arena_bin_t *bin;
arena_run_t *run;
void *ptr;

assert(tbin->ncached == 0);

@ -1992,6 +1980,8 @@ arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, szind_t binind,
malloc_mutex_lock(&bin->lock);
for (i = 0, nfill = (tcache_bin_info[binind].ncached_max >>
tbin->lg_fill_div); i < nfill; i++) {
arena_run_t *run;
void *ptr;
if ((run = bin->runcur) != NULL && run->nfree > 0)
ptr = arena_run_reg_alloc(run, &arena_bin_info[binind]);
else

@ -2000,11 +1990,10 @@ arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, szind_t binind,
/*
* OOM. tbin->avail isn't yet filled down to its first
* element, so the successful allocations (if any) must
* be moved to the base of tbin->avail before bailing
* out.
* be moved just before tbin->avail before bailing out.
*/
if (i > 0) {
memmove(tbin->avail, &tbin->avail[nfill - i],
memmove(tbin->avail - i, tbin->avail - nfill,
i * sizeof(void *));
}
break;

@ -2014,7 +2003,7 @@ arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, szind_t binind,
true);
}
/* Insert such that low regions get used first. */
tbin->avail[nfill - 1 - i] = ptr;
*(tbin->avail - nfill + i) = ptr;
}
if (config_stats) {
bin->stats.nmalloc += i;
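The OOM bail-out above is subtle: nfill slots were reserved below avail, the loop fills them from the low end upward, so on failure the i live pointers must be slid up to end at avail[-1]. A self-contained model of that memmove (toy sizes, ints in place of regions):

#include <stdio.h>
#include <string.h>

int
main(void)
{
	void *slots[8] = {0};
	void **avail = &slots[8];
	unsigned nfill = 6, i = 2;	/* reserved 6 slots, only 2 filled */
	int a = 1, b = 2;

	*(avail - nfill) = &a;		/* what the fill loop wrote */
	*(avail - nfill + 1) = &b;

	/* Slide the live items so they occupy avail[-i..-1]. */
	memmove(avail - i, avail - nfill, i * sizeof(void *));
	printf("%d %d\n", *(int *)*(avail - 2), *(int *)*(avail - 1)); /* 1 2 */
	return (0);
}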
@ -2066,12 +2055,13 @@ arena_redzone_corruption_t *arena_redzone_corruption =
static void
arena_redzones_validate(void *ptr, arena_bin_info_t *bin_info, bool reset)
{
size_t size = bin_info->reg_size;
size_t redzone_size = bin_info->redzone_size;
size_t i;
bool error = false;

if (opt_junk_alloc) {
size_t size = bin_info->reg_size;
size_t redzone_size = bin_info->redzone_size;
size_t i;

for (i = 1; i <= redzone_size; i++) {
uint8_t *byte = (uint8_t *)((uintptr_t)ptr - i);
if (*byte != 0xa5) {

@ -2134,14 +2124,12 @@ arena_quarantine_junk_small(void *ptr, size_t usize)
}

void *
arena_malloc_small(arena_t *arena, size_t size, bool zero)
arena_malloc_small(arena_t *arena, size_t size, szind_t binind, bool zero)
{
void *ret;
arena_bin_t *bin;
arena_run_t *run;
szind_t binind;

binind = size2index(size);
assert(binind < NBINS);
bin = &arena->bins[binind];
size = index2size(binind);

@ -2188,7 +2176,7 @@ arena_malloc_small(arena_t *arena, size_t size, bool zero)
}

void *
arena_malloc_large(arena_t *arena, size_t size, bool zero)
arena_malloc_large(arena_t *arena, size_t size, szind_t binind, bool zero)
{
void *ret;
size_t usize;

@ -2198,7 +2186,7 @@ arena_malloc_large(arena_t *arena, size_t size, bool zero)
UNUSED bool idump;

/* Large allocation. */
usize = s2u(size);
usize = index2size(binind);
malloc_mutex_lock(&arena->lock);
if (config_cache_oblivious) {
uint64_t r;

@ -2223,7 +2211,7 @@ arena_malloc_large(arena_t *arena, size_t size, bool zero)
ret = (void *)((uintptr_t)arena_miscelm_to_rpages(miscelm) +
random_offset);
if (config_stats) {
szind_t index = size2index(usize) - NBINS;
szind_t index = binind - NBINS;

arena->stats.nmalloc_large++;
arena->stats.nrequests_large++;

@ -2345,19 +2333,21 @@ arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment,
if (usize <= SMALL_MAXCLASS && (alignment < PAGE || (alignment == PAGE
&& (usize & PAGE_MASK) == 0))) {
/* Small; alignment doesn't require special run placement. */
ret = arena_malloc(tsd, arena, usize, zero, tcache);
} else if (usize <= arena_maxclass && alignment <= PAGE) {
ret = arena_malloc(tsd, arena, usize, size2index(usize), zero,
tcache, true);
} else if (usize <= large_maxclass && alignment <= PAGE) {
/*
* Large; alignment doesn't require special run placement.
* However, the cached pointer may be at a random offset from
* the base of the run, so do some bit manipulation to retrieve
* the base.
*/
ret = arena_malloc(tsd, arena, usize, zero, tcache);
ret = arena_malloc(tsd, arena, usize, size2index(usize), zero,
tcache, true);
if (config_cache_oblivious)
ret = (void *)((uintptr_t)ret & ~PAGE_MASK);
} else {
if (likely(usize <= arena_maxclass)) {
if (likely(usize <= large_maxclass)) {
ret = arena_palloc_large(tsd, arena, usize, alignment,
zero);
} else if (likely(alignment <= chunksize))

@ -2549,7 +2539,7 @@ arena_dalloc_junk_large_t *arena_dalloc_junk_large =
JEMALLOC_N(arena_dalloc_junk_large_impl);
#endif

void
static void
arena_dalloc_large_locked_impl(arena_t *arena, arena_chunk_t *chunk,
void *ptr, bool junked)
{

@ -2631,41 +2621,57 @@ arena_ralloc_large_shrink(arena_t *arena, arena_chunk_t *chunk, void *ptr,

static bool
arena_ralloc_large_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr,
size_t oldsize, size_t size, size_t extra, bool zero)
size_t oldsize, size_t usize_min, size_t usize_max, bool zero)
{
size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
size_t npages = (oldsize + large_pad) >> LG_PAGE;
size_t followsize;
size_t usize_min = s2u(size);

assert(oldsize == arena_mapbits_large_size_get(chunk, pageind) -
large_pad);

/* Try to extend the run. */
assert(usize_min > oldsize);
malloc_mutex_lock(&arena->lock);
if (pageind+npages < chunk_npages &&
arena_mapbits_allocated_get(chunk, pageind+npages) == 0 &&
(followsize = arena_mapbits_unallocated_size_get(chunk,
pageind+npages)) >= usize_min - oldsize) {
if (pageind+npages >= chunk_npages || arena_mapbits_allocated_get(chunk,
pageind+npages) != 0)
goto label_fail;
followsize = arena_mapbits_unallocated_size_get(chunk, pageind+npages);
if (oldsize + followsize >= usize_min) {
/*
* The next run is available and sufficiently large. Split the
* following run, then merge the first part with the existing
* allocation.
*/
arena_run_t *run;
size_t flag_dirty, flag_unzeroed_mask, splitsize, usize;
size_t usize, splitsize, size, flag_dirty, flag_unzeroed_mask;

usize = s2u(size + extra);
usize = usize_max;
while (oldsize + followsize < usize)
usize = index2size(size2index(usize)-1);
assert(usize >= usize_min);
assert(usize >= oldsize);
splitsize = usize - oldsize;
if (splitsize == 0)
goto label_fail;

run = &arena_miscelm_get(chunk, pageind+npages)->run;
if (arena_run_split_large(arena, run, splitsize, zero)) {
malloc_mutex_unlock(&arena->lock);
return (true);
if (arena_run_split_large(arena, run, splitsize, zero))
goto label_fail;

if (config_cache_oblivious && zero) {
/*
* Zero the trailing bytes of the original allocation's
* last page, since they are in an indeterminate state.
* There will always be trailing bytes, because ptr's
* offset from the beginning of the run is a multiple of
* CACHELINE in [0 .. PAGE).
*/
void *zbase = (void *)((uintptr_t)ptr + oldsize);
void *zpast = PAGE_ADDR2BASE((void *)((uintptr_t)zbase +
PAGE));
size_t nzero = (uintptr_t)zpast - (uintptr_t)zbase;
assert(nzero > 0);
memset(zbase, 0, nzero);
}

size = oldsize + splitsize;

@ -2708,8 +2714,8 @@ arena_ralloc_large_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr,
malloc_mutex_unlock(&arena->lock);
return (false);
}
label_fail:
malloc_mutex_unlock(&arena->lock);

return (true);
}

@ -2738,98 +2744,108 @@ arena_ralloc_junk_large_t *arena_ralloc_junk_large =
* always fail if growing an object, and the following run is already in use.
*/
static bool
arena_ralloc_large(void *ptr, size_t oldsize, size_t size, size_t extra,
bool zero)
arena_ralloc_large(void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero)
{
size_t usize;
arena_chunk_t *chunk;
arena_t *arena;

/* Make sure extra can't cause size_t overflow. */
if (unlikely(extra >= arena_maxclass))
return (true);

usize = s2u(size + extra);
if (usize == oldsize) {
/* Same size class. */
if (oldsize == usize_max) {
/* Current size class is compatible and maximal. */
return (false);
} else {
arena_chunk_t *chunk;
arena_t *arena;

chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
arena = extent_node_arena_get(&chunk->node);

if (usize < oldsize) {
/* Fill before shrinking in order to avoid a race. */
arena_ralloc_junk_large(ptr, oldsize, usize);
arena_ralloc_large_shrink(arena, chunk, ptr, oldsize,
usize);
return (false);
} else {
bool ret = arena_ralloc_large_grow(arena, chunk, ptr,
oldsize, size, extra, zero);
if (config_fill && !ret && !zero) {
if (unlikely(opt_junk_alloc)) {
memset((void *)((uintptr_t)ptr +
oldsize), 0xa5, isalloc(ptr,
config_prof) - oldsize);
} else if (unlikely(opt_zero)) {
memset((void *)((uintptr_t)ptr +
oldsize), 0, isalloc(ptr,
config_prof) - oldsize);
}
}
return (ret);
}
}

chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
arena = extent_node_arena_get(&chunk->node);

if (oldsize < usize_max) {
bool ret = arena_ralloc_large_grow(arena, chunk, ptr, oldsize,
usize_min, usize_max, zero);
if (config_fill && !ret && !zero) {
if (unlikely(opt_junk_alloc)) {
memset((void *)((uintptr_t)ptr + oldsize), 0xa5,
isalloc(ptr, config_prof) - oldsize);
} else if (unlikely(opt_zero)) {
memset((void *)((uintptr_t)ptr + oldsize), 0,
isalloc(ptr, config_prof) - oldsize);
}
}
return (ret);
}

assert(oldsize > usize_max);
/* Fill before shrinking in order to avoid a race. */
arena_ralloc_junk_large(ptr, oldsize, usize_max);
arena_ralloc_large_shrink(arena, chunk, ptr, oldsize, usize_max);
return (false);
}

bool
arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra,
bool zero)
{
size_t usize_min, usize_max;

if (likely(size <= arena_maxclass)) {
usize_min = s2u(size);
usize_max = s2u(size + extra);
if (likely(oldsize <= large_maxclass && usize_min <= large_maxclass)) {
/*
* Avoid moving the allocation if the size class can be left the
* same.
*/
if (likely(oldsize <= arena_maxclass)) {
if (oldsize <= SMALL_MAXCLASS) {
assert(
arena_bin_info[size2index(oldsize)].reg_size
== oldsize);
if ((size + extra <= SMALL_MAXCLASS &&
size2index(size + extra) ==
size2index(oldsize)) || (size <= oldsize &&
size + extra >= oldsize))
if (oldsize <= SMALL_MAXCLASS) {
assert(arena_bin_info[size2index(oldsize)].reg_size ==
oldsize);
if ((usize_max <= SMALL_MAXCLASS &&
size2index(usize_max) == size2index(oldsize)) ||
(size <= oldsize && usize_max >= oldsize))
return (false);
} else {
if (usize_max > SMALL_MAXCLASS) {
if (!arena_ralloc_large(ptr, oldsize, usize_min,
usize_max, zero))
return (false);
} else {
assert(size <= arena_maxclass);
if (size + extra > SMALL_MAXCLASS) {
if (!arena_ralloc_large(ptr, oldsize,
size, extra, zero))
return (false);
}
}
}

/* Reallocation would require a move. */
return (true);
} else
return (huge_ralloc_no_move(ptr, oldsize, size, extra, zero));
} else {
return (huge_ralloc_no_move(ptr, oldsize, usize_min, usize_max,
zero));
}
}

static void *
arena_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize,
size_t alignment, bool zero, tcache_t *tcache)
{

if (alignment == 0)
return (arena_malloc(tsd, arena, usize, size2index(usize), zero,
tcache, true));
usize = sa2u(usize, alignment);
if (usize == 0)
return (NULL);
return (ipalloct(tsd, usize, alignment, zero, tcache, arena));
}

void *
arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t size,
size_t extra, size_t alignment, bool zero, tcache_t *tcache)
size_t alignment, bool zero, tcache_t *tcache)
{
void *ret;
size_t usize;

if (likely(size <= arena_maxclass)) {
usize = s2u(size);
if (usize == 0)
return (NULL);

if (likely(usize <= large_maxclass)) {
size_t copysize;

/* Try to avoid moving the allocation. */
if (!arena_ralloc_no_move(ptr, oldsize, size, extra, zero))
if (!arena_ralloc_no_move(ptr, oldsize, usize, 0, zero))
return (ptr);

/*

@ -2837,53 +2853,23 @@ arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t size,
* the object. In that case, fall back to allocating new space
* and copying.
*/
if (alignment != 0) {
size_t usize = sa2u(size + extra, alignment);
if (usize == 0)
return (NULL);
ret = ipalloct(tsd, usize, alignment, zero, tcache,
arena);
} else {
ret = arena_malloc(tsd, arena, size + extra, zero,
tcache);
}

if (ret == NULL) {
if (extra == 0)
return (NULL);
/* Try again, this time without extra. */
if (alignment != 0) {
size_t usize = sa2u(size, alignment);
if (usize == 0)
return (NULL);
ret = ipalloct(tsd, usize, alignment, zero,
tcache, arena);
} else {
ret = arena_malloc(tsd, arena, size, zero,
tcache);
}

if (ret == NULL)
return (NULL);
}
ret = arena_ralloc_move_helper(tsd, arena, usize, alignment,
zero, tcache);
if (ret == NULL)
return (NULL);

/*
* Junk/zero-filling were already done by
* ipalloc()/arena_malloc().
*/

/*
* Copy at most size bytes (not size+extra), since the caller
* has no expectation that the extra bytes will be reliably
* preserved.
*/
copysize = (size < oldsize) ? size : oldsize;
copysize = (usize < oldsize) ? usize : oldsize;
JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, copysize);
memcpy(ret, ptr, copysize);
isqalloc(tsd, ptr, oldsize, tcache);
} else {
ret = huge_ralloc(tsd, arena, ptr, oldsize, size, extra,
alignment, zero, tcache);
ret = huge_ralloc(tsd, arena, ptr, oldsize, usize, alignment,
zero, tcache);
}
return (ret);
}

@ -3231,7 +3217,6 @@ small_run_size_init(void)
bool
arena_boot(void)
{
size_t header_size;
unsigned i;

arena_lg_dirty_mult_default_set(opt_lg_dirty_mult);

@ -3250,7 +3235,7 @@ arena_boot(void)
*/
map_bias = 0;
for (i = 0; i < 3; i++) {
header_size = offsetof(arena_chunk_t, map_bits) +
size_t header_size = offsetof(arena_chunk_t, map_bits) +
((sizeof(arena_chunk_map_bits_t) +
sizeof(arena_chunk_map_misc_t)) * (chunk_npages-map_bias));
map_bias = (header_size + PAGE_MASK) >> LG_PAGE;

@ -3262,17 +3247,17 @@ arena_boot(void)

arena_maxrun = chunksize - (map_bias << LG_PAGE);
assert(arena_maxrun > 0);
arena_maxclass = index2size(size2index(chunksize)-1);
if (arena_maxclass > arena_maxrun) {
large_maxclass = index2size(size2index(chunksize)-1);
if (large_maxclass > arena_maxrun) {
/*
* For small chunk sizes it's possible for there to be fewer
* non-header pages available than are necessary to serve the
* size classes just below chunksize.
*/
arena_maxclass = arena_maxrun;
large_maxclass = arena_maxrun;
}
assert(arena_maxclass > 0);
nlclasses = size2index(arena_maxclass) - size2index(SMALL_MAXCLASS);
assert(large_maxclass > 0);
nlclasses = size2index(large_maxclass) - size2index(SMALL_MAXCLASS);
nhclasses = NSIZES - nlclasses - NBINS;

bin_info_init();
@ -69,8 +69,6 @@ void *
chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
bool *zero, bool *commit)
{
void *ret;

cassert(have_dss);
assert(size > 0 && (size & chunksize_mask) == 0);
assert(alignment > 0 && (alignment & chunksize_mask) == 0);

@ -84,9 +82,6 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,

malloc_mutex_lock(&dss_mtx);
if (dss_prev != (void *)-1) {
size_t gap_size, cpad_size;
void *cpad, *dss_next;
intptr_t incr;

/*
* The loop is necessary to recover from races with other

@ -94,6 +89,9 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
* malloc.
*/
do {
void *ret, *cpad, *dss_next;
size_t gap_size, cpad_size;
intptr_t incr;
/* Avoid an unnecessary system call. */
if (new_addr != NULL && dss_max != new_addr)
break;
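chunk_alloc_dss ultimately rides on sbrk(2); the loop exists because another thread, or a different allocator in the same process, can move the program break between observation and extension. The primitive itself in isolation, as a POSIX-only sketch:

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	void *prev = sbrk(0);			/* current program break */

	if (sbrk(4096) == (void *)-1)		/* try to extend by a page */
		return (1);
	printf("break: %p -> %p\n", prev, sbrk(0));
	return (0);
}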
@ -6,14 +6,16 @@
static void *
chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero, bool *commit)
{
void *ret, *pages;
size_t alloc_size, leadsize;
void *ret;
size_t alloc_size;

alloc_size = size + alignment - PAGE;
/* Beware size_t wrap-around. */
if (alloc_size < size)
return (NULL);
do {
void *pages;
size_t leadsize;
pages = pages_map(NULL, alloc_size);
if (pages == NULL)
return (NULL);
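chunk_alloc_mmap_slow gets alignment by over-allocating size + alignment - PAGE and trimming the misaligned lead. The pointer arithmetic, with a fabricated pages_map() result so the sketch runs anywhere:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	size_t size = (size_t)1 << 21, alignment = (size_t)1 << 21;
	size_t page = 4096;
	size_t alloc_size = size + alignment - page;
	uintptr_t pages = 0x7f0000003000;	/* pretend mapping address */
	uintptr_t aligned = (pages + alignment - 1) & ~(alignment - 1);
	size_t leadsize = aligned - pages;	/* trimmed from the front */

	printf("alloc %zu, lead %zu, keep %p\n", alloc_size, leadsize,
	    (void *)aligned);
	return (0);
}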
@ -283,12 +283,12 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;

if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd, tab, tcache_get(tsd, false), true);
idalloctm(tsd, tab, tcache_get(tsd, false), true, true);
break;
}

/* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true);
ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets;
}

@ -330,7 +330,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;

if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd, tab, tcache_get(tsd, false), true);
idalloctm(tsd, tab, tcache_get(tsd, false), true, true);
#ifdef CKH_COUNT
ckh->nshrinks++;
#endif

@ -338,7 +338,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
}

/* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true);
ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets;
#ifdef CKH_COUNT

@ -421,7 +421,7 @@ ckh_delete(tsd_t *tsd, ckh_t *ckh)
(unsigned long long)ckh->nrelocs);
#endif

idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true);
if (config_debug)
memset(ckh, 0x5a, sizeof(ckh_t));
}
@ -115,7 +115,7 @@ CTL_PROTO(tcache_create)
CTL_PROTO(tcache_flush)
CTL_PROTO(tcache_destroy)
CTL_PROTO(arena_i_purge)
static void arena_purge(unsigned arena_ind);
static void arena_i_purge(unsigned arena_ind);
CTL_PROTO(arena_i_dss)
CTL_PROTO(arena_i_lg_dirty_mult)
CTL_PROTO(arena_i_chunk_hooks)

@ -1538,7 +1538,7 @@ label_return:

/* ctl_mutex must be held during execution of this function. */
static void
arena_purge(unsigned arena_ind)
arena_i_purge(unsigned arena_ind)
{
tsd_t *tsd;
unsigned i;

@ -1576,7 +1576,7 @@ arena_i_purge_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
READONLY();
WRITEONLY();
malloc_mutex_lock(&ctl_mtx);
arena_purge(mib[1]);
arena_i_purge(mib[1]);
malloc_mutex_unlock(&ctl_mtx);

ret = 0;
@ -15,7 +15,7 @@ extent_quantize(size_t size)
}

JEMALLOC_INLINE_C int
extent_szad_comp(extent_node_t *a, extent_node_t *b)
extent_szad_comp(const extent_node_t *a, const extent_node_t *b)
{
int ret;
size_t a_qsize = extent_quantize(extent_node_size_get(a));

@ -41,7 +41,7 @@ rb_gen(, extent_tree_szad_, extent_tree_t, extent_node_t, szad_link,
extent_szad_comp)

JEMALLOC_INLINE_C int
extent_ad_comp(extent_node_t *a, extent_node_t *b)
extent_ad_comp(const extent_node_t *a, const extent_node_t *b)
{
uintptr_t a_addr = (uintptr_t)extent_node_addr_get(a);
uintptr_t b_addr = (uintptr_t)extent_node_addr_get(b);
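extent_szad_comp orders nodes by quantized size and, for equal sizes, by address, which is what lets the szad tree serve best-fit lookups. A comparator of the same shape over a plain struct (the extent accessors are jemalloc-internal, so ordinary fields stand in here):

#include <stdint.h>
#include <stdio.h>

typedef struct {
	void	*addr;
	size_t	 size;
} node_t;

static int
szad_comp(const node_t *a, const node_t *b)
{
	int ret = (a->size > b->size) - (a->size < b->size);

	if (ret == 0) {
		uintptr_t aa = (uintptr_t)a->addr, ba = (uintptr_t)b->addr;
		ret = (aa > ba) - (aa < ba);
	}
	return (ret);
}

int
main(void)
{
	node_t x = {(void *)0x1000, 64}, y = {(void *)0x2000, 64};

	printf("%d\n", szad_comp(&x, &y));	/* -1: same size, lower address */
	return (0);
}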
@ -75,7 +75,7 @@ huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
arena = arena_choose(tsd, arena);
if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(arena,
size, alignment, &is_zeroed)) == NULL) {
idalloctm(tsd, node, tcache, true);
idalloctm(tsd, node, tcache, true, true);
return (NULL);
}

@ -83,7 +83,7 @@ huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,

if (huge_node_set(ret, node)) {
arena_chunk_dalloc_huge(arena, ret, size);
idalloctm(tsd, node, tcache, true);
idalloctm(tsd, node, tcache, true, true);
return (NULL);
}

@ -126,44 +126,46 @@ huge_dalloc_junk_t *huge_dalloc_junk = JEMALLOC_N(huge_dalloc_junk_impl);
#endif

static void
huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
size_t size, size_t extra, bool zero)
huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero)
{
size_t usize_next;
size_t usize, usize_next;
extent_node_t *node;
arena_t *arena;
chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER;
bool zeroed;
bool pre_zeroed, post_zeroed;

/* Increase usize to incorporate extra. */
while (usize < s2u(size+extra) && (usize_next = s2u(usize+1)) < oldsize)
usize = usize_next;
for (usize = usize_min; usize < usize_max && (usize_next = s2u(usize+1))
<= oldsize; usize = usize_next)
; /* Do nothing. */

if (oldsize == usize)
return;

node = huge_node_get(ptr);
arena = extent_node_arena_get(node);
pre_zeroed = extent_node_zeroed_get(node);

/* Fill if necessary (shrinking). */
if (oldsize > usize) {
size_t sdiff = oldsize - usize;
if (config_fill && unlikely(opt_junk_free)) {
memset((void *)((uintptr_t)ptr + usize), 0x5a, sdiff);
zeroed = false;
post_zeroed = false;
} else {
zeroed = !chunk_purge_wrapper(arena, &chunk_hooks, ptr,
CHUNK_CEILING(oldsize), usize, sdiff);
post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks,
ptr, CHUNK_CEILING(oldsize), usize, sdiff);
}
} else
zeroed = true;
post_zeroed = pre_zeroed;

malloc_mutex_lock(&arena->huge_mtx);
/* Update the size of the huge allocation. */
assert(extent_node_size_get(node) != usize);
extent_node_size_set(node, usize);
/* Clear node's zeroed field if zeroing failed above. */
extent_node_zeroed_set(node, extent_node_zeroed_get(node) && zeroed);
/* Update zeroed. */
extent_node_zeroed_set(node, post_zeroed);
malloc_mutex_unlock(&arena->huge_mtx);

arena_chunk_ralloc_huge_similar(arena, ptr, oldsize, usize);

@ -171,7 +173,7 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
/* Fill if necessary (growing). */
if (oldsize < usize) {
if (zero || (config_fill && unlikely(opt_zero))) {
if (!zeroed) {
if (!pre_zeroed) {
memset((void *)((uintptr_t)ptr + oldsize), 0,
usize - oldsize);
}

@ -189,12 +191,15 @@ huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
arena_t *arena;
chunk_hooks_t chunk_hooks;
size_t cdiff;
bool zeroed;
bool pre_zeroed, post_zeroed;

node = huge_node_get(ptr);
arena = extent_node_arena_get(node);
pre_zeroed = extent_node_zeroed_get(node);
chunk_hooks = chunk_hooks_get(arena);

assert(oldsize > usize);

/* Split excess chunks. */
cdiff = CHUNK_CEILING(oldsize) - CHUNK_CEILING(usize);
if (cdiff != 0 && chunk_hooks.split(ptr, CHUNK_CEILING(oldsize),

@ -206,21 +211,21 @@ huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
if (config_fill && unlikely(opt_junk_free)) {
huge_dalloc_junk((void *)((uintptr_t)ptr + usize),
sdiff);
zeroed = false;
post_zeroed = false;
} else {
zeroed = !chunk_purge_wrapper(arena, &chunk_hooks,
post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks,
CHUNK_ADDR2BASE((uintptr_t)ptr + usize),
CHUNK_CEILING(oldsize),
CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff);
}
} else
zeroed = true;
post_zeroed = pre_zeroed;

malloc_mutex_lock(&arena->huge_mtx);
/* Update the size of the huge allocation. */
extent_node_size_set(node, usize);
/* Clear node's zeroed field if zeroing failed above. */
extent_node_zeroed_set(node, extent_node_zeroed_get(node) && zeroed);
/* Update zeroed. */
extent_node_zeroed_set(node, post_zeroed);
malloc_mutex_unlock(&arena->huge_mtx);

/* Zap the excess chunks. */

@ -230,18 +235,11 @@ huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
}

static bool
huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t size, bool zero) {
size_t usize;
huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) {
extent_node_t *node;
arena_t *arena;
bool is_zeroed_subchunk, is_zeroed_chunk;

usize = s2u(size);
if (usize == 0) {
/* size_t overflow. */
return (true);
}

node = huge_node_get(ptr);
arena = extent_node_arena_get(node);
malloc_mutex_lock(&arena->huge_mtx);

@ -282,89 +280,76 @@ huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t size, bool zero) {
}

bool
huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra,
bool zero)
huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero)
{
size_t usize;

/* Both allocations must be huge to avoid a move. */
if (oldsize < chunksize)
return (true);

assert(s2u(oldsize) == oldsize);
usize = s2u(size);
if (usize == 0) {
/* size_t overflow. */

/* Both allocations must be huge to avoid a move. */
if (oldsize < chunksize || usize_max < chunksize)
return (true);

if (CHUNK_CEILING(usize_max) > CHUNK_CEILING(oldsize)) {
/* Attempt to expand the allocation in-place. */
if (!huge_ralloc_no_move_expand(ptr, oldsize, usize_max, zero))
return (false);
/* Try again, this time with usize_min. */
if (usize_min < usize_max && CHUNK_CEILING(usize_min) >
CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(ptr,
oldsize, usize_min, zero))
return (false);
}

/*
* Avoid moving the allocation if the existing chunk size accommodates
* the new size.
*/
if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize)
&& CHUNK_CEILING(oldsize) <= CHUNK_CEILING(s2u(size+extra))) {
huge_ralloc_no_move_similar(ptr, oldsize, usize, size, extra,
if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize_min)
&& CHUNK_CEILING(oldsize) <= CHUNK_CEILING(usize_max)) {
huge_ralloc_no_move_similar(ptr, oldsize, usize_min, usize_max,
zero);
return (false);
}

/* Attempt to shrink the allocation in-place. */
if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize))
return (huge_ralloc_no_move_shrink(ptr, oldsize, usize));
if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max))
return (huge_ralloc_no_move_shrink(ptr, oldsize, usize_max));
return (true);
}

/* Attempt to expand the allocation in-place. */
if (huge_ralloc_no_move_expand(ptr, oldsize, size + extra, zero)) {
if (extra == 0)
return (true);

static void *
huge_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize,
size_t alignment, bool zero, tcache_t *tcache)
{

/* Try again, this time without extra. */
return (huge_ralloc_no_move_expand(ptr, oldsize, size, zero));
}
return (false);
if (alignment <= chunksize)
return (huge_malloc(tsd, arena, usize, zero, tcache));
return (huge_palloc(tsd, arena, usize, alignment, zero, tcache));
}

void *
huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t size,
size_t extra, size_t alignment, bool zero, tcache_t *tcache)
huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize,
size_t alignment, bool zero, tcache_t *tcache)
{
void *ret;
size_t copysize;

/* Try to avoid moving the allocation. */
if (!huge_ralloc_no_move(ptr, oldsize, size, extra, zero))
if (!huge_ralloc_no_move(ptr, oldsize, usize, usize, zero))
return (ptr);

/*
* size and oldsize are different enough that we need to use a
* usize and oldsize are different enough that we need to use a
* different size class. In that case, fall back to allocating new
* space and copying.
*/
if (alignment > chunksize) {
ret = huge_palloc(tsd, arena, size + extra, alignment, zero,
tcache);
} else
ret = huge_malloc(tsd, arena, size + extra, zero, tcache);
ret = huge_ralloc_move_helper(tsd, arena, usize, alignment, zero,
tcache);
if (ret == NULL)
return (NULL);

if (ret == NULL) {
if (extra == 0)
return (NULL);
/* Try again, this time without extra. */
if (alignment > chunksize) {
ret = huge_palloc(tsd, arena, size, alignment, zero,
tcache);
} else
ret = huge_malloc(tsd, arena, size, zero, tcache);

if (ret == NULL)
return (NULL);
}

/*
* Copy at most size bytes (not size+extra), since the caller has no
* expectation that the extra bytes will be reliably preserved.
*/
copysize = (size < oldsize) ? size : oldsize;
copysize = (usize < oldsize) ? usize : oldsize;
memcpy(ret, ptr, copysize);
isqalloc(tsd, ptr, oldsize, tcache);
return (ret);

@ -387,7 +372,7 @@ huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
extent_node_size_get(node));
arena_chunk_dalloc_huge(extent_node_arena_get(node),
extent_node_addr_get(node), extent_node_size_get(node));
idalloctm(tsd, node, tcache, true);
idalloctm(tsd, node, tcache, true, true);
}

arena_t *

@ -441,3 +426,10 @@ huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx)
extent_node_prof_tctx_set(node, tctx);
malloc_mutex_unlock(&arena->huge_mtx);
}

void
huge_prof_tctx_reset(const void *ptr)
{

huge_prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U);
}
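All of the in-place paths above reduce to comparing chunk footprints. A worked example of the CHUNK_CEILING tests, assuming a 2 MiB chunk (a common default for this jemalloc generation; treat the constant as illustrative):

#include <stdio.h>

#define CHUNKSIZE ((size_t)1 << 21)	/* assumed 2 MiB chunk */
#define CHUNK_CEILING(s) (((s) + CHUNKSIZE - 1) & ~(CHUNKSIZE - 1))

int
main(void)
{
	size_t oldsize = 3 * CHUNKSIZE;
	size_t usize_max = 3 * CHUNKSIZE - 4096;

	/* Same footprint: huge_ralloc_no_move_similar handles it in place. */
	printf("%d\n", CHUNK_CEILING(oldsize) == CHUNK_CEILING(usize_max));
	/* One more chunk needed: the expand path must be attempted. */
	printf("%d\n", CHUNK_CEILING(oldsize + 1) > CHUNK_CEILING(oldsize));
	return (0);
}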
|
||||
|
@ -70,12 +70,29 @@ typedef enum {
|
||||
} malloc_init_t;
|
||||
static malloc_init_t malloc_init_state = malloc_init_uninitialized;
|
||||
|
||||
/* 0 should be the common case. Set to true to trigger initialization. */
|
||||
static bool malloc_slow = true;
|
||||
|
||||
/* When malloc_slow != 0, set the corresponding bits for sanity check. */
|
||||
enum {
|
||||
flag_opt_junk_alloc = (1U),
|
||||
flag_opt_junk_free = (1U << 1),
|
||||
flag_opt_quarantine = (1U << 2),
|
||||
flag_opt_zero = (1U << 3),
|
||||
flag_opt_utrace = (1U << 4),
|
||||
flag_in_valgrind = (1U << 5),
|
||||
flag_opt_xmalloc = (1U << 6)
|
||||
};
|
||||
static uint8_t malloc_slow_flags;
|
||||
|
||||
/* Last entry for overflow detection only. */
|
||||
JEMALLOC_ALIGNED(CACHELINE)
|
||||
const size_t index2size_tab[NSIZES] = {
|
||||
const size_t index2size_tab[NSIZES+1] = {
|
||||
#define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \
|
||||
((ZU(1)<<lg_grp) + (ZU(ndelta)<<lg_delta)),
|
||||
SIZE_CLASSES
|
||||
#undef SC
|
||||
ZU(0)
|
||||
};
|
||||
|
||||
JEMALLOC_ALIGNED(CACHELINE)
|
||||
@ -309,14 +326,15 @@ a0ialloc(size_t size, bool zero, bool is_metadata)
|
||||
if (unlikely(malloc_init_a0()))
|
||||
return (NULL);
|
||||
|
||||
return (iallocztm(NULL, size, zero, false, is_metadata, a0get()));
|
||||
return (iallocztm(NULL, size, size2index(size), zero, false,
|
||||
is_metadata, a0get(), true));
|
||||
}
|
||||
|
||||
static void
|
||||
a0idalloc(void *ptr, bool is_metadata)
|
||||
{
|
||||
|
||||
idalloctm(NULL, ptr, false, is_metadata);
|
||||
idalloctm(NULL, ptr, false, is_metadata, true);
|
||||
}
|
||||
|
||||
void *
|
||||
@ -838,6 +856,26 @@ malloc_conf_error(const char *msg, const char *k, size_t klen, const char *v,
|
||||
(int)vlen, v);
|
||||
}
|
||||
|
||||
static void
|
||||
malloc_slow_flag_init(void)
|
||||
{
|
||||
/*
|
||||
* Combine the runtime options into malloc_slow for fast path. Called
|
||||
* after processing all the options.
|
||||
*/
|
||||
malloc_slow_flags |= (opt_junk_alloc ? flag_opt_junk_alloc : 0)
|
||||
| (opt_junk_free ? flag_opt_junk_free : 0)
|
||||
| (opt_quarantine ? flag_opt_quarantine : 0)
|
||||
| (opt_zero ? flag_opt_zero : 0)
|
||||
| (opt_utrace ? flag_opt_utrace : 0)
|
||||
| (opt_xmalloc ? flag_opt_xmalloc : 0);
|
||||
|
||||
if (config_valgrind)
|
||||
malloc_slow_flags |= (in_valgrind ? flag_in_valgrind : 0);
|
||||
|
||||
malloc_slow = (malloc_slow_flags != 0);
|
||||
}
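The function above exists so the allocation fast path can test a single word instead of re-checking every runtime option on each call. A minimal standalone sketch of the same folding pattern; all demo_* names are illustrative, not jemalloc's:

#include <stdbool.h>
#include <stdint.h>

/* One bit per feature that forces the slow path (hypothetical names). */
enum {
	demo_flag_junk  = 1U,
	demo_flag_zero  = 1U << 1,
	demo_flag_trace = 1U << 2
};

static uint8_t demo_slow_flags;
static bool demo_slow = true;	/* pessimistic until options are parsed */

/* Called once after option parsing, mirroring malloc_slow_flag_init(). */
static void
demo_slow_init(bool opt_junk, bool opt_zero, bool opt_trace)
{
	demo_slow_flags = (opt_junk ? demo_flag_junk : 0)
	    | (opt_zero ? demo_flag_zero : 0)
	    | (opt_trace ? demo_flag_trace : 0);
	demo_slow = (demo_slow_flags != 0);
}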

static void
malloc_conf_init(void)
{
@ -1304,6 +1342,8 @@ malloc_init_hard_finish(void)
	arenas[0] = a0;

	malloc_init_state = malloc_init_initialized;
	malloc_slow_flag_init();

	return (false);
}

@ -1355,34 +1395,36 @@ malloc_init_hard(void)
 */

static void *
imalloc_prof_sample(tsd_t *tsd, size_t usize, prof_tctx_t *tctx)
imalloc_prof_sample(tsd_t *tsd, size_t usize, szind_t ind,
    prof_tctx_t *tctx, bool slow_path)
{
	void *p;

	if (tctx == NULL)
		return (NULL);
	if (usize <= SMALL_MAXCLASS) {
		p = imalloc(tsd, LARGE_MINCLASS);
		szind_t ind_large = size2index(LARGE_MINCLASS);
		p = imalloc(tsd, LARGE_MINCLASS, ind_large, slow_path);
		if (p == NULL)
			return (NULL);
		arena_prof_promoted(p, usize);
	} else
		p = imalloc(tsd, usize);
		p = imalloc(tsd, usize, ind, slow_path);

	return (p);
}

JEMALLOC_ALWAYS_INLINE_C void *
imalloc_prof(tsd_t *tsd, size_t usize)
imalloc_prof(tsd_t *tsd, size_t usize, szind_t ind, bool slow_path)
{
	void *p;
	prof_tctx_t *tctx;

	tctx = prof_alloc_prep(tsd, usize, true);
	tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U))
		p = imalloc_prof_sample(tsd, usize, tctx);
		p = imalloc_prof_sample(tsd, usize, ind, tctx, slow_path);
	else
		p = imalloc(tsd, usize);
		p = imalloc(tsd, usize, ind, slow_path);
	if (unlikely(p == NULL)) {
		prof_alloc_rollback(tsd, tctx, true);
		return (NULL);
@ -1393,23 +1435,45 @@ imalloc_prof(tsd_t *tsd, size_t usize)
}

JEMALLOC_ALWAYS_INLINE_C void *
imalloc_body(size_t size, tsd_t **tsd, size_t *usize)
imalloc_body(size_t size, tsd_t **tsd, size_t *usize, bool slow_path)
{
	szind_t ind;

	if (unlikely(malloc_init()))
	if (slow_path && unlikely(malloc_init()))
		return (NULL);
	*tsd = tsd_fetch();
	ind = size2index(size);

	if (config_prof && opt_prof) {
		*usize = s2u(size);
		if (unlikely(*usize == 0))
			return (NULL);
		return (imalloc_prof(*tsd, *usize));
	if (config_stats ||
	    (config_prof && opt_prof) ||
	    (slow_path && config_valgrind && unlikely(in_valgrind))) {
		*usize = index2size(ind);
	}

	if (config_stats || (config_valgrind && unlikely(in_valgrind)))
		*usize = s2u(size);
	return (imalloc(*tsd, size));
	if (config_prof && opt_prof) {
		if (unlikely(*usize == 0))
			return (NULL);
		return (imalloc_prof(*tsd, *usize, ind, slow_path));
	}

	return (imalloc(*tsd, size, ind, slow_path));
}
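imalloc_body() now computes the size class index once and derives usize from it, where index2size(size2index(n)) plays the role the separate s2u(n) call used to. A toy sketch of that round-trip, with a hand-written table standing in for jemalloc's generated SIZE_CLASSES; everything demo_* is illustrative:

#include <stddef.h>

/* Illustrative size-class table; jemalloc's real one is generated. */
static const size_t demo_class_tab[] = {8, 16, 32, 48, 64, 96, 128};
#define DEMO_NCLASSES (sizeof(demo_class_tab) / sizeof(demo_class_tab[0]))

/* Smallest class index whose size can hold a request of size bytes. */
static size_t
demo_size2index(size_t size)
{
	size_t i;

	for (i = 0; i < DEMO_NCLASSES; i++) {
		if (size <= demo_class_tab[i])
			return (i);
	}
	return (DEMO_NCLASSES);	/* larger than any tabulated class */
}

static size_t
demo_index2size(size_t ind)
{
	return (demo_class_tab[ind]);
}

/* demo_index2size(demo_size2index(n)) rounds n up to its class size. */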

JEMALLOC_ALWAYS_INLINE_C void
imalloc_post_check(void *ret, tsd_t *tsd, size_t usize, bool slow_path)
{
	if (unlikely(ret == NULL)) {
		if (slow_path && config_xmalloc && unlikely(opt_xmalloc)) {
			malloc_write("<jemalloc>: Error in malloc(): "
			    "out of memory\n");
			abort();
		}
		set_errno(ENOMEM);
	}
	if (config_stats && likely(ret != NULL)) {
		assert(usize == isalloc(ret, config_prof));
		*tsd_thread_allocatedp_get(tsd) += usize;
	}
}

JEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN
@ -1424,21 +1488,20 @@ je_malloc(size_t size)
	if (size == 0)
		size = 1;

	ret = imalloc_body(size, &tsd, &usize);
	if (unlikely(ret == NULL)) {
		if (config_xmalloc && unlikely(opt_xmalloc)) {
			malloc_write("<jemalloc>: Error in malloc(): "
			    "out of memory\n");
			abort();
		}
		set_errno(ENOMEM);
	if (likely(!malloc_slow)) {
		/*
		 * imalloc_body() is inlined so that fast and slow paths are
		 * generated separately with statically known slow_path.
		 */
		ret = imalloc_body(size, &tsd, &usize, false);
		imalloc_post_check(ret, tsd, usize, false);
	} else {
		ret = imalloc_body(size, &tsd, &usize, true);
		imalloc_post_check(ret, tsd, usize, true);
		UTRACE(0, size, ret);
		JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, false);
	}
	if (config_stats && likely(ret != NULL)) {
		assert(usize == isalloc(ret, config_prof));
		*tsd_thread_allocatedp_get(tsd) += usize;
	}
	UTRACE(0, size, ret);
	JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, false);

	return (ret);
}
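Because imalloc_body() is always inlined and slow_path is a compile-time constant at each call site, the compiler emits two specialized copies of the body and dead-codes the untaken branches. The technique in isolation, under the same assumption; demo_* names are hypothetical:

#include <stdbool.h>
#include <stddef.h>

static bool demo_slow;	/* set once at startup, as in the flag sketch above */

static inline void *
demo_body(size_t size, bool slow_path)
{
	if (slow_path) {
		/* tracing, valgrind hooks, xmalloc handling, ... */
	}
	/* common allocation work would go here */
	(void)size;
	return (NULL);
}

void *
demo_malloc(size_t size)
{
	if (!demo_slow)
		return (demo_body(size, false));	/* specialized fast copy */
	return (demo_body(size, true));			/* specialized slow copy */
}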

@ -1468,7 +1531,7 @@ imemalign_prof(tsd_t *tsd, size_t alignment, size_t usize)
	void *p;
	prof_tctx_t *tctx;

	tctx = prof_alloc_prep(tsd, usize, true);
	tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U))
		p = imemalign_prof_sample(tsd, alignment, usize, tctx);
	else
@ -1576,34 +1639,35 @@ je_aligned_alloc(size_t alignment, size_t size)
}

static void *
icalloc_prof_sample(tsd_t *tsd, size_t usize, prof_tctx_t *tctx)
icalloc_prof_sample(tsd_t *tsd, size_t usize, szind_t ind, prof_tctx_t *tctx)
{
	void *p;

	if (tctx == NULL)
		return (NULL);
	if (usize <= SMALL_MAXCLASS) {
		p = icalloc(tsd, LARGE_MINCLASS);
		szind_t ind_large = size2index(LARGE_MINCLASS);
		p = icalloc(tsd, LARGE_MINCLASS, ind_large);
		if (p == NULL)
			return (NULL);
		arena_prof_promoted(p, usize);
	} else
		p = icalloc(tsd, usize);
		p = icalloc(tsd, usize, ind);

	return (p);
}

JEMALLOC_ALWAYS_INLINE_C void *
icalloc_prof(tsd_t *tsd, size_t usize)
icalloc_prof(tsd_t *tsd, size_t usize, szind_t ind)
{
	void *p;
	prof_tctx_t *tctx;

	tctx = prof_alloc_prep(tsd, usize, true);
	tctx = prof_alloc_prep(tsd, usize, prof_active_get_unlocked(), true);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U))
		p = icalloc_prof_sample(tsd, usize, tctx);
		p = icalloc_prof_sample(tsd, usize, ind, tctx);
	else
		p = icalloc(tsd, usize);
		p = icalloc(tsd, usize, ind);
	if (unlikely(p == NULL)) {
		prof_alloc_rollback(tsd, tctx, true);
		return (NULL);
@ -1621,6 +1685,7 @@ je_calloc(size_t num, size_t size)
	void *ret;
	tsd_t *tsd;
	size_t num_size;
	szind_t ind;
	size_t usize JEMALLOC_CC_SILENCE_INIT(0);

	if (unlikely(malloc_init())) {
@ -1650,17 +1715,18 @@ je_calloc(size_t num, size_t size)
		goto label_return;
	}

	ind = size2index(num_size);
	if (config_prof && opt_prof) {
		usize = s2u(num_size);
		usize = index2size(ind);
		if (unlikely(usize == 0)) {
			ret = NULL;
			goto label_return;
		}
		ret = icalloc_prof(tsd, usize);
		ret = icalloc_prof(tsd, usize, ind);
	} else {
		if (config_stats || (config_valgrind && unlikely(in_valgrind)))
			usize = s2u(num_size);
		ret = icalloc(tsd, num_size);
			usize = index2size(ind);
		ret = icalloc(tsd, num_size, ind);
	}

label_return:
@ -1682,7 +1748,7 @@ label_return:
}

static void *
irealloc_prof_sample(tsd_t *tsd, void *oldptr, size_t old_usize, size_t usize,
irealloc_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize,
    prof_tctx_t *tctx)
{
	void *p;
@ -1690,37 +1756,42 @@ irealloc_prof_sample(tsd_t *tsd, void *oldptr, size_t old_usize, size_t usize,
	if (tctx == NULL)
		return (NULL);
	if (usize <= SMALL_MAXCLASS) {
		p = iralloc(tsd, oldptr, old_usize, LARGE_MINCLASS, 0, false);
		p = iralloc(tsd, old_ptr, old_usize, LARGE_MINCLASS, 0, false);
		if (p == NULL)
			return (NULL);
		arena_prof_promoted(p, usize);
	} else
		p = iralloc(tsd, oldptr, old_usize, usize, 0, false);
		p = iralloc(tsd, old_ptr, old_usize, usize, 0, false);

	return (p);
}

JEMALLOC_ALWAYS_INLINE_C void *
irealloc_prof(tsd_t *tsd, void *oldptr, size_t old_usize, size_t usize)
irealloc_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t usize)
{
	void *p;
	bool prof_active;
	prof_tctx_t *old_tctx, *tctx;

	old_tctx = prof_tctx_get(oldptr);
	tctx = prof_alloc_prep(tsd, usize, true);
	prof_active = prof_active_get_unlocked();
	old_tctx = prof_tctx_get(old_ptr);
	tctx = prof_alloc_prep(tsd, usize, prof_active, true);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U))
		p = irealloc_prof_sample(tsd, oldptr, old_usize, usize, tctx);
		p = irealloc_prof_sample(tsd, old_ptr, old_usize, usize, tctx);
	else
		p = iralloc(tsd, oldptr, old_usize, usize, 0, false);
	if (p == NULL)
		p = iralloc(tsd, old_ptr, old_usize, usize, 0, false);
	if (unlikely(p == NULL)) {
		prof_alloc_rollback(tsd, tctx, true);
		return (NULL);
	prof_realloc(tsd, p, usize, tctx, true, old_usize, old_tctx);
	}
	prof_realloc(tsd, p, usize, tctx, prof_active, true, old_ptr, old_usize,
	    old_tctx);

	return (p);
}

JEMALLOC_INLINE_C void
ifree(tsd_t *tsd, void *ptr, tcache_t *tcache)
ifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
{
	size_t usize;
	UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0);
@ -1735,10 +1806,15 @@ ifree(tsd_t *tsd, void *ptr, tcache_t *tcache)
	usize = isalloc(ptr, config_prof);
	if (config_stats)
		*tsd_thread_deallocatedp_get(tsd) += usize;
	if (config_valgrind && unlikely(in_valgrind))
		rzsize = p2rz(ptr);
	iqalloc(tsd, ptr, tcache);
	JEMALLOC_VALGRIND_FREE(ptr, rzsize);

	if (likely(!slow_path))
		iqalloc(tsd, ptr, tcache, false);
	else {
		if (config_valgrind && unlikely(in_valgrind))
			rzsize = p2rz(ptr);
		iqalloc(tsd, ptr, tcache, true);
		JEMALLOC_VALGRIND_FREE(ptr, rzsize);
	}
}

JEMALLOC_INLINE_C void
@ -1775,7 +1851,7 @@ je_realloc(void *ptr, size_t size)
		/* realloc(ptr, 0) is equivalent to free(ptr). */
		UTRACE(ptr, 0, 0);
		tsd = tsd_fetch();
		ifree(tsd, ptr, tcache_get(tsd, false));
		ifree(tsd, ptr, tcache_get(tsd, false), true);
		return (NULL);
	}
	size = 1;
@ -1802,7 +1878,10 @@ je_realloc(void *ptr, size_t size)
		}
	} else {
		/* realloc(NULL, size) is equivalent to malloc(size). */
		ret = imalloc_body(size, &tsd, &usize);
		if (likely(!malloc_slow))
			ret = imalloc_body(size, &tsd, &usize, false);
		else
			ret = imalloc_body(size, &tsd, &usize, true);
	}

	if (unlikely(ret == NULL)) {
@ -1831,7 +1910,10 @@ je_free(void *ptr)
	UTRACE(ptr, 0, 0);
	if (likely(ptr != NULL)) {
		tsd_t *tsd = tsd_fetch();
		ifree(tsd, ptr, tcache_get(tsd, false));
		if (likely(!malloc_slow))
			ifree(tsd, ptr, tcache_get(tsd, false), false);
		else
			ifree(tsd, ptr, tcache_get(tsd, false), true);
	}
}

@ -1918,6 +2000,7 @@ imallocx_flags_decode_hard(tsd_t *tsd, size_t size, int flags, size_t *usize,
		*alignment = MALLOCX_ALIGN_GET_SPECIFIED(flags);
		*usize = sa2u(size, *alignment);
	}
	assert(*usize != 0);
	*zero = MALLOCX_ZERO_GET(flags);
	if ((flags & MALLOCX_TCACHE_MASK) != 0) {
		if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE)
@ -1959,42 +2042,32 @@ JEMALLOC_ALWAYS_INLINE_C void *
imallocx_flags(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, arena_t *arena)
{
	szind_t ind;

	if (alignment != 0)
	ind = size2index(usize);
	if (unlikely(alignment != 0))
		return (ipalloct(tsd, usize, alignment, zero, tcache, arena));
	if (zero)
		return (icalloct(tsd, usize, tcache, arena));
	return (imalloct(tsd, usize, tcache, arena));
}

JEMALLOC_ALWAYS_INLINE_C void *
imallocx_maybe_flags(tsd_t *tsd, size_t size, int flags, size_t usize,
    size_t alignment, bool zero, tcache_t *tcache, arena_t *arena)
{

	if (likely(flags == 0))
		return (imalloc(tsd, size));
	return (imallocx_flags(tsd, usize, alignment, zero, tcache, arena));
	if (unlikely(zero))
		return (icalloct(tsd, usize, ind, tcache, arena));
	return (imalloct(tsd, usize, ind, tcache, arena));
}

static void *
imallocx_prof_sample(tsd_t *tsd, size_t size, int flags, size_t usize,
    size_t alignment, bool zero, tcache_t *tcache, arena_t *arena)
imallocx_prof_sample(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, arena_t *arena)
{
	void *p;

	if (usize <= SMALL_MAXCLASS) {
		assert(((alignment == 0) ? s2u(LARGE_MINCLASS) :
		    sa2u(LARGE_MINCLASS, alignment)) == LARGE_MINCLASS);
		p = imallocx_maybe_flags(tsd, LARGE_MINCLASS, flags,
		    LARGE_MINCLASS, alignment, zero, tcache, arena);
		p = imallocx_flags(tsd, LARGE_MINCLASS, alignment, zero, tcache,
		    arena);
		if (p == NULL)
			return (NULL);
		arena_prof_promoted(p, usize);
	} else {
		p = imallocx_maybe_flags(tsd, size, flags, usize, alignment,
		    zero, tcache, arena);
	}
	} else
		p = imallocx_flags(tsd, usize, alignment, zero, tcache, arena);

	return (p);
}
@ -2012,13 +2085,12 @@ imallocx_prof(tsd_t *tsd, size_t size, int flags, size_t *usize)
	if (unlikely(imallocx_flags_decode(tsd, size, flags, usize, &alignment,
	    &zero, &tcache, &arena)))
		return (NULL);
	tctx = prof_alloc_prep(tsd, *usize, true);
	if (likely((uintptr_t)tctx == (uintptr_t)1U)) {
		p = imallocx_maybe_flags(tsd, size, flags, *usize, alignment,
		    zero, tcache, arena);
	} else if ((uintptr_t)tctx > (uintptr_t)1U) {
		p = imallocx_prof_sample(tsd, size, flags, *usize, alignment,
		    zero, tcache, arena);
	tctx = prof_alloc_prep(tsd, *usize, prof_active_get_unlocked(), true);
	if (likely((uintptr_t)tctx == (uintptr_t)1U))
		p = imallocx_flags(tsd, *usize, alignment, zero, tcache, arena);
	else if ((uintptr_t)tctx > (uintptr_t)1U) {
		p = imallocx_prof_sample(tsd, *usize, alignment, zero, tcache,
		    arena);
	} else
		p = NULL;
	if (unlikely(p == NULL)) {
@ -2041,9 +2113,10 @@ imallocx_no_prof(tsd_t *tsd, size_t size, int flags, size_t *usize)
	arena_t *arena;

	if (likely(flags == 0)) {
		szind_t ind = size2index(size);
		if (config_stats || (config_valgrind && unlikely(in_valgrind)))
			*usize = s2u(size);
		return (imalloc(tsd, size));
			*usize = index2size(ind);
		return (imalloc(tsd, size, ind, true));
	}

	if (unlikely(imallocx_flags_decode_hard(tsd, size, flags, usize,
@ -2093,8 +2166,8 @@ label_oom:
}

static void *
irallocx_prof_sample(tsd_t *tsd, void *oldptr, size_t old_usize, size_t size,
    size_t alignment, size_t usize, bool zero, tcache_t *tcache, arena_t *arena,
irallocx_prof_sample(tsd_t *tsd, void *old_ptr, size_t old_usize,
    size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena,
    prof_tctx_t *tctx)
{
	void *p;
@ -2102,13 +2175,13 @@ irallocx_prof_sample(tsd_t *tsd, void *oldptr, size_t old_usize, size_t size,
	if (tctx == NULL)
		return (NULL);
	if (usize <= SMALL_MAXCLASS) {
		p = iralloct(tsd, oldptr, old_usize, LARGE_MINCLASS, alignment,
		p = iralloct(tsd, old_ptr, old_usize, LARGE_MINCLASS, alignment,
		    zero, tcache, arena);
		if (p == NULL)
			return (NULL);
		arena_prof_promoted(p, usize);
	} else {
		p = iralloct(tsd, oldptr, old_usize, size, alignment, zero,
		p = iralloct(tsd, old_ptr, old_usize, usize, alignment, zero,
		    tcache, arena);
	}

@ -2116,28 +2189,30 @@ irallocx_prof_sample(tsd_t *tsd, void *oldptr, size_t old_usize, size_t size,
}

JEMALLOC_ALWAYS_INLINE_C void *
irallocx_prof(tsd_t *tsd, void *oldptr, size_t old_usize, size_t size,
irallocx_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t size,
    size_t alignment, size_t *usize, bool zero, tcache_t *tcache,
    arena_t *arena)
{
	void *p;
	bool prof_active;
	prof_tctx_t *old_tctx, *tctx;

	old_tctx = prof_tctx_get(oldptr);
	tctx = prof_alloc_prep(tsd, *usize, false);
	prof_active = prof_active_get_unlocked();
	old_tctx = prof_tctx_get(old_ptr);
	tctx = prof_alloc_prep(tsd, *usize, prof_active, true);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {
		p = irallocx_prof_sample(tsd, oldptr, old_usize, size,
		    alignment, *usize, zero, tcache, arena, tctx);
		p = irallocx_prof_sample(tsd, old_ptr, old_usize, *usize,
		    alignment, zero, tcache, arena, tctx);
	} else {
		p = iralloct(tsd, oldptr, old_usize, size, alignment, zero,
		p = iralloct(tsd, old_ptr, old_usize, size, alignment, zero,
		    tcache, arena);
	}
	if (unlikely(p == NULL)) {
		prof_alloc_rollback(tsd, tctx, false);
		prof_alloc_rollback(tsd, tctx, true);
		return (NULL);
	}

	if (p == oldptr && alignment != 0) {
	if (p == old_ptr && alignment != 0) {
		/*
		 * The allocation did not move, so it is possible that the size
		 * class is smaller than would guarantee the requested
@ -2148,7 +2223,8 @@ irallocx_prof(tsd_t *tsd, void *oldptr, size_t old_usize, size_t size,
		 */
		*usize = isalloc(p, config_prof);
	}
	prof_realloc(tsd, p, *usize, tctx, false, old_usize, old_tctx);
	prof_realloc(tsd, p, *usize, tctx, prof_active, true, old_ptr,
	    old_usize, old_tctx);

	return (p);
}
@ -2243,26 +2319,13 @@ ixallocx_helper(void *ptr, size_t old_usize, size_t size, size_t extra,

static size_t
ixallocx_prof_sample(void *ptr, size_t old_usize, size_t size, size_t extra,
    size_t alignment, size_t max_usize, bool zero, prof_tctx_t *tctx)
    size_t alignment, bool zero, prof_tctx_t *tctx)
{
	size_t usize;

	if (tctx == NULL)
		return (old_usize);
	/* Use minimum usize to determine whether promotion may happen. */
	if (((alignment == 0) ? s2u(size) : sa2u(size, alignment)) <=
	    SMALL_MAXCLASS) {
		if (ixalloc(ptr, old_usize, SMALL_MAXCLASS+1,
		    (SMALL_MAXCLASS+1 >= size+extra) ? 0 : size+extra -
		    (SMALL_MAXCLASS+1), alignment, zero))
			return (old_usize);
		usize = isalloc(ptr, config_prof);
		if (max_usize < LARGE_MINCLASS)
			arena_prof_promoted(ptr, usize);
	} else {
		usize = ixallocx_helper(ptr, old_usize, size, extra, alignment,
		    zero);
	}
	usize = ixallocx_helper(ptr, old_usize, size, extra, alignment, zero);

	return (usize);
}
@ -2271,9 +2334,11 @@ JEMALLOC_ALWAYS_INLINE_C size_t
ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size,
    size_t extra, size_t alignment, bool zero)
{
	size_t max_usize, usize;
	size_t usize_max, usize;
	bool prof_active;
	prof_tctx_t *old_tctx, *tctx;

	prof_active = prof_active_get_unlocked();
	old_tctx = prof_tctx_get(ptr);
	/*
	 * usize isn't knowable before ixalloc() returns when extra is non-zero.
@ -2281,21 +2346,23 @@ ixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size,
	 * prof_alloc_prep() to decide whether to capture a backtrace.
	 * prof_realloc() will use the actual usize to decide whether to sample.
	 */
	max_usize = (alignment == 0) ? s2u(size+extra) : sa2u(size+extra,
	usize_max = (alignment == 0) ? s2u(size+extra) : sa2u(size+extra,
	    alignment);
	tctx = prof_alloc_prep(tsd, max_usize, false);
	assert(usize_max != 0);
	tctx = prof_alloc_prep(tsd, usize_max, prof_active, false);
	if (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {
		usize = ixallocx_prof_sample(ptr, old_usize, size, extra,
		    alignment, zero, max_usize, tctx);
		    alignment, zero, tctx);
	} else {
		usize = ixallocx_helper(ptr, old_usize, size, extra, alignment,
		    zero);
	}
	if (unlikely(usize == old_usize)) {
	if (usize == old_usize) {
		prof_alloc_rollback(tsd, tctx, false);
		return (usize);
	}
	prof_realloc(tsd, ptr, usize, tctx, false, old_usize, old_tctx);
	prof_realloc(tsd, ptr, usize, tctx, prof_active, false, ptr, old_usize,
	    old_tctx);

	return (usize);
}
@ -2317,6 +2384,17 @@ je_xallocx(void *ptr, size_t size, size_t extra, int flags)
	tsd = tsd_fetch();

	old_usize = isalloc(ptr, config_prof);

	/* Clamp extra if necessary to avoid (size + extra) overflow. */
	if (unlikely(size + extra > HUGE_MAXCLASS)) {
		/* Check for size overflow. */
		if (unlikely(size > HUGE_MAXCLASS)) {
			usize = old_usize;
			goto label_not_resized;
		}
		extra = HUGE_MAXCLASS - size;
	}

	if (config_valgrind && unlikely(in_valgrind))
		old_rzsize = u2rz(old_usize);
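The clamp above keeps size + extra from exceeding HUGE_MAXCLASS while still refusing requests whose size alone is unsatisfiable. The same guard in isolation, hedged with an explicit wraparound check that this sketch adds for clarity; demo_* and the max parameter are illustrative stand-ins:

#include <stdbool.h>
#include <stddef.h>

/* Returns false if size alone overflows max; otherwise clamps *extra. */
static bool
demo_clamp_extra(size_t size, size_t *extra, size_t max)
{
	if (size > max)
		return (false);
	/* size + *extra may wrap around; treat wraparound as "too big" too. */
	if (size + *extra > max || size + *extra < size)
		*extra = max - size;
	return (true);
}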

@ -2377,7 +2455,7 @@ je_dallocx(void *ptr, int flags)
		tcache = tcache_get(tsd, false);

	UTRACE(ptr, 0, 0);
	ifree(tsd_fetch(), ptr, tcache);
	ifree(tsd_fetch(), ptr, tcache, true);
}

JEMALLOC_ALWAYS_INLINE_C size_t

@ -139,9 +139,16 @@ prof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b)
	uint64_t b_thr_uid = b->thr_uid;
	int ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid);
	if (ret == 0) {
		uint64_t a_tctx_uid = a->tctx_uid;
		uint64_t b_tctx_uid = b->tctx_uid;
		ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid < b_tctx_uid);
		uint64_t a_thr_discrim = a->thr_discrim;
		uint64_t b_thr_discrim = b->thr_discrim;
		ret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim <
		    b_thr_discrim);
		if (ret == 0) {
			uint64_t a_tctx_uid = a->tctx_uid;
			uint64_t b_tctx_uid = b->tctx_uid;
			ret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid <
			    b_tctx_uid);
		}
	}
	return (ret);
}
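The comparator chains the branchless three-way idiom (a > b) - (a < b), now keyed on thr_uid, then thr_discrim, then tctx_uid. The idiom in isolation, as a hedged sketch:

#include <stdint.h>

/* Yields -1, 0, or 1; (int)(a - b) would be wrong for 64-bit keys. */
static int
demo_cmp_u64(uint64_t a, uint64_t b)
{
	return ((a > b) - (a < b));
}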
@ -202,7 +209,7 @@ prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated)
		 */
		tdata = prof_tdata_get(tsd, true);
		if (tdata != NULL)
			prof_sample_threshold_update(tctx->tdata);
			prof_sample_threshold_update(tdata);
	}

	if ((uintptr_t)tctx > (uintptr_t)1U) {
@ -544,9 +551,9 @@ prof_gctx_create(tsd_t *tsd, prof_bt_t *bt)
	/*
	 * Create a single allocation that has space for vec of length bt->len.
	 */
	prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsd, offsetof(prof_gctx_t,
	    vec) + (bt->len * sizeof(void *)), false, tcache_get(tsd, true),
	    true, NULL);
	size_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *));
	prof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsd, size,
	    size2index(size), false, tcache_get(tsd, true), true, NULL, true);
	if (gctx == NULL)
		return (NULL);
	gctx->lock = prof_gctx_mutex_choose();
@ -587,7 +594,7 @@ prof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self, prof_gctx_t *gctx,
		prof_leave(tsd, tdata_self);
		/* Destroy gctx. */
		malloc_mutex_unlock(gctx->lock);
		idalloctm(tsd, gctx, tcache_get(tsd, false), true);
		idalloctm(tsd, gctx, tcache_get(tsd, false), true, true);
	} else {
		/*
		 * Compensate for increment in prof_tctx_destroy() or
@ -694,7 +701,7 @@ prof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx)
		prof_tdata_destroy(tsd, tdata, false);

	if (destroy_tctx)
		idalloctm(tsd, tctx, tcache_get(tsd, false), true);
		idalloctm(tsd, tctx, tcache_get(tsd, false), true, true);
}

static bool
@ -723,7 +730,8 @@ prof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata,
		if (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) {
			/* OOM. */
			prof_leave(tsd, tdata);
			idalloctm(tsd, gctx.v, tcache_get(tsd, false), true);
			idalloctm(tsd, gctx.v, tcache_get(tsd, false), true,
			    true);
			return (true);
		}
		new_gctx = true;
@ -782,8 +790,9 @@ prof_lookup(tsd_t *tsd, prof_bt_t *bt)

		/* Link a prof_tctx_t into gctx for this thread. */
		tcache = tcache_get(tsd, true);
		ret.v = iallocztm(tsd, sizeof(prof_tctx_t), false, tcache, true,
		    NULL);
		ret.v = iallocztm(tsd, sizeof(prof_tctx_t),
		    size2index(sizeof(prof_tctx_t)), false, tcache, true, NULL,
		    true);
		if (ret.p == NULL) {
			if (new_gctx)
				prof_gctx_try_destroy(tsd, tdata, gctx, tdata);
@ -791,6 +800,7 @@ prof_lookup(tsd_t *tsd, prof_bt_t *bt)
		}
		ret.p->tdata = tdata;
		ret.p->thr_uid = tdata->thr_uid;
		ret.p->thr_discrim = tdata->thr_discrim;
		memset(&ret.p->cnts, 0, sizeof(prof_cnt_t));
		ret.p->gctx = gctx;
		ret.p->tctx_uid = tdata->tctx_uid_next++;
@ -802,7 +812,7 @@ prof_lookup(tsd_t *tsd, prof_bt_t *bt)
		if (error) {
			if (new_gctx)
				prof_gctx_try_destroy(tsd, tdata, gctx, tdata);
			idalloctm(tsd, ret.v, tcache, true);
			idalloctm(tsd, ret.v, tcache, true, true);
			return (NULL);
		}
		malloc_mutex_lock(gctx->lock);
@ -1094,11 +1104,23 @@ prof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg)
{
	bool propagate_err = *(bool *)arg;

	if (prof_dump_printf(propagate_err,
	    " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": %"FMTu64"]\n",
	    tctx->thr_uid, tctx->dump_cnts.curobjs, tctx->dump_cnts.curbytes,
	    tctx->dump_cnts.accumobjs, tctx->dump_cnts.accumbytes))
		return (tctx);
	switch (tctx->state) {
	case prof_tctx_state_initializing:
	case prof_tctx_state_nominal:
		/* Not captured by this dump. */
		break;
	case prof_tctx_state_dumping:
	case prof_tctx_state_purgatory:
		if (prof_dump_printf(propagate_err,
		    " t%"FMTu64": %"FMTu64": %"FMTu64" [%"FMTu64": "
		    "%"FMTu64"]\n", tctx->thr_uid, tctx->dump_cnts.curobjs,
		    tctx->dump_cnts.curbytes, tctx->dump_cnts.accumobjs,
		    tctx->dump_cnts.accumbytes))
			return (tctx);
		break;
	default:
		not_reached();
	}
	return (NULL);
}

@ -1191,7 +1213,7 @@ prof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs)
					tctx_tree_remove(&gctx->tctxs,
					    to_destroy);
					idalloctm(tsd, to_destroy,
					    tcache_get(tsd, false), true);
					    tcache_get(tsd, false), true, true);
				} else
					next = NULL;
			} while (next != NULL);
@ -1569,7 +1591,6 @@ prof_idump(void)
{
	tsd_t *tsd;
	prof_tdata_t *tdata;
	char filename[PATH_MAX + 1];

	cassert(config_prof);

@ -1585,6 +1606,7 @@ prof_idump(void)
	}

	if (opt_prof_prefix[0] != '\0') {
		char filename[PATH_MAX + 1];
		malloc_mutex_lock(&prof_dump_seq_mtx);
		prof_dump_filename(filename, 'i', prof_dump_iseq);
		prof_dump_iseq++;
@ -1623,7 +1645,6 @@ prof_gdump(void)
{
	tsd_t *tsd;
	prof_tdata_t *tdata;
	char filename[DUMP_FILENAME_BUFSIZE];

	cassert(config_prof);

@ -1639,6 +1660,7 @@ prof_gdump(void)
	}

	if (opt_prof_prefix[0] != '\0') {
		char filename[DUMP_FILENAME_BUFSIZE];
		malloc_mutex_lock(&prof_dump_seq_mtx);
		prof_dump_filename(filename, 'u', prof_dump_useq);
		prof_dump_useq++;
@ -1694,8 +1716,8 @@ prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim,

	/* Initialize an empty cache for this thread. */
	tcache = tcache_get(tsd, true);
	tdata = (prof_tdata_t *)iallocztm(tsd, sizeof(prof_tdata_t), false,
	    tcache, true, NULL);
	tdata = (prof_tdata_t *)iallocztm(tsd, sizeof(prof_tdata_t),
	    size2index(sizeof(prof_tdata_t)), false, tcache, true, NULL, true);
	if (tdata == NULL)
		return (NULL);

@ -1709,7 +1731,7 @@ prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim,

	if (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS,
	    prof_bt_hash, prof_bt_keycomp)) {
		idalloctm(tsd, tdata, tcache, true);
		idalloctm(tsd, tdata, tcache, true, true);
		return (NULL);
	}

@ -1764,9 +1786,9 @@ prof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata,

	tcache = tcache_get(tsd, false);
	if (tdata->thread_name != NULL)
		idalloctm(tsd, tdata->thread_name, tcache, true);
		idalloctm(tsd, tdata->thread_name, tcache, true, true);
	ckh_delete(tsd, &tdata->bt2tctx);
	idalloctm(tsd, tdata, tcache, true);
	idalloctm(tsd, tdata, tcache, true, true);
}

static void
@ -1927,7 +1949,8 @@ prof_thread_name_alloc(tsd_t *tsd, const char *thread_name)
	if (size == 1)
		return ("");

	ret = iallocztm(tsd, size, false, tcache_get(tsd, true), true, NULL);
	ret = iallocztm(tsd, size, size2index(size), false, tcache_get(tsd,
	    true), true, NULL, true);
	if (ret == NULL)
		return (NULL);
	memcpy(ret, thread_name, size);
@ -1960,7 +1983,7 @@ prof_thread_name_set(tsd_t *tsd, const char *thread_name)

	if (tdata->thread_name != NULL) {
		idalloctm(tsd, tdata->thread_name, tcache_get(tsd, false),
		    true);
		    true, true);
		tdata->thread_name = NULL;
	}
	if (strlen(s) > 0)

@ -23,12 +23,14 @@ static quarantine_t *
quarantine_init(tsd_t *tsd, size_t lg_maxobjs)
{
	quarantine_t *quarantine;
	size_t size;

	assert(tsd_nominal(tsd));

	quarantine = (quarantine_t *)iallocztm(tsd, offsetof(quarantine_t, objs)
	    + ((ZU(1) << lg_maxobjs) * sizeof(quarantine_obj_t)), false,
	    tcache_get(tsd, true), true, NULL);
	size = offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) *
	    sizeof(quarantine_obj_t));
	quarantine = (quarantine_t *)iallocztm(tsd, size, size2index(size),
	    false, tcache_get(tsd, true), true, NULL, true);
	if (quarantine == NULL)
		return (NULL);
	quarantine->curbytes = 0;
@ -55,7 +57,7 @@ quarantine_alloc_hook_work(tsd_t *tsd)
	if (tsd_quarantine_get(tsd) == NULL)
		tsd_quarantine_set(tsd, quarantine);
	else
		idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
		idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true);
}

static quarantine_t *
@ -87,7 +89,7 @@ quarantine_grow(tsd_t *tsd, quarantine_t *quarantine)
		memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b *
		    sizeof(quarantine_obj_t));
	}
	idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
	idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true);

	tsd_quarantine_set(tsd, ret);
	return (ret);
@ -98,7 +100,7 @@ quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine)
{
	quarantine_obj_t *obj = &quarantine->objs[quarantine->first];
	assert(obj->usize == isalloc(obj->ptr, config_prof));
	idalloctm(tsd, obj->ptr, NULL, false);
	idalloctm(tsd, obj->ptr, NULL, false, true);
	quarantine->curbytes -= obj->usize;
	quarantine->curobjs--;
	quarantine->first = (quarantine->first + 1) & ((ZU(1) <<
@ -123,7 +125,7 @@ quarantine(tsd_t *tsd, void *ptr)
	assert(opt_quarantine);

	if ((quarantine = tsd_quarantine_get(tsd)) == NULL) {
		idalloctm(tsd, ptr, NULL, false);
		idalloctm(tsd, ptr, NULL, false, true);
		return;
	}
	/*
@ -162,7 +164,7 @@ quarantine(tsd_t *tsd, void *ptr)
		}
	} else {
		assert(quarantine->curbytes == 0);
		idalloctm(tsd, ptr, NULL, false);
		idalloctm(tsd, ptr, NULL, false, true);
	}
}

@ -177,7 +179,7 @@ quarantine_cleanup(tsd_t *tsd)
	quarantine = tsd_quarantine_get(tsd);
	if (quarantine != NULL) {
		quarantine_drain(tsd, quarantine, 0);
		idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
		idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true);
		tsd_quarantine_set(tsd, NULL);
	}
}

@ -72,7 +72,7 @@ tcache_event_hard(tsd_t *tsd, tcache_t *tcache)

void *
tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
    tcache_bin_t *tbin, szind_t binind)
    tcache_bin_t *tbin, szind_t binind, bool *tcache_success)
{
	void *ret;

@ -80,7 +80,7 @@ tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
	    tcache->prof_accumbytes : 0);
	if (config_prof)
		tcache->prof_accumbytes = 0;
	ret = tcache_alloc_easy(tbin);
	ret = tcache_alloc_easy(tbin, tcache_success);

	return (ret);
}
@ -102,7 +102,7 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
	for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
		/* Lock the arena bin associated with the first object. */
		arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
		    tbin->avail[0]);
		    *(tbin->avail - 1));
		arena_t *bin_arena = extent_node_arena_get(&chunk->node);
		arena_bin_t *bin = &bin_arena->bins[binind];

@ -122,7 +122,7 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
		}
		ndeferred = 0;
		for (i = 0; i < nflush; i++) {
			ptr = tbin->avail[i];
			ptr = *(tbin->avail - 1 - i);
			assert(ptr != NULL);
			chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
			if (extent_node_arena_get(&chunk->node) == bin_arena) {
@ -139,7 +139,7 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
				 * locked. Stash the object, so that it can be
				 * handled in a future pass.
				 */
				tbin->avail[ndeferred] = ptr;
				*(tbin->avail - 1 - ndeferred) = ptr;
				ndeferred++;
			}
		}
@ -158,8 +158,8 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
		malloc_mutex_unlock(&bin->lock);
	}

	memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
	    rem * sizeof(void *));
	memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem *
	    sizeof(void *));
	tbin->ncached = rem;
	if ((int)tbin->ncached < tbin->low_water)
		tbin->low_water = tbin->ncached;
@ -182,7 +182,7 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
	for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
		/* Lock the arena associated with the first object. */
		arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
		    tbin->avail[0]);
		    *(tbin->avail - 1));
		arena_t *locked_arena = extent_node_arena_get(&chunk->node);
		UNUSED bool idump;

@ -206,7 +206,7 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
		}
		ndeferred = 0;
		for (i = 0; i < nflush; i++) {
			ptr = tbin->avail[i];
			ptr = *(tbin->avail - 1 - i);
			assert(ptr != NULL);
			chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
			if (extent_node_arena_get(&chunk->node) ==
@ -220,7 +220,7 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
				 * Stash the object, so that it can be handled
				 * in a future pass.
				 */
				tbin->avail[ndeferred] = ptr;
				*(tbin->avail - 1 - ndeferred) = ptr;
				ndeferred++;
			}
		}
@ -241,8 +241,8 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
		malloc_mutex_unlock(&arena->lock);
	}

	memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
	    rem * sizeof(void *));
	memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem *
	    sizeof(void *));
	tbin->ncached = rem;
	if ((int)tbin->ncached < tbin->low_water)
		tbin->low_water = tbin->ncached;
@ -333,9 +333,14 @@ tcache_create(tsd_t *tsd, arena_t *arena)
	assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0);
	for (i = 0; i < nhbins; i++) {
		tcache->tbins[i].lg_fill_div = 1;
		stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *);
		/*
		 * avail points past the available space. Allocations will
		 * access the slots toward higher addresses (for the benefit of
		 * prefetch).
		 */
		tcache->tbins[i].avail = (void **)((uintptr_t)tcache +
		    (uintptr_t)stack_offset);
		stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *);
	}

	return (tcache);
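With this layout each tbin->avail points one past its slot array, so cached objects are addressed as *(avail - 1 - i) and fills walk toward higher addresses, which the comment above credits to prefetch behavior. A toy LIFO stack using the same addressing; the types and demo_* names are illustrative, not jemalloc's:

#include <stddef.h>

typedef struct {
	unsigned ncached;
	void **avail;	/* points just past the slot array */
} demo_bin_t;

/* Push: the i-th cached object lives at *(avail - 1 - i). */
static void
demo_bin_push(demo_bin_t *tbin, void *ptr)
{
	*(tbin->avail - 1 - tbin->ncached) = ptr;
	tbin->ncached++;
}

/* Pop: LIFO, returning the most recently pushed object. */
static void *
demo_bin_pop(demo_bin_t *tbin)
{
	if (tbin->ncached == 0)
		return (NULL);
	tbin->ncached--;
	return (*(tbin->avail - 1 - tbin->ncached));
}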
|
||||
@ -379,7 +384,7 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
|
||||
arena_prof_accum(arena, tcache->prof_accumbytes))
|
||||
prof_idump();
|
||||
|
||||
idalloctm(tsd, tcache, false, true);
|
||||
idalloctm(tsd, tcache, false, true, true);
|
||||
}
|
||||
|
||||
void
|
||||
@ -496,13 +501,13 @@ tcache_boot(void)
|
||||
unsigned i;
|
||||
|
||||
/*
|
||||
* If necessary, clamp opt_lg_tcache_max, now that arena_maxclass is
|
||||
* If necessary, clamp opt_lg_tcache_max, now that large_maxclass is
|
||||
* known.
|
||||
*/
|
||||
if (opt_lg_tcache_max < 0 || (1U << opt_lg_tcache_max) < SMALL_MAXCLASS)
|
||||
tcache_maxclass = SMALL_MAXCLASS;
|
||||
else if ((1U << opt_lg_tcache_max) > arena_maxclass)
|
||||
tcache_maxclass = arena_maxclass;
|
||||
else if ((1U << opt_lg_tcache_max) > large_maxclass)
|
||||
tcache_maxclass = large_maxclass;
|
||||
else
|
||||
tcache_maxclass = (1U << opt_lg_tcache_max);
|
||||
|
||||
|
@ -73,6 +73,9 @@ tsd_cleanup(void *arg)
|
||||
tsd_t *tsd = (tsd_t *)arg;
|
||||
|
||||
switch (tsd->state) {
|
||||
case tsd_state_uninitialized:
|
||||
/* Do nothing. */
|
||||
break;
|
||||
case tsd_state_nominal:
|
||||
#define O(n, t) \
|
||||
n##_cleanup(tsd);
|
||||
|
@ -1,3 +1,7 @@
|
||||
/*
|
||||
* Define simple versions of assertion macros that won't recurse in case
|
||||
* of assertion failures in malloc_*printf().
|
||||
*/
|
||||
#define assert(e) do { \
|
||||
if (config_debug && !(e)) { \
|
||||
malloc_write("<jemalloc>: Failed assertion\n"); \
|
||||
@ -648,3 +652,12 @@ malloc_printf(const char *format, ...)
|
||||
malloc_vcprintf(NULL, NULL, format, ap);
|
||||
va_end(ap);
|
||||
}
|
||||
|
||||
/*
|
||||
* Restore normal assertion macros, in order to make it possible to compile all
|
||||
* C files as a single concatenation.
|
||||
*/
|
||||
#undef assert
|
||||
#undef not_reached
|
||||
#undef not_implemented
|
||||
#include "jemalloc/internal/assert.h"
|
||||
|
@ -121,9 +121,11 @@ zone_memalign(malloc_zone_t *zone, size_t alignment, size_t size)
|
||||
static void
|
||||
zone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size)
|
||||
{
|
||||
size_t alloc_size;
|
||||
|
||||
if (ivsalloc(ptr, config_prof) != 0) {
|
||||
assert(ivsalloc(ptr, config_prof) == size);
|
||||
alloc_size = ivsalloc(ptr, config_prof);
|
||||
if (alloc_size != 0) {
|
||||
assert(alloc_size == size);
|
||||
je_free(ptr);
|
||||
return;
|
||||
}
|
||||
|
@ -1,5 +1,85 @@
|
||||
#include "test/jemalloc_test.h"
|
||||
|
||||
static unsigned
|
||||
get_nsizes_impl(const char *cmd)
|
||||
{
|
||||
unsigned ret;
|
||||
size_t z;
|
||||
|
||||
z = sizeof(unsigned);
|
||||
assert_d_eq(mallctl(cmd, &ret, &z, NULL, 0), 0,
|
||||
"Unexpected mallctl(\"%s\", ...) failure", cmd);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
static unsigned
|
||||
get_nhuge(void)
|
||||
{
|
||||
|
||||
return (get_nsizes_impl("arenas.nhchunks"));
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_size_impl(const char *cmd, size_t ind)
|
||||
{
|
||||
size_t ret;
|
||||
size_t z;
|
||||
size_t mib[4];
|
||||
size_t miblen = 4;
|
||||
|
||||
z = sizeof(size_t);
|
||||
assert_d_eq(mallctlnametomib(cmd, mib, &miblen),
|
||||
0, "Unexpected mallctlnametomib(\"%s\", ...) failure", cmd);
|
||||
mib[2] = ind;
|
||||
z = sizeof(size_t);
|
||||
assert_d_eq(mallctlbymib(mib, miblen, &ret, &z, NULL, 0),
|
||||
0, "Unexpected mallctlbymib([\"%s\", %zu], ...) failure", cmd, ind);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_huge_size(size_t ind)
|
||||
{
|
||||
|
||||
return (get_size_impl("arenas.hchunk.0.size", ind));
|
||||
}
|
||||
|
||||
TEST_BEGIN(test_oom)
|
||||
{
|
||||
size_t hugemax, size, alignment;
|
||||
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
/*
|
||||
* It should be impossible to allocate two objects that each consume
|
||||
* more than half the virtual address space.
|
||||
*/
|
||||
{
|
||||
void *p;
|
||||
|
||||
p = mallocx(hugemax, 0);
|
||||
if (p != NULL) {
|
||||
assert_ptr_null(mallocx(hugemax, 0),
|
||||
"Expected OOM for mallocx(size=%#zx, 0)", hugemax);
|
||||
dallocx(p, 0);
|
||||
}
|
||||
}
|
||||
|
||||
#if LG_SIZEOF_PTR == 3
|
||||
size = ZU(0x8000000000000000);
|
||||
alignment = ZU(0x8000000000000000);
|
||||
#else
|
||||
size = ZU(0x80000000);
|
||||
alignment = ZU(0x80000000);
|
||||
#endif
|
||||
assert_ptr_null(mallocx(size, MALLOCX_ALIGN(alignment)),
|
||||
"Expected OOM for mallocx(size=%#zx, MALLOCX_ALIGN(%#zx)", size,
|
||||
alignment);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
TEST_BEGIN(test_basic)
|
||||
{
|
||||
#define MAXSZ (((size_t)1) << 26)
|
||||
@ -96,6 +176,7 @@ main(void)
|
||||
{
|
||||
|
||||
return (test(
|
||||
test_oom,
|
||||
test_basic,
|
||||
test_alignment_and_size));
|
||||
}
|
||||
|
@ -22,7 +22,7 @@ TEST_BEGIN(test_grow_and_shrink)
|
||||
szs[j-1], szs[j-1]+1);
|
||||
szs[j] = sallocx(q, 0);
|
||||
assert_zu_ne(szs[j], szs[j-1]+1,
|
||||
"Expected size to at least: %zu", szs[j-1]+1);
|
||||
"Expected size to be at least: %zu", szs[j-1]+1);
|
||||
p = q;
|
||||
}
|
||||
|
||||
|
@ -1,5 +1,24 @@
|
||||
#include "test/jemalloc_test.h"
|
||||
|
||||
/*
|
||||
* Use a separate arena for xallocx() extension/contraction tests so that
|
||||
* internal allocation e.g. by heap profiling can't interpose allocations where
|
||||
* xallocx() would ordinarily be able to extend.
|
||||
*/
|
||||
static unsigned
|
||||
arena_ind(void)
|
||||
{
|
||||
static unsigned ind = 0;
|
||||
|
||||
if (ind == 0) {
|
||||
size_t sz = sizeof(ind);
|
||||
assert_d_eq(mallctl("arenas.extend", &ind, &sz, NULL, 0), 0,
|
||||
"Unexpected mallctl failure creating arena");
|
||||
}
|
||||
|
||||
return (ind);
|
||||
}
|
||||
|
||||
TEST_BEGIN(test_same_size)
|
||||
{
|
||||
void *p;
|
||||
@ -48,6 +67,414 @@ TEST_BEGIN(test_no_move_fail)
|
||||
}
|
||||
TEST_END
|
||||
|
||||
static unsigned
|
||||
get_nsizes_impl(const char *cmd)
|
||||
{
|
||||
unsigned ret;
|
||||
size_t z;
|
||||
|
||||
z = sizeof(unsigned);
|
||||
assert_d_eq(mallctl(cmd, &ret, &z, NULL, 0), 0,
|
||||
"Unexpected mallctl(\"%s\", ...) failure", cmd);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
static unsigned
|
||||
get_nsmall(void)
|
||||
{
|
||||
|
||||
return (get_nsizes_impl("arenas.nbins"));
|
||||
}
|
||||
|
||||
static unsigned
|
||||
get_nlarge(void)
|
||||
{
|
||||
|
||||
return (get_nsizes_impl("arenas.nlruns"));
|
||||
}
|
||||
|
||||
static unsigned
|
||||
get_nhuge(void)
|
||||
{
|
||||
|
||||
return (get_nsizes_impl("arenas.nhchunks"));
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_size_impl(const char *cmd, size_t ind)
|
||||
{
|
||||
size_t ret;
|
||||
size_t z;
|
||||
size_t mib[4];
|
||||
size_t miblen = 4;
|
||||
|
||||
z = sizeof(size_t);
|
||||
assert_d_eq(mallctlnametomib(cmd, mib, &miblen),
|
||||
0, "Unexpected mallctlnametomib(\"%s\", ...) failure", cmd);
|
||||
mib[2] = ind;
|
||||
z = sizeof(size_t);
|
||||
assert_d_eq(mallctlbymib(mib, miblen, &ret, &z, NULL, 0),
|
||||
0, "Unexpected mallctlbymib([\"%s\", %zu], ...) failure", cmd, ind);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_small_size(size_t ind)
|
||||
{
|
||||
|
||||
return (get_size_impl("arenas.bin.0.size", ind));
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_large_size(size_t ind)
|
||||
{
|
||||
|
||||
return (get_size_impl("arenas.lrun.0.size", ind));
|
||||
}
|
||||
|
||||
static size_t
|
||||
get_huge_size(size_t ind)
|
||||
{
|
||||
|
||||
return (get_size_impl("arenas.hchunk.0.size", ind));
|
||||
}
|
||||
|
||||
TEST_BEGIN(test_size)
|
||||
{
|
||||
size_t small0, hugemax;
|
||||
void *p;
|
||||
|
||||
/* Get size classes. */
|
||||
small0 = get_small_size(0);
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
p = mallocx(small0, 0);
|
||||
assert_ptr_not_null(p, "Unexpected mallocx() error");
|
||||
|
||||
/* Test smallest supported size. */
|
||||
assert_zu_eq(xallocx(p, 1, 0, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
/* Test largest supported size. */
|
||||
assert_zu_le(xallocx(p, hugemax, 0, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
/* Test size overflow. */
|
||||
assert_zu_le(xallocx(p, hugemax+1, 0, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, SIZE_T_MAX, 0, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
dallocx(p, 0);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
TEST_BEGIN(test_size_extra_overflow)
|
||||
{
|
||||
size_t small0, hugemax;
|
||||
void *p;
|
||||
|
||||
/* Get size classes. */
|
||||
small0 = get_small_size(0);
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
p = mallocx(small0, 0);
|
||||
assert_ptr_not_null(p, "Unexpected mallocx() error");
|
||||
|
||||
/* Test overflows that can be resolved by clamping extra. */
|
||||
assert_zu_le(xallocx(p, hugemax-1, 2, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, hugemax, 1, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
/* Test overflow such that hugemax-size underflows. */
|
||||
assert_zu_le(xallocx(p, hugemax+1, 2, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, hugemax+2, 3, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, SIZE_T_MAX-2, 2, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, SIZE_T_MAX-1, 1, 0), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
dallocx(p, 0);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
TEST_BEGIN(test_extra_small)
|
||||
{
|
||||
size_t small0, small1, hugemax;
|
||||
void *p;
|
||||
|
||||
/* Get size classes. */
|
||||
small0 = get_small_size(0);
|
||||
small1 = get_small_size(1);
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
p = mallocx(small0, 0);
|
||||
assert_ptr_not_null(p, "Unexpected mallocx() error");
|
||||
|
||||
assert_zu_eq(xallocx(p, small1, 0, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, small1, 0, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, small0, small1 - small0, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
/* Test size+extra overflow. */
|
||||
assert_zu_eq(xallocx(p, small0, hugemax - small0 + 1, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, small0, SIZE_T_MAX - small0, 0), small0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
dallocx(p, 0);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
TEST_BEGIN(test_extra_large)
|
||||
{
|
||||
int flags = MALLOCX_ARENA(arena_ind());
|
||||
size_t smallmax, large0, large1, large2, huge0, hugemax;
|
||||
void *p;
|
||||
|
||||
/* Get size classes. */
|
||||
smallmax = get_small_size(get_nsmall()-1);
|
||||
large0 = get_large_size(0);
|
||||
large1 = get_large_size(1);
|
||||
large2 = get_large_size(2);
|
||||
huge0 = get_huge_size(0);
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
p = mallocx(large2, flags);
|
||||
assert_ptr_not_null(p, "Unexpected mallocx() error");
|
||||
|
||||
assert_zu_eq(xallocx(p, large2, 0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size decrease with zero extra. */
|
||||
assert_zu_eq(xallocx(p, large0, 0, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, smallmax, 0, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, large2, 0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size decrease with non-zero extra. */
|
||||
assert_zu_eq(xallocx(p, large0, large2 - large0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, large1, large2 - large1, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, large0, large1 - large0, flags), large1,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, smallmax, large0 - smallmax, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, large0, 0, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with zero extra. */
|
||||
assert_zu_eq(xallocx(p, large2, 0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, huge0, 0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, large0, 0, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with non-zero extra. */
|
||||
assert_zu_lt(xallocx(p, large0, huge0 - large0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, large0, 0, flags), large0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with non-zero extra. */
|
||||
assert_zu_eq(xallocx(p, large0, large2 - large0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, large2, 0, flags), large2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size+extra overflow. */
|
||||
assert_zu_lt(xallocx(p, large2, hugemax - large2 + 1, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
dallocx(p, flags);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
TEST_BEGIN(test_extra_huge)
|
||||
{
|
||||
int flags = MALLOCX_ARENA(arena_ind());
|
||||
size_t largemax, huge0, huge1, huge2, hugemax;
|
||||
void *p;
|
||||
|
||||
/* Get size classes. */
|
||||
largemax = get_large_size(get_nlarge()-1);
|
||||
huge0 = get_huge_size(0);
|
||||
huge1 = get_huge_size(1);
|
||||
huge2 = get_huge_size(2);
|
||||
hugemax = get_huge_size(get_nhuge()-1);
|
||||
|
||||
p = mallocx(huge2, flags);
|
||||
assert_ptr_not_null(p, "Unexpected mallocx() error");
|
||||
|
||||
assert_zu_eq(xallocx(p, huge2, 0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size decrease with zero extra. */
|
||||
assert_zu_ge(xallocx(p, huge0, 0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_ge(xallocx(p, largemax, 0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, huge2, 0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size decrease with non-zero extra. */
|
||||
assert_zu_eq(xallocx(p, huge0, huge2 - huge0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, huge1, huge2 - huge1, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_eq(xallocx(p, huge0, huge1 - huge0, flags), huge1,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_ge(xallocx(p, largemax, huge0 - largemax, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_ge(xallocx(p, huge0, 0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with zero extra. */
|
||||
assert_zu_le(xallocx(p, huge2, 0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
assert_zu_le(xallocx(p, hugemax+1, 0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_ge(xallocx(p, huge0, 0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with non-zero extra. */
|
||||
assert_zu_le(xallocx(p, huge0, SIZE_T_MAX - huge0, flags), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_ge(xallocx(p, huge0, 0, flags), huge0,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size increase with non-zero extra. */
|
||||
assert_zu_le(xallocx(p, huge0, huge2 - huge0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
assert_zu_eq(xallocx(p, huge2, 0, flags), huge2,
|
||||
"Unexpected xallocx() behavior");
|
||||
/* Test size+extra overflow. */
|
||||
assert_zu_le(xallocx(p, huge2, hugemax - huge2 + 1, flags), hugemax,
|
||||
"Unexpected xallocx() behavior");
|
||||
|
||||
dallocx(p, flags);
|
||||
}
|
||||
TEST_END
|
||||
|
||||
static void
|
||||
print_filled_extents(const void *p, uint8_t c, size_t len)
|
||||
{
|
||||
const uint8_t *pc = (const uint8_t *)p;
|
||||
size_t i, range0;
|
||||
uint8_t c0;
|
||||
|
||||
malloc_printf(" p=%p, c=%#x, len=%zu:", p, c, len);
|
||||
range0 = 0;
|
||||
c0 = pc[0];
|
||||
for (i = 0; i < len; i++) {
|
||||
if (pc[i] != c0) {
|
||||
malloc_printf(" %#x[%zu..%zu)", c0, range0, i);
|
||||
range0 = i;
|
||||
c0 = pc[i];
|
||||
}
|
||||
}
|
||||
malloc_printf(" %#x[%zu..%zu)\n", c0, range0, i);
|
||||
}
|
||||
|
||||
static bool
|
||||
validate_fill(const void *p, uint8_t c, size_t offset, size_t len)
|
||||
{
|
||||
const uint8_t *pc = (const uint8_t *)p;
|
||||
bool err;
|
||||
size_t i;
|
||||
|
||||
for (i = offset, err = false; i < offset+len; i++) {
|
||||
if (pc[i] != c)
|
||||
err = true;
|
||||
}
|
||||
|
||||
if (err)
|
||||
print_filled_extents(p, c, offset + len);
|
||||
|
||||
return (err);
|
||||
}

static void
test_zero(size_t szmin, size_t szmax)
{
    int flags = MALLOCX_ARENA(arena_ind()) | MALLOCX_ZERO;
    size_t sz, nsz;
    void *p;
#define FILL_BYTE 0x7aU

    sz = szmax;
    p = mallocx(sz, flags);
    assert_ptr_not_null(p, "Unexpected mallocx() error");
    assert_false(validate_fill(p, 0x00, 0, sz), "Memory not filled: sz=%zu",
        sz);

    /*
     * Fill with non-zero so that non-debug builds are more likely to detect
     * errors.
     */
    memset(p, FILL_BYTE, sz);
    assert_false(validate_fill(p, FILL_BYTE, 0, sz),
        "Memory not filled: sz=%zu", sz);

    /* Shrink in place so that we can expect growing in place to succeed. */
    sz = szmin;
    assert_zu_eq(xallocx(p, sz, 0, flags), sz,
        "Unexpected xallocx() error");
    assert_false(validate_fill(p, FILL_BYTE, 0, sz),
        "Memory not filled: sz=%zu", sz);

    for (sz = szmin; sz < szmax; sz = nsz) {
        nsz = nallocx(sz+1, flags);
        assert_zu_eq(xallocx(p, sz+1, 0, flags), nsz,
            "Unexpected xallocx() failure");
        assert_false(validate_fill(p, FILL_BYTE, 0, sz),
            "Memory not filled: sz=%zu", sz);
        assert_false(validate_fill(p, 0x00, sz, nsz-sz),
            "Memory not filled: sz=%zu, nsz-sz=%zu", sz, nsz-sz);
        memset((void *)((uintptr_t)p + sz), FILL_BYTE, nsz-sz);
        assert_false(validate_fill(p, FILL_BYTE, 0, nsz),
            "Memory not filled: nsz=%zu", nsz);
    }

    dallocx(p, flags);
}
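/*
 * Editor's aside (not part of the upstream diff): the loop above is the core
 * MALLOCX_ZERO check -- grow the allocation one size class at a time and
 * verify that exactly the newly exposed tail [sz, nsz) came back zeroed.
 * The same probe, reduced to its essentials (a sketch, assuming the public
 * jemalloc API):
 *
 *     size_t sz = sallocx(p, MALLOCX_ZERO);
 *     size_t nsz = nallocx(sz+1, MALLOCX_ZERO);
 *     if (xallocx(p, sz+1, 0, MALLOCX_ZERO) == nsz) {
 *         // Bytes [sz, nsz) must now read as 0x00.
 *     }
 */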

TEST_BEGIN(test_zero_large)
{
    size_t large0, largemax;

    /* Get size classes. */
    large0 = get_large_size(0);
    largemax = get_large_size(get_nlarge()-1);

    test_zero(large0, largemax);
}
TEST_END

TEST_BEGIN(test_zero_huge)
{
    size_t huge0, huge1;

    /* Get size classes. */
    huge0 = get_huge_size(0);
    huge1 = get_huge_size(1);

    test_zero(huge1, huge0 * 2);
}
TEST_END

int
main(void)
{

@ -55,5 +482,12 @@ main(void)

    return (test(
        test_same_size,
        test_extra_no_move,
        test_no_move_fail));
        test_no_move_fail,
        test_size,
        test_size_extra_overflow,
        test_extra_small,
        test_extra_large,
        test_extra_huge,
        test_zero_large,
        test_zero_huge));
}
@ -140,7 +140,7 @@ TEST_BEGIN(test_junk_large)
{

    test_skip_if(!config_fill);
    test_junk(SMALL_MAXCLASS+1, arena_maxclass);
    test_junk(SMALL_MAXCLASS+1, large_maxclass);
}
TEST_END

@ -148,7 +148,7 @@ TEST_BEGIN(test_junk_huge)
{

    test_skip_if(!config_fill);
    test_junk(arena_maxclass+1, chunksize*2);
    test_junk(large_maxclass+1, chunksize*2);
}
TEST_END

@ -172,8 +172,8 @@ arena_ralloc_junk_large_intercept(void *ptr, size_t old_usize, size_t usize)
{

    arena_ralloc_junk_large_orig(ptr, old_usize, usize);
    assert_zu_eq(old_usize, arena_maxclass, "Unexpected old_usize");
    assert_zu_eq(usize, shrink_size(arena_maxclass), "Unexpected usize");
    assert_zu_eq(old_usize, large_maxclass, "Unexpected old_usize");
    assert_zu_eq(usize, shrink_size(large_maxclass), "Unexpected usize");
    most_recently_trimmed = ptr;
}

@ -181,13 +181,13 @@ TEST_BEGIN(test_junk_large_ralloc_shrink)
{
    void *p1, *p2;

    p1 = mallocx(arena_maxclass, 0);
    p1 = mallocx(large_maxclass, 0);
    assert_ptr_not_null(p1, "Unexpected mallocx() failure");

    arena_ralloc_junk_large_orig = arena_ralloc_junk_large;
    arena_ralloc_junk_large = arena_ralloc_junk_large_intercept;

    p2 = rallocx(p1, shrink_size(arena_maxclass), 0);
    p2 = rallocx(p1, shrink_size(large_maxclass), 0);
    assert_ptr_eq(p1, p2, "Unexpected move during shrink");

    arena_ralloc_junk_large = arena_ralloc_junk_large_orig;
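/*
 * Editor's aside (not part of the upstream diff): the four hunks above are a
 * mechanical rename -- upstream jemalloc renamed the arena_maxclass variable
 * to large_maxclass, so every junk-filling test that sized its allocations
 * off of it is updated in place; the test logic itself is unchanged.
 */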
@ -16,6 +16,14 @@ prof_dump_open_intercept(bool propagate_err, const char *filename)
    return (fd);
}

static void
set_prof_active(bool active)
{

    assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)),
        0, "Unexpected mallctl failure");
}
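/*
 * Editor's aside (not part of the upstream diff): set_prof_active() folds the
 * repeated mallctl boilerplate into a single helper; the hunks below replace
 * every open-coded toggle with calls such as:
 *
 *     set_prof_active(true);    // was: mallctl("prof.active", ...)
 *     ...
 *     set_prof_active(false);
 */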

static size_t
get_lg_prof_sample(void)
{
@ -97,15 +105,12 @@ prof_dump_header_intercept(bool propagate_err, const prof_cnt_t *cnt_all)

TEST_BEGIN(test_prof_reset_cleanup)
{
    bool active;
    void *p;
    prof_dump_header_t *prof_dump_header_orig;

    test_skip_if(!config_prof);

    active = true;
    assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)),
        0, "Unexpected mallctl failure while activating profiling");
    set_prof_active(true);

    assert_zu_eq(prof_bt_count(), 0, "Expected 0 backtraces");
    p = mallocx(1, 0);

@ -133,9 +138,7 @@ TEST_BEGIN(test_prof_reset_cleanup)
    dallocx(p, 0);
    assert_zu_eq(prof_bt_count(), 0, "Expected 0 backtraces");

    active = false;
    assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)),
        0, "Unexpected mallctl failure while deactivating profiling");
    set_prof_active(false);
}
TEST_END

@ -192,7 +195,6 @@ thd_start(void *varg)
TEST_BEGIN(test_prof_reset)
{
    size_t lg_prof_sample_orig;
    bool active;
    thd_t thds[NTHREADS];
    unsigned thd_args[NTHREADS];
    unsigned i;

@ -208,9 +210,7 @@ TEST_BEGIN(test_prof_reset)
    lg_prof_sample_orig = get_lg_prof_sample();
    do_prof_reset(5);

    active = true;
    assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)),
        0, "Unexpected mallctl failure while activating profiling");
    set_prof_active(true);

    for (i = 0; i < NTHREADS; i++) {
        thd_args[i] = i;

@ -224,9 +224,7 @@ TEST_BEGIN(test_prof_reset)
    assert_zu_eq(prof_tdata_count(), tdata_count,
        "Unexpected remaining tdata structures");

    active = false;
    assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)),
        0, "Unexpected mallctl failure while deactivating profiling");
    set_prof_active(false);

    do_prof_reset(lg_prof_sample_orig);
}

@ -237,6 +235,58 @@ TEST_END
#undef RESET_INTERVAL
#undef DUMP_INTERVAL

/* Test sampling at the same allocation site across resets. */
#define NITER 10
TEST_BEGIN(test_xallocx)
{
    size_t lg_prof_sample_orig;
    unsigned i;
    void *ptrs[NITER];

    test_skip_if(!config_prof);

    lg_prof_sample_orig = get_lg_prof_sample();
    set_prof_active(true);

    /* Reset profiling. */
    do_prof_reset(0);

    for (i = 0; i < NITER; i++) {
        void *p;
        size_t sz, nsz;

        /* Reset profiling. */
        do_prof_reset(0);

        /* Allocate small object (which will be promoted). */
        p = ptrs[i] = mallocx(1, 0);
        assert_ptr_not_null(p, "Unexpected mallocx() failure");

        /* Reset profiling. */
        do_prof_reset(0);

        /* Perform successful xallocx(). */
        sz = sallocx(p, 0);
        assert_zu_eq(xallocx(p, sz, 0, 0), sz,
            "Unexpected xallocx() failure");

        /* Perform unsuccessful xallocx(). */
        nsz = nallocx(sz+1, 0);
        assert_zu_eq(xallocx(p, nsz, 0, 0), sz,
            "Unexpected xallocx() success");
    }

    for (i = 0; i < NITER; i++) {
        /* dallocx. */
        dallocx(ptrs[i], 0);
    }

    set_prof_active(false);
    do_prof_reset(lg_prof_sample_orig);
}
TEST_END
#undef NITER
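/*
 * Editor's aside (not part of the upstream diff): with lg_prof_sample driven
 * to 0 via do_prof_reset(0), presumably every allocation is sampled, so each
 * iteration above exercises xallocx() on a sampled (promoted) small
 * allocation and checks that the resize attempt neither moves it nor changes
 * its reported size across prof.reset boundaries.
 */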

int
main(void)
{

@ -247,5 +297,6 @@ main(void)

    return (test(
        test_prof_reset_basic,
        test_prof_reset_cleanup,
        test_prof_reset));
        test_prof_reset,
        test_xallocx));
}
@ -21,7 +21,7 @@ struct node_s {
};

static int
node_cmp(node_t *a, node_t *b) {
node_cmp(const node_t *a, const node_t *b) {
    int ret;

    assert_u32_eq(a->magic, NODE_MAGIC, "Bad magic");

@ -212,6 +212,15 @@ remove_reverse_iterate_cb(tree_t *tree, node_t *node, void *data)
    return (ret);
}

static void
destroy_cb(node_t *node, void *data)
{
    unsigned *nnodes = (unsigned *)data;

    assert_u_gt(*nnodes, 0, "Destruction removed too many nodes");
    (*nnodes)--;
}
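/*
 * Editor's aside (not part of the upstream diff): destroy_cb() is the node
 * visitor handed to the new tree_destroy() path below; it only decrements a
 * caller-supplied count, so after destruction the test can assert the count
 * reached exactly zero, e.g. (usage as it appears in the case 4 hunk):
 *
 *     unsigned nnodes = j;
 *     tree_destroy(&tree, destroy_cb, &nnodes);
 *     assert_u_eq(nnodes, 0, "Destruction terminated early");
 */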

TEST_BEGIN(test_rb_random)
{
#define NNODES 25

@ -278,7 +287,7 @@ TEST_BEGIN(test_rb_random)
    }

    /* Remove nodes. */
    switch (i % 4) {
    switch (i % 5) {
    case 0:
        for (k = 0; k < j; k++)
            node_remove(&tree, &nodes[k], j - k);

@ -314,6 +323,12 @@ TEST_BEGIN(test_rb_random)
        assert_u_eq(nnodes, 0,
            "Removal terminated early");
        break;
    } case 4: {
        unsigned nnodes = j;
        tree_destroy(&tree, destroy_cb, &nnodes);
        assert_u_eq(nnodes, 0,
            "Destruction terminated early");
        break;
    } default:
        not_reached();
    }
@ -42,7 +42,7 @@ TEST_BEGIN(test_stats_huge)
    size_t sz;
    int expected = config_stats ? 0 : ENOENT;

    p = mallocx(arena_maxclass+1, 0);
    p = mallocx(large_maxclass+1, 0);
    assert_ptr_not_null(p, "Unexpected mallocx() failure");

    assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0,

@ -88,7 +88,7 @@ TEST_BEGIN(test_stats_arenas_summary)

    little = mallocx(SMALL_MAXCLASS, 0);
    assert_ptr_not_null(little, "Unexpected mallocx() failure");
    large = mallocx(arena_maxclass, 0);
    large = mallocx(large_maxclass, 0);
    assert_ptr_not_null(large, "Unexpected mallocx() failure");
    huge = mallocx(chunksize, 0);
    assert_ptr_not_null(huge, "Unexpected mallocx() failure");

@ -200,7 +200,7 @@ TEST_BEGIN(test_stats_arenas_large)
    assert_d_eq(mallctl("thread.arena", NULL, NULL, &arena, sizeof(arena)),
        0, "Unexpected mallctl() failure");

    p = mallocx(arena_maxclass, 0);
    p = mallocx(large_maxclass, 0);
    assert_ptr_not_null(p, "Unexpected mallocx() failure");

    assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0,
@ -55,7 +55,7 @@ TEST_BEGIN(test_zero_large)
{

    test_skip_if(!config_fill);
    test_zero(SMALL_MAXCLASS+1, arena_maxclass);
    test_zero(SMALL_MAXCLASS+1, large_maxclass);
}
TEST_END

@ -63,7 +63,7 @@ TEST_BEGIN(test_zero_huge)
{

    test_skip_if(!config_fill);
    test_zero(arena_maxclass+1, chunksize*2);
    test_zero(large_maxclass+1, chunksize*2);
}
TEST_END
@ -1,2 +1,2 @@
UPSTREAM_REPO=https://github.com/glandium/jemalloc
UPSTREAM_COMMIT=ed4883285e111b426e5769b24dad164ebacaa5b9
UPSTREAM_REPO=https://github.com/jemalloc/jemalloc
UPSTREAM_COMMIT=3a92319ddc5610b755f755cbbbd12791ca9d0c3d