Bug 1920451 - Introduce a concept of last significant re-use to decide when to purge. r=pbone,smaug

We want to better predict when it is a good moment to purge. D220616 answered the question of when to best process a purge once it is needed (that is, during main thread idle time), but that does not necessarily mean it is a good moment for a single arena to purge, as arenas can be used intensely on other threads while the main thread is idle.

If we saw a recent re-use in an arena, we can most likely expect more of it and should hold off purging a little longer. So whenever we are able to re-use a dirty page, we take that as a sign of a significant re-use that avoided an allocation of new memory.

We record a timestamp of when this happened. Unless forced, purge requests are only executed if their grace period with respect to the last significant re-use has expired.

Finally, we introduce a third result `WantsLater` for a single purge step, which indicates that there are purge requests still waiting for their grace period to expire and that the main thread should come back later to process them.
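
In other words, the per-arena decision reduces to a simple timestamp comparison. The following is a condensed sketch of that check (the helper name is made up for illustration; the real check lives inline in ArenaCollection::MayPurgeStep in the diff below):

  #include <cstdint>

  // Sketch only: decide whether an arena's reuse grace period has expired.
  // Timestamps are uint64_t nanoseconds, as introduced by this patch.
  bool GracePeriodExpired(uint64_t aNowNS, uint64_t aLastSignificantReuseNS,
                          uint32_t aReuseGraceMS) {
    // Grace period in nanoseconds, matching the conversion in the patch.
    uint64_t reuseGraceNS = (uint64_t)aReuseGraceMS * 1000 * 1000;
    // Expired: the arena may be purged now (NeedsMore).
    // Not expired: it saw recent significant re-use, check later (WantsLater).
    return aNowNS - aLastSignificantReuseNS >= reuseGraceNS;
  }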

***
Default Atomic

Differential Revision: https://phabricator.services.mozilla.com/D223709
Jens Stutte
2025-02-19 07:21:15 +00:00
parent 553dce6f70
commit ad5ec08465
7 changed files with 361 additions and 124 deletions


@@ -157,16 +157,25 @@ MALLOC_DECL(moz_set_max_dirty_page_modifier, void, int32_t)
// explicitly. Disabling it may cause an immediate synchronous purge of all
// arenas.
// Must be called only on the main thread.
// Parameters:
// bool: enable/disable
MALLOC_DECL(moz_enable_deferred_purge, bool, bool)
// Execute at most one purge.
// Returns true if there are more purges wanted, otherwise false.
// If the bool parameter aPeekOnly is true, it won't process any purge
// but just return if at least one is wanted.
// Returns a purge_result_t with the following meaning:
// Done: Purge has completed for all arenas.
// NeedsMore: There is at least one arena that needs to be purged now.
// WantsLater: There is at least one arena that might want a purge later,
// according to aReuseGraceMS passed.
// Parameters:
// bool: If the bool parameter aPeekOnly is true, it won't process
// any purge but just return whether some is needed now or wanted later.
// uint32_t: aReuseGraceMS is the time to wait before purging after a
// significant re-use happened for an arena.
// The cost of calling this when there is no pending purge is minimal: a mutex
// lock/unlock and an isEmpty check. Note that the mutex is never held during
// expensive operations and guards only that list.
MALLOC_DECL(moz_may_purge_one_now, bool, bool)
MALLOC_DECL(moz_may_purge_one_now, purge_result_t, bool, uint32_t)
# endif
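
To illustrate the intended calling pattern, an idle-time caller might drive this API roughly as follows (a simplified sketch; the real integration is in TaskController further down in this patch, OutOfIdleBudget() is a hypothetical placeholder for the caller's deadline check, and 500 mirrors the default of memory.lazypurge.reuse_grace_period):

  // Hypothetical sketch of a caller purging in small steps while idle.
  void PurgeWhileIdle() {
    const uint32_t kReuseGraceMS = 500;
    purge_result_t result = purge_result_t::NeedsMore;
    while (result == purge_result_t::NeedsMore && !OutOfIdleBudget()) {
      result = moz_may_purge_one_now(/* aPeekOnly */ false, kReuseGraceMS);
    }
    if (result == purge_result_t::WantsLater) {
      // Some arena saw recent re-use; come back after the grace period.
    }
  }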


@@ -127,6 +127,7 @@
#include <cstring>
#include <cerrno>
#include <chrono>
#include <optional>
#include <type_traits>
#ifdef XP_WIN
@@ -1144,6 +1145,19 @@ static_assert(sizeof(arena_bin_t) == 48);
static_assert(sizeof(arena_bin_t) == 32);
#endif
// We cannot instantiate
// Atomic<std::chrono::time_point<std::chrono::steady_clock>>
// so we explicitly force timestamps to be uint64_t in ns.
uint64_t GetTimestampNS() {
// On most if not all systems we care about, the conversion to ns is a no-op,
// so we prefer to keep the precision here for performance, but let's be
// explicit about it.
return std::chrono::floor<std::chrono::nanoseconds>(
std::chrono::steady_clock::now())
.time_since_epoch()
.count();
}
struct arena_t {
#if defined(MOZ_DIAGNOSTIC_ASSERT_ENABLED)
uint32_t mMagic;
@@ -1165,7 +1179,6 @@ struct arena_t {
// only (currently only the main thread can be used like this).
// Can be acquired while holding gArenas.mLock, but must not be acquired or
// held while holding or acquiring gArenas.mPurgeListLock.
//
MaybeMutex mLock MOZ_UNANNOTATED;
// The lock is required to write to fields of mStats, but it is not needed to
@@ -1251,6 +1264,17 @@ struct arena_t {
// Note that this must only be accessed while holding gArenas.mPurgeListLock
// (but not arena_t.mLock !) through gArenas.mOutstandingPurges.
DoublyLinkedListElement<arena_t> mPurgeListElem;
// A "significant reuse" is when a dirty page is used for a new allocation,
// it has the CHUNK_MAP_DIRTY bit cleared and CHUNK_MAP_ALLOCATED set.
//
// Timestamp of the last time we saw a significant reuse (in ns).
// Note that this variable is written very often from many threads and read
// only sparsely on the main thread, but when we read it we need to see the
// chronologically latest write asap (so we cannot use Relaxed).
Atomic<uint64_t> mLastSignificantReuseNS;
public:
// A flag that indicates if arena is (or will be) in mOutstandingPurges.
// When set true a thread has committed to adding a purge request for this
// arena. Cleared only by Purge when we know we are completely done.
@@ -1483,6 +1507,9 @@ struct arena_t {
return (mNumDirty > ((aForce) ? 0 : mMaxDirty >> 1));
}
// Update the last significant reuse timestamp.
void NotifySignificantReuse() MOZ_EXCLUDES(mLock);
bool IsMainThreadOnly() const { return !mLock.LockIsEnabled(); }
void* operator new(size_t aCount) = delete;
@@ -1690,14 +1717,22 @@ class ArenaCollection {
// Execute all outstanding purge requests, if any.
void MayPurgeAll(bool aForce);
// Execute at most one request and return, if there are more to process.
// Execute at most one request.
// Returns a purge_result_t with the following meaning:
// Done: Purge has completed for all arenas.
// NeedsMore: There is at least one arena that needs to be purged now.
// WantsLater: There is at least one arena that might want a purge later,
// according to aReuseGraceMS.
//
// aPeekOnly - If true, check only if there is work to do without doing it.
// Parameters:
// aPeekOnly: If true, check only if there is work to do without doing it.
// uint32_t: aReuseGraceMS is the time to wait before purging after a
// significant re-use happened for an arena.
//
// Note that this executes at most one call to arena_t::Purge for at most
// one arena, which will purge at most one chunk from that arena. This is the
// smallest possible fraction we can purge currently.
bool MayPurgeStep(bool aPeekOnly);
purge_result_t MayPurgeStep(bool aPeekOnly, uint32_t aReuseGraceMS);
private:
const static arena_id_t MAIN_THREAD_ARENA_BIT = 0x1;
@@ -1729,7 +1764,7 @@ class ArenaCollection {
// destroyed.
uint64_t mNumOperationsDisposedArenas = 0;
// Linked list of outstanding purges.
// Linked list of outstanding purges. This list has no particular order.
DoublyLinkedList<arena_t> mOutstandingPurges MOZ_GUARDED_BY(mPurgeListLock);
// Flag if we should defer purge to later. Only ever set when holding the
// collection lock.
@@ -4042,6 +4077,7 @@ void* arena_t::MallocSmall(size_t aSize, bool aZero) {
}
MOZ_DIAGNOSTIC_ASSERT(aSize == bin->mSizeClass);
size_t num_dirty_before, num_dirty_after;
{
MaybeMutexAutoLock lock(mLock);
@@ -4060,7 +4096,9 @@ void* arena_t::MallocSmall(size_t aSize, bool aZero) {
MOZ_ASSERT(!mRandomizeSmallAllocations || mPRNG ||
(mIsPRNGInitializing && !isInitializingThread));
num_dirty_before = mNumDirty;
run = GetNonFullBinRun(bin);
num_dirty_after = mNumDirty;
if (MOZ_UNLIKELY(!run)) {
return nullptr;
}
@@ -4076,7 +4114,9 @@ void* arena_t::MallocSmall(size_t aSize, bool aZero) {
mStats.allocated_small += aSize;
mStats.operations++;
}
if (num_dirty_after < num_dirty_before) {
NotifySignificantReuse();
}
if (!aZero) {
ApplyZeroOrJunk(ret, aSize);
} else {
@@ -4092,15 +4132,21 @@ void* arena_t::MallocLarge(size_t aSize, bool aZero) {
// Large allocation.
aSize = PAGE_CEILING(aSize);
size_t num_dirty_before, num_dirty_after;
{
MaybeMutexAutoLock lock(mLock);
num_dirty_before = mNumDirty;
ret = AllocRun(aSize, true, aZero);
num_dirty_after = mNumDirty;
if (!ret) {
return nullptr;
}
mStats.allocated_large += aSize;
mStats.operations++;
}
if (num_dirty_after < num_dirty_before) {
NotifySignificantReuse();
}
if (!aZero) {
ApplyZeroOrJunk(ret, aSize);
@@ -4131,8 +4177,10 @@ void* arena_t::PallocLarge(size_t aAlignment, size_t aSize, size_t aAllocSize) {
MOZ_ASSERT((aSize & gPageSizeMask) == 0);
MOZ_ASSERT((aAlignment & gPageSizeMask) == 0);
size_t num_dirty_before, num_dirty_after;
{
MaybeMutexAutoLock lock(mLock);
num_dirty_before = mNumDirty;
ret = AllocRun(aAllocSize, true, false);
if (!ret) {
return nullptr;
@@ -4162,11 +4210,14 @@ void* arena_t::PallocLarge(size_t aAlignment, size_t aSize, size_t aAllocSize) {
TrimRunTail(chunk, (arena_run_t*)ret, aSize + trailsize, aSize, false);
}
}
num_dirty_after = mNumDirty;
mStats.allocated_large += aSize;
mStats.operations++;
}
if (num_dirty_after < num_dirty_before) {
NotifySignificantReuse();
}
// Note that since Bug 1488780 we don't attempt to purge dirty memory on this code
// path. In general there won't be dirty memory above the threshold after an
// allocation, only after free. The exception is if the dirty page threshold
@@ -4638,6 +4689,15 @@ inline void arena_t::MayDoOrQueuePurge(purge_action_t aAction) {
}
}
inline void arena_t::NotifySignificantReuse() {
// Note that there is a chance here for a race between threads calling
// GetTimestampNS in a different order than writing it to the Atomic,
// resulting in mLastSignificantReuseNS potentially going backwards.
// Our use case is not sensitive to small deviations; the worst that can
// happen is a slightly earlier purge.
mLastSignificantReuseNS = GetTimestampNS();
}
void arena_t::RallocShrinkLarge(arena_chunk_t* aChunk, void* aPtr, size_t aSize,
size_t aOldSize) {
MOZ_ASSERT(aSize < aOldSize);
@@ -4662,6 +4722,7 @@ bool arena_t::RallocGrowLarge(arena_chunk_t* aChunk, void* aPtr, size_t aSize,
size_t pageind = (uintptr_t(aPtr) - uintptr_t(aChunk)) >> gPageSize2Pow;
size_t npages = aOldSize >> gPageSize2Pow;
size_t num_dirty_before, num_dirty_after;
{
MaybeMutexAutoLock lock(mLock);
MOZ_DIAGNOSTIC_ASSERT(aOldSize ==
@@ -4674,6 +4735,7 @@ bool arena_t::RallocGrowLarge(arena_chunk_t* aChunk, void* aPtr, size_t aSize,
(CHUNK_MAP_ALLOCATED | CHUNK_MAP_BUSY)) == 0 &&
(aChunk->map[pageind + npages].bits & ~gPageSizeMask) >=
aSize - aOldSize) {
num_dirty_before = mNumDirty;
// The next run is available and sufficiently large. Split the
// following run, then merge the first part with the existing
// allocation.
@@ -4689,11 +4751,15 @@ bool arena_t::RallocGrowLarge(arena_chunk_t* aChunk, void* aPtr, size_t aSize,
mStats.allocated_large += aSize - aOldSize;
mStats.operations++;
return true;
num_dirty_after = mNumDirty;
} else {
return false;
}
return false;
}
if (num_dirty_after < num_dirty_before) {
NotifySignificantReuse();
}
return true;
}
void* arena_t::RallocSmallOrLarge(void* aPtr, size_t aSize, size_t aOldSize) {
@@ -4818,6 +4884,7 @@ arena_t::arena_t(arena_params_t* aParams, bool aIsPrivate) {
mMaxDirtyDecreaseOverride = 0;
}
mLastSignificantReuseNS = GetTimestampNS();
mIsDeferredPurgePending = false;
mIsDeferredPurgeEnabled = gArenas.IsDeferredPurgeEnabled();
@@ -5871,8 +5938,9 @@ inline bool MozJemalloc::moz_enable_deferred_purge(bool aEnabled) {
return gArenas.SetDeferredPurge(aEnabled);
}
inline bool MozJemalloc::moz_may_purge_one_now(bool aPeekOnly) {
return gArenas.MayPurgeStep(aPeekOnly);
inline purge_result_t MozJemalloc::moz_may_purge_one_now(
bool aPeekOnly, uint32_t aReuseGraceMS) {
return gArenas.MayPurgeStep(aPeekOnly, aReuseGraceMS);
}
inline void ArenaCollection::AddToOutstandingPurges(arena_t* aArena) {
@@ -5887,7 +5955,9 @@ inline void ArenaCollection::AddToOutstandingPurges(arena_t* aArena) {
}
inline void ArenaCollection::RemoveObsoletePurgeFromList(arena_t* aArena) {
// We cannot trust the caller to know whether the element was already added
MOZ_ASSERT(aArena);
// We cannot trust the caller to know whether the element was already removed
// from another thread given we have our own lock.
MutexAutoLock lock(mPurgeListLock);
if (mOutstandingPurges.ElementProbablyInList(aArena)) {
@@ -5895,55 +5965,60 @@ inline void ArenaCollection::RemoveObsoletePurgeFromList(arena_t* aArena) {
}
}
purge_result_t ArenaCollection::MayPurgeStep(bool aPeekOnly,
uint32_t aReuseGraceMS) {
// If we wanted to allow calling this from non-main threads, we would need
// to ensure that arenas cannot be disposed while here, see bug 1364359.
MOZ_ASSERT(IsOnMainThreadWeak());
arena_t* found = nullptr;
{
MutexAutoLock lock(mPurgeListLock);
if (mOutstandingPurges.isEmpty()) {
return purge_result_t::Done;
}
uint64_t now = GetTimestampNS();
uint64_t reuseGraceNS = (uint64_t)aReuseGraceMS * 1000 * 1000;
for (arena_t& arena : mOutstandingPurges) {
if (now - arena.mLastSignificantReuseNS >= reuseGraceNS) {
found = &arena;
break;
}
}
if (!found) {
return purge_result_t::WantsLater;
}
if (aPeekOnly && found) {
return purge_result_t::NeedsMore;
}
}
if (!found->Purge(false)) {
// If this arena finished purging, remove it from the list.
MutexAutoLock lock(mPurgeListLock);
if (mOutstandingPurges.ElementProbablyInList(found)) {
mOutstandingPurges.remove(found);
}
}
// Even if there is no other arena that needs work, let the caller just call
// us again and we will do the above checks then and return their result.
// Note that in the current surrounding setting this may (rarely) cause a
// new slice of our idle task runner if we are exceeding idle budget.
return purge_result_t::NeedsMore;
}
void ArenaCollection::MayPurgeAll(bool aForce) {
MutexAutoLock lock(mLock);
for (auto* arena : iter()) {
// Arenas that are not IsMainThreadOnly can be purged from any thread.
// So we do what we can even if called from another thread.
if (!arena->IsMainThreadOnly() || IsOnMainThreadWeak()) {
while (arena->Purge(aForce)) {
}
while (arena->Purge(aForce));
RemoveObsoletePurgeFromList(arena);
}
}
}
bool ArenaCollection::MayPurgeStep(bool aPeekOnly) {
// If we'd want to allow to call this from non-main threads, we would need
// to ensure that arenas cannot be disposed while here, see bug 1364359.
MOZ_ASSERT(gArenas.IsOnMainThreadWeak());
if (!aPeekOnly) {
arena_t* current = nullptr;
{
MutexAutoLock lock(mPurgeListLock);
if (!mOutstandingPurges.isEmpty()) {
current = mOutstandingPurges.popFront();
}
}
// Note that Purge will clear mIsDeferredPurgePending while holding the
// arena's lock when it's done and thus returns false.
if (current && current->Purge(false)) {
MutexAutoLock lock(mPurgeListLock);
// Another thread may have inserted the arena in the list,
// we must recheck it.
if (!mOutstandingPurges.ElementProbablyInList(current)) {
// We pushFront here to ensure our CPU caches are still loaded for this
// arena if whoever calls us for the next round, does so immediately.
mOutstandingPurges.pushFront(current);
}
return true;
}
}
bool hasMore = false;
{
MutexAutoLock lock(mPurgeListLock);
hasMore = !mOutstandingPurges.isEmpty();
}
return hasMore;
}
#define MALLOC_DECL(name, return_type, ...) \
inline return_type MozJemalloc::moz_arena_##name( \
arena_id_t aArenaId, ARGS_HELPER(TYPED_ARGS, ##__VA_ARGS__)) { \


@@ -171,7 +171,10 @@ struct DummyArenaAllocator {
static bool moz_enable_deferred_purge(bool aEnable) { return false; }
static bool moz_may_purge_one_now(bool aPeekOnly) { return false; }
static purge_result_t moz_may_purge_one_now(bool aPeekOnly,
uint32_t aReuseGraceMS) {
return purge_result_t::Done;
}
#define MALLOC_DECL(name, return_type, ...) \
static return_type moz_arena_##name( \


@@ -214,6 +214,22 @@ static inline bool jemalloc_ptr_is_freed_page(jemalloc_ptr_info_t* info) {
return info->tag == TagFreedPage;
}
// The result of a purge step.
enum purge_result_t {
// Done: No more purge requests are pending.
Done,
// There is at least one arena left whose reuse grace period expired and
// needs purging asap.
NeedsMore,
// There is at least one arena that is waiting either for its reuse grace
// period to expire or for more significant reuse to happen. As we cannot
// foresee the future, whatever schedules the purges should come back later
// to check if we need a purge.
WantsLater,
};
#ifdef __cplusplus
} // extern "C"
#endif


@@ -12542,6 +12542,12 @@
value: 5
mirror: always
# If lazy purge is enabled, the minimum time we wait before purging after
# some memory was reused (tracked per arena).
- name: memory.lazypurge.reuse_grace_period
type: uint32_t
value: 500
mirror: always
#---------------------------------------------------------------------------
# Prefs starting with "middlemouse."


@@ -551,8 +551,9 @@ static bool replace_moz_enable_deferred_purge(bool aEnable) {
return gMallocTable.moz_enable_deferred_purge(aEnable);
}
static bool replace_moz_may_purge_one_now(bool aPeekOnly) {
return gMallocTable.moz_may_purge_one_now(aPeekOnly);
static purge_result_t replace_moz_may_purge_one_now(bool aPeekOnly,
uint32_t aReuseGraceMS) {
return gMallocTable.moz_may_purge_one_now(aPeekOnly, aReuseGraceMS);
}
// Must come after all the replace_* funcs


@@ -298,11 +298,15 @@ Task* Task::GetHighestPriorityDependency() {
#ifdef MOZ_MEMORY
static StaticRefPtr<IdleTaskRunner> sIdleMemoryCleanupRunner;
static StaticRefPtr<nsITimer> sIdleMemoryCleanupWantsLater;
static bool sIdleMemoryCleanupWantsLaterScheduled = false;
static const char kEnableLazyPurgePref[] = "memory.lazypurge.enable";
static const char kMaxPurgeDelayPref[] = "memory.lazypurge.maximum_delay";
static const char kMinPurgeBudgetPref[] =
"memory.lazypurge.minimum_idle_budget";
static const char kMinPurgeReuseGracePref[] =
"memory.lazypurge.reuse_grace_period";
#endif
void TaskController::Initialize() {
@@ -381,6 +385,11 @@ void TaskController::Shutdown() {
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
if (sIdleMemoryCleanupWantsLater) {
sIdleMemoryCleanupWantsLater->Cancel();
sIdleMemoryCleanupWantsLater = nullptr;
sIdleMemoryCleanupWantsLaterScheduled = false;
}
#endif
}
@@ -843,7 +852,8 @@ void TaskController::UpdateIdleMemoryCleanupPrefs() {
static void PrefChangeCallback(const char* aPrefName, void* aNull) {
MOZ_ASSERT((0 == strcmp(aPrefName, kEnableLazyPurgePref)) ||
(0 == strcmp(aPrefName, kMaxPurgeDelayPref)) ||
(0 == strcmp(aPrefName, kMinPurgeBudgetPref)));
(0 == strcmp(aPrefName, kMinPurgeBudgetPref)) ||
(0 == strcmp(aPrefName, kMinPurgeReuseGracePref)));
TaskController::Get()->UpdateIdleMemoryCleanupPrefs();
}
@@ -853,86 +863,203 @@ void TaskController::SetupIdleMemoryCleanup() {
Preferences::RegisterCallback(PrefChangeCallback, kEnableLazyPurgePref);
Preferences::RegisterCallback(PrefChangeCallback, kMaxPurgeDelayPref);
Preferences::RegisterCallback(PrefChangeCallback, kMinPurgeBudgetPref);
Preferences::RegisterCallback(PrefChangeCallback, kMinPurgeReuseGracePref);
TaskController::Get()->UpdateIdleMemoryCleanupPrefs();
}
void TaskController::MayScheduleIdleMemoryCleanup() {
// We want to schedule an idle task only if we:
// - know to be about to become idle
// - are not shutting down
// - have not yet an active IdleTaskRunner
// - have something to cleanup
if (PendingMainthreadTaskCountIncludingSuspended() > 0) {
// This is a hot code path for the main thread, so please do not add
// logic here or before.
return;
}
if (!mIsLazyPurgeEnabled) {
return;
}
if (AppShutdown::IsShutdownImpending()) {
if (sIdleMemoryCleanupRunner) {
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
return;
}
bool RunIdleMemoryCleanup(TimeStamp aDeadline, uint32_t aWantsLaterDelay);
if (!moz_may_purge_one_now(/* aPeekOnly */ true)) {
// Currently we unqueue purge requests only if we run moz_may_purge_one_now
// with aPeekOnly==false and that happens in the below IdleTaskRunner which
// cancels itself when done (and all of this happens on the main thread
// without possible races) OR if something else causes a MayPurgeAll (like
// jemalloc_free_(excess)_dirty_pages or moz_set_max_dirty_page_modifier)
// which can happen anytime (and even from other threads).
if (sIdleMemoryCleanupRunner) {
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
return;
}
void CheckIdleMemoryCleanupNeeded(nsITimer* aTimer, void* aClosure);
void CancelIdleMemoryCleanupTimerAndRunner() {
if (sIdleMemoryCleanupRunner) {
return;
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
if (sIdleMemoryCleanupWantsLaterScheduled) {
MOZ_ASSERT(sIdleMemoryCleanupWantsLater);
sIdleMemoryCleanupWantsLater->Cancel();
sIdleMemoryCleanupWantsLaterScheduled = false;
}
}
// Only create a marker if we really do something.
PROFILER_MARKER_TEXT("MayScheduleIdleMemoryCleanup", OTHER, {},
"Schedule for immediate run."_ns);
void ScheduleWantsLaterTimer(uint32_t aWantsLaterDelay) {
if (sIdleMemoryCleanupRunner) {
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
if (!sIdleMemoryCleanupWantsLater) {
auto res = NS_NewTimerWithFuncCallback(
CheckIdleMemoryCleanupNeeded, (void*)"IdleMemoryCleanupWantsLaterCheck",
aWantsLaterDelay, nsITimer::TYPE_ONE_SHOT_LOW_PRIORITY,
"IdleMemoryCleanupWantsLaterCheck");
if (res.isOk()) {
sIdleMemoryCleanupWantsLater = res.unwrap().forget();
}
} else {
if (sIdleMemoryCleanupWantsLaterScheduled) {
sIdleMemoryCleanupWantsLater->Cancel();
}
sIdleMemoryCleanupWantsLater->InitWithNamedFuncCallback(
CheckIdleMemoryCleanupNeeded, (void*)"IdleMemoryCleanupWantsLaterCheck",
aWantsLaterDelay, nsITimer::TYPE_ONE_SHOT_LOW_PRIORITY,
"IdleMemoryCleanupWantsLaterCheck");
}
sIdleMemoryCleanupWantsLaterScheduled = true;
}
void ScheduleIdleMemoryCleanup(uint32_t aWantsLaterDelay) {
TimeDuration maxPurgeDelay = TimeDuration::FromMilliseconds(
StaticPrefs::memory_lazypurge_maximum_delay());
TimeDuration minPurgeBudget = TimeDuration::FromMilliseconds(
StaticPrefs::memory_lazypurge_minimum_idle_budget());
CancelIdleMemoryCleanupTimerAndRunner();
sIdleMemoryCleanupRunner = IdleTaskRunner::Create(
[](TimeStamp aDeadline) {
bool pending = moz_may_purge_one_now(true);
if (pending) {
AUTO_PROFILER_MARKER_TEXT(
"DoIdleMemoryCleanup", OTHER, {},
"moz_may_purge_one_now until there is budget."_ns);
while (pending) {
pending = moz_may_purge_one_now(false);
if (!aDeadline.IsNull() && TimeStamp::Now() > aDeadline) {
break;
}
}
}
if (!pending && sIdleMemoryCleanupRunner) {
PROFILER_MARKER_TEXT("DoIdleMemoryCleanup", OTHER, {},
"Finished all cleanup."_ns);
sIdleMemoryCleanupRunner->Cancel();
sIdleMemoryCleanupRunner = nullptr;
}
// We never get here without attempting at least one purge call.
return true;
[aWantsLaterDelay](TimeStamp aDeadline) {
return RunIdleMemoryCleanup(aDeadline, aWantsLaterDelay);
},
"TaskController::IdlePurgeRunner", TimeDuration::FromMilliseconds(0),
maxPurgeDelay, minPurgeBudget, true, nullptr, nullptr);
// We do not pass aMayStopProcessing, which would be the only legitimate
// reason to return nullptr (OOM would crash), so no fallback needed.
MOZ_ASSERT(sIdleMemoryCleanupRunner);
"TaskController::IdlePurgeRunner", TimeDuration(), maxPurgeDelay,
minPurgeBudget, true, nullptr, nullptr);
}
// Check if a purge needs to be scheduled now or later.
// Used both as a timer callback and called directly from
// MayScheduleIdleMemoryCleanup.
//
// We schedule our runner if we are about to go idle and there is a purge
// due now (NeedsMore). We instead (re-)schedule a low-priority timer if
// we need to check again for a possible future purge (WantsLater). We use
// a timer for this instead of the same IdleTaskRunner to avoid posting
// runnables to the main thread to find idle time before the (very cheap)
// check actually runs.
//
// aTimer: Not used
// aClosure: Supposed to point to a name literal to be used for profile
// markers.
void CheckIdleMemoryCleanupNeeded(nsITimer* aTimer, void* aClosure) {
MOZ_ASSERT(aClosure);
const char* name = (const char*)aClosure;
uint32_t reuseGracePeriod =
StaticPrefs::memory_lazypurge_reuse_grace_period();
// The wantsLaterDelay is used as a last resort when the main thread stays
// idle but we know we should come back later.
// We double the grace period to increase the chance that all arenas' grace
// periods have expired if we ever trigger it after going idle, and to
// reduce the impact of occasionally firing while being busy.
uint32_t wantsLaterDelay = reuseGracePeriod * 2;
MOZ_ASSERT(!sIdleMemoryCleanupRunner ||
!sIdleMemoryCleanupWantsLaterScheduled);
auto result = moz_may_purge_one_now(/* aPeekOnly */ true, reuseGracePeriod);
switch (result) {
case purge_result_t::Done:
// Currently we unqueue purge requests only:
// if we run moz_may_purge_one_now with aPeekOnly==false and that happens
// only in the IdleTaskRunner which cancels itself when done
// OR
// if something else causes a MayPurgeAll (like
// jemalloc_free_(excess)_dirty_pages or moz_set_max_dirty_page_modifier)
// which can happen anytime.
if (sIdleMemoryCleanupRunner || sIdleMemoryCleanupWantsLaterScheduled) {
PROFILER_MARKER_TEXT(
ProfilerString8View::WrapNullTerminatedString(name), OTHER, {},
"Done (Cancel timer or runner)"_ns);
CancelIdleMemoryCleanupTimerAndRunner();
}
break;
case purge_result_t::WantsLater:
if (!sIdleMemoryCleanupWantsLaterScheduled) {
PROFILER_MARKER_TEXT(
ProfilerString8View::WrapNullTerminatedString(name), OTHER, {},
"WantsLater (First schedule of low priority timer)"_ns);
}
// We always want to (re-)schedule the timer to prevent it from firing
// as much as possible.
ScheduleWantsLaterTimer(wantsLaterDelay);
break;
case purge_result_t::NeedsMore:
// We can get here from the main thread going repeatedly idle after we
// already scheduled a runner. Just keep it.
if (!sIdleMemoryCleanupRunner) {
PROFILER_MARKER_TEXT(
ProfilerString8View::WrapNullTerminatedString(name), OTHER, {},
"NeedsMore (Schedule as-soon-as-idle cleanup)"_ns);
ScheduleIdleMemoryCleanup(wantsLaterDelay);
} else {
MOZ_ASSERT(!sIdleMemoryCleanupWantsLaterScheduled);
}
break;
}
}
// Do some purging until our idle budget is used.
//
// By the time the runner actually runs, the situation might have changed
// since it was scheduled, such that we might find nothing to do.
// And if we reached our budget and it still NeedsMore, we just keep the runner
// alive to get another slice of idle time from the current instance.
// Otherwise we just (un)schedule accordingly like CheckIdleMemoryCleanupNeeded
// would do.
//
// aDeadline: Deadline passed by the IdleTaskRunner until which we are
// allowed to consume time.
// aWantsLaterDelay: (Minimum) delay to be used for the WantsLater timer.
bool RunIdleMemoryCleanup(TimeStamp aDeadline, uint32_t aWantsLaterDelay) {
AUTO_PROFILER_MARKER_TEXT("RunIdleMemoryCleanup", OTHER, {}, ""_ns);
MOZ_ASSERT(!sIdleMemoryCleanupWantsLaterScheduled);
uint32_t reuseGracePeriod =
StaticPrefs::memory_lazypurge_reuse_grace_period();
purge_result_t result = purge_result_t::NeedsMore;
while (result == purge_result_t::NeedsMore) {
result = moz_may_purge_one_now(/* aPeekOnly */ false, reuseGracePeriod);
if (!aDeadline.IsNull() && TimeStamp::Now() > aDeadline) {
break;
}
}
switch (result) {
case purge_result_t::Done:
PROFILER_MARKER_TEXT("RunIdleMemoryCleanup", OTHER, {},
"Done (Cancel timer and runner)"_ns);
CancelIdleMemoryCleanupTimerAndRunner();
break;
case purge_result_t::WantsLater:
PROFILER_MARKER_TEXT(
"RunIdleMemoryCleanup", OTHER, {},
"WantsLater (First schedule of low priority timer)"_ns);
ScheduleWantsLaterTimer(aWantsLaterDelay);
break;
case purge_result_t::NeedsMore:
PROFILER_MARKER_TEXT("RunIdleMemoryCleanup", OTHER, {},
"NeedsMore (wait for next idle slice)."_ns);
break;
}
return true;
};
void TaskController::MayScheduleIdleMemoryCleanup() {
if (PendingMainthreadTaskCountIncludingSuspended() > 0) {
// This is a hot code path for the main thread, so please be cautious when
// adding more logic here or before.
// For example it is counterproductive to try to detect here whether the main
// thread is busy and cancel the timer in that case.
return;
}
if (!mIsLazyPurgeEnabled) {
return;
}
if (AppShutdown::IsShutdownImpending()) {
CancelIdleMemoryCleanupTimerAndRunner();
return;
}
CheckIdleMemoryCleanupNeeded(nullptr, (void*)"MayScheduleIdleMemoryCleanup");
}
#endif