CondVar already calls `AUTO_PROFILER_THREAD_SLEEP`, which shouldn't be called recursively.
mozilla::Monitor also uses CondVar, so it shouldn't be surrounded by `AUTO_PROFILER_THREAD_SLEEP` either.
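For illustration, a minimal sketch (not the actual patch; WaitForSignal() is a placeholder) of why the extra annotation would nest: Monitor::Wait() ends up in CondVar::Wait(), which already emits the sleep marker.

  #include "mozilla/Monitor.h"

  static void WaitForSignal(mozilla::Monitor& aMonitor) {
    mozilla::MonitorAutoLock lock(aMonitor);
    // AUTO_PROFILER_THREAD_SLEEP;  // would nest: CondVar::Wait() below already
    //                              // marks the thread as sleeping
    lock.Wait();  // Monitor::Wait() -> CondVar::Wait()
  }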
Depends on D72850
Differential Revision: https://phabricator.services.mozilla.com/D72851
IDLE markers help with categorizing threads in the profiler UI.
The _SLEEP markers let the profiler spend less time sampling threads whose state hasn't changed while they sleep.
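As a minimal sketch (not the actual patch; "ThreadPool idle" and WaitForNextEvent() are placeholders), the two annotations are typically combined around a blocking wait like this:

  #include "GeckoProfiler.h"

  void WaitForNextEvent();  // hypothetical blocking wait

  static void IdleWaitSketch() {
    AUTO_PROFILER_LABEL("ThreadPool idle", IDLE);  // categorize the thread as idle
    AUTO_PROFILER_THREAD_SLEEP;  // let the profiler skip re-sampling while blocked
    WaitForNextEvent();
  }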
Differential Revision: https://phabricator.services.mozilla.com/D10815
Most of the time when we automatically create nsThread wrappers for threads
that don't already have them, we don't actually need the event targets, since
those threads don't run XPCOM event loops. Aside from wasting memory, actually
creating these event queues can lead to leaks if a thread tries to dispatch a
runnable to the queue, which creates a reference cycle with the thread.
Not creating the event queues for threads that don't actually need them helps
avoid those footguns, and also makes it easier to figure out which threads
actually run XPCOM event loops.
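As an illustrative sketch of the leak described above (the class and function here are hypothetical), a runnable that keeps the thread alive gets queued on an event queue that the thread never drains, so neither object is ever released:

  #include "nsCOMPtr.h"
  #include "nsIThread.h"
  #include "nsThreadUtils.h"

  class KeepThreadAliveRunnable final : public mozilla::Runnable {
   public:
    explicit KeepThreadAliveRunnable(nsIThread* aThread)
        : mozilla::Runnable("KeepThreadAliveRunnable"), mThread(aThread) {}
    NS_IMETHOD Run() override { return NS_OK; }  // never runs on such a thread
   private:
    nsCOMPtr<nsIThread> mThread;  // thread -> queue -> runnable -> thread cycle
  };

  static nsresult Footgun(nsIThread* aWrappedNonXpcomThread) {
    // The dispatch enqueues the runnable, but a thread that doesn't run an
    // XPCOM event loop never processes its queue, so the cycle above leaks.
    return aWrappedNonXpcomThread->Dispatch(
        new KeepThreadAliveRunnable(aWrappedNonXpcomThread), NS_DISPATCH_NORMAL);
  }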
MozReview-Commit-ID: Arck4VQqdne
This parameter isn't used by any implementation of onDispatchedEvent,
and keeping the parameter makes later refactorings in this bug more difficult.
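For illustration, a sketch of the shape of the change, assuming the observer interface in question is nsIThreadObserver (the commit message doesn't name it):

  #include "nsError.h"

  class nsIThreadInternal;

  class ThreadObserverSketch {
   public:
    // Before: the dispatching thread was passed along, but no implementation
    // ever looked at it:
    //   virtual nsresult OnDispatchedEvent(nsIThreadInternal* aThread) = 0;

    // After: the unused parameter is dropped.
    virtual nsresult OnDispatchedEvent() = 0;
  };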
MozReview-Commit-ID: 90VY2vYtwCW
CachePerfStats gathers performance data for single open, read, and write
operations as well as for whole cache entry openings. It maintains a long-term
and a short-term average. The long-term average filters out excessive values
and represents the average time for a given operation when the cache is not
busy. The short-term average represents the current cache speed. By comparing
these two statistics we can tell quickly when the cache is getting slower, and
in that case we race the cache with the network immediately, without a delay.
Otherwise the delay is based on the average cache entry open time.
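A minimal sketch of the idea (not the actual CachePerfStats implementation; the weights and the spike filter here are made up):

  #include <cstdint>

  class MovingAvg {
   public:
    explicit MovingAvg(uint32_t aWeight) : mWeight(aWeight) {}
    void Add(uint32_t aSampleMs) {
      // Exponential moving average; a heavier weight reacts more slowly.
      mAvg = mAvg ? (mAvg * (mWeight - 1) + aSampleMs) / mWeight : aSampleMs;
    }
    uint32_t Get() const { return mAvg; }
   private:
    uint32_t mWeight;
    uint32_t mAvg = 0;
  };

  struct CachePerfStatsSketch {
    MovingAvg mLongTerm{64};   // filters out spikes: speed when the cache isn't busy
    MovingAvg mShortTerm{4};   // tracks the current cache speed

    void AddEntryOpenTime(uint32_t aMs) {
      mShortTerm.Add(aMs);
      // Only fold non-excessive samples into the long-term average.
      if (!mLongTerm.Get() || aMs < 2 * mLongTerm.Get()) {
        mLongTerm.Add(aMs);
      }
    }

    // Race the cache with the network without delay when the cache is slower
    // than usual; otherwise base the delay on the average entry open time.
    uint32_t RaceDelayMs() const {
      return mShortTerm.Get() > 2 * mLongTerm.Get() ? 0 : mLongTerm.Get();
    }
  };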
NS_SetCurrentThreadName() is added as an alternative to PR_SetCurrentThreadName()
inside libxul. The thread names are collected as a crash annotation to be
processed on Socorro.
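A minimal usage sketch; NS_SetCurrentThreadName() is the call described above (declared in nsThreadUtils.h), while the surrounding function and name are placeholders:

  #include "nsThreadUtils.h"

  static void ThreadFunc(void* /* aArg */) {
    // Names the current thread; per the description above, the names also end
    // up in a crash annotation processed on Socorro.
    NS_SetCurrentThreadName("Example Thread");
    // ... thread body ...
  }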
MozReview-Commit-ID: 4RpAWzTuvPs
Changed |print("enum ID : uint32_t {", file=output)| to |print("enum HistogramID : uint32_t {", file=output)| at line 53 of the file |toolkit/components/telemetry/gen-histogram-enum.py|, and then replaced all the textual occurrences of |Telemetry::ID| to |Telemetry::HistogramID| and |ID| to |HistogramID| in 43 other files.
As far as I can tell, this covers all the remaining threads which we start
using PR_CreateThread, except the ones that are created inside NSPR or NSS,
and except for the Shutdown Watchdog thread in nsTerminator.cpp and the
CacheIO thread. The Shutdown Watchdog thread stays alive past leak detection
during shutdown (by design), so we'd report leaks if we profiled it. The
CacheIO thread seems to stay alive past shutdown leak detection sometimes as
well.
This adds an AutoProfilerRegister stack class for easy registering and
unregistering. There are a few places where we still call
profiler_register_thread() and profiler_unregister_thread() manually, either
because registration happens conditionally, or because there is a variable that
gets put on the stack before the AutoProfilerRegister (e.g. a dynamically
generated thread name). AutoProfilerRegister needs to be the first object on
the stack because it uses its own `this` pointer as the stack top address.
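A sketch of the assumed shape of the helper (not the exact class added): register on construction using `this` as the stack top, unregister on destruction. The class and function names here are placeholders; the two profiler calls are the real entry points.

  #include "GeckoProfiler.h"

  class AutoProfilerRegisterSketch final {
   public:
    explicit AutoProfilerRegisterSketch(const char* aName) {
      // Must be the first object on the thread's stack so that `this` is a
      // reasonable approximation of the stack top address.
      profiler_register_thread(aName, this);
    }
    ~AutoProfilerRegisterSketch() { profiler_unregister_thread(); }
  };

  static void ThreadEntry(void* /* aArg */) {
    AutoProfilerRegisterSketch registerThread("Example Thread");  // placeholder name
    // ... thread body ...
  }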
MozReview-Commit-ID: 3vwhS55Yzt
All of these things are called with the result of
NS_NewRunnableFunction, so we need to transition them over to a world
where NS_NewRunnableFunction returns something different.
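An illustrative sketch of the kind of caller change implied here (current signatures assumed; DoWorkLater() is a placeholder): hold the result of NS_NewRunnableFunction in a strong reference instead of passing the raw return value straight through, so the caller keeps working when the return type changes.

  #include "nsCOMPtr.h"
  #include "nsThreadUtils.h"

  static void DoWorkLater() {
    nsCOMPtr<nsIRunnable> runnable =
        NS_NewRunnableFunction("DoWorkLater", []() { /* ... work ... */ });
    NS_DispatchToMainThread(runnable.forget());
  }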