This collects client-side fxa-content-server data. The data covers
only the about:accounts experience until:
* the fxa-content-server provides the LOADED message; or
* connection failure is observed.
Nota bene: a healthy fxa-content-server always delivers the LOADED
message! In the future, we might want to time out the load (and observe
said timeouts) separately.
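Roughly, the client-side collection window looks like the following
sketch; the message names and the record* helpers are hypothetical, not
the actual about:accounts wiring:

    // Hypothetical sketch: collection stops at LOADED or at failure,
    // whichever comes first.
    let loadStartMs = Date.now();
    let done = false;

    function onRemoteMessage(name) {
      if (done) {
        return; // nothing is collected after LOADED
      }
      if (name == "loaded") {          // the fxa-content-server LOADED message
        done = true;
        recordSuccess(Date.now() - loadStartMs);
      } else if (name == "failure") {  // connection failure observed
        done = true;
        recordFailure(Date.now() - loadStartMs);
      }
    }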
We collect no data after the fxa-content-server LOADED message. The
intention is for the server-side metrics flow to capture the valuable
"bounce rate" metrics, since the fxa-content-server team is in a
position to improve the web-based UI flow quickly.
The client-side data collected is intended to answer the following
questions:
1) How many remote content loads started;
2) How many loads completed;
3) What proportion of loads reached the LOADED message, as opposed
to failing;
4) How long it took each successful load to observe the LOADED
message;
5) How long it took each failing load to observe failure.
All of these are keyed by the fxa-content-server endpoint path (like
'settings' or 'profile/avatar'), since I observe differences in
time-to-LOADED between endpoint paths.
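A minimal sketch of recording those measurements with keyed Telemetry
histograms; the histogram IDs below are made up for illustration, not
the real probe names:

    Components.utils.import("resource://gre/modules/Services.jsm");

    // Question 1: count every load start, keyed by endpoint path.
    function recordLoadStart(endpointPath) {
      Services.telemetry.getKeyedHistogramById("ABOUT_ACCOUNTS_LOAD_STARTED")
              .add(endpointPath);
    }

    // Questions 2-5: counts and durations for successes and failures,
    // keyed by endpoint path ("settings", "profile/avatar", ...).
    function recordResult(endpointPath, elapsedMs, succeeded) {
      let id = succeeded ? "ABOUT_ACCOUNTS_LOADED_MS"
                         : "ABOUT_ACCOUNTS_FAILURE_MS";
      Services.telemetry.getKeyedHistogramById(id)
              .add(endpointPath, elapsedMs);
    }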
There is a privacy trade-off here. Mozilla is collecting data to
understand the user experience when about:accounts is connecting to
the specific fxa-content-server hosted by Mozilla at
accounts.firefox.com. However, we don't want to observe what
alternate servers users might be using, so we can't collect the whole
URL. Here, we discard the data whenever the user is /not/ using
accounts.firefox.com, and record just the endpoint path otherwise.
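A sketch of that filtering, assuming the content URL arrives as an
nsIURI (the helper name is illustrative):

    // Return just the endpoint path for Mozilla's hosted server, or
    // null when a custom server is in use, in which case we record
    // nothing at all.
    function endpointPathFor(uri) {
      if (uri.host != "accounts.firefox.com") {
        return null;
      }
      // "/settings?context=..." -> "settings"; never the whole URL.
      return uri.path.replace(/^\//, "").replace(/[?#].*$/, "");
    }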
Other collected data could expose that the user is using Firefox
Accounts, and together, that leaks the number of users not using
accounts.firefox.com. We accept this leak: Mozilla already collects
data about whether Sync (both legacy and FxA) is using a custom server
in various situations: see the WEAVE_CUSTOM_* Telemetry histograms.
The GMP manager uses a copy of the update service's URL formatting code,
which has since fallen out of sync. We'll also want to use the same
formatting code for the system add-on update checks, so this just exposes
it in a shared API.
I've moved the contents of UpdateChannel.jsm to UpdateUtils.jsm and exposed
formatUpdateURL there as well as a few properties that the update service still
needs access to.
UpdateUtils.UpdateChannel is intended to be a lazy getter but isn't for now
since tests expect to be able to change the update channel at runtime.
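The resulting module looks roughly like this (a sketch of the shape, not
the exact code; the %CHANNEL%/%VERSION% substitution simplifies the real
formatting logic):

    this.EXPORTED_SYMBOLS = ["UpdateUtils"];

    Components.utils.import("resource://gre/modules/Services.jsm");

    this.UpdateUtils = {
      // Deliberately not a lazy getter for now: tests change
      // app.update.channel at runtime and expect to see the new value.
      get UpdateChannel() {
        try {
          return Services.prefs.getDefaultBranch("")
                         .getCharPref("app.update.channel");
        } catch (e) {
          return "default";
        }
      },

      // Shared by the update service, the GMP manager, and (soon) the
      // system add-on update checks.
      formatUpdateURL(url) {
        return url.replace(/%CHANNEL%/g, this.UpdateChannel)
                  .replace(/%VERSION%/g, Services.appinfo.version);
      },
    };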
Jemalloc 4 regularly purges dirty pages during free() when the number of
dirty pages exceeds the number of active pages divided by
1 << lg_dirty_mult (that is, when the active:dirty ratio drops below
1 << lg_dirty_mult). We set lg_dirty_mult in jemalloc_config to limit
RSS usage, but it also has an impact on performance.
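To make the ratio concrete, the purge trigger amounts to the following
check (an illustration of jemalloc's rule, not its actual code):

    // With jemalloc 4's default lg_dirty_mult of 3, purging kicks in
    // once dirty pages exceed active pages / 8.
    function shouldPurge(activePages, dirtyPages, lgDirtyMult = 3) {
      return dirtyPages > (activePages >> lgDirtyMult);
    }

    shouldPurge(8192, 1024); // false: 1024 <= 8192 / 8
    shouldPurge(8192, 1025); // true: threshold exceeded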
So instead of enforcing a high ratio to force more pages to be purged, we
keep jemalloc's default ratio of 8 and force a regular purge of all dirty
pages after cycle collection.
Keeping jemalloc's default ratio avoids the cycle-collector-triggered
purge having to walk through all the dirty pages when there are a lot of
them; in that case, the normal jemalloc purge during free() will already
have kicked in. It also ensures that everything that doesn't run the
cycle collector, like plugins in the plugin-container, still gets some
level of purging.
At the same time, since jemalloc_purge_freed_pages() does nothing with
jemalloc 4, we repurpose the MEMORY_FREE_PURGED_PAGES_MS Telemetry probe
to track the time spent in this cycle-collector-triggered purge.
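In JavaScript-flavored pseudocode, the new accounting looks like this;
the observer topic and the purge binding are both assumptions (the real
hook lives in C++ next to the cycle collector):

    Components.utils.import("resource://gre/modules/Services.jsm");

    // Hypothetical sketch of the cycle-collector-triggered purge.
    Services.obs.addObserver(function purgeAfterCC() {
      let start = Date.now();
      purgeAllDirtyPages(); // assumed binding to a full jemalloc dirty-page purge
      // Repurposed probe: time spent purging, now that
      // jemalloc_purge_freed_pages() is a no-op with jemalloc 4.
      Services.telemetry.getHistogramById("MEMORY_FREE_PURGED_PAGES_MS")
              .add(Date.now() - start);
    }, "cycle-collector-end", false); // assumed topic name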