Specifically:
- Make the warning about editing in all-caps;
- Make it clear that about:config is a browser thing;
- Add a mention of the user.js file;
- Use C++ comments, because I prefer them to C comments and I am the module
owner :)
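For reference, the updated header would look something like this (wording
approximate, not the exact text):

    // Mozilla User Preferences
    //
    // DO NOT EDIT THIS FILE.
    //
    // If you make changes to this file while the application is running,
    // the changes will be overwritten when the application exits.
    //
    // To change a preference value, you can either:
    // - modify it via the UI (e.g. via about:config in the browser); or
    // - set it within a user.js file in your profile.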
MozReview-Commit-ID: 9GXS5nNHywO
The prefs parser has two significant problems.
- It doesn't separate tokenizing from parsing.
- It is implemented as a loop around a big switch on a "current state"
variable.
As a result, it is hard to understand and modify, slower than it could be, and
in obscure cases (involving comments and whitespace) it fails to parse what
should be valid input.
This patch replaces it with a recursive descent parser (albeit one without any
recursion!) that has separate tokenization. The new parser is easier to
understand and modify, more correct, and has better error messages. It doesn't
do error recovery, but that would be much easier to add than in the old parser.
The new parser also runs about 1.9x faster than the existing parser. (As
measured by parsing greprefs.js's contents from memory 1000 times in
succession, omitting the prefs hash table construction. If the table
construction is included, it's about 1.6x faster.)
The new parser is slightly stricter than the old parser in a few ways.
- Disconcertingly, the old parser allowed arbitrary junk between prefs
(including at the start and end of the prefs file) so long as that junk
didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
lines like these:
    !foo@bar&pref("prefname", true);
    ticky_pref("prefname", true);    // missing 's' at start
    User_pref("prefname", true);     // should be 'u' at start
would all be treated the same as this:
pref("prefname", true);
The new parser disallows such junk because it isn't necessary and seems like
an unintentional botch by the old parser.
- The old parser allowed character 0x1a (SUB) between tokens and treated it
like '\n'.
The new parser does not allow this character. SUB was used to indicate
end-of-file (*not* end-of-line) in some old operating systems such as MS-DOS,
but this doesn't seem necessary today.
- The old parser tolerated (with a warning) invalid escape sequences within
string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
(both of which have insufficient hex digits) -- accepting them literally.
The new parser does not tolerate invalid escape sequences because it doesn't
seem necessary and would complicate things.
- The old parser tolerated character 0x00 (NUL) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs
will almost certainly consider those chars as end-of-string markers.
The new parser treats NUL chars as end-of-file, to avoid this danger and
because it facilitates a significant optimization (described within the
code).
- The old parser allowed integer literals to overflow, silently wrapping them.
The new parser treats integer overflow as a parse error. This seems better,
and it caught existing overflows of places.database.lastMaintenance, in
testing/profiles/prefs_general.js (bug 1424030) and
testing/talos/talos/config.py (bug 1434813).
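As an illustration of that last change, a (made-up) line like the following used
to be accepted with the value silently wrapped, and is now a parse error:

    pref("some.int.pref", 5000000000);  // exceeds the 32-bit integer range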
The first of these changes meant that a couple of existing prefs with ";;" at
the end had to be changed (done in the preceding patch).
The minor increase in strictness shouldn't be a problem for default pref files
such as greprefs.js within the application (which we can modify), nor for
app-written prefs files such as prefs.js. It could affect user-written prefs
files such as user.js; the experience above suggests that integer overflow and
";;" are the most likely problems in practice. In my opinion, the risk here is
acceptable.
The new parser also does a better job of tracking line numbers because it (a)
treats "\r\n" sequences as a single end-of-line marker, and (b) pays attention
to end-of-line sequences within string literals.
Finally, the patch adds thorough tests of both valid and invalid syntax.
MozReview-Commit-ID: JD3beOQl4AJ
* * *
[mq]: win-fix
MozReview-Commit-ID: 91Bxjfghqfw
This pref does not override privacy.resistFingerprinting, but when it is set (and
privacy.resistFingerprinting is not) we will still adjust the precision of almost
all timers. The adjustment amount is controlled by the second pref, which
defaults to 20us but is now dynamically adjustable (on the scale of
microseconds).
This patch does _not_ address the performance API, which privacy.resistFingerprinting
disables.
We are landing this preffed on at 20us, the value we currently clamp
performance.now() to.
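The basic clamping operation itself is simple. A minimal sketch (not the actual
implementation), assuming the resolution pref is expressed in microseconds:

    #include <cmath>

    // Round a timestamp (in microseconds) down to the nearest multiple of the
    // resolution (e.g. 20us), so callers only ever see the reduced precision.
    double ReduceTimerPrecision(double aTimeUS, double aResolutionUS)
    {
      return std::floor(aTimeUS / aResolutionUS) * aResolutionUS;
    }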
MozReview-Commit-ID: ESZlSvH9w1D
This patch introduces three keyed histograms:
- PREFERENCES_FILE_LOAD_SIZE_B
- PREFERENCES_FILE_LOAD_NUM_PREFS
- PREFERENCES_FILE_LOAD_TIME_US
They are all keyed on the prefs file's name; in my local Linux64 build there
are 13 such files.
Because prefs start up earlier than telemetry, we have to save the measurements
and then pass them to telemetry later.
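In other words, something along these lines (a sketch only; the exact types and
accumulation calls are approximations):

    // Measurements taken while loading each prefs file, before telemetry is up.
    struct PrefsFileLoadStats
    {
      nsCString mFileName;   // histogram key, e.g. "greprefs.js"
      uint32_t mSizeB;       // file size in bytes
      uint32_t mNumPrefs;    // number of prefs parsed
      uint32_t mLoadTimeUS;  // load + parse time in microseconds
    };
    static nsTArray<PrefsFileLoadStats> sPendingStats;

    // Called once telemetry has started up.
    static void ReportPendingStats()
    {
      for (const auto& stats : sPendingStats) {
        Telemetry::Accumulate(Telemetry::PREFERENCES_FILE_LOAD_SIZE_B,
                              stats.mFileName, stats.mSizeB);
        Telemetry::Accumulate(Telemetry::PREFERENCES_FILE_LOAD_NUM_PREFS,
                              stats.mFileName, stats.mNumPrefs);
        Telemetry::Accumulate(Telemetry::PREFERENCES_FILE_LOAD_TIME_US,
                              stats.mFileName, stats.mLoadTimeUS);
      }
      sPendingStats.Clear();
    }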
MozReview-Commit-ID: H6KD7oeK8O0
Currently pref_ReadPrefFromJar() will return NS_OK if parsing fails. This is
weird, and inconsistent with openPrefFile().
MozReview-Commit-ID: 7cHSewQYymE
New content processes get prefs in three ways.
- They read them from greprefs.js, prefs.js and other such files.
- They get sent "early prefs" from the parent process via the command line
(-intPrefs/-boolPrefs/-stringPrefs).
- They get sent "late prefs" from the parent process via IPC message.
(The latter two are necessary for communicating prefs that have been added or
modified in the parent process since the file reading occurred at startup.)
We have some machinery that detects if a late pref is accessed before the late
prefs are set, which is good. But it has a big exception in it; late pref
accesses that occur early via Add*VarCache() and RegisterCallbackAndCall() are
allowed.
This exception was added in bug 1341414. The description of that bug says "We
should change AddBoolVarCache so that it doesn't look at the pref in the
content process until prefs have been received from the parent." Unfortunately,
the patch in that bug added the exception to the checking without changing
Add*VarCache() in the suggested way!
This means it's possible for late prefs to be read early via VarCaches (or
RegisterCallbackAndCall()) when their values are incorrect, which is bad.
Changing Add*VarCache() to delay the reading as bug 1341414 originally
suggested seems difficult. A simpler fix is to just remove the exception in the
checking and extend the early prefs list as necessary. This patch does that,
lengthening the early prefs list from ~210 to ~300. Fortunately, most (all?) of
the added prefs are ints or bools rather than strings, so it doesn't increase
the size of the command line arguments for content processes by too much.
This code is used to detect too-early accesses of prefs in content processes.
The patch makes the following changes.
- New terminology: "early" prefs are those sent via the command line; "late"
prefs are those sent via IPC. Previously the former were "init" prefs and the
latter didn't have a clear name.
- The phase tracking and checking is now almost completely encapsulated within
Preferences.cpp. The only exposure to outside code is via the
AreAllPrefsSetInContentProcess() method, which has a single use.
- The number of states tracked drops from 5 to 3. There's no need to track the
beginning of the pref-setting operations, because we only need to know if
they've finished. (This also avoids the weirdness where we could transition
from END_INIT_PREFS back to BEGIN_INIT_PREFS because of the way -intPrefs,
-boolPrefs and -stringPrefs were parsed separately.)
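Concretely, the remaining tracking boils down to something like this (names are
illustrative, not necessarily the actual ones):

    enum class ContentProcessPhase
    {
      eNoPrefsSet,
      eEarlyPrefsSet,
      eEarlyAndLatePrefsSet,
    };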
MozReview-Commit-ID: IVJWiDxdsDV
The default path and the user path are entirely disjoint, and some of the
arguments only apply to one of the paths, so splitting it into two functions
makes things a bit clearer. The aForceSet arg is also renamed aFromFile.
MozReview-Commit-ID: LYtrwz5JHiN
pref_SetPref() is now the only function that runs in the content process and
calls HandleDirty(). So this patch moves the parent process check out of
HandleDirty() into pref_SetPref().
The patch also adds assertions to a couple of other parent-process-only
functions.
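Roughly, with the signature abbreviated, the check now sits at the call site:

    static nsresult pref_SetPref(/* name, type, kind, value, ... */)
    {
      // ... find or create the hash table entry and update its value ...

      if (XRE_IsParentProcess()) {
        Preferences::HandleDirty();  // only the parent ever writes prefs.js
      }
      return NS_OK;
    }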
MozReview-Commit-ID: KurXKMl4IIb
The code to migrate from the toolkit.telemetry.enabledPreRelease pref to
toolkit.telemetry.enabled was added to Firefox 31 in bug 986582. It should be
safe to remove now.
MozReview-Commit-ID: JBkn20bUQXx
Preferences::SetPreference() is used when setting prefs in the content process
from dom::Pref values that are passed from the parent process. Currently we
use the high-level Set*InAnyProcess() methods to do this -- basically the same
code path as sets done via the API -- but this has several problems.
- It is subtly broken. If the content process already has a pref of type T with
a default value and then we get a SetPreference() call that tries to change
it to type U, it will erroneously fail. In practice this never(?) happens,
but it shows that things aren't arranged very well.
- SetPreference() also looks up the hashtable twice to get the same pref if
both a default value and a user value are present in the dom::Pref.
- This happens in content processes, while all other pref set operations occur
in the parent process. This is the main reason we have the Set*InAnyProcess()
functions.
In short, the setting of prefs via IPC is quite different to the setting of
prefs via other means -- because it happens in content processes, and it's more
of a clobber that can set both values at once -- and so should be handled
differently.
The solution is to make Preferences::SetPreference() use lower-level operations
to do the update. It now does the hash table lookup/add itself, and then calls
the new Pref::FromDomPref() method. That method then possibly calls the new
PrefValue::FromDomPrefValue() method for both kinds of value, and overwrites
them as necessary. SetValueFromDom() is no longer used and the patch removes it.
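The result has roughly this shape (LookupOrInsertPref() is a hypothetical
placeholder for the single hash table lookup-or-insert; details elided):

    void Preferences::SetPreference(const dom::Pref& aDomPref)
    {
      // Runs in content processes only, applying a pref sent from the parent.
      Pref* pref = LookupOrInsertPref(aDomPref.name());
      if (pref) {
        pref->FromDomPref(aDomPref);  // may overwrite default and/or user value
      }
    }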
MozReview-Commit-ID: 2Rg8VMOc0Cl
The nice thing about this is that the memory management of the strings
(moz_xstrdup() and free()) is now entirely within PrefValue.
MozReview-Commit-ID: KJjnURpmgfE
It's something of an obfuscation, and it forces together various bool values
that don't necessarily have to be together (and won't be together after future
refactorings).
The patch also reorders some function arguments for consistency: PrefType, then
PrefValueKind, then PrefValue.
MozReview-Commit-ID: KNY0Pxo0Nxf
Although it is a subclass of PLDHashEntryHdr, it's the main representation of a
pref, so the name should reflect that.
MozReview-Commit-ID: 5qJNQtjbFmH
As is done in pref_SavePrefs().
The confusion here is because a Vector can fill 100% of its capacity, but a
hash table cannot go past 75% of its capacity.
MozReview-Commit-ID: 5JMbmtrxMGN
It represents a pref, so `Pref` is a better name. Within Preferences.cpp the
patch uses domPref/aDomPref to distinguish it from PrefHashEntry values.
MozReview-Commit-ID: HXTl0GX4BtO
This factors out some common code from SetValue(), making it easier to read.
The patch also improves the comments in SetValue().
MozReview-Commit-ID: 60JnBlIS1q6
Currently, you can create a pref that only has a user value, and then later
give it a default value with a different type. The entire pref is then recorded
as having this second type. This causes problems later when interpreting the
user value.
This patch makes SetValue() fail if it tries to set a default value whose type
differs from an existing user value. It also expands an existing test to cover
this case and some similar ones.
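The new check amounts to something like the following inside SetValue() (the
field and argument names here are illustrative, not the actual ones):

    // Refuse to set a default value whose type differs from an existing user
    // value; previously this silently changed the pref's recorded type.
    if (aKind == PrefValueKind::Default && mHasUserValue && aType != Type()) {
      return NS_ERROR_UNEXPECTED;
    }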
MozReview-Commit-ID: 89tvISQ7RNT
This requires adding an aPriority argument (defaulting to false) to
Preferences::RegisterCallback(). And RegisterVarCacheCallback() is no longer
necessary.
MozReview-Commit-ID: BMDk3HuaQVV