This patch makes the following changes to the macros.
- Removes PROFILER_LABEL_FUNC. It's only suitable for use in functions outside
classes, due to PROFILER_FUNCTION_NAME not getting class names, and it was
mostly misused.
- Removes PROFILER_FUNCTION_NAME. It's no longer used, and __func__ is
universally available now anyway.
- Combines the first two string literal arguments of PROFILER_LABEL and
PROFILER_LABEL_DYNAMIC into a single argument. There was no good reason for
them to be separate, and it forced a '::' in the label, which isn't always
appropriate. Also, the meaning of the "name_space" argument was interpreted
in an interesting variety of ways.
- Adds an "AUTO_" prefix to PROFILER_LABEL and PROFILER_LABEL_DYNAMIC, to make
it clearer they construct RAII objects rather than just being function calls.
(I myself have screwed up the scoping because of this in the past.)
- Fills in the 'js::ProfileEntry::Category::' qualifier within the macro, so
the caller doesn't need to. This makes a *lot* more of the uses fit onto a
single line. (A before/after example is sketched below.)
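For illustration, here is roughly what a typical call site looks like before and
after this change (the class and function names are made up for the example):

    // Before: separate "name_space" and "name" string arguments, a forced "::"
    // between them, and the caller spelling out the full category qualifier.
    PROFILER_LABEL("nsExampleLoader", "LoadURI",
                   js::ProfileEntry::Category::OTHER);

    // After: a single label string, an AUTO_ prefix making it clear this
    // constructs an RAII object scoped to the enclosing block, and the macro
    // supplying the js::ProfileEntry::Category:: qualifier itself.
    AUTO_PROFILER_LABEL("nsExampleLoader::LoadURI", OTHER);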
The patch also makes the following changes to the macro uses (beyond those
required by the changes described above).
- Fixes a bunch of labels that had gotten out of sync with the name of the
class and/or function that encloses them.
- Removes a useless PROFILER_LABEL use within a trivial scope in
EventStateManager::DispatchMouseOrPointerEvent(). It clearly wasn't serving
any useful purpose. It also serves as extra evidence that the AUTO_ prefix is
a good idea.
- Tweaks DecodePool::SyncRunIf{Preferred,Possible} so that the labelling is
done within them, instead of at their callsites, because that's a more
standard way of doing things.
This patch includes the following fixes in classifierHelper.js:
1. Avoid reusing the same chunk numbers.
2. Remove the unused removeUrlFromDB function.
MozReview-Commit-ID: XK1oHBa8gf
The Flash infobar is showing up a bit too often. There are a few changes planned to address this. One of them is a list that will contain sites that should never display infobars. This patch allows the list to be downloaded from Shavar and updated via the URL classifier.
MozReview-Commit-ID: BgAaysyRzIE
We write a lot of 4-byte prefixes to the file, which results in many system calls.
We should use a buffer and only write to the file when the buffer is full or
when we finish writing. nsIBufferedOutputStream is a good candidate for this.
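As a simplified illustration of the buffering idea, a plain C++ sketch (standard
library I/O standing in for the actual XPCOM stream classes; the buffer size and
names here are arbitrary):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Accumulate 4-byte prefixes in memory and flush them in large chunks, so we
    // issue one write system call per buffer instead of one per prefix.
    class PrefixWriter {
     public:
      explicit PrefixWriter(FILE* aFile, size_t aBufferSize = 64 * 1024)
        : mFile(aFile), mLimit(aBufferSize) { mBuffer.reserve(aBufferSize); }

      void WritePrefix(uint32_t aPrefix) {
        const uint8_t* bytes = reinterpret_cast<const uint8_t*>(&aPrefix);
        mBuffer.insert(mBuffer.end(), bytes, bytes + sizeof(aPrefix));
        if (mBuffer.size() >= mLimit) {
          Flush();  // buffer is full: write it out in one call
        }
      }

      void Flush() {
        if (!mBuffer.empty()) {
          fwrite(mBuffer.data(), 1, mBuffer.size(), mFile);
          mBuffer.clear();
        }
      }

      ~PrefixWriter() { Flush(); }  // flush the remainder when we finish writing

     private:
      FILE* mFile;
      size_t mLimit;
      std::vector<uint8_t> mBuffer;
    };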
MozReview-Commit-ID: CzGOd7iXVTv
These methods do not appear to be used.
When JSM global sharing is enabled, these methods contaminate the
global Function.prototype, which breaks Marionette object
serialization.
MozReview-Commit-ID: CAfJ2FCkhlK
After adopting the new thread model for Safe Browsing, we create a new lookup
cache for updates so we can still query the existing lookup cache at the same
time. The prefix set and completions are generated when we open the new lookup
cache, but the cache entries are not, so we would lose the cache at that point.
This patch copies the cache data from the old lookup cache to the new lookup
cache during an update.
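A rough sketch of the idea, with made-up types standing in for the real lookup
cache (the actual Gecko classes and members differ):

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // Made-up stand-in for a lookup cache: the prefix set and completions are
    // rebuilt from disk when the new cache is opened, but the full-hash cache
    // lives only in memory and has to be carried over explicitly.
    struct LookupCacheSketch {
      std::vector<uint32_t> prefixes;                 // rebuilt on open
      std::map<uint32_t, std::string> fullHashCache;  // in-memory only
    };

    // Copy the in-memory cache entries from the cache currently used for lookups
    // into the freshly opened cache used for the update, so they are not lost
    // when the new cache takes over.
    void CopyFullHashCache(const LookupCacheSketch& aOld, LookupCacheSketch& aNew)
    {
      aNew.fullHashCache.insert(aOld.fullHashCache.begin(),
                                aOld.fullHashCache.end());
    }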
MozReview-Commit-ID: L0WpiHOGIGm
Based on the telemetry that landed as part of bug 1336904, Safe Browsing
updates are failing too often: https://mzl.la/2qGkOPS
This should enable browsers on slower networks to reach the update
servers while still putting a reasonable bound on how long the update
thread can be blocked.
MozReview-Commit-ID: 6puVtpMT87K
resetDatabase() is used to clear out the Safe Browsing database and cache in tests.
Since bug 1333328, however, we can no longer rely on updates clearing the cache.
There are two solutions to address this issue:
1. Have resetDatabase() call another test-only function, reloadDatabase(). Since the cache
is in memory, reloading the URL classifier will automatically clear the cache.
2. During an update, remove cache entries which are not in the database.
I prefer the first one because, with the second approach, an update would take
longer, since we would need to check whether each prefix in the cache can also
be found in the database. I think this is unnecessary anyway, because prefixes
not in the database will eventually be removed when they expire.
MozReview-Commit-ID: BjsDKDMr205
In V2 we shuffled the hash entries before sending the request, to hide the real
entry among the noise entries. We should also hide the real entry in V4. Using
sort() is enough for both V2 and V4 because the array contains exactly 5 entries
in almost all cases.
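A minimal sketch of the idea (the prefix type and the request-building step are
made up for illustration):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Mix the real prefix in with the noise prefixes, then sort the whole array.
    // With only about five entries, sorting hides which position held the real
    // entry just as well as shuffling, and the same code works for V2 and V4.
    std::vector<uint32_t> BuildRequestPrefixes(uint32_t aRealPrefix,
                                               std::vector<uint32_t> aNoisePrefixes)
    {
      aNoisePrefixes.push_back(aRealPrefix);
      std::sort(aNoisePrefixes.begin(), aNoisePrefixes.end());
      return aNoisePrefixes;
    }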
MozReview-Commit-ID: 4uOXIF83KQL
Technically speaking, we use the new async API 'nsIURIClassifier.asyncClassifyLocalWithTables'
to replace the old sync API. However, since we cannot guarantee that the async call will be done
when we need its result, we keep a sync call as a fallback for that case. This is a sub-optimal
solution, and we will investigate a better one if the sync fallback turns out to be used too
frequently.
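The general pattern, sketched with std::async standing in for the actual XPCOM
async API and a made-up Classify function (none of these names come from the patch):

    #include <chrono>
    #include <future>
    #include <string>

    // Made-up stand-in for the classifier: returns the matching table names.
    std::string Classify(const std::string& aURI)
    {
      return std::string();  // placeholder; the real code consults the local tables
    }

    // Kick off the classification asynchronously as early as possible.
    std::future<std::string> StartAsyncClassify(const std::string& aURI)
    {
      return std::async(std::launch::async, Classify, aURI);
    }

    // Later, when the result is actually needed: use the async result if it has
    // already arrived, otherwise fall back to a synchronous call so the caller
    // is never left waiting on the async path.
    std::string GetClassificationResult(std::future<std::string>& aPending,
                                        const std::string& aURI)
    {
      if (aPending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
        return aPending.get();
      }
      return Classify(aURI);  // sync fallback
    }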
MozReview-Commit-ID: L1uQ2eaYr1e
This patch includes two test cases:
1. Apply an empty update through the Classifier interface, which is the normal use case.
2. Apply an empty update through LookupCacheV4::ApplyUpdate; this ensures the update
algorithm is correct when applying an empty update. This scenario shouldn't actually
happen in normal use because it is skipped by Classifier::CheckValidUpdate.
MozReview-Commit-ID: 9khsuVatX0u
With Ask-to-Activate mode becoming the default, we'll also activate the fallback rule that favors fallback content when the object has not provided a src, so we need to prepare this test for that.
MozReview-Commit-ID: JmmxJEiziHW
This function is arguably nicer than calling NS_ProcessNextEvent
manually, is slightly more efficient, and will enable better auditing
for NS_ProcessNextEvent when we do Quantum DOM scheduling changes.
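Assuming the function in question is mozilla::SpinEventLoopUntil (the helper from
bug 1359490, mentioned further down in this log), a sketch of the before/after at
a call site:

    #include "nsThreadUtils.h"  // Gecko-internal header declaring both helpers

    void WaitForCompletion(bool& aDone)
    {
      // Before: spin the event loop by hand until the flag flips.
      while (!aDone) {
        NS_ProcessNextEvent(nullptr, /* aMayWait = */ true);
      }

      // After: the helper does the same spinning internally, which reads better
      // and gives a single place to audit NS_ProcessNextEvent usage once the
      // Quantum DOM scheduling changes land.
      mozilla::SpinEventLoopUntil([&]() { return aDone; });
    }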
The functions and the preference related to mTableFreshness and gFreshnessGuarantee
can be removed since we no longer require them.
MozReview-Commit-ID: IAa0UHLSQ56
In this patch, we make Safe Browsing V2 caching use the same algorithm as V4,
so we remove "mMissCache" for negative caching and the table freshness check for
positive caching.
However, the Safe Browsing V2 gethash response doesn't contain negative/positive
cache duration information, so we hard-code a fixed value, 15 minutes, as the
cache duration. This way, V2 and V4 caching are handled by the same mechanism.
One extra step for V2 is that we need to manually record prefix misses, because
we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
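A sketch of what the fixed expiry amounts to (names and surrounding structure are
made up; only the 15-minute value comes from this description):

    #include <cstdint>
    #include <ctime>

    // V2 gethash responses carry no cache-duration fields, so both positive and
    // negative cache entries get the same fixed lifetime of 15 minutes.
    const uint32_t V2_CACHE_DURATION_SEC = 15 * 60;

    uint32_t V2CacheExpirySec()
    {
      uint32_t nowSec = static_cast<uint32_t>(std::time(nullptr));
      return nowSec + V2_CACHE_DURATION_SEC;
    }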
This is in preparation for replacing this code with SpinEventLoopUntil(),
which is going to ship in bug 1359490.
MozReview-Commit-ID: AChVqh4kfVb
These timeouts will ensure that we don't block the Safe Browsing update thread
for too long when we encounter slow or bad network conditions.
MozReview-Commit-ID: AJfR193cTf8