The test_partialUpdateV4() test case doesn't wait for the update task
to finish. It checks the status on the HTTP server side and then calls
run_next_test(). However, when the xpcshell test is done, it triggers
the shutdown process and hence interrupts the ongoing update task.
As a result, the xpcshell test receives an error such as
NS_ERROR_UC_UPDATE_SHUTDOWNING because the update was interrupted.
This patch also fixes a JavaScript error: we didn't stop the httpd
server during cleanup.
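A minimal xpcshell-style sketch of the fix; the helper and variable names
(doUpdate, gHttpServer) are placeholders rather than the exact ones in the
test:

    // Advance only once the async update task has really finished, so the
    // harness shutdown can no longer interrupt it.
    add_test(function test_partialUpdateV4() {
      doUpdate(UPDATE_DATA, {
        updateSuccess() {
          run_next_test();
        },
        updateError(aErrorCode) {
          do_throw("Update failed with " + aErrorCode);
        },
      });
    });

    registerCleanupFunction(() => {
      // Previously missing: stop the httpd server so cleanup doesn't throw.
      return new Promise(resolve => gHttpServer.stop(resolve));
    });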
Differential Revision: https://phabricator.services.mozilla.com/D2360
Some of the tests don't handle update errors very well and rely on timeouts
that fire after several minutes. Since these tests are not actually testing
update failure modes, it's safe to fail quickly and terminate the test with an
exception.
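A hedged sketch of the pattern; the observer shape follows the usual
nsIUrlClassifierUpdateObserver callbacks, but the exact test code differs:

    // Fail fast: any update error aborts the test immediately instead of
    // letting the harness time out minutes later.
    let updateObserver = {
      updateUrlRequested(url, table) {},
      streamFinished(status, delay) {},
      updateError(errorCode) {
        do_throw("Unexpected update error: " + errorCode);
      },
      updateSuccess(requestedTimeout) {
        run_next_test();
      },
    };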
MozReview-Commit-ID: EJgaWke6kl2
This test is supposed to verify that Safe Browsing providers can
be initialized correctly even when a table is not configured
properly.
By removing a table from both google and google4, we ensure that
the test will be meaningful regardless of the stack in use.
Also filter out the console noise triggered by looking for the
update and gethash URLs of the "test" dummy provider.
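A hedged sketch of the setup, assuming the provider table lists live in the
usual browser.safebrowsing.provider.<name>.lists prefs (the table names below
are illustrative):

    // Remove one table from each provider's list so the "table without a
    // proper provider config" path is hit on both the V2 and V4 stacks.
    function removeTableFromProvider(provider, table) {
      let pref = `browser.safebrowsing.provider.${provider}.lists`;
      let lists = Services.prefs.getCharPref(pref).split(",");
      Services.prefs.setCharPref(pref, lists.filter(t => t != table).join(","));
    }

    removeTableFromProvider("google", "goog-downloadwhite-digest256");
    removeTableFromProvider("google4", "goog-downloadwhite-proto");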
MozReview-Commit-ID: KjWqSqA4FxJ
This patch was autogenerated by my decomponents.py
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: the line starts with //, or
ends with */, {, <![CDATA[, """ or '''. The first two cover comments, the
third covers JS, the fourth covers JS embedded in XUL, and the final two
cover JS embedded in Python. This also applies if the removed line was
the first line of the file.
It also covers pattern-matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It removes the
entire statement if the bindings are all Ci, Cr, Cc or Cu, or it removes
the appropriate ones and leaves the rest behind. If only one binding is
left, it turns it into a normal, non-destructuring variable definition.
(For instance, "const { classes: Cc, Constructor: CC, interfaces: Ci,
utils: Cu } = Components" becomes "const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
After this patch lands, we will have three cases:
1. For providers other than "test" (for example google, google4, mozilla, etc.),
   backoff CANNOT be disabled.
2. For the "test" provider, if the preference "browser.safebrowsing.provider.test.disableBackoff" is ON,
   backoff is disabled.
3. For the "test" provider, if the preference "browser.safebrowsing.provider.test.disableBackoff" is OFF,
   backoff is NOT disabled.
So if your test case uses the listmanager or hashcompleter, try to use the "test" provider
if possible; otherwise the test case may hit intermittent failures due to backoff.
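A hedged sketch of how a test opts out of backoff under the new scheme (the
surrounding test setup will vary):

    // Use the "test" provider and disable its backoff so repeated update or
    // gethash requests in the test aren't throttled.
    Services.prefs.setBoolPref(
      "browser.safebrowsing.provider.test.disableBackoff",
      true
    );
    registerCleanupFunction(() => {
      Services.prefs.clearUserPref(
        "browser.safebrowsing.provider.test.disableBackoff"
      );
    });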
MozReview-Commit-ID: 3BDxs0ARyQM
With JSM global sharing, the object returned by Cu.import() is a
NonSyntacticVariablesObject rather than a global, but various pieces of
code try to use properties of a JSM's global via such an import.
Cu.importGlobalProperties() can also be used in some places.
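A hedged sketch of the kind of change involved; Example.jsm and the use of
XMLHttpRequest are illustrative, not taken from a specific file:

    // Before (broken with global sharing): the value returned by Cu.import()
    // is no longer the JSM's global, so globals reached through it disappear.
    var exports = Cu.import("resource://gre/modules/Example.jsm");
    var xhr = new exports.XMLHttpRequest();

    // After: pull the needed global property into the current scope directly.
    Cu.importGlobalProperties(["XMLHttpRequest"]);
    var xhr = new XMLHttpRequest();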
MozReview-Commit-ID: HudCXO2GKN0
The issue occurs when the nsITimer fires earlier than the backoff time. In that
case, the update doesn't proceed, and we never make another attempt because the
backoff update timer was a oneshot.
We fix the issue in two ways (sketched below):
- Add a tolerance of 1 second in case the timer fires too early.
- Set another oneshot timer whenever we are prevented from updating due to
  backoff.
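A hedged sketch of both changes in listmanager-style code; the constant and
member names are illustrative:

    // Tolerate an early-firing timer, and re-arm a oneshot timer whenever
    // backoff still blocks the update so attempts don't silently stop.
    const FIRE_TOLERANCE_MS = 1000; // 1 second of slack

    function maybeUpdate(manager) {
      let now = Date.now();
      if (now + FIRE_TOLERANCE_MS < manager.nextUpdateTime) {
        // Still backed off: schedule another oneshot attempt; without this
        // the original oneshot timer would never fire again.
        manager.timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
        manager.timer.initWithCallback(
          () => maybeUpdate(manager),
          manager.nextUpdateTime - now,
          Ci.nsITimer.TYPE_ONE_SHOT
        );
        return;
      }
      manager.makeUpdateRequest();
    }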
MozReview-Commit-ID: E2ogNRsHJVK
Version 4 of the Google Safe Browsing server will return a 404 if any of the
application reputation lists are requested on Android. As a result, we should
avoid sending these threat types along with ANDROID_PLATFORM.
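The actual fix lives in the C++ threat-type conversion, but the intent can be
sketched as a simple filter (the table names below are assumptions):

    // On Android, leave the application-reputation lists out of the set of
    // tables whose threat types end up in update/fullHashes requests.
    const APP_REP_TABLES = ["goog-badbinurl-proto", "goog-downloadwhite-proto"];

    function tablesForPlatform(tables, isAndroid) {
      return isAndroid ? tables.filter(t => !APP_REP_TABLES.includes(t)) : tables;
    }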
MozReview-Commit-ID: 6TUBVxe455y
After adopting the new thread model for Safe Browsing, we create a new
lookup cache for the update so that lookups can still use the existing
cache at the same time. The prefix set and completions are regenerated when
we open the new lookup cache, but the gethash cache is not carried over,
so we would lose it after the update.
This patch copies the cache data from the old lookup cache to the new one
during the update.
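The real change is in the C++ LookupCache, but the idea reduces to a
straightforward copy (a sketch with illustrative names):

    // The prefix set and completions are rebuilt from the update data, but the
    // gethash cache is not, so carry it over from the cache that was in use
    // before the update.
    function copyGethashCache(oldCache, newCache) {
      for (let [prefix, entry] of oldCache.gethashCache) {
        newCache.gethashCache.set(prefix, entry);
      }
    }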
MozReview-Commit-ID: L0WpiHOGIGm
In V2 we shuffled the hash entries before sending the request, to hide the real
entry among the noise entries. We should also hide the real entry in V4. Using
sort() is enough for both V2 and V4 because the array contains exactly 5 entries
in almost all cases.
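A hedged sketch of the request-building step (the helper name is illustrative):

    // A gethash request mixes the real 4-byte prefix with noise prefixes;
    // sorting the array hides which entry the client actually cares about,
    // for V2 and V4 alike.
    function buildRequestPrefixes(realPrefix, noisePrefixes) {
      let prefixes = [realPrefix, ...noisePrefixes];
      prefixes.sort(); // insertion order no longer reveals the real entry
      return prefixes;
    }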
MozReview-Commit-ID: 4uOXIF83KQL
In this patch, we make Safe Browsing V2 caching use the same algorithm as V4.
So we remove "mMissCache" for negative caching and the TableFreshness check for
positive caching.
However, a Safe Browsing V2 gethash response doesn't contain negative/positive
cache duration information, so we hard-code a fixed value, 15 minutes, as the
cache duration. In this way, the caching mechanism is the same for V2 and V4.
An extra effort for V2 is that we need to manually record prefix misses,
because we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
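A hedged sketch of the shared caching rule, with V2 falling back to the fixed
15-minute duration (field names are illustrative):

    // Both versions cache gethash results the same way; V4 takes the durations
    // from the response, V2 uses a hard-coded 15 minutes.
    const V2_CACHE_DURATION_SEC = 15 * 60;

    function toCacheEntry(match, negCacheDurationSec, nowSec = Date.now() / 1000) {
      return {
        fullHash: match.fullHash,
        // Positive expiry: per-match duration (V4) or the fixed fallback (V2).
        positiveExpirySec: nowSec + (match.cacheDurationSec ?? V2_CACHE_DURATION_SEC),
        // Negative expiry: applies to the prefix even when nothing matched.
        negativeExpirySec: nowSec + (negCacheDurationSec ?? V2_CACHE_DURATION_SEC),
      };
    }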
This patch includes the following changes:
1. nsUrlClassifierHashCompleter.js
   nsUrlClassifierHashCompleter.idl
   - Add a completionV4 interface for the hashCompleter to pass response data
     to the DB service.
   - Process the response data, which includes the negative cache duration and
     the matched full hashes with a cache duration for each match. Full matches
     are passed through the nsIFullHashMatch interface.
   - Change _requests.responses from an array containing the matched full
     hashes to a dictionary so that it can store additional information such as
     the negative cache duration.
2. nsUrlClassifierDBService.cpp
   - Implement the CompletionV4 interface and store response data in a
     CacheResultV4 object. Converting expiry durations to expiry times is
     handled here.
   - Add a CacheResultToTableUpdate function to convert V2 & V4 cache results
     to TableUpdate objects.
3. LookupCache.h
   - Extend CacheResult into CacheResultV2 and CacheResultV4 so we can store
     response data in CompletionV2 and CompletionV4.
4. HashStore.h
   - Add an API and a member variable in TableUpdateV4 to store response data.
     The TableUpdate object is used by the DB service to pass update data or
     gethash responses to the Classifier, so we need to extend TableUpdateV4 to
     be able to store the fullHashes.find response.
5. Entry.h
   - Define the structure describing how we cache the fullHashes.find response.
MozReview-Commit-ID: FV4yAl2SAc6
This patch includes the following changes:
1. nsUrlClassifierHashCompleter.js
   nsUrlClassifierHashCompleter.idl
   - Add a completionV4 interface for the hashCompleter to pass response data
     to the DB service.
   - Process the response data, which includes the negative cache duration and
     the matched full hashes with a cache duration for each match. Full matches
     are passed through the nsIFullHashMatch interface.
   - Change _requests.responses from an array containing the matched full
     hashes to a dictionary so that it can store additional information such as
     the negative cache duration.
2. nsUrlClassifierDBService.cpp
   - Implement the CompletionV4 interface and store response data in a
     CacheResultV4 object. Converting expiry durations to expiry times is
     handled here.
   - Add a CacheResultToTableUpdate function to convert V2 & V4 cache results
     to TableUpdate objects.
3. LookupCache.h
   - Extend CacheResult into CacheResultV2 and CacheResultV4 so we can store
     response data in CompletionV2 and CompletionV4.
4. HashStore.h
   - Add an API and a member variable in TableUpdateV4 to store response data.
     The TableUpdate object is used by the DB service to pass update data or
     gethash responses to the Classifier, so we need to extend TableUpdateV4 to
     be able to store the fullHashes.find response.
5. Entry.h
   - Define the structure describing how we cache the fullHashes.find response.
MozReview-Commit-ID: KgR1NASl7GC
A new function, Classifier::AsyncApplyUpdates(), is implemented for async updates.
Besides, all public Classifier interfaces become "worker thread only" and
we remove DBServiceWorker::ApplyUpdatesBackground/Foreground.
In DBServiceWorker::FinishUpdate, instead of calling Classifier::ApplyUpdates,
we call Classifier::AsyncApplyUpdates and install a callback that notifies
the update observer when the update is finished. The callback runs on the
calling thread (i.e. the worker thread).
As for the shutdown issue, when the main thread is notified to shut down,
we first *synchronously* dispatch an event to the worker thread to
shut down the update thread. After getting synchronized with all other
threads, we send the last two events, "CancelUpdate" and "CloseDb", to notify
a dangling update (i.e. BeginUpdate was called but FinishUpdate wasn't)
and do the cleanup work.
MozReview-Commit-ID: DXZvA2eFKlc