These methods do not appear to be used.
When JSM global sharing is enabled, these methods contaminate the
global Function.prototype, which breaks Marionette object
serialization.
MozReview-Commit-ID: CAfJ2FCkhlK
After adopting the new thread model for Safe Browsing, we create a new
lookup cache for updates so that we can still query the existing lookup
cache at the same time. The prefix set and completions are regenerated
when we open the new lookup cache, but the cache is not carried over, so
we would lose the cached results at that point.
This patch copies the cache data from the old lookup cache to the new
lookup cache during an update.
MozReview-Commit-ID: L0WpiHOGIGm
Based on the telemetry that landed as part of bug 1336904, Safe Browsing
updates are failing too often: https://mzl.la/2qGkOPS
This should enable browsers on slower networks to reach the update
servers while still putting a reasonable bound on how long the update
thread can be blocked.
MozReview-Commit-ID: 6puVtpMT87K
resetDatabase() is used to clear out the Safe Browsing database and cache in tests.
Since bug 1333328, however, we can no longer rely on updates clearing the cache.
There are two solutions to address this issue:
1. resetDatabase() calls another test-only function: reloadDatabase(). Since the cache
is in memory, reloading the URL classifier will automatically clear the cache.
2. During an update, remove cache entries which are not in the database.
I prefer the first approach because with the second one an update would
take longer, since we would need to check whether each prefix in the
cache can also be found in the database. That check is not necessary
because prefixes that are not in the database will eventually be removed
when they expire.
MozReview-Commit-ID: BjsDKDMr205
In V2 we shuffled the hash entries before sending the request to obscure
the real entry among the noise entries. We should also hide the real
entry in V4. Using sort() is enough for both V2 and V4 because the array
contains exactly 5 entries in almost all cases.
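A minimal sketch of the idea, assuming the request holds the real prefix
plus randomly generated noise prefixes in a plain container (names are
illustrative, not the actual HashCompleter code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  void PrepareGetHashRequest(std::vector<uint32_t>& aPrefixes)
  {
    // Sorting is enough to hide the real entry: the noise prefixes are
    // random, so after sorting the real prefix lands at an unpredictable
    // position instead of always being first.
    std::sort(aPrefixes.begin(), aPrefixes.end());
  }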
MozReview-Commit-ID: 4uOXIF83KQL
Technically speaking, we use the new async API
'nsIURIClassifier.asyncClassifyLocalWithTables' to replace the old sync
API. However, since we cannot guarantee that the async call will have
completed by the time we need its result, we keep a sync call as a
fallback for that case. This is a sub-optimal solution, and we will
investigate a better one if the sync fallback is used too frequently.
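A rough sketch of the fallback pattern under those assumptions; the
class and callback names below are hypothetical, standing in for the
caller of nsIURIClassifier:

  #include <functional>
  #include <optional>
  #include <string>
  #include <utility>

  class ClassificationConsumer {
    std::optional<std::string> mAsyncResult;  // Set when the async call finishes.

  public:
    // Invoked from the async classifier callback once the result arrives.
    void OnAsyncClassifyComplete(std::string aMatchedTables) {
      mAsyncResult = std::move(aMatchedTables);
    }

    // Called at the point where the classification result is actually needed.
    std::string GetResult(const std::function<std::string()>& aSyncClassify) {
      if (mAsyncResult) {
        return *mAsyncResult;  // Fast path: the async answer arrived in time.
      }
      // Fallback: the async call has not completed yet, so do a sync lookup.
      return aSyncClassify();
    }
  };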
MozReview-Commit-ID: L1uQ2eaYr1e
This patch includes two test cases:
1. Apply an empty update through the Classifier interface, which is the normal use case.
2. Apply an empty update through LookupCacheV4::ApplyUpdate; this ensures the update
algorithm is correct when applying an empty update. This scenario shouldn't actually
happen in normal use because it will be skipped by Classifier::CheckValidUpdate.
MozReview-Commit-ID: 9khsuVatX0u
With Ask-to-Activate mode enabled by default, we also activate the fallback rule that favors fallback content when the object has not provided a src, so we need to prepare this test for that change.
MozReview-Commit-ID: JmmxJEiziHW
This function is arguably nicer than calling NS_ProcessNextEvent
manually, is slightly more efficient, and will enable better auditing
for NS_ProcessNextEvent when we do Quantum DOM scheduling changes.
The functions and the preference related to mTableFreshness and
gFreshnessGuarantee can be removed since we no longer require them.
MozReview-Commit-ID: IAa0UHLSQ56
In this patch, we make Safe Browsing V2 caching use the same algorithm
as V4, so we remove "mMissCache" for negative caching and the
TableFreshness check for positive caching.
However, Safe Browsing V2 doesn't include negative/positive cache
duration information in the gethash response, so we hard-code a fixed
value, 15 minutes, as the cache duration. In this way, the caching
mechanism is the same for V2 and V4.
An extra effort for V2 is that we need to manually record prefix misses,
because we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
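A minimal sketch of the hard-coded V2 expiry calculation, assuming cache
times are kept as seconds since the epoch (the constant and function
names are illustrative):

  #include <cstdint>

  // V2 gethash responses carry no cache durations, so both positive and
  // negative cache entries use a fixed 15-minute lifetime.
  constexpr uint32_t kV2CacheDurationSec = 15 * 60;

  // Returns the absolute expiry time for a V2 cache entry created at aNowSec.
  uint32_t V2CacheExpirySec(uint32_t aNowSec)
  {
    return aNowSec + kV2CacheDurationSec;
  }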
This is in preparation for replacing it with SpinEventLoopUntil(), which
is going to ship in bug 1359490.
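For reference, the intended call-site change looks roughly like this
(a sketch of Gecko-specific code, not buildable on its own):

  // Before: manually drain the event queue until the flag flips.
  while (!done) {
    NS_ProcessNextEvent(NS_GetCurrentThread());
  }

  // After: the helper shipping in bug 1359490 expresses the same thing.
  mozilla::SpinEventLoopUntil([&]() { return done; });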
MozReview-Commit-ID: AChVqh4kfVb
These timeouts will ensure that we don't block the Safe Browsing update thread
for too long when we encounter slow or bad network conditions.
MozReview-Commit-ID: AJfR193cTf8
mResults is the lookup result array, which is created when we find a
matching prefix in the database. mCacheResults stores the responses of
gethash requests.
We record threat type match telemetry only when a completion is found in
both V2 and V4, that is, when we get one confirmed result for V2 and one
for V4. We can then record the threat types by iterating through
mCacheResults.
But if one of the lookup results comes from the cache, for example when
a completion is found in 'goog-malware-proto', we won't trigger a
gethash request for it because it is already in the cache. In this
scenario, mCacheResults will not have results from V4, so when we try to
record threat types we won't find the V4 ones.
In this patch we only record telemetry when a gethash request is sent
for both V2 and V4. This may limit the usefulness of that probe a little
bit, but we shouldn't make the code less efficient just to be able to
measure telemetry better.
MozReview-Commit-ID: Ib8SGUaxb4c
We cache all the preferences that are read while classifying a channel:
- Store them in static variables in nsUrlClassifierDBService
- Use a singleton class to manage/update preferences in nsChannelClassifier
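A minimal sketch of both approaches; the pref names, members, and class
names below are made up for illustration:

  #include <atomic>

  // nsUrlClassifierDBService-style: mirror each pref into a static so the
  // hot classification path never calls into the pref service.
  static std::atomic<bool> sMalwareProtectionEnabled{true};

  static void OnMalwarePrefChanged(bool aNewValue)
  {
    sMalwareProtectionEnabled = aNewValue;  // Updated from a pref observer.
  }

  // nsChannelClassifier-style: a lazily created singleton that owns the
  // cached pref values and refreshes them when a pref changes.
  class ChannelClassifierPrefs
  {
  public:
    static ChannelClassifierPrefs& Singleton()
    {
      static ChannelClassifierPrefs sInstance;  // Created on first use.
      return sInstance;
    }

    bool TrackingProtectionEnabled() const { return mTrackingProtection; }

  private:
    ChannelClassifierPrefs() = default;
    bool mTrackingProtection = false;  // Refreshed by a pref observer.
  };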
MozReview-Commit-ID: GvyBI3rVpYh
The about:url-classifier page supports the following functions:
1. Provider section
- Show the update status for each provider; the update status includes
the last update time, the next update time, and the last update status
- An Update button to manually trigger an update for the provider.
2. Debug section
- Set MOZ_LOG Modules
- Set MOZ_LOG_FILE
MozReview-Commit-ID: AHiveKEHSNC
In bug 1311935, we changed the positive/negative cache duration from
milliseconds to seconds, but the value is not converted back to
milliseconds when it is stored to telemetry (telemetry uses
milliseconds).
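A minimal sketch of the conversion this patch needs to add (the helper
name is hypothetical):

  #include <cstdint>

  // Cache durations are now kept in seconds, so convert back to
  // milliseconds before accumulating into the telemetry probe.
  constexpr uint32_t kMsPerSec = 1000;

  uint32_t CacheDurationForTelemetryMs(uint32_t aDurationSec)
  {
    return aDurationSec * kMsPerSec;
  }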
MozReview-Commit-ID: KR6xn9pwhUd
The recent change to the Safe Browsing thread model may cause the
nsUrlClassifierDBService::ReloadDatabase API to fail if there is an
ongoing update at the same time.
Fix this issue by adding a retry to the test case.
MozReview-Commit-ID: CZGMpQvuzum
The Flash block tests sometimes time out in debug runs. So far it has always happened the same way: all assertions in the test run (and pass), but the test times out during cleanup. Bumping up the timeout for these tests should fix the problem.
MozReview-Commit-ID: F04nSzSyLtr
When starting up, SafeBrowsing.jsm will try to use DBService to add the
testing entries. Meanwhile, listmanager will ask StreamUpdater to
download lists after a random initial delay.
The requests that listmanager issues to StreamUpdater will be queued up
if DBService is busy and will be retried when StreamUpdater is notified
that the previous update is complete. However, in some edge cases, the
queued requests may not be processed until the next update request from
listmanager.
For example, SafeBrowsing.jsm calls DBService.beginUpdate at t0 and the
update completes at t1. If listmanager sends all of its requests via
StreamUpdater between t0 and t1, they will all be queued up and no
further request can trigger the queued ones.
So in this patch I add a timer to re-trigger FetchNextRequest() in case
StreamUpdater is never notified that the previous update is complete.
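A rough sketch of the fallback-timer idea; the real patch would use
nsITimer inside the stream updater, and the helper below is purely
illustrative:

  #include <chrono>
  #include <functional>
  #include <thread>

  // If the "update complete" notification never arrives, this fallback
  // fires after aDelay and drains the pending request queue anyway.
  void ArmFetchNextRequestFallback(std::function<bool()> aUpdateStillPending,
                                   std::function<void()> aFetchNextRequest,
                                   std::chrono::seconds aDelay)
  {
    std::thread([=]() {
      std::this_thread::sleep_for(aDelay);
      if (aUpdateStillPending()) {
        // The previous update never reported completion; kick the queue.
        aFetchNextRequest();
      }
    }).detach();
  }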
MozReview-Commit-ID: 3hHsS5N7WRI
mTableRefreshness, a non-thread-safe object, might be accessed on the
worker thread and the update thread concurrently. To solve this, on the
update thread we only insert data into mNewTableRefreshness and merge it
into mTableRefreshness on the worker thread later.
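A minimal sketch of the two-map pattern, with simplified types (the
member names follow this commit message, not necessarily the real code;
it assumes the merge is dispatched to the worker thread only after the
update thread has finished writing):

  #include <cstdint>
  #include <map>
  #include <string>

  class Classifier
  {
    // Read and written only on the worker thread.
    std::map<std::string, int64_t> mTableRefreshness;
    // Written only on the update thread, merged on the worker thread.
    std::map<std::string, int64_t> mNewTableRefreshness;

  public:
    // Runs on the update thread: never touches mTableRefreshness.
    void NoteTableUpdated(const std::string& aTable, int64_t aNowSec)
    {
      mNewTableRefreshness[aTable] = aNowSec;
    }

    // Runs on the worker thread after the update finishes.
    void MergeNewTableRefreshness()
    {
      for (const auto& entry : mNewTableRefreshness) {
        mTableRefreshness[entry.first] = entry.second;
      }
      mNewTableRefreshness.clear();
    }
  };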
MozReview-Commit-ID: 9WgoeYfWVfK
When a full match is found in both V2 and V4, the threat types returned
should also be the same.
If the threat types differ, the telemetry records this by setting bit
flags that indicate which threat types were returned.
If the threat types are the same, the telemetry records 0.
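One possible encoding of that record, as a sketch (the bit assignments
and names here are made up):

  #include <cstdint>

  enum ThreatTypeBit : uint32_t {
    MALWARE            = 1 << 0,
    SOCIAL_ENGINEERING = 1 << 1,
    UNWANTED_SOFTWARE  = 1 << 2,
  };

  // Returns 0 when V2 and V4 agree; otherwise returns the union of the
  // returned threat-type bits so the mismatch can be recorded.
  uint32_t ThreatTypeMatchValue(uint32_t aV2ThreatTypes, uint32_t aV4ThreatTypes)
  {
    if (aV2ThreatTypes == aV4ThreatTypes) {
      return 0;  // Same threat types: the probe records 0.
    }
    return aV2ThreatTypes | aV4ThreatTypes;  // Record what was returned.
  }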
MozReview-Commit-ID: Laz77yoCg00
Since bug 1323953, we always send a 4-byte prefix for completion, and
the prefix is also used as the key to store the cache result from the
gethash request.
Since it is always 4 bytes, we can convert it to an integer for
simplicity.
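A minimal sketch of that conversion (the helper name is illustrative):

  #include <cstdint>
  #include <cstring>

  // Pack a 4-byte Safe Browsing prefix into a uint32_t so it can be used
  // directly as a hash-table key.
  uint32_t PrefixToUint32(const uint8_t aPrefix[4])
  {
    uint32_t key;
    memcpy(&key, aPrefix, sizeof(key));  // Keep the raw byte order.
    return key;
  }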
MozReview-Commit-ID: Lkvrg0wvX5Z
LookupCacheV4::Has implements the Safe Browsing V4 caching logic:
1. Check if the fullhash matches any prefix in the local database:
- If not, the URL is safe.
2. Check if the prefix is in the cache (the prefix is always the first
4 bytes of the fullhash, bug 1323953):
- If not, send a fullhash request.
3. Check if the fullhash is in the positive cache:
- If the fullhash is found and has not expired, the URL is not safe.
- If the fullhash is found but has expired, send a fullhash request.
4. If the fullhash is not found, check the negative cache expiration time:
- If the negative cache entry has not expired, the URL is safe.
- If the negative cache entry has expired, send a fullhash request.
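A condensed sketch of the same decision flow, with simplified types (the
structures and names are illustrative, not the real LookupCacheV4 code):

  #include <cstdint>
  #include <map>
  #include <string>

  struct CachedFullHashes {
    int64_t negativeExpirySec;                  // "No match" is valid until then.
    std::map<std::string, int64_t> fullHashes;  // fullhash -> positive expiry.
  };

  enum class LookupDecision { Safe, Unsafe, SendFullHashRequest };

  LookupDecision Lookup(const std::string& aFullHash, bool aPrefixInDatabase,
                        const std::map<uint32_t, CachedFullHashes>& aCache,
                        uint32_t aPrefix, int64_t aNowSec)
  {
    // 1. No matching prefix in the local database: the URL is safe.
    if (!aPrefixInDatabase) {
      return LookupDecision::Safe;
    }
    // 2. Prefix not cached yet: ask the server for full hashes.
    auto cached = aCache.find(aPrefix);
    if (cached == aCache.end()) {
      return LookupDecision::SendFullHashRequest;
    }
    // 3. Positive cache: an unexpired full-hash match means unsafe.
    auto hit = cached->second.fullHashes.find(aFullHash);
    if (hit != cached->second.fullHashes.end()) {
      return aNowSec <= hit->second ? LookupDecision::Unsafe
                                    : LookupDecision::SendFullHashRequest;
    }
    // 4. Negative cache: an unexpired "no match" entry means safe.
    return aNowSec <= cached->second.negativeExpirySec
               ? LookupDecision::Safe
               : LookupDecision::SendFullHashRequest;
  }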
MozReview-Commit-ID: GRX7CP8ig49
This patch includes the following changes:
1. nsUrlClassifierHashCompleter.js
nsUrlClassifierHashCompleter.idl
- Add a completionV4 interface for the hashCompleter to pass response
data to the DB service.
- Process the response data, which includes the negative cache duration,
the matched full hashes, and the cache duration for each match. Full
matches are passed through the nsIFullHashMatch interface.
- Change _requests.responses from an array containing matched fullhashes
to a dictionary so that it can also store additional information such as
the negative cache duration.
2. nsUrlClassifierDBService.cpp
- Implement the CompletionV4 interface and store the response data in a
CacheResultV4 object. Converting the expiration duration to an
expiration time is handled here.
- Add a CacheResultToTableUpdate function to convert V2 & V4 cache
results to TableUpdate objects.
3. LookupCache.h
- Extend CacheResult to CacheResultV2 and CacheResultV4 so we can store
response data in CompletionV2 and CompletionV4.
4. HashStore.h
- Add an API and a member variable to TableUpdateV4 to store the
response data. The TableUpdate object is used by the DB service to pass
update data or a gethash response to the Classifier, so we need to
extend TableUpdateV4 to be able to store the fullHashes.find response.
5. Entry.h
- Define the structure describing how we cache the fullHashes.find
response.
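A rough sketch of the shape of that cached response, with approximate
names (not the actual Entry.h definitions):

  #include <cstdint>
  #include <map>
  #include <string>

  struct CachedFullHashResponse {
    int64_t negativeCacheExpirySec;             // From the negative cache duration.
    std::map<std::string, int64_t> fullHashes;  // Matched fullhash -> expiry time.
  };

  // Keyed by the 4-byte prefix that was sent in the fullHashes.find request.
  using FullHashResponseMap = std::map<uint32_t, CachedFullHashResponse>;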
MozReview-Commit-ID: FV4yAl2SAc6