A LazyIdleThread should be created and removed by the same thread. This
patch fixes the test cases that trigger the assertion.
Depends on D49874
Differential Revision: https://phabricator.services.mozilla.com/D49875
Before this patch, when the Safe Browsing update process discovered an error, it
quit immediately and reset only the table that failed to update.
After this patch, the update process keeps running when an error occurs so it
can find all of the tables that fail to apply an update.
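A minimal sketch of the control-flow change, using simplified stand-ins for
the real Classifier types (the struct, enum, and function names here are
illustrative, not the actual APIs):

```cpp
#include <string>
#include <vector>

enum class Result { Ok, Failure };

struct Table { std::string name; };

// Hypothetical per-table update step.
Result ApplyUpdateForTable(const Table&) { return Result::Ok; }

// Before: bail out on the first failure, so later failing tables are missed.
// After: keep going and collect every table that failed to apply its update.
std::vector<std::string> ApplyUpdates(const std::vector<Table>& tables) {
  std::vector<std::string> failedTables;
  for (const Table& table : tables) {
    if (ApplyUpdateForTable(table) != Result::Ok) {
      failedTables.push_back(table.name);  // remember the failure...
      continue;                            // ...but keep updating the rest
    }
  }
  return failedTables;  // every table in this list gets reset afterwards
}
```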
Differential Revision: https://phabricator.services.mozilla.com/D42615
Patches P2 and P3 refine how Safe Browsing handles database loading
failures.
Safe Browsing databases are read in 3 scenarios:
1. |GetLookupCache| is called on startup. Safe Browsing reads the prefix
files in this case. Metadata for updates (.sbstore, .metadata) is not
read in this scenario.
2. |TableRequest| is called before applying an update. Safe Browsing
reads the update metadata so it can apply a partial update.
3. During an update, Safe Browsing reads both the prefix files and the metadata
in order to merge the update result.
For Case 1, we reset a table's database only when loading prefixes from the
prefix file (.vlpset) returns FILE_CORRUPTED.
For Case 2, we reset a table's database when the table fails to load its
metadata file or its prefix file. This is because we need to make sure both
files are complete so we can correctly perform a partial update.
Note that in this case, we don't reset the database only when FILE_CORRUPTED
is detected; we reset it whenever any error occurs while loading the
database.
For Case 3, the database of every table that fails to load during an
update is reset (see the sketch below).
Case 1 and Case 2 are done in Patch P2; Case 3 is done in Patch P3.
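Roughly, the reset decision looks like this (a simplified sketch; the enum
and function names are illustrative, not the real Classifier API):

```cpp
// Simplified stand-ins for the real result codes and scenarios.
enum class LoadResult { Ok, FileCorrupted, OtherError };
enum class Scenario { Startup, TableRequest, Update };

// Should we reset the table's on-disk database after a load failure?
bool ShouldResetTable(Scenario scenario, LoadResult prefixResult,
                      LoadResult metadataResult) {
  switch (scenario) {
    case Scenario::Startup:
      // Case 1: only a corrupted prefix file (.vlpset) triggers a reset.
      return prefixResult == LoadResult::FileCorrupted;
    case Scenario::TableRequest:
    case Scenario::Update:
      // Cases 2 and 3: any failure in either file triggers a reset,
      // because a partial update needs both files to be intact.
      return prefixResult != LoadResult::Ok ||
             metadataResult != LoadResult::Ok;
  }
  return false;
}
```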
Differential Revision: https://phabricator.services.mozilla.com/D42614
This patch does the following:
1. Remove testing files from disk because they are no longer required.
2. Load completions from the previous version of the HashStore until an update
is applied.
3. Remove the older versions of the HashStore (.sbstore) and PrefixSet (.vlpset)
files during an update (see the sketch below).
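A rough sketch of the migration idea, using std::filesystem in place of the
real file handling; the file extensions follow the commit message, and the
helper and function names are hypothetical:

```cpp
#include <filesystem>
#include <string>
#include <system_error>

namespace fs = std::filesystem;

// Hypothetical loader for completions kept in the previous-version HashStore.
void LoadCompletionsFromOldStore(const fs::path& /*sbstore*/) {
  // parse completions from the old store (details omitted)
}

// On startup: keep serving completions from the previous-version file
// until a fresh update has been applied.
void LoadTable(const fs::path& dir, const std::string& table) {
  fs::path oldStore = dir / (table + ".sbstore");
  if (fs::exists(oldStore)) {
    LoadCompletionsFromOldStore(oldStore);
  }
}

// During an update: the previous-version files are no longer needed.
void RemoveOldFiles(const fs::path& dir, const std::string& table) {
  std::error_code ec;  // ignore errors; a missing file is fine
  fs::remove(dir / (table + ".sbstore"), ec);
  fs::remove(dir / (table + ".vlpset"), ec);
}
```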
Differential Revision: https://phabricator.services.mozilla.com/D36002
Creating test entries via an update introduces performance overhead.
Instead, we can store them directly in the LookupCache and avoid saving
test entries to disk.
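A minimal sketch of the idea, with hypothetical names standing in for the
real LookupCache API:

```cpp
#include <set>
#include <string>

// Hypothetical, simplified LookupCache: test entries live only in memory.
class LookupCache {
 public:
  // Called at load time instead of routing test entries through an update,
  // so nothing is ever written to disk for them.
  void AddTestPrefix(const std::string& prefix) {
    mTestPrefixes.insert(prefix);
  }

  bool HasPrefix(const std::string& prefix) const {
    return mTestPrefixes.count(prefix) > 0;  // checked alongside real prefixes
  }

 private:
  std::set<std::string> mTestPrefixes;  // in-memory only, never persisted
};
```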
Differential Revision: https://phabricator.services.mozilla.com/D34576
In Bug 1453038, |mUpdateInterrupted| is set in Classifier::Close(), which is
called by PreShutdown to abort an ongoing update. That doesn't handle
all the cases.
The Safe Browsing update involves two threads: the worker thread and the update
thread. The update thread takes care of most of the update work; when it finishes
its task, it posts a task back to the worker thread to apply the updated database
and do some cleanup. Then the update is complete.
The fix in Bug 1453038 doesn't abort an update if the worker thread is doing
the job. This is because the |mUpdateInterrupted| flag is set on the
worker thread: the PreShutdown event, which eventually sets the flag, has to
wait until the worker thread's current task is done.
In this patch:
1. Check nsUrlClassifierDBService::ShutdownHasStarted() to abort the update
during shutdown. This flag is set by the main thread, so both the worker
thread and the update thread can be interrupted now.
2. |mIsClosed| is replaced by |mUpdateInterrupted|. The semantics of
|mUpdateInterrupted| are changed: any internal API that should cause an
update to abort now sets it.
3. Remove the separate |mUpdateInterrupted| and |ShutdownHasStarted()| checks
and unify them into |ShouldAbort()| (sketched below).
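A condensed sketch of what the unified check might look like; the member and
function names follow the commit message, everything else is simplified:

```cpp
#include <atomic>

// Simplified stand-in for nsUrlClassifierDBService::ShutdownHasStarted(),
// set on the main thread so any thread can observe it.
static std::atomic<bool> gShutdownHasStarted{false};

class Classifier {
 public:
  // Any internal API that must stop an in-progress update sets this flag.
  void FlagUpdateInterrupted() { mUpdateInterrupted = true; }

  // Single check used by both the worker thread and the update thread:
  // abort if shutdown has begun or the update was explicitly interrupted.
  bool ShouldAbort() const {
    return gShutdownHasStarted.load() || mUpdateInterrupted.load();
  }

 private:
  std::atomic<bool> mUpdateInterrupted{false};
};
```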
Differential Revision: https://phabricator.services.mozilla.com/D12229
A directory-based copy with no checkpoint at which to abort may take a long
time to finish. This causes an issue: if Firefox is shutting down and tries
to close an ongoing update thread, the main thread may be blocked
for a long time.
This patch adds a wrapper for copying an entire directory. Within this
wrapper, we copy file by file and add checkpoints so the update thread
has a chance to abort.
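A simplified sketch of the wrapper using std::filesystem (the real code uses
Mozilla's file APIs; the shouldAbort callback stands in for the actual abort
check):

```cpp
#include <filesystem>
#include <functional>

namespace fs = std::filesystem;

// Copy a directory one file at a time, checking an abort callback between
// files so a shutting-down update thread is not blocked for the whole copy.
bool CopyDirectoryWithCheckpoints(const fs::path& from, const fs::path& to,
                                  const std::function<bool()>& shouldAbort) {
  fs::create_directories(to);
  for (const auto& entry : fs::recursive_directory_iterator(from)) {
    if (shouldAbort()) {
      return false;  // checkpoint: let the update thread bail out early
    }
    fs::path dest = to / fs::relative(entry.path(), from);
    if (entry.is_directory()) {
      fs::create_directories(dest);
    } else {
      fs::copy_file(entry.path(), dest,
                    fs::copy_options::overwrite_existing);
    }
  }
  return true;
}
```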
Differential Revision: https://phabricator.services.mozilla.com/D3414
Explicitly specify the arguments to copy in order to avoid copying a
dangling `this` pointer.
Convert nsUrlClassifierDBService::mClassifier to a RefPtr since
the update closure might need to keep accessing its members
after the main thread has released its reference.
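The pattern in question, illustrated with a generic lambda task; std::shared_ptr
stands in for RefPtr so the example is self-contained, and the type names are
simplified:

```cpp
#include <functional>
#include <memory>

struct Classifier {
  void ApplyUpdates() { /* ... */ }
};

struct DBService {
  std::shared_ptr<Classifier> mClassifier;  // RefPtr in the real code

  std::function<void()> MakeUpdateTask() {
    // Bad: [=] or [this] captures `this`; if the service is released before
    // the task runs, the lambda dereferences a dangling pointer.
    //
    // Good: copy exactly what the task needs -- a strong reference to the
    // classifier -- so it stays alive even after the main thread lets go.
    std::shared_ptr<Classifier> classifier = mClassifier;
    return [classifier]() { classifier->ApplyUpdates(); };
  }
};
```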
MozReview-Commit-ID: CPio3n9MmsK
The existing mix of UniquePtr and raw pointers is confusing when
trying to figure out the exact lifetime of these objects.
MozReview-Commit-ID: Br4S7BXEFKs
I tried to make TableUpdateArray point to const TableUpdate objects
everywhere but there were two problems:
- HashStore::ApplyUpdate() triggers a few Merge() calls which include
sorting the underlying TableUpdate object first.
- LookupCacheV4::ApplyUpdate() calls TableUpdateV4::NewChecksum() when the
checksum is missing and that sets mChecksum.
MozReview-Commit-ID: LIhJcoxo7e7
Manually keeping tabs on the lifetime of these objects is a pain
and is the likely source of some of our crashes. I suspect we might
also be leaking memory.
This change creates an explicit copy of the main array into the
update thread to avoid using a non-thread-safe shared data
structure. This is a shallow copy. Only the pointers to the
TableUpdates are copied, which means one pointer per list (e.g. 5
in total for google4 in a new profile).
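A simplified illustration of the shallow copy; std::vector and std::shared_ptr
stand in for the actual array and pointer types:

```cpp
#include <memory>
#include <vector>

struct TableUpdate { /* per-list update data */ };

using TableUpdateArray = std::vector<std::shared_ptr<TableUpdate>>;

// Before handing work to the update thread, copy the array of pointers.
// Only the pointers are duplicated (a handful per update), so the copy is
// cheap, and the two threads no longer share a mutable container.
TableUpdateArray CopyForUpdateThread(const TableUpdateArray& mainThreadArray) {
  return TableUpdateArray(mainThreadArray);  // shallow copy of the pointers
}
```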
MozReview-Commit-ID: 221d6GkKt0M
After adopting the new thread model for Safe Browsing, we create a new
lookup cache for the update so we can still query the existing lookup cache
at the same time.
The prefix set and completions are generated when we open the new lookup
cache, but it won't include the cached results, so we would lose the cache
afterwards.
This patch copies the cache data from the old lookup cache to the new lookup
cache during an update.
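A rough sketch of the copy step, with hypothetical names for the cache map
and the cached entry:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical cached gethash result keyed by prefix.
struct CachedResult {
  int64_t negativeExpiry = 0;
  int64_t positiveExpiry = 0;
};

struct LookupCache {
  std::map<std::string, CachedResult> mCache;  // not rebuilt from disk

  // When the update opens a fresh LookupCache, the prefix set and
  // completions are regenerated, but the cached results would be lost.
  // Copy them over from the cache that was serving lookups before the update.
  void CopyCacheFrom(const LookupCache& oldCache) {
    mCache = oldCache.mCache;
  }
};
```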
MozReview-Commit-ID: L0WpiHOGIGm
Functions and preferences related to mTableFreshness and gFreshnessGuarantee
can be removed since we no longer require them.
MozReview-Commit-ID: IAa0UHLSQ56
In this patch, we make Safe Browsing V2 caching use the same algorithm as V4.
So we remove mMissCache for negative caching and the TableFreshness check for
positive caching.
But the Safe Browsing V2 gethash response doesn't contain negative/positive
cache duration information, so we hard-code a fixed value, 15 minutes, as the
cache duration. In this way, the caching mechanisms for V2 and V4 stay in sync.
An extra effort for V2 here is that we need to manually record prefix misses
because we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
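A small sketch of how the fixed duration could be applied when caching a V2
gethash result; the constant, struct, and helper names are illustrative:

```cpp
#include <chrono>
#include <cstdint>

// V2 gethash responses carry no cache-duration fields, so use a fixed value.
constexpr std::chrono::seconds kV2CacheDuration = std::chrono::minutes(15);

struct CacheEntry {
  int64_t negativeExpirySec;  // when a recorded miss stops being trusted
  int64_t positiveExpirySec;  // when a full-hash match stops being trusted
};

CacheEntry MakeV2CacheEntry(int64_t nowSec, bool isPositive) {
  int64_t expiry = nowSec + kV2CacheDuration.count();
  // Misses recorded manually (see CacheMisses above) get a negative expiry;
  // confirmed matches get a positive expiry as well.
  return CacheEntry{expiry, isPositive ? expiry : 0};
}
```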
We cache all preferences that are read while classifying a channel:
- Store them in static variables in nsUrlClassifierDBService
- Use a singleton class to manage/update preferences in nsChannelClassifier
(sketched below)
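A bare-bones sketch of the two pieces; the class and variable names are
hypothetical, and the real code hooks into Mozilla's preference observer
machinery rather than a plain callback:

```cpp
#include <atomic>
#include <string>

// Static mirrors of prefs read on every channel classification, updated from
// a pref-change callback instead of being re-read each time.
static std::atomic<bool> sSafeBrowsingMalwareEnabled{true};
static std::atomic<bool> sSafeBrowsingPhishingEnabled{true};

// Singleton that owns pref-observer registration and pushes new values into
// the cached statics when a preference changes.
class ChannelClassifierPrefs {
 public:
  static ChannelClassifierPrefs& Instance() {
    static ChannelClassifierPrefs sInstance;
    return sInstance;
  }

  void OnPrefChanged(const std::string& name, bool value) {
    if (name == "browser.safebrowsing.malware.enabled") {
      sSafeBrowsingMalwareEnabled = value;
    } else if (name == "browser.safebrowsing.phishing.enabled") {
      sSafeBrowsingPhishingEnabled = value;
    }
  }

 private:
  ChannelClassifierPrefs() = default;
};
```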
MozReview-Commit-ID: GvyBI3rVpYh
mTableRefreshness, a non-thread-safe object, might be accessed on the worker
thread and the update thread concurrently. To solve this issue, on the update
thread we only insert data into mNewTableRefreshness and merge it into
mTableRefreshness on the worker thread later.
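A simplified sketch of the split between the two maps; the member names
follow the commit message, and the threading glue is omitted:

```cpp
#include <cstdint>
#include <map>
#include <string>

class Classifier {
 public:
  // Update thread: only ever writes into the "new" map, never touches the
  // map that the worker thread reads.
  void NoteFreshUpdate(const std::string& table, int64_t updateTimeSec) {
    mNewTableRefreshness[table] = updateTimeSec;
  }

  // Worker thread: later folds the update thread's results into the map it
  // owns, so the two threads never touch the same container at the same time.
  void MergeNewTableRefreshness() {
    for (const auto& entry : mNewTableRefreshness) {
      mTableRefreshness[entry.first] = entry.second;
    }
    mNewTableRefreshness.clear();
  }

 private:
  std::map<std::string, int64_t> mTableRefreshness;     // worker thread only
  std::map<std::string, int64_t> mNewTableRefreshness;  // written on update thread
};
```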
MozReview-Commit-ID: 9WgoeYfWVfK
This patch fixes Classifier::ActiveTables() not returning v4 tables.
Classifier::mActiveTablesCache is generated by scanning the safebrowsing
directory. We use Classifier::ScanStoreDir to do the work, but it ignores
subdirectories. Since v4 tables are stored in the subdirectory 'google4',
mActiveTablesCache doesn't include v4 tables.
Fix this issue by scanning subdirectories recursively in ScanStoreDir.
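A simplified sketch of the recursive scan using std::filesystem; the real
ScanStoreDir works on Mozilla's file objects, and the name-extraction rule
here is an assumption:

```cpp
#include <filesystem>
#include <set>
#include <string>

namespace fs = std::filesystem;

// Walk the safebrowsing store directory and record every table name found.
// Recursing into subdirectories is what picks up the v4 tables that live
// under "google4"; a flat scan misses them.
void ScanStoreDir(const fs::path& dir, std::set<std::string>& activeTables) {
  for (const auto& entry : fs::directory_iterator(dir)) {
    if (entry.is_directory()) {
      ScanStoreDir(entry.path(), activeTables);  // the fix: recurse
      continue;
    }
    // Table name is the file name without its extension (e.g. ".vlpset").
    activeTables.insert(entry.path().stem().string());
  }
}
```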
MozReview-Commit-ID: I6pa6e4bFND
A new function, Classifier::AsyncApplyUpdates(), is implemented for async updates.
In addition, all public Classifier interfaces become "worker thread only" and
we remove DBServiceWorker::ApplyUpdatesBackground/Foreground.
In DBServiceWorker::FinishUpdate, instead of calling Classifier::ApplyUpdates,
we call Classifier::AsyncApplyUpdates and install a callback that notifies
the update observer when the update is finished. The callback runs on the
caller thread (i.e. the worker thread); see the sketch below.
As for the shutdown issue, when the main thread is notified to shut down,
we first *synchronously* dispatch an event to the worker thread to
shut down the update thread. After getting synchronized with all other
threads, we send the last two events, "CancelUpdate" and "CloseDb", to notify
any dangling update (i.e. BeginUpdate was called but FinishUpdate wasn't)
and do the cleanup work.
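A high-level sketch of the async shape; the thread names follow the commit
message, and the dispatch helpers are placeholders for Mozilla's event-target
machinery (here they run the task inline so the example is self-contained):

```cpp
#include <functional>

// Stand-ins for posting runnables to the named threads.
void DispatchToUpdateThread(const std::function<void()>& task) { task(); }
void DispatchToWorkerThread(const std::function<void()>& task) { task(); }

class Classifier {
 public:
  using Callback = std::function<void(bool succeeded)>;

  // Worker thread only: hand the heavy lifting to the update thread and
  // arrange for |aCallback| to run back on the caller (worker) thread.
  void AsyncApplyUpdates(Callback aCallback) {
    DispatchToUpdateThread([this, aCallback]() {
      bool ok = DoApplyUpdates();                 // runs on the update thread
      DispatchToWorkerThread([aCallback, ok]() {  // hop back to the caller
        aCallback(ok);                            // notify the update observer
      });
    });
  }

 private:
  bool DoApplyUpdates() { return true; /* real work elided */ }
};
```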
MozReview-Commit-ID: DXZvA2eFKlc