When the base request that we are filtering finishes, it generally removes
itself from the document's load group. If that happens before the parser has
started (which tends to be the case for XML documents that haven't consumed
any data), it can unblock the load event too early. Adding the
StreamFilterParent to the load group while the request is still pending
prevents this problem.
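A minimal sketch of the idea, assuming the filter parent implements nsIRequest
and aChannel is in scope (the call site is illustrative):

    // Join the load group ourselves while the base request is still pending,
    // so the group stays non-empty (and the load event stays blocked) even if
    // the base request removes itself before the parser starts.
    nsCOMPtr<nsILoadGroup> loadGroup;
    aChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup) {
      loadGroup->AddRequest(this, nullptr);
    }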
MozReview-Commit-ID: InxAVZQy9kT
Running the StreamFilter tests as parallel xpcshell tests uncovered a race, in
which we sometimes wind up releasing the last reference to an HttpChannel on a
background thread, and its destructor attempts to free things which can only
be freed on the main thread.
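A common fix for this class of race, sketched here with NS_ProxyRelease (the
mChannel member name is illustrative, and this may not be the exact shape of
the patch):

    #include "nsProxyRelease.h"

    // Hand the potentially-last reference to the main thread instead of
    // letting it die on whatever background thread we happen to be on.
    nsCOMPtr<nsIThread> mainThread;
    NS_GetMainThread(getter_AddRefs(mainThread));
    NS_ProxyRelease("StreamFilter mChannel", mainThread, mChannel.forget());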
There are some corner cases where we try to attach StreamFilter endpoints to a
channel after its IPC has been closed from the other side, but request
listeners haven't been notified. This causes crashes in any of several places.
This patch changes nsHttpChannel::ProcessId to return 0 when IPC is closed, so
callers can detect that it's no longer possible to attach endpoints to it.
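Callers can then bail out on the sentinel; a hypothetical caller, with chan in
scope:

    // 0 now means the channel's IPC is already closed, so attaching
    // StreamFilter endpoints would be unsafe.
    if (chan->ProcessId() == 0) {
      return NS_ERROR_FAILURE;  // fail the attach rather than crash later
    }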
MozReview-Commit-ID: BZTOqezih0P
Currently, if you write an async IPDL method which has a return value, we expose
a SendXXX method which returns a MozPromise. This MozPromise can then be
->Then-ed to run code when it is resolved or rejected.
Unfortunately, using this API loses ordering guarantees which IPDL provides.
MozPromise::Then takes an event target, which the resolve runnable is dispatched
to. This means that the resolve callback's code doesn't have any ordering
guarantees relative to the processing of other IPC messages coming over the same
protocol.
This adds a new overload of SendXXX with two additional arguments: a lambda
callback which is called if the call succeeds, and a lambda callback which is
called if it fails. These are called in order with other IPC messages sent
over the same protocol.
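For a hypothetical protocol method async GetValue() returns (int32_t value),
the two styles look roughly like this (names and the reject type are
illustrative):

    // Old style: resolve via MozPromise; the callback's ordering relative
    // to other messages on this protocol is not guaranteed.
    SendGetValue()->Then(
        GetCurrentThreadSerialEventTarget(), __func__,
        [](int32_t aValue) { /* success */ },
        [](mozilla::ipc::ResponseRejectReason aReason) { /* failure */ });

    // New overload: the lambdas run in order with the protocol's other
    // incoming messages.
    SendGetValue(
        [](int32_t aValue) { /* success */ },
        [](mozilla::ipc::ResponseRejectReason aReason) { /* failure */ });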
MozReview-Commit-ID: FZHJJaSDoZy
getter_AddRefs nulls its parameter before passing it to the getter function,
which means that on failure, we wind up with a null IO thread, rather than its
original main thread value.
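A sketch of the pattern (GetDeliveryTarget and mIOThread are stand-ins for the
actual getter and member): fetch into a temporary, and only commit it on
success.

    // Buggy shape: getter_AddRefs(mIOThread) nulls mIOThread before the
    // call, so a failed getter wipes out the main-thread value it held.
    nsCOMPtr<nsIEventTarget> target;
    nsresult rv = GetDeliveryTarget(getter_AddRefs(target));  // hypothetical
    if (NS_SUCCEEDED(rv) && target) {
      mIOThread = target.forget();
    }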
MozReview-Commit-ID: 1SSIeNtiBq9
In cases where data transfer finishes immediately after we close a request, we
can sometimes wind up overwriting the request's closed state with
"finishedtransferringdata", which allows scripted callers to break certain
invariants and cause crashes.
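A sketch of the guard, assuming a simple state enum: only enter
"finishedtransferringdata" if the filter is still transferring.

    // Don't clobber a terminal state (e.g. "closed") when the final data
    // event races with request closure.
    if (mState == State::TransferringData) {
      mState = State::FinishedTransferringData;
    }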
MozReview-Commit-ID: Do3GttF3M9S
It currently isn't possible to suspend a channel from onHeadersReceived for a
cached response. And since it's not possible to add a new stream filter after
a response has started, attempting to add one at that point will crash if the
channel is still registered.
This test is a basic sanity check for that scenario.
MozReview-Commit-ID: ALYUtxX7mci
Our current StreamFilter code doesn't behave well when data delivery is
targeted to a thread pool, rather than a single thread.
Thread pools don't guarantee ordered processing of events. It's theoretically
always possible for multiple events dispatched to a pool to be processed in
parallel, or even slightly out of order.
For the most part, this should only be a theoretical concern, unless several
data events are dispatched at the same time, and the pool has enough available
threads to service all of them (which is an unlikely scenario in this code).
However, when data delivery is targeted to a thread pool, the OnDataAvailable
callbacks do not have access to the thread pool itself, only the thread that
the callback was dispatched to. This means that after each OnDataAvailable
call, we likely store a new IO thread, and writes end up queued to a different
single thread depending on exactly when they happen.
Threads in thread pools often wind up executing long-running runnables, such
as synchronous IO or network operations, which means that some writes incur
arbitrary delays, and the overall write ordering becomes highly unpredictable.
This patch solves both of these problems by introducing strict event queue
ordering, and also dispatching IO events to the original explicit delivery
target, rather than whatever the current thread happened to be at the time of
the last data event.
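One standard way to get strict ordering on top of a pool in Gecko is a
TaskQueue, sketched below (the constructor signature has varied across
versions, threadPool and someRunnable are assumed in scope, and this
illustrates the ordering idea rather than the exact patch):

    #include "mozilla/TaskQueue.h"

    // A TaskQueue serializes runnables over the underlying pool, so events
    // run strictly in dispatch order even though the pool is parallel.
    RefPtr<mozilla::TaskQueue> ioQueue =
        new mozilla::TaskQueue(do_AddRef(threadPool), "StreamFilter IO");
    ioQueue->Dispatch(someRunnable.forget());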
MozReview-Commit-ID: 1SdYjS6ltqw
(Patch is actually r=froydnj.)
Bug 1400459 devirtualized nsIAtom so that it is no longer a subclass of
nsISupports. This means that nsAtom is now a better name for it than nsIAtom.
MozReview-Commit-ID: 91U22X2NydP
This patch merges nsAtom into nsIAtom. For the moment, both names can be used
interchangeably due to a typedef. The patch also devirtualizes nsIAtom, by
making it not inherit from nsISupports, removing NS_DECL_NSIATOM, and dropping
the use of NS_IMETHOD_. It also removes nsIAtom's IIDs.
These changes trigger knock-on changes throughout the codebase, changing the
types of many things as follows:
- nsCOMPtr<nsIAtom> --> RefPtr<nsIAtom>
- nsCOMArray<nsIAtom> --> nsTArray<RefPtr<nsIAtom>>
- Count() --> Length()
- ObjectAt() --> ElementAt()
- AppendObject() --> AppendElement()
- RemoveObjectAt() --> RemoveElementAt()
- ns*Hashtable<nsISupportsHashKey, ...> -->
ns*Hashtable<nsRefPtrHashKey<nsIAtom>, ...>
- nsInterfaceHashtable<T, nsIAtom> --> nsRefPtrHashtable<T, nsIAtom>
- This requires adding a Get() method to nsRefPtrHashtable that it lacks but
nsInterfaceHashtable has.
- nsCOMPtr<nsIMutableArray> --> nsTArray<RefPtr<nsIAtom>>
- nsArrayBase::Create() --> nsTArray()
- GetLength() --> Length()
- do_QueryElementAt() --> operator[]
The patch also has some changes to Rust code that manipulates nsIAtom.
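For example, a typical before/after for the container changes (illustrative,
assuming an nsIAtom* named atom in scope):

    // Before: nsIAtom was a full XPCOM interface.
    nsCOMArray<nsIAtom> oldAtoms;
    oldAtoms.AppendObject(atom);
    int32_t oldCount = oldAtoms.Count();

    // After: devirtualized, so ordinary refcounted containers are used.
    nsTArray<RefPtr<nsIAtom>> newAtoms;
    newAtoms.AppendElement(atom);
    size_t newCount = newAtoms.Length();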
MozReview-Commit-ID: DykOl8aEnUJ
We don't use the initial Map returned by ChannelWrapper as a map, so there's no
need for the overhead involved in creating it. We also don't need the header map
generated by HeaderChanger unless headers are actually being modified, and many
listeners never modify them, so there's no need to pay the map creation and
string lower-casing overhead before modification time.
MozReview-Commit-ID: K2uK93Oo542
This allows us to reuse the same URLInfo objects for each permission or
extension that we match, and also avoids a lot of XPConnect overhead we wind
up incurring when we access URI objects from the JS side.
MozReview-Commit-ID: GqgVRjQ3wYQ
The main change here is to disconnect stream filters immediately if we try to
send start or data events to a window that's already been destroyed.
It also fixes a race where we end up in the wrong state if a stop event
arrives while the channel is being disconnected.
MozReview-Commit-ID: LwxXxoRUDgQ
Normally, we try to use the same thread for the IO and actor threads, which
means there's some basic assurance that OnStopRequest is always dispatched
after the last OnDataAvailable call. However, in cases where callers retarget
data delivery to a different background thread, it's possible for the main
thread to process the OnStopRequest runnable before the IO thread has
processed the last OnDataAvailable runnable, which can cause problems.
Dispatching the OnStop runnable through the IO thread guarantees at least basic
consistency in order of dispatch. In the case where the IO thread is the same
as the actor thread, the runnable is processed synchronously, and there's no
behavior change. In other cases, it's dispatched to the IO thread first, and
waits in the same queue as the already-dispatched OnDataAvailable events.
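A rough sketch of the dispatch path (method and member names are illustrative):

    // Route OnStopRequest through the IO thread so it queues behind any
    // OnDataAvailable runnables that are already waiting there.
    mIOThread->Dispatch(
        NewRunnableMethod("StreamFilterParent::DoStopRequest", this,
                          &StreamFilterParent::DoStopRequest),
        NS_DISPATCH_NORMAL);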
MozReview-Commit-ID: H2GD66WKxNn
Ehsan, can you please review the (trivial) WebIDL changes, and Shane the
WebRequest logic?
The change to allow strings in MatchPattern arguments removes a huge amount of
XPConnect overhead that accumulates when creating nsIURI objects for
WebRequest processing.
The change to re-use existing URI objects removes a huge amount of URI
creation overhead.
MozReview-Commit-ID: 3DJjAKJK1Sa
Ehsan, can you please review the DOM bindings, and Shane the request logic?
The bulk of the overhead in the WebRequest API is in its access to nsIChannel and
friends through XPConnect. Since it's not really feasible to convert channels
to use WebIDL bindings directly, this generic channel wrapper class serves the
same purpose.
MozReview-Commit-ID: 4mNP8HiKWK
The extension policy service uses atoms internally for permission names, so
using them directly rather than strings is considerably cheaper.
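The win comes from atom equality being pointer equality; a sketch (the
permission name, caching scheme, and aPermission parameter are illustrative):

    // Atomize once, then compare by pointer instead of comparing strings.
    // (Real code would clear a cached atom like this at shutdown.)
    static RefPtr<nsAtom> sWebRequestBlocking;
    if (!sWebRequestBlocking) {
      sWebRequestBlocking = NS_Atomize("webRequestBlocking");
    }
    bool matches = aPermission == sWebRequestBlocking;  // pointer compare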
MozReview-Commit-ID: Io8EuOXHKVy
When channel registrations aren't explicitly unregistered from JS, they're
instead unregistered when the entry object is cycle collected. When the
entries are created close to shutdown, that can leave some uncertainty as to
the order of destruction, and the WebRequestService might wind up being
destroyed before all of the entries. In that case, the registrations are
cleaned up when the hash table that holds the entries is being destroyed.
While that isn't strictly a problem, the entries expect to still be present in
the hash table when they're being destroyed, as a basic sanity check. This
patch ensures that we always remove entries from the hash table before it's
destroyed, so those invariants are maintained.
MozReview-Commit-ID: 5jWpFeFyjJZ
This part hooks up the parent side of the StreamListener protocol to the
channel, and implements the event handling and actual IO work.
Dragana, can you please review the network portions, particularly the thread
sanity, and Shane, the integration with the rest of the patch set?
MozReview-Commit-ID: DFuALpSSgA7