This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
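A purely illustrative before/after of the mechanical replacement (not a
particular call site from the patch):

  // Before:
  //   using mozilla::Move;
  //   mListener = Move(aListener);

  // After (no `using` needed; <utility> provides std::move):
  mListener = std::move(aListener);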
MozReview-Commit-ID: Jxze3adipUh
Adds a new TYPE_SPECULATIVE to nsIContentPolicy and uses it as the type for
speculative connection channels from the IO service. I believe I've added it to
all the content policies in tree to make sure it behaves the same as TYPE_OTHER
used to.
The webextension test shows that the webextension proxy API sees speculative
lookups requested through the IO service.
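A hedged sketch of the intended behaviour (the helper name is hypothetical, not
code from the patch): a policy that doesn't care about speculative connections
can treat TYPE_SPECULATIVE exactly as it treated TYPE_OTHER.

  static bool IsNoContentType(nsContentPolicyType aType) {
    // Speculative connection channels carry no content, so apply the same
    // rules that TYPE_OTHER requests get.
    return aType == nsIContentPolicy::TYPE_OTHER ||
           aType == nsIContentPolicy::TYPE_SPECULATIVE;
  }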
MozReview-Commit-ID: DQ4Kq0xdUOD
When the base request that we are filtering finishes, it generally removes
itself from the document's load group. If that happens before the parser has
started (which tends to be the case for XML documents that haven't consumed
any data), it can unblock the load event too early. Adding the
StreamFilterParent to the load group while the request is still pending
prevents this problem.
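A hedged sketch of that approach (member names are illustrative, and it assumes
the filter parent can be added to the load group as an nsIRequest):

  // Join the load group up front, while the underlying request is still
  // pending, so the load event stays blocked even if that request removes
  // itself before the parser has started.
  nsCOMPtr<nsILoadGroup> loadGroup;
  mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
  if (loadGroup) {
    loadGroup->AddRequest(this, nullptr);
  }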
MozReview-Commit-ID: InxAVZQy9kT
Running the StreamFilter tests as parallel xpcshell tests uncovered a race, in
which we sometimes wind up releasing the last reference to an HttpChannel on a
background thread, and its destructor attempts to free things which can only
be freed on the main thread.
There are some corner cases where we try to attach StreamFilter endpoints to a
channel after its IPC has been closed from the other side, but request
listeners haven't been notified. This causes crashes in any of several places.
This patch changes nsHttpChannel::ProcessId to return 0 when IPC is closed, so
callers can detect that it's no longer possible to attach endpoints to it.
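A hedged caller-side sketch of the new contract (the surrounding names are
illustrative):

  // A ProcessId of 0 now signals that the channel's IPC has been closed, so
  // bail out rather than trying to attach StreamFilter endpoints.
  auto pid = chan->ProcessId();
  if (pid == 0) {
    return NS_ERROR_NOT_AVAILABLE;
  }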
MozReview-Commit-ID: BZTOqezih0P
Currently if you write an async IPDL method which has a return value, we expose
a SendXXX method which returns a MozPromise. This MozPromise can then be
->Then-ed to run code when it is resolved or rejected.
Unfortunately, using this API loses ordering guarantees which IPDL provides.
MozPromise::Then takes an event target, which the resolve runnable is dispatched
to. This means that the resolve callback's code doesn't have any ordering
guarantees relative to the processing of other IPC messages coming over the same
protocol.
This adds a new overload to SendXXX with two additional arguments, a lambda
callback which is called if the call succeeds, and a lambda callback which is
called if the call fails. These will be called in order with other IPC messages
sent over the same protocol.
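A hedged sketch of what using the new overload might look like, with a
hypothetical actor and message (SendGetValue returning an int):

  mActor->SendGetValue(
      aKey,
      [](int aValue) {
        // Resolve callback: runs in order with the other IPC messages
        // received on this protocol.
      },
      [](mozilla::ipc::ResponseRejectReason aReason) {
        // Reject callback: the call failed, e.g. the other side went away.
      });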
MozReview-Commit-ID: FZHJJaSDoZy
getter_AddRefs nulls its parameter before passing it to the getter function,
which means that on failure, we wind up with a null IO thread, rather than its
original main thread value.
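A hedged sketch of the pitfall and the fix; GetIOThread() is a hypothetical
XPCOM-style getter standing in for the real call site, and mIOThread is an
illustrative member name.

  // Problematic: getter_AddRefs() nulls mIOThread before the call, so a
  // failing getter leaves it null instead of its original main-thread value:
  //   nsresult rv = helper->GetIOThread(getter_AddRefs(mIOThread));

  // Safer: read into a local and only overwrite the member on success.
  nsCOMPtr<nsIThread> thread;
  nsresult rv = helper->GetIOThread(getter_AddRefs(thread));
  if (NS_SUCCEEDED(rv)) {
    mIOThread = thread.forget();
  }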
MozReview-Commit-ID: 1SSIeNtiBq9
In cases where data transfer finishes immediately after we close a request, we
can sometimes wind up overwriting that state information with
"finishedtransferringdata", which allows scripted callers to break certain
invariants and cause crashes.
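One possible shape of the guard this implies (a hedged sketch; the state and
member names are illustrative, not the actual StreamFilter code):

  // Only move to the "finished transferring data" state if the request is
  // still actively transferring; if it has already been closed or
  // disconnected, leave that state in place instead of overwriting it.
  if (mState == State::TransferringData) {
    mState = State::FinishedTransferringData;
  }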
MozReview-Commit-ID: Do3GttF3M9S
It currently isn't possible to suspend a channel from onHeadersReceived for a
cached response. And since it's not possible to add a new stream filter after
a response has started, adding a stream filter at that point will crash if the
channel is still registered.
This test is a basic sanity check for that scenario.
MozReview-Commit-ID: ALYUtxX7mci
Our current StreamFilter code doesn't behave well when data delivery is
targeted to a thread pool, rather than a single thread.
Thread pools don't guarantee ordered processing of events. It's theoretically
always possible for multiple events dispatched to a pool to be processed in
parallel, or even slightly out of order.
For the most part, this should only be a theoretical concern, unless several
data events are dispatched at the same time, and the pool has enough available
threads to service all of them (which is an unlikely scenario in this code).
However, when data delivery is targeted to a thread pool, the OnDataAvailable
callbacks do not have access to the thread pool itself, only the thread that
the callback was dispatched to. This means that after each OnDataAvailable
call, we likely store a new IO thread, and writes end up queued to a different
single thread depending on exactly when they happen.
Threads in thread pools often wind up executing long-running runnables, such
as synchronous IO or network operations, which means that we introduce
arbitrary delays for some writes, and are likely to wind up with highly
arbitrary ordering.
This patch solves both of these problems by introducing strict event queue
ordering, and also dispatching IO events to the original explicit delivery
target, rather than whatever the current thread happened to be at the time of
the last data event.
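A minimal, plain-C++ sketch of the "strict event queue ordering" half of the
fix (not the actual StreamFilter/XPCOM code): tasks destined for a
multi-threaded pool are funneled through a single FIFO queue, so they run one
at a time, in submission order, regardless of how many pool threads are free.

  #include <deque>
  #include <functional>
  #include <mutex>

  class OrderedDispatcher {
   public:
    using Task = std::function<void()>;
    // Submits a task to the underlying pool (or any executor).
    using PoolDispatch = std::function<void(Task)>;

    explicit OrderedDispatcher(PoolDispatch aPool) : mPool(std::move(aPool)) {}

    void Dispatch(Task aTask) {
      bool startWorker = false;
      {
        std::lock_guard<std::mutex> lock(mMutex);
        mQueue.push_back(std::move(aTask));
        // At most one drain worker is ever in flight; that single worker is
        // what keeps queued tasks from running in parallel or out of order.
        if (!mDraining) {
          mDraining = true;
          startWorker = true;
        }
      }
      if (startWorker) {
        mPool([this] { Drain(); });
      }
    }

   private:
    void Drain() {
      for (;;) {
        Task task;
        {
          std::lock_guard<std::mutex> lock(mMutex);
          if (mQueue.empty()) {
            mDraining = false;
            return;
          }
          task = std::move(mQueue.front());
          mQueue.pop_front();
        }
        task();  // run outside the lock, strictly in FIFO order
      }
    }

    PoolDispatch mPool;
    std::mutex mMutex;
    std::deque<Task> mQueue;
    bool mDraining = false;
  };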
MozReview-Commit-ID: 1SdYjS6ltqw
(Patch is actually r=froydnj.)
Bug 1400459 devirtualized nsIAtom so that it is no longer a subclass of
nsISupports. This means that nsAtom is now a better name for it than nsIAtom.
MozReview-Commit-ID: 91U22X2NydP
This patch merges nsAtom into nsIAtom. For the moment, both names can be used
interchangeably due to a typedef. The patch also devirtualizes nsIAtom, by
making it not inherit from nsISupports, removing NS_DECL_NSIATOM, and dropping
the use of NS_IMETHOD_. It also removes nsIAtom's IIDs.
These changes trigger knock-on changes throughout the codebase, changing the
types of lots of things as follows.
- nsCOMPtr<nsIAtom> --> RefPtr<nsIAtom>
- nsCOMArray<nsIAtom> --> nsTArray<RefPtr<nsIAtom>>
- Count() --> Length()
- ObjectAt() --> ElementAt()
- AppendObject() --> AppendElement()
- RemoveObjectAt() --> RemoveElementAt()
- ns*Hashtable<nsISupportsHashKey, ...> -->
ns*Hashtable<nsRefPtrHashKey<nsIAtom>, ...>
- nsInterfaceHashtable<T, nsIAtom> --> nsRefPtrHashtable<T, nsIAtom>
- This requires adding a Get() method to nsRefPtrHashtable that it lacks but
nsInterfaceHashtable has.
- nsCOMPtr<nsIMutableArray> --> nsTArray<RefPtr<nsIAtom>>
- nsArrayBase::Create() --> nsTArray()
- GetLength() --> Length()
- do_QueryElementAt() --> operator[]
The patch also has some changes to Rust code that manipulates nsIAtom.
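A hedged before/after sketch of the mechanical changes listed above
(illustrative member names, not code from the patch):

  // Before (nsIAtom still inherited from nsISupports):
  //   nsCOMPtr<nsIAtom> mTag;
  //   nsCOMArray<nsIAtom> mClasses;   // mClasses.Count(), mClasses.ObjectAt(i)
  //   nsInterfaceHashtable<nsStringHashKey, nsIAtom> mNames;

  // After (nsIAtom no longer inherits from nsISupports):
  RefPtr<nsIAtom> mTag;
  nsTArray<RefPtr<nsIAtom>> mClasses;   // mClasses.Length(), mClasses.ElementAt(i)
  nsRefPtrHashtable<nsStringHashKey, nsIAtom> mNames;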
MozReview-Commit-ID: DykOl8aEnUJ
We don't use the initial Map returned by ChannelWrapper as a map, so there's no
need for the overhead involved in creating it. We also don't need the header map
generated by HeaderChanger unless headers are actually being modified, and for
many listeners they never are, so there's no need for the map creation and
string lower-casing overhead prior to modification time.
MozReview-Commit-ID: K2uK93Oo542
This allows us to reuse the same URLInfo objects for each permission or
extension that we match, and also avoids a lot of XPConnect overhead we wind
up incurring when we access URI objects from the JS side.
MozReview-Commit-ID: GqgVRjQ3wYQ