(Patch is actually r=froydnj.)
Bug 1400459 devirtualized nsIAtom so that it is no longer a subclass of
nsISupports. This means that nsAtom is now a better name for it than nsIAtom.
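As a rough sketch of what the devirtualization means structurally (these are
not the real declarations, just the shape of the change, using Mozilla's
standard inline refcounting macro):

  #include "nsISupportsImpl.h"  // NS_INLINE_DECL_REFCOUNTING
  #include "nsStringFwd.h"      // nsAString

  // Before: an XPCOM interface with an IID and virtual, NS_IMETHOD-based
  // methods, e.g.:
  //
  //   class nsIAtom : public nsISupports {
  //    public:
  //     NS_DECL_NSIATOM
  //   };
  //
  // After: a concrete, refcounted class with non-virtual methods and no
  // nsISupports base.
  class nsAtom {
   public:
    NS_INLINE_DECL_REFCOUNTING(nsAtom)
    bool Equals(const nsAString& aString) const;

   private:
    ~nsAtom();
  };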
MozReview-Commit-ID: 91U22X2NydP
This allows us to reuse the same URLInfo objects for each permission or
extension that we match, and also avoids a lot of the XPConnect overhead that
we otherwise incur when accessing URI objects from the JS side.
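A minimal sketch of the idea, with illustrative member names (the real
URLInfo caches more than just the host):

  #include "nsCOMPtr.h"
  #include "nsIURI.h"
  #include "nsString.h"

  // Wrap an nsIURI and lazily cache the pieces we match against, so that
  // checking one URL against many permissions/extensions performs each
  // virtual getter call (and string copy) at most once.
  class URLInfo final {
   public:
    explicit URLInfo(nsIURI* aURI) : mURI(aURI) {}

    const nsCString& Host() const {
      if (!mHostCached) {
        mURI->GetHost(mHost);  // error handling elided for the sketch
        mHostCached = true;
      }
      return mHost;
    }

   private:
    nsCOMPtr<nsIURI> mURI;
    mutable nsCString mHost;
    mutable bool mHostCached = false;
  };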
MozReview-Commit-ID: GqgVRjQ3wYQ
This patch merges nsAtom into nsIAtom. For the moment, both names can be used
interchangeably due to a typedef. The patch also devirtualizes nsIAtom, by
making it not inherit from nsISupports, removing NS_DECL_NSIATOM, and dropping
the use of NS_IMETHOD_. It also removes nsIAtom's IIDs.
These changes trigger knock-on changes throughout the codebase, changing the
types of many things as follows (a small illustrative example follows the
list).
- nsCOMPtr<nsIAtom> --> RefPtr<nsIAtom>
- nsCOMArray<nsIAtom> --> nsTArray<RefPtr<nsIAtom>>
  - Count() --> Length()
  - ObjectAt() --> ElementAt()
  - AppendObject() --> AppendElement()
  - RemoveObjectAt() --> RemoveElementAt()
- ns*Hashtable<nsISupportsHashKey, ...> -->
  ns*Hashtable<nsRefPtrHashKey<nsIAtom>, ...>
- nsInterfaceHashtable<T, nsIAtom> --> nsRefPtrHashtable<T, nsIAtom>
  - This requires adding a Get() method to nsRefPtrHashtable that it lacks
    but nsInterfaceHashtable has.
- nsCOMPtr<nsIMutableArray> --> nsTArray<RefPtr<nsIAtom>>
  - nsArrayBase::Create() --> nsTArray()
  - GetLength() --> Length()
  - do_QueryElementAt() --> operator[]
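As a small made-up example of the mechanical change this implies for a class
holding atoms (the class, member, and hash-key names are invented for
illustration):

  #include "mozilla/RefPtr.h"
  #include "nsHashKeys.h"         // nsStringHashKey
  #include "nsIAtom.h"            // nsIAtom/nsAtom, interchangeable via typedef
  #include "nsRefPtrHashtable.h"
  #include "nsTArray.h"

  // Before:
  //   nsCOMPtr<nsIAtom> mTag;
  //   nsCOMArray<nsIAtom> mClasses;
  //   nsInterfaceHashtable<nsStringHashKey, nsIAtom> mAtomMap;
  class ExampleElement final {
   public:
    void AddClass(nsIAtom* aAtom) { mClasses.AppendElement(aAtom); }  // was AppendObject()
    size_t ClassCount() const { return mClasses.Length(); }           // was Count()

   private:
    RefPtr<nsIAtom> mTag;
    nsTArray<RefPtr<nsIAtom>> mClasses;
    nsRefPtrHashtable<nsStringHashKey, nsIAtom> mAtomMap;
  };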
The patch also has some changes to Rust code that manipulates nsIAtom.
MozReview-Commit-ID: DykOl8aEnUJ
Ehsan, can you please review the (trivial) WebIDL changes, and Shane the
WebRequest logic?
The change to allow strings in MatchPattern arguments removes a huge amount of
the XPConnect overhead that accumulates when creating nsIURI objects for
WebRequest processing. The change to reuse existing URI objects likewise
removes a huge amount of URI-creation overhead.
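Roughly, the intent on the C++ side is to parse and cache a URL's components
once and then check them against many patterns; a hedged sketch of that usage
(the exact header path and signatures may differ):

  #include "mozilla/extensions/MatchPattern.h"
  #include "nsIURI.h"

  using namespace mozilla::extensions;

  // Parse/cache the URL once, then let every pattern in the set reuse the
  // cached pieces instead of re-fetching them from the nsIURI (or bouncing
  // through XPConnect from JS).
  static bool ShouldIntercept(const MatchPatternSet& aPatterns, nsIURI* aURI) {
    URLInfo url(aURI);
    return aPatterns.Matches(url);
  }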
MozReview-Commit-ID: 3DJjAKJK1Sa
Bill, can you please review the binding code and the general sanity of the
platform code? Andrew and zombie, can you please review the matching
algorithms and tests?
Change summary:
The existing JavaScript matching code works well overall, but it needs to be
called a lot, particularly from hot code paths. In most cases, the overhead of
the matching code on its own adds up enough to cause a problem. When we have
to call out to JavaScript via XPConnect to make a policy decision, it adds up
even more.
These classes solve both of these problems by a) being very fast, and b) being
accessible directly from C++. They are particularly optimized for the common
cases where only literal or prefix matches are required, and they take special
steps to avoid virtual calls wherever possible and to cache computed URL
values so that they can be reused across many match operations without
additional overhead.
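As an illustration of the literal/prefix fast path described above (the class
and member names here are invented, not the actual implementation):

  #include "nsReadableUtils.h"  // StringBeginsWith
  #include "nsString.h"

  // Classify each pattern token once at parse time; matching a URL component
  // is then a non-virtual string comparison in the common cases, with the
  // glob machinery left as a rarely-taken fallback.
  class TokenMatcher final {
   public:
    enum class Kind { Literal, Prefix, Glob };

    bool Matches(const nsACString& aValue) const {
      switch (mKind) {
        case Kind::Literal:
          return mToken.Equals(aValue);             // exact match
        case Kind::Prefix:
          return StringBeginsWith(aValue, mToken);  // cheap prefix check
        default:
          return MatchesGlob(aValue);               // slow path
      }
    }

   private:
    bool MatchesGlob(const nsACString& aValue) const;  // fallback, defined elsewhere

    Kind mKind = Kind::Literal;
    nsCString mToken;
  };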
MozReview-Commit-ID: BZzPZDQRnl