Implements https://github.com/whatwg/html/issues/6962 . Improves performance
when <meta charset> occurs in head but after the first kilobyte and aligns
behavior better with WebKit and Blink.
The main change is to avoid reloads when meta appears within head but
after the first kilobyte. Prior to this change, Gecko reloaded in that
case (in compliance with the spec!) even though WebKit and Blink did not.
Differences from WebKit and Blink:
* WebKit and Blink honor <meta charset> in <noscript>. This implementation
does not.
* WebKit and Blink look for meta as if the tree builder were unaware of
foreign content. This implementation is foreign content-aware. This
makes a difference for CDATA sections that contain a > before the meta,
as well as for style and script elements within foreign content. This could
happen if a CDATA section that has mysteriously been introduced around
what looks like a meta tag also contains another, earlier tag-looking
run of text.
* This implementation processes rel=preload and speculative loads that are
seen before <meta charset> has been seen. WebKit and Blink instead first
look for the meta and rewind before starting speculative parsing.
* Unlike WebKit, if there is neither an honored meta nor syntax resembling
an XML declaration, detection from content takes place (as in Blink).
* Unlike Blink, if there is neither an honored meta nor syntax resembling
an XML declaration, the detection from content does not depend on network
buffer boundaries.
* Unlike Blink, detection from content can trigger a reload at the end of
the stream if the guess made at that point differs from the first guess.
(See below for the definition of the input to the first guess.)
Differences from the old spec and from Gecko's previous behavior:
* Meta inside script and RCDATA elements is no longer honored.
* Late meta is now ignored and no longer triggers a reload.
* Later meta counts as early enough meta: In addition to the previous rule
of honoring a meta contained within the first 1024 bytes, a meta that merely
started within the first 1024 bytes now counts as early enough. Additionally,
if by then there hasn't been a template start tag and head hasn't ended, a
meta occurring before the earlier of the end of the head or a template start
tag counts as early enough. (See the sketch after this list.)
* Meta now counts as not-late even if the encoding label has numeric
character reference escapes.
* Syntax resembling an XML declaration longer than a kilobyte is honored if
there is no honored meta.
* If there is neither an honored meta nor syntax resembling an XML declaration,
the initial chardetng scan is potentially longer than before: the first 1024
bytes, the token spanning the 1024-byte boundary if there is such a token,
and, if by then head hasn't ended and there hasn't been a template start tag,
everything up to the end of the template start tag or the end of the token
that causes head to end, whichever comes first. However, if the token implying
the end of the head is a text token, only bytes up to the end of the previous
non-text token are considered. (This definition avoids depending on network
buffer boundaries.)
* XML View Source now uses the code for syntax resembling an XML declaration
instead of expat for extracting the internal encoding label.
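As a rough illustration of the "early enough" rule above, here is a hedged sketch with made-up names; it is not the actual parser code, only the decision it describes:

```cpp
// Sketch only; names and structure are illustrative, not the real parser.
#include <cstddef>

struct MetaSpeculationState {
  size_t metaStartOffset;  // byte offset at which the <meta> start tag began
  bool headEnded;          // head ended before this meta was seen
  bool templateSeen;       // a <template> start tag was seen before this meta
};

// A meta charset is honored if it started within the first 1024 bytes,
// or if head hasn't ended and no <template> start tag has occurred yet.
static bool IsMetaEarlyEnough(const MetaSpeculationState& aState) {
  if (aState.metaStartOffset < 1024) {
    return true;
  }
  return !aState.headEnded && !aState.templateSeen;
}
```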
Reftests are added as both WPT and Gecko reftests in order to test both http:
and file: URL scenarios. The Gecko tests retain the WPT <link> tags in order
to use the exact same bytes.
An encoding declaration has been added to a number of old tests that didn't
intend to test the new speculation behavior, especially in the context of
https://bugzilla.mozilla.org/show_bug.cgi?id=1727750 .
Differential Revision: https://phabricator.services.mozilla.com/D125808
Keeping the pref as signed, since the existing code explicitly handles that case and someone may have -1 as the pref value.
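For illustration only (the pref in question isn't named here, so the getter below is a placeholder): keeping the type signed lets an explicitly configured -1 stay meaningful.

```cpp
// Hypothetical sketch; the real pref name and getter are not shown in this
// message. With a signed pref, -1 remains representable instead of wrapping
// to a huge unsigned value.
#include <cstdint>

static int32_t InterpretLimitPref(int32_t aPrefValue) {
  if (aPrefValue < 0) {
    return -1;  // e.g. a user-set -1 meaning "no limit / use the default"
  }
  return aPrefValue;
}
```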
Differential Revision: https://phabricator.services.mozilla.com/D132020
This patch enables partitioned service workers in third-party contexts.
Unlike before, we will allow service workers to be registered even when
the third-party context doesn't have storage access, and the registered
service workers will be partitioned by the top-level site.
You can control this with the pref 'privacy.partition.serviceWorkers'.
Setting it to true enables the partitioning; otherwise, the old behavior
is kept.
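A minimal sketch of the intended gating, assuming a boolean pref value and hypothetical helper parameters; this is not the actual ServiceWorker registration code:

```cpp
// Hypothetical sketch; parameter names are illustrative.
static bool ShouldAllowServiceWorkerRegistration(bool aIsThirdPartyContext,
                                                 bool aHasStorageAccess,
                                                 bool aPartitionPrefEnabled) {
  if (!aIsThirdPartyContext || aHasStorageAccess) {
    // First-party, or third-party with storage access: allowed as before.
    return true;
  }
  // Third-party without storage access: only allowed when
  // privacy.partition.serviceWorkers is true; the registration is then
  // partitioned by the top-level site.
  return aPartitionPrefEnabled;
}
```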
Differential Revision: https://phabricator.services.mozilla.com/D131788
Currently, we are using the regular principal and the inherited regular
principal to create the clientSource in nsDocShell. This patch makes
nsDocShell use the partitioned principal when needed.
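Roughly, the selection looks like this (hedged sketch; the parameter deciding "when needed" stands in for the real condition in nsDocShell):

```cpp
// Hypothetical sketch of picking the principal used to create the
// ClientSource; aUsePartitionedPrincipal stands in for the real condition.
class nsIPrincipal;

static nsIPrincipal* ClientSourcePrincipal(nsIPrincipal* aRegularPrincipal,
                                           nsIPrincipal* aPartitionedPrincipal,
                                           bool aUsePartitionedPrincipal) {
  return aUsePartitionedPrincipal ? aPartitionedPrincipal : aRegularPrincipal;
}
```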
Differential Revision: https://phabricator.services.mozilla.com/D127629
`profiler_thread_is_being_profiled` is used a lot for markers, so it makes sense to have a specialized version, which is a bit shorter, and lives in ProfilerMarkers.h.
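Typical intended use is to cheaply skip marker work when the thread isn't being profiled; a hedged sketch (the marker-recording call is a placeholder):

```cpp
// Sketch only. profiler_thread_is_being_profiled() is the function named
// above (declared here so the snippet stands alone); RecordExpensiveMarker()
// is a hypothetical placeholder for marker payload work.
bool profiler_thread_is_being_profiled();
void RecordExpensiveMarker();

void MaybeRecordMarker() {
  if (!profiler_thread_is_being_profiled()) {
    return;  // avoid building marker payloads when not profiling this thread
  }
  RecordExpensiveMarker();
}
```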
Differential Revision: https://phabricator.services.mozilla.com/D130009
Brief recap:
Before we had blocking autoplay, we had a feature that would delay media from starting if the tab hadn't been visited by the user, in order to prevent an unvisited background tab from suddenly playing sound, which is annoying and might be unexpected to users.
In this patch:
The attribute we use to check whether we need to delay media is on the outer window, which is not Fission compatible if the top level window and its child windows are in different processes.
If the top level window has been visited, then all its child windows should follow the same result. Therefore, we move the attribute to the browsing context in order to sync it across different windows.
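A minimal sketch of the idea with made-up names: the "visited" bit lives on the (process-synced) browsing context instead of the outer window, so out-of-process children see the same value as the top level:

```cpp
// Illustrative sketch; field and function names are not the real ones.
struct BrowsingContextLike {
  // Synced across processes, so out-of-process child contexts observe the
  // same value as the top-level browsing context.
  bool mHasBeenUserVisited = false;
};

static bool ShouldDelayMediaStart(const BrowsingContextLike& aTopContext) {
  // If the top-level context has been visited, none of its descendants
  // need to delay media playback.
  return !aTopContext.mHasBeenUserVisited;
}
```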
Differential Revision: https://phabricator.services.mozilla.com/D128126
Currently, a soft reload uses the `VALIDATE_ALWAYS` flag to force
revalidation not only of the top level document, but also of subresources.
This causes content to be refetched from the web even if there
are cached entries that are still valid and could be used.
Chrome already behaves this way and does not revalidate all resources.
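A sketch of the intended behavior after this change (`VALIDATE_ALWAYS` is the real flag named above; everything else is illustrative):

```cpp
// Illustrative sketch: on a soft reload, only the top level document is
// forced to revalidate; subresources keep normal cache validation.
enum class ValidationPolicy { Default, ValidateAlways /* VALIDATE_ALWAYS */ };

static ValidationPolicy SoftReloadValidation(bool aIsTopLevelDocument) {
  return aIsTopLevelDocument ? ValidationPolicy::ValidateAlways
                             : ValidationPolicy::Default;
}
```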
Differential Revision: https://phabricator.services.mozilla.com/D122270
This is a pretty cheap way of fixing this bug: nsDocShell::loadURI calls
made with a system principal will add user interaction to the current page.
This takes advantage of the fact that all UI code for loading URIs goes
through this code path.
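Sketched with hypothetical helper names (not the actual nsDocShell code):

```cpp
// Hypothetical sketch of the described check.
class nsIPrincipal;
bool IsSystemPrincipal(nsIPrincipal* aPrincipal);  // assumed helper
void NotifyUserInteractionOnCurrentPage();         // assumed helper

void OnLoadURI(nsIPrincipal* aTriggeringPrincipal) {
  // Loads triggered by browser UI carry the system principal, so treat them
  // as user interaction on the page being navigated away from.
  if (IsSystemPrincipal(aTriggeringPrincipal)) {
    NotifyUserInteractionOnCurrentPage();
  }
}
```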
Note that during debugging I've found other cases where SH entries would be added with
a system principal, most notably when navigating to URL hashes/fragments (example.com#hash).
I'm not sure why this is happening but it doesn't go through nsDocShell::loadURI.
Differential Revision: https://phabricator.services.mozilla.com/D127558
When we navigate in history to the same entry that we're currently at, we
actually do a reload. The problem is in the way we detect whether to do a reload
in the parent process.
If a page does a back and a forward one after the other in a script, then the
parent will calculate the index for the back and tell the child to load the
entry at that index. While the child is processing the load of that entry, the
BC in the parent process still has the same entry as its active entry (until the
child commits the load of the entry over IPC). The parent then processes the
forward, calculates the index for the forward and finds the entry at that index.
This is the same entry that we were at before doing anything, and so the same
entry as the active entry in the BC in the parent process. We used to compare
the entry that we're going to load with the active entry in the BC to determine
whether we're doing a reload, and so in this situation we would assume the
forward navigation was actually doing a reload. The child would reload the page,
and we'd run the script again and we'd end up in a reload loop.
Comparing the offset with 0 to determine whether we're doing a reload fixes this
issue.
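In other words, the parent-side check changes roughly like this (illustrative sketch, not the real code):

```cpp
// Illustrative sketch of the changed reload detection in the parent process.
#include <cstdint>

struct SessionHistoryEntryLike {};

// Old, racy check: compare the target entry with the browsing context's
// active entry, which may still be stale while the child is committing a
// previous history load.
static bool IsReloadOld(const SessionHistoryEntryLike* aTarget,
                        const SessionHistoryEntryLike* aActive) {
  return aTarget == aActive;
}

// New check: a history navigation is only treated as a reload when the
// requested offset is 0, independent of the current active entry.
static bool IsReloadNew(int32_t aOffset) {
  return aOffset == 0;
}
```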
Differential Revision: https://phabricator.services.mozilla.com/D126585
Using requestedIndex on the child side is hard, because there are race conditions when a session history load is triggered
and at the same time a non-session history load commits a new active entry.
Differential Revision: https://phabricator.services.mozilla.com/D126619
This version doesn't change SetContainer handling, since that seems to be tricky for the top level page.
So only the activity change notification is fired and IsActive() is updated.
The comment about IsActive() was wrong even with the old bfcache implementation.
(I did check that it returned false when the page was in the bfcache, and many of the activity observers rely on that.)
The changes to HTMLMediaElement are needed to ensure the page can enter the bfcache.
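As a sketch of the IsActive() contract the activity observers rely on (names are illustrative, not the real ones):

```cpp
// Illustrative sketch: observers use IsActive() to skip work (timers,
// media, etc.) while the page is stored in the bfcache.
struct PageLike {
  bool mInBFCache = false;
  bool IsActive() const { return !mInBFCache; }  // false while in the bfcache
};

static bool ShouldRunActivityWork(const PageLike& aPage) {
  return aPage.IsActive();
}
```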
Differential Revision: https://phabricator.services.mozilla.com/D124684