HTML API: Add custom text decoder #6387
Conversation
Is there a performance penalty to implementing this in PHP versus using `html_entity_decode()`?
Probably, and I will attempt to quantify that. This patch is mostly about avoiding corruption though, so introducing a performance penalty, if one exists, will not be a blocker in principle unless it's heavy enough to cause real problems. Personally I've spent too much time trying over and over again to get WordPress to properly save and render markup with specific characters after it corrupts or eliminates what I wrote. This is editor time wasted and not render time, but it's hard to quantify the costs of data corruption.
Further, there's a clear need to push this upstream, but that requires some design changes on PHP's side. Theoretically this patch can cure the decoding for now while we wait for PHP to get a better decoder, and then we can rely on PHP's functionality, which is surely faster.
@westonruter I have some preliminary performance testing data. This is slower, but not so much that it should make much of an impact. In the worst-case scenario it was adding tens of microseconds when decoding every text value in the document, including every attribute value.
Provide a custom decoder for strings coming from HTML attributes and markup. This custom decoder is necessary because of deficiencies in PHP's `html_entity_decode()` function:

- It isn't aware of 720 of the possible named character references in HTML, leaving many out that should be translated.
- It isn't able to decode character references in data segments where the final semicolon is missing, or when there are ambiguous characters after the reference name but before the semicolon. This one is complicated: refer to the HTML5 specification for clarification.

This decoder will also provide some conveniences, such as making a single-pass and interruptible decode operation possible. This will provide a number of opportunities to optimize detection and decoding of things like value prefixes, and whether a value contains a given substring.
I've pushed a few changes; I think there was some inconsistency in
```
 * @param ?string $case_sensitivity Set to `ascii-case-insensitive` to ignore ASCII case when matching.
 * @return bool Whether the attribute value starts with the given string.
 */
public static function attribute_starts_with( $haystack, $search_text, $case_sensitivity = 'case-sensitive' ) {
```
I was a little bit surprised to see this method. It seems like an optimization on what could be `str_starts_with` + `decode_attribute`. Will you explain the motivation?
@dmsnell explained the motivation to me; I'll paraphrase here (my interpretation):

It is an optimization, but an important one. I've seen a number of crashes where many megabytes of data URIs are decoded (or attempted to be) just to check whether a URL starts with `https`. This provides an optimized but also safer way to perform these smaller checks.
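For illustration, here's a minimal sketch of the call-site difference this enables. The `WP_HTML_Decoder` class name is an assumption (it doesn't appear in this excerpt), and the data URI is made up:

```php
<?php
// Hypothetical raw attribute value: a multi-megabyte data URI.
$raw_value = 'data:image/png;base64,' . str_repeat( 'iVBORw0KGgo', 100000 );

// Naive approach: decode the entire value only to inspect its prefix.
$is_data_uri = str_starts_with( WP_HTML_Decoder::decode_attribute( $raw_value ), 'data:' );

// Prefix check: examines only as many bytes as the search text requires,
// decoding character references on the fly without allocating the full string.
$is_data_uri = WP_HTML_Decoder::attribute_starts_with( $raw_value, 'data:', 'ascii-case-insensitive' );
```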
```
 * @param string $haystack String containing the raw non-decoded attribute value.
 * @param string $search_text Does the attribute value start with this plain string.
 * @param ?string $case_sensitivity Set to `ascii-case-insensitive` to ignore ASCII case when matching.
 * @return bool Whether the attribute value starts with the given string.
```
I'll push a change updating this (and something similar in `WP_Token_Map`).

I believe `?string` is a nullable string, not an optional string, equivalent to `string|null`.

optional vs. nullable params:

```php
<?php
function f( ?string $s = "default" ) { var_dump($s); };
function g( string $s = "default" ) { var_dump($s); };

echo "F\n";
f();
f(null);
f('y');

echo "G\n";
g();
try { g(null); } catch ( Error $e ) { echo $e->getMessage() . "\n"; }
g('y');
```

```
F
string(7) "default"
NULL
string(1) "y"
G
string(7) "default"
g(): Argument #1 ($s) must be of type string, null given, called in /in/EicXB on line 12
string(1) "y"
```

I don't think PHPDoc has a way of annotating optional parameters; we just state it:
```diff
 * @param string $haystack String containing the raw non-decoded attribute value.
 * @param string $search_text Does the attribute value start with this plain string.
-* @param ?string $case_sensitivity Set to `ascii-case-insensitive` to ignore ASCII case when matching.
+* @param string $case_sensitivity Optional. Set to `ascii-case-insensitive` to ignore ASCII case when matching.
 * @return bool Whether the attribute value starts with the given string.
```
`?string` (nullable string) was used for these types. The type should be `string`; optionality can appear in the param description but is derived from a default argument.

`?Type` is a nullable `Type`, not an optional parameter. Identify optional parameters in the `@param` description.

Instead of a nullable parameter that updates to `0` in the body of the function when null, use a default argument of `0`, update the `@param` type, and remove the null check and assignment from the function body.
Add mixed case and non-matching test cases
Update the parameter name and description to align with the descriptions of analogous parameters used in WP_Token_Map methods.
```
while ( $search_at < $search_length && $haystack_at < $haystack_end ) {
	$chars_match = $loose_case
		? strtolower( $haystack[ $haystack_at ] ) === strtolower( $search_text[ $search_at ] )
```
I mentioned this in the Token Map PR (#5373 (comment)) and now I've taken a moment to do some synthetic benchmarks, and I think we should change both that comparison in Token Map and this case to use `strcasecmp`. The difference is small, but may be relevant for highly optimized code. In ~3 million comparisons, `strcasecmp` ran 1.52 ± 0.02 times faster than `strtolower`.

```diff
-? strtolower( $haystack[ $haystack_at ] ) === strtolower( $search_text[ $search_at ] )
+? 0 === strcasecmp( $haystack[ $haystack_at ], $search_text[ $search_at ] )
```
I also put these both into 3v4l.org to get "Vulcan Logic Dumper" results, which do show `strcasecmp` performing fewer operations, supporting the idea that `strcasecmp` may perform slightly better:
```
Function str_case_cmp:
Finding entry points
Branch analysis from position: 0
1 jumps found. (Code = 62) Position 1 = -2
filename: /in/6a2UE
function name: str_case_cmp
number of ops: 9
compiled vars: !0 = $a, !1 = $b
line #* E I O op fetch ext return operands
-------------------------------------------------------------------------------------
3 0 E > RECV !0
1 RECV !1
4 2 INIT_FCALL 'strcasecmp'
3 SEND_VAR !0
4 SEND_VAR !1
5 DO_ICALL $2
6 IS_IDENTICAL ~3 $2, 0
7 > RETURN ~3
5 8* > RETURN null
End of function str_case_cmp
Function str_to_lower:
Finding entry points
Branch analysis from position: 0
1 jumps found. (Code = 62) Position 1 = -2
filename: /in/6a2UE
function name: str_to_lower
number of ops: 11
compiled vars: !0 = $a, !1 = $b
line #* E I O op fetch ext return operands
-------------------------------------------------------------------------------------
6 0 E > RECV !0
1 RECV !1
7 2 INIT_FCALL 'strtolower'
3 SEND_VAR !0
4 DO_ICALL $2
5 INIT_FCALL 'strtolower'
6 SEND_VAR !1
7 DO_ICALL $3
8 IS_IDENTICAL ~4 $2, $3
9 > RETURN ~4
8 10* > RETURN null
```
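For reference, this is roughly the shape such a synthetic comparison can take; this is my own sketch, not the harness that produced the numbers above:

```php
<?php
// Generate ~3 million random single-character pairs to compare.
$pairs = [];
for ( $i = 0; $i < 3000000; $i++ ) {
	$pairs[] = [ chr( random_int( 65, 122 ) ), chr( random_int( 65, 122 ) ) ];
}

$start = microtime( true );
foreach ( $pairs as [ $a, $b ] ) {
	$match = strtolower( $a ) === strtolower( $b );
}
$strtolower_s = microtime( true ) - $start;

$start = microtime( true );
foreach ( $pairs as [ $a, $b ] ) {
	$match = 0 === strcasecmp( $a, $b );
}
$strcasecmp_s = microtime( true ) - $start;

printf( "strtolower: %.3fs\nstrcasecmp: %.3fs\n", $strtolower_s, $strcasecmp_s );
```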
We can land dmsnell#15 to switch to `strcasecmp` here if you agree.
please let me add another word of caution here, because you're comparing two very different functions. we don't know how long the comparison is that we want to make, so I suspect we'd need to perform multiple iterations through the short characters to find out. not a big deal, but with insignificant improvements, insignificant setbacks can add up too.

so the first question is how many characters of the source text are involved in a match for a short word. we don't know this up-front, which is why the existing algorithm moves character-by-character until it finds a result; that is, it proceeds until we have matches leading up to a null byte.

> I've taken a moment to do some synthetic benchmarks, and I think we should change both that comparison in Token Map and this case to use strcasecmp. The difference is small, but may be relevant for highly optimized code. In ~3 million comparisons, strcasecmp ran 1.52 ± 0.02 times faster than strtolower.

thanks for running these, but I'm not interested in synthetic benchmarks, particularly because of how misleading they can be. as with the broader measurements I've made in this patch, we're 6x - 30x slower than `html_entity_decode()` and yet the impact on WordPress is insignificant because we aren't spending our time on this.

in all of my explorations, both with 300,000 actual web pages and randomly-generated files full of character entities, named character reference lookups were never in the hot path. if we want to make this faster we should attack numeric character references, and I've failed to find any way to make those faster.

> strcasecmp performing fewer operations, supporting the idea that strcasecmp may perform slightly better:

I'll believe measurements, and I'll use VLD dumps to gain insight, but not for performance. those operations can take wildly different amounts of time, plus two parallel `strtolower()` calls on independent data might end up executing in parallel on the CPU, which can eliminate the sequential bottleneck. modern CPUs are complicated beasts.

I'm nervous about this because I find `strcasecmp()`'s interface much riskier: it's not doing what we want, but something different, and we have to adapt to that difference. so if you want to change this, just make sure to include ample real-world metrics and try hard to break it, particularly with maps containing overlapping short tokens, e.g. `[ 'a' => '1', 'aa' => 2 ]`, and also maps with different prefix lengths.
I've been over this very carefully and compared it with the standard. I can't find any issues. Nicely done.
My confidence is greatly increased by the html5lib entities tests, which were largely broken before (with PHP's HTML entity implementation) and are all passing with this change.
```
 * @param string $context `attribute` for decoding attribute values, `data` otherwise.
 * @param string $text Text document containing span of text to decode.
 * @param int $at Optional. Byte offset into text where span begins, defaults to the beginning (0).
 * @param ?int &$matched_token_byte_length Optional. Holds byte-length of found lookup key if matched, otherwise not set. Default null.
```
I don't mind `matched_token_byte_length`, but an alternative that may be better here is `matched_character_reference_byte_length`, because we are looking for character references and I don't think we mention tokens elsewhere here.
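For context, here's a sketch of how a method with this docblock would be called; the `WP_HTML_Decoder` class name is an assumption, and the example values are made up:

```php
<?php
$text   = 'dogs &notin; cats';
$at     = 5;    // Byte offset of the "&" starting the character reference.
$length = null; // Receives the byte length of the matched reference.

$decoded = WP_HTML_Decoder::read_character_reference( 'data', $text, $at, $length );

if ( null !== $decoded ) {
	// $decoded is "∉" and $length is 7, the byte length of "&notin;".
	echo "Matched {$length} bytes: {$decoded}\n";
}
```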
```
 * @param int $code_point Which code point to convert.
 * @return string Converted code point, or `�` if invalid.
 */
public static function code_point_to_utf8_bytes( $code_point ) {
```
This section makes a lot more sense after seeing the table here:

It might be nice to include a textual version here, although something is certainly lost without the coloring.

Code point to UTF-8 conversion follows this pattern:

| First code point | Last code point | Byte 1 | Byte 2 | Byte 3 | Byte 4 |
|---|---|---|---|---|---|
| U+0000 | U+007F | 0xxxxxxx | | | |
| U+0080 | U+07FF | 110xxxxx | 10xxxxxx | | |
| U+0800 | U+FFFF | 1110xxxx | 10xxxxxx | 10xxxxxx | |
| U+010000 | U+10FFFF | 11110xxx | 10xxxxxx | 10xxxxxx | 10xxxxxx |
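In code, the table's bit-packing looks roughly like this; a minimal sketch of the standard encoding, not this patch's exact implementation (which, per the docblock, also returns `�` for invalid code points):

```php
<?php
// Minimal sketch of the UTF-8 bit-packing the table above describes.
function utf8_bytes_for_code_point( int $code_point ): string {
	if ( $code_point <= 0x7F ) {
		return chr( $code_point );
	}
	if ( $code_point <= 0x7FF ) {
		return chr( 0xC0 | ( $code_point >> 6 ) )
			. chr( 0x80 | ( $code_point & 0x3F ) );
	}
	if ( $code_point <= 0xFFFF ) {
		// A full implementation must also reject surrogates U+D800–U+DFFF.
		return chr( 0xE0 | ( $code_point >> 12 ) )
			. chr( 0x80 | ( ( $code_point >> 6 ) & 0x3F ) )
			. chr( 0x80 | ( $code_point & 0x3F ) );
	}
	if ( $code_point <= 0x10FFFF ) {
		return chr( 0xF0 | ( $code_point >> 18 ) )
			. chr( 0x80 | ( ( $code_point >> 12 ) & 0x3F ) )
			. chr( 0x80 | ( ( $code_point >> 6 ) & 0x3F ) )
			. chr( 0x80 | ( $code_point & 0x3F ) );
	}
	return "\u{FFFD}"; // Replacement character for invalid code points.
}
```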
I'm torn on adding this, because I want the code to be clearer, but I also don't want to embed the UTF-8 standard inside the code. WordPress already has a couple of UTF-8 functions using this pattern, and a part of me simply wants to let the docblock link to and reference the standard rather than explain it.

I've updated the docs a little anyway.
That's fine. This isn't a section of code I'd expect to change much.
Provides a custom decoder for strings coming from HTML attributes and markup. This custom decoder is necessary because of deficiencies in PHP's `html_entity_decode()` function:

- It isn't aware of 720 of the possible named character references in HTML, leaving many out that should be translated.
- It isn't aware of the ambiguous ampersand rule, which allows conversion of character references in certain contexts when they are missing their closing `;`.
- It doesn't draw a distinction for the ambiguous ampersand rule when decoding attribute values instead of markup values.
- Use of `html_entity_decode()` requires manually passing non-default parameter values to ensure it decodes properly.

This decoder also provides some conveniences, such as making a single-pass and interruptible decode operation possible. This will provide a number of opportunities to optimize detection and decoding of things like value prefixes, and whether a value contains a given substring.

Developed in #6387
Discussed in https://core.trac.wordpress.org/ticket/61072

Props dmsnell, gziolo, jonsurrell, jorbin, westonruter, zieladam.
Fixes #61072.

git-svn-id: https://develop.svn.wordpress.org/trunk@58281 602fd350-edb4-49c9-b593-d223f7449a82
Trac ticket: Core-61072
Token Map Trac ticket: Core-60698

Takes the HTML text decoder from #5337.
Replaces WordPress/gutenberg#47040
Status
The code should be working now with this change, and fully spec-compliant.
Tests are covered generally by the html5lib test suite.
Performance

After some initial testing this appears to be around 20% slower in its current state at decoding text values compared to using `html_entity_decode()`. I tested against a set of 296,046 web pages at the root domain for a list of the top-ranked domains that I found online. The impact is quite marginal, adding around 60 µs per page. For the set of close to 300k pages that took the total runtime from 87s to 105s.

I tested with the following main loop, using `microtime( true )` before and after the loop to add to the total time, in an attempt to eliminate the I/O wait time from the results. This is a worst-case scenario where we decode every attribute and every text node. Again, in practice, WordPress would likely only experience a fraction of that 60 µs because it's not decoding every text node and every attribute of the HTML it ships to a browser.
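The loop itself didn't survive extraction here; a hedged reconstruction of its likely shape, using the HTML API's token-scanning methods, might look like this:

```php
<?php
// Hedged reconstruction of the benchmark's main loop; the original code
// was lost, so names and structure here are assumptions.
$total_time = 0.0;

foreach ( $html_files as $html ) {
	$processor = new WP_HTML_Tag_Processor( $html );

	$start = microtime( true );
	while ( $processor->next_token() ) {
		if ( '#text' === $processor->get_token_name() ) {
			// Reading the text node exercises the text decoder.
			$processor->get_modifiable_text();
			continue;
		}
		// Reading every attribute value exercises the attribute decoder.
		$names = $processor->get_attribute_names_with_prefix( '' ) ?? array();
		foreach ( $names as $name ) {
			$processor->get_attribute( $name );
		}
	}
	$total_time += microtime( true ) - $start;
}
```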
I attempted to avoid string allocations, and this raised a challenge: `strpos()` doesn't provide a way to stop at a given index. This led me to try replacing it with a simple loop to advance character by character until finding a `&`. This slowed it down to about 25% slower than `html_entity_decode()`, so I removed that and instead relied on using `strpos()` with the possibility that it scans much further past the end of the value. On the test set of data it was still faster.

For comparison, I built a version that skips the `WP_Token_Map` and instead relies on a basic associative array whose keys are the character reference names and whose values are the replacements. This was 840% slower than `html_entity_decode()` and increased the average page processing time by 2.175 ms. The token map is thus approximately 36x faster than the naive implementation.
Pre-decoding

In an attempt to rely more on `html_entity_decode()` I added a pre-decoding step that would handle all well-formed numeric character encodings. The logic here is that if we can use a quick `preg_replace_callback()` pass to get as much into C code as we can, by means of `html_entity_decode()`, then maybe it would be worth it even with the additional pass.

Unfortunately the results were instantly slower, adding another 20% slowdown in my first 100k domains under test. That is, it's over 40% slower than a pure `html_entity_decode()`, whereas the code without the pre-decoding step is only 20% slower.

The Pre-Decoder
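The collapsed "Pre-Decoder" snippet isn't shown here; a hedged sketch of what such a pass might look like:

```php
<?php
// Hedged sketch of a pre-decoding pass for well-formed numeric character
// references; the actual collapsed snippet was not preserved here.
function pre_decode_numeric_references( string $text ): string {
	return preg_replace_callback(
		// Well-formed decimal and hexadecimal numeric character references.
		'/&#(?:[0-9]+|[xX][0-9a-fA-F]+);/',
		static function ( array $match ): string {
			return html_entity_decode(
				$match[0],
				ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML5
			);
		},
		$text
	);
}
```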
Faster integer decoding.

I attempted to parse the code point inline while scanning the digits, in hopes of saving some computation, but this dramatically slowed down the decoder. I think that the per-character parsing is much slower than `intval()`.

Faster digit detection.

I attempted to replace `strspn( $text, $numeric_digits )` with a custom loop examining each character for whether it was in the digit ranges, but this was just as slow as the custom integer decoder.
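In code, the approach that won out looks roughly like this (illustrative sketch, not the patch's exact code):

```php
<?php
// Let strspn() find the span of digits and intval() parse it,
// instead of parsing per-character in PHP.
$text = '&#8734; and more';
$at   = 2; // Byte offset just past "&#".

$digit_length = strspn( $text, '0123456789', $at );
if ( $digit_length > 0 ) {
	$code_point = intval( substr( $text, $at, $digit_length ) );
	// $code_point === 8734, i.e. U+221E "∞".
}
```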
Quick table lookup of group/small token indexing.

On the idea that looking up the group or small word in the lookup strings might be slow, given that it's required to iterate every time, I tried adding a patch to introduce an index table for direct lookup into where words of a given starting letter start, and whether they even exist in the table at all.

Table-lookup patch

This did not introduce a measurable speedup or slowdown on the dataset of 300k HTML pages. While I believe that the table lookup could speed up certain workloads that are heavy with named character references, it does not justify itself on realistic data, so I'm leaving the patch out.
Metrics on character references.

From the same set of 296k web pages I counted the frequency of each character reference. This includes the full syntax, so if we were to have come across 9 it would appear in the list. The linked file contains ANSI terminal codes, so view it through `cat` or `less -R`.

all-text-and-ref-counts.txt
Based on this data I added a special-case for `"`, the non-breaking space, and `&` before calling into the `WP_Token_Map`, but it didn't have a measurable impact on performance. I'm led to conclude from this that it's not those common character references slowing things down; possibly it's the numeric character references.

In another experiment I replaced my custom `code_point_to_utf8_bytes()` function with a call to `mb_chr()`, and again the impact wasn't significant. That method performs the same computation that this application-level code does, so this is not surprising.

For clearer performance direction it's probably most helpful to profile a run of decoding and see where the CPU is spending its time. It appears to be fairly quick as it is in this patch.
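As a quick spot-check, the two conversions can be compared directly; `utf8_bytes_for_code_point()` here refers to the illustrative sketch shown earlier in this conversation, not to the patch's method:

```php
<?php
// mb_chr() should agree with the hand-rolled conversion for valid inputs.
foreach ( array( 0x41, 0xE9, 0x2209, 0x1F600 ) as $code_point ) {
	assert( mb_chr( $code_point, 'UTF-8' ) === utf8_bytes_for_code_point( $code_point ) );
}
```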
Attempted alternatives

- A plain associative array in place of the `WP_Token_Map` implementation (the comparison measured above: 840% slower than `html_entity_decode()`).
- Special-casing `"`, `&`, and the non-breaking space, as they might account for up to 70-80% of all named character references in practice. This didn't impact the runtime. Runtime is likely dominated by numeric character reference decoding.
- Micro-optimizations in the numeric path: removing `if` checks, rearranging code for frequency-analysis of code points, and replacing the Windows-1252 remapping with direct replacement. In a test of 10 million randomly generated numeric character references this performed around 3-5% faster than the code in this branch, but in real tests I could not measure any impact. The micro-optimizations are likely inert in a real context.
- Replacing `substr()` + `intval()` with an unrolled table-lookup custom string-to-integer decoder. While that decoder performed significantly better than a naive pure-PHP decoder, it was still noticeably slower than `intval()`.

In my benchmark of decoding 10 million randomly-generated numeric character references, about half the time is spent exclusively inside `read_character_reference()` and the other half is spent in `code_point_to_utf8_bytes()`.

I'm led to believe that this is nearly optimal for a pure PHP solution.
Character-set detections.

The following CSV file is the result of surveying the `/` path of popular domains. It includes detections of whether the given HTML found at that path is valid UTF-8, valid Windows-1252, valid ASCII, and whether it's valid in its self-reported character sets.

A 1 indicates that the HTML passes `mb_check_encoding()` for the encoding of the given column. A 0 indicates that it doesn't. A missing value indicates that the site did not self-report that encoding.

Note that a site might self-report being encoded in multiple simultaneous and mutually-exclusive encodings.

charset-detections.csv
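The survey script isn't included here; one plausible shape for the per-page detection behind each CSV row (names are mine):

```php
<?php
// Per-page encoding checks of the kind described above; the actual
// survey script isn't part of this PR, so this is an assumption.
function detect_encodings( string $html ): array {
	return array(
		'utf8'         => (int) mb_check_encoding( $html, 'UTF-8' ),
		'windows_1252' => (int) mb_check_encoding( $html, 'Windows-1252' ),
		'ascii'        => (int) mb_check_encoding( $html, 'ASCII' ),
	);
}
```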
html5lib tests

Before: Tests: 609, Assertions: 172, Failures: 63, Skipped: 435.
After: Tests: 607, Assertions: 172, Skipped: 435.

Tests that are now possible to run that previously weren't.
Differences from `html_entity_decode()`

PHP misses 720 character references:
Æ & & Á Â À ⁡ Å ≔ Ã Ä ∖ ⌆ ℬ ≎ © © ℭ Ç ⊖ ∲ ” ’ ∯ ℂ ∳ ⅅ ∇ ˙ ` ⋄ ¨ ≐ ⇓ ⇔ ⫤ ⟸ ⟺ ⇒ ⇕ ∥ ↓ ↽ ⇁ Ð É Ê È ∈ ≂ ⇌ ℰ Ë ⅇ ▪ ∀ ℱ > > ≥ ⋛ ≧ ≷ ⩾ ≫ ℋ ≎ Í Î Ì ℑ ⋂ ⁣ ℐ Ï < < ℒ ⟨ ← ⇆ ⌈ ⇃ ↔ ⊣ ⊲ ↿ ↼ ⇐ ⇔ ⋚ ≦ ≶ ⩽ ⇚ ⟵ ⟸ ⟺ ⟹ ↙ ℒ ≪ ℳ ​ ​ ​ ​ ≫ ≪   ℕ ∦ ∉ ≂̸ ∄ ≯ ≱ ⩾̸ ≵ ≎̸ ≏̸ ⋪ ⋬ ≸ ≪̸ ⩽̸ ≴ ⊀ ∌ ⋫ ⊂⃒ ⊃⃒ ≄ ≇ ≉ ∤ Ñ Ó Ô Ò Ø Õ Ö ‾ ∂ ± ℌ ℙ ≺ ⪯ ≾ ∏ ∷ ∝ " " ℚ ⤐ ® ® ↠ ℜ ⇋ → ⇄ ⊢ ↦ ⊳ ⇀ ⇒ ⇛ ℛ ↱ ↓ ← → ↑ ∘ ⊓ ⊏ ⊐ ⊔ ⋐ ≻ ≽ ∋ ∑ ⋑ ⊃ ⊇ Þ ™ ∴ ∼ ≃ ≈ Ú Û Ù _ ⎵ ⋃ ↑ ⇅ ⥮ ⊥ ⇑ ↖ ϒ Ü ⋁ ‖ ∣ | ≀   ⋀ Ý á â ´ ´ æ à ℵ & ∠ Å ≈ ≊ å ≈ ≍ ã ä ≌ ∽ ⌅ ∵ ∵ ϶ ℬ ⨀ ★ ⋁ ⋀ ⧫ ▪ ⊥ ⊥ ─ ⊠ ‵ ˘ ¦ ⋍ • ≏ ≏ ˇ ç ¸ ¸ ¢ · ✓ ↺ ↻ ® Ⓢ ⊛ ⊚ ⊝ ≗ ♣ ≔ ∁ ≅ ∮ ∐ © ⋞ ↶ ⋟ ¤ ↷ ⋎ ⋏ ⇓ ‐ ˝ ⅆ ‡ ⇊ ° ⇂ ⋄ ♦ ¨ ϝ ÷ ÷ ⋇ ⌞ ≐ ∸ ∔ ⊡ ↓ ⇃ ⇂ ▿ ▾ ⇵ ⥯ ⩷ ≑ é ê ≕ ⅇ ≒ è ∅ ∅ ε ϵ ≖ ≂ ⪖ ⪕ ≡ ≓ ð ë ∃ ⋔ ½ ½ ¼ ¾ ≧ ⋛ ≥ ⩾ ⋙ ≩ ⪊ ⪈ ≳ > ⋗ ⪆ ⪌ ≷ ≳ ≩︀ ℋ ℏ ♥ ⤥ ⤦ ↩ ↪ ℏ í î ¡ ⇔ ì ⅈ ∭ ℑ ℑ ı ∈ ∫ ℤ ⊺ ⨼ ¿ ∈ ⁢ ï ϰ ⇐ ⪋ ⟨ « ⇤ { “ „ ≤ ← ↢ ⇇ ↔ ⇆ ↭ ⋋ ⋚ ≦ ⩽ ⪅ ≲ ⌊ ≶ ↽ ↼ ⎰ ≨ ⪉ ⪇ ⟦ ⟷ ⟼ ⟶ ↫ ◊ ⌟ ⇋ ↰ ≲ [ ‘ ‚ < ⋖ ⊴ ◂ ≨︀ ¯ ✠ ↦ ↧ ↤ ↥ ∡ µ * · · ⊟ … ∓ ∓ ∾ ⊸ ≫̸ ⇎ ⇏ ≉ ♮   ≠ ↗ ↗ ≢ ⤨ ∄ ≧̸ ≱ ≧̸ ⩾̸ ≯ ↮ ∋ ∋ ⇍ ↚ ≰ ≰ ≦̸ ⩽̸ ≮ ≮ ∤ ¬ ∉ ∌ ∦ ⋠ ⪯̸ ⊀ ⪯̸ ↛ ⋫ ⋭ ⊁ ⋡ ⪰̸ ∦ ≁ ≄ ∤ ∦ ⋢ ⋣ ⊈ ⊂⃒ ⊈ ⫅̸ ⊁ ⪰̸ ⫆̸ ⊉ ⊉ ≹ ñ ⋪ ⋬ ⋭ ↖ ó ô ⊙ ò Ω ∮ ⊕ ℴ ª º ℴ ø õ ⊗ ö ∥ ¶ ∥ ϕ ℳ ℏ ⊞ ± ± £ ≺ ⪷ ≼ ⪯ ≼ ⪵ ⪹ ⋨ ∝ ≾ ⨌ ℍ ≟ " ⇒ ⤏ √ ⟩ ⟩ » → ⇥ ↬ ⤍ } ] ⌉ ” ℜ ℜ ℝ ® ⌋ → ↣ ⇁ ⇀ ⇉ ↝ ⋌ ⇄ ⇌ ⎱ ⟧ ’ ⊵ ▸ ≻ ⪸ ≽ ⪰ ⪺ ≿ ↘ ↘ § ∖ ∖ ⌢ ∣ ­ ς ≃ ← ∖ ∣ ♠ ∥ ⊑ ⊏ ⊑ ⊐ ⊒ ⊒ □ □ ▪ ⌣ ⋆ ¯ ⊆ ⫋ ⊊ ⊂ ⊆ ⫅ ⪰ ⪶ ⋩ ≿ ¹ ² ³ ⫆ ⊋ ⊃ ⊇ ⫌ ↙ ß ⎴ ⃛ ∴ ϑ ≈ ∼   ≈ ∼ þ ˜ × ⊤ ⤩ ◃ ⊴ ▹ ⊵ ≜ ≬ ↞ ⇑ ú û ù ↾ ⌜ ¨ ¨ ↑ ↕ ↿ ↾ ⊎ υ ⌝ ▵ ▴ ⇈ ü ⇕ ⊨ ϵ ∅ ϕ ϖ ∝ ↕ ϱ ς ⊊︀ ⫋︀ ⊋︀ ϑ ⊳ ∨ | ⊲ ⊃⃒ ∝ ⫌︀ ∧ ℘ ≀ ⋂ ◯ ⋃ ▽ ⟷ ⟵ ⨁ ⨂ ⟹ ⟶ ⨆ ⨄ △ ý ¥ ÿ ℨ

In this list are many named character references without a trailing `;`. This is because HTML does not require one in all cases. There's another behavior concerning numeric character references where the trailing `;` isn't required at certain boundaries.

Further, whether or not the trailing `;` is required is subject to the ambiguous ampersand rule, which guards a legacy behavior for certain query args in URL attributes which weren't properly encoded.
Outputs from this PR

The FromPHP column shows how `html_entity_decode( $input, ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML5 )` would decode the input.

The Data and Attribute columns show how the HTML API decodes the text in the context of markup (data) and of an attribute value (attribute). These are different in HTML, and unfortunately PHP does not provide a way to differentiate them. The main difference is in the so-called "ambiguous ampersand" rule, which allows many "entities" to be written without the terminating semicolon `;` (though not all of the named character references may do this). In attributes, however, some of these can look like URL query params. E.g. is `&not=dogs` supposed to be `¬=dogs` or a query arg named `not` whose value is `dogs`? HTML chose to ensure the safety of URLs and forbid decoding character references in these ambiguous cases.
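To make the distinction concrete, a hedged sketch of the two decoding contexts; the `WP_HTML_Decoder` class and method names are assumptions based on the docblocks reviewed above:

```php
<?php
$input = 'https://example.com/?size=large&not=dogs';

// PHP applies the legacy "&not" reference even without a ";".
html_entity_decode( $input, ENT_QUOTES | ENT_SUBSTITUTE | ENT_HTML5 );
// "https://example.com/?size=large¬=dogs"

// In markup (data) context, HTML also decodes the legacy "&not".
WP_HTML_Decoder::decode_text( $input );
// "https://example.com/?size=large¬=dogs"

// In attribute context, the ambiguous ampersand rule preserves the query arg.
WP_HTML_Decoder::decode_attribute( $input );
// "https://example.com/?size=large&not=dogs"
```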
Outputs from a browser

I've compared Firefox and Safari. The middle column shows the data value, and the right column has extracted the `title` attribute of the input and set it as the `innerHTML` of the `TD`.

The empty boxes represent unrendered Unicode characters. While some characters, like the null byte, are replaced with a Replacement Character `�`, "non-characters" are passed through, even though they are parser errors.

Trac ticket: https://core.trac.wordpress.org/ticket/61072
This Pull Request is for code review only. Please keep all other discussion in the Trac ticket. Do not merge this Pull Request. See GitHub Pull Requests for Code Review in the Core Handbook for more details.