# Bandwidth Considerations #4
_Note: This recommendation is a DRAFT for consideration of the MODX Governance Advisory Board._
To be mindful of the cost users pay to access the MODX Next management experience, the core shall effectively leverage the browser cache.
**Editor: JP de Vries** | ||
**First publish draft: TBD** | ||
## Goals of Recommendation
To make the MODX Next management experience more accessible and less expensive for our global user base. To optimize the page load times of the MODX Next manager across the web as a whole. To promote stability of the MODX Next manager experience. To add costly features and enhancements in a progressive manner so they can be opted out of.
## Relevant Recommendations
As bandwidth considerations are part of accessibility best practices, this recommendation relates to the [DRAFT] Accessibility Mandate. This recommendation is proposed as a guideline accepted by the MODX Advisory Board and enforced by the Accessibility Mandate.

> _Review comment:_ I just took that from Jason's template.
## Recommendation
Leverage the browser cache and preserve the stability of the progressively enhanced manager with the following practices:
- **Front End (CSS, JS) assets** delivered with the core shall not be compressed or concatenated on the server side.

> _Review thread:_
> * This should probably be clarified to mean that they are not compressed or concat'd dynamically?
> * The point I was trying to make is that we probably don't want to flat-out decline compression if it's done in a responsible manner, like in a build step.
> * The point is front end assets shouldn't be compressed on the server side. That should be done in a build process on the front end before checking in the code. Not in real time on the server (fingers crossed). You have 0% code coverage if you are shipping code you concat & minify on the server, especially with something like MODX installed on so many different server environments.
> * Agreed.
- **Front End assets** delivered with the core shall first perform feature detection and shall not be loaded if the user's environment does not support them.
- **Common Dependencies** used by the core, such as jQuery, React, AngularJS, etc., shall conditionally be loaded from a common CDN with local fallbacks to preserve stability in the event the CDN is unreachable (see the Examples section below).
- **Extra Developers** shall be encouraged, but not required, to perform feature detection to determine if a given asset has already been loaded (a minimal sketch follows this list).
> _Review comment:_ I'd probably refer to it as […]. Take out […]
- **Server Admins** will be encouraged to optimize the delivery of MODX by enabling HTTPS (HTTP/2), GZIP compression, and far-future expiry dates.
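As a rough illustration of the Extra Developers practice above, here is a minimal sketch of loading a shared dependency only if it is not already present on the page. The `loadScriptOnce` helper, the global `Chart` name, and the asset path are hypothetical examples, not part of the MODX core or any real Extra:

```js
// Minimal sketch (hypothetical names): only fetch a shared dependency
// if nothing on the page has already provided it.
function loadScriptOnce(globalName, src) {
  return new Promise(function (resolve, reject) {
    if (window[globalName]) {
      // The core or another Extra already loaded it; reuse that copy.
      resolve(window[globalName]);
      return;
    }
    var script = document.createElement('script');
    script.src = src;
    script.onload = function () { resolve(window[globalName]); };
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// Usage inside a hypothetical Extra:
loadScriptOnce('Chart', '/assets/components/myextra/chart.js').then(function (Chart) {
  // Safe to use Chart here without having shipped it twice.
});
```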
A great deal of research, testing, and debate went into the changes introduced in 2.5 that improved the stability and performance of the delivery of front end dependencies. The TL;DR is that compressing and concatenating scripts on the server side is an anti-optimization. This worst practice made the Manager experience less stable, as it was the cause of dozens of “white screen of death” issues.
> _Review thread:_
> * I'm wondering what this adds to the recommendation. Perhaps turn it into something like […] as a note on the actual guideline in the list earlier, with a link to the changes we (you) made back then.
The MODX Manager is not a website! The assets it loads vary from page to page, user to user, and change as Extras are enabled, installed, and disabled. From a bandwidth perspective, the Manager is the opposite of a website. As proven in the Proof of Concept section, optimizations that commonly apply to websites are anti-optimal when applied to the manager experience.
## Proof of Concept
> _Review thread:_
> * Further to another comment, this doesn't necessarily recommend something new. It's worthwhile research to prove a point that could be linked to, but I'm not sure if this should be in the actual recommendation. Perhaps a simple proof of concept GitHub repo people can interact with instead? One reason I'd personally challenge putting too much original "research" into a recommendation is that the research may change over time. Better examples, a live concept someone can actually try out, or perhaps something new comes out that changes the research. A recommendation will be pretty much static once accepted.
> * I'm fine with taking research out of the recommendation. It can be shared with the MAB without being in the recommendation itself. I wouldn't worry too much about anything changing this specific research, though. The fundamentals of how browsers cache files (by the URL) haven't changed and probably never will.
Seeing as the pre-2.5 `compress_js` codebase was built under the assumption that developers should reduce the number of HTTP requests at all costs, and nothing else, it was frustrating to be asked to show numbers to prove that this was anti-optimal. That said, some interesting discoveries came from this process. For example, imagine this scenario:
> _Review thread:_
> * Sorry I hurt your feelings by being stubborn about wanting proof back then.
> * It's not about my feelings at all. My feelings weren't hurt. It was frustrating, though. The reason I think this is worth pointing out is that the poor practice in question had not been tested or proven, yet hard evidence was required to remove it. It took a lot of effort to prove to integrators the fundamentals of the cache: that browsers cache files by the URL. I wouldn't expect any other contributor to invest that much energy in making a project realize why its unproven and untested poor practice needed to change. So I think that if we are going to demand proof from contributors, we should also be willing to prove and test the code we are arguing shouldn't be changed. Prove why it should be there in the first place; meet the contributor halfway. Otherwise all the burden falls on the contributor, and they may not be willing to make that significant an effort.
> * In retrospect I definitely was too stubborn on that issue, and you're right that not a lot of contributors would go to the lengths you did. I'm glad you made the effort to convince me and have tried to keep a more open mind since ;)
- Let's say we have 350kb of scripts that, when server-side `compress_js` is used, are all served from a single URL containing a comma-separated list of dependencies, like this:
  `/manager/min/index.php?f=/manager/assets/modext/core/modx.localization.js,/manager/assets/modext/util/utilities.js`
- Or, everything is instead in its own file and possibly not uglified at all (`compress_js` disabled).
With these user stories:

**Scenario A (`compress_js` enabled):**
- User visits a Resource edit page with `compress_js` enabled.
- We send them 350kb of script in a single file. `redactor.js` is one of the included files.
- User disables Rich Text on the resource and refreshes the page.
- New URL, so we send them all the scripts again and leverage the cache for none of them, even though all we are doing is sending the same scripts minus Redactor.
**Scenario B (`compress_js` disabled):**
- User visits a Resource edit page with `compress_js` disabled.
- We send them 350kb or more of script in individual files. `redactor.js` is one of the included files.
- User disables Rich Text on the resource and refreshes the page.
Scenario A has a cache usage of 0%. Even though we are serving less data by disabling an Extra, we resend all the bytes. The URL changes, so the cache is purged. We couldn’t leverage the browser cache less if we tried. We couldn’t make the experience more expensive in terms of bandwidth if we tried.
Scenario B has a cache usage of 100%. Each dependency is its own request, and therefore its own URL. The cache is leveraged as much as possible. Yes, this means more HTTP requests and possibly a slower load time (probably not) over HTTP/1, but that only applies to the first page load. Once assets are cached, they are cached!
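To make the cache argument concrete, here is a small sketch (illustration only, not manager code) that builds the two kinds of URLs from a dependency list. The combined `compress_js`-style URL changes as soon as the list changes, so the browser treats it as a brand-new resource, while the per-file URLs stay the same. The `redactor.js` path is a hypothetical stand-in:

```js
// Browsers key their cache on the full URL. Compare the two strategies
// when redactor.js is dropped from the page.
var withRedactor = [
  '/manager/assets/modext/core/modx.localization.js',
  '/manager/assets/modext/util/utilities.js',
  '/manager/assets/modext/redactor.js' // hypothetical path for illustration
];
var withoutRedactor = withRedactor.slice(0, 2);

// compress_js style: one URL for everything, derived from the whole list.
function combinedUrl(files) {
  return '/manager/min/index.php?f=' + files.join(',');
}

// false: the combined URL changed, so the whole bundle is re-downloaded.
console.log(combinedUrl(withRedactor) === combinedUrl(withoutRedactor));

// true: every remaining per-file URL is identical to the first visit,
// so each one can be served straight from the browser cache.
console.log(withoutRedactor.every(function (file) {
  return withRedactor.indexOf(file) !== -1;
}));
```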
## Considerations
The cost of bandwidth varies greatly between countries. For example, the average Brazilian works over 34 hours to afford every 500MB of bandwidth. By being considerate of bandwidth, we make building sites with MODX more accessible to our global community.
![Chart comparing how many hours of work 500MB of bandwidth costs in different countries](http://j4p.us/0S0e2g1F2r0v/Image%202016-11-03%20at%2010.46.24%20PM.jpg)

> _Review thread:_
> * I don't think recommendations should contain an image hosted elsewhere, plus the data in that image is not accessible.
> * GitHub copies the images to their own CDN if you are worried about losing the image if the original goes away. I'm fine with adding alt text.
## Examples
Loading from a CDN, with local fallbacks, only if feature detection is passed.
![Screenshot of code loading a dependency from a CDN with a local fallback, gated on feature detection](http://j4p.us/282s2A321a0r/Screen%20Shot%202016-11-03%20at%2010.47.31%20PM.png)
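For readers who cannot see the screenshot, here is a rough sketch of the same idea in plain JavaScript: run feature detection first, try a common CDN, and fall back to a local copy if the CDN is unreachable or does not deliver the library. The feature check, CDN URL, and local path are assumptions for illustration, not the actual manager implementation:

```js
// Sketch: conditional CDN load with a local fallback (assumed URLs and paths).
// Feature-detect first; if the browser can't use the enhancement, skip the load entirely.
if ('fetch' in window) {
  loadWithFallback(
    'https://code.jquery.com/jquery-3.7.1.min.js', // common CDN copy (assumed)
    '/manager/assets/vendor/jquery.min.js',        // local fallback (assumed)
    function () { return !!window.jQuery; }        // did the script define what we need?
  );
}

function loadWithFallback(cdnSrc, localSrc, isLoaded) {
  var script = document.createElement('script');
  script.src = cdnSrc;
  script.onload = function () {
    if (!isLoaded()) { loadLocal(); } // CDN responded but the library is missing
  };
  script.onerror = loadLocal;         // CDN unreachable: use the local copy
  document.head.appendChild(script);

  function loadLocal() {
    var fallback = document.createElement('script');
    fallback.src = localSrc;
    document.head.appendChild(fallback);
  }
}
```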
> _Review comment:_ typo: `prgoressive manor` should be `progressive manner`?