This morning I was reading an important reminder from Jeremy Keith. It’s about how Progressive Enhancement works and the crucial role the server plays in the enhancement step of the process.
Even with better tools and abilities for client side work, we still need a server to produce some content and send it to the browser:
These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser).
Reading this article while thinking about the changes Chrome is making to the User-Agent and the introduction of “UA Client Hints” (UA-CH) makes me wonder how this will affect Progressive Enhancement as a mindset for creating universally accessible web applications.
My worries are:
Less Progressive Enhancement on the 1st request
UA-CH: designed by front end devs, for front end devs
…at least it feels like it when reading the explainer.
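To illustrate the first worry, here is a minimal sketch (with hypothetical helper names, and the default hint set assumed from the explainer) of the UA-CH flow: the first request carries only a few low-entropy hints, and the server must answer with `Accept-CH` before richer hints arrive on subsequent requests.

```typescript
// Sketch of the UA-CH negotiation described in the explainer.
// Helper names are hypothetical; the header names are the spec's.

type HeaderMap = Record<string, string>;

// Hints a Chromium browser is expected to send by default.
function availableHints(request: HeaderMap): string[] {
  return Object.keys(request).filter((name) => name.startsWith("sec-ch-ua"));
}

// Opting in: the response asks the browser to include more hints next time.
function optIntoHighEntropyHints(response: HeaderMap): HeaderMap {
  return {
    ...response,
    "Accept-CH": "Sec-CH-UA-Model, Sec-CH-UA-Platform-Version",
  };
}

// On the very first request the server cannot enhance based on, say, the
// device model -- that hint only shows up after the opt-in round trip.
const firstRequest: HeaderMap = {
  "sec-ch-ua": '"Chromium";v="90", "Google Chrome";v="90"',
  "sec-ch-ua-mobile": "?1",
  "sec-ch-ua-platform": '"Android"',
};
console.log(availableHints(firstRequest));
```

So any server-side enhancement that needs more than the low-entropy defaults has to wait until request number two.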
It is true that, in the early days of the web, some servers blocked certain browsers based on the contents of the User-Agent. But who has experienced this lately? The business of server-side device detection has matured and quietly underpins countless use cases on the web. “Server Side Rendering” is only one of them; advertising, fraud/spam/bot detection and analytics are others.
But the server must have the same abilities! As the spec stands now, the server is discriminated against:
The spec says this behaviour is “accounted for”, but why wouldn’t you restrict Facebook or any other 3rd party from fingerprinting your users if you could? I would. It’s naive to treat the mere presence of a 3rd party included on a webpage as an opt-in.
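As a sketch of that worry: any 3rd-party script running on a page can actively request high-entropy hints through the JS API, with no server round trip involved. `navigator.userAgentData` is stubbed below so the sketch runs outside a browser; the method name is the one from the explainer, the values are made up.

```typescript
// Stub standing in for navigator.userAgentData, so this runs in Node.
interface UAData {
  getHighEntropyValues(hints: string[]): Promise<Record<string, unknown>>;
}

const userAgentData: UAData = {
  getHighEntropyValues: async (hints) =>
    Object.fromEntries(hints.map((h) => [h, "example-" + h])),
};

// What any embedded analytics or ad script could do on load:
async function fingerprint(): Promise<string> {
  const values = await userAgentData.getHighEntropyValues([
    "model",
    "platformVersion",
    "uaFullVersion",
  ]);
  return JSON.stringify(values);
}

fingerprint().then(console.log);
```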
navigator.userAgent is the most “fingerprint-like” thing we have.
Active fingerprinting better than passive?
Will UA-CH lie?
Yes. Of course. Just like the User-Agent. But common lies are better handled by server-side device detection than by every front end developer individually.
Maybe server-side device detection used to be a problem, but this business has matured and settled. The proposed changes make it harder for the server to do content negotiation and play its role in Progressive Enhancement.
If device detection is bad, why hand this opportunity to the front end devs? Sure, parsing the User-Agent is hard. Server-side device detection products have figured out how to do it effectively, but what will happen when every front end developer has to learn the pitfalls of device detection? It’s not about how hard the User-Agent is to parse; it’s about poor programming choices and the lack of a Progressive Enhancement mindset. I envision scenarios similar to the good ol’ “this webpage only works with Internet Explorer”.
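One classic pitfall as a sketch: modern UA strings deliberately contain tokens from other browsers (every Chrome UA also says “Safari”, and Edge’s also says “Chrome”), so the naive substring check every individual developer is likely to write misclassifies browsers.

```typescript
// Real-world-shaped UA strings: Edge's UA contains "Chrome" and "Safari".
const CHROME_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36";
const EDGE_UA = CHROME_UA + " Edg/90.0.818.51";

// The naive check: wrong, because Edge also says "Chrome".
const naiveIsChrome = (ua: string) => ua.includes("Chrome");

// A slightly more careful check rules out known derivatives first --
// and still has to be updated every time a new derivative ships.
const isChrome = (ua: string) =>
  ua.includes("Chrome") && !/Edg|OPR|SamsungBrowser/.test(ua);

console.log(naiveIsChrome(EDGE_UA)); // true: misclassified as Chrome
console.log(isChrome(EDGE_UA)); // false
console.log(isChrome(CHROME_UA)); // true
```

This is exactly the kind of accumulated, unglamorous knowledge that server-side detection products maintain so individual developers don’t have to.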
The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration.