Foundations

There was quite a kerfuffle recently about a feature being removed from Google Chrome. To be honest, the details don’t really matter for the point I want to make, but for the record, this was about removing alert and confirm dialogs from cross-origin iframes (and eventually everywhere else too).

It’s always tricky to remove a long-established feature from web browsers, but in this case there were significant security and performance reasons. The problem was how the change was communicated. It kind of wasn’t. So the first that people found out about it was when things suddenly stopped working (like CodePen embeds).

The Chrome team responded quickly and the change has now been pushed back to next year. Hopefully there will be significant communication before that to let site owners know about the upcoming breakage.

So all’s well that ends well and we’ve all learned a valuable lesson about the importance of communication.

Or have we?

While this was going on, Emily Stark tweeted a more general point about breakage on the web:

Breaking changes happen often on the web, and as a developer it’s good practice to test against early release channels of major browsers to learn about any compatibility issues upfront.

Yikes! To me, this appears wrong on almost every level.

First of all, breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

I wasn’t the only one surprised by this message.

Simon says:

No, no, no, no! One of the best things about developing for the web is that, as a rule, browsers don’t break old code. Expecting every website and application to have an active team of developers maintaining it at all times is not how the web should work!

Edward Faulkner:

Most organizations and individuals do not have the resources to properly test and debug their website against Chrome canary every six weeks. Anybody who published a spec-compliant website should be able to trust that it will keep working.

Evan You:

This statement seriously undermines my trust in Google as steward for the web platform. When did we go from “never break the web” to “yes we will break the web often and you should be prepared for it”?!

It’s worth pointing out that the original tweet was not an official Google announcement. As Emily says right there on her Twitter account:

Opinions are my own.

Still, I was shaken to see such a cavalier attitude towards breaking changes on the World Wide Web. I know that removing dangerous old features is inevitable, but it should also be exceptional. It should not be taken lightly, and it should certainly not be expected to be an everyday part of web development.

It’s almost miraculous that I can visit the first web page ever published in a modern web browser and it still works. Let’s not become desensitised to how magical that is. I know it’s hard work to push the web forward, constantly add new features, while also maintaining backward compatibility, but it sure is worth it! We have collectively banked three decades worth of trust in the web as a stable place to build a home. Let’s not blow it.

If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.

There was something else that bothered me about that tweet and it’s not something that I saw mentioned in the responses. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.

The web has made great strides in providing more and more powerful features that can be wielded in learnable, declarative, forgiving languages like HTML and CSS. With a bit of learning, anyone can make web pages complete with form validation, lazily-loaded responsive images, and beautiful grids that kick in on larger screens. The barrier to entry for all of those features has lowered over time—they used to require JavaScript or complex hacks. And with free(!) services like Netlify, you could literally drag a folder of web pages from your computer into a browser window and boom!, you’ve published to the entire world.

But the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today.

Absolute bollocks.

You can choose to make it really complicated. Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions. Please try to remember that none of those things are required to make a website.

This is for everyone. Not just for everyone to consume, but for everyone to make.


Responses

Bruce Lawson

“the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today. Absolute bollocks.” adactio.com/journal/18337

Tom Hazledine

“I know that removing dangerous old features is inevitable, but it should also be exceptional. It should not be taken lightly, and it should certainly not be expected to be an everyday part of web development.” Wisdom, as always, from @adactio adactio.com/journal/18337

Dave Herman

Shout it from the rooftops: “There was something else that bothered me about that tweet. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.” THIS is why we don’t casually break the web. adactio.com/journal/18337

David Humphrey

“It’s almost miraculous that I can visit the first web page ever published in a modern web browser and it still works. Let’s not become desensitised to how magical that is.” adactio.com/journal/18337

T.J. Crowder

“There was something else that bothered me about that tweet and it’s not something that I saw mentioned in the responses. There was an unspoken assumption that the web is built by professional web developers. That gave me a cold chill.” Well put @adactio adactio.com/journal/18337

Eλf Sternberg

This is super-important. Google tried to break the web recently. Their reasons are valid, but the way they did it was a disaster. More to the point, Google needs to remember that amateurs build half the web, using books they find at second-hand stores. adactio.com/journal/18337

blog.jim-nielsen.com

Disclaimer: these are mostly thoughts I’m thinking out loud with no real coherence or point to drive home. Writing it all is a way to question what I actually believe myself in this piece, if anything.

[the web] is for everyone. Not just for everyone to consume, but for everyone to make. — Jeremy Keith

A little while back, I listened to an excellent talk by Hidde de Vries called “On the origin of cascades”.

There are some great ideas in the talk, but I want to pull out this one in particular which talks about the origins of styling documents on the web:

Where it all started on the web was websites without style. Web documents were just structure and browsers would decide how to present them. And that seemed fine originally because it was used in a scientific environment where people cared a lot more about the content than what that content looked like. It was also like a feature: the browsers worry about the style, we just worry about the contents. But when the web got more popular, people started asking about styling because they were used to word processors where they could change what fonts looked like or what colors looked like. So they wanted something like that on the web…and at that point, people started to put out proposals.

It’s interesting to think about the early web as this thing shaped and molded by grassroots contributors. But as the web has become more mainstream, influence from larger, commercial entities has grown.

The paths in which browsers grow are influenced by what is being asked for, and what is being asked for is in large part influenced by people and organizations with commercial interests.

Browser standards are decided upon by a consortium of people who—I believe—consist largely of representatives from big, for-profit companies. They make the browsers, so they collectively decide what’s best.

It feels like the web we’re making now is a web designed for commercial interests. The reason we get CSS grid or the JS APIs of ES6, 7, and 8 has more to do with how companies want to build and deliver software over the web than it does with how individuals want to connect and communicate with each other over the web.

If the web is “for everyone”, how and where are “everyone’s” interests being represented?

Browsers are not an enterprise of the people. We do not elect our browser representatives who decide what a browser is and is not. I suppose by using Chrome you’re casting a vote, but ultimately browsers are made following the golden rule: he who has the gold makes the rules.

# Friday, August 6th, 2021 at 7:00pm

Florens Verschelde

I can only agree with every one of @adactio’s points re. Chrome’s deprecation of alert/confirm: - Badly communicated. - Bar for breaking the Web must be extremely high. - Declarative and/or imperative features benefit Web authors who are not pro devs. adactio.com/journal/18337

Roy Tang

A quote:

“Ask yourself: Why am I seeing and feeling this? How am I growing? What am I learning? Remember: Every coincidence is potentially meaningful. How high your awareness level is determines how much meaning you get from your world.” – Ansel Adams

My stuff:

Beach volleyball #sketchdaily 217/365
  • Watching:

    • TV: No new series watched this week, but I have started a new Parks and Recreation rewatch as my background noise. Series is still great.
  • Gaming:
    • I think I’m close to completing all the Horizon Zero Dawn (PS4) Frozen Wilds quests. I only have 1 sidequest left, plus finishing the hunting grounds (1 out of 3 done so far). Might have less PS4 during the ECQ though.
    • Still managing to play a bit of Guilty Gear Strive on weeknights. I also tried playing some Street Fighter V again after the announcements this week, with the view of earning a bit more money so I can just unlock new DLC fighter Akira for free when she drops on the 16th. Coming back to SFV is weird after playing GGS for a couple of months; I lost my first match back because my instincts were all wrong, but I kind of got back into the groove by the second and third matches. I’ll try to do both GGS and SFV on my nightly runs.
    • Magic Arena: Streamed the final episode for my Adventures in the Forgotten Realms drafting on Magic Arena, you can follow via the MTGAFR tag. The last stream lasted more than 3 hours! You can view the whole Youtube playlist here.
    • Regular Saturday group played some Root and Blood Rage again, as usual. I won this week’s Blood Rage!
  • Reading: Mostly just comics again this week. When I say I’ve been reading “comics”, I mostly mean I’ve been trying to make my way through a stack of printed comic strip collections that someone had given to me earlier this year. I’ll make a blog post about them presumably when I’m done.
  • No quiz night this past week, just the usual NY Times crosswords and spelling bee with the trivia team.
This coming week:

  • August continues to be stacked! This coming week we have:
    • Marvel’s What If on Disney+ (Aug 11)
    • Jumpstart Historic Horizons dropping on Magic Arena (Aug 12)
    • Brooklyn Nine-Nine’s final season starts (Aug 12)

# Posted by Roy Tang on Sunday, August 8th, 2021 at 1:09pm

Caspar Hübinger

“But the common narrative in the web development community… is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today. Absolute bollocks.” 💛 adactio.com/journal/18337

Viljami Salminen

“Common narrative in the web dev community […] is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today. Absolute bollocks. You can choose to make it really complicated.” adactio.com/journal/18337

Brian Rinaldi

Great post @adactio: “You can…convince yourself that “the modern web” is inherently complex…But…what makes it complex: toolchains, build tools, pipelines, frameworks, libraries & abstractions….none of those things are required to make a website.” adactio.com/journal/18337

Mike Birch

“If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.” adactio.com/journal/18337

# Posted by Mike Birch on Monday, August 9th, 2021 at 9:51pm

Yakim

This article by @adactio explains so well how the actions and communication by the Chrome team are not in line with the goals and principles of the web platform. adactio.com/journal/18337

# Posted by Yakim on Tuesday, August 10th, 2021 at 4:07pm

Juan Báez

“But the common narrative in the web development community—and amongst browser makers too apparently—is that web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today. #javascript adactio.com/journal/18337?…

# Posted by Juan Báez on Thursday, August 12th, 2021 at 9:47am

Juan Báez

–You can choose to make it really complicated. Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions. adactio.com/journal/18337?…

# Posted by Juan Báez on Thursday, August 12th, 2021 at 9:48am

James Nash

“If you published a website ten or twenty years ago, and you didn’t use any proprietary technology but only stuck to web standards, you should rightly expect that site to still work today …and still work ten and twenty years from now.” adactio.com/journal/18337

# Posted by James Nash on Thursday, August 12th, 2021 at 12:25pm

Peter Müller

Another good one from ⁦@adactio⁩: »You can choose to make it really complicated … toolchains, build tools, pipelines, frameworks, libraries, and abstractions. Please try to remember that none of those things are required to make a website.« #web adactio.com/journal/18337

adactio.com

After I jotted down some quick thoughts last week on the disastrous way that Google Chrome rolled out a breaking change, others have posted more measured and incisive takes:

In fairness to Google, the Chrome team is receiving the brunt of the criticism because they were the first movers. Mozilla and Apple are on board with making the same breaking change, but Google is taking the lead on this.

As I said in my piece, my issue was less to do with whether confirm(), prompt(), and alert() should be deprecated but more to do with how it was done, and the woeful lack of communication.

Thinking about it some more, I realised that what bothered me was the lack of an upgrade path. Considering that dialog is nowhere near ready for use, it seems awfully cart-before-horse-putting to first remove a feature and then figure out a replacement.
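To make the gap concrete, here’s a rough, hypothetical sketch of what a dialog-based stand-in for confirm() might look like (askToConfirm is just an invented name, not anything proposed by the Chrome team; unlike confirm(), it can’t block, which is exactly the kind of migration detail an upgrade path would need to spell out):

// A hypothetical, non-blocking stand-in for confirm() built on the dialog element.
function askToConfirm(message) {
  return new Promise((resolve) => {
    const dialog = document.createElement('dialog');
    dialog.innerHTML = '<p></p><button value="ok">OK</button><button value="cancel">Cancel</button>';
    dialog.querySelector('p').textContent = message;
    dialog.querySelectorAll('button').forEach((button) => {
      // Closing the dialog with a value sets dialog.returnValue.
      button.addEventListener('click', () => dialog.close(button.value));
    });
    dialog.addEventListener('close', () => {
      resolve(dialog.returnValue === 'ok');
      dialog.remove();
    });
    document.body.appendChild(dialog);
    dialog.showModal();
  });
}

// Unlike confirm(), the answer arrives asynchronously:
// askToConfirm('Delete this item?').then((proceed) => { /* … */ });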

I was chatting to Amber recently and realised that there was a very different example of a feature being deprecated in web browsers…

We were talking about the KeyboardEvent.keyCode property. Did you get the memo that it’s deprecated?

But fear not! You can use the KeyboardEvent.code property instead. It’s much nicer to use too. You don’t need to look up a table of numbers to figure out how to refer to a specific key on the keyboard—you use its actual value instead.
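A minimal sketch of the difference in practice (nothing framework-specific, just the two properties side by side):

document.addEventListener('keydown', (event) => {
  // The old way (deprecated): a magic number for each key.
  if (event.keyCode === 13) { /* Enter */ }
  // The replacement: a readable name for the key.
  if (event.code === 'Enter') {
    console.log('Enter was pressed');
  }
});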

So the way that change was communicated was:

Hey, you really shouldn’t use the keycode property. Here’s a better alternative.

But with the more recent change, the communication was more like:

Hey, you really shouldn’t use confirm(), prompt(), or alert(). So go fuck yourself.

# Monday, August 16th, 2021 at 2:46pm

blog.jim-nielsen.com

There’s something beautiful about the website caniuse.com which I never fully appreciated until last week when news spread that alert, prompt, and confirm were in danger of being deprecated from the web platform.

When you look up a certain feature on caniuse.com, there’s an incredible assumption many of us make when interpreting its UI: given enough time, most everything goes green.

For example, here’s a screenshot of support for the .avif file format—a feature that (at the time of this writing) isn’t supported across all major browsers.

For perhaps the first time explicitly, I noticed how my brain interprets this UI: .avif isn’t widely supported across many browsers, but support is gaining traction as time passes and eventually everything will be green.

Eventually everything will be green. That’s quite an optimistic assumption when you think about it. Granted, there are nuances here about the standards process, but that’s probably how many of our brains work when we look at caniuse.com:

  • We get a general consensus about where a given feature is in the standards process.
  • If it’s far enough along, we look at caniuse.com to understand how and where it’s implemented today.
  • We assume eventually it’ll be supported (green) everywhere.
  • Once green, forever green.

Let’s look at a feature that recently reached the threshold of (mostly) supported everywhere: the .webp file format.

It’s noteworthy how my brain now thinks of this UI: I can use this feature—indefinitely. That “indefinitely” is the interesting part.

Due to the nature of evergreen browsers—a wonderful advancement in the evolution of the web—we currently operate under the assumption that once a feature is supported, we’ll be able to use it for the foreseeable future. There is no semver on the web. Major version changes (HTML5, CSS3, ES6) do not mean breaking changes.

I can’t (and never will) use .webp in IE. Not because API support never made the roadmap. Nor because API support was deprecated. Rather, it’s because the browser itself is being deprecated by its maker. On the web (thus far), browser APIs are rarely deprecated. Instead, browsers themselves are.

A browser with widely deprecated APIs is a broken browser for end users, and a broken browser isn’t much worth using.

Given the above, you would be forgiven if you saw a feature go from green (supported) to red (unsupported) and thought: is the browser being deprecated?

That’s the idea behind my new shiny domain: canistilluse.com. I made the site as satire after reading Jeremy Keith’s insightful piece where he notes:

the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

There are a few cases where browser APIs have been deprecated. An example on caniuse.com is appcache:

Note how browser support was short-lived.

But what about longer-lived APIs? Take a look at the substr method in JavaScript. Note its support on caniuse.com:

All green boxes indicating support, with a note at the bottom: “this feature is deprecated/obsolete and should not be used”.

To Jeremy’s point, the onus should not be on web developers to keep track of older APIs in danger of deprecation. substr is an API that’s been in browsers since, well, as far back as caniuse.com tracks browser support. alert, confirm, and prompt are the same. Green boxes back to the year 2002.
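For what it’s worth, the replacement itself is a one-line change: slice() takes start and end indexes rather than a start index and a length.

const greeting = 'Hello, world';
greeting.substr(7, 5);   // 'world' (deprecated: start index, length)
greeting.slice(7, 12);   // 'world' (start index, end index)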

I sincerely hope browser makers can find a way forward in improving the deficiencies of APIs like alert without setting further precedent that breaking the web is the price of progress.

Anyhow, that’s the thrust of the idea behind canistilluse.com (yet another domain I purchased and don’t need).

# Monday, August 16th, 2021 at 7:00pm

danq.me

Web standards sometimes disappear

Sometimes a web standard disappears quickly at the whim of some company, perhaps to a great deal of complaint (and at least one joke).

But sometimes, they disappear slowly, like this kind of web address:

http://username:password@example.com/somewhere

If you’ve not seen a URL like that before, that’s fine, because the answer to the question “Can I still use HTTP Basic Auth in URLs?” is, I’m afraid: no, you probably can’t.

But by way of a history lesson, let’s go back and look at what these URLs were, why they died out, and how web browsers handle them today. Thanks to Ruth who asked the original question that inspired this post.

Basic authentication

The early Web wasn’t built for authentication. A resource on the Web was theoretically accessible to all of humankind: if you didn’t want it in the public eye, you didn’t put it on the Web! A reliable method wouldn’t become available until the concept of state was provided by Netscape’s invention of HTTP cookies in 1994, and even that wouldn’t see widespread use for several years, not least because implementing a CGI (or similar) program to perform authentication was a complex and computationally-expensive option for all but the biggest websites.

A simplified view of the form-and-cookie based authentication system used by virtually every website today, but which was too computationally-expensive for many sites in the 1990s.

1996’s HTTP/1.0 specification tried to simplify things, though, with the introduction of the WWW-Authenticate header. The idea was that when a browser tried to access something that required authentication, the server would send a 401 Unauthorized response along with a WWW-Authenticate header explaining how the browser could authenticate itself. Then, the browser would send a fresh request, this time with an Authorization: header attached providing the required credentials. Initially, only “basic authentication” was available, which basically involved sending a username and password in-the-clear unless SSL (HTTPS) was in use, but later, digest authentication and a host of others would appear.
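The “Basic” scheme really was that basic: the Authorization header is just the word Basic followed by the base64 encoding of “username:password”. A quick sketch in browser JavaScript, using the placeholder credentials alpha and beta against a made-up example.com address:

const credentials = btoa('alpha:beta');   // "YWxwaGE6YmV0YQ=="
fetch('https://example.com/protected', {
  headers: { 'Authorization': 'Basic ' + credentials },
});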

For all its faults, HTTP Basic Authentication (and its near cousins) are certainly elegant.

Webserver software quickly added support for this new feature and as a result web authors who lacked the technical know-how (or permission from the server administrator) to implement more-sophisticated authentication systems could quickly implement HTTP Basic Authentication, often simply by adding a .htaccess file to the relevant directory. .htaccess files would later go on to serve many other purposes, but their original and perhaps best-known purpose – and the one that gives them their name – was access control.

Credentials in the URL

A separate specification, not specific to the Web (but one of Tim Berners-Lee’s most important contributions to it), described the general structure of URLs as follows:

<scheme>://<username>:<password>@<host>:<port>/<url-path>#<fragment>

At the time that specification was written, the Web didn’t have a mechanism for passing usernames and passwords: this general case was intended only to apply to protocols that did have these credentials. An example is given in the specification, and clarified with “An optional user name. Some schemes (e.g., ftp) allow the specification of a user name.”

But once web browsers had WWW-Authenticate, virtually all of them added support for including the username and password in the web address too. This allowed for e.g. hyperlinks with credentials embedded in them, which made for very convenient bookmarks, or partial credentials (e.g. just the username) to be included in a link, with the user being prompted for the password on arrival at the destination. So far, so good.
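That address shape is still understood by the URL parser today; a quick check in a browser console (alpha and beta are placeholder credentials):

const address = new URL('http://alpha:beta@example.com/somewhere');
address.username;   // "alpha"
address.password;   // "beta"
address.host;       // "example.com"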

Encoding authentication into the URL provided an incredible shortcut at a time when Web round-trip times were much longer owing to higher latencies and no keep-alives. This is why we can’t have nice things

The technique fell out of favour as soon as it started being used for nefarious purposes. It didn’t take long for scammers to realise that they could create links like this:

https://YourBank.com@HackersSite.com/

Everything we were teaching users about checking for “https://” followed by the domain name of their bank… was undermined by this user interface choice. The poor victim would actually be connecting to e.g. HackersSite.com, but a quick glance at their address bar would leave them convinced that they were talking to YourBank.com!

Theoretically: widespread adoption of EV certificates coupled with sensible user interface choices (that were never made) could have solved this problem, but a far simpler solution was just to not show usernames in the address bar. Web developers were by now far more excited about forms and cookies for authentication anyway, so browsers started curtailing the “credentials in addresses” feature.

Users trained to look for “https://” followed by the site they wanted would often fall for scams like this one: the real domain name is after the @-sign. (This attacker is also using dword notation to obfuscate their IP address; this dated technique wasn’t often employed alongside this kind of scam, but it’s another historical oddity I enjoy so I’m shoehorning it in.)

(There are other reasons this particular implementation of HTTP Basic Authentication was less-than-ideal, but this reason is the big one that explains why things had to change.)

One by one, browsers made the change. But here’s the interesting bit: the browsers didn’t always make the change in the same way.

How different browsers handle basic authentication in URLs

Let’s examine some popular browsers. To run these tests I threw together a tiny web application that outputs the Authorization: header passed to it, if present, and can optionally send a 401 Unauthorized response along with a WWW-Authenticate: Basic realm="Test Site" header in order to trigger basic authentication. Why both? So that I can test not only how browsers handle URLs containing credentials when an authentication request is received, but how they handle them when one is not. This is relevant because some addresses – often API endpoints – have optional HTTP authentication, and it’s sometimes important for a user agent (albeit typically a library or command-line one) to pass credentials without first being prompted.
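The post doesn’t include the code for that test app, but a minimal Node.js sketch of the same idea might look something like this (the /optional and /mandatory paths and the “Test Site” realm mirror the description above; everything else is an assumption):

const http = require('http');

http.createServer((request, response) => {
  const authorization = request.headers.authorization;
  // /mandatory demands credentials: challenge with a 401 and WWW-Authenticate.
  if (request.url.startsWith('/mandatory') && !authorization) {
    response.writeHead(401, { 'WWW-Authenticate': 'Basic realm="Test Site"' });
    response.end('Unauthorized');
    return;
  }
  // Otherwise just echo back whatever Authorization header (if any) arrived.
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Header: ' + (authorization || ''));
}).listen(8080);  // the real tests presumably ran on the default port 80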

In each case, I tried each of the following tests in a fresh browser instance:

  1. Go to http://<username>:<password>@<domain>/optional (authentication is optional).
  2. Go to http://<username>:<password>@<domain>/mandatory (authentication is mandatory).
  3. Experiment 1, then follow relative hyperlinks (which should correctly retain the credentials) to /mandatory.
  4. Experiment 2, then follow relative hyperlinks to /optional.

I’m only testing over the http scheme, because I’ve no reason to believe that any of the browsers under test treat the https scheme differently.

Chromium desktop family

Chrome 93 and Edge 93 both immediately suppressed the username and password from the address bar, along with the “http://” as we’ve come to expect of them. Like the “http://”, though, the plaintext username and password are still there. You can retrieve them by copy-pasting the entire address.

Opera 78 similarly suppressed the username, password, and scheme, but didn’t retain the username and password in a way that could be copy-pasted out.

Authentication was passed only when landing on a “mandatory” page; never when landing on an “optional” page. Refreshing the page or re-entering the address with its credentials did not change this.

Navigating from the “optional” page to the “mandatory” page using only relative links retained the username and password and submitted them to the server when authentication became mandatory, even in Opera, which didn’t initially appear to retain the credentials at all.

Navigating from the “mandatory” to the “optional” page using only relative links, or even entering the “optional” page address with credentials after visiting the “mandatory” page, does not result in authentication being passed to the “optional” page. However, it’s interesting to note that once authentication has occurred on a mandatory page, pressing enter at the end of the address bar on the optional page, with credentials in the address bar (whether visible or hidden from the user) does result in the credentials being passed to the optional page! They continue to be passed on each subsequent load of the “optional” page until the browsing session is ended.

Firefox desktop

Firefox 91 does a clever thing very much in line with its image as a browser that puts decision-making authority into the hands of its user. When going to the “optional” page first, it presents a dialog warning the user that they’re going to a site that does not specifically request a username, but they’re providing one anyway. If the user says no, navigation ceases (the GET request for the page takes place either way; this happens before the dialog appears). Strangely, regardless of whether the user selects yes or no, the credentials are not passed to the “optional” page. The credentials (although not the “http://”) appear in the address bar while the user makes their decision.

Similar to Opera, the credentials do not appear in the address bar thereafter, but they’re clearly still being stored: if the refresh button is pressed the dialog appears again. It does not appear if the user selects the address bar and presses enter.

Similarly, going to the “mandatory” page in Firefox results in an informative dialog warning the user that credentials are being passed. I like this approach: not only does it help protect the user from the use of authentication as a tracking technique (an old technique that I’ve not seen used in well over a decade, mind), it also helps the user be sure that they’re logging in using the account they mean to, when following a link for that purpose. Again, clicking cancel stops navigation, although the initial request (with no credentials) and the 401 response has already occurred.

Visiting any page within the scope of the realm of the authentication after visiting the “mandatory” page results in credentials being sent, whether or not they’re included in the address. This is probably the implementation truest to the expectations of the standard that I’ve found in a modern graphical browser.

Safari desktop

Safari 14 never displays or uses credentials provided via the web address, whether or not authentication is mandatory. Mandatory authentication is always met by a pop-up dialog, even if credentials were provided in the address bar. Boo!

Once passed, credentials are later provided automatically to other addresses within the same realm (i.e. optional pages).

Older browsers

Let’s try some older browsers.

From version 7 onwards – right up to the final version 11 – Internet Explorer fails to even recognise addresses with authentication credentials in them as legitimate web addresses, regardless of whether or not authentication is requested by the server. It’s easy to assume that this is yet another missing feature in the browser we all love to hate, but it’s interesting to note that credentials-in-addresses is permitted for ftp:// URLs…

…and if you go back a little way, Internet Explorer 6 and below supported credentials in the address bar pretty much as you’d expect based on the standard. The error message seen in IE7 and above is a deliberate design decision, albeit a somewhat knee-jerk reaction to the security issues posed by the feature (compare to the more-careful approach of other browsers).

These older versions of IE even (correctly) retain the credentials through relative hyperlinks, allowing them to be passed when they become mandatory. They’re not passed on optional pages unless a mandatory page within the same realm has already been encountered.

Pre-Mozilla Netscape behaved the same way. Truly this was the de facto standard for a long period on the Web, and the varied approaches we see today are the anomaly. That’s a strange observation to make, considering how much the Web of the 1990s was dominated by incompatible implementations of different Web features (I’ve written about the <blink> and <marquee> tags before, which were perhaps the most-visible division between the Microsoft and Netscape camps, but there were many, many more).

Interestingly: by Netscape 7.2 the browser’s behaviour had evolved to be the same as modern Firefox’s, except that it still displayed the credentials in the address bar for all to see.

Now here’s a real gem: pre-Chromium Opera. It would send credentials to “mandatory” pages and remember them for the duration of the browsing session, which is great. But it would also send credentials when passed in a web address to “optional” pages. However, it wouldn’t remember them on optional pages unless they remained in the address bar: this feels to me like an optimum balance of features for power users. Plus, it’s one of very few browsers that permitted you to change credentials mid-session: just by changing them in the address bar! Most other browsers, even to this day, ignore changes to HTTP Authentication credentials, which was sometimes a source of frustration back in the day.

Finally, classic Opera was the only browser I’ve seen to mask the password in the address bar, turning it into a series of asterisks. This ensures the user knows that a password was used, but does not leak any sensitive information to shoulder-surfers (the “masked” password was always shown at the same length, too, so it didn’t even leak the length of the real password). Altogether a spectacular design and a great example of why classic Opera was way ahead of its time.

The Command-Line

Most people using web addresses with credentials embedded within them nowadays are probably working with code, APIs, or the command line, so it’s unsurprising to see that this is where the most “traditional” standards-compliance is found.

I was unsurprised to discover that giving curl a username and password in the URL meant that the username and password were sent to the server (using Basic authentication, of course, if no authentication was requested):

$ curl http://alpha:beta@localhost/optional
Header: Basic YWxwaGE6YmV0YQ==
$ curl http://alpha:beta@localhost/mandatory
Header: Basic YWxwaGE6YmV0YQ==

However, wget did catch me out. Hitting the same addresses with wget didn’t result in the credentials being sent except where it was mandatory (i.e. where an HTTP 401 response and a WWW-Authenticate: header were received on the initial attempt). To force wget to send credentials when they haven’t been asked-for requires the use of the --http-user and --http-password switches:

$ wget http://alpha:beta@localhost/optional -qO-
Header:
$ wget http://alpha:beta@localhost/mandatory -qO-
Header: Basic YWxwaGE6YmV0YQ==

lynx does a cute and clever thing. Like most modern browsers, it does not submit credentials unless specifically requested, but if they’re in the address bar when they become mandatory (e.g. because of following relative hyperlinks or hyperlinks containing credentials) it prompts for the username and password, but pre-fills the form with the details from the URL. Nice.

What’s the status of HTTP (Basic) Authentication?

HTTP Basic Authentication and its close cousin Digest Authentication (which overcomes some of the security limitations of running Basic Authentication over an unencrypted connection) are very much alive, but their use in hyperlinks can’t be relied upon: some browsers (e.g. IE, Safari) completely munge such links while others don’t behave as you might expect. Other mechanisms like Bearer see widespread use in APIs, but nowhere else.

The WWW-Authenticate: and Authorization: headers are, in some ways, an example of the best possible way to implement authentication on the Web: as an underlying standard independent of support for forms (and, increasingly, Javascript), cookies, and complex multi-part conversations. It’s easy to imagine an alternative timeline where these standards continued to be collaboratively developed and maintained and their shortfalls – e.g. not being able to easily log out when using most graphical browsers! – were overcome. A timeline in which one might write a login form like this, knowing that your e.g. “authenticate” attributes would instruct the browser to send credentials using an Authorization: header:

<form method="get" action="/" authenticate="Basic">
  <label for="username">Username:</label>
  <input type="text" id="username" authenticate="username">
  <label for="password">Password:</label>
  <input type="text" id="password" authenticate="password">
  <input type="submit" value="Log In">
</form>

In such a world, more-complex authentication strategies (e.g. multi-factor authentication) could involve encoding forms as JSON. And single-sign-on systems would simply involve the browser collecting a token from the authentication provider and passing it on to the third-party service, directly through browser headers, with no need for backwards-and-forwards redirects with stacks of information in GET parameters as is the case today. Client-side certificates – long a powerful but neglected authentication mechanism in their own right – could act as first class citizens directly alongside such a system, providing transparent second-factor authentication wherever it was required. You wouldn’t have to accept a tracking cookie from a site in order to log in (or stay logged in), and if your browser-integrated password safe supported it you could log on and off from any site simply by toggling that account’s “switch”, without even visiting the site: all you’d be changing is whether or not your credentials would be sent when the time came.

The Web has long been on a constant push for the next new shiny thing, and that’s sometimes meant that established standards have been neglected prematurely or have failed to evolve for longer than we’d have liked. Consider how long it took us to get the <video> and <audio> elements because the “new shiny” Flash came to dominate, how the Web Payments API is only just beginning to mature despite over 25 years of ecommerce on the Web, or how we still can’t use Link: headers for all the things we can use <link> elements for despite them being semantically-equivalent!

The new model for Web features seems to be that new features first come from a popular JavaScript implementation, and then eventually evolve into a native browser feature: for example HTML form validation, which for the longest time could only be done client-side using scripting languages. I’d love to see somebody re-think HTTP Authentication in this way, but sadly we’ll never get a 100% solution in JavaScript alone (distributed SSO is almost certainly off the table, for example, owing to cross-domain limitations).

Or maybe it’s just a problem that’s waiting for somebody cleverer than I to come and solve it. Want to give it a go?

# Tuesday, September 7th, 2021 at 10:26am

mono

Web foundations. “Convince yourself that “the modern web” is inherently complex and convoluted. But then look at what makes it complex and convoluted: toolchains, build tools, pipelines, frameworks, libraries, and abstractions”. adactio.com/journal/18337

# Posted by mono on Thursday, December 30th, 2021 at 1:55pm


Related posts

Get the FLoC out

Google Chrome is prioritising third parties over end users.

Numbers

“I am not a number, I am a free website!”

Browsers

I’m on Team Firefox.

The imitation game

The only way to win is not to play.

Backdoor Service Workers

The tragedy of the iframe commons.

Related links

Baseline’s evolution on MDN | MDN Blog

These updated definitions make sense to me:

  1. Newly available. The feature is marked as interoperable from the day the last core browser implements it. It marks the moment when developers can start getting excited and learning about a feature.
  2. Widely available. The feature is marked as having wider support thirty months or 2.5 years later. It marks the moment when it’s safe to start using a feature without explicit cross-browser compatibility knowledge.


The UI fund

This is an excellent initiative spearheaded by Nicole and Sarah at Google! They want to fund research into important web UI work: accessibility, form controls, layout, and so on. If that sounds like something you’ve always wanted to do, but lacked the means, fill in the form.


History of the Web - YouTube

I really enjoyed this trip down memory lane with Chris:

From the Web’s inception, an ancient to contemporary history of the Web.


The Core Web Vitals hype train

Goodhart’s Law applied to Google’s core web vitals:

If developers start to focus solely on Core Web Vitals because it is important for SEO, then some folks will undoubtedly try to game the system.

Personally, my beef with core web vitals is that they introduce even more unnecessary initialisms (see, for example, Harry’s recent post where he uses CWV metrics like LCP, FID, and CLS—alongside TTFB and SI—to look at PLPs, PDPs, and SRPs. I mean, WTF?).

