As this is quite a lengthy article, let’s begin with a quick summary:

  • Using web standards can help you make decisions;
  • Smarter people than myself have thought of various ways to use the world wide web, including edge cases and other considerations;
  • There exists a proposal for anything you can think of;
  • The web wasn’t built for the browser;
  • You use web standards.

Are you a front-end developer who wants to know more about the internet or the world wide web? Are you a full-stack developer (in whatever capacity, or meaning of this term) and do you want to know more about web standards? Are you an architect or back-end developer and do you need help designing a (web-based) API (application programming interface)?

If you said yes to at least one of those questions, or if this subject has sparked your interest, keep on reading!

  1. What is the world wide web?
    1. HTTP
    2. Hypertext, hyperlinks, hypermedia
    3. HTML
    4. …what about other media?
    5. Media type
    6. Registering a media type?
  2. Request for Comments (RFC)
    1. RFC for everything
    2. RFC for anything
      1. RFC 1121
      2. RFC 1925
      3. RFC 2119
  3. What gives? How does this help me?
    1. HTTP: the why?
    2. Inventing the wheel
    3. How do I prevent re-inventing the wheel?
  4. Web standards for web developers
  5. Join the development of our www
  6. Final words

What is the World Wide Web?

To be able to talk about web standards, it’s good to first define what the web is and consists of, including some elaborations.

HTTP

HTTP is short for Hypertext Transfer Protocol.

A protocol is generally a collection of rules to achieve some goal. In this case we’re dealing with a communication and transfer protocol, and it describes the syntax, semantics, error handling and correction, as well as synchronization to accomplish the following:

“…where hypertext documents include hyperlinks to other resources that the user can easily access.”

Wikipedia

“Hyper-“ from the Greek prefix “ὑπερ-“, which means over or beyond, as in exceeding.

  • Hypertext is digitally displayable text, with references (yes, the hyperlinks) to other (hyper)text, which can be accessed directly by a user.
  • Hyperlink is a link (a reference) that grants a user access to the data at the other end of that link (referred data).
  • Hypermedia is an extension of the hypertext term and describes non-linear (digital) media, including both plain text and hyperlinks, as well as images, audio, and video.

An example of hypermedia is the World Wide Web!

HTML

HTML is short for HyperText Markup Language.

It follows that hypertext doesn’t equal HTML. Instead, this language is a way to mark up hypertext in such a way that the data (the text, and when we talk about hypermedia also the other types of content) can be displayed or referenced (using hyperlinks).

The HTML standard is meant to describe how to annotate documents so a browser may display them. It’s a way to make hypertext interactive. Thus, the web browser is a tool to follow hyperlinks, which we know as browsing the web.

HTTP aids transferring HTML documents, as it’s the Hypertext Transfer Protocol for documents written in Hypertext Markup Language.

…what about other media?

When you’re browsing the web, you’ll often run into content that’s not just text or text-based — plain or styled — but rather things such as images, audio, video, and other non-textual content.

The smart humans that brought HTTP to life (and continued developing it) also came up with a clever way to allow HTTP to send more than just hypertext and hypertext-based documents. The way they accomplished this is by using metadata which is sent along with the content requested.

In order to support other (types of) media, HTTP commonly uses a media type (MIME type).

Media type

The data format of a representation is known as a media type […]

On the web we technically talk about representations of resources. The format of such a representation (meaning the syntax, the rules, the usage, the constraints, etc.) is commonly known as a media type.

Some well known examples of media types are:

  • text/html
  • image/png
  • application/json

There are rules about the syntax and usage of these media types. Together we came up with and agreed on how specific binary data can be interpreted (read) or arranged (written). A PNG image is a PNG image if we interpret the binary data as a PNG. How that would and should work for PNGs is written down and accepted as a standard, after which it was (publicly) registered.

Registering a media type?

Here are another three examples of media types:

  • application/vnd.ms-powerpoint
  • application/graphql
  • application/vnd.xpbytes.errors.v1+json

The first one describes Microsoft PowerPoint files. The second one used to be the (de facto) standard way to describe GraphQL queries and responses. The last one is one of many vendor (vnd) specific media types that we use at work to describe error messages.

There exists a document (and standard) that defines and explains how to register a media type: RFC 6838: Media Type Specifications and Registration Procedures

“This document defines procedures for the specification and registration of media types for use in HTTP, MIME, and other Internet protocols.”

The goal of registering a media type is to make it accessible for others to use on the interwebs, and so that we may agree on how data can be interpreted or arranged. Whilst it’s not strictly mandatory to register media types starting with vnd., doing so will aid in the process of writing, sanitizing, iterating on, and improving a media type specification.

For media types outside of the vnd. (vendor) and prs. (personal) trees (in other words: media types that don’t start with one of those prefixes), registration is mandatory. It’s not a media type until it’s officially been registered.

Request for Comments (RFC)

That last document, the one that describes how to register a media type, is called an RFC: a Request for Comments. It’s a published technical document that aims to describe a new standard or alter an existing one (which then proliferates into a new standard because of the changes).

XKCD 927: Standards. Comic with three panels. Heading above the panels: "How standards proliferate (see: A/C chargers, character encodings, instant messaging, etc.)". First panel: "Situation: there are 14 competing standards". Second panel: In speech bubbles: "14?! Ridiculous! We need to develop one universal standard that covers everyone's use cases.". The response: "Yeah!". Third panel: "Soon: Situation. There are 15 competing standards."

It’s commonly the Internet Engineering Task Force (IETF) that publishes such documents. It’s this task force that ultimately decides whether a distributed proposal, after receiving commentary, will be published as an internet (web) standard.

And my oh my… there literally exists an RFC for almost every single thing you can think of.

RFC for everything

As mentioned, media types are standards, and these exist as published RFCs. Examples are RFC 2854 (the text/html media type) and RFC 8259 (the JSON format behind application/json).

These documents are incredibly helpful, because if you want to know whether something is possible or allowed, how to accomplish it, and what the rules are, you can inspect them and almost always retrieve the answer.

With “for everything” I did really mean everything.

How plain text is defined and how to work with it can be found in RFC 822, which was written before the internet was called the internet (back then it was ARPANET). How to transfer text using the internet (utilizing HTTP) is defined in RFC 1521, but HTTP itself is also published in various RFCs, including RFC 1945 (HTTP/1.0), RFC 2616 (HTTP/1.1), and more recently RFC 9110 (HTTP Semantics).

RFC for anything

That many technical aspects have been documented and published in RFCs (and often have been accepted as web standards) may not be unexpected. However, when I hinted at the fact that there is an RFC for everything, I really did mean anything. So to continue down this rabbit hole, here are three more examples.

RFC 1121

This RFC presents a collection of poems that were presented at “Act One”, a symposium held partially in celebration of the 20th anniversary of the ARPANET.

RFC 1925

This memo documents the fundamental truths of networking for the Internet community.

[…]

The Fundamental Truths

(1) It Has To Work.

[…]

(3) With sufficient thrust, pigs fly just fine.

RFC 2119

We’ve even defined how to talk about standards in RFCs.

The one that’s referred to the most is RFC 2119: Key words for use in RFCs to Indicate Requirement Levels:

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

There are narrow definitions for the aforementioned keywords, and knowing those definitions will help with both writing and reading RFCs, and subsequently any web standard.

What gives? How does this help me?

Let’s start with the question: who was HTTP meant for? You’ve read the word user several times, without a definition of whom or what a user is. Tell me, what’s a user?

The answer to this question can be found in RFC 2616: Hypertext Transfer Protocol – HTTP/1.1:

[…] used as a generic protocol for communication between user agents […]

It first describes what HTTP can be used for. Then, not much further, the definition is given as follows:

user agent: The client which initiates a request.

These are often browsers, editors, spiders (web-traversing robots), or other end user tools.

Therefore, HTTP is not meant solely for people, nor solely for browsers.

HTTP: the why?

If the protocol is meant for more than browsers and end users (people), there must be a bigger picture. This standard wants to help us achieve more than merely sending hypertext and other hypermedia over the wire to our collective screens.

It doesn’t take long to discover which challenges HTTP is trying to solve, or facilitate. This article is not the right place for a deep dive, but understand that there is more than the list that follows. Smart people have spent ample time finding the difficulties with communicating over the internet, and which problems and challenges need solving. HTTP is the answer.

HTTP is the solution to so-called hard problems, such as:

  • Caching (many user agents, one server);
  • Consistency (data format, errors, and separating data from errors);
  • Interoperability (between different implementations);
  • (and more).

Inventing the wheel

It’s more or less (or is supposed to be) common knowledge that it’s generally disastrous to roll your own encryption. On Security StackExchange one can find many questions about the whys and why nots, and one of the most compelling comments to me is:

If you are not convinced of “Don’t Roll Your Own [Cryptography/Security]”, then you probably are not an expert and there are many mistakes you likely will make.

dr jimbob on security.stackexchange.com

How does that relate to web standards?

You may have heard the expression “there’s a package for that” or “there’s a gem for that”. The same could be said about abstract problems: “there’s a spec for that”. Even when there is no spec, there’s often at least one matching proposal or draft. This also means that it’s generally unnecessary to reinvent the wheel. There’s already been a person or group of people who have thought about the problems and (hopefully) written down some smart things about them.

And if, despite all that, you do decide to figure it out on your own, you can still learn from what’s already out there, or compile your research using the RFC format so that others may use your standard.

As a developer — front-end, or back-end, or other-end — you use (web)standards every day!

How do I prevent re-inventing the wheel?

There’s no easy answer to this question, because it implies you can easily know or intuit that there’s already a standard. When I go looking for a standard myself, it’s usually for one of two reasons:

  1. something feels harder than necessary,
  2. something sounds complex.

A few examples that hopefully can light your way:

a. Adding interaction to HTML elements.

In the past I have made the grave mistake (yes, mistake) of using onclick="" attributes, or the equivalent code in JavaScript, on elements that are not interactive. Since then, I’ve learnt about using the web without a mouse (and became a keyboard user), and found out that to make a “button” clickable for all types of users, a lot of JavaScript is required. Because there’s not only keyboard, and mouse, and taps. There are also screen readers, and voice control, and joysticks, and other assistive technologies. Oof.

<!-- a bit more accessible than onclick="" -->
<div class="button" role="button" aria-pressed="false" tabindex="0">
  My action
</div>

<script>
  // [...]
  const myActionElement = document.querySelector('[role="button"]');

  myActionElement.addEventListener('keydown', (e) => {
    // Some older browsers report the space key as 'Spacebar'
    if (e.key === ' ' || e.key === 'Enter' || e.key === 'Spacebar') {
      e.target.setAttribute('aria-pressed', 'true');
    }
  });

  // [...] add keyup handler to remove `aria-pressed`
  // [...] add blur handler to remove `aria-pressed`
  // [...] add disabled support via `aria-disabled`
  // [...] and more
</script>

The HTML standard has quite a few tools in its proverbial shed to make HTML interactive, including the button element. This element supports all kinds of operations and interactions. According to the description on MDN (Mozilla Developer Network), the button element is an interactive element activated by a user with a mouse, keyboard, touch, voice command, or other assistive technology.

On top of that, the element can be blocked using the disabled attribute, has a default role (namely button), can receive focus without a tabindex, and doesn’t require custom CSS to make it appear as a clickable button.

So before using a div, my rule is: ensure knowledge of the available HTML elements and their purpose.

b. Being able to serve many users with little server power

I’ve worked on more than one project with hundreds, thousands, or even millions of daily users. Servers can be expensive, as is bandwidth in the cloud, and usage is often inconsistent.

This is a hard challenge.

If you search for solutions to this consistency problem, you’ll find a technique called auto-scaling: automatically adding or removing servers or server capacity to be able to serve more concurrent users when necessary, and reducing costs when it’s not. If you then continue your search, you’ll also find that auto-scaling is really hard to get right. For example: what are the right rules to trigger a scale-up or scale-down? Should there be a limit, and if so, how high or low should it be? How will you ensure that booting a server (which is slow) is taken into account, or how will you ensure the trigger to scale up is no longer active during that process? Answers to these questions vary between “it depends” and “thoughts and prayers”.

No, there should be a better (and, more importantly, more stable and consistent) way to serve more users without it costing you the big bucks.

In our case, one of the applications serving millions of users a day had more requests wanting to read content than write content. In HTTP terms, the most occurring verb was GET. That by itself yields some possibilities, because even though some pages contain user-specific data, most of the content was the same for everyone. Previously I had used a Content Delivery Network (CDN), such as Cloudflare, to serve media (images, videos, etc.) to users. Amazon Web Services (AWS) has its own variant called CloudFront, and there are many others to be found. Perhaps this is something I could use…

All CDNs have (some variant of) caching: (temporarily) storing information so you don’t need to recalculate or retrieve that information for the next request. Even though they all have their own implementation, they (almost) all support (part of) a standard: HTTP Caching. (This is sometimes called Origin Cache Control, should you search for it at your CDN.) MDN has an article that describes many of the options; here are a few we used to cache the contents on our CDN:

# You are not allowed to cache this page. Also don't cache
# it in the browser. Each request MUST generate and serve a
# new response.
Cache-Control: no-store

# Store this in the cache for 1 year. This is used for
# files that include a hash or version identifier that
# changes if the file is changed, resulting in a new url.
#
# immutable prevents browsers from making conditional GET
# requests to check if the response has changed since it
# was cached.
#
# public ensures that the response can be shared between
# users visiting the resource. In other words: if someone
# has already requested this resource, the response given
# to them can be given to other requests for this resource.
Cache-Control: public, max-age=31536000, immutable

# Store this in the cache for at least 5 minutes, and after
# 5 minutes mark it as stale. This is used for the majority
# of pages that do get updates, but don't require a fresh
# response every second. It is often more important to show
# slightly outdated content than no content at all.
#
# This policy is actually valid for almost all content,
# including that on the rest of the world wide web.
#
# Use the stale (cached) response for at most 60 seconds
# whilst a background job refreshes the cache. After that
# it is no longer stale (for 5 minutes). In case of an
# error, use that stale (cached) response at most for one
# hour.
#
Cache-Control: private, max-age=300, stale-while-revalidate=60, stale-if-error=3600

I also found out that some implementations of HTTP Caching are not spec-compliant. They don’t follow the standard. Cloudflare, for example, does not respect the Vary header.

Only because I knew there was a standard (HTTP Caching) could I easily figure out what the implications of not respecting that header are, and that allowed me to make conscious choices.

Note: please refrain from blindly adding caching headers. Using max-age on pages that get updated can especially cause issues if you don’t take care of the references on those pages that still point to older urls (such as CSS and JavaScript files). Those files should remain available during the max-age period, after which they can be cleaned up. Jake Archibald wrote about HTTP Caching. So continue reading there, and then profit!

Oh, about that performance? Realise that adding public, max-age=60 can have a tremendous effect on the number of requests actually served by the server. With a shared cache in front, the origin only sees (at most) one request per max-age window: at max-age=5 that’s 60 / 5 = 12 requests per minute, no matter how many visitors there are:

Situation      Visitors per minute   Requests per minute
No caching     1000                  1000
max-age=5      1000                  ~12
max-age=60     1000                  ~1

Even private content marked as must-revalidate (which means the user agent, such as the browser, MUST ask the server to validate that what’s cached locally matches what’s fresh on the server), together with a max-age=0 (store in the cache as fresh for 0 seconds, then mark it as stale), can help. Because if the server responds to the revalidation check that the content is still fresh (for example using an ETag comparison), you still save on the bandwidth and time it costs to send and receive the data.

In this case the solution that solved the issue to satisfaction was the HTTP Caching standard.

c. Accepting old software whilst the API has changed (significantly)

The problem is as follows:

I worked on a system with a public API available to all kinds of other systems, one of which was a mobile SDK (Software Development Kit). Such an SDK becomes part of one or multiple apps, which are then distributed, usually via the Google Play and Apple iTunes stores. This means that when the API (the server code) changes, that change needs to be proliferated to the SDK(s), which in turn need to be updated in the apps, which then need to be published to the stores, and finally updated on the end-user devices. That takes time (and effort).

In other words: this is a problem, also known as a challenge!

Roy Fielding is a very smart person. They worked on things such as URI Templates, and were also an editor within the “Do Not Track” working group. In 2013, Roy said during a talk that the best practice for versioning an API is to not version the API as a whole. Yes, so what?… you may be thinking. Roy wrote their dissertation on Architectural Styles and the Design of Network-based Software Architectures. This is what is now commonly known as REST (Representational State Transfer), and it is the backbone of a great many modern web interfaces.

What I enjoy most about that is that the follow-up to their vision of REST was written and published as a, yes, RFC, which was accepted as a “current best practice”.

I won’t dive into too many details about what we used to solve the challenge here, as that’s worth multiple articles, but to summarize:

  • Use versioned media types (such as application/vnd.acme.config.v1+json);
  • Use content negotiation: a tactic where a server inspects the Accept header of the request and bases the response on it, often setting the Content-Type header accordingly (see the sketch after this list);
  • Add strongly-typed validation (a data format) on the server side and run responses through it before sending them to the client. Blow up and raise an error when a response doesn’t validate against a known contract (the data format). In other words: determine the shape of the response for a given Content-Type and enforce it. This has the added benefit that clients consuming the API can assume a valid response shape;
  • Support a media type for as long as possible. It’s often cheap to do so, as the only difference is usually the shape of the response, not the logic, and because of that you’ll almost never need to change the shape generation of older versions.

Solving this interesting challenge was possible using the work of multiple smart people (those contributing to the RFC that was accepted as a best practice), and because of it, five years later, we are still able to serve requests from apps that use “v1 of the API”. This also enabled the mobile app developers to upgrade gradually. For each request-response pair, they could decide whether they wanted to support new behaviour by controlling the Accept and Content-Type headers.

d. Disagreements about errors

During this challenge I was the front-ender who needed to use an API to build both a mobile app and a web application. I could influence that API, because it was being written by the client. The challenge came in three parts:

  1. The API returned a 500 Server Error if anything went wrong, even if I made a mistake.
  2. The API wasn’t consistent in how it returned errors. Sometimes it was a plain text response (with a Content-Type of application/json), sometimes a JSON object with a key "error", and sometimes something else entirely.
  3. The API returned error messages that were either not actionable (did not help to resolve them), or were completely incoherent.

The first point was really annoying. In general, when I implement front-end code that fetches content from an API, any 5xx response is treated as a server-related problem. That means that those requests are, in general, automatically retried. The client complained that they received many of these (retried) error requests.

RFC 9110 talks about semantics, including the HTTP status codes. Because the error code 500 isn’t really telling and doesn’t indicate whether the issue is temporary or permanent, and because the error message included in the response wasn’t structured or helpful, I opted to always retry requests that resulted in a 500. In contrast, I did not retry responses with any 4xx code, as those indicate that I made a mistake.

Luckily, the client was very willing to listen to feedback, and upon being shown both the standard and its descriptions, they felt they had to update their API, and started responding with both 4xx and 5xx codes where appropriate. This solved the first point and helped immensely with the third one, because even when the error message wasn’t telling, the status code gave a good idea of the issue’s category.

For the second point, neither of us had a good solution. In the end I proposed to use someone else’s ideas on the matter, namely RFC 7807: Problem Details for HTTP APIs.

This RFC introduces a standard format (registered with the media type application/problem+json), which in turn makes all error responses look more or less like this:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
    "balance": 30,
    "accounts": ["/account/12345",
                 "/account/67890"]
}

In short:

  • type: always a URI identifying (categorising) the problem. It’s strongly recommended to make this URI browsable and have it link to documentation about the error.
  • title: a human-readable summary. It can even be translated using content negotiation (by indicating which language(s) you’d like to receive using Accept-Language, so the server can provide a different response based on that request header).
  • detail: a textual explanation of the problem, also meant for consumption by humans! So no computer bleep-bloop, SQL statements, or verbatim error messages.
  • instance: a URI for this specific instance of the problem.

As we treated the title and detail fields as mandatory, the client was now also forced to write better error messages (which I helped them with).

Finally, we were able to upgrade the API gradually: once an error message was upgraded to the problem format, it had the new Content-Type. That allowed me to make a list of responses that did not yet adhere to that new content type.

In this particular case I used multiple standards to reach my goals. I did not need to argue for these solutions, because the status quo wasn’t working.

e. Accessible emails?

Finally, I have an example where a client wanted me to add a new user option. This client indicated that not all members could read their newsletter, which was digitally distributed via e-mail.

Their request was:

Please add the ability to pick between HTML email and text email in the user profile.

It was relatively straightforward to expose this option and implement it. And so I did.

Two weeks passed and I was contacted and told the feature did not work.

It did work, but a customer of the client complained: someone who had previously indicated having trouble opening the emails on some of their devices had switched to the text option, and now got text emails on all of their devices. This particular person used a text-only email client on some of their devices, but their other devices were able to display the HTML e-mails. Additionally, they told me that they had seen this issue before, but also showed me e-mails that worked just fine on both types of devices.

I looked for the applicable standard, because if multiple applications get something like this right, there almost always is a standard.

The solution was found in RFC 2046: Multipurpose Internet Mail Extensions (MIME) Part Two. It describes how e-mails are composed and how e-mail readers should treat them. It also talks about a special media type, multipart, that can be used to compose content out of multiple parts, most notably using multipart/alternative.

“Multipart/alternative” may be used, for example, to send a message in a fancy text format in such a way that it can easily be displayed anywhere.

And that matched the e-mails that did work in both types of devices.

From: sender@example.com
To: recipient@example.com
Subject: Multipart Email Example
Content-Type: multipart/alternative; boundary="boundary-string"

--boundary-string
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Plain text email here! This is used
when text/html doesn't work.

--boundary-string
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

<h1>Woopdiedoo, html works!</h1>
<p>So here's the HTML variant</p>

--boundary-string--

It was very easy to remove the option from the user profile and instead implement this multipart solution for all their emails.

Unfortunately, the only two types supported by most e-mail readers are text/plain and text/html. Recently Apple added support for a new variant, text/watch-html. It is used on the Apple Watch!

In this case you can see that the easiest solution wasn’t the right one. The standard used here covered more use cases than the simple implementation.

Web standards for web developers

You’ve scrolled past (and hopefully read) about 2300 words, and now I can tell you that you’re already using (web)standards. And I’m not even talking about HTTP, or about tools built on standards such as formatters and linters.

Do you produce HTML? Congratulations. Here is the HTML specification.

Do you write CSS? There are many CSS specifications, such as media queries, selectors, and the box model;

Interactivity through a language such as JavaScript or TypeScript? You are probably complying with the ECMAScript standard;

Calling an API from JavaScript? There is a standard both for fetch and xhr (XMLHttpRequest);

Uploading files through a standard form? You’re using multipart/form-data.

Sending e-mails? Hello, Simple Mail Transfer Protocol. Reading emails often uses either the older Post Office Protocol or the Internet Message Access Protocol. The content and headers are specified in RFC 2822: Internet Message Format, DomainKeys Identified Mail (DKIM) is RFC 6376, and sending e-mails in multiple formats such as HTML and plain text is made possible because of multipart/alternative.

Dealing with 3D graphics? The standard you’re using is probably written by and maintained by the Khronos Group.

Testing for accessibility? You’re most likely following the Web Content Accessibility Guidelines, as described in this specification by the W3C (World Wide Web Consortium).

The above is only a small part of the many, many standards you’re using, building on top of, and profiting from. Think of how URLs are composed, or what the rules are around e-mail addresses.

Join the development of our www

Was this extremely interesting to you and do you have ideas about developing the web? Have you run into an issue that a current standard doesn’t solve? There are many ways to participate.

  1. Many of the organisations partaking in writing standards can be found on sites such as GitHub.com. Examples are the W3C, WICG, WHATWG and TC39. All of these have their own contributor guidelines and rules to partake, often described in their README and/or Code of Conduct. This is one of the easiest ways to explore the current developments and discussions.
  2. The W3C has its own website. You can find all kinds of working groups and other collaborations there. Becoming a member of such a group or community is often really easy and without barriers, and once you are a member, you are invited to many if not all of their discussions.
  3. I have only briefly mentioned the Web Incubator Community Group (WICG). This group works on proposals introducing new features! Many of those proposals can be found on their website, as well as a link to their Discourse.
  4. Then there is the Web Hypertext Application Technology Working Group (WHATWG). This group maintains and develops the HTML standard! How to contribute is explained on their website.
  5. For JavaScript (and related languages) there is TC39, the committee behind the ECMAScript standard. Whilst contributions are collected via their GitHub, as listed on their website, that website also contains a lot of additional information.

Is becoming part of such a group the only way?

No! As an individual you are able to respond to issues, submit new proposals, and partake in events hosted by the IETF. There is a Datatracker collecting and keeping track of all those proposals (accepted, draft, expired, etc.), which also exposes which questions are still open. You can also find the next opportunity to ask questions synchronously, for example during an online event. I have joined and done so a few times now.

If you know people who contribute to these standards, they may tell you that it’s not always without challenges. Development takes a long time, and reaching consensus isn’t always possible. It’s sluggish machinery. On the other hand, it’s that machinery that makes it possible to enjoy all the things we enjoy digitally and online.

Final words

Using (web)standards made it possible for us (5 developers total) to deploy an application that handles 15 million requests per evening, and processes and sends 2.2 TB of data per day, with an average and median processing time of 35 ms, using a single (pretty powerful) server.

Using all these (web)standards, we were able to tell another client: “You have clients using screen readers? That’s not an issue. We won’t charge you extra for supporting that.” After which we found out that we didn’t need to build any support for it at all, as we had already used the right constructs.

By using (web)standards, endless discussions can be averted, because we (almost) always have something to fall back on and refer to, and we can find out whether we made a programming error in our code, or whether an error was made implementing a specification, for example in a browser.

You use (web)standards, and that’s smart, so keep on going.