Imagine if the web were one big API, and you could request and mash up content from anywhere.
In the years we have used the container model, every implemented site got its containers from a single source: the site itself. But Containerist can be much more powerful. It can become a mesh of distributed services and sites. Here’s how.
If you want a crash course in the containerist concept, imagine building every one of your pages with nothing but URLs. Your home page could then be described as follows:
title: Konstantin Weiss -- ctn/header ctn/articles-intro ctn/articles-tiles?latest=9 ctn/about ctn/footer
If URLs are your only means of showing content, you have no choice but to craft the content into containers. That is usually painful in the beginning, as most crash courses are. But you will quickly realise some advantages of the concept.
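As a minimal sketch, such a page description could be parsed into a title and an ordered list of container origins. The `title: … -- …` format and the `parse_page` helper are hypothetical, taken only from the example above:

```python
def parse_page(description):
    """Split a page description into its title and container origins."""
    title_part, _, containers_part = description.partition(" -- ")
    # Assumed convention: the title prefix is exactly "title: ".
    title = title_part.removeprefix("title: ")
    origins = containers_part.split()
    return title, origins

title, origins = parse_page(
    "title: Konstantin Weiss -- ctn/header ctn/articles-intro "
    "ctn/articles-tiles?latest=9 ctn/about ctn/footer"
)
```

The page itself stays dumb: it is nothing more than this ordered list of origins.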
For instance, you will be able to re-use some containers on other pages, simply by copy-pasting their origin (URL). An overview page of your articles could reuse the “articles-tiles” container and be described as follows:
title: Articles by Konstantin Weiss -- ctn/articles-title ctn/articles-tiles?year=2014 ctn/articles-tiles?year=2013 ctn/footer
Note that the articles-tiles container is the same, but is parameterised differently, so it responds to different conditions. Hence, the result will differ: only the latest 9 articles, or only articles from the year 2014.
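A sketch of how one container origin might respond to different parameters. The article data and the filtering rules here are hypothetical; only the parameter names (`latest`, `year`) come from the examples above:

```python
from urllib.parse import parse_qs, urlsplit

# Hypothetical article data, for illustration only.
ARTICLES = [
    {"title": "Piping containers", "year": 2014},
    {"title": "Container anatomy", "year": 2013},
    {"title": "The URL as interface", "year": 2013},
]

def articles_tiles(origin):
    """Return the articles selected by the origin's query string."""
    params = parse_qs(urlsplit(origin).query)
    articles = ARTICLES
    if "year" in params:
        year = int(params["year"][0])
        articles = [a for a in articles if a["year"] == year]
    if "latest" in params:
        articles = articles[: int(params["latest"][0])]
    return articles
```

The same origin, parameterised two ways, yields two different results: `articles_tiles("ctn/articles-tiles?year=2013")` selects by year, `articles_tiles("ctn/articles-tiles?latest=1")` by recency.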
As you can see, there is no computation in the page itself. It happens within the containers.
Now you have the idea of using containers. So far, they all came from the site itself. Let’s start using containers from other sites, for instance a picture service for your image gallery. An example:
title: Found by Konstantin Weiss -- ctn/sub-nav?sub=Found&url=/found http://drop.ctn.io/k/found/teasers.ctn?root=/found ctn/footer-bottom
Simple, isn’t it?
For federated containers to work, they have to have a certain anatomy:
Every container has to have a URL. I call it its origin. The origin has to be available and accessible to anyone.
Every container type has to have some front-end, e.g. HTML/CSS/JS, in order to be shown to the user and interacted with. I call it the skin. It is often a template.
But before the skin is rendered and shown, the container has to reveal its structure. This is the container’s data and interaction possibilities, provided in a machine-readable way. It should be as skin-independent and as structured as possible. Once the structure is revealed, the container can be skinned differently, depending on the site and the device.
The structure and a skin template are then rendered in order to display the container.
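The three-part anatomy can be sketched as follows. The structure format (a plain dict) and the skin (a format string) are assumptions for illustration; only the origin/structure/skin split comes from the text:

```python
container = {
    # Origin: the container's public URL.
    "origin": "ctn/about",
    # Structure: skin-independent, machine-readable data.
    "structure": {"heading": "About", "body": "I design for the web."},
}

# Skin: one of possibly many templates for the same structure.
desktop_skin = "<section><h2>{heading}</h2><p>{body}</p></section>"

def render(structure, skin):
    """Combine structure and skin into the displayed container."""
    return skin.format(**structure)

html = render(container["structure"], desktop_skin)
```

Because the structure is separate, another site or device could pass the same structure through a different skin without touching the origin.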
In one location, you can combine different sources: APIs, RSS feeds, and so on. Combined, they show condensed information that would otherwise be scattered throughout the web.
Piping is a concept used in Unix and invented by Douglas McIlroy:
A set of processes chained by their standard streams, so that the output of each process feeds directly as input to the next one.
With piping, you can take structured information from containers as input and use it to compute new output.
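Following the Unix analogy above, piping containers can be sketched like this: each step takes the previous container’s structured output as its input. The individual steps (a source, a filter, a counter) are hypothetical:

```python
def articles_source(_):
    # Stands in for a container revealing its structure.
    return [{"title": "A", "year": 2014}, {"title": "B", "year": 2013}]

def only_year_2014(articles):
    return [a for a in articles if a["year"] == 2014]

def count(articles):
    return {"count": len(articles)}

def pipe(*steps):
    """Chain steps so each one's output feeds the next one's input."""
    def run(data=None):
        for step in steps:
            data = step(data)
        return data
    return run

result = pipe(articles_source, only_year_2014, count)()
```

The final step’s output is itself structured data, so it could in turn be skinned and shown as a new container.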
With such an anatomy, you can build web pages which consist of distributed containers.
Your content then still lives on your website, but is provided by powerful external services.
All of this only works if, first of all, we follow the basic principle:
A web page is a stack of autonomous containers.
And then we apply the power of the URL and the container anatomy:
- Origin (URL)
- Structure (machine-readable)
- Skin (HTML/CSS/JS)
(cc-by-sa) since 2005 by Konstantin Weiss.