Over the last few years I've participated in a recurring discussion on designing developer platforms, so I thought I'd write it down, share it and incrementally improve it as I gather more data points/opinions.
The boiling frog

I've used many different names over the years (e.g. use-case agnostic, unanticipated use cases, general purpose blocks, bottom up versus top down, etc.), but it always comes down to a fundamental difference in attitude between two types of architectural design: do you want to build foundations or ceilings?
Foundations are designed - from the ground up - to enable other layers to be built on top of them: they aim to be as strong as possible precisely so they can be surprised by what gets built next. Ceilings, on the other hand, are - by design - the end game: they prescribe and dictate exactly where the building ends.
The balancing act here is that, on one hand, it is easier to think (implement and sell upwards/downwards) incrementally about shiny ceilings because they enable you to visualize the end result - the windows, the doors, the white fence and the names of the dogs. Foundations, by contrast, require you to look ahead and take a leap of faith that you'll be surprised by what comes out.
On the other hand, the challenge is that if you decide to build one ceiling after the other, there is a very concrete limit to what you can achieve: you'll boil the frog, never realizing that you aren't really going to go any further. As soon as you get that ceiling installed, you can't go any taller: you've defined yourself as being exactly that tall.
Two concrete examples

Architecturally, the way this materializes in systems is their ability to surprise their designers: can (good) unanticipated things happen in the system without any recompilation of the binary?
(As usual) I'll pick on my own work for a concrete example of - in hindsight - incomplete design: schema.org Actions (although you can probably make the same argument about the Semantic Web as a whole - more on that later). After implementing one ceiling after the other, you start asking whether the entire approach isn't funky: implementing an extra use case will only incrementally move the needle.
Every implementation of a new schema.org Action type requires an entirely new product, engineering stack and developer ecosystem to be built from scratch: they are ceilings after all. Every single iteration takes years. The entire binary has to be recompiled.
I could certainly spend my next decade building use-case specific platforms for making restaurant reservations, movie ticketing, event rsvp-ing, movie watching, song listening, package tracking, etc, but if you fast forward and look backwards, that would just look like the equivalent of OpenTable, Fandango, Eventbrite, Netflix, Spotify... the system would only do what it had been programmed to do (as opposed to my previous example with the web - in that it enabled something that it wasn't initially programmed to do).
I probably boiled a couple of frogs.
Now, to be fair, the one level of serendipity that schema.org Actions enabled is provider discovery: although the use case is indeed hard-coded, the implementor/provider isn't: new providers can join without recompilation of the binary. Sometimes the use case is so powerful that it is worth the cost.
Sometimes the answer is right in front of you, but it takes time for you to see it. Gmail Actions launched with 3 actions: two use-case specific (RsvpAction and ReviewAction) and one general purpose (GoToAction). It doesn't matter how many more use-case specific actions you add, the general purpose one will always be more useful: it enables anyone to participate right now, right there, with the current rules of the game, no binaries need to be recompiled.
Still abstract. What do you mean exactly?

To build a general purpose architecture, you have to build use-case agnostic systems. It sounds obvious, but it isn't - at least it wasn't to me.
Here is my litmus test: for a new use case to occur in your developer platform, do you need to recompile any of your binaries (your client or any of your servers)? Or can it just happen organically?
It implies that you have to think hard about your architecture and look at places where you hard coded a specific use case: that's the part of your binary that will have to be recompiled when you introduce another one.
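The litmus test can be sketched in code. This is a hypothetical illustration - the type names and payload shapes below are made up, not an actual schema.org or Gmail API:

```python
# A hypothetical "ceiling" design: every use case is hard-coded into the
# binary, so supporting a new one means shipping new code.
def handle_ceiling(action: dict) -> str:
    if action["type"] == "RsvpAction":
        return f"RSVP to {action['event']}"
    elif action["type"] == "ReviewAction":
        return f"Review {action['item']}"
    # Any new use case (say, a hypothetical TrackPackageAction) falls through:
    # the binary has to be recompiled before it can be handled.
    raise NotImplementedError(action["type"])

# A hypothetical "foundation" design: the client only understands a generic
# contract (here, a label and a target URL), so a brand new use case can
# occur organically, without the client changing at all.
def handle_foundation(action: dict) -> str:
    return f"{action['name']}: open {action['url']}"

# The ceiling handles exactly what it was programmed to handle...
print(handle_ceiling({"type": "RsvpAction", "event": "launch party"}))
# ...while the foundation handles use cases its author never anticipated.
print(handle_foundation({"name": "Track my package",
                         "url": "https://example.com/track/123"}))
```

The hard-coded `if`/`elif` chain is the part of the binary that gets recompiled for every new use case; the generic handler never does.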
You have to look holistically at your product, legal and outreach frameworks too: what parts of them need to change when a new industry emerges?
Again, think the web: do you need to recompile your browser every time a new website comes up out of nowhere?
It means that you have to give up on use-case specific semantics and move down to a presentation layer. And, as a general rule of thumb, it probably also means that you have to give up on JSON (as a data format) and embrace XML (as a layout format).
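To make that contrast concrete, here is a hypothetical sketch of the same action expressed at both layers. The type names, fields and tags are illustrative, not a real schema:

```python
# Use-case specific semantics: the client must already know what a
# "reservation" means - its fields, its lifecycle, its UI - before it
# can do anything with this payload.
semantic_payload = {
    "@type": "FoodEstablishmentReservation",  # illustrative schema.org-style type
    "partySize": 2,
    "startTime": "2024-05-01T19:00",
}

# Use-case agnostic presentation: the client only knows how to render
# generic widgets (text, buttons), the way a browser renders HTML.
# A use case nobody anticipated arrives as just another layout.
presentation_payload = """
<card>
  <text>Table for 2, May 1st at 7pm</text>
  <button action="https://example.com/confirm">Confirm</button>
</card>
"""

print(semantic_payload["@type"])
print(presentation_payload.strip())
```

A client built against the first payload breaks on the next industry; a client built against the second renders whatever shows up.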
Note that there is nothing wrong with ceiling technologies per se; they just have a limit to how far they can go. They're totally fine to build - and should be built, there is lots of money to be made here - but you have to acknowledge that this is equivalent to the manager/parent dilemma: if you tell someone exactly what to do, the worst thing that can happen is that they actually listen to you.
So, where do we go from here?
I think there are a few ways to make forward progress.
One of them is to hope that deep learning will get advanced enough to reason about semantics without supervision or recompilation of the binary.
The second hope is that we'll find a universal user interface that all apps can use effectively (think natural language and chatbots).
The third is to move down to the presentation layer: think browsers and HTML more so than ontologies/vocabularies and JSON.
My intuition is that the third is where practicality meets feasibility.
Stay tuned! This project is still under construction - even after more than 5 years in the making :)