The MetaComputer™ (Part "What" of 3)

In the previous part of this series, “Why”, I talked about the need for a new computing model that simplifies modern cloud-native distributed application development. In this part, I’ll go into the details of what this new computing model should be and what it should provide.

Who Is the MetaComputer™ For?

You will have figured out by now that we’re talking about a computing model for programmers who develop cloud-native distributed applications. But what does that mean?

The MetaComputer™ would be useful for programmers who develop connected applications, i.e. applications that are accessible over the network rather than being invoked as standalone executables or libraries. The key properties of these applications are listed below, followed by a minimal example:

  • they expose their functionality through network endpoints (REST, gRPC, GraphQL, messaging…)
  • they need to respond whenever a network client invokes them
    • they may be running continuously in a wait loop
    • they may be terminated when clients disconnect and respawned when clients connect
  • they handle persistent state that outlives the application process, i.e. even if the application is terminated, the data it operates on remains available for future use
  • they run on elastic compute resources, i.e. they may be deployed to as few or as many computers¹ or VMs as needed to satisfy the number of clients invoking them, but no more
  • they are parallelised, i.e. multiple instances of these applications may be running at the same time, accessing the same persistent data
  • they usually work with other connected applications as a group to provide end user functionality (e.g. microservices)
  • they are heavily instrumented to provide computation and business execution data for subsequent analysis, i.e. they support the build-measure-learn development cycle
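
To make the category concrete, the sketch below shows a minimal connected application: a process that exposes its functionality through a network endpoint and responds whenever a client invokes it. It uses only Node.js’s built-in http module; persistence, elasticity and instrumentation are left to the surrounding platform.

    // A minimal "connected application": it exposes functionality through a
    // network endpoint and responds whenever a client invokes it, instead of
    // being linked in as a library. Uses only Node.js's built-in http module.
    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      res.writeHead(200, { "content-type": "application/json" });
      res.end(JSON.stringify({ message: "hello", path: req.url }));
    });

    // In production, many identical instances of this process would run in
    // parallel behind a load balancer, scaled up and down with client demand.
    server.listen(8080);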

What Does the MetaComputer™ Provide?

Now that we understand what kind of applications we need to be able to build on the MetaComputer™, we can start asking for functionality that the model should handle on its own rather than requiring the programmer to handle explicitly.

The MetaComputer™ must provide the following:

Service Programming Constructs

  • Native support for services² as a programming construct (a hypothetical sketch follows this list)
    • transparent network daemons, wire format conversion, stubs and skeletons, service discovery
    • user-to-service / service-to-service authentication and authorization
  • A well-defined resiliency model for interacting services
  • Programming constructs for service integration through
    • synchronous request/response
    • asynchronous request/callback
    • bidirectional streaming
    • events and actors
  • Persistence-aware programming and native support for durable datatypes
  • Language level support for key architectural patterns (e.g. CQRS, Event Sourcing, Actors, Workflows, etc.)
  • Built-in extensible observability stack with logs, monitors, profilers and pre-defined/user-defined metrics
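
None of this exists today, but to give the list some shape, here is a purely hypothetical sketch of what a service and a durable datatype might look like to the programmer. Every name in it is invented for illustration; durable is an in-memory stand-in for persistence that a real runtime would provide transparently.

    // Hypothetical sketch only: every name here is invented for illustration.
    // `durable` stands in for a persistence-aware datatype that the runtime
    // would transparently back with storage, so state survives restarts.
    function durable<T>(initial: T): { get(): T; set(value: T): void } {
      let current = initial; // in-memory stand-in for durable storage
      return { get: () => current, set: (value: T) => { current = value; } };
    }

    const visits = durable(0);

    // Declaring a service: the model, not the programmer, would derive the
    // network daemon, wire format conversion, stubs/skeletons, discovery
    // entries and auth checks from a declaration like this one.
    export const greeter = {
      // a synchronous request/response endpoint
      greet(name: string): string {
        visits.set(visits.get() + 1); // durable state outlives the process
        return `Hello, ${name}! You are visitor #${visits.get()}.`;
      },
    };

The point of the sketch is the division of labour: the programmer writes business logic and marks what is durable; everything operational is the model’s job.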

MetaComputing Environment

The primary concern of a developer is to get the code they have written to execute in a computing environment. Beyond that, they need feedback from the environment about the code’s performance, and tools to troubleshoot unexpected program behaviour.

Developers aren’t the only people who interact with a computing environment. A significant improvement over basic computing models can come in the area of state management and data-driven intelligence. The following is the minimal list of built-in features that a MetaComputer™ must support:

  • MetaCompiler™
    • Compiles MetaComputer™ programs into distributed service topologies (see the sketch after this list)
    • Provisions networks and infrastructure for the compiled service ecosystem
  • Built-in support for managing versions and environments
  • SuperState™
    • Logically consolidated persistent data storage
    • CDC and data aggregation
    • Out-of-the-box analytics over persistent data types
    • First class programming support for experimentation branches and experiment analysis
  • Managed (serverless) runtime environment with auto-scaling
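
What the MetaCompiler™ would emit is equally hypothetical. The ServiceTopology shape below is invented purely to suggest the kind of artefact that compiling the greeter sketch above could produce; provisioning networks and infrastructure for it would then be the environment’s job, not the programmer’s.

    // Hypothetical sketch: the kind of artefact a MetaCompiler™ might emit.
    // The ServiceTopology shape is invented here purely for illustration.
    interface ServiceTopology {
      services: Array<{
        name: string;
        endpoints: Array<{ path: string; style: "sync" | "async" | "stream" }>;
        scaling: { min: number; max: number }; // elastic instance bounds
        durableState: string[];                // datatypes the service owns
      }>;
    }

    // What compiling the greeter sketch above might produce:
    const topology: ServiceTopology = {
      services: [
        {
          name: "greeter",
          endpoints: [{ path: "/greet", style: "sync" }],
          scaling: { min: 0, max: 100 }, // scale to zero when idle
          durableState: ["visits"],
        },
      ],
    };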

[Figure: MetaComputer™ Schematic]

Next Generation Ecosystem

Contemporary software distribution is predominantly through source code integration (library dependencies) or as SaaS. There was a time when distributing commercial software as precompiled binary libraries was quite common, but it is no longer broadly acceptable compared to open source distribution. Unfortunately, open source doesn’t work very well for commercial vendors, which has led to monetisation either through support services – which only works for enterprise customers – or through SaaS. SaaS, however, doesn’t work very well for customers due to unnecessary network dependencies, loss of control over data fencing and single-vendor dependency.

A cloud-native programming environment brings about the opportunity to change software integration and distribution models. It can lead to a new distribution model that brings the best of both worlds (source integration and SaaS). This new model can support:

  • Secure code sharing and distribution, resilient to supply-chain attacks
  • Install tracking and monetization for authors of proprietary as well as open source code
  • Open standards for enabling competition and differentiation among providers of metacomputing environment implementations
  • Broadly uniform pricing structure across metacomputing providers with different rate cards
  • Ability to switch across metacomputing providers by simply redeploying (eliminate cloud/SaaS vendor lock-in)

What the MetaComputer™ Isn’t

At this point, one might think that we’re just looking for an enhancement of some existing technology. That is not the case, however. I ended the previous article by talking about why neither Apache Mesos nor Kubernetes is the computing model we are looking for. Here I elaborate a bit more on the conceptual differences.

Not a Supercomputer

Apache Mesos originated from the cluster computing / supercomputing / HPC background. The main concern in that environment is running singular applications that place extreme demands on computing power. Think climate simulations, particle physics and the like.

A supercomputing model, then, is all about abstracting a collection of many interconnected computers as one humongous computer with the sum total of all the constituent computers’ CPU power, memory and disk space.

That’s not the kind of abstraction that a typical service developer really wants. Most service backends are applications with modest resource requirements for a single business process, but they run many instances of the same business process in parallel.

Not an Orchestrator or IaaS

Kubernetes, Docker Swarm and similar tools are a much more appropriate interface for managing the kind of applications we’re interested in developing.

They take the underlying cluster of available computers and, rather than summing them up into one giant conglomeration, divvy up the individual computers’ resources into smaller logical chunks. Then they take the applications, along with their resource requirement specifications, and deploy those applications onto these chunks.

They go a step further and recognise that some applications can be run as “services” – a set of identical instances of the application, all running in parallel under a load balancer that evenly distributes business tasks across these instances.
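
The essence of that arrangement fits in a few lines. The sketch below is only the concept – real orchestrators balance traffic at the network level, not in application code:

    // The core idea behind a "service" in orchestrator terms: identical
    // replicas, with incoming tasks spread evenly across them.
    function roundRobin<T>(instances: T[]): () => T {
      let next = 0;
      return () => {
        const chosen = instances[next];
        next = (next + 1) % instances.length;
        return chosen;
      };
    }

    // usage: pick which replica handles each incoming request
    const pick = roundRobin(["replica-1", "replica-2", "replica-3"]);
    console.log(pick(), pick(), pick(), pick()); // replica-1 replica-2 replica-3 replica-1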

Where the orchestrators fall short is in the number of operational concerns they require developers to become aware of. That’s why I previously said that suggesting these orchestrators as a computing model is like suggesting LLVM to someone looking for a modern general-purpose programming language. The level of abstraction is too low.

Not PaaS / FaaS

Platform as a Service (PaaS) is essentially IaaS with some of the common applications and operational support made available by the PaaS provider through their APIs.

An IaaS provider gives you access to computing resources, leaving it up to you to run your application along with any supporting components it needs. A PaaS provider would, at the very minimum, include the applications required for the smooth operation of your main applications.

There’s no limit to how many supporting applications a PaaS provider may bundle. Usually, PaaS providers include popular databases that they manage entirely on their own, requiring you only to access them via APIs or special client libraries in your code. Stretching the PaaS concept all the way takes you to serverless computing, wherein you do not need to deal with any server provisioning, management or operations.

PaaS, especially serverless PaaS, offloads a lot of infrastructure provisioning and management, but unfortunately it doesn’t move the conceptual needle much beyond the IaaS model. As a developer you still need to know about all the components, and you need to wire them together in code. The more abstracted the PaaS is (e.g. FaaS), the more tedious it becomes to write applications that don’t strictly fit its model.
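
To illustrate what “wiring them together” means, here is a hypothetical fragment of PaaS-hosted code. The client interfaces are invented placeholders rather than any provider’s real SDK; the point is that the developer names, configures and connects every managed component by hand.

    // Invented placeholder interfaces, not a real provider SDK. The point:
    // the developer, not the platform, decides how the components connect.
    interface KeyValueStore {
      get(key: string): Promise<string | null>;
      put(key: string, value: string): Promise<void>;
    }
    interface Queue {
      publish(message: string): Promise<void>;
    }

    // Every managed component is passed in, checked and sequenced by hand.
    async function handleOrder(db: KeyValueStore, queue: Queue, orderId: string) {
      const order = await db.get(`order:${orderId}`); // managed database call
      if (order === null) throw new Error(`order ${orderId} not found`);
      await queue.publish(order);                     // managed queue call
      await db.put(`order:${orderId}:status`, "queued");
    }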

Not “Low Code”

As we’ve gone through the “nots” above, we’ve moved closer to the ideal MetaComputer™ model. However, we now must take a wide diversion into Low Code Land.

Low Code is a class of application development environments that provide a lot of pre-developed functionality which you simply import and customise for your needs. Any functionality not covered by the customisation still needs to be written in the regular (High Code?) way.

If programming were like sculpting, Low Code would be like assembling Lego blocks. What we’re looking for is a 3D printer with swappable materials.

What Is the MetaComputer™?

We have taken the long way round, but we can finally define what a MetaComputer™ is.

A MetaComputer™ is a computing model with an associated programming language and runtime for distributed applications. The MetaComputer™ models elastic network services and abstracts their integration, change management and infrastructure.

The layman’s definition might be:

A MetaComputer™ is a magical technology that makes modern software development 10x more effective, scalable and reliable in a cost-efficient way.

Or maybe

A computing foundation on which present-day programmers can develop skills that last through to the end of their careers, while the MetaComputer™ abstracts technology changes into ever-improving implementations of the MetaComputing Environment.

Phew! That’s a lot to process in one go! By this point, pretty much everyone I’ve talked to about this shrugs their shoulders and wonders if it’s even possible to implement such a system.

Well, that brings us to the upcoming Part “How” of this series. That’s going to take me a while to write. Meanwhile, if you believe you have some idea of how to make this happen, join the MetaComputer™ organisation on GitHub.


  1. My definition of a “computer” is an independently bootable chassis with one or more CPUs, RAM, network interface(s) and some non-volatile storage. This also includes VMs, unless the exclusion is pertinent. ↩︎

  2. A Service is an elastically scalable network API with specific calling conventions, attached SL[I|O|A]s and access permissions ↩︎
