Micro-monoliths and the future of infrastructure...

I’ve got my silly opinions on this site, but generally my silly thoughts have a way of manifesting over time. In this post, I’m thinking about possible alignments of my thoughts with the broader industry. The key question is which of my ideas would be worth taking to production, given the limited innovation tokens I would have if this were leveraged in a business.

First, the Adama language itself will take a tremendous amount of time to finish in terms of quality, features, tooling, documentation, idioms, and what-not. I believe strongly in the language, but it would not make a great foundation for an enterprise business. Programming languages tend to attract religious zeal, and it tends to be best to focus on making either a great library or a service with a robust API.

Aside: speaking of religious zeal, I am now a fan of Rust. It’s great, and I am playing with WebAssembly. A personal goal this year is to be somewhat competent at both Rust and WebAssembly because the artifacts produced make me happy. Rust gives me confidence that we can have good software, and WebAssembly is the modern JVM that will be ubiquitous.

Focusing on the language could be a sketchy lifestyle business, and maybe that is OK? But... if the goal is to align with and lead the industry, then I could find success by building an infrastructure business around WebAssembly such that people could bring their own language. This would allow that business to simply focus on supporting a black box of logic, and then orchestration and state management would be the business.

This language could power a "state-machine/actor as a service" business, and it would be very similar to Durable Objects from Cloudflare. The key difference is that I’d hook up a WebSocket to it and invent yet another robust streaming protocol (sigh).

It would feel a bit like Firebase, but it would be much more expressive as you would build your own database operations within the document. Is there an open-source Firebase?

Beyond WebAssembly’s current popularity and tremendous investments, I believe that WebAssembly has a huge potential for disruption in how we think about building software. Generally speaking, the hot shit these days is Docker/k8s/micro-services. All the cool kids want their software to be super scalable, and they cargo cult practices that only make sense with hundreds of engineers. That’s fine and par for the course, but as the guy that generally cleans up messes and makes services reliable, I shudder: I have a bipolar relationship with micro-services.

On one hand, they let engineers scope and focus their world view. Microservice architectures tend to solve very real people problems, but this comes at great cost. That cost manifests in extra machines, but it also requires everyone to understand that networks are not perfectly reliable. Having a bunch of stateless services sounds great until failures mount and your reliability sucks.

The other perspective relates to monoliths, which enable engineers to build (more) reliable software (due to fewer moving pieces), but slow build times and hard release schedules make them undesirable because of people. Distributed systems are hard asynchronous systems that are exceptionally expensive in terms of hardware, but people can move fast. Monoliths have all the nice properties, but scaling them is people-expensive and requires vertical growth.

This is where WebAssembly can enter the picture as you can re-create a monolith with WebAssembly containers which can be changed during normal operation. This is a “micro-monolith” which fits as many conceptual services within a single machine as possible such that you get the people benefits of micro-services with a monolithic runtime. This thinking mirrors mainframes where hardware can be replaced while the machine is running.

Developers already contend with asynchronous interfaces for services, so it is a productivity wash with respect to hands-on-keyboard coding when compared to microservice frameworks. The upside comes from reduced operational cost and better responsiveness due to locality, as compute can chase data and caches, and this has the potential to nullify the advantages of a traditional monolith.

The potential for removing the need to serialize requests, queue a write, let the network do its magic, read from the network queue, and deserialize the request is exceptionally interesting. This reduces CPU work, decreases latency, increases reliability, and reduces heap and GC pressure. It's full of win, especially when you consider that a diverse fleet of stateless services feels exceptionally pointless and wasteful of network resources.
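As a minimal sketch (with hypothetical names), the claim is that the call site keeps the same asynchronous shape whether the "service" is a network hop or a co-located WebAssembly module, so the serialization tax can simply vanish:

// Variant 1: a traditional microservice call over the network.
async function getProfileOverNetwork(userId) {
  const response = await fetch(`https://profiles.internal/v1/users/${userId}`);
  return await response.json(); // serialize on one side, deserialize on the other
}

// Variant 2: the same logical call into an in-process WebAssembly module.
async function getProfileInProcess(profileModule, userId) {
  // no wire format, no network queue, no retry logic for a network that isn't there
  return await profileModule.getProfile(userId);
}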

Paradoxically, it would seem these ideas are not new. We can check out the actor model or Erlang/Elixir/BEAM VM for spiritual alignment. It's always good when ideas are not new as it represents harmonization, and I feel my appreciation and education deepening within this field. I've come to believe that this mode of programming is superior (as many of the Erlang zealots would promote), but it has been held back by languages. WebAssembly feels like the way to escape that dogma, and the key is to produce the platform.

Having written about DFINITY with a technical lens, I'm realizing it may be a bigger deal in the broader sense. Decentralized compute will fundamentally transform the landscape of computing and manifest infrastructure as a public utility without corporate governance. What will computing look like if both compute and storage are public utilities? What happens when racks can be installed on-premise and become assets that serve the broader community?

This is an exciting time to be alive, and the question then is what do I do? Stay tuned.

Wrapping my head around DFINITY's Internet Computer

I hope to ship a single simple game this year, so I will need some infrastructure to run the game. Alas, I'm at a crossroads.

One enjoyable path is that I could build the remaining infrastructure myself. While this would require starting a bunch of fun projects and then trying to collapse them into a single offering, it potentially creates an operational burden that I'm not willing to live with. Wisdom requires me to set aside my excitement about combining gossip failure detection with Raft and handling replication myself.

Another path is to simply not worry about scale nor care about durability. The simple game to launch only lasts for five minutes, so my core failure scenario of concern is deployment. The only thing that I should do here is to use a database like MySQL on the host instead of my ad-hoc file log. Ideally, the stateful-yet-stateless process should be able to quickly recover the state on deployment. Scale is easy enough to achieve if you are willing to have a static mapping of games to hosts, and this currently is a reasonable assumption for a single game instance. This is not ideal for the future serverless cloud offering empire, but the key is to get a tactical product off the ground and evolve. As durability demands and scale demands go up, the key is to balance the operational cost with engineering cost.

So, it's clear that I should start the model as simple as possible. Starting simple is never perfect and takes too long, but it's how to navigate uncertain waters. This is where DFINITY's Internet Computer (IC) enters the picture as it is almost the perfect infrastructure. Today, I want to give a brief overview of what the Internet Computer (IC) is, and I want to compare and contrast it with Adama. For context, I'm basing my opinions on their website as I'm a buyer looking at brochures; some of this might be inaccurate.

Canister / Document#

At the core, the Internet Computer is a series of stateful compute containers called canisters. A canister is a stateful application running WebAssembly, so it is a foundational building block. The state within the canister is made durable via replication of a log of messages (i.e., the blockchain), and the state is the deterministic result of ingesting those messages. Canisters can be assembled together into a graph to build anything at a massive scale. However, the techniques to do so are not commonly understood since the canister relies on actor-centric programming.

This is very similar to an Adama document where each document is a stateful document fed via a message queue from users, but Adama doesn't address the graph element. What does it mean for two documents to talk to each other? There are three reasons that Adama doesn't do this (yet):

  • Allowing any kind of asynchronous behavior requires error handling, as bumps in the night will increase dramatically. The moment you send a message, that message may succeed, fail outright, or time out, and it may have side effects; a design goal of Adama is to eliminate as much failure handling as possible (no disk, no network).
  • I don't believe direct message exchange is compatible with reactivity. Instead of two documents sending messages as commands to each other, I'd rather have one document export a reactive view to another. This collapses the error handling to whether the data has or hasn't arrived. Adama's design for document linking is on the back burner since I currently lack a use case within board games.
  • While humans are somewhat limited in their ability to generate messages, machines are not. It is entirely appropriate to have a queue from a device driven by a human and then leverage flow control to limit the human. Many operational issues can be traced to an unbound queue somewhere, and handling failures is non-trivial to reason about. This further informs the need to have a reactive shared view between documents since reactivity enables flow control.

From my perspective, the IC's canister concept is a foundational building block. Time and experience will build a lingo around the types of canisters. For instance, Adama fits within a "data-only canister" which only accepts messages and yields data. I'll talk more about possible types of canisters later on in the document.

Access Control / Privacy#

The beauty of the IC canister is that it can be a self-enclosed security container as well via the use of principals and strong crypto. There is no need for an external access control list. Each canister has the responsibility of authorizing what principals can see and do, and this is made possible since each canister is a combination of state and logic. The moment you separate state and logic, a metagame is required to protect that state with common access control logic or access rules.

This is what Adama does via the combination of @connected and privacy controls (without strong crypto). The Adama constructor allows the document to initialize state around a single principal (the creator) and anything within the constructor argument. The @connected event enables principals to be authorized to interact with the document or be rejected. Each message sent requires the handler to validate whether the sender can send that message.

Since the IC canister is service oriented, there is nothing very different about updating state within the container. The big difference happens on the read side, where Adama exposes all state as readable to clients up to privacy rules, and privacy rules are attached to the data. This means that Adama clients get only the data the document opens up to them, and they don't need to ask for it explicitly.

Since Adama proactively forwards updates to connected clients, this is the most significant technical hurdle to cross when using the IC. However, this is solvable in a variety of ways. It would be great to connect a socket up to a canister, and I don't see this as an impossible feat.

Cost#

The canister also has an elegant cost model around “cycles”, which Adama has in some form, except Adama can bill for partial compute. The Adama language sidesteps the halting problem by bounding cost: a finite budget is how the language guarantees that a program halts. When you run out of budget, the program stops, and you are broke. The critical technical decision that makes randomly stopping a program tolerable is that state must be transactional, so you can go back in time. Given the canister's all-or-nothing billing model, it seems that it also has transactional state.
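Here is a minimal sketch (not Adama's or the IC's actual billing code) of why a compute budget only makes sense with transactional state: snapshot first, spend the budget per step, and if it runs out, restore the snapshot as if nothing ran.

function runWithBudget(state, steps, budget) {
  const snapshot = JSON.parse(JSON.stringify(state)); // cheap stand-in for a real transactional snapshot
  let remaining = budget;
  for (const step of steps) {
    remaining -= step.cost;
    if (remaining < 0) {
      return { state: snapshot, halted: true, spent: budget }; // roll back; the whole budget is consumed
    }
    step.run(state); // mutate state only while the budget holds
  }
  return { state: state, halted: false, spent: budget - remaining };
}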

The IC canister talks abstractly about cycles, but I'm not sure how cycles relate to storage and memory costs. Perhaps cycles are associated with paging memory in and out of disk? This is where I'm not clear on how the IC canister presents itself to cost-conscious people. Furthermore, it's not clear how cost-competitive it is with centralized infrastructure.

Usage and Scale#

The IC canister is far more generic than Adama, but this is where we have to think about the roles of canisters as they evolve with people building products. With both Adama and IC's canister deeply inspired by actors, this is a shared problem about relating the ideas to people. There will be off-the-shelf canisters that can be configured since scale is part of every business's journey.

From the read side, the good news is that both Adama and canisters that only store data and respond to queries can scale to infinity and beyond. Since replication is a part of both stories, one primary cluster can handle writes while read replicas tail a stream of updates. Thus the "data primary actor" and "data replica" roles are born.

From the write side, data at scale must be sharded across a variety of "data primary actors" in a variety of ways, so even more roles are born.

First, you need a "stateless router actor" which will route requests to the appropriate "data primary actor" either with stickiness or randomness (a random router could also be called a "stateless load balancer actor").

Second, with writes splayed across different machines, you will at some point require an "aggregator actor" which will aggregate/walk the shards building something like a report, index, or what-not.

Third, replication gives you the ability to reactively inform customers of changes; an "ephemeral data replica actor" or "fan-out actor" would be a decent way for clients to listen to data changes to know when to poll or receive a delta. This is where Adama would put the privacy logic and reactivity layer. Given the infinite read scale of replicas, this also offers infinite reactive scale.

Fourth, perhaps sharding doesn't take the heat off a "data primary actor"; in that case, a "fan-in actor" (a reducer) role would be able to buffer writes and aggregate them into the main "data primary actor." The ability to fan in would enable view counters to accurately bump up with many viewers.

Fifth, beyond data, there is the problem of static assets and user-generated content. A "web server actor" makes sense for front-ends, which the platform already has. I imagine IPFS would be the place for user-generated content, so long as the IC canister and IPFS got along.

There are more actors for sure, but the key thing to note is that building a new internet requires rethinking how to transform the old world ideas into the new world. Some ideas will die, but some may be foundational.

Operations#

I haven't started to play with the DFINITY SDK yet, so I'm not sure about a few things. Adama was built around easy deployment, monitoring, and everything I know about operating services. This is where the IC canister feels murky to me. For instance, how do I deploy and validate a new actor? How does the state up-rev? Can the state down-rev during a rollback? This is why Adama has all of its state represented as a giant JSON document: it is easy to understand how data can change both forward and backward.

Deploying with Adama should be a cakewalk, and I want Adama to replicate state changes from a running Adama document into a new shadow document. I can validate the lack of crashes, behavior changes, and data changes, and manually audit things without impacting users.

I'm interested to see how DFINITY addresses the engineering process side of their offering.

Concluding Thoughts & Interesting Next Steps#

Overall, I'm excited about the technology. I'm wary of the cost and the lack of communal know-how, but these are addressable over time. I also feel like there is a synergy between how Adama and the canister think about state. For instance, I chose JSON purely as a way to move faster. However, I hope to design a binary format with similar properties. Perhaps DFINITY will release a compatible memory model that is vastly more efficient?

Ultimately, the future in this space will require first adopters willing to slog through the issues and build the vocabulary about how to create products with this offering.

A clear next step for me in 2022/2023 is to figure out a WebAssembly strategy that would enable me to offload my infrastructure to this new cloud. It makes sense that I keep my infrastructure investments low and focus on killer products to make tactical progress towards a product that could fund deeper engineering. This translates to a more casual approach to durability and scale. For durability, I'll just use block storage and hope that my blocks do not go poof. As protection against catastrophe, I'll integrate object storage into the mix and move cold state off block storage into Amazon S3 or a compatible storage tier. For availability, I'll avoid treating my servers like cattle and use a proper hand-off. For now, I just have to accept that machine failures will result in an availability loss.

JSON on the Brain

Since I am tweaking the API, I am going to write some pre-docs here on my passion for JSON. I'm only using JSON because I'm lazy and don't want to write yet another binary encoder for Java, JavaScript, and eventually Rust. I may at some point, but I'm kind of OK with JSON for now. The key thing I like about JSON is that you can treat it algebraically.

For a brief primer, check out RFC7396, as merge is an exceptionally powerful idea. At its core, merge makes the set of all objects expressible via JSON into almost an algebraic group. If you treat null and an absent field as equivalent, then you have a group.

As a math guy, these group properties tend to signal a general substrate for useful stuff. Unfortunately, RFC7396 is not efficient when it comes to arrays. In this document, I’ll introduce how Adama leverages RFC7396 in two contexts. The first context is the delta log persisted on disk or within a database, and the second context is the change log for a connected client. The second context requires a non-trivial extension.
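Before digging into those two contexts, here is a minimal sketch of RFC7396 itself as I read it: objects merge recursively, null deletes a key, and everything else (including arrays) is replaced wholesale, which is exactly the array weakness discussed next.

function mergePatch(target, patch) {
  if (patch === null || typeof patch !== "object" || Array.isArray(patch)) {
    return patch; // non-objects (and arrays) replace the target outright
  }
  const result = (target !== null && typeof target === "object" && !Array.isArray(target))
    ? { ...target }
    : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) {
      delete result[key]; // null means "remove this field"
    } else {
      result[key] = mergePatch(result[key], value);
    }
  }
  return result;
}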

Adama and the Delta Log#

The key deficit of RFC7396 is that arrays don't work well. Arrays can have elements inserted, removed, or rearranged, yet merge just treats the array like a big value. Given this, Adama doesn't use arrays within the delta log. The entire data model of Adama is a giant object with just objects and values. The way Adama represents collections is via either tables or maps, and this works well with the language-integrated query and indexing. The efficiency gains of an array are manifested at runtime, and we tolerate the representation overhead on the wire.

The motivation for this is to enable efficient replication of potentially large documents, and the value proposition of Adama is to translate domain messages from people into deltas. This means that Adama, as a document store, achieves replication efficiency matching traditional database solutions. A document in Adama behaves much like a reasonably sized database that fits within memory. Note: the log is a powerful concept.

Putting efficiency aside, this also enables a powerful feature for board games (and any application). First, the game's state can be rewound into the past at any point. Simply trim the head of the log, and rebuild the state. For usability, games powered by Adama should allow people to make mistakes and roll back time; the only downside of this feature is enabling crafty players to know what the future holds.

Applications can leverage this as well because it provides a universal undo which is also collaborative. While the rewind operation is required for board games, a collaborative undo requires the contributions of multiple people to be considered. For instance, consider the log:

| seq | who   | redo                           | undo                         |
| --- | ----- | ------------------------------ | ---------------------------- |
| 1   | Alice | {"name":"Phobia","balance":42} | {"name":null,"balance":null} |
| 2   | Carol | {"name":"Fear"}                | {"name":"Phobia"}            |
| 3   | Bob   | {"balance":100}                | {"balance":42}               |

If Alice wishes to undo her contribution, then we can take her undo and then run it forward. In this case, there is nothing to undo as Carol and Bob have made contributions which conflict. On the other hand, Carol and Bob are able to undo their messages. Fortunately, this algorithm is fairly easy. We simply match the fields within the undo to the fields within the future redo objects, and then trim down the undo operation that we will apply to the log.

| undo (at) | redo (at + k) | behavior |
| --------- | ------------- | -------- |
| object    | object        | recurse  |
| *         | not defined   | preserve |
| *         | *             | remove   |
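As a minimal sketch (my reading of the table above, not production code), trimming an undo against the redo patches that came after it looks like this: fields untouched by the future are preserved, nested objects recurse, and anything else that conflicts is dropped. Applying the trimmed undo is then just an RFC7396 merge against the current document.

function trimUndo(undo, futureRedos) {
  const trimmed = {};
  const isObject = (x) => x !== null && typeof x === "object" && !Array.isArray(x);
  for (const [key, undoValue] of Object.entries(undo)) {
    const conflicts = futureRedos.filter((redo) => isObject(redo) && key in redo).map((redo) => redo[key]);
    if (conflicts.length === 0) {
      trimmed[key] = undoValue; // nothing in the future touched this field: preserve
    } else if (isObject(undoValue) && conflicts.every(isObject)) {
      trimmed[key] = trimUndo(undoValue, conflicts); // object vs object: recurse
    }
    // otherwise: a future change conflicts with this field, so it is removed from the undo
  }
  return trimmed;
}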

Sadly, this may not be sound within all application domains. However, this is worth researching, and it may work in enough domains to not be a big problem. The plan here is to simply describe the "Undo" API and provide guidance on how to use it.

Since good can sometimes be the enemy of the perfect, an item on the roadmap is to enable message handlers to emit an undo message which can be played at a future date. This will be nicer for some scenarios, but it will not be collaborative in spirit. For instance, if Alice creates an object and Bob comes along to manipulate it, then should Alice's undo remove the object?

This is clearly a complex subject, but it's fun to play with super powers! Things that I need to read and digest:

Thankfully, I can launch with rewind which is within my domain...

Adama and Clients#

While the philosophy of having everything be a map or a value works well enough on the server, clients require the order that arrays provide. This is where we extend RFC7396.

Fundamentally, we will divide arrays into two classes. The first class is an array of values like [1,2,3], which isn’t interesting, so we lean on RFC7396 and merge that array like a giant value. The second class is an array of objects with a unique integer id field, and it’s worth noting that this is the Pareto-major case for Adama. Arrays of objects without an integer id field are treated like a value, and these map into the first class.

We will transform the array of objects into an object with a special field. The conversion process takes every element in the array, and creates a mapping between the element's id and the element. The special field with key “@o” is then the order of the elements by id. Immediately, this has the benefit that using RFC7396 will then work on data within the new object.

For example, the given array:

[
  {id:42, name:"Jeff"},
  {id:50, name:"Jake"},
  {id:66, name:"Bob"}
]

will convert into

{
  42: {name:"Jeff"},
  50: {name:"Jake"},
  66: {name:"Bob"},
  '@o': [42, 50, 66]
}

The client can now reconstruct the original array, and this enables a delta changing "Jeff" to "Jeffrey" to be represented as:

{
  42: {name:"Jeffrey"},
  '@o': [42, 50, 66]
}

This allows RFC7396 to just work, as elements can be inserted, rearranged, or removed with the overhead concentrated within the ‘@o’ field. Now, this can be optimized as well if we allow ourselves to do a bit of transcoding on the server. The server can monitor the ordering of the array before and after execution. Take the given array:

[1, 5, 6, 7, 10, 24, 50, 51, 52, 53]

as the before execution ordering, and then

[1, 5, 6, 7, 10, 100, 24, 50, 51, 52, 53]

as the after-execution ordering. Sending this large array is an exceptional waste, so here we will hack JSON and have the ‘@o’ array be heterogeneous with two types of elements. The first is the id as a value, and the second is an array representing a range. The key idea is that subsequences from the before-execution ordering can be reused. The above change would then be represented as:

[[0, 4], 100, [5,9]]

which saves tremendous space for small data changes. The values within the sub-arrays are pairs of inclusive beginning and end index values. This extension requires tweaks to RFC7396 to detect objects containing the '@o'.
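A sketch of both tricks (assuming only the id and '@o' conventions described above): convert an array of objects into the keyed form, then compress the new ordering against the previous one into inclusive [start, end] ranges.

function arrayToObject(items) {
  const result = {};
  for (const item of items) {
    const { id, ...rest } = item;
    result[id] = rest;
  }
  result['@o'] = items.map((item) => item.id);
  return result;
}

function compressOrdering(before, after) {
  const indexOf = new Map(before.map((id, index) => [id, index]));
  const compressed = [];
  let i = 0;
  while (i < after.length) {
    const start = indexOf.get(after[i]);
    if (start === undefined) {
      compressed.push(after[i]); // a brand-new id travels as a plain value
      i++;
      continue;
    }
    let end = start;
    while (i + 1 < after.length && indexOf.get(after[i + 1]) === end + 1) {
      end++;
      i++;
    }
    compressed.push(end > start ? [start, end] : after[i]); // ranges are inclusive on both sides
    i++;
  }
  return compressed;
}

// compressOrdering([1, 5, 6, 7, 10, 24, 50, 51, 52, 53], [1, 5, 6, 7, 10, 100, 24, 50, 51, 52, 53])
// => [[0, 4], 100, [5, 9]]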

Thoughts for the API#

Rebuilding the data is one thing; the other interesting potential of having clients receive deltas is that they can update their application proportionally. This is represented in the current (poorly implemented) client, and the weak point is emitting events for tree changes. It works well enough for my experiments, but I have a way to go.

As I build confidence that the code is working via all my hacking, my aspiration is to document this differential tree event firing in great detail and test it fully. Fun times!

A manifesto of user interface architectures

The chat demo scripts work with an implicit mental model using the browser, and this post aims to take that model out of my head and make it explicit. For lack of better words, this has turned into a manifesto for user interfaces.

The model#

The picture below illustrates the mental model I'm using to simplify the architecture for the board game user interface.

mental model of the ui flow

The mental model starts with you, the human user. You are alone in front of your computer playing a board game with friends online, and you see a nice image on the screen. You move your mouse or position your fingers and you interact with the picture (I). This device interaction leverages low level signals of mouse/touch down, move, and up (II).

The current system is the browser using the DOM and some JavaScript to convert those raw signals into something more useful (III). Either the signals manifest in a message being sent to the Adama server (IV.a) or update some internal state within the browser (IV.b: scrolling around or opening a combo box).

While the user is sending signals to the Adama server, other users are also sending signals to the same server (IV.c). The Adama server synchronizes its state to the client (V). The DOM combines the hidden viewer state with the server state to produce the pretty picture (VI), and this establishes the feedback loop between many people playing a board game using Adama.

It is ruthlessly simple, and we can discuss a great deal of interesting details. Join me as I wander around those details.

Limits of the network#

The boundary from client to server may be vast on this blue marble we call home. Unfortunately, the network is not perfect, and you can expect delays and disconnects when things go bump in the night. Things will go wrong, and products must tolerate it in a predictably reliable way.

We can talk about a couple of models that work well enough.

First, perhaps the state is so small that you can just send the entire state in both directions. This is great as this model can tolerate packet loss, and you can simply use UDP and just add a sequencer to your state such that you ignore old state. The key problem with this model is that it isn't a unified solution; product growth and complexity will eventually force you away from it such that you either abandon it or complement it with a TCP-like stream.

Second, once you have a TCP-like stream, you have the potential for an unbound queue. Some products and games can absorb this overhead because games end, so their bound is implicit. Heroes of the Storm and Overwatch do this as they capture replay logs, and joining a game has a non-trivial download. This feature implies usage of the command pattern such that all state changes are guarded by a queue of commands, and if you can serialize the queue then you get both replay and network synchronization features.

public interface Command {
  // apply this command to the game state
  void execute(YourGameState state);
  // serialize for the replay log and network synchronization
  String toJSON();
}

This pattern provides an exceptionally reliable framework for keeping products up to date, but you must be willing to pay the high cost during failure modes. Perhaps the protocol can help such that only missed updates are exchanged; however, the queue is still unbound. Fortunately, cloud storage is so cheap that every command humanity could ever emit could be persisted forever. Sadly, clients lack the bandwidth to reconstruct the universe, so this is not the unified solution. Furthermore, the serialization depends on the implementation of the Command; it is a common problem for games that offer replay to wipe out existing replays between versions due to versioning issues.

Another nice property of the command pattern is that you can leverage client-side prediction: you can run your client's state forward while the network delays and the server unifies input from all people. You can detect conflicts between the client and server, roll back state, accept the server's commands, and then replay actions locally. This will cause a temporary disturbance, but the state will be correct without divergence. Now, board games do not require client-side prediction, but it is something to keep in mind.
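A minimal sketch (with hypothetical API names, and commands that return a new state and carry a client-assigned seq) of that rollback-and-replay loop:

class PredictedState {
  constructor(initialState) {
    this.confirmed = initialState;  // last authoritative state from the server
    this.pending = [];              // local commands the server has not acknowledged
    this.predicted = initialState;  // what we actually render
  }
  applyLocal(command) {
    this.pending.push(command);
    this.predicted = command.execute(this.predicted); // optimistic prediction
  }
  onServerUpdate(serverState, acknowledgedSeq) {
    this.confirmed = serverState;
    this.pending = this.pending.filter((cmd) => cmd.seq > acknowledgedSeq);
    // roll back to the server's truth, then replay what the server has not applied yet
    this.predicted = this.pending.reduce((state, cmd) => cmd.execute(state), this.confirmed);
  }
}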

The unbound queue is a problem that I am interested in, especially after a decade of dealing with them in infrastructure incidents. The solution is to bound the queue. There are many ways to do this. For instance, you can just throw away items in the queue; this is not perfect, but it is a common and good approach when the contract with customers is to retry with exponential back-off. An alternative way is to leverage flow control such that each side has a limit on what it can have in flight.

For the messages sent from client to server, flow control is perfect: the client is only able to send some number of messages or bytes without the server acknowledging its commitment. Problematically, flow control breaks down from server to client for two reasons. First, there is more data flowing from server to client as it is aggregating data changes from everyone. Second, the pipe can get clogged as the command pattern requires a potentially infinite stream of updates; it is easily possible to never catch up, especially if the client is on a bad network or has a slow CPU.
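The client-to-server half is simple enough to sketch (with assumed names and an arbitrary window size): only a fixed number of messages may be in flight, and everything else waits locally until the server acknowledges.

class OutboundWindow {
  constructor(socket, maxInFlight = 8) {
    this.socket = socket;        // a WebSocket-like object with send()
    this.maxInFlight = maxInFlight;
    this.inFlight = 0;
    this.waiting = [];           // the application decides how much to queue locally
  }
  send(message) {
    if (this.inFlight < this.maxInFlight) {
      this.inFlight++;
      this.socket.send(JSON.stringify(message));
    } else {
      this.waiting.push(message);
    }
  }
  onAck() {
    this.inFlight--;
    const next = this.waiting.shift();
    if (next !== undefined) {
      this.inFlight++;
      this.socket.send(JSON.stringify(next));
    }
  }
}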

This is where my math-addled brain realizes that this is solvable with algebraic concepts. Instead of the command pattern, we can use state synchronization with state differentials. For board games (and games in general), the state is bounded (there are only so many game pieces within the box). With finite state, the worst case is just sending the entire state as fast as you can, using flow control to decide when to snapshot the state. This upper limit means that blips in the network just manifest as strange jitter.

This requires a data model that supports both differentiation and integration, so now you know why you studied Calculus as it is universally applicable. This is why Adama is using JSON without arrays along with JSON merge as the algebraic operator; arrays are problematic, and they can be overcome with specialized objects. This means that an infinite stream of updates can collapse into a finite update. This allows flow control from server to client to be effective and cheap.

Alas, this amazing property is not without cost. This capability makes it hard for people to reason about what happened as there is no convenient log anymore to tell them directly. If players care about understanding what happened because their focus drifted, then you lose the ability to have nice change logs like "player X played card Y, and player Z discarded two cards". Instead, you must construct these human friendly logs on the fly by describing the change from just the before and after states. There are no silver bullets in this life...

Now, is this important for board games? Yes and no. It mirrors the problems of using a video channel, and if there is a disruption then people can just ask "Hey, what just happened?". The technical problem is easily handled by kind humans. However, there is exceptional value in solving this problem.

How Adama addresses this network business.#

Clients send the Adama server a domain message which has the spirit of the command pattern. For instance, the message could be "pick a card" or "say hello". The key is that the message is a command the client and human care about within the product's domain.

the adama flow

The Adama server will then ingest a domain message and convert it to a data change deterministically. Not only will this happen for the current user, but the Adama server brings together many users. Adama will then emit a stream of data changes to the client which it can leverage to show the state.

The key is that the language bridges the complexity of thinking about data changes by enabling developers to think imperatively within the domain exclusively. As the language emits state changes, the platform can broker those state changes to the client using flow control. While the client is not ready to get an update, the platform can batch together changes using the algebra of JSON merge. This will ensure that clients with adverse network conditions will snap to the correct state without overcommitting to an infinite stream of updates. This model also works well with catastrophic network events like a loss of connection as the client can just reconnect and download the entire state at any moment.
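A sketch (with hypothetical names) of that batching: while a client is not ready for more data, fold each new delta into one pending patch, so a slow client receives a single collapsed update instead of an unbounded stream. Combining merge patches this way is sound under the post's assumption that fields keep static types between updates.

function combinePatches(older, newer) {
  const isObject = (x) => x !== null && typeof x === "object" && !Array.isArray(x);
  if (!isObject(older) || !isObject(newer)) {
    return newer; // the newer patch wins outright (note: deletes stay as nulls)
  }
  const result = { ...older };
  for (const [key, value] of Object.entries(newer)) {
    result[key] = (key in older) ? combinePatches(older[key], value) : value;
  }
  return result;
}

class ClientSync {
  constructor(send) {
    this.send = send;     // callback that actually writes to this client's connection
    this.ready = true;    // flow control: is the client willing to take another update?
    this.pending = null;  // deltas accumulated while the client was busy
  }
  onDelta(delta) {
    if (this.ready) {
      this.ready = false;
      this.send(delta);
    } else {
      this.pending = this.pending === null ? delta : combinePatches(this.pending, delta);
    }
  }
  onClientReady() {
    if (this.pending !== null) {
      const patch = this.pending;
      this.pending = null;
      this.send(patch); // remain "not ready" until the next acknowledgement
    } else {
      this.ready = true;
    }
  }
}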

All of the complexity and pain of using the network goes away if developers commit to holding their hands behind their back and watching the JSON document update.

Rendering combined with state changes#

With efficient network synchronization figured out, the next step is to figure out how to make a nice picture.

The chat demo uses the DOM such that state changes from the server are converted directly to DOM changes, and this works well enough. There are some gotchas. For instance, the DOM has a great deal of view state which is not convenient to reason about. An example is using innerHTML to do updates; this can be destructive to internal state like scroll bar position or text input entry.

However, we can outline the nature and shape of all state changes. As a prerequisite, we must establish that the state has a static type, so we should not expect strings to become objects and vice versa. With this, we can see state changes manifest as:

  • values within objects changing
  • objects having fields update
  • arrays having elements removed or added
  • the order of elements within an array changing

Given the simplicity of JSON, there are few possibilities when the types are consistent between updates. If types can change, then bad times are ahead as the complexity blows up!

Fortunately, JSON being a tree and the DOM being a tree means synchronization is straightforward while there is a one-to-one correspondence. When you start aggregating the state into the DOM, then things become more complex. However, you could offload that aggregation to the server via formulas. If the DOM representation looks like a mustache template over a giant JSON object, then good times are ahead.

Enter the canvas#

Alas, the DOM has limits for some of the scenarios that I was envisioning for another game where I started to get into the land of canvas. Canvas is the 2D escape hatch such that all your fantasies come alive... at great cost.

Now, I love writing code using the canvas. It takes me back to mode 13h days! Naively, it's straightforward to write a render function that converts the state into a pretty picture. However, a predictable game emerges.

First, you will want to convert raw mouse/touch events into meaningful messages. This manifests in building or buying yet another UI kit. Fortunately, for simple things this is not a huge problem and it is easy to get started. However, this is the seed to any UI empire, and it's generally better to buy a UI framework than invent yet another one.

Second, you give up on accessibility flat-out. Is this OK? Well, this is the normal and sad state of affairs for most games. However, I feel like we can do better. The key to accessibility may rest in thinking differently for different handicaps. For instance, if I can solve how to make games playable via a home assistant like Alexa, then I can bring the blind into the game. If I can simplify the user interaction to rely on a joystick and a button, then mouse precision can be factored out.

Third, there is a performance cost for rendering the entire scene based on any data change. Given today's hardware this is abated by hardware acceleration and generally is fine. However, this is a white whale as it would be nice to scope rendering updates to where data changes manifest visual changes. This requires building a renderer that can cache images between updates and then appropriately invalidate them on change, clipping rendering to a portion of the screen, not drawing things that don't intersect the rendering box, and other techniques.

Fourth, you may interact with the picture by scrolling or selecting a portion of the data to view, and this gives birth to view state. While this is not a huge problem, it requires being mindful of where the view state is stored. I believe the view state should be public, and this has the nice feature of enabling deterministic rendering. If local changes are factored out, then the render function can be a pure function with both the view state and the server state as inputs. This gives rise to the capability of unit testing entire products with automation! It turns out that private state within objects makes life hard, and object oriented programming was a giant mistake.

Fifth, rendering the JSON scene will lack smoothness. The picture will just snap to the new state instantly, and while this is great for testing and predictability, humans need animation to visualize change. This introduces the need for view state to smooth out the data changes, and this requires an update step to transform the view state based on server state and time.

the adama flow

Time must be passed in from outside the update function because this forces the update function to be deterministic in how it updates the view state. Determinism is an important quality, so this also means not leveraging randomness. This vital property enables testability of products.
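A minimal sketch (with hypothetical fields) of the separation argued for above: the view state evolves through a deterministic update step that receives time as an input, and rendering is a pure function of the view state and the server state.

function update(viewState, serverState, nowMillis) {
  const next = { ...viewState };
  // ease a token toward the position the server says it should be at
  const t = Math.min(1, (nowMillis - viewState.lastUpdate) / 250);
  next.tokenX = viewState.tokenX + (serverState.tokenX - viewState.tokenX) * t;
  next.lastUpdate = nowMillis;
  return next;
}

function render(ctx, viewState, serverState) {
  // no clock reads, no randomness: the same inputs always draw the same picture
  ctx.clearRect(0, 0, 800, 600);
  ctx.fillText(serverState.label, viewState.tokenX, 40);
}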

Regardless of the bold testability claims, the key is that the shape of the solution to all of these issues starts to evolve into the DOM. The hard lesson in life is that you can't really escape the DOM. You can call it a rendering tree, scene tree, or scene graph; however, that's just different shades of lipstick on the pig.

Future Thoughts#

Like a fool, I'm thinking at this deep level because I am toying around with the idea of building yet another cross-platform UX ecosystem... for board games... in Rust (and web). The core reason is that I really-really-really hate other UI frameworks and development models. It's worth noting that I built a similar UI ecosystem a decade ago, so this isn't my first rodeo. The target back then was the web with crazy Ajax, but my failure was a lack of accounting for SEO, which was a death blow at the time; in today's landscape of PWAs and SPAs...

Developer tools are a murky business, so ultimately this is not a business play (yet?). It is a mini empire play with a niche focus, and I realize this requires ruthless focus. However, the siren song and distraction is cross-platform lust.

First, I want the web because the web means online.

Second, I want mobile devices, and the web works here as well. However, I would prefer a native app because I'm a performance junkie.

Third, I want it to run on Nintendo Switch because I like the Switch; it is a fun mobile platform. I still need to get the indie dev kit and check out the licensing. If the switch works, then all consoles should work.

Fourth, I want it to run on TVs which are basically either mobile devices or web, so I should have my bases covered. The only caveat is that it makes sense for TV to be a pure observer, so people can congregate in the living room around the big screen.

Finally, as a fun twist, I want home assistants. I want to be able to play while I cook or while I exercise.

All these wants can be so distracting, so the hard question is how to focus. The platform-specific user interaction idioms are a nice way to tie my hands behind my back as I slam my head into the wall of building yet another framework, and the key is to keep things simple. As an example, I will need to have a library of solid components. The tactical mistake is to go forth and focus on all the low-hanging components like label or button. Rather, I should focus on a meaty domain-specific component like "hand of cards" as that would be used in card games and deck builders. The effort then is less empire building and more focused on solving a rich domain problem, and the design game that I need to play starts to take shape; this component design game has some rules.

First, the component model assumes the component will fit all activities within a box. This feels like a reasonable boundary.

Second, the input language for the control is limited and there are five forms:

  • tap(x, y)
  • move(left, up, right, down)
  • main button
  • accelerator button(...)
  • voice intent

There are interesting challenges for both d-pad movement and voice intent, and this will require an interesting design language to sort out.

Third, the component must describe or narrate the state change effectively. For instance, a "hand of cards" component would describe detecting a new card appearing as "a two of spades was drawn", while detecting the loss of a card would be described via "the ace of hearts was discarded". Now, this narrator business is an exceptionally fun challenge because language is interesting to generate. Not only does this provide a change log history, but it also enables home assistants to narrate the game.
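As a minimal sketch (with a hypothetical card model where a hand is just an array of strings), narration can fall out of diffing the previous and next hands:

function narrateHand(before, after) {
  const lines = [];
  for (const card of after) {
    if (!before.includes(card)) {
      lines.push(`the ${card} was drawn`);
    }
  }
  for (const card of before) {
    if (!after.includes(card)) {
      lines.push(`the ${card} was discarded`);
    }
  }
  return lines;
}

// narrateHand(["ace of hearts", "two of spades"], ["two of spades", "king of clubs"])
// => ["the king of clubs was drawn", "the ace of hearts was discarded"]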

The mission of the UX ecosystem is therefore to unify all these idioms into one coherent offering such that board games are easy to interact with regardless of the platform. Is this a silly endeavor? Perhaps...

Thoughts, rust, the dip, strategy forward

Well, February was a fun month, and I thought about many things. I’d like to share some of them now.

First, I’m learning Rust and I love it. It’s a good language, and it triggers the special feeling that I'm making something that will last. Part of this is from the discipline of the borrow checker where I have to be very careful with how I do memory. Sure, I could try to mimic the niceness that a garbage-collected language provides or just go sloppy with how C++ works, but that’s just digging a hole. With Rust, I have to plan, think hard, and design the shape of the beast more upfront. As an example, I’m writing a JSON parser to flex some parser skills. I’m proud to say that it was not easy, and I like the results so far.

Second, I am also looking at how to bridge between the rust code and the browser via WebAssembly. I’m impressed with the current ecosystem, and I wish I could go 100% rust. However, that would be following a siren song as there are many gaps between what the browser offers and where the rust ecosystem is. I hope to write more about these gaps in the future, but my goal is to find a reasonable balance now such that long term investments can migrate to rust over time. Simply put, I need to move faster on higher level design details without being bogged down with lack of features. The browser is a challenging foe to usurp.

Third, I realize now this project is in the dip. This is no small project, and I’ve recently re-read The Dip by Seth Godin (https://www.amazon.com/Dip-Little-Book-Teaches-Stick/dp/1591841666). It’s a small book that reminds one of the power of quitting early or being strategic to get out of the dip. The key is recognizing your situation. There is a long road ahead for this, and I have to recognize that I’m not going to get every detail right from the beginning. I’m going to make catastrophic mistakes, but I must soldier on with a strategy.

Fourth, I’m looking to leverage what I currently have built in an intermediate product. As I was opening up the repo, I realized “I forgot practically everything practical.” This is the bad part of becoming battle aloof, but it is also an opportunity to look at the project with fresh eyes. I intend to focus on some of the accoutrements that help ease people into a project because I’m finding myself in need of them; I don’t remember where the bodies are buried.

Fifth, I am getting the urge to extend the language based on Rust (Rust’s enums remind me of the good days when I was in love with OCaml). Now, this would be the worst way to spend my time, but it would be fun. Should I do it? Maybe. Life is short, so why should I feel bad about delaying shipping? This isn’t a business, and I probably am not going to make a business play for a few years. Problematically, this will only extend the dip, but what is success? Is success numbers in a bank account? Is success a Wikipedia page? Is success a huge number of stars and followers? Is success doing talks? No, I think success comes from contentedness that we spent our time well, so how do I measure that? Is it measurable?

Sixth, I feel like if I can make progress on the README, a tutorial, and maybe get cracking on polishing up the demo, then March will be time well spent, at least for the meta project. I need to think about how far and wide I can go on shipping real products. Now, problematically, I can go meta yet again and build all sorts of stuff before building the actual game, so that’s a problem. Instead, I need to focus on solving exactly one meta problem: the deck builder. If I think about limiting meta to smart components, then perhaps I won’t get too bogged down in empire building. I also need to not get bogged down in worrying too much about rendering performance or overhead...

Anyways, if you are reading this, then thank you.

Some Thinky Thoughts for 2021

So, as promised, I've been thinking about things.

First up, I am having this deep well of regret that I fucked up by targeting Java as the platform. For one, companies like Cloudflare are investing deeply in WebAssembly, and their offering around durable objects is exciting. It feels like the right infrastructure for my language to target. Now is it going to be perfect? Doubtful, but perfect is the enemy of good which is a common fight I enjoy because I’m stupid. Furthermore, DFINITY is introducing WebAssembly canisters to run in a decentralized way. IF ONLY I had the fucking wisdom to target WebAssembly, then I could leverage other people’s infrastructure. However, WebAssembly feels way too low-level for my patience, and there are a swarm of problems in targeting it directly. What if I translated to Rust?

Second, I must decide on what kind of platform I want to build and offer. A motivating concept at play is that I could see starting a company called “The Board Game Infrastructure Company” which is an exceptionally precise name given my propensity to leverage codenames for new efforts. The problem with starting an infrastructure company is people, and people are the worst. Making the internet easier is not free from burden, and I do not want to enable the infrastructure for child porn, sex trafficking, and terrorists. I certainly do not want to build and pay for an organization to handle that shit either, so that’s a bag I don’t want to own. This feeds into the regret around WebAssembly. Either I convert things to WebAssembly with hope that some sucker is going to build the right infrastructure, or I must design and build a distributed system as open source that runs easily on AWS. Fundamentally, Adama is going to look like a strange database.

Third, maybe my task is much simpler, and all I should focus on is making the language and devkit great. If I can make it simple and find the right boundary for others to consume, then they will figure out how to make durability great and do all the platform building. Take Litestream as an example; SQLite is an exceptionally well-defined product with a good boundary, and Ben is building an awesome thing to make it durable enough. This suggests that if I can find the courage to balance work on the language with making great products, then people will believe and help me. I don’t know, it sounds crazy.

Fourth, I am stuck on the UI bits because once again I am stupid. I do not like what the modern web has become, and I am not alone. Every time I start writing JavaScript or TypeScript, I realize that this is not going to last for exceptionally long. I know that I am going to fuck up and need to refactor, and it is just painful. I do not enjoy the feeling that I am marching towards despair, and I find great comfort in good languages with good tools. I am old and want things to last. I have Java from a decade ago that still works, and I like that. Therefore, I have decided to learn and master Rust, and I am having a good time. Rust is great, and I recommend it. I still have a lot to learn, but I am playing with WebAssembly and I expect to have some results this year.

Fifth, with Rust being great, I am making slow and steady progress, and I am having fun building one of my architectural katas: Visio. For instance, I wrote world bootstrap five years ago exploring JavaFX, but I dropped it because work was getting crazy. Basically, making a drag-and-drop WYSIWYG editor is a thing that I have done ever since my college days (again, I’m stupid). So, this is how I intend to master Rust, and I’m liking it. Not only am I making a drag-and-drop editor, but I’m going to make a mess and make a tiny UI framework in Rust against only canvas. Maybe then I can use Skia and have a portable UI framework between desktop, web, Android, iOS, and Switch. It is time to misbehave.

Sixth, as a recap, I finished the back-end for Battlestar Galactica last year, and it gave a rough cost for the UI that I’m not willing to pay at this moment since I can’t release it (I did hear from Fantasy Flight Games about their rules, and they can't help me). Finishing that back-end was enough to give me confidence that I have something non-trivial to contend with. However, the ultimate success of this entire endeavor is to have proof of results. This means shipping games. The good news for me is that people are insatiable for content, and I doubt the community will ever throw their hands up and say something like “these are the games for all time”. I am reading The Art of Game Design by Jesse Schell, and guess what? I am a game designer.

Seventh, as I master Rust, I will get a sense of the cost of having my language target Rust. This is probably going to take 18 months to understand since reactivity is exceptionally hard. Java is a much nicer language to target for productivity, but where I am at, this migration is probably a bad idea. Since reactivity is hard, my time will be better spent learning how to make Rust reactive in a similar enough way on the front-end.

Eighth, the next clear thing that I must do from a platform aspect is design the API and implement a Rust library. Basically, the pattern is precisely what I wanted from Lamancha. I am opening up the original design documents for Lamancha to provide context for what this meant. Two years ago, I had a vision of how to build a new type of browser, and I started to build it with SDL and then with C#. The documents are rough and incomplete, but now open. Here they are: why project lamancha, core idea, the amazing octoprotocol for making compute durable, and the ultimate design quip. I'm sharing these both to share ideas and to clean up my messy private quip.

Finally, this project is going to depend on me shipping games. This is a fact, but I also do not want to deal with licenses, so I am designing a game. I intend to take this slow as I will be building tools to help me analytically determine balance. I can already have AI players play poorly with random decisions, and I may reach out to a friend in AI to task some research students with automating play blindly. The core nature of the game is a deck builder designed to be co-op against a narrative structure, and my hope is to make a fun game for couples to play. I have the elements of the story figured out, and that is a great deal of fun. I have this epic vision of a game that can be picked up and stopped in reasonable chunks of time, but the entire game mirrors a 50-hour legacy game. Also, with it being co-op, I want it to be a serious challenge such that people have to look up strategies and socialize about them. It should feel like a raid.

Anyway, thanks for reading my thoughts. I also enabled RSS, which is available at /blog/rss.xml.

A New Year, What is Up for 2021?

It is a new year, and I have been incommunicado up until now. So, I’ll scream into the void of digital ether.

I am looking forward to 2021, so what is going to change? First, I do not intend to overpromise or commit to anything. This project is still alive, but on the back burner due to a lack of bandwidth. I believe in it, but I am focusing on self-care. This crazy pandemic has given me a great deal of time, and I am using it to work out and improve myself. I am doing so much self-care that my body is much appreciative, and I am regaining lost vitality. I feel great!

Fortunately, my habit changes went into effect on November 1st, 2020; this means these new life changes are not likely to fall into a state of failure like many new year resolutions. Instead of committing myself to more things this year, I am going to commit myself to fewer things. Sadly, for this project, this means it will appear to stall out.

Instead of committing myself to a bunch of expensive execution, this year the focus is purely on strategy and thinking this project through the following decades. For instance, I feel that I may be too myopically focused on board games. Now, I love board games, but I have gotten to an interesting point where I need to build UIs.

Alas, there are many things I do not like about the way products are built today. It bothers me, and this hampers my commercial success. Fortunately, I am not striving for commercial success this year, and this project exists mainly to amuse myself. However, if I find myself serious about this project in the coming years, I want to set myself up with at least a hope and a prayer of success (in some way).

I know that I do not like the modern web toolchain, and I am looking into making yet another box-moving-vector-editing package for making cross-platform UIs...

So, what if? What if I went a bit further and made a full-fledged editor? Adama was inspired by Excel, so what if I went all out and embraced this? What if I made something that average people could use to build products with a fully functional collaborative-real-time-version-history-document-centric backend? What if the product was as easy to use as Excel but with the rigors of modern engineering? I do not know, but I do know that I should play in this space for a bit and see how I like it.

This is a year to imagine the future!

OK, August came and went.

I did not get what I wanted done in August.

I am a failure.

OK, that's dramatic, but that's the problem with TODO lists and procrastination.

I have a lot of changes happening in my life at this moment, but I have a bit more clarity. What is clear is that I need to pivot and focus on building things rather than the meta things. The short term goal has thus shifted to: ship something useful.

  • Write something useful

So, my next update should be more meaningful.

August Goal

August is upon us, and I feel good about how July went down. While I'm not making concrete steps towards the idealized milestone of launching a game, I am having fun polishing things. It is worth noting that the polishing found game-ending bugs, so this has been a worthwhile process. August is go-time, and I need to start telling more and more people about this project. So, with that, I'll enumerate concrete goals for August. Note, this blog post will update with changes.

  • Finish migrating all code from private repo.
  • Write a better README.md
  • README: Photo of Adama
  • README: Tighten the introduction and make it clear and cheeky
  • README: Some badges?
  • README: An Animated GIF
  • README: Step by Step setup guide (With Binaries / Building Binaries)
  • README: How to use and build a product
  • Finish a reasonable first pass of documentation
  • DOCS: Each section shows working code
  • DOCS: Each section has a fast intro
  • DOCS: Each section has a bullet list of details that I hope to explore
  • Polish up error messages language
  • Write tool to dump all error messages from unit tests
  • Integrate with vscode
  • VSCODE: Super basic LSP over Socket
  • VSCODE: Syntax highlights
  • Polish up error messages alignment with document
  • Resolve all the erroneous error line numbers 0, 2.5B
  • Make Three Demos
  • Demo: Chat
  • Chat Cast Study
  • Demo: Lobby
  • Lobby Case Study
  • Demo: Hearts Game
  • Hearts Case Study
  • Release a Binary

Obviously, the best way forward after telling people is to make games with it.

Let's see that animated GIF#

This was made with PowerPoint!?!

animated gif explaining Adama

Last Performance Update (for July?)

From last time, the performance and user cost was sitting at:

| ms  | billing cost |
| --- | ------------ |
| 119 | 1938080      |

This seems good enough...

Unfortunately, this number does not account for the end-to-end story where clients get changes rather than entire copies of their personalized view. Ideally, clients will connect and get an entire snapshot of the document, and then subsequent updates will require a JSON merge to integrate. Alas, the above number represents the cost to construct a copy of the complete personalized view for every update. Outside of that measure, we then produce a delta. So, let's account for taking the difference between two versions of the view.

| ms  | billing cost |
| --- | ------------ |
| 350 | 1938080      |

Ouch! That's a punch to the gut. This makes sense since we are now computing the entire view, then comparing it to the previous view and emitting a delta. It is a bunch of work! Instead, what if we compute the delta as we go? That is, store a resident copy of the private view and then update it in a way that produces a delta as a side effect. This means we avoid constructing the JSON tree along with a bunch of memory allocations, and it also avoids the costly JSON-to-JSON comparison.
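A sketch (in JavaScript rather than the actual Java, and treating arrays as plain values) of that idea: fold the freshly computed view into the resident copy and emit the RFC7396-style delta as a side effect, rather than building two trees and diffing them.

function updateResidentView(resident, next) {
  const delta = {};
  const isObject = (x) => x !== null && typeof x === "object" && !Array.isArray(x);
  for (const key of Object.keys(next)) {
    if (isObject(resident[key]) && isObject(next[key])) {
      const child = updateResidentView(resident[key], next[key]);
      if (Object.keys(child).length > 0) {
        delta[key] = child;
      }
    } else if (resident[key] !== next[key]) {
      resident[key] = next[key];
      delta[key] = next[key];
    }
  }
  for (const key of Object.keys(resident)) {
    if (!(key in next)) {
      delete resident[key];
      delta[key] = null; // deletion travels as null, per RFC7396
    }
  }
  return delta;
}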

| ms  | billing cost |
| --- | ------------ |
| 137 | 1143089      |

That is not half bad. We are providing more value for almost the same cost. However, this work also reverted the benefits of caching views between people, which is reasonable as people may be at different points of synchronization. It also revealed the costs of working with JSON trees, so let's remove them and use streaming readers and writers everywhere!

| ms  | billing cost |
| --- | ------------ |
| 95  | 1143089      |

Yay, more work done at a lower cost is the way to go. Now, this is the last update for July, but it is also the last update on performance for a while. 95 ms is fairly good for 802 user actions over 4 users. That means we take 0.03 ms/user-action which is fast. I think this is good enough, but something else that is interesting emerged.

As part of testing, I validated that snapshots work as expected. A core value of this system is that you can stop the computation (i.e. deployment or crash) and move it to another machine without people noticing beyond a touch of latency.

The test that I did was simple. After each decision, I'd snapshot the state, then throw away everything in memory, and then reconstruct the memory and compute the next decision. This naturally slows it down, but it also illustrates the opportunity of this concept. The measurements are sadness inducing because they are bad:

| ms  | billing cost |
| --- | ------------ |
| 856 | n/a          |

So, the inability to preserve state between actions is 9x more expensive on the CPU. This aligns with my view that remote caches and fast key-value stores need to go. The document's size is between 75 and 80K, so this raised the question of how bandwidth changes between versions. Now, here is where it is easy to get confused, so let's outline the two versions with some pseudo code.

The first version is "cmp-set", and it is something that I could see being implemented via AWS Lambda, so let's look at the client-side code.

cmp-set-client.js#

while (true) {
  // somehow learn that it is the client's turn and fetch the available decisions
  var decisions = await somehowLearnOfMyTurnAndGetDecisions();
  // ask the user (somehow) to make a decision, then send the chosen one
  makeDecision(decisions[k]);
}

cmp-set-server.js#

// server routed a decision based on the person
function on_decision(player, decision) {
  // download the entire state from the store/db
  var state = download_from_db();
  // teardown/parse the state
  var vm = new VM(state);
  // send the VM the message and let its state update
  vm.send(player, decision.channel, decision);
  // pack up that state
  var next_state = vm.pack();
  // upload the state with a cmp-set (failure will throw)
  upload_to_db_if(vm.seq, next_state);
  // somehow tell the people that are blocking the progress of the game
  vm.getPeopleBlocking().map(function(person, decisions) {
    person.sendDecisions(decisions);
  });
  // give the state to the current player
  return next_state;
}

Now, this example has all sorts of problems, but it shows how a stateless server can almost be used to power an Adama experience. You can refer to the first case study to contrast the above code with how this would work in Adama (without any hacks or holes). We can leverage the above mental model to outline two useful metrics. First, there is "client bandwidth", which measures the bytes going from the server to all the clients (in aggregate). Second, there is "storage bandwidth", which measures all the bytes from the stateless server to some database or storage service. We can use the tooling to estimate these metrics for our example game. Here "cmp-set" refers to the above code, and we compare this to the Adama version.

| dimension                               | cmp-set | adama   | adama/cmp-set % |
| --------------------------------------- | ------- | ------- | --------------- |
| ms                                      | 856     | 95      | 11.1%           |
| client bandwidth                        | 24 MB   | 1.17 MB | 5%              |
| storage bandwidth                       | 32 MB   | 644 KB  | 2%              |
| % client updates that are less than 1KB | 0%      | 94.8%   |                 |

As a bonus metric, I also counted how many responses were less than 1024 bytes, which can safely fit within an Ethernet frame (1500-byte MTU). That is, close to 95% of the responses from the server can travel to the client within a single packet. This data is very promising, and it demonstrates the potential of Adama as a unified platform for building interactive products which are exceptionally cheap. I intend to dig into the 5% of responses which are larger than 1500 bytes as another source of optimization, but my gut is telling me that I need to move away from JSON and lean up the wire/storage format. This should be low on my list of priorities... We shall see.