JSON on the Brain

Since I am tweaking the API, I am going to write some pre-docs here on my passion for JSON. I'm only using JSON because I'm lazy and don't want to write yet another binary encoder for Java, JavaScript, and eventually Rust. I may at some point, but I'm kind of OK with JSON for now. The key thing I like about JSON is that you can treat it algebraically.

For a brief primer, check out RFC 7386, as merge is an exceptionally powerful idea. At its core, merge makes the set of all objects expressible via JSON an almost-algebraic group. If you interpret null and an absent field as equivalent, then you have a group.
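To make the operator concrete, here is a minimal sketch of RFC 7386 merge in JavaScript; this is illustrative rather than Adama's implementation (null removes a field, and non-objects replace the target wholesale).

// a minimal sketch of RFC 7386 merge: null deletes, objects recurse,
// everything else (including arrays) replaces the target wholesale
function merge(target, patch) {
  if (typeof patch !== 'object' || patch === null || Array.isArray(patch)) {
    return patch;
  }
  var targetIsObject = typeof target === 'object' && target !== null && !Array.isArray(target);
  var result = targetIsObject ? Object.assign({}, target) : {};
  for (var key of Object.keys(patch)) {
    if (patch[key] === null) {
      delete result[key]; // null means "remove this field"
    } else {
      result[key] = merge(result[key], patch[key]);
    }
  }
  return result;
}
// merge({name:"Phobia", balance:42}, {name:"Fear"}) yields {name:"Fear", balance:42}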

As a math guy, I read these group properties as signals of a general substrate for useful stuff. Unfortunately, RFC 7386 is not efficient when it comes to arrays. In this document, I’ll introduce how Adama leverages RFC 7386 in two contexts. The first context is the delta log persisted on disk or within a database, and the second context is the change log for a connected client. The second context requires a non-trivial extension.

Adama and the Delta Log#

The key deficit of RFC7386 is that arrays don't work well. Arrays can have elements inserted, removed, re-arranged and the merge just treats the array like a big value. Given this, Adama doesn't use arrays within the delta log. The entire data model of Adama is a giant object with just objects and values. The way Adama represents collections is via either tables or maps, and this works well with the language integrated query and indexing. The efficiency gains of an array are manifested during the runtime, and we tolerate the representation overhead on the wire.

The motivation for this is to enable efficient replication of potentially large documents, and the value proposition of Adama is to translate domain messages from people into deltas. This means that Adama, as a document store, achieves replication efficiency on par with traditional database solutions. A document in Adama behaves much like a reasonably sized database which can fit within memory. Note: the log is a powerful concept.

Putting efficiency aside, this also enables a powerful feature for board games (and any application). First, the game's state can be rewound into the past at any point. Simply trim the head of the log, and rebuild the state. For usability, games powered by Adama should allow people to make mistakes and roll back time; the only downside of this feature is enabling crafty players to know what the future holds.

Applications can leverage this as well because it provides a universal undo which is also collaborative. While the rewind operation is required for board games, a collaborative undo requires the contributions of multiple people to be considered. For instance, consider the log:

seq | who | redo | undo
--- | --- | --- | ---
1 | Alice | {"name":"Phobia","balance":42} | {"name":null,"balance":null}
2 | Carol | {"name":"Fear"} | {"name":"Phobia"}
3 | Bob | {"balance":100} | {"balance":42}

If Alice wishes to undo her contribution, then we can take her undo and then run it forward. In this case, there is nothing to undo as Carol and Bob have made contributions which conflict. On the other hand, Carol and Bob are able to undo their messages. Fortunately, this algorithm is fairly easy. We simply match the fields within the undo to the fields within the future redo objects, and then trim down the undo operation that we will apply to the log.

undo(at) | redo(at + k) | behavior
--- | --- | ---
object | object | recurse
* | not defined | preserve
* | * | remove
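
As a sketch of those rules (and only a sketch; the structure here is illustrative rather than Adama's implementation), the trimming could look like this in JavaScript:

// trim an undo patch against the redo patches that came after it:
// recurse when both sides are objects, drop fields the future redefined,
// and preserve fields the future never touched
function trimUndo(undo, futureRedos) {
  for (var redo of futureRedos) {
    for (var key of Object.keys(redo)) {
      if (!(key in undo)) continue; // the future touched a field we don't undo
      var bothObjects =
        typeof undo[key] === 'object' && undo[key] !== null &&
        typeof redo[key] === 'object' && redo[key] !== null;
      if (bothObjects) {
        trimUndo(undo[key], [redo[key]]);
        if (Object.keys(undo[key]).length === 0) delete undo[key];
      } else {
        delete undo[key]; // a later change conflicts, so this field stays put
      }
    }
  }
  return undo;
}
// trimming Alice's undo from the log above against Carol's and Bob's redos
// leaves {} -- there is nothing safe to undo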

Sadly, this may not be sound within all application domains. However, this is worth researching, and it may work in enough domains to not be a big problem. The plan here is to simply describe the "Undo" API and provide guidance on how to use it.

Since good can sometimes be the enemy of the perfect, an item on the roadmap is to enable message handlers to emit an undo message which can be played at a future date. This will be nicer for some scenarios, but it will not be collaborative in spirit. For instance, if Alice creates an object and Bob comes along to manipulate it, then should Alice's undo remove the object?

This is clearly a complex subject, but it's fun to play with super powers! There are things that I need to read and digest.

Thankfully, I can launch with rewind which is within my domain...

Adama and Clients#

While the philosophy of having everything be a map or value works well enough on the server, clients require the order that arrays provide. This is where we extend RFC 7386.

Fundamentally, we will divide arrays into two classes. The first class is an array of values like [1,2,3], which isn’t interesting, so we lean on RFC 7386 and merge that array like a giant value. The second class is an array of objects with a unique integer id field, and it’s worth noting that this is the Pareto-major case for Adama. Arrays of objects without an integer id field will be treated like a value, and these map into the first class.

We will transform the array of objects into an object with a special field. The conversion process takes every element in the array and creates a mapping between the element's id and the element. The special field with key “@o” is then the order of the elements by id. Immediately, this has the benefit that RFC 7386 merge works on the data within the new object.

For example, the given array:

[
  {id:42, name:"Jeff"},
  {id:50, name:"Jake"},
  {id:66, name:"Bob"}
]

will convert into

{
  42: {name:"Jeff"},
  50: {name:"Jake"},
  66: {name:"Bob"},
  '@o': [42, 50, 66]
}
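
A minimal sketch of that conversion (the helper name is mine, not Adama's API):

// convert an array of objects with integer id fields into the keyed-object
// form, recording the original order under '@o'
function arrayToKeyedObject(arr) {
  var obj = {};
  var order = [];
  for (var element of arr) {
    var rest = Object.assign({}, element);
    delete rest.id; // the id becomes the key rather than a field
    obj[element.id] = rest;
    order.push(element.id);
  }
  obj['@o'] = order;
  return obj;
}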

The client can now reconstruct the original array, and this enables a delta changing "Jeff" to "Jeffrey" to be represented as:

{
  42: {name:"Jeffrey"},
  '@o': [42, 50, 66]
}

This allows RFC 7386 to just work as elements can be inserted, rearranged, or removed, with the overhead concentrated within the ‘@o’ field. Now, this can be optimized as well if we allow ourselves to do a bit of transcoding on the server. The server can monitor the ordering of the array before and after execution. Take the given array:

[1, 5, 6, 7, 10, 24, 50, 51, 52, 53]

as the before execution ordering, and then

[1, 5, 6, 7, 10, 100, 24, 50, 51, 52, 53]

as the after execution ordering. Sending this large array is an exceptional waste, so here we will hack JSON and have the ‘@o’ array be heterogeneous with two types of elements. The first is an id as a value, and the second is an array representing a range. The key idea is that subsequences from the before-execution ordering can be reused. The above change would then be represented as:

[[0, 4], 100, [5,9]]

which saves tremendous space for small data changes. The values within the sub-arrays are pairs of inclusive beginning and end index values. This extension requires tweaks to RFC 7386 to detect objects containing the '@o' field.
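
For illustration, here is a minimal sketch of expanding that compact encoding on the client; the function name is hypothetical, and the ranges index into the ordering from before the change:

// expand a compact '@o' array against the prior ordering; ranges are
// inclusive [start, end] index pairs into that prior ordering
function expandOrdering(compact, before) {
  var result = [];
  for (var entry of compact) {
    if (Array.isArray(entry)) {
      for (var i = entry[0]; i <= entry[1]; i++) {
        result.push(before[i]);
      }
    } else {
      result.push(entry); // a literal id
    }
  }
  return result;
}
// expandOrdering([[0, 4], 100, [5, 9]], [1, 5, 6, 7, 10, 24, 50, 51, 52, 53])
// yields [1, 5, 6, 7, 10, 100, 24, 50, 51, 52, 53]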

Thoughts for the API#

Rebuilding the data is one thing; the other interesting potential of clients receiving deltas is that they can update their application proportionally. This is present in the poorly implemented current client, and the weak point is emitting events for tree changes. It works well enough for my experiments, but I have a way to go.

As I build confidence that the code is working via all my hacking, my aspiration is to document this differential tree event firing in great detail and test it fully. Fun times!

A manifesto of user interface architectures

The chat demo scripts work with an implicit mental model using the browser, and this post aims to take that model out of my head and make it explicit. For lack of better words, this has turned into a manifesto for user interface.

The model#

The picture below illustrates the mental model that I'm using to simplify the architecture for the board game user interface.

mental model of the ui flow

The mental model starts with you, the human user. You are alone in front of your computer playing a board game with friends online, and you see a nice image on the screen. You move your mouse or position your fingers and you interact with the picture (I). This device interaction leverages low level signals of mouse/touch down, move, and up (II).

The current system is the browser using the DOM and some JavaScript to convert those raw signals into something more useful (III). Either the signals manifest in a message being sent to the Adama server (IV.a) or update some internal state within the browser (IV.b: scrolling around or opening a combo box).

While the user is sending signals to the Adama server, other users are also sending signals to the same server (IV.c). The Adama server synchronizes its state to the client (V). The DOM combines the hidden viewer state with the server state to produce the pretty picture (VI), and this establishes the feedback loop between many people playing a board game using Adama.

It is ruthlessly simple, and we can discuss a great deal of interesting details. Join me as I wander around those details.

Limits of the network#

The boundary from client to server may be vast on this blue marble we call home. Unfortunately, the network is not perfect, and you can expect delays and disconnects when things go bump in the night. Things will go wrong, and products must tolerate it in a predictably reliable way.

We can talk about a couple of models that work well enough.

First, perhaps the state is so small that you can just send the entire state in both directions. This is great as this model can tolerate packet loss, and you can simply use UDP and add a sequencer to your state so that you ignore old state. The key problem with this model is that it isn't a unified solution; product growth and complexity will eventually force you away from it such that you either abandon it or complement it with a TCP-like stream.
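
As a minimal sketch of that first model (the names here are hypothetical), the receiving side only needs a sequence check:

// drop stale or duplicate snapshots; the newest full state always wins
var lastSeq = -1;
function onStatePacket(packet) { // packet: { seq: number, state: {...} }
  if (packet.seq <= lastSeq) {
    return; // an older packet arrived late over UDP; ignore it
  }
  lastSeq = packet.seq;
  applyFullState(packet.state); // hypothetical hook that re-renders from state
}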

Second, once you have a TCP-like stream, you have the potential for an unbounded queue. Some products and games can absorb this overhead because games end, so their bound is implicit. Heroes of the Storm and Overwatch do this as they capture replay logs, and joining a game has a non-trivial download. This feature implies usage of the command pattern such that all state changes are guarded by a queue of commands, and if you can serialize the queue then you get both replay and network synchronization features.

public interface Command {
  // apply this command to the game state
  void execute(YourGameState state);
  // serialize for replay logs and network synchronization
  String toJSON();
}

This pattern provides an exceptionally reliable framework for keeping products up to date, but you must be willing to pay the high cost during failure modes. Perhaps the protocol can help such that only missed updates are exchanged; however, the queue is still unbounded. Fortunately, cloud storage is so cheap that every command humanity could ever emit could be persisted forever. Sadly, clients lack the bandwidth to reconstruct the universe, so this is not the unified solution. Furthermore, the serialization depends on the implementation of the Command; it is a common problem for games that offer replay to wipe out existing replays between versions due to versioning issues.

Another nice property of the command pattern is that you can leverage client-side prediction: you can run your client's state forward while the network and server unify input from all people. You can detect conflicts between the client and server, roll back state, accept the server's commands, and then replay actions locally. This will cause a temporary disturbance, but the state will be correct without divergence. Now, board games do not require client-side prediction, but it is something to keep in mind.
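
A minimal sketch of that rollback-and-replay loop (the names and the pure-function shape of execute are mine, not a prescribed API):

// keep the last server-confirmed state plus locally predicted commands;
// when authoritative commands arrive, rebuild from the confirmed state and
// replay the unconfirmed local commands on top
class PredictedState {
  constructor(confirmed) {
    this.confirmed = confirmed;   // last state the server has confirmed
    this.pendingLocal = [];       // our commands, applied locally but unconfirmed
  }
  applyLocal(command) {
    this.pendingLocal.push(command);
    return this.view();
  }
  onServerCommands(serverCommands, confirmedOfOurs) {
    // the server decided the official order; advance the confirmed state
    for (var cmd of serverCommands) {
      this.confirmed = cmd.execute(this.confirmed);
    }
    // drop our commands that the server has now incorporated
    this.pendingLocal = this.pendingLocal.filter(function(cmd) {
      return confirmedOfOurs.indexOf(cmd) < 0;
    });
    return this.view();
  }
  view() {
    // roll back to the confirmed state and replay unconfirmed local commands
    var state = this.confirmed;
    for (var cmd of this.pendingLocal) {
      state = cmd.execute(state);
    }
    return state; // a brief visual correction may occur, but no divergence
  }
}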

The unbounded queue is a problem that I am especially interested in after a decade of dealing with them in infrastructure events. The solution is to bound the queue. There are many ways to do this. For instance, you can just throw items in the queue away; this is not perfect, but it is a common and workable approach when the contract with customers is to retry with exponential back-off. An alternative way is to leverage flow control such that each side has a limit on what it can have in-flight.

For the messages sent from client to server, flow control is perfect: the client is only able to send some number of messages or bytes without the server acknowledging its commitment. Problematically, flow control breaks down from server to client for two reasons. First, there is more data flowing from server to client as it is aggregating data changes from everyone. Second, the pipe can get clogged as the command pattern requires a potentially infinite stream of updates; it is easily possible to never catch up, especially if the client is on a bad network or has a slow CPU.

This is where my math-addled brain realizes that this is solvable with algebraic concepts. Instead of the command pattern, we can use state synchronization with state differentials. For board games (and games in general), the state is bounded (there are only so many game pieces within the box). With a finite state, you have a maximum transmission rate of just sending the entire state as fast as you can, using flow control to decide when to snapshot the state. This upper limit means that blips in the network just manifest as strange jitter.

This requires a data model that supports both differentiation and integration, so now you know why you studied calculus: it is universally applicable. This is why Adama is using JSON without arrays along with JSON merge as the algebraic operator; arrays are problematic, and they can be overcome with specialized objects. This means that an infinite stream of updates can collapse into a finite update. This allows flow control from server to client to be effective and cheap.
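
A minimal sketch of that collapse, assuming a merge(target, patch) helper implementing RFC 7386 JSON merge (the class and method names are illustrative):

// while the client has no flow-control credit, pending updates collapse via
// JSON merge, so a slow client only ever downloads one catch-up delta
class DeltaBatcher {
  constructor() {
    this.pending = null;
  }
  queue(delta) {
    this.pending = this.pending === null ? delta : merge(this.pending, delta);
  }
  flushTo(client) {
    if (this.pending !== null && client.ready()) {
      client.send(this.pending);
      this.pending = null;
    }
  }
}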

Alas, this amazing property is not without cost. This capability makes it hard for people to reason about what happened as there is no convenient log anymore to tell them directly. If players care about understanding what happened because their focus drifted, then you lose the ability to have nice change logs like "player X played card Y, and player Z discarded two cards". Instead, you must construct these human friendly logs on the fly by describing the change from just the before and after states. There are no silver bullets in this life...

Now, is this important for board games? Yes and no. It mirrors the problems of using a video channel, and if there is a disruption then people can just ask "Hey, what just happened?". The technical problem is easily handled by kind humans. However, there is exceptional value in solving this problem.

How Adama addresses this network business.#

Clients send the Adama server a domain message which has the spirit of the command pattern. For instance, the message could be "pick a card" or "say hello". The key is that the message is a command the client and human care about within the product's domain.

the adama flow

The Adama server will then ingest a domain message and convert it to a data change deterministically. Not only will this happen for the current user, but the Adama server brings together many users. Adama will then emit a stream of data changes to the client which it can leverage to show the state.

The key is that the language bridges the complexity of thinking about data changes by enabling developers to think imperatively within the domain exclusively. As the language emits state changes, the platform can broker those state changes to the client using flow control. While the client is not ready to get an update, the platform can batch together changes using the algebra of JSON merge. This will ensure that clients with adverse network conditions will snap to the correct state without overcommitting to an infinite stream of updates. This model also works well with catastrophic network events like a loss of connection as the client can just reconnect and download the entire state at any moment.

All of the complexity and pain of using the network goes away if developers commit to holding their hands behind their back and watching the JSON document update.

Rendering combined with state changes#

With efficient network synchronization figured out, the next step is to figure out how to make a nice picture.

The chat demo uses the DOM such that state changes from the server are converted directly to DOM changes, and this works well enough. There are some gotchas. For instance, the DOM has a great deal of view state which is not convenient to reason about. An example is using innerHTML to do updates; this can be destructive to internal state like scroll bar position or text input entry.

However, we can outline the nature and shape of all state changes. As a prerequisite, we must establish that the state has a static type, so we should not expect strings to become objects and vice versa. With this, we can see state changes manifest as:

  • values within objects changing
  • objects having fields update
  • arrays having elements removed or added
  • the order of elements within an array changing

Given the simplicity of JSON, there are few possibilities when the types are consistent between updates. If types can change, then bad times are ahead as the complexity blows up!

Fortunately, JSON being a tree and the DOM being a tree means synchronization is straightforward while there is a one-to-one correspondence. When you start aggregating the state into the DOM, then things become more complex. However, you could offload that aggregation to the server via formulas. If the DOM representation looks like a mustache template over a giant JSON object, then good times are ahead.

Enter the canvas#

Alas, the DOM has limits for some of the scenarios that I was envisioning for another game where I started to get into the land of canvas. Canvas is the 2D escape hatch such that all your fantasies come alive... at great cost.

Now, I love writing code using the canvas. It takes me back to mode 13h days! Naively, it's straightforward to write a render function that converts the state into a pretty picture. However, a predictable game emerges.

First, you will want to convert raw mouse/touch events into meaningful messages. This manifests in building or buying yet another UI kit. Fortunately, for simple things this is not a huge problem and it is easy to get started. However, this is the seed to any UI empire, and it's generally better to buy a UI framework than invent yet another one.

Second, you give up on accessibility flat-out. Is this OK? Well, this is the normal and sad state of affairs for most games. However, I feel like we can do better. The key to accessibility may rest in thinking differently for different handicaps. For instance, if I can solve how to make games playable via a home assistant like Alexa, then I can bring the blind into the game. If I can simplify the user interaction to rely on a joystick and a button, then mouse precision can be factored out.

Third, there is a performance cost for rendering the entire scene based on any data change. Given today's hardware this is abated by hardware acceleration and generally is fine. However, this is a white whale as it would be nice to scope rendering updates to where data changes manifest visual changes. This requires building a renderer that can cache images between updates and then appropriately invalidate them on change, clipping rendering to a portion of the screen, not drawing things that don't intersect the rendering box, and other techniques.

Fourth, you may interact with the picture by scrolling or selecting a portion of the data to view, and this gives birth to view state. While this is not a huge problem, it requires being mindful of where the view state is stored. I believe the view state should be public, and this has the nice feature of making rendering deterministic. If local changes are factored out, then the render function can be a pure function with both view state and the server state as inputs. This gives rise to the capability of unit testing entire products with automation! It turns out that private state within objects makes life hard, and object oriented programming was a giant mistake.

Fifth, rendering the JSON scene will lack smoothness. The picture will just snap to state instantly, and while this is great for testing and predictability, humans need animation to visualize change. This introduces the need for view state to smooth out the data changes, and this requires an update step to transform the view state based on server state and time.

the adama flow

Time must exist outside the update function because this forces the update function to be deterministic in how it updates the view state. Determinism is an important quality, so this also means not leveraging randomness. This vital property enables testability of products.
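
As a minimal sketch of keeping time outside and rendering purely (the fields and easing numbers here are made up for illustration):

// update is deterministic given (viewState, serverState, nowMillis):
// no Date.now(), no Math.random() inside
function update(viewState, serverState, nowMillis) {
  var dt = Math.min(nowMillis - viewState.lastTick, 100);
  var progress = Math.min(1, dt / 250); // ease toward the server's position
  return {
    lastTick: nowMillis,
    tokenX: viewState.tokenX + (serverState.tokenX - viewState.tokenX) * progress
  };
}

// render is a pure function of its two inputs, which is what makes
// product-level tests possible: feed in states, assert on drawing commands
function render(ctx, viewState, serverState) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.fillRect(viewState.tokenX, 20, 16, 16);
}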

Regardless of the bold testability claims, the key is that the shape of the solution to all of these issues starts to evolve into the DOM. The hard lesson in life is that you can't really escape the DOM. You can call it a rendering tree, scene tree, or scene graph; however, that's just different shades of lipstick on the pig.

Future Thoughts#

Like a fool, I'm thinking at this deep level because I am toying around with the idea of building yet another cross platform UX ecosystem... for board games... in rust (and web). The core reason is that I really-really-really hate other UI frameworks and development models. It's worth noting that I built a similar UI ecosystem a decade ago, so this isn't my first rodeo. The target back then was the web with crazy Ajax, but my failure was a lack of accounting for SEO, which was a death blow at the time; in today's landscape of PWAs and SPAs...

Developer tools are a murky business, so ultimately this is not a business play (yet?). It is a mini empire play with a niche focus, and I realize this requires ruthless focus. However, the siren song and distraction is cross-platform lust.

First, I want the web because the web means online.

Second, I want mobile devices, and the web works here as well. However, I would prefer a native app because I'm a performance junkie.

Third, I want it to run on Nintendo Switch because I like the Switch; it is a fun mobile platform. I still need to get the indie dev kit and check out the licensing. If the Switch works, then all consoles should work.

Fourth, I want it to run on TVs which are basically either mobile devices or web, so I should have my bases covered. The only caveat is that it makes sense for TV to be a pure observer, so people can congregate in the living room around the big screen.

Finally, as a fun twist, I want home assistants. I want to be able to play while I cook or while I exercise.

All these wants can be so distracting, so the hard question is how to focus. The platform specific user interaction idioms are a nice way to tie my hands behind my back as I slam my head into the wall of building yet another framework, and the key is to keep things simple. As an example, I will need to have a library of solid components. The tactical mistake is to go forth and focus on all the low hanging components like label or button. Rather, I should focus on a meaty domain specific component like "hand of cards" as that would be used in card games and deck builders. The effort then is less empire building and focused on solving a rich domain problem, and the design game that I need to play starts to take shape; this component design game has some rules.

First, the component model assumes the component will fit all activities within a box. This feels like a reasonable boundary.

Second, the input language for the control is limited and there are five forms:

  • tap(x, y)
  • move(left, up, right, down)
  • main button
  • accelerator button(...)
  • voice intent

There are interesting challenges for both d-pad movement and voice intent, and this will require an interesting design language to sort out.

Third, the component must describe or narrate the state change effectively. For instance, a "hand of cards" component would describe detecting a new card appearing as "a two of spades was drawn", while detecting the loss of a card would be described via "the ace of hearts was discarded". Now, this narrator business is an exceptionally fun challenge because language is interesting to generate. Not only does this provide a change log history, but it also enables home assistants to narrate the game.
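
A minimal sketch of that narration by diffing a hand before and after a change (the card encoding and the describe helper are hypothetical):

// narrate a hand change by comparing the before/after sets of card ids
function narrateHandChange(before, after, describe) {
  var lines = [];
  for (var id of after) {
    if (before.indexOf(id) < 0) lines.push(describe(id) + " was drawn");
  }
  for (var id of before) {
    if (after.indexOf(id) < 0) lines.push(describe(id) + " was discarded");
  }
  return lines;
}
// narrateHandChange(["AH"], ["AH", "2S"], describeCard) might yield
// ["the two of spades was drawn"]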

The mission of the UX ecosystem is therefore to unify all these idioms into one coherent offering such that board games are easy to interact with regardless of the platform. Is this a silly endeavor? Perhaps...

Thoughts, rust, the dip, strategy forward

Well, February was a fun month, and I thought about many things. I’d like to share some of them now.

First, I’m learning rust and I love it. It’s a good language, and it triggers the special feeling that I'm making something that will last. Part of this is from the discipline of the borrow checker where I have to be very careful with how I do memory. Sure, I could try to mimic the niceness that a garbage collected language provides or just go sloppy with how C++ works, but that’s just digging a hole. With rust, I have to plan, think hard, and design the shape of the beast more upfront. As an example, I’m writing a JSON parser to flex some parser skills. I’m proud to say that it was not easy, and I like the results so far.

Second, I am also looking at how to bridge between the rust code and the browser via WebAssembly. I’m impressed with the current ecosystem, and I wish I could go 100% rust. However, that would be following a siren song as there are many gaps between what the browser offers and where the rust ecosystem is. I hope to write more about these gaps in the future, but my goal is to find a reasonable balance now such that long term investments can migrate to rust over time. Simply put, I need to move faster on higher level design details without being bogged down with lack of features. The browser is a challenging foe to usurp.

Third, I realize now this project is in the dip. This is no small project, and I’ve recently re-read [The Dip](https://www.amazon.com/Dip-Little-Book-Teaches-Stick/dp/1591841666) from Seth Godin. It’s a small book that reminds one of the power of quitting early or being strategic to get out of the dip. The key is recognizing your situation. There is a long road ahead for this, and I have to recognize that I’m not going to get every detail right from the beginning. I’m going to make catastrophic mistakes, but I must soldier on with a strategy.

Fourth, I’m looking to leverage what I currently have built in an intermediate product. As I was opening up the repo, I realized “I forgot practically everything practical.” This is the bad part of becoming battle aloof, but it is also an opportunity to look at the project with fresh eyes. I intend to focus on some of the accoutrements that help ease people into a project because I’m finding myself in need of them; I don’t remember where the bodies are buried.

Fifth, I am getting the urge to extend the language based on Rust (Rust’s enums remind me of the good days when I was in love with OCaml). Now, this would be the worst way to spend my time, but it would be fun. Should I do it? Maybe. Life is short, so why should I feel bad about delaying shipping? This isn’t a business, and I probably am not going to make a business play for a few years. Problematically, this will only extend the dip, but what is success? Is success numbers in a bank account? Is success a Wikipedia page? Is success a huge number of stars and followers? Is success doing talks? No, I think success comes from contentedness that we spent our time well, so how do I measure that? Is it measurable?

Sixth, I feel like if I can make progress on the README, a tutorial, and maybe get cracking on polishing up the demo then March will be time well spent. At least for the meta project, and I need to think about how far and wide I can go on shipping real products. Now, problematically, I can go meta yet again and build all sorts of stuff before building the actual game, so that’s a problem. Instead, I need to focus on solving exactly one meta problem: the deck builder. If I think about limiting meta to smart components, then perhaps I don’t get too bogged down in empire building. I also need to not get bogged down in worrying too much about rendering performance or overhead...

Anyways, if you are reading this, then thank you.

Some Thinky Thoughts for 2021

So, as promised, I've been thinking about things.

First up, I am having this deep well of regret that I fucked up by targeting Java as the platform. For one, companies like Cloudflare are investing deep in WebAssembly, and their offering around durable objects is exciting. It feels like the right infrastructure for my language to target. Now, is it going to be perfect? Doubtful, but perfect is the enemy of good, which is a common fight I enjoy because I’m stupid. Furthermore, dfinity is introducing WebAssembly canisters to run in a decentralized way. IF ONLY I had the fucking wisdom to target WebAssembly, then I could leverage other people’s infrastructure. However, WebAssembly feels way too low-level for my patience, and there are a swarm of problems in targeting that directly. What if I translated to Rust?

Second, I must decide on what kind of platform I want to build and offer. A motivating concept at play is that I could see starting a company called “The Board Game Infrastructure Company” which is an exceptionally precise name given my propensity to leverage codenames for new efforts. The problem with starting an infrastructure company is people, and people are the worst. Making the internet easier is not free from burden, and I do not want to enable the infrastructure for child porn, sex trafficking, and terrorists. I certainly do not want to build and pay for an organization to handle that shit either, so that’s a bag I don’t want to own. This feeds into the regret around WebAssembly. Either I convert things to WebAssembly with hope that some sucker is going to build the right infrastructure, or I must design and build a distributed system as open source that runs easily on AWS. Fundamentally, Adama is going to look like a strange database.

Third, maybe my task is much simpler, and all I should focus on is making the language and devkit great. If I can make it simple and find the right boundary for others to consume, then they will figure out how to make durability great and do all the platform building. Take litestream as an example; sqlite is an exceptionally well-defined product with a good boundary, and Ben is building an awesome thing to make it durable enough. This seems to suggest that if I can find the courage to balance work on the language with making great products, then people will believe and help me. I don’t know, it sounds crazy.

Fourth, I am stuck on the UI bits because once again I am stupid. I do not like what the modern web has become, and I am not alone. Every time I start writing JavaScript or TypeScript, I realize that this is not going to last for exceptionally long. I know that I am going to fuck up and need to refactor, and it is just painful. I do not enjoy the feeling that I am marching towards despair, and I find great comfort in good languages with good tools. I am old and want things to last. I have Java from a decade ago that still works, and I like that. Therefore, I have decided to learn and master Rust, and I am having a good time. Rust is great, and I recommend it. I still have a lot to learn, but I am playing with WebAssembly and I expect to have some results this year.

Fifth, with Rust being great, I am making slow and steady progress, and I am having fun building one of my architectural katas: Visio. For instance, I wrote world bootstrap five years ago exploring JavaFX, but I dropped it because work was getting crazy. Basically, making a drag and drop WYSIWYG editor is a thing that I have done ever since my college days (again, I’m stupid). So, this is how I intend to master Rust, and I’m liking it. Not only am I making a drag and drop editor, but I’m going to make a mess and make a tiny UI framework in Rust against only canvas. Maybe then I can use Skia and have a portable UI framework between desktop, web, android, iOS, and Switch. It is time to misbehave.

Sixth, as a recap, I finished the back-end for Battlestar Galactica last year, and it gave a rough cost for the UI that I’m not willing to pay at this moment since I can’t release it (I did hear from Fantasy Flight Games about their rules, and they can't help me). Finishing that back-end was enough to give me confidence that I have something non-trivial to contend with. However, the ultimate success of this entire endeavor is to have proof of results. This means shipping games. The good news for me is that people are insatiable for content, and I doubt the community will ever throw their hands up and say something like “these are the games for all time”. I am reading The Art of Game Design by Jesse Schell, and guess what? I am a game designer.

Seventh, as I master Rust, I will get a sense of the cost of having my language target Rust. This is probably going to take 18 months to understand since reactivity is exceptionally hard. Java is a much nicer language to target for being productive, but where I am at, this migration is probably a bad idea. Since reactivity is hard, my time will be better spent learning how to make Rust reactive in a similar-enough way on the front-end.

Eighth, the next clear thing that I must do from a platform aspect is design the API and implement a Rust library. Basically, the pattern is precisely what I wanted from Lamancha. I am opening up the original design documents for Lamancha to provide context for what this meant. Two years ago, I had a vision of how to build a new type of browser, and I started to build it with SDL and then with C#. The documents are rough and incomplete, but now open. Here they are: why project lamancha, core idea, the amazing octoprotocol for making compute durable, and the ultimate design quip. I'm sharing these both to share ideas and to clean up my messy private Quip.

Finally, this project is going to depend on me shipping games. This is a fact, but I also do not want to deal with licenses, so I am designing a game. I intend to take this slow as I will be building tools to help me analytically determine balance. I can already have AI players play poorly with random decisions, and I may reach out to a friend in AI to task some research students with automating play blindly. The core nature of the game is a deck builder designed to be co-op against a narrative structure, and my hope is to make a fun game for couples to play. I have the elements of the story figured out, and that is a great deal of fun. I have this epic vision of a game that can be picked up and stopped in reasonable chunks of time, but the entire game mirrors a 50 hour legacy game. Also, with it being co-op, I want it to be a serious challenge such that people have to look up strategies and socialize about them. It should feel like a raid.

Anyway, thanks for reading my thoughts. I also enabled RSS, which is available here: /blog/rss.xml.

A New Year, What is Up for 2021?

It is a new year, and I have been incommunicado up until now. So, I’ll scream into the void of digital ether.

I am looking forward to 2021, so what is going to change? First, I do not intend to overpromise or commit to anything. This project is still alive, but on the back burner due to a lack of bandwidth. I believe in it, but I am focusing on self-care. This crazy pandemic has given me a great deal of time, and I am using it to work out and improve myself. I am doing so much self-care that my body is much appreciative, and I am regaining lost vitality. I feel great!

Fortunately, my habit changes went into effect on November 1st, 2020; this means these new life changes are not likely to fall into a state of failure like many new year resolutions. Instead of committing myself to more things this year, I am instead going to commit myself to less things. Sadly, for this project this means it will appear to stall out.

Instead of committing myself to a bunch of expensive execution, this year the focus is purely on strategy and thinking this project through the following decades. For instance, I feel that I may be too myopically focused on board games. Now, I love board games, but I have gotten to an interesting point where I need to build UIs.

Alas, there are many things I do not like about the way products are built today. It bothers me, and this hampers my commercial success. Fortunately, I am not striving for commercial success this year, and this project exists mainly to amuse myself. However, if I find myself serious about this project in the coming years, I want to set myself up with at least a hope and a prayer of success (in some way).

I know that I do not like the modern web tool chain, and I am looking into making yet another box-moving-vector-editing-package for making cross-platform UIs...

So, what if? What if I went a bit further and made a full-fledged editor? Adama was inspired by Excel, so what if I went all out and embraced this? What if I made something that average people could use to build products with a fully functional collaborative-real-time-version-history-document-centric backend? What if the product was as easy to use as Excel but with the rigors of modern engineering? I do not know, but I do know that I should play in this space for a bit and see how I like it.

This is a year to imagine the future!

OK, August came and went.

I did not get what I wanted done in August.

I am a failure.

OK, that's dramatic, but that's the problem with TODO lists and procrastination.

I have a lot of changes happening in my life at this moment, but I have a bit more clarity. What is clear is that I need to pivot and focus on building things rather than the meta things. The short term goal has thus shifted to: ship something useful.

  • Write something useful

So, my next update should be more meaningful.

August Goal

August is upon us, and I feel good about how July went down. While I'm not making concrete steps towards the idealized milestone of launching a game, I am having fun polishing things. It is worth noting that the polishing found game-ending bugs, so this has been a worthwhile process. August is go-time, and I need to start telling more and more people about this project. So, with that, I'll enumerate concrete goals for August. Note, this blog post will update with changes.

  • Finish migrating all code from private repo.
  • Write a better README.md
  • README: Photo of Adama
  • README: Tighten the introduction and make it clear and cheeky
  • README: Some badges?
  • README: An Animated GIF
  • README: Step by Step setup guide (With Binaries / Building Binaries)
  • README: How to use and build a product
  • Finish a reasonable first pass of documentation
  • DOCS: Each section shows working code
  • DOCS: Each section has a fast intro
  • DOCS: Each section has a bullet list of details that I hope to explore
  • Polish up error messages language
  • Write tool to dump all error messages from unit tests
  • Integrate with vscode
  • VSCODE: Super basic LSP over Socket
  • VSCODE: Syntax highlights
  • Polish up error messages alignment with document
  • Resolve all the erroneous error line numbers 0, 2.5B
  • Make Three Demos
  • Demo: Chat
  • Chat Case Study
  • Demo: Lobby
  • Lobby Case Study
  • Demo: Hearts Game
  • Hearts Case Study
  • Release a Binary

Obviously, the best way forward after telling people is to make games with it.

Let's see that animated GIF#

This was made with PowerPoint!?!

animated gif explaining Adama

Last Performance Update (for July?)

From last time, the performance and user cost was sitting at:

ms | billing cost
--- | ---
119 | 1938080

This seems good enough...

Unfortunately, this number does not account for the end to end story where clients get changes rather than entire copies of their personalized view. Ideally, clients will connect and get an entire snapshot of the document, then subsequent updates will require a JSON merge to integrate updates. Alas, the above number represents the cost to construct a copy of their complete personalized view for every update. Outside of that measure, we then produce a delta. So, let's account for taking the difference between two versions of the view.

ms | billing cost
--- | ---
350 | 1938080

Ouch! That's a punch to the gut. This makes sense since we are now computing the entire view, then comparing it to the previous view and emitting a delta. It is a bunch of work! Instead, what if we compute the delta as we go? That is, store a resident copy of the private view and then update it in a way which produces a delta as a side-effect. This means we avoid constructing the JSON tree along with a bunch of memory allocations. This will then also avoid the costly JSON to JSON comparison.

ms | billing cost
--- | ---
137 | 1143089

That is not half bad. We are providing more value for almost the same cost. However, this work also reverted the benefits of caching views between people, which is reasonable as people may be at different points of synchronization. It also revealed the costs of working with JSON trees, so let's remove them and use streaming readers and writers everywhere!

ms | billing cost
--- | ---
95 | 1143089

Yay, more work done at a lower cost is the way to go. Now, this is the last update for July, but it is also the last update on performance for a while. 95 ms is fairly good for 802 user actions over 4 users. That means we take 0.03 ms/user-action which is fast. I think this is good enough, but something else that is interesting emerged.

As part of testing, I validated that snapshots work as expected. A core value of this system is that you can stop the computation (i.e. deployment or crash) and move it to another machine without people noticing beyond a touch of latency.

The test that I did was simple. After each decision, I'd snapshot the state, then throw away everything in memory, and then reconstruct the memory and compute the next decision. This naturally slows it down, but it also illustrates the opportunity of this concept. The measurements are sadness inducing because they are bad:

ms | billing cost
--- | ---
856 | n/a

So, the inability to preserve state between actions is 9x more expensive on the CPU. This aligns with a view about remote caches and fast key value stores which I believe need to go. The document's size is between 75KB and 80KB, so this begged a question of how bandwidth changes between versions. Now, here is where it is easy to get confused, so let's outline the two versions with some pseudo code.

The first version is "cmp-set", and it is something that I could see being implemented via AWS Lambda, so let's look at the client side code.

cmp-set-client.js#

while (true) {
  // somehow learn that it is the client's turn and get the available decisions
  var decisions = await somehowLearnOfMyTurnAndGetDecisions();
  // ask the user (somehow) to pick one of the decisions
  var decision = await somehowAskTheUserToPick(decisions);
  // send the chosen decision to the server
  makeDecision(decision);
}

cmp-set-server.js#

// server routed a decision based on the person
function on_decision(player, decision) {
  // download the entire state from the store/db
  var state = download_from_db();
  // teardown/parse the state
  var vm = new VM(state);
  // send the VM the message and let its state update
  vm.send(player, decision.channel, decision);
  // pack up that state
  var next_state = vm.pack();
  // upload the state with a cmp-set (failure will throw)
  upload_to_db_if(vm.seq, next_state);
  // somehow tell the people that are blocking the progress of the game
  vm.getPeopleBlocking().forEach(function(blocked) {
    // each entry pairs a blocked person with the decisions they must make
    blocked.person.sendDecisions(blocked.decisions);
  });
  // give the state to the current player
  return next_state;
}

Now, this example has all sorts of problems, but it shows how a stateless server can almost be used to power an Adama experience. You can refer to the first case study to contrast the above code to how this would work with Adama (without any hacks or holes). We can leverage the above mental model to outline two useful metrics. First, there is the "client bandwidth" which measures the bytes going from the server to all the clients (in aggregate). Second, there is the "storage bandwidth" which measures all the bytes from the stateless server to some database or storage service. We can use the tooling to estimate these metrics for our example game. Here, "cmp-set" refers to the above code, and we compare this to the Adama version.

dimension | cmp-set | adama | adama/cmp-set %
--- | --- | --- | ---
ms | 856 | 95 | 11.1%
client bandwidth | 24 MB | 1.17 MB | 5%
storage bandwidth | 32 MB | 644 KB | 2%
% client updates that are less than 1KB | 0% | 94.8% | 

As a bonus metric, I also counted how many responses were less than 1024 bytes which can safely fit within an ethernet frame (1500 MTU). That is, close to 95% of the responses from the server can travel to the client within a single packet. This data is very promising, and it demonstrates the potential of Adama as a unified platform for building interactive products which are exceptionally cheap. I intend to dig into the 5% of responses which are larger than 1500 as another source of optimization, but my gut is telling me that I need to move away from JSON and lean up the wire/storage format. This should be low on my list of priorities... We shall see.

First Case Study (Chat) & Open Thoughts

The language is in a high functioning state, and it is validating the vision that I have as a madman lunatic. Today, I would like to jump ahead into the future and share the vision for how to build products with the platform that I envision. This is a useful exercise as I am in the process of defining the service beyond the series of hacks that got my prototype working. More importantly, I want to share this vision with you, and I’d also like to quickly contrast this to the past.

User Story: Chat Room#

Let us get started with a concrete use-case: a chat room. The story of this use-case is that you want to create a chat room with your friends, or you want to extend your website with a chat room feature of sorts. The key is that you want people chatting on a website. There are 3 steps to achieve this story.

Step 1: Write the entire infrastructure in Adama#

We need some infrastructure (i.e. servers) to provide the connective glue between people that use unreliable devices (they go to sleep, go through tunnels, must manage power, etc...). In this new universe, the first step is to write code within Adama, and so we must justify the reasoning for learning yet another programming language. The reasoning starts with why we write schemas or use interface description languages like Thrift: it is a good practice to lay out the state in a formal and rigorous way. Adama is no different, so we must lay out our state. The following code will define the shape of the table.

// the lines of chat
record Line {
  public client who;
  public string what;
}

// the chat table
table<Line> _chat;

Intuitively, the above defines a ledger of chat lines within the chat room. At this point, Adama escapes the confines of just laying out state into the manipulation of state. People that connect to this chat room are able to send messages to it, and we can outline a message thusly:

// how someone communicates to the document
message Say {
  string what;
}

This representation outlines what a single person can contribute to a conversation, and we can handle that message with a channel handler. A channel handler is a procedure which is available only to consumers of the document (i.e. the people in the chat) that are connected.

// the "channel" which enables someone to say something
channel speak(client who, Say what) {
// ingest the line into the chat, this is a strongly typed INSERT
_chat <- {who:who, what:what.what};
}

The above will combine the who (which is the authenticated sender) with the message what into an item within the table using the ingestion operator ("<-"). The "<-" operator is how data is inserted into tables.

What this means is that people can come together around the outlined data and contribute, but how do they read the data? Well, we expose data reactively via formulas. In this case, we exploit language integrated queries such that every update to the table will reactively update all people connected.

// emit the data out
public formula chat = iterate _chat;

This will expose a field chat to all consumers containing a list of all chat items. Note: the iterate _chat expression is shorthand for the SQL SELECT * FROM _chat. Every time the _chat table changes, the chat field will be recomputed. Now, this begs a question of how expensive this is for devices, and the answer is not at all expensive because we leverage a socket such that clients can leverage prior state to incorporate changes from the server. That is, if someone sends a message "Hello Human", then every client will get a change that looks like this on the wire:

{"chat":
{
"44":{"who":{"agent":"jeffrey","authority":"me"},"what":"Hello Human"},
"@o":[{"@r":[0,43]},"44"]
}
}

This will be discussed in further detail in a future post around "Calculus". At this point, this trifecta of (1) laying out state, (2) ingesting state from people, and (3) reactively exposing state in real-time to people via formulas is sufficient to build a wide array of products. The chat infrastructure is done!

Step 2: Upload Script to the Goat Cloud#

The above Adama script is called “chat_room.a”, and the developer can spin up their chat room infrastructure via the handy goat tool.

./goat upload --gamespace chatrooms --file chat_room.a

Here, the "gamespace" term is a play on "namespace", but it is a globally unique identifier to identify and isolate the space of all instances of a "chat_room.a" experience. It is worth noting that the script defines a class of chatrooms, and there are an infinite number of chatrooms available. Once this script is uploaded, the gamespace enables UIs to create a chatroom and connect into a chatroom.

Step 3: Build the UI#

Disclaimer: This is not a lesson about how to build pretty UIs as the UI is very ugly. This is about the way the UI is populated from the server. So, with much shame, the UI looks like this:

the way the chat UI looks

And it has the expected behavior that:

  • First person clicks "Create a New Room",
  • First person shares that "Room ID" with friends (somehow)
  • First person clicks "Connect"
  • Other people paste the id into the "Room ID" box, then they click "Connect".
  • Everyone connected can then chat by typing in the last box and hitting "Speak".

We will walk through how the Adama JavaScript Client Library enables these behaviors and fulfills the expectations. First, here is the HTML for that ugly UI:

<html>
  <head>
    <title>Chat</title>
    <script type="text/javascript" src="adama.js"></script>
  </head>
  <body>
    <button id="create_new_room">Create a New Room</button>
    <hr />
    Room ID: <input type="text" id="chat_id" />
    <button id="connect_to_room">Connect</button>
    <hr />
    <pre id="chat"></pre>
    <hr />
    <input type="text" id="say" />
    <button id="speak">Speak</button>
  </body>
</html>

With this skeleton, let's make it do stuff. First, let's connect this document to the devkit which is running locally.

var adama = new AdamaClient("localhost", 8080);
// some auth stuff ignored for now
adama.connect();

This will establish a connection to the server, but now we need to make the "create_new_room" button work. Ultimately, we are going to let the server decide the ID such that it is globally unique.

document.getElementById("create_new_room").onclick = async function() {
document.getElementById("chat_id").value =
await adama.createAndGenerateId("chat.a");
};

With this, an id will pop into the text box. Now, we need to make the "connect_to_room" button work, and this button should populate the "chat" <pre> element. Since this is going to result in a stream of document changes, we need a way to accumulate those changes in a coherent way. This is where the AdamaTree comes into play.

// first, we create a tree to receive updates
var tree = new AdamaTree();

This tree can receive updates from the server and hold a most recent copy of the document, so we can use this to attach events to learn of specific updates to the tree. Here, we will subscribe to when the "chat" field changes within the document. We will then construct the HTML for the "chat" element.

// second, we outline how changes on the tree manifest
tree.onTreeChange({chat: function(change) {
  // tree.chat has changed, so let's recompute the "chat" element's innerHTML
  var chat = change.after;
  var html = [];
  for (var k = 0; k < chat.length; k++) {
    html.push(chat[k].who.agent + ":" + chat[k].what);
  }
  // replace the chat log wholesale with the freshly built lines
  document.getElementById("chat").innerHTML = html.join("\n");
}});

This tree now needs to be connected to a specific document, and this needs to happen when the "connect_to_room" button is clicked. So, we will do just that.

// the button was clicked
document.getElementById("connect_to_room").onclick = function() {
  adama.connectTree("chat.a", document.getElementById("chat_id").value, tree);
};

This illustrates a core design pattern. Namely, you outline how changes within the tree manifest as changes in the UI. The above shows how to update the UI when the "chat" field changes, but since the "chat" field is a list, it may be prudent to update the UI based on specific list changes (i.e. append item, reordering, inserts, etc...). These specific updates are possible, but they will be saved for a later example as they introduce more DOM complexity. For now, let's focus on what happens when the "speak" button gets clicked.

document.getElementById("speak").onclick = function() {
var msg = {what:document.getElementById("say").value};
adama.send("chat.a", document.getElementById("chat_id").value, "speak", msg);
};

This will send the msg to the document via the speak channel handler. That handler will insert data, and this will invalidate the chat field which gets recomputed. This recomputation will manifest a change for all connected people, and each person will get a delta changing their copy of the chat field. This delta will trigger the above onTreeChange which will render the message.

This completes the UI, and it completes the use-case story.

A Time to Reflect#

At core, these three steps demonstrate how to create a working product which brings people together. This is just one demo of the future platform-as-a-service, and I am working on a few more. The key takeaway, I hope, is that every step is minimal and intuitive.

I am beginning to realize that the language is a red herring of sorts in terms of marketing. While the Adama language is the keystone for building back-ends which connect people together, the key is the platform and how it works to enable people to build. Put another way, it would be more prudent to talk about it as a real-time document store or database rather than a programming language. However, it feels like something new, and new stuff is hard to market.

It is very interesting to be in a state of seeing and believing in something, but it makes sense when I look back. Personally, I've been developing web properties for over twenty years, and I look at AWS as an inspiring enabler of doing more with less. However, a pattern is emerging: if you look at how what it takes to build a web property changes over time, then the following emerges.

less is more

In a sense, things are getting better on many dimensions, and the key is that our progress as a species depends on a persistence to make things better by enabling more with less. I'm a bit biased, but there is something here. I'm excited to wrestle with it.

Performance Updates & Good-Enough?!?

This update spans events over four days of joyful suffering.

Day #1#

I’m dumb. My benchmark code would randomly inject 1 ms delays due to a stupid spin-lock type concept. This was introduced quickly because there is a scheduler which scheduled future state transitions. I fixed the state transition for 0 ms delays to be... well... instant, and the world is much better. Fixing just this took our previously reported 550 ms down to 350 ms, which is a massive reduction, but it also changes the perception of the impact of the prior work.

ms | billing cost
--- | ---
350 | 2328882

We can retrospectively re-evaluate the impact and adjust expectations. For instance, if 200 ms is pure testing overhead, then we can factor that in. That is, instead of comparing 350 to 740 (where performance began), we can compare 350 to 540 to measure the actual impact of the optimization work on production scenarios. While the testing environment saw a whopping 53% drop in time, the production environment would only see a 35% drop. While this is sad, I'm excited to see more testing happen faster.

Moral of the day: measuring is hard.

Day #2#

The day started with a focus on improving testing, finding issues, and resolving some long-standing bugs and a swamp of TODOs. I was very happy with the 90%+ unit test coverage on the runtime, but the coverage improved to the point where I had to deal with the cruft and tech debt because I didn't want to write unit tests which would become bunk. It was clear that I needed to invest in sorting out the persistence model and making it rigorous, and it was time to go all in on the "delta-model". Here, I want to spend a bit of time talking about the "delta-model" and why it is so important.

At core, we must use a distributed system for durability, and a key primitive to leverage is "compare and set" which enables multiple parties to atomically agree on a consistent value. Adama was designed to exploit this, and we can describe the entire game that Adama plays by looking at how messages get integrated into a document. The below code lays out the game.

function integrate_message(msg, key) {
  // download the document from the store
  let [seq, doc] = get_document(key);
  // compute the new document
  let new_doc = do_compute(doc, msg);
  // leverage compare and set to share the new document
  if (!put_doc_if_seq_matches(key, seq, new_doc)) {
    // it failed (someone else won the race), so try again
    integrate_message(msg, key);
  }
}

Half of Adama is designed to make do_compute really easy to build with some special sauce between multiple users. An absolute key requirement for do_compute is that it must be a side-effect-free, honest-to-goodness mathematical function; otherwise the system becomes unpredictable. Now, Adama has been at this stage for a while via a series of shadow documents. The objects that Adama's code would interact with were backed by a reactive object, and reactive objects were backed by JSON objects.

pure delta mode

The way this worked is that all changes flow to the JSON, and the entire role of the reactive objects was to provide a cached copy which grants the ability to revert changes. That is, the entire document is transactional. For instance, if a message handler manipulates the document and then aborts for some reason, then those manipulations are rolled back. This ability to roll back is exceptionally powerful, and we will see how in a moment. The work at hand was to simply throw away the shadow copies and have just one giant reactive tree which could produce a delta on a transactional commit. Since JSON is the core format, we emit JSON deltas using RFC7386. This required more code generation and a lot of work, but it was producing deltas. But, we return to: why deltas? The core reason to leverage deltas is physics, and physics is a harsh mistress.
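
As a rough illustration (these are not Adama's actual reactive classes; the names and shapes here are mine), a reactive leaf can cache its last committed value, revert to it on abort, or write its change into a merge-patch object on commit:

function ReactiveValue(name, initial) {
  this.name = name;     // field name within the parent object
  this.prior = initial; // last committed value
  this.value = initial; // current (possibly dirty) value
}
ReactiveValue.prototype.set = function(next) {
  this.value = next;
};
ReactiveValue.prototype.revert = function() {
  // abort: throw away the uncommitted change
  this.value = this.prior;
};
ReactiveValue.prototype.commit = function(delta) {
  // commit: record the change in the merge patch and accept it as the new prior
  if (this.value !== this.prior) {
    delta[this.name] = this.value;
    this.prior = this.value;
  }
};

A document is then a tree of such nodes, and committing the root walks the tree to assemble one merge patch for the log.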

Namely, what happens as document size increases with a compare and set system?

  • The time for both get_document and put_doc_if_seq_matches increases due to network cost.
  • Whoever is executing get_document must deserialize, execute do_compute, then serialize for put_doc_if_seq_matches; these all cost resources which grow with document size.
  • As time increases, the probability of conflict for put_doc_if_seq_matches rises, which adds retries and more time, which cascades into yet more time and cost.
  • Oh, and people are constantly downloading the document, so whatever shard is providing get_document undergoes more contention simply to share updates with other people. (And how they know when to update is an entirely different system, usually using pub/sub.)

There be dragons in all services, but the short answer is that these compare-and-set services work well for small, fixed-size objects. These systems tend to have caps on document size. For instance, Amazon DynamoDB has a low 400KB limit, and I almost guarantee that there is a frustrated principal at Amazon who thinks that value is too high. Now, there are a variety of ways of working within these limits, but they tend to shift the cost to more resources, especially on the network. Instead, Adama proposes a core shift towards a more database-inspired design of using a logger.

Databases have the advantage that their updates can be replicated rather than the direct data changes, and this enables databases to become massive! This is the property we want to exploit, except we don't want to hand-describe changes to the data. Adama's role is to deal with user messages, integrate them into the document in a natural way, then emit a data change seamlessly to the user. This is game changing, and the key is to understand what runs on which machine. Abstractly, I see a mathematical foundation to be exploited.
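
To make the log idea concrete, here is a minimal sketch (mine, not Adama's internals) of RFC7386 merge plus rebuilding a document by folding the delta log forward:

function merge(target, patch) {
  // per RFC7386: anything that is not an object replaces the target wholesale
  if (patch === null || typeof patch !== "object" || Array.isArray(patch)) {
    return patch;
  }
  var result = (target !== null && typeof target === "object" && !Array.isArray(target)) ? target : {};
  for (var key in patch) {
    if (patch[key] === null) {
      delete result[key]; // null means remove the field
    } else {
      result[key] = merge(result[key], patch[key]);
    }
  }
  return result;
}

// rebuild the document state by replaying the persisted deltas in order
function rehydrate(deltas) {
  return deltas.reduce(merge, {});
}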

pure delta mode

This is the foundation for an entirely new service, but it requires clients to maintain state. That is, clients must leverage stateful practices like using a WebSocket. Typically, stateful approaches have shortcomings as application services tend to have gnarly problems, but Adama overcomes them. Let's explore what integrate_message becomes in this new world.

function integrate_message2(doc_reactive_cache, msg, key) {
  // pull any updates into our local cache
  sync_document(doc_reactive_cache, key);
  // compute a delta (half the purpose of Adama):
  // -- 1: do compute with side-effects
  // -- 2: roll back the side-effects
  let delta = compute_prepare_delta(doc_reactive_cache, msg, key);
  if (!append_delta(key, delta)) {
    // well, we already failed, so let's maybe check back with the caller?
    // maybe the caller has more messages to send?
    // maybe the need for the message got cancelled based on new data?
    // maybe just convert msg to [msg] and get some batching love?
    // lots of opportunity in the retry here! we can even exploit flow control!
    integrate_message2(doc_reactive_cache, msg, key);
  } else {
    // pull the commit down along with anything else that landed
    sync_document(doc_reactive_cache, key);
  }
}

This has much nicer physics. As the document size increases:

  • time is bounded by the changes in flight
  • the network cost is proportional to changes.
  • the CPU cost is proportional to changes.
  • the consensus is proportional to data changes.
  • as conflicts emerge, messages can be batched locally, which reduces pressure and minimizes conflict (see the sketch after this list)
  • batching locally enables us to exploit stickiness to optimistically eliminate conflict and further drive down cost.
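
As a rough sketch of that batching idea (the names are mine; compute_prepare_delta_batch is a hypothetical helper, not Adama's API), a lost race simply leaves the message in a pending batch that rides along with the next attempt:

function integrate_with_batching(doc_reactive_cache, pending, msg, key) {
  // queue the message; the whole batch is retried together
  pending.push(msg);
  // pull any updates into our local cache first
  sync_document(doc_reactive_cache, key);
  // compute one delta covering every pending message (hypothetical helper)
  let delta = compute_prepare_delta_batch(doc_reactive_cache, pending);
  if (append_delta(key, delta)) {
    // the batch landed; clear it and pull the commit down
    pending.length = 0;
    sync_document(doc_reactive_cache, key);
  }
  // otherwise the batch stays pending; the next message (or a timer) retries it
}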

And this is just what happens when using a reactive cache which can be blown away at any moment.

With the physics sorting out to be way better, we must return to the decision to leverage a stateful transport like WebSocket, because it may be a horrific idea as stateful services are exceptionally hard. The moment you have a socket, you have a different game to play on the server. This new socket game is much harder to win. Now, it is very easy to get started and achieve impact. However, the moment you consider reliability you must think about what happens when your stateful service dies. This is the path for understanding why databases exist. It's so hard that there is a reason that databases are basically empires!

In this context, using a socket is appropriate because it has one job: leverage the prior state that the connection has in a predictable way. For devices connecting to the server, the socket is used simply as a way of minimizing data churn on the client. For instance, the "stateful server code" is simply:

function on_socket(connection) {
  var doc = get_document(connection.key);
  connection.write(doc); // the first element on the connection is the entire document
  while (sleep_for_update()) { // somehow learn of an update to the document
    var new_doc = get_document(connection.key); // fetch the entire document
    connection.write(json.diff(new_doc, doc)); // emit updates as merges
    doc = new_doc;
  }
}

Now, there is room for improvement in the above with Adama as the language which generates document updates, but the core reliability story is understandable and not very complex. This is the key to leveraging something like WebSocket without a great deal of pain or self-abuse.
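
For completeness, here is a rough sketch of what a json.diff that produces an RFC7386 merge patch could look like (the standard merge-patch diff, not necessarily the code Adama ships):

// compute a merge patch that, when merged into `from`, yields `to`
function diff(from, to) {
  if (typeof from !== "object" || from === null || Array.isArray(from) ||
      typeof to !== "object" || to === null || Array.isArray(to)) {
    return to; // values and arrays are replaced wholesale
  }
  var patch = {};
  for (var removed in from) {
    if (!(removed in to)) {
      patch[removed] = null; // removed fields become null
    }
  }
  for (var key in to) {
    if (!(key in from)) {
      patch[key] = to[key]; // new fields are copied
    } else if (JSON.stringify(from[key]) !== JSON.stringify(to[key])) {
      patch[key] = diff(from[key], to[key]); // changed fields recurse
    }
  }
  return patch;
}

Merging the resulting patch into the client's copy reproduces the new document without resending the whole thing.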

Day #3#

Holy crap, I’m dumb. I am just not smart enough to be doing what I am doing: building a reactive differential programming language database thing. I found two big issues which invalidate all of my work and life... in a good way.

First up, and this is really bad... I wasn’t actually deleting items from the tables. I was just hiding them and not removing them from the internal tables, which means most of the loops were filtering dead stuff. No wonder things were going slow. Fixing this alone dropped the time to less than 120 ms. This was unexpected, but it is worth noting that I wasn't investigating performance at the time. Instead, I was investigating the correctness of the "delta-model", and there was something rotten in the mix. Testing the delta model requires accepting a message, producing a delta, persisting that delta, throwing away all memory (effectively turning the tiny server off), then rehydrating the state from disk. The goal is to produce a model where servers can come and go, but users don't notice.
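
A rough sketch of that round trip (my pseudocode, not the actual test harness; assert_deep_equals is hypothetical, and merge/rehydrate come from the earlier sketch) looks like this:

function assert_delta_model_round_trip(log, doc, msg) {
  // integrate the message and capture only the delta it produces
  let delta = compute_prepare_delta(doc, msg);
  doc = merge(doc, delta); // the live server applies the delta
  log.push(delta);         // persist only the delta
  // now throw the live copy away and rehydrate a fresh one from the log
  let rebuilt = rehydrate(log);
  assert_deep_equals(doc, rebuilt); // users should not be able to tell the difference
  return doc;
}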

do users notice

Well, as a user, I was noticing. It turns out there was a break in the reactive chain, and this was the second issue. I was up until four am the next day...

Day #4#

Woke up, fixed the issue. It's amazing how sleep fixes things for you. An interesting observation was that testing the delta-model exposed a bug in an assumption the prior test depended on; that is, I found a deeper bug outside of my immediate changes. Unfortunately, that means my test case is no longer congruent with the previous test runs (the number of decisions dropped from 798 to 603). I've checked the delta log, and the new version appears correct at the point where the fault happened. Because it is doing the right thing now, I had to find a new test case. The new test case has 802 decisions, and it comes in at

| ms  | billing cost |
| --- | ------------ |
| 119 | 1938080      |

Based on decisions, this feels close enough to call it a good-enough test case given the results from day two were in the same ballpark. This means that the production environment would experience a 540 --> 120 time drop (78%) while the testing environment would have experienced an 84% drop, and user satisfaction would be up due to fewer bugs.

While I'm still not done dealing with all the fallout of the delta model, I do have to wonder if this is good enough. Well, I know I can do better, because in this world if I turn off client view computation, then the time drops to 49 ms, which gives me hope that I can optimize the client view computation. However, I think for the day, and probably the month, this will be good enough... or is it? Honestly, I'll probably explore creating client views with streaming JSON.