Since last time I wrote, I’ve made a few more pieces. Note: pieces are not in chronological order. A few technical details are at the end.

We’ll start with a recent one that I’m proud of:

rat vase

Yep, it’s a vase with rats coming out of it. The black glaze is Marilee’s Lava with 7% Mason 6666 black stain, which I put together myself. I have now mixed a few glazes, including this one; a lichen glaze which slid so dramatically that I haven’t dared to use it on a real piece; and a supposed mint green which came out so much more matte than the reference photo that I suspect I must have fucked up somehow. I’ve also played with oxides some and gotten results varying from unnoticeable to garish to awesome. I’m not sure how I feel about a process with such high-variance results, but I’ll probably keep exploring anyway.

Here’s the first piece I made that I still like:

deco pot

Probably I’ll stop liking it once I improve further. I enjoy the texture, but I wish it were more precise. I still don’t really know how to get precise lines — somehow, even with a ruler, it always comes out messy. So I’ve been trying to learn to make pieces that embrace the mess.

I also made vases as holiday presents for everyone on my team. The project we were working on is called The Aleph (after the Borges story), so all of the vases have Alephs on them. This is my favorite:

aleph vase

One co-worker in particular got excited about the pottery, so I made her a casserole:


My cats insist on photobombing my pottery photos. This is the rare Mutex photobomb — it’s usually Semaphore. The texture on the casserole was made with an old daisy wheel. The piece is really quite large — it cooks^Hholds almost 16 lbs of cat. I used S762 kitchenware clay, a buff stoneware with fine grog. It reportedly goes happily into a cold oven without cracking, so I guess the body lives up to its name. Unfortunately, most of the studio’s glazes, and some commercial glazes, come out muddy or crazed on this body, so I’m pretty limited in where I’m willing to use it.

One piece I made out of it was my second orcish teapot. But before I get to the second one, I have to show you the first:

orcish teapot 1

The body here has exactly the hue I wanted, but it came out about two shades too light. That’s significantly better than some other possible outcomes: the glaze combination I used is a bright green celadon on top of a chameleonic red-brown. Too little green, and you get mud; too much, and you get blobs of snot.

Other folks insist on calling this a warthog teapot, but what does a warthog need with a teapot? Maybe the orcs used warthog tusks to make it. Another potter at my studio had been doing a bunch of stuff with spikes, and I was a bit inspired. Peter’s spikes are broader than mine, and straight rather than bent. He throws them on the wheel, which I did for these and for my doorknob, but which I’ve since given up: handbuilding them is faster for me. And it’s easier to get close-to-identical copies, at least at my skill level.

three-horn doorknob

I like the idea of a second draft of a piece, and wish I had the patience to do it more often. The casserole above is actually a second draft: the first one was too thin and ended up cracking. The second one has straighter walls and better texture, and I think the handles are nicer too.

Here’s that second orcish teapot:

orcish teapot

The second orcish teapot did not come out the way I had planned, but I am not disappointed, because it’s pretty metal. Literally: the black glaze is primarily black copper oxide, and the horns have some red iron oxide stirred into the clear. Copper oxide usually produces green, and I had hoped to reproduce something like the original glaze by adding a bit of copper oxide to a not-green-enough glaze. You can see the normal copper green where the oxide has bled into the white interior glaze. That’s not the color I wanted on the outside either, but it’s attractive if not particularly orcish.

Britt suggests adding bentonite to oxides to get a more even application. Brickhouse’s studio oxide washes don’t have any, so they settle out very rapidly, making it hard to apply them evenly. I made up a copper oxide wash with some methylcellulose (AKA “CMC gum”), which I have used in the kitchen in the past. It burns off in the kiln, but suspends the oxide well enough for a more even coat. However, I think I might have more oxide in this wash than I really want. Copper oxide is quite corrosive: one little drop of the wash slid off my piece and ruined a kiln shelf.

I know that the standard approach here is to glaze a bunch of test tiles, and not glaze the final piece until the test tiles are done, but the turnaround time on test tiles can be up to three weeks, so I often test on finished pieces. I do take glaze notes inside a crappy webapp I threw together, which at least helps me avoid making the same mistake twice.

All of these horns got me excited about animal forms, so I went back to handbuilding. The rat vase was the first thing I made with animal faces. Then I made a skunk mug:

skunk mug

I have to admit that this one, too, was partially inspired by Peter’s work — he had been doing “eat, fart, love” mugs (“I don’t pray”, he explains). So when I made a skunk mug, the “eat, spray, love” slogan came to me in a flash. The face has kind of a funny shape, but I think it’s not too far from what a real skunk’s face looks like. It’s just that our mental image of a skunk has been warped by Pepé Le Pew.

Another thing that inspired me to make animal forms was this Pikamug:

Pikachu mug Pikachu mug: tail

Once I had made this for one cousin, I knew I would have to make something for his sister, and their mom suggested Ponyta:

Ponyta mug

One rule I have is that everything I make has to be at least somewhat functional. This constraint is somewhat arbitrary — just about anything hollow could be a vase. But it does mean that I am forced to pay attention to how a piece could be used, and to think about smoothness and weight. I should probably institute a “no vases” rule at some point, since I basically never use a vase for anything. Maybe I’ll ban teapots too, except that I have yet to make a teapot that’s even technically competent. For instance, the first orcish teapot doesn’t pour very well because the glaze covered up the holes. The second’s spout is too low. So I may have to make a few more of those.

Here’s a completely boring bowl where I like the glaze; it reminds me of a night sky:

bowl of stars

The waiting is a real struggle: the most recent work I can show is some jars that I threw in late December, sculpted lids for in January, glazed a few weeks ago, and just got out of the kiln. In fact, the third pot’s lid didn’t look great after glaze firing, so I’ve reglazed it and am refiring it. Here are the remaining two jars: a French bulldog, and a demon:

Bulldog jar

Demon jar

This delay means I can’t show you my alligator teapot or my Baba Yaga’s hut teapot, or my seed pod vase, or my second doorknob, which is completely different from the first one. So I guess I’ll have to post again.

Technical details:

I’m working at Brickhouse Ceramic Arts Center in Long Island City. My primary clay body is the studio’s brown stoneware, but I’ve also used Laguna B-Mix and S762 from Ceramics Supply. The studio fires at Cone 6 in oxidation. I use the studio’s glazes, as well as some commercial glazes from Amaco, Mayco, Coyote, and Potter’s Choice. If you like any of the glazes above and want more details, let me know and I’ll happily share.

Reiner Knizia is one of my favorite board game designers. One thing I really admire is that he’s willing to noodle on a theme (e.g. his four early tile-laying games) until he’s satisfied with it. He’s gone through a few versions of Lost Cities, including Keltis: Das Kartenspiel (hereinafter, “Keltis” — there are a few other Keltis variants, but this is the one I’ve been playing).

First, I’ll briefly explain the rules of Lost Cities, and then the changes in Keltis. Then I’ll explain why these changes produce lower variance. Finally, I’ll explain why I’ve been thinking about this.

The Lost Cities deck consists of twelve cards in each of five suits. Nine cards per suit are numbered 2-10; the rest are identical “investment” cards, which multiply a player’s score in that suit. Each player has a hand of eight cards. On their turn, they either play or discard. Plays and discards are both by suit. After playing, they draw either from the deck or from any discard pile. As soon as the last card is taken from the deck, the game ends. There are two things that make the game interesting:

  1. You can only play cards in each suit in order: first any investment cards, then the numbers in ascending order (with gaps permitted).

  2. If you don’t play any cards in a suit, you get zero points for that suit. Otherwise, your score in a suit is negative twenty points plus the sum of the values of the cards played in that suit.

This generally means that you only open a suit if you’re pretty sure you’re going to make 20 points in it.
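Scoring as described is simple enough to sketch in Python. This assumes the standard multiplier of one plus the number of investment cards played, and ignores the published game’s bonus for long expeditions:

```python
def score_suit(numbers, investments):
    """Score one Lost Cities suit.

    numbers: list of number cards played in the suit
    investments: count of investment cards played in the suit
    """
    if not numbers and not investments:
        return 0  # suit never opened: no points either way
    # An opened suit starts at -20, adds card values, then multiplies.
    return (sum(numbers) - 20) * (1 + investments)

score_suit([9, 10], 0)               # -1: opened, but one point short
score_suit([3, 4, 6, 7, 8, 10], 1)   # (38 - 20) * 2 = 36
```

This is why a suit is only worth opening when you expect to clear 20 points: the multiplier scales your losses just as happily as your gains.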

Keltis has a few differences from Lost Cities, but from our perspective, the most important ones are:

  1. The value of each card is approximately the same. Cards still have numbers, but the point value for a suit depends only on the number of cards in that suit.

  2. You can play a suit in either ascending or descending order.

Here’s how this leads to lower variance: In Lost Cities, opening with a hand of, say, the 9 and 10 of each of four suits is somewhat unlucky. You don’t want to discard anything, because your opponent will snap it up from the discard pile. But you also don’t particularly want to open a suit, because you’re guaranteed to lose a point on it. In general, getting cards in the wrong order can turn what would be a good suit bad. There’s nothing more frustrating than ending the game with the 7-8-9 of a suit and missing six points just because of bad timing. In Keltis, that would be an acceptable opening hand, since the 10s are all immediately playable.

And in Lost Cities, if you have 3-4-6 in a suit (13 points already), you’ll surely open that suit since you expect to get approximately two of the 7-8-9-10 cards. But in a quarter of games you’ll get just one, and in a quarter of games you’ll get three. And it’s possible to get zero or all four. In Keltis, this is approximately true as well (the card distribution is a bit different, so not exactly). But in Keltis, this will result in a swing of a few points — not 30 points (or more with investment cards).
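Those quarters are easy to check under a simplifying assumption: model each of the four high cards as independently reaching you with probability one half (the real deal isn’t quite independent, but it’s close):

```python
from math import comb

# P(you draw exactly k of the 4 high cards), binomial with p = 1/2
probs = [comb(4, k) / 2**4 for k in range(5)]
# [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

So you hit your expectation of two only 37.5% of the time; a quarter of the time you get one, a quarter three, and one game in eight goes to an extreme.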

It’s always nice to have data to back up a theory, so I found this page. It claims that, in fact, Keltis (listed as “Keltis Card”) is lower-variance. Well, more precisely, it makes a more-complicated claim about Elo ratings, but I think the effect is the same.

Subjectively, I think Lost Cities might be a slightly more fun game, and this says bad things about me. There’s a real excitement as things come down to the wire: will I suck out and get that blue ten (which is now worth 40 because of investments), or won’t I? That part of the game is pure gambling, and while it’s fun, it’s not something I feel proud of enjoying. But at least I’m not alone: Keltis gets 6.7 on BGG, while Lost Cities gets 7.1.

I have been thinking about this because I just had opposite feedback about dynamic range in two of my prototypes. In Sekhmet, the range was considered too low: a given tile could score between 1/2 and 2 points. In Banshee, the tile values are between 1 and 10, and the player who happened to draw the 10-point tiles was very likely to be able to use them and win. Similar problems don’t necessarily demand parallel solutions, so while I’m going to replace all the 1s with 2s in the next test of Sekhmet, I’ll probably fix Banshee by completely replacing the way that tiles score — and in the process, maybe reduce the range.

There’s a really neat technique for doing simple encryption that you can decrypt with your eyes. It goes like this, assuming that your data is a black-and-white image:


First, make a new image with the same size as your original image. Fill it in totally randomly.

Now, resize your random image to have twice as many pixels in each dimension. Each 2x2 pixel square of the resized image gets one of these “macropixels”:


One has the top-left and bottom-right pixels black; the other has the top-right and bottom-left pixels black. Let’s say that we’ll replace black pixels with the left image and white pixels with the right image. Your new image will have exactly half of its squares black, and the other half white. This is your key (although it doesn’t actually matter which side is the “key” and which is the “data”). Here’s an example:


Now, take your original image, and encode it like this: where there’s a black pixel, take the macropixel that’s different from your key, and where there’s a white pixel, take the one that’s the same as your key.


Notice that this new (“ciphertext”) image contains no information from the original image: since each key macropixel was chosen at random, the corresponding ciphertext macropixel is equally likely to be either type, whatever the original pixel was. It’s the visual version of an xor-based one-time pad.

Now comes the really cool part: to decrypt the image, you can print the key on a transparency, and overlay it on the ciphertext:


You’ll see fully-black macropixels where the original image was black, and half-tone where it was white. No special hardware needed — just apply eyeballs.
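The whole scheme is only a few lines of code. Here’s a sketch in Python, representing images as grids of 0/1 pixels (1 = black) rather than actual image files:

```python
import random

# The two macropixel types from above. 1 = black ink, 0 = clear.
A = [[1, 0], [0, 1]]  # top-left and bottom-right black
B = [[0, 1], [1, 0]]  # top-right and bottom-left black

def encrypt(plain):
    """plain is a grid of 0/1 pixels; returns (key, ciphertext),
    each a grid of 2x2 macropixels."""
    key, cipher = [], []
    for row in plain:
        krow, crow = [], []
        for pixel in row:
            k = random.choice((A, B))
            krow.append(k)
            # white pixel -> same macropixel as the key;
            # black pixel -> the other macropixel
            crow.append(k if pixel == 0 else (B if k is A else A))
        key.append(krow)
        cipher.append(crow)
    return key, cipher

def overlay(key, cipher):
    """Stacked transparencies: ink is opaque, so overlaying is pixelwise OR."""
    return [[[[kb | cb for kb, cb in zip(kline, cline)]
              for kline, cline in zip(k, c)]
             for k, c in zip(krow, crow)]
            for krow, crow in zip(key, cipher)]
```

In the overlaid result, black plaintext pixels become fully-inked macropixels (four black subpixels), while white ones stay half-tone (two of four).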

As part of the Museum of Math’s opening puzzle hunt in 2012, I used this to create a fun reveal. They had transparent disks designed to be overlaid to display moiré patterns. Instead, I encrypted a puzzle answer, and printed the key on one disk and the ciphertext on the other. When the disks were rotated to the right angle, the answer image would pop out. Of course, this isn’t very secure — if you look at the images, you’ll see the grid axes, and then there are only four possible rotations. But if you don’t have one of the disks, it’s totally secure. And it’s a cool effect. When I visited the museum recently, the staff mentioned that the Fitzwilliam Museum had been inspired by my little toy, and has a version of it in their upcoming codebreaking exhibit (opening October 24th). So if you happen to be in Cambridge (the one in the UK) between next week and next April, please drop by and take it for a spin.

The game of Set is not a strategic game. Nonetheless, there are techniques that good Set players use that new players ought to learn in order to get competitive more quickly. Since I recently taught a few new folks how to play, I thought I would discuss the strategies I use. For background, keep in mind that each pair of cards has a unique third card that makes a set with it.
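That pairing fact is easy to compute. Encoding each card as a 4-tuple of attribute values in {0, 1, 2}, the third card’s value for each attribute is whatever makes the three values sum to 0 mod 3: the same value if the pair matches, and the remaining value if it differs. A sketch:

```python
def third_card(a, b):
    """Return the unique card completing a set with cards a and b.

    Cards are 4-tuples (color, shape, number, shading), each in {0, 1, 2}.
    """
    return tuple((-x - y) % 3 for x, y in zip(a, b))

third_card((0, 0, 0, 0), (1, 1, 1, 1))  # (2, 2, 2, 2): all attributes differ
third_card((0, 1, 2, 0), (0, 2, 2, 1))  # (0, 0, 2, 2)
```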

Step one is to just scan the whole board, without any particular feature in mind. This strategy will almost never find sets for new players, because they haven’t got their pattern recognizers wired up right. But it’s good to do anyway, because you’ll need it for the next step.

Step two is to look only at the most-common attribute. If there are six red cards, pop out the reds and look just at those. Since you’ve just scanned the board, you’ll be able to find the attribute quickly. Among the cards with that attribute, you’ll be able to see a set if there is one. If not, you can quickly check the greens and purples. If you still haven’t found a set, you’ll know you need differing colors. Here, it’s often easiest to start with the smallest two categories: if there are three green and three purple cards, you only have nine pairs of cards to look at. And since you’ve scanned the board, you can often simply remember whether a pair’s third card is available.

When new cards are dealt (especially when there are no sets among the twelve cards on the board), it’s a good idea to look at those cards first. And if you’ve been tracking the distribution of attributes, you’ll know what’s common. On a board with lots of ovals, a new oval is exciting because it’s very likely to complete a set.

At the beginning of the game, the average number of sets on the board is almost three. Pretty often, even if someone else got one, there will still be one remaining.
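That “almost three” can be computed exactly. Since any two cards determine a unique third, a random triple from the 81-card deck is a set with probability 1/79, and a 12-card deal contains C(12, 3) = 220 triples:

```python
from math import comb

# Expected number of sets among 12 cards dealt from the 81-card deck.
expected_sets = comb(12, 3) / 79  # 220 / 79, about 2.78
```

Linearity of expectation does all the work here; no simulation needed.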

As an aside, very few board and card games discuss strategy in their rule books, which I think is a shame. Sure, there’s some fun to learning the early tricks on your own. But with most games, the real depth happens after you’ve played a few rounds. Adding a tiny strategy guide to game manuals would help new players to enjoy games more.

Some game designs seem more robust than others.

Dominion is a very robust design. They recently reprinted the base game, and replaced six of the original twenty-five cards because they were too underpowered or too situational. What other game could not only survive having nearly a quarter of its components be nearly useless, but sell millions of copies despite this? Maybe we can look at some of the reasons behind this robustness, and learn something that we can apply to our own games.

  1. Underpowered is better than overpowered. If Rebuild had been in the base game, folks would have complained a lot more. It’s a one-card engine that’s basically a must-buy.

  2. High variance adds to robustness. It’s harder to detect a bias in a noisier signal.

  3. Nobody is forced to take a bad card (except through something like Swindler, where the availability of bad cards is arguably a perk). Having a choice available that nobody ever takes isn’t terrible. The effect is that the designer has wasted some time, and there’s a bit of additional cognitive load. Otherwise, it’s fine. If there’s a whole subgame that’s useless, that’s bad, because players shouldn’t have to learn a useless subgame. But if the choice is just one card versus another, it turns out to be workable to have a few less-good choices.

There are other reasons that Dominion is a great game, but I don’t know if there are other reasons why it’s a robust game.

It’s OK for a game to be less robust. With a less robust design, the flaws in those six Dominion cards might have been discovered during development, and they would not have been printed. But I think that overall, robustness is a virtue. Once a game gets out into the world, players will discover, over the course of many years, how the game ought to be played. A robust game will better survive that experimentation process.

I made a greebled teapot:

Greebled teapot

I was inspired by nostalgebraist (re)posting this image, entitled “A cube and its greebled version”:

"A cube and its greebled version. Rendered by Gargaj / Conspiracy.", CC-BY-SA

Of course, mine is more regular (but, being handmade, is also much more irregular). It’s slab-built: first I carved an annular sector and a circle on a slab. Then I cut and rolled the sector (making a truncated cone), and molded the circle over a dome to make the bottom. I attached the two pieces, and cut a hole for the spout. The spout is a coil with a hole poked through it, hand-molded, with both carving and additions to get the greebling. The handle was a thinner slab, also with both carving and addition. As the piece was drying, the handle cracked, so I had to repair it with paper clay (which, as far as I can tell, is some kind of magic). Then I had to make a lid, and I realized that I had not thought at all about what the handle should be like. So I just whipped up something that would work with the texture.

The glaze is three coats of Coyote’s Really Red (two on the bottom, which turned out to be plenty). I thought that a complicated form should have a simple glaze. Also, having spent like fifteen hours greebling the thing, I wasn’t about to spend another fifteen painting it. And I recently had some bad luck with the studio glazes; I tried to make a mug that was yellow, black, and red-brown, and got greenish-brown, brown, and green (respectively) instead. So I stuck with something I knew would work.

Greebled teapot

I’ve been messing around with ceramics for nine or so months now, and this is the piece that I’m proudest of.

It’s increasingly popular for variables to be immutable by default. This makes the word “variable” a bit funny.

Also, I had a code review recently where a co-worker asked me to change some hard-coded strings to be constants. The strings, in this case, were argument names for a JSON API. So the API took e.g.

    {
        "function" : "launchMissiles",
        "args" : {
            "target" : "Moscow",
            "type" : "ICBM",
            "count" : 17
        }
    }

The co-worker wanted all of the strings to be constants (except I think “Moscow” and “ICBM” came from user input and were thus variables). I thought it was reasonable to have “target”, “type”, and “count” be hard-coded. That’s because:

  1. Imagine that they were constants — what would you name them? final String ARG_FIELD_TYPE = "type"? That seems to make the code harder to read. Also, it repeats the value of the constant in its name. If tomorrow the value were changed to “model”, should we also change the name of the constant? To do so would be insane: changing a constant’s value shouldn’t entail changing its name. But to leave it the same would be monstrous: future readers would have no way of matching the function call to the API docs without resolving the value of each constant.

  2. Would it prevent misspellings? Not really. You could just as easily misspell a constant’s value as a hard-coded string’s value. If the string were repeated often, then maybe it could get occasionally typoed, but these weren’t repeated very often.

  3. And even if they were repeated, there would be no logical connection between the instances. The launchMissiles function happens to have a target argument, but so does the strstr function. But in the next release, maybe they’ll correct strstr to have better names (needle and haystack are the only correct names for strstr’s args).

Anyway, the point is that constants are often valuable for things that we do expect to change, and often less valuable for things that we don’t expect to change. So the “constant” name is a little funny too.

I was talking to my friend C about work benefits, and I mentioned a particular benefit that I had taken advantage of at some job or other. I’m going to be a little vague here, because maybe someone else had the same idea I did, and I don’t want to kill a good thing. Basically, this was a benefit intended for some religious minority that happened to be useful to me as well. It might have been (but wasn’t) that on free ice cream day there were kosher (parve) ice creams, and I’m lactose-intolerant so I ate one.

Anyway, C claimed that this was disrespectful, since the benefit was intended for religious minorities, but I was taking advantage of it. I pointed out that atheists are in fact quite a small religious minority. This is somewhat disingenuous, as I normally consider atheism to be a lack of religion. But when we discuss matters of religious discrimination, atheists are a group against which there is discrimination on the basis of religion.

I guess maybe there was one fewer ice cream available for folks who keep kosher, but (a) I don’t think they measure the exact number of folks who keep kosher and order precisely that many units, and (b) this was a zero-sum situation; one of us was going to go without and it didn’t really matter which, and (c) they could always just order more next time, and (d) I work in the software industry and basically all of my co-workers can afford more dessert than they could possibly eat. (Since this ice cream thing is not the real thing that C and I were discussing, the details aren’t really important; the actual situation was non-rivalrous, but I also didn’t have the lactose-intolerance excuse. I just wanted the benefit.)

In my conversation with C, I also mentioned a hypothetical, which I think I took from Eugene Volokh but now can’t find the source for. The idea is that some company ordinarily requires everyone to work on Saturday. They grant an exemption to Michael, because he’s an observant Jew. But Frank is a divorced father, and his custody arrangement only lets him see his kid on Saturdays. Why is it fair that Michael gets the exemption, but not Frank? From an atheistic perspective, Michael is making a non-existent being happy, while Frank would be making his actually-existing kid happy. Of course, that’s not how Michael sees it! But the point is that people have many compelling reasons to want exemptions to generally-applicable rules, and while it’s quite reasonable to grant these exemptions liberally, it’s problematic to do so only when the exemptions are religious in nature.

I don’t think any of this was super-convincing to C.

Anyway, I was telling E about this conversation, and E pointed out that when we think about rules, there are at least three levels: the letter of the law, the spirit of the law, and broad moral principles. I tend to care about broad moral principles and about the letter of the law (which I was, in the case at hand, following; the hypothetical ice cream was labeled as “kosher”, but not labeled as “for observant Jews only”). But the spirit of the law often moves me less. C, on the other hand, cares a lot about the spirit of the law. It’s unsurprising that I care strongly about the letter of the law, as both my parents worked as lawyers for most of my life. Also, I’m a software engineer and software is a field that is about the letter of the law (though recent discussions about undefined behavior in C are often about how strongly to adhere to an ill-thought-out standard, so maybe this isn’t a universal professional deformation).

I also think it’s possible that there are different moral principles at play. Religious folks (I don’t know whether or not C is religious, or has this belief) often think of respect for religion as a terminal value. Some non-religious folks hold this value too. So if, for example, someone describes the Book of Mormon as a kind of Bible fanfic, that comparison will rankle (even if they personally believe that, in fact, Mormons are mistaken and that Joseph Smith composed the Book of Mormon himself). This generic reverence for religion is not a value I share. Of course, if it comes up in conversation that someone is a member of religious group X and your first response is to say “X is false and bad”, that’s just being a jerk. But in an abstract philosophical conversation, I don’t think there’s a huge problem with comparing religious texts to non-religious texts — even low-status non-religious texts like fanfic. (The low-status bit is actually pretty important; the title of The Greatest Story Ever Told compares the Gospels to literature, and it is not regarded as disrespectful.)

Also, I think that even among people who do have this value, it tends to reinforce existing power structures. For example, I have read that no non-Christian group has ever won a free exercise clause (of the US Constitution; RFRA is different) Supreme Court case. So it seems to me that one’s idea of which religious practices fit into this sort of reverence is colored by one’s personal experiences of religion, and those that one is exposed to through mainstream culture. That is, it often ends up being a facet of status quo bias: an inability to look at things with fresh eyes.

I don’t really have a conclusion here. I just thought E’s comment was so interesting that I decided to dress it up in a bunch of bloviation.

I loved Ben H. Winters’s Underground Airlines. It’s set in an alternate history where, instead of the Civil War, there was a variant of the Crittenden Compromise. So there’s still slavery in a few states.

There was just one problem: a throwaway line about Carolina. That’s the state formed, in this alternate history, by the merger of North and South Carolina. This would never happen. The US political system gives more power to smaller states. What state would give up a senator (and maybe a representative) to join another? None. Ever. And this gets to the heart of why there was a Civil War in the first place.

In 1860, the (then chiefly southern) Democratic party had won three of the past six presidential elections. The slave states had between them about 40% of the Electoral College votes. They had about 45% of the Senate. But they only had about 1/3 of the population (and under 1/4 excluding slaves, who certainly weren’t going to fight for the South). The combination of the three-fifths compromise and the Electoral College led the South to dramatically overestimate their true strength. This, in my view, was a major cause of the Civil War. Nobody starts a war they don’t expect to win. But it’s very easy to fool yourself into thinking that you might win.

The way that democracy helps prevent civil wars is that a faction that loses an election knows that it’s outnumbered. By screwing with this function, the Electoral College increases the odds of a civil war in this country. (So do weird ways of counting prisoners.) Leaving aside the fundamental unjustness of it, this is the true reason we ought to get rid of it.

Side note: The fourteenth amendment made the Electoral College unconstitutional at least at the current population numbers, but somehow no court has noticed this yet. Reynolds v. Sims found a state-level Electoral-College-like system unconstitutional. But there’s no reason that the logic of the case doesn’t apply to the federal system as well. The Senate too, of course.

All this is to say that you should read Underground Airlines, but ignore the Carolinas bit. It doesn’t affect the story at all.

I’ve done some development on Git. I’m pretty proud of it, because it’s a tool that powers so much of modern software development.

At Practice, I was asked to describe the difference between SVN and Git, and also between Perforce and Git.

The answer I gave goes like this:

“A Guide to SF Chronophysics” describes four types of time travel plots. Type 1 is deterministic — whatever happens, was what was destined to happen. There’s only one timeline. Type 3 is the one where someone steps on a butterfly and Trump is elected president. Type 2 is halfway in between — you can change things, but they tend to converge back to the original timeline. And finally, type 4 involves multiple universes — every change (including time travel) creates a new timeline.

SVN is type 1. Git is type 4. When you “amend” a commit in Git, you actually create a whole new commit, forking off from the same parent as the previous one. You can use your time machine’s “reflog” functionality to see the old one. Similarly, rebase creates a new timeline from some point in the past.

Perforce, I’m told, is somewhat like git, but it treats changesets rather than snapshots (“commits” in gitspeak, although in ordinary usage the term commit often refers to a changeset) as fundamental.

This is an instance of the mathematical notion of duality. The first example of duality I learned was polyhedra: if you swap the faces of a polyhedron with its vertices, you get a different polyhedron. The dual of a cube is an octahedron (known by gamers as a d8). Instead of six faces and eight vertices, it’s got eight faces and six vertices. The dual of a dodecahedron (d12) is an icosahedron (d20). The dual of a tetrahedron (d4) is itself. The Japanese addressing system is almost a dual of the US addressing system. In the US, we give addresses in terms of streets. In the Japanese system, blocks are the fundamental unit. I have been meaning for some time to design a game around the concept of duality, but I have not yet figured out quite how to do it.

Anyway, the graph of changesets is just the graph of snapshots with the vertices and edges swapped. Duality.