The game of Set is not a strategic game. Nonetheless, there are techniques that good Set players use that new players ought to learn in order to get competitive more quickly. Since I recently taught a few new folks how to play, I thought I would discuss the strategies I use. For background, keep in mind that each pair of cards has a unique third card that makes a set with it.
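
To make that concrete, here’s a minimal sketch in Python (my own; it assumes cards are encoded as 4-tuples of attribute values 0, 1, and 2): in each attribute, the third card’s value is forced, which works out to -(a + b) mod 3.

def third_card(a, b):
    # If the two cards match in an attribute, the third must match;
    # if they differ, it must take the remaining value.
    # Both cases are exactly -(x + y) mod 3.
    return tuple((-(x + y)) % 3 for x, y in zip(a, b))

print(third_card((0, 1, 2, 0), (0, 2, 0, 1)))  # (0, 0, 1, 2)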

Step one is to just scan the whole board, without any particular feature in mind. This strategy will almost never find sets for new players, because they haven’t got their pattern recognizers wired up right. But it’s good to do anyway, because you’ll need it for the next step.

Step two is to look only at the most-common attribute. If there are six red cards, pop out the reds and look just at those. Since you’ve just scanned the board, you’ll be able to find the attribute quickly. Among the cards with that attribute, you’ll be able to see a set if there is one. If not, you can quickly check the greens and purples. If you still haven’t found a set, you’ll know you need differing colors. Here, it’s often easiest to start with the smallest two categories: if there are three green and three purple cards, you only have nine pairs of cards to look at. And since you’ve scanned the board, you can often simply remember whether a pair’s third card is available.

When new cards are dealt (especially when there are no sets among the twelve cards on the board), it’s a good idea to look at those cards first. And if you’ve been tracking the distribution of attributes, you’ll know what’s common. On a board with lots of ovals, a new oval is exciting because it’s very likely to complete a set.

At the beginning of the game, the average number of sets on the board is almost three. Pretty often, even if someone else got one, there will still be one remaining.
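
Here’s the back-of-the-envelope arithmetic (mine, not anything official): any two cards determine a unique third, which is one specific card of the 79 you can’t see, so three random cards form a set with probability 1/79, and a twelve-card board has C(12, 3) = 220 candidate triples.

from math import comb

# 220 candidate triples, each a set with probability 1/79;
# by linearity of expectation, the expected count is their product.
print(comb(12, 3) / 79)  # 2.7848..., i.e. almost three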

As an aside, very few board and card games discuss strategy in their rule books, which I think is a shame. Sure, there’s some fun to learning the early tricks on your own. But with most games, the real depth happens after you’ve played a few rounds. Adding a tiny strategy guide to game manuals would help new players to enjoy games more.

Some game designs seem more robust than others.

Dominion is a very robust design. They recently reprinted the base game, and replaced six of the original twenty-five cards because they were too underpowered or too situational. What other game could not only survive having nearly a quarter of its components be nearly useless, but manage to sell millions of copies despite this? Maybe we can look at some of the reasons behind this robustness, and learn something that we can apply to our own games.

  1. Underpowered is better than overpowered. If Rebuild had been in the base game, folks would have complained a lot more. It’s a one-card engine that’s basically a must-buy.

  2. High variance adds to robustness. It’s harder to detect a bias in a noisier signal.

  3. Nobody is forced to take a bad card (except through something like Swindler, where the availability of bad cards is arguably a perk). Having a choice available that nobody ever takes isn’t terrible. The effect is that the designer has wasted some time, and there’s a bit of additional cognitive load. Otherwise, it’s fine. If there’s a whole subgame that’s useless, that’s bad, because players shouldn’t have to learn a useless subgame. But if the choice is just one card vs another, it turns out to be workable to have a few less-good choices.

There are other reasons that Dominion is a great game, but I don’t know if there are other reasons why it’s a robust game.

It’s OK for a game to be less robust. With a less robust design, the flaws in those six Dominion cards might have been discovered during development, and they would not have been printed. But I think that overall, robustness is a virtue. Once a game gets out into the world, players will discover, over the course of many years, how the game ought to be played. A robust game will better survive that experimentation process.

I made a greebled teapot:

Greebled teapot

I was inspired by nostalgebraist (re)posting this image, entitled “A cube and its greebled version”:

"A cube and its greebled version. Rendered by Gargaj / Conspiracy.", CC-BY-SA

Of course, mine is more regular (but, being handmade, is also much more irregular). It’s slab-built: first I carved an annular sector and a circle on a slab. Then I cut and rolled the sector (making a truncated cone), and molded the circle over a dome to make the bottom. I attached the two pieces, and cut a hole for the spout. The spout is a coil with a hole poked through it, hand-molded, with both carving and additions to get the greebling. The handle was a thinner slab, also with both carving and addition. As the piece was drying, the handle cracked, so I had to repair it with paper clay (which, as far as I can tell, is some kind of magic). Then I had to make a lid, and I realized that I had not thought at all about what the lid’s handle should be like. So I just whipped up something that would work with the texture.

The glaze is three coats of Coyote’s Really Red (two on the bottom, which turned out to be plenty). I thought that a complicated form should have a simple glaze. Also, having spent like fifteen hours greebling the thing, I wasn’t about to spend another fifteen painting it. And I recently had some bad luck with the studio glazes; I tried to make a mug that was yellow, black, and red-brown, and got greenish-brown, brown, and green (respectively) instead. So I stuck with something I knew would work.

Greebled teapot

I’ve been messing around with ceramics for nine or so months now, and this is the piece that I’m proudest of.

It’s increasingly popular for variables to be immutable by default. This makes the word “variable” a bit funny.

Also, I had a code review recently where a co-worker asked me to change some hard-coded strings to be constants. The strings, in this case, were argument names for a JSON API. So the API took e.g.

{
    "function" : "launchMissiles",
    "args" : {
        "target" : "Moscow",
        "type" : "ICBM",
        "count" : 17
    }
}

The co-worker wanted all of the strings to be constants (except I think “Moscow” and “ICBM” came from user input and were thus variables). I thought it was reasonable to have “target”, “type”, and “count” be hard-coded. That’s because:

  1. Imagine that they were constants — what would you name them? final String ARG_FIELD_TYPE = "type"? That seems to make the code harder to read. Also, it repeats the value of the constant in its name. If tomorrow the value were changed to “model”, should we also change the name of the constant? To do so would be insane: changing a constant’s value shouldn’t entail changing its name. But to leave it the same would be monstrous: future readers would have no way of matching the function call to the API docs without resolving the value of each constant.

  2. Would it prevent misspellings? Not really. You could just as easily misspell a constant’s value as a hard-coded string’s value. If the string were repeated often, then maybe it could get occasionally typoed, but these weren’t repeated very often.

  3. And even if they were repeated, there would be no logical connection between the instances. The launchMissiles function happens to have a target argument, but so does the strstr function. But in the next release, maybe they’ll correct strstr to have better names (needle and haystack are the only correct names for strstr’s args).

Anyway, the point is that constants are often valuable for things that we do expect to change, and often less valuable for things that we don’t expect to change. So the “constant” name is a little funny too.
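
To illustrate (with hypothetical names of my own, not the actual code under review):

# A tunable we expect to revise is a good constant: one edit updates every use.
MAX_LAUNCH_COUNT = 17

# A wire-format field name is fixed by the API; hiding it behind a constant
# just makes the call harder to match against the API docs.
def build_args(target, missile_type, count):
    return {"target": target, "type": missile_type, "count": count}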

I was talking to my friend C about work benefits, and I mentioned a particular benefit that I had taken advantage of at some job I’ve had. I’m going to be a little vague here, because maybe someone else had the same idea I did, and I don’t want to kill a good thing. Basically, this was a benefit intended for some religious minority that happened to be useful to me as well. It might have been (but wasn’t) that on free ice cream day there were kosher (parve) ice creams, and I’m lactose-intolerant so I ate one.

Anyway, C claimed that this was disrespectful, since the benefit was intended for religious minorities, but I was taking advantage of it. I pointed out that atheists are in fact quite a small religious minority. This is somewhat disingenuous, as I normally consider atheism to be a lack of religion. But when we discuss matters of religious discrimination, atheists are a group against which there is discrimination on the basis of religion.

I guess maybe there was one fewer ice cream available for folks who keep kosher, but (a) I don’t think they measure the exact number of folks who keep kosher and order precisely that many units, (b) this was a zero-sum situation; one of us was going to go without, and it didn’t really matter which, (c) they could always just order more next time, and (d) I work in the software industry and basically all of my co-workers can afford more dessert than they could possibly eat. (Since this ice cream thing is not the real thing that C and I were discussing, the details aren’t really important; the actual situation was non-rivalrous, but I also didn’t have the lactose intolerance excuse. I just wanted the benefit.)

In my conversation with C, I also mentioned a hypothetical, which I think I took from Eugene Volokh but now can’t find the source for. The idea is that some company ordinarily requires everyone to work on Saturday. They grant an exemption to Michael, because he’s an observant Jew. But Frank is a divorced father, and his custody arrangement only lets him see his kid on Saturdays. Why is it fair that Michael gets the exemption, but not Frank? From an atheistic perspective, Michael is making a non-existent being happy, while Frank would be making his actually-existing kid happy. Of course, that’s not how Michael sees it! But the point is that people have many compelling reasons to want exemptions to generally-applicable rules, and while it’s quite reasonable to grant these exemptions liberally, it’s problematic to do so only when the exemptions are religious in nature.

I don’t think any of this was super-convincing to C.

Anyway, I was telling E about this conversation, and E pointed out that when we think about rules, there are at least three levels: the letter of the law, the spirit of the law, and broad moral principles. I tend to care about broad moral principles and about the letter of the law (which I was, in the case at hand, following; the hypothetical ice cream was labeled as “kosher”, but not labeled as “for observant Jews only”). But the spirit of the law often moves me less. C, on the other hand, cares a lot about the spirit of the law. It’s unsurprising that I care strongly about the letter of the law, as both my parents worked as lawyers for most of my life. Also, I’m a software engineer and software is a field that is about the letter of the law (though recent discussions about undefined behavior in C are often about how strongly to adhere to an ill-thought-out standard, so maybe this isn’t a universal professional deformation).

I also think it’s possible that there are different moral principles at play. Religious folks (I don’t know whether or not C is religious, or has this belief) often think of respect for religion as a terminal value. Some non-religious folks think this too. So if, for example, someone describes the Book of Mormon as a kind of Bible fanfic, that comparison will rankle (even if the listener personally believes that Mormons are mistaken and that Joseph Smith composed the Book of Mormon himself). This generic reverence for religion is not a value I share. Of course, if it comes up in conversation that someone is a member of religious group X and your first response is to say “X is false and bad”, that’s just being a jerk. But in an abstract philosophical conversation, I don’t think there’s a huge problem with comparing religious texts to non-religious texts — even low-status non-religious texts like fanfic. (The low-status bit is actually pretty important; the title of The Greatest Story Ever Told compares the Gospels to literature, and it is not regarded as disrespectful.)

Also, I think that even among people who do have this value, it tends to reinforce existing power structures. For example, I have read that no non-Christian group has ever won a Supreme Court case under the free exercise clause of the US Constitution (RFRA is different). So it seems to me that one’s idea of which religious practices fit into this sort of reverence is colored by one’s personal experiences of religion, and by those one is exposed to through mainstream culture. That is, it often ends up being a facet of status quo bias: an inability to look at things with fresh eyes.

I don’t really have a conclusion here. I just thought E’s comment was so interesting that I decided to dress it up in a bunch of bloviation.

I loved Ben H. Winters’s Underground Airlines. It’s about an alternate history where, instead of the Civil War, there’s a variant of the Crittenden Compromise. So there’s still slavery in a few states.

There was just one problem: a throwaway line about Carolina. That’s the state formed, in this alternate history, by the merger of North and South Carolina. This would never happen. The US political system gives more power to smaller states. What state would give up a senator (and maybe a representative) to join another? None. Ever. And this gets to the heart of why there was a Civil War in the first place.

In 1860, the (then chiefly southern) Democratic party had won three of the past six presidential elections. The slave states had between them about 40% of the Electoral College votes. They had about 45% of the Senate. But they only had about 1/3 of the population (and under 1/4 excluding slaves, who certainly weren’t going to fight for the South). The combination of the three-fifths compromise and the Electoral College led the South to dramatically overestimate their true strength. This, in my view, was a major cause of the Civil War. Nobody starts a war they don’t expect to win. But it’s very easy to fool yourself into thinking that you might win.

The way that democracy helps prevent civil wars is that a faction that loses an election knows that it’s outnumbered. By screwing with this function, the Electoral College increases the odds of a civil war in this country. (So do weird ways of counting prisoners.) Leaving aside the fundamental unjustness of it, this is the true reason we ought to get rid of it.

Side note: The fourteenth amendment made the Electoral College unconstitutional, at least given current population numbers, but somehow no court has noticed this yet. Reynolds v. Sims found a state-level Electoral-College-like system unconstitutional. But there’s no reason that the logic of the case doesn’t apply to the federal system as well. The Senate too, of course.

All this is to say that you should read Underground Airlines, but ignore the Carolinas bit. It doesn’t affect the story at all.

I’ve done some development on Git. I’m pretty proud of it, because it’s a tool that powers so much of modern software development.

At Practice, I was asked to describe the difference between SVN and Git, and also between Perforce and Git.

The answer I gave goes like this:

“A Guide to SF Chronophysics” describes four types of time travel plots. Type 1 is deterministic — whatever happens, was what was destined to happen. There’s only one timeline. Type 3 is the one where someone steps on a butterfly and Trump is elected president. Type 2 is halfway in between — you can change things, but they tend to converge back to the original timeline. And finally, type 4 involves multiple universes — every change (including time travel) creates a new timeline.

SVN is type 1. Git is type 4. When you “amend” a commit in Git, you actually create a whole new commit, forking off from the same parent as the previous one. You can use your time machine’s “reflog” functionality to see the old one. Similarly, rebase creates a new timeline from some point in the past.
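
Here’s a toy model of that (mine, not Git’s actual implementation): because commits are immutable, “amending” builds a sibling commit rather than editing the original.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Commit:
    message: str
    parent: Optional["Commit"]

original = Commit("Fix bug", parent=None)
amended = Commit("Fix bug (for real)", parent=original.parent)
# Both timelines still exist; the reflog is how you find `original` again.
assert original is not amended and original.parent == amended.parent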

Perforce, I’m told, is somewhat like git, but it treats changesets rather than snapshots (“commits” in gitspeak, although in ordinary usage the term commit often refers to a changeset) as fundamental.

This is an instance of the mathematical notion of duality. The first example of duality I learned was polyhedra: if you swap the faces of a polyhedron with its vertices, you get a different polyhedron. The dual of a cube is an octahedron (known by gamers as a d8). Instead of six faces and eight vertices, it’s got eight faces and six vertices. The dual of a dodecahedron (d12) is an icosahedron (d20). The dual of a tetrahedron (d4) is itself. The Japanese addressing system is almost a dual of the US addressing system. In the US, we give addresses in terms of streets. In the Japanese system, blocks are the fundamental unit. I have been meaning for some time to design a game around the concept of duality, but I have not yet figured out quite how to do it.

Anyway, the graph of changesets is just the graph of snapshots with the vertices and edges swapped. Duality.
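
As a toy sketch of that (my own encoding, not Perforce’s): start from a snapshot DAG, turn each parent-to-child edge into a changeset node, and connect changesets that compose.

# Snapshot view: each child maps to its parents.
history = {"B": ["A"], "C": ["B"], "C2": ["B"]}

# Dual, changeset view: each edge becomes a node...
changesets = [(p, c) for c, parents in history.items() for p in parents]

# ...and two changesets are connected when one ends where the next begins.
composable = [(e, f) for e in changesets for f in changesets if e[1] == f[0]]

print(changesets)  # [('A', 'B'), ('B', 'C'), ('B', 'C2')]
print(composable)  # [(('A', 'B'), ('B', 'C')), (('A', 'B'), ('B', 'C2'))]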

So David Albert wrote a tweetstorm about Plan 9 and about generality. I’ve reassembled some paragraphs for ease of quoting:

There is a ton of symmetry between messaging and late binding at the core of OOP, and private name spaces in Plan 9. With messaging in OOP, the decision about what code to run is made dynamically, as late as possible. With private name spaces, each process sees its own file system hierarchy. The /foo/bar/baz that I see might not be the same one you see. In a sense, private name spaces late-bind file contents. This is a big deal when all system functions are accessed using files.

There’s a great quote from Kay in the Early History of Smalltalk, that I still don’t fully understand, but I think applies here.

“Smalltalk is a recursion on the notion of computer itself. Instead of dividing ‘computer stuff’ into things each less strong than the whole–like data structures, procedures, and functions which are the usual paraphernalia of programming languages — each Smalltalk object is a recursion on the entire possibilities of the computer.”

This seems pretty reasonable as a description, but as a prescription it’s not really great software engineering: giving every part of a system the full power of the computer makes it harder to constrain and to analyze.

Recently I submitted a bug fix which illustrates one case of this: jgit was willing to write git tree entries with zero-length names. These entries represent, roughly, filenames. So by removing power, I was able to reduce bugs. This is sort of a small case of a power reduction — previously, the domain of the function was approximately all strings; now it’s all-but-one.
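
The shape of the fix, re-sketched in Python (jgit itself is Java, and this isn’t its actual code): shrink the function’s domain so the bad input is rejected at the boundary.

def write_tree_entry(name: bytes) -> bytes:
    # Previously the domain was (approximately) all strings; now it's
    # all-but-one. Zero-length names can no longer be written.
    if not name:
        raise ValueError("tree entry name must be non-empty")
    return name + b"\0"  # stand-in for the real serialization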

But let’s look at a stronger case: OpenSSL. OpenSSL famously had a wide surface area which allowed all sorts of use cases. Unfortunately, most of those use cases were wrong, from a security perspective. Maybe there’s room in the world for a security library where everything is permitted. But mostly I would rather use the library where only correct things are possible.

I guess this isn’t always true — I use a lot of Python, and when I’m writing Python to write SVG files, I don’t bother with an interface that would prevent me from making formatting errors. I just use print statements. But I probably would prefer the interface if I were programming for external consumption, as opposed to hacking together some throw-away code to get something else done.
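
The whole “interface” might be something like this (a representative example, not my actual scripts); nothing checks that the output is well-formed SVG, and for throwaway code that’s fine:

print('<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">')
print('<circle cx="50" cy="50" r="40" fill="red"/>')
print('</svg>')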

Those are some special cases, but the most general reason for limiting what your code can do, is that limits make analysis easier. Valgrind has to do a tremendous amount of work to show that one particular run of your C code doesn’t have memory errors. Java simply never has that problem (C++ references don’t either). Regular expressions are far less powerful than full parsers, so it’s easier for a human reader to understand what they’re doing. Pure functions and immutable data structures are weaker than impure/mutable — but if you use a lot of them, it’s easier to track down where that stupid variable got changed. You can also build abstractions like map-reduce on top of them.
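
A toy illustration of that last point (my example): because the per-item step is pure and the combine step is associative, the work can be split, reordered, or parallelized without changing the answer.

from functools import reduce

words = ["pure", "functions", "compose"]
lengths = map(len, words)                       # pure per-item step
total = reduce(lambda a, b: a + b, lengths, 0)  # associative combine
assert total == 20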

Which I guess gets to a point that David makes later:

I think the key idea is find uniform interfaces (the message, the file), make them as dynamic as possible, and build a system around that. Another striking thing about Plan 9 is that everything uses 9P – the remote file system protocol – both locally and remotely. If you didn’t have to interact with the outside world, you’d basically have only one network protocol for all services.

But this also reminds me of the STEPS project to build a complete system in 20,000 lines of code (also Alan Kay, et al). To do that, you have to discover powerful abstractions and use them everywhere. Having just one network protocol is a good start.

[rearranged from earlier]

Consider the Plan 9 window manager. It consumes a screen, a mouse, and a keyboard from the computer, (/dev/draw, /dev/mouse, etc.)… …and then re-exports virtual screens, mice, and keyboards to each of the windows that it makes. The programs in each window don’t know they’re in a window. You could run them w/o the window manager and they’d take up the whole screen.

In indexed-color (e.g. 256-color) graphics, which Plan 9 supported, there is a difference between being full-screen and being windowed; when you are full-screen, you have full control over the palette. When you aren’t, you have a sad negotiation problem.

Also, in a windowed mode, you can be partially covered up and then exposed, while in a full-screen mode, you can’t. So either the full-screen interface has to contemplate this possibility, or the windowed interface has to be artificially weakened.

Anyway, a file (or series of files) is the wrong interface to a screen. You want a higher-level interface that can do things like scrolling, or playing movies, or drawing textured triangles. These operations are often hardware-accelerated, and this matters a lot for smooth graphics. This sort of rich interface is best accessed through a series of functions, which communicate, in part, by reifying objects (“a window”, or “a button”) so that they can be referenced.
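
A sketch of the contrast (a hypothetical API, not Plan 9’s or any real toolkit’s): the rich interface hands back reified objects that later calls can reference, which a write-bytes-to-a-file interface can’t easily do.

class Window:
    def scroll(self, dy: int) -> None:
        ...  # possibly hardware-accelerated

    def draw_textured_triangles(self, vertices, texture) -> None:
        ...

class Screen:
    def create_window(self, x: int, y: int, w: int, h: int) -> Window:
        return Window()  # a reified object, not a stream of bytes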

Because I can write any old string to a file, there is nothing that will check for me whether I have written a string that does something meaningful (until I run my program). Plan 9’s use of C’s file reading APIs makes this even worse: are short reads or short writes possible? What do they mean? Sure, you could document that, but you shouldn’t have to; a good API is the documentation about what’s possible.

And to a reader of code, uniformity makes navigation difficult. What’s this piece of code doing? The same thing as all of the other code: reading and writing some files. At this point, strace is a more useful debugging tool than grep, since at least I can see which file is being read/written by a particular piece of code. Larry Wall once said, “Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in.” There’s more to life than visual appeal, but I do think there’s something to the idea that different tools should look different so you don’t accidentally grab the scalpel when you wanted the cautery pen.

I also don’t believe that local resources should be treated the same as remote resources. This is a seductive idea — they’re just streams of bytes, who cares where they’re stored? And sometimes, it’s reasonable: when you’re building casual software where you’re not going to think too hard about failure cases. But when engineering something that will see heavy use, it matters whether a read failed because of a network failure vs a disk failure. Network failures are recoverable; disk failures more-or-less aren’t. And often a stream isn’t the interface that you want for network communication anyway — something that’s datagram-based and best-effort is better for games and telephony.

And this is why basically nothing is 20,000 lines of code, and if anything is 20,000 lines of code, it’s “by shaving off as many requirements of every imaginable kind as you can”. As programmers, we deal with extremely heterogeneous systems. A carpenter might pound a thousand identical nails; we just write a nail-pounding function. So it’s not surprising that we end up with specialized rather than uniform interfaces, and it’s not bad either.

For the Power Broker game design contest, my friend Ed and I designed “Whipsaw!”. It’s a set-collection card game with lying. You can’t have a Robert Moses game without lying. We didn’t win the contest, but we had fun making and playing the game.

When I told some folks about this at NYC Playtest, I was told that people who like set-collection games don’t like lying games and vice versa, so nobody would ever play it. But in fact all of our playtesters liked it just fine. Also, poker is kind of that.

Random side note about poker: the notion of a game that is (almost) exclusively played for money is bizarre. Remember how, when MtG started, there was this notion of playing for ante? And then people tried it and it was terrible and it never caught on. If someone invented poker today, as a Euro-style game, would people think of the real-money thing as a gimmick? Would it be almost an art game, like Cordial Minuet or Train?

We didn’t do nearly as much playtesting on Whipsaw! as I’ve done on Loading Zone. It’s a much simpler game, and we were on a pretty tight timeline. It’s definitely not perfect: in playtests, players didn’t lie as much as we wanted them to. That might be because it’s hard to convince people to lie (or hard to do so in a set-collection game!). Or it might be because the incentives are wrong. But we weren’t able to figure out a way to improve the situation.

The game works like this: you’re trying to build parkways (there are four). The scoring is roughly quadratic: most parkway cards give victory points for each card in that parkway. So you would rather have all of one parkway than half of two. The cost of a parkway card is some number each of legislators, judges, and bankers. You have a hand of these resource cards, and draw more every turn. They’re played face-down, so you can lie about what you’ve played. Each type of card can also be played as an action: bankers lend money (which acts as additional bankers but costs points at the end of the game), legislators call bluffs, and judges temporarily block legislators, giving you a chance to “make it right”. To make lying more interesting, the resources have colored backs which give incomplete information about what they are. So, the cards with black backs are mostly judges — but not all. The game has a little more complexity, but that’s the gist.
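
To see why concentrating on one parkway wins, here’s a simplified sketch (the real cards’ values vary): if each card scores one point per card in its parkway, a parkway of n cards scores n², so six cards in one parkway beat three and three.

def parkway_score(n_cards: int) -> int:
    # n cards, each scoring one point per card in the parkway
    return n_cards * n_cards

assert parkway_score(6) > parkway_score(3) + parkway_score(3)  # 36 > 18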

Whipsaw! came together pretty quickly: I wrote up a first draft, then Ed and I tested it. My version was too long: it had six parkways instead of four. And it had a weird complication: instead of judges blocking bluff calls, special lawyer cards would do it. Lawyers could also block opponents’ lawyers. (In this version, instead of using legislators to call bluffs, you would do it by paying the cost yourself.) Scoring was roughly linear (and Lost Cities-inspired: you could actually go negative if you didn’t have enough cards in a parkway).

My notes on this playtest say that the major fun parts were:

  - It played pretty quickly.
  - Getting away with lying was fun.
  - Strategizing about lying was fun.
  - Lawyers were probably the most fun part.
  - The colored backs made lying more interesting.

And the notes say that the major flaws were:

  - It was too long — 30 minutes would be better.
  - There was maybe not enough lying.
  - There was definitely not enough calling of bluffs (in part because it was expensive).
  - Some of the complexity was silly.
  - It was maybe hard to know what resources to save in your hand, and in general save vs spend was not an interesting decision.
  - Set collection wasn’t that interesting.
  - It was hard to track what other people were doing (too many cards, in part).

Ed’s second draft fixed most of this. He added more of a narrative arc by dividing the game deck into three “years”, with more-expensive properties available in the later years. He reduced the number of parkways, and adjusted the scoring. And he invented the rules about how legislators, judges, and bankers worked. The game was pretty close to the final form at this point.

I ran a few more tests — at NYC Playtest and at Recurse Center — and made some minor tweaks. And then we declared the game done and submitted it. We should probably have done some artwork.

Want to give Whipsaw! a try? Here are the print-and-play rules and cards.

One of my favorite games is Set. When I first discovered it, in 1998, I made a Java applet since I couldn’t immediately find the cards. It’s apparently been used as an example in a college course, which is pretty surprising given that it’s undergraduate Java (to be fair, I have hacked on it occasionally since). My applet used to be a fairly accurate representation of the original game, but a few months ago I got a letter from the lawyers for the company that makes Set. They were pretty polite, but I had to change the graphics. So now my applet has polka-dots.

Set is played on ℤ₃⁴. There’s a variant called Projective Set, which is played on a vector space over a finite field — 𝔽₂⁶. Basically, that’s the six-bit numbers (but Projective Set excludes the zero card). The rule is that N cards can be removed if, for each bit position, the cards sum to zero mod 2. Or, perhaps more simply, each symbol appears an even number of times. Any seven distinct cards contain at least one such group.

Four cards: tilde/square/triangle/plus; plus/star; square/plus/circle; tilde/triangle/plus/star/circle

These four cards can be removed.
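
Here’s a quick check of that example (my sketch, with cards as sets of symbols): a group is removable exactly when every symbol appears an even number of times.

from collections import Counter

cards = [{"tilde", "square", "triangle", "plus"},
         {"plus", "star"},
         {"square", "plus", "circle"},
         {"tilde", "triangle", "plus", "star", "circle"}]

counts = Counter(symbol for card in cards for symbol in card)
assert all(n % 2 == 0 for n in counts.values())  # removable!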

One of Danielle’s co-workers lamented not being able to buy a copy, so I decided to make some. Unlike the one shown on the Wikipedia page, mine are accessible to the colorblind. I guess I should note that my version is in fact just called “Projective”, as Set is a trademark of Set Enterprises and I don’t want to annoy their lawyers any more than my applet already has. So now you can buy a copy. I’m not making money off of this because doing so would be a hassle; the price is what The Game Crafter set. I got mine yesterday, and it looks pretty good. Danielle beat me at it some, and I enjoyed it.

This got me to thinking about what other mathematical objects could be used for pattern recognition games. I immediately thought of quaternions, and then got Hamilton stuck in my head. Did I mention that I’m not great at math? I had forgotten that the quaternions are non-commutative, making the game much trickier to design. But I guess I don’t need to be totally accurate to the math. As long as I keep i²=j²=k²=ijk=-1, everything will probably work out. The idea will be to find N cards whose product is one. I’m thinking of calling it Uno.
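
A sketch of the card check (the Hamilton product is standard; the game and its encoding are hypothetical): multiply the cards’ quaternions in order and see whether you get 1.

from functools import reduce

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

ONE, I, J, K = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
NEG_ONE = (-1, 0, 0, 0)

# i·j·k = -1, so these four cards multiply to one (order matters!):
assert reduce(qmul, [I, J, K, NEG_ONE]) == ONE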

I even came up with some icons, based on the Swedish point of interest symbol:

icons for the quaternion game

Notice that each of i, j, and k can be combined with a rotation to form the -1 symbol, and i, j, and k can be overlapped to do the same.

This might be too easy, so maybe I’ll need to do ℚ² or something. Or just go straight to 𝕆, whose multiplication table is too big to remember. But you can always use this simple and easy-to-understand diagram (from Wikipedia):

Fano plane diagram