It’s increasingly popular for variables to be immutable by default. This makes the word “variable” a bit funny.

Also, I had a code review recently where a co-worker asked me to change some hard-coded strings to be constants. The strings, in this case, were argument names for a JSON API. So the API took e.g.

{
    "function" : "launchMissiles",
    "args" : {
        "target" : "Moscow",
        "type" : "ICBM",
        "count" : 17
    }
}

The co-worker wanted all of the strings to be constants (except I think “Moscow” and “ICBM” came from user input and were thus variables). I thought it was reasonable to have “target”, “type”, and “count” be hard-coded. That’s because:

  1. Imagine that they were constants — what would you name them? final String ARG_FIELD_TYPE = "type"? That seems to make the code harder to read. Also, it repeats the value of the constant in its name. If tomorrow the value were changed to “model”, should we also change the name of the constant? To do so would be insane: changing a constant’s value shouldn’t entail changing its name. But to leave it the same would be monstrous: future readers would have no way of matching the function call to the API docs without resolving the value of each constant.

  2. Would it prevent misspellings? Not really. You could just as easily misspell a constant’s value as a hard-coded string’s value. If the string were repeated often, then maybe it could get occasionally typoed, but these weren’t repeated very often.

  3. And even if they were repeated, there would be no logical connection between the instances. The launchMissiles function happens to have a target argument, but so does the strstr function. But in the next release, maybe they’ll correct strstr to have better names (needle and haystack are the only correct names for strstr’s args).

Anyway, the point is that constants are often valuable for things that we do expect to change, and often less valuable for things that we don’t expect to change. So the “constant” name is a little funny too.

I was talking to my friend C about work benefits, and I mentioned a particular benefit that I had taken advantage of at some job I’ve had. I’m going to be a little vague here, because maybe someone else had the same idea I did, and I don’t want to kill a good thing. Basically, this was a benefit intended for some religious minority that happened to be useful to me as well. It might have been (but wasn’t) that on free ice cream day there were kosher (parve) ice creams, and I’m lactose-intolerant so I ate one.

Anyway, C claimed that this was disrespectful, since the benefit was intended for religious minorities, but I was taking advantage of it. I pointed out that atheists are in fact quite a small religious minority. This is somewhat disingenuous, as normally I consider atheism to be a lack of religion. But when we discuss matters of religious discrimination, atheists are a group against which there is discrimination on the basis of religion.

I guess maybe there was one fewer ice cream available for folks who keep kosher, but (a) I don’t think they measure the exact number of folks who keep kosher and order precisely that many units, and (b) this was a zero-sum situation; one of us was going to go without and it didn’t really matter which, and (c) they could always just order more next time and (d) I work in the software industry and basically all of my co-workers can afford more dessert than they could possibly eat. (Since this ice cream thing is not the real thing that C and I were discussing, the details aren’t really important; the actual situation was non-rivalrous but I also didn’t have the lactose intolerance excuse. I just wanted the benefit).

In my conversation with C, I also mentioned a hypothetical, which I think I took from Eugene Volokh but now can’t find the source for. The idea is that some company ordinarily requires everyone to work on Saturday. They grant an exemption to Michael, because he’s an observant Jew. But Frank is a divorced father, and his custody arrangement only lets him see his kid on Saturdays. Why is it fair that Michael gets the exemption, but not Frank? From an atheistic perspective, Michael is making a non-existent being happy, while Frank would be making his actually-existing kid happy. Of course, that’s not how Michael sees it! But the point is that people have many compelling reasons to want exemptions to generally-applicable rules, and while it’s quite reasonable to grant these exemptions liberally, it’s problematic to do so only when the exemptions are religious in nature.

I don’t think any of this was super-convincing to C.

Anyway, I was telling E about this conversation, and E pointed out that when we think about rules, there are at least three levels: the letter of the law, the spirit of the law, and broad moral principles. I tend to care about broad moral principles and about the letter of the law (which I was, in the case at hand, following; the hypothetical ice cream was labeled as “kosher”, but not labeled as “for observant Jews only”). But the spirit of the law often moves me less. C, on the other hand, cares a lot about the spirit of the law. It’s unsurprising that I care strongly about the letter of the law, as both my parents worked as lawyers for most of my life. Also, I’m a software engineer and software is a field that is about the letter of the law (though recent discussions about undefined behavior in C are often about how strongly to adhere to an ill-thought-out standard, so maybe this isn’t a universal professional deformation).

I also think it’s possible that there are different moral principles at play. Religious folks (I don’t know whether or not C is religious, or has this belief) often think of respect for religion as a terminal value. Some non-religious folks think this too. So if, for example, someone describes the Book of Mormon as a kind of Bible fanfic, that comparison will rankle (even if they personally believe that, in fact, Mormons are mistaken and that Joseph Smith composed the Book of Mormon himself). This generic reverence for religion is not a value I share. Of course, if it comes up in conversation that someone is a member of religious group X and your first response is to say “X is false and bad”, that’s just being a jerk. But in an abstract philosophical conversation, I don’t think there’s a huge problem with comparing religious texts to non-religious texts — even low-status non-religious texts like fanfic. (The low-status bit is actually pretty important; the title of The Greatest Story Ever Told compares the Gospels to literature, and it is not regarded as disrespectful).

Also, I think that even among people who do have this value, it tends to reinforce existing power structures. For example, I have read that no non-Christian group has ever won a free exercise clause (of the US Constitution; RFRA is different) Supreme Court case. So it seems to me that one’s idea of which religious practices fit into this sort of reverence is colored by one’s personal experiences of religion, and those that one is exposed to through mainstream culture. That is, it often ends up being a facet of status quo bias: an inability to look at things with fresh eyes.

I don’t really have a conclusion here. I just thought E’s comment was so interesting that I decided to dress it up in a bunch of bloviation.

I loved Ben H. Winters’s Underground Airlines. It’s about an alternate history where instead of the Civil War, there’s a variant of the Crittenden Compromise. So there’s still slavery in a few states.

There was just one problem: a throwaway line about Carolina. That’s the state formed, in this alternate history, by the merger of North and South Carolina. This would never happen. The US political system gives more power to smaller states. What state would give up a senator (and maybe a representative) to join another? None. Ever. And this gets to the heart of why there was a Civil War in the first place.

In 1860, the (then chiefly southern) Democratic party had won three of the past six presidential elections. The slave states had between them about 40% of the Electoral College votes. They had about 45% of the Senate. But they only had about 1/3 of the population (and under 1/4 excluding slaves, who certainly weren’t going to fight for the South). The combination of the three-fifths compromise and the Electoral College led the South to dramatically overestimate their true strength. This, in my view, was a major cause of the Civil War. Nobody starts a war they don’t expect to win. But it’s very easy to fool yourself into thinking that you might win.

The way that democracy helps prevent civil wars is that a faction that loses an election knows that it’s outnumbered. By screwing with this function, the Electoral College increases the odds of a civil war in this country. (So do weird ways of counting prisoners). Leaving aside the fundamental unjustness of it, this is the true reason we ought to get rid of it.

Side note: The fourteenth amendment made the Electoral College unconstitutional at least at the current population numbers, but somehow no court has noticed this yet. Reynolds v. Sims found a state-level Electoral-College-like system unconstitutional. But there’s no reason that the logic of the case doesn’t apply to the federal system as well. The Senate too, of course.

All this is to say that you should read Underground Airlines, but ignore the Carolinas bit. It doesn’t affect the story at all.

I’ve done some development on Git. I’m pretty proud of it, because it’s a tool that powers so much of modern software development.

At Practice, I was asked to describe the difference between SVN and Git, and also between Perforce and Git.

The answer I gave goes like this:

“A Guide to SF Chronophysics” describes four types of time travel plots. Type 1 is deterministic — whatever happens, was what was destined to happen. There’s only one timeline. Type 3 is the one where someone steps on a butterfly and Trump is elected president. Type 2 is halfway in between — you can change things, but they tend to converge back to the original timeline. And finally, Type 4 involves multiple universes — every change (including time travel) creates a new timeline.

SVN is type 1. Git is type 4. When you “amend” a commit in Git, you actually create a whole new commit, forking off from the same parent as the previous one. You can use your time machine’s “reflog” functionality to see the old one. Similarly, rebase creates a new timeline from some point in the past.
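The Type 4 behavior can be sketched in a few lines of Python. This is a toy model of Git’s commit graph, not its real storage format: because commits are immutable snapshots identified by a hash of their content, amending one necessarily creates a sibling commit with the same parent, and the reflog remembers the abandoned timeline.

```python
import hashlib

def make_commit(parent, tree, message):
    # a commit is an immutable snapshot: its id is a hash of its content,
    # so "changing" a commit necessarily creates a new one
    data = f"{parent}|{tree}|{message}".encode()
    return {"id": hashlib.sha1(data).hexdigest()[:7],
            "parent": parent, "tree": tree, "message": message}

reflog = []  # every commit ever made, including abandoned timelines

def commit(parent, tree, message):
    c = make_commit(parent, tree, message)
    reflog.append(c)
    return c

base = commit(None, "tree-a", "initial")
first = commit(base["id"], "tree-b", "add feature")
# "amending" forks a new timeline from the same parent
amended = commit(base["id"], "tree-b2", "add feature, fixed")

assert first["parent"] == amended["parent"] == base["id"]
assert first["id"] != amended["id"]   # a brand-new commit, not an edit
assert first in reflog                # the old timeline is still reachable
```

Rebase is the same move applied to a whole chain of commits: new snapshots, new ids, old timeline left behind in the reflog.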

Perforce, I’m told, is somewhat like git, but it treats changesets rather than snapshots (“commits” in gitspeak, although in ordinary usage the term commit often refers to a changeset) as fundamental.

This is an instance of the mathematical notion of duality. The first example of duality I learned was polyhedra: if you swap the faces of a polyhedron with its vertices, you get a different polyhedron. The dual of a cube is an octahedron (known to gamers as a d8). Instead of six faces and eight vertices, it’s got eight faces and six vertices. The dual of a dodecahedron (d12) is an icosahedron (d20). The dual of a tetrahedron (d4) is itself. The Japanese addressing system is almost a dual of the US addressing system. In the US, we give addresses in terms of streets. In the Japanese system, blocks are the fundamental unit. I have been meaning for some time to design a game around the concept of duality, but I have not yet figured out quite how to do it.

Anyway, the graph of changesets is just the graph of snapshots with the vertices and edges swapped. Duality.

So David Albert wrote a tweetstorm about Plan 9 and about generality. I’ve reassembled some paragraphs for ease of quoting:

There is a ton of symmetry between messaging and late binding at the core of OOP, and private name spaces in Plan 9. With messaging in OOP, the decision about what code to run is made dynamically, as late as possible. With private name spaces, each process sees its own file system hierarchy. The /foo/bar/baz that I see might not be the same one you see. In a sense, private name spaces late bind file contents. This is a big deal when all system functions are accessed using files.

There’s a great quote from Kay in the Early History of Smalltalk, that I still don’t fully understand, but I think applies here.

“Smalltalk is a recursion on the notion of computer itself. Instead of dividing ‘computer stuff’ into things each less strong than the whole–like data structures, procedures, and functions which are the usual paraphernalia of programming languages — each Smalltalk object is a recursion on the entire possibilities of the computer.”

This seems pretty reasonable descriptively, but not really great software engineering: if every object has the entire possibilities of the computer, then every object can also fail in all the ways a computer can. Often what you want from a piece of code is less power, not more.

Recently I submitted a bug fix which illustrates one case of this: jgit was willing to write git tree entries with zero-length names. These entries represent, roughly, filenames. So by removing power, I was able to reduce bugs. This is sort of a small case of a power reduction — previously, the domain of the function was approximately all strings; now it’s all-but-one.

But let’s look at a stronger case: OpenSSL. OpenSSL famously had a wide surface area which allowed all sorts of use cases. Unfortunately, most of those use cases were wrong, from a security perspective. Maybe there’s room in the world for a security library where everything is permitted. But mostly I would rather use the library where only correct things are possible.

I guess this isn’t always true — I use a lot of Python, and when I’m writing Python to write SVG files, I don’t bother with an interface that would prevent me from making formatting errors. I just use print statements. But I probably would prefer the interface if I were programming for external consumption, as opposed to hacking together some throw-away code to get something else done.

Those are some special cases, but the most general reason for limiting what your code can do, is that limits make analysis easier. Valgrind has to do a tremendous amount of work to show that one particular run of your C code doesn’t have memory errors. Java simply never has that problem (C++ references don’t either). Regular expressions are far less powerful than full parsers, so it’s easier for a human reader to understand what they’re doing. Pure functions and immutable data structures are weaker than impure/mutable — but if you use a lot of them, it’s easier to track down where that stupid variable got changed. You can also build abstractions like map-reduce on top of them.
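A tiny Python sketch of that last point. Because the map step is a pure function and the reduce step is associative, something like map-reduce can be layered on top without any analysis of hidden state. The word-count example is the classic illustration, not code from any particular framework:

```python
from collections import Counter
from functools import reduce

def word_count(docs):
    # map: a pure per-document function with no shared state, so a
    # framework could run the calls anywhere, in any order
    mapped = [Counter(doc.split()) for doc in docs]
    # reduce: Counter addition is associative, which is what lets
    # map-reduce combine partial results in parallel
    return reduce(lambda a, b: a + b, mapped, Counter())

counts = word_count(["the cat sat", "the dog"])
assert counts == Counter({"the": 2, "cat": 1, "sat": 1, "dog": 1})
```

If the per-document function could mutate shared state, none of those scheduling freedoms would be safe: the weakness is exactly what makes the analysis easy.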

Which I guess gets to a point that David makes later:

I think the key idea is find uniform interfaces (the message, the file), make them as dynamic as possible, and build a system around that. Another striking thing about Plan 9 is that everything uses 9P – the remote file system protocol – both locally and remotely. If you didn’t have to interact with the outside world, you’d basically have only one network protocol for all services.

But this also reminds me of the STEPS project to build a complete system in 20,000 lines of code (also Alan Kay, et al). To do that, you have to discover powerful abstractions and use them everywhere. Having just one network protocol is a good start.

[rearranged from earlier]

Consider the Plan 9 window manager. It consumes a screen, a mouse, and a keyboard from the computer (/dev/draw, /dev/mouse, etc.)… and then re-exports virtual screens, mice, and keyboards to each of the windows that it makes. The programs in each window don’t know they’re in a window. You could run them w/o the window manager and they’d take up the whole screen.

In indexed-color (e.g. 256-color) graphics, which Plan 9 supported, there is a difference between being full-screen and being windowed; when you are full-screen, you have full control over the palette. When you aren’t, you have a sad negotiation problem.

Also, in a windowed mode, you can be partially covered up and then exposed, while in a full-screen mode, you can’t. So either the full-screen interface has to contemplate this possibility, or the windowed interface has to be artificially weakened.

Anyway, a file (or series of files) is the wrong interface to a screen. You want a higher-level interface that can do things like scrolling, or playing movies, or drawing textured triangles. These are both often hardware-accelerated, and this matters a lot for smooth graphics. This sort of rich interface is best accessed through a series of functions, which communicate, in part, by reifying objects (“a window”, or “a button”) so that they can be referenced.

Because I can write any old string to a file, there is nothing that will check for me whether I have written a string that does something meaningful (until I run my program). Plan 9’s use of C’s file reading APIs makes this even worse: are short reads or short writes possible? What do they mean? Sure, you could document that, but you shouldn’t have to; a good API is the documentation about what’s possible.

And to a reader of code, uniformity makes navigation difficult. What’s this piece of code doing? The same thing as all of the other code: reading and writing some files. At this point, strace is a more useful debugging tool than grep, since at least I can see which file is being read/written by a particular piece of code. Larry Wall once said, “Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in.” There’s more to life than visual appeal, but I do think there’s something to the idea that different tools should look different so you don’t accidentally grab the scalpel when you wanted the cautery pen.

I also don’t believe that local resources should be treated the same as remote resources. This is a seductive idea — they’re just streams of bytes, who cares where they’re stored? And sometimes, it’s reasonable: when you’re building casual software where you’re not going to think too hard about failure cases. But when engineering something that will see heavy use, it matters whether a read failed because of a network failure vs a disk failure. Network failures are recoverable; disk failures more-or-less aren’t. And often a stream isn’t the interface that you want for network communication anyway — something that’s datagram-based and best-effort is better for games and telephony.

And this is why basically nothing is 20,000 lines of code, and if anything is 20,000 lines of code, it’s “by shaving off as many requirements of every imaginable kind as you can”. As programmers, we deal with extremely heterogeneous systems. A carpenter might pound a thousand identical nails; we just write a nail-pounding function. So it’s not surprising that we end up with specialized rather than uniform interfaces, and it’s not bad either.

For the Power Broker game design contest, my friend Ed and I designed “Whipsaw!”. It’s a set-collection card game with lying. You can’t have a Robert Moses game without lying. We didn’t win the contest, but we had fun making and playing the game.

When I told some folks about this at NYC Playtest, I was told that people who like set-collection games don’t like lying games and vice versa, so nobody would ever play it. But in fact all of our playtesters liked it just fine. Also, poker is kind of that.

Random side note about poker: the notion of a game that is (almost) exclusively played for money is bizarre. Remember when MtG started there was this notion of playing for ante? And then people tried it and it was terrible and it never caught on. If someone invented poker today, as a Euro-style game, would people think of the real money thing as a gimmick? Would it be almost an art game, like Cordial Minuet or Train?

We didn’t do nearly as much playtesting on Whipsaw! as I’ve done on Loading Zone. It’s a much simpler game, and we were on a pretty tight timeline. It’s definitely not perfect: in playtests, players didn’t lie as much as we wanted them to. That might be because it’s hard to convince people to lie (or hard to do so in a set-collection game!). Or it might be because the incentives are wrong. But we weren’t able to figure out a way to improve the situation.

The game works like this: you’re trying to build parkways (there are four). The scoring is roughly quadratic: most parkway cards give victory points for each card in that parkway. So you would rather have all of one parkway than half of two. The cost of a parkway card is some number each of legislators, judges, and bankers. You have a hand of these resource cards, and draw more every turn. They’re played face-down, so you can lie about what you’ve played. Each type of card can also be played as an action: bankers lend money (which acts as additional bankers but costs points at the end of the game), legislators call bluffs, and judges temporarily block legislators, giving you a chance to “make it right”. To make lying more interesting, the resources have colored backs which give incomplete information about what they are. So, the cards with black backs are mostly judges — but not all. The game has a little more complexity, but that’s the gist.
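To see why roughly quadratic scoring rewards concentration, here is a back-of-the-envelope sketch in Python. It assumes, for simplicity, that every card scores one point per card in its parkway; the actual cards vary, since only most of them score this way:

```python
def parkway_score(n, points_per_card=1):
    # each card scores for every card in its parkway: n cards -> n^2 points
    return n * n * points_per_card

# concentrating beats splitting: eight cards in one parkway
# outscore four cards in each of two parkways
assert parkway_score(8) == 64
assert parkway_score(4) + parkway_score(4) == 32
```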

Whipsaw! came together pretty quickly: I wrote up a first draft, then Ed and I tested it. My version was too long: it had six parkways instead of four. And it had a weird complication: instead of judges blocking bluff calls, special lawyer cards would do it. Lawyers could also block opponents’ lawyers. (In this version, instead of using legislators to call bluffs, you would do it by paying the cost of it yourself). Scoring was roughly linear (and Lost Cities-inspired: you could actually go negative if you didn’t have enough cards in a parkway).

My notes on this playtest say that the major fun parts were:

  • It played pretty quickly.
  • Getting away with lying was fun.
  • Strategizing about lying was fun.
  • Lawyers were probably the most fun part.
  • The colored backs made lying more interesting.

And the notes say that the major flaws were:

  • It was too long — 30 minutes would be better.
  • There was maybe not enough lying.
  • There was definitely not enough calling of bluffs (in part because it was expensive).
  • Some of the complexity was silly.
  • It was maybe hard to know what resources to save in your hand, and in general save vs spend was not an interesting decision.
  • Set collection wasn’t that interesting.
  • It was hard to track what other people were doing (too many cards, in part).

Ed’s second draft fixed most of this. He added more of a narrative arc by dividing the game deck into three “years”, with more-expensive properties available in the later years. He reduced the number of roads, and adjusted the scoring. And he invented the rules about how legislators, judges, and bankers worked. The game was pretty close to the final form at this point.

I ran a few more tests — at NYC Playtest, and at Recurse Center, and made some minor tweaks. And then we declared the game done and submitted. We should probably have done some artwork.

Want to give Whipsaw! a try? Here are the print-and-play rules and cards.

One of my favorite games is Set. When I first discovered it, in 1998, I made a Java applet since I couldn’t immediately find the cards. It’s apparently been used as an example in a college course, which is pretty surprising given that it’s undergraduate Java (to be fair, I had hacked on it occasionally since). My applet used to be a fairly accurate representation of the original game, but a few months ago I got a letter from the lawyers for the company that makes Set. They were pretty polite, but I had to change the graphics. So now my applet has polka-dots.

Set is played on ℤ₃⁴. There’s a variant called Projective Set, which is played on something called a finite vector space — 𝔽₂⁶. Basically, that’s the six-bit numbers (but Projective Set excludes the zero card). The rule is that N cards can be removed if, for each bit position, the cards sum to zero mod 2. Or, perhaps more simply, each symbol appears an even number of times. Any seven distinct cards contain at least one such group.

[Four cards: tilde + square + triangle + plus; plus + star; square + plus + circle; tilde + triangle + plus + star + circle]

These four cards can be removed.
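The removal rule is just XOR on six-bit numbers, which makes it easy to check in Python. The symbol-to-bit assignment here is my own, and the four example cards above are encoded by hand:

```python
from functools import reduce
from itertools import combinations

# my own symbol-to-bit assignment; any fixed order works
SYMBOLS = ["tilde", "square", "triangle", "plus", "star", "circle"]

def card(*symbols):
    # a card is a nonzero six-bit number, one bit per symbol
    return sum(1 << SYMBOLS.index(s) for s in symbols)

# the four example cards from the text
cards = [card("tilde", "square", "triangle", "plus"),
         card("plus", "star"),
         card("square", "plus", "circle"),
         card("tilde", "triangle", "plus", "star", "circle")]

def removable(group):
    # summing each bit position mod 2 is just XOR
    return reduce(lambda a, b: a ^ b, group) == 0

def has_group(seven):
    # brute-force check that some subset of the seven cards is removable
    return any(removable(g) for n in range(2, 8)
               for g in combinations(seven, n))

assert removable(cards)   # every symbol appears an even number of times
assert has_group([1, 2, 4, 8, 16, 32, 63])
```

The seven-card claim is pigeonhole: the 127 nonempty subsets of seven cards have only 63 possible nonzero XOR values, so two subsets share a value, and their symmetric difference XORs to zero.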

One of Danielle’s co-workers lamented not being able to buy a copy, so I decided to make some. Unlike the one shown on the Wikipedia page, mine are accessible to the colorblind. I guess I should note that my version is in fact just called “Projective”, as Set is a trademark of Set Enterprises and I don’t want to annoy their lawyers any more than my applet already has. So now you can buy a copy. I’m not making money off of this because doing so would be a hassle; the price is what The Game Crafter set. I got mine yesterday, and it looks pretty good. Danielle beat me up some, and I enjoyed it.

This got me to thinking about what other mathematical objects could be used for pattern recognition games. I immediately thought of quaternions, and then got Hamilton stuck in my head. Did I mention that I’m not great at math? I had forgotten that the quaternions are non-commutative, making the game much trickier to design. But I guess I don’t need to be totally accurate to the math. As long as I keep i² = j² = k² = ijk = −1, everything will probably work out. The idea will be to find N cards whose product is one. I’m thinking of calling it Uno.
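Here is a sketch of how the winning condition might be checked, assuming cards show the eight unit quaternions ±1, ±i, ±j, ±k. The encoding and card format are my invention, not a finished design:

```python
# Multiplication table for the basis elements, derived from
# i^2 = j^2 = k^2 = ijk = -1. An element is (sign, basis).
TABLE = {
    ("1", "1"): (1, "1"),  ("1", "i"): (1, "i"),  ("1", "j"): (1, "j"),  ("1", "k"): (1, "k"),
    ("i", "1"): (1, "i"),  ("i", "i"): (-1, "1"), ("i", "j"): (1, "k"),  ("i", "k"): (-1, "j"),
    ("j", "1"): (1, "j"),  ("j", "i"): (-1, "k"), ("j", "j"): (-1, "1"), ("j", "k"): (1, "i"),
    ("k", "1"): (1, "k"),  ("k", "i"): (1, "j"),  ("k", "j"): (-1, "i"), ("k", "k"): (-1, "1"),
}

def mul(a, b):
    (sa, xa), (sb, xb) = a, b
    s, x = TABLE[(xa, xb)]
    return (sa * sb * s, x)

def product(cards):
    # fold left to right: order matters, since quaternions don't commute
    result = (1, "1")
    for c in cards:
        result = mul(result, c)
    return result

one, i, j, k = (1, "1"), (1, "i"), (1, "j"), (1, "k")
assert product([i, j, k]) == (-1, "1")     # ijk = -1
assert product([i, i, i, i]) == one        # these four cards would match
assert product([i, j]) != product([j, i])  # non-commutative!
```

The last assertion is the design problem in miniature: the same cards laid down in a different order give a different product, so the game would have to decide whether players choose the order.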

I even came up with some icons, based on the Swedish point of interest symbol:

icons for the quaternion game

Notice that each of i, j, and k can be combined with a rotation to form the -1 symbol, and i, j, and k can be overlapped to do the same.

This might be too easy, so maybe I’ll need to do ℚ2 or something. Or just go straight to 𝕆, whose multiplication table is too big to remember. But you can always use this simple and easy to understand diagram (from Wikipedia):

Fano plane diagram

I created a totally stupid game called “Word Freq”. It simply asks you which of two randomly chosen n-grams (presently, only 2-grams are included) appears more often in the Google n-gram database.

It’s kind of fun tho.

Play it here.

I’m downloading and processing the 3-grams now, but they’re really big — like several terabytes of data. Then I guess I’ll merge the 1-grams, 2-grams, and 3-grams, so that you can have interesting matchups like “conformity” vs “the policy of”.

My mom emailed me to ask about machine interpretation of laws (i.e. software judges and lawyers). I replied:

About fifteen years ago, I got into an argument about this in the context of a video game. The idea was that the game would have guards which would punish you if you did something illegal. The problem was that the game supported building machines, and Turing proved that no computer program can, in the general case, figure out what a machine will do. So if you built a sufficiently complicated machine, either you would get away with murder or be wrongly convicted.

Humans, of course, have the same problem, but we pretend not to.

Self-driving cars are a relatively easy case: only a monster would program in anything but pure consequentialism. Especially since, as a practical matter, self-driving cars are likely to be much safer than human-driven cars.

Before computers can replace lawyers in the general case, they will have to understand human language (in order to interpret contracts, laws, and precedents). This is the single hardest problem in computer science. We have made almost no progress here. If we can solve this problem, we have achieved general intelligence, and within a year or two nobody will have a job.

There are areas now where computers could assist us. For instance, let’s say we’re considering Solomon’s case: which of two women is the mother of a child. Imagine that one woman has brown hair and brown eyes, and the other has blonde hair and blue eyes. The child has brown hair and blue eyes. (We don’t have a DNA test for some reason). How do we combine our two pieces of data? There’s a fairly simple formula that does it precisely (given some odds of the various inputs). But right now, in order for a jury to use this formula, we need to call a statistician to the stand. And if the jury finds one of the women twice as credible, they don’t have a way to incorporate that data, even though the formula could easily do it. This would be a good first step, but for political reasons it would be shocking to me if we took it.
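The “fairly simple formula” here is Bayes’ rule in odds form: multiply the prior odds by the likelihood ratio of each independent piece of evidence. A sketch, where every number is made up purely to show the mechanics:

```python
def combine_odds(prior_odds, likelihood_ratios):
    # Bayes' rule in odds form: posterior odds = prior odds times the
    # product of the likelihood ratios of independent pieces of evidence
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# All hypothetical figures: odds that woman A (brown hair, brown
# eyes) is the mother rather than woman B (blonde hair, blue eyes).
prior = 1.0           # even odds before any evidence
lr_hair = 3.0         # a brown-haired child is 3x as likely if A is the mother
lr_eyes = 0.25        # blue eyes cut the odds for A by a factor of 4
lr_credibility = 2.0  # the jury finds woman A twice as credible

posterior = combine_odds(prior, [lr_hair, lr_eyes, lr_credibility])
assert posterior == 1.5   # 1.5 : 1 in favor of woman A
```

Note how the credibility judgment slots in as just another likelihood ratio, which is exactly the data a jury currently has no principled way to incorporate.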


I should have added: This thing where the law is designed to be interpreted by humans is famously hard for programmers to grasp. I had forgotten that the thing where computers are bad at interpreting language (yet good at following rules) is just as hard for lawyers. I think the lawyers’ confusion is part of what’s behind the recent encryption debate (roughly 40% of the members of Congress have law degrees).

Last weekend, I attended Feldcon, a one-day board gaming event where we played nothing but Stefan Feld’s games. I had never played a Feld before, but I figured if someone liked him enough to have a whole convention, he must be pretty good. I carefully prepared by reading the rules to Macao and Castles of Burgundy — but I didn’t end up playing either of them.

I think Feld is good, but mostly his games weren’t for me. Still, I learned something about what I like in games, and that’s valuable. I had been slowly working on designing a more Euro-style game, and now I have a stronger sense of what sort of play I want to encourage.

In all of the games I played, there were multiple ways to score. I do like games that have multiple viable strategies, so this should have been appealing (and in fact it often was). But one play was not really enough to learn to keep track of all of them (in some of the games), or figure out which I should be focusing on. Can you win at Bruges without touching your canal? I don’t know, but I sure didn’t!

I played:

  • Bora Bora: A clever action selection mechanism: each player rolls three dice, then takes turns placing the dice on various actions. Higher numbers are more powerful, but you can’t place a higher numbered die after someone has placed a lower number on a given action. The game had a lot of bits: cards, tiles for women, men, tasks, jewelry, resources, fish, offerings and turn order, along with dice, (cardboard) shells, and various wooden pieces. There was a satisfying variety of actions, but I found it a bit overwhelming to pay attention to all of the ways to score. I guess a second play would have helped here. It didn’t seem reasonable to put together a coherent strategy up front, because the dice and opponents’ plays could easily screw it up. Being forced to react to changing circumstances is something I like. This was probably my second-favorite of the games, although I think with more play, Trajan might displace it.

  • Trajan. The key innovation of Trajan is the Mancala-like action selection mechanism. Mastering this mechanic would allow more planning about what actions to take, but it seems like a pure computation exercise. I guess that could be fun — in theory, portions of my game-in-progress, Loading Zone, are like that. Unlike Loading Zone, you might want to explore a broader tree for Trajan’s Mancala, since you might want to be able to react to your opponents’ choices. Still, the Mancala thing was neat, which counts for a lot in a first play-through.

  • Strasbourg was my favorite of the games I tried. It’s got an all-pay auction mechanic that I really dug. As a first-time player, I managed to screw up and make one of my goals impossible in turn one — I didn’t realize that I would need to win every meat auction. I wonder if somehow moving the goal choosing later in the game would reduce the variance. But maybe if I were a stronger player, I would have chosen more goals and been less sad about losing one.

  • In the Year of the Dragon. A disaster aversion game. It turns out that I really don’t like games where you can end up in an unwinnable state and then have to sit there losing for the rest of the game. That didn’t happen to me; I did get a little screwed but mostly lost through some poor choices. But it was even painful to watch it happen to another player. I’m OK building something that’s less than optimal (but still scoring some points). And I’m OK with getting destroyed. But if I’m getting destroyed, I want it to be over quickly. In a two-player game, I can just resign, but Feld’s games seem mostly to be intended for three or more.

  • Bruges. Another disaster aversion game, although less so. The push-your-luck aspect made disaster aversion much more painful; it feels hideously unfair when you were playing pretty cleanly but just rolled the wrong six. Worse, there was one card which was basically a “screw your neighbor” (and another which made one player much more resilient), each of which seemed to dramatically change the game. The game seemed to want me to primarily build my canal and only secondarily recruit people, but the people were so much more interesting than the canal. I never seemed to have enough actions; each (4-action) turn you can actually recruit fewer than one person on average, since you will gain 1⅔ threat tiles and you need at least 3 actions (half of a worker-creation, one house, at least half of a money-action, the person) to recruit. I know that making hard choices is the essence of gaming, but I wanted to feel like I had lots of good choices, instead of having to constantly spend all of these interesting people I could be living with.

Across all of Feld’s games, I liked the multiple small decisions (especially in the rare cases where they were decisions under uncertainty). I generally liked the feeling of abundance — there are lots of things I could do! The two games I liked least had less of the abundance feeling, unsurprisingly. But because it was generally hard to choose actions (because I had to fuck around with Mancala stones, or hire workers, or have the right dice), it was hard to build up a strategy. The only choice was to be mostly reactive. By the “abundance of actions” metric, I think Bora Bora was the strongest of the games I played, followed closely by Trajan.

Overall, I’m super-glad I spent nearly twelve straight hours playing these games, because I know I’ll be thinking about them for a long time.

Thanks to Daniel for organizing, and to Brooklyn Game Lab for hosting.