
An Open Letter to ArenaNet

To Whom It May Concern at ArenaNet,

I am writing to express my deep concern and disappointment over ArenaNet’s behavior towards and firing of Jessica Price and Peter Fries.

For the record, my account name is Raellwyn.9416. As you can see, I’ve bought both expansions. Following the events of March 8th, I started occasionally buying Gems, the in-game currency, in order to support a company I believed actually stood up for diversity and representation in the gaming industry. I will no longer be able to do this.

As of this writing, there are only two possible explanations for your firing of both Jessica Price and Peter Fries. The first is that you consider the streamers who are associated with you to be nearly untouchable. You appear to believe any sort of mild refusal to suffer foolishness from them gladly is a fireable offense, as is any sort of mild defense of a colleague who does this. This is not how a reputable, professional company should treat its employees.

As a reminder, the in-game economist you used to have, John Smith, was occasionally condescending and rude to players on the official Guild Wars 2 forums. He was not fired instantly for it. Some examples:

“You’ve made two false assumptions, fix them both and you have your answer1.”


“I think you should stop and think about this a little bit, and then come back with another response2.”


“I think it’s hilarious when I’m looking at a million post patch salvage results and you tell me I’m wrong3.”


“Things YOU know as facts:

1. I can buy unlimited items off the Tp without any server limitations- FALSE

2. I can pick up unlimited items into my inventory from the TP without any server limitations – FALSE

3. I can set arbitrary buy order prices at unlimited speed without server limitations. – FALSE

4. Doing any of these things in reverse (selling) causes an error to pop up – what?

Things devs have stated as facts in the past:

-series of random ad hominems that clearly make your points more valid…right?

Just because you don’t see the technical implementation of some aspects of the systems doesn’t mean it’s magic4.”

He would also occasionally call people out when they weren’t helping the discussion:

“Prince, you’re not helping the discussion, if you continue to resort to ad hominem you won’t be welcome anymore5.”

At one point it seems he was allowed to both be caustic and then apologize for it:

“You’re adorable. Economists have ways to measure how much your parents love you and the scariest part of all… it works.”

“I’ve gotten a lot of mixed feedback about this comment and I wanted to agree that it’s unnecessary. It’s an overly caustic response to an accusation (which isn’t the worst thing in the world), but it’s been pointed out that this isn’t a two way street which changes the situation entirely (and makes it really bad). My community team are saints and allow me the time and medium to interact with the players, I’ll be focusing more on providing positive responses as much as I’m able.

Cheers all, questions welcome6.”

Beyond those few examples, I found his posts to be interesting and enlightening. I feel your obvious acceptance of him as a whole person, with his own opinions and frustrations, was the more appropriate way to treat an employee. I am disappointed you have diverged from that.

The second possibility is that, despite your recent denial, you were terrified enough of a Reddit mob to fire two employees in order to get on its good side. This possibility is far more concerning.

As I am sure you are aware, your actions do not exist in a vacuum. We live in a post-GamerGate world now. Every woman, LGBT person, and person of color who makes, writes about, or plays video games publicly runs the risk of stalking, threats, or worse. Your actions have made this environment materially worse, and are actively encouraging further bad behavior towards women and minorities in the video game industry. This is true regardless of your intentions.

Your actions are currently chilling attempts by game devs to talk about how everyday sexism and harassment affect their lives and their work. Your actions are chilling the ability of game devs with more privilege to stand up for and stand behind their colleagues who are being harassed. You have made it clear that your priority, as an employer, is not to stand against harassment of your employees. I can no longer recommend you as an employer to anyone I know.

Personally, I can no longer feel as safe playing Guild Wars 2, which is a game I have loved and enjoyed for years. I am a female player. Since you do not seem capable of standing up for your own employees, I cannot trust you to provide reasonable safety against in-game harassment. This is both frightening and disappointing. I also can no longer recommend your games to my friends.

I encourage you to take steps to make this right as quickly as possible, in a widespread and public way.

Sincerely,

Ardith Betz

Book Review: Superintelligence, by Nick Bostrom

Because I am very late to all the worst parties, I have finally read Superintelligence by Nick Bostrom. The hold waitlist at the library where I got it was sixteen deep, and yet I got my hands on a copy in only a few weeks, which probably says something awkward and rude.

This is not a good book. It does not make particularly large amounts of sense. It drives itself in maddening circles of vicious, unanswerable doom and then presents a secular prayer to an AI god-child as the most plausible answer to the apocalypse. You might be forgiven for having a different assumption. Maybe you’ve only heard of it from the Silicon Valley AI Safety set of precocious and adorable children who float high on the VC tides in their skiffs of concerned rationalism. I’m not likely to forgive you if you’ve actually read it and liked it.

Let’s get a few things out of the way. This book is somewhat overwritten. It also has a list of tables and figures and boxes. A few of them are related to the subject at hand. The subject at hand is how Nick Bostrom considers AI-based superintelligence1 to be the coming god-emperor, and involves figuring out a way to ask it politely to be good to us.

Superintelligence starts with a literary attempt called “The Unfinished Fable of the Sparrows.” Some sparrows have resolved to tame an owl to help them out with life, liberty, and the pursuit of happiness. A few smart and unheeded sparrows wonder about how they will control this owl once they’ve got it. There is no ending, and yes, it’s an unsubtle synopsis of the book as a whole.

If that were not enough, next is a preface in which Bostrom talks about how he’s written a book he would have liked to read as “an earlier time-slice of [himself],” and he hopes people who read it will put some thought into it. Please, we should all resist the urge to instantly misunderstand it. He would also like you to know how many qualifiers he’s placed in the text, and how they’ve been placed with great care, and he might be very wrong, but he’s not being falsely modest, because not listening to him is DANGER, WILL ROBINSON2.

Don’t ask me to make sense of that last bit, I’m not a professor of philosophy at Oxford.

The least boring parts of the book, where the argument is attempted, are Chapters 4-8 (What will this superintelligence be like, how quickly will it take over, and how bad will the resulting hellscape be?) and Chapter 13 (I bet we can avoid this hellscape through being very clever).

The first problem pops up when he starts discussing the explosive growth of the potential superintelligence in Chapter 4. Hardware bottlenecks are basically ignored. Software bottlenecks are ignored. Any other bottlenecks of any sort are definitely ignored. Instantly, there are no more bottles whatsoever, and suddenly the superintelligence is building nanofactories for itself because it thought up the blueprints very fast. At some point (handwaving) it achieves world domination. At some further point (faster handwaving) it is running a one-world government. And eventually (hands waving very excitedly) it overcomes the vast distances of spacetime and sends baby Von Neumann robots out to colonize the universe.

I can see how this sort of thing is very tempting. It’s difficult to imagine entities smarter than us, so it’s difficult to imagine them having any problems or hardships. I assume this is also difficult when your career and success therein seem to rely on your own intelligence, rather than sheer dumb luck. But failing to have the depth of imagination to consider a being-space between humanity and unknowable, omnipotent gods is concerning, on a scale comparable to H. P. Lovecraft.

A similar problem occurs in Chapter 8 (titled, excitingly, “Is the default outcome doom?”). Having satisfied himself with the inevitability of the new AI god, Bostrom runs through a titillatingly long list of ways in which it could turn out to be malignant, deceptive, or downright evil. Failing that, it could just misinterpret anything we try to tell it. Or maybe it could care about making paperclips way more than it cares about humans. We can’t do anything to stop it, therefore doom creeps over the horizon.

Again, this is largely a failure of imagination. There is no corresponding list of ways things could go right, or well, or even ambiguously. This is an argument that the worst possible case would be more terrible than we could survive. It is not an argument that the worst possible case is the most likely case, or even a fairly likely one. It’s important to remember the difference.

There are times when it’s useful to base your discussion on the worst possible case. But the worst possible case here is already several branches down a large logic tree that may or may not actually exist. It is not actual existential danger. It is a theoretical possibility of an existential danger that may or may not come into play should certain possibilities all coincide.

There are hundreds, thousands, millions of other theoretical possibilities of existential danger that Bostrom is not writing entire books about. A planet could collide with a large asteroid in another solar system and send a huge chunk of itself ricocheting directly towards us on a vector we’re ignoring. That big supervolcano under Yellowstone might get triggered because of an awkward reaction between a solar flare, the Earth’s magnetic pole switching polarity, and a bad bit of shale drilling, and yes, I’m obviously making this stuff up, but that’s kind of the point.

At this point in the story (and it is a story, more on that later, help this is going to be pages and pages) we realize we need to be very clever to keep a vastly terrible superintelligence from doing terrible things to us in devilishly creative ways. We can’t keep it from arising, because Bostrom has already told us that’s impossible. Maybe we can guide it? Persuade it? Subjugate it first? Control it? Keep it in a tiny box? He considers some of these more or less promising. There is a table. He quickly moves on from how to make a baby superintelligence have values to how to decide which values it should have, as that’s where he believes the trouble truly lies. Values are like wishes; they may not be interpreted the way we assume they will be.

In fact, they probably won’t be. Which is where we need to be very clever. As far as I can tell, Bostrom’s Big Answer to getting a superintelligence to be nice to us is something he’s borrowed from Eliezer Yudkowsky3. It’s called Coherent Extrapolated Volition, which will always be referred to as CEV for short and because I mean, really, those big words. I’m going to quote Yudkowsky’s definition as it’s quoted by Bostrom:

“Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together, where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”

I feel a little bad. It’s obvious that time and effort have been spent getting the wording just right, making sure all the bases are covered, trying to eliminate misunderstandings and misinterpretations. It’s a very lovely wishful statement about ourselves and our future!

But.

This is a religious statement. Despite the claims that this is a way of avoiding moral judgments and definitions, this is a prayer that the coming AI God will give us not what we want in our deeply irrational now, but what we ultimately will want in our best future, that God will be on our side, help us be better than we are. This is praying for God to be Good and Not Evil. This is praying away the Apocalypse.

We shouldn’t be surprised, we’re humans. Where else did you think we’d end up? Not writing religious stories about ourselves, our world and our futures? This is the sort of thing we do in our sleep. We write stories about the things that go bump in the night, or the things we’re afraid will go bump in the night. We write magical incantations to protect ourselves from the vast, cool intelligences that exist outside our ken, because we have to sleep somehow now that we’ve thought them up. We write stories to tell us that we have some say in our own lives, and in our futures, and in the futures of our children. Sometimes we even dress these stories up as academic books with roots in philosophy and computer science.

At any rate, this is not cleverly and rationally avoiding certain existential danger. In the end, a superintelligent AI as defined by Bostrom is not controllable, is not guaranteed to grow up in the way we want it to, and this CEV is merely a suggestion that it behave itself for our sake. Bit of a shame really. I’d honestly like to see him put some good work into something like AI safety, maybe some acknowledgment that algorithms and learning systems don’t have to be smarter than us or even all that advanced to make a hash of things because of the ways we program our faulty assumptions into them. But instead, this is what we have.

So, no. Nick Bostrom’s Superintelligence isn’t a good book; it doesn’t do what it sets out to accomplish. Bostrom hasn’t given us a warning about a definite existential danger. He also hasn’t given us a way to clearly see or avoid said existential danger. It’s not even a very good story; there is far better science fiction and fantasy and theological work being written every day. Go read some of those.


  1. Superintelligence mostly just means comprehensively and substantially smarter than humans.
  2. I apologize, I’ve been watching the Lost in Space remake.
  3. Of excessive reaction to Roko’s Basilisk fame.

Hugo: A Different Twitter Shortcode

Hugo has a handy little shortcode to embed tweets into a page. It takes the form1:

{{ < tweet [ tweet id ] >}}

So for example, in a Markdown page:

{{ < tweet 763188308662898691 >}}

Produces the embedded tweet on the rendered page.

Under the hood, it’s using the Twitter API to provide the embed code via GET statuses/oembed. Which is fine and all, but there are times when you don’t want the default embed style, and want to use some of the options the Twitter API provides. In a project, I wanted to hide the previous-tweet-in-the-thread display that Twitter provides by default, using the hide_thread option. (If I want to display threading, I can do it myself with hardcoding and CSS, using styling that’s a bit easier to follow.) The easiest way to turn off the default Twitter thread display was to make my own shortcode that I could call instead of Hugo’s internal one2.

In layouts/shortcodes, I have tweet-single.html, which looks like this:

<div>
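  {{/* Fetch the oEmbed JSON for the tweet id passed as the first shortcode argument (hide_thread and dnt both switched on), and output its "html" field. */}}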

  {{ (getJSON "https://api.twitter.com/1.1/statuses/oembed.json?dnt=1&hide_thread=1&id=" (index .Params 0)).html | safeHTML }}

</div>

And now I can call tweet-single just like I can call tweet:

{{ < tweet-single [tweet id x] >}}

I, uh, also told Twitter to turn off tracking for that embed via dnt, because I’m a decent and not at all paranoid person.

Now that everything is exactly the way I want it, I should probably upgrade Hugo and see what needs to be fixed.


  1. I’ve added extra spaces to the start of the shortcode in these examples to keep Hugo from trying to run them as actual shortcode calls. You’ll need to remove the space between "{{" and "<".
  2. All of this works until Twitter changes the API, or Hugo changes under my feet, obviously.

Hugo: Section Sorted by Taxonomy

One of the other weird-ish things I needed to be able to do for this site setup was the Projects page over there to the side. It’s the section page for the Project section, and on it I wanted to sort all my project posts, and only my project posts, by some sort of assigned type, rather than listing by date or whatever.

I’m doing this the cheaty way by just using the ‘tags’ taxonomy for blog posts, and using the ‘categories’ taxonomy only for project posts. I didn’t just want to link to the taxonomy list page, because then I would have to fiddle with the page name and title and whatnot to make it what I wanted. This meant that I needed to pull the full “category” taxonomy listing into the “project” section list template.
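For context, each project post just gets a category assigned in its front matter, something along these lines (the title and category here are placeholders rather than my actual ones):

+++
title = "Some Project"
date = "2018-03-01"
categories = ["tools"]
+++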

In the /layouts/project/project.html template file I have this in the main content <div>:

<section> 
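    {{/* Walk every term in the "categories" taxonomy: a heading per term, then that term’s pages as list items. */}}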
    {{ range $key, $value := $.Site.Taxonomies.categories }}
    <h2> {{ humanize $key }} </h2>
        <ul>
        {{ range $value.Pages }}
            {{ .Render "li" }}
        {{ end }}
        </ul>
    {{ end }} 
</section>
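The .Render "li" bit expects an "li" content view for these pages; a minimal sketch of one (not necessarily what mine looks like) lives in layouts/project/li.html or layouts/_default/li.html:

<li>
    <a href="{{ .Permalink }}">{{ .Title }}</a>
</li>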

It took surprisingly long for me to find a bit of sample code to modify for this, so here’s hoping I’ve provided another potential search result for someone trying to figure this out. The ‘humanize’ bit in there is because I am difficult and I like capitalization in my organizational structure.

This is using Hugo version 0.37.1 at the time of writing.

Hugo and Footnotes

Hugo does Markdown footnotes! Excellent. However, Hugo tends to assume you will never have more than one post with footnotes visible on a page, because Pretty Links.1 Awkward if you like multiple full posts on the homepage and you’re inordinately fond of footnotes like me. This is easily fixable! I had to spend some time figuring out how, so I’m putting the details here. (Current as of Hugo version 0.37.)
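For reference, the Markdown footnote syntax Blackfriday understands is the usual reference-style pair:

Here is a sentence that wants a footnote.[^1]

[^1]: And here is the footnote it points to.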

Footnote reference link styles are part of the Blackfriday Markdown engine internal to Hugo. They get adjusted in a separate blackfriday section in your site config file, using the plainIDAnchors setting. To turn off plain ID anchors, and have footnote reference links that reference the post ID as well as the footnote number, this needs to be in config.toml (in the root of your Hugo site directory):

[blackfriday] 
plainIDAnchors = false

If you’re using config.yaml, it’ll be:

blackfriday:
    plainIDAnchors: false
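Either way, the generated footnote reference links will now include the page’s unique ID as well as the footnote number, so footnotes from different posts on the same page stop colliding. Roughly (the hash here is invented for illustration):

<!-- with plainIDAnchors = true (the default) -->
<a href="#fn:1">1</a>

<!-- with plainIDAnchors = false -->
<a href="#fn:a1b2c3d4e5f6:1">1</a>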

And then you should be able to put all the footnotes you want wherever you want.


  1. In other words, the default Hugo settings assume you’ll prefer short reference links that point to ‘fn:1’ rather than longer, more defined links that point to ‘fn:[long unique ID]:1’.