N-Webz and me at IndieCade East

I gave a talk titled “Beyond the Scrabble Word List” at IndieCade East last weekend. It was a great experience! I’m very grateful to the IndieCade East organizers for giving me a chance to talk at such an amazing event. And such an awesome venue! I’ve never given a presentation on a screen quite as big as the screen in the Museum of the Moving Image’s Redstone Theater.

The gist of the talk is this: Scrabble’s rules and structure reward players who can spell words that are difficult to learn and difficult to spell. Being good at Scrabble is therefore an ersatz measure of “literacy.” But “literacy” isn’t a neutral concept; prescriptivism is a form of oppression and literacy is a privilege. The (implied) conflation of “being a good Scrabble player” and “being a good person” is one of the reasons that Scrabble (and other word games) can cause so much contention and bad feelings in play. One solution is to design word games with the same assistive technologies that people use in real life to cope with the difficult learning curve of the English language, like spellcheck and autocomplete.
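
As a toy illustration of the kind of assist I have in mind (my own sketch, not something presented in the talk), here is how a word game might offer autocomplete suggestions against its word list; the wordlist.txt filename is a placeholder:

```python
# A toy sketch of autocomplete as a word-game assist. "wordlist.txt"
# stands in for whatever word list the game uses, one word per line.

def completions(prefix, words, limit=5):
    """Suggest words the player could be aiming for, given a prefix."""
    return [w for w in words if w.startswith(prefix)][:limit]

with open("wordlist.txt") as f:
    words = [line.strip().lower() for line in f]

print(completions("qa", words))  # e.g. ['qadi', 'qaid', 'qat']
```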

The slides and notes (including citations) for the presentation are downloadable here.

The ideas in the talk are very raw, and reflect my own evolving thought on the topic. I’m not totally satisfied with the completeness of my own critique, and certainly the solutions I’ve proposed for the problem are very rudimentary. But I’ve gotten a lot of good feedback on the talk so far and welcome further comments!

(Photo by Tim Szetela)

FATGHONG CHDCK

POULTRI

A CONVERSATION

smiling face withface is a Tumblr bot I made that generates and posts glitchy versions of emoji, based on the open-source SVG files released as part of Twitter’s twemoji project. A Python program selects an emoji SVG file at random, adjusts the markup and numbers in the SVG file, and (optionally) recombines the paths in the selected SVG with paths from other emoji SVG files. The results are posted to Tumblr.
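
Here is a rough reconstruction of that pipeline (my own sketch, not the bot’s actual source). It assumes a local checkout of the twemoji repository; the directory path, jitter amount, and regexes are all my own choices:

```python
import random
import re
from pathlib import Path

# A rough sketch of the glitching pipeline described above, not the bot's
# actual source. Assumes a local checkout of Twitter's twemoji repository.
SVG_DIR = Path("twemoji/assets/svg")

def jitter_paths(svg_text, amount=4.0):
    """Randomly perturb the coordinates inside each d="..." attribute."""
    def perturb(match):
        jittered = re.sub(
            r"-?\d+\.?\d*",
            lambda m: f"{float(m.group(0)) + random.uniform(-amount, amount):.2f}",
            match.group(1),
        )
        return f'd="{jittered}"'
    return re.sub(r'd="([^"]+)"', perturb, svg_text)

def splice_paths(svg_text, donor_text):
    """Copy the <path> elements from a donor emoji into the target SVG."""
    donor_paths = re.findall(r"<path[^>]*/>", donor_text)
    return svg_text.replace("</svg>", "".join(donor_paths) + "</svg>")

files = list(SVG_DIR.glob("*.svg"))
source, donor = random.sample(files, 2)
glitched = jitter_paths(source.read_text())
if random.random() < 0.5:  # recombination is optional, as described above
    glitched = splice_paths(glitched, donor.read_text())
Path("glitched.svg").write_text(glitched)
```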

Emoji is among the most successful symbol systems in the history of writing, and it’s coming close to achieving the universal success envisioned by the likes of Blissymbols and Isotype. My “glitched” emoji are intended to bring to the surface the material nature of emoji, so we can better understand what it means to communicate with one another using them.

I made this for a few reasons. One of the main suggestions that people have for Library of Emoji is that there should, you know, be actual EMOJI that correspond to the randomly-generated names. Of course, there’s no easy way to do that computationally (at least in a way that I think would satisfy me from an aesthetic standpoint). But when Twitter released their twemoji files, I instantly knew I wanted to do something with them. I had already been working on a little script to mash-up Google’s Material Design Icons, so I repurposed it for the twemoji files and let it run. I was happy with the results. Darius suggested that I make the bot post to Tumblr (instead of Twitter), which I think was a great suggestion, given the visual nature of the bot. (Though you can follow the bot on Twitter as well, if you’d like.)

Occasionally, the bot will post a “conversation” (between two unnamed entities using, I presume, iOS devices equipped with glitch-emoji capabilities), so you can see what the emoji might look like in context.

The names are generated from a database of Unicode codepoint names, mangled with a little library of functions I’ve been working on.
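
Here’s a toy version of that kind of mangling (not the actual library), using Python’s built-in unicodedata module to look up a codepoint’s official name and then corrupting it at the character level; the corruption rates are arbitrary:

```python
import random
import string
import unicodedata

# A toy illustration of name-mangling, not the actual library: look up a
# codepoint's official Unicode name, then corrupt it with random
# substitutions and deletions.

def mangle(name, rate=0.2):
    out = []
    for ch in name:
        roll = random.random()
        if ch.isalpha() and roll < rate:
            out.append(random.choice(string.ascii_uppercase))  # swap a letter
        elif roll < rate * 0.25:
            pass  # occasionally drop a character entirely
        else:
            out.append(ch)
    return "".join(out)

print(mangle(unicodedata.name("\U0001F414")))  # CHICKEN -> e.g. "CHDCKEN"
```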


Excerpt from "I Waded in Clear Water"

Excerpt from “I Waded in Clear Water”

Last November, I participated in National Novel Generation Month (NaNoGenMo), an event in which participants are encouraged to write a computer program that generates a novel. Originally conceptualized by Darius Kazemi as a cheeky alternative to NaNoWriMo, the event has inspired programmers and writers to create some really beautiful work.

My contribution is a procedurally generated novel called I Waded in Clear Water. The primary source text for the novel is Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted, with footnotes built from information gleaned from ConceptNet and WordNet. You can read more about the process I used to generate the novel, and see the Python source code, at my NaNoGenMo 2014 GitHub repository.
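
As a rough illustration of where footnote material like that can come from (a generic sketch, not the novel’s actual pipeline), here’s how you might pull a gloss for a word out of WordNet with NLTK; it assumes the wordnet corpus has been downloaded:

```python
from nltk.corpus import wordnet as wn

# A generic sketch of mining WordNet for footnote material; not the code
# from the novel's repository. Assumes nltk.download("wordnet") has run.

def footnote_for(word):
    """Return a short dictionary-style gloss for a word, if WordNet has one."""
    synsets = wn.synsets(word)
    if not synsets:
        return None
    sense = synsets[0]  # the most common sense is listed first
    related = [lemma.name() for lemma in sense.lemmas()][:3]
    return f"{word}: {sense.definition()} (see also: {', '.join(related)})"

print(footnote_for("water"))
```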

I read some excerpts from the novel and gave a presentation about it at WordHack on January 15th. Here’s the presentation deck in PDF format.


@eventuallybot

The Infinity

I made a new Twitter bot: @eventuallybot. It generates short, silent films in GIF format, based on randomly-selected snippets of YouTube videos. As of this writing, the bot has generated nearly 300 tiny films!

The code is written in Python and makes heavy use of Connor Mendenhall’s wgif program and ImageMagick. I used my new Python library, My Dinosaur, to generate an RSS feed for the bot (a first for me!), which you can subscribe to here.

I’ve had the idea for this bot for a while. I’ve been interested since my undergraduate linguistics days in the idea of textual cohesion—the methods and strategies that language speakers employ to make the units of the text (lines, sentences, stanzas, paragraphs, etc.) come together as a whole. In particular, I’m interested in how just mimicking the surface forms of cohesion (by, e.g., pronoun substitution, anaphoric/cataphoric demonstratives, or even just lexical repetition in the form of anaphora) can make generative text feel like it’s telling a story, even if the text doesn’t have any kind of underlying semantic model.

With @eventuallybot, I wanted to experiment with some of these concepts. The experiment, specifically, was this: if you take random bits of video, and splice them together with titles that suggest the contour of a story, how often will you get a result that feels at least sort of cohesive?

So I made a big list of transition words—essentially, conjunctions and phrases that function as conjunctions—and (inspired by Labov’s narrative analysis) lightly categorized them like so (a sketch of how these might be stitched into a film follows the list):

  • beginning phrases (phrases that start a story, like “once upon a time”)
  • “and-then” phrases (phrases that move the story along a bit in time, like “after that”)
  • continuing phrases (phrases that introduce a second situation or complicating factor, like “meanwhile” or “nearby”)
  • concluding phrases (phrases that introduce an explanation of how the story is resolved, like “therefore” or “to summarize…”)
  • ending phrases (like “The End”)
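
Here is the sketch promised above (my own reconstruction, not @eventuallybot’s actual source): it renders one phrase from each slot as a title card and splices the cards between pre-cut clips. All filenames are placeholders, and it assumes ImageMagick’s convert is installed:

```python
import random
import subprocess

# A rough reconstruction of the assembly step, not the bot's actual code.
# Assumes the clips were already cut to GIF (e.g. with wgif) and that
# ImageMagick's "convert" is on the PATH; filenames are placeholders.

PHRASES = {
    "beginning": ["once upon a time", "one day"],
    "and_then": ["after that", "eventually"],
    "continuing": ["meanwhile", "nearby"],
    "concluding": ["therefore", "in the end"],
    "ending": ["The End"],
}

def title_card(text, filename):
    """Render a phrase as a single-frame GIF title card with ImageMagick."""
    subprocess.run([
        "convert", "-background", "black", "-fill", "white",
        "-size", "480x270", "-gravity", "center",
        f"label:{text}", filename,
    ], check=True)

clips = ["clip1.gif", "clip2.gif", "clip3.gif", "clip4.gif"]
slots = ["beginning", "and_then", "continuing", "concluding"]

sequence = []
for i, (slot, clip) in enumerate(zip(slots, clips)):
    card = f"title{i}.gif"
    title_card(random.choice(PHRASES[slot]), card)
    sequence += [card, clip]
title_card(random.choice(PHRASES["ending"]), "end.gif")
sequence.append("end.gif")

# Passing several GIFs to convert concatenates their frames into one
# animation. (A real version would also set per-frame delays so the
# title cards linger on screen.)
subprocess.run(["convert"] + sequence + ["film.gif"], check=True)
```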

As Mark Sample pointed out on Twitter, filmmakers are already familiar with the “Kuleshov Effect,” which describes how viewers will tend to see two shots juxtaposed in montage as being narratively related. To be sure, the titles in @eventuallybot’s films are a bit less subtle than straight-up cuts between shots. But I kind of enjoy how @eventuallybot (at its most coherent) feels like it’s telling an anecdote with its clips, not just implying a narrative connection among them.

One reason I wanted an RSS feed for this bot is Twitter’s handling of the GIF format. Twitter “supports” GIFs, but transparently converts them after upload to a different video format, and (as far as I can tell) throws away the original GIF data. This is probably the right move on Twitter’s part, since GIFs aren’t (byte-for-byte) a very efficient format for storing video, but I wanted people to be able to save and share the GIFs as they were originally generated. So the RSS feed updates at the same time as the bot itself, and it links to the original GIFs.

Task complete

If it hasn’t already happened by the time you read this, it will happen soon: @everyword’s seven-year mission to tweet “every word in the English language” has come to an end. I hope you’ve all enjoyed the ride!

My plan is to write a more complete post-mortem on the project later. In the meantime, this post contains some links to things that followers of @everyword might find interesting or useful.

The future of @everyword

But first, a word about what’s next for @everyword. Don’t unfollow just yet! My plan at the moment is to let the account rest for a bit, and then run “@everyword Season 2,” starting over from the beginning of the alphabet. Before I do that, I’d like to find a more thorough word list, and also do some programming work so that the bot is less likely to experience failures that interrupt service.

Writing about @everyword

Here’s some writing about @everyword, by me and others.

Writing about Twitter bots

@everyword is a Twitter bot—an automated agent that makes Twitter posts. There are a lot of interesting Twitter bots out there. Here’s some interesting writing by and about bot-makers.

What to follow

Here are some Twitter bots that I think followers of @everyword might enjoy.

Thank you!

The response to @everyword has been overwhelming. When I started the project in 2007, I never would have dreamed that the account would one day have close to 100k followers. And if you’re one of those followers, thank you! It’s a great feeling to have made something that so many people have decided to make a daily (or, uh, half-hourly) part of their lives.

I view @everyword as a success, and I want to note here that I owe this success to all of my friends and family who encouraged me along the way and helped to make @everyword a topic of conversation. I am very bad at finding value in the things I make, and I’m especially bad at self-promotion. Without the help of the people close to me, I’m sure that @everyword would have completed its task in obscurity—if it completed its task at all.

scrabble sucks screencap

I gave a talk at !!Con a few weeks ago. The talk was called “Scrabble Sucks! Toward higher-order word games.” The talk is about some problems I have with Scrabble, and some of the games I’ve made in response to those problems. Download the slides and notes here. The PDF also includes a few slides I didn’t get to in my actual presentation, comparing a sizable corpus of Scrabble games to Lexcavator’s list of all words that players have ever found.

I had a lot of fun participating in !!Con. I was a little nervous talking right before Mark-Jason Dominus, whom I venerated back in my Perl-slinging days, and whose Higher-Order Perl is what I was riffing on with the subtitle of my talk (except I wasn’t talking about higher-order functions; I was talking about higher-order n-grams). But everything worked out okay, and I’m glad I got to give my talk to such an enthusiastic and receptive crowd.

Here’s a list of links to student projects made for Reading and Writing Electronic Text, along with a brief description. I thought all of my students did great work this year, and it was a pleasure to teach them!

Centrality by Aankit Patel. I guess I could describe this as “lexicographical dataglitch-punk.” There is an online version available.

Bing Huang’s TextFinal is an audiovisual meditation on the Declaration of Independence.

Caitlin Weaver produced Susan Scratched, a poem glitched through with a distinctive kind of repetition. A lovely performance of the poem is available on SoundCloud.

Clara Santamaria Vargas made Gertrude’STime, which takes the phrase “in the morning there is meaning, in the evening there is feeling” from Tender Buttons literally, and generates poems accordingly.

Dan Melancon’s L SYSTEM POEM SYSTEM POEM L POEM L SYSTEM does what it says in the title: applies a Lindenmayer System to an input text. The output exhibits a strangely hypnotic form of uncanny alien repetition.

Eamon O’Connor’s final project uses the CMU pronouncing dictionary to produce metrical verse. Several examples are included on the page.

kepler-186

Hellyn Teng’s final project was Kepler-186, a multimedia poem about exoplanets and home economics. Documentation includes sound snippets and screenshots.

Jason Sigal’s write-up of his final project, The Phrases and Pronunciation, is fantastic—he goes into detail about his process and the technical and conceptual decisions that he made along the way.


John Ferrell made a Twitter bot called the Rambling Taxidermist, whose inspiration and inner workings he has written up here. The bot responds to tweets about marriage with ill-advised, mashed-up advice composed partially from a taxidermy handbook.

Michelle Chandra created a lovely poem about loneliness, drawing upon a corpus of well-known quotes. I love the repetition and alliteration in this one; it’s well worth reading.

Ran Mo’s Birdy News juxtaposes Twitter jokes with New York Times headlines to often humorous effect.

reading-between-the-lines

Robert Dionne’s final project, Reading Between the Lines, generates multidimensional poems from distributed word vectors.

Salem Al-Mansoori’s final project, @_all_of_us, is a machine for creating random platitudes and aphorisms. It takes the form of a Twitter bot and a generative comic strip.

Sam Lavigne’s program to Transform any text into a patent application has attracted a lot of press attention, so you might have already seen it! Original and well-executed. I love this project.

Sheri Manson created a series of three poems, based on words and phrases drawn randomly from an interesting selection of source texts.

Uttam Grandhi’s project The Baptized Pixel uses image data to generate poems with binary-like repetition.

Vicci Ho made a Twitter bot, @onetruewiseman, that combines the social media wisdom of several conservative luminaries.

PoetryDrones

UNMANNED POETRY DRONES
Friday May 9th, 2014
7pm
721 Broadway, New York, NY
Tisch Common Room (ground floor)
FREE!

Please join us as a semester of experimentation with procedural text culminates in a one-night-only performance of computer-generated poetry. Seventeen students at NYU’s Interactive Telecommunications Program will take the stage and, with their voices, set aloft poems and prose produced by programs of their own design. You are likely to encounter: poems made from pixels, automated propaganda, lexicographical cut-ups, Twitter bots, and more.

Reading and Writing Electronic Text is a course offered at NYU’s Interactive Telecommunications Program (http://itp.nyu.edu/itp/). The course is an introduction to both the Python programming language and contemporary techniques in electronic literature. See the syllabus and examples of student work here: http://rwet.decontextualize.com/

Poster design by Caitlin Weaver (http://www.phasesofsputnik.com/). Print-quality poster art can be downloaded here.

Odds and ends

Here are a few things I made recently that didn’t make it to my blog.

Pizza Clones

pizza

Here’s yet another Twitter bot: Pizza Clones. Every two hours it generates a joke in the form of “Every {NOUN} is a(n) {ADJECTIVE} {NOUN} when/if/as long as {SUBORDINATE-CLAUSE}.”

It’s an attempt to imitate and elaborate on the joke “Every pizza is a personal pizza if you try hard enough and believe in yourself.” That joke in particular is hard to attribute to one person (appearing on Twitter here as early as 2010, more recently here and here), but the general syntactic pattern is found in well-known bits by Mitch Hedberg (“every book is a children’s book if the kid can read”) and Demetri Martin (“every fight is a food fight when you’re a cannibal”).

The bot works by first searching Twitter for tweets containing a phrase in the format “this isn’t a(n) {ADJECTIVE} {NOUN}” and then using a Pattern search to identify and extract the ADJECTIVE and NOUN. It then searches Twitter for phrases that match the string “{NOUN} if” (and “{NOUN} unless”, “{NOUN} as long as”, etc.), and extracts the rest of the clause following the “if.” (There’s some more NLP behind the scenes to ensure that the “if” clause will fit the joke syntax.) Then it jams the NOUN, ADJECTIVE and subordinate clause into the format of the joke and tweets it out to the world. It does this every two hours. Links to the tweets from which the substance of the joke was extracted are included in the body of the tweet, for attribution purposes. The bot keeps a small database of previously used clauses to prevent it from repeating itself too frequently.
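
Here’s a simplified reconstruction of that pipeline. The real bot uses the Pattern library and the Twitter search API; in this sketch, plain regexes do the extraction and fetch_tweets() is a canned stand-in for the search call:

```python
import re

# A simplified reconstruction of the joke pipeline, not the bot's source.
# Plain regexes stand in for Pattern's NLP, and fetch_tweets() returns
# canned examples instead of live Twitter search results.

def fetch_tweets(query):
    """Stand-in for a Twitter search call."""
    if "isn't a" in query:
        return ["ugh, this isn't a personal pizza"]
    return ["you can eat a whole pizza if you try hard enough"]

ADJ_NOUN = re.compile(r"this isn't an? (\w+) (\w+)", re.I)

def make_joke():
    # Step 1: pull an adjective/noun pair from a complaint-shaped tweet.
    for tweet in fetch_tweets('"this isn\'t a"'):
        match = ADJ_NOUN.search(tweet)
        if match:
            adjective, noun = match.groups()
            break
    else:
        return None
    # Step 2: find an "if" clause attached to the same noun elsewhere.
    for tweet in fetch_tweets(f'"{noun} if"'):
        match = re.search(rf"\b{re.escape(noun)} (if .+)", tweet, re.I)
        if match:
            clause = match.group(1)
            break
    else:
        return None
    # Step 3: jam the pieces into the joke template.
    article = "an" if adjective[0].lower() in "aeiou" else "a"
    return f"Every {noun} is {article} {adjective} {noun} {clause}."

print(make_joke())  # Every pizza is a personal pizza if you try hard enough
```

Run against the canned examples, this reproduces the original “personal pizza” joke; pointed at live search results, the same three steps yield the stranger mash-ups the bot actually posts.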

I’ll admit that this is a pretty obscure joke, but I’m really happy with the output. The noun—occurring both in the adjective/noun pair and adjacent to the “if” clause—gives the jokes a semantic anchor, but the fact that the text is grabbed from two different tweets (and two different contexts) keeps the jokes surprising and weird. I’m also pleased at how almost all of the tweets feel grammatical, given the limited degree of NLP involved in the procedure. Follow Pizza Clones on Twitter!
