Last Baby Standing

Last Baby Standing is a game/sim/toy for Facebook, made during last weekend’s Global Game Jam. The game generates statistics for your Facebook friends, then lets you “mate” any two together, producing statistics and a unique biography for each “child.” The game tied for first place in the “Wild Card” category in NYU Game Center’s chapter of the jam. I was part of the extremely talented crew that made this game—here’s the GGJ page for the game, which includes full credits. And here’s the Game Center’s write-up of the event, which includes a full list of winners and links to all of the games.

Oh, and here’s the GitHub repository.

The theme of this year’s jam was “extinction,” which we found a bit difficult to work with. For the first few hours on Friday night, we worked on an abstract puzzle/gambling game based on the definition of “extinction” in psychology. (The initial prototype of that game is still in the repository as mimetree.py.) We couldn’t figure out how to make that fun, so we searched for alternative ideas; Last Baby Standing is the result. I’m extremely happy with how we were able to corral all of our technical and creative talents to make something interesting and fun that (mostly!) works great.

Things I learned (mostly technical):

  • FQL is a finicky playmate. Queries that work fine for 200 friends time out with 400. (We used LIMIT clauses and ORDER BY RAND() to get around this limitation. I didn’t know FQL even supported those clauses!)
  • Tornado’s Facebook Graph authentication mixin doesn’t work right out of the box. I needed to make some changes to the example code and also use the version fresh from the repository (rather than the currently released version).
  • If the whole comedy writing thing doesn’t pan out for him, Rob Dubbin has a real future in generative baby biographies.
  • All you need to produce satisfying portmanteaunomastics is about ten lines of Python code and a regular expression (see the sketch after this list).
  • It is possible to get a decent amount of sleep during the Global Game Jam. You just need to feel confident in the talents and time management skills of your teammates.
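
Here’s a rough sketch of what I mean (a reconstruction of the idea, not the actual code from the repository): use a regular expression to split each parent’s name at a vowel cluster, then glue the pieces together.

import re

# A rough reconstruction of the idea, not the code from the repository:
# blend two parent names at vowel boundaries to produce a "baby" name.
def baby_name(a, b):
    # everything up to and including the first vowel cluster of one name...
    head = re.match(r'^[^aeiouy]*[aeiouy]+', a.lower())
    # ...plus everything from the last vowel cluster of the other to the end
    tail = re.search(r'[aeiouy]+[^aeiouy]*$', b.lower())
    if not head or not tail:
        return (a + b).capitalize()  # fall back for names without vowels
    return (head.group(0) + tail.group(0)).capitalize()

print(baby_name("Gertrude", "Leopold"))  # prints "Geold"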

I hope everyone enjoys the game! Thanks to the NYU Game Center for hosting, and to Matt Parker in particular for keeping everything running smoothly.


Since the 2011 version of Reading and Writing Electronic Text begins tonight, I thought I would finally post these photos of last year’s performance event. (Photos courtesy master photographer Rob Dubbin.)

Here are some of the final projects that came out of last year’s class:

Stay tuned for updates about this year’s class!


“Be a good chance of sticking a fork in my eye. Temptations off so that people with garden implements.”

This is just one of a practically infinite number of New Year’s resolutions that can be generated by my latest project, A Random Resolution for 2011. I collected about 50,000 tweets matching the term “resolution” on December 31st, 2010 and January 1st, 2011, used a simple grep to extract substrings that looked like resolutions, and fed the whole thing into a Markov chain text generator.

I love using Markov chain text generators on a corpus like this because they manage both to highlight the similarities among all of the items in the corpus (any given string of characters is likely to have occurred more than once) and to juxtapose parts of seemingly unrelated items in surprising (and often amusing) ways.

Technical details: I built the application in a few hours using Tornado, an open-source web framework from the FriendFeed team at Facebook. The application is running behind an nginx server on an EC2 micro instance (a product I’ve wanted to try since Amazon released it last year). I’m amazed at how these tools made it quick and easy to throw the whole thing together. The text generator is using n-grams of eight characters; I chose eight because seven or fewer characters produced too many non-words, while nine too frequently reproduced tweets unchanged from the source text.
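
For the curious, here’s the general shape of a character-level Markov generator like the one described above. This is a simplified sketch rather than the code running on the site, and the corpus filename is made up.

import random
from collections import defaultdict

ORDER = 8  # n-gram length, in characters

def build_model(text, order=ORDER):
    # map every n-gram in the corpus to the list of characters that follow it
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=140, order=ORDER):
    # start from a random n-gram, then repeatedly pick a plausible next character
    gram = random.choice(list(model.keys()))
    output = gram
    while len(output) < length:
        followers = model.get(gram)
        if not followers:
            break
        output += random.choice(followers)
        gram = output[-order:]
    return output

corpus = open("resolutions.txt").read()  # hypothetical filename for the tweet corpus
print(generate(build_model(corpus)))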


Lipstick Enygma by Janet Zweig is an amazing physical/electronic public text generator, made for the Harris Engineering Center at the University of Central Florida, Orlando. Watch the video below:

The page linked above has more examples of text that the piece is capable of generating (“Modemheads in nerdistan!”, “Hack into me mintily.”). I would love to know what algorithm underlies the text generation! (via today and tomorrow; see also Zweig’s Impersonator, a similar piece from 2002)


(update: I made each word in the list after the cut into a Google search link, so you can see which -toberfests are -tobertaken. I can’t believe “molttoberfest” doesn’t exist!)

AUTUMN IS UPON US, and you know what that means: the sudden appearance of neologisms and portmanteaux designed to mimic the word “oktoberfest.” Rocktoberfest, Shacktoberfest, pop and lock-toberfest. It’s an annual profusion of textual creativity! And as readers of this blog should know, where there is a profusion of textual creativity, there is a text generator waiting to happen.

So I took it upon myself to create a -toberfest portmanteau generator with the tools most readily at hand: grep and awk. Here’s the command line I ended up with:

egrep '^[^aeiouy]*(o|aw)[^aeiouy]?[cfhkptx]+$' sowpods.txt | awk '{print $0 "toberfest"}'

The source file sowpods.txt is my standby English word list for text generation tasks. The regular expression reads: “find me every word that has o or aw following zero or more non-vowel letters at the beginning of the word, perhaps followed by a single non-vowel letter, and ending with one or more of any of the following letters: c, f, h, k, p, t, or x.” The awk program appends the string toberfest to matching words and prints them out.
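
For anyone allergic to grep and awk, here’s a rough Python equivalent of the same pipeline (same regular expression, same word list):

import re

pattern = re.compile(r'^[^aeiouy]*(o|aw)[^aeiouy]?[cfhkptx]+$')

with open("sowpods.txt") as wordlist:
    for line in wordlist:
        word = line.strip()
        if pattern.match(word):
            print(word + "toberfest")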

The full list of portmanteaux that this simple program generates (all 365!) is below the cut, but here are a few of my favorites:

  • Miami Heat fans! Start the NBA season out right with Boshtoberfest!
  • Spoonflower announces a two yards for one deal during Clothtoberfest!
  • Gather all ye dandies in your finest lederhosen as you celebrate Foptoberfest! (related: Tofftoberfest)
  • If your -toberfest has a seating capacity of 99 to 500, and you’re not in the “Broadway Box,” it’s technically an offtoberfest.
  • Why yes, there is a festival specifically for the nineteenth letter in many Semitic abjads. It’s Qophtoberfest!
  • When, oh when, during the year can we get together to sharpen and polish our razor blades? Why Stroptoberfest, of course!

Lochtoberfest is already exactly what you expect it would be.

In generating this list, I had two criteria: (a) that around 90% of the generated strings “feel right” and (b) that the string “scotchtoberfest” be included in the results.

Criterion (b) was easily met, but (a) was not so easy. What does it mean for a -toberfest portmanteau to “feel right”? It’s highly subjective. For me, the quality of the vowel sound is key: the initial vowel in the portmanteau must rhyme with the initial vowel in “october.” The length of the vowel also matters: the shorter the better, which is why my algorithm selects only words ending with voiceless consonants. (More on allophonic vowel length in English.) I singled out monosyllabic words simply because they’re easier to grep for.

I’m pleased with the results. A few quick googles reveal that many of these words refer to existing festivals, but many return no results (“did you mean bocktoberfest?”). Let me know if this list inspires you to create your own -toberfest, or if you have suggestions to improve my greps and awks.

Here’s the full list:


I love Google Scribe. It’s the ultimate achievement in oulipian writing tools; it’s the flarfist’s only typewriter. It’s the best thing I can imagine someone doing with Google’s resources. It could be an amazing performance tool. Who’s with me on this?

The most remarkable thing about Scribe is that nearly everyone who encounters it for the first time uses it for creative writing. The MetaFilter thread is a good place to find examples. Twitter is already replete (1, 2, 3) with Scribe writing games.

Here’s me throwing my hat into the ring, with a version of William Carlos Williams’ “This Is Just To Say” where each line has three or four Scribe autocompletes tacked on:

This is just to say that they are

I have eaten here several times
the plums and their families
that were in their early
the icebox and then

and which are not
you were probably too young
saving the file to disk
for breakfast and lunch

Forgive me allahabad bank
they were delicious and they
so sweet and innocent
and so cold that they were

(bonus: Scribe is a prude, after the break)



Last Saturday, Socialbomb held its first Hack Day.

I had two goals for Hack Day: (1) get a PS/2 keyboard talking to an Arduino and (2) make something interesting with processing.py. Here’s the end result (make sure to click through to the full-screen version for maximum legibility):

Crazy Animal Stories Keyboard from Adam Parrish on Vimeo.

It’s the Unexpected Animal Stories Keyboard, a keyboard which intermittently replaces whatever you’re typing with an Unexpected Animal Story.

The part of this project that I expected to be difficult turned out to be easy: getting a PS/2 keyboard talking to an Arduino was a piece of cake. I already had a bunch of mini-din connectors; I just soldered one up to a breadboard, hooked it up to my trusty Arduino Diecimila, put the excellent ps2keypolled library in my libraries folder, plugged in the keyboard and voila: keystrokes gettin’ read.

[Photo: here’s what the setup looks like]

[Photo: globbiest solders since middle school]

I’ve got big plans for the PS/2-to-Arduino data chain, involving a data logging chip and shoes made of keyboards and sledgehammers and/or yogurt. But for Hack Day, I just wanted to whip up something fun. So the next step was to get the keystrokes from the Arduino to my computer, preferably into a processing.py sketch. Much to my surprise, Processing’s serial communication libraries worked with processing.py without a hitch*, which left me free to write the tiny little generative text toy that you see in the video above.

The biggest unforeseen timesink: I spent a few hours trying to figure out the best way to send ps2keypolled’s 16-bit key codes from the Arduino to the computer, eventually settling on the stupidest possible ad-hoc protocol that could work (and porting a big chunk of C code to Python to translate the key codes to ASCII). See the source code for more details.
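
For illustration only, here’s one plausible shape such a protocol might take on the host side. This is a hypothetical sketch using pyserial rather than the processing.py serial code in the zip, and the port name and baud rate are made up.

import serial  # pyserial, standing in here for Processing's serial library

# Hypothetical decoder: assume each 16-bit key code arrives as two bytes,
# high byte first. The port name and baud rate below are placeholders.
port = serial.Serial("/dev/tty.usbserial", 9600)

def read_keycode(ser):
    high = ser.read(1)[0]
    low = ser.read(1)[0]
    return (high << 8) | low

while True:
    print(hex(read_keycode(port)))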

Most surprising happy discovery: processing.py is amazing. Being able to quickly write the text-munging code in Python while still retaining Processing’s built-in functions and easy-to-use libraries is just… a revelation. For a project that’s just a few weeks old, it feels surprisingly polished. If you’ve got Python and Processing expertise, I recommend you give it a go.

Source code for the whole shebang: crazy_animal_keyboard_source.zip

* Okay, there was a single hitch. Apparently, the serial communication library included with Processing (and, therefore, processing.py) doesn’t support 64-bit Snow Leopard (as documented e.g. here). I was able to get around this without problems by using the -d32 parameter to the java runner, i.e.

$ java -d32 -jar processing.py animal_keyboard.py


Poetry in the Post-Now
Bowery Poetry Club
308 Bowery, New York, NY
May 8th, 2010, 12pm-2pm

This is going to be an amazing event. There will be performances, demonstrations, installations and readings from two ITP classes this semester: my Reading and Writing Electronic Text class and Nancy Hechinger’s Writing and Reading Poetry in the Digital Age.

This event is intended to be a showcase for the many text-, language- and poetry-driven projects at ITP, which are sometimes unsuited to the noisy glamor of the regular ITP show (which you should also attend!). I have been overwhelmed by the quality of student projects in both classes, and I’m excited to see them presented and performed.

A sampling of projects from my class: Ramones lyrics interpreted as code, Semaphore Hero, “tagrostics” (procedurally generated acrostics built from word frequency analysis), reading the Ramayana with regular expressions, procedurally generated Vogon poetry, poems composed by weather conditions, self-conversation mangled by Markov chains, physical interfaces for remixing movie subtitles, and more! It may not actually be possible for there to be a better way for you to spend your Saturday afternoon.

Here’s the poster in PDF format. Promotional materials designed by Ted Hayes.


Last October I undertook a “lexical analysis” of the Twitter and Facebook APIs. Twitter came out on top in that analysis. I concluded that Twitter’s “simplicity … has been an important factor in [its] widespread growth among both users and developers” while Facebook’s “baroque and insular” API makes it impossible for users and developers to “keep track of how everything works together.” Facebook’s been promising changes for a while, and change has indeed arrived, in the form of the new Graph API. So how does the Graph API compare, lexically, to Twitter? Here’s a revised analysis. (Spoiler: things are looking much better for Facebook.)

Facebook Graph API

Verbs: get, post, delete

Nouns: album, photo, event, group, link, note, page, photo, post, status message, user, video, comment, feed, noreply, maybe, invited, attending, declined, picture, member, home, tag, activities, interests, friends, music, books, movies, television, likes, inbox, outbox, updates, search

Twitter

Verbs: get, show, update, destroy, post, put, exists, verify, end, follow, leave, report, request, authorize, authenticate

Nouns: search, trend, status, timeline, mention, retweet, friend, follower, direct message, friendship, id, account, session, delivery device, color, image, profile, favorite, notification, block, spam, search, token, test, place, geocode

The tally: the Twitter API holds steady with 26 nouns and 15 verbs. The Graph API, meanwhile, has 35 nouns* and 3 verbs—a tremendous improvement on Facebook’s old (so-called) REST API, which has 43 nouns and 24 verbs. The Twitter API and Facebook’s REST API average around 1.7 nouns per verb; the Graph API has almost 12 nouns per verb. What’s especially interesting? The Graph API has managed to sidestep any verbs that aren’t HTTP methods.

The Graph API is a big deal, and an important step forward for Facebook. Here are a few reasons I think that’s the case.

First of all, the Graph API is dead simple, especially compared to Facebook’s old REST API. Just as an example: the old API has (by my count) seven nearly synonymous verbs (set, create, add, register, upload, send, publish) that all essentially mean “add content for a user.” These verbs aren’t interchangeable; each works with only a subset of nouns (or even just a single noun). Each verb has idiosyncratic behaviors, arguments and error messages. The Facebook REST API is, in short, a mishmash of conflicting conceptual models. It’s tough to get a handle on all of it.

Not so with the Graph API. Every resource has exactly the same interface. Once you understand what each noun is, it’s easy to understand how to manipulate it—there are only three possible verbs, after all!

The complexity of the REST API necessitated a client library to perform even very simple operations. The Graph API, on the other hand, is so simple, I think that most developers won’t even use a library at all—it’s easier to just hit the URLs directly. Simply put: if you know how to make HTTP requests, you know how to use the Graph API.
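
To make that concrete, here’s everything it takes to fetch a resource in Python, with no client library at all. (A sketch: the ID is the event from the example document below, and anything that isn’t public needs an access_token query parameter.)

import json
import urllib.request

# Fetch a single Graph API resource and decode the JSON it returns.
url = "https://graph.facebook.com/331218348435"
with urllib.request.urlopen(url) as response:
    event = json.load(response)

print(event["name"])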

That’s the second thing that makes the Graph API so important—it’s the web. The Graph API isn’t just a bunch of function calls for retrieving information; it’s a set of URLs to representations of the data itself. The data is all in JSON format, so it’s easy for browsers to consume it directly. Image URLs return the images themselves, so they can be thrown right into img tags without an intermediary API call. Most importantly, the resources returned from the Graph API are hypermedia: they include URLs to related resources. An example (from the documentation):

{
   "name": "Facebook Developer Garage Austin - SXSW Edition",
   "metadata": {
      "connections": {
         "feed": "http://graph.facebook.com/331218348435/feed",
         "picture": "https://graph.facebook.com/331218348435/picture",
         "invited": "https://graph.facebook.com/331218348435/invited",
         "attending": "https://graph.facebook.com/331218348435/attending",
         "maybe": "https://graph.facebook.com/331218348435/maybe",
         "noreply": "https://graph.facebook.com/331218348435/noreply",
         "declined": "https://graph.facebook.com/331218348435/declined"
      }
   }
}

The URLs are right there in the document, meaning that you don’t have to know anything else about how the Graph API works in order to get information related to a resource. Pretty slick. It’s an idea that (I think) every web API should incorporate.
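
For instance, once you have that document, pulling the attendee list is just one more GET on the connection URL it hands you (same sketch-level caveats as above):

import json
import urllib.request

# Follow one of the connection URLs from the document above; list responses
# come back wrapped in a "data" array.
url = "https://graph.facebook.com/331218348435/attending"
with urllib.request.urlopen(url) as response:
    attending = json.load(response)

for person in attending["data"]:
    print(person["name"])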

There are, of course, a few factors that temper my enthusiasm for the Graph API. First off, it doesn’t quite model the entirety of Facebook’s functionality yet—notably, there aren’t any resources (so far) representing Facebook applications themselves. There are also privacy concerns that need to be addressed (for example, event listings).

Overall, though, the Graph API is beautifully constructed. It’s just as robust as Facebook’s old API, but simpler, more convenient, and easier to integrate. As a developer (and programming instructor), I’ve always felt that Twitter’s main advantage over Facebook is its developer-friendly API. The Graph API has the potential to erase that advantage.

* There are a number of verb-like words and verb participles in the Facebook “noun” list. I classify them as nouns because, to my eye, they’re clearly presented in the API documentation as things you can act on, rather than actions you can take. For example, attending means “users who are attending something,” and acting on that resource allows you to fetch or manipulate that list of users.


Text lathe prototype from Adam Parrish on Vimeo.

This is a little prototype for a textual interface that I came up with last week after receiving my nanoKONTROL. (I saw Jörg Piringer use one of these in a live electronic sound poetry performance last year at E-Poetry, and I knew I had to have one.) The idea is that two knobs on the controller determine how much text is cut from either side of a text fed to the program on standard input; another knob controls how fast lines of text are read in and displayed. It’s a very simple mapping, but I’m pleased with the results so far.
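
The core of that mapping is tiny. Here’s a sketch with hard-coded stand-ins for the knob values (not the code in the video, which reads the values from the nanoKONTROL over MIDI):

import sys
import time

# Stand-ins for the three knob values; the real version reads these from MIDI.
left_cut, right_cut, delay = 5, 12, 0.5

for line in sys.stdin:
    line = line.rstrip("\n")
    # trim left_cut characters from the front and right_cut from the end
    print(line[left_cut:len(line) - right_cut])
    time.sleep(delay)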

