blog


It’s time for a year-in-review post! This covers only the latter half of 2015; I made another blog post earlier in the year about what I did in the first half of 2015.

Teaching

Speaking and events

  • I gave a talk on my pronouncingpy library at PyGotham.
  • The Internet Yami-Ichi was amazing! I was a vendor there, selling procedurally generated poetry chapbooks.
  • I gave a brief talk about my upcoming book about Processing.py at Maker Faire.
  • The Rhythms and Methods tour was fantastic! I read excerpts from the introduction to Everyword: The Book alongside a group of amazing trans poets and authors.
  • I appeared on Woodland Secrets (hosted by merritt kopas, one of my Rhythms and Methods tourmates).
  • I arranged and hosted an unofficial Bot Summit/#botally meetup in NYC, which attracted a sizable crowd of generative text enthusiasts. The talks (by Jia Zhang and Leonard Richardson) were great, and there was a wonderful back-and-forth between audience members during the open projector sessions and the Q&A. Definitely want to do this again soon.
  • Michael Cook invited me to give a talk at ProcJam! Here’s a video of my talk.
  • I was one of several participants in a bot workshop at Data and Society. We had some great conversations as a group, and I definitely learned a lot about bots from stakeholders not in the art/computational creativity space. (I think there’s some output from this workshop that is still in the works.)
  • Kyle McDonald invited me to present at Internodal, a series of talks hosted at Dark Matter. Here’s a picture from my presentation!
  • Charles Theonia (yet another Rhythms and Methods tourmate) invited me to perform at The Moon Show. I read excerpts from Our Arrival along with an essay I wrote that contextualizes the piece. Here’s a picture of me performing.

Art and other things I made

  • Instar Books released Everyword: The Book. Buy your copy today!
  • I am one of several collaborators in Incredible Witness; in November, we had our first “test session” which involved getting volunteers to participate in and play the experiences we’ve been designing for the past few months.
  • I released a physical chapbook/zine (my first!) of poems from my bot, The Ephemerides. (I’m hoping to put the remaining stock of these for sale online soon.)
  • For NaNoGenMo 2015, I produced a novel called Our Arrival. Download the PDF here.
  • I am one of several artists featured in It’s Doing It, an online group exhibition of generative artworks. My piece is called Auto-Minimalist; it’s a minimalist poetry generator inspired by Aram Saroyan.

Twitter bots

  • The Ephemerides is a bot that posts computer generated poems juxtaposed with random images from outer planet space probes. Here’s my write-up.
  • ModernArt.exe is a quick hack that parses data from MoMA’s artwork catalog CSV and generates random, plausible-sounding artworks and descriptions of their media.
  • Brain Tendencies: Common and pernicious randomly generated cognitive biases that prevent YOU from making rational decisions.
  • I made Library of Every to celebrate the release of Everyword: The Book. It’s an alphabetical catalog of every possible parody/homage to @everyword.
  • Citation Needed finds a random sentence from Wikipedia marked as “citation needed” and posts it to Twitter.

The theme for 2015’s back nine seems to have been a lot of teaching, talking, and advocacy, and not a lot of actually making new things. I love teaching and giving talks, but I’m hoping in 2016 to find time (and money) to focus more on making new work.

How to get from Dress Up Game Jam to Pizza OWL in six easy steps:

(1) This is a cool game jam. I should make a game where the clothes all have randomly generated descriptions!

(2) It’d be helpful to have a list of clothing types. Wikipedia sort of has one but it’d be hard to scrape all of the hierarchical categories. Hmm.

(3) And anyway, it’s not enough to have just clothing types! It’d also be nice to know where they go on the body and what parts they have (sleeves, collars).

(4) Wikipedia definitely won’t have that in a computer readable format. But maybe some semantic web folks have made an open clothing ontology? *googles*

(5) (later) Ugh, all of these ontologies are either too domain-specific (commerce, museums) or laden with weird gender/body type assumptions. Ugh ugh.

(6) I guess I should just bite the bullet and make my own tiny ontology. But I want to share it with other people in an open file format… and that format is OWL… so, uh, how do I write OWL?

~fin~


The Ephemerides

If you watched the video of my Eyeo 2015 talk, or read this excellent transcript, you know that two of my main interests are generative poetry and space probes. The talk, specifically, is about a similarity that I perceive between space probes and generative poetry programs, which is this: both space probes and generative poetry programs venture into realms inhospitable to human survival and send back telemetry telling us what is found there. For space probes, that realm is outer space. For generative poetry programs, that realm is nonsense. Humans generally shrink from nonsense, but a good poetic procedure can demonstrate that nonsense is worth engaging with: there are infinite undiscovered gems of language that lie hidden within nonsense’s borders.

I just made a bot, called The Ephemerides, which takes a randomly selected image from NASA’s OPUS database—a repository of data from outer planet probes like Voyager, Cassini and Galileo—and posts it to Twitter, accompanied by a computer-generated poem. The idea behind the bot was to address the similarity between space probes and generative poetry procedures from the opposite direction: what would poetry written by a space probe look like?

Sources

My thought was: space probe poetry would be minimal, contemplative, turning suddenly from the technical to the lyrical and back. The poetry of Bashō, Gary Snyder and Rae Armantrout came to mind as potential points of reference. I proceeded with these poets in mind and tried to produce something stylistically similar.

The text of the bot comes from two sources: Astrology by Sepharial and The Ocean And Its Wonders by R. M. Ballantyne, both available from Project Gutenberg. The first text contains references to the planets and their movements, and how those movements can be interpreted; the second text is about the open sea, water, ice and lengthy, often one-way voyages into the unknown. A perfect combination for the language of space probes!

Methodology

I created the source data with Pattern by parsing each text into individual sentences, then separating those sentences into standalone clauses, then parsing each clause into its grammatical constituents, and combining all of these into a shared data structure.

To generate the poem, the procedure selects a clause at random and then, for each constituent in the clause, replaces it at random with a constituent drawn from the entire corpus that shares the same part of speech or grammatical role. The resulting text retains a lot of “grammaticality” and cohesion while still effectively introducing strange juxtapositions of the two source texts. The finishing touch is a simple procedure that enjambs the poem into stanzas and then into individual lines, roughly broken up by syllable count.
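
As a rough illustration (and not the bot’s actual source), here’s a minimal sketch of that constituent-swapping idea using Pattern’s chunk-level parse. The file names, the chunk granularity, and the swap probability are all assumptions for this sketch; the real procedure also splits sentences into standalone clauses and does the syllable-based enjambment described above.

```python
# A minimal sketch of the constituent-swap procedure described above.
# Assumptions: pattern.en is installed, and astrology.txt / ocean.txt
# are plain-text copies of the two Project Gutenberg sources. This
# simplified version works at the chunk level (NP, VP, PP, ...) and
# drops words that fall outside any chunk.
import random
from collections import defaultdict

from pattern.en import parsetree

def load_chunks(paths):
    """Index every chunk in the corpus by its type, and keep each
    sentence's chunk skeleton as a generation template."""
    by_tag = defaultdict(list)
    sentences = []
    for path in paths:
        with open(path) as f:
            text = f.read()
        for sentence in parsetree(text):
            chunks = [(chunk.type, chunk.string) for chunk in sentence.chunks]
            if chunks:
                sentences.append(chunks)
                for tag, string in chunks:
                    by_tag[tag].append(string)
    return sentences, by_tag

def generate_line(sentences, by_tag, swap_prob=0.5):
    """Pick a random sentence, then randomly replace each constituent
    with another constituent of the same type from the whole corpus."""
    template = random.choice(sentences)
    out = []
    for tag, string in template:
        if random.random() < swap_prob and by_tag[tag]:
            string = random.choice(by_tag[tag])
        out.append(string)
    return " ".join(out)

if __name__ == "__main__":
    sentences, by_tag = load_chunks(["astrology.txt", "ocean.txt"])
    for _ in range(3):
        print(generate_line(sentences, by_tag))
```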

Images

The NASA images that you usually see have been post-processed in one way or another. The images that I use in The Ephemerides are so-called “raw” images—the images that come directly from the space probe, before they’ve been edited to correct data glitches, or combine exposures at different wavelengths into a color photo, or create a larger photo from a montage of smaller ones. But I love the raw images! Somehow they seem warmer and more personal to me, which fits into the “fiction” of the bot: that these poems are a personal documentation of the probe’s own experience.

There’s no image recognition or other deep dream nonsense going on to relate the OPUS image to the text: the image and the text are selected at random, and any energy you perceive in their juxtaposition is a product of the neural network between your ears.

Tumblr

The bot is also available on Tumblr. The Tumblr posts include information about which probe took the photo in the post, and what the probe’s intended target was. (Most of the photos are of Saturn’s rings, because Cassini takes a lot of photos of Saturn’s rings.)

I plan to make the source code for the bot available soon—bug me if you’re interested!

I wanted to make a quick list here of all the stuff I’ve done in the first half of 2015. I’m incredibly grateful to the organizations that have supported my work during this time, most importantly Fordham University’s English Department, where I’m a writer-in-residence, and ITP, where I was an adjunct and research fellow during the 2015 Spring semester. I also received support from the Frank-Ratchye STUDIO for Creative Inquiry at CMU, the School for Poetic Computation, and Recurse Center, where I just finished a two-week residency.

Open-source software

  • pronouncingpy is a simple Python interface for the CMU Pronouncing Dictionary. I found myself copying the same rhyming/meter code from one project to the next, so I decided to factor out the commonalities and put them in a library. My goal with pronouncingpy was to create a “professional-grade” open source Python library, one that I could use in my own projects and that I could recommend to students. I learned a lot in pursuit of that end—things like PEP8 compliance, Tox, and automatically generating API documentation and putting it on Read The Docs. (A short example combining pronouncingpy with pycorpora appears just after this list.)
  • I also made a Javascript port of pronouncingpy, called pronouncing.js. Pronouncing.js supports the same API as its Python counterpart, and can be used both in Node and in the browser. (I used this library for a few other projects; see below)
  • Pycorpora is a Python interface for Darius Kazemi’s Corpora Project. It makes it super easy to use Corpora Project data; just pip install pycorpora and you’re off to the races. The “Examples” section of the documentation is a starting point for a tutorial I want to write about how to use Corpora Project data for quick text generation prototyping in Python.
  • Example Node.js Twitter Bot: I made this because I needed to have some Javascript example code for a workshop I was giving at Recurse Center, and I didn’t like any of the existing Node.js Twitter bot examples. Hopefully it’ll be helpful for some people!
  • Context-Free GenGen is a version of Darius Kazemi’s GenGen project that uses context-free grammars instead of Mad Libs-style juxtapositions. Basically: you can write a CFG in Google Sheets and then have a shareable text generator that uses that CFG, all without any programming. I used it in the generative text unit in my Appropriation, Iteration, Recontextualization class at Fordham. You can use Context-Free GenGen here. (A toy sketch of the CFG idea also follows this list.)
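
Here’s a quick, hedged taste of the first two libraries above working together. It assumes the packages install as the pronouncing and pycorpora modules, and that the Corpora Project’s animals/common file has an "animals" key; those details aside, this is the kind of rhyme-and-meter lookup the libraries are meant to make easy.

```python
# Pull a word list from the Corpora Project via pycorpora, then use
# pronouncing to keep only the entries the CMU dictionary can pronounce
# in exactly three syllables. (Assumes `pip install pronouncing pycorpora`
# and that animals/common.json has an "animals" key.)
import pronouncing
import pycorpora

animals = pycorpora.animals.common["animals"]

def syllables(word):
    phones = pronouncing.phones_for_word(word)
    return pronouncing.syllable_count(phones[0]) if phones else None

print([a for a in animals if syllables(a) == 3])

# A plain pronouncing example: words that rhyme with "probe".
print(pronouncing.rhymes("probe")[:10])
```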
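
And for the last item: the actual Context-Free GenGen reads its grammar from a Google Sheet, but the idea underneath is just context-free expansion. The hand-written grammar and angle-bracket convention below are stand-ins, purely for illustration.

```python
# A toy illustration of context-free grammar expansion. Nonterminals are
# written in angle brackets; any rule with several expansions gets one
# chosen at random.
import random
import re

GRAMMAR = {
    "<sentence>": ["<animal> <verb> the <object>."],
    "<animal>":   ["the cat", "a very small dog", "an owl"],
    "<verb>":     ["contemplates", "rearranges", "quietly ignores"],
    "<object>":   ["moon", "spreadsheet", "ontology"],
}

def expand(symbol):
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    # Recursively expand any nonterminals inside the chosen production.
    return re.sub(r"<[^>]+>", lambda m: expand(m.group(0)), production)

print(expand("<sentence>"))
```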

I’ve really enjoyed working on these projects, and I’ve been surprised and pleased by the (already substantial!) contributions to several of the projects from volunteers. Even more amazing: someone thought that pronouncingpy was cool enough to port it to an entirely different programming language (Clojure). Exciting!

Classes, workshops and tutorials

Talks and presentations

  • In February, I delivered my talk Beyond the Scrabble Word List at IndieCade East and had a great time there hobnobbing with all of the folks on the vanguard of video game design.
  • In addition to the workshop I delivered in Golan Levin’s Interactive Art & Computational Design class, I also gave a talk called Bots: Some Historical Threads, in which I roll out the “PUDG model” for Twitter bots (Twitter bots are procedural, uncreative, data-driven graffiti).
  • I was invited to speak at this year’s Eyeo Festival. My talk there was called Exploring (Semantic) Space with (Literal) Robots. I had a fantastic time at Eyeo and it was such a treat to meet so many of the artists and technologists whose work I’ve been admiring from afar for years. They should post a video of the talk soon; I’ll post an update when they do!
  • I also gave a “tech talk” at Recurse Center, in which I gave an overview of my work as an artist over the past few years and proposed a few new ideas relating to my ITP thesis. I’m happy to provide slides/references for this talk on request.
  • Oh, and I almost forgot! I was on a panel with Brendan Berg at Facets. Brendan gave an excellent talk about the history of text encoding and I gave my talk about the eschatology of @everyword. The Q&A session afterwards was fantastic and the whole experience was tons of fun!

New artwork

  • I made A Travel Guide on commission from Turbulence. A Travel Guide generates a random, persistent travel guide for any arbitrary place on the Earth’s surface. I also made a companion Twitter bot for the piece: @a_travel_bot.
  • As part of my job as Associate Editor of CURA Magazine, I helped plan CURA’s Museum in Media Res, a kind of literary hackathon/jam session. In addition to the work by our (amazing) invited artists, the magazine published three new pieces that I made during the event.
  • I made two new Twitter bots: @deepquestionbot, which asks difficult and trivial questions based on inverted facts from ConceptNet, and @cashclones, which invents strange and nitpicky alternate history scenarios based on facts from DBpedia.
  • I’ve been putting a lot of thought lately into my ITP thesis, and how I can extend and build upon that work. To that end, I made a few “experimental textual interfaces” while I was at Recurse Center: Linear L-System Poetry, the Motion-Sick Keyboard and the Rhyming Keyboard (recently featured on Waxy Links!).

Thanks again to the institutions and individuals that have helped me to have such a productive and fulfilling year so far!

N-Webz and me at IndieCade East

I gave a talk titled “Beyond the Scrabble Word List” at IndieCade East last weekend. It was a great experience! I’m very grateful to the IndieCade East organizers for giving me a chance to talk at such an amazing event. And such an awesome venue! I’ve never given a presentation on a screen quite as big as the screen in the Museum of the Moving Image’s Redstone Theater.

The gist of the talk is this: Scrabble’s rules and structure reward players who can spell words that are difficult to learn and difficult to spell. Being good at Scrabble is therefore an ersatz measure of “literacy.” But “literacy” isn’t a neutral concept; prescriptivism is a form of oppression and literacy is a privilege. The (implied) conflation of “being a good Scrabble player” and “being a good person” is one of the reasons that Scrabble (and other word games) can cause so much contention and bad feelings in play. One solution is to design word games with the same assistive technologies that people use in real life to cope with the difficult learning curve of the English language, like spellcheck and autocomplete.

The slides and notes (including citations) for the presentation are downloadable here.

The ideas in the talk are very raw, and reflect my own evolving thought on the topic. I’m not totally satisfied with the completeness of my own critique, and certainly the solutions I’ve proposed for the problem are very rudimentary. But I’ve gotten a lot of good feedback on the talk so far and welcome further comments!

(Photo by Tim Szetela)

FATGHONG CHDCK

POULTRI

A CONVERSATION

smiling face withface is a tumblr bot I made that generates and posts glitchy versions of emoji, based on the open-source SVG files released as part of Twitter’s twemoji project. A Python program selects an emoji SVG file at random, adjusts the markup and numbers in the SVG file, and (optionally) recombines the paths in the selected SVG with paths from other emoji SVG files. The results are posted to Tumblr.
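
As a rough illustration of the “adjusts the markup and numbers” step, here’s a minimal sketch; it isn’t the bot’s actual code, and the file paths, jitter range, and regex-based coordinate mangling are all assumptions. The real bot also recombines paths across different emoji files, which this sketch skips.

```python
# A rough sketch of one way to "glitch" a twemoji SVG: nudge every
# coordinate in each <path>'s "d" attribute by a small random amount and
# write the result back out. Paths and jitter range are assumptions.
import glob
import random
import re
import xml.etree.ElementTree as ET

NUMBER = re.compile(r"-?\d+(?:\.\d+)?")

def jitter(match, amount=2.0):
    return "{:.2f}".format(float(match.group(0)) + random.uniform(-amount, amount))

def glitch_svg(path):
    tree = ET.parse(path)
    for elem in tree.iter():
        # Only mangle path data, leaving colors and dimensions alone.
        if elem.tag.endswith("path") and "d" in elem.attrib:
            elem.set("d", NUMBER.sub(jitter, elem.attrib["d"]))
    return tree

if __name__ == "__main__":
    source = random.choice(glob.glob("twemoji/assets/svg/*.svg"))
    glitch_svg(source).write("glitched.svg")
```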

Emoji is among the most successful symbol systems in the history of writing, and it’s coming close to achieving the universal success envisioned by the likes of Blissymbols and Isotype. My “glitched” emoji are intended to bring to the surface the material nature of emoji, so we can better understand what it means to communicate with one another using them.

I made this for a few reasons. One of the main suggestions that people have for Library of Emoji is that there should, you know, be actual EMOJI that correspond to the randomly-generated names. Of course, there’s no easy way to do that computationally (at least in a way that I think would satisfy me from an aesthetic standpoint). But when Twitter released their twemoji files, I instantly knew I wanted to do something with them. I had already been working on a little script to mash-up Google’s Material Design Icons, so I repurposed it for the twemoji files and let it run. I was happy with the results. Darius suggested that I make the bot post to Tumblr (instead of Twitter), which I think was a great suggestion, given the visual nature of the bot. (Though you can follow the bot on Twitter as well, if you’d like.)

Occasionally, the bot will post a “conversation” (between two unnamed entities using, I presume, iOS devices equipped with glitch-emoji capabilities), so you can see what the emoji might look like in context.

The names are generated from a database of Unicode codepoint names, mangled with a little library of functions I’ve been working on.


Excerpt from "I Waded in Clear Water"

Excerpt from “I Waded in Clear Water”

Last November, I participated in National Novel Generation Month (NaNoGenMo), an event in which participants are encouraged to write a computer program that generates a novel. Originally conceptualized by Darius Kazemi as a cheeky alternative to NaNoWriMo, the event has inspired programmers and writers to create some really beautiful work.

My contribution is a procedurally generated novel called I Waded In Clear Water. The primary source text for the novel is Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted, with footnotes provided by information gleaned from ConceptNet and WordNet. You can read more about the process I used to generate the novel, and see the Python source code, at my NaNoGenMo 2014 Github repository.

I read some excerpts from the novel and gave a presentation about it at WordHack on January 15th. Here’s the presentation deck in PDF format.


@eventuallybot

The Infinity

I made a new Twitter bot: @eventuallybot. It generates short, silent films in GIF format, based on randomly-selected snippets of YouTube videos. As of this writing, the bot has generated nearly 300 tiny films!

The code is written in Python and makes heavy use of Connor Mendenhall’s wgif program and ImageMagick. I used my new Python library, My Dinosaur, to generate an RSS feed for the bot (a first for me!), which you can subscribe to here.

I’ve had the idea for this bot for a while. I’ve been interested since my undergraduate linguistics days in the idea of textual cohesion—the methods and strategies that language speakers employ to make the units of the text (lines, sentences, stanzas, paragraphs, etc.) come together as a whole. In particular, I’m interested in how just mimicking the surface forms of cohesion (by, e.g., pronoun substitution, anaphoric/cataphoric demonstratives, or even just lexical repetition in the form of anaphora) can make generative text feel like it’s telling a story, even if the text doesn’t have any kind of underlying semantic model.

With @eventuallybot, I wanted to experiment with some of these concepts. The experiment, specifically, was this: if you take random bits of video, and splice them together with titles that suggest the contour of a story, how often will you get a result that feels at least sort of cohesive?

So I made a big list of transition words—essentially, conjunctions and phrases that function as conjunctions—and (inspired by Labov’s narrative analysis) lightly categorized them like so (a rough sketch of one way to string these together follows the list):

  • beginning phrases (phrases that start a story, like “once upon a time”)
  • “and-then” phrases (phrases that move the story along a bit in time, like “after that”)
  • continuing phrases (phrases that introduce a second situation or complicating factor, like “meanwhile” or “nearby”)
  • concluding phrases (phrases that introduce an explanation of how the story is resolved, like “therefore” or “to summarize…”)
  • ending phrases (like “The End”)
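
Here’s a minimal sketch of how those categories might get strung into a title sequence for one film. The particular phrases and the number of middle “beats” are stand-ins; the actual bot pairs each title with a randomly selected YouTube clip (via wgif and ImageMagick) before assembling the GIF.

```python
# A minimal sketch of turning the categorized transition phrases into a
# title sequence. The phrases and the number of middle beats below are
# placeholders, not the bot's real word list.
import random

TRANSITIONS = {
    "beginning":  ["Once upon a time,", "It all started when", "One day,"],
    "and_then":   ["After that,", "Then,", "A little later,"],
    "continuing": ["Meanwhile,", "Nearby,", "At the same time,"],
    "concluding": ["Therefore,", "In the end,", "To summarize:"],
    "ending":     ["The End", "...fin", "And that was that."],
}

def title_sequence(middle_beats=2):
    """Pick one phrase per narrative slot, Labov-style: a beginning,
    a few and-then/continuing beats, a conclusion, and an ending."""
    titles = [random.choice(TRANSITIONS["beginning"])]
    for _ in range(middle_beats):
        slot = random.choice(["and_then", "continuing"])
        titles.append(random.choice(TRANSITIONS[slot]))
    titles.append(random.choice(TRANSITIONS["concluding"]))
    titles.append(random.choice(TRANSITIONS["ending"]))
    return titles

if __name__ == "__main__":
    for title in title_sequence():
        print(title)
```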

As Mark Sample pointed out on Twitter, filmmakers are already familiar with the “Kuleshov Effect,” which describes how viewers will tend to see two shots juxtaposed in montage as being narratively related. To be sure, the titles in @eventuallybot’s films are a bit less subtle than straight-up cuts between shots. But I kind of enjoy how @eventuallybot (at its most coherent) feels like it’s telling an anecdote with its clips, not just implying a narrative connection among them.

One reason I wanted to have an RSS feed for this bot is Twitter’s handling of the GIF format. Twitter “supports” GIFs, but transparently converts them after upload to a different video format, and (as far as I can tell) throws away the original GIF data. This is probably the right move on Twitter’s part, since GIFs aren’t (byte-for-byte) a very efficient format for storing video, but I wanted people to be able to save and share the GIFs as they were originally generated. So the RSS feed updates at the same time as the bot itself, and it links to the original GIFs.

Task complete

If it hasn’t already happened by the time you read this, it will happen soon: @everyword‘s seven-year mission to tweet “every word in the English language” has come to an end. I hope you’ve all enjoyed the ride!

My plan is to write a more complete post-mortem on the project later. In the mean time, this post contains some links to things that followers of @everyword might find interesting or useful.

The future of @everyword

But first, a word about what’s next for @everyword. Don’t unfollow just yet! My plan at the moment is to let the account rest for a bit, and then run “@everyword Season 2,” starting over from the beginning of the alphabet. Before I do that, I’d like to find a more thorough word list, and also do some programming work so that the bot is less likely to experience failures that interrupt service.

Writing about @everyword

Here’s some writing about @everyword, by me and others.

Writing about Twitter bots

@everyword is a Twitter bot—an automated agent that makes Twitter posts. There are a lot of interesting Twitter bots out there. Here’s some interesting writing by and about bot-makers:

What to follow

Here are some Twitter bots that I think followers of @everyword might enjoy.

Thank you!

The response to @everyword has been overwhelming. When I started the project in 2007, I never would have dreamed that the account would one day have close to 100k followers. And if you’re one of those followers, thank you! It’s a great feeling to have made something that so many people have decided to make a daily (or, uh, half-hourly) part of their lives.

I view @everyword as a success, and I want to note here that I owe this success to all of my friends and family who encouraged me along the way and helped to make @everyword a topic of conversation. I am very bad at finding value in the things I make, and I’m especially bad at self-promotion. Without the help of the people close to me, I’m sure that @everyword would have completed its task in obscurity—if it completed its task at all.

scrabble sucks screencap

I gave a talk at !!Con a few weeks ago. The talk was called “Scrabble Sucks! Toward higher-order word games.” It’s about some problems I have with Scrabble, and some of the games I’ve made in response to those problems. Download the slides and notes here. A few slides that I didn’t get to in my actual presentation, comparing a sizable corpus of Scrabble games to Lexcavator’s list of all the words that players have ever found, are included in the PDF for your perusal.

I had a lot of fun participating in !!Con. I was a little nervous talking right before Mark-Jason Dominus, whom I venerated back in my Perl-slinging days, and whose Higher-Order Perl is what I was riffing off of with the subtitle of my talk (except I wasn’t talking about higher-order functions; I was talking about higher-order n-grams). But everything worked out okay, and I’m glad I got to give my talk to such an enthusiastic and receptive crowd.
