Pixel Robots hark back to the olden days of video game sprites in the era of the Apple ][, MS-DOS, font-based graphics and monochrome monitors. They are algorithmically generated in the millions by combinatorial methods. The algorithm is explained in more detail below.
The demo applet uses a random box fitting strategy to fill the image with robots of varying sizes.
The robots are generated within a grid of cells that is 7 columns wide by 11 rows tall. Each cell may be either on (black) or off (white). Without any further constraints this would allow for 2^77 combinations - just over 151 sextillion, a monstrously huge number.
Unfortunately, most of those combinations are not "interesting" in the context of this project. So the cells are constrained in several ways to limit the total number of combinations while focusing in on those that produce interesting robot shapes.
The first constraint imposed is that the cells must be horizontally symmetrical. This reduces the effective grid size to 4x11 cells (the center column plus three mirrored columns), reducing the number of combinations to 2^44, or about 17.6 trillion.
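The symmetry constraint can be illustrated with a short sketch (a hypothetical reconstruction, not the original applet's code): given a 4x11 half-grid whose fourth column is taken as the robot's center, mirror the outer three columns to produce the full 7x11 pattern.

```python
# Mirror a 4-column half-grid into a horizontally symmetric 7-column grid.
# Column index 3 of the half is treated as the robot's center column.
# (Illustrative sketch; the original applet's internal layout may differ.)

def mirror_half(half):
    """half: list of 11 rows, each a list of 4 cells (0/1)."""
    full = []
    for row in half:
        # left 3 columns + center column + the left 3 columns reversed
        full.append(row[:3] + [row[3]] + row[2::-1])
    return full

half = [[1, 0, 1, 0]] * 11
full = mirror_half(half)
assert all(len(r) == 7 for r in full)
assert all(r == r[::-1] for r in full)  # every row is horizontally symmetric
```

Because only 4 of the 7 columns are free, the number of distinct patterns drops from 2^77 to 2^44.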
The second constraint comes from flipping the problem on its head. Instead of asking which cells to turn on in order to create robot shapes (a non-trivial problem), the question becomes which cells to avoid turning on because they constitute the "interior" of the robot. Once the interior is defined, simply draw an outline around it, producing the desired robot-like shapes.
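The interior-then-outline step might look something like the following sketch (an assumption about the details: a 4-neighborhood is used, and interior cells stay off while the cells bordering them turn on):

```python
# Given a grid marking "interior" cells (1 = interior), produce a grid in
# which every empty cell adjacent to the interior is turned on (black),
# forming an outline, while the interior itself stays off (white).
# (Hedged sketch of the described approach; neighborhood and edge
# handling are assumptions, not taken from the original.)

def outline(interior):
    h, w = len(interior), len(interior[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if interior[y][x]:
                continue  # interior cells remain off
            if any(0 <= y + dy < h and 0 <= x + dx < w and interior[y + dy][x + dx]
                   for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                out[y][x] = 1  # cell touching the interior becomes outline

    return out

# A single interior cell grows a diamond-shaped outline around it:
assert outline([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]]) == [[0, 1, 0],
                                [1, 0, 1],
                                [0, 1, 0]]
```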
In the end, 24 cells are directly controlled, allowing 2^24, about 16 million, unique robots. Three 8-bit numbers are used to control the shape of the head, body and feet respectively.
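The combination counts quoted above, and the split of a 24-bit robot identifier into its three controlling bytes, can be checked in a few lines (which byte drives the head versus the feet is an assumption made for illustration):

```python
# Verify the combination counts quoted in the text.
assert 2**77 == 151115727451828646838272  # just over 151 sextillion
assert 2**44 == 17592186044416            # about 17.6 trillion
assert 2**24 == 16777216                  # about 16.8 million robots

# Split a 24-bit robot id into its three controlling bytes.
# (The head/body/feet byte order shown here is an assumption.)
def split_robot(robot_id):
    head = (robot_id >> 16) & 0xFF
    body = (robot_id >> 8) & 0xFF
    feet = robot_id & 0xFF
    return head, body, feet

assert split_robot(0x414141) == (0x41, 0x41, 0x41)
```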
Shown here are the 256 unique heads, bodies and feet, assembled such that head #0 is on body #0 and feet #0, and so on through head/body/feet #255. The other 16 million or so robots are created through all the various combinations of different heads, bodies and feet.
This particular encoding of robots via three 8-bit values was chosen intentionally as it lends itself to a variety of subsequent explorations. Since a robot can be defined by any three 8-bit values, and conversely that any three 8-bit values define a particular robot, it then remains only to find interesting sources of 8-bit values which can be converted into robot form.
Typical raster images are stored in RGB colorspace, where a set of three bytes define the red, green and blue intensities of a given pixel. It is thus possible to map those RGB triples into the three bytes that define an individual robot. In other words, every "color" represents a specific robot, and every robot has its own characteristic color.
And thus every image, which is simply a combination of colored pixels, represents a specific population and organization of robots. Given a source image, it is possible to extract the color values at every pixel location and produce the corresponding population of robots. A "robotized" derivative of the source image may then be produced by drawing that population with their representative colors in a grid format that matches the source image.
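The pixel-to-robot mapping and the resulting output dimensions can be sketched as follows (which color channel drives which robot part is an assumption here; the original pairing is not documented in this text):

```python
# Each source pixel's (R, G, B) triple directly names one robot, and each
# robot is drawn at 7x11 cells, so the robotized image is 7x wider and
# 11x taller than the source.

def pixel_to_robot(rgb):
    """An RGB triple names a robot as a (head, body, feet) triple.
    (Channel-to-part assignment is an assumption for illustration.)"""
    r, g, b = rgb
    return (r, g, b)

def robotize_dimensions(src_w, src_h, robot_w=7, robot_h=11):
    """Pixel dimensions of the robotized output image."""
    return src_w * robot_w, src_h * robot_h

# A 50x50 source image yields a 350x550 robotized image.
assert robotize_dimensions(50, 50) == (350, 550)
```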
The example source image is a portion of van Gogh's Portrait of Dr. Gachet. The resulting robot-encoded version is:
Note that the output image is exactly 7 times wider and 11 times taller than the source image - the dimensions of a robot. Note also that, since robots do not have a square aspect ratio, the output image is taller (more "portrait") than the source image.
If the robotized output image is then scaled back down to the source image resolution of 50x50 pixels we can evaluate the "error" in the encoding/decoding process. The "error" results from the white pixels in the unused portions of the robot grid, explaining why the reconstruction is much less saturated than the original.
The three defining bytes could also be interpreted as HSB values and the entire robot colored according to that scheme. However, a perhaps even more interesting HSB coloring scheme is possible if we take some liberty with the saturation and brightness values.
The defining bytes for the head, body and feet are treated as hue values for their respective robot parts. Saturation and Brightness values are derived from coordinates within the grid, where more interior locations are brighter, and higher locations are more saturated. The outline remains black, but the interior is filled in with the specified color.
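One way this per-cell coloring could be computed is sketched below. The exact formulas are assumptions: all that the text specifies is that the part's byte supplies the hue, more interior columns are brighter, and higher rows are more saturated.

```python
import colorsys

def cell_color(part_byte, col, row, grid_w=7, grid_h=11):
    """Hedged sketch of the described HSB scheme: the part's defining
    byte gives the hue; brightness grows toward the horizontal interior;
    saturation grows toward the top of the grid. The specific falloff
    factors (0.5 here) are assumptions, not taken from the original."""
    hue = part_byte / 255.0
    center = (grid_w - 1) / 2.0
    brightness = 1.0 - abs(col - center) / center * 0.5  # interior = brighter
    saturation = 1.0 - row / (grid_h - 1) * 0.5          # higher = more saturated
    return colorsys.hsv_to_rgb(hue, saturation, brightness)

# A part byte of 0 at the center-top cell is fully saturated, fully
# bright red:
assert cell_color(0, 3, 0) == (1.0, 0.0, 0.0)
```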
Text which is encoded in 8-bit ASCII can be represented by a bitmapped font. Conceptually, the bit pattern of the ASCII encoding is being represented by the more visually complex bit pattern of the font. In a similar fashion we may define a robot "font" or "alphabet" that maps character encodings into visual representations as robots. In this case however, the robot alphabet is large and it is necessary to combine three sequential ASCII characters in order to specify a "letter" in the robot "font".
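The text-to-robot-letter mapping can be sketched as follows (a hypothetical reconstruction; how the original handles text whose length is not a multiple of three is an assumption):

```python
def text_to_robots(text):
    """Group ASCII text into 3-character chunks; each chunk packs into a
    24-bit robot id (head, body, feet bytes). A short final chunk is
    padded with spaces -- an assumption, not documented in the original."""
    text = text.ljust(-(-len(text) // 3) * 3)  # pad length up to a multiple of 3
    robots = []
    for i in range(0, len(text), 3):
        h, b, f = (ord(c) for c in text[i:i + 3])
        robots.append((h << 16) | (b << 8) | f)
    return robots

# "David Bollinger" is 15 ASCII characters, hence 5 robot letters:
assert len(text_to_robots("David Bollinger")) == 5
assert text_to_robots("AAA") == [0x414141]
```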
For example, the text "David Bollinger" contains 15 ASCII characters and can thus be mapped into 5 robot letters:
The Preamble to the Constitution of the United States contains 327 ASCII characters and can thus be mapped into 109 robot letters:
"We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America."
By mapping the same bit pattern across the head, body and feet of the robot we can define a particular robot for each ASCII letter. For instance, the robot defined as 0x414141 corresponds to the ASCII uppercase 'A'. In this manner we can create an entire font of robots. Here is the Pixel Robots TrueType Font, which includes all ASCII characters from 33 (exclamation point) to 126 (tilde). It looks best at 24 points or larger.
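The per-character font mapping can be sketched as below, assuming the font covers the printable ASCII range 33 through 126 (tilde):

```python
# Robot ids for the printable-ASCII robot "font": each character maps to
# the id built by repeating its character code across head, body and feet.
font = {chr(c): (c << 16) | (c << 8) | c for c in range(33, 127)}

assert font['A'] == 0x414141
assert font['~'] == 0x7E7E7E
assert len(font) == 94  # 94 printable ASCII characters
```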
Based on the typography rules above, it becomes apparent that a certain subclass of robots has actual names. That is, for every proper name that is three characters in length, there is a particular robot associated with that name. Allow me to introduce to you a few of my robot friends:
And, of course, a robot is well-suited for representing a "TLA", the three-letter-acronyms of agencies, institutions, businesses, technologies and other slang that have become so omnipresent:
It can be an amusing exercise to try and associate a particular robot shape with its TLA. For instance, do you see a man in a gangster hat with two tommy guns in the FBI robot? Is the IRS robot a complicated mess? Is the PDA robot small? Any such associations or resemblances are entirely coincidental.
The wallpaper images were generated using the box fitting strategy of the applet above.
The T-Shirt image was generated using a fractal subdivision strategy like the one described here.
The concept for this project dates back to my early days as an Apple ][ game programmer, though it had all but been forgotten. I credit Jared Tarbell's presentation and write-up of the Flash-based Invaders Fractal for the motivation needed to resurrect it, extend it, and explore it further with more modern techniques. Tarbell's entire site and numerous works are all highly recommended.
© 2006 Dave Bollinger