Making Interactive Art
[gn_pullquote align="left"]Your task in designing an interactive artwork is to give your audience the basic context, then get out of their way. Arrange the space, put in the items through which they can take action, suggest a sequence of events through juxtaposition. Remove anything extraneous.[/gn_pullquote]Interactive art is different from other forms of art. Traditionally, a work of art is an expression, a statement. While it’s true that making art of any kind is an expression, interactive art is a little different. The goal isn’t to interpret your own work, telling participants what each element in your design “means” or what they are expected to do with these elements. You can’t pre-script the experience, because a good piece of interactive art will be open enough to allow participants to have a unique experience based on their own ways of expressing themselves through what you’ve set up for them. [gn_pullquote align="right"]Once you’ve made your initial statement by building the thing or the environment and designing its behaviors, shut up. Let the audience listen to your work by taking it in through their senses.[/gn_pullquote]
Your audience becomes part of the performance by completing the work through how they respond to what you’ve made. Find ways to suggest a course of action without being explicit with instructions. The process of uncovering the secrets and nuances of your design is what will provide an emotional interpretation and context for your work. In this way, this art form is similar to more traditional forms: you put your work in front of the audience, and it is up to them to decide how they will interpret the parts and how they will respond. Let them speak through their actions. Your job is to listen and observe. This process of “talking” and “listening” is what forms the communicative basis for your work.
From Tom Igoe’s Blog, Physical Computing’s Greatest Hits (And Misses)
Theremin-like Instruments

Musical instruments are great physical interaction projects because you can’t think about your actions when you make music; you have to think about the music. The theremin is usually the first instrument people build because it’s the simplest to make: attach a photocell or distance-ranging sensor to a microcontroller’s analog input, send the results into a synthesizer or music program, and you’re done. When you wave your hands above the cells, they block the light and you generate music. The results can be very pleasurable.
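The heart of that sketch is the mapping from sensor reading to pitch. As a minimal illustration (the 10-bit 0–1023 range and the note bounds are assumptions, not any particular synth’s API), here is how an analog reading might be scaled to a MIDI note number:

```python
def sensor_to_midi_note(reading, low_note=36, high_note=84):
    """Map a 10-bit analog reading (0-1023) to a MIDI note number.

    Clamps out-of-range readings so a noisy sensor can't produce
    notes outside the chosen range.
    """
    reading = max(0, min(1023, reading))
    return low_note + (reading * (high_note - low_note)) // 1023
```

A microcontroller or music-program loop would call this on each new reading and send the result on to the synthesizer.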
Gloves

Gloves are popular because they’re fun, and because they relate to gestures we are already comfortable with: movements of our hands.
Probably the easiest way to add sensors to a glove is to attach force-sensing resistors to the fingertips. These let you sense tapping of the fingertips. If you need to sense the bend of the fingers, you can attach flex sensors along their length. Some variations use an LED at each fingertip, a photocell or photodiode at the wrist, and a length of fiber optic cable connecting them. If the fiber optic cable is scored at each knuckle, it lets light escape in proportion to the bend of the finger.
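A flex sensor is typically read through a voltage divider on an analog input. As a sketch of the arithmetic involved (the 10k fixed resistor and the flat/bent resistance values are placeholder assumptions; real sensors need calibration):

```python
def divider_to_resistance(adc, r_fixed=10_000.0, adc_max=1023):
    """Infer the flex sensor's resistance from a voltage-divider reading.

    Assumes the flex sensor is on the high side of the divider, so
    Vout = Vcc * r_fixed / (r_flex + r_fixed), and the ADC reading is
    proportional to r_fixed / (r_flex + r_fixed).
    """
    adc = max(1, min(adc_max, adc))  # avoid division by zero
    return r_fixed * (adc_max - adc) / adc

def resistance_to_bend(r_flex, r_flat=25_000.0, r_bent=100_000.0):
    """Linearly map resistance to an approximate bend: 0.0 (flat) to 1.0 (fully bent)."""
    t = (r_flex - r_flat) / (r_bent - r_flat)
    return max(0.0, min(1.0, t))
```

In practice you would record the readings for a flat and a fully bent finger and substitute those measured values for the placeholder constants.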
Floor Pads

Dancing is one of the most enjoyable forms of physical expression, and the easiest way to sense it is by sensing where you land. All you need to make a floor pad is a few switches on or under a floor.
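Reading such a pad amounts to scanning the switches and working out which grid positions are stepped on. A minimal sketch, assuming the switches are read row by row into a flat list:

```python
def pressed_positions(switch_states, columns):
    """Given a flat list of switch states (True = stepped on), read row
    by row, return the (row, col) grid positions currently pressed."""
    return [(i // columns, i % columns)
            for i, pressed in enumerate(switch_states) if pressed]
```

The output positions can then trigger sounds, lights, or game events.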
Video Mirrors

Video mirrors are the simplest computer vision project you can do. They’re very pretty, and you can stare at them all day, but there’s not much structured interaction. They simply mirror your action.
Processing and Max/MSP/Jitter are really excellent tools for creating these.
Mechanical Pixels

Mechanical pixels are a follow-on from video mirrors. Once you can move one thing, the next step might be to move lots of things and make a picture out of them. Like video mirrors, projects on this theme tend to offer little in the way of structured interaction. The best ones are examples of simple behaviors, with a strong focus on the aesthetics of the look, the behavior, and the sound. The trick to doing this well is to have mechanical precision, money, and patience.
Scooby-Doo Paintings

This idea, an interactive painting or display that responds to the viewer’s action, is very popular. The simplest variation has a distance sensor built into the frame, and a change in the sensor’s reading triggers the painting to take action.
The most common mistake made by designers of this type of project is to confuse presence with attention. Presence is easy to sense, as described above. It’s harder to tell whether someone’s paying attention, though. More sophisticated variations on this theme use a camera instead of a distance sensor to detect a face and eyes. You’re still guessing about attention, but if you see a face and eyes, you can assume the person’s at least looking in the direction of the display (assuming they can see).
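Even the simple presence case has a subtlety: a viewer standing right at the trigger distance will make a raw threshold flicker on and off. A common fix is hysteresis, sketched below (the centimeter thresholds are arbitrary example values):

```python
class PresenceDetector:
    """Presence detection from a distance sensor, with hysteresis so a
    viewer hovering near the threshold doesn't trigger the painting
    on and off repeatedly."""

    def __init__(self, near_cm=150.0, far_cm=180.0):
        self.near_cm = near_cm   # closer than this: present
        self.far_cm = far_cm     # farther than this: gone
        self.present = False

    def update(self, distance_cm):
        if not self.present and distance_cm < self.near_cm:
            self.present = True
        elif self.present and distance_cm > self.far_cm:
            self.present = False
        return self.present
```

Between the two thresholds, the detector keeps its previous state, so small sensor jitter doesn’t restart the painting’s behavior.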
Body-as-Cursor

These projects involve the user by tracking their movement in a defined space and mapping that movement to a visual or audio response, usually onscreen at the periphery of the space. There are two common technical variations: tracking the participant with distance rangers that ring the perimeter of the space, or using a camera mounted over the top of the space and computer vision to track the participant in two dimensions. Whether it’s done with video tracking or distance rangers, the effect is the same. A person moves in a space, and their position in that space affects the output. Their whole body is effectively a cursor. Interaction is generally limited to step-and-observe. The interaction affords movement in a large space but tends to ignore the gestures and poses that make up our body language.
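Whichever sensing method you use, the core of the piece is the mapping from a position in the room to a position in the output. A minimal sketch of that mapping (units and dimensions are illustrative):

```python
def space_to_screen(x, y, space_w, space_h, screen_w, screen_h):
    """Map a tracked position in the physical space (same units as
    space_w and space_h) to pixel coordinates on the output display.
    Positions outside the space are clamped to its edges."""
    nx = max(0.0, min(1.0, x / space_w))
    ny = max(0.0, min(1.0, y / space_h))
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

Everything interesting in these pieces happens in what you do with the resulting cursor position.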
Hands-as-Cursors

These are variations on video mirrors and body-as-cursor in which a camera tracks your hands and causes a projected graphical user interface to react accordingly. Hand-as-cursor offers more in the way of expression than body-as-cursor or mirrors. These projects usually track the user’s hands, arms, or feet and respond to specific gestures. Wand-driven interfaces are a subset of this type of project. For example, a video camera can track two lit balls held in the hands and take action based on the position and movement of the balls. This approach relies on computer vision and two easily distinguishable points on the body. Using lights as the tracked points lets you filter out everything but the two brightest blobs in the camera’s view, simplifying the tracking.
All video tracking projects face environmental limits. In order to recognize or track an object moving in the camera’s field of view, you need to filter out the background. This can be done by using a uniquely colored object, by using an object that emits infrared or ultraviolet light and filtering out visible light at the camera, or by comparing the current frame to a pre-determined reference frame of what the space looks like empty.
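The reference-frame approach is simple frame differencing. As a sketch of the idea, using plain 2D lists of grayscale values in place of real camera frames (the threshold is an assumption you would tune for your lighting):

```python
def foreground_mask(frame, reference, threshold=30):
    """Compare a grayscale frame (2D list of 0-255 values) against a
    reference frame of the empty space. True marks pixels that differ
    by more than the threshold, i.e. probable foreground."""
    return [[abs(p - r) > threshold for p, r in zip(f_row, r_row)]
            for f_row, r_row in zip(frame, reference)]
```

Real installations add wrinkles this sketch ignores, such as re-capturing the reference frame when the ambient light changes.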
Multitouch Interfaces

Multitouch surfaces are the tangible version of hand-as-cursor. A sensing surface that can sense more than one point of contact at a time is the basis for this theme. There are many ways to create such a system. All of them take a good bit of work and a fair amount of tuning. One variation involves flooding the surface with infrared light from behind or from the side, and watching the surface with a camera fitted with a filter that blocks visible light and passes infrared. Hands and other objects that touch the surface show up distinctly from the background.
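Once the camera sees touches as bright regions against a dark background, counting contact points amounts to counting connected regions in the thresholded image. A small flood-fill sketch of that step, on a boolean mask standing in for the thresholded camera frame:

```python
def count_touches(mask):
    """Count connected regions of True cells (4-connectivity) in a 2D
    boolean mask, such as a thresholded infrared camera image of a
    multitouch surface. Each region is one contact point."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    touches = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                touches += 1          # found a new region; flood-fill it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return touches
```

A full system would also report each region’s centroid so the interface knows where the touches are, not just how many there are.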
Another variation involves using multiple distance sensors tracking the perimeter just above the surface, as with body-as-cursor. These tend to be difficult to maintain, however.
A third variation uses a field of capacitive touch sensors that trigger when a person touches the surface. The biggest drawback, besides the difficulty of tuning them, is that a flat surface offers no tactile feedback to guide you in using the interface.
Tilty Stands and Tables
These are flat surfaces that respond when you tilt them. The tilty table is usually a table with an accelerometer or tilt sensor built in and a projection on its surface. The projection reacts to the physical tilt of the table as if it shared the physics of the table. The tilty stand is a surface that the user stands on, balancing himself to navigate a two- or three-dimensional space. The stand is physically challenging to operate because it upsets the user’s balance, and it’s endless fun to play when coupled with a game that reacts in real time.
Tilty controllers react to the tilt of an object in your hand. Generally, these are best when designed for a specific action, but the Wiimote has managed to blow that away by mapping a generic controller to a whole range of specific behaviors. Like tilty tables, these are usually made with an accelerometer or gyroscope if you need the angle of the tilt, or a ball switch if you don’t.
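If you do need the angle, the usual trick is to estimate pitch and roll from the direction of gravity in the accelerometer reading. A sketch of that math (valid only when the object is roughly still, so gravity dominates the reading):

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll in degrees from a 3-axis accelerometer
    reading. Any consistent units work, since only the ratios between
    axes matter. Assumes the device is near-static, so the reading is
    dominated by gravity rather than motion."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll
```

A tilty table projection would feed these angles into its physics simulation each frame.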
Things You Yell At
People take great visceral pleasure in yelling at things. Projects that react to a yell are very satisfying, even though the interaction is very simple. Measuring the sound level is easy, using a microphone connected to any computing device, whether it’s a microcontroller, personal computer, or mobile phone. The advantage of using a mobile phone or a personal computer is that, if you want to react to something more than the sound level, you can do so relatively easily. Pitch detection and voice recognition are too computationally intensive for a microcontroller.
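Sound level is usually measured as the RMS of a short block of samples. A minimal sketch of a yell detector (the threshold is an arbitrary value you would tune to the room and microphone):

```python
def rms_level(samples):
    """Root-mean-square level of a block of audio samples (floats, -1.0 to 1.0)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def is_yell(samples, threshold=0.3):
    """True when the block's RMS level exceeds the yell threshold."""
    return rms_level(samples) > threshold
```

On a microcontroller the same idea works on raw ADC values, after subtracting the mid-scale offset.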
Meditation Helpers

Meditation helpers are objects, systems, or rooms that react to your physiological state to guide you into a more meditative state of mind. The problem with many of them is that a machine can’t read your state of mind. You can read breath rate (through a microphone or a stretch sensor around the chest), galvanic skin response (by measuring the resistance across the skin), heart rate (using a heart rate monitor or pulse oximeter), or posture (using accelerometers). Reading involuntary reactions like this doesn’t tell you the meditator’s state of mind, but it lets you make some guesses and take action based on those guesses.
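As one example of turning such a signal into a guess, breath rate can be estimated from a chest stretch sensor by counting how often the signal rises past a threshold. A sketch, assuming the signal has been centered around zero:

```python
def breaths_per_minute(samples, sample_rate_hz, threshold=0.0):
    """Estimate breath rate from a chest stretch-sensor signal by
    counting rising crossings of a threshold. Assumes the signal has
    been centered so the threshold sits mid-breath; a real system
    would also low-pass filter the signal first."""
    rising = sum(1 for a, b in zip(samples, samples[1:])
                 if a <= threshold < b)
    duration_min = len(samples) / sample_rate_hz / 60.0
    return rising / duration_min
```

A slow, steady rate might then dim the lights or soften the sound; a fast one might prompt the room to respond differently.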
Fields of Grass
Fields of grass are arrays of sensors, generally in a grid, that you run your hand over and touch to make music, light, or some other output. They can be difficult to do well because they generally require a large number of sensors, actuators, or both. In addition, the sensors and actuators need to be small so the stalks can be close together. The best results are usually achieved by attaching multiple stalks to one actuator, and using as few sensors as you can get away with to give the impression of individual stalk response without having to make each one respond individually.
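The grouping trick at the end, several stalks driven per actuator and covered per sensor, is just index arithmetic. A minimal sketch (the group size of four is an arbitrary example):

```python
def sensor_for_stalk(stalk_index, stalks_per_sensor=4):
    """Which sensor covers a given stalk, when stalks are wired in groups."""
    return stalk_index // stalks_per_sensor

def stalks_for_sensor(sensor_index, stalks_per_sensor=4):
    """Which stalks should respond when a given sensor fires."""
    start = sensor_index * stalks_per_sensor
    return list(range(start, start + stalks_per_sensor))
```

Firing the whole group with slight per-stalk delays helps preserve the illusion that each stalk responds individually.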
Dolls and Pets
These are things you pet, touch, and otherwise handle, and that respond with anthropomorphic behavior. Interactive dolls and pets are popular because we like things that appear to behave like us.
LED Fetishism

Who hasn’t made a project that’s a grandiose version of the blinking LED? We’re all guilty of doing the gratuitous LED project, because it’s too much fun. When (not if) you do it, make it interesting.