Continuing this week’s labs in Physical Computing, which use the Arduino Integrated Development Environment (IDE) to control the flow of electricity through simple circuits, this second post explores combining analog input with digital input and output.
Lab #2 Analog Input
A simple potentiometer (the most basic variable resistor/analog input) is added and wired to an analog pin. An LED is used as an indicator. Note the fixed resistor between the LED and the digital output.
Video of the code and the LED changing brightness.
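The logic of that sketch is simple enough to show here. The pin numbers below are assumptions for illustration, not necessarily the ones in my photos; the heart of it is scaling the 10-bit analog reading down to the 8-bit range that PWM output expects:

```cpp
// Core of the sketch: analogRead() returns a 10-bit value (0-1023),
// while analogWrite() expects an 8-bit PWM value (0-255), so the
// potentiometer reading is divided by 4 before driving the LED.
int potToBrightness(int reading) {
    return reading / 4;
}

/* The surrounding Arduino sketch looks roughly like this (pin numbers
   are my assumption, not necessarily the ones pictured):

   const int potPin = A0;  // potentiometer wiper on an analog pin
   const int ledPin = 9;   // LED (through its fixed resistor) on a PWM pin

   void setup() { pinMode(ledPin, OUTPUT); }

   void loop() { analogWrite(ledPin, potToBrightness(analogRead(potPin))); }
*/
```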
Now for some more complicated programming: the input from a sensor, in this case a TMP36 temperature sensor, is mapped according to specific parameters and output to three different LED indicators.
The sensor was given a baseline temperature to read (20 degrees C), and the code was told to increase the brightness on the first, then the second, then the third LED as the temperature rose in 2-degree increments.
The initial baseline was too low because my apartment was warmer (in the video above you can see the room reading around 27), so I raised that value in the code, which let the circuit respond only to a direct application of increasing temperature near the sensor.
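The TMP36 makes the conversion straightforward: it outputs 500 mV at 0 degrees C and gains 10 mV per degree, read here through a 10-bit ADC on a 5 V reference. Below is a sketch of that conversion plus the thresholding described above, simplified to counting lit LEDs rather than fading them, with the raised baseline as a parameter:

```cpp
// TMP36: 10 mV per degree C with a 500 mV offset, read through a
// 10-bit ADC (0-1023) against a 5 V reference.
float readingToCelsius(int reading) {
    float volts = reading * 5.0f / 1024.0f;
    return (volts - 0.5f) * 100.0f;
}

// Each LED joins in as the temperature climbs 2 degrees at a time past
// the baseline: LED 1 at the baseline, LED 2 at baseline + 2, LED 3 at
// baseline + 4. Returns how many of the three indicators should be lit
// (the lab code faded them with analogWrite; this is the on/off gist).
int ledsLit(float tempC, float baseline) {
    if (tempC >= baseline + 4.0f) return 3;
    if (tempC >= baseline + 2.0f) return 2;
    if (tempC >= baseline)        return 1;
    return 0;
}
```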
For this week’s labs in Physical Computing at ITP, we began using the Arduino Integrated Development Environment (IDE) to control the flow of electricity through simple circuits.
Lab #1 Digital Input and Output
Above, the breadboard is wired to the Arduino. No power is being supplied at this point, as the circuit is just beginning. Below, a switch has been added and linked to a digital “pin” on the microcontroller, to be programmed as input.
Two LEDs are added to additional digital pins on the Arduino, this time for output.
Now I’ve plugged the Arduino in via USB to my laptop.
Basic code is written in the Arduino IDE and uploaded to the microcontroller, assigning roles to the pins (and hence to the switch and LEDs). The loop function tells the LED on pin #3 to turn on (HIGH) when the switch is pressed (pin #2 = HIGH), and keeps the LED on pin #4 glowing in all other cases (here the only other case is when the switch is released/not pressed). Click here for a video of the circuit in action.
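That loop reduces to a single branch. The decision is isolated below as a plain function using the pin numbers from the post, with the Arduino glue shown in a comment:

```cpp
// Which LED is on for a given switch state: pin 3 while the switch is
// pressed, pin 4 otherwise (the pin assignments described in the post).
const int SWITCH_PIN = 2, LED_PRESSED = 3, LED_RELEASED = 4;

int litLed(bool switchPressed) {
    return switchPressed ? LED_PRESSED : LED_RELEASED;
}

/* Inside the Arduino sketch this is essentially the whole loop():

   void loop() {
       if (digitalRead(SWITCH_PIN) == HIGH) {
           digitalWrite(LED_PRESSED, HIGH);
           digitalWrite(LED_RELEASED, LOW);
       } else {
           digitalWrite(LED_PRESSED, LOW);
           digitalWrite(LED_RELEASED, HIGH);
       }
   }
*/
```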
And finally, an experiment with a found-object switch. Click on the image for video of the final version of my wolf-head switch.
2nd Sketch for Introduction to Computational Media, Fall 2014
With this Processing assignment I was looking to start from a found image and then replicate some everyday light phenomena using the additional tools of Variables and Conditionals that we added this week in Introduction to Computational Media.
Variables allow for manipulation of data and are the beginning of animating elements within a sketch. For instance, the variable “mouseX” (built into Processing) lets you use the position of the mouse along the X axis to change behavior. In my case I used it to alter the color fill that served as my shadow.
Conditionals allow changes to be implemented under certain circumstances. For instance, in my sketch I used a mouse press as the condition (or event) that triggers the lights coming on in the buildings. Pressing the mouse again turns them off.
Here are some examples of code I was experimenting with:
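The originals were screenshots, so in their place here is the gist of the two techniques, reduced to standalone C++ functions (the names are mine, for illustration): mouseX driving the shadow’s gray value, and a press flipping the building lights.

```cpp
// Shadow fill driven by the mouse: map an x position across the sketch
// width (Processing's mouseX and width variables) onto a 0-255 gray
// value, clamped so the fill stays in range.
int shadowGray(int mouseX, int width) {
    if (width <= 0) return 0;
    int g = mouseX * 255 / width;
    if (g < 0)   g = 0;
    if (g > 255) g = 255;
    return g;
}

// Window lights toggled by mouse presses: each press flips the state,
// the way mousePressed() can flip a boolean in a Processing sketch.
bool toggleLights(bool lightsOn) {
    return !lightsOn;
}
```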
When Magdalena, Gabriel, and I sat down to exchange ideas it was clear structure was trumping concept. We were trying to find a starting point for an interactive sound piece that we were tasked to create together. We kept circling around possible forms for the piece, no matter how hard we tried to find a way towards an underlying theme. Eventually, we decided to each collect sounds that spoke to us and hoped to discover what unified them.
When our group came back together, we had an excellent menagerie of sounds. A menagerie that was diverse and unrelated: the basis for a collage. We began arranging, rearranging. In the end, we arrived at this:
This collage will be the center of an interactive installation, Soundcode.
This blog documents my time at the Interactive Telecommunications Program, where I’ll be until (at least) 2016. You might ask why, with a background (primarily) in theater-making and puppetry, I chose to pursue this course of study — or put another way, “What is it about the potentials of interactive technology that drew you to ITP?”
Last fall I knew I would be passing through New York City, and if there was a singular thing I needed to accomplish it was experiencing Sleep No More (click through if you don’t know anything about Punchdrunk’s show). So I did. Amongst all my reactions and impressions and inspirations, most of which aren’t relevant here, one stood out with resounding intensity: I couldn’t affect the world I had been invited/drawn/yanked into that night. Punchdrunk are masters, masters of crafting a palpable, encompassing environment that (I agree) assumes its own reality around you. There were six massive floors to be traversed, every room filled with curios, oddities and detritus. You could pick it up, move through doors, brush aside curtains. I loved this quality of Sleep No More. But once I was living in that world, I wanted to be able to use my curiosity, my bravery, my ingenuity to participate. This wasn’t possible; instead I felt like a ghost that can travel through the life it knew but struggles with futility and despair to engage with familiar people and places that cannot hear or feel its presence. I’m exaggerating here, but the bottom line is that Sleep No More introduced a fascinating form of audience immersion, but it was not interactive (which is less a criticism than an aspiration).
Physical interaction requires that your presence as a body or a voice in an environment has a measurable effect. If I speak in a room and the lights go out in response to that sound, a form of interaction has taken place. Chris Crawford would represent this process using three aspects of a conversation: as I’m speaking, sensors in the room are picking up my voice or “listening to what I’m saying”, then they are converting that speech to electrical signals to execute an operation or “thinking about what was said”, and the change in the lighting of the room is “responding to what was said”. Crawford insists that these three aspects are necessary for something to qualify as an instance of interactivity. I would agree — in the case of physical interaction there is none if the active presence of one participant does not have any effect on (does not change in some observable way) the other. If I were to press a surface, walk through a fabric, or pick-up a book I would not call any of those situations interactive unless my pressing, walking, or picking-up caused a change in the object or my environment.
Bret Victor, in “A Brief Rant on the Future of Interaction Design”, makes the additional case that quality interactions only occur when our non-human partner in the exchange (typically a piece of technology, or in my anecdote above the theatrical production by Punchdrunk) has been created with the fullest potential of human capabilities in mind (whatever capabilities will apply in the current situation/problem). His examples lie with technology like touch-screens that do poor justice, in his estimation, to the capabilities of the hand that will be using them. I resonate with Victor’s rant; I would hope for (and I hope to envision here at ITP) a world where designed objects and environments take the fullest capabilities of my presence into account. When you are immersed/engaged with something your senses are heightened and you become more curious, responsive, and inspired to take risks or make unexpected choices. Quality design will be prepared to accommodate those risks and choices to the degree possible within the constraints of resources at hand (time, space, materials). In my estimation the real pay-off of interactivity comes into play with the effects of this accommodation.
Processing is an unknown country for me. But one place I have lived is with a bit of drawing, and some shadows. In fact I’ve spent a lot of time considering the relationship of darkness to light, the scale from pure white to full black, whether in the work of William Kentridge or Roberto Casati. So when we were asked for our first assignment in Introduction to Computational Media at the Interactive Telecommunications Program to create a Processing sketch using the functions we’re becoming familiar with (sizing, simple 2D shapes, and basic coloration and grayscaling) I immediately felt I could give myself some grounding by exploring that known quantity, shading.
As many of you know, Processing is (as defined on its website) “a programming language, development environment, and online community.” Processing is built on another language, Java. In our first class session and early readings we were introduced to a handful of key Java-based functions that give instructions to create what’s called a “sketch”, functions such as:
size(): sets the size, in pixels, of the window for your sketch.
rect(): draws a rectangle based on specified coordinates.
fill(): specifies a color for the shapes that follow in the code.
I decided that I would depict a value gradient from black to white using the RGB scale which sets the former at “0” and the latter at “255”. I began with a black background:
Then, I began building incrementally smaller rectangles, filled with increasing amounts of red + green + blue (white) values.
After creating several steps, I realized that 1) I wouldn’t have enough sizes of rectangle to allow for a large value gradient (wouldn’t come close to 255), and 2) that the transition from one value to the next was too harsh. So I changed my code to reflect subtler variations. I also added an integer to each rect() instruction to round the edges of each shape.
At this point, I began to wonder whether there was a way to auto-generate these shapes, since the increments I was using for both size and value were fixed. I did some preliminary digging in the Processing Reference but could not locate what I needed, and decided to bring the question to Dan (Shiffman) at the next class. I continued what I now realized was the extra labor of writing out each step longhand until I had reached the full value range. As a final enhancement I included a function to remove the black outlines on each rectangle, which made the overall image more dramatic; a quality of depth and dimensionality emerged quite spontaneously, reminding me of perceptual theories connecting shadows to depth and how we depend on them to see.
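For the record, the auto-generation I was hunting for is essentially a for loop: pick a number of steps and derive each rectangle’s inset and fill value from the loop index. Here is the arithmetic, written in plain C++ so it stands alone (the step count and inset are made up for illustration); in Processing each step would become a fill() call followed by rect():

```cpp
#include <vector>

// One concentric step of the gradient: the rectangle's inset from the
// window edge and its gray fill (0 = black background, 255 = white).
struct Step { int inset; int gray; };

// Auto-generate the rectangles I was writing out longhand: n steps,
// evenly spaced in both size and value, lightest at the center.
std::vector<Step> gradientSteps(int n, int maxInset) {
    std::vector<Step> steps;
    for (int i = 1; i <= n; ++i) {
        Step s;
        s.inset = maxInset * i / n;  // each rectangle shrinks evenly
        s.gray  = 255 * i / n;       // and brightens toward white
        steps.push_back(s);
    }
    return steps;
}
```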
Inspired by my classmates, attempt some more graphic illustrations.
Utilize the grid geometry of the sketch environment more thoroughly.
Venture into something more figurative or character-based.
In 2005 Janet Cardiff composed Her Long Black Hair, an audio tour for one through Central Park in New York City. About a year ago my friend Janice referred to the piece in conversation one evening in Seattle, speaking with the reverence one holds for experiences that’ve carved themselves into memory, like initials on trees. She had taken the tour while it was installed in the park. Not long after Janice mentioned it, I heard Cardiff interviewed on To the Best of our Knowledge, in an episode called “More Wonder”. Wonder being central in my conceptual landscape, and Janice’s reverence still fresh, I became a devotee of her work, but held no hopes of my own encounter with HLBH.
Advance to the present: I’ve moved to NYC to attend the Interactive Telecommunications Program at New York University, and had the pleasure of learning that one of my first assignments is to take an audio tour in the city, and one of the offerings is, of course, HLBH. My wife and I grabbed a train uptown this afternoon with mp3s of Cardiff’s tracks, jpegs of the original photos, and a headphone splitter.
We felt like wrongdoers sitting on the bench where Cardiff asks you to begin the journey. Wrongdoers or drug-users, strangely set apart by something only you’re privy to. Something you’re anxious you may get caught doing. The startle of the sound design, the way it makes use of stereo to immediately surround you, effaced all awareness of our equipment, our posture, our expressions. There it was, that wonder. We were kept at a distance from self-consciousness the whole walk. That removal, which transferred our focus both into the present of the recording and the present of the park around us, away from ourselves, was extraordinary.
The coincidences were extraordinary. Yes, Cardiff timed out the entire path of the tour so that she knew where you’d be at each moment and could refer to landmarks in a satisfying, threads-all-lining-up sort of way, but then there were the coincidences. She’d mention a man with a t-shirt directly ahead, and there’d be a man with a t-shirt, and add to that he was facing away so we could imagine that the slogan Cardiff quoted was actually on his front, if only he’d turn. She wished for us egrets, and there in the water, an egret, regal.
The lingering gift of Her Long Black Hair is hypersensitivity to sound. As we walked from the edge of the park to the subway, we could pick out distinct birdsongs, a woman opening her door, a conversation across the street, the echo of an engine around the corner. We could pick them out, hear them clearly, and place them in front or behind us, in that stereophonic way our ears evolved to hear but our brains learn to flatten. By trusting Cardiff, even when she told us to walk backwards in the middle of Central Park or to walk forwards with our eyes closed, we allowed her to give us back the city in stereo.
A digital archive of Her Long Black Hair provided by Public Art Fund can be found here. It includes all of the audio as well as the photographs originally provided with the tour and a map showing the route.