Programming Inputs and First Sensing, Part 2

Continuing this week's labs in Physical Computing, which use the Arduino Integrated Development Environment (IDE) to control the flow of electricity through simple circuits, this second post explores combining analog input with digital input and output.

Lab #2 Analog Input

IMG_9295

A simple potentiometer (the most basic variable resistor/analog input) is added.  It's connected to an analog pin.  An LED is used as an indicator.  Note the fixed resistor between the LED and the digital output.

IMG_9297

IMG_9298

Video of the code and the LED changing brightness.
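The code behind it is essentially the standard analog-in pattern: read the potentiometer, scale the reading, and write it out as the LED's brightness.  A minimal sketch of that idea (the pin numbers here are my assumptions, not necessarily the ones in the video):

const int potPin = A0;   // potentiometer wiper on analog pin 0 (assumed)
const int ledPin = 9;    // LED on a PWM-capable digital pin (assumed)

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int potValue = analogRead(potPin);                 // 0-1023 from the pot
  int brightness = map(potValue, 0, 1023, 0, 255);   // scale to the PWM range
  analogWrite(ledPin, brightness);                   // set the LED's brightness
}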

 

Now for some more complicated programming: the input from a sensor, in this case a TMP36 temperature sensor, is mapped according to specific parameters and output to three different LED indicators.

IMG_9301

IMG_9302

The code was given a baseline temperature to compare against (20 degrees C) and told to bring up the brightness of the first, then the second, then the third LED as the temperature rose in 2-degree increments.

 

The initial baseline was too low because my apartment was warmer than that (in the video above you can see the room reading hovering around 27).  I raised that value in the code so that the circuit would respond only when heat was applied directly to the sensor.
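The sketch driving this is close in spirit to the following reconstruction (the pin choices, the fading approach, and the exact thresholds are my assumptions, not the precise code):

const int sensorPin = A0;           // TMP36 output (assumed wiring)
const int ledPins[] = {9, 10, 11};  // three LEDs on PWM pins (assumed)
const float baselineTemp = 27.0;    // raised from 20 after testing in my warm apartment

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) {
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  int sensorVal = analogRead(sensorPin);
  float voltage = (sensorVal / 1024.0) * 5.0;   // convert the reading to volts
  float temperature = (voltage - 0.5) * 100.0;  // TMP36: 10 mV per degree C, 500 mV offset
  Serial.println(temperature);                  // the room reading visible in the video

  // each LED fades up across its own 2-degree band above the baseline
  for (int i = 0; i < 3; i++) {
    float level = (temperature - (baselineTemp + i * 2)) / 2.0 * 255.0;
    analogWrite(ledPins[i], constrain((int) level, 0, 255));
  }
  delay(10);
}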

IMG_9305

 

Programming Inputs and First Sensing, Part 1

For this week’s labs in Physical Computing at ITP, we began using the Arduino Integrated Development Environment (IDE) to control the flow of electricity through simple circuits.

Lab #1 Digital Input and Output

IMG_9191

Above, the breadboard is wired to the Arduino.  No power is being supplied at this point, as the circuit is just beginning.  Below, a switch has been added and linked to a digital “pin” on the microcontroller, to be programmed as an input.

IMG_9199

Two LEDs added to additional digital pins on the Arduino, this time for output.

IMG_9205 IMG_9207

Now I’ve plugged the Arduino in via USB to my laptop.

IMG_9230

Basic code written in the Arduino IDE and uploaded to the microcontroller, assigning the roles of the pins (and hence the switch and LEDs).  The loop function tells the LED on pin #3 to turn on (HIGH) when the switch is pressed (pin #2 = HIGH), and keeps the LED on pin #4 glowing in all other cases (here, the only other case is when the switch is released/not pressed).  Click here for a video of the circuit in action.

PcompWeek2Lab1Code
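The screenshot above shows the code as written; the logic amounts to something like this minimal sketch (the pin numbers follow the description, the rest is approximate):

const int switchPin = 2;   // pushbutton wired as a digital input
const int ledPin1 = 3;     // lights when the switch is pressed
const int ledPin2 = 4;     // lights whenever it isn't

void setup() {
  pinMode(switchPin, INPUT);
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {   // switch pressed
    digitalWrite(ledPin1, HIGH);
    digitalWrite(ledPin2, LOW);
  } else {                                // switch released
    digitalWrite(ledPin1, LOW);
    digitalWrite(ledPin2, HIGH);
  }
}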

And finally, an experiment with a found-object switch.  Click on the image for a video of the final version of my wolf-head switch.

IMG_9245

Day to Night in a Processed City

2nd Sketch for Introduction to Computational Media, Fall 2014


 

With this Processing assignment I wanted to start from a found image and then replicate some everyday light phenomena using the additional tools of variables and conditionals that we picked up this week in Introduction to Computational Media.

Variables allow you to manipulate data and begin to animate elements within a sketch.  For instance, the variable “mouseX” (built into Processing) lets you use the position of the mouse along the X axis to change behavior.  In my case I used it to alter a color fill that served as my shadow.

Conditionals allow changes to be implemented only under certain circumstances.  For instance, in my sketch I used a press of the mouse as the condition (or event) that triggers the lights coming on in the buildings.  Pressing the mouse again turns them off.

Here are some examples of code I was experimenting with:

PImage buildings;

void setup() {
  size(900, 613);
  buildings = loadImage("newyork.jpg");  // the found image of New York buildings
}

Buildings Background

float shadow = 0;

shadow = shadow + 0.4;

//NIGHTFALL
if ((mouseX > 360) && (mouseX < 500)) {
  fill(0, shadow);
  rect(0, 0, 900, 613);
}

Adding the Night

//LIGHTS
if (lights) {
  // appearance of the lights defined here using the rect() function
}

void mousePressed() {
  if (lights) {
    lights = false;
  } else {
    lights = true;
  }
}
Adding Lights
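Pieced together, the sketch behaves roughly like the version below.  The excerpts above are what I actually quoted; the draw order, the shadow growing only while the mouse hovers over the middle of the image, and the placeholder window rectangles are my arrangement here:

PImage buildings;
float shadow = 0;        // opacity of the darkness layered over the photo
boolean lights = false;  // toggled by mousePressed()

void setup() {
  size(900, 613);
  buildings = loadImage("newyork.jpg");
}

void draw() {
  image(buildings, 0, 0);

  // NIGHTFALL: the darkness deepens while the mouse hovers over the middle of the image
  if ((mouseX > 360) && (mouseX < 500)) {
    shadow = shadow + 0.4;
  }
  fill(0, shadow);
  rect(0, 0, 900, 613);      // translucent black over the whole photo

  // LIGHTS: windows drawn as small bright rectangles when the lights are on
  if (lights) {
    fill(255, 230, 120);     // a warm window color (stand-in value)
    rect(120, 300, 10, 14);  // placeholder window positions
    rect(150, 300, 10, 14);
  }
}

// each click flips the lights on or off (equivalent to the if/else excerpt above)
void mousePressed() {
  lights = !lights;
}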

Soundcode: Metaphors

When Magdalena, Gabriel, and I sat down to exchange ideas it was clear structure was trumping concept.  We were trying to find a starting point for an interactive sound piece that we were tasked to create together.  We kept circling around possible forms for the piece, no matter how hard we tried to find a way towards an underlying theme.  Eventually, we decided to each collect sounds that spoke to us and hoped to discover what unified them.

I checked out a Zoom H4n Recorder and an Audio Technica AT8035 shotgun mic from the Equipment Room at ITP, and found myself in Greenpoint after dark, waiting for inspiration.

Zoom h4n audio recorder

 

I found some easily enough:

  • 6am church bells at Saint Stanislaus Kostka.
  • My Keurig coffee machine.
  • A washer and dryer at Susan Laundromat.

 

greenpointchurch

When our group came back together, we had an excellent menagerie of sounds. A menagerie that was diverse and unrelated: the basis for a collage. We began arranging, rearranging. In the end, we arrived at this:

This collage will be the center of an interactive installation, Soundcode.  

Presence: Interactivity & Measurable Effects

This blog documents my time at the Interactive Telecommunications Program, where I’ll be until (at least) 2016.  You might ask why, with a background (primarily) in theater-making and puppetry, I chose to pursue this course of study — or put another way, “What is it about the potentials of interactive technology that drew you to ITP?”  

Last fall I knew I would be passing through New York City, and if there was a singular thing I needed to accomplish it was experiencing Sleep No More (click through if you don’t know anything about Punchdrunk’s show).  So I did.  Amongst all my reactions and impressions and inspirations, most of which aren’t relevant here, one stood out with resounding intensity: I couldn’t affect the world I had been invited/drawn/yanked into that night.  Punchdrunk are masters, masters of crafting a palpable, encompassing environment that (I agree) assumes its own reality around you. There were six massive floors to be traversed, every room filled with curios, oddities and detritus. You could pick it up, move through doors, brush aside curtains. I loved this quality of Sleep No More.  But once I was living in that world, I wanted to be able to use my curiosity, my bravery, my ingenuity to participate.  This wasn’t possible; instead I felt like a ghost that can travel through the life it knew but struggles with futility and despair to engage with familiar people and places that cannot hear or feel its presence. I’m exaggerating here, but the bottom line is that Sleep No More introduced a fascinating form of audience immersion, but it was not interactive (which is less a criticism than an aspiration).

Physical interaction requires that your presence as a body or a voice in an environment has a measurable effect.  If I speak in a room and the lights go out in response to that sound, a form of interaction has taken place.  Chris Crawford would represent this process using three aspects of a conversation: as I’m speaking, sensors in the room are picking up my voice or “listening to what I’m saying”, then they are converting that speech to electrical signals to execute an operation or “thinking about what was said”, and the change in the lighting of the room is “responding to what was said”.  Crawford insists that these three aspects are necessary for something to qualify as an instance of interactivity. I would agree: in the case of physical interaction, there is no interaction if the active presence of one participant does not have any effect on (does not change in some observable way) the other. If I were to press a surface, walk through a fabric, or pick up a book, I would not call any of those situations interactive unless my pressing, walking, or picking up caused a change in the object or my environment.

Bret Victor, in “A Brief Rant on the Future of Interaction Design”, makes the additional case that quality interactions only occur when our non-human partner in the exchange (typically a piece of technology, or in my anecdote above the theatrical production by Punchdrunk) has been created with the fullest potential of human capabilities in mind (whatever capabilities apply in the current situation/problem). His examples lie with technology like touch-screens that do poor justice, in his estimation, to the capabilities of the hand that will be using them. Victor’s rant resonates with me; I would hope for (and I hope to envision here at ITP) a world where designed objects and environments take the fullest capabilities of my presence into account. When you are immersed in or engaged with something, your senses are heightened and you become more curious, responsive, and inspired to take risks or make unexpected choices. Quality design will be prepared to accommodate those risks and choices to the degree possible within the constraints of the resources at hand (time, space, materials). In my estimation the real pay-off of interactivity comes into play with the effects of this accommodation.

 

Shades in Processing

First assignment for Introduction to Computational Media, Fall 2014.

 

Processing is an unknown country for me.  But one place I have lived is with a bit of drawing, and some shadows.  In fact I’ve spent a lot of time considering the relationship of darkness to light, the scale from pure white to full black, whether in the work of William Kentridge or Roberto Casati. So when we were asked for our first assignment in Introduction to Computational Media at the Interactive Telecommunications Program to create a Processing sketch using the functions we’re becoming familiar with (sizing, simple 2D shapes, and basic coloration and grayscaling) I immediately felt I could give myself some grounding by exploring that known quantity, shading.

As many of you know, Processing is (as defined on its website) “a programming language, development environment, and online community.” Processing is built on another language, Java. In our first class session and early readings we were introduced to a handful of key Java-based functions that give instructions to create what’s called a “sketch”, functions such as:

  • size(): sets the size, in pixels, of the window for your sketch.
  • rect(): draws a rectangle based on specified coordinates.
  • fill(): specifies a color for the shapes that follow in the code.

I decided that I would depict a value gradient from black to white using Processing's grayscale values, which set the former at 0 and the latter at 255. I began with a black background:

size(640,360);
background(0);

Screen Shot 2014-09-09 at 9.31.47 PM

Then I began building incrementally smaller rectangles, each filled with a progressively higher grayscale value (moving toward white).

fill(45);
rect(10, 10, 620, 340);
fill(60);
rect(20, 20, 600, 320);
fill(75);
rect(30, 30, 580, 300);

Screen Shot 2014-09-09 at 9.41.24 PM

After creating several steps, I realized 1) that I wouldn't have enough rectangle sizes to cover a large value range (I wouldn't come close to 255), and 2) that the transition from one value to the next was too harsh.  So I changed my code to reflect subtler variations.  I also added a fifth argument to each rect() instruction, a corner radius that rounds the edges of each shape.

fill(45);
rect(7, 7, 625, 345, 2);
fill(55);
rect(14, 14, 610, 330, 2);
fill(65);
rect(21, 21, 595, 315, 2);

Screen Shot 2014-09-09 at 9.52.22 PM

At this point I began to wonder if there wasn't a way to auto-generate these shapes, since the increments I was using for both size and value were fixed.  I did some preliminary digging within the Processing Reference but could not locate what I needed, and decided I would present this question to Dan (Shiffman) during the next class.  I continued what I now realized was the extra labor of writing out each of the steps longhand until I had reached the full value range.  As a final enhancement I included a function to remove the black outlines on each rectangle, which made the overall image more dramatic; a quality of depth and dimensionality emerged quite spontaneously, reminding me of perceptual theories connecting shadows to depth and how we depend on them to see.

noStroke();
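For the record, here is the kind of loop I suspected must exist, a sketch of how the rectangles could have been auto-generated (the increments are my approximations of the hand-written steps, not code from the assignment):

size(640, 360);
background(0);
noStroke();

// each pass insets the rectangle by 7 pixels per side and raises the gray value by 10,
// walking the fill from near-black up to white
for (int i = 0; i < 22; i++) {
  fill(45 + i * 10);
  rect(7 + i * 7, 7 + i * 7, 625 - i * 14, 345 - i * 14, 2);
}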

First assignment for Introduction to Computational Media, Fall 2014.

 

Next up:

  • Inspired by my classmates, attempt some more graphic illustrations.
  • Utilize the grid geometry of the sketch environment more thoroughly.
  • Venture into something more figurative or character-based.

 

Full code here.

After Her Long Black Hair

In 2005 Janet Cardiff composed Her Long Black Hair, an audio tour for one through Central Park in New York City. About a year ago my friend Janice referred to the piece in conversation one evening in Seattle, speaking with the reverence one holds for experiences that’ve carved themselves into memory, like initials on trees. She had taken the tour while it was installed in the park. Not long after Janice mentioned it, I heard Cardiff interviewed on To the Best of our Knowledge, in an episode called “More Wonder”. Wonder being central in my conceptual landscape, and Janice’s reverence still fresh, I became a devotee of her work, but held no hopes of my own encounter with HLBH.

Advance to the present: I've moved to NYC to attend the Interactive Telecommunications Program at New York University, and I had the pleasure of learning that one of my first assignments was to take an audio tour in the city. One of the offerings is, of course, HLBH. My wife and I grabbed a train uptown this afternoon with mp3s of Cardiff's tracks, jpegs of the original photos, and a headphone splitter.

We felt like wrongdoers sitting on the bench where Cardiff asks you to begin the journey. Wrongdoers or drug-users, strangely set apart by something only you’re privy to. Something you’re anxious you may get caught doing. The startle of the sound design, the way it makes use of stereo to immediately surround you, effaced all awareness of our equipment, our posture, our expressions. There it was, that wonder. We were kept at a distance from self-consciousness the whole walk. That removal, which transferred our focus both into the present of the recording and the present of the park around us, away from ourselves, was extraordinary.

The coincidences were extraordinary. Yes, Cardiff timed out the entire path of the tour so that she knew where you’d be at each moment and could refer to landmarks in a satisfying, threads-all-lining-up sort of way, but then there were the coincidences. She’d mention a man with a t-shirt directly ahead, and there’d be a man with a t-shirt, and add to that he was facing away so we could imagine that the slogan Cardiff quoted was actually on his front, if only he’d turn. She wished for us egrets, and there in the water, an egret, regal.

The lingering gift of Her Long Black Hair is hypersensitivity to sound. As we walked from the edge of the park to the subway, we could pick out distinct birdsongs, a woman opening her door, a conversation across the street, the echo of an engine around the corner. We could pick them out, hear them clearly, and place them in front or behind us, in that stereophonic way our ears evolved to hear but our brains learn to flatten. By trusting Cardiff, even when she told us to walk backwards in the middle of Central Park or to walk forwards with our eyes closed, we allowed her to give us back the city in stereo.

A digital archive of Her Long Black Hair provided by Public Art Fund can be found here. It includes all of the audio as well as the photographs originally provided with the tour and a map showing the route.

On Originality

Jonathan Lethem, in a 2007 piece for Harper’s, endeavored to make an apology for plagiarism under the title “The Ecstasy of Influence”. In fact he sub-titled his essay “a plagiarism”. And his point in doing so is to be 100% explicit that the best work we can do as artists, whether literary or otherwise, is achieved through direct appropriation from the efforts and output of those who’ve come before us (or for that matter those working alongside us). He’s making this explicit by calling his own labor what it is, and he proves this at the terminus with an exhaustive reference guide that shows exactly where he pulled particular ideas, phrases, or even whole paragraphs. He is ultimately trying to redeem what he asserts is an essential method–really the essential method–by which our culture advances.

Kirby Ferguson, in his celebrated 2012 TED Talk, articulates a similar plea (and I’m paraphrasing), a plea for us to examine what we value in our cultural material and why we have enshrined the particular notions of originality that dominate popular opinion (namely that “original” works arise from some pristine well within us, suffer no influence, and deserve to be protected for the sole benefit of the “originator”).

I’m mentioning these defenses of influence because they were assigned to us for our Comm Lab: Video & Sound course in the Interactive Telecommunications Program at New York University, where, as you may know from the title of this blog or from my previous posts, I am a graduate student. Why were we asked to review these two texts that make this case for collage, remixing, and repurposing work that is often forcefully defined as the intellectual property of another artist or creator? While I can’t claim ownership of the exact intent, I have some speculations.

Creating work at the frontier of what's been done before brings a host of complications. One is how to navigate the waters between what you think you are innovating or originating and all of the inventions and insights that others have made that have facilitated your journey in arriving where you are and hopefully going further. Another complication is the seduction to protect. When territory is gained at the forefront, there's an impulse to claim it and wall it off, not realizing the damage we might do or the lack of true justification for our protectiveness. So engaging with these texts is a tool for facing, thoughtfully and with humility, those complications that will inevitably arise while we're within ITP and once we move beyond it.

I believe in the primacy of meaning. I think art and creative work is stripped of its usefulness in society, of its exquisite power, if it does not mean anything. And when we directly copy something that was considered a work of art or culture, without effort or intent other than glory or gain, we seem to strip away that prior essence, because no meaning remains. But originality isn’t necessary for meaning. In fact, it may be that meaningful creations draw their meat from the parts of themselves stolen from others. Those parts may be what makes them mean, since they allow for reference, context, simile, that spark of “oh I know this, I’ve seen this somewhere else” that can make the otherwise foreign familiar. Again I’m speculating here, but perhaps another function of these texts for us at ITP is to disillusion us of any sense we have that the meaning or import of what we do will be compromised if we have relied on others, been influenced by the discoveries of our classmates, or looked to resources outside ourselves to meet the ends we are striving for.

In “On the Rights of Molotov Man”, which is a pair of essays by Joy Garnett and Susan Meiselas again from Harper’s, I think there is an additional warning: when we enter the realm of the remix, where we fully embrace our reliance on (what some may call) plagiarism and we endeavor to make our plagiaristic movements within a creative act visible, even amplified, we must be excessively vigilant about meaning. Meiselas is worried that the paintings of Garnett have stripped a photograph she took of its context — which I think she is conflating with meaning. I would argue that Garnett’s work has stripped the context, or rather altered it, but has not removed or eschewed meaning. If the meaning has shifted, it is not necessarily less potent, and I believe Garnett was attempting to create paintings that carry as much weight as the photograph from which they were sourced. I agree with Meiselas that context is a value to be protected, that context is a requirement for meaning; what I take away then is that by making sure we know our context, and preserve the integrity of that context (even if it is not the same as that of source material we may be drawing on), we preserve the potential for our work to be meaningful.