
Intro to Physical Computing ⁄ Midterm

In the spirit of Halloween and current political events, Marco and I decided to build a talking cyborg Trump head. The head is meant to detect someone’s presence in front of it and speak. We’ve been having problems hooking up the audio playback, and although it is currently not perfected (an amp is needed to increase the output volume, and a solution to the SPI chip selection causing an unresponsive servo has yet to be found), this is its current state:

Mr. Cyborg Trump provocatively delivers a line originally spoken by Mrs. Clinton: “Apologizing is a great thing, but you have to be wrong. I will absolutely apologize sometime, in the hopefully distant future, if I’m ever wrong.”

Parts

  • Rubber Trump mask
  • Stuffing (cotton, bubble wrap)
  • Arduino UNO
  • IR motion sensor
  • 8 ohm speaker
  • Servo motor
  • Micro SD card reader and 1 GB micro SD card
  • 9 V battery pack

 

Code (Arduino)

#include <SD.h>
#include <pcmConfig.h>
#include <pcmRF.h>
#include <TMRpcm.h>
#include <SPI.h>
#define SD_ChipSelectPin 4
TMRpcm audio;

//using an orange-orange-gold (3.3 ohm) resistor for the IR sensor
//don't delay less than 250 ms between openMouth & closedMouth

#include <Servo.h>
Servo mouth;

int speakerPin = 9; //digital out (TMRpcm on the Uno outputs on pin 9)
int servoPin = 2; //digital out
int sensorPin = A0; //analog in

int closedMouth = 45;
int openMouth = 180;
int sensorStrength = 500;
unsigned long lengthOfPhrase = 9000; //in milliseconds
unsigned long timePhraseBegin = 0;
boolean movementDetected = false;
boolean speakOnce = false;

void setup() {
  audio.speakerPin = speakerPin;
  Serial.begin(9600);
  mouth.attach(servoPin); 
  mouth.write(closedMouth);

  if (!SD.begin(SD_ChipSelectPin)) {  // see if the card is present and can be initialized:
    Serial.println("SD fail");  
    return;   // don't do anything more if not
  } else {
    Serial.println("SD success"); 
  }
  audio.setVolume(5);
}

void loop() {
  unsigned long timePassed = millis() - timePhraseBegin;
  
  int sensorRead = analogRead(sensorPin);
  
  if(sensorRead > sensorStrength){
    if(!movementDetected){
      Serial.println("ON.");
      timePhraseBegin = millis();
    }
    movementDetected = true;
  } else {
    if(movementDetected && timePassed >= lengthOfPhrase){
      Serial.println("OFF.");
      movementDetected = false;
      speakOnce = false;
    }
  }

  if(movementDetected){
    
    //move mouth
    mouth.write(openMouth);
    delay(250);
    mouth.write(closedMouth);
    delay(250);

    //speak
    if(!speakOnce){
      audio.play("0000.wav");
      speakOnce = true;
    }
    
  } else {
    //close mouth
    mouth.write(closedMouth);

    delay(250);
  }
}


Intro to Physical Computing ⁄ Week 12 ⁄ Final

Synesthesia VR

Synopsis

Escape into another world.

Synesthesia VR is a device that enables you to experience your surroundings in a new and exciting way. By immersing yourself in this isolated virtual space, you are liberated from the shackles of order and illusions of comprehension, and freed into the higher realm of pure data.

The headset equips you with your very own prosthetic aural and visual sensory inputs, which exchange data before feeding into your native senses, in order to ensure a complete lack of comprehension of, and thus disconnection from, our mundane reality.

Parts

Code

The final Arduino code is below. It uses the Goldelox Serial 4D library (Goldelox is the graphics processor in the LCD displays used here; the library implements what 4D Systems calls “4DGL”, which makes interfacing with the displays very easy), as well as Sparkfun’s dedicated library for the SFE_ISL29125 RGB light sensor. Code for the sound detector can be referenced from here.

/*
 * __________//______//______//____/////____/////__
 * _______////____///__///__//__________//__________
 * ____//____//__///////__//__________//__________
 * ___//////__//__/__//__//__________//__________
 * __//____//__//______//____/////____/////__
 * ___________________________________________________________
 * __________ Copyright (c) 2016 Andrew McCausland __________
 * ________________ <andrewmccausland.net> _________________
 * ________________________________________________________
 * 
 * To upload new code:
 * 1. Disconnect main display (the one that's directly hooked to RX/TX)
 * 2. Disconnect TX connection to display 2
 * 3. Upload
 * 4. Reconnect main display, wait for it to begin visualization
 * 5. Reconnect TX connection to display 2.
 * 
 */

// ------------------------------ visual output (display stuff)
#include "Goldelox_Serial_4DLib.h"
#include "Goldelox_const4D.h"
#define DisplaySerial Serial // The compiler will replace any mention of DisplaySerial with the value Serial 
Goldelox_Serial_4DLib Display(&DisplaySerial);
int width = 127;
int height = 127;
int visOutSwitch = 0;

// ------------------------------ sound input stuff
#define PIN_GATE_IN 2
#define IRQ_GATE_IN  3
#define PIN_LED_OUT 13
#define PIN_ANALOG_IN A0

// ----------------------------- visual input (rgb sensor)

#include <Wire.h>
#include "SFE_ISL29125.h"
SFE_ISL29125 RGB_sensor;

// ----------------------------- sound output

int auxOutPin = 6;
int auxOutSwitch = 0;

void setup() {
  
  // ------------------------------ visual output (displays)
  Display.Callback4D = mycallback;
  Display.TimeLimit4D = 5000;
  DisplaySerial.begin(9600);

  while (!Serial) {
    ; // wait for serial port to connect. Needed for native USB port only
  }
  
  delay(10000); // buffer time to let the display start up

  Display.gfx_ScreenMode(LANDSCAPE);

  // ------------------------------ sound input
  pinMode(PIN_LED_OUT, OUTPUT);
  pinMode(PIN_GATE_IN, INPUT);
  attachInterrupt(IRQ_GATE_IN, soundISR, CHANGE);

  // ------------------------------ visual input (rgb sensor)
  
  RGB_sensor.init();
}

void loop() {

  // ------------------------------ for sound input
  int value = analogRead(PIN_ANALOG_IN);

  // ------------------------------ for visual output (display)

  unsigned int blueColors[4] = {MIDNIGHTBLUE,BLUE,DEEPSKYBLUE,LIGHTBLUE};
  unsigned int greenColors[4] = {DARKGREEN,GREEN,GREENYELLOW,LIGHTGREEN};
  unsigned int redColors[4] = {DARKRED,CRIMSON,RED,LIGHTCORAL};
  int brightness = map(value, 0, 600, 0, 3);

  if(value > 80 && value <= 100){
    
      Display.gfx_RectangleFilled(0, 0, width, height, blueColors[brightness]);
      visOutSwitch = 0;
      
  } else if(value > 100 && value <= 200){
    
    if(visOutSwitch == 0){
      Display.gfx_RectangleFilled(0, 0, width, height, blueColors[brightness]);
      visOutSwitch++;
    } else {
      Display.gfx_RectangleFilled(0, 0, width, height, greenColors[brightness]);
      visOutSwitch = 0;
    }
    
  } else if(value > 200 && value <= 600){
    
    if(visOutSwitch == 0){
      Display.gfx_RectangleFilled(0, 0, width, height, blueColors[brightness]);
      visOutSwitch++;
    } else if(visOutSwitch == 1){
      Display.gfx_RectangleFilled(0, 0, width, height, greenColors[brightness]);
      visOutSwitch++;
    } else {
      Display.gfx_RectangleFilled(0, 0, width, height, redColors[brightness]);
      visOutSwitch = 0;
    }
    
  } else if(value > 600){
      Display.gfx_RectangleFilled(0, 0, width, height, WHITE);
      visOutSwitch = 0;
  }

  if(value > 5){
    Display.gfx_Cls(); //clear the screen
  }


  // ------------------------------ for visual input (rgb sensor)
  unsigned int red = RGB_sensor.readRed();
  unsigned int green = RGB_sensor.readGreen();
  unsigned int blue = RGB_sensor.readBlue();

  // ----------------------------- for sound output
  if(auxOutSwitch == 0){
    tone(auxOutPin, red,200);
    auxOutSwitch++;
  } else if (auxOutSwitch == 1){
    tone(auxOutPin, green,200);
    auxOutSwitch++;
  } else {
    tone(auxOutPin, blue,200);
    auxOutSwitch = 0;
  }

}

// ------------------------------ for display
void mycallback(int ErrCode, unsigned char Errorbyte) {
  // Pin 13 has an LED connected on most Arduino boards. Just give it a name
  int led = 13;
  pinMode(led, OUTPUT);
  while(1){
    digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
    delay(250);
    digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
    delay(250);                // without the delays the error blink would be invisible
  }
}

// ------------------------------ for sound input
void soundISR() {
  int pin_val;

  pin_val = digitalRead(PIN_GATE_IN);
  digitalWrite(PIN_LED_OUT, pin_val);   
}


Animation ⁄ Week 6 ⁄ Unity

Download

For this assignment, I decided to use Fuse’s doctor outfits to create characters for a hospital scene. Two surgeons fail to save a life, at which point they decide to escape the situation by breaking into spontaneous dance. The two doctors are rigged with confused or worried animations followed by dances. The patient is rigged with a seizure animation. The scene is composed of some Unity primitives with tile textures I found on Google Images, and some free furniture I found on TurboSquid. The music is some classic Dance Mania ghetto house pulled from this YouTube video and cut into a new track with a heart monitor sound effect I found online. The colored spotlights and the “DANCE IT OFF” text are all rigged with similar timer scripts, which tell them to appear after a certain number of seconds have passed since the app launched. Here’s the script that tells the spotlights when to activate:

using UnityEngine;
using System.Collections;

public class lights : MonoBehaviour {

	bool waitActive = false; // ensures the Wait coroutine is only started once

	private Light light;

	// Use this for initialization
	void Start () {
		light = GetComponent<Light>();
		light.enabled = false;
	}

	// Update is called once per frame
	void Update () {

		if (!waitActive) {
			waitActive = true;
			StartCoroutine (Wait ()); // start the 7-second timer exactly once
		}
	}

	IEnumerator Wait(){
		yield return new WaitForSeconds (7.0f);
		light.enabled = true; // turn the spotlight on once 7 seconds have passed
	}
}

Essentially the code says “after 7 seconds have passed, set the ‘enabled’ option on the Light object to true.”

I also set the application to automatically quit after a certain period of time using another script, which calls `Application.Quit()` after 18 seconds; a rough sketch of what that script might look like is below.
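The original quit script isn’t posted here, so this is only a minimal reconstruction (the class name is my own) of a component that waits 18 seconds and then quits:

using UnityEngine;
using System.Collections;

// Hypothetical reconstruction of the auto-quit script described above.
public class autoQuit : MonoBehaviour {

	// Start can itself be a coroutine, so the whole script is just one timer.
	IEnumerator Start () {
		yield return new WaitForSeconds (18.0f);
		Application.Quit (); // has no effect in the editor; only quits a built player
	}
}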


Animation ⁄ Week 3-5 ⁄ After-Effects Animation

Log In (draft) from AM on Vimeo.

The video is entitled Log In; it features a virtual world loading into existence, followed by a virtual avatar. In more detail, the plot consists of the following:

  1. Title screen — command executed in CLI-like interface
  2. A montage of various environmental elements loading in a 2.5 dimensional space, ultimately building the scene’s backdrop
  3. The avatar’s body loads
  4. Avatar’s body textures load
  5. Avatar “comes to life”

Process

I wanted to explore the technical limitations of After Effects — particularly its 2.5-dimensional nature. The camera can capture 3d space, but all the elements in the space must be 2d (images and videos, no 3d objects). I tried to take advantage of this limitation by creating a narrative based in a virtual space, so that the 2d imagery, the pixellation, the unnatural movement, and other digital artifacts would appear deliberate in this context.

I initially wanted my character to be a 3d object that I could pan the camera around and place image textures onto, but since that isn’t possible outside of After Effects CC’s regimented and limited 3d pipeline, I took a different, more tedious approach: I screen-recorded animations of 3d objects in Blender using a shader that renders the object completely black on a white backdrop, then used that footage as a mask to color key in AE.

The footage appears pixellated before it is properly rendered in Blender, which I thought was an interesting effect, so I kept it.

Credit

I created the music and of course the After Effects composition. All the images of the bits of sky, grass and trees were pulled from public parts of the web. The 3d model was created and shared by Blendswap user AlexanderLee.


Art Strategies ⁄ Final

This piece is located in a Tisch stairwell, building off Tiri’s 17 Tones prototype. It is composed of a continuous rotation servo motor which turns at a speed based on the input from a photoresistor. Two strands of wire attached to the motor hit a hanging chime as they are spun around. The photoresistor takes light input from the fluorescent fixture directly above it. This fixture uses a PIR sensor to detect a person’s presence in the stairwell and responds by raising its brightness.

This piece utilizes the existing visual response system to generate an aural response as well.

What’s interesting about watching this thing work is how jittery the servo is — it almost has a mind of its own. When the stairwell’s light dims, the servo slows to a crawl and ends up pulling the chime off the wall and dropping it back. When this happens you can hear the motor’s jittering buzz and the chime dinging as it’s pushed around.


Intro to Physical Computing ⁄ Final Project Outline

I want to use what I’ve learned in this class to focus on approaching the use of video and sound in unique ways. So for my final project I want to build some sort of visual and/or auditory headset that transforms the user’s surroundings in unexpected ways, perhaps enabling them to see an aspect of reality that the naked human body isn’t capable of perceiving.

In the spirit of current VR hype, I will build a headset that simulates an audio-visual synesthetic experience, for any curious individuals interested in having a temporarily destroyed sensorium while running the risk of bumping into walls, stubbing their toes or meandering out into traffic in a manner similar to how someone in the throes of an intense psychedelic experience might do so.

The headset will consist of a visual input (camera) and an audio input (sound detector) hooked up to an Arduino Uno, which will run a program that takes the sound data and outputs it to a color LCD display or two, and takes the camera image data and outputs it to a standard (3.5 mm) headphone jack, which the user listens to with their own headphones.

Video to audio: I will translate the color information from the camera image at each frame to tones — a tone for each color. It could either calculate the most prevalent color and translate that into one tone, or calculate all existing colors in each image and translate them into a polyphonic set of tones, or anything in between.

Audio to video: Amplitude of sound detected will translate to image opacity on the display, and pitch/frequency will translate to hue.
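As a rough stand-alone sketch of the two mappings described above (the pin numbers, value ranges, and the single-channel color reading are placeholder assumptions; the actual implementation ended up as the Synesthesia VR code earlier on this page):

// Hypothetical sketch of the two mappings described above. Pin numbers,
// ranges, and the stand-in color reading are placeholders, not final choices.
int micPin = A0;     // sound detector amplitude output (assumed wiring)
int colorPin = A1;   // stand-in for one channel of the camera / RGB sensor (assumed)
int toneOutPin = 6;  // audio out toward the headphone jack (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Audio to video: amplitude -> opacity (a pitch estimate would need FFT, left out here).
  int amplitude = analogRead(micPin);
  int opacity = map(amplitude, 0, 1023, 0, 255);
  Serial.println(opacity); // would drive the display's brightness/opacity

  // Video to audio: one color reading -> one tone frequency.
  int colorLevel = analogRead(colorPin);
  int freq = map(colorLevel, 0, 1023, 120, 1500); // map intensity into an audible range
  tone(toneOutPin, freq, 50);

  delay(50);
}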

Bill of Materials:

System Diagram:

Roadmap:

Nov 16:

  • Have purchased all parts (or at least enough to begin assembly)

Nov 23:

  • Have begun writing and testing code portion
  • Continue circuit assembly
  • Begin circuit housing / headset assembly

Nov 30:

  • Have completed code portion
  • Finalizing circuit assembly, begin user testing
  • Continue circuit housing / headset assembly

Dec 7:

  • Apply changes from user testing
  • Finalize circuit housing / headset assembly
  • Prepare presentation, finalize documentation

Dec 14:

  • Present


Art Strategies ⁄ Week 8 ⁄ Social Practice: Whoop Dee Doo!

Whoop Dee Doo is a non-profit organization that puts on family-friendly workshops and live performances in the spirit of cable access variety shows, in collaboration with local youth programs, art institutions, and other community organizations across the country. Kansas City based co-founders and hosts Matt Roche and Jaimie Warren involve members of the local community in the ideation, construction, and performance phases of each show with the help of artists from various disciplines, “from science teachers and Celtic bagpipers to traditional clogging troupes, West African dance teams, Tibetan throat singers, bodybuilders, barbershop quartets, and punk bands” (src). The result is both a platform that removes the social, cultural, and economic barriers among members of each local community it performs in, and a stage for artists to perform; essentially fostering “unique collaborations between unlikely pairings of community members that ultimately blossom into exceptional and meaningful interactions.” (src)

Whoop Dee Doo could be described as socially engaged art because each performance engages directly with the various groups it’s attempting to foster communication and collaboration among, as a form of communicative action, rather than performing separately from the community as a form of symbolic practice. Although it may not provoke “critically reflexive” dialogue so much as create undisputed harmony among the involved groups, Whoop Dee Doo is a community-building mechanism: it constructs a new, temporary social group through the unique, weird, funny, engaging experience its participants share together. The experience is temporary, though — what lasting effects are there within these communities, if any? I imagine that by targeting the children of each community, the lasting impact is greater but perhaps not directly observable. Perhaps Whoop Dee Doo is also catalyzing critical discourse and further collaboration across unexpected cultural boundaries within the art world, particularly through the artists from the local or larger “high-art” communities who’ve performed on Whoop Dee Doo.

(src: 1, 2, 3, 4, 5)

The high-school-play-meets-off-Broadway set designs convey handmade, DIY sensibilities and an absence of inhibitions, expressed through the vibrant and almost absurd use of colors and textures. This style reflects the chaotic and unexpected mash-up of participants from across social or cultural boundaries who become inextricable parts of the temporary Whoop Dee Doo world, and especially obfuscates the boundaries between high art and that which is not traditionally considered so. To me it’s successful both as a critically reflexive work of art and as a party. In a way, its success as a critical piece is driven by the core party ethos of bringing communities together through the prospect of having fun.


Art Strategies ⁄ Week 7 ⁄ Performance, happenings

http://andrewmccausland.net/watchingNow.html

This work is a “collaborative performance” consisting of 9 live streams (from Twitch’s Creative section) playing simultaneously. The title of the piece (in bold) is constantly changing, being a combination of the titles of all the currently playing streams. Authorship goes to the streamers themselves.

This piece attempts to provoke the following ideas:

Identity – these streamers have carefully mediated online identities; each of their streams is loosely themed around their various interests, and they perform for the user bases they develop, and for faceless masses (consciously or subconsciously), through the internet.

Agency – these people have no idea of the context I’ve put them in: up on a projector in a room full of people, contrasted with 8 other streamers.

What’s the nature of a performance, or any time-based work, on the internet, with recordable, reproducible media? These users are performing, which is a time-based thing, a happening, but in this context there will always be other performers to replace them. The piece is time-based in the sense that no one moment is the same as the next, yet it is always going to be here; it’s constant in that sense.


Art Strategies ⁄ Week 5 ⁄ Systems

This is a simplified version of the S.O.VIZv1.0 interface (the page hasn’t been updated in a while) that I’ve been working on for some time now. It is a desktop application that visualizes much of the publicly available data describing the orbits of “satellites” (any man-made object larger than a softball) in Earth orbit, and enables users to explore this data.

When the app is launched, it fetches the full TLE data and SATCAT via the Space Track API and saves them locally as a few JSON documents, from which the app then reads to create the visuals (with openFrameworks).
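The parsing code isn’t included in this post, but reading one of those cached JSON documents back into plain structs could look roughly like this (this assumes a recent openFrameworks release that ships ofJson/ofLoadJson, and the field names are invented placeholders, not the real document layout):

// Hypothetical sketch: read a cached catalog JSON into plain structs.
// Field names ("name", "launchYear", "apogeeKm", "perigeeKm") are assumptions.
#include "ofMain.h"

struct Satellite {
    std::string name;
    int launchYear;
    float apogeeKm;
    float perigeeKm;
};

std::vector<Satellite> loadCatalog(const std::string& path) {
    std::vector<Satellite> catalog;
    ofJson doc = ofLoadJson(path); // returns an empty object if the file is missing
    for (auto& entry : doc) {
        Satellite s;
        s.name = entry.value("name", "");
        s.launchYear = entry.value("launchYear", 0);
        s.apogeeKm = entry.value("apogeeKm", 0.0f);
        s.perigeeKm = entry.value("perigeeKm", 0.0f);
        catalog.push_back(s);
    }
    return catalog;
}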

The visualization shows each satellite’s apogee and perigee as a red and a green point (respectively), ordered chronologically by launch date around a circle representing the Earth (drawn to scale with the data), going clockwise from the top (90º).

The app allows users to zoom between high, medium, and low earth orbits. The arcs which appear in low earth orbit describe the number of satellites launched within each decade. Clicking on an arc filters the visualization so that users can see a specific decade in greater detail.
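As a rough illustration of that layout (not the actual S.O.VIZ source; it reuses the hypothetical Satellite struct from the loading sketch above), each satellite’s angle comes from its position in launch-date order and each point’s radius from its altitude:

// Hypothetical sketch of the polar layout described above.
// Angle: position in launch-date order, clockwise from the top (90 degrees).
// Radius: Earth's radius plus apogee/perigee altitude, scaled to fit the window.
void drawCatalog(const std::vector<Satellite>& catalog, float maxAltitudeKm) {
    const float earthRadiusKm = 6371.0f;
    ofVec2f center(ofGetWidth() / 2.0f, ofGetHeight() / 2.0f);
    float maxPixels = ofGetHeight() / 2.0f - 10.0f;

    for (size_t i = 0; i < catalog.size(); i++) {
        // start at 90 degrees (top) and sweep clockwise
        float angle = ofDegToRad(90.0f) - TWO_PI * i / catalog.size();

        float apogeeR  = ofMap(earthRadiusKm + catalog[i].apogeeKm,  0, earthRadiusKm + maxAltitudeKm, 0, maxPixels);
        float perigeeR = ofMap(earthRadiusKm + catalog[i].perigeeKm, 0, earthRadiusKm + maxAltitudeKm, 0, maxPixels);

        ofSetColor(ofColor::red);    // apogee point
        ofDrawCircle(center.x + apogeeR * cos(angle), center.y - apogeeR * sin(angle), 1.0f);

        ofSetColor(ofColor::green);  // perigee point
        ofDrawCircle(center.x + perigeeR * cos(angle), center.y - perigeeR * sin(angle), 1.0f);
    }
}

The decade arcs and the high/medium/low earth orbit zoom would then just be further bucketing and filtering of the same launch-year and altitude fields.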

Demo:

High-res Snapshots (open separately for detail):

Full catalog, high earth orbit (link):

Full catalog, medium earth orbit (link):

Full catalog, low earth orbit with arcs describing the number of satellites launched within each decade since Sputnik 1 in 1957 (link):

1990-1999, low earth orbit (link):

1990-1999, medium earth orbit (link):

1990-1999, high earth orbit (link):