
Research: The Enduring Ephemeral, or the Future Is a Memory (Wendy Hui Kyong Chun)



Not “what is new media?” but rather “What *was* new media? What will it be?”

constant repetition, tied to an inhumanly precise and unrelenting clock,
points to a factor more important than speed—a nonsimultaneousness of
the new, which I argue sustains new media as such (148)

The Future, This Time Around

this paper argues these dreams of superhuman digital programmability create, rather than solve, archival nightmares (149)

“vapor theory”

speed makes criticism / reflection difficult

Speed and variability apparently confound critical analysis. According to Lovink, “because of the speed of events, there is a real danger that an online phenomenon will already have disappeared before a critical discourse reflecting on it has had the time to mature and establish itself as institutionally recognized knowledge.”

Paul Virilio … has argued that cyberspace has implemented a real time that is eradicating local spaces and times [threatening] “a total loss of the bearings of the individual” and “a loss of control over reason,” as the interval between image and subject disappears. (151)

malleability also makes criticism / reflection difficult

“malleability also makes criticism difficult by troubling a grounding presumption of humanities research: the reproducibility of sources” (152)

my words: a text can disappear instantly, or move, or change, making any information that cites, builds on, or otherwise relies upon it difficult to trust

Digital media, through the memory at its core, was supposed to solve, if not dissolve, archival problems such as degrading celluloid or scratched vinyl, not create archival problems of its own. (154)

ephemerality is not new to new media (153)

so what defines “new media?”

The major characteristic of digital media is memory. (154)

Memory allegedly makes digital media an ever-increasing archive in which no piece of data is lost. (154)

By saving the past, it was supposed to make knowing the future easier. (154)

As a product of programming, it was to program the future. (155)

As We May Think

Bush, in “As We May Think,” writing at the end of World War II, argues the crucial problem facing scientists and scientific progress is access (156)

the memex sought to provide people with “the privilege of forgetting” by storing and indexing memories for them to be accessed later

“Memex Revisited” saw the failure of this: “We are being buried in our own product. Tons of printed material are dumped out every week. In this are thoughts, certainly not often as great as Mendel’s, but important to our progress. Many of them become lost; many others are repeated over and over and over” (158)

Thus the scientific archive, rather than pointing us to the future, is trapping us in the past, making us repeat the present over and over again. Our product is burying us and the dream of linear additive progress is limiting what we may think. (158)

“The difficulty supposedly lies in selecting the data, not in reading it” (159)

The pleasure of forgetfulness is to some extent the pleasure of death and destruction. It is thus no accident that this supplementing of human memory has also been imagined as the death of the human species in so many fictions and films and déjà vu as the mark of the artificial in The Matrix. (160)

Moving memory

an instruction or program is functionally equivalent to its result… this conflation grounds programming, in which process in time is reduced to process in space (161)

By making genes a form of memory, von Neumann also erases the difference between individual and transgenerational memory, making plausible Lamarckian transmission; if chromosomes are a form of secondary memory, they can presumably be written by the primary. This genetic linkage to memory makes clear the stakes of conflating memory with storage— a link from the past to the future. (164)

A memory must be held in order to keep it from moving or fading. Memory does not equal storage. (165)

digital media is truly a time-based medium, which, given a screen’s refresh cycle and the dynamic flow of information in cyberspace, turns images, sounds, and text into discrete moments in time. These images are frozen for human eyes only. (166)

without cultural artifacts, civilization has no memory and no mechanism to learn from its successes and failures. And paradoxically, with the explosion of the Internet, we live in what Danny Hillis has referred to as our “digital dark age.” (168)

the internet, which is in so many ways about memory, has, as Ernst argues, no memory— at least not without the intervention of something like the IWM (wayback machine). (169)

This belief in the internet as cultural memory, paradoxically, threatens to spread this lack of memory everywhere and plunge us negatively into a way way back machine: the so-called digital dark age (169)

Virilio’s constant insistence on speed as distorting space-time and on real time as rendering us susceptible to the dictatorship of speed has generated much good work in the field, but it can blind us to the ways in which images do not simply assault us at the speed of light. Just because images flash up all of a sudden does not mean that response or responsibility is impossible or that scholarly analysis is no longer relevant. As the new obsession with repetition reveals, an image does not flash up only once. The pressing questions are, Why and how is it that the ephemeral endures? And what does the constant repetition and regeneration of information effect? What loops and what instabilities does it introduce into the logic of programmability? (171)

Reliability is linked to deletion; a database is considered to be unreliable (to contain “dirty data”) if it does not adequately get rid of older inaccurate information. (171)

Rather than getting caught up in speed, then, we must analyze, as we try to grasp a present that is always degenerating, the ways in which ephemerality is made to endure. What is surprising is not that digital media fades but rather that it stays at all and that we stay transfixed by our screens as its ephemerality endures. (171)

 


Live Web | Feeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeed

take1:

take2:

screen version:

For the remainder of this class I wanted to explore the possibilities of creating some sort of online performative visual experience. I wanted to see what could be done by harnessing the masses of content available on the web. I was initially interested in thinking about the internet as something that was “collapsing” time from a cultural perspective, facilitated by its massive and cheap capacity to preserve and distribute media:

What was your first experience of going online? What about Google image search? “I specifically remember my first image search. I searched for trees. Like, ‘green trees.’ And it was really overwhelming because there were so many pictures of trees, all at once. Before, you could only look at books.” (src)

art history has been “flattened” for artists of a certain age. “When they typed in ‘tree’ ” in a search engine, … “they got a thousand pictures of a tree: a picture of a tree made in the 18th century, a tree made last year, a cartoon of a tree. You have this flattening of time.” (src)

 

But what if this isn’t actually the case? When we Google something or look at one of our many content feeds on various social or news platforms, it’s all about *now*:

Digital media is truly a time-based medium, which, given a screen’s refresh cycle and the dynamic flow of information in cyberspace, turns images, sounds, and text into discrete moments in time. These images are frozen for human eyes only.(src)


References:

Edward Shenk “Theorist” — a visual artist who appropriates the right-leaning, conspiracy-theorist style of imagery he apparently finds often on Facebook, and which most people will recognize as more or less prevalent across the web. He breaks this style of image-making down to its formal qualities, revealing its sinister underlying tone… “There’s this manic connection-making like in the darker parts of A Beautiful Mind. If something looks like something else then that is proof enough. There is no such thing as pure coincidence, and that’s a hallmark of paranoia.” (src)

XTAL fSCK video performance — The visuals these performers create exemplify the “default” visual effects inherent in the macOS UI: mashing [ctrl+alt+cmd+8] to strobe the color-inversion accessibility feature, using QuickTime’s screen recording to create endless recursion, swiping between desktop spaces, opening and minimizing windows with the genie effect in slow motion, all overlaid and mashed together with emoji and iconography ad nauseam. Exposing the macOS UI in this formal, performative context reveals and accentuates visual qualities that were hidden in plain sight, forcing the audience to confront age-old questions of artistic merit and technological control in a context we inhabit every day but perhaps overlook.

Anne Hirsch “Horny Lil Feminist”

Aaron David Ross “Deceptionista”


Live Web | Group Project: 360 Livestream

Tiri, Lin and I really wanted to use the 9th floor studio (or a similar setup — we ended up using the micro studio) and decided to explore the possibility of livestreaming with 360 video in this context. The idea we thought would be most interesting was the first that came to mind: presenting a how-to for what we were doing in the form of a Coding Train episode.

Our final production pipeline ended up being pretty simple: Theta S (360 camera) -> OBS (streaming client) -> YouTube. Before settling on this we tried many different avenues. Our first approach was to embed the stream in our own webpage and use a JavaScript library called A-Frame to VR-ify the stream content. The first problem with this was cross-origin header issues while embedding. We spoke to Rubin about this and he explained that YouTube delivers streams as HLS, which has no native support in Chrome; more importantly, solving the cross-origin issue required each user to install browser extensions, which isn’t ideal. We were able to embed Twitch streams but couldn’t block ads on load, and couldn’t get a proper .mp4 or .m3u8 (HLS) stream URL to work with A-Frame because Twitch embeds a webpage via iframe, effectively obscuring the actual video URL. At the end of the day, YouTube has a built-in VR feature and essentially plug-and-play 360-video streaming capabilities, so it made no sense to build our own custom implementation.
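For reference, the abandoned A-Frame approach would have looked roughly like this (a minimal, untested sketch; the stream URL is a placeholder for the directly accessible video source we never managed to get):

<!-- minimal A-Frame 360-video sketch; the src URL is hypothetical -->
<script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
<a-scene>
  <a-assets>
    <video id="stream" src="https://example.com/stream.mp4"
           autoplay loop crossorigin="anonymous"></video>
  </a-assets>
  <!-- videosphere maps the equirectangular stream onto the inside of a sphere -->
  <a-videosphere src="#stream"></a-videosphere>
</a-scene>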

The livestream imagery is composed of several layers of images chroma-keyed together with several green screens.

Our first test, simply getting a livestream going:

Then managing a 360 livestream:

The final output (viewer’s POV):

Final output (flattened, OBS POV):


Live Web | Week 3 | Canvas

I really appreciate the non-antialiased, jagged style of digital imagery, so my goal for this assignment was to create some sort of collaborative tool for exploring glitchy, abstract visuals that take advantage of this style.

The user draws horizontal or vertical lines (even-numbered connection counts are assigned horizontal lines, odd are vertical) along the small 200px canvas by dragging with the mouse button held down. So for users creating vertical lines: at each mouse position along the x-axis of the canvas, a for loop iterates through each pixel in that column and fills it with a 1px black rectangle. To keep the canvas from simply filling up with black, pixels that are already black are turned white instead.

The result is a lot of unexpected behavior, rendering each canvas image unique, even between users on the same canvas.

The algorithm for filling pixels is as follows:

canvas.addEventListener('mousemove', function(evt) {

	//when mouse is pressed
	if (isMouseDown) {

		//draw pixel at current mousepos:
		if (numUsers % 2 == 0) { //if even number of users
			//set the user to horizontal
			drawLineHor_(evt.clientY, context);
		} else { //if odd number of users
			//set the user to vertical
			drawLineVer_(evt.clientX, context);
		}

		//send drawing
		var objtosend = {
			x: evt.clientX,
			y: evt.clientY,
			px: px,
			py: py,
			userNum: numUsers
		};
		socket.emit('drawing', objtosend);
	}
});

//receive drawing (registered once, outside the mousemove handler,
//otherwise a new listener would be added on every mouse move)
socket.on('drawing', function(receivedData) {
    if (receivedData.userNum % 2 == 0) { //if this user is even
        //set the user to horizontal
        drawLineHor(receivedData.y, context);
    } else { // if this user is odd
        //set the user to vertical
        drawLineVer(receivedData.x, context);
    }
});

//these functions could probably all be consolidated somehow
function drawLineHor(y, ctx) {
    for (var x = 0; x < canvas.width; x++) {

        //get color
        var p = ctx.getImageData(x, y, 1, 1).data;

        //compare color (getImageData returns a Uint8ClampedArray,
        //which == coerces to the string "r,g,b,a")
        if (p == "0,0,0,255") { //if black
            //set to clear
            ctx.fillStyle = white;
        } else { //if clear
            ctx.fillStyle = black;
        }

        //fill this pixel
        ctx.fillRect(x, y, 1, 1);
    }
}

function drawLineVer(x, ctx) {

    for (var y = 0; y < canvas.height; y++) {

        //get color
        var p = ctx.getImageData(x, y, 1, 1).data;

        //compare color (same coercion as above)
        if (p == "0,0,0,255") { //if black
            //set to clear
            ctx.fillStyle = white;
        } else { //if clear
            ctx.fillStyle = black;
        }

        //fill this pixel
        ctx.fillRect(x, y, 1, 1);
    }
}

function drawLineHor_(y, ctx) {
    for (var x = 0; x < canvas.width; x++) {
        ctx.fillStyle = black;
        ctx.fillRect(x, y, 1, 1);
    }
}

function drawLineVer_(x, ctx) {

    for (var y = 0; y < canvas.height; y++) {
        ctx.fillStyle = black;
        ctx.fillRect(x, y, 1, 1);
    }
}

I found that there is no way to get perfect 1px non-antialiased lines on a canvas with the built-in drawing methods. One trick is to translate the entire canvas half a pixel (ctx.translate(0.5, 0.5)) to offset the interpolation that causes 1px lines to fill 2px, but this still doesn’t keep lines 1px when they’re drawn at angles. So what I did instead (which also gave me individual pixel control) was draw 1px-sized rects with ctx.fillRect(x, y, 1, 1). This is also why the canvas is so small: the approach requires large for loops, which is detrimental to performance.
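For reference, the half-pixel trick looks something like this (a minimal sketch using the same canvas context as above; it sharpens axis-aligned 1px strokes, but diagonals still antialias):

//stroke coordinates sit on the boundaries between pixels, so a 1px
//stroke normally bleeds across two rows; shifting by half a pixel
//aligns it to the pixel grid
context.translate(0.5, 0.5);

context.lineWidth = 1;
context.beginPath();
context.moveTo(0, 10);
context.lineTo(200, 10); //crisp 1px horizontal line
context.stroke();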


Live Web | Final Project Idea

For the remainder of this class I’ll attempt to discover the sort of “hidden style” of not just the structures of interfaces in digital media (windows, buttons, menus, scrolling… GUI paradigms in general) but of the content itself that fills these structures, specifically on the web. Because content on the web (image, video, and text, interactive or not) encapsulates objects, people, places, and ideas, from arguably every period of human history, could the resulting aesthetic vernacular really be considered an overarching style of everything, a mess of clashing styles and perspectives from wildly different paradigms? Is this what defines our current zeitgeist, and if so, what does that mean for the future? What could possibly come after this massive collapse of time and space in aesthetic understanding?

This kind of flattening seems to have an effect on many things: political perspectives, cultural diversity (global homogenization in fashion, language, icons, etc.), and maybe more. So not only could the web be considered an “artistic medium”; it’s also arguably the primary source of information dissemination and media consumption for many people all over the world.

I will realize this using appropriated media, facilitated by web APIs for various content aggregators and media platforms. I’ll try to focus on content “hubs” to minimize bias, but it will be interesting to see the bias inherent in my choices regardless; that bias may even be instrumental in discovering to what degree the web today acts as a window onto any sort of objective paradigm of everything.

References:

What was your first experience of going online? What about Google image search? “I specifically remember my first image search. I searched for trees. Like, ‘green trees.’ And it was really overwhelming because there were so many pictures of trees, all at once. Before, you could only look at books.” (src)

 

art history has been “flattened” for artists of a certain age. “When they typed in ‘tree’ ” in a search engine, … “they got a thousand pictures of a tree: a picture of a tree made in the 18th century, a tree made last year, a cartoon of a tree. You have this flattening of time.” (src)

Edward Shenk “Theorist” — a visual artist who appropriates the right-leaning, conspiracy-theorist style of imagery he apparently finds often on Facebook, and which most people will recognize as more or less prevalent across the web. He breaks this style of image-making down to its formal qualities, revealing its sinister underlying tone… “There’s this manic connection-making like in the darker parts of A Beautiful Mind. If something looks like something else then that is proof enough. There is no such thing as pure coincidence, and that’s a hallmark of paranoia.” (src)

XTAL fSCK video performance — The visuals these performers create exemplify the “default” visual effects inherent in the macOS UI: mashing [ctrl+alt+cmd+8] to strobe the color-inversion accessibility feature, using QuickTime’s screen recording to create endless recursion, swiping between desktop spaces, opening and minimizing windows with the genie effect in slow motion, all overlaid and mashed together with emoji and iconography ad nauseam. Exposing the macOS UI in this formal, performative context reveals and accentuates visual qualities that were hidden in plain sight, forcing the audience to confront age-old questions of artistic merit and technological control in a context we inhabit every day but perhaps overlook.


Live Web | Mid-Term Proposal

I’d like to take this opportunity to put the tools I’ve learned in this class so far toward a project I’m starting to develop for Design for Discomfort. The idea is to build a series of interconnected web pages filled with visual tactics commonly found all over the web that invoke extreme sensory discomfort in their users. I also want to address our interactions in these systems and the resulting feedback loops that affect our actions. I aim for it to serve as something of a timestamp for the current state of our digital landscape, and my hope is that it will prompt people to think about this kind of emotional/behavioral stimuli we all subject ourselves to. What comprises our contemporary digital landscape? Is it different from, say, 10 years ago? If so, how? What has changed, why, and what does it say about us in terms of what we are using the web for, what we want the web to be, and what it might become in the future?


Live Web | Class 4 | Camera & WebRTC

For this assignment I was most interested in understanding how image data was encoded into the base64 format. I spent a great deal of time attempting to understand exactly how this was done so that I could alter the image pixel data with code. So far I’ve been unsuccessful, and I have no idea why my current code doesn’t work, but perhaps I’m close, or maybe I’ve led myself completely astray.

First of all I learned that an image file is binary data. In a 1-bit image each pixel would be a 1 or 0 (black or white), but the png I am sending as base64 uses (I think) 24 bits per pixel, meaning 24 bits represent a whole spectrum of rgb values. Knowing this, I then read up on how base64 encodes such binary data. The way I understand it, the encoder takes the binary data in 24-bit chunks, divides each into four 6-bit chunks, and assigns each 6-bit value (which has 64 possible values) to an ascii character based on the base64 index table. Finally, the long ascii string representing all the data in the image is “padded” with = or == to indicate whether the last 3-byte group fell short by one byte (=) or two bytes (==).
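To sanity-check this, here’s the standard textbook example worked through with Node’s Buffer (the string “Man” is the classic example, not my image data):

// "Man" = 3 bytes = 24 bits: 01001101 01100001 01101110
// regrouped into four 6-bit values: 010011 010110 000101 101110
// = 19, 22, 5, 46 -> base64 index table -> "TWFu"
console.log(Buffer.from("Man").toString("base64")); // "TWFu"

// a final group short by one byte pads with "=", short by two with "=="
console.log(Buffer.from("Ma").toString("base64")); // "TWE="
console.log(Buffer.from("M").toString("base64"));  // "TQ=="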

So in order for me to alter the pixel data I thought I’d need to

  1. Decode it from base64 back to raw binary
  2. Do stuff to the binary (here I applied a random shuffle function)
  3. Encode it back to base64

I am able to do this successfully, except that the resulting base64 data just returns a broken image. I’m not sure why this would happen assuming the new base64 data should contain the same exact number of bits (right?). I also made sure that the final base64 string begins with `data:image/png;base64,` and ends with an = or ==. Here’s the corresponding code:

index.html:

<!DOCTYPE html>
<html>
	<head>
		<title></title>
		<script type="text/javascript" src="/socket.io/socket.io.js"></script>
		<script type="text/javascript" src="skrypt.js"></script>
		<style>
			#imgcontainer {
				position:relative;
				float:left;
				width:100%;
				height:100%;
				margin:0 auto;
				border:solid 1px #000;
			}
			#imagecontainer img{
				position:relative;
				float:left;
			}
			#txtbox{
				width:100%;
				word-wrap: break-word;
			}
		</style>
	</head>
	<style>
		canvas{
			border:solid 1px #000;
		}
	</style>
	<body>
		<video id="thevideo" width="320" height="240"></video>
		<canvas id="thecanvas" width="320" height="240" style="display:none"></canvas>
		<div id="imgcontainer">
			<img id="receive" width="320" height="240">
		</div>
	</body>
</html>

server.js (this is where it’s all done):

// HTTP Portion
// var http = require('http');
var https = require('https');
var fs = require('fs'); // Using the filesystem module
// var httpServer = http.createServer(requestHandler);

const options = {
    key: fs.readFileSync('my-key.pem'),
    cert: fs.readFileSync('my-cert.pem')
};

var httpServer = https.createServer(options, requestHandler);
var url = require('url');
httpServer.listen(8080);

function requestHandler(req, res) {

    var parsedUrl = url.parse(req.url);
    console.log("The Request is: " + parsedUrl.pathname);

    fs.readFile(__dirname + parsedUrl.pathname,
        // Callback function for reading
        function(err, data) {
            // if there is an error
            if (err) {
                res.writeHead(500);
                return res.end('Error loading ' + parsedUrl.pathname);
            }
            // Otherwise, send the data, the contents of the file
            res.writeHead(200);
            res.end(data);
        }
    );
}


// WebSocket Portion
// WebSockets work with the HTTP server
var io = require('socket.io').listen(httpServer);

// Register a callback function to run when we have an individual connection
// This is run for each individual user that connects
io.sockets.on('connection',
    // We are given a websocket object in our function
    function(socket) {

        console.log("We have a new client: " + socket.id);

        // When this user emits, client side: socket.emit('otherevent',some data);
        socket.on('image', function(data) {
            // Data arrives as a data-URI string: "data:image/png;base64,..."
            console.log("Received at server: " + data);

            // strip the data-URI prefix, then decode the base64 into raw bytes
            var b64 = data.slice("data:image/png;base64,".length);
            var buf = Buffer.from(b64, 'base64');

            // view the buffer as a uint8 array
            var uint8 = new Uint8Array(buf);
            console.log("uint8 array: " + uint8);

            // shuffle the raw bytes
            var shuffledArr = shuffle(uint8);
            console.log("shuffled: " + shuffledArr);

            // encode back to base64; toString('base64') already appends
            // the correct `=` padding, so nothing extra needs to be added
            var newB64str = Buffer.from(shuffledArr).toString("base64");
            console.log("newB64str = " + newB64str);

            // re-attach the prefix so the receiving <img> can render it
            var finalB64Str = "data:image/png;base64," + newB64str;

            socket.broadcast.emit('image', finalB64Str); //send to all except sender
        });

        socket.on('disconnect', function() {
            console.log("Client has disconnected " + socket.id);
        });
    }
);

function shuffle(array) { //https://stackoverflow.com/a/2450976/1757149
    var currentIndex = array.length,
        temporaryValue, randomIndex;

    // While there remain elements to shuffle...
    while (0 !== currentIndex) {

        // Pick a remaining element...
        randomIndex = Math.floor(Math.random() * currentIndex);
        currentIndex -= 1;

        // And swap it with the current element.
        temporaryValue = array[currentIndex];
        array[currentIndex] = array[randomIndex];
        array[randomIndex] = temporaryValue;
    }

    return array;
}

I also attempted to perform the shuffle to the base64 encoding itself (without transforming to binary first):

// HTTP Portion
// var http = require('http');
var https = require('https');
var fs = require('fs'); // Using the filesystem module
// var httpServer = http.createServer(requestHandler);

const options = {
    key: fs.readFileSync('my-key.pem'),
    cert: fs.readFileSync('my-cert.pem')
};

var httpServer = https.createServer(options, requestHandler);
var url = require('url');
httpServer.listen(8080);

function requestHandler(req, res) {

    var parsedUrl = url.parse(req.url);
    console.log("The Request is: " + parsedUrl.pathname);

    fs.readFile(__dirname + parsedUrl.pathname,
        // Callback function for reading
        function(err, data) {
            // if there is an error
            if (err) {
                res.writeHead(500);
                return res.end('Error loading ' + parsedUrl.pathname);
            }
            // Otherwise, send the data, the contents of the file
            res.writeHead(200);
            res.end(data);
        }
    );
}


// WebSocket Portion
// WebSockets work with the HTTP server
var io = require('socket.io').listen(httpServer);

// Register a callback function to run when we have an individual connection
// This is run for each individual user that connects
io.sockets.on('connection',
    // We are given a websocket object in our function
    function(socket) {

        console.log("We have a new client: " + socket.id);

        // When this user emits, client side: socket.emit('otherevent',some data);
        socket.on('image', function(data) {
            // Data comes in as whatever was sent, including objects
            console.log("Received at server: " + data);

            var slicedData = data.slice(22); //slice off data:image/png;base64,
            console.log("sliced data: " + slicedData);

            // inspect the actual trailing characters to count the `=` padding
            // (assumes the canvas data-URI always ends with padding)
            var b64string, numEquals;
            if (slicedData.slice(-2) === "==") {
                b64string = slicedData.slice(0, -2); //remove trailing `==`
                numEquals = 2;
            } else {
                b64string = slicedData.slice(0, -1); //remove trailing `=`
                numEquals = 1;
            }
            console.log("sliced data without == : " + b64string);

            var b64array = Array.from(b64string);
            var shuffledb64array = shuffle(b64array);
            var newB64str = shuffledb64array.join('');
            console.log("newB64str: " + newB64str);

            //re-append the same `=` padding that was removed
            var finalB64Str;
            if (numEquals == 1) {
                finalB64Str = newB64str + "=";
            } else {
                finalB64Str = newB64str + "==";
            }

            //re-attach the data-URI prefix so the receiving <img> can render it
            socket.broadcast.emit('image', "data:image/png;base64," + finalB64Str); //send to all except sender
        });

        socket.on('disconnect', function() {
            console.log("Client has disconnected " + socket.id);
        });
    }
);

function shuffle(array) { //via https://stackoverflow.com/a/2450976/1757149
    var currentIndex = array.length,
        temporaryValue, randomIndex;

    // While there remain elements to shuffle...
    while (0 !== currentIndex) {

        // Pick a remaining element...
        randomIndex = Math.floor(Math.random() * currentIndex);
        currentIndex -= 1;

        // And swap it with the current element.
        temporaryValue = array[currentIndex];
        array[currentIndex] = array[randomIndex];
        array[randomIndex] = temporaryValue;
    }

    return array;
}
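In hindsight, a likely culprit for both attempts: a PNG isn’t raw pixel data. It begins with a fixed 8-byte signature and consists of chunks whose contents are zlib-compressed and protected by CRC-32 checksums, so a shuffled file has the right length but fails every integrity check, and the browser refuses to render it. If the goal is glitched pixels, one alternative (a hypothetical, untested sketch assuming the #thecanvas element from index.html) would be to shuffle decoded pixels client-side before ever encoding to base64:

//client-side sketch: glitch the actual pixels, then encode a valid PNG
var canvas = document.getElementById('thecanvas');
var context = canvas.getContext('2d');

function glitchAndEncode() {
    //getImageData returns the raw RGBA pixel buffer, unlike the
    //bytes of an encoded PNG file
    var img = context.getImageData(0, 0, canvas.width, canvas.height);
    var px = img.data;

    //swap a few thousand random pixels (4 bytes each: R, G, B, A)
    for (var n = 0; n < 5000; n++) {
        var a = 4 * Math.floor(Math.random() * (px.length / 4));
        var b = 4 * Math.floor(Math.random() * (px.length / 4));
        for (var k = 0; k < 4; k++) {
            var tmp = px[a + k];
            px[a + k] = px[b + k];
            px[b + k] = tmp;
        }
    }

    context.putImageData(img, 0, 0);
    //re-encoding now produces a valid PNG of the glitched image
    return canvas.toDataURL('image/png');
}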


Live Web ⁄ Class 2 ⁄ Node + Socket.io Chat

Live url TBA; DigitalOcean locked my account immediately after registration and hasn’t gotten back to me yet.

html & css:

<html>
	<head>
		<title>CH4T</title>
		<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
		<script type="text/javascript" src="/socket.io/socket.io.js"></script>
		<script type="text/javascript" src="chat.js"></script>
	</head>
 <body id="body">
	 <div id="main">
		 <input type="text" id="message" name="message">
		 <input type="submit" value=">" onclick="sendmessage(document.getElementById('message').value);">
		 <div id="messages"></div>
	 </div>

		 <style>
			 #main {
			 	/*margin:50px;*/
			 }
			 .newBg{
			 	position:absolute;
			 	top:0;
			 	right:0;
			 	z-index: -1;
			 	width:100vw;
			 	height:90vh;
			 }
			 input {
			 	/*width:100%;*/
			 	border-radius: 2px;
			 }
			 button {
			 	border-radius: 2px;
			 }
			.msgBox {
				position:relative;
				float:left;
				display:block;
				width:100%;
				padding:5px;
				margin:5px 10px 0 0; 
				background:rgba(0,0,0,0.05);
				border-radius: 2px;
			}
		</style>
 </body>
</html>

js:

var socket = io.connect();

socket.on('connect', function() {
    console.log("Connected");
});

// Receive from any event
socket.on('chatmessage', function(data) {
    console.log(data);

    //colors
    var colors = ["red", "orange", "yellow", "green", "blue", "purple", "pink", "brown", "black", "white"];
    for (var i = 0; i < colors.length; i++) {
        if (data == colors[i]) {
            var body = document.getElementById("body");
            body.style.background = colors[i];
        }
    }

    //bg images: map chat keywords to gif backgrounds
    var bgImages = {
        zoom: "zoom.gif",
        blood: "blood.gif",
        cat: "cat.gif",
        explode: "explode.gif",
        kirby: "kirby.gif",
        pizza: "pizza.gif",
        werq: "professional.gif",
        rain: "rain.gif",
        snow: "snow.gif",
        sparkle: "sparkle.gif",
        water: "water.gif",
        wizard: "wizard.gif"
    };
    if (bgImages[data]) {
        var newBg = document.createElement("div");
        newBg.className = "newBg";
        newBg.style.backgroundImage = "url('img/" + bgImages[data] + "')";
        document.body.appendChild(newBg);
    }

    if (data == "nothing") {
        //getElementsByClassName returns a collection, not a single node,
        //so remove the most recently added background
        var bgs = document.getElementsByClassName("newBg");
        if (bgs.length > 0) {
            document.body.removeChild(bgs[bgs.length - 1]);
        }
    }

    var msgBox = document.createElement("div");
    msgBox.className = "msgBox";
    msgBox.innerHTML = data;

    document.getElementById("messages").appendChild(msgBox);
});

var sendmessage = function(message) {
    console.log("chatmessage: " + message);
    socket.emit('chatmessage', message);
};


Live Web ⁄ Class 1 ⁄ Self Portrait

[Live url here]

My self portrait consists of a series of videos which together compose an image of my head and collectively follow the cursor as it moves left and right across the window. To achieve this, I first recorded a video of myself turning from one profile position to the other. The video lasts about 25 seconds. After recording I split the video vertically into 10 even pieces and added them to the DOM as 10 separate <video> elements. Then I used JS to map the length of the video in seconds to the width of the window in pixels, and had each video fast-forward or rewind to the time corresponding to the current mouse position every time mouse movement is detected.

I split the portrait into several pieces so that I could play around with the variables that determine the number of frames and the time each video transition would take to fast-forward/rewind from one point to another. I like the resulting jumpy effect and the way my face gets abstracted at certain moments and then slowly pieces back together.

The html/css…

<html>
	<head>
		<script type="text/javascript" src="main.js"></script>
	</head>
	<body>
	<br>
	<br>
	<div id="2">
		<video id="vid11" src="vid/Comp7_1.mp4" width="9%" height="100%" loop></video>
		<video id="vid12" src="vid/Comp7_2.mp4" width="9%" height="100%" loop></video>
		<video id="vid13" src="vid/Comp7_3.mp4" width="9%" height="100%" loop></video>
		<video id="vid14" src="vid/Comp7_4.mp4" width="9%" height="100%" loop></video>
		<video id="vid15" src="vid/Comp7_5.mp4" width="9%" height="100%" loop></video>
		<video id="vid16" src="vid/Comp7_6.mp4" width="9%" height="100%" loop></video>
		<video id="vid17" src="vid/Comp7_7.mp4" width="9%" height="100%" loop></video>
		<video id="vid18" src="vid/Comp7_8.mp4" width="9%" height="100%" loop></video>
		<video id="vid19" src="vid/Comp7_9.mp4" width="9%" height="100%" loop></video>
		<video id="vid20" src="vid/Comp7_10.mp4" width="9%" height="100%" loop></video>
	</div>
	</body>
	<style>
		video {
		object-fit: fill;
		}
		#2 video{
			display:inline;
			position:relative;
			float:left;
		}
		#2{
			height:800px;
		}
	</style>
</html>

…and the JS.

//src:
//[1] https://stackoverflow.com/a/7790764 - capturing mouse pos
//[2] https://stackoverflow.com/a/10756409 - range conversion
//[3] https://stackoverflow.com/a/36731430 - FF/RW video to given time

function init() { //all js that needs to happen after page has loaded
    document.onmousemove = handleMouseMove; //[1]

    function handleMouseMove(event) {
        var dot, eventDoc, doc, body, pageX, pageY;

        event = event || window.event; //IE

        // If pageX/Y aren't available and clientX/Y are,
        // calculate pageX/Y - logic taken from jQuery.
        // (For old IE)
        if (event.pageX == null && event.clientX != null) {
            eventDoc = (event.target && event.target.ownerDocument) || document;
            doc = eventDoc.documentElement;
            body = eventDoc.body;

            event.pageX = event.clientX +
                (doc && doc.scrollLeft || body && body.scrollLeft || 0) -
                (doc && doc.clientLeft || body && body.clientLeft || 0);
            event.pageY = event.clientY +
                (doc && doc.scrollTop || body && body.scrollTop || 0) -
                (doc && doc.clientTop || body && body.clientTop || 0);
        }

        // console.log(event.pageX);
        var width, vid_length, vid_timeToSkipTo;
        // width = screen.width;
        width = window.innerWidth;
        vid_length = 25; // total video duration in seconds
        vid_timeToSkipTo = convertToRange(event.pageX, [0, width], [0, vid_length]);
        console.log(vid_timeToSkipTo);
        goToTime(Math.floor(vid_timeToSkipTo));
    }
}

window.addEventListener('load', init);

function convertToRange(value, srcRange, dstRange) { //[2]
    // value is outside source range return
    if (value < srcRange[0] || value > srcRange[1]) {
        return NaN;
    }
    var srcMax = srcRange[1] - srcRange[0],
        dstMax = dstRange[1] - dstRange[0],
        adjValue = value - srcRange[0];

    return (adjValue * dstMax / srcMax) + dstRange[0];

}

function goToTime(time) { //[3]
    var ticks = 10; // number of steps during each fast-forward/rewind
    // per-video delay (ms) between steps; the middle strips take longest,
    // which produces the jumpy, slowly-reassembling effect
    var frms = [100, 150, 200, 250, 300, 350, 300, 250, 200, 150];

    for (var n = 0; n < frms.length; n++) {
        var vid = document.getElementById('vid' + (11 + n));
        var startTime = vid.currentTime;
        var tdelta = (time - vid.currentTime) / ticks; // seconds per step

        // step currentTime toward the target over ticks * frms[n] ms
        for (var i = 0; i < ticks; ++i) {
            (function(v, st, td, delay, j) {
                setTimeout(function() {
                    v.currentTime = st + td * j;
                }, j * delay);
            })(vid, startTime, tdelta, frms[n], i);
        }
    }
}

The code could still be optimized a lot given more time. The videos perform horribly in Firefox and Chrome while working fine in Safari. In general, playing 10 videos at once causes average-performing computers to heat up at an unacceptable rate and would probably eventually crash the browser. Instead, there is probably a way to use a single video file that is duplicated and cropped as necessary.
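A sketch of that single-video idea (hypothetical and untested; it assumes one shared <video> element and ten canvases with class "slice"):

//one hidden <video> as the only decode; each canvas copies its own slice
var vid = document.getElementById('thevideo'); //hypothetical shared source
var slices = document.querySelectorAll('canvas.slice'); //ten 10%-wide canvases

function draw() {
    var sw = vid.videoWidth / slices.length; //source slice width in video pixels
    for (var i = 0; i < slices.length; i++) {
        //copy slice i of the current video frame into canvas i
        slices[i].getContext('2d').drawImage(
            vid,
            i * sw, 0, sw, vid.videoHeight,           //source rect in the video
            0, 0, slices[i].width, slices[i].height); //destination rect
    }
    requestAnimationFrame(draw);
}
requestAnimationFrame(draw);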