
Live Web | Final Project Idea

For the remainder of this class I’ll attempt to discover the sort of “hidden style” not just of the structures of interfaces in digital media (windows, buttons, menus, scrolling… GUI paradigms in general) but of the content itself that fills these structures, specifically on the web. Because content on the web (image, video, and text, interactive or not) encapsulates objects, people, places, and ideas from arguably every period of human history, could the resulting aesthetic vernacular really be considered an overarching style of everything, a mess of clashing styles and perspectives from wildly different paradigms? Is this what defines our current zeitgeist, and if so, what does that mean for the future? What could possibly come after this massive collapse of time and space in aesthetic understanding?

This kind of flattening seems to have an effect on many things: political perspectives, cultural diversity (global homogenization in fashion, language, iconography, etc.), and perhaps more. So not only can the web be considered an “artistic medium,” it is also arguably the primary source of information dissemination and media consumption for many people all over the world.

I will realize this using appropriated media, facilitated by web APIs for various content aggregators and media platforms. I’ll try to focus on content “hubs” to minimize bias, though it will be interesting to see the bias inherent in my choices regardless; that bias may even be instrumental in helping to discover to what degree the web today acts as a window into any sort of objective paradigm of everything.

References:

What was your first experience of going online? What about Google image search? “I specifically remember my first image search. I searched for trees. Like, “green trees.” And it was really overwhelming because there were so many pictures of trees, all at once. Before, you could only look at books.” (src)


Art history has been “flattened” for artists of a certain age. “When they typed in ‘tree’” in a search engine, … “they got a thousand pictures of a tree: a picture of a tree made in the 18th century, a tree made last year, a cartoon of a tree. You have this flattening of time.” (src)

Edward Shenk “Theorist” — Visual artist leveraging the right-leaning conspiracy-theorist type of content that he apparently finds often on Facebook, and that most people would probably recognize as more or less prevalent across the web. He breaks this style of image-making down to its formal qualities, revealing its sinister underlying tone… “There’s this manic connection-making like in the darker parts of A Beautiful Mind. If something looks like something else then that is proof enough. There is no such thing as pure coincidence, and that’s a hallmark of paranoia.” (src)

XTAL fSCK video performance — The visuals these performers create exemplify the “default” visual effects inherent in the macOS UI: mashing [ctrl+alt+cmd+8] to turn the color-inversion accessibility feature into a strobe effect, using QuickTime’s screen recording feature to create endless recursion, swiping between desktop spaces, opening and minimizing windows with the genie effect in slow motion, and overlaying and mashing together emoji and iconography ad nauseam. The way the macOS UI is exposed in this formal, performative context reveals and accentuates visual qualities that were hidden in plain sight, forcing the audience to confront age-old questions of artistic merit and technological control in a context we find ourselves in every day but perhaps overlook.


Design for Discomfort | Final Project

A major day-to-day struggle for me is retaining self-control at my computer. As one who works primarily in the digital media space, distractions are not only infinitely abundant but also very easy to get lost in. Not only is everything “a few clicks away”; these distractions are efficient traps by design.

In the grand scheme of things the stress and discomfort induced by such an ecosystem of intentional distraction (decades of attention economics at work) may seem somewhat petty, but maybe it’s worth investigating. Behavioral psychology and the marketing industry have been around for a while now, but the digital context within which we experience them today, on such a massive scale, is still fairly new. How can the discomfort caused by our interactions in these systems, and the resulting feedback loops that psychologically impact users for clicks and eyeballs, be recreated in a designed experience in order to enlighten rather than obscure and distract?

I figured the best way to address this question would be to replicate such a web experience with exaggerated effects. The experience I designed has a progressive structure: the user follows a linear path while the uncomfortable elements build. From page to page the user is slowly conditioned to increasingly intense visual, auditory, and intellectual stimuli. In the future I intend to incorporate more heavily political content as well, to attempt to emulate the “philosophical dizziness” brought on by an overload of information, “soft propaganda,” and conspiracy-theorist fanaticism from all sides of the political spectrum at once. Ideally the experience would go on until the user is too uncomfortable to progress any further.

The experience would be contained in an embraced magic circle. The circle involves an already existing reality within which it exists (the web and its capacity to cause you to procrastinate), but the difference within the magic circle is that you are forced to confront an active awareness that you’re wasting time, or simply of how you waste time in general, and how it makes you feel.


Design for Discomfort | Final Project Pitch

Option 2. A Difficult Conversation

Taking a cue from Chris Crawford, conceptualize your design for an interactive device, application, or experience as a literal difficult conversation. Script out the conversation as the starting point for the design. Test throughout the process to determine if participants are hearing what you intend from your design, and aim to have a device or experience that can keep a person in the challenging conversation long enough to have a meaningful experience.


Concept/purpose: The idea is to build a series of interconnected web pages filled with visual tactics commonly found all over the web, designed to invoke extreme sensory discomfort in their users. I also want to address our interactions in these systems and the resulting feedback loops that psychologically impact users. I aim for it to serve as something of a timestamp for the current state of our digital landscape, and my hope is that it will prompt people to think about the kind of emotional/behavioral stimuli we all subject ourselves to.

Specific tactics to invoke:

  • Hook users with an infinite stream of memes / low-quality content with “can’t look away” qualities (excluding x-rated content for this assignment because I feel that the likelihood of dealing with institutional politics here would hijack the assigned purpose, but more so because there is absolutely plenty of anodyne content which is just as effective at garnering and holding attention)
  • Invoke anxiety in users by juxtaposing the attention-grabbing content stream with the recognition that they are wasting their time, perhaps procrastinating even
  • Sensory overload (as I learned from my journey prototype, it should be SUBTLE and long-form; blasting users immediately would make intentions immediately obvious and possibly incentivize leaving before being hooked)

Conversation:

user’s internal monologue (1) while interacting with anthropomorphized website (2)

1: “I’m experiencing a fleeting desire to escape from reading the next paragraph of this essay, I’m craving some quick mindless satisfaction”

*Opens new tab, goes to the site*

2: “I have a new visitor. Let me show them content that will bring them immediate satisfaction so that they stay.”

*Pulls the most viral post from imgur.com’s API and displays it.*

1: “Oh look at this adorable puppy!”

*Scrolls down*

2: “User wants more.”

*Pulls the next most viral posts from the imgur API and begins an infinite scrolling mechanism.*

“Perhaps user might be interested in shopping.”

*Displays an ad in the screen’s corner*

*Some time passes*

*Shows “You might be interested in…” link to Forbes 30 Under 30 and similar articles*

1: “Wish I had that much success. What the hell am I doing with my life? I think I’ve wasted enough time here.”

*Begins scrolling back up to the top of the page*

2: “User has abandoned the meme feed, let’s introduce them to other things they might like”

*Displays link to chat room*

1: “Wow I haven’t seen one of these in ages, I wonder what goes on here.”

*Enters chat room page*

2: “User has entered the chat room page, let’s make sure this opportunity isn’t wasted.”

*Displays pop-up modal ad*

*Loads eye-catching background*

*Enables emojis in chat*

*Loads some more “You might also like” links to articles about anxiety, procrastination, rare success stories, global issues, luxury goods*

1: “Oh look, Trump said a thing on Twitter and embarrassed himself in front of a prime minister, there are riots happening in Charlottesville, and a supervolcano is going to erupt any day now in Yellowstone, am I even going to live to retirement age? The people in this chat room aren’t very friendly, I think they’re trolling me. I can’t get caught up in internet arguments but this person is pissing me off. This background is giving me a headache. Maybe I should see if I can find a decent shirt on eBay, I don’t have any shirts that go with my pants. I can’t afford to buy new clothes anyway. This page runs really slowly. I might go back to the imgur feed after reading that article that’s tangentially related to my work— oh, I forgot I’d distracted myself with this stupid site. I should go back now”

*User closes tab and reflects on the experience*



Live Web | Mid-Term Proposal

I’d like to take this opportunity to put the tools I’ve learned in this class thus far toward a project I’m starting to develop for Design for Discomfort. The idea is to build a series of interconnected web pages filled with visual tactics commonly found all over the web, designed to invoke extreme sensory discomfort in their users. I also want to address our interactions in these systems and the resulting feedback loops that affect our actions. I aim for it to serve as something of a timestamp for the current state of our digital landscape, and my hope is that it will prompt people to think about the kind of emotional/behavioral stimuli we all subject ourselves to. What comprises our contemporary digital landscape? Is it different from, say, 10 years ago? If so, how? What has changed, why, and what does it say about us in terms of what we are using the web for, what we want the web to be, and what it might become in the future?


Live Web | Class 4 | Camera & WebRTC

For this assignment I was most interested in understanding how image data was encoded into the base64 format. I spent a great deal of time attempting to understand exactly how this was done so that I could alter the image pixel data with code. So far I’ve been unsuccessful, and I have no idea why my current code doesn’t work, but perhaps I’m close, or maybe I’ve led myself completely astray.

First of all I learned that an image is ultimately binary data. In a 1-bit image each pixel’s color is represented by a single 1 or 0 (black or white), but the PNG I am sending as base64 uses (I think) 24 bits per pixel, enough to represent a whole spectrum of RGB values (and, since PNG is a compressed format, the file’s bytes don’t map one-to-one onto pixels anyway). Knowing this, I then read up on how base64 encodes binary data. As I understand it, the encoder takes the binary data in 24-bit (3-byte) chunks, divides each into four 6-bit chunks, and assigns each 6-bit value (which has 64 possibilities) to an ASCII character using the base64 index table. Finally, the resulting ASCII string is padded with = or == to indicate that the last chunk of binary data was short: = means it contained only 2 bytes, == means it contained only 1 byte.
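To sanity-check this understanding, here’s a small Node sketch (the byte values are arbitrary ones I picked) showing the 3-bytes-to-4-characters mapping and the padding behavior:

```javascript
// Base64 turns each 24-bit (3-byte) chunk into four ASCII characters.
const three = Buffer.from([0xff, 0x00, 0x80]); // exactly one 24-bit chunk
const b64 = three.toString('base64');
console.log(b64); // "/wCA" — 4 characters, no padding needed

// A single leftover byte at the end produces '==' padding
// (two leftover bytes would produce '=').
const four = Buffer.from([0xff, 0x00, 0x80, 0x07]);
console.log(four.toString('base64')); // "/wCABw==" — ends in '=='

// Decoding and re-encoding is lossless: the original bytes come back.
const roundTrip = Buffer.from(b64, 'base64');
console.log(roundTrip.equals(three)); // true
```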

So in order to alter the pixel data I thought I’d need to:

  1. Decode it from base64 back to raw binary
  2. Do stuff to the binary (here I applied a random shuffle function)
  3. Encode it back to base64
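As a minimal, self-contained sketch of those three steps (the function name is mine, not from the actual server code), using Node’s Buffer for the base64 conversions:

```javascript
// Steps 1–3: base64 → raw bytes, shuffle the bytes, bytes → base64.
function shuffleBase64(b64) {
    const bytes = Buffer.from(b64, 'base64');      // 1. decode to raw bytes
    for (let i = bytes.length - 1; i > 0; i--) {   // 2. Fisher–Yates shuffle
        const j = Math.floor(Math.random() * (i + 1));
        const tmp = bytes[i];
        bytes[i] = bytes[j];
        bytes[j] = tmp;
    }
    return bytes.toString('base64');               // 3. re-encode
}

const input = Buffer.from('not really a png').toString('base64');
const output = shuffleBase64(input);
console.log(input.length === output.length); // true: same length in and out
```

Note that while the shuffle preserves the byte count (so the base64 length is unchanged), it scrambles the order of the file’s bytes.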

I am able to do this successfully, except that the resulting base64 data just returns a broken image. I’m not sure why this would happen, since the new base64 data should contain the exact same number of bits (right?). I also made sure that the final base64 string begins with `data:image/png;base64,` and ends with an = or ==. Here’s the corresponding code:

index.html:

<!DOCTYPE html>
<html>
	<head>
		<title></title>
		<script type="text/javascript" src="/socket.io/socket.io.js"></script>
		<script type="text/javascript" src="skrypt.js"></script>
		<style>
			#imgcontainer {
				position:relative;
				float:left;
				width:100%;
				height:100%;
				margin:0 auto;
				border:solid 1px #000;
			}
			#imgcontainer img {
				position:relative;
				float:left;
			}
			#txtbox{
				width:100%;
				word-wrap: break-word;
			}
		</style>
	</head>
	<style>
		canvas{
			border:solid 1px #000;
		}
	</style>
	<body>
		<video id="thevideo" width="320" height="240"></video>
		<canvas id="thecanvas" width="320" height="240" style="display:none"></canvas>
		<div id="imgcontainer">
			<img id="receive" width="320" height="240">
		</div>
	</body>
</html>

server.js (this is where it’s all done):

// HTTP Portion
// var http = require('http');
var https = require('https');
var fs = require('fs'); // Using the filesystem module
// var httpServer = http.createServer(requestHandler);

const options = {
    key: fs.readFileSync('my-key.pem'),
    cert: fs.readFileSync('my-cert.pem')
};

var httpServer = https.createServer(options, requestHandler);
var url = require('url');
httpServer.listen(8080);

function requestHandler(req, res) {

    var parsedUrl = url.parse(req.url);
    console.log("The Request is: " + parsedUrl.pathname);

    fs.readFile(__dirname + parsedUrl.pathname,
        // Callback function for reading
        function(err, data) {
            // if there is an error
            if (err) {
                res.writeHead(500);
                return res.end('Error loading ' + parsedUrl.pathname);
            }
            // Otherwise, send the data, the contents of the file
            res.writeHead(200);
            res.end(data);
        }
    );
}


// WebSocket Portion
// WebSockets work with the HTTP server
var io = require('socket.io').listen(httpServer);

// Register a callback function to run when we have an individual connection
// This is run for each individual user that connects
io.sockets.on('connection',
    // We are given a websocket object in our function
    function(socket) {

        console.log("We have a new client: " + socket.id);

        // When this user emits, client side: socket.emit('otherevent',some data);
        socket.on('image', function(data) {
            // Data comes in as whatever was sent, including objects
            console.log("Received at server: " + data);

            var base64Data = data.replace(/^data:image\/png;base64,/, ''); //strip the data URL prefix before decoding
            var buf = Buffer.from(base64Data, 'base64'); //decode the base64 string into raw bytes

            //view the buffer as a Uint8Array
            var uint8 = new Uint8Array(buf);
            console.log("uint8 array: " + uint8);

            //shuffle the byte array
            var arr = uint8;
            var sortedArr = shuffle(arr);
            console.log("shuffled: " + sortedArr);

            //turn back into base64
            var newB64str = Buffer.from(sortedArr).toString("base64"); //Buffer.from replaces the deprecated new Buffer()
            console.log("newB64str = " + newB64str);

            //Buffer#toString('base64') already includes the correct '='/'==' padding,
            //so no extra characters should be appended here

            socket.broadcast.emit('image', newB64str); //send to all except sender
        });

        socket.on('disconnect', function() {
            console.log("Client has disconnected " + socket.id);
        });
    }
);

function shuffle(array) { //https://stackoverflow.com/a/2450976/1757149
    var currentIndex = array.length,
        temporaryValue, randomIndex;

    // While there remain elements to shuffle...
    while (0 !== currentIndex) {

        // Pick a remaining element...
        randomIndex = Math.floor(Math.random() * currentIndex);
        currentIndex -= 1;

        // And swap it with the current element.
        temporaryValue = array[currentIndex];
        array[currentIndex] = array[randomIndex];
        array[randomIndex] = temporaryValue;
    }

    return array;
}

I also attempted to perform the shuffle to the base64 encoding itself (without transforming to binary first):

// HTTP Portion
// var http = require('http');
var https = require('https');
var fs = require('fs'); // Using the filesystem module
// var httpServer = http.createServer(requestHandler);

const options = {
    key: fs.readFileSync('my-key.pem'),
    cert: fs.readFileSync('my-cert.pem')
};

var httpServer = https.createServer(options, requestHandler);
var url = require('url');
httpServer.listen(8080);

function requestHandler(req, res) {

    var parsedUrl = url.parse(req.url);
    console.log("The Request is: " + parsedUrl.pathname);

    fs.readFile(__dirname + parsedUrl.pathname,
        // Callback function for reading
        function(err, data) {
            // if there is an error
            if (err) {
                res.writeHead(500);
                return res.end('Error loading ' + parsedUrl.pathname);
            }
            // Otherwise, send the data, the contents of the file
            res.writeHead(200);
            res.end(data);
        }
    );
}


// WebSocket Portion
// WebSockets work with the HTTP server
var io = require('socket.io').listen(httpServer);

// Register a callback function to run when we have an individual connection
// This is run for each individual user that connects
io.sockets.on('connection',
    // We are given a websocket object in our function
    function(socket) {

        console.log("We have a new client: " + socket.id);

        // When this user emits, client side: socket.emit('otherevent',some data);
        socket.on('image', function(data) {
            // Data comes in as whatever was sent, including objects
            console.log("Received at server: " + data);

            var slicedData = data.slice(22); //slice off data:image/png;base64,
            console.log("sliced data: " + slicedData);

            var b64string, numEquals;
            if (slicedData.slice(-2) === "==") {
                b64string = slicedData.slice(0, -2); //remove trailing `==`
                numEquals = 2;
            } else if (slicedData.slice(-1) === "=") {
                b64string = slicedData.slice(0, -1); //remove trailing `=`
                numEquals = 1;
            } else {
                b64string = slicedData; //no padding present
                numEquals = 0;
            }
            console.log("sliced data without == : " + b64string);

            var b64array = Array.from(b64string);
            var shuffledb64array = shuffle(b64array);
            var newB64str = shuffledb64array.join('');
            console.log("newB64str: " + newB64str);

            //re-append whatever padding was removed above
            var finalB64Str = newB64str;
            if (numEquals == 1) {
                finalB64Str = newB64str + "=";
            } else if (numEquals == 2) {
                finalB64Str = newB64str + "==";
            }

            socket.broadcast.emit('image', finalB64Str); //send to all except sender
        });

        socket.on('disconnect', function() {
            console.log("Client has disconnected " + socket.id);
        });
    }
);

function shuffle(array) { //via https://stackoverflow.com/a/2450976/1757149
    var currentIndex = array.length,
        temporaryValue, randomIndex;

    // While there remain elements to shuffle...
    while (0 !== currentIndex) {

        // Pick a remaining element...
        randomIndex = Math.floor(Math.random() * currentIndex);
        currentIndex -= 1;

        // And swap it with the current element.
        temporaryValue = array[currentIndex];
        array[currentIndex] = array[randomIndex];
        array[randomIndex] = temporaryValue;
    }

    return array;
}


Design for Discomfort | Journey 2

http://andrewmccausland.net/designForDiscomfort/attention/index

1. Choose a form of discomfort (choose one of the four forms, and get more specific from there)

Visceral— sensory overload (fast-paced flashing lights and colors, visual obstruction, inability to concentrate)

2. Identify a goal that could be reached through this discomfort

Get participants to think about their attention spans and how they are captured and manipulated by external stimuli. Ideally would like to facilitate dialogue surrounding this topic to span more overarching issues regarding our visual landscape (for example the advertising industry, technology’s effects on our mental abilities).

3. Identify a design approach (must be different than Journey #1) that can utilize your chosen form of discomfort.

A web-based experience (the space which is most susceptible to such discomfort).

4. Prototype a “journey” using that approach, toward the goal, through the chosen form of discomfort.

A web-based experience where users are presented with the objective of reading an essay, but are subsequently confronted with increasing sensory distraction inhibiting their ability to finish it (they wouldn’t have read it in its entirety anyway, right?)


This prototype focuses on testing the following factors:

  • User interest/engagement (How far will they get? If not far, why not? Is it boring? Does it get too uncomfortable too quickly?)
  • Does the user understand the purpose of the experience?
  • Did the user find the experience worthwhile/rewarding? Does it actually facilitate dialogue/thought as per my design goals, or in any other way?

Based on the results for these research points, I’d like to move on to figuring out what contexts the experience(s) should be presented in (how will users stumble upon this?) then build more complex web-based experiences utilizing similar subtle, encroaching a/v annoyances.


Live Web | Class 2 | Node + Socket.io Chat

Live URL TBA; DigitalOcean locked my account immediately after registration and hasn’t gotten back to me yet.

html & css:

<html>
	<head>
		<title>CH4T</title>
		<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
		<script type="text/javascript" src="/socket.io/socket.io.js"></script>
		<script type="text/javascript" src="chat.js"></script>
	</head>
 <body id="body">
	 <div id="main">
		 <input type="text" id="message" name="message">
		 <input type="submit" value=">" onclick="sendmessage(document.getElementById('message').value);">
		 <div id="messages"></div>
		 </div>

		 <style>
			 #main {
			 	/*margin:50px;*/
			 }
			 .newBg{
			 	position:absolute;
			 	top:0;
			 	right:0;
			 	z-index: -1;
			 	width:100vw;
			 	height:90vh;
			 }
			 input {
			 	/*width:100%;*/
			 	border-radius: 2px;
			 }
			 button {
			 	border-radius: 2px;
			 }
			.msgBox {
				position:relative;
				float:left;
				display:block;
				width:100%;
				padding:5px;
				margin:5px 10px 0 0; 
				background:rgba(0,0,0,0.05);
				border-radius: 2px;
			}
		</style>
 </body>
</html>

js:

var socket = io.connect();

socket.on('connect', function() {
    console.log("Connected");
});

// Receive from any event
socket.on('chatmessage', function(data) {
    console.log(data);

    //colors
    var colors = ["red", "orange", "yellow", "green", "blue", "purple", "pink", "brown", "black", "white"];
    for (var i = 0; i < colors.length; i++) {
        if (data == colors[i]) {
            var body = document.getElementById("body");
            body.style.background = colors[i];
        }
    }

    //bg images: map chat keywords to their gif files
    var bgGifs = {
        zoom: "zoom.gif",
        blood: "blood.gif",
        cat: "cat.gif",
        explode: "explode.gif",
        kirby: "kirby.gif",
        pizza: "pizza.gif",
        werq: "professional.gif",
        rain: "rain.gif",
        snow: "snow.gif",
        sparkle: "sparkle.gif",
        water: "water.gif",
        wizard: "wizard.gif"
    };
    if (bgGifs[data]) {
        var newBg = document.createElement("div");
        newBg.className = "newBg";
        newBg.style.backgroundImage = "url('img/" + bgGifs[data] + "')";
        document.body.appendChild(newBg);
    }

    if (data == "nothing") {
        //getElementsByClassName returns a collection, not a single node,
        //so remove the backgrounds one by one from the end
        var bgs = document.getElementsByClassName("newBg");
        for (var i = bgs.length - 1; i >= 0; i--) {
            document.body.removeChild(bgs[i]);
        }
    }

    var msgBox = document.createElement("div");
    msgBox.className = "msgBox";
    msgBox.textContent = data; //textContent avoids injecting HTML sent by other users

    document.getElementById("messages").appendChild(msgBox);
});

var sendmessage = function(message) {
    console.log("chatmessage: " + message);
    socket.emit('chatmessage', message);
};


Design for Discomfort | Daily Practice

When considering what I could do as my daily practice, I thought about the ways I’d like to grow over the next several weeks, and the biggest thing that came to mind was the amount of time I spend in front of a screen. I decided to draw for 30 minutes each day. Turning off my monitors and focusing on the page for this duration would be an exercise in leaving my comfort zone; I would have to fight impulses to check social media, surf the web, etc. Then I would have to document my progress, forcing me to publish and share my drawings.

[Documentation feed is here]

Note: I started on Saturday. I had trouble coming up with a feasible plan until then.