Enter Sandman

 

Every day we listen to the specters of many artists, living and dead. Technology allows us to preserve their thoughts and feelings as waveforms and as words.

When I was turning 18 I received my first dose of Morphine: a friend gave me my first Morphine CD. I can still recall the moment I played the record on my sister's silver Sony portable player. It was real, real love… Love at first listen… I then started downloading music from this amazing band like a Morphine junkie looking for the next hit. When I was finally able to go to Tower Records to look for their other records, I was so happy to spend my aunt's birthday money on them. Suddenly I read “Dedicated to Mark Sandman,” and then I saw: “Mark Sandman, 2 string bass and vocals.” After some neural activity my blood ran cold… I felt the shock of losing a very close person whom I had never met. I found out he died at a young age. He took a plunge to the other side as only true rockstars do: he died on stage from a heart attack.

Here, we are summoning the spirit of Mark Sandman. The goal is to share his unique voice: fun, controversial, sometimes poetic, other times funny or spicy, like life itself. He almost fulfilled his wish of sitting on his back porch on 9/9/1999 singing “french fries with pepper!” But he didn't; he passed away a couple of months before that date. So now we are giving him some nice pepper as a way to trade it for some of his words and thoughts, which, thanks to technology, allow him to exist in a virtual way among us.

For this purpose, we are now relying on computer vision, pepper and some lines of magic spells, also known as code. All randomness in this magical system comes from the physical action of the computer counting pepper and Mr. Sandman's spirit assigning a phrase to that number.

This project plays, in a metaphorical way, with different ideas: the practice of necromancy; the ready-made, decontextualization and appropriation, concepts inherited from the avant-garde; and the idea of the monument, built to commemorate the life of an important person and to keep their memory alive. Here, the monument is a way to raise interest in the work done by Mark Sandman.

If you are curious about Mark, a good way to start getting to know him is this Washington Post article about him. I would also advise you to listen to his music on YouTube or other web music platforms. There is also a documentary called “Morphine: Journey of Dreams.” And I would definitely advise you to listen to Morphine's last record, “The Night,” on a great stereo or with headphones.

http://www.washingtonpost.com/wp-dyn/articles/A23772-2004Nov30.html

The technical process

The process for the code was tricky. The problem to be solved was how to map the “random” number that came from counting the pepper onto the range of phrases taken from Mark Sandman's lyrics. Using the computer's random function was a possibility, but it was an easy and boring solution, and we also wanted the result to be linked closely to the amount of pepper. So after some thinking, I realized I could use a list of different JSON files, each from a single song, and then use the pepper count number again inside the selected song's JSON file. Another issue was having a pepper count larger than the number of JSON song files, or than the number of lines inside a song file. After days of code struggle, headaches and suffering, I could finally write the loop logic to wrap the position inside the array whenever the incoming pepper count was larger than the JSON array size.
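A minimal sketch of that wrapping idea, as it ends up working in the full code further below (using the same songs array loaded there): the pepper count is folded back into the valid index range with the modulo operator, so any count, however large, always lands on an existing song and an existing line.

// sketch of the index-wrapping idea used in doProcess() below
int pepperCount = 137;                              // e.g. 137 grounds of pepper were counted
int songIndex = (pepperCount - 1) % songs.size();   // always between 0 and songs.size()-1
JSONArray songLines = songs.getJSONArray(songIndex);
int lineIndex = (pepperCount - 1) % songLines.size();
String line = songLines.getString(lineIndex);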

Another issue at first was working with JSON files. I didn't realize the importance of how the lines had to end until I had formatted almost 30 songs. After struggling with incomprehensible mistakes I finally understood how they had to be written, and how a missing comma or double quote could mean total disaster. I heard Mark Sandman's voice in my mind singing “Have patience, everything will be alright.”
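For reference, this is the shape each of the simple song files ends up needing, judging from how the code below reads them (the lines here are hypothetical placeholders): an array of strings, every line wrapped in double quotes, with commas between entries and none after the last one. The alltexts.json file then wraps many of these arrays inside one outer array.

[
  "First line of the song",
  "Second line of the song",
  "Last line, with no comma after it"
]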

The physical part

The next problem to solve was the physical part. I figured out that the code had specific requirements to work. The hat needed enough distance for a proper pepper-count reading, and also to show an interesting, ambiguous, “spectral-looking” image. So the initial simple design had to be modified, and a box was required to solve this issue. There is still work to do to make it a more sophisticated and better-looking device. A better visual design for the interface is also needed.

The interaction with the user

The interaction of tapping the hat is interesting and works. Some voice recognition could be added to make it better and more magical. Adding sound output to go along with the text would also be great.

 

The Processing magic spell:

/*
This code has been written thanks to the help of Danny Rozin, Lisa Jamhoury, Daniel Castano and Leon Eckert!!!
It requires the OpenCV for Processing library to be installed
*/

import processing.video.*;
import gab.opencv.*;
import processing.pdf.*;
import processing.serial.*;
// PImage is the way OpenCV refers to images
PImage src;
OpenCV opencv;
Capture video;
int count = 0;
ArrayList<Contour> contours;
Serial myPort;
// images array
// PImage[] images = new PImage[10]; FOR USING WITH THE DIE
// PImage[] images = new PImage[70];
// image
// PImage img = new PImage();
// initialize random
// int imageNumber; // **** (is going to be used as the OpenCV number)

// ArrayList<DrawDice> drawDiceResults;
JSONArray introWords; // instruction phrases
String introWordsLine = "";
JSONArray songs; // JSON array with the collection of songs
String line = ""; // line of the song that is going to be selected
JSONArray hiSandman; // Mark's greeting message #1
String hiSandmanLine = "";
JSONArray sandmanSays; // Mark's greeting message #2
String sandmanSaysLine = "";

void setup() {
  size(1900, 1200);

  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[7], 9600); // 7 may change depending on the connection port

  myPort.write("A");
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println("Id: " + i + " --> " + cameras[i]);
    }
  }

  video = new Capture(this, 960, 540, cameras[19], 15); // camera id (19), frame rate (15); match the OpenCV space to the same camera size
  opencv = new OpenCV(this, 960, 540);
  video.start();

  hiSandman = loadJSONArray("hisandman.json"); // my name is mark, do you like french fries?
  sandmanSays = loadJSONArray("sandmansays.json"); // hey,
  songs = loadJSONArray("alltexts.json");
  // print(songs);
}

void draw() {
  background(0);
  fill(255);
  // noFill();
  // video.loadPixels();
  // image(video, 0, 0);

  opencv.loadImage(video);
  opencv.useGray(); // turn our pic to grayscale
  image(video, 444, 230, 950, 750); // visualize the OpenCV filtered image
  textSize(21);
  fill(255, 255, 255, 200);
  text("Hello, my name is Mark Sandman. I used to play with a band called Morphine, but I crossed to the other side in 1999. My spirit lives now as data inside this computer.", 160, 60);
  text("All I eat now are imaginary french fries, but I would appreciate nice and spicy real pepper, so I can turn it into ghost pepper!", 280, 105);
  text("Pour some pepper on a plate for me. Softly whisper Sandman, tap the top of the hat, and I'll tell you what's going through my mind.", 395, 155);

  textSize(13);
  fill(255, 255, 255, 200);
  text(line, 50, 1050); // selected song line
  textSize(21);
  fill(255, 255, 255, 200);
  text(hiSandmanLine, 720, 195); // hiSandman greeting
  text(sandmanSaysLine, 850, 1111); // sandmanSays message

}

void mousePressed() { // change for serial event using read line
  doProcess();
}

void serialEvent(Serial p) {
  String inString = p.readString();
  println(inString);
  if (inString.equals("s")) {
    doProcess();
  }
}

void doProcess() {
  background(0); // IF I WANT, MODIFY THE CANVAS ON CLICK; FOR NOW NOTHING HAPPENS...
  noFill(); // AFFECTS THE BLOB CONTOURS, DRAWN WITHOUT FILL...
  // clear();

  opencv.threshold(205); // opencv.threshold(225); // threshold tolerance adjustment (dice, water, oil, paper) for the blob recognition with OpenCV

  opencv.blur(1); // opencv.blur(11); // for water, oil and paper

  contours = opencv.findContours(); // tell OpenCV to find the contours and return them as an array list
  int countPepperGrounds = 0; // count blobs
  for (Contour thisContour : contours) { // visit all elements of the array list "contours", naming each one "thisContour"
    stroke(204, 102, 0);
    strokeWeight(2);
    thisContour.draw();
    countPepperGrounds = countPepperGrounds + 1; // count++;
  }
  if (countPepperGrounds == 0) { // to prevent the counter from being 0
    countPepperGrounds = 1;
  }

  // ***** WRITE FUNCTION WITH IF/ELSE SO THAT IF THE COUNT IS BIGGER THAN THE ARRAY IT GOES BACK TO INDEX 0 OF THE ARRAY?

  // *** code below working only with the OpenCV "random" using the blob count
  int songSelector = (countPepperGrounds - 1) % songs.size(); // number will always be between 0 and songs.size()-1
  JSONArray songLines = songs.getJSONArray(songSelector); // -1 == count correction; SELECTING THE SONG FROM THE ARRAY LIST (FIRST FILTER)
  // print(song);

  int songLineSelector = (countPepperGrounds - 1) % songLines.size();
  line = songLines.getString(songLineSelector); // line is a global variable so it can be used in draw(); SELECTING A LINE FROM THE SELECTED SONG
  // println(line);

  int hiSandmanSelector = (countPepperGrounds - 1) % hiSandman.size();
  hiSandmanLine = hiSandman.getString(hiSandmanSelector);

  // println(hiSandmanLine);

  int sandmanSaysSelector = (countPepperGrounds - 1) % sandmanSays.size();
  sandmanSaysLine = sandmanSays.getString(sandmanSaysSelector); // SELECTING A LINE FROM sandmanSays

  // println(sandmanSaysLine);

}

void captureEvent(Capture c) {
  c.read();
}

Alea Jacta Est, version 0.85

Alea Jacta Est

 

 

OpenCV was used to recognize the results of the die, and it worked. But it is hard to rely on its precision because of fabrication factors and other variables that can affect accuracy, such as changes in light and dirt or material particles altering the environment, among other possible problems. An alternative would be to try OpenCV in a different mode, maybe using color-percentage detection. The other path to explore is QR code technology.

 

Using the blender as a dice roller sounds like a fun, possible, simple and great idea. But it's not… Its power is too much, and it is easy to lose control of mechanical parts that are not designed for this specific use. It can roll dice, and it actually did, but not consistently. According to teachers experienced in fabrication, like Ben Light and Danny Rozin, it will kill itself sooner than expected.

 

But the blender is a very effective metaphor for rolling dice. So the idea is to create a machine similar to a blender that can be easily controlled and used for a very long period of time, repeating almost infinite cycles. For this purpose, the next step is researching motors, in particular stepper motors because of their reliability.

 

The visualization is working as expected, but this is just the beginning. The idea is to create a flexible interface that lets the user change the image, while keeping the input, which will always come from the collected and ever-growing results of the dice roller. Along with the visualization, sound could also be explored as a way of using the data from the dice. This could make the drawing process more enjoyable and interactive if we allow users to play with sound values linked to each of the numbers of the die.
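A minimal sketch of that sound idea, assuming the Processing Sound library (my own illustration, not part of the current code): each die face from 1 to 6 is linked to a note and played as a sine tone.

import processing.sound.*;

SinOsc osc;
float[] scale = {261.63, 293.66, 329.63, 392.00, 440.00, 523.25}; // one pitch per die face

void setup() {
  size(200, 200);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
}

void keyPressed() {
  int dieResult = int(random(1, 7)); // stand-in for a real result coming from the dice roller
  osc.freq(scale[dieResult - 1]);    // face 1..6 picks the corresponding pitch
  println("die: " + dieResult);
}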

 

Another step will be creating the database functionality. Along with this, the next challenge, or dream, is creating a function for the computer to “make bets” according to the stored data built up over time: some kind of machine learning function for this purpose. In that case, it could be fun to create a competition between users and the computer, trying to guess the next results of the die.
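A minimal sketch of that betting idea (my own assumption: a simple frequency count rather than real machine learning): the computer keeps a tally of past results and always bets on the face that has come up most often.

// keep a tally of past rolls and bet on the most frequent face
int[] tally = new int[6];

void recordRoll(int face) { // face is 1..6, e.g. read from the dice roller
  tally[face - 1]++;
}

int computerBet() {
  int best = 0;
  for (int i = 1; i < 6; i++) {
    if (tally[i] > tally[best]) {
      best = i;
    }
  }
  return best + 1; // back to 1..6
}

void setup() {
  int[] exampleRolls = {3, 5, 3, 1, 6, 3, 2}; // hypothetical stored results
  for (int r : exampleRolls) {
    recordRoll(r);
  }
  println("The computer bets on: " + computerBet()); // prints 3 for this example
}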


Computer Pseudo Random vs Physical Pseudo Random

The idea of this exercise was to think about randomness and to try to create a random function that can be used with a computer, without using the computer's own random function, which is a complex mathematical algorithm that simulates randomness well enough for the eyes of most people, but which, if inspected closely, can show some repetitive patterns.

In this case, we are comparing the random function used by Processing with a random function created by using computer vision to “read” images from the real world, with Processing code translating that input into values that become the alternative random.
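A minimal sketch of how such a comparison can be visualized (my own illustration; the blob counts below are hypothetical stand-ins for real OpenCV readings): the same tally is kept for Processing's random() and for values derived from blob counts, and the two histograms are drawn side by side.

// white bars: Processing's pseudo-random; orange bars: "physical" pseudo-random from blob counts
int[] computerTally = new int[6];
int[] physicalTally = new int[6];
int[] exampleBlobCounts = {143, 87, 87, 91, 150, 12, 88, 86, 90, 143}; // hypothetical OpenCV readings

void setup() {
  size(400, 200);
  for (int i = 0; i < exampleBlobCounts.length; i++) {
    computerTally[int(random(6))]++;                 // computer pseudo-random
    physicalTally[(exampleBlobCounts[i] - 1) % 6]++; // physical pseudo-random
  }
}

void draw() {
  background(0);
  for (int i = 0; i < 6; i++) {
    fill(255);
    rect(20 + i * 60, height - computerTally[i] * 20, 20, computerTally[i] * 20);
    fill(204, 102, 0);
    rect(45 + i * 60, height - physicalTally[i] * 20, 20, physicalTally[i] * 20);
  }
}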

For this purpose we tested the system with water mixed with oil, pure water, pieces of white paper over a dark background, and paper on water mixed with oil.

The results are interesting. The “analog” randomness turned out to be more predictable than one might expect. Depending on the interaction with the elements, the resulting numbers would shift toward certain values. An interesting fact is that nearby numbers repeat, but there is no absolute certainty that clicking the mouse to read the random result will give you the same number twice, even if the container and the elements haven't been manipulated recently.

On the other hand, it seems like the Processing random function has an even distribution that once in a while repeats a result, just to look like a real random function. We can see more diversity of results with this system than with the current analog system.

 

Play test #1

Traffic sounds

Instructions

Two players

Each player received twelve papers, each with one of four words: car, ice cream truck, ambulance, and bike.

The dealer has four cards with the same kinds of names as the players' cards.

The dealer puts one of them on the table between the players.

The players have to find the kind of card shown by the dealer, place one card of that type, and also make a sound associated with the card.

The first one to put a card down wins the hand, and the other player takes the cards and adds them to their own.

The dealer then puts down the next type and the same dynamic repeats.

The first player to get rid of all their cards wins.

Comments from users

“It's not funny when you lose” (said in a funny way)

  • It felt possible, and also good, to be able to turn the tide during the game, even if you were losing by a large margin.
  • It was fun in general.
  • The reason for creating sound was confusing, but making sounds was fun, and so was hearing sounds from others. It would be nice to be able to create interference while the other player is making their sound.
  • The physical interaction could be designed better and be more fun.
  • More players could make it more fun.
  • It was interesting that the time waiting for the next instruction was always different, because it gave a feeling of uncertainty that was fun.
  • When running out of cards (almost winning) you were forced to lose if you didn't have the right card. It was frustrating, but also fun, because you were hoping luck would be on your side instead of relying only on your skill and focus.
  • “I feel like I'm also collaborating with the other to make noises.” Or encouraging the other to make funnier noises.
  • Is it for kids?
  • It could be more difficult, so adults would find more challenge and interest.

Next experiment: 

For the next experiment, the rules could be shifted toward a free, collective, collaborative sound creation.

Also playing with time restrictions and with the number of sounds to be produced by each player.

Increasing the number of possible sounds to be played.

 

Die is the answer!

http://alpha.editor.p5js.org/nicosanin/sketches/ryd5_hccM


For this meditation I chose to play with the idea of automatic writing, the concept of authorship, and with randomness as a topic. I took the Markov chain example provided in class and made a ready-made of this code, in order to build an automatic writing program that creates enigmatic phrases about randomness in an intriguing and, if we want to use the words, poetic and funny way. I was struck by the idea of how authorship has been a very flexible and questionable concept since modern times and postmodern philosophy. Where do we start building something? Where do we stop? What happens to the legacy of the many people in the contribution chain that led to the achievement? Is their authorship dead? I prefer to think that these contributions and achievements are a form of immortality for their authors. These legacies will be modified and affected by the actions of the next contributors.

In this particular case, the idea was to give way to a random speech about randomness, while also putting into dialogue different voices of authors who wrote about randomness. A selection of texts from song lyrics, quotes, a science paper, and some Dadaist poetry forms the “subconscious” of Mr. Die. Another point I want to mention is the weird feeling I sometimes have when reading a book or listening to a song. I always feel as if the authors were ghosts, but not in a creepy way. I feel like they are sending messages from another time and place. So I feel that this is a nostalgic way of invoking their voices and inviting them “back to life” to continue playing, collaborating and “creating” as they did before.
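The Markov chain itself is a simple idea. Here is a minimal word-level sketch of it in Processing (my own illustration, not the p5.js example linked below): every word in the corpus points to the list of words that have followed it, and the generator walks those lists at random.

import java.util.*;

HashMap<String, ArrayList<String>> model = new HashMap<String, ArrayList<String>>();

void buildModel(String corpus) {
  String[] words = splitTokens(corpus, " \n");
  for (int i = 0; i < words.length - 1; i++) {
    if (!model.containsKey(words[i])) {
      model.put(words[i], new ArrayList<String>());
    }
    model.get(words[i]).add(words[i + 1]); // remember which word followed this one
  }
}

String generate(int length) {
  String[] keys = model.keySet().toArray(new String[0]);
  String current = keys[int(random(keys.length))]; // start from a random word
  String result = current;
  for (int i = 0; i < length; i++) {
    ArrayList<String> next = model.get(current);
    if (next == null) break;                        // dead end: this word never had a follower
    current = next.get(int(random(next.size())));   // pick one of its followers at random
    result += " " + current;
  }
  return result;
}

void setup() {
  String corpus = "randomness is a riddle and a riddle is a door and a door is random"; // hypothetical tiny corpus
  buildModel(corpus);
  println(generate(12));
}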

So what is randomness? I still don't know; you can read some books about it. You can also talk to professors from different areas of knowledge like Physics, Math, Philosophy, Art, Theology, Chemistry, Literature and others. If you are still not satisfied, I would definitely advise you to ask Mr. Die for his current thoughts on its nature. I can't say he is right, but I can assure you his replies will be as valid as all the rest you are going to find anywhere else!

Once the speech was fed with the proper material to reflect the topic of chance in a not-so-narrow way, the design problem arose. I wanted to create a very fluid interaction between the user and Mr. Die. I want the possibility of making Mr. Die relate his speech to a random author from the list of people whose texts were fed to the Markov chain model. So far I have struggled with p5 and still haven't gotten to the point of creating a very satisfying, beautiful and fully functional interface. I can't say how frustrating this process has been, and how I have been crashing against the walls over and over again (but that's a different story). Something I would like to have is the possibility of tweeting these phrases, to create a memory of this hilarious, beautiful, and sometimes “pretentious-like” voice.

An important improvement I would like to add is including part of the user's input in the quotes provided by Mr. Die. That would probably give the quotes a little more interest and “sense” for the user, making them feel that their existence and presence is key, and also giving Mr. Die more credibility and authority as a Random Guru: “That's my motto.”

A design aspect I would also like to have is “rolling” this die every time, so you can also get a “true random fact,” if you can say such a thing exists.

The code reference by Allison Parrish used for this sketch is the following: http://alpha.editor.p5js.org/allison.parrish/sketches/S1Eangfax

Riding with the elephant

*Elephant is the name we are giving to what we often call “the subconscious” and all the automatic systems of the body and mind. In this case, the subconscious of Nicolas…

Nicolas: Hi Elephant! How have you been?

Elephant: A lot better, to tell you the truth! Less stressed and anxious… positive!

N: Great! So let's talk about what we just did for this so-called “experiment.”

Elephant: Sure…

N: So, what was the point of riding the bicycle while gathering data, and also doing a different activity on each ride, as we did?

E: I wanted to ride the bike…

N: So you tricked me into thinking all of this made sense because you just wanted to ride the bike?

E: I guess I did…. It makes me feel better… and maybe be better…

N: Silence…

E: Maybe it did make some sense…

N: Ok… Please try to explain…

E: The idea was to explore different ideas, like cycling as a way of meditation, and in the process to think about how you can connect ideas like performance, focus, fun, the competitive impulse, and the perception of time.

N: Ok… So that sounds like too much at the same time… How did you design the experiment?

E: Well, I thought about different ways of riding a bike, about reasons to ride it. Then I set some basic common rules for riding, and some instructions for each riding mode. Each ride would be 31 minutes long, without warming up. The data from the bicycle's computer would not be available during the ride. We would have to try to guess when we thought five minutes had passed; to do so, we would signal to the camera each time we thought that happened. Now let me talk about the riding modes we chose. The first mode would be riding without any activity other than pedaling. The second would be smiling during the whole ride. The third, riding with the goal of performing as well as possible. The fourth mode would be riding with the eyes closed.

N: Weird… I still don’t get it…

E: Well, the idea is also being able to analyze the gathered information to try to make sense of it.

N: So what information are we talking about?

E: We will gather heart-beat information with a heart sensor; a cadence sensor will keep track of the pedaling and performance; and the speed-and-distance sensor from the bike trainer will track speed and distance. We are also going to use video and photography to capture our body and facial expression through the whole ride. Photography will be used to produce a time-lapse sequence (taking a picture every 10 seconds) to create an average image of the expression in each ride. We can also use these frames to see how the expression gradually changes. (A sketch of this averaging idea, in code, appears right after this exchange.)

N: A little confusing!

E: Mm, indeed… So, let me try to expand a little more on the use of video and photography. I actually thought we could use photography as a way of trying to read different gestures that could give us some clues about the state of mind, and also about the way the body changes its movement patterns according to the state of mind and the attitude toward cycling. I had this idea after looking at and reading about the paintings of Francis Bacon. He used photography as a visual reference to understand human behavior, feelings, nature, or condition… He realized its potential to capture, by its mechanical means, some apparent facts that the human eye and mind most times choose to ignore or edit out of the perception process.

N: Kind of like the reason we choose one picture over another when we are captured with one eye closed and one open.

E: Right…
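*A minimal sketch (my own, only a sketch under assumptions, not the code we actually ran) of the “average image” idea from the exchange above: hypothetical time-lapse frames named frame0.jpg … frame179.jpg, all 960 x 540, are summed pixel by pixel and divided by the number of frames.

int numFrames = 180; // e.g. one frame every 10 seconds over 30 minutes

void setup() {
  size(960, 540);
  PImage average = createImage(width, height, RGB);
  float[] r = new float[width * height];
  float[] g = new float[width * height];
  float[] b = new float[width * height];

  // accumulate the color channels of every frame
  for (int i = 0; i < numFrames; i++) {
    PImage frame = loadImage("frame" + i + ".jpg"); // assumed to exist and match the canvas size
    frame.loadPixels();
    for (int p = 0; p < frame.pixels.length; p++) {
      r[p] += red(frame.pixels[p]);
      g[p] += green(frame.pixels[p]);
      b[p] += blue(frame.pixels[p]);
    }
  }

  // divide by the number of frames to get the average expression
  average.loadPixels();
  for (int p = 0; p < average.pixels.length; p++) {
    average.pixels[p] = color(r[p] / numFrames, g[p] / numFrames, b[p] / numFrames);
  }
  average.updatePixels();
  image(average, 0, 0);
}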

Francis Bacon – Portrait of George Dyer Riding a Bicycle, 1966 – Self-Portrait

Ride#3 (Riding hard mode)

Ride#1 (Riding “normal mode”)

Ride#4 (Riding with closed eyes mode)

Ride#2 (Riding and laughing mode)

N: So how was the ride?

E: Nico, as you are so smart and rational, please try to make sense out of all this data we put together!

Table chart of perceived time vs real time for each ride and performance.

 

N: Ok, I'll do my best to make sense out of the information from your nonsense experiment! I'm going to talk about performance, and also about the sense of time passing, and I will try to relate these collected facts as best as I can. Then we can talk about how we could make something useful out of this kind of experience.

It's interesting to note that, in general, the differences were not big. The rides are more similar than you expected when you designed the experiment. The biggest differences might be in how you felt about the activity and in the perception of time.

According to the statistical analysis from the bike computer, our performance in terms of distance and average speed didn't actually change that much across the different sessions. The highest performance in those terms was the ride in “normal mode,” followed by the “riding hard mode,” then the “closed eyes mode” and finally the “laughing mode.”

In terms of average heart rate, from higher to lower: “riding hard” 181 bpm, “closed eyes” 158 bpm, “laughing mode” 153 bpm, and “normal” 135 bpm.

In terms of average pedaling cadence, from more to less:

“Normal ride mode” 72 rpm, “riding hard mode” 71 rpm, and “closed eyes mode” and “laughing mode” tied at 69 rpm.

In terms of your happiness index, you felt happiest during the “closed eyes mode,” followed by the “laughing mode,” then “riding hard” and finally “riding normally.”

Let's talk about how we perceived the passing of time. It's a very tricky aspect of this experience. We are focusing on the moment we thought the 30 minutes were over. You thought they were over at 32.4 minutes during the “closed eyes mode,” at 28.1 during the “riding hard mode,” at 25.35 during the “laughing mode,” and at 23.1 when riding in the “normal mode.” I'm having a little trouble interpreting these numbers. What could be behind the feeling of time passing faster than it actually did according to the timer? Is it that we were more focused on the activity? That we felt less tired during it? That we were more distracted by other things? Instead of stating facts about this, which makes very little sense, let's raise questions.

The most inaccurate perception of time was during the normal mode: it was off by about 7 minutes. Was that because the experience was boring? A hint in this direction is the given instruction of “just riding the bike.” And yet, if we consider performance, we had the best score. We could ask a question here: do we feel like time passes more slowly when doing hard tasks?

Let's have a look at riding while laughing. We felt that the 30 minutes were complete when only 25.35 had passed. Was this because we felt bored? Maybe not… In terms of performance it was our worst ride. So what was the reason? Was it because we felt tired, or because the task of smiling for half an hour was hard or boring?

Let's now think about the results from the “riding hard mode.” We thought the time was up about 2 minutes early, fairly close to reality. Was it because the task was hard? Do hard tasks make the sense of time seem slower? If we compare this result with the “riding normal mode,” it's interesting to note the big difference in time perception against performance: the performance difference wasn't as large as the difference in time perception. The question raised is whether being focused on the goal of performing well, having that motivation, made us feel that time passed a little more quickly, but still slower than reality because of the physical effort we were aware of and submitted to.

E: Ok! Interesting thoughts! It still seems hard to draw incontrovertible facts out of all of this. All I can say is that I really enjoyed the experiment, because it made me feel better about myself, allowed me to release a lot of stress and tension, created a space for awareness and meditation, and I enjoyed watching the visualization of the different ride modes. It kind of takes me back to those moments and lets me see some expressions and details that give clues about how the ride was, in a different and maybe complementary way to the data collected from all the sensors we used. So Nico, how do you imagine this being used in a practical and positive way in the future?

N: Mmm… I think it could be turned into a tool to analyze your body expression and your movement patterns through time, so the system can give you advice about your emotional and physical state, comparing your current expressions and patterns to the “normal” ones. I imagine the system giving advice like: “You should ride longer, because you will feel better, taking into account all the stress you have been feeling and all the time you have been still at the office.” Or maybe: “It seems like riding today should be shorter than 30 minutes: it looks like you are tired at a physical level, and need more sleep than riding to achieve a better index of happiness and well-being.”

I imagine using technologies like CLM tracking to follow facial expressions, so we can have a sense of how the rider is feeling, both at a physical level and from an emotional point of view. When brain activity sensors become more accurate, and the truly accurate ones become accessible, they could also reveal important information about our responses to riding, and about how we should ride to make the most out of each particular session in order to feel better mentally and physically. There is already an existing device that is an important step in that direction. Its purpose seems to be a little different, but information from our body is being used to help us improve quality of life, and of the cycling activity in particular. It is called MindRider.

Other possible technologies to explore are virtual reality technologies. They offer interesting potential as a way to make the riding experience a happier one, and also a more productive one from a training point of view. Steps have to be taken beyond the current bike simulator approaches, which in general try to make the rider feel like they are riding on a road (virtually anywhere in the world they choose). I can imagine the bike being used for playing games, or as a vehicle to meditate or follow interesting therapeutic ideas like the smiling therapy we tried in this experiment.

Another idea that came to my mind about the possible use of this particular exercise is really visualizing how the position of the body changes through time. This can be analyzed carefully to find patterns and tendencies linked to factors like the search for better performance, distraction, fatigue, relaxation, etc. We could eventually use technologies like the ones used by the Kinect to do so. Having a frontal point of view and a lateral point of view would make it clear how we move closer to or farther from the optimal position for the best possible performance.

E: Cool! I had no clue about those things and possibilities! So it would be like having an Elephant trainer, I guess!

N: Mmmm, kind of, but an Elephant trainer that also helps us understand ourselves in a wider way through the observation and monitoring of the signs and hints left by the elephant.

http://mindriderdata.com/       

https://www.wired.com/2015/01/mindrider-manhattan-bike-map/


Drawing: uncertain measure


 

Talking with Daniela, we realized we had a common interest: drawing. We both love drawing, but we have very different approaches to it. She likes to look at people and make quick sketches of what they are doing. Her process is fast and intuitive. The results give you the feeling of the speed of life, and also information about what the people were doing. In my case, I have been interested for some time in the idea of creating drawings in a very repetitive way, following simple instructions and rules of play. In this kind of drawing, control of the final result is given to chance, or specifically to the collection of results taken from a die that is rolled many times.

The skills required for each kind of drawing (Daniela's and Nico's) are different, but they also have similarities. We both need a very intense focus on the activity, and also great control of the tool we are using and of the body behind it. We thought about how to reveal aspects of that drawing process using different means. We thought it would be interesting to use the Kinect to reveal the way we moved while drawing. We played with the code and finally came up with an interesting way of visualizing the movement, and of seeing the movement through time.

The collected information actually became a drawing! Some kind of futurist work of art that reminded us of works like Duchamp's Nude Descending a Staircase. In this video you can see the Kinect capture of our movement, and also the video capturing our drawing process at the same time. The sync is not perfect, but you can still understand what's going on. The Kinect video is interesting because it shows the complexity of how our body moves, even doing something “as simple” as sitting down and drawing. This is of course just a fast and inaccurate experiment, but it could definitely be done with more time, a careful setting and improved code to visualize things better.
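A minimal sketch of the “movement through time” idea (my own illustration, not the code we used): positions are stored over time and drawn as a fading trail, so the whole gesture stays visible at once, a bit like a futurist painting. The mouse stands in here for a tracked Kinect joint.

ArrayList<PVector> trail = new ArrayList<PVector>();

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  trail.add(new PVector(mouseX, mouseY)); // in the real version this would be a joint position
  for (int i = 1; i < trail.size(); i++) {
    stroke(255, map(i, 0, trail.size(), 20, 255)); // older positions fade out
    line(trail.get(i - 1).x, trail.get(i - 1).y, trail.get(i).x, trail.get(i).y);
  }
}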

We also discovered another common interest: the work of Francis Bacon. He used to work in a very interesting way, exploring the limits between rationality, automatic thinking, intuition and randomness. He explored how the gestures that produce marks on paintings come from all these concepts, and how they all work together to try to create a balanced result (which he also believed to be impossible to achieve).

Something that could be very interesting would be collecting other data, like brain activity, pulse, and the temperature of different parts of the brain, and comparing the initial state of these variables with their final state when the artwork is completed.

The Photo Query

The Photo Query is a program made for you to ask a question about a particular concern that is on your mind, and to get an answer in the form of photographs, to be interpreted by you and a reader you pick to have a conversation with. The idea is that the reader is someone you feel comfortable talking to.

The images come from a photo library with photos of many subjects and places. When you open the application, it randomly gives you three rows of images. The first represents the past, the second the present, and the third the future. The idea is that you pick one image from each row. Then you create a narrative that tries to explain your concern by connecting it with the image you chose, in terms of the past issues that could have led to the current situation. The reader you picked will then give you his or her thoughts about the situation. You then repeat the same process with the present row and the future row.
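A minimal sketch of that layout idea (my own illustration; the real project runs in p5.js, and the file names here are hypothetical): three rows of images are drawn, one each for past, present and future, filled with randomly picked photos from a pool named photo0.jpg … photo29.jpg.

int poolSize = 30;
int imagesPerRow = 5;
PImage[] pool = new PImage[poolSize];

void setup() {
  size(1000, 600);
  for (int i = 0; i < poolSize; i++) {
    pool[i] = loadImage("photo" + i + ".jpg"); // assumed to exist in the data folder
  }
  noLoop(); // deal the spread once, like laying out cards
}

void draw() {
  background(0);
  for (int row = 0; row < 3; row++) {        // 0 = past, 1 = present, 2 = future
    for (int col = 0; col < imagesPerRow; col++) {
      PImage pick = pool[int(random(poolSize))]; // the real program could also avoid repeats
      image(pick, 20 + col * 190, 20 + row * 190, 180, 180);
    }
  }
}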

After talking about the future, the reader will ask you why you didn't pick the other images in each row. The idea is to think about whether you are linking those images to things you are rejecting for some reason.

http://alpha.editor.p5js.org/nicosanin/sketches/By6nv2ivG

As you can see when running the program, you can only see a very limited number of images. That is a design problem caused by the p5 editor, which does not allow uploading more than that very limited number. Below you can see screenshots taken from the program running locally on the computer, so you can see how it is meant to work.

Why pictures?

Photography is a medium that uses the illusion of time. It collects footprints which, being just a selection of a particular moment and place, create the illusion of memory. These images have no meaning on their own. By themselves they are only the results of a technical and mechanical process; it's each viewer who makes sense of them. The potential “meaning of the images” is very powerful. Only the author or authors of the images know the context and process of how an image was created. To others, the pictures are a kind of riddle. When we see something, we try to make sense of it by making associations with our past experiences and knowledge. That's why one picture is potentially a billion pictures: it comes alive when someone looks at it and makes sense of it.

An interesting aspect of human beings is the tendency to create symbols and narratives out of images, and also to use images and words to share ideas and narratives with others. As an example, we can think of the use of photography as a documentation tool by professionals in disciplines such as ethnography, anthropology, science and history, among others… The use of photography is a fascinating and rich subject that could not be covered even in a thousand-page book! But what is important to mention here is the contradiction in photography between objectivity and subjectivity. In this case, we are using the potential of photography as a trigger or tool for subjective interpretation, and not as a document that pretends to give information or point out a true fact.

Another big improvement for a future version could become a key part of this idea: allowing the user to upload a photo archive of their own before doing the reading. Both unknown images and familiar images could be interesting ways to trigger our unconscious and imagination, starting a dialogue and analysis about a situation. It's something that would be interesting to try.

Finally, it's interesting for me to say that the process of collecting the images from the huge photo archive was an intense process from an emotional and rational point of view. The images triggered many thoughts, sensations and feelings that made me think about my past, my present and my future. They revealed how powerful and magical pictures can be; magical in a more open sense than just a positive one.