Monday, 6 December 2010
[Research] Ants
[Example of Use of Motion and Colour Detection]
Ants - Philip Worthington
Yoon: Another of his projects, called "Ants", is a generative, table-based installation about the life of ants. Digitised ants seek out objects on the table and connect each object with a path.
Yoon: In the interview (pictures and interview: http://www.worthersoriginal.com/viki/#page=shadowmonsters), Philip Worthington explains that the ants find both where the objects are and what colour they are, so they seek out the objects and build colourful corridors between them. It looks like a digital version of the toy "Ant Farm" (1968).
Yoon: Jose M. Hernandez describes this as an "underground architecture" in his book Antfarm Retrospective (2004). Worthington brings this characteristic of ants into his project, letting people build their own architecture out of their belongings with the help of the ants.
Technically, a first camera and the infrared table determine where each object is, and ants are sent towards it; a second camera then detects the object's colour, and the generative algorithm colours the corridor to match. Motion detection runs continuously so the system can track the objects as they move.
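The two-camera pipeline described above can be sketched roughly. Assuming the object-finding step boils down to "locate a bright blob in the infrared frame, then sample its colour from the second camera's frame" (all names and the threshold value here are my own assumptions, not Worthington's actual code):

```java
// Toy sketch of the two-camera pipeline: the IR image finds WHERE the
// object is, the colour image says WHAT colour it is. Hypothetical code,
// not Worthington's implementation.
public class AntsPipeline {
    // Find the centroid of bright pixels (the object) in the IR frame.
    // Returns {x, y}, or null if nothing exceeds the threshold.
    static int[] findObject(int[] irPixels, int w, int threshold) {
        long sx = 0, sy = 0, count = 0;
        for (int i = 0; i < irPixels.length; i++) {
            if (irPixels[i] > threshold) {
                sx += i % w;   // column of this pixel
                sy += i / w;   // row of this pixel
                count++;
            }
        }
        if (count == 0) return null;
        return new int[] { (int) (sx / count), (int) (sy / count) };
    }

    // Sample the object's colour from the second camera's frame,
    // assuming both frames share the same resolution and alignment.
    static int[] sampleColour(int[] rgbPixels, int w, int x, int y) {
        int p = rgbPixels[y * w + x];
        return new int[] { (p >> 16) & 0xff, (p >> 8) & 0xff, p & 0xff };
    }
}
```

The corridor-colouring step would then simply draw the ants' path using the sampled RGB value.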
[Development] Blob Test with Processing
Processing
[blob test]
Original open-source code: BlobDetection by v3ga.net
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
//BlobDetection by v3ga
//May 2005
//Processing(Beta) v0.85
//
// Adding edge lines on the image process in order to 'close' blobs
//
// ~~~~~~~~~~
// software :
// ~~~~~~~~~~
// - Super Fast Blur v1.1 by Mario Klingemann
// - BlobDetection library
//
// ~~~~~~~~~~
// hardware :
// ~~~~~~~~~~
// - Sony Eye Toy (Logitech)
//
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
import processing.video.*;
import blobDetection.*;
Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;
// ==================================================
// setup()
// ==================================================
void setup()
{
// Size of applet
size(640, 480);
// Capture
cam = new Capture(this, 40*4, 30*4, 15);
// BlobDetection
// img which will be sent to detection (a smaller copy of the cam frame);
img = new PImage(80,60);
theBlobDetection = new BlobDetection(img.width, img.height);
theBlobDetection.setPosDiscrimination(true);
theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f;
}
// ==================================================
// captureEvent()
// ==================================================
void captureEvent(Capture cam)
{
cam.read();
newFrame = true;
}
// ==================================================
// draw()
// ==================================================
void draw()
{
if (newFrame)
{
newFrame=false;
image(cam,0,0,width,height);
img.copy(cam, 0, 0, cam.width, cam.height,
0, 0, img.width, img.height);
fastblur(img, 2);
theBlobDetection.computeBlobs(img.pixels);
drawBlobsAndEdges(true,true);
}
}
// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
noFill();
Blob b;
EdgeVertex eA,eB;
for (int n=0 ; n<theBlobDetection.getBlobNb() ; n++)
{
  b=theBlobDetection.getBlob(n);
  if (b!=null)
  {
    // Edges
    if (drawEdges)
    {
      strokeWeight(3);
      stroke(0,255,0);
      for (int m=0;m<b.getEdgeNb();m++)
      {
        eA = b.getEdgeVertexA(m);
        eB = b.getEdgeVertexB(m);
        if (eA!=null && eB!=null)
          line(eA.x*width, eA.y*height, eB.x*width, eB.y*height);
      }
    }
    // Blobs
    if (drawBlobs)
    {
      strokeWeight(1);
      stroke(255,0,0);
      rect(b.xMin*width, b.yMin*height, b.w*width, b.h*height);
    }
  }
}
}
// ==================================================
void fastblur(PImage img,int radius)
{
  if (radius<1){
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;
  int vmin[] = new int[max(w,h)];
  int vmax[] = new int[max(w,h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0;i<256*div;i++){
    dv[i]=(i/div);
  }

  yw=yi=0;

  // Horizontal pass
  for (y=0;y<h;y++){
    rsum=gsum=bsum=0;
    for(i=-radius;i<=radius;i++){
      p=pix[yi+min(wm,max(i,0))];
      rsum+=(p & 0xff0000)>>16;
      gsum+=(p & 0x00ff00)>>8;
      bsum+= p & 0x0000ff;
    }
    for (x=0;x<w;x++){
      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];

      if(y==0){
        vmin[x]=min(x+radius+1,wm);
        vmax[x]=max(x-radius,0);
      }
      p1=pix[yw+vmin[x]];
      p2=pix[yw+vmax[x]];

      rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
      gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
      bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  // Vertical pass
  for (x=0;x<w;x++){
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for(i=-radius;i<=radius;i++){
      yi=max(0,yp)+x;
      rsum+=r[yi];
      gsum+=g[yi];
      bsum+=b[yi];
      yp+=w;
    }
    yi=x;
    for (y=0;y<h;y++){
      pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
      if(x==0){
        vmin[y]=min(y+radius+1,hm)*w;
        vmax[y]=max(y-radius,0)*w;
      }
      p1=x+vmin[y];
      p2=x+vmax[y];

      rsum+=r[p1]-r[p2];
      gsum+=g[p1]-g[p2];
      bsum+=b[p1]-b[p2];

      yi+=w;
    }
  }
}
Fixed code:
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
//BlobDetection by v3ga
//May 2005
//Processing(Beta) v0.85
//
// Adding edge lines on the image process in order to 'close' blobs
//
// ~~~~~~~~~~
// software :
// ~~~~~~~~~~
// - Super Fast Blur v1.1 by Mario Klingemann
// - BlobDetection library
//
// ~~~~~~~~~~
// hardware :
// ~~~~~~~~~~
// - Sony Eye Toy (Logitech)
//
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
import processing.video.*;
import blobDetection.*;
Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;
// ==================================================
// setup()
// ==================================================
void setup()
{
// Size of applet
size(640, 480);
// Capture
cam = new Capture(this, width, height, 30);
noStroke();
smooth();
// BlobDetection
// img which will be sent to detection (a smaller copy of the cam frame);
img = new PImage(80,60);
theBlobDetection = new BlobDetection(img.width, img.height);
theBlobDetection.setPosDiscrimination(true);
theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f;
}
// ==================================================
// captureEvent()
// ==================================================
void captureEvent(Capture cam)
{
cam.read();
newFrame = true;
}
// ==================================================
// draw()
// ==================================================
void draw()
{
if (newFrame)
{
newFrame=false;
image(cam,0,0,width,height);
img.copy(cam, 0, 0, cam.width, cam.height,
0, 0, img.width, img.height);
fastblur(img, 2);
theBlobDetection.computeBlobs(img.pixels);
drawBlobsAndEdges(false,true);
}
}
// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
fill(0,0,0);
Blob b;
EdgeVertex eA,eB;
for (int n=0 ; n<theBlobDetection.getBlobNb() ; n++)
{
  b=theBlobDetection.getBlob(n);
  if (b!=null)
  {
    // Edges
    if (drawEdges)
    {
      strokeWeight(3);
      stroke(174,25,36);
      for (int m=0;m<b.getEdgeNb();m++)
      {
        eA = b.getEdgeVertexA(m);
        eB = b.getEdgeVertexB(m);
        if (eA!=null && eB!=null)
          line(eA.x*width, eA.y*height, eB.x*width, eB.y*height);
      }
    }
    // Blobs
    if (drawBlobs)
    {
      strokeWeight(1);
      stroke(255,0,0);
      rect(b.xMin*width, b.yMin*height, b.w*width, b.h*height);
    }
  }
}
}
// ==================================================
void fastblur(PImage img,int radius)
{
  if (radius<1){
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;
  int vmin[] = new int[max(w,h)];
  int vmax[] = new int[max(w,h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0;i<256*div;i++){
    dv[i]=(i/div);
  }

  yw=yi=0;

  // Horizontal pass
  for (y=0;y<h;y++){
    rsum=gsum=bsum=0;
    for(i=-radius;i<=radius;i++){
      p=pix[yi+min(wm,max(i,0))];
      rsum+=(p & 0xff0000)>>16;
      gsum+=(p & 0x00ff00)>>8;
      bsum+= p & 0x0000ff;
    }
    for (x=0;x<w;x++){
      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];

      if(y==0){
        vmin[x]=min(x+radius+1,wm);
        vmax[x]=max(x-radius,0);
      }
      p1=pix[yw+vmin[x]];
      p2=pix[yw+vmax[x]];

      rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
      gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
      bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  // Vertical pass
  for (x=0;x<w;x++){
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for(i=-radius;i<=radius;i++){
      yi=max(0,yp)+x;
      rsum+=r[yi];
      gsum+=g[yi];
      bsum+=b[yi];
      yp+=w;
    }
    yi=x;
    for (y=0;y<h;y++){
      pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
      if(x==0){
        vmin[y]=min(y+radius+1,hm)*w;
        vmax[y]=max(y-radius,0)*w;
      }
      p1=x+vmin[y];
      p2=x+vmax[y];

      rsum+=r[p1]-r[p2];
      gsum+=g[p1]-g[p2];
      bsum+=b[p1]-b[p2];

      yi+=w;
    }
  }
}
Yoon: The webcam seems to lock onto a pitch-black area first and then adjusts the rest of the frame relative to the ambient light.
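If that observation is right, the camera's behaviour resembles simple black-level normalisation: find the darkest value in the frame and stretch everything relative to it. A toy version of that idea (my guess at the mechanism, not the Eye Toy's actual firmware):

```java
// Toy black-level stretch: pin the darkest pixel to 0 and rescale the
// rest of the range, mimicking how a webcam might auto-adjust exposure
// around a pitch-black region. Guesswork, not the camera's real algorithm.
public class BlackLevel {
    static int[] stretch(int[] gray) {
        int min = 255, max = 0;
        for (int v : gray) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        int[] out = new int[gray.length];
        if (max == min) return out; // flat frame: everything maps to 0
        for (int i = 0; i < gray.length; i++) {
            out[i] = (gray[i] - min) * 255 / (max - min);
        }
        return out;
    }
}
```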
Sunday, 5 December 2010
Thursday, 2 December 2010
[Research] Shadow Studies
Definition of Shadow
from Oxford Advanced Learner's Dictionary [http://www.oxfordadvancedlearnersdictionary.com/dictionary/shadow]
:dark shape
1 [countable] the dark shape that somebody/something's form makes on a surface, for example on the ground, when they are between the light and the surface
:darkness
2 [uncountable] (also shadows [plural]) darkness in a place or on something, especially so that you cannot easily see who or what is there
:influence
4 [singular] shadow of somebody/something the strong (usually bad) influence of somebody/something
:somebody that follows somebody
6 [countable] a person or an animal that follows somebody else all the time
:something not real
7 [countable] a thing that is not real or possible to obtain
Yoon: Judging by the definitions above, the meanings of shadow are generally negative. However, a shadow is a natural, scientific phenomenon. You cannot be separated from your own shadow: it is simply what appears on the side of an object opposite the light.
A shadow itself is flexible, but that flexibility depends mainly on the shape of the real object that casts it. Light can be used effectively to change the shape of a shadow, but only within limits.
Yoon: Light can widen, shorten, brighten or darken a shadow, but it cannot actually transform the underlying image. There are three elements to play with: the shadow itself, of course, the object, and the light.
Yoon: The illustration describes the relationship between light, object and shadow. A is the distance of the object (drawn as a circle in the middle), and B is the distance of the light from the ground where the object is set. When the light at B falls on the object, a shadow is automatically produced. "a" is the brightness of the shadow, and it varies with the distance A. Finally, as B changes, "b", the size of the shadow, changes with it. The illustration is a reminder of the importance of lighting: working with this simple relationship, many artists build their shadow pieces.
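The relationship the illustration hints at can be made concrete with similar triangles. Under one simple point-light model (my own assumption, not necessarily the illustration's exact setup): a light at height H and an object of height h standing at horizontal distance d from the point directly below the light cast a shadow of length s = h·d / (H − h), so lowering the light or moving it further away horizontally lengthens the shadow:

```java
// Point-light shadow length by similar triangles (toy model with assumed
// geometry): light at height H, object of height h at horizontal
// distance d from the point directly under the light.
public class ShadowGeometry {
    static double shadowLength(double lightHeight, double objectHeight, double distance) {
        if (lightHeight <= objectHeight) {
            // Light at or below the object's top: the shadow never ends.
            return Double.POSITIVE_INFINITY;
        }
        return objectHeight * distance / (lightHeight - objectHeight);
    }
}
```

For example, a 1 m object 3 m from a 4 m-high light casts a 1 m shadow; lower the light to 2 m and the shadow stretches to 3 m.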
Niloy J. Mitra, Mark Pauly
(Top) A sculpture casts three shadow poses of an animated cartoon character at a 60 degree angle. Substantial initial inconsistencies (shown as gray pixels) have been removed by the optimization. The red line indicates a topological surgery performed by the user on the deformation mesh to specify the shape semantics. The bottom row shows the sculpture from different viewpoints without the transparent casing.
Abstract:
To them, I said, the truth would be literally nothing but the shadows of the images.
- Plato, The Republic
: Shadow art is a unique form of sculptural art where the 2D shadows cast by a 3D sculpture are essential for the artistic effect. We introduce computational tools for the creation of shadow art and propose a design process where the user can directly specify the desired shadows by providing a set of binary images and corresponding projection information. Since multiple shadow images often contradict each other, we present a geometric optimization that computes a 3D shadow volume whose shadows best approximate the provided input images. Our analysis shows that this optimization is essential for obtaining physically realizable 3D sculptures. The resulting shadow volume can then be modified with a set of interactive editing tools that automatically respect the often intricate shadow constraints. We demonstrate the potential of our system with a number of complex 3D shadow art sculptures that go beyond what is seen in contemporary art pieces.
[Research] Night Light
night lights from thesystemis on Vimeo.
Hellicar & Lewis produced and art directed a large scale interactive installation for the rebrand of NZ Telecom at the Ferry Building, Auckland, New Zealand.
http://www.hellicarandlewis.com/2009/10/16/night-lights/
Yoon: Is a large-scale event always successful? I don't think so; scale is full of risk. Every installation has an appropriate size that it has to hit exactly, and this one matched its public scale really well. There are many good works in this vein that project realistic, gorgeous graphics onto public buildings.
RalphLauren.com celebrates 10 years of digital innovation with Ralph Lauren 4D.
A historic fusion of art, fashion & technology at the 888 Madison Ave store in NYC.
Even though it is billed as 4D, it was really just a very well-produced, screen-based 3D graphics piece: people stand on the opposite side of the road and watch it, and that is what it is. It was nevertheless successful, because at heart it was still a fashion show celebrating ten years of digital innovation at the brand. Still, "4D" was a bit exaggerated for this 3D projection event. If you watch the people dancing wildly in the Night Lights project, it feels more like 4D than the Ralph Lauren "4D" fashion show does.
YesYesNo: We used 3 different types of interaction - body interaction on the two stages, hand interaction above a light table, and phone interaction with the tracking of waving phones. There were 6 scenes, cycled every hour for the public.
http://yesyesno.com/night-lights
Yoon: Shadow: people dance with their shadows. The shadow reinforces their dancing; by making gestures, they can see which shadow is theirs. The shadow actually creates a connection between the private (the person) and the public (the building).
Feedback
Feedback - case study from onedotzero on Vimeo.
http://www.onedotzero.com/video/11160574/
Memo: "wanted to make people smile", "augment", "people performance based"
How do their works engage the audience?
: I think it is fun to play with. It looks playful at first sight, and people are actively encouraged to make some art with their performance. While playing, they do whatever the technology tempts them to do, and the result is published as photographs on Flickr. It is, so to speak, a process of making fun: first, an attractive set; second, an understandable interface that is easy to use; third, playfulness that produces personal meaning. People play with their own reflection through the webcam and the monitor. I think it is worth playing with.
[Research] Reactable
Yoon: This is an upgraded version of my proto idea. The installation leaves room for participants, which is very important in postmodern art; its minimalism also encourages people to join in as part of the piece. Beat-based music makes people move to the rhythm, and the rhythm is made by the participants themselves. Random people gather around the round table and make music instantly. There are objects containing sensors that interact with the table, which appears to be projected from underneath; each object has its own levels of sound and visuals. An object acts as a tangible control of a mixer or synthesiser: it can be moved anywhere on the table and turned like a dial. It suggests an image of the future club DJ. It is thoroughly playful and creative.
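The "object as a dial" idea described above is easy to sketch: a tracked object reports a rotation angle, and the table maps that angle onto a synth parameter. A minimal illustration (hypothetical names and ranges; the real Reactable tracks fiducial markers with reacTIVision and is far more elaborate):

```java
// Toy version of turning a tangible object like a dial: map a tracked
// rotation angle (radians) onto a parameter range, e.g. a filter cutoff
// in Hz. Hypothetical sketch, not the Reactable's actual code.
public class TangibleDial {
    static double angleToParam(double angle, double min, double max) {
        double twoPi = 2 * Math.PI;
        // Wrap the angle into [0, 2*pi) so a full turn sweeps the range once.
        double a = ((angle % twoPi) + twoPi) % twoPi;
        return min + (a / twoPi) * (max - min);
    }
}
```

Moving the object across the table could be mapped onto a second parameter the same way, which is roughly how position and rotation become two independent controls.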