Sunday, May 27, 2007

Presentation Screen



I have included a few images here of the pseudo wall that I made as my part of tomorrow's presentation.

Wednesday, May 23, 2007

What I have done this week

This week so far I have sent relevant information to Sandra for the poster.
I measured the height of the coffee table, then went into uni and found David, who helped me measure the monitor so we know where the holes for the frame and the web cam are needed. We decided that the height of the monitor on the coffee table was sufficient. A couple of telephone books could have been used for extra height, and the monitor cables we measured would have allowed it, but the need to ensure stability for the monitor took priority.

Before I cut the holes necessary for the photo frame, I need to make sure that Kathryn has not changed the positioning of the animation. I will take the photo frame and tape measure tomorrow to make sure we both have a clear understanding of where the animation will be positioned, and to make sure David, who will be coding, keeps it in the same position, or to find out exactly where it is going to be.

My shopping list included:
Foamcore
Photo Frame
Hot glue for hot glue gun
Blu-Tack
Double sided tape
Small easel for group name display

Decisions

We decided at our last group meeting to stick with our original breakdown of the workload, and leave each person to cover their own area.

The jobs for this final week are:
Sandra is doing the poster (and we are sending her relevant information to include, over and above what she has).
David is coding and adding the final animations.
Kathryn is finishing the finer points of the animation.
Petra is building the framework to display our presentation.

Update

This is an update on progress, or lack of it, over the past weeks. There are limitations with our chosen technology, and we have been trying to overcome these problems. Our initial trials allowed us to activate our Director projector using left mouse clicks, which were interpreted by our interactive animation (scripted into the program). The technologies appeared to be interacting with each other.

David told us that the hotkeys in the underpinning technology were not being activated by Flash. As I did not understand this comment, I have spent time trying different scripts in Director, as I do not know Flash. Hotkeys were added to the original trials for interactivity; these worked with manual key presses but would not be interpreted by Director as a hotkey, the same result David got with Flash!! (At least now I understand David's comment.)

So at the end of the day we have limited functionality between the two programs:
- run program (allows us to activate our program)
- left mouse clicks (via motion detection within a zone)

Functionality that could be superimposed, but not actually be working between the programs:
- Hotkey functionality (keyboard input or sensor pads)
- Sound output (from motion detection program)

Ideas that were discussed:
- using 2 monitors (this could be integrated, but the idea was discarded)

Friday, April 27, 2007

COMPARE AND CONTRAST: COMP 3000

SELECTED WORK FOR COMPARISON AND CONTRAST

Ipswich Art Gallery’s exhibition of Experimenta: Vanishing Point allowed for personal physical interaction with some of the art on display. Many of the installations were really interesting, and one could enjoy the art for art's sake, try to interpret the concepts the artist was portraying, or try to understand how the technology worked. As this semester's COMP 3000 group assignment is to design an interactive installation that may potentially be exhibited at the Ipswich Art Gallery, this provided a wonderful opportunity.

The artwork I have chosen to compare with our own project, and which worked well with our original concept using the “Dance Floor” ideas, was a projected image triggered by sensing movement. ‘Waterfall’ (“Duk-eum” by Ji-Hoon Byun, Korea, 2003 [4]) is an interactive work that provided enjoyment by allowing visitors to interact in their own way. A waterfall of light particles flows over the projected image of the user's body shape. The visitor walks between the projector and the projected image on the wall, casting a shadow on the wall. A web cam detects the change in the reading from the camera and transfers the information to a program. The program interprets the movements of the users in front of the camera and creates the illusion of water bouncing off the shadow. The sense of enjoyment of this piece, due to the interaction, is something we would like to create in our own ubiquitous installation.




Ji-Hoon Byun’s Duk-eum [4]

The installation allows the user freedom of choice in how they move, causing the interaction. The technology that detects the user interaction is obvious to participants, which may detract from the ubiquitous nature of the art, but this is overcome by the playful nature of the piece. Waterfall also allows multiple users to interact with it at once.





Interaction Technologies

A web cam (or video camera) detects motion (someone walking) between the camera and the projected image on the wall. The information is obtained and transferred to the computer application that is generating the particle waterfall. The particles are then manipulated based on the motion detected. This projected waterfall could be created using several different technologies: blob and edge detection [1], frame differencing [2], or hotspots with triggers [3], to name a few, could be used to activate the software projecting the image.
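To get my head around how something like this could be coded, here is a rough sketch in Processing (using the standard video library) of particles reacting to a dark shadow seen by a web cam. This is only my guess at the sort of technique involved, not Byun's actual implementation, and all the names and threshold values are my own:

// Rough sketch: a "waterfall" of particles reacting to a shadow seen by a web cam.
// My own guess at the technique, not the artist's code; names and values are assumptions.
import processing.video.*;

Capture cam;
int numParticles = 400;
float[] px = new float[numParticles];
float[] py = new float[numParticles];

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();                        // may not be needed in older library versions
  for (int i = 0; i < numParticles; i++) {
    px[i] = random(width);
    py[i] = random(height);
  }
}

void captureEvent(Capture c) {
  c.read();                           // grab the latest frame
}

void draw() {
  background(0);
  cam.loadPixels();
  stroke(150, 200, 255);
  for (int i = 0; i < numParticles; i++) {
    int x = constrain(int(px[i]), 0, width - 1);
    int y = constrain(int(py[i]) + 2, 0, height - 1);
    // If the pixel just below the particle is dark (a shadow), slide
    // sideways instead of falling through it.
    if (brightness(cam.pixels[y * width + x]) < 60) {
      px[i] += random(-3, 3);
    } else {
      py[i] += 2;                     // normal fall
    }
    if (py[i] > height) {             // recycle the particle at the top
      py[i] = 0;
      px[i] = random(width);
    }
    point(px[i], py[i]);
  }
}

A real version would need a proper particle system and smoothing, but it shows the basic loop: read a frame, test the pixel under each particle, and let the shadow redirect the fall.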


Constraints and Considerations of Waterfall Installation

Visitors may walk in from any direction.
Multiple visitors may use Waterfall at the same time.
Needs to be away from a main walkway, as this would detract from the experience.
Power must be available.
Power supply must be taped to the floor (safety) and not be intrusive.
Lighting must be dim to allow for intensity of projection and shadow creation.
Space is required for the distance of projection, allowing for suitable size.
A wall is needed that is suitable for projection, and free from other art works.
Increased sound volume would enhance the experience but may disrupt other exhibits.

Other Installations

Other installations or proposed installations using a waterfall theme that were investigated were: an interactive installation housed in a children's medical centre [5], Giant Waterfall, which is proposed for display on the outside wall of a multi-storey building [6], and a Trophy Waterfall, which uses audio to portray the waterfall experience [7].

Two other artworks from the exhibit that were of special interest to our project were “Shy Picture” [8] and “Tools of Life” [9]. Both of these were ubiquitous in nature, with interaction technologies that are less obtrusive than the camera system used in “Waterfall”. As the group researches more interactive art, our own design for the installation changes and evolves.

COMPARED WITH OUR PROJECT

Ubiquitous art is allowing for new forms of creativity in an artistic sense, as discussed in the paper ‘Networking with knobs and knats? Towards ubiquitous computing for artists’ [10], which notes that it gives rise to ‘unique design considerations’. Both ‘Waterfall’ and the Dance Floor Group's design fall into the area of ubiquitous art. Our design was originally to detect motion and respond with an output of projected imagery onto a wall, not unlike ‘Waterfall’ in concept. However, the floor area required to exhibit an artwork such as this, in an area that is not a main walkway and away from the traffic of people accessing the restrooms at the Ipswich Art Gallery, has made us reconsider our output projection.

‘Shy Picture’, another exhibit, used a monitor as a framed artwork, alleviating the need for space and avoiding the possibility of visitors accessing the toilets and disrupting the experience of the user or users. David, another group member, is discussing ‘Shy Picture’.

As discussed previously, the design and concepts are evolving as we research other artworks. We would like to extend the experience without losing the simplicity of the way the user would interact with the artwork.

Our original design consideration was to trigger an interaction that could be displayed on two separate walls. The concept behind this was that one user would be aware that they had triggered the interaction, while anyone observing the other projection would be unable to tell what had triggered the artwork's display.

However, having experienced Experimenta, the design and direction of our project has changed. Our design changed to using a computer screen within a frame, giving the illusion of animated art. To extend this experience, at our last group meeting a decision was made to use two monitors as separate framed artworks, side by side with a small space between them. Our animation is based on a small cartoon dog that interacts with people viewing the artwork. The dog will be able to move between the two picture-framed monitors. Man's best friend seems a natural choice, as Western culture's association with pets, specifically dogs, lends itself to an exhibit for either children or adults.

Our ideas for the animation are to have a sleeping dog in the corner of one monitor; on detection of motion, the dog would wake up and wag his tail, imitating a real dog. The dog may bark at times if people start to move away. There will be a ball on screen, which the dog will keep looking at, returning his look to a viewer to entice them to touch the screen where the ball is positioned. If motion is detected in that area and a viewer has been tempted to see what happens when they follow the visual clues, the ball will appear to be thrown onto the other screen, and the dog will chase after it, return the ball to the original screen and drop it where it came from. The dog will return to its sleeping position if no motion is detected after a given amount of time.
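To help me think about how this interaction would flow, here is a minimal sketch of the dog's behaviour as a simple state machine in Processing. The motionDetected and motionNearBall flags are stand-ins for whatever detection David ends up coding (here they are just faked with the mouse), and all the names and timings are my own assumptions:

// Minimal sketch of the dog's behaviour as a state machine.
// The motion flags are placeholders for the real detection code.
final int SLEEPING = 0, AWAKE = 1, FETCHING = 2;
int state = SLEEPING;
int lastMotionTime = 0;
int idleTimeout = 15000;     // ms with no motion before the dog sleeps again

void setup() {
  size(640, 240);            // two "framed" screens side by side, notionally
}

void draw() {
  boolean motionDetected = mousePressed;                  // stand-in trigger
  boolean motionNearBall = mousePressed && mouseX > width / 2;

  if (motionDetected) lastMotionTime = millis();

  switch (state) {
    case SLEEPING:
      if (motionDetected) state = AWAKE;                  // wake up, wag tail
      break;
    case AWAKE:
      if (motionNearBall) state = FETCHING;               // ball is "thrown"
      else if (millis() - lastMotionTime > idleTimeout) state = SLEEPING;
      break;
    case FETCHING:
      // play the chase-and-return animation to the end, then go back to
      // the awake loop (a trigger mid-sequence is ignored)
      state = AWAKE;
      break;
  }
  background(state == SLEEPING ? 20 : 200);               // placeholder visual
}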

Our research into how the artists had achieved the works on exhibit led us to more examples of how interactive art is being used. One example, processing.org [11], has a showcase of examples of ubiquitous computing, and with a little more research we have found a possible way to replicate ‘Waterfall’ using Processing and jMyron [12].

Constraints and Comparisons of Waterfall and the Dance Floor Group's design.

- Visitors may walk in from any direction, for both Waterfall and our Dog.
- Multiple visitors may use Waterfall at the same time. Multiple people may view our Dog, but once an interaction is triggered it will play to the end unless there is a trigger within the sequence of the movie being displayed at the time.
- ‘Waterfall’ needs to be away from a main walkway, as this would detract from the experience. Ours instead needs the camera or web cam to point at a smaller, controlled space, still out of the main traffic area, or the gallery may have a dog barking continuously.
- Power must be available for both.
- The power supply must be taped to the floor (for safety) and not be intrusive, for both. The Dance Floor Group's design requires that a web cam or camera be mounted well above the artwork so as not to be intrusive while still allowing for motion detection.
- Lighting must be dim for Waterfall to allow for intensity of projection and shadow creation; ours has no special lighting requirements. (The sensitivity to lighting can be controlled, so the flicker of fluorescent lighting does not have to be considered.) However, it would be better to ensure that there are no problems with reflections on the monitor screens.
- Space is required for Waterfall's projection distance, allowing for a suitable image size; this is no longer a consideration for our design.
- A wall that is suitable for projection, and free from other artworks, does not hinder us with the revised design.
- Increased sound volume would enhance the experience but may disrupt other exhibits; this applies to both Waterfall and ours.

The Dance Floor Group's ‘Dog’ and ‘Waterfall’ are both examples of ubiquitous art, and we will adopt a user-centred design approach, with evaluation of our design as an iterative step before the design is finalised. Both designs are simple in nature and allow the user to interact with the artwork; both aim at giving simple pleasure to the users or viewers. Both use motion detection and similar technologies, regardless of whether the output is projected or shown on a monitor replicating a painting, both cater for any age, and both should produce a smile with the interaction.


References:

[1] Town C., Pugh. D. (2007). ACTA Press. Retrieved 04 26, 2007, from [Abstract] Combining Contour, Edge & Blob Tracking: www.actapress.com/PaperInfo.aspx?PaperID=18828
[2] Koc, U.-V., & Ray Liu, K. (1994, 11 13). IEEE Xplore. Retrieved 04 26, 2007, from Login: www.ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=413784&isnumber=9214
[3] Omega Unfold. (2007, 04 09). Webcam Zone Trigger. Retrieved 04 26, 2007, from Omega Unfold: www.zonetrigger.com/index.html
[4] Presit, G. (2004, 01). Realtime. Retrieved 04 26, 2007, from Realtime Arts: www.realtimearts.net/feature/MAAP_in_Singapore:_GRAVITY/8506
[5] Forman C. (2005, 10 16). Media Artist. Retrieved 04 26, 2007, from Setpixel // Interactive Waterfall: www.setpixel.com/content/?ID=waterfall
[6] Knutt E. (2006, 11 03). Moving Buildings become a reality with telemetrics. Retrieved 04 26, 2007, from Building Design: http://www.bdonline.co.uk/story.asp?storyType=80&sectioncode=453&storyCode=3076661
[7] Youngs A. (2005, 09 10). Amy Youngs. Retrieved 04 26, 2007, from Interactive Sculptures, Installations & New Media Art Work: www.ylem.org/artists/ayoungs/trophy.html
[8] Shy Picture, David Maclend and Narinda Reeders, Australia, 2005
[9] Tools of Life, Minim++, Japan, 2001
[10] Burke J., Mendelowitz E., Kim J., Lorenzo R. (2002, 08 18). BurkeUCLA. Retrieved 04 26, 2007, from ubicomp2002.pdf (application/PDF object): www.comp.lancs.ac.uk/computing/users/dixa/conf/ubicomp2002-models/pdf/BurkeUCLA_ubicomp2002.pdf
[11] Reas C., Fry B. (2007). Exhibition. Retrieved 04 26, 2007, from Processing: www.processing.org
[12] Myron (WebCamXtra). (n.d.). Retrieved 04 26, 2007, from Computer vision & Well Connected Motion Tracking: www.webcamxtra.sourceforge.net

Tuesday, April 24, 2007

Processing and jMyron

Processing is now working as well, and by running jMyron the video camera is detecting motion and interacting with various code applets. I have not pursued saving as an application at this stage, as the applet will do what we want, though it produces some lag.

Webcam Xtra is another line I could pursue. At this stage I have installed it but am not following it through, as we have what we need to achieve our prototype; other forms of motion detection using frame differencing or blob detection will be reviewed as needed.

Webcam Software working

Quick note: the web cam software is working, detecting two hotspots which activate separate interactions while running an interactive animation. In theory this should allow for multiple interactions.

This helps my own understanding of how many ubiquitous forms of interactive installation art are created, although many wouldn't necessarily use this particular software.

Wednesday, April 18, 2007

Trial Software

Well, today's attempts centre around a downloaded security video surveillance program, as it uses "hotspots" to sense motion by way of either a video camera or webcam. If motion is detected, then a choice of outputs will be triggered. My thought is that if the images (choosing still shots, as video is too big a file to load) are stored in a folder, and that folder can be accessed, then it can be used for a simple "yes, there is more than one photo stored" check, or even a counter that, if triggered, will afford the opportunity to go into a function.

Am thinking that once that is managed, then by using 2 hotspots, if it is possible to differentiate which one was triggered (e.g. one sends an image and the other sends a sound), it could run two different sequences of animation.

The software program allows other applications to work with it by changing the preferences; however, it gives no more information than that. As David is going to use Flash, I am moving on to looking at Flash, as I have not used it before.
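In the meantime, here is a rough sketch of the same two-hotspot idea done directly in Processing with the standard video library: count how many pixels change between consecutive frames inside each rectangle and trigger a different action per zone. The zones, thresholds and actions are placeholders of my own, not what the surveillance program actually does:

// Two-hotspot motion trigger, sketched in Processing with the standard
// video library. Zone positions, thresholds and actions are placeholders.
import processing.video.*;

Capture cam;
PImage prev;
int[] zoneX = {20, 180};       // left edges of the two hotspots
int zoneY = 60, zoneW = 120, zoneH = 120;
int pixelThreshold = 40;       // per-pixel brightness change that counts
int countThreshold = 300;      // changed pixels needed to fire a zone

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
  prev = createImage(width, height, RGB);
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  image(cam, 0, 0);
  cam.loadPixels();
  prev.loadPixels();
  for (int z = 0; z < 2; z++) {
    int changed = 0;
    for (int y = zoneY; y < zoneY + zoneH; y++) {
      for (int x = zoneX[z]; x < zoneX[z] + zoneW; x++) {
        int i = y * width + x;
        float d = abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
        if (d > pixelThreshold) changed++;
      }
    }
    if (changed > countThreshold) {
      println(z == 0 ? "zone 1: play animation A" : "zone 2: play animation B");
    }
    noFill();
    rect(zoneX[z], zoneY, zoneW, zoneH);   // show the hotspot outlines
  }
  prev.copy(cam, 0, 0, width, height, 0, 0, width, height);
}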

Tuesday, April 17, 2007

processing.org

Lango pointed me in the direction of this site and it is really interesting. It allows you to produce small applications; I have only tried the examples at this point in time. It is really easy and worth a look. (PS: I downloaded it to a Mac.)

Am going to spend a couple of hours trying it out, and am going to make short notes. What I am trying to achieve is:
- get the camera recognised by the program
- capture one frame from the camera
- have the program identify the pixels of that frame
- capture a second frame
- have the application differentiate between the 2 frames
- set a ratio of difference between the 2 frames
- if the ratio is above the threshold, open an image

This should visually allow me to see if it is working.
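A first rough attempt at those steps in Processing, assuming the standard video library recognises the camera for step 1. The threshold values and the image file name are placeholders of my own:

// Whole-frame differencing following the steps listed above.
// Threshold and the "detected.jpg" file name are placeholders.
import processing.video.*;

Capture cam;
PImage prevFrame, popup;
float ratioThreshold = 0.05;   // fraction of pixels that must change (step 6)

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);    // step 1: camera recognised
  cam.start();
  prevFrame = createImage(width, height, RGB);
  popup = loadImage("detected.jpg");         // image to open on detection
}

void draw() {
  if (cam.available()) {
    cam.read();                              // steps 2 & 4: grab a frame
    cam.loadPixels();                        // step 3: look at its pixels
    prevFrame.loadPixels();
    int changed = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      // step 5: differentiate the two frames pixel by pixel
      if (abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i])) > 30) {
        changed++;
      }
    }
    float ratio = changed / float(cam.pixels.length);
    image(cam, 0, 0);
    if (ratio > ratioThreshold && popup != null) {
      image(popup, 0, 0);                    // step 7: open an image
    }
    prevFrame.copy(cam, 0, 0, width, height, 0, 0, width, height);
  }
}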

Quick notes on Processing as I read through:
- open source
- tested mostly on Mac and Windows platforms
- Processing code is converted to Java when "run" is enabled.


Exporting:
Can be exported as a Java applet to run on the web; can also be exported as an application to run on Windows, Linux and Mac.

- if it is exported as an applet, it can be run on the web. Now, while I don't want to use this on the web, the fact that it exports with an index page makes me think that I can modify the code further on, as I am comfortable with HTML. (An alternative to using Processing is Flash, which I don't know either, so David wants to use Flash, and as he is our coder, all is good.) I just want to know how to do it; I am allocated to research and documentation, and this allows us all to see alternate routes we could take.

- any changes to the index are lost when a sketch is exported, SO
- copy the applet html file from Processing, plus libraries and exports, to the root of the sketch folder

- applets are easier than applications, hence guess what I will trial
- applets have security restrictions built in: they can only connect to the computer running the program unless the applet is "signed"; to find out more on this I will need to go to Sun's documentation (will do this later)
- need to avoid people pressing Escape, as the key press will be passed through to the sketch


A Cartesian coordinate system is used, originating from the upper left corner, i.e. a coordinate of x and y values starts from the top left.
Processing also allows for 3D drawing; the z position at 0 is in the centre of the screen, and negative z values move the position of an object backwards.
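A tiny sketch to confirm the coordinate system for myself: (0,0) is the top-left corner, and in 3D a negative z should push a shape further back:

// Quick coordinate-system check (values chosen just for illustration).
void setup() {
  size(200, 200, P3D);
}

void draw() {
  background(255);
  ellipse(0, 0, 20, 20);        // appears in the top-left corner
  ellipse(100, 100, 20, 20);    // centre of the window
  translate(100, 100, -50);     // negative z pushes the next shape back
  box(20);
}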

3 levels of programming modes
Basic (beginners), Continuous (looping)

http://processing.org/exhibition/index.html
for images

MOVING OVER TO

http://webcamxtra.sourceforge.net/

COMPUTER VISION FOR ARTISTS
Shadow Monster
was the first example that I opened and it uses hands to create monsters on a screen; really cool, but takes a while to download!!
by Phil Worthington at the Royal College of Art
http://www.worthersoriginal.com/viki/#page=shadowmonsters


The Legible City by Dirk Groeneveld
Manhattan version (1989), Amsterdam version (1990), Karlsruhe version (1991)
This allows a visitor to ride a bike through a representation of a city.
This site allows you to watch high or low resolution videos.

http://www.jeffrey-shaw.net/html_main/show_work.php3?record_id=83


This is a really informative site, if only I had downloaded the webcam extra for Director!!!
http://sweb.cityu.edu.hk/sm3117/index.htm


OK, this is where my fun finished!! Downloaded jMyron for video capture etc. and I cannot find a way of copying it into the Processing folders.
Hmm.... hoping that someone out there is reading this and can help.
(PS this is a few hours' work)

Comparison of video shot boundary detection techniques

This paper [1] compares different techniques used for shot boundary detection by comparing different frames of video.

They state that the easiest way to detect whether two frames are significantly different is to count the number of pixels whose value changes by more than a set amount; however, they report that this is slow.

Shahraray [2] divided the images into 12 regions, and compared the differences.

The paper then goes on to discuss different tests that they performed; a summary of some of them is below:

1. Used histograms to detect differences in 64 bit grey scale frames. “A shot boundary is declared if the histogram difference between consecutive frames exceeds a threshold”.

2. Used regional histograms: each frame is divided into 16 blocks in a 4x4 pattern, and the histogram differences are computed for each region between consecutive frames. If the number of regions whose difference exceeds the difference threshold is greater than the count threshold, then a boundary is noted.

3. Motion-compensated pixel differences: again the screen is divided, this time into 3x4 blocks, using grey scale to detect differences in the pixels.

All the algorithms were implemented in C and run on Unix.

All input video was digitized at a size of 320x240 pixels at a frame rate of 30 fps using a DEC Alpha equipped with a J300 video board. The digitized video was stored as motion JPEG with a compression ratio of about 25 to 1, requiring about 1 GB of space to store 1 hour of video.

From this paper, using grey scale seems to be the way it has to be done; differences in the histogram would be noted and would begin the sequence of actions. The regional separations would allow for greater control if, for instance, we used the dog and wanted its line of vision to change, e.g. move its head in the direction someone moved. Due to the file size of ongoing recording of frame differences, I would think that we would only take into account any 2 frames at one time.
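To see how the histogram idea might translate to our situation, here is a rough Processing sketch comparing grey-scale histograms of two consecutive web cam frames. The bin count and threshold are placeholder values of my own, not the paper's:

// Grey-scale histogram comparison between two consecutive frames,
// sketched in Processing. Bin count and threshold are placeholders.
import processing.video.*;

Capture cam;
int[] prevHist = new int[64];
float threshold = 5000;        // histogram difference that counts as "changed"

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    int[] hist = new int[64];
    for (int i = 0; i < cam.pixels.length; i++) {
      int bin = int(brightness(cam.pixels[i])) / 4;   // 0-255 mapped to 64 bins
      hist[min(bin, 63)]++;
    }
    float diff = 0;
    for (int b = 0; b < 64; b++) {
      diff += abs(hist[b] - prevHist[b]);             // bin-by-bin difference
    }
    if (diff > threshold) {
      println("significant change between frames: " + diff);
    }
    prevHist = hist;
    image(cam, 0, 0);
  }
}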

[1] Berkeley Multimedia Research Center, Berkeley, CA, USA. Published September 1994.
http://www.bmrc.berkeley.edu
http://bmrc.berkeley.edu/research/publications/1996/133/shots.html#ab


[2] Shahraray, B., "Scene Change Detection and Content-Based Sampling of Video Sequences", in Digital Video Compression: Algorithms and Technologies, Arturo Rodriguez, Robert Safranek, Edward Delp, Editors, Proc. SPIE 2419, February, 1995, pp. 2-13.

Went on to look at Phidgets; at the moment their motion sensors are out of stock, but further reading showed that some web cams come with motion detection built into them.

http://computer.howstuffworks.com/webcam1.htm

Lost

OK, I have got myself lost with my own wonderful variation of naming for my notes!!! So bear with me, as some of this may be out of order compared with what I have been researching.

Sunday, April 15, 2007

Video Camera Tracking

This is more what we are looking for, and the reference of where it can be found is below:

Visual Intelligence: How We Create What We See
http://www.socsci.uci.edu/cogsci/personnel/hoffman/vi.html

Donald D. Hoffman
W. W. Norton & Company, Inc.
released October 1998
This book covers how we perceive visually.

___________________________________
http://www.tigoe.net/pcomp/videoTrack.shtml

There are two methods you'll commonly find in video tracking software: the zone approach and the blob approach. Software such as softVNS, Eric Singer's Cyclops, or cv.jit (a plugin for Jitter that affords video tracking) takes the zone approach. They map the video image into zones, and give you information about the amount of change in each zone from frame to frame. This is useful if your camera is in a fixed location and you want fixed zones that trigger activity. Eric has a good example on his site in which he uses Cyclops to play virtual drums. The zone approach makes it difficult to track objects across an image, however. TrackThemColors and Myron are examples of the blob approach, in that they return information about unique blobs within the image, making it easier to track an object moving across an image.

At the most basic level, a computer can tell you a pixel's position, and its color (if you are using a color camera). From those facts, other information can be determined:

One simple way of getting consistent tracking is to reduce the amount of information the computer has to track. For example, if the camera is equipped with an infrared filter, it will see only infrared light. This is very useful, since incandescent sources (lightbulbs with filaments) give off infrared, whereas fluorescent sources don't. Furthermore, the human body doesn't give off infrared light either. This is also useful for tracking in front of a projection, since the image from most LCD projectors contains no infrared light.

When considering where to position the camera, consider what information you want to track. For example, if you want to track a viewer's motion in two dimensions across a floor, then positioning a camera in front of the viewer may not be the best choice. Consider ways of positioning the camera overhead, or underneath the viewer.

Often it is useful to put the tracking camera behind the projection surface, and use a translucent screen, and track what changes on the surface of the screen. This way, the viewer can "draw" with light or darkness on the screen.
___________________________________________________
http://itp.nyu.edu/~dbo3/cgi-bin/wiki.cgi?ProcVid

code samples for motion tracking using java
not sure if this is exactly what we are looking for, but it is a start towards the idea.

How Stuff Works

Different ways to create motion sensors:

LIGHT SENSORS
A beam of light crosses a space, and is detected by a photo sensor which rings a bell if the light source is interrupted.

RADAR
http://www.howstuffworks.com/radar.htm
Echo and Doppler Shift



An echo is created by sound bouncing off a surface and returning in the direction it came. The time of the return of the echo is determined by the distance of the surface creating the echo.




Doppler shift is the difference in the tone of a sound as it approaches you and after it has passed you, e.g. a car or even a plane.
You can combine echo and Doppler shift, with the sound bouncing off an approaching object, to create motion sensors. The echo of a sound can determine how far away something is, and the Doppler shift of the echo determines how fast something is going.
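Quick arithmetic to make the echo idea concrete (example numbers of my own): distance is half the round trip, so a ping that takes 10 ms to come back means the object is roughly 1.7 metres away.

// Echo distance from round-trip time (example values only).
void setup() {
  float speedOfSound = 343.0;          // metres per second in air (approx.)
  float roundTripSeconds = 0.01;       // example echo time: 10 ms
  float distance = speedOfSound * roundTripSeconds / 2;   // there and back
  println("object is roughly " + distance + " metres away");  // ~1.7 m
}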



Ultrasound
Ultrasound is used instead of sound radar so as not to disrupt everyday uses such as police radar, satellite mapping etc. (it is often used in medical procedures as well). Sonar is "sound radar", but sound does not travel very far and everyone can hear sounds; submarines use sound radar, or sonar.
Radar uses radio waves instead of sound. Radio waves travel a long way, are invisible to humans, and are easy to detect even when faint.
Radio waves transmit data invisibly, through the use of wireless technologies, using radio waves to communicate.
http://computer.howstuffworks.com/question238.htm


About radio-controlled toys:
Basic principles
The transmitter sends radio waves to the receiver (it sends a signal over a frequency, using a power source such as a 9-volt battery for power and transmission). Basic consumer items use 27 MHz or 49 MHz, e.g. RC toys, garage door openers etc. More sophisticated RC model planes use 72 or 75 MHz. The idea with this is that two can be operated at the same time without interference between the two transmitters.
The receiver relies on an antenna and circuit board; it receives signals from the transmitter and activates motors.
Motors can cause wheels to turn, lights to flash etc.
Power source, e.g. batteries.
http://www.howstuffworks.com/rc-toy.htm
Remote-control toys have a wire connecting the controller and the toy.
Radio control is always wireless.



Note on Radio Frequencies:
A radio wave is an electromagnetic wave propagated by an antenna. Radio waves have different frequencies, and by tuning a receiver to a specific frequency you can pick up a specific signal.
http://electronics

Tuesday, April 10, 2007

Visit to Ipswich Art Gallery

Tuesday April 10, 2007

Today David and I met up at the Ipswich Art Gallery to look at the interactive Exhibition called “EXPERIMENTA VANISHING POINT”.

Spotter
– Hiraki Sawa, Japan, 2002 (represented by Ota Fine Arts)

This exhibit shows people observing planes, which are commonplace in today's society, but here they are presented as wild creatures trapped in a domestic space.
The technology behind this appears to be overlays of video, with manipulation of the size of the planes to fit within domestic spaces.

1 Parking
111 Crossing
- June Bum Park, Korea, 2002

This exhibit has two projected images on the wall. The display shows our everyday world of cars, which are being manipulated by a person's hands in a car park. The hands carefully place the cars and the people walking in the space.
The technology again appears to be video of a car park taken from a higher position, so that we are looking down on the car park. The hands are cleverly filmed as if they are positioning the cars.
What also held interest for me is the way this display had built boxes for the projectors that would also allow a computer tower to be hidden inside. The power cable was taped to the floor and along the skirting; different coloured tape was used and was very unobtrusive. The box was fully enclosed, with an air vent at the back for the heat from the projector to escape.

The Shy Picture
- David Maclend and Narinda Reeders, Australia, 2005

The display is a screen within a frame, set on a false wall positioned out from the real wall, allowing the computer to be hidden. Mounted above is a video camera, which acts as a motion sensor. When someone comes within range, the people in the picture run away and hide, then pop their heads out to see if you are still there.
The technologies here appeared to be an LCD screen, a computer, a video camera, customised software, and a video that loops.

Journey to the Moon
- William Kentridge, South Africa, 2003

William Kentridge's hand-drawn charcoal artwork is combined with video in the manner of the experimental filmmaker Georges Méliès to provide an eerie film to experience.
The technology integrates drawings, which have been animated, with video and post-production techniques to produce this work, which was projected onto a wall.

Waterfall
- "Duk-eum", Ji-Hoon Byun, Korea, 2003

A waterfall of light particles is activated by standing in front of the projection; your shadow produces the waterfall of particles flowing around it. Moving your arms or other body parts causes a change in the particles.
The technology for this relies on tracking the shadow and movement within the projected image.

Front Porch
- William Wegman, USA, 1999

This exhibit is of a dog's head on a man's body on the front porch, reading a newspaper. The dog appears to be more interested in looking around than actually reading the paper. This is a well-produced idea.


Another exhibit displayed several different concepts on the one screen, following each other in succession. They were:

“Some want it all”, which was of a sparkler running along a black wall and appeared to disappear into a person’s ear and come out of the other and continue on.

“15 Excavator”, which is video of an excavator being manipulated and positioned by hands, similar to a previous exhibit.

“Dog Duet” is also an exhibit by William Wegman (1974), in which two very well-trained dogs watch something moving out of the range of the camera, intensely following the movement.

“Elevator No. 4” is a video that portrays an elevator that opens like a zip, and the people entering or in the elevator are distorted and pixelated. An interesting exhibit.

“Line Up” is a purely textual exhibit where the size and speed of the text is consistent with the emotion being written about.

Tools of Life, Minim++, Japan, 2001
This exhibit is of a table with various household items standing upright; if you touch one, various projected images are viewed on the table.
The technologies involved with this project appear to be touch sensors, projection and customised code for the interaction. This exhibit also allowed multiple participants to use the space at one time.

Paraphysical Man, Shaun Gladwell, Australia
This exhibit was of a break-dancer against a wall. His reflection was above him, but it appears that he is upright.
The technologies involved with this are video that has been manipulated and projected upside down to give the illusion of him floating in the air.

Train No 8
This one I am really not sure how they achieved the effect. I stood and watched for quite a while to try to work out how many layers of video had been used. The different layers moved and were warped at different speeds. I realised that the foreground was probably shot from a train, but that is about as far as I could understand what I was watching. Obviously a post-production video editing program has been used, but I will need to ask others how it was done.

House 11, the Great Australian Basin, Pennsylvania, USA, 2003
This exhibit was fascinating, as both David and I were looking to see where the video looped and were unable to pick it. We watched for quite a while before the video finished and restarted, so there was no loop point to be found after all. Really interesting concept.

If I have misspelt anyone’s names or the details are not exact, I apologise sincerely and please leave a comment and I will change my mistake. I am having problems reading my notes.

This was a great exhibit and it was really interesting to see how much of our Studio work is aligned with the work being exhibited at the Experimenta Vanishing Point Exhibition.

Monday, April 2, 2007

Week 6

Monday there was an excursion to the Ipswich Art Gallery, which sadly I missed, as my son was hurt over the weekend and I did not know about it until after work on the Sunday. It would have been wonderful to go and see how artists have produced successful installation art pieces; instead I spent the Monday taking him to the dentist, where we had to wait to be seen, followed by head x-rays and CT scans to determine if he had fractured his cheekbone and broken his nose.

This week I have not done a lot for Studio other than cruise around the web looking for examples and possible technologies that could be useful. The last couple of weeks have been allocated to Studio and HCI submissions, so this week needed to be for Advanced Animation.

Week 5

Presentation and Proposal Document this week!! This went reasonably well, despite the obvious nerves. The 3D that David created was made that much better by the animated dog that Kathryn added. Sandra and I did the finishing of the Proposal document and the referencing section.

Week 4

This week has been dedicated to finding what technologies might fit the ideas that are being put forward. Next week is the proposal/presentation of our ideas, and we have divided the workload: Sandra, Kathryn and Petra are collating the documentation, which includes scanning the idea sketches, cleaning these up a little with Photoshop, and writing the proposal document. David, who does not feel that documentation is one of his strengths, has been busy preparing a great 3D visualisation of the idea to give a clear understanding of what we would like to achieve, as none of us feels particularly strong in oral presentation. We decided that each of us would take one section for the presentation and interrupt if we felt any relevant details were being missed.

Monday, March 19, 2007

Friday 20th March

Today was the last lecture and tutorial day before our Proposal and Presentation are due. The mock-up of the layout of the proposal gave us a good start to the group meeting, and it was decided that the questionnaire for the Art Gallery was not needed, as the space there is not really part of the assignment.

We are all going to work on the report, and on Friday we will put it together. We girls are doing the report for submission, while David is doing a 3D visualisation or animation of our idea for the project.

Technologies Available

Have spent a few hours looking at sensor technologies that are within a budget and perhaps could enhance our ideas.
Have been reading about the still-developing Sun SPOT. Had a read about Ezio, seeing if there have been any further developments with this product. Phidgets appear to be easy to use, relatively inexpensive, and are probably the option that will get the most research to find out if we can fit them to our needs.

Would have been nice to have the wireless sensors available, but it appears that this may not be possible.

Links of some of the sites:

Web cam: http://www.active-robots.com/products/phidgets/sensor-1111-details.shtml

Phidgets: http://www.active-robots.com/products/phidgets/index.shtml

Ezio: http://userwww.sfsu.edu/~infoarts/cdmain/sensor/a511.sensor.syllabus.html

Sun Spots: http://www.sun.com/emrkt/educonnection/newsletter/0306insidetech.html

Sunday, March 18, 2007

Ideas

Context of the original similarities of ideas.
- floor-activated sensor mats for input
- feedback is either in the form of visuals (projection) or audio (footsteps or giggles)

Potential Ideas Discussed
- Webcam to put a person into a virtual reality in some form.
- Rear projection for the (fear) factor visualisation: snakes, or possibly Aboriginal-style art that is animated.
- Light-activated projection along the opposite wall (a little like the tiger or leopard that was projected along buildings as a car drove past), like breaking a laser beam in shops.
- Dance floor sensors under the floor to activate a projection (maybe more suitable for the children's area).

When trying to think of ideas for the project, it occurred to me that if we were designing a project for the Ipswich Art Gallery, one thing our ideas did not address was the fact that there are two entry points to the adult area. Regardless of the fact that the greater percentage of visitors would probably use one door, I think we should design the project to be activated, or at least give some feedback, regardless of which door is used.

IDEA
If someone uses the arched doorways, then the mats would activate the rear-screen projection which is currently available at the gallery. If the other doorway was used, then a sensor mat would activate audio feedback, as the screen is not immediately visible from this entrance. This would make the visitor aware that something had been activated, even if the reason was not apparent.

Monday, March 5, 2007

List of 5 Ideas-Alternate Route Art




ALTERNATE ROUTE ART
The idea is to have a map of a section of road from Ipswich to the Gateway Bridge. People will be able to choose where an accident has happened and place a broken car on the spot, and alternative routes will be visualised using video.

List of 5 Ideas-Self Portrait




SELF PORTRAIT ART
This concept allows children to drag and drop pre-drawn shapes representational of their features, and to use the graphics pad to write their names. They can use a sensor button to activate a printer, which provides them with a black and white print-out of their work, which they can then go on to colour in.

List of 5 Ideas-Giggle Hopscotch




GIGGLE HOPSCOTCH
This concept allows children to play hopscotch on mats which have sensors underneath. These will produce a childlike giggle when the child hops onto the right spot.

List of 5 Ideas-Monster Doll Art




MONSTER DOLL ART
The idea is to have a doll which is fitted with a webcam and a speaker for a mouth. There could be a sensor in the foot which provides added interaction, e.g. “No one will squeeze my foot and make it feel better!!”

List of 5 Ideas-Blow Dry Art


BLOW DRY ART
The idea is to have a person pick up a blow dryer and point it at a blank canvas which has heat sensors behind it. These will activate various images projected onto a larger canvas behind the easel, a few feet away.

Week 2

THINGS MENTIONED IN LECTURE:
spelling, grammar, following criteria more carefully, consideration of ethics within the proposed projects, and informed consent.
(Stuffed up here myself: the PDF only sent the scanned images and no text! Apologised for this and came home to have a look to see what I had done wrong. Still not sure; hopefully someone will be able to show me what I did wrong in saving a Photoshop file as a PDF.)

DELEGATION OF DUTIES WITHIN THE GROUP:
David feels his strongest areas are code and graphics.
Sandra feels her strongest areas are graphics, Photoshop and 3D.
Petra (me): I feel that documentation and graphics are perhaps my strongest areas. I put my hand up to make the sensors for the floor, and to set up a collaborative blog space, which I have done and which can be found at

psdspace.blogspot.com

COLLABORATION
We have exchanged phone numbers, MSN names, and email addresses so we have several means of exchanging ideas besides the collaborative blog. We will continue with our own private blogs as well.

BRAINSTORMING
We read the ideas that we were given in a folder and looked for common links within them. After we had brainstormed these ideas as well as our own further thoughts, we decided to research further before deciding definitely on one idea. I misunderstood some of the ideas, and on rereading them they make a lot more sense to me now.

KEYWORDS FROM BRAINSTORMING
(floor, sensor, lights, laser, music or audio, output and projection.)

Friday, March 2, 2007

Constraints Example 10

Audio Spotlight

.. specialised area to activate the audio output
.. there are very few constraints with the implementation of this style of interaction
.. low level of noise activity within the area
.. prevention of accidental activation by other exhibits

Thursday, March 1, 2007

Constraints Example 9

Sensitive Floor

.. financial consideration (expensive to produce)
.. time limitations
.. technical ability
.. is a commercial product already (trademark?)

Constraints Example 8

River Glow

.. we don't have a river in the gallery
.. safety where water is used, especially with young children about
.. the idea could be utilized in a different context

Constraints Example 7

Kinetic Museum Theatre

.. I think this idea could be used and integrated into the Ipswich Art Gallery. I could see very few constraints with this project and it would fit into the children's area of the gallery.
.. The puppets could be a little fragile in the hands of an under 5yr old visitor.

Constraints Example 6

Remapping the Universe

.. technology not developed
.. lighting would be better dimmed
.. although this is not currently possible, the idea could be integrated in different ways

Constraints Example 5

Bousy

.. requires dim lighting
.. requires smooth polished flooring to allow for movement
.. requires space for movement

Constraints Example 4

Robotic Chair

.. sufficient space for the chair to dismantle
.. fencing to keep visitors at a safe distance
.. the sound for this exhibit was loud (possibly disruptive in a gallery situation)
.. smooth flooring

Constraints Example 3

The Table - Childhood

.. requires an enclosed area
.. sufficient space to move
.. sufficient space for visitors to move and negotiate with the table
.. non carpeted area

Constraints Example 2

Robotic Eyes

.. the eyes need to face one direction (so position against a wall)
.. the eyes have had an alarming reaction from some viewers
.. high level of finish required to produce a polished product

Constraints Example 1

Touch Sensitive Apparel

.. one size fits all
.. lightweight
.. would there be problems with regard to workplace health and safety (many people putting the garment on)
.. the interaction with this as an art piece is not the best choice.

http://architectradure.blogspot.com/2007/01/touch-sensitive-apparel.html

Wednesday, February 28, 2007

Link to Examples

More information on each of the examples below can be found by clicking on the green Example Number headings. While not all of these examples are "Ubiquitous Computing Art", the technologies could be utilised in Art.

Example 10

Audio Spotlight


This technology provides sound only when it comes in contact with a solid surface, like a person. This personalises the experience and meets the ubiquitous computing theme, and it could be integrated into an art form.

Constraints: The area required for this technology would need some traffic, but not necessarily too much.

Example 9

Sensitive Floor IO Agency


This project uses an interactive video floor projection that reacts to the way people walk on it.

Constraints: the floor as it is could be too expensive to implement. The idea could be utilised.

Example 8

River Glow


This project produces lighting colours on a river depending on the condition of the water, allowing a potential swimmer to ascertain whether conditions are safe.

http://www.next2006.dk/2006/en/eContent.php This link takes you to many interesting exhibits of ubiquitous computing, although not strictly falling into an art category.





Constraints: we don't have a river, but the idea could be used, although some safety measures would be needed, as small children may be tempted to enter the water.