Cinematic Timepiece


Time is our measure of a constant beat. We use seconds, minutes, hours, days, weeks, months, years, decades, centuries, etc. But what if we measured time against rituals, chores, tasks, stories, and narratives? How can we use our memory, our predictions, and familiar and unfamiliar narratives to tell time?

As a child, I remember using the length of songs as a way to measure how much time was left during a trip. A song was an appropriate period to easily multiply to get a grasp of any larger measure, like the time left until we arrived at our grandmother’s place. The length of a song was also a measure I could digest and understand in an instant.

The first iteration of Cinematic Timepiece consists of 5 video loops playing at 5 different speeds on a single screen. The video is of a person coloring in a large circle on a wall.

The frame furthest to the right is a video loop that completes a cycle in one minute. The video to the left of the minute loop completes its cycle in one hour. The next completes in a day, then a month, then a year.

Through future iterations, we intend to experiment with different narratives and rituals, captured as video loops to be read as measures of time.

The software was written in OpenFrameworks for a single screen, and will be expanded in the future to multiple screens as a standalone piece of hardware.
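The app itself isn’t reproduced here, but the core idea, one source clip played five times at nested speeds, can be sketched in OpenFrameworks roughly like this. This is a minimal sketch, not the downloadable app: it assumes the source loop runs exactly one minute at normal speed, and the file name and the 30-day month are placeholder assumptions.

//Cinematic Timepiece speed sketch (illustrative only)
//assumes "coloring.mov" plays through in exactly one minute at speed 1.0
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
  ofVideoPlayer loops[5];
  //cycle lengths: one minute, hour, day, month (30 days), year
  const float speeds[5] = { 1.0f,
                            1.0f / 60.0f,
                            1.0f / (60.0f * 24.0f),
                            1.0f / (60.0f * 24.0f * 30.0f),
                            1.0f / (60.0f * 24.0f * 365.0f) };

  void setup() {
    for (int i = 0; i < 5; i++) {
      loops[i].load("coloring.mov");      //same clip loaded five times
      loops[i].setLoopState(OF_LOOP_NORMAL);
      loops[i].setSpeed(speeds[i]);       //slower playback = longer cycle
      loops[i].play();
    }
  }

  void update() {
    for (int i = 0; i < 5; i++) loops[i].update();
  }

  void draw() {
    float w = ofGetWidth() / 5.0f;
    //minute loop drawn furthest to the right, year loop furthest to the left
    for (int i = 0; i < 5; i++) {
      loops[i].draw(ofGetWidth() - (i + 1) * w, 0, w, ofGetHeight());
    }
  }
};

int main() {
  ofSetupOpenGL(1280, 400, OF_WINDOW);
  ofRunApp(new ofApp());
}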

Cinematic Timepiece is being developed in collaboration with Taylor Levy.

Download the fullscreen app version [http://drop.io/cinematicTimepiece#]

Thermochromic Slow Resolution Display

A slow resolution display made out of standard lightbulbs and thermochromic paint.

This display lives at the intersection of digital control and analog output. A matrix of lightbulbs painted with thermochromic paint allows an image to appear and dissipate in step with the temperature of each bulb.

The overall effect defies our ordinary understanding of materials and time within technological systems.
In this case, lightbulbs are used for heat and not light. And the refresh rate of an image is constrained to the time it takes for the material of the screen to change temperature.

More on thermochromism here.

Here, we’re using 40 watt bulbs. They take a few seconds to warm up and turn white and a couple minutes to cool down and fade back to black.
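On the control side, the matrix can be driven like a very slow frame buffer: each bulb is switched on or off, and a frame is held long enough for the paint to react. Here’s a hedged Arduino sketch of that idea; the grid size, pin mapping, and timings are assumptions, not the actual build, and the bulbs would be switched through relays or solid-state relays rather than directly from the pins.

//Hypothetical thermochromic matrix driver (illustrative only)
//each digital pin switches one bulb through a relay or solid-state relay
const int ROWS = 3;
const int COLS = 3;
const int bulbPin[ROWS][COLS] = {
  {2, 3, 4},
  {5, 6, 7},
  {8, 9, 10}
};

//one frame: 1 = heat the bulb (paint turns white), 0 = let it cool (black)
const byte frame[ROWS][COLS] = {
  {1, 0, 1},
  {0, 1, 0},
  {1, 0, 1}
};

const unsigned long WARM_UP_MS   = 10000UL;   //a few seconds for the paint to turn white
const unsigned long COOL_DOWN_MS = 120000UL;  //a couple of minutes to fade back to black

void setup() {
  for (int r = 0; r < ROWS; r++)
    for (int c = 0; c < COLS; c++)
      pinMode(bulbPin[r][c], OUTPUT);
}

void loop() {
  //write the frame
  for (int r = 0; r < ROWS; r++)
    for (int c = 0; c < COLS; c++)
      digitalWrite(bulbPin[r][c], frame[r][c] ? HIGH : LOW);
  delay(WARM_UP_MS);

  //clear everything and wait for the paint to fade back
  for (int r = 0; r < ROWS; r++)
    for (int c = 0; c < COLS; c++)
      digitalWrite(bulbPin[r][c], LOW);
  delay(COOL_DOWN_MS);
}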

Built by Alex Abreu, Taylor Levy, and Che-Wei Wang

Thanks to the dozens and dozens of you who helped make this project possible!


Thermochromic Display from che-wei wang on Vimeo.


Tilt Sensor to Live Web

Tilting my laptop changes the background color of a div element on a webpage. [Live Tilt]

An applet reads tilt sensor values from my laptop, posts them to sensorbase.org, then a webpage running ajax reads the sensor values and changes a background color. LIVE.


It’s super slow right now because I can’t figure out how to get sensorbase to only send me the latest value in the dataset, so I have to poll through a ton of values until I reach the end of the set.
The last polled id number is written to and read from a txt file to keep count. Live tilt values now update the web color every 500 milliseconds.

P.Life


P.Life is a large-scale interactive screen designed for the IAC’s 120′-wide video wall. In the world of P.Life, Ps run around growing, living, and dying, as the landscape continuously changes, creating unexpected situations that challenge their existence.


Scenario

Screen fades from black to dawn and rising sun along a horizon. The bottom third of the screen shows a section through the landscape cutting through underground pipes, tunnels, reservoirs, etc. Towards the top the surface of the landscape is visible as it fades and blurs into the horizon and sky.
A few Ps wander around the flat landscape. A number appears on screen for participants to send an SMS message to with their name. As participants send SMS messages, more groups of Ps appear on screen, one group per SMS, and wander across the landscape. The landscape begins to undulate as the audience interacts with the screen, creating hills, valleys, lakes, and cliffs. Ps running across the landscape fall to their death as the ground beneath their feet drops, or ride down the side of a hill as it moves across the screen like a wave. Ps that fall to their death slowly sink into the ground and become fertilizer for plant-life, which is then eaten by other families of Ps, allowing them to multiply.


Features
SMS listener to make new families of Ps
An array of IP cameras to transmit video for screen interaction
Background subtraction to capture the audience’s gestures (a minimal sketch follows this list)
or OpenCV with blob detection or face detection to capture the audience’s gestures
or IR sensors to capture the audience’s gestures
or lasers and photoresistors to capture the audience’s gestures
Multi-channel audio triggers for events in P-Life based on location
Background elements and landscape speed through sunrise to sunset in a 3 minute sequence
Ps with lifelike motion as they walk, jump, fall, grow, climb, swim, drown, die, stumble, flip, run, etc.
pixelated stick figures? large head?
Simple 8bit game-like soundtrack
Various plant-life grown from dead Ps
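As a rough sense of how the camera input could work, here is a minimal background-subtraction sketch in the style of the standard OpenFrameworks/ofxOpenCv example. The camera resolution, threshold, and blob sizes are placeholders; the piece may end up using any of the capture methods listed above.

//Minimal background subtraction sketch (illustrative only)
//grab a reference frame of the empty space, then threshold the difference
//against the live camera to find the audience as blobs
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
  ofVideoGrabber cam;
  ofxCvColorImage colorImg;
  ofxCvGrayscaleImage grayImg, grayBg, grayDiff;
  ofxCvContourFinder contours;
  bool learnBackground = true;

  void setup() {
    cam.setup(320, 240);
    colorImg.allocate(320, 240);
    grayImg.allocate(320, 240);
    grayBg.allocate(320, 240);
    grayDiff.allocate(320, 240);
  }

  void update() {
    cam.update();
    if (!cam.isFrameNew()) return;
    colorImg.setFromPixels(cam.getPixels());
    grayImg = colorImg;
    if (learnBackground) { grayBg = grayImg; learnBackground = false; }
    grayDiff.absDiff(grayBg, grayImg);   //what changed since the empty frame
    grayDiff.threshold(40);              //placeholder threshold
    //blobs become gesture input that pushes the landscape around
    contours.findContours(grayDiff, 200, 320 * 240 / 2, 10, false);
  }

  void draw() {
    grayDiff.draw(0, 0);
    contours.draw(0, 0);
  }

  void keyPressed(int key) { if (key == ' ') learnBackground = true; }  //relearn the background
};

int main() {
  ofSetupOpenGL(320, 240, OF_WINDOW);
  ofRunApp(new ofApp());
}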

Precedents
Lemmings, N for Ninja, Funky Forest, Big Shadow, eBoy, Habbo

Technical Requirements
IP camera array
Multi-channel audio output

Dreyfuss Bluetooth Handset


This is a one-off hack to retrofit a genuine Western Electric Dreyfuss Telephone Handset into a full-fledged Bluetooth handset. A single button at the center of the mouthpiece controls all the functions (pairing, answering calls, etc.), while a blue and red LED indicator glows from within. The handset recharges via USB and lasts 6 hours in active talk time and 110 hours in standby mode.

Feedback Playback 2


FeedBack PlayBack is a dynamic film re-editing and viewing system. The user’s physical state determines the visceral quality of the scenes displayed; immediate reactions to the scenes feed back to generate a cinematic crescendo or a lull. We use material that is rigorously narrative, formulaic, and plentiful: the action movie series Die Hard, starring Bruce Willis. A narrative sequence key breaks any given Die Hard movie into narrative elements, and corresponding clips were collected from each of the Die Hard movies. Individual clips fall into high, medium, and low action/arousal categories. The user is seated and places his or her hands on a Galvanic Skin Response (GSR) detection panel (GSR readings are the same kind of data collected in a lie detector test). After calibration, the movie begins, and clips are displayed depending on the user’s level of arousal and engagement. The narrative sequence is maintained, though the clips are pulled from any of the movies.
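The selection logic can be reduced to a small lookup: keep the narrative position fixed and let the arousal reading choose which version of that beat plays. Here’s a simplified sketch; the bucket thresholds and clip structure are placeholders, not the actual system.

//Simplified clip selection sketch (illustrative only)
#include <string>
#include <vector>

//each narrative beat has a low/medium/high-arousal version of the clip,
//pulled from any of the Die Hard movies
enum Arousal { LOW = 0, MEDIUM = 1, HIGH = 2 };

struct Beat {
  std::string clip[3];   //clip[LOW], clip[MEDIUM], clip[HIGH]
};

//map a smoothed GSR reading (relative to the calibrated baseline) to a bucket;
//the thresholds here are illustrative only
Arousal bucketFor(float gsr, float baseline) {
  float delta = gsr - baseline;
  if (delta > 0.30f) return HIGH;
  if (delta > 0.10f) return MEDIUM;
  return LOW;
}

//walk the narrative sequence in order, choosing each clip by current arousal
std::string nextClip(const std::vector<Beat>& sequence, int& beatIndex,
                     float gsr, float baseline) {
  const Beat& beat = sequence[beatIndex % sequence.size()];
  beatIndex++;   //the narrative order is always preserved
  return beat.clip[bucketFor(gsr, baseline)];
}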

Tetherlight

Tetherlight is a handheld light that perpetually points at its sibling. Two Tetherlights constantly point at each other, guiding one Tetherlight to the other with a beam of light.


Tetherlight: Prototype 02 Rotation from che-wei wang on Vimeo.

The devices are each equipped with a GPS module, a digital compass, and a wireless communication module to locate themselves, orient, and communicate their positions to each other. Each calculates the proper alignment of a robotic neck to point a light in the other’s direction. In order to maintain the light’s orientation, an accelerometer compensates for the device’s tilt.
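The pointing math itself is a standard great-circle bearing: from the two GPS fixes, compute the bearing to the sibling device, then subtract the local compass heading to get the angle the robotic neck has to turn. Here’s a hedged, Arduino-style sketch of that core calculation (this is not the downloadable Tetherlight code below, just the formula):

//Bearing sketch for a Tetherlight-style pointer (illustrative only)
#include <math.h>

//great-circle (initial) bearing from point 1 to point 2, in degrees 0-360;
//inputs are decimal degrees from the two GPS fixes
float bearingTo(float lat1, float lon1, float lat2, float lon2) {
  float phi1 = radians(lat1), phi2 = radians(lat2);
  float dLon = radians(lon2 - lon1);
  float y = sin(dLon) * cos(phi2);
  float x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon);
  return fmod(degrees(atan2(y, x)) + 360.0f, 360.0f);
}

//angle the light's neck must rotate, given the onboard compass heading (degrees)
float neckAngle(float myLat, float myLon, float otherLat, float otherLon,
                float compassHeading) {
  float target = bearingTo(myLat, myLon, otherLat, otherLon);
  return fmod(target - compassHeading + 360.0f, 360.0f);
}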

Tetherlights are for loving spouses, cheating spouses, girlfriends, boyfriends, children, pets, bags of money, packages, and pretty much anything that you would want to locate at a glance. An ideal use of Tetherlights would be in a situation where two people or two groups of people need to wander, but also need to locate one another in an instant. In a hiking scenario, a large group might split up to accommodate different paces. With Tetherlights, understanding one’s whereabouts in relation to the other group is represented spatially with a bright light in the appropriate direction.

Tetherlight attempts to make one’s relation to a distant person more immediate by making a physical pointer instead of an abstract map. With traditional maps, people need to communicate their positions, orient their maps, locate a point on the map, then look up in that direction. Tetherlight does it in an instant. The difference is like looking at a map to see where your uncle lives versus having a string that’s always attached to him.

If you’re interested, here’s the Arduino Code: Tetherlight06xbeeGPS.pde

Feedback Playback

FeedBack PlayBack is an interactive, dynamic film re-editing/viewing system that explores the link between media consumption and physiological arousal.

This project uses galvanic skin response and pulse rate to create a dynamic film re-editing and viewing system. The user’s physical state determines the rhythm and length of the cuts and the visceral quality of the scenes displayed; the user’s immediate reactions to the scenes delivered feed back to generate a cinematic crescendo or a lull. This project exploits the power of media to manipulate and alter our state of being at the most basic, primal level, and attempts to synchronize the media and viewer, whether towards a static loop or an explosive climax.

In a darkened, enclosed space, the user approaches a screen and rests his or her fingertips on a pad to the right of the screen. The system establishes a baseline for this user’s physiological response and recalibrates. Short, non-sequential clips of a familiar, emotionally charged film, for example Stanley Kubrick’s 1980 horror masterpiece “The Shining”, are shown. If the user responds to slight shifts in the emotional tone of the media, the system amplifies that response and displays clips that are more violent and arousing, or calmer and more neutral. The film is re-edited, the narrative reformulated according to this user’s response to it.

Feedback Playback is by Zannah Marsh and Che-Wei Wang

GSR Reader

Galvanic skin response readings are simply the measurement of electrical resistance through the body. Two leads are attached to two fingertips. One lead sends current while the other measures the difference. This setup measures GSR every 50 milliseconds. Each reading is graphed, while peaks are highlighted and an average is calculated to smooth out the values. A baseline reading is taken for 10 seconds if the readings go flat (fingers removed from leads).
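A hedged Arduino-style sketch of that sampling loop is below. The analog pin, thresholds, and smoothing factor are assumptions; in the actual reader the graphing and peak highlighting happen on the host side.

//Hypothetical GSR sampler (illustrative only)
//one lead drives a small current through the fingertips; the other side is
//read as a voltage on analog pin A0
const int gsrPin = A0;
const unsigned long SAMPLE_MS = 50;        //one reading every 50 milliseconds
const int FLAT_THRESHOLD = 5;              //readings below this = fingers off the leads

float smoothed = 0;                        //running average to smooth the values
float baseline = 0;
bool  takingBaseline = true;               //re-baseline at startup and after a flat spell
unsigned long baselineStart = 0;
float baselineSum = 0;
int   baselineCount = 0;
unsigned long lastSample = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (millis() - lastSample < SAMPLE_MS) return;
  lastSample = millis();

  int raw = analogRead(gsrPin);            //0-1023
  smoothed = smoothed * 0.9 + raw * 0.1;   //simple exponential running average

  if (raw < FLAT_THRESHOLD) {              //fingers off the leads:
    takingBaseline = true;                 //schedule a fresh 10-second baseline
    baselineStart = millis();
    baselineSum = 0;
    baselineCount = 0;
  } else if (takingBaseline) {
    baselineSum += raw;                    //accumulate readings for the new baseline
    baselineCount++;
    if (millis() - baselineStart >= 10000UL) {
      baseline = baselineSum / baselineCount;
      takingBaseline = false;
    }
  }

  Serial.print(raw);
  Serial.print('\t');
  Serial.print(smoothed);
  Serial.print('\t');
  Serial.println(smoothed - baseline);     //value relative to the baseline
}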

Drum Space

Drum space is a percussion instrument that transforms environments into drums. Participants are immersed within the space of drums as beats are created from the surrounding surfaces. The audio experience of the space is negotiated by each participant as they add and remove beats from sequences previously constructed by other participants. Will the space be raging noise or subtle clicks?

Ornos: Prototype 02


Here’s the first test with Ornos. The compass readings are behaving pretty well considering it’s right underneath a spinning hard drive. The 1.2 GHz processor and 512 MB of RAM don’t seem to be enough to download and render the image quickly enough, so I’m going to have to figure out how to speed things up.

Etek EB-85A GPS Example Code


Here’s some example Arduino code for getting an Etek EB-85A module up and reading latitude and longitude (it will probably work with most GPS modules). You can purchase a module from Sparkfun.

The module only needs power, ground, RX, and TX. Most modules like the Etek start sending NMEA strings as soon as they have power. The Etek module takes a minute or two to get a satellite fix from a cold start in urban environments. Signals drop out once in a while between tall buildings at street level, even with DGPS and SBAS. On a clear day, if you’re lucky, you can get a signal sitting by the window in urban canyons.

//Etek GPS EB-85A Module Example
//by Che-Wei Wang and Kristin O'Friel
//32 Channel etek GPS unit
//modified from original code by Igor González Martín. http://www.arduino.cc/playground/Tutorials/GPS
boolean startingUp=true;
boolean gpsConnected=false;
boolean satelliteLock=false;

long myLatitude,myLongitude;

//GPS
#include <string.h>               // string/character helpers used by the NMEA parsing (as in the original tutorial)
#include <ctype.h>
int rxPin = 0;                    // RX PIN 
int txPin = 1;                    // TX PIN
int byteGPS=-1;
char linea[300] = "";
char comandoGPR[7] = "$GPRMC";
int cont=0;
int bien=0;
int conta=0;
int indices[13];

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

void setup() {
  //GPS
  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);

  for (int i=0;i<300;i++){        // clear the NMEA line buffer
    linea[i]=' ';
  }

  Serial.begin(4800);             // adjust to your GPS module's baud rate
}

void loop() {
  // read bytes from the GPS here and parse the $GPRMC sentence into
  // myLatitude/myLongitude (see the original tutorial linked above)
}

Soft Pneumatic Exoskeleton : Ankle Support


The ankle support is a low profile sleeve around the foot and a strap at the heel to connect to the air muscle. A shorter 12″ muscle connects the ankle support to the calf. The calf attachment is now constructed out of a single piece of leather with nylon reinforcement at the top to prevent buckling and stretching when the air muscle is actuated. It’s ideal to have the solenoid valve as close to the air muscle as possible, so I’m going to have to make room for that somewhere on the calf.

Air Muscle Connection


The most common point of failure in all the tests has been the connection at the collar around each end of the air muscles. The inflation of the muscle, along with the tension force, rips through the threads around the collar, so I’ve reinforced those points with rivets with some room to expand.

Exploded Air Muscle


This is what happens when you pump compressed air directly into an air muscle without a pressure regulator. Good thing I wasn’t wearing it when it exploded.

Soft Pneumatic Exoskeleton : V 01 Test


After the first round of testing, I realized assisting jumping with pneumatics is nearly impossible. The size of the tubes and tank necessary to get a high enough cfm to actuate the air muscles at the speed of jumping seems too large for a lightweight wearable application. So, I’m going to concentrate on assisting walking. Maybe speed walking.


Soft Exoskeleton V01 Test from che-wei wang on Vimeo.

Ornos : Prototype 01


I was going to cover the lasercut masonite with a leather sleeve, but I’m going to go with CNC-milled RenShape with a painted finish for the next prototype. The compass needs to be calibrated to the offsets caused by the magnetized computer hardware, and I need to tweak some code to get the frames to load faster and smoother. I’ll post a video as soon as that part is worked out.

Soft Pneumatic Suit : Prototype 01


I finally have a full assembly of all the major components sewn with thick canvas and leather. The small scuba tank provides the air pressure, controlled by an Arduino and solenoid valves. The basic operation of the pneumatic muscle works. Air pressure at about 100 psi inflates the muscle, creating a contracting motion of about 2 inches, enough to make my leg kick out (although the speed of inflation needs to be faster).

The harness for the scuba tank and a guide for the pneumatic muscle are still issues to be dealt with. The strap for the thigh needs to extend further down to help guide the placement of the pneumatic muscle and the scuba tank needs to be attached in a more comfortable way.

Ornos : A View from Above


From the first hand-drawn maps of the stars to satellite imagery and GPS navigation today, our frame of reference and our perception of space have been molded into a view from above. Our understanding of place is often linked to an abstract representation on a map rather than a physical, relational comprehension. You could probably point out Azerbaijan on a map, but how many of us can simply point in its direction across the globe? The image of the globe projected onto a vertical surface is so pervasive, we often associate “up” with north as we project ourselves into a mental image of the map.

The accessibility of GPS and online map services has continued to reinforce the “up” vector while creating a greater divide between the physical world and its virtual representations. Today, we view from above, as primarily experienced on our screens, in an elevation view without any regard to its physical context. We project our presence into the screen through multiple translations of orientation. Viewing a map on a computer screen requires one to find a location on the screen that represents a position; then the abstracted orientation of the vertical screen must be translated and scaled into the physical context of the current position. We’ve lost a step in comprehension without the compass and the horizontal map. The traditional map and compass gave an intuitive understanding of a current position in relation to physical space by rotating the map to align with the space it represented. What appeared one inch to the left of my location on the map could be confirmed by looking up to my left.

Ornos is a telescopic view from above. The horizontal screen reconstructs a view from a position directly above itself using satellite imagery and maps. Exploring your current surroundings is as simple as sliding the device on any surface to pan across the globe. Zooming is controlled by rotating the device itself. The onboard digital compass and GPS modules orient the image on the screen to reflect your physical surroundings while satellite imagery and maps are dynamically loaded from Google, Microsoft, or Yahoo.


Ornos : Prototype 01 from che-wei wang on Vimeo.

Here’s the first test with Ornos. The compass readings are behaving pretty well considering it’s right underneath a spinning hard drive. The 1.2 GHz processor and 512 MB of RAM don’t seem to be enough to download and render the image quickly enough, so I’m going to have to figure out how to speed things up.

Infinite Mouse Tracking

I wanted to use the mouse as a simple surface optical encoder to get infinite panning motion. The problem with reading mouse coordinates on the screen is that once the mouse reaches the edge of the screen, it stops counting. So a simple solution is to reposition the mouse (using Java’s Robot class) every few frames and calculate the change in mouse positions.


//InfiniteMouseTracking
//by Che-Wei Wang
//2.20.2008

float mapPositionX;
float mapPositionY;
long count=0;
int moveX=0;
int moveY=0;

void setup() 
{
  size(screen.width,screen.height,P3D);
  mapPositionX=width/2;
  mapPositionY=height/2;

  //noCursor();
  
  //set the mouse position once before the program begins
  try {
    Robot robot = new Robot();
    robot.mouseMove(width/2, height/2);    
  } 
  catch (AWTException e) {
  }

}

void draw()
{
  background(0);

  //reset the cursor Position every few frames
  if(count%4==0){
    try {
      Robot robot = new Robot();
      robot.mouseMove(width/2, height/2);    
    } 
    catch (AWTException e) {
    }
    moveX=mouseX-pmouseX;
    moveY=mouseY-pmouseY;
  }

  count++;

  //new position= old position + movement * decay
  mapPositionX=mapPositionX+moveX*.8;
  mapPositionY=mapPositionY+moveY*.8;

  stroke(255);
  line(width/2,height/2,mapPositionX,mapPositionY);
  ellipse(mapPositionX,mapPositionY,100,100);

}

Pneumatic Muscle : Solenoid Valve Test


I finally got the solenoid valve hooked up to an air compressor and an Arduino board. Air pressure is regulated at 100 psi. It looks like I’m going to need bigger pipes to get more CFM for faster muscle actuation. The shorter muscle has an oversized braided sleeve, which I thought might help the actuation distance, but it seems like it just takes more air to fill up and doesn’t make a noticeable difference.


Pneumatic Muscle: Solenoid Valve Test 02 from che-wei wang on Vimeo.

Trust Vision

((image manipulated from an original by manitou2121, an interesting composite of faces from hotornot.com))

Why should I trust you? Because you sound trustworthy? Smell trustworthy? Look trustworthy? How do we measure trust?

Trust Vision is a personally biased trust-measuring video camera. As each frame is presented, the face or faces within the frame are superimposed with trust ratings based on previous faces that have been rated by the user. Over time, the computer creates a mental map of the facial features of a trustworthy individual. The real-time analysis of facial features is aided by the user’s bias, as he or she discreetly changes the trust ratings of individuals on the screen. Changes in the mental map may “out” a previously trusted person or improve someone’s standing.


Window

Urban Computing : Window

((image: pmorgan))

A window gives a frame of reference, crops a view, magnifies a perspective. It filters and blocks. The window is the opening that mediates here from there. I would argue that windows are what give walls meaning. Without windows, there wouldn’t be a notion of “there”, since the possibility of something else on the other side wouldn’t exist without the opening. Windows provide a glimpse, a frame, a blur, a dimension of something more, a thought into a different time and space. A prisoner in solitary confinement opens a window in his mind to keep his sanity, while the Queen of England peeks out her picture-frame window, checking the weather outside.

The biggest dilemma of the window is the opening itself. We’d like a filter. We want to see what’s there, but we don’t want to hear it, or feel it. Perhaps we have a level of control with the windows in our minds, excluding dreams, nightmares, and synesthesia (leaky mind windows), but physical windows are bound to the performance of material, a challenge since the beginning of windows.

Glass attempts to provide clear views, filter some light, block temperature, and baffle sound, all in the span of a 1/4″ section. We’ve come a long way since the small openings in 15th-century castles, yet we still haven’t seen the holy grail of windows. Windows should be a thin, structural, fully insulated, soundproof, malleable sheet material that can filter varying amounts of incoming light and views while remaining completely outwardly transparent. Oh, and it should be cheap, lightweight, and sustainable. We have low-thermal-conductivity material (Aerogel), fiber optic skylights (Parans Solar Lighting), lightweight skins (ETFE / Water Cube Skin), self-cleaning glass (Pilkington Activ), LCD privacy glass (Privalite), time-space warping windows (Khronos Projector), illuminating glass (Lumaglass), non-reflective glass (Luxar), breathing skins (Living Glass), living organic envelopes (Breeding Spaces), interactive storefronts (Displax), thin-shell frameless glass; the list goes on. So would a combination of these give us the ultimate window?

Laser Tether : Sketch


Here’s a sketch of how the parts might fit to get the laser to freely point in any direction. Four servo motors work in tandem to tilt the laser head. The batteries and circuitry are stored in the bottom half of the cylinder. I was going to use a mechanical gyro to orient the device to the ground, but the form seemed too restricted, so I’m opting for a gyro sensor to deal with orienting the device to gravity.

TeleCursing

Video chat is great, but maybe I don’t want to see your face. Or more likely, I don’t want you to see me sitting at my computer in my underwear. A less intrusive channel of communication, like IM, is often desirable, but we lose several modes of expression when we only communicate with text. Could a greater sense of presence be transmitted through our most common computer interfaces (keyboard, mouse)? Many of us chat with IM, but CAPS and emoticons lack the fidelity to properly transmit a range of emotions. Without any training or instruction, we already convey a large set of emotions towards our devices to express ourselves. We often express our feelings to the devices closest to us. We slap our TV, pet our computer, and slam our mouse with frustration, yet these common expressions are ignored. What if we could open a channel of communication for your mouse? What if your mouse gestures could transmit your feelings across to your friend?


TeleCursing is a chat plugin that takes your mouse cursor and simply places it on your friend’s screen. A thin line is drawn on the screen connecting the mouse cursors when they are within proximity. With the cursors on the screen, you can flirt, scribble with frustration, hold hands, play tag, or just know when your friend is active on the other end. The cursor could be customized to appear as a less distracting translucent cursor or in a more lively avatar-like way, with animations based on mouse vector, proximity, and clicks.

Other multimouse hacks: DualOsx v.1(hoax?), MPX: The Multi-Pointer X Server, SpookyAction [video]

Wall

Urban Computing: What is a wall?

A wall, as a simple force of division, marks a boundary between here and there. How do we define our world through walls? One could argue that geographical boundaries act as walls, defining spaces within which all forms of life acknowledge and encroach. But even geographic boundaries are crossed intentionally for a particular pursuit or by forces of nature outside one’s control. Political boundaries, drawn from geography, enclose a code of conduct and promote a unified culture and language, but are also fought over, erased, and redefined. Cities born out of geographic convenience and political strategy, like Pingyao, once surrounded and protected by walls, now grow amorphously, engulfing neighboring villages. The most conventional wall, the wall that divides home and nature, surrounds and protects us. This pervasive division between outside and inside, public and private, is becoming unstratified as ubiquitous technologies seamlessly occupy both realms.


Shigeru Ban’s Curtain Wall House blurs the traditional boundary between inside and outside, private and public. Masaki Endoh’s Natural Ellipse seamlessly extends the topology of the interior private surface to the exterior. The increasing presence of public surveillance via webcams is creating a global neo-panopticon. ((Koskela, Hille. Surveillance & Society CCTV Special (eds. Norris, McCahill and Wood) 2(2/3): 199-215)) Banksy reclaims private walls in public domains for public expression. The Great Wall of China is now a tourist attraction. The Principality of Sealand (a nation on a concrete and steel island) offers high-security internet services. ((“The Principality of Sealand.” The Principality of Sealand. 30 Jan. 2008 <http://www.sealandgov.org/history.html>.)) YouTube is available on cellphones, and cellphones can upload directly to YouTube. Fred Sandback defines planes in space with a minimal trace of yarn, yet the division of space is respected: a threshold between here and there, devoid of any material to block, alter, or reflect any of our five senses. ((Fred Sandback, “Remarks on My Sculpture 1966–1986,” in Fred Sandback Sculpture 1966–1986 (Mannheim: Kunsthalle, 1986) <http://www.diacenter.org/exhibs/sandback/sculpture/remarks.html>)) Architectural production now resides in the mass media, redefining our perception of space, once defined by walls, to images. ((Colomina, Beatriz. Privacy and Publicity. Cambridge: MIT Press, 1994.))

The point of view of modern architecture is never fixed, as in baroque architecture, or as in the model of vision of the camera obscura, but always in motion, as in film or in the city. Crowds, shoppers in a department store, railroad travelers, and the inhabitants of Le Corbusier’s houses have in common with movie viewers that they cannot fix (arrest) the image. Like the movie viewer that Benjamin describes (“no sooner has his eye grasped a scene than it is already changed”), they inhabit a space that is neither inside nor outside, public nor private (in the traditional understanding of these terms). It is a space that is not made of walls but of images. Images as walls. ((Colomina, Beatriz. Privacy and Publicity. Cambridge: MIT Press, 1994. p.6))

Pneumatic Muscles


Here’s my first set of homebrew pneumatic muscles. I’m not sure how strong these are going to be, but it looks promising. The compression fittings aren’t being used the way they were meant to be used, so it’s likely that connection will be the first point of failure when it comes under stress.

As soon as the assembly is tested, I’ll be posting instructions on how to make your own.

Laser Tether


The Laser Tether is a handheld laser pointer that perpetually points at its sibling. Two Laser Tethers constantly point at each other no matter where they are on the globe. If my cousin in Beijing had a Laser Tether and I turned on mine in New York City, our lasers would point towards our feet as they draw a straight line through the Earth between my cousin and myself.

The devices are each equipped with a GPS module, a digital compass, and a cell network module to locate, orient, and upload their locations to an online database. They then calculate the proper alignment of their lasers in relation to each other. In order to maintain an orientation that is always perpendicular to the ground, the lasers are mounted on a self-leveling mechanism similar to existing laser levels like the DEWALT DW077KI.
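Pointing through the Earth adds one more angle to the usual bearing calculation: on a spherical Earth, the straight chord to the other device dips below the local horizontal by half of the central angle between the two positions, which is why two devices on roughly opposite sides of the planet end up aiming near the ground. Here’s a small sketch of that geometry; it’s plain spherical-Earth math, not the device firmware.

//Through-the-Earth pointing sketch (illustrative only)
#include <math.h>

//central angle between two GPS fixes on a spherical Earth, in degrees
double centralAngle(double lat1, double lon1, double lat2, double lon2) {
  const double D2R = M_PI / 180.0;
  double c = sin(lat1 * D2R) * sin(lat2 * D2R) +
             cos(lat1 * D2R) * cos(lat2 * D2R) * cos((lon2 - lon1) * D2R);
  if (c > 1.0) c = 1.0;     //guard against rounding just past +/-1
  if (c < -1.0) c = -1.0;
  return acos(c) / D2R;
}

//how far below the local horizontal the laser tilts so its beam lies on the
//straight chord to the sibling device: half of the central angle
double depressionAngle(double lat1, double lon1, double lat2, double lon2) {
  return centralAngle(lat1, lon1, lat2, lon2) / 2.0;
}

//example: New York (40.71, -74.01) to Beijing (39.90, 116.40) gives a central
//angle of roughly 99 degrees, so each laser tilts about 49 degrees downward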

Possible uses for the devices are for loving spouses, cheating spouses, girlfriends, boyfriends, children, pets, bags of money, packages, and pretty much anything that you would want to locate at a glance. Although the potential uses are many, the scenarios I have in mind are hiking, sailing, driving, caravanning, patrolling, playing tag, jogging, getting lost in crowds, hunting, tracking, spying, and battle strategy coordination. An ideal use of the Laser Tether would be in a situation where two people need to wander, but also need to locate one another in an instant.

With GPS units becoming more pervasive, many GPS tracking applications have databases of people’s known locations. The emerging debate on privacy and accuracy is a major concern ((Katina Michael, Andrew McNamee, MG Michael, “The Emerging Ethics of Humancentric GPS Tracking and Monitoring,” icmb, p. 34, International Conference on Mobile Business (ICMB’06), 2006)), while the abstract representation of GPS data on maps fails to convey a strong sense of immediacy and relevance. Applications like Navizon and Mologogo are open to large online communities and are visualized on maps. Common commercial applications of GPS tracking devices include fleet control of taxis and trucks, animal control, race tracking, and visualization. ((GPS tracking. (2008, January 27). In Wikipedia, The Free Encyclopedia. Retrieved 04:39, January 30, 2008, from http://en.wikipedia.org/w/index.php?title=GPS_tracking&oldid=187351992))

The Laser Tether attempts to make one’s relation to a distant person more immediate. The difference is like looking at a map to see where your uncle lives versus having a string attached to him.

Pneumatic Soft Exoskeleton


Pneumatic systems are clean, safe, lightweight, and reliable. In a pneumatic-electronic hybrid system, electric components simply control the flow of air pressure, shifting the burden of kinetic actuation from electrical power to pneumatic power. The advantage is a lightweight, low-idle-power system with a high-power kinetic impact.

My initial encounters with pneumatic systems like inflatable shelters and flotation devices revealed an enormous potential for creating lightweight systems that can be reconfigurable and transportable. Pneumatic systems are used in various critical applications that require immediate response, reliability, and flexibility. Air bag systems, inflatable life vests, and self-inflating rafts are of particular interest to me.

In terms of pneumatic systems as a wearable technology, a few possibilities come to mind (nomadic shelters, impact protection, an expressive suit, and a powered exoskeleton, just to name a few). Each has potential uses that interest me, and many have been explored by others. Motoair sells pneumatically actuated vests and jackets for motorcyclists and other high impact sports to cushion accidents. ((Motoair, MOTOAIR, Jan 28.2008, <http://www.motoair.com/>)) Powered exoskeletons, currently developed within research groups around the world, are focused on assisting human locomotion through a wearable machine. ((exoskeletons: http://bleex.me.berkeley.edu/index.htm, http://www.newscientist.com/article.ns?id=dn1072, http://spectrum.ieee.org/print/1974, http://www.youtube.com/watch?v=0hkCcoenLW4)) Actuated parts of the machine coincide with the body and gross muscle groups to help lift heavy loads. A suit for the upper extremities has been created by Hiroshi Kobayashi, a roboticist from the Science University of Tokyo. ((BBC. BBC News: Health, Jan. 28 2008 <http://news.bbc.co.uk/1/hi/health/2002225.stm>)) Dr. Daniel Ferris and Dr. Riann Palmieri-Smith lead a group of researchers at the University of Michigan in creating pneumatically powered exoskeletons for the lower limbs. ((Human Neuromechanics Laboratory, Dr. Daniel Ferris and Dr. Riann Palmieri-Smith. Jan. 28, 2008.))

The goal of the Pneumatic Soft Exoskeleton project is to create a set of soft and lightweight wearable pneumatic muscles for the lower extremities. A series of pneumatic muscles are worn around the legs to assist the user in lifting loads, walking, running, and perhaps jumping. Unlike other exoskeletons, this application would be untethered and constructed primarily of soft fabric, making the device more lightweight and flexible. The system will be built to remain primarily in an idle state, to be activated only as needed. Primary concerns are weight, comfort, and flexibility. As a benchmark, if I can wear the device and beat the world record for any track event, I would consider the device a success.

Features of the wearable system
The system will be powered by a pneumatic reservoir and triggered by the user’s motions or an external switch to activate the system only at desired moments. An electronic circuit, powered by a battery, will sense and trigger movement through a series of artificial muscles. Pneumatic muscles work by inflating a silicone tube within a plastic braided sleeve. The inflation of the tube shortens the overall length of the assembly as the braided sleeve expands radially. ((Lightner, Stan, et al. The International Journal of Modern Engineering. Volume 2, Number 2, Spring 2002, Jan.28 2008 <http://www.ijme.us/issues/spring%202002/articles/fluid%20muscle.dco.htm>))

Pneumatic muscles are a relatively recent development in air-powered actuation, led by the Shadow Robot Company and FESTO Corporation. They were originally commercialized by the Bridgestone Rubber Company in the 1980s. ((Lightner, Stan, et al. The International Journal of Modern Engineering. Volume 2, Number 2, Spring 2002, Jan.28 2008 <http://www.ijme.us/issues/spring%202002/articles/fluid%20muscle.dco.htm>
Air Muscle videos: http://www.youtube.com/watch?v=w77YDDTXfRc&NR=1, http://www.imagesco.com/articles/airmuscle/AirMuscleDescription03.html)) Pneumatic muscles use simple materials that have a low cost to manufacture and are extremely lightweight. A fully assembled muscle can potentially have a 1:400 weight-to-strength ratio (compared to the 1:16 ratio of pneumatic cylinders and DC motors). ((Shadow Robot Company: Air Muscles overview. Shadow Robot Company. Jan. 28 2008. <http://www.shadowrobot.com/airmuscles/overview.shtml>)) The assembly is also flexible, cushioned, and operates smoothly, making it an ideal candidate as an artificial muscle for a wearable application.

The Pneumatic Soft Exoskeleton is worn by strapping components of the system onto parts of the leg to align the assistive muscles to major muscle groups in the leg. By attaching a few pneumatic muscles to assist gross movement of the lower extremities, properly timed actuation of the assisting muscles can add to the user’s own movements to achieve greater results in speed and power. Each assistive muscle would coincide with an existing muscle group and transfer power to tension lines that wrap around the leg in such a way that the forces transfer in a fashion similar to the muscle-tendon-bone hierarchy. The quadriceps femoris muscle group will be the primary group of focus. As the system is perfected, other muscle groups will be identified and addressed.

Components of the system
The system consists of pneumatic muscles of varying sizes which attach to the main wearable frame. The wearable frame is a large fabric that is wrapped tightly around the thigh and crus. The frame consists of straps that are sewn in a pattern to distribute forces from muscle-to-tendon junctures to the rest of the frame. The straps are collected at several junctures to accept a tension connection from a pneumatic muscle.

Air flow for each pneumatic muscle runs through a single tube from a 3-way solenoid valve, which routes air in and out of the muscle from a portable air reservoir worn at the user’s hip. Each solenoid is controlled by outputs from a battery-powered Arduino board. Switches from the user’s inputs are fed to the Arduino board to control the actuation of the artificial muscles.
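A hedged sketch of that control loop is below. The pin numbers and debounce timing are assumptions, and the valve would be switched through a transistor or relay rather than directly from the pin.

//Hypothetical solenoid valve control loop (illustrative only)
//a momentary switch worn by the user opens the 3-way valve to inflate the
//muscle; releasing the switch vents the muscle back to idle
const int switchPin = 2;      //user's trigger switch, wired to ground with INPUT_PULLUP
const int valvePin  = 9;      //drives a transistor/relay for the 3-way solenoid valve

void setup() {
  pinMode(switchPin, INPUT_PULLUP);
  pinMode(valvePin, OUTPUT);
  digitalWrite(valvePin, LOW);    //valve closed, muscle vented at idle
}

void loop() {
  bool pressed = (digitalRead(switchPin) == LOW);   //pulled low when pressed
  digitalWrite(valvePin, pressed ? HIGH : LOW);     //inflate only while held
  delay(10);                                        //crude debounce
}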

Uses
Potential uses for the Pneumatic Soft Exoskeleton follow much of the current applications for powered exoskeletons. These wearable machines can assist lifting and locomotion. The added benefit of the Pneumatic Soft Exoskeleton is its weight and flexibility. By making the system lightweight and soft, its appearance is less obtrusive and less of a burden to fit to the body. The system is potentially more affordable and highly customizable because of the simplicity of the components.

Technical concerns
The Pneumatic Soft Exoskeleton must be lightweight and perform reliably to assist locomotion. Since the system relies on an air reservoir, it is likely that a user may find the system insufficient in its capacity to perform continuously. This concern can be addressed with a larger reservoir, but that would add more undesirable weight and volume.

There may be a potentially harmful side-effect to the body due to repeated unfamiliar stress on bones and muscles. The softness of the system is intended to dampen any impact that may be harmful, but repeated stress points due to the power of the assistive muscle or the location and transfer of forces to the limbs may be damaging.

Precedents
Here’s a nice intro to the subject from engineeringtv.com
Human Neuromechanics Laboratory at The University of Michigan
HAL at the University of Tsukuba
Muscle Suits at Koba Lab
Wearable Power Assist Suit at Kanagawa Institute of Technology, Robotics and Mechatronics

Talking Face to Face


Talking Face to Face is a networked communication system that detects the positions of two users in relation to each other anywhere on the globe. Using a combination of GPS and a compass, the audio output is modified to create the sensation of the voice’s position and proximity to the listener. If Bob is in NYC and Sarah is in LA, their voices would sound relatively distant to each other. Bob would face west and Sarah would face east so their voices would sound as if they were facing each other. If Bob turns away, Sarah’s voice would sound like they were no longer facing each other. As Bob and Sarah change their orientations and positions, the voices they hear gain reverb, volume changes, and other effects to simulate their relative presence in a spatial environment.
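The cues can be reduced to a couple of numbers per frame: how far each voice should pan based on where the speaker sits relative to the listener’s facing, and how much it should be attenuated with distance. Here’s a hedged sketch of that mapping; the curves are placeholders, and the actual system would also feed these values into reverb and other effects.

//Spatial audio cue sketch (illustrative only)
#include <math.h>

//map the speaker's position, relative to the listener's compass heading, to a
//stereo pan (-1 = hard left, +1 = hard right) and a gain (0-1);
//bearingToSpeaker and listenerHeading are in degrees, distance in kilometers
void spatialCues(double bearingToSpeaker, double listenerHeading,
                 double distanceKm, double& pan, double& gain) {
  const double D2R = M_PI / 180.0;
  double rel = bearingToSpeaker - listenerHeading;   //angle off the listener's nose
  pan = sin(rel * D2R);                              //straight ahead or behind = center

  //illustrative distance roll-off: full volume nearby, quieter across the country
  gain = 1.0 / (1.0 + distanceKm / 1000.0);          //NYC to LA (~4000 km) comes out near 0.2
}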

We are curious beings that need and want confirmation of our existence and presence. We like to explore, yet we want to stay at home. We yearn for freedom, difference and change, yet we find comfort in familiarity. I believe it’s this dilemma that telepresence aims to solve.

The presence stack:
Physical sensory input of a moment,
channeled through nerve paths to the brain,
processed and compared to a set of memories of the sensation,
judgments constructed on the presence of the sensation or the realness of the input,
and predictions constructed of the next sensation.

Is what I see, smell, feel, hear, or taste real? Is it out of the ordinary? What is real and not real? Is this moment not the moment I expected to follow the last moment?

Presence deals with and persists in change. Our presence cannot be static. Presence cannot remain in the past or exist in the future. We can only continuously observe and predict changes within our current moment based on our past experiences.

Being present is a combination of sensations that confirm our prediction of where we are spatially and mentally. If I were to sense or experience a moment that is out of the ordinary and a magnitude beyond what I expect, I would question the reality of my environment in that moment. In other words, if my prediction of the near future is shattered by an extraordinary experience outside my threshold of reality, my presence comes into question. For example, if I were to approach my sink and turn on my faucet and nothing comes out, something unexpected has happened, but I’ve had similar experiences in the past, so I can imagine what had gone wrong and find a solution. On the other hand, if I go to turn on the faucet and butterflies fly out, the presence of the faucet in my reality or my presence in this reality would be questioned. Is that real? Is this real? Could this be happening now? Am I dreaming?

We are often willing to accept the presence of other people with only a fraction or glimpse of their entire presence. I often mistake a mannequin in my peripheral vision for a real person, only to be surprised when a closer look reveals its lifelessness. I can also fool myself into believing I’m somewhere else, no matter how hard I try to resist, when I listen to binaural recordings of a previous space and time. We are willing to complete the picture for ourselves because we try to match previous experiences to our current experience to predict how our environment should be. Could implied presence by a mannequin persist over a long period of time? Can audio input be so convincing that we question our mismatched vision?

In each confirmation of another’s presence, I think we are looking for life. Is the other alive? The classic horror movie scenario of walking down a hall of medieval armor statues questions how we determine the presence of another being. Is someone wearing the armor and standing very still, or is it empty and lifeless? The presence of a ghost or prankster is only confirmed when something moves. Or is that enough? If their axe drops to the ground, would we assume someone deliberately dropped the axe, or did the shifting of my weight on the creaking floors cause the accident? Would a glimpse of a pair of eyes confirm our suspicions of someone’s presence? Would the sound or smell of someone’s breath convince us? Whether it is a single clue or a combination of movement, sight, and sound, we are looking for a confirmation of life.

If I follow my brother into a dark basement, I would continually feel his presence no matter how dark or silent the room may be. Even if he didn’t respond to me calling his name, I would still expect him to be there when I turn the lights on. Perhaps he’s ignoring me or playing a game with me. It would be a hard stretch of our imagination and a break in our reality if we turned the lights on and he was nowhere to be found. Perhaps I would even search for him in the basement, behind the shelves and behind the boxes. Without some indication of him exiting the basement, like the sound of the door closing or the sound of his footsteps, his presence is automatically completed by a single instance of his existence before we entered the basement. Our imagination wants to complete our senses to detect his presence.

We want presence and will stretch our imagination to detect presence.

moMo : Version 4


moMo version 4 now has a sturdier, lighter frame. A couple of switches were added to control power to the Arduino and to the GPS unit. The whole unit somehow runs off 4 AA batteries instead of 8. I also realized the compass unit is not as accurate as we’d like. It has a sour spot around 180-270 degrees where the measurements are noticeably off by several degrees, apparently not due to electromagnetic interference from the nearby motors.

3T Chair : prototype 01


I finally got around to folding and assembling the 3 lasercut metal sheets. There are still a few kinks to iron out, but it may be a while before I get around to it.

Oblik : Construction

We’ve finally agreed on the building’s name, Oblik ( オブリク ), derived from the word oblique.

The initial massing study resulted in a plain exterior with little variation, but we found room to play on the facade. We first established that a 12° deviation is enough to be different, but not too much to offend. That’s just our opinion. We knew we needed more than one rotation to generate any substantial number of iterations, while setting the limit at 2 rotations to maintain a level of transparency to the process. Like a simple puzzle, each facet could return to its origin through two simple operations.

Construction is now set to be completed by the end of February. Major concrete and exterior work is almost done. Now it’s mostly millwork and interior finishes.


Our design process began with a series of blocks arranged to fulfill basic site and program requirements. Each block represents a single unit or half a unit. To provide an interior courtyard for the residents, the placement of the blocks was essentially driven by the sky exposure plane.


By placing the access corridors within the courtyard, a semi-private transition area acts as a buffer between the street and the apartment unit. Residents can step out of their apartment door to check the weather, greet neighbors, or rest for a moment under the canopy before stepping out into the city.

moMo : version 2


-new compass unit calculates true north in relation to itself and the waypoint (so it points in the right direction even if you spin it)
-upgraded to a 32 Channel!! ETek GPS module
-4 AA batteries power the entire circuit with 5.6V at 2700mAh
-a new skeleton (to be outfitted with a flexible skin soon)
-updated code for smoother motion, although it’s soon to become obsolete as we move into a more gestural configuration (like Keepon) without the deadly lead weight.

HND: Shell

For moMo’s prototype shell, we’re initially cutting out cardboard sections into the egg shape. Our next iteration will be lasercut acrylic. Servo motors are counterbalanced and secured through the cutouts in the material. The GPS unit is housed at the top of the egg shape.


A Haptic Navigational Device

A haptic navigational device requires only the sense of touch to guide a user. No maps, no text, no arrows, no lights. HND sits on the palm of one’s hand and leans, vibrates and gravitates towards a preset location. Imagine an egg sitting on the palm of your hand, obeying its own gravitational force at a distant location. The egg magically tugs and pulls you. No need to look at any maps. Simply follow the tug.

This is what we want to make. (Eduardo Lytton, Kristin O’Friel + me)

The possible user scenarios that can come out of this device range from treasure hunts to assistive technology for the blind.

Possible methods of creating the sensation of pull or tug:
Weight shifting via servo motors
Vibration motors
Gyroscopes | Gyroscopes.org

Precedents:
http://www.beatrizdacosta.net/stillopen/gpstut.php
http://www.mologogo.com/
http://mobiquitous.com/active-belt-e.html
http://news.bbc.co.uk/2/hi/technology/6905286.stm
http://www.freymartin.de/en/projects/cabboots

Observation: Crosswalk Button

At certain intersections, the city offers a push button at the pedestrian crosswalk with a sign that reads, “To cross street, push button, wait for signal, wait for walk signal.”

Hypothesis: People love to push buttons. It gives us a sense of control. So, people push the crosswalk button. And perhaps they push it several times to make sure it’s been pressed. They’ll keep checking for the green light to get confirmation that the button has caused a series of events that will lead to a signal change in the very near future. People may also have slow responses to the changed green signal.

Site: 2 crosswalks 50′ apart, each with an island along the median strip between 6 lanes of vehicular traffic at Christopher and West St. Approximately 50 paces to cross. 1:30pm, 9/17/2007

Scenario #01:
Single male approaches red light. 25 years old.
25 seconds into waiting, stares at incoming traffic
44 seconds into waiting, stares across street, perhaps at traffic signal
58 seconds into waiting, light turns green. Begins walking immediately. (52 paces to the other side of the street)

Voltage Current Resistance

It seems like the preferred analogy for voltage, current, and resistance uses water with pressurized tanks, hoses, etc. The waterfall analogy was the easiest for me to understand.

If we draw an analogy to a waterfall, the voltage would represent the height of the waterfall: the higher it is, the more potential energy the water has by virtue of its distance from the bottom of the falls, and the more energy it will possess as it hits the bottom. . . If we think about our waterfall example, the current would represent how much water was going over the edge of the falls each second. . . In the waterfall analogy, resistance would refer to any obstacles that slowed down the flow of water over the edge of the falls. Perhaps there are many rocks in the river before the edge, to slow the water down. Or maybe a dam is being used to hold back most of the water and let only a small amount of it through. . . if you think about our waterfall example: the higher the waterfall, the more water will want to rush through, but it can only do so to the extent that it is able to as a result of any opposing forces. If you tried to fit Niagara Falls through a garden hose, you’d only get so much water every second, no matter how high the falls, and no matter how much water was waiting to get through! And if you replace that hose with one that is of a larger diameter, you will get more water in the same amount of time. . . more on Voltage Current Resistance
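Put into numbers, the analogy is just Ohm’s law, V = I × R: the height of the falls (voltage) divided by the obstacles in the way (resistance) gives the flow (current). A quick worked example, with made-up values:

//Ohm's law worked example (illustrative values)
#include <stdio.h>

int main() {
  double voltage = 9.0;                    //the height of the waterfall, in volts
  double resistance = 450.0;               //the rocks and dams in the way, in ohms
  double current = voltage / resistance;   //I = V / R
  printf("%.3f A (%.0f mA)\n", current, current * 1000);   //prints 0.020 A (20 mA)
  return 0;
}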