All posts by natashalewandrowski

Soundsorial 2

Team: Natasha Lewandrowski & Yuchen Zhang


Soundsorial 2, mobile application prototype

Description
Soundsorial 2 is a mobile application that lets you detect and visualize particulate matter in the air wherever you go using your phone.

In our first iteration of Soundsorial, we made a wearable instrument in the form of a pair of wooden headphones that created audioscapes based on dust levels in the surrounding environment. Our goal was to create a portable device that would allow wearers to sense air quality information in their immediate area as a way to promote mindfulness about environmental issues.

There were two main issues with our first design. First, the headphones were inflexible, which meant they only fit a limited range of head sizes. With our second iteration we wanted to come up with a more inclusive form factor. Second, our headphones used piezo speakers, which limited the range of sounds we could produce to beeps and buzzes. These sounds were unpleasant for wearers to listen to and thus would not encourage use of the device.

Goals
Our goal for the second half of the semester was to address these two issues. One idea we had was to create a mobile application that would pair with the headphones. We presented the first version of this idea together with the headphones at the NYC Media Lab on November 6th. The feedback we received was generally positive, but underlined the need for a more integrated presentation.


Chen presents Soundsorial at NYC Media Lab


View of Soundsorial table at NYC Media Lab

Usability Test
Chen conducted a usability evaluation of the Soundsorial device as part of an assignment for another class. We used the feedback she received from participants in the redesign of our device. Chen tested the device with ten users. She set it up as if it were in a gallery setting, on a pedestal accompanied by a short written description of the project. Before she let them interact with the device, she asked people what sound they expected it to make and how they thought it might turn on. Then she let the users try the device. Once they were done, she asked them how they felt about the sound, shape, and texture.


Usability Evaluation

Findings
Many of Chen’s findings reflected what we already knew; however, the test gave us new insights as well. Five out of six participants thought the device should activate automatically when they put it on. Testers thought the device should either play nature sounds, like birds or crickets, or play music. They also mentioned that the particle levels should introduce static into the pleasant sounds. Participants liked the wooden material visually, but disliked the fact that the headphones were not adjustable.


User Test Documentation


Participant testing Soundsorial

Redesign
We brainstormed ideas for how the new device might look and function. We considered a couple of options, including a redesigned headset, a pin that could be clipped to the body, and a phone case. At this stage we are thinking that a phone case might be the best option for several reasons.

First, it solves the sizing issue while allowing us to keep the wooden aesthetic. Second, it lets us utilize the built-in capabilities of the phone to produce better-quality sounds and visualizations. It also opens up the possibility of tracking positional data and connecting to existing APIs in order to compare Soundsorial’s sensor data to information published by other sources.


Headphone concept sketches


Pin concept sketches

Software Mockup
We created mockups of the software by connecting the existing Soundsorial device to the Processing development environment. We tried many variations before arriving at our final design. We experimented with various types of sound and with different ways of displaying the particle information.

Final
Ultimately we decided to keep it visually simple. We used a webcam to capture what the device is seeing so that the user could use the device to “see” the invisible particles in the air. The number of particles increases and decreases based on the dust level readings.
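The mockup itself was written in Processing, but the core mapping from sensor readings to on-screen particles is simple to sketch. Below is a minimal illustration in plain C++; the function name, the 100–1000 input range, and the 500-particle cap are assumptions for illustration, not the exact values from our sketch.

```cpp
#include <algorithm>

// Hypothetical mapping from a raw dust reading (assumed to run roughly
// 100-1000 on our sensor) to the number of particles drawn on screen.
int particleCount(int dustVal, int maxParticles = 500) {
    // clamp the reading into the expected range, then scale linearly
    dustVal = std::min(std::max(dustVal, 100), 1000);
    return (dustVal - 100) * maxParticles / 900;
}
```

Clean air maps to an empty screen and the dirtiest readings fill it; anything in between scales proportionally.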

For the app mockup demonstration we used a program called LiveView to show the Processing sketch on a mobile phone. It shows how the app might be used on a mobile device.

Other Processing Experiments
Below is a selection of videos documenting some of our other Processing experiments. In this version the particles change color when they reach a certain threshold. This could be used to indicate that they have reached a dangerous level.

In this version we created a graph of the data.

In these versions we tried using text and an image of pollen as the particle.

Enclosure and Next Steps
Below are images of a clip-on version of the enclosure. In order to make this into a sellable device the hardware would have to be miniaturized. The goal here is simply to show proof of concept. Ideally we would like to include other sensors in the case as well, for example a carbon monoxide sensor and a humidity sensor. This data could be visualized in the app as well. The device would connect to the phone app via Bluetooth. Once miniaturized, we could potentially fit the sensor into a phone case as well.


SoundSorial 2 with box


Soundsorial 2 front view


Soundsorial 2 back view


Soundsorial 2 side view

Link to Final Presentation https://drive.google.com/file/d/0B5WRGcXRY9vvODFZT29XZEdqZFE/view

Final Project Proposal

Chen and I will continue with our project SoundSorial for our final. We have three goals for the next stage of the project.

  • Revisit the form factor
  • Experiment with sound and visual feedback
  • Experiment with presentation/performative aspect of the piece

Progress

This week we presented SoundSorial at NYC Media Lab. We received lots of positive feedback from visitors. However, it was obvious that the form factor is a problem: the headphones are currently too large and too inflexible to fit all head shapes. We want to experiment with different materials and shapes.

We also connected SoundSorial to Processing, through which we can manipulate actual audio files as well as create visual feedback. This opens up many new possibilities for how we express the data we are collecting.

Eventually, we would like to make a companion app for SoundSorial that will allow the user to listen to music and podcasts as they normally would and disrupt their listening experience with static or other audio feedback when they enter an area that is above a certain level of particulate pollution.

Before we try to create an app, however, we want to explore what the possibilities of this project are. Now that we have a working prototype we want to explore the potential types of audiovisual feedback we can create, what it could look like, and how it might be presented. For example, what if SoundSorial became a performance piece?

The first prototype determined the ‘who,’ ‘what,’ and the ‘why’ of our project.  With the second stage of development we want to explore the possibilities of ‘how.’ For our final we envision presenting a series of material, visualization, and presentation experiments.

 

 

Microscope

I haven’t settled on a final design yet, but on Wednesday night I made some cardboard prototypes. I started with a large design based on a classic microscope. I found that it was a bit overkill because the DIY microscope has a very short range of focus. I also need to include a light because the LEDs on my webcam no longer work after the hack. I created the medium-sized design next. At first I used a press lamp to provide the light, but it was too bright. Then I tried my phone’s flashlight, which was still too bright, so I started using the screen of my phone instead. Finally I made a small holder to fit over the phone. I put the webcam on a miniature tripod because that was the most flexible to adjust. For the final version I will probably make something in between the second and third versions.


Hydra videos

 

Photos

Air Quality Headphones Update

Our team met this week to prototype the air-sensor to audio interaction. In this example we are using an optical dust sensor to sense the amount of particulate matter in the air. We are mapping these values to an audio output.

Due to complications with GPS and data logging we have decided to limit the scope of our initial prototype to the interaction between dust, sound, and the wearer of the headphones. For our next steps we plan to miniaturize the prototype, and work on the manipulation of the sound. We will also begin designing the headphones themselves.

Demo

Code

//Dust Sensor
int dustPin = 0;        // sensor output on analog pin 0
int dustVal = 0;
int ledPower = 2;       // drives the sensor's internal IR LED
int delayTime = 280;    // sampling window (microseconds)
int delayTime2 = 40;
int offTime = 9680;     // remainder of the 10 ms pulse cycle

//Piezo
const int buzzerPin = 8;

//Setup
void setup() {
  Serial.begin(9600);
  pinMode(ledPower, OUTPUT);
  pinMode(4, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
}

void loop() {
  // ledPower is any digital pin on the Arduino connected to pin 3 on the sensor
  digitalWrite(ledPower, LOW);    // power on the LED (active low)
  delayMicroseconds(delayTime);
  dustVal = analogRead(dustPin);  // read the dust value via pin 5 on the sensor
  delayMicroseconds(delayTime2);
  digitalWrite(ledPower, HIGH);   // turn the LED off
  delayMicroseconds(offTime);

  Serial.println(dustVal);

  // map the raw reading onto an audible frequency range,
  // clamped so tone() always receives a valid frequency
  dustVal = constrain(map(dustVal, 100, 1000, 31, 3000), 31, 3000);
  tone(buzzerPin, dustVal);
  Serial.println(dustVal);

  delay(1000 / 16);
  noTone(buzzerPin);
}
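The key line in the sketch above is the call to map(). Arduino's map() rescales a value from one integer range to another with the formula below; reimplemented in plain C++, it shows how a raw dust reading between 100 and 1000 becomes a buzzer frequency between 31 and 3000 Hz (dirtier air produces a higher pitch).

```cpp
// Same linear-rescaling formula as Arduino's map(), using integer math.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```

Note that integer division truncates, and readings outside the input range extrapolate past the output range, which is why clamping the result before handing it to tone() is prudent.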

Air Quality Soundscape – Update

Team: Yuchen Zhang, Natasha Lewandrowski, Agustin Crawford (Nevaris), and special guest artist Fabiola Einhorn.

Description: We propose to build an urban air quality monitor that provides realtime audio feedback. Our goal is to create a portable particulate matter monitor that can be worn inconspicuously during regular daily activities. We will utilize the form and function of a pair of headphones to express air-quality information through sound while also keeping the device inconspicuous.

Initial goals: We will collect air quality information using an optical dust sensor.  The sensor data will be transmitted to a mobile phone  using an Arduino Yun. The phone will provide GPS, time, and location data. As an alternate plan, we might use a separate standalone GPS paired with a datalogger to collect this information.

The dust sensor data will be output as sound using the headphones. The wearer will listen to prerecorded tracks that are stored on a micro SD card and played through the Arduino Yun. The sensor data will manifest as noise in the audio track. When the wearer is in a clean air environment she will be able to listen to her music clearly. When the wearer is in a dirty air environment the songs will be distorted. The goal is to guide the wearer to safe air by manipulating the quality of her music.
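As an illustrative sketch (not our actual signal chain), the idea of sensor data manifesting as noise can be expressed as a crossfade between the clean audio sample and random static, weighted by the dust reading. The 0–1000 reading range and 16-bit PCM samples are assumptions.

```cpp
#include <cstdlib>

// Mix random static into one audio sample in proportion to the dust level.
// dustVal is assumed to range 0 (clean air) to 1000 (maximum reading);
// sample is a 16-bit PCM value.
short mixStatic(short sample, int dustVal) {
    // fraction of the output taken over by noise, from 0.0 to 1.0
    float noiseAmount = dustVal / 1000.0f;
    short noise = (short)(rand() % 65536 - 32768);
    return (short)(sample * (1.0f - noiseAmount) + noise * noiseAmount);
}
```

In clean air the sample passes through untouched; as readings climb, static gradually drowns out the music, nudging the wearer toward cleaner air.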

The GPS and sensor data can be uploaded to a computer and used to create a complementary set of visual maps of air quality based on the wearer’s travels.

Long term goals: We envision continuing this project beyond the scope of the midterm. Our long-term goal is to create a mobile app that could be paired with the headphones. This would allow the wearer to listen to her music normally (instead of through the SD card) while still receiving audio feedback from the sensor. Additionally, the wearer could see the visual data in real time on her phone, rather than having to wait until she gets home.

We would also like to create an inexpensive standalone device that could be used by people who cannot afford a smartphone. In this version, audio might be collected and manipulated based on ambient sound in the environment rather than by manipulating audio files.

Production Schedule:

  • Due Sept 25: Create a functional prototype to test the optical dust sensor and see if we can record the information to the data logger (in progress).
  • Due Oct 2: Incorporate the GPS data and experiment with audio feedback (in progress).
  • Due Oct 9: Design the enclosure. Final design complete.
  • Due Oct 16: 3D print enclosure. Document project.
  • Due Oct  23: Present final project.

Technical diagrams


Diagram using GPS and datalogger


Diagram of how the dust sensor is connected to the Arduino

UX diagram


Air Quality Soundscapes Headphones

Team: Yuchen Zhang, Natasha Lewandrowski, and Agustin Crawford (Nevaris).

Description: For our midterm project we propose to build an urban air quality monitor that provides realtime audio feedback. Our goal is to create a portable particulate matter monitor that can be worn inconspicuously during regular daily activities. We will utilize the form and function of a pair of headphones to express air-quality information through sound while also keeping the device inconspicuous.

How will it work? We will use an optical dust sensor to monitor particulate matter in the air. The information from the monitor will be paired with location data and time data collected using GPS and stored in a data logger. The data logger will output the data as a .csv file which can later be used to create a complementary set of visual maps of air quality based on the wearer’s travels.

Production Schedule:

  • Today: Purchase materials (complete)
  • Due Sept 25: Create a functional prototype to test the optical dust sensor and see if we can record the information to the data logger.
  • Due Oct 2: Incorporate the GPS data and experiment with audio feedback.
  • Due Oct 9: Design the enclosure. Final design complete.
  • Due Oct 16: 3D print enclosure. Document project.
  • Due Oct  23: Present final project.

Inspirations:

EPA dress http://inhabitat.com/2nd-skins-epa-dress-and-piezing-motion-powered-dress/

FLOAT Beijing http://f-l-o-a-t.com/

algaculture symbiosis suit designed by Michael Burton and Michiko Nitta. http://www.dvice.com/2013-8-12/algae-suit-generates-food-feed-your-constant-hunger

Smart Air Kite https://www.kickstarter.com/projects/replaymy/smart-air-kites-float-beijing

 

 

Mobile Lab Mockup

Here is a visual mockup of my mobile lab. My goal was to create a wearable mobile lab for exploring new lands through taking measurements and samples, and capturing documentation.

 

The text is hard to read from the images, so here is a rundown of the features as it relates to the numbering in the diagrams:

  1. Positionable ear augmentations that are controlled using an EEG device under the hood. A microphone is also included on each side to record audio.
  2. An eye display similar to Google Glass. Also capable of recording audio/visual and taking voice notes. It can be controlled by voice or a touch pad located on one of the gloves.
  3. Interchangeable magnifying lenses (like a jeweler’s lens) for close up visual analysis.
  4. EEG located under the hood to monitor the wearer and control the direction of the ears.
  5. A measuring tape on the sleeve allows for quick estimations of size.
  6. A vibration sensor in the shoe detects nearby motion.
  7. A GPS tracker in the other shoe monitors the wearer’s position and automatically attaches this information to any images or voice notes.
  8. A scale in the shoe is zeroed to the wearer’s weight so that she can weigh small objects by picking them up.
  9. A temperature sensor in the pointer finger of the left glove can be used to measure air and water temperatures.
  10. A humidity sensor in the index finger of the left glove takes humidity readings.
  11. A head-mounted flashlight makes for easy hands-free operation in the dark. It has regular white light and red light to protect the wearer’s night vision.
  12. A hand-mounted camera, also on the left hand, can take images around corners, up high, and in other places the eye display has trouble reaching.
  13. A processor and battery pack operate all the electronics and store data for later analysis.
  14. Containers for samples are located on the outer portion of the boot.
  15. Small instruments such as tweezers and an eyedropper are stored in special holders on the breast of the jacket.
  16. A spotting scope is attached to a holder on the leg for viewing at longer distances. It is good for observing local fauna.
  17. A bite-activated drinking straw in the hood allows for hands-free drinking. A pouch of water is stored in the lining of the back of the jacket.
  18. A touch pad on the left glove can control the functions of the eye-display when the wearer cannot use her voice.
  19. A large cargo pocket on the right leg is used for storage.
  20. A pocket knife (always an essential tool for exploring) is clipped to a jacket pocket.
  21. A robotic canary drone can perform visual reconnaissance. The video feeds back to the eye display for exploring hard to access areas remotely. It is controlled with the touch pad or by voice.

Inspirations:

A spacesuit is basically a mobile lab/work suit for an extreme environment.

I am particularly interested in biology, so I looked to early naturalists like John James Audubon and Charles Darwin for ideas of what to include in the lab. I did not include a gun, though. I prefer less damaging means of observation.

A company called Necomimi makes EEG controlled ears for cosplay. Why not make them actual ear augmentations?