Tag Archives: SoundSorial

Soundsorial 2

Team: Natasha Lewandrowski & Yuchen Zhang

Soundsorial 2, mobile application prototype

Soundsorial 2 is a mobile device that pairs with your phone to detect and visualize particulate matter in the air wherever you go.

In our first iteration of Soundsorial, we made a wearable instrument in the form of a pair of wooden headphones that created audioscapes based on dust levels in the surrounding environment. Our goal was to create a portable device that would allow wearers to sense air quality information in their immediate area as a way to promote mindfulness about environmental issues.

There were two main issues with our first design. First, the headphones were inflexible, which meant they only fit a limited range of head sizes. With our second iteration we wanted to come up with a more inclusive form factor. Second, our headphones used piezo speakers, which limited the range of sounds we could produce to beeps and buzzes. These sounds were unpleasant for wearers to listen to and thus would not encourage use of the device.

Our goal for the second half of the semester was to address these two issues. One idea we had was to create a mobile application that would pair with the headphones. We presented our first version of this idea together with the headphones at the NYC Media Lab on November 6th. The feedback we received was generally positive, but underlined the need for a more integrated presentation.

Chen presents Soundsorial at NYC Media Lab
View of Soundsorial table at NYC Media Lab

Usability Test
Chen conducted a usability evaluation of the Soundsorial device as part of an assignment for another class. We used the feedback she received from participants in the redesign of our device. Chen tested the device on ten users. She set it up as if it were in a gallery setting, on a pedestal accompanied by a short written description of the project. Before she let them interact with the device, she asked people what sound they expected it to make and how they thought it might turn on. Then she let the users try the device. Once they were done she asked them how they felt about the sound, shape, and texture.

Usability Evaluation

Many of Chen’s findings reflected what we already knew; however, the test gave us new insights as well. Five out of six participants thought the device should activate automatically when they put it on. Testers thought the device should either play nature sounds, like birds or crickets, or play music. They also suggested that rising particle levels should introduce static into those pleasant sounds. Participants liked the wooden material visually, but disliked the fact that the headphones were not adjustable.

User Test Documentation
Participant testing Soundsorial

We brainstormed ideas for how the new device might look and function. We considered a couple of options, including a redesigned headset, a pin that could be clipped to the body, and a phone case. At this stage we are thinking that a phone case might be the best option for several reasons.

First, it solves the sizing issue while allowing us to keep the wooden aesthetic. Second, it lets us utilize the built-in capabilities of the phone to produce better-quality sounds and visualizations. It also opens up the possibility of tracking positional data and connecting to existing APIs in order to compare Soundsorial’s sensor data to information published by other sources.

Headphone concept sketches
Pin concept sketches

Software Mockup
We created mockups of the software by connecting the existing Soundsorial device to the Processing development environment. We tried many variations before arriving at our final design, experimenting with various types of sound and with different ways of displaying the particle information.

Ultimately we decided to keep it visually simple. We used a webcam to capture what the device is seeing so that the user could use the device to “see” the invisible particles in the air. The number of particles drawn increases and decreases based on the dust level readings.
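The core of this visualization is just a linear mapping from the sensor reading to a particle count. The logic can be sketched in plain Java along these lines (the names and ranges are illustrative, not taken from our actual Processing sketch):

```java
// Sketch of the mapping behind the visualization: a dust reading is
// scaled linearly to a number of particles to draw on screen.
// Names and ranges are hypothetical; the real sketch read live sensor data.
public class ParticleMapper {
    // Linear re-mapping, equivalent to Processing's built-in map().
    static float map(float value, float inLow, float inHigh,
                     float outLow, float outHigh) {
        return outLow + (outHigh - outLow) * (value - inLow) / (inHigh - inLow);
    }

    // Dust reading (e.g. 0-1023 from an analog sensor) to particle count.
    static int particleCount(int dustReading) {
        return Math.round(map(dustReading, 0, 1023, 0, 500));
    }

    public static void main(String[] args) {
        System.out.println(particleCount(512)); // a mid-range reading
    }
}
```

Each frame, the sketch would then draw `particleCount(reading)` particles, so the on-screen density rises and falls with the dust level.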

For the app mockup demonstration we used a program called LiveView to mirror the Processing sketch on a mobile phone, showing how the app might be used on a mobile device.

Other Processing Experiments
Below is a selection of videos documenting some of our other Processing experiments. In this version the particles change color when they reach a certain threshold. This could be used to indicate that they have reached a dangerous level.

In this version we created a graph of the data.

In these versions we tried using text and an image of pollen as the particle.
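The color-change experiment above comes down to a simple threshold check each frame. A minimal sketch of that logic in plain Java (the danger level and colors are assumed values, chosen for illustration only):

```java
// Hypothetical threshold check for the color-change experiment:
// particles switch to a warning color once the dust reading crosses
// an assumed danger level.
public class ThresholdColor {
    static final int DANGER_LEVEL = 700; // assumed raw sensor threshold

    // Returns an RGB color: grey below the threshold, red at or above it.
    static int[] particleColor(int dustReading) {
        if (dustReading >= DANGER_LEVEL) {
            return new int[] {220, 40, 40};   // red: dangerous level
        }
        return new int[] {180, 180, 180};     // grey: normal level
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(particleColor(800)));
    }
}
```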

Enclosure and Next Steps
Below are images of a clip-on version of the enclosure. In order to make this into a sellable device, the hardware would have to be miniaturized; the goal here is simply to show proof of concept. Ideally we would like to include other sensors in the case as well, for example a carbon monoxide sensor and a humidity sensor, whose data could be visualized in the app too. The device would connect to the phone app via Bluetooth. Once miniaturized, the sensor could potentially fit into a phone case as well.

SoundSorial 2 with box
Soundsorial 2 front view
Soundsorial 2 back view
Soundsorial 2 side view

Link to Final Presentation https://drive.google.com/file/d/0B5WRGcXRY9vvODFZT29XZEdqZFE/view

Final Project Proposal

Chen and I will continue with our project SoundSorial for our final. We have three goals for the next stage of the project.

  • Revisit the form factor
  • Experiment with sound and visual feedback
  • Experiment with presentation/performative aspect of the piece


This week we presented SoundSorial at NYC Media Lab. We received lots of positive feedback from visitors. However, it was obvious that the form factor is a problem: it is currently too large and too inflexible to fit all head shapes. We want to experiment with different materials and shapes.

We also connected SoundSorial to Processing, through which we can manipulate actual audio files as well as create visual feedback. This opens up many new possibilities for how we express the data we are collecting.

Eventually, we would like to make a companion app for SoundSorial that will allow users to listen to music and podcasts as they normally would, disrupting their listening experience with static or other audio feedback when they enter an area that is above a certain level of particulate pollution.
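One way the static disruption could work is to blend noise into the audio in proportion to how far the pollution reading exceeds a threshold. A minimal sketch in plain Java, per audio sample (the threshold and ranges are assumptions, not from our prototype):

```java
import java.util.Random;

// Sketch of the static-disruption idea: mix white noise into an audio
// sample in proportion to how far a normalized pollution reading
// exceeds an assumed threshold. Below the threshold, audio is untouched.
public class StaticMixer {
    static final double THRESHOLD = 0.5; // assumed normalized pollution level

    // Returns the sample blended with noise; clean below the threshold.
    static double mix(double sample, double pollution, Random rng) {
        // Wet/dry factor: 0 at the threshold, 1 at maximum pollution.
        double wet = Math.max(0.0,
                Math.min(1.0, (pollution - THRESHOLD) / (1.0 - THRESHOLD)));
        double noise = rng.nextDouble() * 2.0 - 1.0; // white noise in [-1, 1]
        return (1.0 - wet) * sample + wet * noise;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        System.out.println(mix(0.3, 0.2, rng)); // below threshold: unchanged
    }
}
```

Applied to every sample of a song or podcast, this would leave clean audio in healthy air and gradually drown it in static as particulate levels rise.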

Before we try to create an app, however, we want to explore the possibilities of this project. Now that we have a working prototype we want to explore the potential types of audiovisual feedback we can create, what it could look like, and how it might be presented. For example, what if SoundSorial became a performance piece?

The first prototype determined the ‘who,’ the ‘what,’ and the ‘why’ of our project. With the second stage of development we want to explore the possibilities of ‘how.’ For our final we envision presenting a series of material, visualization, and presentation experiments.