
This site has been archived and will no longer be updated.
You can find my new homepage at neilpahl.com.

This project was an exercise in digital signal processing. The goal was to translate visual information into audio cues so that a blind person could 'hear' what everyone else can see.


Using two web cameras, image processing was performed on the video feeds to detect objects and determine each object's distance and bearing. This was all done in real time.
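The distance and bearing can be recovered from a calibrated stereo pair using the standard depth-from-disparity relation, Z = f·B/d. The sketch below illustrates the idea; the focal length, baseline, resolution, and field of view are hypothetical calibration values, not the numbers used in this project.

```python
# Sketch: depth and bearing from a calibrated stereo camera pair.
# All calibration constants below are illustrative assumptions.
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # separation between the two cameras (metres)
IMAGE_WIDTH = 640     # horizontal resolution in pixels
FOV_DEG = 60.0        # horizontal field of view (degrees)

def depth_from_disparity(disparity_px):
    """Z = f * B / d -- nearer objects produce larger disparity."""
    if disparity_px <= 0:
        return float('inf')   # no disparity: object at (effectively) infinity
    return FOCAL_PX * BASELINE_M / disparity_px

def bearing_from_pixel(x_px):
    """Map a horizontal pixel position to an angle off centre (degrees)."""
    offset = x_px - IMAGE_WIDTH / 2
    return (offset / (IMAGE_WIDTH / 2)) * (FOV_DEG / 2)

print(depth_from_disparity(70))   # 1.2 (metres) for a 70-px disparity
print(bearing_from_pixel(320))    # 0.0 degrees, dead centre
```

In practice the disparity would come from matching the same object across the two video feeds; the formulas above only cover the geometry once that match is found.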

Then, 3D visual data was mapped to the sound domain as follows:

Presence of object -> Pulsing sound
Distance to object -> Period of Pulse
Height of object -> Sound Pitch
Left/Right Positioning -> Left/Right Stereo Fade
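The mapping above can be sketched as a single function from object parameters to audio parameters. The ranges, clamps, and equal-power panning formula here are illustrative assumptions, not the project's actual tuning.

```python
import math

def object_to_audio(distance_m, height_m, x_norm):
    """Map one detected object to (pulse period, pitch, stereo gains).

    distance_m: distance to the object in metres
    height_m:   object height in metres
    x_norm:     horizontal position, -1.0 (far left) to 1.0 (far right)
    """
    # Distance -> period of pulse: nearer objects pulse faster
    # (assumed range 0.1 s to 2.0 s).
    pulse_period_s = max(0.1, min(2.0, distance_m / 2.0))
    # Height -> pitch: taller objects sound higher
    # (assumed 200 Hz to 1000 Hz span over 0 m to 2 m).
    pitch_hz = 200.0 + 400.0 * max(0.0, min(2.0, height_m))
    # Left/right position -> stereo fade, using equal-power panning.
    angle = (x_norm + 1.0) * math.pi / 4.0   # 0 .. pi/2
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    return pulse_period_s, pitch_hz, (left_gain, right_gain)

# An object 1 m away, 1 m tall, dead centre:
period, pitch, (left, right) = object_to_audio(1.0, 1.0, 0.0)
print(period, pitch)   # 0.5 600.0 -- fast pulse, mid pitch, equal gains
```

Equal-power panning keeps perceived loudness roughly constant as an object moves across the field of view, which is one reasonable choice for the Left/Right stereo fade.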