FAQ

Who are you?

I am Michael Perl. I hold a BA in Music, Science, and Technology from Stanford.

Why did you create Human-Cyborg Relations?

As an audio enthusiast and archivist, I have always been fascinated by the historic sound design of Star Wars. For all the attention paid to Star Wars' visual design, relatively little has been paid to its sound design. Tens of thousands of fans can tell you the color of a particular panel on a droid with 5 seconds of screen time, yet few can articulate how R2's voice became a cultural phenomenon that still resonates nearly half a century later.

In 2020, I thought it would be fun to deep dive on some of my favorite examples of Star Wars' sound design.

I found that the same design process pioneered by Ben Burtt in creating R2's voice could be used to convert static sounds of any kind into animated, organic, infinitely varied audio experiences. Thus, Human-Cyborg Relations was born.

How can I contact you?

Please note that H-CR was simply a passion project that I undertook for historical and charitable purposes. While I do my best to revisit it periodically in between startups, I unfortunately don't have the bandwidth to help with implementation or troubleshooting questions. Please direct those to the droid building forums or Facebook groups. For other inquiries, please fill out this form and I will do my best to get back to you!

Are you profiting from this project?

Absolutely not. All proceeds support Force for Change and FIRST. The software is entirely free and has no ads or monetization of any kind. Users are instead encouraged to donate to charity. Any proceeds from collaborations with 3rd party droid control platforms go to charity as well.

Will you create a vocalizer for ___ droid?

While each droid presents a unique set of challenges, H-CR contains a powerful set of generalized tools that can be applied to droids of all makes and models.

I am primarily interested in historic droids. The resulting vocalizer software is fun to use, but there is also historical value in researching and reverse-engineering these famous examples of experimental sound design.

If you have a droid in mind, please feel free to reach out.

How about Chopper?

A Chopper vocalizer would actually be a fascinating candidate for H-CR's tech. However, these vocalizers require immaculately pure source audio: there can't be any artifacts, because of the way phonemes get mixed and matched. Isolating all of Chopper's lines from the show's audio would be an insurmountable task.

If Dave Filoni himself requests this vocalizer and provides the raw audio, I'll make it happen. So if you want a Chopper vocalizer, please lobby Dave!

Can I use this software with my project?

Yes, as long as it is for personal use only. Commercial use of H-CR is not permitted.

Can you port the R2 Vocalizer to ___ platform?

H-CR vocalizers require specialized software and hardware. General-purpose boards, such as the Arduino and Raspberry Pi, are not suitable targets because they lack realtime audio-processing capabilities.

H-CR vocalizers do not simply play one raw sound; they play and process sounds polyphonically in realtime. Instead of "play audio clip 1," it's more like "play clip 1 from sample 595815 to sample 619681, retime those samples to a particular duration, linearly modulate the pitch over the course of playback, and treat it with a particular EQ profile and other filters," and much, much more... then superimpose that on top of 4 other grains with equally complex processing, all in realtime.
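To make the description above concrete, here is a minimal pure-Python sketch of that style of grain processing. The function names (`render_grain`, `mix`) and the nearest-neighbour resampling are illustrative assumptions, not H-CR's actual implementation, which runs in realtime on dedicated audio hardware.

```python
# Sketch of grain playback: extract a span of samples, apply a linear
# pitch ramp by reading at a changing rate, then retime the result to a
# target duration. Names and methods are hypothetical, not H-CR's API.

def render_grain(clip, start, end, out_len, pitch_start=1.0, pitch_end=1.0):
    """Read clip[start:end] at a linearly changing rate (pitch ramp),
    then stretch/squeeze the result to exactly out_len samples."""
    grain = clip[start:end]
    if not grain or out_len <= 0:
        return []
    # Pass 1: variable-rate read simulates a linear pitch modulation.
    read, repitched = 0.0, []
    n = len(grain)
    while read < n:
        repitched.append(grain[int(read)])
        frac = read / n                               # 0.0 -> 1.0 across grain
        rate = pitch_start + (pitch_end - pitch_start) * frac
        read += max(rate, 1e-6)                       # guard against rate <= 0
    # Pass 2: retime to the requested duration (nearest-neighbour).
    m = len(repitched)
    return [repitched[min(int(i * m / out_len), m - 1)] for i in range(out_len)]

def mix(*voices):
    """Superimpose several rendered grains (polyphony), padding short ones."""
    length = max(map(len, voices))
    return [sum(v[i] if i < len(v) else 0 for v in voices) for i in range(length)]
```

A real implementation would add EQ and filtering per grain and do all of this sample-by-sample under a hard realtime deadline, which is exactly why general-purpose boards struggle.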

A typical sound file plays at 44,100 samples per second, roughly 1,500 times the frame rate of a typical 30 fps video source. And unlike video, where dropped frames are a minor nuisance, a single dropped audio sample usually results in a loud "pop."
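The arithmetic behind that comparison, assuming CD-quality audio and 30 fps video:

```python
# Samples per second of CD-quality audio vs. frames per second of video.
AUDIO_RATE = 44_100   # samples/second
VIDEO_RATE = 30       # frames/second (a common rate; assumption)

ratio = AUDIO_RATE / VIDEO_RATE
print(round(ratio))   # -> 1470: each audio sample deadline is ~1470x tighter
```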

This is why realtime sound processing requires hardware and software with specialized audio capabilities. A port to the Teensy Audio platform is in progress. The Teensy and its accompanying audio shield are small, cheap, efficient, and widely available.

Will we be able to interface with the Teensy port of the R2 Vocalizer?

H-CR on the Teensy will offer access to both input and output. In this arrangement, H-CR acts as the emotional and vocal brain for your project.

Your software can provide an input stimulus encoded as a good/bad/maddening/frightening event with an intensity value. This stimulus can come from a user pressing a button on an RC remote, from an advanced autonomous sensor array, or from any other system you can dream up. H-CR will then provide the appropriate emotional processing, vocalize the stimulus, and return data describing the new emotional state.

You can use the returned data to set an appropriate light color, automate a movement, etc.
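As a rough sketch of what that exchange could look like from the caller's side, here is a toy model of the stimulus/response loop. Every name here (`Kind`, `Stimulus`, `EmotionalState`, `vocalize`) and the update rule are hypothetical illustrations, not H-CR's actual Teensy interface.

```python
# Hypothetical model of the stimulus -> emotional-state exchange described
# above. The real H-CR interface on the Teensy may differ substantially.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    GOOD = "good"
    BAD = "bad"
    MADDENING = "maddening"
    FRIGHTENING = "frightening"

@dataclass
class Stimulus:
    kind: Kind
    intensity: float          # e.g. 0.0 (mild) to 1.0 (extreme); assumption

@dataclass
class EmotionalState:
    happiness: float          # -1.0 (miserable) to 1.0 (delighted)
    agitation: float          # 0.0 (calm) to 1.0 (frantic)

def vocalize(state: EmotionalState, s: Stimulus) -> EmotionalState:
    """Toy emotional processing: nudge the state by the stimulus, clamped.
    A real system would also trigger the actual vocalization here."""
    clamp = lambda lo, hi, x: max(lo, min(hi, x))
    sign = 1.0 if s.kind is Kind.GOOD else -1.0
    agitate = s.intensity if s.kind in (Kind.MADDENING, Kind.FRIGHTENING) else 0.0
    return EmotionalState(
        happiness=clamp(-1.0, 1.0, state.happiness + sign * s.intensity),
        agitation=clamp(0.0, 1.0, state.agitation + agitate),
    )

# The returned state can then drive lights, movement, etc.
state = vocalize(EmotionalState(0.0, 0.0), Stimulus(Kind.GOOD, 0.5))
```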

With thanks to Disney's Lee Towersey and Michael McMaster.
All proceeds support Force for Change and FIRST®.