I am Michael Perl. I am a 9x founder with a degree in Music, Science, and Technology from Stanford.
As an audio enthusiast and archivist, I have always been fascinated by the historic sound design of Star Wars. For all the attention paid to Star Wars’ visual design, relatively little has been paid to its sound design. Tens of thousands of fans can tell you the color of a particular panel on a droid with 5 seconds of screen time. Yet no one could articulate how R2’s voice became a cultural phenomenon that still resonates nearly half a century later.
In 2020, I thought it would be fun to do a deep dive into some of my favorite examples of Star Wars' sound design.
I found that the same design process pioneered by Ben Burtt in creating R2's voice could be used to convert static sounds of any kind into animated, organic, infinitely varied audio experiences. Thus, Human-Cyborg Relations was born.
No. All proceeds support FIRST. The software is entirely free and has no ads or monetization of any kind; users are instead asked to donate to charity. Any proceeds from collaborations with third-party droid control platforms also go to charity.
H-CR contains a powerful set of generalized tools that can be applied to droids of all makes and models. That said, I am primarily interested in using these tools on historic droids that represent the sound-design artistry of the analog era.
A Chopper vocalizer would be a fascinating candidate for H-CR's tech. However, these vocalizers require immaculately pure source audio: because of the way phonemes get mixed and matched, there can't be any artifacts. Isolating all of Chopper's lines from the show's audio would be an insurmountable task. Also... Chopper is not historic. Sorry!
Yes, as long as it is for personal use only. Commercial use of H-CR is not permitted.
Currently, the vocalizers support computers, mobile devices, and the Teensy microcontroller. I have no plans to support additional platforms.
H-CR vocalizers require specialized software and hardware. General-purpose boards such as the Arduino and Raspberry Pi are not suitable targets because they lack realtime audio processing capabilities. The Teensy pairs dedicated audio hardware with a companion audio library, which makes this possible.
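To give a sense of what that looks like in practice, here is a minimal, illustrative sketch using the PJRC Audio Library that ships with Teensyduino. The objects and routing below are generic examples, not code from H-CR: the point is that audio objects are wired into a graph that the library renders in small blocks from an interrupt, so playback keeps running no matter what the main loop is doing.

```cpp
// Illustrative Teensy Audio Library sketch (not H-CR code).
// Audio objects form a processing graph; the library renders the graph in
// fixed-size blocks from an interrupt, independent of loop() timing.
#include <Audio.h>

AudioSynthWaveform   osc;        // tone source standing in for a grain player
AudioEffectEnvelope  env;        // shapes each beep with an attack/decay
AudioOutputI2S       i2sOut;     // I2S output on the Teensy Audio Shield
AudioControlSGTL5000 codec;      // codec chip on the Audio Shield

AudioConnection c1(osc, 0, env, 0);
AudioConnection c2(env, 0, i2sOut, 0);   // left channel
AudioConnection c3(env, 0, i2sOut, 1);   // right channel

void setup() {
  AudioMemory(12);               // reserve audio block buffers
  codec.enable();
  codec.volume(0.5);
  osc.begin(0.4, 880.0, WAVEFORM_SINE);
}

void loop() {
  env.noteOn();                  // audio keeps flowing even while loop() waits
  delay(150);
  env.noteOff();
  delay(350);
}
```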
H-CR vocalizers do not simply play one raw sound. They play and process sounds polyphonically in realtime. Instead of "play audio clip 1," it's more like "play clip 1 from sample 595815 to sample 619681, retime those samples to a particular duration, modulate the pitch at different intervals over the course of playback, treat with equalization and other filters, then superimpose all of that on top of 4 other grains with equally complex processing... 44,100 times per second."
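For readers who want something more concrete, the following is a rough, offline sketch of that per-grain processing in plain C++. The struct and field names are hypothetical, and the real vocalizers do this polyphonically in realtime with proper interpolation and filtering, but the shape of the work is the same: pick a sample range, retime it, modulate pitch across the grain, apply an envelope, and sum it with other grains.

```cpp
// Conceptual sketch of a single grain render (illustrative, not H-CR's DSP).
// Reads a sample range from a source clip, retimes it to a target length,
// modulates pitch over the grain, applies an envelope, and sums the result
// into an output buffer so that overlapping grains mix polyphonically.
#include <cmath>
#include <cstddef>
#include <vector>

struct Grain {
  std::size_t startSample;   // e.g. 595815
  std::size_t endSample;     // e.g. 619681
  std::size_t targetLength;  // retimed duration, in samples
  float basePitch;           // playback-rate multiplier (1.0 = original pitch)
  float pitchModDepth;       // depth of the pitch wobble across the grain
  float gain;
};

void renderGrain(const std::vector<float>& clip, const Grain& g,
                 std::vector<float>& out) {
  if (g.targetLength == 0 || g.endSample <= g.startSample) return;
  const double kTwoPi = 6.283185307179586;
  const double srcLength = double(g.endSample - g.startSample);
  const double baseStep = srcLength / double(g.targetLength);  // retiming ratio
  double srcPos = double(g.startSample);

  for (std::size_t i = 0; i < g.targetLength && i < out.size(); ++i) {
    const double phase = double(i) / double(g.targetLength);   // 0..1 through the grain
    // A slow sine on the playback rate gives the warbling pitch contour.
    const double rate = g.basePitch + g.pitchModDepth * std::sin(kTwoPi * 3.0 * phase);
    const std::size_t idx = std::size_t(srcPos);
    if (idx >= clip.size() || idx >= g.endSample) break;
    // Triangular envelope so overlapping grains fade in and out smoothly.
    const float envelope = float(1.0 - std::fabs(2.0 * phase - 1.0));
    out[i] += clip[idx] * envelope * g.gain;
    srcPos += baseStep * rate;  // advance through the source at the modulated rate
  }
}
```

In a realtime version this loop would run inside the audio callback, using interpolated reads instead of the nearest-sample lookup above, with equalization and other filters applied per grain before the grains are mixed.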
Please visit the page for your respective vocalizer. Follow the "Documentation" link. Read the docs in their entirety. You will find your answers in there!
Please note that H-CR was simply a passion project that I undertook for historical and charitable purposes between startups. While I do my best to revisit it periodically, I unfortunately don't have the bandwidth to help with implementation or troubleshooting questions. Please direct those to the Human-Cyborg Relations Facebook community.
For other inquiries, please fill out this form and I will do my best to get back to you.