The unlikely resurrection story began when archivist Chris Hunter grew curious about 13 undocumented film canisters tucked away on a bottom shelf among 5 million items in the basement archives of the Schenectady Museum & Suits-Bueche Planetarium...
The pallophotophone, developed by GE engineer Charles Hoxie in the early 1920s, bridged the gap between cinema's silent era and "talkies." The big, boxy recorder used 35 mm sprocketless film, each strip carrying a series of eight to 10 parallel soundtracks etched on acetate and nitrate stock. It captured sound by bouncing light off a tiny vibrating mirror to expose each strip of film.
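Conceptually, reading one of those strips back is just measuring brightness along each of the parallel tracks. Here's a minimal sketch of the idea in Python -- my own toy model, not DeMuth's actual process, and it assumes variable-density tracks where brightness itself encodes the waveform:

```python
import numpy as np

def decode_variable_density_track(strip, n_tracks=8):
    """Decode parallel variable-density soundtracks from a scanned film strip.

    `strip` is a 2D array of pixel brightness (rows run along the film,
    columns across its width). Each of the n_tracks vertical bands carries
    one soundtrack as brightness variation along the length of the film.
    """
    h, w = strip.shape
    band = w // n_tracks
    tracks = []
    for i in range(n_tracks):
        # Averaging brightness across the band at each point along the film
        # recovers the light level, i.e. the audio sample at that instant.
        samples = strip[:, i * band:(i + 1) * band].mean(axis=1)
        # Center around zero and normalize to [-1, 1].
        samples = samples - samples.mean()
        peak = np.abs(samples).max()
        tracks.append(samples / peak if peak else samples)
    return tracks

# Synthetic demo: paint a sine wave into track 0's brightness, then decode it.
n = 1000
tone = np.sin(2 * np.pi * 5 * np.arange(n) / n)
strip = np.full((n, 80), 0.5)
strip[:, 0:10] += 0.4 * tone[:, None]
recovered = decode_variable_density_track(strip, n_tracks=8)[0]
```

The hard part of the real restoration, of course, wasn't the math -- it was building hardware that could transport eighty-year-old nitrate film past a light and a sensor without destroying it.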
DeMuth had very little to work with aside from a few archival photographs of the original machine. He had to scour eBay for old film reels and other long-out-of-production parts to build his device, and he wasn't at all confident it would work.
The end result: these dudes went from this:
To living audio. Which is really cool. If this is an example of a larger principle, I guess that principle would be called "information archeology" or something. It's similar in a way to finding an ancient, unknown written language -- like the story two weeks ago of a note scribbled in an unknown language 350 years ago, preserved along with hundreds of other documents in the ruins of a church that collapsed in Peru in the late 17th century. But different, since in this case what's recorded in the forgotten encoding is sound itself, not written words.
Whenever I read about the history of 20th-century audio and video recording, I feel a tinge of nostalgic regret that we live in an age where digital reproduction has rendered the science of perfecting mechanical recording methods irrelevant, because to me there's a lot more of a fantastic and magical character to the former field. We carved music into fucking solid objects.
I wanted to do a cursory bit of research to make sure this Edison article wasn't simplifying or exaggerating the find or its context for the sake of narrative before posting my comments here. It turns out most movie soundtracks have always used optical audio: the film is printed with a little line of varying width, and that line is the sound.
So that picture and sound can be played back at the same time, they are offset by several frames: on theater projectors, the audio pickup sits a fixed physical distance from the picture gate, calibrated so that the audio stays in sync. For an optical soundtrack, a light is shone through the film, and a variable-width transparent line controls how much of it passes through to the photocell, which sends an electrical signal on to the speakers. Later, magnetic audio tracks were developed too, which allowed for better quality and thus multi-track playback. But both methods had limitations: magnetic was expensive and deteriorated on the film; optical could only be mono -- there was too much background noise to play back more than one track. (More at http://www.howstuffworks.com/movie-sound1.htm, and a neat discussion touching on variable-density optical recording (as opposed to variable-width) and editing magnetic audio by running the film past a magnet to erase it at http://www.tomshardware.com/forum/42273-6-optical-audio-16mm-film.) Basically, the limit of purely mechanical film audio quality was reached in the '50s, but in practice theaters were still using the same technology as in the '30s.
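To make the variable-width scheme concrete, here's a toy model (mine, not from any of the articles above): the sample value sets how wide the clear slit is at each point along the film, and the "photocell" just sums whatever light gets through.

```python
import numpy as np

TRACK_WIDTH = 100  # pixels across the soundtrack area in this toy model

def encode(audio):
    """Map samples in [-1, 1] to clear-slit widths, one film row per sample."""
    widths = np.round((audio + 1) / 2 * TRACK_WIDTH).astype(int)
    film = np.zeros((len(audio), TRACK_WIDTH))
    for row, w in enumerate(widths):
        film[row, :w] = 1.0  # 1.0 = transparent, 0.0 = opaque
    return film

def decode(film):
    """Photocell: total transmitted light per row, rescaled back to [-1, 1]."""
    light = film.sum(axis=1)
    return light / TRACK_WIDTH * 2 - 1

audio = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 400, endpoint=False))
recovered = decode(encode(audio))
```

Notice the decoder needs no moving parts and no per-sample logic -- light through a slit onto a photocell literally is the sum. That simplicity is why the scheme survived from the '30s into the '70s.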
Dolby, then a one-year-old audio company, resolved the impasse and brought us closer to the digital age with the Dolby A noise-reduction system in 1966, which used a processor to separate audio into bands and deemphasize noise while a track was being recorded, and again during playback. This was next applied to make Philips' compact cassette invention suitable for music. Later, Dolby built a backwards-compatible stereo optical soundtrack for movie film, encoding a four-channel matrix into the same variable-width optical track to bring stereo and surround sound to theaters in '77, with Star Wars. Before then, because magnetic film had proven too expensive and mono Dolby A's impact was modest, theater-goers in the '70s were still listening to basically the same quality audio as the first on-film sound playback of the '30s:
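The matrix trick is worth sketching. Simplifying heavily (and leaving out the ±90° phase shift the real encoder applies to the surround channel), folding left/center/right/surround into the two tracks that fit on the film looks something like this:

```python
import numpy as np

G = 1 / np.sqrt(2)  # -3 dB mixing coefficient

def matrix_encode(L, C, R, S):
    """Fold four channels into two (Lt/Rt). Center goes equally into
    both tracks; surround goes into both, but out of phase."""
    Lt = L + G * C + G * S
    Rt = R + G * C - G * S
    return Lt, Rt

def matrix_decode(Lt, Rt):
    """Recover four channels: the sum pulls out center, the difference
    pulls out surround. Crosstalk between channels is the price."""
    return Lt, G * (Lt + Rt), Rt, G * (Lt - Rt)

# A center-only signal round-trips cleanly: since 2 * G**2 == 1,
# the decoded center equals the original and the surround stays silent.
t = np.linspace(0, 1, 100)
zero = np.zeros_like(t)
C_in = np.sin(2 * np.pi * t)
Lt, Rt = matrix_encode(zero, C_in, zero, zero)
L_out, C_out, R_out, S_out = matrix_decode(Lt, Rt)
```

An old mono projector just reads the whole track and hears everything, which is what made the format backwards-compatible; the cost is crosstalk -- in this simplified model, a left-only signal leaks into the decoded center at -3 dB.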
To forestall compatibility problems after a decade of theatres racing to install sound equipment and filmmakers rushing "talkies" into production, in the late 1930s the film industry adopted a standardized theatre playback response that today is called the "Academy" characteristic. While this resulted in a system of recording and playback that made it possible for just about any film to sound acceptable in any theatre in the world, it lacked the flexibility to incorporate improvements beyond the limitations of the 1930s. Indeed, well into the 1970s conventional optical sound reproduction in the theatre had a frequency response little wider than a telephone's.
Upon investigation, Dolby found that many of the limitations in optical sound stemmed directly from its significantly high background noise. To filter this noise, the high-frequency response of theatre playback systems was deliberately curtailed (the "Academy" characteristic). To make matters worse, to increase dialogue intelligibility over such systems, sound mixers were recording soundtracks with so much high-frequency pre-emphasis that high distortion resulted. (http://www.dolby.com/about/who-we-are/our-history/history-3.html)
So, thank god for digital enhancement.