4b. Applications of New Language: Transducer


We then focus on a project specifically designed as a proving ground for the issues raised by the new language. Transducer is a system for building simultaneous audio and visual constructions of sampled audio and three-dimensional form in realtime. Currently the process of creation on a computer remains a hidden task executed by men and women with abstruse skills behind closed doors. Transducer asks one to imagine a day when the process of editing and creating on a computer becomes a performance which an audience can easily comprehend. The content of this performance may be sufficiently complex to elicit multiple interpretations, but Transducer enforces the notion that the process should become transparent. Over the course of the next few months, this project will mature alongside the language which defines it. (In this way, the language becomes the design process.)

Currently Transducer consists of two computers: a Silicon Graphics Octane and an Intel Pentium Pro 200. The Silicon Graphics machine handles visual computation and output, and the Intel machine handles audio computation and output. In addition, a video projector illuminates a screen hanging from above. The user (or performer) acts upon the system with a mouse.

At first, the system presents a palette of cylindrical objects. As the user moves his or her mouse over each of the cylinders, he or she hears the sampled sound stream associated with that object. Each object has a representative color and shape corresponding to its sound stream. The motion of each object is also modelled with a unique physics model; thus two objects react differently to the same user input based on the internal "mass" and "drag" of each.
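The role of the per-object "mass" and "drag" parameters can be sketched in code. The following is a minimal illustration, assuming a simple damped-force integration; all names here are ours, and Transducer's actual physics code is not shown.

```python
# Minimal sketch: each sound/object integrates the same input force,
# but its unique "mass" and "drag" make its response differ.

class SoundObject:
    def __init__(self, mass, drag):
        self.mass = mass        # resistance to acceleration
        self.drag = drag        # linear damping on velocity
        self.position = 0.0
        self.velocity = 0.0

    def step(self, force, dt=0.1):
        # F = m*a, with simple linear drag opposing motion
        accel = (force - self.drag * self.velocity) / self.mass
        self.velocity += accel * dt
        self.position += self.velocity * dt
        return self.position

heavy = SoundObject(mass=4.0, drag=0.5)
light = SoundObject(mass=1.0, drag=0.5)
for _ in range(10):             # an identical mouse "push" on both objects
    heavy.step(force=1.0)
    light.step(force=1.0)
# the lighter object has travelled farther after the same input
```

In such a scheme, a sluggish, "heavy" cylinder and a skittish, "light" one behave as two distinct personalities under the same mouse gestures.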

By clicking the left mouse button while over an object, the user selects that sound/object for manipulation. The palette of objects drifts behind the camera, and the selected object moves to the "manipulation zone." While the object is in this area, clicking on it and dragging the mouse up or down both stretches or contracts the object and raises or lowers the frequency of the associated sound. Clicking on the object and dragging the mouse left or right affects the transparency of the object and the amplitude of its sound stream. Clicking anywhere not on the object brings up the palette of all sound/objects, with the current sound/object visible behind. Additional sound/objects can be previewed, and any number of sound/objects can be brought into the manipulation zone.
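The manipulation-zone mappings just described can be written down directly: vertical drag couples an object's length to its sound's frequency, and horizontal drag couples transparency to amplitude. The function names, sensitivities, and clamping below are illustrative assumptions, not Transducer's actual code.

```python
# Sketch of the paired audio/visual mappings in the manipulation zone.

def apply_vertical_drag(scale, pitch_ratio, dy, sensitivity=0.01):
    """Dragging up (dy > 0) stretches the object and raises the pitch together."""
    factor = max(1.0 + sensitivity * dy, 0.1)   # keep the object from collapsing
    return scale * factor, pitch_ratio * factor

def apply_horizontal_drag(alpha, amplitude, dx, sensitivity=0.01):
    """Dragging right (dx > 0) makes the object more opaque and the sound louder."""
    level = min(max(alpha + sensitivity * dx, 0.0), 1.0)
    return level, level                          # opacity and amplitude move as one

scale, pitch = apply_vertical_drag(1.0, 1.0, dy=50)     # drag up 50 pixels
alpha, amp = apply_horizontal_drag(0.5, 0.5, dx=-20)    # drag left 20 pixels
```

The point of the pairing is that every visual change has an audible consequence and vice versa, which is what makes the editing legible to an audience.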

In this way a single user or performer is able to build simultaneous visual and audio constructions in realtime. The user can examine interrelationships between multiple, diverse sound sources and a corresponding visual form.

The current configuration of this project gives us the following system model in our new language:

We see a single human who can manipulate a visual output. This visual output affects an audio output. The human can, in turn, see the visual output and hear the audio output, which modulate his or her communication with the system.
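The system model above can also be written down as a directed graph of communication paths, which makes the feedback loop explicit. The node names and the dictionary representation are ours, added for illustration.

```python
# The current Transducer system model as a directed graph of communication.

links = {
    "human":  ["visual"],            # the performer manipulates the visuals
    "visual": ["audio", "human"],    # visuals drive the audio and are seen
    "audio":  ["human"],             # audio is heard, closing the loop
}

def paths_into(node):
    """All sources that communicate into the given node."""
    return [src for src, dests in links.items() if node in dests]
```

Everything the human perceives (`paths_into("human")`) arrives via the visual and audio outputs, while the human's only outgoing path is through the visuals.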

This is where we begin to see the power of our visual language. Much as the periodic table allowed chemists to envision elements that had not yet been discovered, our visual structures let us envision configurations that have not yet been built. By drawing a single additional line of communication, we can propose that the audio structure communicate back to the visual.

This additional path of communication raises issues of implementation on both the audio and the visual ends. The audio system must have more robust digital signal processing capabilities in order to tell the visual system how to represent the audio streams. A given audio source is split into a set of pitch ranges.
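The kind of analysis this requires can be sketched as splitting a signal into pitch ranges and measuring the energy in each. A naive DFT stands in below; the text does not specify what analysis Transducer's audio system actually performs.

```python
# Sketch: split a sampled signal into pitch ranges and measure each range's energy.
import math

def spectrum(samples):
    """Magnitude-squared of each DFT bin up to the Nyquist frequency (naive DFT)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(re * re + im * im)
    return mags

def band_energies(samples, rate, bands):
    """Total energy inside each (low_hz, high_hz) pitch range."""
    mags = spectrum(samples)
    n = len(samples)
    return [sum(m for k, m in enumerate(mags) if low <= k * rate / n < high)
            for low, high in bands]

# a pure 440 Hz tone should land almost entirely in the 300-600 Hz range
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(512)]
low, mid, high = band_energies(tone, rate, [(0, 300), (300, 600), (600, 1200)])
```

Per analysis frame, the audio system would send the visual system one number per pitch range rather than the raw sample stream.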

In turn, new visual representations must develop with the additional data communication. Each pitch range is represented as a single cylinder. The amplitude of a pitch can be seen in the diameter of its corresponding cylinder. A complete sound stream forms a solid structure of combined cylinders. The sound structures bend and transform based on the changing audio source:
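The band-to-cylinder mapping just described can be sketched directly: each pitch range drives one cylinder's diameter, and easing the diameters toward their targets makes the structure bend rather than jump. The names, scaling, and smoothing constant below are illustrative assumptions.

```python
# Sketch: map per-band amplitudes to cylinder diameters in a sound structure.

def cylinders_from_bands(band_amplitudes, base_diameter=0.2, gain=1.0):
    """One cylinder diameter per pitch range, driven by that range's amplitude."""
    return [base_diameter + gain * a for a in band_amplitudes]

def update_structure(structure, band_amplitudes, smoothing=0.8):
    """Ease each diameter toward its target so the form bends, not jumps."""
    targets = cylinders_from_bands(band_amplitudes)
    return [smoothing * old + (1 - smoothing) * new
            for old, new in zip(structure, targets)]

structure = [0.2, 0.2, 0.2]                              # a quiet three-band structure
structure = update_structure(structure, [0.0, 1.0, 0.3])  # one frame of new audio
```

A loud middle register thus swells its cylinder while the silent bands stay slim, and the combined cylinders read as one solid, breathing form.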

The additional path of communication allows for significantly more elegant visual constructions. This city of cylindrical forms becomes a living, interactive diagram of the realtime music editing. By adding beat-tracking and sound-analysis code developed by Eric Scheirer of the Machine Listening Group, the sound structures will intercommunicate based on the synchronicity of their musical rhythms:
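One toy measure of such synchronicity is the fraction of one stream's beats that land close to a beat of another. This stands in for, and does not reproduce, the beat-tracking analysis mentioned above; the function name and tolerance are ours.

```python
# Toy sketch: score how closely two streams' beat times align.

def synchronicity(beats_a, beats_b, tolerance=0.05):
    """Fraction of beats in A landing within `tolerance` seconds of a beat in B."""
    hits = sum(any(abs(a - b) < tolerance for b in beats_b) for a in beats_a)
    return hits / len(beats_a)

steady = [0.0, 0.5, 1.0, 1.5, 2.0]      # beat times in seconds
offbeat = [0.26, 0.76, 1.26, 1.76]      # same tempo, shifted out of phase
```

Sound structures whose scores approach 1.0 might visually gravitate toward one another, while unsynchronized structures drift apart.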

As an additional step, we can imagine that this system becomes even more compelling (and poses more questions!) when an additional human is added to the model. Perhaps a drummer:

In the above image we see some of the same pictographic building blocks from the design language transferred into a sketch of a proposed performance space. This can be seen as an advantage of the language. It is an intuitive step to transform the sketch into a system model diagram: