Yamaguchi Center for Arts and Media (YCAM) – Japan (2006)

Venue: Yamaguchi Center for Arts and Media (YCAM), Studio B
Date: August 9th 2006 – October 9th 2006
Special Site

CREDIT
concept & composition

Keiichiro Shibuya + Takashi Ikegami

multiphonic 3-dimensional programming

evala

program development

Yuta Ogai

lighting control programming

Daito Manabe, Takayuki Ito

production assistant

maria

technical support

YCAM InterLab

Curated by

Kazunao Abe (YCAM)

“Filmachine” is a black spherical skeleton surrounded by 24 speakers suspended from the ceiling. It plays back streams of sound files whose positions are completely controlled by the 3D acoustic system “Huron”, accompanied by patterns of flashing, running LEDs. The floor is composed of black cubes that converge toward the center of “filmachine”, so that visitors can experience its sound from various positions.

 

The 3D acoustic system “Huron”, developed by Lake Technology in Australia, makes it possible to program precisely the orientation, movement and location of sound images along a timeline. The system not only provides an ideal listening experience at a fixed point, but also creates a new acoustic space in which we can perceive the autonomous movement of sound streams in a virtual 3D space. “Filmachine” uses various kinds of trajectories, ranging from variations of spiral motions to the strange attractors (e.g. Lorenz, Rössler and Langford) that appear in nonlinear systems, and the spatialized sound patterns wander among these complex dynamics on different time scales.
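
As an illustration of how such a trajectory might be generated, the sketch below integrates the Lorenz system and emits a stream of (x, y, z) coordinates that could steer the position of one sound image. The parameter values, time step and room scaling are conventional, illustrative choices, not those used in the installation.

```python
# Minimal sketch: a Lorenz-attractor trajectory that could steer one sound image.
# Parameters (sigma, rho, beta) and the scaling are conventional choices,
# not the values used in "filmachine" itself.
import numpy as np

def lorenz_trajectory(n_steps=10_000, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0          # arbitrary initial condition
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

if __name__ == "__main__":
    path = lorenz_trajectory()
    # Rescale to a hypothetical room of roughly 6 x 6 x 4 metres,
    # centred on the listener, before handing the points to a spatializer.
    room = path / np.abs(path).max(axis=0) * np.array([3.0, 3.0, 2.0])
    print(room[:5])                   # first few positions of the sound image
```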

 

Sensory experience in relation to sound and perception has received much attention in recent years, in step with rapid and sophisticated technological progress in sound systems. However, such technology is often reduced to a mere “sound effector” applied to audio output under the conventional principle that any music can be represented as a linear superposition of basic frequencies with appropriate amplitudes. In contrast, “filmachine” puts the stress on the movement of the sound stream: it provides a new sensory experience by composing the transport of sound images with different time structures. The “Huron” system enables this by freely moving and placing sound images independently of the real speaker locations.
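
One simple way to place a virtual sound image independently of the physical speaker positions is distance-based amplitude panning. The sketch below is such a toy renderer for a hypothetical 24-speaker ring; it is an assumption made for illustration and is not the rendering algorithm used by Huron.

```python
# Toy distance-based amplitude panning over a hypothetical 24-speaker ring,
# to illustrate placing a sound image independently of speaker positions.
# This is NOT the Huron rendering algorithm.
import numpy as np

def speaker_ring(n=24, radius=3.0, height=2.5):
    """24 speakers on a circle near the ceiling (hypothetical layout)."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([radius * np.cos(angles),
                     radius * np.sin(angles),
                     np.full(n, height)], axis=1)

def pan_gains(image_pos, speakers, rolloff=2.0):
    """Gain per speaker, falling off with distance to the virtual image."""
    d = np.linalg.norm(speakers - image_pos, axis=1)
    g = 1.0 / (d ** rolloff + 1e-6)
    return g / np.sqrt(np.sum(g ** 2))   # normalise total power

if __name__ == "__main__":
    spk = speaker_ring()
    gains = pan_gains(np.array([1.0, -0.5, 1.8]), spk)
    # Each mono sample would be multiplied by these 24 gains and sent
    # to the corresponding speaker channels.
    print(np.round(gains, 3))
```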

 

The novel time structure of each sound file is based on the third-term music theory begun by Keiichiro Shibuya and Takashi Ikegami at Tokyo/ICC in December 2005. Third-term music aims to go beyond drone and melody, known as the two fundamental elements of music composition, and to open the possibility of a third element, such as the composition of timbre and sound dynamics, by introducing a meta-framework.

 

Sound files are created by combining and modifying initial bit strings through virtual genetic/evolutionary processes. Such a variation mechanism for bit strings was suggested in the paper “Coevolution of Machines and Tapes” by Takashi Ikegami and Takashi Hashimoto in 1992. The paper elaborates two kinds of noise: fluctuations of bits originating in internal mechanisms and in external ones. That is, sound bit strings are recursively varied both by external noise and by the specifications of deterministic programs. Depending on who reads the tape, that is, which program rewrites the input string, an immense variety arises.
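
A minimal sketch of this idea, assuming a toy rewriting rule rather than the one in the original paper: a “machine” is a deterministic bit-rewriting program (internal variation) and an external noise term occasionally flips bits; iterating both produces ever-changing tapes.

```python
# Minimal machine/tape sketch: a deterministic rewriting rule ("internal" variation)
# plus occasional random bit flips ("external" noise). The rule is a toy choice,
# not the one in Ikegami & Hashimoto (1992).
import random

def machine_rewrite(tape, key=0b1011):
    """Deterministic program: XOR each bit with a bit of a rotating key."""
    return [b ^ ((key >> (i % 4)) & 1) for i, b in enumerate(tape)]

def external_noise(tape, p_flip=0.02):
    """External noise: flip each bit independently with small probability."""
    return [b ^ 1 if random.random() < p_flip else b for b in tape]

if __name__ == "__main__":
    tape = [random.randint(0, 1) for _ in range(32)]   # initial bit string
    for generation in range(5):
        tape = external_noise(machine_rewrite(tape))
        print(generation, "".join(map(str, tape)))
```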

 

In sound-file synthesis, a machine rewrites a tape by using waveform information to increase or decrease pulses, or by exploiting the interference between sound intensity (dB) and the sampling rate. Other machines receive two input files and let one act as a program on the other, or combine them with variation operators like those used in a genetic algorithm. In this way the initial sound files acquire unexpected structures, as if they had gained a life-like autonomy.
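
The sketch below shows one way two sound buffers might be combined with genetic-algorithm-style crossover and mutation. The operators and parameter values are illustrative assumptions, not the installation’s actual machines.

```python
# Illustrative GA-style combination of two sound buffers: single-point crossover
# plus sparse random mutations. Operators are assumptions for illustration only.
import numpy as np

def crossover(a, b, rng):
    """Splice the head of one buffer onto the tail of the other."""
    cut = rng.integers(1, min(len(a), len(b)) - 1)
    return np.concatenate([a[:cut], b[cut:]])

def mutate(samples, rng, p=0.001, scale=0.1):
    """Perturb a small fraction of samples with Gaussian noise."""
    out = samples.copy()
    mask = rng.random(len(out)) < p
    out[mask] += rng.normal(0.0, scale, mask.sum())
    return np.clip(out, -1.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sr = 44100
    t = np.arange(sr) / sr
    parent_a = 0.5 * np.sin(2 * np.pi * 220 * t)            # toy parent sounds
    parent_b = 0.5 * np.sign(np.sin(2 * np.pi * 330 * t))
    child = mutate(crossover(parent_a, parent_b, rng), rng)
    print(child.shape, child.min(), child.max())
```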

 

The other important time structure of the sound patterns is the hierarchical composition of biased white noise generated by the logistic map or by cellular automata. Within a short time scale, time series from those systems have apparent structure and are clearly distinguishable from sequences generated by random coin tosses. For example, by layering the same logistic-map noise computed with slightly different nonlinear parameters, we can generate ample differences in sensory experience. Using this chaos-based timbre generation and layering the chaotic time series, “filmachine” was able to create a sound membrane, a collective motion of different sound images.
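
A minimal sketch of that layering idea, with illustrative parameter values only: two logistic-map time series whose control parameters differ very slightly diverge quickly, and summing such layers yields a biased, structured noise unlike an independent coin-toss sequence.

```python
# Minimal sketch of layering logistic-map "noise": two series with slightly
# different control parameters diverge quickly; their sum is a structured,
# biased noise unlike an i.i.d. coin-toss sequence. Values are illustrative.
import numpy as np

def logistic_series(r, x0=0.4, n=44100):
    """Iterate x <- r * x * (1 - x) and return the trajectory."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

if __name__ == "__main__":
    a = logistic_series(r=3.9000)
    b = logistic_series(r=3.9001)              # slightly different parameter
    layered = (a + b) / 2.0
    audio = 2.0 * layered - 1.0                # map [0, 1] roughly to [-1, 1]
    print("divergence after 1000 steps:", abs(a[1000] - b[1000]))
    print("layered range:", audio.min(), audio.max())
```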

 

“Filmachine” makes full use of the complex sounds generated by these chaotic dynamics and by the tape-and-machine dynamics, organizing them into a fully three-dimensional structure of motion. The phenomenologist Husserl discusses a network of longitudinal and transverse intentionality as the basis of subjective perception, where longitudinal intentionality corresponds to a spatial structure and transverse intentionality to a temporal structure. Here, the spatial structure does not merely mean the 3D space emulated by the “Huron” system, but a space in perception, that is, a perceptual structure knitted from memory and embodiment. “Filmachine” is an experimental instrument as well as a virtual space/time structure developed as an evolutionary form of third-term music.

 

That is, “filmachine” synthesizes sound membranes (films), while at the same time the machine itself becomes an internal observer by compiling the space/time complexity together with the sound dynamics.

 

Text by Keiichiro Shibuya + Takashi Ikegami
Translated by Takashi Ikegami

Photo: © Michael Sauer – filmachine in Berlin, February 2008


Transmediale.08 / PODEWIL – Germany (2008)

Venue: PODEWIL, Berlin
filmachine in Berlin was curated by Stefan Riekeles (JdP)

Date: January 29th 2008 – February 10th 2008


Festival EXIT – France (2011)

Venue: Maison des Arts, Place Salvador Allende, Créteil

Date: March 10th 2011 – March 20th 2011


Festival VIA – France (2011)

Venue: Espace Sculfort, Avenue Jean-Jaurès, route de Valenciennes, Maubeuge

Date: March 24th 2011 – April 3rd 2011


Festival PARANOIA – France (2011)

Venue: Gare Saint Sauveur, Boulevard Jean-Baptiste Lebas, Lille
Date: April 13th 2011 – August 14th 2011


MEDIA AMBITION TOKYO 2014 – Japan (2014)

Venue: Roppongi Hills, 52nd Floor (Tokyo City View)

Date: February 7th 2014 – March 30th 2014
