Gen.AV

Gen.AV is a platform for the development of, and performance with, generative audiovisual tools. Workshops, hackathons and performances have been organised as part of this platform. It is part of the Enabling AVUIs research project, supported by a Marie Curie EU fellowship.

Gen.AV 2

Hackathon and performance on Generative Audiovisuals, July 2015. Performance: 30/7/2015, Goldsmiths, University of London, PSHB-LG01

About the projects:

Butterfly is a general-purpose scope viewer that can be mapped arbitrarily to the internal signals of SuperCollider synths. It is intended as an interactive instrument that can be manipulated in real time for generative audiovisual performances. The visual tool can be used with any SuperCollider synth definitions, including your own, as long as they follow some naming conventions. Project link: https://github.com/AVUIs/Butterfly

Cantor Dust is an audiovisual instrument that generates, displays, and sonifies Cantor-set-like fractals. It allows the user to control the starting seed and number of iterations for several fractals at once, either through the graphical interface or through a MIDI controller. The project is written in plain JavaScript and runs entirely in a (modern) web browser. Project link: https://github.com/AVUIs/cantor-dust
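
By way of illustration only (the actual project is browser JavaScript), a minimal Python sketch of the idea: a binary seed is expanded recursively to build Cantor-set-like levels, and each level contributes a partial whose strength follows the level's density. The seed format and the frequency mapping are assumptions invented for the example.

    # Illustrative Cantor-set expansion and sonification; the real project
    # runs as JavaScript in the browser. Seed format and mapping invented.
    import numpy as np

    def cantor_levels(seed, iterations):
        """Each 1 in a level is replaced by the whole seed, each 0 by zeros."""
        level = np.array(seed, dtype=float)
        levels = [level]
        for _ in range(iterations):
            level = np.concatenate([np.array(seed, float) if x else
                                    np.zeros(len(seed)) for x in level])
            levels.append(level)
        return levels

    def sonify(levels, base_freq=110.0, sr=44100, dur=2.0):
        """Deeper levels map to higher octaves; level density sets amplitude."""
        t = np.arange(int(sr * dur)) / sr
        out = np.zeros_like(t)
        for depth, level in enumerate(levels):
            out += level.mean() * np.sin(2 * np.pi * base_freq * 2**depth * t)
        return out / len(levels)

    audio = sonify(cantor_levels([1, 0, 1], iterations=4))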

Esoterion Universe Gestenkrach is a fork of Esoterion Universe that adds a SuperCollider sound engine with a much wider variety of sounds, and LeapMotion sensor integration for more playful universe navigation and planet sculpting. The UI was also adapted and made more directly responsive to LeapMotion input. Original description: under Gen.AV 1. Project link: https://github.com/AVUIs/EsoterionUniverseGestenkrach

OnTheTap is a reactive audiovisual system that plays with the tactile, analogue feel of tapping surfaces as a digital input device. This input and its gestures in turn drive sound and visuals expressively. Project link: https://github.com/AVUIs/OnTheTap
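
The description does not specify how taps are captured; assuming they arrive as amplitude spikes in an audio stream, a minimal detector might look like the following Python sketch (threshold and timing values invented):

    # Illustrative tap detector: amplitude spikes in an audio frame stream are
    # treated as tap events, with a refractory period to avoid double triggers.
    # The frame source, threshold and timing values are assumptions.
    import numpy as np

    def detect_taps(frames, sr=44100, frame_len=512, threshold=0.3,
                    refractory=0.08):
        """Yield tap times in seconds; each tap would drive sound and visuals."""
        last_tap = -refractory
        for i, frame in enumerate(frames):
            t = i * frame_len / sr
            if np.abs(frame).max() > threshold and t - last_tap >= refractory:
                last_tap = t
                yield t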

residUUm is an attempt to sonify a particle system whose inhabitants exchange and discard their sonic characteristics as they collide, leaving remnants that contribute to a din of noise as their larger bodies fade. The sound engine and graphics are done, but the exchange of characteristics has yet to be implemented. The project uses Processing to send visual characteristics of particle bodies to be sonified in Pure Data. Project link: https://github.com/AVUIs/residUUm
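
A hypothetical sketch of the missing exchange rule, in Python for illustration only (the project itself splits this work between Processing and Pure Data): colliding particles trade a fraction of their sonic characteristics and shed a faint residue particle.

    # Hypothetical collision rule: two particles trade a fraction of their
    # sonic characteristics and shed a quiet residue. All names invented.
    import random

    def collide(a, b, exchange=0.5):
        for key in ("pitch", "brightness", "grain"):
            a[key], b[key] = (a[key] * (1 - exchange) + b[key] * exchange,
                              b[key] * (1 - exchange) + a[key] * exchange)
        residue = {k: (a[k] + b[k]) / 2 for k in ("pitch", "brightness", "grain")}
        residue["amplitude"] = 0.05 * random.random()   # faint part of the din
        return residue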

wat, ‘An Audio-Visual Exploration of Chaotic 2-Dimensional Dynamical Systems’ (or simply ‘Wat’), aims to develop a 2D or 3D visualization based on continuous cellular automata with various evolving rulesets, and to sonify the result in a musical way. The core principle involves applying a matrix of mathematical operations (generally non-linear functions) to an image specifying the starting conditions. The operation matrix is slid across the image (as in convolution), and the operation in each element of the matrix is applied to the corresponding element in the image matrix. The result is complex, evolving, unpredictable moving textures which can be sonified with the right method. Furthermore, the rules of the system can be ‘performed’ by varying the operation set and coefficients in real time through some form of input. Project link: https://github.com/AVUIs/wat
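
As a rough illustration of that core step, the sketch below (Python with NumPy, an assumption; the project itself is not written in Python) slides a 3x3 matrix of non-linear functions across a state image with toroidal wrap-around; the particular function choices and coefficient are invented for the example.

    # Illustrative continuous-cellular-automaton step: a 3x3 matrix of
    # non-linear functions is slid across the state image (toroidal wrap,
    # as in convolution) and the results are averaged. Ruleset invented.
    import numpy as np

    ops = [[np.sin, np.cos, np.tanh],
           [np.cos, np.sin, np.cos],
           [np.tanh, np.cos, np.sin]]

    def step(image, coeff=3.0):
        out = np.zeros_like(image)
        for dy, row in enumerate(ops):
            for dx, fn in enumerate(row):
                # Line the (dy, dx) operation up with each image cell in turn.
                shifted = np.roll(np.roll(image, dy - 1, axis=0), dx - 1, axis=1)
                out += fn(coeff * shifted)
        return out / 9.0

    state = np.random.rand(64, 64)   # image of starting conditions
    for _ in range(100):
        state = step(state)          # evolving texture, ready for sonification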

Projects developed during the second Gen.AV hackathon on Generative Audiovisuals (25-26 July 2015, London Music Hackspace and Goldsmiths, University of London).

Gen.AV 1

Hackathon and performance on Generative Audiovisuals. Performance: 6/2/2015, Goldsmiths, University of London, Cinema

About the projects:

ABP is a Pure Data patch that creates music and sends data to the Cinder visuals software via OSC. Several visual parameters, such as colour, alpha, zoom, repetition of objects and tempo, are set by the Pure Data patch. Project link: https://github.com/AVUIs/AdamBrucePiotr
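
The patch-to-visuals link is plain OSC; a roughly equivalent sender, sketched in Python with the python-osc package (addresses, host and port invented for the example), would be:

    # Illustrative OSC sender standing in for the Pure Data patch; the Cinder
    # app would listen on the matching port. Addresses and values invented.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)      # assumed host and port
    client.send_message("/color", [0.8, 0.2, 0.4])
    client.send_message("/alpha", 0.7)
    client.send_message("/zoom", 1.5)
    client.send_message("/tempo", 120)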

drawSynth consists of a GUI to control sound and image. Users can draw shapes and select colours, and in doing so control the synthesis engine. Each pair of vertices of a shape controls an FM oscillator, one vertex for the carrier and the other for the modulator. The project is built with openFrameworks for graphics and interaction, and Max/MSP for sound, using FM synthesis. OSC is used for communication between the two. Project link: https://github.com/AVUIs/drawSynth
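
To make the vertex-pair mapping concrete, here is an illustrative Python sketch of one FM voice driven by two vertices; the actual synthesis happens in Max/MSP, and the coordinate-to-parameter mapping below is an assumption:

    # Illustrative FM voice driven by one pair of shape vertices: one vertex
    # sets the carrier, the other the modulator. Mapping is an assumption.
    import numpy as np

    def fm_from_vertices(v_carrier, v_modulator, sr=44100, dur=1.0):
        t = np.arange(int(sr * dur)) / sr
        car_freq = 100 + 10 * v_carrier[0]     # x position -> carrier frequency
        mod_freq = 100 + 10 * v_modulator[0]   # x position -> modulator frequency
        mod_index = v_modulator[1] / 50.0      # y position -> modulation depth
        return np.sin(2 * np.pi * car_freq * t
                      + mod_index * np.sin(2 * np.pi * mod_freq * t))

    shape = [(120, 80), (300, 200), (180, 340), (60, 260)]   # drawn vertices
    voices = [fm_from_vertices(shape[i], shape[i + 1])
              for i in range(0, len(shape) - 1, 2)]          # one voice per pair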

Esoterion Universe consists of an empty 3D space that can be filled with planet-like audiovisual objects. The objects can be manipulated and given different appearances and sounds. Users can navigate in space, and the audiovisual outcome is influenced by that navigation. Generic, media-neutral terms such as warmth, sharpness, size and oscillation are used to characterise and connect sound and visuals; a semantic approach was chosen for this connection instead of a one-to-one parameter mapping. The GUI consists of sliders distributed concentrically, in the shape of a star graph, embedded in the centre of the object, so that it integrates aesthetically with the objects. openFrameworks with OpenGL is used for graphics and interaction, and Max/MSP for sound. OSC is used for communication. Project link: https://github.com/AVUIs/EsoterionUniverse
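
One way to read the semantic approach: each high-level term fans out to several correlated sound and visual parameters at once, rather than one slider driving one parameter. A hypothetical Python sketch (all parameter names and mappings invented):

    # Hypothetical semantic mapping: one high-level term drives several sound
    # and visual parameters together. All parameter names are invented.
    def apply_warmth(warmth, sound, visual):
        """warmth in [0, 1] fans out to correlated audio and visual qualities."""
        sound["lowpass_cutoff"] = 4400 - 4000 * warmth   # warmer = darker tone
        sound["detune"] = 0.01 * warmth                  # warmer = richer beating
        visual["hue"] = 0.08 * warmth                    # warmer = toward orange
        visual["glow"] = 0.2 + 0.8 * warmth

    sound, visual = {}, {}
    apply_warmth(0.8, sound, visual)   # one gesture, many coordinated changes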

GS.avi is a gestural instrument that generates continuous spatial visualizations and music from the input of a performer. The features extracted from a performer's gestures define the colour, position, form and orientation of a three-dimensional Delaunay mesh: its composite triangles, vertices, edges and walk. The music, composed using granular synthesis, is generated from features extracted from the mesh: its colours, strokes, position, orientation and patterns. The project was created using Processing and Max/MSP, with OSC used to communicate between the two. Project link: https://github.com/AVUIs/GS.avi
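
For the geometric half of that pipeline, a Delaunay mesh over tracked gesture points can be built in a few lines; the sketch below uses Python with SciPy as a stand-in (the project itself uses Processing), with the gesture points simulated:

    # Illustrative Delaunay step: gesture sample points become a mesh whose
    # triangles can be rendered and mined for granular-synthesis parameters.
    # Points are simulated here; the project itself uses Processing.
    import numpy as np
    from scipy.spatial import Delaunay

    points = np.random.rand(30, 2)    # stand-in for tracked gesture positions
    mesh = Delaunay(points)

    for tri in mesh.simplices:        # vertex indices of each triangle
        a, b, c = points[tri]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        # area, edge lengths and position could feed grain size, density, pan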

Modulant allows for the creation of images and their sonification. The present implementation is built upon image-importing and freehand-drawing modules that can be used to create arbitrary visual scenes, with more constrained functional and typographical modules in development. The audio engine is inspired by a 1940s synthesizer, the ANS, which scans across images: one axis is time and the other is frequency. Modulant thus becomes a graphical space to be explored sonically, and vice versa. The project is built with Processing for graphics and interaction, and Ruby with Pure Data for sound. Project link: https://github.com/AVUIs/Modulant
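
The ANS-style scan reduces to: one image column per time step, one row per frequency band, pixel brightness as amplitude. A minimal Python rendering of that idea (frequency range and column duration are assumptions for the example):

    # Illustrative ANS-style scan: each image column is a moment in time, each
    # row a frequency band, and pixel brightness drives that partial's level.
    import numpy as np

    def scan_image(image, sr=44100, col_dur=0.05, f_lo=100.0, f_hi=4000.0):
        rows, cols = image.shape
        freqs = np.geomspace(f_hi, f_lo, rows)   # top row = highest pitch
        t = np.arange(int(sr * col_dur)) / sr
        frames = []
        for c in range(cols):
            column = image[:, c]                 # brightness per frequency band
            frame = sum(a * np.sin(2 * np.pi * f * t)
                        for a, f in zip(column, freqs))
            frames.append(frame / rows)
        return np.concatenate(frames)

    audio = scan_image(np.random.rand(64, 128))  # stand-in for a drawn scene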

Projects developed during the Hackathon on Interactive Computer-Generated Audiovisuals (6-7 December 2014, Goldsmiths, University of London).

Gen.AV Sketches

Workshop on Generative Audiovisuals, 18/10/2014. Goldsmiths, University of London

About the projects:


Gestural Touchscreen is a touchscreen-based application, controlled entirely by gestures. There is no GUI. Users can only load SVG files as visual content, and there is a built-in physics engine.


Meta/Vis is a multitouch app with a “pre-performance” configuration stage. This stage adopts a data-flow paradigm, although substantially simplified: objects such as sound, visuals, control, generative and physics can be linked with arrows in different configurations, and contain drop-down menus for additional options. The group described it as “a simplified Jitter-style patching system”.


Sensor Disco consists of an environment containing multiple sensors. By moving in the space, audience members trigger and modulate sounds, which are visualized on the walls and on the floor.


Fields of Interference allows users to create sound and visuals by moving with their mobile devices in a room. The system is composed of an array of sensors, which sonifies and visualizes Wi-Fi interference from mobile devices – using surround sound and an immersive dome-like projection screen.


Beat the DJ is a gamified experience with a main performer role (in this case, a DJ/VJ), in which the club environment becomes a game where audience activity “unlocks” audiovisual content. In the beginning, the audio and visuals are simple (for example, a drum loop and a few melody lines), but audience reaction can give the DJ/VJ more elements to play with. These elements can potentially trigger further reactions from the audience.
