Concert

The CSMC2018 Concert will be held on the first evening of the conference, Monday 20th August, at 6:30pm in the Smurfit Lounge.

The concert is free and open to all, including those not attending the conference.


Photo, left to right: Conference Organiser Róisín Loughran (NCRA Group, UCD), Robert Keller (Harvey Mudd College, USA), Fergal Dowling and Michael Quinn (Dublin Sound Lab), Stephen Roddy (TCD), René Mogensen (Royal Birmingham Conservatoire, UK) and Sarah Angliss (composer, UK).

Overview

Sarah Angliss: ‘Airloom’
Stephen Roddy: ‘Signal to Noise Loops i2+: Noise Water Dirt’
Interval
Fergal Dowling and Michael Quinn: ‘Stops VIII’
Robert Keller: ‘Jamming with Improvisor’
René Mogensen: ‘Favoleggiatori 2’

Full Programme

Sarah Angliss: ‘Airloom’

Music blending acoustic instruments with electronics and generative software patches, featuring the composer’s polyphonic robotic carillon.

Bio:
Sarah Angliss is a composer, performer and sound historian who creates narratively rich music, performed using acoustic instruments, electronics, bespoke software and musical automata. Sarah’s performed live at the Royal Festival Hall, Cafe Oto, Kings Place, the Union Chapel, BFI Southbank and Handel House, London; National Sawdust, Brooklyn; Wales Millennium Centre, Cardiff; the Arnolfini, Bristol; the Millennium Gallery, Sheffield; Landmark Kunsthall, Bergen; Elektriteater, Tartu; Moog Labs; Supersonic; the Royal Institution and many other venues and festivals. Sarah also works in theatre, most recently on a new production of Eugene O’Neill’s expressionist play The Hairy Ape at the Old Vic, London, and the Park Avenue Armory, New York (director Richard Jones). She’s currently composing an electroacoustic opera, Giant (with librettist Ross Sutherland and director Sarah Fahie), supported by Snape Music and the Jerwood Charitable Foundation. Sarah’s biography of Daphne Oram featured in a reprint of Oram’s treatise An Individual Note of Music, Sound and Electronics, republished by Anomie and the Daphne Oram Trust. Her work on musicians’ early attitudes to drum machines and samplers was published by the Science Museum and Smithsonian Scholarly Press. She’s made documentaries on the use of birds as domestic sound recorders and the cultural history of echo for BBC Radio 4.

Stephen Roddy: ‘Signal to Noise Loops i2+: Noise Water Dirt’

Signal to Noise Loops i2+: Noise Water Dirt is a live performance for the PerformIOT system, which applies techniques and concepts from data-driven music to balance algorithmic composition, live looping and improvisation in live electronic music performance.
The tasks of data acquisition and preparation, as well as the mapping of data to MIDI, are carried out by a bespoke Python script. From there the data is sent to Ableton Live 10 and Max 8 to control synthesis parameters. The performance uses Smart City IoT data drawn from sensors placed around Dublin city. From January to May 2018, Ireland experienced a number of unusual weather events. Their effects on the city were measured by devices monitoring ambient noise levels, provided by Dublin City Council and Sonitus Systems (http://dublincitynoise.sonitussystems.com/), and by water-level and air-quality sensors provided by the EPA (http://www.epa.ie). While each of these data streams represents an independent set of measurements of a distinct phenomenon, they share an interrelated structure, as all were shaped by the same recent run of unusual weather. This makes them useful for coordinating and balancing the live performance system: the streams share similar characteristics and trajectories, yet differ enough to keep the system from sounding static and homogeneous.
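As a rough illustration of the acquisition step, a script along these lines might fetch a sensor reading and normalise it to the 0-to-1 range used by the mappings described next. The endpoint path and JSON field name below are illustrative assumptions, not the published interface; the piece’s actual script is not public.

```python
# Sketch of the data-acquisition step: fetch one ambient-noise reading
# and normalise it for later MIDI mapping. Endpoint path and field name
# are hypothetical placeholders.
import requests

SENSOR_URL = "http://dublincitynoise.sonitussystems.com/api/latest"  # assumed path

def fetch_noise_level(url=SENSOR_URL):
    """Return the most recent ambient-noise reading in dB, or None on failure."""
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return float(response.json()["laeq"])  # assumed field name
    except (requests.RequestException, KeyError, ValueError):
        return None

def normalise(value, lo=30.0, hi=90.0):
    """Clamp a dB reading into a typical urban range and scale to 0..1."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)
```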
The performance component records and loops incoming content improvised by the player using the Lemur iOS app for iPad. The harmonic generative component generates harmonic content, and the electroacoustic generative component generates electroacoustic textures and gestural motions from simple sine-wave inputs; both are driven by probabilistic models. Data is mapped to control parameters on three separate levels across each component, roughly comparable to micro, meso and macro levels of control: the MIDI level, the synthesis level and the post level. For the generative music creation process, data is mapped at the MIDI level to control the chance that a note will play, its possible pitches and its length. For the performative component, the player performs a motif or section of music; this is then looped, and the data controls how far the loop deviates from the original recording on each repetition. Drawing on the loop-based works of Reich and Eno, multiple loops can be created in this way, allowing a kind of evolved approach to phase shifting at the meso level. At the synthesis level, the performative component and the harmonic generative component use wavetable synthesis.
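A minimal sketch of the MIDI-level mapping just described: a single normalised data value controls the chance that a note plays, the width of its pitch pool and its length. The MIDI port name, pitch pool and value ranges here are assumptions for illustration, not the piece’s actual settings.

```python
# MIDI-level mapping sketch: one 0..1 data value drives note chance,
# pitch choice and note length, sent to a DAW over a virtual MIDI port.
import random
import time
import mido

SCALE = [60, 62, 63, 65, 67, 70]  # assumed pitch pool

def play_data_note(port, x):
    """x is a normalised 0..1 sensor value controlling chance, pitch and length."""
    if random.random() > x:                  # higher readings -> denser note stream
        return
    pool = SCALE[: 2 + int(x * (len(SCALE) - 2))]  # pitch pool widens as x rises
    pitch = random.choice(pool)
    length = 0.1 + (1.0 - x) * 0.9           # quieter readings -> longer notes
    port.send(mido.Message('note_on', note=pitch, velocity=int(40 + x * 80)))
    time.sleep(length)
    port.send(mido.Message('note_off', note=pitch))

# Usage (assumes a virtual MIDI port routed into Ableton Live):
# with mido.open_output('IAC Driver Bus 1') as port:
#     for x in stream_of_normalised_readings:
#         play_data_note(port, x)
```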
The timbres are designed to metaphorically represent the different data streams. This constitutes the micro level; the mapping on this level is rich and complex and will be discussed in greater detail in a future publication. Examples of mapped parameters include amplitude envelopes, filter resonance and cutoff values, delay times and, most crucially, the patterns of movement across the wavetable. Mappings on this level are informed by developments in the field of embodied cognition. On the post level, the data is mapped to modulate how each of the components is processed using distortion, stereo imaging, filtering and reverb. This allows the piece to be divided into three distinct parts on the macro level. Mappings on this level were influenced by Basinski’s Disintegration Loops: data can be mapped on this level to control the rate of distortion, giving rise to new sonic materials controlled by the data.
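The post-level mapping might be sketched as follows: three normalised data streams are scaled to MIDI control-change messages, which could in turn be mapped to distortion, reverb and filter macros in Ableton Live. The CC numbers are arbitrary placeholders, not the piece’s actual assignments.

```python
# Post-level mapping sketch: three 0..1 data streams drive
# effect-processing parameters as MIDI CC messages.
import mido

CC_DISTORTION, CC_REVERB, CC_CUTOFF = 20, 21, 22  # assumed CC assignments

def send_post_controls(port, noise, water, air):
    """Map three normalised 0..1 data streams onto effect parameters."""
    for cc, value in ((CC_DISTORTION, noise), (CC_REVERB, water), (CC_CUTOFF, air)):
        port.send(mido.Message('control_change', control=cc, value=int(value * 127)))
```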

Bio:
Stephen Roddy is a composer/performer and an Irish Research Council Government of Ireland Postdoctoral Research Fellow investigating auditory display for large-scale Internet of Things (IoT) networks at CONNECT, Trinity College Dublin. Stephen holds a PhD in sonification (the science and art of representing data with sound) from Trinity College Dublin. He also holds a BSc in Music, Media and Performance Technology and an MA in Music Technology from DMARC at the University of Limerick. His current academic research is focused on the place of algorithmically generated sound and music in representing and communicating information to listeners in a world that is increasingly connected by smart devices, AI/ML and IoT technologies. He employs empirical and mixed-method strategies in the development of creative auditory display systems which integrate principles of embodied cognition and AI with sonification techniques. His artistic work includes installation, data-driven music and sonification, guitar-based improvisation, generative and algorithmic music systems, electronic compositions and collaborations with dancers, choreographers, traditional instrumentalists and other sound artists. His music has been described as “quirky, odd, heavy electronic instrumental” music.

INTERVAL
