# creativeAI-dataset

## Project Aims and Objectives

This project aims to design, develop, and demonstrate a unique dataset capturing human creativity from inside the embodied relationships of music performance. Understanding this critical and fruitful mesh of environment, embodiment, new goal creation, and human-machine cooperation will allow the project to cast new light on the role of embodiment in HDI in music. It will also gather empirical data about the ways in which human beings can create and enjoy music performance through their data streams. This dataset will be the first of its kind to deal with embodied creativity, and it has potential impact on the fields of real-time performance, game engine integration, companion bots, healthcare interventions through art, and other interpersonal interactions between people, their data, and their machines. Our novel approach of foregrounding musicking (over the score) in the process of musical creation will offer new pathways of exploration for those already examining computer models of musical creativity.

The objectives are:

  1. Bring together a transdisciplinary team from the fields of computer vision, gesture recognition, music composition, musicology, creativity cognition, games studies, interactive art, AI, and machine learning.
  2. Design a lightweight, low-cost, high-yield data capture system for human musicking (the creative acts of making music; Small 1989) using off-the-shelf technologies.
  3. Capture the live performative data of a range of musicians while they perform solo or in small ensembles.
  4. Demonstrate a small-scale proof-of-concept prototype of the dataset in action.

This project will be split into three phases:

**Phase 1 (months 1–3) – design of the human musician tracking system.**

The system will be constructed using off-the-shelf technologies such as fitness bands (to track movement and signs of affect such as heart rate and galvanic skin resistance) and HD video capture using an Android tablet (linked to the fitness band) with a high-quality USB microphone (to capture the real-time audio). The aim of this phase is to design a lightweight system that can be deployed remotely by the individual musicians, without a team of scientists, cables, and on-body telemetry interfering with their embodied state of musicking. This is a vital element: authentic and embodied musicking is needed to capture a rich dataset, and anything less would be a weakness detrimental to the project.
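To make the phase-1 design concrete, the sketch below shows one possible way to structure a per-session record combining the fitness-band affect stream with references to the audio and video files. All class and field names here (`SessionRecord`, `AffectSample`, `hub`, `gsr_microsiemens`, and so on) are illustrative assumptions for discussion, not the project's actual schema.

```python
# Minimal sketch of a per-session record for the capture system described
# above. Every name in this file is a hypothetical placeholder, not the
# project's real schema.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AffectSample:
    """One fitness-band reading: a timestamp plus affect signals."""
    t: float                 # seconds since the session started
    heart_rate_bpm: float    # heart rate from the fitness band
    gsr_microsiemens: float  # galvanic skin response reading

@dataclass
class SessionRecord:
    """One remote performance session, deployable by a solo musician."""
    performer_id: str        # pseudonymous ID (supports the ethics discussion)
    hub: str                 # e.g. "Leicester", "Liverpool", "New Haven"
    started_at: float        # Unix epoch seconds at session start
    video_file: str          # HD video captured on the Android tablet
    audio_file: str          # real-time audio from the USB microphone
    affect: list[AffectSample] = field(default_factory=list)

# Usage: build a record as samples arrive, then persist it as JSON.
session = SessionRecord(
    performer_id="perf-001",
    hub="Leicester",
    started_at=time.time(),
    video_file="perf-001_take1.mp4",
    audio_file="perf-001_take1.wav",
)
session.affect.append(AffectSample(t=0.0, heart_rate_bpm=72.0,
                                   gsr_microsiemens=4.1))

with open("perf-001_take1.json", "w") as f:
    json.dump(asdict(session), f, indent=2)
```

Keeping each session self-contained in a single plain-text record like this would suit remote, musician-operated deployment, since files can be uploaded from each hub without any on-site infrastructure.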

**Phase 2 (months 4–8) – harvesting data from remote performances.**

Each hub (Leicester, Liverpool, New Haven) will commission performers to engage with this project. Leicester will draw in musicians from electronic music performance and the Royal Birmingham Conservatoire; Liverpool will draw from the youth squad of the Liverpool Philharmonic Orchestra; and New Haven will draw from the Yale School of Music and the improvisation scene of Connecticut. A concurrent activity will be to engage these musicians in a demographic study and a discussion about the ethics of a Creative AI dataset.

**Phase 3 (months 8–10) – proof-of-concept demonstration.**

The dataset and trained AI will be implemented into the existing Embodied Robotics in Music work as a proof-of-concept demonstration and used to evaluate the next steps of this project.