The AI and I
What can AI do in music? Not to generate, not to steal and copy, nor to take away the fundamentals and essence of human value: to create and to express. We are developing an AI program for music performance that serves the expressions, messages, stories, and imaginations of creators and performers: allowing them to create with an unimaginable vastness of possibilities in the new era of AI, and offering the industry a possible direction for the future.
There is a long history of compositions blending a live acoustic instrument with recorded sound: environmental sound, computer-generated sound, or even recorded acoustic instruments. In most cases the live player follows along with the previously recorded material, adjusting timing as necessary to achieve the desired ensemble. Sound map tracking AI provides an alternative to this model, allowing the live player to lead the performance while the interactive AI follows and responds to the live player. This approach frees the live player from rigid timing and restores expressive and creative freedom. It supports fixed-media pieces as well as imaginative score-based live processing of the soloist's sound, while freeing the soloist to drive the music as the moment requires. We will demonstrate this technology with both traditional viola repertoire and new compositions by Zhao.
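To make the idea of sound map tracking concrete, here is a minimal sketch of one way such tracking could work: live-audio feature frames are matched, monotonically and in real time, against a precomputed feature sequence (the "sound map") of the piece, and the resulting position estimate can then cue fixed media or live processing. This is only an illustrative simplification, not the actual system described above; all names (SoundMapTracker, search_width, etc.) are hypothetical.

```python
# Illustrative sketch of online "sound map" tracking (hypothetical names).
# The tracker greedily aligns each incoming live feature frame to a small
# window just ahead of its current position in the reference feature sequence.
import numpy as np


class SoundMapTracker:
    def __init__(self, reference_features: np.ndarray, search_width: int = 20):
        # reference_features: (n_frames, n_dims) features of the pre-analysed piece
        self.ref = reference_features / (
            np.linalg.norm(reference_features, axis=1, keepdims=True) + 1e-9
        )
        self.search_width = search_width
        self.position = 0  # current estimated frame within the sound map

    def step(self, live_frame: np.ndarray) -> int:
        """Advance the estimated position given one live feature frame."""
        frame = live_frame / (np.linalg.norm(live_frame) + 1e-9)
        lo = self.position
        hi = min(len(self.ref), self.position + self.search_width)
        # cosine similarity against a small window ahead of the current position
        sims = self.ref[lo:hi] @ frame
        self.position = lo + int(np.argmax(sims))  # monotonic: never move backwards
        return self.position


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((500, 12))        # stand-in for feature frames of the piece
    tracker = SoundMapTracker(reference)
    # simulate a live performance that roughly follows the reference, with noise
    for t in range(0, 500, 2):
        live = reference[t] + 0.05 * rng.standard_normal(12)
        pos = tracker.step(live)
    print("final tracked position:", pos)    # should land near the end of the map
```

In a real performance setting, the tracked position would drive playback of fixed media or trigger score-based processing, so the electronics follow the soloist rather than the other way around.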