A state of the art on computational music performance
References
Arcos, 1998, Generating expressive musical performances with SaxEx, Journal of New Music Research, 27, 194, 10.1080/09298219808570746
Bresin, 2000, Emotional coloring of computer-controlled music performances, Computer Music Journal, 24, 44, 10.1162/014892600559515
Canazza, 2000, Audio morphing different expressive intentions for multimedia systems, IEEE MultiMedia, 7, 79
Delgado, 2009, INMAMUSYS: Intelligent multiagent music system, Expert Systems with Applications, 36, 4574, 10.1016/j.eswa.2008.05.028
De Mántaras, 2002, AI and music: From composition to expressive performance, AI Magazine, 23, 43
Dovey, 1995, Analysis of Rachmaninoff’s piano performances using inductive logic programming, 279
Friberg, 1991, Generative rules for music performance: A formal description of a rule system, Computer Music Journal, 15, 56, 10.2307/3680917
Friberg, 2006, pDM: An expressive sequencer with real-time control of the KTH music performance rules, Computer Music Journal, 30, 37, 10.1162/comj.2006.30.1.37
Friberg, 2000, Generating musical performances with Director Musices, Computer Music Journal, 24, 23, 10.1162/014892600559407
Gabrielsson, 1995
Gabrielsson, 2003, Music performance research at the millennium, Psychology of Music, 31, 221, 10.1177/03057356030313002
Grachten, 2009, Phase-plane representation and visualization of gestural structure in expressive timing, Journal of New Music Research, 38, 183, 10.1080/09298210903171160
Hiraga, 2004, Rencon 2004: Turing test for musical expression, 120
Hong, 2003, Investigating expressive timing and dynamics in recorded cello performances, Psychology of Music, 31, 340, 10.1177/03057356030313006
Ishikawa, O., Aono, Y., Katayose, H., & Inokuchi, S. (2000). Extraction of musical performance rules using a modified algorithm of multiple regression analysis. In Proceedings of the 2000 international computer music conference, San Francisco (pp. 348–351).
Juslin, 1997, Perceived emotional expression in synthesized performances of a short melody: Capturing the listener's judgment policy, Musicae Scientiae, 1, 225, 10.1177/102986499700100205
Juslin, 2002, Toward a computational model of expression in performance: The GERM model, Musicae Scientiae, 63
Langner, 2003, Visualizing expressive performance in tempo-loudness space, Computer Music Journal, 27, 69, 10.1162/014892603322730514
Lindström, 2006, Impact of melodic organization on perceived structure and emotional expression in music, Musicae Scientiae, 10, 85, 10.1177/102986490601000105
Meyer, 1956
Minsky, 1992
Molina-Solana, M., Arcos, J. L., & Gomez, E. (2008). Using expressive trends for identifying violin performers. In Proceedings of the ninth international conference on music information retrieval (ISMIR 2008) (pp. 495–500).
Palmer, 1996, Anatomy of a performance: Sources of musical expression, Music Perception, 13, 433, 10.2307/40286178
De Poli, 2004, Methodologies for expressiveness modelling of and for music performance, Journal of New Music Research, 33, 189, 10.1080/0929821042000317796
Ramírez, 2007, Performance-based interpreter identification in saxophone audio recordings, IEEE Transactions on Circuits and Systems for Video Technology, 17, 356, 10.1109/TCSVT.2007.890862
Rigg, 1964, The mood effects of music: A comparison of data from former investigators, Journal of Psychology, 58, 427, 10.1080/00223980.1964.9916765
Sapp, C. (2007). Comparative analysis of multiple musical performances. In Proceedings of the eighth international conference on music information retrieval (ISMIR 2007), Vienna, Austria (pp. 497–500).
Saunders, 2008, Using string kernels to identify famous performers from their playing style, Intelligent Data Analysis, 12, 425, 10.3233/IDA-2008-12408
Seashore, 1938
Sloboda, 1983, The communication of musical metre in piano performance, Quarterly Journal of Experimental Psychology, 35, 377, 10.1080/14640748308402140
Stamatatos, 2005, Automatic identification of music performers with learning ensembles, Artificial Intelligence, 165, 37, 10.1016/j.artint.2005.01.007
Sundberg, 2000, Emotive transforms, Phonetica, 57, 95, 10.1159/000028465
Temperley, 2007
Todd, 1989, A computational model of rubato, Contemporary Music Review, 3, 69, 10.1080/07494468900640061
Todd, 1992, The dynamics of dynamics: A model of musical expression, Journal of the Acoustical Society of America, 91, 3540, 10.1121/1.402843
Widmer, 2003, Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries, Artificial Intelligence, 146, 129, 10.1016/S0004-3702(03)00016-X
Widmer, 2003, In search of the Horowitz factor, AI Magazine, 24, 111
Widmer, 2004, Computational models of expressive music performance: The state of the art, Journal of New Music Research, 33, 203, 10.1080/0929821042000317804
Zanon, 2003, Estimation of parameters in rule systems for expressive rendering in musical performance, Computer Music Journal, 27, 29, 10.1162/01489260360613326