Masaru Fujieda, Takahiro Murakami, and Yoshihisa Ishida
References
[1] H. Kameoka, T. Nishimoto, and S. Sagayama, Extraction of multiple fundamental frequencies from polyphonic music using harmonic clustering, Proc. 18th Int. Cong. Acoustics (ICA 2004), 2004, 59–62.
[2] K. Kashino, T. Kinoshita, K. Nakadai, and H. Tanaka, Chord recognition mechanisms in the OPTIMA processing architecture for music scene analysis, Trans. Inst. Electronics, Information and Communication Engineers, J79-D-2 (in Japanese), 1996, 1762–1770.
[3] M. Goto, A predominant-F0 estimation method for CD recordings: MAP estimation using EM algorithm for adaptive tone models, Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 2001, 3365–3368.
[4] M. Goto, A predominant-F0 estimation method for polyphonic musical audio signals, Proc. 18th Int. Cong. Acoustics (ICA 2004), 2004, II-1085–1088.
[5] K. Miyamoto, H. Kameoka, T. Nishimoto, N. Ono, and S. Sagayama, Harmonic-temporal-timbral clustering (HTTC) for the analysis of multi-instrument polyphonic music signals, Proc. 2008 IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP 2008), 2008, 113–116.
[6] A.P. Klapuri, Multiple fundamental frequency estimation based on harmonicity and spectral smoothness, IEEE Trans. Speech and Audio Processing, 11(6), 2003, 804–816.
[7] M.P. Ryynänen and A.P. Klapuri, Automatic transcription of melody, bass line, and chords in polyphonic music, Computer Music Journal, 32(3), 2008, 72–86.
[8] R. Zhou and M. Mattavelli, A new time-frequency representation for music signal analysis: resonator time-frequency image, Proc. 9th Int. Symp. Signal Processing and Its Applications (ISSPA '07), 2007.
[9] R. Zhou, Feature extraction of musical content for automatic music transcription, Ph.D. thesis, Swiss Federal Institute of Technology, Lausanne, Switzerland, October 2006. Available at http://library.epfl.ch/en/theses/?nr=3638.
[10] P. Smaragdis and J.C. Brown, Non-negative matrix factorization for polyphonic music transcription, Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2003, 177–180.
[11] E. Vincent, N. Berlin, and R. Badeau, Harmonic and inharmonic nonnegative matrix factorization for polyphonic pitch transcription, Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing 2008 (ICASSP 2008), 2008, 109–112.
[12] D. Matsuyama, M. Natsui, and Y. Tadokoro, Consideration of pitch estimation method for piano chords consisting of many notes using cascaded seven comb filters, Tech. Rep., Inform. Processing Society of Japan SIG, 2007-MUS-71 (in Japanese), 2007, 167–172.
[13] T. Saito, T. Matsui, H. Honda, and Y. Tadokoro, Real-time realization of scale detection based on comb filters using DSPs, Journal of SICE (in Japanese), 34, 1998, 504–509.
[14] G. Agostini, M. Longari, and E. Pollastri, Musical instrument timbres classification with spectral features, EURASIP Journal on Applied Signal Processing, 1, 2003, 5–14.
[15] N. Cristianini and J. Shawe-Taylor, An introduction to support vector machines and other kernel-based learning methods (Cambridge, UK: Cambridge University Press, 2000).
[16] M. Goto, H. Hashiguchi, T. Nishimura, and R. Oka, RWC music database: Music genre database and musical instrument sound database, Proc. 4th Int. Conf. Music Information Retrieval (ISMIR 2003), 2003, 229–230.
[17] J.S. Downie, Music information retrieval evaluation exchange (MIREX), http://www.music-ir.org/mirex/2009.
[18] R. Zhou, J.D. Reiss, M. Mattavelli, and G. Zoia, A computationally efficient method for polyphonic pitch estimation, EURASIP Journal on Advances in Signal Processing, 2009, 2009, 11.