Hello. Welcome again to the course on Signal Processing for Music Applications. We are in the last week of the course, and this week we do not have a single coherent topic like we had every other week. Instead, we're going to cover several small topics, topics that will help us wrap up the course: reviewing some aspects, identifying some trends, and complementing the material to help you see what you could do next and what other things might interest you within this topic. In particular, in this lecture I want to go over the idea of going beyond audio signal processing in music applications. In the case of music, what other things can we do beyond what we have been talking about? We'll talk about two aspects. One is within the field of audio signal processing: we will identify some of the topics that we have covered, but that maybe we have covered a little lightly, and where there is room for much more in-depth study. And then I want to talk about aspects that also relate to analyzing music signals, or musical information, but that are not strictly based on audio signal processing techniques, and that can therefore complement quite well the kinds of things we have been doing. Let's first talk about audio signal processing beyond this course, and identify some topics that either we have touched only lightly or that we have not touched, and that could deserve more attention if we had time. Here is a list of such topics. The first one is the detection and estimation of sinusoids. That's a pretty big topic of which we did only a first approximation; we simplified the problem quite a bit. Clearly, there are many more advanced techniques and methods that can be used to detect a sinusoid within a complex signal and estimate its values. Another topic is the idea of partial tracking.
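The simple sinusoid detection used in the course was based on picking local maxima of the magnitude spectrum above a threshold and refining each peak with parabolic interpolation. A minimal sketch of that idea (the function name and default threshold are my own illustrative choices, not the course's exact code) might look like this:

```python
import numpy as np

def detect_sinusoids(mX, threshold=-60.0):
    """Detect spectral peaks above a magnitude threshold (in dB) and
    refine their locations with parabolic interpolation. This is the
    simple first-approximation approach; more advanced estimators exist."""
    # local maxima above the threshold (spectrum edges excluded)
    above = mX[1:-1] > threshold
    greater_prev = mX[1:-1] > mX[:-2]
    greater_next = mX[1:-1] > mX[2:]
    ploc = np.where(above & greater_prev & greater_next)[0] + 1
    # parabolic interpolation around each peak for sub-bin accuracy
    val = mX[ploc]
    lval = mX[ploc - 1]
    rval = mX[ploc + 1]
    iploc = ploc + 0.5 * (lval - rval) / (lval - 2 * val + rval)
    ipmag = val - 0.25 * (lval - rval) * (iploc - ploc)
    return iploc, ipmag
```

The refined locations `iploc` are in fractional bins; multiplying by the sampling rate over the FFT size converts them to Hz.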
Again, we tracked the harmonics or partials of an audio signal using a quite simple technique that was sufficient for the kinds of sounds we analyzed. But this can be made much more sophisticated: we can develop techniques that track the behavior of partials over time using more advanced methodologies. Then there is the idea of transient modeling, which is something we didn't touch, but that maybe you experienced when analyzing and synthesizing sounds: one of the parts of a sound that is hardest to remodel is the transients. When we have the attack of a note, or a very sharp change in the signal, there are quite a few approaches that go beyond what we have done; they identify these transient regions and apply specific methodologies to handle them, independently of the steady states, the more stable parts of the sound, which can be analyzed with the kinds of techniques we used. Another topic is multi-resolution analysis, and that is typically cited as one of the biggest shortcomings of the Fourier transform approach, or at least one of the things people say it clearly lacks. The FFT treats all frequencies, the whole spectrum, with a uniform, linear resolution. And this is not ideal, especially for audio signals, when we take perception into account: our perceptual system does not have a linear frequency resolution like the FFT's. So there have been quite a few attempts to take care of that and to develop spectral analysis techniques that account for this idea of multi-resolution, of having different resolutions in different parts of the spectrum, in different frequency ranges. In fact, this can even be done with the Fast Fourier Transform itself, and in the lab of this week, if you are interested, you can explore that; we propose it as an option for one of this week's assignments.
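One simple way to get multi-resolution analysis out of the plain FFT is to analyze the same signal location with several window sizes and keep, from each resulting spectrum, only the frequency band where that resolution is wanted: long windows for the low frequencies, short windows for the high ones. The band edges and window sizes below are illustrative choices of mine, not prescribed values:

```python
import numpy as np

def multires_spectrum(x, fs,
                      bands=((0, 1000, 4096),
                             (1000, 5000, 1024),
                             (5000, 22050, 256))):
    """Sketch of multi-resolution analysis with the FFT.
    Each band is (low Hz, high Hz, window size); x must be at least
    as long as the largest window. Returns one (freqs, mag_dB) pair
    per band, all centered on the middle of x."""
    center = len(x) // 2
    out = []
    for lo, hi, M in bands:
        w = np.hamming(M)
        seg = x[center - M // 2: center + M // 2] * w
        N = M  # FFT size equal to window size, for simplicity
        mX = 20 * np.log10(np.abs(np.fft.rfft(seg, N)) + 1e-12)
        freqs = np.arange(len(mX)) * fs / N
        sel = (freqs >= lo) & (freqs < hi)
        out.append((freqs[sel], mX[sel]))
    return out
```

Stitching the selected bands together gives a single composite spectrum with fine frequency resolution at the bottom and fine time resolution at the top.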
Another area is the residual analysis that we have been doing. We did a quite simple analysis of the residual of a signal: we subtracted the sinusoids from the signal, and we approximated this residual with a quite simple stochastic model. Again, there is room for many new developments in approximating this residual, either from a perceptual point of view or from a source point of view, developing specific models for particular types of signals; quite a bit of work has been done in these areas. And finally, the idea of synthesis. We have been synthesizing sinusoids and filtered noise for the harmonic plus stochastic type of modeling, with a quite efficient implementation based on the inverse FFT. But quite a few methodologies have been proposed as alternatives, and we can synthesize sine waves and noise with other approaches. Then, as a separate topic, one that is huge and would deserve many new classes like this one, there is the idea of completely different modeling approaches. We have taken a spectrum-based approach to modeling, an approach based on analyzing sinusoids and obtaining the residual. But there are other modeling approaches that can be used for these or other applications: we can model sounds using physical modeling approaches, or using transforms other than the Fourier transform, and in general there have been many proposals for modeling and synthesizing sounds that are quite different from what we have been talking about. So anyway, the idea is that there is a lot out there on audio signal processing applied to music, and I don't want you to get the impression that what we have been doing is a comprehensive view of the field of audio signal processing for music applications.
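To make the inverse-FFT synthesis of the stochastic part concrete: one frame of filtered noise can be generated by taking a smoothed spectral envelope, attaching random phases, and inverse-transforming. This is a minimal sketch of that idea under my own simplifying assumptions (a single frame, a Hann window for later overlap-add), not the course's exact implementation:

```python
import numpy as np

def stochastic_synth_frame(env_db, N):
    """Synthesize one frame of filtered noise from a smoothed spectral
    envelope (in dB, with N//2 + 1 points), the way the stochastic part
    of a harmonic plus stochastic model can be generated with the
    inverse FFT. Successive frames would be overlap-added."""
    mY = 10 ** (env_db / 20)                     # dB -> linear magnitude
    pY = 2 * np.pi * np.random.rand(N // 2 + 1)  # random phases = noise
    Y = mY * np.exp(1j * pY)                     # complex half-spectrum
    y = np.fft.irfft(Y, N)                       # one frame of filtered noise
    return y * np.hanning(N)                     # window for overlap-add
```

Because only the magnitude envelope is kept and the phases are random, each call produces a different noise frame with the same spectral shape, which is exactly the stochastic approximation of the residual.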
Now, this was a particular view that I think is quite powerful, and I hope you appreciate its potential, but there is much more than this. Let me now mention a few topics that really go beyond audio signal processing. The idea is that, when we want to study music from an engineering perspective, there is much more than just the audio. Audio is a fundamental part, but music is a much more complex phenomenon, and we can obtain different types of data to study it. For example, if we focus just on the types of data and signals, we can also analyze the scores or the lyrics of songs, or we can study the gestures in a video recording of a particular performance, and we can apply signal-processing-related methodologies to these types of signals. But we can also have a lot of textual data, contextual or community information, meaning comments about the music and descriptions of the music, from which we can extract quite meaningful information to describe a piece of music or characterize a particular recording. In terms of the methodologies to analyze this type of data, apart from the signal processing topics, there are quite a few areas of mathematics and engineering that are quite useful. Of course, statistical analysis is a very big topic, of which we have mentioned a few things; it has evolved a lot, and there are quite interesting methodologies that can be used to characterize complex phenomena, and music in particular. Also pattern analysis: if we think of any of these types of data as time information, the idea of patterns is a very important concept to extract, and within that field there has been quite a lot of progress in methodologies that allow us to identify patterns in general and that can be applied to these types of music signals. And finally, there is the area of machine learning, which we introduced last week.
We used it to talk about clustering and classification of sounds within a collection, and it has yielded very sophisticated methodologies that can be applied to a wide variety of problems and a wide variety of data: methodologies to learn from data, so that we can automatically extract knowledge from it. Then there is another area of research, of methodologies, which is what we call semantic technologies, or the semantic web. That is a quite recent field of research that has again brought quite a few new approaches to understanding data and to extracting information about data. Things like network analysis: when you have a corpus of data and you want to find relationships between data points, network analysis can give us a lot of insight into the structure of those relationships. Ontologies are another area of computer science, coming from these semantic web approaches, that have been a very good way of structuring data: ways to describe the relationships between entities within a particular field of knowledge, in this case music, that can help us a lot in describing and analyzing this data. And finally, well, music is not just data. Music involves people; it involves relationships. So user-centered studies, of which there are many types, can also bring quite a bit of insight into music: all the issues of perception and cognition, how we perceive music, and developing experiments, user-driven experiments, on the interaction between people and music, can definitely give us a lot of insight. And the field of human-computer interaction, so the more interface-oriented aspect of things, is also a very fruitful area of study. We interact with instruments, we interact with music, and we need interfaces.
So the study of these interfaces, and of how they relate to our understanding and use of music, is also a very fruitful area of study that can help us in understanding and modeling music signals. Anyway, these were just some highlights, a mention of some things that hopefully give you a much broader view of what we're talking about, and that clearly extend our field of study in many different directions, to many fields of study and many methodologies. There are a lot of very interesting topics around these areas. In terms of references for this type of material, since I have been talking about so many things, the field is of course huge and you can search in many different places. On the SMS page of the MTG, there is some information, especially links to articles that extend the types of analysis and synthesis techniques we have been doing and that introduce many other approaches for modeling, parameterizing, and synthesizing sounds. That's a good source of articles you can look at. There is also a roadmap on music information research that was recently published, for which you have the link here; that's a good source of directions for studying music information, bringing in all these different new areas of study and identifying which things are interesting. And in terms of these different methodological fields, in Wikipedia you can find a lot of references for statistics, for machine learning, and for these semantic types of analysis, also called knowledge representation and reasoning. For the user-oriented types of studies, you can look at music psychology and at human-computer interaction. Through them, I'm sure you will be able to find a lot of very interesting literature that describes studies where music is at the center, but approached from quite different perspectives.
Of course, these slides are all available on the SMS tools page, and that was all. This was just one lecture in this week that tried to open up what we have been talking about: opening up within audio signal processing, but also outside signal processing. Hopefully that gave you some insight into new avenues that you can take. Thank you very much, and see you next class.