Computational Neuroscience Workshop, Online!

Hi guys,

Finally! After several hours of video editing and cursing (ok, it wasn’t that bad), here are the videos of the talks presented during the computational neuroscience workshop, held on May 7th of this year.

Let me remind you that this workshop was organized by students, for students. It is meant to be a way of pooling our knowledge of methods for the analysis of neuronal signals. The topics covered include linear regression, optimization, classification, decoding, dimensionality reduction, and information theory. Since the workshop, I have heard from many attendees how much it opened their minds to existing approaches in computational neuroscience and/or clarified some concepts. I myself have made great use of the talk on information theory, adding this aspect to my own data analysis.

By putting the talks online, we can now all benefit from the workshop presentations. If you are curious about or need help with any of these topics, you can start your research here. Beyond watching the talks, feel free to contact the speakers directly to get personalized advice on your specific problem. By sharing knowledge among ourselves, the whole community benefits.

A very big thank you to all those who were involved in organizing the event: Lennart Hilbert, Vladimir Grouza, Ananda Tay; and to the speakers and their collaborators: Greg Stacey, Nathan Friedman, Adam Schneider, Ashkan Golzar, Mohsen Jamali and Diego Mendoza. Without you, nothing would have been possible. And let's not forget CAMBAM, for providing the funds to offer our participants a free lunch.

Below you will find a listing of the talks, each accompanied by a description of its contents.

Frederic Simard,

On behalf of the CAMBAM student community.

Listing of the talks:

This video is the first in a series of two talks on the topics of regression, optimization, classification and decoding. In it you will learn the basics of linear regression, along with an explanation of the concept of a cost/loss function and of the gradient method for optimizing such a function.

You can find a code sample here:
RegressionExamples.m
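
For a quick taste of the idea, here is a minimal MATLAB sketch of my own (separate from RegressionExamples.m): fitting a line to noisy data by gradient descent on a mean-squared-error cost function. The toy data and variable names are illustrative.

    % Minimal sketch: fit y = w*x + b by gradient descent on the MSE cost.
    rng(0);                          % reproducible toy data
    x = linspace(0, 1, 50)';         % inputs
    y = 2*x + 0.5 + 0.1*randn(50,1); % noisy targets, true w = 2, b = 0.5

    w = 0; b = 0;                    % initial parameters
    eta = 0.5;                       % learning rate (step size)
    for iter = 1:2000
        yhat = w*x + b;              % model predictions
        err  = yhat - y;             % residuals
        % Gradients of the cost J = mean(err.^2)/2 w.r.t. w and b
        gw = mean(err .* x);
        gb = mean(err);
        w  = w - eta*gw;             % step downhill on the cost surface
        b  = b - eta*gb;
    end
    fprintf('fitted w = %.2f, b = %.2f\n', w, b);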

In this talk, the second in the series on linear regression, optimization, classification and decoding, I give a very brief overview of some machine learning classification algorithms. I explain what a linear classifier is and demonstrate both binary and multi-class classifiers. The algorithms presented are: Regularized Least Squares, Logistic Regression, Perceptron, Support Vector Machine, and Fisher’s Discriminant.
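
To make the idea of a linear classifier concrete, here is a minimal MATLAB sketch of one of the algorithms covered, the perceptron, learning a boundary between two toy clusters. The data are made up for the example and are not from the talk.

    % Perceptron sketch: learn a linear boundary w'*x + b = 0
    % separating two classes labeled +1 and -1.
    rng(1);
    X = [randn(20,2) + 2; randn(20,2) - 2];   % two separable clusters
    t = [ones(20,1); -ones(20,1)];            % class labels +1 / -1

    w = zeros(2,1); b = 0;
    for epoch = 1:100
        mistakes = 0;
        for n = 1:size(X,1)
            if t(n) * (X(n,:)*w + b) <= 0     % misclassified point
                w = w + t(n) * X(n,:)';       % nudge boundary toward it
                b = b + t(n);
                mistakes = mistakes + 1;
            end
        end
        if mistakes == 0, break; end          % converged: all points correct
    end
    predicted = sign(X*w + b);                % classify the training set
    fprintf('training accuracy: %.0f%%\n', 100*mean(predicted == t));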

In this talk, you will learn the basics of dimensionality reduction. The first algorithm presented is principal component analysis, which is based on the variance in the data set. You will learn how to select a subset of dimensions while retaining as much information about your data as possible, in order to, for example, train a classifier. A quick presentation of Gaussian Process Factor Analysis follows. This algorithm extracts trajectories of the system state in a lower-dimensional space.

You can find code sample packages here:
Principal Components Analysis Code Sample
Gaussian Process Factor Analysis Code Sample
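
As a complement to the packages above, here is a bare-bones MATLAB sketch of principal component analysis via the eigendecomposition of the covariance matrix. The toy data are my own, not from the linked code.

    % PCA sketch: project data onto the directions of largest variance.
    rng(2);
    X = randn(200, 2) * [3 0; 1 0.5];     % correlated 2-D toy data
    Xc = X - mean(X, 1);                  % center each dimension

    C = cov(Xc);                          % covariance matrix
    [V, D] = eig(C);                      % eigenvectors/eigenvalues
    [lambda, order] = sort(diag(D), 'descend');
    V = V(:, order);                      % principal components, largest first

    explained = lambda / sum(lambda);     % fraction of variance per component
    Z = Xc * V(:, 1);                     % scores on the first component
    fprintf('PC1 explains %.0f%% of the variance\n', 100*explained(1));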

Information theory, developed by Claude Shannon in 1948, provides mathematically rigorous tools to quantify how much information a system's output carries about its inputs, setting physical limits on a system's capacity for information transmission. In this talk I present a brief summary of the fundamental concepts underlying information theory, in the context of its application to neuronal signal processing.

A useful, well-documented MATLAB toolbox for calculating coherence and mutual information in neural systems can be found at www.chronux.org.
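
To get a feel for the quantities involved, here is a small MATLAB sketch of my own (it does not use Chronux) computing entropy and mutual information for discrete distributions straight from their definitions. The example distributions are made up.

    % Entropy of a discrete distribution, in bits
    p = [0.5 0.25 0.25];                       % a toy distribution
    H = -sum(p .* log2(p));                    % entropy (= 1.5 bits here)

    % Mutual information from a joint probability table
    Pxy = [0.4 0.1; 0.1 0.4];                  % joint distribution of (x, y)
    Px = sum(Pxy, 2); Py = sum(Pxy, 1);        % marginals
    I = 0;
    for i = 1:2
        for j = 1:2
            I = I + Pxy(i,j) * log2(Pxy(i,j) / (Px(i)*Py(j)));
        end
    end
    fprintf('H = %.2f bits, I = %.2f bits\n', H, I);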

Building on the fundamental concepts of information theory introduced in the first part, in this second part of the talk we present three methods for calculating the mutual information between a stimulus and the neural signal: the direct method, the upper-bound method, and the lower-bound method. We discuss the advantages and disadvantages of each method, as well as the assumptions inherent to each. We also show how information theory can address central questions in the field of neural coding.
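
As an illustration of the direct method, here is a minimal MATLAB sketch of my own: simulate repeated trials of a binary stimulus and a binary neural response, histogram the joint outcomes, and plug the empirical probabilities into the mutual information formula. The spike probabilities and trial counts are made up for the example.

    % Direct-method sketch: empirical joint probabilities -> mutual information
    rng(3);
    nTrials = 5000;
    stim = randi(2, nTrials, 1);                     % binary stimulus, 1 or 2
    pSpike = [0.2 0.7];                              % spike probability per stimulus
    resp = double(rand(nTrials,1) < pSpike(stim)');  % binary response, 0 or 1

    Pjoint = histcounts2(stim, resp, [0.5 1.5 2.5], [-0.5 0.5 1.5], ...
                         'Normalization', 'probability');
    Ps = sum(Pjoint, 2); Pr = sum(Pjoint, 1);        % marginals
    PsPr = Ps * Pr;                                  % independence prediction
    nz = Pjoint > 0;                                 % skip empty cells
    I = sum(Pjoint(nz) .* log2(Pjoint(nz) ./ PsPr(nz)));
    fprintf('direct-method estimate: %.2f bits per trial\n', I);

Keep in mind that with finite data, plug-in estimates like this tend to be biased upward, so take the number with a grain of salt.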
