Introduction and goals
If you have a question about this talk, please contact Damon Wischik.
I will start with a broad overview of what neural networks are, and how the backpropagation training algorithm works, in theory and in practice.
I will describe some interesting applications, some fascinating phenomena, and some notable neural network architectures.
I will finish by discussing the role that neural networks should play in data science, and by asking what might come next.
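The backpropagation training mentioned above can be illustrated in miniature. The sketch below (not from the talk itself) trains a single linear neuron y = w*x + b by gradient descent, with the backward pass written out by hand via the chain rule; the data and learning rate are illustrative choices.

```python
# Minimal sketch of gradient-descent training with a hand-derived
# backpropagation step, for a single linear neuron y = w*x + b.
# The general multi-layer case applies the same chain rule per layer.

def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b        # forward pass
            err = pred - y          # dLoss/dpred for loss = 0.5 * err**2
            w -= lr * err * x       # backward pass: dLoss/dw = err * x
            b -= lr * err           # backward pass: dLoss/db = err
    return w, b

# Fit the line y = 2x + 1 from three noiseless samples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = train(data)
```

After a few hundred passes over the data, w and b converge close to the true slope 2 and intercept 1.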
For slides and code snippets, see the talk archive.
This talk is part of the Mathematics and Machine Learning series.
