Week 10

🎙️ Yann LeCun

Lecture part A

In this section, we discuss the motivation behind self-supervised learning (SSL), define what it is, and see some of its applications in NLP and computer vision. We then look at how pretext tasks aid SSL, with example pretext tasks for images, videos, and videos with sound. Finally, we try to build an intuition for the representations that pretext tasks learn.
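To make the idea of a pretext task concrete, here is a minimal PyTorch sketch of one image pretext task mentioned in the lecture, rotation prediction. Everything here (network sizes, hyperparameters, the `rotate_batch` helper) is illustrative rather than the lecture's code; the point is only that the labels come for free from the transformation itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Return four rotated copies of each image plus the rotation labels.
    The labels are free: no human annotation is needed."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]  # 0/90/180/270 deg
    labels = torch.arange(4).repeat_interleave(x.size(0))           # one label per copy
    return torch.cat(rotations), labels

# Stand-in backbone: any convolutional feature extractor would do.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(64, 4)  # 4-way rotation classifier, discarded after pretraining

opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)

images = torch.randn(8, 3, 32, 32)  # placeholder for a batch of unlabelled images
x, y = rotate_batch(images)
loss = F.cross_entropy(head(backbone(x)), y)
opt.zero_grad()
loss.backward()
opt.step()
```

After pretraining, the rotation head is thrown away and the backbone's features are reused for the downstream task.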

Lecture part B

In this section, we discussed the shortcomings of pretext tasks, defined the characteristics of good pretrained features, and showed how clustering and contrastive learning can help us obtain them. We then learned about ClusterFit, its steps, and its performance. Finally, we dived deeper into PIRL, a simple framework for contrastive learning, discussing how it works as well as how it is evaluated in different contexts.
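As a rough illustration of the contrastive objective that PIRL-style methods optimize, below is a minimal NCE-style loss, assuming each image in a batch is paired with a transformed version of itself and every other image in the batch acts as a negative. PIRL itself additionally maintains a memory bank of negatives, which is omitted here; the function name and temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z_img, z_aug, temperature=0.07):
    """z_img[i] and z_aug[i] embed an image and its transformed version
    (the positive pair); every other row in the batch acts as a negative."""
    z_img = F.normalize(z_img, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    logits = z_img @ z_aug.t() / temperature   # (N, N) cosine-similarity matrix
    targets = torch.arange(z_img.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# In practice both embeddings would come from a shared backbone; random
# tensors stand in for them here.
z1 = torch.randn(16, 128, requires_grad=True)
z2 = torch.randn(16, 128, requires_grad=True)
loss = info_nce(z1, z2)
loss.backward()
```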

Practicum

During this week’s practicum, we explored the Truck Backer-Upper (Nguyen & Widrow, ‘90). This problem shows how to solve a nonlinear control problem using neural networks: we first learn an emulator that models the truck’s kinematics, then optimize a controller by backpropagating through this learned model. We find that the controller is able to learn complex behaviors from purely observational data; a minimal sketch of the recipe follows below.
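Here is that two-stage recipe in PyTorch, assuming a simple four-dimensional truck state and a scalar steering command; all module shapes, names, and hyperparameters below are illustrative, not the practicum’s actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, ctrl_dim = 4, 1  # e.g. (x, y, cab angle, trailer angle) and steering

emulator = nn.Sequential(nn.Linear(state_dim + ctrl_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))
controller = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                           nn.Linear(64, ctrl_dim), nn.Tanh())

# Stage 1: fit the emulator on observed (state, control, next_state) triples.
opt_e = torch.optim.Adam(emulator.parameters(), lr=1e-3)
s = torch.randn(256, state_dim)       # placeholders for logged trajectories
u = torch.randn(256, ctrl_dim)
s_next = torch.randn(256, state_dim)
loss_e = F.mse_loss(emulator(torch.cat([s, u], dim=1)), s_next)
opt_e.zero_grad()
loss_e.backward()
opt_e.step()

# Stage 2: freeze the emulator and train the controller by backpropagating
# the final-state error through a chain of emulator steps.
for p in emulator.parameters():
    p.requires_grad_(False)
opt_c = torch.optim.Adam(controller.parameters(), lr=1e-3)

state = torch.randn(32, state_dim)    # random initial truck poses
target = torch.zeros(32, state_dim)   # goal: trailer backed up to the dock
for _ in range(20):                   # unroll 20 steps through the emulator
    cmd = controller(state)
    state = emulator(torch.cat([state, cmd], dim=1))
loss_c = F.mse_loss(state, target)
opt_c.zero_grad()
loss_c.backward()
opt_c.step()
```

The key design choice is that the emulator is frozen in stage 2: it only provides a differentiable path from the steering commands to the final-state error, so all of the gradient signal goes into the controller.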

