Week 9

$$\gdef \sam #1 {\mathrm{softargmax}(#1)}$$ $$\gdef \vect #1 {\boldsymbol{#1}} $$ $$\gdef \matr #1 {\boldsymbol{#1}} $$ $$\gdef \E {\mathbb{E}} $$ $$\gdef \V {\mathbb{V}} $$ $$\gdef \R {\mathbb{R}} $$ $$\gdef \N {\mathbb{N}} $$ $$\gdef \relu #1 {\texttt{ReLU}(#1)} $$ $$\gdef \D {\,\mathrm{d}} $$ $$\gdef \deriv #1 #2 {\frac{\D #1}{\D #2}}$$ $$\gdef \pd #1 #2 {\frac{\partial #1}{\partial #2}}$$ $$\gdef \set #1 {\left\lbrace #1 \right\rbrace} $$
🎙️ Yann LeCun

Lecture part A

We discussed discriminative recurrent sparse auto-encoders and group sparsity. The main idea was to combine sparse coding with discriminative training. We went through how to structure such a network with a LISTA-like recurrent encoder and a decoder. Then we discussed how to use group sparsity to extract invariant features.
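The two ingredients above can be sketched in a few lines of numpy. This is an illustrative sketch, not the lecture's exact formulation: `lista_encode` unrolls a fixed number of ISTA-style iterations $z \leftarrow \mathrm{shrink}(W_e x + S z)$, and `group_sparsity_penalty` assumes the code is partitioned into consecutive groups of equal size (the L1-of-L2 penalty); all function names are hypothetical.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise shrinkage non-linearity used in (L)ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def lista_encode(x, W_e, S, lam, n_iter=3):
    """Unrolled recurrent encoder: z <- shrink(W_e x + S z), repeated n_iter times."""
    b = W_e @ x
    z = soft_threshold(b, lam)
    for _ in range(n_iter):
        z = soft_threshold(b + S @ z, lam)
    return z

def group_sparsity_penalty(z, group_size):
    """L1-of-L2 penalty: sum of L2 norms over consecutive groups of the code.

    Zeroing out whole groups at once encourages features within a
    surviving group to co-activate, which is what yields invariance.
    """
    groups = z.reshape(-1, group_size)
    return np.sqrt((groups ** 2).sum(axis=1)).sum()
```

With `S = 0` the encoder reduces to a single shrinkage step, so `lista_encode(np.array([2.0, 0.5]), np.eye(2), np.zeros((2, 2)), lam=1.0)` returns the sparse code `[1.0, 0.0]`.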

Lecture part B

In this section, we talked about World Models for autonomous control, including the neural network architecture and training scheme. Then, we discussed the differences between World Models and Reinforcement Learning (RL). Finally, we studied Generative Adversarial Networks (GANs) as energy-based models trained with a contrastive method.
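To make the energy-based reading of GANs concrete, here is a minimal numpy sketch of one common contrastive loss for energy-based models (a margin/hinge form, shown as an illustrative assumption rather than the lecture's exact objective; function names are hypothetical). The discriminator plays the role of an energy function: its loss pushes energy down on real samples and up on generated ones, while the generator tries to produce low-energy samples.

```python
import numpy as np

def contrastive_d_loss(e_real, e_fake, margin):
    """Discriminator-as-energy loss: push energy down on real samples
    and up on fake samples, until the fakes clear the margin."""
    return np.mean(e_real) + np.mean(np.maximum(0.0, margin - e_fake))

def generator_loss(e_fake):
    """The generator seeks low-energy (i.e. realistic-looking) samples."""
    return np.mean(e_fake)
```

Note that once every fake sample has energy above `margin`, the second term vanishes and the discriminator only keeps lowering the energy of real data.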

Practicum

During this week’s practicum, we explored Generative Adversarial Networks (GANs) and how they can generate realistic samples. We then compared GANs with the VAEs from week 8 to highlight the key differences between the two models. Next, we discussed several limitations of GANs. Finally, we looked at the source code for the PyTorch example of Deep Convolutional Generative Adversarial Networks (DCGAN).
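The adversarial objectives driving DCGAN training can be sketched with plain numpy. This is a minimal illustration of the standard (non-saturating) GAN losses computed from discriminator logits, not the practicum's actual PyTorch code; the function names are hypothetical.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def gan_losses(d_real_logits, d_fake_logits):
    """Binary cross-entropy GAN objectives from raw discriminator scores.

    The discriminator is trained to output 1 on real data and 0 on fakes;
    the generator uses the non-saturating loss, maximizing log D(G(z)).
    """
    d_loss = -np.mean(np.log(sigmoid(d_real_logits))
                      + np.log(1.0 - sigmoid(d_fake_logits)))
    g_loss = -np.mean(np.log(sigmoid(d_fake_logits)))
    return d_loss, g_loss
```

At the start of training, when the discriminator is maximally uncertain (all logits near 0, so $D \approx 0.5$ everywhere), the discriminator loss sits at $2\log 2$ and the generator loss at $\log 2$; training drives the two losses in opposite directions.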
