Since Feb 2022 I’ve been writing our textbook on Deep Learning with an Energy perspective. It will come in two versions: an electronic one with a dark background for screens (freely available) and a physical one with a white background for print (for purchase).

I’ve finished writing the first three chapters and their corresponding Jupyter Notebooks:

  • Intro;
  • Spiral;
  • Ellipse.

Once the 4th chapter and notebook are done (end of Aug?), the draft will be submitted to the reviewers (Mikael Henaff and Yann LeCun). After merging their contributions (end of Sep?), a first draft of the book will be available to the public on this website.

Book format

The book is highly illustrated using $\LaTeX$’s TikZ and PGFPlots packages. The figures are generated numerically, with the computations done in Python using the PyTorch library. The output of these computations is stored as ASCII files, which $\LaTeX$ then reads and visualises. Moreover, most figures are also rendered in the Notebooks using the Matplotlib library.
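The export step described above can be sketched as follows. This is only an illustration of the Python-to-ASCII hand-off, not the book’s actual code; the file name `spiral.dat` and the helper `export_ascii` are assumptions of mine.

```python
# Hypothetical sketch: computations run in Python, and the results are
# written as a plain-text (ASCII) table that LaTeX's PGFPlots can later
# read with \addplot table. The file name "spiral.dat" is assumed.
import math

def export_ascii(path, rows, header=("x", "y")):
    """Write rows of floats as a whitespace-separated ASCII table."""
    with open(path, "w") as f:
        f.write(" ".join(header) + "\n")
        for row in rows:
            f.write(" ".join(f"{v:.6f}" for v in row) + "\n")

# Toy data: a spiral, echoing the book's second chapter / notebook.
points = [(t * math.cos(t), t * math.sin(t))
          for t in (i * 0.1 for i in range(100))]
export_ascii("spiral.dat", points)
```

On the $\LaTeX$ side, a file like this can then be pulled into a PGFPlots `axis` environment with `\addplot table {spiral.dat};`, which is what makes the Python-computed data appear in the book’s figures.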

Why plot with $\LaTeX$?

Because I can control every single aspect of what is drawn. If I define the hidden vector $\green{\vect{h}} \in \green{\mathcal{H}}$ in the book, I can have a pair of axes labelled $\green{h_1}$ and $\green{h_2}$ and the Cartesian plane labelled $\green{\mathcal{H}}$ without going (too) crazy. All my maths macros, symbols, fonts, font sizes, and colours are controlled by a single stylesheet called maths-preamble.tex.
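To make the idea concrete, a stylesheet like maths-preamble.tex might look something like the sketch below. The macro names `\vect`, `\green`, `\pink`, `\blue`, and `\orange` appear in the text, but their definitions and the colour values here are my assumptions, not the actual file.

```latex
% Hypothetical sketch of maths-preamble.tex; definitions and
% colour values are assumed, not taken from the actual book.
\usepackage{bm}      % bold maths symbols
\usepackage{xcolor}  % colour support
\newcommand{\vect}[1]{\bm{#1}}                   % bold vectors
\definecolor{mygreen}{HTML}{2A9D8F}              % assumed colour value
\newcommand{\green}[1]{\textcolor{mygreen}{#1}}
\newcommand{\pink}[1]{\textcolor{magenta}{#1}}
\newcommand{\blue}[1]{\textcolor{blue}{#1}}
\newcommand{\orange}[1]{\textcolor{orange}{#1}}
```

Centralising the notation this way means a change to one `\newcommand` propagates through every figure and equation in the book.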

Why colours

Because I think in colours. Hence, I write in colours. And if you’ve been my student, you already know that at the bottom left we’ll have a pink-bold-ex $\pink{\vect{x}}$ from which we may want to predict a blue-bold-why $\blue{\vect{y}}$, and there may be an orange-bold-zed $\orange{\vect{z}}$ lurking.

Illustration sneak peeks

To keep myself motivated and avoid going too crazy, I post the most painful drawings on Twitter, where my followers keep me sane by sending copious amounts of love ❤️. You can find a few of these tweets here.

Load tweets (may take a few seconds)

Last update: 26 Jul 2022.

Oct 2022 update

For the entire month of Aug and half of Sep I was stuck implementing a working sparse coding algorithm for a low-dimensional toy example. Nothing worked for a long while, although I eventually managed to get the expected result (see tweets below). Then I spent a couple of weeks on the new semester’s lectures, creating new content (slides below; video available soon) on back-propagation, a topic I’d never taught at NYU and which will make it into the book. Anyhow, now I’m back to writing! 🤓
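For readers unfamiliar with sparse coding: the problem is to represent an input $\pink{\vect{x}}$ as a sparse combination $\orange{\vect{z}}$ of dictionary atoms, i.e. to minimise $\frac{1}{2}\lVert \vect{x} - \mat{D}\vect{z} \rVert^2 + \lambda\lVert \vect{z} \rVert_1$. The post doesn’t say which algorithm the book uses; ISTA (iterative shrinkage-thresholding) is one classic choice, sketched here on a tiny, dependency-free toy example of my own invention.

```python
# Hypothetical sketch: ISTA for the sparse-coding problem
#   min_z  0.5 * ||x - D z||^2 + lam * ||z||_1
# This is an illustrative toy, not the book's implementation.

def ista(D, x, lam=0.1, step=0.1, n_iter=500):
    """Sparse-code x (list of m floats) over dictionary D (m x n)."""
    m, n = len(D), len(D[0])
    z = [0.0] * n
    for _ in range(n_iter):
        # residual r = D z - x
        r = [sum(D[i][j] * z[j] for j in range(n)) - x[i] for i in range(m)]
        # gradient of the quadratic term: D^T r
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (the "shrinkage")
        for j in range(n):
            v = z[j] - step * g[j]
            z[j] = max(abs(v) - step * lam, 0.0) * (1.0 if v > 0 else -1.0)
    return z

# Toy example: x equals the first atom, so the code should be sparse,
# concentrating its weight on that atom.
D = [[1.0, 0.0, 0.7],
     [0.0, 1.0, 0.7]]
x = [1.0, 0.0]
z = ista(D, x)
```

The soft-thresholding step is what produces exact zeros, which is why the recovered code is sparse rather than merely small.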

Load tweets (may take a few seconds)

Last update: 26 Sep 2022.