# Machine Learning

## My path learning machine learning

These resources describe my path toward learning ML (or, in hindsight, a somewhat idealized version of it) over roughly six months, in my free time as a physics postdoc. Not every step is necessary, and not every resource needs to be read in depth. The path is tailored to someone with a physics background who already has coding skills and math knowledge.

### Early-stage: basic familiarity (month 1).

Basic neural networks: Karpathy lectures and Nielsen's book on neural networks.
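To make the basics concrete, here is a minimal sketch in the spirit of the early chapters of Nielsen's book: a single sigmoid neuron trained by gradient descent on a toy task. The dataset, learning rate, and epoch count are illustrative choices, not taken from any of the resources above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One sigmoid neuron: two weights and a bias, trained by gradient
# descent on the quadratic cost C = (a - y)^2 / 2.
w1, w2, b = 0.1, -0.1, 0.0
lr = 2.0
for epoch in range(2000):
    for (x1, x2), y in data:
        a = sigmoid(w1 * x1 + w2 * x2 + b)
        # dC/dz, using a = sigmoid(z) and sigmoid'(z) = a * (1 - a).
        delta = (a - y) * a * (1 - a)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # expected: [0, 1, 1, 1]
```

Stacking layers of such neurons and propagating `delta` backward through them is backpropagation, the subject of Karpathy's lectures and Nielsen's book.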

Machine learning for physicists review article (Pankaj Mehta et al.), for a detailed mathematical introduction to general ML techniques. Worth a close read.

Find which textbooks people highly recommend, and skim their tables of contents. Become familiar with the terminology and content, at least at a superficial level.

Twitter, for getting a sense of the current state of the field.

A systematic refresher on Python basics.

### Mid-stage: identifying core problems (month 2).

Focus on identifying core problems and categorizing ways to solve them.

Explore emerging technology.

Begin exploring cutting-edge papers. Build up a queue for later, but get a rough sense of their content.

Read popular articles by trusted figures in the field, and get a sense for their perspective and their predictions.

Interview friends, professors, and others to help identify important problems.

Systematically refresh any missing basic knowledge or core competencies (e.g., statistics).

### Late-stage: learning techniques, solving problems (months 3 - 6).

Implement algorithms and networks yourself (play with transformers, implement the AlphaGo Zero algorithm, etc.).
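As an example of the kind of from-scratch exercise this stage involves, here is a minimal pure-Python sketch of scaled dot-product self-attention, the core operation of a transformer. The tiny three-token example and the choice to use the embeddings directly as queries, keys, and values (with no learned projections) are illustrative simplifications.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors:
    softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Each output is a weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Three tokens with 2-dimensional embeddings.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = self_attention(x, x, x)
print([[round(v, 3) for v in row] for row in y])
```

Rewriting this with matrix operations, adding learned projection matrices, and stacking multiple heads recovers the full multi-head attention layer.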

Deep dive core recent papers in the field.

Online courses and textbooks.

(I have not thoroughly read each of these, but they represent the best I've encountered for this stage of learning. Skim or read in depth, and dive into the papers referenced in the classes when relevant.)

Find a hands-on course in reinforcement learning. I did Delta Academy, which is unfortunately no longer being run.

Deep reinforcement learning course CS285 at Berkeley (Sergey Levine).

Deep unsupervised learning course CS294 at Berkeley (Pieter Abbeel).

Stanford CS 25 on transformers (Chris Manning).

OpenAI's Spinning Up in Deep RL.

Deep learning book (Goodfellow).

Reinforcement learning textbook (Sutton and Barto).
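The central algorithm of the Sutton and Barto book fits in a few lines; here is a sketch of tabular Q-learning on a toy five-state corridor (start at state 0, reward for reaching state 4). The environment and all hyperparameters are illustrative choices, not taken from the book.

```python
import random

random.seed(0)
n_states, n_actions = 5, 2   # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: bootstrap from the best next action,
        # with no bootstrap term at the terminal state.
        target = r + (0.0 if s_next == 4 else gamma * max(Q[s_next]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s_next

greedy = [0 if q[0] > q[1] else 1 for q in Q[:4]]
print(greedy)  # the learned policy moves right everywhere: [1, 1, 1, 1]
```

Replacing the table with a neural network approximator is the step from this to the deep RL covered in CS285 and Spinning Up.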

Work on your own independent research projects.