Machine learning scholar adventure: Chapter 3

What progress did I make?

Machine learning

Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD

The fastai book has been my main focus since my last progress update in MLSA: Chapter 2. I have also been supplementing the book by watching the related online course videos.


I really like the fastai book because its approach is based on top-down learning, where the emphasis is on using deep learning as soon as possible. This was really powerful: the field's myths were demystified rapidly, I got to see the capability of the models available for a range of data, and I was simultaneously inspired by the opportunity to dig deeper into how everything worked. Below, I share some of the cool things I explored.


Chapter 1: Your deep learning journey

I upgraded my understanding of how machine learning works in general. I really liked how the history of the field was brought in to place the progress in context. The first deep learning example covered was just six lines of code that classified photos as cat vs dog. With the fastai library, a pretrained resnet34 model was downloaded, after which transfer learning was applied to fit the model to the Oxford-IIIT Pet dataset.
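For flavour, here's roughly what that six-line example looks like (reproduced from memory, so details may differ slightly from the printed version; later fastai releases renamed `cnn_learner` to `vision_learner`):

```python
from fastai.vision.all import *

# Download and extract the Oxford-IIIT Pet dataset
path = untar_data(URLs.PETS)/'images'

# In this dataset, cat images have filenames starting with an uppercase letter
def is_cat(x): return x[0].isupper()

# Build DataLoaders straight from the filenames
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Transfer learning: start from a pretrained resnet34 and fine-tune
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```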


What did I find most surprising? I couldn't believe how quick the model training process was! With just one epoch of fine-tuning, the error rate on the validation set was just 1%! Furthermore, it was super cheap to train: I used a free GPU on Paperspace. I had no idea that transfer learning could be this powerful!

It was also cool seeing how transforming data between different forms can let you reuse vision-based models. For example, you can turn audio data into spectrograms, then train a CNN on those images to classify different kinds of sounds. It was fascinating to see the versatility of the fastai library at work via more top-down exploration of varied models.
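To make the spectrogram idea concrete, here's a minimal sketch of my own (not from the book) using librosa; the filename `siren.wav` is just a placeholder:

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load an audio file (placeholder name) and compute a mel spectrogram
y, sr = librosa.load("siren.wav")
S = librosa.feature.melspectrogram(y=y, sr=sr)
S_db = librosa.power_to_db(S, ref=np.max)  # convert power to decibels

# Render the spectrogram as an image a CNN could be trained on
librosa.display.specshow(S_db, sr=sr)
plt.savefig("siren.png")
```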

Other concepts that were touched upon included:

  • The importance of splitting data into training, validation and test sets (see the sketch after this list)
  • How to think about creating representative splits
  • Model hyperparameters
  • Common AI mistakes in organisations
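As a refresher on the splitting idea, here's a small sketch of my own (not from the book) using scikit-learn; the toy DataFrame and the 60/20/20 proportions are illustrative:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy labelled dataset; in practice this would be your real data
df = pd.DataFrame({"feature": range(100), "label": [0, 1] * 50})

# Carve off a held-out test set first, stratifying on the label so
# each split keeps the same class proportions (a representative split)
train_valid, test = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42)

# Split the remainder into training and validation sets;
# 0.25 of the remaining 80% leaves 20% of the full data for validation
train, valid = train_test_split(
    train_valid, test_size=0.25, stratify=train_valid["label"],
    random_state=42)
```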

After all that learning, it was wonderfully fun to take the helpful end-of-chapter quiz to reinforce my understanding.


Chapter 2: From model to production

This chapter focused on the entire machine learning pipeline. It was epic to gain a systematic, experiential understanding of how to actually apply models after training them. The Drivetrain Approach was an especially helpful framework introduced to aid in designing data products. The steps of the process are:

  • Defined objective – What are you trying to achieve?
  • Levers – What inputs can you control to get closer to the defined objective?
  • Data – What inputs can you collect?
  • Model – How can you learn how the levers influence the objective?

With the above process in mind, I am now equipped with a methodical approach for tackling future data-driven projects.


This chapter also described what the current cutting edge of deep learning can do in computer vision, text, combining text and images, tabular data and recommendation systems, which was quite impressive. It was highly motivating to discover the breadth of applications! I can't believe I get to be a part of all this innovation!


One of the key parts of the fastai library introduced here was the DataLoaders class. What is it for? Whenever you have non-standard data that doesn't load easily using fastai's factory methods, you can tell the library about the structure of your dataset using the DataBlock API. This turns your data into DataLoader objects. The DataLoaders class wraps these DataLoader objects and facilitates transferring data to the GPU. This capability came into play in the book's bear-classification tutorial, where bear images were sourced using the Bing Image Search API.


What kind of bears? Black bears, grizzly bears and, of course, teddy bears! Once the data was downloaded, the DataBlock API was used to load it into fastai and produce the standard DataLoaders. With these in hand, it was trivial to use transfer learning to train our classifier by fine-tuning a pretrained resnet18 model, as sketched below.
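Here's roughly the DataBlock and training code from the book (from memory, so details may differ; `path` is assumed to point at one folder of images per bear type):

```python
from fastai.vision.all import *

bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),               # images in, categories out
    get_items=get_image_files,                        # collect image files under path
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # hold out 20% for validation
    get_y=parent_label,                               # label = parent folder name
    item_tfms=Resize(128))                            # resize every image

dls = bears.dataloaders(path)  # build the train/valid DataLoaders

# Transfer learning from a pretrained resnet18
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
```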


It was then easy for me to grok how confusion matrices work when the confusion matrix for the bear example was showcased. Following this, the rest of the chapter showed how to build a GUI using IPython widgets in a Jupyter notebook, and how to deploy the trained model using GitHub and Binder. You can play with the bear classifier here.
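For reference, the interpretation step looks roughly like this (again from memory; it assumes `learn` is the fitted Learner from above):

```python
from fastai.vision.all import ClassificationInterpretation

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()      # rows = actual class, columns = predicted
interp.plot_top_losses(5, nrows=1)  # the images the model got most wrong
```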


Further play

Following my completion of the tutorial, I watched lecture 2 of the video course to reinforce my understanding. I learned that p-values are not the best measure for determining whether a model is actually correct, though I still need to look into this more as my understanding isn't solid yet. However, thanks to the lecture, I do have a grasp of domain shift and out-of-domain data, which I'll be wary of whenever I'm creating data-driven products.

Finally, I also watched the fastai lesson 5 video on data ethics to prime my mind for playing through chapter 3. From the overview the lecture gave me, I am convinced that data ethics is paramount for any practitioner in the field, as technology, and machine learning especially, is already having a huge impact on our lives!

What am I exploring now?

What did I learn from the challenges I’ve conquered?

  • When stuck on concepts in fastai, search the forum first
  • Always have some standby tasks ready when training models so that I can stay productive during long training runs

What are my next steps?

