Machine learning scholar adventure: Chapter 4

Spring is just around the corner, but still I machine learn!

What progress did I make?

Machine learning

Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD

The fastai book has remained my main focus since my last progress update, MLSA: Chapter 3.

Chapter 3: Data ethics

I gained further insight into the ethical implications of AI technology. It was the first time I realised that, like any other technology, data-based creations can be misused. Humans generally make technology to improve life, but there is no guarantee that something intended for defence, or for making some part of life easier, won't be adapted by malevolent actors for nefarious ends.

The common ethical issues covered were recourse processes, feedback loops and bias, all backed by specific examples. From a badly designed healthcare system that left patients without required care, to Facebook's recommendation system promoting a plethora of conspiracy groups once a user joins just one such group, to racially biased facial recognition systems from Amazon, I was left with the understanding that data ethics needs to be considered as part of machine learning product design.

With respect to bias, I learned that there are different types of social bias that tend to influence machine learning models:

  • Historical bias – Originates from bias already present in people, processes and society
  • Representation bias – Arises when the data sample under-represents parts of the population, leading the model to latch onto and amplify an easy-to-spot relationship
  • Measurement bias – Arises from measurement mistakes, e.g. measuring the wrong thing, measuring it in the wrong way, or using the measurement incorrectly
  • Evaluation bias – Occurs when the evaluation or benchmark data is not representative of the target population
  • Aggregation bias – Results from a model failing to combine data in a way that incorporates all the relevant factors, e.g. omitted data leading to missing interactions
  • Deployment bias – Created by a mismatch between what a model is intended for and how it is used in practice

Chapter 4: Under the hood – Training a Digit Classifier

This chapter focuses on the underlying mechanics of neural networks. I finished watching the lecture 3 video, which covered creating a baseline model for the MNIST dataset. I learned that before going straight to a complicated deep learning solution, it's best to see whether there's a simple rule or heuristic benchmark that can be set up quickly; then, when you do start building a machine learning model, you actually have something to measure improvement against (a quick sketch of the idea is below). The lecture also covered the intuition behind stochastic gradient descent, which was really cool. I'm yet to finish solidifying my intuition on this, so I'll leave a summary explanation for the next update.
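To make the baseline idea concrete, here's a minimal sketch of a pixel-similarity baseline along the lines of the book's 3-vs-7 MNIST example; the tensor names are my own illustrative assumptions rather than the book's exact code:

```python
import torch

# Assumed inputs (hypothetical names): train_3s and train_7s are float
# tensors of shape (n, 28, 28) with pixel values in [0, 1], e.g. loaded
# from fastai's MNIST_SAMPLE subset.

def mnist_distance(a, b):
    # Mean absolute pixel difference over the height/width dimensions
    return (a - b).abs().mean((-1, -2))

def is_3(x, mean3, mean7):
    # An image counts as a 3 if it's closer to the average 3 than to the average 7
    return mnist_distance(x, mean3) < mnist_distance(x, mean7)

# "Ideal" digits: the pixel-wise mean of all training examples per class
# mean3, mean7 = train_3s.mean(0), train_7s.mean(0)
# baseline_accuracy = is_3(valid_3s, mean3, mean7).float().mean()
```

The appeal of this benchmark is that it involves no training at all, so any learned model that can't beat it clearly isn't earning its complexity.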

Further exploration

Project(s):
  • Flower classifier – To help consolidate my knowledge from chapter 3, I created my first deep learning application with fastai: a flower classifier that can recognise 20 of the most popular flowers, deployed using Binder (a rough sketch of the training recipe follows this list). You can read the docs and view the project code here and play with the deployed app here.
  • Fastai flashcards – I've started making an Anki deck of machine learning terms from the fastai book. It's currently a work in progress, but I hope to share it once I've included the major terms from the book. This is contingent on approval from the book's authors.
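For a sense of what the training recipe behind a classifier like the flower app looks like, below is a rough fastai sketch assuming a folder-per-class image layout; the path, architecture and hyperparameters are illustrative assumptions, and the actual project code is linked above:

```python
from fastai.vision.all import *

# Hypothetical layout: flowers/<flower_name>/<image>.jpg
path = Path('flowers')

dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2, seed=42,        # hold out 20% of images for validation
    item_tfms=Resize(224),         # resize every image to 224x224
    batch_tfms=aug_transforms(),   # standard fastai data augmentation
)

learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(4)                 # transfer learning from ImageNet weights

# Export the trained model so a lightweight app (e.g. on Binder) can
# reload it with load_learner('flowers.pkl') and call learn.predict(img)
learn.export('flowers.pkl')
```

Deployment on Binder then amounts to a notebook or small app that loads the exported .pkl and wraps learn.predict in a simple UI.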
Meetup(s)

To help maintain my enthusiasm, I recently attended LondonAI meetup #22, which had a natural language processing focus. It featured three companies, Flowrite, Humanloop and Monzo, and delved into how they are applying NLP-based approaches. My favourite talk was by Flowrite, who are using GPT-3 for content generation. The Humanloop talk introduced me to active learning, and from Monzo I got to see how they optimised their app's chat feature to answer customer queries.

Python

Codex

I also started playing through the tutorial projects on Codex. Why? I realised that my Python development skills needed strengthening, and from my reflections on my learning journey so far, plus the advice of stronger programmers, it's clear to me that when it comes to improving coding skills, learning is doing: build as much as possible. So far, I've created the following mini-projects:

I hope to extend the functionality and capability of these apps in future with the skills I acquire from my machine learning work. I listened to this great CodeNewbie podcast episode featuring Danny Thompson and grokked that to really make projects my own, I need to add my own twist, just like I did with my flower classifier!

What helped me stay focused?

I keep coming back to the two Twitter threads below by Radek, who has generously reflected on his experience of learning via fastai and shares his wisdom through insightful tweets like the ones below. When it comes to learning, it's so important not to fool yourself, and these threads kept me from doing so. Thank you, Radek!

Radek on process vs events

Radek on choosing mini-projects

What am I exploring now?

What did I learn from the challenges I’ve conquered?

  • Learning is doing → Build more projects
  • Simple projects can form the basis of more sophisticated solutions
  • Technology is embedded in society, so it's critical to think about its applications

What are my next steps?
