Build a TensorFlow Image Classifier in 5 Min


Hello world, it's Siraj. In this episode we're going to build an image classifier using TensorFlow in about 30 lines of Python. And I don't mean a classifier that can only detect handwritten digits or iris flowers. I'm talking about literally anything you want: you could train this thing to classify chocolate if you wanted. The possibilities are endless, and there are so many industries that can be disrupted by just this simple solution. A Japanese cucumber farmer built a machine that sorts each of his cucumbers into one of nine different types using this exact approach.

Let's say we want to build a Siraj classifier. If we think about this problem in the traditional programming paradigm, we'd handcraft a bunch of features: maybe we'd do some edge detection to capture the shape of my hair, or use a color histogram to capture the color of my teeth. The problem with that is there's so much variance. Siraj's hair alone is a lot. Seriously, it's never the same. This is where convolutional neural networks (CNNs) come into play. They're essentially a black box that constructs the features we would otherwise have to handcraft, and the abstract features they learn during training are general enough to account for that variance.

If we wanted to train a CNN ourselves, we'd need a lot of computing power and a lot of time, both of which we don't have. I don't even have time to do my dishes (sorry, mates). That's why we'll use a pre-trained CNN model called Inception. Inception was trained by Google on the ImageNet dataset, on the order of a million images across a thousand categories. Our use case in this video will be classifying Darth Vader pictures, but Inception wasn't trained on Vader, so we're going to perform a process called transfer learning: applying the learnings from a previous training session to a new training session.

If we look at the Inception model, we can see that when we feed in an image as input, each layer performs a series of operations on that data until the model outputs a label and a classification percentage. Each layer is a different level of abstraction: in the first layers it has basically taught itself edge detection, then shape detection in the middle layers, and the layers get increasingly more abstract toward the end. The last few layers are the highest-level detectors for whole objects. For transfer learning, we just want to retrain that last layer on features of Darth Vader so the model can add a representation of him to its repository of knowledge.

So this is going to be a seven-step process, and we're going to go through each step in order. Sound good?

Step one is to install Docker, which is a tool for creating a virtual container on your machine for running apps. The benefit of Docker is that you don't have to install any dependencies on your machine itself; instead we'll download a Docker image that has all the necessary dependencies for TensorFlow built in. Just download the Docker Toolbox, go through the installation process, and then you can launch your Docker container any time by double-clicking the Docker Quickstart Terminal. Cool! Now that we have Docker open, that brings us to step two: installing the TensorFlow Docker image by pasting in a single line at the terminal. It'll take a few minutes; once it's installed, we'll move on to step three.
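The exact line pasted on screen isn't reproduced in this transcript, but a command along these lines pulls and launches the TensorFlow development image; the image name and tag here come from the standard TensorFlow Docker setup of that era, so treat them as an assumption and check against the current docs.

```
# Pull and start the TensorFlow development image; this drops you into a
# shell inside the container. The image name/tag may differ from what was
# shown in the video.
docker run -it gcr.io/tensorflow/tensorflow:latest-devel
```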
Step three is downloading our image dataset to our local machine. We'll stop Docker with Ctrl-D and create a directory called tf_files/star_wars in our home directory. Locally, we want to put a folder labeled darth vader in there that contains a couple hundred Vader pics. There's this dope Chrome extension I found called Fatkun Batch Download Image that bulk-downloads all the images from your Google image search results: just go to Google image search, type in "darth vader", and start downloading. Once we've got them, we'll drag that folder into our tf_files/star_wars folder, which brings us to step four. Now that we have our images in the tf_files directory, we want to link them to our Docker container, which takes a single command. Boom! All linked up.

Step five is to download the training script via Git. Just cd into the tensorflow directory inside the container and run "git pull". This code will allow us to retrain the Inception classifier with our newly linked Darth Vader image dataset.

Step six is the actual retraining part. The bottleneck directory will be used to cache the outputs of the lower layers on disk so they don't have to be repeatedly recalculated. We'll run this example for 500 training steps. The remaining flags say where to store our trained model, the output graph (which we can later view in TensorBoard), the output labels (which will match our training data folder names), and the image directory where we stored our Vader images. Let's go ahead and run this script right from the terminal. It'll take about 30 minutes or so to train our classifier, so do something productive. The script should output a training accuracy somewhere between 85 and 99 percent when it's done.

And that brings us to our final step: writing a script that uses our newly retrained classifier to detect whether a novel image contains Darth Vader. First things first, we'll import TensorFlow. Then we'll create a variable to store the user-input image path, another variable to store the data from that image, and one more to load the labels from the label file. Next we'll grab our model from the saved retrained graph file, store it in the graph_def variable, and parse it. Now that we have our image and model ready, it's time to make a prediction by feeding the image data into our retrained model to get a prediction output. To do this we'll create a TensorFlow session, which gives us an environment to perform operations on our tensor data. The first thing we'll do in the session is get the softmax tensor from the last layer of our model; the softmax function uses that final layer to map input data into probabilities over the expected outputs. We'll execute the softmax tensor on our input image data via the session's run function, and it will output our predictions as an array. Next we'll sort our prediction labels in order of confidence, and lastly, for every prediction we have, we'll get the predicted label and the score and print them to the terminal. Let's take the script and run it on one of our Vader pictures. The result is pretty good! TensorFlow makes it much easier to classify an image. Rough sketches of the commands and the script from these steps follow below.
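The exact terminal lines for steps three through five weren't included in the transcript, so here's a rough sketch of that flow; the Docker image name, the /tf_files mount point, and the /tensorflow path inside the container are assumptions based on the standard TensorFlow Docker setup of the time.

```
# Step 3: make a home for the dataset, then drop your "darth vader" folder
# of downloaded images inside it.
mkdir -p $HOME/tf_files/star_wars

# Step 4: relaunch the container with the dataset linked in as a volume
# (image name/tag and mount point are assumptions).
docker run -it -v $HOME/tf_files:/tf_files gcr.io/tensorflow/tensorflow:latest-devel

# Step 5: inside the container, grab the latest training code.
cd /tensorflow
git pull
```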
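Step six boils down to one long command. The flag names below are taken from the retrain.py example that shipped with TensorFlow around that time, so double-check them against your version; the paths are assumptions that match the directory layout above.

```
# Retrain Inception's final layer on the Darth Vader images
# (run from the /tensorflow directory inside the container).
python tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=/tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=/tf_files/inception \
  --output_graph=/tf_files/retrained_graph.pb \
  --output_labels=/tf_files/retrained_labels.txt \
  --image_dir=/tf_files/star_wars
```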
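And here's a sketch of the roughly-30-line classification script from step seven. It targets the TensorFlow 1.x API that was current when this episode came out; the tensor names final_result and DecodeJpeg/contents are what the retraining script of that era produced by default, so treat them, and the file paths, as assumptions to verify against your own retrained graph.

```python
import sys
import tensorflow as tf

# Path to the image we want to classify, passed in on the command line.
image_path = sys.argv[1]

# Read in the raw image data.
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# Load the labels file produced by retraining, stripping trailing newlines.
label_lines = [line.rstrip() for line
               in tf.gfile.GFile('/tf_files/retrained_labels.txt')]

# Load the retrained graph from disk, parse it, and import it.
with tf.gfile.FastGFile('/tf_files/retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # Grab the softmax tensor from the last layer of the retrained model.
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')

    # Feed the image into the graph; the output is an array of probabilities.
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})

    # Sort predictions by confidence (highest first) and print each label.
    top_k = predictions[0].argsort()[::-1]
    for node_id in top_k:
        print('%s (score = %.5f)' % (label_lines[node_id], predictions[0][node_id]))
```

You'd run it from inside the container as something like "python classify.py /tf_files/some_vader_test_image.jpg"; the script name and test image path here are just placeholders.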
I've also got a challenge for you on this episode: create a classifier that you think would be a useful tool for a scientist to have, in any field of science you'd like. Upload your code to GitHub, and in the README write a few sentences on how a scientist would use it. Post your repository in the comments section and I'll judge the entries based on utility and accuracy. The winner gets a shout-out from me (two videos from now, so in two weeks), and I'll also send you a free signed copy of my book, Decentralized Applications. For now I've got to go not buy the iPhone 7, so thanks for watching.
