An Introduction to CreateML


A year after unveiling CoreML, Apple introduced CreateML, a framework that lets developers train machine learning models directly on their Mac. We will stick with our previous example and build a simple machine learning model that can recognize images of dogs. Head over to https://github.com/RichardBlanch/Dog-Classifier/tree/Starter and download or clone the project.

Creating Our Model

The two most important pieces of this Xcode project are the ‘CreateDogMLModel.playground’ and the folder named ‘Data Set.’ The first thing we need to do is create our machine learning model. For the sake of brevity, our model will handle only a small subset of breeds rather than acting as a general-purpose dog classifier: it will accept an image and predict whether that image shows a poodle, a dalmatian, or a pug.
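CreateML infers each training image’s label from the name of the folder that contains it, so the ‘Data Set’ folder is presumably organized with one subfolder per breed, along these lines (the file names here are only illustrative):

Data Set/
    poodle/
        poodle-01.jpg
        ...
    dalmatian/
        dalmatian-01.jpg
        ...
    pug/
        pug-01.jpg
        ...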

Open up ‘CreateDogMLModel.playground’ and press the ‘play’ button to run it. From there you will need to open the ‘Assistant Editor.’ You can do this by clicking ‘View’ -> ‘Assistant Editor’ -> ‘Show Assistant Editor’ in Xcode’s menu bar, or by clicking the Assistant Editor button in Xcode’s toolbar. Also ensure that the ‘Live View’ assistant editor option is toggled on. Once the Live View appears within the playground, drop the ‘Data Set’ folder onto the spot labeled ‘Drop Images to Begin Training.’
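The playground itself only needs a few lines to bring up that Live View trainer. A minimal sketch using the standard CreateMLUI pattern (not copied verbatim from the starter project):

import CreateMLUI

// Opens the interactive image-classifier trainer in the Assistant Editor's
// Live View, where dropping the 'Data Set' folder begins training.
let builder = MLImageClassifierBuilder()
builder.showInLiveView()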

After our model has been created, we need to click the arrow next to ‘Image Classifier’ and click ‘Save.’ This saves our ‘ImageClassifier.mlmodel’ somewhere on disk. Find ImageClassifier.mlmodel and drag and drop it into the ‘DogClassifier’ folder within your Xcode project. Ensure that you have ‘Copy items if needed’ and ‘Create groups’ checked as options when adding this file.
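If you prefer to skip the Live View entirely, CreateML can also train and save the model programmatically. A minimal sketch, assuming placeholder paths to your copy of the ‘Data Set’ folder and an output location:

import CreateML
import Foundation

// Train an image classifier from labeled directories (folder name = label),
// then write the resulting .mlmodel to disk. Both paths are placeholders.
let dataSetURL = URL(fileURLWithPath: "/path/to/Data Set")
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: dataSetURL))
try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))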


Inspecting our MLModel

Our machine learning model is located within the ImageClassifier.mlmodel file. Apple says that an MLModel ‘encapsulates a model’s prediction methods, configuration, and model description.’ In other words, this file abstracts the logic for predicting what kind of dog is contained in the image a user submits. Inspecting the file in Xcode shows that our model expects an image as an input and returns two outputs: ‘classLabelProbs’ (a dictionary mapping each breed to the probability the model assigns it) and ‘classLabel’ (the breed the model has predicted). You can view the documentation for MLModel here: https://developer.apple.com/documentation/coreml/mlmodel.
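You can also confirm those inputs and outputs at runtime. A short sketch, assuming the ImageClassifier wrapper class that Xcode generates from the .mlmodel file:

import CoreML

// Every generated wrapper exposes its underlying MLModel, whose
// modelDescription mirrors what the Xcode model inspector shows.
let model = ImageClassifier().model
print(model.modelDescription.inputDescriptionsByName)   // "image"
print(model.modelDescription.outputDescriptionsByName)  // "classLabel", "classLabelProbs"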

Setting up our Simulator

We will need images of dogs to test with. Open up your iOS simulator and drag and drop the images onto it one at a time; this saves them into the simulator’s photo album. If you run the app, you should be able to submit one of these photos. However, our app does not do anything useful with them yet.

Using Our Model

The first thing we are going to need to do is add a reference to our model. Open up ViewController.swift and add the code below. This will give us a reference to the MLModel that we previously trained within the CreateDogMLModel.playground.

private let machineLearningModel = ImageClassifier()

After we have a reference to our model, we will be able to take advantage of the outputs it provides. If you remember from inspecting our model, those outputs are ‘classLabel’ and ‘classLabelProbs.’ We can use our model to change the dogLabel’s text within imagePickerController(_:didFinishPickingMediaWithInfo:). We will need to pass an image to our model (in the form of a CVPixelBuffer), and the model will return an object of type ‘ImageClassifierOutput.’ This object carries the ‘classLabel’ and ‘classLabelProbs’ outputs that we desire.
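The image picker hands us a UIImage, not a CVPixelBuffer, so we need a conversion step. The helper below is a hypothetical sketch of one common approach, not code from the starter project (the 299x299 size matches the input size CreateML image classifiers typically expect; adjust it if your model differs):

import UIKit
import CoreVideo

extension UIImage {
    // Renders this image into a new 32ARGB pixel buffer of the given size.
    func pixelBuffer(width: Int = 299, height: Int = 299) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var buffer: CVPixelBuffer?
        guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                  kCVPixelFormatType_32ARGB, attrs, &buffer) == kCVReturnSuccess,
              let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width, height: height, bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

        // Flip the coordinate system so UIKit drawing lands right side up.
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return pixelBuffer
    }
}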

Let’s add the code below:


let prediction = try machineLearningModel.prediction(image: pixelBuffer)
let typeOfDog = prediction.classLabel
let typeOfDogCertainty = prediction.classLabelProbs[typeOfDog] ?? 0.0
dogLabel.text = "Our model is \(typeOfDogCertainty * 100)% certain that you have submitted a photo of a \(typeOfDog)"
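Because prediction(image:) can throw, the call belongs inside a do/catch. Here is a sketch of how the pieces might fit together inside the delegate method; the exact guard statements may differ from the starter project, and pixelBuffer() is the hypothetical helper from above:

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    picker.dismiss(animated: true)

    // Grab the picked photo and convert it to the pixel buffer our model expects.
    guard let image = info[.originalImage] as? UIImage,
          let pixelBuffer = image.pixelBuffer() else { return }

    do {
        let prediction = try machineLearningModel.prediction(image: pixelBuffer)
        let typeOfDog = prediction.classLabel
        let typeOfDogCertainty = prediction.classLabelProbs[typeOfDog] ?? 0.0
        dogLabel.text = "Our model is \(typeOfDogCertainty * 100)% certain that you have submitted a photo of a \(typeOfDog)"
    } catch {
        dogLabel.text = "Prediction failed: \(error)"
    }
}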


If you run the app now, you should be able to submit the photos you previously added to your iOS simulator. Apple has made it incredibly simple to train a machine learning model on your Mac and use that model within your iOS app. For more information about CreateML, definitely check out CreateML’s documentation at https://developer.apple.com/documentation/createml and watch the accompanying WWDC video at https://developer.apple.com/videos/play/wwdc2018/703/.
