Training “cat or dog” on Windows without a GPU

I was curious to see how much slower the training from chapter 1 of fast.ai would be on my GPU-less home desktop compared to a basic cloud GPU server. The first hurdle was getting the code to run at all after installing the torch, torchvision and fastai packages: a few code modifications were needed to get things working on my local system, and I’ve summarized them here:

from fastai.vision.all import *

# To fix: Can't get attribute 'is_cat' on <module '__main__' (built-in)>
# On Windows, multiprocessing workers are spawned rather than forked, so the
# labeling function has to live in an importable module instead of __main__
# (is_cat is defined in iscat.py, sketched further below).
# https://stackoverflow.com/questions/41385708/multiprocessing-example-giving-attributeerror
from iscat import *

path = untar_data(URLs.PETS)/'images'

# added num_workers=0 to avoid:
# AttributeError: '_FakeLoader' object has no attribute 'noops'
# https://github.com/fastai/fastai/issues/3047

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), num_workers=0)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
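
For completeness, the iscat module referenced above is just the labeling function from the book moved into its own file so that spawned worker processes can import it. A minimal sketch of iscat.py, assuming the usual fastbook convention that cat images in the Pets dataset have capitalized filenames:

# iscat.py
def is_cat(x):
    # In the Oxford-IIIT Pet dataset, cat images have filenames starting
    # with an uppercase letter, dog images with a lowercase one.
    return x[0].isupper()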

And the results?

Well, on a decent GPU, training the network to distinguish cats from dogs on this dataset takes around 20 seconds per epoch.

Giving the same task to my CPU took around 40 minutes per epoch.
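
If you want to double-check which device you are actually training on, PyTorch can tell you directly (this is plain torch, nothing fastai-specific):

import torch

print(torch.cuda.is_available())  # False on a machine without a CUDA GPU
print(torch.get_num_threads())    # number of CPU threads torch will use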

Now you know why a GPU is highly recommended for machine learning, at least until someone discovers a more efficient way to do it (like this?).
