Training “cat or dog” on Windows without GPU

I was curious to see how much slower the training from chapter 1 of the fastai book would be on my GPU-less home desktop vs. a basic cloud GPU server. The first issue was getting the code running after installing the torch, torchvision and fastai modules. Apparently, there are some code modifications needed to get things running on my local system – I’ve summarized them here:

from fastai.vision.all import *

# To fix: Can't get attribute 'is_cat' on <module '__main__' (built-in)>
from iscat import *

path = untar_data(URLs.PETS)/'images'

# added num_workers=0 to avoid:
# AttributeError: '_FakeLoader' object has no attribute 'noops'

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), num_workers=0 )

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
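For reference, the iscat helper module imported above can be a one-file module saved as iscat.py next to the script; the is_cat function itself is the one from the book. Defining it in a module (instead of in `__main__`) matters because Windows spawns DataLoader worker processes, which must be able to re-import the label function:

```python
# iscat.py -- defining is_cat at module level (instead of inside __main__)
# lets Windows' spawn-based DataLoader worker processes pickle and re-import it
def is_cat(x):
    # the book's labeling convention: cat images have filenames
    # starting with an uppercase letter
    return x[0].isupper()
```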

And the results?

Well, training the network to distinguish between cats and dogs takes a decent GPU around 20 seconds per epoch.

Giving this task to my CPU took around 40 minutes per epoch – roughly a 120x slowdown.

Now you know why a GPU is highly recommended for machine learning, at least until someone discovers a more efficient way to do it (like this?)

cannot import name ‘mobilenet_v2’ from ‘torchvision.models’

This was the error I received when trying to import fastbook to study the excellent course.

I had Python 3.9.1 installed and used pip to install the latest versions of torch, torchvision, and fastai.

To spare the reader from boring war stories, here’s the bottom line:

  • Download the latest version of torchvision for your platform from
    (for example cpu/torchvision-0.8.2%2Bcpu-cp39-cp39-win_amd64.whl is torchvision 0.8.2 for CPU on Windows for CPython 3.9)
  • run pip install torchvision-0.8.2+cpu-cp39-cp39-win_amd64.whl (assuming this is the version you downloaded)
  • run pip install fastai --upgrade

You should now be able to import fastbook.
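The wheel filename encodes the compatibility tags that pip checks against your interpreter and platform. A quick way to decode a name like the one above (the filename is hardcoded here for illustration):

```python
# Decode a wheel filename into its components (distribution, version,
# python tag, ABI tag, platform tag)
wheel = "torchvision-0.8.2+cpu-cp39-cp39-win_amd64.whl"
dist, version, python_tag, abi_tag, platform_tag = wheel[:-len(".whl")].split("-")
print(python_tag)    # cp39 -> CPython 3.9
print(platform_tag)  # win_amd64 -> 64-bit Windows
```

If any of these tags don't match your interpreter or OS, pip refuses to install the wheel, which is why picking the right file from the index matters.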

Compiling PHP 7.3 with MySQL support for AWS Lambda

This is probably easier to do for PHP 7.4, but I wanted PHP 7.3 to make sure everything is compatible with existing code. The code below assumes you are working in an interactive shell in a Docker container built for creating binaries for AWS Lambda. Setting this up is explained here

First, install the necessary tools to build PHP

$ yum update -y
$ yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y

Next, build the OpenSSL version required for the PHP 7.3 build

$ curl -sL | tar -xvz
$ cd openssl-1.0.1k
$ ./config && make && make install

Then download and build PHP 7.3, configured with everything needed for accessing MySQL databases

$ cd ..
$ curl -sL | tar -xvz
$ cd php-src-php-7.3.0
$ ./buildconf --force
$ ./configure --prefix=/home/ec2-user/php-7-bin/ --with-openssl=/usr/local/ssl --with-curl --without-libzip --with-zlib --enable-zip --with-pdo-mysql --with-mysqli=mysqlnd
$ make install

Finally, check that the MySQL modules are there:

$ /home/ec2-user/php-7-bin/bin/php -m
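That check can also be scripted by scanning the `php -m` output for the MySQL modules. A minimal sketch in Python (the sample output below is illustrative, not captured from a real run):

```python
# Verify that the MySQL-related modules appear in `php -m`-style output.
# `sample` stands in for the real output of: /home/ec2-user/php-7-bin/bin/php -m
sample = """[PHP Modules]
curl
mysqli
mysqlnd
pdo_mysql
zlib
"""
required = {"mysqli", "mysqlnd", "pdo_mysql"}
loaded = {line.strip() for line in sample.splitlines()}
missing = required - loaded
print(missing)  # an empty set means MySQL support is compiled in
```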

Backing up AWS lambda functions

There might be a built-in method to do so, but I haven’t found one. I wanted to download packages of all the lambda files I have in AWS and ended up creating a script to do it.

Of course, you should replace the --region parameter with the AWS region where you run your AWS lambda functions, and the --profile parameter with the appropriate profile in your ~/.aws/credentials file (the correct way would have been to pass those as parameters to the bash script, but this is left as an exercise to the reader 😉).

Feel free to copy. Don’t forget to chmod u+x …


#!/bin/bash

# Get all the function names
list=`aws lambda list-functions --region us-west-2 --profile amnon | jq -r .Functions[].FunctionName | perl -pe 's/\n/ /g'`

# For each lambda function, get the function's download url and download it
for val in $list; do
    url=`aws lambda get-function --region us-west-2 --function-name $val --profile amnon | jq -r .Code.Location`
    shortname=`echo $url | perl -pe 's/^.+\/(.+?)-[\w]{8}.+/\1/'`
    echo $shortname
    wget -nv $url -O $shortname
done
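The extraction step in the loop above pulls the function's short name out of the presigned S3 download URL. The same logic, sketched in Python (the URL below is a made-up example of what `aws lambda get-function` returns, not a real link):

```python
import re

# Pull the function name out of a presigned S3 URL of the form
# https://.../snapshots/<account>/<function>-<random suffix>?...  (illustrative)
url = ("https://prod-04-2014-tasks.s3.us-west-2.amazonaws.com/"
       "snapshots/123456789012/myfunc-a1b2c3d4?X-Amz-Signature=abc")
m = re.search(r'^.+/(.+?)-[\w]{8}', url)
shortname = m.group(1)
print(shortname)  # myfunc
```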

Duplicated log lines using Python in AWS Lambda

The short version:

import logging

logname = 'processing'
formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger(logname)
logger.handlers = [] # === Make sure to add this line
logger.propagate = False # === and this line
logger.addHandler(handler)

For details, check out this link
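To see why those two lines matter, here is a self-contained simulation of two warm-start invocations that each run the logger setup. Without clearing the handler list, each invocation piles another handler onto the same (module-cached) logger, and every message comes out once per handler:

```python
import io
import logging

def get_logger(name, stream, reset):
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.propagate = False  # keep the root handler from logging it again
    if reset:
        logger.handlers = []  # drop handlers left over from a previous invocation
    logger.addHandler(logging.StreamHandler(stream))
    return logger

# Simulate two warm-start invocations that each run the setup code
buggy, fixed = io.StringIO(), io.StringIO()
for _ in range(2):
    log_buggy = get_logger("demo-buggy", buggy, reset=False)
    log_fixed = get_logger("demo-fixed", fixed, reset=True)

log_buggy.info("hello")  # two handlers attached -> the line is written twice
log_fixed.info("hello")  # one handler attached  -> the line is written once
```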

Standard C++ libraries on AWS Lambda

While attempting to compile a test sample together with a library I needed, I received the following error:

/usr/bin/ld: cannot find -lstdc++

Installing the following packages solved the issue. I didn’t even check whether both are necessary – if you have the curiosity to dig further, send me your conclusions; all I wanted was a solution to the problem at hand, and here it is:

yum install libstdc++-devel
yum install libstdc++-static

lupa (LuaJIT/Lua-Python bridge) on AWS Lambda

Last post discussed launching LuaJIT from Python on AWS Lambda.

Suppose you want to receive events from the Lua code in the Python code that invoked it.

If you don’t have a Python–Lua API, you are left with sockets, files, or even reading the real-time output from the process stdout object (which is not that trivial; see this implementation)
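The "read events from the child's stdout" alternative looks roughly like this. In the sketch below, a second Python process stands in for the Lua script, just to keep the example self-contained:

```python
import subprocess
import sys

# A stand-in child process that emits "events" on stdout, one per line
child_code = "print('event 1'); print('event 2')"
p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdout=subprocess.PIPE, text=True, bufsize=1)
# Iterating the pipe yields lines as the child produces them
events = [line.strip() for line in p.stdout]
p.wait()
print(events)
```

It works, but every event has to be serialized to text and parsed back, which is exactly the friction a real bridge like lupa removes.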

A more elegant way would be if there was a Python API for accessing the Lua code (and vice versa). This is what lupa does, and it works with both Lua and LuaJIT.

I’ll explain how to install lupa so it can be used with Python on the amazon linux docker container, and thereby on AWS Lambda.

So, connect to your amazon linux docker container and:

mkdir -p /root/lambdalua
cd /root/lambdalua
# Download the lupa source code
# link for latest source: - look for tar.gz file
# extract the source in /root/lambdalua
tar xzfv lupa-1.7.tar.gz 
# enter the lupa source directory
cd lupa-1.7
wget  # download latest source of LuaJIT
unzip                            # unzip it in /root/lambdalua/lupa-1.7
cd LuaJIT-2.0.5                                   # enter LuaJIT source directory
make CFLAGS=-fPIC                                 # Make LuaJIT with -fPIC compile flag
cd ..                                             # back to lupa source dir
python setup.py install                           # Create lupa Python module
cd /root/lambdalua
mkdir lupa_package             # create a directory for a lupa test aws package

# copy the Lupa module that we previously compiled to the lupa_package directory
# so that the "import lupa" will work from the python code on AWS lambda
cp -r ./lupa-1.7/build/lib.linux-x86_64-3.6/lupa lupa_package/

Now create /root/lambdalua/lupa_package/lambdalua.py so that it makes use of Lupa:

import subprocess
import sys
import os

import lupa                 # use the local lupa module
from lupa import LuaRuntime

def lambda_luajit_func(event, context):

    def readfile(filename):
        content = ''
        with open(filename, 'r') as myfile:
            content = myfile.read()
        return content
    lua = LuaRuntime(unpack_returned_tuples=True)

    # define the Python function that will be called from LuaJIT
    def add_one(num):
        return num + 1

    # Load the Lua code
    lua_func = lua.eval(readfile('./test.lua'))

    params = { "add_one_func":add_one, "num":42 }
    # call the Lua function defined in test.lua with the above parameters
    res = lua_func(params)

    return res

if __name__ == "__main__":
    print(lambda_luajit_func(None, None))

and also create the /root/lambdalua/lupa_package/test.lua file that the above Python file will load:

function(params_table)
    -- get the number passed from python
    num = params_table.num
    -- get the python function to invoke
    py_func = params_table.add_one_func

    -- return the result of invoking the Python function on the number
    return "The result is:"..py_func(num)
end

To see that this actually works in our amazon linux docker, just type:

python lambdalua.py # output is: The result is:43

To make this run on AWS lambda, we pack the contents of /root/lambdalua/lupa_package in a zip file.

zip -r .

If the AWS credentials and command-line tools are set up in your host OS shell, copy the zip file from the docker container to your OS from a host terminal, e.g. (you can skip this step if you copied the aws credentials to ~/.aws in the docker container):

docker cp lucid_poincare:/root/lambdalua/lupa_package/ .

Create the function (if we haven’t done so already) or update it (if we want to update the function from the previous post), and then invoke it.

The details of creating the lambda user, profile, role and function from the aws command line are covered in a previous post; here’s a quick overview, assuming you already have a lambda user named lambda_user. Either create a new function

aws lambda create-function --region us-east-1 --function-name lambda_luajit_func --zip-file fileb:// --role arn:aws:iam::123456789012:role/basic_lambda_role --handler lambdalua.lambda_luajit_func --runtime python3.6 --profile lambda_user

or, if you already created the function lambdalua.lambda_luajit_func from the previous post, you can update it:

# the following assumes myzippackage is a bucket you own
aws s3 rm s3://myzippackage/              # remove previous package if exists
aws s3 cp s3://myzippackage/  # copy new package

# update lambda_luajit_func function with contents of new package
aws lambda update-function-code --region us-east-1 --function-name lambda_luajit_func --s3-bucket myzippackage --s3-key --profile lambda_user

Finally, invoke the Lupa test on AWS lambda:

aws lambda invoke --invocation-type RequestResponse --function-name lambda_luajit_func --region us-east-1 --log-type Tail  --profile lambda_user out.txt
cat out.txt # output should be: "The result is:43"

AWS Lambda – running python bundles and arbitrary executables – Part 2

In the previous post I explained how to create your AWS lambda environment using Docker, and how to package a python bundle and launch it on AWS Lambda.

In this post I’ll show how you can launch arbitrary executables from an AWS Lambda function.

To make this tutorial even more useful, the example of an arbitrary executable I’ll be using is LuaJIT – an incredibly fast Lua implementation created by Mike Pall. After this you should be able to write blazing fast Lua code and run it on AWS Lambda.

I assume you already have a Docker container that emulates AWS Lambda Linux – if not, check the previous post

So, the first thing is to install LuaJIT on the Docker amazon lambda container. Start the amazon linux container (use docker ps -a or docker container list to find it, and docker start -i <name> to connect to it).

Once in the container, make sure you have wget and unzip installed. If not then:

yum install wget
yum install zip unzip

Next, download the latest version of LuaJIT (in my case this was 2.0.5) from here

wget  # download latest source of LuaJIT
unzip                            # unzip it
cd LuaJIT-2.0.5                                   # go to source directory
make                                              # build LuaJIT
make install                                      # install LuaJIT

To run an arbitrary binary from AWS Lambda, we’ll first include the luajit executable and any dependencies it might have in the zip package that we’ll upload to Lambda.

So let’s create the ingredients of this package. For starters we’ll create a directory to place all the relevant files:

mkdir lambdalua
cd lambdalua
mkdir lib       # we'll place any luajit dependencies here

Since we compiled and installed luajit, let’s check where it was placed:

which luajit

In my case, the result is /usr/local/bin/luajit.
Now, we’ll copy luajit to the directory we are in so it will be part of the package:

cp /usr/local/bin/luajit .

Next, let’s check whether there are any dynamic linked libraries that luajit depends on, as they’ll need to exist on AWS Lambda too in order for luajit to successfully run:

ldd /usr/local/bin/luajit  # find the shared libraries required by luajit

The result:

linux-vdso.so.1 =>  (0x00007ffdb75a7000)
libluajit-5.1.so.2 => /usr/lib64/libluajit-5.1.so.2 (0x00007f6deea61000)
... (the remaining entries are standard /lib64 system libraries) ...
/lib64/ld-linux-x86-64.so.2 (0x00007f6deec8d000)

Above, we can see that most of the shared libraries luajit depends on (those starting with /lib64) are part of linux (and hopefully they are the same version as those on AWS Lambda amazon linux).

However, one file is not part of lambda linux, and that is /usr/lib64/libluajit-5.1.so.2 (it was added as part of installing luajit).

We’ll need to make this file available to luajit on lambda, so let’s copy it to the lib/ directory we created.

cp /usr/lib64/libluajit-5.1.so.2 lib/
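This "which libraries must ship with the package" check can itself be scripted by parsing ldd output: anything resolved outside /lib64 has to be bundled. A minimal sketch (the sample output below is illustrative, abridged from a typical run):

```python
# Pick out the dependencies that live outside /lib64 -- those are the ones
# we must bundle with the Lambda package. The sample output is illustrative.
ldd_output = """\
linux-vdso.so.1 =>  (0x00007ffdb75a7000)
libluajit-5.1.so.2 => /usr/lib64/libluajit-5.1.so.2 (0x00007f6deea61000)
libm.so.6 => /lib64/libm.so.6 (0x00007f6dee81c000)
libc.so.6 => /lib64/libc.so.6 (0x00007f6dee0d3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6deec8d000)
"""
to_bundle = []
for line in ldd_output.splitlines():
    if " => " not in line:
        continue  # the dynamic loader line has no arrow
    target = line.split(" => ", 1)[1].split(" (", 1)[0].strip()
    if target and not target.startswith("/lib64/"):
        to_bundle.append(target)
print(to_bundle)
```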

Create the following hello.lua file in the directory we’re in:

local str = "hello from LuaJIT - "
for i=1,10 do
    str = str .. i .. " "
end
print(str)

Now we create the Python file that will launch the above Lua script using LuaJIT. We’ll name this file lambdalua.py. Note the explanations in the comments within the code:

import subprocess
import sys
import os

def lambda_luajit_func(event, context):
    lpath = os.path.dirname(os.path.realpath(__file__))  # the path where this file resides
    llib  = lpath + '/lib/'                              # the path for luajit shared library

    # Since we can't execute or modify execution attributes for luajit in the directory
    # we run on aws lambda, we'll copy luajit to the /tmp directory where we'll be able
    # to change its attributes
    os.system("cp -n %s/luajit /tmp/luajit" % (lpath))  # copy luajit to /tmp
    os.system("chmod u+x /tmp/luajit")                  # and make it executable

    # Since we don't have permission to copy luajit's shared library to the path 
    # where it looks for it (the one shown from the ldd command), we'll add the
    # path where the LuaJIT shared library is located to the LD_LIBRARY_PATH, which enables
    # Linux to search for the shared library elsewhere

    # add our lib/ path to the search path for shared libraries
    os.environ["LD_LIBRARY_PATH"] += (":%s" % (llib)) 

    # prepare a subprocess to run luajit with the hello.lua script path as a parameter
    command = "/tmp/luajit %s/hello.lua" % (lpath) 
    p = subprocess.Popen(command , shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    # launch the process and read the result in stdout
    stdout, stderr = p.communicate()

    # We'll make the return of the lambda function the same as what was the output of the
    # Lua script
    return stdout.decode("utf-8")

if __name__ == "__main__":
    print(lambda_luajit_func(None, None))

At this point, we can create a package to upload as a lambda function. In the directory we’re in, run:

zip -r .

and then copy the zip file to your host OS, e.g.:

docker cp lucid_poincare:/root/lambdalua/

Now we’ll upload the package and create the lambda function, using the same user and role created in the previous post (replace the role ARN with your own):

aws lambda create-function --region us-east-1 --function-name lambda_luajit_func --zip-file fileb:// --role arn:aws:iam::123456789012:role/basic_lambda_role --handler lambdalua.lambda_luajit_func --runtime python3.6 --profile lambda_user

You should get a JSON reply with the information that the function has been created. Finally, we can invoke the lambda function as follows:

aws lambda invoke --invocation-type RequestResponse --function-name lambda_luajit_func --region us-east-1 --log-type Tail  --profile lambda_user out.txt

When we check the out.txt file:

$ cat out.txt
"hello from LuaJIT - 1 2 3 4 5 6 7 8 9 10 \n"