Machine Learning

Deploy MLflow with docker-compose

Track your machine learning experiments with MLflow, easily deployed thanks to docker-compose

Guillaume Androz
Towards Data Science
6 min read · Jan 10, 2020


When building and training machine learning models, it is very important to keep track of the results of each experiment. For deep learning models, TensorBoard is a very powerful tool to log training performance, track gradients, debug the model and so on. We also need to track the associated source code, and whereas Jupyter notebooks are hard to version, we can definitely use a VCS such as git to help us. However, we also need a tool to keep track of the experiment context: the choice of hyperparameters, the dataset used for an experiment, the resulting model, etc. MLflow has been developed explicitly for that purpose, as stated on their website:

MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility and deployment.

— https://mlflow.org/

For that purpose, MLflow offers the MLflow Tracking component, a web server that lets us track our experiments and runs.

In this post, I will show the steps to set up such a tracking server, progressively adding components that will eventually be gathered into a docker-compose file. The docker approach is particularly convenient if MLflow has to be deployed on a remote server, for example on EC2, without having to configure the server by hand every time we need a new one.

Basic local server

The first step to install an MLflow server is straightforward: we only need to install the Python package. I will assume that Python is installed on your machine and that you are comfortable with creating a virtual environment. For that purpose, I find conda more convenient than pipenv:

$ conda create -n mlflow-env python=3.7
$ conda activate mlflow-env
(mlflow-env)$ pip install mlflow
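
You can quickly check that the installation went well by asking for the installed version:

(mlflow-env)$ mlflow --version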

From this very basic first step, our MLflow tracking server is ready to use; all that remains is launching it with the command:

(mlflow-env)$ mlflow server
Tracking server UI found at http://localhost:5000

We can also specify a host address that tells the server to listen on all public IPs. Although this is very insecure (the server is unauthenticated and unencrypted), we will later run the server behind a reverse proxy such as NGINX, or inside a virtual private network, to control access.

(mlflow-env)$ mlflow server --host 0.0.0.0

Here, binding to 0.0.0.0 tells the server to listen on all network interfaces, and thus to accept connections from any incoming IP.

Using AWS S3 as artifact store

We now have a running server to track our experiments and runs, but to go further we need to tell the server where to store the artifacts. For that, MLflow offers several possibilities:

  • Amazon S3
  • Azure Blob Storage
  • Google Cloud Storage
  • FTP server
  • SFTP Server
  • NFS
  • HDFS

As my goal is to host an MLflow server on a cloud instance, I’ve chosen Amazon S3 as the artifact store. All we need is to slightly modify the command that runs the server:

(mlflow-env)$ mlflow server --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

where mlflow_bucket is an S3 bucket that has been created beforehand. Here you might ask “how the hell does MLflow access my S3 bucket?”. Well, simply read the documentation:

MLflow obtains credentials to access S3 from your machine’s IAM role, a profile in ~/.aws/credentials, or the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY depending on which of these are available.

— https://www.mlflow.org/docs/latest/tracking.html

So the most practical option is to use an IAM role, especially if we want to run the server on an AWS EC2 instance. Using a profile is much the same as using environment variables, but for this illustration I’ll use environment variables, as further explained in the docker-compose section.
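
For illustration, supplying the credentials as environment variables boils down to exporting them in the shell that launches the server (the values below are placeholders to replace with your own keys):

(mlflow-env)$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
(mlflow-env)$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>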

Using a backend store

SQLite server

So our tracking server stores artifacts on S3, fine. However, the hyperparameters, comments and so on are still stored in files on the host machine. Files are arguably not a good backend store, and we would prefer a database backend. MLflow supports various database dialects (essentially the same as SQLAlchemy): mysql, mssql, sqlite and postgresql.

At first, I like to use SQLite, as it is a compromise between files and databases: the whole database is stored in a single file that can easily be moved. The syntax is the same as SQLAlchemy’s (note the four slashes, three for the URI scheme plus one for the absolute path):

(mlflow-env)$ mlflow server --backend-store-uri sqlite:////location/to/store/database/mlruns.db --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

MySQL server

Keeping in mind that we want to use docker containers, it is not a good idea to store these files locally, because we would lose the database after each restart of the containers. Of course, we could still mount a volume, for example an EBS volume on our EC2 instance, but it is cleaner to use a dedicated database server. For that purpose, I like to use MySQL. As we’ll use docker for deployment, let’s postpone the MySQL server installation (it will simply be a container from an official docker image) and focus on MLflow usage. First, we need to install the Python driver that we will use to interact with MySQL. I like pymysql because it is very simple to install, very stable and well documented. So on the MLflow server host, run the command

(mlflow-env)$ pip install pymysql

and now we can update the command that runs the server, following the SQLAlchemy syntax:

(mlflow-env)$ mlflow server --backend-store-uri mysql+pymysql://mlflow:strongpassword@db:3306/db --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

where we assume that our database server is named db and listens on port 3306, and that we connect as the user mlflow with the very strong password strongpassword. Hard-coding credentials like this is not very secure in a production context, but when deploying with docker-compose we can replace them with environment variables.

NGINX

As mentioned earlier, we’ll run the MLflow tracking server behind a reverse proxy, NGINX. Here again we’ll use an official docker image and simply replace the default configuration /etc/nginx/nginx.conf with our own.
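
A minimal sketch of such a file could look as follows; the exact values are illustrative, and the only line that really matters for our setup is the include of the sites-enabled directory, where our MLflow site will live:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 65;

    # Load the per-site configurations, including mlflow.conf below
    include /etc/nginx/sites-enabled/*.conf;
}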

nginx.conf

You can play with this basic configuration file if you need further customization. Finally, we’ll write a configuration for the MLflow server itself and store it in /etc/nginx/sites-enabled/mlflow.conf.
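
A minimal proxy block along these lines could be the following sketch, where server_name is a placeholder to adapt to your own domain:

server {
    listen 80;
    server_name localhost;

    location / {
        # web is the name of the docker-compose service running MLflow
        proxy_pass http://web:5000;
        # Forward the original request information to MLflow
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}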

mlflow.conf

Notice the URL used to refer to the MLflow application, http://web:5000. The MLflow server listens on port 5000, and the app will run in a docker-compose service named web.

Containerization

As stated before, we want to run all of this in docker containers. The architecture is simple and is composed of three containers:

  • A MySQL database server,
  • An MLflow server,
  • An NGINX reverse proxy.

For the database server, we’ll use the official MySQL image without any modification.

For the MLflow server, we can build a container on a Debian slim image. The Dockerfile is very simple:
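
As a sketch, assuming the python:3.7-slim base image (which is Debian-based) and leaving the actual server command to docker-compose:

FROM python:3.7-slim

# MLflow itself, the MySQL driver, and boto3 for the S3 artifact store
RUN pip install mlflow pymysql boto3

EXPOSE 5000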

Dockerfile for MLflow tracking server container

And finally, the NGINX reverse proxy is also based on the official image, plus the two configuration files presented before:
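
Again a sketch; it only replaces the stock configuration with ours:

FROM nginx:latest

# Remove the default configuration shipped with the image
RUN rm /etc/nginx/nginx.conf
# Copy our own configuration files into the image
COPY nginx.conf /etc/nginx/nginx.conf
COPY mlflow.conf /etc/nginx/sites-enabled/mlflow.conf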

Dockerfile for NGINX

Gathering everything with docker-compose

Now that everything is set up, it’s time to gather it all in a docker-compose file, so that our MLflow tracking server can be launched with a single command, which is very convenient. Our docker-compose file is composed of three services: one for the backend, i.e. the MySQL database, one for the reverse proxy and one for the MLflow server itself. It looks like this:
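
The file below is a sketch reflecting the setup described in this post; the ./mlflow and ./nginx build directories are assumed to contain the two Dockerfiles above:

version: "3.7"

services:
  db:
    restart: always
    image: mysql:5.7
    container_name: mlflow_db
    expose:
      - "3306"
    networks:
      - backend
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    volumes:
      # Persist the database content across container restarts
      - dbdata:/var/lib/mysql

  web:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - "5000"
    networks:
      - frontend
      - backend
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    command: mlflow server --backend-store-uri mysql+pymysql://${MYSQL_USER}:${MYSQL_PASSWORD}@db:3306/${MYSQL_DATABASE} --default-artifact-root s3://mlflow_bucket/mlflow/ --host 0.0.0.0

  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "80:80"
    networks:
      - frontend
    depends_on:
      - web

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

volumes:
  dbdata: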

docker-compose.yml
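
The environment variables referenced in this file can live next to it in a .env file; a hypothetical example, reusing the values from earlier (the bracketed entries are placeholders):

# .env file, kept out of version control
MYSQL_DATABASE=db
MYSQL_USER=mlflow
MYSQL_PASSWORD=strongpassword
MYSQL_ROOT_PASSWORD=<root-password>
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>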

The first thing to notice is that we have built two custom networks to isolate the frontend (MLflow UI) from the backend (MySQL database). Only the web service, i.e. the MLflow server, can talk to both. Secondly, as we don’t want to lose all the data when the containers go down, the content of the MySQL database lives in a named volume, dbdata. Lastly, this docker-compose file will be launched on an EC2 instance, but as we do not want to hard-code AWS keys or the database connection string, we use environment variables. Those environment variables can be defined directly on the host machine or inside a .env file in the same directory as the docker-compose file. All that remains is building and running the containers

$ docker-compose up -d --build 

The -d option runs the containers in detached mode, and the --build option indicates that we want to build the docker images if needed before running them.
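
As a quick sanity check (assuming NGINX publishes port 80, as in the sketch above), you can list the running services and query the proxy:

$ docker-compose ps
$ curl http://localhost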

And that’s all! We now have a perfectly running remote MLflow tracking server that we can share with our team. This server can easily be deployed anywhere with a single command thanks to docker-compose.

Happy machine learning!
