Several tools exist to solve the problems mentioned in the first part of this article and make our lives easier; some are open source, others proprietary, and each brings solutions to different (and often overlapping) problems.
In the second part of the Data Science DevOps article, we introduced an open source solution, DVC, which, accompanied by Git and combined with MLflow, solves most of the versioning problems that we data scientists encounter on a daily basis.
Unfortunately, DVC manages neither the deployment aspect nor a UI to compare the different models built, at least not at the moment, so let's make way for MLflow to handle this part.
MLflow
MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It tackles three primary functions:
- MLflow Tracking
- MLflow Projects
- MLflow Models
In this article, we focus on two of these components, MLflow Tracking and MLflow Models, which we will combine with DVC to address DVC's current shortcomings.
Note that MLflow works best with Anaconda.
Installation:
pip install mlflow
MLflow Tracking
This is the component of MLflow that lets us manage experiments, each of which can contain several runs with, of course, different parameters. We will mainly use it to compare metrics across experiments. To be fair, DVC can also compare metrics from different Git branches/tags, but only on the command line, and the feature is not yet very complete. With MLflow, on the other hand, we get a graphical interface with curves and filters, allowing advanced analysis of the metrics and a better comparison of our different models.
Two methods will be mainly used to track parameters and metrics:
mlflow.log_param("alpha", alpha)
mlflow.log_metric("mae", mae)
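To make this concrete, here is a minimal sketch of a tracked run; the alpha and mae values are illustrative placeholders, and the training step is elided:

import mlflow

with mlflow.start_run():
    alpha = 0.5
    mlflow.log_param("alpha", alpha)
    # ... train the model and compute its error here ...
    mae = 0.42  # illustrative value
    mlflow.log_metric("mae", mae)

Every parameter and metric logged inside the with block is attached to the same run, so the values can later be compared side by side in the UI.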
MLflow saves these metrics/parameters in files grouped by folder: each run gets its own folder, the run folders are grouped into experiment folders, and everything lives under an mlruns folder.
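As an illustration, after a single run the layout looks roughly like this (the run identifier shown is the one reused in the serving command further down):

mlruns/
  0/                                      (experiment folder)
    f0e5c0304ca544f48175411f66799d6c/     (one run)
      params/alpha
      metrics/mae
      artifacts/model/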
To launch the graphical interface that displays the metrics, simply move to the folder containing the mlruns directory and execute the mlflow ui command, then visit the web page hosted at http://127.0.0.1:5000. Figures 6 and 7 show an overview of this web interface.


MLflow Models
It allows you to save models in a format compatible with different tools, such as Spark, or with automatic deployment by MLflow via a REST web service.
To save a model in the MLflow format, simply import the API associated with the library used to create the model (e.g. scikit-learn, TensorFlow, etc.) and save the model with the log_model method, prefixed by the module for that library.
Example: mlflow.sklearn.log_model(my_model, "model")
One small point to make here: you must explicitly import the library in question for it to be recognized. For example, importing mlflow is not enough to call mlflow.sklearn.*; you must use the instruction import mlflow.sklearn.
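Putting the two previous points together, here is a minimal sketch of saving a scikit-learn model; the ElasticNet model echoes the wine example used below, but the synthetic training data is purely illustrative:

import mlflow
import mlflow.sklearn  # the explicit import mentioned above
import numpy as np
from sklearn.linear_model import ElasticNet

# Purely illustrative data; the real example trains on the wine dataset.
X_train = np.random.rand(100, 3)
y_train = np.random.rand(100)

model = ElasticNet(alpha=0.5, l1_ratio=0.5)
model.fit(X_train, y_train)

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.sklearn.log_model(model, "model")  # stored under artifacts/model/

Once such a model has been logged, it can be served as a REST web service with the command below.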
mlflow pyfunc serve -m /home/thierno/mlflow/examples/sklearn_elasticnet_wine/mlruns/0/f0e5c0304ca544f48175411f66799d6c/artifacts/model/ -p 1234 --no-conda
This command launches a local REST API built with Flask; the web service is available on port 1234, as specified during the execution of the command, and uses the model given through the -m option.
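To test the service, one can POST a JSON payload to the /invocations endpoint. Here is a hedged sketch using Python's requests library; the column names and values are illustrative, and the exact payload format (records vs. pandas split orientation) depends on your MLflow version:

import requests

# Illustrative features for the wine example; adapt to your own model's inputs.
payload = {
    "columns": ["fixed acidity", "volatile acidity", "alcohol"],
    "data": [[7.4, 0.7, 9.4]],
}
response = requests.post(
    "http://127.0.0.1:1234/invocations",
    json=payload,
    headers={"Content-Type": "application/json"},
)
print(response.json())  # the model's prediction(s)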
For people wanting to climb mountains, these models can also be deployed in the cloud with Amazon SageMaker or Azure ML.
For more details on how MLflow works and on the different commands, feel free to read the documentation, although it is far from matching DVC's in terms of clarity.
Conclusion
In this series of three articles, we have tried to analyze together the other facet of a data scientist's work (data science DevOps), often neglected but essential.
We first illustrated some of the difficulties data scientists encounter in the technical management of their projects.
After that, two popular open source solutions (DVC and MLflow) were explored to see how they help solve the versioning problems that data magicians face, while showcasing their respective weaknesses.
These solutions are still under active development, however, and should undergo major changes in the near future; we can hope they will become more complete and robust.