In this demo, we will see how easily we can track experiments with MLflow. The first thing we will do is start the MLflow server. Previously, I installed MLflow on my local machine simply using the command pip install mlflow, and I also specified a few environment variables. Installation takes only a few minutes.

We start the MLflow server. By default, wherever you run your programs, the tracking API will write data into files at the location specified by the MLFLOW_TRACKING_URI environment variable. In this case, everything will run locally on my machine.

To see MLflow in action, we will run two experiments. But first, I will clone a GitHub repo onto my local machine. In this repo, we have a few ML model examples. We will first run a Keras sample that trains and evaluates a classification model on the Reuters newswire dataset, and then we will run the training and evaluation of a TensorFlow regression model on the Boston Housing Price dataset.

You will see that as soon as we execute the commands to run our models, whether the TensorFlow model or the Keras model, MLflow automatically tracks all the different metrics. We can see this information appear in the CLI, in the console. So first we run the Keras model. Here we have the results, and now we will train the TensorFlow model. It's done.

Now if we open the MLflow GUI, we can see that we have these two experiments, Reuters and Boston. Reuters has experiment ID 1. From there we can visualize many different metrics and hyperparameters, and compare different runs. If we click on this specific run, we can see a lot of information: the batch size, the number of epochs, the learning rate. All of these are parameters logged by MLflow Tracking, together with all the different metrics. There are also tags here, which we see in the summary provided by the deep learning framework.
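As a rough sketch, the commands behind a local setup like the one in this demo might look as follows. The store paths, port, and repo URL are placeholders, not the exact values used in the demo:

```shell
# Install MLflow into the current Python environment
pip install mlflow

# Start a local tracking server backed by the local filesystem
# (./mlruns is a placeholder directory)
mlflow server \
  --backend-store-uri ./mlruns \
  --default-artifact-root ./mlruns \
  --host 127.0.0.1 --port 5000

# Point the tracking API at the local server so runs are recorded there
export MLFLOW_TRACKING_URI=http://127.0.0.1:5000

# Clone the examples repo (placeholder URL) and run the two samples from it
git clone https://github.com/<your-org>/<mlflow-examples>.git
```

If MLFLOW_TRACKING_URI is not set at all, the tracking API simply writes run data to a local ./mlruns directory, which is why everything in this demo can stay on one machine.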
Here we can see the artifacts; inside, we have a model. It's a folder that contains the model description and the model itself, which in this case, because we are using Keras, will be an H5 model. In the description file, you can see that this particular model has two flavors: you can view this model as a Python function, and also as a Keras model. One important thing is that the conda.yaml file is generated automatically by the MLflow runtime. We will see how this file tracks all the different dependencies that we need in order to run this model.

That was our experiment ID 1. Experiment ID 2 corresponds to the TensorFlow model using the Boston Housing Price dataset. Again, if we click on this particular run, we will have information about the different parameters, hyperparameters, and metrics that are tracked by MLflow Tracking. Again, under artifacts, we have our model, this time in our familiar TensorFlow SavedModel format. We will have the MLmodel description as before. Here again we have two flavors, the Python function and the TensorFlow model, and we will have the conda.yaml file that contains all the different dependencies that we need to be able to run this model.
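For illustration, the MLmodel description file for the Keras run might look roughly like this; the version numbers and file names here are illustrative, not copied from the demo:

```yaml
# MLmodel file written by MLflow alongside the logged model artifacts.
# Each top-level entry under "flavors" describes one way to load the model.
artifact_path: model
flavors:
  python_function:          # generic flavor: load and score as a Python function
    loader_module: mlflow.keras
    python_version: 3.7.3   # illustrative value
    data: model.h5
    env: conda.yaml         # dependencies generated by the MLflow runtime
  keras:                    # native flavor: load back as a Keras model
    keras_version: 2.2.4    # illustrative value
    data: model.h5
```

Having both flavors means a deployment tool that only understands the generic python_function interface can still serve the model, while Keras-aware code can load it natively.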