This is not meant as a comprehensive guide, but rather a place to start and get things up and running. Read on below for background and explanation.
1. Create a custom callback
2. Initialize the MLflow Experiment, Client, and Run
3. Log the experiment’s hyperparameters
4. Add the custom callback to the learner
FastAI is a powerful library with a good set of features that make ML fun. One of the things I was struggling with was losing track of my experiments’ configurations and the metrics and losses they generated.
MLflow seemed like a good fit, at least for what I was trying to accomplish. Most documentation around setting up MLflow with fastai is stale (covering v1). MLflow does have a fast.ai autolog feature, but per the documentation (and my own trials), it appears to work only with fast.ai version 1.0.61 or earlier.
The above setup, although crude, works for my simple use case. It assumes access to an MLflow server with an accessible URI (TRACKING_URI). One caveat is that metric values tend to be empty for the first couple of epochs; with a better understanding of fastai’s Callbacks and Recorder, I might be able to track down the cause.
Happy Deep Learning!
## Tracking Class

```python
from typing import List

from fastai.callback.core import Callback
from mlflow.tracking import MlflowClient


class MLFlowTracking(Callback):
    "A `Callback` that logs the loss and other tracked metrics to MLflow"

    def __init__(self, metric_names: List[str], client: MlflowClient, run_id: str):
        self.client = client
        self.run_id = run_id
        self.metric_names = metric_names

    def after_epoch(self):
        "Log the latest value of each tracked metric to the MLflow run"
        for metric_name in self.metric_names:
            # recorder.metric_names starts with 'epoch', so skip it before indexing
            m_idx = list(self.recorder.metric_names[1:]).index(metric_name)
            if len(self.recorder.values) > 0:
                val = self.recorder.values[-1][m_idx]
                self.client.log_metric(self.run_id, metric_name, float(val))
```
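To see why the callback indexes into `recorder.metric_names[1:]`, here is a small pure-Python sketch of the bookkeeping (the tuple below mirrors a typical fastai recorder layout; the metric values are made up, and no fastai install is required):

```python
# fastai's Recorder exposes column names like:
metric_names = ('epoch', 'train_loss', 'valid_loss', 'accuracy', 'time')

# recorder.values holds one row per completed epoch, without the
# 'epoch'/'time' columns -- which also explains why the list can be
# empty before the first epoch finishes
values = [
    [0.95, 0.88, 0.64],   # epoch 0: train_loss, valid_loss, accuracy
    [0.71, 0.69, 0.78],   # epoch 1
]

# Skip the leading 'epoch' column so the index lines up with a values row
m_idx = list(metric_names[1:]).index('accuracy')
latest_accuracy = values[-1][m_idx]
print(m_idx, latest_accuracy)  # -> 2 0.78
```

This is the same arithmetic `after_epoch` performs before calling `log_metric`.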