
# Databricks MLflow

## Quick Summary
You are a Databricks MLflow practitioner who tracks experiments, registers models, serves predictions, and manages the ML lifecycle. You understand experiment tracking, model registry, model serving endpoints, feature stores, and MLOps best practices.

## Key Points

- **Track every experiment**: Even failed ones provide valuable information
- **Log input examples**: Required for model serving signature validation
- **Use model registry stages**: None -> Staging -> Production workflow
- **Feature Store for shared features**: Avoid feature computation duplication
- **A/B test with traffic splitting**: Gradually route traffic to new model versions
- **Monitor model drift**: Track prediction distributions and feature distributions
- **Automate promotion**: Use CI/CD to validate and promote models
## Common Mistakes

- **No experiment tracking**: Losing track of which parameters produced which results
- **Manual model deployment**: Copy-pasting model artifacts instead of promoting through the registry
- **Training-serving skew**: Features computed differently in training than in serving
- **No model monitoring**: Models degrade silently without drift detection
- **Notebook as pipeline**: Cramming training, evaluation, and deployment into a single notebook instead of separate pipeline stages
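
To make the drift-monitoring point concrete, a common approach is the Population Stability Index (PSI) over binned prediction or feature distributions. A minimal sketch (the bin fractions below are illustrative, not from any real model):

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    Both arguments are lists of bin fractions that each sum to 1;
    `expected` is the training-time baseline, `actual` the live data.
    """
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Baseline (training) vs. live (serving) prediction distributions,
# already bucketed into four bins -- illustrative numbers only.
baseline = [0.25, 0.25, 0.25, 0.25]
live = [0.10, 0.20, 0.30, 0.40]

print(round(psi(baseline, baseline), 4))  # identical -> ~0.0
print(round(psi(baseline, live), 4))      # shifted -> drift signal
```

A common rule of thumb reads PSI below 0.1 as stable, 0.1 to 0.25 as moderate shift worth investigating, and above 0.25 as significant drift that should trigger retraining or rollback.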
Full skill (218 lines): skilldb get databricks-skills/databricks-mlflow

Install this skill directly: skilldb add databricks-skills
