draft migration path doc from ML Model to MLM Extension #19

Status: Open. Wants to merge 5 commits into base branch `deprecate`.
98 changes: 98 additions & 0 deletions MIGRATION_TO_MLM.md
@@ -0,0 +1,98 @@
# Migration Guide: ML Model Extension to MLM Extension

## Context

The ML Model Extension was started at Radiant Earth on October 4th, 2021, and was possibly the first STAC extension dedicated to describing machine learning models. The extension incorporated input from 9 different organizations and was used to describe models in Radiant Earth's MLHub API. The announcement of this extension and its use in Radiant Earth's MLHub is described [here](https://medium.com/radiant-earth-insights/geospatial-models-now-available-in-radiant-mlhub-a41eb795d7d7). Radiant Earth's MLHub API and Python SDK are now [deprecated](https://mlhub.earth/). To support current users of the ML Model Extension, this document lays out a migration path for converting metadata to the Machine Learning Model Extension (MLM).

## Shared Goals

Both the ML Model Extension and the Machine Learning Model (MLM) Extension aim to provide a standard way to catalog machine learning (ML) models that work with, but are not limited to, Earth observation (EO) data. Their main goals are:

1. **Search and Discovery**: Helping users find and use ML models.
2. **Describing Inference and Training Requirements**: Making it easier to run these models by describing input requirements and outputs.
3. **Reproducibility**: Providing runtime information and links to assets so that model inference is reproducible.

## Schema Changes

### ML Model Extension
- **Scope**: Item, Collection
- **Field Name Prefix**: `ml-model`
- **Key Sections**:
- Item Properties
- Asset Objects
- Inference/Training Runtimes
- Relation Types
- Interpretation of STAC Fields

### MLM Extension
- **Scope**: Collection, Item, Asset, Links
- **Field Name Prefix**: `mlm`
- **Key Sections**:
- Item Properties and Collection Fields
- Asset Objects
- Relation Types
- Model Input/Output Objects
- Best Practices

Notable differences:

- The MLM Extension covers more details at both the Item and Asset levels, making it easier to describe and use model metadata.
- The MLM Extension covers Runtime requirements within the [Container Asset](https://github.com/stac-extensions/mlm?tab=readme-ov-file#container-asset), while the ML Model Extension records [similar information](./README.md#inferencetraining-runtimes) in the `ml-model:inference-runtime` or `ml-model:training-runtime` asset roles.
- The MLM extension has a corresponding Python library, [`stac-model`](https://pypi.org/project/stac-model/), which can be used to create and validate MLM metadata. An example of the library in action is [here](https://github.com/stac-extensions/mlm/blob/main/stac_model/examples.py#L14). The ML Model extension does not have such a library and requires the JSON to be written manually by interpreting the JSON Schema or existing examples.
- MLM is easier to maintain and enhance in a fast-moving ML ecosystem thanks to its use of Pydantic models, while still being compatible with PySTAC for extension and STAC core validation.

## Changes in Field Names

### Item Properties

| ML Model Extension | MLM Extension | Notes |
| ---------------------------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ml-model:type` | N/A | No direct equivalent; it is implied by the `mlm` prefix in MLM fields and directly specified by the schema identifier. |
| `ml-model:learning_approach` | `mlm:tasks` | Removed in favor of specifying specific `mlm:tasks`. |
| `ml-model:prediction_type` | `mlm:tasks` | `mlm:tasks` provides a more comprehensive enum of prediction types. |
| `ml-model:architecture` | `mlm:architecture` | The MLM provides specific guidance on using *Papers With Code - Computer Vision* identifiers for model architectures. No guidance is provided in ML Model. |
| `ml-model:training-processor-type` | `mlm:accelerator` | MLM defines more choices for accelerators in an enum and specifies that this is the accelerator for inference. ML Model only accepts `cpu` or `gpu` but this isn't sufficient today where we have models optimized for different CPU architectures, CUDA GPUs, Intel GPUs, AMD GPUs, Mac Silicon, and TPUs. |
| `ml-model:training-os` | N/A | This field is no longer recommended in the MLM for training or inference; instead, users can specify an optional `mlm:training-runtime` asset. |
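The renames above can be applied mechanically. The sketch below assumes properties are handled as plain dictionaries (not pystac objects); the `migrate_properties` helper and its task handling are illustrative assumptions, and values passed through (e.g. task names, or `gpu` as an accelerator) may still need manual remapping to the MLM enums.

```python
# Fields with no MLM equivalent are dropped; the learning approach is implied
# by the specific tasks listed in mlm:tasks.
DROPPED = {"ml-model:type", "ml-model:learning_approach", "ml-model:training-os"}

RENAMED = {
    "ml-model:architecture": "mlm:architecture",
    # Note: ML Model values like "gpu" are coarser than the MLM accelerator
    # enum (e.g. "cuda"), so this value likely needs manual review.
    "ml-model:training-processor-type": "mlm:accelerator",
}

def migrate_properties(props: dict) -> dict:
    """Rename/drop ML Model Item properties per the mapping table above."""
    out = {}
    for key, value in props.items():
        if key in DROPPED:
            continue
        if key == "ml-model:prediction_type":
            # The prediction type becomes an entry in the mlm:tasks list.
            out.setdefault("mlm:tasks", []).append(value)
        elif key in RENAMED:
            out[RENAMED[key]] = value
        else:
            out[key] = value  # core STAC fields pass through untouched
    return out
```

This covers only the field renames; the new required MLM fields (such as `mlm:name`) still have to be added by hand.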


### New Fields in MLM

| Field Name | Description |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------|
| **`mlm:name`** | A required name for the model. |
| **`mlm:framework`** | The framework used to train the model. |
| **`mlm:framework_version`** | The version of the framework. Useful in case a container runtime asset is not specified or if the consumer of the MLM wants to run the model outside of a container. |
| **`mlm:memory_size`** | The in-memory size of the model. |
| **`mlm:total_parameters`** | Total number of model parameters. |
| **`mlm:pretrained`** | Indicates if the model is derived from a pretrained model. |
| **`mlm:pretrained_source`** | Source of the pretrained model by name or URL if it is less well known. |
| **`mlm:batch_size_suggestion`** | Suggested batch size for the given accelerator. |
| **`mlm:accelerator`**| Indicates the specific accelerator recommended for the model. |
| **`mlm:accelerator_constrained`**| Indicates if the model requires a specific accelerator. |
| **`mlm:accelerator_summary`** | Description of the accelerator. This might contain details on the exact accelerator version (TPUv4 vs TPUv5) and their configuration. |
| **`mlm:accelerator_count`** | Minimum number of accelerator instances required. |
| **`mlm:input`** | Describes the model's input shape, dtype, and normalization and resize transformations. |
| **`mlm:output`** | Describes the model's output shape and dtype. |
| **`mlm:hyperparameters`** | Additional hyperparameters relevant to the model. |
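Put together, the new fields might look like the following for a hypothetical PyTorch segmentation model. Every value below is an illustrative assumption for demonstration, not taken from a real catalog or normative guidance:

```python
# Hypothetical MLM Item properties for an imagined model; all names prefixed
# "example-" and all numeric values are made up for illustration.
mlm_properties = {
    "mlm:name": "example-unet-sentinel2",   # required
    "mlm:framework": "pytorch",
    "mlm:framework_version": "2.3.0",
    "mlm:memory_size": 94_000_000,          # in-memory size of the model
    "mlm:total_parameters": 23_500_000,
    "mlm:pretrained": True,
    "mlm:pretrained_source": "ImageNet",
    "mlm:batch_size_suggestion": 16,
    "mlm:accelerator": "cuda",
    "mlm:accelerator_constrained": False,
    "mlm:accelerator_summary": "Tested on a single NVIDIA T4",
    "mlm:accelerator_count": 1,
}
```

The `mlm:input` and `mlm:output` objects are omitted here because their shape/dtype structure is more involved; see the MLM specification for their schemas.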

### Asset Objects

| ML Model Extension Role | MLM Extension Role | Notes |
| ---------------------------- | ----------------------- | -------------------------------------------------------------------------------------------------- |
| `ml-model:inference-runtime` | `mlm:inference-runtime` | Direct conversion; same role and function. |
| `ml-model:training-runtime` | `mlm:training-runtime` | Direct conversion; same role and function. |
| `ml-model:checkpoint` | `mlm:checkpoint` | Direct conversion; same role and function. |
| N/A | `mlm:model` | New required role for model assets in MLM. This represents the asset that is the source of model weights and definition. |
| N/A | `mlm:source_code` | Recommended for providing source code details. |
| N/A | `mlm:container` | Recommended for containerized environments. |
| N/A | `mlm:training` | Recommended for training pipeline assets. |
| N/A | `mlm:inference` | Recommended for inference pipeline assets. |
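For the roles with direct conversions, migration is a simple rename, with one caveat: MLM additionally requires an asset carrying the `mlm:model` role, which has no ML Model counterpart and must be assigned manually. A minimal sketch, assuming assets are handled as plain dictionaries:

```python
# Direct role renames per the table above.
ROLE_MAP = {
    "ml-model:inference-runtime": "mlm:inference-runtime",
    "ml-model:training-runtime": "mlm:training-runtime",
    "ml-model:checkpoint": "mlm:checkpoint",
}

def migrate_asset_roles(asset: dict) -> dict:
    """Return a copy of the asset with ML Model roles renamed to MLM roles."""
    asset = dict(asset)
    asset["roles"] = [ROLE_MAP.get(r, r) for r in asset.get("roles", [])]
    return asset
```

Remember to verify afterwards that exactly one asset in the Item carries the required `mlm:model` role.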


The MLM recommends the `mlm:training-runtime` asset role and an `mlm:training` asset, which can point to a container image URL that provides the training runtime requirements. The ML Model extension specifies an `ml-model:training-runtime` field; like `mlm:training`, it contains only the default STAC Asset fields plus a few additional fields specified by the Container Asset. Training requirements typically differ from inference requirements, which is why both extensions define two separate Container assets.

## Getting Help

If you have any questions about a migration, feel free to contact the maintainers by opening a discussion or issue on the [MLM repository](https://github.com/stac-extensions/mlm).

If you see a feature missing in the MLM, feel free to open an issue describing your feature request.
15 changes: 8 additions & 7 deletions README.md
@@ -5,6 +5,7 @@
> [https://github.com/stac-extensions/mlm](https://github.com/stac-extensions/mlm). <br>
> The corresponding schemas are made available on
> [https://stac-extensions.github.io/mlm/](https://stac-extensions.github.io/mlm/).
> Documentation on migrating from the ML Model Extension to the Machine Learning Model Extension (MLM) is [here](./MIGRATION_TO_MLM.md).
>
> It is **STRONGLY** recommended to migrate `ml-model` definitions to the `mlm` extension.
> The `mlm` extension improves the model metadata definition and properties with added support for use cases not directly supported by `ml-model`.
@@ -19,7 +20,7 @@
- **Owner**: @duckontheweb

This document explains the ML Model Extension to the [SpatioTemporal Asset
Catalog](https://github.com/radiantearth/stac-spec) (STAC) specification.

- Examples:
- [Item example](examples/dummy/item.json): Shows the basic usage of the extension in a STAC Item
@@ -60,7 +61,7 @@ these models for the following types of use-cases:
institutions are making an effort to publish code and examples along with academic publications to enable this kind of reproducibility. However,
the quality and usability of this code and related documentation can vary widely and there are currently no standards that ensure that a new
researcher could reproduce a given set of published results from the documentation. The STAC ML Model Extension aims to address this issue by
providing a detailed description of the training data and environment used in an ML model experiment.

## Item Properties

@@ -77,7 +78,7 @@

#### ml-model:learning_approach

Describes the learning approach used to train the model. It is STRONGLY RECOMMENDED that you use one of the
following values, but other values are allowed.

- `"supervised"`
@@ -87,7 +88,7 @@

#### ml-model:prediction_type

Describes the type of predictions made by the model. It is STRONGLY RECOMMENDED that you use one of the
following values, but other values are allowed. Note that not all Prediction Type values are valid
for a given [Learning Approach](#ml-modellearning_approach).

@@ -131,7 +132,7 @@ While the Compose file defines nearly all of the parameters required to run the
directory containing input data should be mounted to the container and to which host directory the output predictions should be written. The Compose
file MUST define volume mounts for input and output data using the Compose
[Interpolation syntax](https://github.com/compose-spec/compose-spec/blob/master/spec.md#interpolation). The input data volume MUST be defined by an
`INPUT_DATA` variable and the output data volume MUST be defined by an `OUTPUT_DATA` variable.

For example, the following Compose file snippet would mount the host input directory to `/var/data/input` in the container and would mount the host
output data directory to `/var/data/output` in the container. In this contrived example, the script to run the model takes 2 arguments: the
@@ -219,10 +220,10 @@ extension, please open a PR to include it in the `examples` directory. Here are

### Running tests

The same checks that run on PRs are part of the repository and can be run locally to verify that changes are valid.
To run tests locally, you'll need `npm`, which is a standard part of any [node.js installation](https://nodejs.org/en/download/).

First you'll need to install everything with npm once. Just navigate to the root of this repository and on
your command line run:
```bash
npm install