
NOTE: Use the SageMaker Python SDK to work with SageMaker Experiments. This repository will not be kept up to date with the latest product improvements; see the developer guide.

SageMaker Experiments Python SDK


Experiment tracking in SageMaker Training Jobs, Processing Jobs, and Notebooks.

Overview

SageMaker Experiments is an AWS service for tracking machine learning Experiments. The SageMaker Experiments Python SDK is a high-level interface to this service that helps you track Experiment information using Python.

Experiment tracking powers the machine learning integrated development environment Amazon SageMaker Studio.

For a detailed API reference, see Read the Docs.

Concepts

  • Experiment: A collection of related Trials. Add Trials to an Experiment that you wish to compare together.
  • Trial: A description of a multi-step machine learning workflow. Each step in the workflow is described by a Trial Component; Trial Components within a Trial have no inherent relationship to one another, such as ordering.
  • Trial Component: A description of a single step in a machine learning workflow, for example data cleaning, feature extraction, model training, or model evaluation.
  • Tracker: A Python context manager for logging information about a single Trial Component.

For more information, see Amazon SageMaker Experiments - Organize, Track, and Compare Your Machine Learning Trainings.
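
To make these concepts concrete, here is a minimal sketch of how they map onto the SDK. The experiment, trial, and parameter names ("my-experiment", "preprocessing", "num_examples") are hypothetical placeholders, not part of this repository's examples.

from smexperiments.experiment import Experiment
from smexperiments.tracker import Tracker

# Create an Experiment and a Trial to group related work (names are hypothetical).
my_experiment = Experiment.create(experiment_name="my-experiment")
my_trial = my_experiment.create_trial(trial_name="my-trial")

# Use a Tracker as a context manager to record a single step (a Trial Component).
with Tracker.create(display_name="preprocessing") as tracker:
    tracker.log_parameter("num_examples", 50000)                 # record an input parameter
    tracker.log_metric(metric_name="rows-dropped", value=123)    # record a metric value
    # Associate the recorded Trial Component with the Trial.
    my_trial.add_trial_component(tracker.trial_component)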

Using the SDK

You can use this SDK to:

  • Manage Experiments, Trials, and Trial Components within Python scripts, programs, and notebooks.
  • Add tracking information to a SageMaker notebook, allowing you to model your notebook in SageMaker Experiments as a multi-step ML workflow.
  • Record experiment information from inside your running SageMaker Training and Processing Jobs (a sketch follows this list).
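
For the last point, a minimal sketch of what a training script might do while running inside a SageMaker Training Job follows; the parameter and metric names are hypothetical.

from smexperiments.tracker import Tracker

# Inside a SageMaker Training Job, Tracker.load() with no arguments resolves the
# Trial Component that SageMaker created for the current job from the job environment.
with Tracker.load() as tracker:
    tracker.log_parameter("learning_rate", 0.01)            # hypothetical hyperparameter
    tracker.log_metric(metric_name="train:loss", value=0.42)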

Installation

pip install sagemaker-experiments

Examples

import boto3
import pickle, gzip, numpy, json, os
import io
import numpy as np
import sagemaker.amazon.common as smac
import sagemaker
from sagemaker import get_execution_role
from sagemaker import analytics
from smexperiments import experiment

# Specify training container
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'linear-learner')

# Load the dataset
s3 = boto3.client("s3")
s3.download_file("sagemaker-sample-files", "datasets/image/MNIST/mnist.pkl.gz", "mnist.pkl.gz")
with gzip.open('mnist.pkl.gz', 'rb') as f:
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')

# Build float32 feature vectors and binary labels (1 if the digit is 0, else 0)
vectors = np.array([t.tolist() for t in train_set[0]]).astype('float32')
labels = np.where(np.array([t.tolist() for t in train_set[1]]) == 0, 1, 0).astype('float32')

# Convert the data to RecordIO protobuf format
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, vectors, labels)
buf.seek(0)

# Upload the training data to S3
key = 'recordio-pb-data'
bucket = sagemaker.session.Session().default_bucket()
prefix = 'sagemaker/DEMO-linear-mnist'
boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(buf)
s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key)
output_location = 's3://{}/{}/output'.format(bucket, prefix)

# Create an Experiment and a Trial to track this training run
my_experiment = experiment.Experiment.create(experiment_name='MNIST')
my_trial = my_experiment.create_trial(trial_name='linear-learner')

role = get_execution_role()
sess = sagemaker.Session()

# Configure the built-in linear-learner estimator
linear = sagemaker.estimator.Estimator(container,
                                       role,
                                       train_instance_count=1,
                                       train_instance_type='ml.c4.xlarge',
                                       output_path=output_location,
                                       sagemaker_session=sess)
linear.set_hyperparameters(feature_dim=784,
                           predictor_type='binary_classifier',
                           mini_batch_size=200)

# Launch the training job and associate it with the Experiment and Trial
linear.fit(inputs={'train': s3_train_data},
           experiment_config={
               "ExperimentName": my_experiment.experiment_name,
               "TrialName": my_trial.trial_name,
               "TrialComponentDisplayName": "MNIST-linear-learner",
           })

# Retrieve the tracked parameters and metrics as a pandas DataFrame
trial_component_analytics = analytics.ExperimentAnalytics(experiment_name=my_experiment.experiment_name)

analytic_table = trial_component_analytics.dataframe()
analytic_table  # display the results (when run in a notebook)
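
When you are finished, you may want to delete the tracked resources. One way to do this, sketched here under the assumption that you want to remove the Experiment along with all of its Trials and Trial Components, is:

# Delete the Experiment and everything under it; "--force" is the required confirmation.
my_experiment.delete_all(action="--force")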

For more examples, see the sagemaker-experiments section of the AWS Labs Amazon SageMaker Examples repository.

License

This library is licensed under the Apache 2.0 License.

Running Tests

Unit Tests

tox tests/unit

Integration Tests

To run the integration tests, the following prerequisites must be met:

  • AWS account credentials are available in the environment for the boto3 client to use.
  • The AWS account has an IAM role with SageMaker permissions.

Run the integration tests with:

tox tests/integ

To test against a different region:

tox -e py39 -- --region cn-north-1

Docker Based Integration Tests

Several integration tests rely on Docker to push an image to Amazon ECR, which is then used for training.

Docker Setup

  1. Install Docker.
  2. Set the AWS credential helper in the Docker config file (~/.docker/config.json):
# docker config example
{
    "stackOrchestrator": "swarm",
    "credsStore": "desktop",
    "auths": {
        "https://index.docker.io/v1/": {}
    },
    "credHelpers": {
        "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
    },
    "experimental": "disabled"
}
# run only docker based tests
tox -e py39 -- tests/integ -m 'docker'

# exclude docker based tests
tox -e py39 -- tests/integ -m 'not docker'

Generate Docs

tox -e docs
