From ba2231a60c1f2a39dc11d3734b1de8ae5c793d48 Mon Sep 17 00:00:00 2001
From: Edoardo Conti
Date: Wed, 7 Jul 2021 16:04:53 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 111be4a..db9cb67 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Offline policy evaluation
 
-[![PyPI version](https://badge.fury.io/py/offline-evaluation.svg)](https://badge.fury.io/py/offline-evaluation) [![](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
+[![PyPI version](https://badge.fury.io/py/offline-evaluation.svg)](https://badge.fury.io/py/offline-evaluation) [![](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black) [![Downloads](https://static.pepy.tech/personalized-badge/offline-evaluation?period=total&units=international_system&left_color=black&right_color=brightgreen&left_text=Downloads)](https://pepy.tech/project/offline-evaluation)
 
 Implementations and examples of common offline policy evaluation methods in Python. For more information on offline policy evaluation, see this [tutorial](https://edoconti.medium.com/offline-policy-evaluation-run-fewer-better-a-b-tests-60ce8f93fa15).