FutureWarning
ParallelConfig
MSR Banzhaf
LissaInfluence
DaskInfluenceCalcualator
TorchnumpyConverter
data_names
ValuationResult.zeros()
init_executor(
    max_workers: Optional[int] = None,
-   config: ParallelConfig = ParallelConfig(),
+   config: Optional[ParallelConfig] = None,
    **kwargs
) -> Generator[Executor, None, None]
TYPE:
-   ParallelConfig
+   Optional[ParallelConfig]
DEFAULT:
-   ParallelConfig()
+   None
Optional[ParallelConfig]
ParallelConfig()
None
@contextmanager
@contextmanager
@deprecated(
    target=None,
    deprecated_in="0.9.0",
    ...
)
def init_executor(
    max_workers: Optional[int] = None,
-   config: ParallelConfig = ParallelConfig(),
+   config: Optional[ParallelConfig] = None,
    **kwargs,
) -> Generator[Executor, None, None]:
    """Initializes a futures executor for the given parallel configuration.
    ...
        assert results == [1, 2, 3, 4, 5]
    ```
    """
-    try:
-        cls = ParallelBackend.BACKENDS[config.backend]
-        with cls.executor(max_workers=max_workers, config=config, **kwargs) as e:
-            yield e
-    except KeyError:
-        raise NotImplementedError(f"Unexpected parallel backend {config.backend}")
+    if config is None:
+        config = ParallelConfig()
+
+    try:
+        cls = ParallelBackend.BACKENDS[config.backend]
+        with cls.executor(max_workers=max_workers, config=config, **kwargs) as e:
+            yield e
+    except KeyError:
+        raise NotImplementedError(f"Unexpected parallel backend {config.backend}")
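A minimal usage sketch of the updated default (hedged: the import paths for init_executor and ParallelConfig below are assumptions based on the names in this entry, and the decorator above marks the function as deprecated since 0.9.0):

# assumed import locations
from pydvl.parallel import ParallelConfig
from pydvl.parallel.futures import init_executor

# omitting config now falls back to ParallelConfig() inside the function
with init_executor(max_workers=2) as executor:
    results = list(executor.map(lambda x: x + 1, range(5)))
assert results == [1, 2, 3, 4, 5]

# an explicit configuration still works as before
with init_executor(max_workers=2, config=ParallelConfig()) as executor:
    assert executor.submit(lambda x: x + 1, 1).result() == 2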
pyDVL collects algorithms for data valuation and influence function computation. For the full list see Methods. It supports out-of-core and distributed computation, as well as local or distributed caching of results.
If you're a first-time user of pyDVL, we recommend going through Getting started.
Getting started
Steps to install and requirements
Example gallery
Notebooks with worked-out examples of data valuation and influence functions
Data valuation
Basics of data valuation and description of the main algorithms
Influence Function
An introduction to the influence function and its computation with pyDVL
Supported methods
List of all methods implemented with references.
API Reference
Full documentation of the API
RankCorrelation
NystroemSketchInfluence
CgInfluence
ArnoldiInfluence
pydvl.utils.cache.memcached
pymemcache
pydvl.utils.cache
model_dtype
TorchInfluenceFunctionModel
EkfacInfluence
to
InfluenceFunctionModel
DaskInfluenceCalculator
SequentialInfluenceCalculator
compute_influences
AntitheticPermutationSampler
This is our first β release! We have worked hard to deliver improvements across the board, with a focus on documentation and usability. We have also reworked the internals of the influence module, improved parallelism and handling of randomness.
influence
pydvl.utils.numeric
pydvl.value.shapley
pydvl.value.semivalues
Seed
ensure_seed_sequence
batch_size
compute_banzhaf_semivalues
compute_beta_shapley_semivalues
compute_shapley_semivalues
compute_generic_semivalues
ray.init
compute_influence_factors
semivalues
joblib
linear_solve
compute_semivalues
RayExecutor
from_arrays
from_sklearn
ApproShapley
ValuationResult
empty()
zeros()
from_array
py.typed
non_negative_subsidy
options
solver_options
@unpackable
StandardError
n_jobs
truncated_montecarlo_shapley
n_iterations
Scorer
Status
compute_shapley_values
n_jobs=-1
RayParallelBackend
kwargs
Utility
clone_before_fit
show_warnings
False
MapReduceJob
chunkify_inputs
n_runs
put()
_chunkify()
num_workers
n_local_workers
from_arrays()
Dataset
GroupedDataset
extra_values
compute_removal_score()
compute_random_removal_score()
Mostly API documentation and notebooks, plus some bugfixes.
In PR #161:
- Support for $$ math in sphinx docs.
- Usage of sphinx extension for external links (introducing new directives like :gh:, :issue: and :tfl: to construct standardised links to external resources).
- Only update auto-generated documentation files if there are changes. Some minor additions to update_docs.py.
- Parallelization of exact combinatorial Shapley.
- Integrated KNN shapley into the main interface compute_shapley_values.
:gh:
:issue:
:tfl:
update_docs.py
In PR #161:
- Improved main docs and Shapley notebooks. Added or fixed many docstrings, readme and documentation for contributors. Typos, grammar and style in code, documentation and notebooks.
- Internal renaming and rearranging in the parallelization and caching modules.
_chunkify
_backpressure
This is the very first release of pyDVL.
It contains:
Data Valuation Methods:
Leave-One-Out
The goal of pyDVL is to be a repository of successful algorithms for the valuation of data, in a broader sense. Contributions are welcome from anyone in the form of pull requests, bug reports and feature requests.
We will consider for inclusion any (tested) implementation of an algorithm appearing in a peer-reviewed journal (even if the method does not improve the state of the art, for benchmarking and comparison purposes). We are also open to improvements to the currently implemented methods and other ideas. Please open a ticket with yours.
If you are interested in setting up a similar project, consider the template pymetrius.
This project uses black to format code and pre-commit to invoke it as a git pre-commit hook. Consider installing any of black's IDE integrations to make your life easier.
Run the following to set up the pre-commit git hook to run before pushes:
pre-commit install --hook-type pre-push\n
Additionally, we use Git LFS for some files like images. Install with
git lfs install\n
We strongly suggest using some form of virtual environment for working with the library. E.g. with venv:
python -m venv ./venv\n. venv/bin/activate # `venv\\Scripts\\activate` in windows\npip install -r requirements-dev.txt -r requirements-docs.txt\n
With conda:
conda create -n pydvl python=3.8\nconda activate pydvl\npip install -r requirements-dev.txt -r requirements-docs.txt\n
A very convenient way of working with your library during development is to install it in editable mode into your environment by running
pip install -e .\n
In order to build the documentation locally (which is done as part of the tox suite) you need to install additional non-python dependencies as described in the documentation of mkdocs-material.
In addition, pandoc is required. Except for OSX, it should be installed automatically as a dependency with requirements-docs.txt. Under OSX you can install pandoc (you'll need at least version 2.11) with:
requirements-docs.txt
brew install pandoc\n
Remember to mark all autogenerated directories as excluded in your IDE. In particular docs_build and .tox should be marked as excluded to avoid slowdowns when searching or refactoring code.
docs_build
.tox
If you use remote execution, don't forget to exclude data paths from deployment (unless you really want to sync them).
Automated builds, tests, generation of documentation and publishing are handled by CI pipelines. Before pushing your changes to the remote we recommend executing tox locally in order to detect mistakes early on and to avoid failing pipelines. tox will:
- run the test suite
- build the documentation
- build and test installation of the package
- generate coverage and pylint reports in html, as well as badges.
tox
You can configure pytest, coverage and pylint by adjusting pyproject.toml.
Besides the usual unit tests, most algorithms are tested using pytest. This requires ray for the parallelization and Memcached for caching. Please install both before running the tests. We run tests in CI as well.
It is possible to pass optional command line arguments to pytest, for example to run only certain tests using patterns (-k) or markers (-m).
-k
-m
tox -e tests -- <optional arguments>\n
There are a few important arguments:
--memcached-service
localhost:11211
Memcached is needed for testing caching as well as for speeding up certain methods (e.g. Permutation Shapley).
To start memcached locally in the background with Docker use:
docker run --name pydvl-memcache -p 11211:11211 -d memcached\n
-n
There are two layers of parallelization in the tests. An inner one within the tests themselves, i.e. the parallelism in the algorithms, and an outer one by pytest-xdist. The latter is controlled by the -n argument. If you experience segmentation faults with the tests, try running them with -n 0 to disable parallelization.
-n 0
--slow-tests
We use a few different markers to differentiate between tests and run groups of them separately. Use pytest --markers to get a list and description of all available markers.
pytest --markers
Two important markers are:
pytest.mark.slow
A slow test is any test that takes 45 seconds or more to run and that can be skipped most of the time. In some cases a test is slow, but it is required in order to ensure that a feature works as expected and that there are no bugs. In those cases, we should not use this marker.
Slow tests are always run on CI. Locally, they are skipped by default but can be additionally run using: pytest --slow-tests.
pytest --slow-tests
pytest.mark.torch
To test modules that rely on PyTorch, use:
tox -e tests -- -m \"torch\"\n
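For illustration, this is how a test opts into these markers (a generic pytest sketch, not code copied from the pyDVL test suite):

import pytest

@pytest.mark.slow
def test_long_running_estimation():
    # skipped locally by default; enabled with --slow-tests and always run on CI
    ...

@pytest.mark.torch
def test_torch_dependent_feature():
    torch = pytest.importorskip("torch")  # only runs when PyTorch is installed
    assert torch.ones(2).sum().item() == 2.0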
To test the notebooks separately, run (see below for details):
tox -e notebook-tests\n
To create a package locally, run:
python setup.py sdist bdist_wheel\n
We use notebooks both as documentation (copied over to docs/examples) and as integration tests. All notebooks in the notebooks directory are executed during the test run. Because run times are typically too long for large datasets, you must check for the CI environment variable to work with smaller ones. For example, you can select a subset of the data:
docs/examples
notebooks
CI
# In CI we only use a subset of the training set\nif os.environ.get('CI'):\n training_data = training_data[:10]\n
This switching should happen in a separate notebook cell tagged with hide to hide the cell's input and output when rendering it as part of the documents. We want to avoid as much clutter and boilerplate as possible in the notebooks themselves.
hide
Because we want the documentation to include the full dataset, we commit notebooks with the outputs of running them on the full datasets to the repo. The notebooks are then added by CI to the section Examples of the documentation.
Switching between CI or not, importing generic modules and plotting results are all examples of boilerplate code irrelevant to a reader interested in pyDVL's functionality. For this reason we choose to isolate this code into separate cells which are then hidden in the documentation.
In order to do this, cells are marked with tags understood by the mkdocs plugin mkdocs-jupyter, namely adding the following to the metadata of the relevant cells:
mkdocs-jupyter
\"tags\": [\n \"hide\"\n]\n
To hide the cell's input and output.
Or:
\"tags\": [\n \"hide-input\"\n]\n
To only hide the input, and:
\"tags\": [\n    \"hide-output\"\n]\n
to only hide the output.
It is important to leave a warning at the top of the document to avoid confusion. Examples for hidden imports and plots are available in the notebooks, e.g. in notebooks/shapley_basic_spotify.ipynb.
If you add a plot to a notebook that should also render nicely in browser dark mode, add the tag invertible-output, i.e.
\"tags\": [\n \"invertible-output\"\n]\n
API documentation and examples from notebooks are built with mkdocs, using a number of plugins, including mkdocstrings, with versioning handled by mike.
Notebooks are an integral part of the documentation as well, please read the section on notebooks above.
If you want to build the documentation locally, please make sure you followed the instructions in the section Setting up your environment.
Use the following command to build the documentation the same way it is done in CI:
mkdocs build\n
Locally, you can use this command instead to continuously rebuild documentation on changes to the docs and src folder:
docs
src
mkdocs serve\n
This will rebuild the documentation on changes to .md files inside docs, notebooks and python files.
.md
On OSX, it is possible that the cairo lib file is not properly linked when installed via homebrew. In this case you might encounter an error like this
OSError: no library called \"cairo-2\" was found\nno library called \"cairo\" was found\nno library called \"libcairo-2\" was found\n
This error can appear when running mkdocs build or mkdocs serve. In that case, set the DYLD_FALLBACK_LIBRARY_PATH environment variable:
export DYLD_FALLBACK_LIBRARY_PATH=$DYLD_FALLBACK_LIBRARY_PATH:/opt/homebrew/lib\n
Navigation is configured in mkdocs.yaml using the nav section. We use the plugin mkdocs-literate-nav, which allows fine-grained control of the navigation structure. However, most pages are explicitly listed and manually arranged in the nav section of the configuration.
mkdocs.yaml
nav
mkdocstrings includes the plugin autorefs to enable automatic linking across pages with e.g. [a link][to-something]. Anchors are autogenerated from section titles, and are not guaranteed to be unique. In order to ensure that a link will remain valid, add a custom anchor to the section title:
[a link][to-something]
## Some section { #permanent-anchor-to-some-section }\n
(note the space after the opening brace). You can then refer to it within another markdown file with [Some section][permanent-anchor-to-some-section].
[Some section][permanent-anchor-to-some-section]
We use the admonition extension of Mkdocs Material to create admonitions, also known as call-outs, that hold information about when a certain feature was added, changed or deprecated and optionally a description with more details. We put the admonition directly in a module's, a function's or class' docstring.
We use the following syntax:
!!! tip \"<Event Type> in version <Version Number>\"\n\n <Optional Description>\n
The description is useful when the note is about a smaller change such as a parameter.
!!! tip \"New in version <Version Number>\"\n\n <Optional Description>\n
!!! tip \"Changed in version <Version Number>\"\n\n <Optional Description>\n
For example, for a change in version 1.2.3 that adds kwargs to a class' constructor we would write:
1.2.3
!!! tip \"Changed in version 1.2.3\"\n\n Added kwargs to the constructor.\n
!!! tip \"Deprecated in version <Version Number>\"\n\n <Optional Description>\n
Bibliographic citations are managed with the plugin mkdocs-bibtex. To enter a citation first add the entry to docs/pydvl.bib. For team contributors this should be an export of the Zotero folder software/pydvl in the TransferLab Zotero library. All other contributors just add the bibtex data, and a maintainer will add it to the group library upon merging.
docs/pydvl.bib
software/pydvl
To add a citation inside a markdown file, use the notation [@citekey]. Alas, because of when mkdocs-bibtex enters the pipeline, it won't process docstrings. For module documentation, we manually inject html into the markdown files. For example, in pydvl.value.shapley.montecarlo we have:
[@citekey]
pydvl.value.shapley.montecarlo
\"\"\"\nModule docstring...\n\n## References\n\n[^1]: <a name=\"ghorbani_data_2019\"></a>Ghorbani, A., Zou, J., 2019.\n [Data Shapley: Equitable Valuation of Data for Machine\n Learning](https://proceedings.mlr.press/v97/ghorbani19c.html).\n In: Proceedings of the 36th International Conference on Machine Learning,\n PMLR, pp. 2242\u20132251.\n\"\"\"\n
and then later in the file, inside a function's docstring:
This function implements (Ghorbani and Zou, 2019)<sup><a \n href=\"#ghorbani_data_2019\">1</a></sup>\n
Use LaTeX delimiters $ and $$ for inline and displayed mathematics respectively.
$
$$
Warning: backslashes must be escaped in docstrings! (although there are exceptions). For simplicity, declare the string as \"raw\" with the prefix r:
r
# This will work\ndef f(x: float) -> float:\n r\"\"\" Computes \n $${ f(x) = \\frac{1}{x^2} }$$\n \"\"\"\n return 1/(x*x)\n\n# This throws an obscure error\ndef f(x: float) -> float:\n \"\"\" Computes \n $$\\frac{1}{x^2}$$\n \"\"\"\n return 1/(x*x)\n
Note how there is no space after the dollar signs. This is important! You can use braces for legibility like in the first example.
We keep the abbreviations used in the documentation inside the docs_include/abbreviations.md file.
The syntax for abbreviations is:
*[ABBR]: Abbreviation\n
We use workflows to:
awaiting-reply
We test all algorithms with simple datasets in CI jobs. This can take a sizeable amount of time, so care must be taken not to overdo it:
1. All algorithm tests must be on very simple datasets and as quick as possible.
2. We try not to trigger CI pipelines when unnecessary (see Skipping CI runs).
3. We split the tests based on their duration into groups and run them in parallel.
For that we use pytest-split to first store the duration of all tests with tox -e tests -- --store-durations --slow-tests in a .test_durations file.
tox -e tests -- --store-durations --slow-tests
.test_durations
Alternatively, we can use pytest directly: pytest --store-durations --slow-tests.
pytest --store-durations --slow-tests
Note: This does not have to be done each time a new test or test case is added. For new tests and test cases, pytest-split assumes the average test execution time (calculated based on the stored information) for every test which does not have duration information stored. Thus, there's no need to store durations after changing the test suite. However, when there are major changes in the suite compared to what's stored in .test_durations, it's recommended to update the duration information with --store-durations to ensure that the splitting is in balance.
--store-durations
Then we can have as many splits as we want:
tox -e tests -- --splits 3 --group 1\ntox -e tests -- --splits 3 --group 2\ntox -e tests -- --splits 3 --group 3\n
Alternatively, we can use pytest directly: pytest --splits 3 --group 1.
pytest --splits 3 --group 1
Each one of these commands should be run in a separate shell/job to run the test groups in parallel and decrease the total runtime.
To run GitHub Actions locally we use act. It uses the workflows defined in .github/workflows and determines the set of actions that need to be run. It uses the Docker API to either pull or build the necessary images, as defined in our workflow files, and finally determines the execution path based on the dependencies that were defined.
.github/workflows
Once it has the execution path, it then uses the Docker API to run containers for each action based on the images prepared earlier. The environment variables and filesystem are all configured to match what GitHub provides.
You can install it manually using:
curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash -s -- -d -b ~/bin \n
And then simply add it to your PATH variable: PATH=~/bin:$PATH
PATH=~/bin:$PATH
Refer to its official readme for more installation options.
By default, act will run all workflows in .github/workflows. You can use the -W flag to specify a specific workflow file to run, or you can rely on the job id to be unique (but then you'll see warnings for the workflows without that job id).
act
-W
# Run only the main tests for python 3.8 after a push event (implicit) \nact -W .github/workflows/run-tests-workflow.yaml \\\n -j run-tests \\\n --input tests_to_run=base\\\n --input python_version=3.8\n
Other common flags are:
# List all actions for all events:\nact -l\n\n# List the actions for a specific event:\nact workflow_dispatch -l\n\n# List the actions for a specific job:\nact -j lint -l\n\n# Run the default (`push`) event:\nact\n\n# Run a specific event:\nact pull_request\n\n# Run a specific job:\nact -j lint\n\n# Collect artifacts to the /tmp/artifacts folder:\nact --artifact-server-path /tmp/artifacts\n\n# Run a job in a specific workflow (useful if you have duplicate job names)\nact -j lint -W .github/workflows/tox.yml\n\n# Run in dry-run mode:\nact -n\n\n# Enable verbose-logging (can be used with any of the above commands)\nact -v\n
To run the publish job (the most difficult one to test) you would simply use:
publish
act release -j publish --eventpath events.json\n
With events.json containing:
events.json
{\n \"act\": true\n}\n
This will use your current branch. If you want to test a specific branch you have to use the workflow_dispatch event (see below).
workflow_dispatch
act workflow_dispatch -j publish --eventpath events.json\n
{\n \"act\": true,\n \"inputs\": {\n \"tag_name\": \"v0.6.0\"\n }\n}\n
One sometimes would like to skip CI for certain commits (e.g. updating the readme). In order to do this, simply prefix the commit message with [skip ci]. The string can be anywhere, but adding it to the beginning of the commit message makes it more evident when looking at commits in a PR.
[skip ci]
Refer to the official GitHub documentation for more information.
In order to create an automatic release, a few prerequisites need to be satisfied:
develop
Then, a new release can be created using the script build_scripts/release-version.sh (leave out the version parameter to have bumpversion automatically derive the next release version by bumping the patch part):
build_scripts/release-version.sh
bumpversion
build_scripts/release-version.sh 0.1.6\n
To find out how to use the script, pass the -h or --help flags:
-h
--help
build_scripts/release-version.sh --help\n
If running in interactive mode (without -y|--yes), the script will output a summary of pending changes and ask for confirmation before executing the actions.
-y|--yes
Once this is done, a tag will be created on the repository. You should then create a GitHub release for that tag. That will trigger a CI pipeline that will automatically create a package and publish it from CI to PyPI.
If the automatic release process doesn't cover your use case, you can also create a new release manually by following these steps:
pip install --pre --index-url https://test.pypi.org/simple/
export RELEASE_VERSION=\"vX.Y.Z\"\ngit checkout develop\ngit branch release/${RELEASE_VERSION} && git checkout release/${RELEASE_VERSION}\n
bumpversion --commit release
bumpversion --commit --new-version X.Y.Z release
release
master
git checkout master\ngit merge --no-ff release/${RELEASE_VERSION}\ngit tag -a ${RELEASE_VERSION} -m\"Release ${RELEASE_VERSION}\"\ngit push --follow-tags origin master\n
release/vX.Y.Z
bumpversion --commit patch
git checkout develop\ngit merge --no-ff release/${RELEASE_VERSION}\ngit push origin develop\n
git branch -d release/${RELEASE_VERSION}
In order to publish new versions of the package from the development branch, the CI pipeline requires the following secret variables to be set up:
TEST_PYPI_USERNAME\nTEST_PYPI_PASSWORD\nPYPI_USERNAME\nPYPI_PASSWORD\n
The first 2 are used after tests run on the develop branch's CI workflow to automatically publish packages to TestPyPI.
The last 2 are used in the publish.yaml CI workflow to publish packages to PyPI from develop after a GitHub release.
We use bump2version to bump the build part of the version number without committing or tagging the change and then publish a package to TestPyPI from CI using Twine. The version has the GitHub run number appended.
For more details refer to the files .github/workflows/publish.yaml and .github/workflows/tox.yaml.
This is the API documentation for the Python Data Valuation Library (PyDVL). Use the table of contents to access the documentation for each module.
The two main modules you will want to look at are value and influence.
This package contains algorithms for the computation of the influence function.
See The Influence function for an introduction to the concepts and methods implemented here.
Warning
Much of the code in this package is experimental or untested and is subject to modification. In particular, the package structure and basic API will probably change.
This module provides classes and utilities for handling large arrays that are chunked and lazily evaluated. It includes abstract base classes for converting between tensor types and NumPy arrays, aggregating blocks of data, and abstract representations of lazy arrays. Concrete implementations are provided for handling chunked lazy arrays (chunked in one resp. two dimensions), with support for efficient storage and retrieval using the Zarr library.
Bases: Generic[TensorType], ABC
Generic[TensorType]
ABC
Base class for converting TensorType objects into numpy arrays and vice versa.
abstractmethod
to_numpy(x: TensorType) -> NDArray\n
Override this method for converting a TensorType object into a numpy array
src/pydvl/influence/array.py
@abstractmethod\ndef to_numpy(self, x: TensorType) -> NDArray:\n \"\"\"Override this method for converting a TensorType object into a numpy array\"\"\"\n
from_numpy(x: NDArray) -> TensorType\n
Override this method for converting a numpy array into a TensorType object
@abstractmethod\ndef from_numpy(self, x: NDArray) -> TensorType:\n \"\"\"Override this method for converting a numpy array into a TensorType object\"\"\"\n
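As an illustration, a minimal concrete converter for the trivial case where the tensor type is already a NumPy array could look as follows (a sketch; the import path is assumed from the source reference above):

import numpy as np
from numpy.typing import NDArray

from pydvl.influence.array import NumpyConverter  # assumed import path


class IdentityNumpyConverter(NumpyConverter[NDArray]):
    """Converter for the case where TensorType is already an ndarray."""

    def to_numpy(self, x: NDArray) -> NDArray:
        return np.asarray(x)

    def from_numpy(self, x: NDArray) -> NDArray:
        return x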
__call__(tensor_generator: Generator[TensorType, None, None])\n
Aggregates tensors from a generator.
Implement this method to define how a sequence of tensors, provided by a generator, should be combined.
@abstractmethod\ndef __call__(self, tensor_generator: Generator[TensorType, None, None]):\n \"\"\"\n Aggregates tensors from a generator.\n\n Implement this method to define how a sequence of tensors, provided by a\n generator, should be combined.\n \"\"\"\n
Bases: SequenceAggregator
SequenceAggregator
__call__(\n tensor_generator: Generator[TensorType, None, None]\n) -> List[TensorType]\n
Aggregates tensors from a single-level generator into a list. This method simply collects each tensor emitted by the generator into a single list.
tensor_generator
A generator that yields TensorType objects.
TYPE: Generator[TensorType, None, None]
Generator[TensorType, None, None]
List[TensorType]
A list containing all the tensors provided by the tensor_generator.
def __call__(\n self, tensor_generator: Generator[TensorType, None, None]\n) -> List[TensorType]:\n \"\"\"\n Aggregates tensors from a single-level generator into a list. This method simply\n collects each tensor emitted by the generator into a single list.\n\n Args:\n tensor_generator: A generator that yields TensorType objects.\n\n Returns:\n A list containing all the tensors provided by the tensor_generator.\n \"\"\"\n return [t for t in tensor_generator]\n
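A tiny usage sketch (assuming ListAggregator is importable from pydvl.influence.array, as the source reference suggests):

import numpy as np
from pydvl.influence.array import ListAggregator

def blocks():
    for i in range(3):
        yield np.full((2, 2), i)

collected = ListAggregator()(blocks())
assert len(collected) == 3           # one list entry per yielded block
assert collected[1].shape == (2, 2)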
__call__(\n nested_generators_of_tensors: Generator[\n Generator[TensorType, None, None], None, None\n ]\n)\n
Aggregates tensors from a generator of generators.
Implement this method to specify how tensors, nested in two layers of generators, should be combined. Useful for complex data structures where tensors are not directly accessible in a flat list.
@abstractmethod\ndef __call__(\n self,\n nested_generators_of_tensors: Generator[\n Generator[TensorType, None, None], None, None\n ],\n):\n \"\"\"\n Aggregates tensors from a generator of generators.\n\n Implement this method to specify how tensors, nested in two layers of\n generators, should be combined. Useful for complex data structures where tensors\n are not directly accessible in a flat list.\n \"\"\"\n
Bases: NestedSequenceAggregator
NestedSequenceAggregator
__call__(\n nested_generators_of_tensors: Generator[\n Generator[TensorType, None, None], None, None\n ]\n) -> List[List[TensorType]]\n
Aggregates tensors from a nested generator structure into a list of lists. Each inner generator is converted into a list of tensors, resulting in a nested list structure.
Args: nested_generators_of_tensors: A generator of generators, where each inner generator yields TensorType objects.
List[List[TensorType]]
A list of lists, where each inner list contains tensors returned from one of the inner generators.
def __call__(\n self,\n nested_generators_of_tensors: Generator[\n Generator[TensorType, None, None], None, None\n ],\n) -> List[List[TensorType]]:\n \"\"\"\n Aggregates tensors from a nested generator structure into a list of lists.\n Each inner generator is converted into a list of tensors, resulting in a nested\n list structure.\n\n Args:\n nested_generators_of_tensors: A generator of generators, where each inner\n generator yields TensorType objects.\n\n Returns:\n A list of lists, where each inner list contains tensors returned from one\n of the inner generators.\n \"\"\"\n return [list(tensor_gen) for tensor_gen in nested_generators_of_tensors]\n
LazyChunkSequence(\n generator_factory: Callable[[], Generator[TensorType, None, None]]\n)\n
A class representing a chunked and lazily evaluated array, where the chunking is restricted to the first dimension.
This class is designed to handle large arrays that don't fit in memory. It works by generating chunks of the array on demand and can also convert these chunks to a Zarr array for efficient storage and retrieval.
generator_factory
A factory function that returns a generator. This generator yields chunks of the large array when called.
def __init__(\n self, generator_factory: Callable[[], Generator[TensorType, None, None]]\n):\n self.generator_factory = generator_factory\n
compute(aggregator: Optional[SequenceAggregator] = None)\n
Computes and optionally aggregates the chunks of the array using the provided aggregator. This method initiates the generation of chunks and then combines them according to the aggregator's logic.
aggregator
An optional aggregator for combining the chunks of the array. If None, a default ListAggregator is used to simply collect the chunks into a list.
TYPE: Optional[SequenceAggregator] DEFAULT: None
Optional[SequenceAggregator]
The aggregated result of all chunks of the array, the format of which depends on the aggregator used.
def compute(self, aggregator: Optional[SequenceAggregator] = None):\n \"\"\"\n Computes and optionally aggregates the chunks of the array using the provided\n aggregator. This method initiates the generation of chunks and then\n combines them according to the aggregator's logic.\n\n Args:\n aggregator: An optional aggregator for combining the chunks of\n the array. If None, a default ListAggregator is used to simply collect\n the chunks into a list.\n\n Returns:\n The aggregated result of all chunks of the array, the format of which\n depends on the aggregator used.\n\n \"\"\"\n if aggregator is None:\n aggregator = ListAggregator()\n return aggregator(self.generator_factory())\n
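Putting this together, a small sketch of constructing and computing a LazyChunkSequence over NumPy chunks (import path assumed as above):

import numpy as np
from pydvl.influence.array import LazyChunkSequence

def chunk_factory():
    # must return a fresh generator of row-chunks on every call
    return (np.random.rand(10, 5) for _ in range(4))

lazy_seq = LazyChunkSequence(chunk_factory)
chunks = lazy_seq.compute()             # default ListAggregator -> list of chunks
full = np.concatenate(chunks, axis=0)   # shape (40, 5)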
to_zarr(\n path_or_url: Union[str, StoreLike],\n converter: NumpyConverter,\n return_stored: bool = False,\n overwrite: bool = False,\n) -> Optional[Array]\n
Converts the array into Zarr format, a storage format optimized for large arrays, and stores it at the specified path or URL. This method is suitable for scenarios where the data needs to be saved for later use or for large datasets requiring efficient storage.
path_or_url
The file path or URL where the Zarr array will be stored. Also accepts instances of zarr stores.
TYPE: Union[str, StoreLike]
Union[str, StoreLike]
converter
A converter for transforming blocks into NumPy arrays compatible with Zarr.
TYPE: NumpyConverter
NumpyConverter
return_stored
If True, the method returns the stored Zarr array; otherwise, it returns None.
TYPE: bool DEFAULT: False
bool
overwrite
If True, overwrites existing data at the given path_or_url. If False, an error is raised in case of existing data.
Optional[Array]
The Zarr array if return_stored is True; otherwise, None.
def to_zarr(\n self,\n path_or_url: Union[str, StoreLike],\n converter: NumpyConverter,\n return_stored: bool = False,\n overwrite: bool = False,\n) -> Optional[zarr.Array]:\n \"\"\"\n Converts the array into Zarr format, a storage format optimized for large\n arrays, and stores it at the specified path or URL. This method is suitable for\n scenarios where the data needs to be saved for later use or for large datasets\n requiring efficient storage.\n\n Args:\n path_or_url: The file path or URL where the Zarr array will be stored.\n Also excepts instances of zarr stores.\n converter: A converter for transforming blocks into NumPy arrays\n compatible with Zarr.\n return_stored: If True, the method returns the stored Zarr array; otherwise,\n it returns None.\n overwrite: If True, overwrites existing data at the given path_or_url.\n If False, an error is raised in case of existing data.\n\n Returns:\n The Zarr array if return_stored is True; otherwise, None.\n \"\"\"\n row_idx = 0\n z = None\n for block in self.generator_factory():\n numpy_block = converter.to_numpy(block)\n\n if z is None:\n z = self._initialize_zarr_array(numpy_block, path_or_url, overwrite)\n\n new_shape = self._new_shape_according_to_block(numpy_block, row_idx)\n z.resize(new_shape)\n\n z[row_idx : row_idx + numpy_block.shape[0]] = numpy_block\n row_idx += numpy_block.shape[0]\n\n return z if return_stored else None\n
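A hedged sketch of persisting such a sequence to disk with the TorchNumpyConverter used elsewhere in these docs (the path and arguments are illustrative):

import torch
from pydvl.influence.array import LazyChunkSequence
from pydvl.influence.torch.util import TorchNumpyConverter

lazy_seq = LazyChunkSequence(lambda: (torch.rand(10, 5) for _ in range(4)))
z = lazy_seq.to_zarr(
    "influences.zarr",                                         # local path or any zarr store
    converter=TorchNumpyConverter(device=torch.device("cpu")),
    return_stored=True,                                        # get back the zarr.Array handle
    overwrite=True,
)
print(z.shape)  # (40, 5) once all chunks have been written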
NestedLazyChunkSequence(\n generator_factory: Callable[\n [], Generator[Generator[TensorType, None, None], None, None]\n ]\n)\n
A class representing a chunked and lazily evaluated array, where the chunking is restricted to the first two dimensions.
This class is designed for handling large arrays where individual chunks are loaded and processed lazily. It supports converting these chunks into a Zarr array for efficient storage and retrieval, with chunking applied along the first two dimensions.
A factory function that returns a generator of generators. Each inner generator yields chunks.
def __init__(\n self,\n generator_factory: Callable[\n [], Generator[Generator[TensorType, None, None], None, None]\n ],\n):\n self.generator_factory = generator_factory\n
compute(aggregator: Optional[NestedSequenceAggregator] = None)\n
An optional aggregator for combining the chunks of the array. If None, a default NestedListAggregator is used to simply collect the chunks into a list of lists.
TYPE: Optional[NestedSequenceAggregator] DEFAULT: None
Optional[NestedSequenceAggregator]
The aggregated result of all chunks of the array, the format of which depends on the aggregator used.
def compute(self, aggregator: Optional[NestedSequenceAggregator] = None):\n \"\"\"\n Computes and optionally aggregates the chunks of the array using the provided\n aggregator. This method initiates the generation of chunks and then\n combines them according to the aggregator's logic.\n\n Args:\n aggregator: An optional aggregator for combining the chunks of\n the array. If None, a default\n [NestedListAggregator][pydvl.influence.array.NestedListAggregator]\n is used to simply collect the chunks into a list of lists.\n\n Returns:\n The aggregated result of all chunks of the array, the format of which\n depends on the aggregator used.\n\n \"\"\"\n if aggregator is None:\n aggregator = NestedListAggregator()\n return aggregator(self.generator_factory())\n
def to_zarr(\n self,\n path_or_url: Union[str, StoreLike],\n converter: NumpyConverter,\n return_stored: bool = False,\n overwrite: bool = False,\n) -> Optional[zarr.Array]:\n \"\"\"\n Converts the array into Zarr format, a storage format optimized for large\n arrays, and stores it at the specified path or URL. This method is suitable for\n scenarios where the data needs to be saved for later use or for large datasets\n requiring efficient storage.\n\n Args:\n path_or_url: The file path or URL where the Zarr array will be stored.\n Also excepts instances of zarr stores.\n converter: A converter for transforming blocks into NumPy arrays\n compatible with Zarr.\n return_stored: If True, the method returns the stored Zarr array;\n otherwise, it returns None.\n overwrite: If True, overwrites existing data at the given path_or_url.\n If False, an error is raised in case of existing data.\n\n Returns:\n The Zarr array if return_stored is True; otherwise, None.\n \"\"\"\n\n row_idx = 0\n z = None\n numpy_block = None\n for row_blocks in self.generator_factory():\n col_idx = 0\n for block in row_blocks:\n numpy_block = converter.to_numpy(block)\n if z is None:\n z = self._initialize_zarr_array(numpy_block, path_or_url, overwrite)\n new_shape = self._new_shape_according_to_block(\n z, numpy_block, row_idx, col_idx\n )\n z.resize(new_shape)\n idx_slice_to_update = self._idx_slice_for_update(\n numpy_block, row_idx, col_idx\n )\n z[idx_slice_to_update] = numpy_block\n\n col_idx += numpy_block.shape[1]\n\n if numpy_block is None:\n raise ValueError(\"Generator is empty\")\n\n row_idx += numpy_block.shape[0]\n\n return z if return_stored else None\n
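The nested variant is used analogously; a short sketch with a two-level generator of NumPy blocks (import path assumed as above):

import numpy as np
from pydvl.influence.array import NestedLazyChunkSequence

def nested_factory():
    # 2 row-blocks x 3 column-blocks, each of shape (10, 5)
    return ((np.random.rand(10, 5) for _ in range(3)) for _ in range(2))

nested = NestedLazyChunkSequence(nested_factory)
blocks = nested.compute()    # default NestedListAggregator -> list of lists
full = np.block(blocks)      # assembled shape: (20, 15)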
Bases: str, Enum
str
Enum
Enum representation for the types of influence.
Up
Approximating the influence of a point
Perturbation
Perturbation definition of the influence score
Bases: Generic[TensorType, DataLoaderType], ABC
Generic[TensorType, DataLoaderType]
Generic abstract base class for computing influence-related quantities. For a specific influence algorithm and tensor framework, inherit from this base class.
property
n_parameters\n
Number of trainable parameters of the underlying model
is_thread_safe: bool\n
Whether the influence computation is thread safe
is_fitted\n
Override this, to expose the fitting status of the instance.
fit(data: DataLoaderType) -> InfluenceFunctionModel\n
Override this method to fit the influence function model to training data, e.g. pre-compute the Hessian matrix or matrix decompositions.
data
TYPE: DataLoaderType
DataLoaderType
The fitted instance
src/pydvl/influence/base_influence_function_model.py
@abstractmethod\ndef fit(self, data: DataLoaderType) -> InfluenceFunctionModel:\n \"\"\"\n Override this method to fit the influence function model to training data,\n e.g. pre-compute hessian matrix or matrix decompositions\n\n Args:\n data:\n\n Returns:\n The fitted instance\n \"\"\"\n
influences_from_factors(\n z_test_factors: TensorType,\n x: TensorType,\n y: TensorType,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> TensorType\n
Override this method to implement the computation of

\[ \langle z_{\text{test_factors}}, \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle z_{\text{test_factors}}, \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation type influence case. The gradient is meant to be per sample of the batch \((x, y)\).
z_test_factors
pre-computed array, approximating \\(H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}}, f_{\\theta}(x_{\\text{test}}))\\)
TYPE: TensorType
TensorType
x
model input to use in the gradient computations \\(\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\), resp. \\(\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\), if None, use \\(x=x_{\\text{test}}\\)
y
label tensor to compute gradients
mode
enum value of InfluenceMode
TYPE: InfluenceMode DEFAULT: Up
InfluenceMode
Tensor representing the element-wise scalar products for the provided batch
@abstractmethod\ndef influences_from_factors(\n self,\n z_test_factors: TensorType,\n x: TensorType,\n y: TensorType,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> TensorType:\n r\"\"\"\n Override this method to implement the computation of\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The gradient is meant to be per sample\n of the batch $(x, y)$.\n\n Args:\n z_test_factors: pre-computed array, approximating\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n x: model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n Tensor representing the element-wise scalar products for the provided batch\n\n \"\"\"\n
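To make the intended two-step workflow concrete, here is a hedged sketch using the CgInfluence implementation referenced elsewhere in these docs (constructor arguments follow the examples below; exact signatures and re-exports are assumptions):

import torch
from torch.utils.data import DataLoader, TensorDataset

from pydvl.influence import InfluenceMode            # assumed re-export
from pydvl.influence.torch import CgInfluence

model = torch.nn.Linear(5, 1)
loss = torch.nn.MSELoss()
x_train, y_train = torch.rand(32, 5), torch.rand(32, 1)
x_test, y_test = torch.rand(8, 5), torch.rand(8, 1)

if_model = CgInfluence(model, loss, hessian_regularization=0.01)
if_model = if_model.fit(DataLoader(TensorDataset(x_train, y_train), batch_size=8))

# factors approximate H^{-1} grad_theta l(y_test, f_theta(x_test))
factors = if_model.influence_factors(x_test, y_test)

# reuse the factors for any training batch without recomputing them
scores = if_model.influences_from_factors(
    factors, x_train, y_train, mode=InfluenceMode.Up
)
print(scores.shape)  # one influence value per (test, train) pair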
This module provides functionality for calculating influences for large amounts of data. The computation is based on a chunk computation model in the form of an instance of InfluenceFunctionModel, which is mapped over a collection of chunks.
This type can be provided to the initialization of a DaskInfluenceCalculator instead of a distributed client object. It is useful in scenarios where the user wants to disable the thread-safety check in the initialization phase, e.g. when using the single machine synchronous scheduler for debugging purposes.
from pydvl.influence import DisableClientSingleThreadCheck\n\nda_calc = DaskInfluenceCalculator(if_model,\n    TorchNumpyConverter(),\n    DisableClientSingleThreadCheck)\nda_influences = da_calc.influences(da_x_test, da_y_test, da_x, da_y)\nda_influences.compute(scheduler='synchronous')\n
DaskInfluenceCalculator(\n influence_function_model: InfluenceFunctionModel,\n converter: NumpyConverter,\n client: Union[Client, Type[DisableClientSingleThreadCheck]],\n)\n
This class is designed to compute influences over dask.array.Array collections, leveraging the capabilities of Dask for distributed computing and parallel processing. It requires an influence computation model of type InfluenceFunctionModel, which defines how influences are computed on a chunk of data. Essentially, this class functions by mapping the influence function model across the various chunks of a dask.array.Array collection.
influence_function_model
instance of type InfluenceFunctionModel, that specifies the computation logic for influence on data chunks. It's a pivotal part of the calculator, determining how influence is computed and applied across the data array.
TYPE: InfluenceFunctionModel
A utility for converting numpy arrays to TensorType objects, facilitating the interaction between numpy arrays and the influence function model.
client
This parameter accepts either of two types:
A distributed Client object
The special type DisableClientSingleThreadCheck, which serves as a flag to bypass certain checks.
During initialization, the system verifies if all workers are operating in single-threaded mode when the provided influence_function_model is designated as not thread-safe (indicated by the is_thread_safe property). If this condition is not met, the initialization will raise a specific error, signaling a potential thread-safety conflict.
is_thread_safe
To intentionally skip this safety check (e.g., for debugging purposes using the single machine synchronous scheduler), you can supply the DisableClientSingleThreadCheck type.
TYPE: Union[Client, Type[DisableClientSingleThreadCheck]]
Union[Client, Type[DisableClientSingleThreadCheck]]
Make sure to set threads_per_worker=1 when using the distributed scheduler for computing, if your implementation of InfluenceFunctionModel is not thread-safe.
threads_per_worker=1
client = Client(threads_per_worker=1)\n
import torch\nfrom torch.utils.data import Dataset, DataLoader\nfrom pydvl.influence import DaskInfluenceCalculator\nfrom pydvl.influence.torch import CgInfluence\nfrom pydvl.influence.torch.util import (\n    torch_dataset_to_dask_array,\n    TorchNumpyConverter,\n)\nfrom distributed import Client\n\n# Possibly a large dataset that does not fit into memory\ntrain_data_set: Dataset = LargeDataSet(...)\ntest_data_set: Dataset = LargeDataSet(...)\n\ntrain_dataloader = DataLoader(train_data_set)\ninfl_model = CgInfluence(model, loss, hessian_regularization=0.01)\ninfl_model = infl_model.fit(train_dataloader)\n\n# wrap your input data into dask arrays\nchunk_size = 10\nda_x, da_y = torch_dataset_to_dask_array(train_data_set, chunk_size=chunk_size)\nda_x_test, da_y_test = torch_dataset_to_dask_array(test_data_set,\n    chunk_size=chunk_size)\n\n# use only one thread for scheduling, due to non-thread safety of some torch\n# operations\nclient = Client(n_workers=4, threads_per_worker=1)\n\ninfl_calc = DaskInfluenceCalculator(infl_model,\n    TorchNumpyConverter(device=torch.device(\"cpu\")),\n    client)\nda_influences = infl_calc.influences(da_x_test, da_y_test, da_x, da_y)\n# da_influences is a dask.array.Array\n\n# trigger computation and write chunks to disk in parallel\nda_influences.to_zarr(\"path/or/url\")\n
src/pydvl/influence/influence_calculator.py
def __init__(\n self,\n influence_function_model: InfluenceFunctionModel,\n converter: NumpyConverter,\n client: Union[Client, Type[DisableClientSingleThreadCheck]],\n):\n self._n_parameters = influence_function_model.n_parameters\n self.influence_function_model = influence_function_model\n self.numpy_converter = converter\n\n if isinstance(client, type(DisableClientSingleThreadCheck)):\n logger.warning(DisableClientSingleThreadCheck.warning_msg())\n self.influence_function_model = delayed(influence_function_model)\n elif isinstance(client, Client):\n self._validate_client(client, influence_function_model)\n self.influence_function_model = client.scatter(\n influence_function_model, broadcast=True\n )\n else:\n raise ValueError(\n \"The 'client' parameter \"\n \"must either be a distributed.Client object or the\"\n \"type 'DisableClientSingleThreadCheck'.\"\n )\n
Number of trainable parameters of the underlying model used in the batch computation
influence_factors(x: Array, y: Array) -> Array\n
Computes the expression

\[ H^{-1}\nabla_{\theta} \ell(y, f_{\theta}(x)) \]

where the gradients are computed for the chunks of \((x, y)\).
model input to use in the gradient computations
TYPE: Array
Array
dask.array.Array representing the element-wise inverse Hessian matrix vector products for the provided batch.
def influence_factors(self, x: da.Array, y: da.Array) -> da.Array:\n r\"\"\"\n Computes the expression\n\n \\[ H^{-1}\\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\]\n\n where the gradients are computed for the chunks of $(x, y)$.\n\n Args:\n x: model input to use in the gradient computations\n y: label tensor to compute gradients\n\n Returns:\n [dask.array.Array][dask.array.Array] representing the element-wise inverse\n Hessian matrix vector products for the provided batch.\n\n \"\"\"\n\n self._validate_aligned_chunking(x, y)\n self._validate_dimensions_not_chunked(x)\n self._validate_dimensions_not_chunked(y)\n\n def func(x_numpy: NDArray, y_numpy: NDArray, model: InfluenceFunctionModel):\n factors = model.influence_factors(\n self.numpy_converter.from_numpy(x_numpy),\n self.numpy_converter.from_numpy(y_numpy),\n )\n return self.numpy_converter.to_numpy(factors)\n\n chunks = []\n for x_chunk, y_chunk, chunk_size in zip(\n x.to_delayed(), y.to_delayed(), x.chunks[0]\n ):\n chunk_shape = (chunk_size, self.n_parameters)\n chunk_array = da.from_delayed(\n delayed(func)(\n x_chunk.squeeze()[()],\n y_chunk.squeeze()[()],\n self.influence_function_model,\n ),\n dtype=x.dtype,\n shape=chunk_shape,\n )\n chunks.append(chunk_array)\n\n return da.concatenate(chunks)\n
influences(\n x_test: Array,\n y_test: Array,\n x: Optional[Array] = None,\n y: Optional[Array] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Array\n
Compute approximation of

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation type influence case. The computation is done block-wise for the chunks of the provided dask arrays.
x_test
model input to use in the gradient computations of \\(H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}}, f_{\\theta}(x_{\\text{test}}))\\)
y_test
optional model input to use in the gradient computations \\(\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\), resp. \\(\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\), if None, use \\(x=x_{\\text{test}}\\)
TYPE: Optional[Array] DEFAULT: None
optional label tensor to compute gradients
dask.array.Array representing the element-wise scalar products for the provided batch.
def influences(\n self,\n x_test: da.Array,\n y_test: da.Array,\n x: Optional[da.Array] = None,\n y: Optional[da.Array] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> da.Array:\n r\"\"\"\n Compute approximation of\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})), \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The computation is done block-wise\n for the chunks of the provided dask arrays.\n\n Args:\n x_test: model input to use in the gradient computations of\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n y_test: label tensor to compute gradients\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n [dask.array.Array][dask.array.Array] representing the element-wise scalar\n products for the provided batch.\n\n \"\"\"\n\n self._validate_aligned_chunking(x_test, y_test)\n self._validate_dimensions_not_chunked(x_test)\n self._validate_dimensions_not_chunked(y_test)\n\n if (x is None) != (y is None):\n if x is None:\n raise ValueError(\n \"Providing labels y without providing model input x \"\n \"is not supported\"\n )\n if y is None:\n raise ValueError(\n \"Providing model input x without labels y is not supported\"\n )\n elif x is not None:\n self._validate_aligned_chunking(x, y)\n self._validate_dimensions_not_chunked(x)\n self._validate_dimensions_not_chunked(y)\n else:\n x, y = x_test, y_test\n\n def func(\n x_test_numpy: NDArray,\n y_test_numpy: NDArray,\n x_numpy: NDArray,\n y_numpy: NDArray,\n model: InfluenceFunctionModel,\n ):\n values = model.influences(\n self.numpy_converter.from_numpy(x_test_numpy),\n self.numpy_converter.from_numpy(y_test_numpy),\n self.numpy_converter.from_numpy(x_numpy),\n self.numpy_converter.from_numpy(y_numpy),\n mode,\n )\n return self.numpy_converter.to_numpy(values)\n\n un_chunked_x_shapes = [s[0] for s in x_test.chunks[1:]]\n x_test_chunk_sizes = x_test.chunks[0]\n x_chunk_sizes = x.chunks[0]\n blocks = []\n block_shape: Tuple[int, ...]\n\n for x_test_chunk, y_test_chunk, test_chunk_size in zip(\n x_test.to_delayed(), y_test.to_delayed(), x_test_chunk_sizes\n ):\n row = []\n for x_chunk, y_chunk, chunk_size in zip(\n x.to_delayed(), y.to_delayed(), x_chunk_sizes # type:ignore\n ):\n if mode == InfluenceMode.Up:\n block_shape = (test_chunk_size, chunk_size)\n elif mode == InfluenceMode.Perturbation:\n block_shape = (test_chunk_size, chunk_size, *un_chunked_x_shapes)\n else:\n raise UnsupportedInfluenceModeException(mode)\n\n block_array = da.from_delayed(\n delayed(func)(\n x_test_chunk.squeeze()[()],\n y_test_chunk.squeeze()[()],\n x_chunk.squeeze()[()],\n y_chunk.squeeze()[()],\n self.influence_function_model,\n ),\n shape=block_shape,\n dtype=x_test.dtype,\n )\n\n if mode == InfluenceMode.Perturbation:\n n_dims = block_array.ndim\n new_order = tuple(range(2, n_dims)) + (0, 1)\n block_array = block_array.transpose(new_order)\n\n row.append(block_array)\n 
blocks.append(row)\n\n values_array = da.block(blocks)\n\n if mode == InfluenceMode.Perturbation:\n n_dims = values_array.ndim\n new_order = (n_dims - 2, n_dims - 1) + tuple(range(n_dims - 2))\n values_array = values_array.transpose(new_order)\n\n return values_array\n
influences_from_factors(\n z_test_factors: Array,\n x: Array,\n y: Array,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Array\n
Computation of

\[ \langle z_{\text{test_factors}}, \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle z_{\text{test_factors}}, \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation type influence case. The gradient is meant to be per sample of the batch \((x, y)\).
dask.array.Array representing the element-wise scalar product of the provided batch
def influences_from_factors(\n self,\n z_test_factors: da.Array,\n x: da.Array,\n y: da.Array,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> da.Array:\n r\"\"\"\n Computation of\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The gradient is meant\n to be per sample of the batch $(x, y)$.\n\n Args:\n z_test_factors: pre-computed array, approximating\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n [dask.array.Array][dask.array.Array] representing the element-wise scalar\n product of the provided batch\n\n \"\"\"\n self._validate_aligned_chunking(x, y)\n self._validate_dimensions_not_chunked(x)\n self._validate_dimensions_not_chunked(y)\n self._validate_dimensions_not_chunked(z_test_factors)\n\n def func(\n z_test_numpy: NDArray,\n x_numpy: NDArray,\n y_numpy: NDArray,\n model: InfluenceFunctionModel,\n ):\n ups = model.influences_from_factors(\n self.numpy_converter.from_numpy(z_test_numpy),\n self.numpy_converter.from_numpy(x_numpy),\n self.numpy_converter.from_numpy(y_numpy),\n mode=mode,\n )\n return self.numpy_converter.to_numpy(ups)\n\n un_chunked_x_shape = [s[0] for s in x.chunks[1:]]\n x_chunk_sizes = x.chunks[0]\n z_test_chunk_sizes = z_test_factors.chunks[0]\n blocks = []\n block_shape: Tuple[int, ...]\n\n for z_test_chunk, z_test_chunk_size in zip(\n z_test_factors.to_delayed(), z_test_chunk_sizes\n ):\n row = []\n for x_chunk, y_chunk, chunk_size in zip(\n x.to_delayed(), y.to_delayed(), x_chunk_sizes\n ):\n if mode == InfluenceMode.Perturbation:\n block_shape = (z_test_chunk_size, chunk_size, *un_chunked_x_shape)\n elif mode == InfluenceMode.Up:\n block_shape = (z_test_chunk_size, chunk_size)\n else:\n raise UnsupportedInfluenceModeException(mode)\n\n block_array = da.from_delayed(\n delayed(func)(\n z_test_chunk.squeeze()[()],\n x_chunk.squeeze()[()],\n y_chunk.squeeze()[()],\n self.influence_function_model,\n ),\n shape=block_shape,\n dtype=z_test_factors.dtype,\n )\n\n if mode == InfluenceMode.Perturbation:\n n_dims = block_array.ndim\n new_order = tuple(range(2, n_dims)) + (0, 1)\n block_array = block_array.transpose(*new_order)\n\n row.append(block_array)\n blocks.append(row)\n\n values_array = da.block(blocks)\n\n if mode == InfluenceMode.Perturbation:\n n_dims = values_array.ndim\n new_order = (n_dims - 2, n_dims - 1) + tuple(range(n_dims - 2))\n values_array = values_array.transpose(*new_order)\n\n return values_array\n
SequentialInfluenceCalculator(influence_function_model: InfluenceFunctionModel)\n
This class serves as a simple wrapper for processing batches of data in a sequential manner. It is particularly useful in scenarios where parallel or distributed processing is not required or not feasible. The core functionality of this class is to apply a specified influence computation model, of type InfluenceFunctionModel, to batches of data one at a time.
An instance of type [InfluenceFunctionModel] [pydvl.influence.base_influence_function_model.InfluenceFunctionModel], that specifies the computation logic for influence on data chunks.
from pydvl.influence import SequentialInfluenceCalculator\nfrom pydvl.influence.torch.util import (\n    NestedTorchCatAggregator,\n    TorchNumpyConverter,\n)\nfrom pydvl.influence.torch import CgInfluence\n\nbatch_size = 10\ntrain_dataloader = DataLoader(..., batch_size=batch_size)\ntest_dataloader = DataLoader(..., batch_size=batch_size)\n\ninfl_model = CgInfluence(model, loss, hessian_regularization=0.01)\ninfl_model = infl_model.fit(train_dataloader)\n\ninfl_calc = SequentialInfluenceCalculator(infl_model)\n\n# this does not trigger the computation\nlazy_influences = infl_calc.influences(test_dataloader, train_dataloader)\n\n# trigger computation and pull the result into main memory, result is the full\n# tensor for all combinations of the two loaders\ninfluences = lazy_influences.compute(aggregator=NestedTorchCatAggregator())\n# or\n# trigger computation and write results chunk-wise to disk using zarr in a\n# sequential manner\nlazy_influences.to_zarr(\"local_path/or/url\", TorchNumpyConverter())\n
def __init__(\n self,\n influence_function_model: InfluenceFunctionModel,\n):\n self.influence_function_model = influence_function_model\n
influence_factors(\n data_iterable: Iterable[Tuple[TensorType, TensorType]]\n) -> LazyChunkSequence\n
Compute the expression

\[ H^{-1}\nabla_{\theta} \ell(y, f_{\theta}(x)) \]

where the gradients are computed for the chunks \((x, y)\) of the data_iterable in a sequential manner.
data_iterable
An iterable that returns tuples of tensors. Each tuple consists of a pair of tensors (x, y), representing input data and corresponding targets.
TYPE: Iterable[Tuple[TensorType, TensorType]]
Iterable[Tuple[TensorType, TensorType]]
LazyChunkSequence
A lazy data structure representing the chunks of the resulting tensor
def influence_factors(\n self,\n data_iterable: Iterable[Tuple[TensorType, TensorType]],\n) -> LazyChunkSequence:\n r\"\"\"\n Compute the expression\n\n \\[ H^{-1}\\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\]\n\n where the gradient are computed for the chunks $(x, y)$ of the data_iterable in\n a sequential manner.\n\n Args:\n data_iterable: An iterable that returns tuples of tensors.\n Each tuple consists of a pair of tensors (x, y), representing input data\n and corresponding targets.\n\n Returns:\n A lazy data structure representing the chunks of the resulting tensor\n \"\"\"\n tensors_gen_factory = partial(self._influence_factors_gen, data_iterable)\n return LazyChunkSequence(tensors_gen_factory)\n
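For illustration, here is a minimal usage sketch (not taken from the library documentation); it assumes the fitted infl_model, the train_dataloader and the imports from the example further above, and that LazyChunkSequence exposes the same to_zarr interface shown there for the nested case.
# Sketch: lazily compute the factors and write them chunk-wise to disk.\ninfl_calc = SequentialInfluenceCalculator(infl_model)\n\n# lazy, no computation is triggered yet\nlazy_factors = infl_calc.influence_factors(train_dataloader)\n\n# trigger computation and write the chunks to disk using zarr\nlazy_factors.to_zarr(\"factors_path/or/url\", TorchNumpyConverter())\n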
influences(\n test_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n train_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n mode: InfluenceMode = InfluenceMode.Up,\n) -> NestedLazyChunkSequence\n
for the perturbation type influence case. The computation is done block-wise for the chunks of the provided data iterables and aggregated into a single tensor in memory.
test_data_iterable
train_data_iterable
NestedLazyChunkSequence
def influences(\n self,\n test_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n train_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n mode: InfluenceMode = InfluenceMode.Up,\n) -> NestedLazyChunkSequence:\n r\"\"\"\n Compute approximation of\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})), \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The computation is done block-wise for\n the chunks of the provided\n data iterables and aggregated into a single tensor in memory.\n\n Args:\n test_data_iterable: An iterable that returns tuples of tensors.\n Each tuple consists of a pair of tensors (x, y), representing input data\n and corresponding targets.\n train_data_iterable: An iterable that returns tuples of tensors.\n Each tuple consists of a pair of tensors (x, y), representing input data\n and corresponding targets.\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A lazy data structure representing the chunks of the resulting tensor\n\n \"\"\"\n nested_tensor_gen_factory = partial(\n self._influences_gen,\n test_data_iterable,\n train_data_iterable,\n mode,\n )\n\n return NestedLazyChunkSequence(nested_tensor_gen_factory)\n
influences_from_factors(\n z_test_factors: Iterable[TensorType],\n train_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n mode: InfluenceMode = InfluenceMode.Up,\n) -> NestedLazyChunkSequence\n
Pre-computed iterable of tensors, approximating \\(H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}}, f_{\\theta}(x_{\\text{test}}))\\)
TYPE: Iterable[TensorType]
Iterable[TensorType]
def influences_from_factors(\n self,\n z_test_factors: Iterable[TensorType],\n train_data_iterable: Iterable[Tuple[TensorType, TensorType]],\n mode: InfluenceMode = InfluenceMode.Up,\n) -> NestedLazyChunkSequence:\n r\"\"\"\n Computation of\n\n \\[ \\langle z_{\\text{test_factors}}, \\nabla_{\\theta} \\ell(y, f_{\\theta}(x))\n \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle z_{\\text{test_factors}}, \\nabla_{x} \\nabla_{\\theta}\n \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The gradient is meant to be per sample\n of the batch $(x, y)$.\n\n Args:\n z_test_factors: Pre-computed iterable of tensors, approximating\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n train_data_iterable: An iterable that returns tuples of tensors.\n Each tuple consists of a pair of tensors (x, y), representing input data\n and corresponding targets.\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A lazy data structure representing the chunks of the resulting tensor\n\n \"\"\"\n nested_tensor_gen = partial(\n self._influences_from_factors_gen,\n z_test_factors,\n train_data_iterable,\n mode,\n )\n return NestedLazyChunkSequence(nested_tensor_gen)\n
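As a rough sketch of how the pieces fit together (assumptions: infl_model, infl_calc and the dataloaders from the example further above; InfluenceMode imported from pydvl.influence.base_influence_function_model), the test factors can be computed once and reused against the training data:
from pydvl.influence.base_influence_function_model import InfluenceMode\n\n# Sketch: pre-compute the test factors with the wrapped influence model ...\nz_test_factors = [\n    infl_model.influence_factors(x_test, y_test)\n    for x_test, y_test in test_dataloader\n]\n\n# ... and reuse them for the up-weighting influences against the training data\nlazy_influences = infl_calc.influences_from_factors(\n    z_test_factors, train_dataloader, mode=InfluenceMode.Up\n)\ninfluences = lazy_influences.compute(aggregator=NestedTorchCatAggregator())\n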
This module provides methods for efficiently computing tensors related to first and second order derivatives of torch models, using functionality from torch.func. To indicate higher-order functions, i.e. functions which return functions, we use the naming convention create_**_function.
create_**_function
In particular, the module contains functionality for Hessian-vector products, full and low-rank approximations of the Hessian, and per-sample gradients, matrix-Jacobian products and mixed derivatives.
dataclass
LowRankProductRepresentation(eigen_vals: Tensor, projections: Tensor)\n
Representation of a low rank product of the form \\(H = V D V^T\\), where D is a diagonal matrix and V is orthogonal.
eigen_vals
Diagonal of D.
TYPE: Tensor
Tensor
projections
The matrix V.
to(device: device)\n
Move the representing tensors to a device
src/pydvl/influence/torch/functional.py
def to(self, device: torch.device):\n \"\"\"\n Move the representing tensors to a device\n \"\"\"\n return LowRankProductRepresentation(\n self.eigen_vals.to(device), self.projections.to(device)\n )\n
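A small illustrative sketch (hypothetical shapes, not from the library documentation) of constructing a representation and moving it to the GPU when one is available:
import torch\n\n# hypothetical rank-5 representation of a 100x100 matrix\nlow_rank = LowRankProductRepresentation(\n    eigen_vals=torch.rand(5), projections=torch.rand(100, 5)\n)\nif torch.cuda.is_available():\n    low_rank = low_rank.to(torch.device(\"cuda\"))\n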
hvp(\n func: Callable[[Dict[str, Tensor]], Tensor],\n params: Dict[str, Tensor],\n vec: Dict[str, Tensor],\n reverse_only: bool = True,\n) -> Dict[str, Tensor]\n
Computes the Hessian-vector product (HVP) for a given function at the given parameters, i.e.
This function can operate in two modes, either reverse-mode autodiff only or both forward- and reverse-mode autodiff.
func
The scalar-valued function for which the HVP is computed.
TYPE: Callable[[Dict[str, Tensor]], Tensor]
Callable[[Dict[str, Tensor]], Tensor]
params
The parameters at which the HVP is computed.
TYPE: Dict[str, Tensor]
Dict[str, Tensor]
vec
The vector with which the Hessian is multiplied.
reverse_only
Whether to use only reverse-mode autodiff (True, default) or both forward- and reverse-mode autodiff (False).
TYPE: bool DEFAULT: True
True
The HVP of the function at the given parameters with the given vector.
>>> def f(z): return torch.sum(z**2)\n>>> u = torch.ones(10, requires_grad=True)\n>>> v = torch.ones(10)\n>>> hvp_vec = hvp(f, u, v)\n>>> assert torch.allclose(hvp_vec, torch.full((10, ), 2.0))\n
def hvp(\n func: Callable[[Dict[str, torch.Tensor]], torch.Tensor],\n params: Dict[str, torch.Tensor],\n vec: Dict[str, torch.Tensor],\n reverse_only: bool = True,\n) -> Dict[str, torch.Tensor]:\n r\"\"\"\n Computes the Hessian-vector product (HVP) for a given function at the given\n parameters, i.e.\n\n \\[\\nabla_{\\theta} \\nabla_{\\theta} f (\\theta)\\cdot v\\]\n\n This function can operate in two modes, either reverse-mode autodiff only or both\n forward- and reverse-mode autodiff.\n\n Args:\n func: The scalar-valued function for which the HVP is computed.\n params: The parameters at which the HVP is computed.\n vec: The vector with which the Hessian is multiplied.\n reverse_only: Whether to use only reverse-mode autodiff\n (True, default) or both forward- and reverse-mode autodiff (False).\n\n Returns:\n The HVP of the function at the given parameters with the given vector.\n\n ??? Example\n\n ```pycon\n >>> def f(z): return torch.sum(z**2)\n >>> u = torch.ones(10, requires_grad=True)\n >>> v = torch.ones(10)\n >>> hvp_vec = hvp(f, u, v)\n >>> assert torch.allclose(hvp_vec, torch.full((10, ), 2.0))\n ```\n \"\"\"\n\n output: Dict[str, torch.Tensor]\n\n if reverse_only:\n _, vjp_fn = vjp(grad(func), params)\n output = vjp_fn(vec)[0]\n else:\n output = jvp(grad(func), (params,), (vec,))[1]\n\n return output\n
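Since the signature above works on dictionaries of named tensors, the following sketch (a toy linear model, chosen for illustration) shows the dict-based usage via torch.func.functional_call:
import torch\nfrom torch.func import functional_call\n\n# Sketch: HVP through the Dict[str, Tensor] interface of `hvp`.\nmodel = torch.nn.Linear(3, 1, bias=False)\nparams = {k: p.detach() for k, p in model.named_parameters()}\n\ndef f(p):\n    # scalar-valued function of the parameters\n    x = torch.ones(4, 3)\n    return functional_call(model, p, (x,)).sum()\n\nvec = {k: torch.ones_like(p) for k, p in params.items()}\nhvp_dict = hvp(f, params, vec)  # dict with the same keys as `params`\n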
create_batch_hvp_function(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n reverse_only: bool = True,\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor, Tensor], Tensor]\n
Creates a function to compute Hessian-vector product (HVP) for a given model and loss function, where the Hessian information is computed for a provided batch.
This function takes a PyTorch model, a loss function, and an optional boolean parameter. It returns a callable that computes the Hessian-vector product for batches of input data and a given vector. The computation can be performed in reverse mode only, based on the reverse_only parameter.
model
The PyTorch model for which the Hessian-vector product is to be computed.
TYPE: Module
Module
loss
The loss function. It should take two torch.Tensor objects as input and return a torch.Tensor.
TYPE: Callable[[Tensor, Tensor], Tensor]
Callable[[Tensor, Tensor], Tensor]
If True, the Hessian-vector product is computed in reverse mode only.
Callable[[Dict[str, Tensor], Tensor, Tensor, Tensor], Tensor]
A function that takes a dictionary of model parameters and three torch.Tensor objects - input data (x), target data (y), and a vector (vec) - and returns the Hessian-vector product of the loss evaluated on x, y times vec.
torch.Tensor
# Assume `model` is a PyTorch model and `loss_fn` is a loss function.\nb_hvp_function = create_batch_hvp_function(model, loss_fn)\n\n# `params` is a dict of the model's trainable parameters,\n# `x_batch`, `y_batch` are batches of input and target data,\n# and `vec` is a flattened vector of length equal to the number of parameters.\nparams = {k: p.detach() for k, p in model.named_parameters() if p.requires_grad}\nhvp_result = b_hvp_function(params, x_batch, y_batch, vec)\n
def create_batch_hvp_function(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n reverse_only: bool = True,\n) -> Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor, torch.Tensor], torch.Tensor\n]:\n r\"\"\"\n Creates a function to compute Hessian-vector product (HVP) for a given model and\n loss function, where the Hessian information is computed for a provided batch.\n\n This function takes a PyTorch model, a loss function,\n and an optional boolean parameter. It returns a callable\n that computes the Hessian-vector product for batches of input data\n and a given vector. The computation can be performed in reverse mode only,\n based on the `reverse_only` parameter.\n\n Args:\n model: The PyTorch model for which the Hessian-vector product is to be computed.\n loss: The loss function. It should take two\n torch.Tensor objects as input and return a torch.Tensor.\n reverse_only (bool, optional): If True, the Hessian-vector product is computed\n in reverse mode only.\n\n Returns:\n A function that takes three `torch.Tensor` objects - input data (`x`),\n target data (`y`), and a vector (`vec`),\n and returns the Hessian-vector product of the loss\n evaluated on `x`, `y` times `vec`.\n\n ??? Example\n ```python\n # Assume `model` is a PyTorch model and `loss_fn` is a loss function.\n b_hvp_function = batch_hvp(model, loss_fn)\n\n # `x_batch`, `y_batch` are batches of input and target data,\n # and `vec` is a vector.\n hvp_result = b_hvp_function(x_batch, y_batch, vec)\n ```\n \"\"\"\n\n def b_hvp(\n params: Dict[str, torch.Tensor],\n x: torch.Tensor,\n y: torch.Tensor,\n vec: torch.Tensor,\n ):\n return flatten_dimensions(\n hvp(\n lambda p: create_batch_loss_function(model, loss)(p, x, y),\n params,\n align_structure(params, vec),\n reverse_only=reverse_only,\n ).values()\n )\n\n return b_hvp\n
create_empirical_loss_function(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n data_loader: DataLoader,\n) -> Callable[[Dict[str, Tensor]], Tensor]\n
Creates a function to compute the empirical loss of a given model on a given dataset. If we denote the model parameters with \\( \\theta \\), the resulting function approximates:
for a loss function \\(\\operatorname{loss}\\) and a model \\(\\operatorname{model}\\) with model parameters \\(\\theta\\), where \\(N\\) is the number of all elements provided by the data_loader.
The model for which the loss should be computed.
The loss function to be used.
data_loader
The data loader for iterating over the dataset.
TYPE: DataLoader
DataLoader
A function that computes the empirical loss of the model on the dataset for given model parameters.
def create_empirical_loss_function(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n data_loader: DataLoader,\n) -> Callable[[Dict[str, torch.Tensor]], torch.Tensor]:\n r\"\"\"\n Creates a function to compute the empirical loss of a given model\n on a given dataset. If we denote the model parameters with \\( \\theta \\),\n the resulting function approximates:\n\n \\[\n f(\\theta) = \\frac{1}{N}\\sum_{i=1}^N\n \\operatorname{loss}(y_i, \\operatorname{model}(\\theta, x_i))\n \\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$\n with model parameters $\\theta$, where $N$ is the number of all elements provided\n by the data_loader.\n\n Args:\n model: The model for which the loss should be computed.\n loss: The loss function to be used.\n data_loader: The data loader for iterating over the dataset.\n\n Returns:\n A function that computes the empirical loss of the model on the dataset for\n given model parameters.\n\n \"\"\"\n\n def empirical_loss(params: Dict[str, torch.Tensor]):\n total_loss = to_model_device(torch.zeros((), requires_grad=True), model)\n total_samples = to_model_device(torch.zeros(()), model)\n\n for x, y in iter(data_loader):\n output = functional_call(\n model,\n params,\n (to_model_device(x, model),),\n )\n loss_value = loss(output, to_model_device(y, model))\n total_loss = total_loss + loss_value * x.size(0)\n total_samples += x.size(0)\n\n return total_loss / total_samples\n\n return empirical_loss\n
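A minimal sketch (synthetic data and names chosen for illustration) of building the empirical loss and differentiating it with torch.func.grad:
import torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\n# Sketch: average loss over a small synthetic dataset as a function of the parameters.\nmodel = torch.nn.Linear(2, 1)\nloss = torch.nn.functional.mse_loss\ndata = TensorDataset(torch.randn(32, 2), torch.randn(32, 1))\nloader = DataLoader(data, batch_size=8)\n\nempirical_loss = create_empirical_loss_function(model, loss, loader)\nparams = {k: p.detach() for k, p in model.named_parameters() if p.requires_grad}\ngrads = torch.func.grad(empirical_loss)(params)  # dict of per-parameter gradients\n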
create_batch_loss_function(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor]\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor], Tensor]\n
Creates a function to compute the loss of a given model on a given batch of data, i.e. the function
for a loss function \(\operatorname{loss}\) and a model \(\operatorname{model}\) with model parameters \(\theta\), where \(N\) is the number of elements in the batch.
model
The model for which the loss should be computed.
loss
The loss function to be used, which should be able to handle a batch dimension.
Callable[[Dict[str, Tensor], Tensor, Tensor], Tensor]
A function that computes the loss of the model on a batch for given model parameters. The model parameter input to the function must be a dict conforming to model.named_parameters(), i.e. the keys must be a subset of the parameters and the corresponding tensor shapes must align. For the data input, the first dimension has to be the batch dimension.
def create_batch_loss_function(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n) -> Callable[[Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], torch.Tensor]:\n r\"\"\"\n Creates a function to compute the loss of a given model on a given batch of data,\n i.e. the function\n\n \\[f(\\theta, x, y) = \\frac{1}{N} \\sum_{i=1}^N\n \\operatorname{loss}(\\operatorname{model}(\\theta, x_i), y_i)\\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$\n with model parameters $\\theta$, where $N$ is the number of elements in the batch.\n Args:\n model: The model for which the loss should be computed.\n loss: The loss function to be used, which should be able to handle\n a batch dimension\n\n Returns:\n A function that computes the loss of the model on a batch for given\n model parameters. The model parameter input to the function must take\n the form of a dict conform to model.named_parameters(), i.e. the keys\n must be a subset of the parameters and the corresponding tensor shapes\n must align. For the data input, the first dimension has to be the batch\n dimension.\n \"\"\"\n\n def batch_loss(params: Dict[str, torch.Tensor], x: torch.Tensor, y: torch.Tensor):\n outputs = functional_call(model, params, (to_model_device(x, model),))\n return loss(outputs, y)\n\n return batch_loss\n
create_hvp_function(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n data_loader: DataLoader,\n precompute_grad: bool = True,\n use_average: bool = True,\n reverse_only: bool = True,\n track_gradients: bool = False,\n) -> Callable[[Tensor], Tensor]\n
Returns a function that calculates the approximate Hessian-vector product for a given vector. If you want to compute the exact Hessian, i.e. pull all data into memory and perform a full gradient computation, use the function hvp.
A PyTorch module representing the model whose loss function's Hessian is to be computed.
A callable that takes the model's output and target as input and returns the scalar loss.
A DataLoader instance that provides batches of data for calculating the Hessian-vector product. Each batch from the DataLoader is assumed to return a tuple where the first element is the model's input and the second element is the target output.
precompute_grad
If True, the full data gradient is precomputed and kept in memory, which can speed up the Hessian-vector product computation. Set this to False if you cannot afford to keep the full computation graph in memory.
use_average
If True, the returned function uses batch-wise computation via a batch loss function and averages the results. If False, the function uses backpropagation on the full empirical loss function, which is more accurate than averaging the batch Hessians, but typically has much higher memory usage.
Whether to use only reverse-mode autodiff or both forward- and reverse-mode autodiff. Ignored if precompute_grad is True.
track_gradients
Whether to track gradients for the resulting tensor of the Hessian-vector products.
Callable[[Tensor], Tensor]
A function that takes a single argument, a vector, and returns the product of the Hessian of the loss function with respect to the model's parameters and the input vector.
def create_hvp_function(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n data_loader: DataLoader,\n precompute_grad: bool = True,\n use_average: bool = True,\n reverse_only: bool = True,\n track_gradients: bool = False,\n) -> Callable[[torch.Tensor], torch.Tensor]:\n \"\"\"\n Returns a function that calculates the approximate Hessian-vector product\n for a given vector. If you want to compute the exact hessian,\n i.e., pulling all data into memory and compute a full gradient computation, use\n the function [hvp][pydvl.influence.torch.functional.hvp].\n\n Args:\n model: A PyTorch module representing the model whose loss function's\n Hessian is to be computed.\n loss: A callable that takes the model's output and target as input and\n returns the scalar loss.\n data_loader: A DataLoader instance that provides batches of data for\n calculating the Hessian-vector product. Each batch from the\n DataLoader is assumed to return a tuple where the first element is\n the model's input and the second element is the target output.\n precompute_grad: If `True`, the full data gradient is precomputed and\n kept in memory, which can speed up the hessian vector product\n computation. Set this to `False`, if you can't afford to keep the\n full computation graph in memory.\n use_average: If `True`, the returned function uses batch-wise\n computation via\n [a batch loss function][pydvl.influence.torch.functional.create_batch_loss_function]\n and averages the results.\n If `False`, the function uses backpropagation on the full\n [empirical loss function]\n [pydvl.influence.torch.functional.create_empirical_loss_function],\n which is more accurate than averaging the batch hessians, but\n probably has a way higher memory usage.\n reverse_only: Whether to use only reverse-mode autodiff or\n both forward- and reverse-mode autodiff. 
Ignored if\n `precompute_grad` is `True`.\n track_gradients: Whether to track gradients for the resulting tensor of\n the Hessian-vector products.\n\n Returns:\n A function that takes a single argument, a vector, and returns the\n product of the Hessian of the `loss` function with respect to the\n `model`'s parameters and the input vector.\n \"\"\"\n\n if precompute_grad:\n model_params = {k: p for k, p in model.named_parameters() if p.requires_grad}\n\n if use_average:\n model_dtype = next(p.dtype for p in model.parameters() if p.requires_grad)\n total_grad_xy = torch.empty(0, dtype=model_dtype)\n total_points = 0\n grad_func = torch.func.grad(create_batch_loss_function(model, loss))\n for x, y in iter(data_loader):\n grad_xy = grad_func(\n model_params, to_model_device(x, model), to_model_device(y, model)\n )\n grad_xy = flatten_dimensions(grad_xy.values())\n if total_grad_xy.nelement() == 0:\n total_grad_xy = torch.zeros_like(grad_xy)\n total_grad_xy += grad_xy * len(x)\n total_points += len(x)\n total_grad_xy /= total_points\n else:\n total_grad_xy = torch.func.grad(\n create_empirical_loss_function(model, loss, data_loader)\n )(model_params)\n total_grad_xy = flatten_dimensions(total_grad_xy.values())\n\n def precomputed_grads_hvp_function(\n precomputed_grads: torch.Tensor, vec: torch.Tensor\n ) -> torch.Tensor:\n vec = to_model_device(vec, model)\n if vec.ndim == 1:\n vec = vec.unsqueeze(0)\n\n z = (precomputed_grads * torch.autograd.Variable(vec)).sum(dim=1)\n\n mvp = []\n for i in range(len(z)):\n mvp.append(\n flatten_dimensions(\n torch.autograd.grad(\n z[i], list(model_params.values()), retain_graph=True\n )\n )\n )\n result = torch.stack([arr.contiguous().view(-1) for arr in mvp])\n\n if not track_gradients:\n result = result.detach()\n\n return result\n\n return partial(precomputed_grads_hvp_function, total_grad_xy)\n\n def hvp_function(vec: torch.Tensor) -> torch.Tensor:\n params = {\n k: p if track_gradients else p.detach()\n for k, p in model.named_parameters()\n if p.requires_grad\n }\n v = align_structure(params, vec)\n empirical_loss = create_empirical_loss_function(model, loss, data_loader)\n return flatten_dimensions(\n hvp(empirical_loss, params, v, reverse_only=reverse_only).values()\n )\n\n def avg_hvp_function(vec: torch.Tensor) -> torch.Tensor:\n n_batches = len(data_loader)\n avg_hessian = to_model_device(torch.zeros_like(vec), model)\n b_hvp = create_batch_hvp_function(model, loss, reverse_only)\n params = {\n k: p if track_gradients else p.detach()\n for k, p in model.named_parameters()\n if p.requires_grad\n }\n for t_x, t_y in iter(data_loader):\n t_x, t_y = to_model_device(t_x, model), to_model_device(t_y, model)\n avg_hessian += b_hvp(params, t_x, t_y, to_model_device(vec, model))\n\n return avg_hessian / float(n_batches)\n\n return avg_hvp_function if use_average else hvp_function\n
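Continuing the toy setup from the sketch above (same assumed model, loss and loader), the returned closure maps a flat vector to the averaged Hessian-vector product:
import torch\n\n# Sketch, reusing `model`, `loss` and `loader` from the previous example.\nhvp_fn = create_hvp_function(\n    model, loss, loader, precompute_grad=False, use_average=True\n)\n\nn_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\nvec = torch.ones(n_params)\nh_times_vec = hvp_fn(vec)  # flattened Hessian-vector product, shape (n_params,)\n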
hessian(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n data_loader: DataLoader,\n use_hessian_avg: bool = True,\n track_gradients: bool = False,\n) -> Tensor\n
Computes the Hessian matrix for a given model and loss function.
The PyTorch model for which the Hessian is computed.
A callable that computes the loss.
DataLoader providing batches of input data and corresponding ground truths.
use_hessian_avg
Flag to indicate whether the average Hessian across mini-batches should be computed. If False, the empirical loss across the entire dataset is used.
Whether to track gradients for the resulting tensor of the hessian vector products.
A tensor representing the Hessian matrix. The shape of the tensor will be (n_parameters, n_parameters), where n_parameters is the number of trainable parameters in the model.
def hessian(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n data_loader: DataLoader,\n use_hessian_avg: bool = True,\n track_gradients: bool = False,\n) -> torch.Tensor:\n \"\"\"\n Computes the Hessian matrix for a given model and loss function.\n\n Args:\n model: The PyTorch model for which the Hessian is computed.\n loss: A callable that computes the loss.\n data_loader: DataLoader providing batches of input data and corresponding\n ground truths.\n use_hessian_avg: Flag to indicate whether the average Hessian across\n mini-batches should be computed.\n If False, the empirical loss across the entire dataset is used.\n track_gradients: Whether to track gradients for the resulting tensor of\n the hessian vector products.\n\n Returns:\n A tensor representing the Hessian matrix. The shape of the tensor will be\n (n_parameters, n_parameters), where n_parameters is the number of trainable\n parameters in the model.\n \"\"\"\n\n params = {\n k: p if track_gradients else p.detach()\n for k, p in model.named_parameters()\n if p.requires_grad\n }\n n_parameters = sum([p.numel() for p in params.values()])\n model_dtype = next((p.dtype for p in params.values()))\n\n flat_params = flatten_dimensions(params.values())\n\n if use_hessian_avg:\n n_samples = 0\n hessian_mat = to_model_device(\n torch.zeros((n_parameters, n_parameters), dtype=model_dtype), model\n )\n blf = create_batch_loss_function(model, loss)\n\n def flat_input_batch_loss_function(\n p: torch.Tensor, t_x: torch.Tensor, t_y: torch.Tensor\n ):\n return blf(align_with_model(p, model), t_x, t_y)\n\n for x, y in iter(data_loader):\n n_samples += x.shape[0]\n hessian_mat += x.shape[0] * torch.func.hessian(\n flat_input_batch_loss_function\n )(flat_params, to_model_device(x, model), to_model_device(y, model))\n\n hessian_mat /= n_samples\n else:\n\n def flat_input_empirical_loss(p: torch.Tensor):\n return create_empirical_loss_function(model, loss, data_loader)(\n align_with_model(p, model)\n )\n\n hessian_mat = torch.func.jacrev(torch.func.jacrev(flat_input_empirical_loss))(\n flat_params\n )\n\n return hessian_mat\n
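For the same toy setup, the dense Hessian can be materialized directly (only feasible for small models):
# Sketch: full Hessian of the averaged batch losses for the small model above.\nH = hessian(model, loss, loader, use_hessian_avg=True)\n# H has shape (n_params, n_params), with n_params the number of trainable parameters\n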
create_per_sample_loss_function(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor]\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor], Tensor]\n
Generates a function to compute per-sample losses using PyTorch's vmap, i.e. the vector-valued function
for a loss function \\(\\operatorname{loss}\\) and a model \\(\\operatorname{model}\\) with model parameters \\(\\theta\\), where \\(N\\) is the number of elements in the batch.
The PyTorch model for which per-sample losses will be computed.
A callable that computes the loss for each sample in the batch, given a dictionary of model parameters, the model's input, and the true values. The callable will return a tensor where each entry corresponds to the loss of the corresponding sample.
def create_per_sample_loss_function(\n model: torch.nn.Module, loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor]\n) -> Callable[[Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], torch.Tensor]:\n r\"\"\"\n Generates a function to compute per-sample losses using PyTorch's vmap,\n i.e. the vector-valued function\n\n \\[ f(\\theta, x, y) = (\\operatorname{loss}(\\operatorname{model}(\\theta, x_1), y_1),\n \\dots,\n \\operatorname{loss}(\\operatorname{model}(\\theta, x_N), y_N)), \\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$ with\n model parameters $\\theta$, where $N$ is the number of elements in the batch.\n\n Args:\n model: The PyTorch model for which per-sample losses will be computed.\n loss: A callable that computes the loss.\n\n Returns:\n A callable that computes the loss for each sample in the batch,\n given a dictionary of model inputs, the model's predictions,\n and the true values. The callable will return a tensor where\n each entry corresponds to the loss of the corresponding sample.\n \"\"\"\n\n def compute_loss(\n params: Dict[str, torch.Tensor], x: torch.Tensor, y: torch.Tensor\n ) -> torch.Tensor:\n outputs = functional_call(\n model, params, (to_model_device(x.unsqueeze(0), model),)\n )\n return loss(outputs, y.unsqueeze(0))\n\n vmap_loss: Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], torch.Tensor\n ] = torch.vmap(compute_loss, in_dims=(None, 0, 0))\n return vmap_loss\n
create_per_sample_gradient_function(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor]\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor], Dict[str, Tensor]]\n
Generates a function to compute the per-sample gradient of the loss with respect to the model's parameters, i.e. the tensor-valued function
The PyTorch model for which per-sample gradients will be computed.
Callable[[Dict[str, Tensor], Tensor, Tensor], Dict[str, Tensor]]
A callable that takes a dictionary of model parameters, the model's input, and the labels. It returns a dictionary with the same keys as the model's named parameters. Each entry in the returned dictionary corresponds to the gradient of the corresponding model parameter for each sample in the batch.
def create_per_sample_gradient_function(\n model: torch.nn.Module, loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor]\n) -> Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], Dict[str, torch.Tensor]\n]:\n r\"\"\"\n Generates a function to computes the per-sample gradient of the loss with respect to\n the model's parameters, i.e. the tensor-valued function\n\n \\[ f(\\theta, x, y) = (\\nabla_{\\theta}\\operatorname{loss}\n (\\operatorname{model}(\\theta, x_1), y_1), \\dots,\n \\nabla_{\\theta}\\operatorname{loss}(\\operatorname{model}(\\theta, x_N), y_N) \\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$ with\n model parameters $\\theta$, where $N$ is the number of elements in the batch.\n\n Args:\n model: The PyTorch model for which per-sample gradients will be computed.\n loss: A callable that computes the loss.\n\n Returns:\n A callable that takes a dictionary of model parameters, the model's input,\n and the labels. It returns a dictionary with the same keys as the model's\n named parameters. Each entry in the returned dictionary corresponds to\n the gradient of the corresponding model parameter for each sample\n in the batch.\n\n \"\"\"\n\n per_sample_grad: Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], Dict[str, torch.Tensor]\n ] = torch.func.jacrev(create_per_sample_loss_function(model, loss))\n return per_sample_grad\n
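A short sketch (reusing the toy model, loss, loader and params from the examples above) of per-sample gradients for a single batch:
# Sketch: per-sample gradients for one batch.\nper_sample_grad = create_per_sample_gradient_function(model, loss)\nx, y = next(iter(loader))\ngrads = per_sample_grad(params, x, y)\n# `grads` is a dict keyed like model.named_parameters(); each entry gains a\n# leading batch dimension, e.g. grads[\"weight\"].shape == (8, 1, 2) here\n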
create_matrix_jacobian_product_function(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor], g: Tensor\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor], Tensor]\n
Generates a function to compute the matrix-Jacobian product (MJP) of the per-sample loss with respect to the model's parameters, i.e. the function
for a loss function \\(\\operatorname{loss}\\) and a model \\(\\operatorname{model}\\) with model parameters \\(\\theta\\).
The PyTorch model for which the MJP will be computed.
g
Matrix for which the product with the Jacobian will be computed. The shape of this matrix should be consistent with the shape of the Jacobian.
A callable that takes a dictionary of model parameters, the model's input, and the labels. The callable returns the matrix-Jacobian product of the per-sample loss with respect to the model's parameters for the given matrix g.
def create_matrix_jacobian_product_function(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n g: torch.Tensor,\n) -> Callable[[Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], torch.Tensor]:\n r\"\"\"\n Generates a function to computes the matrix-Jacobian product (MJP) of the\n per-sample loss with respect to the model's parameters, i.e. the function\n\n \\[ f(\\theta, x, y) = g \\, @ \\, (\\nabla_{\\theta}\\operatorname{loss}\n (\\operatorname{model}(\\theta, x_i), y_i))_i^T \\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$ with\n model parameters $\\theta$.\n\n Args:\n model: The PyTorch model for which the MJP will be computed.\n loss: A callable that computes the loss.\n g: Matrix for which the product with the Jacobian will be computed.\n The shape of this matrix should be consistent with the shape of\n the jacobian.\n\n Returns:\n A callable that takes a dictionary of model inputs, the model's input,\n and the labels. The callable returns the matrix-Jacobian product of the\n per-sample loss with respect to the model's parameters for the given\n matrix `g`.\n\n \"\"\"\n\n def single_jvp(\n params: Dict[str, torch.Tensor],\n x: torch.Tensor,\n y: torch.Tensor,\n _g: torch.Tensor,\n ):\n return torch.func.jvp(\n lambda p: create_per_sample_loss_function(model, loss)(p, x, y),\n (params,),\n (align_with_model(_g, model),),\n )[1]\n\n def full_jvp(params: Dict[str, torch.Tensor], x: torch.Tensor, y: torch.Tensor):\n return torch.func.vmap(single_jvp, in_dims=(None, None, None, 0))(\n params, x, y, g\n )\n\n return full_jvp\n
create_per_sample_mixed_derivative_function(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor]\n) -> Callable[[Dict[str, Tensor], Tensor, Tensor], Dict[str, Tensor]]\n
Generates a function to compute the mixed derivatives of the per-sample loss with respect to the model parameters and the input, i.e. the function
The PyTorch model for which the mixed derivatives are computed.
A callable that takes a dictionary of model parameters, the model's input, and the labels. The callable returns the mixed derivatives of the per-sample loss with respect to the model's parameters and input.
def create_per_sample_mixed_derivative_function(\n model: torch.nn.Module, loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor]\n) -> Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], Dict[str, torch.Tensor]\n]:\n r\"\"\"\n Generates a function to computes the mixed derivatives, of the per-sample loss with\n respect to the model parameters and the input, i.e. the function\n\n \\[ f(\\theta, x, y) = \\nabla_{\\theta}\\nabla_{x}\\operatorname{loss}\n (\\operatorname{model}(\\theta, x), y) \\]\n\n for a loss function $\\operatorname{loss}$ and a model $\\operatorname{model}$ with\n model parameters $\\theta$.\n\n Args:\n model: The PyTorch model for which the mixed derivatives are computed.\n loss: A callable that computes the loss.\n\n Returns:\n A callable that takes a dictionary of model inputs, the model's input,\n and the labels. The callable returns the mixed derivatives of the\n per-sample loss with respect to the model's parameters and input.\n\n \"\"\"\n\n def compute_loss(params: Dict[str, torch.Tensor], x: torch.Tensor, y: torch.Tensor):\n outputs = functional_call(\n model, params, (to_model_device(x.unsqueeze(0), model),)\n )\n return loss(outputs, y.unsqueeze(0))\n\n per_samp_mix_derivative: Callable[\n [Dict[str, torch.Tensor], torch.Tensor, torch.Tensor], Dict[str, torch.Tensor]\n ] = torch.vmap(\n torch.func.jacrev(torch.func.grad(compute_loss, argnums=1)),\n in_dims=(None, 0, 0),\n )\n return per_samp_mix_derivative\n
lanzcos_low_rank_hessian_approx(\n hessian_vp: Callable[[Tensor], Tensor],\n matrix_shape: Tuple[int, int],\n hessian_perturbation: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-06,\n max_iter: Optional[int] = None,\n device: Optional[device] = None,\n eigen_computation_on_gpu: bool = False,\n torch_dtype: Optional[dtype] = None,\n) -> LowRankProductRepresentation\n
Calculates a low-rank approximation of the Hessian matrix of a scalar-valued function using the implicitly restarted Lanczos algorithm, i.e.:
where \\(D\\) is a diagonal matrix with the top (in absolute value) rank_estimate eigenvalues of the Hessian and \\(V\\) contains the corresponding eigenvectors.
rank_estimate
hessian_vp
A function that takes a vector and returns the product of the Hessian of the loss function with that vector.
TYPE: Callable[[Tensor], Tensor]
matrix_shape
The shape of the matrix represented by the Hessian-vector product.
TYPE: Tuple[int, int]
Tuple[int, int]
hessian_perturbation
Regularization parameter added to the Hessian-vector product for numerical stability.
TYPE: float DEFAULT: 0.0
float
0.0
The number of eigenvalues and corresponding eigenvectors to compute. Represents the desired rank of the Hessian approximation.
TYPE: int DEFAULT: 10
int
10
krylov_dimension
The number of Krylov vectors to use for the Lanczos method. If not provided, it defaults to \\( \\min(\\text{model.n_parameters}, \\max(2 \\times \\text{rank_estimate} + 1, 20)) \\).
TYPE: Optional[int] DEFAULT: None
Optional[int]
tol
The stopping criterion for the Lanczos algorithm, which stops when the difference in the approximated eigenvalue is less than tol. Defaults to 1e-6.
TYPE: float DEFAULT: 1e-06
1e-06
max_iter
The maximum number of iterations for the Lanczos method. If not provided, it defaults to \\( 10 \\cdot \\text{model.n_parameters}\\).
device
The device to use for executing the hessian vector product.
TYPE: Optional[device] DEFAULT: None
Optional[device]
eigen_computation_on_gpu
If True, tries to execute the eigen pair approximation on the provided device via the cupy implementation. Ensure that either your model is small enough, or that you use a small rank_estimate, so that it fits your device's memory. If False, the eigen pair approximation is executed on the CPU with scipy's wrapper to ARPACK.
torch_dtype
If not provided, the current torch default dtype is used for conversion to torch.
TYPE: Optional[dtype] DEFAULT: None
Optional[dtype]
LowRankProductRepresentation
LowRankProductRepresentation instance that contains the top (up to rank_estimate) eigenvalues and corresponding eigenvectors of the Hessian.
def lanzcos_low_rank_hessian_approx(\n hessian_vp: Callable[[torch.Tensor], torch.Tensor],\n matrix_shape: Tuple[int, int],\n hessian_perturbation: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-6,\n max_iter: Optional[int] = None,\n device: Optional[torch.device] = None,\n eigen_computation_on_gpu: bool = False,\n torch_dtype: Optional[torch.dtype] = None,\n) -> LowRankProductRepresentation:\n r\"\"\"\n Calculates a low-rank approximation of the Hessian matrix of a scalar-valued\n function using the implicitly restarted Lanczos algorithm, i.e.:\n\n \\[ H_{\\text{approx}} = V D V^T\\]\n\n where \\(D\\) is a diagonal matrix with the top (in absolute value) `rank_estimate`\n eigenvalues of the Hessian and \\(V\\) contains the corresponding eigenvectors.\n\n Args:\n hessian_vp: A function that takes a vector and returns the product of\n the Hessian of the loss function.\n matrix_shape: The shape of the matrix, represented by the hessian vector\n product.\n hessian_perturbation: Regularization parameter added to the\n Hessian-vector product for numerical stability.\n rank_estimate: The number of eigenvalues and corresponding eigenvectors\n to compute. Represents the desired rank of the Hessian approximation.\n krylov_dimension: The number of Krylov vectors to use for the Lanczos\n method. If not provided, it defaults to\n \\( \\min(\\text{model.n_parameters},\n \\max(2 \\times \\text{rank_estimate} + 1, 20)) \\).\n tol: The stopping criteria for the Lanczos algorithm, which stops when\n the difference in the approximated eigenvalue is less than `tol`.\n Defaults to 1e-6.\n max_iter: The maximum number of iterations for the Lanczos method. If\n not provided, it defaults to \\( 10 \\cdot \\text{model.n_parameters}\\).\n device: The device to use for executing the hessian vector product.\n eigen_computation_on_gpu: If True, tries to execute the eigen pair\n approximation on the provided device via [cupy](https://cupy.dev/)\n implementation. Ensure that either your model is small enough, or you\n use a small rank_estimate to fit your device's memory. 
If False, the\n eigen pair approximation is executed on the CPU with scipy's wrapper to\n ARPACK.\n torch_dtype: If not provided, the current torch default dtype is used for\n conversion to torch.\n\n Returns:\n [LowRankProductRepresentation]\n [pydvl.influence.torch.functional.LowRankProductRepresentation]\n instance that contains the top (up until rank_estimate) eigenvalues\n and corresponding eigenvectors of the Hessian.\n \"\"\"\n\n torch_dtype = torch.get_default_dtype() if torch_dtype is None else torch_dtype\n\n if eigen_computation_on_gpu:\n try:\n import cupy as cp\n from cupyx.scipy.sparse.linalg import LinearOperator, eigsh\n from torch.utils.dlpack import from_dlpack, to_dlpack\n except ImportError as e:\n raise ImportError(\n f\"Try to install missing dependencies or set eigen_computation_on_gpu \"\n f\"to False: {e}\"\n )\n\n if device is None:\n raise ValueError(\n \"Without setting an explicit device, cupy is not supported\"\n )\n\n def to_torch_conversion_function(x: cp.NDArray) -> torch.Tensor:\n return from_dlpack(x.toDlpack()).to(torch_dtype)\n\n def mv(x):\n x = to_torch_conversion_function(x)\n y = hessian_vp(x) + hessian_perturbation * x\n return cp.from_dlpack(to_dlpack(y))\n\n else:\n from scipy.sparse.linalg import LinearOperator, eigsh\n\n def mv(x):\n x_torch = torch.as_tensor(x, device=device, dtype=torch_dtype)\n y = (\n (hessian_vp(x_torch) + hessian_perturbation * x_torch)\n .detach()\n .cpu()\n .numpy()\n )\n return y\n\n to_torch_conversion_function = partial(torch.as_tensor, dtype=torch_dtype)\n\n try:\n eigen_vals, eigen_vecs = eigsh(\n LinearOperator(matrix_shape, matvec=mv),\n k=rank_estimate,\n maxiter=max_iter,\n tol=tol,\n ncv=krylov_dimension,\n return_eigenvectors=True,\n )\n\n except ArpackNoConvergence as e:\n logger.warning(\n f\"ARPACK did not converge for parameters {max_iter=}, {tol=}, \"\n f\"{krylov_dimension=}, {rank_estimate=}. \\n \"\n f\"Returning the best approximation found so far. \"\n f\"Use those with care or modify parameters.\\n Original error: {e}\"\n )\n\n eigen_vals, eigen_vecs = e.eigenvalues, e.eigenvectors\n\n eigen_vals = to_torch_conversion_function(eigen_vals)\n eigen_vecs = to_torch_conversion_function(eigen_vecs)\n\n return LowRankProductRepresentation(eigen_vals, eigen_vecs)\n
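As a sanity-check sketch with a plain symmetric matrix (so that hessian_vp is just a matrix-vector product; CPU execution with default parameters):
import torch\n\n# Sketch: low-rank approximation of a random symmetric PSD matrix.\nA = torch.randn(50, 50)\nA = A @ A.T  # symmetric positive semi-definite\n\nlow_rank = lanzcos_low_rank_hessian_approx(\n    hessian_vp=lambda v: A @ v,\n    matrix_shape=(50, 50),\n    rank_estimate=5,\n)\n# low_rank.projections @ torch.diag(low_rank.eigen_vals) @ low_rank.projections.T\n# approximates A on its dominant eigenspace\n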
model_hessian_low_rank(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n training_data: DataLoader,\n hessian_perturbation: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-06,\n max_iter: Optional[int] = None,\n eigen_computation_on_gpu: bool = False,\n precompute_grad: bool = False,\n) -> LowRankProductRepresentation\n
Calculates a low-rank approximation of the Hessian matrix of the model's loss function using the implicitly restarted Lanczos algorithm, i.e.
A PyTorch model instance. The Hessian will be calculated with respect to this model's parameters.
training_data
A DataLoader instance that provides the model's training data. Used in calculating the Hessian-vector products.
Optional regularization parameter added to the Hessian-vector product for numerical stability.
The number of Krylov vectors to use for the Lanczos method. If not provided, it defaults to min(model.n_parameters, max(2*rank_estimate + 1, 20)).
The maximum number of iterations for the Lanczos method. If not provided, it defaults to 10*model.n_parameters.
If True, tries to execute the eigen pair approximation on the provided device via the cupy implementation. Make sure that either your model is small enough, or that you use a small rank_estimate, so that it fits your device's memory. If False, the eigen pair approximation is executed on the CPU with scipy's wrapper to ARPACK.
def model_hessian_low_rank(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n training_data: DataLoader,\n hessian_perturbation: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-6,\n max_iter: Optional[int] = None,\n eigen_computation_on_gpu: bool = False,\n precompute_grad: bool = False,\n) -> LowRankProductRepresentation:\n r\"\"\"\n Calculates a low-rank approximation of the Hessian matrix of the model's\n loss function using the implicitly restarted Lanczos algorithm, i.e.\n\n \\[ H_{\\text{approx}} = V D V^T\\]\n\n where \\(D\\) is a diagonal matrix with the top (in absolute value) `rank_estimate`\n eigenvalues of the Hessian and \\(V\\) contains the corresponding eigenvectors.\n\n\n Args:\n model: A PyTorch model instance. The Hessian will be calculated with respect to\n this model's parameters.\n loss : A callable that computes the loss.\n training_data: A DataLoader instance that provides the model's training data.\n Used in calculating the Hessian-vector products.\n hessian_perturbation: Optional regularization parameter added to the\n Hessian-vector product for numerical stability.\n rank_estimate: The number of eigenvalues and corresponding eigenvectors to\n compute. Represents the desired rank of the Hessian approximation.\n krylov_dimension: The number of Krylov vectors to use for the Lanczos method.\n If not provided, it defaults to min(model.n_parameters,\n max(2*rank_estimate + 1, 20)).\n tol: The stopping criteria for the Lanczos algorithm,\n which stops when the difference in the approximated eigenvalue is less than\n `tol`. Defaults to 1e-6.\n max_iter: The maximum number of iterations for the Lanczos method.\n If not provided, it defaults to 10*model.n_parameters.\n eigen_computation_on_gpu: If True, tries to execute the eigen pair approximation\n on the provided device via cupy implementation.\n Make sure, that either your model is small enough or you use a\n small rank_estimate to fit your device's memory.\n If False, the eigen pair approximation is executed on the CPU by\n scipy wrapper to ARPACK.\n precompute_grad: If True, the full data gradient is precomputed and kept\n in memory, which can speed up the hessian vector product computation.\n Set this to False, if you can't afford to keep the full computation graph\n in memory.\n\n Returns:\n [LowRankProductRepresentation]\n [pydvl.influence.torch.functional.LowRankProductRepresentation]\n instance that contains the top (up until rank_estimate) eigenvalues\n and corresponding eigenvectors of the Hessian.\n \"\"\"\n raw_hvp = create_hvp_function(\n model, loss, training_data, use_average=True, precompute_grad=precompute_grad\n )\n n_params = sum([p.numel() for p in model.parameters() if p.requires_grad])\n device = next(model.parameters()).device\n return lanzcos_low_rank_hessian_approx(\n hessian_vp=raw_hvp,\n matrix_shape=(n_params, n_params),\n hessian_perturbation=hessian_perturbation,\n rank_estimate=rank_estimate,\n krylov_dimension=krylov_dimension,\n tol=tol,\n max_iter=max_iter,\n device=device,\n eigen_computation_on_gpu=eigen_computation_on_gpu,\n )\n
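An illustrative sketch with a small regression model on synthetic data (rank_estimate must be smaller than the number of trainable parameters):
import torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\n# Sketch: low-rank Hessian approximation of a small regression model.\nmodel = torch.nn.Linear(10, 1)  # 11 trainable parameters\nloss = torch.nn.functional.mse_loss\ndata = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))\nloader = DataLoader(data, batch_size=16)\n\nlow_rank = model_hessian_low_rank(model, loss, loader, rank_estimate=5)\n# low_rank.projections has shape (11, 5), low_rank.eigen_vals has shape (5,)\n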
randomized_nystroem_approximation(\n mat_mat_prod: Union[Tensor, Callable[[Tensor], Tensor]],\n input_dim: int,\n rank: int,\n input_type: dtype,\n shift_func: Optional[Callable[[Tensor], Tensor]] = None,\n mat_vec_device: device = torch.device(\"cpu\"),\n) -> LowRankProductRepresentation\n
Given a matrix vector product function (representing a symmetric positive definite matrix \(A\)), computes a random Nyström low rank approximation of \(A\) in factored form, i.e.
where \\(\\Omega\\) is a standard normal random matrix.
mat_mat_prod
A tensor or a callable representing the matrix vector product with \(A\)
TYPE: Union[Tensor, Callable[[Tensor], Tensor]]
Union[Tensor, Callable[[Tensor], Tensor]]
input_dim
dimension of the input for the matrix vector product
TYPE: int
input_type
data type of the inputs
TYPE: dtype
dtype
rank
rank of the approximation
shift_func
optional function for computing the stabilizing shift in the construction of the randomized nystroem approximation, defaults to
where \\(\\varepsilon(\\operatorname{\\text{input_type}})\\) is the value of the machine precision corresponding to the data type.
TYPE: Optional[Callable[[Tensor], Tensor]] DEFAULT: None
Optional[Callable[[Tensor], Tensor]]
mat_vec_device
device on which the matrix vector product is executed
TYPE: device DEFAULT: device('cpu')
device('cpu')
object containing \(U\) and \(\Sigma\)
def randomized_nystroem_approximation(\n mat_mat_prod: Union[torch.Tensor, Callable[[torch.Tensor], torch.Tensor]],\n input_dim: int,\n rank: int,\n input_type: torch.dtype,\n shift_func: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,\n mat_vec_device: torch.device = torch.device(\"cpu\"),\n) -> LowRankProductRepresentation:\n r\"\"\"\n Given a matrix vector product function (representing a symmetric positive definite\n matrix $A$ ), computes a random Nystr\u00f6m low rank approximation of\n $A$ in factored form, i.e.\n\n $$ A_{\\text{nys}} = (A \\Omega)(\\Omega^T A \\Omega)^{\\dagger}(A \\Omega)^T\n = U \\Sigma U^T $$\n\n where $\\Omega$ is a standard normal random matrix.\n\n Args:\n mat_mat_prod: A callable representing the matrix vector product\n input_dim: dimension of the input for the matrix vector product\n input_type: data_type of inputs\n rank: rank of the approximation\n shift_func: optional function for computing the stabilizing shift in the\n construction of the randomized nystroem approximation, defaults to\n\n $$ \\sqrt{\\operatorname{\\text{input_dim}}} \\cdot\n \\varepsilon(\\operatorname{\\text{input_type}}) \\cdot \\|A\\Omega\\|_2,$$\n\n where $\\varepsilon(\\operatorname{\\text{input_type}})$ is the value of the\n machine precision corresponding to the data type.\n mat_vec_device: device where the matrix vector product has to be executed\n\n Returns:\n object containing, $U$ and $\\Sigma$\n \"\"\"\n\n if shift_func is None:\n\n def shift_func(x: torch.Tensor):\n return (\n torch.sqrt(torch.as_tensor(input_dim))\n * torch.finfo(x.dtype).eps\n * torch.linalg.norm(x)\n )\n\n _mat_mat_prod: Callable[[torch.Tensor], torch.Tensor]\n\n if isinstance(mat_mat_prod, torch.Tensor):\n\n def _mat_mat_prod(x: torch.Tensor):\n return mat_mat_prod @ x\n\n else:\n _mat_mat_prod = mat_mat_prod\n\n random_sample_matrix = torch.randn(\n input_dim, rank, device=mat_vec_device, dtype=input_type\n )\n random_sample_matrix, _ = torch.linalg.qr(random_sample_matrix)\n\n sketch_mat = _mat_mat_prod(random_sample_matrix)\n\n shift = shift_func(sketch_mat)\n sketch_mat += shift * random_sample_matrix\n cholesky_mat = torch.matmul(random_sample_matrix.t(), sketch_mat)\n try:\n triangular_mat = torch.linalg.cholesky(cholesky_mat)\n except _LinAlgError as e:\n logger.warning(\n f\"Encountered error in cholesky decomposition: {e}.\\n \"\n f\"Increasing shift by smallest eigenvalue and re-compute\"\n )\n eigen_vals, eigen_vectors = torch.linalg.eigh(cholesky_mat)\n shift += torch.abs(torch.min(eigen_vals))\n eigen_vals += shift\n triangular_mat = torch.linalg.cholesky(\n torch.mm(eigen_vectors, torch.mm(torch.diag(eigen_vals), eigen_vectors.T))\n )\n\n svd_input = torch.linalg.solve_triangular(\n triangular_mat.t(), sketch_mat, upper=True, left=False\n )\n left_singular_vecs, singular_vals, _ = torch.linalg.svd(\n svd_input, full_matrices=False\n )\n singular_vals = torch.clamp(singular_vals**2 - shift, min=0)\n\n return LowRankProductRepresentation(singular_vals, left_singular_vecs)\n
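A small sketch passing the matrix itself as mat_mat_prod (a random positive definite matrix), which the signature above also accepts:
import torch\n\n# Sketch: Nystroem approximation of a random positive definite matrix given as a tensor.\nA = torch.randn(100, 100)\nA = A @ A.T + 1e-3 * torch.eye(100)\n\nlow_rank = randomized_nystroem_approximation(\n    mat_mat_prod=A,\n    input_dim=100,\n    rank=10,\n    input_type=A.dtype,\n)\n\n# reconstruction of the rank-10 factorization U diag(Sigma) U^T\nA_nys = (low_rank.projections * low_rank.eigen_vals) @ low_rank.projections.T\n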
model_hessian_nystroem_approximation(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n data_loader: DataLoader,\n rank: int,\n shift_func: Optional[Callable[[Tensor], Tensor]] = None,\n) -> LowRankProductRepresentation\n
Given a model, loss and a data_loader, computes a random Nyström low rank approximation of the corresponding Hessian matrix in factored form, i.e.
def model_hessian_nystroem_approximation(\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n data_loader: DataLoader,\n rank: int,\n shift_func: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,\n) -> LowRankProductRepresentation:\n r\"\"\"\n Given a model, loss and a data_loader, computes a random Nystr\u00f6m low rank approximation of\n the corresponding Hessian matrix in factored form, i.e.\n\n $$ H_{\\text{nys}} = (H \\Omega)(\\Omega^T H \\Omega)^{+}(H \\Omega)^T\n = U \\Sigma U^T $$\n\n Args:\n model: A PyTorch model instance. The Hessian will be calculated with respect to\n this model's parameters.\n loss : A callable that computes the loss.\n data_loader: A DataLoader instance that provides the model's training data.\n Used in calculating the Hessian-vector products.\n rank: rank of the approximation\n shift_func: optional function for computing the stabilizing shift in the\n construction of the randomized nystroem approximation, defaults to\n\n $$ \\sqrt{\\operatorname{\\text{input_dim}}} \\cdot\n \\varepsilon(\\operatorname{\\text{input_type}}) \\cdot \\|A\\Omega\\|_2,$$\n\n where $\\varepsilon(\\operatorname{\\text{input_type}})$ is the value of the\n machine precision corresponding to the data type.\n\n Returns:\n object containing, $U$ and $\\Sigma$\n \"\"\"\n\n model_hvp = create_hvp_function(\n model, loss, data_loader, precompute_grad=False, use_average=True\n )\n device = next((p.device for p in model.parameters()))\n dtype = next((p.dtype for p in model.parameters()))\n in_dim = sum((p.numel() for p in model.parameters() if p.requires_grad))\n\n def model_hessian_mat_mat_prod(x: torch.Tensor):\n return torch.func.vmap(model_hvp, in_dims=1, randomness=\"same\")(x).t()\n\n return randomized_nystroem_approximation(\n model_hessian_mat_mat_prod,\n in_dim,\n rank,\n dtype,\n shift_func=shift_func,\n mat_vec_device=device,\n )\n
This module provides several implementations of InfluenceFunctionModel built on PyTorch.
TorchInfluenceFunctionModel(\n model: Module, loss: Callable[[Tensor, Tensor], Tensor]\n)\n
Bases: InfluenceFunctionModel[Tensor, DataLoader], ABC
InfluenceFunctionModel[Tensor, DataLoader]
Abstract base class for influence computation related to torch models
src/pydvl/influence/torch/influence_function_model.py
def __init__(\n self,\n model: nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n):\n self.loss = loss\n self.model = model\n self._n_parameters = sum(\n [p.numel() for p in model.parameters() if p.requires_grad]\n )\n self._model_device = next(\n (p.device for p in model.parameters() if p.requires_grad)\n )\n self._model_params = {\n k: p.detach() for k, p in self.model.named_parameters() if p.requires_grad\n }\n self._model_dtype = next(\n (p.dtype for p in model.parameters() if p.requires_grad)\n )\n super().__init__()\n
influences(\n x_test: Tensor,\n y_test: Tensor,\n x: Optional[Tensor] = None,\n y: Optional[Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Tensor\n
Compute the approximation of
for the perturbation type influence case. For all input tensors it is assumed that the first dimension is the batch dimension (if you want to provide a single sample z and no batch dimension is present, call z.unsqueeze(0)).
TYPE: Optional[Tensor] DEFAULT: None
Optional[Tensor]
def influences(\n self,\n x_test: torch.Tensor,\n y_test: torch.Tensor,\n x: Optional[torch.Tensor] = None,\n y: Optional[torch.Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> torch.Tensor:\n r\"\"\"\n Compute the approximation of\n\n \\[\n \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})), \\nabla_{\\theta} \\ell(y, f_{\\theta}(x))\\rangle\n \\]\n\n for the case of up-weighting influence, resp.\n\n \\[\n \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}}, f_{\\theta}(x_{\\text{test}})),\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle\n \\]\n\n for the perturbation type influence case. For all input tensors it is assumed,\n that the first dimension is the batch dimension (in case, you want to provide\n a single sample z, call z.unsqueeze(0) if no batch dimension is present).\n\n Args:\n x_test: model input to use in the gradient computations\n of $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n y_test: label tensor to compute gradients\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n Tensor representing the element-wise scalar products for the provided batch\n\n \"\"\"\n t: torch.Tensor = super().influences(x_test, y_test, x, y, mode=mode)\n return t\n
influence_factors(x: Tensor, y: Tensor) -> Tensor\n
where the gradient is meant to be per sample of the batch \((x, y)\). For all input tensors it is assumed that the first dimension is the batch dimension (if you want to provide a single sample z and no batch dimension is present, call z.unsqueeze(0)).
Tensor representing the element-wise inverse Hessian matrix vector products
def influence_factors(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:\n r\"\"\"\n Compute approximation of\n\n \\[ H^{-1}\\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\]\n\n where the gradient is meant to be per sample of the batch $(x, y)$.\n For all input tensors it is assumed,\n that the first dimension is the batch dimension (in case, you want to provide\n a single sample z, call z.unsqueeze(0) if no batch dimension is present).\n\n Args:\n x: model input to use in the gradient computations\n y: label tensor to compute gradients\n\n Returns:\n Tensor representing the element-wise inverse Hessian matrix vector products\n\n \"\"\"\n return super().influence_factors(x, y)\n
influences_from_factors(\n z_test_factors: Tensor,\n x: Tensor,\n y: Tensor,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Tensor\n
Computation of

\[ \langle z_{\text{test_factors}}, \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle z_{\text{test_factors}}, \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation-type influence case. The gradient is taken per sample of the batch \((x, y)\). For all input tensors it is assumed that the first dimension is the batch dimension (if you want to provide a single sample z, call z.unsqueeze(0) when no batch dimension is present).
pre-computed tensor, approximating \\(H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}}, f_{\\theta}(x_{\\text{test}}))\\)
model input to use in the gradient computations \\(\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\), resp. \\(\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))\\)
def influences_from_factors(\n self,\n z_test_factors: torch.Tensor,\n x: torch.Tensor,\n y: torch.Tensor,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> torch.Tensor:\n r\"\"\"\n Computation of\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The gradient is meant to be per sample\n of the batch $(x, y)$. For all input tensors it is assumed,\n that the first dimension is the batch dimension (in case, you want to provide\n a single sample z, call z.unsqueeze(0) if no batch dimension is present).\n\n Args:\n z_test_factors: pre-computed tensor, approximating\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n x: model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$\n y: label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n Tensor representing the element-wise scalar products for the provided batch\n\n \"\"\"\n if mode == InfluenceMode.Up:\n return (\n z_test_factors\n @ self._loss_grad(x.to(self.model_device), y.to(self.model_device)).T\n )\n elif mode == InfluenceMode.Perturbation:\n return torch.einsum(\n \"ia,j...a->ij...\",\n z_test_factors,\n self._flat_loss_mixed_grad(\n x.to(self.model_device), y.to(self.model_device)\n ),\n )\n else:\n raise UnsupportedInfluenceModeException(mode)\n
DirectInfluence(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n hessian_regularization: float = 0.0,\n)\n
Bases: TorchInfluenceFunctionModel
Given a model and training data, it finds x such that \\(Hx = b\\), with \\(H\\) being the model hessian.
A PyTorch model. The Hessian will be calculated with respect to this model's parameters.
hessian_regularization
Regularization of the hessian.
def __init__(\n self,\n model: nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n hessian_regularization: float = 0.0,\n):\n super().__init__(model, loss)\n self.hessian_regularization = hessian_regularization\n
fit(data: DataLoader) -> DirectInfluence\n
Compute the hessian matrix based on a provided dataloader.
The data to compute the Hessian with.
DirectInfluence
The fitted instance.
def fit(self, data: DataLoader) -> DirectInfluence:\n \"\"\"\n Compute the hessian matrix based on a provided dataloader.\n\n Args:\n data: The data to compute the Hessian with.\n\n Returns:\n The fitted instance.\n \"\"\"\n self.hessian = hessian(self.model, self.loss, data)\n return self\n
Compute the approximation of

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation-type influence case. The action of \(H^{-1}\) is achieved via a direct solver using torch.linalg.solve.
A tensor representing the element-wise scalar products for the provided batch.
@log_duration\ndef influences(\n self,\n x_test: torch.Tensor,\n y_test: torch.Tensor,\n x: Optional[torch.Tensor] = None,\n y: Optional[torch.Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> torch.Tensor:\n r\"\"\"\n Compute approximation of\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle, \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case. The action of $H^{-1}$ is achieved\n via a direct solver using [torch.linalg.solve][torch.linalg.solve].\n\n Args:\n x_test: model input to use in the gradient computations of\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n y_test: label tensor to compute gradients\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A tensor representing the element-wise scalar products for the\n provided batch.\n\n \"\"\"\n return super().influences(x_test, y_test, x, y, mode=mode)\n
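As an illustration of the workflow above, the following is a minimal sketch that fits DirectInfluence on a toy regression problem and queries it. The import path pydvl.influence.torch and the synthetic data are assumptions made for the example.

```python
# Minimal sketch (toy data); assumes DirectInfluence is importable from
# pydvl.influence.torch as in this reference.
import torch
from torch.utils.data import DataLoader, TensorDataset
from pydvl.influence.torch import DirectInfluence

torch.manual_seed(0)
x_train, y_train = torch.randn(32, 3), torch.randn(32, 1)
x_test, y_test = torch.randn(8, 3), torch.randn(8, 1)
train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=8)

model = torch.nn.Linear(3, 1)
loss = torch.nn.functional.mse_loss

infl_model = DirectInfluence(model, loss, hessian_regularization=0.01)
infl_model = infl_model.fit(train_loader)

# (n_test, n_train) matrix of up-weighting influences
values = infl_model.influences(x_test, y_test, x_train, y_train)

# A single sample must still carry a batch dimension, hence unsqueeze(0)
single = infl_model.influences(
    x_test[0].unsqueeze(0), y_test[0].unsqueeze(0), x_train, y_train
)
```

The same fit/influences pattern applies to the other implementations below; the sketches that follow reuse model, loss and train_loader from this one.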
CgInfluence(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n hessian_regularization: float = 0.0,\n x0: Optional[Tensor] = None,\n rtol: float = 1e-07,\n atol: float = 1e-07,\n maxiter: Optional[int] = None,\n progress: bool = False,\n precompute_grad: bool = False,\n pre_conditioner: Optional[PreConditioner] = None,\n use_block_cg: bool = False,\n)\n
Given a model and training data, it uses conjugate gradient to calculate the inverse of the Hessian Vector Product. More precisely, it finds x such that \\(Hx = b\\), with \\(H\\) being the model hessian. For more info, see Conjugate Gradient.
x0
Initial guess for hvp. If None, defaults to b.
rtol
Maximum relative tolerance of result.
TYPE: float DEFAULT: 1e-07
1e-07
atol
Absolute tolerance of result.
maxiter
Maximum number of iterations. If None, defaults to 10*len(b).
progress
If True, display progress bars.
pre_conditioner
Optional pre-conditioner to improve convergence of conjugate gradient method
TYPE: Optional[PreConditioner] DEFAULT: None
Optional[PreConditioner]
use_block_cg
If True, use the block variant of the conjugate gradient method, which solves several right-hand sides simultaneously.
def __init__(\n self,\n model: nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n hessian_regularization: float = 0.0,\n x0: Optional[torch.Tensor] = None,\n rtol: float = 1e-7,\n atol: float = 1e-7,\n maxiter: Optional[int] = None,\n progress: bool = False,\n precompute_grad: bool = False,\n pre_conditioner: Optional[PreConditioner] = None,\n use_block_cg: bool = False,\n):\n super().__init__(model, loss)\n self.use_block_cg = use_block_cg\n self.pre_conditioner = pre_conditioner\n self.precompute_grad = precompute_grad\n self.progress = progress\n self.maxiter = maxiter\n self.atol = atol\n self.rtol = rtol\n self.x0 = x0\n self.hessian_regularization = hessian_regularization\n
Compute an approximation of

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle H^{-1}\nabla_{\theta} \ell(y_{\text{test}}, f_{\theta}(x_{\text{test}})), \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of perturbation-type influence. The approximate action of \(H^{-1}\) is achieved via the conjugate gradient method.
@log_duration\ndef influences(\n self,\n x_test: torch.Tensor,\n y_test: torch.Tensor,\n x: Optional[torch.Tensor] = None,\n y: Optional[torch.Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> torch.Tensor:\n r\"\"\"\n Compute an approximation of\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle, \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}})),\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of perturbation-type influence. The approximate action of\n $H^{-1}$ is achieved via the [conjugate gradient\n method](https://en.wikipedia.org/wiki/Conjugate_gradient_method).\n\n Args:\n x_test: model input to use in the gradient computations of\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n y_test: label tensor to compute gradients\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A tensor representing the element-wise scalar products for the\n provided batch.\n\n \"\"\"\n return super().influences(x_test, y_test, x, y, mode=mode)\n
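A hedged sketch of a typical configuration, reusing model, loss and train_loader from the DirectInfluence example above; the import paths are assumptions based on the source locations shown in this reference.

```python
# Sketch: CG-based influence with a Jacobi pre-conditioner and the block variant.
from pydvl.influence.torch import CgInfluence
from pydvl.influence.torch.pre_conditioner import JacobiPreConditioner

infl_model = CgInfluence(
    model,
    loss,
    hessian_regularization=0.01,
    rtol=1e-5,
    atol=1e-6,
    maxiter=100,
    pre_conditioner=JacobiPreConditioner(num_samples_estimator=4),
    use_block_cg=True,  # solve all right-hand sides of a batch at once
).fit(train_loader)

values = infl_model.influences(x_test, y_test, x_train, y_train)
```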
LissaInfluence(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n hessian_regularization: float = 0.0,\n maxiter: int = 1000,\n dampen: float = 0.0,\n scale: float = 10.0,\n h0: Optional[Tensor] = None,\n rtol: float = 0.0001,\n progress: bool = False,\n)\n
Uses LISSA, Linear time Stochastic Second-Order Algorithm, to iteratively approximate the inverse Hessian. More precisely, it finds x such that \(Hx = b\), with \(H\) being the model's second derivative with respect to the parameters. This is done with an iterative update in which \(I\) is the identity matrix, \(d\) is a dampening term and \(s\) a scaling factor, both applied to help convergence. For details, see Linear time Stochastic Second-Order Approximation (LiSSA).
Maximum number of iterations.
TYPE: int DEFAULT: 1000
1000
dampen
Dampening factor, defaults to 0 for no dampening.
scale
Scaling factor, defaults to 10.
TYPE: float DEFAULT: 10.0
10.0
h0
Initial guess for hvp.
Tolerance to use for early stopping.
TYPE: float DEFAULT: 0.0001
0.0001
def __init__(\n self,\n model: nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n hessian_regularization: float = 0.0,\n maxiter: int = 1000,\n dampen: float = 0.0,\n scale: float = 10.0,\n h0: Optional[torch.Tensor] = None,\n rtol: float = 1e-4,\n progress: bool = False,\n):\n super().__init__(model, loss)\n self.maxiter = maxiter\n self.hessian_regularization = hessian_regularization\n self.progress = progress\n self.rtol = rtol\n self.h0 = h0\n self.scale = scale\n self.dampen = dampen\n
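A sketch of the constructor arguments, again reusing model, loss and train_loader from the first example. As a rule of thumb, scale should dominate the largest Hessian eigenvalue for the iteration to converge; the value below is only a placeholder.

```python
# Sketch: LiSSA-based influence; the hyperparameters are illustrative defaults.
from pydvl.influence.torch import LissaInfluence

infl_model = LissaInfluence(
    model,
    loss,
    hessian_regularization=0.01,
    maxiter=1000,
    dampen=0.0,
    scale=10.0,  # should dominate the Hessian spectrum for convergence
    rtol=1e-4,
    progress=True,
).fit(train_loader)
```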
ArnoldiInfluence(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n hessian_regularization: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-06,\n max_iter: Optional[int] = None,\n eigen_computation_on_gpu: bool = False,\n precompute_grad: bool = False,\n)\n
Solves the linear system Hx = b, where H is the Hessian of the model's loss function and b is the given right-hand side vector. It employs the implicitly restarted Arnoldi method (https://en.wikipedia.org/wiki/Arnoldi_iteration) to compute a partial eigendecomposition, which is used for the inversion, i.e.

\[ H^{-1} \approx V D^{-1} V^T, \]

where \(D\) is a diagonal matrix with the top (in absolute value) rank_estimate eigenvalues of the Hessian and \(V\) contains the corresponding eigenvectors. For more information, see Arnoldi.
The number of Krylov vectors to use for the Lanczos method. Defaults to min(model's number of parameters, max(2 times rank_estimate + 1, 20)).
The stopping criteria for the Lanczos algorithm. Ignored if low_rank_representation is provided.
low_rank_representation
The maximum number of iterations for the Lanczos method. Ignored if low_rank_representation is provided.
If True, tries to execute the eigen pair approximation on the model's device via a cupy implementation. Ensure the model size or rank_estimate is appropriate for device memory. If False, the eigen pair approximation is executed on the CPU by the scipy wrapper to ARPACK.
def __init__(\n self,\n model: nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n hessian_regularization: float = 0.0,\n rank_estimate: int = 10,\n krylov_dimension: Optional[int] = None,\n tol: float = 1e-6,\n max_iter: Optional[int] = None,\n eigen_computation_on_gpu: bool = False,\n precompute_grad: bool = False,\n):\n super().__init__(model, loss)\n self.hessian_regularization = hessian_regularization\n self.rank_estimate = rank_estimate\n self.tol = tol\n self.max_iter = max_iter\n self.krylov_dimension = krylov_dimension\n self.eigen_computation_on_gpu = eigen_computation_on_gpu\n self.precompute_grad = precompute_grad\n
fit(data: DataLoader) -> ArnoldiInfluence\n
Fitting corresponds to the computation of the low-rank decomposition

\[ V D^{-1} V^T \]

of the Hessian defined by the provided data loader.
def fit(self, data: DataLoader) -> ArnoldiInfluence:\n r\"\"\"\n Fitting corresponds to the computation of the low rank decomposition\n\n \\[ V D^{-1} V^T \\]\n\n of the Hessian defined by the provided data loader.\n\n Args:\n data: The data to compute the Hessian with.\n\n Returns:\n The fitted instance.\n\n \"\"\"\n low_rank_representation = model_hessian_low_rank(\n self.model,\n self.loss,\n data,\n hessian_perturbation=0.0, # regularization is applied, when computing values\n rank_estimate=self.rank_estimate,\n krylov_dimension=self.krylov_dimension,\n tol=self.tol,\n max_iter=self.max_iter,\n eigen_computation_on_gpu=self.eigen_computation_on_gpu,\n precompute_grad=self.precompute_grad,\n )\n self.low_rank_representation = low_rank_representation.to(self.model_device)\n return self\n
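A sketch of the factor-based workflow with ArnoldiInfluence, reusing the toy setup from the first example. Keeping rank_estimate below the number of model parameters (here only four) is an assumption of this illustration.

```python
# Sketch: fit computes the low-rank decomposition once; factors are then reused.
from pydvl.influence.torch import ArnoldiInfluence

infl_model = ArnoldiInfluence(
    model,
    loss,
    hessian_regularization=0.01,
    rank_estimate=2,  # keep below the (tiny) parameter count of the toy model
).fit(train_loader)

factors = infl_model.influence_factors(x_test, y_test)
values = infl_model.influences_from_factors(factors, x_train, y_train)
```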
EkfacInfluence(\n model: Module,\n update_diagonal: bool = False,\n hessian_regularization: float = 0.0,\n progress: bool = False,\n)\n
Approximately solves the linear system Hx = b, where H is the Hessian of a model with the empirical categorical cross entropy as loss function and b is the given right-hand side vector. It employs the EK-FAC method, which is based on the Kronecker factorization of the Hessian.
Contrary to the other influence function methods, this implementation can only be used for classification tasks with a cross entropy loss function. However, it is much faster than the other methods and can be used efficiently for very large datasets and models. For more information, see Eigenvalue Corrected K-FAC.
update_diagonal
If True, the diagonal values in the ekfac representation are refitted from the training data after calculating the KFAC blocks. This provides a more accurate approximation of the Hessian, but it is computationally more expensive.
def __init__(\n self,\n model: nn.Module,\n update_diagonal: bool = False,\n hessian_regularization: float = 0.0,\n progress: bool = False,\n):\n super().__init__(model, torch.nn.functional.cross_entropy)\n self.hessian_regularization = hessian_regularization\n self.update_diagonal = update_diagonal\n self.active_layers = self._parse_active_layers()\n self.progress = progress\n
fit(data: DataLoader) -> EkfacInfluence\n
Compute the KFAC blocks for each layer of the model, using the provided data. It then creates an EkfacRepresentation object that stores the KFAC blocks for each layer, their eigenvalue decomposition and diagonal values.
def fit(self, data: DataLoader) -> EkfacInfluence:\n \"\"\"\n Compute the KFAC blocks for each layer of the model, using the provided data.\n It then creates an EkfacRepresentation object that stores the KFAC blocks for\n each layer, their eigenvalue decomposition and diagonal values.\n \"\"\"\n forward_x, grad_y = self._get_kfac_blocks(data)\n layers_evecs_a = {}\n layers_evect_g = {}\n layers_diags = {}\n for key in self.active_layers.keys():\n evals_a, evecs_a = torch.linalg.eigh(forward_x[key])\n evals_g, evecs_g = torch.linalg.eigh(grad_y[key])\n layers_evecs_a[key] = evecs_a\n layers_evect_g[key] = evecs_g\n layers_diags[key] = torch.kron(evals_g.view(-1, 1), evals_a.view(-1, 1))\n\n self.ekfac_representation = EkfacRepresentation(\n self.active_layers.keys(),\n self.active_layers.values(),\n layers_evecs_a.values(),\n layers_evect_g.values(),\n layers_diags.values(),\n )\n if self.update_diagonal:\n self._update_diag(data)\n return self\n
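Since EK-FAC fixes the loss to cross entropy, the sketch below uses a small classifier built from linear layers (assumed here to be the layer type supported by this implementation) and fits it on toy data.

```python
# Sketch: EK-FAC for a classification model; the loss is fixed to cross entropy.
import torch
from torch.utils.data import DataLoader, TensorDataset
from pydvl.influence.torch import EkfacInfluence

torch.manual_seed(0)
x_clf = torch.randn(64, 10)
y_clf = torch.randint(0, 3, (64,))
clf_loader = DataLoader(TensorDataset(x_clf, y_clf), batch_size=16)

clf = torch.nn.Sequential(
    torch.nn.Linear(10, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 3),
)

ekfac_model = EkfacInfluence(clf, update_diagonal=True, hessian_regularization=0.01)
ekfac_model = ekfac_model.fit(clf_loader)

# Per-layer influences, keyed by layer name
by_layer = ekfac_model.influences_by_layer(x_clf[:4], y_clf[:4])
```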
influences_by_layer(\n x_test: Tensor,\n y_test: Tensor,\n x: Optional[Tensor] = None,\n y: Optional[Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Dict[str, Tensor]\n
Compute the influence of the data on the test data for each layer of the model.
A dictionary containing the influence of the data on the test data for each
layer of the model, with the layer name as key.
def influences_by_layer(\n self,\n x_test: torch.Tensor,\n y_test: torch.Tensor,\n x: Optional[torch.Tensor] = None,\n y: Optional[torch.Tensor] = None,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Dict[str, torch.Tensor]:\n r\"\"\"\n Compute the influence of the data on the test data for each layer of the model.\n\n Args:\n x_test: model input to use in the gradient computations of\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n y_test: label tensor to compute gradients\n x: optional model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n if None, use $x=x_{\\text{test}}$\n y: optional label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A dictionary containing the influence of the data on the test data for each\n layer of the model, with the layer name as key.\n \"\"\"\n if not self.is_fitted:\n raise ValueError(\n \"Instance must be fitted before calling influence methods on it\"\n )\n\n if x is None:\n if y is not None:\n raise ValueError(\n \"Providing labels y, without providing model input x \"\n \"is not supported\"\n )\n\n return self._symmetric_values_by_layer(\n x_test.to(self.model_device),\n y_test.to(self.model_device),\n mode,\n )\n\n if y is None:\n raise ValueError(\n \"Providing model input x without providing labels y is not supported\"\n )\n\n return self._non_symmetric_values_by_layer(\n x_test.to(self.model_device),\n y_test.to(self.model_device),\n x.to(self.model_device),\n y.to(self.model_device),\n mode,\n )\n
influence_factors_by_layer(x: Tensor, y: Tensor) -> Dict[str, Tensor]\n
Computes the approximation of

\[ H^{-1}\nabla_{\theta} \ell(y, f_{\theta}(x)) \]

for each layer of the model separately.
A dictionary containing the influence factors for each layer of the model,
with the layer name as key.
def influence_factors_by_layer(\n self,\n x: torch.Tensor,\n y: torch.Tensor,\n) -> Dict[str, torch.Tensor]:\n r\"\"\"\n Computes the approximation of\n\n \\[ H^{-1}\\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\]\n\n for each layer of the model separately.\n\n Args:\n x: model input to use in the gradient computations\n y: label tensor to compute gradients\n\n Returns:\n A dictionary containing the influence factors for each layer of the model,\n with the layer name as key.\n \"\"\"\n if not self.is_fitted:\n raise ValueError(\n \"Instance must be fitted before calling influence methods on it\"\n )\n\n return self._solve_hvp_by_layer(\n self._loss_grad(x.to(self.model_device), y.to(self.model_device)),\n self.ekfac_representation,\n self.hessian_regularization,\n )\n
influences_from_factors_by_layer(\n z_test_factors: Dict[str, Tensor],\n x: Tensor,\n y: Tensor,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Dict[str, Tensor]\n
Computation of

\[ \langle z_{\text{test_factors}}, \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the case of up-weighting influence, resp.

\[ \langle z_{\text{test_factors}}, \nabla_{x} \nabla_{\theta} \ell(y, f_{\theta}(x)) \rangle \]

for the perturbation-type influence case, for each layer of the model separately. The gradients are taken per sample of the batch \((x, y)\).
A dictionary containing the influence of the data on the test data
for each layer of the model, with the layer name as key.
def influences_from_factors_by_layer(\n self,\n z_test_factors: Dict[str, torch.Tensor],\n x: torch.Tensor,\n y: torch.Tensor,\n mode: InfluenceMode = InfluenceMode.Up,\n) -> Dict[str, torch.Tensor]:\n r\"\"\"\n Computation of\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the case of up-weighting influence, resp.\n\n \\[ \\langle z_{\\text{test_factors}},\n \\nabla_{x} \\nabla_{\\theta} \\ell(y, f_{\\theta}(x)) \\rangle \\]\n\n for the perturbation type influence case for each layer of the model\n separately. The gradients are meant to be per sample of the batch $(x,\n y)$.\n\n Args:\n z_test_factors: pre-computed tensor, approximating\n $H^{-1}\\nabla_{\\theta} \\ell(y_{\\text{test}},\n f_{\\theta}(x_{\\text{test}}))$\n x: model input to use in the gradient computations\n $\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$,\n resp. $\\nabla_{x}\\nabla_{\\theta}\\ell(y, f_{\\theta}(x))$\n y: label tensor to compute gradients\n mode: enum value of [InfluenceMode]\n [pydvl.influence.base_influence_function_model.InfluenceMode]\n\n Returns:\n A dictionary containing the influence of the data on the test data\n for each layer of the model, with the layer name as key.\n \"\"\"\n if mode == InfluenceMode.Up:\n total_grad = self._loss_grad(\n x.to(self.model_device), y.to(self.model_device)\n )\n start_idx = 0\n influences = {}\n for layer_id, layer_z_test in z_test_factors.items():\n end_idx = start_idx + layer_z_test.shape[1]\n influences[layer_id] = layer_z_test @ total_grad[:, start_idx:end_idx].T\n start_idx = end_idx\n return influences\n elif mode == InfluenceMode.Perturbation:\n total_mixed_grad = self._flat_loss_mixed_grad(\n x.to(self.model_device), y.to(self.model_device)\n )\n start_idx = 0\n influences = {}\n for layer_id, layer_z_test in z_test_factors.items():\n end_idx = start_idx + layer_z_test.shape[1]\n influences[layer_id] = torch.einsum(\n \"ia,j...a->ij...\",\n layer_z_test,\n total_mixed_grad[:, start_idx:end_idx],\n )\n start_idx = end_idx\n return influences\n else:\n raise UnsupportedInfluenceModeException(mode)\n
explore_hessian_regularization(\n x: Tensor, y: Tensor, regularization_values: List[float]\n) -> Dict[float, Dict[str, Tensor]]\n
Efficiently computes the influence for input x and label y for each layer of the model, for different values of the hessian regularization parameter. This is done by computing the gradient of the loss function for the input x and label y only once and then solving the Hessian Vector Product for each regularization value. This is useful for finding the optimal regularization value and for exploring how robust the influence values are to changes in the regularization value.
regularization_values
list of regularization values to use
TYPE: List[float]
List[float]
Dict[float, Dict[str, Tensor]]
A dictionary whose keys are the regularization values and whose values are dictionaries with the influences for each layer of the model, with the layer name as key.
def explore_hessian_regularization(\n self,\n x: torch.Tensor,\n y: torch.Tensor,\n regularization_values: List[float],\n) -> Dict[float, Dict[str, torch.Tensor]]:\n \"\"\"\n Efficiently computes the influence for input x and label y for each layer of the\n model, for different values of the hessian regularization parameter. This is done\n by computing the gradient of the loss function for the input x and label y only once\n and then solving the Hessian Vector Product for each regularization value. This is\n useful for finding the optimal regularization value and for exploring\n how robust the influence values are to changes in the regularization value.\n\n Args:\n x: model input to use in the gradient computations\n y: label tensor to compute gradients\n regularization_values: list of regularization values to use\n\n Returns:\n A dictionary containing with keys being the regularization values and values\n being dictionaries containing the influences for each layer of the model,\n with the layer name as key.\n \"\"\"\n grad = self._loss_grad(x, y)\n influences_by_reg_value = {}\n for reg_value in regularization_values:\n reg_factors = self._solve_hvp_by_layer(\n grad, self.ekfac_representation, reg_value\n )\n values = {}\n start_idx = 0\n for layer_id, layer_fac in reg_factors.items():\n end_idx = start_idx + layer_fac.shape[1]\n values[layer_id] = layer_fac @ grad[:, start_idx:end_idx].T\n start_idx = end_idx\n influences_by_reg_value[reg_value] = values\n return influences_by_reg_value\n
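Continuing the EK-FAC sketch above, a regularization sweep might look as follows (the values are chosen arbitrarily for illustration):

```python
# Sketch: reuses ekfac_model, x_clf, y_clf from the EK-FAC example above.
sweep = ekfac_model.explore_hessian_regularization(
    x_clf[:4], y_clf[:4], regularization_values=[1e-4, 1e-2, 1.0]
)
for reg_value, layer_values in sweep.items():
    print(reg_value, {name: v.shape for name, v in layer_values.items()})
```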
NystroemSketchInfluence(\n model: Module,\n loss: Callable[[Tensor, Tensor], Tensor],\n hessian_regularization: float,\n rank: int,\n)\n
Given a model and training data, it uses a low-rank approximation of the Hessian (derived via random projection Nyström approximation) in combination with the Sherman–Morrison–Woodbury formula to calculate the inverse of the Hessian vector product. More concretely, it computes a low-rank approximation
in factorized form and approximates the action of the inverse Hessian via
TYPE: float
rank of the low-rank approximation
def __init__(\n self,\n model: torch.nn.Module,\n loss: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],\n hessian_regularization: float,\n rank: int,\n):\n super().__init__(model, loss)\n self.hessian_regularization = hessian_regularization\n self.rank = rank\n
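A construction sketch, reusing model, loss and train_loader from the first example; note that both hessian_regularization and rank are required arguments.

```python
# Sketch: low-rank Nystroem sketch of the Hessian; rank kept small for the toy model.
from pydvl.influence.torch import NystroemSketchInfluence

infl_model = NystroemSketchInfluence(
    model, loss, hessian_regularization=0.01, rank=2
).fit(train_loader)
```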
Bases: ABC
Abstract base class for implementing pre-conditioners for improving the convergence of CG for systems of the form

\[ (A + \lambda \operatorname{I})x = \operatorname{rhs}, \]

i.e. a matrix \(M\) such that \(M^{-1}(A + \lambda \operatorname{I})\) has a better condition number than \(A + \lambda \operatorname{I}\).
fit(\n mat_mat_prod: Callable[[Tensor], Tensor],\n size: int,\n dtype: dtype,\n device: device,\n regularization: float = 0.0,\n)\n
Implement this to fit the pre-conditioner to the matrix represented by mat_mat_prod. Args: mat_mat_prod: a callable that computes the matrix-matrix product; size: size of the matrix represented by mat_mat_prod; dtype: data type of the matrix represented by mat_mat_prod; device: device of the matrix represented by mat_mat_prod; regularization: regularization parameter \(\lambda\) in the equation \((A + \lambda \operatorname{I})x = \operatorname{rhs}\). Returns: self.
src/pydvl/influence/torch/pre_conditioner.py
@abstractmethod\ndef fit(\n self,\n mat_mat_prod: Callable[[torch.Tensor], torch.Tensor],\n size: int,\n dtype: torch.dtype,\n device: torch.device,\n regularization: float = 0.0,\n):\n r\"\"\"\n Implement this to fit the pre-conditioner to the matrix represented by the\n mat_mat_prod\n Args:\n mat_mat_prod: a callable that computes the matrix-matrix product\n size: size of the matrix represented by `mat_mat_prod`\n dtype: data type of the matrix represented by `mat_mat_prod`\n device: device of the matrix represented by `mat_mat_prod`\n regularization: regularization parameter $\\lambda$ in the equation\n $ ( A + \\lambda \\operatorname{I})x = \\operatorname{rhs} $\n Returns:\n self\n \"\"\"\n pass\n
solve(rhs: Tensor)\n
Solve the equation \(M@Z = \operatorname{rhs}\). Args: rhs: right-hand side of the equation; corresponds to the residual vector (or matrix) in the conjugate gradient method.
solution \\(M^{-1}\\operatorname{rhs}\\)
def solve(self, rhs: torch.Tensor):\n r\"\"\"\n Solve the equation $M@Z = \\operatorname{rhs}$\n Args:\n rhs: right hand side of the equation, corresponds to the residuum vector\n (or matrix) in the conjugate gradient method\n\n Returns:\n solution $M^{-1}\\operatorname{rhs}$\n\n \"\"\"\n if not self.is_fitted:\n raise NotFittedException(type(self))\n\n return self._solve(rhs)\n
JacobiPreConditioner(num_samples_estimator: int = 1)\n
Bases: PreConditioner
PreConditioner
Pre-conditioner for improving the convergence of CG for systems of the form

\[ (A + \lambda \operatorname{I})x = \operatorname{rhs}. \]

The JacobiPreConditioner uses the diagonal information of the matrix \(A\). The diagonal elements are not computed directly but estimated via Hutchinson's estimator

\[ \operatorname{diag}(A) \approx \frac{1}{N} \sum_{i=1}^{N} u_i \odot (A u_i), \]

where the \(u_i\) are i.i.d. Gaussian random vectors. It works well in case the matrix \(A + \lambda \operatorname{I}\) is diagonally dominant. For more information, see the documentation of Conjugate Gradient. The constructor argument num_samples_estimator sets the number of samples \(N\) used in the computation of Hutchinson's estimator.
def __init__(self, num_samples_estimator: int = 1):\n self.num_samples_estimator = num_samples_estimator\n
Fits by computing an estimate of the diagonal of the matrix represented by mat_mat_prod via Hutchinson's estimator
a callable representing the matrix-matrix product
size
size of the square matrix
needed data type of inputs for the mat_mat_prod
needed device for inputs of mat_mat_prod
TYPE: device
regularization
regularization parameter \\(\\lambda\\) in \\((A+\\lambda I)x=b\\)
def fit(\n self,\n mat_mat_prod: Callable[[torch.Tensor], torch.Tensor],\n size: int,\n dtype: torch.dtype,\n device: torch.device,\n regularization: float = 0.0,\n):\n r\"\"\"\n Fits by computing an estimate of the diagonal of the matrix represented by\n `mat_mat_prod` via Hutchinson's estimator\n\n Args:\n mat_mat_prod: a callable representing the matrix-matrix product\n size: size of the square matrix\n dtype: needed data type of inputs for the mat_mat_prod\n device: needed device for inputs of mat_mat_prod\n regularization: regularization parameter\n $\\lambda$ in $(A+\\lambda I)x=b$\n \"\"\"\n random_samples = torch.randn(\n size, self.num_samples_estimator, device=device, dtype=dtype\n )\n diagonal_estimate = torch.sum(\n torch.mul(random_samples, mat_mat_prod(random_samples)), dim=1\n )\n diagonal_estimate /= self.num_samples_estimator\n self._diag = diagonal_estimate\n self._reg = regularization\n
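The estimator used in fit can be checked numerically on a small matrix: for Gaussian vectors u, the expectation of u ⊙ (A u) is the diagonal of A. A standalone illustration (plain torch, no pyDVL required):

```python
# Hutchinson diagonal estimate: the sample mean of u * (A @ u) approximates diag(A).
import torch

torch.manual_seed(0)
A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
num_samples = 10_000
u = torch.randn(2, num_samples)

diag_estimate = torch.sum(u * (A @ u), dim=1) / num_samples
print(diag_estimate)  # close to tensor([4., 3.]), i.e. diag(A)
```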
NystroemPreConditioner(rank: int)\n
The NystroemPreConditioner computes a low-rank approximation
where \\((\\cdot)^{\\dagger}\\) denotes the Moore-Penrose inverse, and uses the matrix
for pre-conditioning, where \\( \\sigma_{\\text{rank}} \\) is the smallest eigenvalue of the low-rank approximation.
def __init__(self, rank: int):\n self._rank = rank\n
Fits by computing a low-rank approximation of the matrix represented by mat_mat_prod via Nystroem approximation
def fit(\n self,\n mat_mat_prod: Callable[[torch.Tensor], torch.Tensor],\n size: int,\n dtype: torch.dtype,\n device: torch.device,\n regularization: float = 0.0,\n):\n r\"\"\"\n Fits by computing a low-rank approximation of the matrix represented by\n `mat_mat_prod` via Nystroem approximation\n\n Args:\n mat_mat_prod: a callable representing the matrix-matrix product\n size: size of the square matrix\n dtype: needed data type of inputs for the mat_mat_prod\n device: needed device for inputs of mat_mat_prod\n regularization: regularization parameter\n $\\lambda$ in $(A+\\lambda I)x=b$\n \"\"\"\n\n self._low_rank_approx = randomized_nystroem_approximation(\n mat_mat_prod, size, self._rank, dtype, mat_vec_device=device\n )\n self._regularization = regularization\n
module-attribute
TorchTensorContainerType = Union[\n Tensor, Collection[Tensor], Mapping[str, Tensor]\n]\n
Type for a PyTorch tensor or a container thereof.
TorchNumpyConverter(device: Optional[device] = None)\n
Bases: NumpyConverter[Tensor]
NumpyConverter[Tensor]
Helper class for converting between torch.Tensor and numpy.ndarray
Optional device parameter to move the resulting torch tensors to the specified device
src/pydvl/influence/torch/util.py
def __init__(self, device: Optional[torch.device] = None):\n self.device = device\n
to_numpy(x: Tensor) -> NDArray\n
Convert a detached torch.Tensor to numpy.ndarray
def to_numpy(self, x: torch.Tensor) -> NDArray:\n \"\"\"\n Convert a detached [torch.Tensor][torch.Tensor] to\n [numpy.ndarray][numpy.ndarray]\n \"\"\"\n arr: NDArray = x.cpu().numpy()\n return arr\n
from_numpy(x: NDArray) -> Tensor\n
Convert a numpy.ndarray to torch.Tensor and optionally move it to a provided device
def from_numpy(self, x: NDArray) -> torch.Tensor:\n \"\"\"\n Convert a [numpy.ndarray][numpy.ndarray] to [torch.Tensor][torch.Tensor] and\n optionally move it to a provided device\n \"\"\"\n t = torch.from_numpy(x)\n if self.device is not None:\n t = t.to(self.device)\n return t\n
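A round-trip sketch; the module path pydvl.influence.torch.util follows the source location shown above.

```python
# Round trip: torch -> numpy -> torch, optionally moving the result to a device.
import numpy as np
import torch
from pydvl.influence.torch.util import TorchNumpyConverter

converter = TorchNumpyConverter()  # e.g. TorchNumpyConverter(torch.device("cpu"))
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)

arr = converter.to_numpy(t)     # numpy.ndarray
t2 = converter.from_numpy(arr)  # back to torch.Tensor
assert np.array_equal(arr, t2.numpy())
```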
Bases: SequenceAggregator[Tensor]
SequenceAggregator[Tensor]
An aggregator that concatenates tensors using PyTorch's torch.cat function. Concatenation is done along the first dimension of the chunks.
__call__(tensor_generator: Generator[Tensor, None, None])\n
Aggregates tensors from a single-level generator into a single tensor by concatenating them. This method is a straightforward way to combine a sequence of tensors into one larger tensor.
A generator that yields torch.Tensor objects.
TYPE: Generator[Tensor, None, None]
Generator[Tensor, None, None]
A single tensor formed by concatenating all tensors from the generator. The concatenation is performed along the default dimension (0).
def __call__(self, tensor_generator: Generator[torch.Tensor, None, None]):\n \"\"\"\n Aggregates tensors from a single-level generator into a single tensor by\n concatenating them. This method is a straightforward way to combine a sequence\n of tensors into one larger tensor.\n\n Args:\n tensor_generator: A generator that yields `torch.Tensor` objects.\n\n Returns:\n A single tensor formed by concatenating all tensors from the generator.\n The concatenation is performed along the default dimension (0).\n \"\"\"\n return torch.cat(list(tensor_generator))\n
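A usage sketch; the class name TorchCatAggregator is an assumption (it does not appear in the extracted text above), and the import path follows the util module shown earlier.

```python
# Sketch: concatenate a stream of equally shaped chunks along dimension 0.
import torch
from pydvl.influence.torch.util import TorchCatAggregator  # class name assumed

def chunks():
    for i in range(3):
        yield torch.full((2, 2), float(i))

aggregator = TorchCatAggregator()
result = aggregator(chunks())
assert result.shape == (6, 2)  # three (2, 2) chunks stacked along dim 0
```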
Bases: NestedSequenceAggregator[Tensor]
NestedSequenceAggregator[Tensor]
An aggregator that concatenates tensors using PyTorch's torch.cat function. Concatenation is done along the first two dimensions of the chunks.
__call__(\n nested_generators_of_tensors: Generator[\n Generator[Tensor, None, None], None, None\n ]\n)\n
Aggregates tensors from a nested generator structure into a single tensor by concatenation. Each inner generator is first concatenated along dimension 1 into a tensor, and these tensors are then concatenated along dimension 0 to form the final tensor.
nested_generators_of_tensors
A generator of generators, where each inner generator yields torch.Tensor objects.
TYPE: Generator[Generator[Tensor, None, None], None, None]
Generator[Generator[Tensor, None, None], None, None]
A single tensor formed by concatenating all tensors from the nested
generators.
def __call__(\n self,\n nested_generators_of_tensors: Generator[\n Generator[torch.Tensor, None, None], None, None\n ],\n):\n \"\"\"\n Aggregates tensors from a nested generator structure into a single tensor by\n concatenating. Each inner generator is first concatenated along dimension 1 into\n a tensor, and then these tensors are concatenated along dimension 0 together to\n form the final tensor.\n\n Args:\n nested_generators_of_tensors: A generator of generators, where each inner\n generator yields `torch.Tensor` objects.\n\n Returns:\n A single tensor formed by concatenating all tensors from the nested\n generators.\n\n \"\"\"\n return torch.cat(\n list(\n map(\n lambda tensor_gen: torch.cat(list(tensor_gen), dim=1),\n nested_generators_of_tensors,\n )\n )\n )\n
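A corresponding sketch for the nested case; the class name NestedTorchCatAggregator is again an assumption.

```python
# Sketch: inner generators are concatenated along dim 1, the results along dim 0.
import torch
from pydvl.influence.torch.util import NestedTorchCatAggregator  # class name assumed

def nested_chunks():
    for i in range(2):
        def inner(i=i):
            for j in range(3):
                yield torch.full((2, 2), float(10 * i + j))
        yield inner()

aggregator = NestedTorchCatAggregator()
result = aggregator(nested_chunks())
assert result.shape == (4, 6)  # two blocks of shape (2, 6), stacked along dim 0
```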
EkfacRepresentation(\n layer_names: Iterable[str],\n layers_module: Iterable[Module],\n evecs_a: Iterable[Tensor],\n evecs_g: Iterable[Tensor],\n diags: Iterable[Tensor],\n)\n
Container class for the EKFAC representation of the Hessian. It can be iterated over to get the layer names and their corresponding module, eigenvectors and diagonal elements of the factorized Hessian matrix.
layer_names
Names of the layers.
TYPE: Iterable[str]
Iterable[str]
layers_module
The layers.
TYPE: Iterable[Module]
Iterable[Module]
evecs_a
The a eigenvectors of the ekfac representation.
TYPE: Iterable[Tensor]
Iterable[Tensor]
evecs_g
The g eigenvectors of the ekfac representation.
diags
The diagonal elements of the factorized Hessian matrix.
get_layer_evecs() -> Tuple[Dict[str, Tensor], Dict[str, Tensor]]\n
It returns two dictionaries, one for the a eigenvectors and one for the g eigenvectors, with the layer names as keys. The eigenvectors are in the same order as the layers in the model.
def get_layer_evecs(\n self,\n) -> Tuple[Dict[str, torch.Tensor], Dict[str, torch.Tensor]]:\n \"\"\"\n It returns two dictionaries, one for the a eigenvectors and one for the g\n eigenvectors, with the layer names as keys. The eigenvectors are in the same\n order as the layers in the model.\n \"\"\"\n evecs_a_dict = {layer_name: evec_a for layer_name, (_, evec_a, _, _) in self}\n evecs_g_dict = {layer_name: evec_g for layer_name, (_, _, evec_g, _) in self}\n return evecs_a_dict, evecs_g_dict\n
to_model_device(x: Tensor, model: Module) -> Tensor\n
Returns the tensor x moved to the device of the model, if device of model is set
The tensor to be moved to the device of the model.
The model whose device will be used to move the tensor.
The tensor x moved to the device of the model, if device of model is set.
def to_model_device(x: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:\n \"\"\"\n Returns the tensor `x` moved to the device of the `model`, if device of model is set\n\n Args:\n x: The tensor to be moved to the device of the model.\n model: The model whose device will be used to move the tensor.\n\n Returns:\n The tensor `x` moved to the device of the `model`, if device of model is set.\n \"\"\"\n device = next(model.parameters()).device\n return x.to(device)\n
reshape_vector_to_tensors(\n input_vector: Tensor, target_shapes: Iterable[Tuple[int, ...]]\n) -> Tuple[Tensor, ...]\n
Reshape a 1D tensor into multiple tensors with specified shapes.
This function takes a 1D tensor (input_vector) and reshapes it into a series of tensors with shapes given by 'target_shapes'. The reshaped tensors are returned as a tuple in the same order as their corresponding shapes.
The total number of elements in 'input_vector' must be equal to the sum of the products of the shapes in 'target_shapes'.
input_vector
The 1D tensor to be reshaped. Must be 1D.
target_shapes
An iterable of tuples. Each tuple defines the shape of a tensor to be reshaped from the 'input_vector'.
TYPE: Iterable[Tuple[int, ...]]
Iterable[Tuple[int, ...]]
Tuple[Tensor, ...]
A tuple of reshaped tensors.
ValueError
If 'input_vector' is not a 1D tensor or if the total number of elements in 'input_vector' does not match the sum of the products of the shapes in 'target_shapes'.
def reshape_vector_to_tensors(\n input_vector: torch.Tensor, target_shapes: Iterable[Tuple[int, ...]]\n) -> Tuple[torch.Tensor, ...]:\n \"\"\"\n Reshape a 1D tensor into multiple tensors with specified shapes.\n\n This function takes a 1D tensor (input_vector) and reshapes it into a series of\n tensors with shapes given by 'target_shapes'.\n The reshaped tensors are returned as a tuple in the same order\n as their corresponding shapes.\n\n Note:\n The total number of elements in 'input_vector' must be equal to the\n sum of the products of the shapes in 'target_shapes'.\n\n Args:\n input_vector: The 1D tensor to be reshaped. Must be 1D.\n target_shapes: An iterable of tuples. Each tuple defines the shape of a tensor\n to be reshaped from the 'input_vector'.\n\n Returns:\n A tuple of reshaped tensors.\n\n Raises:\n ValueError: If 'input_vector' is not a 1D tensor or if the total\n number of elements in 'input_vector' does not\n match the sum of the products of the shapes in 'target_shapes'.\n \"\"\"\n\n if input_vector.dim() != 1:\n raise ValueError(\"Input vector must be a 1D tensor\")\n\n total_elements = sum(math.prod(shape) for shape in target_shapes)\n\n if total_elements != input_vector.shape[0]:\n raise ValueError(\n f\"The total elements in shapes {total_elements} \"\n f\"does not match the vector length {input_vector.shape[0]}\"\n )\n\n tensors = []\n start = 0\n for shape in target_shapes:\n size = math.prod(shape) # compute the total size of the tensor with this shape\n tensors.append(\n input_vector[start : start + size].view(shape)\n ) # slice the vector and reshape it\n start += size\n return tuple(tensors)\n
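A short example of the splitting behaviour:

```python
# Split a flat vector back into per-tensor shapes (2*3 + 4 = 10 elements).
import torch
from pydvl.influence.torch.util import reshape_vector_to_tensors

flat = torch.arange(10.0)
a, b = reshape_vector_to_tensors(flat, [(2, 3), (4,)])
assert a.shape == (2, 3) and b.shape == (4,)
```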
align_structure(\n source: Mapping[str, Tensor], target: TorchTensorContainerType\n) -> Dict[str, Tensor]\n
This function transforms target to have the same structure as source, i.e., it should be a dictionary with the same keys as source and each corresponding value in target should have the same shape as the value in source.
target
source
The reference dictionary containing PyTorch tensors.
TYPE: Mapping[str, Tensor]
Mapping[str, Tensor]
The input to be harmonized. It can be a dictionary, tuple, or tensor.
TYPE: TorchTensorContainerType
TorchTensorContainerType
The harmonized version of target.
If target cannot be harmonized to match source.
def align_structure(\n source: Mapping[str, torch.Tensor],\n target: TorchTensorContainerType,\n) -> Dict[str, torch.Tensor]:\n \"\"\"\n This function transforms `target` to have the same structure as `source`, i.e.,\n it should be a dictionary with the same keys as `source` and each corresponding\n value in `target` should have the same shape as the value in `source`.\n\n Args:\n source: The reference dictionary containing PyTorch tensors.\n target: The input to be harmonized. It can be a dictionary, tuple, or tensor.\n\n Returns:\n The harmonized version of `target`.\n\n Raises:\n ValueError: If `target` cannot be harmonized to match `source`.\n \"\"\"\n\n tangent_dict: Dict[str, torch.Tensor]\n\n if isinstance(target, dict):\n if list(target.keys()) != list(source.keys()):\n raise ValueError(\"The keys in 'target' do not match the keys in 'source'.\")\n\n if [v.shape for v in target.values()] != [v.shape for v in source.values()]:\n raise ValueError(\n \"The shapes of the values in 'target' do not match the shapes \"\n \"of the values in 'source'.\"\n )\n\n tangent_dict = target\n\n elif isinstance(target, tuple) or isinstance(target, list):\n if [v.shape for v in target] != [v.shape for v in source.values()]:\n raise ValueError(\n \"'target' is a tuple/list but its elements' shapes do not match \"\n \"the shapes of the values in 'source'.\"\n )\n\n tangent_dict = dict(zip(source.keys(), target))\n\n elif isinstance(target, torch.Tensor):\n try:\n tangent_dict = dict(\n zip(\n source.keys(),\n reshape_vector_to_tensors(\n target, [p.shape for p in source.values()]\n ),\n )\n )\n except Exception as e:\n raise ValueError(\n f\"'target' is a tensor but cannot be reshaped to match 'source'. \"\n f\"Original error: {e}\"\n )\n\n else:\n raise ValueError(f\"'target' is of type {type(target)} which is not supported.\")\n\n return tangent_dict\n
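For instance, a flat tensor can be harmonized to a model's named-parameter layout like this:

```python
# Harmonize a flat tensor to the key/shape structure of a reference dict.
import torch
from pydvl.influence.torch.util import align_structure

model = torch.nn.Linear(3, 2)
source = {k: p.detach() for k, p in model.named_parameters()}  # weight: (2, 3), bias: (2,)

flat = torch.zeros(sum(p.numel() for p in source.values()))    # 8 elements
aligned = align_structure(source, flat)
assert list(aligned) == ["weight", "bias"]
assert aligned["weight"].shape == (2, 3) and aligned["bias"].shape == (2,)
```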
align_with_model(x: TorchTensorContainerType, model: Module)\n
Aligns an input to the model's parameter structure, i.e. transforms it into a dict with the same keys as model.named_parameters() and matching tensor shapes
The input to be aligned. It can be a dictionary, tuple, or tensor.
model to use for alignment
The aligned version of x.
If x cannot be aligned to match the model's parameters.
def align_with_model(x: TorchTensorContainerType, model: torch.nn.Module):\n \"\"\"\n Aligns an input to the model's parameter structure, i.e. transforms it into a dict\n with the same keys as model.named_parameters() and matching tensor shapes\n\n Args:\n x: The input to be aligned. It can be a dictionary, tuple, or tensor.\n model: model to use for alignment\n\n Returns:\n The aligned version of `x`.\n\n Raises:\n ValueError: If `x` cannot be aligned to match the model's parameters .\n\n \"\"\"\n model_params = {k: p for k, p in model.named_parameters() if p.requires_grad}\n return align_structure(model_params, x)\n
flatten_dimensions(\n tensors: Iterable[Tensor],\n shape: Optional[Tuple[int, ...]] = None,\n concat_at: int = -1,\n) -> Tensor\n
Flattens the dimensions of each tensor in the given iterable and concatenates them along a specified dimension.
This function takes an iterable of PyTorch tensors and flattens each tensor. Optionally, each tensor can be reshaped to a specified shape before concatenation. The concatenation is performed along the dimension specified by concat_at.
concat_at
tensors
An iterable containing PyTorch tensors to be flattened and concatenated.
shape
A tuple representing the desired shape to which each tensor is reshaped before concatenation. If None, tensors are flattened to 1D.
TYPE: Optional[Tuple[int, ...]] DEFAULT: None
Optional[Tuple[int, ...]]
The dimension along which to concatenate the tensors.
TYPE: int DEFAULT: -1
-1
A single tensor resulting from the concatenation of the input tensors,
each either flattened or reshaped as specified.
>>> tensors = [torch.tensor([[1, 2], [3, 4]]), torch.tensor([[5, 6], [7, 8]])]\n>>> flatten_dimensions(tensors)\ntensor([1, 2, 3, 4, 5, 6, 7, 8])\n\n>>> flatten_dimensions(tensors, shape=(2, 2), concat_at=0)\ntensor([[1, 2],\n [3, 4],\n [5, 6],\n [7, 8]])\n
def flatten_dimensions(\n tensors: Iterable[torch.Tensor],\n shape: Optional[Tuple[int, ...]] = None,\n concat_at: int = -1,\n) -> torch.Tensor:\n \"\"\"\n Flattens the dimensions of each tensor in the given iterable and concatenates them\n along a specified dimension.\n\n This function takes an iterable of PyTorch tensors and flattens each tensor.\n Optionally, each tensor can be reshaped to a specified shape before concatenation.\n The concatenation is performed along the dimension specified by `concat_at`.\n\n Args:\n tensors: An iterable containing PyTorch tensors to be flattened\n and concatenated.\n shape: A tuple representing the desired shape to which each tensor is reshaped\n before concatenation. If None, tensors are flattened to 1D.\n concat_at: The dimension along which to concatenate the tensors.\n\n Returns:\n A single tensor resulting from the concatenation of the input tensors,\n each either flattened or reshaped as specified.\n\n ??? Example\n ```pycon\n >>> tensors = [torch.tensor([[1, 2], [3, 4]]), torch.tensor([[5, 6], [7, 8]])]\n >>> flatten_dimensions(tensors)\n tensor([1, 2, 3, 4, 5, 6, 7, 8])\n\n >>> flatten_dimensions(tensors, shape=(2, 2), concat_at=0)\n tensor([[1, 2],\n [3, 4],\n [5, 6],\n [7, 8]])\n ```\n \"\"\"\n return torch.cat(\n [t.reshape(-1) if shape is None else t.reshape(*shape) for t in tensors],\n dim=concat_at,\n )\n
torch_dataset_to_dask_array(\n dataset: Dataset,\n chunk_size: int,\n total_size: Optional[int] = None,\n resulting_dtype: Type[number] = np.float32,\n) -> Tuple[Array, ...]\n
Construct tuple of dask arrays from a PyTorch dataset, using dask.delayed
dataset
A PyTorch dataset
TYPE: Dataset
chunk_size
The size of the chunks for the resulting Dask arrays.
total_size
If the dataset does not implement len, provide the length via this parameter. If None, the length of the dataset is inferred by accessing the dataset once.
resulting_dtype
The dtype of the resulting dask.array.Array
TYPE: Type[number] DEFAULT: float32
Type[number]
float32
import torch\nfrom torch.utils.data import TensorDataset\nx = torch.rand((20, 3))\ny = torch.rand((20, 1))\ndataset = TensorDataset(x, y)\nda_x, da_y = torch_dataset_to_dask_array(dataset, 4)\n
Tuple[Array, ...]
Tuple of Dask arrays corresponding to each tensor in the dataset.
def torch_dataset_to_dask_array(\n dataset: Dataset,\n chunk_size: int,\n total_size: Optional[int] = None,\n resulting_dtype: Type[np.number] = np.float32,\n) -> Tuple[da.Array, ...]:\n \"\"\"\n Construct tuple of dask arrays from a PyTorch dataset, using dask.delayed\n\n Args:\n dataset: A PyTorch [dataset][torch.utils.data.Dataset]\n chunk_size: The size of the chunks for the resulting Dask arrays.\n total_size: If the dataset does not implement len, provide the length\n via this parameter. If None\n the length of the dataset is inferred via accessing the dataset once.\n resulting_dtype: The dtype of the resulting [dask.array.Array][dask.array.Array]\n\n ??? Example\n ```python\n import torch\n from torch.utils.data import TensorDataset\n x = torch.rand((20, 3))\n y = torch.rand((20, 1))\n dataset = TensorDataset(x, y)\n da_x, da_y = torch_dataset_to_dask_array(dataset, 4)\n ```\n\n Returns:\n Tuple of Dask arrays corresponding to each tensor in the dataset.\n \"\"\"\n\n def _infer_data_len(d_set: Dataset):\n try:\n n_data = len(d_set)\n if total_size is not None and n_data != total_size:\n raise ValueError(\n f\"The number of samples in the dataset ({n_data}), derived \"\n f\"from calling \u00b4len\u00b4, does not match the provided \"\n f\"total number of samples ({total_size}). \"\n f\"Call the function without total_size.\"\n )\n return n_data\n except TypeError as e:\n err_msg = (\n f\"Could not infer the number of samples in the dataset from \"\n f\"calling \u00b4len\u00b4. Original error: {e}.\"\n )\n if total_size is not None:\n logger.warning(\n err_msg\n + f\" Using the provided total number of samples {total_size}.\"\n )\n return total_size\n else:\n logger.warning(\n err_msg + f\" Infer the number of samples from the dataset, \"\n f\"via iterating the dataset once. \"\n f\"This might induce severe overhead, so consider\"\n f\"providing total_size, if you know the number of samples \"\n f\"beforehand.\"\n )\n idx = 0\n while True:\n try:\n t = d_set[idx]\n if all(_t.numel() == 0 for _t in t):\n return idx\n idx += 1\n\n except IndexError:\n return idx\n\n sample = dataset[0]\n if not isinstance(sample, tuple):\n sample = (sample,)\n\n def _get_chunk(\n start_idx: int, stop_idx: int, d_set: Dataset\n ) -> Tuple[torch.Tensor, ...]:\n try:\n t = d_set[start_idx:stop_idx]\n if not isinstance(t, tuple):\n t = (t,)\n return t # type:ignore\n except Exception:\n nested_tensor_list = [\n [d_set[idx][k] for idx in range(start_idx, stop_idx)]\n for k in range(len(sample))\n ]\n return tuple(map(torch.stack, nested_tensor_list))\n\n n_samples = _infer_data_len(dataset)\n chunk_indices = [\n (i, min(i + chunk_size, n_samples)) for i in range(0, n_samples, chunk_size)\n ]\n delayed_dataset = dask.delayed(dataset)\n delayed_chunks = [\n dask.delayed(partial(_get_chunk, start, stop))(delayed_dataset)\n for (start, stop) in chunk_indices\n ]\n\n delayed_arrays_dict: Dict[int, List[da.Array]] = {k: [] for k in range(len(sample))}\n\n for chunk, (start, stop) in zip(delayed_chunks, chunk_indices):\n for tensor_idx, sample_tensor in enumerate(sample):\n delayed_tensor = da.from_delayed(\n dask.delayed(lambda t: t.cpu().numpy())(chunk[tensor_idx]),\n shape=(stop - start, *sample_tensor.shape),\n dtype=resulting_dtype,\n )\n\n delayed_arrays_dict[tensor_idx].append(delayed_tensor)\n\n return tuple(\n da.concatenate(array_list) for array_list in delayed_arrays_dict.values()\n )\n
empirical_cross_entropy_loss_fn(\n model_output: Tensor, *args, **kwargs\n) -> Tensor\n
Computes the empirical cross entropy loss of the model output. This is the cross entropy loss of the model output without the labels. The function takes all the usual arguments and keyword arguments of the cross entropy loss function, so that it is compatible with the PyTorch cross entropy loss function. However, it ignores everything except the first argument, which is the model output.
model_output
The output of the model.
def empirical_cross_entropy_loss_fn(\n model_output: torch.Tensor, *args, **kwargs\n) -> torch.Tensor:\n \"\"\"\n Computes the empirical cross entropy loss of the model output. This is the\n cross entropy loss of the model output without the labels. The function takes\n all the usual arguments and keyword arguments of the cross entropy loss\n function, so that it is compatible with the PyTorch cross entropy loss\n function. However, it ignores everything except the first argument, which is\n the model output.\n\n Args:\n model_output: The output of the model.\n \"\"\"\n probs_ = torch.softmax(model_output, dim=1)\n log_probs_ = torch.log(probs_)\n log_probs_ = torch.where(\n torch.isfinite(log_probs_), log_probs_, torch.zeros_like(log_probs_)\n )\n return torch.sum(log_probs_ * probs_.detach() ** 0.5)\n
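A small sketch of the call signature; the import path pydvl.influence.torch.util is an assumption based on this page's grouping.

```python
# The function only looks at the model output; the remaining arguments are ignored.
import torch
from pydvl.influence.torch.util import empirical_cross_entropy_loss_fn  # path assumed

logits = torch.randn(4, 3, requires_grad=True)
loss = empirical_cross_entropy_loss_fn(logits, torch.zeros(4))  # second arg ignored
loss.backward()  # gradients flow only through the model output
```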
This module provides a common interface to parallelization backends. The list of supported backends is documented in the Parallel Backends package. Backends should be instantiated directly and passed to the respective valuation method.
We use executors that implement the Executor interface to submit tasks in parallel. The basic high-level pattern is:
from pydvl.parallel import JoblibParallelBackend\n\nparallel_backend = JoblibParallelBackend()\nwith parallel_backend.executor(max_workers=2) as executor:\n future = executor.submit(lambda x: x + 1, 1)\n result = future.result()\nassert result == 2\n
Running a map-style job is also easy:
from pydvl.parallel import JoblibParallelBackend\n\nparallel_backend = JoblibParallelBackend()\nwith parallel_backend.executor(max_workers=2) as executor:\n results = list(executor.map(lambda x: x + 1, range(5)))\nassert results == [1, 2, 3, 4, 5]\n
Passing large objects
When running tasks which accept heavy inputs, it is important to first use put() on the object and use the returned reference as argument to the callable within submit(). For example:
submit()
u_ref = parallel_backend.put(u)\n...\nexecutor.submit(task, utility=u_ref)\n
task()
get()
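A slightly fuller, hedged sketch of this pattern; heavy_data and task are illustrative stand-ins, not pyDVL objects.

```python
# Sketch: store a large object once with put() and pass the reference to submit().
from pydvl.parallel import JoblibParallelBackend

def task(utility):
    return len(utility)

heavy_data = list(range(1_000_000))  # any large picklable object

parallel_backend = JoblibParallelBackend()
u_ref = parallel_backend.put(heavy_data)  # store once, reuse the reference
with parallel_backend.executor(max_workers=2) as executor:
    future = executor.submit(task, utility=u_ref)
    print(future.result())  # 1000000
```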
There is an alternative map-reduce implementation, MapReduceJob, which internally uses joblib's higher-level API with Parallel(), and therefore indirectly also supports the use of Dask and Ray.
Parallel()
Bases: Flag
Flag
Policy to use when cancelling futures after exiting an Executor.
Note
Not all backends support all policies.
NONE
Do not cancel any futures.
PENDING
Cancel all pending futures, but not running ones.
RUNNING
Cancel all running futures, but not pending ones.
ALL
Cancel all pending and running futures.
Abstract base class for all parallel backends.
classmethod
executor(\n max_workers: int | None = None,\n *,\n config: ParallelConfig | None = None,\n cancel_futures: CancellationPolicy | bool = CancellationPolicy.PENDING\n) -> Executor\n
Returns a futures executor for the parallel backend.
src/pydvl/parallel/backend.py
@classmethod\n@abstractmethod\ndef executor(\n cls,\n max_workers: int | None = None,\n *,\n config: ParallelConfig | None = None,\n cancel_futures: CancellationPolicy | bool = CancellationPolicy.PENDING,\n) -> Executor:\n \"\"\"Returns a futures executor for the parallel backend.\"\"\"\n ...\n
init_parallel_backend(\n config: ParallelConfig | None = None, backend_name: str | None = None\n) -> ParallelBackend\n
Initializes the parallel backend and returns an instance of it.
The following example creates a parallel backend instance with the default configuration, which is a local joblib backend.
If you don't pass any arguments, then by default it will instantiate the JoblibParallelBackend:
parallel_backend = init_parallel_backend()\n
To create a parallel backend instance with, for example, ray as the backend, you can pass the backend name as a string:
ray
parallel_backend = init_parallel_backend(backend_name=\"ray\")\n
The following is an example of the deprecated way for instantiating a parallel backend:
config = ParallelConfig()\nparallel_backend = init_parallel_backend(config)\n
backend_name
Name of the backend to instantiate.
TYPE: str | None DEFAULT: None
str | None
config
(DEPRECATED) Object configuring parallel computation, with cluster address, number of cpus, etc.
TYPE: ParallelConfig | None DEFAULT: None
ParallelConfig | None
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef init_parallel_backend(\n config: ParallelConfig | None = None, backend_name: str | None = None\n) -> ParallelBackend:\n \"\"\"Initializes the parallel backend and returns an instance of it.\n\n The following example creates a parallel backend instance with the default\n configuration, which is a local joblib backend.\n\n If you don't pass any arguments, then by default it will instantiate\n the JoblibParallelBackend:\n\n ??? Example\n ```python\n parallel_backend = init_parallel_backend()\n ```\n\n To create a parallel backend instance with for example `ray` as a backend,\n you can pass the backend name as a string:.\n\n ??? Example\n ```python\n parallel_backend = init_parallel_backend(backend_name=\"ray\")\n ```\n\n\n The following is an example of the deprecated\n way for instantiating a parallel backend:\n\n ??? Example\n ``` python\n config = ParallelConfig()\n parallel_backend = init_parallel_backend(config)\n ```\n\n Args:\n backend_name: Name of the backend to instantiate.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n\n\n \"\"\"\n if backend_name is None:\n if config is None:\n backend_name = \"joblib\"\n else:\n backend_name = config.backend\n\n try:\n parallel_backend_cls = ParallelBackend.BACKENDS[backend_name]\n except KeyError:\n raise NotImplementedError(f\"Unexpected parallel backend {backend_name}\")\n return parallel_backend_cls(config) # type: ignore\n
available_cpus() -> int\n
Platform-independent count of available cores.
FIXME: do we really need this or is os.cpu_count enough? Is this portable?
os.cpu_count
Number of cores, or 1 if it is not possible to determine.
def available_cpus() -> int:\n \"\"\"Platform-independent count of available cores.\n\n FIXME: do we really need this or is `os.cpu_count` enough? Is this portable?\n\n Returns:\n Number of cores, or 1 if it is not possible to determine.\n \"\"\"\n from platform import system\n\n if system() != \"Linux\":\n return os.cpu_count() or 1\n return len(os.sched_getaffinity(0)) # type: ignore\n
ParallelConfig(\n backend: Literal[\"joblib\", \"ray\"] = \"joblib\",\n address: Optional[Union[str, Tuple[str, int]]] = None,\n n_cpus_local: Optional[int] = None,\n logging_level: Optional[int] = None,\n wait_timeout: float = 1.0,\n)\n
Configuration for parallel computation backend.
backend
Type of backend to use. Defaults to 'joblib'
TYPE: Literal['joblib', 'ray'] DEFAULT: 'joblib'
Literal['joblib', 'ray']
'joblib'
address
(DEPRECATED) Address of existing remote or local cluster to use.
TYPE: Optional[Union[str, Tuple[str, int]]] DEFAULT: None
Optional[Union[str, Tuple[str, int]]]
n_cpus_local
(DEPRECATED) Number of CPUs to use when creating a local ray cluster. This has no effect when using an existing ray cluster.
logging_level
(DEPRECATED) Logging level for the parallel backend's worker.
wait_timeout
(DEPRECATED) Timeout in seconds for waiting on futures.
TYPE: float DEFAULT: 1.0
1.0
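As a quick illustration of this (deprecated) configuration object, here is a rough sketch using the same import path as the init_executor example further below; the specific values are arbitrary:

```python
from pydvl.parallel.futures import ParallelConfig

# Local joblib backend with an explicit number of CPUs (deprecated interface)
config = ParallelConfig(backend="joblib", n_cpus_local=4)

# Ray backend attached to an existing cluster (also deprecated)
ray_config = ParallelConfig(backend="ray", address="auto")
```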
This module contains a wrapper around joblib's Parallel() class that makes it easy to run map-reduce jobs.
Deprecation
This interface might be deprecated or changed in a future release before 1.0
MapReduceJob(\n inputs: Union[Collection[T], T],\n map_func: MapFunction[R],\n reduce_func: ReduceFunction[R] = identity,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n *,\n map_kwargs: Optional[Dict] = None,\n reduce_kwargs: Optional[Dict] = None,\n n_jobs: int = -1,\n timeout: Optional[float] = None\n)\n
Bases: Generic[T, R]
Generic[T, R]
Takes an embarrassingly parallel function and runs it in n_jobs parallel jobs, splitting the data evenly into a number of chunks equal to the number of jobs.
Typing information for objects of this class requires the type of the inputs that are split for map_func and the type of its output.
map_func
inputs
The input that will be split and passed to map_func. If it is not a sequence object, it will be repeated n_jobs times.
TYPE: Union[Collection[T], T]
Union[Collection[T], T]
Function that will be applied to the input chunks in each job.
TYPE: MapFunction[R]
MapFunction[R]
reduce_func
Function that will be applied to the results of map_func to reduce them.
TYPE: ReduceFunction[R] DEFAULT: identity
ReduceFunction[R]
identity
map_kwargs
Keyword arguments that will be passed to map_func in each job. Alternatively, one can use functools.partial.
TYPE: Optional[Dict] DEFAULT: None
Optional[Dict]
reduce_kwargs
Keyword arguments that will be passed to reduce_func in each job. Alternatively, one can use functools.partial.
parallel_backend
Parallel backend instance to use for parallelizing computations. If None, use JoblibParallelBackend backend. See the Parallel Backends package for available options.
TYPE: Optional[ParallelBackend] DEFAULT: None
Optional[ParallelBackend]
TYPE: Optional[ParallelConfig] DEFAULT: None
Number of parallel jobs to run. Does not accept 0
A simple usage example with 2 jobs:
>>> from pydvl.parallel import MapReduceJob\n>>> import numpy as np\n>>> map_reduce_job: MapReduceJob[np.ndarray, np.ndarray] = MapReduceJob(\n... np.arange(5),\n... map_func=np.sum,\n... reduce_func=np.sum,\n... n_jobs=2,\n... )\n>>> map_reduce_job()\n10\n
When passed a single object as input, it will be repeated for each job:
>>> from pydvl.parallel import MapReduceJob\n>>> import numpy as np\n>>> map_reduce_job: MapReduceJob[int, np.ndarray] = MapReduceJob(\n... 5,\n... map_func=lambda x: np.array([x]),\n... reduce_func=np.sum,\n... n_jobs=2,\n... )\n>>> map_reduce_job()\n10\n
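The map_kwargs and reduce_kwargs parameters described above forward extra keyword arguments to the map and reduce functions. A small sketch, with a made-up map function, purely for illustration:

```python
import numpy as np

from pydvl.parallel import MapReduceJob

def powered_sum(chunk: np.ndarray, exponent: int = 1):
    # Map function taking an extra keyword argument
    return np.sum(chunk**exponent)

map_reduce_job: MapReduceJob[np.ndarray, np.ndarray] = MapReduceJob(
    np.arange(5),
    map_func=powered_sum,
    reduce_func=np.sum,
    map_kwargs=dict(exponent=2),
    n_jobs=2,
)
result = map_reduce_job()  # 0**2 + 1**2 + 2**2 + 3**2 + 4**2 == 30
```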
src/pydvl/parallel/map_reduce.py
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef __init__(\n self,\n inputs: Union[Collection[T], T],\n map_func: MapFunction[R],\n reduce_func: ReduceFunction[R] = identity,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n *,\n map_kwargs: Optional[Dict] = None,\n reduce_kwargs: Optional[Dict] = None,\n n_jobs: int = -1,\n timeout: Optional[float] = None,\n):\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n self.parallel_backend = parallel_backend\n\n self.timeout = timeout\n\n self._n_jobs = -1\n # This uses the setter defined below\n self.n_jobs = n_jobs\n\n self.inputs_ = inputs\n\n self.map_kwargs = map_kwargs if map_kwargs is not None else dict()\n self.reduce_kwargs = reduce_kwargs if reduce_kwargs is not None else dict()\n\n self._map_func = reduce(maybe_add_argument, [\"job_id\", \"seed\"], map_func)\n self._reduce_func = reduce_func\n
writable
n_jobs: int\n
Effective number of jobs according to the ParallelBackend instance in use.
__call__(seed: Optional[Union[Seed, SeedSequence]] = None) -> R\n
Runs the map-reduce job.
seed
Either an instance of a numpy random number generator or a seed for it.
TYPE: Optional[Union[Seed, SeedSequence]] DEFAULT: None
Optional[Union[Seed, SeedSequence]]
R
The result of the reduce function.
def __call__(\n self,\n seed: Optional[Union[Seed, SeedSequence]] = None,\n) -> R:\n \"\"\"\n Runs the map-reduce job.\n\n Args:\n seed: Either an instance of a numpy random number generator or a seed for\n it.\n\n Returns:\n The result of the reduce function.\n \"\"\"\n seed_seq = ensure_seed_sequence(seed)\n\n if hasattr(self.parallel_backend, \"_joblib_backend_name\"):\n backend = getattr(self.parallel_backend, \"_joblib_backend_name\")\n else:\n warnings.warn(\n \"Parallel backend \"\n f\"{self.parallel_backend.__class__.__name__}. \"\n \"should have a `_joblib_backend_name` attribute in order to work \"\n \"property with MapReduceJob. \"\n \"Defaulting to joblib loky backend\"\n )\n backend = \"loky\"\n\n with Parallel(backend=backend, prefer=\"processes\") as parallel:\n chunks = self._chunkify(self.inputs_, n_chunks=self.n_jobs)\n map_results: List[R] = parallel(\n delayed(self._map_func)(\n next_chunk, job_id=j, seed=seed, **self.map_kwargs\n )\n for j, (next_chunk, seed) in enumerate(\n zip(chunks, seed_seq.spawn(len(chunks)))\n )\n )\n\n reduce_results: R = self._reduce_func(map_results, **self.reduce_kwargs)\n return reduce_results\n
JoblibParallelBackend(config: ParallelConfig | None = None)\n
Bases: ParallelBackend
ParallelBackend
Class used to wrap joblib to make it transparent to algorithms.
Example
from pydvl.parallel import JoblibParallelBackend\nparallel_backend = JoblibParallelBackend()\n
src/pydvl/parallel/backends/joblib.py
@deprecated(\n target=True,\n args_mapping={\"config\": None},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef __init__(self, config: ParallelConfig | None = None) -> None:\n n_jobs: int | None = None\n if config is not None:\n n_jobs = config.n_cpus_local\n self.config = {\n \"n_jobs\": n_jobs,\n }\n
executor(\n max_workers: int | None = None,\n *,\n config: ParallelConfig | None = None,\n cancel_futures: CancellationPolicy | bool = CancellationPolicy.NONE\n) -> Executor\n
from pydvl.parallel import JoblibParallelBackend\nparallel_backend = JoblibParallelBackend()\nwith parallel_backend.executor() as executor:\n executor.submit(...)\n
max_workers
Maximum number of parallel workers.
TYPE: int | None DEFAULT: None
int | None
cancel_futures
TYPE: CancellationPolicy | bool DEFAULT: NONE
CancellationPolicy | bool
Executor
Instance of _ReusablePoolExecutor.
@classmethod\ndef executor(\n cls,\n max_workers: int | None = None,\n *,\n config: ParallelConfig | None = None,\n cancel_futures: CancellationPolicy | bool = CancellationPolicy.NONE,\n) -> Executor:\n \"\"\"Returns a futures executor for the parallel backend.\n\n !!! Example\n ``` python\n from pydvl.parallel import JoblibParallelBackend\n parallel_backend = JoblibParallelBackend()\n with parallel_backend.executor() as executor:\n executor.submit(...)\n ```\n\n Args:\n max_workers: Maximum number of parallel workers.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n cancel_futures: Policy to use when cancelling futures\n after exiting an Executor.\n\n Returns:\n Instance of [_ReusablePoolExecutor][joblib.externals.loky.reusable_executor._ReusablePoolExecutor].\n \"\"\"\n if config is not None:\n warnings.warn(\n \"The `JoblibParallelBackend` uses deprecated arguments: \"\n \"`config`. They were deprecated since v0.9.0 \"\n \"and will be removed in v0.10.0.\",\n FutureWarning,\n )\n\n if cancel_futures not in (CancellationPolicy.NONE, False):\n warnings.warn(\n \"Cancellation of futures is not supported by the joblib backend\",\n )\n return cast(Executor, get_reusable_executor(max_workers=max_workers))\n
wrap(fun: Callable, **kwargs) -> Callable\n
Wraps a function as a joblib delayed.
fun
the function to wrap
TYPE: Callable
Callable
The delayed function.
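A rough usage sketch: the wrapped callable behaves like joblib.delayed, so it can be consumed by a joblib Parallel pool. The pool construction below is only an illustration and not part of wrap itself:

```python
from joblib import Parallel

from pydvl.parallel import JoblibParallelBackend

parallel_backend = JoblibParallelBackend()
wrapped = parallel_backend.wrap(abs)  # equivalent to joblib.delayed(abs)

with Parallel(n_jobs=2) as parallel:
    results = parallel(wrapped(x) for x in [-1, -2, 3])
assert results == [1, 2, 3]
```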
def wrap(self, fun: Callable, **kwargs) -> Callable:\n \"\"\"Wraps a function as a joblib delayed.\n\n Args:\n fun: the function to wrap\n\n Returns:\n The delayed function.\n \"\"\"\n return delayed(fun) # type: ignore\n
RayParallelBackend(config: ParallelConfig | None = None)\n
Class used to wrap ray to make it transparent to algorithms.
import ray\nfrom pydvl.parallel import RayParallelBackend\nray.init()\nparallel_backend = RayParallelBackend()\n
src/pydvl/parallel/backends/ray.py
@deprecated(\n target=True,\n args_mapping={\"config\": None},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef __init__(self, config: ParallelConfig | None = None) -> None:\n if not ray.is_initialized():\n raise RuntimeError(\n \"Starting from v0.9.0, ray is no longer automatically initialized. \"\n \"Please use `ray.init()` with the desired configuration \"\n \"before using this class.\"\n )\n # Register ray joblib backend\n register_ray()\n
import ray\nfrom pydvl.parallel import RayParallelBackend\nray.init()\nparallel_backend = RayParallelBackend()\nwith parallel_backend.executor() as executor:\n executor.submit(...)\n
TYPE: CancellationPolicy | bool DEFAULT: PENDING
Instance of RayExecutor.
@classmethod\ndef executor(\n cls,\n max_workers: int | None = None,\n *,\n config: ParallelConfig | None = None,\n cancel_futures: CancellationPolicy | bool = CancellationPolicy.PENDING,\n) -> Executor:\n \"\"\"Returns a futures executor for the parallel backend.\n\n !!! Example\n ``` python\n import ray\n from pydvl.parallel import RayParallelBackend\n ray.init()\n parallel_backend = RayParallelBackend()\n with parallel_backend.executor() as executor:\n executor.submit(...)\n ```\n\n Args:\n max_workers: Maximum number of parallel workers.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n cancel_futures: Policy to use when cancelling futures\n after exiting an Executor.\n\n Returns:\n Instance of [RayExecutor][pydvl.parallel.futures.ray.RayExecutor].\n \"\"\"\n # Imported here to avoid circular import errors\n from pydvl.parallel.futures.ray import RayExecutor\n\n if config is not None:\n warnings.warn(\n \"The `RayParallelBackend` uses deprecated arguments: \"\n \"`config`. They were deprecated since v0.9.0 \"\n \"and will be removed in v0.10.0.\",\n FutureWarning,\n )\n\n return RayExecutor(max_workers, cancel_futures=cancel_futures) # type: ignore\n
Wraps a function as a ray remote.
keyword arguments to pass to @ray.remote
DEFAULT: {}
{}
The .remote method of the ray RemoteFunction.
.remote
RemoteFunction
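A rough sketch of using the wrapped remote function; num_cpus is a standard ray.remote option and the result is fetched with ray.get. The function and values are arbitrary:

```python
import ray

from pydvl.parallel import RayParallelBackend

def square(x: int) -> int:
    return x * x

ray.init()  # must be called explicitly since v0.9.0
parallel_backend = RayParallelBackend()

# wrap() returns the .remote method of the underlying ray RemoteFunction
remote_square = parallel_backend.wrap(square, num_cpus=1)
object_ref = remote_square(3)
assert ray.get(object_ref) == 9
```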
def wrap(self, fun: Callable, **kwargs) -> Callable:\n \"\"\"Wraps a function as a ray remote.\n\n Args:\n fun: the function to wrap\n kwargs: keyword arguments to pass to @ray.remote\n\n Returns:\n The `.remote` method of the ray `RemoteFunction`.\n \"\"\"\n if len(kwargs) > 0:\n return ray.remote(**kwargs)(fun).remote # type: ignore\n return ray.remote(fun).remote # type: ignore\n
init_executor(\n max_workers: Optional[int] = None,\n config: ParallelConfig = ParallelConfig(),\n **kwargs\n) -> Generator[Executor, None, None]\n
Initializes a futures executor for the given parallel configuration.
Maximum number of concurrent tasks.
instance of ParallelConfig with cluster address, number of cpus, etc.
TYPE: ParallelConfig DEFAULT: ParallelConfig()
Other optional parameters that will be passed to the executor.
from pydvl.parallel.futures import init_executor, ParallelConfig\n\nconfig = ParallelConfig(backend=\"ray\")\nwith init_executor(max_workers=1, config=config) as executor:\n future = executor.submit(lambda x: x + 1, 1)\n result = future.result()\nassert result == 2\n
from pydvl.parallel.futures import init_executor\nwith init_executor() as executor:\n results = list(executor.map(lambda x: x + 1, range(5)))\nassert results == [1, 2, 3, 4, 5]\n
src/pydvl/parallel/futures/__init__.py
@contextmanager\n@deprecated(\n target=None,\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef init_executor(\n max_workers: Optional[int] = None,\n config: ParallelConfig = ParallelConfig(),\n **kwargs,\n) -> Generator[Executor, None, None]:\n \"\"\"Initializes a futures executor for the given parallel configuration.\n\n Args:\n max_workers: Maximum number of concurrent tasks.\n config: instance of [ParallelConfig][pydvl.utils.config.ParallelConfig]\n with cluster address, number of cpus, etc.\n kwargs: Other optional parameter that will be passed to the executor.\n\n\n ??? Examples\n ``` python\n from pydvl.parallel.futures import init_executor, ParallelConfig\n\n config = ParallelConfig(backend=\"ray\")\n with init_executor(max_workers=1, config=config) as executor:\n future = executor.submit(lambda x: x + 1, 1)\n result = future.result()\n assert result == 2\n ```\n ``` python\n from pydvl.parallel.futures import init_executor\n with init_executor() as executor:\n results = list(executor.map(lambda x: x + 1, range(5)))\n assert results == [1, 2, 3, 4, 5]\n ```\n \"\"\"\n try:\n cls = ParallelBackend.BACKENDS[config.backend]\n with cls.executor(max_workers=max_workers, config=config, **kwargs) as e:\n yield e\n except KeyError:\n raise NotImplementedError(f\"Unexpected parallel backend {config.backend}\")\n
RayExecutor(\n max_workers: Optional[int] = None,\n *,\n config: Optional[ParallelConfig] = None,\n cancel_futures: Union[CancellationPolicy, bool] = CancellationPolicy.ALL\n)\n
Bases: Executor
Asynchronous executor using Ray that implements the concurrent.futures API.
Maximum number of concurrent tasks. Each task can itself request any number of vCPUs. You must ensure the product of this value and the n_cpus_per_job parameter passed to submit() does not exceed available cluster resources. If set to None, it will default to the total number of vCPUs in the ray cluster.
Select which futures will be cancelled when exiting this context manager. Pending cancels all pending futures, but not running ones, as done by concurrent.futures.ProcessPoolExecutor. All (the constructor's default) cancels both pending and running futures, and None doesn't cancel any. See CancellationPolicy.
Pending
All
TYPE: Union[CancellationPolicy, bool] DEFAULT: ALL
Union[CancellationPolicy, bool]
src/pydvl/parallel/futures/ray.py
@deprecated(\n target=True,\n args_mapping={\"config\": None},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef __init__(\n self,\n max_workers: Optional[int] = None,\n *,\n config: Optional[ParallelConfig] = None,\n cancel_futures: Union[CancellationPolicy, bool] = CancellationPolicy.ALL,\n):\n if max_workers is not None:\n if max_workers <= 0:\n raise ValueError(\"max_workers must be greater than 0\")\n max_workers = max_workers\n\n if isinstance(cancel_futures, CancellationPolicy):\n self._cancel_futures = cancel_futures\n else:\n self._cancel_futures = (\n CancellationPolicy.PENDING\n if cancel_futures\n else CancellationPolicy.NONE\n )\n\n if not ray.is_initialized():\n raise RuntimeError(\n \"Starting from v0.9.0, ray is no longer automatically initialized. \"\n \"Please use `ray.init()` with the desired configuration \"\n \"before using this class.\"\n )\n\n self._max_workers = max_workers\n if self._max_workers is None:\n self._max_workers = int(ray._private.state.cluster_resources()[\"CPU\"])\n\n self._shutdown = False\n self._shutdown_lock = threading.Lock()\n self._queue_lock = threading.Lock()\n self._work_queue: \"queue.Queue[Optional[_WorkItem]]\" = queue.Queue(\n maxsize=self._max_workers\n )\n self._pending_queue: \"queue.SimpleQueue[Optional[_WorkItem]]\" = (\n queue.SimpleQueue()\n )\n\n # Work Item Manager Thread\n self._work_item_manager_thread: Optional[_WorkItemManagerThread] = None\n
submit(fn: Callable[..., T], *args, **kwargs) -> Future[T]\n
Submits a callable to be executed with the given arguments.
Schedules the callable to be executed as fn(*args, **kwargs) and returns a Future instance representing the execution of the callable.
fn
Callable.
TYPE: Callable[..., T]
Callable[..., T]
args
Positional arguments that will be passed to fn.
DEFAULT: ()
()
Keyword arguments that will be passed to fn. They may also contain options for the ray remote function, passed as a dictionary under the keyword argument remote_function_options.
remote_function_options
A Future representing the given call.
RuntimeError
If a task is submitted after the executor has been shut down.
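A hedged sketch of passing options to the underlying ray remote function via remote_function_options; num_cpus is a standard ray option and the values are arbitrary:

```python
import ray

from pydvl.parallel.futures.ray import RayExecutor

def square(x: int) -> int:
    return x * x

ray.init()
with RayExecutor(max_workers=2) as executor:
    # Entries in remote_function_options are forwarded to the ray remote function
    future = executor.submit(square, 3, remote_function_options={"num_cpus": 1})
    assert future.result() == 9
```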
def submit(self, fn: Callable[..., T], *args, **kwargs) -> \"Future[T]\":\n r\"\"\"Submits a callable to be executed with the given arguments.\n\n Schedules the callable to be executed as fn(\\*args, \\**kwargs)\n and returns a Future instance representing the execution of the callable.\n\n Args:\n fn: Callable.\n args: Positional arguments that will be passed to `fn`.\n kwargs: Keyword arguments that will be passed to `fn`.\n It can also optionally contain options for the ray remote function\n as a dictionary as the keyword argument `remote_function_options`.\n Returns:\n A Future representing the given call.\n\n Raises:\n RuntimeError: If a task is submitted after the executor has been shut down.\n \"\"\"\n with self._shutdown_lock:\n logger.debug(\"executor acquired shutdown lock\")\n if self._shutdown:\n raise RuntimeError(\"cannot schedule new futures after shutdown\")\n\n logging.debug(\"Creating future and putting work item in work queue\")\n future: \"Future[T]\" = Future()\n remote_function_options = kwargs.pop(\"remote_function_options\", None)\n w = _WorkItem(\n future,\n fn,\n args,\n kwargs,\n remote_function_options=remote_function_options,\n )\n self._put_work_item_in_queue(w)\n # We delay starting the thread until the first call to submit\n self._start_work_item_manager_thread()\n return future\n
shutdown(wait: bool = True, *, cancel_futures: Optional[bool] = None) -> None\n
Clean up the resources associated with the Executor.
This method tries to mimic the behaviour of Executor.shutdown while allowing one more value for cancel_futures which instructs it to use the CancellationPolicy defined upon construction.
wait
Whether to wait for pending futures to finish.
Overrides the executor's default policy for cancelling futures on exit. If True, all pending futures are cancelled, and if False, no futures are cancelled. If None (default), the executor's policy set at initialization is used.
TYPE: Optional[bool] DEFAULT: None
Optional[bool]
def shutdown(\n self, wait: bool = True, *, cancel_futures: Optional[bool] = None\n) -> None:\n \"\"\"Clean up the resources associated with the Executor.\n\n This method tries to mimic the behaviour of\n [Executor.shutdown][concurrent.futures.Executor.shutdown]\n while allowing one more value for ``cancel_futures`` which instructs it\n to use the [CancellationPolicy][pydvl.parallel.backend.CancellationPolicy]\n defined upon construction.\n\n Args:\n wait: Whether to wait for pending futures to finish.\n cancel_futures: Overrides the executor's default policy for\n cancelling futures on exit. If ``True``, all pending futures are\n cancelled, and if ``False``, no futures are cancelled. If ``None``\n (default), the executor's policy set at initialization is used.\n \"\"\"\n logger.debug(\"executor shutting down\")\n with self._shutdown_lock:\n logger.debug(\"executor acquired shutdown lock\")\n self._shutdown = True\n self._cancel_futures = {\n None: self._cancel_futures,\n True: CancellationPolicy.PENDING,\n False: CancellationPolicy.NONE,\n }[cancel_futures]\n\n if wait:\n logger.debug(\"executor waiting for futures to finish\")\n if self._work_item_manager_thread is not None:\n # Putting None in the queue to signal\n # to work item manager thread that we are shutting down\n self._put_work_item_in_queue(None)\n logger.debug(\n \"executor waiting for work item manager thread to terminate\"\n )\n self._work_item_manager_thread.join()\n # To reduce the risk of opening too many files, remove references to\n # objects that use file descriptors.\n self._work_item_manager_thread = None\n del self._work_queue\n del self._pending_queue\n
__exit__(exc_type, exc_val, exc_tb)\n
Exit the runtime context related to the RayExecutor object.
def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"Exit the runtime context related to the RayExecutor object.\"\"\"\n self.shutdown()\n return False\n
shaded_mean_std(\n data: ndarray,\n abscissa: Optional[Sequence[Any]] = None,\n num_std: float = 1.0,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n title: Optional[str] = None,\n xlabel: Optional[str] = None,\n ylabel: Optional[str] = None,\n ax: Optional[Axes] = None,\n **kwargs\n) -> Axes\n
The usual mean \\(\\pm\\) std deviation plot to aggregate runs of experiments.
Deprecation notice
This function is bogus and will be removed in the future in favour of properly computed confidence intervals.
axis 0 is to be aggregated on (e.g. runs) and axis 1 is the data for each run.
TYPE: ndarray
ndarray
abscissa
values for the x-axis. Leave empty to use increasing integers.
TYPE: Optional[Sequence[Any]] DEFAULT: None
Optional[Sequence[Any]]
num_std
number of standard deviations to shade around the mean.
mean_color
color for the mean
TYPE: Optional[str] DEFAULT: 'dodgerblue'
Optional[str]
'dodgerblue'
shade_color
color for the shaded region
TYPE: Optional[str] DEFAULT: 'lightblue'
'lightblue'
title
Title text. To use mathematics, use LaTeX notation.
TYPE: Optional[str] DEFAULT: None
xlabel
Text for the horizontal axis.
ylabel
Text for the vertical axis
ax
If passed, axes object into which to insert the figure. Otherwise, a new figure is created and returned
TYPE: Optional[Axes] DEFAULT: None
Optional[Axes]
these are forwarded to the ax.plot() call for the mean.
Axes
The axes used (or created)
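A minimal usage sketch with synthetic data of shape (runs, steps), only to show the call signature:

```python
import matplotlib.pyplot as plt
import numpy as np

from pydvl.reporting.plots import shaded_mean_std

# 10 runs of a noisy, increasing curve over 50 steps
data = np.cumsum(np.random.rand(10, 50), axis=1)
ax = shaded_mean_std(
    data, num_std=1.0, title="Mean and std over runs", xlabel="step", ylabel="value"
)
plt.show()
```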
src/pydvl/reporting/plots.py
@deprecated(target=None, deprecated_in=\"0.7.1\", remove_in=\"0.9.0\")\ndef shaded_mean_std(\n data: np.ndarray,\n abscissa: Optional[Sequence[Any]] = None,\n num_std: float = 1.0,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n title: Optional[str] = None,\n xlabel: Optional[str] = None,\n ylabel: Optional[str] = None,\n ax: Optional[Axes] = None,\n **kwargs,\n) -> Axes:\n r\"\"\"The usual mean \\(\\pm\\) std deviation plot to aggregate runs of\n experiments.\n\n !!! warning \"Deprecation notice\"\n This function is bogus and will be removed in the future in favour of\n properly computed confidence intervals.\n\n Args:\n data: axis 0 is to be aggregated on (e.g. runs) and axis 1 is the\n data for each run.\n abscissa: values for the x-axis. Leave empty to use increasing integers.\n num_std: number of standard deviations to shade around the mean.\n mean_color: color for the mean\n shade_color: color for the shaded region\n title: Title text. To use mathematics, use LaTeX notation.\n xlabel: Text for the horizontal axis.\n ylabel: Text for the vertical axis\n ax: If passed, axes object into which to insert the figure. Otherwise,\n a new figure is created and returned\n kwargs: these are forwarded to the ax.plot() call for the mean.\n\n Returns:\n The axes used (or created)\n \"\"\"\n assert len(data.shape) == 2\n mean = data.mean(axis=0)\n std = num_std * data.std(axis=0)\n\n if ax is None:\n fig, ax = plt.subplots()\n if abscissa is None:\n abscissa = list(range(data.shape[1]))\n\n ax.fill_between(abscissa, mean - std, mean + std, alpha=0.3, color=shade_color)\n ax.plot(abscissa, mean, color=mean_color, **kwargs)\n\n ax.set_title(title)\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n\n return ax\n
plot_ci_array(\n data: NDArray,\n level: float,\n type: Literal[\"normal\", \"t\", \"auto\"] = \"normal\",\n abscissa: Optional[Sequence[str]] = None,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n ax: Optional[Axes] = None,\n **kwargs\n) -> Axes\n
Plot values and a confidence interval from a 2D array.
Supported intervals are based on the normal and the t distributions.
A 2D array with M different values for each of the N indices.
TYPE: NDArray
NDArray
level
The confidence level.
type
The type of confidence interval to use.
TYPE: Literal['normal', 't', 'auto'] DEFAULT: 'normal'
Literal['normal', 't', 'auto']
'normal'
The values for the x-axis. Leave empty to use increasing integers.
TYPE: Optional[Sequence[str]] DEFAULT: None
Optional[Sequence[str]]
The color of the mean line.
The color of the confidence interval.
If passed, axes object into which to insert the figure. Otherwise, a new figure is created and the axes returned.
**kwargs
Additional arguments to pass to the plot function.
The matplotlib axes.
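A short sketch with a synthetic array of M repetitions for N indices. Note that, with the quantile computed as ppf(1 - level/2) in the source below, level=0.05 corresponds to roughly a 95% band:

```python
import matplotlib.pyplot as plt
import numpy as np

from pydvl.reporting.plots import plot_ci_array

# 40 repeated estimates for each of 8 indices
data = np.random.normal(loc=np.arange(8), scale=0.5, size=(40, 8))
ax = plot_ci_array(
    data, level=0.05, type="normal", abscissa=[f"x{i}" for i in range(8)]
)
plt.show()
```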
def plot_ci_array(\n data: NDArray,\n level: float,\n type: Literal[\"normal\", \"t\", \"auto\"] = \"normal\",\n abscissa: Optional[Sequence[str]] = None,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n ax: Optional[plt.Axes] = None,\n **kwargs,\n) -> plt.Axes:\n \"\"\"Plot values and a confidence interval from a 2D array.\n\n Supported intervals are based on the normal and the t distributions.\n\n Args:\n data: A 2D array with M different values for each of the N indices.\n level: The confidence level.\n type: The type of confidence interval to use.\n abscissa: The values for the x-axis. Leave empty to use increasing\n integers.\n mean_color: The color of the mean line.\n shade_color: The color of the confidence interval.\n ax: If passed, axes object into which to insert the figure. Otherwise,\n a new figure is created and the axes returned.\n **kwargs: Additional arguments to pass to the plot function.\n\n Returns:\n The matplotlib axes.\n \"\"\"\n\n m, n = data.shape\n\n means = np.mean(data, axis=0)\n variances = np.var(data, axis=0, ddof=1)\n\n dummy = ValuationResult[np.int_, np.object_](\n algorithm=\"dummy\",\n values=means,\n variances=variances,\n counts=np.ones_like(means, dtype=np.int_) * m,\n indices=np.arange(n),\n data_names=np.array(abscissa, dtype=str)\n if abscissa is not None\n else np.arange(n, dtype=str),\n )\n\n return plot_ci_values(\n dummy,\n level=level,\n type=type,\n mean_color=mean_color,\n shade_color=shade_color,\n ax=ax,\n **kwargs,\n )\n
plot_ci_values(\n values: ValuationResult,\n level: float,\n type: Literal[\"normal\", \"t\", \"auto\"] = \"auto\",\n abscissa: Optional[Sequence[str]] = None,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n ax: Optional[Axes] = None,\n **kwargs\n)\n
Plot values and a confidence interval.
Uses values.data_names for the x-axis.
values.data_names
values
The valuation result.
TYPE: ValuationResult
The type of confidence interval to use. If \"auto\", uses \"norm\" if the minimum number of updates for all indices is greater than 30, otherwise uses \"t\".
TYPE: Literal['normal', 't', 'auto'] DEFAULT: 'auto'
'auto'
def plot_ci_values(\n values: ValuationResult,\n level: float,\n type: Literal[\"normal\", \"t\", \"auto\"] = \"auto\",\n abscissa: Optional[Sequence[str]] = None,\n mean_color: Optional[str] = \"dodgerblue\",\n shade_color: Optional[str] = \"lightblue\",\n ax: Optional[plt.Axes] = None,\n **kwargs,\n):\n \"\"\"Plot values and a confidence interval.\n\n Uses `values.data_names` for the x-axis.\n\n Supported intervals are based on the normal and the t distributions.\n\n Args:\n values: The valuation result.\n level: The confidence level.\n type: The type of confidence interval to use. If \"auto\", uses \"norm\" if\n the minimum number of updates for all indices is greater than 30,\n otherwise uses \"t\".\n abscissa: The values for the x-axis. Leave empty to use increasing\n integers.\n mean_color: The color of the mean line.\n shade_color: The color of the confidence interval.\n ax: If passed, axes object into which to insert the figure. Otherwise,\n a new figure is created and the axes returned.\n **kwargs: Additional arguments to pass to the plot function.\n\n Returns:\n The matplotlib axes.\n \"\"\"\n\n ppfs = {\n \"normal\": norm.ppf,\n \"t\": partial(t.ppf, df=values.counts - 1),\n \"auto\": norm.ppf\n if np.min(values.counts) > 30\n else partial(t.ppf, df=values.counts - 1),\n }\n\n try:\n score = ppfs[type](1 - level / 2)\n except KeyError:\n raise ValueError(\n f\"Unknown confidence interval type requested: {type}.\"\n ) from None\n\n if abscissa is None:\n abscissa = [str(i) for i, _ in enumerate(values)]\n bound = score * values.stderr\n\n if ax is None:\n fig, ax = plt.subplots()\n\n ax.fill_between(\n abscissa,\n values.values - bound,\n values.values + bound,\n alpha=0.3,\n color=shade_color,\n )\n ax.plot(abscissa, values.values, color=mean_color, **kwargs)\n return ax\n
spearman_correlation(vv: List[OrderedDict], num_values: int, pvalue: float)\n
Simple matrix plots with spearman correlation for each pair in vv.
vv
list of OrderedDicts with index: value. Spearman correlation is computed for the keys.
TYPE: List[OrderedDict]
List[OrderedDict]
num_values
Use only this many values from the data (from the start of the OrderedDicts).
pvalue
significance threshold: correlation coefficients whose p-value exceeds pvalue/len(vv) (Bonferroni correction) are discarded.
pvalue/len(vv)
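A rough usage sketch with three synthetic runs; each OrderedDict maps data index to value, ordered by decreasing value:

```python
from collections import OrderedDict

import numpy as np

from pydvl.reporting.plots import spearman_correlation

runs = []
for seed in range(3):
    rng = np.random.default_rng(seed)
    values = rng.random(20)
    # Rank 20 data points by decreasing value
    runs.append(
        OrderedDict((int(i), float(values[i])) for i in np.argsort(-values))
    )

fig = spearman_correlation(runs, num_values=10, pvalue=0.05)
```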
def spearman_correlation(vv: List[OrderedDict], num_values: int, pvalue: float):\n \"\"\"Simple matrix plots with spearman correlation for each pair in vv.\n\n Args:\n vv: list of OrderedDicts with index: value. Spearman correlation\n is computed for the keys.\n num_values: Use only these many values from the data (from the start\n of the OrderedDicts)\n pvalue: correlation coefficients for which the p-value is below the\n threshold `pvalue/len(vv)` will be discarded.\n \"\"\"\n r: np.ndarray = np.ndarray((len(vv), len(vv)))\n p: np.ndarray = np.ndarray((len(vv), len(vv)))\n for i, a in enumerate(vv):\n for j, b in enumerate(vv):\n from scipy.stats._stats_py import SpearmanrResult\n\n spearman: SpearmanrResult = sp.stats.spearmanr(\n list(a.keys())[:num_values], list(b.keys())[:num_values]\n )\n r[i][j] = (\n spearman.correlation if spearman.pvalue < pvalue / len(vv) else np.nan\n ) # Bonferroni correction\n p[i][j] = spearman.pvalue\n fig, axs = plt.subplots(1, 2, figsize=(16, 7))\n plot1 = axs[0].matshow(r, vmin=-1, vmax=1)\n axs[0].set_title(f\"Spearman correlation (top {num_values} values)\")\n axs[0].set_xlabel(\"Runs\")\n axs[0].set_ylabel(\"Runs\")\n fig.colorbar(plot1, ax=axs[0])\n plot2 = axs[1].matshow(p, vmin=0, vmax=1)\n axs[1].set_title(\"p-value\")\n axs[1].set_xlabel(\"Runs\")\n axs[1].set_ylabel(\"Runs\")\n fig.colorbar(plot2, ax=axs[1])\n\n return fig\n
plot_shapley(\n df: DataFrame,\n *,\n level: float = 0.05,\n ax: Optional[Axes] = None,\n title: Optional[str] = None,\n xlabel: Optional[str] = None,\n ylabel: Optional[str] = None,\n prefix: Optional[str] = \"data_value\"\n) -> Axes\n
Plots the Shapley values, as returned from compute_shapley_values, with error bars corresponding to an α-level Normal confidence interval.
df
dataframe with the shapley values
TYPE: DataFrame
DataFrame
confidence level for the error bars
TYPE: float DEFAULT: 0.05
0.05
axes to plot on, or None if a new subplot should be created
string, title of the plot
string, x label of the plot
string, y label of the plot
The axes created or used
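A hedged sketch of the expected dataframe layout, built by hand rather than obtained from compute_shapley_values: with the default prefix "data_value", the columns data_value and data_value_stderr are used.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from pydvl.reporting.plots import plot_shapley

# Hand-built dataframe with the column names expected for prefix="data_value"
df = pd.DataFrame(
    {
        "data_value": np.random.rand(10),
        "data_value_stderr": 0.05 * np.random.rand(10),
    },
    index=[f"point {i}" for i in range(10)],
)
ax = plot_shapley(df, level=0.05, title="Data values", ylabel="value")
plt.show()
```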
def plot_shapley(\n df: pd.DataFrame,\n *,\n level: float = 0.05,\n ax: Optional[plt.Axes] = None,\n title: Optional[str] = None,\n xlabel: Optional[str] = None,\n ylabel: Optional[str] = None,\n prefix: Optional[str] = \"data_value\",\n) -> plt.Axes:\n r\"\"\"Plots the shapley values, as returned from\n [compute_shapley_values][pydvl.value.shapley.common.compute_shapley_values],\n with error bars corresponding to an $\\alpha$-level Normal confidence\n interval.\n\n Args:\n df: dataframe with the shapley values\n level: confidence level for the error bars\n ax: axes to plot on or None if a new subplots should be created\n title: string, title of the plot\n xlabel: string, x label of the plot\n ylabel: string, y label of the plot\n\n Returns:\n The axes created or used\n \"\"\"\n if ax is None:\n _, ax = plt.subplots()\n\n yerr = norm.ppf(1 - level / 2) * df[f\"{prefix}_stderr\"]\n\n ax.errorbar(x=df.index, y=df[prefix], yerr=yerr, fmt=\"o\", capsize=6)\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n ax.set_title(title)\n plt.xticks(rotation=60)\n return ax\n
plot_influence_distribution(\n influences: NDArray[float_], index: int, title_extra: str = \"\"\n) -> Axes\n
Plots the histogram of the influence that all samples in the training set have over a single sample index.
influences
array of influences (training samples x test samples)
TYPE: NDArray[float_]
NDArray[float_]
index
Index of the test sample for which the influences will be plotted.
title_extra
Additional text that will be appended to the title.
TYPE: str DEFAULT: ''
''
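A short sketch with a random influence matrix of shape (training samples, test samples), only to illustrate the call:

```python
import matplotlib.pyplot as plt
import numpy as np

from pydvl.reporting.plots import plot_influence_distribution

# 200 training samples, 20 test samples
influences = np.random.normal(size=(200, 20))
ax = plot_influence_distribution(influences, index=0, title_extra="(test sample 0)")
plt.show()
```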
def plot_influence_distribution(\n influences: NDArray[np.float_], index: int, title_extra: str = \"\"\n) -> plt.Axes:\n \"\"\"Plots the histogram of the influence that all samples in the training set\n have over a single sample index.\n\n Args:\n influences: array of influences (training samples x test samples)\n index: Index of the test sample for which the influences\n will be plotted.\n title_extra: Additional text that will be appended to the title.\n \"\"\"\n _, ax = plt.subplots()\n ax.hist(influences[:, index], alpha=0.7)\n ax.set_xlabel(\"Influence values\")\n ax.set_ylabel(\"Number of samples\")\n ax.set_title(f\"Distribution of influences {title_extra}\")\n return ax\n
plot_influence_distribution_by_label(\n influences: NDArray[float_], labels: NDArray[float_], title_extra: str = \"\"\n)\n
Plots the histogram of the influence that all samples in the training set have over a single sample index, separated by labels.
labels
labels for the training set.
def plot_influence_distribution_by_label(\n influences: NDArray[np.float_], labels: NDArray[np.float_], title_extra: str = \"\"\n):\n \"\"\"Plots the histogram of the influence that all samples in the training set\n have over a single sample index, separated by labels.\n\n Args:\n influences: array of influences (training samples x test samples)\n labels: labels for the training set.\n title_extra: Additional text that will be appended to the title.\n \"\"\"\n _, ax = plt.subplots()\n unique_labels = np.unique(labels)\n for label in unique_labels:\n ax.hist(influences[labels == label], label=label, alpha=0.7)\n ax.set_xlabel(\"Influence values\")\n ax.set_ylabel(\"Number of samples\")\n ax.set_title(f\"Distribution of influences {title_extra}\")\n ax.legend()\n plt.show()\n
compute_removal_score(\n u: Utility,\n values: ValuationResult,\n percentages: Union[NDArray[float_], Iterable[float]],\n *,\n remove_best: bool = False,\n progress: bool = False\n) -> Dict[float, float]\n
Fits model and computes score on the test set after incrementally removing a percentage of data points from the training set, based on their values.
u
Utility object with model, data, and scoring function.
TYPE: Utility
Data values of data instances in the training set.
percentages
Sequence of removal percentages.
TYPE: Union[NDArray[float_], Iterable[float]]
Union[NDArray[float_], Iterable[float]]
remove_best
If True, removes data points in order of decreasing valuation.
If True, display a progress bar.
Dict[float, float]
Dictionary that maps the percentages to their respective scores.
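A hedged sketch of a typical call. The utility is built from synthetic data, and ValuationResult.from_random is assumed to be available purely to stand in for values that would normally come from e.g. compute_shapley_values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

from pydvl.reporting.scores import compute_removal_score
from pydvl.utils import Dataset, Utility
from pydvl.value import ValuationResult

X, y = np.random.rand(100, 3), np.random.randint(0, 2, 100)
dataset = Dataset.from_arrays(X, y)
utility = Utility(LogisticRegression(), dataset)

# Placeholder values; in practice these come from a valuation method
values = ValuationResult.from_random(size=len(dataset))

scores = compute_removal_score(
    utility,
    values,
    percentages=np.arange(0.0, 0.5, 0.1),  # remove 0%, 10%, ..., 40% of the data
    remove_best=True,  # drop the highest-valued points first
)
# scores maps each removal percentage to the test score after removal
```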
src/pydvl/reporting/scores.py
def compute_removal_score(\n u: Utility,\n values: ValuationResult,\n percentages: Union[NDArray[np.float_], Iterable[float]],\n *,\n remove_best: bool = False,\n progress: bool = False,\n) -> Dict[float, float]:\n r\"\"\"Fits model and computes score on the test set after incrementally removing\n a percentage of data points from the training set, based on their values.\n\n Args:\n u: Utility object with model, data, and scoring function.\n values: Data values of data instances in the training set.\n percentages: Sequence of removal percentages.\n remove_best: If True, removes data points in order of decreasing valuation.\n progress: If True, display a progress bar.\n\n Returns:\n Dictionary that maps the percentages to their respective scores.\n \"\"\"\n # Sanity checks\n if np.any([x >= 1.0 or x < 0.0 for x in percentages]):\n raise ValueError(\"All percentages should be in the range [0.0, 1.0)\")\n\n if len(values) != len(u.data.indices):\n raise ValueError(\n f\"The number of values, {len(values) }, should be equal to the number of data indices, {len(u.data.indices)}\"\n )\n\n scores = {}\n\n # We sort in descending order if we want to remove the best values\n values.sort(reverse=remove_best)\n\n for pct in tqdm(percentages, disable=not progress, desc=\"Removal Scores\"):\n n_removal = int(pct * len(u.data))\n indices = values.indices[n_removal:]\n score = u(indices)\n scores[pct] = score\n return scores\n
CachedFuncConfig(\n hash_prefix: Optional[str] = None,\n ignore_args: Collection[str] = list(),\n time_threshold: float = 0.3,\n allow_repeated_evaluations: bool = False,\n rtol_stderr: float = 0.1,\n min_repetitions: int = 3,\n)\n
Configuration for cached functions and methods, providing memoization of function calls.
Instances of this class are typically used as arguments for the construction of a Utility.
hash_prefix
Optional string prefix that will be prepended to the cache key. This can be provided in order to guarantee cache reuse across runs.
ignore_args
Do not take these keyword arguments into account when hashing the wrapped function for usage as key. This allows sharing the cache among different jobs for the same experiment run if the callable happens to have \"nuisance\" parameters like job_id which do not affect the result of the computation.
job_id
TYPE: Collection[str] DEFAULT: list()
Collection[str]
list()
time_threshold
Computations taking less time than this many seconds are not cached. A value of 0 means that it will always cache results.
TYPE: float DEFAULT: 0.3
0.3
allow_repeated_evaluations
If True, repeated calls to a function with the same arguments will be allowed and outputs averaged until the running standard deviation of the mean stabilizes below rtol_stderr * mean.
rtol_stderr * mean
rtol_stderr
relative tolerance for repeated evaluations. More precisely, memcached() will stop evaluating the function once the standard deviation of the mean is smaller than rtol_stderr * mean.
TYPE: float DEFAULT: 0.1
0.1
min_repetitions
minimum number of times that a function evaluation on the same arguments is repeated before returning cached values. Useful for stochastic functions only. If the model training is very noisy, set this number to higher values to reduce variance.
TYPE: int DEFAULT: 3
3
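A small sketch of constructing the configuration. The import path and the exact way it is wired into a Utility (together with a cache backend) are assumptions here; refer to the caching module for the authoritative interface:

```python
# Import path assumed for illustration
from pydvl.utils.caching import CachedFuncConfig

cache_config = CachedFuncConfig(
    hash_prefix="experiment-42",      # stable prefix to reuse the cache across runs
    ignore_args=["job_id"],           # nuisance arguments excluded from the cache key
    time_threshold=0.0,               # cache every result, regardless of runtime
    allow_repeated_evaluations=True,  # average noisy utilities over repeated calls
    rtol_stderr=0.1,
    min_repetitions=3,
)
# This object is then typically passed when constructing a Utility.
```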
This module contains convenience classes to handle data and groups thereof.
Shapley and Least Core value computations require evaluation of a scoring function (the utility). This is typically the performance of the model on a test set (as an approximation to its true expected performance). It is therefore convenient to keep both the training data and the test data together to be passed around to methods in shapley and least_core. This is done with Dataset.
This abstraction layer also allows seamlessly grouping data points together if one is interested in computing their value as a group; see GroupedDataset.
Objects of both types are used to construct a Utility object.
Dataset(\n x_train: Union[NDArray, DataFrame],\n y_train: Union[NDArray, DataFrame],\n x_test: Union[NDArray, DataFrame],\n y_test: Union[NDArray, DataFrame],\n feature_names: Optional[Sequence[str]] = None,\n target_names: Optional[Sequence[str]] = None,\n data_names: Optional[Sequence[str]] = None,\n description: Optional[str] = None,\n is_multi_output: bool = False,\n)\n
A convenience class to handle datasets.
It holds a dataset, split into training and test data, together with several labels on feature names, data point names and a description.
x_train
training data
TYPE: Union[NDArray, DataFrame]
Union[NDArray, DataFrame]
y_train
labels for training data
test data
labels for test data
feature_names
name of the features of input data
target_names
names of the features of target data
names assigned to data points. For example, if the dataset is a time series, each entry can be a timestamp which can be referenced directly instead of using a row number.
description
A textual description of the dataset.
is_multi_output
set to False if labels are scalars, or to True if they are vectors of dimension > 1.
src/pydvl/utils/dataset.py
def __init__(\n self,\n x_train: Union[NDArray, pd.DataFrame],\n y_train: Union[NDArray, pd.DataFrame],\n x_test: Union[NDArray, pd.DataFrame],\n y_test: Union[NDArray, pd.DataFrame],\n feature_names: Optional[Sequence[str]] = None,\n target_names: Optional[Sequence[str]] = None,\n data_names: Optional[Sequence[str]] = None,\n description: Optional[str] = None,\n # FIXME: use same parameter name as in check_X_y()\n is_multi_output: bool = False,\n):\n \"\"\"Constructs a Dataset from data and labels.\n\n Args:\n x_train: training data\n y_train: labels for training data\n x_test: test data\n y_test: labels for test data\n feature_names: name of the features of input data\n target_names: names of the features of target data\n data_names: names assigned to data points.\n For example, if the dataset is a time series, each entry can be a\n timestamp which can be referenced directly instead of using a row\n number.\n description: A textual description of the dataset.\n is_multi_output: set to `False` if labels are scalars, or to\n `True` if they are vectors of dimension > 1.\n \"\"\"\n self.x_train, self.y_train = check_X_y(\n x_train, y_train, multi_output=is_multi_output\n )\n self.x_test, self.y_test = check_X_y(\n x_test, y_test, multi_output=is_multi_output\n )\n\n if x_train.shape[-1] != x_test.shape[-1]:\n raise ValueError(\n f\"Mismatching number of features: \"\n f\"{x_train.shape[-1]} and {x_test.shape[-1]}\"\n )\n if x_train.shape[0] != y_train.shape[0]:\n raise ValueError(\n f\"Mismatching number of samples: \"\n f\"{x_train.shape[-1]} and {x_test.shape[-1]}\"\n )\n if x_test.shape[0] != y_test.shape[0]:\n raise ValueError(\n f\"Mismatching number of samples: \"\n f\"{x_test.shape[-1]} and {y_test.shape[-1]}\"\n )\n\n def make_names(s: str, a: np.ndarray) -> List[str]:\n n = a.shape[1] if len(a.shape) > 1 else 1\n return [f\"{s}{i:0{1 + int(np.log10(n))}d}\" for i in range(1, n + 1)]\n\n self.feature_names = feature_names\n self.target_names = target_names\n\n if self.feature_names is None:\n if isinstance(x_train, pd.DataFrame):\n self.feature_names = x_train.columns.tolist()\n else:\n self.feature_names = make_names(\"x\", x_train)\n\n if self.target_names is None:\n if isinstance(y_train, pd.DataFrame):\n self.target_names = y_train.columns.tolist()\n else:\n self.target_names = make_names(\"y\", y_train)\n\n if len(self.x_train.shape) > 1:\n if (\n len(self.feature_names) != self.x_train.shape[-1]\n or len(self.feature_names) != self.x_test.shape[-1]\n ):\n raise ValueError(\"Mismatching number of features and names\")\n if len(self.y_train.shape) > 1:\n if (\n len(self.target_names) != self.y_train.shape[-1]\n or len(self.target_names) != self.y_test.shape[-1]\n ):\n raise ValueError(\"Mismatching number of targets and names\")\n\n self.description = description or \"No description\"\n self._indices = np.arange(len(self.x_train), dtype=np.int_)\n self._data_names = (\n np.array(data_names, dtype=object)\n if data_names is not None\n else self._indices.astype(object)\n )\n
indices: NDArray[int_]\n
Index of positions in data.x_train.
Contiguous integers from 0 to len(Dataset).
data_names: NDArray[object_]\n
Names of each individual datapoint.
Used for reporting Shapley values.
dim: int\n
Returns the number of dimensions of a sample.
get_training_data(\n indices: Optional[Iterable[int]] = None,\n) -> Tuple[NDArray, NDArray]\n
Given a set of indices, returns the training data that refer to those indices.
This is used mainly by Utility to retrieve subsets of the data from indices. It is typically not needed in algorithms.
indices
Optional indices that will be used to select points from the training data. If None, the entire training data will be returned.
TYPE: Optional[Iterable[int]] DEFAULT: None
Optional[Iterable[int]]
Tuple[NDArray, NDArray]
If indices is not None, the selected x and y arrays from the training data. Otherwise, the entire dataset.
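A short sketch of retrieving a subset of the training data by index, using synthetic data:

```python
import numpy as np

from pydvl.utils import Dataset

X, y = np.random.rand(100, 3), np.random.randint(0, 2, 100)
dataset = Dataset.from_arrays(X, y, train_size=0.8)

x_all, y_all = dataset.get_training_data()            # entire training split
x_sub, y_sub = dataset.get_training_data([0, 5, 7])   # three selected points
assert len(x_sub) == 3
```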
def get_training_data(\n self, indices: Optional[Iterable[int]] = None\n) -> Tuple[NDArray, NDArray]:\n \"\"\"Given a set of indices, returns the training data that refer to those\n indices.\n\n This is used mainly by [Utility][pydvl.utils.utility.Utility] to retrieve\n subsets of the data from indices. It is typically **not needed in\n algorithms**.\n\n Args:\n indices: Optional indices that will be used to select points from\n the training data. If `None`, the entire training data will be\n returned.\n\n Returns:\n If `indices` is not `None`, the selected x and y arrays from the\n training data. Otherwise, the entire dataset.\n \"\"\"\n if indices is None:\n return self.x_train, self.y_train\n x = self.x_train[indices]\n y = self.y_train[indices]\n return x, y\n
get_test_data(\n indices: Optional[Iterable[int]] = None,\n) -> Tuple[NDArray, NDArray]\n
Returns the entire test set regardless of the passed indices.
The passed indices will not be used because for data valuation we generally want to score the trained model on the entire test data.
Additionally, the way this method is used in the Utility class, the passed indices will be those of the training data and would not work on the test data.
There may be cases where it is desired to use parts of the test data. In those cases, it is recommended to inherit from Dataset and override get_test_data().
For example, the following snippet shows how one could go about mapping the training data indices into test data indices inside get_test_data():
>>> from pydvl.utils import Dataset\n>>> import numpy as np\n>>> class DatasetWithTestDataIndices(Dataset):\n... def get_test_data(self, indices=None):\n... if indices is None:\n... return self.x_test, self.y_test\n... fraction = len(list(indices)) / len(self)\n... mapped_indices = len(self.x_test) / len(self) * np.asarray(indices)\n... mapped_indices = np.unique(mapped_indices.astype(int))\n... return self.x_test[mapped_indices], self.y_test[mapped_indices]\n...\n>>> X = np.random.rand(100, 10)\n>>> y = np.random.randint(0, 2, 100)\n>>> dataset = DatasetWithTestDataIndices.from_arrays(X, y)\n>>> indices = np.random.choice(dataset.indices, 30, replace=False)\n>>> _ = dataset.get_training_data(indices)\n>>> _ = dataset.get_test_data(indices)\n
Optional indices into the test data. This argument is unused and is only kept for compatibility with get_training_data().
The entire test data.
def get_test_data(\n self, indices: Optional[Iterable[int]] = None\n) -> Tuple[NDArray, NDArray]:\n \"\"\"Returns the entire test set regardless of the passed indices.\n\n The passed indices will not be used because for data valuation\n we generally want to score the trained model on the entire test data.\n\n Additionally, the way this method is used in the\n [Utility][pydvl.utils.utility.Utility] class, the passed indices will\n be those of the training data and would not work on the test data.\n\n There may be cases where it is desired to use parts of the test data.\n In those cases, it is recommended to inherit from\n [Dataset][pydvl.utils.dataset.Dataset] and override\n [get_test_data()][pydvl.utils.dataset.Dataset.get_test_data].\n\n For example, the following snippet shows how one could go about\n mapping the training data indices into test data indices\n inside [get_test_data()][pydvl.utils.dataset.Dataset.get_test_data]:\n\n ??? Example\n ```pycon\n >>> from pydvl.utils import Dataset\n >>> import numpy as np\n >>> class DatasetWithTestDataIndices(Dataset):\n ... def get_test_data(self, indices=None):\n ... if indices is None:\n ... return self.x_test, self.y_test\n ... fraction = len(list(indices)) / len(self)\n ... mapped_indices = len(self.x_test) / len(self) * np.asarray(indices)\n ... mapped_indices = np.unique(mapped_indices.astype(int))\n ... return self.x_test[mapped_indices], self.y_test[mapped_indices]\n ...\n >>> X = np.random.rand(100, 10)\n >>> y = np.random.randint(0, 2, 100)\n >>> dataset = DatasetWithTestDataIndices.from_arrays(X, y)\n >>> indices = np.random.choice(dataset.indices, 30, replace=False)\n >>> _ = dataset.get_training_data(indices)\n >>> _ = dataset.get_test_data(indices)\n ```\n\n Args:\n indices: Optional indices into the test data. This argument is\n unused left for compatibility with\n [get_training_data()][pydvl.utils.dataset.Dataset.get_training_data].\n\n Returns:\n The entire test data.\n \"\"\"\n return self.x_test, self.y_test\n
from_sklearn(\n data: Bunch,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n **kwargs\n) -> Dataset\n
Constructs a Dataset object from a sklearn.utils.Bunch, as returned by the load_* functions in scikit-learn toy datasets.
load_*
>>> from pydvl.utils import Dataset\n>>> from sklearn.datasets import load_boston\n>>> dataset = Dataset.from_sklearn(load_boston())\n
scikit-learn Bunch object. The following attributes are supported: data (covariates), target (target variables, i.e. labels), feature_names (optional), target_names (optional), and DESCR (optional description).
TYPE: Bunch
Bunch
train_size
size of the training dataset. Used in train_test_split
train_test_split
TYPE: float DEFAULT: 0.8
0.8
random_state
seed for train / test split
stratify_by_target
If True, data is split in a stratified fashion, using the target variable as labels. Read more in scikit-learn's user guide.
Additional keyword arguments to pass to the Dataset constructor. Use this to pass e.g. is_multi_output.
Object with the sklearn dataset
Changed in version 0.6.0
Added kwargs to pass to the Dataset constructor.
@classmethod\ndef from_sklearn(\n cls,\n data: Bunch,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n **kwargs,\n) -> \"Dataset\":\n \"\"\"Constructs a [Dataset][pydvl.utils.Dataset] object from a\n [sklearn.utils.Bunch][], as returned by the `load_*`\n functions in [scikit-learn toy datasets](https://scikit-learn.org/stable/datasets/toy_dataset.html).\n\n ??? Example\n ```pycon\n >>> from pydvl.utils import Dataset\n >>> from sklearn.datasets import load_boston\n >>> dataset = Dataset.from_sklearn(load_boston())\n ```\n\n Args:\n data: scikit-learn Bunch object. The following attributes are supported:\n\n - `data`: covariates.\n - `target`: target variables (labels).\n - `feature_names` (**optional**): the feature names.\n - `target_names` (**optional**): the target names.\n - `DESCR` (**optional**): a description.\n train_size: size of the training dataset. Used in `train_test_split`\n random_state: seed for train / test split\n stratify_by_target: If `True`, data is split in a stratified\n fashion, using the target variable as labels. Read more in\n [scikit-learn's user guide](https://scikit-learn.org/stable/modules/cross_validation.html#stratification).\n kwargs: Additional keyword arguments to pass to the\n [Dataset][pydvl.utils.Dataset] constructor. Use this to pass e.g. `is_multi_output`.\n\n Returns:\n Object with the sklearn dataset\n\n !!! tip \"Changed in version 0.6.0\"\n Added kwargs to pass to the [Dataset][pydvl.utils.Dataset] constructor.\n \"\"\"\n x_train, x_test, y_train, y_test = train_test_split(\n data.data,\n data.target,\n train_size=train_size,\n random_state=random_state,\n stratify=data.target if stratify_by_target else None,\n )\n return cls(\n x_train,\n y_train,\n x_test,\n y_test,\n feature_names=data.get(\"feature_names\"),\n target_names=data.get(\"target_names\"),\n description=data.get(\"DESCR\"),\n **kwargs,\n )\n
from_arrays(\n X: NDArray,\n y: NDArray,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n **kwargs\n) -> Dataset\n
Constructs a Dataset object from X and y numpy arrays as returned by the make_* functions in sklearn generated datasets.
make_*
>>> from pydvl.utils import Dataset\n>>> from sklearn.datasets import make_regression\n>>> X, y = make_regression()\n>>> dataset = Dataset.from_arrays(X, y)\n
X
numpy array of shape (n_samples, n_features)
numpy array of shape (n_samples,)
If True, data is split in a stratified fashion, using the y variable as labels. Read more in sklearn's user guide.
Additional keyword arguments to pass to the Dataset constructor. Use this to pass e.g. feature_names or target_names.
Object with the passed X and y arrays split across training and test sets.
New in version 0.4.0
@classmethod\ndef from_arrays(\n cls,\n X: NDArray,\n y: NDArray,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n **kwargs,\n) -> \"Dataset\":\n \"\"\"Constructs a [Dataset][pydvl.utils.Dataset] object from X and y numpy arrays as\n returned by the `make_*` functions in [sklearn generated datasets](https://scikit-learn.org/stable/datasets/sample_generators.html).\n\n ??? Example\n ```pycon\n >>> from pydvl.utils import Dataset\n >>> from sklearn.datasets import make_regression\n >>> X, y = make_regression()\n >>> dataset = Dataset.from_arrays(X, y)\n ```\n\n Args:\n X: numpy array of shape (n_samples, n_features)\n y: numpy array of shape (n_samples,)\n train_size: size of the training dataset. Used in `train_test_split`\n random_state: seed for train / test split\n stratify_by_target: If `True`, data is split in a stratified fashion,\n using the y variable as labels. Read more in [sklearn's user\n guide](https://scikit-learn.org/stable/modules/cross_validation.html#stratification).\n kwargs: Additional keyword arguments to pass to the\n [Dataset][pydvl.utils.Dataset] constructor. Use this to pass e.g. `feature_names`\n or `target_names`.\n\n Returns:\n Object with the passed X and y arrays split across training and test sets.\n\n !!! tip \"New in version 0.4.0\"\n\n !!! tip \"Changed in version 0.6.0\"\n Added kwargs to pass to the [Dataset][pydvl.utils.Dataset] constructor.\n \"\"\"\n x_train, x_test, y_train, y_test = train_test_split(\n X,\n y,\n train_size=train_size,\n random_state=random_state,\n stratify=y if stratify_by_target else None,\n )\n return cls(x_train, y_train, x_test, y_test, **kwargs)\n
GroupedDataset(\n x_train: NDArray,\n y_train: NDArray,\n x_test: NDArray,\n y_test: NDArray,\n data_groups: Sequence,\n feature_names: Optional[Sequence[str]] = None,\n target_names: Optional[Sequence[str]] = None,\n group_names: Optional[Sequence[str]] = None,\n description: Optional[str] = None,\n **kwargs\n)\n
Bases: Dataset
Used for calculating Shapley values of subsets of the data considered as logical units. For instance, one can group by value of a categorical feature, by bin into which a continuous feature falls, or by label.
labels of training data
labels of test data
data_groups
Iterable of the same length as x_train containing a group label for each training data point. The label can be of any type, e.g. str or int. Data points with the same label will then be grouped by this object and considered as one for effects of valuation.
TYPE: Sequence
Sequence
names of the covariates' features.
names of the labels or targets y
group_names
names of the groups. If not provided, the labels from data_groups will be used.
A textual description of the dataset
Additional keyword arguments to pass to the Dataset constructor.
Added group_names and forwarding of kwargs
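A short sketch grouping synthetic training data by class label, so that each class is valued as one logical unit:

```python
import numpy as np

from pydvl.utils import Dataset, GroupedDataset

X, y = np.random.rand(100, 3), np.random.randint(0, 3, 100)
dataset = Dataset.from_arrays(X, y, train_size=0.8)

grouped = GroupedDataset(
    x_train=dataset.x_train,
    y_train=dataset.y_train,
    x_test=dataset.x_test,
    y_test=dataset.y_test,
    data_groups=dataset.y_train.tolist(),  # one group per class label
)
# One index per group instead of one per training point
print(len(grouped.indices), "groups")
```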
def __init__(\n self,\n x_train: NDArray,\n y_train: NDArray,\n x_test: NDArray,\n y_test: NDArray,\n data_groups: Sequence,\n feature_names: Optional[Sequence[str]] = None,\n target_names: Optional[Sequence[str]] = None,\n group_names: Optional[Sequence[str]] = None,\n description: Optional[str] = None,\n **kwargs,\n):\n \"\"\"Class for grouping datasets.\n\n Used for calculating Shapley values of subsets of the data considered\n as logical units. For instance, one can group by value of a categorical\n feature, by bin into which a continuous feature falls, or by label.\n\n Args:\n x_train: training data\n y_train: labels of training data\n x_test: test data\n y_test: labels of test data\n data_groups: Iterable of the same length as `x_train` containing\n a group label for each training data point. The label can be of any\n type, e.g. `str` or `int`. Data points with the same label will\n then be grouped by this object and considered as one for effects of\n valuation.\n feature_names: names of the covariates' features.\n target_names: names of the labels or targets y\n group_names: names of the groups. If not provided, the labels\n from `data_groups` will be used.\n description: A textual description of the dataset\n kwargs: Additional keyword arguments to pass to the\n [Dataset][pydvl.utils.Dataset] constructor.\n\n !!! tip \"Changed in version 0.6.0\"\n Added `group_names` and forwarding of `kwargs`\n \"\"\"\n super().__init__(\n x_train=x_train,\n y_train=y_train,\n x_test=x_test,\n y_test=y_test,\n feature_names=feature_names,\n target_names=target_names,\n description=description,\n **kwargs,\n )\n\n if len(data_groups) != len(x_train):\n raise ValueError(\n f\"data_groups and x_train must have the same length.\"\n f\"Instead got {len(data_groups)=} and {len(x_train)=}\"\n )\n\n self.groups: OrderedDict[Any, List[int]] = OrderedDict(\n {k: [] for k in set(data_groups)}\n )\n for idx, group in enumerate(data_groups):\n self.groups[group].append(idx)\n self.group_items = list(self.groups.items())\n self._indices = np.arange(len(self.groups.keys()))\n self._data_names = (\n np.array(group_names, dtype=object)\n if group_names is not None\n else np.array(list(self.groups.keys()), dtype=object)\n )\n
indices\n
Indices of the groups.
data_names\n
Names of the groups.
Returns the data and labels of all samples in the given groups.
group indices whose elements to return. If None, all data from all groups are returned.
Tuple of training data x and labels y.
def get_training_data(\n self, indices: Optional[Iterable[int]] = None\n) -> Tuple[NDArray, NDArray]:\n \"\"\"Returns the data and labels of all samples in the given groups.\n\n Args:\n indices: group indices whose elements to return. If `None`,\n all data from all groups are returned.\n\n Returns:\n Tuple of training data x and labels y.\n \"\"\"\n if indices is None:\n indices = self.indices\n data_indices = [\n idx for group_id in indices for idx in self.group_items[group_id][1]\n ]\n return super().get_training_data(data_indices)\n
from_sklearn(\n data: Bunch,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n data_groups: Optional[Sequence] = None,\n **kwargs\n) -> GroupedDataset\n
Constructs a GroupedDataset object from a sklearn.utils.Bunch as returned by the load_* functions in scikit-learn toy datasets and groups it.
>>> from sklearn.datasets import load_iris\n>>> from pydvl.utils import GroupedDataset\n>>> iris = load_iris()\n>>> data_groups = iris.data[:, 0] // 0.5\n>>> dataset = GroupedDataset.from_sklearn(iris, data_groups=data_groups)\n
size of the training dataset. Used in train_test_split.
seed for train / test split.
If True, data is split in a stratified fashion, using the target variable as labels. Read more in sklearn's user guide.
an array holding the group index or name for each data point. The length of this array must be equal to the number of data points in the dataset.
TYPE: Optional[Sequence] DEFAULT: None
Optional[Sequence]
Dataset with the selected sklearn data
@classmethod\ndef from_sklearn(\n cls,\n data: Bunch,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n data_groups: Optional[Sequence] = None,\n **kwargs,\n) -> \"GroupedDataset\":\n \"\"\"Constructs a [GroupedDataset][pydvl.utils.GroupedDataset] object from a\n [sklearn.utils.Bunch][sklearn.utils.Bunch] as returned by the `load_*` functions in\n [scikit-learn toy datasets](https://scikit-learn.org/stable/datasets/toy_dataset.html) and groups\n it.\n\n ??? Example\n ```pycon\n >>> from sklearn.datasets import load_iris\n >>> from pydvl.utils import GroupedDataset\n >>> iris = load_iris()\n >>> data_groups = iris.data[:, 0] // 0.5\n >>> dataset = GroupedDataset.from_sklearn(iris, data_groups=data_groups)\n ```\n\n Args:\n data: scikit-learn Bunch object. The following attributes are supported:\n\n - `data`: covariates.\n - `target`: target variables (labels).\n - `feature_names` (**optional**): the feature names.\n - `target_names` (**optional**): the target names.\n - `DESCR` (**optional**): a description.\n train_size: size of the training dataset. Used in `train_test_split`.\n random_state: seed for train / test split.\n stratify_by_target: If `True`, data is split in a stratified\n fashion, using the target variable as labels. Read more in\n [sklearn's user guide](https://scikit-learn.org/stable/modules/cross_validation.html#stratification).\n data_groups: an array holding the group index or name for each\n data point. The length of this array must be equal to the number of\n data points in the dataset.\n kwargs: Additional keyword arguments to pass to the\n [Dataset][pydvl.utils.Dataset] constructor.\n\n Returns:\n Dataset with the selected sklearn data\n \"\"\"\n if data_groups is None:\n raise ValueError(\n \"data_groups must be provided when constructing a GroupedDataset\"\n )\n\n x_train, x_test, y_train, y_test, data_groups_train, _ = train_test_split(\n data.data,\n data.target,\n data_groups,\n train_size=train_size,\n random_state=random_state,\n stratify=data.target if stratify_by_target else None,\n )\n\n dataset = Dataset(\n x_train=x_train, y_train=y_train, x_test=x_test, y_test=y_test, **kwargs\n )\n return cls.from_dataset(dataset, data_groups_train) # type: ignore\n
from_arrays(\n X: NDArray,\n y: NDArray,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n data_groups: Optional[Sequence] = None,\n **kwargs\n) -> Dataset\n
Constructs a GroupedDataset object from X and y numpy arrays as returned by the make_* functions in scikit-learn generated datasets.
>>> from sklearn.datasets import make_classification\n>>> from pydvl.utils import GroupedDataset\n>>> X, y = make_classification(\n... n_samples=100,\n... n_features=4,\n... n_informative=2,\n... n_redundant=0,\n... random_state=0,\n... shuffle=False\n... )\n>>> data_groups = X[:, 0] // 0.5\n>>> dataset = GroupedDataset.from_arrays(X, y, data_groups=data_groups)\n
array of shape (n_samples, n_features)
array of shape (n_samples,)
Additional keyword arguments that will be passed to the Dataset constructor.
Dataset with the passed X and y arrays split across training and test sets.
@classmethod\ndef from_arrays(\n cls,\n X: NDArray,\n y: NDArray,\n train_size: float = 0.8,\n random_state: Optional[int] = None,\n stratify_by_target: bool = False,\n data_groups: Optional[Sequence] = None,\n **kwargs,\n) -> \"Dataset\":\n \"\"\"Constructs a [GroupedDataset][pydvl.utils.GroupedDataset] object from X and y numpy arrays\n as returned by the `make_*` functions in\n [scikit-learn generated datasets](https://scikit-learn.org/stable/datasets/sample_generators.html).\n\n ??? Example\n ```pycon\n >>> from sklearn.datasets import make_classification\n >>> from pydvl.utils import GroupedDataset\n >>> X, y = make_classification(\n ... n_samples=100,\n ... n_features=4,\n ... n_informative=2,\n ... n_redundant=0,\n ... random_state=0,\n ... shuffle=False\n ... )\n >>> data_groups = X[:, 0] // 0.5\n >>> dataset = GroupedDataset.from_arrays(X, y, data_groups=data_groups)\n ```\n\n Args:\n X: array of shape (n_samples, n_features)\n y: array of shape (n_samples,)\n train_size: size of the training dataset. Used in `train_test_split`.\n random_state: seed for train / test split.\n stratify_by_target: If `True`, data is split in a stratified\n fashion, using the y variable as labels. Read more in\n [sklearn's user guide](https://scikit-learn.org/stable/modules/cross_validation.html#stratification).\n data_groups: an array holding the group index or name for each data\n point. The length of this array must be equal to the number of\n data points in the dataset.\n kwargs: Additional keyword arguments that will be passed to the\n [Dataset][pydvl.utils.Dataset] constructor.\n\n Returns:\n Dataset with the passed X and y arrays split across training and\n test sets.\n\n !!! tip \"New in version 0.4.0\"\n\n !!! tip \"Changed in version 0.6.0\"\n Added kwargs to pass to the [Dataset][pydvl.utils.Dataset] constructor.\n \"\"\"\n if data_groups is None:\n raise ValueError(\n \"data_groups must be provided when constructing a GroupedDataset\"\n )\n x_train, x_test, y_train, y_test, data_groups_train, _ = train_test_split(\n X,\n y,\n data_groups,\n train_size=train_size,\n random_state=random_state,\n stratify=y if stratify_by_target else None,\n )\n dataset = Dataset(\n x_train=x_train, y_train=y_train, x_test=x_test, y_test=y_test, **kwargs\n )\n return cls.from_dataset(dataset, data_groups_train)\n
from_dataset(dataset: Dataset, data_groups: Sequence[Any]) -> GroupedDataset\n
Creates a GroupedDataset object from the data of a Dataset object and a mapping of data groups.
>>> import numpy as np\n>>> from pydvl.utils import Dataset, GroupedDataset\n>>> dataset = Dataset.from_arrays(\n... X=np.asarray([[1, 2], [3, 4], [5, 6], [7, 8]]),\n... y=np.asarray([0, 1, 0, 1]),\n... )\n>>> dataset = GroupedDataset.from_dataset(dataset, data_groups=[0, 0, 1, 1])\n
The original data.
An array holding the group index or name for each data point. The length of this array must be equal to the number of data points in the dataset.
TYPE: Sequence[Any]
Sequence[Any]
A GroupedDataset with the initial Dataset grouped by data_groups.
@classmethod\ndef from_dataset(\n cls, dataset: Dataset, data_groups: Sequence[Any]\n) -> \"GroupedDataset\":\n \"\"\"Creates a [GroupedDataset][pydvl.utils.GroupedDataset] object from the data a\n [Dataset][pydvl.utils.Dataset] object and a mapping of data groups.\n\n ??? Example\n ```pycon\n >>> import numpy as np\n >>> from pydvl.utils import Dataset, GroupedDataset\n >>> dataset = Dataset.from_arrays(\n ... X=np.asarray([[1, 2], [3, 4], [5, 6], [7, 8]]),\n ... y=np.asarray([0, 1, 0, 1]),\n ... )\n >>> dataset = GroupedDataset.from_dataset(dataset, data_groups=[0, 0, 1, 1])\n ```\n\n Args:\n dataset: The original data.\n data_groups: An array holding the group index or name for each data\n point. The length of this array must be equal to the number of\n data points in the dataset.\n\n Returns:\n A [GroupedDataset][pydvl.utils.GroupedDataset] with the initial\n [Dataset][pydvl.utils.Dataset] grouped by data_groups.\n \"\"\"\n return cls(\n x_train=dataset.x_train,\n y_train=dataset.y_train,\n x_test=dataset.x_test,\n y_test=dataset.y_test,\n data_groups=data_groups,\n feature_names=dataset.feature_names,\n target_names=dataset.target_names,\n description=dataset.description,\n )\n
Supporting utilities for manipulating arguments of functions.
free_arguments(fun: Union[Callable, partial]) -> Set[str]\n
Computes the set of free arguments for a function or functools.partial object.
All arguments of a function are considered free unless they are set by a partial. For example, if f = partial(g, a=1), then a is not a free argument of f.
f = partial(g, a=1)
a
f
A callable or a functools.partial object.
TYPE: Union[Callable, partial]
Union[Callable, partial]
Set[str]
The set of free arguments of fun.
New in version 0.7.0
src/pydvl/utils/functional.py
def free_arguments(fun: Union[Callable, partial]) -> Set[str]:\n \"\"\"Computes the set of free arguments for a function or\n [functools.partial][] object.\n\n All arguments of a function are considered free unless they are set by a\n partial. For example, if `f = partial(g, a=1)`, then `a` is not a free\n argument of `f`.\n\n Args:\n fun: A callable or a [partial object][].\n\n Returns:\n The set of free arguments of `fun`.\n\n !!! tip \"New in version 0.7.0\"\n \"\"\"\n args_set_by_partial: Set[str] = set()\n\n def _rec_unroll_partial_function_args(g: Union[Callable, partial]) -> Callable:\n \"\"\"Stores arguments and recursively call itself if `g` is a\n [functools.partial][] object. In the end, returns the initially wrapped\n function.\n\n This handles the construct `partial(_accept_additional_argument, *args,\n **kwargs)` that is used by `maybe_add_argument`.\n\n Args:\n g: A partial or a function to unroll.\n\n Returns:\n Initial wrapped function.\n \"\"\"\n nonlocal args_set_by_partial\n\n if isinstance(g, partial) and g.func == _accept_additional_argument:\n arg = g.keywords[\"arg\"]\n if arg in args_set_by_partial:\n args_set_by_partial.remove(arg)\n return _rec_unroll_partial_function_args(g.keywords[\"fun\"])\n elif isinstance(g, partial):\n args_set_by_partial.update(g.keywords.keys())\n args_set_by_partial.update(g.args)\n return _rec_unroll_partial_function_args(g.func)\n else:\n return g\n\n wrapped_fn = _rec_unroll_partial_function_args(fun)\n sig = inspect.signature(wrapped_fn)\n return args_set_by_partial | set(sig.parameters.keys())\n
maybe_add_argument(fun: Callable, new_arg: str) -> Callable\n
Wraps a function to accept the given keyword parameter if it doesn't already.
If fun already takes a keyword parameter of name new_arg, then it is returned as is. Otherwise, a wrapper is returned which merely ignores the argument.
new_arg
The function to wrap
The name of the argument that the new function will accept (and ignore).
TYPE: str
A new function accepting one more keyword argument.
Changed in version 0.7.0
Ability to work with partials.
def maybe_add_argument(fun: Callable, new_arg: str) -> Callable:\n \"\"\"Wraps a function to accept the given keyword parameter if it doesn't\n already.\n\n If `fun` already takes a keyword parameter of name `new_arg`, then it is\n returned as is. Otherwise, a wrapper is returned which merely ignores the\n argument.\n\n Args:\n fun: The function to wrap\n new_arg: The name of the argument that the new function will accept\n (and ignore).\n\n Returns:\n A new function accepting one more keyword argument.\n\n !!! tip \"Changed in version 0.7.0\"\n Ability to work with partials.\n \"\"\"\n if new_arg in free_arguments(fun):\n return fun\n\n return partial(_accept_additional_argument, fun=fun, arg=new_arg)\n
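As a usage sketch (the toy function `score` and the extra `seed` keyword below are only illustrative; the import path follows the module shown above for free_arguments):

```python
from pydvl.utils.functional import maybe_add_argument

def score(indices):
    # Hypothetical toy function that does not accept a `seed` keyword.
    return len(indices)

wrapped = maybe_add_argument(score, "seed")
# The wrapper accepts and silently ignores the extra keyword.
assert wrapped([1, 2, 3], seed=42) == 3
```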
This module contains routines for numerical computations used across the library.
powerset(s: NDArray[T]) -> Iterator[Collection[T]]\n
Returns an iterator for the power set of the argument.
Subsets are generated in sequence by growing size. See random_powerset() for random sampling.
>>> import numpy as np\n>>> from pydvl.utils.numeric import powerset\n>>> list(powerset(np.array((1,2))))\n[(), (1,), (2,), (1, 2)]\n
s
The set to use
TYPE: NDArray[T]
NDArray[T]
Iterator[Collection[T]]
An iterator over all subsets of the set of indices s.
src/pydvl/utils/numeric.py
def powerset(s: NDArray[T]) -> Iterator[Collection[T]]:\n \"\"\"Returns an iterator for the power set of the argument.\n\n Subsets are generated in sequence by growing size. See\n [random_powerset()][pydvl.utils.numeric.random_powerset] for random\n sampling.\n\n ??? Example\n ``` pycon\n >>> import numpy as np\n >>> from pydvl.utils.numeric import powerset\n >>> list(powerset(np.array((1,2))))\n [(), (1,), (2,), (1, 2)]\n ```\n\n Args:\n s: The set to use\n\n Returns:\n An iterator over all subsets of the set of indices `s`.\n \"\"\"\n return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))\n
num_samples_permutation_hoeffding(\n eps: float, delta: float, u_range: float\n) -> int\n
Lower bound on the number of samples required for Monte Carlo Shapley to obtain an (ε,δ)-approximation.
That is: with probability 1-δ, the estimated value for one data point will be ε-close to the true quantity, if at least this many permutations are sampled.
eps
ε > 0
delta
0 < δ <= 1
u_range
Range of the Utility function
Number of permutations required to guarantee ε-correct Shapley values with probability 1-δ
def num_samples_permutation_hoeffding(eps: float, delta: float, u_range: float) -> int:\n \"\"\"Lower bound on the number of samples required for MonteCarlo Shapley to\n obtain an (\u03b5,\u03b4)-approximation.\n\n That is: with probability 1-\u03b4, the estimated value for one data point will\n be \u03b5-close to the true quantity, if at least this many permutations are\n sampled.\n\n Args:\n eps: \u03b5 > 0\n delta: 0 < \u03b4 <= 1\n u_range: Range of the [Utility][pydvl.utils.utility.Utility] function\n\n Returns:\n Number of _permutations_ required to guarantee \u03b5-correct Shapley\n values with probability 1-\u03b4\n \"\"\"\n return int(np.ceil(np.log(2 / delta) * 2 * u_range**2 / eps**2))\n
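As a quick numerical check of the bound, with values chosen arbitrarily (ε=0.1, δ=0.05, utility values in [0, 1]):

```python
import numpy as np
from pydvl.utils.numeric import num_samples_permutation_hoeffding

n = num_samples_permutation_hoeffding(eps=0.1, delta=0.05, u_range=1.0)
# ceil(ln(2/0.05) * 2 * 1**2 / 0.1**2) = ceil(737.78) = 738 permutations
assert n == int(np.ceil(np.log(2 / 0.05) * 2 * 1.0**2 / 0.1**2))
```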
random_subset(\n s: NDArray[T], q: float = 0.5, seed: Optional[Seed] = None\n) -> NDArray[T]\n
Returns one subset at random from s.
set to sample from
q
Sampling probability for elements. The default 0.5 yields a uniform distribution over the power set of s.
TYPE: float DEFAULT: 0.5
0.5
TYPE: Optional[Seed] DEFAULT: None
Optional[Seed]
The subset
def random_subset(\n s: NDArray[T], q: float = 0.5, seed: Optional[Seed] = None\n) -> NDArray[T]:\n \"\"\"Returns one subset at random from ``s``.\n\n Args:\n s: set to sample from\n q: Sampling probability for elements. The default 0.5 yields a\n uniform distribution over the power set of s.\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n\n Returns:\n The subset\n \"\"\"\n rng = np.random.default_rng(seed)\n selection = rng.uniform(size=len(s)) > q\n return s[selection]\n
random_powerset(\n s: NDArray[T],\n n_samples: Optional[int] = None,\n q: float = 0.5,\n seed: Optional[Seed] = None,\n) -> Generator[NDArray[T], None, None]\n
Samples subsets from the power set of the argument, without pre-generating all subsets and in no particular order.
See powerset if you wish to deterministically generate all subsets.
To generate subsets, len(s) Bernoulli draws with probability q are drawn. The default value of q = 0.5 provides a uniform distribution over the power set of s. Other choices can be used e.g. to implement owen_sampling_shapley.
len(s)
q = 0.5
n_samples
if set, stop the generator after this many steps. Defaults to np.iinfo(np.int32).max
np.iinfo(np.int32).max
Generator[NDArray[T], None, None]
Samples from the power set of s.
if the element sampling probability is not in [0,1]
def random_powerset(\n s: NDArray[T],\n n_samples: Optional[int] = None,\n q: float = 0.5,\n seed: Optional[Seed] = None,\n) -> Generator[NDArray[T], None, None]:\n \"\"\"Samples subsets from the power set of the argument, without\n pre-generating all subsets and in no order.\n\n See [powerset][pydvl.utils.numeric.powerset] if you wish to deterministically generate all subsets.\n\n To generate subsets, `len(s)` Bernoulli draws with probability `q` are\n drawn. The default value of `q = 0.5` provides a uniform distribution over\n the power set of `s`. Other choices can be used e.g. to implement\n [owen_sampling_shapley][pydvl.value.shapley.owen.owen_sampling_shapley].\n\n Args:\n s: set to sample from\n n_samples: if set, stop the generator after this many steps.\n Defaults to `np.iinfo(np.int32).max`\n q: Sampling probability for elements. The default 0.5 yields a\n uniform distribution over the power set of s.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Samples from the power set of `s`.\n\n Raises:\n ValueError: if the element sampling probability is not in [0,1]\n\n \"\"\"\n if q < 0 or q > 1:\n raise ValueError(\"Element sampling probability must be in [0,1]\")\n\n rng = np.random.default_rng(seed)\n total = 1\n if n_samples is None:\n n_samples = np.iinfo(np.int32).max\n while total <= n_samples:\n yield random_subset(s, q, seed=rng)\n total += 1\n
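A short usage sketch: drawing a fixed number of random subsets from a small index set (the seed is arbitrary).

```python
import numpy as np
from pydvl.utils.numeric import random_powerset

indices = np.arange(5)
# The generator stops after n_samples draws; each draw is a (possibly empty) array.
subsets = list(random_powerset(indices, n_samples=3, seed=42))
assert len(subsets) == 3
```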
random_powerset_label_min(\n s: NDArray[T],\n labels: NDArray[int_],\n min_elements_per_label: int = 1,\n seed: Optional[Seed] = None,\n) -> Generator[NDArray[T], None, None]\n
Draws random subsets from s, while ensuring that at least min_elements_per_label elements per label are included in the draw. It can be used for classification problems to ensure that a set contains information for all labels (or not if min_elements_per_label=0).
min_elements_per_label
min_elements_per_label=0
Set to sample from
Labels for the samples
TYPE: NDArray[int_]
NDArray[int_]
Minimum number of elements for each label.
TYPE: int DEFAULT: 1
1
Generated draw from the powerset of s with at least min_elements_per_label elements for each label.
If s and labels are of different length or min_elements_per_label is smaller than 0.
def random_powerset_label_min(\n s: NDArray[T],\n labels: NDArray[np.int_],\n min_elements_per_label: int = 1,\n seed: Optional[Seed] = None,\n) -> Generator[NDArray[T], None, None]:\n \"\"\"Draws random subsets from `s`, while ensuring that at least\n `min_elements_per_label` elements per label are included in the draw. It can be used\n for classification problems to ensure that a set contains information for all labels\n (or not if `min_elements_per_label=0`).\n\n Args:\n s: Set to sample from\n labels: Labels for the samples\n min_elements_per_label: Minimum number of elements for each label.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Generated draw from the powerset of s with `min_elements_per_label` for each\n label.\n\n Raises:\n ValueError: If `s` and `labels` are of different length or\n `min_elements_per_label` is smaller than 0.\n \"\"\"\n if len(labels) != len(s):\n raise ValueError(\"Set and labels have to be of same size.\")\n\n if min_elements_per_label < 0:\n raise ValueError(\n f\"Parameter min_elements={min_elements_per_label} needs to be bigger or \"\n f\"equal to 0.\"\n )\n\n rng = np.random.default_rng(seed)\n unique_labels = np.unique(labels)\n\n while True:\n subsets: List[NDArray[T]] = []\n for label in unique_labels:\n label_indices = np.asarray(np.where(labels == label)[0])\n subset_size = int(\n rng.integers(\n min(min_elements_per_label, len(label_indices)),\n len(label_indices) + 1,\n )\n )\n if subset_size > 0:\n subsets.append(\n random_subset_of_size(s[label_indices], subset_size, seed=rng)\n )\n\n if len(subsets) > 0:\n subset = np.concatenate(tuple(subsets))\n rng.shuffle(subset)\n yield subset\n else:\n yield np.array([], dtype=s.dtype)\n
random_subset_of_size(\n s: NDArray[T], size: int, seed: Optional[Seed] = None\n) -> NDArray[T]\n
Samples a random subset of given size uniformly from the powerset of s.
Size of the subset to generate
Raises ValueError: If size > len(s)
def random_subset_of_size(\n s: NDArray[T], size: int, seed: Optional[Seed] = None\n) -> NDArray[T]:\n \"\"\"Samples a random subset of given size uniformly from the powerset\n of `s`.\n\n Args:\n s: Set to sample from\n size: Size of the subset to generate\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n The subset\n\n Raises\n ValueError: If size > len(s)\n \"\"\"\n if size > len(s):\n raise ValueError(\"Cannot sample subset larger than set\")\n rng = np.random.default_rng(seed)\n return rng.choice(s, size=size, replace=False)\n
random_matrix_with_condition_number(\n n: int, condition_number: float, seed: Optional[Seed] = None\n) -> NDArray\n
Constructs a square matrix with a given condition number.
Taken from: https://gist.github.com/bstellato/23322fe5d87bb71da922fbc41d658079#file-random_mat_condition_number-py
Also see: https://math.stackexchange.com/questions/1351616/condition-number-of-ata.
n
size of the matrix
condition_number
the desired condition number of the returned matrix
An (n,n) matrix with the requested condition number.
def random_matrix_with_condition_number(\n n: int, condition_number: float, seed: Optional[Seed] = None\n) -> NDArray:\n \"\"\"Constructs a square matrix with a given condition number.\n\n Taken from:\n [https://gist.github.com/bstellato/23322fe5d87bb71da922fbc41d658079#file-random_mat_condition_number-py](\n https://gist.github.com/bstellato/23322fe5d87bb71da922fbc41d658079#file-random_mat_condition_number-py)\n\n Also see:\n [https://math.stackexchange.com/questions/1351616/condition-number-of-ata](\n https://math.stackexchange.com/questions/1351616/condition-number-of-ata).\n\n Args:\n n: size of the matrix\n condition_number: duh\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n An (n,n) matrix with the requested condition number.\n \"\"\"\n if n < 2:\n raise ValueError(\"Matrix size must be at least 2\")\n\n if condition_number <= 1:\n raise ValueError(\"Condition number must be greater than 1\")\n\n rng = np.random.default_rng(seed)\n log_condition_number = np.log(condition_number)\n exp_vec = np.arange(\n -log_condition_number / 4.0,\n log_condition_number * (n + 1) / (4 * (n - 1)),\n log_condition_number / (2.0 * (n - 1)),\n )\n exp_vec = exp_vec[:n]\n s: np.ndarray = np.exp(exp_vec)\n S = np.diag(s)\n U, _ = np.linalg.qr((rng.uniform(size=(n, n)) - 5.0) * 200)\n V, _ = np.linalg.qr((rng.uniform(size=(n, n)) - 5.0) * 200)\n P: np.ndarray = U.dot(S).dot(V.T)\n P = P.dot(P.T)\n return P\n
running_moments(\n previous_avg: float | NDArray[float_],\n previous_variance: float | NDArray[float_],\n count: int,\n new_value: float | NDArray[float_],\n) -> Tuple[float | NDArray[float_], float | NDArray[float_]]\n
Uses Welford's algorithm to calculate the running average and variance of a set of numbers.
See Welford's algorithm in wikipedia
This is not really using Welford's correction for numerical stability for the variance. (FIXME)
Todo
This could be generalised to arbitrary moments. See this paper
previous_avg
average value at previous step
TYPE: float | NDArray[float_]
float | NDArray[float_]
previous_variance
variance at previous step
count
number of points seen so far
new_value
new value in the series of numbers
Tuple[float | NDArray[float_], float | NDArray[float_]]
new_average, new_variance, calculated with the new count
def running_moments(\n previous_avg: float | NDArray[np.float_],\n previous_variance: float | NDArray[np.float_],\n count: int,\n new_value: float | NDArray[np.float_],\n) -> Tuple[float | NDArray[np.float_], float | NDArray[np.float_]]:\n \"\"\"Uses Welford's algorithm to calculate the running average and variance of\n a set of numbers.\n\n See [Welford's algorithm in wikipedia](https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm)\n\n !!! Warning\n This is not really using Welford's correction for numerical stability\n for the variance. (FIXME)\n\n !!! Todo\n This could be generalised to arbitrary moments. See [this paper](https://www.osti.gov/biblio/1028931)\n\n Args:\n previous_avg: average value at previous step\n previous_variance: variance at previous step\n count: number of points seen so far\n new_value: new value in the series of numbers\n\n Returns:\n new_average, new_variance, calculated with the new count\n \"\"\"\n # broadcasted operations seem not to be supported by mypy, so we ignore the type\n new_average = (new_value + count * previous_avg) / (count + 1) # type: ignore\n new_variance = previous_variance + (\n (new_value - previous_avg) * (new_value - new_average) - previous_variance\n ) / (count + 1)\n return new_average, new_variance\n
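A worked example of the update, accumulating mean and (population) variance one value at a time; note that `count` is the number of values seen before the new one.

```python
import numpy as np
from pydvl.utils.numeric import running_moments

values = [1.0, 2.0, 3.0, 4.0]
avg, var = 0.0, 0.0
for count, new_value in enumerate(values):
    avg, var = running_moments(avg, var, count, new_value)

assert np.isclose(avg, np.mean(values))  # 2.5
assert np.isclose(var, np.var(values))   # 1.25 (population variance)
```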
top_k_value_accuracy(\n y_true: NDArray[float_], y_pred: NDArray[float_], k: int = 3\n) -> float\n
Computes the top-k accuracy for the estimated values by comparing indices of the highest k values.
y_true
Exact/true value
y_pred
Predicted/estimated value
k
Number of the highest values taken into account
Accuracy
def top_k_value_accuracy(\n y_true: NDArray[np.float_], y_pred: NDArray[np.float_], k: int = 3\n) -> float:\n \"\"\"Computes the top-k accuracy for the estimated values by comparing indices\n of the highest k values.\n\n Args:\n y_true: Exact/true value\n y_pred: Predicted/estimated value\n k: Number of the highest values taken into account\n\n Returns:\n Accuracy\n \"\"\"\n top_k_exact_values = np.argsort(y_true)[-k:]\n top_k_pred_values = np.argsort(y_pred)[-k:]\n top_k_accuracy = len(np.intersect1d(top_k_exact_values, top_k_pred_values)) / k\n return top_k_accuracy\n
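For example, with k=2 the two highest true values sit at indices 1 and 3, and both also rank in the top two predictions, giving an accuracy of 1:

```python
import numpy as np
from pydvl.utils.numeric import top_k_value_accuracy

y_true = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
y_pred = np.array([0.2, 0.7, 0.4, 0.9, 0.1])
assert top_k_value_accuracy(y_true, y_pred, k=2) == 1.0
```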
repeat_indices(\n indices: Collection[int],\n result: ValuationResult,\n done: StoppingCriterion,\n **kwargs\n) -> Iterator[int]\n
Helper function to cycle indefinitely over a collection of indices until the stopping criterion is satisfied while displaying progress.
Collection of indices that will be cycled until done.
TYPE: Collection[int]
Collection[int]
result
Object containing the current results.
done
Stopping criterion.
TYPE: StoppingCriterion
StoppingCriterion
Keyword arguments passed to tqdm.
src/pydvl/utils/progress.py
def repeat_indices(\n indices: Collection[int],\n result: \"ValuationResult\",\n done: \"StoppingCriterion\",\n **kwargs,\n) -> Iterator[int]:\n \"\"\"Helper function to cycle indefinitely over a collection of indices\n until the stopping criterion is satisfied while displaying progress.\n\n Args:\n indices: Collection of indices that will be cycled until done.\n result: Object containing the current results.\n done: Stopping criterion.\n kwargs: Keyword arguments passed to tqdm.\n \"\"\"\n with tqdm(total=100, unit=\"%\", **kwargs) as pbar:\n it = takewhile(lambda _: not done(result), cycle(indices))\n for i in it:\n yield i\n pbar.update(100 * done.completion() - pbar.n)\n pbar.refresh()\n
log_duration(func)\n
Decorator to log execution time of a function
def log_duration(func):\n \"\"\"\n Decorator to log execution time of a function\n \"\"\"\n\n @wraps(func)\n def wrapper_log_duration(*args, **kwargs):\n func_name = func.__qualname__\n logger.info(f\"Function '{func_name}' is starting.\")\n start_time = time()\n result = func(*args, **kwargs)\n duration = time() - start_time\n logger.info(f\"Function '{func_name}' completed. Duration: {duration:.2f} sec\")\n return result\n\n return wrapper_log_duration\n
This module provides a Scorer class that wraps scoring functions with additional information.
Scorers are the fundamental building block of many data valuation methods. They are typically used by the Utility class to evaluate the quality of a model when trained on subsets of the training data.
Scorers can be constructed in the same way as in scikit-learn: either from known strings or from a callable. Greater values must be better. If they are not, a negated version can be used, see scikit-learn's make_scorer().
Scorer provides additional information about the scoring function, like its range and default values, which can be used by some data valuation methods (like group_testing_shapley()) to estimate the number of samples required for a certain quality of approximation.
squashed_r2 = compose_score(Scorer('r2'), _sigmoid, (0, 1), 'squashed r2')\n
A scorer that squashes the R² score into the range [0, 1] using a sigmoid.
squashed_variance = compose_score(\n Scorer(\"explained_variance\"),\n _sigmoid,\n (0, 1),\n \"squashed explained variance\",\n)\n
A scorer that squashes the explained variance score into the range [0, 1] using a sigmoid.
Bases: Protocol
Protocol
Signature for a scorer
Scorer(\n scoring: Union[str, ScorerCallable],\n default: float = np.nan,\n range: Tuple = (-np.inf, np.inf),\n name: Optional[str] = None,\n)\n
A scoring callable that takes a model, data, and labels and returns a scalar.
scoring
Either a string or callable that can be passed to get_scorer.
TYPE: Union[str, ScorerCallable]
Union[str, ScorerCallable]
default
score to be used when a model cannot be fit, e.g. when too little data is passed, or errors arise.
TYPE: float DEFAULT: nan
nan
range
numerical range of the score function. Some Monte Carlo methods can use this to estimate the number of samples required for a certain quality of approximation. If not provided, it can be read from the scoring object if it provides it, for instance if it was constructed with compose_score().
TYPE: Tuple DEFAULT: (-inf, inf)
Tuple
(-inf, inf)
name
The name of the scorer. If not provided, the name of the function passed will be used.
New in version 0.5.0
src/pydvl/utils/score.py
def __init__(\n self,\n scoring: Union[str, ScorerCallable],\n default: float = np.nan,\n range: Tuple = (-np.inf, np.inf),\n name: Optional[str] = None,\n):\n if name is None and isinstance(scoring, str):\n name = scoring\n self._scorer = get_scorer(scoring)\n self.default = default\n # TODO: auto-fill from known scorers ?\n self.range = np.array(range)\n self._name = getattr(self._scorer, \"__name__\", name or \"scorer\")\n
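A brief construction sketch: an accuracy scorer with an explicit default and range (the particular values are illustrative, not prescribed by the library):

```python
from pydvl.utils.score import Scorer

# Accuracy lies in [0, 1]; a failed fit is scored as 0.0.
accuracy_scorer = Scorer("accuracy", default=0.0, range=(0, 1), name="accuracy")
```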
compose_score(\n scorer: Scorer,\n transformation: Callable[[float], float],\n range: Tuple[float, float],\n name: str,\n) -> Scorer\n
Composes a scoring function with an arbitrary scalar transformation.
Useful to squash unbounded scores into ranges manageable by data valuation methods.
Example:
sigmoid = lambda x: 1/(1+np.exp(-x))\ncompose_score(Scorer(\"r2\"), sigmoid, range=(0,1), name=\"squashed r2\")\n
scorer
The object to be composed.
TYPE: Scorer
transformation
A scalar transformation
TYPE: Callable[[float], float]
Callable[[float], float]
The range of the transformation. This will be used e.g. by Utility for the range of the composed scorer.
TYPE: Tuple[float, float]
Tuple[float, float]
A string representation for the composition, for str().
str()
The composite Scorer.
def compose_score(\n scorer: Scorer,\n transformation: Callable[[float], float],\n range: Tuple[float, float],\n name: str,\n) -> Scorer:\n \"\"\"Composes a scoring function with an arbitrary scalar transformation.\n\n Useful to squash unbounded scores into ranges manageable by data valuation\n methods.\n\n Example:\n\n ```python\n sigmoid = lambda x: 1/(1+np.exp(-x))\n compose_score(Scorer(\"r2\"), sigmoid, range=(0,1), name=\"squashed r2\")\n ```\n\n Args:\n scorer: The object to be composed.\n transformation: A scalar transformation\n range: The range of the transformation. This will be used e.g. by\n [Utility][pydvl.utils.utility.Utility] for the range of the composed.\n name: A string representation for the composition, for `str()`.\n\n Returns:\n The composite [Scorer][pydvl.utils.score.Scorer].\n \"\"\"\n\n class CompositeScorer(Scorer):\n def __call__(self, model: SupervisedModel, X: NDArray, y: NDArray) -> float:\n score = self._scorer(model=model, X=X, y=y)\n return transformation(score)\n\n return CompositeScorer(scorer, range=range, name=name)\n
Bases: Enum
Status of a computation.
Statuses can be combined using bitwise or (|) and bitwise and (&) to get the status of a combined computation. For example, if we have two computations, one that has converged and one that has failed, then the combined status is Status.Converged | Status.Failed == Status.Converged, but Status.Converged & Status.Failed == Status.Failed.
|
&
Status.Converged | Status.Failed == Status.Converged
Status.Converged & Status.Failed == Status.Failed
The result of bitwise or-ing two valuation statuses with | is given by the following table:
where P = Pending, C = Converged, F = Failed.
The result of bitwise and-ing two valuation statuses with & is given by the following table:
The result of bitwise negation of a Status with ~ is Failed if the status is Converged, or Converged otherwise:
~
Failed
Converged
~P == C, ~C == F, ~F == C\n
A Status evaluates to True iff it's Converged or Failed:
bool(Status.Pending) == False\nbool(Status.Converged) == True\nbool(Status.Failed) == True\n
These truth values are inconsistent with the usual boolean operations. In particular the XOR of two instances of Status is not the same as the XOR of their boolean values.
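The identities above, written out as a small interactive check (assuming Status is importable from pydvl.utils, as elsewhere in this reference):

```pycon
>>> from pydvl.utils import Status
>>> Status.Converged | Status.Failed == Status.Converged
True
>>> Status.Converged & Status.Failed == Status.Failed
True
>>> ~Status.Pending == Status.Converged
True
>>> bool(Status.Pending), bool(Status.Converged), bool(Status.Failed)
(False, True, True)
```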
This module contains types, protocols, decorators and generic function transformations. Some of it probably belongs elsewhere.
This is the minimal Protocol that valuation methods require from models in order to work.
All that is needed are the standard sklearn methods fit(), predict() and score().
fit()
predict()
score()
fit(x: NDArray, y: NDArray)\n
Fit the model to the data
Independent variables
Dependent variable
src/pydvl/utils/types.py
def fit(self, x: NDArray, y: NDArray):\n \"\"\"Fit the model to the data\n\n Args:\n x: Independent variables\n y: Dependent variable\n \"\"\"\n pass\n
predict(x: NDArray) -> NDArray\n
Compute predictions for the input
Independent variables for which to compute predictions
Predictions for the input
def predict(self, x: NDArray) -> NDArray:\n \"\"\"Compute predictions for the input\n\n Args:\n x: Independent variables for which to compute predictions\n\n Returns:\n Predictions for the input\n \"\"\"\n pass\n
score(x: NDArray, y: NDArray) -> float\n
Compute the score of the model given test data
The score of the model on (x, y)
(x, y)
def score(self, x: NDArray, y: NDArray) -> float:\n \"\"\"Compute the score of the model given test data\n\n Args:\n x: Independent variables\n y: Dependent variable\n\n Returns:\n The score of the model on `(x, y)`\n \"\"\"\n pass\n
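A minimal sketch of a custom model fulfilling this protocol: a constant (mean) predictor, purely for illustration.

```python
import numpy as np
from numpy.typing import NDArray


class MeanPredictor:
    """Predicts the mean of the training targets; just enough to satisfy the protocol."""

    def fit(self, x: NDArray, y: NDArray):
        self.mean_ = float(np.mean(y))

    def predict(self, x: NDArray) -> NDArray:
        return np.full(len(x), self.mean_)

    def score(self, x: NDArray, y: NDArray) -> float:
        # Negative mean squared error, so that greater is better.
        return -float(np.mean((self.predict(x) - y) ** 2))
```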
ensure_seed_sequence(\n seed: Optional[Union[Seed, SeedSequence]] = None\n) -> SeedSequence\n
If the passed seed is a SeedSequence object, it is returned as is. If it is a Generator, the generator's internal seed sequence is extracted. Otherwise, a new SeedSequence object is created from the passed (optional) seed.
Either an int, a Generator object, a SeedSequence object, or None.
SeedSequence
A SeedSequence object.
def ensure_seed_sequence(\n seed: Optional[Union[Seed, SeedSequence]] = None\n) -> SeedSequence:\n \"\"\"\n If the passed seed is a SeedSequence object then it is returned as is. If it is\n a Generator the internal protected seed sequence from the generator gets extracted.\n Otherwise, a new SeedSequence object is created from the passed (optional) seed.\n\n Args:\n seed: Either an int, a Generator object a SeedSequence object or None.\n\n Returns:\n A SeedSequence object.\n\n !!! tip \"New in version 0.7.0\"\n \"\"\"\n if isinstance(seed, SeedSequence):\n return seed\n elif isinstance(seed, Generator):\n return cast(SeedSequence, seed.bit_generator.seed_seq) # type: ignore\n else:\n return SeedSequence(seed)\n
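A short usage sketch (assuming the function is importable from pydvl.utils.types, the module documented in this section):

```python
import numpy as np
from pydvl.utils.types import ensure_seed_sequence

ss = ensure_seed_sequence(42)           # an int yields a fresh SeedSequence
child_a, child_b = ss.spawn(2)          # derive independent child sequences
rng = np.random.default_rng(child_a)    # seed a generator from a child
assert ensure_seed_sequence(ss) is ss   # a SeedSequence is returned as is
```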
This module contains classes to manage and learn utility functions for the computation of values. Please see the documentation on Computing Data Values for more information.
Utility holds information about the model, data and scoring function (the latter being what is usually understood as the utility in the general definition of Shapley value). Its evaluations are automatically cached across machines when the cache is configured and enabled upon construction.
DataUtilityLearning adds support for learning the scoring function to avoid repeated re-training of the model to compute the score.
This module also contains derived Utility classes for toy games that are used for testing and for demonstration purposes.
Wang, T., Yang, Y. and Jia, R., 2021. Improving cooperative game theory-based data valuation via data utility learning. arXiv preprint arXiv:2107.06336.
Utility(\n model: SupervisedModel,\n data: Dataset,\n scorer: Optional[Union[str, Scorer]] = None,\n *,\n default_score: float = 0.0,\n score_range: Tuple[float, float] = (-np.inf, np.inf),\n catch_errors: bool = True,\n show_warnings: bool = False,\n cache_backend: Optional[CacheBackend] = None,\n cached_func_options: Optional[CachedFuncConfig] = None,\n clone_before_fit: bool = True\n)\n
Convenience wrapper with configurable memoization of the scoring function.
An instance of Utility holds the triple of model, dataset and scoring function which determines the value of data points. This is used for the computation of all game-theoretic values like Shapley values and the Least Core.
The Utility expects the model to fulfill the SupervisedModel interface, i.e. to have fit(), predict(), and score() methods.
When calling the utility, the model will be cloned if it is a scikit-learn model; otherwise a copy is created using copy.deepcopy.
Since evaluating the scoring function requires retraining the model and that can be time-consuming, this class wraps it and caches the results of each execution. Caching is available both locally and across nodes, but must always be enabled for your project first, see the documentation and the module documentation.
The supervised model.
TYPE: SupervisedModel
SupervisedModel
An object containing the split data.
A scoring function. If None, the score() method of the model will be used. See score for ways to create and compose scorers, in particular how to set default values and ranges.
Any supervised model. Typical choices can be found in the scikit-learn documentation on supervised learning (https://scikit-learn.org/stable/supervised_learning.html).
Dataset or GroupedDataset instance.
A scoring object. If None, the score() method of the model will be used. See score for ways to create and compose scorers, in particular how to set default values and ranges. For convenience, a string can be passed, which will be used to construct a Scorer.
TYPE: Optional[Union[str, Scorer]] DEFAULT: None
Optional[Union[str, Scorer]]
default_score
As a convenience when no scorer object is passed (where a default value can be provided), this argument allows setting the default score for models that have not been fit, e.g. when too little data is passed or errors arise.
score_range
As with default_score, this is a convenience argument for when no scorer argument is provided, to set the numerical range of the score function. Some Monte Carlo methods can use this to estimate the number of samples required for a certain quality of approximation.
TYPE: Tuple[float, float] DEFAULT: (-inf, inf)
catch_errors
Set to True to catch errors when fit() fails. This could happen in several steps of the pipeline, e.g. when too little training data is passed, which happens often during Shapley value calculations. When this happens, the default_score is returned as a score and computation continues.
Set to False to suppress warnings thrown by fit().
cache_backend
Optional instance of CacheBackend used to wrap the _utility method of the Utility instance. By default this is set to None, which means that utility evaluations will not be cached.
TYPE: Optional[CacheBackend] DEFAULT: None
Optional[CacheBackend]
cached_func_options
Optional configuration object for cached utility evaluation.
TYPE: Optional[CachedFuncConfig] DEFAULT: None
Optional[CachedFuncConfig]
If True, the model will be cloned before calling fit().
>>> from pydvl.utils import Utility, DataUtilityLearning, Dataset\n>>> from sklearn.linear_model import LinearRegression, LogisticRegression\n>>> from sklearn.datasets import load_iris\n>>> dataset = Dataset.from_sklearn(load_iris(), random_state=16)\n>>> u = Utility(LogisticRegression(random_state=16), dataset)\n>>> u(dataset.indices)\n0.9\n
With caching enabled:
>>> from pydvl.utils import Utility, DataUtilityLearning, Dataset\n>>> from pydvl.utils.caching.memory import InMemoryCacheBackend\n>>> from sklearn.linear_model import LinearRegression, LogisticRegression\n>>> from sklearn.datasets import load_iris\n>>> dataset = Dataset.from_sklearn(load_iris(), random_state=16)\n>>> cache_backend = InMemoryCacheBackend()\n>>> u = Utility(LogisticRegression(random_state=16), dataset, cache_backend=cache_backend)\n>>> u(dataset.indices)\n0.9\n
src/pydvl/utils/utility.py
def __init__(\n self,\n model: SupervisedModel,\n data: Dataset,\n scorer: Optional[Union[str, Scorer]] = None,\n *,\n default_score: float = 0.0,\n score_range: Tuple[float, float] = (-np.inf, np.inf),\n catch_errors: bool = True,\n show_warnings: bool = False,\n cache_backend: Optional[CacheBackend] = None,\n cached_func_options: Optional[CachedFuncConfig] = None,\n clone_before_fit: bool = True,\n):\n self.model = self._clone_model(model)\n self.data = data\n if isinstance(scorer, str):\n scorer = Scorer(scorer, default=default_score, range=score_range)\n self.scorer = check_scoring(self.model, scorer)\n self.default_score = scorer.default if scorer is not None else default_score\n # TODO: auto-fill from known scorers ?\n self.score_range = scorer.range if scorer is not None else np.array(score_range)\n self.clone_before_fit = clone_before_fit\n self.catch_errors = catch_errors\n self.show_warnings = show_warnings\n self.cache = cache_backend\n if cached_func_options is None:\n cached_func_options = CachedFuncConfig()\n # TODO: Find a better way to do this.\n if cached_func_options.hash_prefix is None:\n # FIX: This does not handle reusing the same across runs.\n cached_func_options.hash_prefix = str(hash((model, data, scorer)))\n self.cached_func_options = cached_func_options\n self._initialize_utility_wrapper()\n
cache_stats: Optional[CacheStats]\n
Cache statistics are gathered when cache is enabled. See CacheStats for all fields returned.
__call__(indices: Iterable[int]) -> float\n
a subset of valid indices for the x_train attribute of Dataset.
TYPE: Iterable[int]
Iterable[int]
def __call__(self, indices: Iterable[int]) -> float:\n \"\"\"\n Args:\n indices: a subset of valid indices for the\n `x_train` attribute of [Dataset][pydvl.utils.dataset.Dataset].\n \"\"\"\n utility: float = self._utility_wrapper(frozenset(indices))\n return utility\n
DataUtilityLearning(u: Utility, training_budget: int, model: SupervisedModel)\n
Implementation of Data Utility Learning (Wang et al., 2022)[1].
This object wraps a Utility and delegates calls to it, up until a given budget (number of iterations). Every tuple of input and output (a so-called utility sample) is stored. Once the budget is exhausted, DataUtilityLearning fits the given model to the utility samples. Subsequent calls will use the learned model to predict the utility instead of delegating.
DataUtilityLearning
The Utility to learn.
training_budget
Number of utility samples to collect before fitting the given model.
A supervised regression model
>>> from pydvl.utils import Utility, DataUtilityLearning, Dataset\n>>> from sklearn.linear_model import LinearRegression, LogisticRegression\n>>> from sklearn.datasets import load_iris\n>>> dataset = Dataset.from_sklearn(load_iris())\n>>> u = Utility(LogisticRegression(), dataset)\n>>> wrapped_u = DataUtilityLearning(u, 3, LinearRegression())\n... # First 3 calls will be computed normally\n>>> for i in range(3):\n... _ = wrapped_u((i,))\n>>> wrapped_u((1, 2, 3)) # Subsequent calls will be computed using the fit model for DUL\n0.0\n
def __init__(\n self, u: Utility, training_budget: int, model: SupervisedModel\n) -> None:\n self.utility = u\n self.training_budget = training_budget\n self.model = model\n self._current_iteration = 0\n self._is_model_fit = False\n self._utility_samples: Dict[FrozenSet, Tuple[NDArray[np.bool_], float]] = {}\n
data: Dataset\n
Returns the wrapped utility's Dataset.
This module provides caching of functions.
PyDVL can cache (memoize) the computation of the utility function and speed up some computations for data valuation.
Function evaluations are cached with a key based on the function's signature and code. This can lead to undesired cache hits, see Cache reuse.
Remember not to reuse utility objects for different datasets.
Caching is disabled by default but can be enabled easily, see Setting up the cache. When enabled, it will be added to any callable used to construct a Utility (done with the wrap method of CacheBackend). Depending on the nature of the utility you might want to enable the computation of a running average of function values, see Usage with stochastic functions. You can see all configuration options under CachedFuncConfig.
pyDVL supports 3 different caching backends:
InMemoryCacheBackend: an in-memory cache backend that stores values in a dictionary. This is used to share cached values between threads in a single process.
DiskCacheBackend: a disk-based cache backend that stores pickled values in files on disk. This is used to share cached values between processes in a single node/computer.
MemcachedCacheBackend: a Memcached-based cache backend that uses pickled values written to and read from a Memcached server. This is used to share cached values between processes across multiple machines.
Info
This specific backend requires optional dependencies not installed by default. See Extra dependencies for more information.
In addition to standard memoization, the wrapped functions can compute running average and standard error of repeated evaluations for the same input. This can be useful for stochastic functions with high variance (e.g. model training for small sample sizes), but drastically reduces the speed benefits of memoization.
This behaviour can be activated with the option allow_repeated_evaluations.
When working directly with CachedFunc, it is essential to only cache pure functions. If they have any kind of state, either internal or external (e.g. a closure over some data that may change), then the cache will fail to notice this and the same value will be returned.
When a function is wrapped with CachedFunc for memoization, its signature (input and output names) and code are used as a key for the cache.
If you are running experiments with the same Utility but different datasets, this will lead to evaluations of the utility on new data returning old values because utilities only use sample indices as arguments (so there is no way to tell the difference between '1' for dataset A and '1' for dataset B from the point of view of the cache). One solution is to empty the cache between runs by calling the clear method of the cache backend instance, but the preferred one is to use a different Utility object for each dataset.
clear
Because all arguments to a function are used as part of the key for the cache, sometimes one must exclude some of them. For example, if a function is going to run across multiple processes and some reporting arguments are added (like a job_id for logging purposes), these will be part of the signature and make the functions distinct in the eyes of the cache. This can be avoided with the ignore_args option in the configuration, as sketched below.
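A sketch of the idea. The toy `evaluate` function and its `job_id` parameter are invented for illustration, and the exact import path and argument type of the ignore_args option are assumptions; check the caching reference below.

```python
# NOTE: import paths and the ignore_args option type are assumptions.
from pydvl.utils.caching import CachedFuncConfig
from pydvl.utils.caching.memory import InMemoryCacheBackend

def evaluate(indices: frozenset, job_id: int = 0) -> float:
    # job_id is only used for reporting and should not affect the cache key.
    return float(len(indices))

cache = InMemoryCacheBackend()
cached_evaluate = cache.wrap(
    evaluate, config=CachedFuncConfig(ignore_args=["job_id"])
)
value = cached_evaluate(frozenset({1, 2, 3}), job_id=7)
```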
CacheStats(\n sets: int = 0,\n misses: int = 0,\n hits: int = 0,\n timeouts: int = 0,\n errors: int = 0,\n reconnects: int = 0,\n)\n
Class used to store statistics gathered by cached functions.
sets
Number of times a value was set in the cache.
misses
Number of times a value was not found in the cache.
hits
Number of times a value was found in the cache.
timeouts
Number of times a timeout occurred.
errors
Number of times an error occurred.
reconnects
Number of times the client reconnected to the server.
CacheResult(value: float, count: int = 1, variance: float = 0.0)\n
A class used to store the cached result of a computation as well as count and variance when using repeated evaluation.
value
Cached value.
Number of times this value has been computed.
variance
Variance associated with the cached value.
CacheBackend()\n
Abstract base class for cache backends.
Defines interface for cache access including wrapping callables, getting/setting results, clearing cache, and combining cache keys.
stats
Cache statistics tracker.
src/pydvl/utils/caching/base.py
def __init__(self) -> None:\n self.stats = CacheStats()\n
wrap(\n func: Callable, *, config: Optional[CachedFuncConfig] = None\n) -> CachedFunc\n
Wraps a function to cache its results.
The function to wrap.
Optional caching options for the wrapped function.
CachedFunc
The wrapped cached function.
def wrap(\n self,\n func: Callable,\n *,\n config: Optional[CachedFuncConfig] = None,\n) -> \"CachedFunc\":\n \"\"\"Wraps a function to cache its results.\n\n Args:\n func: The function to wrap.\n config: Optional caching options for the wrapped function.\n\n Returns:\n The wrapped cached function.\n \"\"\"\n return CachedFunc(\n func,\n cache_backend=self,\n config=config,\n )\n
get(key: str) -> Optional[CacheResult]\n
Abstract method to retrieve a cached result.
Implemented by subclasses.
key
The cache key.
Optional[CacheResult]
The cached result or None if not found.
@abstractmethod\ndef get(self, key: str) -> Optional[CacheResult]:\n \"\"\"Abstract method to retrieve a cached result.\n\n Implemented by subclasses.\n\n Args:\n key: The cache key.\n\n Returns:\n The cached result or None if not found.\n \"\"\"\n pass\n
set(key: str, value: CacheResult) -> None\n
Abstract method to set a cached result.
The result to cache.
TYPE: CacheResult
CacheResult
@abstractmethod\ndef set(self, key: str, value: CacheResult) -> None:\n \"\"\"Abstract method to set a cached result.\n\n Implemented by subclasses.\n\n Args:\n key: The cache key.\n value: The result to cache.\n \"\"\"\n pass\n
clear() -> None\n
Abstract method to clear the entire cache.
@abstractmethod\ndef clear(self) -> None:\n \"\"\"Abstract method to clear the entire cache.\"\"\"\n pass\n
combine_hashes(*args: str) -> str\n
Abstract method to combine cache keys.
@abstractmethod\ndef combine_hashes(self, *args: str) -> str:\n \"\"\"Abstract method to combine cache keys.\"\"\"\n pass\n
CachedFunc(\n func: Callable[..., float],\n *,\n cache_backend: CacheBackend,\n config: Optional[CachedFuncConfig] = None\n)\n
Caches callable function results with a provided cache backend.
Wraps a callable function to cache its results using a provided instance of a subclass of CacheBackend.
This class is heavily inspired by joblib.memory.MemorizedFunc.
This class caches calls to the wrapped callable by generating a hash key based on the wrapped callable's code, the arguments passed to it and the optional hash_prefix.
This class only works with hashable arguments to the wrapped callable.
Callable to wrap.
TYPE: Callable[..., float]
Callable[..., float]
Instance of CacheBackend that handles setting and getting values.
TYPE: CacheBackend
CacheBackend
Configuration for wrapped function.
def __init__(\n self,\n func: Callable[..., float],\n *,\n cache_backend: CacheBackend,\n config: Optional[CachedFuncConfig] = None,\n) -> None:\n self.func = func\n self.cache_backend = cache_backend\n if config is None:\n config = CachedFuncConfig()\n self.config = config\n\n self.__doc__ = f\"A wrapper around {func.__name__}() with caching enabled.\\n\" + (\n CachedFunc.__doc__ or \"\"\n )\n self.__name__ = f\"cached_{func.__name__}\"\n path = list(reversed(func.__qualname__.split(\".\")))\n patched = [f\"cached_{path[0]}\"] + path[1:]\n self.__qualname__ = \".\".join(reversed(patched))\n
stats: CacheStats\n
Cache backend statistics.
__call__(*args, **kwargs) -> float\n
Call the wrapped cached function.
Executes the wrapped function, caching and returning the result.
def __call__(self, *args, **kwargs) -> float:\n \"\"\"Call the wrapped cached function.\n\n Executes the wrapped function, caching and returning the result.\n \"\"\"\n return self._cached_call(args, kwargs)\n
DiskCacheBackend(cache_dir: Optional[Union[PathLike, str]] = None)\n
Bases: CacheBackend
Disk cache backend that stores results in files.
Implements the CacheBackend interface for a disk-based cache. Stores cache entries as pickled files on disk, keyed by cache key. This allows sharing evaluations across processes in a single node/computer.
cache_dir
Base directory for cache storage.
TYPE: Optional[Union[PathLike, str]] DEFAULT: None
Optional[Union[PathLike, str]]
Basic usage:
>>> from pydvl.utils.caching.disk import DiskCacheBackend\n>>> cache_backend = DiskCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> cache_backend.set(\"key\", value)\n>>> cache_backend.get(\"key\")\n42\n
Callable wrapping:
>>> from pydvl.utils.caching.disk import DiskCacheBackend\n>>> cache_backend = DiskCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> def foo(x: int):\n... return x + 1\n...\n>>> wrapped_foo = cache_backend.wrap(foo)\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n0\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n1\n
Base directory for cache storage. If not provided, this defaults to a newly created temporary directory.
src/pydvl/utils/caching/disk.py
def __init__(\n self,\n cache_dir: Optional[Union[os.PathLike, str]] = None,\n) -> None:\n \"\"\"Initialize the disk cache backend.\n\n Args:\n cache_dir: Base directory for cache storage.\n If not provided, this defaults to a newly created\n temporary directory.\n \"\"\"\n super().__init__()\n if cache_dir is None:\n cache_dir = tempfile.mkdtemp(prefix=\"pydvl\")\n self.cache_dir = Path(cache_dir)\n self.cache_dir.mkdir(exist_ok=True, parents=True)\n
get(key: str) -> Optional[Any]\n
Get a value from the cache.
Cache key.
Optional[Any]
Cached value or None if not found.
def get(self, key: str) -> Optional[Any]:\n \"\"\"Get a value from the cache.\n\n Args:\n key: Cache key.\n\n Returns:\n Cached value or None if not found.\n \"\"\"\n cache_file = self.cache_dir / key\n if not cache_file.exists():\n self.stats.misses += 1\n return None\n self.stats.hits += 1\n with cache_file.open(\"rb\") as f:\n return cloudpickle.load(f)\n
set(key: str, value: Any) -> None\n
Set a value in the cache.
Value to cache.
TYPE: Any
Any
def set(self, key: str, value: Any) -> None:\n \"\"\"Set a value in the cache.\n\n Args:\n key: Cache key.\n value: Value to cache.\n \"\"\"\n cache_file = self.cache_dir / key\n self.stats.sets += 1\n with cache_file.open(\"wb\") as f:\n cloudpickle.dump(value, f, protocol=PICKLE_VERSION)\n
Deletes cache directory and recreates it.
def clear(self) -> None:\n \"\"\"Deletes cache directory and recreates it.\"\"\"\n shutil.rmtree(self.cache_dir)\n self.cache_dir.mkdir(exist_ok=True, parents=True)\n
Join cache key components.
def combine_hashes(self, *args: str) -> str:\n \"\"\"Join cache key components.\"\"\"\n return os.pathsep.join(args)\n
MemcachedClientConfig(\n server: Tuple[str, int] = (\"localhost\", 11211),\n connect_timeout: float = 1.0,\n timeout: float = 1.0,\n no_delay: bool = True,\n serde: PickleSerde = PickleSerde(pickle_version=PICKLE_VERSION),\n)\n
Configuration of the memcached client.
server
A tuple of (IP|domain name, port).
TYPE: Tuple[str, int] DEFAULT: ('localhost', 11211)
Tuple[str, int]
('localhost', 11211)
connect_timeout
How many seconds to wait before raising ConnectionRefusedError on failure to connect.
ConnectionRefusedError
timeout
Duration in seconds to wait for send or recv calls on the socket connected to memcached.
no_delay
If True, set the TCP_NODELAY flag, which may help with performance in some cases.
TCP_NODELAY
serde
Serializer / Deserializer (\"serde\"). The default PickleSerde should work in most cases. See pymemcache.client.base.Client for details.
PickleSerde
TYPE: PickleSerde DEFAULT: PickleSerde(pickle_version=PICKLE_VERSION)
PickleSerde(pickle_version=PICKLE_VERSION)
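For instance, to point the backend at a non-default server with longer timeouts. The hostname and values are placeholders, and importing MemcachedClientConfig from pydvl.utils.caching.memcached is an assumption based on where this section documents it.

```python
# NOTE: hostname and timeouts are placeholders; verify the import path.
from pydvl.utils.caching.memcached import (
    MemcachedCacheBackend,
    MemcachedClientConfig,
)

config = MemcachedClientConfig(
    server=("memcached.example.com", 11211),
    connect_timeout=2.0,
    timeout=2.0,
)
cache_backend = MemcachedCacheBackend(config)
```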
MemcachedCacheBackend(config: MemcachedClientConfig = MemcachedClientConfig())\n
Memcached cache backend for the distributed caching of functions.
Implements the CacheBackend interface for a memcached based cache. This allows sharing evaluations across processes and nodes in a cluster. You can run memcached as a service, locally or remotely, see the caching documentation.
Memcached client configuration.
TYPE: MemcachedClientConfig DEFAULT: MemcachedClientConfig()
MemcachedClientConfig
MemcachedClientConfig()
Memcached client instance.
>>> from pydvl.utils.caching.memcached import MemcachedCacheBackend\n>>> cache_backend = MemcachedCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> cache_backend.set(\"key\", value)\n>>> cache_backend.get(\"key\")\n42\n
>>> from pydvl.utils.caching.memcached import MemcachedCacheBackend\n>>> cache_backend = MemcachedCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> def foo(x: int):\n... return x + 1\n...\n>>> wrapped_foo = cache_backend.wrap(foo)\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n0\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n1\n
src/pydvl/utils/caching/memcached.py
def __init__(self, config: MemcachedClientConfig = MemcachedClientConfig()) -> None:\n \"\"\"Initialize memcached cache backend.\n\n Args:\n config: Memcached client configuration.\n \"\"\"\n\n super().__init__()\n self.config = config\n self.client = self._connect(self.config)\n
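A hedged sketch of passing a non-default configuration to the backend. The server address is a placeholder, and the import location of MemcachedClientConfig is an assumption; a memcached instance must be reachable at the configured address for this to work.
from pydvl.utils.caching.memcached import MemcachedCacheBackend, MemcachedClientConfig  # import path assumed\n\n# Point the client at a (hypothetical) remote memcached server with longer timeouts.\nconfig = MemcachedClientConfig(\n    server=(\"memcached.example.com\", 11211),\n    connect_timeout=3.0,\n    timeout=3.0,\n    no_delay=True,\n)\ncache_backend = MemcachedCacheBackend(config)\ncache_backend.set(\"key\", 42)\nassert cache_backend.get(\"key\") == 42\n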
Get value from memcached.
Cached value or None if not found or client disconnected.
def get(self, key: str) -> Optional[Any]:\n \"\"\"Get value from memcached.\n\n Args:\n key: Cache key.\n\n Returns:\n Cached value or None if not found or client disconnected.\n \"\"\"\n result = None\n try:\n result = self.client.get(key)\n except socket.timeout as e:\n self.stats.timeouts += 1\n warnings.warn(f\"{type(self).__name__}: {str(e)}\", RuntimeWarning)\n except OSError as e:\n self.stats.errors += 1\n warnings.warn(f\"{type(self).__name__}: {str(e)}\", RuntimeWarning)\n except AttributeError as e:\n # FIXME: this depends on _recv() failing on invalid sockets\n # See pymemcache.base.py,\n self.stats.reconnects += 1\n warnings.warn(f\"{type(self).__name__}: {str(e)}\", RuntimeWarning)\n self.client = self._connect(self.config)\n if result is None:\n self.stats.misses += 1\n else:\n self.stats.hits += 1\n return result\n
Set value in memcached.
def set(self, key: str, value: Any) -> None:\n \"\"\"Set value in memcached.\n\n Args:\n key: Cache key.\n value: Value to cache.\n \"\"\"\n self.client.set(key, value, noreply=True)\n self.stats.sets += 1\n
Flush all values from memcached.
def clear(self) -> None:\n \"\"\"Flush all values from memcached.\"\"\"\n self.client.flush_all(noreply=True)\n
Join cache key components for Memcached.
def combine_hashes(self, *args: str) -> str:\n \"\"\"Join cache key components for Memcached.\"\"\"\n return \":\".join(args)\n
__getstate__() -> Dict\n
Enables pickling after a socket has been opened to the memcached server, by removing the client from the stored data.
def __getstate__(self) -> Dict:\n \"\"\"Enables pickling after a socket has been opened to the\n memcached server, by removing the client from the stored\n data.\"\"\"\n odict = self.__dict__.copy()\n del odict[\"client\"]\n return odict\n
__setstate__(d: Dict)\n
Restores a client connection after loading from a pickle.
def __setstate__(self, d: Dict):\n \"\"\"Restores a client connection after loading from a pickle.\"\"\"\n self.config = d[\"config\"]\n self.stats = d[\"stats\"]\n self.client = self._connect(self.config)\n
InMemoryCacheBackend()\n
In-memory cache backend that stores results in a dictionary.
Implements the CacheBackend interface for an in-memory-based cache. Stores cache entries as values in a dictionary, keyed by cache key. This allows sharing evaluations across threads in a single process.
The implementation is not thread-safe.
cached_values
Dictionary used to store cached values.
TYPE: Dict[str, Any]
Dict[str, Any]
>>> from pydvl.utils.caching.memory import InMemoryCacheBackend\n>>> cache_backend = InMemoryCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> cache_backend.set(\"key\", value)\n>>> cache_backend.get(\"key\")\n42\n
>>> from pydvl.utils.caching.memory import InMemoryCacheBackend\n>>> cache_backend = InMemoryCacheBackend()\n>>> cache_backend.clear()\n>>> value = 42\n>>> def foo(x: int):\n... return x + 1\n...\n>>> wrapped_foo = cache_backend.wrap(foo)\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n0\n>>> wrapped_foo(value)\n43\n>>> wrapped_foo.stats.misses\n1\n>>> wrapped_foo.stats.hits\n1\n
src/pydvl/utils/caching/memory.py
def __init__(self) -> None:\n \"\"\"Initialize the in-memory cache backend.\"\"\"\n super().__init__()\n self.cached_values: Dict[str, Any] = {}\n
def get(self, key: str) -> Optional[Any]:\n \"\"\"Get a value from the cache.\n\n Args:\n key: Cache key.\n\n Returns:\n Cached value or None if not found.\n \"\"\"\n value = self.cached_values.get(key, None)\n if value is not None:\n self.stats.hits += 1\n else:\n self.stats.misses += 1\n return value\n
def set(self, key: str, value: Any) -> None:\n \"\"\"Set a value in the cache.\n\n Args:\n key: Cache key.\n value: Value to cache.\n \"\"\"\n self.cached_values[key] = value\n self.stats.sets += 1\n
Deletes cache dictionary and recreates it.
def clear(self) -> None:\n \"\"\"Deletes cache dictionary and recreates it.\"\"\"\n del self.cached_values\n self.cached_values = {}\n
This module implements algorithms for the exact and approximate computation of values and semi-values.
See Data valuation for an introduction to the concepts and methods implemented here.
This module provides several predefined games and, depending on the game, the corresponding Shapley values, Least Core values or both of them, for benchmarking purposes.
Castro, J., G\u00f3mez, D. and Tejada, J., 2009. Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research, 36(5), pp.1726-1730.\u00a0\u21a9
DummyGameDataset(n_players: int, description: Optional[str] = None)\n
Dummy game dataset.
Initializes a dummy game dataset with n_players and an optional description.
This class is used internally inside the Game class.
n_players
Number of players that participate in the game.
Optional description of the dataset.
src/pydvl/value/games.py
def __init__(self, n_players: int, description: Optional[str] = None) -> None:\n x = np.arange(0, n_players, 1).reshape(-1, 1)\n nil = np.zeros_like(x)\n super().__init__(\n x,\n nil.copy(),\n nil.copy(),\n nil.copy(),\n feature_names=[\"x\"],\n target_names=[\"y\"],\n description=description,\n )\n
Returns the subsets of the train set instead of the test set.
Indices into the training data.
Subset of the train data.
def get_test_data(\n self, indices: Optional[Iterable[int]] = None\n) -> Tuple[NDArray, NDArray]:\n \"\"\"Returns the subsets of the train set instead of the test set.\n\n Args:\n indices: Indices into the training data.\n\n Returns:\n Subset of the train data.\n \"\"\"\n if indices is None:\n return self.x_train, self.y_train\n x = self.x_train[indices]\n y = self.y_train[indices]\n return x, y\n
DummyModel()\n
Bases: SupervisedModel
Dummy model class.
A dummy supervised model used for testing purposes only.
def __init__(self) -> None:\n pass\n
Game(\n n_players: int,\n score_range: Tuple[float, float] = (-np.inf, np.inf),\n description: Optional[str] = None,\n)\n
Base class for games
Any Game subclass has to implement the abstract _score method to assign a score to each coalition/subset and at least one of shapley_values, least_core_values.
_score
shapley_values
least_core_values
Minimum and maximum values of the _score method.
Optional string description of the dummy dataset that will be created.
Dummy dataset object.
Utility object with a dummy model and dataset.
def __init__(\n self,\n n_players: int,\n score_range: Tuple[float, float] = (-np.inf, np.inf),\n description: Optional[str] = None,\n):\n self.n_players = n_players\n self.data = DummyGameDataset(self.n_players, description)\n self.u = Utility(\n DummyModel(),\n self.data,\n scorer=Scorer(self._score, range=score_range),\n catch_errors=False,\n show_warnings=True,\n )\n
SymmetricVotingGame(n_players: int)\n
Bases: Game
Game
Toy game that is used for testing and demonstration purposes.
A symmetric voting game defined in (Castro et al., 2009)1 Section 4.1
For this game the utility of a coalition is 1 if its cardinality is greater than half the number of players (n_players/2), and 0 otherwise.
def __init__(self, n_players: int) -> None:\n if n_players % 2 != 0:\n raise ValueError(\"n_players must be an even number.\")\n description = \"Dummy data for the symmetric voting game in Castro et al. 2009\"\n super().__init__(\n n_players,\n score_range=(0, 1),\n description=description,\n )\n
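A small sketch of querying the game's utility directly, assuming (as in the marginal-utility code further down this page) that the Utility object accepts an iterable of data indices; the expected outputs follow from the definition above.
from pydvl.value.games import SymmetricVotingGame\n\ngame = SymmetricVotingGame(n_players=4)\n# The coalition {0, 1, 2} has more than n_players / 2 members, so its utility is 1.\nprint(game.u({0, 1, 2}))  # expected: 1.0\n# A coalition of exactly half the players is not enough.\nprint(game.u({0, 1}))  # expected: 0.0\n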
AsymmetricVotingGame(n_players: int = 51)\n
An asymmetric voting game defined in (Castro et al., 2009)1 Section 4.2.
For this game the player set is \\(N = \\{1,\\dots,51\\}\\) and the utility of a coalition \\(S\\) is given by:
\\[ v(S) = \\begin{cases} 1, & \\text{if } \\sum_{i \\in S} w_i > \\frac{1}{2} \\sum_{j \\in N} w_j, \\\\ 0, & \\text{otherwise,} \\end{cases} \\]
where \\(w = [w_1,\\dots, w_{51}]\\) is a list of weights associated with each player.
TYPE: int DEFAULT: 51
51
def __init__(self, n_players: int = 51) -> None:\n if n_players != 51:\n raise ValueError(\n f\"{self.__class__.__name__} only supports n_players=51 but got {n_players=}.\"\n )\n description = \"Dummy data for the asymmetric voting game in Castro et al. 2009\"\n super().__init__(\n n_players,\n score_range=(0, 1),\n description=description,\n )\n\n ranges = [\n range(0, 1),\n range(1, 2),\n range(2, 3),\n range(3, 5),\n range(5, 6),\n range(6, 7),\n range(7, 9),\n range(9, 10),\n range(10, 12),\n range(12, 15),\n range(15, 16),\n range(16, 20),\n range(20, 24),\n range(24, 26),\n range(26, 30),\n range(30, 34),\n range(34, 35),\n range(35, 44),\n range(44, 51),\n ]\n\n ranges_weights = [\n 45,\n 41,\n 27,\n 26,\n 25,\n 21,\n 17,\n 14,\n 13,\n 12,\n 11,\n 10,\n 9,\n 8,\n 7,\n 6,\n 5,\n 4,\n 3,\n ]\n ranges_values = [\n \"0.08831\",\n \"0.07973\",\n \"0.05096\",\n \"0.04898\",\n \"0.047\",\n \"0.03917\",\n \"0.03147\",\n \"0.02577\",\n \"0.02388\",\n \"0.022\",\n \"0.02013\",\n \"0.01827\",\n \"0.01641\",\n \"0.01456\",\n \"0.01272\",\n \"0.01088\",\n \"0.009053\",\n \"0.00723\",\n \"0.005412\",\n ]\n\n self.weight_table = np.zeros(self.n_players)\n exact_values = np.zeros(self.n_players)\n for r, w, v in zip(ranges, ranges_weights, ranges_values):\n self.weight_table[r] = w\n exact_values[r] = v\n\n self.exact_values = exact_values\n self.threshold = np.sum(self.weight_table) / 2\n
ShoesGame(left: int, right: int)\n
A shoes game defined in (Castro et al., 2009)1.
In this game, some players have a left shoe and others a right shoe. Single shoes have a worth of zero while pairs have a worth of 1.
The payoff of a coalition \\(S\\) is:
\\[ v(S) = \\min(|S \\cap L|, |S \\cap R|), \\]
where \\(L\\), respectively \\(R\\), is the set of players with left shoes, respectively right shoes.
left
Number of players with a left shoe.
right
Number of players with a right shoe.
def __init__(self, left: int, right: int) -> None:\n self.left = left\n self.right = right\n n_players = self.left + self.right\n description = \"Dummy data for the shoe game in Castro et al. 2009\"\n max_score = n_players // 2\n super().__init__(n_players, score_range=(0, max_score), description=description)\n
AirportGame(n_players: int = 100)\n
An airport game defined in (Castro et al., 2009)1 Section 4.3
TYPE: int DEFAULT: 100
100
def __init__(self, n_players: int = 100) -> None:\n if n_players != 100:\n raise ValueError(\n f\"{self.__class__.__name__} only supports n_players=100 but got {n_players=}.\"\n )\n description = \"A dummy dataset for the airport game in Castro et al. 2009\"\n super().__init__(n_players, score_range=(0, 100), description=description)\n ranges = [\n range(0, 8),\n range(8, 20),\n range(20, 26),\n range(26, 40),\n range(40, 48),\n range(48, 57),\n range(57, 70),\n range(70, 80),\n range(80, 90),\n range(90, 100),\n ]\n exact = [\n 0.01,\n 0.020869565,\n 0.033369565,\n 0.046883079,\n 0.063549745,\n 0.082780515,\n 0.106036329,\n 0.139369662,\n 0.189369662,\n 0.289369662,\n ]\n c = list(range(1, 10))\n score_table = np.zeros(100)\n exact_values = np.zeros(100)\n\n for r, v in zip(ranges, exact):\n score_table[r] = c\n exact_values[r] = v\n\n self.exact_values = exact_values\n self.score_table = score_table\n
MinimumSpanningTreeGame(n_players: int = 100)\n
A minimum spanning tree game defined in (Castro et al., 2009)1.
Let \\(G = (N \\cup \\{0\\},E)\\) be a valued graph where \\(N = \\{1,\\dots,100\\}\\), and the cost associated to an edge \\((i, j)\\) is:
A minimum spanning tree game \\((N, c)\\) is a cost game, where for a given coalition \\(S \\subset N\\), \\(v(S)\\) is the sum of the edge cost of the minimum spanning tree, i.e. \\(v(S)\\) = Minimum Spanning Tree of the graph \\(G|_{S\\cup\\{0\\}}\\), which is the partial graph restricted to the players \\(S\\) and the source node \\(0\\).
def __init__(self, n_players: int = 100) -> None:\n if n_players != 100:\n raise ValueError(\n f\"{self.__class__.__name__} only supports n_players=100 but got {n_players=}.\"\n )\n description = (\n \"A dummy dataset for the minimum spanning tree game in Castro et al. 2009\"\n )\n super().__init__(n_players, score_range=(0, np.inf), description=description)\n\n graph = np.zeros(shape=(self.n_players, self.n_players))\n\n for i in range(self.n_players):\n for j in range(self.n_players):\n if (\n i == j + 1\n or i == j - 1\n or (i == 1 and j == self.n_players - 1)\n or (i == self.n_players - 1 and j == 1)\n ):\n graph[i, j] = 1\n elif i == 0 or j == 0:\n graph[i, j] = 0\n else:\n graph[i, j] = np.inf\n assert np.all(graph == graph.T)\n\n self.graph = graph\n
MinerGame(n_players: int)\n
Consider a group of n miners, who have discovered large bars of gold.
If two miners can carry one piece of gold, then the payoff of a coalition \\(S\\) is:
\\[ v(S) = \\begin{cases} |S|/2, & \\text{if } |S| \\text{ is even,} \\\\ (|S|-1)/2, & \\text{if } |S| \\text{ is odd.} \\end{cases} \\]
If there are more than two miners and there is an even number of miners, then the core consists of the single payoff where each miner gets 1/2.
If there is an odd number of miners, then the core is empty.
Taken from Wikipedia
Number of miners that participate in the game.
def __init__(self, n_players: int) -> None:\n if n_players <= 2:\n raise ValueError(f\"n_players, {n_players}, should be > 2\")\n description = \"Dummy data for Miner Game taken from https://en.wikipedia.org/wiki/Core_(game_theory)\"\n super().__init__(\n n_players,\n score_range=(0, n_players // 2),\n description=description,\n )\n
This module collects types and methods for the inspection of the results of valuation algorithms.
The most important class is ValuationResult, which provides access to raw values, as well as convenient behaviour as a Sequence with extended indexing and updating abilities, and conversion to pandas DataFrames.
Results can be added together with the standard + operator. Because values are typically running averages produced by iterative algorithms, addition behaves like a weighted average of the two results, with the number of updates in each result acting as the weights. The variances are updated accordingly. See ValuationResult for details.
+
Results can also be sorted by value, variance or number of updates, see sort(). The arrays of ValuationResult.values, ValuationResult.variances, ValuationResult.counts, ValuationResult.indices, ValuationResult.names are sorted in the same way.
Indexing and slicing of results is supported and ValueItem objects are returned. These objects can be compared with the usual operators, which take only the ValueItem.value into account.
The most commonly used factory method is ValuationResult.zeros(), which creates a result object with all values, variances and counts set to zero. ValuationResult.empty() creates an empty result object, which can be used as a starting point for adding results together. Empty results are discarded when added to other results. Finally, ValuationResult.from_random() samples random values uniformly.
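A short sketch of the three factory methods just described; the algorithm name used here is arbitrary.
from pydvl.value.result import ValuationResult\n\nzeros = ValuationResult.zeros(algorithm=\"demo\", n_samples=3)    # values, variances and counts all zero\nempty = ValuationResult.empty(algorithm=\"demo\")                  # discarded when added to another result\nrandom = ValuationResult.from_random(size=3, total=1.0, seed=7)  # uniform values rescaled to sum to 1\n\nassert len((empty + zeros).values) == 3  # the empty result is discarded on addition\n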
ValueItem(\n index: IndexT,\n name: NameT,\n value: float,\n variance: Optional[float],\n count: Optional[int],\n)\n
Bases: Generic[IndexT, NameT]
Generic[IndexT, NameT]
The result of a value computation for one datum.
ValueItems can be compared with the usual operators, forming a total order. Comparisons take only the value into account.
ValueItems
Maybe have a mode of comparing similar to np.isclose, or taking the variance into account.
np.isclose
Index of the sample with this value in the original Dataset
TYPE: IndexT
IndexT
Name of the sample if it was provided. Otherwise, str(index)
str(index)
TYPE: NameT
NameT
The value
Variance of the value if it was computed with an approximate method
TYPE: Optional[float]
Optional[float]
Number of updates for this value
TYPE: Optional[int]
stderr: Optional[float]\n
Standard error of the value.
ValuationResult(\n *,\n values: NDArray[float_],\n variances: Optional[NDArray[float_]] = None,\n counts: Optional[NDArray[int_]] = None,\n indices: Optional[NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n algorithm: str = \"\",\n status: Status = Status.Pending,\n sort: bool = False,\n **extra_values\n)\n
Bases: Sequence, Iterable[ValueItem[IndexT, NameT]], Generic[IndexT, NameT]
Iterable[ValueItem[IndexT, NameT]]
Objects of this class hold the results of valuation algorithms.
These include indices in the original Dataset, any data names (e.g. group names in GroupedDataset), the values themselves, and variance of the computation in the case of Monte Carlo methods. ValuationResults can be iterated over like any Sequence: iter(valuation_result) returns a generator of ValueItem in the order in which the object is sorted.
ValuationResults
iter(valuation_result)
Indexing can be position-based, when accessing any of the attributes values, variances, counts and indices, as well as when iterating over the object, or using the item access operator, both getter and setter. The \"position\" is either the original sequence in which the data was passed to the constructor, or the sequence in which the object is sorted, see below.
Alternatively, indexing can be data-based, i.e. using the indices in the original dataset. This is the case for the methods get() and update().
Results can be sorted in-place with sort(), or alternatively using Python's standard sorted() and reversed(). Note that sorting values affects how iterators and the object itself as Sequence behave: values[0] returns a ValueItem with the highest or lowest ranking point if this object is sorted by descending or ascending value, respectively. If unsorted, values[0] returns the ValueItem at position 0, which has data index indices[0] in the Dataset.
sorted()
reversed()
values[0]
ValueItem
indices[0]
The same applies to direct indexing of the ValuationResult: the index is positional, according to the sorting. It does not refer to the \"data index\". To sort according to data index, use sort() with key=\"index\".
key=\"index\"
In order to access ValueItem objects by their data index, use get().
Results can be added to each other with the + operator. Means and variances are correctly updated, using the counts attribute.
counts
Results can also be updated with new values using update(). Means and variances are updated accordingly using the Welford algorithm.
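A small sketch of the running update just described: two updates of the same data index yield their running mean.
from pydvl.value.result import ValuationResult\n\nv = ValuationResult.zeros(algorithm=\"demo\", n_samples=2)\nv.update(0, 1.0)\nv.update(0, 3.0)\nitem = v.get(0)            # access by data index\nassert item.value == 2.0   # running mean of the two updates\nassert item.count == 2\n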
Empty objects behave in a special way, see empty().
An array of values. If omitted, defaults to an empty array or to an array of zeros if indices are given.
An optional array of indices in the original dataset. If omitted, defaults to np.arange(len(values)). Warning: It is common to pass the indices of a Dataset here. Attention must be paid in a parallel context to copy them to the local process. Just do indices=np.copy(data.indices).
np.arange(len(values))
indices=np.copy(data.indices)
TYPE: Optional[NDArray[IndexT]] DEFAULT: None
Optional[NDArray[IndexT]]
variances
An optional array of variances in the computation of each value.
TYPE: Optional[NDArray[float_]] DEFAULT: None
Optional[NDArray[float_]]
An optional array with the number of updates for each value. Defaults to an array of ones.
TYPE: Optional[NDArray[int_]] DEFAULT: None
Optional[NDArray[int_]]
Names for the data points. Defaults to index numbers if not set.
TYPE: Optional[Sequence[NameT] | NDArray[NameT]] DEFAULT: None
Optional[Sequence[NameT] | NDArray[NameT]]
algorithm
The method used.
status
The end status of the algorithm.
TYPE: Status DEFAULT: Pending
sort
Whether to sort the indices by ascending value. See above how this affects usage as an iterable or sequence.
Additional values that can be passed as keyword arguments. This can contain, for example, the least core value.
If input arrays have mismatching lengths.
src/pydvl/value/result.py
def __init__(\n self,\n *,\n values: NDArray[np.float_],\n variances: Optional[NDArray[np.float_]] = None,\n counts: Optional[NDArray[np.int_]] = None,\n indices: Optional[NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n algorithm: str = \"\",\n status: Status = Status.Pending,\n sort: bool = False,\n **extra_values,\n):\n if variances is not None and len(variances) != len(values):\n raise ValueError(\"Lengths of values and variances do not match\")\n if data_names is not None and len(data_names) != len(values):\n raise ValueError(\"Lengths of values and data_names do not match\")\n if indices is not None and len(indices) != len(values):\n raise ValueError(\"Lengths of values and indices do not match\")\n\n self._algorithm = algorithm\n self._status = Status(status) # Just in case we are given a string\n self._values = values\n self._variances = np.zeros_like(values) if variances is None else variances\n self._counts = np.ones_like(values) if counts is None else counts\n self._sort_order = None\n self._extra_values = extra_values or {}\n\n # Yuk...\n if data_names is None:\n if indices is not None:\n self._names = np.copy(indices)\n else:\n self._names = np.arange(len(self._values), dtype=np.int_)\n elif not isinstance(data_names, np.ndarray):\n self._names = np.array(data_names)\n else:\n self._names = data_names.copy()\n if len(np.unique(self._names)) != len(self._names):\n raise ValueError(\"Data names must be unique\")\n\n if indices is None:\n indices = np.arange(len(self._values), dtype=np.int_)\n self._indices = indices\n self._positions = {idx: pos for pos, idx in enumerate(indices)}\n\n self._sort_positions: NDArray[np.int_] = np.arange(\n len(self._values), dtype=np.int_\n )\n if sort:\n self.sort()\n
values: NDArray[float_]\n
The values, possibly sorted.
variances: NDArray[float_]\n
The variances, possibly sorted.
stderr: NDArray[float_]\n
The raw standard errors, possibly sorted.
counts: NDArray[int_]\n
The raw counts, possibly sorted.
indices: NDArray[IndexT]\n
The indices for the values, possibly sorted.
If the object is unsorted, then these are the same as declared at construction or np.arange(len(values)) if none were passed.
names: NDArray[NameT]\n
The names for the values, possibly sorted. If the object is unsorted, then these are the same as declared at construction or np.arange(len(values)) if none were passed.
sort(\n reverse: bool = False,\n key: Literal[\"value\", \"variance\", \"index\", \"name\"] = \"value\",\n) -> None\n
Sorts the indices in place by key.
Once sorted, iteration over the results, and indexing of all the properties ValuationResult.values, ValuationResult.variances, ValuationResult.counts, ValuationResult.indices and ValuationResult.names will follow the same order.
reverse
Whether to sort in descending order by value.
The key to sort by. Defaults to ValueItem.value.
TYPE: Literal['value', 'variance', 'index', 'name'] DEFAULT: 'value'
Literal['value', 'variance', 'index', 'name']
'value'
def sort(\n self,\n reverse: bool = False,\n # Need a \"Comparable\" type here\n key: Literal[\"value\", \"variance\", \"index\", \"name\"] = \"value\",\n) -> None:\n \"\"\"Sorts the indices in place by `key`.\n\n Once sorted, iteration over the results, and indexing of all the\n properties\n [ValuationResult.values][pydvl.value.result.ValuationResult.values],\n [ValuationResult.variances][pydvl.value.result.ValuationResult.variances],\n [ValuationResult.counts][pydvl.value.result.ValuationResult.counts],\n [ValuationResult.indices][pydvl.value.result.ValuationResult.indices]\n and [ValuationResult.names][pydvl.value.result.ValuationResult.names]\n will follow the same order.\n\n Args:\n reverse: Whether to sort in descending order by value.\n key: The key to sort by. Defaults to\n [ValueItem.value][pydvl.value.result.ValueItem].\n \"\"\"\n keymap = {\n \"index\": \"_indices\",\n \"value\": \"_values\",\n \"variance\": \"_variances\",\n \"name\": \"_names\",\n }\n self._sort_positions = np.argsort(getattr(self, keymap[key]))\n if reverse:\n self._sort_positions = self._sort_positions[::-1]\n self._sort_order = reverse\n
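A short sketch of how sorting changes the views returned by the properties listed above.
import numpy as np\nfrom pydvl.value.result import ValuationResult\n\nv = ValuationResult(values=np.array([3.0, 1.0, 2.0]))\nv.sort()                                  # ascending by value\nassert list(v.values) == [1.0, 2.0, 3.0]\nassert list(v.indices) == [1, 2, 0]       # data indices follow the same order\nv.sort(reverse=True)                      # descending by value\nassert v.values[0] == 3.0\n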
__getattr__(attr: str) -> Any\n
Allows access to extra values as if they were properties of the instance.
def __getattr__(self, attr: str) -> Any:\n \"\"\"Allows access to extra values as if they were properties of the instance.\"\"\"\n # This is here to avoid a RecursionError when copying or pickling the object\n if attr == \"_extra_values\":\n raise AttributeError()\n try:\n return self._extra_values[attr]\n except KeyError as e:\n raise AttributeError(\n f\"{self.__class__.__name__} object has no attribute {attr}\"\n ) from e\n
__iter__() -> Iterator[ValueItem[IndexT, NameT]]\n
Iterate over the results returning ValueItem objects. To sort in place before iteration, use sort().
def __iter__(self) -> Iterator[ValueItem[IndexT, NameT]]:\n \"\"\"Iterate over the results returning [ValueItem][pydvl.value.result.ValueItem] objects.\n To sort in place before iteration, use [sort()][pydvl.value.result.ValuationResult.sort].\n \"\"\"\n for pos in self._sort_positions:\n yield ValueItem(\n self._indices[pos],\n self._names[pos],\n self._values[pos],\n self._variances[pos],\n self._counts[pos],\n )\n
__add__(\n other: ValuationResult[IndexT, NameT]\n) -> ValuationResult[IndexT, NameT]\n
Adds two ValuationResults.
The values must have been computed with the same algorithm. An exception to this is if one argument has empty values, in which case the other argument is returned.
Abusing this will introduce numerical errors.
Means and standard errors are correctly handled. Statuses are added with bit-wise &, see Status. data_names are taken from the left summand, or if unavailable from the right one. The algorithm string is carried over if both terms have the same one or concatenated.
It is possible to add ValuationResults of different lengths, and with different or overlapping indices. The result will have the union of indices, and the values.
FIXME: Arbitrary extra_values aren't handled.
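A numeric sketch of the weighted-average behaviour: the counts act as weights when merging the two means.
import numpy as np\nfrom pydvl.value.result import ValuationResult\n\na = ValuationResult(values=np.array([1.0]), counts=np.array([1]), algorithm=\"demo\")\nb = ValuationResult(values=np.array([3.0]), counts=np.array([3]), algorithm=\"demo\")\nc = a + b\nassert c.values[0] == 2.5   # (1 * 1.0 + 3 * 3.0) / 4\nassert c.counts[0] == 4\n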
def __add__(\n self, other: ValuationResult[IndexT, NameT]\n) -> ValuationResult[IndexT, NameT]:\n \"\"\"Adds two ValuationResults.\n\n The values must have been computed with the same algorithm. An exception\n to this is if one argument has empty values, in which case the other\n argument is returned.\n\n !!! Warning\n Abusing this will introduce numerical errors.\n\n Means and standard errors are correctly handled. Statuses are added with\n bit-wise `&`, see [Status][pydvl.value.result.Status].\n `data_names` are taken from the left summand, or if unavailable from\n the right one. The `algorithm` string is carried over if both terms\n have the same one or concatenated.\n\n It is possible to add ValuationResults of different lengths, and with\n different or overlapping indices. The result will have the union of\n indices, and the values.\n\n !!! Warning\n FIXME: Arbitrary `extra_values` aren't handled.\n\n \"\"\"\n # empty results\n if len(self.values) == 0:\n return other\n if len(other.values) == 0:\n return self\n\n self._check_compatible(other)\n\n indices = np.union1d(self._indices, other._indices).astype(self._indices.dtype)\n this_pos = np.searchsorted(indices, self._indices)\n other_pos = np.searchsorted(indices, other._indices)\n\n n: NDArray[np.int_] = np.zeros_like(indices, dtype=int)\n m: NDArray[np.int_] = np.zeros_like(indices, dtype=int)\n xn: NDArray[np.int_] = np.zeros_like(indices, dtype=float)\n xm: NDArray[np.int_] = np.zeros_like(indices, dtype=float)\n vn: NDArray[np.int_] = np.zeros_like(indices, dtype=float)\n vm: NDArray[np.int_] = np.zeros_like(indices, dtype=float)\n\n n[this_pos] = self._counts\n xn[this_pos] = self._values\n vn[this_pos] = self._variances\n m[other_pos] = other._counts\n xm[other_pos] = other._values\n vm[other_pos] = other._variances\n\n # np.maximum(1, n + m) covers case n = m = 0.\n n_m_sum = np.maximum(1, n + m)\n\n # Sample mean of n+m samples from two means of n and m samples\n xnm = (n * xn + m * xm) / n_m_sum\n\n # Sample variance of n+m samples from two sample variances of n and m samples\n vnm = (n * (vn + xn**2) + m * (vm + xm**2)) / n_m_sum - xnm**2\n\n if np.any(vnm < 0):\n if np.any(vnm < -1e-6):\n logger.warning(\n \"Numerical error in variance computation. 
\"\n f\"Negative sample variances clipped to 0 in {vnm}\"\n )\n vnm[np.where(vnm < 0)] = 0\n\n # Merging of names:\n # If an index has the same name in both results, it must be the same.\n # If an index has a name in one result but not the other, the name is\n # taken from the result with the name.\n if self._names.dtype != other._names.dtype:\n if np.can_cast(other._names.dtype, self._names.dtype, casting=\"safe\"):\n other._names = other._names.astype(self._names.dtype)\n logger.warning(\n f\"Casting ValuationResult.names from {other._names.dtype} to {self._names.dtype}\"\n )\n else:\n raise TypeError(\n f\"Cannot cast ValuationResult.names from \"\n f\"{other._names.dtype} to {self._names.dtype}\"\n )\n\n both_pos = np.intersect1d(this_pos, other_pos)\n\n if len(both_pos) > 0:\n this_names: NDArray = np.empty_like(indices, dtype=object)\n other_names: NDArray = np.empty_like(indices, dtype=object)\n this_names[this_pos] = self._names\n other_names[other_pos] = other._names\n\n this_shared_names = np.take(this_names, both_pos)\n other_shared_names = np.take(other_names, both_pos)\n\n if np.any(this_shared_names != other_shared_names):\n raise ValueError(f\"Mismatching names in ValuationResults\")\n\n names = np.empty_like(indices, dtype=self._names.dtype)\n names[this_pos] = self._names\n names[other_pos] = other._names\n\n return ValuationResult(\n algorithm=self.algorithm or other.algorithm or \"\",\n status=self.status & other.status,\n indices=indices,\n values=xnm,\n variances=vnm,\n counts=n + m,\n data_names=names,\n # FIXME: What to do with extra_values? This is not commutative:\n # extra_values=self._extra_values.update(other._extra_values),\n )\n
update(idx: int, new_value: float) -> ValuationResult[IndexT, NameT]\n
Updates the result in place with a new value, using running mean and variance.
idx
Data index of the value to update.
New value to add to the result.
ValuationResult[IndexT, NameT]
A reference to the same, modified result.
IndexError
If the index is not found.
def update(self, idx: int, new_value: float) -> ValuationResult[IndexT, NameT]:\n \"\"\"Updates the result in place with a new value, using running mean\n and variance.\n\n Args:\n idx: Data index of the value to update.\n new_value: New value to add to the result.\n\n Returns:\n A reference to the same, modified result.\n\n Raises:\n IndexError: If the index is not found.\n \"\"\"\n try:\n pos = self._positions[idx]\n except KeyError:\n raise IndexError(f\"Index {idx} not found in ValuationResult\")\n val, var = running_moments(\n self._values[pos], self._variances[pos], self._counts[pos], new_value\n )\n self[pos] = ValueItem(\n index=cast(IndexT, idx), # FIXME\n name=self._names[pos],\n value=val,\n variance=var,\n count=self._counts[pos] + 1,\n )\n return self\n
scale(factor: float, indices: Optional[NDArray[IndexT]] = None)\n
Scales the values and variances of the result by a coefficient.
factor
Factor to scale by.
Indices to scale. If None, all values are scaled.
def scale(self, factor: float, indices: Optional[NDArray[IndexT]] = None):\n \"\"\"\n Scales the values and variances of the result by a coefficient.\n\n Args:\n factor: Factor to scale by.\n indices: Indices to scale. If None, all values are scaled.\n \"\"\"\n self._values[self._sort_positions[indices]] *= factor\n self._variances[self._sort_positions[indices]] *= factor**2\n
get(idx: Integral) -> ValueItem\n
Retrieves a ValueItem by data index, as opposed to sort index, like the indexing operator.
def get(self, idx: Integral) -> ValueItem:\n \"\"\"Retrieves a ValueItem by data index, as opposed to sort index, like\n the indexing operator.\n\n Raises:\n IndexError: If the index is not found.\n \"\"\"\n try:\n pos = self._positions[idx]\n except KeyError:\n raise IndexError(f\"Index {idx} not found in ValuationResult\")\n\n return ValueItem(\n self._indices[pos],\n self._names[pos],\n self._values[pos],\n self._variances[pos],\n self._counts[pos],\n )\n
to_dataframe(\n column: Optional[str] = None, use_names: bool = False\n) -> DataFrame\n
Returns values as a dataframe.
column
Name for the column holding the data value. Defaults to the name of the algorithm used.
use_names
Whether to use data names instead of indices for the DataFrame's index.
A dataframe with two columns, one for the values, with name given as explained in column, and another with standard errors for approximate algorithms. The latter will be named column+'_stderr'.
column+'_stderr'
def to_dataframe(\n self, column: Optional[str] = None, use_names: bool = False\n) -> pd.DataFrame:\n \"\"\"Returns values as a dataframe.\n\n Args:\n column: Name for the column holding the data value. Defaults to\n the name of the algorithm used.\n use_names: Whether to use data names instead of indices for the\n DataFrame's index.\n\n Returns:\n A dataframe with two columns, one for the values, with name\n given as explained in `column`, and another with standard errors for\n approximate algorithms. The latter will be named `column+'_stderr'`.\n \"\"\"\n column = column or self._algorithm\n df = pd.DataFrame(\n self._values[self._sort_positions],\n index=(\n self._names[self._sort_positions]\n if use_names\n else self._indices[self._sort_positions]\n ),\n columns=[column],\n )\n df[column + \"_stderr\"] = self.stderr[self._sort_positions]\n df[column + \"_updates\"] = self.counts[self._sort_positions]\n return df\n
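A brief sketch of the conversion; note that, as the listing above shows, a column with the number of updates is also included alongside the values and standard errors.
from pydvl.value.result import ValuationResult\n\nv = ValuationResult.from_random(size=3, seed=7)\ndf = v.to_dataframe(column=\"value\")\nprint(df.columns.tolist())  # ['value', 'value_stderr', 'value_updates']\n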
from_random(\n size: int,\n total: Optional[float] = None,\n seed: Optional[Seed] = None,\n **kwargs\n) -> \"ValuationResult\"\n
Creates a ValuationResult object and fills it with an array of random values from a uniform distribution in [-1,1]. The values can be made to sum up to a given total number (doing so will change their range).
Number of values to generate
total
If set, the values are normalized to sum to this number (\"efficiency\" property of Shapley values).
TYPE: Optional[float] DEFAULT: None
Additional options to pass to the constructor of ValuationResult. Use to override status, names, etc.
'ValuationResult'
A valuation result with its status set to
Status.Converged by default.
If size is less than 1.
Added parameter total. Check for zero size
@classmethod\ndef from_random(\n cls,\n size: int,\n total: Optional[float] = None,\n seed: Optional[Seed] = None,\n **kwargs,\n) -> \"ValuationResult\":\n \"\"\"Creates a [ValuationResult][pydvl.value.result.ValuationResult] object and fills it with an array\n of random values from a uniform distribution in [-1,1]. The values can\n be made to sum up to a given total number (doing so will change their range).\n\n Args:\n size: Number of values to generate\n total: If set, the values are normalized to sum to this number\n (\"efficiency\" property of Shapley values).\n kwargs: Additional options to pass to the constructor of\n [ValuationResult][pydvl.value.result.ValuationResult]. Use to override status, names, etc.\n\n Returns:\n A valuation result with its status set to\n [Status.Converged][pydvl.utils.status.Status] by default.\n\n Raises:\n ValueError: If `size` is less than 1.\n\n !!! tip \"Changed in version 0.6.0\"\n Added parameter `total`. Check for zero size\n \"\"\"\n if size < 1:\n raise ValueError(\"Size must be a positive integer\")\n\n rng = np.random.default_rng(seed)\n values = rng.uniform(low=-1, high=1, size=size)\n if total is not None:\n values *= total / np.sum(values)\n\n options = dict(values=values, status=Status.Converged, algorithm=\"random\")\n options.update(kwargs)\n return cls(**options) # type: ignore\n
empty(\n algorithm: str = \"\",\n indices: Optional[Sequence[IndexT] | NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n n_samples: int = 0,\n) -> ValuationResult\n
Creates an empty ValuationResult object.
Empty results are characterised by having an empty array of values. When another result is added to an empty one, the empty one is discarded.
Name of the algorithm used to compute the values
Optional sequence or array of indices.
TYPE: Optional[Sequence[IndexT] | NDArray[IndexT]] DEFAULT: None
Optional[Sequence[IndexT] | NDArray[IndexT]]
Optional sequences or array of names for the data points. Defaults to index numbers if not set.
Number of valuation result entries.
TYPE: int DEFAULT: 0
0
Object with the results.
@classmethod\ndef empty(\n cls,\n algorithm: str = \"\",\n indices: Optional[Sequence[IndexT] | NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n n_samples: int = 0,\n) -> ValuationResult:\n \"\"\"Creates an empty [ValuationResult][pydvl.value.result.ValuationResult] object.\n\n Empty results are characterised by having an empty array of values. When\n another result is added to an empty one, the empty one is discarded.\n\n Args:\n algorithm: Name of the algorithm used to compute the values\n indices: Optional sequence or array of indices.\n data_names: Optional sequences or array of names for the data points.\n Defaults to index numbers if not set.\n n_samples: Number of valuation result entries.\n\n Returns:\n Object with the results.\n \"\"\"\n if indices is not None or data_names is not None or n_samples != 0:\n return cls.zeros(\n algorithm=algorithm,\n indices=indices,\n data_names=data_names,\n n_samples=n_samples,\n )\n return cls(algorithm=algorithm, status=Status.Pending, values=np.array([]))\n
zeros(\n algorithm: str = \"\",\n indices: Optional[Sequence[IndexT] | NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n n_samples: int = 0,\n) -> ValuationResult\n
Empty results are characterised by having an empty array of values. When another result is added to an empty one, the empty one is ignored.
Data indices to use. A copy will be made. If not given, the indices will be set to the range [0, n_samples).
[0, n_samples)
Data names to use. A copy will be made. If not given, the names will be set to the string representation of the indices.
Number of data points whose values are computed. If not given, the length of indices will be used.
@classmethod\ndef zeros(\n cls,\n algorithm: str = \"\",\n indices: Optional[Sequence[IndexT] | NDArray[IndexT]] = None,\n data_names: Optional[Sequence[NameT] | NDArray[NameT]] = None,\n n_samples: int = 0,\n) -> ValuationResult:\n \"\"\"Creates an empty [ValuationResult][pydvl.value.result.ValuationResult] object.\n\n Empty results are characterised by having an empty array of values. When\n another result is added to an empty one, the empty one is ignored.\n\n Args:\n algorithm: Name of the algorithm used to compute the values\n indices: Data indices to use. A copy will be made. If not given,\n the indices will be set to the range `[0, n_samples)`.\n data_names: Data names to use. A copy will be made. If not given,\n the names will be set to the string representation of the indices.\n n_samples: Number of data points whose values are computed. If\n not given, the length of `indices` will be used.\n\n Returns:\n Object with the results.\n \"\"\"\n if indices is None:\n indices = np.arange(n_samples, dtype=np.int_)\n else:\n indices = np.array(indices, dtype=np.int_)\n\n if data_names is None:\n data_names = np.array(indices)\n else:\n data_names = np.array(data_names)\n\n return cls(\n algorithm=algorithm,\n status=Status.Pending,\n indices=indices,\n data_names=data_names,\n values=np.zeros(len(indices)),\n variances=np.zeros(len(indices)),\n counts=np.zeros(len(indices), dtype=np.int_),\n )\n
Samplers iterate over subsets of indices.
The classes in this module are used to iterate over an index set \\(I\\) as required for the computation of marginal utility for semi-values. The elements returned when iterating over any subclass of PowersetSampler are tuples of the form \\((i, S)\\), where \\(i\\) is an index of interest, and \\(S \\subset I \\setminus \\{i\\}\\) is a subset of the complement of \\(i\\).
PowersetSampler
The iteration happens in two nested loops. An outer loop iterates over \\(I\\), and an inner loop iterates over the powerset of \\(I \\setminus \\{i\\}\\). The outer iteration can be either sequential or at random.
This is the natural mode of iteration for the combinatorial definition of semi-values, in particular Shapley value. For the computation using permutations, adhering to this interface is not ideal, but we stick to it for consistency.
The samplers are used in the semivalues module to compute any semi-value, in particular Shapley and Beta values, and Banzhaf indices.
The samplers can be sliced for parallel computation. For those which are embarrassingly parallel, this is done by slicing the set of \"outer\" indices and returning new samplers over those slices. This includes all truly powerset-based samplers, such as DeterministicUniformSampler and UniformSampler. In contrast, slicing a PermutationSampler creates a new sampler which iterates over the same indices.
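A small sketch of the slicing just described, assuming the indexing operator shown for PermutationSampler further down is also available on the powerset-based samplers.
import numpy as np\nfrom pydvl.value.sampler import DeterministicUniformSampler\n\nsampler = DeterministicUniformSampler(np.arange(4))\nfirst_half = sampler[:2]      # a new sampler over a slice of the outer indices\nassert len(first_half) == 2   # __len__ counts the outer indices\n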
Mitchell, Rory, Joshua Cooper, Eibe Frank, and Geoffrey Holmes. Sampling Permutations for Shapley Value Estimation. Journal of Machine Learning Research 23, no. 43 (2022): 1\u201346.\u00a0\u21a9
Wang, J.T. and Jia, R., 2023. Data Banzhaf: A Robust Data Valuation Framework for Machine Learning. In: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, pp. 6388-6421.\u00a0\u21a9
PowersetSampler(\n indices: NDArray[IndexT],\n index_iteration: IndexIteration = IndexIteration.Sequential,\n outer_indices: NDArray[IndexT] | None = None,\n **kwargs\n)\n
Bases: ABC, Iterable[SampleT], Generic[IndexT]
Iterable[SampleT]
Generic[IndexT]
Samplers are custom iterables over subsets of indices.
Calling iter() on a sampler returns an iterator over tuples of the form \\((i, S)\\), where \\(i\\) is an index of interest, and \\(S \\subset I \\setminus \\{i\\}\\) is a subset of the complement of \\(i\\).
iter()
This is done in two nested loops, where the outer loop iterates over the set of indices, and the inner loop iterates over subsets of the complement of the current index. The outer iteration can be either sequential or at random.
Samplers are not iterators themselves, so that each call to iter() e.g. in a for loop creates a new iterator.
>>>for idx, s in DeterministicUniformSampler(np.arange(2)):\n>>> print(s, end=\"\")\n[][2,][][1,]\n
Samplers must implement a weight() function to be used as a multiplier in Monte Carlo sums, so that the limit expectation coincides with the semi-value.
The samplers can be sliced for parallel computation. For those which are embarrassingly parallel, this is done by slicing the set of \"outer\" indices and returning new samplers over those slices.
indices: The set of items (indices) to sample from.
index_iteration: The order in which indices are iterated over.
outer_indices: The set of items (indices) over which to iterate when sampling. Subsets are taken from the complement of each index in succession. For embarrassingly parallel computations, this set is sliced and the samplers are used to iterate over the slices.
src/pydvl/value/sampler.py
def __init__(\n self,\n indices: NDArray[IndexT],\n index_iteration: IndexIteration = IndexIteration.Sequential,\n outer_indices: NDArray[IndexT] | None = None,\n **kwargs,\n):\n \"\"\"\n Args:\n indices: The set of items (indices) to sample from.\n index_iteration: the order in which indices are iterated over\n outer_indices: The set of items (indices) over which to iterate\n when sampling. Subsets are taken from the complement of each index\n in succession. For embarrassingly parallel computations, this set\n is sliced and the samplers are used to iterate over the slices.\n \"\"\"\n self._indices = indices\n self._index_iteration = index_iteration\n self._outer_indices = outer_indices if outer_indices is not None else indices\n self._n = len(indices)\n self._n_samples = 0\n
iterindices() -> Iterator[IndexT]\n
Iterates over indices in the order specified at construction.
FIXME: this is probably not very useful, but I couldn't decide which method is better.
def iterindices(self) -> Iterator[IndexT]:\n \"\"\"Iterates over indices in the order specified at construction.\n\n FIXME: this is probably not very useful, but I couldn't decide\n which method is better\n \"\"\"\n if self._index_iteration is PowersetSampler.IndexIteration.Sequential:\n for idx in self._outer_indices:\n yield idx\n elif self._index_iteration is PowersetSampler.IndexIteration.Random:\n while True:\n yield np.random.choice(self._outer_indices, size=1).item()\n
__len__() -> int\n
Returns the number of outer indices over which the sampler iterates.
def __len__(self) -> int:\n \"\"\"Returns the number of outer indices over which the sampler iterates.\"\"\"\n return len(self._outer_indices)\n
weight(n: int, subset_len: int) -> float\n
Factor by which to multiply Monte Carlo samples, so that the mean converges to the desired expression.
By the Law of Large Numbers, the sample mean of \\(\\delta_i(S_j)\\) converges to the expectation under the distribution from which \\(S_j\\) is sampled:
\\[ \\frac{1}{m} \\sum_{j = 1}^m \\delta_i (S_j) c (S_j) \\longrightarrow \\underset{S \\sim \\mathcal{D}_{- i}}{\\mathbb{E}} [\\delta_i (S) c(S)]. \\]
We add a factor \\(c(S_j)\\) in order to have this expectation coincide with the desired expression.
@classmethod\n@abc.abstractmethod\ndef weight(cls, n: int, subset_len: int) -> float:\n r\"\"\"Factor by which to multiply Monte Carlo samples, so that the\n mean converges to the desired expression.\n\n By the Law of Large Numbers, the sample mean of $\\delta_i(S_j)$\n converges to the expectation under the distribution from which $S_j$ is\n sampled.\n\n $$ \\frac{1}{m} \\sum_{j = 1}^m \\delta_i (S_j) c (S_j) \\longrightarrow\n \\underset{S \\sim \\mathcal{D}_{- i}}{\\mathbb{E}} [\\delta_i (S) c (\n S)]$$\n\n We add a factor $c(S_j)$ in order to have this expectation coincide with\n the desired expression.\n \"\"\"\n ...\n
StochasticSamplerMixin(*args, seed: Optional[Seed] = None, **kwargs)\n
Mixin class for samplers which use a random number generator.
def __init__(self, *args, seed: Optional[Seed] = None, **kwargs):\n super().__init__(*args, **kwargs)\n self._rng = np.random.default_rng(seed)\n
DeterministicUniformSampler(indices: NDArray[IndexT], *args, **kwargs)\n
Bases: PowersetSampler[IndexT]
PowersetSampler[IndexT]
For every index \\(i\\), each subset of the complement indices - {i} is returned.
indices - {i}
Indices are always iterated over sequentially, irrespective of the value of index_iteration upon construction.
index_iteration
>>> for idx, s in DeterministicUniformSampler(np.arange(2)):\n>>> print(f\"{idx} - {s}\", end=\", \")\n1 - [], 1 - [2], 2 - [], 2 - [1],\n
The set of items (indices) to sample from.
TYPE: NDArray[IndexT]
NDArray[IndexT]
def __init__(self, indices: NDArray[IndexT], *args, **kwargs):\n \"\"\"An iterator to perform uniform deterministic sampling of subsets.\n\n For every index $i$, each subset of the complement `indices - {i}` is\n returned.\n\n !!! Note\n Indices are always iterated over sequentially, irrespective of\n the value of `index_iteration` upon construction.\n\n ??? Example\n ``` pycon\n >>> for idx, s in DeterministicUniformSampler(np.arange(2)):\n >>> print(f\"{idx} - {s}\", end=\", \")\n 1 - [], 1 - [2], 2 - [], 2 - [1],\n ```\n\n Args:\n indices: The set of items (indices) to sample from.\n \"\"\"\n # Force sequential iteration\n kwargs.update({\"index_iteration\": PowersetSampler.IndexIteration.Sequential})\n super().__init__(indices, *args, **kwargs)\n
UniformSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
Bases: StochasticSamplerMixin, PowersetSampler[IndexT]
StochasticSamplerMixin
An iterator to perform uniform random sampling of subsets.
Iterating over every index \\(i\\), either in sequence or at random depending on the value of index_iteration, one subset of the complement indices - {i} is sampled with equal probability \\(2^{-(n-1)}\\). The iterator never ends.
The code
for idx, s in UniformSampler(np.arange(5)):\n    print(f\"{idx} - {s}\", end=\", \")\n
0 - [1 4], 1 - [2 3], 2 - [0 1 3], 3 - [], 4 - [2], 0 - [1 3 4], 1 - [0 2]\n(...)\n
Correction coming from Monte Carlo integration so that the mean of the marginals converges to the value: the uniform distribution over the powerset of a set with n-1 elements has mass \\(2^{-(n-1)}\\) over each subset.
@classmethod\ndef weight(cls, n: int, subset_len: int) -> float:\n \"\"\"Correction coming from Monte Carlo integration so that the mean of\n the marginals converges to the value: the uniform distribution over the\n powerset of a set with n-1 elements has mass 2^{n-1} over each subset.\"\"\"\n return float(2 ** (n - 1)) if n > 0 else 1.0\n
MSRSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
An iterator to perform sampling of random subsets.
This sampler does not return any index; it only returns subsets of the data. It is used in (Wang et al.)2.
AntitheticSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
An iterator to perform uniform random sampling of subsets, and their complements.
Works as UniformSampler, but for every tuple \\((i,S)\\), it subsequently returns \\((i,S^c)\\), where \\(S^c\\) is the complement of the set \\(S\\) in the set of indices, excluding \\(i\\).
PermutationSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
Sample permutations of indices and iterate through each returning increasing subsets, as required for the permutation definition of semi-values.
This sampler does not implement the two loops described in PowersetSampler. Instead, for a permutation (3,1,4,2), it returns in sequence the tuples of index and sets: (3, {}), (1, {3}), (4, {3,1}) and (2, {3,1,4}).
(3,1,4,2)
(3, {})
(1, {3})
(4, {3,1})
(2, {3,1,4})
Note that the full index set is never returned.
This sampler requires caching to be enabled, or computation will be doubled wrt. a \"direct\" implementation of permutation MC.
__getitem__(key: slice | list[int]) -> PowersetSampler[IndexT]\n
Permutation samplers cannot be split across indices, so we return a copy of the full sampler.
def __getitem__(self, key: slice | list[int]) -> PowersetSampler[IndexT]:\n \"\"\"Permutation samplers cannot be split across indices, so we return\n a copy of the full sampler.\"\"\"\n return super().__getitem__(slice(None))\n
AntitheticPermutationSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
Bases: PermutationSampler[IndexT]
PermutationSampler[IndexT]
Samples permutations like PermutationSampler, but after each permutation, it returns the same permutation in reverse order.
This sampler was suggested in (Mitchell et al. 2022)1
New in version 0.7.1
DeterministicPermutationSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
Samples all n! permutations of the indices deterministically, and iterates through them, returning sets as required for the permutation-based definition of semi-values.
This sampler is not parallelizable, as it always iterates over the whole set of permutations in the same order. Different processes would always return the same values for all indices.
RandomHierarchicalSampler(*args, seed: Optional[Seed] = None, **kwargs)\n
For every index, sample a set size, then a set of that size.
This is unnecessary, but a step towards proper stratified sampling.
This module provides the core functionality for the computation of generic semi-values. A semi-value is any valuation function with the form:
\\[ v_\\text{semi}(i) = \\sum_{k=0}^{n-1} w(k) \\sum_{S \\subseteq D_{-i}, |S| = k} [U(S \\cup \\{i\\}) - U(S)], \\]
where the coefficients \\(w(k)\\) satisfy the normalization property:
\\[ \\sum_{k=0}^{n-1} \\binom{n-1}{k} w(k) = 1. \\]
For implementation consistency, we slightly depart from the common definition of semi-values, which includes a factor \\(1/n\\) in the sum over subsets. Instead, we subsume this factor into the coefficient \\(w(k)\\).
The computation of a semi-value requires two components:
Samplers can be found in sampler, and can be classified into two categories: powerset samplers and permutation samplers. Powerset samplers generate subsets of \\(D_{-i}\\), while the permutation sampler generates permutations of \\(D\\). The former conform to the above definition of semi-values, while the latter reformulates it as:
\\[ v_\\text{semi}(i) = \\frac{1}{n!} \\sum_{\\sigma \\in \\Pi(D)} \\tilde{w}(|\\sigma_{:i}|) [U(\\sigma_{:i} \\cup \\{i\\}) - U(\\sigma_{:i})], \\]
where \\(\\sigma_{:i}\\) denotes the set of indices in permutation \\(\\sigma\\) before the position where \\(i\\) appears (see Data valuation for details), and
\\[ \\tilde{w}(k) = n \\binom{n-1}{k} w(k) \\]
is the weight correction due to the reformulation.
Both PermutationSampler and DeterministicPermutationSampler require caching to be enabled or computation will be doubled wrt. a 'direct' implementation of permutation MC.
Samplers and coefficients can be arbitrarily mixed by means of the main entry point of this module, compute_generic_semivalues. There are several pre-defined coefficients, including the Shapley value of (Ghorbani and Zou, 2019)1, the Banzhaf index of (Wang and Jia)3, and the Beta coefficient of (Kwon and Zou, 2022)2. For each of these methods, there is a convenience wrapper function. Respectively, these are: compute_shapley_semivalues, compute_banzhaf_semivalues, and compute_beta_shapley_semivalues.
Parallelization and batching
In order to ensure reproducibility and fine-grained control of parallelization, samples are generated in the main process and then distributed to worker processes for evaluation. For small sample sizes, this can lead to a significant overhead. To avoid this, we temporarily provide an additional argument batch_size to all methods which can improve performance with small models up to an order of magnitude. Note that this argument will be removed before version 1.0 in favour of a more general solution.
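A hedged sketch of passing batch_size to one of the convenience wrappers named above. The done argument and the MaxUpdates stopping criterion are assumptions not documented on this page; check the function's own reference for the exact signature.
from pydvl.value.semivalues import compute_banzhaf_semivalues\nfrom pydvl.value.stopping import MaxUpdates  # assumed import path for a stopping criterion\n\n# u is a pydvl Utility wrapping your model, data and scorer (not constructed here)\nvalues = compute_banzhaf_semivalues(\n    u,\n    done=MaxUpdates(1000),  # assumption: stop after 1000 value updates\n    batch_size=16,          # number of samples sent to each worker at once\n    n_jobs=4,\n)\n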
Ghorbani, A., Zou, J., 2019. Data Shapley: Equitable Valuation of Data for Machine Learning. In: Proceedings of the 36th International Conference on Machine Learning, PMLR, pp. 2242\u20132251.\u00a0\u21a9
Kwon, Y. and Zou, J., 2022. Beta Shapley: A Unified and Noise-reduced Data Valuation Framework for Machine Learning. In: Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022, Vol. 151. PMLR, Valencia, Spain.\u00a0\u21a9
The protocol that coefficients for the computation of semi-values must fulfill.
__call__(n: int, k: int) -> float\n
Computes the coefficient for a given subset size.
Total number of elements in the set.
Size of the subset for which the coefficient is being computed
src/pydvl/value/semivalues.py
def __call__(self, n: int, k: int) -> float:\n \"\"\"Computes the coefficient for a given subset size.\n\n Args:\n n: Total number of elements in the set.\n k: Size of the subset for which the coefficient is being computed\n \"\"\"\n ...\n
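Any callable with this signature satisfies the protocol. As an illustration only, here is a constant coefficient corresponding to the Banzhaf convention \\(w(k) = 2^{-(n-1)}\\); the function name is hypothetical and pyDVL ships its own coefficients, so this is not needed in practice.
def banzhaf_like_coefficient(n: int, k: int) -> float:\n    # Every subset size receives the same weight, independent of k.\n    return 1.0 / 2 ** (n - 1)\n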
Bases: MarginalFunction
MarginalFunction
__call__(\n u: Utility, coefficient: SVCoefficient, samples: Iterable[SampleT]\n) -> Tuple[MarginalT, ...]\n
Computation of marginal utility. This is a helper function for compute_generic_semivalues.
coefficient
The semivalue coefficient and sampler weight
TYPE: SVCoefficient
SVCoefficient
samples
A collection of samples. Each sample is a tuple of index and subset of indices to compute a marginal utility.
TYPE: Iterable[SampleT]
Tuple[MarginalT, ...]
A collection of marginals. Each marginal is a tuple with index and its marginal utility.
def __call__(\n self, u: Utility, coefficient: SVCoefficient, samples: Iterable[SampleT]\n) -> Tuple[MarginalT, ...]:\n \"\"\"Computation of marginal utility. This is a helper function for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues].\n\n Args:\n u: Utility object with model, data, and scoring function.\n coefficient: The semivalue coefficient and sampler weight\n samples: A collection of samples. Each sample is a tuple of index and subset of\n indices to compute a marginal utility.\n\n Returns:\n A collection of marginals. Each marginal is a tuple with index and its marginal\n utility.\n \"\"\"\n n = len(u.data)\n marginals: List[MarginalT] = []\n for idx, s in samples:\n marginal = (u({idx}.union(s)) - u(s)) * coefficient(n, len(s))\n marginals.append((idx, marginal))\n return tuple(marginals)\n
Computation of raw utility without marginalization. This is a helper function for compute_generic_semivalues.
Tuple[MarginalT, ...]
A collection of marginals. Each marginal is a tuple with index and its raw utility.
def __call__(\n self, u: Utility, coefficient: SVCoefficient, samples: Iterable[SampleT]\n) -> Tuple[MarginalT, ...]:\n \"\"\"Computation of raw utility without marginalization. This is a helper function for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues].\n\n Args:\n u: Utility object with model, data, and scoring function.\n coefficient: The semivalue coefficient and sampler weight\n samples: A collection of samples. Each sample is a tuple of index and subset of\n indices to compute a marginal utility.\n\n Returns:\n A collection of marginals. Each marginal is a tuple with index and its raw utility.\n \"\"\"\n marginals: List[MarginalT] = []\n for idx, s in samples:\n marginals.append((s, u(s)))\n return tuple(marginals)\n
The FutureProcessor class used to process the results of the parallel marginal evaluations.
The marginals are evaluated in parallel by n_jobs threads, but some algorithms require a central method to postprocess the marginal results. This can be achieved through the future processor. This base class does not perform any postprocessing; it is a no-op used in most data valuation algorithms.
MSRFutureProcessor(u: Utility)\n
Bases: FutureProcessor
FutureProcessor
This FutureProcessor processes the raw marginals in a way that MSR sampling requires.
MSR sampling evaluates the utility once, and then updates all data semivalues based on this single evaluation. In order to do this, the RawUtility value needs to be postprocessed through this class. For more details on MSR, please refer to the paper (Wang et al.)3. This processor keeps track of the current values and computes marginals for all data points, so that the values in the ValuationResult can be updated properly down the line.
def __init__(self, u: Utility):\n self.n = len(u.data)\n self.all_indices = u.data.indices.copy()\n self.point_in_subset = np.zeros((self.n,))\n self.positive_sums = np.zeros((self.n,))\n self.negative_sums = np.zeros((self.n,))\n self.total_evaluations = 0\n
__call__(\n future_result: List[Tuple[List[IndexT], float]]\n) -> List[List[MarginalT]]\n
Computation of marginal utility using Maximum Sample Reuse.
This processor requires the marginal function to be set to RawUtility. Then, this processor computes marginals based on the utility value and the index set provided.
The final formula that gives the Banzhaf semivalue using MSR is: $$\\hat{\\phi}_{MSR}(i) = \\frac{1}{|\\mathbf{S}_{\\ni i}|} \\sum_{S \\in \\mathbf{S}_{\\ni i}} U(S) - \\frac{1}{|\\mathbf{S}_{\\not{\\ni} i}|} \\sum_{S \\in \\mathbf{S}_{\\not{\\ni} i}} U(S)$$
future_result: Result of the parallel computing jobs, comprised of a list of indices that were used to evaluate the utility, and the evaluation result (metric).
Returns a collection of marginals. Each marginal is a tuple with index and its marginal utility.
def __call__(\n self, future_result: List[Tuple[List[IndexT], float]]\n) -> List[List[MarginalT]]:\n \"\"\"Computation of marginal utility using Maximum Sample Reuse.\n\n This processor requires the Marginal Function to be set to RawUtility.\n Then, this processor computes marginals based on the utility value and the index set provided.\n\n The final formula that gives the Banzhaf semivalue using MSR is:\n $$\\hat{\\phi}_{MSR}(i) = \\frac{1}{|\\mathbf{S}_{\\ni i}|} \\sum_{S \\in \\mathbf{S}_{\\ni i}} U(S)\n - \\frac{1}{|\\mathbf{S}_{\\not{\\ni} i}|} \\sum_{S \\in \\mathbf{S}_{\\not{\\ni} i}} U(S)$$\n\n Args:\n future_result: Result of the parallel computing jobs comprised of\n a list of indices that were used to evaluate the utility, and the evaluation result (metric).\n\n Returns:\n A collection of marginals. Each marginal is a tuple with index and its marginal\n utility.\n \"\"\"\n marginals: List[List[MarginalT]] = []\n for batch_id, (s, evaluation) in enumerate(future_result):\n previous_values = self.compute_values()\n self.total_evaluations += 1\n self.point_in_subset[s] += 1\n self.positive_sums[s] += evaluation\n not_s = np.setdiff1d(self.all_indices, s)\n self.negative_sums[not_s] += evaluation\n new_values = self.compute_values()\n # Hack to work around the update mechanic that does not work out of the box for MSR\n marginal_vals = (\n self.total_evaluations * new_values\n - (self.total_evaluations - 1) * previous_values\n )\n marginals.append([])\n for data_index in range(self.n):\n marginals[batch_id].append(\n (data_index, float(marginal_vals[data_index]))\n )\n return marginals\n
Enumeration of semi-value modes.
This enum and the associated methods are deprecated and will be removed in 0.8.0.
compute_generic_semivalues(\n sampler: PowersetSampler[IndexT],\n u: Utility,\n coefficient: SVCoefficient,\n done: StoppingCriterion,\n *,\n marginal: MarginalFunction = DefaultMarginal(),\n future_processor: FutureProcessor = FutureProcessor(),\n batch_size: int = 1,\n skip_converged: bool = False,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False\n) -> ValuationResult\n
Computes semi-values for a given utility function and subset sampler.
sampler
The subset sampler to use for utility computations.
TYPE: PowersetSampler[IndexT]
The semi-value coefficient
marginal
Marginal function to be used for computing the semivalues
TYPE: MarginalFunction DEFAULT: DefaultMarginal()
DefaultMarginal()
future_processor
Additional postprocessing steps required for some algorithms
TYPE: FutureProcessor DEFAULT: FutureProcessor()
FutureProcessor()
Number of marginal evaluations per single parallel job.
skip_converged
Whether to skip marginal evaluations for indices that have already converged. CAUTION: This is only entirely safe if the stopping criterion is MaxUpdates. For any other stopping criterion, the convergence status of indices may change during the computation, or they may be marked as having converged even though in fact the estimated values are far from the true values (e.g. for AbsoluteStandardError, you will probably have to carefully adjust the threshold).
Number of parallel jobs to use.
Whether to display a progress bar.
Parameter batch_size is for experimental use and will be removed in future versions.
Changed in version 0.9.0
Deprecated config argument and added a parallel_backend argument to allow users to pass the Parallel Backend instance directly.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_generic_semivalues(\n sampler: PowersetSampler[IndexT],\n u: Utility,\n coefficient: SVCoefficient,\n done: StoppingCriterion,\n *,\n marginal: MarginalFunction = DefaultMarginal(),\n future_processor: FutureProcessor = FutureProcessor(),\n batch_size: int = 1,\n skip_converged: bool = False,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n) -> ValuationResult:\n \"\"\"Computes semi-values for a given utility function and subset sampler.\n\n Args:\n sampler: The subset sampler to use for utility computations.\n u: Utility object with model, data, and scoring function.\n coefficient: The semi-value coefficient\n done: Stopping criterion.\n marginal: Marginal function to be used for computing the semivalues\n future_processor: Additional postprocessing steps required for some algorithms\n batch_size: Number of marginal evaluations per single parallel job.\n skip_converged: Whether to skip marginal evaluations for indices that\n have already converged. **CAUTION**: This is only entirely safe if\n the stopping criterion is [MaxUpdates][pydvl.value.stopping.MaxUpdates].\n For any other stopping criterion, the convergence status of indices\n may change during the computation, or they may be marked as having\n converged even though in fact the estimated values are far from the\n true values (e.g. for\n [AbsoluteStandardError][pydvl.value.stopping.AbsoluteStandardError],\n you will probably have to carefully adjust the threshold).\n n_jobs: Number of parallel jobs to use.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n if isinstance(sampler, PermutationSampler) and u.cache is None:\n log.warning(\n \"PermutationSampler requires caching to be enabled or computation \"\n \"will be doubled wrt. 
a 'direct' implementation of permutation MC\"\n )\n\n if batch_size != 1:\n warnings.warn(\n \"Parameter `batch_size` is for experimental use and will be\"\n \" removed in future versions\",\n DeprecationWarning,\n )\n\n result = ValuationResult.zeros(\n algorithm=f\"semivalue-{str(sampler)}-{coefficient.__name__}\", # type: ignore\n indices=u.data.indices,\n data_names=u.data.data_names,\n )\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n u = parallel_backend.put(u)\n correction = parallel_backend.put(\n lambda n, k: coefficient(n, k) * sampler.weight(n, k)\n )\n\n max_workers = parallel_backend.effective_n_jobs(n_jobs)\n n_submitted_jobs = 2 * max_workers # number of jobs in the queue\n\n sampler_it = iter(sampler)\n pbar = tqdm(disable=not progress, total=100, unit=\"%\")\n\n with parallel_backend.executor(\n max_workers=max_workers, cancel_futures=True\n ) as executor:\n pending: set[Future] = set()\n while True:\n pbar.n = 100 * done.completion()\n pbar.refresh()\n\n completed, pending = wait(pending, timeout=1, return_when=FIRST_COMPLETED)\n for future in completed:\n processed_future = future_processor(\n future.result()\n ) # List of tuples or\n for batch_future in processed_future:\n if isinstance(batch_future, list): # Case when batch size is > 1\n for idx, marginal_val in batch_future:\n result.update(idx, marginal_val)\n else: # Batch size 1\n idx, marginal_val = batch_future\n result.update(idx, marginal_val)\n if done(result):\n return result\n\n # Ensure that we always have n_submitted_jobs running\n try:\n while len(pending) < n_submitted_jobs:\n samples = tuple(islice(sampler_it, batch_size))\n if len(samples) == 0:\n raise StopIteration\n\n # Filter out samples for indices that have already converged\n filtered_samples = samples\n if skip_converged and np.count_nonzero(done.converged) > 0:\n # TODO: cloudpickle can't pickle result of `filter` on python 3.8\n filtered_samples = tuple(\n filter(lambda t: not done.converged[t[0]], samples)\n )\n\n if filtered_samples:\n pending.add(\n executor.submit(\n marginal,\n u=u,\n coefficient=correction,\n samples=filtered_samples,\n )\n )\n except StopIteration:\n if len(pending) == 0:\n return result\n
compute_shapley_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes Shapley values for a given utility function.
This is a convenience wrapper for compute_generic_semivalues with the Shapley coefficient. Use compute_shapley_values for a more flexible interface and additional methods, including TMCS.
sampler_t
The sampler type to use. See the sampler module for a list.
TYPE: Type[StochasticSampler] DEFAULT: PermutationSampler
Type[StochasticSampler]
PermutationSampler
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_shapley_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n \"\"\"Computes Shapley values for a given utility function.\n\n This is a convenience wrapper for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues]\n with the Shapley coefficient. Use\n [compute_shapley_values][pydvl.value.shapley.common.compute_shapley_values]\n for a more flexible interface and additional methods, including TMCS.\n\n Args:\n u: Utility object with model, data, and scoring function.\n done: Stopping criterion.\n sampler_t: The sampler type to use. See the\n [sampler][pydvl.value.sampler] module for a list.\n batch_size: Number of marginal evaluations per single parallel job.\n n_jobs: Number of parallel jobs to use.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n # HACK: cannot infer return type because of useless IndexT, NameT\n return compute_generic_semivalues( # type: ignore\n sampler_t(u.data.indices, seed=seed),\n u,\n shapley_coefficient,\n done,\n batch_size=batch_size,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n config=config,\n progress=progress,\n )\n
compute_banzhaf_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes Banzhaf values for a given utility function.
This is a convenience wrapper for compute_generic_semivalues with the Banzhaf coefficient.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_banzhaf_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n \"\"\"Computes Banzhaf values for a given utility function.\n\n This is a convenience wrapper for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues]\n with the Banzhaf coefficient.\n\n Args:\n u: Utility object with model, data, and scoring function.\n done: Stopping criterion.\n sampler_t: The sampler type to use. See the\n [sampler][pydvl.value.sampler] module for a list.\n batch_size: Number of marginal evaluations per single parallel job.\n n_jobs: Number of parallel jobs to use.\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n # HACK: cannot infer return type because of useless IndexT, NameT\n return compute_generic_semivalues( # type: ignore\n sampler_t(u.data.indices, seed=seed),\n u,\n banzhaf_coefficient,\n done,\n batch_size=batch_size,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n config=config,\n progress=progress,\n )\n
compute_msr_banzhaf_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = MSRSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes MSR sampled Banzhaf values for a given utility function.
This is a convenience wrapper for compute_generic_semivalues with the Banzhaf coefficient and MSR sampling.
This algorithm works by sampling random subsets and then evaluating the utility on that subset only once. Based on the evaluation and the subset indices, the MSRFutureProcessor then computes the marginal updates as in the paper (Wang et al.)3. Their approach updates the semivalues for all data points every time a new evaluation is computed. This increases sample efficiency compared to normal Monte Carlo updates.
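A minimal usage sketch, with a placeholder utility and an arbitrary stopping criterion:

```python
from pydvl.value.semivalues import compute_msr_banzhaf_semivalues
from pydvl.value.stopping import MaxUpdates

# `utility` is a previously constructed Utility object (placeholder).
# Every sampled subset triggers a single utility evaluation that updates all indices.
values = compute_msr_banzhaf_semivalues(utility, done=MaxUpdates(1000), n_jobs=4)
```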
TYPE: Type[StochasticSampler] DEFAULT: MSRSampler
MSRSampler
Object configuring parallel computation, with cluster address, number of cpus, etc.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_msr_banzhaf_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = MSRSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n \"\"\"Computes MSR sampled Banzhaf values for a given utility function.\n\n This is a convenience wrapper for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues]\n with the Banzhaf coefficient and MSR sampling.\n\n This algorithm works by sampling random subsets and then evaluating the utility\n on that subset only once. Based on the evaluation and the subset indices,\n the MSRFutureProcessor then computes the marginal updates like in the paper\n (Wang et. al.)<sup><a href=\"wang_data_2023\">3</a></sup>.\n Their approach updates the semivalues for all data points every time a new evaluation\n is computed. This increases sample efficiency compared to normal Monte Carlo updates.\n\n Args:\n u: Utility object with model, data, and scoring function.\n done: Stopping criterion.\n sampler_t: The sampler type to use. See the\n [sampler][pydvl.value.sampler] module for a list.\n batch_size: Number of marginal evaluations per single parallel job.\n n_jobs: Number of parallel jobs to use.\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n config: Object configuring parallel computation, with cluster address,\n number of cpus, etc.\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n \"\"\"\n # HACK: cannot infer return type because of useless IndexT, NameT\n return compute_generic_semivalues( # type: ignore\n sampler_t(u.data.indices, seed=seed),\n u,\n always_one_coefficient,\n done,\n marginal=RawUtility(),\n future_processor=MSRFutureProcessor(u),\n batch_size=batch_size,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n config=config,\n progress=progress,\n )\n
compute_beta_shapley_semivalues(\n u: Utility,\n *,\n alpha: float = 1,\n beta: float = 1,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes Beta Shapley values for a given utility function.
This is a convenience wrapper for compute_generic_semivalues with the Beta Shapley coefficient.
alpha
Alpha parameter of the Beta distribution.
TYPE: float DEFAULT: 1
beta
Beta parameter of the Beta distribution.
Number of marginal evaluations per (parallelized) task.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_beta_shapley_semivalues(\n u: Utility,\n *,\n alpha: float = 1,\n beta: float = 1,\n done: StoppingCriterion,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n \"\"\"Computes Beta Shapley values for a given utility function.\n\n This is a convenience wrapper for\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues]\n with the Beta Shapley coefficient.\n\n Args:\n u: Utility object with model, data, and scoring function.\n alpha: Alpha parameter of the Beta distribution.\n beta: Beta parameter of the Beta distribution.\n done: Stopping criterion.\n sampler_t: The sampler type to use. See the\n [sampler][pydvl.value.sampler] module for a list.\n batch_size: Number of marginal evaluations per (parallelized) task.\n n_jobs: Number of parallel jobs to use.\n seed: Either an instance of a numpy random number generator or a seed for it.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n # HACK: cannot infer return type because of useless IndexT, NameT\n return compute_generic_semivalues( # type: ignore\n sampler_t(u.data.indices, seed=seed),\n u,\n beta_coefficient(alpha, beta),\n done,\n batch_size=batch_size,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n config=config,\n progress=progress,\n )\n
compute_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n mode: SemiValueMode = SemiValueMode.Shapley,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n seed: Optional[Seed] = None,\n **kwargs\n) -> ValuationResult\n
Convenience entry point for most common semi-value computations.
Deprecation warning
This method is deprecated and will be replaced in 0.8.0 by the more general implementation of compute_generic_semivalues. Use compute_shapley_semivalues, compute_banzhaf_semivalues, or compute_beta_shapley_semivalues instead.
The modes supported with this interface are the following; for greater flexibility use compute_generic_semivalues directly.
SemiValueMode.Shapley: Shapley values.
SemiValueMode.BetaShapley: the Beta Shapley semi-value introduced in (Kwon and Zou, 2022)2. Pass additional keyword arguments alpha and beta to set the parameters of the Beta distribution (both default to 1).
SemiValueMode.Banzhaf: the Banzhaf semi-value introduced in (Wang and Jia)3.
See Data valuation for an overview of valuation.
The semi-value mode to use. See SemiValueMode for a list.
TYPE: SemiValueMode DEFAULT: Shapley
SemiValueMode
Shapley
The sampler type to use. See sampler for a list.
Additional keyword arguments passed to compute_generic_semivalues.
@deprecated(target=True, deprecated_in=\"0.7.0\", remove_in=\"0.8.0\")\ndef compute_semivalues(\n u: Utility,\n *,\n done: StoppingCriterion,\n mode: SemiValueMode = SemiValueMode.Shapley,\n sampler_t: Type[StochasticSampler] = PermutationSampler,\n batch_size: int = 1,\n n_jobs: int = 1,\n seed: Optional[Seed] = None,\n **kwargs,\n) -> ValuationResult:\n \"\"\"Convenience entry point for most common semi-value computations.\n\n !!! warning \"Deprecation warning\"\n This method is deprecated and will be replaced in 0.8.0 by the more\n general implementation of\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues].\n Use\n [compute_shapley_semivalues][pydvl.value.semivalues.compute_shapley_semivalues],\n [compute_banzhaf_semivalues][pydvl.value.semivalues.compute_banzhaf_semivalues],\n or\n [compute_beta_shapley_semivalues][pydvl.value.semivalues.compute_beta_shapley_semivalues]\n instead.\n\n The modes supported with this interface are the following. For greater\n flexibility use\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues]\n directly.\n\n - [SemiValueMode.Shapley][pydvl.value.semivalues.SemiValueMode]:\n Shapley values.\n - [SemiValueMode.BetaShapley][pydvl.value.semivalues.SemiValueMode]:\n Implements the Beta Shapley semi-value as introduced in\n (Kwon and Zou, 2022)<sup><a href=\"#kwon_beta_2022\">1</a></sup>.\n Pass additional keyword arguments `alpha` and `beta` to set the\n parameters of the Beta distribution (both default to 1).\n - [SemiValueMode.Banzhaf][pydvl.value.semivalues.SemiValueMode]: Implements\n the Banzhaf semi-value as introduced in (Wang and Jia, 2022)<sup><a\n href=\"#wang_data_2023\">1</a></sup>.\n\n See [Data valuation][data-valuation] for an overview of valuation.\n\n Args:\n u: Utility object with model, data, and scoring function.\n done: Stopping criterion.\n mode: The semi-value mode to use. See\n [SemiValueMode][pydvl.value.semivalues.SemiValueMode] for a list.\n sampler_t: The sampler type to use. See [sampler][pydvl.value.sampler]\n for a list.\n batch_size: Number of marginal evaluations per (parallelized) task.\n n_jobs: Number of parallel jobs to use.\n seed: Either an instance of a numpy random number generator or a seed for it.\n kwargs: Additional keyword arguments passed to\n [compute_generic_semivalues][pydvl.value.semivalues.compute_generic_semivalues].\n\n Returns:\n Object with the results.\n\n !!! warning \"Deprecation notice\"\n Parameter `batch_size` is for experimental use and will be removed in\n future versions.\n \"\"\"\n if mode == SemiValueMode.Shapley:\n coefficient = shapley_coefficient\n elif mode == SemiValueMode.BetaShapley:\n alpha = kwargs.pop(\"alpha\", 1)\n beta = kwargs.pop(\"beta\", 1)\n coefficient = beta_coefficient(alpha, beta)\n elif mode == SemiValueMode.Banzhaf:\n coefficient = banzhaf_coefficient\n else:\n raise ValueError(f\"Unknown mode {mode}\")\n coefficient = cast(SVCoefficient, coefficient)\n\n # HACK: cannot infer return type because of useless IndexT, NameT\n return compute_generic_semivalues( # type: ignore\n sampler_t(u.data.indices, seed=seed),\n u,\n coefficient,\n done,\n n_jobs=n_jobs,\n batch_size=batch_size,\n **kwargs,\n )\n
Stopping criteria for value computations.
This module provides a basic set of stopping criteria, like MaxUpdates, MaxTime, or HistoryDeviation among others. These can behave in different ways depending on the context. For example, MaxUpdates limits the number of updates to values, which depending on the algorithm may mean a different number of utility evaluations or imply other computations like solving a linear or quadratic program.
Stopping criteria are callables that are evaluated on a ValuationResult and return a Status object. They can be combined using boolean operators.
Most stopping criteria keep track of the convergence of each index separately but make global decisions based on the overall convergence of some fraction of all indices. For example, if we have a stopping criterion that checks whether the standard error of 90% of values is below a threshold, then methods will keep updating all indices until 90% of them have converged, irrespective of the quality of the individual estimates, and without freezing updates for indices along the way as values individually attain low standard error.
This has some practical implications, because some values do tend to converge sooner than others. For example, assume we use the criterion AbsoluteStandardError(0.02) | MaxUpdates(1000). Then values close to 0 might be marked as \"converged\" rather quickly because they fulfill the first criterion, say after 20 iterations, despite being poor estimates. Because other indices take much longer to have low standard error and the criterion is a global check, the \"converged\" ones keep being updated and end up being good estimates. In this case, this has been beneficial, but one might not wish for converged values to be updated, if one is sure that the criterion is adequate for individual values.
AbsoluteStandardError(0.02) | MaxUpdates(1000)
Semi-value methods include a parameter skip_converged that allows skipping the computation of values that have already converged. The way to avoid doing this too early is to use a more stringent check, e.g. AbsoluteStandardError(1e-3) | MaxUpdates(1000). With skip_converged=True this check can still take less time than the first one, despite requiring more iterations for some indices.
AbsoluteStandardError(1e-3) | MaxUpdates(1000)
skip_converged=True
The choice of a stopping criterion greatly depends on the algorithm and the context. A safe bet is to combine a MaxUpdates or a MaxTime with a HistoryDeviation or an AbsoluteStandardError. The former will ensure that the computation does not run for too long, while the latter will try to achieve results that are stable enough. Note however that if the threshold is too strict, one will always end up running until a maximum number of iterations or time. Also keep in mind that different values converge at different times, so you might want to use tight thresholds and skip_converged as described above for semi-values.
from pydvl.value import AbsoluteStandardError, MaxUpdates, compute_banzhaf_semivalues\n\nutility = ... # some utility object\ncriterion = AbsoluteStandardError(threshold=1e-3, burn_in=32) | MaxUpdates(1000)\nvalues = compute_banzhaf_semivalues(\n utility,\n criterion,\n skip_converged=True, # skip values that have converged (CAREFUL!)\n)\n
utility
1e-3
burn_in
32
Be careful not to reuse the same stopping criterion for different computations. The object has state and will not be reset between calls to value computation methods. If you need to reuse the same criterion, you should create a new instance.
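For example, with two placeholder utilities, each computation should receive its own criterion instance:

```python
from pydvl.value import MaxUpdates, compute_banzhaf_semivalues

# Criteria keep internal state, so do not share one instance across runs.
values_a = compute_banzhaf_semivalues(utility_a, done=MaxUpdates(1000))
values_b = compute_banzhaf_semivalues(utility_b, done=MaxUpdates(1000))  # fresh instance
```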
The easiest way is to declare a function implementing the interface StoppingCriterionCallable and wrap it with make_criterion(). This creates a StoppingCriterion object that can be composed with other stopping criteria.
Alternatively, and in particular if reporting of completion is required, one can inherit from this class and implement the abstract methods _check and completion.
_check
Objects of type StoppingCriterion can be combined with the binary operators & (and), and | (or), following the truth tables of Status. The unary operator ~ (not) is also supported. See StoppingCriterion for details on how these operations affect the behavior of the stopping criteria.
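The following sketch wraps a made-up criterion with make_criterion(); the convergence rule is purely illustrative and the import paths are those suggested by this page.

```python
import numpy as np

from pydvl.utils.status import Status
from pydvl.value import ValuationResult
from pydvl.value.stopping import MaxUpdates, make_criterion


def all_updated(result: ValuationResult) -> Status:
    """Toy criterion: converged once every index has been updated at least once."""
    return Status.Converged if np.all(result.counts > 0) else Status.Pending


AllUpdated = make_criterion(all_updated, name="AllUpdated")
done = AllUpdated() | MaxUpdates(1000)  # composes like any other StoppingCriterion
```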
Signature for a stopping criterion
StoppingCriterion(modify_result: bool = True)\n
A composable callable object to determine whether a computation must stop.
A StoppingCriterion is a callable taking a ValuationResult and returning a Status. It also keeps track of individual convergence of values with converged, and reports the overall completion of the computation with completion.
Instances of StoppingCriterion can be composed with the binary operators & (and), and | (or), following the truth tables of Status. The unary operator ~ (not) is also supported. These boolean operations act according to the following rules:
check()
Subclassing this class requires implementing a check() method that returns a Status object based on a given ValuationResult. This method should update the attribute _converged, which is a boolean array indicating whether the value for each index has converged. When this does not make sense for a particular stopping criterion, completion should be overridden to provide an overall completion value, since its default implementation attempts to compute the mean of _converged.
_converged
modify_result
If True, the status of the input ValuationResult is modified in place after the call.
src/pydvl/value/stopping.py
def __init__(self, modify_result: bool = True):\n self.modify_result = modify_result\n self._converged = np.full(0, False)\n
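Alternatively, a hand-rolled subclass might look like the sketch below; the variance-threshold rule is invented for illustration and relies on the _check hook and the _converged attribute described above.

```python
from pydvl.utils.status import Status
from pydvl.value import ValuationResult
from pydvl.value.stopping import StoppingCriterion


class SmallVariance(StoppingCriterion):
    """Toy criterion: an index counts as converged once its variance estimate
    falls below a fixed threshold (illustrative logic only)."""

    def __init__(self, threshold: float, modify_result: bool = True):
        super().__init__(modify_result=modify_result)
        self.threshold = threshold

    def _check(self, result: ValuationResult) -> Status:
        # Update the per-index convergence flags used by the default completion()
        self._converged = result.variances < self.threshold
        return Status.Converged if self._converged.all() else Status.Pending
```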
converged: NDArray[bool_]\n
Returns a boolean array indicating whether the values have converged for each data point.
Inheriting classes must set the _converged attribute in their check().
NDArray[bool_]
A boolean array indicating whether the values have converged for each data point.
completion() -> float\n
Returns a value between 0 and 1 indicating the completion of the computation.
def completion(self) -> float:\n \"\"\"Returns a value between 0 and 1 indicating the completion of the\n computation.\n \"\"\"\n if self.converged.size == 0:\n return 0.0\n return float(np.mean(self.converged).item())\n
__call__(result: ValuationResult) -> Status\n
Calls check(), maybe updating the result.
def __call__(self, result: ValuationResult) -> Status:\n \"\"\"Calls `check()`, maybe updating the result.\"\"\"\n if len(result) == 0:\n logger.warning(\n \"At least one iteration finished but no results where generated. \"\n \"Please check that your scorer and utility return valid numbers.\"\n )\n status = self._check(result)\n if self.modify_result: # FIXME: this is not nice\n result._status = status\n return status\n
AbsoluteStandardError(\n threshold: float,\n fraction: float = 1.0,\n burn_in: int = 4,\n modify_result: bool = True,\n)\n
Bases: StoppingCriterion
Determine convergence based on the standard error of the values.
If \\(s_i\\) is the standard error for datum \\(i\\), then this criterion returns Converged if \\(s_i < \\epsilon\\) for all \\(i\\) and a threshold value \\(\\epsilon \\gt 0\\).
threshold
A value is considered to have converged if the standard error is below this threshold. A way of choosing it is to pick some percentage of the range of the values. For Shapley values this is the difference between the maximum and minimum of the utility function (to see this substitute the maximum and minimum values of the utility into the marginal contribution formula).
fraction
The fraction of values that must have converged for the criterion to return Converged.
The number of iterations to ignore before checking for convergence. This is required because computations typically start with zero variance, as a result of using zeros(). The default is set to an arbitrary minimum which is usually enough but may need to be increased.
TYPE: int DEFAULT: 4
4
def __init__(\n self,\n threshold: float,\n fraction: float = 1.0,\n burn_in: int = 4,\n modify_result: bool = True,\n):\n super().__init__(modify_result=modify_result)\n self.threshold = threshold\n self.fraction = fraction\n self.burn_in = burn_in\n
MaxChecks(n_checks: Optional[int], modify_result: bool = True)\n
Terminate as soon as the number of checks exceeds the threshold.
A "check" is one call to the criterion.
n_checks
Threshold: if None, no _check is performed, effectively creating a criterion that never stops and always returns Pending.
def __init__(self, n_checks: Optional[int], modify_result: bool = True):\n super().__init__(modify_result=modify_result)\n if n_checks is not None and n_checks < 1:\n raise ValueError(\"n_iterations must be at least 1 or None\")\n self.n_checks = n_checks\n self._count = 0\n
MaxUpdates(n_updates: Optional[int], modify_result: bool = True)\n
Terminate if the number of updates to any value exceeds or equals the given threshold.
If you want to ensure that all values have been updated, you probably want MinUpdates instead.
This checks the counts field of a ValuationResult, i.e. the number of times that each index has been updated. For powerset samplers, the maximum of this number coincides with the maximum number of subsets sampled. For permutation samplers, it coincides with the number of permutations sampled.
n_updates
def __init__(self, n_updates: Optional[int], modify_result: bool = True):\n super().__init__(modify_result=modify_result)\n if n_updates is not None and n_updates < 1:\n raise ValueError(\"n_updates must be at least 1 or None\")\n self.n_updates = n_updates\n self.last_max = 0\n
MinUpdates(n_updates: Optional[int], modify_result: bool = True)\n
Terminate as soon as the number of updates to every value exceeds or equals the given threshold.
This checks the counts field of a ValuationResult, i.e. the number of times that each index has been updated. For powerset samplers, the minimum of this number is a lower bound for the number of subsets sampled. For permutation samplers, it lower-bounds the number of permutations sampled.
def __init__(self, n_updates: Optional[int], modify_result: bool = True):\n super().__init__(modify_result=modify_result)\n self.n_updates = n_updates\n self.last_min = 0\n
MaxTime(seconds: Optional[float], modify_result: bool = True)\n
Terminate if the computation time exceeds the given number of seconds.
Checks the elapsed time since construction
seconds
Threshold: The computation is terminated if the elapsed time between object construction and a _check exceeds this value. If None, no _check is performed, effectively creating a criterion that never stops and always returns Pending.
def __init__(self, seconds: Optional[float], modify_result: bool = True):\n super().__init__(modify_result=modify_result)\n self.max_seconds = seconds or np.inf\n if self.max_seconds <= 0:\n raise ValueError(\"Number of seconds for MaxTime must be positive or None\")\n self.start = time()\n
HistoryDeviation(\n n_steps: int,\n rtol: float,\n pin_converged: bool = True,\n modify_result: bool = True,\n)\n
A simple check for relative distance to a previous step in the computation.
The method used by (Ghorbani and Zou, 2019)1 computes the relative distances between the current values \\(v_i^t\\) and the values at the previous checkpoint \\(v_i^{t-\\tau}\\). If the sum is below a given threshold, the computation is terminated.
When the denominator is zero, the summand is set to the value of \\(v_i^{t-\\tau}\\).
This implementation is slightly generalised to allow for different numbers of updates to individual indices, as happens with powerset samplers instead of permutations. Every subset of indices that is found to converge can be pinned to that state. Once all indices have converged the method has converged.
This criterion is meant for the reproduction of the results in the paper, but we do not recommend using it in practice.
n_steps
Checkpoint values every so many updates and use these saved values to compare.
Relative tolerance for convergence (\\(\\epsilon\\) in the formula).
pin_converged
If True, once an index has converged, it is pinned to that state.
def __init__(\n self,\n n_steps: int,\n rtol: float,\n pin_converged: bool = True,\n modify_result: bool = True,\n):\n super().__init__(modify_result=modify_result)\n if n_steps < 1:\n raise ValueError(\"n_steps must be at least 1\")\n if rtol <= 0 or rtol >= 1:\n raise ValueError(\"rtol must be in (0, 1)\")\n\n self.n_steps = n_steps\n self.rtol = rtol\n self.update_op = np.logical_or if pin_converged else np.logical_and\n self._memory = None # type: ignore\n
RankCorrelation(rtol: float, burn_in: int, modify_result: bool = True)\n
A check for stability of Spearman correlation between checks.
When the change in rank correlation between two successive iterations is below a given threshold, the computation is terminated. The criterion computes the Spearman correlation between two successive iterations. The Spearman correlation uses the ordering indices of the given values and correlates them. This means it focuses on the order of the elements instead of their exact values. If the order stops changing (meaning the Banzhaf semivalue estimates converge), the criterion stops the algorithm.
This criterion is used in (Wang et al.)2.
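A possible usage sketch, with a placeholder utility and illustrative tolerance and burn-in values:

```python
from pydvl.value.semivalues import compute_banzhaf_semivalues
from pydvl.value.stopping import MaxUpdates, RankCorrelation

# Stop when the ranking of the estimates stabilizes, with a hard cap on updates.
done = RankCorrelation(rtol=1e-4, burn_in=64) | MaxUpdates(5000)
values = compute_banzhaf_semivalues(utility, done=done)
```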
Relative tolerance for convergence (\\(\\epsilon\\) in the formula)
If True, the status of the input ValuationResult is modified in place after the call.
The minimum number of iterations before checking for convergence. This is required because the first correlation is meaningless.
Added in 0.9.0
def __init__(\n self,\n rtol: float,\n burn_in: int,\n modify_result: bool = True,\n):\n super().__init__(modify_result=modify_result)\n if rtol <= 0 or rtol >= 1:\n raise ValueError(\"rtol must be in (0, 1)\")\n self.rtol = rtol\n self.burn_in = burn_in\n self._memory: NDArray[np.float_] | None = None\n self._corr = 0.0\n self._completion = 0.0\n self._iterations = 0\n
make_criterion(\n fun: StoppingCriterionCallable,\n converged: Callable[[], NDArray[bool_]] | None = None,\n completion: Callable[[], float] | None = None,\n name: str | None = None,\n) -> Type[StoppingCriterion]\n
Create a new StoppingCriterion from a function. Use this to enable simple functions to be composed with bitwise operators.
The callable to wrap.
TYPE: StoppingCriterionCallable
StoppingCriterionCallable
converged
A callable that returns a boolean array indicating what values have converged.
TYPE: Callable[[], NDArray[bool_]] | None DEFAULT: None
Callable[[], NDArray[bool_]] | None
completion
A callable that returns a value between 0 and 1 indicating the rate of completion of the computation. If not provided, the fraction of converged values is used.
TYPE: Callable[[], float] | None DEFAULT: None
Callable[[], float] | None
The name of the new criterion. If None, the __name__ of the function is used.
__name__
Type[StoppingCriterion]
A new subclass of StoppingCriterion.
def make_criterion(\n fun: StoppingCriterionCallable,\n converged: Callable[[], NDArray[np.bool_]] | None = None,\n completion: Callable[[], float] | None = None,\n name: str | None = None,\n) -> Type[StoppingCriterion]:\n \"\"\"Create a new [StoppingCriterion][pydvl.value.stopping.StoppingCriterion] from a function.\n Use this to enable simpler functions to be composed with bitwise operators\n\n Args:\n fun: The callable to wrap.\n converged: A callable that returns a boolean array indicating what\n values have converged.\n completion: A callable that returns a value between 0 and 1 indicating\n the rate of completion of the computation. If not provided, the fraction\n of converged values is used.\n name: The name of the new criterion. If `None`, the `__name__` of\n the function is used.\n\n Returns:\n A new subclass of [StoppingCriterion][pydvl.value.stopping.StoppingCriterion].\n \"\"\"\n\n class WrappedCriterion(StoppingCriterion):\n def __init__(self, modify_result: bool = True):\n super().__init__(modify_result=modify_result)\n self._name = name or getattr(fun, \"__name__\", \"WrappedCriterion\")\n\n def _check(self, result: ValuationResult) -> Status:\n return fun(result)\n\n @property\n def converged(self) -> NDArray[np.bool_]:\n if converged is None:\n return super().converged\n return converged()\n\n def __str__(self):\n return self._name\n\n def completion(self) -> float:\n if completion is None:\n return super().completion()\n return completion()\n\n return WrappedCriterion\n
This package holds all routines for the computation of Least Core data values.
Please refer to Data valuation for an overview.
In addition to the standard interface via compute_least_core_values(), because computing the Least Core values requires the solution of a linear and a quadratic problem after computing all the utility values, there is the possibility of performing each step separately. This is useful when running multiple experiments: use lc_prepare_problem() or mclc_prepare_problem() to prepare a list of problems to solve, then solve them in parallel with lc_solve_problems().
Note that mclc_prepare_problem() is parallelized itself, so preparing the problems should be done in sequence in this case. The solution of the linear systems can then be done in parallel.
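Concretely, the two-step workflow could look like this sketch; the utility object is a placeholder and the iteration counts are arbitrary.

```python
from pydvl.value.least_core.common import lc_solve_problems
from pydvl.value.least_core.montecarlo import mclc_prepare_problem

# Prepare the problems one after another (mclc_prepare_problem parallelizes internally)...
problems = [mclc_prepare_problem(utility, n_iterations=5000) for _ in range(10)]

# ...then solve the resulting linear and quadratic programs in parallel.
results = lc_solve_problems(
    problems, u=utility, algorithm="montecarlo_least_core", n_jobs=4
)
```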
Available Least Core algorithms.
compute_least_core_values(\n u: Utility,\n *,\n n_jobs: int = 1,\n n_iterations: Optional[int] = None,\n mode: LeastCoreMode = LeastCoreMode.MonteCarlo,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = False,\n **kwargs\n) -> ValuationResult\n
Umbrella method to compute Least Core values with any of the available algorithms.
See Data valuation for an overview.
The following algorithms are available; a usage sketch follows the list. Note that the exact method only works with very small datasets and is thus intended only for testing.
exact: uses the complete powerset of the training set for the constraints.
montecarlo: uses the approximate Monte Carlo Least Core algorithm implemented in montecarlo_least_core().
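A sketch of the Monte Carlo path, with a placeholder utility; note that n_iterations is required in this mode.

```python
from pydvl.value.least_core import LeastCoreMode, compute_least_core_values

values = compute_least_core_values(
    utility,                         # placeholder Utility object
    mode=LeastCoreMode.MonteCarlo,
    n_iterations=5000,               # number of sampled subsets, i.e. constraints
    n_jobs=4,
    non_negative_subsidy=True,
)
```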
Utility object with model, data, and scoring function
Number of jobs to run in parallel. Only used for Monte Carlo Least Core.
Number of subsets to sample and evaluate the utility on. Only used for Monte Carlo Least Core.
Algorithm to use. See LeastCoreMode for available options.
TYPE: LeastCoreMode DEFAULT: MonteCarlo
LeastCoreMode
MonteCarlo
If True, the least core subsidy \\(e\\) is constrained to be non-negative.
Optional dictionary of options passed to the solvers.
TYPE: Optional[dict] DEFAULT: None
Optional[dict]
Object with the computed values.
src/pydvl/value/least_core/__init__.py
def compute_least_core_values(\n u: Utility,\n *,\n n_jobs: int = 1,\n n_iterations: Optional[int] = None,\n mode: LeastCoreMode = LeastCoreMode.MonteCarlo,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = False,\n **kwargs,\n) -> ValuationResult:\n \"\"\"Umbrella method to compute Least Core values with any of the available\n algorithms.\n\n See [Data valuation][data-valuation] for an overview.\n\n The following algorithms are available. Note that the exact method can only\n work with very small datasets and is thus intended only for testing.\n\n - `exact`: uses the complete powerset of the training set for the constraints\n [combinatorial_exact_shapley()][pydvl.value.shapley.naive.combinatorial_exact_shapley].\n - `montecarlo`: uses the approximate Monte Carlo Least Core algorithm.\n Implemented in [montecarlo_least_core()][pydvl.value.least_core.montecarlo.montecarlo_least_core].\n\n Args:\n u: Utility object with model, data, and scoring function\n n_jobs: Number of jobs to run in parallel. Only used for Monte Carlo\n Least Core.\n n_iterations: Number of subsets to sample and evaluate the utility on.\n Only used for Monte Carlo Least Core.\n mode: Algorithm to use. See\n [LeastCoreMode][pydvl.value.least_core.LeastCoreMode] for available\n options.\n non_negative_subsidy: If True, the least core subsidy $e$ is constrained\n to be non-negative.\n solver_options: Optional dictionary of options passed to the solvers.\n\n Returns:\n Object with the computed values.\n\n !!! tip \"New in version 0.5.0\"\n \"\"\"\n\n if mode == LeastCoreMode.MonteCarlo:\n # TODO fix progress showing in remote case\n progress = False\n if n_iterations is None:\n raise ValueError(\"n_iterations cannot be None for Monte Carlo Least Core\")\n return montecarlo_least_core( # type: ignore\n u=u,\n n_iterations=n_iterations,\n n_jobs=n_jobs,\n progress=progress,\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n **kwargs,\n )\n elif mode == LeastCoreMode.Exact:\n return exact_least_core(\n u=u,\n progress=progress,\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n )\n\n raise ValueError(f\"Invalid value encountered in {mode=}\")\n
lc_solve_problem(\n problem: LeastCoreProblem,\n *,\n u: Utility,\n algorithm: str,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None\n) -> ValuationResult\n
Solves a linear problem as prepared by mclc_prepare_problem(). Useful for parallel execution of multiple experiments by running this as a remote task.
See exact_least_core() or montecarlo_least_core() for argument descriptions.
src/pydvl/value/least_core/common.py
def lc_solve_problem(\n problem: LeastCoreProblem,\n *,\n u: Utility,\n algorithm: str,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n) -> ValuationResult:\n \"\"\"Solves a linear problem as prepared by\n [mclc_prepare_problem()][pydvl.value.least_core.montecarlo.mclc_prepare_problem].\n Useful for parallel execution of multiple experiments by running this as a\n remote task.\n\n See [exact_least_core()][pydvl.value.least_core.naive.exact_least_core] or\n [montecarlo_least_core()][pydvl.value.least_core.montecarlo.montecarlo_least_core] for\n argument descriptions.\n \"\"\"\n n = len(u.data)\n\n if np.any(np.isnan(problem.utility_values)):\n warnings.warn(\n f\"Calculation returned \"\n f\"{np.sum(np.isnan(problem.utility_values))} NaN \"\n f\"values out of {problem.utility_values.size}\",\n RuntimeWarning,\n )\n\n if solver_options is None:\n solver_options = {}\n\n if \"solver\" not in solver_options:\n solver_options[\"solver\"] = cp.SCS\n\n if \"max_iters\" not in solver_options and solver_options[\"solver\"] == cp.SCS:\n solver_options[\"max_iters\"] = 10000\n\n logger.debug(\"Removing possible duplicate values in lower bound array\")\n b_lb = problem.utility_values\n A_lb, unique_indices = np.unique(problem.A_lb, return_index=True, axis=0)\n b_lb = b_lb[unique_indices]\n\n logger.debug(\"Building equality constraint\")\n A_eq = np.ones((1, n))\n # We might have already computed the total utility one or more times.\n # This is the index of the row(s) in A_lb with all ones.\n total_utility_indices = np.where(A_lb.sum(axis=1) == n)[0]\n if len(total_utility_indices) == 0:\n b_eq = np.array([u(u.data.indices)])\n else:\n b_eq = b_lb[total_utility_indices]\n # Remove the row(s) corresponding to the total utility\n # from the lower bound constraints\n # because given the equality constraint\n # it is the same as using the constraint e >= 0\n # (i.e. setting non_negative_subsidy = True).\n mask: NDArray[np.bool_] = np.ones_like(b_lb, dtype=bool)\n mask[total_utility_indices] = False\n b_lb = b_lb[mask]\n A_lb = A_lb[mask]\n\n # Remove the row(s) corresponding to the empty subset\n # because, given u(\u2205) = (which is almost always the case,\n # it is the same as using the constraint e >= 0\n # (i.e. setting non_negative_subsidy = True).\n emptyset_utility_indices = np.where(A_lb.sum(axis=1) == 0)[0]\n if len(emptyset_utility_indices) > 0:\n mask = np.ones_like(b_lb, dtype=bool)\n mask[emptyset_utility_indices] = False\n b_lb = b_lb[mask]\n A_lb = A_lb[mask]\n\n _, subsidy = _solve_least_core_linear_program(\n A_eq=A_eq,\n b_eq=b_eq,\n A_lb=A_lb,\n b_lb=b_lb,\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n )\n\n values: Optional[NDArray[np.float_]]\n\n if subsidy is None:\n logger.debug(\"No values were found\")\n status = Status.Failed\n values = np.empty(n)\n values[:] = np.nan\n subsidy = np.nan\n else:\n values = _solve_egalitarian_least_core_quadratic_program(\n subsidy,\n A_eq=A_eq,\n b_eq=b_eq,\n A_lb=A_lb,\n b_lb=b_lb,\n solver_options=solver_options,\n )\n\n if values is None:\n logger.debug(\"No values were found\")\n status = Status.Failed\n values = np.empty(n)\n values[:] = np.nan\n subsidy = np.nan\n else:\n status = Status.Converged\n\n return ValuationResult(\n algorithm=algorithm,\n status=status,\n values=values,\n subsidy=subsidy,\n stderr=None,\n data_names=u.data.data_names,\n )\n
lc_solve_problems(\n problems: Sequence[LeastCoreProblem],\n u: Utility,\n algorithm: str,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n n_jobs: int = 1,\n non_negative_subsidy: bool = True,\n solver_options: Optional[dict] = None,\n **options\n) -> List[ValuationResult]\n
Solves a list of linear problems in parallel.
Utility.
problems
Least Core problems to solve, as returned by mclc_prepare_problem().
TYPE: Sequence[LeastCoreProblem]
Sequence[LeastCoreProblem]
Name of the valuation algorithm.
Number of parallel jobs to run.
Additional options to pass to the solver.
List[ValuationResult]
List of solutions.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef lc_solve_problems(\n problems: Sequence[LeastCoreProblem],\n u: Utility,\n algorithm: str,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n n_jobs: int = 1,\n non_negative_subsidy: bool = True,\n solver_options: Optional[dict] = None,\n **options,\n) -> List[ValuationResult]:\n \"\"\"Solves a list of linear problems in parallel.\n\n Args:\n u: Utility.\n problems: Least Core problems to solve, as returned by\n [mclc_prepare_problem()][pydvl.value.least_core.montecarlo.mclc_prepare_problem].\n algorithm: Name of the valuation algorithm.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n n_jobs: Number of parallel jobs to run.\n non_negative_subsidy: If True, the least core subsidy $e$ is constrained\n to be non-negative.\n solver_options: Additional options to pass to the solver.\n\n Returns:\n List of solutions.\n \"\"\"\n\n def _map_func(\n problems: List[LeastCoreProblem], *args, **kwargs\n ) -> List[ValuationResult]:\n return [lc_solve_problem(p, *args, **kwargs) for p in problems]\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n map_reduce_job: MapReduceJob[\n \"LeastCoreProblem\", \"List[ValuationResult]\"\n ] = MapReduceJob(\n inputs=problems,\n map_func=_map_func,\n map_kwargs=dict(\n u=u,\n algorithm=algorithm,\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n **options,\n ),\n reduce_func=lambda x: list(itertools.chain(*x)),\n parallel_backend=parallel_backend,\n n_jobs=n_jobs,\n )\n solutions = map_reduce_job()\n\n return solutions\n
montecarlo_least_core(\n u: Utility,\n n_iterations: int,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes approximate Least Core values using a Monte Carlo approach.
This method solves the linear program: minimize \\(e\\), subject to \\(\\sum_{i\\in N} x_{i} = v(N)\\) and \\(\\sum_{i\\in S} x_{i} + e \\geq v(S)\\) for all \\(S \\in \\{S_1, S_2, \\dots, S_m\\}\\) drawn i.i.d. from \\(U(2^N)\\).
Where:
\\(U(2^N)\\) is the uniform distribution over the powerset of \\(N\\).
\\(m\\) is the number of subsets that will be sampled and whose utility will be computed and used to compute the data values.
total number of iterations to use
number of jobs across which to distribute the computation
Dictionary of options that will be used to select a solver and to configure it. Refer to cvxpy's documentation for all possible options.
If True, shows a tqdm progress bar
Object with the data values and the least core value.
src/pydvl/value/least_core/montecarlo.py
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef montecarlo_least_core(\n u: Utility,\n n_iterations: int,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n r\"\"\"Computes approximate Least Core values using a Monte Carlo approach.\n\n $$\n \\begin{array}{lll}\n \\text{minimize} & \\displaystyle{e} & \\\\\n \\text{subject to} & \\displaystyle\\sum_{i\\in N} x_{i} = v(N) & \\\\\n & \\displaystyle\\sum_{i\\in S} x_{i} + e \\geq v(S) & ,\n \\forall S \\in \\{S_1, S_2, \\dots, S_m \\overset{\\mathrm{iid}}{\\sim} U(2^N) \\}\n \\end{array}\n $$\n\n Where:\n\n * $U(2^N)$ is the uniform distribution over the powerset of $N$.\n * $m$ is the number of subsets that will be sampled and whose utility will\n be computed and used to compute the data values.\n\n Args:\n u: Utility object with model, data, and scoring function\n n_iterations: total number of iterations to use\n n_jobs: number of jobs across which to distribute the computation\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n non_negative_subsidy: If True, the least core subsidy $e$ is constrained\n to be non-negative.\n solver_options: Dictionary of options that will be used to select a solver\n and to configure it. Refer to [cvxpy's\n documentation](https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options)\n for all possible options.\n progress: If True, shows a tqdm progress bar\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Object with the data values and the least core value.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n problem = mclc_prepare_problem(\n u,\n n_iterations,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n config=config,\n progress=progress,\n seed=seed,\n )\n return lc_solve_problem(\n problem,\n u=u,\n algorithm=\"montecarlo_least_core\",\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n )\n
mclc_prepare_problem(\n u: Utility,\n n_iterations: int,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> LeastCoreProblem\n
Prepares a linear problem by sampling subsets of the data. Use this to separate the problem preparation from the solving with lc_solve_problem(). Useful for parallel execution of multiple experiments.
See montecarlo_least_core for argument descriptions.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef mclc_prepare_problem(\n u: Utility,\n n_iterations: int,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> LeastCoreProblem:\n \"\"\"Prepares a linear problem by sampling subsets of the data. Use this to\n separate the problem preparation from the solving with\n [lc_solve_problem()][pydvl.value.least_core.common.lc_solve_problem]. Useful\n for parallel execution of multiple experiments.\n\n See\n [montecarlo_least_core][pydvl.value.least_core.montecarlo.montecarlo_least_core]\n for argument descriptions.\n\n !!! note \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n n = len(u.data)\n\n if n_iterations < n:\n warnings.warn(\n f\"Number of iterations '{n_iterations}' is smaller the size of the dataset '{n}'. \"\n f\"This is not optimal because in the worst case we need at least '{n}' constraints \"\n \"to satisfy the individual rationality condition.\"\n )\n\n if n_iterations > 2**n:\n warnings.warn(\n f\"Passed n_iterations is greater than the number subsets! \"\n f\"Setting it to 2^{n}\",\n RuntimeWarning,\n )\n n_iterations = 2**n\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n iterations_per_job = max(\n 1, n_iterations // parallel_backend.effective_n_jobs(n_jobs)\n )\n\n map_reduce_job: MapReduceJob[\"Utility\", \"LeastCoreProblem\"] = MapReduceJob(\n inputs=u,\n map_func=_montecarlo_least_core,\n reduce_func=_reduce_func,\n map_kwargs=dict(n_iterations=iterations_per_job, progress=progress),\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n )\n\n return map_reduce_job(seed=seed)\n
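The docstring suggests separating the sampling of problems from their solution when running several experiments. A sketch, reusing the Utility u from the previous example and assuming that lc_solve_problems lives in pydvl.value.least_core.common next to lc_solve_problem:

```python
from pydvl.value.least_core.common import lc_solve_problems
from pydvl.value.least_core.montecarlo import mclc_prepare_problem

# Sample utilities for ten independent Monte Carlo problems...
problems = [mclc_prepare_problem(u, n_iterations=1000, n_jobs=4) for _ in range(10)]

# ...then solve all linear programs in parallel.
results = lc_solve_problems(problems, u=u, algorithm="montecarlo_least_core", n_jobs=4)
```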
exact_least_core(\n u: Utility,\n *,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = True\n) -> ValuationResult\n
Computes the exact Least Core values.
If the training set contains more than 20 instances, a warning is printed because the computation is very expensive. This method is mostly used for internal testing and simple use cases. Please refer to the Monte Carlo method for practical applications.
The least core is the solution to the following linear programming problem: minimize the subsidy \\(e\\), subject to \\(\\sum_{i\\in N} x_{i} = v(N)\\) and \\(\\sum_{i\\in S} x_{i} + e \\geq v(S)\\) for all \\(S \\subseteq N\\).
Where \\(N = \\{1, 2, \\dots, n\\}\\) are the training set's indices.
Dictionary of options that will be used to select a solver and to configure it. Refer to cvxpy's documentation for all possible options.
src/pydvl/value/least_core/naive.py
def exact_least_core(\n u: Utility,\n *,\n non_negative_subsidy: bool = False,\n solver_options: Optional[dict] = None,\n progress: bool = True,\n) -> ValuationResult:\n r\"\"\"Computes the exact Least Core values.\n\n !!! Note\n If the training set contains more than 20 instances a warning is printed\n because the computation is very expensive. This method is mostly used for\n internal testing and simple use cases. Please refer to the\n [Monte Carlo method][pydvl.value.least_core.montecarlo.montecarlo_least_core]\n for practical applications.\n\n The least core is the solution to the following Linear Programming problem:\n\n $$\n \\begin{array}{lll}\n \\text{minimize} & \\displaystyle{e} & \\\\\n \\text{subject to} & \\displaystyle\\sum_{i\\in N} x_{i} = v(N) & \\\\\n & \\displaystyle\\sum_{i\\in S} x_{i} + e \\geq v(S) &, \\forall S \\subseteq N \\\\\n \\end{array}\n $$\n\n Where $N = \\{1, 2, \\dots, n\\}$ are the training set's indices.\n\n Args:\n u: Utility object with model, data, and scoring function\n non_negative_subsidy: If True, the least core subsidy $e$ is constrained\n to be non-negative.\n solver_options: Dictionary of options that will be used to select a solver\n and to configure it. Refer to the [cvxpy's\n documentation](https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options)\n for all possible options.\n progress: If True, shows a tqdm progress bar\n\n Returns:\n Object with the data values and the least core value.\n \"\"\"\n n = len(u.data)\n if n > 20: # Arbitrary choice, will depend on time required, caching, etc.\n warnings.warn(f\"Large dataset! Computation requires 2^{n} calls to model.fit()\")\n\n problem = lc_prepare_problem(u, progress=progress)\n return lc_solve_problem(\n problem=problem,\n u=u,\n algorithm=\"exact_least_core\",\n non_negative_subsidy=non_negative_subsidy,\n solver_options=solver_options,\n )\n
lc_prepare_problem(u: Utility, progress: bool = False) -> LeastCoreProblem\n
Prepares a linear problem with all subsets of the data. Use this to separate the problem preparation from the solving with lc_solve_problem(). Useful for parallel execution of multiple experiments.
See exact_least_core() for argument descriptions.
def lc_prepare_problem(u: Utility, progress: bool = False) -> LeastCoreProblem:\n \"\"\"Prepares a linear problem with all subsets of the data\n Use this to separate the problem preparation from the solving with\n [lc_solve_problem()][pydvl.value.least_core.common.lc_solve_problem]. Useful for\n parallel execution of multiple experiments.\n\n See [exact_least_core()][pydvl.value.least_core.naive.exact_least_core] for argument\n descriptions.\n \"\"\"\n n = len(u.data)\n\n logger.debug(\"Building vectors and matrices for linear programming problem\")\n powerset_size = 2**n\n A_lb = np.zeros((powerset_size, n))\n\n logger.debug(\"Iterating over all subsets\")\n utility_values = np.zeros(powerset_size)\n for i, subset in enumerate( # type: ignore\n tqdm(\n powerset(u.data.indices),\n disable=not progress,\n total=powerset_size - 1,\n position=0,\n )\n ):\n indices: NDArray[np.bool_] = np.zeros(n, dtype=bool)\n indices[list(subset)] = True\n A_lb[i, indices] = 1\n utility_values[i] = u(subset) # type: ignore\n\n return LeastCoreProblem(utility_values, A_lb)\n
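The same split works for the exact formulation, reusing the Utility u from the sketches above (illustrative; only viable for very small datasets):

```python
from pydvl.value.least_core.common import lc_solve_problem
from pydvl.value.least_core.naive import lc_prepare_problem

# Enumerates all 2**len(u.data) subsets: only viable for tiny datasets.
problem = lc_prepare_problem(u, progress=True)
result = lc_solve_problem(problem, u=u, algorithm="exact_least_core")
```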
compute_loo(\n u: Utility,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = True\n) -> ValuationResult\n
Computes the leave-one-out value: \\(v(i) = u(D) - u(D \\setminus \\{i\\})\\).
If True, display a progress bar
Number of parallel jobs to use
Object with the data values.
Renamed from naive_loo and added parallel computation.
naive_loo
src/pydvl/value/loo/loo.py
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_loo(\n u: Utility,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = True,\n) -> ValuationResult:\n r\"\"\"Computes leave one out value:\n\n $$v(i) = u(D) - u(D \\setminus \\{i\\}) $$\n\n Args:\n u: Utility object with model, data, and scoring function\n progress: If True, display a progress bar\n n_jobs: Number of parallel jobs to use\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: If True, display a progress bar\n\n Returns:\n Object with the data values.\n\n !!! tip \"New in version 0.7.0\"\n Renamed from `naive_loo` and added parallel computation.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n if len(u.data) < 3:\n raise ValueError(\"Dataset must have at least 2 elements\")\n\n result = ValuationResult.zeros(\n algorithm=\"loo\",\n indices=u.data.indices,\n data_names=u.data.data_names,\n )\n\n all_indices = set(u.data.indices)\n total_utility = u(u.data.indices)\n\n def fun(idx: int) -> tuple[int, float]:\n return idx, total_utility - u(all_indices.difference({idx}))\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n max_workers = parallel_backend.effective_n_jobs(n_jobs)\n n_submitted_jobs = 2 * max_workers # number of jobs in the queue\n\n # NOTE: this could be done with a simple executor.map(), but we want to\n # display a progress bar\n\n with parallel_backend.executor(\n max_workers=max_workers, cancel_futures=True\n ) as executor:\n pending: set[Future] = set()\n index_it = iter(u.data.indices)\n\n pbar = tqdm(disable=not progress, total=100, unit=\"%\")\n while True:\n pbar.n = 100 * sum(result.counts) / len(u.data)\n pbar.refresh()\n completed, pending = wait(pending, timeout=0.1, return_when=FIRST_COMPLETED)\n for future in completed:\n idx, marginal = future.result()\n result.update(idx, marginal)\n\n # Ensure that we always have n_submitted_jobs running\n try:\n for _ in range(n_submitted_jobs - len(pending)):\n pending.add(executor.submit(fun, next(index_it)))\n except StopIteration:\n if len(pending) == 0:\n return result\n
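A minimal leave-one-out sketch (illustrative dataset, model and worker count; the import path follows the source file shown above):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Utility
from pydvl.value.loo.loo import compute_loo

data = Dataset.from_sklearn(load_iris())
u = Utility(LogisticRegression(max_iter=500), data)

# One utility evaluation per training point, plus one for the full index set.
values = compute_loo(u, n_jobs=4, progress=True)
```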
Kwon, Yongchan, and James Zou. Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value. In: Proceedings of the 40th International Conference on Machine Learning (ICML 2023). PMLR.\u00a0\u21a9
compute_data_oob(\n u: Utility,\n *,\n n_est: int = 10,\n max_samples: float = 0.8,\n loss: Optional[LossFunction] = None,\n n_jobs: Optional[int] = None,\n seed: Optional[Seed] = None,\n progress: bool = False\n) -> ValuationResult\n
Computes Data out of bag values
This implements the method described in (Kwon and Zou, 2023)1. It fits several base estimators of the type provided via u.model using a bagging process. The value of a point corresponds to the average loss of the estimators that were not fit on it.
\\(w_{bj}\\in Z\\) is the number of times the j-th datum \\((x_j, y_j)\\) is selected in the b-th bootstrap dataset. The value is

\\[\\psi((x_i,y_i),\\Theta_B) := \\frac{\\sum_{b=1}^{B}\\mathbb{1}(w_{bi}=0)\\,T(y_i, \\hat{f}_b(x_i))}{\\sum_{b=1}^{B}\\mathbb{1}(w_{bi}=0)},\\]

with \\(T: Y \\times Y \\rightarrow \\mathbb{R}\\) a score function that represents the goodness of a weak learner \\(\\hat{f}_b\\) at the i-th datum \\((x_i, y_i)\\).
n_est and max_samples must be tuned jointly to ensure that every sample is out-of-bag at least once; otherwise the result may contain a NaN value for that datum.
n_est
max_samples
Number of estimators used in the bagging procedure.
The fraction of samples to draw to train each base estimator.
A function taking the true labels and the model predictions (y_true, y_pred) as parameters and returning an array of point-wise errors.
TYPE: Optional[LossFunction] DEFAULT: None
Optional[LossFunction]
The number of jobs to run in parallel used in the bagging procedure for both fit and predict.
src/pydvl/value/oob/oob.py
def compute_data_oob(\n u: Utility,\n *,\n n_est: int = 10,\n max_samples: float = 0.8,\n loss: Optional[LossFunction] = None,\n n_jobs: Optional[int] = None,\n seed: Optional[Seed] = None,\n progress: bool = False,\n) -> ValuationResult:\n r\"\"\"Computes Data out of bag values\n\n This implements the method described in\n (Kwon and Zou, 2023)<sup><a href=\"kwon_data_2023\">1</a></sup>.\n It fits several base estimators provided through u.model through a bagging\n process. The point value corresponds to the average loss of estimators which\n were not fit on it.\n\n $w_{bj}\\in Z$ is the number of times the j-th datum $(x_j, y_j)$ is selected\n in the b-th bootstrap dataset.\n\n $$\\psi((x_i,y_i),\\Theta_B):=\\frac{\\sum_{b=1}^{B}\\mathbb{1}(w_{bi}=0)T(y_i,\n \\hat{f}_b(x_i))}{\\sum_{b=1}^{B}\n \\mathbb{1}\n (w_{bi}=0)}$$\n\n With:\n\n $$\n T: Y \\times Y\n \\rightarrow \\mathbb{R}\n $$\n\n T is a score function that represents the goodness of a weak learner\n $\\hat{f}_b$ at the i-th datum $(x_i, y_i)$.\n\n `n_est` and `max_samples` must be tuned jointly to ensure that all samples\n are at least 1 time out-of-bag, otherwise the result could include a NaN\n value for that datum.\n\n Args:\n u: Utility object with model, data, and scoring function.\n n_est: Number of estimator used in the bagging procedure.\n max_samples: The fraction of samples to draw to train each base\n estimator.\n loss: A function taking as parameters model prediction and corresponding\n data labels(y_true, y_pred) and returning an array of point-wise errors.\n n_jobs: The number of jobs to run in parallel used in the bagging\n procedure for both fit and predict.\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n progress: If True, display a progress bar.\n\n Returns:\n Object with the data values.\n \"\"\"\n rng = np.random.default_rng(seed)\n random_state = np.random.RandomState(rng.bit_generator)\n\n result: ValuationResult[np.int_, np.object_] = ValuationResult.empty(\n algorithm=\"data_oob\", indices=u.data.indices, data_names=u.data.data_names\n )\n\n if is_classifier(u.model):\n bag = BaggingClassifier(\n u.model,\n n_estimators=n_est,\n max_samples=max_samples,\n n_jobs=n_jobs,\n random_state=random_state,\n )\n if loss is None:\n loss = point_wise_accuracy\n elif is_regressor(u.model):\n bag = BaggingRegressor(\n u.model,\n n_estimators=n_est,\n max_samples=max_samples,\n n_jobs=n_jobs,\n random_state=random_state,\n )\n if loss is None:\n loss = neg_l2_distance\n else:\n raise Exception(\n \"Model has to be a classifier or a regressor in sklearn format.\"\n )\n\n bag.fit(u.data.x_train, u.data.y_train)\n\n for est, samples in tqdm(\n zip(bag.estimators_, bag.estimators_samples_), disable=not progress, total=n_est\n ): # The bottleneck is the bag fitting not this part so TQDM is not very useful here\n oob_idx = np.setxor1d(u.data.indices, np.unique(samples))\n array_loss = loss(\n y_true=u.data.y_train[oob_idx],\n y_pred=est.predict(u.data.x_train[oob_idx]),\n )\n result += ValuationResult(\n algorithm=\"data_oob\",\n indices=oob_idx,\n values=array_loss,\n counts=np.ones_like(array_loss, dtype=u.data.indices.dtype),\n )\n return result\n
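A minimal sketch with illustrative parameters. As noted above, n_est and max_samples should be chosen so that every point is out-of-bag at least once:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Utility
from pydvl.value.oob.oob import compute_data_oob

data = Dataset.from_sklearn(load_iris())
u = Utility(LogisticRegression(max_iter=500), data)

# Bagging with 100 estimators, each trained on 80% of the data, so that
# every point is very likely to be out-of-bag at least once.
values = compute_data_oob(u, n_est=100, max_samples=0.8, n_jobs=4, seed=42)
```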
point_wise_accuracy(y_true: NDArray[T], y_pred: NDArray[T]) -> NDArray[T]\n
Point-wise 0-1 loss between two arrays
Array of true values (e.g. labels)
Array of estimated values (e.g. model predictions)
Array with point-wise 0-1 losses between labels and model predictions
def point_wise_accuracy(y_true: NDArray[T], y_pred: NDArray[T]) -> NDArray[T]:\n r\"\"\"Point-wise 0-1 loss between two arrays\n\n Args:\n y_true: Array of true values (e.g. labels)\n y_pred: Array of estimated values (e.g. model predictions)\n\n Returns:\n Array with point-wise 0-1 losses between labels and model predictions\n \"\"\"\n return np.array(y_pred == y_true, dtype=y_pred.dtype)\n
neg_l2_distance(y_true: NDArray[T], y_pred: NDArray[T]) -> NDArray[T]\n
Point-wise negative \\(l_2\\) distance between two arrays
Array with point-wise negative \\(l_2\\) distances between labels and model predictions
def neg_l2_distance(y_true: NDArray[T], y_pred: NDArray[T]) -> NDArray[T]:\n r\"\"\"Point-wise negative $l_2$ distance between two arrays\n\n Args:\n y_true: Array of true values (e.g. labels)\n y_pred: Array of estimated values (e.g. model predictions)\n\n Returns:\n Array with point-wise negative $l_2$ distances between labels and model\n predictions\n \"\"\"\n return -np.square(np.array(y_pred - y_true), dtype=y_pred.dtype)\n
This package holds all routines for the computation of Shapley Data value. Users will want to use compute_shapley_values or compute_semivalues as interfaces to most methods defined in the modules.
Please refer to the guide on data valuation for an overview of all methods.
Class-wise Shapley (Schoch et al., 2022)1 offers a Shapley framework tailored for classification problems. Let \\(D\\) be a dataset, \\(D_{y_i}\\) be the subset of \\(D\\) with labels \\(y_i\\), and \\(D_{-y_i}\\) be the complement of \\(D_{y_i}\\) in \\(D\\). The key idea is that a sample \\((x_i, y_i)\\) might enhance the overall performance on \\(D\\), while being detrimental to the performance on \\(D_{y_i}\\). The Class-wise value is defined as:
where \\(S_{y_i} \\subseteq D_{y_i} \\setminus \\{i\\}\\) and \\(S_{-y_i} \\subseteq D_{-y_i}\\).
Analysis of Class-wise Shapley
For a detailed analysis of the method, with comparison to other valuation techniques, please refer to the main documentation.
In practice, the quantity above is estimated using Monte Carlo sampling of the powerset and the set of index permutations. This results in the estimator

\\[v_u(i) = \\frac{1}{K} \\sum_k \\frac{1}{L} \\sum_l [u(\\sigma^{(l)}_{:i} \\cup \\{i\\} \\mid S^{(k)}) - u(\\sigma^{(l)}_{:i} \\mid S^{(k)})],\\]
with \\(S^{(1)}, \\dots, S^{(K)} \\subseteq T_{-y_i},\\) \\(\\sigma^{(1)}, \\dots, \\sigma^{(L)} \\in \\Pi(T_{y_i}\\setminus\\{i\\}),\\) and \\(\\sigma^{(l)}_{:i}\\) denoting the set of indices in permutation \\(\\sigma^{(l)}\\) before the position where \\(i\\) appears. The sets \\(T_{y_i}\\) and \\(T_{-y_i}\\) are the training sets for the labels \\(y_i\\) and \\(-y_i\\), respectively.
The unit tests include the following manually constructed data: Let \\(D=\\{(1,0),(2,0),(3,0),(4,1)\\}\\) be the test set and \\(T=\\{(1,0),(2,0),(3,1),(4,1)\\}\\) the train set. This specific dataset is chosen because it allows solving the model
in closed form \\(\\beta = \\frac{\\text{dot}(x, y)}{\\text{dot}(x, x)}\\). From the closed-form solution, the tables for in-class accuracy \\(a_S(D_{y_i})\\) and out-of-class accuracy \\(a_S(D_{-y_i})\\) can be calculated. By using these tables and setting \\(\\{S^{(1)}, \\dots, S^{(K)}\\} = 2^{T_{-y_i}}\\) and \\(\\{\\sigma^{(1)}, \\dots, \\sigma^{(L)}\\} = \\Pi(T_{y_i}\\setminus\\{i\\})\\), the Monte Carlo estimator can be evaluated (\\(2^M\\) is the powerset of \\(M\\)). The details of the derivation are left to the eager reader.
Schoch, Stephanie, Haifeng Xu, and Yangfeng Ji. CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification. In Proc. of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS). New Orleans, Louisiana, USA, 2022.\u00a0\u21a9
ClasswiseScorer(\n scoring: Union[str, ScorerCallable] = \"accuracy\",\n default: float = 0.0,\n range: Tuple[float, float] = (0, 1),\n in_class_discount_fn: Callable[[float], float] = lambda x: x,\n out_of_class_discount_fn: Callable[[float], float] = np.exp,\n initial_label: Optional[int] = None,\n name: Optional[str] = None,\n)\n
Bases: Scorer
A Scorer designed for evaluation in classification problems. Its value is computed from an in-class and an out-of-class "inner score" (Schoch et al., 2022) 1. Let \\(S\\) be the training set and \\(D\\) be the valuation set. For each label \\(c\\), \\(D\\) is factorized into two disjoint sets: \\(D_c\\) for in-class instances and \\(D_{-c}\\) for out-of-class instances. The score combines an in-class metric of performance, adjusted by a discounted out-of-class metric. These inner scores must be provided upon construction or default to accuracy. They are combined into \\(u = f(a_S(D_c))\\, g(a_S(D_{-c})),\\)
where \\(f\\) and \\(g\\) are continuous, monotonic functions. For a detailed explanation, refer to section four of (Schoch et al., 2022) 1.
Metrics must support multiple class labels if you intend to apply them to a multi-class problem. For instance, the metric 'accuracy' supports multiple classes, but the metric f1 does not. For a two-class classification problem, using f1_weighted is essentially equivalent to using accuracy.
f1
f1_weighted
accuracy
Name of the scoring function or a callable that can be passed to Scorer.
TYPE: Union[str, ScorerCallable] DEFAULT: 'accuracy'
'accuracy'
Score to use when a model fails to provide a number, e.g. when too little data was used to train it, or errors arise.
Numerical range of the score function. Some Monte Carlo methods can use this to estimate the number of samples required for a certain quality of approximation. If not provided, it can be read from the scoring object if it provides it, for instance if it was constructed with compose_score.
TYPE: Tuple[float, float] DEFAULT: (0, 1)
(0, 1)
in_class_discount_fn
Continuous, monotonically increasing function used to discount the in-class score.
TYPE: Callable[[float], float] DEFAULT: lambda x: x
lambda x: x
out_of_class_discount_fn
Continuous, monotonically increasing function used to discount the out-of-class score.
TYPE: Callable[[float], float] DEFAULT: exp
exp
initial_label
Set initial label (for the first iteration)
Name of the scorer. If not provided, the name of the inner scoring function will be prefixed by classwise.
classwise
src/pydvl/value/shapley/classwise.py
def __init__(\n self,\n scoring: Union[str, ScorerCallable] = \"accuracy\",\n default: float = 0.0,\n range: Tuple[float, float] = (0, 1),\n in_class_discount_fn: Callable[[float], float] = lambda x: x,\n out_of_class_discount_fn: Callable[[float], float] = np.exp,\n initial_label: Optional[int] = None,\n name: Optional[str] = None,\n):\n disc_score_in_class = in_class_discount_fn(range[1])\n disc_score_out_of_class = out_of_class_discount_fn(range[1])\n transformed_range = (0, disc_score_in_class * disc_score_out_of_class)\n super().__init__(\n scoring=scoring,\n range=transformed_range,\n default=default,\n name=name or f\"classwise {str(scoring)}\",\n )\n self._in_class_discount_fn = in_class_discount_fn\n self._out_of_class_discount_fn = out_of_class_discount_fn\n self.label = initial_label\n
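A construction sketch (illustrative model and dataset): the utility passed to compute_classwise_shapley_values must carry a ClasswiseScorer, here with the default inner score and discount functions.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Utility
from pydvl.value.shapley.classwise import ClasswiseScorer

data = Dataset.from_sklearn(load_iris())

# Default inner score (accuracy), identity in-class discount, exp out-of-class discount.
scorer = ClasswiseScorer("accuracy", initial_label=0)
u = Utility(LogisticRegression(max_iter=500), data, scorer)
```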
estimate_in_class_and_out_of_class_score(\n model: SupervisedModel,\n x_test: NDArray[float_],\n y_test: NDArray[int_],\n rescale_scores: bool = True,\n) -> Tuple[float, float]\n
Computes in-class and out-of-class scores using the provided inner scoring function. The result is \\(a_S(D=\\{(x_1, y_1), \\dots, (x_K, y_K)\\}) = \\frac{1}{N} \\sum_k s(y(x_k), y_k)\\).
In this context, for label \\(c\\) calculations are executed twice: once for \\(D_c\\) and once for \\(D_{-c}\\) to determine the in-class and out-of-class scores, respectively. By default, the raw scores are multiplied by \\(\\frac{|D_c|}{|D|}\\) and \\(\\frac{|D_{-c}|}{|D|}\\), respectively. This is done to ensure that both scores are of the same order of magnitude. This normalization is particularly useful when the inner score function \\(a_S\\) is calculated by an estimator of the form \\(\\frac{1}{N} \\sum_i x_i\\), e.g. the accuracy.
Model used for computing the score on the validation set.
Array containing the features of the classification problem.
Array containing the labels of the classification problem.
rescale_scores
If set to True, the scores will be denormalized. This is particularly useful when the inner score function \\(a_S\\) is calculated by an estimator of the form \\(\\frac{1}{N} \\sum_i x_i\\).
Tuple containing the in-class and out-of-class scores.
def estimate_in_class_and_out_of_class_score(\n self,\n model: SupervisedModel,\n x_test: NDArray[np.float_],\n y_test: NDArray[np.int_],\n rescale_scores: bool = True,\n) -> Tuple[float, float]:\n r\"\"\"\n Computes in-class and out-of-class scores using the provided inner\n scoring function. The result is\n\n $$\n a_S(D=\\{(x_1, y_1), \\dots, (x_K, y_K)\\}) = \\frac{1}{N} \\sum_k s(y(x_k), y_k).\n $$\n\n In this context, for label $c$ calculations are executed twice: once for $D_c$\n and once for $D_{-c}$ to determine the in-class and out-of-class scores,\n respectively. By default, the raw scores are multiplied by $\\frac{|D_c|}{|D|}$\n and $\\frac{|D_{-c}|}{|D|}$, respectively. This is done to ensure that both\n scores are of the same order of magnitude. This normalization is particularly\n useful when the inner score function $a_S$ is calculated by an estimator of the\n form $\\frac{1}{N} \\sum_i x_i$, e.g. the accuracy.\n\n Args:\n model: Model used for computing the score on the validation set.\n x_test: Array containing the features of the classification problem.\n y_test: Array containing the labels of the classification problem.\n rescale_scores: If set to True, the scores will be denormalized. This is\n particularly useful when the inner score function $a_S$ is calculated by\n an estimator of the form $\\frac{1}{N} \\sum_i x_i$.\n\n Returns:\n Tuple containing the in-class and out-of-class scores.\n \"\"\"\n scorer = self._scorer\n label_set_match = y_test == self.label\n label_set = np.where(label_set_match)[0]\n num_classes = len(np.unique(y_test))\n\n if len(label_set) == 0:\n return 0, 1 / (num_classes - 1)\n\n complement_label_set = np.where(~label_set_match)[0]\n in_class_score = scorer(model, x_test[label_set], y_test[label_set])\n out_of_class_score = scorer(\n model, x_test[complement_label_set], y_test[complement_label_set]\n )\n\n if rescale_scores:\n n_in_class = np.count_nonzero(y_test == self.label)\n n_out_of_class = len(y_test) - n_in_class\n in_class_score *= n_in_class / (n_in_class + n_out_of_class)\n out_of_class_score *= n_out_of_class / (n_in_class + n_out_of_class)\n\n return in_class_score, out_of_class_score\n
compute_classwise_shapley_values(\n u: Utility,\n *,\n done: StoppingCriterion,\n truncation: TruncationPolicy,\n done_sample_complements: Optional[StoppingCriterion] = None,\n normalize_values: bool = True,\n use_default_scorer_value: bool = True,\n min_elements_per_label: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes an approximate Class-wise Shapley value by sampling independent permutations of the index set for each label and index sets sampled from the powerset of the complement (with respect to the currently evaluated label), approximating the sum

\\[v_u(i) = \\frac{1}{K} \\sum_k \\frac{1}{L} \\sum_l [u(\\sigma^{(l)}_{:i} \\cup \\{i\\} \\mid S^{(k)}) - u(\\sigma^{(l)}_{:i} \\mid S^{(k)})],\\]
where \\(\\sigma_{:i}\\) denotes the set of indices in permutation sigma before the position where \\(i\\) appears and \\(S\\) is a subset of the index set of all other labels (see the main documentation for details).
Utility object containing model, data, and scoring function. The scorer must be of type ClasswiseScorer.
Function that checks whether the computation needs to stop.
truncation
Callable function that decides whether to interrupt processing a permutation and set subsequent marginals to zero.
TYPE: TruncationPolicy
TruncationPolicy
done_sample_complements
Stopping criterion for the resampling of complement sets: for each permutation, complement sets are resampled until this criterion is met, which effectively controls how many complement sets are drawn per permutation.
TYPE: Optional[StoppingCriterion] DEFAULT: None
Optional[StoppingCriterion]
normalize_values
Indicates whether to normalize the values by the variation in each class times their in-class accuracy.
use_default_scorer_value
The first set of indices is the sampled complement set. Unless otherwise specified, the default scorer value is used for this. If set to false, the base score is calculated from the utility.
The minimum number of elements for each opposite label.
ValuationResult object containing computed data values.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef compute_classwise_shapley_values(\n u: Utility,\n *,\n done: StoppingCriterion,\n truncation: TruncationPolicy,\n done_sample_complements: Optional[StoppingCriterion] = None,\n normalize_values: bool = True,\n use_default_scorer_value: bool = True,\n min_elements_per_label: int = 1,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n r\"\"\"\n Computes an approximate Class-wise Shapley value by sampling independent\n permutations of the index set for each label and index sets sampled from the\n powerset of the complement (with respect to the currently evaluated label),\n approximating the sum:\n\n $$\n v_u(i) = \\frac{1}{K} \\sum_k \\frac{1}{L} \\sum_l\n [u(\\sigma^{(l)}_{:i} \\cup \\{i\\} | S^{(k)} ) \u2212 u( \\sigma^{(l)}_{:i} | S^{(k)})],\n $$\n\n where $\\sigma_{:i}$ denotes the set of indices in permutation sigma before\n the position where $i$ appears and $S$ is a subset of the index set of all\n other labels (see [the main documentation][class-wise-shapley] for\n details).\n\n Args:\n u: Utility object containing model, data, and scoring function. The\n scorer must be of type\n [ClasswiseScorer][pydvl.value.shapley.classwise.ClasswiseScorer].\n done: Function that checks whether the computation needs to stop.\n truncation: Callable function that decides whether to interrupt processing a\n permutation and set subsequent marginals to zero.\n done_sample_complements: Function checking whether computation needs to stop.\n Otherwise, it will resample conditional sets until the stopping criterion is\n met.\n normalize_values: Indicates whether to normalize the values by the variation\n in each class times their in-class accuracy.\n done_sample_complements: Number of times to resample the complement set\n for each permutation.\n use_default_scorer_value: The first set of indices is the sampled complement\n set. Unless not otherwise specified, the default scorer value is used for\n this. If it is set to false, the base score is calculated from the utility.\n min_elements_per_label: The minimum number of elements for each opposite\n label.\n n_jobs: Number of parallel jobs to run.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display a progress bar.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n ValuationResult object containing computed data values.\n\n !!! tip \"New in version 0.7.1\"\n \"\"\"\n dim_correct = u.data.y_train.ndim == 1 and u.data.y_test.ndim == 1\n is_integral = all(\n map(\n lambda v: isinstance(v, numbers.Integral), (*u.data.y_train, *u.data.y_test)\n )\n )\n if not dim_correct or not is_integral:\n raise ValueError(\n \"The supplied dataset has to be a 1-dimensional classification dataset.\"\n )\n\n if not isinstance(u.scorer, ClasswiseScorer):\n raise ValueError(\n \"Please set a subclass of ClasswiseScorer object as scorer object of the\"\n \" utility. 
See scoring argument of Utility.\"\n )\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n u_ref = parallel_backend.put(u)\n n_jobs = parallel_backend.effective_n_jobs(n_jobs)\n n_submitted_jobs = 2 * n_jobs\n\n pbar = tqdm(disable=not progress, position=0, total=100, unit=\"%\")\n algorithm = \"classwise_shapley\"\n accumulated_result = ValuationResult.zeros(\n algorithm=algorithm, indices=u.data.indices, data_names=u.data.data_names\n )\n terminate_exec = False\n seed_sequence = ensure_seed_sequence(seed)\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n with parallel_backend.executor(max_workers=n_jobs) as executor:\n pending: Set[Future] = set()\n while True:\n completed_futures, pending = wait(\n pending, timeout=60, return_when=FIRST_COMPLETED\n )\n for future in completed_futures:\n accumulated_result += future.result()\n if done(accumulated_result):\n terminate_exec = True\n break\n\n pbar.n = 100 * done.completion()\n pbar.refresh()\n if terminate_exec:\n break\n\n n_remaining_slots = n_submitted_jobs - len(pending)\n seeds = seed_sequence.spawn(n_remaining_slots)\n for i in range(n_remaining_slots):\n future = executor.submit(\n _permutation_montecarlo_classwise_shapley_one_step,\n u_ref,\n truncation=truncation,\n done_sample_complements=done_sample_complements,\n use_default_scorer_value=use_default_scorer_value,\n min_elements_per_label=min_elements_per_label,\n algorithm_name=algorithm,\n seed=seeds[i],\n )\n pending.add(future)\n\n result = accumulated_result\n if normalize_values:\n result = _normalize_classwise_shapley_values(result, u)\n\n return result\n
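A usage sketch, reusing the Utility with a ClasswiseScorer built in the previous example. The stopping criteria and the truncation policy are illustrative choices, not recommendations:

```python
from pydvl.value.shapley.classwise import compute_classwise_shapley_values
from pydvl.value.shapley.truncated import NoTruncation
from pydvl.value.stopping import MaxChecks

values = compute_classwise_shapley_values(
    u,                                     # Utility carrying a ClasswiseScorer
    done=MaxChecks(1000),                  # overall stopping criterion (illustrative)
    truncation=NoTruncation(),             # never truncate permutations
    done_sample_complements=MaxChecks(1),  # criterion for the complement-set resampling
    normalize_values=True,
    n_jobs=4,
    seed=42,
)
```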
compute_shapley_values(\n u: Utility,\n *,\n done: StoppingCriterion = MaxChecks(None),\n mode: ShapleyMode = ShapleyMode.TruncatedMontecarlo,\n n_jobs: int = 1,\n seed: Optional[Seed] = None,\n **kwargs\n) -> ValuationResult\n
Umbrella method to compute Shapley values with any of the available algorithms.
The following algorithms are available. Note that the exact methods can only work with very small datasets and are thus intended only for testing. Some algorithms also accept additional arguments, please refer to the documentation of each particular method.
combinatorial_exact
combinatorial_montecarlo
permutation_exact
permutation_montecarlo
owen_sampling
q_max
owen_halved
group_testing
Additionally, one can use model-specific methods:
knn
Object used to determine when to stop the computation for Monte Carlo methods. The default is to stop after 100 iterations. See the available criteria in stopping. It is possible to combine several of them using boolean operators. Some methods ignore this argument, others require specific subtypes.
TYPE: StoppingCriterion DEFAULT: MaxChecks(None)
MaxChecks(None)
Number of parallel jobs (available only to some methods)
Choose which Shapley algorithm to use. See ShapleyMode for a list of allowed values.
TYPE: ShapleyMode DEFAULT: TruncatedMontecarlo
ShapleyMode
TruncatedMontecarlo
src/pydvl/value/shapley/common.py
def compute_shapley_values(\n u: Utility,\n *,\n done: StoppingCriterion = MaxChecks(None),\n mode: ShapleyMode = ShapleyMode.TruncatedMontecarlo,\n n_jobs: int = 1,\n seed: Optional[Seed] = None,\n **kwargs,\n) -> ValuationResult:\n \"\"\"Umbrella method to compute Shapley values with any of the available\n algorithms.\n\n See [Data valuation][data-valuation] for an overview.\n\n The following algorithms are available. Note that the exact methods can only\n work with very small datasets and are thus intended only for testing. Some\n algorithms also accept additional arguments, please refer to the\n documentation of each particular method.\n\n - `combinatorial_exact`: uses the combinatorial implementation of data\n Shapley. Implemented in\n [combinatorial_exact_shapley()][pydvl.value.shapley.naive.combinatorial_exact_shapley].\n - `combinatorial_montecarlo`: uses the approximate Monte Carlo\n implementation of combinatorial data Shapley. Implemented in\n [combinatorial_montecarlo_shapley()][pydvl.value.shapley.montecarlo.combinatorial_montecarlo_shapley].\n - `permutation_exact`: uses the permutation-based implementation of data\n Shapley. Computation is **not parallelized**. Implemented in\n [permutation_exact_shapley()][pydvl.value.shapley.naive.permutation_exact_shapley].\n - `permutation_montecarlo`: uses the approximate Monte Carlo\n implementation of permutation data Shapley. Accepts a\n [TruncationPolicy][pydvl.value.shapley.truncated.TruncationPolicy] to stop\n computing marginals. Implemented in\n [permutation_montecarlo_shapley()][pydvl.value.shapley.montecarlo.permutation_montecarlo_shapley].\n - `owen_sampling`: Uses the Owen continuous extension of the utility\n function to the unit cube. Implemented in\n [owen_sampling_shapley()][pydvl.value.shapley.owen.owen_sampling_shapley]. This\n method does not take a [StoppingCriterion][pydvl.value.stopping.StoppingCriterion]\n but instead requires a parameter `q_max` for the number of subdivisions\n of the unit interval to use for integration, and another parameter\n `n_samples` for the number of subsets to sample for each $q$.\n - `owen_halved`: Same as 'owen_sampling' but uses correlated samples in the\n expectation. Implemented in\n [owen_sampling_shapley()][pydvl.value.shapley.owen.owen_sampling_shapley].\n This method requires an additional parameter `q_max` for the number of\n subdivisions of the interval [0,0.5] to use for integration, and another\n parameter `n_samples` for the number of subsets to sample for each $q$.\n - `group_testing`: estimates differences of Shapley values and solves a\n constraint satisfaction problem. High sample complexity, not recommended.\n Implemented in [group_testing_shapley()][pydvl.value.shapley.gt.group_testing_shapley]. This\n method does not take a [StoppingCriterion][pydvl.value.stopping.StoppingCriterion]\n but instead requires a parameter `n_samples` for the number of\n iterations to run.\n\n Additionally, one can use model-specific methods:\n\n - `knn`: Exact method for K-Nearest neighbour models. Implemented in\n [knn_shapley()][pydvl.value.shapley.knn.knn_shapley].\n\n Args:\n u: [Utility][pydvl.utils.utility.Utility] object with model, data, and\n scoring function.\n done: Object used to determine when to stop the computation for Monte\n Carlo methods. The default is to stop after 100 iterations. See the\n available criteria in [stopping][pydvl.value.stopping]. It is\n possible to combine several of them using boolean operators. 
Some\n methods ignore this argument, others require specific subtypes.\n n_jobs: Number of parallel jobs (available only to some methods)\n seed: Either an instance of a numpy random number generator or a seed\n for it.\n mode: Choose which shapley algorithm to use. See\n [ShapleyMode][pydvl.value.shapley.ShapleyMode] for a list of allowed\n value.\n\n Returns:\n Object with the results.\n\n \"\"\"\n progress: bool = kwargs.pop(\"progress\", False)\n\n if mode not in list(ShapleyMode):\n raise ValueError(f\"Invalid value encountered in {mode=}\")\n\n if mode in (\n ShapleyMode.PermutationMontecarlo,\n ShapleyMode.ApproShapley,\n ShapleyMode.TruncatedMontecarlo,\n ):\n truncation = kwargs.pop(\"truncation\", NoTruncation())\n return permutation_montecarlo_shapley( # type: ignore\n u=u,\n done=done,\n truncation=truncation,\n n_jobs=n_jobs,\n seed=seed,\n progress=progress,\n **kwargs,\n )\n elif mode == ShapleyMode.CombinatorialMontecarlo:\n return combinatorial_montecarlo_shapley( # type: ignore\n u, done=done, n_jobs=n_jobs, seed=seed, progress=progress\n )\n elif mode == ShapleyMode.CombinatorialExact:\n return combinatorial_exact_shapley(u, n_jobs=n_jobs, progress=progress) # type: ignore\n elif mode == ShapleyMode.PermutationExact:\n return permutation_exact_shapley(u, progress=progress)\n elif mode == ShapleyMode.Owen or mode == ShapleyMode.OwenAntithetic:\n if kwargs.get(\"n_samples\") is None:\n raise ValueError(\"n_samples cannot be None for Owen methods\")\n if kwargs.get(\"max_q\") is None:\n raise ValueError(\"Owen Sampling requires max_q for the outer integral\")\n\n method = (\n OwenAlgorithm.Standard\n if mode == ShapleyMode.Owen\n else OwenAlgorithm.Antithetic\n )\n return owen_sampling_shapley( # type: ignore\n u,\n n_samples=int(kwargs.get(\"n_samples\", -1)),\n max_q=int(kwargs.get(\"max_q\", -1)),\n method=method,\n n_jobs=n_jobs,\n seed=seed,\n )\n elif mode == ShapleyMode.KNN:\n return knn_shapley(u, progress=progress)\n elif mode == ShapleyMode.GroupTesting:\n n_samples = kwargs.pop(\"n_samples\")\n if n_samples is None:\n raise ValueError(\"n_samples cannot be None for Group Testing\")\n epsilon = kwargs.pop(\"epsilon\")\n if epsilon is None:\n raise ValueError(\"Group Testing requires error bound epsilon\")\n delta = kwargs.pop(\"delta\", 0.05)\n return group_testing_shapley( # type: ignore\n u,\n epsilon=float(epsilon),\n delta=delta,\n n_samples=int(n_samples),\n n_jobs=n_jobs,\n progress=progress,\n seed=seed,\n **kwargs,\n )\n else:\n raise ValueError(f\"Invalid value encountered in {mode=}\")\n
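A sketch of the umbrella interface using the permutation Monte Carlo method (illustrative stopping criterion, dataset and job count):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Utility
from pydvl.value.shapley import ShapleyMode
from pydvl.value.shapley.common import compute_shapley_values
from pydvl.value.stopping import MaxChecks

data = Dataset.from_sklearn(load_iris())
u = Utility(LogisticRegression(max_iter=500), data)

values = compute_shapley_values(
    u,
    mode=ShapleyMode.PermutationMontecarlo,
    done=MaxChecks(1000),  # stop after the criterion has been checked 1000 times
    n_jobs=4,
    seed=42,
)
```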
This module implements Group Testing for the approximation of Shapley values, as introduced in (Jia, R. et al., 2019)1. The sampling of index subsets is done in such a way that an approximation to the true Shapley values can be computed with guarantees.
This method is very inefficient. Potential improvements to the implementation notwithstanding, convergence seems to be very slow (in terms of evaluations of the utility required). We recommend other Monte Carlo methods instead.
You can read more in the documentation.
Jia, R. et al., 2019. Towards Efficient Data Valuation Based on the Shapley Value. In: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pp. 1167\u20131176. PMLR.\u00a0\u21a9
num_samples_eps_delta(\n eps: float, delta: float, n: int, utility_range: float\n) -> int\n
Implements the formula in Theorem 3 of (Jia, R. et al., 2019)1, which gives a lower bound on the number of samples required to obtain an (\u03b5/\u221an, \u03b4/(N(N-1)))-approximation to all pair-wise differences of Shapley values, w.r.t. the \\(\\ell_2\\) norm.
\u03b5
\u03b4
Number of data points
utility_range
Number of samples from \\(2^{[n]}\\) guaranteeing \u03b5/\u221an-correct Shapley pair-wise differences of values with probability 1-\u03b4/(N(N-1)).
src/pydvl/value/shapley/gt.py
def num_samples_eps_delta(\n eps: float, delta: float, n: int, utility_range: float\n) -> int:\n r\"\"\"Implements the formula in Theorem 3 of (Jia, R. et al., 2019)<sup><a href=\"#jia_efficient_2019\">1</a></sup>\n which gives a lower bound on the number of samples required to obtain an\n (\u03b5/\u221an,\u03b4/(N(N-1))-approximation to all pair-wise differences of Shapley\n values, wrt. $\\ell_2$ norm.\n\n Args:\n eps: \u03b5\n delta: \u03b4\n n: Number of data points\n utility_range: Range of the [Utility][pydvl.utils.utility.Utility] function\n Returns:\n Number of samples from $2^{[n]}$ guaranteeing \u03b5/\u221an-correct Shapley\n pair-wise differences of values with probability 1-\u03b4/(N(N-1)).\n\n !!! tip \"New in version 0.4.0\"\n\n \"\"\"\n constants = _constants(n=n, epsilon=eps, delta=delta, utility_range=utility_range)\n return int(constants.T)\n
group_testing_shapley(\n u: Utility,\n n_samples: int,\n epsilon: float,\n delta: float,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n **options: dict\n) -> ValuationResult\n
Implements group testing for approximation of Shapley values as described in (Jia, R. et al., 2019)1.
This method is very inefficient. It requires several orders of magnitude more evaluations of the utility than others in montecarlo. It also uses several intermediate objects like the results from the runners and the constraint matrices which can become rather large.
By picking a specific distribution over subsets, the differences in Shapley values can be approximated with a Monte Carlo sum. These are then used to solve for the individual values in a feasibility problem.
Number of tests to perform. Use num_samples_eps_delta to estimate this.
epsilon
From the (\u03b5,\u03b4) sample bound. Use the same as for the estimation of n_iterations.
Number of parallel jobs to use. Each worker performs a chunk of all tests (i.e. utility evaluations).
Whether to display progress bars for each job.
Additional options to pass to cvxpy.Problem.solve(). E.g. to change the solver (which defaults to cvxpy.SCS) pass solver=cvxpy.CVXOPT.
cvxpy.SCS
solver=cvxpy.CVXOPT
TYPE: dict DEFAULT: {}
dict
Changed in version 0.5.0
Changed the solver to cvxpy instead of scipy's linprog. Added the ability to pass arbitrary options to it.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef group_testing_shapley(\n u: Utility,\n n_samples: int,\n epsilon: float,\n delta: float,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n **options: dict,\n) -> ValuationResult:\n \"\"\"Implements group testing for approximation of Shapley values as described\n in (Jia, R. et al., 2019)<sup><a href=\"#jia_efficient_2019\">1</a></sup>.\n\n !!! Warning\n This method is very inefficient. It requires several orders of magnitude\n more evaluations of the utility than others in\n [montecarlo][pydvl.value.shapley.montecarlo]. It also uses several intermediate\n objects like the results from the runners and the constraint matrices\n which can become rather large.\n\n By picking a specific distribution over subsets, the differences in Shapley\n values can be approximated with a Monte Carlo sum. These are then used to\n solve for the individual values in a feasibility problem.\n\n Args:\n u: Utility object with model, data, and scoring function\n n_samples: Number of tests to perform. Use\n [num_samples_eps_delta][pydvl.value.shapley.gt.num_samples_eps_delta]\n to estimate this.\n epsilon: From the (\u03b5,\u03b4) sample bound. Use the same as for the\n estimation of `n_iterations`.\n delta: From the (\u03b5,\u03b4) sample bound. Use the same as for the\n estimation of `n_iterations`.\n n_jobs: Number of parallel jobs to use. Each worker performs a chunk\n of all tests (i.e. utility evaluations).\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display progress bars for each job.\n seed: Either an instance of a numpy random number generator or a seed for it.\n options: Additional options to pass to\n [cvxpy.Problem.solve()](https://www.cvxpy.org/tutorial/advanced/index.html#solve-method-options).\n E.g. to change the solver (which defaults to `cvxpy.SCS`) pass\n `solver=cvxpy.CVXOPT`.\n\n Returns:\n Object with the data values.\n\n !!! tip \"New in version 0.4.0\"\n\n !!! tip \"Changed in version 0.5.0\"\n Changed the solver to cvxpy instead of scipy's linprog. Added the ability\n to pass arbitrary options to it.\n\n !!! 
tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n\n n = len(u.data.indices)\n\n const = _constants(\n n=n,\n epsilon=epsilon,\n delta=delta,\n utility_range=u.score_range.max() - u.score_range.min(),\n )\n T = n_samples\n if T < const.T:\n log.warning(\n f\"n_samples of {T} are below the required {const.T} for the \"\n f\"\u03b5={epsilon:.02f} guarantee at \u03b4={1 - delta:.02f} probability\"\n )\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n samples_per_job = max(1, n_samples // parallel_backend.effective_n_jobs(n_jobs))\n\n def reducer(\n results_it: Iterable[Tuple[NDArray, NDArray]]\n ) -> Tuple[NDArray, NDArray]:\n return np.concatenate(list(x[0] for x in results_it)).astype(\n np.float_\n ), np.concatenate(list(x[1] for x in results_it)).astype(np.int_)\n\n seed_sequence = ensure_seed_sequence(seed)\n map_reduce_seed_sequence, cvxpy_seed = tuple(seed_sequence.spawn(2))\n\n map_reduce_job: MapReduceJob[Utility, Tuple[NDArray, NDArray]] = MapReduceJob(\n u,\n map_func=_group_testing_shapley,\n reduce_func=reducer,\n map_kwargs=dict(n_samples=samples_per_job, progress=progress),\n parallel_backend=parallel_backend,\n n_jobs=n_jobs,\n )\n uu, betas = map_reduce_job(seed=map_reduce_seed_sequence)\n\n # Matrix of estimated differences. See Eqs. (3) and (4) in the paper.\n C = np.zeros(shape=(n, n))\n for i in range(n):\n for j in range(i + 1, n):\n C[i, j] = np.dot(uu, betas[:, i] - betas[:, j])\n C *= const.Z / T\n total_utility = u(u.data.indices)\n\n ###########################################################################\n # Solution of the constraint problem with cvxpy\n\n v = cp.Variable(n)\n constraints = [cp.sum(v) == total_utility]\n for i in range(n):\n for j in range(i + 1, n):\n constraints.append(v[i] - v[j] <= epsilon + C[i, j])\n constraints.append(v[j] - v[i] <= epsilon - C[i, j])\n\n problem = cp.Problem(cp.Minimize(0), constraints)\n solver = options.pop(\"solver\", cp.SCS)\n problem.solve(solver=solver, **options)\n\n if problem.status != \"optimal\":\n log.warning(f\"cvxpy returned status {problem.status}\")\n values = (\n np.nan * np.ones_like(u.data.indices)\n if not hasattr(v.value, \"__len__\")\n else v.value\n )\n status = Status.Failed\n else:\n values = v.value\n status = Status.Converged\n\n return ValuationResult(\n algorithm=\"group_testing_shapley\",\n status=status,\n values=values,\n data_names=u.data.data_names,\n solver_status=problem.status,\n )\n
This module contains Shapley computations for K-Nearest Neighbours.
Implement approximate KNN computation for sublinear complexity
Jia, R. et al., 2019. Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms. In: Proceedings of the VLDB Endowment, Vol. 12, No. 11, pp. 1610\u20131623.\u00a0\u21a9
knn_shapley(u: Utility, *, progress: bool = True) -> ValuationResult\n
Computes exact Shapley values for a KNN classifier.
This implements the method described in (Jia, R. et al., 2019)1. It exploits the local structure of K-Nearest Neighbours to reduce the number of calls to the utility function to a constant number per index, thus reducing computation time to \\(O(n)\\).
Utility with a KNN model to extract parameters from. The object will not be modified nor used other than to call get_params()
TypeError
If the model in the utility is not a sklearn.neighbors.KNeighborsClassifier.
New in version 0.1.0
src/pydvl/value/shapley/knn.py
def knn_shapley(u: Utility, *, progress: bool = True) -> ValuationResult:\n \"\"\"Computes exact Shapley values for a KNN classifier.\n\n This implements the method described in (Jia, R. et al., 2019)<sup><a href=\"#jia_efficient_2019a\">1</a></sup>.\n It exploits the local structure of K-Nearest Neighbours to reduce the number\n of calls to the utility function to a constant number per index, thus\n reducing computation time to $O(n)$.\n\n Args:\n u: Utility with a KNN model to extract parameters from. The object\n will not be modified nor used other than to call [get_params()](\n <https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html#sklearn.base.BaseEstimator.get_params>)\n progress: Whether to display a progress bar.\n\n Returns:\n Object with the data values.\n\n Raises:\n TypeError: If the model in the utility is not a\n [sklearn.neighbors.KNeighborsClassifier][].\n\n !!! tip \"New in version 0.1.0\"\n\n \"\"\"\n if not isinstance(u.model, KNeighborsClassifier):\n raise TypeError(\"KNN Shapley requires a K-Nearest Neighbours model\")\n\n defaults: Dict[str, Union[int, str]] = {\n \"algorithm\": \"ball_tree\" if u.data.dim >= 20 else \"kd_tree\",\n \"metric\": \"minkowski\",\n \"p\": 2,\n }\n defaults.update(u.model.get_params())\n # HACK: NearestNeighbors doesn't support this. There will be more...\n del defaults[\"weights\"]\n n_neighbors: int = int(defaults[\"n_neighbors\"])\n defaults[\"n_neighbors\"] = len(u.data) # We want all training points sorted\n\n assert n_neighbors < len(u.data)\n # assert data.target_dim == 1\n\n nns = NearestNeighbors(**defaults).fit(u.data.x_train)\n # closest to farthest\n _, indices = nns.kneighbors(u.data.x_test)\n\n values: NDArray[np.float_] = np.zeros_like(u.data.indices, dtype=np.float_)\n n = len(u.data)\n yt = u.data.y_train\n iterator = enumerate(zip(u.data.y_test, indices), start=1)\n for j, (y, ii) in tqdm(iterator, disable=not progress):\n value_at_x = int(yt[ii[-1]] == y) / n\n values[ii[-1]] += (value_at_x - values[ii[-1]]) / j\n for i in range(n - 2, n_neighbors, -1): # farthest to closest\n value_at_x = (\n values[ii[i + 1]] + (int(yt[ii[i]] == y) - int(yt[ii[i + 1]] == y)) / i\n )\n values[ii[i]] += (value_at_x - values[ii[i]]) / j\n for i in range(n_neighbors, -1, -1): # farthest to closest\n value_at_x = (\n values[ii[i + 1]]\n + (int(yt[ii[i]] == y) - int(yt[ii[i + 1]] == y)) / n_neighbors\n )\n values[ii[i]] += (value_at_x - values[ii[i]]) / j\n\n return ValuationResult(\n algorithm=\"knn_shapley\",\n status=Status.Converged,\n values=values,\n data_names=u.data.data_names,\n )\n
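A sketch for the exact KNN method (illustrative dataset; the model must be a scikit-learn KNeighborsClassifier):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

from pydvl.utils import Dataset, Utility
from pydvl.value.shapley.knn import knn_shapley

data = Dataset.from_sklearn(load_iris())
u = Utility(KNeighborsClassifier(n_neighbors=5), data)

# Exact values for the KNN utility, without any model retraining.
values = knn_shapley(u, progress=False)
```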
Monte Carlo approximations to Shapley Data values.
You probably want to use the common interface provided by compute_shapley_values() instead of directly using the functions in this module.
Because exact computation of Shapley values requires \\(\\mathcal{O}(2^n)\\) re-trainings of the model, several Monte Carlo approximations are available. The first two sample from the powerset of the training data directly: combinatorial_montecarlo_shapley() and owen_sampling_shapley(). The latter uses a reformulation in terms of a continuous extension of the utility.
Alternatively, employing another reformulation of the expression above as a sum over permutations, one has the implementation in permutation_montecarlo_shapley() with the option to pass an early stopping strategy to reduce computation as done in Truncated MonteCarlo Shapley (TMCS).
Also see
It is also possible to use group_testing_shapley() to reduce the number of evaluations of the utility. However, this method is typically outperformed by the others in this module.
Additionally, you can consider grouping your data points using GroupedDataset and computing the values of the groups instead. This is not to be confused with \"group testing\" as implemented in group_testing_shapley(): any of the algorithms mentioned above, including Group Testing, can work to valuate groups of samples as units.
permutation_montecarlo_shapley(\n u: Utility,\n done: StoppingCriterion,\n *,\n truncation: TruncationPolicy = NoTruncation(),\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes an approximate Shapley value by sampling independent permutations of the index set, approximating the sum

\\[v_u(x_i) = \\frac{1}{n!} \\sum_{\\sigma \\in \\Pi(n)} \\tilde{w}(|\\sigma_{:i}|)\\,[u(\\sigma_{:i} \\cup \\{i\\}) - u(\\sigma_{:i})],\\]

where \\(\\sigma_{:i}\\) denotes the set of indices in permutation sigma before the position where \\(i\\) appears (see the documentation on data valuation for details).
This implements the method described in (Ghorbani and Zou, 2019)1 with a double stopping criterion.
Think of how to add Robin-Gelman or some other more principled stopping criterion.
Instead of naively implementing the expectation, we sequentially add points to coalitions from a permutation and incrementally compute marginal utilities. We stop computing marginals for a given permutation based on a TruncationPolicy. (Ghorbani and Zou, 2019)1 mention two policies: one that stops after a certain fraction of marginals are computed, implemented in FixedTruncation, and one that stops if the last computed utility (\"score\") is close to the total utility using the standard deviation of the utility as a measure of proximity, implemented in BootstrapTruncation.
We keep sampling permutations and updating all shapley values until the StoppingCriterion returns True.
function checking whether computation must stop.
An optional callable which decides whether to interrupt processing a permutation and set all subsequent marginals to zero. Typically used to stop computation when the marginal is small.
TYPE: TruncationPolicy DEFAULT: NoTruncation()
NoTruncation()
number of jobs across which to distribute the computation.
src/pydvl/value/shapley/montecarlo.py
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef permutation_montecarlo_shapley(\n u: Utility,\n done: StoppingCriterion,\n *,\n truncation: TruncationPolicy = NoTruncation(),\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n r\"\"\"Computes an approximate Shapley value by sampling independent\n permutations of the index set, approximating the sum:\n\n $$\n v_u(x_i) = \\frac{1}{n!} \\sum_{\\sigma \\in \\Pi(n)}\n \\tilde{w}( | \\sigma_{:i} | )[u(\\sigma_{:i} \\cup \\{i\\}) \u2212 u(\\sigma_{:i})],\n $$\n\n where $\\sigma_{:i}$ denotes the set of indices in permutation sigma before\n the position where $i$ appears (see [[data-valuation]] for details).\n\n This implements the method described in (Ghorbani and Zou, 2019)<sup><a\n href=\"#ghorbani_data_2019\">1</a></sup> with a double stopping criterion.\n\n !!! Todo\n Think of how to add Robin-Gelman or some other more principled stopping\n criterion.\n\n Instead of naively implementing the expectation, we sequentially add points\n to coalitions from a permutation and incrementally compute marginal utilities.\n We stop computing marginals for a given permutation based on a\n [TruncationPolicy][pydvl.value.shapley.truncated.TruncationPolicy].\n (Ghorbani and Zou, 2019)<sup><a href=\"#ghorbani_data_2019\">1</a></sup>\n mention two policies: one that stops after a certain\n fraction of marginals are computed, implemented in\n [FixedTruncation][pydvl.value.shapley.truncated.FixedTruncation],\n and one that stops if the last computed utility (\"score\") is close to the\n total utility using the standard deviation of the utility as a measure of\n proximity, implemented in\n [BootstrapTruncation][pydvl.value.shapley.truncated.BootstrapTruncation].\n\n We keep sampling permutations and updating all shapley values\n until the [StoppingCriterion][pydvl.value.stopping.StoppingCriterion] returns\n `True`.\n\n Args:\n u: Utility object with model, data, and scoring function.\n done: function checking whether computation must stop.\n truncation: An optional callable which decides whether to interrupt\n processing a permutation and set all subsequent marginals to\n zero. Typically used to stop computation when the marginal is small.\n n_jobs: number of jobs across which to distribute the computation.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display a progress bar.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Object with the data values.\n\n !!! 
tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n algorithm = \"permutation_montecarlo_shapley\"\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n u = parallel_backend.put(u)\n max_workers = parallel_backend.effective_n_jobs(n_jobs)\n n_submitted_jobs = 2 * max_workers # number of jobs in the executor's queue\n\n seed_sequence = ensure_seed_sequence(seed)\n result = ValuationResult.zeros(\n algorithm=algorithm, indices=u.data.indices, data_names=u.data.data_names\n )\n\n pbar = tqdm(disable=not progress, total=100, unit=\"%\")\n\n with parallel_backend.executor(\n max_workers=max_workers, cancel_futures=CancellationPolicy.ALL\n ) as executor:\n pending: set[Future] = set()\n while True:\n pbar.n = 100 * done.completion()\n pbar.refresh()\n\n completed, pending = wait(pending, timeout=1.0, return_when=FIRST_COMPLETED)\n for future in completed:\n result += future.result()\n # we could check outside the loop, but that means more\n # submissions if the stopping criterion is unstable\n if done(result):\n return result\n\n # Ensure that we always have n_submitted_jobs in the queue or running\n n_remaining_slots = n_submitted_jobs - len(pending)\n seeds = seed_sequence.spawn(n_remaining_slots)\n for i in range(n_remaining_slots):\n future = executor.submit(\n _permutation_montecarlo_one_step,\n u,\n truncation,\n algorithm,\n seed=seeds[i],\n )\n pending.add(future)\n
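For orientation, here is a minimal usage sketch (assuming a scikit-learn model and the `MaxUpdates` stopping criterion from `pydvl.value.stopping`; adapt names to your installed version):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Scorer, Utility
from pydvl.value.shapley.montecarlo import permutation_montecarlo_shapley
from pydvl.value.shapley.truncated import RelativeTruncation
from pydvl.value.stopping import MaxUpdates

# Wrap model, data and scoring function into a Utility
data = Dataset.from_sklearn(load_iris(), train_size=0.8)
utility = Utility(LogisticRegression(), data, Scorer("accuracy", default=0.0))

# Stop after at most 100 updates per value, and truncate permutations whose
# marginals fall below 1% of the total utility
result = permutation_montecarlo_shapley(
    utility,
    done=MaxUpdates(100),
    truncation=RelativeTruncation(utility, rtol=0.01),
    n_jobs=4,
    progress=True,
)
print(result.values)
```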
combinatorial_montecarlo_shapley(\n u: Utility,\n done: StoppingCriterion,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Computes an approximate Shapley value using the combinatorial definition:
This consists of randomly sampling subsets of the power set of the training indices in u.data, and computing their marginal utilities. See Data valuation for details.
Note that because sampling is done with replacement, the approximation is poor even for \\(2^{m}\\) subsets with \\(m>n\\), even though there are \\(2^{n-1}\\) subsets for each \\(i\\). Prefer permutation_montecarlo_shapley().
Parallelization is done by splitting the set of indices across processes and computing the sum over subsets \\(S \\subseteq N \\setminus \\{i\\}\\) separately.
Stopping criterion for the computation.
number of parallel jobs across which to distribute the computation. Each worker receives a chunk of indices
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef combinatorial_montecarlo_shapley(\n u: Utility,\n done: StoppingCriterion,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None,\n) -> ValuationResult:\n r\"\"\"Computes an approximate Shapley value using the combinatorial\n definition:\n\n $$v_u(i) = \\frac{1}{n} \\sum_{S \\subseteq N \\setminus \\{i\\}}\n \\binom{n-1}{ | S | }^{-1} [u(S \\cup \\{i\\}) \u2212 u(S)]$$\n\n This consists of randomly sampling subsets of the power set of the training\n indices in [u.data][pydvl.utils.utility.Utility], and computing their\n marginal utilities. See [Data valuation][data-valuation] for details.\n\n Note that because sampling is done with replacement, the approximation is\n poor even for $2^{m}$ subsets with $m>n$, even though there are $2^{n-1}$\n subsets for each $i$. Prefer\n [permutation_montecarlo_shapley()][pydvl.value.shapley.montecarlo.permutation_montecarlo_shapley].\n\n Parallelization is done by splitting the set of indices across processes and\n computing the sum over subsets $S \\subseteq N \\setminus \\{i\\}$ separately.\n\n Args:\n u: Utility object with model, data, and scoring function\n done: Stopping criterion for the computation.\n n_jobs: number of parallel jobs across which to distribute the\n computation. Each worker receives a chunk of\n [indices][pydvl.utils.dataset.Dataset.indices]\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display progress bars for each job.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Object with the data values.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n map_reduce_job: MapReduceJob[NDArray, ValuationResult] = MapReduceJob(\n u.data.indices,\n map_func=_combinatorial_montecarlo_shapley,\n reduce_func=lambda results: reduce(operator.add, results),\n map_kwargs=dict(u=u, done=done, progress=progress),\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n )\n return map_reduce_job(seed=seed)\n
This module implements exact Shapley values using either the combinatorial or permutation definition.
The exact computation of \\(n\\) values takes \\(\\mathcal{O}(2^n)\\) evaluations of the utility and is therefore only possible for small datasets. For larger datasets, consider using any of the approximations, such as Monte Carlo, or proxy models like kNN.
See Data valuation for details.
permutation_exact_shapley(\n u: Utility, *, progress: bool = True\n) -> ValuationResult\n
Computes the exact Shapley value using the formulation with permutations:
When the length of the training set is > 10 this prints a warning since the computation becomes too expensive. Used mostly for internal testing and simple use cases. Please refer to the Monte Carlo approximations for practical applications.
src/pydvl/value/shapley/naive.py
def permutation_exact_shapley(u: Utility, *, progress: bool = True) -> ValuationResult:\n r\"\"\"Computes the exact Shapley value using the formulation with permutations:\n\n $$v_u(x_i) = \\frac{1}{n!} \\sum_{\\sigma \\in \\Pi(n)} [u(\\sigma_{i-1}\n \\cup {i}) \u2212 u(\\sigma_{i})].$$\n\n See [Data valuation][data-valuation] for details.\n\n When the length of the training set is > 10 this prints a warning since the\n computation becomes too expensive. Used mostly for internal testing and\n simple use cases. Please refer to the [Monte Carlo\n approximations][pydvl.value.shapley.montecarlo] for practical applications.\n\n Args:\n u: Utility object with model, data, and scoring function\n progress: Whether to display progress bars for each job.\n\n Returns:\n Object with the data values.\n \"\"\"\n\n n = len(u.data)\n # Note that the cache in utility saves most of the refitting because we\n # use frozenset for the input.\n if n > 10:\n warnings.warn(\n f\"Large dataset! Computation requires {n}! calls to utility()\",\n RuntimeWarning,\n )\n\n values = np.zeros(n)\n for p in tqdm(\n permutations(u.data.indices),\n disable=not progress,\n desc=\"Permutation\",\n total=math.factorial(n),\n ):\n for i, idx in enumerate(p):\n values[idx] += u(p[: i + 1]) - u(p[:i])\n values /= math.factorial(n)\n\n return ValuationResult(\n algorithm=\"permutation_exact_shapley\",\n status=Status.Converged,\n values=values,\n data_names=u.data.data_names,\n )\n
combinatorial_exact_shapley(\n u: Utility,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False\n) -> ValuationResult\n
Computes the exact Shapley value using the combinatorial definition.
If the length of the training set is > n_jobs*20 this prints a warning because the computation is very expensive. Used mostly for internal testing and simple use cases. Please refer to the Monte Carlo approximations for practical applications.
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef combinatorial_exact_shapley(\n u: Utility,\n *,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n) -> ValuationResult:\n r\"\"\"Computes the exact Shapley value using the combinatorial definition.\n\n $$v_u(i) = \\frac{1}{n} \\sum_{S \\subseteq N \\setminus \\{i\\}}\n \\binom{n-1}{ | S | }^{-1} [u(S \\cup \\{i\\}) \u2212 u(S)].$$\n\n See [Data valuation][data-valuation] for details.\n\n !!! Note\n If the length of the training set is > n_jobs*20 this prints a warning\n because the computation is very expensive. Used mostly for internal\n testing and simple use cases. Please refer to the\n [Monte Carlo][pydvl.value.shapley.montecarlo] approximations for\n practical applications.\n\n Args:\n u: Utility object with model, data, and scoring function\n n_jobs: Number of parallel jobs to use\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display progress bars for each job.\n\n Returns:\n Object with the data values.\n\n !!! tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n \"\"\"\n # Arbitrary choice, will depend on time required, caching, etc.\n if len(u.data) // n_jobs > 20:\n warnings.warn(\n f\"Large dataset! Computation requires 2^{len(u.data)} calls to model.fit()\"\n )\n\n def reduce_fun(results: List[NDArray]) -> NDArray:\n return np.array(results).sum(axis=0) # type: ignore\n\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n map_reduce_job: MapReduceJob[NDArray, NDArray] = MapReduceJob(\n u.data.indices,\n map_func=_combinatorial_exact_shapley,\n map_kwargs=dict(u=u, progress=progress),\n reduce_func=reduce_fun,\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n )\n values = map_reduce_job()\n return ValuationResult(\n algorithm=\"combinatorial_exact_shapley\",\n status=Status.Converged,\n values=values,\n data_names=u.data.data_names,\n )\n
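As a quick sanity check, here is a sketch comparing both exact formulations on a tiny synthetic problem (the model, scorer and dataset are purely illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

from pydvl.utils import Dataset, Scorer, Utility
from pydvl.value.shapley.naive import (
    combinatorial_exact_shapley,
    permutation_exact_shapley,
)

# Keep the training set tiny: exact computation costs O(2^n) / O(n!)
rng = np.random.default_rng(16)
X = rng.normal(size=(24, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
data = Dataset.from_arrays(X, y, train_size=6, random_state=16)

u = Utility(KNeighborsClassifier(n_neighbors=1), data, Scorer("accuracy", default=0.0))

comb = combinatorial_exact_shapley(u, n_jobs=1)
perm = permutation_exact_shapley(u, progress=False)

# Both definitions should agree up to numerical error
print(comb.values)
print(perm.values)
```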
Okhrati, R., Lipani, A., 2021. A Multilinear Sampling Algorithm to Estimate Shapley Values. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 7992–7999. IEEE.
Choices for the Owen sampling method.
Standard
Use q \u2208 [0, 1]
Antithetic
Use q \u2208 [0, 0.5] and correlated samples
owen_sampling_shapley(\n u: Utility,\n n_samples: int,\n max_q: int,\n *,\n method: OwenAlgorithm = OwenAlgorithm.Standard,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult\n
Owen sampling of Shapley values as described in (Okhrati and Lipani, 2021)1.
This function computes a Monte Carlo approximation to
using one of two methods. The first one, selected with the argument mode = OwenAlgorithm.Standard, approximates the integral with:
mode = OwenAlgorithm.Standard
where \\(q_j = \\frac{j}{Q} \\in [0,1]\\) and the sets \\(S^{(q_j)}\\) are such that a sample \\(x \\in S^{(q_j)}\\) if a draw from a \\(Ber(q_j)\\) distribution is 1.
The second method, selected with the argument mode = OwenAlgorithm.Antithetic, uses correlated samples in the inner sum to reduce the variance:
mode = OwenAlgorithm.Antithetic
where now \\(q_j = \\frac{j}{2Q} \\in [0,\\frac{1}{2}]\\), and \\(S^c\\) is the complement of \\(S\\).
The outer integration could be done instead with a quadrature rule.
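To make the standard estimator concrete, here is a small, self-contained illustration of the Owen sampling loop for a single index, using a toy set function in place of the model-based utility (this is not pyDVL code, just a mirror of the formula above):

```python
import numpy as np

rng = np.random.default_rng(42)
n, Q, M = 8, 10, 50          # number of indices, q-subdivisions, samples per q
i = 0                        # index whose value we estimate


def utility(subset: np.ndarray) -> float:
    # Toy set function; in practice this would retrain and score a model
    return float(len(subset)) ** 0.5


others = np.array([j for j in range(n) if j != i])
estimate = 0.0
for j in range(Q + 1):
    q = j / Q                                # sampling probability for this slice
    for _ in range(M):
        mask = rng.random(len(others)) < q   # Bernoulli(q) membership draws
        S = others[mask]
        estimate += utility(np.append(S, i)) - utility(S)
estimate /= (Q + 1) * M                      # average over all sampled (q, S) pairs
print(f"Owen estimate for index {i}: {estimate:.3f}")
```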
Utility object holding data, model and scoring function.
Number of sets to sample for each value of q.
max_q
Number of subdivisions for q \u2208 [0,1] (the element sampling probability) used to approximate the outer integral.
method
Selects the algorithm to use, see the description. Either OwenAlgorithm.Standard for \(q \in [0,1]\) or OwenAlgorithm.Antithetic for \(q \in [0,0.5]\) with correlated samples.
TYPE: OwenAlgorithm DEFAULT: Standard
OwenAlgorithm
Number of parallel jobs to use. Each worker receives a chunk of the total of max_q values for q.
New in version 0.3.0
Changed in version 0.5.0: support for parallel computation and antithetic sampling.
src/pydvl/value/shapley/owen.py
@deprecated(\n target=True,\n args_mapping={\"config\": \"config\"},\n deprecated_in=\"0.9.0\",\n remove_in=\"0.10.0\",\n)\ndef owen_sampling_shapley(\n u: Utility,\n n_samples: int,\n max_q: int,\n *,\n method: OwenAlgorithm = OwenAlgorithm.Standard,\n n_jobs: int = 1,\n parallel_backend: Optional[ParallelBackend] = None,\n config: Optional[ParallelConfig] = None,\n progress: bool = False,\n seed: Optional[Seed] = None\n) -> ValuationResult:\n r\"\"\"Owen sampling of Shapley values as described in\n (Okhrati and Lipani, 2021)<sup><a href=\"#okhrati_multilinear_2021\">1</a></sup>.\n\n This function computes a Monte Carlo approximation to\n\n $$v_u(i) = \\int_0^1 \\mathbb{E}_{S \\sim P_q(D_{\\backslash \\{i\\}})}\n [u(S \\cup \\{i\\}) - u(S)]$$\n\n using one of two methods. The first one, selected with the argument ``mode =\n OwenAlgorithm.Standard``, approximates the integral with:\n\n $$\\hat{v}_u(i) = \\frac{1}{Q M} \\sum_{j=0}^Q \\sum_{m=1}^M [u(S^{(q_j)}_m\n \\cup \\{i\\}) - u(S^{(q_j)}_m)],$$\n\n where $q_j = \\frac{j}{Q} \\in [0,1]$ and the sets $S^{(q_j)}$ are such that a\n sample $x \\in S^{(q_j)}$ if a draw from a $Ber(q_j)$ distribution is 1.\n\n The second method, selected with the argument ``mode =\n OwenAlgorithm.Antithetic``, uses correlated samples in the inner sum to\n reduce the variance:\n\n $$\\hat{v}_u(i) = \\frac{1}{2 Q M} \\sum_{j=0}^Q \\sum_{m=1}^M [u(S^{(q_j)}_m\n \\cup \\{i\\}) - u(S^{(q_j)}_m) + u((S^{(q_j)}_m)^c \\cup \\{i\\}) - u((S^{(\n q_j)}_m)^c)],$$\n\n where now $q_j = \\frac{j}{2Q} \\in [0,\\frac{1}{2}]$, and $S^c$ is the\n complement of $S$.\n\n !!! Note\n The outer integration could be done instead with a quadrature rule.\n\n Args:\n u: [Utility][pydvl.utils.utility.Utility] object holding data, model\n and scoring function.\n n_samples: Numer of sets to sample for each value of q\n max_q: Number of subdivisions for q \u2208 [0,1] (the element sampling\n probability) used to approximate the outer integral.\n method: Selects the algorithm to use, see the description. Either\n [OwenAlgorithm.Full][pydvl.value.shapley.owen.OwenAlgorithm] for\n $q \\in [0,1]$ or\n [OwenAlgorithm.Halved][pydvl.value.shapley.owen.OwenAlgorithm] for\n $q \\in [0,0.5]$ and correlated samples\n n_jobs: Number of parallel jobs to use. Each worker receives a chunk\n of the total of `max_q` values for q.\n parallel_backend: Parallel backend instance to use\n for parallelizing computations. If `None`,\n use [JoblibParallelBackend][pydvl.parallel.backends.JoblibParallelBackend] backend.\n See the [Parallel Backends][pydvl.parallel.backends] package\n for available options.\n config: (**DEPRECATED**) Object configuring parallel computation,\n with cluster address, number of cpus, etc.\n progress: Whether to display progress bars for each job.\n seed: Either an instance of a numpy random number generator or a seed for it.\n\n Returns:\n Object with the data values.\n\n !!! tip \"New in version 0.3.0\"\n\n !!! tip \"Changed in version 0.5.0\"\n Support for parallel computation and enable antithetic sampling.\n\n !!! 
tip \"Changed in version 0.9.0\"\n Deprecated `config` argument and added a `parallel_backend`\n argument to allow users to pass the Parallel Backend instance\n directly.\n\n \"\"\"\n parallel_backend = _maybe_init_parallel_backend(parallel_backend, config)\n\n map_reduce_job: MapReduceJob[NDArray, ValuationResult] = MapReduceJob(\n u.data.indices,\n map_func=_owen_sampling_shapley,\n reduce_func=lambda results: reduce(operator.add, results),\n map_kwargs=dict(\n u=u,\n method=OwenAlgorithm(method),\n n_samples=n_samples,\n max_q=max_q,\n progress=progress,\n ),\n n_jobs=n_jobs,\n parallel_backend=parallel_backend,\n )\n\n return map_reduce_job(seed=seed)\n
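A minimal usage sketch (the model, scorer and parameter values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from pydvl.utils import Dataset, Scorer, Utility
from pydvl.value.shapley.owen import OwenAlgorithm, owen_sampling_shapley

data = Dataset.from_sklearn(load_iris(), train_size=0.8)
u = Utility(LogisticRegression(), data, Scorer("accuracy", default=0.0))

# Antithetic sampling typically reduces variance at the same sampling budget
result = owen_sampling_shapley(
    u,
    n_samples=200,                    # sets sampled per value of q
    max_q=10,                         # subdivisions of q in [0, 1]
    method=OwenAlgorithm.Antithetic,
    n_jobs=2,
    seed=16,
)
print(result.values)
```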
TruncationPolicy()\n
A policy for deciding whether to stop computing marginals in a permutation.
Statistics are kept on the number of calls and truncations as n_calls and n_truncations respectively.
n_calls
n_truncations
Number of calls to the policy.
Number of truncations made by the policy.
Because the policy objects are copied to the workers, the statistics are not accessible from the coordinating process. We need to add methods for this.
src/pydvl/value/shapley/truncated.py
def __init__(self) -> None:\n self.n_calls: int = 0\n self.n_truncations: int = 0\n
reset(u: Optional[Utility] = None)\n
Reset the policy to a state ready for a new permutation.
@abc.abstractmethod\ndef reset(self, u: Optional[Utility] = None):\n \"\"\"Reset the policy to a state ready for a new permutation.\"\"\"\n ...\n
__call__(idx: int, score: float) -> bool\n
Check whether the computation should be interrupted.
Position in the permutation currently being computed.
score
Last utility computed.
True if the computation should be interrupted.
def __call__(self, idx: int, score: float) -> bool:\n \"\"\"Check whether the computation should be interrupted.\n\n Args:\n idx: Position in the permutation currently being computed.\n score: Last utility computed.\n\n Returns:\n `True` if the computation should be interrupted.\n \"\"\"\n ret = self._check(idx, score)\n self.n_calls += 1\n self.n_truncations += 1 if ret else 0\n return ret\n
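As a sketch of how the interface is meant to be extended, here is a hypothetical policy that truncates after a fixed absolute number of marginals per permutation. It assumes that subclasses implement reset() and the internal _check() hook called by __call__ above; it is not part of pyDVL:

```python
from typing import Optional

from pydvl.utils import Utility
from pydvl.value.shapley.truncated import TruncationPolicy


class MaxMarginals(TruncationPolicy):
    """Hypothetical policy: stop a permutation after `max_marginals` marginals."""

    def __init__(self, max_marginals: int):
        super().__init__()
        self.max_marginals = max_marginals
        self.count = 0

    def reset(self, u: Optional[Utility] = None):
        # Called at the start of every new permutation
        self.count = 0

    def _check(self, idx: int, score: float) -> bool:
        self.count += 1
        return self.count >= self.max_marginals
```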
NoTruncation()\n
Bases: TruncationPolicy
A policy which never interrupts the computation.
FixedTruncation(u: Utility, fraction: float)\n
Break a permutation after computing a fixed number of marginals.
The experiments in Appendix B of (Ghorbani and Zou, 2019)1 show that when the training set size is large enough, one can simply truncate the iteration over permutations after a fixed number of steps. This happens because beyond a certain number of samples in a training set, the model becomes insensitive to new ones. Alas, this strongly depends on the data distribution and the model and there is no automatic way of estimating this number.
Fraction of marginals in a permutation to compute before stopping (e.g. 0.5 to compute half of the marginals).
def __init__(self, u: Utility, fraction: float):\n super().__init__()\n if fraction <= 0 or fraction > 1:\n raise ValueError(\"fraction must be in (0, 1]\")\n self.max_marginals = len(u.data) * fraction\n self.count = 0\n
RelativeTruncation(u: Utility, rtol: float)\n
Break a permutation if the marginal utility is too low.
This is called \"performance tolerance\" in (Ghorbani and Zou, 2019)1.
Relative tolerance. The permutation is broken if the last computed utility is less than total_utility * rtol.
total_utility * rtol
def __init__(self, u: Utility, rtol: float):\n super().__init__()\n self.rtol = rtol\n logger.info(\"Computing total utility for permutation truncation.\")\n self.total_utility = self.reset(u)\n self._u = u\n
BootstrapTruncation(u: Utility, n_samples: int, sigmas: float = 1)\n
Break a permutation if the last computed utility is close to the total utility, measured as a multiple of the standard deviation of the utilities.
Number of bootstrap samples to use to compute the variance of the utilities.
sigmas
Number of standard deviations to use as a threshold.
def __init__(self, u: Utility, n_samples: int, sigmas: float = 1):\n super().__init__()\n self.n_samples = n_samples\n logger.info(\"Computing total utility for permutation truncation.\")\n self.total_utility = u(u.data.indices)\n self.count: int = 0\n self.variance: float = 0\n self.mean: float = 0\n self.sigmas: float = sigmas\n
Supported algorithms for the computation of Shapley values.
Make algorithms register themselves here.
Shapley values
An introduction using the spotify dataset, showcasing grouped datasets and applied to improving model performance and identifying bogus data.
KNN Shapley
A showcase of a fast model-specific valuation method using the iris dataset.
Data utility learning
Learning a utility function from a few evaluations and using it to estimate the value of the remaining data.
Least Core
An alternative solution concept from game theory, illustrated on a classification problem.
Data OOB
A different and fast strategy for data valuation, using the out-of-bag error of a bagging model.
Faster Banzhaf values
Using Banzhaf values to estimate the value of data points in MNIST, and evaluating convergence speed of MSR.
For CNNs
Detecting corrupted labels with influence functions on the ImageNet dataset.
For language models
Using the IMDB dataset for sentiment analysis and a fine-tuned BERT model.
For mislabeled data
Detecting corrupted labels using a synthetic dataset.
For outlier detection
Using the wine dataset
This notebook introduces the Data-OOB method, an implementation based on the publication by Kwon and Zou, "Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value" (ICML 2023), using pyDVL.
The main objective of the paper is to overcome the computational bottleneck of Shapley-based data valuation methods, which require fitting a significant number of models to accurately estimate marginal contributions. The algorithm computes data values from out-of-bag estimates using a bagging model.
The value can be interpreted as a partition of the OOB estimate, which was originally introduced to estimate the prediction error. This OOB estimate is given as:
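A sketch of the per-point quantity, in the notation of Kwon and Zou (2023) (reconstructed here; see the paper for the exact definition and for the aggregate OOB error estimate):

$$\psi_i = \frac{\sum_{b=1}^{B} \mathbb{1}(w_{bi} = 0)\, T\big(y_i, \hat{f}_b(x_i)\big)}{\sum_{b=1}^{B} \mathbb{1}(w_{bi} = 0)},$$

where \(B\) is the number of weak learners in the bagging model, \(w_{bi}\) counts how many times point \(i\) appears in the \(b\)-th bootstrap sample, \(\hat{f}_b\) is the \(b\)-th fitted learner, and \(T\) is a per-sample score (e.g. the indicator of correct classification).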
%autoreload\nfrom pydvl.utils import Dataset, Scorer, Seed, Utility, ensure_seed_sequence\nfrom pydvl.value import ValuationResult, compute_data_oob\n
We will work with the adult classification dataset from the UCI repository. The objective is to predict whether a person earns more than 50k a year based on a set of features such as age, education, occupation, etc.
With a helper function we download the data and obtain the following pandas dataframe, where the categorical features have been removed:
Found cached file: adult_data.pkl.
data_adult.head()\n
data = Dataset.from_arrays(\n X=data_adult.drop(columns=[\"income\"]).values,\n y=data_adult.loc[:, \"income\"].cat.codes.values,\n random_state=random_state,\n)\n\nmodel = KNeighborsClassifier(n_neighbors=5)\n\nutility = Utility(model, data, Scorer(\"accuracy\", default=0.0))\n
n_estimators = [100, 500]\noob_values = [\n compute_data_oob(utility, n_est=n_est, max_samples=0.95, seed=random_state)\n for n_est in n_estimators\n]\n
The two results are stored in an array of ValuationResult objects. Here's their distribution. The left-hand side depicts value as it increases with rank and a 99% t-confidence interval. The right-hand side shows the histogram of values.
Observe how adding estimators reduces the variance of the values, but doesn't change their distribution much.
We begin by importing the main libraries and setting some defaults.
The main idea of Data-OOB is to take an existing classifier or regression model and compute a per-sample out-of-bag performance estimate via bagging.
For this example, we use a simple KNN classifier with \\(k=5\\) neighbours on the data and compute the data-oob values with two choices for the number of estimators in the bagging. For that we construct a Utility object using the Scorer class to specify the metric to use for the evaluation. Note how we pass a random seed to Dataset.from_arrays in order to ensure that we always get the same split when running this notebook multiple times. This will be particularly important when running the standard point removal experiments later.
We then use the compute_data_oob function to compute the data-oob values.
The standard procedure for the evaluation of data valuation schemes is the point removal experiment. The objective is to measure the evolution of performance when the best/worst points are removed from the training set. This can be done with the function compute_removal_score, which takes precomputed values and computes the performance of the model as points are removed.
In order to test the true performance of Data-OOB, we repeat the whole task of computing the values and the point removal experiment multiple times, including the splitting of the dataset into training and valuation sets. It is important to pass the random state appropriately for full reproducibility. A sketch of such a loop is shown below.
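The sketch assumes that compute_removal_score lives in pydvl.reporting.scores and takes the utility, the precomputed values and a list of removal percentages; the exact import path and signature may differ in your version:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

from pydvl.reporting.scores import compute_removal_score  # assumed import path
from pydvl.utils import Dataset, Scorer, Utility
from pydvl.value import compute_data_oob

removal_percentages = np.arange(0.0, 0.55, 0.05)
all_scores = []
for repetition in range(5):
    # A fresh train/valuation split per repetition; the seed makes each run reproducible
    data = Dataset.from_arrays(
        X=data_adult.drop(columns=["income"]).values,
        y=data_adult.loc[:, "income"].cat.codes.values,
        random_state=repetition,
    )
    utility = Utility(
        KNeighborsClassifier(n_neighbors=5), data, Scorer("accuracy", default=0.0)
    )
    values = compute_data_oob(utility, n_est=100, max_samples=0.95, seed=repetition)
    # Model performance as the highest-valued points are removed first
    scores = compute_removal_score(
        utility, values, removal_percentages, remove_best=True  # remove_best assumed
    )
    all_scores.append(scores)
```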
from pydvl.influence.torch import CgInfluence\nfrom pydvl.reporting.plots import plot_influence_distribution_by_label\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, f1_score\n
label_names = {90: \"tables\", 100: \"boats\"}\ntrain_ds, val_ds, test_ds = load_preprocess_imagenet(\n train_size=0.8,\n test_size=0.1,\n keep_labels=label_names,\n downsampling_ratio=1,\n)\n\nprint(\"Normalised image dtype:\", train_ds[\"normalized_images\"][0].dtype)\nprint(\"Label type:\", type(train_ds[\"labels\"][0]))\nprint(\"Image type:\", type(train_ds[\"images\"][0]))\ntrain_ds.info()\n
Let's take a closer look at a few image samples
Let's now further pre-process the data and prepare for model training. The helper function process_io converts the normalized images into tensors and the labels to the indices 0 and 1 to train the classifier.
process_io
def process_io(df: pd.DataFrame, labels: dict) -> Tuple[torch.Tensor, torch.Tensor]:\n x = df[\"normalized_images\"]\n y = df[\"labels\"]\n ds_label_to_model_label = {\n ds_label: idx for idx, ds_label in enumerate(labels.values())\n }\n x_nn = torch.stack(x.tolist()).to(DEVICE)\n y_nn = torch.tensor([ds_label_to_model_label[yi] for yi in y], device=DEVICE)\n return x_nn, y_nn\n\n\ntrain_x, train_y = process_io(train_ds, label_names)\nval_x, val_y = process_io(val_ds, label_names)\ntest_x, test_y = process_io(test_ds, label_names)\n\nbatch_size = 768\ntrain_data = DataLoader(TensorDataset(train_x, train_y), batch_size=batch_size)\ntest_data = DataLoader(TensorDataset(test_x, test_y), batch_size=batch_size)\nval_data = DataLoader(TensorDataset(val_x, val_y), batch_size=batch_size)\n
device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel_ft = new_resnet_model(output_size=len(label_names))\nmgr = TrainingManager(\n \"model_ft\",\n model_ft,\n nn.CrossEntropyLoss(),\n train_data,\n val_data,\n MODEL_PATH,\n device=device,\n)\n# Set use_cache=False to retrain the model\ntrain_loss, val_loss = mgr.train(n_epochs=50, use_cache=True)\n
plot_losses(Losses(train_loss, val_loss))\n
The confusion matrix and \(F_1\) score look good, especially considering the low resolution of the images and their complexity (they contain different objects).
pred_y_test = np.argmax(model_ft(test_x).cpu().detach(), axis=1).cpu()\nmodel_score = f1_score(test_y.cpu(), pred_y_test, average=\"weighted\")\n\ncm = confusion_matrix(test_y.cpu(), pred_y_test)\ndisp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=label_names.values())\nprint(\"f1_score of model:\", model_score)\ndisp.plot();\n
f1_score of model: 0.9062805208898536
influence_model = CgInfluence(mgr.model, mgr.loss, hessian_reg, progress=True)\ninfluence_model = influence_model.fit(train_data)\n
On the instantiated influence object, we can call the method influences, which takes some test data and some input dataset with labels (which typically is the training data, or a subset of it). The influence type will be up. The other option, perturbation, is beyond the scope of this notebook, but more info can be found in the notebook using the Wine dataset or in the documentation for pyDVL.
up
perturbation
influences = influence_model.influences(test_x, test_y, train_x, train_y, mode=\"up\")\n
The output is a matrix of size test_set_length x training_set_length. Each row represents a test data point, and each column a training data point, so that entry \((i,j)\) represents the influence of training point \(j\) on test point \(i\).
test_set_length
training_set_length
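For instance, to inspect a single test image one can sort the corresponding row of the matrix (illustrative):

```python
import torch

test_idx = 0
row = influences[test_idx]  # influence of every training point on this test point
most_influential = torch.argsort(row, descending=True)[:5]
least_influential = torch.argsort(row)[:5]
print("Most influential training indices:", most_influential.tolist())
print("Least influential training indices:", least_influential.tolist())
```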
Now we plot the histogram of the influence that all training images have on the image selected above, separated by their label.
Rather unsurprisingly, the training points with the highest influence have the same label. Now we can take the training images with the same label and show those with highest and lowest scores.
Looking at the images, it is difficult to explain why those on the right are more influential than those on the left. At first sight, the choice seems to be random (or at the very least noisy). Let's dig in a bit more by looking at average influences:
avg_influences = np.mean(influences.cpu().numpy(), axis=0)\n
Once again, let's plot the histogram of influence values by label.
Next, for each class (you can change value by changing label key) we can have a look at the top and bottom images by average influence, i.e. we can show the images that have the highest and lowest average influence over all test images.
Once again, it is not easy to explain why the images on the left have a lower influence than the ones on the right.
corrupted_model = new_resnet_model(output_size=len(label_names))\ncorrupted_dataset, corrupted_indices = corrupt_imagenet(\n dataset=train_ds,\n fraction_to_corrupt=0.1,\n avg_influences=avg_influences,\n)\n\ncorrupted_train_x, corrupted_train_y = process_io(corrupted_dataset, label_names)\ncorrupted_data = DataLoader(\n TensorDataset(corrupted_train_x, corrupted_train_y), batch_size=batch_size\n)\n\nmgr = TrainingManager(\n \"corrupted_model\",\n corrupted_model,\n nn.CrossEntropyLoss(),\n corrupted_data,\n val_data,\n MODEL_PATH,\n device=device,\n)\ntraining_loss, validation_loss = mgr.train(n_epochs=50, use_cache=True)\n
plot_losses(Losses(training_loss, validation_loss))\n
F1 score of model with corrupted data: 0.8541666666666666
Interestingly, despite being trained on a corrupted dataset, the model has a fairly high \\(F_1\\) score. Let's now calculate the influence of the corrupted training data points over the test data points.
influence_model = CgInfluence(mgr.model, mgr.loss, hessian_reg, progress=True)\ninfluence_model = influence_model.fit(corrupted_data)\ninfluences = influence_model.influences(\n test_x, test_y, corrupted_train_x, corrupted_train_y\n)\n
As before, since we are interested in the average influence on the test dataset, we take the average of influences across rows, and then plot the highest and lowest influences for a chosen label
avg_corrupted_influences = np.mean(influences.cpu().numpy(), axis=0)\n
As expected, the samples with lowest (negative) influence for the label \"boats\" are those that have been corrupted: all the images on the left are tables! We can compare the average influence of corrupted data with non-corrupted ones
And indeed corrupted data have a more negative influence on average than clean ones!
Despite this being a useful property, influence functions are known to be unreliable for tasks of data valuation, especially in deep learning where the fundamental assumption of the theory (convexity) is grossly violated. A lot of factors (e.g. the size of the network, the training process or the Hessian regularization term) can interfere with the computation, to the point that often the results that we obtain cannot be trusted. This has been extensively studied in the recent paper:
Basu, S., P. Pope, and S. Feizi. Influence Functions in Deep Learning Are Fragile. International Conference on Learning Representations (ICLR). 2021 .
Nevertheless, influence functions offer a relatively quick and mathematically rigorous way to evaluate (at first order) the importance of a training point for a model's prediction.
This notebook explores the use of influence functions for convolutional neural networks. In the first part we will investigate the usefulness, or lack thereof, of influence functions for the interpretation of a classifier's outputs.
For our study we choose a pre-trained ResNet18, fine-tuned on the tiny-imagenet dataset. This dataset was created for a Stanford course on Deep Learning for Computer Vision, and is a subset of the famous ImageNet with 200 classes instead of 1000, and images down-sampled to a lower resolution of 64x64 pixels.
After tuning the last layers of the network, we will use pyDVL to find the most and the least influential training images for the test set. This can sometimes be used to explain inference errors, or to direct efforts during data collection, although we will face inconclusive results with our model and data. This illustrates well-known issues of influence functions for neural networks.
However, in the final part of the notebook we will see that influence functions are an effective tool for finding anomalous or corrupted data points.
We conclude with an appendix with some basic theoretical concepts used.
We pick two classes arbitrarily to work with: 90 and 100, corresponding respectively to dining tables and boats in Venice (you can of course select any other two classes, or more of them, although that would imply longer training times and some modifications in the notebook below). The dataset is loaded with load_preprocess_imagenet(), which returns three pandas DataFrames with training, validation and test sets respectively. Each dataframe has three columns: normalized images, labels and the original images. Note that you can load a subset of the data by decreasing downsampling_ratio.
load_preprocess_imagenet()
DataFrames
We use a ResNet18 from torchvision with final layers modified for binary classification.
torchvision
For training, we use the convenience class TrainingManager which transparently handles persistence after training. It is not part of the main pyDVL package but just a way to reduce clutter in this notebook.
TrainingManager
We train the model for 50 epochs and save the results. Then we plot the train and validation loss curves.
Let's now calculate influences! The central interface for computing influences is InfluenceFunctionModel. Since ResNet18 is quite big, we pick the conjugate gradient implementation CgInfluence, which takes a trained torch.nn.Module, the training loss and the training data. Another important parameter is the Hessian regularization term, which should be chosen as small as possible for the computation to converge (further details on why this is important can be found in the Appendix).
With the computed influences we can study single images or all of them together:
Let's take any image in the test set:
By averaging across the rows of the influence matrix, we obtain the average influence of each training sample on the whole test set:
After facing the shortcomings of influence functions for explaining decisions, we move to an application with clear-cut results. Influences can be successfully used to detect corrupted or mislabeled samples, making them an effective tool to \"debug\" training data.
We begin by training a new model (with the same architecture as before) on a dataset with some corrupted labels. The helper corrupt_imagenet will take the training dataset and corrupt a certain fraction of the labels by flipping them. We use the same number of epochs and optimizer as before.
corrupt_imagenet
In this appendix we will briefly go through the basic ideas of influence functions adapted for neural networks as introduced in Koh, Pang Wei, and Percy Liang. \"Understanding Black-box Predictions via Influence Functions\" International conference on machine learning. PMLR, 2017.
Note however that this paper departs from the standard and established theory and notation for influence functions. For a rigorous introduction to the topic we recommend classical texts like Hampel, Frank R., Elvezio M. Ronchetti, Peter J. Rousseeuw, and Werner A. Stahel. Robust Statistics: The Approach Based on Influence Functions. 1st edition. Wiley Series in Probability and Statistics. New York: Wiley-Interscience, 2005. https://doi.org/10.1002/9781118186435.
Let's start by considering some input space \\(\\mathcal{X}\\) to a model (e.g. images) and an output space \\(\\mathcal{Y}\\) (e.g. labels). Let's take \\(z_i = (x_i, y_i)\\) to be the \\(i\\) -th training point, and \\(\\theta\\) to be the (potentially highly) multi-dimensional parameters of the neural network (i.e. \\(\\theta\\) is a big array with very many parameters). We will indicate with \\(L(z, \\theta)\\) the loss of the model for point \\(z\\) and parameters \\(\\theta\\) . When training the model we minimize the loss over all points, i.e. the optimal parameters are calculated through gradient descent on the following formula:
where \\(n\\) is the total number of training data points.
For notational convenience, let's define
i.e. \\(\\hat{\\theta}_{-z}\\) are the model parameters that minimize the total loss when \\(z\\) is not in the training dataset.
In order to check the impact of each training point on the model, we would need to calculate \\(\\hat{\\theta}_{-z}\\) for each \\(z\\) in the training dataset, thus re-training the model at least ~ \\(n\\) times (more if model training is noisy). This is computationally very expensive, especially for big neural networks. To circumvent this problem, we can just calculate a first order approximation of \\(\\hat{\\theta}\\) . This can be done through single backpropagation and without re-training the full model.
Let's define
which is the optimal \\(\\hat{\\theta}\\) if we were to up-weigh \\(z\\) by an amount \\(\\epsilon\\) .
From a classical result (a simple derivation is available in Appendix A of Koh and Liang's paper), we know that:
where \\(H_{\\hat{\\theta}} = \\frac{1}{n} \\sum_{i=1}^n \\nabla_\\theta^2 L(z_i, \\hat{\\theta})\\) is the Hessian of \\(L\\) . Importantly, notice that this expression is only valid when \\(\\hat{\\theta}\\) is a minimum of \\(L\\) , or otherwise \\(H_{\\hat{\\theta}}\\) cannot be inverted!
We will define the influence of training point \\(z\\) on test point \\(z_{\\text{test}}\\) as \\(\\mathcal{I}(z, z_{\\text{test}}) = L(z_{\\text{test}}, \\hat{\\theta}_{-z}) - L(z_{\\text{test}}, \\hat{\\theta})\\) (notice that it is higher for points \\(z\\) which positively impact the model score, since if they are excluded, the loss is higher). In practice, however, we will always use the infinitesimal approximation \\(\\mathcal{I}_{up}(z, z_{\\text{test}})\\) , defined as
Using the chain rule and the results calculated above, we thus have:
In order to calculate this expression we need the gradient and the Hessian of the loss wrt. the model parameters \\(\\hat{\\theta}\\) . This can be easily done through a single backpropagation pass.
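For reference, in the standard notation of Koh and Liang (2017) the resulting expression is

$$\mathcal{I}_{up}(z, z_{\text{test}}) = - \nabla_\theta L(z_{\text{test}}, \hat{\theta})^{\top} \, H_{\hat{\theta}}^{-1} \, \nabla_\theta L(z, \hat{\theta}),$$

so a single backward pass per point provides the gradients, and only products with \(H_{\hat{\theta}}^{-1}\) are needed.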
One very important assumption that we make when approximating influence is that \\(\\hat{\\theta}\\) is at least a local minimum of the loss. However, we clearly cannot guarantee this except for convex models, and despite good apparent convergence, \\(\\hat{\\theta}\\) might be located in a region with flat curvature or close to a saddle point. In particular, the Hessian might have vanishing eigenvalues making its direct inversion impossible.
To circumvent this problem, instead of inverting the true Hessian \\(H_{\\hat{\\theta}}\\) , one can invert a small perturbation thereof: \\(H_{\\hat{\\theta}} + \\lambda \\mathbb{I}\\) , with \\(\\mathbb{I}\\) being the identity matrix. This standard trick ensures that the eigenvalues of \\(H_{\\hat{\\theta}}\\) are bounded away from zero and therefore the matrix is invertible. In order for this regularization not to corrupt the outcome too much, the parameter \\(\\lambda\\) should be as small as possible while still allowing a reliable inversion of \\(H_{\\hat{\\theta}} + \\lambda \\mathbb{I}\\) .
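Schematically, and only as an illustration on a small explicit matrix (real implementations use matrix-free solvers or factored approximations rather than dense Hessians):

```python
import torch

d = 10
A = torch.randn(d, d)
H = A @ A.T / d                     # a symmetric, possibly ill-conditioned "Hessian"
grad_test = torch.randn(d)          # gradient of the test loss w.r.t. the parameters
lam = 1e-3                          # Hessian regularization parameter

# Solve (H + lambda * I) x = grad_test instead of inverting H directly
x = torch.linalg.solve(H + lam * torch.eye(d), grad_test)
```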
This notebooks showcases the use of influence functions for large language models. In particular, it focuses on sentiment analysis using the IMDB dataset and a fine-tuned BERT model.
Not all the methods for influence function calculation can scale to large models and datasets. In this notebook we will use the Kronecker-Factored Approximate Curvature method, which is the only one that can scale to current state-of-the-art language models.
The notebook is structured as follows:
Finally, the Appendix shows how to select the Hessian regularization parameter to obtain the best influence function approximation.
Let's start by importing the required libraries. If not already installed, you can install them with pip install -r requirements-notebooks.txt .
pip install -r requirements-notebooks.txt
import os\nfrom copy import deepcopy\nfrom typing import Sequence\n\nimport matplotlib.pyplot as plt\nimport torch\nimport torch.nn.functional as F\nfrom datasets import load_dataset\nfrom IPython.display import HTML, display\nfrom sklearn.metrics import f1_score\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\n\nfrom pydvl.influence.torch import EkfacInfluence\nfrom support.torch import ImdbDataset, ModelLogitsWrapper\n
Sentiment analysis is the task of classifying a sentence as having a positive or negative sentiment. For example, the sentence \"I love this movie\" has a positive sentiment, while \"I hate this movie\" has a negative sentiment. In this notebook we will use the IMDB dataset, which contains 50,000 movie reviews with corresponding labels. The dataset is split into 25,000 reviews for training and 25,000 reviews for testing. The dataset is balanced, meaning that there are the same number of positive and negative reviews in the training and test set.
imdb = load_dataset(\"imdb\")\n
Let's print an example of review and its label
sample_review = imdb[\"train\"].select([24])\n\nprint(f\"Here is a sample review with label {sample_review['label'][0]}: \\n\")\n\ndisplay(HTML(sample_review[\"text\"][0].split(\"<br/>\")[0]))\ndisplay(HTML(sample_review[\"text\"][0].split(\"<br/>\")[-1]))\n
Here is a sample review with label 0:
The review is negative, and so label 0 is associated with negative sentiment.
The model is a BERT model fine-tuned on the IMDB dataset. BERT is a large language model that has been pre-trained on a large corpus of text. The model was fine-tuned on the IMDB dataset by AssemblyAI and is available on the HuggingFace model hub. We also load its tokenizer, which is used to convert sentences into numeric tokens.
tokenizer = AutoTokenizer.from_pretrained(\"assemblyai/distilbert-base-uncased-sst2\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\n \"assemblyai/distilbert-base-uncased-sst2\"\n)\n
Even if the model is trained on movie reviews, it can be used to classify any sentence as positive or negative. Let's try it on a simple sentence created by us.
example_phrase = (\n \"Pydvl is the best data valuation library, and it is fully open-source!\"\n)\n\ntokenized_example = tokenizer(\n [example_phrase],\n return_tensors=\"pt\",\n truncation=True,\n)\n\nmodel_output = model(\n input_ids=tokenized_example.input_ids,\n)\n
The model output is a SequenceClassifierOutput object, which contains the logits and other information.
SequenceClassifierOutput
Model Output:\n SequenceClassifierOutput(loss=None, logits=tensor([[-2.6237, 2.8350]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)\n
For calculating probabilities and for the influence functions we only need the logits. Then the softmax function converts the logits into probabilities.
model_predictions = F.softmax(model_output.logits, dim=1)\n
The model is quite confident that the sentence has a positive sentiment, which is correct.
Positive probability: 99.6%\nNegative probability: 0.4%\n
Let's examine the model's f1 score on a small subset of the test set.
sample_test_set = imdb[\"test\"].shuffle(seed=seed).select(range(50 if not is_CI else 5))\nsample_test_set = sample_test_set.map(\n lambda example: tokenizer(example[\"text\"], truncation=True, padding=\"max_length\"),\n batched=True,\n)\nsample_test_set.set_format(\"torch\", columns=[\"input_ids\", \"attention_mask\", \"label\"])\nmodel.eval()\nwith torch.no_grad():\n logits = model(\n input_ids=sample_test_set[\"input_ids\"],\n attention_mask=sample_test_set[\"attention_mask\"],\n ).logits\n predictions = torch.argmax(logits, dim=1)\n
f1_score_value = f1_score(sample_test_set[\"label\"], predictions)\nprint(f\"F1 Score: {round(f1_score_value, 3)}\")\n
F1 Score: 0.955
In this section we will define two helper functions and classes that will be used in the rest of the notebook.
def print_sentiment_preds(\n model: ModelLogitsWrapper, model_input: torch.Tensor, true_label: int\n):\n \"\"\"\n Prints the sentiment predictions in a human-readable format given a model and an\n input. It also prints the true label.\n \"\"\"\n model_predictions = F.softmax(model(model_input.unsqueeze(0)), dim=1)\n print(\n \"Positive probability: \"\n + str(round(model_predictions[0][1].item(), 3) * 100)\n + \"%\"\n )\n print(\n \"Negative probability: \"\n + str(round(model_predictions[0][0].item(), 3) * 100)\n + \"%\"\n )\n\n true_label = \"Positive\" if true_label == 1 else \"Negative\"\n print(f\"True label: {true_label} \\n\")\n\n\ndef strip_layer_names(param_names: Sequence[str]):\n \"\"\"\n Helper function that strips the parameter names of the model and the transformer,\n so that they can be printed and compared more easily.\n \"\"\"\n stripped_param_names = []\n for name in param_names:\n name = name.replace(\"model.\", \"\")\n if name.startswith(\"distilbert.transformer.\"):\n name = name.replace(\"distilbert.transformer.\", \"\")\n stripped_param_names.append(name)\n return stripped_param_names\n
Importantly, we will need to set all the linear layers to require gradients, so that we can compute the influence function with respect to them. Keep in mind that the current implementation of EKFAC only supports linear layers, so if any other type of layer in the model requires gradients, the initialisation of the influence function class will fail.
for param in model.named_parameters():\n param[1].requires_grad = False\n\nfor m_name, module in model.named_modules():\n if len(list(module.children())) == 0 and len(list(module.parameters())) > 0:\n if isinstance(module, torch.nn.Linear):\n for p_name, param in module.named_parameters():\n if (\n (\"ffn\" in m_name and not is_CI)\n or \"pre_classifier\" in m_name\n or \"classifier\" in m_name\n ):\n param.requires_grad = True\n
Albeit restrictive, linear layers constitute a large fraction of the parameters of most large language models, and so our analysis still holds a lot of information about the full neural network.
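The counts printed below can be reproduced with a couple of lines like the following (an assumption about how they were computed; the ratio of linear layers is obtained analogously by summing only over torch.nn.Linear modules):

```python
total_params = sum(p.numel() for p in model.parameters())
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Total parameters: {total_params / 1e6:.2f} millions")
print(f"Parameters requiring gradients: {trainable_params / 1e6:.2f} millions")
```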
Total parameters: 66.96 millions\nParameters requiring gradients: 28.93 millions\nRatio of Linear over other layer types: 43.20%\n
We are now ready to compute the influence function for a few testing and training examples. Let's start by selecting a subset of the full training and testing dataset and wrapping them in a DataLoader object, so that we can easily do batching.
NUM_TRAIN_EXAMPLES = 100 if not is_CI else 7\nNUM_TEST_EXAMPLES = 100 if not is_CI else 5\n\nsmall_train_dataset = (\n imdb[\"train\"]\n .shuffle(seed=seed)\n .select([i for i in list(range(NUM_TRAIN_EXAMPLES))])\n)\nsmall_test_dataset = (\n imdb[\"test\"].shuffle(seed=seed).select([i for i in list(range(NUM_TEST_EXAMPLES))])\n)\n\ntrain_dataset = ImdbDataset(small_train_dataset, tokenizer=tokenizer)\ntest_dataset = ImdbDataset(small_test_dataset, tokenizer=tokenizer)\n\ntrain_dataloader = torch.utils.data.DataLoader(\n train_dataset, batch_size=7, shuffle=True\n)\ntest_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=5, shuffle=True)\n
For influence computation we need to take the model in evaluation mode, so that no dropout or batch normalization is applied. Then, we can fit the Ekfac representation.
wrapped_model = ModelLogitsWrapper(model)\nwrapped_model.eval()\n\nekfac_influence_model = EkfacInfluence(\n wrapped_model,\n progress=True,\n)\nekfac_influence_model = ekfac_influence_model.fit(train_dataloader)\n
K-FAC blocks - batch progress: 0%| | 0/15 [00:00<?, ?it/s]
And the approximate Hessian is thus obtained. Considering that the model has almost 30 million parameters requiring gradients, this was very fast! Of course, this Hessian is computed using only a very small fraction (~0.4%) of the training data, and for a better approximation we should use a larger subset.
Before continuing, we need to set the Hessian regularization parameter to an appropriate value. A way to decide which is better can be found in the Appendix . Here, we will just set it to 1e-5.
ekfac_influence_model.hessian_regularization = 1e-5\n
We calculate the influence of the first batch of training data over the first batch of test data. This is because influence functions are very expensive to compute, and so to keep the runtime of this notebook within a few minutes we need to restrict ourselves to a small number of examples.
test_input, test_labels, test_text = next(iter(test_dataloader))\ntrain_input, train_labels, train_text = next(iter(train_dataloader))\n
And let's finally compute the influence function values
ekfac_train_influences = ekfac_influence_model.influences(\n test_input,\n test_labels,\n train_input,\n train_labels,\n)\n
/home/jakob/Documents/pyDVL/venv/lib/python3.10/site-packages/transformers/models/distilbert/modeling_distilbert.py:222: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::masked_fill.Tensor. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:82.)\n scores = scores.masked_fill(\n
Now that we have calculated the influences for a few examples, let's analyse some of the extreme values.
Let's plot the influence values as a heatmap for easily spotting patterns.
Most of the test and training examples have similar influence, close to zero. However, one test and one training sample stand out. In particular, their cross influence is very large and negative. Let's examine them more closely.
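To locate that pair programmatically one can, for example, take the argmin over the flattened influence matrix (illustrative):

```python
import torch

flat_idx = int(torch.argmin(ekfac_train_influences))
test_idx, train_idx = divmod(flat_idx, ekfac_train_influences.shape[1])
print(f"Most negative influence: test idx {test_idx}, train idx {train_idx}")
```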
Training example with idx 3: \n\nPositive probability: 18.099999999999998%\nNegative probability: 81.89999999999999%\nTrue label: Positive \n\nSentence:\n
We can see that, despite being positive, this review is quite hard to classify. Its language is overall negative, mostly associated with the facts narrated rather than the movie itself. Notice how several terms are related to war and invasion.
Test example with idx 4: \n\nPositive probability: 39.6%\nNegative probability: 60.4%\nTrue label: Negative \n\nSentence:\n
This review is also quite hard to classify. This time it has a negative sentiment towards the movie, but it also contains several words with positive connotation. The parallel with the previous review is quite interesting since both talk about an invasion.
As it is often the case when analysing influence functions, it is hard to understand why these examples have such a large influence. We have seen some interesting patterns, mostly related to similarities in the language and words used, but it is hard to say with certainty if these are the reasons for such a large influence.
A recent paper has explored this topic in great detail, even for much larger language models than BERT (up to ~50 billion parameters!). Among the most interesting findings is that smaller models tend to rely heavily on word-to-word correspondences, while larger models are more capable of extracting higher-level concepts, drawing connections between words across multiple phrases.
For more info, you can visit our blog post on influence functions for large language models.
In this section we want to get an idea of how influence functions change when training examples are corrupted. In the next cell we will flip the label of all the training examples and compute the influences on the same test batch as before.
modified_train_labels = deepcopy(train_labels)\nmodified_train_labels = 1 - train_labels\n\ncorrupted_ekfac_train_influences = ekfac_influence_model.influences(\n test_input,\n test_labels,\n train_input,\n modified_train_labels,\n)\n
Overall, when the labels are corrupted, the influences tend to become negative, as expected. Nevertheless, there are cases where values go from slightly negative to positive, mostly isolated to the second and last test samples. Single values can be quite noisy, so it is difficult to generalise this result, but it would be interesting to see how common these cases are in the full test dataset.
Since EKFAC is based on a block-diagonal approximation of the Fisher information matrix, we can compute the influence functions separately for each layer of the neural network. In this section we show how to do that and briefly analyse the results.
influences_by_layer = ekfac_influence_model.influences_by_layer(\n test_input,\n test_labels,\n train_input,\n train_labels,\n)\n
The method influences_by_layer returns a dictionary containing the influence function values for each layer of the neural network as a tensor. To recover the full influence values as returned by the influences method (as done in the previous section), we need to sum each layer's values.
influences_by_layer
influences = torch.zeros_like(ekfac_train_influences)\nfor layer_id, value in influences_by_layer.items():\n influences += value.detach()\n
And if we plot the result as a heatmap, we can see that the results are the same as in Negative influence training examples.
Let's analyse how the influence values change across different layers for given test and train examples.
The plot above shows the influences for test idx 0 and all train indices apart from idx=3 (excluded for clarity since it has a very large absolute value). We can see that the scores tend to keep their sign across layers, but in almost all cases tend to decrease when approaching the output layer. This is not always the case, and in fact other test examples show different patterns. Understanding why this happens is an interesting research direction.
EKFAC is a powerful approximate method for computing the influence function of models that use a cross-entropy loss. In this notebook we applied it to sentiment analysis with BERT on the IMDB dataset. However, this method can be applied to much larger models and problems, e.g. to analyse the influence of entire sentences generated by GPT, Llama or Claude. For more info, you can visit our paper pill on influence functions for large language models.
The Hessian regularization value has a large impact on the quality of the influence function approximation. In general, the value should be chosen as small as possible so that the results are finite. In practice, even when finite, the influence values can be too large and lead to numerical instabilities. In this section we show how to efficiently analyse the impact of the Hessian regularization value with the EKFAC method.
Let's start with a few additional imports.
import pandas as pd\nfrom scipy.stats import pearsonr, spearmanr\n
The method explore_hessian_regularization will calculate the influence values of the training examples with each other for a range of Hessian regularization values. The method optimises gradient calculation and Hessian inversion to minimise the computation time.
explore_hessian_regularization
influences_by_reg_value = ekfac_influence_model.explore_hessian_regularization(\n train_input,\n train_labels,\n regularization_values=[1e-15, 1e-9, 1e-5, 1],\n)\n
The resulting object, influences_by_reg_value is a dictionary that associates to each regularization value the influences for each layer of the neural network. This is a lot of data, so we will first organise it in a pandas dataframe and take the average across training examples.
influences_by_reg_value
cols = [\"reg_value\", \"layer_id\", \"mean_infl\"]\ninfl_df = pd.DataFrame(influences_by_reg_value, columns=cols)\nfor reg_value in influences_by_reg_value:\n for layer_id, layer_influences in influences_by_reg_value[reg_value].items():\n mean_infl = torch.mean(layer_influences, dim=0).detach().numpy()\n infl_df = pd.concat(\n [infl_df, pd.DataFrame([[reg_value, layer_id, mean_infl]], columns=cols)]\n )\n
/tmp/ipykernel_8503/1081261490.py:6: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.\n infl_df = pd.concat(\n
With this dataframe, we can take contiguous values of regularization and, for each layer, calculate the Pearson and Spearman correlation coefficients. This will give us an idea of how the influence values change with the regularization value.
result_corr = {}\nfor layer_id, group_df in infl_df.groupby(\"layer_id\"):\n result_corr[layer_id + \"_pearson\"] = {}\n result_corr[layer_id + \"_spearman\"] = {}\n for idx, mean_infl in enumerate(group_df[\"mean_infl\"]):\n if idx == 0:\n continue\n reg_value_diff = f\"Reg: {group_df['reg_value'].iloc[idx-1]} -> {group_df['reg_value'].iloc[idx]}\"\n pearson = pearsonr(mean_infl, group_df[\"mean_infl\"].iloc[idx - 1]).statistic\n spearman = spearmanr(mean_infl, group_df[\"mean_infl\"].iloc[idx - 1]).statistic\n result_corr[layer_id + \"_pearson\"].update({f\"{reg_value_diff}\": pearson})\n result_corr[layer_id + \"_spearman\"].update({f\"{reg_value_diff}\": spearman})\nresult_df = pd.DataFrame(result_corr).T\n
Let's plot the correlations heatmap. The y-axis reports Spearman and Pearson correlations for each layer, while the x-axis reports pairs of regularization values. High correlations mean that influences are stable across regularization values.
In our case, we can see that for regularization = 1 the Spearman correlation degrades significantly. However, for a wide range of regularization values smaller than 1 the sample rankings are stable. This is a good indicator that the model is not too sensitive to the regularization value. We therefore chose the value 1e-5 for our analysis.
%autoreload\n%matplotlib inline\n\nimport os\nimport random\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nimport matplotlib.pyplot as plt\nfrom pydvl.influence.torch import DirectInfluence, CgInfluence\nfrom support.shapley import (\n synthetic_classification_dataset,\n decision_boundary_fixed_variance_2d,\n)\nfrom support.common import (\n plot_gaussian_blobs,\n plot_losses,\n plot_influences,\n)\nfrom support.torch import (\n fit_torch_model,\n TorchLogisticRegression,\n)\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\nfrom torch.optim import AdamW, lr_scheduler\nfrom torch.utils.data import DataLoader, TensorDataset\n
The following code snippet generates the aforementioned dataset.
train_data, val_data, test_data = synthetic_classification_dataset(\n means, sigma, num_samples, train_size=0.7, test_size=0.2\n)\n
Given the simplicity of the dataset, we can calculate the optimal decision boundary exactly (the one that maximizes our accuracy). The following code maps a continuous line of z values to a 2-dimensional vector in feature space (more details are in the appendix to this notebook).
decision_boundary_fn = decision_boundary_fixed_variance_2d(means[0], means[1])\ndecision_boundary = decision_boundary_fn(np.linspace(-1.5, 1.5, 100))\n
plot_gaussian_blobs(\n train_data,\n test_data,\n xlabel=\"$x_0$\",\n ylabel=\"$x_1$\",\n legend_title=\"$y - labels$\",\n line=decision_boundary,\n s=10,\n suptitle=\"Plot of train-test data\",\n)\n
Note that there are samples which cross the optimal decision boundary and will be wrongly labelled. The optimal decision boundary cannot discriminate these, as the mislabelling is a consequence of the random noise.
model = TorchLogisticRegression(num_features)\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel.to(device)\n\nnum_epochs = 50\nlr = 0.05\nweight_decay = 0.05\nbatch_size = 256\n\ntrain_data_loader = DataLoader(\n TensorDataset(\n torch.as_tensor(train_data[0]),\n torch.as_tensor(train_data[1], dtype=torch.float64).unsqueeze(-1),\n ),\n batch_size=batch_size,\n shuffle=True,\n)\n\nval_data_loader = DataLoader(\n TensorDataset(\n torch.as_tensor(val_data[0]),\n torch.as_tensor(val_data[1], dtype=torch.float64).unsqueeze(-1),\n ),\n batch_size=batch_size,\n shuffle=True,\n)\n\noptimizer = AdamW(params=model.parameters(), lr=lr, weight_decay=weight_decay)\nscheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)\nlosses = fit_torch_model(\n model=model,\n training_data=train_data_loader,\n val_data=val_data_loader,\n loss=F.binary_cross_entropy,\n optimizer=optimizer,\n scheduler=scheduler,\n num_epochs=num_epochs,\n device=device,\n)\n
And let's check that the model is not overfitting
plot_losses(losses)\n
A look at the confusion matrix also shows good results
It is important that the model converges to a point near the optimum, since the influence values assume that we are at a minimum (or close) in the loss landscape. The function
measures the influence of the data point \\(x_1\\) onto \\(x_2\\) conditioned on the training targets \\(y_1\\) and \\(y_2\\) through some model parameters \\(\\theta\\). If the loss function \\(L\\) is differentiable, we can take \\(I\\) to be
$$ I(x_1, x_2) = \\nabla_\\theta\\; L(x_1, y_1) ^\\mathsf{T} \\; H_\\theta^{-1} \\; \\nabla_\\theta \\; L(x_2, y_2) $$ See \"Understanding Black-box Predictions via Influence Functions\" for a detailed derivation of this formula.
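To make the formula concrete, here is a minimal numpy sketch (not pyDVL's implementation): grad_loss and hessian are hypothetical helpers returning the gradient of the loss at a single point and the Hessian of the training loss, both evaluated at the parameters theta.

import numpy as np

def influence_up(grad_loss, hessian, x1, y1, x2, y2, theta):
    # I(x_1, x_2) = grad L(x_1, y_1)^T  H_theta^{-1}  grad L(x_2, y_2)
    g1 = grad_loss(x1, y1, theta)       # gradient of the loss at (x1, y1), shape (p,)
    g2 = grad_loss(x2, y2, theta)       # gradient of the loss at (x2, y2), shape (p,)
    H = hessian(theta)                  # Hessian of the training loss, shape (p, p)
    return g1 @ np.linalg.solve(H, g2)  # solve H z = g2 instead of inverting H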
Let's take a subset of the training data points, for which we will calculate the influence values.
x = train_data[0][:100]\ny = train_data[1][:100]\n
In pyDVL, the influence of the training points on the test points can be calculated with the following
train_x = torch.as_tensor(x)\ntrain_y = torch.as_tensor(y, dtype=torch.float64).unsqueeze(-1)\ntest_x = torch.as_tensor(test_data[0])\ntest_y = torch.as_tensor(test_data[1], dtype=torch.float64).unsqueeze(-1)\n\ntrain_data_loader = DataLoader(\n TensorDataset(train_x, train_y),\n batch_size=batch_size,\n)\n\ninfluence_model = DirectInfluence(\n model,\n F.binary_cross_entropy,\n hessian_regularization=0.0,\n)\ninfluence_model = influence_model.fit(train_data_loader)\n\ninfluence_values = influence_model.influences(\n test_x, test_y, train_x, train_y, mode=\"up\"\n)\n
The above explicitly constructs the Hessian. This can often be computationally expensive, and for bigger models an approximate calculation with conjugate gradient should be used instead.
With mode 'up', training influences have shape [N x M], where N is the number of test samples and M is the number of training samples. They therefore associate to each training sample its influence on each test sample. Mode 'perturbation', instead, returns an array of shape [N x M x F], where F is the number of input features, i.e. the length of x.
In our case, in order to get the total average influence of a training point we can just average across test samples.
mean_train_influences = np.mean(influence_values.cpu().numpy(), axis=0)\n
Let's plot the results (adjust colorbar_limits for better color gradient)
plot_influences(\n x,\n mean_train_influences,\n line=decision_boundary,\n xlabel=\"$x_0$\",\n ylabel=\"$x_1$\",\n suptitle=\"Influences of input points\",\n legend_title=\"influence values\",\n # colorbar_limits=(-0.3,),\n);\n
We can see that, as we approach the separation line, the influences tend to move away from zero, i.e. the points become more decisive for model training, some in a positive way, some negative.
As a further test, let's introduce some labelling errors into \\(y\\) and see how the distribution of the influences changes. Let's flip the first 10 labels and calculate influences
y_corrupted = np.copy(y)\ny_corrupted[:10] = [1 - yi for yi in y[:10]]\ntrain_y_corrupted = torch.as_tensor(y_corrupted, dtype=torch.float64).unsqueeze(-1)\ntrain_corrupted_data_loader = DataLoader(\n TensorDataset(\n train_x,\n train_y_corrupted,\n ),\n batch_size=batch_size,\n)\n\ninfluence_model = DirectInfluence(\n model,\n F.binary_cross_entropy,\n hessian_regularization=0.0,\n)\ninfluence_model = influence_model.fit(train_corrupted_data_loader)\ninfluence_values = influence_model.influences(\n test_x, test_y, train_x, train_y_corrupted, mode=\"up\"\n)\n\nmean_train_influences = np.mean(influence_values.cpu().numpy(), axis=0)\n
Average mislabelled data influence: -0.8450009471117079\nAverage correct data influence: 0.010396852920315886\n
Red circles indicate the points which have been corrupted. We can see that the mislabelled data have a more negative average influence on the model, especially those that are farther away from the decision boundary.
The \"direct\" method that we have used above involves the inversion of the Hessian matrix of the model. If a model has \\(n\\) training points and \\(\\theta \\in \\mathbb{R}^p\\) parameters, this requires \\(O(n \\ p^2 + p^3)\\) operations, which for larger models, like neural networks, becomes quickly unfeasible. Conjugate gradient avoids the explicit computation of the Hessian via a technique called implicit Hessian-vector products (HVPs), which typically takes \\(O(n \\ p)\\) operations.
In the next cell we will use conjugate gradient to compute the influence factors. Since logistic regression is a very simple model, "cg" actually slows down the computation with respect to the direct method, which in this case is a much better choice. Nevertheless, we can verify that the influences calculated with "cg" are the same (up to a small error) as those calculated directly.
influence_model = CgInfluence(\n model,\n F.binary_cross_entropy,\n hessian_regularization=0.0,\n)\ninfluence_model = influence_model.fit(train_corrupted_data_loader)\ninfluence_values = influence_model.influences(\n test_x, test_y, train_x, train_y_corrupted\n)\nmean_train_influences = np.mean(influence_values.cpu().numpy(), axis=0)\n\nprint(\"Average mislabelled data influence:\", np.mean(mean_train_influences[:10]))\nprint(\"Average correct data influence:\", np.mean(mean_train_influences[10:]))\n
Average mislabelled data influence: -0.8448414156979295\nAverage correct data influence: 0.010395021813591145\n
The averages are very similar to the ones calculated with the direct method. The same is true for the plot.
In this notebook, we will take a closer look at the theory of influence functions with the help of a synthetic dataset. Data mislabeling occurs whenever some examples from a (usually large) dataset are wrongly labeled. In real life this happens fairly often, e.g. as a consequence of human error or noise in the data.
Let's consider a classification problem with the following notation:
In other words, we have a dataset containing \\(N\\) samples, each with label 1 or 0. As a typical example, you can think of \\(y\\) indicating whether a patient has a disease based on some feature representation \\(x\\).
Let's now introduce a toy model that will help us delve into the theory and practical utility of influence functions. We will assume that \\(y\\) is a Bernoulli random variable, while the input \\(x\\) follows a d-dimensional Gaussian distribution which depends on the label \\(y\\). More precisely:
with fixed means and diagonal covariance. Implementing the sampling scheme in Python is straightforward and can be achieved by first sampling \\(y\\) and then \\(x\\).
Let's plot the dataset with the respective labels and the optimal decision line.
We will now train a logistic regression model on the training data. This can be done with the following
For obtaining the optimal discriminator one has to solve the equation
and determine the solution set \\(X\\) . Let's take the following probabilities
For a single fixed diagonal variance parameterized by \\(\\sigma\\), the optimal discriminator lies at the points which are equidistant from the means of the two distributions, i.e.
This is just an implicit description of the line. The explicit form can be obtained by enforcing a functional form \\(f(z) = x = a z + b\\) with \\(z \\in \\mathbb{R}\\) onto \\(x\\) and inserting it into the previous equation.
By symmetry, the direction \\(a\\) is expected to be orthogonal to \\(\\mu_2 - \\mu_1\\). Then, solving for \\(b\\), one finds the explicit solution.
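For reference, here is a sketch of the standard result, assuming equal class priors and a single diagonal covariance \\(\\sigma^2 I\\) (the notebook's exact parameterization may differ):
$$ \\|x - \\mu_1\\|^2 = \\|x - \\mu_2\\|^2 \\;\\Longleftrightarrow\\; (\\mu_2 - \\mu_1)^\\mathsf{T} x = \\frac{\\|\\mu_2\\|^2 - \\|\\mu_1\\|^2}{2}, $$
i.e. the boundary is the hyperplane through the midpoint \\(b = (\\mu_1 + \\mu_2)/2\\) with direction \\(a\\) orthogonal to \\(\\mu_2 - \\mu_1\\), which yields \\(f(z) = a z + b\\).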
Let's start by loading the imports, the dataset and splitting it into train, validation and test sets. We will use a large test set to have a less noisy estimate of the average influence.
%autoreload\n%matplotlib inline\n\nimport os\nimport random\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom support.common import plot_losses\nfrom support.torch import TorchMLP, fit_torch_model\nfrom pydvl.influence.torch import (\n DirectInfluence,\n CgInfluence,\n ArnoldiInfluence,\n EkfacInfluence,\n NystroemSketchInfluence,\n LissaInfluence,\n)\nfrom support.shapley import load_wine_dataset\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, f1_score\nfrom torch.optim import Adam, lr_scheduler\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom scipy.stats import pearsonr, spearmanr\n
training_data, val_data, test_data, feature_names = load_wine_dataset(\n train_size=0.6, test_size=0.3\n)\n
We will corrupt some of the training points by flipping their labels
num_corrupted_idxs = 10\ntraining_data[1][:num_corrupted_idxs] = torch.tensor(\n [(val + 1) % 3 for val in training_data[1][:num_corrupted_idxs]]\n)\n
and let's wrap the datasets in pytorch data loaders
training_data_loader = DataLoader(\n TensorDataset(*training_data), batch_size=32, shuffle=False\n)\nval_data_loader = DataLoader(TensorDataset(*val_data), batch_size=32, shuffle=False)\ntest_data_loader = DataLoader(TensorDataset(*test_data), batch_size=32, shuffle=False)\n
feature_dimension = 13\nnum_classes = 3\nnetwork_size = [16, 16]\nlayers_size = [feature_dimension, *network_size, num_classes]\nnum_epochs = 300\nlr = 0.005\nweight_decay = 0.01\n\nnn_model = TorchMLP(layers_size)\nnn_model.to(device)\n\noptimizer = Adam(params=nn_model.parameters(), lr=lr, weight_decay=weight_decay)\nscheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)\n\nlosses = fit_torch_model(\n model=nn_model,\n training_data=training_data_loader,\n val_data=val_data_loader,\n loss=F.cross_entropy,\n optimizer=optimizer,\n scheduler=scheduler,\n num_epochs=num_epochs,\n device=device,\n)\n
Let's check that the training has found a stable minimum by plotting the training and validation loss
Since it is a classification problem, let's also take a look at the confusion matrix on the test set
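The cell producing the plot is omitted here; the following is a minimal sketch of how the test predictions (the pred_y_test array also used for the F1 score below) and the confusion matrix could be obtained, assuming test_data holds torch tensors as prepared above and device is the torch device used for training:

nn_model.eval()
with torch.no_grad():
    logits = nn_model(test_data[0].to(device))    # forward pass on the test inputs
pred_y_test = logits.argmax(dim=1).cpu().numpy()  # predicted class per test sample

cm = confusion_matrix(test_data[1], pred_y_test)
ConfusionMatrixDisplay(confusion_matrix=cm).plot()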
And let's compute the f1 score of the model
f1_score(test_data[1], pred_y_test, average=\"weighted\")\n
0.9633110554163186
Let's now move to calculating influences of each point on the total score.
influence_model = DirectInfluence(\n nn_model,\n F.cross_entropy,\n hessian_regularization=0.1,\n)\ninfluence_model = influence_model.fit(training_data_loader)\ntrain_influences = influence_model.influences(*test_data, *training_data, mode=\"up\")\n
The returned matrix, train_influences, has as many columns as there are points in the training set and as many rows as there are points in the test set. Each element \\(a_{i,j}\\) stores the influence that training point \\(j\\) has on the classification of test point \\(i\\).
If we average each column of the influence matrix (i.e. average over the test points), we obtain an estimate of the overall influence of each training point on the total accuracy of the network.
mean_train_influences = np.mean(train_influences.cpu().numpy(), axis=0)\n
The following histogram shows that there are big differences in score within the training set (notice the log-scale on the y axis).
We can see that the corrupted points tend to have a negative effect on the model, as expected
Average influence of corrupted points: -1.0667533\nAverage influence of other points: 0.10814369\n
We have seen how to calculate the influence of individual training points on each test point using mode 'up'. Using mode 'perturbation' we can also calculate the influence of the input features of each point. In the next cell we calculate the average influence of each feature on training and test points, and ultimately assess which are the most relevant to model performance.
influence_model.hessian_regularization = 1.0\nfeature_influences = influence_model.influences(\n *test_data, *training_data, mode=\"perturbation\"\n)\n
The explicit calculation of the Hessian matrix is numerically challenging, and its high memory requirements make it infeasible for larger models. pyDVL offers several approximation methods for the action of the inverse Hessian to overcome this bottleneck:
In the following, we show the usage of these approximation methods and investigate their performance.
Since the Hessian is symmetric and positive definite (at least after applying sufficient regularization), we can use the conjugate gradient algorithm to approximately solve the equations
Most importantly, the algorithm does not require the computation of the full Hessian matrix, but only the implementation of Hessian-vector products. pyDVL implements a stable block variant of the preconditioned conjugate gradient algorithm.
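For intuition, a Hessian-vector product can be implemented with a double backward pass; the following is only a schematic torch sketch, not pyDVL's internal implementation:

import torch
from torch.autograd import grad

def hvp(loss, params, v):
    # Compute H v by differentiating <grad(loss), v> with respect to the parameters.
    grads = grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = flat_grad @ v
    hvs = grad(grad_dot_v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hvs])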
from pydvl.influence.torch.pre_conditioner import NystroemPreConditioner\n\nnn_model.to(\"cpu\")\ncg_influence_model = CgInfluence(\n nn_model,\n F.cross_entropy,\n hessian_regularization=0.1,\n progress=True,\n use_block_cg=True,\n pre_conditioner=NystroemPreConditioner(rank=5),\n)\ncg_influence_model = cg_influence_model.fit(training_data_loader)\ncg_train_influences = cg_influence_model.influences(\n *test_data, *training_data, mode=\"up\"\n)\nmean_cg_train_influences = np.mean(cg_train_influences.numpy(), axis=0)\n
Let's compare the results obtained through conjugate gradient with those from the direct method
Percentage error of Cg over direct method:20.877432823181152 %\n
Pearson Correlation Cg vs direct 0.9977952163687263\nSpearman Correlation Cg vs direct 0.9972793913897776\n
The LiSSA method is a stochastic approximation of the inverse Hessian vector product. Compared to conjugate gradient it is faster but less accurate and typically suffers from instability.
In order to find the solution of the HVP, LiSSA iteratively approximates the inverse of the Hessian matrix with the following update:
where \\(d\\) and \\(s\\) are a dampening and a scaling factor.
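Schematically, the recursion looks roughly like the following sketch, where hvp is a hypothetical Hessian-vector product callable; see the LissaInfluence documentation for the exact update and defaults used by pyDVL:

def lissa_inverse_hvp(hvp, v, maxiter=1000, dampen=0.0, scale=10.0):
    # Iteratively approximate H^{-1} v using only Hessian-vector products.
    h_est = v.clone()
    for _ in range(maxiter):
        # For dampen=0 the fixed point of this update is h = scale * H^{-1} v.
        h_est = v + (1 - dampen) * h_est - hvp(h_est) / scale
    return h_est / scale  # rescale to obtain the estimate of H^{-1} v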
lissa_influence_model = LissaInfluence(\n nn_model,\n F.cross_entropy,\n hessian_regularization=0.1,\n progress=True,\n)\nlissa_influence_model = lissa_influence_model.fit(training_data_loader)\nlissa_train_influences = lissa_influence_model.influences(\n *test_data, *training_data, mode=\"up\"\n)\nmean_lissa_train_influences = np.mean(lissa_train_influences.numpy(), axis=0)\n
Percentage error of Lissa over direct method:9.416493028402328 %\n
Pearson Correlation Lissa vs direct 0.999796846529674\nSpearman Correlation Lissa vs direct 0.9990729778068873\n
The Arnoldi method leverages a low rank approximation of the Hessian matrix to reduce the memory requirements. It is generally much faster than the conjugate gradient method and can achieve similar accuracy.
arnoldi_influence_model = ArnoldiInfluence(\n nn_model,\n F.cross_entropy,\n rank_estimate=30,\n hessian_regularization=0.1,\n)\narnoldi_influence_model = arnoldi_influence_model.fit(training_data_loader)\narnoldi_train_influences = arnoldi_influence_model.influences(\n *test_data, *training_data, mode=\"up\"\n)\nmean_arnoldi_train_influences = np.mean(arnoldi_train_influences.numpy(), axis=0)\n
Percentage error of Arnoldi over direct method:37.59587109088898 %\n
Pearson Correlation Arnoldi vs direct 0.9873076667742712\nSpearman Correlation Arnoldi vs direct 0.9791621533113334\n
Similar to the Arnoldi method, the Nyström method uses a low-rank approximation, which is computed from random projections of the Hessian matrix. In general the approximation is expected to be worse than the Arnoldi one, but it is cheaper to compute.
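For intuition, here is a small numpy sketch of a Nyström low-rank approximation of a symmetric positive semi-definite matrix built from random projections; pyDVL only needs Hessian-vector products and adds numerical safeguards, so this is purely illustrative:

import numpy as np

def nystroem_approximation(matvec, dim, rank, seed=0):
    # Sketch the matrix H with a random Gaussian test matrix Omega,
    # then approximate H ~= Y (Omega^T Y)^+ Y^T with Y = H Omega.
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((dim, rank))
    y = np.column_stack([matvec(omega[:, i]) for i in range(rank)])
    core = np.linalg.pinv(omega.T @ y)
    return y @ core @ y.T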
nystroem_influence_model = NystroemSketchInfluence(\n nn_model,\n F.cross_entropy,\n rank=30,\n hessian_regularization=0.1,\n)\nnystroem_influence_model = nystroem_influence_model.fit(training_data_loader)\nnystroem_train_influences = nystroem_influence_model.influences(\n *test_data, *training_data, mode=\"up\"\n)\nmean_nystroem_train_influences = np.mean(nystroem_train_influences.numpy(), axis=0)\n
Percentage error of Nystr\u00f6m over direct method:37.205782532691956 %\n
Pearson Correlation Nystr\u00f6m vs direct 0.9946487475679316\nSpearman Correlation Nystr\u00f6m vs direct 0.985500163740333\n
The EKFAC method is a more recent technique that leverages the Kronecker product structure of the Hessian matrix to reduce memory requirements. It is generally much faster than iterative methods like conjugate gradient and Arnoldi, and it allows for easier handling of memory. It is therefore the only technique that can scale to very large models (e.g. billions of parameters). Its accuracy is, however, much worse. Let's see how it performs on our example.
ekfac_influence_model = EkfacInfluence(\n nn_model,\n update_diagonal=True,\n hessian_regularization=0.1,\n)\nekfac_influence_model = ekfac_influence_model.fit(training_data_loader)\nekfac_train_influences = ekfac_influence_model.influences(\n *test_data, *training_data, mode=\"up\"\n)\nmean_ekfac_train_influences = np.mean(ekfac_train_influences.numpy(), axis=0)\n
Percentage error of EK-FAC over direct method:969.1289901733398 %\n
The accuracy is not good, and it is not recommended to use this method for small models. Nevertheless, a look at the actual influence values reveals that the EK-FAC estimates are not completely off.
The above plot shows a good correlation between the EK-FAC and the direct method. Corrupted points have been circled in red, and in both the direct and the approximate case they are correctly identified as having a negative influence on the model's accuracy. This is confirmed by explicit calculation of the Pearson and Spearman correlation coefficients.
Pearson Correlation EK-FAC vs direct 0.9602782923532728\nSpearman Correlation EK-FAC vs direct 0.8999118321283724\n
The correlation between the EK-FAC and the direct method is quite good, and it improves significantly if we keep only the top 20 influences by absolute value.
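One possible way to compute these restricted correlations, assuming the mean influence arrays computed above (the exact cell used for the numbers below is not shown):

top20 = np.argsort(np.abs(mean_train_influences))[-20:]  # indices of the 20 largest |influence|
pearson_top20 = pearsonr(
    mean_train_influences[top20], mean_ekfac_train_influences[top20]
).statistic
spearman_top20 = spearmanr(
    mean_train_influences[top20], mean_ekfac_train_influences[top20]
).statistic
print("Pearson Correlation EK-FAC vs direct - top-20 influences", pearson_top20)
print("Spearman Correlation EK-FAC vs direct - top-20 influences", spearman_top20)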
Pearson Correlation EK-FAC vs direct - top-20 influences 0.9876922383437592\nSpearman Correlation EK-FAC vs direct - top-20 influences 0.9338345864661652\n
When we calculate influence scores, typically we are more interested in assessing which training points have the highest or lowest impact on the model rather than having a precise estimate of the influence value. EK-FAC then provides a fast and memory-efficient way to calculate a coarse influence ranking of the training points which scales very well even to the largest neural networks.
This was a quick introduction to the pyDVL interface for influence functions. Despite their speed and simplicity, influence functions are known to be a very noisy estimator of data quality, as pointed out in the paper \"Influence functions in deep learning are fragile\" . The size of the network, the weight decay, the inversion method used for calculating influences, the size of the test set: they all add up to the total amount of noise. Experiments may therefore give quantitatively and qualitatively different results if not averaged across several realisations. Shapley values, on the contrary, have been shown to be more robust, but this comes at the cost of high computational requirements. pyDVL employs several parallelization and caching techniques to optimize such calculations.
This notebook shows how to calculate influences on a NN model using pyDVL for an arbitrary dataset, and how this can be used to find anomalous or corrupted data points.
It uses the wine dataset from sklearn: given a set of 13 different input parameters for a particular bottle, each related to some physical property (e.g. concentration of magnesium, malic acidity, alcohol content, etc.), the model needs to predict to which of 3 classes the wine belongs. For more details, please refer to the sklearn documentation .
We will train a 2-layer neural network. PyDVL has some convenience wrappers to initialize a pytorch NN. If you already have a model loaded and trained, you can skip this section.
The following cell calculates the influences of each training data point on the neural network. Neural networks typically have a very bumpy parameter space which, during training, is explored until a configuration that minimises the loss is found. Influence functions assume that the model lies at a (at least local) minimum of that loss, and many issues can arise if this is not fulfilled. In order to avoid this scenario, a regularisation term should be used whenever dealing with big and noisy models.
We will be using the following functions and classes from pyDVL.
%autoreload\nfrom pydvl.utils import (\n Dataset,\n Utility,\n)\nfrom pydvl.value import compute_least_core_values, LeastCoreMode, ValuationResult\nfrom pydvl.reporting.plots import shaded_mean_std\nfrom pydvl.reporting.scores import compute_removal_score\n
X, y = make_classification(\n n_samples=dataset_size,\n n_features=50,\n n_informative=25,\n n_classes=3,\n random_state=random_state,\n)\n
full_dataset = Dataset.from_arrays(\n X, y, stratify_by_target=True, random_state=random_state\n)\nsmall_dataset = Dataset.from_arrays(\n X,\n y,\n stratify_by_target=True,\n train_size=train_size,\n random_state=random_state,\n)\n
model = LogisticRegression(max_iter=500, solver=\"liblinear\")\n
model.fit(full_dataset.x_train, full_dataset.y_train)\nprint(\n f\"Training accuracy: {100 * model.score(full_dataset.x_train, full_dataset.y_train):0.2f}%\"\n)\nprint(\n f\"Testing accuracy: {100 * model.score(full_dataset.x_test, full_dataset.y_test):0.2f}%\"\n)\n
Training accuracy: 86.25%\nTesting accuracy: 70.00%\n
model.fit(small_dataset.x_train, small_dataset.y_train)\nprint(\n f\"Training accuracy: {100 * model.score(small_dataset.x_train, small_dataset.y_train):0.2f}%\"\n)\nprint(\n f\"Testing accuracy: {100 * model.score(small_dataset.x_test, small_dataset.y_test):0.2f}%\"\n)\n
Training accuracy: 100.00%\nTesting accuracy: 47.89%\n
utility = Utility(model=model, data=small_dataset)\n
exact_values = compute_least_core_values(\n u=utility,\n mode=LeastCoreMode.Exact,\n progress=True,\n)\n
exact_values_df = exact_values.to_dataframe(column=\"exact_value\").T\nexact_values_df = exact_values_df[sorted(exact_values_df.columns)]\n
budget_array = np.linspace(200, 2 ** len(small_dataset), num=10, dtype=int)\n\nall_estimated_values_df = []\nall_errors = {budget: [] for budget in budget_array}\n\nfor budget in tqdm(budget_array):\n dfs = []\n errors = []\n column_name = f\"estimated_value_{budget}\"\n for i in range(20):\n values = compute_least_core_values(\n u=utility,\n mode=LeastCoreMode.MonteCarlo,\n n_iterations=budget,\n n_jobs=n_jobs,\n )\n df = (\n values.to_dataframe(column=column_name)\n .drop(columns=[f\"{column_name}_stderr\", f\"{column_name}_updates\"])\n .T\n )\n df = df[sorted(df.columns)]\n error = mean_squared_error(\n exact_values_df.loc[\"exact_value\"].values, np.nan_to_num(df.values.ravel())\n )\n all_errors[budget].append(error)\n df[\"budget\"] = budget\n dfs.append(df)\n estimated_values_df = pd.concat(dfs)\n all_estimated_values_df.append(estimated_values_df)\n\nvalues_df = pd.concat(all_estimated_values_df)\nerrors_df = pd.DataFrame(all_errors)\n
We can see that the approximation error decreases, on average, as we increase the budget.
Still, the error does not necessarily decrease every time we increase the number of iterations, because the Monte Carlo method samples subsets with replacement, i.e. some subsets may be repeated.
utility = Utility(model=model, data=full_dataset)\n
method_names = [\"Random\", \"Least Core\"]\nremoval_percentages = np.arange(0, 0.41, 0.05)\n
all_scores = []\n\nfor i in trange(5):\n for method_name in method_names:\n if method_name == \"Random\":\n values = ValuationResult.from_random(size=len(utility.data))\n else:\n values = compute_least_core_values(\n u=utility,\n mode=LeastCoreMode.MonteCarlo,\n n_iterations=n_iterations,\n n_jobs=n_jobs,\n )\n scores = compute_removal_score(\n u=utility,\n values=values,\n percentages=removal_percentages,\n remove_best=True,\n )\n scores[\"method_name\"] = method_name\n all_scores.append(scores)\n\nscores_df = pd.DataFrame(all_scores)\n
We can clearly see that removing the most valuable data points, as given by the Least Core method, leads to, on average, a decrease in the model's performance and that the method outperforms random removal of data points.
all_scores = []\n\nfor i in trange(5):\n for method_name in method_names:\n if method_name == \"Random\":\n values = ValuationResult.from_random(size=len(utility.data))\n else:\n values = compute_least_core_values(\n u=utility,\n mode=LeastCoreMode.MonteCarlo,\n n_iterations=n_iterations,\n n_jobs=n_jobs,\n )\n scores = compute_removal_score(\n u=utility,\n values=values,\n percentages=removal_percentages,\n )\n scores[\"method_name\"] = method_name\n all_scores.append(scores)\n\nscores_df = pd.DataFrame(all_scores)\n
We can clearly see that removing the least valuable data points, as given by the Least Core method, leads to, on average, an increase in the model's performance and that the method outperforms the random removal of data points.
This notebook introduces Least Core methods for the computation of data values using pyDVL.
Shapley values define a fair way of distributing the worth of the whole training set when every data point is part of it. But they do not consider the question of stability of subsets: could some data points obtain a higher payoff if they formed smaller subsets? It is argued that this might be relevant if data providers are paid based on data value, since Shapley values can incentivise them not to contribute their data to the \"grand coalition\", but instead to try to form smaller ones. Whether this is of actual practical relevance is debatable, but in any case the Least Core is an alternative tool available for any task of data valuation.
The Core is another approach to compute data values originating in cooperative game theory that attempts to answer those questions. It is the set of feasible payoffs that cannot be improved upon by a coalition of the participants.
Its use for Data Valuation was first described in the paper If You Like Shapley Then You\u2019ll Love the Core by Tom Yan and Ariel D. Procaccia.
The Least Core value \\(v\\) of the \\(i\\) -th sample in dataset \\(D\\) wrt. utility \\(u\\) is computed by solving the following Linear Program:
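In its standard form (a sketch of the usual formulation; see the pyDVL documentation for the exact program that is solved), it reads:
$$ \\begin{array}{lll} \\text{minimize} & e & \\\\ \\text{subject to} & \\sum_{i \\in D} x_{i} = u(D), & \\\\ & \\sum_{i \\in S} x_{i} + e \\geq u(S), & \\forall S \\subseteq D, \\end{array} $$
and the Least Core value of the \\(i\\)-th sample is the \\(x_i\\) of an optimal solution.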
To illustrate this method we will use a synthetic dataset. We will first use a subset of 10 data points to compute the exact values and use them to assess the Monte Carlo approximation. Afterwards, we will conduct the data removal experiments as described by Ghorbani and Zou in their paper Data Shapley: Equitable Valuation of Data for Machine Learning : We compute the data valuation given different computation budgets and incrementally remove a percentage of the best, respectively worst, data points and observe how that affects the utility.
We generate a synthetic dataset using the make_classification function from scikit-learn.
make_classification
We sample 200 data points from a 50-dimensional Gaussian distribution with 25 informative features and 25 non-informative features (generated as random linear combinations of the informative features).
The 200 samples are uniformly distributed across 3 classes with a small percentage of noise added to the labels to make the task a bit more difficult.
In this first section we will use a smaller subset of the dataset containing 10 samples in order to be able to compute exact values in a reasonable amount of time. Afterwards, we will use the Monte Carlo method with a limited budget (maximum number of subsets) to approximate these values.
We now move on to the data removal experiments using the full dataset.
In these experiments, we first rank the data points from most valuable to least valuable using the values estimated by the Monte Carlo Least Core method. Then, we gradually remove from 5 to 40 percent, by increments of 5 percentage points, of the most valuable/least valuable ones, train the model on this subset and compute its accuracy.
We start by removing the best data points and seeing how the model's accuracy evolves.
We then proceed to removing the worst data points and seeing how the model's accuracy evolves.
We will be using the following functions from pyDVL. The main entry point is the function compute_banzhaf_semivalues() . In order to use it we need the classes Dataset , Utility and Scorer .
compute_banzhaf_semivalues()
%autoreload\nfrom pydvl.reporting.plots import plot_shapley\nfrom support.banzhaf import load_digits_dataset\nfrom pydvl.value import *\n
training_data, _, test_data = load_digits_dataset(\n test_size=0.3, random_state=random_state\n)\n
Training and test data are then used to instantiate a Dataset object:
dataset = Dataset(*training_data, *test_data)\n
import torch\nfrom support.banzhaf import TorchCNNModel\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = TorchCNNModel(lr=0.001, epochs=40, batch_size=32, device=device)\nmodel.fit(x=training_data[0], y=training_data[1])\n
Train Accuracy: 0.705\nTest Accuracy: 0.630\n
The final component is the scoring function. It can be anything like accuracy or \\(R^2\\) , and is set with a string from the standard sklearn scoring methods . Please refer to that documentation for information on how to define your own scoring function.
We group dataset, model and scoring function into an instance of Utility and compute the Banzhaf semi-values. We take all defaults, and choose to stop computation using the MaxChecks stopping criterion, which terminates after a fixed number of calls to it. With the default batch_size of 1 this means that we will retrain the model that many times.
Note how we enable caching using memcached (assuming memcached runs with the default configuration for localhost). This is necessary in the current preliminary implementation of permutation sampling , which is the default for compute_banzhaf_semivalues .
from pydvl.utils import MemcachedCacheBackend, MemcachedClientConfig\n\n# Compute regular Banzhaf semivalue\nutility = Utility(\n model=model,\n data=dataset,\n scorer=Scorer(\"accuracy\", default=0.0, range=(0, 1)),\n cache_backend=MemcachedCacheBackend(MemcachedClientConfig()),\n)\nvalues = compute_banzhaf_semivalues(\n utility, done=MaxChecks(max_checks), n_jobs=n_jobs, progress=True\n)\nvalues.sort(key=\"value\")\ndf = values.to_dataframe(column=\"banzhaf_value\", use_names=True)\n
The returned dataframe contains the mean and variance of the Monte Carlo estimates for the values:
Let us plot the results. In the next cell we will take the 30 images with the lowest score and plot their values with 95% Normal confidence intervals. Keep in mind that Permutation Monte Carlo Banzhaf is typically very noisy, and it can take many steps to arrive at a clean estimate.
Average value of first 10 data points: 0.650003277874342\nExact values:\n39 0.432836\n45 0.455392\n158 0.533221\n144 0.571260\n36 0.633091\n161 0.697940\n77 0.698507\n28 0.752367\n35 0.838752\n175 0.886668\nName: banzhaf_value, dtype: float64\n
For the first 5 images, we will falsify their label, for images 6-10, we will add some noise.
x_train_anomalous = training_data[0].copy()\ny_train_anomalous = training_data[1].copy()\nanomalous_indices = high_dvl.index.map(int).values[:10]\n\n# Set label of first 5 images to 0\ny_train_anomalous[high_dvl.index.map(int).values[:5]] = 0\n\n# Add noise to images 6-10\nindices = high_dvl.index.values[5:10].astype(int)\ncurrent_images = x_train_anomalous[indices]\nnoisy_images = current_images + 0.5 * np.random.randn(*current_images.shape)\nnoisy_images[noisy_images < 0] = 0.0\nnoisy_images[noisy_images > 1] = 1.0\nx_train_anomalous[indices] = noisy_images\n
anomalous_dataset = Dataset(\n x_train=x_train_anomalous,\n y_train=y_train_anomalous,\n x_test=test_data[0],\n y_test=test_data[1],\n)\n\nanomalous_utility = Utility(\n model=TorchCNNModel(),\n data=anomalous_dataset,\n scorer=Scorer(\"accuracy\", default=0.0, range=(0, 1)),\n cache_backend=MemcachedCacheBackend(MemcachedClientConfig()),\n)\nanomalous_values = compute_banzhaf_semivalues(\n anomalous_utility, done=MaxChecks(max_checks), n_jobs=n_jobs, progress=True\n)\nanomalous_values.sort(key=\"value\")\nanomalous_df = anomalous_values.to_dataframe(column=\"banzhaf_value\", use_names=True)\n
Let us now take a look at the low-value images and check how many of our anomalous images are part of it.
As can be seen in this figure, the valuation of the data points has decreased significantly by adding noise or falsifying their labels. This shows the potential of using Banzhaf values or other data valuation methods to detect mislabeled data points or noisy input data.
Average value of original data points: 0.650003277874342\nAverage value of modified, anomalous data points: -0.02501543656281746\nFor reference, these are the average data values of all data points used for training (anomalous):\nbanzhaf_value 0.006044\nbanzhaf_value_stderr 0.103098\nbanzhaf_value_updates 5.000000\ndtype: float64\nThese are the average data values of all points (original data):\nbanzhaf_value 0.005047\nbanzhaf_value_stderr 0.115262\nbanzhaf_value_updates 5.000000\ndtype: float64\n
utility = Utility(\n model=TorchCNNModel(),\n data=dataset,\n scorer=Scorer(\"accuracy\", default=0.0, range=(0, 1)),\n cache_backend=MemcachedCacheBackend(MemcachedClientConfig()),\n)\n
Computing the values is the same, but we now use a better stopping criterion. Instead of fixing the number of utility evaluations with MaxChecks , we use RankCorrelation to stop when the change in Spearman correlation between the ranking of two successive iterations is below a threshold.
values = compute_msr_banzhaf_semivalues(\n utility,\n done=RankCorrelation(rtol=0.0001, burn_in=10),\n n_jobs=n_jobs,\n progress=True,\n)\nvalues.sort(key=\"value\")\nmsr_df = values.to_dataframe(column=\"banzhaf_value\", use_names=True)\n
Inspection of the values reveals (generally) much lower variances. Notice the number of updates to each value as well.
from sklearn.linear_model import SGDClassifier\n\nif is_CI:\n utility = Utility(\n model=SGDClassifier(max_iter=2),\n data=dataset,\n scorer=Scorer(\"accuracy\", default=0.0, range=(0, 1)),\n )\nelse:\n utility = Utility(\n model=TorchCNNModel(),\n data=dataset,\n scorer=Scorer(\"accuracy\", default=0.0, range=(0, 1)),\n )\n
def get_semivalues_and_history(\n sampler_t, max_checks=max_checks, n_jobs=n_jobs, progress=True\n):\n _history = HistoryDeviation(n_steps=max_checks, rtol=1e-9)\n if sampler_t == MSRSampler:\n semivalue_function = compute_msr_banzhaf_semivalues\n else:\n semivalue_function = compute_banzhaf_semivalues\n _values = semivalue_function(\n utility,\n sampler_t=sampler_t,\n done=MaxChecks(max_checks + 2) | _history,\n n_jobs=n_jobs,\n progress=progress,\n )\n return _history, _values\n
# Monte Carlo Permutation Sampling Banzhaf semivalues\nhistory_permutation, permutation_values = get_semivalues_and_history(PermutationSampler)\n
# MSR Banzhaf values\nhistory_msr, msr_values = get_semivalues_and_history(MSRSampler)\n
# UniformSampler\nhistory_uniform, uniform_values = get_semivalues_and_history(UniformSampler)\n
# AntitheticSampler\nhistory_antithetic, antithetic_values = get_semivalues_and_history(AntitheticSampler)\n
# RandomHierarchicalSampler\nhistory_random, random_values = get_semivalues_and_history(RandomHierarchicalSampler)\n
The plot above visualizes the convergence speed of different samplers used for Banzhaf semivalue calculation. It shows the average magnitude of the updates to the semivalues at every step of the algorithm.
As you can see, MSR Banzhaf stabilizes much faster. After 1000 iterations (subsets sampled and evaluated with the utility), Permutation Monte Carlo Banzhaf has evaluated the marginal function only about 5 times per data point (we are using 200 data points). For MSR , the semivalue of each data point was updated 1000 times. Due to this, the values converge much faster with respect to the number of utility evaluations, which is the key advantage of MSR sampling.
MSR sampling does come at a cost, however, which is that the updates to the semivalues are more noisy than in other methods. We will analyze the impact of this tradeoff in the next sections. First, let us look at how similar all the computed semivalues are. They are all Banzhaf values, so in a perfect world, all samplers should result in the exact same semivalues. However, due to randomness in the utility (recall that we use a neural network) and randomness in the samplers, the resulting values are likely never exactly the same. Another quality measure is that a good sampler would lead to very consistent values, a bad one to less consistent values. Let us first examine how similar the results are, then we'll look at consistency.
This plot shows that the samplers lead to quite different Banzhaf semivalues; however, all of them have some points in common. The MSR sampler does not seem to be significantly worse than any of the others.
In an ideal setting without randomness the overlap of points would be higher; however, the stochastic nature of the CNN model that we use, together with the fact that we use only 200 data points for training, might overshadow these results. As a matter of fact, we obtain the following rather discouraging result:
Total number of top 20 points that all samplers have in common: 0\n
This notebook showcases Data Banzhaf: A Robust Data Valuation Framework for Machine Learning by Wang and Jia.
Computing Banzhaf semi-values with pyDVL follows basically the same procedure as for other semi-value-based methods like Shapley values. However, Data-Banzhaf tends to be more robust to stochasticity in the training process than other semi-values, a property that we study here.
Additionally, we compare two sampling techniques: the standard permutation-based Monte Carlo sampling, and the so-called MSR (Maximum Sample Reuse) principle.
In order to highlight the strengths of Data-Banzhaf, we require a stochastic model. For this reason, we use a CNN to classify handwritten digits from the scikit-learn toy datasets .
We use a support function, load_digits_dataset() , which downloads the data and prepares it for usage. It returns four arrays that we then use to construct a Dataset . The data consists of grayscale images of shape 8x8 pixels with 16 shades of gray. These images contain handwritten digits from 0 to 9.
load_digits_dataset()
Now we can calculate the contribution of each training sample to the model performance. First we need a model and a Scorer .
As a model, we use a simple CNN written in torch and wrapped into an object that converts numpy arrays into tensors (as of v0.9.0 valuation methods in pyDVL work only with numpy arrays). Note that any model that implements the protocol pydvl.utils.types.SupervisedModel , which is just the standard sklearn interface of fit() , predict() and score() , can be used to construct the utility.
An interesting use-case for data valuation is finding anomalous data. Maybe some of the data is really noisy or has been mislabeled. To simulate this, we will change some of the labels of our dataset and add noise to some others. Intuitively, these anomalous data points should then have a lower value.
To evaluate this, let us first check the average value of the first 10 data points, as these will be the ones that we modify. Currently, these are the 10 data points with the highest values:
Despite the previous results already being useful, we had to retrain the model a number of times and yet the variance of the value estimates was high. This has consequences for the stability of the top-k ranking of points, which decreases the applicability of the method. We now introduce a different sampling method called Maximum Sample Reuse ( MSR ) which reuses every sample for updating the Banzhaf values. The method was introduced by the authors of Data-Banzhaf and is much more sample-efficient, as we will show.
We next construct a new utility. Note how this time we don't use a cache: the chance of hitting twice the same subset of the training set is low enough that one can dispense with it (nevertheless it can still be useful, e.g. when running many experiments).
Conventional margin-based samplers require evaluating the utility twice to perform one update of a value, while permutation samplers perform \\(n+1\\) evaluations for \\(n\\) updates. Maximum Sample Reuse ( MSR ) instead updates all indices in every sample that the utility evaluates. We compare the convergence rates of these methods below.
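To make the accounting concrete, here is a small illustrative calculation of how many individual value updates a given budget of utility evaluations yields under each scheme (using n = 200 as in this notebook; the numbers are only a back-of-the-envelope sketch):

n = 200        # number of training points
budget = 1000  # number of utility evaluations

updates_marginal = budget / 2                # two evaluations per update of a single index
updates_permutation = budget * n / (n + 1)   # n + 1 evaluations yield n updates
updates_msr = budget * n                     # every evaluation updates all n indices

print(updates_marginal, updates_permutation, updates_msr)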
In order to do so, we will compute the semi-values using different samplers and use a high number of iterations to make sure that the values have converged.
Finally, we want to analyze how consistent the semivalues returned by the different samplers are. In order to do this, we compute semivalues multiple times and check how many of the data points in the top and lowest 20% of valuation of the data overlap.
MSR sampling updates the semivalue estimates for every index in the sample, much more frequently than any other sampler available, which leads to much faster convergence . Additionally, the sampler is more consistent with its value estimates than the other samplers, which might be caused by the higher number of value updates.
There is alas no general recommendation. It is best to try different samplers when computing semivalues and test which one is best suited for your use case. Nevertheless, the MSR sampler seems like a more efficient sampler which may bring fast results and is well-suited for stochastic models.
This notebook introduces Shapley methods for the computation of data value using pyDVL.
In order to illustrate the practical advantages, we will predict the popularity of songs in the dataset Top Hits Spotify from 2000-2019 , and highlight how data valuation can help investigate and boost the performance of the models. In doing so, we will describe the basic usage patterns of pyDVL.
Recall that data value is a function of three things: the dataset, the model, and the performance metric (scoring function).
Below we will describe how to instantiate each one of these objects and how to use them for data valuation. Please also see the documentation on data valuation .
We will be using the following functions from pyDVL. The main entry point is the function compute_shapley_values() , which provides a facade to all Shapley methods. In order to use it we need the classes Dataset , Utility and Scorer .
%autoreload\nfrom pydvl.reporting.plots import plot_shapley\nfrom pydvl.utils.dataset import GroupedDataset\nfrom support.shapley import load_spotify_dataset\nfrom pydvl.value import *\n
training_data, val_data, test_data = load_spotify_dataset(\n val_size=0.3, test_size=0.3, target_column=\"popularity\", random_state=random_state\n)\n
training_data[0].head()\n
The dataset has many high-level features, some quite intuitive ('duration_ms' or 'tempo'), while others are a bit more cryptic ('valence'?). For information on each feature, please consult the dataset's website .
In our analysis, we will use all the columns, except for 'artist' and 'song', to predict the 'popularity' of each song. We will nonetheless keep the information on song and artist in a separate object for future reference.
song_name = training_data[0][\"song\"]\nartist = training_data[0][\"artist\"]\ntraining_data[0] = training_data[0].drop([\"song\", \"artist\"], axis=1)\ntest_data[0] = test_data[0].drop([\"song\", \"artist\"], axis=1)\nval_data[0] = val_data[0].drop([\"song\", \"artist\"], axis=1)\n
Input and label data are then used to instantiate a Dataset object:
dataset = Dataset(*training_data, *val_data)\n
The calculation of exact Shapley values is computationally very expensive (exponentially so!) because it requires training the model on every possible subset of the training set. For this reason, pyDVL implements techniques to speed up the calculation, such as Monte Carlo approximations , surrogate models, caching of intermediate results, and grouping of data to calculate group Shapley values instead of values for single data points.
In our case, we will group songs by artist and calculate the Shapley value for the artists. Given the pandas Series for 'artist', to group the dataset by it, one does the following:
grouped_dataset = GroupedDataset.from_dataset(dataset=dataset, data_groups=artist)\n
utility = Utility(\n model=GradientBoostingRegressor(n_estimators=3),\n data=grouped_dataset,\n scorer=Scorer(\"neg_mean_absolute_error\", default=0.0),\n)\nvalues = compute_shapley_values(\n utility,\n mode=ShapleyMode.TruncatedMontecarlo,\n # Stop if the standard error is below 1% of the range of the values (which is ~2),\n # or if the number of updates exceeds 1000\n done=AbsoluteStandardError(threshold=0.2, fraction=0.9) | MaxUpdates(1000),\n truncation=RelativeTruncation(utility, rtol=0.01),\n n_jobs=-1,\n)\nvalues.sort(key=\"value\")\ndf = values.to_dataframe(column=\"data_value\", use_names=True)\n
Cancellation of futures is not supported by the joblib backend\n
The function compute_shapley_values() serves as a common access point to all Shapley methods. For most of them, we must choose a StoppingCriterion with the argument done= . In this case we choose to stop when the standard error is below 0.2 for at least 90% of the training points, or when the number of updates of any index exceeds 1000. The mode argument specifies the Shapley method to use. In this case, we use the Truncated Monte Carlo approximation , which is the fastest of the Monte Carlo methods, owing both to using the permutation definition of Shapley values and to the ability to truncate the iteration over a given permutation. We configure the truncation to happen when the contribution of the remaining elements is below 1% of the total utility, with the parameter truncation= and the policy RelativeTruncation .
done=
truncation=
Let's take a look at the returned dataframe:
df.head()\n
The first thing to notice is that we sorted the results in ascending order of Shapley value. The index holds the labels for each data group: in this case, artist names. The column data_value is just that: the Shapley Data value, and data_value_stderr is its estimated standard error because we are using a Monte Carlo approximation.
data_value
data_value_stderr
Let us plot the results. In the next cell we will take the 30 artists with the lowest score and plot their values with 95% Normal confidence intervals. Keep in mind that Monte Carlo Shapley is typically very noisy, and it can take many steps to arrive at a clean estimate.
We can immediately see that many artists (groups of samples) have very low, even negative value, which means that they tend to decrease the total score of the model when present in the training set! What happens if we remove them?
In the next cell we create a new training set excluding the artists with the lowest scores:
low_dvl_artists = df.iloc[: int(0.2 * len(df))].index.to_list()\nartist_filter = ~artist.isin(low_dvl_artists)\nX_train_good_dvl = training_data[0][artist_filter]\ny_train_good_dvl = training_data[1][artist_filter]\n
Now we will use this \"cleaned\" dataset to retrain the same model and compare its mean absolute error to the one trained on the full dataset. Notice that the score now is calculated using the test set, while in the calculation of the Shapley values we were using the validation set.
model_good_data = GradientBoostingRegressor(n_estimators=3).fit(\n X_train_good_dvl, y_train_good_dvl\n)\nerror_good_data = mean_absolute_error(\n model_good_data.predict(test_data[0]), test_data[1]\n)\n\nmodel_all_data = GradientBoostingRegressor(n_estimators=3).fit(\n training_data[0], training_data[1]\n)\nerror_all_data = mean_absolute_error(model_all_data.predict(test_data[0]), test_data[1])\n\nprint(f\"Improvement: {100*(error_all_data - error_good_data)/error_all_data:02f}%\")\n
Improvement: 15.314214%\n
The score has improved by about 15%! This is quite an important result, as it shows a consistent process to improve the performance of a model by excluding data points from its training set.
Let us take all the songs by Billie Eilish, set their score to 0 and re-calculate the Shapley values.
y_train_anomalous = training_data[1].copy(deep=True)\ny_train_anomalous[artist == \"Billie Eilish\"] = 0\nanomalous_dataset = Dataset(\n x_train=training_data[0],\n y_train=y_train_anomalous,\n x_test=val_data[0],\n y_test=val_data[1],\n)\ngrouped_anomalous_dataset = GroupedDataset.from_dataset(anomalous_dataset, artist)\nanomalous_utility = Utility(\n model=GradientBoostingRegressor(n_estimators=3),\n data=grouped_anomalous_dataset,\n scorer=Scorer(\"neg_mean_absolute_error\", default=0.0),\n)\nvalues = compute_shapley_values(\n anomalous_utility,\n mode=ShapleyMode.TruncatedMontecarlo,\n done=AbsoluteStandardError(threshold=0.2, fraction=0.9) | MaxUpdates(1000),\n n_jobs=-1,\n)\nvalues.sort(key=\"value\")\ndf = values.to_dataframe(column=\"data_value\", use_names=True)\n
Let us now consider the low-value artists (at least for predictive purposes, no claims are made about their artistic value!) and plot the results
And Billie Eilish (our anomalous data group) has moved from top contributor to having negative impact on the performance of the model, as expected!
What is going on? A popularity of 0 for Billie Eilish's songs is inconsistent with listening patterns for other artists. In artificially setting this, we degrade the predictive power of the model.
By dropping low-value groups or samples, one can often increase model performance, but by inspecting them, it is possible to identify bogus data sources or acquisition methods.
pyDVL provides a support function for this notebook, load_spotify_dataset() , which downloads data on songs published after 2014, and splits 30% of data for testing, and 30% of the remaining data for validation. The return value is a triple of training, validation and test data as lists of the form [X_input, Y_label] .
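A hypothetical call could look as follows; the import path and parameter names are illustrative only and may differ from the support module actually shipped with the notebooks:
from support.shapley import load_spotify_dataset  # hypothetical import path

training_data, val_data, test_data = load_spotify_dataset(val_size=0.3, test_size=0.3)
X_train, y_train = training_data  # each split is a [X_input, Y_label] pair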
Now we can calculate the contribution of each group to the model performance.
As a model, we use scikit-learn's GradientBoostingRegressor, but pyDVL can work with any model from sklearn, xgboost or lightgbm. More precisely, any model that implements the protocol pydvl.utils.types.SupervisedModel, which is just the standard sklearn interface of fit(), predict() and score(), can be used to construct the utility.
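For illustration only, a minimal model satisfying this protocol could be as simple as the following (this class is not part of pyDVL or the notebook):
import numpy as np
from numpy.typing import NDArray


class MeanPredictor:
    """Toy regressor implementing the fit(), predict() and score() interface."""

    def fit(self, x: NDArray, y: NDArray) -> None:
        self.mean_ = float(np.mean(y))

    def predict(self, x: NDArray) -> NDArray:
        return np.full(len(x), self.mean_)

    def score(self, x: NDArray, y: NDArray) -> float:
        # Negative mean absolute error, so that higher is better
        return -float(np.mean(np.abs(self.predict(x) - y)))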
The third and final component is the scoring function. It can be anything like accuracy or \(R^2\), and is set with a string from the standard sklearn scoring methods. Please refer to that documentation for information on how to define your own scoring function.
We group dataset, model and scoring function into an instance of Utility .
One interesting test is to corrupt some data and to monitor how their value changes. To do this, we will take one of the artists with the highest value and set the popularity of all their songs to 0.
This notebook shows how to calculate Shapley values for the K-Nearest Neighbours algorithm. By making use of the local structure of KNN, it is possible to compute an exact value in almost linear time, as opposed to the exponential complexity of exact, model-agnostic Shapley.
The main idea is to exploit the fact that adding or removing points beyond the k-ball doesn't influence the score. Because the algorithm then essentially only needs to do a search, it runs in \(\mathcal{O}(N \log N)\) time.
By further using approximate nearest neighbours, it is possible to achieve \((\epsilon,\delta)\)-approximations in sublinear time. However, this is not implemented in pyDVL yet.
We refer to the original paper that pyDVL implements for details: Jia, Ruoxi, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li, Ce Zhang, Costas Spanos, and Dawn Song. Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms. Proceedings of the VLDB Endowment 12, no. 11 (1 July 2019): 1610–23.
The main entry point is the function compute_shapley_values(), which provides a facade to all Shapley methods. In order to use it we need the classes Dataset, Utility and Scorer, all of which can be imported from pydvl.value:
from pydvl.value import *
sklearn_dataset = datasets.load_iris()
data = Dataset.from_sklearn(sklearn_dataset)
knn = sk.neighbors.KNeighborsClassifier(n_neighbors=5)
utility = Utility(knn, data)
shapley_values = compute_shapley_values(utility, mode=ShapleyMode.KNN, progress=True)
shapley_values.sort(key="value")
values = shapley_values.values
If we now look at the distribution of Shapley values for each class, we see that each has samples with both high and low scores. This is expected, because an accurate model uses information of all classes.
corrupted_data = deepcopy(data)
n_corrupted = 10
corrupted_data.y_train[:n_corrupted] = (corrupted_data.y_train[:n_corrupted] + 1) % 3
knn = sk.neighbors.KNeighborsClassifier(n_neighbors=5)
contaminated_values = compute_shapley_values(
    Utility(knn, corrupted_data), mode=ShapleyMode.KNN
)
Taking the average corrupted value and comparing it to non-corrupted ones, we notice that on average anomalous points have a much lower score, i.e. they tend to be much less valuable to the model.
To do this, first we make sure that we access the results by data index with a call to ValuationResult.sort(), then we split the values into two groups: corrupted and non-corrupted. Note how we access the values property of the ValuationResult object. This is a numpy array of values, sorted however the object was sorted. Finally, we compute the quantiles of the two groups and compare them. We see that the corrupted mean is in the lowest percentile of the value distribution, while the correct mean is in the 70th percentile.
contaminated_values.sort(
    key="index"
)  # This is redundant, but illustrates sorting, which is in-place

corrupted_shapley_values = contaminated_values.values[:n_corrupted]
correct_shapley_values = contaminated_values.values[n_corrupted:]

mean_corrupted = np.mean(corrupted_shapley_values)
mean_correct = np.mean(correct_shapley_values)
percentile_corrupted = np.round(100 * np.mean(values < mean_corrupted), 0)
percentile_correct = np.round(100 * np.mean(values < mean_correct), 0)

print(
    f"The corrupted mean is at percentile {percentile_corrupted:.0f} of the value distribution."
)
print(
    f"The correct mean is percentile {percentile_correct:.0f} of the value distribution."
)
The corrupted mean is at percentile 2 of the value distribution.
The correct mean is percentile 71 of the value distribution.
This is confirmed if we plot the distribution of Shapley values and circle corrupt points in red. They all tend to have low Shapley scores, regardless of their position in space and assigned label:
We use the sklearn iris dataset and wrap it into a pydvl.utils.dataset.Dataset by calling the factory pydvl.utils.dataset.Dataset.from_sklearn(). This automatically creates a train/test split for us, which will be used to compute the utility.
We then create a model and instantiate a Utility using data and model. The model needs to implement the protocol pydvl.utils.types.SupervisedModel, which is just the standard sklearn interface of fit(), predict() and score(). In constructing the Utility one can also choose a scoring function, but we pick the default, which is just the model's knn.score().
Calculating the Shapley values is straightforward. We just call compute_shapley_values() with the utility object we created above. The function returns a ValuationResult . This object contains the values themselves, data indices and labels.
Let us first look at the labels' distribution as a function of petal and sepal length:
To test how informative values are, we can corrupt some training labels and see how their Shapley values change with respect to the non-corrupted points.
This notebook introduces Data Utility Learning , a method of approximating Data Shapley values by learning to estimate the utility function.
The idea is to employ a model to learn the performance of the learning algorithm of interest on unseen data combinations (i.e. subsets of the dataset). The method was originally described in Wang, Tianhao, Yu Yang, and Ruoxi Jia. Improving Cooperative Game Theory-Based Data Valuation via Data Utility Learning. arXiv, 2022.
Warning: Work on Data Utility Learning is preliminary. It remains to be seen when or whether it can be put effectively into application. For this, further testing and benchmarking are required.
Recall the definition of Shapley value \(v_u(i)\) for data point \(i\):
$$v_u(i) = \frac{1}{n} \sum_{S \subseteq N \setminus \{i\}} \binom{n-1}{|S|}^{-1} \left[u(S \cup \{i\}) - u(S)\right],$$
where \\(N\\) is the set of all indices in the training set and \\(u\\) is the utility.
In Data Utility Learning, to avoid the exponential cost of computing this sum, one learns a surrogate model for \\(u\\) . We start by sampling so-called utility samples to form a training set \\(S_\\mathrm{train}\\) for our utility model. Each utility sample is a tuple consisting of a subset of indices \\(S_j\\) in the dataset and its utility \\(u(S_j)\\) :
where \\(m_\\mathrm{train}\\) denotes the training budget for the learned utility function.
The subsets are then transformed into boolean vectors \\(\\phi\\) in which a \\(1\\) at index \\(k\\) means that the \\(k\\) -th sample of the dataset is present in the subset:
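As a small illustration of this encoding (plain numpy, not part of pyDVL's API):
import numpy as np

n = 15  # number of training points used in this notebook
S_j = [0, 3, 7]  # an example subset of indices

phi = np.zeros(n, dtype=int)
phi[S_j] = 1  # a 1 at position k means that the k-th sample is in the subset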
We fit a regression model \(\tilde{u}\), called data utility model, on the transformed utility samples \(\phi(\mathcal{S}_\mathrm{train}) := \{(\phi(S_j), u(S_j)) : j = 1, \ldots, m_\mathrm{train}\}\) and use it to predict instead of computing the utility for any \(S_j \notin \mathcal{S}_\mathrm{train}\). We abuse notation and identify \(\tilde{u}\) with the composition \(\tilde{u} \circ \phi : N \rightarrow \mathbb{R}\).
The main assumption is that it is much faster to fit and use \\(\\tilde{u}\\) than it is to compute \\(u\\) and that for most \\(i\\) , \\(v_\\tilde{u}(i) \\approx v_u(i)\\) in some sense.
As is the case with all other Shapley methods, the main entry point is the function compute_shapley_values() , which provides a facade to all algorithms in this family. We use it with the usual classes Dataset and Utility . In addition, we must import the core class for learning a utility, DataUtilityLearning .
%autoreload
from pydvl.utils import DataUtilityLearning, top_k_value_accuracy
from pydvl.reporting.plots import shaded_mean_std
from pydvl.value import *
dataset = Dataset.from_sklearn(
    load_iris(),
    train_size=train_size,
    random_state=random_state,
    stratify_by_target=True,
)
We verify that, as in the paper, if we fit a Support-Vector Classifier to the training data, we obtain an accuracy of around 92%:
model = LinearSVC()
model.fit(dataset.x_train, dataset.y_train)
print(f"Mean accuracy: {100 * model.score(dataset.x_test, dataset.y_test):0.2f}%")
Mean accuracy: 92.59%
computation_times = {}
utility = Utility(model=model, data=dataset)
start_time = time.monotonic()

result = compute_shapley_values(
    u=utility,
    mode=ShapleyMode.CombinatorialExact,
    n_jobs=-1,
    progress=False,
)

computation_time = time.monotonic() - start_time
computation_times["exact"] = computation_time

df = result.to_dataframe(column="exact").drop(columns=["exact_stderr"])
We now estimate the Data Shapley values using the DataUtilityLearning wrapper. This class wraps a Utility and delegates calls to it, up until a given budget. Every call yields a utility sample which is saved under the hood for training of the given utility model. Once the budget is exhausted, DataUtilityLearning fits the model to the utility samples and all subsequent calls use the learned model to predict the wrapped utility instead of delegating to it.
For the utility model we follow the paper and use a fully connected neural network. To train it we use a total of training_budget utility samples. We repeat this multiple times for each training budget.
mlp_kwargs = dict(
    hidden_layer_sizes=(20, 10),
    activation="relu",
    solver="adam",
    learning_rate_init=0.001,
    batch_size=batch_size,
    max_iter=800,
)

print(
    f"Doing {n_runs} runs for each of {len(training_budget_values)} different training budgets."
)

pbar = tqdm(
    product(range(n_runs), training_budget_values),
    total=n_runs * len(training_budget_values),
)
for idx, budget in pbar:
    pbar.set_postfix_str(f"Run {idx} for training budget: {budget}")
    dul_utility = DataUtilityLearning(
        u=utility, training_budget=budget, model=MLPRegressor(**mlp_kwargs)
    )

    start_time = time.monotonic()

    # DUL will kick in after training_budget calls to utility
    result = compute_shapley_values(
        u=dul_utility,
        mode=ShapleyMode.PermutationMontecarlo,
        done=MaxUpdates(300),
        n_jobs=-1,
    )

    computation_time = time.monotonic() - start_time
    if budget in computation_times:
        computation_times[budget].append(computation_time)
    else:
        computation_times[budget] = [computation_time]

    dul_df = result.to_dataframe(column=f"{budget}_{idx}").drop(
        columns=[f"{budget}_{idx}_stderr"]
    )
    df = pd.concat([df, dul_df], axis=1)

computation_times_df = pd.DataFrame(computation_times)
Doing 10 runs for each of 10 different training budgets.
Next we compute the \(\ell_2\) error for the different training budgets across all runs and plot its mean and standard deviation. We obtain results analogous to Figure 1 of the paper, verifying that the method indeed works for estimating the Data Shapley values (at least in this context).
In the plot we also display the mean and standard deviation of the computation time taken for each training budget.
errors = np.zeros((len(training_budget_values), n_runs), dtype=float)
accuracies = np.zeros((len(training_budget_values), n_runs), dtype=float)

top_k = 3

for i, budget in enumerate(training_budget_values):
    for j in range(n_runs):
        y_true = df["exact"].values
        y_estimated = df[f"{budget}_{j}"].values
        errors[i, j] = np.linalg.norm(y_true - y_estimated, ord=2)
        accuracies[i, j] = top_k_value_accuracy(y_true, y_estimated, k=top_k)

error_from_mean = np.linalg.norm(df["exact"].values - df["exact"].values.mean(), ord=2)
Let us next look at how well the ranking of values resulting from using the surrogate \\(\\tilde{u}\\) matches the ranking by the exact values. For this we fix \\(k=3\\) and consider the \\(k\\) samples with the highest value according to \\(\\tilde{u}\\) and \\(u\\) :
Finally, for each sample, we look at the distance of the estimates to the exact value across runs. Boxes are centered at the 50th percentile with whiskers at the 25th and 75th. We plot relative distances, as a percentage. We observe a general tendency to underestimate the value:
highest_value_index = df.index[df["exact"].argmax()]
y_train_corrupted = dataset.y_train.copy()
y_train_corrupted[highest_value_index] = (
    y_train_corrupted[highest_value_index] + 1
) % 3

corrupted_dataset = Dataset(
    x_train=dataset.x_train,
    y_train=y_train_corrupted,
    x_test=dataset.x_test,
    y_test=dataset.y_test,
)
We retrain the model on the new dataset and verify that the accuracy decreases:
model = LinearSVC()
model.fit(dataset.x_train, y_train_corrupted)
print(f"Mean accuracy: {100 * model.score(dataset.x_test, dataset.y_test):0.2f}%")
Mean accuracy: 82.96%
Finally, we recompute the values of all samples using the exact method and the best training budget previously obtained and then plot the resulting scores.
best_training_budget = training_budget_values[errors.mean(axis=1).argmin()]

utility = Utility(
    model=LinearSVC(),
    data=corrupted_dataset,
)

result = compute_shapley_values(
    u=utility,
    mode=ShapleyMode.CombinatorialExact,
    n_jobs=-1,
    progress=False,
)
df_corrupted = result.to_dataframe(column="exact").drop(columns=["exact_stderr"])

dul_utility = DataUtilityLearning(
    u=utility, training_budget=best_training_budget, model=MLPRegressor(**mlp_kwargs)
)

result = compute_shapley_values(
    u=dul_utility,
    mode=ShapleyMode.PermutationMontecarlo,
    done=MaxUpdates(300),
    n_jobs=-1,
)
dul_df = result.to_dataframe(column="estimated").drop(columns=["estimated_stderr"])
df_corrupted = pd.concat([df_corrupted, dul_df], axis=1)
We can see in the figure that both methods assign the lowest value to the sample with the corrupted label.
Following the paper, we take 15 samples (10%) from the Iris dataset and compute their Data Shapley values by using all the remaining samples as test set for computing the utility, which in this case is accuracy.
We start by defining the utility using the model and computing the exact Data Shapley values from the definition above.
One interesting way to assess the Data Utility Learning approach is to corrupt some data and monitor how the value changes. To do this, we will take the sample with the highest score and change its label.
If you want to jump straight in, install pyDVL and then check out the examples. You will probably want to install with support for influence function computation.
We have introductions to the ideas behind Data valuation and Influence functions, as well as a short overview of common applications.
To install the latest release use:
pip install pyDVL
See Extras for optional dependencies, in particular if you are interested in influence functions. You can also install the latest development version from TestPyPI:
pip install pyDVL --index-url https://test.pypi.org/simple/
In order to check the installation you can use:
python -c "import pydvl; print(pydvl.__version__)"
pyDVL requires Python >= 3.8, numpy, scikit-learn, scipy, and cvxpy for the core methods, and joblib for local parallelization. Additionally, the influence functions module requires PyTorch (see Extras below).
pyDVL has a few extra dependencies that can be optionally installed:
To use the module on influence functions, pydvl.influence, run:
pip install pyDVL[influence]
This includes a dependency on PyTorch (Version 2.0 and above) and thus is left out by default.
In case that you have a supported version of CUDA installed (v11.2 to 11.8 as of this writing), you can enable eigenvalue computations for low-rank approximations with CuPy on the GPU by using:
pip install pyDVL[cupy]
This installs cupy-cuda11x.
If you use a different version of CUDA, please install CuPy manually.
If you want to use Ray to distribute data valuation workloads across nodes in a cluster (it can be used locally as well, but for this we recommend joblib instead) install pyDVL using:
pip install pyDVL[ray]
See the intro to parallelization for more details on how to use it.
If you want to use Memcached for caching utility evaluations, use:
pip install pyDVL[memcached]
This installs pymemcache additionally. Be aware that you still have to start a memcached server manually. See Setting up the Memcached cache.
Besides the dos and don'ts of data valuation itself, which are the subject of the examples and the documentation of each method, there are two main things to keep in mind when using pyDVL: parallelization and caching.
pyDVL uses parallelization to scale and speed up computations. It does so using one of Dask, Ray or Joblib. The first is used in the influence package whereas the other two are used in the value package.
For data valuation, pyDVL uses joblib for local parallelization (within one machine) and supports using Ray for distributed parallelization (across multiple machines).
The former works out of the box but for the latter you will need to install additional dependencies (see Extras) and to provide a running cluster (or run ray in local mode).
As of v0.9.0 pyDVL does not allow requesting resources per task sent to the cluster, so you will need to make sure that each worker has enough resources to handle the tasks it receives. A data valuation task using game-theoretic methods will typically make a copy of the whole model and dataset to each worker, even if the re-training only happens on a subset of the data. This means that you should make sure that each worker has enough memory to handle the whole dataset.
We use backend classes for both joblib and ray, as well as two types of executors for the different algorithms: the first uses a map-reduce pattern, as seen in the MapReduceJob class, and the second implements the futures executor interface from concurrent.futures.
As a convenience, you can also instantiate a parallel backend class by using the init_parallel_backend function:
from pydvl.parallel import init_parallel_backend
parallel_backend = init_parallel_backend(backend_name="joblib")
The executor classes are not meant to be instantiated and used by users of pyDVL. They are used internally as part of the computations of the different methods.
We are currently planning to deprecate MapReduceJob in favour of the futures executor interface because it allows for more diverse computation patterns with interruptions.
Please follow the instructions in Joblib's documentation for all possible configuration options that you can pass to the parallel_config context manager.
To use the joblib parallel backend with the loky backend and verbosity set to 100 to compute exact Shapley values, you would use:
import joblib
from pydvl.parallel import JoblibParallelBackend
from pydvl.value.shapley import combinatorial_exact_shapley
from pydvl.utils.utility import Utility

parallel_backend = JoblibParallelBackend()
u = Utility(...)

with joblib.parallel_config(backend="loky", verbose=100):
    values = combinatorial_exact_shapley(u, parallel_backend=parallel_backend)
Additional dependencies
The Ray parallel backend requires optional dependencies. See Extras for more information.
Please follow the instructions in Ray's documentation to set up a remote cluster. You could alternatively use a local cluster and in that case you don't have to set anything up.
Before starting a computation, you should initialize ray by calling ray.init with the appropriate parameters:
To set up and start a local ray cluster with 4 CPUs you would use:
import ray

ray.init(num_cpus=4)
Whereas for a remote ray cluster you would use:
import ray

address = "<Hypothetical Ray Cluster IP Address>"
ray.init(address)
To use the Ray parallel backend to compute exact Shapley values, you would use:
import ray
from pydvl.parallel import RayParallelBackend
from pydvl.value.shapley import combinatorial_exact_shapley
from pydvl.utils.utility import Utility

ray.init()
parallel_backend = RayParallelBackend()
u = Utility(...)
values = combinatorial_exact_shapley(u, parallel_backend=parallel_backend)
For the futures executor interface, we have implemented an executor class for ray in RayExecutor and rely on joblib's loky get_reusable_executor function to instantiate an executor for local parallelization.
They are both compatible with the built-in ThreadPoolExecutor and ProcessPoolExecutor classes.
>>> from joblib.externals.loky import _ReusablePoolExecutor
>>> from pydvl.parallel import JoblibParallelBackend
>>> parallel_backend = JoblibParallelBackend()
>>> with parallel_backend.executor() as executor:
...     results = list(executor.map(lambda x: x + 1, range(3)))
...
>>> results
[1, 2, 3]
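The same pattern works with the Ray backend; the following is only a sketch and assumes the optional Ray dependencies are installed and a local or remote cluster is reachable:
import ray
from pydvl.parallel import RayParallelBackend

ray.init()  # or ray.init(address=...) for a remote cluster
parallel_backend = RayParallelBackend()
with parallel_backend.executor() as executor:
    results = list(executor.map(lambda x: x + 1, range(3)))
assert results == [1, 2, 3]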
The map-reduce interface is older and more limited in the patterns it allows us to use.
To reproduce the previous example using MapReduceJob, we would use:
>>> from pydvl.parallel import JoblibParallelBackend, MapReduceJob
>>> parallel_backend = JoblibParallelBackend()
>>> map_reduce_job = MapReduceJob(
...     list(range(3)),
...     map_func=lambda x: x[0] + 1,
...     parallel_backend=parallel_backend,
... )
>>> results = map_reduce_job()
>>> results
[1, 2, 3]
Refer to Scaling influence computation for explanations about parallelization for Influence Functions.
pyDVL can cache (memoize) the computation of the utility function to speed up some computations for data valuation. It is however disabled by default. When it is enabled, it takes into account the data indices passed as arguments and the utility function wrapped in the Utility object. This means that care must be taken when reusing the same utility function with different data; see the documentation for the caching package for more information.
In general, caching won't play a major role in the computation of Shapley values because the probability of sampling the same subset twice, and hence needing the same utility function computation, is very low. However, it can be very useful when comparing methods that use the same utility function, or when running multiple experiments with the same data.
InMemoryCacheBackend: an in-memory cache backend that uses a dictionary to store and retrieve cached values. This is used to share cached values between threads in a single process.
DiskCacheBackend: a disk-based cache backend that uses pickled values written to and read from disk. This is used to share cached values between processes in a single machine.
The Memcached backend requires optional dependencies. See Extras for more information.
As an example, here's how one would use the disk-based cache backend with a utility:
from pydvl.utils.caching.disk import DiskCacheBackend
from pydvl.utils.utility import Utility

cache_backend = DiskCacheBackend()
u = Utility(..., cache_backend=cache_backend)
Please refer to the documentation and examples of each backend class for more details.
When is the cache really necessary?
Crucially, semi-value computations with the PermutationSampler require caching to be enabled, or they will take twice as long as the direct implementation in compute_shapley_values.
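As an illustration, enabling the in-memory cache for a permutation-based computation could look roughly like this (a sketch: the import path of InMemoryCacheBackend is assumed to mirror that of the disk backend, and model and data are placeholders):
from pydvl.utils.caching.memory import InMemoryCacheBackend
from pydvl.utils.utility import Utility
from pydvl.value import MaxUpdates, ShapleyMode, compute_shapley_values

cache_backend = InMemoryCacheBackend()
u = Utility(model, data, cache_backend=cache_backend)
values = compute_shapley_values(
    u, mode=ShapleyMode.PermutationMontecarlo, done=MaxUpdates(100)
)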
Using the cache
Continue reading about the cache in the documentation for the caching package.
Memcached is an in-memory key-value store accessible over the network. pyDVL can use it to cache the computation of the utility function and speed up some computations (in particular, semi-value computations with the PermutationSampler but other methods may benefit as well).
You can either install it as a package or run it inside a docker container (the simplest). For installation instructions, refer to the Getting started section in memcached's wiki. Then you can run it with:
memcached -u user
To run memcached inside a container in daemon mode instead, use:
docker container run -d --rm -p 11211:11211 memcached:latest
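Once the server is running, you can point pyDVL's Memcached backend at it when constructing the utility. The following is a sketch; the configuration class and its defaults are assumed from the caching package:
from pydvl.utils.caching.memcached import MemcachedCacheBackend, MemcachedClientConfig
from pydvl.utils.utility import Utility

config = MemcachedClientConfig(server=("localhost", 11211))  # assumed default server tuple
u = Utility(..., cache_backend=MemcachedCacheBackend(config=config))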
Data valuation methods can improve various aspects of data engineering and machine learning workflows. When applied judiciously, these methods can enhance data quality, model performance, and cost-effectiveness.
However, the results can be inconsistent. Values have a strong dependency on the training procedure and the performance metric used. For instance, accuracy is a poor metric for imbalanced sets and this has a stark effect on data values. Some models exhibit great variance in some regimes and this again has a detrimental effect on values. See Problems of data values for more on this.
Here we quickly enumerate the most common uses of data valuation. For a comprehensive overview, along with concrete examples, please refer to the Transferlab blog post on this topic.
Some of the promising applications in data engineering include:
Some of the useful applications include:
Data valuation techniques have applications in detecting data manipulation and contamination, although the feasibility of such attacks is limited.
Additionally, one of the motivating applications for the whole field is that of data markets, where data valuation can be the key component to determine the price of data.
Game-theoretic valuation methods like Shapley values can help assign fair prices, but have limitations around handling duplicates or adversarial data. Model-free methods like LAVA (Just et al., 2023)2 and CRAIG are particularly well suited for this, as they use the Wasserstein distance between a vendor's data and the buyer's to determine the value of the former.
However, this is a complex problem which can face simple practical problems like data owners not willing to disclose their data for valuation, even to a broker.
Broderick, T., Giordano, R., Meager, R., 2021. An Automatic Finite-Sample Robustness Metric: When Can Dropping a Little Data Make a Big Difference?
Just, H.A., Kang, F., Wang, T., Zeng, Y., Ko, M., Jin, M., Jia, R., 2023. LAVA: Data Valuation without Pre-Specified Learning Algorithms. Presented at The Eleventh International Conference on Learning Representations (ICLR 2023).
Because the magnitudes of values or influences from different algorithms, or datasets, are not comparable to each other, evaluation of the methods is typically done with downstream tasks.
Data valuation is particularly useful for data selection, pruning and inspection in general. For this reason, the most common benchmarks are data removal and noisy label detection.
After computing the values for all data in \\(T = \\{ \\mathbf{z}_i : i = 1, \\ldots, n \\}\\), the set is sorted by decreasing value. We denote by \\(T_{[i :]}\\) the sorted sequence of points \\((\\mathbf{z}_i, \\mathbf{z}_{i + 1}, \\ldots, \\mathbf{z}_n)\\) for \\(1 \\leqslant i \\leqslant n\\). Now train successively \\(f_{T [i :]}\\) and compute its accuracy \\(a_{T_{[i :]}} (D_{\\operatorname{test}})\\) on the held-out test set, then plot all numbers. By using \\(D_{\\operatorname{test}}\\) one approximates the expected accuracy drop on unseen data. Because the points removed have a high value, one expects performance to drop visibly wrt. a random baseline.
The complementary experiment removes data in increasing order, with the lowest valued points first. Here one expects performance to increase relative to randomly removing points before training. Additionally, every real dataset will include slightly out-of-distribution points, so one should also expect an absolute increase in performance when some of the lowest valued points are removed.
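A sketch of such a removal curve, using a previously computed ValuationResult named result and plain scikit-learn style calls (make_model and the array names are placeholders):
import numpy as np

result.sort(key="value")            # ascending order of value
order = result.indices[::-1]        # highest-valued points first

scores = []
step = max(1, len(order) // 20)
for i in range(0, len(order), step):
    kept = order[i:]                # drop the i highest-valued points
    model = make_model().fit(X_train[kept], y_train[kept])
    scores.append(model.score(X_test, y_test))
# scores should drop faster than for a random-removal baseline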
This experiment explores the extent to which data values computed with one (cheap) model can be transferred to another (potentially more complex) one. Different classifiers are used as a source to calculate data values. These values are then used in the point removal tasks described above, but using a different (target) model for evaluation of the accuracies \\(a_{T [i :]}\\). A multi-layer perceptron is added for evaluation as well.
This experiment tests the ability of a method to detect mislabeled instances in the data. A fixed fraction \\(\\alpha\\) of the training data are picked at random and their labels flipped. Data values are computed, then the \\(\\alpha\\)-fraction of lowest-valued points are selected, and the overlap with the subset of flipped points is computed. This synthetic experiment is however hard to put into practical use, since the fraction \\(\\alpha\\) is of course unknown in practice.
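A sketch of this synthetic experiment, where value() stands for any of the valuation methods and all other names are placeholders:
import numpy as np

alpha = 0.1
rng = np.random.default_rng(16)
n = len(y_train)
flipped = rng.choice(n, size=int(alpha * n), replace=False)
y_noisy = y_train.copy()
y_noisy[flipped] = (y_noisy[flipped] + 1) % n_classes   # flip to another class

result = value(X_train, y_noisy)                        # placeholder valuation call
result.sort(key="value")
lowest = set(result.indices[: len(flipped)])            # the alpha-fraction of lowest values
overlap = len(lowest & set(flipped)) / len(flipped)
print(f"Fraction of flipped points among the lowest-valued: {overlap:.0%}")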
Introduced in (Wang and Jia, 2023), one can look at how stable the top \(k\)% of the values is across runs. Rank stability of a method is necessary but not sufficient for good results. Ideally one wants to identify high-value points reliably (good precision and recall) and consistently (good rank stability).
This section is basically a stub
Although in principle one can compute the average influence over the test set and run the same tasks as above, because influences are computed for each pair of training and test sample, they typically require different experiments to compare their efficacy.
The biggest difficulty when computing influences is the approximation of the inverse Hessian-vector product. For this reason one often sees in the literature the quality of the approximation to LOO as an indicator of performance, the exact Influence Function being a first order approximation to it. However, as shown by (Bae et al., 2022)1, the different approximation errors, ensuing from lack of convexity, approximate Hessian-vector products and so on, lead to this being a poor benchmark overall.
(Kong et al., 2022)2 introduce a method using IFs to re-label harmful training samples in order to improve accuracy. One can then take the obtained improvement as a measure of the quality of the IF method.
Introduced in [@...], the idea is to compute influences over a carefully selected fair set, and using them to re-weight the training data.
Bae, J., Ng, N., Lo, A., Ghassemi, M., Grosse, R.B., 2022. If Influence Functions are the Answer, Then What is the Question?, in: Advances in Neural Information Processing Systems. Presented at NeurIPS 2022, pp. 17953–17967.
Kong, S., Shen, Y., Huang, L., 2022. Resolving Training Biases via Influence-based Data Relabeling. Presented at the International Conference on Learning Representations (ICLR 2022).
Make sure you have read Getting started before using the library. In particular read about which extra dependencies you may need.
pyDVL aims to be a repository of production-ready, reference implementations of algorithms for data valuation and influence functions. Even though we only briefly introduce key concepts in the documentation, the following sections should be enough to get you started.
If you are somewhat familiar with the concepts of data valuation, you can start by browsing our worked-out examples illustrating pyDVL's capabilities either:
Refer to the Advanced usage page for explanations on how to enable and use parallelization and caching.
This glossary is meant to provide only brief explanations of each term, helping to clarify the concepts and techniques used in the library. For more detailed information, please refer to the relevant literature or resources.
This glossary is still a work in progress. Pull requests are welcome!
Terms in data valuation and influence functions:
The Arnoldi method approximately computes eigenvalue, eigenvector pairs of a symmetric matrix. For influence functions, it is used to approximate the iHVP. Introduced by (Schioppa et al., 2022)1 in the context of influence functions.
A blocked version of CG, which solves several linear systems simultaneously. For Influence Functions, it is used to approximate the iHVP.
Class-wise Shapley is a Shapley valuation method which introduces a utility function that balances in-class, and out-of-class accuracy, with the goal of favoring points that improve the model's performance on the class they belong to. It is estimated to be particularly useful in imbalanced datasets, but more research is needed to confirm this. Introduced by (Schoch et al., 2022)2.
CG is an algorithm for solving linear systems with a symmetric and positive-definite coefficient matrix. For Influence Functions, it is used to approximate the iHVP.
Data Utility Learning is a method that uses an ML model to learn the utility function. Essentially, it learns to predict the performance of a model when trained on a given set of indices from the dataset. The cost of training this model is quickly amortized by avoiding costly re-evaluations of the original utility. Introduced by (Wang et al., 2022)3.
EKFAC builds on K-FAC by correcting for the approximation errors in the eigenvalues of the blocks of the Kronecker-factored approximate curvature matrix. This correction aims to refine the accuracy of natural gradient approximations, thus potentially offering better training efficiency and stability in neural networks.
Group Testing is a strategy for identifying characteristics within groups of items efficiently, by testing groups rather than individuals to quickly narrow down the search for items with specific properties. Introduced into data valuation by (Jia et al., 2019)4.
The Influence Function measures the impact of a single data point on a statistical estimator. In machine learning, it's used to understand how much a particular data point affects the model's prediction. Introduced into data valuation by (Koh and Liang, 2017)5.
iHVP is the operation of calculating the product of the inverse Hessian matrix of a function and a vector, without explicitly constructing nor inverting the full Hessian matrix first. This is essential for influence function computation.
K-FAC is an optimization technique that approximates the Fisher Information matrix's inverse efficiently. It uses the Kronecker product to factor the matrix, significantly speeding up the computation of natural gradient updates and potentially improving training efficiency.
The Least Core is a solution concept in cooperative game theory, referring to the smallest set of payoffs to players that cannot be improved upon by any coalition, ensuring stability in the allocation of value. In data valuation, it implies solving a linear and a quadratic system whose constraints are determined by the evaluations of the utility function on every subset of the training data. Introduced as data valuation method by (Yan and Procaccia, 2021)6.
LiSSA is an efficient algorithm for approximating the inverse Hessian-vector product, enabling faster computations in large-scale machine learning problems, particularly for second-order optimization. For Influence Functions, it is used to approximate the iHVP. Introduced by (Agarwal et al., 2017)7.
LOO in the context of data valuation refers to the process of evaluating the impact of removing individual data points on the model's performance. The value of a training point is defined as the marginal change in the model's performance when that point is removed from the training set.
MSR is a sampling method for data valuation that updates the value of every data point in one sample. This method can achieve much faster convergence. Introduced by (Wang and Jia, 2023)8.
MCLC is a variation of the Least Core that uses a reduced amount of constraints, sampled randomly from the powerset of the training data. Introduced by (Yan and Procaccia, 2021)6.
MCS estimates the Shapley Value using a Monte Carlo approximation to the sum over subsets of the training set. This reduces computation to polynomial time at the cost of accuracy, but this loss is typically irrelevant for downstream applications in ML. Introduced into data valuation by (Ghorbani and Zou, 2019)9.
The Nystr\u00f6m approximation computes a low-rank approximation to a symmetric positive-definite matrix via random projections. For influence functions, it is used to approximate the iHVP. Introduced as sketch and solve algorithm in (Hataya and Yamada, 2023)10, and as preconditioner for PCG in (Frangella et al., 2023)11.
A task in data valuation where the quality of a valuation method is measured through the impact of incrementally removing data points on the model's performance, with the points removed in order of their value.
A blocked version of PCG, which solves several linear systems simultaneously. For Influence Functions, it is used to approximate the iHVP.
A preconditioned version of CG for improved convergence, depending on the characteristics of the matrix and the preconditioner. For Influence Functions, it is used to approximate the iHVP.
Shapley Value is a concept from cooperative game theory that allocates payouts to players based on their contribution to the total payoff. In data valuation, players are data points. The method assigns a value to each data point based on a weighted average of its marginal contributions to the model's performance when trained on each subset of the training set. This requires \\(\\mathcal{O}(2^{n-1})\\) re-trainings of the model, which is infeasible for even trivial data set sizes, so one resorts to approximations like TMCS. Introduced into data valuation by (Ghorbani and Zou, 2019)9.
TMCS is an efficient approach to estimating the Shapley Value using a truncated version of the Monte Carlo method, reducing computation time while maintaining accuracy in large datasets. Introduced by (Ghorbani and Zou, 2019)9.
WAD is a metric to evaluate the impact of sequentially removing data points on the performance of a machine learning model, weighted by their rank, i.e. by the time at which they were removed. Introduced by (Schoch et al., 2022)2.
CV is a statistical measure of the dispersion of data points in a data series around the mean, expressed as a percentage. It's used to compare the degree of variation from one data series to another, even if the means are drastically different.
A CSP involves finding values for variables within specified constraints or conditions, commonly used in scheduling, planning, and design problems where solutions must satisfy a set of restrictions.
OOB refers to data samples in an ensemble learning context (like random forests) that are not selected for training a specific model within the ensemble. These OOB samples are used as a validation set to estimate the model's accuracy, providing a convenient internal cross-validation mechanism.
The MLRC is an initiative that encourages the verification and replication of machine learning research findings, promoting transparency and reliability in the field. Papers are published in Transactions on Machine Learning Research (TMLR).
Schioppa, A., Zablotskaia, P., Vilar, D., Sokolov, A., 2022. Scaling Up Influence Functions. Proc. AAAI Conf. Artif. Intell. 36, 8179–8186. https://doi.org/10.1609/aaai.v36i8.20791
Schoch, S., Xu, H., Ji, Y., 2022. CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification, in: Proc. of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS). Presented at the Advances in Neural Information Processing Systems (NeurIPS 2022).
Wang, T., Yang, Y., Jia, R., 2022. Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning. Presented at the International Conference on Learning Representations (ICLR 2022). Workshop on Socially Responsible Machine Learning, arXiv. https://doi.org/10.48550/arXiv.2107.06336
Jia, R., Dao, D., Wang, B., Hubis, F.A., Gurel, N.M., Li, B., Zhang, C., Spanos, C., Song, D., 2019. Efficient task-specific data valuation for nearest neighbor algorithms. Proc. VLDB Endow. 12, 1610–1623. https://doi.org/10.14778/3342263.3342637
Koh, P.W., Liang, P., 2017. Understanding Black-box Predictions via Influence Functions, in: Proceedings of the 34th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 1885–1894.
Yan, T., Procaccia, A.D., 2021. If You Like Shapley Then You'll Love the Core, in: Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021. Presented at the AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence, pp. 5751–5759. https://doi.org/10.1609/aaai.v35i6.16721
Agarwal, N., Bullins, B., Hazan, E., 2017. Second-Order Stochastic Optimization for Machine Learning in Linear Time. JMLR 18, 1–40.
Wang, J.T., Jia, R., 2023. Data Banzhaf: A Robust Data Valuation Framework for Machine Learning, in: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 6388–6421.
Ghorbani, A., Zou, J., 2019. Data Shapley: Equitable Valuation of Data for Machine Learning, in: Proceedings of the 36th International Conference on Machine Learning, PMLR. Presented at the International Conference on Machine Learning (ICML 2019), PMLR, pp. 2242–2251.
Hataya, R., Yamada, M., 2023. Nyström Method for Accurate and Scalable Implicit Differentiation, in: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 4643–4654.
Frangella, Z., Tropp, J.A., Udell, M., 2023. Randomized Nyström Preconditioning. SIAM J. Matrix Anal. Appl. 44, 718–752. https://doi.org/10.1137/21M1466244
We currently implement the following methods:
LOO.
Permutation Shapley (also called ApproShapley) (Castro et al., 2009)1.
TMCS (Ghorbani and Zou, 2019)2.
Data Banzhaf (Wang and Jia, 2023).
Beta Shapley (Kwon and Zou, 2022)3.
CS-Shapley (Schoch et al., 2022)4.
Least Core (Yan and Procaccia, 2021)5.
Owen Sampling (Okhrati and Lipani, 2021)6.
Data Utility Learning (Wang et al., 2022)7.
kNN-Shapley (Jia et al., 2019)8.
Group Testing (Jia et al., 2019)9.
Data-OOB (Kwon and Zou, 2023)10.
CG Influence. (Koh and Liang, 2017)11.
Direct Influence (Koh and Liang, 2017)11.
LiSSA (Agarwal et al., 2017)12.
Arnoldi Influence (Schioppa et al., 2022)13.
EKFAC Influence (George et al., 2018; Martens and Grosse, 2015)1415.
Nystr\u00f6m Influence, based on the ideas in (Hataya and Yamada, 2023)16 for bi-level optimization.
Castro, J., Gómez, D., Tejada, J., 2009. Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research, Selected papers presented at the Tenth International Symposium on Locational Decisions (ISOLDE X) 36, 1726–1730. https://doi.org/10.1016/j.cor.2008.04.004
Ghorbani, A., Zou, J., 2019. Data Shapley: Equitable Valuation of Data for Machine Learning, in: Proceedings of the 36th International Conference on Machine Learning, PMLR. Presented at the International Conference on Machine Learning (ICML 2019), PMLR, pp. 2242–2251.
Kwon, Y., Zou, J., 2022. Beta Shapley: A Unified and Noise-reduced Data Valuation Framework for Machine Learning, in: Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022. Presented at AISTATS 2022, PMLR.
Schoch, S., Xu, H., Ji, Y., 2022. CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification, in: Proc. of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS). Presented at the Advances in Neural Information Processing Systems (NeurIPS 2022).
Yan, T., Procaccia, A.D., 2021. If You Like Shapley Then You'll Love the Core, in: Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021. Presented at the AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence, pp. 5751–5759. https://doi.org/10.1609/aaai.v35i6.16721
Okhrati, R., Lipani, A., 2021. A Multilinear Sampling Algorithm to Estimate Shapley Values, in: 2020 25th International Conference on Pattern Recognition (ICPR). Presented at the 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, pp. 7992–7999. https://doi.org/10.1109/ICPR48806.2021.9412511
Jia, R., Dao, D., Wang, B., Hubis, F.A., Hynes, N., Gürel, N.M., Li, B., Zhang, C., Song, D., Spanos, C.J., 2019. Towards Efficient Data Valuation Based on the Shapley Value, in: Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, pp. 1167–1176.
Kwon, Y., Zou, J., 2023. Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value, in: Proceedings of the 40th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 18135–18152.
Koh, P.W., Liang, P., 2017. Understanding Black-box Predictions via Influence Functions, in: Proceedings of the 34th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 1885–1894.
George, T., Laurent, C., Bouthillier, X., Ballas, N., Vincent, P., 2018. Fast Approximate Natural Gradient Descent in a Kronecker Factored Eigenbasis, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.
Martens, J., Grosse, R., 2015. Optimizing Neural Networks with Kronecker-factored Approximate Curvature, in: Proceedings of the 32nd International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 2408–2417.
The code in the package pydvl.influence is experimental. Package structure and basic API are bound to change before v1.0.0
The influence function (IF) is a method to quantify the effect (influence) that each training point has on the parameters of a model, and by extension on any function thereof. In particular, it allows one to estimate how much each training sample affects the error on a test point, making the IF useful for understanding and debugging models.
Alas, the influence function relies on some assumptions that can make its application difficult. Yet another drawback is that it requires the computation of the inverse of the Hessian of the model wrt. its parameters, which is intractable for large models like deep neural networks. Much of the recent research tackles this issue using approximations, like a Neumann series (Agarwal et al., 2017)1, with the most successful solution using a low-rank approximation that iteratively finds increasing eigenspaces of the Hessian (Schioppa et al., 2022)2.
pyDVL implements several methods for the efficient computation of the IF for machine learning. In the examples we document some of the difficulties that can arise when using the IF.
First introduced in the context of robust statistics in (Hampel, 1974)3, the IF was popularized in the context of machine learning in (Koh and Liang, 2017)4.
Following their formulation, consider an input space \\(\\mathcal{X}\\) (e.g. images) and an output space \\(\\mathcal{Y}\\) (e.g. labels). Let's take \\(z_i = (x_i, y_i)\\), for \\(i \\in \\{1,...,n\\}\\) to be the \\(i\\)-th training point, and \\(\\theta\\) to be the (potentially highly) multi-dimensional parameters of a model (e.g. \\(\\theta\\) is a big array with all of a neural network's parameters, including biases and/or dropout rates). We will denote with \\(L(z, \\theta)\\) the loss of the model for point \\(z\\) when the parameters are \\(\\theta.\\)
To train a model, we typically minimize the loss over all \\(z_i\\), i.e. the optimal parameters are
In practice, lack of convexity means that one doesn't really obtain the minimizer of the loss, and the training is stopped when the validation loss stops decreasing.
In order to compute the impact of each training point on the model, we would need to calculate \(\hat{\theta}_{-z}\) for each \(z\) in the training dataset, thus re-training the model at least ~\(n\) times (more if model training is stochastic). This is computationally very expensive, especially for big neural networks. To circumvent this problem, we can just calculate a first order approximation of \(\hat{\theta}\). This can be done through a single backpropagation pass and without re-training the full model.
pyDVL supports two ways of computing the empirical influence function, namely up-weighting of samples and perturbation influences.
which is the optimal \\(\\hat{\\theta}\\) when we up-weight \\(z\\) by an amount \\(\\epsilon \\gt 0\\).
From a classical result (a simple derivation is available in Appendix A of (Koh and Liang, 2017)4), we know that:
where \(H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^n \nabla_\theta^2 L(z_i, \hat{\theta})\) is the Hessian of \(L\). These quantities are also known as influence factors.
Importantly, notice that this expression is only valid when \\(\\hat{\\theta}\\) is a minimum of \\(L\\), or otherwise \\(H_{\\hat{\\theta}}\\) cannot be inverted! At the same time, in machine learning full convergence is rarely achieved, so direct Hessian inversion is not possible. Approximations need to be developed that circumvent the problem of inverting the Hessian of the model in all those (frequent) cases where it is not positive definite.
The influence of training point \\(z\\) on test point \\(z_{\\text{test}}\\) is defined as:
Notice that \\(\\mathcal{I}\\) is higher for points \\(z\\) which positively impact the model score, since the loss is higher when they are excluded from training. In practice, one needs to rely on the following infinitesimal approximation:
Using the chain rule and the results calculated above, we get:
All the resulting factors are gradients of the loss wrt. the model parameters \\(\\hat{\\theta}\\). This can be easily computed through one or more backpropagation passes.
How would the loss of the model change if, instead of up-weighting an individual point \\(z\\), we were to up-weight only a single feature of that point? Given \\(z = (x, y)\\), we can define \\(z_{\\delta} = (x+\\delta, y)\\), where \\(\\delta\\) is a vector of zeros except for a 1 in the position of the feature we want to up-weight. In order to approximate the effect of modifying a single feature of a single point on the model score we can define
Similarly to what was done above, we up-weight point \\(z_{\\delta}\\), but then we also remove the up-weighting for all the features that are not modified by \\(\\delta\\). From the calculations in the previous section, it is then easy to see that
and if the feature space is continuous and as \\(\\delta \\to 0\\) we can write
The influence of each feature of \\(z\\) on the loss of the model can therefore be estimated through the following quantity:
which, using the chain rule and the results calculated above, is equal to
The perturbation definition of the influence score is not straightforward to understand, but it has a simple interpretation: it tells how much the loss of the model changes when a certain feature of point z is up-weighted. A positive perturbation influence score indicates that the feature might have a positive effect on the accuracy of the model.
It is worth noting that the perturbation influence score is a very rough estimate of the impact of a point on the model's loss and it is subject to large approximation errors. It can nonetheless be used to build training-set attacks, as done in (Koh and Liang, 2017)4.
The main abstraction of the library for influence calculation is InfluenceFunctionModel. On implementations of this abstraction, you can call the method influences to compute influences.
pyDVL provides implementations for use with PyTorch models in pydvl.influence.torch. For detailed information on available implementations see the documentation in InfluenceFunctionModel.
Given a pre-trained pytorch model and a loss, a basic example would look like
from torch.utils.data import DataLoader
from pydvl.influence.torch import DirectInfluence

training_data_loader = DataLoader(...)
infl_model = DirectInfluence(model, loss)
infl_model = infl_model.fit(training_data_loader)

influences = infl_model.influences(x_test, y_test, x, y)
Compared to the mathematical definitions above, we switch the ordering of \(z\) and \(z_{\text{test}}\), in order to make the input ordering consistent with the dimensions of the resulting tensor. More concretely, if the first dimension of \(z_{\text{test}}\) is \(N\) and that of \(z\) is \(M\), the resulting tensor is of shape \(N \times M\).
A large positive influence indicates that training point \\(j\\) tends to improve the performance of the model on test point \\(i\\), and vice versa, a large negative influence indicates that training point \\(j\\) tends to worsen the performance of the model on test point \\(i\\).
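For instance, with the tensor returned by the snippet above one can look up these relationships directly (a minimal sketch):
# influences has shape (N, M): entry [i, j] is the influence of training point j
# on test point i
most_helpful = influences.argmax(dim=1)  # per test point, the most beneficial training point
most_harmful = influences.argmin(dim=1)  # per test point, the most detrimental training point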
Additionally, and as discussed in the introduction, in machine learning training rarely converges to a global minimum of the loss. Despite good apparent convergence, \\(\\hat{\\theta}\\) might be located in a region with flat curvature or close to a saddle point. In particular, the Hessian might have vanishing eigenvalues making its direct inversion impossible. Certain methods, such as the Arnoldi method are robust against these problems, but most are not.
To circumvent this problem, many approximate methods can be implemented. The simplest adds a small Hessian perturbation term, i.e. \(H_{\hat{\theta}} + \lambda \mathbb{I}\), with \(\mathbb{I}\) being the identity matrix.
from torch.utils.data import DataLoader
from pydvl.influence.torch import DirectInfluence

training_data_loader = DataLoader(...)
infl_model = DirectInfluence(model, loss, hessian_regularization=0.01)
infl_model = infl_model.fit(training_data_loader)
This standard trick ensures that the eigenvalues of \\(H_{\\hat{\\theta}}\\) are bounded away from zero and therefore the matrix is invertible. In order for this regularization not to corrupt the outcome too much, the parameter \\(\\lambda\\) should be as small as possible while still allowing a reliable inversion of \\(H_{\\hat{\\theta}} + \\lambda \\mathbb{I}\\).
The method of empirical influence computation can be selected with the parameter mode:
from pydvl.influence import InfluenceMode

influences = infl_model.influences(x_test, y_test, x, y,
                                    mode=InfluenceMode.Perturbation)
mode=InfluenceMode.Up
The influence factors (refer to the previous section for a definition) are typically the most computationally demanding part of influence calculation. They can be obtained by calling the influence_factors method, saved, and later used for influence calculation on different subsets of the training dataset.
influence_factors = infl_model.influence_factors(x_test, y_test)
influences = infl_model.influences_from_factors(influence_factors, x, y)
Hampel, F.R., 1974. The Influence Curve and Its Role in Robust Estimation. J. Am. Stat. Assoc. 69, 383–393. https://doi.org/10.2307/2285666
Koh, P.W., Liang, P., 2017. Understanding Black-box Predictions via Influence Functions, in: Proceedings of the 34th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 1885–1894.
In almost every practical application it is not possible to construct, let alone invert, the complete Hessian in memory. pyDVL offers several implementations of the interface InfluenceFunctionModel which do not compute the full Hessian (in contrast to DirectInfluence).
Conjugate gradient is a classical iterative procedure for solving linear systems of equations that does not require the explicit inversion of the Hessian. Instead, it only requires the calculation of Hessian-vector products, making it a good choice for large datasets or models with many parameters. It is nevertheless much slower to converge than the direct inversion method and not as accurate.
More info on the theory of conjugate gradient can be found on Wikipedia, or in text books such as (Trefethen and Bau, 1997, Lecture 38)1.
pyDVL also implements a stable block variant of the conjugate gradient method, defined in (Ji and Li, 2017)2, which solves several right hand sides simultaneously.
Optionally, the user can provide a pre-conditioner to improve convergence, such as a Jacobi pre-conditioner, which is a simple diagonal pre-conditioner based on Hutchinson's diagonal estimator (Bekas et al., 2007)3, or a Nyström approximation based pre-conditioner, described in (Frangella et al., 2023)4.
from pydvl.influence.torch import CgInfluence
from pydvl.influence.torch.pre_conditioner import NystroemPreConditioner

if_model = CgInfluence(
    model,
    loss,
    hessian_regularization=0.0,
    rtol=1e-7,
    atol=1e-7,
    maxiter=None,
    use_block_cg=True,
    pre_conditioner=NystroemPreConditioner(rank=10)
)
if_model.fit(train_loader)
The additional optional parameters rtol, atol, maxiter, use_block_cg and pre_conditioner are, respectively, the relative tolerance, the absolute tolerance, the maximum number of iterations, a flag enabling the block variant of CG, and an optional pre-conditioner.
where \\(d\\) and \\(s\\) are a dampening and a scaling factor, both essential for the convergence of the method and to be chosen carefully, and \\(I\\) is the identity matrix. More information on the theory of LiSSA can be found in the original paper (Agarwal et al., 2017)5.
from pydvl.influence.torch import LissaInfluence

if_model = LissaInfluence(
    model,
    loss,
    hessian_regularization=0.0,
    maxiter=1000,
    dampen=0.0,
    scale=10.0,
    h0=None,
    rtol=1e-4,
)
if_model.fit(train_loader)
with the additional optional parameters maxiter, dampen, scale, h0 and rtol being the maximum number of iterations, the dampening factor, the scaling factor, the initial guess for the solution and the relative tolerance, respectively.
The Arnoldi method is a Krylov subspace method for approximating dominating eigenvalues and eigenvectors. Under a low rank assumption on the Hessian at a minimizer (which is typically observed for deep neural networks), this approximation captures the essential action of the Hessian. More concretely, for \\(Hx=b\\) the solution is approximated by
where \\(D\\) is a diagonal matrix with the top (in absolute value) eigenvalues of the Hessian and \\(V\\) contains the corresponding eigenvectors. See also (Schioppa et al., 2022)6.
from pydvl.influence.torch import ArnoldiInfluence

if_model = ArnoldiInfluence(
    model,
    loss,
    hessian_regularization=0.0,
    rank_estimate=10,
    tol=1e-6,
)
if_model.fit(train_loader)
K-FAC, short for Kronecker-Factored Approximate Curvature, is a method that approximates the Fisher information matrix (FIM) of a model. For classification models with appropriate loss functions, the FIM is equal to the Hessian of the model's loss over the dataset. In this restricted but nonetheless important setting, K-FAC offers an efficient way to approximate the Hessian and hence the influence scores. For more details refer to the original paper (Martens and Grosse, 2015)7.
The K-FAC method is implemented in the class EkfacInfluence . The following code snippet shows how to use the K-FAC method to calculate the influence function of a model. Note that, in contrast to the other methods for influence function calculation, K-FAC does not require the loss function as an input. This is because the current implementation is only applicable to classification models with a cross entropy loss function.
from pydvl.influence.torch import EkfacInfluence

if_model = EkfacInfluence(
    model,
    hessian_regularization=0.0,
)
if_model.fit(train_loader)
A further improvement of the K-FAC method is Eigenvalue-Corrected K-FAC (EKFAC) (George et al., 2018)8, which additionally re-fits the eigenvalues of the Hessian, providing a more accurate approximation. It is enabled by setting update_diagonal=True when initialising EkfacInfluence. The following code snippet shows how to use the EKFAC method to calculate the influence function of a model.
from pydvl.influence.torch import EkfacInfluence

if_model = EkfacInfluence(
    model,
    update_diagonal=True,
    hessian_regularization=0.0,
)
if_model.fit(train_loader)
This approximation is based on a Nyström low-rank approximation of the form
where \\((\\cdot)^{\\dagger}\\) denotes the Moore-Penrose inverse, in combination with the Sherman–Morrison–Woodbury formula to calculate the action of its inverse:
see also (Hataya and Yamada, 2023)9 and (Frangella et al., 2023)4. The essential parameter is the rank of the approximation.
from pydvl.influence.torch import NystroemSketchInfluence

if_model = NystroemSketchInfluence(
    model,
    loss,
    rank=10,
    hessian_regularization=0.0,
)
if_model.fit(train_loader)
These implementations encapsulate the calculation logic for in-memory tensors. To scale up to large collections of data, we map these influence function models over such collections. For a detailed discussion see the documentation page Scaling Computation.
Trefethen, L.N., Bau, D., III, 1997. Numerical Linear Algebra. Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9780898719574
Ji, H., Li, Y., 2017. A breakdown-free block conjugate gradient method. BIT Numer Math 57, 379–403. https://doi.org/10.1007/s10543-016-0631-z
Bekas, C., Kokiopoulou, E., Saad, Y., 2007. An estimator for the diagonal of a matrix. Applied Numerical Mathematics 57, 1214–1229. https://doi.org/10.1016/j.apnum.2007.01.003
Frangella, Z., Tropp, J.A., Udell, M., 2023. Randomized Nyström Preconditioning. SIAM J. Matrix Anal. Appl. 44, 718–752. https://doi.org/10.1137/21M1466244
The implementations of InfluenceFunctionModel provide a convenient way to calculate influences for in-memory tensors.
Nevertheless, there is a need for computing the influences on batches of data. This might happen if your input data does not fit into memory (e.g. it is very high-dimensional), if for large models the derivative computations exceed your memory, or any combination of these. For this scenario, we want to map our influence function model over collections of batches (or chunks) of data.
The simplest way is to use a double for-loop to iterate over the batches sequentially and collect the results. pyDVL provides the convenience class SequentialInfluenceCalculator for this. The batch size should be chosen as large as possible, such that the corresponding batches fit into memory.
from torch.utils.data import DataLoader
from pydvl.influence import SequentialInfluenceCalculator
from pydvl.influence.torch.util import (
    NestedTorchCatAggregator,
    TorchNumpyConverter,
)
from pydvl.influence.torch import CgInfluence

batch_size = 10
train_dataloader = DataLoader(..., batch_size=batch_size)
test_dataloader = DataLoader(..., batch_size=batch_size)

infl_model = CgInfluence(model, loss, hessian_regularization=0.01)
infl_model = infl_model.fit(train_dataloader)

infl_calc = SequentialInfluenceCalculator(infl_model)

# this does not trigger the computation
lazy_influences = infl_calc.influences(test_dataloader, train_dataloader)

# trigger computation and pull the result into main memory;
# the result is the full tensor for all combinations of the two loaders
influences = lazy_influences.compute(aggregator=NestedTorchCatAggregator())
# or
# trigger computation and write results chunk-wise to disk using zarr,
# in a sequential manner
lazy_influences.to_zarr("local_path/or/url", TorchNumpyConverter())
For more intricate aggregations, such as an argmax operation, it's advisable to use the DaskInfluenceCalculator (refer to Parallel for more details). This is because it returns data structures in the form of dask.array.Array objects, which offer an API almost fully compatible with NumPy arrays.
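For instance, a reduction such as an argmax composes lazily with the influence computation. The following sketch assumes the da_influences array produced by the Dask example further below; only the small reduced result is materialized in memory.

``` python
import dask.array as da

# da_influences is a lazy dask.array.Array of shape (N_test, M_train).
# The reduction is scheduled chunk-wise; calling compute() triggers it.
most_influential_train_point = da.argmax(da_influences, axis=1).compute()
```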
While sequential calculation helps when the resulting tensors are too large to fit into memory, the batches are still computed one after another. Because the influence computation itself is completely data-parallel, you may want to use a parallel processing framework.
pyDVL provides an implementation of a parallel computation model using dask. The wrapper class DaskInfluenceCalculator has convenience methods to map the influence function computation over chunks of data in a parallel manner.
Again, choosing an appropriate chunk size can be crucial. For a better understanding see the official dask best practice documentation and the following blog entry.
import torch
from torch.utils.data import Dataset, DataLoader
from pydvl.influence import DaskInfluenceCalculator
from pydvl.influence.torch import CgInfluence
from pydvl.influence.torch.util import (
    torch_dataset_to_dask_array,
    TorchNumpyConverter,
)
from distributed import Client

# possibly large, out-of-memory datasets
train_data_set: Dataset = LargeDataSet(...)
test_data_set: Dataset = LargeDataSet(...)

train_dataloader = DataLoader(train_data_set)
infl_model = CgInfluence(model, loss, hessian_regularization=0.01)
infl_model = infl_model.fit(train_dataloader)

# wrap your input data into dask arrays
chunk_size = 10
da_x, da_y = torch_dataset_to_dask_array(train_data_set, chunk_size=chunk_size)
da_x_test, da_y_test = torch_dataset_to_dask_array(test_data_set,
                                                   chunk_size=chunk_size)

# use only one thread per worker for scheduling,
# due to non-thread safety of some torch operations
client = Client(n_workers=4, threads_per_worker=1)

infl_calc = DaskInfluenceCalculator(infl_model,
                                    converter=TorchNumpyConverter(
                                        device=torch.device("cpu")
                                    ),
                                    client=client)
da_influences = infl_calc.influences(da_x_test, da_y_test, da_x, da_y)
# da_influences is a dask.array.Array
# trigger computation and write chunks to disk in parallel
da_influences.to_zarr("path/or/url")
from pydvl.influence import DisableClientSingleThreadCheck

infl_calc = DaskInfluenceCalculator(infl_model,
                                    TorchNumpyConverter(device=torch.device("cpu")),
                                    DisableClientSingleThreadCheck)
da_influences = infl_calc.influences(da_x_test, da_y_test, da_x, da_y)
da_influences.compute(scheduler="synchronous")
If you want to jump right into it, skip ahead to Computing data values. If you want a quick list of applications, see Applications of data valuation. For a list of all algorithms implemented in pyDVL, see Methods.
Data valuation is the task of assigning a number to each element of a training set which reflects its contribution to the final performance of some model trained on it. Some methods attempt to be model-agnostic, but in most cases the model is an integral part of the method. In these cases, this number is not an intrinsic property of the element of interest, but typically a function of three factors:
The dataset \\(D\\), or more generally, the distribution it was sampled from: In some cases one only cares about values wrt. a given data set, in others value would ideally be the (expected) contribution of a data point to any random set \\(D\\) sampled from the same distribution. pyDVL implements methods of the first kind.
The algorithm \\(\\mathcal{A}\\) mapping the data \\(D\\) to some estimator \\(f\\) in a model class \\(\\mathcal{F}\\). E.g. MSE minimization to find the parameters of a linear model.
The performance metric of interest \\(u\\) for the problem. When value depends on a model, it must be measured in some way which uses it. E.g. the \\(R^2\\) score or the negative MSE over a test set. This metric will be computed over a held-out valuation set.
pyDVL collects algorithms for the computation of data values in this sense, mostly those derived from cooperative game theory. The methods can be found in the package pydvl.value, with support from modules pydvl.utils.dataset and pydvl.utils.utility, as detailed below.
Be sure to read the section on the difficulties using data values.
There are three main families of methods for data valuation: game-theoretic, influence-based and intrinsic. As of v0.8.1 pyDVL supports the first two. Here, we focus on game-theoretic concepts and refer to the main documentation on the influence function for the second.
The main contenders in game-theoretic approaches are Shapley values (Ghorbani and Zou, 2019)1, (Kwon et al., 2021)2, (Schoch et al., 2022)3, their generalization to so-called semi-values by (Kwon and Zou, 2022)4 and (Wang and Jia, 2023), and the Core (Yan and Procaccia, 2021)5. All of these are implemented in pyDVL. For a full list see Methods.
In these methods, data points are considered players in a cooperative game whose outcome is the performance of the model when trained on subsets (coalitions) of the data, measured on a held-out valuation set. This outcome, or utility, must typically be computed for every subset of the training set, so that an exact computation is \\(\\mathcal{O} (2^n)\\) in the number of samples \\(n\\), with each iteration requiring a full re-fitting of the model using a coalition as training set. Consequently, most methods involve Monte Carlo approximations, and sometimes approximate utilities which are faster to compute, e.g. proxy models (Wang et al., 2022)6 or constant-cost approximations like Neural Tangent Kernels (Wu et al., 2022)7.
The reasoning behind using game theory is that, in order to be useful, an assignment of value, dubbed valuation function, is usually required to fulfil certain requirements of consistency and \"fairness\". For instance, in some applications value should not depend on the order in which data are considered, or it should be equal for samples that contribute equally to any subset of the data (of equal size). When considering aggregated value for (sub-)sets of data there are additional desiderata, like having a value function that does not increase with repeated samples. Game-theoretic methods are all rooted in axioms that by construction ensure different desiderata, but despite their practical usefulness, none of them are either necessary or sufficient for all applications. For instance, SV methods try to equitably distribute all value among all samples, failing to identify repeated ones as unnecessary, with e.g. a zero value.
Using pyDVL to compute data values is a simple process that can be broken down into three steps:
The first item in the tuple \\((D, \\mathcal{A}, u)\\) characterising data value is the dataset. The class Dataset is a simple convenience wrapper for the train and test splits that is used throughout pyDVL. The test set will be used to evaluate a scoring function for the model.
It can be used as follows:
import numpy as np
from pydvl.utils import Dataset
from sklearn.model_selection import train_test_split

X, y = np.arange(100).reshape((50, 2)), np.arange(50)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=16
)
dataset = Dataset(X_train, y_train, X_test, y_test)
It is also possible to construct Datasets from sklearn toy datasets for illustrative purposes using from_sklearn.
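A minimal sketch of this, assuming the train_size keyword (the argument name may differ in your version):

``` python
from sklearn.datasets import load_wine
from pydvl.utils import Dataset

# Build train/test splits directly from a scikit-learn Bunch object.
dataset = Dataset.from_sklearn(load_wine(), train_size=0.8)
```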
Be it because data valuation methods are computationally very expensive, or because we are interested in the groups themselves, it can often be useful or necessary to group samples and valuate them together. GroupedDataset provides an alternative to Dataset with the same interface which allows this.
You can see an example in action in the Spotify notebook, but here's a simple example grouping a pre-existing Dataset. First we construct an array mapping each index in the dataset to a group, then use from_dataset:
import numpy as np
from pydvl.utils import GroupedDataset

# Randomly assign elements to any one of num_groups:
data_groups = np.random.randint(0, num_groups, len(dataset))
grouped_dataset = GroupedDataset.from_dataset(dataset, data_groups)
In pyDVL we have slightly overloaded the name \"utility\" and use it to refer to an object that keeps track of all three items in \\((D, \\mathcal{A}, u)\\). This will be an instance of Utility which, as mentioned, is a convenient wrapper for the dataset, model and scoring function used for valuation methods.
Here's a minimal example:
from sklearn.datasets import load_iris
from sklearn.svm import SVC

from pydvl.utils import Dataset, Utility

dataset = Dataset.from_sklearn(load_iris())
model = SVC()
utility = Utility(model, dataset)
The object utility is a callable that data valuation methods will execute with different subsets of training data. Each call will retrain the model on a subset and evaluate it on the test data using a scoring function. By default, Utility will use model.score(), but it is possible to use any scoring function (greater values must be better). In particular, the constructor accepts the same types of argument as sklearn.model_selection.cross_validate: a string, a scorer callable, or None for the default.
utility = Utility(model, dataset, "explained_variance")
Utility will wrap the fit() method of the model to cache its results. This greatly reduces computation times of Monte Carlo methods. Because of how caching is implemented, it is important not to reuse Utility objects for different datasets. You can read more about setting up the cache in the installation guide, and in the documentation of the caching module.
The scoring argument of Utility can be used to specify a custom Scorer object. This is a simple wrapper for a callable that takes a model and test data, and returns a score.
More importantly, the object provides information about the range of the score, which is used by some methods to estimate the number of samples necessary, and about what default value to use when the model fails to train.
The most important property of a Scorer is its default value. Because many models will fail to fit on small subsets of the data, it is important to provide a sensible default value for the score.
It is possible to skip the construction of the Scorer when constructing the Utility object. The two following calls are equivalent:
import numpy as np
from pydvl.utils import Utility, Scorer

utility = Utility(
    model, dataset, "explained_variance", score_range=(-np.inf, 1), default_score=0.0
)
utility = Utility(
    model, dataset, Scorer("explained_variance", range=(-np.inf, 1), default=0.0)
)
Because each evaluation of the utility entails a full retrain of the model with a new subset of the training set, it is natural to try to learn this mapping from subsets to scores. This is the idea behind Data Utility Learning (DUL) (Wang et al., 2022)6 and in pyDVL it's as simple as wrapping the Utility inside DataUtilityLearning:
from pydvl.utils import Utility, DataUtilityLearning, Dataset
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.datasets import load_iris

dataset = Dataset.from_sklearn(load_iris())
u = Utility(LogisticRegression(), dataset)
training_budget = 3
wrapped_u = DataUtilityLearning(u, training_budget, LinearRegression())

# First 3 calls will be computed normally
for i in range(training_budget):
    _ = wrapped_u((i,))
# Subsequent calls will be computed using the fitted model for DUL
wrapped_u((1, 2, 3))
As you can see, all that is required is a model to learn the utility itself; fitting and using the learned model happen behind the scenes.
There is a longer example with an investigation of the results achieved by DUL in a dedicated notebook.
LOO is the simplest approach to valuation. It assigns to each sample its marginal utility as value: \\(v_u(i) := u(D) - u(D_{-i})\\).
For notational simplicity, we consider the valuation function as defined over the indices of the dataset \\(D\\), and \\(i \\in D\\) is the index of the sample, \\(D_{-i}\\) is the training set without the sample \\(x_i\\), and \\(u\\) is the utility function. See the section on notation for more.
For the purposes of data valuation, this is rarely useful beyond serving as a baseline for benchmarking, although in some benchmarks it can occasionally perform astonishingly well. One particular weakness is that it does not necessarily correlate with an intrinsic value of a sample: since it is a marginal utility, it is affected by diminishing returns. Often, the training set is large enough for a single sample not to have any significant effect on training performance, despite any qualities it may possess. Whether this is indicative of low value or not depends on each one's goals and definitions, but other methods are typically preferable.
from pydvl.value.loo import compute_loo

values = compute_loo(utility, n_jobs=-1)
The return value of all valuation functions is an object of type ValuationResult. This can be iterated over, indexed with integers, slices and Iterables, as well as converted to a pandas.DataFrame.
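For illustration, here is a hedged sketch of typical ways to consume a ValuationResult; the per-item attribute names are assumptions based on the API reference and may differ in your version.

``` python
# `values` is the ValuationResult returned by compute_loo above.
df = values.to_dataframe(column="loo")  # DataFrame with values (and a stderr column)
subset = values[:10]                    # indexing with a slice, as described above
for item in values:                     # iteration yields one entry per sample
    print(item.index, item.value)
```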
There are a number of factors that affect how useful values can be for your project. In particular, regression can be especially tricky, but the particular nature of every (non-trivial) ML problem can have an effect:
Variance of the utility: Classical applications of game-theoretic value concepts operate with deterministic utilities, as do many of the bounds in the literature. But in ML we use an evaluation of the model on a validation set as a proxy for the true risk. Even if the utility is bounded, its variance will affect final values, and even more so any Monte Carlo estimates. Several works have tried to cope with variance. (Wang and Jia, 2023) show that by relaxing one of the Shapley axioms and considering the general class of semi-values, of which Shapley is an instance, a choice of constant weights is the best one can do in a utility-agnostic setting. This method, dubbed Data Banzhaf, is available in pyDVL as compute_banzhaf_semivalues.
One workaround in pyDVL is to configure the caching system to allow multiple evaluations of the utility for every index set. A moving average is computed and returned once the standard error is small, see CachedFuncConfig. Note however that in practice, the likelihood of cache hits is low, so one would have to force recomputation manually somehow.
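A hedged sketch of what such a configuration could look like. The class names InMemoryCacheBackend and CachedFuncConfig, their import path and their fields are assumptions about the caching API and may differ in your version of pyDVL.

``` python
from pydvl.utils import Utility
from pydvl.utils.caching import CachedFuncConfig, InMemoryCacheBackend

# Allow repeated evaluations of u(S) for the same subset; a moving average is
# returned once its standard error falls below the configured tolerance.
utility = Utility(
    model,
    dataset,
    cache_backend=InMemoryCacheBackend(),
    cached_func_options=CachedFuncConfig(
        allow_repeated_evaluations=True,
        rtol_stderr=0.1,
        min_repetitions=3,
    ),
)
```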
Unbounded utility: Choosing a scorer for a classifier is simple: accuracy or some F-score provides a bounded number with a clear interpretation. However, in regression problems most scores, like \\(R^2\\), are not bounded because regressors can be arbitrarily bad. This leads to great variability in the utility for low sample sizes, and hence unreliable Monte Carlo approximations to the values. Nevertheless, in practice it is only the ranking of samples that matters, and this tends to be accurate (wrt. the true ranking) despite inaccurate values.
pyDVL offers a dedicated function composition for scorer functions which can be used to squash a score. The following is defined in module score:
import numpy as np
from pydvl.utils import compose_score

def sigmoid(x: float) -> float:
    return float(1 / (1 + np.exp(-x)))

squashed_r2 = compose_score("r2", sigmoid, "squashed r2")

squashed_variance = compose_score(
    "explained_variance", sigmoid, "squashed explained variance"
)
Data set size: Computing exact Shapley values is NP-hard, and Monte Carlo approximations can converge slowly. Massive datasets are thus impractical, at least with game-theoretical methods. A workaround is to group samples and investigate their value together. You can do this using GroupedDataset. There is a fully worked-out example here. Some algorithms also provide different sampling strategies to reduce the variance, but due to a no-free-lunch-type theorem, no single strategy can be optimal for all utilities. Finally, model specific methods like kNN-Shapley (Jia et al., 2019)8, or altogether different and typically faster approaches like Data-OOB (Kwon and Zou, 2023)9 can also be used.
Model size: Since every evaluation of the utility entails retraining the whole model on a subset of the data, large models require great amounts of computation. But also, they will effortlessly interpolate small to medium datasets, leading to great variance in the evaluation of performance on the dedicated validation set. One mitigation for this problem is cross-validation, but this would incur massive computational cost. As of v0.8.1 there are no facilities in pyDVL for cross-validating the utility (note that this would require cross-validating the whole value computation).
The following notation is used throughout the documentation:
Let \\(D = \\{x_1, \\ldots, x_n\\}\\) be a training set of \\(n\\) samples.
The utility function \\(u:\\mathcal{D} \\rightarrow \\mathbb{R}\\) maps subsets of \\(D\\) to real numbers. In pyDVL, we typically call this mapping a score for consistency with sklearn, and reserve the term utility for the triple of dataset \\(D\\), model \\(f\\) and score \\(u\\), since they are used together to compute the value.
The value \\(v\\) of the \\(i\\)-th sample in dataset \\(D\\) wrt. utility \\(u\\) is denoted as \\(v_u(x_i)\\) or simply \\(v(i)\\).
For any \\(S \\subseteq D\\), we denote by \\(S_{-i}\\) the set \\(S\\) excluding \\(x_i\\), and by \\(S_{+i}\\) the set \\(S\\) with \\(x_i\\) added.
The marginal utility of adding sample \\(x_i\\) to a subset \\(S\\) is denoted as \\(\\delta(i) := u(S_{+i}) - u(S)\\).
The set \\(D_{-i}^{(k)}\\) contains all subsets of \\(D\\) of size \\(k\\) that do not include sample \\(x_i\\).
Kwon, Y., Rivas, M.A., Zou, J., 2021. Efficient Computation and Analysis of Distributional Shapley Values, in: Proceedings of the 24th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 793–801.
Wang, T., Yang, Y., Jia, R., 2022. Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning. Presented at the International Conference on Learning Representations (ICLR 2022), Workshop on Socially Responsible Machine Learning, arXiv. https://doi.org/10.48550/arXiv.2107.06336
Wu, Z., Shu, Y., Low, B.K.H., 2022. DAVINZ: Data Valuation using Deep Neural Networks at Initialization, in: Proceedings of the 39th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 24150–24176.
Class-wise Shapley (CWS) (Schoch et al., 2022)1 offers a Shapley framework tailored for classification problems. Given a sample \\(x_i\\) with label \\(y_i \\in \\mathbb{N}\\), let \\(D_{y_i}\\) be the subset of \\(D\\) with labels \\(y_i\\), and \\(D_{-y_i}\\) be the complement of \\(D_{y_i}\\) in \\(D\\). The key idea is that the sample \\((x_i, y_i)\\) might improve the overall model performance on \\(D\\), while being detrimental for the performance on \\(D_{y_i},\\) e.g. because of a wrong label. To address this issue, the authors introduced
where \\(S_{y_i} \\subseteq D_{y_i} \\setminus \\{i\\}\\) and \\(S_{-y_i} \\subseteq D_{-y_i}\\) is arbitrary (in particular, not the complement of \\(S_{y_i}\\)). The function \\(\\delta\\) is called set-conditional marginal Shapley value and is defined as
for any set \\(S\\) such that \\(i \\notin S\\), \\(i \\notin C\\) and \\(S \\cap C = \\emptyset\\).
In practical applications, this quantity is estimated with Monte Carlo sampling both of the powerset and of the set of index permutations (Castro et al., 2009)2. Typically, this requires fewer samples than the original Shapley value, although the actual speed-up depends on the model and the dataset.
Computing classwise Shapley values
Like all other game-theoretic valuation methods, CWS requires a Utility object constructed with model and dataset, with the peculiarity of requiring a specific ClasswiseScorer. The entry point is the function compute_classwise_shapley_values:
from pydvl.value import *

model = ...
data = Dataset(...)
scorer = ClasswiseScorer(...)
utility = Utility(model, data, scorer)
values = compute_classwise_shapley_values(
    utility,
    done=HistoryDeviation(n_steps=500, rtol=5e-2) | MaxUpdates(5000),
    truncation=RelativeTruncation(utility, rtol=0.01),
    done_sample_complements=MaxChecks(1),
    normalize_values=True
)
In order to use the classwise Shapley value, one needs to define a ClasswiseScorer. This scorer is defined as
where \\(f\\) and \\(g\\) are monotonically increasing functions, \\(a_S(D_{y_i})\\) is the in-class accuracy, and \\(a_S(D_{-y_i})\\) is the out-of-class accuracy (the names originate from a choice by the authors to use accuracy, but in principle any other score, like \\(F_1\\) can be used).
The authors show that \\(f(x)=x\\) and \\(g(x)=e^x\\) have favorable properties and are therefore the defaults, but we leave the option to set different functions \\(f\\) and \\(g\\) for an exploration with different base scores.
The default class-wise scorer
Constructing the CWS scorer requires choosing a metric and the functions \\(f\\) and \\(g\\):
import numpy as np
from pydvl.value.shapley.classwise import ClasswiseScorer

# These are the defaults
identity = lambda x: x
scorer = ClasswiseScorer(
    "accuracy",
    in_class_discount_fn=identity,
    out_of_class_discount_fn=np.exp
)
The level curves for \\(f(x)=x\\) and \\(g(x)=e^x\\) are depicted below, with contour lines annotated with their respective gradients.
Level curves of the class-wise utility
We illustrate the method with two experiments: point removal and noise removal, as well as an analysis of the distribution of the values. For this we employ the nine datasets used in (Schoch et al., 2022)1, using the same pre-processing. For images, PCA is used to reduce the features found by a pre-trained Resnet18 model down to 32 dimensions. Standard loc-scale normalization is performed for all models except gradient boosting, since the latter is not sensitive to the scale of the features.
We show mean and coefficient of variation (CV) \\(\\frac{\\sigma}{\\mu}\\) of an \"inner metric\". The former shows the performance of the method, whereas the latter displays its stability: we normalize by the mean to see the relative effect of the standard deviation. Ideally the mean value is maximal and CV minimal.
Finally, we note that for all sampling-based valuation methods the same number of evaluations of the marginal utility was used. This is important to make the algorithms comparable, but in practice one should consider using a more sophisticated stopping criterion.
In (best-)point removal, one first computes values for the training set and then removes in sequence the points with the highest values. After each removal, the remaining points are used to train the model from scratch and performance is measured on a test set. This produces a curve of performance vs. number of points removed which we show below.
As a scalar summary of this curve, (Schoch et al., 2022)1 define Weighted Accuracy Drop (WAD) as:
where \\(a_T(D)\\) is the accuracy of the model (trained on \\(T\\)) evaluated on \\(D\\) and \\(T_{-\\{1 \\colon j \\}}\\) is the set \\(T\\) without elements from \\(\\{1, \\dots , j \\}\\).
We run the point removal experiment for a logistic regression model five times and compute WAD for each run, then report the mean \\(\\mu_\\text{WAD}\\) and standard deviation \\(\\sigma_\\text{WAD}\\).
Mean WAD for best-point removal on logistic regression. Values computed using LOO, CWS, Beta Shapley, and TMCS
We see that CWS is competitive with all three other methods. In all problems except MNIST (multi) it outperforms TMCS, while in that case TMCS has a slight advantage.
In order to understand the variability of WAD we look at its coefficient of variation (lower is better):
Coefficient of Variation of WAD for best-point removal on logistic regression. Values computed using LOO, CWS, Beta Shapley, and TMCS
CWS is not the best method in terms of CV. For CIFAR10, Click, CPU and MNIST (binary) Beta Shapley has the lowest CV. For Diabetes, MNIST (multi) and Phoneme CWS is the winner and for FMNIST and Covertype TMCS takes the lead. Besides LOO, TMCS has the highest relative standard deviation.
The following plot shows accuracy vs number of samples removed. Random values serve as a baseline. The shaded area represents the 95% bootstrap confidence interval of the mean across 5 runs.
Accuracy after best-sample removal using values from logistic regression
Because samples are removed from high to low valuation order, we expect a steep decrease in the curve.
Overall we conclude that in terms of mean WAD, CWS and TMCS perform best, with CWS's CV on par with Beta Shapley's, making CWS a competitive method.
Transfer of values from one model to another is probably of greater practical relevance: values are computed using a cheap model and used to prune the dataset before training a more expensive one.
The following plot shows accuracy vs number of samples removed for transfer from logistic regression to a neural network. The shaded area represents the 95% bootstrap confidence interval of the mean across 5 runs.
Accuracy after sample removal using values transferred from logistic regression to an MLP
As in the previous experiment samples are removed from high to low valuation order and hence we expect a steep decrease in the curve. CWS is competitive with the other methods, especially in very unbalanced datasets like Click. In other datasets, like Covertype, Diabetes and MNIST (multi) the performance is on par with TMCS.
The next experiment tries to detect mis-labeled data points in binary classification tasks. The labels of 20% of the points are flipped at random (we don't consider multi-class datasets because there isn't a unique flipping strategy). The following table shows the mean of the area under the curve (AUC) for five runs.
Mean AUC for mis-labeled data point detection. Values computed using LOO, CWS, Beta Shapley, and TMCS
In the majority of cases TMCS has a slight advantage over CWS, except for Click, where CWS has a slight edge, most probably due to the unbalanced nature of the dataset. The following plot shows the CV for the AUC of the five runs.
Coefficient of variation of AUC for mis-labeled data point detection. Values computed using LOO, CWS, Beta Shapley, and TMCS
In terms of CV, CWS has a clear edge over TMCS and Beta Shapley.
Finally, we look at the ROC curves training the classifier on the \\(n\\) first samples in increasing order of valuation (i.e. starting with the worst):
Mean ROC across 5 runs with 95% bootstrap CI
Although at first sight TMCS seems to be the winner, CWS stays competitive after factoring in running time. For a perfectly balanced dataset, CWS needs on average fewer samples than TMCS.
For illustration, we compare the distribution of values computed by TMCS and CWS.
Histogram and estimated density of the values computed by TMCS and CWS on all nine datasets
For Click TMCS has a multi-modal distribution of values. We hypothesize that this is due to the highly unbalanced nature of the dataset, and notice that CWS has a single mode, leading to its greater performance on this dataset.
CWS is an effective way to handle classification problems, in particular for unbalanced datasets. It reduces the computing requirements by considering in-class and out-of-class points separately.
Schoch, S., Xu, H., Ji, Y., 2022. CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification, in: Proc. of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS). Presented at the Advances in Neural Information Processing Systems (NeurIPS 2022).
SV is a particular case of a more general concept called semi-value, which generalizes it to different weighting schemes. A semi-value is any valuation function of the form
\\[ v_\\text{semi}(i) = \\sum_{k} w(k) \\sum_{S \\in D_{-i}^{(k)}} [u(S_{+i}) - u(S)], \\]
where the outer sum runs over subset sizes \\(k\\) with a weight \\(w(k)\\), the set \\(D_{-i}^{(k)}\\) contains all subsets of \\(D\\) of size \\(k\\) that do not include sample \\(x_i\\), \\(S_{+i}\\) is the set \\(S\\) with \\(x_i\\) added, and \\(u\\) is the utility function.
Two instances of this are Banzhaf indices (Wang and Jia, 2023)1, and Beta Shapley (Kwon and Zou, 2022)2, with better numerical and rank stability in certain situations.
Shapley values are a particular case of semi-values and can therefore also be computed with the methods described here. However, as of version 0.8.1, we recommend using compute_shapley_values instead, in particular because it implements truncation policies for TMCS.
For some machine learning applications, where the utility is typically the performance when trained on a set \\(S \\subset D\\), diminishing returns are often observed when computing the marginal utility of adding a new data point.
Beta Shapley is a weighting scheme that uses the Beta function to place more weight on subsets deemed to be more informative. The weights are defined as:
where \\(B\\) is the Beta function, and \\(\\alpha\\) and \\(\\beta\\) are parameters that control the weighting of the subsets. Setting both to 1 recovers Shapley values, and setting \\(\\alpha = 1\\) and \\(\\beta = 16\\) is reported in (Kwon and Zou, 2022)2 to be a good choice for some applications. Beta Shapley values are available in pyDVL through compute_beta_shapley_semivalues:
from pydvl.value import *

utility = Utility(model, data)
values = compute_beta_shapley_semivalues(
    u=utility, done=AbsoluteStandardError(threshold=1e-4), alpha=1, beta=16
)
See however the Banzhaf indices section for an alternative choice of weights which is reported to work better.
As noted in the section Problems of Data Values, the Shapley value can be very sensitive to variance in the utility function. For machine learning applications, where the utility is typically the performance when trained on a set \\(S \\subset D\\), this variance is often largest for smaller subsets \\(S\\). It is therefore reasonable to try reducing the relative contribution of these subsets with adequate weights.
One such choice of weights is the Banzhaf index, which uses the same weight \\(w(k) = 2^{-(n-1)}\\) for every set size \\(k\\), i.e. every subset is weighted equally. The intuition for picking a constant weight is that for any choice of weight function \\(w\\), one can always construct a utility with higher variance where \\(w\\) is greater. Therefore, in a worst-case sense, the best one can do is to pick a constant weight.
The authors of (Wang and Jia, 2023)1 show that Banzhaf indices are more robust to variance in the utility function than Shapley and Beta Shapley values. They are available in pyDVL through compute_banzhaf_semivalues:
from pydvl.value import *

utility = Utility(model, data)
values = compute_banzhaf_semivalues(
    u=utility, done=AbsoluteStandardError(threshold=1e-4)
)
Wang et al. propose a more sample-efficient method for computing Banzhaf semivalues in their paper Data Banzhaf: A Robust Data Valuation Framework for Machine Learning (Wang and Jia, 2023)1. This method updates all semivalues per evaluation of the utility (i.e. per model trained), based on whether a specific data point was included in the data subset or not. The expression for computing the semivalues is
where \\(\\mathbf{S}_{\\ni i}\\) are the subsets that contain the index \\(i\\) and \\(\\mathbf{S}_{\\not{\\ni} i}\\) are the subsets not containing the index \\(i\\).
The function implementing this method is compute_msr_banzhaf_semivalues.
from pydvl.value import compute_msr_banzhaf_semivalues, RankCorrelation, Utility

utility = Utility(model, data)
values = compute_msr_banzhaf_semivalues(
    u=utility, done=RankCorrelation(rtol=0.001),
)
As explained above, both Beta Shapley and Banzhaf indices are special cases of semi-values. In pyDVL we provide a general method for computing them with any combination of the three ingredients that define a semi-value: a utility function, a sampling scheme for subsets, and a weighting coefficient. You can construct any combination of these three ingredients with compute_generic_semivalues. The utility function is the same as for Shapley values, and the sampling method can be any of the types defined in the samplers module. For instance, the following snippet is equivalent to the one above:
from pydvl.value import *

data = Dataset(...)
utility = Utility(model, data)
values = compute_generic_semivalues(
    sampler=PermutationSampler(data.indices),
    u=utility,
    coefficient=beta_coefficient(alpha=1, beta=16),
    done=AbsoluteStandardError(threshold=1e-4),
)
Allowing any coefficient can help when experimenting with models which are more sensitive to changes in training set size. However, Data Banzhaf indices are proven to be the most robust to variance in the utility function, in the sense of rank stability, across a range of models and datasets (Wang and Jia, 2023)1.
Careful with permutation sampling
This generic implementation of semi-values allowing for any combination of sampling and weighting schemes is very flexible and, in principle, it recovers the original Shapley value, so that compute_shapley_values is no longer necessary. However, it loses the optimization in permutation sampling that reuses the utility computation from the last iteration when iterating over a permutation. This doubles the computation requirements (and slightly increases variance) when using permutation sampling, unless the cache is enabled. In addition, as mentioned above, truncation policies are not supported by this generic implementation (as of v0.8.1). For these reasons it is preferable to use compute_shapley_values whenever not computing other semi-values.
Wang, J.T., Jia, R., 2023. Data Banzhaf: A Robust Data Valuation Framework for Machine Learning, in: Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. Presented at the International Conference on Artificial Intelligence and Statistics, PMLR, pp. 6388–6421.
Kwon, Y., Zou, J., 2022. Beta Shapley: A Unified and Noise-reduced Data Valuation Framework for Machine Learning, in: Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) 2022. Presented at AISTATS 2022, PMLR.
The Shapley method is an approach to compute data values originating in cooperative game theory. Shapley values are a common way of assigning payoffs to each participant in a cooperative game (i.e. one in which players can form coalitions) in a way that ensures that certain axioms are fulfilled.
pyDVL implements several methods for the computation and approximation of Shapley values. They can all be accessed via the facade function compute_shapley_values. The supported methods are enumerated in ShapleyMode.
Empirically, the most useful method is the so-called Truncated Monte Carlo Shapley (Ghorbani and Zou, 2019)1, which is a Monte Carlo approximation of the permutation Shapley value.
The first algorithm is just a verbatim implementation of the definition. As such it returns as exact a value as the utility function allows (see what this means in Problems of Data Values).
The value \\(v\\) of the \\(i\\)-th sample in dataset \\(D\\) wrt. utility \\(u\\) is computed as a weighted sum of its marginal utility wrt. every possible coalition of training samples within the training set:
\\[ v_u(x_i) = \\frac{1}{n} \\sum_{S \\subseteq D_{-i}} \\binom{n-1}{|S|}^{-1} [u(S_{+i}) - u(S)], \\]
where \\(D_{-i}\\) denotes the set of samples in \\(D\\) excluding \\(x_i\\), and \\(S_{+i}\\) denotes the set \\(S\\) with \\(x_i\\) added.
from pydvl.value import compute_shapley_values

values = compute_shapley_values(utility, mode="combinatorial_exact")
df = values.to_dataframe(column='value')
We can convert the return value to a pandas.DataFrame and name the column with the results value. Please refer to the documentation in shapley and ValuationResult for more information.
Because the number of subsets \\(S \\subseteq D_{-i}\\) is \\(2^{ | D | - 1 }\\), one typically must resort to approximations. The simplest one is done via Monte Carlo sampling of the powerset \\(\\mathcal{P}(D)\\). In pyDVL this simple technique is called "Monte Carlo Combinatorial". The method has a very poor convergence rate and others are preferred, but if desired, usage follows the same pattern:
from pydvl.value import compute_shapley_values, MaxUpdates

values = compute_shapley_values(
    utility, mode="combinatorial_montecarlo", done=MaxUpdates(1000)
)
df = values.to_dataframe(column='cmc')
The DataFrames returned by most Monte Carlo methods will contain approximate standard errors as an additional column, in this case named cmc_stderr.
Note the usage of the object MaxUpdates as the stop condition. This is an instance of a StoppingCriterion. Other examples are MaxTime and AbsoluteStandardError.
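Stopping criteria can also be combined, as done elsewhere in this documentation with the | operator. A small sketch (the keyword argument names for the thresholds are assumptions and may differ in your version):

``` python
from pydvl.value import AbsoluteStandardError, MaxTime, MaxUpdates

# Stop when the standard error is small enough, after 1000 updates,
# or after one hour, whichever happens first.
done = AbsoluteStandardError(threshold=1e-3) | MaxUpdates(1000) | MaxTime(seconds=3600)
```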
Owen Sampling (Okhrati and Lipani, 2021)2 is a practical algorithm based on the combinatorial definition. It uses a continuous extension of the utility from \\(\\{0,1\\}^n\\), where a 1 in position \\(i\\) means that sample \\(x_i\\) is used to train the model, to \\([0,1]^n\\). The ensuing expression for Shapley value uses integration instead of discrete weights:
Using Owen sampling follows the same pattern as every other method for Shapley values in pyDVL. First construct the dataset and utility, then call compute_shapley_values:
from pydvl.value import compute_shapley_values

values = compute_shapley_values(
    u=utility, mode="owen", n_iterations=4, max_q=200
)
There are more details on Owen sampling, and its variant Antithetic Owen Sampling in the documentation for the function doing the work behind the scenes: owen_sampling_shapley.
Note that in this case we do not pass a StoppingCriterion to the function, but instead the number of iterations and the maximum number of samples to use in the integration.
An equivalent way of computing Shapley values (ApproShapley) appeared in (Castro et al., 2009)3 and is the basis for the method most often used in practice. It uses permutations over indices instead of subsets:
\\[ v_u(x_i) = \\frac{1}{n!} \\sum_{\\sigma \\in \\Pi(n)} [u(\\sigma_{:i} \\cup \\{i\\}) - u(\\sigma_{:i})], \\]
where \\(\\sigma_{:i}\\) denotes the set of indices in permutation \\(\\sigma\\) before the position where \\(i\\) appears. To approximate this sum (which has \\(\\mathcal{O}(n!)\\) terms!) one uses Monte Carlo sampling of permutations, which has surprisingly low sample complexity. One notable difference wrt. the combinatorial approach above is that the approximations always fulfill the efficiency axiom of Shapley, namely \\(\\sum_{i=1}^n \\hat{v}_i = u(D)\\) (see (Castro et al., 2009)3, Proposition 3.2).
By adding two types of early stopping, the result is the so-called Truncated Monte Carlo Shapley (Ghorbani and Zou, 2019)1, which is efficient enough to be useful in applications. The first is simply a convergence criterion, of which there are several to choose from. The second is a criterion to truncate the iteration over single permutations. RelativeTruncation chooses to stop iterating over samples in a permutation when the marginal utility becomes too small.
from pydvl.value import compute_shapley_values, MaxUpdates, RelativeTruncation

values = compute_shapley_values(
    u=utility,
    mode="permutation_montecarlo",
    done=MaxUpdates(1000),
    truncation=RelativeTruncation(utility, rtol=0.01)
)
You can see this method in action in this example using the Spotify dataset.
It is possible to exploit the local structure of K-Nearest Neighbours to reduce the number of subsets to consider: because no sample besides the K closest affects the score, most are irrelevant and it is possible to compute a value in linear time. This method was introduced by (Jia et al., 2019)4, and can be used in pyDVL with:
from pydvl.utils import Dataset, Utility
from pydvl.value import compute_shapley_values
from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=5)
data = Dataset(...)
utility = Utility(model, data)
values = compute_shapley_values(u=utility, mode="knn")
An alternative method for the approximation of Shapley values introduced in (Jia et al., 2019)4 first estimates the differences of values with a Monte Carlo sum. With
one then solves the following linear constraint satisfaction problem (CSP) to infer the final values:
We have reproduced this method in pyDVL for completeness and benchmarking, but we don't advocate its use because of the speed and memory cost. Despite our best efforts, the number of samples required in practice for convergence can be several orders of magnitude worse than with e.g. TMCS. Additionally, the CSP can sometimes turn out to be infeasible.
Usage follows the same pattern as every other Shapley method, but with the addition of an epsilon parameter required for the solution of the CSP. It should be the same value used to compute the minimum number of samples required. This can be done with num_samples_eps_delta, but note that the number returned will be huge! In practice, fewer samples can be enough, but the actual number will strongly depend on the utility, in particular its variance.
from pydvl.utils import Dataset, Utility
from pydvl.value import compute_shapley_values

model = ...
data = Dataset(...)
utility = Utility(model, data, score_range=(_min, _max))
min_iterations = num_samples_eps_delta(epsilon, delta, n, utility.score_range)
values = compute_shapley_values(
    u=utility, mode="group_testing", n_iterations=min_iterations, eps=epsilon
)
Ghorbani, A., Zou, J., 2019. Data Shapley: Equitable Valuation of Data for Machine Learning, in: Proceedings of the 36th International Conference on Machine Learning. Presented at the International Conference on Machine Learning (ICML 2019), PMLR, pp. 2242–2251.
Castro, J., Gómez, D., Tejada, J., 2009. Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research 36, 1726–1730. https://doi.org/10.1016/j.cor.2008.04.004
Jia, R., Dao, D., Wang, B., Hubis, F.A., Gurel, N.M., Li, B., Zhang, C., Spanos, C., Song, D., 2019. Efficient task-specific data valuation for nearest neighbor algorithms. Proc. VLDB Endow. 12, 1610–1623. https://doi.org/10.14778/3342263.3342637
Shapley values define a fair way to distribute payoffs amongst all participants (training points) when they form a grand coalition, i.e. when the model is trained on the whole dataset. But they do not consider the question of stability: under which conditions do all participants in a game form the grand coalition? Are the payoffs distributed in such a way that prioritizes its formation?
The Core is another solution concept in cooperative game theory that attempts to ensure stability in the sense that it provides the set of feasible payoffs that cannot be improved upon by a sub-coalition. This can be interesting for some applications of data valuation because it yields values consistent with training on the whole dataset, avoiding the spurious selection of subsets.
It satisfies the following 2 properties:
Efficiency: The payoffs are distributed such that it is not possible to make any participant better off without making another one worse off. \\(\\sum_{i \\in D} v(i) = u(D).\\)
Coalitional rationality: The sum of payoffs to the agents in any coalition \\(S\\) is at least as large as the amount that these agents could earn by forming a coalition on their own. \\(\\sum_{i \\in S} v(i) \\geq u(S), \\forall S \\subset D.\\)
The Core was first introduced into data valuation by (Yan and Procaccia, 2021)1, in the following form.
Unfortunately, for many cooperative games the Core may be empty. By relaxing the coalitional rationality property by a subsidy \\(e \\gt 0\\), we are then able to find approximate payoffs:
The Least Core (LC) values \\(\\{v\\}\\) for utility \\(u\\) are computed by solving the following linear program: minimize the subsidy \\(e\\), subject to the efficiency constraint \\(\\sum_{i \\in D} v(i) = u(D)\\) and the relaxed coalitional rationality constraints \\(\\sum_{i \\in S} v(i) + e \\geq u(S)\\) for all \\(S \\subseteq D\\).
Note that solving this program yields a set of solutions \\(\\{v_j:N \\rightarrow \\mathbb{R}\\}\\), whereas the Shapley value is a single function \\(v\\). In order to obtain a single valuation to use, one breaks ties by solving a quadratic program to select the \\(v\\) in the LC with the smallest \\(\\ell_2\\) norm. This is called the egalitarian least core.
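As a sketch, this tie-breaking step can be written as a quadratic program over the same constraints, with the subsidy fixed at its optimal value \\(e^{*}\\) from the linear program above (the exact formulation used in pyDVL may differ in details):

\\[ \\min_{v} \\| v \\|_2^2 \\quad \\text{s.t.} \\quad \\sum_{i \\in D} v(i) = u(D), \\quad \\sum_{i \\in S} v(i) + e^{*} \\geq u(S) \\;\\; \\forall S \\subseteq D. \\]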
This first algorithm is just a verbatim implementation of the definition, in compute_least_core_values. It computes all constraints for the linear problem by evaluating the utility on every subset of the training data, and returns as exact a value as the utility function allows (see what this means in Problems of Data Values).
from pydvl.value import compute_least_core_values

values = compute_least_core_values(utility, mode="exact")
Because the number of subsets \\(S \\subseteq D \\setminus \\{i\\}\\) is \\(2^{ | D | - 1 }\\), one typically must resort to approximations.
The simplest one consists in using a fraction of all subsets for the constraints. (Yan and Procaccia, 2021)1 show that a quantity of order \\(\\mathcal{O}((n - \\log \\Delta ) / \\delta^2)\\) is enough to obtain a so-called \\(\\delta\\)-approximate least core with high probability. I.e. the following property holds with probability \\(1-\\Delta\\) over the choice of subsets:
where \\(e^{*}\\) is the optimal least core subsidy. This approximation is also implemented in compute_least_core_values:
from pydvl.value import compute_least_core_values

values = compute_least_core_values(
    utility, mode="montecarlo", n_iterations=n_iterations
)
Although any number is supported, it is best to choose n_iterations to be at least equal to the number of data points.
Because computing the Least Core values requires the solution of a linear and a quadratic problem after computing all the utility values, we offer the possibility of splitting the latter from the former. This is useful when running multiple experiments: use mclc_prepare_problem to prepare a list of problems to solve, then solve them in parallel with lc_solve_problems.
from pydvl.value.least_core import mclc_prepare_problem, lc_solve_problems

n_experiments = 10
problems = [mclc_prepare_problem(utility, n_iterations=n_iterations)
            for _ in range(n_experiments)]
values = lc_solve_problems(problems)
The TransferLab team reproduced the results of the original paper in a publication for the 2022 MLRC (Benmerzoug and Benito Delgado, 2023)2.
Best sample removal on binary image classification
Roughly speaking, MCLC performs better in identifying high value points, as measured by best-sample removal tasks. In all other aspects, it performs worse or similarly to TMCS at comparable sample budgets. But using an equal number of subsets is more computationally expensive because of the need to solve large linear and quadratic optimization problems.
Worst sample removal on binary image classification
For these reasons we recommend some variation of SV like TMCS for outlier detection, data cleaning and pruning, and perhaps MCLC for the selection of interesting points to be inspected for the improvement of data collection or model design.
Benmerzoug, A., Benito Delgado, M. de, 2023. [Re] If you like Shapley, then you'll love the core. ReScience C 9. https://doi.org/10.5281/zenodo.8173733
init_executor(
    max_workers: Optional[int] = None,
    config: Optional[ParallelConfig] = None,
    **kwargs
) -> Generator[Executor, None, None]
@contextmanager
@deprecated(
    target=None,
    deprecated_in="0.9.0",
    remove_in="0.10.0",
)
def init_executor(
    max_workers: Optional[int] = None,
    config: Optional[ParallelConfig] = None,
    **kwargs,
) -> Generator[Executor, None, None]:
    """Initializes a futures executor for the given parallel configuration.

    Args:
        max_workers: Maximum number of concurrent tasks.
        config: instance of [ParallelConfig][pydvl.utils.config.ParallelConfig]
            with cluster address, number of cpus, etc.
        kwargs: Other optional parameters that will be passed to the executor.

    ??? Examples
        ``` python
        from pydvl.parallel.futures import init_executor, ParallelConfig

        config = ParallelConfig(backend="ray")
        with init_executor(max_workers=1, config=config) as executor:
            future = executor.submit(lambda x: x + 1, 1)
            result = future.result()
            assert result == 2
        ```
        ``` python
        from pydvl.parallel.futures import init_executor
        with init_executor() as executor:
            results = list(executor.map(lambda x: x + 1, range(5)))
            assert results == [1, 2, 3, 4, 5]
        ```
    """
    if config is None:
        config = ParallelConfig()

    try:
        cls = ParallelBackend.BACKENDS[config.backend]
        with cls.executor(max_workers=max_workers, config=config, **kwargs) as e:
            yield e
    except KeyError:
        raise NotImplementedError(f"Unexpected parallel backend {config.backend}")