Release Candidate 0.1 #5
Conversation
Removed all mixins but `seedable` and `timeable` as they are not currently well supported.
… to set what should be seeded and what not for faster usage.
Added some new tests.
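For context, the narrowed-down seeding API is exercised in the tests discussed below roughly as in this sketch; the import path and the no-argument fallback are assumptions, while the `seed_engines={"torch"}` form comes from the test code in this PR:

# Sketch only: import path assumed from src/mixins/seedable.py
from mixins.seedable import seed_everything

seed_everything(1, seed_engines={"torch"})            # seed only the torch generators (as in tests/test_torch.py)
seed_everything(1, seed_engines={"random", "numpy"})  # seed only Python's random and NumPy
seed_everything(42)                                   # assumption: with no seed_engines, all available engines are seeded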
Walkthrough
The pull request consolidates the GitHub Actions workflows for code quality into a single file.
Codecov Report
All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.
Additional details and impacted files:

@@ Coverage Diff @@
##           main       #5   +/-   ##
========================================
  Coverage      ?   100.00%
========================================
  Files         ?         4
  Lines         ?       160
  Branches      ?         0
========================================
  Hits          ?       160
  Misses        ?         0
  Partials      ?         0

☔ View full report in Codecov by Sentry.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (16)
.github/workflows/code-quality-main.yaml (1)
Line range hint 26-33: LGTM with suggestions: Package installation and pre-commit steps are well-configured.
The new package installation step is a good addition, ensuring all necessary dependencies are available for the pre-commit hooks. The use of `-e .[dev]` correctly installs the package in editable mode with development dependencies.
Consider the following suggestions for potential improvements:
- Add a step to cache pip dependencies to speed up future runs.
- Explicitly upgrade pip before installation.
- Add a step to verify the installation was successful.
Here's an example of how you could implement these suggestions:
      - name: Cache pip dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Install packages
        run: |
          python -m pip install --upgrade pip
          pip install -e .[dev]
          pip list  # Verify installation

      - name: Run pre-commits
        uses: pre-commit/[email protected]

These changes would potentially improve the workflow's efficiency and reliability.
tests/test_torch.py (2)
7-10: Improve exception handling for better error context.
The error handling for the torch import is good, but it can be improved to provide better context for the exception.
Consider updating the except clause as follows:
 try:
     import torch
-except (ImportError, ModuleNotFoundError):
-    raise ImportError("This test requires torch to run.")
+except (ImportError, ModuleNotFoundError) as err:
+    raise ImportError("This test requires torch to run.") from err

This change will preserve the original exception's context, making it easier to debug if needed.
🧰 Tools
🪛 Ruff
10-10: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
17-46: LGTM: Comprehensive test for seed_everything function.
The test thoroughly checks the seeding functionality across different random number generators (Python's random, NumPy, and PyTorch). The assertions correctly verify the expected behavior of each library.
To improve readability, consider extracting the random number generation into a separate function:
def generate_random_numbers():
    return (
        random.randint(0, 100000000),
        np.random.randint(0, 100000000),
        torch.randint(0, 100000000, (1,)).item(),
    )

def test_seed_everything():
    seed_everything(1, seed_engines={"torch"})
    rand_1_1, np_rand_1_1, torch_rand_1_1 = generate_random_numbers()
    rand_2_1, np_rand_2_1, torch_rand_2_1 = generate_random_numbers()

    seed_everything(1, seed_engines={"torch"})
    rand_1_2, np_rand_1_2, torch_rand_1_2 = generate_random_numbers()
    rand_2_2, np_rand_2_2, torch_rand_2_2 = generate_random_numbers()

    # ... rest of the assertions ...

This refactoring would make the test more concise and easier to maintain.
.github/workflows/tests.yaml (2)
36-38: Approve the separation of non-torch tests and suggest a minor improvement.
The separation of non-torch tests is a good organizational change. It allows for more granular control over test execution and potentially faster CI runs.
Consider adding a comment explaining why torch tests are separated, for better maintainability:
      - name: Run non-torch tests
+       # Torch tests are run separately to manage dependencies and improve CI performance
        run: |
          pytest -v --doctest-modules --cov=src --junitxml=junit.xml -s --ignore=docs --ignore=tests/test_torch.py
51-57: Approve the addition of torch-specific steps and suggest a minor improvement.
The addition of separate steps for installing torch and running torch-specific tests is a good practice. It allows for more focused dependency management and test execution.
Consider pinning the torch version for better reproducibility:
      - name: Install torch as well
        run: |
-         pip install torch
+         pip install torch==1.10.0  # Replace with the desired version

Also, consider adding the `--cov-append` flag to the pytest command for torch tests to combine coverage with the previous run:

      - name: Run torch tests
        run: |
-         pytest -v --cov=src --junitxml=junit.xml -s tests/test_torch.py
+         pytest -v --cov=src --cov-append --junitxml=junit.xml -s tests/test_torch.py

tests/test_timeable_mixin.py (1)
26-28: Consider adding assertions to verify instantiation.
The test function successfully creates instances of `TimeableMixin` and `TimeableDerived`, but it doesn't verify any properties of these instances. Consider adding assertions to ensure that the objects are created correctly and have the expected attributes or methods.
Here's a suggestion to improve the test:
def test_constructs():
    timeable_mixin = TimeableMixin()
    assert isinstance(timeable_mixin, TimeableMixin)

    timeable_derived = TimeableDerived()
    assert isinstance(timeable_derived, TimeableDerived)
    assert isinstance(timeable_derived, TimeableMixin)
    assert hasattr(timeable_derived, 'foo')

src/mixins/timeable.py (4)
14-26: Enhance the docstring for improved clarity and completeness.
The added docstring provides valuable information about the `TimeableMixin` class. However, consider the following improvements:
- Adjust the indentation for better readability.
- Provide a more precise description of the `_timings` attribute structure.
- Mention the main methods provided by the class.
Here's a suggested revision:
"""A mixin class to add timing functionality to a class for profiling its methods. This mixin class provides the following functionality: - Timing of methods using the TimeAs decorator. - Timing of arbitrary code blocks using the _time_as context manager. - Profiling of the durations of the timed methods. Attributes: _timings (dict): A dictionary storing timing data for methods. - Keys: Names of the timed methods (str). - Values: List of dictionaries, each containing: - "start": Start time of the method call (float). - "end": End time of the method call (float). Main methods: - TimeAs: Decorator for timing methods. - _time_as: Context manager for timing arbitrary code blocks. - _profile_durations: Method to generate a summary of timing statistics. """
Line range hint 38-54: Enhance `_get_pprint_num_unit` method with type hinting.
Consider adding type hints to improve code readability and maintainability.
Here's a suggested improvement:
@classmethod
def _get_pprint_num_unit(cls, x: float, x_unit: str = "sec") -> tuple[float, str]:
    x_unit_factor: float = 1
    for fac, unit in cls._CUTOFFS_AND_UNITS:
        if unit == x_unit:
            break
        if fac is None:
            raise LookupError(
                f"Passed unit {x_unit} invalid! "
                f"Must be one of {', '.join(u for f, u in cls._CUTOFFS_AND_UNITS)}."
            )
        x_unit_factor *= fac

    min_unit: float = x * x_unit_factor
    upper_bound: float = 1
    for upper_bound_factor, unit in cls._CUTOFFS_AND_UNITS:
        if (upper_bound_factor is None) or (min_unit < upper_bound * upper_bound_factor):
            return min_unit / upper_bound, unit
        upper_bound *= upper_bound_factor
Line range hint 56-70: Simplify `_pprint_duration` method for better readability.
The `_pprint_duration` method could be simplified to improve readability and reduce complexity.
Consider refactoring the method as follows:
@classmethod
def _pprint_duration(cls, mean_sec: float, n_times: int = 1, std_seconds: float | None = None) -> str:
    mean_time, mean_unit = cls._get_pprint_num_unit(mean_sec)

    if std_seconds:
        std_time = std_seconds * mean_time / mean_sec
        mean_std_str = f"{mean_time:.1f} ± {std_time:.1f} {mean_unit}"
    else:
        mean_std_str = f"{mean_time:.1f} {mean_unit}"

    return f"{mean_std_str} (x{n_times})" if n_times > 1 else mean_std_str

This refactored version:
- Removes the redundant `else` clause.
- Simplifies the conditional return statement using a ternary operator.
- Improves overall readability while maintaining the same functionality.
Line range hint 149-160: Enhance `_profile_durations` method for improved readability and performance.
The `_profile_durations` method could be improved for better readability and potentially better performance.
Consider refactoring the method as follows:
def _profile_durations(self, only_keys: set[str] | None = None):
    stats = self._duration_stats
    if only_keys:
        stats = {k: v for k, v in stats.items() if k in only_keys}

    if not stats:
        return "No timing data available."

    longest_key_length = max(len(k) for k in stats)
    ordered_keys = sorted(stats, key=lambda k: stats[k][0] * stats[k][1])
    return "\n".join(
        f"{k}:{' ' * (longest_key_length - len(k))} {self._pprint_duration(*stats[k])}"
        for k in ordered_keys
    )

This refactored version:
- Adds a check for empty `stats` to handle cases where no timing data is available.
- Simplifies the `sorted` function call by directly sorting `stats.keys()`.
- Uses a generator expression in the `join` method for potentially better memory efficiency.
- Improves overall readability while maintaining the same functionality.
tests/test_seedable_mixin.py (5)
48-91: Comprehensive test for seed_everything function
The `test_seed_everything` function is well-designed and covers multiple scenarios:
- Seeding with environment variable
- Seeding with a specific value
- Seeding specific engines (random and numpy)
The test effectively verifies the consistency of random number generation across different seeding strategies.
Consider grouping related assertions into helper functions or using parametrized tests to improve readability and maintainability. For example:
def assert_consistent_random(rand_1, rand_2, np_rand_1, np_rand_2):
    assert rand_1 == rand_2
    assert np_rand_1 == np_rand_2
    assert rand_1 != np_rand_1

# Then use it in the test
assert_consistent_random(rand_1_1, rand_1_2, np_rand_1_1, np_rand_1_3)
94-102: Simple construction and seeding benchmark tests
The `test_constructs` and `test_benchmark_seeding` functions are straightforward and serve their purposes well.
For `test_benchmark_seeding`, consider adding multiple scenarios to provide a more comprehensive benchmark. For example:

@pytest.mark.parametrize("seed", [None, 0, 42, 10**6])
def test_benchmark_seeding(benchmark, seed):
    T = SeedableDerived()
    benchmark(T._seed, seed)

This will help identify any performance differences based on the seed value.
105-113: Basic method existence check
The `test_responds_to_methods` function verifies that both `SeedableMixin` and `SeedableDerived` classes respond to the `_seed` and `_last_seed` methods.
Consider enhancing this test to check the behavior of these methods:
def test_responds_to_methods():
    for cls in (SeedableMixin, SeedableDerived):
        T = cls()
        assert T._seed() is not None, f"{cls.__name__}._seed() should return a value"
        assert T._last_seed("foo") == (-1, None), f"{cls.__name__}._last_seed() should return (-1, None) for unused keys"

This improvement will ensure that the methods not only exist but also behave as expected.
141-171: Thorough test for decorated seeding behavior
The `test_decorated_seeding_freezes_randomness` function comprehensively tests the seeding behavior of the decorated method:
- It verifies that unseeded calls produce different results.
- It checks that seeded calls with different seeds produce different results.
- It ensures that seeded calls with the same seed produce consistent results.
- It verifies that the seeding is consistent even when interrupted by different seeds.
This test effectively covers various scenarios for the decorated seeding functionality.
Consider adding a test case for calling the decorated method without explicitly providing a seed after setting a global seed. This will verify that the decorator correctly uses the global seed when no seed is provided:
T._seed(42)
seeded_global = T.decorated_gen_random_num()
assert seeded_global == T.decorated_gen_random_num(seed=42), "Decorated method should use global seed when no seed is provided"
197-221: Comprehensive test for _last_seed method
The `test_get_last_seed` function effectively tests the behavior of the `_last_seed` method:
- It checks the initial state when no seed has been set.
- It verifies that the correct seed and index are returned for a specific key.
- It ensures that the method correctly handles multiple seed settings with different keys.
This test covers the essential functionality of the `_last_seed` method.
Consider adding a test case for retrieving the last seed for a key that hasn't been used for seeding:
idx, seed = T._last_seed("unused_key")
assert idx == -1 and seed is None, "Unused key should return (-1, None)"

This will ensure that the method behaves correctly for all possible scenarios.
src/mixins/seedable.py (1)
18-20: Enhance PyTorch seeding for complete reproducibility
To ensure full reproducibility when using PyTorch, consider setting `torch.backends.cudnn.deterministic` and `torch.backends.cudnn.benchmark` in the `seed_torch` function. This will help in achieving deterministic behavior with CUDA operations.
Apply this diff to incorporate the changes:
 def seed_torch(seed: int):
     torch.manual_seed(seed)
     torch.cuda.manual_seed_all(seed)
+    torch.backends.cudnn.deterministic = True
+    torch.backends.cudnn.benchmark = False
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (21)
- .github/workflows/code-quality-main.yaml (1 hunks)
- .github/workflows/code-quality-pr.yaml (1 hunks)
- .github/workflows/python-build.yaml (1 hunks)
- .github/workflows/tests.yaml (1 hunks)
- pyproject.toml (1 hunks)
- src/mixins/init.py (1 hunks)
- src/mixins/debuggable.py (0 hunks)
- src/mixins/multiprocessingable.py (0 hunks)
- src/mixins/saveable.py (0 hunks)
- src/mixins/seedable.py (2 hunks)
- src/mixins/swapcacheable.py (0 hunks)
- src/mixins/tensorable.py (0 hunks)
- src/mixins/timeable.py (1 hunks)
- src/mixins/tqdmable.py (0 hunks)
- tests/test_saveable_mixin.py (0 hunks)
- tests/test_seedable_mixin.py (2 hunks)
- tests/test_swapcacheable_mixin.py (0 hunks)
- tests/test_tensorable_mixin.py (0 hunks)
- tests/test_timeable_mixin.py (1 hunks)
- tests/test_torch.py (1 hunks)
- tests/test_tqdmable_mixin.py (0 hunks)
💤 Files with no reviewable changes (10)
- src/mixins/debuggable.py
- src/mixins/multiprocessingable.py
- src/mixins/saveable.py
- src/mixins/swapcacheable.py
- src/mixins/tensorable.py
- src/mixins/tqdmable.py
- tests/test_saveable_mixin.py
- tests/test_swapcacheable_mixin.py
- tests/test_tensorable_mixin.py
- tests/test_tqdmable_mixin.py
🧰 Additional context used
🪛 Ruff
src/mixins/seedable.py
114-114: Loop control variable `time` not used within loop body. Rename unused `time` to `_time` (B007)
tests/test_timeable_mixin.py
51-51: Yoda condition detected (SIM300)
56-56: Yoda condition detected (SIM300)
57-57: Yoda condition detected (SIM300)
58-58: Yoda condition detected (SIM300)
66-66: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
94-94: Yoda condition detected. Rewrite as `stats["decorated"][1] == 2` (SIM300)
97-97: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][1] == 1` (SIM300)
98-98: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][2] == 0` (SIM300)
tests/test_torch.py
10-10: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
🔇 Additional comments (23)
src/mixins/__init__.py (1)
1-4: LGTM! Simplified mixin imports and exports.
The changes to this file look good. The imports and `__all__` list are consistent, only including `SeedableMixin` and `TimeableMixin`. This simplification aligns with the overall changes described in the PR summary.
However, given the significant reduction in exported mixins, it's crucial to ensure that these changes don't break existing code. Please run the following script to verify that the removed mixins are no longer used elsewhere in the codebase:
Additionally, consider updating the project's documentation to reflect these changes, especially if any of the removed mixins were part of the public API.
✅ Verification successful
Verified: No usages of removed mixins found.
All removed mixins (`DebuggableMixin`, `MultiprocessingMixin`, `SaveableMixin`, `SwapcacheableMixin`, `TensorableMixin`, `TQDMableMixin`) are not used elsewhere in the codebase. This ensures that the simplification of mixin imports and exports will not impact existing functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining usage of removed mixins
# Test: Search for usage of removed mixins
echo "Searching for usage of removed mixins:"
rg --type python "DebuggableMixin|MultiprocessingMixin|SaveableMixin|SwapcacheableMixin|TensorableMixin|TQDMableMixin"
# Note: If this search returns results, it may indicate areas of the code that need to be updated.

Length of output: 238
Script:
#!/bin/bash
# Description: Check for any remaining usage of removed mixins using glob pattern
echo "Searching for usage of removed mixins in Python files:"
rg --glob '*.py' "DebuggableMixin|MultiprocessingMixin|SaveableMixin|SwapcacheableMixin|TensorableMixin|TQDMableMixin"

Length of output: 239
.github/workflows/code-quality-main.yaml (3)
Line range hint 1-17: LGTM: Workflow configuration is well-structured.
The workflow configuration is appropriate for running code quality checks on the main branch. It correctly triggers on pushes to main and uses the latest Ubuntu runner with Python 3.12.
Line range hint 18-25: LGTM: Checkout and Python setup steps are correctly configured.
The workflow uses the latest versions of actions for checkout (v4) and Python setup (v5), which is a good practice for security and feature updates.
Line range hint 1-33: Overall assessment: Excellent addition to the CI/CD process.
This new workflow file for code quality checks on the main branch is a valuable addition to the repository. It complements the existing PR checks by ensuring that code quality is maintained after merges to the main branch. The workflow is well-structured, uses up-to-date action versions, and includes all necessary steps.
The suggestions provided for the package installation step could further enhance the workflow's efficiency and reliability, but the current implementation is already solid and functional.
pyproject.toml (4)
22-22
: LGTM: Version constraint added for pre-commit.
Adding a version constraint
<4
for thepre-commit
package is a good practice. It ensures compatibility with known working versions and prevents potential breaking changes from future major releases.
22-23
: Summary: Significant changes to project dependencies and testing strategyThe changes to
pyproject.toml
indicate a substantial update to the project's dependencies and testing strategy. Here's a summary of the key points:
- Version constraint added for
pre-commit
in dev dependencies.- Introduction of
pytest-benchmark
for performance testing.- Removal of
tqdmable
andtensorable
optional dependencies.These changes suggest an evolution in the project's focus or requirements. To better understand the full context of these changes, it would be helpful if you could provide an overview of:
- The project's current development phase and how these changes align with its goals.
- Any performance or compatibility issues that prompted these updates.
- The overall strategy for managing optional dependencies and how it might be changing.
This information will help in understanding how these changes fit into the broader picture of the project's development and ensure that all potential impacts have been considered.
To get a broader view of the changes across the project, we can run the following script:
#!/bin/bash # Description: Overview of project changes # Test 1: List all modified Python files echo "Modified Python files:" git diff --name-only origin/main -- '*.py' # Test 2: Check for changes in test files echo "Changes in test files:" git diff --stat origin/main -- 'tests/*.py' # Test 3: Check for changes in CI configuration echo "Changes in CI configuration:" git diff --stat origin/main -- '.github/workflows/*.yml'
23-23
: LGTM: Added pytest-benchmark to test dependencies. Please clarify its introduction.The addition of
pytest-benchmark
to the test dependencies is noted. This suggests the introduction of performance benchmarking capabilities to the test suite, which is generally a positive addition for monitoring and improving code performance.However, could you please provide some context on the decision to introduce benchmarking at this stage? It would be helpful to understand:
- The specific areas or functions targeted for benchmarking.
- How these benchmarks will be integrated into the existing test suite or CI/CD pipeline.
- Any performance targets or thresholds that have been set.
This information will help in understanding the broader impact of this change on the project's testing strategy.
To verify the introduction of benchmarking tests, we can search for their usage in the test files:
#!/bin/bash # Description: Search for usage of pytest-benchmark in test files # Test: Look for benchmark fixtures or decorators in test files rg --type python 'def test_.*benchmark' tests/ rg --type python '@pytest\.mark\.benchmark' tests/
22-23
: Please clarify the removal of 'tqdmable' and 'tensorable' dependencies.
The AI summary mentions the removal of
tqdmable
andtensorable
dependencies from the[project.optional-dependencies]
section. However, these are not visible in the provided code snippet. This removal could have significant implications:
- It suggests a change in the project's optional features.
- Existing code relying on these dependencies might be affected.
Could you please provide more context on:
- The rationale behind removing these dependencies.
- Any code changes made to accommodate this removal.
- The impact on existing functionality and how it's being mitigated.
This information will help in assessing the full scope of these changes and their potential impact on the project.
To verify the removal of these dependencies and check for any remaining usage, we can run the following script:
✅ Verification successful
Removal of 'tqdmable' and 'tensorable' Dependencies Verified Successfully.
All instances of
tqdmable
andtensorable
have been removed frompyproject.toml
. Additionally, there are no remaining import statements for these dependencies in the codebase, and CI configurations remain unaffected. The removal does not introduce any issues to the project.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify removal of tqdmable and tensorable dependencies and check for remaining usage # Test 1: Check if tqdmable and tensorable are still present in pyproject.toml echo "Checking pyproject.toml for tqdmable and tensorable:" grep -E "tqdmable|tensorable" pyproject.toml # Test 2: Search for import statements of tqdm and torch in the project echo "Searching for tqdm imports:" rg --type python "^import tqdm|^from tqdm" echo "Searching for torch imports:" rg --type python "^import torch|^from torch"Length of output: 1280
Script:
#!/bin/bash # Description: Re-verify removal of tqdmable and tensorable dependencies and check for remaining usage # Test 1: Check if tqdmable and tensorable are still present in pyproject.toml echo "Checking pyproject.toml for tqdmable and tensorable:" grep -E "tqdmable|tensorable" pyproject.toml # Test 2: Search for import statements of tqdm and tensorable in the project echo "Searching for tqdm imports:" rg --type python "^import tqdm|^from tqdm" echo "Searching for tensorable imports:" rg --type python "^import tensorable|^from tensorable" # Test 3: Verify changes in CI configuration without syntax errors echo "Checking CI configuration files for changes related to tqdmable and tensorable:" git diff --stat origin/main -- '.github/workflows/*.yml'Length of output: 695
Script:
#!/bin/bash # Description: Re-verify removal of tqdmable and tensorable dependencies and check for remaining usage # Test 1: Check if tqdmable and tensorable are still present in pyproject.toml echo "Checking pyproject.toml for tqdmable and tensorable:" grep -E "tqdmable|tensorable" pyproject.toml # Test 2: Search for import statements of tqdm and tensorable in the project echo "Searching for tqdm imports:" rg "^import tqdm|^from tqdm" -g '*.py' echo "Searching for tensorable imports:" rg "^import tensorable|^from tensorable" -g '*.py' # Test 3: Verify changes in CI configuration without syntax errors echo "Checking CI configuration files for changes related to tqdmable and tensorable:" git diff --stat origin/main -- '.github/workflows/*.yml'Length of output: 625
.github/workflows/code-quality-pr.yaml (2)
29-32
: Excellent addition of the package installation step!The new "Install packages" step is a valuable addition to the workflow. It ensures that the project and its development dependencies are properly installed before running the code quality checks. This has several benefits:
- It creates a consistent environment for all PRs, reducing the chances of environment-related issues.
- By installing in editable mode (
-e
flag), it allows for any local changes to be immediately reflected, which is ideal for development and testing.- Including the
[dev]
extra ensures that all development dependencies are available for tools like pre-commit hooks.This step will help catch dependency-related issues earlier and provide a more reliable code quality check process.
Line range hint
1-32
: Overall, this is a well-implemented update to the workflowThe addition of the package installation step significantly improves the robustness of the code quality checks. It ensures a consistent environment and helps catch potential issues earlier in the process. The placement and implementation of this step are appropriate.
While the current changes are already beneficial, consider the suggested optimizations for pip upgrading and package caching to further enhance the workflow's efficiency and reliability.
Great job on improving the CI process!
tests/test_torch.py (2)
13-14
: LGTM: Benchmark test is well-defined.The benchmark test for
seed_everything
with torch as the seed engine is concise and correctly implemented.
1-46
: Overall, excellent test implementation for seed_everything function.This new file introduces comprehensive tests for the
seed_everything
function, focusing on its integration with PyTorch. The tests cover both performance benchmarking and functional correctness across different random number generators.Key strengths:
- Proper error handling for PyTorch import.
- Concise benchmark test.
- Thorough testing of seeding functionality across Python's random, NumPy, and PyTorch.
The suggested improvements (exception handling and code structure) will further enhance the code quality without altering its functionality.
🧰 Tools
🪛 Ruff
10-10: Within an
except
clause, raise exceptions withraise ... from err
orraise ... from None
to distinguish them from errors in exception handling(B904)
.github/workflows/tests.yaml (3)
44-50
: Approve the addition of test results upload to Codecov.
The addition of a step to upload test results to Codecov is a valuable improvement. It enhances visibility into test outcomes and aids in debugging. The condition to upload even when the job is cancelled is particularly useful for capturing partial results.
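For reference, such a step typically looks like the sketch below; the action name, version, and inputs are assumptions and are not taken from this PR's diff:

      - name: Upload test results to Codecov
        if: ${{ !cancelled() }}  # also runs when the job is cancelled, so partial results are captured
        uses: codecov/test-results-action@v1  # assumed action; the actual step in this PR may differ
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: junit.xml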
Line range hint
1-63
: Overall, the changes to the testing workflow are well-structured and beneficial.The separation of torch and non-torch tests, along with the addition of test result uploads to Codecov, significantly improves the organization and visibility of the CI process. The suggestions provided in the review comments, such as combining coverage data and adding explanatory comments, can further enhance the workflow's effectiveness and maintainability.
Great job on these improvements!
🧰 Tools
🪛 actionlint
30-30: shellcheck reported issue in this script: SC2102:info:1:17: Ranges can only match single chars (mentioned due to duplicates)
(shellcheck)
31-31
: Verify the impact of removing 'tqdmable' and 'tensorable' dependencies.The package installation command has been simplified, removing the 'tqdmable' and 'tensorable' dependencies. Please ensure that these are no longer required for the tests or any other part of the project.
To check if these dependencies are still used in the project, run the following script:
If the script returns any results, consider re-adding these dependencies or updating the relevant code.
✅ Verification successful
'tqdmable' and 'tensorable' dependencies are not used in the project.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for usage of 'tqdmable' and 'tensorable' in the project
echo "Searching for 'tqdmable' usage:"
rg --type python 'tqdmable'
echo "Searching for 'tensorable' usage:"
rg --type python 'tensorable'

Length of output: 281
Script:
#!/bin/bash # Description: Check for usage of 'tqdmable' and 'tensorable' in the project echo "Searching for 'tqdmable' usage:" rg --type py 'tqdmable' echo "Searching for 'tensorable' usage:" rg --type py 'tensorable'Length of output: 211
Script:
#!/bin/bash # Description: Check if 'tqdmable' and 'tensorable' are listed in dependency or configuration files echo "Searching for 'tqdmable' in dependency/configuration files:" rg 'tqdmable' --glob '*.yml' --glob '*.yaml' --glob '*.toml' --glob 'requirements.txt' --glob 'setup.py' --glob 'pyproject.toml' echo "Searching for 'tensorable' in dependency/configuration files:" rg 'tensorable' --glob '*.yml' --glob '*.yaml' --glob '*.toml' --glob 'requirements.txt' --glob 'setup.py' --glob 'pyproject.toml'Length of output: 521
.github/workflows/python-build.yaml (1)
77-77
: Approve the update of sigstore/gh-action-sigstore-python action version.
The update from v2.1.1 to v3.0.0 of the sigstore/gh-action-sigstore-python action is a good practice to keep the workflow up-to-date with the latest features and security improvements.
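For context, the updated step would reference the new tag roughly as in the sketch below; the step name and the `inputs` paths are assumptions, only the action and version come from this comment:

      - name: Sign distribution artifacts
        uses: sigstore/gh-action-sigstore-python@v3.0.0
        with:
          inputs: ./dist/*.tar.gz ./dist/*.whl  # illustrative paths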
To ensure this update doesn't introduce any breaking changes, please verify the changelog or release notes for v3.0.0. You can run the following script to check for any recent issues or discussions related to this update:
If there are any concerning issues or discussions, please review them before merging this change.
✅ Verification successful
Approve the update of sigstore/gh-action-sigstore-python action version.
The update from v2.1.1 to v3.0.0 of the sigstore/gh-action-sigstore-python action does not introduce any active usage of the removed settings (
fulcio-url
,rekor-url
,ctfe
,rekor-root-pubkey
,signature
,certificate
,bundle
) in the workflow file. The mentions ofsignature
andcertificate
are only present in comments and do not affect the workflow's functionality.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for recent issues or discussions related to sigstore/gh-action-sigstore-python v3.0.0 # Test: Search for recent issues mentioning v3.0.0 gh issue list -R sigstore/gh-action-sigstore-python -S "v3.0.0 in:title,body" --limit 5 --json number,title,url # Test: Search for recent discussions mentioning v3.0.0 gh discussion list -R sigstore/gh-action-sigstore-python -S "v3.0.0 in:title,body" --limit 5 --json number,title,url # Note: Ensure you have the GitHub CLI (gh) installed and authenticated to run these commands.Length of output: 587
Script:
#!/bin/bash # Description: Verify updates related to sigstore/gh-action-sigstore-python v3.0.0 # Search for recent issues mentioning v3.0.0 gh issue list -R sigstore/gh-action-sigstore-python -S "v3.0.0" --limit 5 --json number,title,url # Retrieve the latest release notes for v3.0.0 gh release view v3.0.0 -R sigstore/gh-action-sigstore-python --json name,tagName,body # Check the commit history for any notes on the update gh api repos/sigstore/gh-action-sigstore-python/commits --paginate -q '.[].commit.message' | grep -i "v3.0.0"Length of output: 2323
Script:
#!/bin/bash # Description: Check for usage of removed settings in python-build.yaml for sigstore/gh-action-sigstore-python v3.0.0 # Define the workflow file path WORKFLOW_FILE=".github/workflows/python-build.yaml" # Define the list of removed settings REMOVED_SETTINGS=( "fulcio-url" "rekor-url" "ctfe" "rekor-root-pubkey" "signature" "certificate" "bundle" ) # Iterate over each removed setting and search for its usage for setting in "${REMOVED_SETTINGS[@]}"; do echo "Searching for '${setting}' in ${WORKFLOW_FILE}..." rg --ignore-case "${setting}" "${WORKFLOW_FILE}" doneLength of output: 2147
tests/test_timeable_mixin.py (1)
Line range hint 1-106: Overall improvement in test structure and coverage.
The transition from class-based to function-based tests represents a positive modernization of the test suite. This change aligns well with pytest conventions and generally leads to more maintainable and readable tests. The new structure allows for easier addition of new test cases and better isolation between tests.
Key improvements and suggestions:
- Each test function now focuses on a specific aspect of the `TimeableMixin` and `TimeableDerived` classes.
- The use of `np.testing.assert_almost_equal` for floating-point comparisons is a good practice.
- The addition of more assertions and error checking in the suggested improvements will enhance the robustness of the tests.
- Breaking down larger test functions into smaller, more focused ones improves readability and maintainability.
- The use of pytest features like `pytest.mark.parametrize` can further streamline the tests.
To further improve the test suite, consider:
- Adding more edge cases and boundary condition tests.
- Implementing property-based testing for functions that accept a wide range of inputs.
- Ensuring consistent naming conventions across all test functions.
- Adding docstrings to test functions to clearly state their purpose and any complex setup.
Overall, these changes and suggestions should result in a more comprehensive and maintainable test suite for the `TimeableMixin` functionality.
🧰 Tools
🪛 Ruff
51-51: Yoda condition detected (SIM300)
56-56: Yoda condition detected (SIM300)
57-57: Yoda condition detected (SIM300)
58-58: Yoda condition detected (SIM300)
66-66: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
94-94: Yoda condition detected. Rewrite as `stats["decorated"][1] == 2` (SIM300)
97-97: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][1] == 1` (SIM300)
98-98: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][2] == 0` (SIM300)
src/mixins/timeable.py (1)
Line range hint 1-160: Overall assessment: Docstring addition is beneficial, with room for further improvements.
The addition of the detailed docstring to the `TimeableMixin` class is a positive change that enhances the class's documentation. This improvement aids in understanding the class's purpose and functionality.
While no functional changes were made to the existing code, there are opportunities for further enhancements:
- The newly added docstring could be refined for better clarity and completeness.
- Some methods, such as `_get_pprint_num_unit`, `_pprint_duration`, and `_profile_durations`, could benefit from refactoring to improve readability, maintainability, and potentially performance.
These suggested improvements, if implemented, would further elevate the quality of the code without altering its core functionality.
tests/test_seedable_mixin.py (4)
35-46: Well-structured benchmark tests for seed_everything
The new benchmark tests for `seed_everything` are well-designed and cover important scenarios:
- Without a seed
- With a specific seed
- With an environment variable
These tests will help ensure the performance of the seeding function across different use cases.
116-138: Comprehensive test for seeding behavior
The `test_seeding_freezes_randomness` function effectively verifies the seeding behavior:
- It checks that unseeded calls produce different results.
- It ensures that seeded calls with the same seed produce consistent results.
- It verifies that different seeded calls produce different results.
This test covers the essential aspects of the seeding functionality for the
gen_random_num
method.
174-194: Effective test for seed sequence consistency
The `test_seeds_follow_consistent_sequence` function effectively verifies the behavior of seed sequences:
- It checks that unseeded sequences generate different seeds.
- It verifies that seeded sequences with the same initial seed produce consistent results.
- It ensures that seeded sequences differ from unseeded sequences.
This test comprehensively covers the expected behavior of seed generation and consistency.
Line range hint 1-221: Comprehensive refactoring and enhancement of the test suite
The changes to this file significantly improve the test suite for the `SeedableMixin` class:
- The transition from unittest to pytest-style tests enhances readability and maintainability.
- New benchmark tests have been added to monitor the performance of critical functions like
seed_everything
.- Existing tests have been expanded to cover more scenarios and edge cases.
- The tests now provide better coverage for seeding behavior, including decorated methods and seed sequences.
These improvements will help ensure the reliability and performance of the `SeedableMixin` class as the codebase evolves.
src/mixins/seedable.py (1)
66-67
: Verify the use of 'PL_GLOBAL_SEED' environment variable
The code checks for the environment variable
PL_GLOBAL_SEED
when no seed is provided. Please verify ifPL_GLOBAL_SEED
is the appropriate variable for this context or if a different environment variable should be used.To confirm whether
PL_GLOBAL_SEED
is used elsewhere in the codebase, you can run the following script:✅ Verification successful
Verified: 'PL_GLOBAL_SEED' is used appropriately
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for occurrences of 'PL_GLOBAL_SEED' in the codebase.
# Test: Search for 'PL_GLOBAL_SEED' in code files.
rg --type py 'PL_GLOBAL_SEED'

Length of output: 307
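For reference, the fallback behaviour being verified looks roughly like the sketch below; the helper name and the final default are illustrative, only the PL_GLOBAL_SEED variable name comes from the code under review:

import os

def _resolve_seed(seed: int | None) -> int:
    # Illustrative helper, not from this PR: fall back to PL_GLOBAL_SEED when no seed is given.
    if seed is None and "PL_GLOBAL_SEED" in os.environ:
        seed = int(os.environ["PL_GLOBAL_SEED"])
    if seed is None:
        seed = int.from_bytes(os.urandom(4), "little")  # arbitrary fresh seed
    return seed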
      - name: Install packages
        run: |
          pip install -e .[dev]
🛠️ Refactor suggestion
Consider additional optimizations for the workflow
While the new package installation step is a great addition, consider the following optimizations to further improve the workflow:
- Add a step to upgrade pip before installing packages. This ensures you're using the latest pip version, which can sometimes resolve dependency issues:
- name: Upgrade pip
run: python -m pip install --upgrade pip
- Implement caching for pip packages to speed up subsequent runs:
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
These changes could make your workflow more robust and efficient.
      - name: Upload coverage to Codecov
        uses: codecov/[email protected]
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
🛠️ Refactor suggestion
Suggest combining coverage data from both test runs.
While adding a separate coverage upload for torch tests ensures that this data is captured, it might overwrite the coverage data from the non-torch tests. To get a complete picture of the test coverage, consider combining the coverage data from both test runs.
Here's a suggested approach:
- After the non-torch tests, instead of uploading to Codecov, save the coverage data:
- name: Save non-torch coverage
run: |
coverage xml -o coverage-non-torch.xml
- After the torch tests, combine the coverage data and then upload:
- name: Combine coverage and upload to Codecov
run: |
coverage combine coverage-non-torch.xml .coverage
coverage xml
uses: codecov/[email protected]
with:
token: ${{ secrets.CODECOV_TOKEN }}
This approach will give you a combined coverage report for both non-torch and torch tests.
def test_responds_to_methods():
    T = TimeableMixin()

    T._times_for("key")
    T._register_start("key")
    T._register_end("key")

    T._register_start("key")
    T._time_so_far("key")
    T._register_end("key")
    T._times_for("key")

def test_pprint_num_unit(self):
    self.assertEqual((5, "μs"), TimeableMixin._get_pprint_num_unit(5 * 1e-6))
    T._register_start("key")
    T._time_so_far("key")
    T._register_end("key")
Enhance test coverage with assertions.
The current test function calls various methods of TimeableMixin
, but it doesn't verify their behavior or output. To improve the effectiveness of this test, consider adding assertions after each method call.
Here's a suggestion to improve the test:
def test_responds_to_methods():
    T = TimeableMixin()

    T._register_start("key")
    assert "key" in T._start_times, "Start time should be registered"

    time.sleep(0.1)  # Small delay to ensure measurable time difference

    T._register_end("key")
    assert "key" in T._end_times, "End time should be registered"

    times = T._times_for("key")
    assert len(times) == 1, "Should have one recorded time"
    assert times[0] > 0, "Recorded time should be positive"

    T._register_start("key")
    time_so_far = T._time_so_far("key")
    assert time_so_far > 0, "Time so far should be positive"

    T._register_end("key")
    assert len(T._times_for("key")) == 2, "Should have two recorded times now"
This improved version adds assertions to verify the behavior of each method, ensuring that the TimeableMixin
is functioning as expected.
def test_benchmark_timing(benchmark):
    T = TimeableDerived()

def test_context_manager(self):
    T = TimeableDerived()
    benchmark(T.decorated_takes_time, 0.00001)
🛠️ Refactor suggestion
Enhance benchmark test with assertions and realistic values.
While using a benchmark fixture is good for performance testing, the current implementation has some limitations:
- The time value (0.00001 seconds) is extremely small and may not provide meaningful results due to system-level timing inconsistencies.
- There are no assertions to verify the benchmark results.
Consider the following improvements:
- Use a more realistic time value for benchmarking.
- Add assertions to verify the benchmark results.
- Consider running the benchmark multiple times to get a more stable average.
Here's a suggested improvement:
def test_benchmark_timing(benchmark):
    T = TimeableDerived()
    result = benchmark(T.decorated_takes_time, 0.1)  # Use 0.1 seconds for a more stable measurement

    # Verify that the function returned None (as expected for a time.sleep() call)
    assert result is None

    # Verify that the execution time is close to the expected time
    # Note: benchmark.stats contains timing statistics
    assert 0.09 < benchmark.stats.mean < 0.11, f"Expected mean execution time close to 0.1, got {benchmark.stats.mean}"
This improved version uses a more realistic time value, adds assertions to verify the function's return value and execution time, and leverages the benchmark statistics for a more meaningful test.
def test_pprint_num_unit():
    assert (5, "μs") == TimeableMixin._get_pprint_num_unit(5 * 1e-6)

def test_times_and_profiling(self):
    T = TimeableDerived()
    T.decorated_takes_time(num_seconds=2)
    class Derived(TimeableMixin):
        _CUTOFFS_AND_UNITS = [(10, "foo"), (2, "bar"), (None, "biz")]

    duration = T._times_for("decorated")[-1]
    np.testing.assert_almost_equal(duration, 2, decimal=1)
    assert (3, "biz") == Derived._get_pprint_num_unit(3, "biz")
    assert (3, "foo") == Derived._get_pprint_num_unit(3 / 20, "biz")
    assert (1.2, "biz") == Derived._get_pprint_num_unit(2.4 * 10, "foo")

    T.decorated_takes_time_auto_key(num_seconds=2)
    duration = T._times_for("decorated_takes_time_auto_key")[-1]
    np.testing.assert_almost_equal(duration, 2, decimal=1)
    try:
        Derived._get_pprint_num_unit(1, "WRONG")
        raise AssertionError("Should have raised an exception")
    except LookupError:
        pass
    except Exception as e:
        raise AssertionError(f"Raised the wrong exception: {e}")
🛠️ Refactor suggestion
Improve test readability and exception handling.
The test function covers various scenarios well, but there are a few areas for improvement:
- The Yoda conditions flagged by the static analysis tool can be rewritten for better readability.
- The exception test can be improved using a context manager.
Here's a suggested improvement:
import pytest

def test_pprint_num_unit():
    assert TimeableMixin._get_pprint_num_unit(5 * 1e-6) == (5, "μs")

    class Derived(TimeableMixin):
        _CUTOFFS_AND_UNITS = [(10, "foo"), (2, "bar"), (None, "biz")]

    assert Derived._get_pprint_num_unit(3, "biz") == (3, "biz")
    assert Derived._get_pprint_num_unit(3 / 20, "biz") == (3, "foo")
    assert Derived._get_pprint_num_unit(2.4 * 10, "foo") == (1.2, "biz")

    with pytest.raises(LookupError):
        Derived._get_pprint_num_unit(1, "WRONG")
This improved version:
- Reverses the Yoda conditions for better readability.
- Uses a context manager (
with pytest.raises(LookupError):
) for cleaner exception testing. - Removes the unnecessary
try
-except
block, simplifying the code.
These changes make the test more readable and maintainable while still covering all the necessary scenarios.
🧰 Tools
🪛 Ruff
51-51: Yoda condition detected (SIM300)
56-56: Yoda condition detected (SIM300)
57-57: Yoda condition detected (SIM300)
58-58: Yoda condition detected (SIM300)
66-66: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
def test_times_and_profiling():
    T = TimeableDerived()
    T.decorated_takes_time(num_seconds=2)

    duration = T._times_for("decorated")[-1]
    np.testing.assert_almost_equal(duration, 2, decimal=1)

    T.decorated_takes_time_auto_key(num_seconds=2)
    duration = T._times_for("decorated_takes_time_auto_key")[-1]
    np.testing.assert_almost_equal(duration, 2, decimal=1)

    T.decorated_takes_time(num_seconds=1)
    stats = T._duration_stats

    assert {"decorated", "decorated_takes_time_auto_key"} == set(stats.keys())
    np.testing.assert_almost_equal(1.5, stats["decorated"][0], decimal=1)
    assert 2 == stats["decorated"][1]
    np.testing.assert_almost_equal(0.5, stats["decorated"][2], decimal=1)
    np.testing.assert_almost_equal(2, stats["decorated_takes_time_auto_key"][0], decimal=1)
    assert 1 == stats["decorated_takes_time_auto_key"][1]
    assert 0 == stats["decorated_takes_time_auto_key"][2]

    got_str = T._profile_durations()
    want_str = "decorated_takes_time_auto_key: 2.0 sec\ndecorated: 1.5 ± 0.5 sec (x2)"
    assert want_str == got_str, f"Want:\n{want_str}\nGot:\n{got_str}"

    got_str = T._profile_durations(only_keys=["decorated_takes_time_auto_key"])
    want_str = "decorated_takes_time_auto_key: 2.0 sec"
    assert want_str == got_str, f"Want:\n{want_str}\nGot:\n{got_str}"
🛠️ Refactor suggestion
Enhance test structure and readability.
The test function covers multiple aspects of the TimeableDerived
class comprehensively. However, there are a few areas for improvement:
- The Yoda conditions flagged by the static analysis tool can be rewritten for better readability.
- The test function is quite long and tests multiple features. Consider breaking it down into smaller, more focused test functions.
Here's a suggested improvement:
import pytest

@pytest.mark.parametrize("method_name, expected_duration", [
    ("decorated_takes_time", 2),
    ("decorated_takes_time_auto_key", 2),
])
def test_decorated_method_timing(method_name, expected_duration):
    T = TimeableDerived()
    method = getattr(T, method_name)
    method(num_seconds=expected_duration)
    duration = T._times_for(method_name.replace("decorated_takes_time_", "decorated"))[-1]
    np.testing.assert_almost_equal(duration, expected_duration, decimal=1)

def test_multiple_executions_and_stats():
    T = TimeableDerived()
    T.decorated_takes_time(num_seconds=2)
    T.decorated_takes_time(num_seconds=1)
    stats = T._duration_stats
    assert set(stats.keys()) == {"decorated"}
    np.testing.assert_almost_equal(stats["decorated"][0], 1.5, decimal=1)
    assert stats["decorated"][1] == 2
    np.testing.assert_almost_equal(stats["decorated"][2], 0.5, decimal=1)

def test_profile_durations():
    T = TimeableDerived()
    T.decorated_takes_time(num_seconds=2)
    T.decorated_takes_time_auto_key(num_seconds=2)
    got_str = T._profile_durations()
    want_str = "decorated_takes_time_auto_key: 2.0 sec\ndecorated: 2.0 sec"
    assert got_str == want_str, f"Want:\n{want_str}\nGot:\n{got_str}"
    got_str = T._profile_durations(only_keys=["decorated_takes_time_auto_key"])
    want_str = "decorated_takes_time_auto_key: 2.0 sec"
    assert got_str == want_str, f"Want:\n{want_str}\nGot:\n{got_str}"
This improved version:
- Breaks down the large test function into smaller, more focused tests.
- Uses
pytest.mark.parametrize
to test both decorated methods with a single test function. - Reverses the Yoda conditions for better readability.
- Simplifies some assertions and improves variable naming.
These changes make the tests more readable, maintainable, and easier to debug if they fail.
🧰 Tools
🪛 Ruff
94-94: Yoda condition detected. Rewrite as `stats["decorated"][1] == 2` (SIM300)
97-97: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][1] == 1` (SIM300)
98-98: Yoda condition detected. Rewrite as `stats["decorated_takes_time_auto_key"][2] == 0` (SIM300)
try:
    pass

    raise ImportError("This test requires torch not to be installed to run.")
except (ImportError, ModuleNotFoundError):
    pass
Remove unnecessary try-except block
The try-except block for importing torch appears to be empty and serves no purpose. Consider removing it to improve code clarity.
Apply this diff to remove the unnecessary code:
-try:
- pass
-
- raise ImportError("This test requires torch not to be installed to run.")
-except (ImportError, ModuleNotFoundError):
- pass
    (1, 2)
    >>> M._last_seed("baz")
    (3, 6)
    """
    for idx, (s, k, time) in enumerate(self._past_seeds[::-1]):
Rename unused loop variable `time` in `_last_seed` method
In the `_last_seed` method, the loop control variable `time` is not used within the loop body. To indicate that it is intentionally unused and to adhere to best practices, consider renaming it to `_time`.
Apply this diff to fix the issue:

-for idx, (s, k, time) in enumerate(self._past_seeds[::-1]):
-    if k == key:
-        idx = len(self._past_seeds) - 1 - idx
-        return idx, s
+for idx, (s, k, _time) in enumerate(self._past_seeds[::-1]):
+    if k == key:
+        idx = len(self._past_seeds) - 1 - idx
+        return idx, s
Committable suggestion was skipped due to low confidence.
🧰 Tools
🪛 Ruff
114-114: Loop control variable `time` not used within loop body. Rename unused `time` to `_time` (B007)
Summary by CodeRabbit
New Features
- … `seed_everything` function.
Bug Fixes
Documentation
- … `TimeableMixin` class.
Refactor
- … `unittest` framework in favor of function-based tests.
Chores