Add audinterface.Segment.process_table() #172
Conversation
I don't get why we run into a … for most of the tests when running …

Concerning …: I checked all files with …

Regarding …: the easiest solution locally would be to run

```
$ pre-commit install
$ pre-commit run --all-files
```

When running it the first time it will fail, as it has to make changes, but when running it again you will see that it passes.

The test fails with the same error locally for me.
Codecov Report: All modified and coverable lines are covered by tests ✅
…gmentation returning additional segments
thanks
yes, I forgot
I added a test here (ll. 333–). Generally, handling all edge cases and ensuring that the … Otherwise, all tests are running without errors now. @hagenw
8bd24aa resolved a bug for …

```
File "xxx/audinterface/core/segment.py", line 596, in <dictcomp>
    col: labels[:, icol].astype(dtypes[icol])
TypeError: Cannot interpret 'CategoricalDtype(categories=['anger', 'boredom', 'disgust',
    'fear', 'happiness', 'sadness', 'neutral'], ordered=False,
    categories_dtype=object)' as a data type
```
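For context, a minimal sketch (with hypothetical label values) of how this class of TypeError arises: numpy's `ndarray.astype()` cannot interpret a pandas extension dtype such as `CategoricalDtype`, whereas converting through a pandas Series works. This only illustrates the error, it is not the actual fix in 8bd24aa.

```python
import numpy as np
import pandas as pd

# Hypothetical labels array and dtype, mirroring the traceback above
labels = np.array([["anger"], ["happiness"]], dtype=object)
dtype = pd.CategoricalDtype(["anger", "boredom", "happiness"])

try:
    # numpy cannot interpret a pandas extension dtype
    labels[:, 0].astype(dtype)
except TypeError as error:
    print(error)  # "Cannot interpret 'CategoricalDtype(...)' as a data type"

# Converting via a pandas Series handles extension dtypes
converted = pd.Series(labels[:, 0]).astype(dtype)
print(converted.dtype)  # category
```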
Very nice, thanks for adding this.
Besides the comments I made, it would be nice to have a test that shows the new method behaves well if the segment function returns overlapping segments, e.g. (a sketch of such a segment function follows the example):
start,end
0,2
1,3
2,4
...
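For illustration, a rough sketch of a segment function that produces such overlapping segments and could drive this test. It assumes the usual audinterface.Segment contract (the segment function returns a pandas.MultiIndex with "start" and "end" levels); it is not the test that was actually added.

```python
import pandas as pd
import audinterface

def segment_with_overlap(signal, sampling_rate):
    # Return overlapping segments 0-2 s, 1-3 s, 2-4 s,
    # independent of the signal content
    starts = pd.to_timedelta([0, 1, 2], unit="s")
    ends = pd.to_timedelta([2, 3, 4], unit="s")
    return pd.MultiIndex.from_arrays([starts, ends], names=["start", "end"])

interface = audinterface.Segment(process_func=segment_with_overlap)
# The test would then check that process_table()
# keeps all overlapping segments and their labels
```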
Co-authored-by: Hagen Wierstorf <[email protected]>
This was now added by e393dc8. All other suggestions were adopted. Some tests are failing now (especially for Python 3.8; everything seems to work for newer versions), but the errors likely originate from code that has not been touched.
It's indeed interesting that the tests for Python 3.8 fail due to …
I can also not reproduce the failing test locally with Python 3.8.

```python
try:
    deps = Dependencies()
    deps.load(cached_deps_file)
except (AttributeError, FileNotFoundError, ValueError, EOFError):
    # If loading cached file fails, load again from backend
    backend_interface = utils.lookup_backend(name, version)
    deps = download_dependencies(backend_interface, name, version, verbose)
    # Store as pickle in cache
    deps.save(cached_deps_file)
```

So it seems we need to catch …
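For illustration only: the concrete exception to add is not shown in this thread. Assuming the incompatible cached file surfaces as `pyarrow.lib.ArrowInvalid` (an existing pyarrow exception, but only a guess here), the fallback could be extended like this:

```python
import pyarrow as pa

try:
    deps = Dependencies()
    deps.load(cached_deps_file)
except (
    AttributeError,
    FileNotFoundError,
    ValueError,
    EOFError,
    pa.lib.ArrowInvalid,  # assumed placeholder for the pyarrow cache error
):
    # Fall back to downloading the dependency table from the backend
    backend_interface = utils.lookup_backend(name, version)
    deps = download_dependencies(backend_interface, name, version, verbose)
    deps.save(cached_deps_file)
```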
I proposed a fix for …
The test for Python 3.8 is now passing, but the test under Windows fails for the same reason as the Python 3.8 test did before. We will discuss this in …
Now, only the tests on Windows are failing due to pyarrow: …
Yes, but the reason is basically the same as before under Python 3.8, see audeering/audb#411 (comment). So we again need to fix it in …
As the audb error is related to the cache, I managed to get the tests to pass by deleting the existing cache and re-running the tests. The underlying problem, and how to solve it in audb, is discussed in audeering/audb#413.
I have updated the description of the pull request by adding two screenshots of the new documentation, as it is always helpful when the description documents what was added by the pull request. Otherwise, I have just one other suggestion, and then we should be good to go here.
Co-authored-by: Hagen Wierstorf <[email protected]>
The Windows test (at least for Python 3.9) is still failing. Can we also delete the cache for this one?
As I understood it, it should have started by creating a new cache already, which means that for some reason the new … I could delete the cache and start the pipeline again, but then we would most likely need to do that again after merging. If you don't need this feature next week, I would propose to postpone until we have found a better solution in …
Just for the record: …
Makes sense, no rush!
Good news, with …
All fine for merging here.
New method Segment.process_table(). Added usage instructions in Usage.rst and tests. #167 (comment)

From the usage documentation: …
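A rough sketch of how the new method might be used, assuming process_table() accepts a table (e.g. a pandas.Series) indexed by a filewise index and returns it with a segmented index; the exact signature may differ, and the file names and segmentation function below are placeholders.

```python
import pandas as pd
import audinterface

def halves(signal, sampling_rate):
    # Toy segmentation: split each signal into two halves
    duration = pd.to_timedelta(signal.shape[1] / sampling_rate, unit="s")
    half = duration / 2
    return pd.MultiIndex.from_arrays(
        [[0 * half, half], [half, duration]],
        names=["start", "end"],
    )

interface = audinterface.Segment(process_func=halves)

# Hypothetical table with one label per file;
# the file names are placeholders and must point to existing media files
table = pd.Series(
    ["happiness", "anger"],
    index=pd.Index(["f1.wav", "f2.wav"], name="file"),
    name="emotion",
)

# Segment every file listed in the table and assign the file's
# original label to each resulting segment
segmented_table = interface.process_table(table)
```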