
Encoding baseline duration exceeds memory limit #147

Open
james-tanner opened this issue Jun 24, 2019 · 1 comment

@james-tanner (Contributor)

I've been trying to use the encode_baseline measure for words inside a SPADE script, currently:

with CorpusContext(config) as c:
    if not c.hierarchy.has_token_property('word', 'baseline'):
        print('getting baseline word duration')
        c.encode_baseline('word', 'duration')

This works fine on smaller corpora (like ICE-Can or Modern RP), but it exceeds the memory limit (even on Roquefort) for corpora of SOTC size and larger.
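For context, the aggregation underlying a baseline measure does not inherently need memory proportional to the number of tokens. A minimal pure-Python sketch (assuming the baseline is a per-word-type mean duration; `token_stream` and `baseline_durations` are hypothetical names, not the PolyglotDB API) shows a single-pass running-sum approach whose memory scales with the number of word types rather than the number of tokens:

```python
from collections import defaultdict

def baseline_durations(token_stream):
    """Compute mean duration per word type in a single streaming pass.

    Memory use is O(number of word types), not O(number of tokens),
    since only running sums and counts are kept per type.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for label, duration in token_stream:
        sums[label] += duration
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

# Toy input: (word label, token duration in seconds) pairs.
tokens = [("the", 0.10), ("cat", 0.30), ("the", 0.14)]
baselines = baseline_durations(tokens)
```

Whether the actual encode_baseline implementation can be restructured this way (e.g., streaming token durations out of Neo4j in batches instead of materializing them) is a separate question, but it suggests the memory blow-up is not fundamental to the computation.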

@msonderegger (Member)

@mmcauliffe any thoughts on this? I know you probably won't have time to fix it before leaving, but any guidance would be appreciated. Do you suspect the issue has been resolved by your recent memory optimizations, or does it seem like an actual bug?
