I've been trying to use the encode_baseline measure for words inside of a SPADE script, currently:
from polyglotdb import CorpusContext  # standard PolyglotDB entry point

with CorpusContext(config) as c:
    if not c.hierarchy.has_token_property('word', 'baseline'):
        print('getting baseline word duration')
        c.encode_baseline('word', 'duration')
This works fine on smaller corpora (like ICE-Can or Modern RP), but exceeds the memory limit (even on Roquefort) for corpora of SOTC-size and larger.
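As a possible workaround while the memory behaviour is investigated, a per-type baseline could in principle be accumulated in a single streaming pass, so that only one aggregate per word type (rather than every token) is held in memory at once. The sketch below shows the general technique only; it is not PolyglotDB's actual implementation of `encode_baseline`, and the `streaming_baselines` function and the `(word, duration)` row shape are hypothetical:

```python
from collections import defaultdict

def streaming_baselines(rows):
    """Accumulate running sums and counts per word type, then take means.

    Memory use scales with the number of distinct word types, not the
    number of tokens, so rows can be streamed from a paged query.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for word, duration in rows:
        sums[word] += duration
        counts[word] += 1
    return {word: sums[word] / counts[word] for word in sums}

# Example: tokens streamed one at a time, e.g. from a paged database query.
tokens = [("the", 0.10), ("cat", 0.35), ("the", 0.14)]
baselines = streaming_baselines(tokens)
# baselines["the"] is roughly 0.12 (mean of 0.10 and 0.14)
```

Whether this maps onto the actual baseline computation depends on how `encode_baseline` defines the measure internally, which the maintainers would need to confirm.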
@mmcauliffe any thoughts on this? I know you probably won't have time to fix it before leaving, but any guidance would be appreciated. For instance, do you suspect the issue has already been resolved by your recent memory optimizations, or does it seem like an actual bug?