
late events #14

Open

lvm opened this issue Jul 5, 2016 · 6 comments

Comments

@lvm
Contributor

lvm commented Jul 5, 2016

Not really sure how to explain this bug, but here it goes:

When playing a simple pattern (four on the floor) like

cps 0.5
drums $ perc "bd(4,16)"

It plays OK, but I also get this message:

late [146,36,63] midi now 41880 midi onset: 41728 onset (relative): "-0.152" , sched: 41728
and 1 more

and the pattern sometimes goes a bit out of rhythm (as if I'd applied # nudge "0.25" to 1/4 of the cycles). It happens with all tidal-midi modules (though I'm only using GMPerc.hs and Synth.hs from the 0.9-dev branch).
I tracked this message to this block but I'm not really in a position to fix this bug, so here I am.
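
For context on the nudge comparison above: nudge delays every event of a pattern by a fraction of a cycle, so the drift sounds as if I'd written something like this (a minimal illustration, reusing the pattern above):

drums $ perc "bd(4,16)" # nudge "0.25"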

My setup/config is here, as is my tidal.el. This didn't happen with tidal-midi 0.6, so I don't really think this is due to running everything inside a Docker container (I might be wrong, though).

I also checked with midisnoop that all MIDI messages/events are sent OK to the devices.
If you need any extra information, let me know.

@kindohm
Contributor

kindohm commented Jul 5, 2016

👍 I also experience the same problem, but it only seems to happen when I open a 2nd channel on the same device.

For example, this works fine:

r1 <- midiStream devs "Elektron Analog Rytm" 1 rytmController
r1 $ midinote "0(4,16)"

But as soon as I eval this next line, that r1 pattern starts getting "off" and I see those same output messages:

r2 <- midiStream devs "Elektron Analog Rytm" 2 rytmController

I don't even need to eval a pattern on r2. Simply opening up that channel on the same device causes a problem.

@lennart
Contributor

lennart commented Jul 5, 2016

Regarding nudge: this will currently fail on all tidal-midi versions 0.8+.

Multichannel should work better once I get my local changes for 0.9 merged into 0.9-dev.

However, I also experience late messages, especially when reducing cps on complex patterns, and it seems the timing cannot be restored by increasing cps again.

This is related to how messages get scheduled: PortMidi enforces monotonic timestamps and otherwise drops events, so if one message is late, it currently seems all following messages might be late too (but only in certain circumstances I haven't yet figured out).
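
A minimal sketch of that constraint (hypothetical code, not tidal-midi's actual scheduler): a sender that remembers the last timestamp it wrote and rejects anything earlier, which is roughly the rule PortMidi imposes on us:

import Data.IORef

-- Hypothetical sender illustrating the monotonic-timestamp rule:
-- once an event with timestamp t has been written, any later write
-- with a smaller timestamp is rejected as late.
sendMonotonic :: IORef Integer -> (Integer -> IO ()) -> Integer -> IO ()
sendMonotonic lastRef write t = do
  lastT <- readIORef lastRef
  if t >= lastT
    then write t >> writeIORef lastRef t
    else putStrLn ("late: dropping event scheduled for " ++ show t)

main :: IO ()
main = do
  ref <- newIORef 0
  -- 15 arrives out of order and is dropped; 30 still goes through
  mapM_ (sendMonotonic ref print) [10, 20, 15, 30]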

Thanks for reporting, but yes, timing is still not good enough in tidal-midi.

On 05.07.2016 at 15:44, Mike Hodnick [email protected] wrote:

> 👍 I also experience the same problem, but it only seems to happen when I open a 2nd channel on the same device.
>
> For example, this works fine:
>
> r1 <- midiStream devs "Elektron Analog Rytm" 1 rytmController
> r1 $ midinote "0(4,16)"
>
> But as soon as I eval this next line, that r1 pattern starts getting "off" and I see those same output messages:
>
> r2 <- midiStream devs "Elektron Analog Rytm" 2 rytmController
>
> As a result, I can really only use one channel on any device.


@lennart
Contributor

lennart commented Jul 5, 2016

Aah, I increased the latency of the controller shape for e.g. the Tetra to 0.2, which (if I remember correctly) results in 200ms of latency. However, VolcaKeys is 0.01, so this might be a bit short and could lead to late messages pretty early. Can you check whether it works better for you if you increase the latency?
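
For reference, that latency value lives on the ControllerShape itself. A rough sketch of such a shape (the field names here are from memory and may not match the 0.9-dev record exactly):

import Sound.Tidal.MIDI.Control  -- module name as in the 0.9-dev tree (assumption)

-- percShape is a made-up example, not one of the shipped shapes
percShape :: ControllerShape
percShape = ControllerShape
  { params   = []                 -- hypothetical: the CC params the device exposes
  , duration = ("dur", 0.05)
  , velocity = ("velocity", 0.5)
  , latency  = 0.2                -- the value discussed above; raise it if "late" messages persist
  }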

@lvm
Contributor Author

lvm commented Jul 5, 2016

Ah, increasing the latency improved things.
First I tried 0.2, but sometimes the message still appeared, like

late [146,36,63] midi now NNNNN midi onset: NNNN onset (relative): "-0.001" , sched: NNNN
and 1 more

So, to give it a wide margin, I increased it from 0.1 to 0.3 (a bit exaggerated, I know).
I also tried modifying the cps while playing; it sometimes still prints the late... message, but it seems more stable now.

@lennart
Contributor

lennart commented Nov 21, 2016

#22

@lennart
Contributor

lennart commented Feb 4, 2017

As long as we have no better technical solution, this and the information from #22 should be put into the docs, recommending a workaround such as setting higher latency values (I am not sure if we have already documented this somewhere).
