Issue with single point behavior analysis #238
Hi. I've recently installed LabGym and was able to train a pretty accurate detector and categorizer. However, my issue lies with the analysis data. I'm currently working with jellyfish footage that contains only one jellyfish per video. Since this particular species pulses (contracts its bell), the behavior is very easily quantified. While the categorizer does detect the behavior accurately, the data in "all_events" goes frame by frame and plots whichever behavior the categorizer thinks the animal is exhibiting during that frame. For example, if I'm looking at frame 1 and the jellyfish is contracting, I believe this is a good data point. However, on frame 2, when the jellyfish is still in the same contraction, it will also plot that frame as a contraction. This leads to too many "contraction" data points, as it is plotting by frame. How would I better set up my categorizer to plot only one data point per contraction and get an accurate data set?
(I'd like to clarify that while this is a very simple behavior, it usually occurs around 1300-1500 times during a video.)
Thank you for your time, and I apologize for the lengthy question.
Comments
Hi,
Thank you for your reply. The approximate duration of one contraction event is about 3 frames, if I had to estimate. My videos are 15 fps. For one behavior example, I set the duration to 5 frames. I had two behavior categories, "contraction" and "relaxed"; I'm not interested in the "relaxed" category and use it as a "trash" behavior. I have about 100 pairs of sorted behavior examples for the "relaxed" category and about 200 pairs for the "contraction" category. My categorizer settings were non-interactive basic, animation analyzer + pattern recognizer; I believe the animation analyzer complexity was set to 3, the pattern recognizer to 4, and the input shape to 32. Thank you!
The duration of one behavior example (5 frames) is about double the actual duration of one behavior event/episode (3 frames). This might be why you got so many excess data points. Would it be possible to send me a sample 'all_events' spreadsheet? I would like to take a look at the distribution of the identified events. You can either send it via email ([email protected]) or attach it here.
Sure, I will get that sent right over to you. I am currently generating new behavior examples to sort, with the duration set to 3 frames instead.
Alright, I have sent it over via email.
I have gotten the best results yet by training a new categorizer with 3-frame examples. Something confusing that I'm seeing, though: the count parameter spreadsheet shows a number pretty close to what I'm expecting, but when I go into the all_events spreadsheet and remove all the "trash" categories, there appear to be about twice as many events as in the count summary.
LabGym calculates the count in this way: if a behavior lasts for several frames, it counts as occurring once. The way you counted from the 'all_events' spreadsheet was basically counting every frame as one occurrence, correct? If so, this might also be the origin of your issue. LabGym categorizes behaviors in a frame-wise manner, so if a behavior episode lasts for 3 frames, there will be 3 data points in the 'all_events' spreadsheet, but those 3 data points belong to one behavior episode/event.
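To illustrate the counting rule described above, here is a minimal sketch (not LabGym's actual code) that collapses frame-wise labels into an episode count; the label names are just placeholders:

```python
def count_episodes(frame_labels, behavior):
    """Count runs of consecutive frames labeled `behavior`:
    each uninterrupted run is one episode/occurrence."""
    count = 0
    previous = None
    for label in frame_labels:
        if label == behavior and previous != behavior:
            count += 1  # a new episode starts on this frame
        previous = label
    return count

# A 3-frame contraction yields 3 frame-wise data points but counts once:
labels = ["relaxed", "contraction", "contraction", "contraction",
          "relaxed", "relaxed", "contraction", "contraction"]
print(count_episodes(labels, "contraction"))  # -> 2 episodes (5 frame rows)
```

Under this rule, counting rows in 'all_events' overestimates the number of events by roughly the episode length in frames, which would explain the discrepancy against the count summary.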
Thanks for the clarification. Given this, I've tried filtering the data in the 'all_events' spreadsheet: first filtering out the behaviors I'm not interested in, then using a formula to check whether one contraction is within 1-3 frames of another to decide which rows to remove. However, this method still doesn't reproduce the count. I'm not sure this is the best way to filter the data. By any chance, is there a way to obtain a spreadsheet, similar to 'all_events', that represents only the contractions counted in the count parameter?
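For reference, a hypothetical pandas sketch of this kind of filtering; the file name and the column headers ("frame", "behavior") are assumptions, so adjust them to match the actual layout of your 'all_events' spreadsheet:

```python
import pandas as pd

# Assumed layout: one row per frame, with a frame index and a behavior label.
df = pd.read_excel("all_events.xlsx")  # placeholder file name

# Drop the "trash" category, keeping only contraction rows.
contractions = df[df["behavior"] == "contraction"].sort_values("frame")

# Rows whose frame index is within `gap` frames of the previous row are
# treated as the same episode; larger jumps start a new episode.
gap = 1  # raise to e.g. 3 to also merge episodes separated by 1-3 frames
if contractions.empty:
    episode_count = 0
else:
    episode_count = int((contractions["frame"].diff() > gap).sum()) + 1
print(f"estimated contraction count: {episode_count}")
```

Note that merging contractions separated by 1-3 empty frames (gap = 3) would undercount relative to the frame-wise counting rule described above, where any interruption starts a new episode; gap = 1 should match it.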
When a contraction lasts for several frames until it ends, it contains several frame-wise data points. In the 'all_events' sheet or the raster plot, it appears as one continuous event that spans those frames, and it is counted as one occurrence. So I don't quite understand what you meant by 'a spreadsheet similar to all_events that represents only the contractions counted in the count parameter'. Did you mean combining the multiple frame-wise outputs of behavior categorization into one?