Hello everyone,
I am working on retraining BirdNET to recognize nocturnal birds of the Atlantic Forest in low-quality PAM recordings. For this task, I want to use the autotune feature to select the best parameter combination. Upon inspecting the code, I found that autotune implements Bayesian Optimization via Keras Tuner. I'm now trying to determine how many autotune trials to run, given my computational limits (roughly 100 trials per hour).
My understanding is that Bayesian Optimization does not necessarily test all possible combinations, but selects the most promising ones based on the surrogate model it builds internally. Given this, what would be a reasonable number of trials to give autotune so the search covers the parameter space reliably, without overfitting or missing potentially better configurations?
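To make the trial budget concrete, this is roughly how I picture it being passed to the tuner. This is a minimal sketch of my own, not BirdNET's actual autotune code; the objective, embedding size, class count, and all numbers are placeholders:

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # Placeholder classifier head; the search space I actually care about is sketched further below.
    lr = hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(1024,)),            # assumed embedding size, sketch only
        tf.keras.layers.Dense(12, activation="sigmoid"),  # assumed number of target species
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy")
    return model

tuner = kt.BayesianOptimization(
    build_model,
    objective="val_loss",
    max_trials=300,          # total trials the surrogate gets to spend (~3 h at my ~100 trials/hour)
    num_initial_points=30,   # random warm-up trials before the surrogate starts guiding the search
    overwrite=True,
    directory="autotune_runs",
    project_name="nocturnal_atlantic_forest",
)
# tuner.search(train_ds, validation_data=val_ds, epochs=50)
```

If I understand correctly, `max_trials` and `num_initial_points` are the two knobs that trade off search quality against wall-clock time, which is why I'm asking what values are sensible for a budget like mine.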
I have also considered tuning only a subset of parameters, specifically those likely to have the most influence on model performance. Is it advisable to prioritize learning rate, batch size, hidden units, and dropout? This subset alone still yields 1,440 combinations, which is about the maximum feasible within my computational budget (roughly 14 hours at 100 trials per hour). To get there, I'm contemplating modifying the code to fix some parameters at defaults (for example, setting "upsampling" to 0, since my training classes are already balanced). Are there any strategies or modifications you would recommend for managing the parameter search more effectively?
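Concretely, the restriction I'm contemplating would look something like the sketch below. Again, this is my own Keras Tuner sketch rather than the actual autotune code, and the ranges, embedding size, and class count are assumptions:

```python
import keras_tuner as kt
import tensorflow as tf

EMBEDDING_DIM = 1024   # assumed size of the BirdNET embeddings, placeholder only
NUM_CLASSES = 12       # assumed number of nocturnal target species in my training set

class ReducedSearchSpace(kt.HyperModel):
    def build(self, hp):
        # Parameters I actually want to search.
        units = hp.Int("hidden_units", min_value=64, max_value=1024, step=64)
        dropout = hp.Float("dropout", min_value=0.0, max_value=0.5, step=0.1)
        lr = hp.Choice("learning_rate", [1e-2, 5e-3, 1e-3, 5e-4, 1e-4])

        # Parameter pinned to a default instead of searched: classes are already
        # balanced, so upsampling is fixed at 0 and never varies across trials.
        upsampling = hp.Fixed("upsampling_ratio", 0.0)
        # (In the real code this value would be threaded into the data pipeline.)

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(EMBEDDING_DIM,)),
            tf.keras.layers.Dense(units, activation="relu"),
            tf.keras.layers.Dropout(dropout),
            tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
            loss="binary_crossentropy",
            metrics=[tf.keras.metrics.AUC(curve="PR", name="pr_auc")],
        )
        return model

    def fit(self, hp, model, *args, **kwargs):
        # Batch size is a training-time parameter, so it is tuned here rather than in build().
        batch_size = hp.Choice("batch_size", [16, 32, 64, 128])
        return model.fit(*args, batch_size=batch_size, **kwargs)
```

An instance of this `HyperModel` would then replace `build_model` in the `kt.BayesianOptimization` call shown earlier. My hope is that with upsampling fixed, the surrogate only has to learn a four-dimensional space, so a few hundred trials should go much further. Does that reasoning hold, or am I missing something about how autotune handles fixed parameters?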
Thank you for any insights or suggestions you can provide!