Currently when executing network studies using Strategus we use a separate `ModelTransferModule` to download models from a local path, s3, or GitHub. We used this successfully in the Deep Learning Comparison study using s3. This requires that a regular external validation study workflow looks like: CohortGenerator -> ModelTransfer -> External validation
The issue with this approach is that the logic of which model is used with which prediction problem can't be fully customized upfront, since the models haven't been downloaded yet at specification time. Currently I've added logic to match on `modelTargetId` and `modelOutcomeId`. This means the prespecified module settings to execute a validation study would look like:
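(The original settings snippet isn't reproduced here; the sketch below uses made-up cohort ids and a plain list structure to illustrate the shape of such prespecified settings, where only the validation target/outcome and the development `modelTargetId`/`modelOutcomeId` are recorded, with no reference to the model artefacts themselves.)

```r
# Hypothetical prespecified validation settings (ids are placeholders).
# No model paths appear here, because the models are only downloaded later
# by the ModelTransferModule.
validationSettings <- list(
  list(
    targetId = 101,       # cohort to validate on
    outcomeId = 201,      # outcome to validate on
    modelTargetId = 1,    # target id the model was developed on
    modelOutcomeId = 2    # outcome id the model was developed on
  )
)
```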
Then, once the models have been downloaded by the `ModelTransferModule`, the downloaded models are matched using `targetId = modelTargetId` and `outcomeId = modelOutcomeId`, and the `validationDesign` is created (only one shown):
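(A minimal sketch of that matching step, assuming hypothetical objects `downloadedModels` and `settings`; it is not the module's actual implementation, and only the arguments named in the issue are shown.)

```r
# Assumed bookkeeping of what the ModelTransferModule downloaded:
# paths are placeholders, ids are the development target/outcome of each model.
downloadedModels <- data.frame(
  path = c("./models/model_t1_o2", "./models/model_t3_o4"),
  targetId = c(1, 3),
  outcomeId = c(2, 4)
)

settings <- list(
  targetId = 101, outcomeId = 201,
  modelTargetId = 1, modelOutcomeId = 2
)

# Pick the downloaded model whose development ids match the prespecified
# modelTargetId/modelOutcomeId.
modelPath <- downloadedModels$path[
  downloadedModels$targetId == settings$modelTargetId &
    downloadedModels$outcomeId == settings$modelOutcomeId
]

# Build the validation design for the validation target/outcome
# (other createValidationDesign arguments omitted in this sketch).
validationDesign <- PatientLevelPrediction::createValidationDesign(
  targetId = settings$targetId,
  outcomeId = settings$outcomeId,
  plpModelList = list(PatientLevelPrediction::loadPlpModel(modelPath))
)
```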
A better solution, I think, is to forego the `ModelTransferModule` and instead allow supplying the s3 or GitHub paths directly in the `plpModelList` when creating the `validationDesign`. The models would then be fetched at runtime and validated. This allows complete flexibility in which models are used for which prediction problem.
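(Sketched below is what that could look like, assuming `createValidationDesign()` were extended to accept remote locations in `plpModelList`; this is the requested behaviour, not current functionality, and the bucket/repository paths are placeholders.)

```r
# Proposed usage: remote model locations supplied directly per prediction problem,
# fetched at runtime instead of via a separate ModelTransferModule step.
validationDesign <- PatientLevelPrediction::createValidationDesign(
  targetId = 101,
  outcomeId = 201,
  plpModelList = list(
    "s3://my-bucket/deepLearningComparison/model1",              # placeholder s3 path
    "https://github.com/ohdsi-studies/SomeStudy/tree/main/model2" # placeholder GitHub path
  )
)
```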