[2021.2.11] Deploy WG: Core weekly
Attendees:
1. Ralf Floca (DKFZ)
2. Pouria Rouzrokh (Mayo)
3. Karthik (Vanderbilt)
4. Kilian Hett (Vanderbilt)
5. Laurence Jackson (GSTT)
6. B. Selnur Erdal (Mayo Jacksonville)
7. Dana Groff (NVIDIA)
8. Rahul Choudhury (NVIDIA)
9. Raghav Mani (NVIDIA)
10. David Bericat (NVIDIA)
Recording: MONAI Deploy core WG - bi-weekly-20210211_110354-Meeting Recording
Action items:
- AI1 - [DB] Open a slide deck for all of us to document - [COMPLETED] - MONAI Deploy WG use cases
- AI2 - [All] Add slides with each team's workflow from training to deployment, and give feedback
- Data flows
- Input syncs
- Output syncs
- AI3 - [Selnur] Share a sample of how they document a use case and its requirements
- AI4 - [All] Identify a use case from MONAI and use the use-case and requirements template as a way to document what we will design and build
AGENDA (+Notes) - Whiteboard 2/11/2021
End-to-end workflow from trained model to PACS integration, resource utilization, GPUs, etc. (3 people)
- Mayo
- AI lab where they train their models
- What platform were they originally trained on?
- Do we have to re-train?
- Someone takes the models and moves them to …
- DKFZ
- Use https://github.com/kaapana/kaapana as our deployment platform
- Abstract the data source as cleanly as possible from the model itself
- Roughly (see the Airflow sketch after this list):
- Images pushed (DIMSE, DicomWeb, Manual upload, Minio) or pulled (e.g. DICOM QR)
- Workflow system (Airflow) runs preprocessing to convert, resample, etc. (whatever brings the data into the form the algorithm needs)
- Consumed/processed by the AI algorithm
- Workflow system ensures that the algorithm results are converted in the needed format
- Bottleneck: interfacing different algorithms with differing input/output semantics or needs in an automatic manner
- Several containers and workflows built on Helm with repos
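The flow DKFZ describes maps naturally onto an Airflow DAG. Below is a minimal sketch of that shape, assuming Apache Airflow 2.x; the DAG id, task names, and task bodies are illustrative placeholders, not Kaapana's actual pipeline code:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def preprocess(**context):
    """Hypothetical: convert received DICOM and resample it to the form the model needs."""


def run_inference(**context):
    """Hypothetical: run the AI algorithm on the preprocessed data."""


def postprocess(**context):
    """Hypothetical: convert the algorithm results into the needed output format."""


with DAG(
    dag_id="ai_inference_pipeline",   # placeholder name
    start_date=datetime(2021, 2, 11),
    schedule_interval=None,           # triggered per received study, not on a timer
) as dag:
    pre = PythonOperator(task_id="preprocess", python_callable=preprocess)
    infer = PythonOperator(task_id="run_inference", python_callable=run_inference)
    post = PythonOperator(task_id="postprocess", python_callable=postprocess)

    # Linear flow: preprocess -> consume/process by the AI algorithm -> convert results
    pre >> infer >> post
```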
- GSTT (Laurence, Haris)
- In process of setting up complete workflow for new AI projects
- Interested in hearing about others' experiences with various tools for MLOps, including continuous integration/training/deployment.
- Mayo Jacksonville (Selnur)
- DICOM is key to their design
- Exchanging data and results
- Custom models, custom apps in Python (see the DICOM I/O sketch after this list)
- Train → deploy → retrain (maybe with new labels) → redeploy lifecycle - continuous learning
- Pre- and post-processing
- Inputs
- Outputs depending on data types
- Use cases
- Requirements doc
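Since DICOM is central to this design, here is a minimal sketch, assuming pydicom, of the read-process-write loop such a custom Python app would follow. The paths, helper names, and the use of ImageComments are illustrative assumptions, not Mayo's implementation:

```python
import copy
from pathlib import Path

import pydicom
from pydicom.dataset import Dataset


def load_series(series_dir: str) -> list:
    """Read every DICOM file in a directory, sorted by InstanceNumber."""
    slices = [pydicom.dcmread(path) for path in Path(series_dir).glob("*.dcm")]
    return sorted(slices, key=lambda ds: int(ds.InstanceNumber))


def write_result(template: Dataset, text: str, out_path: str) -> None:
    """Copy the input's metadata, attach a result, and save it as DICOM.

    Stuffing the finding into ImageComments is only for illustration; a real
    app would emit a proper DICOM SR or SEG object.
    """
    ds = copy.deepcopy(template)
    ds.ImageComments = text
    ds.save_as(out_path)


if __name__ == "__main__":
    series = load_series("/data/input_series")  # hypothetical input path
    # ... pre-processing and model inference on `series` would go here ...
    write_result(series[0], "finding: negative", "/data/output/result.dcm")
```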
- NVIDIA (Dana, Rahul, David)
- Model vs operator vs pipeline
- Write use cases first
- That drives the inputs and outputs
- INPUT:
- Trained model - in what format?
- Data - in what format (e.g. DICOM), and where is it?
- Result: data ingested into the system
- EXECUTE:
- What is my executable artifact? How do we package it? (see the packaging sketch after this section)
- Container? Pipeline?
- Should we define one?
- Jorge:
- Containers, models, inference engine
- OUTPUT:
- Result formats?
- Consumers? What do they speak? (e.g. DICOM)
- RACI: Responsible - Accountable - Consulted - Informed
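One possible shape for the executable artifact discussed above, sketched under loud assumptions: a single Python entry point baked into a container image, with the model and the input/output locations handed in by the surrounding pipeline. Every environment-variable name and path here is a hypothetical convention, not an agreed MONAI Deploy specification:

```python
import os


def main() -> None:
    # Hypothetical convention: the pipeline mounts data and sets these variables.
    model_path = os.environ.get("MODEL_PATH", "/opt/model/model.pt")
    input_dir = os.environ.get("INPUT_DIR", "/var/monai/input")     # INPUT: e.g. a DICOM study
    output_dir = os.environ.get("OUTPUT_DIR", "/var/monai/output")  # OUTPUT: e.g. DICOM SR/SEG

    # 1. INPUT: discover and decode whatever the pipeline dropped into input_dir.
    # 2. EXECUTE: load the trained model from model_path and run inference.
    # 3. OUTPUT: encode the result in a format the consumer speaks (e.g. DICOM)
    #    and write it to output_dir.
    print(f"would run {model_path} on {input_dir} -> {output_dir}")


if __name__ == "__main__":
    main()
```

Packaging this entry point as a container answers the "how do we package it" question at the image level, while leaving the pipeline-vs-operator question open, as the notes above discuss.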