TFX + PyTorch Example #156
Comments
Proposal for the TFX Addons Example: #157
Yes, please. If possible, let's demonstrate it with a Hugging Face model that has a PyTorch backend.
One of the things that we will need for this is an ONNX extractor for Evaluator. Maybe we should break that out as a separate project?
Could you elaborate on this a bit more?
My understanding is that for PyTorch developers ONNX is a normal format for saving trained models, while TF's SavedModel format introduces friction. For non-SavedModel models, Evaluator needs an Extractor in order to generate the predictions to measure; see, for example, the existing ones for scikit-learn and XGBoost.
ONNX is definitely used, but I'm not sure it's the normal one, as you mentioned. This document gives a good rundown of the serialization semantics in PyTorch: https://pytorch.org/docs/stable/notes/serialization.html. ONNX is quite popular there (PyTorch also has a direct ONNX exporter). What I'm gathering is that we would make ONNX the serialization format for PyTorch models so that they work in a TFX pipeline. Is that so?
My thought is that ONNX is just one of several serialization formats, which to me suggests that breaking it out as a separate project might make sense. We could also write Extractors for TensorRT, TorchScript, or whatever else makes sense (and here I'm displaying my ignorance about what does), and let users choose the one they need.
Got it. Yeah, I concur with your thoughts now. Moreover, it might make even more sense because users might want to choose an Extractor that matches their deployment infrastructure. For example, ONNX might be better for CPU-based deployment, while TensorRT would be better suited to a GPU-based runtime (although ONNX Runtime can also use TensorRT as an execution provider).
I think Wihan wrote a custom TFMA extractor for PyTorch. We had everything done up to the trainer when we shared the notebook with him. The last time we talked, he was in the process of cleaning up his implementation; he said it worked end-to-end.
@wihanbooyse - That would be great! It might make sense to refactor the example to break out the extractor separately, and follow that up with some more extractors for other formats.
There are a few TFX examples showing how to train scikit-learn or JAX models, but I haven't seen an example pipeline for PyTorch.

The pipeline could use a known dataset, e.g. MNIST: ingest the data via `CsvExampleGen`, run the standard statistics and schema steps, perform a pseudo-transformation (a passthrough of the values) with the new `PandasTransform` component from `tfx-addons`, add a custom `run_fn` function for PyTorch, and then add a TFMA example.

Any thoughts?