added new regression example #52
base: master
Conversation
Thanks, overall this looks very good. Before I get into more detailed feedback, could you make the following stylistic fixes:
- Use 2-space indents instead of 4.
- Keep lines < 80 chars (import lines excepted).
- Copy the indent pattern for long multi-line expressions from the other examples (see the sketch after this list). The simplest is:

_ = (
    ...indented by 4 spaces
    ...another line indented by 4 spaces)
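For instance, a minimal sketch of that style applied to ordinary Python (all names here are hypothetical, just to illustrate the layout):

def build_greeting(greeting, name):
  # Function bodies use 2-space indents.
  return '{}, {}'.format(greeting, name)

# Long multi-line expressions: continuation lines are indented by
# 4 spaces, and the closing paren stays on the last content line.
message = (
    build_greeting('Hello', 'world')
    .upper()
    .strip())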
# Will write a SavedModel and metadata to two subdirectories of
# working_dir, given by transform_fn_io.TRANSFORM_FN_DIR and
# transform_fn_io.TRANSFORMED_METADATA_DIR respectively.
create_transform_fn(train_data_file, working_dir)
This fails when running in clustered mode. I don't know how to tell TensorFlow that this needs to be done only once (by any one of the nodes).
What kind of cluster are you using? Do the other examples work in clustered mode?
I'm referring to Google Cloud Machine Learning Engine. When using train_and_evaluate you can run there with "scaleTier: STANDARD_1" (clustered) without any change to the code. The other examples, I guess, won't work (they don't use train_and_evaluate). The example I'm providing works, provided I copy the transform_fn manually to the GCS bucket, or remove the tf-transform step altogether.
In my mind, 'on-the-fly' transformation makes sense when the data set is huge, in which case you'll probably want to use a cluster as well. That's why I thought the example would provide more value by combining both features.
What if we set a "mode" command line argument that took the values "analyze_and_train" (default), "analyze" or "train"? Would this allow the cloud trainer job to just transform? Note that we don't plan to provide instructions for running on Cloud ML Engine here, but ideally these examples would run on Cloud ML Engine without modification.
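A minimal sketch of what that flag could look like (the wiring and the stub bodies here are hypothetical; create_transform_fn stands in for the example's analyze step shown above):

import argparse

def create_transform_fn(train_data_file, working_dir):
  # Stub for the example's analyze step, which writes the SavedModel
  # and metadata under working_dir.
  print('analyzing', train_data_file, '->', working_dir)

def run_training(working_dir):
  # Stub for the example's train/evaluate step.
  print('training from', working_dir)

def main():
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--mode',
      choices=['analyze_and_train', 'analyze', 'train'],
      default='analyze_and_train')
  args = parser.parse_args()
  if args.mode != 'train':
    create_transform_fn('train.csv', '/tmp/working_dir')
  if args.mode != 'analyze':
    run_training('/tmp/working_dir')

if __name__ == '__main__':
  main()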
The gcloud command line doesn't seem to support trainer parameters (at first sight). Anyway, I just wanted to raise awareness of this possible issue (how tf-transform fits with Cloud ML Engine).
Maybe we could just ignore it for the sake of this simple example, for the time being.
Sounds good. There are a variety of samples that show usage with Cloud ML Engine, and those are written as separate scripts. This example is a single script but could easily be split into a transform script and a train script.
# Create a coder to read the mpg data with the schema. To do this we
# need to list all columns in order since the schema doesn't specify the
# order of columns in the csv.
converter = csv_coder.CsvCoder(ordered_columns, RAW_DATA_METADATA.schema)
We are using a different CSV coder (tf.decode_csv) when parsing training samples (see line 172). I don't know if that might be an issue.
My belief is that if a given example parses successfully with both coders, the results are very likely to be the same. But in edge cases the coders may differ in what they consider a valid input. Ultimately we want to harmonize the parsing done in tf.Transform with the in-graph parsing done with tf.decode_csv and tf.parse_example, but that is a work in progress.
For now I would say proceed assuming they are the same but let us know if any discrepancies emerge.
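One rough way to spot-check a sample line against the in-graph parser (TF 1.x; the columns and defaults below are hypothetical, not the example's real schema):

import tensorflow as tf

# A hypothetical three-column line and matching record defaults.
line = '18.0,8,307.0'
record_defaults = [[0.0], [0], [0.0]]

with tf.Session() as sess:
  # In-graph parse, as done at training time.
  fields = tf.decode_csv([line], record_defaults=record_defaults)
  print(sess.run(fields))
  # Compare by eye against what CsvCoder.decode(line) yields for the
  # same line; any disagreement is worth reporting.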
I agree
OK, done. I added a couple of comments in the code review as well.
Thanks for your patience. There are still many lint errors, but we haven't figured out how to run an open source linter that roughly replicates our internal linter.
It transforms data while training (in contrast to transforming it all beforehand and having to write it to disk).
It also uses tf.data Datasets, the latest version of the Estimator API, and the train_and_evaluate() method, which works in clustered mode.
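A rough sketch of that pattern (the column names, defaults and helper names here are hypothetical; the real example also applies the saved transform_fn to the parsed features, which is omitted here for brevity):

import tensorflow as tf

CSV_COLUMNS = ['mpg', 'cylinders', 'horsepower']
CSV_DEFAULTS = [[0.0], [0], [0.0]]

def parse_csv(line):
  # In-graph CSV parsing; nothing is materialized to disk.
  fields = tf.decode_csv(line, record_defaults=CSV_DEFAULTS)
  features = dict(zip(CSV_COLUMNS, fields))
  label = features.pop('mpg')
  return features, label

def input_fn(filenames, batch_size=32):
  dataset = tf.data.TextLineDataset(filenames)
  dataset = dataset.map(parse_csv)
  # The saved transform_fn would be applied to the features here,
  # before batching, so transformation happens on the fly.
  return dataset.shuffle(1000).repeat().batch(batch_size)

# train_and_evaluate then drives training with this input_fn, e.g.:
# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)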