Removing surplus brackets from print functions
The 2to3 script added extra brackets around print functions that were
already compatible with Python 3. These surplus brackets have now been removed.

	modified:   code/Advanced_Analytics_and_Machine_Learning-Chapter_25_Preprocessing_and_Feature_Engineering.py
	modified:   code/Advanced_Analytics_and_Machine_Learning-Chapter_28_Recommendation.py
	modified:   code/Advanced_Analytics_and_Machine_Learning-Chapter_31_Deep_Learning.py
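
For context, this is the pattern the 2to3 print fixer produces when it runs over code that already calls print() as a function: it parses print(expr) as a print statement whose argument happens to be parenthesised, and wraps it once more. A minimal before/after sketch (hypothetical value, not taken from the repository):

    rmse = 0.25  # hypothetical value, for illustration only

    # What 2to3 emits when the source already used the function form of print:
    print(("Root-mean-square error = %f" % rmse))

    # The same call after stripping the surplus pair; behaviour is identical,
    # because ("..." % rmse) is just a parenthesised expression, not a tuple.
    print("Root-mean-square error = %f" % rmse)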
tallamjr committed May 20, 2020
1 parent 7119a3e commit e551473
Showing 3 changed files with 4 additions and 4 deletions.
code/Advanced_Analytics_and_Machine_Learning-Chapter_25_Preprocessing_and_Feature_Engineering.py
@@ -252,7 +252,7 @@
 result = model.transform(documentDF)
 for row in result.collect():
     text, vector = row
-    print(("Text: [%s] => \nVector: %s\n" % (", ".join(text), str(vector))))
+    print("Text: [%s] => \nVector: %s\n" % (", ".join(text), str(vector)))


 # COMMAND ----------
code/Advanced_Analytics_and_Machine_Learning-Chapter_28_Recommendation.py
@@ -36,7 +36,7 @@
   .setLabelCol("rating")\
   .setPredictionCol("prediction")
 rmse = evaluator.evaluate(predictions)
-print(("Root-mean-square error = %f" % rmse))
+print("Root-mean-square error = %f" % rmse)


 # COMMAND ----------
code/Advanced_Analytics_and_Machine_Learning-Chapter_31_Deep_Learning.py
@@ -38,8 +38,8 @@
 from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 tested_df = p_model.transform(test_df)
 evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
-print(("Test set accuracy = " + str(evaluator.evaluate(tested_df.select(
-  "prediction", "label")))))
+print("Test set accuracy = " + str(evaluator.evaluate(tested_df.select(
+  "prediction", "label"))))


 # COMMAND ----------
