I am constantly getting vertex-failed errors on both my 100-node Hive and LLAP clusters. My data size is 100 TB. Do you have any recommended settings for clusters of this size? I have experimented with some settings, but they have not helped, and the error persists when I run the script below against certain tables.
```
beeline -u "jdbc:hive2://$(hostname -f):10001/;transportMode=http" -n "" -p "" -i settings.hql -f ddl/createAllORCTables.hql -hiveconf ORCDBNAME=tpcds_orc -hiveconf SOURCE=tpcds
```

The run fails with:

```
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1556442545309_0012_5_00, diagnostics=[Task failed, taskId=task_1556442545309_0012_5_00_000216, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.OutOfMemoryError: Java heap space
```