<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><configuration>
<!-- WARNING!!! This file is auto generated for documentation purposes ONLY! -->
<!-- WARNING!!! Any changes you make to this file will be ignored by Hive. -->
<!-- WARNING!!! You must make your changes in hive-site.xml instead. -->
<!-- Hive Execution Parameters -->
<property>
<name>hive.exec.script.wrapper</name>
<value/>
<description/>
</property>
<property>
<name>hive.exec.plan</name>
<value/>
<description/>
</property>
<property>
<name>hive.exec.stagingdir</name>
<value>.hive-staging</value>
<description>Directory name that will be created inside table locations in order to support HDFS encryption. This replaces ${hive.exec.scratchdir} for query results, with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
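<!-- This file is auto-generated documentation, so overrides belong in hive-site.xml.
A minimal sketch of overriding the scratch-directory properties above; the path
below is illustrative, not a recommended value:

```xml
<property>
  <name>hive.exec.scratchdir</name>
  <value>/apps/hive/scratch</value>
</property>
<property>
  <name>hive.exec.stagingdir</name>
  <value>.hive-staging</value>
</property>
```
-->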
<property>
<name>hive.repl.rootdir</name>
<value>/user/hive/repl/</value>
<description>HDFS root dir for all replication dumps.</description>
</property>
<property>
<name>hive.repl.cm.enabled</name>
<value>false</value>
<description>Turn on ChangeManager, so delete files will go to cmrootdir.</description>
</property>
<property>
<name>hive.repl.cmrootdir</name>
<value>/user/hive/cmroot/</value>
<description>Root dir for ChangeManager, used for deleted files.</description>
</property>
<property>
<name>hive.repl.cm.retain</name>
<value>24h</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is hour if not specified.
Time to retain removed files in cmrootdir.
</description>
</property>
<property>
<name>hive.repl.cm.interval</name>
<value>3600s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Interval for the cmroot cleanup thread.
</description>
</property>
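<!-- A hedged example tying together the hive.repl.* ChangeManager properties
above, as one might set them in hive-site.xml. The retain value is illustrative:

```xml
<property>
  <name>hive.repl.cm.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.repl.cmrootdir</name>
  <value>/user/hive/cmroot/</value>
</property>
<property>
  <name>hive.repl.cm.retain</name>
  <value>48h</value>
</property>
```
-->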
<property>
<name>hive.exec.local.scratchdir</name>
<value>${system:java.io.tmpdir}/${system:user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>700</value>
<description>The permission for the user specific scratch directories that get created.</description>
</property>
<property>
<name>hive.exec.submitviachild</name>
<value>false</value>
<description/>
</property>
<property>
<name>hive.exec.submit.local.task.via.child</name>
<value>true</value>
<description>
Determines whether local tasks (typically the mapjoin hashtable generation phase) run in
a separate JVM (true, recommended) or not.
Running in the same JVM avoids the overhead of spawning a new JVM, but can lead to out-of-memory issues.
</description>
</property>
<property>
<name>hive.exec.script.maxerrsize</name>
<value>100000</value>
<description>
Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task).
This prevents runaway scripts from filling log partitions to capacity.
</description>
</property>
<property>
<name>hive.exec.script.allow.partial.consumption</name>
<value>false</value>
<description>
When enabled, this option allows a user script to exit successfully without consuming
all the data from the standard input.
</description>
</property>
<property>
<name>stream.stderr.reporter.prefix</name>
<value>reporter:</value>
<description>Streaming jobs that log to standard error with this prefix can log counter or status information.</description>
</property>
<property>
<name>stream.stderr.reporter.enabled</name>
<value>true</value>
<description>Enable consumption of status and counter messages for streaming jobs.</description>
</property>
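<!-- A small sketch of how a streaming user script can use the reporter prefix
above. The helper name is our own; the counter line format follows the
Hadoop-streaming convention of "prefix + counter:group,counter,amount" on stderr:

```python
import sys

def report_counter(group, counter, amount, prefix="reporter:"):
    # Lines on standard error that begin with the configured prefix
    # (stream.stderr.reporter.prefix, default "reporter:") are consumed
    # as counter updates instead of being treated as plain error output.
    line = f"{prefix}counter:{group},{counter},{amount}"
    sys.stderr.write(line + "\n")
    return line
```
-->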
<property>
<name>hive.exec.compress.output</name>
<value>false</value>
<description>
This controls whether the final output of a query (to a local/HDFS file or a Hive table) is compressed.
The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
</description>
</property>
<property>
<name>hive.exec.compress.intermediate</name>
<value>false</value>
<description>
This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed.
The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
</description>
</property>
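<!-- A hedged sketch of enabling intermediate compression in hive-site.xml,
together with the Hadoop codec variable the descriptions above refer to.
The Snappy codec is an illustrative choice, not a recommendation:

```xml
<property>
  <name>hive.exec.compress.intermediate</name>
  <value>true</value>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```
-->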
<property>
<name>hive.intermediate.compression.codec</name>
<value/>
<description/>
</property>
<property>
<name>hive.intermediate.compression.type</name>
<value/>
<description/>
</property>
<property>
<name>hive.exec.reducers.bytes.per.reducer</name>
<value>256000000</value>
<description>Size per reducer. The default is 256MB; i.e., if the input size is 1GB, 4 reducers will be used.</description>
</property>
<property>
<name>hive.exec.reducers.max</name>
<value>1009</value>
<description>
The maximum number of reducers that will be used. If the value specified in the configuration parameter mapred.reduce.tasks is
negative, Hive will use this as the maximum number of reducers when automatically determining the number of reducers.
</description>
</property>
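<!-- The reducer sizing described by the two properties above can be sketched
as follows. This is a simplification of Hive's actual planning, for intuition only:

```python
import math

def estimate_reducers(input_bytes, bytes_per_reducer=256_000_000, max_reducers=1009):
    # One reducer per bytes_per_reducer of input
    # (hive.exec.reducers.bytes.per.reducer), capped at
    # hive.exec.reducers.max, with at least one reducer.
    return min(max_reducers, max(1, math.ceil(input_bytes / bytes_per_reducer)))
```
-->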
<property>
<name>hive.exec.pre.hooks</name>
<value/>
<description>
Comma-separated list of pre-execution hooks to be invoked for each statement.
A pre-execution hook is specified as the name of a Java class which implements the
org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
</description>
</property>
<property>
<name>hive.exec.post.hooks</name>
<value/>
<description>
Comma-separated list of post-execution hooks to be invoked for each statement.
A post-execution hook is specified as the name of a Java class which implements the
org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
</description>
</property>
<property>
<name>hive.exec.failure.hooks</name>
<value/>
<description>
Comma-separated list of on-failure hooks to be invoked for each statement.
An on-failure hook is specified as the name of Java class which implements the
org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
</description>
</property>
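<!-- A hedged sketch of wiring a pre-execution hook in hive-site.xml.
com.example.hooks.AuditHook is a hypothetical class name; a real hook would
have to implement the ExecuteWithHookContext interface named above:

```xml
<property>
  <name>hive.exec.pre.hooks</name>
  <value>com.example.hooks.AuditHook</value>
</property>
```
-->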
<property>
<name>hive.exec.query.redactor.hooks</name>
<value/>
<description>
Comma-separated list of hooks to be invoked for each query which can
transform the query before it's placed in the job.xml file. Must be a Java class which
extends from the org.apache.hadoop.hive.ql.hooks.Redactor abstract class.
</description>
</property>
<property>
<name>hive.client.stats.publishers</name>
<value/>
<description>
Comma-separated list of statistics publishers to be invoked on counters on each job.
A client stats publisher is specified as the name of a Java class which implements the
org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.
</description>
</property>
<property>
<name>hive.ats.hook.queue.capacity</name>
<value>64</value>
<description>
Queue size for the ATS Hook executor. If the number of outstanding submissions
to the ATS executor exceeds this amount, the Hive ATS Hook will not try to log queries to ATS.
</description>
</property>
<property>
<name>hive.exec.parallel</name>
<value>false</value>
<description>Whether to execute jobs in parallel</description>
</property>
<property>
<name>hive.exec.parallel.thread.number</name>
<value>8</value>
<description>How many jobs at most can be executed in parallel</description>
</property>
<property>
<name>hive.mapred.reduce.tasks.speculative.execution</name>
<value>true</value>
<description>Whether speculative execution for reducers should be turned on. </description>
</property>
<property>
<name>hive.exec.counters.pull.interval</name>
<value>1000</value>
<description>
The interval at which to poll the JobTracker for the counters of the running job.
The smaller it is, the more load there will be on the JobTracker; the higher it is, the less granular the captured counters will be.
</description>
</property>
<property>
<name>hive.exec.dynamic.partition</name>
<value>true</value>
<description>Whether or not to allow dynamic partitions in DML/DDL.</description>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>strict</value>
<description>
In strict mode, the user must specify at least one static partition
in case the user accidentally overwrites all partitions.
In nonstrict mode all partitions are allowed to be dynamic.
</description>
</property>
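<!-- An illustrative strict-mode insert; the table and column names (sales,
staging_sales, id, amount, dt) are hypothetical. The static partition
country='US' satisfies the strict-mode requirement while dt stays dynamic:

```sql
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=strict;
INSERT OVERWRITE TABLE sales PARTITION (country='US', dt)
SELECT id, amount, dt FROM staging_sales;
```
-->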
<property>
<name>hive.exec.max.dynamic.partitions</name>
<value>1000</value>
<description>Maximum number of dynamic partitions allowed to be created in total.</description>
</property>
<property>
<name>hive.exec.max.dynamic.partitions.pernode</name>
<value>100</value>
<description>Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.</description>
</property>
<property>
<name>hive.exec.max.created.files</name>
<value>100000</value>
<description>Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.</description>
</property>
<property>
<name>hive.exec.default.partition.name</name>
<value>__HIVE_DEFAULT_PARTITION__</value>
<description>
The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped.
This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc).
The user has to be aware that the dynamic partition value should not contain this value, to avoid confusion.
</description>
</property>
<property>
<name>hive.lockmgr.zookeeper.default.partition.name</name>
<value>__HIVE_DEFAULT_ZOOKEEPER_PARTITION__</value>
<description/>
</property>
<property>
<name>hive.exec.show.job.failure.debug.info</name>
<value>true</value>
<description>
If a job fails, whether to provide a link in the CLI to the task with the
most failures, along with debugging hints if applicable.
</description>
</property>
<property>
<name>hive.exec.job.debug.capture.stacktraces</name>
<value>true</value>
<description>
Whether or not stack traces parsed from the task logs of a sampled failed task
for each failed job should be stored in the SessionState
</description>
</property>
<property>
<name>hive.exec.job.debug.timeout</name>
<value>30000</value>
<description/>
</property>
<property>
<name>hive.exec.tasklog.debug.timeout</name>
<value>20000</value>
<description/>
</property>
<property>
<name>hive.output.file.extension</name>
<value/>
<description>
String used as a file extension for output files.
If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise.
</description>
</property>
<property>
<name>hive.exec.mode.local.auto</name>
<value>false</value>
<description>Let Hive determine whether to run in local mode automatically</description>
</property>
<property>
<name>hive.exec.mode.local.auto.inputbytes.max</name>
<value>134217728</value>
<description>When hive.exec.mode.local.auto is true, input bytes should be less than this for local mode.</description>
</property>
<property>
<name>hive.exec.mode.local.auto.input.files.max</name>
<value>4</value>
<description>When hive.exec.mode.local.auto is true, the number of tasks should be less than this for local mode.</description>
</property>
<property>
<name>hive.exec.drop.ignorenonexistent</name>
<value>true</value>
<description>Do not report an error if DROP TABLE/VIEW/INDEX/FUNCTION specifies a non-existent table/view/index/function</description>
</property>
<property>
<name>hive.ignore.mapjoin.hint</name>
<value>true</value>
<description>Ignore the mapjoin hint</description>
</property>
<property>
<name>hive.file.max.footer</name>
<value>100</value>
<description>Maximum number of footer lines a user can define for a table file</description>
</property>
<property>
<name>hive.resultset.use.unique.column.names</name>
<value>true</value>
<description>
Make column names unique in the result set by qualifying column names with table alias if needed.
Table alias will be added to column names for queries of type "select *" or
if query explicitly uses table alias "select r1.x..".
</description>
</property>
<property>
<name>fs.har.impl</name>
<value>org.apache.hadoop.hive.shims.HiveHarFileSystem</value>
<description>The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
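<!-- A minimal sketch of pointing clients at a remote metastore in hive-site.xml.
The hostname is illustrative; the port matches the hive.metastore.port default:

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>
```
-->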
<property>
<name>hive.metastore.client.capability.check</name>
<value>true</value>
<description>Whether to check client capabilities for potentially breaking API usage.</description>
</property>
<property>
<name>hive.metastore.fastpath</name>
<value>false</value>
<description>Used to avoid all of the proxies and object copies in the metastore. Note, if this is set, you MUST use a local metastore (hive.metastore.uris must be empty) otherwise undefined and most likely undesired behavior will result</description>
</property>
<property>
<name>hive.metastore.fshandler.threads</name>
<value>15</value>
<description>Number of threads to be allocated for metastore handler for fs operations.</description>
</property>
<property>
<name>hive.metastore.hbase.catalog.cache.size</name>
<value>50000</value>
<description>Maximum number of objects we will place in the hbase metastore catalog cache. The objects will be divided up by types that we need to cache.</description>
</property>
<property>
<name>hive.metastore.hbase.aggregate.stats.cache.size</name>
<value>10000</value>
<description>Maximum number of aggregate stats nodes that we will place in the hbase metastore aggregate stats cache.</description>
</property>
<property>
<name>hive.metastore.hbase.aggregate.stats.max.partitions</name>
<value>10000</value>
<description>Maximum number of partitions that are aggregated per cache node.</description>
</property>
<property>
<name>hive.metastore.hbase.aggregate.stats.false.positive.probability</name>
<value>0.01</value>
<description>Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%).</description>
</property>
<property>
<name>hive.metastore.hbase.aggregate.stats.max.variance</name>
<value>0.1</value>
<description>Maximum tolerable variance in number of partitions between a cached node and our request (default 10%).</description>
</property>
<property>
<name>hive.metastore.hbase.cache.ttl</name>
<value>600s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Number of seconds for a cached node to be active in the cache before they become stale.
</description>
</property>
<property>
<name>hive.metastore.hbase.cache.max.writer.wait</name>
<value>5000ms</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
Number of milliseconds a writer will wait to acquire the writelock before giving up.
</description>
</property>
<property>
<name>hive.metastore.hbase.cache.max.reader.wait</name>
<value>1000ms</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
Number of milliseconds a reader will wait to acquire the readlock before giving up.
</description>
</property>
<property>
<name>hive.metastore.hbase.cache.max.full</name>
<value>0.9</value>
<description>Maximum cache full % after which the cache cleaner thread kicks in.</description>
</property>
<property>
<name>hive.metastore.hbase.cache.clean.until</name>
<value>0.8</value>
<description>The cleaner thread cleans until cache reaches this % full size.</description>
</property>
<property>
<name>hive.metastore.hbase.connection.class</name>
<value>org.apache.hadoop.hive.metastore.hbase.VanillaHBaseConnection</value>
<description>Class used to connect to HBase</description>
</property>
<property>
<name>hive.metastore.hbase.aggr.stats.cache.entries</name>
<value>10000</value>
<description>How many aggregate stats objects to cache in memory</description>
</property>
<property>
<name>hive.metastore.hbase.aggr.stats.memory.ttl</name>
<value>60s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Number of seconds stats objects live in memory after they are read from HBase.
</description>
</property>
<property>
<name>hive.metastore.hbase.aggr.stats.invalidator.frequency</name>
<value>5s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
How often the stats cache scans its HBase entries and looks for expired entries
</description>
</property>
<property>
<name>hive.metastore.hbase.aggr.stats.hbase.ttl</name>
<value>604800s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Number of seconds stats entries live in the HBase cache after they are created. They may be invalidated by updates or partition drops before this. The default is one week.
</description>
</property>
<property>
<name>hive.metastore.hbase.file.metadata.threads</name>
<value>1</value>
<description>Number of threads to use to read file metadata in background to cache it.</description>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>3</value>
<description>Number of retries while opening a connection to metastore</description>
</property>
<property>
<name>hive.metastore.failure.retries</name>
<value>1</value>
<description>Number of retries upon failure of Thrift metastore calls</description>
</property>
<property>
<name>hive.metastore.port</name>
<value>9083</value>
<description>Hive metastore listener port</description>
</property>
<property>
<name>hive.metastore.client.connect.retry.delay</name>
<value>1s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Number of seconds for the client to wait between consecutive connection attempts
</description>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>600s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
MetaStore Client socket timeout in seconds
</description>
</property>
<property>
<name>hive.metastore.client.socket.lifetime</name>
<value>0s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
MetaStore Client socket lifetime in seconds. After this time is exceeded, client
reconnects on the next MetaStore operation. A value of 0s means the connection
has an infinite lifetime.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mine</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.metastore.ds.connection.url.hook</name>
<value/>
<description>Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used</description>
</property>
<property>
<name>javax.jdo.option.Multithreaded</name>
<value>true</value>
<description>Set this to true if multiple threads access metastore through JDO concurrently.</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=metastore_db;create=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
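<!-- A hedged sketch of pointing the metastore at an external database in
hive-site.xml, following the SSL-flag note above. Hostname, database name, and
driver choice are illustrative:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db.example.com:3306/metastore?useSSL=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
```
-->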
<property>
<name>hive.metastore.dbaccess.ssl.properties</name>
<value/>
<description>
Comma-separated SSL properties for metastore to access database when JDO connection URL
enables SSL access. e.g. javax.net.ssl.trustStore=/tmp/truststore,javax.net.ssl.trustStorePassword=pwd.
</description>
</property>
<property>
<name>hive.hmshandler.retry.attempts</name>
<value>10</value>
<description>The number of times to retry an HMSHandler call if there is a connection error.</description>
</property>
<property>
<name>hive.hmshandler.retry.interval</name>
<value>2000ms</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
The time between HMSHandler retry attempts on failure.
</description>
</property>
<property>
<name>hive.hmshandler.force.reload.conf</name>
<value>false</value>
<description>
Whether to force reloading of the HMSHandler configuration (including
the connection URL) before the next metastore query that accesses the
datastore. Once reloaded, this value is reset to false. Used for
testing only.
</description>
</property>
<property>
<name>hive.metastore.server.max.message.size</name>
<value>104857600</value>
<description>Maximum message size in bytes the HMS will accept.</description>
</property>
<property>
<name>hive.metastore.server.min.threads</name>
<value>200</value>
<description>Minimum number of worker threads in the Thrift server's pool.</description>
</property>
<property>
<name>hive.metastore.server.max.threads</name>
<value>1000</value>
<description>Maximum number of worker threads in the Thrift server's pool.</description>
</property>
<property>
<name>hive.metastore.server.tcp.keepalive</name>
<value>true</value>
<description>Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.</description>
</property>
<property>
<name>hive.metastore.archive.intermediate.original</name>
<value>_INTERMEDIATE_ORIGINAL</value>
<description>
Intermediate dir suffixes used for archiving. Not important what they
are, as long as collisions are avoided
</description>
</property>
<property>
<name>hive.metastore.archive.intermediate.archived</name>
<value>_INTERMEDIATE_ARCHIVED</value>
<description/>
</property>
<property>
<name>hive.metastore.archive.intermediate.extracted</name>
<value>_INTERMEDIATE_EXTRACTED</value>
<description/>
</property>
<property>
<name>hive.metastore.kerberos.keytab.file</name>
<value/>
<description>The path to the Kerberos Keytab file containing the metastore Thrift server's service principal.</description>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive-metastore/[email protected]</value>
<description>
The service principal for the metastore Thrift server.
The special string _HOST will be replaced automatically with the correct host name.
</description>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>false</value>
<description>If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description>
</property>
<property>
<name>hive.metastore.thrift.framed.transport.enabled</name>
<value>false</value>
<description>If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used.</description>
</property>
<property>
<name>hive.metastore.thrift.compact.protocol.enabled</name>
<value>false</value>
<description>
If true, the metastore Thrift interface will use TCompactProtocol. When false (default) TBinaryProtocol will be used.
Setting it to true will break compatibility with older clients running TBinaryProtocol.
</description>
</property>
<property>
<name>hive.metastore.token.signature</name>
<value/>
<description>The delegation token service name to match when selecting a token from the current user's tokens.</description>
</property>
<property>
<name>hive.cluster.delegation.token.store.class</name>
<value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
<description>The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.</description>
</property>
<property>
<name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
<value/>
<description>
The ZooKeeper token store connect string. You can re-use the configuration value
set in hive.zookeeper.quorum, by leaving this parameter unset.
</description>
</property>
<property>
<name>hive.cluster.delegation.token.store.zookeeper.znode</name>
<value>/hivedelegation</value>
<description>
The root path for token store data. Note that this is used by both HiveServer2 and
MetaStore to store delegation Token. One directory gets created for each of them.
The final directory names would have the servername appended to it (HIVESERVER2,
METASTORE).
</description>
</property>
<property>
<name>hive.cluster.delegation.token.store.zookeeper.acl</name>
<value/>
<description>
ACL for token store entries. Comma separated list of ACL entries. For example:
sasl:hive/[email protected]:cdrwa,sasl:hive/[email protected]:cdrwa
Defaults to all permissions for the hiveserver2/metastore process user.
</description>
</property>
<property>
<name>hive.metastore.cache.pinobjtypes</name>
<value>Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order</value>
<description>List of comma separated metastore object types that should be pinned in the cache</description>
</property>
<property>
<name>datanucleus.connectionPoolingType</name>
<value>BONECP</value>
<description>
Expects one of [bonecp, dbcp, hikaricp, none].
Specify connection pool library for datanucleus
</description>
</property>
<property>
<name>datanucleus.connectionPool.maxPoolSize</name>
<value>10</value>
<description>
Specify the maximum number of connections in the connection pool. Note: The configured size will be used by
2 connection pools (TxnHandler and ObjectStore). When configuring the max connection pool size, it is
recommended to take into account the number of metastore instances and the number of HiveServer2 instances
configured with embedded metastore. To get optimal performance, set config to meet the following condition
(2 * pool_size * metastore_instances + 2 * pool_size * HS2_instances_with_embedded_metastore) =
(2 * physical_core_count + hard_disk_count).
</description>
</property>
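<!-- The sizing condition quoted above can be rearranged to solve for the pool
size. A sketch for intuition only; it simply inverts the stated formula:

```python
def suggested_max_pool_size(physical_cores, hard_disks,
                            metastore_instances, hs2_embedded_instances):
    # From the condition above:
    #   2 * pool * (metastore_instances + hs2_embedded_instances)
    #     = 2 * physical_cores + hard_disks
    # so pool = (2 * cores + disks) / (2 * (metastores + embedded HS2s)).
    total_instances = metastore_instances + hs2_embedded_instances
    return max(1, (2 * physical_cores + hard_disks) // (2 * total_instances))
```
-->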
<property>
<name>datanucleus.rdbms.initializeColumnInfo</name>
<value>NONE</value>
<description>initializeColumnInfo setting for DataNucleus; set to NONE at least on Postgres.</description>
</property>
<property>
<name>datanucleus.schema.validateTables</name>
<value>false</value>
<description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
</property>
<property>
<name>datanucleus.schema.validateColumns</name>
<value>false</value>
<description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
</property>
<property>
<name>datanucleus.schema.validateConstraints</name>
<value>false</value>
<description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
</property>
<property>
<name>datanucleus.storeManagerType</name>
<value>rdbms</value>
<description>metadata store type</description>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>false</value>
<description>Auto-creates the necessary schema on startup if one doesn't exist. Set this to false after creating it once. To enable auto creation, also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases; run the schematool command instead.</description>
</property>
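For a disposable development environment only, the two settings above can be paired as the description suggests. This is a sketch for hive-site.xml, not a production recommendation; production schemas should be created and upgraded with schematool.

```xml
<!-- hive-site.xml sketch for a throwaway dev setup only:
     auto-create the metastore schema and skip version verification. -->
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
```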
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
<description>
Enforce metastore schema version consistency.
True: Verify that the version information stored in the metastore is compatible with the one from the
Hive jars. Also disable the automatic schema migration attempt. Users are required to manually migrate
the schema after a Hive upgrade, which ensures proper metastore schema migration. (Default)
False: Warn if the version information stored in the metastore doesn't match the one from the Hive jars.
</description>
</property>
<property>
<name>hive.metastore.schema.verification.record.version</name>
<value>false</value>
<description>
When true, the current MS version is recorded in the VERSION table. If this is disabled and verification is
enabled, the MS will be unusable.
</description>
</property>
<property>
<name>datanucleus.transactionIsolation</name>
<value>read-committed</value>
<description>Default transaction isolation level for identity generation.</description>
</property>
<property>
<name>datanucleus.cache.level2</name>
<value>false</value>
<description>Use a level 2 cache. Turn this off if metadata is changed independently of Hive metastore server</description>
</property>
<property>
<name>datanucleus.cache.level2.type</name>
<value>none</value>
<description/>
</property>
<property>
<name>datanucleus.identifierFactory</name>
<value>datanucleus1</value>
<description>
Name of the identifier factory to use when generating table/column names etc.
'datanucleus1' is used for backward compatibility with DataNucleus v1
</description>
</property>
<property>
<name>datanucleus.rdbms.useLegacyNativeValueStrategy</name>
<value>true</value>
<description/>
</property>
<property>
<name>datanucleus.plugin.pluginRegistryBundleCheck</name>
<value>LOG</value>
<description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
</property>
<property>
<name>hive.metastore.batch.retrieve.max</name>
<value>300</value>
<description>
Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch.
The higher the number, the fewer round trips are needed to the Hive metastore server,
but it may also increase memory requirements on the client side.
</description>
</property>
<property>
<name>hive.metastore.batch.retrieve.table.partition.max</name>
<value>1000</value>
<description>Maximum number of objects that metastore internally retrieves in one batch.</description>
</property>
<property>
<name>hive.metastore.init.hooks</name>
<value/>
<description>
A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization.
An init hook is specified as the name of Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener.
</description>
</property>
<property>
<name>hive.metastore.pre.event.listeners</name>
<value/>
<description>List of comma separated listeners for metastore events.</description>
</property>
<property>
<name>hive.metastore.event.listeners</name>
<value/>
<description>A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. The metastore event and corresponding listener method will be invoked in separate JDO transactions. Alternatively, configure hive.metastore.transactional.event.listeners to ensure both are invoked in same JDO transaction.</description>
</property>
<property>
<name>hive.metastore.transactional.event.listeners</name>
<value/>
<description>A comma separated list of Java classes that implement the org.apache.hadoop.hive.metastore.MetaStoreEventListener interface. Both the metastore event and corresponding listener method will be invoked in the same JDO transaction.</description>
</property>
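As a sketch of the distinction drawn above: a listener whose effects must be atomic with the metastore change itself belongs in the transactional list. The snippet below uses DbNotificationListener, the notification-log listener shipped with Hive's hcatalog module; treat the exact class choice as an example of the pattern rather than a required configuration.

```xml
<!-- hive-site.xml sketch: run the notification-log listener in the
     same JDO transaction as the metastore event, so the event row is
     committed (or rolled back) together with the metadata change. -->
<property>
  <name>hive.metastore.transactional.event.listeners</name>
  <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
</property>
```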
<property>
<name>hive.metastore.event.db.listener.timetolive</name>
<value>86400s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Time after which events will be removed from the database listener queue.
</description>
</property>
<property>
<name>hive.metastore.authorization.storage.checks</name>
<value>false</value>
<description>
Should the metastore do authorization checks against the underlying storage (usually hdfs)
for operations like drop-partition (disallow the drop-partition if the user in
question doesn't have permissions to delete the corresponding directory
on the storage).
</description>
</property>
<property>
<name>hive.metastore.authorization.storage.check.externaltable.drop</name>
<value>true</value>
<description>
Should StorageBasedAuthorization check permissions on the storage before dropping an external table.
StorageBasedAuthorization already does this check for managed tables. For external tables, however,
anyone with read permission on the directory could otherwise drop the table, which is surprising.
Set this flag to false to retain the old, backward-compatible behavior.
</description>
</property>
<property>
<name>hive.metastore.event.clean.freq</name>
<value>0s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Frequency at which the timer task runs to purge expired events in the metastore.
</description>
</property>
<property>
<name>hive.metastore.event.expiry.duration</name>
<value>0s</value>
<description>
Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
Duration after which events expire from the events table.
</description>
</property>
<property>
<name>hive.metastore.event.message.factory</name>
<value>org.apache.hadoop.hive.metastore.messaging.json.JSONMessageFactory</value>
<description>Factory class used to encode and decode the messages in generated events.</description>
</property>
<property>
<name>hive.metastore.execute.setugi</name>
<value>true</value>
<description>
In unsecure mode, setting this property to true causes the metastore to execute DFS operations using
the client's reported user and group permissions. Note that this property must be set on
both the client and server sides. Further note that it is best effort:
if the client sets it to true and the server sets it to false, the client setting is ignored.
</description>
</property>
<property>
<name>hive.metastore.partition.name.whitelist.pattern</name>
<value/>
<description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
</property>
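A minimal sketch of how this pattern might be set in hive-site.xml. The regex below is purely illustrative (it is not a shipped default): it rejects partition values containing anything other than alphanumerics, dots, underscores, and hyphens.

```xml
<!-- hive-site.xml sketch; the pattern is an illustrative example.
     Partition values not matching this regex are rejected. -->
<property>
  <name>hive.metastore.partition.name.whitelist.pattern</name>
  <value>[A-Za-z0-9._\-]+</value>
</property>
```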
<property>
<name>hive.metastore.integral.jdo.pushdown</name>
<value>false</value>
<description>
Allow JDO query pushdown for integral partition columns in metastore. Off by default. This
improves metastore perf for integral columns, especially if there's a large number of partitions.
However, it doesn't work correctly with integral values that are not normalized (e.g. have
leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization
is also irrelevant.
</description>
</property>
<property>
<name>hive.metastore.try.direct.sql</name>
<value>true</value>
<description>
Whether the Hive metastore should try to use direct SQL queries instead of the
DataNucleus for certain read paths. This can improve metastore performance when
fetching many partitions or column statistics by orders of magnitude; however, it
is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures,
the metastore will fall back to the DataNucleus, so it's safe even if SQL doesn't
work for all queries on your datastore. If all SQL queries fail (for example, your
metastore is backed by MongoDB), you might want to disable this to save the
try-and-fall-back cost.
</description>
</property>
<property>
<name>hive.metastore.direct.sql.batch.size</name>
<value>0</value>
<description>
Batch size for partition and other object retrieval from the underlying DB in direct
SQL. For some DBs like Oracle and MSSQL, there are hardcoded or perf-based limitations
that necessitate this. For DBs that can handle the queries, this isn't necessary and
may impede performance. -1 means no batching, 0 means automatic batching.
</description>
</property>
<property>
<name>hive.metastore.try.direct.sql.ddl</name>
<value>true</value>
<description>
Same as hive.metastore.try.direct.sql, for read statements within a transaction that
modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL
select query has incorrect syntax or something similar inside a transaction, the
entire transaction will fail and fall-back to DataNucleus will not be possible. You
should disable the usage of direct SQL inside transactions if that happens in your case.
</description>
</property>
<property>
<name>hive.direct.sql.max.query.length</name>
<value>100</value>
<description>
The maximum size of a query string (in KB).
</description>
</property>
<property>
<name>hive.direct.sql.max.elements.in.clause</name>
<value>1000</value>
<description>
The maximum number of values in an IN clause. Once exceeded, it will be broken into
multiple OR separated IN clauses.
</description>
</property>
<property>
<name>hive.direct.sql.max.elements.values.clause</name>
<value>1000</value>
<description>The maximum number of values in a VALUES clause for an INSERT statement.</description>
</property>
<property>
<name>hive.metastore.orm.retrieveMapNullsAsEmptyStrings</name>
<value>false</value>
<description>Thrift does not support nulls in maps, so any nulls present in maps retrieved from the ORM must either be pruned or converted to empty strings. Some backing DBs, such as Oracle, persist empty strings as nulls, so set this parameter if you wish to reverse that behaviour. For others, pruning is the correct behaviour.</description>
</property>
<property>
<name>hive.metastore.disallow.incompatible.col.type.changes</name>
<value>true</value>
<description>
If true (the default), ALTER TABLE operations which change the type of a
column (say STRING) to an incompatible type (say MAP) are disallowed.
RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the
datatypes can be converted from string to any type. The map is also serialized as
a string, which can be read as a string as well. However, with any binary
serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions
when subsequently trying to access old partitions.
Primitive types like INT, STRING, BIGINT, etc., are compatible with each other and are
not blocked.
See HIVE-4409 for more details.
</description>
</property>
<property>
<name>hive.metastore.limit.partition.request</name>
<value>-1</value>
<description>
This limits the number of partitions that can be requested from the metastore for a given table.
The default value "-1" means no limit.
</description>
</property>
<property>
<name>hive.table.parameters.default</name>
<value/>
<description>Default property values for newly created tables</description>
</property>
<property>