
Hive First-Startup Problems: What to Start Before Starting Hive




Key Point

Key point: before starting Hive, start ZooKeeper first, then HDFS, and then initialize the Hive metastore database. Also make sure the configuration files are set up consistently with each other.
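The startup order above can be sketched as a shell function. This is a sketch only: the script names come from the stock ZooKeeper/Hadoop/Hive distributions, and the ZKFC step and -dbType mysql are assumptions about this particular HA cluster.

```shell
# Sketch of the first-start sequence described above.
# Assumptions: stock ZooKeeper/Hadoop/Hive scripts are on PATH,
# the cluster uses HDFS HA (hence zkfc), and the metastore is MySQL.
start_stack_for_hive() {
  zkServer.sh start                    # 1. ZooKeeper (run on every quorum node)
  hadoop-daemon.sh start zkfc          # 2. ZKFC, then HDFS
  start-dfs.sh
  schematool -dbType mysql -initSchema # 3. initialize the metastore (first start only)
  hive                                 # 4. only now start Hive
}
```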

Problems

Why: every setup needs to run cp /opt/dtc/software/flume/conf/flume-env.sh.template /opt/dtc/software/flume/conf/flume-env.sh, plus hadoop-daemon.sh start zkfc and zkServer.sh. Hadoop-ecosystem components ship .template configuration files that must be copied into place before use, and ZooKeeper and ZKFC must already be running for HDFS HA to work.

Problem 1:

Mon Mar 07 23:28:09 PST 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.

Cause: per MySQL's requirements (5.5.45+, 5.6.26+ and 5.7.6+), the driver warns when connecting without SSL server-identity verification. This is only a warning, but to silence it you need to change the connection URL. Solution: in Hive's configuration file /hive/conf/hive-site.xml, edit javax.jdo.option.ConnectionURL and append useSSL=false to the end of the connection string:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://a:3306/metastore?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
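Before restarting Hive, it can also help to confirm that the metastore database is reachable at all. A minimal check with the mysql client, where host a and database metastore come from the JDBC URL above, and the root user is a placeholder for your own metastore account:

```shell
# Hedged sketch: confirm the metastore MySQL database is reachable.
# Host "a" and database "metastore" come from the JDBC URL above;
# "root" is a placeholder for your metastore account (you will be
# prompted for the password).
check_metastore_db() {
  mysql -h a -u root -p -e 'USE metastore; SHOW TABLES;'
}
```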

Problem 2:

[Fatal Error] hive-site.xml:505:90: The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter. Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/opt/dtc/software/hive/conf/hive-site.xml; lineNumber: 505; columnNumber: 90; The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter. at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2860) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2706) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2579) at org.apache.hadoop.conf.Configuration.get(Configuration.java:1350) at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3558) at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3622) at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:3709) at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:3652) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:82) at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:66) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:657) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:234) at org.apache.hadoop.util.RunJar.main(RunJar.java:148) Caused by: org.xml.sax.SAXParseException; systemId: file:/opt/dtc/software/hive/conf/hive-site.xml; lineNumber: 505; columnNumber: 90; The reference to entity "createDatabaseIfNotExist" must end with the ';' delimiter. 
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150) at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2684) at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2672) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2743) ... 17 more

Cause: this comes from XML's entity-escaping rules. A bare & in an XML value must be written as &amp;, otherwise the parser fails with the "must end with the ';' delimiter" error above. Solution: in Hive's configuration file /hive/conf/hive-site.xml, replace every bare & in the connection URL with &amp;:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://a:3306/metastore?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
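A quick way to catch bare ampersands before Hive's XML parser does is a negative-lookahead grep. This is a sketch that assumes GNU grep with -P (Perl regex) support; pass it the path to your hive-site.xml:

```shell
# Flag any "&" in an XML file that does not start a well-formed entity
# such as &amp; &lt; &gt; &quot; &apos; or a numeric character reference.
# Requires GNU grep's -P (PCRE) mode.
find_bare_ampersands() {
  grep -nP '&(?!(amp|lt|gt|quot|apos|#[0-9]+|#x[0-9a-fA-F]+);)' "$1"
}
```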

Problem 3:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D at org.apache.hadoop.fs.Path.initialize(Path.java:254) at org.apache.hadoop.fs.Path.<init>(Path.java:212) at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:644) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:563) at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:234) at org.apache.hadoop.util.RunJar.main(RunJar.java:148) Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D at java.net.URI.checkPath(URI.java:1823) at java.net.URI.<init>(URI.java:745) at org.apache.hadoop.fs.Path.initialize(Path.java:251) ... 12 more

Cause: placeholders of the form ${system:java.io.tmpdir} were apparently intended to be resolved through Java system properties, e.g. System.getProperty("java.io.tmpdir"), but Hive's configuration parser cannot expand the system: prefix. Solution: in the configuration file, remove the system: prefix from values such as ${system:java.io.tmpdir}, leaving ${java.io.tmpdir}, which Java can read directly.
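The replacement can be applied in bulk with sed. A sketch only: the hive-site.xml path is up to your installation, and the function keeps a .bak backup of the original file:

```shell
# Strip the unsupported "system:" prefix from placeholders like
# ${system:java.io.tmpdir} and ${system:user.name} in the given file,
# keeping a .bak backup alongside it (GNU sed -i syntax).
strip_system_prefix() {
  sed -i.bak 's/${system:\([^}]*\)}/${\1}/g' "$1"
}
```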

Problem 4:

[root@a bin]# hdfs haadmin -transitionToActive nn1 Automatic failover is enabled for NameNode at b/192.168.80.129:8020 Refusing to manually manage HA state, since it may cause a split-brain scenario or other incorrect state. If you are very sure you know what you are doing, please specify the --forcemanual flag.

Cause: ZKFC automatic failover is enabled, so the active NameNode cannot be switched manually; ZKFC elects the active NameNode on its own. Solution: stop ZooKeeper and HDFS first, then bring everything back up in order:

zkStart-all.sh start
hadoop-daemon.sh start zkfc
start-all.sh
# check the NameNode state
hdfs haadmin -getServiceState nn1
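The recovery sequence just described can be sketched as one function. Note that zkStart-all.sh in the post is a site-specific wrapper; the stock script names used here, and running zkServer.sh on each quorum node, are assumptions:

```shell
# Sketch of the recovery sequence for the ZKFC/HA state above.
# Assumptions: stock Hadoop/ZooKeeper scripts on PATH; zkServer.sh
# must be run on every ZooKeeper quorum node, not just one host.
recover_ha_state() {
  stop-all.sh                         # stop HDFS/YARN
  zkServer.sh stop                    # stop ZooKeeper (each quorum node)
  zkServer.sh start                   # restart ZooKeeper
  hadoop-daemon.sh start zkfc         # restart ZKFC so it elects the active NN
  start-all.sh
  hdfs haadmin -getServiceState nn1   # expect "active" or "standby"
}
```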


