java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000

java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000 failed on connection exception: java.net.ConnectException: Connection refused;
22/05/03 00:34:57 INFO client.RMProxy: Connecting to ResourceManager at /192.168.232.138:8032
java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
    at org.apache.hadoop.ipc.Client.call(Client.java:1495)
    at org.apache.hadoop.ipc.Client.call(Client.java:1394)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:800)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1673)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1524)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1521)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1521)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1632)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:279)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:145)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:244)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:814)
    at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:423)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1610)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    ... 45 more

Connection Refused

You get a ConnectionRefused exception when there is a machine at the address specified, but there is no program listening on the specific TCP port the client is using, and there is no firewall in the way silently dropping TCP connection requests. If you do not know what a TCP connection request is, please consult the specification.
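
To see the root cause in isolation, here is a minimal, self-contained sketch of the connection the Hadoop IPC client is attempting: open a TCP socket to the address in the trace. The host and port are taken from the error message above; if nothing is listening there, connect() fails exactly as in the "Caused by" section.

    import java.net.ConnectException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortProbe {
        public static void main(String[] args) throws Exception {
            // Host and port come straight from the exception text above.
            try (Socket socket = new Socket()) {
                // connect() throws ConnectException ("Connection refused")
                // when no process is listening on the target port.
                socket.connect(new InetSocketAddress("192.168.232.138", 9000), 5000);
                System.out.println("Connected: something is listening on 9000.");
            } catch (ConnectException e) {
                System.out.println("Connection refused: no process is listening there.");
            }
        }
    }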

Unless there is a configuration error at either end, a common cause for this is the Hadoop service isn't running.

This stack trace is very common when the cluster is being shut down, because at that point Hadoop services are being torn down across the cluster, which is visible to those services and applications which haven't been shut down themselves. Seeing this error message during cluster shutdown is nothing to worry about.

If the application or cluster is not working, and this message appears in the log, then it is more serious.

The exception text declares both the hostname and the port to which the connection failed. The port can be used to identify the service. For example, port 9000 is the HDFS port. Consult the Ambari port reference, and/or those of the supplier of your Hadoop management tools.
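
On a plain Apache Hadoop installation, the 9000 in this trace normally comes from fs.defaultFS in core-site.xml, so that file is the first thing to compare against the exception text. An illustrative entry (the address shown is the one from the trace above):

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://192.168.232.138:9000</value>
    </property>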

Check that the hostname the client is using is correct. If it's in a Hadoop configuration option, examine it carefully and try doing a ping by hand.

Check that the IP address the client gets for that hostname is correct.

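Since the Hadoop client resolves hostnames through the JVM, asking Java directly shows exactly what the client will see. A small sketch; "master" is a placeholder for whatever hostname appears in your configuration:

    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // Print every address the local resolver returns for the hostname.
            // An unexpected 127.0.0.1 or 127.0.1.1 here points at /etc/hosts.
            for (InetAddress addr : InetAddress.getAllByName("master")) {
                System.out.println(addr.getHostAddress());
            }
        }
    }
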
Make sure the destination address in the exception isn't 0.0.0.0. That would mean you haven't actually configured the client with the real address for the service; instead it is picking up the server-side property telling the service to listen on every network interface for connections.
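
Note that 0.0.0.0 is perfectly legitimate as a server-side bind address; the problem is when it leaks into the client's view of the service. As an illustration, assuming Hadoop 2 or later, the NameNode can be told to bind its RPC port on every interface in hdfs-site.xml:

    <!-- Server side: binding the RPC port on every interface is fine here. -->
    <property>
      <name>dfs.namenode.rpc-bind-host</name>
      <value>0.0.0.0</value>
    </property>

The client, by contrast, needs a real routable address in fs.defaultFS; hdfs://0.0.0.0:9000 there is a misconfiguration, not a destination.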

If the error message says the remote service is on "127.0.0.1" or "localhost", that means the configuration file is telling the client that the service is on the local server. If your client is trying to talk to a remote system, then your configuration is broken.

Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
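
For illustration, the broken pattern and one possible fix (the hostname "master" is hypothetical):

    # /etc/hosts -- broken: the cluster hostname resolves to loopback
    127.0.0.1       localhost
    127.0.1.1       master

    # /etc/hosts -- fixed: map the hostname to the machine's real address
    127.0.0.1       localhost
    192.168.232.138 master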

Check that the port the client is trying to talk to matches the port the server is offering its service on. The netstat command is useful there.
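
For example, on Linux (flags: -t TCP, -l listening sockets, -p owning process, -n numeric; the output line is abridged and illustrative):

    netstat -tlpn | grep 9000
    tcp   0   0 127.0.0.1:9000   0.0.0.0:*   LISTEN   4758/java

A local address of 127.0.0.1:9000 means the service is bound to loopback only, so remote clients will always be refused; 0.0.0.0:9000 or the machine's real address means it is reachable from other hosts, firewall permitting.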

On the server, try a telnet localhost <port> to see if the port is open there.

On the client, try a telnet <server> <port> to see if the port is accessible remotely.
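
The two possible outcomes of those probes look roughly like this (exact wording varies between telnet implementations, so treat the transcripts as illustrative):

    telnet 192.168.232.138 9000
    Trying 192.168.232.138...
    telnet: Unable to connect to remote host: Connection refused

    telnet 192.168.232.138 9000
    Trying 192.168.232.138...
    Connected to 192.168.232.138.
    Escape character is '^]'.

The first transcript reproduces the root cause of this exception; the second means the port is open and the problem lies elsewhere.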

Try connecting to the server/port from a different machine, to see if it is just the single client misbehaving.

If your client and the server are in different subdomains, it may be that the configuration of the service is only publishing the basic hostname, rather than the Fully Qualified Domain Name. The client in the different subdomain can then unintentionally attempt to bind to a host in the local subdomain, and fail.
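
To see which names the JVM on each machine believes it has, here is a small sketch comparing the short hostname with the canonical (fully qualified) one; running it on both client and server can expose the mismatch described above:

    import java.net.InetAddress;

    public class FqdnCheck {
        public static void main(String[] args) throws Exception {
            InetAddress local = InetAddress.getLocalHost();
            // If only a short name comes back and the client sits in another
            // subdomain, the client may resolve that name locally and fail.
            System.out.println("host name     : " + local.getHostName());
            System.out.println("canonical name: " + local.getCanonicalHostName());
        }
    }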

If you are using a Hadoop-based product from a third party, please use the support channels provided by the vendor.

Please do not file bug reports related to your problem, as they will be closed as Invalid.

See also: Server Overflow.

None of these are Hadoop problems; they are Hadoop, host, network and firewall configuration issues. As it is your cluster, only you can find out and track down the problem.
