
I'm executing this command on the namenode of a 4-node Hadoop cluster:

hadoop fs -ls /

But it shows an error:

ls: Failed on local exception: java.net.SocketException: 
Network is unreachable; Host Details: local host is "namenode/172.16.1.2"; 
destination host is: "namenode":9000;

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode:9000</value>
    </property>
</configuration>

cat /etc/hosts:

172.16.1.2  namenode
172.16.1.3  datanode1
172.16.1.4  datanode2
172.16.1.5  datanode3
  • Post a little more code. (Apr 29, 2016)
  • Hi. I didn't understand; more code? I just executed that command "hadoop fs -ls /" and got that error. (Apr 29, 2016)
  • I updated the question with the configuration of the two files. (Apr 29, 2016)
  • Try using port 8020 instead of 9000 (see the check sketched after this list). (Apr 29, 2016)
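
Regarding the port suggestion, a minimal check sketch (assuming a Linux namenode with net-tools or iproute2 installed); the idea is to confirm which port the NameNode RPC service is actually bound to before editing fs.defaultFS:

# run on the namenode: list listening TCP sockets owned by Java processes
sudo netstat -tlnp | grep java     # or: sudo ss -tlnp | grep java
# if the NameNode is bound to :8020 rather than :9000, change the port in
# fs.defaultFS in core-site.xml to match and restart HDFS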

2 Answers


First, try to ping the namenode and see what happens. If the ping reaches the host, check the firewall (iptables) on both your current machine and the namenode, because it is probably blocking the related traffic.
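
A minimal sketch of those checks, assuming a Linux host with netcat and iptables available; "namenode" and port 9000 are taken from the question:

# from the machine where the hadoop command fails
ping -c 3 namenode                 # basic reachability of the NameNode host
nc -zv namenode 9000               # can we open a TCP connection to the RPC port?

# on both this machine and the namenode: list firewall rules that could drop the traffic
sudo iptables -L -n -v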



For me, setting this JVM option worked:

-Djava.net.preferIPv4Stack=true
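
A common place to put that option is hadoop-env.sh via HADOOP_OPTS; a sketch, assuming a stock Apache Hadoop layout (the exact path can differ per distribution):

# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
# then restart the HDFS daemons so the option takes effect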

