
I have been trying to set up and run Hadoop in pseudo-distributed mode. But when I type

bin/hadoop fs -mkdir input

I get

mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Here are the details:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h1:9000</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>h1:9001</value>
  </property>

  <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>h1:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>h1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>h1:19888</value>
  </property>

</configuration>

hdfs-site.xml

<configuration>

  <property>
    <name>dfs.http.address</name>
    <value>h1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>h1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>h1:50090</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/grid/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 h1
192.168.1.14 h2
192.168.1.15 h3

After running hadoop namenode -format and start-all.sh, jps shows:

1702 ResourceManager
1374 DataNode
1802 NodeManager
2331 Jps
1276 NameNode
1558 SecondaryNameNode

The problem occurs:

[grid@h1 hadoop-2.6.0]$ bin/hadoop fs -mkdir input
15/05/13 16:37:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Where is the problem?

hadoop-grid-datanode-h1.log

2015-05-12 11:26:20,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

hadoop-grid-namenode-h1.log

2015-05-08 16:06:32,561 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

Why does port 9000 not work?

[grid@h1 ~]$ netstat -tnl |grep 9000
[grid@h1 ~]$ netstat -tnl |grep 9001
tcp        0      0 192.168.1.13:9001           0.0.0.0:*                   LISTEN     
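For reference, netstat can also report which process owns each listening socket, to confirm what (if anything) is bound to these ports. This is a generic diagnostic, and the -p flag usually requires root:

# Show pid/program name for listening sockets on ports 9000/9001
sudo netstat -tnlp | grep 900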
  • Post your namenode and datanode logs. Commented May 13, 2015 at 9:45
  • Are both of your machines running a 32-bit OS? Commented May 13, 2015 at 10:08
  • Is your HDFS instance running on the specified port "9000"? Commented May 13, 2015 at 12:06
  • Wow, it seems that port 9000 is not open, but I still cannot figure out why; port 9001 works. Commented May 14, 2015 at 1:43

4 Answers


Please start dfs and yarn.

[hadoop@hadooplab sbin]$ ./start-dfs.sh

[hadoop@hadooplab sbin]$ ./start-yarn.sh

Now try using "bin/hadoop fs -mkdir input"

The issue usually comes up when you install Hadoop in a VM and then shut it down. When you shut down the VM, DFS and YARN also stop, so you need to start them each time you restart the VM.
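For instance, a minimal start-and-verify sequence could look like this (run from Hadoop's sbin directory, as in the prompts above; the port check assumes the NameNode address from the question's core-site.xml):

./start-dfs.sh
./start-yarn.sh

# All the expected daemons should show up here
jps

# The NameNode RPC port should now be listening
netstat -tnl | grep 9000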




First, try the command:

bin/hadoop dfs -mkdir input

If you have followed Michael Noll's post properly, then you should not have any issue. I suspect that passwordless SSH is not working in your configuration; recheck it.
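To recheck passwordless SSH, a typical sketch for a pseudo-distributed setup looks like the following (standard OpenSSH commands; the hostname h1 is taken from the question):

# Create a key pair if one does not exist yet (empty passphrase)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Authorize that key for logins to this host
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# This must succeed without a password prompt
ssh h1 exit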

2 Comments

[grid@h1 hadoop-2.6.0]$ bin/hadoop dfs -mkdir input
DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
15/05/14 09:45:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: wiki.apache.org/hadoop/ConnectionRefused
So did you figure out why port 9000 is not opening? Try restarting the system once.

The following procedure resolved the issue for me (a consolidated sketch follows the list):

  1. Stop all the services.

  2. Delete namenode and datanode directories as specified in hdfs-site.xml.

  3. Create new namenode and datanode directories and modify hdfs-site.xml accordingly.

  4. In core-site.xml, make the following changes or add the following properties:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.20.12.168/</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://172.20.12.168:8020</value>
    </property>

  5. Make the following changes in the hadoop-2.6.4/etc/hadoop/hadoop-env.sh file:

    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home

  6. Restart dfs, yarn and mr as follows:

    start-dfs.sh
    start-yarn.sh
    mr-jobhistory-daemon.sh start historyserver
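Put together, steps 1-3 look roughly like the shell session below. The directory paths are only illustrative (use whatever your hdfs-site.xml actually points at), and note that after wiping the namenode directory, HDFS must be reformatted before it will start, which destroys all existing HDFS data:

# 1. Stop all running services
stop-dfs.sh
stop-yarn.sh

# 2-3. Remove and recreate the namenode/datanode directories (illustrative paths)
rm -rf /home/grid/name /home/grid/data
mkdir -p /home/grid/name /home/grid/data

# Reformat HDFS so the new namenode directory is initialized
hdfs namenode -format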



This command worked for me:

hadoop namenode -format
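A caveat worth noting: formatting the NameNode erases all existing HDFS metadata, so this only makes sense on a fresh or disposable setup. On Hadoop 2.x the hadoop namenode form is deprecated; the equivalent current command is:

hdfs namenode -format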

