
I've followed the instructions in Hadoop: The Definitive Guide, 4th edition, Appendix A, to configure Hadoop in pseudo-distributed mode. Everything is working well, except when I try to make a directory:

hadoop fs -mkdir -p /user/$USER

The command returns the following message: mkdir: '/user/my_user_name': Input/output error.

However, when I first log into the root account with sudo -s and then run the same hadoop fs -mkdir -p /user/$USER command, the directory /user/root is created (along with all directories in the path).

I think I'm having Hadoop permission issues.
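For reference, this is how I would check the permission theory (assuming root is the HDFS superuser because it started the NameNode, which would match the sudo -s behaviour above):

```shell
# Show the owner of the HDFS root -- the superuser is whoever started the NameNode
hadoop fs -ls /

# Create the home directory as the assumed superuser, then hand ownership back
sudo -u root hadoop fs -mkdir -p /user/my_user_name
sudo -u root hadoop fs -chown my_user_name /user/my_user_name
```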

Any help would be really appreciated, Thanks.

  • Did you check if the underlying filesystem has any IO errors? Commented Oct 7, 2017 at 12:11
  • The command mkdir -p user/$USER works fine (I'm creating the hierarchy in my home directory, no need for root), so I assume the local filesystem has no IO errors. Commented Oct 7, 2017 at 12:16
  • You need to be the HDFS superuser to make those folders. Apparently, yours is root, but that's not always the case Commented Oct 7, 2017 at 15:08
  • Which Hadoop version are you using? Commented May 1, 2018 at 16:02
  • I got the same error message once when I tried to access the file system and had forgotten to configure fs.defaultFS in core-site.xml (see also: hadoop.apache.org/docs/current/hadoop-project-dist/…) Commented May 1, 2018 at 16:11

2 Answers


It means that you have a mistake in the 'core-site.xml' file. For instance, I had a typo in the first property name: I wrote 'fa.defaultFS' instead of 'fs.defaultFS'.

After fixing it, you have to run the 'stop-all.sh' script to stop Hadoop. You will probably also have to format the NameNode with the commands 'rm -rf /app/tmp/your-username/*' (the path your hadoop.tmp.dir points to) and 'hdfs namenode -format'. Then start Hadoop again with the 'start-all.sh' script.
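Put together, the restart sequence above looks like this (the /app/tmp/your-username path is whatever hadoop.tmp.dir is set to in your core-site.xml, so treat it as an assumption):

```shell
stop-all.sh                      # stop all Hadoop daemons (stop-dfs.sh / stop-yarn.sh on newer releases)
rm -rf /app/tmp/your-username/*  # clear the NameNode data under hadoop.tmp.dir -- this erases HDFS data!
hdfs namenode -format            # re-format the NameNode
start-all.sh                     # bring the daemons back up
```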

You may also have to reboot the system after running the stop script.

After these steps, I could run that command again.




I corrected the core-site.xml file based on the standard settings and it works fine now:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/your_user_name/hadooptmpdata</value>
    <description>Where Hadoop will place all of its working files</description>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
    <description>Where the HDFS NameNode can be found on the network</description>
</property>
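Before restarting Hadoop, it is worth checking that the edited file is still well-formed XML, since a stray closing tag (such as '</prosperty>' instead of '</property>') silently breaks configuration loading. A minimal sketch using python3 from the shell (the /tmp file path stands in for $HADOOP_HOME/etc/hadoop/core-site.xml and is an assumption):

```shell
# Stand-in for $HADOOP_HOME/etc/hadoop/core-site.xml (path is an assumption)
conf=/tmp/core-site.xml
cat > "$conf" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF

# Parse the file and print fs.defaultFS; a malformed file fails loudly here
# instead of silently breaking Hadoop startup
python3 - "$conf" <<'EOF'
import sys
import xml.etree.ElementTree as ET

root = ET.parse(sys.argv[1]).getroot()
for prop in root.findall('property'):
    if prop.findtext('name') == 'fs.defaultFS':
        print(prop.findtext('value'))
EOF
```

If the parse succeeds and prints hdfs://localhost:9000, the file is at least structurally sound and safe to hand to Hadoop.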

