
I am new to Hadoop; I only started with it today. I want to write a file to HDFS on a Hadoop server. I am using Hadoop 1.2.1. When I run the jps command on the CLI, I can see the following processes running:

31895 Jps
29419 SecondaryNameNode
29745 TaskTracker
29257 DataNode

This is my sample client code to write a file to HDFS:

package com.test.hadoop.writefiles;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileWriter {
    public static void main(String[] args) {
        try {
            // 1. Get an instance of Configuration and load the cluster config files
            Configuration configuration = new Configuration();
            configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
            configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
            // 2. Create an InputStream to read the data from the local file
            InputStream inputStream = new BufferedInputStream(
                    new FileInputStream("/home/local/PAYODA/hariprasanth.l/Desktop/ProjectionTest"));
            // 3. Get the HDFS instance
            FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
            // 4. Open an OutputStream to write the data; this is obtained from the FileSystem
            OutputStream outputStream = hdfs.create(
                    new Path("hdfs://localhost:54310/user/hadoop/Hadoop_File.txt"),
                    new Progressable() {
                        @Override
                        public void progress() {
                            System.out.println("....");
                        }
                    });
            try {
                IOUtils.copyBytes(inputStream, outputStream, 4096, false);
            } finally {
                IOUtils.closeStream(inputStream);
                IOUtils.closeStream(outputStream);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
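
For reference, the NameNode URI used in step 3 has to match the fs.default.name value in core-site.xml. A minimal sketch of that property, assuming the NameNode really does listen on localhost:54310 as the code expects:

<configuration>
  <property>
    <!-- NameNode RPC address that the client URI must match -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>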

This is the exception I get while running the code:

java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
at org.apache.hadoop.ipc.Client.call(Client.java:1031)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at com.sun.proxy.$Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:163)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:283)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:247)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:109)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1792)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:76)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1826)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1808)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:265)
at com.test.hadoop.writefiles.FileWriter.main(FileWriter.java:27)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)

When I debug it, the error happens on the line where I try to connect to the local HDFS server:

  FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);

As far as I could tell from googling, it indicates that I am mismatching versions.

The server version of Hadoop is 1.2.1. The client jars I am using are:

hadoop-common-0.22.0.jar
hadoop-hdfs-0.22.0.jar

Please tell me what the problem is, ASAP.

If possible, please recommend where I can find the client jars for Hadoop, and name the jars too.

Regards, Hari

  • Looks like the Hadoop services are not started properly; there is no "NameNode" service running. That might not be an issue at this point, but it will come up once you resolve the jar dependency issue. Please post your core-site.xml, hdfs-site.xml, and mapred-site.xml files. Commented Aug 5, 2014 at 5:21

2 Answers


It was because the same class is represented in different jars, i.e. hadoop-common and hadoop-core both contain the same class. I had gotten confused about which of the corresponding jars to use.

Finally I ended up using the Apache hadoop-core jar. It works like a charm.
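
For what it's worth, a minimal sketch of the matching dependency, assuming a Maven build and a 1.2.1 cluster (the 0.22.0 hadoop-common/hadoop-hdfs jars would then be removed from the classpath):

<!-- hypothetical pom.xml snippet; the version must match the cluster -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
</dependency>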



There is no NameNode running, so the problem is with your NameNode. Did you format the NameNode before starting it up?

hadoop namenode -format

5 Comments

This is my jps command output: 2988 org.eclipse.equinox.launcher_1.2.0.v20110502.jar, 3719 TaskTracker, 3271 NameNode, 3511 SecondaryNameNode, 8472 Jps, 3606 JobTracker
There is no problem with the NameNode either... Could you please tell me the steps for configuring the Eclipse part?
When you run jps you should get all of the following: 8486 NodeManager, 7823 DataNode, 8092 SecondaryNameNode, 7613 NameNode, 10831 Jps, 8265 ResourceManager
I think you are missing the DataNode now.
Delete all the files under the Hadoop filesystem path (hadoop.tmp.dir, as configured in /hadoop_path/etc/hadoop/core-site.xml), then format the NameNode again. Maybe this would help.
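
For reference, the hadoop.tmp.dir mentioned in the last comment is just a property in core-site.xml; a minimal sketch, where /app/hadoop/tmp is purely an illustrative path:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- illustrative path only; use whatever your cluster is configured with -->
    <value>/app/hadoop/tmp</value>
  </property>
</configuration>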
