
I am using Ubuntu 12.04, hadoop-0.23.5, and hive-0.9.0. In hive-site.xml I pointed my metastore_db to a separate location, $HIVE_HOME/my_db/metastore_db.
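For reference, the metastore location is normally controlled by the javax.jdo.option.ConnectionURL property in hive-site.xml. A quick way to confirm what Hive is actually picking up (the expected value is only illustrative, assuming the standard embedded-Derby setup):

grep -A1 javax.jdo.option.ConnectionURL $HIVE_HOME/conf/hive-site.xml
# expect something like: jdbc:derby:;databaseName=/path/to/hive/my_db/metastore_db;create=true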

Hadoop runs fine; jps shows ResourceManager, NameNode, DataNode, NodeManager, and SecondaryNameNode.

Hive starts perfectly, metastore_db and derby.log get created, and all Hive commands run successfully; I can create databases, tables, etc. But a few days later, when I run show databases or show tables, I get the error below:

FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
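The error means Hive's DDL task cannot reach HDFS at localhost:54310, which usually indicates the NameNode is not running or not listening on that port. A few quick checks (a sketch; the host and port are taken from the error message, and on hadoop-0.23 the configuration may live under etc/hadoop rather than conf):

jps | grep -i namenode                                       # is the NameNode process up at all?
netstat -tlnp 2>/dev/null | grep 54310                       # is anything listening on the port Hive is calling?
grep -A1 fs.default.name $HADOOP_HOME/conf/core-site.xml     # which filesystem URI Hadoop itself is configured with
hadoop dfsadmin -report                                      # fails with the same ConnectException if the NameNode is down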

4 Answers


I had this problem too, and the accepted answer did not help me, so I will add my solution here for others:

My problem was that I had a single machine with a pseudo-distributed setup and Hive installed on it. It was working fine with localhost as the hostname. However, when we decided to add multiple machines to the cluster, we also decided to give the machines proper names ("machine01", "machine02", etc.).

I changed all the Hadoop conf/*-site.xml files and the hive-site.xml file too, but still had the error. After exhaustive research I realized that Hive was picking up the namenode URIs not from the *-site.xml files, but from the metastore tables in MySQL. All the Hive table metadata is kept in two tables, DBS and SDS. After changing the DB_LOCATION_URI column in DBS and the LOCATION column in SDS to point to the latest namenode URI, I was back in business.
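For reference, the updates looked roughly like the statements below. This is a sketch, not the exact commands: the metastore database name (hive_metastore), the old and new URIs, and any MySQL credentials are placeholders to replace with your own values, and it is worth backing up the metastore first.

mysqldump hive_metastore > metastore_backup.sql   # back up the metastore before touching it
mysql hive_metastore -e "UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://localhost:54310', 'hdfs://machine01:54310');"
mysql hive_metastore -e "UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://localhost:54310', 'hdfs://machine01:54310');"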

Hope this helps others.




Possible reasons for this:

  1. If you changed your Hadoop/Hive version, you may be pointing at the previous Hadoop version (which has fs.default.name=hdfs://localhost:54310 in its core-site.xml) in your hive-0.9.0/conf/hive-env.sh file.
  2. $HADOOP_HOME may point to some other location.
  3. The specified version of Hadoop is not working.
  4. Your namenode may be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave (a few quick checks are sketched after this list).
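A few commands to check the points above (a sketch; hive-env.sh only exists if it was created from the template, and the paths assume $HADOOP_HOME and $HIVE_HOME are set):

echo $HADOOP_HOME                                 # does it point at the Hadoop version you expect?
$HADOOP_HOME/bin/hadoop version                   # confirms that installation actually runs
grep HADOOP_HOME $HIVE_HOME/conf/hive-env.sh      # which Hadoop Hive itself is told to use
bin/hadoop dfsadmin -safemode get                 # reports whether the namenode is in safe mode
bin/hadoop dfsadmin -safemode leave               # takes it out of safe mode if it is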

1 Comment

I am facing a similar issue. My daemons are running as expected and my namenode is not in safe mode, but I still face this issue.

In the case of a fresh installation, the above problem can be the effect of a namenode issue.

Try formatting the namenode using the command:

hadoop namenode -format

2 Comments

Formatting a namenode without a solid reason is asking for trouble. You can do that on your local machine any number of times, but not in a production environment.
As I set up my cluster freshly, this actually worked. Thanks.

1. Take your namenode out of safe mode. Try the command below:

hadoop dfsadmin -safemode leave

2. Restart your Hadoop daemons:

sudo service hadoop-master stop

sudo service hadoop-master start
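Note that the service commands above assume a packaged install that registered a hadoop-master service; with a plain tarball install (as in the question), a rough equivalent using the stock scripts would be (assuming $HADOOP_HOME points at the installation):

$HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh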

1 Comment

Explanation? This is the equivalent of "turn your machine off and on again".
