 

Hive 0.13 + HBase 0.96.2 + Hadoop 2.2.0 integration: a summary of problems encountered

[Author: Hadoop实战专家] [2013-09-02]
Summary: job submission failed with the exception 'java.io.FileNotFoundException (File does not exist: hdfs://*...)'. The jars the job uses have to be uploaded to the HDFS file system, whichever ones are needed, with the put command. After that the Hive and HBase integration works.


Questions this post addresses:
1. What is the hive.aux.jars.path parameter for?
2. How do you resolve "Job Submission failed with exception 'java.io.FileNotFoundException'"?

I recently spent some time on integrating Hive with HBase, using the latest release of each: Hive 0.13 and HBase 0.96.2, on Hadoop 2.2.0. The integration itself is quite simple; the points that need attention are roughly the following:

1. Properties to add to Hive's configuration file hive-site.xml. hive.aux.jars.path tells Hive which auxiliary jars (the HBase storage handler, the HBase client libraries, ZooKeeper, protobuf, guava) to put on its classpath and ship with the jobs it submits; hbase.zookeeper.quorum tells the handler where to find HBase's ZooKeeper ensemble:


<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/grid/hive/lib/hive-hbase-handler-0.13.0.jar,file:///home/grid/hive/lib/hbase-client-0.96.2-hadoop2.jar,file:///home/grid/hive/lib/hbase-common-0.96.2-hadoop2.jar,file:///home/grid/hive/lib/hbase-common-0.96.2-hadoop2-tests.jar,file:///home/grid/hive/lib/hbase-protocol-0.96.2-hadoop2.jar,file:///home/grid/hive/lib/hbase-server-0.96.2-hadoop2.jar,file:///home/grid/hive/lib/htrace-core-2.04.jar,file:///home/grid/hive/lib/zookeeper-3.4.6.jar,file:///home/grid/hive/lib/protobuf-java-2.5.0.jar,file:///home/grid/hive/lib/guava-11.0.2.jar</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>server1,server2</value>
</property>


2. The HBase installation process itself is not repeated here.
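
For reference, the integration is exercised through an HBase-backed Hive table. The following is a minimal sketch of such a table; the table name, the column family cf1 and the column mapping are illustrative and not taken from the original post:

-- A Hive table stored in HBase: the Hive columns key and value map onto the
-- HBase row key and the column cf1:val of the HBase table "hbase_test"
CREATE TABLE hbase_test (key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "hbase_test");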

Once the integration was in place, data could be queried from both the Hive side and the HBase side, but inserting data into HBase from Hive produced the following error:

java.io.FileNotFoundException: File does not exist: hdfs://*.*.*.*:9000/home/grid/hbase/lib/hbase-hadoop-compat-0.96.2-hadoop2.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://*.*.*.*:9000/home/grid/hbase/lib/hbase-hadoop-compat-0.96.2-hadoop2.jar)'
Execution failed with exit status: 1
Obtaining error information

Task failed!
Task ID:
Stage-0

Logs:

/tmp/root/hive.log
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The cause is that the jars the job uses need to exist on the HDFS file system at the path shown in the error, so upload whichever jar is reported missing, using the put command.
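
A sketch of the upload, using the local and HDFS paths that appear in the error above; the namenode address is masked in the original, so adjust the paths to your own cluster:

# create the directory on HDFS that the job submission expects
hdfs dfs -mkdir -p /home/grid/hbase/lib
# upload the jar reported missing; repeat for any other jar a later run complains about
hdfs dfs -put /home/grid/hbase/lib/hbase-hadoop-compat-0.96.2-hadoop2.jar /home/grid/hbase/lib/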

After uploading the missing jars, the Hive and HBase integration worked.
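
As a rough sanity check, reusing the hypothetical hbase_test table from the sketch above, the insert should now run as a MapReduce job and the rows should be visible from both sides:

-- from the Hive CLI: this launches the MapReduce job that previously failed
-- ("pokes" stands in for any existing Hive table with compatible columns)
INSERT OVERWRITE TABLE hbase_test SELECT foo, bar FROM pokes;
SELECT * FROM hbase_test;
-- and from the HBase shell: scan 'hbase_test'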
