When attempting a bulk load from M/R into a table with Snappy compression enabled, I get the following error:
ERROR mapreduce.LoadIncrementalHFiles: Unexpected execution exception during splitting
java.util.concurrent.ExecutionException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:335)
at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:234)
The table description is:
DESCRIPTION:
{NAME => 'matrix_com', FAMILIES => [{NAME => 't', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'SNAPPY', VERSIONS => '12', TTL => '1555200000', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
ENABLED: true
Since Hadoop has all the Snappy codecs installed, and HBase did not give any error when the table was created with Snappy, why do I get this error?
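For context, the UnsatisfiedLinkError means the JVM running LoadIncrementalHFiles could not load libhadoop, and with it the built-in native Snappy support; the region servers may well have the native libraries while the JVM driving the bulk load does not, which would explain why the table could be created with SNAPPY yet the split phase fails. A quick way to verify is a minimal sketch like the one below (the class name is mine; run it with the Hadoop jars on the classpath of the same JVM that performs the bulk load):

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeSnappyCheck {
    public static void main(String[] args) {
        // True only if libhadoop was found on java.library.path at JVM startup.
        System.out.println("native hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        // The same native call that fails in the stack trace above; without
        // libhadoop it throws UnsatisfiedLinkError instead of returning.
        System.out.println("snappy supported:     " + NativeCodeLoader.buildSupportsSnappy());
        // Where the JVM is actually looking for native libraries.
        System.out.println("java.library.path:    " + System.getProperty("java.library.path"));
    }
}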
Posted on 2014-05-29 00:54:19
It seems this is a bug that the Hadoop developers have just fixed. See the following link: https://issues.apache.org/jira/browse/MAPREDUCE-5799
https://stackoverflow.com/questions/19492347
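MAPREDUCE-5799 concerns the MapReduce ApplicationMaster starting without an LD_LIBRARY_PATH pointing at the Hadoop native libraries, so native Snappy fails to load with exactly this UnsatisfiedLinkError. Until you are on a release that includes the fix, a commonly suggested workaround is to set the environment properties explicitly. The sketch below assumes Hadoop 2.x property names; the native-library path is illustrative and must match the actual install location on the cluster nodes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BulkLoadConf {
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // Make libhadoop/libsnappy visible to the MR ApplicationMaster...
        conf.set("yarn.app.mapreduce.am.admin.user.env",
                 "LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native");
        // ...and to the map/reduce tasks that write the Snappy-compressed HFiles.
        conf.set("mapreduce.admin.user.env",
                 "LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native");
        return conf;
    }
}

Note that groupOrSplitPhase in the stack trace runs in the client JVM, so the JVM that invokes LoadIncrementalHFiles also needs the native libraries, for example via -Djava.library.path=/usr/lib/hadoop/lib/native or an exported LD_LIBRARY_PATH.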