I have set up a Hadoop cluster on two machines. One machine runs both the master and slave-1; the second machine runs slave-2. When I start the cluster with start-all.sh, I get the following error in the SecondaryNameNode's .out file:
java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "ip-10-179-185-169/10.179.185.169"
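A common cause of this error is that fs.defaultFS (or the client's URI) points at the NameNode's HTTP port (50070) instead of its RPC port (typically 8020 or 9000), so the client tries to parse an HTTP response as a length-prefixed RPC frame and reads an absurd length. A minimal Python sketch of that framing check (an illustration only, not Hadoop's actual IPC code; the 64 MB cap mirrors the default ipc.maximum.data.length):

```python
import struct

# Default ipc.maximum.data.length in Hadoop is 64 MB.
MAX_DATA_LENGTH = 64 * 1024 * 1024

def read_length_prefix(raw4: bytes) -> int:
    """Interpret the first 4 bytes of a response as a big-endian length,
    the way a length-prefixed RPC framing would, rejecting absurd values."""
    (length,) = struct.unpack(">I", raw4)
    if length > MAX_DATA_LENGTH:
        raise IOError(f"RPC response exceeds maximum data length: {length}")
    return length

# The first bytes of an HTTP reply ("HTTP...") decode to a huge bogus length,
# which is how pointing a client at the web UI port produces this exception.
```

Checking that fs.defaultFS in core-site.xml uses the RPC port, and that both machines agree on it, usually resolves this.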
I have set up a small Hadoop cluster with 3 machines:
Machine (Hadoop1) runs both the NameNode and the JobTracker.
Machine (Hadoop2) runs the SecondaryNameNode.
Machine (Hadoop3) runs the DataNode and the TaskTracker.
When I check the log files, everything looks normal. However, when I try to check the status of Hadoop2 by opening localhost:50090 on the SecondaryNameNode machine, it shows:
Unable to connect ....can't establish a connection to the server at
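To rule out firewall or bind-address problems independently of the browser, a plain TCP connect to port 50090 tells you whether anything is listening at all. A small Python sketch (the host and port are the ones from the question):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("localhost", 50090) run on the Hadoop2 machine
```

If this returns False on Hadoop2 itself, the SecondaryNameNode's web server is not listening (check the daemon logs and the secondary HTTP address property, dfs.secondary.http.address on Hadoop 1.x, dfs.namenode.secondary.http-address on 2.x); if it is True locally but fails from other machines, the port is bound to a loopback address or blocked by a firewall.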
I am new to Hadoop and am trying to set up a standalone Hadoop cluster on Windows. When starting the NameNode I get the error shown below. However, when I check Windows for a process using port 50070, I cannot find any such process.
Hadoop error:
20/04/18 08:32:24 ERROR namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: 0.0.0.0:50070
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer
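On Windows, `netstat -ano | findstr 50070` lists the PID of whatever holds the port, and a stale NameNode from an earlier run is a frequent culprit. The same bind test can be scripted in Python (a diagnostic sketch, not part of Hadoop):

```python
import socket

def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Try to bind host:port; a bind failure means some process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True
```

If nothing actually holds the port, the BindException can also come from the daemon binding the address twice or from a restricted address; as a workaround, the NameNode HTTP port can be moved via dfs.namenode.http-address in hdfs-site.xml.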
I have installed Hadoop 2.7.1 in pseudo-distributed mode. What are the IP addresses of the following daemons:
IP address of Namenode?
IP address of Datanode?
IP address of Resource Manager?
IP address of Node Manager?
The /etc/hosts file on my machine contains the following:
127.0.0.1 localhost
127.0.1.1 linuxPC
linuxPC is the name of my machine.
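In pseudo-distributed mode all four daemons run as separate JVMs on the same machine, so they share that machine's IP address; only the port each one listens on differs. Which IP that is depends on what the hostname resolves to: with the hosts file above, `linuxPC` resolves to 127.0.1.1 while `localhost` resolves to 127.0.0.1, and a daemon may bind to either. A small sketch that parses hosts-file entries the way a resolver consults them (illustration only):

```python
def parse_hosts(text: str) -> dict:
    """Map each hostname in /etc/hosts-style text to its IP address."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip
    return mapping

hosts = parse_hosts("127.0.0.1 localhost\n127.0.1.1 linuxPC\n")
# hosts["linuxPC"] -> "127.0.1.1"
```

Running `jps` lists the daemon processes, and `netstat -tlnp` shows which address and port each one actually bound.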
When starting the Hive metastore on an HDInsight cluster (Microsoft's Hadoop distribution), I get the following error:
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
at org.apache.thrift.transport.TSe
Scenario:
I am trying to import data from MS SQL Server into HDFS, but I am getting the following error:
Error:
hadoop@ubuntu:~/sqoop-1.1.0$ bin/sqoop import --connect 'jdbc:sqlserver://localhost;username=abcd;password=12345;database=HadoopTest' --table PersonInfo
11/12/09 18:08:15 ERROR sqoop.
I have deployed Spring Cloud Data Flow on a virtual YARN cluster. Starting the server with ./bin/dataflow-server-yarn executes correctly and returns:
2016-11-02 10:31:59.786 INFO 42493 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel'
When I run the following at the Hive command line:
hive > select count(*) from alogs;
the terminal shows the following:
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to lim
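The messages above are Hive explaining how it chose the single reducer and which settings override that choice. Roughly, Hive estimates one reducer per hive.exec.reducers.bytes.per.reducer of input, capped by hive.exec.reducers.max. A sketch of that estimate (defaults hedged: recent Hive uses 256 MB per reducer and a cap of 1009, while older releases used 1 GB and 999; this is an illustration, not Hive's actual planner code):

```python
import math

def estimated_reducers(input_bytes: int,
                       bytes_per_reducer: int = 256 * 1024 * 1024,
                       max_reducers: int = 1009) -> int:
    """One reducer per bytes_per_reducer of input, capped at max_reducers,
    and never fewer than one (a count(*) over a small table gets 1)."""
    return max(1, min(max_reducers, math.ceil(input_bytes / bytes_per_reducer)))
```

Setting mapreduce.job.reduces (mapred.reduce.tasks on older Hive) to a positive number bypasses the estimate entirely.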
I have added all the jars required for this project, but I cannot resolve this exception. Can anyone advise on this? Could you also tell me how to grant access to the Hive database? Thanks in advance.
java.lang.ClassNotFoundException: org.apache.hadoop.hive.jdbc.HiveDriver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.securit
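org.apache.hadoop.hive.jdbc.HiveDriver is the original HiveServer1 JDBC driver class; the HiveServer2 driver is org.apache.hive.jdbc.HiveDriver (no "hadoop" in the package). The ClassNotFoundException means no jar on the runtime classpath contains the class you named, so it is worth verifying which driver your hive-jdbc jar actually ships. Since jars are just zip files, a small Python sketch can check (a diagnostic helper, not part of Hive):

```python
import zipfile

def jar_contains_class(jar_path: str, class_name: str) -> bool:
    """Return True if the jar contains the given fully-qualified class."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# e.g. jar_contains_class("hive-jdbc-<version>.jar",
#                         "org.apache.hadoop.hive.jdbc.HiveDriver")
```

Granting access to the Hive database is a separate concern: it is configured through Hive's own authorization settings on the server side, not through the JDBC driver.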