
My datanode cannot start on Hadoop 2.2.0

Stack Overflow user
Asked on 2014-07-01 09:03:05
1 answer · 785 views · 0 followers · 0 votes

Hi everyone, I've run into a small problem while building a Hadoop cluster.

My nodes run CentOS 6.5, Java 1.7.0_60, and Hadoop 2.2.0.

I want to set up one master and three slaves.

I tried to build it that way.

But in the end, when I tried to start my namenode and datanodes:

My /etc/hosts looks like this:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.10.10.10 master
10.10.10.11 slave1
10.10.10.12 slave2
10.10.10.13 slave3

When I type:

$ hadoop namenode -format

java.net.UnknownHostException: hadoop01.hadoopcluster: hadoop01.hadoopcluster
    at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
    at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264)
    at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:914)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:550)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:144)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:837)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Caused by: java.net.UnknownHostException: hadoop01.hadoopcluster
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
    ... 8 more
14/07/01 16:50:46 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: hadoop01.hadoopcluster: hadoop01.hadoopcluster
    at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
    at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:287)
    at org.apache.hadoop.net.DNS.<clinit>(DNS.java:58)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:914)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:550)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:144)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:837)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Caused by: java.net.UnknownHostException: hadoop01.hadoopcluster
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
    ... 8 more
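(Editor's note, not from the original post: the exception above means the JVM cannot resolve the machine's own hostname, hadoop01.hadoopcluster, which is absent from the /etc/hosts file shown earlier. A common remedy is to map each node's fully qualified hostname there; the sketch below assumes hadoop01 through hadoop04 correspond to master and slave1 through slave3, as the log file names later in the question suggest:)

```
# /etc/hosts -- sketch; hostname-to-node mapping is an assumption
10.10.10.10 master hadoop01.hadoopcluster hadoop01
10.10.10.11 slave1 hadoop02.hadoopcluster hadoop02
10.10.10.12 slave2 hadoop03.hadoopcluster hadoop03
10.10.10.13 slave3 hadoop04.hadoopcluster hadoop04
```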

Then I tried to run start-dfs.sh and start-yarn.sh:

$ start-dfs.sh
14/07/01 16:55:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: namenode running as process 2395. Stop it first.
slave2: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hadoop03.hadopcluster.out
slave1: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hadoop02.hadopcluster.out
slave3: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hadoop04.hadopcluster.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 2564. Stop it first.
14/07/01 16:55:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-hadoop01.hadoopcluster.out
slave1: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-hadoop02.hadopcluster.out
slave3: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-hadoop04.hadopcluster.out
slave2: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-hadoop03.hadopcluster.out

And typed jps:

$ jps
2564 SecondaryNameNode
5591 Jps
2395 NameNode

That's all I get: no DataNode, no NodeManager, no ResourceManager, and so on. Did I set something up wrong? Could anyone give me some advice? Thanks!


1 Answer

Stack Overflow user

Posted on 2014-07-01 12:30:22

DHCP (Dynamic Host Configuration Protocol)

It is a server deployed on an IP network that assigns IP addresses to its clients. So you have to configure DHCP on both the server and the clients.

On the server side:

  • Get the package: isc-dhcp-server
  • Edit /etc/network/interfaces to configure the interfaces and pick a static IP address for the DHCP server
  • Specify the local network interfaces DHCP should listen on (/etc/default/isc-dhcp-server)
  • Edit the configuration file /etc/dhcp/dhcpd.conf to choose the IP addresses the local network will use, and define the DNS servers.
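To illustrate that last step, a minimal /etc/dhcp/dhcpd.conf for the 10.10.10.0/24 network from the question might look like this (the address range, router, and DNS values below are assumptions, not from the answer):

```
# /etc/dhcp/dhcpd.conf -- minimal sketch; values are assumptions
subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.200;        # pool handed out to clients
  option routers 10.10.10.1;              # assumed gateway
  option domain-name-servers 10.10.10.10; # assumed local DNS server
  option domain-name "hadoopcluster";
}
```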

On the client side:

  • Edit /etc/network/interfaces to configure the interfaces
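On a Debian-style client, that /etc/network/interfaces entry could be as simple as the following sketch (the interface name eth0 is an assumption):

```
# /etc/network/interfaces -- DHCP client sketch
auto eth0
iface eth0 inet dhcp
```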

To make sure everything is installed and configured properly, you can use the ifconfig and ping commands.

DNS (Domain Name System)

  • Install bind9
  • Edit /etc/bind/named.conf to add your hosts
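As a rough sketch of what "adding hosts" in BIND involves: declare a zone in the named.conf configuration and list A records in its zone file (the zone name, file path, and addresses below are assumptions based on the question's /etc/hosts):

```
// /etc/bind/named.conf.local -- declare the zone
zone "hadoopcluster" {
    type master;
    file "/etc/bind/db.hadoopcluster";
};
```

and in the zone file:

```
; /etc/bind/db.hadoopcluster -- A records for the cluster nodes
$TTL 604800
@        IN SOA  master.hadoopcluster. admin.hadoopcluster. (
                 1 604800 86400 2419200 604800 )
@        IN NS   master.hadoopcluster.
master   IN A    10.10.10.10
slave1   IN A    10.10.10.11
slave2   IN A    10.10.10.12
slave3   IN A    10.10.10.13
```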

Good luck.

0 votes
The original page content is provided by Stack Overflow; translation support was provided by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/24506332
