How to fix "ssh: connect to host master port 22: No route to host"

Wu Yudong    March 13, 2016    Hadoop

I had not started my small virtual machine cluster for quite a while. Today I ran the following command on the master host:

$ sbin/start-dfs.sh

I expected it to go smoothly, but instead it threw an unexpected error:

Starting namenodes on [master]
master: ssh: connect to host master port 22: No route to host
slave1: ssh: connect to host slave1 port 22: No route to host
Starting secondary namenodes [master]
master: ssh: connect to host master port 22: No route to host
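
start-dfs.sh starts the daemons by connecting to each listed host over SSH (in Hadoop 2.x the worker list lives in etc/hadoop/slaves), so this message simply means SSH cannot reach those machines. The same failure can be reproduced without Hadoop at all; the hostnames below are the ones from this cluster:

$ ssh master date
ssh: connect to host master port 22: No route to host
$ ssh slave1 date
ssh: connect to host slave1 port 22: No route to host

When a bare ssh command fails like this, the cause is networking or name resolution rather than the Hadoop scripts.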

This looked like a network connectivity problem, so I ran:

$ ifconfig

It printed the following network configuration:
eth0 Link encap:Ethernet HWaddr 00:0c:29:30:a1:fd
inet addr:192.168.111.138 Bcast:192.168.111.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe30:a1fd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18914 errors:0 dropped:0 overruns:0 frame:0
TX packets:17702 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4973739 (4.9 MB) TX bytes:3034201 (3.0 MB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:15126 errors:0 dropped:0 overruns:0 frame:0
TX packets:15126 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:916437 (916.4 KB) TX bytes:916437 (916.4 KB)

First I pinged the IP address itself; everything was normal:

$ ping 192.168.111.138
PING 192.168.111.138 (192.168.111.138) 56(84) bytes of data.
64 bytes from 192.168.111.138: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 192.168.111.138: icmp_seq=2 ttl=64 time=0.076 ms
……

Then I pinged the hostname, and the problem showed up:

$ ping master
PING master (192.168.111.131) 56(84) bytes of data.
From 192.168.111.138 icmp_seq=1 Destination Host Unreachable
From 192.168.111.138 icmp_seq=2 Destination Host Unreachable
From 192.168.111.138 icmp_seq=3 Destination Host Unreachable
……

That pointed to a stale mapping in the hosts file. Opening /etc/hosts confirmed it:

127.0.0.1 localhost
192.168.111.131 master
192.168.111.135 slave1

These entries had always worked before, and the IPs were not misconfigured at the time. But the master's IP had now become 192.168.111.138, and slave1's IP had become 192.168.111.139.

Could it be that when the physical host's IP address changes, the IPs of the VMware virtual machines change along with it?
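
Either way, the immediate fix for name resolution is to point the hostnames at the addresses the VMs actually have now. A corrected /etc/hosts, identical on master and slave1 (assuming the new addresses stay put), would look like:

127.0.0.1 localhost
192.168.111.138 master
192.168.111.139 slave1

Giving the guests static IPs, or DHCP reservations in VMware's virtual network settings, avoids having to repeat this whenever the addresses change.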

After updating /etc/hosts on both machines to the new addresses and starting HDFS again, I checked the cluster status:

$ bin/hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
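
A report with zero configured capacity means the NameNode answered but no DataNode has registered with it. A couple of quick checks on slave1 narrow this down (the log path below is an assumption based on a default install; adjust it to your layout):

# is the DataNode process running at all?
$ jps
# if it is, its log usually says why it cannot register (stale state, wrong address, ...)
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log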

Solution:

Just delete the tmp folder under the hadoop directory on slave1 and restart HDFS (commands sketched below).
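
In command form, assuming the usual layout where hadoop.tmp.dir points at a tmp directory inside the Hadoop folder (the actual path depends on your configuration):

# on slave1: wipe the DataNode's local state
$ rm -rf /path/to/hadoop/tmp    # substitute the directory that hadoop.tmp.dir points to
# back on master: restart HDFS
$ sbin/stop-dfs.sh
$ sbin/start-dfs.sh

Deleting tmp discards the DataNode's stored state, so on the next start it registers with the NameNode from scratch. After the restart, the report came back normal: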

$ bin/hdfs dfsadmin -report
Configured Capacity: 20474130432 (19.07 GB)
Present Capacity: 13815742464 (12.87 GB)
DFS Remaining: 13815717888 (12.87 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.111.139:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 20474130432 (19.07 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6658387968 (6.20 GB)
DFS Remaining: 13815717888 (12.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 67.48%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Mar 13 06:24:08 PDT 2016
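
As a final sanity check that HDFS is actually usable again (the directory name is just an example):

$ bin/hdfs dfs -mkdir -p /tmp/test
$ bin/hdfs dfs -ls /tmp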
