#title HDFS
[[TableOfContents]]

==== hadoop fs ====
Writing to HDFS from standard input (via a pipe, |):

{{{
cat test.txt | hadoop fs -put - /tmp
}}}

(A named-file variant with a read-back check is sketched at the end of this page.)

==== hadoop dfsadmin ====
Check whether each node is up and working properly:

{{{
hadoop dfsadmin -report
}}}

{{{
huser@nameNode:~$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 80319594496 (74.8 GB)
Present Capacity: 63944634368 (59.55 GB)
DFS Remaining: 63753732096 (59.38 GB)
DFS Used: 190902272 (182.06 MB)
DFS Used%: 0.3%
Under replicated blocks: 40
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 4 (4 total, 0 dead)

Name: 192.168.136.5:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 47538176 (45.34 MB)
Non DFS Used: 4093796352 (3.81 GB)
DFS Remaining: 15938564096(14.84 GB)
DFS Used%: 0.24%
DFS Remaining%: 79.38%
Last contact: Wed Nov 28 09:01:22 KST 2012

Name: 192.168.136.4:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 47890432 (45.67 MB)
Non DFS Used: 4093718528 (3.81 GB)
DFS Remaining: 15938289664(14.84 GB)
DFS Used%: 0.24%
DFS Remaining%: 79.37%
Last contact: Wed Nov 28 09:01:21 KST 2012

Name: 192.168.136.6:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 47534080 (45.33 MB)
Non DFS Used: 4093710336 (3.81 GB)
DFS Remaining: 15938654208(14.84 GB)
DFS Used%: 0.24%
DFS Remaining%: 79.38%
Last contact: Wed Nov 28 09:01:21 KST 2012

Name: 192.168.136.7:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 47939584 (45.72 MB)
Non DFS Used: 4093734912 (3.81 GB)
DFS Remaining: 15938224128(14.84 GB)
DFS Used%: 0.24%
DFS Remaining%: 79.37%
Last contact: Wed Nov 28 09:01:22 KST 2012

huser@nameNode:~$
}}}

(A one-line health filter over this report is sketched at the end of this page.)

==== Requested data length 80848523 is longer than maximum configured RPC length 67108864 ====
Each datanode must hand its block report over to the namenode, and this error occurs when that report grows larger than the configured RPC limit. In this cluster, three datanodes have iSCSI storage attached for archiving under an HDFS storage policy, so those three datanodes have become bloated.

{{{
Socket Reader #1 for port 8022: readAndProcess from client 10.1.113.126 threw exception [java.io.IOException: Requested data length 80848523 is longer than maximum configured RPC length 67108864. RPC came from 10.1.113.126]
java.io.IOException: Requested data length 80848523 is longer than maximum configured RPC length 67108864. RPC came from 10.1.113.126
        at org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:896)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:752)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:723)
}}}

Increase the following two values (a core-site.xml sketch is at the end of this page):
* ipc.maximum.data.length
* ipc.maximum.response.length

-- Reference: https://velog.io/@km1031kim/ipc.maximum.data.length-ipc.maximum.response.length

==== References ====
* [http://hadoop.apache.org/docs/r0.19.0/hdfs_shell.html Hadoop FS Shell Guide]
* [http://hadoop.apache.org/docs/r0.20.0/hdfs_shell.html HDFS File System Shell Guide]
* [http://hadoop.apache.org/docs/r0.19.0/commands_manual.html Hadoop Command Guide]
* [https://it-sunny-333.tistory.com/99 When the NameNode enters SafeMode]
* [https://likebnb.tistory.com/162 HDFS startup process]
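The hadoop fs example above targets the /tmp directory. A minimal sketch with an explicit target file name and a read-back check; the path /tmp/test.txt is an assumed example:

{{{
# "-" tells -put to read from standard input; /tmp/test.txt is an assumed target path.
cat test.txt | hadoop fs -put - /tmp/test.txt

# Read the file back to confirm the upload.
hadoop fs -cat /tmp/test.txt
}}}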
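For the hadoop dfsadmin section above, a quick filter over the same report when you only want a health summary (a sketch; the patterns match the sample output shown earlier):

{{{
# Prints the dead-node summary line plus each datanode's last heartbeat time.
hadoop dfsadmin -report | grep -E 'Datanodes available|Last contact'
}}}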
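For the RPC-length error above, a minimal configuration sketch for core-site.xml on the NameNode. The 134217728 (128 MB) values are assumptions, chosen only to exceed the 80848523-byte report in the log; size them above your own largest block report and restart the NameNode afterwards:

{{{
<!-- core-site.xml; both values below are assumed examples (128 MB). -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
<property>
  <name>ipc.maximum.response.length</name>
  <value>134217728</value>
</property>
}}}

You can print the value the current configuration resolves to with {{{hdfs getconf -confKey ipc.maximum.data.length}}}.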