Hadoop namenode metadata

Tag: hadoop Author: lvshengbiao Date: 2011-05-22

I am a bit confused by the Hadoop architecture.

  1. What kind of file metadata is stored in the Hadoop Namenode? The Hadoop wiki says the Namenode stores the entire system namespace. Is information like last modified time, creation time, file size, owner, and permissions stored in the Namenode?

  2. Does datanode store any metadata information?

  3. There is only one Namenode; can the metadata exceed that server's memory limit?

  4. If a user wants to download a file from Hadoop, does he have to download it from the Namenode? I found the architecture picture below on the web; it shows a client writing data directly to a datanode. Is that true?


Best Answer

I think the following explanation can help you better understand the HDFS architecture. You can think of the Namenode as a FAT (file allocation table) plus directory data, and of the datanodes as dumb block devices. When you want to read a file from a regular file system, you go to the directory, then to the FAT, get the locations of all relevant blocks, and read them. The same happens with HDFS: when you want to read a file, you go to the Namenode and get the list of blocks the file consists of. This block information includes the list of datanodes on which each block sits. You then go to those datanodes and fetch the relevant blocks from them.
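A minimal toy sketch of that lookup, using plain in-memory dicts as stand-ins for the Namenode's block map and the datanodes' block storage (every name here is made up for illustration, not a real HDFS API):

```python
# Toy model of the HDFS read path: the "namenode" only maps file names
# to block IDs and their datanode locations; the block bytes themselves
# live on the "datanodes". All structures are illustrative stand-ins.

# Namenode: filename -> ordered list of (block_id, [datanode, ...])
namenode = {
    "/logs/app.log": [("blk_1", ["dn1", "dn2"]), ("blk_2", ["dn2", "dn3"])],
}

# Datanodes: block_id -> raw block contents (replicas on several nodes)
datanodes = {
    "dn1": {"blk_1": b"first block "},
    "dn2": {"blk_1": b"first block ", "blk_2": b"second block"},
    "dn3": {"blk_2": b"second block"},
}

def read_file(path):
    """Step 1: ask the namenode for block locations;
    step 2: fetch each block from one of its datanodes."""
    data = b""
    for block_id, locations in namenode[path]:
        data += datanodes[locations[0]][block_id]
    return data

print(read_file("/logs/app.log"))  # b'first block second block'
```

Note that the Namenode never sees the file contents here; it only hands out locations.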

Other Answer1

  1. yes
  2. no, apart from the blocks themselves
  3. yes, if you have many small files
  4. no, the info about the file is on the Namenode, the file itself is on Datanodes (a datanode could in theory be on the same machine, and often is on smaller clusters)

Other Answer2

  1. The fsimage on the name node is in a binary format. Use the "Offline Image Viewer" to dump the fsimage in a human-readable format. The output of this tool can be further analyzed with pig or some other tool to get more meaningful data.
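As a sketch, dumping the fsimage with the Offline Image Viewer looks roughly like this (the checkpoint filename is a placeholder; exact flags vary by Hadoop version):

```shell
# Dump a namespace checkpoint to XML for offline analysis.
# Use whatever fsimage_* file exists in your Namenode's metadata directory.
hdfs oiv -p XML -i fsimage_0000000000000000042 -o fsimage.xml
```

The resulting XML can then be fed to Pig or any other tool, as the answer suggests.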


Other Answer3

3) When the number of files is very large, a single Namenode will not be able to keep all the metadata in memory. In fact, that is one of the limitations of HDFS. Look into HDFS Federation, which aims to address this problem by splitting the namespace into several namespaces served by different Namenodes.
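A rough back-of-the-envelope calculation shows why this limit exists. A commonly cited rule of thumb is that each namespace object (file, directory, or block) costs on the order of 150 bytes of Namenode heap; assuming that figure and one block per small file:

```python
# Rough Namenode heap estimate, assuming the common ~150 bytes/object
# rule of thumb (actual numbers vary by Hadoop version and JVM).
BYTES_PER_OBJECT = 150

def heap_estimate_gb(num_files, blocks_per_file=1):
    # Each file contributes one file object plus its block objects.
    objects = num_files * (1 + blocks_per_file)
    return objects * BYTES_PER_OBJECT / 1024**3

# 100 million small files -> roughly 28 GB of heap just for metadata
print(round(heap_estimate_gb(100_000_000), 1))
```

This is why "many small files" is the problematic case: the heap cost scales with the number of objects, not the number of bytes stored.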


Read process:
a) The client first asks the Namenode which datanodes hold the actual data
b) It then contacts those datanodes directly to read the data

Write process:
a) The client asks the Namenode for a set of datanodes to write to, and if they are available the Namenode returns them
b) The client then goes directly to those datanodes and writes the data
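The write steps above can be sketched with the same kind of toy model: in-memory dicts standing in for the Namenode and datanodes (every name here is illustrative, not a real HDFS API):

```python
# Toy model of the HDFS write path: the client asks the "namenode" for
# target datanodes, then ships the block bytes to those datanodes itself.
# All structures are illustrative stand-ins.

namenode = {}                      # filename -> [(block_id, [datanode, ...])]
datanodes = {"dn1": {}, "dn2": {}, "dn3": {}}
REPLICATION = 2

def write_file(path, data, block_size=8):
    blocks = []
    for i in range(0, len(data), block_size):
        block_id = f"blk_{path}_{i}"
        # Step a) the namenode picks the least-loaded datanodes for this block.
        targets = sorted(datanodes, key=lambda d: len(datanodes[d]))[:REPLICATION]
        # Step b) the client writes the block directly to each target datanode.
        for dn in targets:
            datanodes[dn][block_id] = data[i:i + block_size]
        blocks.append((block_id, targets))
    namenode[path] = blocks        # the namenode records metadata only

write_file("/tmp/hello.txt", b"hello hdfs world")
print(namenode["/tmp/hello.txt"])
```

As in the read case, the file bytes never pass through the Namenode; it only records which blocks exist and where their replicas live.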