HDFS basic functions
Basic functions
HDFS provides the basic operations of a file system.
Data is stored in units of file blocks, spread across the disks of different machines; the default block size is 128 MB, and each block is kept as 3 replicas.
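As a minimal sketch of the splitting rule above (the 128 MB block size and 3-way replication are the HDFS defaults; the 300 MB example file is made up for illustration):

```python
# Sketch of how HDFS splits a file into blocks.
BLOCK_SIZE = 128 * 1024 * 1024  # default block size: 128 MB
REPLICATION = 3                 # default replication factor

def plan_blocks(file_size: int, block_size: int = BLOCK_SIZE):
    """Return (block_count, size_of_last_block) for a file of file_size bytes."""
    if file_size == 0:
        return 0, 0
    full, rest = divmod(file_size, block_size)
    if rest == 0:
        return full, block_size
    return full + 1, rest

# A 300 MB file needs 3 blocks (128 MB + 128 MB + 44 MB), and with
# 3 replicas it occupies 900 MB of raw cluster capacity.
blocks, last = plan_blocks(300 * 1024 * 1024)
print(blocks, last // (1024 * 1024))  # → 3 44
```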
NameNode (master node): maintains the virtual directory tree, manages the DataNodes (assisted by a Secondary NameNode), schedules storage resources, and interacts with clients.
DataNodes (slave nodes, usually many): store the actual data blocks. Each DataNode registers with the NameNode at startup, so the master node knows about it and can call on it later.
(Basic prerequisites for the cluster nodes Linux01, Linux02, ...: IP addresses, hostname resolution, and passwordless SSH must be configured between the nodes, because the nodes communicate with each other.)
Linux01: NameNode, DataNode
Linux02: DataNode
Linux03: DataNode
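As a minimal sketch of the hostname-resolution prerequisite, every node's `/etc/hosts` could map the cluster hostnames to their IPs (the 192.168.1.x addresses below are assumptions for illustration, not from the original setup):

```
192.168.1.101  Linux01   # NameNode + DataNode
192.168.1.102  Linux02   # DataNode
192.168.1.103  Linux03   # DataNode
```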
Basic client commands
- Location: the client lives under `bin/` of the Hadoop installation; commands are issued through `hdfs dfs`.
- Upload: `hdfs dfs -put <local path> <HDFS path>` (local path first, HDFS path second)
- List a directory: `hdfs dfs -ls /` (viewing files in other directories works the same way)
- Create a folder: `hdfs dfs -mkdir /data` (creates the folder under the root directory)
- View file contents: `hdfs dfs -cat <HDFS file path>`
- Download from HDFS: `hdfs dfs -get /data/1.txt ./` (HDFS path first, local path second)
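To make the listing output concrete, here is a small Python sketch that parses one line of `hdfs dfs -ls` output into its eight columns (permissions, replication, owner, group, size, date, time, path). The sample line itself is made up for illustration:

```python
# Parse one line of `hdfs dfs -ls` output into its fields.
def parse_ls_line(line: str) -> dict:
    perms, repl, owner, group, size, date, time, path = line.split(None, 7)
    return {
        "permissions": perms,
        "replication": None if repl == "-" else int(repl),  # "-" for directories
        "owner": owner,
        "group": group,
        "size": int(size),          # bytes
        "modified": f"{date} {time}",
        "path": path,
    }

# Hypothetical sample line, shaped like real `hdfs dfs -ls` output.
sample = "-rw-r--r--   3 root supergroup       1366 2021-06-01 10:30 /data/1.txt"
info = parse_ls_line(sample)
print(info["replication"], info["size"], info["path"])  # → 3 1366 /data/1.txt
```

The second column is the replication factor for files, which is why uploading with the defaults shows a `3` there.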