- upload the file to the Linux machine, then put it into HDFS
hadoop fs -put 1.csv /
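- (optional) a quick sanity check that the file landed in HDFS, assuming it was put to the HDFS root as above
hadoop fs -ls /1.csv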
- enter the HBase shell (hbase shell) and create the HBase table with one column family, MM
create 'amazon_key_word','MM'
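- (optional) confirm in the HBase shell that the table exists and has the MM column family
describe 'amazon_key_word'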
- import the data with the ImportTsv MapReduce job
format: hbase [ImportTsv class] -Dimporttsv.separator=[separator] -Dimporttsv.columns=[row key and column mappings] [table] [input file in HDFS]
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator="," -Dimporttsv.columns=HBASE_ROW_KEY,MM:dept,MM:search_word,MM:Search_ranking,MM:asin,MM:name,MM:click_v,MM:conversion,MM:asin2,MM:name2,MM:click_v2,MM:conversion2,MM:asin3,MM:name3,MM:click_v3,MM:conversion3 amazon_key_word /1.csv
Note that the first column of each CSV row is used as the HBase row key (mapped to HBASE_ROW_KEY in the column list above)
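- (optional) after the ImportTsv job finishes, a quick check in the HBase shell that rows were written; LIMIT keeps the scan from reading the whole table
scan 'amazon_key_word', {LIMIT => 1}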