Syncing MySQL data to Hive
1. Download Sqoop
Download the appropriate version from http://archive.cloudera.com/cdh/3/, e.g. sqoop-1.2.0-CDH3B4.tar.gz.
2. Download Hadoop
Same location, http://archive.cloudera.com/cdh/3/; a matching version is hadoop-0.20.2-CDH3B4.tar.gz.
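For example, assuming wget is available and the tarballs sit directly under the archive path above, both downloads look like:
wget http://archive.cloudera.com/cdh/3/sqoop-1.2.0-CDH3B4.tar.gz
wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-CDH3B4.tar.gz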
3. Extract sqoop-1.2.0-CDH3B4.tar.gz and hadoop-0.20.2-CDH3B4.tar.gz into a directory such as /home/hadoop/. The extracted directories are:
A: /home/hadoop/sqoop-1.2.0-CDH3B4
B: /home/hadoop/hadoop-0.20.2-CDH3B4
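The extraction itself:
cd /home/hadoop
tar -zxvf sqoop-1.2.0-CDH3B4.tar.gz
tar -zxvf hadoop-0.20.2-CDH3B4.tar.gz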
4. Copy hadoop-core-0.20.2-CDH3B4.jar from B into the lib directory of Sqoop (A).
5. Sqoop depends on mysql-connector-java-*.jar at runtime when importing MySQL data, so download mysql-connector-java-*.jar and place it in the lib directory of Sqoop (A) as well.
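Both copies in one go, assuming the directory layout from step 3 and that the MySQL connector jar has already been downloaded to the current directory:
cp /home/hadoop/hadoop-0.20.2-CDH3B4/hadoop-core-0.20.2-CDH3B4.jar /home/hadoop/sqoop-1.2.0-CDH3B4/lib/
cp mysql-connector-java-*.jar /home/hadoop/sqoop-1.2.0-CDH3B4/lib/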
6. Modify configure-sqoop
Comment out the HBase and ZooKeeper checks:
#if [ ! -d "${HBASE_HOME}" ]; then
#  echo "Error: $HBASE_HOME does not exist!"
#  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
#  exit 1
#fi
#if [ ! -d "${ZOOKEEPER_HOME}" ]; then
#  echo "Error: $ZOOKEEPER_HOME does not exist!"
#  echo 'Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.'
#  exit 1
#fi
7. Run:
Set access privileges for the tables:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hadoop_test'@'%' WITH GRANT OPTION;
This grants the MySQL user hadoop_test access from any IP. To allow only a specific IP instead, replace % with that IP, e.g.:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hadoop_test'@'10.6.42.101' WITH GRANT OPTION;
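If the hadoop_test user does not exist yet, the MySQL versions of that era could create it and set its password in the same GRANT statement (assuming the password 123456 used by the Sqoop commands below):
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hadoop_test'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;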
List all MySQL tables:
./sqoop list-tables --connect jdbc:mysql://10.6.42.101:3306/test --username hadoop_test --password 123456
Import a MySQL table into Hive:
./sqoop import --connect jdbc:mysql://10.6.42.101:3306/test --username hadoop_test --password 123456 --table mytest --hive-import
The import requires the table to have a primary key. Also, do not use 127.0.0.1 as the host, because the map tasks may run on any node in the cluster and the connection will not necessarily come from the local machine.
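Sqoop 1.x also accepts a few flags that are often useful here; an illustrative variant (the Hive table name mytest_hive is an arbitrary choice, not from the original setup):
./sqoop import --connect jdbc:mysql://10.6.42.101:3306/test --username hadoop_test --password 123456 --table mytest --hive-import --hive-table mytest_hive -m 1
--hive-table sets the name of the target Hive table, and -m 1 runs a single mapper, which also sidesteps the primary-key requirement since no split column is needed.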
If a previous run failed, running the import again produces this error:
ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory xxx already exists
Run $HADOOP_HOME/bin/hadoop fs -rmr xxx to delete the stale output directory, then retry.
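By default the leftover directory is named after the imported table and lives under the HDFS home directory of the user who ran the job, so for the example above the cleanup would presumably be:
$HADOOP_HOME/bin/hadoop fs -rmr mytest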
8. Verify:
bin/hive
show tables;
The imported table now appears in the list.
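A quick sanity check on the data itself (assuming the table was imported under its MySQL name, mytest):
hive> describe mytest;
hive> select * from mytest limit 10;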
9. Notes:
Sqoop runs the dump as parallel MapReduce tasks, so it is faster than a plain mysqldump followed by a manual load.
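The degree of parallelism is controlled by the mapper count; a sketch (the split column id is hypothetical):
./sqoop import --connect jdbc:mysql://10.6.42.101:3306/test --username hadoop_test --password 123456 --table mytest --hive-import -m 4 --split-by id
-m 4 runs four map tasks in parallel, each pulling a range of rows partitioned on the id column.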