Learning Hive through a concrete example

Tags: example, learning, hive | Posted: 2014-06-09 01:27 | Author: crazyzhb2012
Source: http://blog.csdn.net
   (Material adapted from the official Hive tutorial)

Simple Example Use Cases

MovieLens User Ratings

First, create a table with tab-delimited text file format:

CREATE TABLE u_data (
  userid INT,
  movieid INT,
  rating INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
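
Before loading anything, you can confirm the table definition with DESCRIBE:

DESCRIBE u_data;

This should list the four columns declared above: userid, movieid, and rating as int, and unixtime as string.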

Then, download the data files from MovieLens 100k on the GroupLens datasets page (which also has a README.txt file and an index of the unzipped files):

wget http://files.grouplens.org/datasets/movielens/ml-100k.zip

or:

curl --remote-name http://files.grouplens.org/datasets/movielens/ml-100k.zip

Note: If the link to the GroupLens datasets does not work, please report it on HIVE-5341 or send a message to the Hive user mailing list.

Unzip the data files:

unzip ml-100k.zip

And load u.data into the table that was just created (replace <path> with the directory where you unzipped the archive, for example the ml-100k directory):

LOAD DATA LOCAL INPATH '<path>/u.data'
OVERWRITE INTO TABLE u_data;
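
As a quick sanity check that the load succeeded, pull a few rows back out (a minimal sketch; Hive returns the rows in no guaranteed order):

SELECT * FROM u_data LIMIT 5;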

Count the number of rows in table u_data:

SELECT COUNT(*) FROM u_data;

The ml-100k dataset contains 100,000 ratings, so the count should come back as 100000. Note that for older versions of Hive which don't include HIVE-287, you'll need to use COUNT(1) in place of COUNT(*).

Now we can do some complex data analysis on the table u_data:

Create weekday_mapper.py:

import sys
import datetime

# Read one tab-separated u_data record per line from standard input,
# convert the unix timestamp to an ISO weekday (1 = Monday ... 7 = Sunday),
# and emit the record with the timestamp replaced by the weekday.
# Note: this is Python 2 syntax (print statement), matching the
# interpreter invoked by the TRANSFORM clause below.
for line in sys.stdin:
  line = line.strip()
  userid, movieid, rating, unixtime = line.split('\t')
  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
  print '\t'.join([userid, movieid, rating, str(weekday)])

Use the mapper script:

CREATE TABLE u_data_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

add FILE weekday_mapper.py;
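
If you want to verify that the script was registered in the current session, the CLI's LIST FILES command shows all added file resources:

list FILES;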

INSERT OVERWRITE TABLE u_data_new
SELECT
  TRANSFORM (userid, movieid, rating, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM u_data;
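
If the INSERT fails or produces unexpected values, it can help to first test the transform on a handful of rows without writing anything (a minimal sketch; it assumes the add FILE above has been run in the same session):

SELECT
  TRANSFORM (userid, movieid, rating, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM u_data
LIMIT 5;

With u_data_new populated, the weekday distribution is a simple aggregation: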

SELECT weekday, COUNT(*)
FROM u_data_new
GROUP BY weekday;

Note that if you're using Hive 0.5.0 or earlier you will need to use COUNT(1) in place of COUNT(*).
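
GROUP BY output order is not guaranteed, so if you want the weekdays listed in order, a small variation of the query adds an ORDER BY:

SELECT weekday, COUNT(*) AS cnt
FROM u_data_new
GROUP BY weekday
ORDER BY weekday;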



I put weekday_mapper.py and the Hive script hive-mapper.q under /home/administrator/bigData_software; hive-mapper.q simply collects the statements shown above into one file (the CREATE TABLE, the add FILE, the INSERT with TRANSFORM, and the final GROUP BY).

Then run the whole thing from the command line with hive -f hive-mapper.q. The run log:


Logging initialized using configuration in jar:file:/home/administrator/bigData_software/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
OK
Time taken: 14.88 seconds
Added resource: weekday_mapper.py
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201406081532_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201406081532_0003
Kill Command = /home/administrator/hadoop-0.20.2/bin/../bin/hadoop job -kill job_201406081532_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-06-08 17:11:24,585 Stage-1 map = 0%, reduce = 0%
2014-06-08 17:11:36,767 Stage-1 map = 7%, reduce = 0%
2014-06-08 17:11:38,789 Stage-1 map = 18%, reduce = 0%
2014-06-08 17:11:41,831 Stage-1 map = 30%, reduce = 0%
2014-06-08 17:11:44,871 Stage-1 map = 54%, reduce = 0%
2014-06-08 17:11:47,941 Stage-1 map = 100%, reduce = 0%
2014-06-08 17:11:54,053 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201406081532_0003
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://localhost:9000/tmp/hive-administrator/hive_2014-06-08_17-11-07_182_3664785529616823633-1/-ext-10000
Loading data to table default.u_data_new
[Warning] could not update stats.
MapReduce Jobs Launched: 
Job 0: Map: 1 HDFS Read: 1979173 HDFS Write: 1179173 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 47.904 seconds
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201406081532_0004, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201406081532_0004
Kill Command = /home/administrator/hadoop-0.20.2/bin/../bin/hadoop job -kill job_201406081532_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-06-08 17:12:09,719 Stage-1 map = 0%, reduce = 0%
2014-06-08 17:12:18,831 Stage-1 map = 100%, reduce = 0%
2014-06-08 17:12:30,038 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201406081532_0004
MapReduce Jobs Launched: 
Job 0: Map: 1 Reduce: 1 HDFS Read: 1179173 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1 12254
2 13579
3 14430
4 15114
5 14743
6 18229
7 11651
Time taken: 41.398 seconds, Fetched: 7 row(s)

It worked. As a cross-check, the seven weekday counts sum to exactly 100,000, matching the row count of u_data. Hive really does save the time of hand-writing MapReduce jobs, which is a nice win.

