[TOC]

# Case 1

Hive's TRANSFORM keyword lets you call your own script from within SQL. It is a good fit when you need functionality Hive does not provide and you do not want to write a UDF.

Example 1: the SQL below uses weekday_mapper.py to process the data.

~~~
CREATE TABLE u_data_new (
  movieid INT,
  rating INT,
  weekday INT,
  userid INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
~~~

~~~
add FILE weekday_mapper.py;
~~~

~~~
INSERT OVERWRITE TABLE u_data_new
SELECT
  TRANSFORM (movieid, rate, timestring, uid)
  USING 'python weekday_mapper.py'
  AS (movieid, rating, weekday, userid)
FROM t_rating;
~~~

The content of weekday_mapper.py is as follows:

~~~
#!/usr/bin/env python
import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    # Input columns arrive tab-separated: movieid, rating, unixtime, userid
    movieid, rating, unixtime, userid = line.split('\t')
    # isoweekday(): Monday = 1 ... Sunday = 7
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print('\t'.join([movieid, rating, str(weekday), userid]))
~~~

# Case 2

rating.json

~~~
{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
{"movie":"2804","rate":"5","timeStamp":"978300719","uid":"1"}
{"movie":"594","rate":"4","timeStamp":"978302268","uid":"1"}
{"movie":"919","rate":"4","timeStamp":"978301368","uid":"1"}
{"movie":"595","rate":"5","timeStamp":"978824268","uid":"1"}
{"movie":"938","rate":"4","timeStamp":"978301752","uid":"1"}
~~~

1. Load rating.json into a raw Hive table, rat_json:

~~~
create table rat_json(line string) row format delimited;
load data local inpath '/home/hadoop/rating.json' into table rat_json;
~~~

2. Parse the JSON into four fields and insert them into a new table, t_rating (created first, with columns matching what TRANSFORM reads in the next step):

~~~
create table t_rating(movieid string, rate string, timestring string, uid string);

insert overwrite table t_rating
select get_json_object(line,'$.movie') as movieid,
       get_json_object(line,'$.rate') as rate,
       get_json_object(line,'$.timeStamp') as timestring,
       get_json_object(line,'$.uid') as uid
from rat_json;
~~~

3. Use TRANSFORM plus Python to convert unixtime to weekday.

First create a Python script file:

~~~
vi weekday_mapper.py
~~~

~~~
#!/usr/bin/env python
import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    movieid, rating, unixtime, userid = line.split('\t')
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print('\t'.join([movieid, rating, str(weekday), userid]))
~~~

Save the file, then register it with Hive. add FILE ships the script to the distributed cache so every task can execute it (it does not go on the classpath):

~~~
hive> add FILE /home/hadoop/weekday_mapper.py;

hive> create TABLE u_data_new as
SELECT
  TRANSFORM (movieid, rate, timestring, uid)
  USING 'python weekday_mapper.py'
  AS (movieid, rate, weekday, uid)
FROM t_rating;
~~~

Check the result:

~~~
select distinct weekday from u_data_new limit 10;
~~~
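TRANSFORM's contract with the script is just plain text streams: Hive writes each input row to the script's stdin as one tab-separated line and reads tab-separated lines back from stdout. Because of that, the mapper can be sanity-checked locally without a cluster. A minimal sketch follows; the file name local_test.py and the sample rows are illustrative, not part of the original article:

~~~
# local_test.py -- hypothetical helper for testing weekday_mapper.py locally.
# It pipes a few tab-separated rows into the script exactly the way Hive's
# TRANSFORM does: one row per line on stdin, columns joined by '\t'.
import subprocess

sample_rows = [
    "1193\t5\t978300760\t1",
    "661\t3\t978302109\t1",
]

result = subprocess.run(
    ["python", "weekday_mapper.py"],
    input="\n".join(sample_rows) + "\n",
    capture_output=True,
    text=True,
    check=True,
)
# Each output line should contain: movieid, rate, weekday, uid
print(result.stdout, end="")
~~~

The same check can also be done straight from a shell by piping a sample file into `python weekday_mapper.py`.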
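One caveat with the script above: a single malformed record (a missing field or a non-numeric timestamp) raises an exception in the mapper and fails the whole TRANSFORM job. Below is a hedged sketch of a more defensive variant with the same input/output contract; the name weekday_mapper_safe.py and the skip-bad-rows policy are this sketch's assumptions, not the article's:

~~~
#!/usr/bin/env python
# weekday_mapper_safe.py -- hypothetical defensive variant of weekday_mapper.py.
# Silently drops rows that do not have exactly four tab-separated fields or
# that carry an unparseable timestamp, instead of failing the whole job.
import sys
import datetime

for line in sys.stdin:
    fields = line.strip().split('\t')
    if len(fields) != 4:
        continue  # malformed row: wrong number of columns
    movieid, rating, unixtime, userid = fields
    try:
        weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    except (ValueError, OverflowError):
        continue  # non-numeric or out-of-range timestamp
    print('\t'.join([movieid, rating, str(weekday), userid]))
~~~

Whether to drop bad rows or route them to a side output is a design choice; dropping keeps the script simple at the cost of silently losing records.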