[TOC]

# Custom Functions

When Hive's built-in functions cannot meet your business needs, consider a user-defined function (`UDF: user-defined function`).

# Categories of Custom Functions

- UDF: operates on a single row and produces a single row of output (e.g. mathematical and string functions).
- UDAF (user-defined aggregate function): accepts multiple input rows and produces one output row (e.g. count, max).

# UDF Development Examples

## A Simple UDF

1. Write a Java class that extends UDF and overloads the evaluate method:

~~~
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class Lower extends UDF {
    public Text evaluate(final Text s) {
        if (s == null) { return null; }
        return new Text(s.toString().toLowerCase());
    }
}
~~~

2. Package it into a jar and upload the jar to the server.

3. Add the jar to Hive's classpath:

~~~
hive> add JAR /home/hadoop/udf.jar;
~~~

4. Create a temporary function bound to the Java class:

~~~
hive> create temporary function tolowercase as 'cn.bigdata.Lower';
~~~

5. The custom function tolowercase can now be used in HQL:

~~~
select tolowercase(name), age from t_test;
~~~

**A custom temporary function is only visible in the current session.**

## Developing a JSON-Parsing UDF

Given raw JSON data like this:

~~~
{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
~~~

the task is to load it into the Hive data warehouse. No matter how many intermediate tables are used, the final result table must look like:

~~~
movie   rate    timestamp   uid
1197    3       978302268   1
~~~

Note: everything is done inside Hive; a custom function may be used.

First, a bean that maps one JSON record:

~~~
//{"movie":"1721","rate":"3","timeStamp":"965440048","uid":"5114"}
public class MovieRateBean {

    private String movie;
    private String rate;
    private String timeStamp;
    private String uid;

    public String getMovie() { return movie; }
    public void setMovie(String movie) { this.movie = movie; }
    public String getRate() { return rate; }
    public void setRate(String rate) { this.rate = rate; }
    public String getTimeStamp() { return timeStamp; }
    public void setTimeStamp(String timeStamp) { this.timeStamp = timeStamp; }
    public String getUid() { return uid; }
    public void setUid(String uid) { this.uid = uid; }

    @Override
    public String toString() {
        return
            movie + "\t" + rate + "\t" + timeStamp + "\t" + uid;
    }
}
~~~

Then the UDF itself:

~~~
import org.apache.hadoop.hive.ql.exec.UDF;

import parquet.org.codehaus.jackson.map.ObjectMapper;

public class JsonParser extends UDF {

    public String evaluate(String jsonLine) {
        ObjectMapper objectMapper = new ObjectMapper();
        try {
            MovieRateBean bean = objectMapper.readValue(jsonLine, MovieRateBean.class);
            return bean.toString();
        } catch (Exception e) {
            // malformed lines fall through and yield an empty string
        }
        return "";
    }
}
~~~

Create the table:

~~~
create table rat_json(line string) row format delimited;
~~~

Then load the data and the jar, and register the function:

~~~
create temporary function parsejson as 'com.hive.JsonParser';
~~~

~~~
select parsejson(line) from rat_json limit 10;
~~~

If the following error appears, the function class was written or registered incorrectly, or the jar was packaged incorrectly:

~~~
SemanticException [Error 10014]: Line 1:7 Wrong arguments 'line': No matching method for class com.hive.ParseJson with (string). Possible choices:
~~~

## Built-in JSON Parsing

`get_json_object`

~~~
hive> select * from rat_json limit 10;
OK
{"movie":"1193","rate":"5","timeStamp":"978300760","uid":"1"}
{"movie":"661","rate":"3","timeStamp":"978302109","uid":"1"}
{"movie":"914","rate":"3","timeStamp":"978301968","uid":"1"}
{"movie":"3408","rate":"4","timeStamp":"978300275","uid":"1"}
{"movie":"2355","rate":"5","timeStamp":"978824291","uid":"1"}
{"movie":"1197","rate":"3","timeStamp":"978302268","uid":"1"}
{"movie":"1287","rate":"5","timeStamp":"978302039","uid":"1"}
{"movie":"2804","rate":"5","timeStamp":"978300719","uid":"1"}
{"movie":"594","rate":"4","timeStamp":"978302268","uid":"1"}
{"movie":"919","rate":"4","timeStamp":"978301368","uid":"1"}
Time taken: 0.131 seconds, Fetched: 10 row(s)
hive> select get_json_object(line,'$.movie') as movie, get_json_object(line,'$.rate') as rate from rat_json limit 10;
OK
1193    5
661     3
914     3
3408    4
2355    5
1197    3
1287    5
2804    5
594     4
919     4
Time taken: 0.108 seconds, Fetched: 10 row(s)
~~~

# Function Scope

## Temporary Functions

1.
A custom UDF extends org.apache.hadoop.hive.ql.exec.UDF.

2. Implement the evaluate method; evaluate supports overloading. (Note: a UDF created in one database cannot be used from another database.)

3. Package the program and copy it to the target machine.

4. In the Hive client, add the jar: `hive> add jar /run/jar/udf_test.jar;`

5. Create the temporary function: `hive> CREATE TEMPORARY FUNCTION add_example AS 'hive.udf.Add';`

6. Use it in HQL queries:

~~~
SELECT add_example(8, 9) FROM scores;
SELECT add_example(scores.math, scores.art) FROM scores;
SELECT add_example(6, 7, 8, 6.8) FROM scores;
~~~

7. Drop the temporary function: `hive> DROP TEMPORARY FUNCTION add_example;`

Note: a UDF maps exactly one input row to one output row. If you need many-rows-in, one-row-out behavior, implement a UDAF instead.

## Permanent Functions

1. A custom UDF extends org.apache.hadoop.hive.ql.exec.UDF. (Note: for this approach the class must live in the package org.apache.hadoop.hive.ql.udf.)
2. Implement the evaluate method; evaluate supports overloading.
3. Modify the FunctionRegistry class to register the new UDF.
4. Compile the UDF into a class file and place it inside hive-exec-0.12.0-cdh5.0.0.jar under org\apache\hadoop\hive\ql\udf.
5. Replace the FunctionRegistry class file under org.apache.hadoop.hive.ql.exec in hive-exec-0.12.0-cdh5.0.0.jar with the newly compiled one.
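The jar-patching procedure above targets hive-exec 0.12. On Hive 0.13 and later, a permanent function can instead be registered with plain DDL, pointing Hive at a jar kept on shared storage; no rebuild of hive-exec is needed. A sketch, with an assumed database name `mydb` and an assumed HDFS path:

~~~
-- Permanent registration (Hive 0.13+); the jar path is an example
CREATE FUNCTION mydb.parsejson AS 'com.hive.JsonParser'
USING JAR 'hdfs:///user/hive/jars/udf.jar';
~~~

Unlike a temporary function, this registration survives the session and is visible to other clients, but the function name is qualified by the database it was created in.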
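Going back to the JSON exercise: the requested result table can be produced entirely with the built-in `get_json_object`, without a custom UDF. A sketch, with an assumed result-table name `t_rating` (backticks around `timestamp` because it clashes with a keyword):

~~~
create table t_rating as
select
  get_json_object(line, '$.movie')     as movie,
  get_json_object(line, '$.rate')      as rate,
  get_json_object(line, '$.timeStamp') as `timestamp`,
  get_json_object(line, '$.uid')       as uid
from rat_json;
~~~
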
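For intuition about what `get_json_object` is doing on these flat, one-level records, here is a simplified stand-in in plain Java using a regular expression. This is illustration only, not how Hive implements it (the real function supports full JSONPath expressions and nested objects); the class name `FlatJson` is made up:

~~~
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified stand-in for get_json_object on flat {"k":"v",...} records.
public class FlatJson {

    // Returns the string value stored under key, or null if the key is absent.
    public static String get(String jsonLine, String key) {
        Pattern p = Pattern.compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*\"([^\"]*)\"");
        Matcher m = p.matcher(jsonLine);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "{\"movie\":\"1193\",\"rate\":\"5\",\"timeStamp\":\"978300760\",\"uid\":\"1\"}";
        System.out.println(FlatJson.get(line, "movie")); // prints 1193
        System.out.println(FlatJson.get(line, "rate"));  // prints 5
    }
}
~~~
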
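The mixed calls in step 6 of the temporary-function walkthrough rely on evaluate overloading: Hive picks the overload whose signature matches the arguments. Stripped of the Hive dependency so it runs standalone (in Hive the class would extend org.apache.hadoop.hive.ql.exec.UDF), the hypothetical hive.udf.Add could look like:

~~~
// Hypothetical body of hive.udf.Add; in Hive these would be the
// evaluate() overloads of a class extending
// org.apache.hadoop.hive.ql.exec.UDF.
public class Add {

    // Matches add_example(8, 9)
    public int evaluate(int a, int b) {
        return a + b;
    }

    // Matches add_example(6, 7, 8, 6.8); the int arguments widen to double
    public double evaluate(double a, double b, double c, double d) {
        return a + b + c + d;
    }
}
~~~

With only the two-int overload defined, the four-argument call would fail with the same "No matching method ... with (...)" SemanticException shown earlier for the JSON UDF.
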