Temporary tables are supported in Hive 0.14.0 and later. <br/>
**What temporary tables are for:** A temporary table lets an application automatically manage the intermediate data generated during a complex query. <br/>
**Characteristics of temporary tables:**

(1) A temporary table is visible only to the current session and is dropped automatically when the session exits.

(2) If you create a temporary table named employee in the hivebook database while a table of another type (managed or external) with the same name already exists, the current session resolves the name to the newly created temporary table; only after you `drop` or `rename` the temporary table does the name resolve to the other table again.

(3) The storage location of a temporary table is determined jointly by the two properties below:

```xml
<!-- {hive_home}/conf/hive-site.xml -->
<!-- Configure where Hive stores its temporary files -->
<property>
    <name>hive.exec.scratchdir</name>
    <!-- my HDFS user name is root -->
    <!-- the hive-${user.name} pattern keeps each user's scratch space separate -->
    <value>/home/hadoop/hive/data/hive-${user.name}</value>
    <description>Scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/hive/data/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>
```

With this configuration, the storage root of the employee temporary table is `/home/hadoop/hive/data/hive-root/root/`.

(4) Temporary tables support neither partition columns nor indexes.

(5) Hive 1.1 and later can keep temporary tables in memory or on SSD, configured through the `hive.exec.temporary.table.storage` property in {hive_home}/conf/hive-site.xml; the valid values are `memory`, `ssd`, and `default`.

<br/>

**Create the employee temporary table in the hivebook database and load the local employee.txt file into it:**

*`employee.txt`*

```
Michael|Montreal,Toronto|Male,30|DB:80|Product:Lead,Developer:Lead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
```

```sql
-- (1) Create the employee temporary table matching the format of employee.txt
create temporary table if not exists employee(
    emp_name string,
    work_place array<string>,
    sex_age struct<sex:string, age:int>,
    skills_score map<string, int>,
    depart_title map<string, array<string>>
)
row format delimited
fields terminated by '|'
collection items terminated by ','
map keys terminated by ':'
;

0: jdbc:hive2://hadoop101:10000> show create table employee;
+----------------------------------------------------+--+
|                   createtab_stmt                   |
+----------------------------------------------------+--+
| CREATE TEMPORARY TABLE `employee`(                 |
|   `emp_name` string,                               |
|   `work_place` array<string>,                      |
|   `sex_age` struct<sex:string,age:int>,            |
|   `skills_score` map<string,int>,                  |
|   `depart_title` map<string,array<string>>)        |
| ROW FORMAT SERDE                                   |
|   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
| WITH SERDEPROPERTIES (                             |
|   'colelction.delim'=',',                          |
|   'field.delim'='|',                               |
|   'mapkey.delim'=':',                              |
|   'serialization.format'='|')                      |
| STORED AS INPUTFORMAT                              |
|   'org.apache.hadoop.mapred.TextInputFormat'       |
| OUTPUTFORMAT                                       |
|   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
| LOCATION                                           |
|   'hdfs://hadoop101:9000/home/hadoop/hive/data/hive-root/root/5189eaf7-d7c8-4422-94a7-8aca0b1bb8e0/_tmp_space.db/15fa5bcb-e1e6-477c-877d-4fb38b9b6591' |
| TBLPROPERTIES (                                    |
|   'COLUMN_STATS_ACCURATE'='true',                  |
|   'numFiles'='1',                                  |
|   'totalSize'='234')                               |
+----------------------------------------------------+--+

-- (2) Load employee.txt into the table
0: jdbc:hive2://hadoop101:10000> load data local inpath "/employee.txt" into table employee;

-- (3) Query
0: jdbc:hive2://hadoop101:10000> select * from employee;
+--------------------+-------------------------+----------------------------+------------------------+--------------------------------------------+--+
| employee.emp_name  |   employee.work_place   |      employee.sex_age      | employee.skills_score  |           employee.depart_title            |
+--------------------+-------------------------+----------------------------+------------------------+--------------------------------------------+--+
| Michael            | ["Montreal","Toronto"]  | {"sex":"Male","age":30}    | {"DB":80}              | {"Product":["Lead"],"Developer":["Lead"]}  |
| Will               | ["Montreal"]            | {"sex":"Male","age":35}    | {"Perl":85}            | {"Product":["Lead"],"Test":["Lead"]}       |
| Shelley            | ["New York"]            | {"sex":"Female","age":27}  | {"Python":80}          | {"Test":["Lead"],"COE":["Architect"]}      |
| Lucy               | ["Vancouver"]           | {"sex":"Female","age":57}  | {"Sales":89,"HR":94}   | {"Sales":["Lead"]}                         |
+--------------------+-------------------------+----------------------------+------------------------+--------------------------------------------+--+
```
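
The name-shadowing behavior described in point (2) can be sketched as a short HiveQL session. This is an illustrative sketch, not output from the example above: the single-column schema and the assumption that a managed `employee` table already exists in hivebook are made up for brevity.

```sql
-- Assume hivebook already contains a managed table named employee.
use hivebook;

-- This temporary table shadows the managed table of the same name,
-- but only within the current session.
create temporary table employee (emp_name string);

select * from employee;   -- resolves to the (empty) temporary table

-- Dropping the temporary table makes the managed table visible again.
drop table employee;

select * from employee;   -- resolves to the managed table once more
```

Other sessions are unaffected throughout: they never see the temporary table and always read the managed one.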