## Basic Usage
* Start Hive by running the `hive` command from the `bin` folder of the Hive installation directory.
```
bin/hive
```
* Afterwards, a metastore database is generated in the relational database configured earlier.
![](https://img.kancloud.cn/5d/b5/5db5d1e1a7a3834aa3da661bb0f16176_139x45.png)
* Create a new Hive database:
```
hive> create database test_hive;
```
* Create a table that can be populated directly from a file (see below):
```
create table players(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```
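To confirm the table exists with the expected schema, it can be described from the Hive CLI (this assumes the table was created inside the `test_hive` database):

```
hive> use test_hive;
hive> desc players;
```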
* Create a file named `players` in Hive's `data` folder. Fields must be separated by a literal tab character to match the `'\t'` delimiter declared above.
```
1 james
2 zion
3 davis
4 george
```
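If the file is written with spaces instead of real tabs, the `name` column will load as NULL. A minimal shell sketch for creating the file with literal tab separators (the `data` path here is illustrative; use your own Hive data directory):

```shell
# Create the players file with literal tab separators,
# matching FIELDS TERMINATED BY '\t' in the table DDL.
mkdir -p data
printf '1\tjames\n2\tzion\n3\tdavis\n4\tgeorge\n' > data/players
cat data/players
```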
* Then load the file into the `players` table:
```
load data local inpath '/home/bizzbee/work/app/hive-1.1.0-cdh5.15.1/data/players' overwrite into table players;
```
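To verify the load, a plain scan reads the table's files directly, so no MapReduce job is launched:

```
hive> select * from players;
```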
* Running an aggregation query, by contrast, automatically generates a MapReduce job:
```
hive> select count(1) from players;
Query ID = bizzbee_20191105232020_fa9a96e2-3a68-4671-a4a5-df1e88145c50
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1572942693118_0001, Tracking URL = http://bizzbee:8088/proxy/application_1572942693118_0001/
Kill Command = /home/bizzbee/work/app/hadoop-2.6.0-cdh5.15.1/bin/hadoop job -kill job_1572942693118_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-11-05 23:21:13,111 Stage-1 map = 0%, reduce = 0%
2019-11-05 23:21:25,470 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.96 sec
2019-11-05 23:21:35,551 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.61 sec
MapReduce Total cumulative CPU time: 11 seconds 610 msec
Ended Job = job_1572942693118_0001
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 11.61 sec HDFS Read: 7283 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 610 msec
OK
4
Time taken: 50.814 seconds, Fetched: 1 row(s)
```