## Note!!!!
No matter what the scattered advice online says, this is how I configured it, and it finally worked.
### Remote machine configuration
> The configuration below is the one that worked. In my experience there are two key points:
* The hosts file must contain a mapping for the machine's own hostname.
* The hostname should map to the machine's external IP (the first IP reported by `ifconfig`), not to 127.0.0.1. A quick check is sketched below.
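
A minimal sketch to confirm the mapping, run on the remote machine (the hostname `bizzbee` is the one used in the hosts file below); it should print the external IP, not the loopback address:
~~~
import java.net.InetAddress;

public class CheckHostMapping {
    public static void main(String[] args) throws Exception {
        // Resolves via the system resolver (/etc/hosts first on a default Linux setup).
        // Expected output here: 192.168.31.249, not 127.0.0.1.
        InetAddress addr = InetAddress.getByName("bizzbee");
        System.out.println(addr.getHostAddress());
    }
}
~~~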
`core-site.xml`
```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bizzbee:8020</value>
    </property>
</configuration>
```
`/etc/hosts`
```
192.168.31.249 bizzbee
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
```
`netstat -ntlp`
```
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 67441/java
tcp 0 0 127.0.0.1:49715 0.0.0.0:* LISTEN 67044/java
tcp 0 0 192.168.31.249:8020 0.0.0.0:* LISTEN 66793/java
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 66793/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 67044/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 67044/java
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 67044/java
tcp6 0 0 :::22 :::* LISTEN -
```
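The important line above is the NameNode RPC port 8020 listening on 192.168.31.249 rather than 127.0.0.1; that is what makes it reachable from the Mac. A minimal sketch to verify reachability from the client side (hostname and port are the ones configured in `core-site.xml`, and this assumes the Mac's hosts file already maps `bizzbee`):
~~~
import java.net.InetSocketAddress;
import java.net.Socket;

public class CheckNameNodePort {
    public static void main(String[] args) throws Exception {
        // Plain TCP connection attempt to the NameNode RPC port with a 3 second timeout.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("bizzbee", 8020), 3000);
            System.out.println("NameNode port 8020 is reachable");
        }
    }
}
~~~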
## Mac (local machine) and Java code
* Properties file in the project
~~~
INPUT_PATH=/bizzbee/input/nba
OUTPUT_PATH=/bizzbee/output/
OUTPUT_FILE=result2.out
HDFS_URI=hdfs://bizzbee:8020
MAPPER_CLASS=com.bizzbee.bigdata.hadoop.hdfs.WordCountMapper
~~~
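The main program below relies on two small helper classes, `ParamsUtils` and `Constants`, which are not shown in this post. A minimal sketch of what they might look like (the properties file name `application.properties` and the constant values are assumptions):
~~~
package com.bizzbee.bigdata.hadoop.hdfs;

import java.util.Properties;

// Assumed sketch: constant names mirror the keys in the properties file above.
class Constants {
    static final String INPUT_PATH = "INPUT_PATH";
    static final String OUTPUT_PATH = "OUTPUT_PATH";
    static final String OUTPUT_FILE = "OUTPUT_FILE";
    static final String HDFS_URI = "HDFS_URI";
    static final String MAPPER_CLASS = "MAPPER_CLASS";
}

// Assumed sketch: loads the properties file once from the classpath.
class ParamsUtils {
    private static final Properties PROPERTIES = new Properties();

    static {
        try {
            // File name is an assumption; use whatever the project actually ships.
            PROPERTIES.load(ParamsUtils.class.getClassLoader()
                    .getResourceAsStream("application.properties"));
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static Properties getProperties() {
        return PROPERTIES;
    }
}
~~~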
* Main program
~~~
package com.bizzbee.bigdata.hadoop.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class HDFSAPP01 {
    public static void main(String[] args) throws Exception {
        // Load the configuration file
        Properties properties = ParamsUtils.getProperties();
        Path input = new Path(properties.getProperty(Constants.INPUT_PATH));
        //Path input = new Path("/bizzbee/test/cba1.txt");

        Configuration configuration = new Configuration();
        // Make the client reach DataNodes by hostname instead of their internal IP
        configuration.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fileSystem = FileSystem.get(
                new URI(properties.getProperty(Constants.HDFS_URI)), configuration, "bizzbee");

        // 1. List the input files
        RemoteIterator<LocatedFileStatus> iterator = fileSystem.listFiles(input, false);

        // BizzbeeMapper mapper = new WordCountMapper();
        /*
         * Build the mapper object via reflection.
         */
        Class<?> clazz = Class.forName(properties.getProperty(Constants.MAPPER_CLASS));
        BizzbeeMapper mapper = (BizzbeeMapper) clazz.newInstance();
        BizzbeeContext context = new BizzbeeContext();

        // 2. Read each file line by line and feed it to the mapper
        while (iterator.hasNext()) {
            LocatedFileStatus file = iterator.next();
            FSDataInputStream in = fileSystem.open(file.getPath());
            BufferedReader reader = new BufferedReader(new InputStreamReader(in));
            String line = "";
            while ((line = reader.readLine()) != null) {
                mapper.map(line, context);
            }
            reader.close();
            in.close();
        }

        // 3. Collect the processed results
        Map<Object, Object> contextMap = context.getCacheMap();

        // 4. Create the output file
        Path output = new Path(properties.getProperty(Constants.OUTPUT_PATH));
        FSDataOutputStream out = fileSystem.create(
                new Path(output, new Path(properties.getProperty(Constants.OUTPUT_FILE))));

        // 5. Write out the cache from step 3; each entry is one key/value mapping of the map
        Set<Map.Entry<Object, Object>> entries = contextMap.entrySet();
        for (Map.Entry<Object, Object> entry : entries) {
            out.write((entry.getKey().toString() + "\t" + entry.getValue().toString() + "\n").getBytes());
        }
        out.close();
        fileSystem.close();
        System.out.println("ok...");
    }
}
~~~
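`BizzbeeMapper`, `BizzbeeContext`, and `WordCountMapper` are also the author's own classes, not part of Hadoop, and are not shown here. A minimal sketch based only on how the main program uses them (the tab delimiter in `WordCountMapper` is an assumption):
~~~
package com.bizzbee.bigdata.hadoop.hdfs;

import java.util.HashMap;
import java.util.Map;

// Assumed sketch: a mapper takes one line of input and writes results into the context.
interface BizzbeeMapper {
    void map(String line, BizzbeeContext context);
}

// Assumed sketch: a simple in-memory key/value cache that collects results.
class BizzbeeContext {
    private final Map<Object, Object> cacheMap = new HashMap<>();

    public void write(Object key, Object value) {
        cacheMap.put(key, value);
    }

    public Object get(Object key) {
        return cacheMap.get(key);
    }

    public Map<Object, Object> getCacheMap() {
        return cacheMap;
    }
}

// Assumed sketch: counts occurrences of each word; the split delimiter is a guess.
class WordCountMapper implements BizzbeeMapper {
    @Override
    public void map(String line, BizzbeeContext context) {
        for (String word : line.split("\t")) {
            Object count = context.get(word);
            context.write(word, count == null ? 1 : (Integer) count + 1);
        }
    }
}
~~~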
* hosts file on the Mac
```
192.168.31.249 bizzbee
```