[TOC]

# 1. Windows environment setup

1. Download the same Hadoop package used on Linux from the official archive: https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/
   ![](https://img.kancloud.cn/4d/05/4d05267dafd00df04cc8e9a1502c7463_1040x256.png)
   Windows and Linux use the same .tar.gz file.
2. Extract the package to drive D: (or any other drive).
   ![](https://img.kancloud.cn/89/75/897519331713ae9c760ff3ad33299f98_1145x38.png)
3. Add `hadoop.dll` and `winutils.exe` to the `D:\hadoop-2.6.0-cdh5.14.2\bin` directory (both files can be found online).
4. Add Hadoop to the Windows environment variables.
   ![](https://img.kancloud.cn/5b/1f/5b1f39fcad70c06ff8644b324ee5fa52_841x219.png)
   ![](https://img.kancloud.cn/ad/48/ad486f29d72d09fc30df5c365044ba1b_1219x347.png)

<br/>

# 2. WordCount example code

Count the words under the `/input` directory. I placed two files, `hello001.txt` and `hello002.txt`, in that directory; their contents are identical:

```
Hello BigData
Hello Hadoop MapReduce
Hello HDFS BigData
Hadoop Hadoop MapReduce
```

1. Create a Maven project in IDEA.
![](https://img.kancloud.cn/ce/73/ce73b4a55efbe6cd9efbec22fd7e964d_1055x464.png)

2. Add the dependencies.

*`pom.xml`*

```xml
<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>2.6.0</version>
    </dependency>
    <dependency>
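        <!-- Note (not in the original pom, an assumption): the 2.6.0 versions
             above match the Apache hadoop-2.6.0 download linked earlier. If a
             CDH build such as hadoop-2.6.0-cdh5.14.2 is used instead, the
             matching 2.6.0-cdh5.14.2 artifact versions from the cloudera
             repository declared above would likely be needed. -->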
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-auth</artifactId>
        <version>2.6.0</version>
    </dependency>
</dependencies>

<build>
    <pluginManagement>
        <!-- lock down plugins versions to avoid using Maven defaults (may be moved to parent pom) -->
        <plugins>
            <!-- packaging plugin -->
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```

*`resources/log4j.properties`*

```properties
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
```

3. The Java program.

*`com/exa/mapreduce001/WordCountMapper.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
 *
 * KEYIN:    input key type
 * VALUEIN:  input value type
 * KEYOUT:   output key type
 * VALUEOUT: output value type
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    Text keyOut = new Text();
    IntWritable valueOut = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. The Mapper reads the input one line at a time
        String line = value.toString();
        // 2. Split the line; \\s+ matches any run of whitespace (spaces, tabs, etc.)
        String[] words = line.split("\\s+");
        // 3. Write each word to the Context as a (word, 1) pair
        for (String word : words) {
            keyOut.set(word);
            context.write(keyOut, valueOut);
        }
    }
}
```

*`com/exa/mapreduce001/WordCountReducer.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    int sum;
    IntWritable count = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the 1s emitted by the Mapper for this word
        sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        count.set(sum);
        context.write(key, count);
    }
}
```

*`com/exa/mapreduce001/WordCountDriver.java`*

```java
package com.exa.mapreduce001;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Load the configuration and create the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2. Register the jar containing the Driver class
        job.setJarByClass(WordCountDriver.class);

        // 3. Register the Mapper and Reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // 4. Declare the Mapper output types (key and value)
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 5. Declare the final output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 6. Set the input and output paths
        //    Paths on a local drive must start with file:///
        FileInputFormat.setInputPaths(job, new Path("file:///D:\\IDEAWorkspace\\hadoop\\mapreduce001\\hadoop\\input"));
        // The job fails if the output path already exists
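        // (Sketch, not part of the original code: to make reruns easier, a
        //  pre-existing output directory can be deleted first via
        //  org.apache.hadoop.fs.FileSystem, roughly:
        //      FileSystem fs = FileSystem.get(conf);
        //      fs.delete(new Path("file:///D:\\...\\output"), true);  // true = recursive
        //  Verify the call against your Hadoop version before relying on it.)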
        FileOutputFormat.setOutputPath(job, new Path("file:///D:\\IDEAWorkspace\\hadoop\\mapreduce001\\hadoop\\output"));

        // 7. Submit the job and wait for completion
        boolean result = job.waitForCompletion(true);
        // Exit with 0 on success, 1 on failure
        System.exit(result ? 0 : 1);
    }
}
```

The results are written to `/output/part-r-00000` (Hadoop pads the partition number to five digits):

```
BigData 4
HDFS    2
Hadoop  6
Hello   6
MapReduce   4
```

The input and output paths in WordCountDriver above are on a local drive. To read from and write to HDFS instead, replace the code of step 6 with the following, which takes the paths from the `main()` arguments:

```java
// 6. Set the input and output paths
//    Both paths are on HDFS and are passed in via the main() arguments
FileInputFormat.setInputPaths(job, new Path(args[0]));
// The job fails if the output path already exists
FileOutputFormat.setOutputPath(job, new Path(args[1]));
```

1. Then build the jar.
![](https://img.kancloud.cn/2b/7f/2b7f17da434f1cb859e4e476b5bd0bf2_935x436.png)
![](https://img.kancloud.cn/fa/66/fa665f7bb8521763f473d4f3419f1c38_1259x347.png)

2. Upload the jar to any directory on the Linux machine with Xftp.

3. Run the jar.

```shell
# /user/hadoop/input  is passed to args[0]
# /user/hadoop/output is passed to args[1]
# Both are HDFS paths, not paths on the local filesystem
hadoop jar com-exa-mapreduce001-1.0-SNAPSHOT.jar com.exa.mapreduce001.WordCountDriver /user/hadoop/input /user/hadoop/output
```
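The expected counts above can be sanity-checked without a cluster: the map step emits a (word, 1) pair per word and the reduce step sums the pairs per word. The sketch below mimics that logic in plain Java over the two identical input files shown earlier; it is an illustration of the counting logic only and does not use the Hadoop API (a `TreeMap` stands in for the sorted grouping the shuffle phase provides).

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCountCheck {
    public static void main(String[] args) {
        // hello001.txt and hello002.txt have identical contents
        String file =
            "Hello BigData\n" +
            "Hello Hadoop MapReduce\n" +
            "Hello HDFS BigData\n" +
            "Hadoop Hadoop MapReduce\n";
        String[] inputs = {file, file};

        // "map": emit (word, 1) for each word; "reduce": sum per key.
        // TreeMap keeps keys sorted, like the output of the shuffle phase.
        Map<String, Integer> counts = new TreeMap<>();
        for (String input : inputs) {
            for (String line : input.split("\n")) {
                for (String word : line.split("\\s+")) {
                    counts.merge(word, 1, Integer::sum);
                }
            }
        }

        // Print in the same word<TAB>count format as part-r-00000
        counts.forEach((word, count) -> System.out.println(word + "\t" + count));
    }
}
```

Running it prints the same five lines as the job's `part-r-00000`, which is a quick way to confirm the Mapper's `\\s+` split and the Reducer's summation behave as intended.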