```shell
tar -zxvf flink-1.6.4-bin-hadoop27-scala_2.11.tgz -C /opt/
```
```shell
mv /opt/flink-1.6.4/ /opt/flink
```
```shell
vi /opt/flink/conf/flink-conf.yaml # 1
```
```yaml
jobmanager.rpc.address: flink01
```
```shell
vi /opt/flink/conf/masters # 2
# non-interactive alternative: echo "flink01:8081" > /opt/flink/conf/masters
```
```
flink01:8081
```
```shell
vi /opt/flink/conf/slaves # 3
```
```
flink02
flink03
```
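The three interactive edits above can also be scripted, which helps when provisioning several machines. A minimal sketch using the hostnames from this guide; `CONF_DIR` defaults to a scratch directory so the sketch is safe to dry-run, so point it at `/opt/flink/conf` on a real node:

```shell
# Scripted version of edits 1-3; set CONF_DIR=/opt/flink/conf on a real node.
CONF_DIR="${CONF_DIR:-./flink-conf-demo}"
mkdir -p "$CONF_DIR" && touch "$CONF_DIR/flink-conf.yaml"   # no-ops on a real install

# 1: point the JobManager RPC address at flink01 (replace the key, or append it)
if grep -q '^jobmanager\.rpc\.address:' "$CONF_DIR/flink-conf.yaml"; then
  sed -i 's/^jobmanager\.rpc\.address:.*/jobmanager.rpc.address: flink01/' "$CONF_DIR/flink-conf.yaml"
else
  echo 'jobmanager.rpc.address: flink01' >> "$CONF_DIR/flink-conf.yaml"
fi

# 2: masters file -- JobManager host and web UI port
echo 'flink01:8081' > "$CONF_DIR/masters"

# 3: slaves file -- one TaskManager host per line
printf 'flink02\nflink03\n' > "$CONF_DIR/slaves"
```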
---
Distribute the files
Note: the flink directory must sit at the same path on every server!
```shell
scp -r /opt/flink/ root@flink02:/opt/
```
```shell
scp -r /opt/flink/ root@flink03:/opt/
```
---
The following step must be performed on every node in the cluster
```shell
echo "export FLINK_BIN_DIR=/opt/flink/bin" >>/etc/profile \
&& echo "export PATH=\$PATH:\$FLINK_BIN_DIR" >>/etc/profile \
&& source /etc/profile
```
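The profile edit above appends unconditionally, so re-running it leaves duplicate lines in `/etc/profile`. A guarded variant (a sketch; `PROFILE` defaults to a scratch file for a dry run, set it to `/etc/profile` on the nodes):

```shell
# Idempotent profile edit; use PROFILE=/etc/profile on a real node.
PROFILE="${PROFILE:-./profile-demo}"
touch "$PROFILE"
# only append if the export is not there yet
if ! grep -q 'FLINK_BIN_DIR' "$PROFILE"; then
  echo 'export FLINK_BIN_DIR=/opt/flink/bin' >> "$PROFILE"
  echo 'export PATH=$PATH:$FLINK_BIN_DIR'    >> "$PROFILE"
fi
. "$PROFILE"   # load into the current shell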
---
启动集群
```shell
/opt/flink/bin/start-cluster.sh
```
关闭集群
```shell
/opt/flink/bin/stop-cluster.sh
```
访问:[http://flink01:8081/](http://flink01:8081/)
> 单节点启动与停止
```shell
/opt/flink/bin/jobmanager.sh ((start|start-foreground) cluster)|stop|stop-all
/opt/flink/bin/taskmanager.sh start|start-foreground|stop|stop-all
```
运行测试任务(启动Hadoop)
```shell
/opt/flink/bin/flink run /opt/flink/examples/batch/WordCount.jar --input hdfs://flink01:9000/input/part1rfid0901.txt --output hdfs://flink01:9000/output/part1rfid0901.output.txt
```
Hadoop:[http://flink01:50070/](http://flink01:50070/)
- Flink简介
- flink搭建standalone模式与测试
- flink提交任务(界面方式)
- Flink项目初始化
- Java版WordCount(匿名类)
- Java版WordCount(lambda)
- Scala版WordCount
- Java版WordCount[批处理]
- Scala版WordCount[批处理]
- 流处理非并行的Source
- 流处理可并行的Source
- kafka的Source
- Flink算子(Map,FlatMap,Filter)
- Flink算子KeyBy
- Flink算子Reduce和Max与Min
- addSink自定义Sink
- startNewChain和disableChaining
- 资源槽slotSharingGroup
- 计数窗口
- 滚动窗口
- 滑动窗口
- Session窗口
- 按照EventTime作为标准