[TOC]

> [Reference](https://blog.51cto.com/taoismli/2163097)

## Create a user

```
export username=im_user
useradd -d /home/${username} -m ${username}
passwd ${username}
echo "${username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${username}
```

## Set up passwordless SSH

```
cat >> /etc/hosts <<EOF
192.168.0.110 h1
192.168.0.229 h3
192.168.0.111 h2
EOF
```

```
su ${username}
ssh-keygen
# Copy the public key generated on the management node to the other nodes
ssh-copy-id -i /home/${username}/.ssh/id_rsa.pub ${username}@h1
ssh-copy-id -i /home/${username}/.ssh/id_rsa.pub ${username}@h2
ssh-copy-id -i /home/${username}/.ssh/id_rsa.pub ${username}@h3
```

Test the passwordless login: `ssh h2`

## Set the hostnames (all three hosts)

Set proper hostnames so that links in the web UI resolve correctly instead of pointing to addresses like http://localhost:9864.

![UTOOLS1576138968054.png](http://yanxuan.nosdn.127.net/b8c6d743d9deaaae011b4d0f1705b604.png)

```
# -H runs the command on the given remote host over SSH
hostnamectl set-hostname h1 -H h1
hostnamectl set-hostname h2 -H h2
hostnamectl set-hostname h3 -H h3
```

## Local hosts entries

To reach the cluster directly from your local machine, add the same entries to your local hosts file:

```
192.168.0.110 h1
192.168.0.229 h3
192.168.0.111 h2
```

## Configure the JDK (all three hosts)

### First remove the JDK bundled with CentOS

```
rpm -qa | grep java
python-javapackages-3.4.1-11.el7.noarch
tzdata-java-2018e-3.el7.noarch
javapackages-tools-3.4.1-11.el7.noarch
java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.102-4.b14.el7.x86_64
```

Remove the packages whose names contain `openjdk`:

```
rpm -e --nodeps java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64
rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.102-4.b14.el7.x86_64
```

### Install the JDK

This guide installs [JDK 1.8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html#/).

Create a directory for the JDK:

```
mkdir /usr/local/java
```

Extract the downloaded archive into it:

```
tar -zxvf jdk-8u201-linux-x64.tar.gz -C /usr/local/java/
```

Configure the Java environment variables:

```
# vi /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_201
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
```

Apply the changes:

```
source /etc/profile
```

Verify the installation:

```
java -version
```

## Install Hadoop

[Mirror download (3.1)](https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/)

### Extract

```
mkdir /usr/local/hadoop
# --strip-components=1 drops the top-level hadoop-3.1.2/ directory so the
# files land directly under /usr/local/hadoop, matching the paths used below
tar -zxvf hadoop-3.1.2.tar.gz -C /usr/local/hadoop/ --strip-components=1
```

### Edit the configuration files (all three hosts)

vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh

```
export JAVA_HOME=/usr/local/java/jdk1.8.0_201
export HADOOP_HOME=/usr/local/hadoop
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

vim /usr/local/hadoop/etc/hadoop/core-site.xml

```
<configuration>
    <property>
        <!-- Directory for files Hadoop generates at runtime -->
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <!-- NameNode RPC address (the default port is 8020) -->
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.110:9000</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>im_user</value>
    </property>
</configuration>
```
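Since hadoop-env.sh and core-site.xml must be identical on all three hosts, it saves repetition to edit them once on h1 and push them out. A minimal sketch, assuming the passwordless SSH configured earlier and that the Hadoop tree already exists at /usr/local/hadoop on h2 and h3:

```
# Run from h1; relies on the passwordless SSH set up above.
# Assumes /usr/local/hadoop already exists on h2 and h3.
for host in h2 h3; do
  scp /usr/local/hadoop/etc/hadoop/hadoop-env.sh \
      /usr/local/hadoop/etc/hadoop/core-site.xml \
      ${host}:/usr/local/hadoop/etc/hadoop/
done
```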
### Edit on the master node (h1) only

vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml

```
<configuration>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <!-- Web UI port -->
        <name>dfs.namenode.http-address</name>
        <value>0.0.0.0:50070</value>
    </property>
    <property>
        <!-- Number of HDFS replicas -->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <!-- Where the NameNode stores the HDFS namespace metadata -->
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/namenode</value>
    </property>
    <property>
        <!-- Physical location of data blocks on each DataNode -->
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/data</value>
    </property>
    <!-- DataNode HTTP (file upload) port -->
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:9876</value>
    </property>
</configuration>
```

vim /usr/local/hadoop/etc/hadoop/mapred-site.xml

```
<configuration>
    <property>
        <!-- Run MapReduce on YARN -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

vim /usr/local/hadoop/etc/hadoop/workers

```
h1
h2
h3
```

vim /usr/local/hadoop/etc/hadoop/yarn-site.xml

```
<configuration>
    <!-- ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>h1</value>
    </property>
    <!-- Reducers fetch data via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapred.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
```

### Initialize the Hadoop filesystem

`/usr/local/hadoop/bin/hdfs namenode -format`

Seeing `INFO common.Storage: Storage directory /usr/local/hadoop/namenode has been successfully formatted.` indicates success.

### Start Hadoop

`/usr/local/hadoop/sbin/start-all.sh`

### Verify the daemons are running

Master node:

```
jps
39578 ResourceManager
39324 SecondaryNameNode
39933 Jps
39039 NameNode
```

Worker nodes:

```
jps
16000 Jps
15907 NodeManager
15780 DataNode
```

### Stop Hadoop

`/usr/local/hadoop/sbin/stop-all.sh`

### Web UI

`http://192.168.0.110:50070`

## Test with curl

```
curl -L -i "http://192.168.0.110:50070/webhdfs/v1/input/hadoop-im_user-datanode-node-3.log?op=open"
```
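The curl test above only succeeds once the file actually exists in HDFS. A minimal sketch for seeding it with the standard `hdfs dfs` commands; the local source path here is hypothetical, so substitute any file you want to read back:

```
# Create the /input directory in HDFS and upload a file to read back.
# The local path below is hypothetical; use any file you have at hand.
/usr/local/hadoop/bin/hdfs dfs -mkdir -p /input
/usr/local/hadoop/bin/hdfs dfs -put /usr/local/hadoop/logs/hadoop-im_user-datanode-node-3.log /input/
/usr/local/hadoop/bin/hdfs dfs -ls /input
```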