
Setting Up a Hadoop Cluster on Linux

Contents

- I. Experiment environment (the steps in this part must be performed on all three nodes)
  - 1. Provision three servers with the configuration below
  - 2. Configure hostnames and address mappings
  - 3. Install `java8`
  - 4. Configure the `java` environment variables
  - 5. Stop and disable the firewall
- II. Prepare the `zookeeper` and `hadoop` packages
  - 1. Download the `hadoop` package
  - 2. Download the `zookeeper` package
  - 3. Upload the packages to the master node with `xftp` or `scp`
- III. Install `Zookeeper`
  - 1. Extract `apache-zookeeper-3.8.0-bin.tar.gz` and rename the extracted directory
  - 2. Edit the `zookeeper` configuration file
- IV. Install `Hadoop`
  - 1. Extract `hadoop-3.2.3.tar.gz` and rename the extracted directory
  - 2. Change into the configuration directory
  - 3. Configure the `hadoop` files
- V. Configure environment variables, passwordless SSH login, and distribute files
  - 1. Edit the environment variables
  - 2. Configure passwordless SSH login
  - 3. Distribute the files
- VI. Start the cluster
  - 1. Check the daemons with `jps`
  - 2. One-command start and stop
  - 3. Safe mode

I. Experiment environment (the steps in Part I must be performed on all three nodes)

1. Provision three servers with the following configuration

| Host IP | Hostname | Node type |
| --- | --- | --- |
| 192.168.10.10 | hadoop-master | master |
| 192.168.10.11 | hadoop-node1 | worker |
| 192.168.10.12 | hadoop-node2 | worker |

2. Configure hostnames and address mappings
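The walkthrough only shows the address mapping; setting each machine's hostname is implied. A minimal sketch of that step, assuming CentOS 7 (which matches the `el7` java path used later):

```bash
# Added step, not in the original: give each node its hostname
# Run the matching command on each machine before editing /etc/hosts
$ hostnamectl set-hostname hadoop-master   # on 192.168.10.10
$ hostnamectl set-hostname hadoop-node1    # on 192.168.10.11
$ hostnamectl set-hostname hadoop-node2    # on 192.168.10.12
```

Then append the address mappings: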
```bash
$ cat << EOF >> /etc/hosts
192.168.10.10 hadoop-master
192.168.10.11 hadoop-node1
192.168.10.12 hadoop-node2
EOF
```

3. Install `java8`

```bash
# Check for old java packages; if any exist, uninstall them first
$ rpm -qa | grep java*
# List all of the java-1.8.0-openjdk dependency packages
$ yum list | grep java-1.8.0-openjdk*
# Install java8
$ yum install -y java-1.8.0-openjdk*
# Locate the installation
$ whereis java
```

4. Configure the `java` environment variables

```bash
# Quote the EOF delimiter so $JAVA_HOME and $PATH are written literally
# instead of being expanded (to empty values) while the file is written
$ cat << 'EOF' >> /etc/profile
# set java environment
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-1.el7_9.x86_64
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
# Apply the changes
$ source /etc/profile
```

5. Stop and disable the firewall

```bash
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
```
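Before moving on, a quick sanity check that Part I took effect (these checks are an addition, not part of the original walkthrough):

```bash
# Run on each node
$ ping -c 1 hadoop-node1          # /etc/hosts mapping resolves
$ java -version                   # should report an openjdk 1.8.0 build
$ echo $JAVA_HOME                 # prints the path set in /etc/profile
$ systemctl is-active firewalld   # prints "inactive"
```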

II. Prepare the `zookeeper` and `hadoop` packages

1. Download the `hadoop` package

Hadoop download link

2. Download the `zookeeper` package

ZooKeeper download link

Download link for all older versions (archive)

Tsinghua University domestic mirror download link (latest version only)
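Alternatively, the packages can be fetched directly on the master node. A sketch using the Apache archive; the URLs follow the archive's usual layout, so verify them before relying on this:

```bash
# Download straight to the master node instead of going through xftp/scp
$ mkdir -p /opt/software
$ wget -P /opt/software https://archive.apache.org/dist/hadoop/common/hadoop-3.2.3/hadoop-3.2.3.tar.gz
$ wget -P /opt/software https://archive.apache.org/dist/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz
```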

3. Upload the packages to the master node with `xftp` or `scp`

```bash
# On linux, create two directories that will be used later
$ mkdir -p {/opt/software,/opt/module}
# Open a windows cmd window
> scp E:\SOFTWARE\hadoop-3.2.3.tar.gz root@hadoop-master:/opt/software/
> scp E:\SOFTWARE\apache-zookeeper-3.8.0-bin.tar.gz root@hadoop-master:/opt/software/
```

III. Install `Zookeeper`

1. Extract `apache-zookeeper-3.8.0-bin.tar.gz` and rename the extracted directory

```bash
$ tar -zxvf /opt/software/apache-zookeeper-3.8.0-bin.tar.gz -C /opt/module
$ mv /opt/module/apache-zookeeper-3.8.0-bin /opt/module/zookeeper
```

2. Edit the `zookeeper` configuration file

```bash
# The sample config shipped with zookeeper is zoo_sample.cfg
$ cd /opt/module/zookeeper/conf
$ cp ./zoo_sample.cfg ./zoo.cfg
$ vi ./zoo.cfg
```

```
# Change the dataDir path (uncomment the line first if it is commented out)
# This is where zookeeper stores its data
dataDir=/home/zookeeper/data
# Add the following four lines
# This is where zookeeper writes its logs
dataLogDir=/home/zookeeper/datalog
# The three server nodes of the ensemble
server.1=hadoop-master:2888:3888
server.2=hadoop-node1:2888:3888
server.3=hadoop-node2:2888:3888
# Save and quit
:wq
```
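One step the walkthrough omits: a replicated ensemble will not start unless each server has a `myid` file in its `dataDir` whose content matches its `server.N` number in `zoo.cfg`. A minimal sketch using the `dataDir` configured above; since `/home/zookeeper` is outside the `/opt/module` directory distributed later, this must be done on every node:

```bash
# Create the myid file on each node; the number must match server.N in zoo.cfg
# On hadoop-master (server.1):
$ mkdir -p /home/zookeeper/data && echo 1 > /home/zookeeper/data/myid
# On hadoop-node1 (server.2):
$ mkdir -p /home/zookeeper/data && echo 2 > /home/zookeeper/data/myid
# On hadoop-node2 (server.3):
$ mkdir -p /home/zookeeper/data && echo 3 > /home/zookeeper/data/myid
```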

IV. Install `Hadoop`

1. Extract `hadoop-3.2.3.tar.gz` and rename the extracted directory

```bash
$ tar -zxvf /opt/software/hadoop-3.2.3.tar.gz -C /opt/module
$ mv /opt/module/hadoop-3.2.3 /opt/module/hadoop
```

2. Change into the configuration directory

```bash
$ cd /opt/module/hadoop/etc/hadoop
```

3. Configure the `hadoop` files

hadoop-env.sh

```bash
# Append the following 7 lines to the end of the file
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-1.el7_9.x86_64
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_ZKFC_USER=root
export HDFS_JOURNALNODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

core-site.xml

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://puegg</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop_data</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <!-- Address and port list hdfs uses to reach the zookeeper ensemble -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop-master:2181,hadoop-node1:2181,hadoop-node2:2181</value>
  </property>
</configuration>
```
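`ha.zookeeper.quorum` assumes the ensemble from Part III is healthy. Once zookeeper has been started (Part VI), a quick check of my own:

```bash
# Verify the ensemble behind ha.zookeeper.quorum: run on each node;
# one node should report "Mode: leader", the other two "Mode: follower"
$ zkServer.sh status
```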

hdfs-site.xml

```xml
<configuration>
  <!-- Logical ID (nameservice) of the hdfs cluster -->
  <property>
    <name>dfs.nameservices</name>
    <value>puegg</value>
  </property>
  <!-- IDs of the namenodes within the cluster -->
  <property>
    <name>dfs.ha.namenodes.puegg</name>
    <value>nn1,nn2</value>
  </property>
  <!-- Hostname and RPC port of each namenode -->
  <property>
    <name>dfs.namenode.rpc-address.puegg.nn1</name>
    <value>hadoop-master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.puegg.nn2</name>
    <value>hadoop-node1:8020</value>
  </property>
  <!-- Hostname and HTTP port of each namenode -->
  <property>
    <name>dfs.namenode.http-address.puegg.nn1</name>
    <value>hadoop-master:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.puegg.nn2</name>
    <value>hadoop-node1:9870</value>
  </property>
  <!-- URL of the shared edits directory (journalnode quorum) -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop-master:8485;hadoop-node1:8485;hadoop-node2:8485/puegg</value>
  </property>
  <!-- Proxy provider that returns the active namenode to HDFS clients -->
  <property>
    <name>dfs.client.failover.proxy.provider.puegg</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method used when the two namenodes switch state -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- Private key used by the fencing method -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- Directory where the journalnodes keep the edits files -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/module/journalnode/data</value>
  </property>
  <!-- Enable automatic namenode failover for the HA cluster -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>1</value>
    <description>
      Specifies the percentage of blocks that must satisfy the minimum
      replication requirement defined by dfs.replication. A value less
      than or equal to 0 means the namenode does not wait for any
      particular percentage of blocks before exiting safe mode. A value
      greater than 1 makes safe mode permanent.
    </description>
  </property>
</configuration>
```
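After the cluster is up (Part VI), the following commands (an addition of mine) confirm the HA pair defined here is wired correctly:

```bash
# Query the HA state of each namenode defined in dfs.ha.namenodes.puegg
$ hdfs haadmin -getServiceState nn1   # expect "active" or "standby"
$ hdfs haadmin -getServiceState nn2
# Confirm the nameservice ID clients resolve through fs.defaultFS
$ hdfs getconf -confKey dfs.nameservices   # prints "puegg"
```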

yarn-site.xml

```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop-master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop-node1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>hadoop-master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>hadoop-node1:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop-master:2181,hadoop-node1:2181,hadoop-node2:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
```
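Analogous to the namenode check above, once YARN is running you can verify the resourcemanager HA pair (again my addition, not the original's):

```bash
# Check which resourcemanager in the rm1/rm2 pair is active
$ yarn rmadmin -getServiceState rm1
$ yarn rmadmin -getServiceState rm2
```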

mapred-site.xml

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
```
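To see these MapReduce settings in action once the cluster is running (Part VI), a smoke test with the examples jar that ships with Hadoop; the path assumes the install layout from Part IV:

```bash
# Run the bundled pi estimator on YARN: 2 map tasks, 10 samples each
$ hadoop jar /opt/module/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.3.jar pi 2 10
```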

workers (in 2.x this file is named `slaves`)

```
# List the worker hostnames (in 2.x the file is named slaves)
hadoop-master
hadoop-node1
hadoop-node2
```

V. Configure environment variables, passwordless SSH login, and distribute files

1. Edit the environment variables

```bash
# Quote the EOF delimiter so the variables are written literally rather than
# expanded while writing; the paths match the renamed directories from Parts III and IV
$ cat << 'EOF' >> /etc/profile
# set zookeeper environment
export ZOOKEEPER_HOME=/opt/module/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
# set hadoop environment
export HADOOP_HOME=/opt/module/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
# Apply the changes
$ source /etc/profile
```

2. Configure passwordless SSH login

```bash
# Generate a key pair and append the public key to authorized_keys
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Log in to all three hosts once to record their host keys
# Do not forget to repeat this for hadoop-node1 and hadoop-node2
$ ssh hadoop-master
$ yes
$ exit
# /root/.ssh/ now contains a known_hosts file covering all three nodes
# Distribute the directory contents to the other nodes with scp
$ scp /root/.ssh/* root@hadoop-node1:/root/.ssh/
$ scp /root/.ssh/* root@hadoop-node2:/root/.ssh/
# If distribution fails, the remote directory probably does not exist;
# create /root/.ssh on node1 and node2, then retry
$ mkdir -p /root/.ssh
$ scp /root/.ssh/* root@hadoop-node1:/root/.ssh/
$ scp /root/.ssh/* root@hadoop-node2:/root/.ssh/
```

3. Distribute the files

```bash
# Distribute the hadoop and zookeeper directories (-r is needed for directories)
$ scp -r /opt/module root@hadoop-node1:/opt
$ scp -r /opt/module root@hadoop-node2:/opt
# Distribute the environment variable file
$ scp /etc/profile root@hadoop-node1:/etc
$ scp /etc/profile root@hadoop-node2:/etc
# Apply the environment variables (run on each node)
$ source /etc/profile
# Cluster configuration is now complete
```

VI. Start the cluster

1. Check the daemons with `jps`

```bash
# Start zookeeper on all three nodes
$ zkServer.sh start
# Start journalnode on all three nodes
$ hdfs --daemon start journalnode
# Format the namenode on the master node
$ hdfs namenode -format
# Start the freshly formatted namenode on the master node
$ hdfs --daemon start namenode
# Sync the metadata over on node1
$ hdfs namenode -bootstrapStandby
# Start the namenode on node1
$ hdfs --daemon start namenode
# Initialize the ZK failover state from master or node1
$ hdfs zkfc -formatZK
# Check the running daemons
$ jps
```

2. One-command start and stop

```bash
# Stop all HDFS daemons
$ stop-dfs.sh
# Start all HDFS daemons
$ start-dfs.sh
# Start the resourcemanager
$ yarn --daemon start resourcemanager
```

3. Safe mode

```bash
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -safemode leave
```
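For reference, roughly what `jps` (step VI.1) should show on each node once everything is up. This is my expectation derived from the configuration above, not output from the original:

```bash
# Expected daemons per node (process names as jps prints them):
# hadoop-master: QuorumPeerMain, JournalNode, NameNode, DataNode,
#                DFSZKFailoverController, ResourceManager, NodeManager
# hadoop-node1:  QuorumPeerMain, JournalNode, NameNode, DataNode,
#                DFSZKFailoverController, ResourceManager, NodeManager
# hadoop-node2:  QuorumPeerMain, JournalNode, DataNode, NodeManager
$ jps
```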

