1. Install the JDK
Extract the archive into /opt/beh (the directory JAVA_HOME will point into): tar -zxvf jdk-7u51-linux-x64.gz -C /opt/beh
Configure environment variables: vi /etc/profile
Add:
export JAVA_HOME=/opt/beh/jdk1.7.0_51
export PATH=$PATH:$JAVA_HOME/bin
Reload the environment: source /etc/profile
Test with the java, javac, and java -version commands.
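If the profile took effect, java -version should report the release that was just installed. A sketch of the expected first line of output (the runtime and VM build lines that follow vary by build):
$ java -version
java version "1.7.0_51"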
2. Create the hadoop user and group
groupadd hadoop
useradd -s /bin/bash -d /home/hadoop -m hadoop -g hadoop -G root
passwd hadoop
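A quick sanity check that the user and groups were created as intended; the numeric ids below are placeholders and will differ on your system:
$ id hadoop
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop),0(root)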
3. Passwordless SSH
Switch to the hadoop user.
Generate a key pair: ssh-keygen -t rsa
Append the public key: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Tighten permissions: chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Passwordless SSH should now work. If it does not, edit /etc/ssh/sshd_config and uncomment the following lines:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Then restart the SSH service: service sshd restart (the service is named ssh on Debian-based systems)
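To confirm that passwordless login works, connect as the hadoop user; the command should print the date without asking for a password (hadoop001 is the hostname used throughout this guide; substitute your own):
ssh hadoop001 date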
4. Configure Hadoop
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh
Set: export JAVA_HOME=/opt/beh/jdk1.7.0_51
Edit $HADOOP_HOME/etc/hadoop/core-site.xml
Add:
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/beh/data/hadoop</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop001:9000</value>
</property>
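fs.defaultFS refers to the host hadoop001, so that name must resolve on this machine. For a single-node setup, an /etc/hosts entry like the following is enough (the IP is a placeholder; use the machine's real address):
192.168.1.101   hadoop001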
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml
Add:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/beh/data/nameNode</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/beh/data/dataNode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
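The directories named above should exist and be owned by the hadoop user before the NameNode is formatted; pre-creating them avoids permission errors later. A minimal sketch using the paths configured above:
mkdir -p /opt/beh/data/hadoop /opt/beh/data/nameNode /opt/beh/data/dataNode
chown -R hadoop:hadoop /opt/beh/data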
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml (if the file does not exist, copy it from mapred-site.xml.template)
Add:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.job.tracker</name>
<value>hadoop001:9001</value>
</property>
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml
Add:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop001</value>
<description>hostname of RM</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
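Once the four XML files are in place, one way to confirm they parse and are picked up is to read a key back with hdfs getconf (this needs the environment variables from step 5 below):
hdfs getconf -confKey fs.defaultFS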
5. Configure environment variables
vi /etc/profile
Add:
export JAVA_HOME=/opt/beh/jdk1.7.0_51
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/beh/hadoop-2.3.0-cdh5.0.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
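Reload the profile and confirm the Hadoop commands resolve before moving on:
source /etc/profile
hadoop version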
6. Format the NameNode
hdfs namenode -format
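Run the format once, as the hadoop user. On success the log should include a line similar to the following (the path comes from the hdfs-site.xml above; this is the usual Hadoop 2 message, not verbatim output from this setup):
INFO common.Storage: Storage directory /opt/beh/data/nameNode has been successfully formatted.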
7. Start the cluster
start-all.sh
If startup prints WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable, Hadoop cannot find its native library. This is a build problem: the bundled binaries were not compiled for this platform. Copying previously compiled 64-bit native libraries into $HADOOP_HOME/lib/native resolves it.
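To verify the daemons came up, run jps as the hadoop user; on a single node all five Hadoop processes should appear (the pids are placeholders):
$ jps
2481 NameNode
2602 DataNode
2760 SecondaryNameNode
2913 ResourceManager
3021 NodeManager
3112 Jps
The NameNode web UI should then answer at http://hadoop001:50070 and the ResourceManager at http://hadoop001:8088, the standard Hadoop 2 defaults.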