1. Prepare the Environment

Role         IP
k8s-master   192.168.0.213
k8s-node1    192.168.0.214
k8s-node2    192.168.0.215
# Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap:
swapoff -a  # temporary
vim /etc/fstab  # permanent: comment out the swap line
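# Alternatively (a sketch, not in the original steps): comment out the swap
# entry in /etc/fstab non-interactively instead of editing it by hand,
# then double-check the file:
sed -ri 's/.*swap.*/#&/' /etc/fstab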

# Set the hostname:
hostnamectl set-hostname <hostname>
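# For example, on the three machines of this cluster:
hostnamectl set-hostname k8s-master   # on 192.168.0.213
hostnamectl set-hostname k8s-node1    # on 192.168.0.214
hostnamectl set-hostname k8s-node2    # on 192.168.0.215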

# Add hosts entries on the master (append; overwriting /etc/hosts would drop the localhost entries):
cat >> /etc/hosts << EOF
192.168.0.213 k8s-master
192.168.0.214 k8s-node1
192.168.0.215 k8s-node2
EOF


Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
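
If the two bridge settings do not take effect, the br_netfilter kernel module may not be loaded yet; loading it explicitly and re-checking is a reasonable extra step (an assumption about the host's kernel configuration, not part of the original steps):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables  # should print "= 1"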

2. Install Docker/kubeadm/kubelet/kubectl on All Nodes

2.1 Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce-18.06.1.ce-3.el7

# Configure a registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://0yd85e5a.mirror.aliyuncs.com"]
}
EOF

systemctl enable docker && systemctl start docker

docker --version
Docker version 18.06.1-ce, build e68fc7a
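
Optional, and not part of the original steps: kubeadm recommends the systemd cgroup driver for Docker. A sketch that extends the same daemon.json (verify the result with docker info afterwards):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://0yd85e5a.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i 'cgroup driver'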

2.2 Add the Aliyun YUM Repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.3 Install kubeadm, kubelet, and kubectl

yum install -y kubeadm-1.18.3 kubectl-1.18.3 kubelet-1.18.3

systemctl enable kubelet
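
To confirm that the expected versions were installed:

kubeadm version
kubectl version --client
kubelet --version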

3. Deploy the Kubernetes Master

Run the following on 192.168.0.213 (master):

kubeadm init \
--apiserver-advertise-address=192.168.0.213 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all

Or, equivalently, using a configuration file:

# vi kubeadm.conf

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
imageRepository: registry.aliyuncs.com/google_containers 
networking:
  podSubnet: 10.244.0.0/16 
  serviceSubnet: 10.96.0.0/12 

kubeadm init --config kubeadm.conf --ignore-preflight-errors=all 

Because the default image registry k8s.gcr.io is not reachable from mainland China, the Aliyun mirror registry is specified here.
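
The required images can also be pulled ahead of init to surface any registry problems early (a sketch using the same config file as above):

kubeadm config images pull --config kubeadm.conf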

To use the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes
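
Until the network plugin from step 5 is installed, the master node normally reports NotReady; watching the status is a convenient way to follow progress:

kubectl get nodes -w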

4. Join the Kubernetes Nodes

Run the following on 192.168.0.214/215 (nodes):

To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init:

kubeadm join 192.168.0.213:6443 --token zuw832.jq583xz01n9k24q1 --discovery-token-ca-cert-hash sha256:069dd0fa2d23a61dd66441590f90c88a864e1ae08bb8286ca3208a8a95f5b9e2
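
The default token expires after 24 hours. If it has expired, a new join command can be generated on the master:

kubeadm token create --print-join-command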

5. Install the Calico Network Plugin

kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

kubectl get pods -n kube-system
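
Note that the default IP pool in this manifest may not match the --pod-network-cidr used above (10.244.0.0/16). If Pods fail to get addresses, a common adjustment (a sketch; check the manifest you actually downloaded) is:

wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# In calico.yaml, uncomment and set:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"
kubectl apply -f calico.yaml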

6. Test the Kubernetes Cluster

  • Verify that Pods run
  • Verify Pod-to-Pod network communication (checked below)
  • Verify DNS resolution (checked below)

Create a Pod in the Kubernetes cluster and verify that it runs correctly:

kubectl create deployment web --image=nginx:1.17
kubectl expose deployment web --port=80 --target-port=80 --name=web-service --type=NodePort
kubectl get pods,svc

Access URL: http://NodeIP:Port (use the NodePort shown by kubectl get svc)
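
For the other two checks in the list above, one option (a sketch; the busybox image and test Pod names are assumptions, not from the original) is to resolve the cluster DNS name and reach the nginx Service from inside a Pod:

# DNS resolution inside the cluster
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes
# Pod network: reach web-service from a test Pod
kubectl run -it --rm --restart=Never net-test --image=busybox:1.28 -- wget -qO- http://web-service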

7. Deploy the Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# If the default images cannot be pulled, download recommended.yaml, change the image addresses below, and apply the local file instead:
image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.0.3
image: registry.cn-hangzhou.aliyuncs.com/kubernetesui/metrics-scraper:v1.0.4

By default the Dashboard is only reachable from inside the cluster. Change the Service to type NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
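
One way to make this change on a running cluster, instead of editing the downloaded manifest, is to edit the Service in place and then confirm the NodePort:

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
kubectl -n kubernetes-dashboard get service kubernetes-dashboard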

Access URL: https://NodeIP:30001

Create a service account and bind it to the built-in cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token printed in the output.
