
Contents

1. Environment
2. Setup
2.1 Add the Docker repository
2.2 Install Docker
2.3 Install kubeadm, kubelet, and kubectl
2.4 Deploy the Kubernetes master (node1)
2.5 Install the Pod network add-on (CNI)
2.6 Join the Kubernetes nodes
2.7 Test the Kubernetes cluster
3. Deploy the Dashboard
3.1 Create kubernetes-dashboard.yaml on node1
3.2 Pull the image with Docker


1. Environment

Each Alibaba Cloud server has two IPs: a public one reachable from the internet and a private (intranet) one. My servers are:

172.16.247.250  node1
172.16.247.248  node2
172.16.247.249  node3

I connect over Xshell; node1 is configured as the master.
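If the nodes need to resolve each other by hostname, a common companion step (an assumption here, not shown in the original) is to add the mappings on every server:

cat >> /etc/hosts << EOF
172.16.247.250 node1
172.16.247.248 node2
172.16.247.249 node3
EOF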

Disable the firewall (run on all servers):

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Disable swap:

swapoff -a                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
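To confirm swap is off (a quick check, not in the original):

free -m    # the Swap line should show 0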

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
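Writing the file does not apply the settings by itself; a typical follow-up (assumed here) is:

modprobe br_netfilter    # ensure the bridge netfilter module is loaded
sysctl --system          # reload all sysctl configuration files, including k8s.conf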

2. Setup

Install Docker, kubeadm, and kubelet on all nodes.

2.1 Add the Docker repository

yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2.2 Install Docker

yum -y install docker-ce-18.06.1.ce-3.el7

Enable Docker at boot and start it:

systemctl enable docker && systemctl start docker

Check the version:

docker --version
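Optionally (an assumption, not one of the original steps), image pulls can be sped up with a registry mirror; replace the placeholder with your own accelerator address from the Alibaba Cloud console:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker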

2.3 Install kubeadm, kubelet, and kubectl

Add the Alibaba Cloud YUM repository:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl. Versions change frequently, so pin a specific version:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Enable kubelet at boot:

systemctl enable kubelet

2.4 Deploy the Kubernetes master (node1)

kubeadm init \
--apiserver-advertise-address=172.16.247.250 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Run the above on node1.

To use the kubectl tool, copy the admin kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
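At this point kubectl works, but the master will report NotReady until a Pod network is installed in 2.5. A quick sanity check:

kubectl get nodes    # node1 shows NotReady until the CNI add-on is deployed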

2.5 Install the Pod network add-on (CNI)

Create a file named kube-flannel.yml on node1:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Then run on node1:

kubectl apply -f kube-flannel.yml
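You can watch the flannel DaemonSet come up and the node turn Ready:

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes    # node1 should become Ready once flannel is running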

2.6 Join the Kubernetes nodes

On node2 and node3, pre-pull the flannel images referenced by the manifest above:

docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
docker pull quay.io/coreos/flannel:v0.15.1

To add a node to the cluster, run the kubeadm join command printed by kubeadm init:

kubeadm join 172.16.247.250:6443 --token y7z5d4.3q8basg1rhwfqeo2 \
    --discovery-token-ca-cert-hash sha256:775f6b8fe3c32065d76cc670e48757135e2d1be30b4362e0c80f9f190f2356c2 
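The token from kubeadm init expires after 24 hours by default. If it has expired or you lost the command, you can regenerate it on node1:

kubeadm token create --print-join-command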

Check the nodes (run on node1):

kubectl get node

2.7 Test the Kubernetes cluster

Create a pod in the Kubernetes cluster to verify that everything works.

Create an nginx deployment:

kubectl create deployment nginx --image=nginx

Expose it on a NodePort:

kubectl expose deployment nginx --port=80 --type=NodePort

Check that nginx is running:

kubectl get pod,svc
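The NodePort (30415 in this walkthrough) is the high port in the PORT(S) column, e.g. 80:30415/TCP. It can also be read directly with an equivalent one-liner (added here for convenience):

kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'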

At this point, open the NodePort in the Alibaba Cloud security group; otherwise it cannot be reached.

Then open in a browser:

http://47.100.41.232:30415/
http://101.132.190.9:30415/
http://101.132.186.133:30415/

All three nodes are reachable, which means the cluster is up.

Scale the nginx deployment to 3 replicas:

kubectl scale deployment nginx --replicas=3
kubectl get pods
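To see which node each replica landed on:

kubectl get pods -o wide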

3. Deploy the Dashboard

3.1 Create kubernetes-dashboard.yaml on node1

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort 
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

3.2 Pull the image with Docker

docker pull lizhenliang/kubernetes-dashboard-amd64:v1.10.1

Apply the kubernetes-dashboard.yaml file:

kubectl apply -f kubernetes-dashboard.yaml

Check the exposed port:

kubectl get pods,svc -n kube-system

Access the Dashboard web UI. Note that the Dashboard only serves HTTPS; plain http URLs will not work.

You also need to open the port in the Alibaba Cloud security group:

https://47.100.41.232:30001/#!/login

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
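If you only need the bearer token itself for the login page, an equivalent one-liner (not in the original) is:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d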

The Dashboard is reachable from all three servers:

https://47.100.41.232:30001/#!/storageclass?namespace=default
https://101.132.190.9:30001/#!/overview?namespace=default
https://101.132.186.133:30001/#!/overview?namespace=default

The k8s cluster setup is complete.

References:

kubeadm部署Kubernetes(k8s)完整版详细教程 (CSDN blog): https://blog.csdn.net/fyihdg/article/details/136058541

