Log in to one of the servers (this server will also host the Nexus service) and perform the following steps.
Fetch the source code
$ git clone --depth=1 --recursive https://gitee.com/bottlelee/kubernetes-rook-nexus/
$ cd kubernetes-rook-nexus
During the git clone, the kubespray submodule download may fail for network reasons. If so, re-run git submodule update --remote until it completes.
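Re-running the update by hand works, but the retry can also be scripted. Below is a minimal sketch; the retry helper and the attempt count are my own, not part of the repository. The same pattern applies later when re-running the ansible-playbook command after a network failure.

```shell
# retry <max_attempts> <command...>: run the command until it succeeds,
# giving up after max_attempts tries. Hypothetical helper, not in the repo.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep 2
  done
}

# Usage (network-dependent, so shown as a comment):
# retry 10 git submodule update --remote
```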
Run the create_python_venv.sh script to create an isolated Python environment, then activate it:

$ bash create_python_venv.sh
$ source ./python_venv/bin/activate
If activation succeeds, the terminal prompt will be prefixed with (python_venv), for example:

(python_venv) haibin@lacrimosa:
Make a copy of the inventory directory, named after your environment (e.g. stage). Do not skip this step; it keeps server credentials from being accidentally committed to the git repository.

$ cp -R inventory/sample inventory/stage
Assume the three servers have the following IP assignments:
Hostname | IP |
---|---|
k8s-1 | 172.17.8.101 |
k8s-2 | 172.17.8.102 |
k8s-3 | 172.17.8.103 |
Edit inventory/stage/inventory.ini to match your actual environment:

$ vi inventory/stage/inventory.ini
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
k8s-1 ansible_host=172.17.8.101 ip=172.17.8.101 etcd_member_name=etcd1
k8s-2 ansible_host=172.17.8.102 ip=172.17.8.102 etcd_member_name=etcd2
k8s-3 ansible_host=172.17.8.103 ip=172.17.8.103 etcd_member_name=etcd3
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
k8s-[1:3]
[etcd]
k8s-[1:3]
[kube-node]
k8s-[1:3]
[k8s-cluster:children]
kube-master
kube-node
# ## Hosts in group 'ceph-osd' must be in group 'kube-node' too.
# ## If this group does not exist or is empty, Rook Ceph will be deployed to all kube nodes
[ceph]
[all:vars]
ansible_ssh_user=vagrant
ansible_user='vagrant'
ansible_ssh_private_key_file='/home/haibin/.vagrant.d/insecure_private_key'
kube_network_plugin=calico
The 'ceph' group specifies which nodes Rook Ceph should be enabled on; the corresponding nodes are given the 'ceph=true' label. If the group is left empty or undefined, Rook Ceph is deployed to all machines in the kube-node group by default.
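Once the cluster is up, you can confirm which nodes carry the label. A small sketch, assuming kubectl on the control machine is already pointed at the new cluster; the ceph_nodes wrapper name is my own:

```shell
# List the nodes Rook Ceph will deploy to, i.e. those labeled ceph=true
# by the playbook. Hypothetical wrapper; extra kubectl flags pass through.
ceph_nodes() {
  kubectl get nodes -l ceph=true "$@"
}

# Usage:
# ceph_nodes -o name
```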
Start the deployment
$ ansible-playbook -i inventory/stage/inventory.ini -b play-all.yml
If some steps fail along the way (mostly due to network issues), re-run the command above until everything is OK.
Check kube-system status
$ kubectl -n kube-system get all
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-754c95c575-88k8f 1/1 Running 2 26h
pod/calico-node-79p7f 1/1 Running 0 26h
pod/calico-node-d7sk8 1/1 Running 1 26h
pod/calico-node-rd2pn 1/1 Running 0 26h
pod/coredns-5476d7f756-65fmc 1/1 Running 0 26h
pod/coredns-5476d7f756-x8558 1/1 Running 0 26h
pod/dns-autoscaler-65ddb5b699-h8z4l 1/1 Running 0 26h
pod/kube-apiserver-k8s-1 1/1 Running 0 26h
pod/kube-apiserver-k8s-2 1/1 Running 0 26h
pod/kube-controller-manager-k8s-1 1/1 Running 0 26h
pod/kube-controller-manager-k8s-2 1/1 Running 0 26h
pod/kube-proxy-7prdh 1/1 Running 2 26h
pod/kube-proxy-crdn8 1/1 Running 0 26h
pod/kube-proxy-kmn66 1/1 Running 0 26h
pod/kube-scheduler-k8s-1 1/1 Running 0 26h
pod/kube-scheduler-k8s-2 1/1 Running 0 26h
pod/kubernetes-dashboard-76f47cccf5-2nzmh 1/1 Running 1 26h
pod/nginx-proxy-k8s-3 1/1 Running 2 26h
pod/nodelocaldns-9xrs6 1/1 Running 0 26h
pod/nodelocaldns-fq4rw 1/1 Running 0 26h
pod/nodelocaldns-s5xnt 1/1 Running 1 26h
pod/tiller-deploy-84867676cf-sfhfs 1/1 Running 3 26h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 26h
service/kubernetes-dashboard ClusterIP 10.233.1.227 <none> 443/TCP 26h
service/tiller-deploy ClusterIP 10.233.52.235 <none> 44134/TCP 26h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 3 3 3 3 3 <none> 26h
daemonset.apps/kube-proxy 3 3 3 3 3 beta.kubernetes.io/os=linux 26h
daemonset.apps/nodelocaldns 3 3 3 3 3 <none> 26h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 26h
deployment.apps/coredns 2/2 2 2 26h
deployment.apps/dns-autoscaler 1/1 1 1 26h
deployment.apps/kubernetes-dashboard 1/1 1 1 26h
deployment.apps/tiller-deploy 1/1 1 1 26h
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-754c95c575 1 1 1 26h
replicaset.apps/coredns-5476d7f756 2 2 2 26h
replicaset.apps/dns-autoscaler-65ddb5b699 1 1 1 26h
replicaset.apps/kubernetes-dashboard-76f47cccf5 1 1 1 26h
replicaset.apps/tiller-deploy-84867676cf 1 1 1 26h
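Rather than eyeballing the table above, you can block until every kube-system pod is Ready. A convenience sketch only; the function name and the 300-second timeout are arbitrary choices of mine:

```shell
# Wait until all kube-system pods report the Ready condition, or time out.
# Sketch; not part of the playbooks.
wait_kube_system() {
  kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=300s
}
```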
Check rook-ceph status
$ kubectl -n rook-ceph get all
NAME READY STATUS RESTARTS AGE
pod/rook-ceph-agent-2wf2c 1/1 Running 0 26h
pod/rook-ceph-agent-74z7h 1/1 Running 1 26h
pod/rook-ceph-agent-tfvj8 1/1 Running 0 26h
pod/rook-ceph-mds-my-fs-a-5df698cd57-zxbbs 1/1 Running 2 26h
pod/rook-ceph-mds-my-fs-b-74cd97f5cc-96fm2 1/1 Running 1 26h
pod/rook-ceph-mgr-a-59744ddb4b-pjks4 1/1 Running 0 26h
pod/rook-ceph-mon-a-56cdfc7789-nxzwq 1/1 Running 0 26h
pod/rook-ceph-mon-b-f85cdc7d-g9m4v 1/1 Running 0 26h
pod/rook-ceph-mon-c-54cd685bb7-cdfsn 1/1 Running 1 26h
pod/rook-ceph-operator-74f9995556-k2lmt 1/1 Running 1 26h
pod/rook-ceph-osd-0-6b56c58fcd-sdl5b 1/1 Running 0 26h
pod/rook-ceph-osd-1-559744f5d-2kbs5 1/1 Running 1 26h
pod/rook-ceph-osd-2-5d49c46989-g7qwd 1/1 Running 0 26h
pod/rook-ceph-osd-prepare-k8s-1-f98cn 0/2 Completed 1 26h
pod/rook-ceph-osd-prepare-k8s-2-xgs5t 0/2 Completed 0 26h
pod/rook-ceph-osd-prepare-k8s-3-g9fd5 0/2 Completed 0 26h
pod/rook-ceph-rgw-my-store-6df744d788-5rcrn 1/1 Running 0 26h
pod/rook-ceph-tools-68b89cb877-5nq77 1/1 Running 2 26h
pod/rook-discover-6r8xb 1/1 Running 0 26h
pod/rook-discover-kqsfn 1/1 Running 1 26h
pod/rook-discover-q9psz 1/1 Running 0 26h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rook-ceph-mgr ClusterIP 10.233.29.1 <none> 9283/TCP 26h
service/rook-ceph-mgr-dashboard ClusterIP 10.233.53.128 <none> 8443/TCP 26h
service/rook-ceph-mgr-dashboard-external-https NodePort 10.233.21.80 <none> 8443:30803/TCP 26h
service/rook-ceph-mon-a ClusterIP 10.233.27.62 <none> 6789/TCP,3300/TCP 26h
service/rook-ceph-mon-b ClusterIP 10.233.9.175 <none> 6789/TCP,3300/TCP 26h
service/rook-ceph-mon-c ClusterIP 10.233.33.70 <none> 6789/TCP,3300/TCP 26h
service/rook-ceph-rgw-my-store ClusterIP 10.233.36.93 <none> 80/TCP 26h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/rook-ceph-agent 3 3 3 3 3 <none> 26h
daemonset.apps/rook-discover 3 3 3 3 3 <none> 26h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rook-ceph-mds-my-fs-a 1/1 1 1 26h
deployment.apps/rook-ceph-mds-my-fs-b 1/1 1 1 26h
deployment.apps/rook-ceph-mgr-a 1/1 1 1 26h
deployment.apps/rook-ceph-mon-a 1/1 1 1 26h
deployment.apps/rook-ceph-mon-b 1/1 1 1 26h
deployment.apps/rook-ceph-mon-c 1/1 1 1 26h
deployment.apps/rook-ceph-operator 1/1 1 1 26h
deployment.apps/rook-ceph-osd-0 1/1 1 1 26h
deployment.apps/rook-ceph-osd-1 1/1 1 1 26h
deployment.apps/rook-ceph-osd-2 1/1 1 1 26h
deployment.apps/rook-ceph-rgw-my-store 1/1 1 1 26h
deployment.apps/rook-ceph-tools 1/1 1 1 26h
NAME DESIRED CURRENT READY AGE
replicaset.apps/rook-ceph-mds-my-fs-a-5df698cd57 1 1 1 26h
replicaset.apps/rook-ceph-mds-my-fs-b-74cd97f5cc 1 1 1 26h
replicaset.apps/rook-ceph-mgr-a-59744ddb4b 1 1 1 26h
replicaset.apps/rook-ceph-mon-a-56cdfc7789 1 1 1 26h
replicaset.apps/rook-ceph-mon-b-f85cdc7d 1 1 1 26h
replicaset.apps/rook-ceph-mon-c-54cd685bb7 1 1 1 26h
replicaset.apps/rook-ceph-operator-74f9995556 1 1 1 26h
replicaset.apps/rook-ceph-osd-0-6b56c58fcd 1 1 1 26h
replicaset.apps/rook-ceph-osd-1-559744f5d 1 1 1 26h
replicaset.apps/rook-ceph-osd-2-5d49c46989 1 1 1 26h
replicaset.apps/rook-ceph-rgw-my-store-6df744d788 1 1 1 26h
replicaset.apps/rook-ceph-tools-68b89cb877 1 1 1 26h
NAME COMPLETIONS DURATION AGE
job.batch/rook-ceph-osd-prepare-k8s-1 1/1 13s 26h
job.batch/rook-ceph-osd-prepare-k8s-2 1/1 9s 26h
job.batch/rook-ceph-osd-prepare-k8s-3 1/1 10s 26h
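With everything Running, the Ceph cluster itself can be inspected through the rook-ceph-tools pod shown above. A sketch; the ceph_exec helper name is mine, and kubectl exec against a deployment requires a reasonably recent kubectl:

```shell
# Run a ceph subcommand inside the rook-ceph-tools toolbox deployment.
# Hypothetical helper; any ceph subcommand can be passed through.
ceph_exec() {
  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph "$@"
}

# Usage:
# ceph_exec status
# ceph_exec osd tree
```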