Author: Jictyvoo | 2022-9-13 20:03:40

Post upgrade to CSI 2.4.2 from 2.2.0 - error "failed to get shared datastores in kubernetes cluster"

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened: Upgraded the CSI driver from 2.2.0 to 2.4.2 following the document https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-3F277B52-68CC-4125-AD0F-E7293940B4B4.html. After the upgrade, creating a PVC fails with the events below:

> Events:
>   Type     Reason                Age                  From                                                                                                 Message
>   ----     ------                ----                 ----                                                                                                 -------
>   Normal   WaitForFirstConsumer  10m                  persistentvolume-controller                                                                          waiting for first consumer to be created before binding
>   Warning  ProvisioningFailed    10m                  csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca  failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166441 [VirtualCenterHost: 172.16.32.10, UUID: 422180a2-b068-15cb-501d-22fc1df1a0ad, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
>   Warning  ProvisioningFailed    2m11s (x7 over 10m)  csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca  failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166439 [VirtualCenterHost: 172.16.32.10, UUID: 4221912b-cf62-5873-be79-1b215ef9dd36, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
>   Normal   Provisioning          53s (x11 over 10m)   csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca  External provisioner is provisioning volume for claim "default/pvc-demo-2"
>   Warning  ProvisioningFailed    53s (x3 over 10m)    csi.vsphere.vmware.com_vsphere-csi-controller-84759bcd6f-ljd6m_b746b604-b6c9-4dc5-80ed-4e009b2018ca  failed to provision volume with StorageClass "mongo-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: no shared datastores found for nodeVm: VirtualMachine:vm-166440 [VirtualCenterHost: 172.16.32.10, UUID: 4221b82a-2b93-342f-7e63-4dec57b3784e, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-2, VirtualCenterHost: 172.16.32.10]]
>   Normal   ExternalProvisioning  28s (x43 over 10m)   persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
>  ~
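For reference, the claim we create is essentially the following. The claim name, namespace, and StorageClass come from the events above; the requested size, access mode, and the busybox consumer pod are illustrative (with WaitForFirstConsumer, provisioning only starts once a pod using the claim is scheduled):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-2
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce        # assumed access mode
  storageClassName: mongo-sc
  resources:
    requests:
      storage: 5Gi         # illustrative size
---
# placeholder consumer pod so the WaitForFirstConsumer binding kicks in
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo-2-consumer
  namespace: default
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-demo-2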

We were able to create PVCs with version 2.2.0, so this is not a permissions issue.
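As a side note, one way to see which datastores the driver is actually evaluating is to grep the controller logs. This is a sketch: the deployment name matches the controller pod shown in the events, the container name follows the standard manifests, and the namespace is kube-system in our layout (newer releases may deploy the controller into a different namespace, so adjust accordingly):

> kubectl -n kube-system logs deployment/vsphere-csi-controller -c vsphere-csi-controller | grep -i datastore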

This is our cloud config:

[Global]
insecure-flag = "true"
user = <>
password = <>
port = 
secret-namespace = "kube-system"

[VirtualCenter "172.16.32.10"]
datacenters = "sadc-npe-icon-dc"

[Labels]
region = k8s-regions
zone = k8s-zones
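This file is consumed by the driver as a Kubernetes secret. Assuming it is saved as csi-vsphere.conf and that the deployment uses the standard secret name, recreating the secret after an edit looks roughly like this (the namespace is an assumption: ours is kube-system, newer releases default to a dedicated driver namespace):

> kubectl create secret generic vsphere-config-secret \
    --from-file=csi-vsphere.conf \
    --namespace=kube-system \
    --dry-run=client -o yaml | kubectl apply -f -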

These are our nodes; the VM IDs mentioned in the error are the master nodes.

> kubectl get nodes --show-labels
NAME                                                 STATUS   ROLES               AGE   VERSION   LABELS
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1   Ready    controlplane,etcd   89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-a,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1   Ready    worker              89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-a,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1   Ready    controlplane,etcd   89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1   Ready    worker              89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1   Ready    controlplane,etcd   89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-4.mem-8gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1   Ready    worker              89d   v1.21.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=vsphere-vm.cpu-16.mem-32gb.os-centos7,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=region-1,failure-domain.beta.kubernetes.io/zone=zone-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
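To view just the region/zone labels per node without the full label dump, kubectl's -L column flag works; the newer topology.kubernetes.io keys are included here purely for comparison and may simply come back empty on these nodes:

> kubectl get nodes \
    -L failure-domain.beta.kubernetes.io/region \
    -L failure-domain.beta.kubernetes.io/zone \
    -L topology.kubernetes.io/region \
    -L topology.kubernetes.io/zone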

Additional information:

> kubectl describe nodes | grep "ProviderID"
ProviderID:                   vsphere://4221912b-cf62-5873-be79-1b215ef9dd36
ProviderID:                   vsphere://4221ec20-7e72-4d83-321a-52abfcd760e0
ProviderID:                   vsphere://422180a2-b068-15cb-501d-22fc1df1a0ad
ProviderID:                   vsphere://4221dbbd-be70-6d99-4541-37124fd222ee
ProviderID:                   vsphere://4221b82a-2b93-342f-7e63-4dec57b3784e
ProviderID:                   vsphere://422179e1-53ad-cc38-b9ea-9fe01eaf32ba
> 
> kubectl get CSINode
NAME                                                 DRIVERS   AGE
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-master1   1         89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az1-v2-worker1   1         89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-master1   1         89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az2-v2-worker1   1         89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-master1   1         89d
lxk8s-carbon-upgrade-sadc-sbx-mongo-az3-v2-worker1   1         89d
> 
> kubectl describe sc mongo-sc
Name:                  mongo-sc
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           csi.vsphere.vmware.com
Parameters:            csi.storage.k8s.io/fstype=ext4,storagepolicyname=k8s
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
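The describe output above corresponds roughly to this StorageClass manifest (reconstructed from the fields shown, so treat it as a sketch rather than the exact YAML we applied):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-sc
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  storagepolicyname: k8s
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer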

What you expected to happen:

I would expect the PVC to be created and bound.

How to reproduce it (as minimally and precisely as possible):

Create a K8s cluster with CSI 2.2.0 and upgrade the CSI to 2.4.2 as per the link above.
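A quick post-upgrade sanity check that the driver re-registered (generic kubectl queries, nothing version specific; the CSIDriver name matches the provisioner shown in the events):

> kubectl get csidriver csi.vsphere.vmware.com
> kubectl get csinode
> kubectl get pods -A | grep vsphere-csi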

Anything else we need to know?:

Environment:

csi-vsphere version: 2.4.2
vsphere-cloud-controller-manager version:
Kubernetes version: 1.21.15
vSphere version: 7.0.2
OS (e.g. from /etc/os-release): CentOS 6
Kernel (e.g. uname -a):
Install tools:
Others: