When Prometheus was installed with Helm, the NFS client provisioner's ServiceAccount was created in the default namespace and ran into a permission problem.
[hadoop@hadoop03 NFS]$ vim nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  #namespace: nfs-client
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"] ## deployed to the default namespace, this reports a permission error
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
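Apply the manifest so the RBAC objects are created (the file name matches the vim command above):
kubectl apply -f nfs-rbac.yaml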
Check the logs of the NFS provisioner pod:
kubectl logs nfs-client-provisioner-764f44f754-wdtqp
E1206 08:52:27.293890 1 leaderelection.go:234] error retrieving resource lock default/fuseim.pri-ifs: endpoints "fuseim.pri-ifs" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot get resource "endpoints" in API group "" in the namespace "default"
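The missing permission can also be confirmed directly against the API server with kubectl auth can-i, impersonating the ServiceAccount named in the error message; a minimal check:
kubectl auth can-i get endpoints --as=system:serviceaccount:default:nfs-client-provisioner -n default
# prints "no" while the permission is missing, "yes" once the RBAC fix below is applied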
Solution: modify the ClusterRole to grant the missing endpoints permission.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"] ### change the permissions to this (grants endpoints access, covering the default namespace)