Forbidden: may only update PVC status - amazon-web-services

I installed the EFS CSI driver to mount EFS on EKS; I followed Amazon EFS CSI driver and aws-efs-csi-driver.
I've faced the below error while deploying the PersistentVolumeClaim.
Error from server (Forbidden): error when creating "claim.yml": persistentvolumeclaims "efs-claim" is forbidden: may only update PVC status
StorageClass.yaml -->
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
mountOptions:
  - tls
pv.yaml -->
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxx
pvclaim.yaml -->
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: production-environment
      role: prod
Kindly help me to resolve this

I fixed the issue with aws support.
Posting the resolution might help someone.
We removed the system:nodes and system:bootstrappers permissions of the controller server from the aws-auth ConfigMap, and that fixed the issue.
Previous configmap/aws-auth -->
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxxx:role/eksctl-sc-prod-eks-cluster-NodeInstanceRole-T3B32A19KBZB
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      - system:masters
      rolearn: arn:aws:iam::xxxxxxxxx:role/sc-prod-iam-ec2-instance-profile-bastion
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/jawad846
      username: admin
      groups:
      - system:masters
Current configmap/aws-auth -->
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxx:role/eksctl-sc-prod-eks-cluster-NodeInstanceRole-T3B32A19KBZB
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::xxxxxxxxxx:role/sc-prod-iam-ec2-instance-profile-bastion
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/jawad846
      username: admin
      groups:
      - system:masters
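For reference, a change like this is typically applied by editing the ConfigMap in place (a sketch, assuming your kubectl context has sufficient permissions):
# open the aws-auth ConfigMap in an editor and remove the extra groups from the bastion role entry
kubectl -n kube-system edit configmap aws-auth
# verify the result
kubectl -n kube-system get configmap aws-auth -o yaml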
Thanks @all

Related

RabbitMQ only shows one node

I have been trying to set up RabbitMQ on a k8s cluster. I finally got everything set up, but only one node shows up on the management UI. Here are my steps:
1. Dockerfile Setup
I do this to enable autocluster:
FROM rabbitmq:3.8-rc-management-alpine
MAINTAINER kevlai
RUN rabbitmq-plugins --offline enable rabbitmq_peer_discovery_k8s
2. Set up RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: borecast-rabbitmq
  namespace: borecast-production
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: borecast-rabbitmq
  namespace: borecast-production
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: borecast-rabbitmq
  namespace: borecast-production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev
subjects:
- kind: ServiceAccount
  name: borecast-rabbitmq
  namespace: borecast-production
3. Set up Secrets
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-secret
  namespace: borecast-production
type: Opaque
data:
  username: a2V2
  password: Ym9yZWNhc3RydWx6
  secretCookie: c2VjcmV0Y29va2llaGVyZQ==
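The values under data must be base64-encoded. A small sketch of how values like these can be generated and checked (the plain-text value below is a placeholder, not taken from the manifest):
# encode a plain-text value for a Secret's data field (-n avoids a trailing newline)
echo -n 'myuser' | base64
# decode an existing value to verify it
echo 'a2V2' | base64 --decode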
4. Set up StorageClass
I'm setting up a StorageClass so k8s will automatically do the provisioning for me on AWS.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: rabbitmq-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-2a
reclaimPolicy: Retain
5. Set up StatefulSets and Services
You can see there are two services. The headless service is for the pods themselves. As for the management service, I'll expose the service for an Ingress controller in order for it to be accessible from outside.
---
apiVersion: v1
kind: Service
metadata:
  name: borecast-rabbitmq-management-service
  namespace: borecast-production
  labels:
    app: borecast-rabbitmq
spec:
  ports:
  - port: 15672
    targetPort: 15672
    name: http
  - port: 5672
    targetPort: 5672
    name: amqp
  selector:
    app: borecast-rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: borecast-rabbitmq-service
  namespace: borecast-production
  labels:
    app: borecast-rabbitmq
spec:
  clusterIP: None
  ports:
  - port: 5672
    name: amqp
  selector:
    app: borecast-rabbitmq
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: borecast-rabbitmq
  namespace: borecast-production
spec:
  serviceName: borecast-rabbitmq-service
  replicas: 3
  template:
    metadata:
      labels:
        app: borecast-rabbitmq
    spec:
      serviceAccountName: borecast-rabbitmq
      containers:
      - image: docker.borecast.com/borecast-rabbitmq:v1.0.3
        name: borecast-rabbitmq
        imagePullPolicy: Always
        resources:
          requests:
            memory: "256Mi"
            cpu: "150m"
          limits:
            memory: "512Mi"
            cpu: "250m"
        ports:
        - containerPort: 5672
          name: amqp
        env:
        - name: RABBITMQ_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secret
              key: username
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secret
              key: password
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secret
              key: secretCookie
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: K8S_SERVICE_NAME
          # value: borecast-rabbitmq-service.borecast-production.svc.cluster.local
          value: borecast-rabbitmq-service
        - name: RABBITMQ_USE_LONGNAME
          value: "true"
        - name: RABBITMQ_NODENAME
          value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME)"
          # value: rabbit@$(MY_POD_NAME).borecast-rabbitmq-service.borecast-production.svc.cluster.local
        - name: RABBITMQ_NODE_TYPE
          value: disc
        - name: AUTOCLUSTER_TYPE
          value: "k8s"
        - name: AUTOCLUSTER_DELAY
          value: "10"
        - name: AUTOCLUSTER_CLEANUP
          value: "true"
        - name: CLEANUP_WARN_ONLY
          value: "false"
        - name: K8S_ADDRESS_TYPE
          value: "hostname"
        - name: K8S_HOSTNAME_SUFFIX
          value: ".$(K8S_SERVICE_NAME)"
          # value: .borecast-rabbitmq-service.borecast-production.svc.cluster.local
        volumeMounts:
        - name: rabbitmq-volume
          mountPath: /var/lib/rabbitmq
      imagePullSecrets:
      - name: regcred
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-volume
      namespace: borecast-production
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: rabbitmq-sc
      resources:
        requests:
          storage: 5Gi
Problem
Everything is working. However, when I access the management UI (i.e. the borecast-rabbitmq-management-service on port 15672), I only see one node showing up, when it should be three:
Also notice that the cluster name is
rabbit@borecast-rabbitmq-0.borecast-rabbitmq-service.borecast-production.svc.cluster.local
but when I log out and log in again, sometimes the number 0 will be changed to 1 or 2 for borecast-rabbitmq-0.
And also notice the node name is
rabbit@borecast-rabbitmq-1.borecast-rabbitmq-service
And you guessed it, sometimes the number is 2 or 0 for borecast-rabbitmq-1.
I have been trying to debug but to no avail. The logs for each pod don't raise any suspicions, and every service and statefulset is working normally. I repeated the five steps multiple times, and if your cluster is on AWS, you can totally replicate my setup by following the steps (after creating the namespace borecast-production of course). If anybody can shed some light on the matter, I'll be eternally grateful.
The problem is with the headless service name definition:
- name: K8S_SERVICE_NAME
  # value: borecast-rabbitmq-service.borecast-production.svc.cluster.local
  value: borecast-rabbitmq-service
which is a building block of node name:
- name: RABBITMQ_NODENAME
  value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME)"
whereas the proper node name should be the FQDN of the Pod (<statefulset name>-<ordinal index>.<headless_svc_name>.<namespace>.svc.cluster.local):
- name: RABBITMQ_NODENAME
  value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
Therefore you ended up with NodeName
borecast-rabbitmq-1.borecast-rabbitmq-service
instead of:
borecast-rabbitmq-1.borecast-rabbitmq-service.borecast-production.svc.cluster.local
Look up the FQDN of the Pods created by the borecast-rabbitmq StatefulSet (in other words, the SRV records of the Pods) with the nslookup utility from inside your cluster, as explained here, to see what form the RABBITMQ_NODENAME is expected to have.
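A minimal sketch of that change, assuming a MY_POD_NAMESPACE variable is added via the downward API (it is not defined in the original StatefulSet), together with an in-cluster lookup to confirm the expected form:
# hypothetical additions to the StatefulSet's env section
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: RABBITMQ_NODENAME
  value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
# from any pod inside the cluster, resolve the headless service to see the Pod FQDNs
nslookup borecast-rabbitmq-service.borecast-production.svc.cluster.local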
Try exposing port 4369 on the headless service;
https://www.rabbitmq.com/clustering.html
see the port access section (a sketch follows below).
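For illustration, a sketch of what the headless service could look like with the standard RabbitMQ clustering ports added (4369 for epmd peer discovery, 25672 for inter-node and CLI communication, per the RabbitMQ docs linked above); this is an assumption about the intended change, not the original manifest:
apiVersion: v1
kind: Service
metadata:
  name: borecast-rabbitmq-service
  namespace: borecast-production
  labels:
    app: borecast-rabbitmq
spec:
  clusterIP: None
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd        # Erlang port mapper daemon, used for peer discovery
  - port: 25672
    name: inter-node  # inter-node and CLI tool communication
  selector:
    app: borecast-rabbitmq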
Had the same issue, and it came down to deleting all the RabbitMQ resources, including the PVCs created under the StatefulSet, and then reinstalling everything from the manifests.

k8s - Using Prometheus with cAdvisor to monitor microservice/Pod data

I'm running the Prometheus Operator in a new Kubernetes cluster and I'm trying to get container details.
The Prometheus query dashboard doesn't provide any container data; when I look at the target I see the following.
Maybe it's because of the roles, but I'm not sure since I'm new to this topic.
I also saw this:
https://github.com/coreos/prometheus-operator/issues/867
and I added the authentication-token-webhook flag, which didn't help, but maybe I didn't do it in the right place...
Any idea what I am missing here?
My operator.yml config looks like the following:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups:
  - extensions
  resources:
  - thirdpartyresources
  verbs:
  - "*"
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - prometheuses
  - prometheuses/finalizers
  - servicemonitors
  verbs:
  - "*"
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources:
  - services
  - endpoints
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
        - --authentication-token-webhook=true
        - --extra-config=kubelet.authentication-token-webhook=true
        image: quay.io/coreos/prometheus-operator:v0.17.0
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 50Mi
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: prometheus-operator
My RBAC looks like the following:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
If some config file is missing please let me know and I'll add it.
Add the below params to the kubelet config on each worker node:
--authentication-token-webhook=true
--extra-config=kubelet.authorization-mode=Webhook
then run the below commands
systemctl daemon-reload
systemctl restart kubelet
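If the kubelet is managed by systemd (for example on a kubeadm-provisioned node), one common way to supply these settings is a drop-in file; the path, file name, and use of KUBELET_EXTRA_ARGS below are assumptions about your setup, not part of the original answer:
# /etc/systemd/system/kubelet.service.d/20-webhook-auth.conf (hypothetical drop-in file)
[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook=true --authorization-mode=Webhook"
After editing, apply it with the systemctl commands above.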

kubectl - cert manager - credentials not found

I want to have TLS termination enabled on Ingress (on top of Kubernetes) on Google Cloud Platform.
My ingress cluster is working, but my cert manager is failing with the error message:
textPayload: "2018/07/05 22:04:00 Error while processing certificate during sync: Error while creating ACME client for 'domain': Error while initializing challenge provider googlecloud: Unable to get Google Cloud client: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /opt/google/kube-cert-manager.json: no such file or directory
"
This is what I did in order to get into the current state:
created cluster, deployment, service, ingress
executed:
gcloud --project 'project' iam service-accounts create kube-cert-manager-sv-security --display-name "kube-cert-manager-sv-security"
gcloud --project 'project' iam service-accounts keys create ~/.config/gcloud/kube-cert-manager-sv-security.json --iam-account kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com
gcloud --project 'project' projects add-iam-policy-binding --member serviceAccount:kube-cert-manager-sv-security@'project'.iam.gserviceaccount.com --role roles/dns.admin
kubectl create secret generic kube-cert-manager-sv-security-secret --from-file=/home/perre/.config/gcloud/kube-cert-manager-sv-security.json
and created the following resources:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-cert-manager-sv-security-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-cert-manager-sv-security
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security
rules:
- apiGroups: ["*"]
  resources: ["certificates", "ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: ["*"]
  resources: ["events"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-cert-manager-sv-security-service-account
subjects:
- kind: ServiceAccount
  namespace: default
  name: kube-cert-manager-sv-security
roleRef:
  kind: ClusterRole
  name: kube-cert-manager-sv-security
  apiGroup: rbac.authorization.k8s.io
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.stable.k8s.psg.io
spec:
  scope: Namespaced
  group: stable.k8s.psg.io
  version: v1
  names:
    kind: Certificate
    plural: certificates
    singular: certificate
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kube-cert-manager-sv-security
  name: kube-cert-manager-sv-security
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-cert-manager-sv-security
      name: kube-cert-manager-sv-security
    spec:
      serviceAccount: kube-cert-manager-sv-security
      containers:
      - name: kube-cert-manager
        env:
        - name: GCE_PROJECT
          value: solidair-vlaanderen-207315
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /opt/google/kube-cert-manager.json
        image: bcawthra/kube-cert-manager:2017-12-10
        args:
        - "-data-dir=/var/lib/cert-manager-sv-security"
        #- "-acme-url=https://acme-staging.api.letsencrypt.org/directory"
        # NOTE: the URL above points to the staging server, where you won't get real certs.
        # Uncomment the line below to use the production LetsEncrypt server:
        - "-acme-url=https://acme-v01.api.letsencrypt.org/directory"
        # You can run multiple instances of kube-cert-manager for the same namespace(s),
        # each watching for a different value for the 'class' label
        - "-class=kube-cert-manager"
        # You can choose to monitor only some namespaces, otherwise all namespaces will be monitored
        #- "-namespaces=default,test"
        # If you set a default email, you can omit the field/annotation from Certificates/Ingresses
        - "-default-email=viae.it@gmail.com"
        # If you set a default provider, you can omit the field/annotation from Certificates/Ingresses
        - "-default-provider=googlecloud"
        volumeMounts:
        - name: data-sv-security
          mountPath: /var/lib/cert-manager-sv-security
        - name: google-application-credentials
          mountPath: /opt/google
      volumes:
      - name: data-sv-security
        persistentVolumeClaim:
          claimName: kube-cert-manager-sv-security-data
      - name: google-application-credentials
        secret:
          secretName: kube-cert-manager-sv-security-secret
Does anyone know what I'm missing?
Your secret resource kube-cert-manager-sv-security-secret probably contains a JSON file named kube-cert-manager-sv-security.json, which does not match the GOOGLE_APPLICATION_CREDENTIALS value. You can confirm the file name in the secret resource with kubectl get secret -oyaml YOUR-SECRET-NAME.
If you change the file path to the actual file name, cert-manager will work fine.
- name: GOOGLE_APPLICATION_CREDENTIALS
  # value: /opt/google/kube-cert-manager.json
  value: /opt/google/kube-cert-manager-sv-security.json
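Alternatively, a sketch of the other direction, assuming you would rather keep the original GOOGLE_APPLICATION_CREDENTIALS value: name the key explicitly when creating the secret, so that it mounts as /opt/google/kube-cert-manager.json:
# recreate the secret with an explicit key name matching the expected path
kubectl create secret generic kube-cert-manager-sv-security-secret \
  --from-file=kube-cert-manager.json=/home/perre/.config/gcloud/kube-cert-manager-sv-security.json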

Kubernetes Autoscaler on AWS not working

I am trying to set up the Kubernetes autoscaler with Amazon AWS as described here: DOCS, but I am getting this error in my cluster-autoscaler pod logs:
E0411 09:23:25.529212 1 static_autoscaler.go:118] Failed to update node registry: RequestError: send request failed caused by: Post https://autoscaling.us-west-2a.amazonaws.com/: dial tcp: lookup autoscaling.us-west-2a.amazonaws.com on 10.96.0.10:53: no such host
Context:
I've created an AWS Auto Scaling group named KubeAutoscale from a launch configuration with my custom instance AMI, which has Ubuntu Server 16.04 LTS (HVM) and Docker with Kubernetes installed (just a raw install).
In the AWS Auto Scaling group I've set a minimum of 2 instances and a maximum of 5 instances (they are in the us-west-2a region). I logged in on one of those 2 instances and set up the Kubernetes cluster, logged in on the other instance and added it to the created cluster, then logged in again on the master (first) instance and ran the autoscaler with this configuration:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["events","endpoints"]
  verbs: ["create", "patch"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["cluster-autoscaler"]
  verbs: ["get","update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch","list","get","update"]
- apiGroups: [""]
  resources: ["pods","services","replicationcontrollers","persistentvolumeclaims","persistentvolumes"]
  verbs: ["watch","list","get"]
- apiGroups: ["extensions"]
  resources: ["replicasets","daemonsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["watch","list"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["watch","list","get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-autoscaler-status"]
  verbs: ["delete","get","update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
      - image: k8s.gcr.io/cluster-autoscaler:v0.6.0
        name: cluster-autoscaler
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 300Mi
        command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --nodes=2:5:KubeAutoscale
        env:
        - name: AWS_REGION
          value: us-west-2a
        volumeMounts:
        - name: ssl-certs
          mountPath: /etc/ssl/certs/ca-certificates.crt
          readOnly: true
        imagePullPolicy: "Always"
      volumes:
      - name: ssl-certs
        hostPath:
          path: "/etc/ssl/certs/ca-certificates.crt"
You have a configuration issue:
env:
- name: AWS_REGION
  value: us-west-2a
Your AWS region is us-west-2, but your AZ is us-west-2a. That's why when the autoscaler generates the URL of the Auto Scaling endpoint, the result is https://autoscaling.us-west-2a.amazonaws.com/ instead of https://autoscaling.us-west-2.amazonaws.com/, which is the correct one.
To fix it, just set AWS_REGION to us-west-2 instead of us-west-2a.
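In other words, the env block should look like this:
env:
- name: AWS_REGION
  value: us-west-2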

Google Cloud, Kubernetes and Volumes

I'm new to GCE and K8s and I'm trying to figure out my first deployment, but I get an error with my volumes:
Failed to attach volume "pv0001" on node "xxxxx" with: GCE persistent disk not found: diskName="pd-disk-1" zone="europe-west1-b"
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx". list of unattached/unmounted volumes=[registrator-claim0]
This is my storage yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    fsType: ext4
    pdName: pd-disk-1
This is my Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: registrator-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
This is my Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: consul
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        service: consul
    spec:
      restartPolicy: Always
      containers:
      - name: consul
        image: eu.gcr.io/xxxx/consul
        ports:
        - containerPort: 8300
          protocol: TCP
        - containerPort: 8400
          protocol: TCP
        - containerPort: 8500
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - -server
        - -bootstrap
        - -advertise=$(MY_POD_IP)
      - name: registrator
        args:
        - -internal
        - -ip=192.168.99.101
        - consul://localhost:8500
        image: eu.gcr.io/xxxx/registrator
        volumeMounts:
        - mountPath: /tmp/docker.sock
          name: registrator-claim0
      volumes:
      - name: registrator-claim0
        persistentVolumeClaim:
          claimName: registrator-claim0
status: {}
What am I doing wrong? Figuring out K8s and GCE isn't that easy. These errors are not exactly helping. Hope someone can help me.
You have to create the actual storage before you define the PV; this can be done with something like:
# make sure you're in the right zone
$ gcloud config set compute/zone europe-west1-b
# create the disk
$ gcloud compute disks create --size 10GB pd-disk-1
Once that's available you can create the PV and the PVC.
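Assuming the manifests above are saved as pv.yaml and pvc.yaml (the file names are an assumption), the claim binding can then be checked with:
# create the PV and PVC, then confirm registrator-claim0 binds to pv0001
$ kubectl apply -f pv.yaml -f pvc.yaml
$ kubectl get pv,pvc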