Persistent volume claims (PVCs) that are in active use by a pod and persistent volumes (PVs) that are bound to PVCs can be protected from premature removal.
The Storage Object in Use Protection feature has the following status:

- Kubernetes v1.10: beta
- Kubernetes v1.11: stable
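
As a beta feature in v1.10 it is typically on by default, but if it has been disabled it can be re-enabled via the `StorageObjectInUseProtection` feature gate and the admission plugin of the same name. The snippet below is only a sketch; how the flags are set depends on how your control plane is deployed.

```shell
# Sketch only: flags that enable the feature on a v1.10 control plane where it
# has been turned off. How these flags are set depends on your deployment
# (kubeadm static pod manifests, systemd units, a managed control plane, ...).
kube-apiserver \
  --feature-gates=StorageObjectInUseProtection=true \
  --enable-admission-plugins=StorageObjectInUseProtection
kube-controller-manager \
  --feature-gates=StorageObjectInUseProtection=true
```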
The example below uses a GCE PD StorageClass; however, similar steps can be performed for any volume type. Create a StorageClass for convenient storage provisioning:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```
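
Assuming the manifest above is saved as `slow-sc.yaml` (a hypothetical file name), the StorageClass can be created and checked with:

```shell
kubectl create -f slow-sc.yaml   # hypothetical file name
kubectl get storageclass slow
```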
Verification scenarios follow below.

Create a PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: slzc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 3.7Gi
```
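
A minimal sketch of creating the claim, assuming the manifest is saved as `pvc-slzc.yaml` (a hypothetical file name):

```shell
kubectl create -f pvc-slzc.yaml   # hypothetical file name
kubectl get pvc slzc
```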
Check that the PVC has the finalizer `kubernetes.io/pvc-protection` set:

```shell
$ kubectl describe pvc slzc
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Bound
Volume:        pvc-bee8c30a-d6a3-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  ----    ------                 ---- ----                         -------
  Normal  ProvisioningSucceeded  2m   persistentvolume-controller  Successfully provisioned volume pvc-bee8c30a-d6a3-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```
Create a pod that uses the PVC:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app1
spec:
  containers:
    - name: test-pod
      image: k8s.gcr.io/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 60 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```
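
A sketch of creating the pod and watching it start, assuming the manifest is saved as `app1.yaml` (a hypothetical file name):

```shell
kubectl create -f app1.yaml    # hypothetical file name
kubectl get pod app1 --watch   # wait until the pod reaches Running
```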
Wait until the pod status is `Running`, i.e. the PVC becomes actively used. Then delete the PVC and check that the PVC status is `Terminating`:

```
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Terminating (since Fri, 01 Dec 2017 14:47:55 +0000)
Volume:        pvc-803a1f4d-d6a6-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  ----    ------                 ---- ----                         -------
  Normal  ProvisioningSucceeded  52s  persistentvolume-controller  Successfully provisioned volume pvc-803a1f4d-d6a6-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```
Wait until the pod status is `Terminated` (either delete the pod or wait until it finishes). Afterwards, check that the PVC is removed.

For the second scenario, create a pod that uses the PVC again (recreate the PVC first if it was removed in the previous scenario); this time the pod keeps the volume in use for longer:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app1
spec:
  containers:
    - name: test-pod
      image: k8s.gcr.io/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 600 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```
Wait until the pod status is `Running`, i.e. the PVC becomes actively used. Then delete the PVC and check that the PVC status is `Terminating`:

```
Name:          slzc
Namespace:     default
StorageClass:  slow
Status:        Terminating (since Fri, 01 Dec 2017 14:47:55 +0000)
Volume:        pvc-803a1f4d-d6a6-11e7-9af0-42010a800002
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  ----    ------                 ---- ----                         -------
  Normal  ProvisioningSucceeded  52s  persistentvolume-controller  Successfully provisioned volume pvc-803a1f4d-d6a6-11e7-9af0-42010a800002 using kubernetes.io/gce-pd
```
While the PVC is in the `Terminating` state, create a second pod that uses the same PVC:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app2
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "date > /mnt/app1.txt; sleep 600 && exit 0 || exit 1"
      volumeMounts:
        - name: path-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: path-pvc
      persistentVolumeClaim:
        claimName: slzc
```
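
Assuming the manifest is saved as `app2.yaml` (a hypothetical file name), the second pod can be created and inspected with:

```shell
kubectl create -f app2.yaml   # hypothetical file name
kubectl describe pod app2     # the pod stays Pending; its events show why
```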
The second pod does not start; its events report that the claim is being deleted:

```
Warning  FailedScheduling  18s (x4 over 21s)  default-scheduler  persistentvolumeclaim "slzc" is being deleted
```
Wait until both pods are `Terminated` or `Completed` (either delete the pods or wait until they finish). Afterwards, check that the PVC is removed.

The example below uses a HostPath PV.
Verification scenarios follow below.

Create a PV:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  hostPath:
    path: "/tmp/data"
```
Check that the PV has the finalizer `kubernetes.io/pv-protection` set. If the PV is deleted while it is still bound to a PVC (here `default/task-pv-claim`), it is not removed; instead, its status changes to `Terminating`:

```
Name:            task-pv-volume
Labels:          type=local
Annotations:     pv.kubernetes.io/bound-by-controller=yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Terminating (lasts 1m)
Claim:           default/task-pv-claim
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:
Events:            <none>
```
Again, create the same PV. Then create a PVC that binds to it:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
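
A sketch of the commands for this step, assuming the PVC manifest is saved as `task-pv-claim.yaml` (a hypothetical file name):

```shell
kubectl create -f task-pv-claim.yaml   # hypothetical file name
kubectl delete pv task-pv-volume       # try to delete the PV while it is bound
kubectl get pv
```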
Because the PV is still bound to the PVC, it is not removed when deleted; its status changes to `Terminating`:

```
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                   STORAGECLASS   REASON    AGE
task-pv-volume   1Gi        RWO            Delete           Terminating   default/task-pv-claim   standard                59s
```
Delete the PVC; once the PVC is gone, the PV is removed as well:

```shell
$ kubectl delete pvc task-pv-claim
persistentvolumeclaim "task-pv-claim" deleted
$ kubectl get pvc
No resources found.
$ kubectl get pv
No resources found.
```