How we fixed a nasty out-of-space issue by leveraging the flexibility of the Percona Operator for MySQL (PMO)

When planning a database deployment, one of the most challenging factors to consider is the amount of space to dedicate to data on disk.

This is even more cumbersome when working on bare metal, given that adding space there is definitely more difficult than in the cloud.

That is the point: with cloud storage like EBS or similar, it is normally easier to extend volumes, which gives us the luxury of planning the space to allocate for data with a fair degree of relaxation.

Is this also true when using a Kubernetes-based solution like the Percona Operator for MySQL? Well, it depends on where you run it; however, if the platform you choose supports the option to extend volumes, Kubernetes itself gives you the possibility to do so as well.

However, if it can go wrong it will, and ending up with a fully filled device with MySQL is not a fun experience. 

As you know, in normal deployments, when MySQL has no space left on the device it simply stops working, causing a production down event, which of course is something we want to avoid at any cost.

This blog is the story of what happened, what was supposed to happen and why. 

The story 

The case was on AWS using EKS.

Given all the above, I was quite surprised when we had a case in which a PMO-based deployment went out of space. We started to dig and review what was going on and why.

The first thing we did was to quickly investigate what was really taking up space. It could have been an easy win if most of the space had been taken by some log, but unfortunately this was not the case: the data itself was taking all the available space.

The next step was to check which storage class was used for the PVCs:

k get pvc
NAME                         VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
datadir-mt-cluster-1-pxc-0   pvc-<snip>   233Gi      RWO            io1
datadir-mt-cluster-1-pxc-1   pvc-<snip>   233Gi      RWO            io1
datadir-mt-cluster-1-pxc-2   pvc-<snip>   233Gi      RWO            io1

OK, we use the io1 storage class (SC); it is now time to check whether the SC supports volume expansion:

kubectl describe sc io1
Name:            io1
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"},"name":"io1"},"parameters":{"fsType":"ext4","iopsPerGB":"12","type":"io1"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=false
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,iopsPerGB=12,type=io1
AllowVolumeExpansion:  <unset> <------------
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

And no, it is not enabled. In this case we cannot just go and expand the volume; we must change the storage class settings first.
To enable volume expansion, you need to delete the storage class and recreate it with the option enabled.
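
A minimal sketch of that operation against the io1 class used above (deleting a StorageClass does not touch PVs or PVCs already provisioned from it, only the class definition):

# Back up the current definition, delete it, and recreate it with the flag set
kubectl get sc io1 -o yaml > io1-sc.yaml
kubectl delete sc io1
# edit io1-sc.yaml: add "allowVolumeExpansion: true" at the top level, then
kubectl apply -f io1-sc.yaml

Note that allowVolumeExpansion is one of the few mutable StorageClass fields, so patching the existing class with kubectl patch sc io1 -p '{"allowVolumeExpansion": true}' may also be enough.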

Unfortunately, we were unsuccessful in that operation: the storage class kept showing ALLOWVOLUMEEXPANSION as unset.

As said, this was a production down event, so we could not invest too much time in digging into why the mode was not changing correctly; we had to act quickly.

The only option we had to fix it was:

  • Expand the io1 volumes from the AWS console (or the AWS CLI)
  • Resize the file system
  • Patch the relevant Kubernetes objects so Kubernetes correctly sees the new volume size

Expanding EBS volumes from the console is trivial: go to Volumes, select the volume you want to modify, choose Modify, change the size to the desired one, and you are done.
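
The same can be done with the AWS CLI; a rough sketch, where the volume id is a placeholder and the size is in GiB:

# Ask EBS to grow the volume to 350 GiB
aws ec2 modify-volume --volume-id <volume-id> --size 350

# Optionally follow the progress of the modification
aws ec2 describe-volumes-modifications --volume-ids <volume-id>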

Once that is done, find the node hosting the pod that has the volume mounted:

 k get pods -o wide|grep mysql-0
NAME                                        READY     STATUS    RESTARTS   AGE    IP            NODE             
cluster-1-pxc-0                               2/2     Running   1          11d    10.1.76.189     <mynode>.eu-central-1.compute.internal

Then we need to get the ID of the PVC's volume so we can identify it on the node:

k get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
datadir-cluster-1-pxc-0   Bound    pvc-1678c7ee-3e50-4329-a5d8-25cd188dc0df   233Gi      RWO            io1

One note: when doing this kind of recovery with a PXC-based solution, always recover node-0 first, then the others.

So we connect to <mynode> and identify the volume: 

lsblk | grep pvc-1678c7ee-3e50-4329-a5d8-25cd188dc0df
nvme1n1      259:4    0  350G  0 disk /var/lib/kubelet/pods/9724a0f6-fb79-4e6b-be8d-b797062bf716/volumes/kubernetes.io~aws-ebs/pvc-1678c7ee-3e50-4329-a5d8-25cd188dc0df <-----

At this point we can resize it:

root@ip-<snip>:/# resize2fs  /dev/nvme1n1
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/nvme1n1 is mounted on /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-central-1a/vol-0ab0db8ecf0293b2f; on-line resizing required
old_desc_blocks = 30, new_desc_blocks = 44
The filesystem on /dev/nvme1n1 is now 91750400 (4k) blocks long.

The good thing is that as soon as you do that, the MySQL daemon sees the space and restarts. However, this happens only at the pod level, and Kubernetes will still report the old size:

k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON
pvc-1678c7ee-3e50-4329-a5d8-25cd188dc0df   233Gi      RWO            Delete           Bound    pxc/datadir-cluster-1-pxc-0   io1

To align Kubernetes with the real size, we must patch the stored information; the command is the following:

kubectl patch pvc <pvc-name>  -n <pvc-namespace> -p '{ "spec": { "resources": { "requests": { "storage": "NEW STORAGE VALUE" }}}}'
For example:
kubectl patch pvc datadir-cluster-1-pxc-0 -n pxc -p '{ "spec": { "resources": { "requests": { "storage": "350G" }}}}'

Remember to use as pvc-name the NAME reported by kubectl get pvc.

Once this is done, Kubernetes will see the new volume size correctly.
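
A quick sanity check after the patch, reusing the names from this cluster:

# Both the claim and its bound volume should now report the expanded size
kubectl get pvc datadir-cluster-1-pxc-0 -n pxc
kubectl get pv pvc-1678c7ee-3e50-4329-a5d8-25cd188dc0df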

Just repeat the process for node-1 and node-2 and… done, the cluster is up again.

Finally, do not forget to modify your custom resource file (cr.yaml) to match the new volume size, e.g.:

    volumeSpec:
      persistentVolumeClaim:
        storageClassName: "io1"
        resources:
          requests:
            storage: 350G
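
Then push the updated definition to the cluster; a sketch assuming the operator's standard deploy/cr.yaml layout (your path and namespace may differ):

# Re-apply the custom resource so the declared size matches the volumes we expanded by hand
kubectl apply -f deploy/cr.yaml -n pxc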

The whole process took just a few minutes. It was now time to investigate why the incident happened and why the storage class was not allowing expansion in the first place.

 

Why it happened

Well, first and foremost, the platform was not correctly monitored. As such, there was a lack of visibility into space utilization and no alert about disk space.

This was easy to solve by enabling the PMM feature in the cluster cr and setting the alerts in PMM once the nodes joined it (see https://docs.percona.com/percona-monitoring-and-management/get-started/alerting.html for details on how to do that).
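
For reference, PMM is enabled in the pmm section of cr.yaml; a sketch in which the image tag and server host are placeholders for your environment:

  pmm:
    enabled: true
    image: percona/pmm-client:2.32.0    # example tag, align it with your PMM server version
    serverHost: monitoring-service      # the PMM server service reachable from the cluster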

The second issue was the problem with the storage class. Once we had the time to carefully review the configuration files, we identified extra indentation in the SC definition: allowVolumeExpansion had been pushed under parameters instead of being declared at the top level, so Kubernetes treated it as just another storage parameter and ignored the directive.

It was supposed to be:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: io1
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "12"
  fsType: ext4 
allowVolumeExpansion: true <----------

It was:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: io1
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "12"
  fsType: ext4 
  allowVolumeExpansion: true. <---------

What was concerning was the lack of any error returned by the Kubernetes API: the configuration was accepted but not really validated.
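
A quick way to see what the API server actually stored for the field (a small sketch against the same io1 class):

kubectl get sc io1 -o jsonpath='{.allowVolumeExpansion}{"\n"}'
# prints "true" only when the field really sits at the top level of the object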

In any case, once we had fixed the typo and recreated the SC, the setting for volume expansion was correctly accepted:

kubectl describe sc io1
Name:            io1
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"},"name":"io1"},"parameters":{"fsType":"ext4","iopsPerGB":"12","type":"io1"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=false
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,iopsPerGB=12,type=io1
AllowVolumeExpansion:  True    
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

What should have happened instead?

If proper monitoring and alerting had been in place, the administrators would have had time to act and extend the volumes without downtime.

However, the procedure for extending volumes on Kubernetes is not complex, but it is also not as straightforward as you may think. My colleague Natalia Marukovich wrote this blog post (https://www.percona.com/blog/percona-operator-volume-expansion-without-downtime/) that gives you step-by-step instructions on how to extend the volumes without downtime.
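
At its core, once allowVolumeExpansion is enabled on the storage class, the online path boils down to patching the claim with the new desired size; a minimal sketch where the size and names are examples (see the linked post for the full procedure):

# The storage driver takes care of growing the volume and the filesystem;
# we only declare the new requested size on the claim.
kubectl patch pvc datadir-cluster-1-pxc-0 -n pxc \
  -p '{"spec":{"resources":{"requests":{"storage":"400G"}}}}'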

Conclusions

Using the cloud, containers, automation, or more complex orchestrators like Kubernetes does not solve everything, does not prevent mistakes from happening, and, more importantly, does not make the right decisions for you.

You must set up a proper architecture that includes backup, monitoring and alerting. You must set the right alerts and act on them in time. 

Finally, automation is cool; however, the devil is in the details, and typos are his day-to-day joy. Be careful and check what you put online, and do not rush it. Validate, validate, validate…

Great stateful MySQL to all. 

