Backup Data

A backup of CXInsights data is needed to restore the CXInsights dashboards in case of a disaster event. The backup configuration can be set up automatically via ansible. In case the automated way fails, the customer can configure it manually.

We rely on the rsync Linux utility for the backup. The Kubernetes persistent volumes in which the CXInsights containers keep the dashboard information are copied as the backup. The customer can keep the backup on the local machine, but that copy may be lost in a disaster. To avoid this data loss on the local box, the solution is to use a share on another machine that is available to the local machine via the Network File System (NFS). The customer must mount the NFS share on the local box before proceeding with the backup.
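For illustration, the backup boils down to an incremental rsync of the persistent-volume directory to the backup location. The paths below are example values only; the shipped backup script (described later) performs the actual invocation:

sudo rsync -a /opt/local-path-provisioner/ /mnt/some/backup/path/   # archive mode; re-runs transfer only changed files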

The table below summarizes the steps based on the backup data location (on the local machine, on a Linux NFS share, or on a Windows NFS share), and the links provided can help in configuring the NFS share.

Backup on local machine (not recommended)

Prerequisite: The CXInsights Kubernetes cluster is installed, and hence the required volumes are created under the folder path "/opt/local-path-provisioner/".

Packages to install and commands: The rsync and cron utilities should be present on the Linux box. If not, install them with the command below.

sudo yum install rsync cronie

Links:

rsync: https://linux.die.net/man/1/rsync

cronjob:

https://www.guru99.com/crontab-in-linux-with-examples.html

https://tecadmin.net/crontab-in-linux-with-20-examples-of-cron-schedule/

Comment/Recommendation: Backup on the local machine is not advisable. Data may be lost in a hard disk crash or any other disaster scenario.

Backup on Linux NFS share

Prerequisite: The CXInsights Kubernetes cluster is installed.

Packages to install and commands: The Linux NFS share should be mounted on a local path.

sudo mount -t nfs nfsdata.linuxdomain.com:/nfs/shared-volume /mnt/some/backup/path

Link: Setup Linux NFS share: https://www.tecmint.com/how-to-setup-nfs-server-in-linux/

Comment/Recommendation: The customer needs to take care of the availability of the NFS share on their own. The customer also needs to note the backup path, which will be required in a restore operation.

Backup on Windows NFS share

Prerequisite: The CXInsights Kubernetes cluster is installed.

Packages to install and commands: The Windows NFS share should be mounted on a local path.

sudo mount -t nfs nfsdata.windomain.com:/nfs/shared-volume /mnt/some/backup/path

Link: Setup Windows NFS share: https://support.microfocus.com/kb/doc.php?id=7020834

Comment/Recommendation: The customer needs to take care of the high availability of the NFS share on their own.

Automatic backup configuration through ansible

Step 1. Decide the backup destination and mount the NFS share: once the NFS share is mounted on the local machine's mount point, say /mnt/some/nfs/share, it will act as the backup directory. One can verify the mount with the command below.

  mount | grep "/mnt/some/nfs/share"   # one should see an entry in the output
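If the mount should survive reboots, an /etc/fstab entry of the following form can be added (the host and paths are the same example values used earlier):

nfsdata.linuxdomain.com:/nfs/shared-volume  /mnt/some/nfs/share  nfs  defaults  0  0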

Step 2. Verify that the current user is privileged to write data on the same path. If not, grant the current user write permission on that path.
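A quick way to check is to create and remove a test file on the mount. The chown below is one example of granting access; depending on the NFS export options, the permissions may need to be adjusted on the server side instead:

touch /mnt/some/nfs/share/.write_test && rm /mnt/some/nfs/share/.write_test   # succeeds only if the path is writable
sudo chown $(whoami) /mnt/some/nfs/share   # example: make the current user the owner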

Step 3. Provide the backup_dir variable in the group_vars/all.yml file.

Before starting the ansible installation, one needs to provide ansible with the path of the backup directory configured in step 1.

The user needs to edit the file group_vars/all.yml and update the variable backup_dir and, optionally, the cron expression for the backup frequency (default "0 0 * * *"), as below.

backup_dir: /home/cxinsights/cxinsights-playbook-k3s/backup   # mandatory for backup
cron_schedule: "0 0 * * *"   # default: daily at 12 am (midnight)

Note that the customer can change the cronjob frequency as per their need. Increasing the frequency does not increase the load of the backup activity, since rsync pushes only incremental changes.

Step 4. If the user wishes to change the cronjob frequency, they must validate the provided expression. (Note: the cronjob will be added for the root user.)
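For reference, a cron expression has five fields: minute, hour, day of month, month, and day of week. Two valid examples:

"0 0 * * *"     # minute 0, hour 0: daily at midnight (the default)
"0 */6 * * *"   # minute 0 of every sixth hour: every six hours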

Step 5. Run the ansible installation. The ansible backup role will internally call cxinsight-backup-restore.sh with the arguments provided above.
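For reference, the installation is typically launched from the playbook directory. The inventory and playbook file names below are assumptions and may differ in your deployment:

cd /home/cxinsights/cxinsights-playbook-k3s
ansible-playbook -i inventory site.yml   # assumption: entry-point playbook and inventory names vary per deployment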

Note: The above steps generate the backup configuration only; no actual backup has been performed yet. The backup configuration internally creates the volume mapping for rsync and creates a cronjob script located at /home/cxinsights/.gcxi_backup_cron.sh, which is responsible for the actual backup (i.e. for running rsync). The same '.gcxi_backup_cron.sh' is passed as the script to the cronjob, which runs at the configured interval. If old data exists in the backup directory, the configuration will archive that data in the same directory (e.g. gcxi-backup_2020-08-06_01-55-36.tar.gz). That archive can be used as a checkpoint if the current operation (say an upgrade, rollback, or restore) fails. The customer can safely delete the archived file once the operation (restore, upgrade, or rollback) is successful.
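To confirm that the configuration registered the cronjob (it is added for the root user, as noted above), one can inspect root's crontab:

sudo crontab -l | grep gcxi_backup   # expect an entry invoking /home/cxinsights/.gcxi_backup_cron.sh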

Step 6. (Optional) Create the custom dashboards, if any, and perform a backup by running 'sudo /home/cxinsights/.gcxi_backup_cron.sh', or the customer can rely on the scheduled backup.

Backup activity logs are captured in the file '/home/cxinsights/.gcxi_backup_trace.log'.
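To watch the backup activity live, for example while testing a manual run from step 6:

tail -f /home/cxinsights/.gcxi_backup_trace.log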

Manual backup configuration

This section is required only if ansible fails to perform the backup configuration due to some unexpected error (e.g. due to a wrong cron expression), or if the customer deletes the pods, for example using helm delete.

Steps 1 and 2 from the automated way are required here as well.

Step 3. Run cxinsight-backup-restore.sh manually.

Run cxinsight-backup-restore.sh as below, providing the backup directory path configured in step 1 (say /mnt/some/nfs/share/backup) where the customer wants to keep the backup data.

Syntax as below:

cxinsight-backup-restore.sh backup <backup dir> ["cron expression" (optional)]   # if not provided, the default cron expression is "0 0 * * *", i.e. run the cron job daily at 12:00 am

E.g. take a backup in /mnt/some/nfs/share/backup and run the cronjob every six hours:

sudo sh cxinsight-backup-restore.sh backup /mnt/some/nfs/share/backup "0 */6 * * *"

Note: One needs to run cxinsight-backup-restore.sh only once after a successful deployment. Since a re-run of cxinsight-backup-restore.sh will archive the old data in the backup directory, running it again is not recommended. The customer can rely on the scheduled backup, or run 'sudo /home/cxinsights/.gcxi_backup_cron.sh' for an instant backup.

Remember that if the customer has updated the volumes in some way (e.g. helm delete ...), then one needs to run cxinsight-backup-restore.sh again to update the volume mapping in the cronjob (resetting the mapping is important).
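For example, after a 'helm delete', the mapping can be reset by re-running the script against the same example backup directory used earlier (as described above, this will archive the existing backup data):

sudo sh cxinsight-backup-restore.sh backup /mnt/some/nfs/share/backup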