Version: 2.6

Add cloud-based NFS backup locations

Portworx Backup allows you to add cloud-based NFS targets as backup locations. Refer to the following topics to add EFS, GCP, and Azure cloud-based NFS targets as backup locations in Portworx Backup.

Prerequisite

  • Make sure that NFS packages are installed on all the worker nodes of all of your Kubernetes clusters before adding any of these cloud-based NFS backup locations.
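The required NFS client packages can be installed with the distribution's package manager. A minimal sketch, assuming RPM-based (Amazon Linux, RHEL) or Debian/Ubuntu-based worker nodes:

```shell
# On RPM-based nodes (Amazon Linux, RHEL, CentOS), the NFS client is in nfs-utils:
sudo yum install -y nfs-utils

# On Debian/Ubuntu-based nodes, the equivalent package is nfs-common:
sudo apt-get install -y nfs-common
```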

Add EFS as NFS backup target

Prerequisite

Ensure that you have the required credentials and permissions to create EKS clusters and EFS volumes.

Create EFS volume on AWS

To add an EFS volume as an NFS target, perform the following steps:

  1. Create an EKS cluster on which you want to back up your data. If you are creating your first EKS cluster, apply the following standard spec:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: <cluster-name>
      region: <region-name>
      version: "1.26"
    managedNodeGroups:
    - name: storage-nodes
      instanceType: t3.2xlarge
      minSize: 4
      maxSize: 4
      volumeSize: 200
      amiFamily: AmazonLinux2
      labels: {role: worker, "px/node-type": "storage"}
      tags:
        nodegroup-role: worker
      ssh:
        allow: true
        publicKeyPath: ~/.ssh/id_rsa.pub
      iam:
        attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage
        - arn:aws:iam::649513742363:policy/<other-policies-needed-for-cluster>
        withAddonPolicies:
          imageBuilder: true
          autoScaler: true
          ebs: true
          fsx: true
          efs: true
          albIngress: true
          cloudWatch: true
    availabilityZones: [ '<region-1>', '<region-2>', '<region-3>' ]
  2. Once your first EKS cluster is ready, set up the second cluster with the following spec, adding your corresponding VPC and subnet details. This cluster can act as an application cluster:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: <cluster-name>
      region: <region-name>
      version: "1.26"
    vpc:
      id: "<VPC-id>"
      subnets:
        private:
          us-east-1a:
            id: "<subnet-id>"
          us-east-1b:
            id: "<subnet-id>"
          us-east-1c:
            id: "<subnet-id>"
        public:
          us-east-1a:
            id: "<subnet-id>"
          us-east-1b:
            id: "<subnet-id>"
          us-east-1c:
            id: "<subnet-id>"
    managedNodeGroups:
    - name: storage-nodes
      instanceType: t3.xlarge
      minSize: 5
      maxSize: 5
      volumeSize: 200
      amiFamily: AmazonLinux2
      labels: {role: worker, "px/node-type": "storage"}
      tags:
        nodegroup-role: worker
      ssh:
        allow: true
        publicKeyPath: ~/.ssh/id_rsa.pub
      iam:
        attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess
        - arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage
        - arn:aws:iam::649513742363:policy/<other-policies-needed-for-cluster>
        withAddonPolicies:
          imageBuilder: true
          autoScaler: true
          ebs: true
          fsx: true
          efs: true
          albIngress: true
          cloudWatch: true
  3. Install EFS provisioner:

    kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.3"
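To confirm the driver rolled out, you can list its pods. The label selector below is an assumption based on the upstream manifests and may differ across driver releases:

```shell
# Controller and node pods of the EFS CSI driver run in kube-system
kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver"
```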
  4. Copy and paste the below shell script into the terminal, and update the name and region variables to create an EFS volume and attach the volume to the subnet.

    export cluster_name=<Name>
    export cluster_region=<Region>
    vpc_id=$(aws eks describe-cluster \
      --name $cluster_name \
      --query "cluster.resourcesVpcConfig.vpcId" \
      --output text --region $cluster_region)
    cidr_range=$(aws ec2 describe-vpcs \
      --vpc-ids $vpc_id \
      --query "Vpcs[].CidrBlock" \
      --output text --region $cluster_region)
    security_group_id=$(aws eks describe-cluster \
      --name $cluster_name \
      --query "cluster.resourcesVpcConfig.securityGroupIds[0]" \
      --output text --region $cluster_region)
    aws ec2 authorize-security-group-ingress \
      --group-id $security_group_id \
      --protocol tcp \
      --port 2049 \
      --cidr $cidr_range --region $cluster_region
    file_system_id=$(aws efs create-file-system \
      --performance-mode generalPurpose \
      --query 'FileSystemId' \
      --output text --region $cluster_region)
    aws ec2 describe-subnets \
      --filters "Name=vpc-id,Values=$vpc_id" \
      --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
      --output table --region $cluster_region
  5. Copy the output from Step 4, pick one subnet from each zone, and add the subnet values to the below command to create an EFS mount target on each subnet:

    for subnet in <Subnet values>; do aws efs create-mount-target --file-system-id $file_system_id --subnet-id $subnet --security-groups $security_group_id --region $cluster_region; done
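Before moving on, it can help to confirm that each mount target has reached the available state; this reuses the $file_system_id and $cluster_region variables set by the script in Step 4:

```shell
# The State column should read "available" for every mount target
aws efs describe-mount-targets \
  --file-system-id $file_system_id \
  --query 'MountTargets[*].{Id: MountTargetId,AZ: AvailabilityZone,State: LifeCycleState}' \
  --output table --region $cluster_region
```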
  6. Copy the file system ID output by the script in Step 4, paste it into the search field on the EFS page in the AWS console, and then click Attach. The EFS page displays the file system details. For more information on creating EFS, refer to Creating Amazon EFS.

  7. Click Mount Via IP and provide the mount details.
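To sanity-check the mount details before adding them to Portworx Backup, the file system can be mounted manually from a worker node. A sketch, where `<mount-target-ip>` stands in for the IP shown in the Mount via IP dialog and the options are the standard NFSv4.1 options recommended for EFS:

```shell
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  <mount-target-ip>:/ /mnt/efs
```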

Add EFS as backup target in Portworx Backup

  • With the inputs obtained from Step 7, create an NFS-based backup location in Portworx Backup. For more information on how to add a backup location in Portworx Backup, refer to Add NFS backup location.

Add GCP NFS Filestore as backup target

Prerequisite

Ensure that you have the required credentials and permissions to create an NFS Filestore instance on GCP.

Create NFS Filestore on Google Cloud

Perform the below step to create an NFS Filestore instance on Google Cloud:
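A Filestore instance can be created from the gcloud CLI. A sketch, in which the instance name, zone, tier, share name, and network are placeholder assumptions:

```shell
# Create a basic Filestore instance exposing an NFS file share
gcloud filestore instances create <instance-name> \
  --zone=<zone> \
  --tier=BASIC_HDD \
  --file-share=name=<share-name>,capacity=1TB \
  --network=name=<vpc-network-name>

# Retrieve the instance details, including its IP address (the NFS server
# address) and the file share name (the NFS exported path)
gcloud filestore instances describe <instance-name> --zone=<zone>
```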

Add GCP NFS Filestore as backup target in Portworx Backup

  • Create an NFS-based backup location in Portworx Backup with the Filestore instance's IP address and file share name. For more information on how to add a backup location, refer to Add NFS backup location.

Add Azure NFS file share as backup target

Prerequisite

  • Install the Azure CLI and log in to the CLI.

  • Ensure that you have the required permissions to create resources.

Create NFS file share on Azure

Perform the below steps in Azure CLI to create an NFS file share on Azure cloud:

  1. Create a resource group to store the metadata of the resources:

    az group create  --name <resource-group-name> --location <location>
  2. Create an AKS cluster on which you want to back up your data:

    az aks create --resource-group <resource-group-name> --name  <cluster-name> --node-count 3 --node-resource-group <node-resource-group-name>
  3. (Optional) Connect to the AKS cluster created in Step 2:

    az aks get-credentials  --resource-group <resource-group-name> --name  <cluster-name>
  4. Enable service endpoint for Azure storage within the virtual network:

  • Get the virtual network name:

    az network vnet list --resource-group  <node-resource-group-name> --query '[].[name]' --output tsv
  • Get the subnet name:

    az network vnet subnet list --resource-group  <node-resource-group-name> --vnet-name <vnet-name> --query '[].[name]' --output tsv
  • Provide the virtual network name and subnet name obtained from the previous two commands in the following command to enable the service endpoint:

    az network vnet subnet update --resource-group <node-resource-group-name> --vnet-name <virtual-network-name> --name <subnet-name> --service-endpoints Microsoft.Storage
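Optionally, verify that the endpoint took effect; the `serviceEndpoints` list in the output should include Microsoft.Storage:

```shell
az network vnet subnet show --resource-group <node-resource-group-name> --vnet-name <virtual-network-name> --name <subnet-name> --query serviceEndpoints
```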
  5. Create a storage account for your Azure NFS file share:

    az storage account create --location <location-name> -g <resource-group-name> --sku Premium_LRS --kind FileStorage -n <storage-account-name> --enable-sftp false --default-action Deny --https-only false
  6. Obtain the subnet ID:

    az network vnet subnet list --resource-group  <node-resource-group-name> --vnet-name <vnet-name> --query '[].[id]' --output tsv
  7. Add the network rule to the storage account with the subnet ID obtained from Step 6:

    az storage account network-rule add -g  <resource-group-name>  --account-name <storage-account-name>  --subnet <subnet-id>
  8. Create the Azure NFS file share:

    az storage share-rm create -g <resource-group-name> --storage-account <storage-account-name>  --name <nfs-share-name> --quota <size-in-GB> --enabled-protocols NFS
  9. Fetch the NFS server address:

    az storage account show --name <storage-account-name> --query primaryEndpoints.file

    The above command returns the NFS server address in the following format:

    https://<storage-account-name>.file.core.windows.net/

    The string <storage-account-name>.file.core.windows.net serves as the NFS server address, and /<storage-account-name>/<nfs-share-name> serves as the NFS exported path, when you create an NFS-based backup location in Portworx Backup.
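The server address and exported path can be derived from the endpoint with plain shell string operations. A sketch, in which the storage account and share names are hypothetical:

```shell
# Hypothetical values; substitute the endpoint returned by `az storage account show`
# and the share name chosen when creating the file share.
endpoint="https://mystorageacct.file.core.windows.net/"
share="myshare"

# Drop the https:// scheme and the trailing slash to get the NFS server address
server="${endpoint#https://}"
server="${server%/}"

# The storage account name is the first label of the hostname; the exported
# path has the form /<storage-account-name>/<nfs-share-name>
account="${server%%.*}"
export_path="/${account}/${share}"

echo "NFS server:    ${server}"
echo "Exported path: ${export_path}"
```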

Add Azure NFS file share as backup target in Portworx Backup

  • Create an NFS backup location in Portworx Backup with the inputs obtained from Step 9 of the topic above. For more information on how to add a backup location, refer to Add NFS backup location.