Migration with Stork
This document will walk you through how to migrate your Portworx volumes between clusters with Stork on Kubernetes.
Prerequisites
- Secret Store: A secret store is configured on both clusters. This will store the credentials for the objectstore.
- Objectstore: An AWS S3 compatible objectstore, AWS S3, GCP Object Storage, or Azure Blob Storage.
- Network Connectivity:
  - Kubernetes: All worker nodes must be able to reach the Kubernetes API endpoints of both clusters (for example, ports 6443 and 443).
  - Portworx: Open ports in the range of 9001-9020 so that Portworx worker nodes can communicate with each other. For the specific ports used in your environment, see the Network table on the Prerequisites page.
  - OpenShift 4+: Ports in the range of 17001-17020 must be open so that the Portworx API endpoints are reachable. For the specific ports used in your environment, see the Network table on the Prerequisites page.
- Default StorageClass: At most one StorageClass object is configured as the default. Having multiple default StorageClasses will cause PVC migrations to fail.
- Cloud Environment: Depending on your cloud provider (EKS, GKE, AAD-enabled AKS, or OKE), ensure that you have successfully applied the instructions on the respective page for setting up your destination cluster.
- storkctl is installed on both clusters. Always use the latest storkctl binary, which you can download from the currently running Stork container (see the example after this list).
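For example, you can extract the binary from the running Stork pod as follows (a minimal sketch, assuming Stork runs in the kube-system namespace with the label name=stork and ships the binary at /storkctl/linux/storkctl; verify both against your deployment):

# Find the running Stork pod
STORK_POD=$(kubectl get pods -n kube-system -l name=stork \
    -o jsonpath='{.items[0].metadata.name}')

# Copy the Linux storkctl binary out of the container and make it executable
kubectl cp -n kube-system ${STORK_POD}:/storkctl/linux/storkctl ./storkctl
chmod +x ./storkctl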
The default admin namespace is kube-system. In all examples, <migrationnamespace> is considered the admin namespace, which is responsible for migrating all namespaces from your source cluster to the destination cluster. Alternatively, you can specify a non-admin namespace; in that case, only that specific namespace will be migrated. To learn how to set up an admin namespace, refer to the Set up a Cluster Admin namespace for Migration page.
Create a ClusterPair object
For migration with Stork, it is essential to pair the two clusters to enable the migration of data and resources. To facilitate this process, you need to create a trust object, known as a ClusterPair object, on the source cluster. Portworx requires this object to establish a communication channel between the two clusters.
The ClusterPair object pairs the two Kubernetes clusters, allowing migration of resources and volumes.
Pair your clusters
Use the storkctl create clusterpair command to create your unidirectional ClusterPair, using the command options specific to your environment as explained in the following sections. The unidirectional ClusterPair establishes authentication from the source cluster to the destination cluster so that resources and data can be migrated in one direction.
This command creates a ClusterPair object on the source cluster using the source kubeconfig file (<source-kubeconfig-file>) and the destination kubeconfig file (<destination-kubeconfig-file>) in the specified namespace (<migrationnamespace>). This object establishes and authenticates the connection between the two clusters for migrating resources and volumes within the specified namespace.
If you configured the portworx-api service to be accessible externally through ingresses or routes, specify the following two additional command options when creating the ClusterPair:
- --dest-ep string: Endpoint of the portworx-api service in the destination cluster.
- --src-ep string: Endpoint of the portworx-api service in the source cluster.
If these endpoints are not specified, the storage status of the ClusterPair will show as failed.
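For example, you would add options like the following to the provider-specific command shown below (a sketch; the hostnames and port are hypothetical placeholders for your externally exposed portworx-api endpoints):

    --src-ep px-api.source.example.com:9001 \
    --dest-ep px-api.dest.example.com:9001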
Amazon S3 or S3 compatible
Run the following command to create a unidirectional ClusterPair named migration-cluster-pair for cluster migration:
storkctl create clusterpair migration-cluster-pair \
--namespace <migrationnamespace> \
--dest-kube-file <destination-kubeconfig-file> \
--src-kube-file <source-kubeconfig-file> \
--provider s3 \
--s3-endpoint s3.amazonaws.com \
--s3-access-key <s3-access-key> \
--s3-secret-key <s3-secret-key> \
--s3-region <s3-region> \
--unidirectional
Portworx will use AWS S3 or S3 compatible blob storage for migrating volume data between the two clusters. The credentials specified in the above command are used for authenticating with your cloud platform. The S3 bucket information is provided through the specified access key, secret key, and region (<s3-region>) to facilitate the data transfer between the two clusters.
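If you want to sanity-check the credentials before pairing, one option (assuming the AWS CLI is installed) is to list the buckets they can access:

# Uses the same credentials you pass to storkctl; succeeds only if they are valid
AWS_ACCESS_KEY_ID=<s3-access-key> \
AWS_SECRET_ACCESS_KEY=<s3-secret-key> \
aws s3 ls --region <s3-region>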
Microsoft Azure
Run the following command to create a unidirectional ClusterPair named migration-cluster-pair for cluster migration:
storkctl create clusterpair migration-cluster-pair \
--namespace <migrationnamespace> \
--dest-kube-file <destination-kubeconfig-file> \
--src-kube-file <source-kubeconfig-file> \
--provider azure \
--azure-account-name <azure-account-name> \
--azure-account-key <azure-account-key> \
--unidirectional
Portworx will use Azure blob storage for migrating volume data between the two clusters. The Azure credentials specified in the above command are used for authenticating with the Azure cloud platform.
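If you need to look up the storage account key, one way (assuming the Azure CLI is installed; <azure-resource-group> is a hypothetical placeholder for the account's resource group) is:

# Lists the access keys for the storage account
az storage account keys list \
    --account-name <azure-account-name> \
    --resource-group <azure-resource-group>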
GCP
Run the following command to create a unidirectional ClusterPair named migration-cluster-pair for cluster migration:
storkctl create clusterpair migration-cluster-pair \
--namespace <migrationnamespace> \
--dest-kube-file <destination-kubeconfig-file> \
--src-kube-file <source-kubeconfig-file> \
--provider google \
--google-project-id <gcp-project-ID> \
--google-json-key <gcp-json-auth-key> \
--unidirectional
Portworx will use Google Cloud Storage for migrating volume data between the two clusters. The Google Cloud credentials specified in the above command are used for authenticating with GCP.
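If you need a JSON key for a service account with access to Cloud Storage, one way to create it (assuming the gcloud CLI is installed; <gcp-service-account-email> is a hypothetical placeholder) is:

# Creates a new JSON key file for the given service account
gcloud iam service-accounts keys create gcp-key.json \
    --iam-account=<gcp-service-account-email>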
Verify the status of your unidirectional ClusterPair
The following command uses storkctl to retrieve information about the cluster pairs in the Kubernetes namespace <migrationnamespace>:
storkctl get clusterpair -n <migrationnamespace>
This command displays details such as the name and status of the existing cluster pairs within that specific namespace.
Source cluster details:
NAME STORAGE-STATUS SCHEDULER-STATUS CREATED
migration-cluster-pair Ready Ready 10 Mar 23 17:16 PST
On a successful pairing, you should see the STORAGE-STATUS and SCHEDULER-STATUS as Ready.
Encountered an error? Run the following command to inspect the ClusterPair's events for more details:
kubectl describe clusterpair <your-clusterpair-name> -n <migrationnamespace>
Use Rancher Projects with ClusterPair
Follow the instructions on the Use Rancher Projects with ClusterPair page if you are using Rancher projects, otherwise skip to the next section.
Start your migration
Once the pairing is configured, applications can be migrated repeatedly to the destination cluster.
Perform the following steps to migrate your Kubernetes resources and volumes.
Define your migration object
Paste the following spec into the migration.yaml file to define your migration object:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: <your-migration-object-name>
  namespace: <migrationnamespace>
spec:
  clusterPair: migration-cluster-pair
  includeResources: true
  startApplications: true
  namespaces:
  - <app-namespace1>
  - <app-namespace2>
  purgeDeletedResources: false
Where:
- apiVersion: is set as stork.libopenstorage.org/v1alpha1
- kind: is set as Migration
- metadata.name: is the name of the object that performs the migration
- metadata.namespace: is the name of the namespace in which you want to create the object
- spec.clusterPair: is the name of the ClusterPair object created in the Pair your clusters section
- spec.includeResources: is a boolean value specifying if the migration should include PVCs and other application specs. If you set this field to false, Portworx will only migrate your volumes.
- spec.startApplications: is a boolean value specifying if Portworx should automatically start applications on the destination cluster. If you set this field to false, the Deployment and StatefulSet objects on the destination cluster will be scaled to zero replicas. Note that, on the destination cluster, Portworx uses the stork.openstorage.org/migrationReplicas annotation to store the number of replicas from the source cluster (see the sketch after this list).
- spec.namespaces: is the list of namespaces you want to migrate
- spec.purgeDeletedResources: is a boolean value specifying if Stork should automatically purge a resource from the destination cluster when you delete it from the source cluster. The default value is false.
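For instance, after migrating with startApplications: false, you can read the stored replica count and scale an application up on the destination cluster. A minimal sketch, where <app-deployment> is a hypothetical Deployment name:

# Read the replica count Stork recorded from the source cluster
kubectl get deployment <app-deployment> -n <app-namespace1> \
    -o jsonpath='{.metadata.annotations.stork\.openstorage\.org/migrationReplicas}'

# Scale the Deployment up to that count (replace <count> with the value returned)
kubectl scale deployment <app-deployment> -n <app-namespace1> --replicas=<count>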
(Optional) Customize your migration
You can customize your migration by defining pre- and post-exec rules and specifying them in your migration object spec. The pre-exec rule runs before the migration is triggered, and the post-exec rule runs after the migration has been triggered.
When you are using an admin namespace to migrate multiple namespaces and want to customize your migration using pre- or post-exec rules, create these rules in the admin namespace.
The following example shows how you can specify the pre and post rules in your migration object:
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: <your-migration-object-name>
  namespace: <migrationnamespace>
spec:
  clusterPair: migration-cluster-pair
  includeResources: true
  startApplications: true
  preExecRule: <your-pre-rule>
  postExecRule: <your-post-rule>
  namespaces:
  - <app-namespace1>
  - <app-namespace2>
  purgeDeletedResources: false
If the rules do not exist, you will see an event and the migration will stop.
If the PreExec rule fails for any reason, it will log an event against the object and retry. The Migration will not be marked as failed.
If the PostExec rule fails for any reason, it will log an event and mark the Migration as failed. It will also try to cancel the migration that was started from the underlying storage driver.
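Pre- and post-exec rules are defined as Stork Rule objects. The following is a minimal sketch of what such a rule might look like, assuming a hypothetical pod label and quiesce command; verify the exact fields against the Stork rules documentation for your version:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: Rule
metadata:
  name: <your-pre-rule>
  namespace: <migrationnamespace>
rules:
- podSelector:
    app: <your-app-label>          # hypothetical label; must match your application's pods
  actions:
  - type: command                  # runs a command inside the selected pods
    value: <your-quiesce-command>  # hypothetical; e.g. a command that flushes writes before migration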
Apply the spec
Apply the spec to start the migration process:
kubectl apply -f migration.yaml
Monitoring a migration
Once the migration has been started using the previous commands, you can check the status using storkctl:
storkctl get migration -n <migrationnamespace>
Here is an example output that you will see initially when the migration is triggered:
NAME CLUSTERPAIR STAGE STATUS VOLUMES RESOURCES CREATED
<your-migration-object-name>-2022-12-12-200210 migration-cluster-pair Volumes InProgress 0/3 0/0 12 Dec 22 11:45 PST
If the migration is successful, the STAGE will change from Volumes to Application to Final.
Here is an example output of a successful migration:
NAME CLUSTERPAIR STAGE STATUS VOLUMES RESOURCES CREATED ELAPSED
<your-migration-object-name>-2022-12-12-200210 migration-cluster-pair Final Successful 3/3 10/10 12 Dec 22 12:02 PST 1m23s
Need to see more details? Run the following command to inspect the migration's events and details:
kubectl describe migration <your-migration-object-name> -n <migrationnamespace>
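After a successful migration, you can point kubectl at the destination cluster to confirm that the migrated resources are running, for example:

# Uses the destination kubeconfig referenced in the Pair your clusters section
kubectl get pods -n <app-namespace1> --kubeconfig <destination-kubeconfig-file>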