
Install Portworx on Azure Red Hat OpenShift

Prerequisites

Procedure

Find the ARO Service Principal

When deploying Portworx on Azure Red Hat OpenShift (ARO), the virtual machines are created in a resource group with a Deny Assignment role that blocks every service principal from accessing the virtual machines except the service principal created for that resource group. In this task, you identify the service principal that has access and pass its credentials (Azure Client ID, Azure Client Secret, and Tenant ID) to Portworx through the cluster spec. Portworx fetches the px-azure secret object, which you create later in this procedure, to authenticate with Azure. Perform the following steps from the Azure web UI; a CLI alternative is sketched after these steps:

  1. Select Virtual Machines from the top navigation menu.

  2. From the Virtual machines page, select the Resource Group associated with your cluster.

  3. From the left panel on the Resource group page, select Access control (IAM).

  4. On the Access control (IAM) subpage of your resource group, select Deny assignments from the toolbar in the center of the page, then select the link under the Name column (this will likely be an autogenerated string of letters and numbers).

  5. This page shows that all principals are denied access, except for your resource group. Select your resource group's name.

  6. From the application page, copy and save the following values:

    • Name
    • Application ID
    • Object ID

    You will use these to create the px-azure secret.

  7. From the home page, open the Azure Active Directory page (select All services to see the option). Select App registrations on the left pane, followed by All applications. In the search bar in the center of the page, paste the application name you saved in the previous step and press the enter key. Select the application link that shows in the results to open the next page.

  8. From your application's page, select Certificates & secrets under Manage from the left pane.

  9. From the Certificates & secrets page, select + New client secret to create a new secret. On the Add a client secret page, provide the description and expiry date of your secret and click Add.

  10. You can see the newly created secret listed on the Client secret subpage. Copy and save the following values of your newly created secret:

    • Value
    • Secret ID
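
If you prefer the command line, you may be able to read the same service principal's Application ID directly from the cluster object. This is a sketch only: it assumes the az aro CLI extension is installed and that your cluster reports the value under servicePrincipalProfile; the cluster and resource group names are placeholders.

az aro show --name <cluster-name> --resource-group <resource-group> \
  --query servicePrincipalProfile.clientId -o tsv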

Create the px-azure secret with Service Principal credentials

Create a secret called px-azure to give Portworx access to the Azure APIs, replacing the placeholder values below with the corresponding values from the service principal you identified in the previous section:

oc create secret generic -n portworx px-azure \
--from-literal=AZURE_TENANT_ID=<tenant> \
--from-literal=AZURE_CLIENT_ID=<appId> \
--from-literal=AZURE_CLIENT_SECRET=<value>
secret/px-azure created
  • AZURE_TENANT_ID: Run the az login command to get this value
  • AZURE_CLIENT_ID: Provide the Application ID associated with your cluster's resource group, which you saved in step 6 of the previous section
  • AZURE_CLIENT_SECRET: Provide the Value of your secret, which you saved in step 10 of the previous section
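
To sanity-check the inputs, the following optional commands may help; they assume an active az login session, and oc describe only prints the secret's keys and sizes, not its values:

az account show --query tenantId -o tsv
oc describe secret px-azure -n portworx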

Create a monitoring ConfigMap

Newer OpenShift versions do not support the Portworx Prometheus deployment. As a result, you must enable monitoring for user-defined projects before installing the Portworx Operator. Use the instructions in this section to configure the OpenShift Prometheus deployment to monitor Portworx metrics.

To integrate OpenShift’s monitoring and alerting system with Portworx, create a cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

The enableUserWorkload parameter enables monitoring for user-defined projects in the OpenShift cluster. This creates a prometheus-operated service in the openshift-user-workload-monitoring namespace.
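
To apply the ConfigMap and confirm that the user-workload monitoring pods come up, you can run the following; this sketch assumes you saved the YAML above as cluster-monitoring-config.yaml:

oc apply -f cluster-monitoring-config.yaml
oc get pods -n openshift-user-workload-monitoring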

Generate Portworx spec

  1. Navigate to Portworx Central and log in, or create an account.

  2. Select Portworx Enterprise from the Product Catalog page.

  3. On the Product Line page, choose any option depending on which license you intend to use, then click Continue to start the spec generator.

  4. For Platform, choose Azure. Select Azure Red Hat OpenShift (ARO) for Distribution Name, then click Save Spec to generate the specs.
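
For orientation, the generated spec is a StorageCluster custom resource along the lines of the trimmed sketch below. The apiVersion and kind are standard for operator-based Portworx installs, but the name and image tag here are illustrative placeholders; always deploy the exact spec the generator produces rather than this outline.

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: portworx
spec:
  image: portworx/oci-monitor:<version>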

Install Portworx Operator using OpenShift UI

  1. From your OpenShift UI, select OperatorHub in the left pane.

  2. On the OperatorHub page, search for Portworx and select the Portworx Enterprise or Portworx Essentials card.

  3. Click Install to install Portworx Operator.

  4. Portworx Operator begins to install and takes you to the Install Operator page. On this page, select the A specific namespace on the cluster option for Installation mode. Choose the Create Project option from the Installed Namespace dropdown.

  5. In the Create Project window, provide the name portworx and click Create to create a namespace called portworx.

  6. Click Install to deploy Portworx Operator in the portworx namespace.
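
Optionally, you can confirm the rollout from the command line; this assumes the Operator deployment is named portworx-operator, which matches the pod names shown later on this page:

oc get deployment portworx-operator -n portworx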

Deploy Portworx using OpenShift UI

  1. Once the Operator is successfully installed, a Create StorageCluster button appears. Click the button to create a StorageCluster object.

  2. On the Create StorageCluster page, choose YAML view to configure the StorageCluster object.

  3. Copy and paste the Portworx spec that you generated in the Generate Portworx spec section into the text editor, then click Create to deploy Portworx.

  4. Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page. Once Portworx has fully deployed, the status will show as Online.

Verify your Portworx installation

Once you've installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.

Verify if all pods are running

Enter the following oc get pods command to list and filter the results for Portworx pods:

oc get pods -n portworx -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0          2m55s   192.168.121.196   username-k8s1-node0   <none>   <none>
portworx-api-t4lf9                                      1/1     Running   0          2m55s   192.168.121.99    username-k8s1-node1   <none>   <none>
portworx-kvdb-94bpk                                     1/1     Running   0          4s      192.168.121.196   username-k8s1-node0   <none>   <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0          4m1s    10.244.1.99       username-k8s1-node0   <none>   <none>
prometheus-px-prometheus-0                              2/2     Running   0          2m41s   10.244.1.105      username-k8s1-node0   <none>   <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79   2/2     Running   0          2m55s   192.168.121.196   username-k8s1-node0   <none>   <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx   1/2     Running   0          2m55s   192.168.121.99    username-k8s1-node1   <none>   <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0          3m5s    10.244.1.103      username-k8s1-node0   <none>   <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0          3m5s    10.244.1.102      username-k8s1-node0   <none>   <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0          3m5s    10.244.3.107      username-k8s1-node1   <none>   <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0          3m3s    10.244.1.104      username-k8s1-node0   <none>   <none>

Note the name of one of your px-cluster pods. You'll run pxctl commands from these pods in the following steps.
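
Rather than copying the pod name by hand, you can capture it in a shell variable; this sketch assumes the Portworx pods carry the name=portworx label, which is typical for operator-based installs:

PX_POD=$(oc get pods -l name=portworx -n portworx -o jsonpath='{.items[0].metadata.name}')
echo $PX_POD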

Verify Portworx cluster status

You can find the status of the Portworx cluster by running the pxctl status command from a pod. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n portworx -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e
IP: 192.168.121.99
Local Storage Pool: 1 pool
POOL   IO_PRIORITY   RAID_LEVEL   USABLE    USED     STATUS   ZONE      REGION
0      HIGH          raid0        3.0 TiB   10 GiB   Online   default   default
Local Storage Devices: 3 devices
Device   Path       Media Type                Size      Last-Scan
0:1      /dev/vdb   STORAGE_MEDIUM_MAGNETIC   1.0 TiB   14 Jul 22 22:03 UTC
0:2      /dev/vdc   STORAGE_MEDIUM_MAGNETIC   1.0 TiB   14 Jul 22 22:03 UTC
0:3      /dev/vdd   STORAGE_MEDIUM_MAGNETIC   1.0 TiB   14 Jul 22 22:03 UTC
* Internal kvdb on this node is sharing this storage device /dev/vdc to store its data.
total - 3.0 TiB
Cache Devices:
* No cache devices
Cluster Summary
Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d
Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae
Scheduler: kubernetes
Nodes: 2 node(s) with storage (2 online)
IP                ID                                     SchedulerNodeName    Auth      StorageNode   Used     Capacity   Status   StorageStatus    Version          Kernel                   OS
192.168.121.196   f6d87392-81f4-459a-b3d4-fad8c65b8edc   username-k8s1-node0  Disabled  Yes           10 GiB   3.0 TiB    Online   Up               2.11.0-81faacc   3.10.0-1127.el7.x86_64   CentOS Linux 7 (Core)
192.168.121.99    788bf810-57c4-4df1-9a5a-70c31d0f478e   username-k8s1-node1  Disabled  Yes           10 GiB   3.0 TiB    Online   Up (This node)   2.11.0-81faacc   3.10.0-1127.el7.x86_64   CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 20 GiB
Total Capacity : 6.0 TiB

The Portworx status will display PX is operational if your cluster is running as intended.
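
If you want to script this check, grepping the summary line is one lightweight option (reusing the PX_POD variable captured earlier):

oc exec $PX_POD -n portworx -- /opt/pwx/bin/pxctl status | grep "Status:"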

Verify pxctl cluster provision status

  • Find the storage cluster; the status should show as Online:

    oc -n portworx get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d   33a82fe9-d93b-435b-943e-6f3fd5522eae   Online   2.11.0    10m
  • Find the storage nodes; the statuses should show as Online:

    oc -n portworx get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0   f6d87392-81f4-459a-b3d4-fad8c65b8edc   Online   2.11.0-81faacc   11m
    username-k8s1-node1   788bf810-57c4-4df1-9a5a-70c31d0f478e   Online   2.11.0-81faacc   11m
  • Verify the Portworx cluster provision status. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

    oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n portworx -- /opt/pwx/bin/pxctl cluster provision-status
    Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
    NODE                                   NODE STATUS   POOL                                         POOL STATUS   IO_PRIORITY   SIZE      AVAILABLE   USED     PROVISIONED   ZONE      REGION    RACK
    788bf810-57c4-4df1-9a5a-70c31d0f478e   Up            0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default
    f6d87392-81f4-459a-b3d4-fad8c65b8edc   Up            0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 )   Online        HIGH          3.0 TiB   3.0 TiB     10 GiB   0 B           default   default   default

Create your first PVC

For your apps to use persistent volumes powered by Portworx, you must use a StorageClass that references Portworx as the provisioner. Portworx includes a number of default StorageClasses, which you can reference with PersistentVolumeClaims (PVCs) you create. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.
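
To see which Portworx StorageClasses are available on your cluster before you pick one, you can filter the full list by provisioner; this is an optional check:

oc get storageclass | grep -i portworx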

Perform the following steps to create a PVC:

  1. Create a PVC referencing the px-csi-db default StorageClass and save the file:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: px-check-pvc
    spec:
      storageClassName: px-csi-db
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
  2. Run the oc apply command to create a PVC:

    oc apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/px-check-pvc created

Verify your StorageClass and PVC

  1. Enter the following oc get storageclass command, specifying the name of the StorageClass you referenced in the steps above:

    oc get storageclass <your-storageclass-name>
    NAME                   PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    example-storageclass   pxd.portworx.com   Delete          Immediate           false                  24m

    oc will return details about your StorageClass if it was created correctly. Verify the configuration details appear as you intended.

  2. Enter the oc get pvc command. If this is the only PVC you've created, you should see only one entry in the output:

    oc get pvc <your-pvc-name>
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    px-check-pvc   Bound    pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0   2Gi        RWO            px-csi-db              3m7s

    oc will return details about your PVC if it was created correctly. Verify the configuration details appear as you intended.
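
To exercise the volume from an app, you could mount the PVC in a test pod like the sketch below; it assumes the px-check-pvc PVC from the steps above, and the pod name and nginx image are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: px-check-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: px-check-pvc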
