diff --git a/docs/config/aws/oadp-aws-sts-cloud-authentication.adoc b/docs/config/aws/oadp-aws-sts-cloud-authentication.adoc
index a09b888cb1..70330cf7ad 100644
--- a/docs/config/aws/oadp-aws-sts-cloud-authentication.adoc
+++ b/docs/config/aws/oadp-aws-sts-cloud-authentication.adoc
@@ -194,31 +194,43 @@ echo "Role ARN: $ROLE_ARN"
 oc create namespace openshift-adp
 ----
-. Annotate the service accounts to use AWS STS:
+[id="oadp-aws-console-installation_{context}"]
+== Installing the OADP Operator via the OpenShift Web Console
+
+When you install the OADP Operator through the OpenShift web console with tokenized authentication support, the installation form presents cloud provider-specific configuration fields.
+
+[NOTE]
+====
+In OpenShift 4.15 and later, the web console supports tokenized authentication during Operator installation, so you can provide cloud credentials directly through the installation form.
+====
+
+.Console Installation Fields for AWS
+
+During Operator installation, the web console displays the following field:
+
+*role ARN*::
+**Field Label:** "role ARN"
+
-[source,bash]
-----
-oc annotate serviceaccount velero -n openshift-adp \
-  eks.amazonaws.com/role-arn="${ROLE_ARN}" --overwrite
+**Help Text:** "The role ARN required for the operator to access the cloud API."
++
+**Value to Enter:** Use the `ROLE_ARN` value from the prerequisite setup steps above (for example, `arn:aws:iam::123456789012:role/openshift-adp-controller-manager`).
-
-oc annotate serviceaccount openshift-adp-controller-manager -n openshift-adp \
-  eks.amazonaws.com/role-arn="${ROLE_ARN}" --overwrite
-----
+
+This field corresponds to the IAM role that you created in the prerequisite steps. The role ARN format is `arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ROLE_NAME}`.
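The ARN format described above can be sanity-checked programmatically before pasting it into the console form. The following is a minimal illustrative sketch; the `build_role_arn` helper is hypothetical and not part of OADP or the AWS tooling:

```python
import re

def build_role_arn(account_id: str, role_name: str) -> str:
    """Assemble an IAM role ARN in the arn:aws:iam::<account>:role/<name> format
    and reject obviously malformed values (hypothetical helper)."""
    arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    # Basic sanity check: a 12-digit account ID and a valid role name/path.
    if not re.fullmatch(r"arn:aws:iam::\d{12}:role/[\w+=,.@/-]+", arn):
        raise ValueError(f"Malformed role ARN: {arn}")
    return arn

print(build_role_arn("123456789012", "openshift-adp-controller-manager"))
# prints "arn:aws:iam::123456789012:role/openshift-adp-controller-manager"
```

A check like this catches common copy-paste mistakes (truncated account IDs, stray whitespace) before the Operator installation fails with an opaque cloud API error.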
 [id="oadp-aws-cloud-storage-api_{context}"]
 == Alternative: Using Cloud Storage API for Automated Bucket Management
 
-Instead of manually creating S3 buckets, you can use the OADP Cloud Storage API to automatically manage bucket creation and configuration. This approach requires OADP operator version with Cloud Storage API support.
+Instead of manually creating S3 buckets, you can use the OADP CloudStorage API to automatically manage bucket creation and configuration.
 
-.Prerequisites for Cloud Storage API
+[NOTE]
+====
+For comprehensive documentation on the CloudStorage API, including detailed configuration options, troubleshooting, and advanced usage, see link:../oadp-cloudstorage-api.html[OADP CloudStorage API].
+====
 
-* OADP operator with Cloud Storage API functionality enabled
-* The same AWS STS configuration as above
+.AWS-Specific CloudStorage Configuration
 
-.Procedure for Cloud Storage API
+For AWS with STS authentication, create a CloudStorage resource using the variables from the STS setup above:
 
-. Create a CloudStorage resource instead of manually creating buckets:
-+
 [source,yaml]
 ----
-cat <<EOF | oc create -f -
+apiVersion: oadp.openshift.io/v1alpha1
+kind: CloudStorage
+metadata:
+  name: <cloudstorage_name>
+  namespace: openshift-adp
+spec:
+  name: <bucket_name> # Required
+  provider: <provider> # Required: aws, azure, or gcp
+  region: <region> # Optional: AWS and GCP only
+  enableSharedConfig: <true_or_false> # Optional: AWS only - enable shared config loading
+  tags: # Optional: Tags for bucket/container
+    <key>: <value>
+  creationSecret: # Required
+    name: <secret_name>
+    key: <secret_key> # Provider-specific key name
+  config: # Optional: Provider-specific configuration
+    storageAccount: <storage_account> # Required for Azure CloudStorage
+status:
+  name: <bucket_name>
+  lastSyncTimestamp: <timestamp> # Last sync time
+----
+
+=== Provider-Specific Parameters
+
+[cols="1,1,1,2,2", options="header"]
+|===
+|Cloud Provider|Provider Value|Credential Key|CloudStorage Spec Requirements|Optional CloudStorage Fields
+
+|AWS
+|`aws`
+|`credentials`
+|`region` (in spec.region)
+|`enableSharedConfig`, `tags`
+
+|Azure
+|`azure`
+|`azurekey`
+|`storageAccount` (in spec.config)
+|`tags`
+
+|GCP
+|`gcp`
+|`service_account.json`
+|`region` (in spec.region)
+|`tags`
+|===
+
+== DataProtectionApplication Configuration
+
+When using CloudStorage resources, configure your DataProtectionApplication to reference the CloudStorage resource instead of specifying bucket details directly:
+
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: <dpa_name>
+  namespace: openshift-adp
+spec:
+  configuration:
+    velero:
+      defaultPlugins:
+        - <provider> # aws, azure, or gcp
+        - openshift
+        - csi
+  backupLocations:
+    - name: default
+      bucket:
+        provider: <provider> # aws, azure, or gcp
+        cloudStorageRef:
+          name: <cloudstorage_name>
+        prefix: velero
+        credential:
+          name: <secret_name>
+          key: <secret_key> # Provider-specific: credentials (AWS), azurekey (Azure), service_account.json (GCP)
+        config:
+          region: <region> # AWS and GCP
+          resourceGroup: <resource_group> # Azure
+          storageAccount: <storage_account> # Azure
+          subscriptionId: <subscription_id> # Azure
+        default: true
+  snapshotLocations:
+    - name: default
+      velero:
+        provider: <provider>
+        credential:
+          name: <secret_name>
+          key: <secret_key>
+        config:
+          # Provider-specific snapshot configuration
+----
+
+== Verification and Monitoring
+
+=== Verify CloudStorage Resource Status
+
+Check the status of your CloudStorage resource:
+
+[source,bash]
+----
+# View CloudStorage resource details
+oc get cloudstorage <cloudstorage_name> -n openshift-adp -o yaml
+
+# Check CloudStorage resource status and events
+oc describe cloudstorage <cloudstorage_name> -n openshift-adp
+----
+
+=== Monitor CloudStorage Operations
+
+Monitor CloudStorage controller operations through the operator logs:
+
+[source,bash]
+----
+# Check operator logs for CloudStorage operations
+oc logs -n openshift-adp deployment/oadp-operator-controller-manager | grep -i cloudstorage
+
+# Check for provider-specific operations
+oc logs -n openshift-adp deployment/oadp-operator-controller-manager | grep -i <provider>
+----
+
+== Deletion and Finalizer Management
+
+[WARNING]
+====
+CloudStorage resources are protected by a finalizer (`oadp.openshift.io/bucket-protection`) to prevent accidental deletion of storage containing backup data.
+
+To delete a CloudStorage resource, you must first add the deletion annotation:
+
+[source,bash]
+----
+# Add the deletion annotation before attempting to delete
+oc annotate cloudstorage <cloudstorage_name> -n openshift-adp \
+  oadp.openshift.io/cloudstorage-delete=true --overwrite
+
+# Then delete the CloudStorage resource
+oc delete cloudstorage <cloudstorage_name> -n openshift-adp
+----
+
+Without this annotation, the deletion hangs indefinitely because the finalizer prevents removal.
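The finalizer gate described above can be sketched as follows. This is a simplified, hypothetical model of the controller's decision logic, assuming only the finalizer and annotation names documented here; it is not the actual operator code:

```python
FINALIZER = "oadp.openshift.io/bucket-protection"
DELETE_ANNOTATION = "oadp.openshift.io/cloudstorage-delete"

def may_remove_finalizer(cloudstorage: dict) -> bool:
    """Return True only when the deletion annotation is set to 'true';
    otherwise the finalizer stays in place and deletion remains blocked."""
    annotations = cloudstorage.get("metadata", {}).get("annotations", {})
    return annotations.get(DELETE_ANNOTATION, "").lower() == "true"

# Without the annotation, deletion stays blocked by the finalizer.
cr = {"metadata": {"finalizers": [FINALIZER], "annotations": {}}}
print(may_remove_finalizer(cr))   # prints "False" - deletion would hang

# After annotating the resource, the controller may remove the finalizer.
cr["metadata"]["annotations"][DELETE_ANNOTATION] = "true"
print(may_remove_finalizer(cr))   # prints "True" - deletion can proceed
```

This mirrors the two-step procedure shown in the bash commands: annotate first, then delete.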
+
+**Alternative: Preserve Cloud Storage**
+
+If you want to remove the CloudStorage resource from OpenShift without deleting the actual cloud storage, you can manually remove the finalizer:
+
+[source,bash]
+----
+# Remove the finalizer to delete the CloudStorage CR without deleting the cloud storage
+oc patch cloudstorage <cloudstorage_name> -n openshift-adp --type json \
+  -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
+----
+====
+
+== Automatic Features
+
+The CloudStorage API automatically provides the following functionality:
+
+* **Storage Creation**: Creates buckets/containers if they don't exist
+* **Authentication Integration**: Integrates with existing cloud authentication methods and uses pre-configured IAM permissions:
+  - Works with static credentials, STS roles, managed identities, or workload identity federation
+  - Relies on pre-existing permissions for Velero backup and restore operations
+  - Validates credential compatibility with the specified authentication method
+* **Regional Configuration**: Sets up location-based configuration as specified
+* **Lifecycle Protection**: Protects storage resources with finalizers to prevent accidental deletion
+* **Status Reporting**: Provides detailed status information about storage provisioning and health
+
+== Provider-Specific Examples
+
+=== AWS (S3)
+
+==== CloudStorage Resource
+
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: CloudStorage
+metadata:
+  name: aws-backup-storage
+  namespace: openshift-adp
+spec:
+  name: my-backup-bucket
+  provider: aws
+  region: us-east-1
+  enableSharedConfig: true # Optional: Enable shared config loading
+  tags: # Optional: Bucket tags
+    environment: production
+    team: platform
+  creationSecret:
+    name: cloud-credentials-aws
+    key: credentials
+----
+
+==== DPA Configuration Options
+
+When using AWS CloudStorage, the following config fields are available in your DataProtectionApplication:
+
+[source,yaml]
+----
+backupLocations:
+  - name: default
+    bucket:
+      provider: aws
+      cloudStorageRef:
+        name: aws-backup-storage
+      prefix: velero
+      credential:
+        name: cloud-credentials-aws
+        key: credentials
+      config:
+        region: us-east-1 # AWS region (overrides CloudStorage spec.region)
+        profile: "default" # AWS profile name (optional)
+        s3ForcePathStyle: "false" # Force path-style S3 URLs (optional)
+        s3Url: "https://s3.custom.endpoint" # Custom S3 endpoint (optional)
+        insecureSkipTLSVerify: "false" # Skip TLS verification (optional)
+        enableSharedConfig: "true" # Enable shared config loading (optional)
+        checksumAlgorithm: "CRC32" # Checksum algorithm (optional)
+        signatureVersion: "1" # AWS signature version (optional)
+        public_url: "https://public.s3.url" # Public URL for S3 (optional)
+      default: true
+----
+
+=== Azure (Storage Container)
+
+==== CloudStorage Resource
+
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: CloudStorage
+metadata:
+  name: azure-backup-storage
+  namespace: openshift-adp
+spec:
+  name: my-backup-container
+  provider: azure
+  tags: # Optional: Container tags
+    environment: production
+    team: platform
+  creationSecret:
+    name: cloud-credentials-azure
+    key: azurekey
+  config:
+    storageAccount: mystorageaccount # Required for Azure CloudStorage
+----
+
+==== DPA Configuration Options
+
+When using Azure CloudStorage, the following config fields are available in your DataProtectionApplication:
+
+[source,yaml]
+----
+backupLocations:
+  - name: default
+    bucket:
+      provider: azure
+      cloudStorageRef:
+        name: azure-backup-storage
+      prefix: velero
+      credential:
+        name: cloud-credentials-azure
+        key: azurekey
+      config:
+        resourceGroup: my-resource-group # Required: Azure resource group
+        storageAccount: mystorageaccount # Required: Azure storage account
+        subscriptionId: 12345678-1234-1234-1234-123456789012 # Required: Azure subscription
+        useAAD: "true" # Optional: Use Azure AD authentication
+      default: true
+----
+
+=== GCP (Cloud Storage)
+
+==== CloudStorage Resource
+
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: CloudStorage
+metadata:
+  name: gcp-backup-storage
+  namespace: openshift-adp
+spec:
+  name: my-backup-bucket
+  provider: gcp
+  region: us-central1
+  tags: # Optional: Bucket tags
+    environment: production
+    team: platform
+  creationSecret:
+    name: cloud-credentials-gcp
+    key: service_account.json
+----
+
+==== DPA Configuration Options
+
+When using GCP CloudStorage, the following config fields are available in your DataProtectionApplication:
+
+[source,yaml]
+----
+backupLocations:
+  - name: default
+    bucket:
+      provider: gcp
+      cloudStorageRef:
+        name: gcp-backup-storage
+      prefix: velero
+      credential:
+        name: cloud-credentials-gcp
+        key: service_account.json
+      config:
+        project: my-gcp-project # Required: GCP project ID
+        snapshotLocation: us-central1 # Optional: Region for snapshots
+      default: true
+----
+
+== Troubleshooting
+
+If you experience issues with CloudStorage resources:
+
+. **Check CloudStorage status**: Use `oc describe cloudstorage <cloudstorage_name>` to view detailed status and events
+. **Review operator logs**: Look for CloudStorage-related messages in the OADP operator logs
+. **Verify credentials**: Ensure the credential secret exists and has the correct key name
+. **Check permissions**: Verify that the cloud identity has permissions to create and manage storage resources
+. **Validate region/location**: Ensure the specified region or storage account location is correct and accessible
+
+=== Provider-Specific Verification Commands
+
+**AWS S3 Verification:**
+[source,bash]
+----
+# Check if the bucket was created in S3
+aws s3 ls s3://${BUCKET_NAME}/
+
+# Verify bucket policies and configuration
+aws s3api get-bucket-policy --bucket ${BUCKET_NAME}
+----
+
+**Azure Storage Verification:**
+[source,bash]
+----
+# Check if the container was created in Azure Storage
+az storage container list --account-name ${STORAGE_ACCOUNT_NAME} --auth-mode login --query "[].name" -o tsv
+
+# Verify container access
+az storage container show --name ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --auth-mode login
+----
+
+**GCP Cloud Storage Verification:**
+[source,bash]
+----
+# Check if the bucket was created in GCP
+gsutil ls -p ${GCP_PROJECT_ID} | grep "gs://${BUCKET_NAME}/"
+
+# Verify bucket configuration
+gsutil ls -L -b gs://${BUCKET_NAME}/
+----
\ No newline at end of file
diff --git a/docs/design/gcp-wif-support_design.md b/docs/design/gcp-wif-support_design.md
deleted file mode 100644
index cc270ecad2..0000000000
--- a/docs/design/gcp-wif-support_design.md
+++ /dev/null
@@ -1,157 +0,0 @@
-# GCP WIF Support for OADP
-
-
-
-## Abstract
-Support Google Cloud's WIF (Workload Identity Federation) for OADP.
-## Background
-In currently released versions of OADP, the only way to authenticate to GCP is via a long lived service account credentials.
-This is not ideal for customers who are using GCP's WIF ([Workload Identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity)) feature to authenticate to GCP.
-This proposal aims to add support for WIF to OADP.
-## Goals
-- GCP WIF support for OADP and Velero for backup and restore of applications backed by GCP resources.
-- Using OpenShift's Cloud Credentials Operator to generate a short-lived token for authentication to GCP.
-- ImageStreamTag backup and restore
-
-## Non Goals
-- [Standardized update flow for OLM-managed operators leveraging short-lived token authentication](https://issues.redhat.com/browse/OCPSTRAT-95) (follow up design/implementation)
-- Allowing customers to use another long lived tokens separate from the one used by the Cloud Credentials Operator to generate short-lived tokens.
-
-
-## High-Level Design
-
-A wiki will be made available to customers to follow the steps to configure GCP WIF for OADP. The wiki will also include steps to configure the Cloud Credentials Operator to generate a short-lived token for authentication to GCP. Updates Velero GCP plugin to use the short-lived token for authentication to GCP.
-
-## Detailed Design
-
-
-### Prerequisites
-- Cluster installed in manual mode [with GCP Workload Identity configured](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/authentication_and_authorization/index#gcp-workload-identity-mode-installing).
-  - This means you should now have access to `ccoctl` CLI from this step and access to associated workload-identity-pool.
-
-### Create Credential Request for OADP Operator
-- Create oadp-credrequest dir
-  ```bash
-  mkdir -p oadp-credrequest
-  ```
-- Create credrequest.yaml
-  ```bash
-  echo 'apiVersion: cloudcredential.openshift.io/v1
-  kind: CredentialsRequest
-  metadata:
-    name: oadp-operator-credentials
-    namespace: openshift-cloud-credential-operator
-  spec:
-    providerSpec:
-      apiVersion: cloudcredential.openshift.io/v1
-      kind: GCPProviderSpec
-      permissions:
-      - compute.disks.get
-      - compute.disks.create
-      - compute.disks.createSnapshot
-      - compute.snapshots.get
-      - compute.snapshots.create
-      - compute.snapshots.useReadOnly
-      - compute.snapshots.delete
-      - compute.zones.get
-      - storage.objects.create
-      - storage.objects.delete
-      - storage.objects.get
-      - storage.objects.list
-      - iam.serviceAccounts.signBlob
-      skipServiceCheck: true
-    secretRef:
-      name: cloud-credentials-gcp
-      namespace: <namespace>
-    serviceAccountNames:
-    - velero
-  ' > oadp-credrequest/credrequest.yaml
-  ```
-- Use ccoctl to create the credrequest pointing to dir `oadp-credrequest`
-  ```bash
-  ccoctl gcp create-service-accounts --name=<name> \
-    --project=<gcp_project_id> \
-    --credentials-requests-dir=oadp-credrequest \
-    --workload-identity-pool=<workload_identity_pool> \
-    --workload-identity-provider=<workload_identity_provider>
-  ```
-  [ccoctl reference](https://github.com/openshift/cloud-credential-operator/blob/master/docs/ccoctl.md#creating-iam-service-accounts)
-  This should generate `manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml` to use in the next step.
-
-### Apply credentials secret to openshift-adp namespace
-```bash
-oc create namespace openshift-adp
-oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml
-```
-
-- [4.3.4.1. Installing the OADP Operator](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/backup_and_restore/index#oadp-installing-operator_installing-oadp-gcp)
-- Skip to [4.3.4.5. Installing the Data Protection Application](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/backup_and_restore/index#oadp-installing-dpa_installing-oadp-gcp) to create Data Protection Application
-  - Note that the key for credentials should be `service_account.json` instead of `cloud` in the official documentation example.
-  ```yaml
-  apiVersion: oadp.openshift.io/v1alpha1
-  kind: DataProtectionApplication
-  metadata:
-    name: <dpa_name>
-    namespace: openshift-adp
-  spec:
-    configuration:
-      velero:
-        defaultPlugins:
-        - openshift
-        - gcp
-    backupLocations:
-      - velero:
-          provider: gcp
-          default: true
-          credential:
-            key: service_account.json
-            name: cloud-credentials-gcp
-          objectStorage:
-            bucket: <bucket_name>
-            prefix: <prefix>
-    # Temporary image override while https://github.com/vmware-tanzu/velero-plugin-for-gcp/pull/142 not cherry-picked to Openshift
-    unsupportedOverrides:
-      gcpPluginImageFqin: ghcr.io/kaovilai/velero-plugin-for-gcp:file-wif
-  ```
-
-## Alternatives Considered
-- Using Google Config Connector on OpenShift to manage short-lived tokens.
-  - This would require another long lived token to be created and managed by the administrator, increasing the attack surface.
-  - There are pull requests put up during investigation of this alternative.
-    - https://github.com/GoogleCloudPlatform/k8s-config-connector/pull/797
-    - https://github.com/GoogleCloudPlatform/k8s-config-connector/pull/801
-
-## Security Considerations
-This proposal allows OADP Operator to depend on short lived credentials generated by the Cloud Credentials Operator. This is a more secure way to authenticate to GCP than using a long lived service account key.
-
-## Compatibility
-
-
-## Implementation
-
-
-velero-plugin-for-gcp update to support (stop panicking on) external_account (WIF) credentials https://github.com/vmware-tanzu/velero-plugin-for-gcp/pull/142
-
-OADP Operator will be updated to bind openshift service account token when WIF credentials is used. The following is done today for AWS STS credentials and will be extended to GCP WIF credentials.
-```go
-	veleroContainer.VolumeMounts = append(veleroContainer.VolumeMounts,
-		corev1.VolumeMount{
-			Name:      "bound-sa-token",
-			MountPath: "/var/run/secrets/openshift/serviceaccount",
-			ReadOnly:  true,
-		}),
-```
-
-## Open Issues
-
-- [Standardized update flow for OLM-managed operators leveraging short-lived token authentication](https://issues.redhat.com/browse/OCPSTRAT-95) do not yet have support for WIF. We will have to follow up with this work later.
-
diff --git a/docs/design/gcp-wif-support_design.md b/docs/design/gcp-wif-support_design.md
new file mode 120000
index 0000000000..215ef71a22
--- /dev/null
+++ b/docs/design/gcp-wif-support_design.md
@@ -0,0 +1 @@
+../config/gcp/oadp-gcp-wif-cloud-authentication.adoc
\ No newline at end of file