Closed
Labels: kind/bug (Categorizes issue or PR as related to a bug.)
Description
Contact Details
Describe bug
Hello,
I deployed the OADP operator, but one pod is crashing, apparently because of how information from the secret is loaded into the registry deployment config (I replaced secret values with <value> placeholders below).
I created the secret:
```
[ocpadmin@develop-305-jnkns-bastion oadp-config]$ oc get secret cloud-credentials-azure -o yaml
apiVersion: v1
data:
  azure: <some base64 here>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: oadp-config
    meta.helm.sh/release-namespace: openshift-adp
  creationTimestamp: "2021-12-15T15:04:44Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cloud-credentials-azure
  namespace: openshift-adp
  resourceVersion: "1839550"
  uid: 8237d331-9499-426e-a887-13bd0dc6b032
type: Opaque
```
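(As a side note, the embedded value can be pulled and decoded in one step; this is standard `oc` jsonpath usage, shown only for convenience:)

```bash
# Decode the "azure" key of the secret directly, without copying the base64 by hand
oc get secret cloud-credentials-azure -n openshift-adp \
  -o jsonpath='{.data.azure}' | base64 -d
```

Decoded, the key contains: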
```
[ocpadmin@develop-305-jnkns-bastion oadp-config]$ echo <some base64 here> | base64 -d
AZURE_SUBSCRIPTION_ID: <azure_subscription_id_here>
AZURE_TENANT_ID: <azure_tenant_id_here>
AZURE_CLIENT_ID: <azure_client_id_here>
AZURE_CLIENT_SECRET: <azure_secret_here>
AZURE_RESOURCE_GROUP: <azure_resource_group>
AZURE_CLOUD_NAME: <azure_storage_name>
AZURE_STORAGE_ACCOUNT_ACCESS_KEY: <azure_storage_key>
```
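For comparison, the Azure credentials file shown in the Velero/OADP docs uses `KEY=value` lines rather than YAML-style `KEY: value` pairs, which may matter for how the operator parses the secret (an observation only, not verified against the operator code):

```bash
# Documented style of the Velero Azure credentials file (KEY=value, no colon),
# recreated here with the same placeholders as above
cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=<azure_subscription_id_here>
AZURE_TENANT_ID=<azure_tenant_id_here>
AZURE_CLIENT_ID=<azure_client_id_here>
AZURE_CLIENT_SECRET=<azure_secret_here>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<azure_storage_key>
EOF
oc create secret generic cloud-credentials-azure -n openshift-adp \
  --from-file azure=./credentials-velero
```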
and the DataProtectionApplication:
```
[ocpadmin@develop-305-jnkns-bastion oadp-config]$ oc get DataProtectionApplication velero-backup -o yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  annotations:
    meta.helm.sh/release-name: oadp-config
    meta.helm.sh/release-namespace: openshift-adp
  creationTimestamp: "2021-12-15T15:04:44Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: velero-backup
  namespace: openshift-adp
  resourceVersion: "1840351"
  uid: ed9044be-bc59-43af-8f4d-9c1ca224449e
spec:
  backupLocations:
  - velero:
      config:
        resourceGroup: <azure_resource_group>
        storageAccount: <azure_storage_name>
        storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY
      credential:
        key: azure
        name: cloud-credentials-azure
      default: true
      objectStorage:
        bucket: velero-backup
        prefix: backups
      provider: azure
  configuration:
    restic:
      enable: true
    velero:
      defaultPlugins:
      - openshift
      - azure
  snapshotLocations:
  - velero:
      provider: azure
status:
  conditions:
  - lastTransitionTime: "2021-12-15T15:05:18Z"
    message: Reconcile complete
    reason: Complete
    status: "True"
    type: Reconciled
```
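(The Reconciled condition can also be checked directly with a jsonpath filter; standard `oc` usage:)

```bash
# Print the message of the Reconciled condition on the DataProtectionApplication
oc get dataprotectionapplication velero-backup -n openshift-adp \
  -o jsonpath='{.status.conditions[?(@.type=="Reconciled")].message}'
```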
So this went fine. But when I check the namespace, I see:
```
[ocpadmin@develop-305-jnkns-bastion oadp-config]$ oc get all
NAME READY STATUS RESTARTS AGE
pod/oadp-velero-backup-1-azure-registry-7cdb965948-xw8q8 0/1 CrashLoopBackOff 32 139m
pod/openshift-adp-controller-manager-78f66bc459-fkcjv 1/1 Running 0 140m
pod/restic-9p44t 1/1 Running 0 139m
pod/restic-wmdcc 1/1 Running 0 139m
pod/velero-67b998889c-tdj6v 1/1 Running 0 139m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/oadp-velero-backup-1-azure-registry-svc ClusterIP 172.30.72.5 <none> 5000/TCP 139m
service/openshift-adp-controller-manager-metrics-service ClusterIP 172.30.136.242 <none> 8443/TCP 140m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/restic 2 2 2 2 2 <none> 139m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/oadp-velero-backup-1-azure-registry 0/1 1 0 139m
deployment.apps/openshift-adp-controller-manager 1/1 1 1 140m
deployment.apps/velero 1/1 1 1 139m
NAME DESIRED CURRENT READY AGE
replicaset.apps/oadp-velero-backup-1-azure-registry-7cdb965948 1 1 0 139m
replicaset.apps/openshift-adp-controller-manager-78f66bc459 1 1 1 140m
replicaset.apps/velero-67b998889c 1 1 1 139m
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/oadp-velero-backup-1-azure-registry-route oadp-velero-backup-1-azure-registry-route-openshift-adp.apps.develop-305.azure.dev-ocp-aws.com oadp-velero-backup-1-azure-registry-svc <all> None
```
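(For anyone reproducing this, the crashing registry pod's logs and events can be pulled with the standard commands; the pod name is taken from the listing above:)

```bash
# Logs from the crashing registry container
oc logs deployment/oadp-velero-backup-1-azure-registry -n openshift-adp
# Recent events for the pod, e.g. back-off and restart reasons
oc describe pod oadp-velero-backup-1-azure-registry-7cdb965948-xw8q8 -n openshift-adp
```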
In the registry deployment config, the spec section contains:
```yaml
spec:
  containers:
  - env:
    - name: REGISTRY_STORAGE
      value: azure
    - name: REGISTRY_STORAGE_AZURE_CONTAINER
      value: velero-backup
    - name: REGISTRY_STORAGE_AZURE_ACCOUNTNAME
      value: <azure_storage_name>
    - name: REGISTRY_STORAGE_AZURE_ACCOUNTKEY
      value: AZURE_STORAGE_ACCOUNT_ACCESS_KEY:<azure_storage_key>
```
So instead of having just `value: <azure_storage_key>`, there is `value: AZURE_STORAGE_ACCOUNT_ACCESS_KEY:<azure_storage_key>`. When I corrected this manually, a new pod was created and is running.
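For reference, one way to apply that manual correction is a sketch like the following (note the operator may revert it on its next reconcile):

```bash
# Overwrite the mangled env var with just the storage account key value
oc set env deployment/oadp-velero-backup-1-azure-registry -n openshift-adp \
  REGISTRY_STORAGE_AZURE_ACCOUNTKEY='<azure_storage_key>'
```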
What happened?
Pod cannot start
OADP Version
0.5.x (Stable)
OpenShift Version
4.8
Velero pod logs
No response
Restic pod logs
No response
Operator pod logs
No response
New issue
- This issue is new