Conversation

@LukasCuperDT (Contributor)

No description provided.


gitguardian bot commented Dec 1, 2025

✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend revoking them.
While these secrets were previously flagged, we no longer have a reference to the
specific commits where they were detected. Once a secret has been leaked into a git
repository, you should consider it compromised, even if it was deleted immediately.



@LukasCuperDT force-pushed the preprod branch 3 times, most recently from dc717ad to 85fa909 on December 1, 2025 at 11:37
LukasCuperDT and others added 7 commits on December 3, 2025 at 09:18
- Removed nodeSelector: dedicated=zuul-ci from all 8 base component files
- Deleted remove-node-selectors.yaml patch (no longer needed)
- Updated kustomization.yaml to remove patch reference
- Allows pods to schedule on any available nodes
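
For orientation, a minimal sketch of the change in one of the eight base files (resource and image names are assumptions; only the removed nodeSelector comes from the commit):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zuul-scheduler
spec:
  selector:
    matchLabels:
      app: zuul-scheduler
  template:
    metadata:
      labels:
        app: zuul-scheduler
    spec:
      # previously pinned with:
      #   nodeSelector:
      #     dedicated: zuul-ci
      # removed so pods can schedule on any available node
      containers:
        - name: zuul
          image: quay.io/zuul-ci/zuul-scheduler:latest  # image assumed
```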
- Add patch to change zuul-config PVC from csi-nas to csi-sfsturbo
- csi-nas was failing with provisioning errors
- Deleted old zuul-var-zuul-executor-0 PVC (will be recreated with correct storageClassName)
@LukasCuperDT force-pushed the preprod branch 2 times, most recently from b1956d0 to becabb8 on December 3, 2025 at 09:45
- csi-sfsturbo requires special parameters (everest.io/volume-as)
- Use nfs-rw flex-volume based storage class instead
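
A sketch of the resulting PVC spec (name, access mode, and size are assumptions; the storage-class choice is from the two commits above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zuul-config
spec:
  accessModes:
    - ReadWriteMany
  # csi-nas failed to provision, and csi-sfsturbo needs special
  # everest.io/volume-as parameters, so the flex-volume class is used
  storageClassName: nfs-rw
  resources:
    requests:
      storage: 10Gi
```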
- Pods expect exact secret names without hash suffixes
- Added disableNameSuffixHash: true to both secret generators
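
A sketch of the kustomization.yaml generator change (generator names and inputs are assumptions; only disableNameSuffixHash is from the commit):

```yaml
secretGenerator:
  - name: zuul-secrets
    files:
      - zuul.conf
    options:
      disableNameSuffixHash: true  # pods reference the exact name
  - name: nodepool-secrets
    files:
      - nodepool.yaml
    options:
      disableNameSuffixHash: true
```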
- Changed container names from component-specific (zuul-scheduler, zuul-web, etc.) to 'zuul'
- Base components use 'name: zuul' for all containers
- Patches were creating new containers instead of patching existing ones
- This caused volume mounts from base to be lost
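
A sketch of the fix: strategic-merge patches merge container list entries on the name key, so the patch must reuse the base container's name. A minimal example (the patched field is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zuul-web
spec:
  template:
    spec:
      containers:
        # 'zuul' matches the base container, so this merges into it;
        # a component-specific name here appended a second container
        # and the base volumeMounts never applied
        - name: zuul
          resources:
            requests:
              memory: 512Mi  # placeholder field being patched
```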
@LukasCuperDT force-pushed the preprod branch 2 times, most recently from 485fcc9 to c786473 on January 15, 2026 at 15:00
- Switch to OpenStack Kolla images (2024.1-ubuntu-jammy):
  - swift-proxy-server for proxy
  - swift-object, swift-container, swift-account for storage
- Add Kolla config.json ConfigMaps for each service
- Update security contexts to allow Kolla containers to run as root
- Create shared PV pointing to same NFS as zuul-config
- Remove hardcoded commands, use Kolla's kolla_start entrypoint
- Fix PVC binding to use separate PV in swift-proxy namespace
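
A sketch of one of the Kolla config.json ConfigMaps mentioned above, following Kolla's convention of copying files out of /var/lib/kolla/config_files (names and the exact file list are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: swift-proxy-kolla-config
data:
  config.json: |
    {
      "command": "swift-proxy-server /etc/swift/proxy-server.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/proxy-server.conf",
          "dest": "/etc/swift/proxy-server.conf",
          "owner": "swift",
          "perm": "0600"
        }
      ]
    }
```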
- Add symlink filter after versioned_writes to match production
- Document that validatetoken (custom middleware) is skipped in Kolla
- Pipeline now matches production except for custom validatetoken filter
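
A sketch of the relevant proxy-server.conf fragment as a ConfigMap entry. Only the versioned_writes → symlink ordering and the absence of validatetoken come from the commits; the remaining filters are typical defaults:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: swift-proxy-config   # name assumed
data:
  proxy-server.conf: |
    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache versioned_writes symlink proxy-server

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true

    [filter:symlink]
    use = egg:swift#symlink
```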
- Add swift-rings volume mount to proxy deployment
- Add swift-rings volume mount to all storage server containers
- Fixes: FileNotFoundError for container.ring.gz
- Ring files are created by ring-builder job hook
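
A sketch of the volume wiring (names assumed). A later commit in this PR revises the mount path, so only the ConfigMap-backed volume and mount are shown:

```yaml
spec:
  volumes:
    - name: swift-rings
      configMap:
        name: swift-rings   # produced by the ring-builder job
  containers:
    - name: swift-proxy     # same mount added to the storage containers
      volumeMounts:
        - name: swift-rings
          mountPath: /etc/swift   # moved to /srv/rings in a later commit
```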
GitOps approach:
- Remove all hooks (Helm and ArgoCD hooks are not GitOps-friendly)
- Use sync-wave: -1 to ensure job runs before deployment
- ArgoCD will deploy and manage the job as regular resource
- Job creates rings ConfigMap that pods depend on
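
A minimal sketch of the hook-free ordering (Job name, image, and service account are assumptions; the sync-wave annotation is the mechanism named in the commit):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: swift-ring-builder
  annotations:
    # negative wave: ArgoCD syncs this Job before the wave-0
    # deployments that consume the rings ConfigMap
    argocd.argoproj.io/sync-wave: "-1"
spec:
  template:
    spec:
      serviceAccountName: swift-ring-builder  # name assumed
      restartPolicy: OnFailure
      containers:
        - name: ring-builder
          image: quay.io/openstack.kolla/swift-base:2024.1-ubuntu-jammy  # assumed
          # ring-building script sketched after a later commit below
          command: ["/bin/sh", "-c", "exit 0"]
```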
Preprod has only one storage node, so it requires a replica factor of 1.
Production uses 3 replicas across multiple storage nodes.
- Replace kubectl with curl + Kubernetes API calls
- Kolla image doesn't have kubectl installed
- Use service account token for authentication
- Create or update ConfigMap with ring files as binaryData
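
A sketch of the kubectl-free upload, embedded in the Job container (namespace, device address, and builder parameters are assumptions; the token and CA paths are the standard in-cluster values, and only the object ring is shown):

```yaml
containers:
  - name: ring-builder
    image: quay.io/openstack.kolla/swift-base:2024.1-ubuntu-jammy  # assumed
    command: ["/bin/sh", "-c"]
    args:
      - |
        set -e
        cd /etc/swift
        # replica factor 1: preprod has a single storage node
        swift-ring-builder object.builder create 10 1 1
        swift-ring-builder object.builder add r1z1-10.0.0.10:6200/d1 100
        swift-ring-builder object.builder rebalance
        # authenticate with the pod's service-account token
        TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        API=https://kubernetes.default.svc
        # rings are gzip data, so they go into binaryData as base64
        RING=$(base64 -w0 object.ring.gz)
        BODY="{\"apiVersion\":\"v1\",\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"swift-rings\"},\"binaryData\":{\"object.ring.gz\":\"${RING}\"}}"
        # create the ConfigMap; on failure (e.g. 409 exists), merge-patch it
        curl -sf --cacert "${CACERT}" -H "Authorization: Bearer ${TOKEN}" \
          -H "Content-Type: application/json" -d "${BODY}" \
          "${API}/api/v1/namespaces/swift-proxy/configmaps" \
        || curl -sf --cacert "${CACERT}" -X PATCH \
          -H "Authorization: Bearer ${TOKEN}" \
          -H "Content-Type: application/merge-patch+json" -d "${BODY}" \
          "${API}/api/v1/namespaces/swift-proxy/configmaps/swift-rings"
```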
- Create Role allowing configmap create/update operations
- Bind Role to service account used by ring-builder job
- Fixes: 403 Forbidden when job tries to create rings ConfigMap
ServiceAccount already exists in serviceaccount.yaml
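
A sketch of the RBAC pieces (names assumed; get and patch are included beyond the commit's create/update to cover a create-or-update fallback like the one above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: swift-rings-writer
  namespace: swift-proxy
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: swift-rings-writer
  namespace: swift-proxy
subjects:
  - kind: ServiceAccount
    name: swift-ring-builder
    namespace: swift-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: swift-rings-writer
```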
Kolla needs to write configuration files to /etc/swift, but mounting
the rings ConfigMap there makes it read-only. Changed approach:
- Mount swift-rings ConfigMap at /srv/rings (read-only)
- Let Kolla initialize and write configs to /etc/swift
- Copy ring files from /srv/rings to /etc/swift after Kolla completes
- Start Swift servers with ring files in place

Applied to: proxy deployment and all three storage containers
(object-server, container-server, account-server)
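
The revised mount, in sketch form (volume name assumed); /etc/swift stays on the container's own filesystem so Kolla can write there:

```yaml
volumeMounts:
  - name: swift-rings
    mountPath: /srv/rings   # rings ConfigMap; read-only is fine here
    readOnly: true
```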
kolla_start immediately executes the Swift server, which fails because
ring files aren't copied yet. Changed to:
1. Call kolla_set_configs directly to setup configs
2. Copy ring files from /srv/rings to /etc/swift
3. Exec Swift server with rings in place

Also use full venv paths for Swift binaries.
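
A sketch of the resulting container command for the object server (the other servers differ only in binary and config file; the venv path follows Kolla's source-image layout, and calling kolla_set_configs directly assumes the root security context from the earlier commit):

```yaml
command: ["/bin/sh", "-c"]
args:
  - |
    # 1. render Kolla-managed config files without exec'ing the service
    kolla_set_configs
    # 2. put the ring files where Swift expects them
    cp /srv/rings/*.ring.gz /etc/swift/
    # 3. replace the shell with the server, using the full venv path
    exec /var/lib/kolla/venv/bin/swift-object-server /etc/swift/object-server.conf
```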
Changed zuul-logs cloud configuration to use internal Swift cluster
instead of OBS:
- Endpoint: http://swift-proxy-preprod.swift-proxy.svc.cluster.local:8080
- No authentication needed (internal cluster access)
- Shares SFS Turbo storage with Zuul
- Container: zuul-preprod-logs

This fixes 403 Forbidden errors when uploading logs to OBS.
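
A sketch of the corresponding clouds.yaml entry (the endpoint-override key follows openstacksdk's schema; the /v1/AUTH_... account suffix is an assumption):

```yaml
clouds:
  zuul-logs:
    auth_type: none   # internal cluster access, no credentials
    object_store_endpoint_override: http://swift-proxy-preprod.swift-proxy.svc.cluster.local:8080/v1/AUTH_zuul
```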
…empauth

- Re-enable log uploads (zuul_site_upload_logs: true)
- Configure zuul-logs cloud to use internal Swift cluster
- Use v1password auth with credentials from Vault (secret/data/zuul/logs/otc_technical_user)
- Swift tempauth configured to read account, user, and password from Vault
- Credentials: swift_username (account:user format), swift_password
- Region set to eu-de for consistency with OTC
- ArgoCD Vault Plugin will inject secrets at deployment time
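
A sketch of the v1password variant with ArgoCD Vault Plugin placeholders (the Vault path and key names are from the commit; the auth_url suffix is the conventional Swift v1 endpoint and is an assumption):

```yaml
clouds:
  zuul-logs:
    auth_type: v1password   # Swift v1 auth plugin from python-swiftclient
    auth:
      auth_url: http://swift-proxy-preprod.swift-proxy.svc.cluster.local:8080/auth/v1.0
      # AVP placeholders, resolved at deployment time
      username: <path:secret/data/zuul/logs/otc_technical_user#swift_username>
      password: <path:secret/data/zuul/logs/otc_technical_user#swift_password>
    region_name: eu-de
```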
…g storage

- Remove tempauth from Swift proxy pipeline (was causing 503 errors)
- Add formpost middleware for unauthenticated uploads
- Configure clouds.yaml with auth_type: none and direct endpoint
- Simplifies authentication for internal-only log storage cluster
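
A sketch of the final pipeline fragment (only the tempauth removal and formpost addition are from the commit; the surrounding filters are typical defaults). On the client side this pairs with an auth_type: none clouds.yaml entry like the one sketched earlier:

```yaml
data:
  proxy-server.conf: |
    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache formpost versioned_writes symlink proxy-server

    [filter:formpost]
    use = egg:swift#formpost
```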