Init of preprod branch #1458
Conversation
Force-pushed dc717ad to 85fa909
…cutor storage class
- Removed `nodeSelector: dedicated=zuul-ci` from all 8 base component files
- Deleted the remove-node-selectors.yaml patch (no longer needed)
- Updated kustomization.yaml to drop the patch reference (see the sketch below)
- Allows pods to schedule on any available node
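A sketch of the resulting kustomization.yaml, assuming the component file names:

```yaml
# kustomization.yaml after the cleanup; component file names are assumptions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - zuul-scheduler.yaml
  - zuul-web.yaml
  - zuul-executor.yaml
  # ... remaining base components
# patches:
#   - path: remove-node-selectors.yaml   # deleted: the base files no longer
#                                        # set nodeSelector dedicated=zuul-ci
```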
- Add patch to change the zuul-config PVC from csi-nas to csi-sfsturbo
- csi-nas was failing with provisioning errors
- Deleted the old zuul-var-zuul-executor-0 PVC (will be recreated with the correct storageClassName)
Force-pushed b1956d0 to becabb8
- csi-sfsturbo requires special parameters (everest.io/volume-as)
- Use the nfs-rw flex-volume based storage class instead (see the patch sketch below)
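A minimal sketch of the PVC after the change (claim name and size are illustrative). Note that storageClassName is immutable on a PVC, which is why the old claim had to be deleted and recreated:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zuul-config
spec:
  storageClassName: nfs-rw     # flex-volume based class; replaces csi-nas/csi-sfsturbo
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi            # size assumed for illustration
```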
- Pods expect exact secret names without hash suffixes
- Added disableNameSuffixHash: true to both secret generators
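In kustomization.yaml this looks roughly like the following (generator names and input files are assumptions):

```yaml
secretGenerator:
  - name: zuul-config-secret       # hypothetical generator name
    files:
      - zuul.conf
    options:
      disableNameSuffixHash: true  # keep the fixed name the pods reference
  - name: zuul-keys-secret         # hypothetical generator name
    files:
      - id_rsa
    options:
      disableNameSuffixHash: true
```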
- Changed container names from component-specific (zuul-scheduler, zuul-web, etc.) to 'zuul'
- Base components use 'name: zuul' for all containers
- Patches were creating new containers instead of patching existing ones
- This caused volume mounts from the base to be lost
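Strategic-merge patches match entries in containers[] by name, so a sketch of the fix looks like this (images and the patched field are illustrative):

```yaml
# base component (simplified): every Zuul component names its container 'zuul'
spec:
  template:
    spec:
      containers:
        - name: zuul
          image: quay.io/zuul-ci/zuul-scheduler   # image assumed
          volumeMounts:
            - name: zuul-config
              mountPath: /etc/zuul
---
# overlay patch: merges into the existing container because the name matches,
# so the base volumeMounts are preserved instead of a second container appearing
spec:
  template:
    spec:
      containers:
        - name: zuul
          env:
            - name: ZUUL_ENV       # hypothetical override
              value: preprod
```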
…es and ring management
Force-pushed 485fcc9 to c786473
- Switch to OpenStack Kolla images (2024.1-ubuntu-jammy):
  - swift-proxy-server for the proxy
  - swift-object, swift-container, swift-account for storage
- Add Kolla config.json ConfigMaps for each service
- Update security contexts to allow Kolla containers to run as root
- Create a shared PV pointing to the same NFS as zuul-config
- Remove hardcoded commands, use Kolla's kolla_start entrypoint
- Fix PVC binding to use a separate PV in the swift-proxy namespace
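Kolla images read /var/lib/kolla/config_files/config.json at startup to copy configs into place and pick the command to exec. A sketch for the proxy service (ConfigMap name and file names assumed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: swift-proxy-kolla-config   # name assumed
  namespace: swift-proxy
data:
  config.json: |
    {
      "command": "swift-proxy-server /etc/swift/proxy-server.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/proxy-server.conf",
          "dest": "/etc/swift/proxy-server.conf",
          "owner": "swift",
          "perm": "0600"
        }
      ]
    }
```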
- Add the symlink filter after versioned_writes to match production
- Document that validatetoken (custom middleware) is skipped in Kolla
- Pipeline now matches production except for the custom validatetoken filter
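An abridged, plausible sketch of the pipeline section inside the proxy ConfigMap (the middleware list is shortened here; validatetoken is the only production filter omitted):

```yaml
data:
  proxy-server.conf: |
    [pipeline:main]
    # abridged; symlink sits directly after versioned_writes as in production
    pipeline = catch_errors proxy-logging cache versioned_writes symlink proxy-logging proxy-server
```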
- Add a swift-rings volume mount to the proxy deployment
- Add a swift-rings volume mount to all storage server containers
- Fixes: FileNotFoundError for container.ring.gz
- Ring files are created by the ring-builder job hook
GitOps approach:
- Remove all hooks (Helm and ArgoCD hooks are not GitOps-friendly)
- Use sync-wave: -1 to ensure the job runs before the deployments (see the sketch below)
- ArgoCD will deploy and manage the job as a regular resource
- The job creates the rings ConfigMap that the pods depend on
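The wave ordering is plain metadata rather than a hook, e.g. (job name assumed):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: swift-ring-builder              # name assumed
  namespace: swift-proxy
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # applied before the wave-0 deployments
```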
Preprod has only one storage node, so it requires a replica factor of 1; production uses 3 replicas across multiple storage nodes.
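Inside the ring-builder script this is just the replicas argument to swift-ring-builder create, whose arguments are <part_power> <replicas> <min_part_hours>; the part power and min_part_hours shown here are assumptions:

```yaml
# fragment of the ring-builder job's shell script, embedded in the Job manifest
  - |
    swift-ring-builder /etc/swift/object.builder create 10 1 1     # preprod: replicas=1
    swift-ring-builder /etc/swift/container.builder create 10 1 1  # production would use 3
    swift-ring-builder /etc/swift/account.builder create 10 1 1
```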
- Replace kubectl with curl + Kubernetes API calls (see the sketch below)
- The Kolla image doesn't have kubectl installed
- Use the service account token for authentication
- Create or update the ConfigMap with ring files as binaryData
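A sketch of the create-or-update step under these constraints (namespace, ConfigMap name, and ring paths are assumptions; curl and base64 must be present in the image). POST creates the ConfigMap, and a 409 conflict falls back to PUT; binaryData values must be base64-encoded:

```yaml
command:
  - /bin/bash
  - -c
  - |
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    API=https://kubernetes.default.svc
    NS=swift-proxy
    BODY=$(cat <<EOF
    {"apiVersion": "v1", "kind": "ConfigMap",
     "metadata": {"name": "swift-rings", "namespace": "$NS"},
     "binaryData": {
       "object.ring.gz": "$(base64 -w0 /etc/swift/object.ring.gz)",
       "container.ring.gz": "$(base64 -w0 /etc/swift/container.ring.gz)",
       "account.ring.gz": "$(base64 -w0 /etc/swift/account.ring.gz)"}}
    EOF
    )
    code=$(curl -s -o /dev/null -w '%{http_code}' --cacert "$CACERT" \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -X POST -d "$BODY" "$API/api/v1/namespaces/$NS/configmaps")
    if [ "$code" = "409" ]; then
      # ConfigMap already exists: replace it (PUT without resourceVersion overwrites)
      curl -s --cacert "$CACERT" \
        -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
        -X PUT -d "$BODY" "$API/api/v1/namespaces/$NS/configmaps/swift-rings"
    fi
```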
- Create a Role allowing ConfigMap create/update operations
- Bind the Role to the service account used by the ring-builder job
- Fixes: 403 Forbidden when the job tries to create the rings ConfigMap
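A sketch of the RBAC pair matching the job's create-or-update calls, with resource names assumed (the ServiceAccount itself already exists, as noted below):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: swift-ring-builder     # name assumed
  namespace: swift-proxy
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: swift-ring-builder
  namespace: swift-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: swift-ring-builder
subjects:
  - kind: ServiceAccount
    name: swift-ring-builder   # the existing service account
    namespace: swift-proxy
```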
ServiceAccount already exists in serviceaccount.yaml
Kolla needs to write configuration files to /etc/swift, but mounting the rings ConfigMap there makes it read-only. Changed approach:
- Mount the swift-rings ConfigMap at /srv/rings (read-only)
- Let Kolla initialize and write configs to /etc/swift
- Copy ring files from /srv/rings to /etc/swift after Kolla completes
- Start the Swift servers with the ring files in place

Applied to the proxy deployment and all three storage containers (object-server, container-server, account-server).
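The resulting mount layout, sketched for one container (volume and ConfigMap names assumed):

```yaml
        volumeMounts:
          - name: swift-rings
            mountPath: /srv/rings   # read-only ConfigMap, out of Kolla's way
            readOnly: true
          # /etc/swift stays writable for Kolla's rendered configs
      volumes:
        - name: swift-rings
          configMap:
            name: swift-rings
```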
kolla_start immediately executes the Swift server, which fails because the ring files aren't copied yet. Changed to:
1. Call kolla_set_configs directly to set up the configs
2. Copy ring files from /srv/rings to /etc/swift
3. Exec the Swift server with the rings in place

Also use full venv paths for the Swift binaries.
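A sketch of the container command implementing those three steps; the venv path follows Kolla's Ubuntu binary images and is an assumption (the containers run as root per the earlier security-context change):

```yaml
command:
  - /bin/bash
  - -c
  - |
    sudo -E kolla_set_configs                  # step 1: render configs into /etc/swift
    cp /srv/rings/*.ring.gz /etc/swift/        # step 2: put rings in place
    exec /var/lib/kolla/venv/bin/swift-object-server \
      /etc/swift/object-server.conf            # step 3: exec the server
```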
Changed the zuul-logs cloud configuration to use the internal Swift cluster instead of OBS:
- Endpoint: http://swift-proxy-preprod.swift-proxy.svc.cluster.local:8080
- No authentication needed (internal cluster access)
- Shares SFS Turbo storage with Zuul
- Container: zuul-preprod-logs

This fixes 403 Forbidden errors when uploading logs to OBS.
…empauth
- Re-enable log uploads (zuul_site_upload_logs: true)
- Configure the zuul-logs cloud to use the internal Swift cluster
- Use v1password auth with credentials from Vault (secret/data/zuul/logs/otc_technical_user)
- Swift tempauth configured to read account, user, and password from Vault
- Credentials: swift_username (account:user format), swift_password
- Region set to eu-de for consistency with OTC
- ArgoCD Vault Plugin will inject the secrets at deployment time
…g storage
- Remove tempauth from the Swift proxy pipeline (it was causing 503 errors)
- Add formpost middleware for unauthenticated uploads
- Configure clouds.yaml with auth_type: none and a direct endpoint (see the sketch below)
- Simplifies authentication for the internal-only log storage cluster
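A sketch of the final clouds.yaml stanza; openstacksdk's 'none' auth plugin takes the endpoint under auth:, and any account path on the endpoint is omitted here:

```yaml
clouds:
  zuul-logs:
    auth_type: none
    auth:
      endpoint: http://swift-proxy-preprod.swift-proxy.svc.cluster.local:8080
    region_name: eu-de   # retained from the earlier change for OTC consistency
```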