
feat(chart): add chart #56

Open
Vad1mo wants to merge 13 commits into main from feat/chart

Conversation

Contributor

@Vad1mo Vad1mo commented Mar 10, 2026

Summary

Add a dedicated harbor-next chart.

Related Issues

Type of Change

  • Bug fix (fix:)
  • New feature (feat:)
  • Breaking change (feat!: / fix!:)
  • Documentation (docs:)
  • Refactoring (refactor:)
  • CI/CD or build changes (ci: / build:)
  • Dependencies update (chore:)

Release Notes

Testing

  • Unit tests added/updated
  • Manual testing performed

Checklist

  • PR title follows Conventional Commits format
  • Commits are signed off (git commit -s)
  • No new warnings introduced

Summary by CodeRabbit

  • New Features

    • Delivered a complete Helm chart for Harbor Next with install/upgrade flows, schema validation, cert-manager support, tracing, metrics, multiple exposure options, HA guidance, and lifecycle tasks.
  • Documentation

    • Added comprehensive README, migration notes, deployment guides (k3s, OpenShift, Rancher, AWS IRSA, Nutanix), examples, and remaining-gaps guidance.
  • Chores

    • Added license and packaging ignore rules; chart maintenance/automation tasks.
  • Tests

    • Added extensive chart unit and template tests covering components, exposure modes, storage, TLS, PDBs, and integrations.

Copilot AI review requested due to automatic review settings March 10, 2026 15:35

coderabbitai bot commented Mar 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds a complete Helm chart for Harbor Next under deploy/chart/ (chart metadata, values/schema, extensive templates, tests, docs, examples, platform guides) plus Taskfile/task include for helm workflows and CI-related tooling and example automation scripts for platform deployments.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Top-level Taskfile include**<br>`Taskfile.yml`, `taskfile/helm.yml` | Added helm include alias `[h]` in Taskfile.yml and introduced taskfile/helm.yml providing a comprehensive Helm QA/task workflow used by the top-level Taskfile. |
| **Chart metadata & packaging**<br>`deploy/chart/Chart.yaml`, `deploy/chart/.helmignore`, `deploy/chart/LICENSE` | New Chart descriptor, license file, and .helmignore to support Helm chart packaging and distribution. |
| **Chart defaults & schema**<br>`deploy/chart/values.yaml`, `deploy/chart/values.schema.json`, `deploy/chart/ci/*.yaml` | Introduced exhaustive default values and a Draft-2020-12 JSON Schema for values validation; added CI-focused values under deploy/chart/ci/. |
| **Helm task orchestration**<br>`deploy/chart/Taskfile.yaml` | New Taskfile for chart-local tasks (lint, template, docs, deps, test, package, install/upgrade/uninstall, CI orchestration). |
| **Helpers & core templates**<br>`deploy/chart/templates/_helpers.tpl`, `deploy/chart/templates/_helpers-*.tpl` | Large set of Helm helpers: naming/labels, toEnvVars, image helpers, validation, DB/Redis/TLS/trace/URL helpers, and pod scheduling helpers. |
| **Core / Jobservice / Portal**<br>`deploy/chart/templates/core.*`, `deploy/chart/templates/jobservice.*`, `deploy/chart/templates/portal.*` | New ConfigMaps, Secrets, Deployments, Services, PVCs and related wiring for core, jobservice, and portal components with health checks, env wiring, and secret handling. |
| **Registry & registryctl**<br>`deploy/chart/templates/registry.*`, `deploy/chart/templates/registryctl.*` | Registry and registryctl ConfigMaps, Secrets, Deployments, Service, storage backend templating, and PVC support (filesystem/S3/etc.). |
| **Optional components & infra**<br>`deploy/chart/templates/exporter.*`, `deploy/chart/templates/trivy.*`, `deploy/chart/templates/serviceaccounts.yaml`, `deploy/chart/templates/pdbs.yaml`, `deploy/chart/templates/servicemonitor.yaml`, `deploy/chart/templates/extraManifests.yaml`, `deploy/chart/templates/validate-values.yaml` | Templates for exporter, Trivy (StatefulSet), ServiceAccounts generation, PDBs, ServiceMonitor, extra manifests injection, and values validation include. |
| **Ingress / Exposure**<br>`deploy/chart/templates/ingress.yaml`, `deploy/chart/templates/expose-service.yaml`, `deploy/chart/templates/expose-route.yaml`, `deploy/chart/templates/gateway-httproute.yaml`, `deploy/chart/templates/ingress.secret.yaml`, `deploy/chart/templates/tls-certificates.yaml` | Support for ClusterIP/NodePort/LoadBalancer, OpenShift Route, Gateway API HTTPRoute, TLS secret generation/reuse, and cert-manager Certificates per ingress host. |
| **NOTES, packaging & CI tasks**<br>`deploy/chart/NOTES.txt`, `deploy/chart/Taskfile.yaml`, `Taskfile.yml` | Post-install NOTES, packaging tasks, and Taskfile integration (helm include added to top-level Taskfile). |
| **Docs, README & gaps**<br>`deploy/chart/README.md`, `deploy/chart/README.md.gotmpl`, `deploy/chart/REMAINING-GAPS.md`, `deploy/chart/docs/guide/*` | Autogenerated README template + rendered README, remaining-gaps doc, and platform-specific guides (k3s, OpenShift, Rancher, Nutanix, AWS IRSA, etc.). |
| **Examples & platform artifacts**<br>`deploy/chart/example/*`, `deploy/chart/example/aws-eks-irsa/*` | Multiple example values and platform examples (k3d-local, rke2-rancher, openshift, aws-eks-irsa) including EKS IRSA scripts (setup/teardown/verify), IAM policy, cluster config, and example values files. |
| **Helm tests**<br>`deploy/chart/tests/*.yaml` | Extensive helm-unittest suites covering components, exposure, ingress/gateway, TLS, PDBs, ServiceAccounts, ServiceMonitor, extraManifests, image pull secrets, and many field-level assertions. |
| **Templates — assorted manifests**<br>`deploy/chart/templates/*` | Many new template files across components to render Kubernetes resources (ConfigMaps, Secrets, Deployments, Services, StatefulSets, PVCs, ServiceAccounts, NOTES, certs). |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Dev/CI
    participant Helm as Helm (chart)
    participant K8s as Kubernetes API
    participant DB as External DB
    participant Pods as Application Pods

    Dev->>Helm: helm upgrade --install (chart + values)
    Helm->>K8s: Apply rendered manifests (Deployments, Services, Secrets, ConfigMaps, Ingress, Certificates)
    K8s-->>Pods: Schedule & start Pods (core, registry, jobservice, trivy, exporter, etc.)
    Pods->>DB: Connect using provided database host/secret
    Pods->>K8s: Expose readiness/liveness probes & register Services
    K8s-->>Helm: Resource status/events returned
    Helm-->>Dev: Install/upgrade complete (NOTES, outputs)
```

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes



github-actions bot commented Mar 10, 2026

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Snapshot Warnings

⚠️: No snapshots were found for the head SHA c1300ff.
Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

Scanned Files

None

@github-actions github-actions bot added the documentation (Improvements or additions to documentation) and component/ci labels Mar 10, 2026
@Vad1mo Vad1mo closed this Mar 10, 2026
@Vad1mo Vad1mo deleted the feat/chart branch March 10, 2026 15:36

Copilot AI left a comment


Pull request overview

This PR appears to expand the GSD framework under .claude/ by adding a large set of planning/research/debugging templates, references, and command definitions (including model-profile configuration and verification/UAT flows).

Changes:

  • Add new research project templates (stack/features/architecture/pitfalls + synthesized summary template).
  • Add planning artifacts templates (PROJECT/REQUIREMENTS, phase context, discovery, UAT, debug) and codebase mapping templates.
  • Add/extend many gsd:* commands, plus new references for model profiles, git integration, and phase parsing; bump GSD version.

Reviewed changes

Copilot reviewed 79 out of 234 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
.claude/get-shit-done/templates/research-project/SUMMARY.md Adds a research synthesis template intended to drive roadmap implications.
.claude/get-shit-done/templates/research-project/STACK.md Adds a stack research template for .planning/research/STACK.md.
.claude/get-shit-done/templates/research-project/PITFALLS.md Adds a pitfalls research template for .planning/research/PITFALLS.md.
.claude/get-shit-done/templates/research-project/FEATURES.md Adds a feature landscape research template for .planning/research/FEATURES.md.
.claude/get-shit-done/templates/research-project/ARCHITECTURE.md Adds an architecture research template for .planning/research/ARCHITECTURE.md.
.claude/get-shit-done/templates/requirements.md Adds a requirements template for .planning/REQUIREMENTS.md.
.claude/get-shit-done/templates/project.md Adds a PROJECT.md template for .planning/PROJECT.md.
.claude/get-shit-done/templates/planner-subagent-prompt.md Adds a prompt template for spawning the planner subagent with planning context.
.claude/get-shit-done/templates/milestone.md Adds a milestone entry template for .planning/MILESTONES.md.
.claude/get-shit-done/templates/milestone-archive.md Adds a template for milestone archive files under .planning/milestones/.
.claude/get-shit-done/templates/discovery.md Adds a discovery template for shallow option/library decisions.
.claude/get-shit-done/templates/debug-subagent-prompt.md Adds a prompt template for spawning the debugger subagent.
.claude/get-shit-done/templates/continue-here.md Adds a handoff template for resumable .continue-here.md files.
.claude/get-shit-done/templates/context.md Adds a phase context template capturing implementation decisions.
.claude/get-shit-done/templates/config.json Adds default configuration toggles (research/plan-check/verifier, parallelization, gates, safety).
.claude/get-shit-done/templates/codebase/testing.md Adds a template for documenting test frameworks and patterns.
.claude/get-shit-done/templates/codebase/structure.md Adds a template for documenting physical codebase structure.
.claude/get-shit-done/templates/codebase/stack.md Adds a template for documenting actual codebase runtime stack.
.claude/get-shit-done/templates/codebase/integrations.md Adds a template for documenting external integrations/dependencies.
.claude/get-shit-done/templates/codebase/conventions.md Adds a template for documenting coding conventions.
.claude/get-shit-done/templates/codebase/concerns.md Adds a template for documenting actionable codebase risks/concerns.
.claude/get-shit-done/templates/codebase/architecture.md Adds a template for documenting conceptual codebase architecture.
.claude/get-shit-done/templates/UAT.md Adds a persistent UAT session tracking template per phase.
.claude/get-shit-done/templates/DEBUG.md Adds a persistent debug session tracking template.
.claude/get-shit-done/references/ui-brand.md Adds UI output conventions (banners/boxes/progress blocks).
.claude/get-shit-done/references/tdd.md Adds TDD guidance and a TDD plan structure reference.
.claude/get-shit-done/references/questioning.md Adds questioning guidance for project initialization/milestones.
.claude/get-shit-done/references/planning-config.md Adds documentation for config schema and behaviors (git, planning).
.claude/get-shit-done/references/phase-argument-parsing.md Adds reference for normalizing/parsing phase arguments.
.claude/get-shit-done/references/model-profiles.md Adds a model profile table mapping agents to models.
.claude/get-shit-done/references/model-profile-resolution.md Adds a recommended pattern for resolving model profile from config.
.claude/get-shit-done/references/git-planning-commit.md Adds guidance for committing planning artifacts via gsd-tools.js commit.
.claude/get-shit-done/references/git-integration.md Adds git integration rules and commit message patterns.
.claude/get-shit-done/references/decimal-phase-calculation.md Adds reference on computing next decimal phase numbers.
.claude/get-shit-done/references/continuation-format.md Adds a standard "Next Up" continuation output format reference.
.claude/get-shit-done/VERSION Bumps GSD version to 1.18.0.
.claude/commands/gsd/verify-work.md Adds a command to run conversational UAT with persistent state + diagnosis.
.claude/commands/gsd/update.md Adds a command routing to an update workflow with changelog display.
.claude/commands/gsd/settings.md Adds a command to configure GSD workflow toggles and model profile.
.claude/commands/gsd/set-profile.md Adds a command to switch model profile (quality/balanced/budget).
.claude/commands/gsd/resume-work.md Adds a command routing to a resume workflow that restores context and routes next steps.
.claude/commands/gsd/research-phase.md Adds a command to run standalone phase research (usually via plan-phase).
.claude/commands/gsd/remove-phase.md Adds a command to remove an unstarted future roadmap phase and renumber subsequent phases.
.claude/commands/gsd/reapply-patches.md Adds a command to reapply local patches after updates wipe/reinstall files.
.claude/commands/gsd/quick.md Adds a quick-task execution command bypassing optional agents.
.claude/commands/gsd/progress.md Adds a progress command that routes to planning or execution.
.claude/commands/gsd/plan-phase.md Updates/defines phase planning command with research + verification loop.
.claude/commands/gsd/plan-milestone-gaps.md Adds a command to create phases for closing gaps from milestone audit.
.claude/commands/gsd/pause-work.md Adds a command to create .continue-here.md handoff + WIP commit.
.claude/commands/gsd/new-project.md Adds a command to initialize a new project with questioning/research/requirements/roadmap.
.claude/commands/gsd/new-milestone.md Adds a command to start a new milestone cycle for an existing project.
.claude/commands/gsd/map-codebase.md Adds a command to map codebase into .planning/codebase/ via parallel agents.
.claude/commands/gsd/list-phase-assumptions.md Adds a command to surface Claude assumptions prior to planning.
.claude/commands/gsd/join-discord.md Adds a command that outputs the Discord invite link.
.claude/commands/gsd/insert-phase.md Adds a command to insert decimal phases for urgent work.
.claude/commands/gsd/help.md Adds a command to output the help reference content only.
.claude/commands/gsd/execute-phase.md Adds a command to execute all plans in a phase with wave parallelism.
.claude/commands/gsd/discuss-phase.md Adds a command to gather phase context and write {phase}-CONTEXT.md.
.claude/commands/gsd/debug.md Adds a command to run systematic debugging with persistent state + subagents.
.claude/commands/gsd/complete-milestone.md Adds a command to archive a milestone, update docs, and tag git.
.claude/commands/gsd/check-todos.md Adds a command to list/select todos and route to appropriate next action.
.claude/commands/gsd/audit-milestone.md Adds a command to audit milestone completion vs requirements/integration/E2E.
.claude/commands/gsd/add-todo.md Adds a command to capture a structured todo and update state.
.claude/commands/gsd/add-phase.md Adds a command to append a new phase to the roadmap.
.claude/agents/gsd-research-synthesizer.md Adds an agent spec to synthesize research outputs into .planning/research/SUMMARY.md.
.claude/agents/gsd-executor.md Adds/updates executor agent spec for plan execution, checkpoints, commits, and state updates.



@cubic-dev-ai cubic-dev-ai bot left a comment


6 issues found across 234 files

Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name=".claude/get-shit-done/templates/UAT.md">

<violation number="1" location=".claude/get-shit-done/templates/UAT.md:77">
P2: `status` rules are inconsistent with the documented lifecycle by omitting `diagnosed` as an allowed value.</violation>
</file>

<file name=".claude/agents/gsd-phase-researcher.md">

<violation number="1" location=".claude/agents/gsd-phase-researcher.md:173">
P2: The RESEARCH.md template contradicts the requirement that `## User Constraints` be the first content section, so the generated output can omit/put constraints in the wrong place.</violation>
</file>

<file name=".claude/commands/gsd/research-phase.md">

<violation number="1" location=".claude/commands/gsd/research-phase.md:58">
P2: `${PHASE}` is never defined in this command, so these lookups will miss existing research/context files. Use `phase_dir` (from init) or `phase_number` consistently for the paths.</violation>

<violation number="2" location=".claude/commands/gsd/research-phase.md:132">
P2: `{slug}` is not defined anywhere in this command. Use the `phase_dir` value from init (or a defined `phase_slug`) so the researcher writes to the correct phase folder.</violation>
</file>

<file name=".claude/agents/gsd-executor.md">

<violation number="1" location=".claude/agents/gsd-executor.md:275">
P1: SUMMARY output path is hardcoded to `XX-name` instead of using the runtime phase directory, which can write to a non-existent/wrong location.</violation>

<violation number="2" location=".claude/agents/gsd-executor.md:368">
P1: Final commit command uses a hardcoded summary file path (`XX-name`), so the commit can miss or fail to stage the real SUMMARY file.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Signed-off-by: Vadim Bauer <vb@container-registry.com>
@Vad1mo Vad1mo restored the feat/chart branch March 10, 2026 17:23
@Vad1mo Vad1mo reopened this Mar 10, 2026
@github-actions github-actions bot removed the documentation (Improvements or additions to documentation) label Mar 10, 2026
@coderabbitai coderabbitai bot added the enhancement (New feature or request) and tests labels Mar 10, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (19)
<details>
<summary>deploy/chart/example/aws-eks-irsa/iam-policy.json-26-26 (1)</summary><blockquote>

`26-26`: _⚠️ Potential issue_ | _🟡 Minor_

**Parameterize the RDS ARN region in this example.**

Every other value in this ARN (ACCOUNT_ID, DB_RESOURCE_ID) is a placeholder, but the region is hardcoded to eu-central-1. Users copying this policy for any other region will have an invalid policy. The setup.sh script also needs updating to replace the region during sed substitution.

<details>
<summary>Suggested fix</summary>

```diff
-      "Resource": "arn:aws:rds-db:eu-central-1:ACCOUNT_ID:dbuser:DB_RESOURCE_ID/harbor_iam_user"
+      "Resource": "arn:aws:rds-db:REGION:ACCOUNT_ID:dbuser:DB_RESOURCE_ID/harbor_iam_user"
```

Also update setup.sh line 79 to include:

```diff
   -e "s|ACCOUNT_ID|${AWS_ACCOUNT_ID}|g" \
+  -e "s|REGION|${AWS_REGION}|g" \
```

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/iam-policy.json` at line 26, The RDS ARN in
the policy is hardcoded to eu-central-1—replace the literal region in the
Resource string
"arn:aws:rds-db:eu-central-1:ACCOUNT_ID:dbuser:DB_RESOURCE_ID/harbor_iam_user"
with a region placeholder (e.g. REGION or ${AWS_REGION}) so the ARN becomes
parameterized, and update your setup.sh replacement logic to also substitute
that REGION placeholder (add the REGION replacement to the same sed command or
add a new sed replacement alongside ACCOUNT_ID and DB_RESOURCE_ID) so the
example policy is valid for other regions.

</details>

</blockquote></details>
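The placeholder substitution discussed in this comment can be exercised in isolation; a minimal shell sketch with illustrative values (the region, account ID, and resource ID below are made up, not taken from the PR):

```shell
# Stand-in for the setup.sh substitution step: resolve the REGION/ACCOUNT_ID/
# DB_RESOURCE_ID placeholders in the example policy's ARN.
AWS_REGION="eu-central-1"
AWS_ACCOUNT_ID="123456789012"
DB_RESOURCE_ID="db-ABCDEFGH"

arn='arn:aws:rds-db:REGION:ACCOUNT_ID:dbuser:DB_RESOURCE_ID/harbor_iam_user'
resolved=$(printf '%s' "$arn" | sed \
  -e "s|REGION|${AWS_REGION}|g" \
  -e "s|ACCOUNT_ID|${AWS_ACCOUNT_ID}|g" \
  -e "s|DB_RESOURCE_ID|${DB_RESOURCE_ID}|g")
echo "$resolved"
```

With every segment parameterized, the same policy template works for any region.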
<details>
<summary>deploy/chart/docs/guide/suse.md-136-141 (1)</summary><blockquote>

`136-141`: _⚠️ Potential issue_ | _🟡 Minor_

**Add language specifier to fenced code block.**

The code block showing the Harbor URL lacks a language specifier.

<details>
<summary>Suggested fix</summary>

````diff
-```
+```text
 https://harbor.example.com
 ```
````

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @deploy/chart/docs/guide/suse.md around lines 136 - 141, Update the fenced
code block that currently contains "https://harbor.example.com" so it includes a
language specifier; change the opening fence from ``` to ```text so the block
reads ```text followed by the URL and the closing ```, ensuring the snippet is
explicitly marked as plain text.

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/aws-eks-irsa/setup.sh-305-309 (1)</summary><blockquote>

`305-309`: _⚠️ Potential issue_ | _🟡 Minor_

**Sensitive credentials printed to stdout.**

The script outputs `DB_MASTER_PASSWORD` and other sensitive values to stdout. In CI/CD environments, this could be captured in logs. Consider writing to a file with restricted permissions or prompting the user to retrieve values via AWS Secrets Manager.



<details>
<summary>Suggested improvement</summary>

```diff
 echo "--- Saved values ---"
 echo "KUBECONFIG=${KUBECONFIG_PATH}"
 echo "DB_ENDPOINT=${DB_ENDPOINT}"
-echo "DB_MASTER_PASSWORD=${DB_MASTER_PASSWORD}"
+echo "DB_MASTER_PASSWORD=<stored in ${HOME}/.harbor-setup-creds>"
 echo "BUCKET_NAME=${BUCKET_NAME}"
 echo "ROLE_ARN=${ROLE_ARN}"
+
+# Save sensitive values to a protected file
+umask 077
+cat > "${HOME}/.harbor-setup-creds" <<EOF
+DB_MASTER_PASSWORD=${DB_MASTER_PASSWORD}
+EOF
+echo "Credentials saved to ${HOME}/.harbor-setup-creds (mode 600)"
```

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/setup.sh` around lines 305 - 309, Do not
print sensitive secrets to stdout; remove or mask direct echoes of
DB_MASTER_PASSWORD (and optionally other sensitive variables like DB_ENDPOINT)
and instead write those values to a file with restricted permissions (e.g.,
create an output file, chown/chmod 600) or store/retrieve them from a secrets
manager; update the lines that echo KUBECONFIG_PATH, DB_ENDPOINT,
DB_MASTER_PASSWORD, BUCKET_NAME so that DB_MASTER_PASSWORD is not printed
(either omit, replace with a masked placeholder, or save only to the restricted
file) and add a short comment/instruction indicating how to retrieve the secret
from the secure store.
```

</details>

</blockquote></details>
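The restricted-permissions alternative suggested in this comment comes down to setting a strict umask before writing the credentials file; a small sketch with an illustrative password and a temp-dir path (the real script would use a stable location such as `${HOME}`):

```shell
# Write the secret to a file created with mode 600 instead of echoing it.
DB_MASTER_PASSWORD="example-password"   # illustrative only
umask 077                               # new files get owner-only permissions
CREDS_DIR="$(mktemp -d)"
CREDS_FILE="${CREDS_DIR}/harbor-setup-creds"
cat > "${CREDS_FILE}" <<EOF
DB_MASTER_PASSWORD=${DB_MASTER_PASSWORD}
EOF
echo "DB_MASTER_PASSWORD=<stored in ${CREDS_FILE}>"
```

The printed line reveals only the file location, so CI logs never capture the password itself.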
<details>
<summary>deploy/chart/docs/guide/suse.md-108-112 (1)</summary><blockquote>

`108-112`: _⚠️ Potential issue_ | _🟡 Minor_

**Add language specifier to fenced code block.**

The code block showing the `/etc/hosts` entry lacks a language specifier. While not strictly code, using `text` or `plaintext` satisfies the linter and improves consistency.



<details>
<summary>Suggested fix</summary>

````diff
-```
+```text
 127.0.0.1 harbor.example.com
 ```
````

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @deploy/chart/docs/guide/suse.md around lines 108 - 112, Update the fenced
code block that contains the /etc/hosts entry so it includes a language
specifier (e.g., use "text" or "plaintext") to satisfy the linter; locate the
block that currently contains the line "127.0.0.1 harbor.example.com" and change
the opening fence to include the language identifier (for example, ```text)
while keeping the entry itself unchanged.


</details>

</blockquote></details>
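Both language-specifier findings in this review reduce to the same MD040 rule: an opening fence with no language tag. A rough sketch of such a check, using a stand-in document (the awk heuristic is illustrative, not how markdownlint is implemented):

```shell
# Build a tiny sample document whose opening fence lacks a language tag.
printf '%s\n' '```' '127.0.0.1 harbor.example.com' '```' > /tmp/md040-demo.md

# Odd-numbered fence lines are openers; flag bare openers with no language.
result=$(awk '/^```/ { n++; if (n % 2 == 1 && $0 == "```") print "unlabeled fence at line " NR }' /tmp/md040-demo.md)
echo "$result"
```

Adding `text` after the opening backticks makes the block pass this check.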
<details>
<summary>deploy/chart/example/aws-eks-irsa/verify.sh-96-101 (1)</summary><blockquote>

`96-101`: _⚠️ Potential issue_ | _🟡 Minor_

**IAM auth log check doesn't contribute to pass/fail count.**

Unlike other checks, the IAM auth log verification only prints a warning without incrementing the FAIL counter. This could mask a genuine IAM authentication issue. Consider making it consistent with other checks or explicitly documenting why this is a soft check.



<details>
<summary>Suggested fix</summary>

```diff
 if echo "${CORE_LOGS}" | grep -qi "iam\|IAMAuth"; then
   echo "  PASS: Core logs mention IAM auth"
   ((PASS++))
 else
-  echo "  WARN: No IAM auth mention in recent core logs (may need to check earlier logs)"
+  echo "  WARN: No IAM auth mention in recent core logs (may need to check earlier logs)"
+  # Not failing — IAM references may appear earlier or use different keywords
 fi
```

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/verify.sh` around lines 96 - 101, The IAM
auth check reads CORE_LOGS and prints a WARN but doesn't update the FAIL
counter; make it consistent by incrementing the FAIL variable when no IAM
mention is found (or explicitly document rationale). Update the block that
checks CORE_LOGS (the if/else using grep "iam\|IAMAuth") to increment FAIL
(e.g., ((FAIL++))) and emit a failing message, or alternatively add a comment
explaining this is intentionally a non-fatal check; ensure references to PASS
and FAIL variables remain consistent with other checks in verify.sh.
```

</details>

</blockquote></details>
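The consistency concern in this comment is that every probe should feed the shared counters; a minimal sketch of the pattern with illustrative log text (not real Harbor output):

```shell
# Every check updates PASS or FAIL, so a missing signal cannot be silently
# masked by a warning-only branch.
PASS=0; FAIL=0
CORE_LOGS="connecting to database via password auth"   # stand-in log line

if echo "${CORE_LOGS}" | grep -Eqi "iam|IAMAuth"; then
  echo "  PASS: Core logs mention IAM auth"
  PASS=$((PASS + 1))
else
  echo "  FAIL: No IAM auth mention in recent core logs"
  FAIL=$((FAIL + 1))
fi
echo "PASS=${PASS} FAIL=${FAIL}"
```

With the stand-in log above, the check lands in the FAIL branch, so the final tally reflects the missing IAM signal.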
<details>
<summary>deploy/chart/Taskfile.yaml-32-35 (1)</summary><blockquote>

`32-35`: _⚠️ Potential issue_ | _🟡 Minor_

**`validate` does client-side parsing, not API validation.**

`kubectl apply --dry-run=client` never asks the API server to validate the rendered manifests, so the task description is overstating what this check catches. Either switch to `--dry-run=server` or rename the task/description to make the scope explicit.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/Taskfile.yaml` around lines 32 - 35, The Taskfile 'validate'
currently runs "helm template ... | kubectl apply --dry-run=client -f -" but the
description claims it validates against the Kubernetes API while
--dry-run=client only performs local parsing; either update the command to use
server-side validation (replace --dry-run=client with --dry-run=server so the
API server is consulted) or change the task name/desc to accurately state it
only does client-side/template parsing; adjust the Taskfile entry for validate
(the helm template + kubectl apply pipeline) and its desc field accordingly.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/docs/guide/nutanix.md-31-31 (1)</summary><blockquote>

`31-31`: _⚠️ Potential issue_ | _🟡 Minor_

**Replace “here” with descriptive link text.**

This trips MD059 and makes the rendered guide harder to scan.

<details>
<summary>💡 Suggested fix</summary>

```diff
-Create a file to override the default values (see [here](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs) for more):
+Create a file to override the default values (see [Grafana Alloy pod logs documentation](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs) for more):
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix.md` at line 31, Replace the generic link text
"here" in the sentence "Create a file to override the default values (see
[here](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs)
for more):" with a descriptive phrase (e.g., "Grafana Alloy docs — Collect logs
in Kubernetes: Pods logs" or "Grafana: Collect logs in Kubernetes (Pods logs)")
so the link is self-describing and MD059-compliant; update the markdown link
label accordingly to preserve the existing URL and improve accessibility and
scanability.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/_helpers-tls.tpl-14-16 (1)</summary><blockquote>

`14-16`: _⚠️ Potential issue_ | _🟡 Minor_

**Correct the `middleware` spelling in the helper definition and its caller.**

`harbor.middlware.enabled` is misspelled at line 14. The typo is already in use at `deploy/chart/templates/registry.configmap.yaml:168`, so both the definition and caller need to be updated to use the correct spelling `harbor.middleware.enabled`.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-tls.tpl` around lines 14 - 16, The helper
name is misspelled as "harbor.middlware.enabled"; rename the template definition
to "harbor.middleware.enabled" and update every template that calls the old name
to call "harbor.middleware.enabled" instead (preserve the same return
value/behavior, e.g., still returning false by default in the define block).
Search for the symbol "harbor.middlware.enabled" and replace it with
"harbor.middleware.enabled" across templates to ensure callers and the helper
match.
```

</details>

</blockquote></details>
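After applying the rename described in this comment, a recursive search can confirm that no template still uses the misspelled symbol; an illustrative sketch with stand-in template content (the file below is a mock, not the real chart):

```shell
# Create a mock templates dir containing only the corrected helper name.
demo_dir=$(mktemp -d)
printf '{{- define "harbor.middleware.enabled" -}}false{{- end -}}\n' \
  > "${demo_dir}/_helpers-tls.tpl"

# Any remaining hit on the old spelling means a caller was missed.
if grep -r "harbor.middlware.enabled" "${demo_dir}" >/dev/null; then
  result="stale references remain"
else
  result="rename complete"
fi
echo "$result"
```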
<details>
<summary>taskfile/helm.yml-115-118 (1)</summary><blockquote>

`115-118`: _⚠️ Potential issue_ | _🟡 Minor_

**Point the failure message to a real task target.**

`task docs` is not defined in this included Taskfile, so the current error message sends people to a nonexistent command. Point it at the actual chart docs task you expect them to run from the repo root.

<details>
<summary>💡 Suggested fix</summary>

```diff
-      - diff -q /tmp/helm-docs-expected.md /tmp/helm-docs-actual.md || (echo "README.md is out of date. Run{{\":\"}} task docs" && exit 1)
+      - diff -q /tmp/helm-docs-expected.md /tmp/helm-docs-actual.md || (echo "README.md is out of date. Run: task h:helm-docs" && exit 1)
```
</details>
Based on learnings: Use Task runner commands defined in `Taskfile.yml` and `taskfile/` for building, testing, and local development workflows.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@taskfile/helm.yml` around lines 115 - 118, The failure message currently
points to a nonexistent "task docs" target; update the echo string in the diff's
final line so it instructs users to run the actual chart documentation task from
the repo root (replace "task docs" with the correct Taskfile target name you use
for generating chart docs), e.g. change the echo in the line containing diff -q
... || (echo "README.md is out of date. Run{{":"}} task docs" && exit 1) to
reference the real target (keep the surrounding diff/helm-docs/grep flow
unchanged).
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/aws-eks-irsa/README.md-7-27 (1)</summary><blockquote>

`7-27`: _⚠️ Potential issue_ | _🟡 Minor_

**Add a language to the ASCII-art fence.**

The unlabeled fenced block at Line 7 triggers MD040. Mark it as `text` so the docs checks stay clean.

<details>
<summary>💡 Suggested fix</summary>

```diff
-```
+```text
 ┌─────────────────────────────────────────────────────────┐
 │  EKS Cluster (harbor-next-irsa)                         │
 │                                                         │
@@
                                  └────────────┘
-```
+```
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/README.md` around lines 7 - 27, The fenced
ASCII-art block in the README triggers MD040 because it lacks a language tag;
update the opening fence for the ASCII diagram to use the `text` language so the
markdown linter passes, leaving the closing fence unchanged; reference the
ASCII-art fenced block in the README.md and ensure only the opening fence is
modified.
```


</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/registryctl.configmap.yaml-7-9 (1)</summary><blockquote>

`7-9`: _⚠️ Potential issue_ | _🟡 Minor_

**Potential indentation mismatch with trace helper.**

The `harbor.trace.envs.registryctl` helper has 2-space indentation baked in. Combined with `nindent 2`, the trace environment variables will be indented 4 spaces instead of 2, causing YAML structure issues.

Either:
1. Remove the hardcoded indentation from the trace helpers (see earlier comment), or
2. Use `nindent 0` here if the helper's indentation is intentional

Additionally, the `_PLACEHOLDER` key appears to be a workaround. If the ConfigMap may be empty when tracing is disabled, consider wrapping the entire template in a conditional or using Helm's `data: {}` pattern.
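
If the placeholder only exists to keep the ConfigMap non-empty, a minimal sketch of the conditional pattern (the metadata name is a placeholder, and this assumes tracing is the only data source for this ConfigMap):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-registryctl  # placeholder name for illustration
{{- if .Values.trace.enabled }}
data:
  {{- include "harbor.trace.envs.registryctl" . | nindent 2 }}
{{- else }}
data: {}
{{- end }}
```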

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/registryctl.configmap.yaml` around lines 7 - 9, The
ConfigMap is mis-indented because the helper harbor.trace.envs.registryctl
already emits 2-space indentation but the template calls it with nindent 2,
producing 4 spaces; remove the extra indentation by changing the call to use
nindent 0 (i.e. replace `{{- include "harbor.trace.envs.registryctl" . | nindent
2 }}` with `... | nindent 0 }}`) or alternatively remove hardcoded indentation
inside the harbor.trace.envs.registryctl helper so it can be nindented here;
also remove the _PLACEHOLDER workaround by making the data section conditional
(wrap the ConfigMap creation around your trace-enabled condition) or emit an
empty data: {} when tracing is disabled to avoid an empty key.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/_helpers-jobservice.tpl-45-51 (1)</summary><blockquote>

`45-51`: _⚠️ Potential issue_ | _🟡 Minor_

**Missing validation for secretName when certSource is "secret".**

If `tls.certSource` is `"secret"` but `.Values.jobservice.secretName` is not provided, this template outputs an empty string, which could cause deployment failures.

<details>
<summary>Proposed fix with validation</summary>

```diff
 {{- define "harbor.jobservice.secretName" -}}
   {{- if eq .Values.tls.certSource "secret" -}}
-    {{- .Values.jobservice.secretName -}}
+    {{- required "jobservice.secretName is required when tls.certSource is 'secret'" .Values.jobservice.secretName -}}
   {{- else -}}
     {{- printf "%s-jobservice-internal-tls" (include "harbor.fullname" .) -}}
   {{- end -}}
 {{- end -}}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-jobservice.tpl` around lines 45 - 51, The
template helper harbor.jobservice.secretName doesn't validate that
.Values.jobservice.secretName is present when .Values.tls.certSource ==
"secret", which can emit an empty string; update the define
"harbor.jobservice.secretName" to check .Values.tls.certSource and, when it
equals "secret", assert that .Values.jobservice.secretName is non-empty (emit a
clear error via required or failFunc) and return that value; otherwise keep the
existing fallback that prints "%s-jobservice-internal-tls" using include
"harbor.fullname" .; ensure the validation references
.Values.jobservice.secretName exactly so deployments fail fast with a helpful
message.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/README.md-16-19 (1)</summary><blockquote>

`16-19`: _⚠️ Potential issue_ | _🟡 Minor_

**Make the chart path explicit in these install commands.**

`helm install harbor . -f example/k3d-local.yaml` only works from the chart root. From `deploy/chart/example/`, `.` points at the example directory instead of the chart, so the documented copy-paste flow is ambiguous.



Also applies to: 79-80

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/README.md` around lines 16 - 19, The helm install
commands in the README use '.' as the chart path (e.g., "helm install harbor .
-n harbor --create-namespace -f example/k3d-local.yaml"), which only works when
run from the chart root; update those commands (including the similar occurrence
around lines 79-80) to use an explicit chart path (for example "../../" or the
repo-relative path to the chart directory) so copy-pasting from
deploy/chart/example/ will install the correct chart and avoid ambiguity.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/NOTES.txt-27-40 (1)</summary><blockquote>

`27-40`: _⚠️ Potential issue_ | _🟡 Minor_

**Handle non-ingress installs in the push instructions.**

This section only renders commands when `ingress.enabled` is true. Gateway-backed installs and port-forward-only installs get the section header with no login/tag/push example, even though those exposure modes are supported above.
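
As a sketch, the fallback branch could reuse the host parsed from `externalURL` (assuming `externalURL` is always set, as the surrounding NOTES sections do):

```text
{{- if .Values.ingress.enabled }}
  ... existing ingress-based commands ...
{{- else }}
  docker login {{ (urlParse .Values.externalURL).host }}
  docker tag my-image:latest {{ (urlParse .Values.externalURL).host }}/library/my-image:latest
  docker push {{ (urlParse .Values.externalURL).host }}/library/my-image:latest
{{- end }}
```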

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/NOTES.txt` around lines 27 - 40, The NOTES template
only emits push/login/tag/push commands when .Values.ingress.enabled is true,
leaving gateway- or port-forward-only installs without useful examples; update
the template to add an else branch for when .Values.ingress.enabled is false
that renders the same docker login/tag/push commands using (urlParse
.Values.externalURL).host (or another appropriate host value you already expose)
so the NOTES always show a complete push example; locate the conditional blocks
referencing .Values.ingress.enabled, .Values.ingress.hosts and
.Values.externalURL and add the alternate rendering path there.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/registry.service.yaml-18-20 (1)</summary><blockquote>

`18-20`: _⚠️ Potential issue_ | _🟡 Minor_

**Gate the metrics port on `metrics.enabled`.**

Lines 18-20 are rendered unconditionally, so `metrics.enabled: false` still publishes the registry metrics port. That makes the toggle incomplete and leaves a dead service port at best.


<details>
<summary>Possible fix</summary>

```diff
-    - name: {{ include "harbor.metrics.portName" . | quote }}
-      port: {{ include "harbor.metrics.port" . }}
-      targetPort: {{ include "harbor.metrics.port" . }}
-      protocol: TCP
+    {{- if .Values.metrics.enabled }}
+    - name: {{ include "harbor.metrics.portName" . | quote }}
+      port: {{ include "harbor.metrics.port" . }}
+      targetPort: {{ include "harbor.metrics.port" . }}
+      protocol: TCP
+    {{- end }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/registry.service.yaml` around lines 18 - 20, The
service currently always exposes the metrics port; wrap the port/name entries in
a Helm conditional that checks .Values.metrics.enabled (the same toggle used
elsewhere) so the block that uses include "harbor.metrics.portName", include
"harbor.metrics.port" and targetPort is only rendered when metrics are enabled;
update registry.service.yaml to guard the port entries with an if/endif around
those lines (referencing the include templates "harbor.metrics.portName" and
"harbor.metrics.port" to locate the exact lines to change).
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/README.md-5-5 (1)</summary><blockquote>

`5-5`: _⚠️ Potential issue_ | _🟡 Minor_

**Grammar: Use hyphen in compound adjective.**

"cloud native" should be hyphenated as "cloud-native" when used as a compound adjective before a noun. As per coding guidelines, spell checking must pass on `.md` files.


<details>
<summary>Suggested fix</summary>

```diff
-A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud native container registry for Kubernetes.
+A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud-native container registry for Kubernetes.
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` at line 5, Update the README sentence that reads "A
modern, production-ready Helm chart for [Harbor Next] - the cloud native
container registry for Kubernetes." to hyphenate the compound adjective by
replacing "cloud native container registry" with "cloud-native container
registry" so the phrase becomes "the cloud-native container registry for
Kubernetes."; locate the exact phrase "cloud native container registry" in the
README and make this one-word hyphenation change to satisfy the spelling/grammar
guideline.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/jobservice.deployment.yaml-100-106 (1)</summary><blockquote>

`100-106`: _⚠️ Potential issue_ | _🟡 Minor_

**Remove unnecessary `and` with single argument.**

The `and` function on line 101 has only one argument, which is incorrect usage. The `and` function expects at least two arguments to perform a logical AND operation. With a single argument, it should just be a direct conditional check.



<details>
<summary>🐛 Proposed fix</summary>

```diff
-          {{- if and .Values.jobservice.persistence.enabled }}
+          {{- if .Values.jobservice.persistence.enabled }}
           persistentVolumeClaim:
             claimName: {{ .Values.jobservice.persistence.existingClaim | default (include "harbor.jobservice" .) }}
           {{- else }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/jobservice.deployment.yaml` around lines 100 - 106,
The conditional for the "job-logs" volume uses the template function and with
only one argument; replace the incorrect and expression with a direct test of
.Values.jobservice.persistence.enabled by removing the and() wrapper so the
block becomes conditional on .Values.jobservice.persistence.enabled (the volume
stanza around name: job-logs / persistentVolumeClaim / claimName which
references .Values.jobservice.persistence.existingClaim | default (include
"harbor.jobservice" .)).
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/values.schema.json-103-108 (1)</summary><blockquote>

`103-108`: _⚠️ Potential issue_ | _🟡 Minor_

**Inconsistent schema for `secretKey` - allows empty value.**

The `secretKey` property has `minLength: 0`, which allows empty strings, but the description states it should be "16 characters" for encryption. An empty or short encryption key could cause runtime issues or weaken security.

Consider requiring the key to be exactly 16 characters when provided, or making this clearer:



<details>
<summary>💡 Suggested improvement</summary>

```diff
     "secretKey": {
       "type": "string",
-      "description": "Secret key for encryption (16 characters)",
-      "minLength": 0,
+      "description": "Secret key for encryption (must be exactly 16 characters if provided, or omit to auto-generate)",
+      "minLength": 16,
       "maxLength": 16
     },
```

Alternatively, if empty is intentional (to trigger auto-generation), document this behavior in the description.
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/values.schema.json` around lines 103 - 108, The schema for the
secretKey property currently allows empty strings (minLength: 0) but the
description says "16 characters"; update the secretKey JSON schema to enforce
exact 16-character keys by setting "minLength": 16 and "maxLength": 16 (or
replace with a "pattern" validating 16 characters if you prefer specific
charset), and if empty/auto-generation is intentional instead, update the
"description" to document that an empty string triggers auto-generation rather
than changing validation.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/registry.configmap.yaml-168-168 (1)</summary><blockquote>

`168-168`: _⚠️ Potential issue_ | _🟡 Minor_

**Fix the `middlware` helper typo.**

Line 168 already fails the typos check, and if the intended define is `harbor.middleware.enabled`, this branch will never resolve the right helper. Rename this include, and its matching define if needed, consistently.


<details>
<summary>💡 Suggested fix</summary>

```diff
-    {{- if (include "harbor.middlware.enabled" .) }}
+    {{- if (include "harbor.middleware.enabled" .) }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/registry.configmap.yaml` at line 168, The helper
include "harbor.middlware.enabled" is misspelled and never matches the intended
helper; rename the include to "harbor.middleware.enabled" and update the
corresponding define/helper name (the define that registers the helper) to the
same correct spelling so both include "harbor.middleware.enabled" and its define
use "middleware" consistently (search for occurrences of
"harbor.middlware.enabled" and "middlware" and replace them with
"harbor.middleware.enabled" and "middleware" in the helper and all templates
such as the registry.configmap.yaml include).
```

</details>

</blockquote></details>

</blockquote></details>

<details>
<summary>🧹 Nitpick comments (22)</summary><blockquote>

<details>
<summary>deploy/chart/templates/image-pull-secrets.yaml (1)</summary><blockquote>

`1-9`: **Consider adding standard Helm labels to the Secret.**

The secret lacks standard labels (e.g., `helm.sh/chart`, `app.kubernetes.io/managed-by`) which are helpful for resource tracking and cleanup. Other resources in this chart consistently include these labels.

<details>
<summary>♻️ Proposed enhancement</summary>

```diff
 {{- if and .Values.imageCredentials .Values.imageCredentials.registry }}
 apiVersion: v1
 kind: Secret
 metadata:
   name: {{ .Release.Name }}-registry-creds
+  labels:
+    {{- include "harbor.labels" . | nindent 4 }}
 type: kubernetes.io/dockerconfigjson
 data:
   .dockerconfigjson: {{ include "imagePullSecret" . }}
 {{- end }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/image-pull-secrets.yaml` around lines 1 - 9, The
Secret created in the template is missing standard Helm labels; modify the
metadata block for the Secret (the resource with name {{ .Release.Name
}}-registry-creds) to add a labels map consistent with other chart resources
(e.g., include keys like helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version
}}", app.kubernetes.io/name: "{{ .Chart.Name }}", app.kubernetes.io/instance:
"{{ .Release.Name }}", and app.kubernetes.io/managed-by: "Helm"), ensuring
values use the existing template variables (.Chart.Name, .Chart.Version,
.Release.Name) so labels remain consistent and templated across the chart.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/aws-eks-irsa/verify.sh (1)</summary><blockquote>

`25-34`: **Consider preserving command output on failure for debugging.**

The `check()` function redirects all output to `/dev/null`, which makes debugging failures difficult. Consider capturing and displaying output only when a check fails.



<details>
<summary>Suggested improvement</summary>

```diff
 check() {
   local desc="$1"; shift
-  if "$@" >/dev/null 2>&1; then
+  local output
+  if output=$("$@" 2>&1); then
     echo "  PASS: ${desc}"
     ((PASS++))
   else
     echo "  FAIL: ${desc}"
+    echo "${output}" | sed 's/^/    /' | head -10
     ((FAIL++))
   fi
 }
```

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/verify.sh` around lines 25 - 34, The
check() function currently silences all output by redirecting "$@" to /dev/null;
change it to capture stdout/stderr (e.g., into a temp file or a variable) when
invoking "$@" in check(), then on success discard the capture and increment
PASS, but on failure print the captured output alongside the "FAIL: ${desc}"
message and increment FAIL; reference the check() invocation of "$@" and the
PASS/FAIL counters so the fix preserves the existing success/failure logic while
exposing command output only on failures.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/openshift/README.md (1)</summary><blockquote>

`31-32`: **Consider using a generic namespace instead of developer-specific name.**

The namespace `vad1mo-dev` appears throughout this guide and seems to be a personal developer namespace. For public documentation, consider using a generic namespace like `harbor` or `harbor-next` to make the guide more universally applicable.



<details>
<summary>Example changes needed</summary>

```diff
-helm install harbor-next . -n vad1mo-dev -f example/openshift/values.yaml
+helm install harbor-next . -n harbor -f example/openshift/values.yaml
```

Similar changes would be needed on lines 38, 41, 44, 81, 83, 84, 98, 99, 110, 118, and 119.

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/openshift/README.md` around lines 31 - 32, Replace the
developer-specific namespace "vad1mo-dev" used throughout the README examples
with a generic namespace such as "harbor" or "harbor-next"; specifically update
every occurrence in the helm install command (helm install harbor-next . -n
vad1mo-dev -f ...), any kubectl/oc commands that create or switch namespaces,
and other examples called out in the comment so they consistently use the chosen
generic namespace across the document.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/aws-eks-irsa/setup.sh (1)</summary><blockquote>

`263-274`: **SQL user creation uses here-doc with variable interpolation.**

While the `DB_IAM_USER` variable is controlled by environment defaults, if a user overrides it with special characters, it could cause SQL syntax errors or unexpected behavior. The current default `harbor_iam_user` is safe.



Consider adding input validation for the username:

```bash
# Validate DB_IAM_USER contains only safe characters
if [[ ! "${DB_IAM_USER}" =~ ^[a-z_][a-z0-9_]*$ ]]; then
  echo "ERROR: DB_IAM_USER must be a valid PostgreSQL identifier"; exit 1
fi
```

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/setup.sh` around lines 263 - 274, The
here-doc in setup.sh injects ${DB_IAM_USER} directly into SQL which can break or
be exploited if the variable contains special chars; before the kubectl run ...
psql <<SQL block, add validation that DB_IAM_USER matches a safe PostgreSQL
identifier (e.g. regex like ^[a-z_][a-z0-9_]*$) and exit with an error if it
does not, and/or sanitize/quote the identifier properly; update the script
around the kubectl run invocation (where DB_IAM_USER is interpolated) to enforce
this check so only valid usernames are used.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/aws-eks-irsa/teardown.sh (1)</summary><blockquote>

`78-93`: **Security group deletion may fail if ENIs are still attached.**

If network interfaces (from deleted Aurora or other resources) are still cleaning up, the security group deletion could fail with "DependencyViolation". The script continues due to `set -e` being in effect without explicit error handling here.



Consider adding a retry loop or explicit error handling:

```bash
if [ -n "${SG_ID}" ] && [ "${SG_ID}" != "None" ]; then
  echo "Deleting security group ${SG_ID}..."
  for i in {1..5}; do
    if aws ec2 delete-security-group --group-id "${SG_ID}" --region "${AWS_REGION}" 2>/dev/null; then
      echo "Deleted security group ${SG_ID}."
      break
    fi
    echo "  Waiting for ENIs to detach (attempt $i/5)..."
    sleep 10
  done
fi
```

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/aws-eks-irsa/teardown.sh` around lines 78 - 93, The
security-group delete can fail with DependencyViolation if ENIs are still
attached; update the block that checks SG_ID (variables SG_ID, VPC_ID, SG_NAME,
AWS_REGION) to retry the aws ec2 delete-security-group call instead of calling
it once: loop (e.g., 3–5 attempts) calling aws ec2 delete-security-group
--group-id "${SG_ID}" --region "${AWS_REGION}", check its exit status, break on
success, sleep between attempts, and log each attempt and a final failure
message if all retries fail; ensure the script does not abort unexpectedly under
set -e by handling the delete command’s failure inside the loop (so you capture
and react to the exit code rather than letting set -e stop the script).
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/tests/serviceaccount_test.yaml (1)</summary><blockquote>

`5-83`: **The count-only assertions are pretty permissive.**

Several cases only check the total document count, so the suite would still pass if the chart rendered the wrong ServiceAccount set but kept the same number of documents. Add per-resource assertions on `metadata.name`/kind for the component being toggled, not just the count.



Also applies to: 140-148
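
For instance, a helm-unittest case that pins the exporter ServiceAccount by kind and name instead of document count (the template path and rendered name are assumptions following the pattern above):

```yaml
- it: renders the exporter ServiceAccount when exporter.enabled is true
  set:
    exporter.enabled: true
  templates:
    - templates/exporter.serviceaccount.yaml  # assumed template path
  asserts:
    - isKind:
        of: ServiceAccount
    - equal:
        path: metadata.name
        value: RELEASE-NAME-harbor-next-exporter
```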

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/tests/serviceaccount_test.yaml` around lines 5 - 83, Tests in
serviceaccount_test.yaml rely on count-only assertions which can pass even if
the wrong ServiceAccount resources are rendered; update each case that toggles a
component to assert the specific resource name and kind. For the exporter tests,
add a documentSelector checking path: metadata.name value:
RELEASE-NAME-harbor-next-exporter and an asserts:isKind of ServiceAccount in the
"exporter.enabled" true/false cases; for the trivy tests use documentSelector
metadata.name RELEASE-NAME-harbor-next-trivy and isKind assertions when
trivy.serviceAccount.create is true, and when false assert the absence by
ensuring a selector for that metadata.name is not present (or keep the count
assert but also add the negative selector). Apply the same per-resource
metadata.name/kind assertions to the other count-only blocks (including the
tests around lines 140-148) so each toggle verifies the exact SA resource, not
just total count.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/_helpers-trace.tpl (3)</summary><blockquote>

`7-26`: **Hardcoded indentation in helper may cause YAML formatting issues.**

The template includes 2-space indentation within the helper definition (e.g., `  TRACE_ENABLED`). When callers use `nindent` to position the output, this compounds the indentation. Consider removing the leading spaces from the helper and letting callers control indentation entirely:

```diff
 {{- define "harbor.trace.envs" -}}
-  TRACE_ENABLED: "{{ .Values.trace.enabled }}"
-  TRACE_SAMPLE_RATE: "{{ .Values.trace.sample_rate }}"
-  TRACE_NAMESPACE: "{{ .Values.trace.namespace }}"
+TRACE_ENABLED: "{{ .Values.trace.enabled }}"
+TRACE_SAMPLE_RATE: "{{ .Values.trace.sample_rate }}"
+TRACE_NAMESPACE: "{{ .Values.trace.namespace }}"
   {{- if .Values.trace.attributes }}
-  TRACE_ATTRIBUTES: {{ .Values.trace.attributes | toJson | squote }}
+TRACE_ATTRIBUTES: {{ .Values.trace.attributes | toJson | squote }}
   {{- end }}
   {{- if eq .Values.trace.provider "jaeger" }}
-  TRACE_JAEGER_ENDPOINT: "{{ .Values.trace.jaeger.endpoint }}"
+TRACE_JAEGER_ENDPOINT: "{{ .Values.trace.jaeger.endpoint }}"
   ...
```

This requires updating all callers to use appropriate `nindent` values.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-trace.tpl` around lines 7 - 26, The helper
"harbor.trace.envs" emits hardcoded leading spaces before each env var which can
double-indent when callers use nindent; remove all leading spaces inside the
helper so each output line starts at column zero (e.g., change lines that begin
with "  TRACE_*" to start with "TRACE_*") and keep the conditional blocks
intact, then update every caller that includes the "harbor.trace.envs" template
to apply the correct nindent (or indent) value when rendering so indentation is
controlled by the caller rather than the helper.
```

</details>

---

`14-25`: **Implicit fallback to OTEL when provider is not "jaeger" may cause nil pointer issues.**

The `else` branch assumes OTEL configuration exists whenever the provider is not `"jaeger"`. If `trace.provider` is misconfigured or the OTEL values are not set, this could output empty/nil values. Consider explicitly checking for `"otel"`:

<details>
<summary>Proposed fix</summary>

```diff
   {{- if eq .Values.trace.provider "jaeger" }}
   TRACE_JAEGER_ENDPOINT: "{{ .Values.trace.jaeger.endpoint }}"
   TRACE_JAEGER_USERNAME: "{{ .Values.trace.jaeger.username }}"
   TRACE_JAEGER_AGENT_HOSTNAME: "{{ .Values.trace.jaeger.agent_host }}"
   TRACE_JAEGER_AGENT_PORT: "{{ .Values.trace.jaeger.agent_port }}"
-  {{- else }}
+  {{- else if eq .Values.trace.provider "otel" }}
   TRACE_OTEL_ENDPOINT: "{{ .Values.trace.otel.endpoint }}"
   TRACE_OTEL_URL_PATH: "{{ .Values.trace.otel.url_path }}"
   TRACE_OTEL_COMPRESSION: "{{ .Values.trace.otel.compression }}"
   TRACE_OTEL_INSECURE: "{{ .Values.trace.otel.insecure }}"
   TRACE_OTEL_TIMEOUT: "{{ .Values.trace.otel.timeout }}"
   {{- end }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-trace.tpl` around lines 14 - 25, The template
currently renders OTEL environment keys in the else branch whenever
.Values.trace.provider != "jaeger", which can emit empty/nil values if provider
is misconfigured; update the conditional logic in
deploy/chart/templates/_helpers-trace.tpl to explicitly check for eq
.Values.trace.provider "otel" (e.g., replace the bare else with an elif/check
for "otel") and render TRACE_OTEL_* only in that branch, and add a final else
that omits OTEL/Jaeger keys (or renders safe empty defaults) to avoid nil
outputs; refer to the template and the variables .Values.trace.provider,
.Values.trace.otel.*, and .Values.trace.jaeger.* when making the change.
```

</details>

---

`28-47`: **Same indentation issue exists in service-specific helpers.**

These helpers also have hardcoded 2-space indentation. Apply the same fix as suggested for `harbor.trace.envs` to maintain consistency.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-trace.tpl` around lines 28 - 47, The three
service-specific helpers (templates "harbor.trace.envs.core",
"harbor.trace.envs.jobservice", "harbor.trace.envs.registryctl") contain
hardcoded 2-space indentation; remove the literal leading spaces and emit the
env lines using Helm's indenting helpers so output indentation is produced by
the template engine. Replace the hardcoded TRACE_SERVICE_NAME lines and the
include call so that TRACE_SERVICE_NAME is rendered via a templated expression
(e.g., printf) and the include of "harbor.trace.envs" is piped through an
indent/nindent helper (use nindent 2) so the block is consistently indented by 2
spaces only when rendered.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/_helpers-jobservice.tpl (1)</summary><blockquote>

`25-27`: **Proxy helper adds indirection without additional logic.**

`harbor.redis.urlForJobservice` simply includes `harbor.redis.url`. If this is intentional for future extensibility (e.g., jobservice-specific Redis configuration), consider adding a comment. Otherwise, callers could directly use `harbor.redis.url`.
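
If the indirection is kept, a one-comment sketch that records the intent (behavior unchanged):

```text
{{- /* Deliberate jobservice-specific indirection: lets a dedicated
       Redis/Valkey instance be wired in later without touching callers. */ -}}
{{- define "harbor.redis.urlForJobservice" -}}
  {{- include "harbor.redis.url" . -}}
{{- end -}}
```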

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/_helpers-jobservice.tpl` around lines 25 - 27, The
helper define "harbor.redis.urlForJobservice" is a no-op wrapper around
"harbor.redis.url"; either remove this indirection and update callers to use
"harbor.redis.url" directly, or keep it but document intent by adding a comment
inside the "harbor.redis.urlForJobservice" template noting it's a deliberate
abstraction for potential jobservice-specific overrides; locate the define
"harbor.redis.urlForJobservice" and either delete it and refactor callers, or
insert a clear explanatory comment referencing jobservice-specific future
config.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/trivy.service.yaml (1)</summary><blockquote>

`1-15`: **Add explicit Service type and improve consistency.**

A few inconsistencies compared to other service templates in this chart:

1. **Missing explicit `type`**: Other services specify `type: ClusterIP`. While Kubernetes defaults to ClusterIP, explicit declaration improves clarity.

2. **Inconsistent conditional style**: Uses `{{ if ... }}` / `{{ end }}` instead of `{{- if ... }}` / `{{- end }}` used elsewhere.

3. **Hardcoded port**: Port 8080 is hardcoded. Consider using a helper like `harbor.trivy.port` for consistency.

<details>
<summary>Proposed alignment with other services</summary>

```diff
-{{ if .Values.trivy.enabled }}
+{{- if .Values.trivy.enabled }}
 apiVersion: v1
 kind: Service
 metadata:
   name: "{{ template "harbor.trivy" . }}"
   labels:
-{{ include "harbor.labels" . | indent 4 }}
+    {{- include "harbor.labels" . | nindent 4 }}
 spec:
+  type: ClusterIP
   ports:
     - name: http-trivy
       protocol: TCP
       port: 8080
   selector:
     {{- include "harbor.componentSelectorLabels" (dict "root" . "component" "trivy") | nindent 4 }}
-{{ end }}
+{{- end }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/trivy.service.yaml` around lines 1 - 15, Update the
Trivy Service template to match other services: change the outer conditional to
use trimmed delimiters (replace {{ if .Values.trivy.enabled }} / {{ end }} with
{{- if .Values.trivy.enabled }} / {{- end }}), add an explicit service type line
(type: ClusterIP) under spec, and replace the hardcoded port number 8080 with
the shared helper or value (use the harbor.trivy.port helper or
.Values.trivy.port) while keeping the port name (http-trivy) and selector that
uses harbor.componentSelectorLabels; keep existing metadata references like
template "harbor.trivy" and include "harbor.labels".
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/rke2-rancher.yaml (1)</summary><blockquote>

`43-51`: **Consider documenting the security implications of disabled Valkey auth.**

While acceptable for local/dev setups, `auth.enabled: false` should include a comment noting this is insecure for production.


<details>
<summary>Suggested improvement</summary>

```diff
-# Use internal Valkey (Redis-compatible)
+# Use internal Valkey (Redis-compatible)
+# NOTE: auth.enabled: false is suitable for development only; enable auth for production
 valkey:
   enabled: true
   architecture: standalone
   auth:
     enabled: false
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/rke2-rancher.yaml` around lines 43 - 51, The valkey
block currently disables authentication (valkey.auth.enabled: false) without
guidance; add a clear inline comment next to valkey.auth.enabled and/or at the
top of the valkey section stating that disabling auth is insecure and intended
only for local/dev/testing, and that production deployments must enable auth
(set valkey.auth.enabled: true) and configure appropriate
credentials/persistence; mention any required follow‑up steps (e.g., enable
secure credential storage and backup of valkey.master.persistence) and point
maintainers to the valkey/auth configuration in docs for production hardening.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/extraManifests.yaml (1)</summary><blockquote>

`19-29`: **Consider adding error handling for templated manifest parsing.**

If `extraTemplateManifests` contains invalid YAML or templates that fail to render, `tpl . $ | fromYaml` will produce a cryptic error. A `fail` with context could improve debugging.


<details>
<summary>Optional: Add validation context</summary>

```diff
 {{/* Templated manifests - can use .Values, .Release, etc. */}}
-{{- range .Values.extraTemplateManifests }}
+{{- range $idx, $item := .Values.extraTemplateManifests }}
 ---
-{{- $rendered := tpl . $ | fromYaml }}
+{{- $rendered := tpl $item $ | fromYaml }}
+{{- if not $rendered }}
+{{- fail (printf "extraTemplateManifests[%d]: failed to parse rendered YAML" $idx) }}
+{{- end }}
 {{- $manifest := deepCopy $rendered }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/extraManifests.yaml` around lines 19 - 29, When
rendering each entry in extraTemplateManifests, add explicit validation and a
helpful fail message around the tpl . $ | fromYaml step: capture the raw
template output (tpl . $), attempt to parse it with fromYaml but guard it with
default/detection, and if the parsed $rendered is empty or not an expected
map/list, call fail with a message that includes the original tpl output and the
index/identifier so debugging is easy; update the block that defines
$rendered/$manifest (the tpl, fromYaml, $rendered, $manifest variables and
subsequent set calls) to perform this check and use fail to emit contextual
errors before continuing to merge labels and output the manifest.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/portal.deployment.yaml (1)</summary><blockquote>

`51-54`: **Wrap the whole `env` stanza in the `with` block.**

Right now an unset `portal.extraEnv` still renders an empty `env` field. Omitting the stanza entirely keeps the manifest cleaner.


<details>
<summary>Possible cleanup</summary>

```diff
-          env:
-            {{- with .Values.portal.extraEnv }}
-            {{- toYaml . | nindent 12 }}
-            {{- end }}
+          {{- with .Values.portal.extraEnv }}
+          env:
+            {{- toYaml . | nindent 12 }}
+          {{- end }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/portal.deployment.yaml` around lines 51 - 54, The env:
stanza is rendered even when .Values.portal.extraEnv is unset; wrap the entire
env: block in the with conditional so the stanza is omitted when empty — replace
the current standalone env: with a with on .Values.portal.extraEnv that outputs
the env: header and the indented toYaml content (use the same nindent 12) inside
the block and close with {{- end }} to preserve indentation and formatting in
portal.deployment.yaml.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/ingress.yaml (1)</summary><blockquote>

`3-5`: **`ingress.hosts[].paths` is ignored here.**

The template only consumes `.host`; every path/backend pair is hardcoded below. That makes the `paths` entries shown in values and tests a silent no-op. Either wire those entries into the rendered rules or remove the dead setting from the chart interface.
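A sketch of honoring the configured entries; the field names (`path`, `pathType`, `service.name`, `service.port`) are assumptions following the conventional `ingress.hosts[].paths[]` shape, not values taken from this chart:

```yaml
rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          pathType: {{ .pathType | default "Prefix" }}
          backend:
            service:
              name: {{ .service.name }}
              port:
                number: {{ .service.port }}
        {{- end }}
  {{- end }}
```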




Also applies to: 37-85

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/ingress.yaml` around lines 3 - 5, The template
currently only reads .Values.ingress.hosts[].host (via $ingressHosts and include
"harbor.ingress.primaryHost") and ignores the hosts[].paths entries, so
path/backend pairs in values.yaml are no-ops; update the ingress template
(around the $ingressHosts usage and the rule generation code) to range over each
host in $ingressHosts and then range over host.paths to render each http.path
and backend (mapping path, service.name and service.port) into the Ingress
rules, ensuring you use the host.paths entries rather than hardcoded backends;
alternatively, if you intentionally do not support per-host paths, remove
hosts[].paths from the chart values and tests to avoid dead settings.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/tests/ingress_test.yaml (1)</summary><blockquote>

`147-158`: **Avoid baking route order into these assertions.**

Line 154 assumes the portal catch-all route will stay at index 5. If another core path is inserted above it, this test fails even though routing is still correct. Matching the path/backend pair by value would be more stable.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/tests/ingress_test.yaml` around lines 147 - 158, The test
currently hardcodes array indexes under spec.rules[0].http.paths (e.g., paths[0]
and paths[5]) which breaks when route order changes; change the assertions to
find the path objects by their path value (e.g., locate the object where path ==
"/api/" and assert its backend.service.name matches /.*-core$/ and locate the
object where path == "/" and assert its backend.service.name matches
/.*-portal$/) so the test verifies path->backend pairs instead of relying on
fixed indexes.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/tests/core_test.yaml (1)</summary><blockquote>

`113-123`: **Avoid asserting the total document count here.**

Line 121 couples this test to every other ServiceAccount rendered by `templates/serviceaccounts.yaml`. A new component SA will break the test even if the core SA is still absent. Prefer selecting the core ServiceAccount by name and asserting that it is not rendered.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/tests/core_test.yaml` around lines 113 - 123, This test is
brittle because it asserts total rendered documents (hasDocuments count: 4)
instead of checking the core ServiceAccount presence; change the assertion to
explicitly verify the core ServiceAccount is NOT rendered when
core.serviceAccount.create is false by replacing the count-based check with an
assertion that selects the ServiceAccount by its metadata.name (the chart's core
SA name produced by templates/serviceaccounts.yaml) and asserts its absence;
keep the test name "should not create core service account when disabled" and
the set key core.serviceAccount.create: false so the test still targets that
behavior.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/example/openshift/values.yaml (1)</summary><blockquote>

`8-42`: **Make this example cluster-agnostic.**

These values are hardcoded to one namespace and one OpenShift apps domain, so the checked-in example looks reusable but only works after users scrub several environment-specific fields. Prefer obvious placeholders here.


<details>
<summary>Possible cleanup</summary>

```diff
-#   helm install harbor-next . -n vad1mo-dev -f example/openshift/values.yaml
+#   helm install harbor-next . -n <namespace> -f example/openshift/values.yaml

-externalURL: "https://harbor-next-vad1mo-dev.apps.rm1.0a51.p1.openshiftapps.com"
+externalURL: "https://harbor-next.<apps-domain>"

-  core: harbor-next-vad1mo-dev.apps.rm1.0a51.p1.openshiftapps.com
+  core: harbor-next.<apps-domain>
   hosts:
-    - host: harbor-next-vad1mo-dev.apps.rm1.0a51.p1.openshiftapps.com
+    - host: harbor-next.<apps-domain>

-  host: postgres.vad1mo-dev.svc.cluster.local
+  host: postgres.<namespace>.svc.cluster.local
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/openshift/values.yaml` around lines 8 - 42, Replace all
environment-specific literals with obvious placeholders so the example is
cluster-agnostic: change externalURL to a placeholder like "<EXTERNAL_URL>",
harborAdminPassword to "<HARBOR_ADMIN_PASSWORD>", ingress.core and
ingress.hosts[*].host to "<ROUTE_HOST>" (and leave className empty), remove or
generalize the OpenShift-specific annotation value if needed
(route.openshift.io/termination: <TERMINATION_TYPE>), set tls.enabled to false
but document that TLS is handled externally, and replace database.host,
database.password, and any namespace-specific values with "<POSTGRES_HOST>",
"<POSTGRES_PASSWORD>" (and optionally "<POSTGRES_PORT>" and "<POSTGRES_DB>").
Ensure placeholders are clearly delimited (angle brackets or ALL_CAPS) so users
must replace them before deployment.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/registry.deployment.yaml (2)</summary><blockquote>

`158-159`: **Consider separate resource configuration for registryctl.**

The registryctl sidecar shares the same resource limits as the main registry container. Since registryctl is typically lighter-weight than the registry itself, you may want to add a separate `registry.controller.resources` configuration option.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/registry.deployment.yaml` around lines 158 - 159, Add
a separate resource config for the registry controller sidecar: introduce a new
values key registry.controller.resources and update the deployment template
where the sidecar/container uses the current resources block (the section that
renders {{- toYaml .Values.registry.resources | nindent 12 }}) so it renders
.Values.registry.controller.resources for the registryctl sidecar and falls back
to .Values.registry.resources if not set; ensure you reference the registryctl
container spec (registryctl) and update the template rendering call to use the
new key with the same toYaml|nindent pattern so existing formatting is
preserved.
```

</details>

---

`180-182`: **Minor: Inconsistent volume definition ordering.**

The `registry-config` volume has `configMap` before `name`, while other volumes define `name` first. Consider reordering for consistency.


<details>
<summary>Suggested fix</summary>

```diff
-        - configMap:
-            name: {{ include "harbor.registry.name" . }}
-          name: registry-config
+        - name: registry-config
+          configMap:
+            name: {{ include "harbor.registry.name" . }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/registry.deployment.yaml` around lines 180 - 182, The
volume definition for "registry-config" is inconsistent: it lists the configMap
key before the name while other volumes place name first; update the
registry-config volume block (the entry that sets name: registry-config and
configMap: name: {{ include "harbor.registry.name" . }}) to put the name field
first, then the configMap block so ordering matches other volume definitions for
consistency.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/core.secret.yaml (1)</summary><blockquote>

`23-27`: **Token certificate has 365-day validity - document renewal process.**

The auto-generated token CA certificate has a 365-day validity period. Consider documenting the certificate renewal process or increasing the validity period to reduce operational burden.



<details>
<summary>💡 Consideration</summary>

If this is intentional for security (shorter validity = better rotation), document the renewal procedure in the chart's README. Alternatively, consider making the validity period configurable:

```diff
-  {{- $ca := genCA "harbor-token-ca" 365 }}
+  {{- $ca := genCA "harbor-token-ca" (.Values.core.tokenCertValidity | default 365) }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/core.secret.yaml` around lines 23 - 27, The chart
currently hardcodes a 365-day CA validity in the genCA call for the token
certificate (see genCA "harbor-token-ca" and the token tls.key/tls.crt
generation guarded by .Values.core.tokenSecretName); make the validity
configurable and/or document renewal: add a new values key (e.g.,
.Values.core.tokenCAValidityDays defaulting to 365) and use it as the validity
argument to genCA, and update the chart README with a short renewal procedure
for tokens (or guidance to set the new values key) so operators can extend or
rotate the certificate as needed.
```

</details>

</blockquote></details>
<details>
<summary>deploy/chart/templates/exporter.deployment.yaml (1)</summary><blockquote>

`65-70`: **Consider using templated values for core service connection.**

The `HARBOR_SERVICE_PORT` is hardcoded to "80" while other components use helper templates for port resolution. This could cause issues if the core service port is customized.



<details>
<summary>💡 Suggested improvement</summary>

```diff
             - name: HARBOR_SERVICE_HOST
               value: {{ include "harbor.fullname" . }}-core
             - name: HARBOR_SERVICE_PORT
-              value: "80"
+              value: {{ include "harbor.core.service.port" . | quote }}
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/exporter.deployment.yaml` around lines 65 - 70, The
HARBOR_SERVICE_PORT env var is hardcoded to "80" which breaks custom port
configurations; update the HARBOR_SERVICE_PORT value to use a Helm template that
reads the configured core service port (for example use {{
.Values.core.service.port | default "80" | quote }} or an existing helper like
an include that resolves the harbor port) so it defaults to 80 but respects
custom values; modify the HARBOR_SERVICE_PORT entry (next to the
HARBOR_SERVICE_HOST which uses include "harbor.fullname") to reference that
templated value.
```

</details>

</blockquote></details>

</blockquote></details>

<!-- This is an auto-generated comment by CodeRabbit for review status -->

Comment on lines +22 to +26
2. Decompress the downloaded chart:

```bash
tar xzvf harbor-*.tgz
```

⚠️ Potential issue | 🟠 Major

The final install command doesn't use the chart or values prepared above.

After extracting the archive, the guide still installs `.` and never passes the values that contain `extraManifests`, database secret wiring, and ingress settings. Copied as-is, this won't reproduce the documented deployment.

Suggested doc fix

````diff
  ```bash
  tar xzvf harbor-*.tgz
+ cd harbor-next
  ```
````

```diff
-helm upgrade --install test-1 . \
-  --set externalURL=https://harbor.localhost \
-  --set database.host=harbor-db-rw \
-  --set harborAdminPassword=Harbor12345
+helm upgrade --install test-1 . \
+  -f values-k3s.yaml \
+  --set externalURL=https://harbor.localhost \
+  --set harborAdminPassword=Harbor12345
```

Also applies to: 74-83

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/k3s.md` around lines 22 - 26, The install step is
using the current directory and omits the prepared values; after extracting the
chart (tar xzvf harbor-*.tgz) change into the extracted chart directory
(harbor-next) and run the helm command against that chart while supplying the
prepared values file (values-k3s.yaml) so extraManifests, DB secret wiring and
ingress settings are applied; update the helm invocation (the command
referencing "helm upgrade --install test-1 .") to include "-f values-k3s.yaml"
and point at the extracted chart directory (instead of the original working
directory), and remove the redundant explicit database.host override so the
documented database wiring from values is used.
```

Comment thread deploy/chart/docs/guide/nutanix-kp.md
Comment thread deploy/chart/docs/guide/openshift.md
Comment on lines +36 to +40
```yaml
  port: 5432
  username: postgres
  password: test1234!
  database: registry
  sslmode: disable
```

⚠️ Potential issue | 🟠 Major

Replace hardcoded database password with a placeholder.

Hardcoded credentials in example files pose a security risk—they may be inadvertently used in production or expose credential patterns. Use a placeholder value with a clear comment indicating it must be changed.

Proposed fix

```diff
 database:
   host: postgresql.my-db.svc.cluster.local
   port: 5432
   username: postgres
-  password: test1234!
+  password: "CHANGE_ME"  # REQUIRED: Set a strong password or use existingSecret
   database: registry
   sslmode: disable
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/example/rke2-rancher.yaml` around lines 36 - 40, Replace the
hardcoded database password value in the YAML snippet by changing the password:
field to a clear placeholder (e.g. password: "<CHANGE_ME>" or password:
"${POSTGRES_PASSWORD}") and add an inline comment indicating it must be replaced
or injected via environment/secret management; ensure any CI/helm templates or
README reference using a secret or env var instead of the literal value so the
placeholder is not used in production.
```

Comment on lines +10 to +15
```bash
helm install my-harbor oci://8gears.container-registry.com/8gcr/chart/harbor-next \
--set externalURL=https://harbor.example.com \
--set database.host=my-postgres.example.com \
--set database.password=secret
```

⚠️ Potential issue | 🟠 Major

Avoid documenting secrets via `--set`.

These examples put the database password in shell history, process lists, and CI logs. Since the chart already supports `database.existingSecret` and values files, the docs should steer users there instead.
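For reference, a secret of roughly this shape could back `database.existingSecret` (the secret name and key are assumptions; check which key the chart actually reads before using this):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-harbor-db
type: Opaque
stringData:
  # Key name is a guess; align it with the key the chart expects
  # (commonly "password").
  password: <strong-password>
```

Created once with `kubectl apply -f`, the password then never appears on a helm command line or in CI logs.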

Suggested doc fix

```diff
 helm install my-harbor oci://8gears.container-registry.com/8gcr/chart/harbor-next \
   --set externalURL=https://harbor.example.com \
   --set database.host=my-postgres.example.com \
-  --set database.password=secret
+  --set database.existingSecret=my-harbor-db
 helm install my-harbor oci://8gears.container-registry.com/8gcr/chart/harbor-next \
   --namespace harbor \
   --create-namespace \
   --set externalURL=https://harbor.example.com \
   --set database.host=postgres.example.com \
-  --set database.password=your-password
+  --set database.existingSecret=my-harbor-db
```

Also applies to: 130-136

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md.gotmpl` around lines 10 - 15, The README example
documents passing secrets with --set (database.password) which exposes secrets;
change the example to demonstrate using a values file or the chart's supported
database.existingSecret instead — update the helm install example that
references externalURL and database.password to instead reference a values file
(e.g., values.yaml) or show how to create and reference a Kubernetes secret via
database.existingSecret, and add a short note instructing readers not to place
passwords on the command line or in shell history/CI logs.
```

Comment thread deploy/chart/templates/core.deployment.yaml Outdated
Comment on lines +4 to +22
```yaml
{{- $certHost := include "harbor.ingress.primaryHost" . }}
apiVersion: v1
kind: Secret
metadata:
  name: "{{ $secretName }}"
  labels:
{{ include "harbor.labels" . | indent 4 }}
type: kubernetes.io/tls
data:
  {{- if $existingSecret }}
  tls.crt: {{ index $existingSecret.data "tls.crt" }}
  tls.key: {{ index $existingSecret.data "tls.key" }}
  ca.crt: {{ index $existingSecret.data "ca.crt" }}
  {{- else }}
  {{- $ca := genCA "harbor-ca" 365 }}
  {{- $cert := genSignedCert $certHost nil (list $certHost) 365 $ca }}
  tls.crt: {{ $cert.Cert | b64enc | quote }}
  tls.key: {{ $cert.Key | b64enc | quote }}
  ca.crt: {{ $ca.Cert | b64enc | quote }}
```

⚠️ Potential issue | 🟠 Major

The auto-generated certificate only covers one ingress host.

`$certHost` is derived from `harbor.ingress.primaryHost`, and `genSignedCert` is called with that single hostname as the SAN list. The chart already supports multiple `ingress.hosts`, so secondary hosts will present an invalid certificate when `autoGenCert` is used.
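A minimal sketch of collecting every configured host into the SAN list (the values path `.Values.ingress.hosts` is assumed from the chart's ingress interface):

```yaml
{{- $sans := list }}
{{- range .Values.ingress.hosts }}
{{- $sans = append $sans .host }}
{{- end }}
{{- $ca := genCA "harbor-ca" 365 }}
{{- /* first host doubles as the CN; all hosts become SANs */}}
{{- $cert := genSignedCert (first $sans) nil $sans 365 $ca }}
```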

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/ingress.secret.yaml` around lines 4 - 22, The
auto-generated cert currently uses only $certHost (from include
"harbor.ingress.primaryHost") as the SAN list when calling genSignedCert, so
secondary hosts in ingress.hosts are not covered; update the template to build a
SAN list from all configured ingress hosts (e.g., iterate
.Values.harbor.ingress.hosts or use an include that returns all hosts), ensure
the list includes $certHost, and pass that full list to genSignedCert instead of
(list $certHost) so the generated tls.crt covers every ingress host when
autoGenCert is enabled.
```

Comment on lines +18 to +20
```yaml
  {{- if .Values.jobservice.persistence.storageClass }}
  storageClassName: {{ .Values.jobservice.persistence.storageClass | quote }}
  {{- end }}
```

⚠️ Potential issue | 🟠 Major

Preserve the explicit `storageClassName: ""` case.

This truthy check drops an empty string, but `storageClassName: ""` is the Kubernetes way to opt out of the cluster default StorageClass. Omitting the field here changes provisioning behavior.
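The three behaviors the field can select, shown on a plain PVC spec (independent of this chart):

```yaml
spec:
  storageClassName: fast-ssd  # use the named StorageClass
---
spec:
  storageClassName: ""        # opt out of dynamic provisioning; bind only pre-provisioned PVs
---
spec: {}                      # field omitted: fall back to the cluster default StorageClass
```

Dropping the field therefore silently switches the second case into the third.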

Suggested fix

```diff
-  {{- if .Values.jobservice.persistence.storageClass }}
+  {{- if hasKey .Values.jobservice.persistence "storageClass" }}
   storageClassName: {{ .Values.jobservice.persistence.storageClass | quote }}
   {{- end }}
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-{{- if .Values.jobservice.persistence.storageClass }}
+{{- if hasKey .Values.jobservice.persistence "storageClass" }}
 storageClassName: {{ .Values.jobservice.persistence.storageClass | quote }}
 {{- end }}
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/jobservice.pvc.yaml` around lines 18 - 20, The Helm
template currently omits storageClassName when
.Values.jobservice.persistence.storageClass is an empty string; change the
conditional to detect presence (not truthiness) so storageClassName: "{{
.Values.jobservice.persistence.storageClass | quote }}" is emitted even when the
value is "" (Kubernetes uses "" to opt out of default StorageClass). Replace the
existing if with a presence check such as if hasKey
.Values.jobservice.persistence "storageClass" (or equivalent defined check)
around the storageClassName line so .Values.jobservice.persistence.storageClass
is rendered quoted whether empty or non-empty.
```

Comment thread deploy/chart/templates/pdbs.yaml
Comment on lines +12 to +16
```yaml
  name: {{ $fullname }}-{{ .host | replace "." "-" }}
  labels:
    {{- include "harbor.labels" $ | nindent 4 }}
spec:
  secretName: {{ $fullname }}-{{ .host | replace "." "-" }}-tls
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
# Locate and read the full template file
find . -path "*/deploy/chart/templates/tls-certificates.yaml" -type f
```

Repository: container-registry/harbor-next

Length of output: 120

🏁 Script executed:

```shell
# Read the template file to see full context
if [ -f "deploy/chart/templates/tls-certificates.yaml" ]; then
  cat -n "deploy/chart/templates/tls-certificates.yaml"
fi
```

Repository: container-registry/harbor-next

Length of output: 1066

🏁 Script executed:

```shell
# Search for any other references to this template or host naming in the codebase
rg "tls-certificates|\.host.*replace" --type yaml --type go -A 2
```

Repository: container-registry/harbor-next

Length of output: 879

🏁 Script executed:

```shell
# Search for test cases or examples showing host values
rg "host.*:" deploy/chart -A 1 -B 1
```

Repository: container-registry/harbor-next

Length of output: 50386

🏁 Script executed:

```shell
# Look for values.yaml or example values to understand typical $fullname and host patterns
find . -name "values.yaml" -o -name "values*.yaml" | grep chart
```

Repository: container-registry/harbor-next

Length of output: 306

🏁 Script executed:

```shell
# Check if there are any test cases that validate the Certificate resource generation
cat -n deploy/chart/tests/tls_test.yaml 2>/dev/null | head -100
```

Repository: container-registry/harbor-next

Length of output: 3593

🏁 Script executed:

```shell
# Search for documentation or comments about naming constraints
rg "metadata.name|secretName" deploy/chart -B 2 -A 2
```

Repository: container-registry/harbor-next

Length of output: 20901

🏁 Script executed:

```shell
# Check if there are any real-world examples or comments about these constraints
rg "63|length|wildcard|asterisk" deploy/chart --type yaml -i
```

Repository: container-registry/harbor-next

Length of output: 427

🏁 Script executed:

```shell
# Look for any existing validation or sanitization logic in the codebase
rg "replace|trunc|sha256|sanitize" deploy/chart -A 2 -B 2
```

Repository: container-registry/harbor-next

Length of output: 7820


Sanitize host-derived names to comply with the Kubernetes 63-character limit and handle wildcards.

The template generates Certificate resource names from untreated hostnames. Kubernetes limits `metadata.name` to 63 characters with only alphanumeric characters and hyphens allowed. Wildcard hosts like `*.example.com` become `*-example-com` (invalid—asterisk not permitted), and long FQDNs combined with the release name can overflow the limit, causing certificate creation to fail. The `dnsNames` field accepts the raw hostname, so the issue is isolated to the resource name itself.

The codebase already uses this pattern elsewhere (see `_helpers.tpl`): apply character replacement, hash the input for determinism, and truncate safely with `trimSuffix "-"` to avoid trailing hyphens.

Suggested fix

```diff
+{{- $hostSlug := .host | replace "*." "wildcard-" | replace "." "-" -}}
+{{- $hostHash := sha256sum .host | trunc 8 -}}
 ---
 apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
-  name: {{ $fullname }}-{{ .host | replace "." "-" }}
+  name: {{ printf "%s-%s-%s" $fullname ($hostSlug | trunc 40 | trimSuffix "-") $hostHash | trunc 63 | trimSuffix "-" }}
   labels:
     {{- include "harbor.labels" $ | nindent 4 }}
 spec:
-  secretName: {{ $fullname }}-{{ .host | replace "." "-" }}-tls
+  secretName: {{ printf "%s-%s-%s-tls" $fullname ($hostSlug | trunc 36 | trimSuffix "-") $hostHash | trunc 63 | trimSuffix "-" }}
   duration: {{ $.Values.tls.certManager.duration | default "2160h" }}
   renewBefore: {{ $.Values.tls.certManager.renewBefore | default "360h" }}
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/tls-certificates.yaml` around lines 12 - 16, The
metadata.name and secretName built from {{ $fullname }} and {{ .host }} must be
sanitized to meet Kubernetes' 63-char alphanumeric-and-hyphen limit and to
remove wildcards; update the template that builds name/secretName to follow the
existing pattern in _helpers.tpl: normalize the host by replacing invalid chars
(remove or replace leading '*' and non-alphanumerics with '-'), compute a short
deterministic hash of the full host (e.g., sha256 and take a fixed-length
prefix), concatenate {{ $fullname }} and a truncated sanitized host+hash,
truncate the whole result to 63 characters, and run trimSuffix "-" to avoid
trailing hyphens so both metadata.name and secretName are valid.
```


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (7)
deploy/chart/docs/guide/nutanix-kp.md (3)

264-264: Consider using a more descriptive release name.

The release name `test-1` suggests a test environment. Consider using a more production-appropriate name like `harbor`, or adding a note that users should customize this value.

🔧 Suggested fix

```diff
-  --install test-1 .
+  --install harbor .
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` at line 264, The Helm install example
uses a non-descriptive release name "test-1"; update the command string shown
("--install test-1 .") to use a meaningful default like "harbor" or replace it
with a placeholder and short note (e.g., "<RELEASE_NAME>") instructing users to
customize the release name; ensure the updated text clearly shows the new
release token and includes a one-line hint to choose a production-appropriate
name.
```

232-232: Note that single-instance PostgreSQL is not production-ready.

The configuration specifies instances: 1, which provides no high availability. Consider adding a note that this is suitable for testing/development and that production deployments should use 3+ instances for fault tolerance.

📝 Suggested addition

Add after line 239:

> **Note:** This example uses a single PostgreSQL instance for simplicity. For production deployments, increase `instances` to 3 or more to ensure high availability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` at line 232, Update the nutanix-kp.md
documentation to warn that the shown PostgreSQL configuration uses `instances:
1` which is not production-ready; add a brief note immediately after the
PostgreSQL config block (near the `instances: 1` line and the surrounding
PostgreSQL example) stating that this single-instance setup is suitable only for
testing/development and recommending increasing `instances` to 3+ for production
high availability.

31-31: Use descriptive link text instead of "here".

The link text "here" is non-descriptive. Consider making it more informative for readers scanning the document.

📖 Suggested fix
-Create a file to override the default values (see [here](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs) for more):
+Create a file to override the default values (see [Grafana Alloy pod logs documentation](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs) for more details):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` at line 31, Replace the
non-descriptive link text "here" in the markdown sentence "Create a file to
override the default values (see
[here](https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs)
for more):" with a descriptive phrase such as "Grafana documentation on
collecting logs in Kubernetes" or "Grafana: collect logs in Kubernetes (pods
logs)" so the link text clearly conveys the destination and purpose to readers.
deploy/chart/README.md (2)

262-279: Document secret-based credential handling for S3 storage.

The example shows AWS credentials in plaintext values, which is a security anti-pattern. While this is just documentation, it may lead users to hardcode credentials. Consider adding a note or example showing how to use Kubernetes secrets or the chart's existing secret mechanism for storing S3 credentials securely.

🔐 Suggested addition

Add a note after the S3 example:

> **Security Best Practice**: For production deployments, store S3 credentials in a Kubernetes Secret and reference it using `registry.storage.s3.existingSecret` instead of hardcoding credentials in values.yaml.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` around lines 262 - 279, The README S3 example shows
plaintext AWS credentials (registry.storage.s3.accesskey and
registry.storage.s3.secretkey); update the docs to instruct users to store creds
in a Kubernetes Secret and reference it via the chart's secret option
(registry.storage.s3.existingSecret) instead of hardcoding values, and add a
short "Security Best Practice" note right after the S3 snippet explaining how to
create/use an existing secret for S3 credentials in production.
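As an illustration of the pattern this note recommends, a hedged values fragment (the key names follow the `registry.storage.s3.existingSecret` option referenced above; verify them against the chart's values schema before use):

```yaml
registry:
  storage:
    s3:
      region: us-east-1
      bucket: harbor-storage
      # Reference a pre-created Secret instead of inline accesskey/secretkey
      existingSecret: harbor-s3-credentials
```

The Secret itself can be created out of band (for example via `kubectl create secret generic` or a secrets operator), keeping credentials out of values.yaml and shell history.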

5-5: Fix hyphenation in compound adjective.

In technical documentation, "cloud native" should be hyphenated as "cloud-native" when used as a compound adjective modifying a noun.

📝 Proposed fix
-A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud native container registry for Kubernetes.
+A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud-native container registry for Kubernetes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` at line 5, Update the compound adjective "cloud
native" to "cloud-native" in the README sentence that reads "the cloud native
container registry for Kubernetes" so it becomes "the cloud-native container
registry for Kubernetes"; edit the line containing "Harbor Next - the cloud
native container registry for Kubernetes" to use the hyphenated form.
deploy/chart/docs/guide/rancher.md (2)

44-50: Add security note for production deployments.

The example uses a weak password (test1234!) for demonstration purposes. Consider adding a note emphasizing that production deployments should use strong, randomly-generated passwords and consider using Kubernetes secrets.

📝 Suggested addition

After line 50, add:

> **Note**: This example uses a simple password for demonstration. In production, use strong, randomly-generated passwords and store them in Kubernetes secrets.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 44 - 50, Add a short
security note after the Helm example to warn readers that the hard-coded demo
password (auth.postgresPassword=test1234!) is insecure for production; instruct
them to use strong, randomly-generated passwords and store them as Kubernetes
Secrets instead of embedding them in the Helm command or values. Reference the
Helm install example and the auth.postgresPassword setting so the note appears
directly after that snippet.

136-142: Add password security reminder.

While the default credentials are documented, consider adding a security reminder that users should change the default password immediately after first login.

🔒 Suggested addition
 The default credentials are `admin` and `Harbor12345`.
+
+> **⚠️ Security**: Change the default admin password immediately after first login.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 136 - 142, Update the
paragraph that begins "Since RKE2 comes with an Nginx ingress deployed by
default..." which documents the default credentials (`admin` / `Harbor12345`) to
include a security reminder: add a concise sentence instructing users to change
the default password immediately after first login and use a strong unique
password (or enable SSO/LDAP) to secure the admin account.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deploy/chart/docs/guide/nutanix-kp.md`:
- Around line 150-160: The serviceMonitor labels block uses a Kommander-specific
label "prometheus.kommander.d2iq.io/select" which will prevent scraping on
vanilla Nutanix KP; update the docs and values under metrics -> serviceMonitor
-> labels to either (a) replace that key with a generic selector users commonly
employ for their Prometheus setup (e.g., a standard app or prometheus-scrape
label) or (b) add an explicit note above the labels explaining that
"prometheus.kommander.d2iq.io/select: 'true'" is only for D2iQ Kommander and
should be removed or changed for standard Nutanix KP deployments so
serviceMonitor can be discovered. Ensure the change references the
metrics->serviceMonitor->labels block and the
prometheus.kommander.d2iq.io/select label so reviewers can find and update the
right lines.
- Around line 272-276: The curl example that queries Loki (the command using
localhost:3100) needs a prerequisite instruction to port-forward the Loki
service first; add a short step before the curl that tells users to run kubectl
port-forward -n monitoring svc/loki 3100:3100 in a separate terminal so the curl
against localhost:3100 will succeed, and mention that the port-forward runs
concurrently with the curl command targeting the Loki API.
- Line 47: The documentation's namespace regex includes a test namespace string
"test-1-alex" that should be removed; update the regex value (the line setting
regex = "harbor|test-1-alex") to remove the test namespace so it only contains
production namespaces (e.g., "harbor" or a generic example like
"harbor|my-namespace") and ensure the docs no longer reference
non-production/test namespaces.

In `@deploy/chart/README.md`:
- Line 558: Update the README's Basic Installation section to add a prominent
security warning about the default admin password `Harbor12345`: insert a clear
notice (e.g., a warning block) stating that the default is weak and must be
changed immediately after installation and show how to set a custom password via
the existing helm values `harborAdminPassword` or `existingSecretAdminPassword`;
reference the `harborAdminPassword` value from the values table so users know
the exact setting to override.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 491119cf-0aa1-4a51-a1ca-345933a29209

📥 Commits

Reviewing files that changed from the base of the PR and between fcbe4dd and f940944.

📒 Files selected for processing (5)
  • deploy/chart/README.md
  • deploy/chart/README.md.gotmpl
  • deploy/chart/docs/guide/k3s.md
  • deploy/chart/docs/guide/nutanix-kp.md
  • deploy/chart/docs/guide/rancher.md
✅ Files skipped from review due to trivial changes (2)
  • deploy/chart/docs/guide/k3s.md
  • deploy/chart/README.md.gotmpl

Comment thread deploy/chart/docs/guide/nutanix-kp.md Outdated
Comment on lines +150 to +160
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: ""
labels:
prometheus.kommander.d2iq.io/select: "true"
interval: 30s
scrapeTimeout: 10s
honorLabels: true
```

⚠️ Potential issue | 🟠 Major

Clarify Kommander-specific label or provide generic alternative.

The prometheus.kommander.d2iq.io/select: "true" label at line 156 is specific to D2iQ Kommander and will not enable metrics scraping on vanilla Nutanix Kubernetes Platform. Users following this guide may find their monitoring non-functional.

Consider either:

  1. Adding a note explaining this label is for Kommander-based deployments, or
  2. Providing a more generic label configuration that works on standard Nutanix KP
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` around lines 150 - 160, The
serviceMonitor labels block uses a Kommander-specific label
"prometheus.kommander.d2iq.io/select" which will prevent scraping on vanilla
Nutanix KP; update the docs and values under metrics -> serviceMonitor -> labels
to either (a) replace that key with a generic selector users commonly employ for
their Prometheus setup (e.g., a standard app or prometheus-scrape label) or (b)
add an explicit note above the labels explaining that
"prometheus.kommander.d2iq.io/select: 'true'" is only for D2iQ Kommander and
should be removed or changed for standard Nutanix KP deployments so
serviceMonitor can be discovered. Ensure the change references the
metrics->serviceMonitor->labels block and the
prometheus.kommander.d2iq.io/select label so reviewers can find and update the
right lines.

Comment thread deploy/chart/docs/guide/nutanix-kp.md
Comment thread deploy/chart/README.md
| global | object | `{"priorityClassName":"","revisionHistoryLimit":3}` | Global defaults inherited by all components |
| global.priorityClassName | string | `""` | Priority class name for all component pods |
| global.revisionHistoryLimit | int | `3` | Number of old ReplicaSets to retain (K8s default is 10) |
| harborAdminPassword | string | "Harbor12345" | Harbor admin password (initial setup) |

⚠️ Potential issue | 🟠 Major

Strengthen guidance on default admin password.

The default password Harbor12345 is well-known and weak. While a default is needed for initial setup, the documentation should include a prominent security warning. Consider adding a warning box in the installation section emphasizing that this password must be changed immediately after installation.

🔒 Suggested addition

In the "Basic Installation" section (around line 128), add a security warning:

> **⚠️ Security Warning**: The default admin password is `Harbor12345`. You **must** change this immediately after installation or set a custom password using `--set harborAdminPassword=<strong-password>` or `--set existingSecretAdminPassword=<secret-name>`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` at line 558, Update the README's Basic Installation
section to add a prominent security warning about the default admin password
`Harbor12345`: insert a clear notice (e.g., a warning block) stating that the
default is weak and must be changed immediately after installation and show how
to set a custom password via the existing helm values `harborAdminPassword` or
`existingSecretAdminPassword`; reference the `harborAdminPassword` value from
the values table so users know the exact setting to override.

@github-actions

Preview images for this PR are available in registry.goharbor.io/harbor-next with tag pr-56.

  • registry.goharbor.io/harbor-next/harbor-core:pr-56
  • registry.goharbor.io/harbor-next/harbor-jobservice:pr-56
  • registry.goharbor.io/harbor-next/harbor-registryctl:pr-56
  • registry.goharbor.io/harbor-next/harbor-exporter:pr-56
  • registry.goharbor.io/harbor-next/harbor-portal:pr-56
  • registry.goharbor.io/harbor-next/harbor-registry:pr-56
  • registry.goharbor.io/harbor-next/harbor-trivy-adapter:pr-56

Verify a preview image:

cosign verify \
  --certificate-identity-regexp="https://github.com/container-registry/harbor-next/.github/workflows/pr-ci.yml@.*" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  registry.goharbor.io/harbor-next/harbor-core:pr-56

Verify SBOM attestation:

cosign verify-attestation \
  --certificate-identity-regexp="https://github.com/container-registry/harbor-next/.github/workflows/pr-ci.yml@.*" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  --type spdxjson \
  registry.goharbor.io/harbor-next/harbor-core:pr-56


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 5 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="deploy/chart/README.md">

<violation number="1" location="deploy/chart/README.md:13">
P2: README was edited directly without updating the `.gotmpl` source, so these doc changes are not durable and will be lost on regeneration.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread deploy/chart/README.md
helm install my-harbor oci://8gears.container-registry.com/8gcr/charts/harbor-next \
--set externalURL=https://harbor.example.com \
--set database.host=my-postgres.example.com \
--set database.existingSecret=my-harbor-db

@cubic-dev-ai cubic-dev-ai bot Mar 25, 2026


P2: README was edited directly without updating the .gotmpl source, so these doc changes are not durable and will be lost on regeneration.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/README.md, line 13:

<comment>README was edited directly without updating the `.gotmpl` source, so these doc changes are not durable and will be lost on regeneration.</comment>

<file context>
@@ -10,7 +10,7 @@ A modern, production-ready Helm chart for [Harbor Next](https://github.com/conta
   --set externalURL=https://harbor.example.com \
   --set database.host=my-postgres.example.com \
-  --set database.password=secret
+  --set database.existingSecret=my-harbor-db

</file context>


</details>



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
deploy/chart/docs/guide/nutanix-kp.md (1)

150-157: ⚠️ Potential issue | 🟠 Major

Clarify or generalize the Kommander-specific ServiceMonitor label.

At Line 156, the label key is Kommander-specific, but the guide presents itself as generic Nutanix KP guidance. Please either add an explicit note or switch to a broadly applicable example label.

📝 Suggested update
 metrics:
   enabled: true
   serviceMonitor:
     enabled: true
     namespace: ""
     labels:
-      prometheus.kommander.d2iq.io/select: "true"
+      release: "kube-prometheus-stack" # example; adjust to your Prometheus selector
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` around lines 150 - 157, The
ServiceMonitor label prometheus.kommander.d2iq.io/select is Kommander-specific;
update the docs to either add an explicit note that this label is
Kommander-specific or replace it with a generic example (e.g., a standard
selector label such as app.kubernetes.io/name or prometheus.io/scrape) under the
metrics.serviceMonitor.labels section so the example is broadly applicable to
Nutanix KP users; mention the change near the metrics → serviceMonitor → labels
block and reference the existing label key prometheus.kommander.d2iq.io/select
in your note or replacement.
🧹 Nitpick comments (1)
deploy/chart/docs/guide/nutanix-kp.md (1)

31-31: Use descriptive link text instead of “here”.

At Line 31, replace “here” with explicit text (for example, “Grafana Alloy Kubernetes pod logs documentation”).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/nutanix-kp.md` at line 31, Replace the ambiguous link
text "here" in the sentence starting "Create a file to override the default
values (see here for more):" with a descriptive phrase such as "Grafana Alloy
Kubernetes pod logs documentation" so the link text clearly describes the
destination and improves accessibility and clarity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deploy/chart/README.md`:
- Line 5: Update the README description to use a hyphenated compound adjective:
change the phrase "cloud native container registry" to "cloud-native container
registry" in the sentence that reads "A modern, production-ready Helm chart for
Harbor Next - the cloud native container registry for Kubernetes." to ensure
correct wording.
- Around line 130-138: The README's helm install example exposes secrets inline;
remove direct secret examples for the helm flags and instead show a safe
pattern: keep the helm install example but delete or replace the inline values
for harborAdminPassword and database.password with a note to supply them via a
values file or Kubernetes Secret (e.g., reference using --values values.yaml or
pre-create secrets and reference them in values.yaml), and keep non-secret flags
(like externalURL) as-is; specifically update the example that uses
harborAdminPassword and database.password (and duplicate locations around lines
referenced) to instruct users to set these via values.yaml or secret-backed
references rather than --set.

In `@deploy/chart/templates/core.deployment.yaml`:
- Around line 139-144: The volume mount currently hardcodes the secret item key
to "secretKey" which mismatches the existingSecretKey setting; update the secret
items key for the "secret-key" volume to use the value from
.Values.core.existingSecretKey (with the same default used elsewhere, e.g.
"secret") instead of the literal "secretKey", and make the same change wherever
the secret item key is set (referencing core.existingSecret and
core.existingSecretKey to keep them aligned).

In `@deploy/chart/templates/pdbs.yaml`:
- Around line 17-31: The template currently only fails when neither
.values.pdb.maxUnavailable nor .values.pdb.minAvailable is set but does not
reject when both are provided; update the initial validation block in
deploy/chart/templates/pdbs.yaml to also check for the mutually-exclusive case
and call fail if both keys exist (use hasKey .values.pdb "maxUnavailable" and
hasKey .values.pdb "minAvailable") so the template errors out with a clear
message (e.g., "<name>.pdb requires either minAvailable or maxUnavailable, not
both") before rendering the PodDisruptionBudget.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2cd11191-51d6-4519-9224-26610f4204d6

📥 Commits

Reviewing files that changed from the base of the PR and between f940944 and 55208dd.

📒 Files selected for processing (5)
  • deploy/chart/README.md
  • deploy/chart/docs/guide/nutanix-kp.md
  • deploy/chart/docs/guide/openshift.md
  • deploy/chart/templates/core.deployment.yaml
  • deploy/chart/templates/pdbs.yaml
✅ Files skipped from review due to trivial changes (1)
  • deploy/chart/docs/guide/openshift.md

Comment thread deploy/chart/README.md

![Version: 3.0.0](https://img.shields.io/badge/Version-3.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v2.14.1](https://img.shields.io/badge/AppVersion-v2.14.1-informational?style=flat-square)

A modern, production-ready Helm chart for [Harbor Next](https://github.com/container-registry/harbor-next) - the cloud native container registry for Kubernetes.

⚠️ Potential issue | 🟡 Minor

Use hyphenated compound adjective in description.

At Line 5, use “cloud-native container registry” for correct wording.

🧰 Tools
🪛 LanguageTool

[grammar] ~5-~5: Use a hyphen to join words.
Context: ...tainer-registry/harbor-next) - the cloud native container registry for Kubernetes...

(QB_NEW_EN_HYPHEN)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` at line 5, Update the README description to use a
hyphenated compound adjective: change the phrase "cloud native container
registry" to "cloud-native container registry" in the sentence that reads "A
modern, production-ready Helm chart for Harbor Next - the cloud native container
registry for Kubernetes." to ensure correct wording.

Comment thread deploy/chart/README.md
Comment on lines +130 to +138
```bash
helm install my-harbor oci://8gears.container-registry.com/8gcr/charts/harbor-next \
--namespace harbor \
--create-namespace \
--set harborAdminPassword=your-strong-password \
--set externalURL=https://harbor.example.com \
--set database.host=postgres.example.com \
--set database.password=your-password
```

⚠️ Potential issue | 🟠 Major

Avoid documenting secret values via --set in command examples.

Using inline secrets on Lines 134/137 (and similarly in Lines 837-856) encourages credential leakage via shell history and process inspection.

🔒 Suggested doc adjustment
 helm install my-harbor oci://8gears.container-registry.com/8gcr/charts/harbor-next \
   --namespace harbor \
   --create-namespace \
-  --set harborAdminPassword=your-strong-password \
   --set externalURL=https://harbor.example.com \
   --set database.host=postgres.example.com \
-  --set database.password=your-password
+  --set database.existingSecret=harbor-db-secret \
+  --set existingSecretAdminPassword=harbor-admin-secret

Also applies to: 837-856

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/README.md` around lines 130 - 138, The README's helm install
example exposes secrets inline; remove direct secret examples for the helm flags
and instead show a safe pattern: keep the helm install example but delete or
replace the inline values for harborAdminPassword and database.password with a
note to supply them via a values file or Kubernetes Secret (e.g., reference
using --values values.yaml or pre-create secrets and reference them in
values.yaml), and keep non-secret flags (like externalURL) as-is; specifically
update the example that uses harborAdminPassword and database.password (and
duplicate locations around lines referenced) to instruct users to set these via
values.yaml or secret-backed references rather than --set.

Comment on lines +139 to +144
- name: secret-key
secret:
secretName: {{ .Values.core.existingSecret | default (include "harbor.core" .) }}
items:
- key: secretKey
path: key

⚠️ Potential issue | 🟠 Major

Align mounted secret key lookup with core.existingSecretKey when using existing secrets.

At Lines 139-144, the volume always expects key secretKey, but Lines 69-73 allow a different existing key (core.existingSecretKey, default secret). This mismatch can break pods when core.existingSecret is used.

💡 Proposed fix
         - name: secret-key
           secret:
             secretName: {{ .Values.core.existingSecret | default (include "harbor.core" .) }}
             items:
-              - key: secretKey
+              - key: {{ if .Values.core.existingSecret }}{{ .Values.core.existingSecretKey | default "secret" }}{{ else }}secretKey{{ end }}
                 path: key

Also applies to: 69-73

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/core.deployment.yaml` around lines 139 - 144, The
volume mount currently hardcodes the secret item key to "secretKey" which
mismatches the existingSecretKey setting; update the secret items key for the
"secret-key" volume to use the value from .Values.core.existingSecretKey (with
the same default used elsewhere, e.g. "secret") instead of the literal
"secretKey", and make the same change wherever the secret item key is set
(referencing core.existingSecret and core.existingSecretKey to keep them
aligned).
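One way to keep the env var and volume item lookups aligned is a shared named template in `_helpers.tpl` (a sketch; the helper name `harbor.core.secretKeyName` is hypothetical):

```yaml
{{/* Resolve the key under which the core secret value is stored. */}}
{{- define "harbor.core.secretKeyName" -}}
{{- if .Values.core.existingSecret -}}
{{- .Values.core.existingSecretKey | default "secret" -}}
{{- else -}}
secretKey
{{- end -}}
{{- end -}}
```

Both the deployment's `secretKeyRef.key` and the volume's `items[].key` could then render `{{ include "harbor.core.secretKeyName" . }}`, so the two call sites cannot drift apart.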

Comment on lines +17 to +31
{{- if and (not (hasKey .values.pdb "maxUnavailable")) (not (hasKey .values.pdb "minAvailable")) }}
{{- fail (printf "%s.pdb requires either minAvailable or maxUnavailable when enabled" .name) }}
{{- end }}
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ include "harbor.fullname" $root }}-{{ .name }}
labels:
{{- include "harbor.componentLabels" (dict "root" $root "component" .name) | nindent 4 }}
spec:
{{- if hasKey .values.pdb "maxUnavailable" }}
maxUnavailable: {{ .values.pdb.maxUnavailable }}
{{- else if hasKey .values.pdb "minAvailable" }}
minAvailable: {{ .values.pdb.minAvailable }}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Kubernetes policy/v1 PodDisruptionBudget, are spec.minAvailable and spec.maxUnavailable mutually exclusive?

💡 Result:

Yes, in Kubernetes policy/v1 PodDisruptionBudget, spec.minAvailable and spec.maxUnavailable are mutually exclusive. You can specify only one of them in a single PodDisruptionBudget.


Add validation to reject both PDB availability fields simultaneously.

Line 17 validates that at least one of maxUnavailable or minAvailable is set, but Lines 28–31 still allow both to be present in values, silently rendering only maxUnavailable. Since Kubernetes requires mutual exclusivity, this silently drops one field instead of failing fast.

Add a check to explicitly reject when both are set:

Proposed fix

```diff
 {{- if $enabled }}
+{{- if and (hasKey .values.pdb "maxUnavailable") (hasKey .values.pdb "minAvailable") }}
+{{- fail (printf "%s.pdb must set only one of minAvailable or maxUnavailable" .name) }}
+{{- end }}
 {{- if and (not (hasKey .values.pdb "maxUnavailable")) (not (hasKey .values.pdb "minAvailable")) }}
 {{- fail (printf "%s.pdb requires either minAvailable or maxUnavailable when enabled" .name) }}
 {{- end }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/templates/pdbs.yaml` around lines 17 - 31, The template
currently only fails when neither .values.pdb.maxUnavailable nor
.values.pdb.minAvailable is set but does not reject when both are provided;
update the initial validation block in deploy/chart/templates/pdbs.yaml to also
check for the mutually-exclusive case and call fail if both keys exist (use
hasKey .values.pdb "maxUnavailable" and hasKey .values.pdb "minAvailable") so
the template errors out with a clear message (e.g., "<name>.pdb requires either
minAvailable or maxUnavailable, not both") before rendering the
PodDisruptionBudget.
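For reference, a values fragment that satisfies the stricter check sets exactly one of the two fields (the component name `core` and the `enabled` flag are assumptions; the `pdb.minAvailable`/`pdb.maxUnavailable` keys come from the template under review):

```yaml
core:
  pdb:
    enabled: true       # assumed enablement flag
    maxUnavailable: 1   # set exactly one of maxUnavailable / minAvailable
```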


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="deploy/chart/docs/guide/rancher.md">

<violation number="1" location="deploy/chart/docs/guide/rancher.md:112">
P2: This guide now sets a custom `harborAdminPassword`, but later still tells users to log in with `Harbor12345`, which is inconsistent and will cause incorrect login instructions.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


```yaml
externalURL: "https://harbor.example.com"
harborAdminPassword: "strong-password"
```

@cubic-dev-ai cubic-dev-ai bot Mar 25, 2026


P2: This guide now sets a custom harborAdminPassword, but later still tells users to log in with Harbor12345, which is inconsistent and will cause incorrect login instructions.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At deploy/chart/docs/guide/rancher.md, line 112:

<comment>This guide now sets a custom `harborAdminPassword`, but later still tells users to log in with `Harbor12345`, which is inconsistent and will cause incorrect login instructions.</comment>

<file context>
@@ -105,7 +105,27 @@ database:
+
+```yaml
+externalURL: "https://harbor.example.com"
+harborAdminPassword: "strong-password"
+```
+
</file context>


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (1)
deploy/chart/docs/guide/rancher.md (1)

108-113: Use a clearer placeholder for password examples.

The example value "strong-password" might be misinterpreted as a recommendation rather than a placeholder. Consider using a more explicit placeholder like "CHANGEME" or "your-secure-password-here" to emphasize that users must supply their own credential.

📝 Suggested improvement

````diff
 We set also `externalURL` and `harborAdminPassword`:
 
 ```yaml
 externalURL: "https://harbor.example.com"
-harborAdminPassword: "strong-password"
+harborAdminPassword: "CHANGEME-use-your-secure-password"
 ```
````
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 108 - 113, Replace the
example password value to make it an explicit placeholder rather than a
suggestion: update the YAML snippet that sets externalURL and
harborAdminPassword so that harborAdminPassword is a clear placeholder (e.g.,
"CHANGEME-use-your-secure-password" or "your-secure-password-here") next to the
externalURL line; keep the externalURL example as-is and ensure the new
placeholder makes it obvious users must supply their own credential for
harborAdminPassword.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @deploy/chart/docs/guide/rancher.md:

  • Around line 128-132: Update the fenced code block containing "127.0.0.1
    harbor.example.com" to include a language identifier (e.g., change ``` to ```text)
    so the line becomes a ```text fenced block; this will enable proper syntax
    highlighting and correct documentation rendering for that snippet in rancher.md.
  • Around line 156-160: Update the fenced code block that contains
    "https://harbor.example.com" to include a language identifier (e.g., text) so
    rendering/syntax-highlighting works correctly; locate the code block with the
    URL text and add the language tag after the opening backticks (change ``` to ```text).
- Line 162: Add a clear, prominent security warning next to the line "The
default credentials are `admin` and `Harbor12345`" stating these defaults are
publicly known and must be changed immediately after first login; instruct the
user to set a strong unique password, rotate or disable the default admin
account where possible, and document where/how to change the password in
Harbor's UI or CLI (mention the "admin" account and default password explicitly
so readers know which credentials to change).
- Around line 44-50: Add a prominent security warning above the Helm
installation snippet that uses the hardcoded example password `test1234!`,
mentioning that example credentials should never be used in production and
suggesting alternatives such as using a generated secure password, Kubernetes
Secrets, or a password manager; reference the Helm command and the
auth.postgresPassword example so the warning is clearly adjacent to the helm
install postgresql bitnami/postgresql snippet and the --set
auth.postgresPassword value.
- Around line 18-20: Update the kubectl apply command to use the latest
local-path-provisioner release by replacing the version token in the URL
"https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml"
with "v0.0.35" so the line invoking kubectl apply references v0.0.35 instead of
v0.0.34.

---

Nitpick comments:
In `@deploy/chart/docs/guide/rancher.md`:
- Around line 108-113: Replace the example password value to make it an explicit
placeholder rather than a suggestion: update the YAML snippet that sets
externalURL and harborAdminPassword so that harborAdminPassword is a clear
placeholder (e.g., "CHANGEME-use-your-secure-password" or
"your-secure-password-here") next to the externalURL line; keep the externalURL
example as-is and ensure the new placeholder makes it obvious users must supply
their own credential for harborAdminPassword.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f929733a-7e50-4629-a82e-c0303eb15c4f

📥 Commits

Reviewing files that changed from the base of the PR and between e7374f7 and c1300ff.

📒 Files selected for processing (1)
  • deploy/chart/docs/guide/rancher.md

Comment on lines +18 to +20
```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check available releases of local-path-provisioner
gh api repos/rancher/local-path-provisioner/releases --paginate | jq -r '.[].tag_name' | sort -V | tail -10
```

Repository: container-registry/harbor-next

Length of output: 173


🏁 Script executed:

```bash
#!/bin/bash
# Check available releases of local-path-provisioner using public GitHub API
curl -s "https://api.github.com/repos/rancher/local-path-provisioner/releases" | head -500
```

Repository: container-registry/harbor-next

Length of output: 35161


🌐 Web query:

local-path-provisioner rancher latest version releases

💡 Result:

The latest version of the local-path-provisioner from the official Rancher GitHub repository is v0.0.35, released on March 10, 2026. This is confirmed across the releases page, tags page, main repo, and pkg.go.dev, which explicitly mark it as the latest release. A pre-release v0.0.35-rc1 was also tagged on the same date. A third-party Helm chart on ArtifactHub (containeroo) mentions 0.0.36, but this is not from the official Rancher source. Official deployment YAML references v0.0.35: kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.35/deploy/local-path-storage.yaml.


Update to the latest version v0.0.35.

Version v0.0.34 is valid, but the latest stable release is now v0.0.35 (released March 10, 2026). Update the guide to reference v0.0.35 instead to ensure users deploy the current recommended version with the latest improvements and dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 18 - 20, Update the kubectl
apply command to use the latest local-path-provisioner release by replacing the
version token in the URL
"https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml"
with "v0.0.35" so the line invoking kubectl apply references v0.0.35 instead of
v0.0.34.

Comment on lines +44 to +50
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql \
--namespace my-db \
--set auth.postgresPassword=test1234! \
--set persistence.size=8Gi
```

⚠️ Potential issue | 🟠 Major

Add security warning for example credentials.

The guide uses a hardcoded example password (test1234!) without an explicit security warning. While acceptable for a tutorial, readers may copy these examples into production environments.

Consider adding a prominent security note before this section:

🔒 Suggested security warning

````diff
 We deploy our external Postgresql database with the Bitnami Helm chart:
 
+> **⚠️ Security Warning:** The credentials shown below are for demonstration purposes only. In production environments, always use strong, unique passwords and consider using Kubernetes Secrets or a secrets management solution.
+
 ```bash
 helm repo add bitnami https://charts.bitnami.com/bitnami
 helm install postgresql bitnami/postgresql \
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 44 - 50, Add a prominent
security warning above the Helm installation snippet that uses the hardcoded
example password `test1234!`, mentioning that example credentials should never
be used in production and suggesting alternatives such as using a generated
secure password, Kubernetes Secrets, or a password manager; reference the Helm
command and the auth.postgresPassword example so the warning is clearly adjacent
to the helm install postgresql bitnami/postgresql snippet and the --set
auth.postgresPassword value.

Comment on lines +128 to +132
The hostname chosen for our local deployment is `harbor.example.com`. To override our DNS resolver, we set a mapping to localhost in `/etc/hosts`:

```
127.0.0.1 harbor.example.com
```

⚠️ Potential issue | 🟡 Minor

Specify language identifier for code block.

The code block at lines 130-132 is missing a language identifier, which affects syntax highlighting and documentation rendering.

📄 Proposed fix

````diff
 The hostname chosen for our local deployment is `harbor.example.com`. To override our DNS resolver, we set a mapping to localhost in `/etc/hosts`:
 
-```
+```text
 127.0.0.1 harbor.example.com
 ```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 130-130: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` around lines 128 - 132, Update the fenced
code block containing "127.0.0.1 harbor.example.com" to include a language
identifier (e.g., change ``` to ```text) so the line becomes a ```text fenced
block; this will enable proper syntax highlighting and correct documentation
rendering for that snippet in rancher.md.

Comment on lines +156 to +160
Since RKE2 comes with an Nginx ingress deployed by default, we can access the portal from a browser at:

```
https://harbor.example.com
```

⚠️ Potential issue | 🟡 Minor

Specify language identifier for code block.

The code block at lines 158-160 is missing a language identifier, which affects syntax highlighting and documentation rendering.

📄 Proposed fix

````diff
 Since RKE2 comes with an Nginx ingress deployed by default, we can access the portal from a browser at:
 
-```
+```text
 https://harbor.example.com
 ```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 158-158: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @deploy/chart/docs/guide/rancher.md around lines 156 - 160, Update the fenced
code block that contains "https://harbor.example.com" to include a language
identifier (e.g., text) so rendering/syntax-highlighting works correctly; locate
the code block with the URL text and add the language tag after the opening
backticks (change ``` to ```text).



```
https://harbor.example.com
```

The default credentials are `admin` and `Harbor12345`.

⚠️ Potential issue | 🟠 Major

Add critical security warning about changing default credentials.

The default Harbor credentials are well-known and represent a significant security risk if left unchanged. The documentation should strongly emphasize that these must be changed immediately after first login.

🔐 Suggested security warning

```diff
-The default credentials are `admin` and `Harbor12345`.
+The default credentials are `admin` and `Harbor12345`.
+
+> **⚠️ CRITICAL SECURITY WARNING:** These default credentials are publicly known and must be changed immediately after your first login. Leaving default credentials in place exposes your Harbor instance to unauthorized access. Navigate to User Settings after login to update the admin password.
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-The default credentials are `admin` and `Harbor12345`.
+The default credentials are `admin` and `Harbor12345`.
+
+> **⚠️ CRITICAL SECURITY WARNING:** These default credentials are publicly known and must be changed immediately after your first login. Leaving default credentials in place exposes your Harbor instance to unauthorized access. Navigate to User Settings after login to update the admin password.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy/chart/docs/guide/rancher.md` at line 162, Add a clear, prominent
security warning next to the line "The default credentials are `admin` and
`Harbor12345`" stating these defaults are publicly known and must be changed
immediately after first login; instruct the user to set a strong unique
password, rotate or disable the default admin account where possible, and
document where/how to change the password in Harbor's UI or CLI (mention the
"admin" account and default password explicitly so readers know which
credentials to change).
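Beyond the UI path, the rotation can also be scripted against Harbor's v2 REST API using its `PUT /api/v2.0/users/{user_id}/password` endpoint (a sketch only: the user ID `1` for `admin`, the hostname, and the new password are placeholders for this guide):

```bash
# Rotate the well-known default admin password right after first login.
curl -u 'admin:Harbor12345' -X PUT \
  "https://harbor.example.com/api/v2.0/users/1/password" \
  -H "Content-Type: application/json" \
  -d '{"old_password":"Harbor12345","new_password":"a-strong-unique-password"}'
```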
