
Conversation

@fm3
Member

@fm3 fm3 commented Dec 2, 2025

When a user deletes a dataset that has a pending conversion job, that job is now cancelled, so the user doesn’t get a failed-job notification and server compute is not wasted.

Also slipped in here: small styling follow-up for #9068

Steps to test:

  • yarn enable-jobs
  • upload a tiff dataset so that a conversion job is started
  • in the dashboard, click delete dataset
  • the job should automatically be cancelled (see the job list view)
  • other jobs should not be cancelled

Issues:


  • Added changelog entry (create a $PR_NUMBER.md file in unreleased_changes or use ./tools/create-changelog-entry.py)
  • Removed dev-only changes like prints and application.conf edits
  • Considered common edge cases

@fm3 fm3 self-assigned this Dec 2, 2025
@coderabbitai
Contributor

coderabbitai bot commented Dec 2, 2025

📝 Walkthrough

Walkthrough

Adds cancellation of pending convert_to_wkw jobs when a dataset is deleted before it is fully uploaded; introduces a JobDAO method that marks those jobs CANCELLED, called from DatasetService during dataset deletion. Also adjusts three table column widths in the admin job list view and adds a release note.

Changes

  • Dataset deletion logic — app/models/dataset/DatasetService.scala: adds a JobDAO dependency and a call to cancel pending convert_to_wkw jobs when deleting datasets in the notYetUploadedToPaths / notYetUploaded states.
  • Job DAO update — app/models/job/Job.scala: adds cancelConvertToWkwJobForDataset(datasetId: ObjectId): Fox[Unit], which updates the manualState of matching convert_to_wkw jobs in PENDING/STARTED states to CANCELLED.
  • Frontend column widths — frontend/javascripts/admin/job/job_list_view.tsx: sets explicit widths for the Job Id (120), Date (190), and State (120) table columns.
  • Release notes — unreleased_changes/9116.md: documents that deleting not-yet-fully-uploaded datasets cancels associated pending conversion jobs.
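The cancellation condition summarized above can be sketched as a small in-memory model. This is a hedged illustration only: the actual implementation is a SQL UPDATE in app/models/job/Job.scala (Scala), and the names JobRow and shouldCancel are hypothetical.

```typescript
// Hypothetical model of the job-cancellation filter this PR adds.
// The real code is a SQL UPDATE in app/models/job/Job.scala; the
// names JobRow and shouldCancel are illustrative only.

type JobState = "PENDING" | "STARTED" | "SUCCESS" | "FAILURE";

interface JobRow {
  command: string;
  datasetId: string;
  state: JobState;
}

// A job is cancelled only if it is a convert_to_wkw job for the deleted
// dataset and has not yet finished (PENDING or STARTED).
function shouldCancel(job: JobRow, deletedDatasetId: string): boolean {
  return (
    job.command === "convert_to_wkw" &&
    job.datasetId === deletedDatasetId &&
    (job.state === "PENDING" || job.state === "STARTED")
  );
}
```

Note that unrelated jobs and already-finished jobs fall through the filter untouched, matching the "other jobs should not be cancelled" test step.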

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review interaction between DatasetService and JobDAO for error handling and transactional consistency.
  • Verify that the Job.scala query filters exactly on command = convert_to_wkw and a matching commandArgs.dataset_id, and that only PENDING/STARTED states are affected.
  • Confirm frontend width changes are non-breaking in responsive layouts.

Suggested reviewers

  • daniel-wer
  • frcroth

Poem

🐰 I hopped through code with nimble paws,

Cancelled jobs without a pause.
Datasets gone, the queue stays light,
No failed tasks to haunt the night.
A tidy server, snug and bright. 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Title check ✅ Passed — the title clearly and concisely summarizes the main change: cancelling convert_to_wkw jobs when datasets are deleted.
  • Description check ✅ Passed — the description is directly related to the changeset, explaining the problem, solution, and testing steps, and including a changelog entry.
  • Linked Issues check ✅ Passed — the PR addresses issue #9108 by implementing job cancellation when datasets are deleted, preventing failed job notifications and saving compute resources.
  • Out of Scope Changes check ✅ Passed — the changes are in scope: job cancellation logic and related updates. The styling changes for #9068 are acknowledged as a minor follow-up and do not conflict with the primary objective.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d933780 and d386d60.

📒 Files selected for processing (1)
  • app/models/dataset/DatasetService.scala (3 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-05-12T13:07:29.637Z
Learnt from: frcroth
Repo: scalableminds/webknossos PR: 8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
1. Non-WKW datasets have `magsOpt` populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
2. WKW datasets will have `wkwResolutionsOpt` populated and use the second branch which includes cubeLength.
3. The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.

Applied to files:

  • app/models/dataset/DatasetService.scala
📚 Learning: 2025-04-28T14:18:04.368Z
Learnt from: frcroth
Repo: scalableminds/webknossos PR: 8236
File: webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/NeuroglancerPrecomputedMeshFileService.scala:161-166
Timestamp: 2025-04-28T14:18:04.368Z
Learning: In Scala for-comprehensions with the Fox error handling monad, `Fox.fromBool()` expressions should use the `<-` binding operator instead of the `=` assignment operator to properly propagate error conditions. Using `=` will cause validation failures to be silently ignored.

Applied to files:

  • app/models/dataset/DatasetService.scala
📚 Learning: 2025-01-27T12:06:42.865Z
Learnt from: MichaelBuessemeyer
Repo: scalableminds/webknossos PR: 8352
File: app/models/organization/CreditTransactionService.scala:0-0
Timestamp: 2025-01-27T12:06:42.865Z
Learning: In Scala's for-comprehension with Fox (Future-like type), the `<-` operator ensures sequential execution. If any step fails, the entire chain short-circuits and returns early, preventing subsequent operations from executing. This makes it safe to perform validation checks before database operations.

Applied to files:

  • app/models/dataset/DatasetService.scala
📚 Learning: 2025-01-13T09:06:15.202Z
Learnt from: MichaelBuessemeyer
Repo: scalableminds/webknossos PR: 8221
File: app/controllers/JobController.scala:226-232
Timestamp: 2025-01-13T09:06:15.202Z
Learning: In the JobController of webknossos, numeric parameters should use `Option[Double]` instead of `Option[String]` for better type safety. Additionally, when adding new job parameters that are conditionally required (like evaluation settings), proper validation should be added in the `for` comprehension block before creating the `commandArgs`.

Applied to files:

  • app/models/dataset/DatasetService.scala
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: frontend-tests
  • GitHub Check: build-smoketest-push
  • GitHub Check: backend-tests
🔇 Additional comments (2)
app/models/dataset/DatasetService.scala (2)

29-29: LGTM!

Import is correctly added to support the new JobDAO dependency.


59-59: LGTM!

Dependency injection follows the standard Play Framework pattern and is positioned appropriately.



@fm3 fm3 marked this pull request as ready for review December 2, 2025 12:43
@fm3 fm3 added the jobs label Dec 2, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe82248 and d933780.

📒 Files selected for processing (4)
  • app/models/dataset/DatasetService.scala (3 hunks)
  • app/models/job/Job.scala (1 hunks)
  • frontend/javascripts/admin/job/job_list_view.tsx (3 hunks)
  • unreleased_changes/9116.md (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-12T13:07:29.637Z
Learnt from: frcroth
Repo: scalableminds/webknossos PR: 8609
File: app/models/dataset/Dataset.scala:753-775
Timestamp: 2025-05-12T13:07:29.637Z
Learning: In the `updateMags` method of DatasetMagsDAO (Scala), the code handles different dataset types distinctly:
1. Non-WKW datasets have `magsOpt` populated and use the first branch which includes axisOrder, channelIndex, and credentialId.
2. WKW datasets will have `wkwResolutionsOpt` populated and use the second branch which includes cubeLength.
3. The final branch is a fallback for legacy data.
This ensures appropriate fields are populated for each dataset type.

Applied to files:

  • app/models/dataset/DatasetService.scala
🧬 Code graph analysis (2)
app/models/dataset/DatasetService.scala (3)
app/models/job/Job.scala (1)
  • cancelConvertToWkwJobForDataset (214-224)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (6)
  • Fox (28-228)
  • Fox (230-303)
  • runIf (167-176)
  • s (234-238)
  • s (238-248)
  • s (248-257)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/datasource/DataSourceStatus.scala (1)
  • DataSourceStatus (3-12)
app/models/job/Job.scala (5)
app/models/job/JobResultLinks.scala (1)
  • datasetId (18-18)
app/utils/sql/SimpleSQLDAO.scala (1)
  • run (28-48)
app/utils/sql/SqlInterpolation.scala (1)
  • q (20-39)
app/models/job/JobState.scala (1)
  • JobState (5-8)
app/models/job/JobCommand.scala (1)
  • JobCommand (5-22)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: backend-tests
  • GitHub Check: frontend-tests
  • GitHub Check: build-smoketest-push
🔇 Additional comments (3)
unreleased_changes/9116.md (1)

1-2: LGTM!

The changelog entry accurately describes the new feature and its benefits (no failed job notifications, saves compute).

frontend/javascripts/admin/job/job_list_view.tsx (1)

506-506: LGTM!

These column width adjustments improve the table layout consistency. The widths are appropriate for Job Id (120px), Date (190px), and State (120px) content.
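The column-width change can be pictured with a sketch of antd-style column definitions. This is an assumed shape for illustration; the actual job_list_view.tsx columns and keys may differ.

```typescript
// Assumed shape of the job list columns (illustrative; the real
// job_list_view.tsx may differ). antd table columns accept an
// optional fixed `width` in pixels; columns without one flex.

interface ColumnDef {
  title: string;
  key: string;
  width?: number;
}

const jobListColumns: ColumnDef[] = [
  { title: "Job Id", key: "id", width: 120 },
  { title: "Date", key: "createdAt", width: 190 },
  { title: "State", key: "state", width: 120 },
  { title: "Description", key: "description" }, // takes remaining width
];
```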

app/models/job/Job.scala (1)

214-222: Implementation is correct.

The method properly uses manualState for cancellation (consistent with the job cancellation protocol) and correctly filters for active jobs (PENDING or STARTED). The query's comparison of commandArgs->>'dataset_id' with $datasetId works correctly because Slick's SQL interpolation automatically converts ObjectId to its string representation via the ObjectIdValue handler, matching the string-formatted dataset_id stored in the JSON field.
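The manualState-based cancellation protocol this comment refers to can be sketched as follows. TypeScript model for illustration only; the field and function names are assumptions, not the actual Scala schema.

```typescript
// Sketch of the manualState overlay: a job's effective state is its
// manualState when set (e.g. after cancellation), otherwise the
// worker-reported state. Field names are illustrative.

interface TrackedJob {
  state: string;              // reported by the worker
  manualState: string | null; // set by user actions such as cancellation
}

// The state shown to users: manualState wins when present.
function effectiveState(job: TrackedJob): string {
  return job.manualState ?? job.state;
}

// Cancelling only sets manualState, mirroring an UPDATE that leaves
// the worker-reported state column untouched.
function cancel(job: TrackedJob): TrackedJob {
  return { ...job, manualState: "CANCELLED" };
}
```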

Contributor

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Awesome looks good and works well 🎉

Contributor

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Everything looks nicey dicey 🎲 (wrong pr xD)


Development

Successfully merging this pull request may close these issues.

Deleting not-yet-fully-uploaded dataset should also cancel its convert_to_wkw worker jobs

3 participants