55 changes: 55 additions & 0 deletions .github/workflows/docs-broken-links.yaml
@@ -0,0 +1,55 @@
name: Docs broken link check

on:
  workflow_dispatch:

  pull_request:
    paths:
      - .github/workflows/docs-broken-links.yaml
      - "docs/**"

  push:
    branches:
      - main
    paths:
      - .github/workflows/docs-broken-links.yaml
      - "docs/**"

permissions:
  contents: read

# Limit concurrency by workflow/branch combination.
#
# For pull request builds, pushing additional changes to the
# branch will cancel prior in-progress and pending builds.
#
# For builds triggered on a branch push, additional changes
# will wait for prior builds to complete before starting.
#
# https://docs.github.com/en/actions/using-jobs/using-concurrency
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  check-broken-links:
    name: Check docs broken links
    runs-on: ubuntu-latest
    timeout-minutes: 10

    steps:
      - uses: actions/checkout@v6
        with:
          persist-credentials: false

      - name: Setup NodeJS
        uses: actions/setup-node@v6
        with:
          node-version-file: ".nvmrc"

      - name: Set up just
        uses: extractions/setup-just@v3

      - name: Check for broken links
        working-directory: docs
        run: just links
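The final step delegates the actual check to a `just links` recipe in the `docs/` directory. That recipe is not part of this diff; a hypothetical sketch of what such a justfile entry could look like (the recipe name comes from the workflow, but the checker command here is an assumption):

```
# docs/justfile (hypothetical sketch; the real recipe is not shown in this diff)
links:
    npx mintlify broken-links
```

Since the workflow installs Node via `actions/setup-node`, the recipe presumably shells out to a Node-based checker.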
@@ -24,7 +24,7 @@ Flow for running a namespaced Kubernetes job.
- `print_func`: A function to print the logs from the job pods.

**Returns:**
-- A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.
+- A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`.

**Raises:**
- `RuntimeError`: If the created Kubernetes job attains a failed status.
@@ -58,7 +58,7 @@ Flow for running a namespaced Kubernetes job.
- `print_func`: A function to print the logs from the job pods.

**Returns:**
-- A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.
+- A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`.

**Raises:**
- `RuntimeError`: If the created Kubernetes job attains a failed status.
@@ -25,7 +25,7 @@ initialize_clients(logger: kopf.Logger, **kwargs: Any)
cleanup_fn(logger: kopf.Logger, **kwargs: Any)
```

-### `start_observer` <sup><a href="https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-kubernetes/prefect_kubernetes/observer.py#L793" target="_blank"><Icon icon="github" style="width: 14px; height: 14px;" /></a></sup>
+### `start_observer` <sup><a href="https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-kubernetes/prefect_kubernetes/observer.py#L795" target="_blank"><Icon icon="github" style="width: 14px; height: 14px;" /></a></sup>

```python
start_observer()
@@ -35,7 +35,7 @@ start_observer()
Start the observer in a separate thread.


-### `stop_observer` <sup><a href="https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-kubernetes/prefect_kubernetes/observer.py#L844" target="_blank"><Icon icon="github" style="width: 14px; height: 14px;" /></a></sup>
+### `stop_observer` <sup><a href="https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-kubernetes/prefect_kubernetes/observer.py#L846" target="_blank"><Icon icon="github" style="width: 14px; height: 14px;" /></a></sup>

```python
stop_observer()
4 changes: 2 additions & 2 deletions docs/v3/advanced/infrastructure-as-code.mdx
@@ -34,7 +34,7 @@ Prefect maintains several Terraform modules to help you get started with common

Prefect does not maintain an official Pulumi package.
However, you can use Pulumi’s terraform-provider to automatically generate a Pulumi SDK from the Prefect Terraform provider.
-For details, refer to the [Pulumi documentation on Terraform providers](www.pulumi.com/registry/packages/terraform-provider/).
+For details, refer to the [Pulumi documentation on Terraform providers](https://www.pulumi.com/registry/packages/terraform-provider/).

<Tip>
You will need to be using Pulumi version >= 3.147.0.
@@ -225,4 +225,4 @@ Each Helm chart subdirectory contains usage documentation. There are two main ch
- The `prefect-worker` chart is used to deploy a [Prefect worker](/v3/deploy/infrastructure-concepts/workers).

Finally, there is a `prefect-prometheus-exporter` chart that is used to deploy a Prometheus exporter,
-exposing Prefect metrics for monitoring and alerting.
+exposing Prefect metrics for monitoring and alerting.
2 changes: 1 addition & 1 deletion docs/v3/api-ref/python/prefect-server-models-workers.mdx
@@ -429,7 +429,7 @@ count_work_pool_slot_holders_by_queue(db: PrefectDBInterface, session: AsyncSess
```


-Returns {work_queue_id: count} for slot-holding runs in a pool.
+Returns `{work_queue_id: count}` for slot-holding runs in a pool.
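The `{work_queue_id: count}` shape documented above can be illustrated with a plain-Python sketch (hypothetical ids and runs, not the actual SQLAlchemy query the server executes):

```python
from collections import Counter
from uuid import UUID, uuid4

# Hypothetical work queue ids and the queue each slot-holding run belongs to
queue_a: UUID = uuid4()
queue_b: UUID = uuid4()
slot_holding_runs: list[UUID] = [queue_a, queue_a, queue_b]

# Group runs by work queue id, mirroring the documented return shape
counts: dict[UUID, int] = dict(Counter(slot_holding_runs))
print(counts[queue_a], counts[queue_b])  # 2 1
```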


### `count_work_queue_slot_holders` <sup><a href="https://github.com/PrefectHQ/prefect/blob/main/src/prefect/server/models/workers.py#L1192" target="_blank"><Icon icon="github" style="width: 14px; height: 14px;" /></a></sup>
2 changes: 1 addition & 1 deletion docs/v3/concepts/deployments.mdx
@@ -23,7 +23,7 @@ In Prefect Cloud, deployment configuration is versioned, and a new [deployment v
[Work pools](/v3/concepts/work-pools) allow you to switch between different types of infrastructure and to create a template for deployments.
Data platform teams find work pools especially useful for managing infrastructure configuration across teams of data professionals.

-Common work pool types include [Docker](/v3/how-to-guides/deployment_infra/docker), [Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes), and serverless options such as [AWS ECS](/integrations/prefect-aws/ecs_guide#ecs-worker-guide), [Azure ACI](/integrations/prefect-azure/aci_worker), [GCP Vertex AI](/integrations/prefect-gcp/index#run-flows-on-google-cloud-run-or-vertex-ai), or [GCP Google Cloud Run](/integrations/prefect-gcp/gcp-worker-guide).
+Common work pool types include [Docker](/v3/how-to-guides/deployment_infra/docker), [Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes), and serverless options such as [AWS ECS](/integrations/prefect-aws/ecs-worker), [Azure ACI](/integrations/prefect-azure/aci_worker), [GCP Vertex AI](/integrations/prefect-gcp/index#run-flows-on-google-cloud-run-or-vertex-ai), or [GCP Google Cloud Run](/integrations/prefect-gcp/gcp-worker-guide).

### Work pool-based deployment requirements

2 changes: 1 addition & 1 deletion docs/v3/concepts/task-runners.mdx
@@ -98,7 +98,7 @@ result of the task.

A `PrefectFuture` is an object that provides:
- a reference to the result returned by the task
-- a [`State`](/v3/api-ref/python/prefect-server#prefect.server.schemas.states) indicating the current state of the task run
+- a [`State`](/v3/api-ref/python/prefect-server-schemas-states) indicating the current state of the task run
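Prefect's futures follow the same submit-then-resolve pattern as Python's standard-library futures; a minimal stdlib analogy (not Prefect's actual API) of why a future must be resolved before its result is used:

```python
from concurrent.futures import ThreadPoolExecutor

def add(x: int, y: int) -> int:
    return x + y

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(add, 1, 2)  # returns a Future immediately, like task.submit()
    # As with a PrefectFuture, the result must be resolved explicitly:
    result = future.result()  # blocks until the submitted call completes

print(result)  # 3
```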

<Warning>
`PrefectFuture` objects must be resolved explicitly before returning from a flow or task.
2 changes: 1 addition & 1 deletion docs/v3/examples/resume-flow-run-on-pr-merge.mdx
@@ -13,7 +13,7 @@ This page is automatically generated via the `generate_example_pages.py` script.


<Note>
-This example uses [webhooks](/v3/automate/events/webhooks), which are only available in Prefect Cloud.
+This example uses [webhooks](/v3/concepts/webhooks), which are only available in Prefect Cloud.
**Review comment (P2):** Fix webhook link in the example source file

This edit updates an auto-generated page (docs/v3/examples/...) but not its source template in examples/resume_flow_run_on_pr_merge.py (which still contains /v3/automate/events/webhooks), so the next docs regeneration will overwrite this line and reintroduce the broken link. To make the fix durable, the source example comment should be updated and the page regenerated.

**Author reply:** Good catch — fixed in 990fb67. Updated the source file examples/resume_flow_run_on_pr_merge.py so regeneration won't revert the link fix.

</Note>

When a flow run fails due to a bug in your code, you typically need to:
4 changes: 2 additions & 2 deletions docs/v3/release-notes/oss/version-3-2.mdx
@@ -303,7 +303,7 @@ This release contains a fix for bug a where the default `Runner` limit would pro

*Released on February 21, 2025*

-This release [fixes]([#17240](https://github.com/PrefectHQ/prefect/pull/17240)) a bug where the `EventPersister` was not running by default.
+This release [fixes](https://github.com/PrefectHQ/prefect/pull/17240) a bug where the `EventPersister` was not running by default.

**Enhancements ➕➕**

@@ -427,7 +427,7 @@ There was a packaging bug with 3.2.3 where the UI was not included in the publis

*Released on February 13, 2025*

-Notably, includes a [fix]([#17123](https://github.com/PrefectHQ/prefect/pull/17123)) for multiple schedules with the same interval and different parameters.
+Notably, includes a [fix](https://github.com/PrefectHQ/prefect/pull/17123) for multiple schedules with the same interval and different parameters.

**Bug Fixes 🐞**

4 changes: 2 additions & 2 deletions docs/v3/release-notes/oss/version-3-3.mdx
@@ -173,7 +173,7 @@ Feedback is very important for us to help refine this feature while it's in beta

*Released on April 05, 2025*

-Includes a [fix]([#17747](https://github.com/PrefectHQ/prefect/pull/17747)) for a [bug](https://github.com/PrefectHQ/prefect/issues/17729) loading the flow run graph caused by an incorrect timezone-naive default.
+Includes a [fix](https://github.com/PrefectHQ/prefect/pull/17747) for a [bug](https://github.com/PrefectHQ/prefect/issues/17729) loading the flow run graph caused by an incorrect timezone-naive default.

**Enhancements ➕➕**

@@ -238,7 +238,7 @@ Includes a [fix]([#17747](https://github.com/PrefectHQ/prefect/pull/17747)) for

*Released on April 01, 2025*

-[Alleviates]([#17671](https://github.com/PrefectHQ/prefect/pull/17671)) a potential error `ValueError: Must provide start_date as a datetime` caused by changes to deprecation utils in [#17577](https://github.com/PrefectHQ/prefect/pull/17577).
+[Alleviates](https://github.com/PrefectHQ/prefect/pull/17671) a potential error `ValueError: Must provide start_date as a datetime` caused by changes to deprecation utils in [#17577](https://github.com/PrefectHQ/prefect/pull/17577).

**Enhancements ➕➕**

2 changes: 1 addition & 1 deletion docs/v3/release-notes/oss/version-3-4.mdx
@@ -954,7 +954,7 @@ To learn more, [register for next week's webinar here](https://lu.ma/c2bbz5bg)!

*Released on May 08, 2025*

-This release also ships with version 0.6.0 of prefect-kubernetes, featuring a revamped Kubernetes worker. The updated worker is more stateless, providing improved resiliency, enhanced event emission, and better crash detection capabilities. See [#18004]([#18004](https://github.com/PrefectHQ/prefect/pull/18004)) for more details.
+This release also ships with version 0.6.0 of prefect-kubernetes, featuring a revamped Kubernetes worker. The updated worker is more stateless, providing improved resiliency, enhanced event emission, and better crash detection capabilities. See [#18004](https://github.com/PrefectHQ/prefect/pull/18004) for more details.

**Enhancements ➕➕**

2 changes: 1 addition & 1 deletion examples/resume_flow_run_on_pr_merge.py
@@ -6,7 +6,7 @@
# ---
#
# <Note>
-# This example uses [webhooks](/v3/automate/events/webhooks), which are only available in Prefect Cloud.
+# This example uses [webhooks](/v3/concepts/webhooks), which are only available in Prefect Cloud.
# </Note>
#
# When a flow run fails due to a bug in your code, you typically need to:
@@ -18,7 +18,7 @@ def run_namespaced_job(
print_func: A function to print the logs from the job pods.

Returns:
-A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.
+A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`.

Raises:
RuntimeError: If the created Kubernetes job attains a failed status.
@@ -55,7 +55,7 @@ async def run_namespaced_job_async(
print_func: A function to print the logs from the job pods.

Returns:
-A dict of logs from each pod in the job, e.g. {'pod_name': 'pod_log_str'}.
+A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`.

Raises:
RuntimeError: If the created Kubernetes job attains a failed status.
2 changes: 1 addition & 1 deletion src/prefect/server/models/workers.py
@@ -1161,7 +1161,7 @@ async def count_work_pool_slot_holders_by_queue(
session: AsyncSession,
work_pool_id: UUID,
) -> dict[UUID, int]:
-"""Returns {work_queue_id: count} for slot-holding runs in a pool."""
+"""Returns `{work_queue_id: count}` for slot-holding runs in a pool."""
query = (
select(db.FlowRun.work_queue_id, sa.func.count())
.join(db.WorkQueue, db.FlowRun.work_queue_id == db.WorkQueue.id)