
Commit e48da74

Fix capitalization in file titles (#1341)
1 parent 756be4e commit e48da74

15 files changed with 20 additions and 30 deletions

docs/examples/qdrant_zendesk/qdrant_zendesk.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 """
 ---
-title: Similarity Searching with Qdrant
+title: Similarity searching with Qdrant
 description: Learn how to use the dlt source, Zendesk and dlt destination, Qdrant to conduct a similarity search on your tickets data.
 keywords: [similarity search, example]
 ---

docs/website/docs/build-a-pipeline-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Pipeline Tutorial
+title: Pipeline tutorial
 description: Build a data pipeline with dlt from scratch
 keywords: [getting started, quick start, basics]
 ---

docs/website/docs/dlt-ecosystem/file-formats/csv.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: The csv file format
 keywords: [csv, file formats]
 ---
 
-# CSV File Format
+# CSV file format
 
 **csv** is the most basic file format to store tabular data, where all the values are strings and are separated by a delimiter (typically comma).
 `dlt` uses it for specific use cases - mostly for the performance and compatibility reasons.

docs/website/docs/dlt-ecosystem/file-formats/parquet.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: The parquet file format
 keywords: [parquet, file formats]
 ---
 
-# Parquet File Format
+# Parquet file format
 
 [Apache Parquet](https://en.wikipedia.org/wiki/Apache_Parquet) is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. `dlt` is capable of storing data in this format when configured to do so.
 
docs/website/docs/dlt-ecosystem/verified-sources/index.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: Verified Sources
+title: Verified sources
 description: List of verified sources
 keywords: ['verified source']
 ---

docs/website/docs/general-usage/credentials/config_providers.md

Lines changed: 1 addition & 4 deletions
@@ -1,13 +1,10 @@
 ---
-title: Configuration Providers
+title: Configuration providers
 description: Where dlt looks for config/secrets and in which order.
 keywords: [credentials, secrets.toml, secrets, config, configuration, environment
   variables, provider]
 ---
 
-# Configuration Providers
-
-
 Configuration Providers in the context of the `dlt` library
 refer to different sources from which configuration values
 and secrets can be retrieved for a data pipeline.

docs/website/docs/general-usage/credentials/config_specs.md

Lines changed: 1 addition & 3 deletions
@@ -1,12 +1,10 @@
 ---
-title: Configuration Specs
+title: Configuration specs
 description: How to specify complex custom configurations
 keywords: [credentials, secrets.toml, secrets, config, configuration, environment
   variables, specs]
 ---
 
-# Configuration Specs
-
 Configuration Specs in `dlt` are Python dataclasses that define how complex configuration values,
 particularly credentials, should be handled.
 They specify the types, defaults, and parsing methods for these values.

docs/website/docs/general-usage/credentials/configuration.md

Lines changed: 1 addition & 3 deletions
@@ -1,12 +1,10 @@
 ---
-title: Secrets and Configs
+title: Secrets and configs
 description: What are secrets and configs and how sources and destinations read them.
 keywords: [credentials, secrets.toml, secrets, config, configuration, environment
   variables]
 ---
 
-# Secrets and Configs
-
 Use secret and config values to pass access credentials and configure or fine-tune your pipelines without the need to modify your code.
 When done right you'll be able to run the same pipeline script during development and in production.
 
docs/website/docs/general-usage/glossary.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ The data store where data from the source is loaded (e.g. Google BigQuery).
 Moves the data from the source to the destination, according to instructions provided in the schema
 (i.e. extracting, normalizing, and loading the data).
 
-## [Verified Source](../walkthroughs/add-a-verified-source)
+## [Verified source](../walkthroughs/add-a-verified-source)
 
 A Python module distributed with `dlt init` that allows creating pipelines that extract data from a
 particular **Source**. Such module is intended to be published in order for others to use it to

docs/website/docs/general-usage/schema-contracts.md

Lines changed: 1 addition & 3 deletions
@@ -1,11 +1,9 @@
 ---
-title: 🧪 Schema and Data Contracts
+title: 🧪 Schema and data contracts
 description: Controlling schema evolution and validating data
 keywords: [data contracts, schema, dlt schema, pydantic]
 ---
 
-## Schema and Data Contracts
-
 `dlt` will evolve the schema at the destination by following the structure and data types of the extracted data. There are several modes
 that you can use to control this automatic schema evolution, from the default modes where all changes to the schema are accepted to
 a frozen schema that does not change at all.
