diff --git a/CHANGELOG.md b/CHANGELOG.md index e8ae5a0dd7..432c39f5fd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -304,7 +304,7 @@ * [FEATURE] Compactor: Add `-compactor.skip-blocks-with-out-of-order-chunks-enabled` configuration to mark blocks containing index with out-of-order chunks for no compact instead of halting the compaction. #4707 * [FEATURE] Querier/Query-Frontend: Add `-querier.per-step-stats-enabled` and `-frontend.cache-queryable-samples-stats` configurations to enable query sample statistics. #4708 * [FEATURE] Add shuffle sharding for the compactor #4433 -* [FEATURE] Querier: Use streaming for ingester metdata APIs. #4725 +* [FEATURE] Querier: Use streaming for ingester metadata APIs. #4725 * [ENHANCEMENT] Update Go version to 1.17.8. #4602 #4604 #4658 * [ENHANCEMENT] Keep track of discarded samples due to bad relabel configuration in `cortex_discarded_samples_total`. #4503 * [ENHANCEMENT] Ruler: Add `-ruler.disable-rule-group-label` to disable the `rule_group` label on exported metrics. #4571 @@ -443,7 +443,7 @@ * `memberlist_client_kv_store_value_tombstones` * `memberlist_client_kv_store_value_tombstones_removed_total` * `memberlist_client_messages_to_broadcast_dropped_total` -* [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-dispatcher-aggregation-groups` option to control max number of active dispatcher groups in Alertmanager (per tenant, also overrideable). When the limit is reached, Dispatcher produces log message and increases `cortex_alertmanager_dispatcher_aggregation_group_limit_reached_total` metric. #4254 +* [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-dispatcher-aggregation-groups` option to control max number of active dispatcher groups in Alertmanager (per tenant, also overridable). When the limit is reached, Dispatcher produces log message and increases `cortex_alertmanager_dispatcher_aggregation_group_limit_reached_total` metric. #4254 * [ENHANCEMENT] Alertmanager: Added `-alertmanager.max-alerts-count` and `-alertmanager.max-alerts-size-bytes` to control max number of alerts and total size of alerts that a single user can have in Alertmanager's memory. Adding more alerts will fail with a log message and incrementing `cortex_alertmanager_alerts_insert_limited_total` metric (per-user). These limits can be overrided by using per-tenant overrides. Current values are tracked in `cortex_alertmanager_alerts_limiter_current_alerts` and `cortex_alertmanager_alerts_limiter_current_alerts_size_bytes` metrics. #4253 * [ENHANCEMENT] Store-gateway: added `-store-gateway.sharding-ring.wait-stability-min-duration` and `-store-gateway.sharding-ring.wait-stability-max-duration` support to store-gateway, to wait for ring stability at startup. #4271 * [ENHANCEMENT] Ruler: added `rule_group` label to metrics `cortex_prometheus_rule_group_iterations_total` and `cortex_prometheus_rule_group_iterations_missed_total`. #4121 @@ -530,7 +530,7 @@ * [ENHANCEMENT] Alertmanager: validate configured `-alertmanager.web.external-url` and fail if ends with `/`. #4081 * [ENHANCEMENT] Alertmanager: added `-alertmanager.receivers-firewall.block.cidr-networks` and `-alertmanager.receivers-firewall.block.private-addresses` to block specific network addresses in HTTP-based Alertmanager receiver integrations. #4085 * [ENHANCEMENT] Allow configuration of Cassandra's host selection policy. #4069 -* [ENHANCEMENT] Store-gateway: retry synching blocks if a per-tenant sync fails. #3975 #4088 +* [ENHANCEMENT] Store-gateway: retry syncing blocks if a per-tenant sync fails. 
#3975 #4088 * [ENHANCEMENT] Add metric `cortex_tcp_connections` exposing the current number of accepted TCP connections. #4099 * [ENHANCEMENT] Querier: Allow federated queries to run concurrently. #4065 * [ENHANCEMENT] Label Values API call now supports `match[]` parameter when querying blocks on storage (assuming `-querier.query-store-for-labels-enabled` is enabled). #4133 @@ -607,7 +607,7 @@ * Prevent compaction loop in TSDB on data gap. * [ENHANCEMENT] Query-Frontend now returns server side performance metrics using `Server-Timing` header when query stats is enabled. #3685 * [ENHANCEMENT] Runtime Config: Add a `mode` query parameter for the runtime config endpoint. `/runtime_config?mode=diff` now shows the YAML runtime configuration with all values that differ from the defaults. #3700 -* [ENHANCEMENT] Distributor: Enable downstream projects to wrap distributor push function and access the deserialized write requests berfore/after they are pushed. #3755 +* [ENHANCEMENT] Distributor: Enable downstream projects to wrap distributor push function and access the deserialized write requests before/after they are pushed. #3755 * [ENHANCEMENT] Add flag `-.tls-server-name` to require a specific server name instead of the hostname on the certificate. #3156 * [ENHANCEMENT] Alertmanager: Remove a tenant's alertmanager instead of pausing it as we determine it is no longer needed. #3722 * [ENHANCEMENT] Blocks storage: added more configuration options to S3 client. #3775 @@ -895,7 +895,7 @@ Note the blocks storage compactor runs a migration task at startup in this versi * [ENHANCEMENT] Chunks GCS object storage client uses the `fields` selector to limit the payload size when listing objects in the bucket. #3218 #3292 * [ENHANCEMENT] Added shuffle sharding support to ruler. Added new metric `cortex_ruler_sync_rules_total`. #3235 * [ENHANCEMENT] Return an explicit error when the store-gateway is explicitly requested without a blocks storage engine. #3287 -* [ENHANCEMENT] Ruler: only load rules that belong to the ruler. Improves rules synching performances when ruler sharding is enabled. #3269 +* [ENHANCEMENT] Ruler: only load rules that belong to the ruler. Improves rules syncing performances when ruler sharding is enabled. #3269 * [ENHANCEMENT] Added `-.redis.tls-insecure-skip-verify` flag. #3298 * [ENHANCEMENT] Added `cortex_alertmanager_config_last_reload_successful_seconds` metric to show timestamp of last successful AM config reload. #3289 * [ENHANCEMENT] Blocks storage: reduced number of bucket listing operations to list block content (applies to newly created blocks only). #3363 @@ -1453,7 +1453,7 @@ This is the first major release of Cortex. We made a lot of **breaking changes** * `-flusher.concurrent-flushes` for number of concurrent flushes. * `-flusher.flush-op-timeout` is duration after which a flush should timeout. * [FEATURE] Ingesters can now have an optional availability zone set, to ensure metric replication is distributed across zones. This is set via the `-ingester.availability-zone` flag or the `availability_zone` field in the config file. #2317 -* [ENHANCEMENT] Better re-use of connections to DynamoDB and S3. #2268 +* [ENHANCEMENT] Better reuse of connections to DynamoDB and S3. #2268 * [ENHANCEMENT] Reduce number of goroutines used while executing a single index query. #2280 * [ENHANCEMENT] Experimental TSDB: Add support for local `filesystem` backend. #2245 * [ENHANCEMENT] Experimental TSDB: Added memcached support for the TSDB index cache. 
#2290 @@ -2243,7 +2243,7 @@ migrate -path /cmd/cortex/migrations -database postgre ## 0.4.0 / 2019-12-02 -* [CHANGE] The frontend component has been refactored to be easier to re-use. When upgrading the frontend, cache entries will be discarded and re-created with the new protobuf schema. #1734 +* [CHANGE] The frontend component has been refactored to be easier to reuse. When upgrading the frontend, cache entries will be discarded and re-created with the new protobuf schema. #1734 * [CHANGE] Removed direct DB/API access from the ruler. `-ruler.configs.url` has been now deprecated. #1579 * [CHANGE] Removed `Delta` encoding. Any old chunks with `Delta` encoding cannot be read anymore. If `ingester.chunk-encoding` is set to `Delta` the ingester will fail to start. #1706 * [CHANGE] Setting `-ingester.max-transfer-retries` to 0 now disables hand-over when ingester is shutting down. Previously, zero meant infinite number of attempts. #1771 diff --git a/RELEASE.md b/RELEASE.md index 0acb6c8fcb..e1bd6f4491 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -85,7 +85,7 @@ To prepare release branch, first create new release branch (release-X.Y) in Cort * `[BUGFIX]` - Run `./tools/release/check-changelog.sh LAST-RELEASE-TAG...master` and add any missing PR which includes user-facing changes -Once your PR with release prepartion is approved, merge it to "release-X.Y" branch, and continue with publishing. +Once your PR with release preparation is approved, merge it to "release-X.Y" branch, and continue with publishing. ### Publish a release candidate @@ -127,7 +127,7 @@ To publish a stable release: 1. Open a PR to add the new version to the backward compatibility integration test (`integration/backward_compatibility_test.go`) ### Sign the release artifacts and generate SBOM -1. Make sure you have the release brnach checked out, and you don't have any local modifications +1. Make sure you have the release branch checked out, and you don't have any local modifications 1. Create and `cd` to an empty directory not within the project directory 1. Run `mkdir sbom` 1. Generate SBOMs using https://github.com/kubernetes-sigs/bom diff --git a/build-image/Dockerfile b/build-image/Dockerfile index 0611832fd6..95cbe04c18 100644 --- a/build-image/Dockerfile +++ b/build-image/Dockerfile @@ -7,7 +7,7 @@ RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - RUN apt-get install -y nodejs && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* # Install website builder dependencies. Whenever you change these version, please also change website/package.json -# and viceversa. +# and vice versa. RUN npm install -g postcss-cli@7.1.2 autoprefixer@9.8.5 ENV SHFMT_VERSION=3.2.4 diff --git a/docs/blocks-storage/migrate-from-chunks-to-blocks.md b/docs/blocks-storage/migrate-from-chunks-to-blocks.md index 3ebc122d06..6fe05cd0cc 100644 --- a/docs/blocks-storage/migrate-from-chunks-to-blocks.md +++ b/docs/blocks-storage/migrate-from-chunks-to-blocks.md @@ -104,7 +104,7 @@ Where parameters are: After starting new pod in `ingester-new` statefulset, script then triggers `/shutdown` endpoint on the old ingester. When the flushing on the old ingester is complete, scale down of statefulset continues, and process repeats. -_The script supports both migration from chunks to blocks, and viceversa (eg. rollback)._ +_The script supports both migration from chunks to blocks, and vice versa (eg. 
rollback)._ ### Known issues diff --git a/docs/blocks-storage/querier.md b/docs/blocks-storage/querier.md index aa7e138a50..942c4d3c11 100644 --- a/docs/blocks-storage/querier.md +++ b/docs/blocks-storage/querier.md @@ -522,11 +522,11 @@ blocks_storage: # CLI flag: -blocks-storage.bucket-store.max-inflight-requests [max_inflight_requests: | default = 0] - # Maximum number of concurrent tenants synching blocks. + # Maximum number of concurrent tenants syncing blocks. # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency [tenant_sync_concurrency: | default = 10] - # Maximum number of concurrent blocks synching per tenant. + # Maximum number of concurrent blocks syncing per tenant. # CLI flag: -blocks-storage.bucket-store.block-sync-concurrency [block_sync_concurrency: | default = 20] diff --git a/docs/blocks-storage/store-gateway.md b/docs/blocks-storage/store-gateway.md index 2d1d89bf3e..9ca137199a 100644 --- a/docs/blocks-storage/store-gateway.md +++ b/docs/blocks-storage/store-gateway.md @@ -633,11 +633,11 @@ blocks_storage: # CLI flag: -blocks-storage.bucket-store.max-inflight-requests [max_inflight_requests: | default = 0] - # Maximum number of concurrent tenants synching blocks. + # Maximum number of concurrent tenants syncing blocks. # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency [tenant_sync_concurrency: | default = 10] - # Maximum number of concurrent blocks synching per tenant. + # Maximum number of concurrent blocks syncing per tenant. # CLI flag: -blocks-storage.bucket-store.block-sync-concurrency [block_sync_concurrency: | default = 20] diff --git a/docs/configuration/arguments.md b/docs/configuration/arguments.md index 6d04b65737..ed073ec62d 100644 --- a/docs/configuration/arguments.md +++ b/docs/configuration/arguments.md @@ -510,7 +510,7 @@ If you are using a managed memcached service from [Google Cloud](https://cloud.g ## Logging of IP of reverse proxy -If a reverse proxy is used in front of Cortex it might be diffult to troubleshoot errors. The following 3 settings can be used to log the IP address passed along by the reverse proxy in headers like X-Forwarded-For. +If a reverse proxy is used in front of Cortex it might be difficult to troubleshoot errors. The following 3 settings can be used to log the IP address passed along by the reverse proxy in headers like X-Forwarded-For. - `-server.log_source_ips_enabled` diff --git a/docs/configuration/config-file-reference.md b/docs/configuration/config-file-reference.md index f3d7e5f414..82c106bec9 100644 --- a/docs/configuration/config-file-reference.md +++ b/docs/configuration/config-file-reference.md @@ -1067,11 +1067,11 @@ bucket_store: # CLI flag: -blocks-storage.bucket-store.max-inflight-requests [max_inflight_requests: | default = 0] - # Maximum number of concurrent tenants synching blocks. + # Maximum number of concurrent tenants syncing blocks. # CLI flag: -blocks-storage.bucket-store.tenant-sync-concurrency [tenant_sync_concurrency: | default = 10] - # Maximum number of concurrent blocks synching per tenant. + # Maximum number of concurrent blocks syncing per tenant. 
# CLI flag: -blocks-storage.bucket-store.block-sync-concurrency [block_sync_concurrency: | default = 20] diff --git a/docs/guides/gossip-ring-getting-started.md b/docs/guides/gossip-ring-getting-started.md index b15c9fc373..3fe2797359 100644 --- a/docs/guides/gossip-ring-getting-started.md +++ b/docs/guides/gossip-ring-getting-started.md @@ -10,11 +10,11 @@ but it can also build its own KV store on top of memberlist library using a goss This short guide shows how to start Cortex in [single-binary mode](../architecture.md) with memberlist-based ring. To reduce number of required dependencies in this guide, it will use [blocks storage](../blocks-storage/_index.md) with no shipping to external stores. -Storage engine and external storage configuration are not dependant on the ring configuration. +Storage engine and external storage configuration are not dependent on the ring configuration. ## Single-binary, two Cortex instances -For simplicity and to get started, we'll run it as a two instances of Cortex on local computer. +For simplicity and to get started, we'll run it as two instances of Cortex on local computer. We will use prepared configuration files ([file 1](../../configuration/single-process-config-blocks-gossip-1.yaml), [file 2](../../configuration/single-process-config-blocks-gossip-2.yaml)), with no external dependencies. diff --git a/docs/guides/tls.md b/docs/guides/tls.md index 9975ed325d..752edf6877 100644 --- a/docs/guides/tls.md +++ b/docs/guides/tls.md @@ -38,7 +38,7 @@ openssl x509 -req -in server.csr -CA root.crt -CAkey root.key -CAcreateserial -o Note that the above script generates certificates that are valid for 100000 days. This can be changed by adjusting the `-days` option in the above commands. -It is recommended that the certs be replaced atleast once every 2 years. +It is recommended that the certs be replaced at least once every 2 years. The above script generates keys `client.key, server.key` and certs `client.crt, server.crt` for both the client and server. The CA cert is diff --git a/docs/proposals/api_design.md b/docs/proposals/api_design.md index 92f8b0cfa4..2c666d4438 100644 --- a/docs/proposals/api_design.md +++ b/docs/proposals/api_design.md @@ -71,7 +71,7 @@ Cortex will utilize path based versioning similar to both Prometheus and Alertma The new API endpoints and the current http prefix endpoints can be maintained concurrently. The flag to configure these endpoints will be maintained as `http.prefix`. This will allow us to roll out the new API without disrupting the current routing schema. The original http prefix endpoints can maintained indefinitely or be phased out over time. Deprecation warnings can be added to the current API either when initialized or utilized. This can be accomplished by injecting a middleware that logs a warning whenever a legacy API endpoint is used. -In cases where Cortex is run as a single binary, the Alertmanager module will only be accesible using the new API. +In cases where Cortex is run as a single binary, the Alertmanager module will only be accessible using the new API. 
### Implementation diff --git a/docs/proposals/parallel-compaction.md b/docs/proposals/parallel-compaction.md index ee20dbdd4f..a821b75b9a 100644 --- a/docs/proposals/parallel-compaction.md +++ b/docs/proposals/parallel-compaction.md @@ -11,7 +11,7 @@ slug: parallel-compaction --- ## Introduction -As a part of pushing Cortex’s scaling capability at AWS, we have done performance testing with Cortex and found the compactor to be one of the main limiting factors for higher active timeseries limit per tenant. The documentation [Compactor](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) describes the responsibilities of a compactor, and this proposal focuses on the limitations of the current compactor architecture. In the current architecture, compactor has simple sharding, meaning that a single tenant is sharded to a single compactor. The compactor generates compaction groups, which are groups of Prometheus TSDB blocks that can be compacted together, independently of another group. However, a compactor currnetly handles compaction groups of a single tenant iteratively, meaning that blocks belonging non-overlapping times are not compacted in parallel. +As a part of pushing Cortex’s scaling capability at AWS, we have done performance testing with Cortex and found the compactor to be one of the main limiting factors for higher active timeseries limit per tenant. The documentation [Compactor](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) describes the responsibilities of a compactor, and this proposal focuses on the limitations of the current compactor architecture. In the current architecture, compactor has simple sharding, meaning that a single tenant is sharded to a single compactor. The compactor generates compaction groups, which are groups of Prometheus TSDB blocks that can be compacted together, independently of another group. However, a compactor currently handles compaction groups of a single tenant iteratively, meaning that blocks belonging to non-overlapping times are not compacted in parallel. Cortex ingesters are responsible for uploading TSDB blocks with data emitted by a tenant. These blocks are considered as level-1 blocks, as they contain duplicate timeseries for the same time interval, depending on the replication factor. [Vertical compaction](https://cortexmetrics.io/docs/blocks-storage/compactor/#how-compaction-works) is done to merge all the blocks with the same time interval and deduplicate the samples. These merged blocks are level-2 blocks. Subsequent compactions such as horizontal compaction can happen, further increasing the compaction level of the blocks. diff --git a/docs/proposals/ring-multikey.md b/docs/proposals/ring-multikey.md index 3945eb901c..16661d975d 100644 --- a/docs/proposals/ring-multikey.md +++ b/docs/proposals/ring-multikey.md @@ -112,7 +112,7 @@ type MultiKey interface { ``` * SplitById - responsible to split the codec in multiple keys and interface. -* JoinIds - responsible to receive multiple keys and interface creating the codec objec +* JoinIds - responsible to receive multiple keys and interface creating the codec object * GetChildFactory - Allow the kv store to know how to serialize and deserialize the interface returned by “SplitById”. The interface returned by SplitById need to be a proto.Message * FindDifference - optimization used to know what need to be updated or deleted from a codec. 
This avoids updating all keys every diff --git a/integration/e2e/service.go b/integration/e2e/service.go index 9094232bc7..50fc0b8301 100644 --- a/integration/e2e/service.go +++ b/integration/e2e/service.go @@ -523,9 +523,9 @@ func (s *HTTPService) Metrics() (_ string, err error) { localPort := s.networkPortsContainerToLocal[s.httpPort] // Fetch metrics. - // We use IPv4 ports from Dokcer for e2e tests, so lt's use 127.0.0.1 to force IPv4; localhost may map to IPv6. - // It's possible that same port number map to IPv4 for serviceA and IPv6 for servieB, so using localhost makes - // tests flaky because you connect to serviceB while you want to connec to serviceA. + // We use IPv4 ports from Docker for e2e tests, so let's use 127.0.0.1 to force IPv4; localhost may map to IPv6. + // It's possible that same port number map to IPv4 for serviceA and IPv6 for serviceB, so using localhost makes + // tests flaky because you connect to serviceB while you want to connect to serviceA. res, err := GetRequest(fmt.Sprintf("http://127.0.0.1:%d/metrics", localPort)) if err != nil { return "", err diff --git a/pkg/alertmanager/alertmanager.go b/pkg/alertmanager/alertmanager.go index d67f26eefd..b9c0b35f28 100644 --- a/pkg/alertmanager/alertmanager.go +++ b/pkg/alertmanager/alertmanager.go @@ -111,7 +111,7 @@ type Alertmanager struct { lastPipeline notify.Stage // The Dispatcher is the only component we need to recreate when we call ApplyConfig. - // Given its metrics don't have any variable labels we need to re-use the same metrics. + // Given its metrics don't have any variable labels we need to reuse the same metrics. dispatcherMetrics *dispatch.DispatcherMetrics // This needs to be set to the hash of the config. All the hashes need to be same // for deduping of alerts to work, hence we need this metric. See https://github.com/prometheus/alertmanager/issues/596 diff --git a/pkg/alertmanager/merger/v2_silence_id_test.go b/pkg/alertmanager/merger/v2_silence_id_test.go index 6968e278ba..6ba7a118a1 100644 --- a/pkg/alertmanager/merger/v2_silence_id_test.go +++ b/pkg/alertmanager/merger/v2_silence_id_test.go @@ -8,7 +8,7 @@ import ( func TestV2SilenceId_ReturnsNewestSilence(t *testing.T) { - // We re-use MergeV2Silences so we rely on that being primarily tested elsewhere. + // We reuse MergeV2Silences so we rely on that being primarily tested elsewhere. in := [][]byte{ []byte(`{"id":"77b580dd-1d9c-4b7e-9bba-13ac173cb4e5","status":{"state":"expired"},` + diff --git a/pkg/alertmanager/merger/v2_silences.go b/pkg/alertmanager/merger/v2_silences.go index f268e0b983..c3adc94625 100644 --- a/pkg/alertmanager/merger/v2_silences.go +++ b/pkg/alertmanager/merger/v2_silences.go @@ -58,7 +58,7 @@ func mergeV2Silences(in v2_models.GettableSilences) (v2_models.GettableSilences, result = append(result, silence) } - // Re-use Alertmanager sorting for silences. + // Reuse Alertmanager sorting for silences. v2.SortSilences(result) return result, nil diff --git a/pkg/chunk/cache/cache.go b/pkg/chunk/cache/cache.go index ba2c2d175e..5d03bd5279 100644 --- a/pkg/chunk/cache/cache.go +++ b/pkg/chunk/cache/cache.go @@ -15,7 +15,7 @@ import ( // NB we intentionally do not return errors in this interface - caching is best // effort by definition. We found that when these methods did return errors, // the caller would just log them - so its easier for implementation to do that. 
-// Whatsmore, we found partially successful Fetchs were often treated as failed +// Whatsmore, we found partially successful Fetches were often treated as failed // when they returned an error. type Cache interface { Store(ctx context.Context, key []string, buf [][]byte) diff --git a/pkg/chunk/encoding/chunk.go b/pkg/chunk/encoding/chunk.go index e5eea61985..b6ea02e422 100644 --- a/pkg/chunk/encoding/chunk.go +++ b/pkg/chunk/encoding/chunk.go @@ -38,7 +38,7 @@ type Chunk interface { // The returned Chunk is nil if the sample got appended to the same chunk. Add(sample model.SamplePair) (Chunk, error) // NewIterator returns an iterator for the chunks. - // The iterator passed as argument is for re-use. Depending on implementation, + // The iterator passed as argument is for reuse. Depending on implementation, // the iterator can be re-used or a new iterator can be allocated. NewIterator(Iterator) Iterator Marshal(io.Writer) error diff --git a/pkg/compactor/blocks_cleaner_test.go b/pkg/compactor/blocks_cleaner_test.go index d7546286e6..98f9565fd1 100644 --- a/pkg/compactor/blocks_cleaner_test.go +++ b/pkg/compactor/blocks_cleaner_test.go @@ -115,7 +115,7 @@ func testBlocksCleanerWithOptions(t *testing.T, options testBlocksCleanerOptions // If the markers migration is enabled, then we create the fixture blocks without // writing the deletion marks in the global location, because they will be migrated - // at statup. + // at startup. if !options.markersMigrationEnabled { bucketClient = bucketindex.BucketWithGlobalMarkers(bucketClient) } diff --git a/pkg/compactor/shuffle_sharding_grouper.go b/pkg/compactor/shuffle_sharding_grouper.go index 2d4dc748cb..892ab05398 100644 --- a/pkg/compactor/shuffle_sharding_grouper.go +++ b/pkg/compactor/shuffle_sharding_grouper.go @@ -175,7 +175,7 @@ func (g *ShuffleShardingGrouper) Groups(blocks map[ulid.ULID]*metadata.Meta) (re } // Ensure groups are sorted by smallest range, oldest min time first. The rationale - // is that we wanna favor smaller ranges first (ie. to deduplicate samples sooner + // is that we want to favor smaller ranges first (ie. to deduplicate samples sooner // than later) and older ones are more likely to be "complete" (no missing block still // to be uploaded). sort.SliceStable(groups, func(i, j int) bool { @@ -484,7 +484,7 @@ func groupBlocksByRange(blocks []*metadata.Meta, tr int64) []blocksGroup { group.rangeStart = getRangeStart(m, tr) group.rangeEnd = group.rangeStart + tr - // Skip blocks that don't fall into the range. This can happen via mis-alignment or + // Skip blocks that don't fall into the range. This can happen via misalignment or // by being the multiple of the intended range. if m.MaxTime > group.rangeEnd { i++ diff --git a/pkg/compactor/shuffle_sharding_planner.go b/pkg/compactor/shuffle_sharding_planner.go index 5da27b0bec..16ea32d0fe 100644 --- a/pkg/compactor/shuffle_sharding_planner.go +++ b/pkg/compactor/shuffle_sharding_planner.go @@ -56,7 +56,7 @@ func (p *ShuffleShardingPlanner) Plan(_ context.Context, metasByMinTime []*metad // Ensure all blocks fits within the largest range. This is a double check // to ensure there's no bug in the previous blocks grouping, given this Plan() // is just a pass-through. 
- // Modifed from https://github.com/cortexproject/cortex/pull/2616/files#diff-e3051fc530c48bb276ba958dd8fadc684e546bd7964e6bc75cef9a86ef8df344R28-R63 + // Modified from https://github.com/cortexproject/cortex/pull/2616/files#diff-e3051fc530c48bb276ba958dd8fadc684e546bd7964e6bc75cef9a86ef8df344R28-R63 largestRange := p.ranges[len(p.ranges)-1] rangeStart := getRangeStart(metasByMinTime[0], largestRange) rangeEnd := rangeStart + largestRange diff --git a/pkg/ha/ha_tracker.go b/pkg/ha/ha_tracker.go index ceccd7f1fe..247b4e3564 100644 --- a/pkg/ha/ha_tracker.go +++ b/pkg/ha/ha_tracker.go @@ -380,7 +380,7 @@ func (c *HATracker) cleanupOldReplicas(ctx context.Context, deadline time.Time) } // CheckReplica checks the cluster and replica against the backing KVStore and local cache in the -// tracker c to see if we should accept the incomming sample. It will return an error if the sample +// tracker c to see if we should accept the incoming sample. It will return an error if the sample // should not be accepted. Note that internally this function does checks against the stored values // and may modify the stored data, for example to failover between replicas after a certain period of time. // ReplicasNotMatchError is returned (from checkKVStore) if we shouldn't store this sample but are diff --git a/pkg/ha/ha_tracker.pb.go b/pkg/ha/ha_tracker.pb.go index 1f43e4c541..3599e3e3a0 100644 --- a/pkg/ha/ha_tracker.pb.go +++ b/pkg/ha/ha_tracker.pb.go @@ -28,7 +28,7 @@ const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package type ReplicaDesc struct { Replica string `protobuf:"bytes,1,opt,name=replica,proto3" json:"replica,omitempty"` ReceivedAt int64 `protobuf:"varint,2,opt,name=received_at,json=receivedAt,proto3" json:"received_at,omitempty"` - // Unix timestamp in millseconds when this entry was marked for deletion. + // Unix timestamp in milliseconds when this entry was marked for deletion. // Reason for doing marking first, and delete later, is to make sure that distributors // watching the prefix will receive notification on "marking" -- at which point they can // already remove entry from memory. Actual deletion from KV store does *not* trigger diff --git a/pkg/ha/ha_tracker.proto b/pkg/ha/ha_tracker.proto index 9977c75fe2..c0266ddc07 100644 --- a/pkg/ha/ha_tracker.proto +++ b/pkg/ha/ha_tracker.proto @@ -11,7 +11,7 @@ message ReplicaDesc { string replica = 1; int64 received_at = 2; - // Unix timestamp in millseconds when this entry was marked for deletion. + // Unix timestamp in milliseconds when this entry was marked for deletion. // Reason for doing marking first, and delete later, is to make sure that distributors // watching the prefix will receive notification on "marking" -- at which point they can // already remove entry from memory. Actual deletion from KV store does *not* trigger diff --git a/pkg/ingester/ingester.go b/pkg/ingester/ingester.go index a706ab1796..f2f7cdf5dc 100644 --- a/pkg/ingester/ingester.go +++ b/pkg/ingester/ingester.go @@ -679,7 +679,7 @@ func New(cfg Config, limits *validation.Overrides, registerer prometheus.Registe // - Does not start the lifecycler. // // this is a special version of ingester used by Flusher. This ingester is not ingesting anything, its only purpose is to react -// on Flush method and flush all openened TSDBs when called. +// on Flush method and flush all opened TSDBs when called. 
func NewForFlusher(cfg Config, limits *validation.Overrides, registerer prometheus.Registerer, logger log.Logger) (*Ingester, error) { bucketClient, err := bucket.NewClient(context.Background(), cfg.BlocksStorageConfig.Bucket, "ingester", logger, registerer) if err != nil { @@ -2455,7 +2455,7 @@ func (i *Ingester) closeAndDeleteUserTSDBIfIdle(userID string) tsdbCloseCheckRes return tsdbNotActive } - // If TSDB is fully closed, we will set state to 'closed', which will prevent this defered closing -> active transition. + // If TSDB is fully closed, we will set state to 'closed', which will prevent this deferred closing -> active transition. defer userDB.casState(closing, active) // Make sure we don't ignore any possible inflight pushes. diff --git a/pkg/ingester/user_state_test.go b/pkg/ingester/user_state_test.go index c3aae474da..071aa5733f 100644 --- a/pkg/ingester/user_state_test.go +++ b/pkg/ingester/user_state_test.go @@ -75,7 +75,7 @@ func TestMetricCounter(t *testing.T) { overrides, err := validation.NewOverrides(limits, nil) require.NoError(t, err) - // We're testing code that's not dependant on sharding strategy, replication factor, etc. To simplify the test, + // We're testing code that's not dependent on sharding strategy, replication factor, etc. To simplify the test, // we use local limit only. limiter := NewLimiter(overrides, nil, util.ShardingStrategyDefault, true, 3, false, "") mc := newMetricCounter(limiter, ignored) diff --git a/pkg/querier/blocks_finder_bucket_scan_test.go b/pkg/querier/blocks_finder_bucket_scan_test.go index b502c8eed2..73932cb633 100644 --- a/pkg/querier/blocks_finder_bucket_scan_test.go +++ b/pkg/querier/blocks_finder_bucket_scan_test.go @@ -520,6 +520,6 @@ func prepareBucketScanBlocksFinderConfig() BucketScanBlocksFinderConfig { TenantsConcurrency: 10, MetasConcurrency: 10, IgnoreDeletionMarksDelay: time.Hour, - IgnoreBlocksWithin: 10 * time.Hour, // All blocks created in the last 10 hour shoudn't be scanned. + IgnoreBlocksWithin: 10 * time.Hour, // All blocks created in the last 10 hour shouldn't be scanned. } } diff --git a/pkg/querier/distributor_queryable_test.go b/pkg/querier/distributor_queryable_test.go index 451502167e..739c37f760 100644 --- a/pkg/querier/distributor_queryable_test.go +++ b/pkg/querier/distributor_queryable_test.go @@ -198,7 +198,7 @@ func TestDistributorQueryableFilter(t *testing.T) { func TestIngesterStreaming(t *testing.T) { t.Parallel() - // We need to make sure that there is atleast one chunk present, + // We need to make sure that there is at least one chunk present, // else no series will be selected. promChunk, err := encoding.NewForEncoding(encoding.PrometheusXorChunk) require.NoError(t, err) @@ -379,7 +379,7 @@ func TestDistributorQuerier_LabelNames(t *testing.T) { } func convertToChunks(t *testing.T, samples []cortexpb.Sample) []client.Chunk { - // We need to make sure that there is atleast one chunk present, + // We need to make sure that there is at least one chunk present, // else no series will be selected. promChunk, err := encoding.NewForEncoding(encoding.PrometheusXorChunk) require.NoError(t, err) diff --git a/pkg/querier/querier.go b/pkg/querier/querier.go index bcb0bd321f..0027507bea 100644 --- a/pkg/querier/querier.go +++ b/pkg/querier/querier.go @@ -122,7 +122,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) { // Validate the config func (cfg *Config) Validate() error { - // Ensure the config wont create a situation where no queriers are returned. 
+ // Ensure the config won't create a situation where no queriers are returned. if cfg.QueryIngestersWithin != 0 && cfg.QueryStoreAfter != 0 { if cfg.QueryStoreAfter >= cfg.QueryIngestersWithin { return errBadLookbackConfigs @@ -214,7 +214,7 @@ func New(cfg Config, limits *validation.Overrides, distributor Distributor, stor } // NewSampleAndChunkQueryable creates a SampleAndChunkQueryable from a -// Queryable with a ChunkQueryable stub, that errors once it get's called. +// Queryable with a ChunkQueryable stub, that errors once it gets called. func NewSampleAndChunkQueryable(q storage.Queryable) storage.SampleAndChunkQueryable { return &sampleAndChunkQueryable{q} } diff --git a/pkg/querier/querier_test.go b/pkg/querier/querier_test.go index bc7f43c550..002569bff5 100644 --- a/pkg/querier/querier_test.go +++ b/pkg/querier/querier_test.go @@ -274,8 +274,8 @@ func TestShouldSortSeriesIfQueryingMultipleQueryables(t *testing.T) { t.Run(tc.name+fmt.Sprintf(", thanos engine: %s", strconv.FormatBool(thanosEngine)), func(t *testing.T) { wDistributorQueriable := &wrappedSampleAndChunkQueryable{QueryableWithFilter: tc.distributorQueryable} var wQueriables []QueryableWithFilter - for _, queriable := range tc.storeQueriables { - wQueriables = append(wQueriables, &wrappedSampleAndChunkQueryable{QueryableWithFilter: queriable}) + for _, queryable := range tc.storeQueriables { + wQueriables = append(wQueriables, &wrappedSampleAndChunkQueryable{QueryableWithFilter: queryable}) } queryable := NewQueryable(wDistributorQueriable, wQueriables, batch.NewChunkMergeIterator, cfg, overrides) opts := promql.EngineOpts{ @@ -480,8 +480,8 @@ func TestLimits(t *testing.T) { t.Run(tc.name+fmt.Sprintf(", Test: %d", i), func(t *testing.T) { wDistributorQueriable := &wrappedSampleAndChunkQueryable{QueryableWithFilter: tc.distributorQueryable} var wQueriables []QueryableWithFilter - for _, queriable := range tc.storeQueriables { - wQueriables = append(wQueriables, &wrappedSampleAndChunkQueryable{QueryableWithFilter: queriable}) + for _, queryable := range tc.storeQueriables { + wQueriables = append(wQueriables, &wrappedSampleAndChunkQueryable{QueryableWithFilter: queryable}) } overrides, err := validation.NewOverrides(DefaultLimitsConfig(), tc.tenantLimit) require.NoError(t, err) diff --git a/pkg/querier/tripperware/query.go b/pkg/querier/tripperware/query.go index f893d20b66..725f3dcd1e 100644 --- a/pkg/querier/tripperware/query.go +++ b/pkg/querier/tripperware/query.go @@ -217,7 +217,7 @@ func BodyBuffer(res *http.Response, logger log.Logger) ([]byte, error) { } } - // if the response is gziped, lets unzip it here + // if the response is gzipped, lets unzip it here if strings.EqualFold(res.Header.Get("Content-Encoding"), "gzip") { gReader, err := gzip.NewReader(buf) if err != nil { @@ -232,7 +232,7 @@ func BodyBuffer(res *http.Response, logger log.Logger) ([]byte, error) { } func BodyBufferFromHTTPGRPCResponse(res *httpgrpc.HTTPResponse, logger log.Logger) ([]byte, error) { - // if the response is gziped, lets unzip it here + // if the response is gzipped, lets unzip it here headers := http.Header{} for _, h := range res.Headers { headers[h.Key] = h.Values diff --git a/pkg/querier/tripperware/queryrange/results_cache.go b/pkg/querier/tripperware/queryrange/results_cache.go index 21f195b9cc..413bc09abf 100644 --- a/pkg/querier/tripperware/queryrange/results_cache.go +++ b/pkg/querier/tripperware/queryrange/results_cache.go @@ -289,7 +289,7 @@ func (s resultsCache) isAtModifierCachable(ctx context.Context, r tripperware.Re 
expr, err := parser.ParseExpr(query) if err != nil { // We are being pessimistic in such cases. - level.Warn(util_log.WithContext(ctx, s.logger)).Log("msg", "failed to parse query, considering @ modifier as not cachable", "query", query, "err", err) + level.Warn(util_log.WithContext(ctx, s.logger)).Log("msg", "failed to parse query, considering @ modifier as not cacheable", "query", query, "err", err) return false } @@ -333,7 +333,7 @@ func (s resultsCache) isOffsetCachable(ctx context.Context, r tripperware.Reques } expr, err := parser.ParseExpr(query) if err != nil { - level.Warn(util_log.WithContext(ctx, s.logger)).Log("msg", "failed to parse query, considering offset as not cachable", "query", query, "err", err) + level.Warn(util_log.WithContext(ctx, s.logger)).Log("msg", "failed to parse query, considering offset as not cacheable", "query", query, "err", err) return false } diff --git a/pkg/querier/tripperware/queryrange/results_cache_test.go b/pkg/querier/tripperware/queryrange/results_cache_test.go index 44fc2fe04d..fcc6796d6a 100644 --- a/pkg/querier/tripperware/queryrange/results_cache_test.go +++ b/pkg/querier/tripperware/queryrange/results_cache_test.go @@ -893,7 +893,7 @@ func TestHandleHit(t *testing.T) { { name: "Should not throw error if complete-overlapped smaller Extent is erroneous", input: &PrometheusRequest{ - // This request is carefully crated such that cachedEntry is not used to fulfill + // This request is carefully created such that cachedEntry is not used to fulfill // the request. Start: 160, End: 180, diff --git a/pkg/ring/kv/memberlist/memberlist_client.go b/pkg/ring/kv/memberlist/memberlist_client.go index 6b1e1744d1..59a828e48f 100644 --- a/pkg/ring/kv/memberlist/memberlist_client.go +++ b/pkg/ring/kv/memberlist/memberlist_client.go @@ -579,7 +579,7 @@ func (m *KV) joinMembersOnStarting(members []string) error { return err } -// Provides a dns-based member disovery to join a memberlist cluster w/o knowning members' addresses upfront. +// Provides a dns-based member discovery to join a memberlist cluster w/o knowing members' addresses upfront. func (m *KV) discoverMembers(ctx context.Context, members []string) []string { if len(members) == 0 { return nil } diff --git a/pkg/ring/kv/memberlist/memberlist_client_test.go b/pkg/ring/kv/memberlist/memberlist_client_test.go index 39c4836f80..e4feb6ce3f 100644 --- a/pkg/ring/kv/memberlist/memberlist_client_test.go +++ b/pkg/ring/kv/memberlist/memberlist_client_test.go @@ -1221,7 +1221,7 @@ func TestSendingOldTombstoneShouldNotForwardMessage(t *testing.T) { for _, tc := range []struct { name string - valueBeforeSend *data // value in KV store before sending messsage + valueBeforeSend *data // value in KV store before sending message msgToSend *data valueAfterSend *data // value in KV store after sending message broadcastMessage *data // broadcasted change, if not nil diff --git a/pkg/ring/kv/memberlist/mergeable.go b/pkg/ring/kv/memberlist/mergeable.go index f2120c1d92..b8970a6434 100644 --- a/pkg/ring/kv/memberlist/mergeable.go +++ b/pkg/ring/kv/memberlist/mergeable.go @@ -14,7 +14,7 @@ type Mergeable interface { // Merge with other value in place. Returns change, that can be sent to other clients. // If merge doesn't result in any change, returns nil. // Error can be returned if merging with given 'other' value is not possible. 
- // Implementors of this method are permitted to modify the other parameter, as the + // Implementers of this method are permitted to modify the other parameter, as the // memberlist-based KV store will not use the same "other" parameter in multiple Merge calls. // // In order for state merging to work correctly, Merge function must have some properties. When talking about the diff --git a/pkg/ring/lifecycler.go b/pkg/ring/lifecycler.go index ae77027279..daf19ff1eb 100644 --- a/pkg/ring/lifecycler.go +++ b/pkg/ring/lifecycler.go @@ -753,7 +753,7 @@ func (i *Lifecycler) autoJoin(ctx context.Context, targetState InstanceState) er i.setState(targetState) // At this point, we should not have any tokens, and we should be in PENDING state. - // Need to make sure we didnt change the num of tokens configured + // Need to make sure we didn't change the num of tokens configured myTokens, takenTokens := ringDesc.TokensFor(i.ID) needTokens := i.cfg.NumTokens - len(myTokens) diff --git a/pkg/ruler/compat.go b/pkg/ruler/compat.go index 98aa758647..6ae52986f5 100644 --- a/pkg/ruler/compat.go +++ b/pkg/ruler/compat.go @@ -260,7 +260,7 @@ func RecordAndReportRuleQueryMetrics(qf rules.QueryFunc, queryTime prometheus.Co } } -// This interface mimicks rules.Manager API. Interface is used to simplify tests. +// This interface mimics rules.Manager API. Interface is used to simplify tests. type RulesManager interface { // Starts rules manager. Blocks until Stop is called. Run() diff --git a/pkg/ruler/mapper.go b/pkg/ruler/mapper.go index c3b715b7e1..fb14daa5a8 100644 --- a/pkg/ruler/mapper.go +++ b/pkg/ruler/mapper.go @@ -13,7 +13,7 @@ import ( "gopkg.in/yaml.v3" ) -// mapper is designed to enusre the provided rule sets are identical +// mapper is designed to ensure the provided rule sets are identical // to the on-disk rules tracked by the prometheus manager type mapper struct { Path string // Path specifies the directory in which rule files will be mapped. 
diff --git a/pkg/ruler/rulestore/store.go b/pkg/ruler/rulestore/store.go index b247ebe281..e59557cb9e 100644 --- a/pkg/ruler/rulestore/store.go +++ b/pkg/ruler/rulestore/store.go @@ -10,7 +10,7 @@ import ( var ( // ErrGroupNotFound is returned if a rule group does not exist ErrGroupNotFound = errors.New("group does not exist") - // ErrAccessDenied is returned access denied error was returned when trying to laod the group + // ErrAccessDenied is returned if an access denied error was returned when trying to load the group ErrAccessDenied = errors.New("access denied") // ErrGroupNamespaceNotFound is returned if a namespace does not exist ErrGroupNamespaceNotFound = errors.New("group namespace does not exist") diff --git a/pkg/storage/bucket/client_mock.go b/pkg/storage/bucket/client_mock.go index c5eebbd3f4..e503a027e7 100644 --- a/pkg/storage/bucket/client_mock.go +++ b/pkg/storage/bucket/client_mock.go @@ -131,7 +131,7 @@ func (m *ClientMock) MockGet(name, content string, err error) { } } -// MockGetRequireUpload is a convenient method to mock Get() return resulst after upload, +// MockGetRequireUpload is a convenient method to mock Get() return results after upload, // otherwise return errObjectDoesNotExist func (m *ClientMock) MockGetRequireUpload(name, content string, err error) { m.uploaded.Store(name, false) diff --git a/pkg/storage/tsdb/config.go b/pkg/storage/tsdb/config.go index ebbd0b2886..cfeb58c1c6 100644 --- a/pkg/storage/tsdb/config.go +++ b/pkg/storage/tsdb/config.go @@ -299,8 +299,8 @@ func (cfg *BucketStoreConfig) RegisterFlags(f *flag.FlagSet) { f.IntVar(&cfg.ChunkPoolMaxBucketSizeBytes, "blocks-storage.bucket-store.chunk-pool-max-bucket-size-bytes", ChunkPoolDefaultMaxBucketSize, "Size - in bytes - of the largest chunks pool bucket.") f.IntVar(&cfg.MaxConcurrent, "blocks-storage.bucket-store.max-concurrent", 100, "Max number of concurrent queries to execute against the long-term storage. The limit is shared across all tenants.") f.IntVar(&cfg.MaxInflightRequests, "blocks-storage.bucket-store.max-inflight-requests", 0, "Max number of inflight queries to execute against the long-term storage. The limit is shared across all tenants. 0 to disable.") - f.IntVar(&cfg.TenantSyncConcurrency, "blocks-storage.bucket-store.tenant-sync-concurrency", 10, "Maximum number of concurrent tenants synching blocks.") - f.IntVar(&cfg.BlockSyncConcurrency, "blocks-storage.bucket-store.block-sync-concurrency", 20, "Maximum number of concurrent blocks synching per tenant.") + f.IntVar(&cfg.TenantSyncConcurrency, "blocks-storage.bucket-store.tenant-sync-concurrency", 10, "Maximum number of concurrent tenants syncing blocks.") + f.IntVar(&cfg.BlockSyncConcurrency, "blocks-storage.bucket-store.block-sync-concurrency", 20, "Maximum number of concurrent blocks syncing per tenant.") f.IntVar(&cfg.MetaSyncConcurrency, "blocks-storage.bucket-store.meta-sync-concurrency", 20, "Number of Go routines to use when syncing block meta files from object storage per tenant.") f.DurationVar(&cfg.ConsistencyDelay, "blocks-storage.bucket-store.consistency-delay", 0, "Minimum age of a block before it's being read. Set it to safe value (e.g 30m) if your object storage is eventually consistent. GCS and S3 are (roughly) strongly consistent.") f.DurationVar(&cfg.IgnoreDeletionMarksDelay, "blocks-storage.bucket-store.ignore-deletion-marks-delay", time.Hour*6, "Duration after which the blocks marked for deletion will be filtered out while fetching blocks. 
"+ diff --git a/pkg/storage/tsdb/inmemory_index_cache.go b/pkg/storage/tsdb/inmemory_index_cache.go index 8afb06464c..6b9cbdbadb 100644 --- a/pkg/storage/tsdb/inmemory_index_cache.go +++ b/pkg/storage/tsdb/inmemory_index_cache.go @@ -130,7 +130,7 @@ func copyString(s string) string { return string(b) } -// copyToKey is required as underlying strings might be mmaped. +// copyToKey is required as underlying strings might be memory-mapped. func copyToKey(l labels.Label) storecache.CacheKeyPostings { return storecache.CacheKeyPostings(labels.Label{Value: copyString(l.Value), Name: copyString(l.Name)}) } diff --git a/pkg/storegateway/bucket_stores.go b/pkg/storegateway/bucket_stores.go index 318d4c8f39..9050ebe19d 100644 --- a/pkg/storegateway/bucket_stores.go +++ b/pkg/storegateway/bucket_stores.go @@ -546,7 +546,7 @@ func (u *BucketStores) getOrCreateStore(userID string) (*store.BucketStore, erro filters) } else { // Wrap the bucket reader to skip iterating the bucket at all if the user doesn't - // belong to the store-gateway shard. We need to run the BucketStore synching anyway + // belong to the store-gateway shard. We need to run the BucketStore syncing anyway // in order to unload previous tenants in case of a resharding leading to tenants // moving out from the store-gateway shard and also make sure both MetaFetcher and // BucketStore metrics are correctly updated. diff --git a/pkg/util/fakeauth/fake_auth.go b/pkg/util/fakeauth/fake_auth.go index ee850e8045..92207983dc 100644 --- a/pkg/util/fakeauth/fake_auth.go +++ b/pkg/util/fakeauth/fake_auth.go @@ -1,4 +1,4 @@ -// Package fakeauth provides middlewares thats injects a fake userID, so the rest of the code +// Package fakeauth provides middlewares that injects a fake userID, so the rest of the code // can continue to be multitenant. package fakeauth diff --git a/pkg/util/modules/module_service.go b/pkg/util/modules/module_service.go index ac18cdcd4b..f4ab7188e3 100644 --- a/pkg/util/modules/module_service.go +++ b/pkg/util/modules/module_service.go @@ -15,7 +15,7 @@ import ( var ErrStopProcess = errors.New("stop process") // moduleService is a Service implementation that adds waiting for dependencies to start before starting, -// and dependant modules to stop before stopping this module service. +// and dependent modules to stop before stopping this module service. type moduleService struct { services.Service @@ -59,7 +59,7 @@ func (w *moduleService) start(serviceContext context.Context) error { } } - // we don't want to let this service to stop until all dependant services are stopped, + // we don't want to let this service to stop until all dependent services are stopped, // so we use independent context here level.Info(w.logger).Log("msg", "initialising", "module", w.name) err := w.service.StartAsync(context.Background()) diff --git a/pkg/util/modules/module_service_wrapper.go b/pkg/util/modules/module_service_wrapper.go index ef61abb278..f86edf7dff 100644 --- a/pkg/util/modules/module_service_wrapper.go +++ b/pkg/util/modules/module_service_wrapper.go @@ -7,7 +7,7 @@ import ( ) // This function wraps module service, and adds waiting for dependencies to start before starting, -// and dependant modules to stop before stopping this module service. +// and dependent modules to stop before stopping this module service. 
func newModuleServiceWrapper(serviceMap map[string]services.Service, mod string, logger log.Logger, modServ services.Service, startDeps []string, stopDeps []string) services.Service { getDeps := func(deps []string) map[string]services.Service { r := map[string]services.Service{} diff --git a/pkg/util/services/basic_service_test.go b/pkg/util/services/basic_service_test.go index cdd22b5d21..0856376a5d 100644 --- a/pkg/util/services/basic_service_test.go +++ b/pkg/util/services/basic_service_test.go @@ -116,7 +116,7 @@ func TestAllFunctionality(t *testing.T) { listenerLog: []string{"starting", "failed: Starting: start failed"}, }, - "start is canceled via context and returns cancelation error": { + "start is canceled via context and returns cancellation error": { cancelAfterStartAsync: true, startReturnContextErr: true, awaitRunningError: []error{invalidServiceStateWithFailureError(Failed, Running, context.Canceled)}, @@ -152,7 +152,7 @@ func TestAllFunctionality(t *testing.T) { listenerLog: []string{"starting", "running", "stopping: Running", "failed: Stopping: runFn failed"}, }, - "runFn returns error from context cancelation": { + "runFn returns error from context cancellation": { runReturnContextErr: true, cancelAfterAwaitRunning: true, awaitTerminatedError: invalidServiceStateWithFailureError(Failed, Terminated, context.Canceled), // service will get into Failed state, since run failed diff --git a/pkg/util/validation/limits.go b/pkg/util/validation/limits.go index 54174cc6be..b2922490ec 100644 --- a/pkg/util/validation/limits.go +++ b/pkg/util/validation/limits.go @@ -269,7 +269,7 @@ func (l *Limits) UnmarshalYAML(unmarshal func(interface{}) error) error { // To make unmarshal fill the plain data struct rather than calling UnmarshalYAML // again, we have to hide it using a type indirection. See prometheus/config. - // During startup we wont have a default value so we don't want to overwrite them + // During startup we won't have a default value so we don't want to overwrite them if defaultLimits != nil { *l = *defaultLimits // Make copy of default limits. Otherwise unmarshalling would modify map in default limits. @@ -833,7 +833,7 @@ func SmallestPositiveIntPerTenant(tenantIDs []string, f func(string) int) int { // SmallestPositiveNonZeroFloat64PerTenant is returning the minimal positive and // non-zero value of the supplied limit function for all given tenants. In many -// limits a value of 0 means unlimted so the method will return 0 only if all +// limits a value of 0 means unlimited so the method will return 0 only if all // inputs have a limit of 0 or an empty tenant list is given. func SmallestPositiveNonZeroFloat64PerTenant(tenantIDs []string, f func(string) float64) float64 { var result *float64 @@ -851,7 +851,7 @@ func SmallestPositiveNonZeroFloat64PerTenant(tenantIDs []string, f func(string) // SmallestPositiveNonZeroDurationPerTenant is returning the minimal positive // and non-zero value of the supplied limit function for all given tenants. In -// many limits a value of 0 means unlimted so the method will return 0 only if +// many limits a value of 0 means unlimited so the method will return 0 only if // all inputs have a limit of 0 or an empty tenant list is given. 
func SmallestPositiveNonZeroDurationPerTenant(tenantIDs []string, f func(string) time.Duration) time.Duration { var result *time.Duration diff --git a/pkg/util/validation/limits_test.go b/pkg/util/validation/limits_test.go index e3b8c6d3ff..bc212540d3 100644 --- a/pkg/util/validation/limits_test.go +++ b/pkg/util/validation/limits_test.go @@ -369,7 +369,7 @@ alertmanager_notification_rate_limit_per_integration: expectedBurstSize: 222, }, - "infinte limit": { + "infinite limit": { inputYAML: ` alertmanager_notification_rate_limit_per_integration: email: .inf