diff --git a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-identity/identity-server.md b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-identity/identity-server.md index d97f15790fbf3..4b60c1be092cc 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-identity/identity-server.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-identity/identity-server.md @@ -50,7 +50,7 @@ You can add it to your applications using its NuGet packages. The main package i ## Configuration -IdentityServer supports different kinds of protocols and social authentication providers that can be configured as part of each custom installation. This is typically done in the ASP.NET Core application's `Program` class (or in the `Startup` class in the `ConfigureServices` method). The configuration involves specifying the supported protocols and the paths to the servers and endpoints that will be used. Figure 11-2 shows an example configuration taken from the IdentityServer Quickstart UI project: +IdentityServer supports different kinds of protocols and social authentication providers that can be configured as part of each custom installation. This is typically done in the ASP.NET Core application's `Program` class (or in the `Startup` class in the `ConfigureServices` method). The configuration involves specifying the supported protocols and the paths to the servers and endpoints that will be used. 
The following code shows an example configuration taken from the IdentityServer Quickstart UI project: ```csharp public class Startup diff --git a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/cloud-infrastructure-resiliency-azure.md b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/cloud-infrastructure-resiliency-azure.md index 7b44fed857411..5977d60e54cca 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/cloud-infrastructure-resiliency-azure.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/cloud-infrastructure-resiliency-azure.md @@ -45,7 +45,7 @@ To architect redundancy, you need to identify the critical paths in your applica - **Plan for multiregion deployment.** If you deploy your application to a single region, and that region becomes unavailable, your application will also become unavailable. This may be unacceptable under the terms of your application's service level agreements. If so, consider deploying your application and its services across multiple regions. For example, an Azure Kubernetes Service (AKS) cluster is deployed to a single region. To protect your system from a regional failure, you might deploy your application to multiple AKS clusters across different regions and use the [Paired Regions](/azure/virtual-machines/regions#region-pairs) feature to coordinate platform updates and prioritize recovery efforts. -- **Enable [geo-replication](/azure/sql-database/sql-database-active-geo-replication).** Geo-replication for services such as Azure SQL Database and Cosmos DB will create secondary replicas of your data across multiple regions. While both services automatically replicate data within the same region, geo-replication protects you against a regional outage by enabling you to fail over to a secondary region. 
Another best practice for geo-replication centers around storing container images. To deploy a service in AKS, you need to store and pull the image from a repository. Azure Container Registry integrates with AKS and can securely store container images. To improve performance and availability, consider geo-replicating your images to a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in its region as shown in Figure 6-4: +- **Enable [geo-replication](/azure/sql-database/sql-database-active-geo-replication).** Geo-replication for services such as Azure SQL Database and Cosmos DB will create secondary replicas of your data across multiple regions. While both services automatically replicate data within the same region, geo-replication protects you against a regional outage by enabling you to fail over to a secondary region. Another best practice for geo-replication centers around storing container images. To deploy a service in AKS, you need to store and pull the image from a repository. Azure Container Registry integrates with AKS and can securely store container images. To improve performance and availability, consider geo-replicating your images to a registry in each region where you have an AKS cluster. Each AKS cluster then pulls container images from the local container registry in its region as shown in Figure 9-4: :::image type="content" source="media/replicated-resources.png" border="false" alt-text="A diagram showing replicated resources across multiple regions."::: @@ -61,11 +61,11 @@ The cloud thrives on scaling. The ability to increase or decrease system resourc - **Partition workloads.** Decomposing domains into independent, self-contained microservices enables each service to scale independently of others. Typically, services will have different scalability needs and requirements. 
Partitioning enables you to scale only what needs to be scaled without the unnecessary cost of scaling an entire application. -- **Favor scale-out.** Cloud-based applications favor scaling out resources as opposed to scaling up. Scaling out (also known as horizontal scaling) involves adding more service resources to an existing system to meet and share a desired level of performance. Scaling up (also known as vertical scaling) involves replacing existing resources with more powerful hardware (more disk, memory, and processing cores). Scaling out can be invoked automatically with the autoscaling features available in some Azure cloud resources. Scaling out across multiple resources also adds redundancy to the overall system. Finally scaling up a single resource is typically more expensive than scaling out across many smaller resources. Figure 6-6 shows the two approaches: +- **Favor scale-out.** Cloud-based applications favor scaling out resources as opposed to scaling up. Scaling out (also known as horizontal scaling) involves adding more service resources to an existing system to meet and share a desired level of performance. Scaling up (also known as vertical scaling) involves replacing existing resources with more powerful hardware (more disk, memory, and processing cores). Scaling out can be invoked automatically with the autoscaling features available in some Azure cloud resources. Scaling out across multiple resources also adds redundancy to the overall system. Finally, scaling up a single resource is typically more expensive than scaling out across many smaller resources. Figure 9-5 shows the two approaches: :::image type="content" source="media/scale-up-scale-out.png" alt-text="A diagram showing the differences between scale up (vertical scaling) versus scale out (horizontal scaling)." border="false"::: - **Figure 9-6**. Scale up versus scale out + **Figure 9-5**. 
Scale up versus scale out - **Scale proportionally.** When scaling a service, think in terms of *resource sets*. If you were to scale out a specific service dramatically, what impact would that have on back-end data stores, caches, and dependent services? Some resources such as Cosmos DB can scale out proportionally, while many others can't. You want to ensure that you don't scale out a resource to a point where it will exhaust other associated resources. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/resilient-communication.md b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/resilient-communication.md index f8b70e4950b93..ee041d710ac0b 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/resilient-communication.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/cloud-native-resiliency/resilient-communication.md @@ -30,7 +30,7 @@ A better approach is an evolving technology entitled *Service Mesh*. A [service :::image type="content" source="media/service-mesh-with-side-car.png" alt-text="A diagram showing a service mesh using sidecars." border="false"::: -**Figure 9-7**. Service mesh with a sidecar +**Figure 9-6**. Service mesh with a sidecar In the previous figure, note how the proxy intercepts and manages communication among the microservices and the cluster. @@ -38,7 +38,7 @@ A service mesh is logically split into two disparate components: A [data plane]( :::image type="content" source="media/istio-control-and-data-plane.png" alt-text="A diagram showing a service mesh control and data plane" border="false"::: -**Figure 9-8.** Service mesh control and data plane +**Figure 9-7.** Service mesh control and data plane Once configured, a service mesh is highly functional. It can retrieve a corresponding pool of instances from a service discovery endpoint. 
The mesh can then send a request to a specific instance, recording the latency and response type of the result. A mesh can choose the instance most likely to return a fast response based on many factors, including its observed latency for recent requests. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/azure-caching.md b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/azure-caching.md index 03115e0ce4c94..407d16bb60699 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/azure-caching.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/azure-caching.md @@ -26,11 +26,11 @@ Also consider caching to avoid repetitive computations. If an operation transfor ## Caching architecture -Cloud native applications typically implement a distributed caching architecture. The cache is hosted as a cloud-based backing service, separate from the microservices. Figure 5-15 shows the architecture. +Cloud native applications typically implement a distributed caching architecture. The cache is hosted as a cloud-based backing service, separate from the microservices. Figure 8-14 shows the architecture. ![A diagram showing how a cache is implemented in a cloud-native app.](media/distributed-data.png) -**Figure 5-15**. Caching in a cloud-native app +**Figure 8-14**. Caching in a cloud-native app The previous figure presents a common caching pattern known as the [cache-aside pattern](/azure/architecture/patterns/cache-aside). For an incoming request, you first query the cache (step \#1) for a response. If found, the data is returned immediately. If the data doesn't exist in the cache (known as a [cache miss](https://www.techopedia.com/definition/6308/cache-miss)), it's retrieved from a local database in a downstream service (step \#2). It's then written to the cache for future requests (step \#3), and returned to the caller. 
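The cache-aside flow just described (check the cache, fall back to the downstream store on a miss, then repopulate the cache) can be sketched with ASP.NET Core's `IDistributedCache`. This is a minimal illustration only: the `CatalogItem` type and `ICatalogRepository` interface are hypothetical stand-ins for your own data access code, and the five-minute expiration is an arbitrary example value.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class CachedCatalogService
{
    private readonly IDistributedCache _cache;
    private readonly ICatalogRepository _repository; // hypothetical data access abstraction

    public CachedCatalogService(IDistributedCache cache, ICatalogRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<CatalogItem?> GetItemAsync(int id)
    {
        string key = $"catalog:item:{id}";

        // Step 1: query the cache first.
        string? cached = await _cache.GetStringAsync(key);
        if (cached is not null)
        {
            return JsonSerializer.Deserialize<CatalogItem>(cached);
        }

        // Step 2: cache miss - read from the downstream data store.
        CatalogItem? item = await _repository.GetByIdAsync(id);
        if (item is not null)
        {
            // Step 3: write to the cache for future requests, with an
            // expiration so stale entries are eventually evicted.
            await _cache.SetStringAsync(
                key,
                JsonSerializer.Serialize(item),
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
                });
        }

        return item;
    }
}
```

The same `IDistributedCache` interface works against an in-memory cache for development and a distributed backing service, such as a Redis instance, in production.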
Care must be taken to periodically evict cached data so that the system remains timely and consistent. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/data-driven-crud-microservice.md b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/data-driven-crud-microservice.md index 606ac45ee06ce..5d7121fb45d8c 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/data-driven-crud-microservice.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/data-driven-crud-microservice.md @@ -16,13 +16,13 @@ From a design point of view, this type of containerized microservice is very sim ![Diagram showing a simple CRUD microservice internal design pattern.](media/internal-design-simple-crud-microservices.png) -**Figure 6-4**. Internal design for simple CRUD microservices +**Figure 8-15**. Internal design for simple CRUD microservices An example of this kind of simple data-driven service is the catalog microservice from the eShop Reference Architecture sample application. This type of service implements all its functionality in a single ASP.NET Core Web API project that includes classes for its data model, its business logic, and its data access code. It also stores its related data in a database running in SQL Server (as another container for dev/test purposes), but could also be any regular SQL Server host: ![Diagram showing a data-driven/CRUD microservice container.](media/simple-data-driven-crud-microservice.png) -**Figure 6-5**. Simple data-driven/CRUD microservice design +**Figure 8-16**. Simple data-driven/CRUD microservice design This diagram shows the logical Catalog microservice, which includes its Catalog database. The database might or might not be in the same Docker host. Having the database in the same Docker host might be good for development, but not for production. 
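A service like this typically needs little more than an entity model and a `DbContext`. The following sketch is illustrative only: the simplified `CatalogItem` entity and the `CatalogContext` shape are assumed names for this example, not the actual eShop code.

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical, simplified entity for a catalog item; a real model has more fields.
public class CatalogItem
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}

// The DbContext is the microservice's single data-access dependency.
public class CatalogContext : DbContext
{
    public CatalogContext(DbContextOptions<CatalogContext> options)
        : base(options)
    {
    }

    public DbSet<CatalogItem> CatalogItems => Set<CatalogItem>();
}
```

The context would then be registered with something like `builder.Services.AddDbContext<CatalogContext>(...)`, pointing at the SQL Server container for dev/test or a regular SQL Server host in production.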
When you are developing this kind of service, you only need [ASP.NET Core](/aspnet/core/) and a data-access API or ORM like [Entity Framework Core](/ef/core/index). @@ -36,13 +36,13 @@ To implement a simple CRUD microservice using .NET and Visual Studio, you start ![Screenshot of Visual Studio showing the setup of the project.](media/create-asp-net-core-web-api-project.png) -**Figure 6-6**. Creating an ASP.NET Core Web API project in Visual Studio 2019 +**Figure 8-17**. Creating an ASP.NET Core Web API project in Visual Studio 2019 To create an ASP.NET Core Web API Project, first select an ASP.NET Core Web Application and then select the API type. After creating the project, you can implement your MVC controllers as you would in any other Web API project, using the Entity Framework API or other API. In a new Web API project, you can see that the only dependency you have in that microservice is on ASP.NET Core itself. ![Screenshot of VS showing the NuGet dependencies of Catalog.Api](media/simple-crud-web-api-microservice-dependencies.png) -**Figure 6-7**. Dependencies in a simple CRUD Web API microservice +**Figure 8-18**. Dependencies in a simple CRUD Web API microservice The API project includes a reference to the Microsoft.AspNetCore.App NuGet package, which includes references to all essential packages. It could include some other packages as well. @@ -247,7 +247,7 @@ This means you can complement your API with a nice discovery UI to help develope ![Screenshot of Swagger API Explorer displaying eShopOnContainers API.](media/swagger-metadata-eshoponcontainers-catalog-microservice.png) -**Figure 6-8**. Swashbuckle API Explorer based on Swagger metadata—eShopOnContainers catalog microservice +**Figure 8-19**. Swashbuckle API Explorer based on Swagger metadata—eShopOnContainers catalog microservice The Swashbuckle generated Swagger UI API documentation includes all published actions. The API explorer is not the most important thing here. 
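For reference, wiring Swashbuckle into an ASP.NET Core project usually takes only a few lines. This sketch assumes the `Swashbuckle.AspNetCore` package and the minimal hosting model; the title string is illustrative, not the eShop value.

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new Microsoft.OpenApi.Models.OpenApiInfo
    {
        Title = "Catalog API", // illustrative title
        Version = "v1"
    });
});

var app = builder.Build();

// Expose the generated OpenAPI document and the interactive explorer UI.
app.UseSwagger();
app.UseSwaggerUI();

app.MapControllers();
app.Run();
```

With this in place, the generated OpenAPI document is what tools such as swagger-codegen consume to produce client libraries and server stubs.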
You can use tools like [swagger-codegen](https://github.com/swagger-api/swagger-codegen) which allow code generation of API client libraries, server stubs, and documentation automatically. @@ -285,11 +285,11 @@ Once this is done, you can start your application and browse the following Swagg http:///swagger/ ``` -You previously saw the generated UI created by Swashbuckle for a URL like `http:///swagger`. In Figure 6-9, you can also see how you can test any API method. +You previously saw the generated UI created by Swashbuckle for a URL like `http:///swagger`. In Figure 8-20, you can also see how you can test any API method. ![Screenshot of Swagger UI showing available testing tools.](media/swashbuckle-ui-testing.png) -**Figure 6-9**. Swashbuckle UI testing the Catalog/Items API method +**Figure 8-20**. Swashbuckle UI testing the Catalog/Items API method The Swagger UI API detail shows a sample of the response and can be used to execute the real API, which is great for developer discovery. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/distributed-data.md b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/distributed-data.md index cfea683e0f9f7..b49533a0f9bb8 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/distributed-data.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/distributed-data.md @@ -10,15 +10,15 @@ ms.date: 10/23/2024 A cloud-native approach changes the way you design, deploy, and manage applications. It also changes the way you manage and store data. -Figure 5-1 contrasts the differences. +Figure 8-1 contrasts the differences. ![A diagram showing data storage in cloud-native applications.](media/distributed-data.png) -**Figure 5-1**. Data management in cloud-native applications +**Figure 8-1**. 
Data management in cloud-native applications -The left side of figure 5-1 shows a *monolithic application*, business service components collocate together in a shared services tier, sharing data from a single relational database. +The left side of Figure 8-1 shows a *monolithic application*, where business service components are collocated in a shared services tier, sharing data from a single relational database. -Designing for cloud-native, we take a different approach. On the right-side of Figure 5-1, note how business functionality segregates into small, independent [microservices](/azure/architecture/guide/architecture-styles/microservices). Each microservice encapsulates a specific business capability and its own data. This is a *database per microservice* design. +Designing for cloud-native, we take a different approach. On the right side of Figure 8-1, note how business functionality segregates into small, independent [microservices](/azure/architecture/guide/architecture-styles/microservices). Each microservice encapsulates a specific business capability and its own data. This is a *database per microservice* design. ## Why use a database per microservice? @@ -31,11 +31,11 @@ This database per microservice provides many benefits, especially for systems th Segregating data also enables each microservice to implement the data store type that is best optimized for its workload, storage needs, and read/write patterns. Choices include relational, document, key-value, and even graph-based data stores. -Figure 5-2 presents the principle of polyglot persistence in a cloud-native system. +Figure 8-2 presents the principle of polyglot persistence in a cloud-native system. ![A diagram showing polyglot data persistence.](media/polyglot-data-persistence.png) -**Figure 5-2**. Polyglot data persistence +**Figure 8-2**. 
Polyglot data persistence Note that in this figure, each microservice supports a different type of data store: @@ -45,19 +45,19 @@ Note that in this figure, each microservice supports a different type of data st ## Cross-service queries -While microservices are independent and focus on specific functional capabilities, like inventory, shipping, or ordering, they frequently require integration with other microservices. Often the integration involves one microservice *querying* another for data. Figure 5-3 shows the scenario. +While microservices are independent and focus on specific functional capabilities, like inventory, shipping, or ordering, they frequently require integration with other microservices. Often the integration involves one microservice *querying* another for data. Figure 8-3 shows the scenario. ![A diagram showing querying across microservices.](media/cross-service-query.png) -**Figure 5-3**. Querying across microservices +**Figure 8-3**. Querying across microservices The figure shows a shopping basket microservice that adds an item to a user's shopping basket. While the data store for this microservice contains basket and line item data, it doesn't maintain product or pricing data. Instead, those data items are owned by the catalog and pricing microservices. This arrangement presents a problem. How can the shopping basket microservice add a product to the user's shopping basket when it has neither product nor pricing data in its database? -To deal with this we can use the [Materialized View pattern](/azure/architecture/patterns/materialized-view), shown in Figure 5-4. +To deal with this we can use the [Materialized View pattern](/azure/architecture/patterns/materialized-view), shown in Figure 8-4. ![A diagram showing materialized view pattern.](media/materialized-view-pattern.png) -**Figure 5-4**. Materialized View pattern +**Figure 8-4**. 
Materialized View pattern With this pattern, you place a local data table, known as a *read model*, in the shopping basket service. This table contains a denormalized copy of the data needed from the product and pricing microservices. Copying the data directly into the shopping basket microservice eliminates the need for expensive cross-service calls. With the data local to the service, you improve the service's response time and reliability. Additionally, having its own copy of the data makes the shopping basket service more resilient. If the catalog service becomes unavailable, it doesn't directly impact the shopping basket service. @@ -65,11 +65,11 @@ With this pattern, you place a local data table, known as a *read model*, in the In cloud-native applications, you must manage distributed transactions programmatically. You move from a world of *immediate consistency* to that of *eventual consistency*. -Figure 5-5 shows the problem. +Figure 8-5 shows the problem. ![A diagram showing transaction in saga pattern.](media/saga-transaction-operation.png) -**Figure 5-5**. Implementing a transaction across microservices +**Figure 8-5**. Implementing a transaction across microservices In the preceding figure, five independent microservices participate in a distributed transaction that creates an order. Each microservice maintains its own data store and implements a local transaction for its store. To create the order, the local transaction for *each* individual microservice must succeed, or *all* must abort and roll back the operation. While built-in transactional support is available inside each of the microservices, there's no support for a distributed transaction that would span across all five services to keep data consistent. A popular pattern for adding distributed transactional support is the [Saga patt ![A diagram showing roll back in saga pattern.](media/saga-rollback-operation.png) -**Figure 5-6**. Rolling back a transaction +**Figure 8-6**. 
Rolling back a transaction In this figure, the *Update Inventory* operation has failed in the Inventory microservice. The Saga pattern invokes a set of compensating transactions (in red) to adjust the inventory counts, cancel the payment and the order, and return the data for each microservice back to a consistent state. @@ -91,11 +91,11 @@ Another approach to optimizing high volume data scenarios involves [Event Sourci A system typically stores the current state of a data entity. In high volume systems, however, overhead from transactional locking and frequent update operations can impact database performance, responsiveness, and limit scalability. -Event Sourcing takes a different approach to capturing data. Each operation that affects data is persisted to an event store. Instead of updating the state of a data record, we append each change to a sequential list of past events - similar to an accountant's ledger. The event store becomes the system of record for the data. It's used to propagate various materialized views within the bounded context of a microservice. Figure 5.8 shows the pattern. +Event Sourcing takes a different approach to capturing data. Each operation that affects data is persisted to an event store. Instead of updating the state of a data record, we append each change to a sequential list of past events - similar to an accountant's ledger. The event store becomes the system of record for the data. It's used to propagate various materialized views within the bounded context of a microservice. Figure 8-7 shows the pattern. ![A diagram showing event sourcing.](media/event-sourcing.png) -**Figure 5-8**. Event Sourcing +**Figure 8-7**. Event Sourcing In the previous figure, note how each entry (in blue) for a user's shopping cart is appended to an underlying event store. In the adjoining materialized view, the system projects the current state by replaying all the events associated with each shopping cart. 
This view, or read model, is then exposed back to the UI. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/relational-vs-nosql-data.md b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/relational-vs-nosql-data.md index 9e2c245fad1c5..eec850bdd5faf 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/relational-vs-nosql-data.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/data-patterns/relational-vs-nosql-data.md @@ -1,10 +1,10 @@ --- -title: Relational versus NoSQL data -description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Relational versus NoSQL data +title: Relational and NoSQL data +description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Relational and NoSQL data ms.date: 10/23/2024 --- -# Relational versus NoSQL data +# Relational and NoSQL data [!INCLUDE [download-alert](../includes/download-alert.md)] @@ -18,7 +18,7 @@ NoSQL databases include several different models for accessing and managing data ![A diagram showing NoSQL data models.](media/types-of-nosql-datastores.png) -**Figure 5-9**: Data models for NoSQL databases +**Figure 8-8**: Data models for NoSQL databases | Model | Characteristics | | :-------- | :-------- | As a way to understand the differences between these types of databases, conside ![A diagram showing CAP theorem.](media/cap-theorem.png) -**Figure 5-10**. The CAP theorem +**Figure 8-9**. The CAP theorem The theorem states that a database in a distributed data system can only guarantee *two* of the following three properties: @@ -74,11 +74,11 @@ They can be configured across cloud availability zones and regions to achieve hi ## Azure relational databases -For cloud-native microservices that require relational data, Azure offers four managed relational databases as a service (DBaaS) offerings, shown in Figure 5-11. 
+For cloud-native microservices that require relational data, Azure offers four managed relational database-as-a-service (DBaaS) offerings, shown in Figure 8-10. ![A diagram showing managed relational databases in Azure.](media/azure-managed-databases.png) -**Figure 5-11**. Managed relational databases available in Azure +**Figure 8-10**. Managed relational databases available in Azure The features shown in the figure are especially important to organizations that provision large numbers of databases, but have limited resources to administer them. You can provision an Azure database in minutes by selecting the number of processing cores, memory, and underlying storage. You can scale the database on-the-fly and dynamically adjust resources with little to no downtime. @@ -115,11 +115,11 @@ You can easily self-host any open-source database on an Azure VM. But this means Cosmos DB is a fully managed, globally distributed NoSQL database service in the Azure cloud. It has been adopted by many large companies across the world, including Coca-Cola, Skype, ExxonMobil, and Liberty Mutual. -If your services require fast response from anywhere in the world, high availability, or elastic scalability, Cosmos DB is a great choice. Figure 5-12 shows Cosmos DB. +If your services require fast response from anywhere in the world, high availability, or elastic scalability, Cosmos DB is a great choice. Figure 8-11 shows Cosmos DB. ![A diagram showing overview of Cosmos DB.](media/cosmos-db-overview.png) -**Figure 5-12**: Overview of Azure Cosmos DB +**Figure 8-11**: Overview of Azure Cosmos DB The previous figure presents many of the built-in cloud-native capabilities available in Cosmos DB. In this section, we'll take a closer look at them. @@ -160,11 +160,11 @@ Cloud-native services with distributed data rely on replication and must make a Most distributed databases allow developers to choose between two consistency models: strong consistency and eventual consistency. 
**Strong consistency** is the gold standard of data programmability. It guarantees that a query will always return the most current data - even if the system must incur latency waiting for an update to replicate across all database copies. A database configured for **eventual consistency**, by contrast, will return data immediately, even if that data isn't the most current copy. The latter option enables higher availability, greater scale, and increased performance. -Azure Cosmos DB offers five well-defined [consistency models](/azure/cosmos-db/consistency-levels) shown in Figure 5-13. +Azure Cosmos DB offers five well-defined [consistency models](/azure/cosmos-db/consistency-levels) shown in Figure 8-12. ![A diagram showing Cosmos DB consistency graph.](media/cosmos-consistency-level-graph.png) -**Figure 5-13**: Cosmos DB Consistency Levels +**Figure 8-12**: Cosmos DB Consistency Levels | Consistency Level | Description | | :-------- | :-------- | @@ -182,11 +182,11 @@ Containers live in a Cosmos DB database and represent a schema-agnostic grouping Don't get Cosmos DB containers confused with the virtualization containers we've discussed elsewhere in this book. They are data storage entities in a database, not a code execution environment. -To partition the container, items are divided into distinct subsets called logical partitions. Logical partitions are populated based on the value of a partition key that is associated with each item in a container. 
Figure 8-13 shows two containers each with a logical partition based on a partition key value: ![A diagram showing Cosmos DB partitioning mechanics.](media/cosmos-db-partitioning.png) -**Figure 5-14**: Cosmos DB partitioning mechanics +**Figure 8-13**: Cosmos DB partitioning mechanics ## Using databases in a .NET Aspire app diff --git a/docs/architecture/distributed-cloud-native-apps-containers/deploying-distributed-apps/development-vs-production.md b/docs/architecture/distributed-cloud-native-apps-containers/deploying-distributed-apps/development-vs-production.md index b8379ea2dc5b5..ea5f1247c2a22 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/deploying-distributed-apps/development-vs-production.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/deploying-distributed-apps/development-vs-production.md @@ -1,10 +1,10 @@ --- -title: Development versus production and what .NET Aspire can do for you -description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Development versus production and what .NET Aspire can do for you +title: Development and production +description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Development and production ms.date: 10/23/2024 --- -# Development versus production and what .NET Aspire can do for you +# Development and production [!INCLUDE [download-alert](../includes/download-alert.md)] diff --git a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/background-tasks-with-ihostedservice.md b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/background-tasks-with-ihostedservice.md index 51daf1fd336f0..5f4fd68f60ff4 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/background-tasks-with-ihostedservice.md +++ 
b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/background-tasks-with-ihostedservice.md @@ -1,9 +1,9 @@ ---title: Implement background tasks in microservices with IHostedService and the BackgroundService class +title: Implementing background tasks in microservices description: .NET Microservices Architecture for Containerized .NET Applications | Understand the new ways to use IHostedService and BackgroundService to implement background tasks in microservices. ms.date: 10/23/2024 --- -# Implement background tasks in microservices with IHostedService and the BackgroundService class +# Implementing background tasks in microservices [!INCLUDE [download-alert](../includes/download-alert.md)] @@ -13,11 +13,11 @@ Background tasks and scheduled jobs are something you might need to use in any a From a generic point of view, in .NET we call these types of tasks **hosted services**, because they are services that you host within your application or microservice. Note that in this case, the hosted service simply means a class with the background task logic. -Since .NET Core 2.0, the framework provides a new interface named , which helps you to implement hosted services easily. The basic idea is that you can register multiple background tasks (hosted services) that run in the background while your web host or host is running, as shown in the image 7-26. +Since .NET Core 2.0, the framework provides a new interface named `IHostedService`, which helps you to implement hosted services easily. The basic idea is that you can register multiple background tasks (hosted services) that run in the background while your web host or host is running, as shown in Figure 7-5. ![Diagram comparing ASP.NET Core IWebHost and .NET Core IHost.](./media/ihosted-service-webhost-vs-host.png) -**Figure 7-26**. Using IHostedService in a WebHost or a Host +**Figure 7-5**.
Using IHostedService in a WebHost or a Host ASP.NET Core 1.x and 2.x support `IWebHost` for background processes in web apps. .NET Core 2.1 and later versions support `IHost` for background processes with plain console apps. Note the difference made between `WebHost` and `Host`. @@ -80,7 +80,7 @@ The following image shows a visual summary of the classes and interfaces involve ![Class diagram showing the multiple classes and interfaces related to IHostedService.](./media/class-diagram-custom-ihostedservice.png) -**Figure 7-27**. Class diagram showing the multiple classes and interfaces related to IHostedService +**Figure 7-6**. Class diagram showing the multiple classes and interfaces related to IHostedService `IWebHost` and `IHost` can host many services, which inherit from `BackgroundService`, which implements `IHostedService`. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/integration-event-based-microservice-communications.md b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/integration-event-based-microservice-communications.md index 3aec108495e4e..51fdd9e0f91af 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/integration-event-based-microservice-communications.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/integration-event-based-microservice-communications.md @@ -10,13 +10,13 @@ ms.date: 10/23/2024 When you use [event-based communication](/azure/architecture/guide/architecture-styles/event-driven), a [microservice](/azure/architecture/microservices/) publishes an event when something notable happens, such as when a product is added to a customer's basket. Other microservices subscribe to those events. When a microservice receives an event, it can update its own business entities, which might lead to more events being published. 
This is the essence of the eventual consistency concept. This [publish/subscribe](/azure/architecture/patterns/publisher-subscriber) system is usually performed by using an event bus. You can design the event bus as an interface with the API needed to subscribe and unsubscribe to events and to publish events. It can also have one or more implementations based on any inter-process or messaging communication, such as a messaging queue or a service bus that supports asynchronous communication and a publish/subscribe model. -You can use events to implement business transactions that span multiple services, which give you eventual consistency between those services. An eventually consistent transaction consists of a series of distributed actions. At each action, the microservice updates a business entity and publishes an event that triggers the next action. Be aware that transactions don't span the underlying persistence and event bus, so [idempotence needs to be handled](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing). Figure 7-18 below, shows a `PriceUpdated` event published through an event bus, so the price update is propagated to the Basket and other microservices. +You can use events to implement business transactions that span multiple services, which give you eventual consistency between those services. An eventually consistent transaction consists of a series of distributed actions. At each action, the microservice updates a business entity and publishes an event that triggers the next action. Be aware that transactions don't span the underlying persistence and event bus, so [idempotence needs to be handled](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-data-platform#idempotent-message-processing). 
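One way to sketch the idempotent handling mentioned above is to record each event's identifier before applying it, so a redelivered event is detected and skipped. The `PriceUpdatedHandler` class and its members are hypothetical names for illustration, not part of the sample application.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of idempotent event handling. A duplicate delivery
// of the same integration event is detected by its Id and ignored.
public class PriceUpdatedHandler
{
    private readonly HashSet<Guid> _processedEvents = new();

    // In production, this check would be a transactional insert into the
    // same store that holds the business entity, not an in-memory set.
    public bool Handle(Guid eventId, decimal newPrice)
    {
        if (!_processedEvents.Add(eventId))
        {
            return false; // already handled; skip the duplicate
        }

        // ...update the basket item price here...
        return true;
    }
}
```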
Figure 7-1 below shows a `PriceUpdated` event published through an event bus, so the price update is propagated to the Basket and other microservices. ![Diagram of asynchronous event-driven communication with an event bus.](./media/event-driven-communication.png) -**Figure 7-18**. Event-driven communication based on an event bus +**Figure 7-1**. Event-driven communication based on an event bus -This section describes how you can implement this type of communication with .NET by using a generic event bus interface, as shown in Figure 7-18. There are multiple potential implementations, each using a different technology or infrastructure such as RabbitMQ, Azure Service Bus, or any other third-party open-source or commercial service bus. +This section describes how you can implement this type of communication with .NET by using a generic event bus interface, as shown in Figure 7-1. There are multiple potential implementations, each using a different technology or infrastructure such as RabbitMQ, Azure Service Bus, or any other third-party open-source or commercial service bus. ## Using message brokers and service buses for production systems @@ -147,11 +147,11 @@ There are only a few kinds of libraries you should share across microservices. O ## The event bus -An event bus allows publish/subscribe-style communication between microservices without requiring the components to explicitly be aware of each other, as shown in Figure 7-19. +An event bus allows publish/subscribe-style communication between microservices without requiring the components to explicitly be aware of each other, as shown in Figure 7-2. ![A diagram showing the basic publish/subscribe pattern.](./media/publish-subscribe-basics.png) -**Figure 7-19**. Publish/subscribe basics with an event bus +**Figure 7-2**.
Publish/subscribe basics with an event bus The above diagram shows that microservice A publishes to the event bus, which distributes to subscribing microservices B and C, without the publisher needing to know the subscribers. The event bus is related to the observer pattern and the publish-subscribe pattern. @@ -172,13 +172,13 @@ An event bus is typically composed of two parts: - The abstraction or interface. - One or more implementations. -In Figure 7-19 you can see how, from an application point of view, the event bus is nothing more than a pub/sub channel. The way you implement this asynchronous communication can vary. It can have multiple implementations so that you can swap between them, depending on the environment requirements (for example, production or development environments). +In Figure 7-2 you can see how, from an application point of view, the event bus is nothing more than a pub/sub channel. The way you implement this asynchronous communication can vary. It can have multiple implementations so that you can swap between them, depending on the environment requirements (for example, production or development environments). -In Figure 7-20, you can see an abstraction of an event bus with multiple implementations based on infrastructure messaging technologies like RabbitMQ, Azure Service Bus, or another event/message broker. +In Figure 7-3, you can see an abstraction of an event bus with multiple implementations based on infrastructure messaging technologies like RabbitMQ, Azure Service Bus, or another event/message broker. ![Diagram showing the addition of an event bus abstraction layer.](./media/multiple-implementations-event-bus.png) -**Figure 7- 20.** Multiple implementations of an event bus +**Figure 7-3**. Multiple implementations of an event bus It's good to have the event bus defined through an interface so it can be implemented with several technologies, like RabbitMQ, Azure Service Bus or others.
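A minimal shape for such an abstraction might look like the following. The member names here are illustrative, loosely modeled on the eShopOnContainers sample rather than taken from a shipped library.

```csharp
using System;
using System.Threading.Tasks;

// Base type carried by every message published on the bus.
public record IntegrationEvent(Guid Id, DateTime CreationDate);

// Illustrative event bus abstraction: one publish method plus typed
// subscribe/unsubscribe. Concrete implementations can target RabbitMQ,
// Azure Service Bus, or another broker.
public interface IEventBus
{
    void Publish(IntegrationEvent @event);

    void Subscribe<T, TH>()
        where T : IntegrationEvent
        where TH : IIntegrationEventHandler<T>;

    void Unsubscribe<T, TH>()
        where T : IntegrationEvent
        where TH : IIntegrationEventHandler<T>;
}

// Each subscriber implements this handler for the event types it consumes.
public interface IIntegrationEventHandler<in TIntegrationEvent>
    where TIntegrationEvent : IntegrationEvent
{
    Task Handle(TIntegrationEvent @event);
}
```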
However, as mentioned previously, using your own abstractions is good only if you need basic event bus features. If you need richer service bus features, you should probably use the API and abstractions provided by your preferred commercial service bus instead. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/rabbitmq-event-bus-development-test-environment.md b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/rabbitmq-event-bus-development-test-environment.md index a967005c0d653..77b8a4c8df5b3 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/rabbitmq-event-bus-development-test-environment.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/event-based-communication-patterns/rabbitmq-event-bus-development-test-environment.md @@ -9,11 +9,11 @@ ms.date: 10/23/2024 We should start by saying that if you create your custom event bus based on [RabbitMQ](https://www.rabbitmq.com/) running in a container, it should be used only for your development and test environments. Don't use it for your production environment, unless you're building it as a part of a production-ready service bus as described in the [Additional resources section below](rabbitmq-event-bus-development-test-environment.md#additional-resources). A simple custom event bus might be missing many production-ready critical features that a commercial service bus has. -The event bus implementation with RabbitMQ lets microservices subscribe to events, publish events, and receive events, as shown in Figure 7-21. +The event bus implementation with RabbitMQ lets microservices subscribe to events, publish events, and receive events, as shown in Figure 7-4.
![Diagram showing RabbitMQ between message sender and message receiver.](./media/rabbitmq-implementation.png) -**Figure 7-21.** RabbitMQ implementation of an event bus +**Figure 7-4.** RabbitMQ implementation of an event bus RabbitMQ functions as an intermediary between a message publisher and subscribers, to handle distribution. In the code, the `EventBusRabbitMQ` class implements the generic `IEventBus` interface. This implementation is based on dependency injection so that you can swap from this development and test version to a production version. @@ -25,7 +25,7 @@ public class EventBusRabbitMQ : IEventBus, IDisposable } ``` -The RabbitMQ implementation of a sample dev/test event bus is boilerplate code. It has to handle the connection to the RabbitMQ server and publish a message event to the queues. It also has to implement a collection of integration event handlers for each event type. These event types can have a different instantiation and different subscriptions for each receiver microservice, as shown in Figure 7-21. +The RabbitMQ implementation of a sample dev/test event bus is boilerplate code. It has to handle the connection to the RabbitMQ server and publish a message event to the queues. It also has to implement a collection of integration event handlers for each event type. These event types can have a different instantiation and different subscriptions for each receiver microservice, as shown in Figure 7-4. 
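The connection handling just described might be sketched as follows, assuming the 6.x-style `RabbitMQ.Client` API; the host name, exchange name, and the `RabbitMQConnection` class itself are placeholders for illustration.

```csharp
using System;
using RabbitMQ.Client;

// Sketch of the connection handling a dev/test event bus needs: lazily
// open (or reopen) the broker connection and hand out channels bound to
// a declared exchange.
public sealed class RabbitMQConnection : IDisposable
{
    private readonly ConnectionFactory _factory = new() { HostName = "localhost" };
    private IConnection? _connection;

    public IModel CreateChannel()
    {
        // Reconnect if the connection was never opened or has dropped.
        if (_connection is null || !_connection.IsOpen)
        {
            _connection = _factory.CreateConnection();
        }

        IModel channel = _connection.CreateModel();
        channel.ExchangeDeclare(exchange: "event_bus", type: ExchangeType.Direct);
        return channel;
    }

    public void Dispose() => _connection?.Dispose();
}
```

A production implementation would add retry policies and connection-shutdown event handling, which is exactly the kind of feature a commercial service bus provides out of the box.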
## Implementing a simple publish method with RabbitMQ diff --git a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/container-terminology.md b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/container-terminology.md index a5eddad0ade0c..d97c8c6036acc 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/container-terminology.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/container-terminology.md @@ -14,7 +14,7 @@ To run the app or service, the app's image is instantiated to create a container Developers should store images in a registry, which acts as a library of images and is needed when deploying to production orchestrators. Docker maintains a public registry via [Docker Hub](https://hub.docker.com/); other vendors provide registries for different collections of images, including [Azure Container Registry](https://azure.microsoft.com/services/container-registry/). Alternatively, enterprises can have a private registry on-premises for their own Docker images. -Figure 2-4 shows how images and registries in Docker relate to other components. It also shows the multiple registry offerings from vendors. +Figure 2-5 shows how images and registries in Docker relate to other components. It also shows the multiple registry offerings from vendors. 
![A diagram showing the basic taxonomy in Docker.](media/5-taxonomy-of-docker-terms-and-concepts.png) diff --git a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/official-container-images-tooling.md b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/official-container-images-tooling.md index 46ce782a07c7a..e6967b7a562d5 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/official-container-images-tooling.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/official-container-images-tooling.md @@ -1,6 +1,6 @@ --- -title: Official .NET container images & SDK tooling -description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Official .NET container images & SDK tooling +title: Official .NET container images and SDK tooling +description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | Official .NET container images and SDK tooling ms.date: 10/23/2024 --- diff --git a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/what-is-docker.md b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/what-is-docker.md index 12b5ef22defac..4b47f6a0024a0 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/what-is-docker.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/introduction-containers-docker/what-is-docker.md @@ -12,7 +12,7 @@ ms.date: 10/23/2024 ![Diagram showing the places Docker containers can run.](media/3-docker-containers-run-anywhere.png) -Figure 2-3. Docker deploys containers at all layers of the hybrid cloud. +**Figure 2-2**. Docker deploys containers at all layers of the hybrid cloud. 
Docker containers can run anywhere, on-premises in the customer datacenter, in an external service provider or in the cloud, such as on Azure. Docker containers can run natively on Linux and Windows. However, Windows images can run only on Windows hosts and Linux images can run on Linux hosts and Windows hosts (using a Hyper-V Linux VM), where host means a server or a VM. @@ -36,7 +36,7 @@ Let's examine VMs and containers in more detail to understand their uses. ![Diagram showing the hardware/software stack of a traditional VM.](media/3-virtual-machine-hardware-software.png) -Figure 3-2. Diagram showing the hardware/software stack of a traditional VM. +**Figure 2-3**. Diagram showing the hardware/software stack of a traditional VM. Virtual machines include the application, the required libraries or binaries, and a full guest operating system. Full virtualization requires more resources than containerization. @@ -44,7 +44,7 @@ Virtual machines include the application, the required libraries or binaries, an ![Diagram showing the hardware/software stack for Docker containers.](media/3-docker-containers-run-anywhere.png) -Figure 2-4. Diagram showing the hardware/software stack for Docker containers. +**Figure 2-4**. Diagram showing the hardware/software stack for Docker containers. Containers include the application and all its dependencies. However, they share the OS kernel with other containers, running as isolated processes in user space on the host operating system. (Except in Hyper-V containers, where each container runs inside of a special virtual machine per container.) 
diff --git a/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/candidate-apps-for-cloud-native.md b/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/candidate-apps-for-cloud-native.md index 790c3066334ff..d2e35697f3e3b 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/candidate-apps-for-cloud-native.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/candidate-apps-for-cloud-native.md @@ -27,11 +27,11 @@ Then there are legacy systems. While we'd all like to build new applications, we ## Modernizing legacy apps -The free Microsoft e-book [Modernize existing .NET applications with Azure cloud and Windows Containers](https://dotnet.microsoft.com/download/thank-you/modernizing-existing-net-apps-ebook) provides guidance about migrating on-premises workloads into the cloud. Figure 1-10 shows that there isn't a single, one-size-fits-all strategy for modernizing legacy applications. +The free Microsoft e-book [Modernize existing .NET applications with Azure cloud and Windows Containers](https://dotnet.microsoft.com/download/thank-you/modernizing-existing-net-apps-ebook) provides guidance about migrating on-premises workloads into the cloud. Figure 1-8 shows that there isn't a single, one-size-fits-all strategy for modernizing legacy applications. ![Strategies for migrating legacy workloads](./media/strategies-for-migrating-legacy-workloads.png) -**Figure 1-10**. Strategies for migrating legacy workloads +**Figure 1-8**. Strategies for migrating legacy workloads Monolithic apps that are non-critical might benefit from a quick **lift-and-shift** migration. Here, the on-premises workload is moved to a cloud-based virtual machine (VM), without changes. 
This approach uses the [IaaS (Infrastructure as a Service) model](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-iaas/). Azure includes several tools such as [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/), [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/), and [Azure Database Migration Service](https://azure.microsoft.com/campaigns/database-migration/) to help streamline the move. While this strategy can yield some cost savings, such applications typically weren't designed to unlock and leverage the benefits of cloud computing. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/what-is-cloud-native.md b/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/what-is-cloud-native.md index 98a79824daf24..a805fd33dd9af 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/what-is-cloud-native.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/introduction-to-cloud-native-development/what-is-cloud-native.md @@ -199,11 +199,11 @@ Containerizing a microservice is simple and straightforward. The code, its depen When an application starts or scales, you transform the container image into a running container instance. The instance runs on any computer that has a [container runtime](https://kubernetes.io/docs/setup/production-environment/container-runtimes/) engine installed. You can have as many instances of the containerized service as needed. -Figure 1-6 shows three different microservices, each in its own container, all running on a single host. +Figure 1-4 shows three different microservices, each in its own container, all running on a single host. ![Multiple containers running on a container host](./media/hosting-mulitple-containers.png) -**Figure 1-6**. Multiple containers running on a container host +**Figure 1-4**. 
Multiple containers running on a container host Note how each container maintains its own set of dependencies and runtime, which can be different from one another. Here, we see different versions of the **Product** microservice running on the same host. Each container shares a slice of the underlying host operating system, memory, and processor, but is isolated from the others. @@ -227,11 +227,11 @@ By sharing the underlying operating system and host resources, a container has a While tools such as Docker create images and run containers, you also need tools to manage them. Container management is done with a special software program called a **container orchestrator**. When operating at scale with many independent running containers, orchestration is essential. -Figure 1-7 shows management tasks that container orchestrators automate. +Figure 1-5 shows management tasks that container orchestrators automate. ![What container orchestrators do](./media/what-container-orchestrators-do.png) -**Figure 1-7**. What container orchestrators do +**Figure 1-5**. What container orchestrators do The following table describes common orchestration tasks. @@ -260,11 +260,11 @@ You could host your own instance of Kubernetes, but then you'd be responsible fo Cloud-native systems depend upon many different ancillary resources, such as data stores, message brokers, monitoring, and identity services. These services are known as [backing services](https://12factor.net/backing-services). -Figure 1-8 shows many common backing services that cloud-native systems consume. +Figure 1-6 shows many common backing services that cloud-native systems consume. ![Common backing services](./media/common-backing-services.png) -**Figure 1-8**. Common backing services +**Figure 1-6**. Common backing services You could host your own backing services, but then you'd be responsible for licensing, provisioning, and managing those resources. 
@@ -312,11 +312,11 @@ The [Twelve-Factor Application](https://12factor.net/), discussed earlier, calls Modern CI/CD systems help fulfill this principle. They provide separate build and delivery steps that help ensure consistent and quality code that's readily available to users. -Figure 1-9 shows the separation across the deployment process. +Figure 1-7 shows the separation across the deployment process. ![Deployments Steps in CI/CD Pipeline](./media/build-release-run-pipeline.png) -**Figure 1-9**. Deployment steps in a CI/CD Pipeline +**Figure 1-7**. Deployment steps in a CI/CD Pipeline In the previous figure, pay special attention to separation of tasks: diff --git a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/aspire-dashboard.md b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/aspire-dashboard.md index 8cdde83c8eab1..656dfcdc83152 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/aspire-dashboard.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/aspire-dashboard.md @@ -1,10 +1,10 @@ --- -title: .NET Aspire dashboard -description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | .NET Aspire dashboard +title: The .NET Aspire dashboard +description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | The .NET Aspire dashboard ms.date: 10/23/2024 --- -# .NET Aspire dashboard +# The .NET Aspire dashboard [!INCLUDE [download-alert](../includes/download-alert.md)] @@ -14,7 +14,7 @@ The dashboard is available after you add .NET Aspire to your solution. The dashb ![A screenshot of the .NET Aspire dashboard.](media/aspire-dashboard-projects.png) -**Figure 10-9**. The .NET Aspire dashboard. +**Figure 10-10**. The .NET Aspire dashboard. 
There are five main sections in the dashboard: diff --git a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/azure-monitor.md b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/azure-monitor.md index 5eebc01d48b81..8c561a1aad63d 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/azure-monitor.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/azure-monitor.md @@ -8,11 +8,11 @@ ms.date: 10/23/2024 [!INCLUDE [download-alert](../includes/download-alert.md)] -Azure Monitor is an umbrella name for a collection of cloud tools designed to provide visibility into the state of your system. It helps you understand how your cloud-native services are performing and actively identifies issues affecting them. Figure 10-10 presents a high level of view of Azure Monitor. +Azure Monitor is an umbrella name for a collection of cloud tools designed to provide visibility into the state of your system. It helps you understand how your cloud-native services are performing and actively identifies issues affecting them. Figure 10-11 presents a high-level view of Azure Monitor. ![A diagram of a high-level view of Azure Monitor.](media/azure-monitor.png) -**Figure 10-10**. High-level view of Azure Monitor. +**Figure 10-11**. High-level view of Azure Monitor. ## Gathering logs and metrics @@ -42,7 +42,7 @@ These are the results of the previous Application Insights Query. ![A screenshot of Application Insights query results.](media/application_insights_example.png) -**Figure 10-11**. Application Insights query results drawn as a pie chart. +**Figure 10-12**. Application Insights query results drawn as a pie chart. There is a [playground for experimenting with Kusto](https://dataexplorer.azure.com/clusters/help/databases/Samples) queries. Reading [sample queries](/azure/kusto/query/samples) can also be instructive.
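The telemetry queried this way reaches Application Insights through its SDK. As a minimal sketch of sending a custom event (the connection string, event name, and properties are placeholders, not values from this guide):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Send a custom event to Application Insights; once ingested, it can be
// queried with Kusto in the portal. The connection string is a placeholder.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString =
    "InstrumentationKey=00000000-0000-0000-0000-000000000000";

var telemetry = new TelemetryClient(configuration);
telemetry.TrackEvent("BasketCheckout",
    new Dictionary<string, string> { ["userId"] = "42" });

telemetry.Flush(); // ensure buffered telemetry is sent before exit
```

In an ASP.NET Core service you would normally register the SDK through dependency injection instead of constructing the client by hand.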
@@ -52,13 +52,13 @@ There are several different dashboard technologies that may be used to surface t ![An example screenshot of Application Insights charts embedded in the main Azure Dashboard.](media/azure_dashboard.png) -**Figure 10-12**. An example Application Insights chart embedded in the main Azure Dashboard. +**Figure 10-13**. An example Application Insights chart embedded in the main Azure Dashboard. These charts can then be embedded in the Azure portal proper through use of the dashboard feature. For users with more exacting requirements, such as being able to drill down into several tiers of data, Azure Monitor data is available to [Power BI](https://powerbi.microsoft.com/). Power BI is an industry-leading, enterprise class, business intelligence tool that can aggregate data from many different data sources. ![A screenshot of the Power BI dashboard.](media/powerbidashboard.png) -**Figure 10-13**. An example Power BI dashboard. +**Figure 10-14**. An example Power BI dashboard. ## Alerts diff --git a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/health-checks-probes.md b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/health-checks-probes.md index 0455bda3526be..9f00789b00fee 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/health-checks-probes.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/health-checks-probes.md @@ -110,11 +110,11 @@ When the endpoint `/hc` is invoked, it runs all the health che ### Query your microservices to report about their health status -When you've configured health checks as described in this article and you have the microservice running in Docker, you can directly check from a browser if it's healthy. You have to publish the container port in the Docker host, so you can access the container through the external Docker host IP or through `host.docker.internal`, as shown in figure 10.6. 
+When you've configured health checks as described in this article and you have the microservice running in Docker, you can directly check from a browser if it's healthy. You have to publish the container port in the Docker host, so you can access the container through the external Docker host IP or through `host.docker.internal`, as shown in Figure 10-7. ![Screenshot of the JSON response returned by a health check.](media/health-check-json-response.png) -**Figure 10-6**. Checking health status of a single service from a browser +**Figure 10-7**. Checking health status of a single service from a browser In that test, you can see that the `Catalog.API` microservice (running on port 5101) is healthy, returning HTTP status 200 and status information in JSON. The service also checked the health of its SQL Server database dependency and RabbitMQ, so the health check reported itself as healthy. @@ -130,13 +130,13 @@ Fortunately, you have many options to add such a service. For example if you hav ![Screenshot of the Health Checks UI eShopOnContainers health statuses.](media/health-check-status-ui.png) -**Figure 10-7**. Sample health check report +**Figure 10-8**. Sample health check report With the introduction of .NET Aspire, you now get a built-in dashboard that has many of the same features as the open source package. You'll see more about the dashboard in the next section. ![A screenshot of the .NET Aspire dashboard.](media/aspire-dashboard-projects.png) -**Figure 10-8**. The .NET Aspire dashboard +**Figure 10-9**. The .NET Aspire dashboard In summary, a watchdog service queries each microservice's endpoint. This will execute all the health checks defined within it and return an overall health state depending on all those checks. The `HealthChecksUI` is easy to consume with a few configuration entries and two lines of code that need to be added into the *Startup.cs* of the watchdog service.
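An `/hc` endpoint like the one tested above is typically wired up with the ASP.NET Core health checks middleware. The following is a sketch, assuming the community `AspNetCore.HealthChecks.*` packages for the SQL Server and RabbitMQ checks; the connection strings are placeholders.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register checks for the service's dependencies. AddSqlServer and
// AddRabbitMQ come from the AspNetCore.HealthChecks.SqlServer and
// AspNetCore.HealthChecks.Rabbitmq packages (assumed here).
builder.Services.AddHealthChecks()
    .AddSqlServer(builder.Configuration["ConnectionStrings:CatalogDb"]!)
    .AddRabbitMQ(rabbitConnectionString: "amqp://localhost");

var app = builder.Build();

// Expose the aggregated health report at /hc, as queried from the browser.
app.MapHealthChecks("/hc");

app.Run();
```

When any registered check fails, the endpoint reports the service as unhealthy, which is what a watchdog service or orchestrator probe acts on.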
diff --git a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/observability-patterns.md b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/observability-patterns.md index 1d57d4709fae6..7e010e26f0bbc 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/observability-patterns.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/observability-patterns.md @@ -51,7 +51,7 @@ These different log levels provide granularity in logging. When the application Because of the challenges associated with using file-based logs in cloud-native apps, centralized logs are preferred. Logs are collected by the applications and shipped to a central logging application which indexes and stores the logs. This class of system can ingest tens of gigabytes of logs every day. -It's also helpful to follow some standard practices when building logging that spans many services. For instance, generating a [correlation ID](https://blog.rapid7.com/2016/12/23/the-value-of-correlation-ids/) at the start of a lengthy interaction, and then logging it in each message that is related to that interaction, makes it easier to search for all related messages. Standardization makes reading logs much easier. Figure 7-4 demonstrates how a microservices architecture can leverage centralized logging as part of its workflow. +It's also helpful to follow some standard practices when building logging that spans many services. For instance, generating a [correlation ID](https://blog.rapid7.com/2016/12/23/the-value-of-correlation-ids/) at the start of a lengthy interaction, and then logging it in each message that is related to that interaction, makes it easier to search for all related messages. Standardization makes reading logs much easier. Figure 10-4 demonstrates how a microservices architecture can leverage centralized logging as part of its workflow. 
![A diagram showing logs from various sources are ingested into a centralized log store.](media/centralized-logging.png) diff --git a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/open-telemetry-grafana-prometheus.md b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/open-telemetry-grafana-prometheus.md index 5e466e06fbac2..51ac2047489e9 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/open-telemetry-grafana-prometheus.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/monitoring-health/open-telemetry-grafana-prometheus.md @@ -138,6 +138,8 @@ Then you can use Grafana to create dashboards and view the metrics gathered by P ![A screenshot of a Grafana dashboard.](media/grafana.png) +**Figure 10-6**. The Grafana dashboard + To configure Grafana, you: 1. Add a Grafana container for your app, in the same way as Prometheus. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/grpc.md b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/grpc.md index 5caec3f7c87c4..e4da468ad5581 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/grpc.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/grpc.md @@ -52,17 +52,17 @@ gRPC is integrated into .NET Core 3.0 SDK and later. The following tools support - Visual Studio Code - The `dotnet` CLI -The SDK includes tooling for endpoint routing, built-in IoC, and logging. The open-source Kestrel web server supports HTTP/2 connections. Figure 6-20 shows a Visual Studio 2022 template that scaffolds a skeleton project for a gRPC service. Note how .NET fully supports Windows, Linux, and macOS. +The SDK includes tooling for endpoint routing, built-in IoC, and logging.
The open-source Kestrel web server supports HTTP/2 connections. Figure 6-13 shows a Visual Studio 2022 template that scaffolds a skeleton project for a gRPC service. Note how .NET fully supports Windows, Linux, and macOS. ![gRPC Support in Visual Studio 2022 diagram](./media/visual-studio-2022-grpc-template.png) -**Figure 6-18**. gRPC support in Visual Studio 2022 +**Figure 6-13**. gRPC support in Visual Studio 2022 -Figure 6-19 shows the skeleton gRPC service generated from the built-in scaffolding included in Visual Studio 2022. +Figure 6-14 shows the skeleton gRPC service generated from the built-in scaffolding included in Visual Studio 2022. ![gRPC project in Visual Studio 2022 diagram](./media/grpc-project.png) -**Figure 6-19**. gRPC project in Visual Studio 2022 +**Figure 6-14**. gRPC project in Visual Studio 2022 In the previous figure, note the proto description file and service code. As you'll see shortly, Visual Studio generates additional configuration in both the Startup class and underlying project file. @@ -88,7 +88,7 @@ The microservice reference architecture, [eShop Reference Application](https://g ![Backend architecture for eShop application diagram](./media/eshop-architecture.png) -**Figure 6-20**. Backend architecture for eShop application +**Figure 6-15**. 
Backend architecture for eShop application The eShop App Workshop adds gRPC as a worked example in the [Add shopping basket capabilities to the web site lab](https://github.com/dotnet-presentations/eshop-app-workshop/tree/main/labs/4-Add-Shopping-Basket) diff --git a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-discovery.md b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-discovery.md index 6da24d4a7b736..89a8c1b0c98c1 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-discovery.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-discovery.md @@ -1,10 +1,10 @@ --- -title: Service discovery introduction +title: What is service discovery? description: Cloud-native service to service communication patterns | Service discovery ms.date: 10/23/2024 --- -# Service discovery introduction +# What is service discovery? [!INCLUDE [download-alert](../includes/download-alert.md)] diff --git a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-mesh-communication-infrastructure.md b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-mesh-communication-infrastructure.md index 41e7a729f3fc7..ec2c1eacd03a8 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-mesh-communication-infrastructure.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-mesh-communication-infrastructure.md @@ -18,7 +18,7 @@ A key component of a service mesh is a proxy. 
In a cloud-native application, an ![Service mesh with a side car diagram](media/service-mesh-with-side-car.png) -**Figure 6-22**. Service mesh with a side car +**Figure 6-16**. Service mesh with a side car Note in the previous figure how messages are intercepted by a proxy that runs alongside each microservice. Each proxy can be configured with traffic rules specific to the microservice. It understands messages and can route them across your services and the outside world. diff --git a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-to-service-communication.md b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-to-service-communication.md index 661f33243871e..9fcbdfc161c74 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-to-service-communication.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/service-to-service-communication-patterns/service-to-service-communication.md @@ -34,7 +34,7 @@ One option for implementing this scenario is for the calling back-end microservi > ![Direct HTTP communication diagram](media/direct-http-communication.png) -**Figure 6-7**. Direct HTTP communication +**Figure 6-1**. Direct HTTP communication While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always *synchronous* and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish.
@@ -42,7 +42,7 @@ Executing an infrequent request that makes a single direct HTTP call to another > ![Chaining HTTP queries diagram](media/chaining-http-queries.png) -**Figure 6-8**. Chaining HTTP queries +**Figure 6-2**. Chaining HTTP queries You can certainly imagine the risk in the design shown in the previous image. What happens if Step \#3 fails? Or Step \#8 fails? How do you recover? What if Step \#6 is slow because the underlying service is busy? How do you continue? Even if all works correctly, think of the latency this call would incur, which is the sum of the latency of each step. @@ -58,7 +58,7 @@ Another option for eliminating microservice-to-microservice coupling is an [Aggr > ![Aggregator service diagram](media/aggregator-service.png) -**Figure 6-9**. Aggregator microservice +**Figure 6-3**. Aggregator microservice The pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a specialized microservice. The purple checkout aggregator microservice in the previous figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller. While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies among back-end microservices. @@ -68,17 +68,17 @@ Another approach for decoupling synchronous HTTP messages is a [Request-reply pa > ![Request-reply pattern diagram](media/request-reply-pattern.png) -**Figure 6-10**. Request-reply pattern +**Figure 6-4**. Request-reply pattern Here, the message producer creates a query-based message that contains a unique correlation ID and places it into a request queue. The consuming service dequeues the message, processes it, and places the response into the response queue with the same correlation ID. The producer service dequeues the message, matches it with the correlation ID and continues processing.
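The request-reply flow described in this hunk can be sketched with two in-memory queues. This is a toy illustration only; a real implementation would use a message broker such as Azure Service Bus, and the message bodies here are invented for the example.

```csharp
// Sketch of the request-reply pattern: a correlation ID ties the reply in
// the response queue back to the original message in the request queue.
using System;
using System.Collections.Generic;

var requestQueue = new Queue<(Guid CorrelationId, string Body)>();
var responseQueue = new Queue<(Guid CorrelationId, string Body)>();

// Producer: create a query-based message with a unique correlation ID.
var correlationId = Guid.NewGuid();
requestQueue.Enqueue((correlationId, "GetPrice:sku-42"));

// Consumer: dequeue the message, process it, and reply on the response
// queue using the same correlation ID.
var request = requestQueue.Dequeue();
responseQueue.Enqueue((request.CorrelationId, "Price:19.99"));

// Producer: dequeue the reply and match it to the request by correlation ID.
var response = responseQueue.Dequeue();
Console.WriteLine(response.CorrelationId == correlationId
    ? $"Matched reply: {response.Body}"
    : "Unmatched reply");
```

With a real broker the producer would not block on the reply; it would resume processing when a message with a matching correlation ID arrives on the response queue.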
We cover queues in detail in the next section. ## Commands -Another type of communication interaction is a *command*. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 6-11, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something. +Another type of communication interaction is a *command*. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 6-5, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something. > ![Command interaction with a queue diagram](media/command-interaction-with-queue.png) -**Figure 6-11**. Command interaction with a queue +**Figure 6-5**. Command interaction with a queue Most often, the Producer doesn't require a response and can *fire-and-forget* the message. If a reply is needed, the Consumer sends a separate message back to Producer on another channel. A command message is best sent asynchronously with a message queue, supported by a lightweight message broker. In the previous diagram, note how a queue separates and decouples both services. @@ -102,7 +102,7 @@ There are a few limitations with the service to be aware of: > ![Storage queue hierarchy diagram](media/storage-queue-hierarchy.png) -**Figure 6-12**. The hierarchy of an Azure Storage queue +**Figure 6-6**. The hierarchy of an Azure Storage queue In the previous figure, note how storage queues store their messages in the underlying Azure Storage account. @@ -128,7 +128,7 @@ However, there are some important caveats: Service Bus queue size is limited to > ![Service Bus queue diagram](media/service-bus-queue.png) -**Figure 6-13**. The high-level architecture of a Service Bus queue +**Figure 6-7**. 
The high-level architecture of a Service Bus queue In the previous figure, note the point-to-point relationship. Two instances of the same provider are enqueuing messages into a single Service Bus queue. Each message is consumed by only one of three consumer instances on the right. Next, we discuss how to implement messaging where different consumers may all be interested in the same message. @@ -140,19 +140,20 @@ To address this scenario, we move to the third type of message interaction, the Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the [Publish/Subscribe](/azure/architecture/patterns/publisher-subscriber) pattern to implement [event-based communication](/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/integration-event-based-microservice-communications). -Figure 6-14 shows a shopping basket microservice publishing an event with two other microservices subscribing to it. +Figure 6-8 shows a shopping basket microservice publishing an event with two other microservices subscribing to it. > ![Event-Driven messaging diagram](media/event-driven-messaging.png) -**Figure 6-14**. Event-Driven messaging +**Figure 6-8**. Event-Driven messaging Note the *event bus* component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently operate on the event with no knowledge of each other, nor the shopping basket microservice. When the registered event is published to the event bus, they act upon it. -With eventing, we move from queuing technology to *topics*. 
A [topic](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions) is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 6-16 shows a topic architecture. +With eventing, we move from queuing technology to *topics*. A [topic](/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions) is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 6-9 shows a topic architecture. -> [!div class="mx-imgBorder"] > ![Topic architecture diagram](media/topic-architecture.png) +**Figure 6-9**. Topic architecture + In the previous figure, publishers send messages to the topic. At the end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules, shown in dark blue boxes. Rules act as a filter that forwards specific messages to a subscription. Here, a "GetPrice" event would be sent to the price and logging subscriptions as the logging subscription has chosen to receive all messages. A "GetInformation" event would be sent to the information and logging subscriptions. The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure EventGrid. @@ -183,13 +184,13 @@ When publishing and subscribing to native events from Azure resources, no coding > ![Event Grid anatomy diagram](media/event-grid-anatomy.png) -**Figure 6-15**. Event Grid anatomy +**Figure 6-10**. Event Grid anatomy A major difference between EventGrid and Service Bus is the underlying *message exchange pattern*. Service Bus implements an older style *pull model* in which the downstream subscriber actively polls the topic subscription for new messages. 
On the upside, this approach gives the subscriber full control of the pace at which it processes messages. It controls when and how many messages to process at any given time. Unread messages remain in the subscription until processed. A significant shortcoming is the latency between the time the event is generated and the polling operation that pulls that message to the subscriber for processing. Also, the overhead of constant polling for the next event consumes resources and money. -EventGrid, however, is different. It implements a *push model* in which events are sent to the EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event – not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps provide automatic autoscaling capabilities to handle increased loads. +EventGrid, however, is different. It implements a *push model* in which events are sent to the EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event – not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide autoscaling capabilities to handle increased loads. Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and charges you only for your actual usage, not pre-purchased capacity. 
The first 100,000 operations per month are free – operations being defined as event ingress (incoming event notifications), subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability, EventGrid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality for unsuccessful delivery. Undelivered messages can be moved to a "dead-letter" queue for resolution. Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn't support features like ordered messaging, transactions, and sessions. @@ -197,11 +198,11 @@ Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events. For example, when a new document has been inserted into a Cosmos DB table. But what if your cloud-native system needs to process a *stream of related events*? [Event streams](/archive/msdn-magazine/2015/february/microsoft-azure-the-rise-of-event-stream-oriented-systems) are more complex. They're typically time-ordered, interrelated, and must be processed as a group. -[Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and [process millions of events per second](/azure/event-hubs/event-hubs-about). Shown in Figure 6-16, it's often a front door for an event pipeline, decoupling ingest stream from event consumption. +[Azure Event Hub](https://azure.microsoft.com/services/event-hubs/) is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. 
The service is highly scalable and can store and [process millions of events per second](/azure/event-hubs/event-hubs-about). Shown in Figure 6-11, it's often a front door for an event pipeline, decoupling the ingest stream from event consumption. > ![Azure Event Hub diagram](media/azure-event-hub.png) -**Figure 6-16**. Azure Event Hub +**Figure 6-11**. Azure Event Hub Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keeps event data after it's been read by a consumer. This feature enables other data analytics services, both internal and external, to replay the data for further analysis. Events stored in Event Hub are only deleted upon expiration of the retention period, which is one day by default, but configurable. @@ -211,7 +212,7 @@ Event Hubs implements message streaming through a [partitioned consumer model](/ > ![Event Hub partitioning diagram](media/event-hub-partitioning.png) -**Figure 6-17**. Event Hub partitioning +**Figure 6-12**. Event Hub partitioning Instead of reading from the same resource, each consumer group reads across a subset, or partition, of the message stream. 
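The partitioned consumer idea in this hunk can be illustrated with a toy example. The stable per-key hash and the partition count below are invented for the sketch; Event Hubs performs its own partition assignment internally.

```csharp
// Sketch: events carrying the same key always hash to the same partition,
// which is what lets each consumer read an ordered subset of the stream.
using System;
using System.Linq;

int partitionCount = 4;
string[] eventKeys = { "device-1", "device-2", "device-3", "device-1" };

foreach (var key in eventKeys)
{
    // A stable hash (sum of character codes) stands in for the real one.
    int partition = key.Sum(c => (int)c) % partitionCount;
    Console.WriteLine($"{key} -> partition {partition}");
}
```

Because assignment is deterministic per key, both `device-1` events land in the same partition, preserving their relative order for whichever consumer owns that partition.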
diff --git a/docs/architecture/distributed-cloud-native-apps-containers/testing-distributed-apps/how-aspire-helps.md b/docs/architecture/distributed-cloud-native-apps-containers/testing-distributed-apps/how-aspire-helps.md index ef379d992e6f6..72d496bfb4df3 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/testing-distributed-apps/how-aspire-helps.md +++ b/docs/architecture/distributed-cloud-native-apps-containers/testing-distributed-apps/how-aspire-helps.md @@ -1,10 +1,10 @@ --- -title: How .NET Aspire helps with the challenges of distributed app testing -description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | How .NET Aspire helps with the challenges of distributed app testing +title: How .NET Aspire helps testing +description: Architecture for Distributed Cloud-Native Apps with .NET Aspire & Containers | How .NET Aspire helps testing ms.date: 10/23/2024 --- -# How .NET Aspire helps with the challenges of distributed app testing +# How .NET Aspire helps testing [!INCLUDE [download-alert](../includes/download-alert.md)] diff --git a/docs/architecture/distributed-cloud-native-apps-containers/toc.yml b/docs/architecture/distributed-cloud-native-apps-containers/toc.yml index ed9ae3520c4cd..269e30c8ac3cb 100644 --- a/docs/architecture/distributed-cloud-native-apps-containers/toc.yml +++ b/docs/architecture/distributed-cloud-native-apps-containers/toc.yml @@ -18,7 +18,7 @@ items: href: introduction-containers-docker/what-is-docker.md - name: Container terminology href: introduction-containers-docker/container-terminology.md - - name: Official .NET container images & SDK tooling + - name: Official .NET container images and SDK tooling href: introduction-containers-docker/official-container-images-tooling.md - name: Introduction to .NET Aspire items: @@ -28,7 +28,7 @@ items: href: introduction-dotnet-aspire/orchestration.md - name: Service discovery href: introduction-dotnet-aspire/service-discovery.md - - name: 
Integrations + - name: .NET Aspire integrations href: introduction-dotnet-aspire/integrations.md - name: Observability and dashboard href: introduction-dotnet-aspire/observability-and-dashboard.md @@ -48,7 +48,7 @@ items: items: - name: Introduction href: service-to-service-communication-patterns/introduction.md - - name: Service discovery introduction + - name: Service discovery href: service-to-service-communication-patterns/service-discovery.md - name: Service-to-service communication href: service-to-service-communication-patterns/service-to-service-communication.md @@ -62,7 +62,7 @@ items: href: event-based-communication-patterns/integration-event-based-microservice-communications.md - name: Implementing an event bus with RabbitMQ href: event-based-communication-patterns/rabbitmq-event-bus-development-test-environment.md - - name: Implement background tasks in microservices + - name: Implementing background tasks in microservices href: event-based-communication-patterns/background-tasks-with-ihostedservice.md - name: Subscribing to events href: event-based-communication-patterns/subscribe-events.md @@ -70,7 +70,7 @@ items: items: - name: Data patterns for distributed applications href: data-patterns/distributed-data.md - - name: Relational versus NoSQL data + - name: Relational and NoSQL data href: data-patterns/relational-vs-nosql-data.md - name: Caching in a cloud-native application href: data-patterns/azure-caching.md @@ -88,7 +88,7 @@ items: href: cloud-native-resiliency/resilient-communication.md - name: Add resiliency with .NET href: cloud-native-resiliency/resiliency-with-aspire.md - - name: Monitoring, Diagnostics, and Health + - name: Monitoring, diagnostics, and health items: - name: Observability patterns href: monitoring-health/observability-patterns.md @@ -96,7 +96,7 @@ items: href: monitoring-health/open-telemetry-grafana-prometheus.md - name: Health checks and probes href: monitoring-health/health-checks-probes.md - - name: .NET Aspire dashboard 
+ - name: The .NET Aspire dashboard href: monitoring-health/aspire-dashboard.md - name: Observability platforms href: monitoring-health/observability-platforms.md @@ -130,7 +130,7 @@ items: href: testing-distributed-apps/challenges-of-distributed-app-testing.md - name: Testing ASP.NET Core services and web apps href: testing-distributed-apps/test-aspnet-core-services-web-apps.md - - name: How .NET Aspire helps + - name: How .NET Aspire helps testing href: testing-distributed-apps/how-aspire-helps.md - name: API Gateways items: @@ -142,7 +142,7 @@ items: items: - name: Deploying distributed apps href: deploying-distributed-apps/how-deployment-affects-your-architecture.md - - name: Development vs. production + - name: Development and production href: deploying-distributed-apps/development-vs-production.md - name: Deployment with or without .NET Aspire href: deploying-distributed-apps/deploy-with-dotnet-aspire.md