+++
title = "Docker UCP Quickstart Guide"
description = "Docker Universal Control Plane"
[menu.main]
parent="mn_ucp"
+++
# Docker UCP Quickstart Guide
These instructions explain how to install UCP. A UCP installation consists of a UCP controller and one or more nodes; the same machine can serve as both the controller and a node.

This document also outlines how UCP high availability works, along with general guidelines for deploying a highly available UCP in production. When adding nodes to your cluster, you decide which nodes you want to be replicas and which are simply additional engines for extra capacity. If you are planning an HA deployment, you should have a minimum of 3 nodes (primary + two replicas).

It is **highly** recommended that you deploy your initial 3 controller nodes (primary + at least 2 replicas) **before** you start adding non-replica nodes or running workloads on your cluster. If an error occurs while adding the first replica, the cluster becomes unusable.
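
As a sketch of that ordering, an HA deployment with the `docker/ucp` bootstrap image might look like the following. Treat this as illustrative rather than authoritative: the exact flags and image tag depend on your UCP version, and `<primary-ip>` is a placeholder for your controller's address.

```shell
# Illustrative only -- flags and image tags vary by UCP release.

# 1. Install the primary controller on the first host:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install --host-address <primary-ip>

# 2. Join two replica controllers, one per additional controller host:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join --replica --url https://<primary-ip>:443

# 3. Only after all three controllers are up, join non-replica nodes:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join --url https://<primary-ip>:443
```

The key point is the order: all three controllers (step 1 and step 2) come up before any capacity-only nodes or workloads.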
## Architecture
* **Primary Controller** This is the first node you run the `install` against. It runs the following containers/services:
  * **ucp-kv** This etcd container runs the replicated KV store.
  * **ucp-swarm-manager** This Swarm Manager uses the replicated KV store for leader election and cluster membership tracking.
  * **ucp-controller** This container runs the UCP server, using the replicated KV store for configuration state.
  * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.
  * **ucp-proxy** Runs a local TLS proxy for the docker socket to enable secure access to the local docker daemon.
  * **ucp-swarm-ca[-proxy]** These **unreplicated** containers run the Swarm CA used for admin certificate bundles and for adding new nodes.
  * **ucp-ca[-proxy]** These **unreplicated** containers run the (optional) UCP CA used for signing user bundles.
* **Replica Node** This is a node you `join` to the primary using the `--replica` flag; it contributes to the availability of the cluster. It runs the following containers/services:
  * **ucp-kv** This etcd container runs the replicated KV store.
  * **ucp-swarm-manager** This Swarm Manager uses the replicated KV store for leader election and cluster membership tracking.
  * **ucp-controller** This container runs the UCP server, using the replicated KV store for configuration state.
  * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.
  * **ucp-proxy** Runs a local TLS proxy for the docker socket to enable secure access to the local docker daemon.
* **Non-Replica Node** These nodes provide additional capacity, but do not enhance the availability of the UCP/Swarm infrastructure. They run the following containers/services:
  * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster.
  * **ucp-proxy** Runs a local TLS proxy for the docker socket to enable secure access to the local docker daemon.
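
One way to tell which role a host plays is to look at which `ucp-*` containers it runs. A minimal sketch, assuming the container names listed above; here the list is hardcoded for illustration, but in practice you would capture it with something like `docker ps --filter name=ucp- --format '{{.Names}}'`:

```shell
# Classify a host's role from the ucp-* containers running on it.
# (Hardcoded sample list; on a real host, capture it from docker ps.)
containers="ucp-kv ucp-swarm-manager ucp-controller ucp-swarm-join ucp-proxy"

# Controllers (primary or replica) run the replicated KV store;
# non-replica nodes only run ucp-swarm-join and ucp-proxy.
case "$containers" in
  *ucp-kv*) role="controller (primary or replica)" ;;
  *)        role="non-replica node" ;;
esac
echo "host role: $role"
```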
Notes:
* At present, UCP does not include a load balancer. Users may provide one externally and load balance between the primary and replica nodes on port 443 for web access to the system via a single IP/hostname if desired. If no external load balancer is used, admins should note the IP/hostname of the primary and all replicas so they can access them when needed.
* Backups:
  * Users should always back up their volumes (see the other guides for a complete list of named volumes).
* The CAs (Swarm and UCP) are not currently replicated.
  * Swarm CA:
    * Used for admin cert bundle generation.
    * Used for adding hosts to the cluster.
    * During an outage, no new admin cert bundles can be downloaded, but existing ones will still work.
    * During an outage, no new nodes can be added to the cluster, but existing nodes will continue to operate.
  * UCP CA:
    * Used for user bundle generation.
    * Used to sign certs for new replica nodes.
    * During an outage, no new user cert bundles can be downloaded, but existing ones will still work.
    * During an outage, no new replica nodes can be joined to the cluster.
+
55
+
**WARNING** You should never run a cluster with only the primary controller and a single replica. This results in an HA configuration of "2 nodes," where the quorum is also "2 nodes" (to prevent split-brain). If either the primary or the single replica fails, the cluster will be unusable until it is repaired, so you actually have a higher failure probability than if you ran a non-HA setup with no replica. You should have a minimum of 2 replicas (that is, "3 nodes") so that you can tolerate at least a single failure.
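
The arithmetic behind this warning follows from the KV store's majority quorum, floor(n/2) + 1. The sketch below just evaluates it for a few cluster sizes:

```shell
# Majority quorum is floor(n/2) + 1; failures tolerated is n - quorum.
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "$n controller(s): quorum=$quorum, failures tolerated=$tolerated"
done
```

With 2 controllers the quorum is 2, so zero failures are tolerated, the same as a single controller but with twice as many machines that can fail; 3 controllers tolerate one failure, and 5 tolerate two.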
**TODO** In the future this document should describe best practices for layout, target number of nodes, etc. For now, that's an exercise for the reader.