
Commit 95151d5

Mary Anthony and Joao Fernandes authored and committed

Updating with metadata for website

Closes docker#371: Added metadata for web build. Updating with Dan's comments.

Signed-off-by: Mary Anthony <mary@docker.com>

1 parent: adddedb

15 files changed: +854 additions, -34 deletions

certs.md

Lines changed: 2 additions & 0 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Manually setting up a CA"
 description = "Docker Universal Control Plane"
 [menu.main]
 parent="mn_ucp"
 +++
 
+
 # Manually setting up a CA
 
 A few features of UCP require an external CA (cfssl or equivalent) to sign

index.md

Lines changed: 2 additions & 0 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Overview"
 description = "Docker Universal Control Plane"
 [menu.main]
 identifier="mn_ucp"
 +++
 
+
 # Welcome to Docker Universal Control Plane BETA docs
 
 The following are available:

installation.md

Lines changed: 2 additions & 0 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Docker UCP Quickstart Guide"
 description = "Docker Universal Control Plane"
 [menu.main]
 parent="mn_ucp"
 +++
 
+
 # Docker UCP Quickstart Guide
 
 These instructions explain how to install UCP. A UCP installation consists of a UCP controller and one or more nodes. The same machine can serve as both the controller and a node. These instructions show you how to install both a controller and a node. It contains the following sections:

kv_store.md

Lines changed: 2 additions & 0 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Key/Value Store Backends"
 description = "Docker Universal Control Plane"
 [menu.main]
 parent="mn_ucp"
 +++
 
+
 # Key/Value Store Backends
 
 In this release, UCP leverages the [etcd](https://github.com/coreos/etcd/) KV

networking.md

Lines changed: 45 additions & 34 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Set up container networking with UCP"
 description = "Docker Universal Control Plane"
 [menu.main]
 parent="mn_ucp"
 +++
 
+
 # Set up container networking with UCP
 
 Beginning in release 1.9, the Docker Engine updated and expanded its networking
@@ -59,10 +61,10 @@ nodes.
 
 ### Prerequisites
 
-You must install UCP on your entire cluster (sever and nodes), before following
-these instructions. Make sure you have run on the `install` and `join` on each
-node as appropriate. Then, enable mult-host networking on every node in your
-cluster using these instructions.
+You must install UCP on your entire cluster (controller and nodes) before
+following these instructions. Make sure you have run the `install` and
+`join` on each node as appropriate. Then, enable multi-host networking on
+every node in your cluster using the instructions in this page.
 
 UCP requires that all clients, including Docker Engine, use a Swarm TLS
 certificate chain signed by the UCP Swarm Root CA. You configured these
@@ -73,25 +75,36 @@ To continue with this procedure, you need to know the SAN values you used on
 each controller or node. Because you can pass a SAN either as an IP address or
 fully-qualified hostname, make sure you know how to find these.
 
-If you used a public IP address, log into the controller host and run these two
-commands on the controller:
+If you used public IP addresses, do the following:
 
-```bash
-$ IP_ADDRESS=$(ip -o -4 route get 8.8.8.8 | cut -f8 -d' ')
-$ echo ${IP_ADDRESS}
-```
+1. Log into a host in your UCP cluster (controller or one of your nodes).
+
+2. Run these two commands to get the public IP:
+
+        $ IP_ADDRESS=$(ip -o -4 route get 8.8.8.8 | cut -f8 -d' ')
+        $ echo ${IP_ADDRESS}
 
-If your cluster is installed on a cloud provider, the public IP may not be the
-same as the IP address returned by this command. Confirm through your cloud
-provider's console or command line that this value is indeed the public IP. For
-example, the AWS console shows these values to you:
+    If your cluster is installed on a cloud provider, the public IP may not be
+    the same as the IP address returned by this command. Confirm through your
+    cloud provider's console or command line that this value is indeed the
+    public IP. For example, the AWS console shows these values to you:
 
-![Open certs](images/ip_cloud_provider.png)
+    ![Open certs](images/ip_cloud_provider.png)
 
-You can get also the SAN values of the controller by examining the certificate
-through your browser. This would include a fully qualified hostname you used for
-the controller. Each browser has a different way to view a website's certificate. To
-do this on Chrome:
+3. Note the host's IP address.
+
+4. Repeat steps 1-3 on the remaining hosts in your cluster.
+
+If you used fully-qualified domain names for SANs, you use them again to
+configure multi-host networking. If you don't recall the name you used for each
+node, then:
+
+* If your hosts are on a private network, ask your system administrator for their fully-qualified domain names.
+* If your hosts are from a cloud provider, use the provider's console or other facility to get the name.
+
+An easy way to get the controller's SAN values is to examine its certificate
+through your browser. Each browser has a different way to view a website's
+certificate. To do this on Chrome:
 
 1. Open the browser to the UCP console.
 
@@ -107,8 +120,6 @@ do this on Chrome:
 
 ![SAN](images/browser_cert_san.png)
 
-If you are using a private network, and don't know the fully-qualified DNS for a node, you can ask your network administrator.
-
 
 ### Configure and restart the daemon
 
@@ -121,35 +132,35 @@ If you followed the prerequisites, you should have a list of the SAN values you
 3. Determine the Docker daemon's startup configuration file.
 
    Each Linux distribution has a different approach for configuring daemon
-   startup (init) options. On CentOS/RedHat systems that rely on systemd,
-   the Docker daemon startup options are stored in the
-   `/lib/systemd/system/docker.service` file. Ubuntu 14.04 stores these in the `/etc/init/docker.conf` file.
+   startup (init) options. On CentOS/RedHat systems that rely on systemd, the
+   Docker daemon startup options are stored in the
+   `/lib/systemd/system/docker.service` file. Ubuntu 14.04 stores these in the
+   `/etc/default/docker` file.
 
 4. Open the configuration file with your favorite editor.
 
   **Ubuntu**:
-       $ sudo vi /etc/init/docker.conf
+       $ sudo vi /etc/default/docker
 
  **CentOS/RedHat**:
       $ sudo vi /lib/systemd/system/docker.service
 
 5. Uncomment the `DOCKER_OPTS` line and add the following options.
 
-       --cluster-advertise eth0:12376
-       --cluster-store etcd://CONTROLLER_PUBLIC_IP_OR_DOMAIN:12379
+       --cluster-advertise CURRENT_HOST_PUBLIC_IP_OR_DNS:12376
+       --cluster-store etcd://CONTROLLER_PUBLIC_IP_OR_DNS:12379
       --cluster-store-opt kv.cacertfile=/var/lib/docker/discovery_certs/ca.pem
      --cluster-store-opt kv.certfile=/var/lib/docker/discovery_certs/cert.pem
     --cluster-store-opt kv.keyfile=/var/lib/docker/discovery_certs/key.pem
 
-   Replace `CONTROLLER_PUBLIC_IP_OR_DOMAIN` with the IP address of the UCP
-   controller. Use `ifconfig` to ensure the host you are installing on is
-   accessible over `eth0`; on systems with multiple ethernet interfaces
-   this value might differ. When you are done, the line should look
-   similar to the following:
+   Replace `CURRENT_HOST_PUBLIC_IP_OR_DNS` with the IP of the host whose file
+   you are configuring. Replace `CONTROLLER_PUBLIC_IP_OR_DNS` with the IP
+   address of the UCP controller. When you are done, the line should look
+   similar to the following:
 
-       DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --cluster-advertise eth0:12376 --cluster-store etcd://52.70.188.239:12379 --cluster-store-opt kv.cacertfile=/var/lib/docker/discovery_certs/ca.pem --cluster-store-opt kv.certfile=/var/lib/docker/discovery_certs/cert.pem --cluster-store-opt kv.keyfile=/var/lib/docker/discovery_certs/key.pem"
+       DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --cluster-advertise 52.70.180.235:12376 --cluster-store etcd://52.70.188.239:12379 --cluster-store-opt kv.cacertfile=/var/lib/docker/discovery_certs/ca.pem --cluster-store-opt kv.certfile=/var/lib/docker/discovery_certs/cert.pem --cluster-store-opt kv.keyfile=/var/lib/docker/discovery_certs/key.pem"
 
-6. Save and close the `/etc/init/docker.conf` file.
+6. Save and close the Docker configuration file.
 
 6. Restart the Docker daemon.
 
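The daemon options added in step 5 of the diff above can be sanity-checked by assembling the `DOCKER_OPTS` value in a small shell sketch. The `HOST_IP` and `CONTROLLER` values below are the hypothetical example addresses from the sample line in the diff, not real hosts; substitute your own host's public IP/DNS and the UCP controller's public IP/DNS.

```shell
#!/bin/sh
# Sketch: build the DOCKER_OPTS value described in the diff above.
# HOST_IP and CONTROLLER are placeholder example values; replace them
# with this host's public IP/DNS and the UCP controller's IP/DNS.
HOST_IP=52.70.180.235
CONTROLLER=52.70.188.239
CERTS=/var/lib/docker/discovery_certs

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 \
--cluster-advertise ${HOST_IP}:12376 \
--cluster-store etcd://${CONTROLLER}:12379 \
--cluster-store-opt kv.cacertfile=${CERTS}/ca.pem \
--cluster-store-opt kv.certfile=${CERTS}/cert.pem \
--cluster-store-opt kv.keyfile=${CERTS}/key.pem"

# Print the assembled line so it can be pasted into the config file.
echo "DOCKER_OPTS=\"${DOCKER_OPTS}\""
```

Note that `--cluster-advertise` names the current host, while `--cluster-store` always points at the controller's etcd endpoint on port 12379.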

profiling.md

Lines changed: 2 additions & 0 deletions
@@ -1,10 +1,12 @@
+
 +++
 title = "Profiling UCP"
 description = "Docker Universal Control Plane"
 [menu.main]
 parent="mn_ucp"
 +++
 
+
 # Profiling UCP
 
 If you run the UCP server with the debug flag set, not only will you get more logging output, but we enable

release_notes.md

Lines changed: 9 additions & 0 deletions
@@ -1,3 +1,12 @@
+
++++
+title = "Overview"
+description = "Docker Universal Control Plane"
+[menu.main]
+identifier="mn_ucp"
++++
+
+
 # Release Notes
 
 The latest release is 0.5. Consult with your Docker sales engineer for the release notes of earlier versions.

specs/high_availability.md

Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
++++
+draft = "true"
++++
+
+# UCP High Availability
+
+This document outlines how UCP high availability works, and general
+guidelines for deploying a highly available UCP in production.
+When adding nodes to your cluster, you decide which nodes you want to
+be replicas, and which nodes are simply additional engines for extra
+capacity. If you are planning an HA deployment, you should have a
+minimum of 3 nodes (primary + two replicas).
+
+It is **highly** recommended that you deploy your initial 3 controller
+nodes (primary + at least 2 replicas) **before** you start adding
+non-replica nodes or start running workloads on your cluster. When adding
+the first replica, if an error occurs, the cluster will become unusable.
+
+## Architecture
+
+* **Primary Controller** This is the first node you run the `install` against. It runs the following containers/services:
+    * **ucp-kv** This etcd container runs the replicated KV store
+    * **ucp-swarm-manager** This Swarm Manager uses the replicated KV store for leader election and cluster membership tracking
+    * **ucp-controller** This container runs the UCP server, using the replicated KV store for configuration state
+    * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster
+    * **ucp-proxy** Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon
+    * **ucp-swarm-ca[-proxy]** These **unreplicated** containers run the Swarm CA used for admin certificate bundles and adding new nodes
+    * **ucp-ca[-proxy]** These **unreplicated** containers run the (optional) UCP CA used for signing user bundles
+* **Replica Node** This is a node you `join` to the primary using the `--replica` flag; it contributes to the availability of the cluster:
+    * **ucp-kv** This etcd container runs the replicated KV store
+    * **ucp-swarm-manager** This Swarm Manager uses the replicated KV store for leader election and cluster membership tracking
+    * **ucp-controller** This container runs the UCP server, using the replicated KV store for configuration state
+    * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster
+    * **ucp-proxy** Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon
+* **Non-Replica Node** These nodes provide additional capacity, but do not enhance the availability of the UCP/Swarm infrastructure:
+    * **ucp-swarm-join** Runs the swarm join command to periodically publish this node's existence to the KV store. If the node goes down, this publishing stops, the registration times out, and the node is automatically dropped from the cluster
+    * **ucp-proxy** Runs a local TLS proxy for the Docker socket to enable secure access to the local Docker daemon
+
+Notes:
+
+* At present, UCP does not include a load balancer. Users may provide one externally and load balance between the primary and replica nodes on port 443 for web access to the system via a single IP/hostname if desired. If no external load balancer is used, admins should note the IP/hostname of the primary and all replicas so they can access them when needed.
+* Backups:
+    * Users should always back up their volumes (see the other guides for a complete list of named volumes)
+    * The CAs (Swarm and UCP) are not currently replicated.
+        * Swarm CA:
+            * Used for admin cert bundle generation
+            * Used for adding hosts to the cluster
+            * During an outage, no new admin cert bundles can be downloaded, but existing ones will still work
+            * During an outage, no new nodes can be added to the cluster, but existing nodes will continue to operate
+        * UCP CA:
+            * Used for user bundle generation
+            * Used to sign certs for new replica nodes
+            * During an outage, no new user cert bundles can be downloaded, but existing ones will still work
+            * During an outage, no new replica nodes can be joined to the cluster
+
+**WARNING** You should never run a cluster with only the primary
+controller and a single replica. This results in an HA configuration
+of "2 nodes" where quorum is also "2 nodes" (to prevent split-brain).
+If either the primary or the single replica fails, the cluster will be
+unusable until it is repaired. (So you actually have a higher failure
+probability than if you just ran a non-HA setup with no replica.) You
+should have a minimum of 2 replicas (that is, "3 nodes") so that you can
+tolerate at least a single failure.
+
+**TODO** In the future this document should describe best practices for layout,
+target number of nodes, etc. For now, that's an exercise for the reader
+based on etcd/raft documentation.
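The quorum arithmetic behind the warning above can be checked with a short shell loop (an illustrative sketch, not part of UCP): with n controller nodes, the raft majority is floor(n/2) + 1, so the cluster tolerates n minus quorum controller failures.

```shell
#!/bin/sh
# Majority quorum: with n controllers, quorum = floor(n/2) + 1 and the
# cluster survives n - quorum controller failures. Note that n=2
# tolerates zero failures, which is why a primary plus a single replica
# is worse than running with no replica at all.
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "controllers=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

The n=2 line shows quorum equal to the node count (no tolerated failures), while n=3 is the smallest layout that survives a single controller loss, matching the "minimum of 2 replicas" guidance.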
