Commit c05e727: Start rbac upgrade to deploy dir (#282)
Authored by gbarr01, committed by Jim Galasyn
1 parent b09c7d3

59 files changed, +1553 −0 lines changed
Lines changed: 142 additions & 0 deletions
@@ -0,0 +1,142 @@
---
title: Access control design with Docker EE Advanced
description: Learn how to architect multitenancy with Docker Enterprise Edition Advanced.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
  orhigher: true
- version: ucp-2.2
  orlower: true
---

{% if include.ui %}

{% if include.version=="ucp-3.0" %}
This topic is under construction.

{% elsif include.version=="ucp-2.2" %}
[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP.

The previous tutorial, [Access Control Design with Docker EE
Standard](access-control-design-ee-standard.md), describes a fictional company
called OrcaBank that has designed a resource access architecture to fit the
specific security needs of their organization. If you have not completed that
tutorial yet, do so before continuing.

In this tutorial, OrcaBank's deployment model adds a staging zone. Instead of
moving developed applications directly into production, OrcaBank now deploys
apps from their dev cluster to staging for testing before deploying to
production. OrcaBank has very stringent security requirements for production
applications. Its security team recently read a blog post about DevSecOps and
is excited to implement some changes: production applications aren't permitted
to share any physical infrastructure with non-production infrastructure.

In this tutorial OrcaBank uses Docker EE Advanced features to segment the
scheduling and access control of applications across disparate physical
infrastructure. [Node Access Control](access-control-node.md) with EE Advanced
licensing allows nodes to be placed in different collections so that resources
can be scheduled and isolated on disparate physical or virtual hardware.

## Team access requirements

As in the [Introductory Multitenancy Tutorial](access-control-design-ee-standard.md),
OrcaBank still has three application teams, `payments`, `mobile`, and `db`, that
need varying levels of segmentation between them. Their upcoming access control
redesign will organize their UCP cluster into two top-level collections,
Staging and Production, which will be completely separate security zones on
separate physical infrastructure.

- `security` should have visibility-only access across all applications in
  Production. The security team is not concerned with Staging applications and
  thus will not have access to Staging.
- `db` should have the full set of operations against all database applications
  in Production. `db` does not manage the databases in Staging, which are
  managed directly by the application teams.
- `payments` should have the full set of operations to deploy Payments apps in
  both Production and Staging, and also access to some of the shared services
  provided by the `db` team.
- `mobile` has the same rights as the `payments` team, with respect to the
  Mobile applications.

## Role composition

OrcaBank will use the same roles as in the introductory tutorial. An `ops` role
will provide the ability to deploy, destroy, and view any kind of resource.
`View Only` will be used by the security team to view resources with no edit
rights. `View & Use Networks + Secrets` will be used to access shared resources
across collection boundaries, such as the `db` services that the `db`
collection offers to the other app teams.

![image](../images/design-access-control-adv-0.png){: .with-border}

## Collection architecture

The previous tutorial had separate collections for each application team. In
this access control redesign there will be collections for each zone, Staging
and Production, and also collections within each zone for the individual
applications. Another major change is that the Docker nodes themselves will be
segmented, so that nodes in Staging are separate from Production nodes. Within
the Production zone, every application will also have its own dedicated nodes.

The resulting collection architecture takes the following tree representation:

```
/
├── System
├── Shared
├── prod
│   ├── db
│   ├── mobile
│   └── payments
└── staging
    ├── mobile
    └── payments
```
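
Treating collections as slash-separated paths makes the zone separation above easy to reason about. The following is a minimal sketch: the paths mirror the tree, but the helper names are illustrative and not a UCP API.

```python
# Collections from the tree above, written as slash-separated paths.
collections = [
    "/System", "/Shared",
    "/prod", "/prod/db", "/prod/mobile", "/prod/payments",
    "/staging", "/staging/mobile", "/staging/payments",
]

def zone(path: str) -> str:
    """Top-level collection a path belongs to, e.g. /prod/db -> /prod."""
    return "/" + path.split("/")[1]

# Production and staging are separate security zones: because nodes are
# placed into collections, no node or resource can fall into both zones.
prod = {c for c in collections if zone(c) == "/prod"}
staging = {c for c in collections if zone(c) == "/staging"}
assert prod.isdisjoint(staging)
assert zone("/prod/payments") == "/prod"
```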

## Grant composition

OrcaBank will now grant teams different roles against different collections.
Multiple grants per team are required for this kind of access. Each of the
Payments and Mobile applications will have three grants: the ability to deploy
in its production zone, the ability to deploy in its staging zone, and the
ability to share some resources with the `db` collection.

![image](../images/design-access-control-adv-grant-composition.png){: .with-border}
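
The three-grants-per-team pattern can be sketched as data. This is an illustrative model only: the tuples and role names mirror the text above, and none of this is a UCP API.

```python
# Each app team gets three grants: deploy rights in its production and
# staging collections, plus shared access to the production db collection.
def app_team_grants(team: str) -> list:
    return [
        (team, "ops", f"/prod/{team}"),
        (team, "ops", f"/staging/{team}"),
        (team, "View & Use Networks + Secrets", "/prod/db"),
    ]

grants = [
    ("security", "View Only", "/prod"),  # visibility-only, Production only
    ("db", "ops", "/prod/db"),           # full control of Production databases
]
for team in ("payments", "mobile"):
    grants += app_team_grants(team)

# Every app team ends up with exactly three grants.
assert sum(1 for g in grants if g[0] == "payments") == 3
assert sum(1 for g in grants if g[0] == "mobile") == 3
```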

## OrcaBank access architecture

The resulting access architecture provides the appropriate physical
segmentation between Production and Staging. Applications will be scheduled
only on the UCP worker nodes in the collection where the application is placed.
The production Mobile and Payments applications use shared resources across
collection boundaries to access the databases in the `/prod/db` collection.

![image](../images/design-access-control-adv-architecture.png){: .with-border}

### DB team

The OrcaBank `db` team is responsible for deploying and managing the full
lifecycle of the databases in Production. They have the full set of
operations against all database resources.

![image](../images/design-access-control-adv-db.png){: .with-border}

### Mobile team

The `mobile` team is responsible for deploying their full application stack in
staging. In production they deploy their own applications but utilize the
databases provided by the `db` team.

![image](../images/design-access-control-adv-mobile.png){: .with-border}

{% endif %}
{% endif %}
Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
---
title: Access control design with Docker EE Standard
description: Learn how to architect multitenancy by using Docker Enterprise Edition Standard.
keywords: authorize, authentication, users, teams, groups, sync, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
  orhigher: true
- version: ucp-2.2
  orlower: true
---

{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.

{% elsif include.version=="ucp-2.2" %}

[Collections and grants](index.md) are strong tools that can be used to control
access and visibility to resources in UCP. This tutorial describes a fictitious
company named OrcaBank that is designing the access architecture for its two
application teams, Payments and Mobile.

This tutorial introduces many concepts, including collections, grants,
centralized LDAP/AD, and the ability to share resources between different
teams and across collections.

## Team access requirements

OrcaBank has organized their application teams to specialize more and to
provide shared services to other applications. A `db` team was created just to
manage the databases that other applications utilize. Additionally, OrcaBank
recently read a book about DevOps and decided that developers should be able to
deploy and manage the lifecycle of their own applications.

- `security` should have visibility-only access across all applications in the
  swarm.
- `db` should have the full set of capabilities against all database
  applications and their respective resources.
- `payments` should have the full set of capabilities to deploy Payments apps
  and also access some of the shared services provided by the `db` team.
- `mobile` has the same rights as the `payments` team, with respect to the
  Mobile applications.


## Role composition

OrcaBank will use a combination of default roles and custom roles created
specifically for their use case. They are using the default `View Only` role to
give the security team visibility without edit rights. They created an `ops`
role that can do almost all operations against all types of resources. They
also created the `View & Use Networks + Secrets` role, which enables
application DevOps teams to use shared resources provided by other teams: it
lets applications connect to networks and use secrets that are also used by
`db` containers, but without the ability to see or affect the `db`
applications themselves.

![image](../images/design-access-control-adv-0.png){: .with-border}
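
The role split can be pictured as sets of allowed operations. This is a simplified sketch: UCP roles are finer-grained than this, and the operation names are illustrative.

```python
# Each role maps to the set of operations it permits (simplified).
ROLES = {
    "ops": {"view", "create", "update", "remove", "use"},
    "View Only": {"view"},
    "View & Use Networks + Secrets": {"view", "use"},
}

# The shared-resource role lets an app team attach to db networks and
# secrets without being able to change or remove them.
shared = ROLES["View & Use Networks + Secrets"]
assert "use" in shared
assert not shared & {"create", "update", "remove"}
```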

## Collection architecture

OrcaBank will also create collections that fit the organizational structure of
the company. Since all applications share the same physical resources, all
nodes and applications are built into collections underneath the built-in
`/Shared` collection.

- `/Shared/payments` hosts all applications and resources for the Payments
  applications.
- `/Shared/mobile` hosts all applications and resources for the Mobile
  applications.

Some other collections will be created to enable the shared `db` applications.

- `/Shared/db` will be a top-level collection for all `db` resources.
- `/Shared/db/payments` will be specifically for `db` resources providing
  service to the Payments applications.
- `/Shared/db/mobile` will do the same for the Mobile applications.

The grant composition below shows that this collection architecture allows an
app team to access shared `db` resources without being given access to _all_
`db` resources. At the same time, _all_ `db` resources are managed by a single
`db` team.


## LDAP/AD integration

OrcaBank has standardized on LDAP for centralized authentication to help their
identity team scale across all the platforms they manage. LDAP groups are
mapped directly to UCP teams using UCP's native LDAP/AD integration, so users
can be added to or removed from UCP teams via LDAP, which is managed centrally
by OrcaBank's identity team. The following grant composition shows how LDAP
groups are mapped to UCP teams.


## Grant composition

Two grants are applied for each application team, allowing each team to fully
manage their own apps in their collection while also having limited access to
networks and secrets within the `db` collection. This kind of grant composition
provides the flexibility to apply different roles against different groups of
resources.

![image](../images/design-access-control-adv-1.png){: .with-border}
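
The effect of the two grants per team can be sketched as a small evaluation model. All names here are illustrative and this is not the UCP API: a grant is modeled as a (team, role, collection) triple, and a grant against a collection applies to its whole subtree.

```python
# Roles as operation sets (simplified).
ROLES = {
    "ops": {"view", "create", "update", "remove"},
    "View Only": {"view"},
    "View & Use Networks + Secrets": {"view", "use"},
}

# Grants mirroring the composition described above.
GRANTS = [
    ("security", "View Only", "/Shared"),
    ("db", "ops", "/Shared/db"),
    ("payments", "ops", "/Shared/payments"),
    ("payments", "View & Use Networks + Secrets", "/Shared/db/payments"),
    ("mobile", "ops", "/Shared/mobile"),
    ("mobile", "View & Use Networks + Secrets", "/Shared/db/mobile"),
]

def in_subtree(collection: str, resource: str) -> bool:
    """A grant against /a applies to /a and everything below it."""
    return resource == collection or resource.startswith(collection + "/")

def allowed(team: str, op: str, resource: str) -> bool:
    return any(
        in_subtree(coll, resource) and op in ROLES[role]
        for t, role, coll in GRANTS
        if t == team
    )

# payments fully manages its own apps, and may use (not edit) its db services:
assert allowed("payments", "create", "/Shared/payments")
assert allowed("payments", "use", "/Shared/db/payments")
assert not allowed("payments", "remove", "/Shared/db/payments")
# limited access: payments cannot reach mobile's db resources at all:
assert not allowed("payments", "view", "/Shared/db/mobile")
```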

## OrcaBank access architecture

The resulting access architecture shows applications connecting across
collection boundaries. Multiple grants per team allow the Mobile applications
and databases to connect to the same networks and use the same secrets, so
they can communicate through a secure and controlled interface. Note that
these resources are still deployed across the same group of UCP worker nodes.
Node segmentation is discussed in the [next tutorial](#).

![image](../images/design-access-control-adv-2.png){: .with-border}

### DB team

The `db` team is responsible for deploying and managing the full lifecycle of
the databases used by the application teams. They have the full set of
operations against all database resources.

![image](../images/design-access-control-adv-3.png){: .with-border}

### Mobile team

The `mobile` team is responsible for deploying their own application stack,
minus the database tier, which is managed by the `db` team.

![image](../images/design-access-control-adv-4.png){: .with-border}

## Where to go next

- [Access control design with Docker EE Advanced](access-control-design-ee-advanced.md)

{% endif %}
{% endif %}
Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
---
title: Node access control in Docker EE Advanced
description: Learn how to architect node access with Docker Enterprise Edition Advanced.
keywords: authorize, authentication, node, UCP, role, access control
redirect_from:
- /ucp/
ui_tabs:
- version: ucp-3.0
  orhigher: true
- version: ucp-2.2
  orlower: true
---

{% if include.ui %}
{% if include.version=="ucp-3.0" %}
This topic is under construction.

{% elsif include.version=="ucp-2.2" %}

The ability to segment scheduling and visibility by node is called
*node access control* and is a feature of Docker EE Advanced. By default,
all nodes that aren't infrastructure nodes (UCP and DTR nodes) belong to a
built-in collection called `/Shared`. By default, all application workloads
in the cluster are scheduled on nodes in the `/Shared` collection. This
includes users deploying in their private collections (`/Shared/Private/`)
and in any other collections under `/Shared`. This behavior comes from a
built-in grant that gives every UCP user the `scheduler` capability against
the `/Shared` collection.

Node access control works by placing nodes into custom collections outside of
`/Shared`. If the `scheduler` capability is granted via a role to a user or
group of users against a collection, then they are able to schedule containers
and services on those nodes. In the following example, users with the
`scheduler` capability against `/collection1` are able to schedule
applications on those nodes.

Because these collections lie outside of the `/Shared` collection, users
without explicit grants have no access to them and can only deploy
applications on the built-in `/Shared` collection nodes.

![image](../images/design-access-control-adv-custom-grant.png){: .with-border}
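
The scheduling rule above can be sketched as a small model: a user can schedule on a node only if some grant gives them the `scheduler` capability against the node's collection or an ancestor of it. All names are illustrative, not the UCP API.

```python
# Which collection each node belongs to (illustrative).
node_collections = {
    "node-1": "/Shared",        # default for non-infrastructure nodes
    "node-2": "/collection1",
    "node-3": "/collection2/sub-collection1",
}

# Grants as (subject, capability, collection). The first entry models the
# built-in grant: every user ("*") may schedule against /Shared.
grants = [
    ("*", "scheduler", "/Shared"),
    ("alice", "scheduler", "/collection1"),
]

def can_schedule(user: str, node: str) -> bool:
    coll = node_collections[node]
    return any(
        subject in ("*", user) and cap == "scheduler"
        and (coll == c or coll.startswith(c + "/"))
        for subject, cap, c in grants
    )

assert can_schedule("bob", "node-1")      # everyone can use /Shared nodes
assert can_schedule("alice", "node-2")    # explicit grant on /collection1
assert not can_schedule("bob", "node-2")  # no grant outside /Shared
```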

The tree representation of this collection structure looks like this:

```
/
├── Shared
├── System
├── collection1
└── collection2
    ├── sub-collection1
    └── sub-collection2
```

With the use of default collections, users, teams, and organizations can be
constrained to the nodes and physical infrastructure they are able to deploy
on.

## Where to go next

- [Isolate swarm nodes to a specific team](isolate-nodes-between-teams.md)

{% endif %}
{% endif %}
