@@ -4,6 +4,8 @@ title: "Introducing Shipwright - Part 1"
linkTitle: "Intro to Shipwright (Part 1)"
description: "A framework for building container images on Kubernetes"
author: "Adam Kaplan ([@adambkaplan](https://github.com/adambkaplan))"
+aliases:
+- /blog/2020/10/21/introducing-shipwright-part-1
resources:
- src: "**.{png}"
  title: "Figure #:counter"
@@ -12,10 +14,10 @@ resources:

What is Shipwright? Which problems does this project try to solve?

-In [Part 1](/blog/2020/10/15/introducing-shipwright-part-1) of this series, we'll look back at the history of delivering software applications,
+In [Part 1](docs/blog/posts/2020-10-21-intro-shipwright-pt1/) of this series, we'll look back at the history of delivering software applications,
and how that has changed in the age of Kubernetes and cloud-native development.

-In [Part 2](/blog/2020/11/30/introducing-shipwright-part-2) of this series, we'll introduce Shipwright and the Build APIs that make it simple to
+In [Part 2](docs/blog/posts/2020-11-30-intro-shipwright-pt2) of this series, we'll introduce Shipwright and the Build APIs that make it simple to
build container images on Kubernetes.

## Delivering Your Applications - A History
@@ -31,4 +33,74 @@ laptops, and uploading the JAR to our client's SFTP site. After submitting a tic
a change control review with our client's IT department, our software would be released during a
scheduled maintenance window.

-![](deploy-java-vm)
+{{< figure
+  src="deploy-java-vm.png"
+  width="640px"
+  height="360px"
+>}}
+
+For engineers in larger enterprises, this experience should feel familiar. You may have used C# or
+C++, or, if you were adventurous, tested Ruby on Rails. Perhaps instead of compiling the
+application yourself, a separate release team was responsible for building the application on
+secured infrastructure. Your releases may have undergone extensive acceptance testing in a staging
+environment before being promoted to production (and those practices may still continue today). If
+you were fortunate, some of the release tasks were automated by emerging continuous integration
+tools like Hudson and Jenkins.
+
+## Delivering on Docker and Kubernetes
+
+The emergence of [Docker/Moby](https://mobyproject.org/) and [Kubernetes](https://kubernetes.io/)
+changed the unit of delivery. With both of these platforms, developers package their software in
+container images rather than executables, JAR files, or script bundles. Moving to this method of
+delivery was not a simple task, since many teams had to learn entirely new sets of skills to deploy
+their code.
+
+I first learned of Docker and Kubernetes at a startup I had joined. We used Kubernetes as a means
+to scale our back-end application and break apart our Python-based monolith. To test our
+applications, we built our container images locally with Docker, and ran clusters locally with
+minikube or used a dev cluster set up with our cloud provider. For acceptance testing and
+production releases, we used a third-party continuous integration service to assemble our code into
+a container image, push it to our private container registry (also hosted by our cloud provider),
+and run a set of deployment scripts to upgrade our applications in the respective environment.
+Along the way, we had to learn the intricacies of Docker, assembling our image via
+[Dockerfiles](https://docs.docker.com/engine/reference/builder/), and running Python inside a
+container.
+
+{{< figure
+  src="deploy-k8s-image.png"
+  width="640px"
+  height="360px"
+>}}
+
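+As an illustration, a Dockerfile for a Python service like ours might have looked something like
+the sketch below. The base image tag, file names, and port are hypothetical, not taken from our
+actual project.
+
+```dockerfile
+# Hypothetical minimal Dockerfile for a Python web service.
+FROM python:3.8-slim
+
+WORKDIR /app
+
+# Install dependencies first so this layer is cached between builds.
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy in the application source.
+COPY . .
+
+# The port our deployment scripts expected the service to listen on.
+EXPOSE 8080
+
+CMD ["python", "app.py"]
+```
+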
+What we could not do was build our applications directly on our Kubernetes clusters. At the time,
+the only way to build a container image on "vanilla" Kubernetes was to expose the cluster's Docker
+socket to a running container. Since Docker ran as root, this presented a significant security
+risk: a malicious actor could use our build containers or service accounts to run arbitrary
+workloads on our clusters. Because our CI provider made it easy to build container images, and we
+implicitly trusted the security of their environments, we opted to use their service instead of
+running our container image builds on our clusters.
+
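+To make that risk concrete, the pattern we avoided looked something like the sketch below: a pod
+that mounts the node's Docker socket so a containerized client can drive the host's Docker daemon.
+The names and image tag are hypothetical; the hostPath volume is the dangerous part.
+
+```yaml
+# Sketch of the docker.sock pattern described above. Anything running in
+# this pod can control the node's Docker daemon (effectively root on the
+# host), which is why we avoided it.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: docker-socket-build   # hypothetical name
+spec:
+  containers:
+  - name: builder
+    image: docker:19.03       # a Docker CLI image; tag illustrative
+    command: ["docker", "build", "-t", "registry.example.com/app:latest", "/workspace"]
+    volumeMounts:
+    - name: docker-socket
+      mountPath: /var/run/docker.sock
+  volumes:
+  - name: docker-socket
+    hostPath:
+      path: /var/run/docker.sock   # exposes the host's Docker daemon
+```
+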
+## Creating Container Images Today
+
+Much has changed since the first release of Kubernetes with regard to building container images.
+There are now tools designed to build images from a Dockerfile inside a container, like
+[Kaniko](https://github.com/GoogleContainerTools/kaniko) and [Buildah](https://buildah.io/). Other
+tools like [Source-to-Image](https://github.com/openshift/source-to-image) and
+[Cloud Native Buildpacks](https://buildpacks.io/) go a step further and build images directly from
+source code, without the need to write a Dockerfile. There are even image building tools optimized
+for specific programming languages, such as [Jib](https://github.com/GoogleContainerTools/jib).
+
+{{< figure
+  src="container-tools.png"
+  width="640px"
+  height="360px"
+>}}
+
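+Kaniko, for example, is documented to run as an ordinary pod that fetches source, executes the
+Dockerfile instructions in userspace, and pushes the result, with no Docker socket or privileged
+daemon involved. Here is a sketch using placeholder repository and registry values; a real build
+pushing to a private registry would also need registry credentials, omitted here.
+
+```yaml
+# Sketch of an in-cluster Kaniko build; the Git URL and destination
+# image reference are placeholders.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: kaniko-build   # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: kaniko
+    image: gcr.io/kaniko-project/executor:latest
+    args:
+    - --context=git://github.com/example/app.git
+    - --dockerfile=Dockerfile
+    - --destination=registry.example.com/app:latest
+```
+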
+When it comes to delivering applications on Kubernetes, there is a wide variety of tooling and
+projects available. [Jenkins X](https://jenkins-x.io/) and [Tekton](https://tekton.dev/) are two
+such projects that orchestrate continuous application delivery on Kubernetes. However, there is no
+standard way to produce container images on Kubernetes, nor is there a standard way for build tool
+authors to declare how to use their tool on Kubernetes.
+
+In Part 2 of this series, we aim to address these challenges by introducing Shipwright and the
+Build APIs.