Cloud Computing, Virtualization
This Lesson’s References
General
- Cloud Computing Beginner to Expert with 3 Projects
- ACG Projects: Build Your Resume on Azure with Blob Storage, Functions, CosmosDB, and GitHub Actions
- Learn to Cloud
Docker, Virtualization
- https://docs.docker.com/get-started/ - an excellent text-based, 10-part Docker tutorial.
- Docker CLI Cheatsheet.
- Tour de Force video by Network Chuck.
- Hands On Beginner’s Tutorial, video by Fireship.
- Build YOUR OWN Dockerfile, Image, and Container - Docker Tutorial. By Techno Tim. Excellent and to the point.
- https://qbituniverse.com/category/docker/docker-building-blocks/ - a summary text-based tutorial on Docker. Quite useful as a reference.
A container is a virtualization object for an application. In many ways, a container behaves as a separate platform for your purpose.
Working with Docker requires that the software is installed. Go to https://docs.docker.com/desktop/, find the version for your platform, and install it.
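After installation you can check that both the client and the daemon work; hello-world is Docker's standard verification image:

```shell
# Check that the Docker client is installed
docker --version

# Check that the daemon is running and can pull and run an image
docker run hello-world
```

If the hello-world container prints its greeting, your installation is complete.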
This gives you two possibilities:
The Consumer Approach
Watching Network Chuck's Tour de Force gives you a fast-paced overview of the what and the why of this aspect of virtualization; the video also covers a bit of the how of using containers.
After watching that video, you may proceed to hub.docker.com, create a free account, and from your CLI do
~ $ docker pull alpinelinux/docker-cli
Using default tag: latest
latest: Pulling from alpinelinux/docker-cli
b790c763077d: Pull complete
0fce53124704: Pull complete
45b3ad52eae9: Pull complete
Digest: sha256:e2f552055ebfb831a20af5d864a39a84f82d43b95264180be942a7e2081b5fe8
Status: Downloaded newer image for alpinelinux/docker-cli:latest
docker.io/alpinelinux/docker-cli:latest
~ $ docker run -itd --name alp117 alpinelinux/docker-cli
9e1e13fc8ada1f2f50a07ede6949239251a0879d1c8f311b155b9f228b5f6a8a
~ $ docker exec -it alp117 /bin/ash
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
/ # exit
~ $
in order to practice command-line Linux on your own computer, as we talked about in the previous lesson. You used
docker pull
docker run
# and
docker exec
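When you are done practicing, you can tidy up after yourself; alp117 is the container name chosen in the session above:

```shell
# Stop the running container
docker stop alp117

# Remove the container itself
docker rm alp117

# Optionally remove the downloaded image as well
docker rmi alpinelinux/docker-cli
```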
From the Documentation
The Docker manual shows:
~ $ docker --help
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Common Commands:
run Create and run a new container from an image
exec Execute a command in a running container
ps List containers
build Build an image from a Dockerfile
pull Download an image from a registry
push Upload an image to a registry
images List images
login Log in to a registry
logout Log out from a registry
search Search Docker Hub for images
version Show the Docker version information
info Display system-wide information
Management Commands:
builder Manage builds
buildx* Docker Buildx (Docker Inc., v0.11.2)
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
plugin Manage plugins
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
Swarm Commands:
swarm Manage Swarm
Commands:
attach Attach local standard input, output, and error streams to a running container
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
export Export a container's filesystem as a tar archive
history Show the history of an image
import Import the contents from a tarball to create a filesystem image
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
save Save one or more images to a tar archive (streamed to STDOUT by default)
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
wait Block until one or more containers stop, then print their exit codes
Global Options:
--config string Location of client config files (default
"/home/nml/.docker")
-c, --context string Name of the context to use to connect to the daemon
(overrides DOCKER_HOST env var and default context
set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket to connect to
-l, --log-level string Set the logging level ("debug", "info", "warn",
"error", "fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default
"/home/nml/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"/home/nml/.docker/cert.pem")
--tlskey string Path to TLS key file (default
"/home/nml/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Run 'docker COMMAND --help' for more information on a command.
For more help on how to use Docker, head to https://docs.docker.com/go/guides/
Notice the CLI invocation via docker --help.
The Producer Approach
The other video, Fireship's Hands-On Beginner's Tutorial, also gives a fast-paced walkthrough of the basic concepts and the hands-on mechanics of creating and consuming containers.
Docker, as stated before, allows software produced on one platform to be executed anywhere, on any computer.
Also as stated before, the basic artifacts of docker are
- dockerfiles
- images
- containers
Images may be pulled from Docker hubs and built into containers on the downloading computer. When the container is executed, the de facto execution is as if it were done on the computer where it was built, including the versions of the various software found on the development platform. Hence the term virtualization.
A dockerfile is a blueprint for building a docker image. A docker image
is a template for running docker containers.
A container is just a running process.
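You can see this for yourself; assuming a container such as alp117 from the session earlier is still running, docker ps lists containers and docker top shows the processes inside one:

```shell
# List running containers (add -a to include stopped ones)
docker ps

# Show the processes running inside a specific container
docker top alp117
```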
Practical Example
To emulate the video's activities, we use a static website built previously as a Node/Express application.
$ tree -L 3
.
├── app.js
├── bin
│ └── www
├── err.txt
├── log.txt
├── node_modules
.
.
.
├── package.json
├── package-lock.json
├── public
│ ├── images
│ ├── index.html
│ ├── javascripts
│ │ ├── CandV.js
│ │ ├── diverse.js
│ │ ├── index.js
│ │ ├── indext.js
│ │ ├── indexv.js
│ │ ├── math0.js
│ │ ├── menu.js
│ │ ├── MyMath.js
│ │ ├── nQm.js
│ │ └── TextAnalysis.js
│ ├── pages
│ │ ├── abitofmath.html
│ │ ├── caesar0.html
│ │ ├── diverseText.html
│ │ ├── textAnalysis.html
│ │ └── vigenere0.html
│ └── stylesheets
│ └── style.css
└── routes
├── index.js
└── users.js
73 directories, 284 files
The Node Application Setup File nodelive/staticTsec/app.js
const express = require('express');
const path = require('path');
const cookieParser = require('cookie-parser');
const logger = require('morgan');
const indexRouter = require('./routes/index');
const usersRouter = require('./routes/users');
const app = express();
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));
app.use('/', indexRouter);
app.use('/users', usersRouter);
module.exports = app;
The Dockerfile nodelive/staticTsec/Dockerfile
# reference to a dockerimage from the hub
FROM node:12
# our application root
WORKDIR /app
# copy the package files into our root
COPY package*.json ./
# populate node_modules when building
RUN npm install
# copy local files to docker app
COPY . .
# environment vars for app
ENV PORT=8080
# use that port
EXPOSE 8080
# start the application, per the content of package.json
CMD [ "npm", "start" ]
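One detail worth adding: since COPY . . copies everything in the build context, a .dockerignore file keeps the locally installed node_modules (and the log files from the tree listing above) out of the image; npm install rebuilds node_modules inside the container anyway. A minimal sketch:

```shell
# Create a .dockerignore next to the Dockerfile
cat > .dockerignore <<'EOF'
node_modules
log.txt
err.txt
EOF

# Inspect the result
cat .dockerignore
```

This makes the build context smaller and the build faster.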
Then run the following from the CLI to build the image from the
Dockerfile:
docker build -t arosano0/demoapp:0.9 .
The final dot means building in the current working directory. The -t flag tags the image with a name. Here arosano0 is the name of the builder, demoapp is the application name, and 0.9 is an optional version number. The result:
$ docker build -t arosano0/demoapp:0.9 .
Sending build context to Docker daemon 110.1kB
Step 1/8 : FROM node:12
---> 6c8de432fc7f
Step 2/8 : WORKDIR /app
---> Using cache
---> 7daf2add46cd
Step 3/8 : COPY package*.json ./
---> 5b75f788e70d
Step 4/8 : RUN npm install
---> Running in 488f4b382882
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
added 73 packages from 42 contributors and audited 73 packages in 1.458s
8 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Removing intermediate container 488f4b382882
---> 923f751ffb04
Step 5/8 : COPY . .
---> c6d62012b8d1
Step 6/8 : ENV PORT=8080
---> Running in 02f66e87107f
Removing intermediate container 02f66e87107f
---> 4a598ece7c75
Step 7/8 : EXPOSE 8080
---> Running in 9b6eec275bfc
Removing intermediate container 9b6eec275bfc
---> 5fb8f6e1a037
Step 8/8 : CMD [ "npm", "start" ]
---> Running in f1ac45cbc8ac
Removing intermediate container f1ac45cbc8ac
---> a02b0b6bc585
Successfully built a02b0b6bc585
Successfully tagged arosano0/demoapp:0.9
Then we may run a container from the image by
docker run -p 3334:8080 a02b0b6bc585
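The -p 3334:8080 flag maps port 3334 on the host to port 8080 inside the container, the port the Dockerfile EXPOSEs. Assuming the container started successfully, you can check it like this:

```shell
# The site should now answer on host port 3334
curl http://localhost:3334/

# Find the container id, then inspect the application's log output
docker ps
docker logs <container-id>
```

Note that docker logs takes a container id (from docker ps), not the image id used in docker run.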
An Alternative Container-Creation Demo
This will demo a Linux distribution, Alpine, with Git preinstalled.
The Dockerfile, coursecode/docker/git/Dockerfile
FROM alpine
RUN apk update
RUN apk add git
Then run the following from the CLI to build the image from the Dockerfile:
docker build -t arosano0/alpinewgit:0.9 .
The final dot, again, means building in the current working directory. The -t flag tags the image with a name. Here arosano0 is the name of the builder, alpinewgit is the application name, and 0.9 is an optional version number. The result:
$ docker build -t alpinewgit .
Containerize this and run it with
$ docker run -itd --name dockgit alpinewgit
as in
$ docker exec -it dockgit /bin/ash
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
/ # history
0 which git
1 git --version
2 git config --list
3 git config --global user.name "foo"
4 git config --global user.email "foo@foo.bar"
5 git config --list
6 git branch
7 git init
8 git branch
9 git status
10 git config --global init.defaultBranch main
11 git config --list
12 exit
This demonstrates Git running inside Alpine Linux.
Persistence of Container Data
Create a data repository on the host machine for the container. This way we may persist data created by the container on the host machine; otherwise, data created in the container will disappear when we delete the container.
$ docker run -itd --name dockgit2 --mount source=crosscontainerstuff,target=/stuff alpinewgit
$ docker exec -it dockgit2 /bin/ash
/ # ls
bin etc lib mnt proc run srv sys usr
dev home media opt root sbin stuff tmp var
/ # cd stuff
/stuff # ls
file1.txt file2.txt
/stuff # echo "abc" > file3.txt
/stuff # ls
file1.txt file2.txt file3.txt
/stuff # exit
docker $ docker volume ls
DRIVER VOLUME NAME
local crosscontainerstuff
docker $ docker volume inspect crosscontainerstuff
[
{
"CreatedAt": "2023-10-28T14:55:17+02:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/crosscontainerstuff/_data",
"Name": "crosscontainerstuff",
"Options": null,
"Scope": "local"
}
]
docker $ doas ls -al /var/lib/docker/volumes/crosscontainerstuff/_data
doas (nml@localhost.localdomain) password:
/var/lib/docker/volumes/crosscontainerstuff/_data:
total 20
drwxr-xr-x 2 root root 4096 Oct 28 15:20 .
drwx-----x 3 root root 4096 Oct 28 14:55 ..
-rw-r--r-- 1 root root 4 Oct 28 15:05 file1.txt
-rw-r--r-- 1 root root 4 Oct 28 15:11 file2.txt
-rw-r--r-- 1 root root 4 Oct 28 15:20 file3.txt
docker $
Since the crosscontainerstuff volume is local to your computer, there seems to be no obstacle to mounting the same volume in any later run of the same container, or even in any other container. Thus cross-container persistence of data is achieved.
Please notice, though, that the volume resides in a protected environment only accessible through Docker.
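If you want the data directly visible in an ordinary host directory instead of Docker's protected volume area, a bind mount is an alternative. This sketch uses a host directory ./stuff and a container name dockgit3, both chosen here for illustration:

```shell
# Create a host directory and bind-mount it into the container
mkdir -p "$PWD/stuff"
docker run -itd --name dockgit3 \
  --mount type=bind,source="$PWD/stuff",target=/stuff \
  alpinewgit

# Files written to /stuff inside the container appear directly here
ls -al "$PWD/stuff"
```

Named volumes remain the recommended default; bind mounts trade Docker's management of the data for direct host access.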
Composition (work in progress)
Our source, the Fireship video, makes the prudent recommendation that we confine each container to running a single process.
This entails that we must sometimes execute several containers simultaneously; an example could be running a web server from one and its database tier from another.
Docker caters to that with Docker Compose.
To handle composition we create a docker-compose.yml in our project directory. Where a Dockerfile defines a Docker image, a docker-compose.yml file defines a complex of Docker images meant to work together.
The yml file nodelive/staticTsec/docker-compose.yml
version: '3'
services:
web:
build: .
ports:
- "3000:8080"
db:
image: "mysql"
environment:
MYSQL_ROOT_PASSWORD: "password"
volumes:
- db-data:/foo
volumes:
db-data:
The CLI command docker-compose up creates and runs the required containers from the composed images. The corresponding docker-compose down will stop and remove the containers.
Please notice that, as a matter of course, the volumes are not removed when the containers are torn down; if that is desired, you must specify it in the command.
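The relevant commands; the -v flag on down is what removes the named volumes along with the containers:

```shell
# Create and start all services in the background
docker-compose up -d

# Stop and remove the containers (named volumes survive)
docker-compose down

# Stop and remove the containers AND the named volumes
docker-compose down -v
```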
Discussion of the Previous Lesson's Exercises
Learn to Cloud, Phases 1 and 2
Phases 1 and 2 involved video watching and digestion. Whatever remains from the exercises may be done now.
The Q & A
The exercise for today was posing questions. Here are key issues from the questions we received up until this morning:
Linux
- bash what and why?
- System / Environment variables
- Important commands / CLI
- ls, cd
- cat, more, less
- chmod
- chown
- Pipelines/pipes
- Regular Expressions, search for patterns
- Search for files
- find
- which
- locate
- whereis
Git
- Staging
- Branches
- Commit
- README.md
- Alternatives to Git
Programming
- Control statements
- Exceptions
- Comments
Networking
- SSH / HTTPS
- VPN
- TLS / SSL
Exercises
In Learn to Cloud, referenced above, we have Phases 0 through 5,
and then projects.
- In Phase 3, you familiarized yourselves with a lot of the Hows of the cloud platforms.
- This lesson's exercise: Phase 3 touches on the actual cloud by means of an example, and the Projects section has three more. The resume project is the one Gwyn mentioned as the first example in the video Cloud Computing Beginner to Expert with 3 Projects. There are links to three versions: AWS, Azure, and GCP. Pick one, entirely your own choice, watch the video, and then work with this example project from now until the next lesson.