// file      : doc/manual.cli
// copyright : Copyright (c) 2014-2017 Code Synthesis Ltd
// license   : MIT; see accompanying LICENSE file

"\name=build2-buildos-manual"
"\subject=buildos"
"\title=Operating System"

// NOTES
//
// - Maximum <pre> line is 70 characters.
//

"
\h0#preface|Preface|

This document describes \c{buildos}, the \c{build2} operating system.

\h1#intro|Introduction|

\c{buildos} is a Debian GNU/Linux-based in-memory network-booted operating
system specialized for autonomous building of software using the \c{build2}
toolchain. Its primary purpose is to run the \c{build2} build bot (\c{bbot}),
build slave (\c{bslave}), or both.

A machine that runs an instance of \c{buildos} is called a \i{build host}. A
build host runs \c{bbot} and/or \c{bslave} in the \i{agent mode}. The actual
building is performed in virtual machines and/or containers. For \c{bbot}
these are normally one-shot virtual machines while for \c{bslave} they are
normally containers but can also be long-running virtual machines. Inside the
virtual machines/containers, \c{bbot} and \c{bslave} run in the \i{worker
mode} and receive \i{build tasks} from their respective agents.

\h1#boot|Booting|

\c{buildos} is normally booted from the network using PXE but can also be
booted locally from initrd directly.

\h2#boot-net|Network|

Here we assume that you have already established your PXE setup using
PXELINUX. That is, you have configured a TFTP server that hosts the
\c{pxelinux} initial bootstrap program (NBP) and configured a DHCP
server to direct PXE clients to this server/NBP.

To set up PXE boot of \c{buildos}, perform the following steps:

\ol|

\li|Copy the kernel image and initrd to the TFTP server. For example:

\
# mkdir -p /var/lib/tftpboot/buildos
# cp buildos-image buildos-initrd /var/lib/tftpboot/buildos/
\

|

\li|Assuming the host has MAC address \c{de:ad:be:ef:b8:da}, create a
    host-specific configuration file (or use \c{default} as the last path
    component for a configuration that applies to all the hosts):

\
# cat <<EOF >/var/lib/tftpboot/pxelinux.cfg/01-de-ad-be-ef-b8-da
default buildos
prompt 1
timeout 50

label buildos
  menu label buildos
  kernel /buildos/buildos-image
  initrd /buildos/buildos-initrd
  append buildos.smtp_relay=example.org buildos.admin_email=admin@example.org
EOF
\

|

\li|You can test the setup using QEMU/KVM, for example:

\
$ sudo kvm \
  -m 8G \
  -netdev tap,id=net0,script=./qemu-ifup \
  -device e1000,netdev=net0,mac=de:ad:be:ef:b8:da \
  -boot n
\

||

\h2#boot-local|Local|

During testing it is often useful to boot \c{buildos} directly from the
kernel image and initrd files. As an example, here is how this can be done
using QEMU/KVM:

\
$ sudo kvm \
  -m 8G \
  -netdev tap,id=net0,script=./qemu-ifup \
  -device e1000,netdev=net0,mac=de:ad:be:ef:b8:da \
  -kernel buildos-image -initrd buildos-initrd
\

\h1#config|Configuration|

\h2#config-net|Network|

Network is configured via DHCP. Initially, all Ethernet interfaces that have
carrier are tried in (some) order and the first interface that is successfully
configured via DHCP is used.

The hostname is configured from the DHCP information. Failing that, a name is
generated based on the MAC address, in the form \c{build-xxxxxxxxxx}.
@@ Maybe also kernel cmdline?

Based on the discovered Ethernet interface, two bridge interfaces are
configured: \c{br0} is a public bridge that includes the Ethernet interface
and is configured via DHCP while \c{br1} is a private bridge NAT'ed to
\c{br0} with \c{dnsmasq} configured as a DHCP server on this interface.

Normally, \c{br0} is used for \c{bslave} virtual machines/containers (since
they may need to be accessed directly) and \c{br1} \- for \c{bbot} virtual
machines. You can view the bridge configuration on a booted \c{buildos}
instance by examining \c{/etc/network/interfaces}.
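
As an illustration, the resulting configuration might look along these
lines (a sketch only; the actual interface names, addresses, and options
are determined at boot):

\
auto br0
iface br0 inet dhcp          # Public bridge, configured via DHCP.
  bridge_ports eth0          # Includes the Ethernet interface.

auto br1
iface br1 inet static        # Private bridge, NAT'ed to br0.
  address 10.0.0.1           # Assumed private network address.
  netmask 255.255.255.0
  bridge_ports none
\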

@@ TODO: private network parameters.

\h2#config-email|Email|

A \c{buildos} instance sends various notifications (including all messages to
\c{root}) to the admin email address. The admin email is specified with
the \c{buildos.admin_email} kernel command line parameter.

In order to deliver mail, the \c{postfix} MTA is configured to forward to a
relay. The relay host is specified with the \c{buildos.smtp_relay} kernel
command line parameter.
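
For instance, passing \c{buildos.smtp_relay=example.org} on the kernel
command line would result in a \c{postfix} configuration roughly
equivalent to the following \c{main.cf} fragment (a sketch for
illustration only):

\
relayhost = example.org
\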

Note that no authentication of any kind is configured for relaying. This means
that the relay host should accept emails from build hosts either because of
their network location (for example, because they are on your organization's
local network and you are using your organization's relay) or because the
relay host accepts emails send to the admin address from anyone (which is
normally the case if the relay is the final destination for the admin
address, for example, \c{example.org} and \c{admin@example.org}).

\h2#config-storage|Storage|

\c{buildos} configures storage based on the labels assigned to disks and
partitions (collectively referred to as disks from now on).

For virtual machine and container storage you can use either a single disk,
in which case it should be labeled \c{buildos.machines}, or multiple disks,
in which case they should be labeled \c{buildos.machines.<volume>}. In both
cases the disks must be formatted as \c{btrfs}.

In a single disk configuration, the disk is mounted as \c{/build/machines}. In
a multi-disk configuration, each disk is mounted as
\c{/build/machines/<volume>}.
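
As an example, a hypothetical two-volume configuration (with assumed
volume names \c{ssd} and \c{hdd} and assumed device names) could be
prepared like this:

\
# mkfs.btrfs -L buildos.machines.ssd -m single /dev/sda
# mkfs.btrfs -L buildos.machines.hdd -m single /dev/sdb
\

After boot these disks would be mounted as \c{/build/machines/ssd} and
\c{/build/machines/hdd}, respectively.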

If no disks are found for the required storage, then the boot process is
interrupted with a shell prompt where you can format and/or label a suitable
disk. You can also view the storage configuration on a booted \c{buildos}
instance by examining \c{/etc/fstab}.

As an example, let's consider the first boot of a clean machine that has a 1TB
SSD disk as \c{/dev/sda} and which we would like to use for virtual machine
storage. We would also like to over-provision this SSD by 10% to (potentially)
prolong its life and increase performance (you may want to skip this step if
you are using a datacenter-grade SSD that would normally already be generously
over-provisioned).

On the first boot we will be presented with a shell prompt which we use to
over-provision the disk:

\
# fdisk -l /dev/sda            # Query disk information.
# hdparm -N /dev/sda           # Query disk/host protection area sizes.
# hdparm -Np<COUNT> /dev/sda   # COUNT = sector count * 0.9
# hdparm -N /dev/sda           # Verify disk/host protection area sizes.
# ^D                           # Exit shell and reboot.
\
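
To illustrate the \c{COUNT} calculation, for a hypothetical disk with
1953525168 sectors (as reported by \c{fdisk -l}), reserving 10% gives:

\
count=$((1953525168 * 9 / 10))  # sector count * 0.9, rounded down
echo $count                     # 1758172651
\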

After the reboot we will be presented with a shell prompt again where we
confirm over-provisioning, format the disk as \c{btrfs}, and label it as
\c{buildos.machines}:

\
# fdisk -l /dev/sda            # Confirm disk size decreased by 10%.
# mkfs.btrfs -L buildos.machines -m single /dev/sda
# ^D                           # Exit shell and reboot.
\
"