Diffstat (limited to 'doc')
-rw-r--r--  doc/bash-style.cli                                         461
-rw-r--r--  doc/build2-cxx-style.txt (renamed from doc/cxx-style.txt)    0
-rwxr-xr-x  doc/cli.sh                                                   2
-rw-r--r--  doc/release.cli                                            280
m---------  doc/style                                                    0
5 files changed, 634 insertions, 109 deletions
diff --git a/doc/bash-style.cli b/doc/bash-style.cli
index ef81af2..4edc984 100644
--- a/doc/bash-style.cli
+++ b/doc/bash-style.cli
@@ -15,7 +15,7 @@
\h1#intro|Introduction|
Bash works best for simple tasks. Needing arrays, arithmetic, and so on, is
-usually a good indication that the task at hand is too complex for Bash.
+usually a good indication that the task at hand may be too complex for Bash.
Most of the below rules can be broken if there is a good reason for it.
Besides making things consistent, rules free you from having to stop and think
@@ -31,8 +31,11 @@ the former provides a lot more rationale compared to this guide.
\h1#style|Style|
Don't use any extensions for your scripts. That is, call it just \c{foo}
-rather than \c{foo.sh} or \c{foo.bash}. Use lower-case letters and dash
-to separate words, for example \c{foo-bar}.
+rather than \c{foo.sh} or \c{foo.bash} (though we do use the \c{.bash}
+extension for
+\l{https://build2.org/build2/doc/build2-build-system-manual.xhtml#module-bash
+Bash modules}). Use lower-case letters and dash to separate words, for example
+\c{foo-bar}.
Indentation is two spaces (not tabs). Maximum line length is 79 characters
(excluding newline). Use blank lines between logical blocks to improve
@@ -45,16 +48,29 @@ For \c{if}/\c{while} and \c{for}/\c{do} the corresponding \c{then} or \c{do}
is written on the same line after a semicolon, for example:
\
-if [ ... ]; then
+if [[ ... ]]; then
+ ...
fi
for x in ...; do
+ ...
done
\
-For \c{if} use \c{[ ]} for basic tests and \c{[[ ]]} only if the previous form
-is not sufficient. Use \c{test} for filesystem tests (presence of files,
-etc). Do use \c{elif}.
+Do use \c{elif} instead of nested \c{else} and \c{if} (and consider whether
+\c{case} can be used instead).
+
+For \c{if}/\c{while} use \c{[[ ]]} since it results in cleaner code for
+complex expressions, for example:
+
+\
+if [[ \"$foo\" && (\"$bar\" || \"$baz\") ]]; then
+ ...
+fi
+\
+
+\N|If for some reason you need the semantics of \c{[}, use \c{test} instead to
+make it clear this is intentional.|
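To make the difference concrete, here is a hypothetical sketch (variable names are made up) of why \c{[[\ ]]} yields cleaner code for values that can contain spaces, while \c{test} (and \c{[}) subjects unquoted expansions to word splitting:

```shell
x="a b"

# Inside [[ ]] there is no word splitting, so even an unquoted
# expansion compares as a single word (real scripts still quote,
# per the quoting rules below; left unquoted here to illustrate).
r1=
if [[ $x = "a b" ]]; then r1=match; fi

# test word-splits the unquoted expansion into two words, which
# produces a usage error rather than a comparison.
r2=ok
test $x = "a b" 2>/dev/null || r2=error
```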
\h1#struct|Structure|
@@ -73,7 +89,10 @@ usage=\"usage: $0 <OPTIONS>\"
owd=\"$(pwd)\"
trap \"{ cd '$owd'; exit 1; }\" ERR
-set -o errtrace # Trap in functions.
+set -o errtrace # Trap in functions and subshells.
+set -o pipefail # Fail if any pipeline command fails.
+shopt -s lastpipe # Execute last pipeline command in the current shell.
+shopt -s nullglob # Expand no-match globs to nothing rather than themselves.
function info () { echo \"$*\" 1>&2; }
function error () { info \"$*\"; exit 1; }
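The effect of the two less common options can be sketched as follows (a standalone illustration, not part of the template above; names are made up):

```shell
shopt -s lastpipe  # Execute last pipeline command in the current shell.
shopt -s nullglob  # Expand no-match globs to nothing rather than themselves.

# With nullglob, a pattern that matches nothing yields zero words,
# so the array stays empty instead of holding the literal pattern.
files=(/no-such-prefix-*)
count="${#files[@]}"

# With lastpipe (and job control off, as in scripts), read runs in
# the current shell, so the variables survive the pipeline.
echo "one two" | read -r first rest
```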
@@ -131,7 +150,7 @@ file=
Parse the command line options/arguments. For example:
\
-while [ \"$#\" -gt 0 ]; do
+while [[ \"$#\" -gt 0 ]]; do
case \"$1\" in
-q)
quiet=\"y\"
@@ -143,7 +162,7 @@ while [ \"$#\" -gt 0 ]; do
shift
;;
*)
- if [ -n \"$file\" ]; then
+ if [[ -n \"$file\" ]]; then
error \"$usage\"
fi
@@ -155,19 +174,19 @@ done
\
If the value you are expecting from the command line is a directory path,
-the always strip the trailing slash (as shown above for the \c{-t} option).
+then always strip the trailing slash (as shown above for the \c{-t} option).
\h#struct-opt-arg-valid|OPTIONS-ARGUMENTS-VALIDATION|
Validate option/argument values. For example:
\
-if [ -z \"$file\" ]; then
+if [[ -z \"$file\" ]]; then
error \"$usage\"
fi
-if [ ! -d \"$file\" ]; then
- fail \"'$file' does not exist or is not a directory\"
+if [[ ! -d \"$file\" ]]; then
+ error \"'$file' does not exist or is not a directory\"
fi
\
@@ -182,11 +201,14 @@ functions, then define them just before use.
We quote every variable expansion, no exceptions. For example:
\
-if [ -n \"$foo\" ]; then
+if [[ -n \"$foo\" ]]; then
...
fi
\
+\N|While there is no word splitting in the \c{[[ ]]} context, we still quote
+variable expansions for consistency.|
+
This also applies to command substitution (which we always write as
\c{$(foo arg)} rather than \c{`foo arg`}), for example:
@@ -201,28 +223,42 @@ list=\"$(basename \"$1\")\"
\
We also quote values that are \i{strings} as opposed to options/file names,
-paths, or integers. If setting a variable that will contain one of these
-unquoted values, try to give it a name that reflects its type (e.g.,
-\c{foo_file} rather than \c{foo_name}). Prefer single quotes for \c{sed}
+paths, enum-like values, or integers. Prefer single quotes for \c{sed}
scripts, for example:
\
-proto=\"https\"
-quiet=\"y\"
-verbosity=1
-dir=/etc
-out=/dev/null
-file=manifest
-seds='s%^./%%'
+url=\"https://example.org\" # String.
+quiet=y # Enum-like.
+verbosity=1 # Integer.
+dir=/etc # Directory path.
+out=/dev/null # File path.
+file=manifest # File name.
+option=--quiet # Option name.
+seds='s%^./%%' # sed script.
\
-Note that quoting will inhibit globbing so you may end up with expansions
-along these lines:
+Take care to quote globs that are not meant to be expanded, for example:
+
+\
+unset \"array[0]\"
+\
+
+And since quoting will inhibit globbing, you may end up with expansions along
+these lines:
\
rm -f \"$dir/$name\".*
\
+Note also that globbing is not performed in the \c{[[ ]]} context so this is
+ok:
+
+\
+if [[ -v array[0] ]]; then
+ ...
+fi
+\
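A runnable sketch of both points together (the array name is made up):

```shell
array=(first)

# No globbing in the [[ ]] context, so the subscript needs no quoting.
r1=
if [[ -v array[0] ]]; then r1=set; fi

# Outside [[ ]], quote the glob so it is not expanded.
unset "array[0]"

r2=
if [[ ! -v array[0] ]]; then r2=unset; fi
```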
+
\N|One exception to this quoting rule is arithmetic expansion (\c{$((\ ))}):
Bash treats it as if it was double-quoted and, as a result, any inner quoting
is treated literally. For example:
@@ -243,14 +279,13 @@ typical example of a space-aware argument handling:
\
files=()
-while [ \"$#\" -gt 0 ]; do
+while [[ \"$#\" -gt 0 ]]; do
case \"$1\" in
...
*)
- shift
- files=(\"${files[@]}\" \"$1\")
+ files+=(\"$1\")
shift
;;
esac
@@ -279,58 +314,87 @@ echo \"files: ${files[@]}\" # $1='files: one', $2='2 two', $3='three'
echo \"files: ${files[*]}\" # $1='files: one 2 two three'
\
-\h1#trap|Trap|
-Our scripts use the error trap to automatically terminate the script in case
-any command fails. If you need to check the exit status of a command, use
-\c{if}, for example:
+\h1#bool|Boolean|
-\
-if grep \"foo\" /tmp/bar; then
- info \"found\"
-fi
+For boolean values use empty for false and \c{true} for true. This way you
+can have terse and natural looking conditions, for example:
-if ! grep \"foo\" /tmp/bar; then
- info \"not found\"
-fi
\
+first=true
+while ...; do
-Note that the \c{if}-condition can be combined with capturing the output, for
-example:
+ if [[ ! \"$first\" ]]; then
+ ...
+ fi
-\
-if v=\"$(...)\"; then
- ...
-fi
+ if [[ \"$first\" ]]; then
+ first=
+ fi
+
+done
\
-If you need to ignore the exit status, you can use \c{|| true}, for example:
+
+\h1#subshell|Subshell|
+
+Bash executes certain constructs in \i{subshells} and some of these constructs
+may not be obvious:
+
+\ul|
+
+\li|Explicit subshell: \c{(...)}|
+
+\li|Pipeline: \c{...|...}|
+
+\li|Command substitution: \c{$(...)}|
+
+\li|Process substitution: \c{<(...)}, \c{>(...)}|
+
+\li|Background: \c{...&}, \c{coproc ...}|
+
+|
+
+Naturally, a subshell cannot modify any state in the parent shell, which
+sometimes leads to counter-intuitive behavior, for example:
\
-foo || true
-\
+lines=()
+... | while read l; do
+ lines+=(\"$l\")
+done
+\
-\h1#bool|Boolean|
+At the end of the loop, \c{lines} will remain empty since the loop body is
+executed in a subshell. One way to resolve this is to use process
+substitution instead of the pipeline:
-For boolean values use empty for false and \c{true} for true. This way you
-can have terse and natural looking conditions, for example:
+\
+lines=()
+while read l; do
+ lines+=(\"$l\")
+done < <(...)
\
-first=true
-while ...; do
- if [ ! \"$first\" ]; then
- ...
- fi
+This, however, results in unnatural, backwards-looking (compared to the
+pipeline) code. Instead, we can request that the last command of the
+pipeline be executed in the parent shell with the \c{lastpipe} shell option,
+for example:
- if [ \"$first\" ]; then
- first=
- fi
+\
+shopt -s lastpipe
+lines=()
+
+... | while read l; do
+ lines+=(\"$l\")
done
\
+\N|The \c{lastpipe} shell option is inherited by functions and subshells.|
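A self-contained version of the \c{lastpipe} variant above (\c{printf} stands in for the producing command):

```shell
shopt -s lastpipe

lines=()
printf '%s\n' one two three | while read -r l; do
  lines+=("$l")
done

# The loop body ran in the current shell, so the array is populated.
n="${#lines[@]}"
```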
+
+
\h1#function|Functions|
If a function takes arguments, provide a brief usage after the function
@@ -347,8 +411,8 @@ For non-trivial/obvious functions also provide a short description of its
functionality/purpose, for example:
\
-# Prepare a distribution of the specified packages and place it into the
-# specified directory.
+# Prepare a distribution of the specified packages and place it
+# into the specified directory.
#
function dist() # <pkg> <dir>
{
@@ -367,7 +431,7 @@ function dist()
If the evaluation of the value may fail (e.g., it contains a program
substitution), then place the assignment on a separate line since \c{local}
-will cause the error to be ignore. For example:
+will cause the error to be ignored. For example:
\
function dist()
@@ -377,10 +441,273 @@ function dist()
}
\
+A function can return data in two primary ways: exit code and stdout.
+Normally, exit code 0 means success and exit code 1 means failure though
+additional codes can be used to distinguish between different kinds of
+failures (for example, \"hard\" and \"soft\" failures), signify special
+conditions, etc., see \l{#error-handing Error Handling} for details.
+
+A function can also write to stdout with the result available to the caller in
+the same way as from programs (command substitution, pipeline, etc). If a
+function needs to return multiple values, then it can print them separated
+with newlines with the caller using the \c{readarray} builtin to read them
+into an indexed array, for example:
+
+\
+function func ()
+{
+ echo one
+ echo two
+ echo three
+}
+
+func | readarray -t r
+\
+
+\N|The use of the newline as a separator means that values may not contain
+newlines. While \c{readarray} supports specifying a custom separator with the
+\c{-d} option, including a \c{NUL} separator, this support is only available
+since Bash 4.4.|
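Note also that \c{func | readarray -t r} itself relies on \c{lastpipe} (set in the script template above) so that \c{readarray} runs in the current shell; a self-contained sketch:

```shell
shopt -s lastpipe

function func ()
{
  echo one
  echo two
  echo three
}

func | readarray -t r
n="${#r[@]}"
```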
+
+This technique can also be extended to return an associative array by first
+returning the values as an indexed array and then converting them to
+an associative array with \c{eval}, for example:
+
+\
+function func ()
+{
+ echo \"[a]=one\"
+ echo \"[b]=two\"
+ echo \"[c]=three\"
+}
+
+func | readarray -t ia
+
+eval declare -A aa=(\"${ia[@]}\")
+\
+
+Note that if a key or a value contains whitespace, then it must be quoted.
+The recommendation is to always quote both, for example:
+
+\
+function func ()
+{
+ echo \"['a']='one ONE'\"
+ echo \"['b']='two'\"
+ echo \"['c']='three'\"
+}
+\
+
+Or, if returning a local array:
+
+\
+function func ()
+{
+ declare -A a=([a]='one ONE' [b]=two [c]=three)
+
+ for k in \"${!a[@]}\"; do
+ echo \"['$k']='${a[$k]}'\"
+ done
+}
+\
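Putting the pieces together, a runnable round-trip of the associative-array technique (again assuming \c{lastpipe} is in effect, as in the script template):

```shell
shopt -s lastpipe

function func ()
{
  declare -A a=([a]='one ONE' [b]=two [c]=three)

  for k in "${!a[@]}"; do
    echo "['$k']='${a[$k]}'"
  done
}

# Read the printed pairs into an indexed array, then convert it to
# an associative array with eval.
func | readarray -t ia
eval declare -A aa=("${ia[@]}")
v="${aa[a]}"
```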
+
For more information on returning data from functions, see
\l{https://mywiki.wooledge.org/BashFAQ/084 BashFAQ#084}.
-For more information on writing reusable functions, see
-\l{https://stackoverflow.com/questions/11369522/bash-utility-script-library
-Bash Utility Script Library}.
+
+\h1#error-handing|Error Handling|
+
+Our scripts use the \c{ERR} trap to automatically terminate the script in case
+any command fails. This semantics is also propagated to functions and subshells
+by specifying the \c{errtrace} shell option and to all the commands of a
+pipeline by specifying the \c{pipefail} option.
+
+\N|Without \c{pipefail}, a non-zero exit of any command in the pipeline except
+the last is ignored. The \c{pipefail} shell option is inherited by functions
+and subshells.|
+
+\N|While the \c{nounset} option may also seem like a good idea, it has
+subtle, often latent pitfalls that make it more trouble than it's worth (see
+\l{https://mywiki.wooledge.org/BashPitfalls#nounset \c{nounset} pitfalls}).|
+
+The \c{pipefail} semantics is not without pitfalls, which should be kept in
+mind. In particular, if a command in a pipeline exits before reading the
+preceding command's output in its entirety, such a command may exit with a
+non-zero exit status (see \l{https://mywiki.wooledge.org/BashPitfalls#pipefail
+\c{pipefail} pitfalls} for details).
+
+\N|Note that in such a situation the preceding command may exit with zero
+status not only because it gracefully handled \c{SIGPIPE} but also because all
+of its output happened to fit into the pipe buffer.|
+
+For example, these are the two common pipelines that may exhibit this issue:
+
+\
+prog | head -n 1
+prog | grep -q foo
+\
+
+In these two cases, the simplest (though not the most efficient) way to work
+around this issue is to reimplement \c{head} with \c{sed} and to get rid of
+\c{-q} in \c{grep}, for example:
+
+\
+prog | sed -n -e '1p'
+prog | grep foo >/dev/null
+\
+
+If you need to check the exit status of a command, use \c{if}, for example:
+
+\
+if grep -q \"foo\" /tmp/bar; then
+ info \"found\"
+fi
+
+if ! grep -q \"foo\" /tmp/bar; then
+ info \"not found\"
+fi
+\
+
+Note that the \c{if}-condition can be combined with capturing the output, for
+example:
+
+\
+if v=\"$(...)\"; then
+ ...
+fi
+\
+
+But keep in mind that in Bash a failure is often indistinguishable from a
+true/false result. For example, in the above \c{grep} command, the result will
+be the same whether there is no match or the file does not exist.
+
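One way to tell these apart, while keeping the command out of \c{ERR}-trap contexts, is to capture the status in a \c{||} list and rely on \c{grep}'s documented exit codes (0 match, 1 no match, >1 error); the file name here is hypothetical:

```shell
s=0
grep -q "foo" /tmp/no-such-file 2>/dev/null || s="$?"

case "$s" in
  0) r=found ;;
  1) r=not-found ;;
  *) r=error ;;  # For example, the file does not exist.
esac
```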
+Furthermore, in certain contexts, the above-mentioned \c{ERR} trap is ignored.
+Quoting from the Bash manual:
+
+\i{The \c{ERR} trap is not executed if the failed command is part of the
+command list immediately following an \c{until} or \c{while} keyword, part of
+the test following the \c{if} or \c{elif} reserved words, part of a command
+executed in a \c{&&} or \c{||} list except the command following the final
+\c{&&} or \c{||}, any command in a pipeline but the last, or if the command’s
+return status is being inverted using \c{!}. These are the same conditions
+obeyed by the \c{errexit} (\c{-e}) option.}
+
+To illustrate the gravity of this point, consider the following example:
+
+\
+function cleanup()
+{
+ cd \"$1\"
+ rm -f *
+}
+
+if ! cleanup /no/such/dir; then
+ ...
+fi
+\
+
+Here, the \c{cleanup()} function will continue executing (and may succeed)
+even if the \c{cd} command has failed.
+
+Note, however, that notwithstanding the above statement from the Bash manual,
+the \c{ERR} trap is executed inside all the subshell commands of a pipeline
+provided the \c{errtrace} option is specified. As a result, the above code can
+be made to work by temporarily disabling \c{pipefail} and reimplementing it as
+a pipeline:
+
+\
+set +o pipefail
+cleanup /no/such/dir | cat
+r=\"${PIPESTATUS[0]}\"
+set -o pipefail
+
+if [[ \"$r\" -ne 0 ]]; then
+ ...
+fi
+\
+
+\N|Here, if \c{cleanup}'s \c{cd} fails, the \c{ERR} trap will be executed in
+the subshell, causing it to exit with an error status, which the parent shell
+then makes available in \c{PIPESTATUS}.|
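The essence of the trick, reduced to a runnable fragment (\c{false} stands in for the failing function):

```shell
set +o pipefail  # The trick requires pipefail disabled.

# The pipeline as a whole "succeeds" (cat exits 0), but PIPESTATUS
# still records the status of every command in it.
false | cat
r="${PIPESTATUS[0]}"
```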
+
+The recommendation is then to avoid calling functions in contexts where the
+\c{ERR} trap is ignored, resorting to the above pipe trick where that's not
+possible. And to be mindful of the potential ambiguity between the true/false
+result and failure for other commands. The use of the \c{&&} and \c{||}
+command expressions is best left to the interactive shell.
+
+\N|The pipe trick cannot be used if the function needs to modify the global
+state. Such a function, however, might as well return the exit status also as
+part of the global state. The pipe trick can also be used to ignore the exit
+status of a command.|
+
+The pipe trick can also be used to distinguish between different exit codes,
+for example:
+
+\
+function func()
+{
+ bar # If this command fails, the function returns 1.
+
+ if ... ; then
+ return 2
+ fi
+}
+
+set +o pipefail
+func | cat
+r=\"${PIPESTATUS[0]}\"
+set -o pipefail
+
+case \"$r\" in
+ 0)
+ ;;
+ 1)
+ exit 1
+ ;;
+ 2)
+ ...
+ ;;
+esac
+\
+
+\N|In such functions it makes sense to keep exit code 1 to mean failure so
+that the inherited \c{ERR} trap can be re-used.|
+
+This technique can be further extended to implement functions that both
+return multiple exit codes and produce output, for example:
+
+\
+function func()
+{
+ bar # If this command fails, the function returns 1.
+
+ if ... ; then
+ return 2
+ fi
+
+ echo result
+}
+
+set +o pipefail
+func | readarray -t ro
+r=\"${PIPESTATUS[0]}\"
+set -o pipefail
+
+case \"$r\" in
+ 0)
+ echo \"${ro[0]}\"
+ ;;
+ 1)
+ exit 1
+ ;;
+ 2)
+ ...
+ ;;
+esac
+\
+
+\N|We use \c{readarray} instead of \c{read} since the latter fails if the left
+hand side of the pipeline does not produce anything.|
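The difference can be seen with an empty producer (\c{true} stands in for a function with no output; \c{lastpipe} assumed as before):

```shell
shopt -s lastpipe
set +o pipefail

out=()
true | readarray -t out        # readarray succeeds on empty input.
n="${#out[@]}"

r=0
true | read -r line || r="$?"  # read returns non-zero at EOF.
```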
+
diff --git a/doc/cxx-style.txt b/doc/build2-cxx-style.txt
index 8e968b8..8e968b8 100644
--- a/doc/cxx-style.txt
+++ b/doc/build2-cxx-style.txt
diff --git a/doc/cli.sh b/doc/cli.sh
index 458ff87..3f10993 100755
--- a/doc/cli.sh
+++ b/doc/cli.sh
@@ -6,7 +6,7 @@ set -o errtrace # Trap in functions.
function info () { echo "$*" 1>&2; }
function error () { info "$*"; exit 1; }
-copyright="$(sed -n -re 's%^Copyright \(c\) (.+)\.$%\1%p' ../COPYRIGHT)"
+copyright="$(sed -n -re 's%^Copyright \(c\) (.+) \(see the AUTHORS file\)\.$%\1%p' ../LICENSE)"
while [ $# -gt 0 ]; do
case $1 in
diff --git a/doc/release.cli b/doc/release.cli
index 6def002..f02db11 100644
--- a/doc/release.cli
+++ b/doc/release.cli
@@ -17,6 +17,13 @@
Review the state and services list (currently on paper) for any new additions.
Consider how/when they are updated/tested during the release process.
+@@ We have switched to a single configuration for the entire toolchain
+ (plus -libs).
+
+@@ We currently have an issue in that \c{queue} builds \c{public} using
+\c{public} \c{buildtabs} (since it's querying \c{public} brep) which means
+existing packages are not tested with new build configurations. But maybe
+that's correct, conceptually.
\h1#stage|Stage|
@@ -27,8 +34,10 @@ any of these dependencies are in the unreleased state, then they should go
through the applicable steps in this section (e.g., updating of \c{NEWS}, etc)
and then be queued and published (effectively released) as part of the
\c{build2} release. Generally, however, we should strive to not unnecessarily
-bundle the release of dependencies with the release of \c{build2} to keep
-the process as streamlined as possible.
+bundle the release of dependencies with the release of \c{build2} to keep the
+process as streamlined as possible. In fact, we now have \c{queue.stage} (see
+\c{etc/stage-queue}) which is the place for such \"extra packages for
+testing\".
\N|When unbundling the release of a dependency we need to remove its
distribution from \c{etc/stage} and add the pre-distributed packages
@@ -44,6 +53,14 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Use the \c{git/copyright} script. See the script header for instructions.
+ Note that most (all?) of our projects have been converted to the automatic
+ copyright update notification via a pre-commit hook. So perhaps this step
+ can be removed.
+
+\h#install-times|Update install script times.|
+
+ See \c{private/install/build2-times.txt} for instructions.
+
\h#etc|Update \c{etc/git/modules}|
Review for any new modules. Remove \c{etc/} and \c{private/} from
@@ -51,7 +68,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#review|Review \c{@@}|
- Review \c{@@} notes:
+ Review \c{etc/review} for new modules. Then review \c{@@} notes:
\
etc/review | less -R
@@ -81,7 +98,10 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#packaging|Release \c{packaging/} dependencies|
Release dependencies that have the snapshot version as described in
- \c{private/build2-packaging.txt}.
+ \l{https://github.com/build2-packaging/README#version-and-release-management
+ Version and Release Management}.
+
+ Also, consider upgrading the \c{curl/ca-certificates-curl}.
\N|Maybe this should be done during queuing? Why do we release (but not
publish) these now and other dependencies later? Maybe so that we can
@@ -113,6 +133,42 @@ distribution from \c{etc/stage} and add the pre-distributed packages
etc/upgrade
\
+ Or, if released ODB on the previous step, update \c{libodb*} version
+ constraint using \c{etc/version}, commit, and then:
+
+ \
+ # Step 0: make sure b, bpkg, bdep are runnable.
+
+ # Step 1a: pull CLI compiler and build it, make sure runnable.
+ # Step 1b: pull ODB compiler and build it, make sure runnable.
+
+ # Step 2: pull everything in build2 (including brep).
+
+ # Step 3: upgrade (in configure-only mode).
+ #
+ # If this step goes sideways, use the saved .bak configurations to
+ # revert, fix the issue, and try again. If unable to fix the issue
+ # (for example, the toolchain is not runnable), then do the from-
+ # scratch bootstrap using the etc/bootstrap script.
+ #
+ etc/upgrade -c
+
+ # Step 4: trigger header regeneration (ignore ODB version errors).
+ #
+ BDEP_SYNC=0 b --match-only ~/work/build2/builds/gcc7/
+
+ # Step 5: regenerate ODB code in the relevant projects (bpkg, bdep, brep):
+ #
+ ./odb.sh ~/work/build2/builds/gcc7/
+
+ # Step 6: finish toolchain update:
+ #
+ BDEP_SYNC=0 b build2/ bpkg/ bdep/
+ b build2/ bpkg/ bdep/ # Should be noop.
+ \
+
+ Then push \c{build2} repositories.
+
\h#hello|Update \c{hello/} projects|
@@ -131,17 +187,19 @@ distribution from \c{etc/stage} and add the pre-distributed packages
git push ...
\
- Once done, run the \c{intro} scripts and review any changes in the output
- (this information will be helpful on the next step):
+ Once done, make sure the latest \c{libhello} revision is on stage, run the
+ \c{intro} scripts, and review any changes in the output (this information
+ will be helpful on the next step):
\
cd etc
- ./intro2-tldr 2>&1 | tee intro2-tldr.out
+ script -qc ./intro2-tldr intro2-tldr.out && sed -i -e 's/\r//g' intro2-tldr.out
diff -u intro2-tldr.orig intro2-tldr.out # Or use gitk.
mv intro2-tldr.out intro2-tldr.orig
- ./intro2-tour 2>&1 | tee intro2-tour.out
+
+ script -qc ./intro2-tour intro2-tour.out && sed -i -e 's/\r//g' intro2-tour.out
diff -u intro2-tour.orig intro2-tour.out # Or use gitk.
mv intro2-tour.out intro2-tour.orig
\
@@ -158,10 +216,12 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\ul|
- \li|Install guide: 1 & 2.|
+ \li|Install guide: 1 & 2 (also review \c{build2-toolchain} commit log).|
\li|Toolchain introduction: 1, 2 & 3 (use \c{intro} script output for 2).|
+ \li|Packaging guide: 2 & 3 (also \c{bdep-new}-generated buildfiles).|
+
\li|Introduction in the build system manual: 1 (uses \c{bdep-new(1)}
output).|
@@ -193,6 +253,12 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#stage-machines|Update \c{stage} \c{buildtab}s and build machines|
+ NOTE: may want to keep old machines around for public testing (since we use
+ existing public buildtabs, none of the new machines will be used).
+
+ Note: normally, we try to do all of this as part of normal development
+ (e.g., when adding new machines, etc).
+
Review \c{stage} \c{buildtab} for any configurations to drop (for example,
an intermediate version of a compiler), classes to adjust (\c{legacy},
\c{default}, \c{latest}, etc).
@@ -207,6 +273,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
cd private/buildos/
+ grep '.*' .../brep-buildtab | cut -d' ' -f1 | sort -u
./ls-machines -c stage -c devel
~/work/buildos/remove-machine <host> <machine>
@@ -218,7 +285,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
cd private/buildos/
- ./ls-machines -l \"/btrfs/$(whoami)/machines/default/\"
+ ./ls-machines -l \"/btrfs/$(whoami)/machines/default/\" | sort
./ls-machines -c stage -c devel
~/work/build2/buildos/upload-machine <host> .../new-ver .../old-ver
@@ -235,6 +302,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
If no upgrade is possible from the previous version, uncomment errors in
install scripts (and add a note to restore after the release).
+ Enable extra packages for testing in \c{etc/stage} script.
+
Restage and upgrade \c{brep} by performing the following steps:
\ol|
@@ -248,6 +317,12 @@ distribution from \c{etc/stage} and add the pre-distributed packages
etc/stage -b
\
+ Consider restaging queue for good measure.
+
+ \
+ etc/stage-queue
+ \
+
|
\li|While build machines are bootstrapping, upgrade \c{brep} on \c{stage},
@@ -265,6 +340,27 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Verify \c{stage} build is clean, nothing is unbuilt.
+\h#test-extra|Perform extra testing|
+
+ CI: (check for any new repositories in github.com/build2/)\n
+ \n
+ \c{libauto-symexport}\n
+ \c{hello-thrift}\n
+ \c{assembler-with-cpp}\n
+
+ Test \c{cxx20-modules-examples} (see \c{test} script).
+
+ Test any third-party/demos (\c{build2-dynamic-module-demo},
+ \c{cherimcu}, \c{boost-dependency}).
+
+ Test on ARM Mac (run tests for \c{libbutl/build2/bpkg/bdep}).
+
+ Test build system modules (especially standard pre-installed).
+
+ Test old toolchain version (can fetch and build old packages from
+ \c{queue.stage}; add a dummy package if all require the to-be-released
+ toolchain).
+
+
\h#install-stage|Test install scripts|
Test \l{https://stage.build2.org/0/ \c{stage} install scripts}, including
@@ -309,7 +405,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
git pull
bdep release --no-open --show-push [--alpha|--beta]
- # review commit
+ # review commit, run update (in case anything committed is pre-generated)
git push ...
\
@@ -333,29 +429,63 @@ distribution from \c{etc/stage} and add the pre-distributed packages
done if created projects use new features.}
\N|Why shouldn't we always do this for simplicity? Maybe because then
- we cannot run tests using \c{public} services? Also the below upgrade
- steps will break since there is no continuity.||
+ we cannot run tests using \c{public} services?
- \li|Change version by updating (including with new modules and/or new
- dependencies) and then executing:
+ Also if we change this in toolchain packages, the below upgrade steps
+ will break since there is no continuity. Perhaps do it in two stages:
+ first change version to final, upgrade, then change toolchain
+ dependencies to final, upgrade again? But then nobody involved in
+ development will be able to pull and upgrade. Maybe KISS and keep
+ it pre-release.||
- \
- etc/version
- ./commit.sh
- git -C build2-toolchain commit --amend # \"Change version to X.Y.Z\"
- \
+ \li|Change version by updating \c{etc/version} (including with new modules
+ and/or new dependencies, but keep pre-release in minimum toolchain
+ version requirements) and then executing:
- Note that \c{libbuild2-hello} is independently versioned but may still
- need to update minimum \c{build2} version requirements (see below).
+ \
+ etc/version
+ ./commit.sh
+ git -C build2-toolchain commit --amend # \"Change version to X.Y.Z\"
+ \
- |
+ Note that \c{libbuild2-*} modules (e.g., \c{libbuild2-hello}) are
+ independently versioned but may still need to update minimum toolchain
+ version requirements (see below).|
\li|Tag by executing \c{tag.sh\ <version>}.|
- \li|Regenerate documentation in each package.|
+
+ \li|Release all standard pre-installed build system modules. Update
+ minimum toolchain version requirements.
+
+ \
+ bdep release --no-open --show-push
+ \
+
+ Also release \c{libbuild2-hello} (it's not standard pre-installed
+ but it gets published).
+
+ |
+
+ \li|Regenerate documentation in each package (including standard
+ pre-installed build system modules, use \c{BDEP_SYNC=0}).|
\li|Upgrade all dependencies in configure-only mode by executing
- \c{etc/upgrade\ -c}.|
+ \c{etc/upgrade\ -c}.
+
+ Avoid this (see above): If the \c{build2}/\c{bpkg} requirements in the
+ manifests have been bumped to the version being released, then first
+ bootstrap the build system and update \c{bpkg}/\c{bdep} (might have to
+ hack their generated \c{version.hxx} to disable constraint checking;
+ also if you forget \c{BDEP_SYNC=0} it will most likely hose the build
+ configuration).
+
+ \
+ BDEP_SYNC=0 b-boot build2/build2/
+ BDEP_SYNC=0 b bpkg/ bdep/
+ \
+
+ |
\li|Trigger regeneration of version files (might require several runs
to \"close off\"):
@@ -387,12 +517,16 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
BDEP_SYNC=0 b ~/work/build2/builds/gcc7-asan/
+
+ b build2/ bpkg/ bdep/
+
+ # Update standard pre-installed build system modules.
\
|
- \li|Update \c{libbuild2-hello} if required.||
+ \li|Update other \c{libbuild2-*} modules if required.||
Verify key tests pass (in particular, the \c{bdep} tests will now be running
against \c{public} services):
@@ -401,6 +535,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
b test: build2/ bpkg/ bdep/
b test: bpkg/ config.bpkg.test.remote=true
b test: libbuild2-hello/libbuild2-hello-tests/
+
+ # Test standard pre-installed build system modules.
\
\N|We could have queued after this step before preparing
@@ -415,6 +551,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
./push.sh
+ # Push (with tags) standard pre-installed build system modules.
+
cd build2-toolchain
git submodule update --remote --checkout
@@ -429,12 +567,13 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\ul|
\li|Regenerate documentation in each package inside as well as in
- \c{build2-toolchain} itself.|
+ \c{build2-toolchain} itself. @@ \c{libbuild-kconfig} not configured
+ out of tree? Or maybe it gets updated automatically during dist?|
\li|Update ODB by copying relevant files from the previous step (trust me,
this is the easy way for now). Make sure all \c{*-odb.*} are copied!|
- \li|Change \c{BUILD2_REPO} in \c{build2-toolchain} build scripts to
+ \li|Change \c{build2_repo} in \c{build2-toolchain} \c{buildfile} to
\c{queue}.||
Finally, push the changes:
@@ -445,7 +584,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#queuing|Queue|
- Prepare packages and the toolchain distribution:
+ Prepare packages and the toolchain distribution (disable extra packages
+ first if any were enabled in the \c{etc/stage} script):
\
etc/stage -q -b
@@ -468,10 +608,13 @@ distribution from \c{etc/stage} and add the pre-distributed packages
git push
\
- If queued package manifests contain new values, then the bpkg-rep-publish
- script will fail to create repository due to unknown manifest values. To
- resolve this we temporarily add (to \c{crontab}) \c{--ignore-unknown} and
- make a note to restore.
+ If queued package manifests contain new values, then the
+ \c{bpkg-rep-publish} script will fail to create repository due to unknown
+ manifest values. To resolve this we temporarily add (to \c{crontab})
+ \c{--ignore-unknown} and make a note to restore.
+
+ Also change \c{--min-bpkg-version} from previous to current release
+ (not the one being released).
\h#build-public|Verify queued packages build with \c{public}|
@@ -516,7 +659,12 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\N|Note that the \c{queue} \c{buildtab} is shared between \c{public} and
\c{queue} builds. As a result, after this update, \c{public} build hosts
- may not have some of the new (or renamed) build machines.|
+ may not have some of the new (or renamed) build machines.
+
+ This also means that if the \c{buildtab} contains anything new (options,
+ etc) that are incompatible with \c{public}, then they should only be
+ enabled later, when upgrading \c{public} \c{buildtab} (make a note if
+ that's the case).|
Adjust \c{stage} and \c{devel} build host configurations (both \c{*-config}
and hardware classes) to enable the \c{queue} toolchain. Shift most
@@ -535,6 +683,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
replacements. We can only proceed further once we have a \"resolution\"
for every (newly) broken package.
+ \N|If \c{public} has been built with the staged toolchain, rebuilding the
+ \c{public} repository (which takes days) can be omitted.|
\h#stop-queue|Stop \c{queue} builds|
@@ -572,18 +722,24 @@ distribution from \c{etc/stage} and add the pre-distributed packages
(effectively making it a no-toolchain configuration), regenerate, and power
on the new set of \c{public} build hosts.
- Review deployed machines against the updated \c{public} \c{buildtab} and
- remove those that are no longer used:
+
+ @@ See new sync scripts for this step: Review deployed machines against the
+ updated \c{public} \c{buildtab} and remove those that are no longer used:
\
cd private/buildos/
+ grep '.*' .../brep-buildtab | cut -d' ' -f1 | sort -u
./ls-machines -c public
~/work/build2/buildos/remove-machine <host> <machine>
\
- Then move now legacy machines to the \"legacy\" build host.
+ Then move the now-legacy machines to the \"legacy\" build host:
+
+ \
+ grep 'legacy' .../brep-buildtab | cut -d' ' -f1 | sort -u
+ \
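The review can be mechanized with \c{comm} on the sorted machine lists. A
minimal sketch, using made-up machine names and throwaway files in place of
the real \c{brep-buildtab} and \c{ls-machines} output:

```shell
# Hypothetical stand-in data: machine names and file layouts are made up,
# not the real brep-buildtab format.
printf '%s\n' 'linux_debian_12-gcc_13 a' 'windows_10-msvc_17 b' > buildtab.txt
printf '%s\n' 'linux_debian_12-gcc_13' 'linux_fedora_36-gcc_12' > deployed.txt

# First column of the buildtab is the machine name; comm requires sorted input.
cut -d' ' -f1 buildtab.txt | sort -u > in-buildtab.txt
sort -u deployed.txt > deployed-sorted.txt

# Machines deployed but no longer in the buildtab (candidates for removal).
comm -13 in-buildtab.txt deployed-sorted.txt
```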
Also review deployed machines against the latest available versions and
upgrade those that are not the latest:
@@ -591,12 +747,18 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\
cd private/buildos/
- ./ls-machines -l \"/btrfs/$(whoami)/machines/default/\"
+ ./ls-machines -l \"/btrfs/$(whoami)/machines/default/\" | sort
./ls-machines -c public
~/work/build2/buildos/upload-machine <host> .../new-ver .../old-ver
\
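When comparing deployed versions against the latest available ones, note that
a plain lexicographic sort mis-orders versions like \c{1.9} and \c{1.10}; a
version-aware sort avoids that. A sketch with made-up directory names:

```shell
# sort -V orders version components numerically, so 1.10 follows 1.9;
# the machine directory names here are illustrative only.
printf '%s\n' 'linux-1.9' 'linux-1.10' 'linux-1.2' | sort -V | tail -n1
```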
+ Finally, add any new machines:
+
+ \
+ grep -v 'legacy' .../brep-buildtab | cut -d' ' -f1 | sort -u
+ \
+
Uncomment the \c{public} toolchain in the build host configuration and
regenerate. The only remaining step is to reboot (not yet):
@@ -606,7 +768,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#pub-dist|Publish distribution|
- Change \c{BUILD2_REPO} in \c{build2-toolchain} build scripts to \c{public},
+ Change \c{build2_repo} in \c{build2-toolchain} \c{buildfile} to \c{public},
commit, and publish the distribution (this also cleans/disables the
\c{queue} toolchain):
@@ -616,6 +778,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\h#pub-pkg|Publish packages|
+ @@ Need to add --min-bpkg-version 0.13.0
+
Move packages (manually for now) from the \c{queue} to \c{public} \c{git}
repository, including ownership information. Move old/replaced/FTB
packages either to legacy or delete.
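Since the move is manual for now, it boils down to moving the archive between
the two working trees and committing the removal and the addition separately.
A self-contained sketch; the repository names, layout, and package archive
are all made up for illustration:

```shell
# Hypothetical setup: create a fake queue repository with one package
# (names and layout are assumptions, not the real repositories).
mkdir -p queue.cppget.org/1/alpha cppget.org/1/alpha
git -C queue.cppget.org init -q
git -C cppget.org init -q
echo pkg > queue.cppget.org/1/alpha/libfoo-1.2.3.tar.gz
git -C queue.cppget.org add -A
git -C queue.cppget.org -c user.email=e -c user.name=n commit -qm 'Queue libfoo'

# Move the archive, then commit the removal and the addition.
mv queue.cppget.org/1/alpha/libfoo-1.2.3.tar.gz cppget.org/1/alpha/
git -C queue.cppget.org add -A
git -C queue.cppget.org -c user.email=e -c user.name=n commit -qm 'Move libfoo to public'
git -C cppget.org add -A
git -C cppget.org -c user.email=e -c user.name=n commit -qm 'Add libfoo from queue'
```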
@@ -627,7 +791,7 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Note that once published, the existing install instructions/download
links are no longer usable, so do not linger (in fact, it may make sense
to update the Download and Install pages before publishing packages and
- sync only them immediately after).
+ only sync them immediately after).
\h#start-public|Start \c{public} builds|
@@ -642,6 +806,9 @@ distribution from \c{etc/stage} and add the pre-distributed packages
we just smoke-test each script on its \"primary\" platform and make sure
\c{public} URLs/repositories are used.
+ Also test building an old package with the previous version of the
+ toolchain.
+
\h#web|Update web|
\ul|
@@ -747,13 +914,17 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Essentially, the same steps as in \l{#version-release Change to
release version} (but no tagging).
+ Revert changes to install scripts if upgrade was disabled.
+
\h#stage-machines-reopen|Update \c{stage} \c{buildtab}s and build machines|
Essentially, the same steps as in \l{#public-machines Update \c{public}
\c{buildtab}s and build machines} but for stage. Some differences:
Clean \c{buildtab}s (both \c{stage} and CI) by removing no longer relevant
- configurations, moving some to \c{legacy}, etc.
+ configurations, moving some to \c{legacy}, etc. NOTE: we now keep the
+ same set of machines as \c{public} until the next release so that we
+ can build \c{public} with \c{stage} for testing.
More generally, this is the time to do sweeping changes such as renaming
machines/configurations, adjusting classes, etc. This is also the time to
@@ -765,6 +936,8 @@ distribution from \c{etc/stage} and add the pre-distributed packages
\c{baseutils} and \c{mingw}; the idea is that we will start with those and
maybe upgrade later).
+ Review \c{etc/stage} for any packages to enable/disable.
+
Then cleanup and restage:
\
@@ -782,6 +955,14 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Upgrade \c{brep} on \c{stage}.
+ Review \c{etc/stage-queue} and restage \c{queue.stage}:
+
+ \
+ rm -r staging/repository/1/*/
+
+ etc/stage-queue
+ \
+
\h#commit-reopen|Commit and push \c{etc/} and \c{private/}.|
Commit and push changes to \c{etc/} and \c{private/}.
@@ -821,6 +1002,9 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Consider upgrading to new upstream versions in \c{packaging/}.
+ Also consider upgrading the bundled \c{libodb}, \c{libsqlite3}, and
+ \c{libpkg-config} in \c{libbutl}.
+
\h#upgrade-projects|Upgrade own projects|
Adjust own projects to use any newly available features.
@@ -830,6 +1014,20 @@ distribution from \c{etc/stage} and add the pre-distributed packages
Adjust \c{bdep-new}-generated projects to use any newly available features.
+\h1#re-review|Re-review \c{@@}|
+
+ Review \c{@@} notes (in case some should be fixed after the release):
+
+ \
+ etc/review | less -R
+ \
+
+ At least look for \c{@@\ TMP}:
+
+ \
+ etc/review | grep TMP
+ \
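\c{etc/review} is project-specific, but the \c{TMP} pass boils down to a
recursive grep for the note marker. A stand-in sketch over a throwaway tree
(the demo file and its contents are made up):

```shell
# Hypothetical stand-in for the etc/review TMP pass: recursively grep for
# @@ TMP notes in a source tree.
mkdir -p demo
printf '%s\n' 'int x; // @@ TMP: revert after release' > demo/a.cxx
grep -rn '@@ TMP' demo
```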
+
\h1#plan|Plan|
Plan the next release and update the project roadmap.
diff --git a/doc/style b/doc/style
-Subproject 5d12805f877ff931b1195789e0cb0dae2ee9747
+Subproject b72eb624d13b1628e27e9f6c0b3c80853e8e015