// file      : doc/packaging.cli
// license   : MIT; see accompanying LICENSE file

"\name=build2-packaging-guide"
"\subject=toolchain"
"\title=Packaging Guide"

// NOTES
//
// - Maximum line is 70 characters.
//
// - In guideline titles (do/don't) omit a/the.

// TODO:
//
// @@ Update pinned repositories (libevent, libasio, xxd, libzstd).
//
// @@ Update build2-packaging front page with link to this guide (add
//    organization README?)
//
// @@ Enforce continuous versioning with pre-commit hook -- maybe add a note?
//
// @@ Add example of propagating config.libfoo.debug to macro on build options?
//
// @@ Note on library metadata where talk about configuration. Also about
//    autoconf.
//
// @@ Add link from metadata HOWTO to "Modifying upstream source code with
//    preprocessor".
//
// @@ Default configuration like SQLite -- look in Debian/Fedora (want to
//    be substitutable with system-installed).
//
// @@ Optional dependencies like xxhash.
//
// @@ Mapping to system package manager names/versions.
//
// @@ bdep-new: export stub for executables.
//
// @@ Are upstream tests well behaved? -- add never exit/take unreasonable time
//
// @@ Note: showing UNIX command lines but should be straightforward to adjust
//    for Windows.

"
\h0#preface|Preface|

This document provides guidelines for converting third-party C/C++ projects to
the \c{build2} build system and making them available as packages from
\l{https://cppget.org cppget.org}, the \c{build2} community's central package
repository. For additional information, including documentation for individual
\c{build2} toolchain components, man pages, HOWTOs, etc., refer to the project
\l{https://build2.org/doc.xhtml Documentation} page.


\h1#intro|Introduction|

The aim of this guide is to ease the conversion of third-party C/C++ projects
to the \c{build2} build system and their publication to the
\l{https://cppget.org cppget.org} package repository by codifying best
practices and techniques. By following the presented guidelines you will also
make it easier for others to review your work and help with ongoing
maintenance.

\N|A \c{build2}-based project can only consume packages that use the
\c{build2} build system (with the exception of
\l{https://build2.org/faq.xhtml#why-syspkg system-installed packages}). In
other words, there is no support for \"wrapping\" or otherwise adapting
third-party projects' existing build systems. While replacing the build system
unquestionably requires more work upfront, the \c{build2} project's experience
is that the long-term benefits of this effort are well justified (see
\l{https://build2.org/faq.xhtml#why-package-managers How does \c{build2}
compare to other package managers?} for details).|

The primary focus of this guide is existing C/C++ projects that use a
different build system and that are maintained by a third party, which we
will refer to as \i{upstream}. Unless upstream is willing to incorporate support
for \c{build2} directly into their repository, such projects are normally
packaged for \c{build2} in a separate \c{git} repository under the
\l{https://github.com/build2-packaging github.com/build2-packaging}
organization. Note, however, that many of the presented guidelines are also
applicable when converting your own projects (that is, where you are the
upstream) as well as projects that use languages other than C or C++.

Most C/C++ packages that are published to \l{https://cppget.org cppget.org}
are either libraries or executables (projects that provide both are normally
split into several packages) with libraries being in the strong majority.
Libraries are also generally more difficult to build correctly. As a result,
this guide uses libraries as a baseline. In most cases, a library-specific
step is easily distinguished as such and can be skipped when dealing with
executables. And in cases where a more nuanced change is required, a note will
be provided.

At the high-level, packaging a third-party project involves the following
steps:

\ol|

\li|Create the \c{git} repository and import upstream source code.|

\li|Generate \c{buildfile} templates that match upstream layout.|

\li|Tweak the generated \c{buildfiles} to match upstream build.|

\li|Test locally and using the \l{https://ci.cppget.org \c{build2} CI service}.|

\li|Release and publish the package to \l{https://cppget.org cppget.org}.|

|
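In terms of concrete tooling, and glossing over many details covered in
the rest of this guide, these steps roughly map to the following
commands (a sketch only; \c{foo} and \c{libfoo} are hypothetical
repository and package names):

\
$ bdep new --type empty foo             # 1: create repository
$ bdep new --package --type lib libfoo  # 2: generate templates
$ bdep ci                               # 4: test using CI service
$ bdep release                          # 5: release...
$ bdep publish                          # 5: ...and publish
\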

Once this process is completed and the package is published, new releases
normally require a small amount of work provided there are no drastic changes
in the upstream layout or build. The sequence of steps for a new release would
typically look like this:

\ol|

\li|Add new and/or remove old upstream source code, if any.|

\li|Tweak \c{buildfiles} to match changes to upstream build, if any.|

\li|Test locally and using the \l{https://ci.cppget.org \c{build2} CI service}.|

\li|Release and publish the package to \l{https://cppget.org cppget.org}.|

|

While packaging a simple library or executable is relatively straightforward,
the C and C++ languages and their ecosystems are infamous for a large
amount of variability in the platforms, compilers, source code layouts, and build
systems used. This leads to what looks like an endless list of special
considerations that are only applicable in certain, more complex cases.

As a result, the presented guidelines are divided into four chapters: \l{#core
Common Guidelines} covers steps that are applicable to most packaging
efforts. As mentioned earlier, these steps assume packaging a library but
they should be easy to adapt to executables. This chapter is followed by
\l{#dont-do What Not to Do}, which covers common packaging mistakes and
omissions. These are unfortunately widespread because experience with
other build systems often does not translate directly to \c{build2} and some
techniques (such as header-only libraries) are discouraged. The last two
chapters are \l{#howto HOWTO} and \l{#faq FAQ}. The former covers the
above-mentioned long list of special considerations that are only applicable
in certain cases while the latter answers frequent packaging-related questions.

Besides the presented guidelines, you may also find the existing packages
in \l{https://github.com/build2-packaging github.com/build2-packaging} a good
source of example material. The repositories pinned to the front page are the
recommended starting point.

This guide assumes familiarity with the \c{build2} toolchain. At the minimum
you should have read through \l{intro The \c{build2} Toolchain Introduction}
and the \l{b#intro Introduction} chapter in the build system manual. Ideally,
you should also have some experience using \c{build2} in your own projects.


\h#intro-term|Terminology|

We use the term \i{upstream} to refer collectively to the third-party project
as well as to its authors. For example, we may say, \"upstream does not use
semver\" meaning that the upstream project does not use semver for
versioning. Or we may say, \"upstream released a new version\" meaning that
the upstream project's authors released a new version.

We will often use \i{upstream} as a qualifier to refer to a specific part of
the upstream project. Commonly used qualified terms like this include:

\dl|

\li|\i{upstream repository}

The version control (normally \c{git}) repository of the third-party project.|

\li|\i{upstream source code}

The C/C++ source code that constitutes the third-party project.

|

\li|\i{upstream layout}

The directory structure and location of source code in the third-party
project.

|

\li|\i{upstream build system}

The equivalents of \c{buildfiles} that are used by the third-party project to
build its source code, run tests, etc. For example, if upstream uses CMake,
then all the \c{CMakeLists.txt}, \c{*.cmake}, etc., files will constitute its
build system.||

To avoid confusion, in this guide we will always use the term \i{project} to
refer to upstream and \i{package} to refer to its \c{build2} conversion, even
though we would normally call our own \c{build2}-based work a project, not a
package (see \l{b#intro-proj-struct Project Structure} for details on the
\c{build2} terminology in this area). Some commonly used \c{build2}-side terms
in this guide include:


\dl|

\li|\i{package \c{git} repository}

The \c{git} repository that hosts the package of the upstream project.|


\li|\i{multi-package repository}

Sometimes it makes sense to split the upstream project into multiple
\c{build2} packages (for example, a library and an executable). In this case
the package repository structure must become multi-package.||


\h1#core|Common Guidelines|

This chapter describes the recommended sequence of steps for packaging a
third-party project for \c{build2} with the end-goal of publishing it to the
\l{https://cppget.org cppget.org} package repository.


\h#core-repo|Setup the package repository|

This section covers the creation of the package \c{git} repository and
the importation of the upstream source code.

\h2#core-repo-exists|Check if package repository already exists|

Before deciding to package a third-party project you have presumably checked
on \l{https://cppget.org cppget.org} if someone has already packaged it. There
are several other places that make sense to check as well:

\ul|

\li|\l{https://queue.cppget.org queue.cppget.org} contains packages that
have been submitted but not yet published.|

\li|\l{https://queue.stage.build2.org queue.stage.build2.org} contains
packages that have been submitted but can only be published after the next
release of the \c{build2} toolchain (see \l{#faq-publish-stage Where to
publish if package requires staged toolchain?} for background).|

\li|\l{https://github.com/build2-packaging github.com/build2-packaging}
contains all the third-party package repositories. Someone could already be
working on the package but hasn't yet published it.|

\li|\l{https://github.com/build2-packaging/WISHLIST/issues
github.com/build2-packaging/WISHLIST} contains, as issues, projects that
people wish were packaged. These issues may include offers to collaborate or
announcements
of ongoing work.||


In all these cases you should be able to locate the package \c{git} repository
and/or connect with others in order to collaborate on the packaging work. If
the existing effort looks abandoned (for example, there hasn't been any
progress for a while and the existing maintainer doesn't respond) and you
would like to take over the package,
\l{https://build2.org/community.xhtml#help get in touch}.


\h2#core-repo-name|Use upstream repository name as package repository name|

It is almost always best to use the upstream repository name as the package
repository name. If there is no upstream repository (for example, because the
project doesn't use a version control system), the name used in the source
archive distribution would be the natural fallback choice.

\N|See \l{#core-package-name Decide on the package name} for the complete
picture on choosing names.|

\h2#core-repo-create|Create package repository in personal workspace|

For a third-party project, the end result that we are aiming for is a package
repository under the \l{https://github.com/build2-packaging
github.com/build2-packaging} organization.

\N|We require all the third-party projects that are published to
\l{https://cppget.org cppget.org} to be under the
\l{https://github.com/build2-packaging github.com/build2-packaging}
organization in order to ensure some continuity in case the original
maintainer loses interest, etc. You will still be the owner of the repository
and by hosting your packaging efforts under this organization (as opposed to,
say, your personal workspace) you make it easier for others to discover your
work and to contribute to the package maintenance.

Note that this requirement does not apply to your own projects (that is, where
you are the upstream) and where the \c{build2} support is normally part of the
upstream repository.

Finally, a note on the use of \c{git} and GitHub: if for some reason you are
unable to use either, \l{https://build2.org/community.xhtml#help get in touch}
to discuss alternatives.|

However, the recommended approach is to start with a repository in your
personal workspace and then, when it is ready or in a reasonably complete
state, transfer it to \l{https://github.com/build2-packaging
github.com/build2-packaging}. This gives you the freedom to make destructive
changes to the repository (including deleting it and starting over) during the
initial packaging work. It also removes the pressure to perform: you can give
it a try and if things turn out more difficult than you expected, you can just
drop the repository.

\N|For repositories under \l{https://github.com/build2-packaging
github.com/build2-packaging} the \c{master}/\c{main} branch is protected: it
cannot be deleted and its commit history cannot be overwritten with a forced
push.|

\N|While you can use any name for a repository under the personal workspace,
under \l{https://github.com/build2-packaging github.com/build2-packaging} it
should follow the \l{#core-repo-name Use upstream repository name as package
repository name} guideline. In particular, there should be no prefixes like
\c{build2-} or suffixes like \c{-package}. If the repository under your
personal workspace does not follow this guideline, you will need to rename it
before transferring it to the \l{https://github.com/build2-packaging
github.com/build2-packaging} organization.|

There is one potential problem with this approach: it is possible that
several people will start working on the same third-party project without
being aware of each other's efforts. If the project you are packaging is
relatively small and you don't expect it to take more than a day or two, then
this is probably not worth worrying about. For bigger projects, however, it
makes sense to announce your work by creating (or updating) the corresponding
issue in \l{https://github.com/build2-packaging/WISHLIST
github.com/build2-packaging/WISHLIST}.

To put it all together, the recommended sequence of actions for this step:

\ol|

\li|Create a new empty repository under your personal workspace from the
    GitHub UI.|

\li|Set the repository description to \c{build2 package for <name>}, where
\c{<name>} is the third-party project name.|

\li|Make the repository public (otherwise you won't be able to CI it).|

\li|Don't automatically add any files (\c{README}, \c{LICENSE}, etc).|

\li|Clone the empty repository to your machine (using the SSH protocol).||
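If you use GitHub's \c{gh} command-line tool, the first three steps
above can, for example, be performed as follows (\c{<user>} and
\c{<name>} are placeholders for your GitHub user and the project name):

\
$ gh repo create <user>/<name> --public \
    --description \"build2 package for <name>\"
\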

\N|Since this is your personal repository, you can do the initial work
directly in \c{master}/\c{main} or in a separate branch, it's up to you.|

As a running example, let's assume we want to package a library called \c{foo}
whose upstream repository is at \c{https://github.com/<upstream>/foo.git}. We
have created its package repository at
\c{https://github.com/<user>/foo.git} (with the \c{build2 package for foo}
description) and can now clone it:

\
$ git clone git@github.com:<user>/foo.git
\


\h2#core-repo-init|Initialize package repository with \c{bdep new --type empty}|

Change to the root directory of the package repository that you have cloned
on the previous step and run (continuing with our \c{foo} example):

\
$ cd foo/ # Change to the package repository root.
$ bdep new --type empty
$ tree -a .
./
├── .bdep/
│   └── ...
├── .git/
│   └── ...
├── .gitattributes
├── .gitignore
├── README.md
└── repositories.manifest
\

This command creates a number of files in the root of the repository:

\dl|

\li|\n\c{README.md}\n

This is the project \c{README}. We will discuss the recommended content for
this file later.|

\li|\n\c{repositories.manifest}\n

This file specifies the repositories from which this project will obtain its
dependencies (see \l{intro#guide-add-remove-deps Adding and Removing
Dependencies}). If the project you are packaging has no dependencies, then you
can safely remove this file (it's easy to add later if this changes). And for
projects that do have dependencies we will discuss the appropriate changes to
this file later.|


\li|\n\c{.gitattributes} and \c{.gitignore}\n

These are the \c{git} infrastructure files for the repository. You shouldn't
normally need to change anything in them at this stage (see the comments
inside for details).||

Next add and commit these files:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Initialize package repository\"
\

\N|In these guidelines we will be using the package repository setup that is
capable of having multiple packages (referred to as \i{multi-package
repository}). This is recommended even for upstream projects that only
provide a single package because it gives us the flexibility of adding new
packages at a later stage without having to perform a major restructuring of
our repository.

Note also that upstream providing multiple packages is not the only reason we
may end up having multiple \c{build2} packages. Another common reason is
factoring tests into a separate package due to a dependency on a testing
framework
(see \l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for background and
details). While upstream adding new packages may not be very common, upstream
deciding to use a testing framework is a lot more plausible.

The only notable drawback of using a multi-package setup with a single package
is the extra subdirectory for the package and a few extra files (such as
\c{packages.manifest} that lists the packages) in the root of the repository.
If you are certain that the project that you are converting is unlikely to
have multiple packages (for example, because you are the upstream) and won't
need extra dependencies for its tests (a reasonable assumption for a C
project), then you could instead go with the single-package repository where
the repository root is the package root. See \l{bdep-new(1)} for details on
how to initialize such a repository. In this guide, however, we will continue
to assume a multi-package repository setup.|


\h2#core-repo-submodule|Add upstream repository as \c{git} submodule|

If the third-party project is available from a \c{git} repository, then the
recommended approach is to use the \c{git} submodule mechanism to make the
upstream source code available inside the package repository, customarily in a
subdirectory called \c{upstream/}.

\N|While \c{git} submodules receive much criticism, in our case we use them
exactly as intended: to select and track specific (release) commits of an
external project. As a result, there is nothing tricky about their use for our
purpose and all the relevant commands will be provided and explained, in case
you are not familiar with this \c{git} mechanism.|

Given the upstream repository URL, to add it as a submodule, run the following
command from the package repository root (continuing with our \c{foo} example):

\
$ cd foo/ # Change to the package repository root.
$ git submodule add https://github.com/<upstream>/foo.git upstream
\

\N|You should prefer \c{https://} over \c{git://} for the upstream repository
URL since the \c{git://} protocol may not be accessible from all networks.
Naturally, never use a URL that requires authentication, for example, SSH
(SSH URLs start with \c{git@github.com} for GitHub).|

Besides the repository URL, you also need the commit of the upstream release
which you will be packaging. It is common practice to tag releases so the
upstream tags would be the first place to check. Failing that, you can always
use the commit id.
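For example, to list the upstream release tags without cloning the
repository, you could run something along these lines (the URL is
hypothetical, continuing with our \c{foo} example):

\
$ git ls-remote --tags https://github.com/<upstream>/foo.git
\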

Assuming the upstream release tag you are interested in is called \c{vX.Y.Z},
to update the \c{upstream} submodule to point to this release commit, run the
following commands:

\
$ cd upstream/
$ git checkout vX.Y.Z
$ cd ../
\

Then add and commit these changes:

\
$ git add .
$ git status
$ git commit -m \"Add upstream submodule, vX.Y.Z\"
\

Now we have all the upstream source code for the version that we are
packaging available in the \c{upstream/} subdirectory of our repository.

The plan is to then use symbolic links (symlinks) to non-invasively overlay
the \c{build2}-related files (\c{buildfile}, \c{manifest}, etc) with the
upstream source code, if necessary adjusting upstream structure to split it
into multiple packages and/or to better align with the source/output layouts
recommended by \c{build2} (see \l{https://build2.org/article/symlinks.xhtml
Using Symlinks in \c{build2} Projects} for background and rationale). But
before we can start adding symlinks to the upstream source (and other files
like \c{README}, \c{LICENSE}, etc), we need to generate the \c{buildfile}
templates that match the upstream source code layout. This is the subject of
the next section.
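To give a feel for this overlay, here is a hypothetical example of
symlinking an upstream source subdirectory and the license file into the
package (the exact paths will depend on the layouts involved and are
discussed in the following sections):

\
$ ln -s ../../upstream/src libfoo/libfoo/src
$ ln -s ../upstream/LICENSE libfoo/LICENSE
\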

\N|While on UNIX-like operating systems symlinks are in widespread use, on
Windows it's a niche feature that unfortunately could be cumbersome to use
(see \l{https://build2.org/article/symlinks.xhtml#windows Symlinks and
Windows} for details). However, the flexibility afforded by symlinks when
packaging third-party projects is unmatched by any other mechanism and we
therefore use them despite potentially sub-optimal packaging experience
on Windows.|


\h#core-package|Create package and generate \c{buildfile} templates|

This section covers the addition of the package to the repository we have
prepared in the previous steps and the generation of the \c{buildfile}
templates that match the upstream source code layout.


\h2#core-package-name|Decide on the package name|

While choosing the package repository name was pretty straightforward, things
get less clear cut when it comes to the package name.

\N|If you need a refresher on the distinction between projects and packages,
see \l{#intro-term Terminology}.|

Picking a name for a package that provides an executable is still relatively
straightforward: you should use the upstream name (which is usually the same
as the upstream project name) unless there is a good reason to deviate. One
recommended place to check before deciding on a name is the
\l{https://packages.debian.org Debian package repository}. If their package
name differs from upstream, then there is likely a good reason for that and
it is worth trying to understand what it is.

\N|Tip: when trying to find the corresponding Debian package, search for the
executable file name in the package contents if you cannot find the package by
its upstream name. Also consider searching in the \c{unstable} distribution in
addition to \c{stable} for newer packages.|

Picking a name for a package that provides a library is where things can get
more complicated. While all the recommendations that have been listed for
executables apply equally to libraries, there are additional considerations.

In \c{build2} we recommend (but do not require) that new library projects use
a name that starts with \c{lib} in order to easily distinguish them from
executables and avoid any clashes, including potential future ones (see
\l{intro#proj-struct Canonical Project Structure} for details). To illustrate
the problem, consider the \c{zstd} project which provides a library and an
executable. In the upstream repository both are part of the same codebase
that doesn't try to separate them into packages so that, for example, the
library could be used without downloading and building the executable. In
\c{build2},
however, we do need to split them into two separate packages and both packages
cannot be called \c{zstd}. So we call them \c{zstd} and \c{libzstd}.

\N|If you are familiar with the Debian package naming policy, you will
undoubtedly recognize this approach. In Debian all the library packages (with
very few exceptions) start with the \c{lib} prefix. So when searching for an
upstream name in the \l{https://packages.debian.org Debian package repository}
make sure to prefix it with \c{lib} (unless it already starts with this
prefix, of course).|

This raises the question of what to do about third-party libraries: should we
add the \c{lib} prefix to the package name if it's not already there?
Unfortunately, there is no clear cut answer and whichever decision you make,
there will be drawbacks. Specifically, if you add the \c{lib} prefix, the main
drawback is that the package name now deviates from the upstream name and if
the project maintainer ever decides to add \c{build2} support to the upstream
repository, there could be substantial friction. On the other hand, if you
don't add the \c{lib} prefix, then you will always run the risk of a future
clash with an executable name. And, as was illustrated with the \c{zstd}
example, a late addition of an executable won't necessarily cause any issues
to upstream. As a result, we don't have a hard requirement for the \c{lib}
prefix unless there is already an executable that would cause the clash (this
applies even if it's not being packaged yet or is provided by an unrelated
project). If you don't have a strong preference, we recommend that you add the
\c{lib} prefix (unless it is already there). In particular, this will free you
from having to check for any potential clashes. See
\l{https://github.com/build2/HOWTO/blob/master/entries/name-packages-in-project.md
How should I name packages when packaging third-party projects?} for
additional background and details.

To build some intuition for choosing package names, let's consider several
real examples. We start with executables:

\
  upstream  |   upstream    |   Debian   | build2 package|   build2
project name|executable name|package name|repository name|package name
------------+---------------+------------+---------------+------------
byacc        byacc           byacc        byacc           byacc
sqlite       sqlite3         sqlite3      sqlite          sqlite3
vim          xxd             xxd          xxd             xxd
OpenBSD      m4              -            openbsd-m4      openbsd-m4
qtbase 5     moc             qtbase5-\    Qt5             Qt5Moc
                             dev-tools
qtbase 6     moc             qt6-base-\   Qt6             Qt6Moc
                             dev-tools
\

The examples are arranged from the most straightforward naming to the
least. The last two examples show that sometimes, after carefully considering
upstream naming, you nevertheless have no choice but to ignore it and forge
your own path.

Next let's look at library examples. Notice that some use the same \c{build2}
package repository name as the executables above. This means they are part of
the same multi-package repository.

\
  upstream  |  upstream     |   Debian   | build2 package|   build2
project name|library name   |package name|repository name|package name
------------+---------------+------------+---------------+------------
libevent     libevent        libevent     libevent        libevent
brotli       brotli          libbrotli    brotli          libbrotli
zlib         zlib            zlib         zlib            libz
sqlite       libsqlite3      libsqlite3   sqlite          libsqlite3
libsig\      libsigc++       libsigc++    libsig\         libsigc++
cplusplus                                 cplusplus
qtbase 5     QtCore          qtbase5-dev  Qt5             libQt5Core
qtbase 6     QtCore          qt6-base-dev Qt6             libQt6Core
\

If an upstream project is just a single library, then the project name is
normally the same as the library name (but there are exceptions, like
\c{libsigcplusplus} in the above table). However, when looking at an upstream
repository that contains multiple components (libraries and/or executables,
like \c{qtbase} in the above example), it may not be immediately obvious what
the upstream's library names are. In such cases, the corresponding Debian
packages can really help clarify the situation. Failing that, look into the
existing build system. In particular, if it generates a \c{pkg-config} file,
then the name of this file is usually the upstream library name.
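For example, to check whether the upstream build system has
\c{pkg-config} file templates (whose names usually correspond to the
library names), you could search for them in the upstream source:

\
$ find upstream/ -name '*.pc' -o -name '*.pc.in'
\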

\N|Looking at the names of the library binaries is less helpful because on
UNIX-like systems they must start with the \c{lib} prefix. And on Windows the
names of library binaries often embed extra information (static/import,
debug/release, etc) and may not correspond directly to the library name.|

And, speaking of multiple components, if you realize the upstream project
provides multiple libraries and/or executables, then you need to decide
whether to split them into separate \c{build2} packages and if so, how. Here,
again, the corresponding Debian packages can be a good reference point. Note,
however, that we often deviate from Debian's splits, especially when it comes
to libraries. Such differences are usually due to Debian focusing on binary
packages while in \c{build2} we are focusing on source packages.

To give a few examples, \c{libevent} shown in the above table provides several
libraries (\c{libevent-core}, \c{libevent-extra}, etc) and in Debian it is
actually split into several binary packages along these lines. In \c{build2},
however, there is a single source package that provides all these libraries
with everything except \c{libevent-core} being optional. An example which
shows the decision made in a different direction would be the Boost libraries:
in Debian all the header-only Boost libraries are bundled into a single
package while in \c{build2} they are all separate packages.

The overall criterion here can be stated as follows: if a small family of
libraries provides complementary functionality (like \c{libevent}), then we
put them all into a single package, usually making the additional
functionality optional. However, if the libraries are independent (like
Boost) or provide alternative rather than complementary functionality (for
example, like the different backends in \c{imgui}), then we make them
separate packages. Note
that we never bundle an executable and a (public) library in a single package
(because, when consumed, they usually require different
\l{bpkg#manifest-package-depends dependency types}: build-time vs run-time).

Note also that while it's a good idea to decide on the package split and all
the package names upfront to avoid surprises later, you don't have to actually
provide all the packages right away. For example, if upstream provides a
library and an executable (like \c{zstd}), you can start with the library and
the executable package can be added later (potentially by someone else).

\N|In the \"library and executable\" case, if you plan to package both, the
sensible strategy is to first completely package the library stopping short of
releasing and publishing, then repeat the same process to package the
executable, and finally release and publish both.|

Admittedly, the recommendations in this section are all a bit fuzzy and one
can choose different names or different package splits that could all seem
reasonable. If you are unsure how to split the upstream project or what names
to use, \l{https://build2.org/community.xhtml#help get in touch} to discuss
the alternatives. It can be quite painful to change these things after you
have completed the remaining packaging steps.

Continuing with our \c{foo} example, we will follow the above recommendation
and call the library package \c{libfoo}.


\h2#core-package-struct|Decide on the package source code layout|

Another aspect we need to decide on is the source code layout inside the
package. Here we want to stay as close to the upstream layout as possible
unless there are valid reasons to deviate. Staying close has the best chance
of giving us a build without any compile errors since header inclusion in the
project can be sensitive to the layout. This also makes it easier for
upstream to adopt the \c{build2} build.

Sometimes, however, there are good reasons for deviating from upstream,
especially in cases where upstream is clearly following bad practices, for
example including generically-named public headers without the library name as
a subdirectory prefix. If you do decide to change the layout, it's usually
less disruptive (to the build) to rearrange things at the outer levels than at
the inner. For example, it should normally be possible to move/rename the
top-level \c{tests/} directory or to place the library source files into a
subdirectory.

Our overall plan is to create the initial layout and \c{buildfile} templates
automatically using \l{bdep-new(1)} in the \c{--package} mode, then \"fill\"
the package with upstream source code using symlinks, and finish off with
tweaking the generated \c{buildfiles} to match the upstream build.

The main rationale for using \l{bdep-new(1)} instead of doing everything by
hand is that there are many nuances in getting the build right and
auto-generated \c{buildfiles} have had years of refinement and
fine-tuning. The
familiar structure also makes it easier for others to understand your build,
for example while reviewing your package submission or helping out with
maintenance.

The \l{bdep-new(1)} command supports a wide variety of
\l{bdep-new.xhtml#src-layout source layouts}. While it may take a bit of time
to understand the customization points necessary to achieve the desired layout
for your first package, this experience will pay off in spades when you work
on converting subsequent packages.

And so the focus of the following several steps is to iteratively discover the
\l{bdep-new(1)} command line that best approximates the upstream layout. The
recommended procedure is as follows:

\ol|

\li|\nStudy the upstream source layout and existing build system.|

\li|\nCraft and execute the \l{bdep-new(1)} command line necessary to achieve
the upstream layout.|

\li|\nStudy the auto-generated \c{buildfiles} for things that don't fit and
need to change. But don't rush to start manually editing the result. First get
an overview of the required changes and then check if it's possible to achieve
these changes automatically using one of \l{bdep-new(1)} sub-options. If
that's the case, delete the package, and restart from step 2.||

This and the following two sections discuss each of these steps in more detail
and also look at some examples.

The first step above is to study the upstream project in order to understand
where the various parts are (headers, sources, etc.) and how they are built.
Things that can help here include:

\ul|

\li|Read through the existing build system definitions.|

\li|Try to build the project using the existing build system.|

\li|Try to install the project using the existing build system.|

\li|Look into the Debian package contents to see if there are any differences
    with regards to the installation locations.||

\N|If while studying the upstream build system you notice other requirements,
for example, the need to compile source files other than C/C++ (such as
Objective-C/C++, assembler, etc) or the need to generate files from \c{.in}
templates (or their \c{.cmake}/\c{.meson} equivalents), and are wondering how
they would be handled in the \c{build2} build, see the
\l{#core-adjust-build-src-source-ext Adjust source \c{buildfile}: extra
requirements} step for a collection of pointers.|

For libraries, the first key pieces of information we need to find
are how the public headers are included and where they are
installed. The two common \i{good} practices are to either include
the public headers with the library name as a subdirectory, for
example, \c{#include\ <foo/util.h>}, or to include the library name
into each public header name, for example, \c{#include\ <foo-util.h>}
or \c{#include\ <foo.h>} (in the last example the header name is the
library name itself, which is also fairly common). Unfortunately,
there is also a fairly common \i{bad} practice: having generically
named headers (such as \c{util.h}) included without the library name
as a subdirectory.

\N|The reason this is a bad practice is that libraries that have such headers
cannot coexist, neither in the same build nor when installed. See
\l{#howto-bad-inclusion-practice How do I deal with bad header inclusion
practice?} if you encounter such a case. See \l{intro#proj-struct Canonical
Project Structure} for additional background and details.|

Where should we look to get this information? While the library
source files sound like a natural place, oftentimes they include
their own headers with the \c{\"\"} style inclusion, either because
the headers are in the same directory or because the library build
arranges for them to be found this way with additional header search
paths. As a result, a better place to look could be the library's
examples and/or tests. Some libraries also describe which headers
they provide and how to include them in their documentation.
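
For example, one quick way to survey the inclusion styles used by
upstream is to list the unique \c{#include} directives in its tests
(assuming the upstream source is in \c{upstream/}):

\
$ grep -rh '#include' upstream/tests/ | sort -u
\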

The way public headers are included normally determines where they are
installed. If they are included with a subdirectory, then they are normally
installed into the same subdirectory in, say, \c{/usr/include/}. Continuing
with the above example, a header that is included as
\c{<foo/util.h>} would
normally be installed as \c{/usr/include/foo/util.h}. On the other hand, if
the library name is part of the header name, then the headers are usually (but
not always) installed directly into, say, \c{/usr/include/}, for example as
\c{/usr/include/foo-util.h}.
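
To summarize, for our hypothetical \c{libfoo} the two schemes would
typically produce the following installed headers:

\
/usr/include/foo/util.h  # included as <foo/util.h>
/usr/include/foo-util.h  # included as <foo-util.h>
\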

\N|While these are the commonly used installation schemes, there are
deviations. In particular, in both cases upstream may choose to add an
additional subdirectory when installing (so the above examples will instead
end up with, say, \c{/usr/include/foo-v1/foo/util.h} and
\c{/usr/include/foo-v1/sub/foo-util.h}). See
\l{#howto-extra-header-install-subdir How do I handle extra header
installation subdirectory?} if you encounter such a case.|

The inclusion scheme would normally also be recreated in the upstream source
code layout. In particular, if upstream includes public headers with a
subdirectory prefix, then this subdirectory would normally also be present in
the upstream layout so that such a header can be included from the
upstream codebase directly. As an example, let's say we determined
that public headers of \c{libfoo} are included with the \c{foo/}
subdirectory, such as \c{<foo/util.hpp>}. One of the typical
upstream layouts for such a library
would look like this:

\
$ tree upstream/
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    ├── priv.hpp
    └── util.cpp
\

Notice how the \c{util.hpp} header is in the \c{foo/} subdirectory rather
than in \c{include/} directly.

The second key piece of information we need to find is whether and,
if so,
how the public headers and sources are split. For instance, in the above
example, we can see that public headers go into \c{include/} while sources and
private headers go into \c{src/}. But they could also be combined in the same
directory, for example, as in the following layout:

\
upstream/
└── foo/
    ├── priv.hpp
    ├── util.cpp
    └── util.hpp
\

\N|In multi-package projects, for example, those that provide both a library
and an executable, you would also want to understand how the sources are split
between the packages.|

If the headers and sources are split into different directories, then the
source directory may or may not have the inclusion subdirectory, similar to
the header directory. In the above split layout the \c{src/} directory doesn't
contain the inclusion subdirectory (\c{foo/}) while the following layout does:

\
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    └── foo/
        ├── priv.hpp
        └── util.cpp
\

With the understanding of these key properties of upstream layout you should
be in a good position to start crafting the \l{bdep-new(1)} command line that
recreates it.

\N|The \c{bdep-new} documentation uses a slightly more general terminology
compared to what we used in the previous section in order to also be
applicable to projects that use modules instead of headers.

Specifically, the inclusion subdirectory (\c{foo/}) is called \i{source
subdirectory} while the header directory (\c{include/}) and source directory
(\c{src/}) are called \i{header prefix} and \i{source prefix}, respectively.|


\h2#core-package-craft-cmd|Craft \c{bdep new} command line to create package|

The recommended procedure for this step is to read through
\c{bdep-new}'s
\l{bdep-new.xhtml#src-layout SOURCE LAYOUT} section (which contains a large
number of examples) while experimenting with various options in an attempt to
create the desired layout. If the layout you've got isn't quite right yet,
simply remove the package directory along with the \c{packages.manifest} file
and try again.

\N|Next to \c{packages.manifest}, \c{bdep-new} will also create the \"glue\"
\c{buildfile} that allows building all the packages from the repository
root. You don't need to remove it when re-creating the package.|

Let's illustrate this approach on the first split layout from the previous
section:

\
upstream/
├── include/
│   └── foo/
│       └── util.hpp
└── src/
    ├── priv.hpp
    └── util.cpp
\

We know it's split, so let's start with that and see what we get. Remember,
our \c{foo} package repository that we have cloned and initialized earlier
looks like this:

\
$ tree foo/
foo/
├── upstream/
├── .gitattributes
├── .gitignore
├── README.md
└── repositories.manifest
\

Now we create the \c{libfoo} package inside:

\
$ cd foo/
$ bdep new --package --lang c++ --type lib,split libfoo
$ tree libfoo/
libfoo/
├── include/
│   └── libfoo/
│       └── foo.hxx
└── src/
    └── libfoo/
        └── foo.cxx
\

The outer structure looks right, but inside \c{include/} and \c{src/} things
are a bit off. Specifically, the source subdirectory should be \c{foo/}, not
\c{libfoo/}, there shouldn't be one inside \c{src/}, and the file extensions
don't match upstream. All this can be easily tweaked, however:

\
$ rm -r libfoo/ packages.manifest
$ bdep new --package \
  --lang c++,cpp     \
  --type lib,split,subdir=foo,no-subdir-source \
  libfoo
$ tree libfoo/
libfoo/
├── include/
│   └── foo/
│       └── foo.hpp
└── src/
    └── foo.cpp
\

The other \c{bdep-new} sub-options (see the \l{bdep-new(1)} man page for the
complete list) that you will likely want to use when packaging a third-party
project include:

\dl|

\li|\n\cb{no-version}

Omit the auto-generated version header. Usually upstream will
provide its own equivalent of this functionality.

\N|Note that even if upstream doesn't provide any version information, it's
not a good idea to try to rectify this by providing your own version header
since upstream may add it in a future version and you may end up with a
conflict. Instead, work with the project authors to rectify this upstream.||

\li|\n\cb{no-symexport}\n\cb{auto-symexport}

The \c{no-symexport} sub-option suppresses the generation of the DLL symbol
exporting header. This is an appropriate option if upstream provides its
own symbol exporting arrangements.

The \c{auto-symexport} sub-option enables automatic DLL symbol exporting
support (see \l{b#cc-auto-symexport Automatic DLL Symbol Exporting} for
background). This is an appropriate option if upstream relies on similar
support in the existing build system. It is also recommended that you give
this functionality a try even if upstream does not support building
shared libraries on Windows.|

\li|\n\cb{binless}

Create a header-only library. See \l{#dont-header-only Don't make
library header-only if it can be compiled} and
\l{https://github.com/build2/HOWTO/blob/master/entries/make-header-only-library.md
How do I make a header-only C/C++ library?}|

\li|\n\cb{buildfile-in-prefix}

Place header/source \c{buildfiles} into the header/source prefix directory
instead of source subdirectory. To illustrate the difference, compare these
two auto-generated layouts paying attention to the location of \c{buildfiles}:

\
$ bdep new ... --type lib,split,subdir=foo libfoo
$ tree libfoo/
libfoo/
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
└── src/
    └── foo/
        ├── buildfile
        └── foo.cpp
\

\
$ bdep new ... --type lib,split,subdir=foo,buildfile-in-prefix libfoo
$ tree libfoo/
libfoo/
├── include/
│   ├── foo/
│   │   └── foo.hpp
│   └── buildfile
└── src/
    ├── foo/
    │   └── foo.cpp
    └── buildfile
\

Note that this sub-option only makes sense if we have the header and/or source
prefixes (\c{include/} and \c{src/} in our case) as well as the source
subdirectory (\c{foo/} in our case).

Why would we want to do this? The main reason is to be able to symlink the
entire upstream directories rather than individual files. In the first
listing, the generated \c{buildfiles} are inside the \c{foo/} subdirectories
which means we cannot just symlink \c{foo/} from upstream.

With a large number of files to symlink, this can be such a strong motivation
that it may make sense to invent a source subdirectory in the source prefix
even if upstream doesn't have one. See \l{#dont-main-target-root-buildfile
Don't build your main targets in the root \c{buildfile}} for details on this
technique.

Another reason we may want to move \c{buildfiles} to prefix is to be able to
handle upstream projects that have multiple source subdirectories. While this
situation is not very common in the header prefix, it can be encountered in the
source prefix of more complex projects, where upstream wishes to organize the
source files into components.||
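
To illustrate the multi-directory case, here is a hypothetical
source prefix with two components where, thanks to
\c{buildfile-in-prefix}, both component directories can be symlinked
from upstream as a whole (the component names are made up):

\
libfoo/
└── src/
    ├── core/  -> ../../upstream/src/core/
    ├── util/  -> ../../upstream/src/util/
    └── buildfile
\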

\N|If upstream uses a mixture of C and C++, then it's recommended to set this
up using the \c{--lang} sub-option of \c{bdep-new}. For example:

\
$ bdep new --lang c++,c ...
\

|

Continuing with our \c{libfoo} example, assuming upstream provides
its own symbol
exporting, the final \c{bdep-new} command line would be:

\
$ bdep new --package \
  --lang c++,cpp     \
  --type lib,split,subdir=foo,no-subdir-source,no-version,no-symexport \
  libfoo
\

When packaging an executable, things are usually quite a bit
simpler: there is no version header or symbol exporting, and the
layout is normally combined (since there are no public headers).
Typically the only potentially tricky decision you will need to make
is whether to use \i{prefix} or \i{source subdirectory}. Most likely
it will be \i{prefix} since most executable projects will use the
\c{\"\"} style inclusion for their own headers. For example:

\
$ bdep new --package \
  --lang c++         \
  --type exe,no-subdir,prefix=foo \
  foo
\


\h2#core-package-review|Review and test auto-generated \c{buildfile} templates|

Let's get a more complete view of what got generated by the final \c{bdep-new}
command line from the previous section:

\
$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
├── src/
│   ├── buildfile
│   └── foo.cpp
├── tests/
│   ├── build/
│   │   └── ...
│   ├── basics/
│   │   ├── buildfile
│   │   └── driver.cpp
│   └── buildfile
├── buildfile
├── manifest
└── README.md
\

Once the overall layout looks right, the next step is to take a closer look at
the generated \c{buildfiles} to make sure that overall they match the upstream
build. Of particular interest are the header and source directory
\c{buildfiles} (\c{libfoo/include/foo/buildfile} and
\c{libfoo/src/buildfile}
in the above listing) which define how the library is built and installed.

Here we are focusing on the macro-level differences that are easier to change
by tweaking the \c{bdep-new} command line rather than manually. For example,
if we look at the generated source directory \c{buildfile} and realize it
builds a \i{binful} library (that is, a library that includes source files and
therefore produces library binaries) while the upstream library is header-only,
it is much easier to fix this by re-running \c{bdep-new} with the \c{binless}
sub-option than by changing the \c{buildfiles} manually.

\N|Don't be tempted to start making manual changes at this stage even if you
cannot see anything else that can be fixed with a \c{bdep-new} re-run. This
is still a dry-run and we will recreate the package one more time in the
following section before starting manual adjustments.|

Besides examining the generated \c{buildfiles}, it's also a good idea to
build, test, and install the generated package to make sure everything ends up
where you expected and matches upstream where necessary. In particular, make
sure public headers are installed into the same location as upstream (unless
you have decided to deviate, of course) or at least it's clear how to tweak
the generated \c{buildfiles} to achieve this.

\N|The \c{bdep-new}-generated library is a simple \"Hello, World!\" example
that can nevertheless be built, tested, and installed. The idea here is to
verify it matches upstream using the generated source files before replacing
them with the upstream source file symlinks.

If you are using Windows, then you will need to temporarily replace the
\c{no-symexport} sub-option with \c{auto-symexport} in order to make the
generated library buildable. But do not forget to drop this sub-option on
the next step.|

Note that at this stage it's easiest to build, test, and install in
the source directory directly, skipping the \c{bdep} initialization
of the package (which we would have to de-initialize before we could
re-run \c{bdep-new}). Continuing with the above example, the
recommended sequence of commands would be:

\
$ cd libfoo/
$ b update
$ b test
$ rm -rf /tmp/install
$ b install config.install.root=/tmp/install
$ b clean
\
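
At this point it's also worth inspecting the installed file tree and
comparing it against the upstream installation. For our generated
\c{libfoo} skeleton the headers would end up along these lines (the
exact contents may vary):

\
$ tree /tmp/install/include/
/tmp/install/include/
└── foo/
    └── foo.hpp
\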

\N|One relatively common case where the installation location may
not match upstream is libraries that include their headers without
the subdirectory prefix (for example, \c{<util.h>} instead of
\c{<foo/util.h>}). In such cases, in the \c{bdep-new} command, you
want to use \i{prefix} rather than \i{source subdirectory} (with the
latter being the default). For example:

\
$ bdep new --type lib,no-subdir,prefix=foo ...
\

See \l{bdep-new.xhtml#src-layout SOURCE LAYOUT} for details.|

Let's also briefly discuss other subdirectories and files found in the
\c{bdep-new}-generated \c{libfoo} package.

The \c{build/} subdirectory is the standard \c{build2} place for project-wide
build system information (see \l{b#intro-proj-struct Project Structure} for
details). We will look closer at its contents in the following sections.

In the root directory of our package we find the root \c{buildfile} and
package \c{manifest}. We will be tweaking both in the following steps. There
is also \c{README.md} which we will replace with the upstream symlink.

The \c{tests/} subdirectory is the standard \c{build2} tests subproject (see
\l{b#intro-operations-test Testing} for background and details). While you can
suppress its generation with the \c{no-tests} \c{bdep-new} sub-option, we
recommend that you keep it and use it as a starting point for porting upstream
tests or, if upstream doesn't provide any, for a basic \"smoke test\".

\N|You can easily add/remove/rename this \c{tests/} subproject. The only place
where it is mentioned explicitly and where you will need to make changes is
the root \c{buildfile}. In particular, if upstream provides examples that you
wish to port, it is recommended that you use a copy of the generated
\c{tests/} subproject as a starting point (not forgetting to add the
corresponding entry in the root \c{buildfile}).|
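
As an illustration, if you were to copy \c{tests/} to \c{examples/},
the corresponding entries in the root \c{buildfile} would look along
these lines (the exact form of the generated \c{buildfile} may
differ):

\
./: {*/ -build/} doc{README.md} legal{LICENSE} manifest

# Don't install tests or examples.
#
tests/: install = false
examples/: install = false
\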


\h2#core-package-create|Create final package|

If you are satisfied with the \c{bdep-new} command line and there are no more
automatic adjustments you can squeeze out of it, then it's time to re-run
\c{bdep-new} one last time to create the final package.

\N|While redoing this step later will require more effort, especially if
you've made manual modifications to \c{buildfile} and \c{manifest}, nothing is
set in stone and it can be done again by simply removing the package
directory, removing (or editing, if you have multiple packages and only want
to redo some of them) \c{packages.manifest}, and starting over.|

This time, however, we will do things a bit differently in order to take
advantage of some additional automation offered by \c{bdep-new}.

Firstly, we will use the special \c{third-party} sub-option which is meant for
converting third-party projects to \c{build2}. Specifically, this sub-option
automatically enables \c{no-version} and \c{no-symexport} (unless
\c{auto-symexport} is specified). It also adds a number of values to
\c{manifest} that make sense to specify in a package of a third-party
project. Finally, it generates the \c{PACKAGE-README.md} template which
describes how to use the package from a \c{build2}-based project (see the
\l{bpkg#manifest-package-description \c{package-description}} \c{manifest}
value for background).

Secondly, if the package directory already exists and contains certain files,
\c{bdep-new} can take this into account when generating the root \c{buildfile}
and package \c{manifest}. In particular, it will try to guess the license from
the \c{LICENSE} file and extract the summary from \c{README.md} and use this
information in \c{manifest} as well as generated \c{PACKAGE-README.md}.

\N|If the file names or formats used by upstream don't match those recognized
by \c{bdep-new}, then for now simply omit the corresponding files from the
package directory and add them later manually. Similarly, if an attempt to
extract the information is unsuccessful, we will have a chance to
adjust it in
\c{manifest} later.

Specifically, for \c{README}, \c{bdep-new} recognizes \c{README.md},
\c{README.txt} and \c{README} but will only attempt to extract the summary
from \c{README.md}.

For license files, \c{bdep-new} recognizes \c{LICENSE},
\c{LICENSE.txt}, \c{LICENSE.md}, \c{COPYING}, and \c{UNLICENSE}.

For changes files, \c{bdep-new} recognizes \c{NEWS}, \c{CHANGES}, and
\c{CHANGELOG} in various cases as well as with the \c{.md} and
\c{.txt} extensions.|

Continuing with our \c{libfoo} example and assuming upstream provides the
\c{README.md}, \c{LICENSE}, and \c{NEWS} files, we first manually create the
package directory, then add the symlinks, and finally run \c{bdep-new} (notice
that we have replaced \c{no-version} and \c{no-symexport} with \c{third-party}
and omitted the package name from the \c{bdep-new} command line since we are
running from inside the package directory):

\
$ cd foo/ # Change to the package repository root.

$ rm -r libfoo/ packages.manifest
$ mkdir libfoo/

$ cd libfoo/ # Change to the package root.
$ ln -s ../upstream/README.md ./
$ ln -s ../upstream/LICENSE   ./
$ ln -s ../upstream/NEWS      ./

$ bdep new --package \
  --lang c++,cpp     \
  --type lib,split,subdir=foo,no-subdir-source,third-party
\

The final contents of our package will look like this (\c{->} denotes a
symlink):

\
$ cd ../
$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
├── src/
│   ├── buildfile
│   └── foo.cpp
├── tests/
│   ├── build/
│   │   └── ...
│   ├── basics/
│   │   ├── buildfile
│   │   └── driver.cpp
│   └── buildfile
├── buildfile
├── manifest
├── NEWS       -> ../upstream/NEWS
├── LICENSE    -> ../upstream/LICENSE
├── README.md  -> ../upstream/README.md
└── PACKAGE-README.md
\

If auto-detection of \c{README}, \c{LICENSE}, and \c{NEWS} succeeds, then you
should see the \c{summary} and \c{license} values automatically populated in
\c{manifest} and the symlinked files listed in the root \c{buildfile}.


\h2#core-package-adjust-version|Adjust package version|

While adjusting the \c{bdep-new}-generated code is the subject of the
following sections, one tweak that we want to make right away is to change the
package version in the \c{manifest} file.

In this guide we will assume the upstream package uses semver (semantic
version) or semver-like (that is, has three version components) and will rely
on the \i{continuous versioning} feature of \c{build2} to make sure that each
commit in our package repository has a distinct version (see
\l{intro#guide-versioning-releasing Versioning and Release Management} for
background).

\N|If upstream does not use semver, then see
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-semver.md
How do I handle projects that don't use semantic versioning?} and
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-projects-which-dont-use-version.md
How do I handle projects that don't use versions at all?} for available
options. If you decide to use the non-semver upstream version as is, then you
will have to forgo \i{continuous versioning} as well as the use of
\l{bdep-release(1)} for release management. The rest of the guide, however,
will still apply. In particular, you will still be able to use \l{bdep-ci(1)}
and \l{bdep-publish(1)} with a bit of extra effort.|

The overall plan to implement continuous versioning is to start with a
pre-release snapshot of the upstream version, keep it like that while we are
adjusting the \c{bdep-new}-generated package and committing our changes (at
which point we get distinct snapshot versions), and finally, when the package
is ready to publish, change to the final upstream version with the help of
\l{bdep-release(1)}. Specifically, if the upstream version is
\c{\i{X}.\i{Y}.\i{Z}}, then we start with the \c{\i{X}.\i{Y}.\i{Z}-a.0.z}
pre-release snapshot.

\N|In continuous versioning \c{\i{X}.\i{Y}.\i{Z}-a.0.z} means a snapshot after
the (non-existent) 0th alpha pre-release of the \c{\i{X}.\i{Y}.\i{Z}} version.
See \l{intro#guide-versioning-releasing Versioning and Release Management}
for a more detailed explanation and examples.|

Let's see how this works for our \c{libfoo} example. Say, the upstream version
that we are packaging is \c{2.1.0}. This means we start with \c{2.1.0-a.0.z}.
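
As we then commit changes to the package, the \c{z} component is
automatically replaced with the commit timestamp and id, producing
distinct snapshot versions along these lines (the timestamp and
commit id shown are made up):

\
2.1.0-a.0.20240515123456.a1b2c3d4e5f6
\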

\N|Naturally, the upstream version that we are using should correspond to the
commit of the \c{upstream} submodule we have added on the
\l{#core-repo-submodule Add upstream repository as \c{git} submodule} step.|

Next we edit the \c{manifest} file in the \c{libfoo} package and change the
\c{version} value to read:

\
version: 2.1.0-a.0.z
\

Let's also commit this initial state of the package for easier rollbacks:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Initialize package\"
\


\h#core-fill|Fill package with source code and add dependencies|

With the package skeleton ready, the next steps are to fill it with upstream
source code, add dependencies, and make any necessary manual adjustments to
the generated \c{buildfiles}, \c{manifest}, etc. If we do this all at once,
however, it can be hard to pinpoint the cause of build failures. For
example, if we convert both the library and its tests right away and
something doesn't work, it can be hard to determine whether the
mistake is in the library or in the tests. As a result, we are going
to split this work into a sequence of smaller steps that
incrementally replace the \c{bdep-new}-generated code with
upstream while allowing us to test each change individually. We will also
commit the changes on each step for easy rollbacks. Specifically, the overall
plan is as follows:

\ol|

\li|Initialize (\c{bdep-init}) the package in one or more build configurations.|

\li|Add dependencies, if any.|

\li|Fill the library with upstream source code.|

\li|Adjust project-wide and source subdirectory \c{buildfiles}.|

\li|Make a smoke test for the library.|

\li|Replace the smoke test with upstream tests.|

\li|Tweak root \c{buildfile} and \c{manifest}.||

The first three steps are the subject of this section with the following
sections covering the rest of the plan.

\N|As you become more experienced with packaging third-party projects for
\c{build2}, it may make sense to start combining or omitting some steps,
especially for simpler libraries. For example, if you see that a library
comes with a simple test that shouldn't cause any complications, then you
could omit the smoke test.|


\h2#core-fill-init|Initialize package in build configurations|

Before we start making any changes to the \c{bdep-new}-generated files, let's
initialize the package in at least one build configuration so that we are able
to build and test our changes (see \l{intro#guide Getting Started Guide} for
background on \c{bdep}-based development workflow). Continuing with our
\c{libfoo} example from the earlier steps:

\
$ cd foo/ # Change to the package repository root.
$ bdep init -C ../foo-gcc @gcc cc config.cxx=g++
\

\N|If you are initializing subsequent packages in the already created
configuration, then the command line will be just:

\
$ bdep init @gcc
\

|

Let's build and test the \c{bdep-new}-generated package to make sure
everything is in order:

\
$ bdep update
$ bdep test
$ bdep clean
\

You can create additional configurations, for example, if you have access to
several compilers. For instance, to create a build configuration for Clang:

\
$ bdep init -C ../foo-clang @clang cc config.cxx=clang++
\

If you would like to perform a certain operation on all the build
configurations, pass the \c{-a|--all} flag to \c{bdep}:

\
$ bdep update -a
$ bdep test -a
$ bdep clean -a
\

Let's also verify that the resulting package repository is clean (doesn't have
any uncommitted or untracked files):

\
$ git status
\


\h2#core-fill-depend|Add dependencies|

If the upstream project has any dependencies, now is a good time to specify
them so that when we attempt to build the upstream source code, they are
already present.

Identifying whether the upstream project has dependencies is not always easy.
The natural first places to check are the documentation and the existing build
system. Sometimes projects also bundle their dependencies with the project
source code (also called vendoring). So it makes sense to look around the
upstream repository for anything that looks like bundled dependencies.
Normally we would need to \"unbundle\" such dependencies when converting to
\c{build2} by instead specifying a dependency on an external package (see
\l{#dont-bundle Don't bundle dependencies} for background).

\N|While there are several reasons we insist on unbundling of dependencies,
the main one is that bundling can cause multiple, potentially conflicting
copies of the same dependency to exist in the build. This can cause
subtle build failures that are hard to understand and track down.|

One particularly common case to check for is bundling of the testing
framework, such as \l{https://cppget.org/catch2 \c{catch2}}, by C++
projects. If you have identified that the upstream tests depend on a testing
framework (whether bundled or not), see
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for the recommended
way to deal with that.

\N|One special type of dependency which is easy to overlook is between
packages in the same package repository. For example, if we were packaging
both \c{libfoo} as well as the \c{foo} executable that depends on it, then the
\c{foo} package has a dependency on \c{libfoo} and it must be specified. In
this case we don't need to add anything to \c{repositories.manifest} and in
the \c{depends} entry (see below) in \c{foo}'s \c{manifest} we will normally
use the special \c{==\ $} version constraint, meaning \c{libfoo} should have
the same version as \c{foo} (see the \l{bpkg#manifest-package-depends
\c{depends} package \c{manifest} value} for details). For example:

\
depends: libfoo == $
\

|

If you have concluded that the upstream project doesn't have any dependencies,
then you can remove \c{repositories.manifest} from the package repository root
(unless you have already done so), commit this change, and skip the rest of
this section.

And if you are still reading, then we assume you have a list of dependencies
you need to add, preferably with their minimum required versions. If you could
not identify the minimum required version for a dependency, then you can
fall back to the latest available version, as will be described in a
moment.

With the list of dependencies in hand, the next step is to determine whether
they are already available as \c{build2} packages. For that, head over to
\l{https://cppget.org cppget.org} and search for each dependency.

If you are unable to find a package for a dependency, then it means it hasn't
been packaged for \c{build2} yet. Check the places mentioned in the
\l{#core-repo-exists Check if package repository already exists} step to see
if perhaps someone is already working on the package. If not and the
dependency is not optional, then the only way forward is to first package
the dependency.

If you do find a package for a dependency, then note the section of the
repository (\c{stable}, \c{testing}, etc.; see \l{intro#guide-repositories
Package Repositories} for background) from which the minimum required version
of the package is available. If you were unable to identify the minimum
required version, then note the latest version available from the \c{stable}
section.

Given the list of repository sections, edit the \c{repositories.manifest} file
in the package repository root and uncomment the entry for \c{cppget.org}:

\
:
role: prerequisite
location: https://pkg.cppget.org/1/stable
#trust: ...
\

Next, replace \c{stable} at the end of the \c{location} value with the least
stable section from your list. For example, if your list contains \c{stable},
\c{testing}, and \c{beta}, then you need \c{beta} (the sections form a
hierarchy and so \c{beta} includes \c{testing} which in turn includes
\c{stable}).

\N|If you wish, you can also uncomment the \c{trust} value and replace \c{...}
with the \l{https://cppget.org/?about repository fingerprint}. This way you
won't be prompted to confirm the repository authenticity on first fetch. See
\l{intro#guide-add-remove-deps Adding and Removing Dependencies} for details.|

Once this is done, edit \c{manifest} in the package root and add the \c{depends}
value for each dependency. See \l{intro#guide-add-remove-deps Adding and
Removing Dependencies} for background. In particular, here you will use the
minimum required version (or the latest available) to form a version
constraint. Which constraint operator to use will depend on the dependency's
versioning policies. If the dependency uses semver, then a \c{^}-based
constraint is a sensible default.
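If the dependency does not follow semver, other constraint operators may be
more appropriate. For example (the package names and versions here are
hypothetical):

\
depends: libbar ~1.2.3  ; Patch-level updates only.
depends: libbaz >= 2.4.0
\
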

As an example, let's say our \c{libfoo} depends on \c{libz}, \c{libasio}, and
\c{libsqlite3}. To specify these dependencies we would add the following
entries to its \c{manifest}:

\
depends: libz ^1.2.0
depends: libasio ^1.28.0
depends: libsqlite3 ^3.39.4
\

With all the dependencies specified, let's now synchronize the state of the
build configurations with our changes by running \l{bdep-sync(1)} from the
package repository root:

\
$ bdep sync -a
\

This command should first fetch the metadata for the repository we specified
in \c{repositories.manifest} and then fetch, unpack and configure each
dependency that we specified in \c{manifest}.

\N|If you have any build-time dependencies (see
\l{intro#guide-build-time-linked Build-Time Dependencies and Linked
Configurations} for background), then you will get a warning about the
corresponding \c{config.import.*} variable being unused and therefore
dropped. This is because we haven't yet added the corresponding \c{import}
directives to our \c{buildfiles}. For now you can ignore this warning, which
we will fix later, when we adjust the generated \c{buildfiles}.|

We can examine the resulting state, including the version of each dependency,
with \l{bdep-status(1)}:

\
$ bdep status -ai
\

The last step for this section is to commit our changes:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add dependencies\"
\


\h2#core-fill-source|Fill with upstream source code|

Now we are ready to begin replacing the \c{bdep-new}-generated files with
upstream source code symlinks. We start with the library's header and source
files. Continuing with our \c{libfoo} example, this is what we currently have
(notice that \c{LICENSE}, \c{README.md}, and \c{NEWS} are already symlinks to
upstream):

\
$ cd foo/ # Change to the package repository root.

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       └── foo.hpp
├── src/
│   ├── buildfile
│   └── foo.cpp
├── tests/
│   └── ...
├── buildfile
├── manifest
├── NEWS       -> ../upstream/NEWS
├── LICENSE    -> ../upstream/LICENSE
├── README.md  -> ../upstream/README.md
└── PACKAGE-README.md
\

Now we replace the generated \c{include/foo/foo.hpp} with the library's real
headers and \c{src/foo.cpp} with its real source files:

\
$ cd libfoo/ # Change to the package root.

$ cd include/foo/
$ rm foo.hpp
$ ln -s ../../../upstream/include/foo/*.hpp ./
$ cd -

$ cd src/
$ rm foo.cpp
$ ln -s ../../upstream/src/*.hpp ./
$ ln -s ../../upstream/src/*.cpp ./
$ cd -

$ cd .. # Back to the package repository root.

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       ├── core.hpp -> ../../../upstream/include/foo/core.hpp
│       └── util.hpp -> ../../../upstream/include/foo/util.hpp
├── src/
│   ├── buildfile
│   ├── impl.hpp -> ../../upstream/src/impl.hpp
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

Note that the wildcards used above may not be enough in all situations and
it's a good idea to manually examine the relevant upstream directories and
make sure nothing is missing. Specifically, look out for:

\ul|

\li|Headers/sources with other extensions, for example, C, Objective-C, etc.|

\li|Other files that may be needed, for example, \c{.def}, \c{config.h.in}, etc.|

\li|Subdirectories that contain more header/source files.||

If upstream contains subdirectories with additional header/source files, then
you can symlink entire subdirectories instead of doing it file by file. For
example, let's say \c{libfoo}'s upstream source directory contains the
\c{impl/} subdirectory with additional source files:

\
$ cd src/
$ ln -s ../../upstream/src/impl ./
$ cd -

$ cd .. # Back to the package repository root.

$ tree libfoo/
libfoo/
├── build/
│   └── ...
├── include/
│   └── ...
├── src/
│   ├── impl/ -> ../../upstream/src/impl/
│   │   ├── bar.cpp
│   │   └── baz.cpp
│   ├── buildfile
│   ├── impl.hpp -> ../../upstream/src/impl.hpp
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

Wouldn't it be nice if we could symlink the entire top-level subdirectories
(\c{include/foo/} and \c{src/} in our case) instead of symlinking individual
files? As discussed in \l{#core-package-craft-cmd Craft \c{bdep new} command
line to create package}, we can but we will need to change the package layout.
Specifically, we will need to move the \c{buildfiles} out of the source
subdirectories with the help of the \c{buildfile-in-prefix} sub-option of
\c{bdep-new}. In the above case, we will also need to invent a source
subdirectory in \c{src/}. Whether this is a worthwhile change largely depends
on how many files you have to symlink individually. If it's just a handful,
then it's probably not worth the complication, especially if you have to
invent source subdirectories. On the other hand, if you are looking at
symlinking hundreds of files, changing the layout makes perfect sense.

\N|One minor drawback of symlinking entire directories is that you cannot
easily patch individual upstream files (see \l{#howto-patch-upstream-source
How do I patch upstream source code?}).

You will also need to explicitly list such directories as symlinks in
\c{.gitattributes} if you want your package to be usable from the \c{git}
repository directly on Windows. See
\l{https://build2.org/article/symlinks.xhtml#windows Symlinks and Windows} for
details.|

We won't be able to test this change yet because to make things build we will
most likely also need to tweak the generated \c{buildfiles}, which is the
subject of the next section. However, it still makes sense to commit our
changes to make rollbacks easier:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add upstream source symlinks\"
\


\h#core-adjust-build|Adjust project-wide and source \c{buildfiles}|

With source code and dependencies added, the next step is to adjust the
generated \c{buildfiles} that build the library. This involves two places:
the project-wide build system files in \c{build/} and the source subdirectory
\c{buildfiles} (in \c{include/} and \c{src/} for our \c{libfoo} example).


\h2#core-adjust-build-wide|Adjust project-wide build system files in \c{build/}|

We start with reviewing and adjusting the files in the \c{build/} subdirectory
of our package, where you will find three files:

\
$ cd foo/ # Change to the package repository root.
$ tree libfoo/
libfoo/
├── build/
│   ├── bootstrap.build
│   ├── root.build
│   └── export.build
└── ...
\


To recap, the first two contain the project-wide build system setup (see
\l{b#intro-proj-struct Project Structure} for details) while the last is an
export stub that facilitates the importation of targets from our package (see
\l{b#intro-import Target Importation} for details).

Normally you don't need to change anything in \c{bootstrap.build} \- all it
does is specify the build system project name and load a standard set of core
build system modules. Likewise, \c{export.build} is good as generated unless
you need to do something special, like exporting targets from different
subdirectories of your package.
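For reference, the generated \c{bootstrap.build} normally looks along these
lines (the exact module set may vary slightly with the \c{bdep-new} options
used):

\
project = libfoo

using version
using config
using test
using install
using dist
\
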

While \c{root.build} is also often good as is, situations where you may
need to tweak it are not uncommon and include:

\ul|

\li|Loading additional build system modules.

    For example, if your package makes use of Objective-C/C++ (see \l{b#c-objc
    Objective-C Compilation} and \l{b#cxx-objcxx Objective-C++ Compilation})
    or Assembler (see \l{b#c-as-cpp Assembler with C Preprocessor
    Compilation}), then \c{root.build} would be the natural place to load the
    corresponding modules.

    \N|If your package uses a mixture of C and C++, then it's recommended to
    set this up using the \c{--lang} sub-option of \c{bdep-new} rather
    than manually. For example:

    \
    $ bdep new --lang c++,c ...
    \

    ||

\li|Specifying package configuration variables.

    If upstream provides the ability to configure their code, for example to
    enable optional features, then you may want to translate this to
    \c{build2} configuration variables, which are specified in \c{root.build}
    (see \l{b#proj-config Project Configuration} for background and details).

    Note that you don't need to add all the configuration variables right
    away. Instead, you could first handle the \"core\" functionality which
    doesn't require any configuration and then add the configuration variables
    one by one while also making the corresponding changes in \c{buildfiles}.

    \N|One type of configuration that you should normally not expose when
    packaging for \c{build2} is support for both header-only and compiled
    modes. See \l{b#dont-header-only Don't make library header-only if it
    can be compiled} for details.|||
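To illustrate the second point, a \c{config.libfoo.debug} variable (the name
and default are illustrative) could be declared in \c{root.build} along these
lines:

\
config [bool] config.libfoo.debug ?= false
\

A non-default value can then be specified on the command line, for example,
\c{config.libfoo.debug=true}.
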

Also, in C++ projects, if you don't have any inline or template files, then
you can drop the assignment of the file extension for the \c{ixx{\}} and
\c{txx{\}} target types, respectively.

If you have added any configuration variables and would like to use
non-default values for some of them in your build, then you will need to
reconfigure the package. For example, let's say we have added the
\c{config.libfoo.debug} variable to our \c{libfoo} package which enables
additional debugging facilities in the library. This is how we can reconfigure
all our builds to enable this functionality:

\
$ bdep sync -a config.libfoo.debug=true
\

If you have made any changes, commit them (similar to the previous step, we
cannot test things just yet):

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust project-wide build system files\"
\


\h2#core-adjust-build-src|Adjust source subdirectory \c{buildfiles}|

The last step we need to perform before we can try to build our library is to
adjust its \c{buildfiles}. These \c{buildfiles} are found in the source
subdirectory or, if we used the \c{buildfile-in-prefix} \c{bdep-new}
sub-option, in the prefix directory. There will be two \c{buildfiles} if we
use the split layout (\c{split} sub-option) or a single \c{buildfile} in the
combined layout. The single \c{buildfile} in the combined layout contains
essentially the same definitions as the split \c{buildfiles} but combined into
one and with some minor simplifications that this allows. Here we will assume
the split layout and continue with our \c{libfoo} from the previous
sections. To recap, here is the layout we've got with the \c{buildfiles} of
interest found in \c{include/foo/} and in \c{src/}:

\
libfoo/
├── build/
│   └── ...
├── include/
│   └── foo/
│       ├── buildfile
│       ├── core.hpp -> ../../../upstream/include/foo/core.hpp
│       └── util.hpp -> ../../../upstream/include/foo/util.hpp
├── src/
│   ├── buildfile
│   ├── impl.hpp -> ../../upstream/src/impl.hpp
│   ├── core.cpp -> ../../upstream/src/core.cpp
│   └── util.cpp -> ../../upstream/src/util.cpp
├── tests/
│   └── ...
└── ...
\

\N|If instead of a library you are packaging an executable, you can skip
directly to \l{#core-adjust-build-src-source-exe Adjust source \c{buildfile}:
executables}.|


\h2#core-adjust-build-src-header|Adjust header \c{buildfile}|

The \c{buildfile} in \c{include/foo/} is pretty simple:

\N|The \c{buildfile} in your package may look slightly different depending
on the exact \c{bdep-new} sub-options used. However, all the relevant
definitions discussed below should still be easily recognizable.|

\
pub_hdrs = {hxx ixx txx}{**}

./: $pub_hdrs

# Install into the foo/ subdirectory of, say, /usr/include/
# recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/foo/
  install.subdirs = true
}
\

Normally the only change that you would make to this \c{buildfile} is to
adjust the installation location of headers (see \l{b#intro-operations-install
Installing} for background). In particular, if our headers were included
without the \c{foo/} prefix but instead contained the library name in
their names (for example, \c{foo-util.hpp}), then the installation setup would
instead look like this:

\
# Install directly into say, /usr/include/ recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/
  install.subdirs = true
}
\

If the library doesn't have any headers in nested subdirectories, you can
drop the \c{install.subdirs} variable:

\
# Install into the foo/ subdirectory of, say, /usr/include/.
#
{hxx ixx txx}{*}: install = include/foo/
\

\N|In the combined layout, the installation-related definitions are at the end
of the combined \c{buildfile}.

Compared to the split layout where the public and private headers are
separated physically, in the combined layout you may need to achieve the same
result (that is, avoid installing private headers) at the build system
level. If the library provides only a handful of public headers and this set
is unlikely to change often, then listing them explicitly is the most
straightforward approach. For example (the \c{@./} qualifier tells \c{build2}
they are in the source directory):

\
# Only install public headers into, say, /usr/include/.
#
h{foo}@./ h{foo_version}@./: install = include/
h{*}: install = false
\

|

See also \l{#howto-extra-header-install-subdir How do I handle extra header
installation subdirectory?}


\h2#core-adjust-build-src-source|Adjust source \c{buildfile}: overview|

Next is the \c{buildfile} in \c{src/}:

\N|Again, the \c{buildfile} in your package may look slightly different
depending on the exact \c{bdep-new} sub-options used. However, all the
relevant definitions discussed below should still be easily recognizable.

For a binless (header-only) library, this \c{buildfile} will contain only a
small subset of the definitions shown below. See
\l{https://github.com/build2/HOWTO/blob/master/entries/make-header-only-library.md
How do I make a header-only C/C++ library?} for additional considerations when
packaging header-only libraries.|

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.
#import xxxx_libs += libhello%lib{hello}

# Public headers.
#
pub = [dir_path] ../include/foo/

include $pub

pub_hdrs = $($pub/ pub_hdrs)

lib{foo}: $pub/{$pub_hdrs}

# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs

# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
                \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

#obja{*}: cxx.poptions += -DFOO_STATIC_BUILD
#objs{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED

# For pre-releases use the complete version to make sure they cannot
# be used in place of another pre-release or the final version. See
# the version module for details on the version.* variable values.
#
if $version.pre_release
  lib{foo}: bin.lib.version = \"-$version.project_id\"
else
  lib{foo}: bin.lib.version = \"-$version.major.$version.minor\"

# Don't install private headers.
#
{hxx ixx txx}{*}: install = false
\


\h2#core-adjust-build-src-source-clean|Adjust source \c{buildfile}: cleanup|

As a first step, let's remove all the definitions that we don't need in our
library. The two common pieces of functionality that are often not needed
are support for auto-generated headers (such as \c{config.h} generated from
\c{config.h.in}) and dependencies on other libraries.

If you don't have any auto-generated headers, then remove all the assignments
and expansions of the \c{out_pfx_inc} and \c{out_pfx_src} variables. Here
is what the relevant lines in the above \c{buildfile} should look like after
this change:

\
# Build options.
#
src_pfx_inc = [dir_path] $src_root/include/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$src_pfx_src\" \"-I$src_pfx_inc\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$src_pfx_inc\"
}
\

\N|If you do have auto-generated headers, then in the split layout you can
remove \c{out_pfx_inc} if you only have private auto-generated headers and
\c{out_pfx_src} if you only have public ones.|

\N|In the combined layout the single \c{buildfile} does not set the
\c{*_pfx_*} variables. Instead it uses the \c{src_root} and \c{out_root}
variables directly. For example:

\
# Build options.
#
cxx.poptions =+ \"-I$out_root\" \"-I$src_root\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_root\" \"-I$src_root\"
}
\

To remove support for auto-generated headers in the combined \c{buildfile},
simply remove the corresponding \c{out_root} expansions:

\
# Build options.
#
cxx.poptions =+ \"-I$src_root\"

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$src_root\"
}
\

If you only have private auto-generated headers, then only remove the
expansion from \c{cxx.export.poptions}.|

If you don't have any dependencies, then remove all the assignments and
expansions of the \c{intf_libs} and \c{impl_libs} variables. That is,
the following lines in the original \c{buildfile}:

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.
#import xxxx_libs += libhello%lib{hello}

# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}
\

become just these:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**}

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
}
\


\h2#core-adjust-build-src-source-dep|Adjust source \c{buildfile}: dependencies|

If you do have dependencies, then let's handle them now.

\N|Here we will assume dependencies on other libraries, which is the common
case. If you have dependencies on executables, for example, source code
generators, see \l{intro#guide-build-time-linked Build-Time Dependencies and
Linked Configurations} on how to handle that. In this case you will also need
to reconfigure your package after adding the corresponding \c{import}
directives in order to re-acquire the previously dropped \c{config.import.*}
values. Make sure to also pass any configuration variables you specified in
\l{#core-adjust-build-wide Adjust project-wide build system files in
\c{build/}}. For example:

\
$ bdep sync -a --disfigure config.libfoo.debug=true
\

|

For each library that your package depends on (and which you have added
to \c{manifest} in the \l{#core-fill-depend Add dependencies} step),
you need to first determine whether it's an interface or implementation
dependency and then import it into either the \c{intf_libs} or \c{impl_libs}
variable, respectively.

See \l{b#intro-lib Library Exportation and Versioning} for background on the
interface vs implementation distinction. But as a quick rule of thumb, if the
library you are packaging includes a header from the dependency library in one
of its public headers, then it's an interface dependency. Otherwise, it's an
implementation dependency.

Continuing with our \c{libfoo} example, as we have established in
\l{#core-fill-depend Add dependencies}, it depends on \c{libasio}, \c{libz},
and \c{libsqlite3}. Let's say we've determined that \c{libasio} is an
interface dependency because it's included from \c{include/foo/core.hpp} while
the other two are implementation dependencies because they are only included
from \c{src/}. Here is how we would change our \c{buildfile} to import them:

\
intf_libs = # Interface dependencies.
impl_libs = # Implementation dependencies.
import intf_libs += libasio%lib{asio}
import impl_libs += libz%lib{z}
import impl_libs += libsqlite3%lib{sqlite3}
\

You can tidy this a bit further if you would like:

\
import intf_libs = libasio%lib{asio}
import impl_libs = libz%lib{z}
import impl_libs += libsqlite3%lib{sqlite3}
\

\N|If you don't have any implementation or interface dependencies, you can
remove the assignment and all the expansions of the corresponding \c{*_libs}
variable.|

Note also that system libraries like \c{-lm}, \c{-ldl} on UNIX or
\c{advapi32.lib}, \c{ws2_32.lib} on Windows should not be imported. Instead,
they should be listed in the \c{c.libs} or \c{cxx.libs} variables. See
\l{https://github.com/build2/HOWTO/blob/master/entries/link-system-library.md
How do I link a system library?} for details.
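As an illustration, a sketch of listing such a system library directly
(whether \c{libfoo} actually needs \c{-ldl} is hypothetical):

\
if ($cxx.target.class != 'windows')
  cxx.libs += -ldl
\

Note that we append to \c{cxx.libs} rather than assign, for the same reason
as with the other non-export variables.
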


\h2#core-adjust-build-src-source-pub|Adjust source \c{buildfile}: public headers|

With the unnecessary parts of the \c{buildfile} cleaned up and dependencies
handled, let's discuss the common changes to the remaining definitions, going
from top to bottom. We start with the public headers block:

\
# Public headers.
#
pub = [dir_path] ../include/foo/

include $pub

pub_hdrs = $($pub/ pub_hdrs)

lib{foo}: $pub/{$pub_hdrs}
\

This block gets hold of the list of public headers and makes them
prerequisites of the library. Normally you shouldn't need to make any changes
here. If you need to exclude some headers, it should be done in the
header \c{buildfile} in the \c{include/} directory.
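For example, to exclude a hypothetical private \c{detail.hpp} from the public
headers, the wildcard in the header \c{buildfile} could be adjusted like this:

\
pub_hdrs = {hxx ixx txx}{** -detail}
\

You would then likely also want to prevent this header from being installed by
the installation pattern in the same \c{buildfile}.
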

\N|In the combined layout the single \c{buildfile} does not have such code.
Instead, all the headers are covered by the wildcard pattern in the following
block.|


\h2#core-adjust-build-src-source-src|Adjust source \c{buildfile}: sources, private headers|

The next block deals with sources, private headers, and dependencies, if any:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx ixx txx cxx}{**} $impl_libs $intf_libs
\

By default it will list all the relevant files as prerequisites of the
library, starting from the directory of the \c{buildfile} and including all
the subdirectories, recursively (see \l{b#name-patterns Name Patterns} for
background on wildcard patterns).

If your C++ package doesn't have any inline or template files, then you can
remove the \c{ixx} and \c{txx} target types, respectively (which would be
parallel to the change made in \c{root.build}; see \l{#core-adjust-build-wide
Adjust project-wide build system files in \c{build/}}). For example:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{**} $impl_libs $intf_libs
\

\N|Source files other than C/C++ (for example, Assembler, Objective-C/C++) are
dealt with in \l{#core-adjust-build-src-source-ext Adjust source
\c{buildfile}: extra requirements} below.|

The other common change to this block is the exclusion of certain files or
making them conditionally included. As an example, let's say in our \c{libfoo}
the source subdirectory contains a bunch of \c{*-test.cpp} files which are
unit tests and should not be listed as prerequisites of the library. Here is how
we can exclude them:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{** -**-test} $impl_libs $intf_libs
\

Let's also assume our \c{libfoo} contains \c{impl-win32.cpp} and
\c{impl-posix.cpp} which provide alternative implementations of the same
functionality for Windows and POSIX and therefore should only be included as
prerequisites on the respective platforms. Here is how we can handle that:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{** -impl-win32 -impl-posix -**-test}
lib{foo}: cxx{impl-win32}: include = ($cxx.target.class == 'windows')
lib{foo}: cxx{impl-posix}: include = ($cxx.target.class != 'windows')
lib{foo}: $impl_libs $intf_libs
\

There are two nuances in the above example worth highlighting. Firstly, we
have to exclude the files from the wildcard pattern before we can
conditionally include them. Secondly, we have to always link libraries
last. In particular, the following is a shorter but incorrect version of the
above:

\
lib{foo}: {hxx cxx}{** -impl-win32 -impl-posix -**-test} \
          $impl_libs $intf_libs
lib{foo}: cxx{impl-win32}: include = ($cxx.target.class == 'windows')
lib{foo}: cxx{impl-posix}: include = ($cxx.target.class != 'windows')
\

\N|You may also be tempted to use the \c{if} directive instead of the
\c{include} variable for conditional prerequisites. For example:

\
if ($cxx.target.class == 'windows')
  lib{foo}: cxx{impl-win32}
else
  lib{foo}: cxx{impl-posix}
\

This would also be incorrect. For background and details, see
\l{https://github.com/build2/HOWTO/blob/master/entries/keep-build-graph-config-independent.md
How do I keep the build graph configuration-independent?}|


\h2#core-adjust-build-src-source-opt|Adjust source \c{buildfile}: build and export options|

The next two blocks are the build and export options, which we will discuss
together:

\
# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
                \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

#obja{*}: cxx.poptions += -DFOO_STATIC_BUILD
#objs{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

The build options are in effect when the library itself is being built and the
exported options are propagated to the library consumers (see \l{b#intro-lib
Library Exportation and Versioning} for background on exported options). For
now we will ignore the commented out lines that add \c{-DFOO_STATIC*} and
\c{-DFOO_SHARED*} macros \- they are for symbol exporting and we will discuss
this topic separately.

If the library you are packaging only relies on platform-independent APIs,
then chances are you won't need to change anything here. On the other hand, if
it does anything platform-specific, then you will most likely need to add some
options.

As discussed in the \l{b#intro-dirs-scopes Output Directories and Scopes}
section of the build system introduction, there are a number of variables
used to specify compilation and linking options, such as \c{*.poptions}
(\c{cxx.poptions} in the above example), \c{*.coptions}, etc. The table below
shows all of them with their rough \c{make} equivalents in the third column:

\
*.poptions   preprocess        CPPFLAGS
*.coptions   compile           CFLAGS/CXXFLAGS
*.loptions   link              LDFLAGS
*.aoptions   archive           ARFLAGS
*.libs       system libraries  LIBS/LDLIBS
\

The recommended approach here is to study the upstream build system and copy
custom compile/link options to the appropriate \c{build2} variables. Note,
however, that blindly copying all the options may not always be a good
idea. See
\l{https://github.com/build2/HOWTO/blob/master/entries/compile-options-in-buildfile.md
Which C/C++ compile/link options are OK to specify in a project's buildfile?}
for the guidelines.
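For instance, if upstream's makefile contained something like \c{CPPFLAGS +=
-DFOO_USE_THREADS} and \c{LIBS += -lpthread} (hypothetical values for our
\c{libfoo}), a reasonable translation would be:

\
cxx.poptions += -DFOO_USE_THREADS

if ($cxx.target.class != 'windows')
  cxx.libs += -pthread
\
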

Also, oftentimes, such custom options must only be specified for certain
target platforms or when using a certain compiler. While \c{build2} provides a
large amount of information to identify the build configuration as well as more
advanced \c{buildfile} language mechanisms (such as \l{b#intro-switch Pattern
Matching}) to make sense of it, this is a large topic for which we refer you
to \l{b The \c{build2} Build System} manual. Additionally,
\l{https://github.com/build2-packaging github.com/build2-packaging} now
contains a large number of packages that you can study and search for
examples.

\N|While exporting preprocessor macros to communicate configuration is a
fairly common technique, it has a number of drawbacks and limitations.
Specifically, a large number of such macros will add a lot of noise to the
consumer's compilation command lines (especially if multiple libraries indulge
in this). Plus, the information conveyed by such macros is limited to simple
values and is not easily accessible in consumer \c{buildfiles}.

To overcome these drawbacks and limitations, \c{build2} provides a mechanism
for conveying metadata with C/C++ libraries (and executables). See
\l{https://github.com/build2/HOWTO/blob/master/entries/convey-additional-information-with-exe-lib.md
How do I convey additional information (metadata) with executables and C/C++
libraries?} for details.

Note that outright replacing the preprocessor macros with metadata can be done
if this information is only used by the library consumers. In other words, if
the library's public headers rely on the presence of such macros, then we have
no choice but to export them, potentially also providing the metadata so that
this information is easily accessible from \c{buildfiles}.|

Let's consider a representative example based on our \c{libfoo} to get a sense
of what this normally looks like as well as to highlight a few nuances. We
will assume our \c{libfoo} requires either the \c{FOO_POSIX} or \c{FOO_WIN32}
macro to be defined during the build in order to identify the target
platform. Additionally, extra features can be enabled by defining
\c{FOO_EXTRAS}, which should be done both during the build and for consumption
(so this macro must also be exported). Next, this library requires the
\c{-fno-strict-aliasing} compile option for the GCC-class compilers (GCC,
Clang, etc.). Finally, we need to link \c{pthread} on POSIX and \c{ws2_32.lib}
on Windows. This is how we would work all this into the above fragment:

\
# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
                \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

cxx.poptions += -DFOO_EXTRAS

if ($cxx.target.class == 'windows')
  cxx.poptions += -DFOO_WIN32
else
  cxx.poptions += -DFOO_POSIX

#obja{*}: cxx.poptions += -DFOO_STATIC_BUILD
#objs{*}: cxx.poptions += -DFOO_SHARED_BUILD

if ($cxx.class == 'gcc')
  cxx.coptions += -fno-strict-aliasing

switch $cxx.target.class, $cxx.target.system
{
  case 'windows', 'mingw32'
    cxx.libs += -lws2_32
  case 'windows'
    cxx.libs += ws2_32.lib
  default
    cxx.libs += -pthread
}

# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\" -DFOO_EXTRAS
  cxx.export.libs = $intf_libs
}

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

There are a few nuances in the above code worth keeping in mind. Firstly,
notice that we append (rather than assign) to all the non-export variables
(\c{*.poptions}, \c{*.coptions}, \c{*.libs}). This is because they may already
contain some values specified by the user with their \c{config.*.*}
counterparts. On the other hand, the \c{*.export.*} variables are assigned.

Secondly, the order in which we append to the variables is important
for the value to accumulate correctly. You want to first append all
the scope-level values, then target type/pattern-specific, and
finally any target-specific; that is, from more general to more
specific (see \l{b#intro-lang Buildfile Language} for background). To
illustrate this point, let's say in our \c{libfoo} the \c{FOO_POSIX}
or \c{FOO_WIN32} macro is only necessary when compiling
\c{util.cpp}. Below would be the correct order of assigning to
\c{cxx.poptions}:

\
cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \
                \"-I$out_pfx_inc\" \"-I$src_pfx_inc\"

cxx.poptions += -DFOO_EXTRAS

#obja{*}: cxx.poptions += -DFOO_STATIC_BUILD
#objs{*}: cxx.poptions += -DFOO_SHARED_BUILD

if ($cxx.target.class == 'windows')
  {obja objs}{util}: cxx.poptions += -DFOO_WIN32
else
  {obja objs}{util}: cxx.poptions += -DFOO_POSIX
\

\N|Note that target-specific \c{*.poptions} and \c{*.coptions} must
be specified on the object file targets while \c{*.loptions} and
\c{*.libs} \- on the library or executable targets.|


\h2#core-adjust-build-src-source-sym|Adjust source \c{buildfile}: symbol exporting|

Let's now turn to a special sub-topic of the build and export options that
relates to the shared library symbol exporting. To recap, a shared library on
Windows must explicitly specify the symbols (functions and global data) that
it wishes to make accessible by its consumers (executables and other shared
libraries). This can be achieved in three different ways: the
library can explicitly mark in its source code the names whose
symbols should be exported. Alternatively, the library can provide a
\c{.def} file to the linker that lists the symbols to be exported.
Finally, the library can request automatic exporting of all symbols,
which is the default semantics on non-Windows platforms. Note that
the last two approaches only work for exporting functions, not data,
unless special extra steps are taken by the library consumers. Let's
discuss each of these approaches in the reverse order, that is,
starting with the automatic symbol exporting.

The automatic symbol exporting is implemented in \c{build2} by generating a
\c{.def} file that exports all the relevant symbols. It requires a few
additional definitions in our \c{buildfile} as described in
\l{b#cc-auto-symexport Automatic DLL Symbol Exporting}. You can automatically
generate the necessary setup with the \c{auto-symexport} \c{bdep-new}
sub-option.
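
For example, if you are generating the package from scratch, the
necessary setup can be requested like this (other sub-options
omitted for brevity; see \l{bdep-new(1)} for details):

```
$ bdep new --package --lang c++ --type lib,auto-symexport libfoo
```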

Using a custom \c{.def} file to export symbols is fairly straightforward:
simply list it as a prerequisite of the library and it will be automatically
passed to the linker when necessary. For example:

\
# Private headers and sources as well as dependencies.
#
lib{foo}: {hxx cxx}{**} $impl_libs $intf_libs def{foo}
\

\N|Some third-party projects automatically generate their \c{.def} file. In
this case you can try to re-create the same generation in the \c{buildfile}
using an ad hoc recipe (or the \l{b#module-in \c{in}} or
\l{https://github.com/build2/libbuild2-autoconf \c{autoconf}} build system
modules). If that doesn't look possible (for example, if the generation logic
is complex and is implemented in something like Perl or Python), then you can
try your luck with automatic symbol exporting. Failing that, the
only remaining option is to use a pre-generated \c{.def} file in the
\c{build2} build.|

The last approach is to explicitly specify in the source code which symbols
must be exported by marking the corresponding declarations with
\c{__declspec(dllexport)} during the library build and
\c{__declspec(dllimport)} during the library use. This is commonly achieved
with a macro, customarily called \c{*_EXPORT} or \c{*_API}, which is
defined to one of the above specifiers based on whether a static or
shared library is being built or consumed. This, in turn, is normally
signaled with a few more macros, such as \c{*_BUILD_DLL} and
\c{*_USE_STATIC}.

\N|Because this approach requires extensive changes to the source code, you
will normally only use it in your \c{build2} build if it is already used in
the upstream build.|

In \c{build2} you can explicitly signal any of the four situations
(shared/static, built/consumed) by uncommenting and adjusting the following
four lines in the build and export options blocks:

\
# Build options.
#

...

#obja{*}: cxx.poptions += -DFOO_STATIC_BUILD
#objs{*}: cxx.poptions += -DFOO_SHARED_BUILD

# Export options.
#

...

#liba{foo}: cxx.export.poptions += -DFOO_STATIC
#libs{foo}: cxx.export.poptions += -DFOO_SHARED
\

As an example, let's assume our \c{libfoo} defines in one of its
headers the \c{FOO_EXPORT} macro based on the \c{FOO_BUILD_DLL}
(shared library is being built) and \c{FOO_USE_STATIC} (static
library is being used) macros that it expects to be appropriately
defined by the build system. This is how we would modify the above
fragment to handle this setup:

\
# Build options.
#

...

objs{*}: cxx.poptions += -DFOO_BUILD_DLL

# Export options.
#

...

liba{foo}: cxx.export.poptions += -DFOO_USE_STATIC
\


\h2#core-adjust-build-src-source-ver|Adjust source \c{buildfile}: shared library version|

The final few lines in the above \c{buildfile} deal with shared library binary
(ABI) versioning:

\
# For pre-releases use the complete version to make sure they cannot
# be used in place of another pre-release or the final version. See
# the version module for details on the version.* variable values.
#
if $version.pre_release
  lib{foo}: bin.lib.version = \"-$version.project_id\"
else
  lib{foo}: bin.lib.version = \"-$version.major.$version.minor\"
\

The \c{bdep-new}-generated setup arranges for the platform-independent
versioning where the package's major and minor version components are embedded
into the shared library binary name (and \c{soname}) under the assumption
that only patch versions are ABI-compatible.

The two situations where you would want to change this are when the
above assumption does not hold and/or when the upstream provides
platform-specific shared library versions which you would like to
re-create in your \c{build2} build. See \l{b#intro-lib Library
Exportation and Versioning} for background and details.
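
For example, if upstream maintains its own \c{soname} version on
Linux (say, \c{libfoo.so.3}), a platform-specific sketch might look
like this (the \c{linux@3} form is an assumption; see the linked
chapter for the exact \c{bin.lib.version} syntax):

```
if $version.pre_release
  lib{foo}: bin.lib.version = \"-$version.project_id\"
else
  lib{foo}: bin.lib.version = linux@3 # Re-create upstream soname.
```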


\h2#core-adjust-build-src-source-exe|Adjust source \c{buildfile}: executables|

If instead of a library you are packaging an executable, then, as
mentioned earlier, it will most likely use the combined layout with a
single \c{buildfile}. This \c{buildfile} will also be much simpler
compared to the library's. For example, given the following
\c{bdep-new} command:

\
$ bdep new --package --lang c++ --type exe,no-subdir,prefix=foo foo
\

The resulting source \c{buildfile} will look like this:

\
libs =
#import libs += libhello%lib{hello}

exe{foo}: {hxx ixx txx cxx}{**} $libs testscript

out_pfx = [dir_path] $out_root/foo/
src_pfx = [dir_path] $src_root/foo/

cxx.poptions =+ \"-I$out_pfx\" \"-I$src_pfx\"
\

If the executable doesn't have any inline/template/header files, then you can
remove the \c{ixx}/\c{txx}/\c{hxx} target types, respectively (which would be
parallel to the change made in \c{root.build}; see \l{#core-adjust-build-wide
Adjust project-wide build system files in \c{build/}}). For example:

\
exe{foo}: {hxx cxx}{**} $libs testscript
\

If the source code includes its own headers with the \c{\"\"} style inclusion
(or doesn't have any headers), then we can also get rid of \c{out_pfx} and
\c{src_pfx}. For example:

\
libs =
#import libs += libhello%lib{hello}

exe{foo}: {hxx ixx txx cxx}{**} $libs testscript
\

\N|Unfortunately, in projects that provide both a library and an
executable, it's not uncommon for the executable source code to
include public and/or private library headers with the relative
\c{\"\"} style inclusion. For example:

\
#include \"../../libfoo/include/foo/util.hpp\"
#include \"../../libfoo/src/impl.hpp\"
\

This approach won't work in \c{build2} since the two packages may end up in
different directories or the library could even be installed. There are two
techniques that can be used to work around this issue (other than patching the
upstream source code).

For public headers we can provide, in the appropriate places within the
executable package, \"thunk headers\" with the same names as public headers
that simply include the corresponding public header from the library using the
\c{<>} style inclusion.
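
For example, for the \c{foo/util.hpp} public header from the above
inclusion, the thunk header (placed inside the executable package so
that the upstream relative path resolves to it) would contain just a
redirect:

```
// Thunk header for ../../libfoo/include/foo/util.hpp: redirect to
// the library's public header using the <> style inclusion.
//
#include <foo/util.hpp>
```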

For private headers we can provide, again in the appropriate places within the
executable package, our own symlinks for a subset of the private
headers. Note that this will only work if the use of private headers
within the executable does not depend on any symbols that are not
exported by the library (failing that, the executable will have to
always link to the static variant of the library).

For a real example of both of these techniques, see the
\l{https://github.com/build2-packaging/zstd/tree/v1.5.5 \c{zstd}} package
repository.|

Dealing with dependencies in executables is similar to libraries except that
here we don't have the interface/implementation distinction; see the
\l{#core-adjust-build-src-source-dep Adjust source \c{buildfile}:
dependencies} step. For example:

\
import libs = libfoo%lib{foo}

exe{foo}: {hxx ixx txx cxx}{**} $libs testscript
\

Likewise, dealing with build options in executables is similar to libraries
except that here we have no export options; see the
\l{#core-adjust-build-src-source-opt Adjust source \c{buildfile}: build and
export options} step.

If the executable can plausibly be used in a build, then it's
recommended to add \c{build2} metadata as described in
\l{https://github.com/build2/HOWTO/blob/master/entries/convey-additional-information-with-exe-lib.md
How do I convey additional information (metadata) with executables and C/C++
libraries?} See also \l{#howto-patch-upstream-source-preproc Modifying
upstream source code with C/C++ preprocessor} on how to do it without
physically modifying upstream source code. \N{See the
\l{https://github.com/build2-packaging/zstd/tree/v1.5.5 \c{zstd}} package
repository for a real example of doing this.}
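
A sketch of what such metadata might look like in the executable's
\c{buildfile} (the exact convention, including the variable names
and semantics, is described in the HOWTO article above; treat the
values here as illustrative):

```
exe{foo}:
{
  export.metadata = 1 foo
  foo.name = [string] foo
  foo.version = [string] $version
  foo.checksum = [string] $version
}
```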

\N|We will discuss the \c{testscript} prerequisite on the
\l{#core-test-smoke-exe Make smoke test: executables} step below.|


\h2#core-adjust-build-src-source-ext|Adjust source \c{buildfile}: extra requirements|

The changes discussed so far should be sufficient to handle a typical library
or executable that is written in C and/or C++ and is able to handle platform
differences with the preprocessor and compile/link options. However, sooner or
later you will run into a more complex library that may use additional
languages, require more elaborate platform detection, or use additional
functionality, such as support for source code generators. The below list
provides pointers to resources that cover the more commonly encountered
additional requirements.

\ul|

\li|\l{b#module-in The \c{in} build system module}

Use to process \c{config.h.in} (or other \c{.in} files) that don't require
Autoconf-style platform probing (\c{HAVE_*} options).|

\li|\l{https://github.com/build2/libbuild2-autoconf The \c{autoconf} build
    system module}

Use to process \c{config.h.in} (or their CMake/Meson variants) that require
Autoconf-style platform probing (\c{HAVE_*} options) or CMake/Meson-specific
substitution syntax (\c{#cmakedefine}, etc).|

\li|\l{b#c-objc Objective-C Compilation} and
    \l{b#cxx-objcxx Objective-C++ Compilation}

Use to compile Objective-C (\c{.m}) or Objective-C++ (\c{.mm}) source files.|

\li|\l{b#c-as-cpp Assembler with C Preprocessor Compilation}

Use to compile Assembler with C Preprocessor (\c{.S}) source files.|

\li|\l{b#intro-unit-test Implementing Unit Testing}

Use if upstream has tests (normally unit tests) in the source subdirectory.|

\li|\l{intro#guide-build-time-linked Build-Time Dependencies and Linked
    Configurations}

Use if upstream relies on source code generators, such as
\l{https://cppget.org/reflex \c{lex}} and \l{https://cppget.org/byacc
\c{yacc}}.|

\li|\l{https://github.com/build2/HOWTO/ The \c{build2} HOWTO}

See the \c{build2} HOWTO article collection for more unusual requirements.||
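
As a brief illustration of the first item, processing an upstream
\c{config.h.in} with the \c{in} module might look along these lines
(a sketch assuming \c{using in} has been added to \c{root.build} and
the file only contains simple \c{@VAR@} substitutions; the variable
name is illustrative and the exact module semantics are described in
the linked documentation):

```
h{config}: in{config}
{
  in.symbol = '@'
  FOO_VERSION = $version.project_number
}
```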


\h2#core-adjust-build-test|Test library build|

At this point our library should be ready to build, at least in theory. While
we cannot build and test the entire package before adjusting the generated
\c{tests/} subproject (the subject of the next step), we can try to build just
the library and, if it has any unit tests in the source subdirectory, even run
some tests.

\N|If the library is header-only, there won't be anything to build
unless there are unit tests. Still, you may want to continue with
this exercise to detect any syntactic mistakes in the
\c{buildfiles}, etc.|

To build only a specific subdirectory of our package we use the build system
directly (continuing with our \c{libfoo} example):

\
$ cd libfoo/src/ # Change to the source subdirectory.
$ b update
\

If there are any issues, try to fix them and then build again. Once the
library builds and if it has unit tests, you can try to run them:

\
$ b test
\

It also makes sense to test the installation and see if anything is off (such
as private headers being installed):

\
$ rm -rf /tmp/install
$ b install config.install.root=/tmp/install
\

Once the library builds, it makes sense to commit our changes for easier
rollbacks:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust source subdirectory buildfiles\"
\


\h#core-test-smoke|Make smoke test|

With the library build sorted, we need tests to make sure the result is
actually functional. As \l{#core-fill discussed earlier}, it is recommended to
start with a simple \"smoke test\", make sure that works, and then replace it
with upstream tests. However, if upstream tests look simple enough, you can
skip the smoke test. For example, if upstream has all its tests in a single
source file and the way it is built doesn't look too complicated, then you can
just use that source file in place of the smoke test.

\N|If upstream has no tests, then the smoke test will have to stay. A library
can only be published if it has at least one test.

It is also recommended to have the smoke test if upstream tests are in a
separate package. See
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for background and
details.|

\N|If instead of a library you are packaging an executable, you can skip
directly to \l{#core-test-smoke-exe Make smoke test: executables}.|

To recap, the \c{bdep-new}-generated \c{tests/} subdirectory looks like this
(continuing with our \c{libfoo} example):

\
libfoo/
├── ...
└── tests/
    ├── build/
    │   ├── bootstrap.build
    │   └── root.build
    ├── basics/
    │   ├── driver.cpp
    │   └── buildfile
    └── buildfile
\

The \c{tests/} subdirectory is a build system subproject, meaning that it can
be built independently, for example, to test the installed version of the
library (see \l{b#intro-operations-test Testing} for background). In
particular, this means it has the \c{build/} subdirectory with project-wide
build system files, the same as the library. The \c{basics/} subdirectory
contains the generated test, which is what we will be turning into a smoke
test. The subproject root \c{buildfile} rarely needs changing.


\h2#core-test-smoke-build-wide|Adjust project-wide build system files in \c{tests/build/}|

Review and adjust the generated \c{bootstrap.build} and \c{root.build} (there
will be no \c{export.build}) similar to the \l{#core-adjust-build-wide Adjust
project-wide build system files in \c{build/}} step.

Here the only change you would normally make is in \c{root.build},
namely to drop the assignment of extensions for target types that
are not used in tests.


\h2#core-test-smoke-adjust|Convert generated test to library smoke test|

The \c{basics/} subdirectory contains the \c{driver.cpp} source file
that implements the test and the \c{buildfile} that builds it. You
can rename both the
test subdirectory (\c{basics/}) and the source file \c{driver.cpp}, for
example, if you are going with the upstream tests directly. You can also add
more tests by simply copying \c{basics/}.

The purpose of a smoke test is to make sure the library's public headers can
be included (including in the installed case, no pun intended), it can be
linked, and its basic functionality works.

To achieve this, we modify \c{driver.cpp} to include the library's main
headers and call a few functions. For example, if the library has the
initialize/deinitialize type of functions, those are good candidates to
call. If the library is not header-only, make sure that the smoke test calls
at least one non-inline/template function to test symbol exporting.

\N|Make sure that your test includes the library's public headers the same way
as would be done by the library consumers.|

Continuing with our \c{libfoo} example, this is what its smoke test might look
like:

\
#include <foo/foo.hpp>
#include <foo/util.hpp>

#undef NDEBUG
#include <cassert>

int main ()
{
  foo::context* c (foo::init (0 /* flags */));
  assert (c != nullptr);
  foo::deinit (c);
}
\

\N|The C/C++ \c{assert()} macro is often adequate for simple tests and does
not require extra dependencies. But see
\l{https://github.com/build2/HOWTO/blob/master/entries/use-assert-in-tests.md
How do I correctly use C/C++ assert() in tests?}|

The test \c{buildfile} is pretty simple:

\
import libs = libfoo%lib{foo}

exe{driver}: {hxx ixx txx cxx}{**} $libs testscript{**}
\

If you have adjusted the library target name (\c{lib{foo\}}) in the source
subdirectory \c{buildfile}, then you will need to make the corresponding
change in the \c{import} directive here. You may also want to tidy it up by
removing unused prerequisite types. For example:

\
import libs = libfoo%lib{foo}

exe{driver}: {hxx cxx}{**} $libs
\


\h2#core-test-smoke-exe|Make smoke test: executables|

If instead of a library we are packaging an executable, then instead of the
\c{tests/} subproject we get the \c{testscript} file in the source
subdirectory (see \l{#core-adjust-build-src-source-exe Adjust source
\c{buildfile}: executables} for a refresher). This file can be used to write
one or more tests that exercise our executable (see \l{b#intro-operations-test
Testing} for background).

How exactly to test any given executable depends on its functionality. For
instance, for a compression utility we could write a roundtrip test that first
compresses some input, then decompresses it, and finally compares the result
to the original. For example (taken from the
\l{https://github.com/build2-packaging/zstd/tree/v1.5.5 \c{zstd}} package
repository):

\
: roundtrip
:
echo 'test content' | $* -zc | $* -dc >'test content'
\

On the other hand, for an executable that is a source code generator, proper
testing would involve a separate tests package that has a build-time
dependency on the executable and that exercises the generated code (see
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for background and
details). \N{See the \l{https://github.com/build2-packaging/thrift/tree/0.17
\c{thrift}} package repository for an example of this setup.}

If the executable provides a way to query its version, one test that you
should always be able to write, and which can serve as a last resort smoke
test, is the version check. For example:

\
: version
:
$* --version >>~\"/EOO/\"
/.*$(version.major)\.$(version.minor)\.$(version.patch).*/
EOO
\

See also
\l{https://github.com/build2/HOWTO/blob/master/entries/sanitize-test-execution.md
How do I sanitize the execution of my tests?}


\h2#core-test-smoke-locally|Test locally|

With the smoke test ready, we can finally do some end-to-end testing of our
library build. We will start with doing some local testing to catch basic
mistakes and then do the full CI to detect any platform/compiler-specific
issues.

First let's run the test in the default build configuration by invoking the
build system directly (see \l{intro#guide Getting Started Guide} for
background on default configurations):

\
$ cd libfoo/tests/ # Change to the tests/ subproject.
$ b test
\

If there are any issues (compile/link errors, test failures), try to address
them and re-run the test.

Once the library builds in the default configuration and the result passes the
tests, you can do the same for all the build configurations, in case you have
\l{#core-fill-init initialized} your library in several:

\
$ bdep test -a
\

\h2#core-test-smoke-locally-install|Test locally: installation|

Once the development build works, let's also test the installed version of the
library. In particular, this makes sure that the public headers are installed
in a way that is compatible with how they are included by our test (and would
be included by the library consumers). To test this we first install the
library into a temporary directory:

\
$ cd libfoo/ # Change to the package root.
$ rm -rf /tmp/install
$ b install config.install.root=/tmp/install
\

Next we build just the \c{tests/} subproject out of source,
arranging for it to find the installed library (see
\l{b#intro-dirs-scopes Output Directories and Scopes} for background
on the out of source build syntax):

\
$ cd libfoo/ # Change to the package root.
$ b test: tests/@/tmp/libfoo-tests-out/ \
  config.cc.loptions=-L/tmp/install/lib \
  config.bin.rpath=/tmp/install/lib
\

\N|The equivalent MSVC command line would be:

\
> b install config.install.root=c:\tmp\install

> b test: tests\@c:\tmp\libfoo-tests-out\^
  config.cc.loptions=/LIBPATH:c:\tmp\install\lib
\

|

It is a good idea to look over the installed files manually and make sure
there is nothing unexpected, for example, missing or extraneous files.

Once done testing the installed case, let's clean things up:

\
$ rm -r /tmp/install /tmp/libfoo-tests-out
\

\h2#core-test-smoke-locally-dist|Test locally: distribution|

Another special case worth testing is the preparation of the source
distribution (see \l{b#intro-operations-dist Distributing} for
background). This, in particular, is how your package will be turned into the
source archive for publishing to \l{https://cppget.org cppget.org}. Here we
are primarily looking for missing files. As a bonus, this will also allow us
to test the in source build. First we distribute our package to a temporary
directory (again using the default configuration and the build system
directly):

\
$ cd libfoo/ # Change to the package root.
$ b dist config.dist.root=/tmp/dist config.dist.uncommitted=true
\

The result will be in the \c{/tmp/dist/libfoo-<version>/} directory
which should resemble our \c{libfoo/} package but without files like
\c{.gitignore}.
Next we build and test the distribution in source:

\
$ cd /tmp/dist/libfoo-<version>/
$ b configure config.cxx=g++
$ b update
$ b test
\

\N|If your package has dependencies that you import in your \c{buildfile},
then the above \c{configure} operation will most likely fail because such
dependencies cannot be found (it may succeed if they are available as
system-installed). The error message will suggest specifying the
location of each dependency with a \c{config.import.*} variable. You
can fix this by setting
each such \c{config.import.*} to the location of the default build
configuration (created on the \l{#core-fill-init Initialize package in build
configurations} step) which should contain all the necessary
dependencies. Simply re-run the \c{configure} operation until you have
discovered and specified all the necessary \c{config.import.*} variables, for
example:

\
$ b configure config.cxx=g++ \
  config.import.libz=.../foo-gcc \
  config.import.libasio=.../foo-gcc \
  config.import.libsqlite3=.../foo-gcc
\

|

It is a good idea to look over the distributed files manually and make sure
there is nothing missing or extraneous.

Once done testing the distribution, let's clean things up:

\
$ rm -r /tmp/dist
\


\h2#core-test-smoke-ci|Commit and test with CI|

With local testing complete, let's commit our changes and submit a remote CI
job to test our library on all the major platforms and with all the major
compilers:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add smoke test\"
$ git push -u

$ bdep ci
\

The result of the \l{bdep-ci(1)} command is a link where you can see the
status of the builds.

\N|Make sure to wait until there are no more unbuilt configurations
(that is, the number of entries with the \c{<unbuilt>} or
\c{building} result is \c{0}).|

If any builds fail, view the logs to determine the cause, try to fix it,
commit your fix, and CI again.

\N|It is possible that upstream does not support some platforms or compilers.
For example, it's common for smaller projects not to bother with supporting
\"secondary\" compilers, such as MinGW GCC on Windows or Homebrew GCC on Mac
OS.

If upstream expressly does not support some platform or compiler, it's probably
not worth spending time and energy trying to support it in the package. Most
likely it will require changes to upstream source code and that is best done
upstream rather than in the package (see \l{#dont-fix-upstream Don't try to
fix upstream issues in the package} for background). In this case you would
want to exclude these platforms/compilers from the CI builds using the
\l{bpkg#manifest-package-builds \c{builds} package \c{manifest} value}.
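
For example, to exclude Windows builds entirely, the package
\c{manifest} could contain (the class name is illustrative; see the
linked manual section for the supported configuration classes and
expressions):

```
builds: -windows ; Not supported by upstream.
```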

The other common cause of a failed build is a newer version of a compiler or
platform that breaks upstream. In this case there are three options:
ideally you would want to fix this in upstream and have a new version
released. Failing that, you may want to patch the upstream code to
fix the issues, especially if
this is one of the major platforms and/or primary compilers (see
\l{#howto-patch-upstream-source How do I patch upstream source code?} for
details). Finally, you can just leave the build failing with the expectation
that it will be fixed in the next upstream version. Note that in this case you
should not exclude the failing build from CI.|


\h#core-test-upstream|Replace smoke test with upstream tests|

With the smoke test working we can now proceed with replacing it with the
upstream tests.


\h2#core-test-upstream-understand|Understand how upstream tests work|

While there are some commonalities in how C/C++ libraries are typically built,
when it comes to tests there is unfortunately little common ground in how they
are arranged, built, and executed. As a result, the first step in dealing with
upstream tests is to study the existing build system and try to understand how
they work.

\N|If upstream tests prove incomprehensible (which is unfortunately not
uncommon) and the only options you see are to go with just the smoke test or
to give up, then go with just the smoke test. In this case it's a good idea to
create an issue in the package repository mentioning that upstream tests are
still a TODO.|

\N|If instead of a library you are packaging an executable, then whether the
below steps will apply depends on the functionality of the executable.

In particular, testing source code generators would normally involve
exercising the generated code, in which case the following will largely apply,
though in this case the tests would need to be placed into a separate tests
package that has a build-time dependency on the executable (see
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for background and
details). In fact, if a source code generator is accompanied by a runtime
library, then the tests will normally exercise them together (though a runtime
library might also have its own tests). See the
\l{https://github.com/build2-packaging/thrift/tree/0.17 \c{thrift}} package
repository for an example of this setup.|

To get you started with analyzing the upstream tests, below are some of the
questions you would likely need answered before you can proceed with the
conversion:

\ul|

\li|\b{Are upstream tests unit tests or integration tests?}

While the distinction is often fuzzy, for our purposes the key differentiator
between unit and integration tests is which API they use: integration tests
only use the library's public API while unit tests need access to the
implementation details.

Normally (but not always), unit tests will reside next to the
library source code since they need access to more than just the
public headers and the library binary (private headers, individual
object files, utility libraries, etc). Integration tests, on the
other hand, are normally (but again not always) placed into a
separate subdirectory, usually called \c{tests} or \c{test}.

If the library has unit tests, then refer to \l{b#intro-unit-test Implementing
Unit Testing} for background on how to handle them in \c{build2}.

If the library has integration tests, then use them to replace (or complement)
the smoke test.

If the library has unit tests but no integration tests, then it is recommended
to keep the smoke test since that's the only way the library will be tested
via its public API.|


\li|\b{Do upstream tests use an external testing framework?}

Oftentimes a C++ library will use an external testing framework to implement
tests. Popular choices include \l{https://cppget.org/catch2 \c{catch2}},
\l{https://cppget.org/gtest \c{gtest}}, \l{https://cppget.org/doctest
\c{doctest}}, and \l{https://cppget.org/libboost-test \c{libboost-test}}.

If a library uses such an external testing framework, then it is recommended
to factor tests into a separate package in order to avoid making the library
package depend on the testing framework (which is only required during
testing). See
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for details.

\N|Sometimes you will find that upstream bundles the source code of the
testing framework with their tests. This is especially common with
\c{catch2}. If that's the case, it is strongly recommended that you
\"unbundle\" it by making it a proper external dependency. See \l{#dont-bundle
Don't bundle dependencies} for background.||


\li|\b{Are upstream tests in a single or multiple executables?}

It's not unusual for libraries to have a single test executable that runs all
the test cases. This is especially common if a C++ testing framework is used.
In this case it is natural to replace the contents of the smoke test with the
upstream source code, potentially renaming the test subdirectory (\c{basics/})
to better match upstream naming.

If upstream has multiple test executables, then they could all be in a single
test subdirectory (potentially reusing some common bits) or spread over
multiple subdirectories. In both cases it's a good idea to follow the upstream
structure unless you have good reasons to deviate. In the former case (all
executables in the same subdirectory), you can re-purpose the smoke test
subdirectory. In the latter case (each executable in a separate subdirectory)
you can make copies of the smoke test subdirectory.|


\li|\b{Do upstream tests use an internal utility library?}

If there are multiple test executables and they need to share some common
functionality, then it's not unusual for upstream to place such functionality
into a static library and then link it to each test executable. In \c{build2}
such an internal library is best represented with a utility library (see
\l{b#intro-unit-test Implementing Unit Testing} for details). See the
following section for an example.|


\li|\b{Are upstream tests well behaved?}

Unfortunately it's not uncommon for upstream tests to misbehave: write
diagnostics to \c{stdout} instead of \c{stderr}, create temporary files
without cleaning them up, or assume the presence of input files in the
current working directory. For details on how to deal with such situations
see
\l{https://github.com/build2/HOWTO/blob/master/entries/sanitize-test-execution.md
How do I sanitize the execution of my tests?}.||
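Note that if you do factor tests that use an external testing framework into
a separate package, the library's \c{manifest} refers to such a package with
the \c{tests} value rather than a regular \c{depends}. A minimal sketch,
assuming the tests package is called \c{libfoo-tests}:

\
tests: libfoo-tests
\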


\h2#core-test-upstream-convert|Convert smoke test to upstream tests|

Once you have a good grasp of how upstream tests work, convert or replace the
smoke test with the upstream tests. If upstream has multiple test executables,
you may want to deal with one test at a time, making sure that it passes
before moving to the next one.

It's normally a good idea to use the smoke test \c{buildfile} as a starting
point for upstream tests. To recap, the smoke test \c{buildfile} for our
\c{libfoo} example ended up looking like this:

\
import libs = libfoo%lib{foo}

exe{driver}: {hxx cxx}{**} $libs
\

At a minimum you will most likely need to change the name of the executable to
match upstream. If you need to build multiple executables in the same
directory, then it's probably best to get rid of the name pattern for the
source files and specify the prerequisite names explicitly, for example:

\
import libs = libfoo%lib{foo}

./: exe{test1}: cxx{test1} $libs
./: exe{test2}: cxx{test2} $libs
\

If you have a large number of such test executables, then a \l{b#intro-for
\c{for}-loop} might be a more scalable option:

\
import libs = libfoo%lib{foo}

for src: cxx{test*}
  ./: exe{$name($src)}: $src $libs
\

If the upstream tests have some common functionality that is used by all the
test executables, then it is best placed into a utility library. For example:

\
import libs = libfoo%lib{foo}

./: exe{test1}: cxx{test1} libue{common}
./: exe{test2}: cxx{test2} libue{common}

libue{common}: {hxx cxx}{common} $libs
\


\h2#core-test-upstream-locally|Test locally|

With the upstream tests ready, we re-do the same end-to-end testing as we did
with the smoke test:

\l{#core-test-smoke-locally Test locally}\n
\l{#core-test-smoke-locally-install Test locally: installation}\n
\l{#core-test-smoke-locally-dist Test locally: distribution}\n


\h2#core-test-upstream-ci|Commit and test with CI|

With local testing complete, we commit our changes and submit a remote CI
job. This step is similar to what \l{#core-test-smoke-ci we did for the smoke
test} but this time we are using the upstream tests:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Add upstream tests\"
$ git push

$ bdep ci
\


\h#core-examples-benchmarks|Add upstream examples, benchmarks, if any|

If the upstream project provides examples and/or benchmarks and you wish to
add them to the \c{build2} build (which is not strictly necessary for the
\c{build2} package to be usable), then now is a good time to do that.

As was mentioned in \l{#core-package-review Review and test auto-generated
\c{buildfile} templates}, the recommended approach is to copy the \c{tests/}
subproject (potentially from the commit history before the smoke test was
replaced with the upstream tests) and use that as a starting point for
examples and/or benchmarks. Do not forget to add the corresponding entry in
the root \c{buildfile}.
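For instance, if the new subproject is called \c{examples/}, the root
\c{buildfile} entry could mirror the one for \c{tests/} (a sketch, assuming
the examples should not be installed either):

\
# Don't install examples.
#
examples/: install = false
\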

Once that is done, follow the same steps as in \l{#core-test-upstream Replace
smoke test with upstream tests} to add upstream examples/benchmarks and test
the result.


\h#core-root|Adjust root files (\c{buildfile}, \c{manifest}, etc)|

The last few files that we need to review and potentially adjust are
the root \c{buildfile}, package \c{manifest}, and \c{PACKAGE-README.md}.


\h2#core-root-buildfile|Adjust root \c{buildfile}|

The main function of the root \c{buildfile} is to pull in all the
subdirectories that need building plus list targets that are usually found in
the root directory of a project, typically \c{README.md}, \c{LICENSE},
etc. This is what the generated root \c{buildfile} looks like for our
\c{libfoo} project assuming we have symlinked \c{README.md}, \c{LICENSE}, and
\c{NEWS} from upstream on the \l{#core-package-create Create final package}
step:

\
./: {*/ -build/}                          \
    doc{README.md PACKAGE-README.md NEWS} \
    legal{LICENSE} manifest

# Don't install tests.
#
tests/: install = false
\

If the upstream project provides any other documentation (detailed change
logs, contributing guidelines, etc) or legal files (alternative licenses, list
of authors, code of conduct, etc), then you may want to symlink and list them
as the \c{doc{\}} and \c{legal{\}} prerequisites, respectively.
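For example, if upstream also provides \c{CHANGELOG.md} and \c{AUTHORS}
files (hypothetical names), the root \c{buildfile} could become:

\
./: {*/ -build/}                                       \
    doc{README.md PACKAGE-README.md NEWS CHANGELOG.md} \
    legal{LICENSE AUTHORS} manifest
\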

\N|If you are packaging an executable and it provides a man page, then it can
also be listed in the root \c{buildfile}. For example, if the man page file is
called \c{foo.1}:

\
./: ... man1{foo}
\

|

\N|One file you don't need listing is \c{INSTALL} (or equivalent) which
normally contains the installation instructions for the upstream build
system. In the \c{build2} package of a third-party project the
\c{PACKAGE-README.md} file serves this purpose (see
\l{#core-root-package-readme Adjust \c{PACKAGE-README.md}} for details).|


\h2#core-root-buildfile-doc|Adjust root \c{buildfile}: other subdirectories|

If the upstream project has other subdirectories that make sense to include
into the \c{build2} package, then now is a good time to take care of that. The
most common such case will be extra documentation (besides the root
\c{README}), typically in a subdirectory called \c{doc/}, \c{docs/}, or
\c{documentation/}.

The standard procedure for handling such subdirectories will be to symlink the
relevant files (or the entire subdirectory) and then list the files as
prerequisites. For this last step, there are two options: we can list the
files directly in the root \c{buildfile} or we can create a separate
\c{buildfile} in the subdirectory.

\N|If symlinking entire subdirectories, don't forget to also list them in
\c{.gitattributes} if you want your package to be usable from the \c{git}
repository directly on Windows. See
\l{https://build2.org/article/symlinks.xhtml#windows Symlinks and Windows} for
details.|

Let's examine each approach using our \c{libfoo} as an example. We will
assume that the upstream project contains the \c{docs/} subdirectory with
additional \c{*.md} files that document the library's API. It would make sense
to include them into the \c{build2} package.

Listing the subdirectory files directly in the root \c{buildfile} works best
for simple cases, where you have a bunch of static files that don't require
any special provisions, such as customizations to their installation
locations. In this case we can symlink the entire \c{docs/} subdirectory:

\
$ cd libfoo/ # Change to the package root.
$ ln -s ../upstream/docs ./
\

The adjustments to the root \c{buildfile} are pretty straightforward: we
exclude the \c{docs/} subdirectory (since it has no \c{buildfile}) and list
the \c{*.md} files as prerequisites using the \c{doc{\}} target type (which,
in particular, makes sure they are installed into the appropriate location):

\
./: {*/ -build/ -docs/}                   \
    doc{README.md PACKAGE-README.md NEWS} \
    docs/doc{*.md}                        \
    legal{LICENSE} manifest
\

The alternative approach (create a separate \c{buildfile}) is a good choice if
things are more complicated than that. Let's say we need to adjust the
installation location of the files in \c{docs/} because there is another
\c{README.md} inside and that would conflict with the root one when installed
into the same location. This time we cannot symlink the top-level \c{docs/}
subdirectory (because we need to place a \c{buildfile} there). The two options
here are to either symlink the individual files or introduce another
subdirectory level inside \c{docs/} (which is the same approach as discussed
in \l{#dont-main-target-root-buildfile Don't build your main targets in the
root \c{buildfile}}). Let's illustrate both sub-cases.

Symlinking individual files works best when you don't expect the set of files
to change often. For example, if \c{docs/} contains a man page and its HTML
rendering, then it's unlikely this set will change. On the other hand, if
\c{docs/} contains a manual split into an \c{.md} file per chapter, then there
is a good chance this set of files will fluctuate between releases.

Continuing with our \c{libfoo} example, this is how we symlink the individual
\c{*.md} files in \c{docs/}:

\
$ cd libfoo/ # Change to the package root.
$ mkdir docs
$ cd docs/
$ ln -s ../../upstream/docs/*.md ./
\

Then write a new \c{buildfile} in \c{docs/}:

\
./: doc{*.md}

# Install the documentation in docs/ into the manual/ subdirectory of,
# say, /usr/share/doc/libfoo/ since we cannot install both its and root
# README.md into the same location.
#
doc{*.md}: install = doc/manual/
\

Note that we don't need to make any changes to the root \c{buildfile} since
this subdirectory will automatically get picked up by the \c{{*/\ -build/\}}
name pattern that we have there.

Let's now look at the alternative arrangement with another subdirectory level
inside \c{docs/}. Here we achieve the same result but in a slightly different
way. Specifically, we call the subdirectory \c{manual/} and install recreating
subdirectories (see \l{b#intro-operations-install Installing} for background):

\
$ cd libfoo/ # Change to the package root.
$ mkdir -p docs/manual
$ cd docs/manual/
$ ln -s ../../../upstream/docs/*.md ./
\

And the corresponding \c{buildfile} in \c{docs/}:

\
./: doc{**.md}

# Install the documentation in docs/ into, say, /usr/share/doc/libfoo/
# recreating subdirectories.
#
doc{*}:
{
  install = doc/
  install.subdirs = true
}
\

\N|Yet another option would be to open a scope for the \c{docs/}
subdirectory directly in the root \c{buildfile} (see \l{b#intro-dirs-scopes
Output Directories and Scopes} for background). For example:

\
$ cd libfoo/ # Change to the package root.
$ ln -s ../upstream/docs ./
\

And then add the following to the root \c{buildfile}:

\
docs/
{
  ./: doc{*.md}

  # Install the documentation in docs/ into the manual/ subdirectory
  # of, say, /usr/share/doc/libfoo/ since we cannot install both its
  # and root README.md into the same location.
  #
  doc{*.md}: install = doc/manual/
}
\

However, this approach should be used sparingly since it can quickly make the
root \c{buildfile} hard to comprehend. Note also that it cannot be used for
main targets since an export stub requires a \c{buildfile} to load (see
\l{#dont-main-target-root-buildfile Don't build your main targets in the
root \c{buildfile}} for details).|


\h2#core-root-buildfile-commit|Adjust root \c{buildfile}: commit and test|

Once all the adjustments to the root \c{buildfile} are made, it makes sense to
test it locally (this time from the root of the package), commit our changes,
and test with CI:

\
$ cd libfoo/ # Change to the package root.
$ b test
$ bdep test -a
\

If you had to add any extra files to the root \c{buildfile} (or add
\c{buildfiles} in extra subdirectories), then it also makes sense to test the
installation (\l{#core-test-smoke-locally-install Test locally: installation})
and the preparation of the source distribution
(\l{#core-test-smoke-locally-dist Test locally: distribution}) to make sure
the extra files end up in the right places.
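To recap, these checks boil down to something along these lines (a sketch;
see the sections linked above for details and note that the \c{/tmp/}
locations are arbitrary):

\
$ cd libfoo/ # Change to the package root.
$ b install config.install.root=/tmp/install/
$ b dist config.dist.root=/tmp/dist/
\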

Then commit our changes and CI:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust root buildfile\"
$ git push

$ bdep ci
\


\h2#core-root-manifest|Adjust \c{manifest}|

The next file we need to look over is the package's \c{manifest}. Here is what
it will look like, using our \c{libfoo} as an example:

\
: 1
name: libfoo
version: 2.1.0-a.0.z
language: c++
project: foo
summary: C++ library implementing secure Foo protocol
license: MIT ; MIT License.
description-file: README.md
package-description-file: PACKAGE-README.md
changes-file: NEWS
url: https://example.org/upstream
email: upstream@example.org
package-url: https://github.com/build2-packaging/foo
package-email: packaging@build2.org ; Mailing list.
depends: * build2 >= 0.16.0
depends: * bpkg >= 0.16.0
\

You can find the description of these and other package \c{manifest} values in
\l{bpkg#manifest-package Package Manifest} (the manifest format is described
in \l{bpkg#manifest-format Manifest Format}).

In the above listing the values that we likely need to adjust are \c{summary}
and \c{license}, unless correctly auto-detected by \c{bdep-new} on the
\l{#core-package-create Create final package} step. See
\l{#core-root-manifest-summary Adjust \c{manifest}: \c{summary}} and
\l{#core-root-manifest-license Adjust \c{manifest}: \c{license}} below
for guidelines on changing these values.

\N|It is not uncommon for projects to be licensed under multiple licenses.
Note, however, that \c{bdep-new} will only detect one license and you will
need to specify any additional licenses manually.|

We will also need to replace \c{url} and \c{email} with the upstream project's
homepage URL and e-mail, respectively. If upstream doesn't have a dedicated
website for the project, then use its repository URL on GitHub or equivalent.
For e-mail you would normally use a mailing list address. If upstream doesn't
have any e-mail contacts, then you can drop this value from the
\c{manifest}. The \c{package-url} and \c{package-email} values normally do not
need to be changed.

\N|\l{https://lists.build2.org packaging@build2.org} is a mailing list for
discussions related to the packaging efforts of third-party projects.|

Note also that while you may be tempted to adjust the \c{version} value,
resist this temptation since this will be done automatically by
\l{bdep-release(1)} later.

You may also want to add the following values in certain cases:

\dl|

\li|\l{bpkg#manifest-package-changes \cb{changes-file}}

If you have added any extra news or changelog files to the root \c{buildfile}
(see \l{#core-root-buildfile Adjust root \c{buildfile}}), then it may also make
sense to list them in the \c{manifest}. For example:

\
changes-file: ChangeLog.txt
\

|

\li|\l{bpkg#manifest-package-topics \cb{topics}}

Package topics. For example:

\
topics: network protocol, network security
\

\N|If the upstream project is hosted on GitHub or similar, then you can
usually copy the topics from the upstream repository description.||

\li|\l{bpkg#manifest-package-doc-url \cb{doc-url}}\n
    \l{bpkg#manifest-package-src-url \cb{src-url}}

Documentation and source code URLs. For example:

\
doc-url: https://example.org/foo/doc/
src-url: https://github.com/.../foo
\

||


\h2#core-root-manifest-summary|Adjust \c{manifest}: \c{summary}|

For \c{summary} use a brief description of the functionality provided by the
library or executable. Less than 70 characters is a good target to aim
for. Don't capitalize subsequent words (unless they are proper nouns) and
omit the trailing dot. For example:

\
summary: Vim xxd hexdump utility
\

Omit weasel words such as \"modern\", \"simple\", \"fast\", \"small\", etc.,
since they don't convey anything specific. Omit \"header-only\" or
\"single-header\" for C/C++ libraries since, at least in the context of
\c{build2}, it does not imply any benefit.

If upstream does not offer a sensible summary, the following template is
recommended for libraries:

\
summary: <functionality> C library
summary: <functionality> C++ library
\

For example:

\
summary: Event notification C library
summary: Validating XML parsing and serialization C++ library
\

If the project consists of multiple packages, it may be tempting to describe
each package in terms of the overall project name, for example:

\
name: libigl-core
summary: libigl core module
\

This doesn't give the user any clue about what functionality is provided
unless they find out what \c{libigl} is about. Better:

\
summary: Geometry processing C++ library, core module
\

If you follow the above pattern, then to produce a summary for external tests
or examples packages simply add \"tests\" or \"examples\" at the end,
for example:

\
summary: Event notification C library tests
summary: Geometry processing C++ library, core module examples
\

\h2#core-root-manifest-license|Adjust \c{manifest}: \c{license}|

For \c{license}, use the \l{https://spdx.org/licenses/ SPDX license ID} if at
all possible. If multiple licenses are involved, use the SPDX License
expression. See the
\l{https://build2.org/bpkg/doc/build2-package-manager-manual.xhtml#manifest-package-license
\c{license} manifest value} documentation for details, including the list of
the SPDX IDs for the commonly used licenses.
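For example, for a package under a single license:

\
license: Apache-2.0
\

And for a hypothetical package offered under a choice of two licenses:

\
license: LGPL-2.1-only OR BSD-3-Clause
\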

\h2#core-root-manifest-commit|Adjust \c{manifest}: commit and test|

Once all the adjustments to the \c{manifest} are made, it makes sense to test
it locally, commit our changes, and test with CI:

\
$ cd libfoo/ # Change to the package root.
$ b test
$ bdep test -a
\

Then commit our changes and CI:

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust manifest\"
$ git push

$ bdep ci
\

\h2#core-root-package-readme|Adjust \c{PACKAGE-README.md}|

The last file we need to adjust is \c{PACKAGE-README.md} which describes how
to use the package from a \c{build2}-based project. The template generated by
\c{bdep-new} establishes the recommended structure and includes a number of
placeholders enclosed in \c{< >} that need to be replaced with the
package-specific content. While all the placeholders should be
self-explanatory, below are a couple of guidelines.

For the upstream project description placeholder it's best to copy a
paragraph or two from
the upstream documentation, usually from \c{README.md} or the project's web
page.

If the package contains a single importable target, as is typical with
libraries, then it makes sense to drop the \"Importable targets\" section
since it won't add anything that hasn't already been said in the \"Usage\"
section.

Similarly, if the package has no configuration variables, then it makes sense
to drop the \"Configuration variables\" section.

For inspiration, see
\l{https://github.com/build2-packaging/libevent/blob/main/PACKAGE-README.md
\c{PACKAGE-README.md}} in \l{https://github.com/build2-packaging/libevent
\c{libevent}} (library) and
\l{https://github.com/build2-packaging/xxd/blob/master/xxd/README.md
\c{README.md}} in \l{https://github.com/build2-packaging/xxd \c{xxd}}
(executable).

\N|If upstream does not provide a \c{README} file, then it makes sense to
rename \c{PACKAGE-README.md} to just \c{README.md} in the \c{build2} package,
as was done in the \c{xxd} package mentioned above.|

Once \c{PACKAGE-README.md} is ready, commit and push the changes. You may also
want to view the result on GitHub to make sure everything is rendered
correctly.

\
$ cd foo/ # Change to the package repository root.
$ git add .
$ git status
$ git commit -m \"Adjust PACKAGE-README.md\"
$ git push
\


\h#core-release-publish|Release and publish|

Once all the adjustments are in and everything is tested, we can release the
final version of the package and then publish it to \l{https://cppget.org
cppget.org}. Both of these steps are automated with the corresponding \c{bdep}
commands. But before performing these steps we need to transfer the package
repository to \l{https://github.com/build2-packaging
github.com/build2-packaging}.


\h2#core-release-publish-transfer|Transfer package repository|

If you have been doing your work in a repository in your personal workspace,
then now is the time to transfer it to the
\l{https://github.com/build2-packaging github.com/build2-packaging}
organization.

\N|It is important to transfer the repository before publishing the first
version of the package since the repository is used as a proxy for package
name ownership (see \l{bdep-publish(1)} for details). If you publish the
package from your personal workspace and then transfer the repository, the
ownership information will have to be adjusted manually, which we would
prefer to avoid.|

The first step is to become a member of this organization (unless you already
are). This will give you permissions to create new repositories, which is
required to perform a transfer (you will also have full read/write access to
the repository once transferred). To get an invite,
\l{https://build2.org/community.xhtml#help get in touch}, not forgetting to
mention your GitHub user name.

Then, if your repository name has a prefix, such as \c{build2-}, or a
suffix, such as \c{-package}, the next step is to rename it to follow the
\l{#core-repo-name Use upstream repository name as package repository name}
guideline. Go to the repository's Settings on GitHub where you should see the
Rename button.

Finally, to perform the transfer, go to the repository's Settings, Danger Zone
section, where you should see the Transfer button. Select \c{build2-packaging}
as the organization to transfer to, and complete the transfer.

Once transferred, you will be considered the maintainer of this package going
forward. If other members of the \c{build2-packaging} organization wish to
participate in the package maintenance, the correct etiquette is to do this
via pull requests. However, if you lose interest in maintaining a package or
otherwise become unresponsive, we may allow a new maintainer to take over
this role.

\N|In extraordinary circumstances the \c{build2-packaging} administrators may
make direct changes to the package, for example, to release a new revision in
order to address a critical issue. They will still try to coordinate the
changes with the maintainer but may not always be able to wait for a response
in time-sensitive cases.|

\h2#core-release-publish-release|Release final version|

As you may recall, our package currently has a pre-release snapshot version of
the upstream version (see \l{#core-package-adjust-version Adjust package
version}). Once all the changes are in, we can change to the final upstream
version, in a sense signaling that this package version is ready.

\N|If you are working in a branch, then now is also the time to merge it into
\c{master} (or equivalent).|

The recommended way to do this is with the \l{bdep-release(1)} command (see
\l{intro#guide-versioning-releasing Versioning and Release Management} for
background). Besides replacing the \c{version} value in the package
\c{manifest} file, it also commits this change, tags it with the
\c{v\i{X}.\i{Y}.\i{Z}} tag, and can be instructed to push the changes (or show
the \c{git} command to do so). This command also by default \"opens\" the next
development version, which is something that we normally want for our own
projects but not when we package a third-party one (since we cannot predict
which version the upstream will release next). So we disable this
functionality. For example:

\
$ cd foo/ # Change to the package repository root.
$ bdep release --no-open --show-push
\

Then review the commit made by \c{bdep-release} and, if everything looks good,
push the changes by copying the command that it printed:

\
$ git diff HEAD~1
$ git push ...
\

\N|If something is wrong and you need to undo this commit, don't forget to
also remove the tag. Note also that once you have pushed your changes, you
cannot undo the commit. Instead, you will need to make a revision. See
\l{#core-version-management Version management} for background and
details.|
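For example, assuming the release commit has not yet been pushed and was
tagged \c{v2.1.0}, it could be undone like this:

\
$ git reset --hard HEAD~1 # Drop the release commit.
$ git tag -d v2.1.0       # Drop the release tag.
\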


\h2#core-release-publish-publish|Publish released version|

Once the version is released we can publish the package to
\l{https://cppget.org cppget.org} with the \l{bdep-publish(1)} command (see
\l{intro#guide-versioning-releasing Versioning and Release Management} for
background):

\
$ cd foo/ # Change to the package repository root.
$ bdep publish
\

The \c{bdep-publish} command prepares the source distribution of your package,
uploads the resulting archive to the package repository, and prints a link to
the package submission in the queue. Open this link in the browser and check
that there are no surprises in build results (they should match the earlier CI
results) or in the displayed package information (\c{PACKAGE-README.md}, etc).

\N|While there should normally be no discrepancies in the build results
compared to our earlier CI submissions, packages are not built in exactly
the same way on CI and in the package repository. Specifically, CI
builds them from \c{git} while the package repository \- from the submitted
package archives. If there are differences, it's almost always due to issues
in the source distribution preparation (see \l{#core-test-smoke-locally-dist
Test locally: distribution}).|

If everything looks good, then you are done: the package submission will be
reviewed and, if there are no problems, moved to \l{https://cppget.org
cppget.org}. If there are problems, then an issue will be created in the
package repository with the review feedback. In this case you will need to
\l{#core-version-management-new-revision release and publish a version
revision} to address any problems. However, in both cases, you should first
read through the following \l{#core-version-management Package version
management} section to understand the recommended \"version lifecycle\" of a
third-party package.

Also, if there is an issue for this package in
\l{https://github.com/build2-packaging/WISHLIST
github.com/build2-packaging/WISHLIST}, then you would want to add a comment
and close it once the package has been moved to \l{https://cppget.org
cppget.org}.


\h#core-version-management|Package version management|

Once we have pushed the release commit, in order to preserve continuous
versioning (see \l{#core-package-adjust-version Adjust package version} for
background), no further changes should be made to the package without also
changing its version.

\N|More precisely, you can make and commit changes without changing the
version provided they don't affect the package. For example, you may keep a
\c{TODO} file in the root of your repository which is not part of any
package. Updating such a file without changing the version is ok since the
package remains unchanged.|

While in our own projects we can change the versions as we see fit, with
third-party projects the versions are dictated by the upstream and as a result
we are limited to what we can use to fix issues in our packaging work
itself. It may be tempting (and perhaps even conceptually correct) to release
a patch version for our own fixes. However, we will be in trouble if later
upstream releases the same patch version but with a different set of changes
(plus the users of our package may wonder where this version came from). As
a result, we should only change the major, minor, or patch
components of the package version in response to the corresponding upstream
releases. For fixes to the packaging work itself we should instead use version
revisions.

\N|Because a revision replaces the existing version, we should try to limit
revision changes to bug fixes and preferably only in the package
\"infrastructure\" (\c{buildfiles}, \c{manifest}, etc). Fixes to upstream
source code should be limited to critical bugs and be preferably backported
from upstream. To put it another way, changes in a revision should have an
even more limited scope than a patch release.|

Based on this, the recommended \"version lifecycle\" for a third-party
package is as follows:

\ol|

\li|After a release (the \l{#core-release-publish-release Release final
version} step above), for example, version \c{2.1.0}, the package enters a
\"revision phase\" where we can release revisions (\c{2.1.0+1}, \c{2.1.0+2},
etc) to address any issues in the packaging work. See
\l{#core-version-management-new-revision New revision} for the detailed
procedure.|

\li|When a new upstream version is released, for example version \c{2.2.0},
and we wish to upgrade our package to this version, we switch to its
pre-release snapshot version (\c{2.2.0-a.0.z}) the same way as we did on the
\l{#core-package-adjust-version Adjust package version} step initially. See
\l{#core-version-management-new-version New version} for the detailed
procedure.|

\li|Once we are done upgrading to the new upstream version, we release the
final version just like on the \l{#core-release-publish-release Release final
version} step initially. At this point the package enters another revision
phase.||

Note also that in the above example, once we have switched to \c{2.2.0-a.0.z},
we cannot go back and release another revision or patch version for \c{2.1.0}
on the current branch. Instead, we will need to create a separate branch for
the \c{2.1.Z} release series and make a revision or patch version there. See
\l{#core-version-management-old-series New version/revision in old release
series} for the detailed procedure.
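For example, assuming version \c{2.1.0} was tagged \c{v2.1.0}, the release
series branch could be started like this:

\
$ git checkout -b 2.1 v2.1.0
\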


\h2#core-version-management-new-revision|New revision|

As discussed in \l{#core-version-management Package version management}, we
release revisions to fix issues in the package \"infrastructure\"
(\c{buildfiles}, \c{manifest}, etc) as well as critical bugs in upstream
source code.

\N|Releasing a new revision is also a good opportunity to review and fix any
accumulated issues that didn't warrant a revision on their own. See
\l{#core-version-management-new-version-issues New version: review/fix
accumulated issues} for background.|

In the revision phase of the package version lifecycle (i.e., when the version
does not end with \c{-a.0.z}), every commit must be accompanied by a
revision increment to maintain continuous versioning. As a result, each
revision release commit necessarily also contains the changes in this
revision. Below is a typical workflow for releasing and publishing the
revision:

\
$ # make changes
$ # test locally
$ git add .
$ bdep release --revision --show-push
$ # review commit
$ git push ...
$ # test with CI
$ bdep publish
\

Customarily, the revision commit message has the \c{\"Release version
X.Y.Z+R\"} summary as generated by \c{bdep-release} followed by the
description of changes, organized in a list if there are several. For example:

\
Release version 2.1.0+1

- Don't compile port/strlcpy.c on Linux if GNU libc is 2.38 or newer
  since it now provides the strl*() functions.

- Switch to using -pthread instead of -D_REENTRANT/-lpthread.
\

\N|The fact that all the changes must be in a single commit is another reason
to avoid substantial changes in revisions.|

Note also that you can make multiple commits while developing and testing the
changes for a revision in a separate branch. However, once they are ready for
a release, they need to be squashed into a single commit. The
\l{bdep-release(1)} command provides the \c{--amend} and \c{--squash} options
to automate this. For example, here is what a workflow with a separate branch
might look like:

\
$ git checkout -b wip-2.1.0+1

$ # make strl*() changes
$ # test locally
$ git commit -a -m \"Omit port/strlcpy.c if glibc 2.38 or newer\"
$ git push -u
$ # test with CI

$ # make pthread changes
$ # test locally
$ git commit -a -m \"Switch to using -pthread\"
$ git push
$ # test with CI

$ git checkout master
$ git merge --ff-only wip-2.1.0+1
$ bdep release --revision --show-push --amend --squash 2
$ # review commit
$ # test locally
$ git push ...
$ # test with CI
$ bdep publish
\


\h2#core-version-management-new-version|New version|

As discussed in \l{#core-version-management Package version management}, we
release new versions strictly in response to the corresponding upstream
releases.

The amount of work required to upgrade a package to a new upstream version
depends on the extent of changes in the new version.

On one extreme you may have a patch release which fixes a couple of bugs in
the upstream source code without any changes to the set of source files,
upstream build system, etc. In such cases, upgrading a package is a simple
matter of creating a new work branch, pointing the \c{upstream} \c{git}
submodule to the new release, running tests, and then merging, releasing, and
publishing a new package version.

On the other extreme you may have a new major upstream release which is
essentially a from-scratch rewrite with new source code layout, different
upstream build system, etc. In such cases it may be easier to likewise start
from scratch. Specifically, create a new work branch, point the \c{upstream}
\c{git} submodule to the new release, delete the existing package, and
continue from \l{#core-package Create package and generate \c{buildfile}
templates}.

Most of the time, however, it will be something in between where you may need
to tweak a few things here and there, such as adding symlinks to new source
files (or removing old ones), tweaking the \c{buildfiles} to reflect changes
in the upstream build system, etc.

The following sections provide a checklist-like sequence of steps that can be
used to review upstream changes with links to the relevant earlier sections in
case adjustments are required.


\h2#core-version-management-new-version-branch|New version: create new work branch|

When upgrading a package to a new upstream version it's recommended to do this
in a new work branch which, upon completion, is merged into \c{master} (or
equivalent). For example, if the new upstream version is \c{2.2.0}:

\
$ git checkout -b wip-2.2.0
\

\N|If you are not the maintainer of the package and would like to help with
preparing the new version, then, when everything is ready, use this branch
to create a pull request instead of merging it directly.|


\h2#core-version-management-new-version-open|New version: open new version|

This step corresponds to \l{#core-package-adjust-version Adjust package
version} during the initial packaging. Here we can make use of the
\c{bdep-release} command to automatically open the new version and make
the corresponding commit. For example, if the new upstream version is
\c{2.2.0}:

\
$ bdep release --open --no-push --open-base 2.2.0
\


\h2#core-version-management-new-version-submodule|New version: update \c{upstream} submodule|

This step corresponds to \l{#core-repo-submodule Add upstream repository as
\c{git} submodule} during the initial packaging. Here we need to update
the submodule to point to the upstream commit that corresponds to the
new version.

For example, if the upstream release tag we are interested in is called
\c{v2.2.0}, to update the \c{upstream} submodule to point to this release
commit, run the following commands:

\
$ cd upstream/
$ git checkout v2.2.0
$ cd ../

$ git add .
$ git status
$ git commit -m \"Update upstream submodule to 2.2.0\"
\


\h2#core-version-management-new-version-review|New version: review upstream changes|

At this point it's a good idea to get an overview of the upstream changes
between the two releases in order to determine which adjustments are likely to
be required in the \c{build2} package. We can use the \c{upstream} submodule
for that, which contains the change history we need.

One way to get an overview of changes between the releases is to use a
graphical repository browser such as \c{gitk} and view a cumulative \c{diff}
of changes between the two versions. For example, assuming the latest packaged
version is tagged \c{v2.1.0} and the new version is tagged \c{v2.2.0}:

\
$ cd upstream/
$ gitk v2.1.0..v2.2.0 &
\
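If a graphical browser is not at hand, \c{git} itself can produce the same
overview from the command line. Below is a self-contained sketch that uses
a throwaway repository standing in for \c{upstream/} (the \c{/tmp/gitdemo}
path, file names, and commit messages are made up for illustration; in a
real package you would run just the last three commands inside
\c{upstream/}):

```shell
# Create a throwaway repository with two release tags so that the
# review commands below have something to operate on.
mkdir -p /tmp/gitdemo && cd /tmp/gitdemo && git init -q .
git config user.email demo@example.com && git config user.name demo
echo one >f.c && git add f.c && git commit -qm 'release 2.1.0'
git tag v2.1.0
echo two >f.c && git commit -qam 'release 2.2.0'
git tag v2.2.0

git log --oneline v2.1.0..v2.2.0  # commits between the two releases
git diff --stat v2.1.0 v2.2.0     # summary of changed files
git diff v2.1.0 v2.2.0            # cumulative diff, as in gitk
```

The cumulative \c{git diff} output corresponds to the \"Diff this ->
selected\" view in \c{gitk}.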

Then click on the commit tagged \c{v2.2.0}, scroll down and right-click on the
commit tagged \c{v2.1.0}, and select the \"Diff this -> selected\" menu item.
This will display the cumulative set of changes between these two upstream
versions. Review them looking for the following types of changes in
particular (discussed in the following sections):

\ul|

\li|Changes to the source code layout.|

\li|New dependencies being added or old removed.|

\li|New source files being added or old removed (including in tests, etc).|

\li|Changes to the upstream build system.|

\li|Other new files/subdirectories being added or old removed.||


\h2#core-version-management-new-version-layout|New version: layout changes|

As mentioned earlier, for drastic layout changes it may make sense to start
from scratch and re-generate the package with the \c{bdep-new} command (use
\l{#core-package-struct Decide on the package source code layout} as a
starting point). On the other hand, if the changes are minor, then you can try
to adjust things manually. An in-between strategy is to generate the new
layout using \c{bdep-new} on the side and then retrofit the relevant changes
in \c{buildfiles} to the existing package. In a sense, this approach uses
\c{bdep-new} as a guide to figure out how to implement the new layout.


\h2#core-version-management-new-version-dependencies|New version: new/old dependencies|

If upstream added new or removed old dependencies, then you will need to
replicate these changes in your package as on the \l{#core-fill-depend Add
dependencies} and \l{#core-adjust-build-src-source-dep Adjust source
\c{buildfile}: dependencies} initial packaging steps.


\h2#core-version-management-new-version-sources|New version: new/old source files|

If upstream added new or removed old source files, then you will need to
replicate these changes in your package as on the \l{#core-fill-source Fill
with upstream source code} and possibly \l{#core-adjust-build-src-header
Adjust header \c{buildfile}} and \l{#core-adjust-build-src-source-src Adjust
source \c{buildfile}: sources, private headers} initial packaging steps.

Also don't forget about tests, examples, etc., which may also add new or
remove old source files (typically new tests). See
\l{#core-test-upstream-convert Convert smoke test to upstream tests}.

If there are any manual modifications to the upstream source code, then you
will also need to re-apply them to the new version as discussed in
\l{#howto-patch-upstream-source-manual Modifying upstream source code
manually}.


\h2#core-version-management-new-version-build|New version: changes to build system|

If upstream changed anything in the build system, then you may need to
replicate these changes in your package's \c{buildfiles}. The relevant initial
packaging steps are: \l{#core-adjust-build-wide Adjust project-wide build system files in
\c{build/}} and \l{#core-adjust-build-src-source-opt Adjust source
\c{buildfile}: build and export options}.

The corresponding steps for tests are: \l{#core-test-smoke-build-wide Adjust
project-wide build system files in \c{tests/build/}} and
\l{#core-test-upstream-convert Convert smoke test to upstream tests}.


\h2#core-version-management-new-version-other|New version: other new/old files/subdirectories|

If upstream added or removed any other files or subdirectories that are
relevant to our package (such as documentation), then adjust the package
similar to the \l{#core-root-buildfile Adjust root \c{buildfile}} and
\l{#core-root-buildfile-doc Adjust root \c{buildfile}: other subdirectories}
initial packaging steps.


\h2#core-version-management-new-version-root|New version: review \c{manifest} and \c{PACKAGE-README.md}|

It makes sense to review the package \c{manifest} (\l{#core-root-manifest
Adjust \c{manifest}}) and \c{PACKAGE-README.md} (\l{#core-root-package-readme
Adjust \c{PACKAGE-README.md}}) for any updates.


\h2#core-version-management-new-version-issues|New version: review/fix accumulated issues|

When a bug is identified in an already released package version, we may not
always be able to fix it immediately (for example, by
\l{#core-version-management-new-revision releasing a revision}). This could be
because the change is too extensive/risky for a revision or simply not
critical enough to warrant a release. In such cases it's recommended to file
an issue in the package repository with a view to fixing it when the next
opportunity arises. Releasing a new upstream version is one such opportunity
and it makes sense to review any accumulated package issues and see if any
of them could be addressed.


\h2#core-version-management-new-version-test|New version: test locally and with CI|

Once all the adjustments are in, test the package both locally and with CI
similar to how we did it during the initial packaging after completing the
smoke test:

\l{#core-test-smoke-locally Test locally}\n
\l{#core-test-smoke-locally-install Test locally: installation}\n
\l{#core-test-smoke-locally-dist Test locally: distribution}\n
\l{#core-test-smoke-ci Commit and test with CI}


\h2#core-version-management-new-version-release|New version: merge, release, and publish|

When the new version of the package is ready to be released, merge the
work branch to \c{master} (or equivalent):

\
$ git checkout master
$ git merge --ff-only wip-2.2.0
\

Then release and publish using the same steps as after the initial packaging:
\l{#core-release-publish Release and publish}.


\h2#core-version-management-old-series|New version/revision in old release series|

As discussed in \l{#core-version-management Package version management}, if we
have already switched to the next upstream version in the \c{master} (or
equivalent) branch, we cannot go back and release a new version or revision
for an older release series on the same branch. Instead, we need to create a
separate, long-lived branch for this work.

As an example, let's say we need to release another revision or a patch
version for an already released \c{2.1.0} while our \c{master} branch has
already moved on to \c{2.2.0}. In this case we create a new branch, called
\c{2.1}, to continue with the \c{2.1.Z} release series. The starting point of
this branch should be the latest released version/revision in the \c{2.1}
series. Let's say in our case it is \c{2.1.0+2}, meaning we have released two
revisions for \c{2.1.0} on the \c{master} branch before upgrading to
\c{2.2.0}. Therefore we use the \c{v2.1.0+2} release tag to start the
\c{2.1} branch:

\
$ git checkout -b 2.1 v2.1.0+2
\

Once this is done, we continue with the same steps as in
\l{#core-version-management-new-revision New revision} or
\l{#core-version-management-new-version New version} except that we never
merge this branch to \c{master}. If we ever need to release another revision
or version in this release series, then we continue using this branch. In a
sense, this branch becomes the equivalent of the \c{master} branch for this
release series and you should treat it as such (once published, never
delete it, rewrite its history, etc).

\N|It is less likely but possible that you may need to release a new minor
version in an old release series. For example, the \c{master} branch may have
moved on to \c{3.0.0} and you want to release \c{2.2.0} after the already
released \c{2.1.0}. In this case it makes sense to call the branch \c{2} since
it corresponds to the \c{2.Y.Z} release series. If you already have the
\c{2.1} branch, then it makes sense to rename it to \c{2}.|


\h1#dont-do|What Not to Do|

This chapter describes the common anti-patterns along with the recommended
alternative approaches.


\h#dont-from-scratch|Don't write \c{buildfiles} from scratch, use \c{bdep-new}|

Unless you have good reasons not to, create the initial project layout
automatically using \l{bdep-new(1)}, then tweak it if necessary and fill with
upstream source code.

The main rationale here is that there are many nuances in getting the build
right and the auto-generated \c{buildfiles} have had years of refinement and
fine-tuning. The familiar structure also makes it easier for others to
understand your build, for example while reviewing your package submission
or helping out with the package maintenance.

The \l{bdep-new(1)} command supports a wide variety of
\l{bdep-new.xhtml#src-layout source layouts}. While it may take a bit of time
to understand the customization points necessary to achieve the desired layout
for your first package, this will pay off in spades when you work on
converting subsequent packages.

See \l{#core-package-craft-cmd Craft \c{bdep\ new} command line to create
package} for details.


\h#dont-fix-upstream|Avoid fixing upstream issues in the \c{build2} package|

Any deviations from upstream make the \c{build2} package more difficult to
maintain. In particular, if you make a large number of changes to upstream
source code, releasing a new version will require a lot of work. As a result,
it is recommended to avoid fixing upstream issues in the \c{build2} package.
Instead, try to have the issues fixed upstream and wait for them to be
released as a new version.

Sometimes, however, you may have no choice. For example, the upstream is
inactive or has no plans to release a new version with your fixes any time
soon. Or you may want to add support for a platform/compiler that upstream
is unwilling or unable to support.

Note that even if you do fix some issues in the \c{build2} package directly,
try hard to also incorporate them upstream so that you don't need to maintain
the patches forever.

See also \l{#dont-change-upstream Avoid changing upstream source code layout}
and \l{#howto-patch-upstream-source How do I patch upstream source code?}


\h#dont-change-upstream|Avoid changing upstream source code layout|

It's a good idea to stay as close to the upstream's source code layout as
possible. For background and rationale, see \l{#core-package-struct Decide on
the package source code layout}.


\h#dont-header-only|Don't make library header-only if it can be compiled|

Some libraries offer two alternative modes: header-only and compiled. Unless
there are good reasons not to, a \c{build2} build of such a library should use
the compiled mode.

\N|Some libraries use the \i{precompiled} term to describe the non-header-only
mode. We don't recommend using this term in the \c{build2} package since it
has a strong association with precompiled headers and can therefore be
confusing. Instead, use the \i{compiled} term.|

The main rationale here is that a library would not be offering a compiled
mode if there were no benefits (usually faster compile times for library
consumers) and there is no reason not to take advantage of it in the
\c{build2} package.

There are, however, valid reasons why a compiled mode cannot be used, the most
common of which are:

\ul|

\li|The compiled mode is not well maintained/tested by upstream and therefore
offers inferior user experience.|

\li|The compiled mode does not work on some platforms, usually Windows due to
the lack of symbol export support (but see \l{b#cc-auto-symexport Automatic
DLL Symbol Exporting}).|

\li|Using the compiled version of the library requires changes to the
library consumers, for example, inclusion of different headers.|

|

If the compiled mode cannot always be used, then it may be tempting to
support both modes by making the mode user-configurable. Unless there are
strong reasons to support both, you should resist this temptation and use
the header-only mode everywhere.

The main rationale here is that variability adds complexity which makes the
result more prone to bugs, more difficult to use, and harder to review and
maintain. If you really want to have the compiled mode, then the right way to
achieve it is to work with upstream to fix any issues that prevent its use
in \c{build2}.

There are, however, valid reasons why supporting both modes may be needed, the
most common of which are:

\ul|

\li|The library is widely used in both modes but switching from one mode to
the other requires changes to the library consumers (for example, inclusion of
different headers). In this case only supporting one mode would mean not
supporting a large number of library consumers.|

\li|The library consists of a large number of independent components while
it's common for applications to only use a small subset of them. And
compiling all of them in the compiled mode takes a substantial amount of
time. Note that this can alternatively be addressed by making the presence
of optional components user-configurable.|

|


\h#dont-bundle|Don't bundle dependencies|

Sometimes third-party projects bundle their dependencies with their source
code (also called vendoring). For example, a C++ library may bundle a testing
framework. This is especially common with \l{https://cppget.org/catch2
\c{catch2}} where one often encounters a comical situation with only a few
kilobytes of library source code and over 600KB of \c{catch2.hpp}.

The extra size, while wasteful, is not the main issue, however. The bigger
problem is that if a bug is fixed in the bundled dependency, then to propagate
the fix we will need to release a new version (or revision) of each package
that bundles it. Needless to say this is not scalable.

While this doesn't apply to testing frameworks, an even bigger issue with
bundling of dependencies in general is that two libraries that bundle the same
dependency (potentially of different versions) may not be able to coexist in
the same build with the symptoms ranging from compile errors to subtle runtime
issues that are hard to diagnose.

As a result, it is strongly recommended that you unbundle any dependencies
that the upstream may have bundled. In case of testing frameworks, see
\l{https://github.com/build2/HOWTO/blob/master/entries/handle-tests-with-extra-dependencies.md
How do I handle tests that have extra dependencies?} for the recommended way
to deal with such cases.

\N|One special case where a bundled dependency may be warranted is a small
utility that is completely inline/private to the implementation and where
making it an external dependency may lead to a less performant result (due to
the inability to inline without resorting to LTO). The \c{xxhash}
implementation in \c{libzstd} is a representative example of this situation.|


\h#dont-main-target-root-buildfile|Don't build your main targets in the root \c{buildfile}|

It may be tempting to have your main targets (libraries, executables) in the
root \c{buildfile}, especially if it allows you to symlink entire directories
from \c{upstream/} (which is not possible if you have to have a \c{buildfile}
inside). However, this is not recommended except for the simplest of projects.

Firstly, this quickly gets messy since you have to combine managing
\c{README}, \c{LICENSE}, etc., and subdirectories with your main target
builds. More importantly, this also means that when your main target is
imported (and thus the \c{buildfile} that defines this target must be loaded),
your entire project will be loaded, including any \c{tests/} and \c{examples/}
subprojects, and that is wasteful.


If you want to continue symlinking entire directories from \c{upstream/} but
without moving everything to the root \c{buildfile}, the recommended approach
is to simply add another subdirectory level. Let's look at a few concrete
examples to illustrate the technique (see \l{#core-package-struct Decide on the
package source code layout} for background on the terminology used).

Here is the directory structure of a package which uses a combined layout (no
header/source split) and where the library target is in the root
\c{buildfile}:

\
libigl-core/
├── igl/ -> ../upstream/igl/
├── tests/
└── buildfile                        # Defines lib{igl-core}.
\

And here is the alternative structure where we have added the extra
\c{libigl-core} subdirectory with its own \c{buildfile}:

\
libigl-core/
├── libigl-core/
│   ├── igl/ -> ../../upstream/igl/
│   └── buildfile                    # Defines lib{igl-core}.
├── tests/
└── buildfile
\

Below is the \c{bdep-new} invocation that can be used to automatically
create this alternative structure (see \l{#core-package-craft-cmd Craft
\c{bdep\ new} command line to create package} for background and
\l{bdep-new(1)} for details):

\
$ bdep new \
 --type lib,prefix=libigl-core,subdir=igl,buildfile-in-prefix \
 libigl-core
\

Let's also look at an example of the split layout, which may require
slightly different \c{bdep-new} sub-options to achieve the same result.
Here is the layout which matches upstream exactly:

\
$ bdep new --type lib,split,subdir=foo,no-subdir-source libfoo
$ tree libfoo
libfoo/
├── include/
│   └── foo/
│       ├── buildfile
│       └── ...
└── src/
    ├── buildfile
    └── ...
\

However, with this layout we will not be able to symlink the entire
\c{include/foo/} and \c{src/} subdirectories because there are \c{buildfiles}
inside (and which may tempt you to just move everything to the root
\c{buildfile}). To fix this we can move the \c{buildfiles} out of source
subdirectory \c{foo/} and into prefixes (\c{include/} and \c{src/}) using the
\c{buildfile-in-prefix} sub-option. And since \c{src/} doesn't have a source
subdirectory, we have to invent one:

\
$ bdep new --type lib,split,subdir=foo,buildfile-in-prefix libfoo
$ tree libfoo
libfoo/
├── include/
│   ├── foo/ -> ../../upstream/include/foo/
│   └── buildfile
└── src/
    ├── foo/ -> ../../upstream/src/
    └── buildfile
\


\h#dont-big-changes-revision|Don't make extensive changes in a revision|

Unlike a new version, a revision replaces the previous revision of the same
version and as a result must be strictly backwards-compatible in all aspects
with what it replaces. If you make extensive changes in a revision, it becomes
difficult to guarantee that this requirement is satisfied. As a result, you
should refrain from making major changes, like substantially altering the
build, in a revision, instead delaying such changes until the next version.


\h1#howto|Packaging HOWTO|

This chapter provides advice on how to handle less common packaging
tasks and requirements.


\h#howto-patch-upstream-source|How do I patch upstream source code?|

If you need to change something in the upstream source code, there are
several options. The simplest is to make a copy of the upstream source file
and make the modifications there. While straightforward, this approach has
one major drawback: you will have to keep re-applying the changes to every
subsequent version unless and until upstream incorporates them. The other
two options try to work around this drawback.

The first alternative option is to modify upstream source code automatically
during the build, typically using an ad hoc recipe. This approach works best
when the changes are regular and can be applied mechanically with something
like the \l{testscript#builtins-sed \c{sed} builtin}.

The second alternative option is to use the C/C++ preprocessor to make the
necessary changes to the upstream source code during compilation. Unlike the
first alternative, this approach doesn't have a prescribed way to apply it in
every situation and often requires some imagination. Note that it also has the
tendency to quickly get out of hand, at which point it's wise to keep it
simple and use the first option (manual modification).

The following sections examine each approach in detail.


\h2#howto-patch-upstream-source-manual|Modifying upstream source code manually|

As an illustration of this approach, let's say we need to patch
\c{src/foo.cpp} in our \c{libfoo} example from the previous sections (see the
\l{#core-fill-source Fill with upstream source code} step for a refresher).
The recommended sequence of steps is as follows:

\ol|

\li|Rename the upstream symlink to \c{.orig}:

\
$ cd libfoo/src/
$ mv foo.cpp foo.cpp.orig
\

|

\li|Make a deep copy of \c{.orig}:

\
$ cp -H foo.cpp.orig foo.cpp
\

|

\li|Make any necessary modifications in the deep copy:

\
$ edit foo.cpp
\

|

\li|Create a patch for the modifications:

\
$ diff -u foo.cpp.orig foo.cpp >foo.cpp.patch
\

||

The presence of the \c{.orig} and \c{.patch} files makes it clear that the
upstream code was modified. They are also useful when re-applying the changes
to the new version of the upstream source code. The recommended sequence of
steps for this task is as follows:

\ol|

\li|After the \c{upstream} submodule update (see the
\l{#core-version-management-new-version-submodule New version: update
\c{upstream} submodule} step), the \c{.orig} symlink points to the new version
of the upstream source file.|

\li|Overwrite old modified version with a deep copy of new \c{.orig}:

\
$ cp -H foo.cpp.orig foo.cpp
\

|

\li|Apply old modifications to the new deep copy:

\
$ patch <foo.cpp.patch
\

||
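The two sequences above can be rehearsed end-to-end on a scratch file. In
this sketch the \c{/tmp/patch-demo} directory and the file contents are
made up for illustration; in a real package \c{foo.cpp.orig} would be the
renamed symlink into \c{upstream/}:

```shell
# Initial packaging: deep-copy, modify, and record the patch.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'int limit = 10;\nint unchanged;\n' >foo.cpp.orig
cp foo.cpp.orig foo.cpp                               # deep copy
sed 's/limit = 10/limit = 20/' foo.cpp.orig >foo.cpp  # our change
diff -u foo.cpp.orig foo.cpp >foo.cpp.patch || true   # diff exits 1

# New upstream version: .orig now contains the updated source.
printf 'int limit = 10;\nint unchanged;\nint added;\n' >foo.cpp.orig
cp foo.cpp.orig foo.cpp          # overwrite with fresh deep copy
patch <foo.cpp.patch             # re-apply our modification
```

After the last step \c{foo.cpp} contains both our modification and the new
upstream additions.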


\h2#howto-patch-upstream-source-build|Modifying upstream source code during build|

As an illustration of this approach, let's say upstream is using the
\c{${VAR\}} style variable substitutions in their \c{config.h.cmake} instead
of the more traditional \c{@VAR@} style:

\
/* config.h.cmake */

#define FOO_VERSION \"${PROJECT_VERSION}\"
\

The \c{${VAR\}} style is not supported by the \c{build2}
\l{https://github.com/build2/libbuild2-autoconf \c{autoconf}} module which
means we cannot use the upstream \c{config.h.cmake} as is. While we could
patch this file manually to use \c{@VAR@} instead, this is a pretty mechanical
change that can be easily made with a simple ad hoc recipe during the build,
freeing us from manually applying the same changes in subsequent versions. For
example:

\
using autoconf

h{config}: in{config.h.in}
{
  autoconf.flavor = cmake
  PROJECT_VERSION = $version
}

in{config.h.in}: file{config.h.cmake}
{{
  sed -e 's/\$\{([^}]+)\}/@\1@/g' $path($<) >$path($>)
}}
\
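The substitution itself can be tried out with the regular command-line
\c{sed} (the recipe above uses the testscript \c{sed} builtin, whose
regular expression syntax is similar; the \c{/tmp} paths below are for
illustration only):

```shell
# Convert the ${VAR} substitution style to the @VAR@ style that the
# autoconf module understands.
printf '#define FOO_VERSION "${PROJECT_VERSION}"\n' >/tmp/config.h.cmake
sed -E 's/\$\{([^}]+)\}/@\1@/g' /tmp/config.h.cmake >/tmp/config.h.in
cat /tmp/config.h.in
```

Note the \c{[^}]+} character class rather than a greedy \c{.+}: this keeps
the substitution correct even if several \c{${VAR\}} references appear on
the same line.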


\h2#howto-patch-upstream-source-preproc|Modifying upstream source code with C/C++ preprocessor|

A good illustration of this approach is adding the \c{build2} metadata to an
executable (see
\l{https://github.com/build2/HOWTO/blob/master/entries/convey-additional-information-with-exe-lib.md
How do I convey additional information (metadata) with executables and C/C++
libraries?} for background). Let's say we have a symlink to upstream's
\c{main.c} that implements the executable's \c{main()} function and we need to
add a snippet of code at the beginning of this function that handles the
\c{--build2-metadata} option. While manually modifying \c{main.c} is not a
wrong approach, we can try to be clever and do it automatically with the
preprocessor.

Specifically, we can create another file next to the \c{main.c} symlink,
calling it, for example, \c{main-build2.c}, with the following contents:

\
/* Handle --build2-metadata in main() (see also buildfile). */

#define main xmain
#include \"main.c\"
#undef main

#include <string.h>
#include <stdio.h>

int main (int argc, const char** argv)
{
  if (argc == 2 && strncmp (argv[1], \"--build2-metadata=\", 18) == 0)
  {
    printf (\"# build2 buildfile foo\n\");
    printf (\"export.metadata = 1 foo\n\");
    printf (\"foo.name = [string] foo\n\");
    ...
    return 0;
  }

  return xmain (argc, argv);
}
\

The idea here is to rename the original \c{main()} with the help of the C
preprocessor and provide our own \c{main()} which, after handling
\c{--build2-metadata}, calls the original. One notable deal-breaker for
this approach is C++ implementations of \c{main()} that omit the explicit
\c{return} (the implicit \c{return 0} applies only to the function named
\c{main()}, not to the renamed \c{xmain()}). There is also a better
chance in C++ for the \c{main} macro to
replace something unintended.

To complete this we also need to exclude \c{main.c} from compilation in
our \c{buildfile} (since it is compiled as part of \c{main-build2.c} via
the preprocessor inclusion). For example:

\
exe{foo}: {h c}{** -main}
exe{foo}: c{main}: include = adhoc  # Included in main-build2.c.
\


\h#howto-bad-inclusion-practice|How do I deal with bad header inclusion practice?|

This section explains how to deal with libraries that include their
public, generically-named headers without the library name as a
subdirectory prefix. Such libraries can coexist neither in the same
build nor when installed.

Specifically, as an illustration of the problem, consider the \c{libfoo}
library with a public header named \c{util.h} that is included as
\c{<util.h>} (instead of, say, \c{<libfoo/util.h>} or
\c{<libfoo-util.h>}). If this library's
headers are installed directly into, say, \c{/usr/include}, and two such
libraries happen to be installed at the same time, then one will overwrite
the other's header. There are also problems in the non-installed case: if two
such libraries are used by the same project, then which \c{<util.h>} header
gets included depends on which library's header search path ends up being
specified first on the command line (with the \c{-I} option).

These issues are severe enough that libraries with such inclusion issues
cannot be published to \l{https://cppget.org cppget.org} without them being
addressed in the \c{build2} package. Thankfully, most library authors these
days use the library name as an inclusion prefix (or sometimes they have
headers that are decorated with the library name). However, libraries that do
not follow these guidelines do exist and this section describes how to change
their inclusion scheme if you are attempting to package one of them.

\N|One notable consequence of changing the inclusion scheme is that it will no
longer be possible to use a system-installed version of the package (because
it presumably still uses the unqualified inclusion scheme). Note, however,
that distributions like Debian and Fedora have the same co-existence issue as
we do and are generally strict about potential header clashes. In
particular, it is not uncommon to find Debian packages installing a
library's headers into subdirectories of \c{/usr/include} to avoid such
clashes. And if
you find this to be the case for the library you are packaging, then it may
make sense to use the same prefix as used by the main distributions for
compatibility.

It is also possible that distributions disregard these considerations for some
libraries. This usually happens for older, well-known libraries that happened
to be installed this way in the early days and changing things now would
be too disruptive. In a sense, it is understood that such libraries
effectively \"own\" the unqualified header names that they happen to be
using. If you think you are packaging such a library,
\l{https://build2.org/community.xhtml#help get in touch} to discuss this
further since it may make sense to also disregard this rule in
\l{https://cppget.org cppget.org}.|

As a concrete example of the approach, let's continue with \c{libfoo}
that has \c{util.h} and which upstream expects the users to include as
\c{<util.h>}. This is what the upstream source code layout may look like:

\
libfoo/
├── include/
│   └── util.h
└── src/
    └── ...
\

Our plan is to change the inclusion scheme in the \c{build2} package from
\c{<util.h>} to \c{<libfoo/util.h>}. To this effect, we use a slightly
modified layout for our package (see \l{#core-package-craft-cmd Craft \c{bdep
new} command line to create package} on how to achieve it):

\
libfoo/
├── include/
│   └── libfoo/
│       └── util.h  -> ../../../upstream/include/util.h
└── src/
    └── ...          -> ../../upstream/src/...
\

The installation-related section in our \l{#core-adjust-build-src-header
header \c{buildfile}} will look like this:

\
# Install into the libfoo/ subdirectory of, say, /usr/include/
# recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/libfoo/
  install.subdirs = true
}
\

In the \l{#core-adjust-build-src-source source \c{buildfile}} we will most
likely need to add the \c{include/libfoo} header search path since the
upstream source files continue to include public headers without the library
prefix (there should be no harm in that and it's not worth modifying them):

\
# Build options.
#
out_pfx_inc = [dir_path] $out_root/include/
src_pfx_inc = [dir_path] $src_root/include/
out_pfx_src = [dir_path] $out_root/src/
src_pfx_src = [dir_path] $src_root/src/

# Unqualified (without the libfoo/ prefix) header search paths.
#
out_pfx_inc_unq = [dir_path] $out_root/include/libfoo
src_pfx_inc_unq = [dir_path] $src_root/include/libfoo

cxx.poptions =+ \"-I$out_pfx_src\" \"-I$src_pfx_src\" \\
                \"-I$out_pfx_inc\" \"-I$src_pfx_inc\" \\
                \"-I$out_pfx_inc_unq\" \"-I$src_pfx_inc_unq\"
\

It is also possible that public headers include each other as
\c{<util.h>} rather than the more common \c{\"util.h\"}. If that's
the case, then we need to fix that and there are two ways to do it.
The first approach is to patch the public headers to include each
other with the library prefix (that is, \c{<libfoo/util.h>}, etc).
See \l{#howto-patch-upstream-source How do I patch
upstream source code?} for details.
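
For example, assuming a hypothetical second public header \c{foo.h}
that includes \c{util.h}, the patched inclusion could look like this:

\
// foo.h (patched to use the library prefix)

#include <libfoo/util.h>  // Was: #include <util.h>
\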

The second approach is to support including public headers both ways,
that is, as \c{<libfoo/util.h>} and as \c{<util.h>}. This will not
only solve the above
problem (public headers including each other), but also support any existing
code that uses this library and most likely includes its headers the old way,
without the prefix.

There is, however, a major drawback to doing that: while the installation of
the library can now co-exist with other libraries (because we install its
public headers into, say, \c{/usr/include/libfoo}), it may still not be usable
in combination with other libraries from the same build (because we still add
the unqualified header search path).

If you still want to provide this dual inclusion support, the way to achieve
it is by exporting the unqualified header search path and also adding it to
the \c{pkg-config} files (see \l{#howto-extra-header-install-subdir How do I
handle extra header installation subdirectory?} for background on the
latter). For example:

\
# Export options.
#
lib{foo}:
{
  cxx.export.poptions = \"-I$out_pfx_inc\" \"-I$src_pfx_inc\" \\
                        \"-I$out_pfx_inc_unq\" \"-I$src_pfx_inc_unq\"
  cxx.export.libs = $intf_libs
}

# Make sure headers installed into, say, /usr/include/libfoo/
# can also be included without the directory prefix for backwards
# compatibility.
#
lib{foo}: cxx.pkgconfig.include = include/ include/libfoo/
\


\h#howto-extra-header-install-subdir|How do I handle extra header installation subdirectory?|

This section explains how to handle an additional header installation
subdirectory. As an illustration of the problem, consider the \c{libfoo}
example from the previous sections (see the \l{#core-fill-source Fill with
upstream source code} step for a refresher). In that example the library
headers are included as \c{<foo/util.hpp>} and installed as, say,
\c{/usr/include/foo/util.hpp}. In this scheme the installed header
inclusion works without requiring any extra steps from our side
because the compiler searches for headers in \c{/usr/include} by
default.

However, some libraries choose to install their headers into a subdirectory
of, say, \c{/usr/include} but without having this subdirectory as part of the
inclusion path (\c{foo/} in \c{<foo/util.hpp>}). The two typical
reasons for
this are support for installing multiple versions of the same library
side-by-side (for example, \c{/usr/include/foo-v1/foo/util.hpp}) as well as
not using the library name as the inclusion subdirectory prefix and then
having to hide the headers in a subdirectory due to potential clashes with
other headers (if installed directly into, say, \c{/usr/include}; see
\l{#howto-bad-inclusion-practice How do I deal with bad header inclusion
practice?} for background).
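
As an illustration, a hypothetical side-by-side installation of two
versions of such a library could look like this:

\
/usr/include/
├── foo-v1/
│   └── foo/
│       └── util.hpp
└── foo-v2/
    └── foo/
        └── util.hpp
\

In both cases the headers are still included as \c{<foo/util.hpp>},
with the version-specific search path selecting the version to use.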

In such cases the installed header inclusion does not work out of the box and
we have to arrange for an additional header search path to be added via
\c{pkg-config}. Let's use the versioned library case to illustrate this
technique. The relevant part from the \l{#core-adjust-build-src-header header
\c{buildfile}} will look like this:

\
# Install into the foo-vN/foo/ subdirectory of, say, /usr/include/
# recreating subdirectories.
#
{hxx ixx txx}{*}:
{
  install         = include/\"foo-v$version.major\"/foo/
  install.subdirs = true
}
\

The part that we need to add, this time to the
\l{#core-adjust-build-src-source source \c{buildfile}}, looks like this:

\
# Make sure headers installed into, say, /usr/include/foo-vN/foo/
# can be included as <foo/util.hpp> by overriding the header search
# path in the generated pkg-config files.
#
lib{foo}: cxx.pkgconfig.include = include/\"foo-v$version.major\"/
\

\N|The variable will be \c{c.pkgconfig.include} for a C library.|


\h#howto-no-extension-header|How do I handle headers without extensions?|

If all the headers in a project have no extension, then you can simply
specify the empty \c{extension} value for the \c{hxx{\}} target type
in \c{build/root.build}:

\
hxx{*}: extension =
cxx{*}: extension = cpp
\

Note, however, that using wildcard patterns for such headers in your
\c{buildfile} is a bad idea since such a wildcard will most likely pick up
other files that also have no extension (such as \c{buildfile}, executables on
UNIX-like systems, etc). Instead, it's best to spell the names of such headers
explicitly. For example, instead of:

\
lib{hello}: {hxx cxx}{*}
\

Write:

\
lib{hello}: cxx{*} hxx{hello}
\

If only some headers in a project have no extension, then it's best to specify
the non-empty extension for the \c{extension} variable in \c{build/root.build}
(so that you can still use wildcards for headers with extensions) and spell
out the headers with no extension explicitly. Continuing with the above
example, if we have both the \c{hello.hpp} and \c{hello} headers, then we can
handle them like this:

\
hxx{*}: extension = hpp
cxx{*}: extension = cpp
\

\
lib{hello}: {hxx cxx}{*} hxx{hello.}
\

Notice the trailing dot in \c{hxx{hello.\}} \- this is the explicit \"no
extension\" specification. See \l{b#targets Targets and Target Types}
for details.


\h#howto-debug-macro|How do I expose extra debug macros of a library?|

Sometimes libraries provide extra debugging facilities that are
usually enabled or disabled with a macro. For example, \c{libfoo} may
provide the \c{LIBFOO_DEBUG} macro that enables additional sanity
checks, tracing, etc. Normally such facilities are disabled by
default.

While it may seem like a good idea to detect a debug build and enable this
automatically, it is not: such facilities usually impose substantial overhead
and the presence of debug information does not mean that performance is not
important (people routinely make optimized builds with debug information).

As a result, the recommended approach is to expose this as a configuration
variable that the consumers of the library can use (see \l{b#proj-config
Project Configuration} for background). Continuing with the
\c{libfoo} example, we can add \c{config.libfoo.debug} to its
\c{build/root.build}:

\
# build/root.build

config [bool] config.libfoo.debug ?= false
\

And then define the \c{LIBFOO_DEBUG} macro based on that in the \c{buildfile}:

\
# src/buildfile

if $config.libfoo.debug
  cxx.poptions += -DLIBFOO_DEBUG
\

If the macro is also used in the library's interface (for example, in inline
or template functions), then we will also need to export it (see
\l{#core-adjust-build-src-source-opt Adjust source \c{buildfile}: build and
export options} for details):

\
# src/buildfile

if $config.libfoo.debug
{
  cxx.poptions += -DLIBFOO_DEBUG
  lib{foo}: cxx.export.poptions += -DLIBFOO_DEBUG
}
\

\N|If the debug facility in question should be enabled by default
even in optimized builds (in which case the macro usually has the
\c{NO_DEBUG} semantics), the other option is to hook it up to the
standard \c{NDEBUG} macro, for example, in the library's
configuration header file.|
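
As a sketch of this hook-up, assuming a hypothetical
\c{libfoo/config.h} configuration header:

\
/* libfoo/config.h (sketch): enable the extra debug facilities
 * unless this is an optimized build (NDEBUG defined).
 */
#ifndef NDEBUG
#  define LIBFOO_DEBUG 1
#endif
\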

Note that such \c{.debug} configuration variables are primarily meant
for the user to selectively enable extra debugging support in certain
libraries of their build. However, if your project depends on a number of
libraries with such extra debugging support and it generally makes sense to
also enable this support in dependencies if it is enabled in your project,
then you may want to propagate your \c{.debug} configuration value to the
dependencies (see the \l{bpkg#manifest-package-depends \c{depends} package
\c{manifest} value} for details on dependency configuration). You,
however, should still allow the user to override this decision on a
per-dependency basis.

Continuing with the above example, let's say we have \c{libbar} with
\c{config.libbar.debug} that depends on \c{libfoo} and wishes to
enable debugging in \c{libfoo} by default if it is enabled in
\c{libbar}. This is how we can correctly arrange for this in
\c{libbar}'s \c{manifest}:

\
depends:
\\
libfoo ^1.2.3
{
  # We prefer to enable debug in libfoo if enabled in libbar
  # but accept if it's disabled (for example, by the user).
  #
  prefer
  {
    if $config.libbar.debug
      config.libfoo.debug = true
  }

  accept (true)
}
\\
\


\h1#faq|Packaging FAQ|

\h#faq-alpha-stable|Why is my package in \c{alpha} rather than \c{stable}?|

If your package uses a semver version (or semver-like, that is, has three
version components) and the first component is zero (for example, \c{0.1.0}),
then, according to the semver specification, this is an alpha version
and \l{bdep-publish(1)} automatically publishes such versions to the
\c{alpha} section of the repository.

Sometimes, however, in a third-party package, while the version may look like
semver, upstream may not assign the zero first component any special
meaning. In such cases you can override the \c{bdep-publish} behavior with the
\c{--section} option, for example:

\
$ bdep publish --section=stable
\

Note that you should only do this if you are satisfied that upstream
does not imply alpha quality by the zero first component. Getting an
explicit statement to this effect from upstream is recommended.


\h#faq-publish-stage|Where to publish if package requires staged toolchain?|

If your package requires the \l{https://build2.org/community.xhtml#stage staged
toolchain}, for example, because it needs a feature or bugfix that is not yet
available in the released toolchain, then you won't be able to
publish it to \c{cppget.org}. Specifically, if your package has an
accurate \c{build2} version constraint and you attempt to publish it,
you will get an error like this:

\
error: package archive is not valid
  info: unable to satisfy constraint (build2 >= 0.17.0-) for package foo
  info: available build2 version is 0.16.0
\

There are three alternative ways to proceed in this situation:

\ol|

\li|Wait until the next release and then publish the package to
\c{cppget.org}.|

\li|If the requirement for the staged toolchain is \"minor\", that is, it
doesn't affect the common functionality of the package or only affects a small
subset of platforms/compilers, then you can lower the toolchain version
requirement and publish the package to \c{cppget.org}. For example, if you
require the staged toolchain because of a bugfix that only affects one
platform, it doesn't make sense to delay publishing the package since it is
perfectly usable on all the other platforms in the meantime.|

\li|Publish it to \l{https://queue.stage.build2.org
queue.stage.build2.org}, the staging package repository. This
repository contains new packages that require the staged toolchain to
work and which will be automatically moved to \c{cppget.org} once the
staged version is released. The other advantage of publishing to this
repository (besides not having to remember to manually publish the
package once the staged version is released) is that your package
becomes available from an archive repository, which is substantially
faster than a \c{git} repository.

To publish to this repository, use the following \c{bdep-publish} command
line:

\
$ bdep publish --repository=https://stage.build2.org ...
\

||


"