Diffstat (limited to 'doc')
-rw-r--r--  doc/intro.cli  190
1 file changed, 96 insertions, 94 deletions
diff --git a/doc/intro.cli b/doc/intro.cli
index 4be46dd..15ffab9 100644
--- a/doc/intro.cli
+++ b/doc/intro.cli
@@ -536,11 +536,10 @@ synchronizing:
drop hello
\
-As mentioned earlier, by default \c{bdep-new(1)} initializes a \c{git}
+As mentioned earlier, by default \l{bdep-new(1)} initializes a \c{git}
repository for us. Now that we have successfully built and tested our project,
it might be a good idea to make a first commit and publish it to a remote
-repository hosting service where others can find it. Using GitHub as an
-example:
+repository where others can find it. Using GitHub as an example:
\
$ git add .
@@ -549,15 +548,15 @@ $ git remote add origin git@github.com:john-doe/hello.git
$ git push -u
\
-While we have managed to test a few platforms (Linux and Windows) and compiler
-versions (Clang and GCC) locally, there are quite a few combinations that we
-haven't tried (Mac OS with Apple Clang and Windows with MSVC, to name the
-major ones). We could test them manually, some with the help of virtualization
-while for others (such as Mac OS) we may need physical hardware. Add a few
-versions for each compiler and we are looking at at least a dozen build
-configurations. Needless to say, testing on all of them manually is a lot of
-work. Now that we have our project available from a public remote repository,
-we can instead use the remote testing functionality offered by the
+While we have managed to test a couple of platforms (Linux and Windows) and
+compiler versions (Clang and GCC) locally, there are quite a few combinations
+that we haven't tried (Mac OS with Apple Clang and Windows with MSVC, to name
+the major ones). We could test them manually, some with the help of
+virtualization while for others (such as Mac OS) we may need physical
+hardware. Add a few versions for each compiler and we are looking at a dozen
+build configurations. Needless to say, testing on all of them manually is a
+lot of work. Now that we have our project available from a public remote
+repository, we can instead use the remote testing functionality offered by the
\l{bdep-ci(1)} command. For example:
\
@@ -573,20 +572,20 @@ CI request is queued:
https://ci.cppget.org/@d6ee90f4-21a9-47a0-ab5a-7cd2f521d3d8
\
-Let's see what's going on here. We are submitting our test request to
-\l{https://ci.cppget.org ci.cppget.org} which is a public CI service run by
-the \c{build2} project (see available \l{https://ci.cppget.org?build-configs
-Build Configurations} and use \l{https://ci.cppget.org?ci Policies}). We are
-testing the current working tree state (branch and commit) of our package. It
-should, however, be available from our remote (GitHub repository in this
-example) since that's where the CI service expects to find it. In response we
-get a URL where we can see the build and test results, logs, etc.
+Let's see what's going on here. By default \c{ci} submits a test request to
+\l{https://ci.cppget.org ci.cppget.org}, a public CI service run by the
+\c{build2} project (see available \l{https://ci.cppget.org?build-configs Build
+Configurations} and \l{https://ci.cppget.org?ci Use Policies}). It is testing
+the current working tree state (branch and commit) of our package which should
+be available from our remote repository (on GitHub in this example) since
+that's where the CI service expects to find it. In response we get a URL where
+we can see the build and test results, logs, etc.
\N|This \i{push} CI model works particularly well with the \"feature branch\"
development workflow. Specifically, you would develop a new feature in a
separate branch, publishing and remote-testing it as necessary. When the
feature is ready, you would merge any changes from \c{master}, test the result
-one more time, then merge the feature into master.|
+one more time, and then merge the feature into master.|
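The push CI model described in the note can be sketched as a command sequence. A minimal sketch; the branch name \c{fast-greeting} is hypothetical and the remote is assumed to be set up as shown earlier:

```
$ git checkout -b fast-greeting         # start the feature branch
$ git commit -a -m "Speed up greeting"  # develop and commit as usual
$ git push -u origin fast-greeting      # publish the branch
$ bdep ci                               # remote-test the branch state
$ git merge master                      # merge any changes from master
$ bdep ci                               # test the result one more time
$ git checkout master
$ git merge fast-greeting               # merge the feature into master
```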
Now is a good time to get an overview of the \c{build2} toolchain. After all,
we have already used two of its tools (\c{bdep} and \c{b}) without a clear
@@ -1396,27 +1395,27 @@ reference: 0c596fca2017
Let's see what's going on here. By default \c{publish} submits to the
\l{https://cppget.org cppget.org} repository. On \c{cppget.org} package names
-are assigned on the first come first serve basis. But instead of using logins
-or emails to authenticate package ownership, \c{cppget.org} uses your version
+are assigned on a first-come, first-served basis. But instead of using logins or
+emails to authenticate package ownership, \c{cppget.org} uses your version
control repository as a proxy. In a nutshell, when we submit a package for the
first time, its control repository is associated with its name and all
subsequent submissions have to use the same control repository (the
-authentication part). When submitting a package, \c{publish} also adds a
-file to the \c{build2-control} branch of the control repository with the
-package archive checksum. On its side, \c{cppget.org} checks for the presence
-of this file in the control repository to make sure that whomever is making
-this submission has write access to the control repository (the authorization
-part). See \l{bdep-publish(1)} for further details.
+authentication part). When submitting a package, \c{publish} also adds a file
+to the \c{build2-control} branch of the control repository with the package
+archive checksum. On the other side, \c{cppget.org} checks for the presence of
+this file to make sure that whoever is making this submission has write
+access to the control repository (the authorization part). See
+\l{bdep-publish(1)} for details.
The rest should be pretty straightforward: \c{publish} prepares and uploads a
distribution of our package which goes into the \c{alpha} section of the
-repository (because it has \c{0} major version component). In response we get
-a reference which we can use to check the status of our submission on
+repository (because its major version is \c{0}). In response we get a
+reference which we can use to check the status of our submission on
\l{https://queue.cppget.org queue.cppget.org}. And after some basic testing
-and verification our package should appear on \c{cppget.org} (the exact steps
-are described in submission \l{https://cppget.org?submit Policies}). Note also
+and verification, our package should appear on \c{cppget.org} (the exact steps
+are described in \l{https://cppget.org?submit Submission Policies}). Note also
that package submissions to \c{cppget.org} are public and permanent and cannot
-be \"unpublished\" under no circumstances.
+be removed under any circumstances.
Finally, we also shouldn't forget to increment the version for the next
development cycle:
@@ -2126,22 +2125,22 @@ entry as a documentation to users of our project.|
\h1#proj-struct|Canonical Project Structure|
-The primary goal of establishing the \i{canonical project structure} is to
-create an ecosystem of packages that can coexist, are easy to comprehend by
-both humans and tools, scale to complex, real-world requirements, and, last
-but not least, are pleasant to work with.
+The goal of establishing the \i{canonical project structure} is to create an
+ecosystem of packages that can coexist, are easy to comprehend by both humans
+and tools, scale to complex, real-world requirements, and, last but not least,
+are pleasant to work with.
-The canonical structure is primarily meant for packages \- a single library or
-program (or, sometimes, a collection of related programs) with a specific and
-well-defined function. While it may be less suitable for more elaborate,
+The canonical structure is primarily meant for a package \- a single library
+or program (or, sometimes, a collection of related programs) with a specific
+and well-defined function. While it may be less suitable for more elaborate,
multi-library/program \i{end-products} that are not meant to be packaged, most
-of the recommendations discussed below would still make sense. Oftentimes, you
+of the recommendations discussed below would still apply. Oftentimes, you
would start with a canonical project and expand from there. Note also that
while the discussion below focuses on C++, most of it applies equally to C
projects.
Projects created by the \l{bdep-new(1)} command have the canonical structure.
-The overall layout for executable (\c{-t\ exe}) and library (\c{-t\ lib})
+The overall layouts for executable (\c{-t\ exe}) and library (\c{-t\ lib})
projects are presented below.
\
@@ -2171,8 +2170,9 @@ lib<name>/
└── manifest
\
-The canonical structure for both project types is discussed in detail next.
-Below is a short summary of the key points:
+The canonical structure for both project types is discussed in detail in
+the following sections, with a short summary of the key points presented
+below.
\ul|
@@ -2189,20 +2189,19 @@ prefix, for example, \c{<libhello/hello.hxx>}.}|
\c{.hxx/.cxx} (\c{.mpp} or \c{.mxx} for module interfaces).}|
\li|\n\i{No special characters other than \c{_} and \c{-} in file names with
-\c{.} only used for extensions.}\n\n|
-
-|
+\c{.} only used for extensions.}\n\n||
Let's start with naming projects: by convention, library names start with the
\c{lib} prefix and using this prefix for non-library projects should be
-avoided. The \c{bdep-new} command warns about both violations.
+avoided. The \c{bdep-new} command warns about both violations. \N{The
+rationale for this naming convention will become evident later.}
The project's root directory should contain the root \c{buildfile} and package
\c{manifest} file. Other recommended top-level subdirectory names are
-\c{examples/} (for libraries it is normally a subproject like \c{tests/}, see
-below), \c{doc/}, and \c{etc/} (sample configurations, scripts, third-party
-contributions, etc). See also \l{b#intro-proj-struct Project Structure} in the
-build system manual for details on the build-related files (\c{buildfile}) and
+\c{examples/} (for libraries it is normally a subproject like \c{tests/}, as
+discussed below), \c{doc/}, and \c{etc/} (sample configurations, scripts,
+third-party contributions, etc). See also build system \l{b#intro-proj-struct
+Project Structure} for details on the build-related files (\c{buildfile}) and
subdirectories (\c{build/}).
@@ -2217,21 +2216,22 @@ inclusion scheme (discussed below) where each header is prefixed with its
project name. It also has a predictable name where users (and tools) can
expect to find our project's source code. Finally, this layout prevents
clutter in the project's root directory which usually contains various other
-files (like \c{README}, \c{LICENSE}) and directories (like \c{tests/}).
+files (like \c{README}, \c{LICENSE}) and directories (like \c{doc/},
+\c{tests/}, \c{examples/}).
\N|Another popular approach is to place public headers into the \c{include/}
subdirectory and source files as well as private headers into \c{src/}. The
-cited advantage of this layout is the predictable location that contains only
-the project's public headers (that is, its API). This can make the project
-easier to navigate and understand while harder to misuse, for example, by
-including a private header.
+cited advantage of this layout is the predictable location (\c{include/}) that
+contains only the project's public headers (that is, its API). This can make
+the project easier to navigate and understand while harder to misuse, for
+example, by including a private header.
However, this split layout is not without drawbacks:
\ul|
\li|Navigating between corresponding headers and sources is cumbersome. This
-affects editing, grep'ing as well as code browsing (for example, on GitHub).|
+affects editing, grep'ing, as well as code browsing (for example, on GitHub).|
\li|Implementing the canonical inclusion scheme would require an extra level
of subdirectories (for example, \c{include/libhello/} and \c{src/libhello/}),
@@ -2244,44 +2244,44 @@ build systems may not support this arrangement (for example, \c{build2} does
not currently support target groups with members in different directories).||
Also, the stated advantage of this layout \- separation of public headers from
-private \- is not as clear cut as it may seem. The common assumption of the
-split layout is that only headers from \c{include/} are installed and,
+private \- is not as clear-cut as it may seem at first. The common assumption
+of the split layout is that only headers from \c{include/} are installed and,
conversely, to use the headers in-place, all one has to do is add \c{-I}
pointing to \c{include/}. On the other hand, it is common for public headers
to include private, for example, to call an implementation-detail function in
inline or template code (note that the same applies to private modules
imported in public module interfaces). Which means such private (or, perhaps
more accurately, implementation-detail) headers have to be placed in
-the \c{include/} directory as well, perhaps into a subdirectory (such
+the \c{include/} directory as well, perhaps into a subdirectory (such as
\c{details/}) or with a file name suffix (such as \c{-impl}) to signal to the
user that they are still \"private\". Needless to say, keeping track of which
private headers can still stay in \c{src/} and which have to be moved to
-\c{include/} (and vice versa) is an arduous, error-prone task. As a result,
+\c{include/} (and vice versa) is a tedious, error-prone task. As a result,
practically, the split layout quickly degrades into the \"all headers in
\c{include/}\" arrangement which negates its main advantage.
-It is also not clear how the split layout will fit modularized projects. With
-modules, both the interface and implementation (including non-inline/template
-function definitions) can reside in the same file with a substantial number of
-C++ developers finding this arrangement appealing. If a project consists of
-only such single-file modules, then \c{include/} and \c{src/} are effectively
-become the same thing. In a sense, we already have this situation with
-header-only libraries except that in case of modules calling the directory
-\c{include/} would be an anachronism.
+It is also not clear how the split layout will translate to modularized
+projects. With modules, both the interface and implementation (including
+non-inline/template function definitions) can reside in the same file with a
+substantial number of C++ developers finding this arrangement appealing. If a
+project consists of only such single-file modules, then \c{include/} and
+\c{src/} effectively become the same thing. In a sense, we already have
+this situation with header-only libraries except that in the case of modules
+calling the directory \c{include/} would be an anachronism.
To summarize, the split directory arrangement offers little benefit over the
single directory layout, has a number of real drawbacks, and does not fit
modularized projects well. In practice, private headers are placed into
\c{include/}, often either in a subdirectory or with a special file name
-suffix, a mechanism that is readily available to the single directory layout.|
+suffix, a mechanism that is readily available in the single directory layout.|
All headers within a project should be included using the \c{<>} style
inclusion and contain the project name as a directory prefix. And all headers
means \i{all headers} \- public, private, or implementation detail, in
-executables and in libraries.
+executables or in libraries.
-As an example, let's say we've added \c{utility.hxx} to our \c{hello}
-executable project. This is how it should be included in \c{hello.cxx}:
+As an example, let's say we've added \c{utility.hxx} to our \c{hello} project.
+This is how it should be included in \c{hello.cxx}:
\
// #include \"utility.hxx\" // Wrong.
@@ -2291,8 +2291,8 @@ executable project. This is how it should be included in \c{hello.cxx}:
#include <hello/utility.hxx>
\
-Similarly, if we want to include \c{hello.hxx} from \c{libhello}, then it
-should look like this:
+Similarly, if we want to include \c{hello.hxx} from \c{libhello}, then the
+inclusion should look like this:
\
#include <libhello/hello.hxx>
@@ -2356,7 +2356,7 @@ details/hxx{*}: install = false
\
\N|If you are creating a \i{family of libraries} with the common name prefix,
-then it makes sense to use a nested source directory layout with the common
+then it may make sense to use a nested source directory layout with the common
top-level directory. As an example, let's say we have the \c{libstud-path} and
\c{libstud-url} libraries that belong to the same \c{libstud} family. Their
source subdirectory layouts could look like this:
@@ -2450,7 +2450,7 @@ worth the trouble.|
\h#proj-struct-src-content|Source Contents|
-Let's now move inside our source file: All macros defined by a project, such
+Let's now move inside our source files. All macros defined by a project, such
as include guards, version macro, etc., must all start with the project name
(including the \c{lib} prefix for libraries), for example
\c{LIBHELLO_VERSION}. Similarly, the library's namespace and module names all
namespace hello
{
}
\
-Executable project may use a namespace (in which case it is natural to name it
+An executable project may use a namespace (in which case it is natural to call it
after the project) and its (private) modules shouldn't be qualified with the
project name (in order not to clash with similarly named modules from the
corresponding library, if any).
@@ -2479,11 +2479,11 @@ from the executable into the library. It is natural to want to use the same
name \i{stem} (\c{hello} in our case) for both.
The above naming scheme (with the \c{lib} prefix present in some names but not
-the others) is carefully crafted to allow such library/executable pairs to
-coexist and be used together without too much friction. For example, both the
-library and executable can have a header called \c{utility.hxx} with the
-executable being able to include both and even get the \"merged\"
-functionality without extra effort (since they use the same namespace):
+others) is carefully crafted to allow such library/executable pairs to coexist
+and be used together without too much friction. For example, both the library
+and executable can have a header called \c{utility.hxx} with the executable
+being able to include both and even get the \"merged\" functionality without
+extra effort (since they use the same namespace):
\
// hello/hello.cxx
@@ -2508,7 +2508,7 @@ A canonical library project contains two special headers: \c{export.hxx} (or
\h#proj-struct-tests|Tests|
A source file that implements a module's unit tests should be placed next to
-that module's files and called with the module's name plus the \c{.test}
+that module's files and be called with the module's name plus the \c{.test}
second-level extension. If a module uses Testscript for unit testing, then the
corresponding file should be called with the module's name plus the
\c{.test.testscript} extension. For example:
@@ -2533,9 +2533,11 @@ library via its public API, just like the real users of the library would. The
\c{tests/} subdirectory is an unnamed subproject (in the build system terms)
which allows us to build and run tests against an installed version of the
library (see \l{b#intro-operations-test Testing} for more information on the
-contents of this directory). \N{The \c{build2} CI implementation will
-automatically perform this test is a library contains the \c{tests/}
-subproject. See \c{bbot} \l{bbot#arch-worker Worker Logic} for details.}
+contents of this directory).
+
+\N|The \c{build2} CI implementation will automatically perform the
+installation test if a project contains the \c{tests/} subproject. See
+\c{bbot} \l{bbot#arch-worker Worker Logic} for details.|
By default executable projects do not have the \c{tests/} subproject, instead
placing integration tests next to the source code (the \c{testscript} file;
@@ -2545,7 +2547,7 @@ libraries.
\N|By default projects created by \c{bdep-new} include support for
functional/integration testing but exclude support for unit testing. These
-default, however, can be overridden with \c{no-tests} and \c{unit-tests}
+defaults, however, can be overridden with \c{no-tests} and \c{unit-tests}
options, respectively. For example:
\
@@ -2568,13 +2570,13 @@ relevant parts.|
There are no \c{bin/} or \c{obj/} subdirectories: build output (object files,
libraries, executables, etc.) goes into a parallel directory structure (in case
of an out of source build) or next to the sources (in case of an in source
-build). See \l{b#intro-dirs-scopes Directories and Scopes} for details on
-in/out of source builds.
+build). See \l{b#intro-dirs-scopes Output Directories and Scopes} for details
+on in and out of source builds.
Projects managed with \l{bdep(1)} are always built out-of-source. However, by
default, the source directory is configured as \i{forwarded} to one of the
out-of-source builds. This has two effects: we can run the build system driver
-\l{b(1)} directly in the source directory and certain \"interesting\" output
+\l{b(1)} directly in the source directory and certain \"interesting\" targets
(such as executables, documentation, test results, etc) will be automatically
\i{backlinked} to the source directory (see \l{b#intro-operations-config
Configuration} for details on forwarded configurations). The following listing
@@ -2601,7 +2603,7 @@ cross-platform manner. The major drawback of this arrangement is the need for
unique executable names which is especially constraining when writing tests
where it is convenient to call the executable just \c{driver} or \c{test}.
-In \c{build2} there is not such restrictions and all executables can run
+In \c{build2} there is no such restriction and all executables can run
\i{in-place}. This is achieved with \c{rpath} which is emulated with DLL
assemblies on Windows.|