Commit 976660b

Add Local Regression Tests topic (#221)

* Add Local Regression Test topic
* Correction to navigation
* Actually correct the nav issue
* Updated Regression Dashboard links

1 parent fca0a75 commit 976660b

File tree

3 files changed (+168, -1 lines)


contributor-guide/modules/ROOT/nav.adoc

Lines changed: 1 addition & 0 deletions

@@ -37,6 +37,7 @@ Official repository: https://github.com/boostorg/website-v2-docs
 ** xref:testing/intro.adoc[]
 ** xref:testing/test-policy.adoc[]
 ** xref:testing/boost-test-matrix.adoc[]
+** xref:testing/regression-tests.adoc[]
 ** xref:testing/writing-tests.adoc[]
 ** xref:testing/sanitizers.adoc[]
 ** xref:testing/continuous-integration.adoc[]


contributor-guide/modules/ROOT/pages/testing/boost-test-matrix.adoc

Lines changed: 10 additions & 1 deletion

@@ -13,10 +13,19 @@ The Boost Test Matrix is an automated testing system that runs tests on Boost li
 
 The Test Matrix includes tests run on different operating systems (Windows, Linux, macOS) and with various compilers (such as GCC, Clang, MSVC). This diversity helps in catching issues that might only appear in specific environments.
 
+For information on running regression tests locally, refer to xref:testing/regression-tests.adoc[].
+
 == Regression Dashboard
 
 The results of library tests are published on the
-http://www.boost.org/development/tests/master/developer/summary.html[Boost Regression Testing Dashboard].
+*Boost Regression Testing Dashboard*:
+
+[cols="1,1,1",options="header",stripes=even,frame=none]
+|===
+| *Version* | *Results* | *Issues*
+| Develop branch | https://regression.boost.io/develop/developer/summary.html[Summary] | https://regression.boost.io/develop/developer/issues.html[Unresolved Issues]
+| Master branch | https://regression.boost.io/master/developer/summary.html[Summary] | https://regression.boost.io/master/developer/issues.html[Unresolved Issues]
+|===
 
 This dashboard is publicly accessible and provides detailed information about the test results for most libraries.


contributor-guide/modules/ROOT/pages/testing/regression-tests.adoc (new file)

Lines changed: 157 additions & 0 deletions

@@ -0,0 +1,157 @@
////
Copyright (c) 2024 The C++ Alliance, Inc. (https://cppalliance.org)

Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

Official repository: https://github.com/boostorg/website-v2-docs
////

= Local Regression Tests
:navtitle: Local Regression Tests

This section describes how to run regression tests on your local machine by downloading and running a Python command-line tool.

For information on the regression tests run on all libraries, refer to xref:testing/boost-test-matrix.adoc[].

== Running Regression Tests Locally

It's easy to run regression tests on your Boost clone.

To run a library's regression tests, run Boost's `b2` utility from the `<boost-root>/libs/<library>/test` directory. To run a single test, specify its name (as found in `<boost-root>/libs/<library>/test/Jamfile.v2`) on the command line.

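As a rough sketch, a minimal session might look like the following. The library name `filesystem`, the test name `path_test`, and the `gcc` toolset are placeholders; substitute the library, test, and toolset you are actually working with:

```
cd <boost-root>/libs/filesystem/test   # a library's test directory (placeholder)
b2 toolset=gcc                         # build and run all of this library's tests
b2 toolset=gcc path_test               # build and run a single test named in Jamfile.v2
```
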
See the https://boost.sourceforge.net/doc/html/jam/building.html[Building BJam] guide for help building or downloading `bjam` for your platform, and navigating your Boost distribution.

To run every library's regression tests, run `b2` from the `<boost-root>/status` directory.

To run Boost.Build's regression tests, run `python test_all.py` from the `<boost-root>/tools/build/v2/test` directory.

== The Run.py Tool

This tool runs all Boost regression tests and reports the results back to the Boost community.

=== Requirements

* Python (2.3 ≤ version < 3.0)
* Git (recent version)
* At least 5 gigabytes of disk space per compiler to be tested

=== Step by Step Instructions

. Create a new directory for the branch you want to test.
. Download the `run.py` script into that directory:
.. Open the `run.py` script in your browser.
.. Click the *Raw* button.
.. Save the file as `run.py` in the directory you just created.

The syntax to run the tool is `python run.py <options>... [<commands>]`, with the following three _required_ options, plus any others you wish to employ (for a full list, refer to <<Commands and Options>>):

* `--runner=`: Your choice of name that identifies your results in the reports.
+
If you run regressions in interleaved batches with different sets of compilers (for example, Intel in the morning and GCC at the end of the day), you need to provide a different runner ID for each of these runs, such as "your_name-intel" and "your_name-gcc".
+
The limitations of the report format impose a direct dependency between the number of compilers you are testing with and the amount of space available for your runner ID. If you are running regressions for a single compiler, choose an ID short enough that it does not significantly disturb the report layout. You can also use spaces in the runner ID to allow the reports to wrap the name to fit.

* `--toolsets=`: The toolsets you want to test with.
+
If the `--toolsets` option is not provided, the script will try to use the platform's default toolset (`gcc` for most Unix-based systems).
+
For supported toolsets, refer to xref:user-guide:ROOT:header-organization-compilation.adoc#toolset[toolset].

* `--tag=`: The tag you want to test. The only tags that currently make sense are `develop` and `master`.

For example:

```
python run.py --runner=Metacomm --toolsets=gcc-4.2.1,msvc-8.0 --tag=develop
```

[NOTE]
====
If you are behind a firewall/proxy server, everything should still "just work". In the rare cases when it doesn't, you can explicitly specify the proxy server parameters through the `--proxy` option. For example:

```
python run.py ... --proxy=http://www.someproxy.com:3128
```
====

=== Commands and Options

The following commands are available: `cleanup`, `collect-logs`, `get-source`, `get-tools`, `patch`, `regression`, `setup`, `show-revision`, `test`, `test-boost-build`, `test-clean`, `test-process`, `test-run`, `update-source`, and `upload-logs`.

The following options are available:

[cols="1,3",options="header",stripes=even,frame=none]
|===
| *Option* | *Description*
| `-h`, `--help` | show this help message and exit
| `--runner=RUNNER` | runner ID (e.g. 'Metacomm')
| `--comment=COMMENT` | an HTML comment file to be inserted in the reports
| `--tag=TAG` | the tag for the results
| `--toolsets=TOOLSETS` | comma-separated list of toolsets to test with
| `--libraries=LIBRARIES` | comma-separated list of libraries to test
| `--incremental` | do an incremental run (do not remove previous binaries). Refer to <<Incremental Runs>>.
| `--timeout=TIMEOUT` | specifies the timeout, in minutes, for a single test run/compilation
| `--bjam-options=BJAM_OPTIONS` | options to pass to the regression test
| `--bjam-toolset=BJAM_TOOLSET` | bootstrap toolset for the `bjam` executable
| `--pjl-toolset=PJL_TOOLSET` | bootstrap toolset for the `process_jam_log` executable
| `--platform=PLATFORM` |
| `--user=USER` | Boost SVN user ID
| `--local=LOCAL` | the name of the boost tarball
| `--force-update` | do an SVN update (if applicable) instead of a clean checkout, even when performing a full run
| `--have-source` | do neither a tarball download nor an SVN update; used primarily for testing script changes
| `--ftp=FTP` | FTP URL to upload results to
| `--proxy=PROXY` | HTTP proxy server address and port (e.g. 'http://www.someproxy.com:3128')
| `--ftp-proxy=FTP_PROXY` | FTP proxy server (e.g. 'ftpproxy')
| `--dart-server=DART_SERVER` | the Dart server to send results to
| `--debug-level=DEBUG_LEVEL` | debugging level; controls the amount of debugging output printed
| `--send-bjam-log` | send the full `bjam` log of the regression run
| `--mail=MAIL` | email address to send run notification to
| `--smtp-login=SMTP_LOGIN` | SMTP server address/login information, in the following form: `<user>:<password>@<host>[:<port>]`
| `--skip-tests` | do not run `bjam`; used for testing script changes
|===

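As a hypothetical illustration, a fuller invocation that combines several of the options above might look like this; the runner name, toolset version, and comment file are placeholders:

```
python run.py --runner=your_name-gcc --toolsets=gcc-4.2.1 --tag=develop --incremental --comment=comment.html
```
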
=== Output

The regression run procedure will:

. Download the most recent regression scripts.
. Download the designated testing tool sources, including Boost.Jam, Boost.Build, and the various regression programs.
. Download the most recent Boost sources from the Boost Git repository into the `boost` subdirectory.
. Build `b2` and `process_jam_log` if needed (`process_jam_log` is a utility that extracts the test results from the log file produced by Boost.Build).
. Run the regression tests, then process and collect the results.
. Upload the results to a common FTP server.

A continuously running report-merger process combines all submitted test runs and publishes them at various locations.

=== Advanced Use

==== Providing Detailed Information about your Environment

Once you have your regression results displayed in the Boost-wide reports, you may consider providing a bit more information about yourself and your test environment. This additional information will be presented in the reports on a page associated with your runner ID.

By default, the page's content is just a single line coming from the `comment.html` file in your `run.py` directory, specifying the tested platform. You can put online a more detailed description of your environment, such as your hardware configuration, compiler builds, and test schedule, by altering the file's content. Also, consider providing your name and email address for cases where Boost developers have questions specific to your particular set of results.

==== Incremental Runs

By default, the script runs in what is known as full mode: on each `run.py` invocation, all the files that were left in place by the previous run, including the binaries for the successfully built tests and libraries, are deleted, and everything is rebuilt from scratch. By contrast, in incremental mode the existing binaries are left intact, and only the tests and libraries whose source files have changed since the previous run are re-built and re-tested.

The main advantage of incremental runs is a significantly shorter turnaround time, but unfortunately they don't always produce reliable results. Some types of changes to the codebase (changes to the `b2` testing subsystem in particular) often require switching to full mode for one cycle in order to produce trustworthy reports.

Run `run.py` in incremental mode by passing it the identically named command-line flag: `python run.py ... --incremental`.

As a general guideline, if you can afford it, testing in full mode is preferable.

==== Patching Boost Sources

You might encounter an occasional need to make local modifications to the Boost codebase before running the tests, without disturbing the automatic nature of the regression process. To implement this under `regression.py`:

. Codify applying the desired modifications to the sources located in the `./boost_root` subdirectory in a single executable script named `patch_boost` (`patch_boost.bat` on Windows).
. Place the script in the `run.py` directory.

The driver will check for the existence of the `patch_boost` script and, if found, execute it after obtaining the Boost sources.

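As a rough sketch (assuming a Unix-like system and a hypothetical pre-made patch file named `local-changes.patch` placed next to `run.py`), a minimal `patch_boost` script could look like this:

```
#!/bin/sh
# Minimal patch_boost sketch: apply a local patch to the Boost sources that
# the driver has checked out into ./boost_root, before the tests are run.
# local-changes.patch is a placeholder for whatever modification you need.
set -e
patch -d boost_root -p1 < local-changes.patch
```

Remember to mark the script executable (for example, `chmod +x patch_boost`) so that the driver can invoke it.
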
== Feedback

Send all comments/suggestions regarding this document and the testing procedure itself to the https://lists.boost.org/mailman/listinfo.cgi/boost[Boost developers' mailing list].

== See Also

* xref:testing/boost-test-matrix.adoc[]

