Development guidelines

Introduction

PILZ is a member of the ROS-Industrial consortium and, as such, is committed to "[...] (extend) the advanced capabilities of the Robot Operating System (ROS) software to manufacturing" (ROS-Industrial web page). To achieve this goal, we aim to develop software of high quality.
To ensure that our software fulfills our self-imposed high standards, we have developed a number of guidelines which are presented in this document.

We are aware that our guidelines might at times seem overly strict. However, please know that we welcome every type of contribution. We are always willing and eager to help get changes upstream.

Please note: This document is a work in progress, in other words, things might change as time goes on. If you think that something is missing or incorrect, please feel free to open a pull request proposing your changes.

Default development branch

Our current default development branch is melodic-devel (see GitHub). All new features are developed for ROS Melodic Morenia, which primarily targets Ubuntu Bionic (18.04) (for more information see the ROS wiki).
For backwards compatibility reasons we still support ROS Kinetic Kame in combination with Ubuntu Xenial (16.04). In order to support ROS Kinetic Kame, we proceed as follows:

  • First, we develop the new feature for ROS Melodic Morenia and merge the new feature into the melodic-devel branch.
  • Second, we cherry-pick the new feature from the melodic-devel branch and merge it into the kinetic-devel branch.
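For illustration, the cherry-pick step could look roughly like this (the commit hash is only a placeholder):

git checkout kinetic-devel
git cherry-pick <sha-of-the-feature-commit-from-melodic-devel>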

Review process

Before any pull request is approved, a developer who was not involved in the development of the feature has to review it.

Our goal is to automate the review process as far as possible by moving checks into the continuous integration process. However, some things are currently manual or simply cannot be automated.
In general, the reviewer has to check/review the following:

  • Specification:
    • The feature and its changes to the system are comprehensively specified.
    • All requirements are exhaustively tested by one or more tests.
    • All specifications are linked against their corresponding tests (see section: How-to link a requirement against a test)
  • User interface:
    • The user interface is easy to understand and use.
    • The user interface is comprehensively documented.
    • If an existing user interface is broken by the new feature, then the interface version has to be adapted according to the nature of the change.
  • System architecture:
    • The system architecture complies with the clean architecture guidelines. (see "Clean Architecture" by Robert C. Martin)
    • The architecture diagrams are up to date and easily understandable.
  • Code:
    • The code has to comply with the clean code rules (see "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert C. Martin).
  • Tests:
    • The tests are easy to understand (e.g. each test only checks one aspect of a component).
    • The tests are comprehensively documented (see section: How-to document tests)

Continuous integration (CI)

All pull requests (PRs) have to be approved by the continuous integration tool Travis.

How-to activate Travis for forked repositories

If you are an external developer and want to create a pull request, you can do so by forking one of our repositories, making your changes, and then creating a pull request against our repository.
To check that the continuous integration runs without problems on your forked repository, you have to activate Travis for your repository. For more information on how to do that have a look at the Travis tutorials.

How-to run the CI processes locally

If you have a look at our Travis file, you see that our CI is based on the ROS-Industrial CI configurations for ROS. In some cases, for example when errors only appear on Travis, it might be advantageous to do exactly what Travis does locally (at least in terms of building the packages and executing the tests). To achieve this, have a look at the explanations given in the ros-industrial repository.

Code style guide

We follow the ROS coding style guide. Currently, the styling of the code is not automatically checked by the CI, so please make sure that your code complies with it.

How-to format code automatically

If you want to automatically format your code using the ROS style guide, then see the ROS wiki.
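For example, assuming a ROS-style .clang-format file is present in your package or workspace (the ROS wiki describes how to obtain one), you can format a C++ file in place with:

clang-format -i -style=file src/my_file.cpp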

Coverage

Only PRs with 100% line and function coverage are accepted. Currently, the coverage check is only partially integrated into the CI. Therefore, everybody has to ensure that their PRs fulfill the coverage requirement; otherwise, the reviewer has to reject the PR.

How-to check coverage in C++ and Python

To manually run the coverage program for a certain package, the following steps need to be performed:

  • Delete the build and devel folders to ensure a clean build.
  • Run catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug to build all packages first. This ensures that all necessary dependencies are built.
  • Run catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug package_name_coverage to build the concrete package and to execute all tests of the package.
  • Finally, open the HTML page mentioned at the end of the console output. It shows a summary of the coverage data collected during the execution of the tests.

Note:
Make sure that all tests are executed successfully; otherwise, the coverage value does not reflect the correct coverage of your tests.
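For a hypothetical package named my_package, the complete sequence could look like this:

cd ~/catkin_ws
rm -rf build devel
catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug
catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug my_package_coverage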

Specification

If new properties are added to the system or if old ones have to be changed, the specification of the system needs to be updated. The specification helps everyone quickly gain an understanding of what the system can and cannot do.
Currently, our requirements are situated in a file called spec.dox. To unify and simplify the creation of requirements, and the linking of requirements against tests, we introduced new doxygen commands which are explained in the following sub-sections.

Please note:
The specification of a package is generated while generating the documentation. For more information on how to generate the documentation for a package see section: How-to generate documentation for a package

How-to define and write a requirement

To define a requirement the following doxygen command is used:

@definereq{my_requirements_name}
Here comes the specification of my super cool new feature I'm adding.

Some remarks on how to formulate a requirement:
In order to keep the requirement simple and understandable, use short and plain sentences. If it is difficult to deduce why the system needs to fulfill a certain requirement, please briefly state the reason. Also, do not forget to define how the system behaves if something goes wrong (e.g. errors) and for edge cases (e.g. lowest/highest possible number)!
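A hypothetical example (the requirement name and its content are purely illustrative) could look like this:

@definereq{reject_goal_outside_workspace}
If the user commands a goal outside the allowed workspace, the system does not execute the motion and returns an error code stating the reason. This prevents the robot from leaving its safe operating range.

Note how the requirement uses short sentences, states the error behavior and gives the reason for the restriction.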

How-to link a requirement against a test

To state that a test checks a certain requirement or part of a requirement, the following doxygen command is used:

@tests{my_requirement_name, A one sentence long short description of the test (without comma!)}

Note:

  • A requirement is allowed to be tested by more than one test. In other words, a requirement can be linked against as many tests as needed.
  • A test is allowed to test more than one requirement. In other words, you are allowed to use as many @tests commands in the test description as needed.
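A minimal sketch of how such a link could look in a C++ test (the requirement name, test name and function under test are purely illustrative):

#include <gtest/gtest.h>

// Hypothetical function under test, only defined here to keep the example self-contained.
static bool isGoalInsideWorkspace(double goal) { return goal >= 0.0 && goal <= 1.0; }

/**
 * @tests{reject_goal_outside_workspace, Checks that a goal outside the workspace is rejected}
 */
TEST(GoalValidationTest, rejectsGoalOutsideWorkspace)
{
  EXPECT_FALSE(isGoalInsideWorkspace(2.0));
}

Remember that the short description inside the @tests command must not contain a comma.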

Documentation

To document the code, Doxygen is used for C++ code and Sphinx for Python code.

How-to generate documentation for a package

To generate the documentation for a package consisting of C++ and/or Python code, a tool named rosdoc_lite is used.
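A minimal invocation could look like this (assuming rosdoc_lite is installed, e.g. via the ros-melodic-rosdoc-lite package; the output directory is only an example):

rosdoc_lite -o doc /path/to/my_package

The generated documentation, including the specification, can then be found in the given output directory.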

Testing

How-to document tests

The first step towards good test documentation is choosing an expressive name. A good name can sometimes express everything the test wants to check. If in doubt, don't hesitate to ask someone else to ensure that you are using an appropriate name. It is often best to choose the final test name after the test passes, based on the actual content of the test.

Often it is also advisable to additionally provide a one-sentence summary of the test intention. If it is difficult to summarize the test objective in one sentence, take a step back and ask yourself if the test is too complex or tries to test too much. The same holds for the name of the test.

For proper test documentation, please do not forget to link the test against the corresponding requirement. For how to do that, see the requirement linking section.

Obtaining the logging output

For logging we rely heavily on the ROS logging framework (for C++ see the roscpp - ROS wiki, for Python see the rospy - ROS wiki). However, when it comes to testing, obtaining the logging output is not always straightforward.

How-to obtain logging output for rostests

To obtain the logging output for rostests, we suggest:

  • first building the system and the test you are interested in via catkin_make, and then
  • executing rostest my_package_name my_test_filename --text. (The --text flag sends the logging output to the screen as described in the ROS wiki.)

Sometimes, e.g. when an exception is thrown, you may not see the complete logging output. To avoid this issue you can force line buffering for the ROS loggers by setting the corresponding environment variable to 1:

export ROSCONSOLE_STDOUT_LINE_BUFFERED=1

For more information see the ROS wiki.

How-to change logging level for a package

Sometimes it is necessary to change the logging level for some or all packages. Both kinds of modification can be done by providing a logging config file. For details on how to do this, see the ROS wiki.
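A minimal sketch of such a config file (the package name is only an example) could look like this:

# custom_rosconsole.conf
log4cxx.logger.ros=INFO
log4cxx.logger.ros.my_package=DEBUG

It can then be activated by pointing the ROSCONSOLE_CONFIG_FILE environment variable at it:

export ROSCONSOLE_CONFIG_FILE=/path/to/custom_rosconsole.conf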

How-to write proper tests

Although it sounds like a rather easy job, writing good tests is quite hard. During development, we came up with a number of guidelines which (we think) improve the quality of tests.

Advice: Each test should only test one objective

Testing more than one objective makes a test more difficult to understand. Why?

  • Each additional objective increases the number of requirements which have to be met by the test in order to test all objectives.
  • The order in which the objectives are tested might become important, although this should have no relevance.
  • Each additional objective increases the number of lines of code.
  • If you do not believe us, then maybe the Google team can convince you: see the Google testing blog.

Advice: Keep tests as short as possible

A "small" test increases the understandability and decreases the hurdle to refactor a test if needed.

Advice: Use gtests instead of rostests for unit testing

We discovered that it often seems easier to use a rostest to perform an actual unit test. However, the decision to use a rostest comes with some heavy costs:

  • For each rostest a roscore (+ ROS nodes) is started at the beginning of each test and shut down at the end of each test. This consumes a lot of time. If you have only one or two tests, everything is fine. However, if you add more and more tests, the time needed to start and shut down your tests will become a significant factor. This, in turn, decreases the usefulness of the tests because the time needed to get feedback from the tests is an important factor.
  • Using a rostest means that you use the standard ROS mechanisms (like topics, services, etc.) for communication. This leads to threading which increases the complexity of your tests and, therefore, increases the chance that your tests will be flaky.

If you reach a point in your development where you want to use a rostest to implement a unit test, try to take a step back and ask yourself: "Why can I not use a gtest for a simple unit test?" Most of the time you will discover that you neglected the testability of your code. If you have a certain problem which makes you believe you need a rostest, then please have a look at our unit tests; maybe they can give you some inspiration on how to solve such problems.

Advice: Avoid threading whenever possible

Whenever possible we suggest that you avoid threading. If you have an integration or acceptance test, it cannot be avoided. However, as long as you are dealing with unit tests, we strongly recommend that you use pure gtests (in combination with gmock if necessary). Gtests not only run faster but they allow you to avoid threading. Threading errors can be quite tricky because they are difficult to understand and track (especially if they are sporadic).
If you have to write a test which needs threading, have a look at section: How-to deal with threads in tests

How-to deal with threads in tests

As soon as you are dealing with rostests you are dealing with threads. Even if you are not directly using threads, the ROS communication system (topics, services, actions, etc.) does. In our experience, tests with more than one thread are often much more flaky.
To avoid flaky tests, in case of threading, some important rules need to be kept in mind.

Rule: Use AsyncSpinner for service calls in test

If you are somehow using/calling services in your tests, don't forget to start an AsyncSpinner. Otherwise, your service calls will not be processed and your test gets stuck. For more information about callbacks and spinning have a look at the ROS wiki.
You can start an AsyncSpinner by adding the following lines to your test main function:

ros::AsyncSpinner spinner{1};
spinner.start();
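A minimal sketch of a complete test main function using the AsyncSpinner (the node name is only an example) could look like this:

#include <gtest/gtest.h>
#include <ros/ros.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "my_service_test");
  testing::InitGoogleTest(&argc, argv);

  // Start an AsyncSpinner so that service (and other) callbacks are processed
  // while the tests are running.
  ros::AsyncSpinner spinner{1};
  spinner.start();

  return RUN_ALL_TESTS();
}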

Rule: Never use sleep

Whatever you do, never ever use sleep in a test to wait for a certain event. Why not?
Let's assume that you state a certain sleep duration to wait for a certain event in your test. Let's even assume that the test always runs successfully on your particular hardware. Who guarantees that a different hardware setup (like on the CI servers) does not lead to a different timing behavior and, therefore, to potential errors?
These kinds of errors are particularly hard to understand and track because they are normally highly sporadic and you might not even be able to reproduce them.

We often dealt with these kinds of problems and came up with a helper class called AsyncTest. The AsyncTest class allows you to easily signal that a certain event (like a topic or service call) has happened and, of course, to wait for a certain signal. To see how this helper class can be used, have a look at our documentation.

Rule: Wait until nodes, topics and services are up and running

When using a rostest, you have to ensure that your system is completely up and running before you start your actual tests. It normally takes a while before all nodes and their topics, services, actions, etc. are up and running. If you do not account for this, you might encounter (sporadic) errors. Please be aware that these errors are especially difficult to understand and track because they might depend on the underlying hardware setup (e.g. slow vs. fast PC).

We implemented some helper functions which might be useful:

  • wait until a certain node has started -> see function waitForNode
  • wait until a service is up and running -> see function waitForService
  • wait until a topic is up and running -> see function waitForTopic
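If you do not want to rely on these helpers, roscpp itself offers a blocking wait for services. A minimal sketch (the service name and timeout are only examples), e.g. inside a test fixture of a rostest whose main function already called ros::init:

#include <gtest/gtest.h>
#include <ros/ros.h>

class MyServiceTest : public testing::Test
{
protected:
  void SetUp() override
  {
    // Block until the service is advertised, but give up after 10 seconds.
    ASSERT_TRUE(ros::service::waitForService("/my_package/my_service", ros::Duration(10.0)));
  }
};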

Gtest command line flags

Gtests are ordinary executables which come with a number of built-in command line flags. These flags can be quite helpful and we strongly encourage you to make use of them. You can find a description of the different command line flags in the googletest GitHub repository.

Note:
Given that you are building with catkin_make, you can usually find the gtest executables in your workspace under /path_to_my_ros_workspace/devel/lib/my_package/.
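For example, to list all tests contained in such an executable or to run it with a name filter, you can call it directly with the corresponding flags (the executable name is only an example):

/path_to_my_ros_workspace/devel/lib/my_package/my_test --gtest_list_tests
/path_to_my_ros_workspace/devel/lib/my_package/my_test --gtest_filter=*my_awesome_test*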

How-to run a particular test of a test suite

If you only want to run a certain test of a test suite, set the environment variable GTEST_FILTER. For example, to run only the test my_awesome_test, set the environment variable in your terminal as follows:

export GTEST_FILTER=*my_awesome_test*

If you are done and want to run all your tests again, don't forget to reset the GTEST_FILTER variable by setting:

export GTEST_FILTER=*

FAQ

  • Q: How can I install missing system dependencies for a package?
    A: To install the missing system dependencies for a package you can use:
    rosdep install --from-paths WORKSPACE --ignore-src --rosdistro=ROSDISTRO -y
    Replace WORKSPACE with the path to your ROS workspace, e.g. ~/catkin_ws/, and ROSDISTRO with e.g. melodic (or kinetic, respectively).
    Instead of -y which confirms all actions automatically, you can also use --simulate which shows you what rosdep wants to do. For more information about rosdep see the ROS wiki and also docs.ros.org.

  • Q: How can I execute a sanitizer locally?
    A: Steps which need to be performed:

    1. Delete your old build and devel folders to ensure a clean build.
    2. Build everything with the respective compiler flags:
    • For the address sanitizer execute:
      catkin_make -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS_DEBUG="-g -fno-omit-frame-pointer -fsanitize=address" -DCMAKE_EXE_LINKER_FLAGS_DEBUG="-g -fsanitize=address"
    • For the undefined behavior sanitizer execute:
      catkin_make -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS_DEBUG="-g -fno-omit-frame-pointer -fsanitize=undefined -fno-sanitize-recover=all" -DCMAKE_EXE_LINKER_FLAGS_DEBUG="-g -fsanitize=undefined"
    3. Build your concrete test with the same additional compiler flags as stated in the previous step.
    4. Execute your test. (Please note: This step is necessary because otherwise the sanitizer is not able to detect errors.)
  • Q: What can I do if my tests use the sources from /opt/ros/... instead of /catkin_ws/src?
    A: If $ROS_PACKAGE_PATH and $PYTHONPATH are not the problem, try:

    1. Run catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug
    2. Execute touch ~/catkin_ws/build/CMakeCache.txt or catkin_make install
    3. Run catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug package_name_coverage
  • Q: Why do my tests produce import errors for Python packages?
    A: This problem can be avoided by including the path in the PYTHONPATH environment variable:

    • e.g. with export PYTHONPATH="${PYTHONPATH}:/usr/lib/python2.7/dist-packages"