Writing Avocado Tests

We are going to write an Avocado test in Python, inheriting from avocado.Test. This makes it a so-called instrumented test.

Basic example

Let’s re-create an old-time favorite, sleeptest [1]. It is so simple that it does nothing besides sleeping for a while:

import time

from avocado import Test

class SleepTest(Test):

    def test(self):
        sleep_length = self.params.get('sleep_length', default=1)
        self.log.debug("Sleeping for %.2f seconds", sleep_length)
        time.sleep(sleep_length)

This is about the simplest test you can write for Avocado, while still leveraging its API power.

What is an Avocado Test

As can be seen in the example above, an Avocado test is a method that starts with test in a class that inherits from avocado.Test.

Multiple tests and naming conventions

You can have multiple tests in a single class.

To do so, just give the methods names that start with test, say test_foo, test_bar and so on. We recommend you follow this naming style, as defined in the PEP 8 Function Names section.

For the class name, you can pick any name you like, but we also recommend that it follows the CamelCase convention, also known as CapWords, defined in the PEP 8 document under Class Names.
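
A minimal sketch combining both conventions (the class and method names here are purely illustrative) could look like this:

from avocado import Test


class MultipleTests(Test):

    """
    Hypothetical example holding more than one test method.
    """

    def test_foo(self):
        self.log.debug("checking the 'foo' behavior")
        self.assertTrue(True)

    def test_bar(self):
        self.log.debug("checking the 'bar' behavior")
        self.assertEqual(2 + 2, 4)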

Convenience Attributes

Note that the test class provides you with a number of convenience attributes:

  • A ready-to-use logging mechanism for your test, which can be accessed by means of self.log. It lets you log debug, info, error and warning messages.
  • A parameter passing (and fetching) system that can be accessed by means of self.params. This is hooked to the Multiplexer, about which you can find more information at Test variants - Mux.

Saving test generated (custom) data

Each test instance provides a so-called whiteboard. It can be accessed through self.whiteboard. This whiteboard is simply a string that will be automatically saved to the test results (as long as the output format supports it). If you choose to save binary data to the whiteboard, it’s your responsibility to encode it first (base64 is the obvious choice).
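
For instance, a minimal sketch of encoding binary content before placing it on the whiteboard (the class name and payload here are purely illustrative) could look like this:

import base64

from avocado import Test


class BinaryWhiteboard(Test):

    def test(self):
        # hypothetical binary payload generated during the test
        payload = b'\x00\x01\x02\x03'
        # the whiteboard expects text, so encode the bytes first
        self.whiteboard = base64.b64encode(payload).decode('ascii')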

Building on the previously demonstrated sleeptest, suppose that you want to save the sleep length to be used by some other script or data analysis tool:

def test(self):
    sleep_length = self.params.get('sleep_length', default=1)
    self.log.debug("Sleeping for %.2f seconds", sleep_length)
    time.sleep(sleep_length)
    self.whiteboard = "%.2f" % sleep_length

The whiteboard can and should be exposed by files generated by the available test result plugins. The results.json file already includes the whiteboard for each test. Additionally, we’ll save a raw copy of the whiteboard contents on a file named whiteboard, in the same level as the results.json file, for your convenience (maybe you want to use the result of a benchmark directly with your custom made scripts to analyze that particular benchmark result).

Accessing test parameters

Each test has a set of parameters that can be accessed through self.params.get($name, $path=None, $default=None). Avocado finds and populates self.params with all parameters you define on a Multiplex Config file (see Test variants - Mux). As an example, consider the following multiplex file for sleeptest:

sleeptest:
    type: "builtin"
    length: !mux
        short:
            sleep_length: 0.5
        medium:
            sleep_length: 1
        long:
            sleep_length: 5

When you run this example with avocado run $test --mux-yaml $file.yaml, three variants are executed and the content is injected into the /run namespace (see Test variants - Mux for details). Every variant contains the variables “type” and “sleep_length”. To obtain the current value, you need the name (“sleep_length”) and its path. The path differs for each variant, so you need to use the most suitable portion of it; in this example /run/sleeptest/length/*, or perhaps just sleeptest/*, might be enough. It depends on what your setup looks like.

The default value is optional, but always keep in mind to handle it nicely. Someone might execute your test with different params, or without any params at all, and it should still work fine.

So a complete example of how to access “sleep_length” would be:

self.params.get("sleep_length", "/*/sleeptest/*", 1)

There is a way to make this even simpler: it’s possible to define a resolution order, so that for simple queries you can simply omit the path:

self.params.get("sleep_length", None, 1)
self.params.get("sleep_length", '*', 1)
self.params.get("sleep_length", default=1)

One should always try to avoid param clashes (multiple matching keys for a given path with different origins). If that’s not possible (e.g. when you use multiple yaml files), you can adjust the default paths via --mux-path. This option slices the params and iterates through the paths one by one: when there is a match in the first slice, it is returned without trying the other slices. Note that relative queries only match within the --mux-path slices.

There are many ways to use paths to separate clashing params or just to make it clearer what you are querying for. Usually in tests the usage of ‘*’ is sufficient and namespacing is not necessary, but it helps make advanced usage clearer and easier to follow.

When thinking of the path, always think about your users. It’s common for them to extend the default config with additional variants, or to combine it with different ones, to generate just the scenarios they need. They might simply inject the values elsewhere (e.g. /run/sleeptest => /upstream/sleeptest), or they might merge another clashing file into the default path, which won’t generate a clash but would return their values instead. In those cases you need to make the path more specific (e.g. ‘*’ => sleeptest/*).
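
For instance, to pin the earlier query to the sleeptest branch only, one could write (a sketch reusing the parameter from the sleeptest example):

self.params.get("sleep_length", "sleeptest/*", 1)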

More details on that are in Test variants - Mux.

Using a multiplex file

You may use the Avocado runner with a multiplex file to provide params and matrix generation for sleeptest, like this:

$ avocado run sleeptest.py --mux-yaml examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID     : d565e8dec576d6040f894841f32a836c751f968f
JOB LOG    : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
TESTS      : 3
 (1/3) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
 (2/3) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
 (3/3) sleeptest.py:SleepTest.test;3: PASS (5.00 s)
RESULTS    : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 6.50 s
JOB HTML   : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html

The --mux-yaml option accepts either just $FILE_LOCATION or $INJECT_TO:$FILE_LOCATION. As explained in Test variants - Mux, without any path the content gets injected into /run, so that it lands in the default relative path location. The $INJECT_TO can be either a relative path, in which case the content is injected into the /run/$INJECT_TO location, or an absolute path (starting with '/'), in which case it is injected directly into the specified path and it’s up to the test/framework developer to get the value from that location (using a path in the query, or adding the path to mux-path). To understand the difference, execute these commands:

$ avocado multiplex -t -m examples/tests/sleeptest.py.data/sleeptest.yaml
$ avocado multiplex -t -m duration:examples/tests/sleeptest.py.data/sleeptest.yaml
$ avocado multiplex -t -m /my/location:examples/tests/sleeptest.py.data/sleeptest.yaml

Note that, as your multiplex file specifies all parameters for sleeptest, you can’t leave the test ID empty:

$ scripts/avocado run --mux-yaml examples/tests/sleeptest/sleeptest.yaml
Empty test ID. A test path or alias must be provided

You can also execute multiple tests with the same multiplex file:

$ avocado run sleeptest.py synctest.py --mux-yaml examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID     : cd20fc8d1714da6d4791c19322374686da68c45c
JOB LOG    : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/job.log
TESTS      : 8
 (1/8) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
 (2/8) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
 (3/8) sleeptest.py:SleepTest.test;3: PASS (5.01 s)
 (4/8) sleeptest.py:SleepTest.test;4: PASS (10.00 s)
 (5/8) synctest.py:SyncTest.test;1: PASS (2.38 s)
 (6/8) synctest.py:SyncTest.test;2: PASS (2.47 s)
 (7/8) synctest.py:SyncTest.test;3: PASS (2.46 s)
 (8/8) synctest.py:SyncTest.test;4: PASS (2.45 s)
RESULTS    : PASS 8 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 26.26 s
JOB HTML   : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/html/results.html

Advanced logging capabilities

Avocado provides advanced logging capabilities at test run time. These can be combined with the standard Python logging library APIs in tests.

One common example is the need to follow specific progress on longer or more complex tests. Let’s look at a very simple test example, but one with multiple clear stages in a single test:

import logging
import time

from avocado import Test

progress_log = logging.getLogger("progress")

class Plant(Test):

    def test_plant_organic(self):
        rows = self.params.get("rows", default=3)

        # Preparing soil
        for row in range(rows):
            progress_log.info("%s: preparing soil on row %s",
                              self.name, row)

        # Letting soil rest
        progress_log.info("%s: letting soil rest before throwing seeds",
                          self.name)
        time.sleep(2)

        # Throwing seeds
        for row in range(rows):
            progress_log.info("%s: throwing seeds on row %s",
                              self.name, row)

        # Let them grow
        progress_log.info("%s: waiting for Avocados to grow",
                          self.name)
        time.sleep(5)

        # Harvest them
        for row in range(rows):
            progress_log.info("%s: harvesting organic avocados on row %s",
                              self.name, row)

From this point on, you can ask Avocado to show your logging stream, either exclusively or in addition to other builtin streams:

$ avocado --show app,progress run plant.py

The outcome should be similar to:

JOB ID     : af786f86db530bff26cd6a92c36e99bedcdca95b
JOB LOG    : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/job.log
TESTS      : 1
 (1/1) plant.py:Plant.test_plant_organic: progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 0
progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 1
progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 2
progress: 1-plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
-progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 0
progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 1
progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 2
progress: 1-plant.py:Plant.test_plant_organic: waiting for Avocados to grow
\progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2
PASS (7.01 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 7.01 s
JOB HTML   : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/html/results.html

The custom progress stream is combined with the application output, which may or may not suit your needs or preferences. If you want the progress stream to be sent to a separate file, both for clarity and for persistence, you can run Avocado like this:

$ avocado run plant.py --store-logging-stream progress

The result is that, besides all the other log files commonly generated, there will be another log file named progress.INFO at the job results dir. During the test run, one could watch the progress with:

$ tail -f ~/avocado/job-results/latest/progress.INFO
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 0
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 1
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 2
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 0
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 1
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 2
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: waiting for Avocados to grow
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2

The very same progress logger could be used across multiple test methods and across multiple test modules. In the example given, the test name is used to give extra context.

unittest.TestCase heritage

Since an Avocado test inherits from unittest.TestCase, you can use all the assertion methods that its parent provides.

The code example below uses assertEqual, assertTrue and assertIsInstance:

from avocado import Test

class RandomExamples(Test):
    def test(self):
        self.log.debug("Verifying some random math...")
        four = 2 * 2
        four_ = 2 + 2
        self.assertEqual(four, four_, "something is very wrong here!")

        self.log.debug("Verifying if a variable is set to True...")
        variable = True
        self.assertTrue(variable)

        self.log.debug("Verifying if this test is an instance of test.Test")
        self.assertIsInstance(self, test.Test)

Running tests under other unittest runners

nose is another Python testing framework that is also compatible with unittest.

Because of that, you can run avocado tests with the nosetests application:

$ nosetests examples/tests/sleeptest.py
.
----------------------------------------------------------------------
Ran 1 test in 1.004s

OK

Conversely, you can also use the standard unittest.main() entry point to run an Avocado test. Check out the following code, to be saved as dummy.py:

from avocado import Test
from unittest import main

class Dummy(Test):
    def test(self):
        self.assertTrue(True)

if __name__ == '__main__':
    main()

It can be run by:

$ python dummy.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Setup and cleanup methods

If you need to perform setup actions before/after your test, you may do so in the setUp and tearDown methods, respectively. We’ll give examples in the following section.
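
As a quick sketch of the shape of these methods (the actions here are purely illustrative, and a fuller example follows in the next section):

import os
import shutil
import tempfile

from avocado import Test


class ScratchDirTest(Test):

    def setUp(self):
        # runs before the test method: create a scratch directory
        self.scratch_dir = tempfile.mkdtemp()

    def test(self):
        self.assertTrue(os.path.isdir(self.scratch_dir))

    def tearDown(self):
        # runs after the test method: remove the scratch directory
        shutil.rmtree(self.scratch_dir)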

Running third party test suites

It is very common in test automation workloads to use test suites developed by third parties. By wrapping the execution code inside an Avocado test module, you gain access to the facilities and API provided by the framework. Let’s say you want to pick up a test suite written in C that is shipped in a tarball, uncompress it, compile the suite code, and then execute the test. Here’s an example that does that:

#!/usr/bin/env python

import os

from avocado import Test
from avocado import main
from avocado.utils import archive
from avocado.utils import build
from avocado.utils import process


class SyncTest(Test):

    """
    Execute the synctest test suite.
    """
    default_params = {'sync_tarball': 'synctest.tar.bz2',
                      'sync_length': 100,
                      'sync_loop': 10}

    def setUp(self):
        """
        Set default params and build the synctest suite.
        """
        # Build the synctest suite
        self.cwd = os.getcwd()
        tarball_path = os.path.join(self.datadir, self.params.sync_tarball)
        archive.extract(tarball_path, self.srcdir)
        self.srcdir = os.path.join(self.srcdir, 'synctest')
        build.make(self.srcdir)

    def test(self):
        """
        Execute synctest with the appropriate params.
        """
        os.chdir(self.srcdir)
        cmd = ('./synctest %s %s' %
               (self.params.sync_length, self.params.sync_loop))
        process.system(cmd)
        os.chdir(self.cwd)


if __name__ == "__main__":
    main()

Here we have an example of the setUp method in action: we get the location of the test suite code (tarball) through avocado.Test.datadir(), then uncompress the tarball through avocado.utils.archive.extract(), an API that will decompress the suite tarball, followed by avocado.utils.build.make(), which will build the suite.

The setUp method is the only place in avocado where you are allowed to call the skip method, given that, if a test started to be executed, by definition it can’t be skipped anymore. Avocado will do its best to enforce this boundary, so that if you use skip outside setUp, the test upon execution will be marked with the ERROR status, and the error message will instruct you to fix your test’s code.
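
A minimal sketch of skipping from setUp, assuming the skip method accepts a reason message; the checked requirement (/dev/kvm here) is just an illustration:

import os

from avocado import Test


class SkipFromSetUp(Test):

    def setUp(self):
        # hypothetical requirement check; skip early if it is not met
        if not os.path.exists('/dev/kvm'):
            self.skip('/dev/kvm is not available, skipping')

    def test(self):
        self.log.debug('requirement present, running the actual test')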

In this example, the test method just gets into the base directory of the compiled suite and executes the ./synctest command, with appropriate parameters, using avocado.utils.process.system().

Fetching asset files

To run third party test suites as mentioned above, or for any other purpose, we offer an asset fetcher as a method of the Avocado Test class. The fetch_asset() method looks for a list of directories in the cache_dirs key, inside the [datadir.paths] section of the configuration files. Read-only directories are also supported. When the asset file is not present in any of the provided directories, we will try to download the file from the provided locations, copying it to the first writable cache directory. Example:

cache_dirs = ['/usr/local/src/', '~/avocado/cache']

In the example above, /usr/local/src/ is a read-only directory. In that case, when we need to fetch the asset from the locations, it will be copied to the ~/avocado/cache directory.

If you don’t provide a cache_dirs, we will create a cache directory inside the avocado data_dir location to put the fetched files in.

  • Use case 1: no cache_dirs key in config files, only the asset name provided in the full url format:

    ...
        def setUp(self):
            stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
            tarball = self.fetch_asset(stress)
            archive.extract(tarball, self.srcdir)
    ...
    

    In this case, fetch_asset() will download the file from the url provided, copying it to the $data_dir/cache directory. The tarball variable will contain, for example, /home/user/avocado/data/cache/stress-1.0.4.tar.gz.

  • Use case 2: Read-only cache directory provided. cache_dirs = ['/mnt/files']:

    ...
        def setUp(self):
            stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
            tarball = self.fetch_asset(stress)
            archive.extract(tarball, self.srcdir)
    ...
    

    In this case, we try to find stress-1.0.4.tar.gz file in /mnt/files directory. If it’s not there, since /mnt/files is read-only, we will try to download the asset file to the $data_dir/cache directory.

  • Use case 3: Writable cache directory provided, along with a list of locations. cache_dirs = ['~/avocado/cache']:

    ...
        def setUp(self):
            st_name = 'stress-1.0.4.tar.gz'
            st_hash = 'e1533bc704928ba6e26a362452e6db8fd58b1f0b'
            st_loc = ['http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz',
                      'ftp://foo.bar/stress-1.0.4.tar.gz']
            tarball = self.fetch_asset(st_name, asset_hash=st_hash,
                                       locations=st_loc)
            archive.extract(tarball, self.srcdir)
    ...
    

    In this case, we try to download stress-1.0.4.tar.gz from the provided locations list (if it’s not already in ~/avocado/cache). The hash was also provided, so we will verify the hash. To do so, we first look for a hashfile named stress-1.0.4.tar.gz.sha1 in the same directory. If the hashfile is not present we compute the hash and create the hashfile for further usage.

    The resulting tarball variable content will be ~/avocado/cache/stress-1.0.4.tar.gz. An exception will take place if we fail to download or to verify the file.

Detailing the fetch_asset() parameters:

  • name: The name used for the fetched file. It can also contain a full URL, which will be used as the first location to try (after searching the cache directories).
  • asset_hash: (optional) The expected file hash. If missing, we skip the check. If provided, before computing the hash, we look for a hashfile to verify the asset. If the hashfile is not present, we compute the hash and create the hashfile in the same cache directory for further usage.
  • algorithm: (optional) The algorithm of the provided hash. Defaults to sha1.
  • locations: (optional) List of locations that will be used to try to fetch the file from. The supported schemes are http://, https://, ftp:// and file://. You’re required to provide the full URL to the file, including the file name. The first successful fetch skips the remaining locations. Notice that for file:// we just create a symbolic link in the cache directory, pointing to the file’s original location.
  • expire: (optional) Time period during which the cached file is considered valid. After that period, the file will be downloaded again. The value can be an integer or a string containing the time and the unit. Example: ‘10d’ (ten days). Valid units are s (second), m (minute), h (hour) and d (day). A short sketch using it follows after this list.

The expected return is the asset file path or an exception.
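
As an illustration of the expire parameter, here is a sketch reusing the asset name, hash and first location from use case 3 above, with an arbitrary ten-day expiration:

...
    def setUp(self):
        st_name = 'stress-1.0.4.tar.gz'
        st_hash = 'e1533bc704928ba6e26a362452e6db8fd58b1f0b'
        st_loc = ['http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz']
        # re-download the asset if the cached copy is older than ten days
        tarball = self.fetch_asset(st_name, asset_hash=st_hash,
                                   locations=st_loc, expire='10d')
        archive.extract(tarball, self.srcdir)
...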

Test Output Check and Output Record Mode

On many occasions, you want to go simpler: just check if the output of a given application matches an expected output. In order to help with this common use case, we offer the option --output-check-record [mode] to the test runner:

--output-check-record OUTPUT_CHECK_RECORD
                      Record output streams of your tests to reference files
                      (valid options: none (do not record output streams),
                      all (record both stdout and stderr), stdout (record
                      only stdout), stderr (record only stderr)). Default:
                      none

If this option is used, it will store the stdout or stderr of the process (or both, if you specified all) being executed to reference files: stdout.expected and stderr.expected. Those files will be recorded in the test data dir. The data dir is in the same directory as the test source file, named [source_file_name.data]. Let’s take as an example the test synctest.py. In a fresh checkout of Avocado, you can see:

examples/tests/synctest.py.data/stderr.expected
examples/tests/synctest.py.data/stdout.expected

From those 2 files, only stdout.expected is non-empty:

$ cat examples/tests/synctest.py.data/stdout.expected
PAR : waiting
PASS : sync interrupted

The output files were originally obtained by running the test with the option --output-check-record all:

$ scripts/avocado run --output-check-record all synctest.py
JOB ID    : bcd05e4fd33e068b159045652da9eb7448802be5
JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
TESTS     : 1
 (1/1) synctest.py:SyncTest.test: PASS (2.20 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 2.20 s

After the reference files are added, the check process is transparent, in the sense that you do not need to provide special flags to the test runner. Now, every time the test is executed, after it is done running, it will check if the outputs are exactly right before considering the test as PASSed. If you want to override the default behavior and skip output check entirely, you may provide the flag --output-check=off to the test runner.

The avocado.utils.process APIs have a parameter allow_output_check (defaults to all), so that you can select which process outputs will go to the reference files, should you choose to record them. You may choose all, for both stdout and stderr; stdout, for the stdout only; stderr, for the stderr only; or none, to allow neither of them to be recorded and checked.
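
A minimal sketch of that selection, assuming the parameter is accepted by avocado.utils.process.run() as described above (the commands themselves are just placeholders):

from avocado import Test
from avocado.utils import process


class OutputCheckControl(Test):

    def test(self):
        # output of this command is eligible for the reference files
        process.run('echo considered output', allow_output_check='all')
        # output of this command is left out of the output check machinery
        process.run('echo ignored output', allow_output_check='none')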

This process also works fine with simple tests, which are programs or shell scripts that return 0 (PASSed) or != 0 (FAILed). Let’s consider our bogus example:

$ cat output_record.sh
#!/bin/bash
echo "Hello, world!"

Let’s record the output for this one:

$ scripts/avocado run output_record.sh --output-check-record all
JOB ID    : 25c4244dda71d0570b7f849319cd71fe1722be8b
JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
TESTS     : 1
 (1/1) output_record.sh: PASS (0.01 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s

After this is done, you’ll notice that a test data directory appeared at the same level as our shell script, containing 2 files:

$ ls output_record.sh.data/
stderr.expected  stdout.expected

Let’s look at what’s in each of them:

$ cat output_record.sh.data/stdout.expected
Hello, world!
$ cat output_record.sh.data/stderr.expected
$

Now, every time this test runs, it’ll take into account the expected files that were recorded, no need to do anything else but run the test. Let’s see what happens if we change the stdout.expected file contents to Hello, Avocado!:

$ scripts/avocado run output_record.sh
JOB ID    : f0521e524face93019d7cb99c5765aedd933cb2e
JOB LOG   : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
TESTS     : 1
 (1/1) output_record.sh: FAIL (0.02 s)
RESULTS    : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.02 s

Verifying the failure reason:

$ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
20:52:38 test       L0163 INFO | START 1-output_record.sh
20:52:38 test       L0164 DEBUG|
20:52:38 test       L0165 DEBUG| Test instance parameters:
20:52:38 test       L0173 DEBUG|
20:52:38 test       L0176 DEBUG| Default parameters:
20:52:38 test       L0180 DEBUG|
20:52:38 test       L0181 DEBUG| Test instance params override defaults whenever available
20:52:38 test       L0182 DEBUG|
20:52:38 process    L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
20:52:38 process    L0310 DEBUG| [stdout] Hello, world!
20:52:38 test       L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
20:52:38 test       L0565 INFO | Exit status: 0
20:52:38 test       L0565 INFO | Duration: 0.00313782691956
20:52:38 test       L0565 INFO | Stdout:
20:52:38 test       L0565 INFO | Hello, world!
20:52:38 test       L0565 INFO |
20:52:38 test       L0565 INFO | Stderr:
20:52:38 test       L0565 INFO |
20:52:38 test       L0060 ERROR|
20:52:38 test       L0063 ERROR| Traceback (most recent call last):
20:52:38 test       L0063 ERROR|   File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
20:52:38 test       L0063 ERROR|     self.assertEqual(expected, actual, msg)
20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
20:52:38 test       L0063 ERROR|     assertion_func(first, second, msg=msg)
20:52:38 test       L0063 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
20:52:38 test       L0063 ERROR|     raise self.failureException(msg)
20:52:38 test       L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
20:52:38 test       L0063 ERROR| Actual:
20:52:38 test       L0063 ERROR| Hello, world!
20:52:38 test       L0063 ERROR|
20:52:38 test       L0063 ERROR| Expected:
20:52:38 test       L0063 ERROR| Hello, Avocado!
20:52:38 test       L0063 ERROR|
20:52:38 test       L0064 ERROR|
20:52:38 test       L0529 ERROR| FAIL 1-output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
Actual:
Hello, world!

Expected:
Hello, Avocado!

20:52:38 test       L0516 INFO |

As expected, the test failed because we changed its expectations.

Test log, stdout and stderr in native Avocado modules

If needed, you can write directly to the expected stdout and stderr files from the native test scope. It is important to make the distinction between the following entities:

  • The test logs
  • The test expected stdout
  • The test expected stderr

The first one is used for debugging and informational purposes. Additionally, writing to self.log.warning causes the test to be marked as dirty: when everything else goes well, the test ends with WARN. This means that the test passed, but unrelated unexpected situations occurred and are described in the warning log.

You may log something into the test logs using the methods available in the avocado.Test.log class attribute. Consider the example:

class output_test(Test):

    def test(self):
        self.log.info('This goes to the log and it is only informational')
        self.log.warn('Oh, something unexpected, non-critical happened, '
                      'but we can continue.')
        self.log.error("Describe the error here and don't forget to raise "
                       "an exception yourself. Writing to self.log.error "
                       "won't do that for you.")
        self.log.debug('Everybody look, I had a good lunch today...')

If you need to write directly to the test stdout and stderr streams, Avocado makes two preconfigured loggers available for that purpose, named avocado.test.stdout and avocado.test.stderr. You can use Python’s standard logging API to write to them. Example:

import logging

class output_test(Test):

    def test(self):
        stdout = logging.getLogger('avocado.test.stdout')
        stdout.info('Informational line that will go to stdout')
        ...
        stderr = logging.getLogger('avocado.test.stderr')
        stderr.info('Informational line that will go to stderr')

Avocado will automatically save anything a test generates on STDOUT into a stdout file, to be found at the test results directory. The same applies to anything a test generates on STDERR, that is, it will be saved into a stderr file at the same location.

Additionally, when using the runner’s output recording features, namely the --output-check-record argument with values stdout, stderr or all, everything given to those loggers will be saved to the files stdout.expected and stderr.expected at the test’s data directory (which is different from the job/test results directory).

Avocado Tests run in a separate process

In order to avoid tests messing with the environment used by the main Avocado runner process, tests are run in a forked subprocess. This allows for more robustness (tests are not easily able to disrupt or break Avocado) and some nifty features, such as setting test timeouts.

Setting a Test Timeout

Sometimes your test suite/test might get stuck forever, and this might impact your test grid. You can account for that possibility and set up a timeout parameter for your test. The test timeout can be set through 2 means, in the following order of precedence:

  • Multiplex variable parameters. You may just set the timeout parameter, like in the following simplistic example:
sleeptest:
    sleep_length: 5.0
    sleep_length_type: float
    timeout: 3.0
    timeout_type: float
$ avocado run sleeptest.py --mux-yaml /tmp/sleeptest-example.yaml
JOB ID    : 6d5a2ff16bb92395100fbc3945b8d253308728c9
JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
TESTS     : 1
 (1/1) sleeptest.py:SleepTest.test: ERROR (2.97 s)
RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 2.97 s
JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
$ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
15:52:51 test       L0143 INFO | START 1-sleeptest.py
15:52:51 test       L0144 DEBUG|
15:52:51 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
15:52:51 test       L0146 DEBUG| Test instance parameters:
15:52:51 test       L0153 DEBUG|     _name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
15:52:51 test       L0153 DEBUG|     _short_name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
15:52:51 test       L0153 DEBUG|     dep = []
15:52:51 test       L0153 DEBUG|     id = sleeptest
15:52:51 test       L0153 DEBUG|     name = sleeptest
15:52:51 test       L0153 DEBUG|     shortname = sleeptest
15:52:51 test       L0153 DEBUG|     sleep_length = 5.0
15:52:51 test       L0153 DEBUG|     sleep_length_type = float
15:52:51 test       L0153 DEBUG|     timeout = 3.0
15:52:51 test       L0153 DEBUG|     timeout_type = float
15:52:51 test       L0154 DEBUG|
15:52:51 test       L0157 DEBUG| Default parameters:
15:52:51 test       L0159 DEBUG|     sleep_length = 1.0
15:52:51 test       L0161 DEBUG|
15:52:51 test       L0162 DEBUG| Test instance params override defaults whenever available
15:52:51 test       L0163 DEBUG|
15:52:51 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
15:52:51 test       L0170 INFO |
15:52:51 sleeptest  L0035 DEBUG| Sleeping for 5.00 seconds
15:52:54 test       L0057 ERROR|
15:52:54 test       L0060 ERROR| Traceback (most recent call last):
15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
15:52:54 test       L0060 ERROR|     time.sleep(self.params.sleep_length)
15:52:54 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:52:54 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
15:52:54 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test       L0061 ERROR|
15:52:54 test       L0400 ERROR| ERROR 1-sleeptest.py -> TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test       L0387 INFO |

If you pass that multiplex file to the runner, it will register a timeout of 3 seconds; once that is reached, Avocado ends the test forcefully by sending a signal.SIGTERM to it, making it raise an avocado.core.exceptions.TestTimeoutError.

  • Default params attribute. Consider the following example:
import time

from avocado import Test
from avocado import main


class TimeoutTest(Test):

    """
    Functional test for Avocado. Throw a TestTimeoutError.
    """
    default_params = {'timeout': 3.0,
                      'sleep_time': 5.0}

    def test(self):
        """
        This should throw a TestTimeoutError.
        """
        self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                      self.params.sleep_time)
        time.sleep(self.params.sleep_time)


if __name__ == "__main__":
    main()

This accomplishes a similar effect to the multiplex setup defined above.

$ avocado run timeouttest.py
JOB ID    : d78498a54504b481192f2f9bca5ebb9bbb820b8a
JOB LOG   : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
TESTS     : 1
 (1/1) timeouttest.py:TimeoutTest.test: INTERRUPTED (3.04 s)
RESULTS    : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 3.04 s
JOB HTML  : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
$ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
15:54:28 test       L0143 INFO | START 1-timeouttest.py:TimeoutTest.test
15:54:28 test       L0144 DEBUG|
15:54:28 test       L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
15:54:28 test       L0146 DEBUG| Test instance parameters:
15:54:28 test       L0153 DEBUG|     id = timeouttest
15:54:28 test       L0154 DEBUG|
15:54:28 test       L0157 DEBUG| Default parameters:
15:54:28 test       L0159 DEBUG|     sleep_time = 5.0
15:54:28 test       L0159 DEBUG|     timeout = 3.0
15:54:28 test       L0161 DEBUG|
15:54:28 test       L0162 DEBUG| Test instance params override defaults whenever available
15:54:28 test       L0163 DEBUG|
15:54:28 test       L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
15:54:28 test       L0170 INFO |
15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
15:54:31 test       L0057 ERROR|
15:54:31 test       L0060 ERROR| Traceback (most recent call last):
15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
15:54:31 test       L0060 ERROR|     time.sleep(self.params.sleep_time)
15:54:31 test       L0060 ERROR|   File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:54:31 test       L0060 ERROR|     raise exceptions.TestTimeoutError(e_msg)
15:54:31 test       L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test       L0061 ERROR|
15:54:31 test       L0400 ERROR| ERROR 1-timeouttest.py:TimeoutTest.test -> TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test       L0387 INFO |

Test Tags

The need may arise for more complex tests, which use more advanced Python features such as inheritance. Because Avocado uses a safe test introspection method, which is more limited than actually loading the test classes, Avocado may need your help to identify those tests. For example, let’s say you are defining a new test class that inherits from the Avocado base test class and put it in mylibrary.py:

from avocado import Test


class MyOwnDerivedTest(Test):
    def __init__(self, methodName='test', name=None, params=None,
                 base_logdir=None, job=None, runner_queue=None):
        super(MyOwnDerivedTest, self).__init__(methodName, name, params,
                                               base_logdir, job,
                                               runner_queue)
        self.log.info('Derived class example')

Then implement your actual test using that derived class, in mytest.py:

import mylibrary


class MyTest(mylibrary.MyOwnDerivedTest):

    def test1(self):
        self.log.info('Testing something important')

    def test2(self):
        self.log.info('Testing something even more important')

If you try to list the tests in that file, this is what you’ll get:

$ scripts/avocado list mytest.py -V
Type       Test
NOT_A_TEST mytest.py

ACCESS_DENIED: 0
BROKEN_SYMLINK: 0
EXTERNAL: 0
FILTERED: 0
INSTRUMENTED: 0
MISSING: 0
NOT_A_TEST: 1
SIMPLE: 0
VT: 0

You need to give Avocado a little help by adding a docstring tag. That docstring tag is :avocado: enable. It tells the Avocado safe test detection code to consider it as an avocado test, regardless of what the (admittedly simple) detection code thinks of it. Let’s see how that works out. Add the docstring, as you can see in the example below:

import mylibrary


class MyTest(mylibrary.MyOwnDerivedTest):
    """
    :avocado: enable
    """
    def test1(self):
        self.log.info('Testing something important')

    def test2(self):
        self.log.info('Testing something even more important')

Now, trying to list the tests on the mytest.py file again:

$ scripts/avocado list mytest.py -V
Type         Test
INSTRUMENTED mytest.py:MyTest.test1
INSTRUMENTED mytest.py:MyTest.test2

ACCESS_DENIED: 0
BROKEN_SYMLINK: 0
EXTERNAL: 0
FILTERED: 0
INSTRUMENTED: 2
MISSING: 0
NOT_A_TEST: 0
SIMPLE: 0
VT: 0

You can also use the :avocado: disable tag, which works the opposite way: something looks like an Avocado test, but we force it not to be listed as one.
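
A minimal sketch of that opposite case (the class name here is purely illustrative):

from avocado import Test


class HelperNotATest(Test):
    """
    :avocado: disable
    """
    def test(self):
        self.log.info('Looks like a test, but will not be listed as one')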

Python unittest Compatibility Limitations And Caveats

When executing tests, Avocado uses different techniques than most other Python unittest runners. This brings some compatibility limitations that Avocado users should be aware of.

Execution Model

One of the main differences is a consequence of the Avocado design decision that tests should be self contained and isolated from other tests. Additionally, the Avocado test runner runs each test in a separate process.

If you have a unittest class with many test methods and run them using most test runners, you’ll find that all test methods run in the same process. To check that behavior, you could add this to your setUp method:

def setUp(self):
    print("PID: %s" % os.getpid())  # requires "import os" at module level

If you run the same test under Avocado, you’ll find that each test is run on a separate process.

Class Level setUp and tearDown

Because of Avocado’s test execution model (each test is run on a separate process), it doesn’t make sense to support unittest’s unittest.TestCase.setUpClass() and unittest.TestCase.tearDownClass(). Test classes are freshly instantiated for each test, so it’s pointless to run code in those methods, since they’re supposed to keep class state between tests.

If you require a common setup for a number of tests, the current recommended approach is to write regular setUp and tearDown code that checks if a given state was already set. One example of such a test, requiring a binary installed by a package:

from avocado import Test

from avocado.utils import software_manager
from avocado.utils import path as utils_path
from avocado.utils import process


class BinSleep(Test):

    """
    Sleeps using the /bin/sleep binary
    """
    def setUp(self):
        self.sleep = None
        try:
            self.sleep = utils_path.find_command('sleep')
        except utils_path.CmdNotFoundError:
            software_manager.install_distro_packages({'fedora': ['coreutils']})
            self.sleep = utils_path.find_command('sleep')

    def test(self):
        process.run("%s 1" % self.sleep)

If your test setup is some kind of action that will last across processes, like the installation of a software package given in the previous example, you’re pretty much covered here.

If you need to keep other types of data in a class across test executions, you’ll have to resort to saving and restoring the data from an outside source (say, a “pickle” file). Finding and using a reliable and safe location for saving such data is currently not among the Avocado supported use cases.
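
One possible sketch of that approach, with the state file location and the data stored being entirely illustrative (Avocado itself does not manage this file for you):

import os
import pickle

from avocado import Test

# hypothetical location for the shared state; choosing a safe path is up to you
STATE_FILE = '/var/tmp/my_suite_state.pickle'


class SharedStateTest(Test):

    def setUp(self):
        # restore state saved by a previously run test process, if any
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, 'rb') as state:
                self.counter = pickle.load(state)
        else:
            self.counter = 0

    def test(self):
        self.counter += 1
        self.log.debug('this test process saw the counter at %d', self.counter)

    def tearDown(self):
        # save state for the next test process
        with open(STATE_FILE, 'wb') as state:
            pickle.dump(self.counter, state)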

Environment Variables for Simple Tests

Avocado exports its own variables and the multiplexed variables as environment variables to the running test. Those variables are interesting to simple tests because, unlike native tests, they cannot make use of the Avocado Python API directly, yet this way they can still read the test parameters.

Here are the current variables that Avocado exports to the tests:

  • AVOCADO_VERSION: Version of the Avocado test runner (e.g. 0.12.0)
  • AVOCADO_TEST_BASEDIR: Base directory of Avocado tests (e.g. $HOME/Downloads/avocado-source/avocado)
  • AVOCADO_TEST_DATADIR: Data directory for the test (e.g. $AVOCADO_TEST_BASEDIR/my_test.sh.data)
  • AVOCADO_TEST_WORKDIR: Work directory for the test (e.g. /var/tmp/avocado_Bjr_rd/my_test.sh)
  • AVOCADO_TEST_SRCDIR: Source directory for the test (e.g. /var/tmp/avocado_Bjr_rd/my-test.sh/src)
  • AVOCADO_TEST_LOGDIR: Log directory for the test (e.g. $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1)
  • AVOCADO_TEST_LOGFILE: Log file for the test (e.g. $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log)
  • AVOCADO_TEST_OUTPUTDIR: Output directory for the test (e.g. $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data)
  • AVOCADO_TEST_SYSINFODIR: The system information directory (e.g. $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo)
  • All variables from --mux-yaml (e.g. TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ...)
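
Since a simple test is just an executable that returns 0 on success, any language can read these variables; here is a sketch in Python (the variable choices are arbitrary, and os.environ.get() is used so the script also runs outside Avocado):

#!/usr/bin/env python
# sketch of a simple test reading a couple of the exported variables
import os
import sys

print("Avocado version: %s" % os.environ.get('AVOCADO_VERSION'))
print("Test output directory: %s" % os.environ.get('AVOCADO_TEST_OUTPUTDIR'))
sys.exit(0)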

Simple Tests BASH extensions

To enhance simple tests, one can use the supported set of libraries we created. The only requirement is to use:

PATH=$(avocado "exec-path"):$PATH

which injects the path to the Avocado utilities into the shell PATH. Take a look at avocado exec-path to see the list of available functions, and take a look at examples/tests/simplewarning.sh for inspiration.

Wrap Up

We recommend you take a look at the example tests present in the examples/tests directory, which contains a few samples to take some inspiration from. That directory, besides containing examples, is also used by the Avocado self test suite to do functional testing of Avocado itself.

It is also recommended that you take a look at the API Reference for more possibilities.

[1] sleeptest is a functional test for Avocado. It’s “old” because we have also had such a test in Autotest for a long time.