Avocado User’s Guide¶
About Avocado¶
Avocado is a set of tools and libraries to help with automated testing.
One can call it a test framework with benefits. Native tests are
written in Python and they follow the unittest
pattern, but any
executable can serve as a test.
Avocado is composed of:
- A test runner that lets you execute tests. Those tests can be either written in your language of choice, or be written in Python and use the available libraries. In both cases, you get facilities such as automated log and system information collection.
- Libraries that help you write tests in a concise, yet expressive and powerful way. You can find more information about what libraries are intended for test writers at Libraries and APIs.
- Plugins that can extend and add new functionality to the Avocado Framework.
Avocado tries as much as possible to comply with standard Python testing technology. Tests written using the Avocado API are derived from the unittest class, while other methods suited to functional and performance testing were added. The test runner is designed to help people run their tests while providing an assortment of system and logging facilities, with no effort; if you want more features, you can start using the API features progressively.
A mindmap from a 2015 workshop, demonstrating features through examples, is available here.
Getting Started¶
The first step towards using Avocado is, quite obviously, installing it.
Installing Avocado¶
Installing from Packages¶
Avocado is officially available in RPM packages for Fedora and
Enterprise Linux. Other RPM based distributions may package and ship
Avocado themselves. DEB package support is available in the source
tree (look at the contrib/packages/debian
directory).
Avocado is primarily being developed on Fedora, but reasonable efforts are being made to support other GNU/Linux based platforms.
Fedora¶
First, get the package repositories configuration file by running the following command:
sudo curl https://repos-avocadoproject.rhcloud.com/static/avocado-fedora.repo -o /etc/yum.repos.d/avocado.repo
Now check if you have the avocado
and avocado-lts
repositories configured by running:
sudo dnf repolist avocado avocado-lts
...
repo id repo name status
avocado Avocado 50
avocado-lts Avocado LTS (Long Term Stability) disabled
Regular users of Avocado will want to use the standard avocado
repository, which tracks the latest Avocado releases. For more
information about the LTS releases, please refer to the Avocado Long
Term Stability thread and to your package management docs on
how to switch to the avocado-lts
repo.
Finally, after deciding between regular Avocado releases or LTS, you can install the RPM packages by running the following commands:
sudo dnf install avocado
Additionally, two other Avocado packages are available for Fedora:
- avocado-examples: contains example tests and other example files
- avocado-plugins-output-html: HTML job report plugin
Enterprise Linux¶
If you’re running either Red Hat Enterprise Linux or one of its derivatives, such as CentOS, just adapt the following URL and commands:
# If not already enabled, enable EPEL (for RHEL7, it's the following cmd)
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Add avocado repository and install avocado
sudo curl https://repos-avocadoproject.rhcloud.com/static/avocado-el.repo -o /etc/yum.repos.d/avocado.repo
sudo yum install avocado
As with Fedora, two other Avocado packages are available for Enterprise Linux:
- avocado-examples: contains example tests and other example files
- avocado-plugins-output-html: HTML job report plugin
The LTS (Long Term Stability) repositories are also available for
Enterprise Linux. Please refer to the Avocado Long Term
Stability thread and to your package management docs on how to
switch to the avocado-lts
repo.
Generic installation from a GIT repository¶
First make sure you have a basic set of packages installed. The following applies to Fedora based distributions, please adapt to your platform:
sudo yum install -y git gcc python-devel python-pip libvirt-devel libyaml-devel redhat-rpm-config xz-devel
Then to install Avocado from the git repository run:
git clone git://github.com/avocado-framework/avocado.git
cd avocado
sudo make requirements
sudo python setup.py install
Note that python and pip should point to the Python interpreter version 2.7.x. If you’re having trouble installing, you can try again using the command line utilities python2.7 and pip2.7.
Please note that some Avocado functionality may be implemented by optional plugins. To install, say, the HTML report plugin, run:
cd optional_plugins/html
sudo python setup.py install
If you intend to hack on Avocado, you may want to look at Hacking and Using Avocado.
Installing from standard Python tools¶
Avocado can also be installed by the standard Python packaging tools,
namely pip
. On most POSIX systems with Python >= 2.7 and pip
available, installation can be performed with the following commands:
pip install avocado-framework
Note
As a design decision, only the dependencies for the core Avocado test runner will be installed. You may notice, depending on your system, that some plugins will fail to load, due to those missing dependencies.
If you want to install all the requirements for all plugins, you may attempt to do so by running:
pip install -r https://raw.githubusercontent.com/avocado-framework/avocado/master/requirements.txt
The result, though, is highly dependent on your system setup, such as having the right compilers, header files and libraries available. The more predictable and complete Avocado experience can be achieved with the official RPM packages.
Using Avocado¶
You should first experience Avocado by using the test runner, that is, the command line tool that will conveniently run your tests and collect their results.
Running Tests¶
To do so, please run avocado
with the run
sub-command followed by
a test reference, which could be either a path to the file, or a
recognizable name:
$ avocado run /bin/true
JOB ID : 381b849a62784228d2fd208d929cc49f310412dc
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.39-381b849a/job.log
TESTS : 1
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.39-381b849a/html/results.html
You probably noticed that we used /bin/true
as a test, and in accordance with our
expectations, it passed! These are known as simple tests, but there is also another
type of test, which we call instrumented tests. See more at Test Types or just
keep reading.
Note
Although in most cases running avocado run $test1 $test2 ... is
fine, it can lead to argument vs. test name clashes. The safest
way to execute tests is avocado run --$argument1 --$argument2
-- $test1 $test2. Everything after -- will be considered
positional arguments, that is, test names (in the case of
avocado run).
Listing tests¶
You have two ways of discovering the tests. You can simulate the execution by
using the --dry-run
argument:
$ avocado run /bin/true --dry-run
JOB ID : 0000000000000000000000000000000000000000
JOB LOG : /tmp/avocado-dry-runSeWniM/job-2015-10-16T15.46-0000000/job.log
TESTS : 1
(1/1) /bin/true: SKIP
RESULTS : PASS 0 | ERROR 0 | FAIL 0 | SKIP 1 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.00 s
JOB HTML : /tmp/avocado-dry-runSeWniM/job-2015-10-16T15.46-0000000/html/results.html
which supports all run arguments, simulates the run and even lists the test params.
The other way is to use the list subcommand, which lists the discovered tests.
If no arguments are provided, Avocado lists the “default” tests of each plugin.
The output might look like this:
$ avocado list
INSTRUMENTED /usr/share/avocado/tests/abort.py
INSTRUMENTED /usr/share/avocado/tests/datadir.py
INSTRUMENTED /usr/share/avocado/tests/doublefail.py
INSTRUMENTED /usr/share/avocado/tests/doublefree.py
INSTRUMENTED /usr/share/avocado/tests/errortest.py
INSTRUMENTED /usr/share/avocado/tests/failtest.py
INSTRUMENTED /usr/share/avocado/tests/fiotest.py
INSTRUMENTED /usr/share/avocado/tests/gdbtest.py
INSTRUMENTED /usr/share/avocado/tests/gendata.py
INSTRUMENTED /usr/share/avocado/tests/linuxbuild.py
INSTRUMENTED /usr/share/avocado/tests/multiplextest.py
INSTRUMENTED /usr/share/avocado/tests/passtest.py
INSTRUMENTED /usr/share/avocado/tests/sleeptenmin.py
INSTRUMENTED /usr/share/avocado/tests/sleeptest.py
INSTRUMENTED /usr/share/avocado/tests/synctest.py
INSTRUMENTED /usr/share/avocado/tests/timeouttest.py
INSTRUMENTED /usr/share/avocado/tests/trinity.py
INSTRUMENTED /usr/share/avocado/tests/warntest.py
INSTRUMENTED /usr/share/avocado/tests/whiteboard.py
...
These Python files are considered by Avocado to contain INSTRUMENTED
tests.
Let’s now list only the executable shell scripts:
$ avocado list | grep ^SIMPLE
SIMPLE /usr/share/avocado/tests/env_variables.sh
SIMPLE /usr/share/avocado/tests/output_check.sh
SIMPLE /usr/share/avocado/tests/simplewarning.sh
SIMPLE /usr/share/avocado/tests/failtest.sh
SIMPLE /usr/share/avocado/tests/passtest.sh
Here, as mentioned before, SIMPLE
means that those files are executables
treated as simple tests. You can also give the --verbose
or -V
flag to
display files that were found by Avocado, but are not considered Avocado tests:
$ avocado list examples/gdb-prerun-scripts/ -V
Type file
NOT_A_TEST examples/gdb-prerun-scripts/README
NOT_A_TEST examples/gdb-prerun-scripts/pass-sigusr1
SIMPLE: 0
INSTRUMENTED: 0
MISSING: 0
NOT_A_TEST: 2
Notice that the verbose flag also adds summary information.
Writing a Simple Test¶
Here is a very simple example of a simple test written as a shell script:
$ echo '#!/bin/bash' > /tmp/simple_test.sh
$ echo 'exit 0' >> /tmp/simple_test.sh
$ chmod +x /tmp/simple_test.sh
Notice that the file is given executable permissions, which is a requirement for Avocado to treat it as a simple test. Also notice that the script exits with status code 0, which signals a successful result to Avocado.
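Any executable can serve as a simple test, not only shell scripts. For instance, here is a minimal sketch of an equivalent simple test written in Python (the file name and contents are illustrative):

#!/usr/bin/env python
# A simple test is just an executable: exit status 0 reports PASS
# to Avocado, any non-zero status reports FAIL.
import sys

sys.exit(0)

As with the shell version, the file must be made executable (chmod +x) to be recognized as a simple test.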
Running A More Complex Test Job¶
You can run any number of tests in an arbitrary order, as well as mix and match instrumented and simple tests:
$ avocado run failtest.py sleeptest.py synctest.py failtest.py synctest.py /tmp/simple_test.sh
JOB ID : 86911e49b5f2c36caeea41307cee4fecdcdfa121
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.42-86911e49/job.log
TESTS : 6
(1/6) failtest.py:FailTest.test: FAIL (0.00 s)
(2/6) sleeptest.py:SleepTest.test: PASS (1.00 s)
(3/6) synctest.py:SyncTest.test: PASS (2.43 s)
(4/6) failtest.py:FailTest.test: FAIL (0.00 s)
(5/6) synctest.py:SyncTest.test: PASS (2.44 s)
(6/6) /tmp/simple_test.sh.1: PASS (0.02 s)
RESULTS : PASS 4 | ERROR 0 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 5.88 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.42-86911e49/html/results.html
Interrupting The Job On First Failed Test (failfast)¶
The Avocado run
command has the option --failfast on
to exit the job
on first failed test:
$ avocado run --failfast on /bin/true /bin/false /bin/true /bin/true
JOB ID : eaf51b8c7d6be966bdf5562c9611b1ec2db3f68a
JOB LOG : $HOME/avocado/job-results/job-2016-07-19T09.43-eaf51b8/job.log
TESTS : 4
(1/4) /bin/true: PASS (0.01 s)
(2/4) /bin/false: FAIL (0.01 s)
Interrupting job (failfast).
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 2 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.02 s
JOB HTML : /home/apahim/avocado/job-results/job-2016-07-19T09.43-eaf51b8/html/results.html
The --failfast
option accepts the argument off
. Since it’s disabled
by default, the off
argument only makes sense in replay jobs, when the
original job was executed with --failfast on
.
Running Tests With An External Runner¶
It’s quite common to have organically grown test suites in most software projects. These usually include a custom built, very specific test runner that knows how to find and run their own tests.
Still, running those tests inside Avocado may be a good idea for various reasons, including being able to have results in different human and machine readable formats, collecting system information alongside those tests (the Avocado’s sysinfo functionality), and more.
Avocado makes that possible by means of its “external runner” feature. The most basic way of using it is:
$ avocado run --external-runner=/path/to/external_runner foo bar baz
In this example, Avocado will report individual test results for tests foo, bar and baz. The actual results will be based on the return code of individual executions of /path/to/external_runner foo, /path/to/external_runner bar and finally /path/to/external_runner baz.
As another way to explain and show how this feature works, think of the “external runner” as some kind of interpreter, and the individual tests as anything that this interpreter recognizes and is able to execute. A UNIX shell, say /bin/sh, could be considered an external runner, and files with shell code could be considered tests:
$ echo "exit 0" > /tmp/pass
$ echo "exit 1" > /tmp/fail
$ avocado run --external-runner=/bin/sh /tmp/pass /tmp/fail
JOB ID : 4a2a1d259690cc7b226e33facdde4f628ab30741
JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
TESTS : 2
(1/2) /tmp/pass: PASS (0.01 s)
(2/2) /tmp/fail: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
JOB HTML : /home/<user>/avocado/job-results/job-<date>-<shortid>/html/results.html
This example is pretty obvious, and could be achieved by giving /tmp/pass and /tmp/fail shell “shebangs” (#!/bin/sh), making them executable (chmod +x /tmp/pass /tmp/fail), and running them as “SIMPLE” tests.
But now consider the following example:
$ avocado run --external-runner=/bin/curl http://local-avocado-server:9405/jobs/ \
http://remote-avocado-server:9405/jobs/
JOB ID : 56016a1ffffaba02492fdbd5662ac0b958f51e11
JOB LOG : /home/<user>/avocado/job-results/job-<date>-<shortid>/job.log
TESTS : 2
(1/2) http://local-avocado-server:9405/jobs/: PASS (0.02 s)
(2/2) http://remote-avocado-server:9405/jobs/: FAIL (3.02 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 3.04 s
JOB HTML : /home/<user>/avocado/job-results/job-<date>-<shortid>/html/results.html
This effectively makes /bin/curl an “external test runner”, responsible for trying to fetch those URLs, and reporting PASS or FAIL for each of them.
Debugging tests¶
When developing new tests, you frequently want to look straight at the job log, without switching screens or having to “tail” the job log.
In order to do that, you can use avocado --show test run ...
or
avocado run --show-job-log ...
options:
$ avocado --show test run examples/tests/sleeptest.py
...
Job ID: f9ea1742134e5352dec82335af584d1f151d4b85
START 1-sleeptest.py:SleepTest.test
PARAMS (key=timeout, path=*, default=None) => None
PARAMS (key=sleep_length, path=*, default=1) => 1
Sleeping for 1.00 seconds
PASS 1-sleeptest.py:SleepTest.test
Test results available in $HOME/avocado/job-results/job-2015-06-02T10.45-f9ea174
As you can see, the UI output is suppressed and only the job log is shown, making this a useful feature for test development and debugging.
Writing Avocado Tests¶
We are going to write an Avocado test in Python, inheriting from
avocado.Test. This makes it a so-called instrumented test.
Basic example¶
Let’s re-create an old time favorite, sleeptest
[1]. It is so simple, it
does nothing besides sleeping for a while:
import time

from avocado import Test


class SleepTest(Test):

    def test(self):
        sleep_length = self.params.get('sleep_length', default=1)
        self.log.debug("Sleeping for %.2f seconds", sleep_length)
        time.sleep(sleep_length)
This is about the simplest test you can write for Avocado, while still leveraging its API power.
What is an Avocado Test¶
As can be seen in the example above, an Avocado test is a method that
starts with test
in a class that inherits from avocado.Test
.
Multiple tests and naming conventions¶
You can have multiple tests in a single class.
To do so, just give the methods names that start with test
, say
test_foo
, test_bar
and so on. We recommend you follow this naming
style, as defined in the PEP8 Function Names section.
For the class name, you can pick any name you like, but we also recommend that it follows the CamelCase convention, also known as CapWords, defined in the PEP 8 document under Class Names.
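A minimal sketch following those conventions (class and method names are illustrative):

from avocado import Test


class MultipleTests(Test):

    def test_foo(self):
        self.log.info("first test method")

    def test_bar(self):
        self.log.info("second test method")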
Convenience Attributes¶
Note that the test class provides you with a number of convenience attributes:
- A ready to use log mechanism for your test, that can be accessed by means of self.log. It lets you log debug, info, error and warning messages.
- A parameter passing system (and fetching system) that can be accessed by means of self.params. This is hooked to the Multiplexer, about which you can find more information at Test variants - Mux.
Saving test generated (custom) data¶
Each test instance provides a so called whiteboard
. It can be accessed
through self.whiteboard
. This whiteboard is simply a string that will be
automatically saved to test results (as long as the output format supports it).
If you choose to save binary data to the whiteboard, it’s your responsibility to
encode it first (base64 is the obvious choice).
Building on the previously demonstrated sleeptest
, suppose that you want to save the
sleep length to be used by some other script or data analysis tool:
    def test(self):
        sleep_length = self.params.get('sleep_length', default=1)
        self.log.debug("Sleeping for %.2f seconds", sleep_length)
        time.sleep(sleep_length)
        self.whiteboard = "%.2f" % sleep_length
The whiteboard can and should be exposed by files generated by the available test result
plugins. The results.json file already includes the whiteboard for each test.
Additionally, we’ll save a raw copy of the whiteboard contents in a file named
whiteboard, at the same level as the results.json file, for your convenience
(maybe you want to use the result of a benchmark directly with your custom scripts
to analyze that particular benchmark result).
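For the binary case mentioned above, here is a minimal sketch of base64-encoding data before placing it on the whiteboard (the payload is illustrative):

import base64

from avocado import Test


class WhiteboardBinary(Test):

    def test(self):
        data = b'\x00\x01\x02'  # illustrative binary payload
        # the whiteboard holds a string, so encode binary data first
        self.whiteboard = base64.b64encode(data)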
Accessing test parameters¶
Each test has a set of parameters that can be accessed through
self.params.get($name, $path=None, $default=None)
.
Avocado finds and populates self.params
with all parameters you define on
a Multiplex Config file (see Test variants - Mux). As an example, consider
the following multiplex file for sleeptest:
sleeptest:
    type: "builtin"
    length: !mux
        short:
            sleep_length: 0.5
        medium:
            sleep_length: 1
        long:
            sleep_length: 5
When running this example with avocado run $test --mux-yaml $file.yaml,
three variants are executed and the content is injected into the /run namespace
(see Test variants - Mux for details). Every variant contains the variables
“type” and “sleep_length”. To obtain the current value, you need the name
(“sleep_length”) and its path. The path differs for each variant, so you
need to use the most suitable portion of it; in this example,
/run/sleeptest/length/* or perhaps sleeptest/* might be enough. It depends
on what your setup looks like.
The default value is optional, but always keep in mind to handle missing parameters gracefully. Someone might be executing your test with different params, or without any params at all, and it should still work fine.
So the complete example on how to access the “sleep_length” would be:
self.params.get("sleep_length", "/*/sleeptest/*", 1)
There is one way to make this even simpler. It’s possible to define resolution order, then for simple queries you can simply omit the path:
self.params.get("sleep_length", None, 1)
self.params.get("sleep_length", '*', 1)
self.params.get("sleep_length", default=1)
One should always try to avoid param clashes (multiple matching keys for a given
path, with different origins). If that’s not possible (eg. when you use multiple
yaml files), you can modify the default paths via --mux-path. What it does is
slice the params and iterate through the paths one by one; when there is a match
in the first slice, it is returned without trying the other slices. Note that
relative queries only match within the --mux-path slices.
There are many ways to use paths to separate clashing params, or just to make clearer what you are querying for. Usually in tests the usage of ‘*’ is sufficient and namespacing is not necessary, but it helps make advanced usage clearer and easier to follow.
When thinking of the path, always think about users. It’s common to extend the default config with additional variants, or to combine them with different ones to generate just the right scenarios they need. People might simply inject the values elsewhere (eg. /run/sleeptest => /upstream/sleeptest), or they can merge another clashing file into the default path, which won’t generate a clash, but would return their values instead. Then you need to make the path more specific (eg. ‘*’ => sleeptest/*).
More details on that are in Test variants - Mux.
Using a multiplex file¶
You may use the Avocado runner with a multiplex file to provide params and matrix generation for sleeptest just like:
$ avocado run sleeptest.py --mux-yaml examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID : d565e8dec576d6040f894841f32a836c751f968f
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/job.log
TESTS : 3
(1/3) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
(2/3) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
(3/3) sleeptest.py:SleepTest.test;3: PASS (5.00 s)
RESULTS : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 6.50 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.44-d565e8de/html/results.html
The --mux-yaml option accepts either only $FILE_LOCATION or
$INJECT_TO:$FILE_LOCATION. As explained in Test variants - Mux, without any
path the content gets injected into /run in order to be in the default
relative path location. The $INJECT_TO can be either a relative path, in
which case the content is injected into the /run/$INJECT_TO location, or an
absolute path (starting with '/'), in which case the content is injected
directly into the specified path and it’s up to the test/framework developer
to get the value from this location (using the path, or adding the path to
mux-path). To understand the difference, execute those commands:
$ avocado multiplex -t -m examples/tests/sleeptest.py.data/sleeptest.yaml
$ avocado multiplex -t -m duration:examples/tests/sleeptest.py.data/sleeptest.yaml
$ avocado multiplex -t -m /my/location:examples/tests/sleeptest.py.data/sleeptest.yaml
Note that, as your multiplex file specifies all parameters for sleeptest, you can’t leave the test ID empty:
$ scripts/avocado run --mux-yaml examples/tests/sleeptest/sleeptest.yaml
Empty test ID. A test path or alias must be provided
You can also execute multiple tests with the same multiplex file:
$ avocado run sleeptest.py synctest.py --mux-yaml examples/tests/sleeptest.py.data/sleeptest.yaml
JOB ID : cd20fc8d1714da6d4791c19322374686da68c45c
JOB LOG : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/job.log
TESTS : 8
(1/8) sleeptest.py:SleepTest.test;1: PASS (0.50 s)
(2/8) sleeptest.py:SleepTest.test;2: PASS (1.00 s)
(3/8) sleeptest.py:SleepTest.test;3: PASS (5.01 s)
(4/8) sleeptest.py:SleepTest.test;4: PASS (10.00 s)
(5/8) synctest.py:SyncTest.test;1: PASS (2.38 s)
(6/8) synctest.py:SyncTest.test;2: PASS (2.47 s)
(7/8) synctest.py:SyncTest.test;3: PASS (2.46 s)
(8/8) synctest.py:SyncTest.test;4: PASS (2.45 s)
RESULTS : PASS 8 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 26.26 s
JOB HTML : $HOME/avocado/job-results/job-2016-05-04T09.25-cd20fc8/html/results.html
Advanced logging capabilities¶
Avocado provides advanced logging capabilities at test run time. These can be combined with the standard Python library APIs on tests.
One common example is the need to follow specific progress on longer or more complex tests. Let’s look at a very simple test example, but one with multiple clear stages in a single test:
import logging
import time

from avocado import Test

progress_log = logging.getLogger("progress")


class Plant(Test):

    def test_plant_organic(self):
        rows = self.params.get("rows", default=3)

        # Preparing soil
        for row in range(rows):
            progress_log.info("%s: preparing soil on row %s",
                              self.name, row)

        # Letting soil rest
        progress_log.info("%s: letting soil rest before throwing seeds",
                          self.name)
        time.sleep(2)

        # Throwing seeds
        for row in range(rows):
            progress_log.info("%s: throwing seeds on row %s",
                              self.name, row)

        # Let them grow
        progress_log.info("%s: waiting for Avocados to grow",
                          self.name)
        time.sleep(5)

        # Harvest them
        for row in range(rows):
            progress_log.info("%s: harvesting organic avocados on row %s",
                              self.name, row)
From this point on, you can ask Avocado to show your logging stream, either exclusively or in addition to other builtin streams:
$ avocado --show app,progress run plant.py
The outcome should be similar to:
JOB ID : af786f86db530bff26cd6a92c36e99bedcdca95b
JOB LOG : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/job.log
TESTS : 1
(1/1) plant.py:Plant.test_plant_organic: progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 0
progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 1
progress: 1-plant.py:Plant.test_plant_organic: preparing soil on row 2
progress: 1-plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
-progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 0
progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 1
progress: 1-plant.py:Plant.test_plant_organic: throwing seeds on row 2
progress: 1-plant.py:Plant.test_plant_organic: waiting for Avocados to grow
\progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
progress: 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2
PASS (7.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 7.01 s
JOB HTML : /home/cleber/avocado/job-results/job-2016-03-18T10.29-af786f8/html/results.html
The custom progress
stream is combined with the application output, which
may or may not suit your needs or preferences. If you want the progress
stream to be sent to a separate file, both for clarity and for persistence,
you can run Avocado like this:
$ avocado run plant.py --store-logging-stream progress
The result is that, besides all the other log files commonly generated, there
will be another log file named progress.INFO
at the job results
dir. During the test run, one could watch the progress with:
$ tail -f ~/avocado/job-results/latest/progress.INFO
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 0
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 1
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: preparing soil on row 2
10:36:59 INFO | 1-plant.py:Plant.test_plant_organic: letting soil rest before throwing seeds
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 0
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 1
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: throwing seeds on row 2
10:37:01 INFO | 1-plant.py:Plant.test_plant_organic: waiting for Avocados to grow
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 0
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 1
10:37:06 INFO | 1-plant.py:Plant.test_plant_organic: harvesting organic avocados on row 2
The very same progress logger could be used across multiple test methods
and across multiple test modules. In the example given, the test name is used
to give extra context.
unittest.TestCase
heritage¶
Since an Avocado test inherits from unittest.TestCase, you
can use all the assertion methods that its parent provides.
The code example below uses assertEqual, assertTrue and assertIsInstance:
from avocado import Test


class RandomExamples(Test):

    def test(self):
        self.log.debug("Verifying some random math...")
        four = 2 * 2
        four_ = 2 + 2
        self.assertEqual(four, four_, "something is very wrong here!")

        self.log.debug("Verifying if a variable is set to True...")
        variable = True
        self.assertTrue(variable)

        self.log.debug("Verifying if this test is an instance of Test")
        self.assertIsInstance(self, Test)
Running tests under other unittest
runners¶
nose is another Python testing framework
that is also compatible with unittest
.
Because of that, you can run avocado tests with the nosetests
application:
$ nosetests examples/tests/sleeptest.py
.
----------------------------------------------------------------------
Ran 1 test in 1.004s
OK
Conversely, you can also use the standard unittest.main()
entry point to run an
Avocado test. Check out the following code, to be saved as dummy.py
:
from avocado import Test
from unittest import main


class Dummy(Test):

    def test(self):
        self.assertTrue(True)

if __name__ == '__main__':
    main()
It can be run by:
$ python dummy.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Setup and cleanup methods¶
If you need to perform setup actions before/after your test, you may do so
in the setUp
and tearDown
methods, respectively. We’ll give examples
in the following section.
Running third party test suites¶
It is very common in test automation workloads to use test suites developed by third parties. By wrapping the execution code inside an Avocado test module, you gain access to the facilities and API provided by the framework. Let’s say you want to pick up a test suite written in C that is in a tarball, uncompress it, compile the suite code, and then execute the test. Here’s an example that does just that:
#!/usr/bin/env python

import os

from avocado import Test
from avocado import main
from avocado.utils import archive
from avocado.utils import build
from avocado.utils import process


class SyncTest(Test):

    """
    Execute the synctest test suite.
    """
    default_params = {'sync_tarball': 'synctest.tar.bz2',
                      'sync_length': 100,
                      'sync_loop': 10}

    def setUp(self):
        """
        Set default params and build the synctest suite.
        """
        # Build the synctest suite
        self.cwd = os.getcwd()
        tarball_path = os.path.join(self.datadir, self.params.sync_tarball)
        archive.extract(tarball_path, self.srcdir)
        self.srcdir = os.path.join(self.srcdir, 'synctest')
        build.make(self.srcdir)

    def test(self):
        """
        Execute synctest with the appropriate params.
        """
        os.chdir(self.srcdir)
        cmd = ('./synctest %s %s' %
               (self.params.sync_length, self.params.sync_loop))
        process.system(cmd)
        os.chdir(self.cwd)


if __name__ == "__main__":
    main()
Here we have an example of the setUp method in action: we get the
location of the test suite code (tarball) through
avocado.Test.datadir(), then uncompress the tarball through
avocado.utils.archive.extract(), an API that will
decompress the suite tarball, followed by avocado.utils.build.make(), which
will build the suite.
The setUp
method is the only place in avocado where you are allowed to
call the skip
method, given that, if a test started to be executed, by
definition it can’t be skipped anymore. Avocado will do its best to enforce
this boundary, so that if you use skip
outside setUp
, the test upon
execution will be marked with the ERROR
status, and the error message
will instruct you to fix your test’s code.
In this example, the test
method just gets into the base directory of
the compiled suite and executes the ./synctest
command, with appropriate
parameters, using avocado.utils.process.system()
.
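As a side note on the skip rule above, here is a minimal sketch of calling skip from setUp when a precondition is not met (the checked path is purely illustrative):

import os

from avocado import Test


class SkipExample(Test):

    def setUp(self):
        # skip() may only be called here, before the test itself starts
        if not os.path.exists('/usr/bin/synctest'):  # illustrative check
            self.skip('synctest binary not available')

    def test(self):
        pass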
Fetching asset files¶
To run third party test suites as mentioned above, or for any other purpose,
we offer an asset fetcher as a method of the Avocado Test class.
The asset method looks for a list of directories in the cache_dirs
key,
inside the [datadir.paths]
section from the configuration files. Read-only
directories are also supported. When the asset file is not present in any of
the provided directories, we will try to download the file from the provided
locations, copying it to the first writable cache directory. Example:
cache_dirs = ['/usr/local/src/', '~/avocado/cache']
In the example above, /usr/local/src/
is a read-only directory. In that
case, when we need to fetch the asset from the locations, it will be copied to
the ~/avocado/cache
directory.
If you don’t provide a cache_dirs
, we will create a cache
directory
inside the avocado data_dir
location to put the fetched files in.
Use case 1: no cache_dirs key in config files, only the asset name provided in the full url format:

    ...
    def setUp(self):
        stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
        tarball = self.fetch_asset(stress)
        archive.extract(tarball, self.srcdir)
    ...

In this case, fetch_asset() will download the file from the url provided, copying it to the $data_dir/cache directory. The tarball variable will contain, for example, /home/user/avocado/data/cache/stress-1.0.4.tar.gz.

Use case 2: read-only cache directory provided, cache_dirs = ['/mnt/files']:

    ...
    def setUp(self):
        stress = 'http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz'
        tarball = self.fetch_asset(stress)
        archive.extract(tarball, self.srcdir)
    ...

In this case, we try to find the stress-1.0.4.tar.gz file in the /mnt/files directory. If it’s not there, since /mnt/files is read-only, we will try to download the asset file to the $data_dir/cache directory.

Use case 3: writable cache directory provided, along with a list of locations, cache_dirs = ['~/avocado/cache']:

    ...
    def setUp(self):
        st_name = 'stress-1.0.4.tar.gz'
        st_hash = 'e1533bc704928ba6e26a362452e6db8fd58b1f0b'
        st_loc = ['http://people.seas.harvard.edu/~apw/stress/stress-1.0.4.tar.gz',
                  'ftp://foo.bar/stress-1.0.4.tar.gz']
        tarball = self.fetch_asset(st_name, asset_hash=st_hash,
                                   locations=st_loc)
        archive.extract(tarball, self.srcdir)
    ...

In this case, we try to download stress-1.0.4.tar.gz from the provided locations list (if it’s not already in ~/avocado/cache). The hash was also provided, so we will verify it. To do so, we first look for a hashfile named stress-1.0.4.tar.gz.sha1 in the same directory. If the hashfile is not present, we compute the hash and create the hashfile for further usage.

The resulting tarball variable content will be ~/avocado/cache/stress-1.0.4.tar.gz. An exception will take place if we fail to download or to verify the file.
Detailing the fetch_asset() attributes:

- name: The name used for the fetched file. It can also contain a full URL, which will be used as the first location to try (after searching the cache directories).
- asset_hash: (optional) The expected file hash. If missing, we skip the check. If provided, before computing the hash, we look for a hashfile to verify the asset. If the hashfile is not present, we compute the hash and create the hashfile in the same cache directory for further usage.
- algorithm: (optional) Provided hash algorithm format. Defaults to sha1.
- locations: (optional) List of locations that will be used to try to fetch the file from. The supported schemes are http://, https://, ftp:// and file://. You’re required to inform the full url to the file, including the file name. The first success will skip the next locations. Notice that for file:// we just create a symbolic link in the cache directory, pointing to the original file location.
- expire: (optional) The time period during which the cached file will be considered valid. After that period, the file will be downloaded again. The value can be an integer or a string containing the time and the unit. Example: ‘10d’ (ten days). Valid units are s (second), m (minute), h (hour) and d (day).

fetch_asset() returns the asset file path, or raises an exception on failure.
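Putting those attributes together, here is a sketch of a setUp using them all (the name, hash and location are reused from use case 3; the expire value is illustrative):

from avocado import Test
from avocado.utils import archive


class FetchExample(Test):

    def setUp(self):
        tarball = self.fetch_asset(
            'stress-1.0.4.tar.gz',
            asset_hash='e1533bc704928ba6e26a362452e6db8fd58b1f0b',
            algorithm='sha1',
            locations=['http://people.seas.harvard.edu/~apw/stress/'
                       'stress-1.0.4.tar.gz'],
            expire='10d')  # re-download after ten days
        archive.extract(tarball, self.srcdir)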
Test Output Check and Output Record Mode¶
In a lot of occasions, you want to go simpler: just check if the output of a
given application matches an expected output. In order to help with this common
use case, we offer the option --output-check-record [mode]
to the test runner:
--output-check-record OUTPUT_CHECK_RECORD
                      Record output streams of your tests to reference files
                      (valid options: none (do not record output streams),
                      all (record both stdout and stderr), stdout (record
                      only stdout), stderr (record only stderr)). Default:
                      none
If this option is used, it will store the stdout or stderr of the process (or
both, if you specified all) being executed to reference files: stdout.expected
and stderr.expected. Those files will be recorded in the test data dir. The
data dir is in the same directory as the test source file, named
[source_file_name].data. Let’s take as an example the test synctest.py. In a
fresh checkout of Avocado, you can see:
examples/tests/synctest.py.data/stderr.expected
examples/tests/synctest.py.data/stdout.expected
From those two files, only stdout.expected is non-empty:
$ cat examples/tests/synctest.py.data/stdout.expected
PAR : waiting
PASS : sync interrupted
The output files were originally obtained using the test runner and passing the option --output-check-record all to the test runner:
$ scripts/avocado run --output-check-record all synctest.py
JOB ID : bcd05e4fd33e068b159045652da9eb7448802be5
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.20-bcd05e4/job.log
TESTS : 1
(1/1) synctest.py:SyncTest.test: PASS (2.20 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 2.20 s
After the reference files are added, the check process is transparent, in the sense
that you do not need to provide special flags to the test runner.
Now, every time the test is executed, after it is done running, it will check
if the outputs are exactly right before considering the test as PASSed. If you want to override the default
behavior and skip output check entirely, you may provide the flag --output-check=off
to the test runner.
The avocado.utils.process APIs have a parameter allow_output_check (defaults to all), so that you
can select which process outputs will go to the reference files, should you choose to record them. You may choose
all, for both stdout and stderr, stdout, for the stdout only, stderr, for the stderr only, or none,
to allow neither of them to be recorded and checked.
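As a sketch of how that parameter might be used (the commands are illustrative):

from avocado import Test
from avocado.utils import process


class OutputCheckExample(Test):

    def test(self):
        # this command's output takes part in recording/checking (default: all)
        process.run("/bin/echo checked output")
        # this command's output is left out of the reference files
        process.run("/bin/df", allow_output_check="none")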
This process also works fine with simple tests, which are programs or shell scripts that return 0 (PASSed) or != 0 (FAILed). Let’s consider our bogus example:
$ cat output_record.sh
#!/bin/bash
echo "Hello, world!"
Let’s record the output for this one:
$ scripts/avocado run output_record.sh --output-check-record all
JOB ID : 25c4244dda71d0570b7f849319cd71fe1722be8b
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.49-25c4244/job.log
TESTS : 1
(1/1) output_record.sh: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
After this is done, you’ll notice that the test data directory appeared at the same level as our shell script, containing 2 files:
$ ls output_record.sh.data/
stderr.expected stdout.expected
Let’s look what’s in each of them:
$ cat output_record.sh.data/stdout.expected
Hello, world!
$ cat output_record.sh.data/stderr.expected
$
Now, every time this test runs, it’ll take into account the expected files that
were recorded, no need to do anything else but run the test. Let’s see what
happens if we change the stdout.expected
file contents to Hello, Avocado!
:
$ scripts/avocado run output_record.sh
JOB ID : f0521e524face93019d7cb99c5765aedd933cb2e
JOB LOG : $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
TESTS : 1
(1/1) output_record.sh: FAIL (0.02 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.02 s
Verifying the failure reason:
$ cat $HOME/avocado/job-results/job-2014-09-25T20.52-f0521e5/job.log
20:52:38 test L0163 INFO | START 1-output_record.sh
20:52:38 test L0164 DEBUG|
20:52:38 test L0165 DEBUG| Test instance parameters:
20:52:38 test L0173 DEBUG|
20:52:38 test L0176 DEBUG| Default parameters:
20:52:38 test L0180 DEBUG|
20:52:38 test L0181 DEBUG| Test instance params override defaults whenever available
20:52:38 test L0182 DEBUG|
20:52:38 process L0242 INFO | Running '$HOME/Code/avocado/output_record.sh'
20:52:38 process L0310 DEBUG| [stdout] Hello, world!
20:52:38 test L0565 INFO | Command: $HOME/Code/avocado/output_record.sh
20:52:38 test L0565 INFO | Exit status: 0
20:52:38 test L0565 INFO | Duration: 0.00313782691956
20:52:38 test L0565 INFO | Stdout:
20:52:38 test L0565 INFO | Hello, world!
20:52:38 test L0565 INFO |
20:52:38 test L0565 INFO | Stderr:
20:52:38 test L0565 INFO |
20:52:38 test L0060 ERROR|
20:52:38 test L0063 ERROR| Traceback (most recent call last):
20:52:38 test L0063 ERROR| File "$HOME/Code/avocado/avocado/test.py", line 397, in check_reference_stdout
20:52:38 test L0063 ERROR| self.assertEqual(expected, actual, msg)
20:52:38 test L0063 ERROR| File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
20:52:38 test L0063 ERROR| assertion_func(first, second, msg=msg)
20:52:38 test L0063 ERROR| File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
20:52:38 test L0063 ERROR| raise self.failureException(msg)
20:52:38 test L0063 ERROR| AssertionError: Actual test sdtout differs from expected one:
20:52:38 test L0063 ERROR| Actual:
20:52:38 test L0063 ERROR| Hello, world!
20:52:38 test L0063 ERROR|
20:52:38 test L0063 ERROR| Expected:
20:52:38 test L0063 ERROR| Hello, Avocado!
20:52:38 test L0063 ERROR|
20:52:38 test L0064 ERROR|
20:52:38 test L0529 ERROR| FAIL 1-output_record.sh -> AssertionError: Actual test sdtout differs from expected one:
Actual:
Hello, world!
Expected:
Hello, Avocado!
20:52:38 test L0516 INFO |
As expected, the test failed because we changed its expectations.
Test log, stdout and stderr in native Avocado modules¶
If needed, you can write directly to the expected stdout and stderr files from the native test scope. It is important to make the distinction between the following entities:
- The test logs
- The test expected stdout
- The test expected stderr
The first one is used for debugging and informational purposes. Additionally, writing to self.log.warning causes the test to be marked as dirty; when everything else goes well, such a test ends with WARN. This means the test passed, but unexpected, non-related situations, described in the warning log, occurred.
You may log something into the test logs using the methods of the
avocado.Test.log class attribute. Consider the example:
class output_test(Test):

    def test(self):
        self.log.info('This goes to the log and it is only informational')
        self.log.warn('Oh, something unexpected, non-critical happened, '
                      'but we can continue.')
        self.log.error("Describe the error here and don't forget to raise "
                       "an exception yourself. Writing to self.log.error "
                       "won't do that for you.")
        self.log.debug('Everybody look, I had a good lunch today...')
If you need to write directly to the test stdout and stderr streams,
Avocado makes two preconfigured loggers available for that purpose,
named avocado.test.stdout
and avocado.test.stderr
. You can use
Python’s standard logging API to write to them. Example:
import logging

from avocado import Test


class output_test(Test):

    def test(self):
        stdout = logging.getLogger('avocado.test.stdout')
        stdout.info('Informational line that will go to stdout')
        ...
        stderr = logging.getLogger('avocado.test.stderr')
        stderr.info('Informational line that will go to stderr')
Avocado will automatically save anything a test generates on STDOUT
into a stdout
file, to be found at the test results directory. The same
applies to anything a test generates on STDERR, that is, it will be saved
into a stderr
file at the same location.
Additionally, when using the runner’s output recording features,
namely the --output-check-record
argument with values stdout
,
stderr
or all
, everything given to those loggers will be saved
to the files stdout.expected
and stderr.expected
at the test’s
data directory (which is different from the job/test results directory).
Avocado Tests run on a separate process¶
To prevent tests from messing with the environment used by the main Avocado runner process, tests are run in a forked subprocess. This allows for more robustness (tests cannot easily mess with or break Avocado) and some nifty features, such as setting test timeouts.
Setting a Test Timeout¶
Sometimes your test suite/test might get stuck forever, and this might
impact your test grid. You can account for that possibility and set up a
timeout
parameter for your test. The test timeout can be set through
2 means, in the following order of precedence:
- Multiplex variable parameters. You may just set the timeout parameter, like in the following simplistic example:
sleep_length = 5
sleep_length_type = float
timeout = 3
timeout_type = float
$ avocado run sleeptest.py --mux-yaml /tmp/sleeptest-example.yaml
JOB ID : 6d5a2ff16bb92395100fbc3945b8d253308728c9
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
TESTS : 1
(1/1) sleeptest.py:SleepTest.test: ERROR (2.97 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 2.97 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/html/results.html
$ cat $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/job.log
15:52:51 test L0143 INFO | START 1-sleeptest.py
15:52:51 test L0144 DEBUG|
15:52:51 test L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.52-6d5a2ff1/sleeptest.1/test.log
15:52:51 test L0146 DEBUG| Test instance parameters:
15:52:51 test L0153 DEBUG| _name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
15:52:51 test L0153 DEBUG| _short_name_map_file = {'sleeptest-example.yaml': 'sleeptest'}
15:52:51 test L0153 DEBUG| dep = []
15:52:51 test L0153 DEBUG| id = sleeptest
15:52:51 test L0153 DEBUG| name = sleeptest
15:52:51 test L0153 DEBUG| shortname = sleeptest
15:52:51 test L0153 DEBUG| sleep_length = 5.0
15:52:51 test L0153 DEBUG| sleep_length_type = float
15:52:51 test L0153 DEBUG| timeout = 3.0
15:52:51 test L0153 DEBUG| timeout_type = float
15:52:51 test L0154 DEBUG|
15:52:51 test L0157 DEBUG| Default parameters:
15:52:51 test L0159 DEBUG| sleep_length = 1.0
15:52:51 test L0161 DEBUG|
15:52:51 test L0162 DEBUG| Test instance params override defaults whenever available
15:52:51 test L0163 DEBUG|
15:52:51 test L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15670 to end
15:52:51 test L0170 INFO |
15:52:51 sleeptest L0035 DEBUG| Sleeping for 5.00 seconds
15:52:54 test L0057 ERROR|
15:52:54 test L0060 ERROR| Traceback (most recent call last):
15:52:54 test L0060 ERROR| File "$HOME/Code/avocado/tests/sleeptest.py", line 36, in action
15:52:54 test L0060 ERROR| time.sleep(self.params.sleep_length)
15:52:54 test L0060 ERROR| File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:52:54 test L0060 ERROR| raise exceptions.TestTimeoutError(e_msg)
15:52:54 test L0060 ERROR| TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test L0061 ERROR|
15:52:54 test L0400 ERROR| ERROR 1-sleeptest.py -> TestTimeoutError: Timeout reached waiting for sleeptest to end
15:52:54 test L0387 INFO |
If you pass that multiplex file to the runner multiplexer, this will register
a timeout of 3 seconds before Avocado ends the test forcefully by sending a
signal.SIGTERM
to the test, making it raise a
avocado.core.exceptions.TestTimeoutError
.
- Default params attribute. Consider the following example:
import time

from avocado import Test
from avocado import main


class TimeoutTest(Test):

    """
    Functional test for Avocado. Throw a TestTimeoutError.
    """
    default_params = {'timeout': 3.0,
                      'sleep_time': 5.0}

    def test(self):
        """
        This should throw a TestTimeoutError.
        """
        self.log.info('Sleeping for %.2f seconds (2 more than the timeout)',
                      self.params.sleep_time)
        time.sleep(self.params.sleep_time)


if __name__ == "__main__":
    main()
This accomplishes a similar effect to the multiplex setup defined there.
$ avocado run timeouttest.py
JOB ID : d78498a54504b481192f2f9bca5ebb9bbb820b8a
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
TESTS : 1
(1/1) timeouttest.py:TimeoutTest.test: ERROR (3.04 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 3.04 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/html/results.html
$ cat $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/job.log
15:54:28 test L0143 INFO | START 1-timeouttest.py:TimeoutTest.test
15:54:28 test L0144 DEBUG|
15:54:28 test L0145 DEBUG| Test log: $HOME/avocado/job-results/job-2014-08-12T15.54-d78498a5/timeouttest.1/test.log
15:54:28 test L0146 DEBUG| Test instance parameters:
15:54:28 test L0153 DEBUG| id = timeouttest
15:54:28 test L0154 DEBUG|
15:54:28 test L0157 DEBUG| Default parameters:
15:54:28 test L0159 DEBUG| sleep_time = 5.0
15:54:28 test L0159 DEBUG| timeout = 3.0
15:54:28 test L0161 DEBUG|
15:54:28 test L0162 DEBUG| Test instance params override defaults whenever available
15:54:28 test L0163 DEBUG|
15:54:28 test L0169 INFO | Test timeout set. Will wait 3.00 s for PID 15759 to end
15:54:28 test L0170 INFO |
15:54:28 timeouttes L0036 INFO | Sleeping for 5.00 seconds (2 more than the timeout)
15:54:31 test L0057 ERROR|
15:54:31 test L0060 ERROR| Traceback (most recent call last):
15:54:31 test L0060 ERROR| File "$HOME/Code/avocado/tests/timeouttest.py", line 37, in action
15:54:31 test L0060 ERROR| time.sleep(self.params.sleep_time)
15:54:31 test L0060 ERROR| File "$HOME/Code/avocado/avocado/job.py", line 127, in timeout_handler
15:54:31 test L0060 ERROR| raise exceptions.TestTimeoutError(e_msg)
15:54:31 test L0060 ERROR| TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test L0061 ERROR|
15:54:31 test L0400 ERROR| ERROR 1-timeouttest.py:TimeoutTest.test -> TestTimeoutError: Timeout reached waiting for timeouttest to end
15:54:31 test L0387 INFO |
Test Tags¶
The need may arise for more complex tests that use more advanced Python features,
such as inheritance. Because Avocado uses a safe test introspection
method, which is more limited than actually loading the test classes, it
may need your help to identify those tests. For example, let’s say you are
defining a new test class that inherits from the Avocado base test class and
putting it in mylibrary.py:
from avocado import Test


class MyOwnDerivedTest(Test):

    def __init__(self, methodName='test', name=None, params=None,
                 base_logdir=None, job=None, runner_queue=None):
        super(MyOwnDerivedTest, self).__init__(methodName, name, params,
                                               base_logdir, job,
                                               runner_queue)
        self.log.info('Derived class example')
Then implement your actual test using that derived class, in mytest.py
:
import mylibrary


class MyTest(mylibrary.MyOwnDerivedTest):

    def test1(self):
        self.log.info('Testing something important')

    def test2(self):
        self.log.info('Testing something even more important')
If you try to list the tests in that file, this is what you’ll get:
scripts/avocado list mytest.py -V
Type Test
NOT_A_TEST mytest.py
ACCESS_DENIED: 0
BROKEN_SYMLINK: 0
EXTERNAL: 0
FILTERED: 0
INSTRUMENTED: 0
MISSING: 0
NOT_A_TEST: 1
SIMPLE: 0
VT: 0
You need to give avocado a little help by adding a docstring tag. That docstring
tag is :avocado: enable. It tells the Avocado safe test detection
code to consider it as an avocado test, regardless of what the (admittedly simple)
detection code thinks of it. Let’s see how that works out. Add the docstring,
as in the example below:
import mylibrary


class MyTest(mylibrary.MyOwnDerivedTest):

    """
    :avocado: enable
    """

    def test1(self):
        self.log.info('Testing something important')

    def test2(self):
        self.log.info('Testing something even more important')
Now, trying to list the tests on the mytest.py
file again:
scripts/avocado list mytest.py -V
Type Test
INSTRUMENTED mytest.py:MyTest.test1
INSTRUMENTED mytest.py:MyTest.test2
ACCESS_DENIED: 0
BROKEN_SYMLINK: 0
EXTERNAL: 0
FILTERED: 0
INSTRUMENTED: 2
MISSING: 0
NOT_A_TEST: 0
SIMPLE: 0
VT: 0
You can also use the :avocado: disable tag, which works the opposite way:
something looks like an Avocado test, but we force it not to be listed as one.
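For instance, here is a sketch of a base class that would otherwise be listed as a test (class and docstring are illustrative):

from avocado import Test


class MyBaseClass(Test):

    """
    Not meant to be executed directly, only inherited from.

    :avocado: disable
    """

    def test(self):
        pass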
Python unittest
Compatibility Limitations And Caveats¶
When executing tests, Avocado uses different techniques than most other Python unittest runners. This brings some compatibility limitations that Avocado users should be aware of.
Execution Model¶
One of the main differences is a consequence of the Avocado design decision that tests should be self contained and isolated from other tests. Additionally, the Avocado test runner runs each test in a separate process.
If you have a unittest class with many test methods and run them
using most test runners, you’ll find that all test methods run under
the same process. To check that behavior you could add to your
setUp
method:
def setUp(self):
    print("PID: %s" % os.getpid())
If you run the same test under Avocado, you’ll find that each test is run on a separate process.
Class Level setUp
and tearDown
¶
Because of Avocado’s test execution model (each test is run on a
separate process), it doesn’t make sense to support unittest’s
unittest.TestCase.setUpClass()
and
unittest.TestCase.tearDownClass()
. Test classes are freshly
instantiated for each test, so it’s pointless to run code in those
methods, since they’re supposed to keep class state between tests.
If you require a common setup for a number of tests, the currently
recommended approach is to write regular setUp and tearDown
code that checks if a given state was already set. One example of
such a test, requiring a binary installed by a package:
from avocado import Test

from avocado.utils import software_manager
from avocado.utils import path as utils_path
from avocado.utils import process


class BinSleep(Test):

    """
    Sleeps using the /bin/sleep binary
    """
    def setUp(self):
        self.sleep = None
        try:
            self.sleep = utils_path.find_command('sleep')
        except utils_path.CmdNotFoundError:
            software_manager.install_distro_packages({'fedora': ['coreutils']})
            self.sleep = utils_path.find_command('sleep')

    def test(self):
        process.run("%s 1" % self.sleep)
If your test setup is some kind of action that will last across processes, like the installation of a software package in the previous example, you’re pretty much covered here.
If you need to keep any other type of class data across test executions, you’ll have to resort to saving and restoring the data from an outside source (say, a “pickle” file). Finding and using a reliable and safe location for saving such data is currently not in the Avocado supported use cases.
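A rough sketch of that save/restore pattern follows; the state file location is hypothetical and, as noted above, choosing a reliable and safe location is up to you:

import os
import pickle

from avocado import Test

STATE_FILE = '/var/tmp/my_suite_state.pickle'  # hypothetical location


class StatefulTest(Test):

    def setUp(self):
        # restore state saved by a previously run test process, if any
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, 'rb') as state:
                self.counter = pickle.load(state)
        else:
            self.counter = 0

    def test(self):
        self.log.info("%s test processes ran before this one", self.counter)

    def tearDown(self):
        # save state for the next test process
        with open(STATE_FILE, 'wb') as state:
            pickle.dump(self.counter + 1, state)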
Environment Variables for Simple Tests¶
Avocado exports Avocado variables and multiplexed variables as BASH environment variables to the running test. Those variables are interesting to simple tests because, unlike native tests, they cannot use the Avocado Python API directly; the environment variables also give them access to the test parameters.
Here are the current variables that Avocado exports to the tests:
| Environment Variable | Meaning | Example |
|---|---|---|
| AVOCADO_VERSION | Version of Avocado test runner | 0.12.0 |
| AVOCADO_TEST_BASEDIR | Base directory of Avocado tests | $HOME/Downloads/avocado-source/avocado |
| AVOCADO_TEST_DATADIR | Data directory for the test | $AVOCADO_TEST_BASEDIR/my_test.sh.data |
| AVOCADO_TEST_WORKDIR | Work directory for the test | /var/tmp/avocado_Bjr_rd/my_test.sh |
| AVOCADO_TEST_SRCDIR | Source directory for the test | /var/tmp/avocado_Bjr_rd/my-test.sh/src |
| AVOCADO_TEST_LOGDIR | Log directory for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1 |
| AVOCADO_TEST_LOGFILE | Log file for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/debug.log |
| AVOCADO_TEST_OUTPUTDIR | Output directory for the test | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/data |
| AVOCADO_TEST_SYSINFODIR | The system information directory | $HOME/logs/job-results/job-2014-09-16T14.38-ac332e6/test-results/$HOME/my_test.sh.1/sysinfo |
| * | All variables from --mux-yaml | TIMEOUT=60; IO_WORKERS=10; VM_BYTES=512M; ... |
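Since a simple test may be any executable, those variables can also be read from, say, a simple test written in Python; a minimal sketch (the variable chosen is just an example):

#!/usr/bin/env python
import os
import sys

# print where Avocado keeps this test's logs, then PASS
print("log directory: %s" % os.environ.get("AVOCADO_TEST_LOGDIR", "<unset>"))
sys.exit(0)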
Simple Tests BASH extensions¶
To enhance simple tests, one can use the supported set of libraries we created. The only requirement is to use:
PATH=$(avocado "exec-path"):$PATH
which injects the path to the Avocado utils into the shell PATH. Take a look at
avocado exec-path to see the list of available functions, and look at
examples/tests/simplewarning.sh for inspiration.
Wrap Up¶
We recommend you take a look at the example tests present in the
examples/tests
directory, that contains a few samples to take some
inspiration from. That directory, besides containing examples, is also used by
the Avocado self test suite to do functional testing of Avocado itself.
It is also recommended that you take a look at the API Reference for more possibilities.
[1] sleeptest is a functional test for Avocado. It’s “old” because we also have had such a test for Autotest for a long time.
Result Formats¶
A test runner must provide an assortment of ways to clearly communicate results to interested parties, be they humans or machines.
Results for human beings¶
Avocado has two different result formats that are intended for human beings:
- Its default UI, which shows the live test execution results on a command line, text based, UI.
- The HTML report, which is generated after the test job finishes running.
Avocado command line UI¶
A regular run of Avocado will present the test results in a live fashion, that is, the job and its test(s) results are constantly updated:
$ avocado run sleeptest.py failtest.py synctest.py
JOB ID : 5ffe479262ea9025f2e4e84c4e92055b5c79bdc9
JOB LOG : $HOME/avocado/job-results/job-2014-08-12T15.57-5ffe4792/job.log
TESTS : 3
(1/3) sleeptest.py:SleepTest.test: PASS (1.01 s)
(2/3) failtest.py:FailTest.test: FAIL (0.00 s)
(3/3) synctest.py:SyncTest.test: PASS (1.98 s)
RESULTS : PASS 2 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 3.17 s
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.57-5ffe4792/html/results.html
The most important thing is to remember that programs should never need to parse human output to figure out what happened to a test job run.
HTML report¶
As can be seen in the previous example, Avocado shows the path to an HTML report that will be generated as soon as the job finishes running:
$ avocado run sleeptest.py failtest.py synctest.py
...
JOB HTML : $HOME/avocado/job-results/job-2014-08-12T15.57-5ffe4792/html/results.html
...
You can also request that the report be opened automatically by using the --open-browser option. For example:
$ avocado run sleeptest --open-browser
will show you the nice-looking HTML results report right after sleeptest finishes running.
Machine readable results¶
Another type of result is the kind intended to be parsed by other applications. Several standards exist in the test community, and Avocado can in theory support pretty much every result standard out there.
Out of the box, Avocado supports a couple of machine readable results. They are always generated and stored in the results directory in results.$type files, but you can ask for a different location too.
xunit¶
The default machine readable output in Avocado is xunit.
xunit is an XML format that contains test results in a structured form, and it is used by other test automation projects, such as Jenkins. If you want Avocado to generate xunit output on the standard output of the runner, simply use:
$ avocado run sleeptest.py failtest.py synctest.py --xunit -
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="avocado" tests="3" errors="0" failures="1" skipped="0" time="3.5769162178" timestamp="2016-05-04 14:46:52.803365">
<testcase classname="SleepTest" name="1-sleeptest.py:SleepTest.test" time="1.00204920769"/>
<testcase classname="FailTest" name="2-failtest.py:FailTest.test" time="0.00120401382446">
<failure type="TestFail" message="This test is supposed to fail"><![CDATA[Traceback (most recent call last):
File "/home/medic/Work/Projekty/avocado/avocado/avocado/core/test.py", line 490, in _run_avocado
raise test_exception
TestFail: This test is supposed to fail
]]></failure>
<system-out><![CDATA[14:46:53 ERROR|
14:46:53 ERROR| Reproduced traceback from: /home/medic/Work/Projekty/avocado/avocado/avocado/core/test.py:435
14:46:53 ERROR| Traceback (most recent call last):
14:46:53 ERROR| File "/home/medic/Work/Projekty/avocado/avocado/examples/tests/failtest.py", line 17, in test
14:46:53 ERROR| self.fail('This test is supposed to fail')
14:46:53 ERROR| File "/home/medic/Work/Projekty/avocado/avocado/avocado/core/test.py", line 585, in fail
14:46:53 ERROR| raise exceptions.TestFail(message)
14:46:53 ERROR| TestFail: This test is supposed to fail
14:46:53 ERROR|
14:46:53 ERROR| FAIL 2-failtest.py:FailTest.test -> TestFail: This test is supposed to fail
14:46:53 INFO |
]]></system-out>
</testcase>
<testcase classname="SyncTest" name="3-synctest.py:SyncTest.test" time="2.57366299629"/>
</testsuite>
Note
The dash (-) in the --xunit option means that the xunit result should go to the standard output.
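Being plain XML, the xunit output can be consumed by any XML parser. As an illustration, here is a minimal sketch that summarizes a result file, assuming it was saved to a file (e.g. with --xunit results.xml) instead of being sent to the standard output:
# Minimal sketch: summarizing an Avocado xunit result file with the
# standard library XML parser (the file name "results.xml" is an
# assumption; use whatever path you passed to --xunit).
import xml.etree.ElementTree as ET

suite = ET.parse('results.xml').getroot()
print("tests: %s, failures: %s, errors: %s" %
      (suite.get('tests'), suite.get('failures'), suite.get('errors')))
for testcase in suite.findall('testcase'):
    failure = testcase.find('failure')
    if failure is not None:
        print("FAILED: %s (%s)" % (testcase.get('name'),
                                   failure.get('message')))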
json¶
JSON is a widely used data exchange format. The json Avocado plugin outputs job information, similarly to the xunit output plugin:
$ avocado run sleeptest.py failtest.py synctest.py --json -
{
"debuglog": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/job.log",
"errors": 0,
"failures": 1,
"job_id": "10715c4645d2d2b57889d7a4317fcd01451b600e",
"pass": 2,
"skip": 0,
"tests": [
{
"end": 1470761623.176954,
"fail_reason": "None",
"logdir": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/1-sleeptest.py:SleepTest.test",
"logfile": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/1-sleeptest.py:SleepTest.test/debug.log",
"start": 1470761622.174918,
"status": "PASS",
"test": "1-sleeptest.py:SleepTest.test",
"time": 1.0020360946655273,
"url": "1-sleeptest.py:SleepTest.test",
"whiteboard": ""
},
{
"end": 1470761623.193472,
"fail_reason": "This test is supposed to fail",
"logdir": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/2-failtest.py:FailTest.test",
"logfile": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/2-failtest.py:FailTest.test/debug.log",
"start": 1470761623.192334,
"status": "FAIL",
"test": "2-failtest.py:FailTest.test",
"time": 0.0011379718780517578,
"url": "2-failtest.py:FailTest.test",
"whiteboard": ""
},
{
"end": 1470761625.656061,
"fail_reason": "None",
"logdir": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/3-synctest.py:SyncTest.test",
"logfile": "/home/cleber/avocado/job-results/job-2016-08-09T13.53-10715c4/test-results/3-synctest.py:SyncTest.test/debug.log",
"start": 1470761623.208165,
"status": "PASS",
"test": "3-synctest.py:SyncTest.test",
"time": 2.4478960037231445,
"url": "3-synctest.py:SyncTest.test",
"whiteboard": ""
}
],
"time": 3.4510700702667236,
"total": 3
}
Note
The dash (-) in the --json option means that the json result should go to the standard output.
Bear in mind that there’s no documented standard for the Avocado JSON result format. This means that it will probably grow organically to accommodate newer Avocado features. A reasonable effort will be made to not break backwards compatibility with applications that parse the current form of its JSON result.
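Even so, the fields shown above can already be consumed with any JSON parser. A minimal sketch that reads a result file (saved, for example, with --json /tmp/result.json) and lists the tests that did not pass:
# Minimal sketch: reading an Avocado JSON result file; the path is the
# one used in the example above, adjust it to your own run.
import json

with open('/tmp/result.json') as result_file:
    results = json.load(result_file)

print("%s tests, %s failures" % (results['total'], results['failures']))
for test in results['tests']:
    if test['status'] != 'PASS':
        print("%s: %s (%s)" % (test['status'], test['test'],
                               test['fail_reason']))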
TAP¶
Provides the basic TAP (Test Anything Protocol) results, currently in v12. Unlike most existing Avocado machine readable outputs, this one is streamed (results are reported per test, as each one finishes):
$ avocado run sleeptest.py --tap -
1..1
# debug.log of sleeptest.py:SleepTest.test:
# 12:04:38 DEBUG| PARAMS (key=sleep_length, path=*, default=1) => 1
# 12:04:38 DEBUG| Sleeping for 1.00 seconds
# 12:04:39 INFO | PASS 1-sleeptest.py:SleepTest.test
# 12:04:39 INFO |
ok 1 sleeptest.py:SleepTest.test
Silent result¶
While not a very fancy result format, an application may want nothing but the exit status code from an Avocado test job run. Example:
$ avocado --silent run failtest.py
$ echo $?
1
In practice, this would usually be used by scripts that will in turn run Avocado and check its results:
#!/bin/bash
...
avocado --silent run /path/to/my/test.py
if [ $? == 0 ]; then
    echo "great success!"
elif
...
More details regarding exit codes can be found in the Exit Codes section.
Multiple results at once¶
You can have multiple result formats at once, as long as only one of them uses the standard output. For example, it is fine to use the xunit result on stdout and write the JSON result to a file:
$ avocado run sleeptest.py synctest.py --xunit - --json /tmp/result.json
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="avocado" tests="2" errors="0" failures="0" skipped="0" time="3.64848303795" timestamp="2016-05-04 17:26:05.645665">
<testcase classname="SleepTest" name="1-sleeptest.py:SleepTest.test" time="1.00270605087"/>
<testcase classname="SyncTest" name="2-synctest.py:SyncTest.test" time="2.64577698708"/>
</testsuite>
$ cat /tmp/result.json
{
"debuglog": "/home/cleber/avocado/job-results/job-2016-08-09T13.55-1a94ad6/job.log",
"errors": 0,
...
}
But you won't be able to point two result formats at the standard output at the same time:
$ avocado run sleeptest.py synctest.py --xunit - --json -
Options --json --xunit are trying to use stdout simultaneously
Please set at least one of them to a file to avoid conflicts
That’s basically the only rule, and a sane one, that you need to follow.
Exit Codes¶
The Avocado exit code tries to represent the different things that can happen during an execution. That means exit codes can be a combination of individual codes ORed together into a single exit code. The final exit code can be debundled so users can have a good idea of what happened to the job.
The single individual exit codes are:
- AVOCADO_ALL_OK (0)
- AVOCADO_TESTS_FAIL (1)
- AVOCADO_JOB_FAIL (2)
- AVOCADO_FAIL (4)
- AVOCADO_JOB_INTERRUPTED (8)
If a job finishes with exit code 9, for example, it means we had at least one test that failed and also we had at some point a job interruption, probably due to the job timeout or a CTRL+C.
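Since the final exit code is a combination of the individual codes ORed together, a caller can debundle it with bitwise AND. A minimal sketch (the constant names mirror the list above; failtest.py is one of the bundled example tests):
# Minimal sketch: running an avocado job and decoding its exit status
# into the individual codes listed above.
import subprocess

AVOCADO_TESTS_FAIL = 1
AVOCADO_JOB_FAIL = 2
AVOCADO_FAIL = 4
AVOCADO_JOB_INTERRUPTED = 8

status = subprocess.call(['avocado', '--silent', 'run', 'failtest.py'])
if status == 0:
    print("all OK")
if status & AVOCADO_TESTS_FAIL:
    print("at least one test failed")
if status & AVOCADO_JOB_FAIL:
    print("the job itself failed")
if status & AVOCADO_FAIL:
    print("generic avocado failure")
if status & AVOCADO_JOB_INTERRUPTED:
    print("the job was interrupted (e.g. job timeout or CTRL+C)")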
Implementing other result formats¶
If you are looking to implement a new machine or human readable output
format, you can refer to avocado.core.plugins.xunit
and use it as a
starting point.
If your result is something that is produced at once, based on the
complete job outcome, you should create a new class that inherits from
avocado.core.plugin_interfaces.Result
and implements the
avocado.core.plugin_interfaces.Result.render()
method.
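A rough sketch of such a new-style result class follows; the result attributes and the plugin metadata used here are assumptions modeled on the bundled xunit/json result plugins, so check the API Reference for the exact interface:
from avocado.core.plugin_interfaces import Result


class SummaryResult(Result):

    name = 'summary'  # plugin metadata; an assumption in this sketch
    description = 'Writes a one-line-per-test summary file'

    def render(self, result, job):
        # render() is called once, after the job finishes; job.logdir
        # (assumed here) is the job results directory
        path = '%s/summary.txt' % job.logdir
        with open(path, 'w') as summary:
            # result.tests is assumed to be a list of per-test dicts,
            # similar to the "tests" list in the JSON output above
            for test in result.tests:
                summary.write('%s %s\n' % (test.get('status'),
                                           test.get('test')))
Note that such a class still has to be registered as a plugin (see the Plugin System reference mentioned below) before Avocado will load it.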
But, if your result implementation is something that outputs information live, before/after each test, you have to implement the old-style interface. Create a class that inherits from
avocado.core.result.Result
and implements all the public methods that perform actions (write to a file/stream) for each test state.
You can take a look at Plugin System for more information on how to write a plugin that will activate and execute the new result format.
Configuration¶
Avocado utilities have a certain default behavior based on educated, reasonable (we hope) guesses about how users like to use their systems. Of course, different people will have different needs and/or dislike our defaults, and that's why a configuration system is in place to help with those cases.
The Avocado config file format is based on the (informal) INI file 'specification', which is implemented by Python's ConfigParser. The format is simple and straightforward, composed of sections that contain a number of keys and values. Take for example a basic Avocado config file:
[datadir.paths]
base_dir = ~/avocado
test_dir = $HOME/Code/avocado/examples/tests
data_dir = /usr/share/avocado/data
logs_dir = ~/avocado/job-results
The datadir.paths section contains a number of keys, all of them related to directories used by the test runner. base_dir is the base directory for other important Avocado directories, such as the log, data and test directories. You can also choose to set those other important directories individually, by means of the variables test_dir, data_dir and logs_dir. You can do this by simply editing the config files available.
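Since the format is plain INI, the same files can also be inspected programmatically with Python's standard parser. A minimal sketch (the paths are the default locations described in the next section):
# Minimal sketch: reading Avocado config files with the standard
# library; later files override earlier ones, like Avocado itself does.
import os
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2

config = configparser.ConfigParser()
config.read(['/etc/avocado/avocado.conf',
             os.path.expanduser('~/.config/avocado/avocado.conf')])
print(config.get('datadir.paths', 'base_dir'))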
Config file parsing order¶
Avocado starts by parsing what it calls the system wide config file, which is shipped to all Avocado users in a system wide directory, /etc/avocado/avocado.conf. Then it'll check if there's a local user config file, usually located in ~/.config/avocado/avocado.conf. The order of the parsing matters: the system wide file is parsed first, and the user config file is parsed last, so that the user can override values at will. There is another directory that will be scanned for extra config files, /etc/avocado/conf.d. This directory may contain plugin config files and additional config files that the system administrator or the avocado developers might judge necessary to put there.
Please note that for base directories, if you choose a directory that can't be properly used by Avocado (some directories require read access, others read and write access), Avocado will fall back to some defaults. So if your regular user wants to write logs to /root/avocado/logs, Avocado will not use that directory, since it can't write files to that place. A new location, by default ~/avocado/job-results, will be selected instead.
The order of files described in this section is only valid if avocado was installed on the system. For people using avocado from git repos (usually avocado developers) who did not install it on the system, keep in mind that avocado will read the config files present in the git repos and will ignore the system wide config files. Running
avocado config
will let you know which files are actually being used.
Plugin config files¶
Plugins can also be configured by config files. In order not to disturb the main Avocado config file, plugins, if they wish, may install additional config files to /etc/avocado/conf.d/[pluginname].conf, which will be parsed after the system wide config file. Users can override those values as well, at the local config file level.
Consider the config for the hypothetical plugin salad:
[salad.core]
base = caesar
dressing = caesar
If you want, you may change dressing in your config file by simply adding a new [salad.core] section to your local config file and setting a different value for dressing there.
Parsing order recap¶
So the file parsing order is:
/etc/avocado/avocado.conf
/etc/avocado/conf.d/*.conf
~/.config/avocado/avocado.conf
In this order, meaning that what you set in your local config file may override what's defined in the system wide files.
Note
Please note that if avocado is running from git repos, those files will be ignored in favor of in tree configuration files. This is something that would normally only affect people developing avocado, and if you are in doubt, avocado config
will tell you exactly which files are being used in any given situation.
Order of precedence for values used in tests¶
Since you can use the config system to alter behavior and values used in tests (think paths to test programs, for example), we established the following order of precedence for variables (from least precedence to most):
- default value (from library or test code)
- global config file
- local (user) config file
- command line switch
- multiplexer
So the least important value comes from the library or test code default, going all the way up to the multiplexing system.
Config plugin¶
A configuration plugin is provided for users that wish to quickly see what’s defined in all sections of their Avocado configuration, after all the files are parsed in their correct resolution order. Example:
$ avocado config
Config files read (in order):
/etc/avocado/avocado.conf
$HOME/.config/avocado/avocado.conf
Section.Key Value
runner.base_dir /usr/share/avocado
runner.test_dir $HOME/Code/avocado/examples/tests
runner.data_dir /usr/share/avocado/data
runner.logs_dir ~/avocado/job-results
The command also shows the order in which your config files were parsed, giving you a better understanding of what's going on. The Section.Key nomenclature was inspired by the git config --list output.
Avocado Data Directories¶
When running tests, we are frequently looking to:
- Locate tests
- Write logs to a given location
- Grab files that will be useful for tests, such as ISO files or VM disk images
Avocado has a module dedicated to finding those paths, to avoid the cumbersome path manipulation magic that people had to do in previous test frameworks [1].
If you want to list all relevant directories for your test, you can use the avocado config --datadir command. Executing it will give you an output similar to the one seen below:
$ avocado config --datadir
Config files read (in order):
/etc/avocado/avocado.conf
$HOME/.config/avocado/avocado.conf
Avocado replaces config dirs that can't be accessed
with sensible defaults. Please edit your local config
file to customize values
Avocado Data Directories:
base $HOME/avocado
tests $HOME/Code/avocado/examples/tests
data $HOME/avocado/data
logs $HOME/avocado/job-results
Note that, while Avocado will do its best to use the config values you provide in the config file, if it can’t write values to the locations provided, it will fall back to (we hope) reasonable defaults, and we notify the user about that in the output of the command.
The relevant API documentation and meaning of each of those data directories
is in avocado.data_dir
, so it’s highly recommended you take a look.
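For a quick programmatic look, something along these lines should work (a minimal sketch; the module location and getter names used here are assumptions to be checked against that documentation):
# Minimal sketch: asking Avocado for its resolved data directories.
from avocado.core import data_dir

print("base dir: %s" % data_dir.get_base_dir())
print("test dir: %s" % data_dir.get_test_dir())
print("data dir: %s" % data_dir.get_data_dir())
print("logs dir: %s" % data_dir.get_logs_dir())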
You may set your preferred data dirs by setting them in the Avocado config files. The only exception among the important data dirs here is the Avocado tmp dir, used to place temporary files used by tests. Under normal circumstances that directory will be /var/tmp/avocado_XXXXX (where XXXXX is a random string), securely created on /var/tmp/, unless the user has the $TMPDIR environment variable set, honoring it as is customary among unix programs.
The next section of the documentation explains how you can see and set config values that modify the behavior for the Avocado utilities and plugins.
[1] For example, autotest.
Test discovery¶
In this section you can learn how tests are being discovered and how to affect this process.
The order of test loaders¶
Avocado supports different test types, starting with SIMPLE tests, which are simply executable files; then unittest-like tests, called INSTRUMENTED; up to tests like the avocado-vt ones, which use a complex matrix of tests from config files that don't map directly to existing files. Given the number of loaders, the mapping from test names on the command line to executed tests might not always be unique. Additionally, some people might want, always or just for a given run, to execute only tests of a single type.
To adjust this behavior you can either tweak plugins.loaders in the avocado settings (/etc/avocado/), or temporarily use the --loaders option of avocado run.
This option allows you to specify the order and some parameters of the available test loaders. You can specify either loader_name (file), loader_name + TEST_TYPE (file.SIMPLE), or, for some loaders, additional params passed after : (external:/bin/echo -e). You can also supply @DEFAULT, which injects into that position all the remaining unused loaders.
To get help about --loaders:
$ avocado run --loaders ?
$ avocado run --loaders external:?
Example of how --loaders affects the produced tests (gathered manually, as some of them result in an error):
$ avocado run passtest.py boot this_does_not_exist /bin/echo
> INSTRUMENTED passtest.py:PassTest.test
> VT io-github-autotest-qemu.boot
> MISSING this_does_not_exist
> SIMPLE /bin/echo
$ avocado run passtest.py boot this_does_not_exist /bin/echo --loaders @DEFAULT "external:/bin/echo -e"
> INSTRUMENTED passtest.py:PassTest.test
> VT io-github-autotest-qemu.boot
> EXTERNAL this_does_not_exist
> SIMPLE /bin/echo
$ avocado run passtest.py boot this_does_not_exist /bin/echo --loaders file.SIMPLE file.INSTRUMENTED @DEFAULT external.EXTERNAL:/bin/echo
> INSTRUMENTED passtest.py:PassTest.test
> VT io-github-autotest-qemu.boot
> EXTERNAL this_does_not_exist
> SIMPLE /bin/echo
Logging system¶
This section describes the logging system used in avocado and avocado tests.
Tweaking the UI¶
Avocado uses Python's logging system to produce the UI and to store the tests' output. The system is quite flexible and allows you to tweak the output to your needs, either by using the built-in stream sets or directly by using the stream names. To tweak them you can use avocado --show STREAM[:LEVEL][,STREAM[:LEVEL],...]. The built-in streams, with descriptions (followed by the associated Python streams), are:
app: The text based UI (avocado.app)
test: Output of the executed tests (avocado.test, "")
debug: Additional messages useful to debug avocado (avocado.app.debug)
remote: Fabric/paramiko debug messages, useful to analyze remote execution (avocado.fabric, paramiko)
early: Early logging before the logging system is set. It includes the test output and lots of output produced by used libraries. ("", avocado.test)
Additionally you can specify "all" or "none" to enable/disable all of the pre-defined streams, and you can also supply custom Python logging streams, which will be passed to the standard output.
Warning
Messages with importance greater than or equal to WARN in the "avocado.app" logging stream are always enabled, and they go to the standard error output.
Storing custom logs¶
When you run a test, you can also store custom logging streams in the results directory with avocado run --store-logging-stream [STREAM[:LEVEL] [STREAM[:LEVEL] ...]], which will produce one $STREAM.$LEVEL file per (unique) entry in the test results directory.
Note
You have to specify the logging streams individually; the built-in stream sets can't be used with this option.
Note
Currently the custom streams are stored only per job, not per individual test.
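As an illustration, a test can log to its own stream, which the command above can then store; a minimal sketch (the stream name my.stream is arbitrary):
import logging

from avocado import Test


class CustomStream(Test):

    """
    Logs to a custom stream, storable via:
    avocado run ... --store-logging-stream my.stream
    """

    def test(self):
        stream = logging.getLogger("my.stream")
        stream.info("this message goes to the my.stream logging stream")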
Paginator¶
Some subcommands (list, plugins, ...) support a "paginator", which, on compatible terminals, basically pipes the colored output to less, to simplify browsing of the produced output. You can disable it with --paginator {on|off}.
Test variants - Mux¶
The Mux is a special mechanism to produce multiple variants of the same test with different parameters. This is essential in order to get decent coverage, and avocado allows several ways to define those parameters, from a simple enumeration of key/value pairs to complex trees, which make it simple to define test matrices with all possible variants.
This sounds similar to sparse matrix jobs in Jenkins, but the difference is that, besides filters, which are available too, avocado allows specifying so called mux domains, which are a nicer way to represent the data.
As the data is represented in trees, it creates all possible variants per domain and then all combinations of these. It sounds complicated, but in reality it follows the way people are used to defining dependencies, therefore it's very simple to use and clear even in complex cases.
The best explanation comes usually from examples, so feel free to scroll down to yaml_to_mux plugin section, which uses the default mux plugin to feed the Mux.
Mux internals¶
The Mux is a core part of avocado and one can see it as a multiplexed database, which contains key/value pairs associated to given paths; as we are talking about a tree of those, we call the paths Nodes. Mux allows iterating through all possible combinations stored in the database, which is called multiplexation. Mux yields variants, which are lists of leaf nodes with their values, which are then processed into AvocadoParams. Those params are available in tests as self.params, and one can query for the current parameters:
self.params.get(key="my_key", path="/some/location/*",
default="default_value")
Let’s get back to Mux for a while. As mentioned earlier, it’s a database which allows storing multiple variants of test parameters. To fill the database, you can use several commands.
- --mux-inject - injects [path:]key:node values directly from the cmdline (see avocado multiplex -h)
- yaml_to_mux plugin - allows parsing yaml files into the Mux database (see yaml_to_mux plugin)
- Custom plugin using the simple Mux API (see mux_api)
Mux API¶
Warning
This API is internal and we might change it at any moment. On the other hand, we maintain the avocado-virt plugin, which uses this API, so in such a case we'd provide a patch there demonstrating the necessary changes.
The Mux object is defined in avocado/core/multiplexer.py, is always instantiated in avocado.core.parser.py, and is always available in args.mux. The basic workflow is:
1. Initialize Mux in args.mux
2. Fill it with data (plugins or job)
3. Multiplex it (in job)
4. Iterate through all variants on all of the job's tests
Once the Mux object is multiplexed (step 3), altering its data (step 2) is no longer allowed, to avoid changing the already produced variants.
The main API needed by your plugins, which we are going to try to keep as stable as possible, is listed below (a usage sketch follows the list):
- mux.is_parsed() - to find out whether the object was already parsed
- data_inject(key, value, path=None) - to inject key/value pairs, optionally to a given path (by default '/')
- data_merge(tree) - to merge an avocado.core.tree.TreeNode-like tree into the database
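A minimal sketch of a plugin-side use of this API (the key, value and path below are hypothetical; remember the warning above about API stability):
def configure_mux(args):
    # 'args' carries the Mux instance in args.mux, as described above
    if not args.mux.is_parsed():
        # inject a hypothetical key/value pair under the '/run' path
        args.mux.data_inject(key="timeout", value="60", path="/run")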
Given these you should be able to implement any kind of parser or params feeder, should you require one. We favor yaml, and therefore we implemented a yaml_to_mux plugin, which can be found in avocado/plugins/yaml_to_mux.py; using it we also describe the way Mux works: yaml_to_mux plugin
Yaml_to_mux plugin¶
In order to get good coverage, one always needs to execute the same test with different parameters or in various environments. Avocado uses the term Multiplexation, or Mux, for generating multiple variants of the same test with different values. To define these variants and values, YAML files are used. The benefit of using YAML files is the visible separation of different scopes. Even very advanced setups are still human readable, unlike traditional sparse, multi-dimensional matrices of parameters.
Let's start with an example (the line numbers in the first column are for documentation purposes only; they are not part of the multiplex file format):
 1 hw:
 2     cpu: !mux
 3         intel:
 4             cpu_CFLAGS: '-march=core2'
 5         amd:
 6             cpu_CFLAGS: '-march=athlon64'
 7         arm:
 8             cpu_CFLAGS: '-mabi=apcs-gnu -march=armv8-a -mtune=arm8'
 9     disk: !mux
10         scsi:
11             disk_type: 'scsi'
12         virtio:
13             disk_type: 'virtio'
14 distro: !mux
15     fedora:
16         init: 'systemd'
17     mint:
18         init: 'systemv'
19 env: !mux
20     debug:
21         opt_CFLAGS: '-O0 -g'
22     prod:
23         opt_CFLAGS: '-O2'
There are a couple of key=>value pairs (lines 4, 6, 8, 11, 13, ...) and there are named nodes which define scope (lines 1, 2, 3, 5, 7, 9, ...). There are also additional flags (lines 2, 9, 14, 19) which modify the behavior.
Nodes¶
Nodes define the context of the key=>value pairs, allowing us to easily identify what these values might be used for, and making it possible to define multiple values for the same keys, each with a different scope.
Due to their purpose, the YAML automatic type conversion for node names is disabled, so the value of a node name is always as written in the yaml file (unlike values, where yes converts to True and such).
Nodes are organized in a parent-child relationship and together they create a tree. To view this structure use avocado multiplex --tree -m <file>:
┗━━ run
┣━━ hw
┃ ┣━━ cpu
┃ ┃ ╠══ intel
┃ ┃ ╠══ amd
┃ ┃ ╚══ arm
┃ ┗━━ disk
┃ ╠══ scsi
┃ ╚══ virtio
┣━━ distro
┃ ╠══ fedora
┃ ╚══ mint
┗━━ env
╠══ debug
╚══ prod
You can see that hw has 2 children, cpu and disk. All parameters defined in a parent node are inherited by its children and extended/overwritten by their values, down to the leaf nodes. The leaf nodes (intel, amd, arm, scsi, ...) are the most important, as after multiplexation they form the parameters available in tests.
Keys and Values¶
Every value other than a dict (lines 4, 6, 8, 11) is used as a value of the antecedent node.
Each node can define key/value pairs (lines 4, 6, 8, 11, ...). Additionally, each child node inherits the values of its parent, and the result is called the node environment.
Given the node structure below:
devtools:
    compiler: 'cc'
    flags:
        - '-O2'
    debug: '-g'
    fedora:
        compiler: 'gcc'
        flags:
            - '-Wall'
    osx:
        compiler: 'clang'
        flags:
            - '-arch i386'
            - '-arch x86_64'
And the rules defined as:
- Scalar values (Booleans, Numbers and Strings) are overwritten by walking from the root until the final node.
- Lists are appended (to the tail) whenever we walk from the root to the final node.
The environments created for the nodes fedora and osx are:
- Node //devtools/fedora environment: compiler: 'gcc', flags: ['-O2', '-Wall']
- Node //devtools/osx environment: compiler: 'clang', flags: ['-O2', '-arch i386', '-arch x86_64']
Note that due to the different usage of keys and values in the environment, we disabled the automatic value conversion for keys, while keeping it enabled for values. This means that a value can be of any YAML-supported type, e.g. bool, None, list or a custom type, while a key is always a string.
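Inside a test, the resulting node environment is queried through self.params. A minimal sketch using the keys from the example above (the test class itself is hypothetical):
from avocado import Test


class DevtoolsExample(Test):

    def test(self):
        # with the YAML above, //devtools/fedora yields compiler='gcc'
        # and flags=['-O2', '-Wall'], while //devtools/osx yields
        # compiler='clang' with its own flags
        compiler = self.params.get('compiler', '/devtools/*', default='cc')
        flags = self.params.get('flags', '/devtools/*', default=[])
        self.log.info("compiler: %s, flags: %s", compiler, flags)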
Variants¶
In the end all leaves are gathered and turned into parameters, more specifically into
AvocadoParams
:
setup:
    graphic:
        user: "guest"
        password: "pass"
    text:
        user: "root"
        password: "123456"
produces the variants [graphic, text]. In the test code you'll be able to query only those leaves; intermediary or root nodes are not available.
The example above generates a single test execution with parameters separated by path. But the most powerful multiplexer feature is that it can generate multiple variants. To do that you need to tag a node whose children are meant to be multiplexed. Effectively, it returns the leaves of only one child at a time. In order to generate all possible variants, the multiplexer creates a cartesian product of all of these variants:
cpu: !mux
    intel:
    amd:
    arm:
fmt: !mux
    qcow2:
    raw:
Produces 6 variants:
/cpu/intel, /fmt/qcow2
/cpu/intel, /fmt/raw
...
/cpu/arm, /fmt/raw
The !mux evaluation is recursive, so one variant can expand to multiple ones:
fmt: !mux
    qcow: !mux
        2:
        2v3:
    raw:
Results in:
/fmt/qcow/2
/fmt/qcow/2v3
/fmt/raw
Resolution order¶
You can see that only leaves become part of the test parameters. It might happen that some of these leaves contain different values for the same key. Then you need to make sure your queries separate them by different paths. When a path matches multiple results with different origins, an exception is raised, as it's impossible to guess which key was originally intended.
To avoid these clashes, it's recommended to use unique names in test parameters, if possible. It also makes it easier to extend or mix multiple YAML files for a test.
For multiplex YAML files that are part of a framework, contain default configurations, or serve as plugin configurations and other advanced setups, it is possible and commonly desirable to use non-unique names. But always keep those points in mind and provide sensible paths.
The multiplexer also supports default paths. The default is /run/*, but it can be overridden by --mux-path, which accepts multiple arguments. What it does is split the leaves by the provided paths. Each query goes one by one through those sub-trees, and the first one to hit a match returns the result. It might not solve all problems, but it can help to combine existing YAML files with your own:
qa:                 # large and complex read-only file, content injected into /qa
    tests:
        timeout: 10
    ...
my_variants: !mux   # your YAML file injected into /my_variants
    short:
        timeout: 1
    long:
        timeout: 1000
Say you want to use an existing test which uses params.get('timeout', '*'). Then you can use --mux-path '/my_variants/*' '/qa/*' and it'll first look in your variants; if no matches are found, it proceeds to /qa/*.
Keep in mind that only the slices defined in mux-path are taken into account for relative paths (the ones starting with *).
Injecting files¶
You can run any test with any YAML file by:
avocado run sleeptest.py --mux-yaml file.yaml
This puts the content of file.yaml into the /run location, which, as mentioned in the previous section, is the default mux-path path. For most simple cases this is the expected behavior, as your files are available in the default path and you can safely use params.get(key).
When you need to put a file into a different location, for example when you have two files and you don't want their content merged into a single place, effectively becoming a single blob, you can do that by giving a name to your yaml file:
avocado run sleeptest.py --mux-yaml duration:duration.yaml
The content of duration.yaml is injected into /run/duration. Still, when keys from other files don't clash, you can use params.get(key) and retrieve from this location, as it's in the default path, only extended by the duration intermediary node. Another benefit is that you can merge or separate multiple files by using the same or different names, or even a complex (relative) path.
Last but not least, advanced users can inject the file into whatever location they prefer by:
avocado run sleeptest.py --mux-yaml /my/variants/duration:duration.yaml
A simple params.get(key) won't look in this location, which might be the intention of the test writer. There are several ways to access the values:
- absolute location: params.get(key, '/my/variants/duration')
- absolute location with wildcards: params.get(key, '/my/*') (or /*/duration/*, ...)
- setting the mux-path: avocado run ... --mux-path /my/* and using a relative path
It's recommended to use simple injection for single YAML files, relative injection for multiple simple YAML files, and the last option for very advanced setups, when you either can't modify the YAML files and need to specify a custom resolution order, or when you are specifying non-test parameters, for example parameters for your plugin, which you need to separate from the test parameters.
Multiple files¶
You can provide multiple files. In such a scenario the final tree is a combination of the provided files, where later nodes with the same name override values of the preceding corresponding node, and new nodes are appended as new children:
file-1.yaml:
debug:
    CFLAGS: '-O0 -g'
prod:
    CFLAGS: '-O2'
file-2.yaml:
prod:
    CFLAGS: '-Os'
fast:
    CFLAGS: '-Ofast'
results in:
debug:
    CFLAGS: '-O0 -g'
prod:
    CFLAGS: '-Os'       # overridden
fast:
    CFLAGS: '-Ofast'    # appended
It's also possible to include an existing file into a given node of another file. This is done by the !include : $path directive:
os:
    fedora:
        !include : fedora.yaml
    gentoo:
        !include : gentoo.yaml
Warning
Due to the YAML nature, it's mandatory to put a space between !include and the colon (:) that must follow it.
The file location can be either an absolute path or a path relative to the YAML file where the !include is called (even when it's nested).
The whole file is merged into the node where it's defined.
Advanced YAML tags¶
There are additional features related to YAML files. Most of them require values separated by ":". Again, in all such cases it's mandatory to add a white space (" ") between the tag and the ":", otherwise the ":" becomes part of the tag name and the parsing fails.
!include¶
Includes other file and injects it into the node it’s specified in:
my_other_file:
    !include : other.yaml
The content of /my_other_file would be parsed from other.yaml. It's the hardcoded equivalent of -m $using:$path.
Relative paths start from the original file's directory.
!using¶
Prepends path to the node it’s defined in:
!using : /foo
bar:
    !using : baz
bar is put into baz, becoming /baz/bar, and everything is put into /foo. So the final path of bar is /foo/baz/bar.
!remove_node¶
Removes the node if it existed during the merge. It can be used to extend incompatible YAML files:
os:
    fedora:
    windows:
        3.11:
        95:

os:
    !remove_node : windows
    windows:
        win3.11:
        win95:
This removes the windows node, with all its children, from the structure, and replaces it with the slightly modified version that follows. It's different from filter-out, as it really removes the node (and all children) from the tree.
As !remove_node is processed during the merge, when you reverse the order of the files, windows is not removed and you end up with /windows/{win3.11,win95,3.11,95} nodes.
!remove_value¶
It's the same as !remove_node, only applied to values.
!mux¶
Children of this node will be multiplexed. This means that in the first variant it'll return the leaves of the first child, in the second the leaves of the second child, etc. An example is in the Variants section.
Complete example¶
Let’s take a second look at the first example:
 1 hw:
 2     cpu: !mux
 3         intel:
 4             cpu_CFLAGS: '-march=core2'
 5         amd:
 6             cpu_CFLAGS: '-march=athlon64'
 7         arm:
 8             cpu_CFLAGS: '-mabi=apcs-gnu -march=armv8-a -mtune=arm8'
 9     disk: !mux
10         scsi:
11             disk_type: 'scsi'
12         virtio:
13             disk_type: 'virtio'
14 distro: !mux
15     fedora:
16         init: 'systemd'
17     mint:
18         init: 'systemv'
19 env: !mux
20     debug:
21         opt_CFLAGS: '-O0 -g'
22     prod:
23         opt_CFLAGS: '-O2'
After filters are applied (they simply remove non-matching variants), the leaves are gathered and all variants are generated:
$ avocado multiplex -m examples/mux-environment.yaml
Variants generated:
Variant 1: /hw/cpu/intel, /hw/disk/scsi, /distro/fedora, /env/debug
Variant 2: /hw/cpu/intel, /hw/disk/scsi, /distro/fedora, /env/prod
Variant 3: /hw/cpu/intel, /hw/disk/scsi, /distro/mint, /env/debug
Variant 4: /hw/cpu/intel, /hw/disk/scsi, /distro/mint, /env/prod
Variant 5: /hw/cpu/intel, /hw/disk/virtio, /distro/fedora, /env/debug
Variant 6: /hw/cpu/intel, /hw/disk/virtio, /distro/fedora, /env/prod
Variant 7: /hw/cpu/intel, /hw/disk/virtio, /distro/mint, /env/debug
Variant 8: /hw/cpu/intel, /hw/disk/virtio, /distro/mint, /env/prod
Variant 9: /hw/cpu/amd, /hw/disk/scsi, /distro/fedora, /env/debug
Variant 10: /hw/cpu/amd, /hw/disk/scsi, /distro/fedora, /env/prod
Variant 11: /hw/cpu/amd, /hw/disk/scsi, /distro/mint, /env/debug
Variant 12: /hw/cpu/amd, /hw/disk/scsi, /distro/mint, /env/prod
Variant 13: /hw/cpu/amd, /hw/disk/virtio, /distro/fedora, /env/debug
Variant 14: /hw/cpu/amd, /hw/disk/virtio, /distro/fedora, /env/prod
Variant 15: /hw/cpu/amd, /hw/disk/virtio, /distro/mint, /env/debug
Variant 16: /hw/cpu/amd, /hw/disk/virtio, /distro/mint, /env/prod
Variant 17: /hw/cpu/arm, /hw/disk/scsi, /distro/fedora, /env/debug
Variant 18: /hw/cpu/arm, /hw/disk/scsi, /distro/fedora, /env/prod
Variant 19: /hw/cpu/arm, /hw/disk/scsi, /distro/mint, /env/debug
Variant 20: /hw/cpu/arm, /hw/disk/scsi, /distro/mint, /env/prod
Variant 21: /hw/cpu/arm, /hw/disk/virtio, /distro/fedora, /env/debug
Variant 22: /hw/cpu/arm, /hw/disk/virtio, /distro/fedora, /env/prod
Variant 23: /hw/cpu/arm, /hw/disk/virtio, /distro/mint, /env/debug
Variant 24: /hw/cpu/arm, /hw/disk/virtio, /distro/mint, /env/prod
Where the first variant contains:
/hw/cpu/intel/ => cpu_CFLAGS: -march=core2
/hw/disk/scsi/ => disk_type: scsi
/distro/fedora/ => init: systemd
/env/debug/ => opt_CFLAGS: -O0 -g
The second one:
/hw/cpu/intel/ => cpu_CFLAGS: -march=core2
/hw/disk/scsi/ => disk_type: scsi
/distro/fedora/ => init: systemd
/env/prod/ => opt_CFLAGS: -O2
From this example you can see that querying for /env/debug works only in the first variant, but returns nothing in the second variant. Keep this in mind, and when you use the !mux flag, always query for the pre-mux path, /env/* in this example.
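Translated into test code, such a query would look like this (a minimal sketch using a key from the example above; the test class is hypothetical):
from avocado import Test


class CflagsExample(Test):

    def test(self):
        # '/env/*' matches /env/debug in some variants and /env/prod in
        # others, so the query works in every generated variant
        opt_cflags = self.params.get('opt_CFLAGS', '/env/*', default='-O2')
        self.log.info("would compile with CFLAGS=%s", opt_cflags)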
Job Replay¶
In order to reproduce a given job using the same data, one can use the --replay option of the run command, informing the hash id of the original job to be replayed. The hash id can be partial, as long as the provided part corresponds to the initial characters of the original job id and is unique enough. Alternatively, instead of the job id, you can use the string latest, and avocado will replay the latest job executed.
Let’s see an example. First, running a simple job with two urls:
$ avocado run /bin/true /bin/false
JOB ID : 825b860b0c2f6ec48953c638432e3e323f8d7cad
JOB LOG : $HOME/avocado/job-results/job-2016-01-11T16.14-825b860/job.log
TESTS : 2
(1/2) /bin/true: PASS (0.01 s)
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.02 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T16.14-825b860/html/results.html
Now we can replay the job by running:
$ avocado run --replay 825b86
JOB ID : 55a0d10132c02b8cc87deb2b480bfd8abbd956c3
SRC JOB ID : 825b860b0c2f6ec48953c638432e3e323f8d7cad
JOB LOG : $HOME/avocado/job-results/job-2016-01-11T16.18-55a0d10/job.log
TESTS : 2
(1/2) /bin/true: PASS (0.01 s)
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T16.18-55a0d10/html/results.html
The replay feature will retrieve the original job urls, the multiplex tree and the configuration. Let's see another example, now using a multiplex file:
$ avocado run /bin/true /bin/false --mux-yaml mux-environment.yaml
JOB ID : bd6aa3b852d4290637b5e771b371537541043d1d
JOB LOG : $HOME/avocado/job-results/job-2016-01-11T21.56-bd6aa3b/job.log
TESTS : 48
(1/48) /bin/true;1: PASS (0.01 s)
(2/48) /bin/true;2: PASS (0.01 s)
(3/48) /bin/true;3: PASS (0.01 s)
(4/48) /bin/true;4: PASS (0.01 s)
(5/48) /bin/true;5: PASS (0.01 s)
(6/48) /bin/true;6: PASS (0.01 s)
(7/48) /bin/true;7: PASS (0.01 s)
(8/48) /bin/true;8: PASS (0.01 s)
(9/48) /bin/true;9: PASS (0.01 s)
(10/48) /bin/true;10: PASS (0.01 s)
(11/48) /bin/true;11: PASS (0.01 s)
(12/48) /bin/true;12: PASS (0.01 s)
(13/48) /bin/true;13: PASS (0.01 s)
(14/48) /bin/true;14: PASS (0.01 s)
(15/48) /bin/true;15: PASS (0.01 s)
(16/48) /bin/true;16: PASS (0.01 s)
(17/48) /bin/true;17: PASS (0.01 s)
(18/48) /bin/true;18: PASS (0.01 s)
(19/48) /bin/true;19: PASS (0.01 s)
(20/48) /bin/true;20: PASS (0.01 s)
(21/48) /bin/true;21: PASS (0.01 s)
(22/48) /bin/true;22: PASS (0.01 s)
(23/48) /bin/true;23: PASS (0.01 s)
(24/48) /bin/true;24: PASS (0.01 s)
(25/48) /bin/false;1: FAIL (0.01 s)
(26/48) /bin/false;2: FAIL (0.01 s)
(27/48) /bin/false;3: FAIL (0.01 s)
(28/48) /bin/false;4: FAIL (0.01 s)
(29/48) /bin/false;5: FAIL (0.01 s)
(30/48) /bin/false;6: FAIL (0.01 s)
(31/48) /bin/false;7: FAIL (0.01 s)
(32/48) /bin/false;8: FAIL (0.01 s)
(33/48) /bin/false;9: FAIL (0.01 s)
(34/48) /bin/false;10: FAIL (0.01 s)
(35/48) /bin/false;11: FAIL (0.01 s)
(36/48) /bin/false;12: FAIL (0.01 s)
(37/48) /bin/false;13: FAIL (0.01 s)
(38/48) /bin/false;14: FAIL (0.01 s)
(39/48) /bin/false;15: FAIL (0.01 s)
(40/48) /bin/false;16: FAIL (0.01 s)
(41/48) /bin/false;17: FAIL (0.01 s)
(42/48) /bin/false;18: FAIL (0.01 s)
(43/48) /bin/false;19: FAIL (0.01 s)
(44/48) /bin/false;20: FAIL (0.01 s)
(45/48) /bin/false;21: FAIL (0.01 s)
(46/48) /bin/false;22: FAIL (0.01 s)
(47/48) /bin/false;23: FAIL (0.01 s)
(48/48) /bin/false;24: FAIL (0.01 s)
RESULTS : PASS 24 | ERROR 0 | FAIL 24 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.29 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T21.56-bd6aa3b/html/results.html
We can replay the job as is, using $ avocado run --replay latest
,
or replay the job ignoring the multiplex file, as below:
$ avocado run --replay bd6aa3b --replay-ignore mux
Ignoring multiplex from source job with --replay-ignore.
JOB ID : d5a46186ee0fb4645e3f7758814003d76c980bf9
SRC JOB ID : bd6aa3b852d4290637b5e771b371537541043d1d
JOB LOG : $HOME/avocado/job-results/job-2016-01-11T22.01-d5a4618/job.log
TESTS : 2
(1/2) /bin/true: PASS (0.01 s)
(2/2) /bin/false: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.02 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T22.01-d5a4618/html/results.html
Also, it is possible to replay only the variants that faced a given
result, using the option --replay-test-status
. See the example below:
$ avocado run --replay bd6aa3b --replay-test-status FAIL
JOB ID : 2e1dc41af6ed64895f3bb45e3820c5cc62a9b6eb
SRC JOB ID : bd6aa3b852d4290637b5e771b371537541043d1d
JOB LOG : $HOME/avocado/job-results/job-2016-01-12T00.38-2e1dc41/job.log
TESTS : 48
(1/48) /bin/true;1: SKIP
(2/48) /bin/true;2: SKIP
(3/48) /bin/true;3: SKIP
(4/48) /bin/true;4: SKIP
(5/48) /bin/true;5: SKIP
(6/48) /bin/true;6: SKIP
(7/48) /bin/true;7: SKIP
(8/48) /bin/true;8: SKIP
(9/48) /bin/true;9: SKIP
(10/48) /bin/true;10: SKIP
(11/48) /bin/true;11: SKIP
(12/48) /bin/true;12: SKIP
(13/48) /bin/true;13: SKIP
(14/48) /bin/true;14: SKIP
(15/48) /bin/true;15: SKIP
(16/48) /bin/true;16: SKIP
(17/48) /bin/true;17: SKIP
(18/48) /bin/true;18: SKIP
(19/48) /bin/true;19: SKIP
(20/48) /bin/true;20: SKIP
(21/48) /bin/true;21: SKIP
(22/48) /bin/true;22: SKIP
(23/48) /bin/true;23: SKIP
(24/48) /bin/true;24: SKIP
(25/48) /bin/false;1: FAIL (0.01 s)
(26/48) /bin/false;2: FAIL (0.01 s)
(27/48) /bin/false;3: FAIL (0.01 s)
(28/48) /bin/false;4: FAIL (0.01 s)
(29/48) /bin/false;5: FAIL (0.01 s)
(30/48) /bin/false;6: FAIL (0.01 s)
(31/48) /bin/false;7: FAIL (0.01 s)
(32/48) /bin/false;8: FAIL (0.01 s)
(33/48) /bin/false;9: FAIL (0.01 s)
(34/48) /bin/false;10: FAIL (0.01 s)
(35/48) /bin/false;11: FAIL (0.01 s)
(36/48) /bin/false;12: FAIL (0.01 s)
(37/48) /bin/false;13: FAIL (0.01 s)
(38/48) /bin/false;14: FAIL (0.01 s)
(39/48) /bin/false;15: FAIL (0.01 s)
(40/48) /bin/false;16: FAIL (0.01 s)
(41/48) /bin/false;17: FAIL (0.01 s)
(42/48) /bin/false;18: FAIL (0.01 s)
(43/48) /bin/false;19: FAIL (0.01 s)
(44/48) /bin/false;20: FAIL (0.01 s)
(45/48) /bin/false;21: FAIL (0.01 s)
(46/48) /bin/false;22: FAIL (0.01 s)
(47/48) /bin/false;23: FAIL (0.01 s)
(48/48) /bin/false;24: FAIL (0.01 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 24 | SKIP 24 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.19 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-12T00.38-2e1dc41/html/results.html
When replaying jobs that were executed with the --failfast on
option, you
can disable the failfast
option using --failfast off
in the replay job.
To be able to replay a job, avocado records the job data in the same job results directory, inside a subdirectory named replay. If a given job used a non-default path to record the logs, when the replay time comes we need to inform avocado where the logs are. See the example below:
$ avocado run /bin/true --job-results-dir /tmp/avocado_results/
JOB ID : f1b1c870ad892eac6064a5332f1bbe38cda0aaf3
JOB LOG : /tmp/avocado_results/job-2016-01-11T22.10-f1b1c87/job.log
TESTS : 1
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
JOB HTML : /tmp/avocado_results/job-2016-01-11T22.10-f1b1c87/html/results.html
Trying to replay the job, it fails:
$ avocado run --replay f1b1
can't find job results directory in '$HOME/avocado/job-results'
In this case, we have to inform where the job results directory is located:
$ avocado run --replay f1b1 --replay-data-dir /tmp/avocado_results
JOB ID : 19c76abb29f29fe410a9a3f4f4b66387570edffa
SRC JOB ID : f1b1c870ad892eac6064a5332f1bbe38cda0aaf3
JOB LOG : $HOME/avocado/job-results/job-2016-01-11T22.15-19c76ab/job.log
TESTS : 1
(1/1) /bin/true: PASS (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 0.01 s
JOB HTML : $HOME/avocado/job-results/job-2016-01-11T22.15-19c76ab/html/results.html
Job Diff¶
The Avocado Diff plugin allows users to easily compare several aspects of two given jobs. The basic usage is:
$ avocado diff 7025aaba 384b949c
--- 7025aaba9c2ab8b4bba2e33b64db3824810bb5df
+++ 384b949c991b8ab324ce67c9d9ba761fd07672ff
@@ -1,15 +1,15 @@
COMMAND LINE
-/usr/bin/avocado run sleeptest.py
+/usr/bin/avocado run passtest.py
TOTAL TIME
-1.00 s
+0.00 s
TEST RESULTS
-1-sleeptest.py:SleepTest.test: PASS
+1-passtest.py:PassTest.test: PASS
...
Avocado Diff can compare and create a unified diff of:
- Command line.
- Job time.
- Variants and parameters.
- Tests results.
- Configuration.
- Sysinfo pre and post.
Only sections with different content will be included in the results. You
can also enable/disable those sections with --diff-filter
. Please see
avocado diff --help
for more information.
Jobs can be identified by the Job ID, by the results directory or by the
key latest
. Example:
$ avocado diff ~/avocado/job-results/job-2016-08-03T15.56-4b3cb5b/ latest
--- 4b3cb5bbbb2435c91c7b557eebc09997d4a0f544
+++ 57e5bbb3991718b216d787848171b446f60b3262
@@ -1,9 +1,9 @@
COMMAND LINE
-/usr/bin/avocado run perfmon.py
+/usr/bin/avocado run passtest.py
TOTAL TIME
-11.91 s
+0.00 s
TEST RESULTS
-1-test.py:Perfmon.test: FAIL
+1-examples/tests/passtest.py:PassTest.test: PASS
Along with the unified diff, you can also generate an html diff file (option --html) and, optionally, open it in your preferred browser (option --open-browser):
$ avocado diff 7025aaba 384b949c --html /tmp/myjobdiff.html
/tmp/myjobdiff.html
If the option --open-browser is used without --html, a temporary html file will be created.
For those willing to use a custom diff tool instead of the Avocado Diff tool, we offer the option --create-reports, which creates two temporary files with the relevant content. The file names are printed and the user can copy/paste them to the custom diff tool command line:
$ avocado diff 7025aaba 384b949c --create-reports
/var/tmp/avocado_diff_7025aab_zQJjJh.txt /var/tmp/avocado_diff_384b949_AcWq02.txt
$ diff -u /var/tmp/avocado_diff_7025aab_zQJjJh.txt /var/tmp/avocado_diff_384b949_AcWq02.txt
--- /var/tmp/avocado_diff_7025aab_zQJjJh.txt 2016-08-10 21:48:43.547776715 +0200
+++ /var/tmp/avocado_diff_384b949_AcWq02.txt 2016-08-10 21:48:43.547776715 +0200
@@ -1,250 +1,19 @@
COMMAND LINE
============
-/usr/bin/avocado run sleeptest.py
+/usr/bin/avocado run passtest.py
TOTAL TIME
==========
-1.00 s
+0.00 s
...
Running Tests Remotely¶
Running Tests on a Remote Host¶
Avocado lets you run tests directly on a remote machine over an SSH connection, provided that you have properly set it up by installing Avocado on it.
You can check if this feature (a plugin) is enabled by running:
$ avocado plugins
...
remote Remote machine options for 'run' subcommand
...
Assuming this feature is enabled, you should be able to pass the following options
when using the run
command in the Avocado command line tool:
--remote-hostname REMOTE_HOSTNAME
Specify the hostname to login on remote machine
--remote-port REMOTE_PORT
Specify the port number to login on remote machine.
Default: 22
--remote-username REMOTE_USERNAME
Specify the username to login on remote machine
--remote-password REMOTE_PASSWORD
Specify the password to login on remote machine
From these options, you are normally going to use --remote-hostname and --remote-username, in case you have set up your remote machine with password-less SSH login (through SSH keys).
Remote Setup¶
Make sure you have:
- Avocado packages installed. You can see more info on how to do that in the Getting Started section.
- The remote machine IP address or fully qualified hostname and the SSH port number.
- All prerequisites for your test to run installed inside the remote machine (gcc, make and others, if you want to compile a 3rd party test suite written in C, for example).
Optionally, you may have password-less SSH login enabled on your remote machine.
Running your test¶
Once the remote machine is properly set up, you may run your test. Example:
$ scripts/avocado run --remote-hostname 192.168.122.30 --remote-username fedora examples/tests/sleeptest.py examples/tests/failtest.py
REMOTE LOGIN : fedora@192.168.122.30:22
JOB ID : 60ddd718e7d7bb679f258920ce3c39ce73cb9779
JOB LOG : $HOME/avocado/job-results/job-2014-10-23T11.45-a329461/job.log
TESTS : 2
(1/2) examples/tests/sleeptest.py: PASS (1.00 s)
(2/2) examples/tests/failtest.py: FAIL (0.00 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 1.01 s
As you can see, Avocado will copy the tests you have to your remote machine and execute them. A bit of extra logging information is added to your job summary, mainly to distinguish the regular execution from the remote one. Note here that we did not need --remote-password because an SSH key was already set up.
Running Tests on a Virtual Machine¶
Sometimes you don’t want to run a given test directly in your own machine (maybe the test is dangerous, maybe you need to run it in another Linux distribution, so on and so forth).
For those scenarios, Avocado lets you run tests directly in VMs defined as libvirt domains in your system, provided that you properly set them up.
You can check if this feature (a plugin) is enabled by running:
$ avocado plugins
...
vm Virtual Machine options for 'run' subcommand
...
Assuming this feature is enabled, you should be able to pass the following options
when using the run
command in the Avocado command line tool:
--vm Run tests on Virtual Machine
--vm-hypervisor-uri VM_HYPERVISOR_URI
Specify hypervisor URI driver connection
--vm-domain VM_DOMAIN
Specify domain name (Virtual Machine name)
--vm-hostname VM_HOSTNAME
Specify VM hostname to login. By default Avocado
attempts to automatically find the VM IP address.
--vm-username VM_USERNAME
Specify the username to login on VM
--vm-password VM_PASSWORD
Specify the password to login on VM
--vm-cleanup Restore VM to a previous state, before running the
tests
From these options, you are normally going to use --vm-domain, --vm-hostname and --vm-username, in case you have set up your VM with password-less SSH login (through SSH keys).
If your VM has the qemu-guest-agent
installed, you can skip the
--vm-hostname
option. Avocado will then probe the VM IP from the
agent.
Virtual Machine Setup¶
Make sure you have:
- A libvirt domain with the Avocado packages installed. You can see more info on how to do that in the Getting Started section.
- The domain IP address or fully qualified hostname.
- All prerequisites for your test to run installed inside the VM (gcc, make and others, if you want to compile a 3rd party test suite written in C, for example).
Optionally, you may have password-less SSH login enabled on your VM.
Running your test¶
Once the virtual machine is properly set up, you may run your test. Example:
$ scripts/avocado run --vm-domain fedora20 --vm-username autotest --vm examples/tests/sleeptest.py examples/tests/failtest.py
VM DOMAIN : fedora20
VM LOGIN : autotest@192.168.122.30
JOB ID : 60ddd718e7d7bb679f258920ce3c39ce73cb9779
JOB LOG : $HOME/avocado/job-results/job-2014-09-16T18.41-60ddd71/job.log
TESTS : 2
(1/2) examples/tests/sleeptest.py:SleepTest.test: PASS (1.00 s)
(2/2) examples/tests/failtest.py:FailTest.test: FAIL (0.01 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 1.01 s
As you can see, Avocado will copy the tests you have to your libvirt domain and execute them. A bit of extra logging information is added to your job summary, mainly to distinguish the regular execution from the remote one. Note here that we did not need --vm-password because the SSH key was already set up.
Running Tests on a Docker container¶
Avocado also lets you run tests on a Docker container, starting and cleaning it up automatically with every execution.
You can check if this feature (a plugin) is enabled by running:
$ avocado plugins
...
docker Run tests inside docker container
...
Docker container images¶
Avocado needs to be present inside the container image in order for
the test execution to be properly performed. There’s one ready to use
image (ldoktor/fedora-avocado
) in the default image repository
(docker.io
):
$ sudo docker pull ldoktor/fedora-avocado
Using default tag: latest
Trying to pull repository docker.io/ldoktor/fedora-avocado ...
latest: Pulling from docker.io/ldoktor/fedora-avocado
...
Status: Downloaded newer image for docker.io/ldoktor/fedora-avocado:latest
Use custom docker images¶
One of the possible ways to use (and develop) Avocado is to create a docker image with your development tree. This is a good way to test your development branch without breaking your system.
To do so, you can follow a few simple steps. Begin by fetching the source code as usual:
$ git clone https://github.com/avocado-framework/avocado.git avocado.git
You may want to make some changes to Avocado:
$ cd avocado.git
$ patch -p1 < MY_PATCH
Finally build a docker image:
$ docker build -t fedora-avocado-custom -f contrib/docker/Dockerfile.fedora .
And now you can run tests with your modified Avocado inside your container:
$ avocado run --docker fedora-avocado-custom examples/tests/passtest.py
Running your test¶
Assuming your system is properly set up to run Docker, including having an image with Avocado, you can run a test inside the container with a command similar to:
$ avocado run passtest.py warntest.py failtest.py --docker ldoktor/fedora-avocado --docker-cmd "sudo docker"
DOCKER : Container id '4bcbcd69801211501a0dde5926c0282a9630adbe29ecb17a21ef04f024366943'
JOB ID : db309f5daba562235834f97cad5f4458e3fe6e32
JOB LOG : $HOME/avocado/job-results/job-2016-07-25T08.01-db309f5/job.log
TESTS : 3
(1/3) /avocado_remote_test_dir/$HOME/passtest.py:PassTest.test: PASS (0.00 s)
(2/3) /avocado_remote_test_dir/$HOME/warntest.py:WarnTest.test: WARN (0.00 s)
(3/3) /avocado_remote_test_dir/$HOME/failtest.py:FailTest.test: FAIL (0.00 s)
RESULTS : PASS 1 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 1 | INTERRUPT 0
TESTS TIME : 0.00 s
JOB HTML : $HOME/avocado/job-results/job-2016-07-25T08.01-db309f5/html/results.html
Environment Variables¶
When running remote instances of Avocado, for example using the remote or vm plugins, the remote environment has a different set of environment variables. If you want to make variables from the local environment available remotely, you can use the run option --env-keep. See the example below:
$ export MYVAR1=foobar
$ env MYVAR2=foobar2 avocado run passtest.py --env-keep MYVAR1,MYVAR2 --remote-hostname 192.168.122.30 --remote-username fedora
By doing that, both MYVAR1 and MYVAR2 will be available in the remote environment.
Debugging with GDB¶
Avocado has two different types of GDB support that complement each other:
- Transparent execution of executables inside the GNU Debugger. This takes standard, possibly unmodified, tests that use the avocado.utils.process APIs for running processes. By means of a command line option, the executable is run inside GDB. This allows the user to interact with GDB, while to the test itself things remain pretty much transparent (see the sketch after this list).
- The avocado.utils.gdb APIs, which allow a test to interact with GDB, including setting an executable to be run, setting breakpoints or issuing any other type of command. This requires a test written with that approach and API in mind.
Tip
Even though this section describes the use of the Avocado GDB features, which allow live debugging of binaries inside Avocado tests, it’s also possible to debug some applications offline by using tools such as rr. Avocado ships with an example wrapper script (to be used with --wrapper) for that purpose.
Transparent Execution of Executables¶
This feature adds a few command line options to the Avocado run
command:
$ avocado run --help
...
GNU Debugger support:
--gdb-run-bin EXECUTABLE[:BREAKPOINT]
Run a given executable inside the GNU debugger,
pausing at a given breakpoint (defaults to "main")
--gdb-prerun-commands EXECUTABLE:COMMANDS
After loading an executable in GDB, but before
actually running it, execute the GDB commands in the
given file. EXECUTABLE is optional, if omitted
COMMANDS will apply to all executables
--gdb-coredump {on,off}
Automatically generate a core dump when the inferior
process received a fatal signal such as SIGSEGV or
SIGABRT
...
To get started you want to use --gdb-run-bin, as shown in the example below.
Example¶
The simplest way is to run avocado run --gdb-run-bin=doublefree examples/tests/doublefree.py, which wraps each executed executable named doublefree inside the GDB server and stops at the executable entry point.
Optionally you can specify a single breakpoint using --gdb-run-bin=doublefree:$breakpoint (eg: doublefree:1) or just doublefree: to stop only when an interruption happens (eg: SIGABRT).
It’s worth mentioning that when the breakpoint is not reached, the test finishes without any interruption. This is helpful when you identify regions your code should never reach, or places that interest you, and you can run your code in both production and GDB variants. If after a long time the code does get there, the test notifies you and you can investigate the problem. This is demonstrated in the examples/tests/doublefree_nasty.py test. To unveil the power of Avocado, run this test using:
avocado run --gdb-run-bin=doublefree: examples/tests/doublefree_nasty.py --gdb-prerun-commands examples/tests/doublefree_nasty.py.data/gdb_pre --mux-yaml examples/tests/doublefree_nasty.py.data/iterations.yaml
which executes 100 iterations of this test while setting all breakpoints from the examples/tests/doublefree_nasty.py.data/gdb_pre file (you can specify whatever GDB supports, not only breakpoints).
As you can see, this test usually passes, but once in a while it gets into the problematic area. Imagine the failure is very hard to spot (dependent on HW registers, ...); this is one way to combine regular testing with the possibility of debugging hard-to-reach parts of your code.
Caveats¶
Currently, when using the Avocado GDB plugin, that is, when using the --gdb-run-bin option, there are some caveats you should be aware of:
- It is not currently compatible with Avocado’s --output-check-record feature
- There’s no way to perform proper input to the process, that is, to manipulate its STDIN
- The process STDERR content is mixed with the content generated by gdbserver on its own STDERR (because they are, in fact, the same thing)
But, you can still depend on the process STDOUT, as exemplified by this fictional test:
from avocado import Test
from avocado.utils import process


class HelloOutputTest(Test):

    def test(self):
        result = process.run("/path/to/hello", ignore_status=True)
        self.assertIn("hello\n", result.stdout)
Whether run under GDB or not, the result.stdout behavior and content are expected to be the same.
Reasons for the caveats¶
There are two basic reasons for the mentioned caveats:
- The architecture of Avocado’s GDB feature
- GDB’s own behavior and limitations
When using the Avocado GDB plugin, that is, --gdb-run-bin, Avocado runs a gdbserver instance transparently and controls it by means of a gdb process. When a given event happens, say a breakpoint is reached, it disconnects its own gdb from the server and allows the user to use a standard gdb to connect to the gdbserver. This provides a natural and seamless user experience.
But, gdbserver has some limitations at this point, including:
- Not being able to set a controlling tty
- Not separating its own STDERR content from the application being run
These limitations are being addressed both on Avocado and GDB, and will be resolved in future Avocado versions.
Workaround¶
If the application you’re running as part of your test can read input from alternative sources (including devices, files or the network) and generate output likewise, then you should not be further limited.
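For example, here is a sketch of that workaround (the /path/to/consumer program and the input.txt data file are hypothetical): the test passes input through a file argument instead of STDIN:
import os

from avocado import Test
from avocado.utils import process


class FileInputTest(Test):

    def test(self):
        # hypothetical program that reads from the file named in argv[1]
        # instead of from its (unavailable) standard input
        data = os.path.join(self.datadir, "input.txt")
        result = process.run("/path/to/consumer %s" % data,
                             ignore_status=True)
        self.assertEqual(result.exit_status, 0)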
GDB support and avocado-virt¶
Another current limitation is the combined use of avocado-virt and the Avocado GDB support. The supported API for transparent debugging is currently limited to avocado.utils.process.run(), and does not cover advanced uses of the avocado.utils.process.SubProcess class. The avocado-virt extension, though, uses the avocado.utils.process.SubProcess class to execute qemu in the background.
This limitation will be addressed in future versions of avocado and avocado-virt.
avocado.utils.gdb APIs¶
Avocado’s GDB module provides three main classes that let a test writer interact with a gdb process, a gdbserver process, and the GDB remote protocol for interaction with a remote target.
Please refer to avocado.utils.gdb for more information.
Example¶
Take a look at the examples/tests/modify_variable.py test:
# excerpt from examples/tests/modify_variable.py (the module imports os
# and avocado.utils.gdb at the top)
def test(self):
    """
    Execute 'print_variable'.
    """
    path = os.path.join(self.srcdir, 'print_variable')
    app = gdb.GDB()
    app.set_file(path)
    app.set_break(6)
    app.run()
    self.log.info("\n".join(app.read_until_break()))
    app.cmd("set variable a = 0xff")
    app.cmd("c")
    out = "\n".join(app.read_until_break())
    self.log.info(out)
    app.exit()
    self.assertIn("MY VARIABLE 'A' IS: ff", out)
You can see that instead of running the executable using process.run, we invoke avocado.utils.gdb.GDB. This allows us to automate the interaction with GDB in terms of setting breakpoints, executing commands and querying for output.
When you check the output (--show-job-log) you can see that despite the variable being declared as 0, ff is injected and printed instead.
Wrap executables run by tests¶
Avocado allows the instrumentation of executables being run by a test in a transparent way. The user specifies a script (“the wrapper”) to be used to run the actual program called by the test.
If the instrumentation script is implemented correctly, it should not interfere with the test behavior. That is, the wrapper should avoid changing the return status, standard output and standard error messages of the original executable.
The user can be specific about which program to wrap (with a shell-like glob); if the glob is omitted, the wrapper is treated as a global wrapper that applies to all programs called by the test.
Usage¶
This feature is implemented as a plugin, which adds the --wrapper option to the Avocado run command. For a detailed explanation, please consult the Avocado man page.
Example of a transparent way of running strace as a wrapper:
#!/bin/sh
exec strace -ff -o "$AVOCADO_TEST_LOGDIR/strace.log" -- "$@"
To have all programs started by test.py wrapped with ~/bin/my-wrapper.sh:
$ scripts/avocado run --wrapper ~/bin/my-wrapper.sh tests/test.py
To have only my-binary wrapped with ~/bin/my-wrapper.sh:
$ scripts/avocado run --wrapper ~/bin/my-wrapper.sh:*my-binary tests/test.py
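Wrappers don’t have to be shell scripts; any executable will do. As a minimal sketch (the ~/bin/time-wrapper.py name is an assumption), the same idea in Python, preserving the exit status and standard streams of the wrapped program:
#!/usr/bin/env python
# hypothetical wrapper: records the wall-clock time of the wrapped
# executable without touching its stdout, stderr or exit status
import os
import subprocess
import sys
import time

start = time.time()
status = subprocess.call(sys.argv[1:])  # run the original command line
elapsed = time.time() - start

logdir = os.environ.get("AVOCADO_TEST_LOGDIR", ".")
with open(os.path.join(logdir, "wrapper-time.log"), "a") as log:
    log.write("%r took %.3fs\n" % (sys.argv[1:], elapsed))

sys.exit(status)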
Caveats¶
- It is not possible to debug with GDB (--gdb-run-bin) and use wrappers (--wrapper) at the same time. These two options are mutually exclusive.
- You can only set one (global) wrapper. If you need functionality present in two wrappers, you have to combine those into a single wrapper script.
- Only executables that are run with the avocado.utils.process APIs (and other API modules that make use of it, like avocado.utils.build) are affected by this feature.
Plugin System¶
Avocado has a plugin system that can be used to extend it in a clean way.
Listing plugins¶
The avocado command line tool has a builtin plugins command that lets you list available plugins. The usage is pretty simple:
$ avocado plugins
Plugins that add new commands (avocado.plugins.cli.cmd):
exec-path Returns path to avocado bash libraries and exits.
run Run one or more tests (native test, test alias, binary or script)
sysinfo Collect system information
...
Plugins that add new options to commands (avocado.plugins.cli):
remote Remote machine options for 'run' subcommand
journal Journal options for the 'run' subcommand
...
Since plugins are (usually small) bundles of Python code, they may fail to load if the Python code is broken for any reason. Example:
$ avocado plugins
Failed to load plugin from module "avocado.plugins.exec_path": ImportError('No module named foo',)
Plugins that add new commands (avocado.plugins.cli.cmd):
run Run one or more tests (native test, test alias, binary or script)
sysinfo Collect system information
...
Writing a plugin¶
What better way to understand how an Avocado plugin works than creating one? Let’s use another old time favorite for that, the “Print hello world” theme.
Code example¶
Let’s say you want to write a plugin that adds a new subcommand to the test runner, hello. This is how you’d do it:
from avocado.plugins.base import CLICmd


class HelloWorld(CLICmd):

    name = 'hello'
    description = 'The classical Hello World! plugin example.'

    def run(self, args):
        print(self.description)
As you can see, this plugin inherits from avocado.plugins.base.CLICmd. This specific base class allows for the creation of new commands for the Avocado CLI tool. The only mandatory method to be implemented is run, and it’s the plugin's main entry point. In this code example it will simply print the plugin’s description.
Registering Plugins¶
Avocado makes use of the Stevedore library to load and activate plugins. Stevedore itself uses setuptools and its entry points to register and find Python objects. So, to make your new plugin visible to Avocado, you need to add to your setuptools based setup.py file something like:
setup(name='mypluginpack',
      ...
      entry_points={
          'avocado.plugins.cli': [
              'hello = mypluginpack.hello:HelloWorld',
          ]
      },
      ...
      )
Then, by running either $ python setup.py install or $ python setup.py develop, your plugin should be visible to Avocado.
Fully qualified name for a plugin¶
The plugin registry mentioned earlier (setuptools and its entry points) is global to a given Python installation. Avocado uses the namespace prefix avocado.plugins. to avoid name clashes with other software. Inside Avocado itself, though, there’s no need to keep using the avocado.plugins. prefix.
Take, for instance, the Job Pre/Post plugins, which are defined on setup.py:
'avocado.plugins.job.prepost': [
    'jobscripts = avocado.plugins.jobscripts:JobScripts'
]
The setuptools entry point namespace is composed of the mentioned prefix avocado.plugins., which is then followed by the Avocado plugin type, in this case, job.prepost.
Inside avocado itself, the fully qualified name for a plugin is the plugin type, such as job.prepost, concatenated with the name used in the entry point definition itself, in this case, jobscripts.
To summarize, still using the same example, the fully qualified Avocado plugin name is going to be job.prepost.jobscripts.
Disabling a plugin¶
Even though a plugin can be installed and registered under setuptools entry points, it can be explicitly disabled in Avocado.
The mechanism available to do so is to add entries to the disable key under the plugins section of the Avocado configuration file.
Example:
[plugins]
disable = ['cli.hello', 'job.prepost.jobscripts']
The exact effect on Avocado when a plugin is disabled depends on the plugin type. For instance, by disabling plugins of type cli.cmd, the command implemented by the plugin should no longer be available on the Avocado command line application. And by disabling a job.prepost plugin, it won’t be executed before/after the execution of the jobs.
Wrap Up¶
We have briefly discussed the making of Avocado plugins. We recommend the Stevedore documentation and also a look at the avocado.plugins.base module for the various plugin interface definitions.
Some plugin examples are available in the Avocado source tree, under examples/plugins.
Finally, exploring the real plugins shipped with Avocado in avocado.plugins is the final “documentation” source.
Advanced Topics and Maintenance¶
Reference Guide¶
This guide presents information on the Avocado basic design and its internals.
Job, test and identifiers¶
Job ID¶
The Job ID is a random SHA1 string that uniquely identifies a given job.
The full form of the SHA1 string is used in most references to a job:
$ avocado run sleeptest.py
JOB ID : 49ec339a6cca73397be21866453985f88713ac34
...
But a shorter version is also used at some places, such as in the job results location:
JOB LOG : $HOME/avocado/job-results/job-2015-06-10T10.44-49ec339/job.log
Test References¶
A Test Reference is a string that can be resolved into (interpreted as) one or more tests by the Avocado Test Resolver. A given resolver plugin is free to interpret a test reference; it is completely abstract to the other components of Avocado.
Note
Mapping the Test References to tests can be affected by command-line switches like --external-runner, which completely changes the meaning of the given strings.
Test Name¶
A test name is an arbitrarily long string that unambiguously points to the source of a single test. In other words the Avocado Test Resolver, as configured for a particular job, should return one and only one test as the interpretation of this name.
This name can be as specific as necessary to make it unique. Therefore it can contain an arbitrary number of variables, prefixes, suffixes, tags, etc. It all depends on user preferences, what is supported by Avocado via its Test Resolvers and the context of the job.
The output of the Test Resolver when resolving Test References should always be a list of unambiguous Test Names (for that particular job).
Notice that although the Test Name has to be unique, one test can be run more than once inside a job.
By definition, a Test Name is a Test Reference, but the reciprocal is not necessarily true, as the latter can represent more than one test.
Variant IDs¶
The multiplexer component creates different sets of variables (known as “variants”), to allow tests to be run individually in each of them.
A Variant ID is an arbitrary and abstract string created by the multiplexer to identify each variant. It should be unique per variant inside a set. In other words, the multiplexer generates a set of variants, identified by unique IDs.
A simpler implementation of the multiplexer uses serial integers as Variant IDs. A more sophisticated implementation could generate Variant IDs with more semantics, potentially representing their contents.
Note
The multiplexer supports serialized variant IDs only
Test ID¶
A test ID is a string that uniquely identifies a test in the context of a job. When considering a single job, there are no two tests with the same ID.
A test ID should encapsulate the Test Name and the Variant ID, to allow direct identification of a test. In other words, by looking at the test ID it should be possible to identify:
- What’s the test name
- What’s the variant used to run this test (if any)
Test IDs don’t necessarily keep their uniqueness properties when considered outside of a particular job, but two identical jobs run in the exact same environment should generate identical sets of Test IDs.
Syntax:
<unique-id>-<test-name>[;<variant-id>]
Examples of test-names:
'/bin/true'
'/bin/grep foobar /etc/passwd'
'passtest.py:Passtest.test'
'file:///tmp/passtest.py:Passtest.test'
'multiple_tests.py:MultipleTests.test_hello'
'type_specific.io-github-autotest-qemu.systemtap_tracing.qemu.qemu_free'
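For illustration only (the unique and variant IDs below are hypothetical), combining the syntax above with the first and third test names could yield Test IDs such as:
'1-/bin/true'
'2-passtest.py:Passtest.test;1'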
Test Types¶
Avocado, in its simplest configuration, can run two different types of tests [1]. You can mix and match those in a single job.
Instrumented¶
These are tests written in Python or BASH with the Avocado helpers that use the Avocado test API.
To be more precise, the Python file must contain a class derived from avocado.test.Test. This means that an executable written in Python is not always an instrumented test; it may work instead as a simple test.
Instrumented tests allow the writer finer control over the process, including logging, test result status and other more sophisticated test APIs.
Test statuses PASS, WARN, START and SKIP are considered successful ones. The ABORT, ERROR, FAIL, ALERT, RUNNING, NOSTATUS and INTERRUPTED statuses are considered failed ones.
Simple¶
Any executable in your box. The criterion for PASS/FAIL is the return code of the executable: if it returns 0, the test PASSes; if it returns anything else, it FAILs.
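A minimal sketch of a simple test written in Python (the file name and the checked path are arbitrary); only its exit status matters to Avocado:
#!/usr/bin/env python
# exit status decides the result: 0 means PASS, anything else means FAIL
import os
import sys

sys.exit(0 if os.path.exists("/etc/passwd") else 1)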
Test Statuses¶
Avocado sticks to the following definitions of test statuses:
`PASS`: The test passed, which means all conditions being tested have passed.
`FAIL`: The test failed, which means at least one condition being tested has failed. Ideally, it should mean a problem in the software being tested has been found.
`ERROR`: An error happened during the test execution. This can happen, for example, if there’s a bug in the test runner, in its libraries or if a resource breaks unexpectedly. Uncaught exceptions in the test code will also result in this status.
`SKIP`: The test runner decided a requested test should not be run. This can happen, for example, due to missing requirements in the test environment or when there’s a job timeout.
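As a brief illustration of how these statuses map to the instrumented test API (class and method names are arbitrary; see the avocado.Test reference for skip(), fail() and error()):
from avocado import Test


class StatusExample(Test):

    def setUp(self):
        # skip() may only be called from setUp(); calling it elsewhere
        # turns the result into ERROR
        if not self.params.get("enabled", default=True):
            self.skip("disabled via parameters")

    def test(self):
        # returning without an exception yields PASS; self.fail() and
        # self.error() yield FAIL and ERROR respectively
        self.assertEqual(1 + 1, 2)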
Libraries and APIs¶
The Avocado libraries and its APIs are a big part of what Avocado is.
But to avoid issues, you should understand which parts of the Avocado libraries are intended for test writers, and their respective API stability promises.
Test APIs¶
At the most basic level there’s the Test APIs which you should use when writing tests in Python and planning to make use of any other utility library.
The Test APIs can be found in the avocado main module, and their most important member is the avocado.Test class. By conforming to the avocado.Test API, that is, by inheriting from it, you can use the full set of utility libraries.
The Test APIs are guaranteed to be stable across a single major version of Avocado. That means that a test written for a given version of Avocado should not break on later minor versions because of Test API changes.
Utility Libraries¶
There are also a large number of utility libraries that can be found under the avocado.utils namespace. These are very general in nature and can help you speed up your test development.
The utility libraries may receive incompatible changes across minor versions, but these will be done in a staged fashion. If a given change to a utility library can cause test breakage, it will first be documented and/or deprecated, and only on the next subsequent minor version will it actually be changed.
What this means is that upon updating to later minor versions of Avocado, you should look at the Avocado Release Notes for changes that may impact your tests.
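For example, a small sketch of a test combining the Test API with one of the utility libraries:
from avocado import Test
from avocado.utils import process


class UnameTest(Test):

    def test(self):
        # avocado.utils.process wraps process execution for tests
        result = process.run("uname -a")
        self.log.info(result.stdout)
        self.assertEqual(result.exit_status, 0)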
Core (Application) Libraries¶
Finally, everything under avocado.core is part of the application’s infrastructure and should not be used by tests.
Extensions and Plugins can use the core libraries, but API stability is not guaranteed at any level.
Test Resolution¶
When you use the Avocado runner, you’ll frequently provide paths to files, which will be inspected and acted upon depending on their contents. The diagram below shows how Avocado analyzes a file and decides what to do with it:
[Diagram: how a file is resolved into a test type]
It’s important to note that the inspection mechanism is safe (that is, Python classes and files are not actually loaded and executed during the discovery and inspection stage). Because Avocado doesn’t actually load the code and classes, the introspection is simple and will not catch things like buggy test modules, missing imports and miscellaneous bugs in the code you want to list or run. We recommend only running tests from sources you trust, and using static checking and reviews in your test development process.
Due to the simple test inspection mechanism, Avocado will not recognize test classes that inherit from a class derived from avocado.Test. Please refer to the Writing Avocado Tests documentation on how to use the tags functionality to mark derived classes as Avocado test classes.
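As a sketch of that mechanism (refer to Writing Avocado Tests for the authoritative description), a docstring tag can mark a derived class so the resolver lists it:
from avocado import Test


class BaseTest(Test):

    # helpers shared by the real tests
    def common_check(self):
        self.assertTrue(True)


class DerivedTest(BaseTest):

    """
    Derived test, explicitly enabled for the resolver.

    :avocado: enable
    """

    def test(self):
        self.common_check()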
Results Specification¶
On a machine that executed tests, job results are available under [job-results]/job-[timestamp]-[short job ID], where [job-results] is the configured Avocado logs directory (see the data dir plugin), and the directory name includes a timestamp, such as job-2014-08-12T15.44-565e8de. A typical results directory structure can be seen below:
$HOME/avocado/job-results/job-2014-08-13T00.45-4a92bc0/
├── id
├── jobdata
│ ├── args
│ ├── cmdline
│ ├── config
│ ├── multiplex
│ ├── pwd
│ └── urls
├── job.log
├── results.json
├── results.xml
├── sysinfo
│ ├── post
│ │ ├── brctl_show
│ │ ├── cmdline
│ │ ├── cpuinfo
│ │ ├── current_clocksource
│ │ ├── df_-mP
│ │ ├── dmesg_-c
│ │ ├── dmidecode
│ │ ├── fdisk_-l
│ │ ├── gcc_--version
│ │ ├── hostname
│ │ ├── ifconfig_-a
│ │ ├── interrupts
│ │ ├── ip_link
│ │ ├── ld_--version
│ │ ├── lscpu
│ │ ├── lspci_-vvnn
│ │ ├── meminfo
│ │ ├── modules
│ │ ├── mount
│ │ ├── mounts
│ │ ├── numactl_--hardware_show
│ │ ├── partitions
│ │ ├── scaling_governor
│ │ ├── uname_-a
│ │ ├── uptime
│ │ └── version
│ ├── pre
│ │ ├── brctl_show
│ │ ├── cmdline
│ │ ├── cpuinfo
│ │ ├── current_clocksource
│ │ ├── df_-mP
│ │ ├── dmesg_-c
│ │ ├── dmidecode
│ │ ├── fdisk_-l
│ │ ├── gcc_--version
│ │ ├── hostname
│ │ ├── ifconfig_-a
│ │ ├── interrupts
│ │ ├── ip_link
│ │ ├── ld_--version
│ │ ├── lscpu
│ │ ├── lspci_-vvnn
│ │ ├── meminfo
│ │ ├── modules
│ │ ├── mount
│ │ ├── mounts
│ │ ├── numactl_--hardware_show
│ │ ├── partitions
│ │ ├── scaling_governor
│ │ ├── uname_-a
│ │ ├── uptime
│ │ └── version
│ └── profile
└── test-results
└── tests
├── sleeptest.py.1
│ ├── data
│ ├── debug.log
│ └── sysinfo
│ ├── post
│ └── pre
├── sleeptest.py.2
│ ├── data
│ ├── debug.log
│ └── sysinfo
│ ├── post
│ └── pre
└── sleeptest.py.3
├── data
├── debug.log
└── sysinfo
├── post
└── pre
22 directories, 65 files
From what you can see, the results dir has:
- A human readable id in the top level, with the job SHA1.
- A human readable job.log in the top level, with human readable logs of the task.
- Subdirectory jobdata, that contains machine readable data about the job.
- A machine readable results.xml and results.json in the top level, with a summary of the job information in xUnit/json format.
- A top level sysinfo dir, with sub directories pre, post and profile, that store sysinfo files pre/post/during job, respectively.
- Subdirectory test-results, that contains a number of subdirectories (filesystem-friendly test ids). Those test ids represent instances of test execution results.
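Since results.json is machine readable, here is a quick sketch of consuming it (latest is a symlink to the most recent job results directory, as used elsewhere in this guide):
import json
import os

path = os.path.expanduser("~/avocado/job-results/latest/results.json")

with open(path) as handle:
    data = json.load(handle)

# print whatever top-level fields this Avocado version recorded
for key in sorted(data):
    print("%s: %r" % (key, data[key]))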
Test execution instances specification¶
The instances should have:
- A top level human readable debug.log, with test debug information.
- A sysinfo subdir, with sub directories pre and post, that store sysinfo files pre test and post test, respectively.
- A data subdir, where the test can output a number of files if necessary.
[1] Avocado plugins can introduce additional test types.
Job Pre and Post Scripts¶
Avocado ships with a plugin (installed by default) that allows running scripts before and after the actual execution of Jobs. A user can be sure that, when a given “pre” script is run, no test in that job has been run, and when the “post” scripts are run, all the tests in a given job have already finished running.
Configuration¶
By default, the script directory location is:
/etc/avocado/scripts/job
Inside that directory, there is a directory for pre-job scripts:
/etc/avocado/scripts/job/pre.d
And for post-job scripts:
/etc/avocado/scripts/job/post.d
All the configuration for the Pre/Post Job Scripts is placed under the avocado.plugins.jobscripts config section. To change the location for the pre-job scripts, your configuration should look something like this:
[plugins.jobscripts]
pre = /my/custom/directory/for/pre/job/scripts/
Accordingly, to change the location for the post-job scripts, your configuration should look something like this:
[plugins.jobscripts]
post = /my/custom/directory/for/post/scripts/
A couple of other configuration options are available under the same section:
- warn_non_existing_dir: gives warnings if the configured (or default) directory set for either pre or post scripts does not exist
- warn_non_zero_status: gives warnings if a given script (either pre or post) exits with non-zero status
Script Execution Environment¶
All scripts are run in a separate process with some environment variables set. These can be used in your scripts in any way you wish:
- AVOCADO_JOB_UNIQUE_ID: the unique job-id.
- AVOCADO_JOB_STATUS: the current status of the job.
- AVOCADO_JOB_LOGDIR: the filesystem location that holds the logs and various other files for a given job run.
Note: Even though these variables should all be set, it’s good practice for scripts to check whether they’re set before using their values. This may prevent unintended actions such as writing to the current working directory instead of to the AVOCADO_JOB_LOGDIR if that variable is not set.
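A minimal sketch of a pre-job script following the advice above (the file name is an assumption; any executable placed in pre.d works):
#!/usr/bin/env python
# hypothetical /etc/avocado/scripts/job/pre.d/note.py
import os
import sys

logdir = os.environ.get("AVOCADO_JOB_LOGDIR")
job_id = os.environ.get("AVOCADO_JOB_UNIQUE_ID", "unknown")

if logdir is None:
    # refuse to write into the current working directory
    sys.exit(0)

with open(os.path.join(logdir, "pre-script.note"), "w") as record:
    record.write("pre-job script ran for job %s\n" % job_id)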
Finally, any failures in the Pre/Post scripts will not alter the status of the corresponding jobs.
Job Cleanup¶
It’s possible to register a callback function that will be called when all the tests have finished running. This effectively allows for a test job to clean some state it may have left behind.
At the moment, this feature is not intended to be used by test writers, but it’s seen as a feature for Avocado extensions to make use of.
To register a callback function, your code should put a message in a very specific format in the “runner queue”. The Avocado test runner code will understand that this message contains a (serialized) function that will be called once all tests finish running.
Example:
import os

from avocado import Test


def my_cleanup(path_to_file):
    if os.path.exists(path_to_file):
        os.unlink(path_to_file)


class MyCustomTest(Test):
    ...
    cleanup_file = '/tmp/my-custom-state'
    # the function must be pickable, so it is defined at module level;
    # note that "args" must be a tuple
    self.runner_queue.put({"func_at_exit": my_cleanup,
                           "args": (cleanup_file,),
                           "once": True})
    ...
This results in the my_cleanup function being called with the positional argument cleanup_file.
Because once was set to True, only one unique combination of function, positional arguments and keyword arguments will be registered, no matter how many times registration is attempted. For more information check avocado.utils.data_structures.CallbackRegister.register().
Contribution and Community Guide¶
Useful pointers on how to participate in the Avocado community and contribute.
Hacking and Using Avocado¶
Since version 0.31.0, our plugin system requires Setuptools entry points to be registered. If you’re hacking on Avocado and want to use the same, possibly modified, source for running your tests and experiments, you may do so with one additional step:
$ make develop
On POSIX systems this will create an “egg link” to your original source tree under “$HOME/.local/lib/pythonX.Y/site-packages”. Then, on your original source tree, an “egg info” directory will be created, containing, among other things, the Setuptools entry points mentioned before. This works like a symlink, so you only need to run this once (unless you add a new entry-point, then you need to re-run it to make it available).
Avocado supports various plugins, which are distributed as separate projects, for example “avocado-vt” and “avocado-virt”. These also need to be deployed and linked in order to work properly with the avocado from sources (the installed version works out of the box). To simplify this you can use make requirements-plugins from the main avocado project to install the requirements of the plugins, and make link to link and develop the plugins. The workflow could be:
$ cd $AVOCADO_PROJECTS_DIR
$ git clone $AVOCADO_GIT
$ git clone $AVOCADO_PROJECT2
$ # Add more projects
$ cd avocado # go into the main avocado project dir
$ make requirements-plugins
$ make link
You should see the process and status for each directory.
Contact information¶
- Avocado-devel mailing list: https://www.redhat.com/mailman/listinfo/avocado-devel
- Avocado IRC channel: irc.oftc.net #avocado
Contributing to Avocado¶
Avocado uses github and the github pull request development model. You can find a primer on how to use github pull requests here. Every Pull Request you send will be automatically tested by Travis CI and review will take place in the Pull Request as well.
For people who don’t like the github development model, there is the option of sending the patches to the Mailing List, following a workflow more traditional in Open Source development communities. The patches will be reviewed in the Mailing List, should you opt for that. Then a maintainer will collect the patches, integrate them on a branch, and then those patches will be submitted as a github Pull Request. This process tries to ensure that every contributed patch goes through the CI jobs before it is considered good for inclusion.
Git workflow¶
Fork the repository in github.
Clone from your fork:
$ git clone git@github.com:<username>/avocado.git
Enter the directory:
$ cd avocado
Create a
remote
, pointing to the upstream:$ git remote add upstream git@github.com:avocado-framework/avocado.git
Configure your name and e-mail in git:
$ git config --global user.name "Your Name"
$ git config --global user.email email@foo.bar
Golden tip: never work on local branch master. Instead, create a new local branch and checkout to it:
$ git checkout -b my_new_local_branch
Code and then commit your changes:
$ git add new-file.py
$ git commit -s  # or "git commit -as" to commit all changes
Write a good commit message, pointing out the motivation and the issues you’re addressing. Usually you should try to explain 3 points in the commit message: motivation, approach and effects:
header          <- Limited to 72 characters. No period.
                <- Blank line
message         <- Any number of lines, limited to 72 characters per line.
                <- Blank line
Reference:      <- External references, one per line (issue, trello, ...)
Signed-off-by:  <- Signature (created by git commit -s)
Make sure your code is working (install your version of avocado, test your change, run make check to make sure you didn’t introduce any regressions).
Paste the job.log file content from the previous step in a pastebin service, like fpaste.org. If you have fpaste installed, you can simply run:
$ fpaste ~/avocado/job-results/latest/job.log
Rebase your local branch on top of upstream master:
$ git fetch
$ git rebase upstream/master
(resolve merge conflicts, if any)
Push your commit(s) to your fork:
$ git push origin my_new_local_branch
Create the Pull Request on github. Add the relevant information to the Pull Request description.
In the Pull Request discussion page, comment with the link to the job.log output/file.
Check if your Pull Request passes the CI (travis). Your Pull Request will probably be ignored until it’s all green.
Now you’re waiting for feedback on github Pull Request page. Once you get some, join the discussion, answer the questions, make clear if you’re going to change the code based on some review and, if not, why. Feel free to disagree with the reviewer, they probably have different use cases and opinions, which is expected. Try describing yours and suggest other solutions, if necessary.
New versions of your code should not be force-updated (unless explicitly requested by the code reviewer). Instead, you should:
Create a new branch out of your previous branch:
$ git checkout my_new_local_branch
$ git checkout -b my_new_local_branch_v2
Code, and amend the commit(s) and/or create new commits. If you have more than one commit in the PR, you will probably need to rebase interactively to amend the right commits. git cola or git citool can be handy here.
Rebase your local branch on top of upstream master:
$ git fetch
$ git rebase upstream/master
(resolve merge conflicts, if any)
Push your changes:
$ git push origin my_new_local_branch_v2
Create a new Pull Request for this new branch. In the Pull Request description, point the previous Pull Request and the changes the current Pull Request introduced when compared to the previous Pull Request(s).
Close the previous Pull Request on github.
After your PR gets merged, you can sync the master branch on your local repository and propagate the sync to the master branch in your fork repository on github:
$ git checkout master
$ git pull upstream master
$ git push
From time to time, you can remove old branches to avoid pollution:
# To list branches along with time reference:
$ git for-each-ref --sort='-authordate:iso8601' --format=' %(authordate:iso8601)%09%(refname)' refs/heads
# To remove branches from your fork repository:
$ git push origin :my_old_branch
Signing commits¶
Optionally you can sign your commits using GPG signatures. Doing so is simple and it helps prevent unauthorized code from being merged without notice.
All you need is a valid GPG key, some git configuration, a slightly modified workflow to use the key and, optionally, some setup in github so you benefit from the “nice” UI.
Get a GPG key:
# Google for howto, but generally it works like this
$ gpg --gen-key # defaults are usually fine (using expiration is recommended)
$ gpg --send-keys $YOUR_KEY # to propagate the key to outer world
Enable it in git:
$ git config --global user.signingkey $YOUR_KEY
(optional) Link the key with your GH account:
1. Login to github
2. Go to settings->SSH and GPG keys
3. Add New GPG key
4. run gpg -a --export $YOUR_EMAIL in a shell to see your key
5. paste the key there
Use it:
# You can sign commits by using '-S'
$ git commit -S
# You can sign merges by using '-S'
$ git merge -S
Warning
You cannot use the merge button on github to do signed merges, as github does not have your private key.
Tests Repositories¶
We encourage you or your company to create public Avocado test repositories so the community can also benefit from your tests. We will be pleased to advertise your repository here in our documentation.
List of known community and third party maintained repositories:
- https://github.com/avocado-framework-tests/avocado-misc-tests: Community maintained Avocado miscellaneous tests repository. There you will find, among others, performance tests like lmbench, stress, cpu tests like ebizzy and generic tests like ltp. Some of them were ported from the Autotest Client Tests repository.
- https://github.com/scylladb/scylla-cluster-tests: Avocado tests for Scylla Clusters. Those tests can automatically create a scylla cluster, some loader machines and then run operations defined by the test writers, such as database workloads.
Avocado development tips¶
Interrupting test¶
In case you want to “pause” a running test, you can use the SIGTSTP (ctrl+z) signal sent to the main avocado process. This signal is forwarded to the test and its child processes. To resume testing, you repeat the same signal.
Note that the job/test timeouts are still enabled on stopped processes.
In tree utils¶
You can find handy utils in avocado.utils.debug:
measure_duration¶
This decorator can be used to print the current duration of the executed function and the accumulated duration of this decorated function. It’s very handy when optimizing.
Usage:
from avocado.utils import debug

...

@debug.measure_duration
def your_function(*args, **kwargs):
    ...
During the execution look for:
PERF: <function your_function at 0x29b17d0>: (0.1s, 11.3s)
PERF: <function your_function at 0x29b17d0>: (0.2s, 11.5s)
Line-profiler¶
You can measure line-by-line performance by using line_profiler. You can install it using pip:
pip install line_profiler
and then simply mark the desired function with @profile (no need to import it from anywhere). Then you execute:
kernprof -l -v ./scripts/avocado run ...
and when the process finishes you’ll see the profiling information (sometimes the binary is called kernprof.py).
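Here is a sketch of a function marked for kernprof (the try/except fallback is an assumption for convenience: @profile only exists when kernprof injects it, so the no-op lets the file also run on its own):
try:
    profile                       # injected into builtins by kernprof
except NameError:
    def profile(func):            # no-op fallback for standalone runs
        return func


@profile
def hot_loop():
    total = 0
    for index in range(10000):
        total += index * index
    return total


if __name__ == "__main__":
    print(hot_loop())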
Remote debug with Eclipse¶
Eclipse is a nice debugging frontend which allows remote debugging. It’s very simple; the only thing you need is Eclipse with the pydev plugin. Then you need to locate the pydevd path (usually $INSTALL_LOCATION/plugins/org.python.pydev_*/pysrc or ~/.eclipse/plugins/org.python.pydev_*/pysrc). Then you set the breakpoint by:
import sys
sys.path.append("$PYDEV_PATH")
import pydevd
pydevd.settrace("$IP_ADDR_OF_ECLIPSE_MACHINE")
Alternatively you can export PYTHONPATH=$PYDEV_PATH and use only the last 2 lines.
Before you run the code, you need to start Eclipse’s debug server. Switch to the Debug perspective (you might need to open it first: Window->Perspective->Open Perspective). Then start the server from Pydev->Start Debug Server.
Now whenever the pydevd.settrace() code is executed, it contacts the Eclipse debug server (port 8000 by default; don’t forget to open it) and you can easily continue execution. This works on every remote machine which has access to your Eclipse’s port 8000 (you can override it).
Using Trello cards in Eclipse¶
Eclipse allows us to create tasks. They are pretty cool, as you see the status (not started, started, current, done) and, by switching tasks, Eclipse automatically resumes where you previously finished (opened files, ...).
Avocado is planned using Trello, which is not yet supported by Eclipse. Anyway, there is a way to at least get a read-only list of your cards. This guide is based on https://docs.google.com/document/d/1jvmJcCStE6QkJ0z5ASddc3fNmJwhJPOFN7X9-GLyabM/ which didn’t work well with labels and descriptions. The only difference is you need to use the Query Pattern:
\"url\":\"https://trello.com/[^/]*/[^/]*/({Id}[^\"]+)({Description})\"
Setup Trello key:
- Create a Trello account
- Get (developer_key) here: https://trello.com/1/appKey/generate
- Get user_token from following address (replace key with your key): https://trello.com/1/authorize?key=$developer_key&name=Mylyn%20Tasks&expiration=never&response_type=token
- Address with your assigned tasks (task_addr) is: https://trello.com/1/members/my/cards?key=developer_key&token=$user_token Open it in web browser and you should see [] or [$list_of_cards] without any passwords.
Configure Eclipse:
- We’re going to need Web Templates, which are not yet upstream. We need to use the incubator version.
- Help->Install New Software...
- -> Add
- Name: Incubator
- Location: http://download.eclipse.org/mylyn/incubator/3.10
- -> OK
- Select Mylyn Tasks Connector: Web Templates (Advanced) (Incubation) (use filter text to find it)
- Install it (Next->Agree->Next...)
- Restart Eclipse
- Open the Mylyn Team Repositories Window->Show View->Other...->Mylyn->Team Repositories
- Right click the Team Repositories and select New->Repository
- Use Task Repository -> Next
- Use Web Template (Advanced) -> Next
- In the Properties for Task Repository dialog box, enter https://trello.com in the Server field and give the repository a label (eg. Trello API).
- In the Additional Settings section set applicationkey = $developer_key and userkey = $user_token.
- In the Advanced Configuration set the Task URL to https://trello.com/c/
- Set New Task URL to https://trello.com
- Set the Query Request URL (no changes required): https://trello.com/1/members/my/cards?key=${applicationkey}&token=${userkey}
- For the Query Pattern enter \"url\":\"https://trello.com/[^/]*/[^/]*/({Id}[^\"]+)({Description})\"
- -> Finish
Create task query:
- Create a query by opening the Mylyn Task List.
- Right click the pane and select New Query.
- Select Trello API as the repository.
- -> Next
- Enter the name of your query.
- Expand the Advanced Configuration and make sure the Query Pattern is filled in
- Press Preview to confirm that there are no errors.
- Press Finish.
- Trello tasks assigned to you will now appear in the Mylyn Task List.
Now you can start using tasks by clicking the small bubble in front of the name. This closes all editors. Try opening some and then click the bubble again. They should get closed. When you click the bubble a third time, it should resume all the open editors from before.
My usual workflow is:
- git checkout $branch
- Eclipse: select task
- git commit ...
- Eclipse: unselect task
- git checkout $other_branch
- Eclipse: select another_task
This way you always have all the files present and you can easily resume your work.
Releasing avocado¶
So you have all PRs approved, the Sprint meeting is done and now Avocado is ready to be released. Great, let’s go over (most of) the details you need to pay attention to.
Bump the version number¶
Go through the avocado code base and update the release number. At the time of this writing, the diff looked like this:
diff --git a/avocado.spec b/avocado.spec
index eb910e8..21313ca 100644
--- a/avocado.spec
+++ b/avocado.spec
@@ -1,7 +1,7 @@
Summary: Avocado Test Framework
Name: avocado
-Version: 0.28.0
-Release: 2%{?dist}
+Version: 0.29.0
+Release: 0%{?dist}
License: GPLv2
Group: Development/Tools
URL: http://avocado-framework.github.io/
@@ -104,6 +104,9 @@ examples of how to write tests on your own.
%{_datadir}/avocado/wrappers
%changelog
+* Wed Oct 7 2015 Lucas Meneghel Rodrigues <lmr@redhat.com> - 0.29.0-0
+- New upstream release 0.29.0
+
* Wed Sep 16 2015 Lucas Meneghel Rodrigues <lmr@redhat.com> - 0.28.0-2
- Add pystache, aexpect, psutil, sphinx and yum/dnf dependencies for functional/unittests
diff --git a/avocado/core/version.py b/avocado/core/version.py
index c927b19..a555af5 100755
--- a/avocado/core/version.py
+++ b/avocado/core/version.py
@@ -18,7 +18,7 @@ __all__ = ['MAJOR', 'MINOR', 'RELEASE', 'VERSION']
MAJOR = 0
-MINOR = 28
+MINOR = 29
RELEASE = 0
VERSION = "%s.%s.%s" % (MAJOR, MINOR, RELEASE)
diff --git a/setup.cfg b/setup.cfg
index 76953b9..5cf90e9 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,6 +1,6 @@
[metadata]
name = avocado
-version = 0.28.0
+version = 0.29.0
summary = Avocado Test Framework
description-file =
README.rst
You can find such commits in the git history; they will help you get oriented for the other repos.
Which repositories you should pay attention to¶
In general, a release of avocado includes taking a look and eventually release content in the following repositories:
avocado
avocado-vt
Tag all repositories¶
When everything is in good shape, commit the version changes and tag that commit in master with:
$ git tag -u $(GPG_ID) -s $(RELEASE) -m 'Avocado Release $(RELEASE)'
Then the tag should be pushed to the GIT repository with:
$ git push --tags
Build RPMs¶
Go to the source directory and do:
$ make rpm
...
+ exit 0
This should be all. It will build packages using mock, targeting your default configuration. That usually means the same platform you’re currently on.
Sign Packages¶
All the packages should be signed for safer public consumption. The process is, of course, dependent on the private keys, but is based on running:
$ rpm --resign
For more information look at the rpmsign(8) man page.
Upload packages to repository¶
The current distribution method is based on serving content over HTTP. That means that repository metadata is created locally and synchronized to the well-known public Web server. A process similar to:
$ cd $REPO_ROOT && for DIR in epel-?-noarch fedora-??-noarch; \
do cd $DIR && createrepo -v . && cd ..; done;
Creates the repo metadata locally. Then a command similar to:
$ rsync -va $REPO_ROOT user@repo_web_server:/path
Is used to copy the content over.
Write release notes¶
Release notes give an idea of what has changed on a given development cycle. Good places to go for release notes are:
- Git logs
- Trello Cards (Look for the Done lists)
- Github compare views: https://github.com/avocado-framework/avocado/compare/0.28.0...0.29.0
Go there and try to write a text that represents the changes that the release encompasses.
Upload package to PyPI¶
Users may also want to get Avocado from the PyPI repository, so please upload there as well. To help with the process, please run:
$ make pypi
And follow the URL and brief instructions given.
Send e-mails to avocado-devel and other places¶
Send the e-mail with the release notes to avocado-devel and virt-test-devel.
API Reference¶
Test APIs¶
This is the bare minimum set of APIs that users should use, and can rely on, while writing tests.
Module contents¶
avocado.main¶
alias of TestProgram

class avocado.Test(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)¶
Bases: unittest.case.TestCase
Base implementation for the test class.
You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods on your own tests.
Initializes the test.
Parameters:
- methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
- name (avocado.core.test.TestName) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set. This is reserved for internal Avocado use, such as when running random executables as tests.
- base_logdir – Directory where test logs should go. If None provided, it’ll use avocado.data_dir.create_job_logs_dir().
- job – The job that this test is part of.
basedir¶
The directory where this test (when backed by a file) is located at
cache_dirs = None¶
datadir¶
Returns the path to the directory that contains test data files
default_params = {}¶
error(message=None)¶
Errors the currently running test.
After calling this method a test will be terminated and have its status as ERROR.
Parameters: message (str) – an optional message that will be recorded in the logs
fail(message=None)¶
Fails the currently running test.
After calling this method a test will be terminated and have its status as FAIL.
Parameters: message (str) – an optional message that will be recorded in the logs
fetch_asset(name, asset_hash=None, algorithm='sha1', locations=None, expire=None)¶
Method to call utils.asset in order to fetch an asset file, supporting hash check, caching and multiple locations.
Parameters:
- name – the asset filename or URL
- asset_hash – asset hash (optional)
- algorithm – hash algorithm (optional, defaults to sha1)
- locations – list of URLs from where the asset can be fetched (optional)
- expire – time for the asset to expire
Raises: EnvironmentError – When it fails to fetch the asset
Returns: asset file local path
filename¶
Returns the name of the file (path) that holds the current test
get_state()¶
Serialize selected attributes representing the test state
Returns: a dictionary containing relevant test state data
Return type: dict
report_state()¶
Send the current test state to the test runner process
run_avocado()¶
Wraps the run method, for execution inside the avocado runner.
Result: Unused param, compatibility with unittest.TestCase.
skip(message=None)¶
Skips the currently running test.
This method should only be called from a test’s setUp() method, not anywhere else, since by definition, if a test gets to be executed, it can’t be skipped anymore. If you call this method outside setUp(), avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.
Parameters: message (str) – an optional message that will be recorded in the logs
srcdir = None¶
workdir = None¶

avocado.fail_on(exceptions=None)¶
Fail the test when the decorated function produces an exception of the specified type.
(For example, our method may raise IndexError on tested software failure. We can either try/catch it or use this decorator instead)
Parameters: exceptions – Tuple or single exception to be assumed as test fail [Exception]
Note: self.error and self.skip behavior remains intact
Note: To allow simple usage param “exceptions” must not be callable
Utilities APIs¶
This is a set of utility APIs that Avocado provides as added value to test writers.
Subpackages¶
avocado.utils.external package¶
Submodules¶
avocado.utils.external.gdbmi_parser module¶
avocado.utils.external.gdbmi_parser.parse(tokens)¶
avocado.utils.external.gdbmi_parser.process(input)¶
avocado.utils.external.gdbmi_parser.scan(input)¶
avocado.utils.external.spark module¶
class avocado.utils.external.spark.GenericASTBuilder(AST, start)¶
Bases: avocado.utils.external.spark.GenericParser
buildASTNode(args, lhs)¶
nonterminal(type, args)¶
preprocess(rule, func)¶
terminal(token)¶
class avocado.utils.external.spark.GenericASTMatcher(start, ast)¶
Bases: avocado.utils.external.spark.GenericParser
foundMatch(args, func)¶
match(ast=None)¶
match_r(node)¶
preprocess(rule, func)¶
resolve(list)¶
class avocado.utils.external.spark.GenericASTTraversal(ast)¶
default(node)¶
postorder(node=None)¶
preorder(node=None)¶
prune()¶
typestring(node)¶
exception avocado.utils.external.spark.GenericASTTraversalPruningException¶
Bases: exceptions.Exception
class avocado.utils.external.spark.GenericParser(start)¶
add(set, item, i=None, predecessor=None, causal=None)¶
addRule(doc, func, _preprocess=1)¶
ambiguity(rules)¶
augment(start)¶
buildTree(nt, item, tokens, k)¶
causal(key)¶
collectRules()¶
computeNull()¶
deriveEpsilon(nt)¶
error(token)¶
finalState(tokens)¶
goto(state, sym)¶
gotoST(state, st)¶
gotoT(state, t)¶
isnullable(sym)¶
makeNewRules()¶
makeSet(token, sets, i)¶
makeSet_fast(token, sets, i)¶
makeState(state, sym)¶
makeState0()¶
parse(tokens)¶
predecessor(key, causal)¶
preprocess(rule, func)¶
resolve(list)¶
skip(lhs_rhs, pos=0)¶
typestring(token)¶
Module contents¶
Submodules¶
avocado.utils.archive module¶
Module to help extract and create compressed archives.
exception avocado.utils.archive.ArchiveException¶
Bases: exceptions.Exception
Base exception for all archive errors.
class avocado.utils.archive.ArchiveFile(filename, mode='r')¶
Bases: object
Class that represents an Archive file.
Archives are ZIP files or Tarballs.
Creates an instance of ArchiveFile.
Parameters:
- filename – the archive file name.
- mode – file mode, r read, w write.
add(filename, arcname=None)¶
Add file to the archive.
Parameters:
- filename – file to archive.
- arcname – alternative name for the file in the archive.
close()¶
Close archive.
extract(path='.')¶
Extract all files from the archive.
Parameters: path – destination path.
list()¶
List files to the standard output.
classmethod open(filename, mode='r')¶
Creates an instance of ArchiveFile.
Parameters:
- filename – the archive file name.
- mode – file mode, r read, w write.

avocado.utils.archive.compress(filename, path)¶
Compress files in an archive.
Parameters:
- filename – archive file name.
- path – origin directory path to files to compress. No individual files allowed.
avocado.utils.archive.create(filename, path)¶
Compress files in an archive.
Parameters:
- filename – archive file name.
- path – origin directory path to files to compress. No individual files allowed.
avocado.utils.archive.extract(filename, path)¶
Extract files from an archive.
Parameters:
- filename – archive file name.
- path – destination path to extract to.
avocado.utils.archive.is_archive(filename)¶
Test if a given file is an archive.
Parameters: filename – file to test.
Returns: True if it is an archive.
avocado.utils.archive.uncompress(filename, path)¶
Extract files from an archive.
Parameters:
- filename – archive file name.
- path – destination path to extract to.
avocado.utils.asset module¶
Asset fetcher from multiple locations
class avocado.utils.asset.Asset(name, asset_hash, algorithm, locations, cache_dirs, expire=None)¶
Bases: object
Try to fetch/verify an asset file from multiple locations.
Initialize the Asset() class.
Parameters:
- name – the asset filename. url is also supported
- asset_hash – asset hash
- algorithm – hash algorithm
- locations – list of locations to fetch the asset from
- cache_dirs – list of cache directories
- expire – time in seconds for the asset to expire
fetch()¶
Fetches the asset. First tries to find the asset on the provided cache_dirs list. Then tries to download the asset from the locations list provided.
Raises: EnvironmentError – When it fails to fetch the asset
Returns: The path for the file on the cache directory.
avocado.utils.astring module¶
Operations with strings (conversion and sanitation).
The unusual name aims to avoid causing name clashes with the stdlib module string. Even with the dot notation, people may try to do things like:
import string
...
from avocado.utils import string
and not notice until their code starts failing.
avocado.utils.astring.bitlist_to_string(data)¶
Transform from bit list to ASCII string.
Parameters: data – Bit list to be transformed
avocado.utils.astring.iter_tabular_output(matrix, header=None)¶
Generator for a pretty, aligned string representation of a nxm matrix.
This representation can be used to print any tabular data, such as database results. It works by scanning the lengths of each element in each column, and determining the format string dynamically.
Parameters:
- matrix – Matrix representation (list with n rows of m elements).
- header – Optional tuple or list with header elements to be displayed.
avocado.utils.astring.shell_escape(command)¶
Escape special characters from a command so that it can be passed as a double quoted (" ") string in a (ba)sh command.
Parameters: command – the command string to escape.
Returns: The escaped command string. The required enclosing double quotes are NOT added and so should be added at some point by the caller.
See also: http://www.tldp.org/LDP/abs/html/escapingsection.html
avocado.utils.astring.string_safe_encode(string)¶
People tend to mix unicode streams with encoded strings. This function tries to replace any input with a valid utf-8 encoded ascii stream.
avocado.utils.astring.string_to_bitlist(data)¶
Transform from ASCII string to bit list.
Parameters: data – String to be transformed
avocado.utils.astring.string_to_safe_path(string)¶
Convert string to a valid file/dir name.
Parameters: string – String to be converted.
Returns: String which is safe to pass as a file/dir name (on recent fs)
avocado.utils.astring.strip_console_codes(output, custom_codes=None)¶
Remove the Linux console escape and control sequences from the console output, making the output readable and usable for result checks. Currently only removes some basic console codes used during boot up.
Parameters:
- output (string) – The output from the Linux console
- custom_codes – Codes added to the default console codes, for sequences not covered by the defaults
Returns: the string without any special codes
Return type: string
avocado.utils.astring.tabular_output(matrix, header=None)¶
Pretty, aligned string representation of a nxm matrix.
This representation can be used to print any tabular data, such as database results. It works by scanning the lengths of each element in each column, and determining the format string dynamically.
Parameters:
- matrix – Matrix representation (list with n rows of m elements).
- header – Optional tuple or list with header elements to be displayed.
Returns: String with the tabular output, lines separated by unix line feeds.
Return type: string
avocado.utils.aurl module¶
URL related functions.
The strange name is to avoid accidental naming collisions in code.
avocado.utils.aurl.is_url(path)¶
Return True if path looks like a URL.
Parameters: path – path to check.
Return type: Boolean.
avocado.utils.build module¶
-
avocado.utils.build.
make
(path, make='make', env=None, extra_args='', ignore_status=False, allow_output_check='none')¶ Run make, adding MAKEOPTS to the list of options.
Parameters: - make – what make command name to use.
- env – dictionary with environment variables to be set before calling make (e.g.: CFLAGS).
- extra – extra command line arguments to pass to make.
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) of the make process in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error, and ‘none’, to allow none to be recorded (default). The default here is ‘none’, because usually we don’t want to use the compilation output as a reference in tests.
Returns: exit status of the make process
-
avocado.utils.build.
run_make
(path, make='make', env=None, extra_args='', ignore_status=False, allow_output_check='none')¶ Run make, adding MAKEOPTS to the list of options.
Parameters: - make – what make command name to use.
- env – dictionary with environment variables to be set before calling make (e.g.: CFLAGS).
- extra_args – extra command line arguments to pass to make.
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) of the make process in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error, and ‘none’, to allow none to be recorded (default). The default here is ‘none’, because usually we don’t want to use the compilation output as a reference in tests.
Returns: the make command result object
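A minimal usage sketch for these helpers; the source path is hypothetical:
from avocado.utils import build

# make() returns the exit status of the make process
status = build.make("/path/to/sources", extra_args="-j 4")
assert status == 0
# run_make() takes the same arguments, but returns the command result object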
avocado.utils.cpu module¶
Get information from the current machine's CPU.
-
avocado.utils.cpu.
cpu_has_flags
(flags)¶ Check if a list of flags is available in the current CPU info
Parameters: flags (list) – A list of CPU flags that must exist on the current CPU. Returns: True if all the flags were found, False otherwise. Return type: bool
-
avocado.utils.cpu.
cpu_online_list
()¶ Reports a list of indexes of the online cpus
-
avocado.utils.cpu.
get_cpu_arch
()¶ Work out which CPU architecture we’re running on
-
avocado.utils.cpu.
get_cpu_vendor_name
()¶ Get the current cpu vendor name
Returns: string ‘intel’ or ‘amd’ or ‘power7’ depending on the current CPU architecture. Return type: string
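A short sketch exercising these functions (output values will vary per machine):
from avocado.utils import cpu

print(cpu.get_cpu_arch())            # architecture name
print(cpu.cpu_online_list())         # e.g. [0, 1, 2, 3]
print(cpu.cpu_has_flags(["sse2"]))   # True if all listed flags are present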
avocado.utils.crypto module¶
-
avocado.utils.crypto.
hash_file
(filename, size=None, algorithm='md5')¶ Calculate the hash value of filename.
If size is not None, limit to first size bytes. Throw exception if something is wrong with filename. Can also be implemented with a bash one-liner (assuming size%1024==0):
dd if=filename bs=1024 count=size/1024 | sha1sum -
Parameters: - filename – Path of the file that will have its hash calculated.
- algorithm – Algorithm used to calculate the hash. Supported algorithms: md5, sha1.
- size – If provided, hash only the first size bytes of the file.
Returns: Hash of the file; if something goes wrong, returns None.
-
avocado.utils.crypto.
hash_wrapper
(algorithm='md5', data=None)¶ Returns a hash object of data using either md5 or sha1 only.
Parameters: data – Optional input string that will be used to update the hash. Returns: Hash object.
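A minimal sketch of hash_file in use (the file path is arbitrary):
from avocado.utils import crypto

digest = crypto.hash_file("/etc/hostname", algorithm="sha1")
if digest is None:
    print("something went wrong while hashing")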
avocado.utils.data_factory module¶
Generate data useful for the avocado framework and tests themselves.
-
avocado.utils.data_factory.
generate_random_string
(length, ignore='!"#$%&\'()*+, -./:;<=>?@[\\]^_`{|}~', convert='')¶ Generate a random string using alphanumeric characters.
Parameters: - length – Length of the string to be generated.
- ignore – Characters that will not be included in the generated string.
- convert – Characters that will be escaped in the generated string.
Returns: The generated random string.
-
avocado.utils.data_factory.
make_dir_and_populate
(basedir='/tmp')¶ Create a directory in basedir and populate with a number of files.
The files just have random text contents.
Parameters: basedir (str) – Base directory where directory should be generated. Returns: Path of the dir created and populated. Return type: str
avocado.utils.data_structures module¶
This module contains handy classes that can be used inside avocado core code or plugins.
-
class
avocado.utils.data_structures.
Borg
¶ Multiple instances of this class will share the same state.
This is considered a better design pattern in Python than more popular patterns, such as the Singleton. Inspired by Alex Martelli’s article mentioned below:
See: http://www.aleax.it/5ep.html
-
class
avocado.utils.data_structures.
CallbackRegister
(name, log)¶ Bases:
object
Registers picklable functions to be executed later.
Parameters: name – Human readable identifier of this register -
register
(func, args, kwargs, once=False)¶ Register function/args to be called on self.destroy()
Parameters: - func – Picklable function.
- args – Picklable positional arguments.
- kwargs – Picklable keyword arguments.
- once – Add unique (func, args, kwargs) combination only once.
-
run
()¶ Call all registered functions
-
unregister
(func, args, kwargs)¶ Unregister (func, args, kwargs) combination
Parameters: - func – Picklable function.
- args – Picklable positional arguments.
- kwargs – Picklable keyword arguments.
-
-
class
avocado.utils.data_structures.
LazyProperty
(f_get)¶ Bases:
object
Lazily instantiated property.
Use this decorator when you want to set a property that will only be evaluated the first time it’s accessed. Inspired by the discussion in the Stack Overflow thread below:
See: http://stackoverflow.com/questions/15226721/
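A minimal sketch of the decorator in use; the class and its computation are hypothetical:
from avocado.utils.data_structures import LazyProperty

class Dataset(object):
    @LazyProperty
    def stats(self):
        # evaluated only on first access, then cached
        return sum(range(1000000))

data = Dataset()
data.stats  # computed here
data.stats  # cached value returned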
-
avocado.utils.data_structures.
compare_matrices
(matrix1, matrix2, threshold=0.05)¶ Compare 2 nxm matrices and return an nxm matrix with comparison data and stats. When the first columns match, they are considered a header and included in the results intact.
Parameters: - matrix1 – Reference Matrix of floats; first column could be header.
- matrix2 – Matrix that will be compared; first column could be header
- threshold – Any difference greater than this percent threshold will be reported.
Returns: Matrix with the difference in comparison, number of improvements, number of regressions, total number of comparisons.
-
avocado.utils.data_structures.
geometric_mean
(values)¶ Evaluates the geometric mean for a list of numeric values. This implementation is slower but allows an unlimited number of values.
Parameters: values – List with values. Returns: Single value representing the geometric mean for the list values. See: http://en.wikipedia.org/wiki/Geometric_mean
-
avocado.utils.data_structures.
ordered_list_unique
(object_list)¶ Returns a unique list of objects, with their original order preserved
-
avocado.utils.data_structures.
time_to_seconds
(time)¶ Convert time in minutes, hours or days to seconds.
Parameters: time – Time, optionally including the unit (e.g. ‘10d’)
avocado.utils.debug module¶
This file contains tools for (not only) Avocado developers.
-
avocado.utils.debug.
log_calls
(length=None, cls_name=None)¶ Use this as a decorator to log the function call along with its arguments.
Parameters: - length – Max message length.
- cls_name – Optional class name prefix.
-
avocado.utils.debug.
log_calls_class
(length=None)¶ Use this as a class decorator to log calls to the class’s methods.
Parameters: length – Max message length.
-
avocado.utils.debug.
measure_duration
(func)¶ Use this as a decorator to measure the duration of a function’s execution. The output is “Function $name: ($current_duration, $accumulated_duration)”
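A minimal sketch of measure_duration as a decorator (the decorated function is hypothetical):
from avocado.utils import debug

@debug.measure_duration
def expensive_operation():
    return sum(range(1000000))

expensive_operation()  # duration is reported as described above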
avocado.utils.distro module¶
This module provides the client facilities to detect the Linux Distribution it’s running under.
-
class
avocado.utils.distro.
LinuxDistro
(name, version, release, arch)¶ Bases:
object
Simple collection of information for a Linux Distribution
Initializes a new Linux Distro
Parameters: - name (str) – a short name that precisely distinguishes this Linux Distribution among all others.
- version (str) – the major version of the distribution. Usually this is a single number that denotes a large development cycle and support lifetime.
- release (str) – the release or minor version of the distribution. Usually this is also a single number that is often omitted, or starts with a 0, when the major version is initially released. It’s often associated with a shorter development cycle that contains an incremental collection of improvements and fixes.
- arch (str) – the main target for this Linux Distribution. It’s common for some architectures to ship with packages for previous, still compatible, architectures, as is the case with the Intel/AMD 64-bit architecture, which supports 32-bit code. In cases like this, this should be set to the 64-bit architecture name.
-
class
avocado.utils.distro.
Probe
¶ Bases:
object
Probes the machine and does its best to confirm it’s the right distro
-
CHECK_FILE
= None¶ Points to a file that can determine if this machine is running a given Linux Distribution. This serves as a first check that enables the extra checks to carry on.
-
CHECK_FILE_CONTAINS
= None¶ Sets the content that should be checked on the file pointed to by
CHECK_FILE
. Leave it set to None (its default) to check only if the file exists, and not check its contents
-
CHECK_FILE_DISTRO_NAME
= None¶ The name of the Linux Distribution to be returned if the file defined by
CHECK_FILE
exists.
-
CHECK_VERSION_REGEX
= None¶ A regular expression that will be run on the file pointed to by
CHECK_FILE
-
check_name_for_file
()¶ Checks if this class will look for a file and return a distro
The conditions that must be true include the file that identifies the distro being set (
CHECK_FILE
) and the name of the distro to be returned (CHECK_FILE_DISTRO_NAME
)
-
check_name_for_file_contains
()¶ Checks if this class will look for text on a file and return a distro
The conditions that must be true include the file that identifies the distro being set (
CHECK_FILE
), the text to look for inside the distro file (CHECK_FILE_CONTAINS
) and the name of the distro to be returned (CHECK_FILE_DISTRO_NAME
)
-
check_release
()¶ Checks if the conditions are met to look for the release number
-
check_version
()¶ Checks if this class will look for a regex in file and return a distro
-
get_distro
()¶ Returns the
LinuxDistro
this probe detected
-
name_for_file
()¶ Get the distro name if the
CHECK_FILE
is set and exists
-
name_for_file_contains
()¶ Get the distro if the
CHECK_FILE
is set and has content
-
release
()¶ Returns the release of the distro
-
version
()¶ Returns the version of the distro
-
-
avocado.utils.distro.
register_probe
(probe_class)¶ Register a probe to be run during autodetection
-
avocado.utils.distro.
detect
()¶ Attempts to detect the Linux Distribution running on this machine
Returns: the detected LinuxDistro
or UNKNOWN_DISTRO
Return type: LinuxDistro
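A minimal detection sketch, assuming the LinuxDistro attributes mirror the constructor parameters documented above:
from avocado.utils import distro

detected = distro.detect()
print("%s %s %s %s" % (detected.name, detected.version,
                       detected.release, detected.arch))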
avocado.utils.download module¶
Methods to download URLs and regular files.
-
avocado.utils.download.
get_file
(src, dst, permissions=None, hash_expected=None, hash_algorithm='md5', download_retries=1)¶ Gets a file from a source location, optionally using caching.
If no hash_expected is provided, simply download the file. Else, keep trying to download the file until download_failures exceeds download_retries or the hashes match.
If the hashes match, return dst. If download_failures exceeds download_retries, raise an EnvironmentError.
Parameters: - src – source path or URL. May be local or a remote file.
- dst – destination path.
- permissions – (optional) set access permissions.
- hash_expected – Hash string that we expect the file downloaded to have.
- hash_algorithm – Algorithm used to calculate the hash string (md5, sha1).
- download_retries – Number of times we are going to retry a failed download.
Raise: EnvironmentError.
Returns: destination path.
-
avocado.utils.download.
url_download
(url, filename, data=None, timeout=300)¶ Retrieve a file from given url.
Parameters: - url – source URL.
- filename – destination path.
- data – (optional) data to post.
- timeout – (optional) default timeout in seconds.
Returns: None.
-
avocado.utils.download.
url_download_interactive
(url, output_file, title='', chunk_size=102400)¶ Interactively downloads a given file url to a given output file.
Parameters: - url – source URL.
- output_file – destination path.
- title – optional title to be displayed while downloading.
- chunk_size – amount of data to read at a time, in bytes.
-
avocado.utils.download.
url_open
(url, data=None, timeout=5)¶ Wrapper to
urllib2.urlopen()
with timeout addition. Parameters: - url – URL to open.
- data – (optional) data to post.
- timeout – (optional) default timeout in seconds.
Returns: file-like object.
Raises: URLError.
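A minimal sketch of get_file with hash verification; the URL and hash below are placeholders:
from avocado.utils import download

download.get_file("https://example.org/data.tar.gz", "/tmp/data.tar.gz",
                  hash_expected="d41d8cd98f00b204e9800998ecf8427e",
                  hash_algorithm="md5")  # raises EnvironmentError on failure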
avocado.utils.filelock module¶
Utility for individual file access control implemented via PID lock files.
-
exception
avocado.utils.filelock.
AlreadyLocked
¶ Bases:
exceptions.Exception
-
class
avocado.utils.filelock.
FileLock
(filename, timeout=0)¶ Bases:
object
Creates an exclusive advisory lock for a file. All processes should use and honor the advisory locking scheme, but uncooperative processes are free to ignore the lock and access the file in any way they choose.
-
exception
avocado.utils.filelock.
LockFailed
¶ Bases:
exceptions.Exception
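A minimal sketch, assuming FileLock can be used as a context manager (the file path is hypothetical):
from avocado.utils.filelock import FileLock, AlreadyLocked

try:
    with FileLock("/tmp/shared.data", timeout=5):
        pass  # exclusive access to the file goes here
except AlreadyLocked:
    print("another process holds the lock")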
avocado.utils.gdb module¶
Module that provides communication with GDB via its GDB/MI interpreter
-
class
avocado.utils.gdb.
GDB
(path='/usr/bin/gdb', *extra_args)¶ Bases:
object
Wraps a GDB subprocess for easier manipulation
-
DEFAULT_BREAK
= 'main'¶
-
REQUIRED_ARGS
= ['--interpreter=mi', '--quiet']¶
-
cli_cmd
(command)¶ Sends a cli command encoded as an MI command
Parameters: command (str) – a regular GDB cli command Returns: a CommandResult
instance Return type: CommandResult
-
cmd
(command)¶ Sends a command and parses all lines until prompt is received
Parameters: command (str) – the GDB command, hopefully in MI language Returns: a CommandResult
instance Return type: CommandResult
-
cmd_exists
(command)¶ Checks if a given command exists
Parameters: command (str) – a GDB MI command, including the dash (-) prefix Returns: either True or False Return type: bool
-
connect
(port)¶ Connects to a remote debugger (a gdbserver) at the given TCP port
This uses the “extended-remote” target type only
Parameters: port (int) – the TCP port number Returns: a CommandResult
instance Return type: CommandResult
-
del_break
(number)¶ Deletes a breakpoint by its number
Parameters: number (int) – the breakpoint number Returns: a CommandResult
instance Return type: CommandResult
-
disconnect
()¶ Disconnects from a remote debugger
Returns: a CommandResult
instance Return type: CommandResult
-
exit
()¶ Exits the GDB application gracefully
Returns: the result of subprocess.Popen.wait(), that is, a
subprocess.Popen.returncode
Return type: int or None
-
read_gdb_response
(timeout=0.01, max_tries=100)¶ Read raw responses from GDB
Parameters: - timeout (float) – time to wait between read attempts.
- max_tries (int) – maximum number of read attempts.
Returns: a string containing a raw response from GDB
Return type: str
-
read_until_break
(max_lines=100)¶ Read lines from GDB until a break condition is reached
Parameters: max_lines (int) – the maximum number of lines to read Returns: a list of messages read Return type: list of str
-
run
(args=[])¶ Runs the application inside the debugger
Parameters: args (builtin.list) – the arguments to be passed to the binary as command line arguments Returns: a CommandResult
instance Return type: CommandResult
-
send_gdb_command
(command)¶ Send a raw command to the GNU debugger input
Parameters: command (str) – the GDB command, hopefully in MI language Returns: None
-
-
class
avocado.utils.gdb.
GDBServer
(path='/usr/bin/gdbserver', port=None, wait_until_running=True, *extra_args)¶ Bases:
object
Wraps a gdbserver instance
Initializes a new gdbserver instance
Parameters: - path (str) – location of the gdbserver binary
- port (int) – tcp port number to listen on for incoming connections
- wait_until_running (bool) – wait until the gdbserver is running and accepting connections. It may take a little while between the time the process is started and the time it is actually bound to the allocated port
- extra_args – optional extra arguments to be passed to gdbserver
-
INIT_TIMEOUT
= 2.0¶ The time to optionally wait for the server to initialize itself and be ready to accept new connections
-
PORT_RANGE
= (20000, 20999)¶ The port range from which the GDB server will try to allocate a port
-
REQUIRED_ARGS
= ['--multi']¶ The default arguments used when starting the GDB server process
-
exit
(force=True)¶ Quits the gdb_server process
The most correct way of quitting the GDB server is by sending it a command. If no GDB client is connected, we can try to connect to it and send a quit command. If this is not possible, we send it a signal and wait for it to finish.
Parameters: force (bool) – if a forced exit (sending SIGTERM) should be attempted Returns: None
-
class
avocado.utils.gdb.
GDBRemote
(host, port, no_ack_mode=True, extended_mode=True)¶ Bases:
object
Initializes a new GDBRemote object.
A GDBRemote acts like a client that speaks the GDB remote protocol, documented at:
https://sourceware.org/gdb/current/onlinedocs/gdb/Remote-Protocol.html
Caveat: we currently do not support communicating with devices, only with TCP sockets. This limitation is basically due to the lack of use cases that justify an implementation, but not due to any technical shortcoming.
Parameters: -
cmd
(command_data, expected_response=None)¶ Sends command data to a remote gdb server
Limitations: the current version does not deal with retransmissions.
Parameters: Raises: RetransmissionRequestedError, UnexpectedResponseError
Returns: raw data read from the remote server
Return type: str
-
connect
()¶ Connects to the remote target and initializes the chosen modes
-
set_extended_mode
()¶ Enable extended mode. In extended mode, the remote server is made persistent. The ‘R’ packet is used to restart the program being debugged. Original documentation at:
https://sourceware.org/gdb/current/onlinedocs/gdb/Packets.html#extended-mode
-
start_no_ack_mode
()¶ Request that the remote stub disable the normal +/- protocol acknowledgments. Original documentation at:
https://sourceware.org/gdb/current/onlinedocs/gdb/General-Query-Packets.html#QStartNoAckMode
-
avocado.utils.genio module¶
Avocado generic IO related functions.
-
avocado.utils.genio.
ask
(question, auto=False)¶ Prompt the user with a (y/n) question.
Parameters: - question – Question text to present to the user.
- auto – Whether to answer “y” automatically, without prompting.
Returns: User answer
Return type: str
-
avocado.utils.genio.
close_log_file
(filename)¶
-
avocado.utils.genio.
log_line
(filename, line)¶ Write a line to a file.
Parameters: - filename – Path of file to write to, either absolute or relative to the dir set by set_log_file_dir().
- line – Line to write.
-
avocado.utils.genio.
read_all_lines
(filename)¶ Return all lines of a given file
This utility method returns an empty list in any error scenario; that is, it doesn’t attempt to identify error paths and raise appropriate exceptions. It does exactly the opposite of that.
This should be used when it’s fine or desirable to have an empty set of lines if a file is missing or is unreadable.
Parameters: filename (str) – Path to the file. Returns: all lines of the file as list Return type: builtin.list
-
avocado.utils.genio.
read_file
(filename)¶ Read the entire contents of file.
Parameters: filename (str) – Path to the file. Returns: File contents Return type: str
-
avocado.utils.genio.
read_one_line
(filename)¶ Read the first line of filename.
Parameters: filename (str) – Path to the file. Returns: First line contents Return type: str
-
avocado.utils.genio.
set_log_file_dir
(directory)¶ Set the base directory for log files created by log_line().
Parameters: directory – Directory for log files.
-
avocado.utils.genio.
write_file
(filename, data)¶ Write data to a file.
Parameters: - filename – Path of the file to write to.
- data – Contents to be written.
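A short sketch tying these functions together (the file path is arbitrary):
from avocado.utils import genio

genio.write_file("/tmp/example.txt", "first\nsecond\n")
print(genio.read_one_line("/tmp/example.txt"))   # first line
print(genio.read_all_lines("/tmp/example.txt"))  # list of all lines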
avocado.utils.git module¶
APIs to download/update git repositories from inside python scripts.
-
class
avocado.utils.git.
GitRepoHelper
(uri, branch='master', lbranch=None, commit=None, destination_dir=None, base_uri=None)¶ Bases:
object
Helps to deal with git repos, mostly fetching content from a repo
Instantiates a new GitRepoHelper
Parameters: - uri (string) – git repository url
- branch (string) – git remote branch
- lbranch (string) – git local branch name, if different from remote
- commit (string) – specific commit to download
- destination_dir (string) – path of a dir where to save downloaded code
- base_uri (string) – a closer, usually local, git repository url from which to fetch content first
-
checkout
(branch=None, commit=None)¶ Performs a git checkout for a given branch and start point (commit)
Parameters: - branch – Remote branch name.
- commit – Specific commit hash.
-
execute
()¶ Performs all steps necessary to initialize and download a git repo.
This includes the init, fetch and checkout steps in one single utility method.
-
fetch
(uri)¶ Performs a git fetch from the remote repo
-
get_top_commit
()¶ Returns the topmost commit id for the current branch.
Returns: Commit id.
-
get_top_tag
()¶ Returns the topmost tag for the current branch.
Returns: Tag.
-
git_cmd
(cmd, ignore_status=False)¶ Wraps git commands.
Parameters: - cmd – Command to be executed.
- ignore_status – Whether we should suppress error.CmdError exceptions if the command returned a non-zero exit code (True), or not suppress them (False).
-
init
()¶ Initializes a directory for receiving a verbatim copy of git repo
This creates a directory if necessary, and either resets or inits the repo
-
avocado.utils.git.
get_repo
(uri, branch='master', lbranch=None, commit=None, destination_dir=None, base_uri=None)¶ Utility function that retrieves a given git code repository.
Parameters: - uri (string) – git repository url
- branch (string) – git remote branch
- lbranch (string) – git local branch name, if different from remote
- commit (string) – specific commit to download
- destination_dir (string) – path of a dir where to save downloaded code
- base_uri (string) – a closer, usually local, git repository url from which to fetch content first
avocado.utils.iso9660 module¶
Basic ISO9660 file-system support.
This code does not attempt (so far) to implement code that knows about ISO9660 internal structure. Instead, it uses commonly available support either in userspace tools or in the Linux kernel itself (via mount).
-
avocado.utils.iso9660.
iso9660
(path)¶ Checks the available tools on a system and chooses the class accordingly
This is a convenience function that will pick the first available iso9660-capable tool.
Parameters: path (str) – path to an iso9660 image file Returns: an instance of any iso9660 capable tool Return type: Iso9660IsoInfo
,Iso9660IsoRead
,Iso9660Mount
or None
-
class
avocado.utils.iso9660.
Iso9660IsoInfo
(path)¶ Bases:
avocado.utils.iso9660.MixInMntDirMount
,avocado.utils.iso9660.BaseIso9660
Represents an ISO9660 filesystem
This implementation is based on cdrkit’s isoinfo tool
-
read
(path)¶
-
-
class
avocado.utils.iso9660.
Iso9660IsoRead
(path)¶ Bases:
avocado.utils.iso9660.MixInMntDirMount
,avocado.utils.iso9660.BaseIso9660
Represents an ISO9660 filesystem
This implementation is based on libcdio’s iso-read tool
-
close
()¶
-
copy
(src, dst)¶
-
read
(path)¶
-
avocado.utils.kernel module¶
-
class
avocado.utils.kernel.
KernelBuild
(version, config_path=None, work_dir=None, data_dirs=None)¶ Bases:
object
Build the Linux Kernel from official tarballs.
Creates an instance of
KernelBuild
.Parameters: - version – kernel version (“3.19.8”).
- config_path – path to config file.
- work_dir – work directory.
- data_dirs – list of directories to keep the downloaded kernel
Returns: None.
-
SOURCE
= 'linux-{version}.tar.gz'¶
-
URL
= 'https://www.kernel.org/pub/linux/kernel/v3.x/'¶
-
build
()¶ Build kernel from source.
-
configure
()¶ Configure/prepare kernel source to build.
-
download
()¶ Download kernel source.
-
uncompress
()¶ Uncompress kernel source.
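A minimal end-to-end sketch following the method names documented above (version and work directory are examples):
from avocado.utils.kernel import KernelBuild

kernel = KernelBuild("3.19.8", work_dir="/tmp/kernel-build")
kernel.download()
kernel.uncompress()
kernel.configure()
kernel.build()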
avocado.utils.linux_modules module¶
Linux kernel modules APIs
-
avocado.utils.linux_modules.
BUILTIN
= 2¶ Config built-in to kernel (=y)
-
avocado.utils.linux_modules.
MODULE
= 1¶ Config compiled as loadable module (=m)
-
avocado.utils.linux_modules.
NOT_SET
= 0¶ Config commented out or not set
-
avocado.utils.linux_modules.
check_kernel_config
(config_name)¶ Reports the configuration of $config_name of the current kernel
Parameters: config_name (str) – Name of kernel config to search Returns: Config status in running kernel (NOT_SET, BUILTIN, MODULE) Return type: int
-
avocado.utils.linux_modules.
get_loaded_modules
()¶
-
avocado.utils.linux_modules.
get_submodules
(module_name)¶ Get all submodules of the module.
Parameters: module_name (str) – Name of module to search for Returns: List of the submodules Return type: builtin.list
-
avocado.utils.linux_modules.
load_module
(module_name)¶
-
avocado.utils.linux_modules.
loaded_module_info
(module_name)¶ Get loaded module details: Size and Submodules.
Parameters: module_name (str) – Name of module to search for Returns: Dictionary of module info, name, size, submodules if present Return type: dict
-
avocado.utils.linux_modules.
module_is_loaded
(module_name)¶ Is module loaded
Parameters: module_name (str) – Name of module to search for Returns: True if the module is loaded Return type: bool
-
avocado.utils.linux_modules.
parse_lsmod_for_module
(l_raw, module_name, escape=True)¶ Use a regexp to parse raw lsmod output and get module information
Parameters: - l_raw (str) – raw output of lsmod
- module_name (str) – Name of module to search for
- escape (bool) – Escape regexp tokens in module_name, default True
Returns: Dictionary of module info: name, size, submodules if present
Return type: dict
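A minimal sketch using the module constants documented above (the config name is an example):
from avocado.utils import linux_modules

status = linux_modules.check_kernel_config("CONFIG_EXT4_FS")
if status == linux_modules.BUILTIN:
    print("built into the kernel")
elif status == linux_modules.MODULE:
    print("compiled as a loadable module")
else:
    print("not set")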
avocado.utils.lv_utils module¶
-
exception
avocado.utils.lv_utils.
LVException
¶ Bases:
exceptions.Exception
Base Exception Class for all exceptions
-
avocado.utils.lv_utils.
get_diskspace
(disk)¶ Get the entire disk space of a given disk
Parameters: disk – Name of the disk to find free space Returns: size in bytes
-
avocado.utils.lv_utils.
lv_check
(vg_name, lv_name)¶ Check whether provided Logical volume exists.
Parameters: - vg_name – Name of the volume group
- lv_name – Name of the logical volume
-
avocado.utils.lv_utils.
lv_create
(vg_name, lv_name, lv_size, force_flag=True)¶ Create a Logical volume in a volume group. The volume group must already exist.
Parameters: - vg_name – Name of the volume group
- lv_name – Name of the logical volume
- lv_size – Size for the logical volume to be created
-
avocado.utils.lv_utils.
lv_list
()¶ List available logical volumes.
Returns: list of available logical volumes
-
avocado.utils.lv_utils.
lv_mount
(vg_name, lv_name, mount_loc, create_filesystem='')¶ Mount a Logical volume to a mount location.
Parameters: - vg_name – Name of volume group
- lv_name – Name of the logical volume
- create_filesystem – Can be one of ext2, ext3, ext4, vfat or empty if the filesystem was already created and the mkfs process is skipped
- mount_loc – Location where the logical volume will be mounted
-
avocado.utils.lv_utils.
lv_reactivate
(vg_name, lv_name, timeout=10)¶ In case of unclean shutdowns, some LVs may still be active and merging is postponed. Use this function to attempt to deactivate and reactivate all of them to cause the merge to happen.
Parameters: - vg_name – Name of volume group
- lv_name – Name of the logical volume
- timeout – Timeout between operations
-
avocado.utils.lv_utils.
lv_remove
(vg_name, lv_name)¶ Remove a logical volume.
Parameters: - vg_name – Name of the volume group
- lv_name – Name of the logical volume
-
avocado.utils.lv_utils.
lv_revert
(vg_name, lv_name, lv_snapshot_name)¶ Revert the origin to a snapshot.
Parameters: - vg_name – An existing volume group
- lv_name – An existing logical volume
- lv_snapshot_name – Name of the snapshot to be reverted
-
avocado.utils.lv_utils.
lv_revert_with_snapshot
(vg_name, lv_name, lv_snapshot_name, lv_snapshot_size)¶ Perform Logical volume merge with snapshot and take a new snapshot.
Parameters: - vg_name – Name of volume group in which lv has to be reverted
- lv_name – Name of the logical volume to be reverted
- lv_snapshot_name – Name of the snapshot to be reverted
- lv_snapshot_size – Size of the snapshot
-
avocado.utils.lv_utils.
lv_take_snapshot
(vg_name, lv_name, lv_snapshot_name, lv_snapshot_size)¶ Take a snapshot of the original Logical volume.
Parameters: - vg_name – An existing volume group
- lv_name – An existing logical volume
- lv_snapshot_name – Name of the snapshot to be created
- lv_snapshot_size – Size of the snapshot
-
avocado.utils.lv_utils.
lv_umount
(vg_name, lv_name)¶ Unmount a Logical volume from a mount location.
Parameters: - vg_name – Name of volume group
- lv_name – Name of the logical volume
-
avocado.utils.lv_utils.
thin_lv_create
(vg_name, thinpool_name='lvthinpool', thinpool_size='1.5G', thinlv_name='lvthin', thinlv_size='1G')¶ Create a thin volume from given volume group.
Parameters: - vg_name – An exist volume group
- thinpool_name – The name of thin pool
- thinpool_size – The size of thin pool to be created
- thinlv_name – The name of thin volume
- thinlv_size – The size of thin volume
-
avocado.utils.lv_utils.
vg_check
(vg_name)¶ Check whether provided volume group exists.
Parameters: vg_name – Name of the volume group.
-
avocado.utils.lv_utils.
vg_create
(vg_name, pv_list, force=False)¶ Create a volume group by using the block special devices
Parameters: - vg_name – Name of the volume group
- pv_list – List of physical volumes
- force – Create volume group forcefully
-
avocado.utils.lv_utils.
vg_list
()¶ List available volume groups.
Returns: List of volume groups.
-
avocado.utils.lv_utils.
vg_ramdisk
(disk, vg_name, ramdisk_vg_size, ramdisk_basedir, ramdisk_sparse_filename)¶ Create a vg on top of RAM memory to speed up LV performance. When disk is specified, the size of the physical volume is taken from the existing disk space.
Parameters: - disk – Name of the disk in which volume groups are created.
- vg_name – Name of the volume group.
- ramdisk_vg_size – Size of the ramdisk virtual group (MB).
- ramdisk_basedir – Base directory for the ramdisk sparse file.
- ramdisk_sparse_filename – Name of the ramdisk sparse file.
Returns: ramdisk_filename, vg_ramdisk_dir, vg_name, loop_device
Raises: LVException – On failure
Sample ramdisk params: - ramdisk_vg_size = “40000” - ramdisk_basedir = “/tmp” - ramdisk_sparse_filename = “virtual_hdd”
Sample general params: - vg_name=’autotest_vg’, - lv_name=’autotest_lv’, - lv_size=‘1G’, - lv_snapshot_name=’autotest_sn’, - lv_snapshot_size=‘1G’ The ramdisk volume group size is in MB.
-
avocado.utils.lv_utils.
vg_ramdisk_cleanup
(ramdisk_filename=None, vg_ramdisk_dir=None, vg_name=None, loop_device=None)¶ Inline cleanup function in case of test error.
It detects whether the components were initialized and, if so, it tries to remove them. In case of failure it raises a summary exception.
Parameters: - ramdisk_filename – Name of the ramdisk sparse file.
- vg_ramdisk_dir – Location of the ramdisk file
- vg_name – Name of the volume group
- loop_device – Name of the disk or loop device
Raises: LVException – In case it fails to clean up the components detected in the system
-
avocado.utils.lv_utils.
vg_remove
(vg_name)¶ Remove a volume group.
Parameters: vg_name – Name of the volume group
avocado.utils.memory module¶
-
avocado.utils.memory.
drop_caches
()¶ Writes back all dirty pages to disk and clears all the caches.
-
avocado.utils.memory.
freememtotal
()¶ Read
MemFree
from meminfo.
-
avocado.utils.memory.
get_buddy_info
(chunk_sizes, nodes='all', zones='all')¶ Get the fragment status of the host.
It uses the same method to get the page size in buddyinfo. The expression to evaluate it is:
2^chunk_size * page_size
The chunk_sizes can be a string made up of all the orders that you want to check, split by blanks, or a mathematical expression with >, < or =.
For example:
- The input of chunk_size could be: 0 2 4, and the return will be {'0': 3, '2': 286, '4': 687}
- If you are using the expression >=9, the return will be {'9': 63, '10': 225}
Parameters: - chunk_sizes (string) – The order numbers to check, or a mathematical expression selecting them.
- nodes (string) – The NUMA nodes to check, default is all.
- zones (string) – The memory zones to check, default is all.
Returns: A dict using the chunk_size as the keys
Return type: dict
-
avocado.utils.memory.
get_huge_page_size
()¶ Get size of the huge pages for this system.
Returns: Huge pages size (KB).
-
avocado.utils.memory.
get_num_huge_pages
()¶ Get number of huge pages for this system.
Returns: Number of huge pages.
-
avocado.utils.memory.
memtotal
()¶ Read
Memtotal
from meminfo.
-
avocado.utils.memory.
node_size
()¶ Return node size.
Returns: Node size.
-
avocado.utils.memory.
numa_nodes
()¶ Get a list of NUMA nodes present on the system.
Returns: List with nodes.
-
avocado.utils.memory.
read_from_meminfo
(key)¶ Retrieve key from meminfo.
Parameters: key – Key name, such as MemTotal
.
-
avocado.utils.memory.
read_from_numa_maps
(pid, key)¶ Get process NUMA-related info from numa_maps. This function is only used to get numeric values like anon=1.
Parameters: - pid (String) – Process id
- key (String) – The item you want to check from numa_maps
Returns: A dict using the address as the keys
Return type: dict
-
avocado.utils.memory.
read_from_smaps
(pid, key)¶ Get a specific item value from the smaps of a process, including all sections.
Parameters: - pid (String) – Process id
- key (String) – The item you want to check from smaps
Returns: The value of the item in kb
Return type: int
-
avocado.utils.memory.
read_from_vmstat
(key)¶ Get specific item value from vmstat
Parameters: key (String) – The item you want to check from vmstat Returns: The value of the item Return type: int
-
avocado.utils.memory.
rounded_memtotal
()¶ Get memtotal, properly rounded.
Returns: Total memory, KB.
-
avocado.utils.memory.
set_num_huge_pages
(num)¶ Set number of huge pages.
Parameters: num – Target number of huge pages.
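A short sketch of the read helpers (values depend on the host):
from avocado.utils import memory

print(memory.memtotal())                    # total memory from meminfo
print(memory.get_huge_page_size())          # huge page size, in KB
print(memory.read_from_meminfo("MemFree"))  # any meminfo key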
avocado.utils.network module¶
Module with network related utility functions
-
avocado.utils.network.
find_free_port
(start_port, end_port, address='localhost')¶ Return a free host port in the range [start_port, end_port].
Parameters: - start_port – First port that will be checked.
- end_port – Port immediately after the last one that will be checked.
-
avocado.utils.network.
find_free_ports
(start_port, end_port, count, address='localhost')¶ Return count free host ports in the range [start_port, end_port].
Parameters: - count – Number of free ports to find in the range.
- start_port – First port that will be checked.
- end_port – Port immediately after the last one that will be checked.
-
avocado.utils.network.
is_port_free
(port, address)¶ Return True if the given port is available for use.
Parameters: port – Port number
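A minimal sketch of find_free_port (the port range is arbitrary):
from avocado.utils import network

port = network.find_free_port(5000, 6000)
print("free port: %s" % port)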
avocado.utils.output module¶
Utility functions for user friendly display of information.
-
class
avocado.utils.output.
ProgressBar
(minimum=0, maximum=100, width=75, title='')¶ Bases:
object
Displays interactively the progress of a given task
Inspired/adapted from https://gist.github.com/t0xicCode/3306295
Initializes a new progress bar
Parameters: - minimum (integer) – minimum (initial) value on the progress bar
- maximum (integer) – maximum (final) value on the progress bar
- width – number of columns, that is, the screen width
-
append_amount
(amount)¶ Increments the current amount value.
-
draw
()¶ Prints the updated text to the screen.
-
update_amount
(amount)¶ Performs sanity checks and update the current amount.
-
update_percentage
(percentage)¶ Updates the progress bar to the new percentage.
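A minimal sketch of the progress bar (the amounts are made up):
from avocado.utils.output import ProgressBar

progress = ProgressBar(minimum=0, maximum=100, title="copying")
for amount in (25, 50, 75, 100):
    progress.update_amount(amount)
    progress.draw()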
avocado.utils.partition module¶
Utility for handling partitions.
-
class
avocado.utils.partition.
Partition
(device, loop_size=0, mountpoint=None)¶ Bases:
object
Class for handling partitions and filesystems
Parameters: - device – The device in question (e.g.”/dev/hda2”). If device is a file it will be mounted as loopback.
- loop_size – Size of loopback device (in MB). Defaults to 0.
- mountpoint – Where the partition is to be mounted.
-
get_mountpoint
(filename=None)¶ Find the mount point of this partition object.
Parameters: filename – where to look for the mounted partitions information (default None, which means it will search /proc/mounts and/or /etc/mtab) Returns: a string with the mount point of the partition, or None if not mounted
-
static
list_mount_devices
()¶ Lists mounted file systems and swap on devices.
-
static
list_mount_points
()¶ Lists the mount points.
-
mkfs
(fstype=None, args='')¶ Format a partition to filesystem type
Parameters: - fstype – the filesystem type, such as “ext3”, “ext2”. Defaults to previously set type or “ext2” if none has set.
- args – arguments to be passed to mkfs command.
-
mount
(mountpoint=None, fstype=None, args='')¶ Mount this partition to a mount point
Parameters: - mountpoint – If you have not provided a mountpoint to partition object or want to use a different one, you may specify it here.
- fstype – Filesystem type. If not provided partition object value will be used.
- args – Arguments to be passed to “mount” command.
-
unmount
(force=True)¶ Unmount this partition.
It’s easier said than done to unmount a partition. We need to lock the mtab file to make sure we don’t have any locking problems if we are unmounting in parallel.
When the unmount fails and force==True we unmount the partition ungracefully.
Returns: 1 on success, 2 on force umount success Raises: PartitionError – On failure
-
exception
avocado.utils.partition.
PartitionError
(partition, reason, details=None)¶ Bases:
exceptions.Exception
Generic PartitionError
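A minimal file-backed loopback sketch; the image path, size and mount point are hypothetical, and the calls follow the methods documented above:
from avocado.utils.partition import Partition

part = Partition("/tmp/disk.img", loop_size=100, mountpoint="/mnt/test")
part.mkfs(fstype="ext2")
part.mount()
# ... exercise the filesystem ...
part.unmount()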
avocado.utils.path module¶
Avocado path related functions.
-
exception
avocado.utils.path.
CmdNotFoundError
(cmd, paths)¶ Bases:
exceptions.Exception
Indicates that the command was not found in the system after a search.
Parameters: - cmd – String with the command.
- paths – List of paths that were searched.
-
class
avocado.utils.path.
PathInspector
(path)¶ Bases:
object
-
get_first_line
()¶
-
has_exec_permission
()¶
-
is_empty
()¶
-
is_python
()¶
-
is_script
(language=None)¶
-
-
avocado.utils.path.
find_command
(cmd, default=None)¶ Try to find a command in the PATH, paranoid version.
Parameters: - cmd – Command to be found.
- default – Command path to use as a fallback if not found in the standard directories.
Raise: avocado.utils.path.CmdNotFoundError
in case the command was not found and no default was given.
-
avocado.utils.path.
get_path
(base_path, user_path)¶ Translate a user specified path to a real path. If user_path is relative, append it to base_path. If user_path is absolute, return it as is.
Parameters: - base_path – The base path of relative user specified paths.
- user_path – The user specified path.
-
avocado.utils.path.
init_dir
(*args)¶ Wrapper around os.path.join that creates dirs based on the final path.
Parameters: args – List of dir arguments that will be os.path.joined. Returns: directory. Return type: str
-
avocado.utils.path.
usable_ro_dir
(directory)¶ Verify whether the directory exists and we can access its contents.
If a usable read-only directory is there, use it, no questions asked. If not, let’s at least try to create one.
Parameters: directory – Directory
-
avocado.utils.path.
usable_rw_dir
(directory)¶ Verify whether we can use this dir (read/write).
Checks for appropriate permissions, and creates missing dirs as needed.
Parameters: directory – Directory
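A short sketch of the lookup and directory helpers:
from avocado.utils import path

try:
    sh = path.find_command("sh")
    print("shell found at %s" % sh)
except path.CmdNotFoundError:
    print("no shell found")

# joins the arguments, creates the directory if needed, returns the path
logs_dir = path.init_dir("/tmp", "myjob", "logs")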
avocado.utils.process module¶
Functions dedicated to find and run external commands.
-
avocado.utils.process.
CURRENT_WRAPPER
= None¶ The active wrapper utility script.
-
exception
avocado.utils.process.
CmdError
(command=None, result=None, additional_text=None)¶ Bases:
exceptions.Exception
-
class
avocado.utils.process.
CmdResult
(command='', stdout='', stderr='', exit_status=None, duration=0, pid=None)¶ Bases:
object
Command execution result.
Parameters: - command – String containing the command line itself
- exit_status – Integer exit code of the process
- stdout – String containing stdout of the process
- stderr – String containing stderr of the process
- duration – Elapsed wall clock time running the process
- pid – ID of the process
-
class
avocado.utils.process.
GDBSubProcess
(cmd, verbose=True, allow_output_check='all', shell=False, env=None, sudo=False)¶ Bases:
object
Runs a subprocess inside the GNU Debugger
Creates the subprocess object, stdout/err, reader threads and locks.
Parameters: - cmd (str) – Command line to run.
- verbose (bool) – Whether to log the command run and stdout/stderr. Currently unused and provided for compatibility only.
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error (default), and ‘none’, to allow none to be recorded. Currently unused and provided for compatibility only.
- sudo – This param will be ignored in this implementation, since the GDB wrapping code does not have support to run commands under sudo just yet.
-
create_and_wait_on_resume_fifo
(path)¶ Creates a FIFO file and waits until it’s written to
Parameters: path (str) – the path at which the file will be created Returns: first character that was written to the fifo Return type: str
-
generate_core
()¶
-
generate_gdb_connect_cmds
()¶
-
generate_gdb_connect_sh
()¶
-
handle_break_hit
(response)¶
-
handle_fatal_signal
(response)¶
-
run
(timeout=None)¶
-
wait_for_exit
()¶ Waits until debugger receives a message about the binary exit
-
class
avocado.utils.process.
SubProcess
(cmd, verbose=True, allow_output_check='all', shell=False, env=None, sudo=False)¶ Bases:
object
Run a subprocess in the background, collecting stdout/stderr streams.
Creates the subprocess object, stdout/err, reader threads and locks.
Parameters: - cmd (str) – Command line to run.
- verbose (bool) – Whether to log the command run and stdout/stderr.
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error (default), and ‘none’, to allow none to be recorded.
- shell (bool) – Whether to run the subprocess in a subshell.
- env (dict) – Use extra environment variables.
- sudo – Whether the command requires admin privileges to run, so that sudo will be prepended to the command. The assumption here is that the user running the command has a sudo configuration such that a password won’t be prompted. If that’s not the case, the command will straight out fail.
-
get_pid
()¶ Reports PID of this process
-
get_stderr
()¶ Get the full stderr of the subprocess so far.
Returns: Standard error of the process. Return type: str
-
get_stdout
()¶ Get the full stdout of the subprocess so far.
Returns: Standard output of the process. Return type: str
-
kill
()¶ Send a
signal.SIGKILL
to the process.
-
poll
()¶ Call the subprocess poll() method, fill results if rc is not None.
-
run
(timeout=None, sig=15)¶ Start a process and wait for it to end, returning the result attr.
If the process was already started using .start(), this will simply wait for it to end.
Parameters: - timeout – Time (seconds) to wait for the process to end; if it does not, an attempt will be made to terminate it.
- sig – Signal to send to the process in case it did not end within the specified timeout.
Returns: The command result object.
Return type: A
CmdResult
instance.
-
send_signal
(sig)¶ Send the specified signal to the process.
Parameters: sig – Signal to send.
-
start
()¶ Start running the subprocess.
This method is particularly useful for background processes, since you can start the subprocess and not block your test flow.
Returns: Subprocess PID. Return type: int
-
stop
()¶ Stop background subprocess.
Call this method to terminate the background subprocess and wait for its results.
-
terminate
()¶ Send a
signal.SIGTERM
to the process.
-
wait
()¶ Call the subprocess poll() method, fill results if rc is not None.
-
avocado.utils.process.
UNDEFINED_BEHAVIOR_EXCEPTION
= None¶ Exception to be raised when users of this API need to know that the execution of a given process resulted in undefined behavior. One concrete example is when a user, in an interactive session, lets the inferior process exit before avocado resumes the debugger session. Since the information is unknown, and the behavior is undefined, this situation will be flagged by an exception.
-
avocado.utils.process.
WRAP_PROCESS
= None¶ The global wrapper. If set, run every process under this wrapper.
-
avocado.utils.process.
WRAP_PROCESS_NAMES_EXPR
= []¶ Set wrapper per program names. A list of wrappers and program names. Format: [ (‘/path/to/wrapper.sh’, ‘progname’), ... ]
-
class
avocado.utils.process.
WrapSubProcess
(cmd, verbose=True, allow_output_check='all', shell=False, env=None, wrapper=None, sudo=False)¶ Bases:
avocado.utils.process.SubProcess
Wrap subprocess inside a utility program.
-
avocado.utils.process.
binary_from_shell_cmd
(cmd)¶ Tries to find the first binary path from a simple shell-like command.
Note: It’s a naive implementation, but for commands like: VAR=VAL binary -args || true it gives the right result (binary) Parameters: cmd – simple shell-like command line Returns: first found binary from the cmd
-
avocado.utils.process.
can_sudo
()¶ Returns: True when sudo is available (or the user is root)
-
avocado.utils.process.
get_children_pids
(ppid)¶ Get all PIDs of children/threads of parent ppid
Parameters: ppid – parent PID
Returns: list of PIDs of all children/threads of ppid
-
avocado.utils.process.
get_sub_process_klass
(cmd)¶ Which sub process implementation should be used
Either the regular one, or the GNU Debugger version
Parameters: cmd – the command arguments, from where we extract the binary name
-
avocado.utils.process.
kill_process_by_pattern
(pattern)¶ Send SIGTERM signal to a process with matched pattern.
Parameters: pattern – normally only matched against the process name
-
avocado.utils.process.
kill_process_tree
(pid, sig=9, send_sigcont=True)¶ Signal a process and all of its children.
If the process does not exist – return.
Parameters: - pid – The pid of the process to signal.
- sig – The signal to send to the processes.
-
avocado.utils.process.
pid_exists
(pid)¶ Return True if a given PID exists.
Parameters: pid – Process ID number.
-
avocado.utils.process.
process_in_ptree_is_defunct
(ppid)¶ Verify if any processes deriving from PPID are in the defunct state.
Attempt to verify if parent process and any children from PPID is defunct (zombie) or not.
Parameters: ppid – The parent PID of the process to verify.
-
avocado.utils.process.
run
(cmd, timeout=None, verbose=True, ignore_status=False, allow_output_check='all', shell=False, env=None, sudo=False)¶ Run a subprocess, returning a CmdResult object.
Parameters: - cmd (str) – Command line to run.
- timeout (float) – Time limit in seconds before attempting to kill the running process. This function will take a few seconds longer than ‘timeout’ to complete if it has to kill the process.
- verbose (bool) – Whether to log the command run and stdout/stderr.
- ignore_status (bool) – Whether to raise an exception when the command returns an exit code != 0 (False), or not (True).
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error (default), and ‘none’, to allow none to be recorded.
- shell (bool) – Whether to run the command on a subshell
- env (dict) – Use extra environment variables
- sudo – Whether the command requires admin privileges to run, so that sudo will be prepended to the command. The assumption here is that the user running the command has a sudo configuration such that a password won’t be prompted. If that’s not the case, the command will straight out fail.
Returns: A CmdResult object.
Raise: CmdError, if ignore_status=False.
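A minimal sketch of run() (the commands are examples):
from avocado.utils import process

result = process.run("uname -r")
print(result.exit_status)
print(result.stdout)

# ignore_status=True prevents CmdError on non-zero exit codes
result = process.run("false", ignore_status=True)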
-
avocado.utils.process.
safe_kill
(pid, signal)¶ Attempt to send a signal to a given process that may or may not exist.
Parameters: - pid – Process ID number.
- signal – Signal number.
-
avocado.utils.process.
should_run_inside_gdb
(cmd)¶ Whether the given command should be run inside the GNU debugger
Parameters: cmd – the command arguments, from where we extract the binary name
-
avocado.utils.process.
should_run_inside_wrapper
(cmd)¶ Whether the given command should be run inside the wrapper utility.
Parameters: cmd – the command arguments, from where we extract the binary name
-
avocado.utils.process.
split_gdb_expr
(expr)¶ Splits a GDB expr into (binary_name, breakpoint_location)
Returns
avocado.gdb.GDB.DEFAULT_BREAK
as the default breakpoint if one is not given. Parameters: expr (str) – an expression of the form <binary_name>[:<breakpoint>] Returns: a (binary_name, breakpoint_location) tuple Return type: tuple
-
avocado.utils.process.
system
(cmd, timeout=None, verbose=True, ignore_status=False, allow_output_check='all', shell=False, env=None, sudo=False)¶ Run a subprocess, returning its exit code.
Parameters: - cmd (str) – Command line to run.
- timeout (float) – Time limit in seconds before attempting to kill the running process. This function will take a few seconds longer than ‘timeout’ to complete if it has to kill the process.
- verbose (bool) – Whether to log the command run and stdout/stderr.
- ignore_status (bool) – Whether to raise an exception when the command returns an exit code != 0 (False), or not (True).
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error (default), and ‘none’, to allow none to be recorded.
- shell (bool) – Whether to run the command on a subshell
- env (dict) – Use extra environment variables.
- sudo – Whether the command requires admin privileges to run, so that sudo will be prepended to the command. The assumption here is that the user running the command has a sudo configuration such that a password won’t be prompted. If that’s not the case, the command will straight out fail.
Returns: Exit code.
Return type: int
Raise: CmdError, if ignore_status=False.
-
avocado.utils.process.
system_output
(cmd, timeout=None, verbose=True, ignore_status=False, allow_output_check='all', shell=False, env=None, sudo=False)¶ Run a subprocess, returning its output.
Parameters: - cmd (str) – Command line to run.
- timeout (float) – Time limit in seconds before attempting to kill the running process. This function will take a few seconds longer than ‘timeout’ to complete if it has to kill the process.
- verbose (bool) – Whether to log the command run and stdout/stderr.
- ignore_status – Whether to raise an exception when the command returns an exit code != 0 (False), or not (True).
- allow_output_check (str) – Whether to log the command stream outputs (stdout and stderr) in the test stream files. Valid values: ‘stdout’, for allowing only standard output, ‘stderr’, to allow only standard error, ‘all’, to allow both standard output and error (default), and ‘none’, to allow none to be recorded.
- shell (bool) – Whether to run the command on a subshell
- env (dict) – Use extra environment variables
- sudo – Whether the command requires admin privileges to run, so that sudo will be prepended to the command. The assumption here is that the user running the command has a sudo configuration such that a password won’t be prompted. If that’s not the case, the command will straight out fail.
Returns: Command output.
Return type: str
Raise: CmdError, if ignore_status=False.
avocado.utils.runtime module¶
Module that contains runtime configuration
-
avocado.utils.runtime.
CURRENT_JOB
= None¶ Sometimes it’s useful for the framework and API to know about the job that is currently running, if one exists
-
avocado.utils.runtime.
CURRENT_TEST
= None¶ Sometimes it’s useful for the framework and API to know about the test that is currently running, if one exists
avocado.utils.script module¶
Module to handle scripts creation.
-
avocado.utils.script.
DEFAULT_MODE
= 509¶ What is commonly known as “0775” or “u=rwx,g=rwx,o=rx”
-
class
avocado.utils.script.
Script
(path, content, mode=509)¶ Bases:
object
Class that represents a script.
Creates an instance of
Script
. Note that when the instance is used inside a with statement, it will automatically call save() and then remove() for you.
Parameters: - path – the script file name.
- content – the script content.
- mode – set file mode; defaults to what is commonly known as 0775.
-
remove
()¶ Remove script from the file system.
Returns: True if script has been removed, otherwise False.
-
save
()¶ Store script to file system.
Returns: True if script has been stored, otherwise False.
-
class
avocado.utils.script.
TemporaryScript
(name, content, prefix='avocado_script', mode=509)¶ Bases:
avocado.utils.script.Script
Class that represents a temporary script.
Creates an instance of
TemporaryScript
. Note that when the instance is used inside a with statement, it will automatically call save() and then remove() for you.
When the instance object is garbage collected, it will automatically call remove() for you.
Parameters: - name – the script file name.
- content – the script content.
- prefix – prefix for the temporary directory name.
- mode – set file mode, default to 0775.
-
remove
()¶
-
avocado.utils.script.
make_script
(path, content, mode=509)¶ Creates a new script stored in the file system.
Parameters: - path – the script file name.
- content – the script content.
- mode – set file mode, default to 0775.
Returns: the script path.
-
avocado.utils.script.
make_temp_script
(name, content, prefix='avocado_script', mode=509)¶ Creates a new temporary script stored in the file system.
Parameters: - name – the script file name.
- content – the script content.
- prefix – the directory prefix. Defaults to ‘avocado_script’.
- mode – set file mode, default to 0775.
Returns: the script path.
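A minimal sketch using the with-statement behavior described above; the path attribute is assumed to expose the saved script location:
from avocado.utils import script

with script.TemporaryScript("hello.sh", "#!/bin/sh\necho hello\n") as hello:
    print(hello.path)  # saved under a temporary directory; removed on exit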
avocado.utils.service module¶
-
avocado.utils.service.
ServiceManager
(run=<function run>)¶ Detect which init program is being used (init or systemd) and return a class that has methods to start/stop services.
# Get the system service manager >> service_manager = ServiceManager()
# Starting service/unit “sshd” >> service_manager.start(“sshd”)
# Getting a list of available units >> units = service_manager.list()
# Disabling and stopping a list of services >> services_to_disable = [‘ntpd’, ‘httpd’]
>> for s in services_to_disable: >> service_manager.disable(s) >> service_manager.stop(s)
Returns: SysVInitServiceManager or SystemdServiceManager Return type: _GenericServiceManager
-
avocado.utils.service.
SpecificServiceManager
(service_name, run=<function run>)¶ # Get the specific service manager for sshd
>>> sshd = SpecificServiceManager(“sshd”)
>>> sshd.start()
>>> sshd.stop()
>>> sshd.reload()
>>> sshd.restart()
>>> sshd.condrestart()
>>> sshd.status()
>>> sshd.enable()
>>> sshd.disable()
>>> sshd.is_enabled()
Parameters: service_name (str) – systemd unit or init.d service to manage Returns: SpecificServiceManager that has start/stop methods Return type: _SpecificServiceManager
-
avocado.utils.service.
convert_systemd_target_to_runlevel
(target)¶ Convert systemd target to runlevel.
Parameters: target (str) – systemd target Returns: sys_v runlevel Return type: str Raises: ValueError – when systemd target is unknown
-
avocado.utils.service.
convert_sysv_runlevel
(level)¶ Convert runlevel to systemd target.
Parameters: level (str or int) – sys_v runlevel Returns: systemd target Return type: str Raises: ValueError – when runlevel is unknown
-
avocado.utils.service.
get_name_of_init
(run=<function run>)¶ Determine what executable is PID 1 (aka init) by checking /proc/1/exe. This init detection will only run once and cache the return value.
Returns: executable name for PID 1, aka init Return type: str
-
avocado.utils.service.
service_manager
(run=<function run>)¶ Detect which init program is being used (init or systemd) and return a class that has methods to start/stop services.
# Get the system service manager >> service_manager = ServiceManager()
# Starting service/unit “sshd” >> service_manager.start(“sshd”)
# Getting a list of available units >> units = service_manager.list()
# Disabling and stopping a list of services >> services_to_disable = [‘ntpd’, ‘httpd’]
>> for s in services_to_disable: >> service_manager.disable(s) >> service_manager.stop(s)
Returns: SysVInitServiceManager or SystemdServiceManager Return type: _GenericServiceManager
-
avocado.utils.service.
specific_service_manager
(service_name, run=<function run>)¶ # Get the specific service manager for sshd
>>> sshd = SpecificServiceManager(“sshd”)
>>> sshd.start()
>>> sshd.stop()
>>> sshd.reload()
>>> sshd.restart()
>>> sshd.condrestart()
>>> sshd.status()
>>> sshd.enable()
>>> sshd.disable()
>>> sshd.is_enabled()
Parameters: service_name (str) – systemd unit or init.d service to manage Returns: SpecificServiceManager that has start/stop methods Return type: _SpecificServiceManager
-
avocado.utils.service.
sys_v_init_command_generator
(command)¶ Generate lists of command arguments for sys_v style inits.
Parameters: command (str) – start,stop,restart, etc. Returns: list of commands to pass to process.run or similar function Return type: builtin.list
-
avocado.utils.service.
sys_v_init_result_parser
(command)¶ Parse results from sys_v style commands.
command status: return true if service is running. command is_enabled: return true if service is enabled. command list: return a dict from service name to status. command others: return true if the operation succeeds.
Parameters: command (str) – command. Returns: depends on the command.
-
avocado.utils.service.
systemd_command_generator
(command)¶ Generate list of command line argument strings for systemctl.
One argument per string, for compatibility with Popen
WARNING: If systemctl detects that it is running on a tty it will use color, pipe to $PAGER, change column sizes and not truncate unit names. Use --no-pager to suppress pager output, or set PAGER=cat in the environment. You may need to take other steps to suppress color output. See https://bugzilla.redhat.com/show_bug.cgi?id=713567
Parameters: command (str) – start,stop,restart, etc. Returns: List of command and arguments to pass to process.run or similar functions Return type: builtin.list
-
avocado.utils.service.
systemd_result_parser
(command)¶ Parse results from systemd style commands.
command status: return True if the service is running. command is_enabled: return True if the service is enabled. command list: return a dict mapping service names to status. other commands: return True if the operation succeeded.
Parameters: command (str) – the command. Returns: depends on the command.
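Putting the helpers above together, a minimal usage sketch (assuming a host where an "sshd" unit exists; the unit name is purely illustrative):

from avocado.utils.service import service_manager

# Pick the manager matching the detected init system (SysV init or systemd)
manager = service_manager()

# Restart the unit and verify that it is reported as running
manager.restart("sshd")
if not manager.status("sshd"):
    print("sshd is not running")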
avocado.utils.software_manager module¶
Software package management library.
This is an abstraction layer on top of the existing distributions' high level package managers. It supports package operations useful for testing purposes, and multiple high level package managers (here called backends). If you want to make this lib support your particular package manager/distro, please implement the given backend class.
author: Higor Vieira Alves <halves@br.ibm.com>
author: Lucas Meneghel Rodrigues <lmr@redhat.com>
author: Ramon de Carvalho Valle <rcvalle@br.ibm.com>
copyright: IBM 2008-2009
copyright: Red Hat 2009-2014
-
class
avocado.utils.software_manager.
AptBackend
¶ Bases:
avocado.utils.software_manager.DpkgBackend
Implements the apt backend for software manager.
Set of operations for the apt package manager, commonly found on Debian and Debian based distributions, such as Ubuntu Linux.
Initializes the base command and the debian package repository.
-
add_repo
(repo)¶ Add an apt repository.
Parameters: repo – Repository string. Example: 'deb http://archive.ubuntu.com/ubuntu/ maverick universe'
-
install
(name)¶ Installs package [name].
Parameters: name – Package name.
-
provides
(path)¶ Return a list of packages that provide [path].
Parameters: path – File path.
-
remove
(name)¶ Remove package [name].
Parameters: name – Package name.
-
remove_repo
(repo)¶ Remove an apt repository.
Parameters: repo – Repository string. Example: 'deb http://archive.ubuntu.com/ubuntu/ maverick universe'
-
-
class
avocado.utils.software_manager.
BaseBackend
¶ Bases:
object
This class implements all common methods among backends.
-
install_what_provides
(path)¶ Installs package that provides [path].
Parameters: path – Path to file.
-
-
class
avocado.utils.software_manager.
DnfBackend
¶ Bases:
avocado.utils.software_manager.YumBackend
Implements the dnf backend for software manager.
DNF is the successor to yum in recent Fedora.
Initializes the base command and the DNF package repository.
-
class
avocado.utils.software_manager.
DpkgBackend
¶ Bases:
avocado.utils.software_manager.BaseBackend
This class implements operations executed with the dpkg package manager.
dpkg is a lower level package manager, used by higher level managers such as apt and aptitude.
-
INSTALLED_OUTPUT
= 'install ok installed'¶
-
PACKAGE_TYPE
= 'deb'¶
-
check_installed
(name)¶
-
list_all
()¶ List all packages available in the system.
-
list_files
(package)¶ List files installed by package [package].
Parameters: package – Package name. Returns: List of paths installed by package.
-
-
class
avocado.utils.software_manager.
RpmBackend
¶ Bases:
avocado.utils.software_manager.BaseBackend
This class implements operations executed with the rpm package manager.
rpm is a lower level package manager, used by higher level managers such as yum and zypper.
-
PACKAGE_TYPE
= 'rpm'¶
-
SOFTWARE_COMPONENT_QRY
= 'rpm %{NAME} %{VERSION} %{RELEASE} %{SIGMD5} %{ARCH}'¶
-
check_installed
(name, version=None, arch=None)¶ Check if package [name] is installed.
Parameters: - name – Package name.
- version – Package version.
- arch – Package architecture.
-
list_all
(software_components=True)¶ List all installed packages.
Parameters: software_components – log in a format suitable for the SoftwareComponent schema
-
list_files
(name)¶ List files installed on the system by package [name].
Parameters: name – Package name.
-
-
class
avocado.utils.software_manager.
SoftwareManager
¶ Bases:
object
Package management abstraction layer.
It supports a set of common package operations for testing purposes, and it uses the concept of a backend, a helper class that implements the set of operations of a given package management tool.
Lazily instantiate the object
-
class
avocado.utils.software_manager.
SystemInspector
¶ Bases:
object
System inspector class.
This may grow to include more complete reports of operating system and machine properties.
Probe system, and save information for future reference.
-
get_package_management
()¶ Determine the supported package management systems present on the system. If more than one package management system is installed, try to find the best supported one.
-
-
class
avocado.utils.software_manager.
YumBackend
(cmd='yum')¶ Bases:
avocado.utils.software_manager.RpmBackend
Implements the yum backend for software manager.
Set of operations for the yum package manager, commonly found on Yellow Dog Linux and Red Hat based distributions, such as Fedora and Red Hat Enterprise Linux.
Initializes the base command and the yum package repository.
-
add_repo
(url)¶ Adds package repository located on [url].
Parameters: url – Universal Resource Locator of the repository.
-
install
(name)¶ Installs package [name]. Handles local installs.
-
provides
(name)¶ Returns a list of packages that provides a given capability.
Parameters: name – Capability name (e.g. 'foo').
-
remove
(name)¶ Removes package [name].
Parameters: name – Package name (e.g. 'ipython').
-
remove_repo
(url)¶ Removes package repository located on [url].
Parameters: url – Universal Resource Locator of the repository.
-
-
class
avocado.utils.software_manager.
ZypperBackend
¶ Bases:
avocado.utils.software_manager.RpmBackend
Implements the zypper backend for software manager.
Set of operations for the zypper package manager, found on SUSE Linux.
Initializes the base command and the zypper package repository.
-
add_repo
(url)¶ Adds repository [url].
Parameters: url – URL for the package repository.
-
install
(name)¶ Installs package [name]. Handles local installs.
Parameters: name – Package Name.
-
provides
(name)¶ Searches for what provides a given file.
Parameters: name – File path.
-
remove
(name)¶ Removes package [name].
-
remove_repo
(url)¶ Removes repository [url].
Parameters: url – URL for the package repository.
-
-
avocado.utils.software_manager.
install_distro_packages
(distro_pkg_map, interactive=False)¶ Installs packages for the currently running distribution
This utility function checks if the currently running distro is a key in the distro_pkg_map dictionary, and if there is a list of packages set as its value.
If these conditions match, the packages will be installed using the software manager interface, thus the native packaging system of the currently running distro.
Parameters: distro_pkg_map (dict) – mapping of distro name, as returned by utils.get_os_vendor(), to a list of package names Returns: True if any packages were actually installed, False otherwise
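As a usage sketch of this module, assuming the SoftwareManager facade dispatches the backend operations documented above (the package names are illustrative assumptions):

from avocado.utils.software_manager import SoftwareManager, install_distro_packages

sm = SoftwareManager()
# The call is routed to the detected backend (apt, yum, dnf, zypper, ...)
if not sm.check_installed("gcc"):
    sm.install("gcc")

# Keys are distro names, values are package lists for that distro
install_distro_packages({"fedora": ["make"], "debian": ["make"]})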
avocado.utils.stacktrace module¶
Traceback standard module plus some additional APIs.
-
avocado.utils.stacktrace.
analyze_unpickable_item
(path_prefix, obj)¶ Recursive method to obtain unpickable objects along with location
Parameters: - path_prefix – Path to this object
- obj – The sub-object under introspection
Returns: [($path_to_the_object, $value), ...]
-
avocado.utils.stacktrace.
log_exc_info
(exc_info, logger='root')¶ Log exception info to the specified logger.
Parameters: - exc_info – Exception info produced by sys.exc_info()
- logger – Name of the logger (defaults to root)
-
avocado.utils.stacktrace.
log_message
(message, logger='root')¶ Log message to logger.
Parameters: - message – Message
- logger – Name of the logger (defaults to root)
-
avocado.utils.stacktrace.
prepare_exc_info
(exc_info)¶ Prepare traceback info.
Parameters: exc_info – Exception info produced by sys.exc_info()
-
avocado.utils.stacktrace.
str_unpickable_object
(obj)¶ Return human readable string identifying the unpickable objects
Parameters: obj – The object for analysis Raises: ValueError – In case the object is pickable
-
avocado.utils.stacktrace.
tb_info
(exc_info)¶ Prepare traceback info.
Parameters: exc_info – Exception info produced by sys.exc_info()
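A short sketch of how these helpers are typically combined:

import sys
from avocado.utils import stacktrace

try:
    1 / 0
except ZeroDivisionError:
    # Send the formatted traceback to the root logger
    stacktrace.log_exc_info(sys.exc_info(), logger="root")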
avocado.utils.wait module¶
-
avocado.utils.wait.
wait_for
(func, timeout, first=0.0, step=1.0, text=None)¶ Wait until func() evaluates to True.
If func() evaluates to True before timeout expires, return the value of func(). Otherwise return None.
Parameters: - func – Function that will be evaluated
- timeout – Timeout in seconds
- first – Time to sleep before first attempt
- step – Time to sleep between attempts in seconds
- text – Text to print while waiting, for debug purposes
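For example, a sketch that polls for a file to show up (the path is an arbitrary illustration):

import os
from avocado.utils.wait import wait_for

# Check every 0.5s, for at most 10s; wait_for returns None on timeout
ready = wait_for(lambda: os.path.exists("/tmp/ready"), timeout=10,
                 first=0.0, step=0.5, text="waiting for /tmp/ready")
if ready is None:
    print("timed out waiting for /tmp/ready")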
Module contents¶
Internal (Core) APIs¶
Internal APIs that may be of interest to Avocado hackers.
Subpackages¶
avocado.core.remote package¶
Submodules¶
avocado.core.remote.result module¶
Remote test results.
-
class
avocado.core.remote.result.
RemoteResult
(job)¶ Bases:
avocado.core.result.HumanResult
Remote Machine Test Result class.
Creates an instance of RemoteResult.
Parameters: job – an instance of avocado.core.job.Job.
-
tear_down
()¶ Cleanup after test execution
-
-
class
avocado.core.remote.result.
VMResult
(job)¶ Bases:
avocado.core.remote.result.RemoteResult
Virtual Machine Test Result class.
avocado.core.remote.runner module¶
Remote test runner.
-
class
avocado.core.remote.runner.
RemoteTestRunner
(job, test_result)¶ Bases:
avocado.core.runner.TestRunner
TestRunner tooled to run on a remote machine using ssh
-
check_remote_avocado
()¶ Checks if the remote system appears to have avocado installed
The “appears to have” description is justified by the fact that the check is rather simplistic: it attempts to run an avocado -v command and checks if the output looks like what avocado would print out.
Return type: tuple with (bool, tuple) Returns: (True, (x, y, z)) if avocado appears to be installed and (False, None) otherwise.
-
remote
= None¶ remoter connection to the remote machine
-
remote_test_dir
= '~/avocado/tests'¶
-
remote_version_re
= <_sre.SRE_Pattern object>¶
-
run_suite
(test_suite, mux, timeout=0, replay_map=None, test_result_total=0)¶ Run one or more tests and report with test result.
Parameters: - test_suite – a list of tests to run.
- mux – A multiplex iterator (unused here)
Returns: a set with types of test failures.
-
run_test
(urls, timeout)¶ Run tests.
Parameters: urls – a string with test URLs. Returns: a dictionary with test results.
-
setup
()¶ Setup remote environment and copy test directories
-
tear_down
()¶ This method is only called when run_suite gets to the point of executing the setup method, and it is called at the end of the execution.
Warning: It might be called on setup exceptions, so things initialized during setup might not yet be initialized.
-
-
class
avocado.core.remote.runner.
VMTestRunner
(job, test_result)¶ Bases:
avocado.core.remote.runner.RemoteTestRunner
Test runner to run tests using libvirt domain
-
setup
()¶ Initialize VM and establish connection
-
tear_down
()¶ Stop VM and restore snapshot (if asked for it)
-
vm
= None¶ VM used during testing
-
avocado.core.remote.test module¶
Remote test class.
-
class
avocado.core.remote.test.
RemoteTest
(name, status, time, start, end, fail_reason, logdir, logfile)¶ Bases:
object
Mimics
avocado.core.test.Test
for remote tests.
Module contents¶
-
class
avocado.core.remote.
RemoteResult
(job)¶ Bases:
avocado.core.result.HumanResult
Remote Machine Test Result class.
Creates an instance of RemoteResult.
Parameters: job – an instance of avocado.core.job.Job.
-
tear_down
()¶ Cleanup after test execution
-
-
class
avocado.core.remote.
VMResult
(job)¶ Bases:
avocado.core.remote.result.RemoteResult
Virtual Machine Test Result class.
-
class
avocado.core.remote.
RemoteTestRunner
(job, test_result)¶ Bases:
avocado.core.runner.TestRunner
TestRunner tooled to run on a remote machine using ssh
-
check_remote_avocado
()¶ Checks if the remote system appears to have avocado installed
The “appears to have” description is justified by the fact that the check is rather simplistic: it attempts to run an avocado -v command and checks if the output looks like what avocado would print out.
Return type: tuple with (bool, tuple) Returns: (True, (x, y, z)) if avocado appears to be installed and (False, None) otherwise.
-
remote_test_dir
= '~/avocado/tests'¶
-
remote_version_re
= <_sre.SRE_Pattern object>¶
-
run_suite
(test_suite, mux, timeout=0, replay_map=None, test_result_total=0)¶ Run one or more tests and report with test result.
Parameters: - test_suite – a list of tests to run.
- mux – A multiplex iterator (unused here)
Returns: a set with types of test failures.
-
run_test
(urls, timeout)¶ Run tests.
Parameters: urls – a string with test URLs. Returns: a dictionary with test results.
-
setup
()¶ Setup remote environment and copy test directories
-
tear_down
()¶ This method is only called when run_suite gets to the point of executing the setup method, and it is called at the end of the execution.
Warning: It might be called on setup exceptions, so things initialized during setup might not yet be initialized.
-
-
class
avocado.core.remote.
VMTestRunner
(job, test_result)¶ Bases:
avocado.core.remote.runner.RemoteTestRunner
Test runner to run tests using libvirt domain
-
setup
()¶ Initialize VM and establish connection
-
tear_down
()¶ Stop VM and restore snapshot (if asked for it)
-
-
class
avocado.core.remote.
RemoteTest
(name, status, time, start, end, fail_reason, logdir, logfile)¶ Bases:
object
Mimics
avocado.core.test.Test
for remote tests.
avocado.core.restclient package¶
Subpackages¶
avocado.core.restclient.cli package¶
-
avocado.core.restclient.cli.actions.base.
action
(function)¶ Simple function that marks functions as CLI actions
Parameters: function – the function that will receive the CLI action mark
Module that implements the actions for the CLI App when the job toplevel command is used
-
avocado.core.restclient.cli.actions.server.
list_brief
(app)¶ Shows the server API list
-
avocado.core.restclient.cli.actions.server.
status
(app)¶ Shows the server status
This module has base action arguments that are used on other top level commands
These top level commands import these definitions for uniformity and consistency's sake
This module has actions for the server command
This is the main entry point for the rest client cli application
-
class
avocado.core.restclient.cli.app.
App
¶ Bases:
object
Base class for CLI application
Initializes a new app instance.
This class is intended to be used by the stock client application and to be reused by custom applications. If you want, say, to limit the amount of command line actions and its arguments, you can simply supply another argument parser class to this constructor. Of course another way to customize it is to inherit from this and modify its members at will.
-
dispatch_action
()¶ Calls the action that was specified via command line arguments.
This involves loading the relevant module file.
-
initialize_connection
()¶ Initialize the connection instance
-
run
()¶ Main entry point for application
-
REST client application command line parsing
-
class
avocado.core.restclient.cli.parser.
Parser
(**kwargs)¶ Bases:
argparse.ArgumentParser
The main CLI Argument Parser.
Initializes a new parser
-
add_arguments_on_all_modules
(prefix='avocado.core.restclient.cli.args')¶ Add arguments that are present on all Python modules at a given prefix
Parameters: prefix – a Python module namespace
-
add_arguments_on_module
(name, prefix)¶ Add arguments that are present on a given Python module
Parameters: name – the name of the Python module, without the namespace
-
Submodules¶
avocado.core.restclient.connection module¶
This module provides connection classes to the avocado server.
A connection is a simple wrapper around an HTTP request instance. It is this basic object that allows methods to be called on the remote server.
-
avocado.core.restclient.connection.
get_default
()¶ Returns the global, default connection to avocado-server
Returns: an avocado.core.restclient.connection.Connection instance
-
class
avocado.core.restclient.connection.
Connection
(hostname=None, port=None, username=None, password=None)¶ Bases:
object
Connection to the avocado server
Initializes a connection to an avocado-server instance
Parameters: - hostname – the hostname of the avocado-server
- port – the port number to connect to
- username – the username to authenticate with
- password – the password to authenticate with
-
check_min_version
(data=None)¶ Checks the minimum server version
-
get_api_list
()¶ Gets the list of APIs the server makes available to the current user
-
get_url
(path=None)¶ Returns a representation of the current connection as an HTTP URL
-
ping
()¶ Tests connectivity to the currently set avocado-server
This is intentionally a simple method that will only return True if a request is made, and a response is received from the server.
-
request
(path, method=<function get>, check_status=True, **data)¶ Performs a request to the server
This method is heavily used by upper level API methods, and more often than not, those upper level API methods should be used instead.
Parameters: - path (str) – the path on the server where the resource lives
- method – the method you want to call on the remote server, defaults to a HTTP GET
- check_status – whether to check the HTTP status code that comes with the response. If set to True, it will depend on the method chosen. If set to False, no check will be performed. If an integer is given then that specific status will be checked for.
- data – keyword arguments to be passed to the remote method
Returns: JSON data
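A minimal connection sketch based on the calls documented above (the hostname and port are illustrative assumptions):

from avocado.core.restclient.connection import Connection

conn = Connection(hostname="avocado-server.example.com", port=9405)
# ping() only confirms that a request/response round trip works
if conn.ping():
    print(conn.get_api_list())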
-
avocado.core.restclient.response module¶
Module with base model functions to manipulate JSON data
-
class
avocado.core.restclient.response.
BaseResponse
(json_data)¶ Bases:
object
Base class that provides commonly used features for response handling
-
REQUIRED_DATA
= []¶
-
-
exception
avocado.core.restclient.response.
InvalidJSONError
¶ Bases:
exceptions.Exception
Data given to a loader/decoder is not valid JSON
-
exception
avocado.core.restclient.response.
InvalidResultResponseError
¶ Bases:
exceptions.Exception
Returned result response does not conform to expectation
Even though the result may be valid JSON, it may not have the required or expected information that would normally be sent by avocado-server.
-
class
avocado.core.restclient.response.
ResultResponse
(json_data)¶ Bases:
avocado.core.restclient.response.BaseResponse
Provides a wrapper around an ideal result response
This class should be instantiated with the JSON data received from an avocado-server, and will check if the required data members are present and thus the response is well formed.
-
REQUIRED_DATA
= ['count', 'next', 'previous', 'results']¶
-
Module contents¶
Submodules¶
avocado.core.app module¶
The core Avocado application.
avocado.core.data_dir module¶
Library used to let avocado tests find important paths in the system.
The general reasoning to find paths is:
- When running in tree, don’t honor avocado.conf. Also, we get to run/display the example tests shipped in tree.
- When avocado.conf is in /etc/avocado, or ~/.config/avocado, then honor the values there as much as possible. If they point to a location where we can’t write to, use the next best location available.
- The next best location is the default system wide one.
- The next best location is the default user specific one.
-
avocado.core.data_dir.
clean_tmp_files
()¶ Try to clean the tmp directory by removing it.
This is a useful function for avocado entry points looking to clean after tests/jobs are done. If OSError is raised, silently ignore the error.
-
avocado.core.data_dir.
create_job_logs_dir
(logdir=None, unique_id=None)¶ Create a log directory for a job, or a standalone execution of a test.
Parameters: - logdir – Base log directory, if None, use value from configuration.
- unique_id – The unique identification. If None, create one.
Return type:
-
avocado.core.data_dir.
get_base_dir
()¶ Get the most appropriate base dir.
The base dir is the parent location for most of the avocado other important directories.
- Examples:
- Log directory
- Data directory
- Tests directory
-
avocado.core.data_dir.
get_data_dir
()¶ Get the most appropriate data dir location.
The data dir is the location where any data necessary to job and test operations are located.
- Examples:
- ISO files
- GPG files
- VM images
- Reference bitmaps
-
avocado.core.data_dir.
get_datafile_path
(*args)¶ Get a path relative to the data dir.
Parameters: args – Arguments passed to os.path.join. Ex (‘images’, ‘jeos.qcow2’)
-
avocado.core.data_dir.
get_logs_dir
()¶ Get the most appropriate log dir location.
The log dir is where we store job/test logs in general.
-
avocado.core.data_dir.
get_test_dir
()¶ Get the most appropriate test location.
The test location is where we store tests written with the avocado API.
The heuristics used to determine the test dir are:
1. If an explicit test dir is set in the configuration system, it is used.
2. If the user is running Avocado out of the source tree, the example test dir is used.
3. The system wide test dir is used.
4. The user default test dir (~/avocado/tests) is used.
-
avocado.core.data_dir.
get_tmp_dir
()¶ Get the most appropriate tmp dir location.
The tmp dir is where artifacts produced by the test are kept.
- Examples:
- Copies of a test suite source code
- Compiled test suite source code
avocado.core.dispatcher module¶
Extensions/plugins dispatchers.
-
class
avocado.core.dispatcher.
CLICmdDispatcher
¶ Bases:
avocado.core.dispatcher.Dispatcher
Calls extensions on configure/run
Automatically adds all the extensions with entry points registered under ‘avocado.plugins.cli.cmd’
-
class
avocado.core.dispatcher.
CLIDispatcher
¶ Bases:
avocado.core.dispatcher.Dispatcher
Calls extensions on configure/run
Automatically adds all the extensions with entry points registered under ‘avocado.plugins.cli’
-
class
avocado.core.dispatcher.
Dispatcher
(namespace)¶ Bases:
stevedore.enabled.EnabledExtensionManager
Base dispatcher for various extension types
-
enabled
(extension)¶
-
static
store_load_failure
(manager, entrypoint, exception)¶
-
-
class
avocado.core.dispatcher.
JobPrePostDispatcher
¶ Bases:
avocado.core.dispatcher.Dispatcher
Calls extensions before Job execution
Automatically adds all the extensions with entry points registered under ‘avocado.plugins.job.prepost’
-
map_method
(method_name, job)¶
-
-
class
avocado.core.dispatcher.
ResultDispatcher
¶ Bases:
avocado.core.dispatcher.Dispatcher
-
map_method
(method_name, result, job)¶
-
avocado.core.exceptions module¶
Exception classes, useful for tests, and other parts of the framework code.
-
exception
avocado.core.exceptions.
JobBaseException
¶ Bases:
exceptions.Exception
The parent of all job exceptions.
You should never raise this, but just in case, its status is set to FAIL.
-
status
= 'FAIL'¶
-
-
exception
avocado.core.exceptions.
JobError
¶ Bases:
avocado.core.exceptions.JobBaseException
A generic error happened during a job execution.
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
NotATestError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the file is not a test.
Causes: a non-executable file, a non-Python file, or a Python module without an avocado test class in it.
-
status
= 'NOT_A_TEST'¶
-
-
exception
avocado.core.exceptions.
OptionValidationError
¶ Bases:
exceptions.Exception
An invalid option was passed to the test runner
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
TestAbortError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test was prematurely aborted.
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
TestBaseException
¶ Bases:
exceptions.Exception
The parent of all test exceptions.
You should never raise this, but just in case, its status is set to FAIL.
-
status
= 'FAIL'¶
-
-
exception
avocado.core.exceptions.
TestError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test was not fully executed and an error happened.
This is the sort of exception you raise if the test was partially executed and could not complete due to a setup, configuration, or another fatal condition.
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
TestFail
¶ Bases:
avocado.core.exceptions.TestBaseException
,exceptions.AssertionError
Indicates that the test failed.
TestFail inherits from AssertionError in order to keep compatibility with vanilla python unittests (they only consider failures the ones deriving from AssertionError).
-
status
= 'FAIL'¶
-
-
exception
avocado.core.exceptions.
TestInterruptedError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test was interrupted by the user (Ctrl+C)
-
status
= 'INTERRUPTED'¶
-
-
exception
avocado.core.exceptions.
TestNotFoundError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test was not found in the test directory.
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
TestSetupFail
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates an error during a setup or cleanup procedure.
-
status
= 'ERROR'¶
-
-
exception
avocado.core.exceptions.
TestSkipError
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test is skipped.
Should be thrown when various conditions are such that the test is inappropriate. For example, inappropriate architecture, wrong OS version, program being tested does not have the expected capability (older version).
-
status
= 'SKIP'¶
-
-
exception
avocado.core.exceptions.
TestTimeoutInterrupted
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test did not finish before the timeout specified.
-
status
= 'INTERRUPTED'¶
-
-
exception
avocado.core.exceptions.
TestTimeoutSkip
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that the test is skipped due to a job timeout.
-
status
= 'SKIP'¶
-
-
exception
avocado.core.exceptions.
TestWarn
¶ Bases:
avocado.core.exceptions.TestBaseException
Indicates that bad things (may) have happened, but not an explicit failure.
-
status
= 'WARN'¶
-
-
avocado.core.exceptions.
fail_on
(exceptions=None)¶ Fail the test when the decorated function produces an exception of the specified type.
(For example, our method may raise IndexError on tested software failure. We can either try/catch it or use this decorator instead)
Parameters: exceptions – Tuple or single exception to be assumed as test fail [Exception] Note: self.error and self.skip behavior remains intact Note: To allow simple usage param “exceptions” must not be callable
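A sketch of the decorator in use, mirroring the IndexError scenario from the docstring (the test class itself is illustrative):

from avocado import Test
from avocado.core.exceptions import fail_on

class FailOnExample(Test):

    @fail_on(IndexError)
    def test(self):
        # An IndexError raised here is reported as a test FAIL
        # instead of a generic ERROR
        [][0]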
avocado.core.exit_codes module¶
Avocado exit codes.
These codes are returned on the command line and may be used by applications that interface with (that is, run) the Avocado command line application.
Besides the main status of the execution of the command line application, these exit statuses may also give extra, although limited, information about test statuses.
-
avocado.core.exit_codes.
AVOCADO_ALL_OK
= 0¶ Both job and tests PASSed
-
avocado.core.exit_codes.
AVOCADO_FAIL
= 4¶ Something else went wrong and avocado failed (or crashed). Commonly used on command line validation errors.
-
avocado.core.exit_codes.
AVOCADO_GENERIC_CRASH
= -1¶ Avocado generic crash
-
avocado.core.exit_codes.
AVOCADO_JOB_FAIL
= 2¶ Something went wrong with the Job itself, as signaled by an explicit avocado.core.exceptions.JobError exception.
-
avocado.core.exit_codes.
AVOCADO_JOB_INTERRUPTED
= 8¶ The job was explicitly interrupted. Usually this means that a user hit CTRL+C while the job was still running.
-
avocado.core.exit_codes.
AVOCADO_TESTS_FAIL
= 1¶ Job went fine, but some tests FAILed or ERRORed
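A wrapper script might map these constants back to a verdict, e.g. (the test file name is an illustrative assumption):

import subprocess
from avocado.core import exit_codes

rc = subprocess.call(["avocado", "run", "passtest.py"])
if rc == exit_codes.AVOCADO_ALL_OK:
    print("job and all tests passed")
elif rc == exit_codes.AVOCADO_TESTS_FAIL:
    print("job finished, but some tests failed or errored")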
avocado.core.job module¶
Job module - describes a sequence of automated test operations.
-
class
avocado.core.job.
Job
(args=None)¶ Bases:
object
A Job is a set of operations performed on a test machine.
Most of the time, we are interested in simply running tests, along with setup operations and event recording.
Creates an instance of Job class.
Parameters: args – an instance of argparse.Namespace.
-
run
()¶ Handled main job method. Runs a list of test URLs to its completion.
When given an empty urls list, the test runner figures out which tests need to be run by assuming the first component of the shortname is the test url.
Returns: Integer with overall job status. See avocado.core.exit_codes
for more information.
-
test_suite
= None¶ The list of discovered/resolved tests that will be attempted to be run by this job. If set to None, it means that test resolution has not been attempted. If set to an empty list, it means that no test was found during resolution.
-
-
class
avocado.core.job.
TestProgram
¶ Bases:
object
Convenience class to make avocado test modules executable.
-
parseArgs
(argv)¶
-
runTests
()¶
-
-
avocado.core.job.
main
¶ alias of
TestProgram
avocado.core.job_id module¶
avocado.core.jobdata module¶
Record/retrieve job information
-
avocado.core.jobdata.
get_id
(path, jobid)¶ Gets the full Job ID using the results directory path and a partial Job ID or the string ‘latest’.
-
avocado.core.jobdata.
get_resultsdir
(logdir, jobid)¶ Gets the job results directory using a Job ID.
-
avocado.core.jobdata.
record
(args, logdir, mux, urls=None, cmdline=None)¶ Records all required job information.
-
avocado.core.jobdata.
retrieve_args
(resultsdir)¶ Retrieves the job args from the results directory.
-
avocado.core.jobdata.
retrieve_cmdline
(resultsdir)¶ Retrieves the job command line from the results directory.
-
avocado.core.jobdata.
retrieve_config
(resultsdir)¶ Retrieves the job settings from the results directory.
-
avocado.core.jobdata.
retrieve_mux
(resultsdir)¶ Retrieves the job Mux object from the results directory.
-
avocado.core.jobdata.
retrieve_pwd
(resultsdir)¶ Retrieves the job pwd from the results directory.
-
avocado.core.jobdata.
retrieve_urls
(resultsdir)¶ Retrieves the job urls from the results directory.
avocado.core.loader module¶
Test loader module.
-
class
avocado.core.loader.
AccessDeniedPath
¶ Bases:
object
Dummy object to represent a url pointing to an inaccessible path
-
class
avocado.core.loader.
BrokenSymlink
¶ Bases:
object
Dummy object to represent a url pointing to a broken symlink path
-
class
avocado.core.loader.
ExternalLoader
(args, extra_params)¶ Bases:
avocado.core.loader.TestLoader
External-runner loader class
-
discover
(url, which_tests=False)¶ Parameters: - url – arguments passed to the external_runner
- which_tests – Limit tests to be displayed (ALL, AVAILABLE or DEFAULT)
Returns: list of matching tests
-
static
get_decorator_mapping
()¶
-
static
get_type_label_mapping
()¶
-
name
= 'external'¶
-
-
class
avocado.core.loader.
FileLoader
(args, extra_params)¶ Bases:
avocado.core.loader.TestLoader
Test loader class.
-
discover
(url, which_tests=False)¶ Discover (possible) tests from a directory.
Recursively walk in a directory and find test params. The tests are returned in alphabetic order.
Afterwards, when “allowed_test_types” is supplied, it verifies if all found tests are of the allowed type. If not, it returns None (even on partial match).
Parameters: - url – the directory path to inspect.
- which_tests – Limit tests to be displayed (ALL, AVAILABLE or DEFAULT)
Returns: list of matching tests
-
static
get_decorator_mapping
()¶
-
static
get_type_label_mapping
()¶
-
name
= 'file'¶
-
-
class
avocado.core.loader.
FilteredOut
¶ Bases:
object
Dummy object to represent a test filtered out by the optional mask
-
exception
avocado.core.loader.
InvalidLoaderPlugin
¶ Bases:
avocado.core.loader.LoaderError
Invalid loader plugin
-
exception
avocado.core.loader.
LoaderError
¶ Bases:
exceptions.Exception
Loader exception
-
exception
avocado.core.loader.
LoaderUnhandledUrlError
(unhandled_urls, plugins)¶ Bases:
avocado.core.loader.LoaderError
Urls not handled by any loader
-
class
avocado.core.loader.
TestLoader
(args, extra_params)¶ Bases:
object
Base for test loader classes
-
discover
(url, which_tests=False)¶ Discover (possible) tests from a url.
Parameters: - url (str) – the url to be inspected.
- which_tests – Limit tests to be displayed (ALL, AVAILABLE or DEFAULT)
Returns: a list of tests matching the url as params.
-
static
get_decorator_mapping
()¶ Get label mapping for display in test listing.
Returns: Dict {TestClass: decorator function}
-
get_extra_listing
()¶
-
static
get_type_label_mapping
()¶ Get label mapping for display in test listing.
Returns: Dict {TestClass: ‘TEST_LABEL_STRING’}
-
name
= None¶
-
-
class
avocado.core.loader.
TestLoaderProxy
¶ Bases:
object
-
discover
(urls, which_tests=False)¶ Discover (possible) tests from test urls.
Parameters: - urls (builtin.list) – a list of test urls; if [] use plugin defaults
- which_tests – Limit tests to be displayed (ALL, AVAILABLE or DEFAULT)
Returns: A list of test factories (tuples (TestClass, test_params))
-
get_base_keywords
()¶
-
get_decorator_mapping
()¶
-
get_extra_listing
()¶
-
get_type_label_mapping
()¶
-
load_plugins
(args)¶
-
load_test
(test_factory)¶ Load test from the test factory.
Parameters: test_factory (tuple) – a pair of test class and parameters. Returns: an instance of avocado.core.test.Test.
-
register_plugin
(plugin)¶
-
-
avocado.core.loader.
add_loader_options
(parser)¶
avocado.core.multiplexer module¶
Multiplex and create variants.
-
class
avocado.core.multiplexer.
AvocadoParam
(leaves, name)¶ Bases:
object
This is a single-slice params object. It can contain multiple leaves and tries to find matching results.
Parameters: - leaves – this slice’s leaves
- name – this slice’s name (identifier used in exceptions)
-
get_or_die
(path, key)¶ Get a value or raise exception if not present :raise NoMatchError: When no matches :raise KeyError: When value is not certain (multiple matches)
-
iteritems
()¶ Very basic implementation which iterates through __ALL__ params, which generates lots of duplicate entries due to inherited values.
-
str_leaves_variant
¶ String with identifier and all params
-
class
avocado.core.multiplexer.
AvocadoParams
(leaves, test_id, mux_path, default_params)¶ Bases:
object
Params object used to retrieve params from a given path. It supports absolute and relative paths. For relative paths one can define multiple paths to search for the value. It contains a compatibility wrapper to act as the original avocado Params, but by special usage you can utilize the new API. See
get()
docstring for details. You can also iterate through all keys, but this can generate quite a lot of duplicate entries inherited from ancestor nodes. It shouldn't produce false values, though.
In this version each new “get()” call is logged into the “avocado.test” log. This is subject to change (a separate file, perhaps).
Parameters: - leaves – List of TreeNode leaves defining current variant
- test_id – test id
- mux_path – list of entry points
- default_params – dict of params used when no matches found
-
get
(key, path=None, default=None)¶ Retrieve the value associated with key from params.
Parameters: - key – Key you’re looking for
- path – namespace (default [‘*’])
- default – default value when not found
Raises: KeyError – In case of multiple different values (params clash)
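As a sketch of how this is usually reached from within a test (the key name and default are illustrative assumptions):

from avocado import Test

class ParamsExample(Test):

    def test(self):
        # Relative lookup; raises KeyError if multiple variants
        # provide clashing values for the same key
        length = self.params.get("sleep_length", path="*", default=1)
        self.log.info("sleep_length resolved to %s", length)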
-
iteritems
()¶ Iterate through all available params and yield origin, key and value of each unique value.
-
log
(key, path, default, value)¶ Predefined format for displaying params query
-
objects
(key, path=None)¶ Return the names of objects defined using a given key.
Parameters: key – The name of the key whose value lists the objects (e.g. ‘nics’).
-
class
avocado.core.multiplexer.
Mux
(debug=False)¶ Bases:
object
This is a multiplex object which multiplexes the test_suite.
Parameters: debug – Whether this instance should debug the mux. Note: callers need to check whether the mux uses debug and reflect that in order to provide the right results. -
data_inject
(key, value, path=None)¶ Inject entry to the mux tree (params database)
Parameters: - key – Key to which we’d like to assign the value
- value – The key’s value
- path – Optional path to the node to which we assign the value, by default ‘/’.
-
data_merge
(tree)¶ Merge tree into the mux tree (params database)
Parameters: tree ( avocado.core.tree.TreeNode
) – Tree to be merged into this database.
-
get_number_of_tests
(test_suite)¶ Returns: overall number of tests * multiplex variants
-
is_parsed
()¶ Reports whether the tree was already multiplexed
-
itertests
()¶ Yield variant-id and test params
:yield (variant-id, (list of leaves, list of multiplex paths))
-
parse
(args)¶ Apply options defined on the cmdline
Parameters: args – Parsed cmdline arguments
-
-
class
avocado.core.multiplexer.
MuxTree
(root)¶ Bases:
object
Object representing part of the tree from the root to leaves or another multiplex domain. Recursively it creates multiplexed variants of the full tree.
Parameters: root – Root of this tree slice
-
exception
avocado.core.multiplexer.
NoMatchError
¶ Bases:
exceptions.KeyError
avocado.core.output module¶
Manages output and logging in avocado applications.
-
avocado.core.output.
BUILTIN_STREAMS
= {'test': 'test output', 'debug': 'tracebacks and other debugging info', 'app': 'application output', 'early': 'early logging of other streams, including test (very verbose)', 'remote': 'fabric/paramiko debug'}¶ Builtin special keywords to enable set of logging streams
-
avocado.core.output.
BUILTIN_STREAM_SETS
= {'all': 'all builtin streams', 'none': 'disables regular output (leaving only errors enabled)'}¶ Groups of builtin streams
-
class
avocado.core.output.
FilterInfoAndLess
(name='')¶ Bases:
logging.Filter
Initialize a filter.
Initialize with the name of the logger which, together with its children, will have its events allowed through the filter. If no name is specified, allow every event.
-
filter
(record)¶
-
-
class
avocado.core.output.
FilterWarnAndMore
(name='')¶ Bases:
logging.Filter
Initialize a filter.
Initialize with the name of the logger which, together with its children, will have its events allowed through the filter. If no name is specified, allow every event.
-
filter
(record)¶
-
-
class
avocado.core.output.
LoggingFile
(prefix='', level=10, logger=[<logging.RootLogger object>])¶ Bases:
object
File-like object that will receive messages and pass them to logging.
Constructor. Sets prefixes and which logger is going to be used.
:param prefix - The prefix for each line logged by this object.
-
flush
()¶
-
isatty
()¶
-
write
(data)¶ Writes data only if it constitutes a whole line. If that’s not the case, store it in a buffer and wait until we have a complete line. :param data - Raw data (a string) that will be processed.
-
writelines
(lines)¶ Writes an iterable of lines
Parameters: lines – An iterable of strings that will be processed.
-
-
class
avocado.core.output.
MemStreamHandler
(stream=None)¶ Bases:
logging.StreamHandler
Handler that stores all records in self.log (shared in all instances)
Initialize the handler.
If stream is not specified, sys.stderr is used.
-
emit
(record)¶
-
flush
()¶ This is in-mem object, it does not require flushing
-
log
= []¶
-
-
exception
avocado.core.output.
PagerNotFoundError
¶ Bases:
exceptions.Exception
-
class
avocado.core.output.
Paginator
¶ Bases:
object
Paginator that uses less to display contents on the terminal.
Contains cleanup handling for when user presses ‘q’ (to quit less).
-
close
()¶
-
write
(msg)¶
-
-
class
avocado.core.output.
ProgressStreamHandler
(stream=None)¶ Bases:
logging.StreamHandler
Handler class that allows users to skip new lines on each emission.
Initialize the handler.
If stream is not specified, sys.stderr is used.
-
emit
(record)¶
-
-
avocado.core.output.
STD_OUTPUT
= <avocado.core.output.StdOutput object>¶ Allows modifying the sys.stdout/sys.stderr
-
class
avocado.core.output.
StdOutput
¶ Bases:
object
Class to modify sys.stdout/sys.stderr
-
close
()¶ Enable original sys.stdout/sys.stderr and cleanup
-
enable_outputs
()¶ Enable sys.stdout/sys.stderr (either with 2 streams or with paginator)
-
enable_paginator
()¶ Enable paginator
-
enable_stderr
()¶ Enable sys.stderr and disable sys.stdout
-
fake_outputs
()¶ Replace sys.stdout/sys.stderr with in-memory-objects
-
print_records
()¶ Prints all stored messages as they occurred into streams they were produced for.
-
records
= []¶ List of records of stored output when stdout/stderr is disabled
-
-
avocado.core.output.
TERM_SUPPORT
= <avocado.core.output.TermSupport object>¶ Transparently handles colored terminal, when one is used
-
class
avocado.core.output.
TermSupport
¶ Bases:
object
-
COLOR_BLUE
= '\x1b[94m'¶
-
COLOR_DARKGREY
= '\x1b[90m'¶
-
COLOR_GREEN
= '\x1b[92m'¶
-
COLOR_RED
= '\x1b[91m'¶
-
COLOR_YELLOW
= '\x1b[93m'¶
-
CONTROL_END
= '\x1b[0m'¶
-
ESCAPE_CODES
= ['\x1b[94m', '\x1b[92m', '\x1b[93m', '\x1b[91m', '\x1b[90m', '\x1b[0m', '\x1b[1D', '\x1b[1C']¶ Class to help applications to colorize their outputs for terminals.
This will probe the current terminal and colorize output only if the stdout is in a tty or the terminal type is recognized.
-
MOVE_BACK
= '\x1b[1D'¶
-
MOVE_FORWARD
= '\x1b[1C'¶
-
disable
()¶ Disable colors from the strings output by this class.
-
error_str
()¶ Print an error string (red colored).
If the output does not support colors, just return the original string.
-
fail_header_str
(msg)¶ Print a fail header string (red colored).
If the output does not support colors, just return the original string.
-
fail_str
()¶ Print a fail string (red colored).
If the output does not support colors, just return the original string.
-
header_str
(msg)¶ Print a header string (blue colored).
If the output does not support colors, just return the original string.
-
healthy_str
(msg)¶ Print a healthy string (green colored).
If the output does not support colors, just return the original string.
-
interrupt_str
()¶ Print an interrupt string (red colored).
If the output does not support colors, just return the original string.
-
partial_str
(msg)¶ Print a string that denotes partial progress (yellow colored).
If the output does not support colors, just return the original string.
-
pass_str
()¶ Print a pass string (green colored).
If the output does not support colors, just return the original string.
-
skip_str
()¶ Print a skip string (yellow colored).
If the output does not support colors, just return the original string.
-
warn_header_str
(msg)¶ Print a warning header string (yellow colored).
If the output does not support colors, just return the original string.
-
warn_str
()¶ Print a warning string (yellow colored).
If the output does not support colors, just return the original string.
-
-
class
avocado.core.output.
Throbber
¶ Bases:
object
Produces a spinner used to notify progress in the application UI.
-
MOVES
= ['', '', '', '']¶
-
STEPS
= ['-', '\\', '|', '/']¶
-
render
()¶
-
-
avocado.core.output.
add_log_handler
(logger, klass=<class 'logging.StreamHandler'>, stream=<open file '<stdout>', mode 'w'>, level=20, fmt='%(name)s: %(message)s')¶ Add handler to a logger.
Parameters: - logger – the name of a logging.Logger instance, that is, the parameter to logging.getLogger()
- klass – Handler class (defaults to logging.StreamHandler)
- stream – Logging stream, to be passed as an argument to klass (defaults to sys.stdout)
- level – Log level (defaults to INFO)
- fmt – Logging format (defaults to %(name)s: %(message)s)
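For instance, a sketch that attaches a DEBUG stderr handler to the "avocado.test" logger (the logger name is an illustrative choice):

import sys
import logging
from avocado.core.output import add_log_handler

add_log_handler("avocado.test", klass=logging.StreamHandler,
                stream=sys.stderr, level=logging.DEBUG,
                fmt="%(asctime)s %(name)s: %(message)s")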
-
avocado.core.output.
disable_log_handler
(logger)¶
-
avocado.core.output.
early_start
()¶ Replace all outputs with in-memory handlers
-
avocado.core.output.
log_plugin_failures
(failures)¶ Log in the application UI failures to load a set of plugins
Parameters: failures – a list of load failures, usually coming from an avocado.core.dispatcher.Dispatcher
attribute load_failures
-
avocado.core.output.
reconfigure
(args)¶ Adjust logging handlers according to app args and re-log messages.
avocado.core.parser module¶
Avocado application command line parsing.
-
class
avocado.core.parser.
ArgumentParser
(prog=None, usage=None, description=None, epilog=None, version=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True)¶ Bases:
argparse.ArgumentParser
Class to override argparse functions
-
error
(message)¶
-
-
class
avocado.core.parser.
FileOrStdoutAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)¶ Bases:
argparse.Action
Controls claiming the right to write to the application standard output
-
class
avocado.core.parser.
Parser
¶ Bases:
object
Class to parse the command line arguments.
-
finish
()¶ Finish the process of parsing arguments.
Side effect: set the final value for attribute args.
-
start
()¶ Start parsing arguments.
At the end of this method, the support for subparsers is activated. Side effect: update attribute args (the namespace).
-
avocado.core.plugin_interfaces module¶
-
class
avocado.core.plugin_interfaces.
CLI
¶ Bases:
avocado.core.plugin_interfaces.Plugin
Base plugin interface for adding options (non-commands) to the command line
Plugins that want to add extra options to the core command line application or to sub commands should use the ‘avocado.plugins.cli’ namespace.
-
configure
(parser)¶ Configures the command line parser with options specific to this plugin
-
run
(args)¶ Execute any action the plugin intends.
Examples of actions include activating special features upon finding that the requested command line options were set by the user.
Note: this plugin class is not intended for adding new commands, for that please use CLICmd.
-
-
class
avocado.core.plugin_interfaces.
CLICmd
¶ Bases:
avocado.core.plugin_interfaces.Plugin
Base plugin interface for adding new commands to the command line app
Plugins that want to add extensions to the run command should use the ‘avocado.plugins.cli.cmd’ namespace.
-
configure
(parser)¶ Lets the extension add command line options and do early configuration
By default it will register its name as the command name and give its description as the help message.
-
description
= None¶
-
name
= None¶
-
run
(args)¶ Entry point for actually running the command
-
-
class
avocado.core.plugin_interfaces.
JobPost
¶ Bases:
avocado.core.plugin_interfaces.Plugin
Base plugin interface for adding actions after a job runs
Plugins that want to add actions to be run after a job runs should use the ‘avocado.plugins.job.prepost’ namespace and implement the defined interface.
-
post
(job)¶ Entry point for actually running the post job action
-
-
class
avocado.core.plugin_interfaces.
JobPre
¶ Bases:
avocado.core.plugin_interfaces.Plugin
Base plugin interface for adding actions before a job runs
Plugins that want to add actions to be run before a job runs should use the ‘avocado.plugins.job.prepost’ namespace and implement the defined interface.
-
pre
(job)¶ Entry point for actually running the pre job action
-
-
class
avocado.core.plugin_interfaces.
Result
¶ Bases:
avocado.core.plugin_interfaces.Plugin
-
render
(result, job)¶ Entry point for the method that renders the result
This will usually be used to write the result to a file or directory.
Parameters: - result (
avocado.core.result.Result
) – the complete job result - job (
avocado.core.job.Job
) – the finished job for which a result will be written
- result (
-
avocado.core.remoter module¶
Module to provide remote operations.
-
exception
avocado.core.remoter.
ConnectionError
¶
-
class
avocado.core.remoter.
Remote
(hostname, username=None, password=None, key_filename=None, port=22, timeout=60, attempts=10, env_keep=None)¶ Bases:
object
Performs remote operations.
Creates an instance of
Remote
.Parameters: - hostname – the hostname.
- username – the username. Default: autodetect.
- password – the password. Default: try to use public key.
- key_filename – path to an identity file (Example: .pem files from Amazon EC2).
- timeout – remote command timeout, in seconds. Default: 60.
- attempts – number of attempts to connect. Default: 10.
-
makedir
(remote_path)¶ Create a directory.
Parameters: remote_path – the remote path to create.
-
receive_files
(*args, **kwargs)¶
-
run
(*args, **kwargs)¶
-
send_files
(*args, **kwargs)¶
-
uptime
()¶ Performs uptime (good to check connection).
Returns: the uptime string, or an empty string if it fails.
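A minimal sketch of the class in use (hostname, credentials and paths are illustrative assumptions):

from avocado.core.remoter import Remote

remote = Remote("test-vm.example.com", username="root", password="secret")
# uptime() doubles as a connectivity check: an empty string means failure
if remote.uptime():
    remote.makedir("~/avocado/tests")
    remote.send_files("/tmp/mytest.py", "~/avocado/tests/mytest.py")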
-
exception
avocado.core.remoter.
RemoterError
¶ Bases:
exceptions.Exception
-
avocado.core.remoter.
receive_files
(local_path, remote_path)¶ Receive files from the defined fabric host.
This assumes the fabric environment was previously (and properly) initialized.
Parameters: - local_path – the local path.
- remote_path – the remote path.
-
avocado.core.remoter.
run
(command, ignore_status=False, quiet=True, timeout=60)¶ Executes a command on the defined fabric hosts.
This is basically a wrapper to fabric.operations.run, encapsulating the result in an avocado process.CmdResult object. This also assumes the fabric environment was previously (and properly) initialized.
Parameters: - command – the command string to execute.
- ignore_status – Whether to not raise exceptions in case the command’s return code is different from zero.
- timeout – Maximum time allowed for the command to return.
- quiet – Whether to not log command stdout/err. Default: True.
Returns: the result of the remote program’s execution.
Return type: process.CmdResult Raises: fabric.exceptions.CommandTimeout – When the timeout is exhausted.
-
avocado.core.remoter.
send_files
(local_path, remote_path)¶ Send files to the defined fabric host.
This assumes the fabric environment was previously (and properly) initialized.
Parameters: - local_path – the local path.
- remote_path – the remote path.
avocado.core.result module¶
Contains the definition of the Result class, used for output in avocado.
It also contains the most basic result class, HumanResult, used by the test runner.
-
class
avocado.core.result.
HumanResult
(job)¶ Bases:
avocado.core.result.Result
Human output Test result class.
-
end_test
(state)¶
-
end_tests
()¶ Called once after all tests are executed.
-
notify_progress
(progress=False)¶
-
start_test
(state)¶
-
start_tests
()¶ Called once before any tests are executed.
-
-
exception
avocado.core.result.
InvalidOutputPlugin
¶ Bases:
exceptions.Exception
-
class
avocado.core.result.
Result
(job)¶ Bases:
object
Result class, holder for job (and its tests) result information.
Creates an instance of Result.
Parameters: job – an instance of avocado.core.job.Job.
-
check_test
(state)¶ Called once for a test to check status and report.
Parameters: state – A dict with test internal state
-
end_test
(state)¶ Called when the given test has been run.
Parameters: state (dict) – result of avocado.core.test.Test.get_state.
-
end_tests
()¶ Called once after all tests are executed.
-
start_test
(state)¶ Called when the given test is about to run.
Parameters: state (dict) – result of avocado.core.test.Test.get_state.
-
start_tests
()¶ Called once before any tests are executed.
-
-
class
avocado.core.result.
ResultProxy
¶ Bases:
object
-
add_output_plugin
(plugin)¶
-
check_test
(state)¶
-
end_test
(state)¶
-
end_tests
()¶
-
notify_progress
(progress_from_test=False)¶
-
start_test
(state)¶
-
start_tests
()¶
-
-
avocado.core.result.
register_test_result_class
(application_args, klass)¶ Register the given test result class to be loaded and enabled by the job
Parameters: - application_args (
argparse.Namespace
) – the parsed application command line arguments. This is currently being abused to hold various job settings and feature choices, such as the runner. - klass (a subclass of
Result
) – the test result class to enable
- application_args (
avocado.core.runner module¶
Test runner module.
-
class
avocado.core.runner.
TestRunner
(job, test_result)¶ Bases:
object
A test runner class that displays tests results.
Creates an instance of TestRunner class.
Parameters: - job – an instance of
avocado.core.job.Job
. - test_result – an instance of
avocado.core.result.ResultProxy
.
-
DEFAULT_TIMEOUT
= 86400¶
-
run_suite
(test_suite, mux, timeout=0, replay_map=None, test_result_total=0)¶ Run one or more tests and report with test result.
Parameters: - test_suite – a list of tests to run.
- mux – the multiplexer.
- timeout – maximum amount of time (in seconds) to execute.
Returns: a set with types of test failures.
-
run_test
(test_factory, queue, summary, job_deadline=0)¶ Run a test instance inside a subprocess.
Parameters: - test_factory (tuple of avocado.core.test.Test and dict) – Test factory (test class and parameters).
- queue (multiprocessing.Queue instance) – Multiprocess queue.
- summary (set) – Contains types of test failures.
- job_deadline (int) – Maximum time to execute.
-
class
avocado.core.runner.
TestStatus
(job, queue)¶ Bases:
object
Test status handler
Parameters: - job – Associated job
- queue – test message queue
-
early_status
¶ Get early status
-
finish
(proc, started, timeout, step)¶ Wait for the test process to finish and report its status, or an error status if unable to obtain the status before the deadline.
Parameters: - proc – The test’s process
- started – Time when the test started
- timeout – Timeout for waiting on status
- step – Step between checks for the status
-
wait_for_early_status
(proc, timeout)¶ Wait until early_status is obtained.
Parameters: - proc – test process
- timeout – timeout for early_status
Raises: exceptions.TestError – On timeout/error
-
avocado.core.runner.
add_runner_failure
(test_state, new_status, message)¶ Append runner failure to the overall test status.
Parameters: - test_state – Original test state (dict)
- new_status – New test status (PASS/FAIL/ERROR/INTERRUPTED/...)
- message – The error message
avocado.core.safeloader module¶
Safe (AST based) test loader module utilities
-
avocado.core.safeloader.
AVOCADO_DOCSTRING_TAG_RE
= <_sre.SRE_Pattern object>¶ Gets the tag value from a string. Used to tag a test class in various ways
-
avocado.core.safeloader.
find_class_and_methods
(path, method_pattern=None, base_class=None)¶ Attempts to find method names from a given Python source file
Parameters: - path (str) – path to a Python source code file
- method_pattern – compiled regex to match against method name
- base_class (str or None) – only consider classes that inherit from a given base class (or classes that inherit from any class if None is given)
-
avocado.core.safeloader.
get_docstring_tag
(docstring)¶ Returns the value of the avocado custom tag inside a docstring
Parameters: docstring (str) – the complete text used as documentation
-
avocado.core.safeloader.
is_docstring_tag_disable
(docstring)¶ Checks if there’s an avocado tag that disables its class as a Test class
Return type: bool
-
avocado.core.safeloader.
is_docstring_tag_enable
(docstring)¶ Checks if there’s an avocado tag that enables its class as a Test class
Return type: bool
-
avocado.core.safeloader.
modules_imported_as
(module)¶ Returns a mapping of imported module names, whether they use aliases or not
The goal of this utility function is to return the name of the import as used in the rest of the module, whether an aliased import was used or not.
For code such as:
>>> import foo as bar
This function should return {“foo”: “bar”}
And for code such as:
>>> import foo
It should return {“foo”: “foo”}
Please note that only global level imports are looked at. If there are imports defined, say, inside functions or class definitions, they will not be seen by this function.
Parameters: module ( _ast.Module
) – module, as parsed byast.parse()
Returns: a mapping of names {<realname>: <alias>} of modules imported Return type: dict
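Tying it to ast.parse(), a short sketch:

import ast
from avocado.core.safeloader import modules_imported_as

source = "import os\nimport json as j\n"
module = ast.parse(source)
# Expected result: {'os': 'os', 'json': 'j'}
print(modules_imported_as(module))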
avocado.core.settings module¶
Reads the avocado settings from a .ini file (from python ConfigParser).
-
exception
avocado.core.settings.
ConfigFileNotFound
(path_list)¶ Bases:
avocado.core.settings.SettingsError
Error thrown when the main settings file could not be found.
-
class
avocado.core.settings.
Settings
(config_path=None)¶ Bases:
object
Simple wrapper around ConfigParser, with a key type conversion available.
Constructor. Tries to find the main settings file and load it.
Parameters: config_path – Path to a config file. Useful for unittesting. -
get_value
(section, key, key_type=<type 'str'>, default=<object object>, allow_blank=False)¶ Get value from key in a given config file section.
Parameters: - section (str) – Config file section.
- key (str) – Config file key, relative to section.
- key_type (either string based names representing types, including str, int, float, bool, list and path, or the types themselves, limited to str, int, float, bool and list) – Type of key.
- allow_blank – Whether an empty value for the key is allowed.
Returns: the value, if one is available in the config; the default value, if one was provided.
Raises: SettingsError, in case key is not set and no default was provided.
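For example (the section and key below follow the stock avocado config layout, shown here as an assumption):

from avocado.core.settings import Settings

settings = Settings()
# Fall back to a default when the key is absent from the config files
base_dir = settings.get_value("datadir.paths", "base_dir",
                              key_type=str, default="~/avocado")
print(base_dir)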
-
no_default
= <object object>¶
-
process_config_path
(pth)¶
-
-
exception
avocado.core.settings.
SettingsError
¶ Bases:
exceptions.Exception
Base settings error.
-
exception
avocado.core.settings.
SettingsValueError
¶ Bases:
avocado.core.settings.SettingsError
Error thrown when we could not successfully convert a key to a value.
-
avocado.core.settings.
convert_value_type
(value, value_type)¶ Convert a string value to a given value type.
Parameters: - value (str.) – Value we want to convert.
- value_type (str or type.) – Type of the value we want to convert.
Returns: Converted value type.
Return type: Dependent on value_type.
Raise: TypeError, in case it was not possible to convert values.
avocado.core.status module¶
Maps the different status strings in avocado to booleans.
This is used by methods and functions to return a cut-and-dried answer to whether a test or a job in avocado PASSed or FAILed.
avocado.core.sysinfo module¶
-
class
avocado.core.sysinfo.
Collectible
(logf)¶ Bases:
object
Abstract class for representing collectibles by sysinfo.
-
readline
(logdir)¶ Read one line of the collectible object.
Parameters: logdir – Path to a log directory.
-
-
class
avocado.core.sysinfo.
Command
(cmd, logf=None, compress_log=False)¶ Bases:
avocado.core.sysinfo.Collectible
Collectible command.
Parameters: - cmd – String with the command.
- logf – Basename of the file where output is logged (optional).
- compress_log – Whether to compress the output of the command.
-
run
(logdir)¶ Execute the command as a subprocess and log its output in logdir.
Parameters: logdir – Path to a log directory.
-
class
avocado.core.sysinfo.
Daemon
(cmd, logf=None, compress_log=False)¶ Bases:
avocado.core.sysinfo.Command
Collectible daemon.
Parameters: - cmd – String with the daemon command.
- logf – Basename of the file where output is logged (optional).
- compress_log – Whether to compress the output of the command.
-
run
(logdir)¶ Execute the daemon as a subprocess and log its output in logdir.
Parameters: logdir – Path to a log directory.
-
stop
()¶ Stop daemon execution.
-
class
avocado.core.sysinfo.
JournalctlWatcher
(logf=None)¶ Bases:
avocado.core.sysinfo.Collectible
Tracks the content of the systemd journal in a compressed file.
Parameters: logf – Basename of the file where output is logged (optional). -
run
(logdir)¶
-
-
class
avocado.core.sysinfo.
LogWatcher
(path, logf=None)¶ Bases:
avocado.core.sysinfo.Collectible
Keep track of the contents of a log file in another compressed file.
This object is normally used to track contents of the system log (/var/log/messages), and the outputs are gzipped since they can be potentially large, helping to save space.
Parameters: - path – Path to the log file.
- logf – Basename of the file where output is logged (optional).
-
run
(logdir)¶ Log all of the new data present in the log file.
-
class
avocado.core.sysinfo.
Logfile
(path, logf=None)¶ Bases:
avocado.core.sysinfo.Collectible
Collectible system file.
Parameters: - path – Path to the log file.
- logf – Basename of the file where output is logged (optional).
-
run
(logdir)¶ Copy the log file to the appropriate log dir.
Parameters: logdir – Log directory which the file is going to be copied to.
-
class
avocado.core.sysinfo.
SysInfo
(basedir=None, log_packages=None, profiler=None)¶ Bases:
object
Log different system properties at some key control points:
- start_job
- start_test
- end_test
- end_job
Set sysinfo collectibles.
Parameters: - basedir – Base log dir where sysinfo files will be located.
- log_packages – Whether to log system packages (optional because logging packages is a costly operation). If not given explicitly, tries to look in the config files, and if not found, defaults to False.
- profiler – Whether to use the profiler. If not given explicitly, tries to look in the config files.
-
add_cmd
(cmd, hook)¶ Add a command collectible.
Parameters: - cmd – Command to log.
- hook – In which hook this cmd should be logged (start job, end job).
-
add_file
(filename, hook)¶ Add a system file collectible.
Parameters: - filename – Path to the file to be logged.
- hook – In which hook this file should be logged (start job, end job).
-
add_watcher
(filename, hook)¶ Add a system file watcher collectible.
Parameters: - filename – Path to the file to be logged.
- hook – In which hook this watcher should be logged (start job, end job).
-
end_job_hook
()¶ Logging hook called whenever a job finishes.
-
end_test_hook
()¶ Logging hook called after a test finishes.
-
start_job_hook
()¶ Logging hook called whenever a job starts.
-
start_test_hook
()¶ Logging hook called before a test starts.
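A hedged sketch of driving these hooks programmatically (the directory below is illustrative):
from avocado.core.sysinfo import SysInfo

sysinfo = SysInfo(basedir='/tmp/sysinfo-demo', log_packages=False)
sysinfo.start_job_hook()   # collect the 'start job' collectibles
# ... the job and its tests would run here ...
sysinfo.end_job_hook()     # collect the 'end job' collectibles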
-
avocado.core.sysinfo.
collect_sysinfo
(args)¶ Collect sysinfo to a base directory.
Parameters: args – argparse.Namespace
object with command line params.
avocado.core.test module¶
Contains the base test implementation, used as a base for the actual framework tests.
-
class
avocado.core.test.
DryRunTest
(*args, **kwargs)¶ Bases:
avocado.core.test.SkipTest
Fake test which logs itself and reports as SKIP
This class substitutes other classes; the remaining arguments are ignored and only the ones supported by avocado.Test are set.
-
setUp
()¶
-
-
class
avocado.core.test.
ExternalRunnerTest
(name, params=None, base_logdir=None, job=None, external_runner=None)¶ Bases:
avocado.core.test.SimpleTest
-
filename
¶
-
test
()¶
-
-
class
avocado.core.test.
MissingTest
(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)¶ Bases:
avocado.core.test.Test
Handle when there is no such test module in the test directory.
Initializes the test.
Parameters: - methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
- name (avocado.core.test.TestName) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set; it is reserved for internal Avocado use, such as when running random executables as tests.
- base_logdir – Directory where test logs should go. If None is provided, avocado.data_dir.create_job_logs_dir() is used.
- job – The job that this test is part of.
Raises: NameNotTestNameError – when name is not a TestName instance
-
test
()¶
-
exception
avocado.core.test.
NameNotTestNameError
¶ Bases:
exceptions.Exception
The given test name is not a TestName instance
With the introduction of avocado.core.test.TestName, it’s not allowed to use other types as the name parameter to a test instance. This exception is raised when this is attempted.
-
class
avocado.core.test.
NotATest
(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)¶ Bases:
avocado.core.test.Test
The file is not a test.
Either a non-executable Python module with no avocado test class in it, or a regular, non-executable file.
Initializes the test.
Parameters: - methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
- name (avocado.core.test.TestName) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set; it is reserved for internal Avocado use, such as when running random executables as tests.
- base_logdir – Directory where test logs should go. If None is provided, avocado.data_dir.create_job_logs_dir() is used.
- job – The job that this test is part of.
Raises: NameNotTestNameError – when name is not a TestName instance
-
test
()¶
-
class
avocado.core.test.
ReplaySkipTest
(*args, **kwargs)¶ Bases:
avocado.core.test.SkipTest
Skip test due to job replay filter.
This test is skipped due to a job replay filter. It will never have a chance to execute.
This class substitutes other classes; the remaining arguments are ignored and only the ones supported by avocado.Test are set.
-
class
avocado.core.test.
SimpleTest
(name, params=None, base_logdir=None, job=None)¶ Bases:
avocado.core.test.Test
Run an arbitrary command that returns either 0 (PASS) or !=0 (FAIL).
-
execute_cmd
()¶ Run the executable, and log its detailed execution.
-
filename
¶ Returns the name of the file (path) that holds the current test
-
re_avocado_log
= <_sre.SRE_Pattern object at 0xf84eb0>¶
-
test
()¶ Run the test and postprocess the results
-
-
class
avocado.core.test.
SkipTest
(*args, **kwargs)¶ Bases:
avocado.core.test.Test
Class intended as a generic substitute for avocado tests, which fails during the setUp phase with the “self._skip_reason” message.
This class substitutes other classes; the remaining arguments are ignored and only the ones supported by avocado.Test are set.
-
setUp
()¶
-
test
()¶ Should not be executed
-
-
class
avocado.core.test.
Test
(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)¶ Bases:
unittest.case.TestCase
Base implementation for the test class.
You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods on your own tests.
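For illustration, a minimal test following those conventions might look like this (file, class and log messages are hypothetical):
from avocado import Test, main


class ExampleTest(Test):

    def setUp(self):
        self.log.info('preparing the test')

    def test(self):
        self.assertEqual(1 + 1, 2)

    def tearDown(self):
        self.log.info('cleaning up')


if __name__ == '__main__':
    main()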
Initializes the test.
Parameters: - methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
- name (avocado.core.test.TestName) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set; it is reserved for internal Avocado use, such as when running random executables as tests.
- base_logdir – Directory where test logs should go. If None is provided, avocado.data_dir.create_job_logs_dir() is used.
- job – The job that this test is part of.
Raises: NameNotTestNameError – when name is not a TestName instance
-
basedir
¶ The directory where this test (when backed by a file) is located
-
cache_dirs
= None¶
-
datadir
¶ Returns the path to the directory that contains test data files
-
default_params
= {}¶
-
error
(message=None)¶ Errors the currently running test.
After calling this method a test will be terminated and have its status as ERROR.
Parameters: message (str) – an optional message that will be recorded in the logs
-
fail
(message=None)¶ Fails the currently running test.
After calling this method a test will be terminated and have its status as FAIL.
Parameters: message (str) – an optional message that will be recorded in the logs
-
fetch_asset
(name, asset_hash=None, algorithm='sha1', locations=None, expire=None)¶ Method to call utils.asset in order to fetch an asset file, supporting hash check, caching and multiple locations.
Parameters: - name – the asset filename or URL
- asset_hash – asset hash (optional)
- algorithm – hash algorithm (optional, defaults to sha1)
- locations – list of URLs from where the asset can be fetched (optional)
- expire – time for the asset to expire
Raises: EnvironmentError – When it fails to fetch the asset
Returns: asset file local path
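A hedged sketch of fetching an asset inside a test (the URL and expire value are illustrative placeholders):
from avocado import Test


class AssetExample(Test):

    def test(self):
        # Cached after the first download; refreshed after one day
        tarball = self.fetch_asset(
            'https://example.com/myproject-daily.tar.bz2', expire='1d')
        self.log.info('asset cached at %s', tarball)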
-
filename
¶ Returns the name of the file (path) that holds the current test
-
get_state
()¶ Serialize selected attributes representing the test state
Returns: a dictionary containing relevant test state data Return type: dict
-
report_state
()¶ Send the current test state to the test runner process
-
run_avocado
()¶ Wraps the run method, for execution inside the avocado runner.
Parameters: result – unused parameter, kept for compatibility with unittest.TestCase.
-
skip
(message=None)¶ Skips the currently running test.
This method should only be called from a test’s setUp() method, not anywhere else, since by definition, if a test gets to be executed, it can’t be skipped anymore. If you call this method outside setUp(), avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.
Parameters: message (str) – an optional message that will be recorded in the logs
-
srcdir
= None¶
-
workdir
= None¶
-
class
avocado.core.test.
TestError
(*args, **kwargs)¶ Bases:
avocado.core.test.Test
Generic test error.
-
test
()¶
-
-
class
avocado.core.test.
TestName
(uid, name, variant=None, no_digits=None)¶ Bases:
object
Test name representation
Test name according to avocado specification
Parameters: - uid – unique test id (within the job)
- name – test name (identifies the executed test)
- variant – variant id
- no_digits – number of digits of the test uid
-
str_filesystem
()¶ File-system friendly representation of the test name
-
class
avocado.core.test.
TimeOutSkipTest
(*args, **kwargs)¶ Bases:
avocado.core.test.SkipTest
Skip test due to job timeout.
This test is skipped due to a job timeout. It will never have a chance to execute.
This class substitutes other classes; the remaining arguments are ignored and only the ones supported by avocado.Test are set.
-
setUp
()¶
-
avocado.core.tree module¶
Tree data structure with nodes.
This tree structure (Tree drawing code) was inspired by the base tree data structure of the ETE 2 project:
A library for analysis of phylogenetics trees.
Explicit permission has been given by the copyright owner of ETE 2 Jaime Huerta-Cepas <jhcepas@gmail.com> to take ideas/use snippets from his original base tree code and re-license under GPLv2+, given that GPLv3 and GPLv2 (used in some avocado files) are incompatible.
-
class
avocado.core.tree.
Control
(code, value=None)¶ Bases:
object
Container used to identify node vs. control sequence
-
class
avocado.core.tree.
OutputValue
(value, node, srcyaml)¶ Bases:
object
Ordinary value with some debug info
-
class
avocado.core.tree.
TreeNode
(name='', value=None, parent=None, children=None)¶ Bases:
object
Class for bounding nodes into tree-structure.
-
add_child
(node)¶ Append node as child. Nodes with the same name get merged into the existing position.
-
detach
()¶ Detach this node from parent
-
environment
¶ Node environment (values + preceding envs)
-
get_environment
()¶ Get node environment (values + preceding envs)
-
get_leaves
()¶ Get list of leaf nodes
-
get_node
(path, create=False)¶ Parameters: - path – Path of the desired node (relative to this node)
- create – Create the node (and intermediary ones) when not present
Returns: the node associated with this path
Raises: ValueError – When path doesn’t exist and create not set
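A minimal sketch of building a tree and looking nodes up (names and values are illustrative):
from avocado.core.tree import TreeNode

root = TreeNode('')
cpu = root.get_node('/hw/cpu', create=True)   # creates /hw and /hw/cpu
cpu.value['arch'] = 'x86_64'
print(cpu.path)                               # expected: /hw/cpu
print([leaf.name for leaf in root.iter_leaves()])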
-
get_parents
()¶ Get list of parent nodes
-
get_path
(sep='/')¶ Get node path
-
get_root
()¶ Get root of this tree
-
is_leaf
¶ Is this a leaf node?
-
iter_children_preorder
()¶ Iterate through children
-
iter_leaves
()¶ Iterate through leaf nodes
-
iter_parents
()¶ Iterate through parent nodes to root
-
merge
(other)¶ Merges the other node into this one without checking the name of the other node. New values are appended, existing values overwritten and unaffected ones are kept. Then all of the other node’s children are added as children (recursively, they either get appended at the end or merged into an existing node in the previous position).
-
parents
¶ List of parent nodes
-
path
¶ Node path
-
root
¶ Root of this tree
-
set_environment_dirty
()¶ Set the environment cache dirty. Always call this when you query the environment and then change its value or structure; otherwise you’ll get the old environment instead.
-
-
class
avocado.core.tree.
TreeNodeDebug
(name='', value=None, parent=None, children=None, srcyaml=None)¶ Bases:
avocado.core.tree.TreeNode
Debug version of TreeNode. Warning: the origin of each value is appended to all values, thus it’s not suitable for running tests.
-
merge
(other)¶ Override origin with the one from other tree. Updated/Newly set values are going to use this location as origin.
-
-
class
avocado.core.tree.
ValueDict
(srcyaml, node, values)¶ Bases:
dict
Dict which stores the origin of the items
-
iteritems
()¶ Slower implementation with the use of __getitem__
-
-
avocado.core.tree.
apply_filters
(tree, filter_only=None, filter_out=None)¶ Apply a set of filters to the tree.
The basic filtering is filter only, which includes nodes, and the filter out rules, that exclude nodes.
Note that filter_out is stronger than filter_only: once a node is filtered out, it cannot be brought back by a filter_only rule.
Parameters: - filter_only – the list of paths which will include nodes.
- filter_out – the list of paths which will exclude nodes.
Returns: the original tree minus the nodes filtered by the rules.
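A short sketch of the precedence described above (paths are illustrative):
from avocado.core.tree import TreeNode, apply_filters

root = TreeNode('')
root.get_node('/hw/cpu', create=True)
root.get_node('/hw/disk', create=True)
# filter_out wins over filter_only: /hw/disk is dropped from the result
filtered = apply_filters(root, filter_only=['/hw'], filter_out=['/hw/disk'])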
-
avocado.core.tree.
get_named_tree_cls
(path)¶ Return TreeNodeDebug class with hardcoded yaml path
-
avocado.core.tree.
path_parent
(path)¶ From a given path, return its parent path.
Parameters: path – the node path as string. Returns: the parent path as string.
-
avocado.core.tree.
tree_view
(root, verbose=None, use_utf8=None)¶ Generate a tree-view of the given node.
Parameters: - root – root node
- verbose – verbosity (0, 1, 2, 3)
- use_utf8 – use utf-8 encoding (None=autodetect)
Returns: string representing this node’s tree structure
avocado.core.version module¶
avocado.core.virt module¶
Module to provide classes for Virtual Machines.
-
class
avocado.core.virt.
Hypervisor
(uri=None)¶ Bases:
object
The Hypervisor connection class.
Creates an instance of class Hypervisor.
Parameters: uri – the connection URI. -
connect
()¶ Connect to the hypervisor.
-
domains
¶ Property to get the list of all domains.
Returns: a list of instances of libvirt.virDomain
.
-
find_domain_by_name
(name)¶ Find domain by name.
Parameters: name – the domain name. Returns: an instance of libvirt.virDomain
.
-
static
handler
(ctxt, err)¶ This overwrites the libvirt default error handler, in order to avoid unwanted messages from libvirt exceptions being sent to stdout.
-
-
class
avocado.core.virt.
VM
(hypervisor, domain)¶ Bases:
object
The Virtual Machine handler class.
Creates an instance of VM class.
Parameters: - hypervisor – an instance of Hypervisor.
- domain – an instance of libvirt.virDomain.
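A hedged sketch of the classes above, assuming a reachable libvirt daemon and an existing domain (the URI and domain name are illustrative):
from avocado.core.virt import Hypervisor, VM

hyp = Hypervisor('qemu:///system')
hyp.connect()
domain = hyp.find_domain_by_name('my-test-vm')  # hypothetical domain
vm = VM(hyp, domain)
if not vm.is_active:
    vm.start()
print(vm.ip_address(timeout=30))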
-
create_snapshot
(name=None)¶ Creates a snapshot of kind ‘system checkpoint’.
-
delete_snapshot
()¶ Delete the current snapshot.
-
ip_address
(timeout=30)¶ Returns the domain IP address consulting qemu-guest-agent through libvirt.
Returns: either the IP address or None if not found Return type: str or None
-
is_active
¶ Property to check if VM is active.
Returns: if VM is active. Return type: Boolean
-
name
¶ Property with the name of VM.
Returns: the name of VM.
-
reboot
()¶ Reboot VM.
-
reset
()¶ Reset VM.
-
restore_snapshot
()¶ Revert to previous snapshot and delete the snapshot point.
-
resume
()¶ Resume VM.
-
revert_snapshot
()¶ Revert to previous snapshot.
-
setup_login
(hostname, username, password=None)¶ Setup login on VM.
Parameters: - hostname – the hostname.
- username – the username.
- password – the password.
-
shutdown
()¶ Shutdown VM.
-
snapshots
¶
-
start
()¶ Start VM.
-
state
¶ Property with the state of VM.
Returns: current state name.
-
stop
()¶ Stop VM.
-
suspend
()¶ Suspend VM.
-
exception
avocado.core.virt.
VirtError
¶ Bases:
exceptions.Exception
Generic exception class to propagate underlying errors to the caller.
Module contents¶
Extension (plugin) APIs¶
Extension APIs that may be of interest to plugin writers.
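For illustration, a hedged sketch of a new subcommand plugin written against the CLICmd interface (the command name is hypothetical, and the setuptools entry-point registration is not shown):
from avocado.core.plugin_interfaces import CLICmd


class Hello(CLICmd):

    name = 'hello'
    description = 'Prints a greeting and exits'

    def configure(self, parser):
        # The base class is expected to add and return the subparser
        parser = super(Hello, self).configure(parser)
        parser.add_argument('--name', default='world')

    def run(self, args):
        print('hello, %s' % args.name)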
Submodules¶
avocado.plugins.config module¶
avocado.plugins.diff module¶
Job Diff
-
class
avocado.plugins.diff.
Diff
¶ Bases:
avocado.core.plugin_interfaces.CLICmd
Implements the avocado ‘diff’ subcommand
-
configure
(parser)¶ Add the subparser for the diff action.
Parameters: parser – Main test runner parser.
-
description
= 'Shows the difference between 2 jobs.'¶
-
name
= 'diff'¶
-
run
(args)¶
-
avocado.plugins.distro module¶
-
avocado.plugins.distro.
DISTRO_PKG_INFO_LOADERS
= {'deb': <class 'avocado.plugins.distro.DistroPkgInfoLoaderDeb'>, 'rpm': <class 'avocado.plugins.distro.DistroPkgInfoLoaderRpm'>}¶ the type of distro that will determine what loader will be used
-
class
avocado.plugins.distro.
Distro
¶ Bases:
avocado.core.plugin_interfaces.CLICmd
Implements the avocado ‘distro’ subcommand
-
configure
(parser)¶
-
description
= 'Shows detected Linux distribution'¶
-
get_output_file_name
(args)¶ Adapt the output file name based on given args
It’s not uncommon for some distros not to have a release number, so the output file name is adapted accordingly.
-
name
= 'distro'¶
-
run
(args)¶
-
-
class
avocado.plugins.distro.
DistroDef
(name, version, release, arch)¶ Bases:
avocado.utils.distro.LinuxDistro
More complete information on a given Linux Distribution
Can and should include all the software packages that ship with the distro, so that an analysis can be made on whether a given package that may be responsible for a regression is part of the official set or an external package.
-
software_packages
= None¶ All the software packages that ship with this Linux distro
-
software_packages_type
= None¶ A simple text that denotes the software type that makes this distro
-
to_dict
()¶ Returns the representation as a dictionary
-
to_json
()¶ Returns the representation of the distro as JSON
-
-
class
avocado.plugins.distro.
DistroPkgInfoLoader
(path)¶ Bases:
object
Loads information from the distro installation tree into a DistroDef
It will go through all package files and inspect them with specific package utilities, collecting the necessary information.
-
get_package_info
(path)¶ Returns information about a given software package
Should be implemented by classes inheriting from DistroPkgInfoLoader.
Parameters: path (str) – path to the software package file
Returns: tuple with name, version, release, checksum and arch
Return type: tuple
-
get_packages_info
()¶ This method will go through each file, checking if it’s a valid software package file by calling is_software_package() and, if so, calling load_package_info().
-
is_software_package
(path)¶ Determines if the given file at path is a software package
This check will be used to determine if load_package_info() will be called for the file at path. This method should be implemented by classes inheriting from DistroPkgInfoLoader and could be as simple as checking for a file suffix.
Parameters: path (str) – path to the software package file
Returns: True if the file is a valid software package, False otherwise
Return type: bool
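A hedged sketch of a subclass implementing both methods (the file suffix and the returned values are placeholders):
from avocado.plugins.distro import DistroPkgInfoLoader


class DistroPkgInfoLoaderTxt(DistroPkgInfoLoader):

    def is_software_package(self, path):
        # As simple as checking for a file suffix, as suggested above
        return path.endswith('.txt')

    def get_package_info(self, path):
        # name, version, release, checksum, arch (placeholder values)
        return ('dummy', '1.0', '1', 'deadbeef', 'noarch')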
-
-
class
avocado.plugins.distro.
DistroPkgInfoLoaderDeb
(path)¶ Bases:
avocado.plugins.distro.DistroPkgInfoLoader
Loads package information for DEB files
-
get_package_info
(path)¶
-
is_software_package
(path)¶
-
-
class
avocado.plugins.distro.
DistroPkgInfoLoaderRpm
(path)¶ Bases:
avocado.plugins.distro.DistroPkgInfoLoader
Loads package information for RPM files
-
get_package_info
(path)¶
-
is_software_package
(path)¶ The system needs to be able to run the rpm binary in order to fetch information on package files. If the rpm binary is not available, the rpm files found are simply ignored.
-
-
class
avocado.plugins.distro.
SoftwarePackage
(name, version, release, checksum, arch)¶ Bases:
object
Definition of relevant information on a software package
-
to_dict
()¶ Returns the representation as a dictionary
-
to_json
()¶ Returns the representation of the distro as JSON
-
-
avocado.plugins.distro.
load_distro
(path)¶ Loads the distro from an external file
Parameters: path (str) – the location for the input file Returns: a dict with the distro definition data Return type: dict
-
avocado.plugins.distro.
load_from_tree
(name, version, release, arch, package_type, path)¶ Loads a DistroDef from an installable tree
Parameters: - name (str) – a short name that precisely distinguishes this Linux Distribution among all others.
- version (str) – the major version of the distribution. Usually this is a single number that denotes a large development cycle and support lifecycle.
- release (str) – the release or minor version of the distribution. Usually this is also a single number, often omitted or starting with a 0 when the major version is initially released. It’s often associated with a shorter development cycle that contains an incremental collection of improvements and fixes.
- arch (str) – the main target for this Linux Distribution. It’s common for some architectures to ship with packages for previous and still compatible architectures, as is the case with the Intel/AMD 64-bit architecture, which supports 32-bit code. In cases like this, this should be set to the 64-bit architecture name.
- package_type (str) – one of the available package info loader types
- path (str) – top level directory of the distro installation tree files
avocado.plugins.docker module¶
Run the job inside a docker container.
-
class
avocado.plugins.docker.
Docker
¶ Bases:
avocado.core.plugin_interfaces.CLI
Run the job inside a docker container
-
configure
(parser)¶
-
description
= 'Run tests inside docker container'¶
-
name
= 'docker'¶
-
run
(args)¶
-
-
class
avocado.plugins.docker.
DockerRemoter
(dkrcmd, image, options, name=None)¶ Bases:
object
Remoter object, similar to avocado.core.remoter, which implements a subset of its commands for a docker container.
Executes the docker container and attaches to it.
Parameters: - dkrcmd – The base docker binary (or command)
- image – docker image to be used in this instance
-
cleanup
()¶ Stop the container and remove it
-
close
()¶ Safely postprocess the container
Note: It won’t remove the container, you need to do it manually
-
get_cid
()¶ Return this remoter’s container ID
-
makedir
(remote_path)¶ Create a directory on the container
Warning: No other process must be running in the foreground.
Parameters: remote_path – the remote path to create.
-
receive_files
(local_path, remote_path)¶ Receive files from the container
-
run
(command, ignore_status=False, quiet=None, timeout=60)¶ Run command inside the container
-
send_files
(local_path, remote_path)¶ Send files to the container
-
class
avocado.plugins.docker.
DockerTestRunner
(job, test_result)¶ Bases:
avocado.core.remote.runner.RemoteTestRunner
Test runner which runs the job inside a docker container
-
remote_test_dir
= '/avocado_remote_test_dir'¶
-
setup
()¶
-
tear_down
()¶
-
avocado.plugins.envkeep module¶
avocado.plugins.exec_path module¶
Libexec PATHs modifier
-
class
avocado.plugins.exec_path.
ExecPath
¶ Bases:
avocado.core.plugin_interfaces.CLICmd
Implements the avocado ‘exec-path’ subcommand
-
description
= 'Returns path to avocado bash libraries and exits.'¶
-
name
= 'exec-path'¶
-
run
(args)¶ Print libexec path and finish
Parameters: args – Command line args received from the run subparser.
-
avocado.plugins.gdb module¶
Run tests with GDB goodies enabled.
avocado.plugins.jobscripts module¶
-
class
avocado.plugins.jobscripts.
JobScripts
¶ Bases:
avocado.core.plugin_interfaces.JobPre
,avocado.core.plugin_interfaces.JobPost
-
description
= 'Runs scripts before/after the job is run'¶
-
name
= 'jobscripts'¶
-
post
(job)¶
-
pre
(job)¶
-
avocado.plugins.journal module¶
Journal Plugin
-
class
avocado.plugins.journal.
Journal
¶ Bases:
avocado.core.plugin_interfaces.CLI
Test journal
-
configure
(parser)¶
-
description
= "Journal options for the 'run' subcommand"¶
-
name
= 'journal'¶
-
run
(args)¶
-
-
class
avocado.plugins.journal.
ResultJournal
(job=None)¶ Bases:
avocado.core.result.Result
Test Result Journal class.
This class keeps a log of the test updates: started running, finished, etc. This information can be forwarded live to an avocado server and provide feedback to users from a central place.
Creates an instance of ResultJournal.
Parameters: job – an instance of avocado.core.job.Job
.-
end_test
(state)¶
-
end_tests
()¶
-
lazy_init_journal
(state)¶
-
start_test
(state)¶
-
avocado.plugins.jsonresult module¶
JSON output module.
-
class
avocado.plugins.jsonresult.
JSONCLI
¶ Bases:
avocado.core.plugin_interfaces.CLI
JSON output
-
configure
(parser)¶
-
description
= "JSON output options for 'run' command"¶
-
name
= 'json'¶
-
run
(args)¶
-
-
class
avocado.plugins.jsonresult.
JSONResult
¶ Bases:
avocado.core.plugin_interfaces.Result
-
description
= 'JSON result support'¶
-
name
= 'json'¶
-
render
(result, job)¶
-
avocado.plugins.list module¶
avocado.plugins.multiplex module¶
avocado.plugins.plugins module¶
Plugins information plugin
avocado.plugins.remote module¶
Run tests on a remote machine.
avocado.plugins.replay module¶
-
class
avocado.plugins.replay.
Replay
¶ Bases:
avocado.core.plugin_interfaces.CLI
Replay a job
-
configure
(parser)¶
-
description
= "Replay options for 'run' subcommand"¶
-
load_config
(resultsdir)¶
-
name
= 'replay'¶
-
run
(args)¶
-
-
avocado.plugins.replay.
ignore_call
(*args, **kwargs)¶ Accepts anything and does nothing
avocado.plugins.run module¶
Base Test Runner Plugins.
-
class
avocado.plugins.run.
Run
¶ Bases:
avocado.core.plugin_interfaces.CLICmd
Implements the avocado ‘run’ subcommand
-
configure
(parser)¶ Add the subparser for the run action.
Parameters: parser – Main test runner parser.
-
description
= 'Runs one or more tests (native test, test alias, binary or script)'¶
-
name
= 'run'¶
-
run
(args)¶ Run test modules or simple tests.
Parameters: args – Command line args received from the run subparser.
-
avocado.plugins.sysinfo module¶
System information plugin
avocado.plugins.tap module¶
TAP output module.
-
class
avocado.plugins.tap.
TAP
¶ Bases:
avocado.core.plugin_interfaces.CLI
TAP Test Anything Protocol output avocado plugin
-
configure
(parser)¶
-
description
= 'TAP - Test Anything Protocol results'¶
-
name
= 'TAP'¶
-
run
(args)¶
-
-
class
avocado.plugins.tap.
TAPResult
(job, force_output_file=None)¶ Bases:
avocado.core.result.Result
TAP output class
-
end_test
(state)¶ Log the test status and details
-
end_tests
()¶
-
start_tests
()¶ Log the test plan
-
avocado.plugins.vm module¶
Run tests on Virtual Machine.
avocado.plugins.wrapper module¶
avocado.plugins.xunit module¶
xUnit module.
-
class
avocado.plugins.xunit.
XUnitCLI
¶ Bases:
avocado.core.plugin_interfaces.CLI
xUnit output
-
configure
(parser)¶
-
description
= 'xUnit output options'¶
-
name
= 'xunit'¶
-
run
(args)¶
-
-
class
avocado.plugins.xunit.
XUnitResult
¶ Bases:
avocado.core.plugin_interfaces.Result
-
PRINTABLE
= 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~\n\r '¶
-
UNKNOWN
= '<unknown>'¶
-
description
= 'XUnit result support'¶
-
name
= 'xunit'¶
-
render
(result, job)¶
-
avocado.plugins.yaml_to_mux module¶
Multiplexer plugin to parse yaml files to params
-
class
avocado.plugins.yaml_to_mux.
ListOfNodeObjects
¶ Bases:
list
Used to mark a list as a list of objects from which a node is going to be created
-
class
avocado.plugins.yaml_to_mux.
Value
¶ Bases:
tuple
Used to mark values to simplify checking for node vs. value
-
class
avocado.plugins.yaml_to_mux.
YamlToMux
¶ Bases:
avocado.core.plugin_interfaces.CLI
Registers a callback to inject params from a yaml file into the multiplexer tree
-
configure
(parser)¶ Configures “run” and “multiplex” subparsers
-
description
= "YamlToMux options for the 'run' subcommand"¶
-
name
= 'yaml_to_mux'¶
-
run
(args)¶
-
-
avocado.plugins.yaml_to_mux.
create_from_yaml
(paths, debug=False)¶ Create a tree structure from yaml-like files.
Parameters: paths – file(s) to be processed
Raises: SyntaxError – when a yaml file is corrupted
Returns: root of the created tree structure
Module contents¶
Avocado Release Notes¶
Release Notes¶
The following pages summarize what is new in Avocado:
42.0 Stranger Things¶
The Avocado team is proud to present another release: Avocado version 42.0, aka, “Stranger Things”, is now available!
The major changes introduced on this version are listed below, roughly categorized into major topics and intended audience:
Users/Test Writers¶
- Multiplexer: it now defines an API to inject and merge data into the multiplexer tree. With that, it’s now possible to come up with various mechanisms to feed data into the Multiplexer. The standard way to do so continues to be by YAML files, which is now implemented in the
avocado.plugins.yaml_to_mux
plugin module. The –multiplex option, which used to load YAML files into the multiplexer, is now deprecated in favor of –mux-yaml.- Docker improvements: Avocado will now name the container according to the job it’s running. Also, it now allows generic Docker options to be passed by using –docker-options on the Avocado command line.
- It’s now possible to disable plugins by using the configuration file. This is documented at Disabling a plugin.
avocado.utils.iso9660
: this utils module received a lot of TLC and it now provides a more complete standard API across all backend implementations. Previously, only the mount based backend implementation would support the mnt_dir API, which would point to a filesystem location where the contents of the ISO would be available. Now all other backends can support that API, given that requirements (such as having the right privileges) are met.- Users of the
avocado.utils.process
module will now be able to access the process ID in theavocado.utils.process.CmdResult
- Users of the
avocado.utils.build
module will find an improved version ofavocado.utils.build.make()
which will now return the make process exit status code.- Users of the virtual machine plugin (
--vm-domain
and related options) will now receive better messages when errors occur.
Documentation¶
- Added section on how to use custom Docker images with user’s own version of Avocado (or anything else for that matter).
- Added section on how to install Avocado using standard OpenSUSE packages.
- Added section on
unittest
compatibility limitations and caveats.- A link to Scylla Clusters tests has been added to the list of Avocado test repos.
- Added section on how to install Avocado by using standard Python packages.
Developers¶
- The make develop target will now activate in-tree optional plugins, such as the HTML report plugin.
- The selftests/run script, usually called as part of make check, will now fail at the first failure (by default). This is controlled by the SELF_CHECK_CONTINUOUS environment variable.
- The make check target can also run tests in parallel, which can be enabled by setting the environment variable AVOCADO_PARALLEL_CHECK.
Bugfixes¶
- An issue where KeyboardInterrupts would be caught by the journalctl run as part of sysinfo was fixed with a workaround. The root cause appears to be located in the
avocado.utils.process
library, and a task is already on track to verify that possible bug.- avocado.utils.git
module had an issue where git executions would generate content that would erroneously be considered as part of the output check mechanism.
Internal improvements¶
- Selftests are now run while building Enterprise Linux 6 packages. Since most Avocado developers use newer platforms for development, this should make Avocado more reliable for users of those older platforms.
For more information, please check out the complete Avocado changelog.
Release Meeting¶
The Avocado release meetings are now open to the community via Hangouts on Air. The meetings are recorded and made available on the Avocado Test Framework YouTube channel.
For this release, you can watch the meeting on this link.
41.0 Outlander¶
The Avocado team is proud to present another release: Avocado version 41.0, aka, “Outlander”, is now available!
The major changes introduced on this version are listed below, roughly categorized into major topics and intended audience:
Users/Test Writers¶
- Multiplex: remove the -s (system-wide) shortcut to avoid confusion with silent from main apps.
- New
avocado.utils.linux_modules.check_kernel_config()
method, with which users can check whether a kernel configuration option is not set, built as a module, or built-in.- Show a link to the file which failed to be processed by sysinfo.
- New path key type for settings that auto-expand tilde notation, that is, when using
avocado.core.settings.Settings.get_value()
you can get this special value treatment.- The automatic VM IP detection that kicks in when one uses –vm-domain without a matching –vm-hostname, now uses a more reliable method (libvirt/qemu-guest-agent query). On the other hand, the QEMU guest agent is now required if you intend to omit the VM IP/hostname.
- Warn users when sysinfo configuration files are not present, and consequently no sysinfo is going to be collected.
- Set LC_ALL=C by default on sysinfo collection to simplify avocado diff comparison between different machines. It can be tweaked in the config file (locale option under sysinfo.collect).
- Remove deprecated option –multiplex-files.
- List result plugins (JSON, XUnit, HTML) in avocado plugins command output.
Documentation¶
- Mention to the community maintained repositories.
- Add GIT workflow to the contribution guide.
Developers¶
- New make check-long target to run long tests. For example, the new FileLockTest.
- New make variables target to display Makefile variables.
- Plugins: add optional plugins directory optional_plugins. This also adds all directories to be found under optional_plugins to the list of candidate plugins when running make clean or make link.
Bugfixes¶
- Fix undefined name error
avocado.core.remote.runner
.- Ignore \r (carriage return) when checking for avocado in remote executions.
- Skip file if UnicodeDecodeError is raised when collecting sysinfo.
- Sysinfo: respect package collection on/off configuration.
- Use -y in lvcreate to ignore warnings in
avocado.utils.lv_utils
.- Fix crash in
avocado.core.tree
when printing non-string values.- setup.py: fix the virtualenv detection so readthedocs.org can properly probe Avocado’s version.
Internal improvements¶
- Cleanup runner->multiplexer API
- Replay re-factoring, renamed avocado.core.replay to
avocado.core.jobdata
.- Partition utility class defaults to ext2. We documented that and reinforced it in the accompanying unittests.
- Unittests for
avocado.utils.partition
now have more specific checks for the conditions necessary to run the Partition tests (sudo, mkfs.ext2 binary).- Several Makefile improvements.
For more information, please check out the complete Avocado changelog.
Release Meeting¶
The Avocado release meetings are now open to the community via Hangouts on Air. The meetings are recorded and made available on the Avocado Test Framework YouTube channel.
For this release, you can watch the meeting on this link.
40.0 Dr Who¶
The Avocado team is proud to present another release: Avocado version 40.0, aka, “Dr Who”, is now available!
The major changes introduced on this version are listed below.
- The introduction of a tool that generates a diff-like report of two jobs. For more information on this feature, please check out its own documentation.
- The
avocado.utils.process
library has been enhanced by adding theavocado.utils.process.SubProcess.get_pid()
method, and also by logging the command name, status and execution time when verbose mode is set. - The introduction of a rr based wrapper.
With such a wrapper, it’s possible to transparently record the
process state (when executed via the
avocado.utils.process
APIs), and deterministically replay them later. - The coredump generation contrib scripts will check if the user running Avocado is privileged to actually generate those dumps. This means that it won’t give errors in the UI about failures on pre/post scripts, but will record that in the appropriate job log.
- BUGFIX: The
--remote-no-copy
command line option, when added to the--remote-*
options that actually trigger the remote execution of tests, will now skip the local test discovery altogether. - BUGFIX: The use of the asset fetcher by multiple avocado executions
could result in a race condition. This is now fixed, backed by a
file based utility lock library:
avocado.utils.filelock
. - BUGFIX: The asset fetcher will now properly check the hash on
file:
based URLs. - BUGFIX: A busy loop in the
avocado.utils.process
library that was reported by our users was promptly fixed. - BUGFIX: Attempts to install Avocado on bare bones environments, such
as virtualenvs, won’t fail anymore due to dependencies required at
setup.py
execution time. Of course Avocado still requires some external Python libraries, but these will only be required after installation. This should let users topip install avocado-framework
successfully.
For more information, please check out the complete Avocado changelog.
Release Meeting¶
The Avocado release meetings are now open to the community via Hangouts on Air. The meetings are recorded and made available on the Avocado Test Framework YouTube channel.
For this release, you can watch the meeting on this link.
39.0 The Hateful Eight¶
The Avocado team is proud to present another incremental release: version 39.0, aka, “The Hateful Eight”, is now available!
The major changes introduced on this version are listed below.
Support for running tests in Docker containers. In addition to running tests on a (libvirt based) Virtual Machine or on a remote host, you can now run tests in transient Docker containers. The usage is as simple as:
$ avocado run mytests.py --docker ldoktor/fedora-avocado
The container will be started, using
ldoktor/fedora-avocado
as the image. This image contains a Fedora based system with Avocado already installed, and it’s provided at the official Docker hub.
Introduction of the “Fail Fast” feature.
By running a job with the
--failfast
flag, the job will be interrupted after the very first test failure. If your job only makes sense if it’s a complete PASS, this feature can save you a lot of time.Avocado supports replaying previous jobs, selected by using their Job IDs. Now, it’s also possible to use the special keyword
latest
, which will cause Avocado to rerun the very last job.Python’s standard signal handling is restored for SIGPIPE, and thus for all tests running on Avocado.
In previous releases, Avocado introduced a change that set the default handler for SIGPIPE, which caused the application to be terminated. This seemed to be the right approach when testing how the Avocado app would behave on broken pipes on the command line, but it introduced side effects to a lot of Python code. Instead of exceptions, the affected Python code would receive the signal itself.
This is now reverted to the Python standard, and the signal behavior of Python based tests running on Avocado should not surprise anyone.
The project release notes are now part of the official documentation. That means that users can quickly find when a given change was introduced.
Together with those changes listed, a total of 38 changes made into this release. For more information, please check out the complete Avocado changelog.
Release Meeting¶
The Avocado release meetings are now open to the community via Hangouts on Air. The meetings are recorded and made available on the Avocado Test Framework YouTube channel.
For this release, you can watch the meeting on this link.
38.0 Love, Ken¶
You guessed it right: this is another Avocado release announcement: release 38.0, aka “Love, Ken”, is now out!
Another development cycle has just finished, and our community will receive this new release containing a nice assortment of bug fixes and new features.
The download of assets in tests now allow for an expiration time. This means that tests that need to download any kind of external asset, say a tarball, can now automatically benefit from the download cache, but can also keep receiving new versions automatically.
Suppose your test uses an asset named myproject-daily.tar.bz2, and that your test runs 50 times a day. By setting the expire time to 1d (1 day), your test will benefit from the cache on most runs, but will still fetch the new version once the 24 hours from the first download have passed.
For more information, please check out the documentation on the expire parameter to the fetch_asset() method.
Environment variables can be propagated into tests running on remote systems. It’s a known fact that one way to influence application behavior, including test, is to set environment variables. A command line such as:
$ MYAPP_DEBUG=1 avocado run myapp_test.py
Will work as expected on a local system. But Avocado also allows running tests on remote machines, and up until now, it has been lacking a way to propagate environment variables to the remote system.
Now, you can use:
$ MYAPP_DEBUG=1 avocado run --env-keep MYAPP_DEBUG \
    --remote-host test-machine myapp_test.py
The plugin interfaces have been moved into the avocado.core.plugin_interfaces module. This means that plugin writers now have to import the interface definitions from this namespace, for example:
from avocado.core.plugin_interfaces import CLICmd

class MyCommand(CLICmd):
    ...
This is a way to keep ourselves honest, and say that there’s no difference from plugin interfaces to Avocado’s core implementation, that is, they may change at will. For greater stability, one should be tracking the LTS releases.
Also, it effectively makes all plugins the same, whether they’re implemented and shipped as part of Avocado, or as part of external projects.
A contrib script for running kvm-unit-tests. As some people are aware, Avocado has indeed a close relation to virtualization testing. Avocado-VT is one obvious example, but there are other virtualization related test suites that Avocado can run.
This release adds a contrib script that will fetch, compile and run kvm-unit-tests using Avocado’s external runner feature. This gives results at a better granularity than the support that exists in Avocado-VT, which gives only a single PASS/FAIL for the entire test suite execution.
For more information, please check out the Avocado changelog.
Avocado-VT¶
Also, while we focused on Avocado, let’s not forget that Avocado-VT maintains its own fast pace of incoming niceties.
s390 support: Avocado-VT is breaking into new grounds, and now has support for the s390 architecture. Fedora 23 for s390 has been added as a valid guest OS, and s390-virtio has been added as a new machine type.
Avocado-VT is now more resilient against failures to persist its environment file, and will only give warnings instead of errors when it fails to save it.
An improved implementation of the “job lock” plugin, which prevents multiple Avocado jobs with VT tests to run simultaneously. Since there’s no finer grained resource locking in Avocado-VT, this is a global lock that will prevent issues such as image corruption when two jobs are run at the same time.
This new implementation will now check if existing lock files are stale, that is, leftovers from a previous run. If the processes associated with these files are not present, the stale lock files are deleted, removing the need to clean them up manually. It also outputs better debugging information when it fails to acquire the lock.
The complete list of changes to Avocado-VT are available on Avocado-VT changelog.
Miscellaneous¶
While not officially part of this release, this development cycle saw the introduction of new tests on our avocado-misc-tests. Go check it out!
Finally, since Avocado and Avocado-VT are not newly born anymore, we decided to update information mentioning KVM-Autotest, virt-test and so on around the web. This will hopefully redirect new users to the Avocado community and avoid confusion.
Happy hacking and testing!
Sprint Theme: https://trello.com/c/Y6IIFXBS/732-sprint-theme
37.0 Trabant vs. South America¶
This is another proud announcement: Avocado release 37.0, aka “Trabant vs. South America”, is now out!
This release is yet another collection of bug fixes and some new features. Along with the same changes that made the 36.0lts release[1], this brings the following additional changes:
- TAP[2] version 12 support, bringing better integration with other test tools that accept this streaming format as input.
- Added niceties on Avocado’s utility libraries “build” and “kernel”, such as automatic parallelism and resource caching. It makes tests such as “linuxbuild.py” (and your similar tests) run up to 10 times faster.
- Fixed an issue where Avocado could leave processes behind after the test was finished.
- Fixed a bug where the configuration for tests data directory would be ignored.
- Fixed a bug where SIMPLE tests would not properly exit with WARN status.
For a complete list of changes please check the Avocado changelog[3].
For Avocado-VT, please check the full Avocado-VT changelog[4].
Happy hacking and testing!
36.0 LTS¶
This is a very proud announcement: Avocado release 36.0lts, our very first “Long Term Stability” release, is now out!
LTS in a nutshell¶
This release marks the beginning of a special cycle that will last for 18 months. Avocado usage in production environments should favor the use of this LTS release, instead of non-LTS releases.
Bug fixes will be provided on the “36lts”[1] branch until, at least, September 2017. Minor releases, such as “36.1lts”, “36.2lts” and so on, will be announced from time to time, incorporating those stability related improvements.
Keep in mind that no new feature will be added. For more information, please read the “Avocado Long Term Stability” RFC[2].
Changes from 35.0¶
As mentioned in the release notes for the previous release (35.0), only bug fixes and other stability related changes would be added to what is now 36.0lts. For the complete list of changes, please check the GIT repo change log[3].
Install avocado¶
The Avocado LTS packages are available on a separate repository, named “avocado-lts”. These repositories are available for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Updated ”.repo” files are available on the usual locations:
- https://repos-avocadoproject.rhcloud.com/static/avocado-fedora.repo
- https://repos-avocadoproject.rhcloud.com/static/avocado-el.repo
Those repo files now contain definitions for both the “LTS” and regular repositories. Users interested in the LTS packages, should disable the regular repositories and enable the “avocado-lts” repo.
Instructions are available in our documentation on how to install either with packages or from source[4].
Happy hacking and testing!
35.0 Mr. Robot¶
This is another proud announcement: Avocado release 35.0, aka “Mr. Robot”, is now out!
This release, while a “regular” release, will also serve as a beta for our first “long term stability” (aka “lts”) release. That means that the next release will be version “36.0lts” and will receive only bug fixes and minor improvements. So, expect release 35.0 to be pretty much like “36.0lts” feature-wise. New features will make it into the “37.0” release, to be released after “36.0lts”. Read more about the details on the specific RFC[9].
The main changes in Avocado for this release are:
- A big round of fixes on machine readable output formats, such as xunit (aka JUnit) and JSON. The xunit output, for instance, now includes tests with schema checking. This should make sure interoperability is even better on this release.
- Much more robust handling of test references, aka test URLs. Avocado now properly handles very long test references, and also test references with non-ascii characters.
- The avocado command line application now provides richer exit status[1]. If your application or custom script depends on the avocado exit status code, you should be fine as avocado still returns zero for success and non-zero for errors. On error conditions, though, the exit status code are richer and made of combinable (ORable) codes. This way it’s possible to detect that, say, both a test failure and a job timeout occurred in a single execution.
- [SECURITY RELATED] The remote execution of tests (including in Virtual Machines) now allows for proper checks of host keys[2]. Without these checks, avocado is susceptible to a man-in-the-middle attack, by connecting and sending credentials to the wrong machine. This check is disabled by default, because users depend on this behavior when using machines without any prior knowledge such as cloud based virtual machines. Also, a bug in the underlying SSH library may prevent existing keys to be used if these are in ECDSA format[3]. There’s an automated check in place to check for the resolution of the third party library bug. Expect this feature to be enabled by default in the upcoming releases.
- Pre/Post Job hooks. Avocado now defines a proper interface for extension/plugin writers to execute actions while a Job is running. Both Pre and Post hooks have access to the Job state (actually, the complete Job instance). Pre job hooks are called before tests are run, and post job hooks are called at the very end of the job (after tests would have usually finished executing).
- Pre/Post job scripts[4]. As a feature built on top of the Pre/Post job hooks described earlier, it’s now possible to put executable scripts in a configurable location, such as /etc/avocado/scripts/job/pre.d and have them called by Avocado before the execution of tests. The executed scripts will receive some information about the job via environment variables[5].
- The implementation of proper Test-IDs[6] in the test result directory.
Also, while not everything is (yet) translated into code, this release saw various and major RFCs, which are definitely shaping the future of Avocado. Among those:
- Introduce proper test IDs[6]
- Pre/Post test hooks[7]
- Multi-stream tests[8]
- Avocado maintainability and integration with avocado-vt[9]
- Improvements to job status (completely implemented)[10]
For a complete list of changes please check the Avocado changelog[11]. For Avocado-VT, please check the full Avocado-VT changelog[12].
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[13].
Updated RPM packages are available in the project repos for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Packages¶
As a heads up, we still package the latest version of the various Avocado sub projects, such as the very popular Avocado-VT and the pretty much experimental Avocado-Virt and Avocado-Server projects.
For the upcoming releases, there will be changes in our package offers, with a greater focus on long term stability packages for Avocado. Other packages may still be offered as a convenience, or may see a change of ownership. All in the best interest of our users. If you have any concerns or questions, please let us know.
Happy hacking and testing!
0.34.0 The Hour of the Star¶
Hello to all test enthusiasts out there, especially to those that cherish, care or are just keeping an eye on the greenest test framework there is: Avocado release 0.34.0, aka The Hour of the Star, is now out!
The main changes in Avocado for this release are:
- A complete overhaul of the logging and output implementation. This means that all Avocado output uses the standard Python logging library making it very consistent and easy to understand [1].
- Based on the logging and output overhaul, the command line test runner is now very flexible with its output. A user can choose exactly what should be output. Examples include application output only, test output only, both application and test output or any other combination of the builtin streams. The user visible command line option that controls this behavior is –show, which is an application level option, that is, it’s available to all avocado commands. [2]
- Besides the builtin streams, test writers can use the standard Python logging API to create new streams. These streams can be shown on the command line as mentioned before, or persisted automatically in the job results by means of the –store-logging-stream command line option. [3][4]
- The new avocado.core.safeloader module intends to make it easier to write new test loaders for various types of Python code. [5][6]
- Based on the new avocado.core.safeloader module, a contrib script called avocado-find-unittests, returns the name of unittest.TestCase based tests found on a given number of Python source code files. [7]
- Avocado is now able to run its own selftest suite. By leveraging the avocado-find-unittests contrib script and the External Runner [8] feature. A Makefile target is available, allowing developers to run make selfcheck to have the selftest suite run by Avocado. [9]
- Partial Python 3 support. A number of changes were introduced that allow concurrent Python 2 and 3 support on the same code base. Even though the support for Python 3 is still incomplete, the avocado command line application can already run some limited commands at this point.
- Asset fetcher utility library. This new utility library, and INSTRUMENTED test feature, allows users to transparently request external assets to be used in tests, having them cached for later use. [10]
- Further cleanups in the public namespace of the avocado Test class.
- [BUG FIX] Input from the local system was being passed to remote systems when running tests with either in remote systems or VMs.
- [BUG FIX] HTML report stability improvements, including better Unicode handling and support for other versions of the Pystache library.
- [BUG FIX] Atomic updates of the “latest” job symlink, allows for more reliable user experiences when running multiple parallel jobs.
- [BUG FIX] The avocado.core.data_dir module now dynamically checks the configuration system when deciding where the data directory should be located. This allows for later updates, such as when giving one extra –config parameter in the command line, to be applied consistently throughout the framework and test code.
- [MAINTENANCE] The CI jobs now run full checks on each commit on any proposed PR, not only on its topmost commit. This gives higher confidence that a commit in a series is not causing breakage that a later commit then inadvertently fixes.
For a complete list of changes please check the Avocado changelog[11].
For Avocado-VT, please check the full Avocado-VT changelog[12].
Avocado Videos¶
As yet another way to let users know about what’s available in Avocado, we’re introducing short videos with very targeted content on our very own YouTube channel: https://www.youtube.com/channel/UCP4xob52XwRad0bU_8V28rQ
The first video available demonstrates a couple of new features related to the advanced logging mechanisms, introduced on this release: https://www.youtube.com/watch?v=8Ur_p5p6YiQ
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[13].
Updated RPM packages are now available in the project repos for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Happy hacking and testing!
0.33.0 Lemonade Joe or Horse Opera¶
Hello big farmers, backyard gardeners and supermarket reapers! Here is a new announcement for all the appreciators of the most delicious green fruit out there. Avocado release 0.33.0, aka Lemonade Joe or Horse Opera, is now out!
The main changes in Avocado are:
- Minor refinements to the Job Replay feature introduced in the last release.
- More consistent naming for the status of tests that were not executed. Namely, TEST_NA has been renamed to SKIP all across the internal code and user visible places.
- The avocado Test class has received some cleanups and improvements. Some attributes that back the class implementation but are not intended for users to rely upon are now hidden or removed. Additionally, some of the internal attributes have been turned into properly documented properties that users should feel confident to rely upon. Expect more work in this area, resulting in a cleaner and leaner base Test class in the upcoming releases.
- The avocado command line application used to show the main app help message even when help for a specific command was asked for. This has now been fixed.
- It’s now possible to use the avocado process utility API to run privileged commands transparently via SUDO. Just add the “sudo=True” parameter to the API calls and have your system configured to allow that command without asking interactively for a password (see the sketch after this list).
- The software manager and service utility APIs now know about commands that require elevated privileges to be run, such as installing new packages and starting and stopping services (as opposed to querying package and service status). Those utility APIs have been integrated with the new SUDO features, allowing unprivileged users to install packages and start and stop services more easily, given that the system is properly configured to allow that.
- A nasty “fork bomb” situation was fixed. It was caused when a SIMPLE test written in Python used the Avocado’s “main()” function to run itself.
- A bug that prevented SIMPLE tests from being run if Avocado was not given the absolute path of the executable has been fixed.
- A cleaner internal API for registering test result classes has been put into place. If you have written your own test result class, please take a look at avocado.core.result.register_test_result_class.
- Our CI jobs now also do quick “smoke” checks on every new commit proposed on GitHub (not only the PR’s branch HEAD).
- A new utility function, binary_from_shell_cmd, has been added to the process API. It extracts the executable to be run from complex command lines, including ones that set shell variables (also covered in the sketch after this list).
- There have been internal changes to how parameters, including the internally used timeout parameter, are handled by the test loader.
- Test execution can now be PAUSED and RESUMED interactively! By hitting CTRL+Z on the Avocado command line application, all processes of the currently running test are PAUSED. By hitting CTRL+Z again, they are RESUMED.
- The Remote/VM runners have received some refactors, and most of the code that used to live in the result test classes has been moved to the test runner classes. The original goal was to fix a bug, but it turns out test runners were more suitable to house some parts of the needed functionality.
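As a rough illustration of the two process utility features mentioned above, here is a minimal sketch; the package name and command lines are hypothetical, and the SUDO call assumes a system configured to run that command without a password prompt:

from avocado.utils import process

# Runs the command with elevated privileges, transparently via SUDO
result = process.run('dnf install -y some-package', sudo=True)
print(result.exit_status)

# Extracts the actual executable from a complex command line,
# ignoring the shell variable assignment
binary = process.binary_from_shell_cmd('FOO=bar ./my_script --verbose')
print(binary)  # expected: ./my_script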
For a complete list of changes please check the Avocado changelog[1].
For Avocado-VT, there were also many changes, including:
- A new utility function, get_guest_service_status, to get service status in a VM.
- A fix for ssh login timeout error on remote servers.
- Fixes for usb ehci on PowerPC.
- Fixes for the screenshot path when on a remote host.
- Added a libvirt function to create volumes by XML files.
- Added a utility function to get QEMU threads (get_qemu_threads).
And many other changes. Again, for a complete list of changes please check the Avocado-VT changelog[2].
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[3].
Updated RPM packages are now available in the project repos for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Happy hacking and testing!
0.32.0 Road Runner¶
Hi everyone! A new year brings a new Avocado release as the result of Sprint #32: Avocado 0.32.0, aka, “Road Runner”.
The major changes introduced in the previous releases were put to the test during this release cycle, and as a result, we have responded with documentation updates and also many fixes. This release also marks the introduction of a great feature by a new member of our team: Amador Pahim brought us the Job Replay feature! Kudos!!!
So, for Avocado the main changes are:
- Job Replay: users can now easily re-run previous jobs by using the --replay command line option. This will re-run the job with the same tests, configuration and multiplexer variants that were used on the original one. By using --replay-test-status, users can, for example, rerun only the failed tests of the previous job. For more, check our docs[1].
- Documentation changes in response to our users’ feedback, especially regarding the setup.py install/develop requirement.
- Fixed the static detection of test methods when using repeated names.
- Ported some Autotest tests to Avocado, now available on their own repository[2]. More contributions here are very welcome!
For a complete list of changes please check the Avocado changelog[3].
For Avocado-VT, there were also many changes, including:
- Major documentation updates, making them simpler and more in sync with the Avocado documentation style.
- Refactor of the code under the avocado_vt namespace. Previously most of the code lived under the plugin file itself, now it better resembles the structure in Avocado and the plugin files are hopefully easier to grasp.
Again, for a complete list of changes please check the Avocado-VT changelog[4].
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[5].
Updated RPM packages are now available in the project repos for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Happy hacking and testing!
0.31.0 Lucky Luke¶
Hi everyone! Right on time for the holidays, Avocado reaches the end of Sprint 31, and together with it, we’re very happy to announce a brand new release! This version brings stability fixes and improvements to both Avocado and Avocado-VT, some new features and a major redesign of our plugin architecture.
For Avocado the main changes are:
- It’s now possible to register callback functions to be executed when all tests finish, that is, at the end of a particular job[1].
- The software manager utility library received a lot of love on the Debian side of things. If you’re writing tests that install software packages on Debian systems, you may be in for some nice treats and much more reliable results.
- Passing malformed commands (such as ones that cannot be properly split by the standard shlex library) to the process utility library is now better dealt with.
- The test runner code received some refactors and it’s a lot easier to follow. If you want to understand how the Avocado test runner communicates with the processes that run the test themselves, you may have a much better code reading experience now.
- Updated inspektor to the latest and greatest, so that our code is kept as shiny and good looking (and performing) as possible.
- Fixes to the utility GIT library when using a specific local branch name.
- Changes that allow our selftest suite to run properly on virtualenvs.
- Proper installation requirements definition for Python 2.6 systems.
- A completely new plugin architecture[2]. Now we offload all plugin discovery and loading to the Stevedore library. Avocado now defines precise (and simpler) interfaces for plugin writers. Please be aware that the public and documented interfaces for plugins, at the moment, allow adding new commands to the avocado command line app, or adding new options to existing commands. Other functionality can be achieved by “abusing” the core avocado API from within plugins. Our goal is to expand the interfaces so that other areas of the framework can be extended just as easily (a minimal sketch of a new command plugin follows this list).
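To give a feel for the new architecture, here is a minimal sketch of a plugin that adds a new command to the avocado command line app. The command name and class are invented for this example, and registration is assumed to happen via a setuptools entry point in the avocado.plugins.cli.cmd namespace:

from avocado.core.plugin_interfaces import CLICmd


class HelloWorld(CLICmd):

    # Hypothetical command, invoked as "avocado hello"
    name = 'hello'
    description = 'Says hello to the user'

    def run(self, args):
        print('Hello World!')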
For a complete list of changes please check the Avocado changelog[3].
Avocado-VT received just too many fixes and improvements to list. Please refer to the changelog[4] for more information.
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[5].
Within a couple of hours, updated RPM packages will be available in the project repos for Fedora 22, Fedora 23, EPEL 6 and EPEL 7.
Happy hacking and testing!
0.30.0 Jimmy’s Hall¶
Hello! Avocado reaches the end of Sprint 30, and with it, we have a new release available! This version brings stability fixes and improvements to both Avocado and Avocado-vt.
As software doesn’t spring out of life itself, we’d like to acknowledge the major contributions by Lucas (AKA lmr) since the dawn of time for Avocado (and earlier projects like Autotest and virt-test). Although the Avocado team at Red Hat was hit by some changes, we’re already extremely happy to see that this major contributor (and good friend) has not gone too far.
Now back to the more informational part of the release notes. For Avocado the main changes are:
- New RPM repository location, check the docs[1] for instructions on how to install the latest releases
- Makefile rules for building RPMs are now based on mock, to ensure sound dependencies
- Packaged versions are now available for Fedora 22, newly released Fedora 23, EL6 and EL7
- The software manager utility library now supports DNF
- The avocado test runner now supports a dry run mode, which allows users to check how a job would be executed, including the tests that would be found and the parameters that would be passed to them. This is currently complementary to the avocado list command.
- The avocado test runner now supports running simple tests with parameters. This may come in handy for simple use cases in which Avocado wraps a test suite that needs some command line arguments.
Avocado-vt also received many bugfixes[3]. Please refer to the changelog for more information.
Install avocado¶
Instructions are available in our documentation on how to install either with packages or from source[1].
Happy hacking and testing!
0.29.0 Steven Universe¶
Hello! Avocado reaches the end of Sprint 29, and with it, we have a great release coming! This version of avocado once again brings new features and plenty of bugfixes:
- The remote and VM plugins now work with --multiplex, so that you can use both features in conjunction.
- The VM plugin can now auto detect the IP of a given libvirt domain you pass to it, reducing typing and providing an easier and more pleasant experience.
- Temporary directories are now properly cleaned up and no re-creation of directories happens, making avocado more secure.
- Avocado docs are now also tagged by release. You can see the specific documentation of this one at our readthedocs page [1]
- Test introspection/listing is safer: now avocado does not load python modules to introspect their contents. An alternative method, based on the Python AST parser, is used instead, which means avocado will not load potentially badly written or malicious code at listing stage. You can find more about that in our test resolution documentation [2]
- You can now specify low level loaders to avocado to customize your test running experience. You can learn more about that in the Test Discovery documentation [3]
- The usual many bugfixes and polishing commits. You can see the full amount of 96 commits at [4]
For our Avocado VT plugin, the main changes are:
- The vt-bootstrap process is now more robust against users interrupting previous bootstrap attempts
- Some issues with RPM install in RHEL hosts were fixed
- Issues with unsafe temporary directories were fixed, making the VT tests more secure.
- Issues with unattended installs were fixed
- Now the address of the virbr0 bridge is properly auto detected, which means that our unattended installation content server will work out of the box as long as you have a working virbr0 bridge.
Install avocado¶
As usual, go to https://copr.fedoraproject.org/coprs/lmr/Autotest/ to install our YUM/DNF repo and get the latest goodies!
Happy hacking and testing!
0.28.0 Jára Cimrman, The Investigation of the Missing Class Register¶
This release basically polishes avocado, fixing a number of small usability issues and bugs, and debuts avocado-vt as the official virt-test replacement!
Let’s go with the changes from our last release, 0.27.0:
Changes in avocado:
The avocado human output received another stream of tweaks and it’s more compact, while still being informative. Here’s an example:
JOB ID     : f2f5060440bd57cba646c1f223ec8c40d03f539b
JOB LOG    : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/job.log
TESTS      : 1
 (1/1) passtest.py:PassTest.test: PASS (0.00 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML   : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/html/results.html
TIME       : 0.00 s
The unittest system was completely revamped, paving the way for making avocado self-testable! Stay tuned for what we have in store.
Many bugfixes. Check [1] for more details.
Changes in avocado-vt:
- The Spice Test provider has been separated from tp-qemu, and changes reflected in avocado-vt [2].
- A number of bugfixes found by our contributors in the process of moving avocado-vt into the official virt-testing project. Check [3] for more details.
See you in a few weeks for our next release! Happy testing!
The avocado development team
0.27.1¶
Hi guys, we’re up to a new avocado release! It’s basically a bugfix release, with a few usability tweaks.
The avocado human output received some extra tweaks. Here’s how it looks now:
$ avocado run passtest
JOB ID     : f186c729dd234c8fdf4a46f297ff0863684e2955
JOB LOG    : /home/lmr/avocado/job-results/job-2015-08-15T08.09-f186c72/job.log
TESTS      : 1
 (1/1) passtest.py:PassTest.test: PASS (0.00 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
JOB HTML   : /home/lmr/avocado/job-results/job-2015-08-15T08.09-f186c72/html/results.html
TIME       : 0.00 s
Bugfixes. You may refer to [1] for the full list of 58 commits.
Changes in avocado-vt:
- Bugfixes. In particular, a lot of issues related to --vt-type libvirt were fixed and now that backend is fully functional.
News:
We, the people that bring you avocado, will be at LinuxCon North America 2015 (Aug 17-19). If you are attending, please don’t forget to drop by and say hello to yours truly (lmr). And of course, consider attending my presentation on avocado [2].
0.27.0 Terminator: Genisys¶
Hi guys, here I am, announcing yet another avocado release! The most exciting news for this release is that our avocado-vt plugin was merged with the virt-test project. The avocado-vt plugin will be very important for QEMU/KVM/Libvirt developers, so the main avocado received updates to better support the goal of having a good quality avocado-vt.
Changes in avocado:
The avocado human output received some tweaks and it’s more compact, while still being informative. Here’s an example:
JOB ID     : f2f5060440bd57cba646c1f223ec8c40d03f539b
JOB LOG    : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/job.log
JOB HTML   : /home/user/avocado/job-results/job-2015-07-27T17.13-f2f5060/html/results.html
TESTS      : 1
 (1/1) passtest.py:PassTest.test: PASS (0.00 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TIME       : 0.00 s
The avocado test loader was refactored and behaves more consistently in different test loading scenarios.
The utils API received new modules and functions:
- NEW avocado.utils.cpu: APIs related to CPU information on linux boxes [1]
- NEW avocado.utils.git: APIs to clone/update git repos [2]
- NEW avocado.utils.iso9660: Get information about ISO files [3]
- NEW avocado.utils.service: APIs to control services on linux boxes (systemv and systemd) [4]
- NEW avocado.utils.output: APIs that help avocado based CLI programs to display results to users [5]
- UPDATE avocado.utils.download: Add url_download_interactive
- UPDATE avocado.utils.download: Add new params to get_file
- Bugfixes. You may refer to [6] for the full list of 64 commits.
Changes in avocado-vt:
- Merged virt-test into avocado-vt. Basically, the virt-test core library (virttest) replaced most uses of autotest with equivalent avocado API calls, and its code was brought into the virt-test repository [7]. This means, among other things, that you can simply install avocado-vt through RPM and enjoy all the virt tests without having to clone another repository manually to bootstrap your tests. More details about the process will be sent in an e-mail to the avocado and virt-test mailing lists. Please go to [7] for instructions on how to get started with all our new tools.
See you in a couple of weeks for our next release! Happy testing!
0.26.0 The Office¶
Hi guys, I’m here to announce avocado 0.26.0. This release was dedicated to polish aspects of the avocado user experience, such as documentation and behavior.
Changes
- Avocado tests that raise exceptions that don’t inherit from avocado.core.exceptions.TestBaseException will now be marked as ERRORs. This change was made so that avocado has clearly defined test statuses. A new decorator, avocado.fail_on_error, was added to let arbitrary exceptions raise errors, if users need a more relaxed behavior.
- The avocado.Test() utility method skip() can now only be called from inside the setUp() method. This was done because, if we get to the test execution step, by definition the test can’t be skipped anymore. It’s important to keep the concepts clear and well separated if we want to give users a good experience (a minimal sketch is shown after this list).
- More documentation polish and updates. Make sure you check out our documentation website http://avocado-framework.readthedocs.org/en/latest/.
- A number of avocado command line options and help texts were reviewed and updated.
- A new, leaner and mobile friendly version of the avocado website is live. Please check http://avocado-framework.github.io/ for more information.
- We have the first version of the avocado dashboard! avocado dashboard is the initial version of an avocado web interface, and will serve as the frontend to our testing database. You can check out a screenshot here: https://cloud.githubusercontent.com/assets/296807/8536678/dc5da720-242a-11e5-921c-6abd46e0f51e.png
- And the usual bugfixes. You can take a look at the full list of 68 commits here: https://github.com/avocado-framework/avocado/compare/0.25.0...0.26.0
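A minimal sketch of the skip() behavior described above; the precondition being checked is hypothetical:

import os

from avocado import Test


class SkipExample(Test):

    def setUp(self):
        # skip() may only be called here, before the test itself runs
        if not os.path.exists('/dev/kvm'):
            self.skip('KVM device not available on this host')

    def test(self):
        pass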
0.25.0 Blade¶
Hi guys, I’m here to announce the newest avocado release, 0.25.0. This is an important milestone in avocado development, and we would like to invite you to be a part of the development process, by contributing PRs, testing and giving feedback on the test runner’s usability and new plugins we came up with.
What to expect¶
This is the first release aimed for general use. We did our best to deliver a coherent and enjoyable experience, but keep in mind that it’s a young project, so please set your expectations accordingly. What is expected to work well:
- Running avocado ‘instrumented’ tests
- Running arbitrary executables as tests
- Automatic test discovery and run of tests on directories
- xUnit/JSON report
Known Issues¶
- HTML report of test jobs with multiplexed tests has a minor naming display issue that is scheduled to be fixed by next release.
- avocado-vt might fail to load if virt-test was not properly bootstrapped. Make sure you always run bootstrap in the virt-test directory on any virt-test git updates to prevent the issue. Next release will have more mechanisms to give the user better error messages on tough to judge situations (virt-test repo with stale or invalid config files that need update).
Changes¶
- The Avocado API has been greatly streamlined. After a long discussion and several rounds of reviews and planning, we now have a clear separation between what is intended for test developers and what is intended for plugin/core developers:
- avocado.core is intended for plugin/core developers. Things are more fluid in this space, so that we can move fast with development
- avocado.utils is a generic library, with functions we found to be useful for a variety of tests and core code alike.
- avocado has some symbols exposed at its top level, making up the test API (a minimal test using these symbols appears below):
- our Test() class, derived from the unittest.TestCase() class
- a main() entry point, similar to unittest.main()
- VERSION, which gives the user the avocado version (e.g. 0.25.0).
Those symbols and classes/APIs will be changed more carefully, and release notes will certainly contain API update notices. In other words, we’ll be a lot more mindful of changes in this area, to reduce the maintenance cost of writing avocado tests.
We believe this stricter separation between the available APIs will help test developers quickly identify what they need for test development, and reduce the chasing of a fast moving target, which usually happens in a new project that does not have clear policies behind its API design.
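Put together, a minimal test using only the top level avocado symbols looks like this sketch:

from avocado import Test, main


class PassTest(Test):

    def test(self):
        pass


if __name__ == "__main__":
    main()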
- There’s a new plugin added to the avocado project: avocado-vt. This plugin acts as a wrapper for the virt-test test suite (https://github.com/autotest/virt-test), allowing people to use avocado to list and run the tests available for that test suite. This allows people to leverage a number of the new cool avocado features for the virt tests themselves:
HTML reports, a commonly asked feature for the virt-test suite. You can see a screenshot of what the report looks like here: https://cloud.githubusercontent.com/assets/296807/7406339/7699689e-eed7-11e4-9214-38a678c105ec.png
You can run virt-tests in arbitrary order, and run multiple instances of a given test, something that is also currently not possible with the virt test runner (also a commonly asked feature for the suite).
System info collection. It’s a flexible feature: you get to configure easily what gets logged/recorded between tests.
The avocado multiplexer (test matrix representation/generation system) also received a lot of work and fixes during this release. One of the most visible (and cool) features of 0.25.0 is the new, improved --tree representation of the multiplexer file:
$ avocado multiplex examples/mux-environment.yaml -tc
 ┗━━ run
      ┣━━ hw
      ┃    ┣━━ cpu
      ┃    ┃    ╠══ intel
      ┃    ┃    ║     → cpu_CFLAGS: -march=core2
      ┃    ┃    ╠══ amd
      ┃    ┃    ║     → cpu_CFLAGS: -march=athlon64
      ┃    ┃    ╚══ arm
      ┃    ┃          → cpu_CFLAGS: -mabi=apcs-gnu -march=armv8-a -mtune=arm8
      ┃    ┗━━ disk
      ┃         ╠══ scsi
      ┃         ║     → disk_type: scsi
      ┃         ╚══ virtio
      ┃               → disk_type: virtio
      ┣━━ distro
      ┃    ╠══ fedora
      ┃    ║     → init: systemd
      ┃    ╚══ mint
      ┃          → init: systemv
      ┗━━ env
           ╠══ debug
           ║     → opt_CFLAGS: -O0 -g
           ╚══ prod
                 → opt_CFLAGS: -O2
We hope you find the multiplexer useful and enjoyable.
- If an avocado plugin fails to load, due to factors such as missing dependencies, environment problems or misconfiguration, those errors are displayed in the avocado stderr stream. This notifies users and makes them mindful of what it takes to fix the root causes of the loading errors.
However, often we can’t fix the problem right now and don’t need the constant stderr nagging. If that’s the case, you can set in your local config file:
[plugins]
# Suppress notification about broken plugins in the app standard error.
# Add the name of each broken plugin you want to suppress the notification
# in the list. The names can be easily seen from the stderr messages. Example:
# avocado.core.plugins.htmlresult ImportError No module named pystache
# add 'avocado.core.plugins.htmlresult' as an element of the list below.
skip_broken_plugin_notification = []
Our documentation has received a big review that led to a number of improvements. Those can be seen online (http://avocado-framework.readthedocs.org/en/latest/), but if you feel so inclined, you can build the documentation for local viewing, provided that you have the sphinx python package installed, by executing:
$ make -C docs html
Of course, if you find places where our documentation needs fixes/improvements, please send us a PR and we’ll gladly review it.
- As one would expect, many bugs were fixed. You can take a look at the full list of 156 commits here: https://github.com/avocado-framework/avocado/compare/0.24.0...0.25.0