Test APIs

This is the bare minimum set of APIs that users should use, and can rely on, when writing tests.

Module contents

avocado.main

alias of TestProgram

class avocado.Test(methodName='test', name=None, params=None, base_logdir=None, job=None, runner_queue=None)

Bases: unittest.case.TestCase

Base implementation for the test class.

You’ll inherit from this to write your own tests. Typically you’ll want to implement setUp(), test*() and tearDown() methods in your test class.
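
A minimal sketch of such a test, using hypothetical names and values, could look like this:

    from avocado import Test

    class ExampleTest(Test):

        def setUp(self):
            # prepare whatever the test method needs
            self.numbers = [1, 2, 3]

        def test(self):
            # the actual check; assertion failures mark the test as FAIL
            self.assertEqual(sum(self.numbers), 6)

        def tearDown(self):
            # release resources created in setUp()
            self.numbers = None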

Initializes the test.

Parameters:
  • methodName – Name of the main method to run. For the sake of compatibility with the original unittest class, you should not set this.
  • name (avocado.core.test.TestName) – Pretty name of the test. For normal tests, written with the avocado API, this should not be set. It is reserved for internal Avocado use, such as when running random executables as tests.
  • base_logdir – Directory where test logs should go. If not provided, avocado.data_dir.create_job_logs_dir() will be used.
  • job – The job that this test is part of.
Raises:

avocado.core.test.NameNotTestNameError

basedir

The directory where this test (when backed by a file) is located

cache_dirs = None
cancel(message=None)

Cancels the test.

This method is expected to be called from the test method, not anywhere else, since by definition, we can only cancel a test that is currently under execution. If you call this method outside the test method, avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.

Parameters: message (str) – an optional message that will be recorded in the logs
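
A sketch of calling cancel() from within the test method; the availability check below is purely hypothetical:

    from avocado import Test

    class CancelExample(Test):

        def test(self):
            hardware_present = False  # hypothetical availability check
            if not hardware_present:
                self.cancel("required hardware not present")
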
datadir

Returns the path to the directory that contains test data files

default_params = {}
error(message=None)

Errors the currently running test.

After calling this method a test will be terminated and have its status as ERROR.

Parameters: message (str) – an optional message that will be recorded in the logs
fail(message=None)

Fails the currently running test.

After calling this method a test will be terminated and have its status as FAIL.

Parameters: message (str) – an optional message that will be recorded in the logs
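
A short sketch of an explicit failure; the computation and condition are placeholders:

    from avocado import Test

    class FailExample(Test):

        def test(self):
            result = 1 + 1            # placeholder computation
            if result != 2:
                self.fail("unexpected result: %s" % result)
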
fail_class
fail_reason
fetch_asset(name, asset_hash=None, algorithm='sha1', locations=None, expire=None)

Method to call utils.asset in order to fetch an asset file, with support for hash checking, caching and multiple locations (see the example below).

Parameters:
  • name – the asset filename or URL
  • asset_hash – asset hash (optional)
  • algorithm – hash algorithm (optional, defaults to sha1)
  • locations – list of URLs from where the asset can be fetched (optional)
  • expire – time for the asset to expire
Raises:

EnvironmentError – When it fails to fetch the asset

Returns:

asset file local path
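
A sketch of typical fetch_asset() usage; the URL and hash below are hypothetical:

    from avocado import Test

    class AssetExample(Test):

        def test(self):
            # fetched files are cached, so subsequent runs reuse the local copy
            tarball = self.fetch_asset(
                'https://example.org/files/data.tar.gz',
                asset_hash='0123456789abcdef0123456789abcdef01234567')
            self.log.info("asset available at %s", tarball)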

filename

Returns the name of the file (path) that holds the current test

get_state()

Serialize selected attributes representing the test state

Returns: a dictionary containing relevant test state data
Return type: dict
job

The job this test is associated with

log

The enhanced test log

logdir

Path to this test’s logging dir

logfile

Path to this test’s main debug.log file

name

The test name (TestName instance)

outputdir

Directory available to test writers to attach files to the results

params

Parameters of this test (AvocadoParam instance)
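
A sketch of reading a parameter with a fallback default; the parameter name is hypothetical:

    from avocado import Test

    class ParamsExample(Test):

        def test(self):
            # returns the value of 'timeout' if set, otherwise the default
            timeout = self.params.get('timeout', default=3)
            self.log.info("using timeout=%s", timeout)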

report_state()

Send the current test state to the test runner process

run_avocado()

Wraps the run method, for execution inside the avocado runner.

Result: Unused param, compatibility with unittest.TestCase.
runner_queue

The communication channel between test and test runner

running

Whether this test is currently being executed

set_runner_queue(runner_queue)

Override the runner_queue

skip(message=None)

Skips the currently running test.

This method should only be called from a test’s setUp() method, not anywhere else, since by definition, if a test gets to be executed, it can’t be skipped anymore. If you call this method outside setUp(), avocado will mark your test status as ERROR, and instruct you to fix your test in the error message.

Parameters: message (str) – an optional message that will be recorded in the logs
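
A sketch of skipping from setUp(); the checked path is only an example:

    import os

    from avocado import Test

    class SkipExample(Test):

        def setUp(self):
            # skip() may only be called here, before the test method runs
            if not os.path.exists('/dev/kvm'):
                self.skip("KVM not available")

        def test(self):
            pass
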
srcdir = None
status

The result status of this test

teststmpdir

Returns the path of the temporary directory that will stay the same for all tests in a given Job.

time_elapsed = -1
time_end = -1
time_start = -1
timeout = None
traceback
whiteboard = ''
workdir = None
avocado.fail_on(exceptions=None)

Fail the test when the decorated function raises an exception of the specified type.

(For example, the tested method may raise IndexError when the software under test fails. You can either catch it yourself with try/except or use this decorator instead.)

Parameters: exceptions – a single exception class or a tuple of exception classes to be treated as a test failure [Exception]
Note: self.error and self.skip behavior remains intact
Note: to allow simple usage, the "exceptions" param must not be callable
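
A sketch of the decorator in use; without it, the IndexError below would be reported as ERROR instead of FAIL:

    from avocado import Test, fail_on

    class FailOnExample(Test):

        @fail_on(IndexError)
        def test(self):
            values = []
            # raises IndexError, which the decorator turns into a test failure
            self.log.info(values[0])
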
avocado.skip(message=None)

Decorator to skip a test.

avocado.skipIf(condition, message=None)

Decorator to skip a test if a condition is True.

avocado.skipUnless(condition, message=None)

Decorator to skip a test if a condition is False.
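
A sketch combining the skip decorators; the conditions are only examples:

    import os

    from avocado import Test, skipIf, skipUnless

    class SkipDecoratorsExample(Test):

        @skipIf(os.geteuid() != 0, "requires root privileges")
        def test_as_root(self):
            pass

        @skipUnless(os.path.exists('/dev/kvm'), "KVM not available")
        def test_with_kvm(self):
            pass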

exception avocado.TestError

Bases: avocado.core.exceptions.TestBaseException

Indicates that the test was not fully executed and an error happened.

This is the sort of exception you raise if the test was partially executed and could not complete due to a setup, configuration, or another fatal condition.

status = 'ERROR'
exception avocado.TestFail

Bases: avocado.core.exceptions.TestBaseException, exceptions.AssertionError

Indicates that the test failed.

TestFail inherits from AssertionError in order to keep compatibility with vanilla python unittests (they only consider failures the ones deriving from AssertionError).

status = 'FAIL'
exception avocado.TestCancel

Bases: avocado.core.exceptions.TestBaseException

Indicates that a test was canceled.

Should be raised when the test’s cancel() method is used.

status = 'CANCEL'