Creating Test Scripts in the RackHD Test Framework

This page contains instructions for using the RackHD Test Framework to create functional and feature test scripts.

RackHD Test Design Principles

RackHD testing is coded using Python 2.7.

All RackHD test scripts are intended to be launched either by a build server (Jenkins) or manually.

The RackHD Test framework adheres to the following principles:

  • Use the Python 'unittest' module
  • Use 'unittest' asserts to verify test conditions whenever possible
  • After the test, leave the System Under Test in the same state it was in at the beginning (unless it is a destructive test)
  • Bundle similar tests together in scripts (testsuites)
  • Utilize common library functions for repetitive tasks (tests/common)
  • Use command line arguments to configure test parameters
  • Scripts must meet OpenSource criteria
    • no internal proprietary data contained in the scripts or config files
    • no hardcoded IP addresses, names, etc
    • scripts and config files that do not meet these criteria can be managed in the internal repository and can be run via this same framework by overlaying the files into your test repo

NOTE: There is confusion over the term 'unit test' as it applies to RackHD testing because it utilizes the Python 'unittest' module. The 'unittest' module is a software component utilized by both Unit Tests and Feature/Functional/CI Tests. Unit Tests are typically created by the software developer and checked in with the product code. Feature/Functional test suites also use the 'unittest' module but are typically created by SW Engineers operating in a test development mode.

GIT Repository

The RackHD test framework can be checked out from github using the following GIT command:

git clone https://github.com/RackHD/RackHD

Code Check-in rules:

  • Create a new development branch under rackhd/rackhd for every check-in
  • Test the script against a vagrant or physical stack prior to check-in
  • Create a pull request from your branch
  • Run mkcheck.sh to check that the scripts conform to Python coding standards
    • You can also run flake8 against your script if it is available

Test Script Convention and Best Practices

  • Test scripts should be created following the test templates.

  • Test script filenames must be unique, related to the test function, and contain the word test.  test_xx.py is standard practice.

  • Test case names must be unique and related to test function.  Test case names must contain the word test to be found via unittest.
  • Bundle similar test cases into scripts but keep test scripts within a narrow scope
  • Add a test purpose or description into the test script for others to enjoy
  • Tests should be placed in any directory under rackhd/test/tests other than api and api-cit

  • Tests should utilize common library functions for API and appliance shell access.

  • Test scripts and test cases should be as stand-alone as possible
  • Tests should not call other test scripts; if a common module is needed, use or create a library method.  This is very helpful to other developers.

  • If a test script needs some test bed/setup requirements to run, the first test in the script should check for the necessary requirements and exit or skip tests if they are not met (see the sketch after this list)

  • Tests should not have any direct references to IP addresses or hostnames.

  • Use the configuration files and fitargs to reference global or stack configs for hardware or resource references.

  • Use log functions instead of print functions to capture output in test scripts, see the logging section

  • Define attribute groups for each test class - smoke, regression, extended, my-test-group
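
For the test bed/setup check mentioned above, here is a minimal sketch, assuming the FIT environment and the fit_common.node_select() helper shown in the template later on this page; the requirement being checked is illustrative:

import fit_path  # NOQA: unused import
import unittest

from common import fit_common


class requirements_check_example(unittest.TestCase):
    def test_00_check_requirements(self):
        # First test in the script: verify the test bed has what this suite needs,
        # and skip (rather than fail) if the requirement is not met
        nodes = fit_common.node_select()
        if not nodes:
            self.skipTest("No compute nodes discovered on this stack")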

Test Case Structure

Test scripts are reported as 'test suites' in the test framework. Each subtest is reported as a 'test case'.

The basic structure of a unittest testcase is below:

Test structure

import unittest

from nose.plugins.attrib import attr


@attr(regression=False, smoke=False, my_tests_group=True)
class MyTestSuiteA(unittest.TestCase):
    def setUp(self):
        # specific setup items here, executes before each test case
        pass

    def tearDown(self):
        # specific teardown items here, executes after each test case
        pass

    # tests follow
    def test_01(self):
        # Insert test case code here
        pass

    def test_02(self):
        # Insert test case code here
        pass

Each test case resides as a method in a test class; a class may contain multiple test cases and supporting methods.

The 'setUpClass' method is for suite initialization and is run just once for the class.  It can be used to make variables accessible and updatable within the test cases.

The 'setUp' method is for test case initialization and runs at the start of every test case.

The 'tearDown' method is for post-test processing and runs at the end of every test case.
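
As a minimal sketch, 'setUpClass' can prepare data once and store it on the class so every test case can read and update it (the class and node names here are illustrative):

import unittest


class setup_class_example(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before any test in this class; data stored on the class
        # is visible to, and updatable by, every test case
        cls.nodes = ['node-a', 'node-b']

    def test_read_class_data(self):
        # Read the data that setUpClass prepared
        self.assertNotEqual([], self.__class__.nodes)

    def test_update_class_data(self):
        # Update the shared data for use by later tests
        self.__class__.nodes.append('node-c')
        self.assertIn('node-c', self.__class__.nodes)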

Each test case must have a name with 'test' as a prefix.

Unittest will execute the test cases in alphabetical order within the test class.

If tests need to be run in a sequence, two options can be utilized:

  • Use numbered class and method names such as 'test01', 'test02'.  If any test case fails, processing of that test case ends at the failure point, then the next test case in line is executed.
  • Use the nosedep depends attribute, e.g. @depends(after=test_a), @depends(before=test_last) (see nosedep attributes).  If a test case fails, any dependent test cases are skipped.
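
A minimal sketch of the nosedep option, assuming the nosedep plugin is available as it is in the FIT environment; the test names are illustrative:

import unittest

from nosedep import depends


class depends_example(unittest.TestCase):
    def test_setup_step(self):
        # Runs first; the test below declares a dependency on it
        self.assertEqual(0, 0)

    @depends(after=test_setup_step)
    def test_needs_setup(self):
        # Automatically skipped by nosedep if test_setup_step fails
        self.assertEqual(0, 0)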


The unittest group attributes must be defined.  These are defined at the class level.

  from nose.plugins.attrib import attr
  @attr(regression=False, smoke=False, my_tests_group=True)

When setting regression or smoke attribute to True, the test script must meet the CI Smoke and Regression test requirements.


Unittest Assert methods

Test conditions are evaluated with unittest 'assert' commands. There are a number of asserts available within unittest to test program conditions. The optional 'message' will be printed to the console and logs if the assertion fails.  Here is a subset of the common unittest asserts.

Method                                       Checks that
assertEqual(a, b, "message")                 a == b
assertNotEqual(a, b, "message")              a != b
assertTrue(x, "message")                     bool(x) is True
assertFalse(x, "message")                    bool(x) is False
assertIs(a, b, "message")                    a is b
assertIsNot(a, b, "message")                 a is not b
assertIsNone(x, "message")                   x is None
assertIsNotNone(x, "message")                x is not None
assertIn(a, b, "message")                    a in b
assertNotIn(a, b, "message")                 a not in b
assertIsInstance(a, b, "message")            isinstance(a, b)
assertNotIsInstance(a, b, "message")         not isinstance(a, b)
assertAlmostEqual(a, b, "message")           round(a-b, 7) == 0
assertNotAlmostEqual(a, b, "message")        round(a-b, 7) != 0
assertGreater(a, b, "message")               a > b
assertGreaterEqual(a, b, "message")          a >= b
assertLess(a, b, "message")                  a < b
assertLessEqual(a, b, "message")             a <= b
assertRegexpMatches(s, r, "message")         r.search(s)
assertNotRegexpMatches(s, r, "message")      not r.search(s)
assertItemsEqual(a, b, "message")            sorted(a) == sorted(b)
assertDictContainsSubset(a, b, "message")    all the key/value pairs in a exist in b
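
As a minimal illustration of a few of these asserts inside a test case (the node data here is made up):

import unittest


class assert_example(unittest.TestCase):
    def test_node_fields(self):
        node = {'id': 'abc123', 'type': 'compute'}
        self.assertIn('id', node, "node is missing an 'id' field")
        self.assertEqual(node['type'], 'compute', "unexpected node type")
        self.assertIsNotNone(node.get('id'), "node id should not be None")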


Logging methods

FIT uses a separate logging module via Stuart's python package stream-monitor. The stream-monitor is more than just logging; it will monitor various streams (run_test logging, AMQP, and more).

For logging purposes when using run_tests.py, the stream-monitor package is included via fit_common. Logging is instantiated within the test scripts, similar to this:

  logs = flogging.get_loggers()

To log output within a test script, here are some sample calls:

  logs.warning('Nodes already discovered!')
  logs.info('Nodes really discovered!')
  logs.debug("Okay, I'll print the nodes")
  logs.debug_2(json.dumps(msg, indent=4))
  logs.debug_4(json.dumps(msg, indent=4))
  logs.debug_10(json.dumps(msg, indent=4))

When kicking off a test, log files are automatically started; for each run, a set of log files is captured under the test/log_output directory.

There are multiple levels of logging and ways to direct log output to different files.
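
Putting the pieces together, a minimal sketch of logging inside a test case, assuming the FIT environment (fit_path and flogging come from the framework); the node list is illustrative:

import fit_path  # NOQA: unused import
import unittest
from json import dumps

import flogging

logs = flogging.get_loggers()


class logging_example(unittest.TestCase):
    def test_log_levels(self):
        nodes = ['node-a', 'node-b']
        logs.info('Found %d nodes', len(nodes))
        logs.debug('Node list follows')
        logs.debug_2(dumps(nodes, indent=4))
        self.assertNotEqual([], nodes)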

AMQP monitor methods  (in development)

FIT uses a separate AMQP monitoring module via Stuart's python package stream-monitor.

For AMQP monitoring purposes when using run_tests.py, the stream-monitor package is included via fit_common. The monitor is instantiated within the test scripts, similar to this:

  amqp_queue = amqp_monitors()

The suite and individual test cases will be able to instantiate the AMQP processes and retrieve AMQP events from the stream-monitor.

Individual functions will be available to the test developers to use within their scripts.

RackHD Test Libraries and Resources

Refer to the RackHD Test Libraries and Resources page for details on common functions and resources to use when creating test scripts.


Sample Test Script

The tests/templates directory has sample test scripts that conform to the RackHD test framework (FIT) format. The 'fit_test_template.py' template conforms to Python unittest. This template is intended to be used as a pattern for new test scripts; not all features and classes are needed in every test.  Make sure to update comment blocks and provide a good summary of the test for your readers.

Refer to the test scripts checked into the rackhd/rackhd/test/templates directory.

'''
Copyright (c) 2017 Dell, Inc. or its subsidiaries. All Rights Reserved.

Author(s):

FIT test script template

Test Script summary:
This template contains the basic script layout for functional test scripts.
It includes examples of using the fit_common methods rackhdapi() to make RackHD API calls.
It also shows an example of using the unittest class method.
It also shows examples of using dependencies between tests with nosedep.
'''

import fit_path  # NOQA: unused import                                                                                                                       
import unittest
from json import dumps

# import nose decorator attr                                                                                                                                 
from nose.plugins.attrib import attr

# Import nosedep if dependencies are needed between tests                                                                                                    
from nosedep import depends

# Import the logging feature                                                                                                                                 
import flogging

from common import fit_common

# set up the logging                                                                                                                                         
logs = flogging.get_loggers()

# Define the test group here using unittest @attr                                                                                                            
# @attr is a decorator and must be located in the line just above the class to be labeled                                                                    
#   These can be any label to run groups of tests selectively                                                                                                
#   When setting regression or smoke to True, the test must meet CI requirements   

@attr(regression=False, smoke=False, my_tests_group=True)
class fit_template(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # class method is run once per script
        # usually not required in the script
        cls.nodes = []

    def setUp(self):
        # setUp is run for each test
        test_nodes = []

    def shortDescription(self):
        # This removes the docstrings (""") from the unittest test list (collect-only), don't need to modify this
        return None

    def my_utility(self):
        # local script method
        return None

    def tearDown(self):
        # tearDown is run for each test
        test_nodes = []

    @classmethod
    def tearDownClass(cls):
        # tearDownClass(cls) is run once at the end of the script
        cls.nodes = []

    def test_first(self):
        """ Test 1: This test shows success """
        logs.info("This is a successful test")
        logs.debug("This is debug info for successful test")
        self.assertEqual(0, 0)

    @depends(after=test_first)
    def test_second_expect_fail(self):
        """ Test: This test shows failed """
        logs.error(" This is a failed test")
        logs.debug_1(" This is debug info at level 1")
        self.assertEqual(1, 0, msg=("failure due to force assert"))

    def test_next(self):
        """ Test: This test verifies no dependecy chain """
        logs.info_5(" This is a successful test")
        self.assertEqual(0, 0)

    @depends(after=[test_first, test_next])
    def test_next_next(self):
        """ Test: This test depends on two tests to pass """
        logs.warning(" This is a passed test")
        self.assertNotEqual(1, 0, msg="good error message")

    @depends(after=test_second_expect_fail)
    def test_this_will_get_skipped(self):
        """ Test: This test depends a test that will fail """
        logs.info(" This will not get printed")
        self.assertEqual(0, 0)

    @depends(after=test_first)
    def test_get_nodes(self):
        """
        This test is an example of using fit_common.node_select to retrieve a node list.
        For demo purposes, it needs communication to a running rackhd instance or will fail.
        """
        nodes = []
        # Retrieve list of nodes, default gets compute nodes
        nodes = fit_common.node_select()

        # Check if any nodes returned
        self.assertNotEqual([], nodes, msg=("No Nodes in List"))

        # Log the list of nodes
        logs.info(" %s", dumps(nodes, indent=4))

    @depends(after=test_get_nodes)
    def test_get_nodes_rackhdapi(self):
        """
        This test is an example of using fit_common.rackhdapi() to perform an API call
        and using data from the response.
        For demo purposes, it needs communication to a running rackhd instance.
        """
        nodes = []
        nodelist = []

        # Perform an API call
        api_data = fit_common.rackhdapi('/api/2.0/nodes')

        # Check return status is what you expect
        status = api_data.get('status')
        self.assertEqual(status, 200,
                         'Incorrect HTTP return code, expected 200, got:' + str(api_data['status']))
        # Use the response data
        try:
            nodes = api_data.get('json')
        except:
            self.fail("No Json data in repsonse")
        for node in nodes:
            nodelist.append(node.get('id'))
        logs.info(" %s", dumps(nodelist, indent=4))

        # example to set the class level nodelist
        self.__class__.nodes = nodelist

    @depends(after=test_get_nodes_rackhdapi)
    def test_display_class_nodes(self):
        """
        This test is an example of using the class variable 'nodes'
        The prior test set the class variable to be used by this test.
        This test prints out the class variable
        """
        my_nodes = self.__class__.nodes
        logs.info(" %s", dumps(my_nodes, indent=4))


if __name__ == '__main__':
    unittest.main()

 

Sample Output

'fit_test_template.py' runtime output:

run_tests.py -stack 10 -config config-mn -test templates/fit_test_template.py -v 0

*** Using config file path: config-mn
*** Created config file: config-mn/generated/fit-config-20170425-143825
*** Using config file: config-mn/generated/fit-config-20170425-143825
2017-04-25 14:38:26,514 INFO    removing previous logging dir /emc/hohene/sandbox-github/rackhd/archive/eh-master/test/log_output/run_2017-04-25_04:15:02.d infra.run **.** 19954 MainProcess stream-monitor/flogging/infra_logging.py:__init__@148 gl-main [any tests]
2017-04-25 14:38:26,515 INFO    this runs logging dir /emc/hohene/sandbox-github/rackhd/archive/eh-master/test/log_output/run_2017-04-25_14:38:26.d infra.run **.** 19954 MainProcess stream-monitor/flogging/infra_logging.py:__init__@148 gl-main [any tests]
*** Reloading config file: config-mn/generated/fit-config-20170425-143825
test_first (test.templates.fit_test_template.fit_template) ... ok
test_next (test.templates.fit_test_template.fit_template) ... ok
test_get_nodes (test.templates.fit_test_template.fit_template) ... /emc/hohene/sandbox-github/rackhd/archive/eh-master/test/.venv/ehm-ub16/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
/emc/hohene/sandbox-github/rackhd/archive/eh-master/test/.venv/ehm-ub16/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
/emc/hohene/sandbox-github/rackhd/archive/eh-master/test/.venv/ehm-ub16/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
ok
test_next_next (test.templates.fit_test_template.fit_template) ... ok
test_second_expect_fail (test.templates.fit_test_template.fit_template) ... FAIL
test_get_nodes_rackhdapi (test.templates.fit_test_template.fit_template) ... ok
test_this_will_get_skipped (test.templates.fit_test_template.fit_template) ... SKIP: Required test 'test_second_expect_fail' FAILED
test_display_class_nodes (test.templates.fit_test_template.fit_template) ... ok

======================================================================
FAIL: test_second_expect_fail (test.templates.fit_test_template.fit_template)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/emc/hohene/sandbox-github/rackhd/archive/eh-master/test/.venv/ehm-ub16/local/lib/python2.7/site-packages/nosedep.py", line 126, in inner
    return func(*args, **kwargs)
  File "/emc/hohene/sandbox-github/rackhd/archive/eh-master/test/templates/fit_test_template.py", line 70, in test_second_expect_fail
    self.assertEqual(1, 0, msg=("failure due to force assert"))
AssertionError: failure due to force assert
-------------------- >> begin captured logging << --------------------
root: INFO: +1.09 - STARTING TEST: [test_second_expect_fail (test.templates.fit_test_template.fit_template)]
test.run: INFO:   **** Running test: test_second_expect_fail
test.run: ERROR:  This is a failed test
test.run: DEBUG_1:  This is debug info at level 1
--------------------- >> end captured logging << ---------------------

----------------------------------------------------------------------
Ran 8 tests in 0.493s

FAILED (SKIP=1, failures=1)