Functional tests

These tests focus on the API requirements: the result of an action is verified, aiming to validate that many integrated functionalities work together as a whole.

GraphQL testing

Most of our functional tests are of this type. They focus on verifying that GraphQL endpoints consistently deliver and/or mutate the correct data for the appropriate users under specific conditions. They can also halt the pipeline if a resolver provides unexpected data, thereby preventing faulty resolvers from being deployed to production.

Ecosystem

Our tests are written with the pytest library, which provides an ecosystem where you can declare test data in a structured way through fixtures and the conftest.py file, run the same test multiple times with different data through the parametrize decorator, and keep assertions short and easy to read and write. We also use the pytest-snapshot plugin, which allows creating snapshots of the data in a known-correct state and then comparing the responses of the resolvers against those snapshots.

Our tests run in our CI/CD pipeline every time a developer pushes changes to our repository, and they can also be executed locally when creating or updating them.

Tests

To add new tests, you must understand the most relevant directory structure and modules for pytest.

The following is the main directory and file structure. It should serve as a reference; however, the tests use resources from multiple other directories.

universe
|-- integrates
| |-- back
| | |-- test
| | | |--functional
| | | | |-- src
| | | | | |-- test_dir <- One tests dir
| | | | | | |-- __init__.py
| | | | | | |-- conftest.py <- pytest configuration file
| | | | | | |-- test_resolver.py <- Tests file
| | | | | |-- test_dir_other <- One tests dir
| | | | | | |-- __init__.py
| | | | | | |-- conftest.py <- pytest configuration file
| | | | | | |-- test_resolver.py <- Tests file
| | | | | |-- test_dir_other_2 <- One tests dir
| | | | | | |-- __init__.py
| | | | | | |-- conftest.py <- pytest configuration file
| | | | | | |-- test_resolver.py <- Tests file
| | | | | |-- conftest.py
| | | | | |-- utils.py

As seen in the scheme, the functional tests are located in the path /universe/integrates/back/test/functional/src and are grouped into subfolders for readability. These subfolders have descriptive names according to the tests they contain; for example, the finding directory contains tests related to findings.

You may also notice that inside each directory are the following files:

  • __init__.py: This file contains functions related to the execution of queries and mutations, which are ultimately what request the resolvers to modify and query the data, and which store the responses for later use in the tests.

  • conftest.py: This is the default pytest configuration file; it is used to define global configuration and resources such as fixtures, plugins, and hooks in an orderly way. In our tests, we mainly use it to define the dummy data that is used in the tests.

note

As you may notice in the scheme, conftest.py files exist at two levels: the test directory level and the src directory level. The one at the src level holds general test settings, while each one at the test level is scoped to the directory in which it resides.

  • test_resolver.py: It contains multiple tests targeting similar sections of code; for example, the src/finding/test_resolver.py file contains multiple tests intended to verify that queries and mutations related to findings return and modify data correctly.

  • utils.py: This module contains functions that are common to most tests.
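For instance, a hedged sketch of what a per-directory conftest.py fixture could look like (the fixture name and data below are invented for illustration; the real suite loads its dummy data into a test database rather than returning it directly):

```python
# conftest.py (illustrative sketch only, not the real configuration)
import pytest


@pytest.fixture
def dummy_finding() -> dict:
    # A dummy record made available to every test in this directory;
    # the id and title here are made up for the example.
    return {
        "id": "000000000",
        "title": "Example finding title",
        "state": "open",
    }
```

Any test in the same directory that declares a dummy_finding parameter receives this data automatically.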

When adding a new test, there is likely already a directory with tests for queries or mutations similar to what you want to test. In that case, you can place your new test inside that directory's test_resolver.py file and take advantage of the dummy data, imports, functions, and so on that have already been added.

For example, if you want to add a new test for a query called GetFindingDetails, you can add it to the test_resolver.py file in the finding directory. The directory likely does not contain the necessary data for your test; in that case, you will need to add it using the conftest.py file.

To add a new test, the following snippet will be very helpful:

Query snippet

01 from back.test.functional.src.utils import (get_api_result,)
02 from typing import (Any,)
03 ...
04
05 async def query_name(*, stakeholder: str, some_variable: str) -> dict:
06     query: str = """
07         ...
08     """
09     variables: dict[str, Any] = {
10         "variable": some_variable,
11     }
12     data: dict[str, Any] = {"query": query, "variables": variables}
13     return await get_api_result(
14         data,
15         stakeholder,
16     )

This snippet is a function that receives the parameters needed to execute a query, executes the query we define with those parameters, and returns its result.

Explanation of the query snippet

  • 01-03: Required imports.

  • 05: Definition of the function, for example: async def get_finding_details(...).

  • 06-08: Definition of the query to execute, for example:

    query: str = """
        query GetOrganizationName($orgId: String!) {
            organization(organizationId: $orgId) {
                name
            }
        }
    """
  • 09-11: Definition of the variables for the query, for example:

    variables: dict[str, Any] = {
        "orgId": org_id,
    }
  • 12: Grouping of the query and the variables inside a dictionary.

  • 13-16: Execution of the query in GraphQL and return of the result.
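Putting those pieces together, the payload handed to get_api_result is just a plain dictionary. A minimal, runnable sketch of its construction (using the hypothetical GetOrganizationName query from the examples, not the real schema):

```python
from typing import Any

# Hypothetical query taken from the examples above; not the real schema.
QUERY: str = """
    query GetOrganizationName($orgId: String!) {
        organization(organizationId: $orgId) {
            name
        }
    }
"""


def build_payload(org_id: str) -> dict[str, Any]:
    # Group the query and its variables, as the snippet does on line 12
    variables: dict[str, Any] = {"orgId": org_id}
    return {"query": QUERY, "variables": variables}
```

The resulting dictionary is what the snippet passes to get_api_result along with the stakeholder.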

Test snippet

01 from . import query_function
02 from typing import (Any,)
03 import pytest
04 import json
05 ...
06
07 @pytest.mark.asyncio
08 @pytest.mark.parametrize(
09     ["stakeholder", "some_variable"], [
10         ["[email protected]", "some_value"],
11         ["[email protected]", {"some_dict": "some value in dict"}],
12     ],
13 )
14 async def test_query_name(
15     populate: bool,
16     stakeholder: str,
17     snapshot: Any,
18     some_variable: Any,
19 ) -> None:
20     assert populate
21     result: dict = await query_function(
22         stakeholder=stakeholder, some_variable=some_variable
23     )
24     assert "errors" not in result
25     json_result = json.dumps(result, indent=2)
26     snapshot.assert_match(json_result + "\n", "snapshot.json")

This function is in charge of executing the test; in this case, it is a test with snapshots. It receives its parameters from the parametrize decorator and the fixtures, executes a query through the previously imported query_function, and then creates and/or compares a snapshot with the result of the query.

Explanation of the test snippet

  • 01-04: Required imports.

  • 07: Use the @pytest.mark.asyncio decorator to allow the test to run asynchronously.

  • 08-13: It uses the @pytest.mark.parametrize decorator, which receives a list of parameter names and a list of argument lists; each argument list is executed as a separate test.

  • 14: Definition of the function that runs the test, for example async def test_get_organization_name(...). Note that for pytest to recognize the test, the function name must start with test_.

  • 15: The populate fixture is passed as an argument; this boolean is true or false depending on whether the dummy db was successfully populated.

  • 16: This argument comes from the @pytest.mark.parametrize decorator explained above.

  • 17: Fixture needed to use pytest-snapshot.

  • 18: This argument comes from the @pytest.mark.parametrize decorator explained above.

  • 20: It is an assertion that succeeds if populate is true; otherwise, the test fails.

  • 21-23: Runs the query and stores its result in the variable result.

  • 25: The output is formatted for better human readability.

  • 26: This line has two different behaviors: creating the snapshot and comparing the snapshot with the new data.

    • Creating the snapshot: the content to be saved (the formatted result from line 25) is passed as the first parameter, and the second is the name of the file in which it is saved.
    • Comparing: the first parameter is that same content, and the second is the snapshot file to compare it with.
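To make that dual behavior concrete, here is a simplified model of what a snapshot check does. This is only an illustration of the idea, not pytest-snapshot's actual implementation:

```python
import json
from pathlib import Path


def snapshot_check(value: str, snapshot_file: Path, update: bool) -> None:
    # Update mode (--snapshot-update): (re)write the snapshot file
    # with the current value. Normal mode: fail if the value differs
    # from the stored snapshot.
    if update:
        snapshot_file.write_text(value)
    else:
        assert value == snapshot_file.read_text()


# Formatting mirrors lines 25-26 of the test snippet
result = {"data": {"organization": {"name": "example"}}}
json_result = json.dumps(result, indent=2) + "\n"
```

On the first (update) run the snapshot file is written; on later runs the resolver's response must match it byte for byte, which is why any snapshot diff deserves a careful review.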

Local execution

Without snapshots

If you created a new test, or want to run an existing one, you can test it locally with the following command at the universe directory level:

m . /integrates/back/test/functional <test_directory_name>

With snapshots

If you created a new test with snapshots, or need to update the snapshots of an existing test, you can run the following command at the universe directory level:

m . /integrates/back/test/functional <test_directory_name> --snapshot-update

Here's an example for running the tests located within the finding directory with snapshots:

m . /integrates/back/test/functional finding --snapshot-update

If the test already has its snapshots created and your intention is only to check them, not update them, execute the command without the --snapshot-update flag.

note

Whether you created new snapshots or updated existing ones, you should carefully review what changed in each snapshot before committing and opening an MR.

By default, --snapshot-update updates all the existing snapshots in the test directory you indicate. If you want to update them one by one for a more detailed review of the changes, you can add the --exitfirst flag.