Testing

You can certainly call any invoke function directly, manually constructing and providing its arguments yourself.

You can also call cappa.parse or cappa.invoke yourself, with relatively little effort.

However, Cappa comes with a built-in CommandRunner, which is meant to reduce the verbosity of testing CLI commands and of overriding upstream Deps. With it, you can centralize any always-used options, so that only the options which vary from test to test are provided inside the test bodies.

class cappa.testing.CommandRunner

Object to hold common parse/invoke invocation state, for testing.

Accepts almost identical inputs to those of parse/invoke. The notable deviation is argv.

Whereas the original functions accept argv as a single list argument, CommandRunner accepts base_args at the class-constructor level, which is concatenated with the CommandRunner.parse or CommandRunner.invoke *args to arrive at the total set of input args.

Example

Some base CLI object

>>> from dataclasses import dataclass
>>>
>>> import cappa
>>>
>>> @dataclass
... class Obj:
...     first: str
...     second: str = '2'

Creating an instance with no arguments means there is no default state:

>>> from cappa.testing import CommandRunner
>>> runner = CommandRunner()
>>> runner.parse('one', obj=Obj)
Obj(first='one', second='2')

Or create a runner that always uses the same base CLI object and a default set of base args:

>>> runner = CommandRunner(Obj, base_args=['first'])

Now each test can customize the behavior to suit the case in question.

>>> runner.parse(color=False)
Obj(first='first', second='2')
>>> runner.parse('two')
Obj(first='first', second='two')
Attributes:

    obj
    base_args
    backend
    deps
    output
    version
    color = True
    completion = True
    help = True

Methods:

    coalesce_args(*args, **kwargs) -> dict
    parse(*args, **kwargs)
    invoke(*args, **kwargs)
    async invoke_async(*args, **kwargs)
class cappa.testing.RunnerArgs

Available kwargs for the parse and invoke functions, matching CommandRunner fields.

backend
color
completion
deps
help
obj
output
version

Pytest

Cappa does not come with a built-in pytest fixture, because we assume that any test suite which might benefit from one will likely have other fixture dependencies. Most users will want to customize the construction of their CommandRunner, if they’re going to use one at all.

It is very straightforward to define your own fixture to produce an appropriately configured runner.

For example, suppose a base CLI has an Explicit Dependency on a configuration dictionary which pulls data from the environment, and you need to override it in a fixture to work with the rest of your testing setup.

Note

See Invoke dependency overrides for additional details.

Your existing code might look like this:

import os
from typing import Annotated

import cappa

def config() -> dict:
    return {
        "env": os.getenv("ENV"),
        "foo": os.getenv("FOO"),
    }


def fn(config: Annotated[dict, cappa.Dep(config)]):
    print(config)

@cappa.command(invoke=fn)
class CLI:
    name: str

In your tests, you’ve decided you want to hard-code a specific alternative config value. You could define a pytest fixture like so:

import pytest
from cappa.testing import CommandRunner

from package import CLI, config

@pytest.fixture
def runner():  # Note `runner` could itself depend on other fixtures, in more complex scenarios
    return CommandRunner(CLI, deps={config: {"env": "test", "foo": "bar"}})

# OR
def stub_config() -> dict:
    return {
        "env": "test",
        "foo": "bar",
    }

@pytest.fixture
def runner():
    return CommandRunner(CLI, deps={config: cappa.Dep(stub_config)})

Then your tests will be able to omit most configuration, except for the item under test:

def test_foo(runner: CommandRunner):
    runner.invoke('name!')