tmt.steps.provision package
Submodules
tmt.steps.provision.artemis module
- class tmt.steps.provision.artemis.ArtemisAPI(guest: GuestArtemis)
Bases: object
- create(path: str, data: dict[str, Any], request_kwargs: dict[str, Any] | None = None) Response
Create - or request creation of - a resource.
- Parameters:
path – API path to contact.
data – optional key/value data to send with the request.
request_kwargs – optional request options, as supported by the requests library.
- delete(path: str, request_kwargs: dict[str, Any] | None = None) Response
Delete - or request removal of - a resource.
- Parameters:
path – API path to contact.
request_kwargs – optional request options, as supported by the requests library.
- inspect(path: str, params: dict[str, Any] | None = None, request_kwargs: dict[str, Any] | None = None) Response
Inspect a resource.
- Parameters:
path – API path to contact.
params – optional key/value query parameters.
request_kwargs – optional request options, as supported by the requests library.
- query(path: str, method: str = 'get', request_kwargs: dict[str, Any] | None = None) Response
Base helper for Artemis API queries.
Trivial dispatcher per method, returning retrieved response.
- Parameters:
path – API path to contact.
method – HTTP method to use.
request_kwargs – optional request options, as supported by the requests library.
- class tmt.steps.provision.artemis.ArtemisGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.84', arch: str = 'x86_64', image: str | None = None, pool: str | None = None, priority_group: str = 'default-priority', keyname: str = 'master-key', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: str | None = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: int | None = None, watchdog_period_delay: int | None = None, skip_prepare_verify_ssh: bool = False, post_install_script: str | None = None)
Bases: GuestSshData
- api_retries: int = 10
- api_retry_backoff_factor: int = 1
- api_timeout: int = 10
- api_url: str = 'http://127.0.0.1:8001'
- api_version: str = '0.0.84'
- arch: str = 'x86_64'
- guestname: str | None = None
- image: str | None = None
- keyname: str = 'master-key'
- kickstart: dict[str, str]
- log_type: list[str]
- pool: str | None = None
- post_install_script: str | None = None
- priority_group: str = 'default-priority'
- provision_tick: int = 60
- provision_timeout: int = 600
- skip_prepare_verify_ssh: bool = False
- user_data: dict[str, str]
- watchdog_dispatch_delay: int | None = None
- watchdog_period_delay: int | None = None
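The api_retries and api_retry_backoff_factor fields suggest the exponential backoff convention used by urllib3's Retry helper (delay = backoff_factor * 2 ** (attempt - 1)); treat that formula as an assumption here, not a statement about tmt's internals:

```python
def backoff_delays(retries: int, backoff_factor: float) -> list[float]:
    """Delays (in seconds) before each retry attempt, following the
    urllib3-style exponential backoff convention (assumed here, not
    taken from the tmt source): backoff_factor * 2 ** (attempt - 1)."""
    return [
        backoff_factor * 2 ** (attempt - 1)
        for attempt in range(1, retries + 1)
    ]
```

With the defaults above (api_retries=10, api_retry_backoff_factor=1) this yields delays of 1, 2, 4, ... seconds, doubling on each failed attempt.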
- exception tmt.steps.provision.artemis.ArtemisProvisionError(message: str, response: Response | None = None, request_data: dict[str, Any] | None = None, *args: Any, **kwargs: Any)
Bases: ProvisionError
Artemis provisioning error.
For some provisioning errors, we can provide more context.
General error.
- Parameters:
message – error message.
causes – optional list of exceptions that caused this one. Since raise ... from ... allows only for a single cause, and some of our workflows may raise exceptions triggered by more than one exception, we need a mechanism for storing them. Our reporting will honor this field, and report causes the same way as __cause__.
- class tmt.steps.provision.artemis.GuestArtemis(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Artemis guest instance
The following keys are expected in the ‘data’ dictionary:
Initialize guest data
- property api: ArtemisAPI
- api_retries: int
- api_retry_backoff_factor: int
- api_timeout: int
- api_url: str
- api_version: str
- arch: str
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- guestname: str | None
- image: str
- property is_ready: bool
Detect whether the guest is ready or not
- keyname: str
- kickstart: dict[str, str]
- log_type: list[str]
- pool: str | None
- post_install_script: str | None
- priority_group: str
- provision_tick: int
- provision_timeout: int
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Remove the guest
- skip_prepare_verify_ssh: bool
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- user_data: dict[str, str]
- watchdog_dispatch_delay: int | None
- watchdog_period_delay: int | None
- class tmt.steps.provision.artemis.GuestInspectType
Bases: TypedDict
- address: str | None
- state: str
- class tmt.steps.provision.artemis.GuestLogArtemis(name: str, guest: tmt.steps.provision.artemis.GuestArtemis)
Bases: GuestLog
- property filename: str
A filename to use when storing the log.
By default, the name of the log is used.
- guest: GuestArtemis
Guest whose log this instance represents.
- class tmt.steps.provision.artemis.ProvisionArtemis(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionArtemisData]
Provision guest using Artemis backend.
Reserve a machine using the Artemis service. Users can specify many requirements, mostly regarding the desired OS, RAM, disk size and more. Most of the HW specifications defined in the Hardware specification are supported, including the kickstart. Artemis takes machines from AWS, OpenStack, Beaker or Azure. By default, Artemis handles the selection of a cloud provider to its best abilities and the required specification. However, it is possible to specify the keyword pool and select the desired cloud provider.
Artemis project: https://gitlab.com/testing-farm/artemis
Minimal configuration could look like this:
provision:
    how: artemis
    image: Fedora
    api-url: https://your-artemis.com/
Note
When used together with the Testing Farm infrastructure some of the options from the first example below will be filled for you by the Testing Farm service.
Note
The actual value of image depends on what images - or “composes” as Artemis calls them - the Artemis instance supports and can deliver.
Note
The api-url can also be given via the TMT_PLUGIN_PROVISION_ARTEMIS_API_URL environment variable.
Full configuration example:
provision:
    how: artemis

    # Artemis API
    api-url: https://your-artemis.com/
    api-version: 0.0.32

    # Mandatory environment properties
    image: Fedora

    # Optional environment properties
    arch: aarch64
    pool: optional-pool-name

    # Provisioning process control (optional)
    priority-group: custom-priority-group
    keyname: custom-SSH-key-name

    # Labels to be attached to guest request (optional)
    user-data:
        foo: bar

    # Timeouts and deadlines (optional)
    provision-timeout: 3600
    provision-tick: 10
    api-timeout: 600
    api-retries: 5
    api-retry-backoff-factor: 1
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.artemis.ProvisionArtemisData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.84', arch: str = 'x86_64', image: str | None = None, pool: str | None = None, priority_group: str = 'default-priority', keyname: str = 'master-key', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: str | None = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: int | None = None, watchdog_period_delay: int | None = None, skip_prepare_verify_ssh: bool = False, post_install_script: str | None = None)
Bases: ArtemisGuestData, ProvisionStepData
tmt.steps.provision.bootc module
- class tmt.steps.provision.bootc.BootcData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1, container_file: str | None = None, container_file_workdir: str = '.', container_image: str | None = None, add_tmt_dependencies: bool = True, image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest', rootfs: str = 'xfs', build_disk_image_only: bool = False)
Bases: ProvisionTestcloudData
- add_tmt_dependencies: bool = True
- build_disk_image_only: bool = False
- container_file: str | None = None
- container_file_workdir: str = '.'
- container_image: str | None = None
- image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest'
- rootfs: str = 'xfs'
- class tmt.steps.provision.bootc.GuestBootc(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger, containerimage: str | None, rootless: bool)
Bases: GuestTestcloud
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- containerimage: str | None
- remove() None
Remove the guest (disk cleanup)
- class tmt.steps.provision.bootc.ProvisionBootc(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[BootcData]
Provision a local virtual machine using a bootc container image
Minimal config which uses the CentOS Stream 9 bootc image:
provision:
    how: bootc
    container-image: quay.io/centos-bootc/centos-bootc:stream9
    rootfs: xfs
Here’s a config example using a Containerfile:
provision:
    how: bootc
    container-file: "./my-custom-image.containerfile"
    container-file-workdir: .
    image-builder: quay.io/centos-bootc/bootc-image-builder:stream9
    rootfs: ext4
    disk: 100
Another config example using an image that already includes tmt dependencies:
provision:
    how: bootc
    add-tmt-dependencies: false
    container-image: localhost/my-image-with-deps
    rootfs: btrfs
This plugin is an extension of the virtual.testcloud plugin. Essentially, it takes a container image as input, builds a bootc disk image from the container image, then uses the virtual.testcloud plugin to create a virtual machine using the bootc disk image.
The bootc disk creation requires running podman as root. The plugin will automatically check if the current podman connection is rootless. If it is, a podman machine will be spun up and used to build the bootc disk.
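The rootless check described above can be illustrated by parsing the JSON produced by podman info --format json. Both the helper name and the host.security.rootless key path are assumptions about podman's output layout, not taken from the tmt source:

```python
import json


def is_rootless(podman_info_json: str) -> bool:
    """Report whether a podman connection is rootless, given the JSON
    output of 'podman info --format json'. The host.security.rootless
    key path is an assumption about podman's JSON layout."""
    info = json.loads(podman_info_json)
    return bool(info.get("host", {}).get("security", {}).get("rootless", False))
```

When such a check reports a rootless connection, the plugin falls back to a podman machine so that the disk image build can run with root privileges.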
To trigger a hard reboot of a guest, the plugin uses the testcloud API. It is also used to trigger a soft reboot unless a custom reboot command was specified via tmt-reboot -c ....
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- property is_in_standalone_mode: bool
Enable standalone mode when build_disk_image_only is True
tmt.steps.provision.connect module
- class tmt.steps.provision.connect.ConnectGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, guest: str | None = None, soft_reboot: tmt.utils.ShellScript | None = None, systemd_soft_reboot: tmt.utils.ShellScript | None = None, hard_reboot: tmt.utils.ShellScript | None = None)
Bases: GuestSshData
- classmethod from_plugin(container: ProvisionConnect) ConnectGuestData
Create guest data from plugin and its current configuration
- guest: str | None = None
- hard_reboot: ShellScript | None = None
- soft_reboot: ShellScript | None = None
- systemd_soft_reboot: ShellScript | None = None
- class tmt.steps.provision.connect.GuestConnect(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- hard_reboot: ShellScript | None
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
The plugin will use special commands if specified via the soft-reboot, systemd-soft-reboot, and hard-reboot keys to perform the RebootMode.SOFT, RebootMode.SYSTEMD_SOFT, and RebootMode.HARD reboot modes, respectively.
Warning
Unlike command, these commands would be executed on the runner, not on the guest.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
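The recover part of reboot() boils down to polling the guest until it responds again, bounded by a deadline. A generic sketch of such a wait loop (illustrative only, not tmt's actual Waiting implementation):

```python
import time
from typing import Callable


def wait_for(predicate: Callable[[], bool],
             deadline: float,
             tick: float = 0.1) -> bool:
    """Poll predicate until it returns True or deadline (in seconds)
    expires. Returns True on success, False when time runs out."""
    end = time.monotonic() + deadline
    while time.monotonic() < end:
        if predicate():
            return True
        time.sleep(tick)
    return False
```

In the reboot case the predicate would be an SSH connection attempt; the boolean result maps directly onto the True/False return value documented above.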
- soft_reboot: ShellScript | None
- start() None
Start the guest
- systemd_soft_reboot: ShellScript | None
- class tmt.steps.provision.connect.ProvisionConnect(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionConnectData]
Connect to a provisioned guest using SSH.
Do not provision any system, tests will be executed directly on the machine that has been already provisioned. Use provided authentication information to connect to it over SSH.
Private key authentication (using sudo to run scripts):
provision:
    how: connect
    guest: host.example.org
    user: fedora
    become: true
    key: /home/psss/.ssh/example_rsa
Password authentication:
provision:
    how: connect
    guest: host.example.org
    user: root
    password: secret
User defaults to root, so if your private key is correctly set up, the minimal configuration can look like this:
provision:
    how: connect
    guest: host.example.org
To support hard reboot of a guest, hard-reboot must be set to an executable command or script. Without this key set, hard reboot will remain unsupported and result in an error. In comparison, soft-reboot and systemd-soft-reboot are optional, but if set, the given commands will be preferred over the default soft and systemd soft-reboot commands:
provision:
    how: connect
    hard-reboot: virsh reboot my-example-vm
    systemd-soft-reboot: ssh root@my-example-vm 'systemd soft-reboot'
    soft-reboot: ssh root@my-example-vm 'shutdown -r now'

provision --how connect \
    --hard-reboot="virsh reboot my-example-vm" \
    --systemd-soft-reboot="ssh root@my-example-vm 'systemd soft-reboot'" \
    --soft-reboot="ssh root@my-example-vm 'shutdown -r now'"
Warning
The hard-reboot, systemd-soft-reboot, and soft-reboot commands are executed on the runner, not on the guest.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.connect.ProvisionConnectData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, guest: str | None = None, soft_reboot: tmt.utils.ShellScript | None = None, systemd_soft_reboot: tmt.utils.ShellScript | None = None, hard_reboot: tmt.utils.ShellScript | None = None)
Bases: ConnectGuestData, ProvisionStepData
tmt.steps.provision.local module
- class tmt.steps.provision.local.GuestLocal(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Guest
Local Host
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, immediately: bool = True, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput
Execute command on localhost
- property is_ready: bool
Local is always ready
- localhost = True
- pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None
Nothing to be done to pull workdir
- push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None
Nothing to be done to push workdir
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
- property scripts_path: Path
Absolute path to tmt scripts directory
- start() None
Start the guest
- stop() None
Stop the guest
- class tmt.steps.provision.local.ProvisionLocal(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionLocalData]
Use the localhost for the test execution.
Do not provision any system, tests will be executed directly on the localhost.
Warning
In general, it is not recommended to run tests on your local machine as there might be security risks. Run only those tests which you know are safe so that you don’t destroy your workstation ;-)
From tmt version 1.38, the --feeling-safe option or the TMT_FEELING_SAFE=1 environment variable is required in order to use the local provision plugin.
Using the plugin:
provision:
    how: local
provision --how local
Note
tmt runis expected to be executed under a non-privileged user account. For some actions on the localhost, e.g. installation of test requirements,localwill require elevated privileges, either by running underrootaccount, or by usingsudoto run the sensitive commands. You may be asked for a password in such cases.Note
Neither hard nor soft reboot is supported.
Note
Currently the TMT_SCRIPTS_DIR variable is not supported in the local provision plugin and the default scripts path is used instead. See issue #4081 for details.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.local.ProvisionLocalData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None)
Bases: GuestData, ProvisionStepData
tmt.steps.provision.mock module
- class tmt.steps.provision.mock.GuestMock(*args: Any, **kwargs: Any)
Bases: Guest
Mock environment
Initialize guest data
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, immediately: bool = True, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput
Execute the command in a running mock shell for increased speed.
- property is_ready: bool
Mock is always ready
- pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None
Pull content from the mock chroot via a pipe at MOCK_PIPE_FILESYNC. For directories we use tar. For files we use cp or install. Compress option is ignored, it only slows down the execution.
- push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None
Push content into the mock chroot via a pipe at MOCK_PIPE_FILESYNC. For directories we use tar. For files we use cp or install. Compress option is ignored, it only slows down the execution. Create destination option is ignored, there were problems with workdir.
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Currently do not prune the mock chroot, that may be undesirable.
- root: str | None = None
- rootdir: Path | None = None
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- stop() None
Stop the guest
Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.
- suspend() None
Suspend the guest.
Perform any actions necessary before quitting step and tmt. The guest may be reused by future tmt invocations.
- class tmt.steps.provision.mock.MockGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, root: str | None = None, rootdir: tmt._compat.pathlib.Path | None = None)
Bases: GuestData
- root: str | None = None
- rootdir: Path | None = None
- class tmt.steps.provision.mock.MockShell(parent: GuestMock)
Bases: object
- enter_shell() None
- execute(*args: Any, **kwargs: Any) tuple[str, str]
- exit_shell() None
- class tmt.steps.provision.mock.ProvisionMock(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionMockData]
Use the mock tool for the test execution.
Tests will be executed inside a mock buildroot.
Warning
This plugin requires the --feeling-safe option or the TMT_FEELING_SAFE=1 environment variable to be defined. While it is roughly as safe as container provisioning, it has access to the local filesystem.
Using the plugin:
provision:
    how: mock
    root: fedora-rawhide-x86_64
provision --how mock --root fedora-rawhide-x86_64
Note
Neither hard nor soft reboot is supported.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.mock.ProvisionMockData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, root: str | None = None, rootdir: tmt._compat.pathlib.Path | None = None)
Bases: MockGuestData, ProvisionStepData
tmt.steps.provision.mrack module
- tmt.steps.provision.mrack.BEAKER: Any
- class tmt.steps.provision.mrack.BeakerAPI(guest: GuestBeaker)
Bases: object
Initialize the API class with defaults and load the config
- create(data: CreateJobParameters) Any
Create - or request creation of - a resource using mrack up.
- Parameters:
data – describes the provisioning request.
- delete() Any
Delete - or request removal of - a resource
- dsp_name: str = 'Beaker'
- inspect() Any
Inspect a resource (effectively, wait until it is provisioned)
- mrack_requirement: dict[str, Any] = {}
- class tmt.steps.provision.mrack.BeakerGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, whiteboard: str | None = None, arch: str = 'x86_64', image: str | None = 'fedora', job_id: str | None = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: str | None = None, public_key: list[str] = <factory>, beaker_job_group: str | None = None, bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system', bootc_image_url: str | None = None, bootc_registry_secret: str | None = None, bootc: bool = False)
Bases: GuestSshData
- api_session_refresh_tick: int = 3600
- arch: str = 'x86_64'
- beaker_job_group: str | None = None
- beaker_job_owner: str | None = None
- bootc: bool = False
- bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system'
- bootc_image_url: str | None = None
- bootc_registry_secret: str | None = None
- image: str | None = 'fedora'
- job_id: str | None = None
- kickstart: dict[str, str]
- provision_tick: int = 60
- provision_timeout: int = 3600
- public_key: list[str]
- whiteboard: str | None = None
- tmt.steps.provision.mrack.BeakerProvider: Any
- tmt.steps.provision.mrack.BeakerTransformer: Any
- class tmt.steps.provision.mrack.ConstraintT
A type var representing actual constraint type in transformers and their type annotations.
alias of TypeVar(‘ConstraintT’, bound=Constraint)
- tmt.steps.provision.mrack.ConstraintTransformer
A type of constraint transformers.
alias of Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]
- class tmt.steps.provision.mrack.CreateJobParameters(tmt_name: str, name: str, os: str, arch: str, hardware: Hardware | None, kickstart: dict[str, str], whiteboard: str | None, beaker_job_owner: str | None, public_key: list[str], beaker_job_group: str | None, bootc_credentials: dict[str, Any] | None, bootc_image_url: str | None, bootc: bool, bootc_check_system_url: str | None, group: str = 'linux')
Bases: object
Collect all parameters for a future Beaker job
- arch: str
- beaker_job_group: str | None
- beaker_job_owner: str | None
- bootc: bool
- bootc_check_system_url: str | None
- bootc_credentials: dict[str, Any] | None
- bootc_image_url: str | None
- group: str = 'linux'
- kickstart: dict[str, str]
- name: str
- os: str
- public_key: list[str]
- tmt_name: str
- to_mrack() dict[str, Any]
- whiteboard: str | None
- tmt.steps.provision.mrack.DEFAULT_API_SESSION_REFRESH = 3600
How often the Beaker session should be refreshed to pick up an up-to-date Kerberos ticket.
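The refresh decision based on this constant amounts to a simple age check; a minimal sketch (the helper name is hypothetical, not from the tmt source):

```python
def needs_refresh(last_refresh: float, now: float,
                  tick: float = 3600.0) -> bool:
    """True when the session is at least `tick` seconds old and its
    Kerberos ticket should be refreshed (default tick matches
    DEFAULT_API_SESSION_REFRESH above)."""
    return (now - last_refresh) >= tick
```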
- class tmt.steps.provision.mrack.GuestBeaker(*args: Any, **kwargs: Any)
Bases: GuestSsh
Beaker guest instance
Make sure that the mrack module is available and imported
- api_session_refresh_tick: int
- arch: str
- beaker_job_group: str | None = None
- beaker_job_owner: str | None = None
- bootc: bool
- bootc_check_system_url: str | None
- bootc_image_url: str | None
- bootc_registry_secret: str | None
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- image: str = 'fedora-latest'
- property is_ready: bool
Check if provisioning of the machine is done
- job_id: str | None
- kickstart: dict[str, str]
- provision_tick: int
- provision_timeout: int
- public_key: list[str]
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
The plugin will use the bkr system-power command to perform the RebootMode.HARD reboot. Unlike command, this command would be executed on the runner, not on the guest.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Remove the guest
- start() None
Start the guest
Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.
- whiteboard: str | None
- class tmt.steps.provision.mrack.GuestInspectType
Bases: TypedDict
- address: str | None
- status: str
- system: str
- class tmt.steps.provision.mrack.GuestLogBeaker(name: str, guest: tmt.steps.provision.mrack.GuestBeaker, url: str)
Bases: GuestLog
- guest: GuestBeaker
Guest whose log this instance represents.
- update(*, logger: Logger) None
Fetch the up-to-date content of the log, and save it into a file.
- Parameters:
logger – logger to use for logging.
- url: str
- class tmt.steps.provision.mrack.MrackBaseHWElement(name: str)
Bases: ABC
Base for Mrack hardware requirement elements.
- name: str
- abstractmethod to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWAndGroup(name: str = 'and', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)
Bases: MrackHWGroup
Represents the <and/> element.
- name: str = 'and'
- class tmt.steps.provision.mrack.MrackHWBinOp(name: str, operator: str, value: str)
Bases: MrackHWElement
An element describing a binary operation, a “check”.
- class tmt.steps.provision.mrack.MrackHWDeviceElement(operator: str, value: str, attribute_name: str = 'value')
Bases: MrackHWElement
An element for a device with op and value attributes.
- class tmt.steps.provision.mrack.MrackHWElement(name: str, attributes: dict[str, str] = <factory>)
Bases: MrackBaseHWElement
An element with name and attributes.
This type of element is not allowed to have any child elements.
- attributes: dict[str, str]
- to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWGroup(name: str, children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)
Bases: MrackBaseHWElement
An element with child elements.
This type of element is not allowed to have any attributes.
- children: list[MrackBaseHWElement | dict[str, Any]]
- to_mrack() dict[str, Any]
Convert the element to Mrack-compatible dictionary tree
- class tmt.steps.provision.mrack.MrackHWKeyValue(name: str, operator: str, value: str)
Bases: MrackHWElement
A key-value element.
- class tmt.steps.provision.mrack.MrackHWNotGroup(name: str = 'not', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)
Bases: MrackHWGroup
Represents the <not/> element.
- name: str = 'not'
- class tmt.steps.provision.mrack.MrackHWOrGroup(name: str = 'or', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)
Bases: MrackHWGroup
Represents the <or/> element.
- name: str = 'or'
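How the abstract to_mrack() conversion composes can be illustrated with a minimal, self-contained sketch. The classes and the exact dictionary shape below are simplified stand-ins, assumptions for illustration only, not the actual tmt or mrack API:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class HWElement:
    # Simplified stand-in for MrackHWBinOp: a named check with attributes.
    name: str
    attributes: dict[str, str] = field(default_factory=dict)

    def to_mrack(self) -> dict[str, Any]:
        # An element with attributes renders as a mapping whose keys are
        # prefixed attribute names (the prefix convention is an assumption).
        return {self.name: {f'_{k}': v for k, v in self.attributes.items()}}

@dataclass
class HWGroup:
    # Simplified stand-in for MrackHWAndGroup: a named element with children
    # and no attributes of its own.
    name: str
    children: list[Any] = field(default_factory=list)

    def to_mrack(self) -> dict[str, Any]:
        return {self.name: [child.to_mrack() for child in self.children]}

memory = HWElement('memory', {'op': '>=', 'value': '8192'})
arch = HWElement('arch', {'op': '==', 'value': 'x86_64'})
tree = HWGroup('and', [memory, arch]).to_mrack()
```

The nested dict tree then mirrors the Beaker XML filter structure of `<and/>` wrapping individual checks.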
- tmt.steps.provision.mrack.NotAuthenticatedError: Any
- class tmt.steps.provision.mrack.ProvisionBeaker(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionBeakerData]
Provision a guest on a Beaker system using mrack.
Reserve a machine from the Beaker pool using the mrack plugin. mrack is a multicloud provisioning library supporting multiple cloud services including Beaker.
The following two files are used for configuration:
/etc/tmt/mrack.conf for basic configuration
/etc/tmt/provisioning-config.yaml for configuration per supported provider
Beaker installs the distribution specified by the image key. If the image cannot be translated using the provisioning-config.yaml file, mrack passes the image value to the Beaker hub and tries to request a distribution based on it. This way the default translations can be bypassed and the desired distribution specified, as in the example below.
Minimal configuration could look like this:
provision:
    how: beaker
    image: fedora
To trigger a hard reboot of a guest, the bkr system-power --action reboot command is executed.
Warning
The bkr system-power command is executed on the runner, not on the guest.
# Specify the distro directly
provision:
    how: beaker
    image: Fedora-37%
# Set custom whiteboard description (added in 1.30)
provision:
    how: beaker
    whiteboard: Just a smoke test for now
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- wake(data: BeakerGuestData | None = None) None
Wake up the plugin, process data, apply options
- class tmt.steps.provision.mrack.ProvisionBeakerData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, whiteboard: str | None = None, arch: str = 'x86_64', image: str | None = 'fedora', job_id: str | None = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: str | None = None, public_key: list[str] = <factory>, beaker_job_group: str | None = None, bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system', bootc_image_url: str | None = None, bootc_registry_secret: str | None = None, bootc: bool = False)
Bases: BeakerGuestData, ProvisionStepData
- tmt.steps.provision.mrack.ProvisioningError: Any
- tmt.steps.provision.mrack.TmtBeakerTransformer: Any
- tmt.steps.provision.mrack.async_run(func: Any) Any
Decorate click actions to run as async
- tmt.steps.provision.mrack.constraint_to_beaker_filter(constraint: BaseConstraint, logger: Logger) MrackBaseHWElement | dict[str, Any]
Convert a hardware constraint into a Mrack-compatible filter
- tmt.steps.provision.mrack.import_and_load_mrack_deps(mrack_log: str, logger: Logger) None
Import mrack module only when needed (thread-safe)
- tmt.steps.provision.mrack.init_mrack_global_context(config_path: str) None
Initialize mrack global context in a thread-safe manner
- tmt.steps.provision.mrack.mrack: Any
- tmt.steps.provision.mrack.mrack_constructs_ks_pre() bool
Kickstart construction has been improved in 1.21.0
- tmt.steps.provision.mrack.operator_to_beaker_op(operator: Operator, value: str) tuple[str, str, bool]
Convert constraint operator to Beaker “op”.
- Parameters:
operator – operator to convert.
value – value the operator works with. It shall be a string representation of the constraint value, as converted for the Beaker job XML.
- Returns:
a tuple of three items: a Beaker operator, fit for the op attribute of XML filters, a value to use instead of the input one, and a boolean signaling whether the filter constructed by the caller should be negated.
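The negation flag in the returned tuple exists because Beaker filters have no native "not equal" operator. A hypothetical sketch of such a conversion — the Operator enum values and the exact mapping are assumptions, not the actual tmt implementation:

```python
from enum import Enum

class Operator(Enum):
    EQ = '=='
    NEQ = '!='
    GT = '>'
    GTE = '>='
    LT = '<'
    LTE = '<='

def to_beaker_op(operator: Operator, value: str) -> tuple[str, str, bool]:
    # Direct translations: Beaker's "op" attribute supports these natively.
    direct = {
        Operator.EQ: '==',
        Operator.GT: '>',
        Operator.GTE: '>=',
        Operator.LT: '<',
        Operator.LTE: '<=',
    }
    if operator in direct:
        return direct[operator], value, False
    if operator is Operator.NEQ:
        # No native "!=": emit an equality check and ask the caller
        # to wrap the resulting filter in a negating <not/> group.
        return '==', value, True
    raise ValueError(f'unsupported operator: {operator}')
```

A caller seeing the True flag would wrap its filter element in a MrackHWNotGroup.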
- tmt.steps.provision.mrack.providers: Any
- tmt.steps.provision.mrack.transforms(fn: Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]) Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]
A decorator marking a function as a constraint transformer.
The function name is expected to provide the constraint name it transforms: the decorator strips away the initial _transform_ prefix, and replaces the first underscore (_) with a dot (.):
_transform_beaker_pool => beaker.pool
_transform_disk_physical_sector_size => disk.physical_sector_size
- Parameters:
fn – function to decorate.
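The name derivation described above can be sketched as a small helper — a hypothetical re-implementation of the documented convention, not the actual decorator internals:

```python
def constraint_name(fn_name: str) -> str:
    # Strip the "_transform_" prefix, then replace only the first
    # remaining underscore with a dot, per the documented convention.
    name = fn_name.removeprefix('_transform_')
    return name.replace('_', '.', 1)
```

For example, `constraint_name('_transform_beaker_pool')` yields `'beaker.pool'`.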
tmt.steps.provision.podman module
- class tmt.steps.provision.podman.GuestContainer(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: Guest
Container instance.
Initialize guest data.
- NETWORK_NAME_FORMAT: ClassVar[str] = '{prefix}tmt-{run_name}-{plan_name}-network'
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- container: str | None
- execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, immediately: bool = True, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput
Execute given commands in podman via shell
- force_pull: bool
- image: str | None
- property is_ready: bool
Detect whether the guest is ready.
- network_prefix: str | None
- podman(command: Command, silent: bool = True, **kwargs: Any) CommandOutput
Run given command via podman
- pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None
Nothing to be done to pull workdir
- pull_attempts: int
- pull_image() None
Pull image if not available or pull forced
- pull_interval: int
- push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None
Make sure that the workdir has a correct selinux context
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
Note
Only the RebootMode.HARD mode is supported by the plugin; other modes or a custom reboot command will result in an exception.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.
waiting – deadline for the reboot.
- Returns:
True if the reboot succeeded, False otherwise.
- remove() None
Remove the container
- start() None
Start provisioned guest
- stop() None
Stop provisioned guest
- stop_time: int
- user: str
- wake() None
Wake up the guest
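The NETWORK_NAME_FORMAT class variable of GuestContainer above is a plain format string; its use can be illustrated directly. The prefix, run and plan names below are made-up values:

```python
NETWORK_NAME_FORMAT = '{prefix}tmt-{run_name}-{plan_name}-network'

# Hypothetical values; the plugin fills these in from the current run.
name = NETWORK_NAME_FORMAT.format(
    prefix='ci-',        # corresponds to the network_prefix attribute
    run_name='run-012',
    plan_name='smoke',
)
```

This would produce the podman network name `ci-tmt-run-012-smoke-network`.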
- class tmt.steps.provision.podman.PodmanGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: str | None = None, network: str | None = None, network_prefix: str | None = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)
Bases: GuestData
- container: str | None = None
- force_pull: bool = False
- image: str = 'fedora'
- network: str | None = None
- network_prefix: str | None = None
- pull_attempts: int = 5
- pull_interval: int = 5
- stop_time: int = 1
- user: str = 'root'
- class tmt.steps.provision.podman.ProvisionPodman(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ProvisionPlugin[ProvisionPodmanData]
Create a new container using podman.
Example config:
provision:
    how: container
    image: fedora:latest
# Use an image with a non-root user with sudo privileges,
# and run scripts with sudo.
provision:
    how: container
    image: image with non-root user with sudo privileges
    user: tester
    become: true
In order to always pull a fresh container image, use pull: true.
In order to run the container with a different user than the default root, use user: USER.
Container-backed guests do not support soft reboots or custom reboot commands. A soft reboot or tmt-reboot -c ... will result in an error.
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- default(option: str, default: Any = None) Any
Return default data for given option
- class tmt.steps.provision.podman.ProvisionPodmanData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: str | None = None, network: str | None = None, network_prefix: str | None = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)
Bases: PodmanGuestData, ProvisionStepData
tmt.steps.provision.testcloud module
- tmt.steps.provision.testcloud.AArch64ArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.BOOT_METHOD_SUPPORTED_METHODS: tuple[str, ...] = ('bios', 'uefi')
Boot methods supported by the plugin.
- tmt.steps.provision.testcloud.BOOT_TIMEOUT: int = 120
How many seconds to wait for a VM to start. This is the effective value, combining the default and the optional TMT_BOOT_TIMEOUT environment variable.
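The relationship between the default and the TMT_BOOT_TIMEOUT environment variable can be sketched as follows — a hypothetical helper mirroring the documented behavior, not the actual tmt code:

```python
import os

DEFAULT_BOOT_TIMEOUT = 120

def effective_boot_timeout(environ=os.environ) -> int:
    # TMT_BOOT_TIMEOUT, when set, overrides the built-in default.
    raw = environ.get('TMT_BOOT_TIMEOUT')
    return int(raw) if raw is not None else DEFAULT_BOOT_TIMEOUT
```

With no environment override this returns 120; with `TMT_BOOT_TIMEOUT=300` it returns 300.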
- class tmt.steps.provision.testcloud.ConsoleLog(name: str, guest: 'Guest', exchange_directory: tmt._compat.pathlib.Path | None = None)
Bases: GuestLog
- exchange_directory: Path | None = None
Temporary directory for storing the console log content.
- setup(*, logger: Logger) None
Prepare for collecting the log.
It is left for plugins to set up the needed infrastructure, make API calls, etc.
- Parameters:
logger – logger to use for logging.
- tmt.steps.provision.testcloud.DEFAULT_BOOT_TIMEOUT: int = 120
How many seconds to wait for a VM to start. This is the default value tmt would use unless told otherwise.
- tmt.steps.provision.testcloud.DEFAULT_STOP_RETRIES = 10
Default number of attempts to stop a VM.
Note
The value testcloud starts with is 3, and we have already observed some VMs with bootc involved failing to shut down in time. Therefore tmt starts with an increased default on its side.
- tmt.steps.provision.testcloud.DEFAULT_STOP_RETRY_DELAY = 1
Default time, in seconds, to wait between attempts to stop a VM.
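A stop-with-retries loop combining these two defaults might look like this — a hypothetical sketch, not the actual testcloud or tmt implementation; `stop_once` is an assumed callback that reports whether the VM has shut down:

```python
import time

DEFAULT_STOP_RETRIES = 10
DEFAULT_STOP_RETRY_DELAY = 1

def stop_with_retries(stop_once,
                      retries: int = DEFAULT_STOP_RETRIES,
                      delay: int = DEFAULT_STOP_RETRY_DELAY) -> bool:
    # Keep asking the VM to stop; give up after `retries` attempts,
    # sleeping `delay` seconds between them.
    for attempt in range(retries):
        if stop_once():
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```

For example, a VM that needs three attempts to stop would still succeed well within the ten-attempt budget.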
- tmt.steps.provision.testcloud.DomainConfiguration: Any
- class tmt.steps.provision.testcloud.GuestTestcloud(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)
Bases: GuestSsh
Testcloud instance.
The following keys are expected in the ‘data’ dictionary:
image ...... qcow2 image name or url
user ....... user name to log in
memory ..... memory size for vm
disk ....... disk size for vm
connection . either session (default) or system, to be passed to qemu
arch ....... architecture for the VM, host arch is the default
Initialize guest data
- arch: str
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- connection: str
- disk: Size | None
- image: str
- image_url: str | None
- instance_name: str | None
- property is_coreos: bool
- property is_kvm: bool
- property is_legacy_os: bool
- property is_ready: bool
Detect whether the guest is ready.
- memory: Size | None
- prepare_config() None
Prepare common configuration
- prepare_ssh_key(key_type: str | None = None) str
Prepare ssh key for authentication
- reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool
Reboot the guest, and wait for the guest to recover.
- Parameters:
mode – which boot mode to perform.
command – a command to run on the guest to trigger the reboot. Only usable when mode is not
RebootMode.HARD.waiting – deadline for the reboot.
- Returns:
Trueif the reboot succeeded,Falseotherwise.
- remove() None
Remove the guest (disk cleanup)
- start() None
Start provisioned guest
- stop() None
Stop provisioned guest
- stop_retries: int
- stop_retry_delay: int
- property testcloud_data_dirpath: Path
- property testcloud_image_dirpath: Path
- wake() None
Wake up the guest
- tmt.steps.provision.testcloud.IMAGE_URL_FETCH_RETRY_ATTEMPTS = 5
Image url fetch retry attempts and interval
- tmt.steps.provision.testcloud.NON_KVM_TIMEOUT_COEF = 10
How many times should the timeouts be multiplied in kvm-less cases. These include emulating a different architecture than the host, some nested virtualization cases, and hosts with degraded virt caps.
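Applying the coefficient is a straightforward multiplication; a minimal sketch, with the helper name being hypothetical:

```python
BOOT_TIMEOUT = 120
NON_KVM_TIMEOUT_COEF = 10

def boot_deadline(is_kvm: bool) -> int:
    # Without KVM acceleration (emulated architectures, degraded virt
    # capabilities), boots are far slower, so the timeout is multiplied.
    return BOOT_TIMEOUT * (1 if is_kvm else NON_KVM_TIMEOUT_COEF)
```

With the defaults above, a KVM guest gets 120 seconds to boot while an emulated one gets 1200.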
- tmt.steps.provision.testcloud.Ppc64leArchitectureConfiguration: Any
- class tmt.steps.provision.testcloud.ProvisionTestcloud(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases:
ProvisionPlugin[ProvisionTestcloudData]Local virtual machine using
testcloudlibrary. Testcloud takes care of downloading an image and making necessary changes to it for optimal experience (such as disablingUseDNSandGSSAPIfor SSH).Minimal config which uses the latest Fedora image:
provision: how: virtual
Here’s a full config example:
# Provision a virtual machine from a specific QCOW2 file, # using specific memory and disk settings, using the fedora user, # and using sudo to run scripts. provision: how: virtual image: https://mirror.vpsnet.com/fedora/linux/releases/41/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2 user: fedora become: true # in MB memory: 2048 # in GB disk: 30
Images
As the image, use fedora for the latest released Fedora compose, fedora-rawhide for the latest Rawhide compose, short aliases such as fedora-32, f-32 or f32 for a specific release, or a full url to the qcow2 image, for example from https://kojipkgs.fedoraproject.org/compose/.
Short names are also provided for centos, centos-stream, alma, rocky, oracle, debian and ubuntu (e.g. centos-8 or c8).
Note
The non-rpm distros are not fully supported yet in tmt as the package installation is performed solely using dnf/yum and rpm. But you should be able to log in to the provisioned guest and start experimenting. Full support is coming in the future :)
Supported Fedora CoreOS images are:
fedora-coreos
fedora-coreos-stable
fedora-coreos-testing
fedora-coreos-next
Use the full path for images stored on local disk, for example:
/var/tmp/images/Fedora-Cloud-Base-31-1.9.x86_64.qcow2
In addition to the qcow2 format, Vagrant boxes can be used as well, testcloud will take care of unpacking the image for you.
Reboot
To trigger a hard reboot of a guest, the plugin uses the testcloud API. The API is also used to trigger a soft reboot unless a custom reboot command was specified via tmt-reboot -c ....
Console
The full console log is available, after the guest is booted, in the logs directory under the provision step workdir, for example plan/provision/client/logs/console.txt. Enable verbose mode using -vv to get the full path printed to the terminal for easy investigation.
Store plugin name, data and parent step
- classmethod clean_images(clean: Clean, dry: bool, workdir_root: Path) bool
Remove the testcloud images
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.provision.testcloud.ProvisionTestcloudData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1)
Bases: TestcloudGuestData, ProvisionStepData
- tmt.steps.provision.testcloud.QCow2StorageDevice: Any
- tmt.steps.provision.testcloud.RawStorageDevice: Any
- tmt.steps.provision.testcloud.S390xArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.SystemNetworkConfiguration: Any
- tmt.steps.provision.testcloud.TPMConfiguration: Any
- tmt.steps.provision.testcloud.TPM_CONFIG_ALLOWS_VERSIONS: bool = False
If set, the testcloud TPM configuration accepts the TPM version as a parameter.
- tmt.steps.provision.testcloud.TPM_VERSION_ALLOWED_OPERATORS: tuple[Operator, ...] = (Operator.EQ, Operator.GTE, Operator.LTE)
List of operators supported for the tpm.version HW requirement.
- tmt.steps.provision.testcloud.TPM_VERSION_SUPPORTED_VERSIONS = {False: ['2.0', '2'], True: ['2.0', '2', '1.2']}
TPM versions supported by the plugin. The key is TPM_CONFIG_ALLOWS_VERSIONS.
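Checking whether a requested TPM version is usable then reduces to a lookup keyed by TPM_CONFIG_ALLOWS_VERSIONS. A sketch using the values quoted above; the helper name is an assumption:

```python
# Mirrors the documented constants; False means the installed testcloud
# does not accept a TPM version parameter at all.
TPM_CONFIG_ALLOWS_VERSIONS = False

TPM_VERSION_SUPPORTED_VERSIONS = {
    False: ['2.0', '2'],
    True: ['2.0', '2', '1.2'],
}

def tpm_version_supported(version: str) -> bool:
    # Which versions are usable depends on whether testcloud
    # accepts a TPM version parameter.
    return version in TPM_VERSION_SUPPORTED_VERSIONS[TPM_CONFIG_ALLOWS_VERSIONS]
```

With the older testcloud behavior assumed here, requesting TPM 1.2 would be rejected while 2.0 is accepted.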
- class tmt.steps.provision.testcloud.TestcloudGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.guest.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1)
Bases: GuestSshData
- arch: str = 'x86_64'
- connection: str = 'session'
- disk: Size | None = None
- image: str = 'fedora'
- image_url: str | None = None
- instance_name: str | None = None
- list_local_images: bool = False
- memory: Size | None = None
- show(*, keys: list[str] | None = None, verbose: int = 0, logger: Logger) None
Display guest data in a nice way.
- Parameters:
keys – if set, only these keys would be shown.
verbose – desired verbosity. Some fields may be omitted in low verbosity modes.
logger – logger to use for logging.
- stop_retries: int = 10
- stop_retry_delay: int = 1
- tmt.steps.provision.testcloud.UserNetworkConfiguration: Any
- tmt.steps.provision.testcloud.Workarounds: Any
- tmt.steps.provision.testcloud.X86_64ArchitectureConfiguration: Any
- tmt.steps.provision.testcloud.import_testcloud(logger: Logger) None
Import testcloud module only when needed
Module contents
- class tmt.steps.provision.Provision(*, plan: Plan, data: _RawStepData | list[_RawStepData], logger: Logger)
Bases: Step
Provision an environment for testing or use localhost.
Initialize provision step data
- DEFAULT_HOW: str = 'virtual'
- property ansible_inventory_path: Path
Get the path to the Ansible inventory. This property lazily generates the Ansible inventory file on first access.
- Returns:
Path to the generated inventory.yaml file
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- cli_invocations: list['tmt.cli.CliInvocation'] = []
- get_guests_info() list[tuple[str, str | None]]
Get a list containing the names and roles of guests that should be enabled.
- go(force: bool = False) None
Provision all guests
- guests: list[Guest]
All known guests.
Warning
Guests may not necessarily be fully provisioned. They are collected from plugins as soon as possible, and guests may easily be still waiting for their infrastructure to finish the task. For the list of successfully provisioned guests, see ready_guests.
- property is_multihost: bool
- load() None
Load guest data from the workdir
- property ready_guests: list[Guest]
All successfully provisioned guests.
Most of the time, after the provision step finishes successfully, the list should be the same as guests, i.e. it should contain all known guests. There are situations when ready_guests will be a subset of guests, and their users must decide which collection is the best for the desired goal:
when provision is still running: ready_guests will be slowly gaining new guests as they get up and running.
in dry-run mode: no actual provisioning is expected to happen, therefore there are no unsuccessfully provisioned guests. In this mode, all known guests are considered ready, and ready_guests is the same as guests.
if tmt is interrupted by a signal or the user: not all guests will finish their provisioning process, and ready_guests may contain just the finished ones.
- save() None
Save guest data to the workdir
- summary() None
Give a concise summary of the provisioning
- suspend() None
Suspend the step.
Perform any actions necessary before quitting the step and tmt. The step may be revisited by future tmt invocations.
- wake() None
Wake up the step (process workdir and command line)
- class tmt.steps.provision.ProvisionPlugin(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: GuestlessPlugin[ProvisionStepDataT, None]
Common parent of provision plugins.
Store plugin name, data and parent step
- classmethod base_command(usage: str, method_class: type[Command] | None = None) Command
Create base click command (common for all provision plugins)
- classmethod clean_images(clean: Clean, dry: bool, workdir_root: Path) bool
Remove the images of one particular plugin
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]
Collect all essential requirements of the guest implementation.
Essential requirements of a guest are necessary for the guest to be usable for testing.
By default, the plugin’s guest class, ProvisionPlugin._guest_class, is asked to provide the list of required packages via the Guest.requires() method.
- Returns:
a list of requirements.
- go(*, logger: Logger | None = None) None
Perform actions shared among plugins when beginning their tasks
- how: str = 'virtual'
- opt(option: str, default: Any | None = None) Any
Get an option from the command line options
- classmethod options(how: str | None = None) list[Callable[[Any], Any]]
Return list of options.
- show(keys: list[str] | None = None) None
Show plugin details for given or all available keys
- class tmt.steps.provision.ProvisionQueue(name: str, logger: Logger)
Bases: Queue[ProvisionTask]
Queue class for running provisioning tasks.
- enqueue(*, phases: list[ProvisionPlugin[ProvisionStepData]], logger: Logger) None
- class tmt.steps.provision.ProvisionStepData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None)
Bases: StepData
- role: str | None = None
- class tmt.steps.provision.ProvisionTask(phases: list[ProvisionPlugin[ProvisionStepData]], logger: Logger)
Bases: GuestlessTask[None]
A task to run provisioning of multiple guests.
- go() Iterator[ProvisionTask]
Perform the task.
Called by the Queue machinery to accomplish the task.
Invokes the run() method to perform the task itself; derived classes therefore must provide an implementation of the run method.
- Yields:
instances of the same class, describing invocations of the task and their outcome. The task might be executed multiple times, depending on how exactly it was queued, and method would yield corresponding results.
- property name: str
A name of this task.
Left for child classes to implement, because the name depends on the actual task.
- phase: ProvisionPlugin[ProvisionStepData] | None = None
When a ProvisionTask instance is received from the queue, phase points to the phase that has been provisioned by the task.
- phases: list[ProvisionPlugin[ProvisionStepData]]
Phases describing guests to provision. In the provision step, each phase describes one guest.