### APIs

These config entries are about where the new API instances can be found. While a lot of things can be done offline, such as running jobs locally, the main use case is to use the API to store and retrieve data and to send and receive events via the Pub/Sub interface. It relies on API tokens, which are managed as secrets and therefore not kept in the YAML configuration file. So right now, we just need the URL. For example:

```yaml
APIs:
  docker-host:
    url: http://172.17.0.1:8001
  staging.kernelci.org:
    url: https://staging.kernelci.org:9000
```

I believe it's a bit unusual to have uppercase letters in YAML tags, but
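To illustrate how the URL and the token stay separate, here is a rough Python sketch (not actual kernelci-core code) of a service reading an API entry from the YAML configuration while taking its token from the environment; the file path and the `KCI_API_TOKEN` variable name are made up for the example.

```python
# Hypothetical sketch: pair an API entry from the YAML config with a token
# kept outside it (e.g. in the environment). Path and variable names are
# examples only, not the actual kernelci-core interface.
import os
import yaml

with open("config/pipeline.yaml") as config_file:
    config = yaml.safe_load(config_file)

api = config["APIs"]["docker-host"]
token = os.environ.get("KCI_API_TOKEN")  # secret, never stored in the YAML
print(f"API URL: {api['url']}, token {'set' if token else 'missing'}")
```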
### storage

The new API doesn't provide any built-in storage solution; instead, it just requires all artifacts to be available via a URL. This is where the "storage" YAML configuration comes into play, with a "type" attribute to specify which kind of storage this is. The type is then used to look up an implementation module based on an abstract `Storage` interface class.

```yaml
storage:
  docker-host:
    type: ssh
    host: 172.17.0.1
    port: 8022
    base_url: http://172.17.0.1:8002/
  staging.kernelci.org:
    type: ssh
    host: staging.kernelci.org
    port: 9022
    base_url: http://staging.kernelci.org:9080/
```
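To make the type lookup concrete, here is a rough sketch of the pattern described above, with a hypothetical `Storage` base class and an SSH implementation; the actual class and module names in kernelci-core may differ.

```python
# Hypothetical sketch of the "type" -> implementation lookup; class and
# method names are illustrative, not the actual kernelci-core API.
from abc import ABC, abstractmethod


class Storage(ABC):
    """Abstract interface: upload artifacts and return their public URL."""

    def __init__(self, config):
        self.config = config

    @abstractmethod
    def upload(self, local_path, dest_path):
        """Upload a file and return the URL where it can be downloaded."""


class SSHStorage(Storage):
    def upload(self, local_path, dest_path):
        # A real implementation would copy local_path to
        # config['host']:config['port']; the artifact then becomes
        # available under config['base_url'].
        return self.config["base_url"] + dest_path


STORAGE_TYPES = {"ssh": SSHStorage}


def get_storage(config):
    """Instantiate the right Storage class based on the 'type' attribute."""
    return STORAGE_TYPES[config["type"]](config)
```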
### jobs

A "job" is essentially anything that can be run. This is a lot more generic than the test and build configurations used with the current Jenkins pipeline. There are still a few assumptions about "jobs" though, as they typically come with a Jinja2 template to generate a job definition. This is in principle independent of the runtime environment, so the same template can be used to run the job in a local shell, in a Docker container, in Kubernetes, in LAVA, etc. It will do this by including a different base template that's specific to the chosen runtime. Jobs are also expected to require an "image" name, which can be pretty much anything that's usable by the runtime environment, typically a Docker image or a root file system.

Another difference with the current Jenkins pipeline is that jobs have explicit dependencies on other jobs. For example, a KUnit job or a kernel build job will require a source code tarball, and a kselftest job will require a particular kernel build. This hinges on the Pub/Sub interface: every time a job node changes state, an event is sent. A pipeline service orchestrating the jobs will then look for jobs whose requirements match the event and run them. This is what the `run_on` attribute describes:

```yaml
# Not directly loaded into the config, only used for YAML aliases in this file
_anchors:
  checkout: &checkout-node
    - channel: node
      name: checkout
      result: pass
  kbuild: &kbuild-node
    - channel: node
      name: kbuild
      result: pass

jobs:
  baseline:
    template: baseline.jinja2
    run_on: *kbuild-node
  checkout:
    template: checkout.jinja2
    image: kernelci/kernelci
    run_on:
      - channel: trigger
        name: kernel-tree
  kbuild-gcc-10-arm: &gcc-10-arm
    template: kbuild.jinja2
    name: kbuild
    image: kernelci/gcc-10:arm-kernelci
    run_on: *checkout-node
    params:
      arch: arm64
      compiler: gcc-10
      defconfig: '*_defconfig'
  kbuild-gcc-10-arm-kselftest:
    <<: *gcc-10-arm
    params:
      arch: arm64
      compiler: gcc-10
      defconfig: multi_v7_defconfig
      fragments: [kselftest]
  kbuild-gcc-10-x86:
    template: kbuild.jinja2
    name: kbuild
    image: kernelci/gcc-10:x86-kernelci
    run_on: *checkout-node
    params:
      arch: x86_64
      compiler: gcc-10
      defconfig:
        - x86_64_defconfig
        - allmodconfig
  kselftest-landlock:
    template: kselftest.jinja2
    run_on:
      <<: *kbuild-node
      data.defconfig: kselftest
  kunit: &kunit
    template: kunit.jinja2
    name: kunit
    image: kernelci/gcc-10:x86-kunit-kernelci
    run_on: *checkout-node
  kunit-x86_64:
    <<: *kunit
    params:
      arch: x86_64
```
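As an illustration of the event-driven scheduling idea, here is a simplified sketch of how a pipeline service could match a Pub/Sub event against `run_on` filters. It assumes `run_on` is a flat list of key/value filters and ignores nested fields such as `data.defconfig`; none of this is the final design.

```python
# Simplified sketch of run_on matching: find the jobs triggered by an event.
# Assumes run_on is a flat list of key/value filters (no nested fields).
def event_matches(event, run_on_filters):
    """True if the event satisfies at least one run_on filter."""
    return any(
        all(event.get(key) == value for key, value in flt.items())
        for flt in run_on_filters
    )


def jobs_to_run(event, jobs):
    """Names of the jobs whose run_on filters match the event."""
    return [
        name for name, job in jobs.items()
        if event_matches(event, job.get("run_on", []))
    ]


# A successful kbuild node would produce an event like this, which matches
# the run_on filter of the "baseline" job above:
event = {"channel": "node", "name": "kbuild", "result": "pass"}
```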
### runtimes

Jobs are run in runtime environments, or "runtimes". Currently, the only runtimes implemented are a local shell directly on the host, Docker containers and Kubernetes. The next ones should include LAVA and other hardware labs, such as LabGrid or custom ones that don't rely on common frameworks. Like the storage configurations, runtimes have a "type" attribute used to look up the implementation for each kind of runtime environment. Like jobs, runtimes can have arbitrary parameters passed to the template engine via the "params" attribute.

```yaml
runtimes:
  docker:
    type: docker
    env_file: .docker-env
    volumes:
      - '/home/user/kernelci/data:/home/kernelci/data'  # example
  lava-broonie:
    type: lava
    url: https://lava.sirena.org.uk/RPC2/
    job_priority:
      min: 0
      max: 40
    queue_timeout:
      days: 1
  k8s-gke-eu-west4:
    type: kubernetes
    context: gke_android-kernelci-external_europe-west4-c_kci-eu-west4
    params:
      spec:
        ttlSecondsAfterFinished: 30
        template:
          spec:
            tolerations:
              - key: "kubernetes.azure.com/scalesetpriority"
                operator: "Equal"
                value: "spot"
                effect: "NoSchedule"
  shell:
    type: shell
```
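As a rough illustration of how `params` could reach the template engine, here is a minimal Jinja2 sketch; the template directory, file name and variables are assumptions, not the actual kernelci-core layout.

```python
# Rough sketch: render a job definition from a Jinja2 template using the
# "params" values. The template path and variable names are assumptions.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("config/templates"))
template = env.get_template("kbuild.jinja2")  # assumed template location
job_definition = template.render(
    arch="x86_64",
    compiler="gcc-10",
    defconfig="x86_64_defconfig",
)
print(job_definition)
```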
### platforms

Runtime environments often provide a variety of platforms which can be used to run jobs. For example, Kubernetes clusters may provide different kinds of pods, and hardware test labs may provide different types of boards or servers. This is covered by the platform configurations. Each platform is tied to a runtime environment type so that the services running jobs can tell which ones need to be submitted for which runtime environment. Then specific fields can be provided depending on the platform's associated runtime type. For example, hardware platforms have a particular CPU architecture that the runtime implementation needs to use to determine whether it's compatible with a particular kernel build.

```yaml
platforms:
  kubernetes:
    runtime-type: kubernetes
  bcm2837-rpi-3-b:
    runtime-type: lava
    arch: arm64
    boot_method: uboot
    oem: broadcom
    params:
      context:
        console_device: ttyS1
        test_character_delay: 10
  qemu_x86_64:
    runtime-type: lava
    name: qemu
    arch: x86_64
    boot_method: qemu
    oem: qemu
    params:
      context:
        arch: x86_64
        cpu: qemu64
        guestfs_interface: ide
  shell:
    runtime-type: shell
  shell-dash:
    runtime-type: shell
    name: shell
    params:
      shebang: /bin/dash
```
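Here is a small sketch of how a service could use these entries to pick the platforms compatible with a given kernel build; the function and its selection rules are hypothetical, based only on the fields shown above.

```python
# Hypothetical helper: select the platforms usable for a given runtime type
# and kernel architecture, based only on the fields shown in the YAML above.
def compatible_platforms(platforms, runtime_type, kernel_arch):
    return [
        name for name, platform in platforms.items()
        if platform.get("runtime-type") == runtime_type
        and platform.get("arch", kernel_arch) == kernel_arch
    ]


# e.g. compatible_platforms(platforms, "lava", "arm64") -> ["bcm2837-rpi-3-b"]
```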
### scheduler

Finally, a list of scheduler combinations is used to tie things together. This is like a switchboard connecting jobs, runtimes and platforms. As such, each entry is anonymous, so it takes the form of a list of dictionaries. The initial proposal is to have at least a job and a runtime defined in each scheduler entry, but we might need to refine it with additional criteria. The idea is that job, runtime and platform configurations should be self-contained, and all the "wiring" between them is done in the scheduler.

```yaml
scheduler:
  - job: kunit
    runtime:
      name: k8s-gke-eu-west4
  - job: kbuild-gcc-10-x86
    runtime:
      type: kubernetes
  - job: baseline
    runtime:
      type: lava
  - job: ltp-crypto
    runtime:
      type: lava
    platforms:
      - bcm2837-rpi-3-b
  - job: v4l2-compliance
    runtime:
      name: lava-lkft
    platforms:
      - bcm2837-rpi-3-b
```
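To show how the switchboard idea could work, here is a hypothetical sketch that resolves a scheduler entry into concrete (job, runtime, platform) combinations, where a runtime may be selected by name or by type; the resolution rules are an assumption based on this proposal, not a finalised algorithm.

```python
# Hypothetical sketch: expand one scheduler entry into concrete combinations.
# A runtime can be pinned by name, or any runtime of the given type can match.
def resolve(entry, runtimes):
    selector = entry["runtime"]
    if "name" in selector:
        runtime_names = [selector["name"]]
    else:
        runtime_names = [
            name for name, runtime in runtimes.items()
            if runtime["type"] == selector["type"]
        ]
    platform_names = entry.get("platforms", [None])
    return [
        (entry["job"], runtime_name, platform_name)
        for runtime_name in runtime_names
        for platform_name in platform_names
    ]


# e.g. resolve({"job": "kbuild-gcc-10-x86", "runtime": {"type": "kubernetes"}},
#              runtimes) -> [("kbuild-gcc-10-x86", "k8s-gke-eu-west4", None)]
```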
Along with the new KernelCI API & Pipeline currently under active development, the YAML pipeline configuration is also going through a redesign. This discussion is to go through the proposals and collect feedback before we can start working on a final implementation and documentation.

Here's some related documentation to provide some context:

The new `kci config` tool should also provide ways to manipulate the YAML configuration, for example to check which jobs would be run upon receiving a particular event, or to list things in a more human-readable way. Then ideally some schema should be created using e.g. `pydantic` for the YAML configuration objects, which could also serve as a basis for documentation. We should also have a version for this schema and remove the YAML configuration from the code in `kernelci-core` to actually make it user application data.

See also the email thread: [RFC] KernelCI YAML pipeline configuration
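As a rough idea of what such a schema could look like, here is a minimal pydantic sketch for the `APIs` section; the class and field names are placeholders rather than an agreed schema.

```python
# Placeholder sketch of a pydantic schema for the "APIs" section of the YAML
# configuration; not an agreed schema, just an illustration of the idea.
from typing import Dict
from pydantic import BaseModel, AnyHttpUrl


class APIConfig(BaseModel):
    url: AnyHttpUrl


class PipelineConfig(BaseModel):
    APIs: Dict[str, APIConfig]


config = PipelineConfig.parse_obj(
    {"APIs": {"docker-host": {"url": "http://172.17.0.1:8001"}}}
)
```

Such models could then serve both as validation for the loaded YAML and as a basis for generated documentation, as suggested above.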