From 9cc1afc6cd2b5b5a3d0cf55d20c603fbb81f09c8 Mon Sep 17 00:00:00 2001
From: Denys Fedoryshchenko
Date: Wed, 24 Jul 2024 09:21:25 +0300
Subject: [PATCH] Improve workflows (#2)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* src/scheduler: store error message when job fails with "submit_error"

It is helpful for debugging to catch the error message when the scheduler fails to submit a job to the runtime. Store the error message in the `data.error_msg` field.

Signed-off-by: Jeny Sadadia

* config: pipeline: Set minimum kernel version for DT kselftest to 6.7

The test was introduced upstream in version 6.7, so there is no point in trying to run it on earlier versions.

Signed-off-by: Nícolas F. R. A. Prado

* configs/: Update volteer device

Update volteer devices according to lab availability.

Signed-off-by: Denys Fedoryshchenko

* result_summary templates: detailed output for active/inactive regressions

Signed-off-by: Ricardo Cañuelo

* result_summary: new presets for active regressions

Signed-off-by: Ricardo Cañuelo

* result_summary: update CHANGELOG

Signed-off-by: Ricardo Cañuelo

* data: chmod -R 777 ./data/output to avoid permission error

Avoid errors like:
PermissionError: [Errno 13] Permission denied: '/home/kernelci/data/output/stable-rc-boot.html'

Signed-off-by: Helen Koike

* result_summary: move code to _get_logs

Signed-off-by: Helen Koike

* result_summary: use ThreadPoolExecutor to fetch logs

Fetching logs is the bottleneck of the script. Fetch them in parallel with ThreadPoolExecutor (see the sketch below).

Signed-off-by: Helen Koike

* result_summary: fix result presets

stable-rc-build-failures and stable-rc-boot-failures weren't querying specifically for test failures.

Signed-off-by: Ricardo Cañuelo

* src/regression_tracker: rework regression detection

Take into account "active" and "inactive" regressions when creating them and when processing new passed or failed nodes. When a node passes, it checks if it "inactivates" an existing "active" regression. When a node fails, it checks if it needs to create a new regression or update an existing "active" one.

Signed-off-by: Ricardo Cañuelo

* src/regression_tracker: link failed nodes to active regressions

When a failed node generates a regression, or when it's a re-run of a run that generated a still active regression, link the node to the regression id.

Signed-off-by: Ricardo Cañuelo

* result_summary: support for date ranges for creation and update

New command line options to let the user specify date ranges for node creation and last update: --created-from, --created-to, --last-updated-from, --last-updated-to.

Signed-off-by: Ricardo Cañuelo

* result_summary templates: support for date ranges for creation and last update

Signed-off-by: Ricardo Cañuelo

* result_summary: support for extra query parameters in cmdline

New command line option: --query-params to specify a set of extra query parameters to complete or override preset parameters.

Signed-off-by: Ricardo Cañuelo

* result_summary presets: html markup in some preset titles

Signed-off-by: Ricardo Cañuelo

* result_summary changelog: update and move to docs folder

Signed-off-by: Ricardo Cañuelo

* result_summary: move parameter loading and processing to 'setup'

Signed-off-by: Ricardo Cañuelo
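[Editor's note] The parallel log fetching described in the ThreadPoolExecutor entry above could look roughly like the following; this is a minimal sketch, not the actual result_summary code, and `fetch_log`, the `artifacts` key layout and the node dictionaries are assumptions made for illustration:

```
# Sketch: fetch node logs concurrently; log fetching is I/O-bound, so
# threads overlap the network round-trips.
from concurrent.futures import ThreadPoolExecutor

import requests


def fetch_log(node):
    """Download the log artifact of a node (hypothetical URL layout)."""
    url = node.get('artifacts', {}).get('lava_log')
    if not url:
        return node['id'], None
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    return node['id'], resp.text


def fetch_all_logs(nodes, max_workers=16):
    # executor.map yields (node_id, text) tuples in input order
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return dict(executor.map(fetch_log, nodes))
```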
* result_summary: refactor and split into two classes (single, run)

Split the ResultSummary class into a base class and two child classes: ResultSummarySingle and ResultSummaryLoop (only a stub at this point).

Signed-off-by: Ricardo Cañuelo

* result_summary: WIP initial implementation of the "loop" command

Signed-off-by: Ricardo Cañuelo

* result_summary: huge refactoring

Implement "summary" (single-shot) and "monitor" (loop) modes based on preset parameters instead of on the command-line main command. Split the logic into multiple files: move all monitor-specific and summary-specific code to independent files, with the common code in a separate file. Full of kludges; I don't like how this is looking so far and might consider reimplementing it without any dependencies on pipeline code.

Signed-off-by: Ricardo Cañuelo

* result_summary templates: fix markup and indentation

Signed-off-by: Ricardo Cañuelo

* result_summary: new generic templates for monitor mode

Signed-off-by: Ricardo Cañuelo

* result_summary presets: examples for "monitor" and "summary" modes

Signed-off-by: Ricardo Cañuelo

* result_summary changelog: summary and monitor modes

Signed-off-by: Ricardo Cañuelo

* result_summary templates: fix generic regression report

Signed-off-by: Ricardo Cañuelo

* result_summary: summary: fix last_updated option handling

Signed-off-by: Ricardo Cañuelo

* result_summary: embed css stylesheet in html files

Signed-off-by: Ricardo Cañuelo

* regression_tracker: [trivial] make regression active by default

Fixup for commit fcb29501663d78920bcd129bd57c36b9af624bc4. If the "result" field is ever made non-optional in the models we can probably remove this.

Signed-off-by: Ricardo Cañuelo

* regression_tracker: [trivial] set default empty node sequence

Fixup for commit fcb29501663d78920bcd129bd57c36b9af624bc4. If the "node_sequence" field is ever made non-optional in the models we can probably remove this.

Signed-off-by: Ricardo Cañuelo

* result_summary: add cmdline option --output-dir

Introduce a new command-line option: --output-dir, and rename the old --output to --output-file.

Signed-off-by: Ricardo Cañuelo

* result_summary changelog: command-line options change

Signed-off-by: Ricardo Cañuelo

* config: jobs-chromeos: remove meaningless Tast tests

Several Tast tests can only fail in the context of KernelCI:

* `video.PlatformDecoding.v4l2_state*_vp9_0_svc` do not actually exist, causing the whole test job to fail
* `platform.DLCService*` and `platform.Memd` rely on features only present in the downstream Chrom{e,ium}OS kernel (see b/247467814 and b/244479619 for those having access to Google's issue tracker)
* `kernel.ConfigVerify.chromeos` relies on downstream-only config options such as `CONFIG_SECURITY_CHROMIUMOS` and other similar ones, and therefore can only fail when testing upstream kernels

Signed-off-by: Arnaud Ferraris

* config: scheduler-chromeos: don't execute non-working Tast tests

Currently, HEVC-related tests are known to either fail or be skipped as ChromeOS doesn't yet handle hardware decoding of HEVC media. This is expected to be fixed at some point, though, so we're keeping the job definitions and only removing the corresponding scheduler entries in order to reinstate those jobs when relevant.

Signed-off-by: Arnaud Ferraris

* config: jobs-chromeos: exclude Tast tests known to always fail

Several decoder tests always fail on all platforms where they're executed, adding only noise to otherwise useful test results. Disable those to improve the quality of the results.
Signed-off-by: Arnaud Ferraris

* config: chromeos: add special case for pre-6.7 qcom codec tests

On Qualcomm-based Chromebooks (`trogdor` being the only model in Collabora's lab), we noticed systematic failures of all `vp9_*_frm_resize` and `vp9_*_sub8x8_sf` tests when using a kernel up to 6.6. With 6.7 and above, all of those tests (except one) now pass. It therefore makes sense to exclude those on pre-6.7 kernels so we don't report known failures and get rid of some noise. This involves "duplicating" the affected test jobs (although I did my best to minimize that) and setting rules so only the working variant is executed, based on the version of the kernel being tested.

Signed-off-by: Arnaud Ferraris

* lava_callback: Compress the log files to save storage space

As storage space in the cloud and egress have high costs, it is better to compress potentially large files.

Signed-off-by: Denys Fedoryshchenko

* tests: Add basic yaml validation

Add a YAML load step to catch issues with the YAML files earlier (see the sketch below).

Signed-off-by: Denys Fedoryshchenko

* config: chromeos: drop stoneyridge/pineview naming in platforms anchors

The "stoneyridge" and "pineview" naming used in the Chromebook platform anchors refers to ChromiumOS-specific config fragments, but doesn't necessarily match the actual platform of all the devices listed. Use more generic names to distinguish amd and intel Chromebooks.

Signed-off-by: Laura Nao

* config: chromeos: rename test job anchors that use chromeos specific configs

Rename test job anchors that use chromeos-specific kernel configurations to include the 'chromeos' infix.

Signed-off-by: Laura Nao

* config: chromeos: add baseline tests

Enable the baseline tests on all the supported Chromebooks with their default kernel configuration.

Signed-off-by: Laura Nao

* config: chromeos: drop stoneyridge/pineview naming in job defs

The "stoneyridge" and "pineview" naming used in some Chromebook job definitions refers to ChromiumOS-specific config fragments, but doesn't necessarily match the actual platforms targeted by the jobs. Replace all occurrences with more generic intel/amd naming.

Signed-off-by: Laura Nao

* config: chromeos: drop chromeos infix from baseline jobs

Keeping different job names for tests targeting different kernel configs might cause too much duplication. Drop the 'chromeos' infix from the job name for the tests using the chromeos config fragment. Users will be able to filter the results using the data.defconfig/data.config_full fields anyway.

Signed-off-by: Laura Nao

* result_summary: post-process results for summary and monitor modes

Split the post-processing of nodes into a common function that can be used for both summary and monitor modes. Currently, post-processing involves only the collection of logs.

Signed-off-by: Ricardo Cañuelo

* result_summary templates: update and fix presets and templates

Signed-off-by: Ricardo Cañuelo

* doc/result-summary-CHANGELOG: update

Signed-off-by: Ricardo Cañuelo

* config/pipeline.yaml: enable 'BayLibre' lab

Add lab configuration for BayLibre.

Signed-off-by: Jeny Sadadia

* docker-compose.yaml: add `lab-baylibre` runtime

Add runtime argument `lab-baylibre` to the `scheduler-lava` container. This will enable the pipeline to run and submit jobs to BayLibre.

Signed-off-by: Jeny Sadadia
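[Editor's note] The basic YAML validation mentioned above amounts to loading every config file so syntax errors surface in CI before deployment. A minimal sketch, assuming the configs live under `config/` (the real tests/validate_yaml.py grew further checks later in this series):

```
# Sketch: load every pipeline YAML file and report parse errors.
import glob
import sys

import yaml

errors = 0
for path in glob.glob('config/*.yaml'):
    try:
        with open(path) as f:
            yaml.safe_load(f)
    except yaml.YAMLError as exc:
        print(f'{path}: {exc}')
        errors += 1

sys.exit(1 if errors else 0)
```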
* config/pipeline.yaml: add `baseline-x86-baylibre` job

Add job configuration `baseline-x86-baylibre` for BayLibre. Add a scheduler entry as well.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add `baseline-armel-baylibre` job

Add job configuration `baseline-armel-baylibre` for BayLibre. Add a scheduler entry and platform config as well.

Signed-off-by: Jeny Sadadia

* config/pipeline: enable `android` tree and build configs

Monitor the linux `android` tree. Add build configs for the `android-mainline` branch.

Signed-off-by: Helen Koike

* config/pipeline.yaml: add kbuild definitions for android-mainline

Add kbuild jobs to compile the kernel for the android-mainline branch.

Signed-off-by: Helen Koike

* config/pipeline.yaml: add entries to schedule to build android-mainline

Add entries to the `scheduler:` section to run the builds for android-mainline.

Signed-off-by: Helen Koike

* result_summary: fix node filter in monitor mode

Signed-off-by: Ricardo Cañuelo

* kernelci.toml: set `checkout` node timeout to `180 min`

The currently set `60 min` timeout is not enough, as some `kbuild` jobs and their sub-tests take around 2 hrs to complete after getting submitted to the runtime. Here is an example from staging; see the information for a `checkout` and its child nodes:

| id | name | created | updated | timeout |
|--------------------------|---------------------|----------------------------|----------------------------|----------------------------|
| 661c9d59b60b785eb9fc42b0 | checkout | 2024-04-15T03:22:01.317000 | 2024-04-15T03:51:03.870000 | 2024-04-15T04:22:01.284000 |
| 661c9d97b60b785eb9fc42b4 | kbuild-gcc-10-arm64 | 2024-04-15T03:23:03.399000 | 2024-04-15T03:50:15.031000 | 2024-04-15T09:23:03.399000 |
| 661ca3f7b60b785eb9fc4ead | baseline-arm64 | 2024-04-15T03:50:15.304000 | 2024-04-15T05:09:45.247000 | 2024-04-15T09:50:15.304000 |

Signed-off-by: Jeny Sadadia

* result_summary: add email report capabilities for monitor mode

Signed-off-by: Ricardo Cañuelo

* result_summary templates: plain text single report templates

Signed-off-by: Ricardo Cañuelo

* config: chromeos: add baseline-nfs tests

Enable the baseline-nfs tests on all the supported Chromebooks, with both the default and the chromeos kernel configurations.

Signed-off-by: Laura Nao

* src/timeout: set `checkout` result

For `TIMEOUT` mode, set the `checkout` node result to `fail` if its state is `running`, as that means the source code checkout was still going on when the node timed out. Set it to `pass` if its state is anything other than `running`. Set the `checkout` node result to `pass` if the mode is `DONE`, as that means the `checkout` has been in the `available` or `closing` state and could successfully complete the source code checkout (see the sketch below).

Signed-off-by: Jeny Sadadia

* regression_tracker: bugfix, failed test with no prior runs

Handle the case of a failed test run when it's the first occurrence of that test case. Consider it "not a regression" for now, since we're defining a regression as a "breaking point" between a success and a failure.

Signed-off-by: Ricardo Cañuelo

* config: platforms-chromeos: fix dalboz device type

Due to a copy/paste mishap, the device type for `asus-CM1400CXA-dalboz` had a trailing `_chromeos`, leading LAVA to fail to find the correct device type, so no job from the new system was running on this platform.

Signed-off-by: Arnaud Ferraris
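[Editor's note] The `checkout` result rules in the src/timeout entry above can be summarised as a small decision function; this is a sketch transcribing the commit message, with the function name and string values assumed for illustration:

```
# Sketch: derive the checkout node result from the service mode and
# the node's current state, as described in the commit message.
def checkout_result(mode, state):
    if mode == 'DONE':
        # the node reached 'available'/'closing', so the source code
        # checkout completed successfully
        return 'pass'
    if mode == 'TIMEOUT':
        # still 'running' means the checkout never finished in time
        return 'fail' if state == 'running' else 'pass'
    return None
```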
* config: jobs-chromeos: run Tast tests only on 5.4+

Current ChromeOS images have `ext4` filesystems using options not present in 4.19. Therefore tests cannot run on kernels that old, and this leads to false positives in corrupt device identification, so we should only run those tests on 5.4 and later kernels.

Signed-off-by: Arnaud Ferraris

* config: platforms-chromeos: drop non-existent platform

`hp-x360-12b-ca0500na-n4000-octopus` isn't a device type available in Collabora's LAVA lab, so let's drop its definition.

Signed-off-by: Arnaud Ferraris

* config: exclude android tree from kbuild jobs

Only Android-specific kbuild jobs should run for this tree; let's not overload our system with unneeded builds. Take this opportunity to limit mediatek kbuilds to 6.1+, as that's the earliest version that has upstream support for at least one of our devices.

Signed-off-by: Arnaud Ferraris

* src/timeout: a bug fix in `_submit_lapsed_nodes`

Fix a glitch in the code related to setting the `checkout` node result.

Fixes: 361fc0d ("src/timeout: set `checkout` result")
Signed-off-by: Jeny Sadadia

* pipeline.yaml: Update early access FQDN

We are moving k8s from eastus to westus3 as it is cheaper.

Signed-off-by: Denys Fedoryshchenko

* src/tarball: fix `_kdir` in `update_repo`

Fix the below error:
```
kernelci-pipeline-tarball | File "/home/kernelci/./pipeline/tarball.py", line 79, in _update_repo
kernelci-pipeline-tarball |     kernelci.shell_cmd(f"rm -rf {self._kdir}")
kernelci-pipeline-tarball |     ^^^^^^^^^^
kernelci-pipeline-tarball | AttributeError: 'Tarball' object has no attribute '_kdir'
```

Fixes: 0a2fe9c ("src/patchset.py: Implement Patchset service")
Signed-off-by: Jeny Sadadia

* src/timeout: fix method to get child nodes recursively

`TimeoutService._get_child_nodes_recursive` is used to get pending child nodes recursively for closing and timed-out nodes. It overwrites the result while being called recursively. Fix the method to make it work properly (see the sketch below).

Signed-off-by: Jeny Sadadia

* config: pipeline: rename "armel" arch to "arm"

`armel` has various meanings depending on the system: for ChromeOS, it is ARMv7, while in Debian it's ARMv{5T,6}. Moreover, this project is *Kernel*CI and the kernel uses `arm` for all 32-bit ARM devices. In order to avoid confusion (including for those wondering what the heck `armel` means), let's rename `armel` to `arm`.

Signed-off-by: Arnaud Ferraris

* config: use per-system arch property where relevant

With the new `*arch` fields present in the platform configurations, we don't have to hardcode the architecture strings in some specific cases. Let's adapt the config files so we use `{cros,deb,k}arch` wherever it makes sense.

Signed-off-by: Arnaud Ferraris

* src/timeout: set timed-out `checkout` result

Set the timed-out `checkout` node result to `incomplete` while in the `running` state, as that denotes that the node timed out while the checkout was still going on. Also, set the error-related information, i.e. `error_code` and `error_msg`.

Signed-off-by: Jeny Sadadia

* src/tarball: update checkout node when update repo fails

Tarball updates the source code repo and creates a tarball. If the update repo operation fails even on the second attempt, it means it failed to check out the source code. Hence, update the `checkout` node with state `done` and result `fail`. Also, set the appropriate error information in the `data` field.

Signed-off-by: Jeny Sadadia

* config: pipeline: enable collabora-next tree and build config

Monitor the collabora-next tree. Add a build config for the for-kernelci branch.

Signed-off-by: Laura Nao
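[Editor's note] The recursion bug described in the `_get_child_nodes_recursive` entry above is a classic one: rebuilding the result list on every recursive call drops results from deeper levels. A sketch of the fix under assumed API helper names (`api.get_nodes` and the node dictionary shape are illustrative, not the real kernelci API):

```
# Sketch: accumulate matches into a single list threaded through the
# recursion instead of overwriting a local result on each call.
def get_child_nodes_recursive(api, node, state_filter, found=None):
    found = [] if found is None else found
    children = api.get_nodes({'parent': node['id']})  # hypothetical query
    for child in children:
        if child['state'] == state_filter:
            found.append(child)
        get_child_nodes_recursive(api, child, state_filter, found)
    return found
```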
* config: chromeos: enable acpi kselftest on collabora-next tree

Run the ACPI kselftest on the for-kernelci branch of the collabora-next tree.

See: https://lore.kernel.org/linux-kselftest/20240308144933.337107-1-laura.nao@collabora.com/T/#t

Signed-off-by: Laura Nao

* result_summary: restore missing split_query_params function

Restore this function that was accidentally removed during the last refactoring.

Signed-off-by: Ricardo Cañuelo

* lava_callback: Don't upload empty files to Azure

There is no use for a lot of empty files on Azure; they only complicate cleanup.

Signed-off-by: Denys Fedoryshchenko

* result_summary presets: unify preset and output names

Signed-off-by: Ricardo Cañuelo

* result_summary presets: update preset for aferraris

Signed-off-by: Ricardo Cañuelo

* result_summary presets: new presets for laura.nao

Signed-off-by: Ricardo Cañuelo

* result_summary presets: fixes and new presets for nfraprado

Signed-off-by: Ricardo Cañuelo

* result_summary presets: fix arch query parameters

Signed-off-by: Ricardo Cañuelo

* k8s: Lots of tested deployment fixes

Fixes in yaml files for the k8s production deployment.

Signed-off-by: Denys Fedoryshchenko

* result-summary presets: Fix build failure and regression monitors

Signed-off-by: Nícolas F. R. A. Prado

* result_summary: added debug traces to the monitor

Show detailed info of the node filtering in real time.

Signed-off-by: Ricardo Cañuelo

* result_summary: fix corner case bug when no logs are found

Cover the rare case where neither the node nor any of its parents up to the checkout node have any log artifacts (see the sketch below).

Signed-off-by: Ricardo Cañuelo

* result_summary presets: refine stable-rc presets

Signed-off-by: Ricardo Cañuelo

* result_summary templates: add regression info to test reports

Signed-off-by: Ricardo Cañuelo

* result_summary templates: escape log snippets

Signed-off-by: Ricardo Cañuelo

* src: lava_callback: add device ID to node data

It can be useful to know the exact device on which a job ran without having to open the LAVA job page. This is done by querying the device ID from the callback data and appending it to the node data.

Signed-off-by: Arnaud Ferraris

* src: lava_callback: upload raw callback data as well

Debugging callback issues is complex due to the raw data not being saved after processing. This change ensures we save the callback data as a JSON file in order to ease development.

Signed-off-by: Arnaud Ferraris

* DONOTMERGE lava_callback: add debug statements

Why the heck doesn't this just work???

Signed-off-by: Arnaud Ferraris

* result_summary_templates: fix error 'node' is undefined

The object is named test and not node, so s/node/test.

Signed-off-by: Helen Koike

* config/runtime/kunit: set architecture info

Set the architecture field for `kunit` test nodes. If no `arch` argument is supplied, kunit uses `um` (User Mode Linux) as the architecture to run tests.

Signed-off-by: Jeny Sadadia

* src/timeout: count running child jobs of build nodes

Add a method to count running jobs of `kbuild` nodes, i.e. jobs being submitted after successful builds, for example `baseline` or `tast` jobs.

Signed-off-by: Jeny Sadadia
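[Editor's note] The corner case in the "no logs are found" entry above comes from walking up the node hierarchy looking for a log artifact. A sketch of that walk with the corner case handled, under the assumption that nodes expose an `artifacts` dict and a `parent` id (both names are illustrative):

```
# Sketch: search the node and its ancestors for a log artifact,
# returning None instead of failing when the whole chain up to the
# checkout node has none.
def find_log_url(api, node):
    while node:
        artifacts = node.get('artifacts') or {}
        for name, url in artifacts.items():
            if 'log' in name:
                return url
        if not node.get('parent'):
            return None  # reached the checkout node without finding logs
        node = api.get_node(node['parent'])  # hypothetical API call
    return None
```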
* src/timeout: handle closing `checkout` node differently

Usually, `checkout` should be transitioned to the `done` state when all its child nodes are completed. In the case of a closing `checkout`, take into account running child jobs of build nodes before transitioning its state to `done`. Otherwise, `checkout` will be assigned the `done` state even if some child jobs are still running.

Signed-off-by: Jeny Sadadia

* src/timeout: handle holdoff reached `checkout` node differently

Usually, an available `checkout` for which holdoff is reached should be transitioned to the `done` state only when all its child nodes are completed. In the case of such a `checkout` node, take into account running child jobs of build nodes before transitioning its state to `done`. Otherwise, `checkout` will be assigned the `done` state even if some child jobs are still running.

Signed-off-by: Jeny Sadadia

* Revert "DONOTMERGE lava_callback: add debug statements"

This reverts commit 5ed8218d99840373bbba5830b1976813b52bf4b1.

Signed-off-by: Arnaud Ferraris

* Create dependabot.yml

* result_summary_templates: make generic-test-failures generic to all results

The generic-test-failures templates can be used to show general results by just replacing the name "failures" with "results". This makes it easier to re-use them by communities that want presets listing all test results, so: s/generic-test-failures/generic-test-results.

Signed-off-by: Helen Koike

* result-summary.yaml: add preset to list android build tests

Since we now build android, add a preset to allow result-summary.yaml to list all build results from the Android tree.

Signed-off-by: Helen Koike

* tarball: Implement checkout for specific commit

We often need not ToT but a specific commit; implement this.

Signed-off-by: Denys Fedoryshchenko

* jobs-chromeos.yaml: Disable module compression for every kernel version

Commit d4bbe942098b ("kbuild: remove CONFIG_MODULE_COMPRESS"), introduced in kernel v5.13, substituted CONFIG_MODULE_COMPRESS_NONE=y for CONFIG_MODULE_COMPRESS=n as the way to disable module compression. Since module compression causes "Invalid ELF header magic: != ELF" errors during boot on the ChromeOS base config, add the missing config to disable module compression on kernels > v5.13 as well.

Signed-off-by: Nícolas F. R. A. Prado

* src: lava_callback: reduce callback data size

The callback data is quite large, especially as it includes the full log, which we already upload separately. By dropping it and compressing the whole file with `gzip` we can avoid wasting too much storage space (see the sketch below).

Signed-off-by: Arnaud Ferraris

* src: lava_callback: don't leak secret token

The callback data contains the secret token value, which shouldn't be leaked. Ensure we drop it from the uploaded data.

Signed-off-by: Arnaud Ferraris

* config: platforms-chromeos: use new cros-flash image

This ensures we use the new version of the `install-modules` script.

Signed-off-by: Arnaud Ferraris

* src: regression_tracker: add the "device" field to regression data

This can be helpful. We're not using it as a search param though, as we don't want to narrow down the search that much; using the platform only is better.

Signed-off-by: Arnaud Ferraris

* config: result_summary_templates: report device used for job

This information is now available, and it can be useful to know the affected device without having to look at the LAVA job details.

Signed-off-by: Arnaud Ferraris

* kubernetes: Update deployment recipe

Update the list of labs and add the KCI_INSTANCE variable.

Signed-off-by: Denys Fedoryshchenko
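[Editor's note] The two callback-data entries above (drop the embedded log, drop the secret token, gzip the rest) combine into a small transformation; this is a sketch, and the `log`/`token` key names are assumptions about the LAVA callback payload rather than a verified layout:

```
# Sketch: shrink the uploaded callback data and strip secrets before
# compressing it for storage.
import gzip
import json


def prepare_callback_payload(callback_data: dict) -> bytes:
    data = dict(callback_data)
    data.pop('log', None)    # the full log is already uploaded separately
    data.pop('token', None)  # never leak the secret token
    return gzip.compress(json.dumps(data).encode('utf-8'))
```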
* lava-callback: Limit threads of lava-callback

Due to an inrush of LAVA callbacks and slow Azure Files processing, we need to make sure we don't spawn too many threads. Also add a hard memory limit of 1 GB.

Signed-off-by: Denys Fedoryshchenko

* result_summary presets: add presets for fluster test

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Make template generic for all v4l2 tests
- Rebase on main

* result_summary presets: make the name of fluster test generic

Signed-off-by: Muhammad Usama Anjum

* config: enable first fluster test for mt8195-cherry-tomato-r2

Enable the first fluster test, AV1-TEST-VECTORS, for mt8195-cherry-tomato-r2. Run the test on mainline and next until more trees are added.

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Create generic v4l2-decoder-conformance-job and use anchors from it
- Update the rootfs address
- Move anchor to _anchor
- Update with nitpicks

* config: jobs-chromeos: Add kernelci tree for testing purpose

Remove this commit before merging.

Signed-off-by: Muhammad Usama Anjum

* config: chromeos: Enable cpufreq kselftest

Enable the cpufreq kselftest on all the trees and branches.

Signed-off-by: Shreeya Patel

* result_summary presets: fix preset for kselftest-dt failures monitor

Signed-off-by: Ricardo Cañuelo

* result_summary presets: new presets for kselftest-cpufreq

Signed-off-by: Ricardo Cañuelo

* config: mt8195-cherry-tomato-r2: enable all fluster tests for all branches

Add all the trees and branches on which the tests will run. Enable all the tests for tomato.

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- The build config cannot be added yet. Just list the trees; it will only use the branches configured in build_configs:
  - mainline will use master
  - next will use master
  - collabora-chromeos-kernel will use for-kernelci
  - media will use master and fixes
- Remove kernelci tree as it was added just for testing purposes

* config: mt8183-kukui-jacuzzi-juniper-sku16: enable all supported fluster tests

Signed-off-by: Muhammad Usama Anjum

* config: mt8186-corsola-steelix-sku131072: enable all supported fluster tests

Signed-off-by: Muhammad Usama Anjum

* config: mt8192-asurada-spherion-r0: enable all supported fluster tests

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Don't specify the platforms manually as they are already mentioned in test-job-arm64-mediatek

* config: sc7180-trogdor-kingoftown/lazor-limozeen: enable all supported fluster tests

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Use test-job-arm64-qualcomm instead and create separate jobs for qualcomm devices
- Don't specify platforms manually as they are already mentioned in test-job-arm64-qualcomm

* build(deps): bump uwsgi from 2.0.21 to 2.0.22 in /docker/lava-callback

Bumps [uwsgi](https://uwsgi-docs.readthedocs.io/en/latest/) from 2.0.21 to 2.0.22.

---
updated-dependencies:
- dependency-name: uwsgi
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot]

* pipeline.yaml: Add stable-rc build variants

Add more build variants for the stable-rc tree to match the legacy system.

Signed-off-by: Denys Fedoryshchenko

* result_summary: add error classification

Classify errors according to patterns in the logs (see the sketch below).

Signed-off-by: Helen Koike

* result_summary presets: add collabora-chromeos-kernel and media trees for fluster tests

Signed-off-by: Muhammad Usama Anjum

* config: Use media-stage instead of media-tree

Signed-off-by: Muhammad Usama Anjum

* config/pipeline: enable android branches from legacy

Enable all android branches from the legacy system.

Signed-off-by: Helen Koike
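[Editor's note] The error classification entry above matches log contents against known patterns; a minimal sketch of the idea follows. The patterns shown are illustrative placeholders, not the ones shipped in the real traces_config.yaml:

```
# Sketch: map the first matching log pattern to an error category.
import re

ERROR_PATTERNS = [
    ('kernel_panic', re.compile(r'Kernel panic - not syncing')),
    ('oom', re.compile(r'Out of memory')),
    ('infra_error', re.compile(r'Unable to fetch|tftp.*timeout', re.I)),
]


def classify_error(log_text):
    for category, pattern in ERROR_PATTERNS:
        if pattern.search(log_text):
            return category
    return 'unknown'
```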
* trigger: Add exclude/include tree list for trigger

As we need to restrict the list of kernels running on staging, we need an option allowing that. It will also be good to exclude staging kernels from the production kernel list. In the case of staging we need to run kernels only from the "kernelci" tree and sometimes something else, for example "mediatek". The option will look like:
--trees kernelci,mediatek
or
--trees kernelci
On production we need to exclude the trees kernelci and buggytree:
--trees !kernelci,buggytree
or just kernelci:
--trees !kernelci
The purpose of this option is that our compiling capacity is limited, and right now staging and production are both compiling a very large set of kernels; we need to reduce this amount to cut costs (see the sketch below).

Signed-off-by: Denys Fedoryshchenko

* config: platforms-chromeos: use CrOS R124 files

Chromebooks were upgraded with a new image based on ChromiumOS R124, so we must use those files now.

Signed-off-by: Arnaud Ferraris

* config: jobs-chromeos: drop non-existent Tast tests

Those were removed between R120 and R124 and therefore cause test failures with the new images.

Signed-off-by: Arnaud Ferraris

* result_summary presets: fix acpi kselftest presets

We're interested in catching regressions and failures in both the kselftest-acpi test suites and their test cases. Match the nodes by group in the presets accordingly. Fix the template used by the failure monitor preset.

Signed-off-by: Laura Nao

* src: update return values of `APIHelper.receive_event_node`

The `APIHelper.receive_event_node` method is used to receive node data from a PubSub event. The method has been updated to return an `is_hierarchy` flag as well, which indicates events related to a node hierarchy. Update the pipeline services using the method accordingly.

Signed-off-by: Jeny Sadadia

* result_summary presets: refine presets for v4l2-decoder-conformance

Modify the regression preset to monitor regressions on both the v4l2-decoder-conformance test suites and their test cases, by matching the nodes by group instead of by name. Also, change the failure preset to monitor for all errors caused by runtime errors.

Signed-off-by: Laura Nao

* result_summary presets: add summary presets for v4l2-decoder-conformance

Add summary presets to fetch regressions and failures on v4l2-decoder-conformance tests. Two of the presets are the same ones used by the monitor; add one additional preset to fetch all the failures on both the test suites and their test cases.

Signed-off-by: Laura Nao

* lava_callback.py: Remove error_code/error_msg on lava-callback

Sometimes, due to congestion, a node might be set to timeout, but then the result might arrive late and we need to use it properly.

Signed-off-by: Denys Fedoryshchenko

* result_summary presets: fix dt kselftest presets

Fix the dt kselftest preset, just like was done for the acpi one, as the current preset doesn't match the actual results we're interested in.

Signed-off-by: Nícolas F. R. A. Prado

* doc/connecting-lab: refine documentation

Refine the documentation for connecting LAVA labs and submitting jobs to the lab.

Signed-off-by: Jeny Sadadia

* lava_callback: Sometimes we get a totally invalid log file uploaded

Most likely the problem lies in the threading of Flask, and possibly callbacks are getting mixed up. This commit attempts to introduce several countermeasures against that.

Signed-off-by: Denys Fedoryshchenko

* doc: add `_index.md` page

Add an index documentation page.

Signed-off-by: Jeny Sadadia
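[Editor's note] The `--trees` include/exclude semantics from the trigger entry above can be expressed as a tiny predicate; this sketch follows the syntax given in the commit message, and the function name is an assumption for illustration:

```
# Sketch: decide whether a tree should run under a --trees filter.
# "a,b" is an include list; "!a,b" is an exclude list.
def tree_allowed(trees_arg, tree):
    if not trees_arg:
        return True
    if trees_arg.startswith('!'):
        # exclude list: run everything except the named trees
        return tree not in trees_arg[1:].split(',')
    # include list: run only the named trees
    return tree in trees_arg.split(',')


assert tree_allowed('kernelci,mediatek', 'kernelci')
assert not tree_allowed('!kernelci', 'kernelci')
```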
* doc: add `pipeline-details` page

Move the `pipeline-details` documentation from the API repository to this repo to keep it close to the source.

Signed-off-by: Jeny Sadadia

* doc/connecting-lab: adjust `weight` property

Change the `weight` property of the existing doc page to accommodate the transition of pipeline-related docs to the pipeline repo.

Signed-off-by: Jeny Sadadia

* doc: add `developer-documentation` page

Add developer manual documentation.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add lab config for Qualcomm

Add an entry to the `runtimes` section for Qualcomm lab configurations.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add `baseline-x86` job for qualcomm

Add job configuration `baseline-x86-qualcomm` for running the baseline job in the Qualcomm LAVA lab. Add a scheduler entry as well.

Signed-off-by: Jeny Sadadia

* docker-compose.yaml: add lab-qualcomm runtime

Add runtime argument `lab-qualcomm` to the `scheduler-lava` container. This will enable the pipeline to run and submit jobs to the Qualcomm LAVA lab.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add `baseline-arm64` job for qualcomm

Add job configuration `baseline-arm64-qualcomm` for running the baseline job for `arm64` in the Qualcomm LAVA lab. Add a scheduler entry as well.

Signed-off-by: Jeny Sadadia

* pipeline.yaml: Update RISC-V configs

1) The rv32 defconfig doesn't exist; remove it.
2) nommu_k210_defconfig has modules disabled.

Signed-off-by: Denys Fedoryshchenko

* lava_callback.py: Sanitize lava log data

As we use this data in reports, let's remove all non-printable characters as they confuse grafana, browsers and others (see the sketch below).

Signed-off-by: Denys Fedoryshchenko

* config/runtime/kunit.jinja2: fix result map

Fix the result map for skipped tests. Initially, the API didn't have a `skip` node result available in the schema, which is why it was mapped to the `None` result. But the API now has a `skip` result to denote skipped tests. Fix the result mapping accordingly.

Signed-off-by: Jeny Sadadia

* config: jobs-chromeos: Add lab-setup fragment

Add the lab-setup fragment to the chromebook builds, which contains the architecture-independent kernel configs needed to run tests on the platform. Notably this disables IP autoconfig by the kernel. The result of this change is that the 12-second boot delay and the consequent deferred probe pending warnings will no longer happen on any platform, particularly on mt8186-corsola-steelix-sku131072 (due to a different network adapter being used) on which it was still happening.

Signed-off-by: Nícolas F. R. A. Prado

* lava_callback: bump up threads number slightly

Signed-off-by: Denys Fedoryshchenko

* config: chromeos: enable watchdog reset test on Chromebooks

Add a basic test to verify watchdog reset functionality. Enable the test on all ARM64 and AMD x86_64 Chromebooks. For Intel Chromebooks, enable the test only on octopus, as the ACPI PM Timer on the other devices has been disabled in coreboot.

Signed-off-by: Laura Nao

* src/send_kcidb: use schema version 4.3

Test status `MISS` was added to KCIDB in schema v4.2 and is supported by the latest version, i.e. v4.3. Hence, use the latest version for submission, as the API may send a few tests with "MISS" status.

Signed-off-by: Jeny Sadadia

* send_kcidb: re-structure code for parsing checkout node

Move the code for parsing the checkout node to a separate method. Add a `valid` field to the parsed checkout node. It denotes whether the source code was successfully checked out.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: print more information on invalid data

Print details for invalid revision data for the sake of debugging.

Signed-off-by: Jeny Sadadia
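[Editor's note] The log sanitizing described above reduces to a single filtering pass; a minimal sketch, assuming the intent is to keep newlines and tabs while dropping other non-printable characters:

```
# Sketch: strip non-printable characters that confuse Grafana and
# browsers, preserving line structure.
def sanitize_log(text: str) -> str:
    return ''.join(
        ch for ch in text if ch.isprintable() or ch in '\n\t'
    )
```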
* src/send_kcidb: optimize `kcidb` import

Remove the redundant `kcidb` import and adjust the kcidb Client call accordingly.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: remove keys with `None` values

KCIDB doesn't allow `None` as a field value. Remove all optional fields with a `None` value to make the data valid for submitting to KCIDB (see the sketch below).

Signed-off-by: Jeny Sadadia

* config: add `kcidb_test_suite` property

Every KernelCI test will be mapped to a unified test suite for KCIDB data submission. Add the `kcidb_test_suite` property to test job definitions in the YAML configuration files. The added property will store the mapped KCIDB test suite name.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: parse and submit node test and build data

Listen to all the node events with node state `done` or `available` and submit the node to KCIDB. Parse the node received from the event and create a KCIDB schema-compatible object based on the type of the node, i.e. checkout, build or test.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: set `log_excerpt` for builds and tests

Fetch logs from the compressed log file (*.log.gz) URL and send the last 16*1024 characters when setting the `log_excerpt` field for build and test nodes, as that is the max allowed length of the KCIDB field.

Signed-off-by: Jeny Sadadia

* config/jobs-chromeos: add kcidb test suite property for watchdog test

Add the KCIDB test suite mapping for the `watchdog_reset` test.

Signed-off-by: Jeny Sadadia

* lava_callback.py: disable log removal from callback data

We need it for investigations in case we have any critical data loss during log sanitizing.

Signed-off-by: Denys Fedoryshchenko

* src/send_kcidb: add error info to build nodes

Add error metadata fields such as `error_code` and `error_msg` to the `misc` field for build nodes.

Signed-off-by: Jeny Sadadia

* result_summary presets: add watchdog-reset presets for mainline/next

Add monitor and summary presets to track the results from the watchdog reset test on the mainline and next trees.

Signed-off-by: Laura Nao

* pipeline.yaml: Fix fluster rootfs URL

Signed-off-by: Denys Fedoryshchenko

* src/send_kcidb: get error metadata for failed/incomplete tests

Tweak the condition to get error metadata for test nodes. It should get error info for incomplete nodes as well, not just failed nodes.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: send tests only if KCIDB test mapping exists

All test suite definitions must have the `kcidb_test_suite` property, i.e. a KCIDB test suite mapping. Only send tests for which the mapping is found.

Signed-off-by: Jeny Sadadia

* tests/validate_yaml: add validation for KCIDB mapping

To submit KernelCI-generated data to KCIDB, it is required to have a mapping for all the job definitions via the `kcidb_test_suite` property. Add validation to ensure all the jobs have a mapping present, to avoid missing data submissions. This check is to prompt test authors trying to enable tests in maestro to include the required property for the mapping in their definitions.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add qcs6490-rb3gen2 boot test

Signed-off-by: Milosz Wasilewski

* config: chromeos: Enable kselftest-dt on Qualcomm platforms

Signed-off-by: Nícolas F. R. A. Prado

* pipeline.yaml: Add one um build for android trees

As per the request of the Android team, it will be good to check UM builds for breakages as well.

Signed-off-by: Denys Fedoryshchenko
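[Editor's note] Two of the send_kcidb entries above (dropping `None` fields and building the 16 KiB `log_excerpt`) are simple enough to sketch directly; the helper names and the plain `requests`/`gzip` approach are assumptions about the implementation:

```
# Sketch: make node data KCIDB-valid and build a log excerpt from the
# tail of a gzip-compressed log file.
import gzip

import requests

KCIDB_EXCERPT_MAX = 16 * 1024  # max length of the KCIDB log_excerpt field


def drop_none_fields(data: dict) -> dict:
    # KCIDB rejects None values, so prune optional fields set to None
    return {key: value for key, value in data.items() if value is not None}


def get_log_excerpt(log_gz_url: str) -> str:
    resp = requests.get(log_gz_url, timeout=60)
    resp.raise_for_status()
    text = gzip.decompress(resp.content).decode('utf-8', errors='replace')
    return text[-KCIDB_EXCERPT_MAX:]
```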
* config: use `kind=job` for test suites

As part of re-structuring the test hierarchy, a `Job` model has been introduced for test suite/job nodes. It uses node kind `job`. Update the test configurations in `pipeline.yaml` and `jobs-chromeos.yaml` to use `kind=job` to generate job nodes.

Signed-off-by: Jeny Sadadia

* config/runtime/kunit.jinja2: provide `kind` value for child tests

When submitting a test hierarchy, child nodes by default inherit the `kind` value from the parent node. As we are re-structuring the test hierarchy, test suite/job nodes will have `kind=job` whereas their child test nodes will have `kind=test`. Provide the `kind` field explicitly in the test result hierarchy to preserve a different kind value than the parent node.

Signed-off-by: Jeny Sadadia

* config/runtime/kunit.jinja2: fix `NameError`

Fix the below error in the `_submit` method:
```
Traceback (most recent call last):
  File "/home/kernelci/data/output/tmp94nrvsvs/kunit-x86_64", line 287, in main
    job.submit(results)
  File "/home/kernelci/data/output/tmp94nrvsvs/kunit-x86_64", line 138, in submit
    self._submit(result)
  File "/home/kernelci/data/output/tmp94nrvsvs/kunit-x86_64", line 265, in _submit
    return node
NameError: name 'node' is not defined
```

Signed-off-by: Jeny Sadadia

* config/runtime/kunit.jinja2: evaluate job node result

Evaluate the job node result from child node results if a `null` result is received from the test result parser, for example for nodes such as `fortify`: https://staging.kernelci.org:9000/viewer?node_id=6670ab43d0b7694b399897c4 (see the sketch below).

Signed-off-by: Jeny Sadadia

* src/send_kcidb: fix parsing of KUnit log file

Handle both compressed (gzip) and plain text log files when getting the log excerpt.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: HTTP exception handling for log excerpt

Add HTTP exception handling for getting the log excerpt data.

Signed-off-by: Jeny Sadadia

* config: platforms-chromeos: Add serial delay for some Mediatek platforms

Add test_character_delay to the Spherion, Tomato and Steelix platforms to work around the fact that they're sometimes unable to process serial input fast enough, resulting in mangled commands and consequently flaky test results, as described in https://github.com/kernelci/kernelci-project/issues/366. The right place to do this change would be in the device-type template, as described in LAVA's documentation [1]. This override in KernelCI is meant only as a temporary workaround to verify whether it fixes the issue. If it does, then we'll do it in LAVA upstream instead.

[1] https://docs.lavasoftware.org/lava/debugging.html#differences-in-input-speeds

Signed-off-by: Nícolas F. R. A. Prado

* config: chromeos: Enable error-logs kselftest for MediaTek Chromebooks

Run the error-logs kselftest on MediaTek Chromebooks. This test is currently under review upstream [1] so, in the meantime, it has been added to the collabora-next tree so it can prove its value by helping to detect issues upstream.

[1] https://lore.kernel.org/all/20240423-dev-err-log-selftest-v1-0-690c1741d68b@collabora.com

Signed-off-by: Nícolas F. R. A. Prado

* config/pipeline.yaml: enable CIP lab

Add configuration for the LAVA CIP lab.

Signed-off-by: Jeny Sadadia

* config/pipeline.yaml: add baseline-x86 test for CIP

Add the `baseline-x86-cip` test to be submitted to the CIP LAVA lab.

Signed-off-by: Jeny Sadadia

* docker-compose.yaml: add `lab-cip` runtime

Add runtime argument `lab-cip` to the `scheduler-lava` container. This will enable the pipeline to run and submit jobs to the CIP LAVA lab.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: enable `job` node submission to KCIDB

Parse the newly added job nodes and their child tests for KCIDB submission.

Signed-off-by: Jeny Sadadia
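[Editor's note] The fallback evaluation in the kunit.jinja2 entry above (derive the job result from its children when the parser returns `null`) is a one-function pattern; a sketch under assumed names:

```
# Sketch: when the parser gives no result for the job node itself,
# fall back to aggregating the child results.
def evaluate_job_result(job_result, child_results):
    if job_result is not None:
        return job_result
    if not child_results:
        return None
    return 'fail' if 'fail' in child_results else 'pass'
```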
* src/send_kcidb: don't submit `setup` test suite nodes

The `setup` test suite has been introduced to store test results for environment setup checks before running the actual test suite. KCIDB doesn't require the `setup` test suite result as long as the main test job result is submitted.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: add a check before sending data

Check whether parsed data is available before sending revision data to KCIDB.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: fix logs

Fix the log statement about submitting a node to KCIDB, as we are not sending all the nodes we receive events for to KCIDB.

Signed-off-by: Jeny Sadadia

* src/send_kcidb: handle skipped tests

Do not retrieve artifacts or metadata from the parent node for skipped tests, as in practice only the kernel revision, test runtime and platform will be available for skipped tests.

Signed-off-by: Jeny Sadadia

* result_summary/utils: ignore failures on log retrieval

Make the script continue running if there was an error fetching a test log.

Signed-off-by: Ricardo Cañuelo

* doc/developer-documentation: add docs for enabling new tests

Add developer documentation for enabling new tests.

Signed-off-by: Jeny Sadadia

* Fix links after docs page migration

Documentation has been migrated to the "docs.*" subdomain.

Signed-off-by: Paweł Wieczorek

* pipeline.yaml: Add kcidebug fragment

Add a useful low-overhead debug option to the kernel, and test on most x86 boards we have available, with minimal baseline tests.

Signed-off-by: Denys Fedoryshchenko

* configs: update gcc-10 to gcc-12

As we upgrade the compiler images, we need to update the gcc version.

Signed-off-by: Denys Fedoryshchenko

* regression_tracker: workaround: match node paths programmatically

Don't use 'path' as an API search parameter. The use of lists as query parameters (path is a list) is undefined. Instead, do the filtering in code (see the sketch below).

Signed-off-by: Ricardo Cañuelo

* config: remove qemu jobs from lab-qualcomm

QEMU jobs use a container pulled from hub.docker.com. After the lab move, pulling from this registry is no longer possible at Qualcomm. This patch disables QEMU jobs in the Qualcomm lab.

Signed-off-by: Milosz Wasilewski

* validate_yaml.py: Improve pipeline validation

Add validation that scheduler entries have a matching job entry (this is critical validation) and that job entries have at least one entry in the scheduler. Fix one entry detected by this validation.

Signed-off-by: Denys Fedoryshchenko

* pipeline.yaml: Add broonie (Mark Brown) trees to pipeline

It is time to enable even more trees.

Signed-off-by: Denys Fedoryshchenko

* validate_yaml.py: Add additional verification for duplicate keys

We might have redefined the same keys in different yaml files; this tool will ensure the consistency of these entries.

Signed-off-by: Denys Fedoryshchenko

* validate_yaml.py: Remove path separator

Signed-off-by: Denys Fedoryshchenko

* validate_yaml.py: Rename variable to schedules

Signed-off-by: Denys Fedoryshchenko

* config/kernelci.toml: update KCIDB origin name

As we agreed to refer to the new KernelCI API & Pipeline as "maestro", use the new name while submitting data to KCIDB.

Signed-off-by: Jeny Sadadia
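[Editor's note] The regression_tracker workaround above replaces an API-side `path` query with client-side filtering; a sketch of the idea, with the `api.get_nodes` call and node fields assumed for illustration:

```
# Sketch: fetch candidates without the list-typed 'path' parameter and
# compare paths in code instead.
def last_matching_node(api, params, path):
    nodes = api.get_nodes(params)  # hypothetical search call, no 'path'
    matching = [n for n in nodes if n.get('path') == path]
    return max(matching, key=lambda n: n['created']) if matching else None
```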
* src/send_kcidb: update KCI result mapping with KCIDB status

Update the evaluation of the KCIDB status from the KCI result. Create 2 categories of error codes:
1. When pre-check tests completed but the actual test suite couldn't run - this will have `MISS` status
2. When pre-check tests completed and the actual test suite could run but somehow couldn't complete - this will have `ERROR` status
Some LAVA error codes can occur at any point of execution, such as `Cancelled` and `Test`. Such error codes are assigned to the most relevant category based on analysis of the available results (see the sketch below).

Signed-off-by: Jeny Sadadia

* result_summary presets: fix presets for v4l2-decoder-conformance

Following recent updates to the data representation on KernelCI nodes, the top-level nodes for tests now have their kind set to 'job' instead of 'test'. Update the presets for v4l2-decoder-conformance tests accordingly.

Signed-off-by: Laura Nao

* result_summary presets: fix output file name in kselftest-acpi preset

Signed-off-by: Laura Nao

* config: enable dmabuf-heaps, exec and iommu kselftest suites

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Add kcidb_test_suite

* config: result-summary: add generic rule to monitor failures and regressions

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: Add rt-stable builds

Copy rt-stable builds from legacy KernelCI.

Signed-off-by: Muhammad Usama Anjum

---
Changes:
- Major changes to move to the new way of writing kbuild jobs

* config: pipeline: Add v6.6-rt branch for builds

Signed-off-by: Muhammad Usama Anjum

* config: result-summary: add rt-stable kbuilds presets

Signed-off-by: Muhammad Usama Anjum

* config: chromeos: Add 'nfs' suffix to KCIDB suite name for baseline-nfs

The baseline test is currently run with both ramdisk and nfs rootfs. To distinguish baseline-nfs tests in KCIDB, add an 'nfs' suffix to the KCIDB test suite name.

Signed-off-by: Laura Nao

* aks: Add kubernetes kcidb deployment

We need a file that will manage the deployment of the kcidb bridge in the kubernetes production deployment.

Signed-off-by: Denys Fedoryshchenko

* kubernetes: Adjust trigger k8s options

Ignore the kernelci tree on production, as it is a special "staging"-only tree, and read the whole /config directory, not just the default pipeline.yaml.

Signed-off-by: Denys Fedoryshchenko

* regression_tracker: bugfix: catch empty search condition

Fix _get_last_matching_node(): after the previous change there was an unhandled scenario where nodes may be empty but the function wouldn't return None immediately.

Signed-off-by: Ricardo Cañuelo

* config: pipeline: correct the kind of kselftest suites to job

Signed-off-by: Muhammad Usama Anjum

* scheduler-chromeos.yaml: Temporarily disable non-essential tast tests

As per discussion, temporarily disable the Tast tests which are unlikely to be reviewed.

Signed-off-by: Denys Fedoryshchenko

* k8s/aks: Update deployment files

1) Update the memory limit, as working with linux sources might require 3 GB of RAM.
2) Update the config file path.
3) Add the callback environment variable.
4) Update the image reference to a fresh one.

Signed-off-by: Denys Fedoryshchenko

* config: pipeline: enable android builds with gcc-12 for all architectures

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: enable android builds with clang-17 for all architectures

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: remove build_variants from android build_configs

The build_variants is the legacy way to specify the different variants. We have moved to the newer way to specify the variants. Hence remove the build_variants from the android build_configs.

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: add android15-6.6-lts branch for build as well

The android15-6.6-lts branch has recently been included in legacy KernelCI: https://github.com/kernelci/kernelci-core/pull/2597
Add the same in the newer KernelCI.

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: add blocklist for riscv older kernels for android builds

Signed-off-by: Muhammad Usama Anjum
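[Editor's note] The two error-code categories in the send_kcidb entry at the top of this group could map to KCIDB statuses roughly as follows; the specific codes listed here are illustrative placeholders, not the exact mapping shipped in send_kcidb:

```
# Sketch: map a KCI result plus error code to a KCIDB status.
MISS_CODES = {'submit_error', 'node_timeout'}    # suite never started
ERROR_CODES = {'Infrastructure', 'Job', 'Test'}  # suite started, broke


def kcidb_status(result, error_code=None):
    if result == 'pass':
        return 'PASS'
    if result == 'fail':
        return 'FAIL'
    if error_code in MISS_CODES:
        return 'MISS'
    return 'ERROR'
```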
* config: update KCIDB test suite mapping for baseline

Use `boot` as the KCIDB test suite mapping for all baseline tests.

Signed-off-by: Jeny Sadadia

* callback_url: Update config and README

As we are moving the callback URL to an environment variable, update the config and README accordingly.

Signed-off-by: Denys Fedoryshchenko

* config: pipeline: enable android baseline (boot) testing for arm and arm64 in allmodconfig only

Signed-off-by: Muhammad Usama Anjum

* scheduler.py: If event has jobfilter, inject it into the node data

When someone generates an artificial event with a jobfilter, it is likely a maintainer trying to repeat a job. Treat this accordingly and inject the job filter into the job node, so we will run only the tests the maintainer wants (see the sketch below).

Signed-off-by: Denys Fedoryshchenko

* lava_callback: migrate to fastapi

It will be easier to maintain the API and Pipeline if both are powered by the FastAPI framework.

Signed-off-by: Denys Fedoryshchenko

* config: chromeos: Update fluster rootfs URL

Signed-off-by: Laura Nao

* config: pipeline: fix defconfigs in fragments

Signed-off-by: Muhammad Usama Anjum

* kbuild.jinja2: support defconfig as list or str

As required in https://github.com/kernelci/kernelci-core/pull/2608, defconfig might be of two types. Support it in jinja2 accordingly.

Signed-off-by: Denys Fedoryshchenko

* config: pipeline: add kbuilds of lee-mfd with default defconfigs

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: enable baseline testing for mfd for one board of each arch

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: fix platform sections for Qualcomm and Android schedules

Signed-off-by: Paweł Wieczorek

* k8s: Update deployment to uvicorn, as we use fastapi now

Signed-off-by: Denys Fedoryshchenko

* config: pipeline: Unblock android runs on lava-collabora

Signed-off-by: Muhammad Usama Anjum

* pipeline: Enable preempt-rt cyclictest test

Enable the first preempt-rt test, cyclictest, in the new KernelCI. Enable it on all platforms. Since these are all smoke tests there is no point in running them too long; thus reduce the runtime per test to one minute. This should keep the total preempt-rt runtime roughly in the same time frame. The changes have been ported from Daniel's PR [1].

[1] https://github.com/kernelci/kernelci-core/pull/2397

Signed-off-by: Daniel Wagner
Co-developed-by: Muhammad Usama Anjum
Signed-off-by: Muhammad Usama Anjum

* pipeline: add all the test jobs for all rt-tests

Add job definitions for all the rt-tests. Enable the cyclicdeadline and rtla tests to run on all targets. The changes have been ported from Daniel's PR [1].

[1] https://github.com/kernelci/kernelci-core/pull/2397

Signed-off-by: Daniel Wagner
Co-developed-by: Muhammad Usama Anjum
Signed-off-by: Muhammad Usama Anjum

* config: pipeline: add template and test properties for preempt_rt jobs

Add template, job and kcidb_test_suite properties for all preempt-rt jobs.

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: rename preempt-rt to rt-tests, which is the correct name of the tests

The legacy system was using the preempt-rt name for the tests, but the repository is named rt-tests. We must use the same name to merge with execution results coming from other CIs in KCIDB.

Suggested-by: Jeny Sadadia
Signed-off-by: Muhammad Usama Anjum

* config: pipeline: add the correct nfsroot for rt-tests

Signed-off-by: Muhammad Usama Anjum

* config: pipeline: Remove android's deprecated branches

It has been confirmed with Todd that we should remove the deprecated branches. Hence remove those branches.

Signed-off-by: Muhammad Usama Anjum
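[Editor's note] The jobfilter hand-off in the scheduler.py entry above can be sketched as two small helpers; the event and node dictionary shapes are assumptions made for illustration:

```
# Sketch: copy the jobfilter from an artificial event into the new job
# node, then use it to gate which jobs actually get scheduled.
def apply_jobfilter(event_data, node):
    jobfilter = event_data.get('jobfilter')
    if jobfilter:
        node['jobfilter'] = jobfilter
    return node


def job_allowed(node, job_name):
    jobfilter = node.get('jobfilter')
    return not jobfilter or job_name in jobfilter
```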
* config: pipeline: run baseline on non-allmodconfig

The allmodconfig generates a very large kernel image. It cannot be booted on the arm64 and arm targets, as tftp errors out because the size is too large. Reduce the kernel image size by using the default defconfig. The same defconfigs have been booting for other trees.

Signed-off-by: Muhammad Usama Anjum

* doc: developer-documentation: Update documentation by adding more details

- Reorganize some things
- Specify how to write different variants by removing old syntax
- Give two separate templates for kbuild and test
- Try to put more details for new contributors

Signed-off-by: Muhammad Usama Anjum

---
Changes since v1:
- Fix typo
- Apply suggestions from code review

* doc/developer-documentation: fix a glitch in enabling new tree section

Fix a minor bug in YAML block formatting.

Fixes: f5f57de ("doc: developer-documentation: Update documentation by adding more details")
Signed-off-by: Jeny Sadadia

* doc/developer-documentation: update a section title

Rename a section from "Enabling a new Kernel tree" to "Enabling new KernelCI trees, builds, and tests" as it explains enabling tests as well.

Signed-off-by: Jeny Sadadia

* config: use the new `tree:branch` format for rules

For cases where we want a single branch to be allowed for a given tree, we can now use the `tree:branch` format in rules. Convert the existing rules accordingly.

Signed-off-by: Arnaud Ferraris

* config: pipeline: fix improper use of "filters" attribute

The `filters` param was used in the legacy system but has been replaced by `rules`, with a different syntax. For Android RISC-V builds, this was used to deny job execution on kernels < 4.19, so let's translate this condition to the rules format, and do a similar change for the `rt-tests`-based jobs.

Signed-off-by: Arnaud Ferraris

* config/pipeline.yaml: Fix x86 typo in kcidebug job names

The kcidebug jobs that run on MediaTek and Qualcomm platforms should have arm64 in the name rather than x86. Fix the typo.

Signed-off-by: Nícolas F. R. A. Prado

* config: pipeline: remove params

The parameters are only needed when they are changed or appended. Remove the parameters which aren't being modified.

Signed-off-by: Muhammad Usama Anjum

* validate_yaml.py: Jobs are required to have template parameter

Add more validation of mandatory parameters to the config files (see the sketch below).

Signed-off-by: Denys Fedoryshchenko

* validate_yaml.py: Add more job validations

Add basic validation: each job must have a kind parameter.

Signed-off-by: Denys Fedoryshchenko

* workflows: Add label on CI check failures

Automatically add a label so a broken PR won't go to staging.

Signed-off-by: Denys Fedoryshchenko
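[Editor's note] The validate_yaml.py checks described across this series (scheduler entries must reference a defined job, every job must be scheduled at least once, and each job needs its mandatory `template` and `kind` parameters) could be sketched like this; the config dictionary shape is an assumption based on the commit messages, not the exact script:

```
# Sketch: cross-check the parsed pipeline config.
def validate(config):
    jobs = config.get('jobs', {})
    scheduled = {entry['job'] for entry in config.get('scheduler', [])}
    errors = []
    for name in scheduled - set(jobs):
        errors.append(f'scheduler references undefined job: {name}')
    for name in set(jobs) - scheduled:
        errors.append(f'job is never scheduled: {name}')
    for name, job in jobs.items():
        for field in ('template', 'kind'):
            if field not in job:
                errors.append(f'{name}: missing mandatory "{field}"')
    return errors
```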
---------

Signed-off-by: Jeny Sadadia
Signed-off-by: Nícolas F. R. A. Prado
Signed-off-by: Denys Fedoryshchenko
Signed-off-by: Ricardo Cañuelo
Signed-off-by: Helen Koike
Signed-off-by: Arnaud Ferraris
Signed-off-by: Laura Nao
Signed-off-by: Muhammad Usama Anjum
Signed-off-by: Shreeya Patel
Signed-off-by: dependabot[bot]
Signed-off-by: Milosz Wasilewski
Signed-off-by: Paweł Wieczorek
Signed-off-by: Daniel Wagner
Co-authored-by: Jeny Sadadia
Co-authored-by: Nícolas F. R. A. Prado
Co-authored-by: Ricardo Cañuelo
Co-authored-by: Helen Koike
Co-authored-by: Arnaud Ferraris
Co-authored-by: Laura Nao
Co-authored-by: Muhammad Usama Anjum
Co-authored-by: Shreeya Patel
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Milosz Wasilewski
Co-authored-by: Paweł Wieczorek
Co-authored-by: Milosz Wasilewski
Co-authored-by: Daniel Wagner
Signed-off-by: Denys Fedoryshchenko
---
 .github/dependabot.yml | 11 +
 .github/workflows/main.yml | 24 +
 .gitignore | 3 +
 README.md | 40 +-
 config/jobs-chromeos.yaml | 649 +++++++
 config/kernelci.toml | 23 +-
 config/pipeline.yaml | 1706 ++++++++++++++++-
 config/platforms-chromeos.yaml | 169 ++
 config/reports/test-report.jinja2 | 10 +-
 config/result-summary.yaml | 863 +++++++++
 config/result_summary_templates/base.html | 26 +
 .../generic-regression-report.html.jinja2 | 109 ++
 .../generic-regression-report.jinja2 | 61 +
 .../generic-regressions.html.jinja2 | 168 ++
 .../generic-regressions.jinja2 | 100 +
 .../generic-test-failure-report.html.jinja2 | 101 +
 .../generic-test-failure-report.jinja2 | 54 +
 .../generic-test-results.html.jinja2 | 160 ++
 .../generic-test-results.jinja2 | 85 +
 config/result_summary_templates/main.css | 53 +
 config/runtime/baseline.jinja2 | 1 +
 config/runtime/kbuild.jinja2 | 155 +-
 config/runtime/kselftest.jinja2 | 4 +
 config/runtime/kunit.jinja2 | 26 +-
 config/runtime/kver.jinja2 | 2 +-
 config/runtime/rt-tests.jinja2 | 4 +
 config/runtime/sleep.jinja2 | 3 +
 config/runtime/tast.jinja2 | 21 +
 .../runtime/v4l2-decoder-conformance.jinja2 | 4 +
 config/runtime/watchdog-reset.jinja2 | 3 +
 config/scheduler-chromeos.yaml | 548 ++++++
 config/traces_config.yaml | 37 +
 data/output/.gitkeep | 0
 doc/_index.md | 10 +
 doc/connecting-lab.md | 133 ++
 doc/developer-documentation.md | 165 ++
 doc/pipeline-details.md | 84 +
 doc/result-summary-CHANGELOG | 78 +
 docker-compose-production.yaml | 17 +-
 docker-compose.yaml | 70 +-
 docker/lava-callback/requirements.txt | 5 +-
 kube/aks/README.md | 6 +
 kube/aks/ingress.yaml | 32 +
 kube/aks/kernelci-secrets.toml.example | 49 +
 kube/aks/kernelci.toml | 16 -
 kube/aks/lava-callback.yaml | 65 +
 kube/aks/monitor.yaml | 55 +-
 kube/aks/nodehandlers.yaml | 137 ++
 kube/aks/pipeline-kcidb.yaml | 45 +
 kube/aks/scheduler-k8s.yaml | 147 +-
 kube/aks/scheduler-lava.yaml | 97 +-
 kube/aks/tarball.yaml | 134 +-
 kube/aks/timeout.yaml | 70 -
 kube/aks/trigger.yaml | 57 +-
 restart_services.sh | 10 +
 setup.cfg | 2 +
 src/base.py | 2 +-
 src/fstests/runner.py | 6 +-
 src/lava_callback.py | 181 +-
 src/monitor.py | 24 +-
 src/patchset.py | 329 ++++
 src/regression_tracker.py | 255 ++-
 src/result_summary.py | 224 +++
 src/result_summary/__init__.py | 4 +
 src/result_summary/monitor.py | 148 ++
 src/result_summary/summary.py | 155 ++
 src/result_summary/utils.py | 259 +++
 src/scheduler.py | 116 +-
 src/send_kcidb.py | 439 ++++-
 src/tarball.py | 168 +-
 src/test_report.py | 57 +-
 src/timeout.py | 90 +-
 src/trigger.py | 47 +-
 tests/validate_yaml.py | 130 ++
 74 files changed, 8601 insertions(+), 740 deletions(-)
 create mode 100644 .github/dependabot.yml
 create mode 100644 config/jobs-chromeos.yaml
 create mode 100644 config/platforms-chromeos.yaml
 create mode 100644 config/result-summary.yaml
 create mode 100644 config/result_summary_templates/base.html
 create mode 100644 config/result_summary_templates/generic-regression-report.html.jinja2
 create mode 100644 config/result_summary_templates/generic-regression-report.jinja2
 create mode 100644 config/result_summary_templates/generic-regressions.html.jinja2
 create mode 100644 config/result_summary_templates/generic-regressions.jinja2
 create mode 100644 config/result_summary_templates/generic-test-failure-report.html.jinja2
 create mode 100644 config/result_summary_templates/generic-test-failure-report.jinja2
 create mode 100644 config/result_summary_templates/generic-test-results.html.jinja2
 create mode 100644 config/result_summary_templates/generic-test-results.jinja2
 create mode 100644 config/result_summary_templates/main.css
 create mode 100644 config/runtime/kselftest.jinja2
 create mode 100644 config/runtime/rt-tests.jinja2
 create mode 100644 config/runtime/sleep.jinja2
 create mode 100644 config/runtime/tast.jinja2
 create mode 100644 config/runtime/v4l2-decoder-conformance.jinja2
 create mode 100644 config/runtime/watchdog-reset.jinja2
 create mode 100644 config/scheduler-chromeos.yaml
 create mode 100644 config/traces_config.yaml
 mode change 100644 => 100755 data/output/.gitkeep
 create mode 100644 doc/_index.md
 create mode 100644 doc/connecting-lab.md
 create mode 100644 doc/developer-documentation.md
 create mode 100644 doc/pipeline-details.md
 create mode 100644 doc/result-summary-CHANGELOG
 create mode 100644 kube/aks/README.md
 create mode 100644 kube/aks/ingress.yaml
 create mode 100644 kube/aks/kernelci-secrets.toml.example
 delete mode 100644 kube/aks/kernelci.toml
 create mode 100644 kube/aks/lava-callback.yaml
 create mode 100644 kube/aks/nodehandlers.yaml
 create mode 100644 kube/aks/pipeline-kcidb.yaml
 delete mode 100644 kube/aks/timeout.yaml
 create mode 100755 restart_services.sh
 create mode 100644 setup.cfg
 mode change 100644 => 100755 src/lava_callback.py
 create mode 100755 src/patchset.py
 create mode 100755 src/result_summary.py
 create mode 100644 src/result_summary/__init__.py
 create mode 100644 src/result_summary/monitor.py
 create mode 100644 src/result_summary/summary.py
 create mode 100644 src/result_summary/utils.py
 create mode 100755 tests/validate_yaml.py

diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 000000000..5990d9c64
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,11 @@
+# To get started with Dependabot version updates, you'll need to specify which
+# package ecosystems to update and where the package manifests are located.
+# Please see the documentation for all configuration options: +# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file + +version: 2 +updates: + - package-ecosystem: "" # See documentation for possible values + directory: "/" # Location of package manifests + schedule: + interval: "weekly" diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 1241e6614..a7d8e2ce0 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -40,3 +40,27 @@ jobs: - name: Run pycodestyle run: | pycodestyle src/*.py + + - name: Install python yaml package + run: | + pip install pyyaml + + - name: Run basic yaml validation + run: | + python tests/validate_yaml.py + on-fail: + if: failure() && github.event_name == 'pull_request' + runs-on: ubuntu-latest + needs: check + steps: + - name: Add label to PR + uses: actions/github-script@v7 + with: + script: | + const label = 'staging-skip'; + github.rest.issues.addLabels({ + owner: context.repo.owner, + repo: context.repo.repo, + issue_number: context.issue.number, + labels: [label] + }); diff --git a/.gitignore b/.gitignore index 4a1428479..0122bc1d0 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,6 @@ .env .docker-env data +*.pyc +*.venv + diff --git a/README.md b/README.md index 5b244cf0c..1ef3da16c 100644 --- a/README.md +++ b/README.md @@ -4,14 +4,50 @@ KernelCI Pipeline Modular pipeline based on the new [KernelCI API](https://github.com/kernelci/kernelci-api). -Please refer to the [pipeline design documentation](https://kernelci.org/docs/api/overview/#pipeline-design) for more details. +Please refer to the [pipeline design documentation](https://docs.kernelci.org/api_pipeline/api/design/#pipeline-design) for more details. To use it, first, start the API. Then start the services in this repository on the same host. -Follow instructions to [add a token and start the services](https://kernelci.org/docs/api/getting-started/#setting-up-a-pipeline-instance). +Follow instructions to [add a token and start the services](https://docs.kernelci.org/api_pipeline/api/local-instance/#setting-up-a-pipeline-instance). > **Note** The `trigger` service was run only once as it's not currently configured to run periodically. +### Setting up LAVA lab + +For scheduling jobs, the pipeline needs to be able to submit jobs to a "LAVA lab" type of runtime and receive HTTP(S) callbacks with results via the "lava-callback" service. +A runtime is configured in the YAML file in the following way, for example: +``` + lava-collabora: &lava-collabora-staging + lab_type: lava + url: https://lava.collabora.dev/ + priority_min: 40 + priority_max: 60 + notify: + callback: + token: kernelci-api-token-staging +``` + +- url is the endpoint of the LAVA lab API where jobs will be submitted. +- notify.callback.token is the token DESCRIPTION (name) used in the LAVA job definition. This part is a little bit tricky: https://docs.lavasoftware.org/lava/user-notifications.html#notification-callbacks +If you specify a token name that does not exist in LAVA under the user submitting the job, the callback will return a token secret equal to the description. Following the example above, that would be "kernelci-api-token-staging". +If you specify a token name that matches an existing token in LAVA, the callback will return the token value (secret) from LAVA, which is usually a long alphanumeric string. +Tokens are generated in LAVA in the "API -> Tokens" section. The token name is its "DESCRIPTION" and the token value (secret) can be shown by clicking the green eye icon labelled "View token hash".
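For illustration, here is a minimal sketch of the notify block that could end up in a submitted LAVA job definition, loosely following the LAVA notification-callbacks documentation linked above. The URL shown is a hypothetical placeholder (in this setup the callback URL comes from the KCI_INSTANCE_CALLBACK variable described below), and the exact field names may vary between LAVA versions:
```
notify:
  criteria:
    status: finished
  callback:
    url: https://pipeline.example.com/node/  # hypothetical callback endpoint
    method: POST
    token: kernelci-api-token-staging
    content-type: json
```
LAVA resolves the token name to a secret as described above and sends that secret along with the callback, which is how the receiving service can tell which lab the results came from.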
+The callback URL is set in the pipeline instance environment variable KCI_INSTANCE_CALLBACK. + +The `lava-callback` service is used to receive notifications from LAVA after a job has finished. It is configured to listen on port 8000 by default and expects the token value (secret) from LAVA in the "Authorization" header. The mapping of token values to lab names is done via a TOML file. Example: +``` +[runtime.lava-collabora] +runtime_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" +callback_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" + +[runtime.lava-collabora-early-access] +runtime_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA-EARLY-ACCESS" +callback_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" +``` +If we have a single token, it is the same one used to submit jobs (by the scheduler), so runtime_token alone is enough; but if we use different tokens to submit jobs and to receive callbacks, we need to specify both runtime_token and callback_token. + +Summary: the token name (description) is used in the YAML configuration, while the token value (secret) is used in the TOML configuration. + ### Setup KernelCI Pipeline on WSL To setup `kernelci-pipeline` on WSL (Windows Subsystem for Linux), we need to enable case sensitivity for the file system. diff --git a/config/jobs-chromeos.yaml b/config/jobs-chromeos.yaml new file mode 100644 index 000000000..340397e0d --- /dev/null +++ b/config/jobs-chromeos.yaml @@ -0,0 +1,649 @@ +_anchors: + + kbuild-gcc-12-arm64-chromeos: &kbuild-gcc-12-arm64-chromeos-job + template: kbuild.jinja2 + kind: kbuild + image: kernelci/staging-gcc-12:arm64-kselftest-kernelci + params: &kbuild-gcc-12-arm64-chromeos-params + arch: arm64 + compiler: gcc-12 + cross_compile: 'aarch64-linux-gnu-' + cross_compile_compat: 'arm-linux-gnueabihf-' + defconfig: 'cros://chromeos-{krev}/{crosarch}/chromiumos-{flavour}.flavour.config' + flavour: '{crosarch}-generic' + fragments: + - lab-setup + - arm64-chromebook + - CONFIG_MODULE_COMPRESS=n + - CONFIG_MODULE_COMPRESS_NONE=y + rules: &kbuild-gcc-12-arm64-chromeos-rules + tree: + - '!android' + + kbuild-gcc-12-x86-chromeos: &kbuild-gcc-12-x86-chromeos-job + <<: *kbuild-gcc-12-arm64-chromeos-job + image: kernelci/staging-gcc-12:x86-kselftest-kernelci + params: &kbuild-gcc-12-x86-chromeos-params + arch: x86_64 + compiler: gcc-12 + defconfig: 'cros://chromeos-{krev}/{crosarch}/chromeos-{flavour}.flavour.config' + flavour: '{crosarch}-generic' + fragments: + - lab-setup + - x86-board + - CONFIG_MODULE_COMPRESS=n + - CONFIG_MODULE_COMPRESS_NONE=y + rules: + tree: + - '!android' + + min-5_4-rules: &min-5_4-rules + min_version: + version: 5 + patchlevel: 4 + + min-6_7-rules: &min-6_7-rules + min_version: + version: 6 + patchlevel: 7 + + max-6_6-rules: &max-6_6-rules + <<: *min-5_4-rules + max_version: + version: 6 + patchlevel: 6 + + tast: &tast-job + template: tast.jinja2 + kind: job + rules: *min-5_4-rules + kcidb_test_suite: tast + + tast-basic: &tast-basic-job + <<: *tast-job + params: + tests: + - platform.CheckDiskSpace + - platform.TPMResponsive + + tast-decoder-chromestack: &tast-decoder-chromestack-job + <<: *tast-job + params: &tast-decoder-chromestack-params + tests: + - video.ChromeStackDecoder.* + - video.ChromeStackDecoderVerification.* + excluded_tests: + # Those always fail on all platforms + - video.ChromeStackDecoderVerification.hevc_main + - video.ChromeStackDecoderVerification.vp9_0_svc + + tast-decoder-v4l2-sf-h264: &tast-decoder-v4l2-sf-h264-job + <<: *tast-job + params: + tests: + -
video.PlatformDecoding.v4l2_stateful_h264_* + + tast-decoder-v4l2-sf-hevc: &tast-decoder-v4l2-sf-hevc-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateful_hevc_* + + tast-decoder-v4l2-sf-vp8: &tast-decoder-v4l2-sf-vp8-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateful_vp8_* + + tast-decoder-v4l2-sf-vp9: &tast-decoder-v4l2-sf-vp9-job + <<: *tast-job + params: &tast-decoder-v4l2-sf-vp9-params + tests: + - video.PlatformDecoding.v4l2_stateful_vp9_0_group1_* + - video.PlatformDecoding.v4l2_stateful_vp9_0_group2_* + - video.PlatformDecoding.v4l2_stateful_vp9_0_group3_* + - video.PlatformDecoding.v4l2_stateful_vp9_0_group4_* + excluded_tests: + # Regression in ChromeOS R120, to be re-evaluated on next CrOS upgrade + - video.PlatformDecoding.v4l2_stateful_vp9_0_group4_sub8x8_sf + + tast-decoder-v4l2-sf-vp9-extra: &tast-decoder-v4l2-sf-vp9-extra-job + <<: *tast-job + params: &tast-decoder-v4l2-sf-vp9-extra-params + tests: + - video.PlatformDecoding.v4l2_stateful_vp9_0_level5_* + + tast-decoder-v4l2-sl-av1: &tast-decoder-v4l2-sl-av1-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_av1_* + + tast-decoder-v4l2-sl-h264: &tast-decoder-v4l2-sl-h264-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_h264_* + + tast-decoder-v4l2-sl-hevc: &tast-decoder-v4l2-sl-hevc-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_hevc_* + + tast-decoder-v4l2-sl-vp8: &tast-decoder-v4l2-sl-vp8-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_vp8_* + + tast-decoder-v4l2-sl-vp9: &tast-decoder-v4l2-sl-vp9-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_vp9_0_group1_* + - video.PlatformDecoding.v4l2_stateless_vp9_0_group2_* + - video.PlatformDecoding.v4l2_stateless_vp9_0_group3_* + - video.PlatformDecoding.v4l2_stateless_vp9_0_group4_* + + tast-decoder-v4l2-sl-vp9-extra: &tast-decoder-v4l2-sl-vp9-extra-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.v4l2_stateless_vp9_0_level5_* + + tast-hardware: &tast-hardware-job + <<: *tast-job + params: + tests: + - graphics.HardwareProbe + - graphics.KernelConfig + - graphics.KernelMemory + - hardware.DiskErrors + - hardware.SensorAccel + - hardware.SensorIioservice + - hardware.SensorIioserviceHard + - hardware.SensorLight + - hardware.SensorPresence + - hardware.SensorActivity + - health.ProbeSensorInfo + - health.DiagnosticsRun.* + - health.ProbeAudioHardwareInfo + - health.ProbeAudioInfo + - health.ProbeBacklightInfo + - health.ProbeCPUInfo + - health.ProbeFanInfo + - inputs.PhysicalKeyboardKernelMode + + tast-kernel: &tast-kernel-job + <<: *tast-job + params: + tests: + - kernel.Bloat + - kernel.ConfigVerify.chromeos_kernelci + - kernel.CPUCgroup + - kernel.Cpuidle + - kernel.CryptoAPI + - kernel.CryptoDigest + - kernel.ECDeviceNode + - kernel.HighResTimers + - kernel.Limits + - kernel.PerfCallgraph + + tast-mm-decode: &tast-mm-decode-job + <<: *tast-job + params: + tests: + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_group1_buf + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_group2_buf + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_group3_buf + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_group4_buf + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_level5_0_buf + - video.PlatformDecoding.ffmpeg_vaapi_vp9_0_level5_1_buf + - video.PlatformDecoding.ffmpeg_vaapi_av1 + - video.PlatformDecoding.ffmpeg_vaapi_vp8_inter + - 
video.PlatformDecoding.ffmpeg_vaapi_h264_baseline + - video.PlatformDecoding.ffmpeg_vaapi_h264_main + - video.PlatformDecoding.ffmpeg_vaapi_hevc_main + - video.PlatformDecoding.vaapi_vp9_0_group1_buf + - video.PlatformDecoding.vaapi_vp9_0_group2_buf + - video.PlatformDecoding.vaapi_vp9_0_group3_buf + - video.PlatformDecoding.vaapi_vp9_0_group4_buf + - video.PlatformDecoding.vaapi_vp9_0_level5_0_buf + - video.PlatformDecoding.vaapi_vp9_0_level5_1_buf + + tast-mm-encode: &tast-mm-encode-job + <<: *tast-job + params: + tests: + - video.EncodeAccel.h264_1080p_global_vaapi_lock_disabled + - video.EncodeAccel.vp8_1080p_global_vaapi_lock_disabled + - video.EncodeAccel.vp9_1080p_global_vaapi_lock_disabled + - video.EncodeAccelPerf.h264_1080p_global_vaapi_lock_disabled + - video.EncodeAccelPerf.vp8_1080p_global_vaapi_lock_disabled + - video.EncodeAccelPerf.vp9_1080p_global_vaapi_lock_disabled + - video.PlatformEncoding.vaapi_vp8_720 + - video.PlatformEncoding.vaapi_vp8_720_meet + - video.PlatformEncoding.vaapi_vp9_720 + - video.PlatformEncoding.vaapi_vp9_720_meet + - video.PlatformEncoding.vaapi_h264_720 + - video.PlatformEncoding.vaapi_h264_720_meet + - webrtc.MediaRecorderMulti.vp8_vp8_global_vaapi_lock_disabled + - webrtc.MediaRecorderMulti.vp8_h264_global_vaapi_lock_disabled + - webrtc.MediaRecorderMulti.h264_h264_global_vaapi_lock_disabled + - webrtc.RTCPeerConnectionPerf.vp8_hw_multi_vp9_3x3_global_vaapi_lock_disabled + - webrtc.RTCPeerConnectionPerf.vp8_hw_multi_vp9_4x4_global_vaapi_lock_disabled + - webrtc.RTCPeerConnectionPerf.vp9_hw_multi_vp9_3x3_global_vaapi_lock_disabled + + tast-mm-misc: &tast-mm-misc-job + <<: *tast-job + params: + tests: + - camera.Suspend + - camera.V4L2 + - camera.V4L2Compliance + - camera.V4L2.certification + - camera.V4L2.supported_formats + - graphics.Clvk.api_tests + - graphics.Clvk.simple_test + - graphics.DRM.atomic_test_overlay_upscaling + - graphics.DRM.atomic_test_plane_alpha + - graphics.DRM.atomic_test_plane_ctm + - graphics.DRM.atomic_test_primary_pageflip + - graphics.DRM.atomic_test_rgba_primary + - graphics.DRM.atomic_test_video_overlay + - graphics.DRM.atomic_test_video_underlay + - graphics.DRM.dmabuf_test + - graphics.DRM.drm_cursor_test + - graphics.DRM.gbm_test + - graphics.DRM.linear_bo_test + - graphics.DRM.mapped_access_perf_test + - graphics.DRM.mmap_test + - graphics.DRM.null_platform_test + - graphics.DRM.swrast_test + - graphics.DRM.vk_glow + - graphics.DRM.yuv_to_rgb_test + - graphics.GLBench + - security.GPUSandboxed + - video.ImageProcessor.image_processor_unit_test + - video.MemCheck.av1_hw + - video.PlatformVAAPIUnittest + + tast-perf: &tast-perf-job + <<: *tast-job + params: + tests: + - filemanager.UIPerf.directory_list + - filemanager.UIPerf.list_apps + - ui.DesksAnimationPerf + - ui.DragTabInTabletPerf.touch + - ui.OverviewWithExpandedDesksBarPerf + + tast-perf-long-duration: &tast-perf-long-duration-job + <<: *tast-job + params: + tests: + - filemanager.ZipPerf + - storage.WriteZeroPerf + - ui.WindowCyclePerf + - ui.WindowResizePerf + - ui.BubbleLauncherAnimationPerf + - ui.DragMaximizedWindowPerf + - ui.DragTabInClamshellPerf + - ui.DragTabInTabletPerf + + tast-platform: &tast-platform-job + <<: *tast-job + params: + tests: + - platform.CheckDiskSpace + - platform.CheckProcesses + - platform.CheckTracefsInstances + - platform.CrosDisks + - platform.CrosDisksArchive + - platform.CrosDisksFilesystem + - platform.CrosDisksFormat + - platform.CrosDisksRename + - platform.CrosDisksSSHFS + - platform.CrosID + - platform.DMVerity + - 
platform.DumpVPDLog + - platform.Firewall + - platform.LocalPerfettoTBMTracedProbes + - platform.Mtpd + - platform.TPMResponsive + - storage.HealthInfo + - storage.LowPowerStateResidence + + tast-power: &tast-power-job + <<: *tast-job + params: + tests: + - power.CheckStatus + - power.CpufreqConf + - power.UtilCheck + - typec.Basic + + tast-sound: &tast-sound-job + <<: *tast-job + params: + tests: + - audio.AloopLoadedFixture + - audio.AloopLoadedFixture.stereo + - audio.ALSAConformance + - audio.BrowserShellAudioToneCheck + - audio.CheckingAudioFormats + - audio.CrasFeatures + - audio.CrasPlay + - audio.CrasRecord + - audio.CrasRecordQuality + - audio.DevicePlay + - audio.DevicePlay.unstable_model + - audio.DeviceRecord + - audio.UCMSequences.section_device + - audio.UCMSequences.section_modifier + - audio.UCMSequences.section_verb + + tast-ui: &tast-ui-job + <<: *tast-job + params: + tests: + - ui.DesktopControl + - ui.HotseatAnimation.non_overflow_shelf + - ui.HotseatAnimation.non_overflow_shelf_lacros + - ui.HotseatAnimation.overflow_shelf + - ui.HotseatAnimation.overflow_shelf_lacros + - ui.HotseatAnimation.shelf_with_navigation_widget + - ui.HotseatAnimation.shelf_with_navigation_widget_lacros + - ui.WindowControl + + v4l2-decoder-conformance: &v4l2-decoder-conformance-job + template: 'v4l2-decoder-conformance.jinja2' + kind: job + params: &v4l2-decoder-conformance-params + nfsroot: 'https://storage.kernelci.org/images/rootfs/debian/bookworm-gst-fluster/20240703.0/{debarch}/' + job_timeout: 30 + videodec_parallel_jobs: 1 + videodec_timeout: 90 + rules: + tree: + - mainline + - next + - collabora-chromeos-kernel + - media + kcidb_test_suite: fluster.v4l2 + + watchdog-reset: &watchdog-reset-job + template: watchdog-reset.jinja2 + kind: job + params: &watchdog-reset-job-params + bl_message: 'coreboot-' + wdt_dev: 'watchdog0' + kcidb_test_suite: kernelci_watchdog_reset + +jobs: + + baseline-arm64-mediatek: &baseline-job + template: baseline.jinja2 + kind: job + kcidb_test_suite: boot + + baseline-arm64-qualcomm: *baseline-job + + baseline-nfs-arm64-mediatek: &baseline-nfs-job + template: baseline.jinja2 + kind: job + params: + boot_commands: nfs + nfsroot: http://storage.kernelci.org/images/rootfs/debian/bookworm/20240313.0/{debarch} + kcidb_test_suite: boot.nfs + + baseline-nfs-arm64-qualcomm: *baseline-nfs-job + baseline-nfs-x86-amd: *baseline-nfs-job + baseline-nfs-x86-intel: *baseline-nfs-job + baseline-x86-amd: *baseline-job + baseline-x86-amd-staging: *baseline-job + baseline-x86-intel: *baseline-job + + kbuild-gcc-12-arm64-chromebook: + <<: *kbuild-gcc-12-arm64-chromeos-job + params: + <<: *kbuild-gcc-12-arm64-chromeos-params + cross_compile_compat: + defconfig: defconfig + + kbuild-gcc-12-arm64-chromeos-mediatek: + <<: *kbuild-gcc-12-arm64-chromeos-job + params: + <<: *kbuild-gcc-12-arm64-chromeos-params + flavour: mediatek + rules: + <<: *kbuild-gcc-12-arm64-chromeos-rules + min_version: + version: 6 + patchlevel: 1 + + kbuild-gcc-12-arm64-chromeos-qualcomm: + <<: *kbuild-gcc-12-arm64-chromeos-job + params: + <<: *kbuild-gcc-12-arm64-chromeos-params + flavour: qualcomm + + kbuild-gcc-12-x86-chromeos-intel: + <<: *kbuild-gcc-12-x86-chromeos-job + params: + <<: *kbuild-gcc-12-x86-chromeos-params + flavour: intel-pineview + + kbuild-gcc-12-x86-chromeos-amd: + <<: *kbuild-gcc-12-x86-chromeos-job + params: + <<: *kbuild-gcc-12-x86-chromeos-params + flavour: amd-stoneyridge + + kselftest-acpi: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 
'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: acpi + job_timeout: 10 + rules: + tree: + - collabora-next:for-kernelci + kcidb_test_suite: kselftest.acpi + + kselftest-device-error-logs: + template: kselftest.jinja2 + kind: test + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: devices/error_logs + job_timeout: 10 + rules: + tree: + - collabora-next:for-kernelci + kcidb_test_suite: kselftest.device_error_logs + + tast-basic-arm64-mediatek: *tast-basic-job + tast-basic-arm64-qualcomm: *tast-basic-job + tast-basic-x86-intel: *tast-basic-job + tast-basic-x86-amd: *tast-basic-job + + tast-decoder-chromestack-arm64-mediatek: *tast-decoder-chromestack-job + + tast-decoder-chromestack-arm64-qualcomm: + <<: *tast-decoder-chromestack-job + rules: *min-6_7-rules + + tast-decoder-chromestack-arm64-qualcomm-pre6_7: + <<: *tast-decoder-chromestack-job + params: + <<: *tast-decoder-chromestack-params + excluded_tests: + # Platform-independent excluded tests + - video.ChromeStackDecoderVerification.hevc_main + - video.ChromeStackDecoderVerification.vp9_0_svc + # Qualcomm-specific: those always fail with pre-6.7 kernels + - video.ChromeStackDecoderVerification.vp9_0_group1_frm_resize + - video.ChromeStackDecoderVerification.vp9_0_group1_sub8x8_sf + rules: *max-6_6-rules + + tast-decoder-chromestack-x86-intel: *tast-decoder-chromestack-job + tast-decoder-chromestack-x86-amd: *tast-decoder-chromestack-job + + tast-decoder-v4l2-sl-av1-arm64-mediatek: *tast-decoder-v4l2-sl-av1-job + tast-decoder-v4l2-sl-h264-arm64-mediatek: *tast-decoder-v4l2-sl-h264-job + tast-decoder-v4l2-sl-hevc-arm64-mediatek: *tast-decoder-v4l2-sl-hevc-job + tast-decoder-v4l2-sl-vp8-arm64-mediatek: *tast-decoder-v4l2-sl-vp8-job + tast-decoder-v4l2-sl-vp9-arm64-mediatek: *tast-decoder-v4l2-sl-vp9-job + + tast-decoder-v4l2-sf-h264-arm64-qualcomm: *tast-decoder-v4l2-sf-h264-job + tast-decoder-v4l2-sf-hevc-arm64-qualcomm: *tast-decoder-v4l2-sf-hevc-job + tast-decoder-v4l2-sf-vp8-arm64-qualcomm: *tast-decoder-v4l2-sf-vp8-job + + tast-decoder-v4l2-sf-vp9-arm64-qualcomm: + <<: *tast-decoder-v4l2-sf-vp9-job + rules: *min-6_7-rules + + tast-decoder-v4l2-sf-vp9-arm64-qualcomm-pre6_7: + <<: *tast-decoder-v4l2-sf-vp9-job + params: + <<: *tast-decoder-v4l2-sf-vp9-params + excluded_tests: + - video.PlatformDecoding.v4l2_stateful_vp9_0_group1_frm_resize + - video.PlatformDecoding.v4l2_stateful_vp9_0_group1_sub8x8_sf + - video.PlatformDecoding.v4l2_stateful_vp9_0_group2_frm_resize + - video.PlatformDecoding.v4l2_stateful_vp9_0_group2_sub8x8_sf + - video.PlatformDecoding.v4l2_stateful_vp9_0_group3_frm_resize + - video.PlatformDecoding.v4l2_stateful_vp9_0_group3_sub8x8_sf + - video.PlatformDecoding.v4l2_stateful_vp9_0_group4_frm_resize + - video.PlatformDecoding.v4l2_stateful_vp9_0_group4_sub8x8_sf + rules: *max-6_6-rules + + tast-decoder-v4l2-sf-vp9-extra-arm64-qualcomm: + <<: *tast-decoder-v4l2-sf-vp9-extra-job + rules: *min-6_7-rules + + tast-decoder-v4l2-sf-vp9-extra-arm64-qualcomm-pre6_7: + <<: *tast-decoder-v4l2-sf-vp9-extra-job + params: + <<: *tast-decoder-v4l2-sf-vp9-extra-params + excluded_tests: + - video.PlatformDecoding.v4l2_stateful_vp9_0_level5_0_frm_resize + - video.PlatformDecoding.v4l2_stateful_vp9_0_level5_0_sub8x8_sf + rules: *max-6_6-rules + + tast-hardware-arm64-mediatek: *tast-hardware-job + tast-hardware-arm64-qualcomm: *tast-hardware-job + tast-hardware-x86-intel: *tast-hardware-job + 
tast-hardware-x86-amd: *tast-hardware-job + + tast-kernel-arm64-mediatek: *tast-kernel-job + tast-kernel-arm64-qualcomm: *tast-kernel-job + tast-kernel-x86-intel: *tast-kernel-job + tast-kernel-x86-amd: *tast-kernel-job + + tast-mm-decode-arm64-mediatek: *tast-mm-decode-job + tast-mm-decode-arm64-qualcomm: *tast-mm-decode-job + + tast-mm-misc-arm64-mediatek: *tast-mm-misc-job + tast-mm-misc-arm64-qualcomm: *tast-mm-misc-job + tast-mm-misc-x86-intel: *tast-mm-misc-job + tast-mm-misc-x86-amd: *tast-mm-misc-job + + tast-perf-arm64-mediatek: *tast-perf-job + tast-perf-arm64-qualcomm: *tast-perf-job + tast-perf-x86-intel: *tast-perf-job + tast-perf-x86-amd: *tast-perf-job + + tast-perf-long-duration-arm64-mediatek: *tast-perf-long-duration-job + tast-perf-long-duration-arm64-qualcomm: *tast-perf-long-duration-job + tast-perf-long-duration-x86-intel: *tast-perf-long-duration-job + tast-perf-long-duration-x86-amd: *tast-perf-long-duration-job + + tast-platform-arm64-mediatek: *tast-platform-job + tast-platform-arm64-qualcomm: *tast-platform-job + tast-platform-x86-intel: *tast-platform-job + tast-platform-x86-amd: *tast-platform-job + + tast-power-arm64-mediatek: *tast-power-job + tast-power-arm64-qualcomm: *tast-power-job + tast-power-x86-intel: *tast-power-job + tast-power-x86-amd: *tast-power-job + + tast-sound-arm64-mediatek: *tast-sound-job + tast-sound-arm64-qualcomm: *tast-sound-job + tast-sound-x86-intel: *tast-sound-job + tast-sound-x86-amd: *tast-sound-job + + tast-ui-arm64-mediatek: *tast-ui-job + tast-ui-arm64-qualcomm: *tast-ui-job + tast-ui-x86-intel: *tast-ui-job + tast-ui-x86-amd: *tast-ui-job + + v4l2-decoder-conformance-av1: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'AV1-TEST-VECTORS' + decoders: + - 'GStreamer-AV1-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-av1-chromium-10bit: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'CHROMIUM-10bit-AV1-TEST-VECTORS' + decoders: + - 'GStreamer-AV1-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-h264: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'JVT-AVC_V1' + decoders: + - 'GStreamer-H.264-V4L2-Gst1.0' + - 'GStreamer-H.264-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-h264-frext: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'JVT-FR-EXT' + decoders: + - 'GStreamer-H.264-V4L2-Gst1.0' + - 'GStreamer-H.264-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-h265: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'JCT-VC-HEVC_V1' + decoders: + - 'GStreamer-H.265-V4L2-Gst1.0' + - 'GStreamer-H.265-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-vp8: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'VP8-TEST-VECTORS' + decoders: + - 'GStreamer-VP8-V4L2-Gst1.0' + - 'GStreamer-VP8-V4L2SL-Gst1.0' + + v4l2-decoder-conformance-vp9: + <<: *v4l2-decoder-conformance-job + params: + <<: *v4l2-decoder-conformance-params + testsuite: 'VP9-TEST-VECTORS' + decoders: + - 'GStreamer-VP9-V4L2-Gst1.0' + - 'GStreamer-VP9-V4L2SL-Gst1.0' + + watchdog-reset-arm64-mediatek: *watchdog-reset-job + watchdog-reset-arm64-qualcomm: *watchdog-reset-job + watchdog-reset-x86-amd: *watchdog-reset-job + watchdog-reset-x86-intel: *watchdog-reset-job \ No newline at end of file diff --git a/config/kernelci.toml b/config/kernelci.toml index 404e309c8..95bcf86dc 100644 --- a/config/kernelci.toml 
+++ b/config/kernelci.toml @@ -6,22 +6,31 @@ verbose = true [trigger] poll_period = 0 startup_delay = 3 -timeout = 60 +timeout = 180 [tarball] kdir = "/home/kernelci/data/src/linux" output = "/home/kernelci/data/output" storage_config = "docker-host" +[patchset] +kdir = "/home/kernelci/data/src/linux-patchset" +output = "/home/kernelci/data/output" +storage_config = "docker-host" +patchset_tmp_file_prefix = "kernel-patch" +patchset_short_hash_len = 13 +allowed_domains = ["patchwork.kernel.org"] +polling_delay_secs = 30 + [scheduler] -output = "/home/kernelci/output" +output = "/home/kernelci/data/output" [notifier] [send_kcidb] kcidb_topic_name = "playground_kcidb_new" kcidb_project_id = "kernelci-production" -origin = "kernelci" +origin = "maestro" [test_report] email_sender = "bot@kernelci.org" @@ -36,3 +45,11 @@ storage_cred = "/home/kernelci/data/ssh/id_rsa_tarball" [storage.k8s-host] storage_cred = "/home/kernelci/data/ssh/id_rsa_tarball" + +#[runtime.lava-collabora] +#runtime_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" +#callback_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" + +#[runtime.lava-collabora-early-access] +#runtime_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA-EARLY-ACCESS" +#callback_token = "REPLACE-LAVA-TOKEN-GENERATED-BY-LAB-LAVA-COLLABORA" diff --git a/config/pipeline.yaml b/config/pipeline.yaml index 0ea81e4e4..00bca695e 100644 --- a/config/pipeline.yaml +++ b/config/pipeline.yaml @@ -6,13 +6,113 @@ # Not directly loaded into the config, only used for YAML aliases in this file _anchors: + arm64-device: &arm64-device + arch: arm64 + boot_method: u-boot + + arm-device: &arm-device + <<: *arm64-device + arch: arm + + baseline: &baseline-job + template: baseline.jinja2 + kind: job + kcidb_test_suite: boot + checkout: &checkout-event channel: node name: checkout state: available + build-k8s-all: &build-k8s-all + event: *checkout-event + runtime: + name: k8s-all + + kbuild: &kbuild-job + template: kbuild.jinja2 + kind: kbuild + rules: + tree: + - '!android' + + kbuild-clang-17-x86: &kbuild-clang-17-x86-job + <<: *kbuild-job + image: kernelci/staging-clang-17:x86-kselftest-kernelci + params: &kbuild-clang-17-x86-params + arch: x86_64 + compiler: clang-17 + defconfig: x86_64_defconfig + + kbuild-clang-17-arm64: &kbuild-clang-17-arm64-job + <<: *kbuild-job + image: kernelci/staging-clang-17:arm64-kselftest-kernelci + params: &kbuild-clang-17-arm64-params + arch: arm64 + compiler: clang-17 + cross_compile: 'aarch64-linux-gnu-' + defconfig: defconfig -api_configs: + kbuild-gcc-12-arm64: &kbuild-gcc-12-arm64-job + <<: *kbuild-job + image: kernelci/staging-gcc-12:arm64-kselftest-kernelci + params: &kbuild-gcc-12-arm64-params + arch: arm64 + compiler: gcc-12 + cross_compile: 'aarch64-linux-gnu-' + defconfig: defconfig + + kbuild-gcc-12-x86: &kbuild-gcc-12-x86-job + <<: *kbuild-job + image: kernelci/staging-gcc-12:x86-kselftest-kernelci + params: &kbuild-gcc-12-x86-params + arch: x86_64 + compiler: gcc-12 + defconfig: x86_64_defconfig + + x86_64-device: &x86_64-device + arch: x86_64 + boot_method: grub + mach: x86 + + amd-platforms: &amd-platforms + - acer-R721T-grunt + - acer-cp514-3wh-r0qs-guybrush + - asus-CM1400CXA-dalboz + - dell-latitude-3445-7520c-skyrim + - hp-14-db0003na-grunt + - hp-11A-G6-EE-grunt + - hp-14b-na0052xx-zork + - hp-x360-14a-cb0001xx-zork + - lenovo-TPad-C13-Yoga-zork + + intel-platforms: &intel-platforms + - acer-cb317-1h-c3z6-dedede + - acer-cbv514-1h-34uz-brya + - acer-chromebox-cxi4-puff + -
acer-cp514-2h-1130g7-volteer + - acer-cp514-2h-1160g7-volteer + - asus-C433TA-AJ0005-rammus + - asus-C436FA-Flip-hatch + - asus-C523NA-A20057-coral + - dell-latitude-5300-8145U-arcada + - dell-latitude-5400-4305U-sarien + - dell-latitude-5400-8665U-sarien + - hp-x360-14-G1-sona + - hp-x360-12b-ca0010nr-n4020-octopus + + mediatek-platforms: &mediatek-platforms + - mt8183-kukui-jacuzzi-juniper-sku16 + - mt8186-corsola-steelix-sku131072 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + qualcomm-platforms: &qualcomm-platforms + - sc7180-trogdor-kingoftown + - sc7180-trogdor-lazor-limozeen + + +api: docker-host: url: http://172.17.0.1:8001 @@ -21,13 +121,13 @@ api_configs: url: https://staging.kernelci.org:9000 early-access: - url: https://kernelci-api.eastus.cloudapp.azure.com + url: https://kernelci-api.westus3.cloudapp.azure.com k8s-host: url: http://kernelci-api:8001 -storage_configs: +storage: docker-host: storage_type: ssh @@ -40,7 +140,7 @@ storage_configs: host: staging.kernelci.org port: 9022 base_url: http://storage.staging.kernelci.org/api/ - + k8s-host: storage_type: ssh host: kernelci-api-ssh @@ -51,7 +151,7 @@ storage_configs: storage_type: azure base_url: https://kciapistagingstorage1.file.core.windows.net/ share: staging - sas_public_token: "?sv=2022-11-02&ss=bfqt&srt=sco&sp=r&se=2123-07-20T22:00:00Z&st=2023-07-21T18:27:25Z&spr=https&sig=TDt3NorDXylmyUtBQnP1S5BZ3uywR06htEGTG%2BSxLWg%3D" + sas_public_token: "?sv=2022-11-02&ss=f&srt=sco&sp=r&se=2024-10-17T19:19:12Z&st=2023-10-17T11:19:12Z&spr=https&sig=sLmFlvZHXRrZsSGubsDUIvTiv%2BtzgDq6vALfkrtWnv8%3D" early-access-azure: <<: *azure-files @@ -72,6 +172,23 @@ runtimes: lab_type: kubernetes context: 'gke_android-kernelci-external_europe-west4-c_kci-eu-west4' + k8s-all: + lab_type: kubernetes + context: + - 'aks-kbuild-medium-1' + + lava-broonie: + lab_type: lava + url: 'https://lava.sirena.org.uk/' + priority_min: 10 + priority_max: 40 + notify: + callback: + token: kernelci-new-api-callback + rules: + tree: + - '!android' + lava-collabora: &lava-collabora-staging lab_type: lava url: https://lava.collabora.dev/ @@ -80,7 +197,6 @@ runtimes: notify: callback: token: kernelci-api-token-staging - url: https://staging.kernelci.org:9100 # ToDo: avoid creating a separate Runtime entry # https://github.com/kernelci/kernelci-core/issues/2088 @@ -89,7 +205,39 @@ runtimes: notify: callback: token: kernelci-api-token-early-access - url: https://staging.kernelci.org:9100 + + lava-collabora-staging: + <<: *lava-collabora-staging + url: https://staging.lava.collabora.dev/ + notify: + callback: + token: kernelci-api-token-lava-staging + + lava-baylibre: + lab_type: lava + url: 'https://lava.baylibre.com/' + notify: + callback: + token: kernelci-new-api + rules: + tree: + - kernelci + - mainline + - next + + lava-qualcomm: + lab_type: lava + url: 'https://lava.infra.foundries.io' + notify: + callback: + token: kernelci-lab-qualcomm + + lava-cip: + lab_type: lava + url: 'https://lava.ciplatform.org/' + notify: + callback: + token: kernel-ci-new-api shell: lab_type: shell @@ -104,20 +252,601 @@ jobs: # template: 'fstests.jinja2' # image: 'kernelci/staging-kernelci' - baseline-x86: - template: baseline.jinja2 + baseline-arm64: *baseline-job + baseline-arm64-broonie: *baseline-job + baseline-arm64-qualcomm: *baseline-job + baseline-arm64-android: *baseline-job + baseline-arm64-mfd: *baseline-job + baseline-arm-android: *baseline-job + baseline-arm-mfd: *baseline-job + baseline-arm: *baseline-job + baseline-arm-baylibre: *baseline-job + 
baseline-x86: *baseline-job + baseline-x86-baylibre: *baseline-job + baseline-x86-qualcomm: *baseline-job + baseline-x86-cip: *baseline-job + baseline-x86-kcidebug-amd: *baseline-job + baseline-x86-kcidebug-intel: *baseline-job + baseline-arm64-kcidebug-mediatek: *baseline-job + baseline-arm64-kcidebug-qualcomm: *baseline-job + baseline-x86-mfd: *baseline-job + + kbuild-gcc-12-arc-haps_hs_smp_defconfig: + <<: *kbuild-job + image: kernelci/staging-gcc-12:arc-kselftest-kernelci + params: + arch: arc + compiler: gcc-12 + cross_compile: 'arc-elf32-' + defconfig: haps_hs_smp_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-clang-17-arm: &kbuild-clang-17-arm-job + <<: *kbuild-job + image: kernelci/staging-clang-17:arm64-kselftest-kernelci + params: &kbuild-clang-17-arm-params + arch: arm + compiler: clang-17 + cross_compile: 'arm-linux-gnueabihf-' + defconfig: multi_v7_defconfig + + kbuild-gcc-12-arm: &kbuild-gcc-12-arm-job + <<: *kbuild-job + image: kernelci/staging-gcc-12:arm64-kselftest-kernelci + params: &kbuild-gcc-12-arm-params + arch: arm + compiler: gcc-12 + cross_compile: 'arm-linux-gnueabihf-' + defconfig: multi_v7_defconfig + + kbuild-clang-17-arm64-android: &kbuild-clang-17-arm64-android-job + <<: *kbuild-clang-17-arm64-job + rules: + tree: + - 'android' + + kbuild-clang-17-arm64-android-allmodconfig: + <<: *kbuild-clang-17-arm64-android-job + params: + <<: *kbuild-clang-17-arm64-params + defconfig: + - defconfig + - allmodconfig + + kbuild-clang-17-arm64-android-allnoconfig: + <<: *kbuild-clang-17-arm64-android-job + params: + <<: *kbuild-clang-17-arm64-params + defconfig: + - defconfig + - allnoconfig + + kbuild-clang-17-arm64-android-big_endian: + <<: *kbuild-clang-17-arm64-android-job + params: + <<: *kbuild-clang-17-arm64-params + fragments: + - CONFIG_CPU_BIG_ENDIAN=y + + kbuild-clang-17-arm64-android-randomize: + <<: *kbuild-clang-17-arm64-android-job + params: + <<: *kbuild-clang-17-arm64-params + fragments: + - CONFIG_RANDOMIZE_BASE=y - kbuild-gcc-10-x86: + kbuild-gcc-12-arm64: + <<: *kbuild-gcc-12-arm64-job + + kbuild-gcc-12-arm64-chromebook-kcidebug: template: kbuild.jinja2 - image: kernelci/staging-gcc-10:x86-kselftest-kernelci + kind: kbuild + image: kernelci/staging-gcc-12:arm64-kselftest-kernelci params: - arch: x86_64 - compiler: gcc-10 - defconfig: x86_64_defconfig + arch: arm64 + compiler: gcc-12 + cross_compile: 'aarch64-linux-gnu-' + cross_compile_compat: 'arm-linux-gnueabihf-' + defconfig: defconfig + fragments: + - lab-setup + - arm64-chromebook + - kcidebug + rules: + tree: + - '!android' + + kbuild-gcc-12-arm64-dtbscheck: + <<: *kbuild-gcc-12-arm64-job + kind: job + params: + <<: *kbuild-gcc-12-arm64-params + dtbs_check: true + kcidb_test_suite: dtbs_check + + kbuild-gcc-12-arm64-preempt_rt: + <<: *kbuild-gcc-12-arm64-job + params: + <<: *kbuild-gcc-12-arm64-params + fragments: + - 'preempt_rt' + defconfig: defconfig + rules: + tree: + - 'stable-rt' + + kbuild-gcc-12-arm64-preempt_rt_chromebook: + <<: *kbuild-gcc-12-arm64-job + params: + <<: *kbuild-gcc-12-arm64-params + fragments: + - 'preempt_rt' + - 'arm64-chromebook' + defconfig: defconfig + rules: + tree: + - 'stable-rt' + + kbuild-gcc-12-arm64-android: &kbuild-gcc-12-arm64-android-job + <<: *kbuild-gcc-12-arm64-job + rules: + tree: + - 'android' + + kbuild-gcc-12-arm64-android-allmodconfig: + <<: *kbuild-gcc-12-arm64-android-job + params: + <<: *kbuild-gcc-12-arm64-params + defconfig: + - defconfig + - allmodconfig + + kbuild-gcc-12-arm64-android-allnoconfig: + <<: 
*kbuild-gcc-12-arm64-android-job + params: + <<: *kbuild-gcc-12-arm64-params + defconfig: + - defconfig + - allnoconfig + + kbuild-gcc-12-arm64-android-big_endian: + <<: *kbuild-gcc-12-arm64-android-job + params: + <<: *kbuild-gcc-12-arm64-params + fragments: + - CONFIG_CPU_BIG_ENDIAN=y + + kbuild-gcc-12-arm64-android-randomize: + <<: *kbuild-gcc-12-arm64-android-job + params: + <<: *kbuild-gcc-12-arm64-params + fragments: + - CONFIG_RANDOMIZE_BASE=y + + kbuild-gcc-12-arm64-mfd: + <<: *kbuild-gcc-12-arm64-job + rules: + tree: + - 'lee-mfd' + + kbuild-clang-17-arm-android: &kbuild-clang-17-arm-android-job + <<: *kbuild-clang-17-arm-job + rules: + tree: + - 'android' + + kbuild-clang-17-arm-android-allmodconfig: + <<: *kbuild-clang-17-arm-android-job + params: + <<: *kbuild-clang-17-arm-params + defconfig: + - imx_v6_v7_defconfig + - 'allmodconfig' + + kbuild-clang-17-arm-android-multi_v5_defconfig: + <<: *kbuild-clang-17-arm-android-job + params: + <<: *kbuild-clang-17-arm-params + defconfig: multi_v5_defconfig + + kbuild-clang-17-arm-android-imx_v6_v7_defconfig: + <<: *kbuild-clang-17-arm-android-job + params: + <<: *kbuild-clang-17-arm-params + defconfig: imx_v6_v7_defconfig + + kbuild-clang-17-arm-android-omap2plus_defconfig: + <<: *kbuild-clang-17-arm-android-job + params: + <<: *kbuild-clang-17-arm-params + defconfig: omap2plus_defconfig + + kbuild-clang-17-arm-android-vexpress_defconfig: + <<: *kbuild-clang-17-arm-android-job + params: + <<: *kbuild-clang-17-arm-params + defconfig: vexpress_defconfig + + kbuild-gcc-12-arm-imx_v6_v7_defconfig: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: imx_v6_v7_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-arm-multi_v5_defconfig: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: multi_v5_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-arm-multi_v7_defconfig: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: multi_v7_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-arm-mfd: + <<: *kbuild-gcc-12-arm-job + rules: + tree: + - 'lee-mfd' + + kbuild-gcc-12-arm-preempt_rt: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + fragments: + - 'preempt_rt' + defconfig: multi_v7_defconfig + rules: + tree: + - 'stable-rt' + + kbuild-gcc-12-arm-vexpress_defconfig: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: vexpress_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-arm-android: &kbuild-gcc-12-arm-android-job + <<: *kbuild-gcc-12-arm-job + rules: + tree: + - 'android' + + kbuild-gcc-12-arm-android-allmodconfig: + <<: *kbuild-gcc-12-arm-android-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: + - imx_v6_v7_defconfig + - allmodconfig + + kbuild-gcc-12-arm-android-multi_v5_defconfig: + <<: *kbuild-gcc-12-arm-android-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: multi_v5_defconfig + + kbuild-gcc-12-arm-android-imx_v6_v7_defconfig: + <<: *kbuild-gcc-12-arm-android-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: imx_v6_v7_defconfig + + kbuild-gcc-12-arm-android-omap2plus_defconfig: + <<: *kbuild-gcc-12-arm-android-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: omap2plus_defconfig + + kbuild-gcc-12-arm-android-vexpress_defconfig: + <<: *kbuild-gcc-12-arm-android-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: vexpress_defconfig + + 
kbuild-gcc-12-arm-omap2plus_defconfig: + <<: *kbuild-gcc-12-arm-job + params: + <<: *kbuild-gcc-12-arm-params + defconfig: omap2plus_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-um: + <<: *kbuild-job + image: kernelci/staging-gcc-12:x86-kselftest-kernelci + params: + arch: um + compiler: gcc-12 + defconfig: defconfig + rules: + tree: + - 'android' + + kbuild-clang-17-i386: &kbuild-clang-17-i386-job + <<: *kbuild-job + image: kernelci/staging-clang-17:x86-kselftest-kernelci + params: &kbuild-clang-17-i386-params + arch: i386 + compiler: clang-17 + defconfig: i386_defconfig + + kbuild-clang-17-i386-android-allnoconfig: + <<: *kbuild-clang-17-i386-job + params: + <<: *kbuild-clang-17-i386-params + defconfig: + - i386_defconfig + - allnoconfig + rules: + tree: + - 'android' + + kbuild-gcc-12-i386: &kbuild-gcc-12-i386-job + <<: *kbuild-job + image: kernelci/staging-gcc-12:x86-kselftest-kernelci + params: &kbuild-gcc-12-i386-params + arch: i386 + compiler: gcc-12 + defconfig: i386_defconfig + + kbuild-gcc-12-i386-allnoconfig: + <<: *kbuild-gcc-12-i386-job + params: + <<: *kbuild-gcc-12-i386-params + defconfig: allnoconfig + disable_modules: true + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-i386-tinyconfig: + <<: *kbuild-gcc-12-i386-job + params: + <<: *kbuild-gcc-12-i386-params + defconfig: tinyconfig + disable_modules: true + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-i386-android-allnoconfig: + <<: *kbuild-gcc-12-i386-job + params: + <<: *kbuild-gcc-12-i386-params + defconfig: + - i386_defconfig + - allnoconfig + rules: + tree: + - 'android' + + kbuild-gcc-12-i386-mfd: + <<: *kbuild-gcc-12-i386-job + rules: + tree: + - 'lee-mfd' + + kbuild-gcc-12-mips-32r2el_defconfig: + <<: *kbuild-job + image: kernelci/staging-gcc-12:mips-kselftest-kernelci + params: + arch: mips + compiler: gcc-12 + cross_compile: 'mips-linux-gnu-' + defconfig: 32r2el_defconfig + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-clang-17-riscv: &kbuild-clang-17-riscv-job + <<: *kbuild-job + image: kernelci/staging-clang-17:riscv64-kselftest-kernelci + params: &kbuild-clang-17-riscv-params + arch: riscv + compiler: clang-17 + cross_compile: 'riscv64-linux-gnu-' + defconfig: defconfig + + kbuild-clang-17-riscv-android-defconfig: + <<: *kbuild-clang-17-riscv-job + params: + <<: *kbuild-clang-17-riscv-params + defconfig: + - defconfig + - allnoconfig + rules: &kbuild-riscv-android-rules + min_version: + version: 4 + patchlevel: 19 + tree: + - 'android' + + kbuild-gcc-12-riscv: &kbuild-gcc-12-riscv-job + <<: *kbuild-job + image: kernelci/staging-gcc-12:riscv64-kselftest-kernelci + params: &kbuild-gcc-12-riscv-params + arch: riscv + compiler: gcc-12 + cross_compile: 'riscv64-linux-gnu-' + defconfig: defconfig + + kbuild-gcc-12-riscv-android-defconfig: + <<: *kbuild-gcc-12-riscv-job + params: + <<: *kbuild-gcc-12-riscv-params + defconfig: + - defconfig + - allnoconfig + rules: + <<: *kbuild-riscv-android-rules + + kbuild-gcc-12-riscv-nommu_k210_defconfig: + <<: *kbuild-gcc-12-riscv-job + params: + <<: *kbuild-gcc-12-riscv-params + defconfig: nommu_k210_defconfig + disable_modules: true + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-riscv-mfd: + <<: *kbuild-gcc-12-riscv-job + rules: + tree: + - 'lee-mfd' + + kbuild-clang-17-x86: + <<: *kbuild-clang-17-x86-job + + kbuild-clang-17-x86-android-allmodconfig: + <<: *kbuild-clang-17-x86-job + params: + <<: *kbuild-clang-17-x86-params + defconfig: + - x86_64_defconfig + - allmodconfig + 
rules: + tree: + - 'android' + + kbuild-clang-17-x86-android-allnoconfig: + <<: *kbuild-clang-17-x86-job + params: + <<: *kbuild-clang-17-x86-params + defconfig: + - x86_64_defconfig + - allnoconfig + rules: + tree: + - 'android' + + kbuild-gcc-12-x86: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + fragments: + - lab-setup + - 'x86-board' + + kbuild-gcc-12-x86-allnoconfig: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + defconfig: allnoconfig + disable_modules: true + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-x86-kcidebug: + <<: *kbuild-gcc-12-i386-job + params: + <<: *kbuild-gcc-12-i386-params + defconfig: defconfig + fragments: + - x86-board + - kcidebug + rules: + tree: + - '!android' + + kbuild-gcc-12-x86-tinyconfig: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + defconfig: tinyconfig + disable_modules: true + rules: + tree: + - 'stable-rc' + - 'kernelci' + + kbuild-gcc-12-x86-android-allmodconfig: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + defconfig: + - x86_64_defconfig + - allmodconfig + rules: + tree: + - 'android' + + kbuild-gcc-12-x86-android-allnoconfig: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + defconfig: + - x86_64_defconfig + - allnoconfig + rules: + tree: + - 'android' + + kbuild-gcc-12-x86-preempt_rt: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + fragments: + - 'preempt_rt' + defconfig: defconfig + rules: + tree: + - 'stable-rt' + + kbuild-gcc-12-x86-preempt_rt_x86_board: + <<: *kbuild-gcc-12-x86-job + params: + <<: *kbuild-gcc-12-x86-params + fragments: + - 'preempt_rt' + - 'x86-board' + defconfig: defconfig + rules: + tree: + - 'stable-rt' + + kbuild-gcc-12-x86-mfd: + <<: *kbuild-gcc-12-x86-job + rules: + tree: + - 'lee-mfd' kunit: &kunit-job template: kunit.jinja2 - image: kernelci/staging-gcc-10:x86-kunit-kernelci + kind: job + image: kernelci/staging-gcc-12:x86-kunit-kernelci + kcidb_test_suite: kunit kunit-x86_64: <<: *kunit-job @@ -126,55 +855,706 @@ jobs: kver: template: kver.jinja2 + kind: test image: kernelci/staging-kernelci + kcidb_test_suite: kernelci_kver + + kselftest-cpufreq: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: cpufreq + job_timeout: 10 + kcidb_test_suite: kselftest.cpufreq + + kselftest-dmabuf-heaps: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: dmabuf-heaps + job_timeout: 10 + kcidb_test_suite: kselftest.dmabuf-heaps + + kselftest-dt: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240221.0/{debarch}' + collections: dt + job_timeout: 10 + rules: + min_version: + version: 6 + patchlevel: 7 + kcidb_test_suite: kselftest.dt + kselftest-exec: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: exec + job_timeout: 10 + kcidb_test_suite: kselftest.exec + + kselftest-iommu: + template: kselftest.jinja2 + kind: job + params: + nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}' + collections: iommu + job_timeout: 10 + kcidb_test_suite: kselftest.iommu + + rt-tests: &rt-tests + template: 
rt-tests.jinja2 + kind: job + nfsroot: 'https://storage.kernelci.org/images/rootfs/debian/bookworm-rt/20240313.0/{debarch}' + params: &rt-tests-params + job_timeout: '10' + duration: '60s' + kcidb_test_suite: rt-tests + rules: + fragments: + - preempt_rt + + rt-tests-cyclicdeadline: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'cyclicdeadline' + kcidb_test_suite: rt-tests.cyclicdeadline + + rt-tests-cyclictest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'cyclictest' + kcidb_test_suite: rt-tests.cyclictest + + rt-tests-pi-stress: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'pi-stress' + kcidb_test_suite: rt-tests.pi-params + + rt-tests-pmqtest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'pmqtest' + kcidb_test_suite: rt-tests.pmqtest + + rt-tests-ptsematest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'ptsematest' + kcidb_test_suite: rt-tests.ptsematest + + rt-tests-rt-migrate-test: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'rt-migrate-test' + kcidb_test_suite: rt-tests.rt-migrate-test + + rt-tests-rtla-osnoise: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'rtla-osnoise' + tst_group: 'rtla' + kcidb_test_suite: rt-tests.rtla-osnoise + + rt-tests-rtla-timerlat: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'rtla-timerlat' + tst_group: 'rtla' + kcidb_test_suite: rt-tests.rtla-timerlat + + rt-tests-signaltest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'signaltest' + kcidb_test_suite: rt-tests.signaltest + + rt-tests-sigwaittest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'sigwaittest' + kcidb_test_suite: rt-tests.sigwaittest + + rt-tests-svsematest: + <<: *rt-tests + params: + <<: *rt-tests-params + tst_cmd: 'svsematest' + kcidb_test_suite: rt-tests.svsematest + + # amd64-only temporary + sleep: + template: sleep.jinja2 + kind: job + params: + nfsroot: http://storage.kernelci.org/images/rootfs/debian/bullseye/20240129.0/{debarch} + sleep_params: mem freeze + kcidb_test_suite: kernelci_sleep trees: + broonie-misc: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/broonie/misc.git" + + broonie-regmap: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap.git" + + broonie-regulator: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git" + + broonie-sound: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git" + + broonie-spi: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git" + kernelci: url: "https://github.com/kernelci/linux.git" mainline: url: 'https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git' + stable-rc: + url: 'https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git' + + stable-rt: + url: 'https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git' + + lee-mfd: + url: "https://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git" + + next: + url: 'https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git' + + mediatek: + url: 'https://git.kernel.org/pub/scm/linux/kernel/git/mediatek/linux.git' + + android: + url: 'https://android.googlesource.com/kernel/common' -device_types: + collabora-next: + url: 'https://gitlab.collabora.com/kernel/collabora-next.git' + + collabora-chromeos-kernel: + url: 'https://gitlab.collabora.com/google/chromeos-kernel.git' + + media: + url: 'https://git.linuxtv.org/media_stage.git' + +platforms: docker: - base_name: docker - class: docker - qemu-x86: + 
qemu-x86: &qemu-device base_name: qemu arch: x86_64 boot_method: qemu mach: qemu + context: + arch: x86_64 + cpu: qemu64 + guestfs_interface: ide + + qemu: *qemu-device + + minnowboard-turbot-E3826: *x86_64-device + aaeon-UPN-EHLX4RE-A10-0864: *x86_64-device + + bcm2711-rpi-4-b: + <<: *arm64-device + mach: broadcom + dtb: dtbs/broadcom/bcm2711-rpi-4-b.dtb + + bcm2836-rpi-2-b: + <<: *arm-device + mach: broadcom + dtb: dtbs/bcm2836-rpi-2-b.dtb + + imx6q-sabrelite: + <<: *arm-device + mach: imx + dtb: dtbs/imx6q-sabrelite.dtb + + sun50i-h5-libretech-all-h3-cc: + <<: *arm64-device + mach: allwinner + dtb: dtbs/allwinner/sun50i-h5-libretech-all-h3-cc.dtb + + sun7i-a20-cubieboard2: + <<: *arm-device + mach: allwinner + dtb: dtbs/sun7i-a20-cubieboard2.dtb + + meson-g12b-a311d-khadas-vim3: + <<: *arm64-device + mach: amlogic + dtb: dtbs/amlogic/meson-g12b-a311d-khadas-vim3.dtb + + odroid-xu3: + <<: *arm-device + mach: samsung + dtb: dtbs/exynos5422-odroidxu3.dtb + + qcs6490-rb3gen2: + <<: *arm64-device + boot_method: fastboot + mach: qcom + dtb: dtbs/qcom/qcs6490-rb3gen2.dtb + + rk3288-rock2-square: + <<: *arm-device + mach: rockchip + dtb: dtbs/rk3288-rock2-square.dtb + + rk3288-veyron-jaq: + <<: *arm-device + boot_method: depthcharge + mach: rockchip + dtb: dtbs/rk3288-veyron-jaq.dtb + + rk3399-gru-kevin: + <<: *arm64-device + boot_method: depthcharge + mach: rockchip + dtb: dtbs/rockchip/rk3399-gru-kevin.dtb + + rk3399-rock-pi-4b: + <<: *arm64-device + mach: rockchip + dtb: dtbs/rockchip/rk3399-rock-pi-4b.dtb + + rk3588-rock-5b: + <<: *arm64-device + mach: rockchip + dtb: dtbs/rockchip/rk3588-rock-5b.dtb + + sun50i-h6-pine-h64: + <<: *arm64-device + mach: allwinner + dtb: dtbs/allwinner/sun50i-h6-pine-h64.dtb kubernetes: - base_name: kubernetes - class: kubernetes shell: - base_name: shell - class: shell scheduler: + - job: baseline-arm64 + event: + channel: node + name: kbuild-gcc-12-arm64 + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2711-rpi-4-b + - meson-g12b-a311d-khadas-vim3 + - rk3399-gru-kevin + - rk3399-rock-pi-4b + - rk3588-rock-5b + - sun50i-h6-pine-h64 + + - job: baseline-arm64-broonie + event: + channel: node + name: kbuild-gcc-12-arm64 + result: pass + runtime: + type: lava + name: lava-broonie + platforms: + - sun50i-h5-libretech-all-h3-cc + + - job: baseline-arm64-qualcomm + event: + channel: node + name: kbuild-gcc-12-arm64 + result: pass + runtime: + type: lava + name: lava-qualcomm + platforms: + - bcm2711-rpi-4-b + - qcs6490-rb3gen2 + + - job: baseline-arm64-android + event: + channel: node + name: kbuild-gcc-12-arm64-android + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2711-rpi-4-b + - meson-g12b-a311d-khadas-vim3 + - rk3399-gru-kevin + - rk3399-rock-pi-4b + - rk3588-rock-5b + - sun50i-h6-pine-h64 + + - job: baseline-arm64-mfd + event: + channel: node + name: kbuild-gcc-12-arm64-mfd + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2711-rpi-4-b + + - job: baseline-arm-android + event: + channel: node + name: kbuild-gcc-12-arm-android + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2836-rpi-2-b + - imx6q-sabrelite + - odroid-xu3 + - rk3288-rock2-square + - rk3288-veyron-jaq + + - job: baseline-arm-mfd + event: + channel: node + name: kbuild-gcc-12-arm-mfd + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2836-rpi-2-b + + - job: baseline-arm + event: + channel: node + name: kbuild-gcc-12-arm + result: pass + 
runtime: + type: lava + name: lava-collabora + platforms: + - bcm2836-rpi-2-b + - imx6q-sabrelite + - odroid-xu3 + - rk3288-rock2-square + - rk3288-veyron-jaq + + - job: baseline-arm-baylibre + event: + channel: node + name: kbuild-gcc-12-arm + result: pass + runtime: + type: lava + name: lava-baylibre + platforms: + - sun7i-a20-cubieboard2 + - job: baseline-x86 event: channel: node - name: kbuild-gcc-10-x86 + name: kbuild-gcc-12-x86 result: pass runtime: type: lava + name: lava-collabora platforms: - qemu-x86 + - minnowboard-turbot-E3826 - - job: kbuild-gcc-10-x86 - event: *checkout-event + - job: baseline-x86-baylibre + event: + channel: node + name: kbuild-gcc-12-x86 + result: pass runtime: - type: kubernetes + type: lava + name: lava-baylibre + platforms: + - qemu + + - job: baseline-x86-kcidebug-amd + event: + channel: node + name: kbuild-gcc-12-x86-kcidebug + result: pass + runtime: + type: lava + name: lava-collabora + platforms: *amd-platforms + + - job: baseline-x86-kcidebug-intel + event: + channel: node + name: kbuild-gcc-12-x86-kcidebug + result: pass + runtime: + type: lava + name: lava-collabora + platforms: *intel-platforms + + - job: baseline-arm64-kcidebug-mediatek + event: + channel: node + name: kbuild-gcc-12-arm64-chromebook-kcidebug + result: pass + runtime: + type: lava + name: lava-collabora + platforms: *mediatek-platforms + + - job: baseline-arm64-kcidebug-qualcomm + event: + channel: node + name: kbuild-gcc-12-arm64-chromebook-kcidebug + result: pass + runtime: + type: lava + name: lava-collabora + platforms: *qualcomm-platforms + + - job: baseline-x86-cip + event: + channel: node + name: kbuild-gcc-12-x86 + result: pass + runtime: + type: lava + name: lava-cip + platforms: + - qemu + + - job: baseline-x86-mfd + event: + channel: node + name: kbuild-gcc-12-x86-mfd + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - minnowboard-turbot-E3826 + + - job: kbuild-clang-17-x86 + <<: *build-k8s-all + + - job: kbuild-clang-17-x86-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-x86-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm64-android + <<: *build-k8s-all + + - job: kbuild-clang-17-arm64-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm64-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm64-android-big_endian + <<: *build-k8s-all + + - job: kbuild-clang-17-arm64-android-randomize + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64 + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-dtbscheck + <<: *build-k8s-all + rules: + tree: + - next:master + + - job: kbuild-gcc-12-arm64-preempt_rt + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-preempt_rt_chromebook + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-android + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-android-big_endian + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-android-randomize + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-mfd + <<: *build-k8s-all + +# Example of same job name to apply to different tree/branch +# - job: kbuild-gcc-12-arm64-dtbscheck +# <<: *build-k8s-all +# rules: +# tree: +# - kernelci:staging-next + + - job: kbuild-clang-17-arm-android + <<: *build-k8s-all + + - job: kbuild-clang-17-arm-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm-android-multi_v5_defconfig + <<: *build-k8s-all + + - 
job: kbuild-clang-17-arm-android-imx_v6_v7_defconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm-android-omap2plus_defconfig + <<: *build-k8s-all + + - job: kbuild-clang-17-arm-android-vexpress_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android-multi_v5_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android-imx_v6_v7_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android-omap2plus_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-android-vexpress_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-chromebook-kcidebug + <<: *build-k8s-all + + - job: kbuild-gcc-12-um + <<: *build-k8s-all + + - job: kbuild-gcc-12-i386 + <<: *build-k8s-all + + - job: kbuild-clang-17-riscv-android-defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-riscv + <<: *build-k8s-all + + - job: kbuild-gcc-12-riscv-android-defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86 + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-kcidebug + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-tinyconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-android-allmodconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-preempt_rt + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-preempt_rt_x86_board + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-mfd + <<: *build-k8s-all + + - job: kbuild-clang-17-i386-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-i386-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-i386-tinyconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-i386-android-allnoconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-i386-mfd + <<: *build-k8s-all + + - job: kbuild-gcc-12-mips-32r2el_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-riscv-nommu_k210_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-riscv-mfd + <<: *build-k8s-all + + - job: kbuild-gcc-12-arc-haps_hs_smp_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-imx_v6_v7_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-multi_v5_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-multi_v7_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-mfd + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-vexpress_defconfig + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-preempt_rt + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm-omap2plus_defconfig + <<: *build-k8s-all - job: kunit event: *checkout-event @@ -191,13 +1571,35 @@ scheduler: runtime: type: shell + - job: kselftest-dt + event: + channel: node + name: kbuild-gcc-12-arm64 + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - bcm2711-rpi-4-b + + - job: sleep + event: + channel: node + name: kbuild-gcc-12-x86 + result: pass + runtime: + type: lava + name: lava-collabora + platforms: + - acer-chromebox-cxi4-puff + # ----------------------------------------------------------------------------- # Legacy configuration data (still used by trigger service) # build_environments: - gcc-10: + gcc-12: cc: gcc cc_version: 10 arch_params: @@ -207,17 +1609,70 @@ build_environments: build_variants: variants: &build-variants - gcc-10: - build_environment: gcc-10 + gcc-12: + build_environment: gcc-12 architectures: x86_64: base_defconfig: 
'x86_64_defconfig' filters: - regex: { defconfig: 'x86_64_defconfig' } + arm64: + base_defconfig: 'defconfig' + filters: + - regex: { defconfig: 'defconfig' } + arm: + base_defconfig: 'multi_v7_defconfig' + filters: + - regex: { defconfig: 'multi_v7_defconfig' } build_configs: + broonie-misc: + tree: broonie-misc + branch: 'for-kernelci' + variants: *build-variants + + broonie-regmap: + tree: broonie-regmap + branch: 'for-next' + variants: *build-variants + + broonie-regmap-fixes: + tree: broonie-regmap + branch: 'for-linus' + variants: *build-variants + + broonie-regulator: + tree: broonie-regulator + branch: 'for-next' + variants: *build-variants + + broonie-regulator-fixes: + tree: broonie-regulator + branch: 'for-linus' + variants: *build-variants + + broonie-sound: + tree: broonie-sound + branch: 'for-next' + variants: *build-variants + + broonie-sound-fixes: + tree: broonie-sound + branch: 'for-linus' + variants: *build-variants + + broonie-spi: + tree: broonie-spi + branch: 'for-next' + variants: *build-variants + + broonie-spi-fixes: + tree: broonie-spi + branch: 'for-linus' + variants: *build-variants + kernelci_staging-mainline: tree: kernelci branch: 'staging-mainline' @@ -237,3 +1692,194 @@ build_configs: tree: mainline branch: 'master' variants: *build-variants + + stable-rc_4.19: &stable-rc + tree: stable-rc + branch: 'linux-4.19.y' + variants: *build-variants + + stable-rc_5.4: + <<: *stable-rc + branch: 'linux-5.4.y' + + stable-rc_5.10: + <<: *stable-rc + branch: 'linux-5.10.y' + + stable-rc_5.15: + <<: *stable-rc + branch: 'linux-5.15.y' + + stable-rc_6.1: + <<: *stable-rc + branch: 'linux-6.1.y' + + stable-rc_6.6: + <<: *stable-rc + branch: 'linux-6.6.y' + + stable-rc_6.7: + <<: *stable-rc + branch: 'linux-6.7.y' + + next_master: + tree: next + branch: 'master' + variants: *build-variants + + mediatek_for_next: + tree: mediatek + branch: 'for-next' + variants: *build-variants + + android_4.19-stable: + tree: android + branch: 'android-4.19-stable' + + android_mainline: + tree: android + branch: 'android-mainline' + + android_mainline_tracking: + tree: android + branch: 'android-mainline-tracking' + + android11-5.4: + tree: android + branch: 'android11-5.4' + + android12-5.4: + tree: android + branch: 'android12-5.4' + + android12-5.4-lts: + tree: android + branch: 'android12-5.4-lts' + + android12-5.10: + tree: android + branch: 'android12-5.10' + + android12-5.10-lts: + tree: android + branch: 'android12-5.10-lts' + + android13-5.10: + tree: android + branch: 'android13-5.10' + + android13-5.10-lts: + tree: android + branch: 'android13-5.10-lts' + + android13-5.15: + tree: android + branch: 'android13-5.15' + + android13-5.15-lts: + tree: android + branch: 'android13-5.15-lts' + + android14-5.15: + tree: android + branch: 'android14-5.15' + + android14-5.15-lts: + tree: android + branch: 'android14-5.15-lts' + + android14-6.1: + tree: android + branch: 'android14-6.1' + + android14-6.1-lts: + tree: android + branch: 'android14-6.1-lts' + + android15-6.1: + tree: android + branch: 'android15-6.1' + + android15-6.6: + tree: android + branch: 'android15-6.6' + + android15-6.6-lts: + tree: android + branch: 'android15-6.6-lts' + + collabora-next_for-kernelci: + tree: collabora-next + branch: 'for-kernelci' + variants: *build-variants + + collabora-chromeos-kernel_for-kernelci: + tree: collabora-chromeos-kernel + branch: 'for-kernelci' + variants: *build-variants + + lee_mfd: + tree: lee-mfd + branch: 'for-mfd-next' + + media_master: + tree: media + branch: 'master' + 
variants: *build-variants + + media_fixes: + tree: media + branch: 'fixes' + variants: *build-variants + + stable-rt_v4.14-rt: + tree: stable-rt + branch: 'v4.14-rt' + + stable-rt_v4.14-rt-next: + tree: stable-rt + branch: 'v4.14-rt-next' + + stable-rt_v4.19-rt: + tree: stable-rt + branch: 'v4.19-rt' + + stable-rt_v4.19-rt-next: + tree: stable-rt + branch: 'v4.19-rt-next' + + stable-rt_v5.4-rt: + tree: stable-rt + branch: 'v5.4-rt' + + stable-rt_v5.4-rt-next: + tree: stable-rt + branch: 'v5.4-rt-next' + + stable-rt_v5.10-rt: + tree: stable-rt + branch: 'v5.10-rt' + + stable-rt_v5.10-rt-next: + tree: stable-rt + branch: 'v5.10-rt-next' + + stable-rt_v5.15-rt: + tree: stable-rt + branch: 'v5.15-rt' + + stable-rt_v5.15-rt-next: + tree: stable-rt + branch: 'v5.15-rt-next' + + stable-rt_v6.1-rt: + tree: stable-rt + branch: 'v6.1-rt' + + stable-rt_v6.1-rt-next: + tree: stable-rt + branch: 'v6.1-rt-next' + + stable-rt_v6.6-rt: + tree: stable-rt + branch: 'v6.6-rt' diff --git a/config/platforms-chromeos.yaml b/config/platforms-chromeos.yaml new file mode 100644 index 000000000..23bcf0b23 --- /dev/null +++ b/config/platforms-chromeos.yaml @@ -0,0 +1,169 @@ +_anchors: + + arm64-chromebook-device: &arm64-chromebook-device + arch: arm64 + boot_method: depthcharge + params: &arm64-chromebook-device-params + flash_kernel: + url: https://storage.chromeos.kernelci.org/images/kernel/v6.1-{mach} + image: 'kernel/Image' + nfsroot: https://storage.chromeos.kernelci.org/images/rootfs/debian/bookworm-cros-flash/20240422.0/{debarch} + tast_tarball: https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-{base_name}/20240514.0/{debarch}/tast.tgz + rules: &arm64-chromebook-device-rules + defconfig: + - '!allnoconfig' + - '!allmodconfig' + fragments: + - 'arm64-chromebook' + + x86-chromebook-device: &x86-chromebook-device + arch: x86_64 + boot_method: depthcharge + mach: x86 + params: + <<: *arm64-chromebook-device-params + flash_kernel: + url: https://storage.chromeos.kernelci.org/images/kernel/cros-20230815-amd64/clang-14 + image: 'kernel/bzImage' + rules: + <<: *arm64-chromebook-device-rules + fragments: + - 'x86-board' + +platforms: + acer-R721T-grunt: &chromebook-grunt-device + <<: *x86-chromebook-device + base_name: grunt + + acer-cb317-1h-c3z6-dedede: + <<: *x86-chromebook-device + base_name: dedede + + acer-cbv514-1h-34uz-brya: + <<: *x86-chromebook-device + base_name: brya + + acer-chromebox-cxi4-puff: + <<: *x86-chromebook-device + base_name: puff + + acer-cp514-2h-1130g7-volteer: &chromebook-volteer-device + <<: *x86-chromebook-device + base_name: volteer + + acer-cp514-2h-1160g7-volteer: *chromebook-volteer-device + + acer-cp514-3wh-r0qs-guybrush: + <<: *x86-chromebook-device + base_name: guybrush + + asus-C433TA-AJ0005-rammus: + <<: *x86-chromebook-device + base_name: rammus + + asus-C436FA-Flip-hatch: + <<: *x86-chromebook-device + base_name: hatch + + asus-C523NA-A20057-coral: + <<: *x86-chromebook-device + base_name: coral + + asus-CM1400CXA-dalboz: &chromebook-zork-device + <<: *x86-chromebook-device + base_name: zork + + dell-latitude-3445-7520c-skyrim: + <<: *x86-chromebook-device + base_name: skyrim + + dell-latitude-5300-8145U-arcada: &chromebook-sarien-device + <<: *x86-chromebook-device + base_name: sarien + + dell-latitude-5400-4305U-sarien: *chromebook-sarien-device + dell-latitude-5400-8665U-sarien: *chromebook-sarien-device + hp-14-db0003na-grunt: *chromebook-grunt-device + hp-11A-G6-EE-grunt: *chromebook-grunt-device + hp-14b-na0052xx-zork: *chromebook-zork-device + + 
hp-x360-14-G1-sona: + <<: *x86-chromebook-device + base_name: nami + + hp-x360-12b-ca0010nr-n4020-octopus: + <<: *x86-chromebook-device + base_name: octopus + + hp-x360-14a-cb0001xx-zork: *chromebook-zork-device + lenovo-TPad-C13-Yoga-zork: *chromebook-zork-device + + mt8183-kukui-jacuzzi-juniper-sku16: &mediatek-chromebook-device + <<: *arm64-chromebook-device + base_name: jacuzzi + mach: mediatek + dtb: dtbs/mediatek/mt8183-kukui-jacuzzi-juniper-sku16.dtb + rules: + <<: *arm64-chromebook-device-rules + min_version: + version: 6 + patchlevel: 1 + + mt8186-corsola-steelix-sku131072: + <<: *mediatek-chromebook-device + base_name: corsola + dtb: dtbs/mediatek/mt8186-corsola-steelix-sku131072.dtb + params: + <<: *arm64-chromebook-device-params + flash_kernel: + url: https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-corsola/20240514.0/arm64 + image: 'Image' + context: + test_character_delay: 10 + rules: + <<: *arm64-chromebook-device-rules + min_version: + version: 6 + patchlevel: 9 + + mt8192-asurada-spherion-r0: + <<: *mediatek-chromebook-device + base_name: asurada + dtb: dtbs/mediatek/mt8192-asurada-spherion-r0.dtb + context: + test_character_delay: 10 + rules: + <<: *arm64-chromebook-device-rules + min_version: + version: 6 + patchlevel: 4 + + mt8195-cherry-tomato-r2: + <<: *mediatek-chromebook-device + base_name: cherry + dtb: dtbs/mediatek/mt8195-cherry-tomato-r2.dtb + context: + test_character_delay: 10 + rules: + <<: *arm64-chromebook-device-rules + min_version: + version: 6 + patchlevel: 7 + + sc7180-trogdor-kingoftown: &trogdor-chromebook-device + <<: *arm64-chromebook-device + base_name: trogdor + mach: qcom + dtb: + - dtbs/qcom/sc7180-trogdor-kingoftown.dtb + - dtbs/qcom/sc7180-trogdor-kingoftown-r1.dtb + params: + <<: *arm64-chromebook-device-params + flash_kernel: + url: https://storage.chromeos.kernelci.org/images/kernel/v6.1-qualcomm + image: 'kernel/Image' + dtb: 'dtbs/qcom/sc7180-trogdor-kingoftown-r1.dtb' + + sc7180-trogdor-lazor-limozeen: + <<: *trogdor-chromebook-device + dtb: dtbs/qcom/sc7180-trogdor-lazor-limozeen-nots-r5.dtb diff --git a/config/reports/test-report.jinja2 b/config/reports/test-report.jinja2 index 2ce108ce1..54332a54b 100644 --- a/config/reports/test-report.jinja2 +++ b/config/reports/test-report.jinja2 @@ -5,11 +5,11 @@ Summary ======= -Tree: {{ root.revision.tree }} -Branch: {{ root.revision.branch }} -Describe: {{ root.revision.describe }} -URL: {{ root.revision.url }} -SHA1: {{ root.revision.commit }} +Tree: {{ root.data.kernel_revision.tree }} +Branch: {{ root.data.kernel_revision.branch }} +Describe: {{ root.data.kernel_revision.describe }} +URL: {{ root.data.kernel_revision.url }} +SHA1: {{ root.data.kernel_revision.commit }} {%- if jobs.items() %} {{ '%-17s %s %-8s %s %-8s %s %-8s'|format( diff --git a/config/result-summary.yaml b/config/result-summary.yaml new file mode 100644 index 000000000..40e7d8933 --- /dev/null +++ b/config/result-summary.yaml @@ -0,0 +1,863 @@ +# SPDX-License-Identifier: LGPL-2.1-or-later + +# List of report presets +# +# Each item defines a report preset containing a set of search +# parameters and values. + +# Each report preset must include a "metadata" section and a "preset" +# section. The "metadata" section is expected to contain at least the +# "action" to be performed ("summary" or "monitor") and the "template" +# file used for the summary generation. This template must be a file in +# config/result_summary_templates. 
Other optional fields are supported:
+#
+# - "output_file": name of the file where the output will be stored (in
+#   data/output)
+# - "title": title for the report
+#
+# The "preset" section contains the query definition.
+
+# Inside each preset section, top-level blocks define the 'kind' of
+# result to search for, ie. test, kbuild, regression.
+# The dict of items in each block specifies the query parameters:
+# {query_parameter: value}
+# The query parameter may include suffixes like __gt, __lt or __re to
+# search for values "greater than" or "less than" a given value, or for
+# a regexp text match.
+
+# Example preset: searches for dmesg baseline tests for
+# arm64 in two repo trees, and also for all results of tests whose name
+# contains "mytest", and generates a summary:
+#
+# default-test:
+#   metadata:
+#     action: summary
+#     title: "Example test report"
+#     template: "mytemplate.jinja2"
+#     output_file: "default-test.txt"
+#   preset:
+#     test:
+#       # Query by group, name, arch and list of repos
+#       - group__re: baseline
+#         name: dmesg
+#         data.arch: arm64
+#         repos:
+#           - tree: stable-rc
+#             branch: linux-5.4.y
+#           - tree: stable-rc
+#             branch: linux-5.15.y
+#           - tree: mytree
+#       # Query by name
+#       - name__re: mytest
+
+
+### Monitor presets
+
+# Monitor active stable-rc kernel build regressions.
+# This will generate a report each time a new regression is detected or
+# when an existing regression is updated (ie. the test case has failed
+# again).
+monitor-active-stable-rc-build-regressions:
+  metadata:
+    action: monitor
+    title: "stable-rc kernel build regression"
+    template: "generic-regression-report.html.jinja2"
+    output_file: "stable-rc-build-regression-report.html"
+  preset:
+    regression:
+      - name__re: kbuild
+        # Regressions with result = fail are "active", ie. still failing
+        result: fail
+        data.error_code: null
+        repos:
+          - tree: stable-rc
+
+# Monitor all stable-rc kernel kbuild failures
+monitor-stable-rc-build-failures:
+  metadata:
+    action: monitor
+    title: "stable-rc kernel build failure"
+    template: "generic-test-failure-report.html.jinja2"
+    output_file: "stable-rc-build-failure.html"
+  preset:
+    kbuild:
+      - result: fail
+        data.error_code: null
+        repos:
+          - tree: stable-rc
+
+# Monitor active stable-rc kernel boot regressions
+monitor-active-stable-rc-boot-regressions:
+  metadata:
+    action: monitor
+    title: "stable-rc kernel boot regression"
+    template: "generic-regression-report.html.jinja2"
+    output_file: "stable-rc-boot-regression-report.html"
+  preset:
+    regression:
+      - group__re: baseline
+        # Regressions with result = fail are "active", ie.
still failing + result: fail + data.error_code: null + repos: + - tree: stable-rc + +# Monitor all stable-rc kernel boot failures +monitor-stable-rc-boot-failures: + metadata: + action: monitor + title: "stable-rc kernel boot failures" + template: "generic-test-failure-report.html.jinja2" + output_file: "stable-rc-boot-failure.html" + preset: + test: + - group__re: baseline + result: fail + data.error_code: null + repos: + - tree: stable-rc + +# Monitor all stable-rt kernel kbuild regressions +monitor-stable-rt-build-regressions: + metadata: + action: monitor + title: "stable-rt kernel build regression" + template: "generic-regression-report.html.jinja2" + output_file: "stable-rt-build-regression-report.html" + preset: + regression: + - name__re: kbuild + result: fail + data.error_code: null + repos: + - tree: stable-rt + +# Monitor all stable-rt kernel kbuild failures +monitor-stable-rt-build-failures: + metadata: + action: monitor + title: "stable-rt kernel build failure" + template: "generic-test-failure-report.html.jinja2" + output_file: "stable-rt-build-failure.html" + preset: + kbuild: + - result: fail + data.error_code: null + repos: + - tree: stable-rt + +monitor-all-regressions: + metadata: + action: monitor + title: "KernelCI regression report" + template: "generic-regression-report.html.jinja2" + output_file: "regression-report.html" + preset: + regression: + - data.error_code: null + +monitor-all-regressions__runtime-errors: + metadata: + action: monitor + title: "KernelCI regression report" + template: "generic-regression-report.html.jinja2" + output_file: "regression-report.html" + preset: + regression: + - data.error_code__ne: null + +monitor-all-test-failures__runtime-errors: + metadata: + action: monitor + title: "KernelCI test failure report" + template: "generic-test-failure-report.html.jinja2" + output_file: "test-failure-report.html" + preset: + test: + - result: fail + data.error_code__ne: null + +monitor-all-test-failures: + metadata: + action: monitor + title: "KernelCI test failure report" + template: "generic-test-failure-report.html.jinja2" + output_file: "test-failure-report.html" + preset: + test: + - result: fail + data.error_code: null + +monitor-all-build-failures__runtime-errors: + metadata: + action: monitor + title: "KernelCI kbuild failure report" + template: "generic-test-failure-report.html.jinja2" + output_file: "kbuild-failure-report.html" + preset: + kbuild: + - result: fail + data.error_code__ne: null + +monitor-all-build-failures: + metadata: + action: monitor + title: "KernelCI kbuild failure report" + template: "generic-test-failure-report.html.jinja2" + output_file: "kbuild-failure-report.html" + preset: + kbuild: + - result: fail + data.error_code: null + +aferraris-monitor-chromeos-errors: + metadata: + action: monitor + title: "KernelCI test failure caused by ChromeOS boot errors" + template: "generic-test-failure-report.html.jinja2" + output_file: "test-chromeos-boot-error-report.html" + preset: + test: + - result: fail + data.error_msg__re: "(self_repair|cros-partition-corrupt)" + +#### Failures and regressions in mainline and next for x86_64 + +monitor-build-regressions-x86_64-mainline-next: + metadata: + action: monitor + title: "KernelCI build regressions found on mainline and next for x86_64" + template: "generic-regression-report.html.jinja2" + output_file: "build-regressions-mainline-next-x86_64.html" + preset: + regression: + - data.error_code: null + name__re: kbuild + data.arch: x86_64 + repos: + - tree: mainline + - tree: next + 
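+# A hypothetical example, for illustration only (no job uses it): the
+# query parameters in the presets above compose freely with the
+# suffixes documented at the top of this file, so a one-off summary
+# preset that keeps only x86_64 kbuild regressions from mainline and
+# drops results caused by runtime errors could look like this:
+#
+# example-x86_64-build-regressions:
+#   metadata:
+#     action: summary
+#     title: "Example: x86_64 build regressions"
+#     template: "generic-regressions.html.jinja2"
+#     output_file: "example-x86_64-build-regressions.html"
+#   preset:
+#     regression:
+#       - name__re: kbuild
+#         data.arch: x86_64
+#         data.error_code: null
+#         repos:
+#           - tree: mainline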
+monitor-build-failures-x86_64-mainline-next: + metadata: + action: monitor + title: "KernelCI build failures found on mainline and next for x86_64" + template: "generic-test-failure-report.html.jinja2" + output_file: "build-failures-mainline-next-x86_64.html" + preset: + kbuild: + - data.error_code: null + result: fail + data.arch: x86_64 + repos: + - tree: mainline + - tree: next + +monitor-baseline-regressions-x86_64-mainline-next: + metadata: + action: monitor + title: "KernelCI baseline regressions found on mainline and next for x86_64" + template: "generic-regression-report.html.jinja2" + output_file: "baseline-regressions-mainline-next-x86_64.html" + preset: + regression: + - data.error_code: null + group__re: baseline + data.arch: x86_64 + repos: + - tree: mainline + - tree: next + +monitor-baseline-failures-x86_64-mainline-next: + metadata: + action: monitor + title: "KernelCI baseline failures found on mainline and next for x86_64" + template: "generic-test-failure-report.html.jinja2" + output_file: "baseline-failures-mainline-next-x86_64.html" + preset: + test: + - data.error_code: null + result: fail + group__re: baseline + data.arch: x86_64 + repos: + - tree: mainline + - tree: next + + +#### Failures and regressions in kselftest-acpi tests in collabora-next + +monitor-kselftest-acpi-regressions-collabora-next: + metadata: + action: monitor + title: "KernelCI kselftest-acpi regressions found on collabora-next" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-acpi-regressions-collabora-next.html" + preset: + regression: + - data.error_code: null + group: kselftest-acpi + repos: + - tree: collabora-next + +monitor-kselftest-acpi-failures-collabora-next: + metadata: + action: monitor + title: "KernelCI kselftest-acpi failures found on collabora-next" + template: "generic-test-failure-report.html.jinja2" + output_file: "kselftest-acpi-failures-collabora-next.html" + preset: + test: + - data.error_code: null + result: fail + group: kselftest-acpi + repos: + - tree: collabora-next + + +#### Failures and regressions in mainline and next for arm64 + +monitor-build-regressions-arm64-mainline-next: + metadata: + action: monitor + title: "KernelCI build regressions found on mainline and next for arm64" + template: "generic-regression-report.html.jinja2" + output_file: "build-regressions-mainline-next-arm64.html" + preset: + regression: + - data.error_code: null + name__re: kbuild + data.arch: arm64 + repos: + - tree: mainline + - tree: next + +monitor-build-failures-arm64-mainline-next: + metadata: + action: monitor + title: "KernelCI build failures found on mainline and next for arm64" + template: "generic-test-failure-report.html.jinja2" + output_file: "build-failures-mainline-next-arm64.html" + preset: + kbuild: + - data.error_code: null + result: fail + data.arch: arm64 + repos: + - tree: mainline + - tree: next + +monitor-baseline-regressions-arm64-mainline-next: + metadata: + action: monitor + title: "KernelCI baseline regressions found on mainline and next for arm64" + template: "generic-regression-report.html.jinja2" + output_file: "baseline-regressions-mainline-next-arm64.html" + preset: + regression: + - data.error_code: null + group__re: baseline + data.arch: arm64 + repos: + - tree: mainline + - tree: next + +monitor-baseline-failures-arm64-mainline-next: + metadata: + action: monitor + title: "KernelCI baseline failures found on mainline and next for arm64" + template: "generic-test-failure-report.html.jinja2" + output_file: 
"baseline-failures-mainline-next-arm64.html" + preset: + test: + - data.error_code: null + result: fail + group__re: baseline + data.arch: arm64 + repos: + - tree: mainline + - tree: next + + +#### Failures and regressions in kselftest-dt tests in mainline and next + +monitor-kselftest-dt-regressions-mainline-next: + metadata: + action: monitor + title: "KernelCI kselftest-dt regressions found on mainline and next" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-dt-regressions-mainline-next.html" + preset: + regression: + - data.error_code: null + group: kselftest-dt + repos: + - tree: mainline + - tree: next + +monitor-kselftest-dt-failures-mainline-next: + metadata: + action: monitor + title: "KernelCI kselftest-dt failures found on mainline and next" + template: "generic-test-failure-report.html.jinja2" + output_file: "kselftest-dt-failures-mainline-next.html" + preset: + test: + - data.error_code: null + result: fail + group: kselftest-dt + repos: + - tree: mainline + - tree: next + +#### Failures and regressions in kselftest-cpufreq tests + +monitor-kselftest-cpufreq-regressions: + metadata: + action: monitor + title: "KernelCI kselftest-cpufreq regressions" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-cpufreq-regressions.html" + preset: + regression: + - data.error_code: null + name: kselftest-cpufreq + +monitor-kselftest-cpufreq-failures: + metadata: + action: monitor + title: "KernelCI kselftest-cpufreq failures" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-cpufreq-failures.html" + preset: + test: + - data.error_code: null + result: fail + name: kselftest-cpufreq + +#### Failures and regressions in all kselftest tests + +monitor-kselftest-regressions: + metadata: + action: monitor + title: "KernelCI kselftest regressions" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-regressions.html" + preset: + regression: + - data.error_code: null + name__re: kselftest + +monitor-kselftest-failures: + metadata: + action: monitor + title: "KernelCI kselftest failures" + template: "generic-regression-report.html.jinja2" + output_file: "kselftest-failures.html" + preset: + test: + - data.error_code: null + result: fail + name__re: kselftest + +# All kunit test failures +monitor-all-kunit-failures: + metadata: + action: monitor + title: "All kunit test failures not caused by runtime errors" + template: "generic-test-results.html.jinja2" + output_file: "kunit-failures.html" + #template: "generic-test-failure-report.jinja2" + #output_file: "kunit-failures.txt" + preset: + test: + - group__re: kunit + result: fail + data.error_code: null + + +### Summary presets + +# New stable-rc kernel build regressions +stable-rc-build-regressions: + metadata: + action: summary + title: "stable-rc kernel build regressions" + template: "generic-regressions.html.jinja2" + output_file: "stable-rc-build-regressions.html" + preset: + regression: + - name__re: kbuild + repos: + - tree: stable-rc + +# All active stable-rc kernel build regressions +active-stable-rc-build-regressions: + metadata: + action: summary + title: "stable-rc kernel build regressions" + template: "generic-regressions.html.jinja2" + output_file: "active-stable-rc-build-regressions.html" + preset: + regression: + - name__re: kbuild + # Regressions with result = fail are "active", ie. 
still failing + result: fail + repos: + - tree: stable-rc + +# stable-rc kernel build failures +stable-rc-build-failures: + metadata: + action: summary + title: "stable-rc kernel build failures" + template: "generic-test-results.html.jinja2" + output_file: "stable-rc-build-failures.html" + preset: + kbuild: + - result: fail + repos: + - tree: stable-rc + +# stable-rc kernel boot regressions +stable-rc-boot-regressions: + metadata: + action: summary + title: "stable-rc kernel boot regressions" + template: "generic-regressions.html.jinja2" + output_file: "stable-rc-boot-regressions.html" + preset: + regression: + - group__re: baseline + repos: + - tree: stable-rc + +# stable-rc kernel boot active regressions +stable-rc-boot-active-regressions: + metadata: + action: summary + title: "stable-rc kernel boot regressions" + template: "generic-regressions.html.jinja2" + output_file: "stable-rc-boot-regressions.html" + preset: + regression: + - group__re: baseline + # Regressions with result = fail are "active", ie. still failing + result: fail + repos: + - tree: stable-rc + +# stable-rc kernel boot failures +stable-rc-boot-failures: + metadata: + action: summary + title: "stable-rc kernel boot failures" + template: "generic-test-results.html.jinja2" + output_file: "stable-rc-boot-failures.html" + preset: + test: + - group__re: baseline + result: fail + repos: + - tree: stable-rc + +# tast test failures +tast-failures: + metadata: + action: summary + title: "General Tast test failures" + template: "generic-test-results.html.jinja2" + output_file: "tast-failures.html" + preset: + test: + - group__re: tast + name__ne: tast + result: fail + data.error_code: null + +# tast test failures +tast-failures__runtime-errors: + metadata: + action: summary + title: "General Tast test failures" + template: "generic-test-results.html.jinja2" + output_file: "tast-failures.html" + preset: + test: + - group__re: tast + name__ne: tast + result: fail + data.error_code__ne: null + +# General regressions (kbuilds and all tests) on mainline and next +# excluding those triggered by runtime errors +mainline-next-regressions: + metadata: + action: summary + title: "Regressions found in mainline and next" + template: "generic-regressions.html.jinja2" + output_file: "mainline-next-regressions.html" + preset: + regression: + - data.error_code: null + repos: + - tree: mainline + - tree: next + +mainline-next-test-failures: + metadata: + action: summary + title: "Test failures found in mainline and next" + template: "generic-test-results.html.jinja2" + output_file: "mainline-next-failures.html" + preset: + test: + - result: fail + data.error_code: null + repos: + - tree: mainline + - tree: next + +# General regressions (kbuilds and all tests) on mainline and next +# excluding those triggered by runtime errors +mainline-next-active-regressions: + metadata: + action: summary + title: "Regressions found in mainline and next" + template: "generic-regressions.html.jinja2" + output_file: "mainline-next-active-regressions.html" + preset: + regression: + - data.error_code: null + # Regressions with result = fail are "active", ie. 
still failing + result: fail + repos: + - tree: mainline + - tree: next + +# General regressions (kbuilds and all tests) on mainline and next +# triggered by runtime errors +mainline-next-regressions__runtime-errors: + metadata: + action: summary + title: "'Regressions' found in mainline and next due to runtime errors" + template: "generic-regressions.html.jinja2" + output_file: "mainline-next-regressions__runtime-errors.html" + preset: + regression: + - data.error_code__ne: null + repos: + - tree: mainline + - tree: next + +# tast tests regressions for x86_64 targets +# Collect only regressions that aren't caused by runtime errors +tast-regressions-x86_64: + metadata: + action: summary + title: "Regressions found on Tast tests for x86_64" + template: "generic-regressions.html.jinja2" + output_file: "tast-regressions-x86_64.html" + preset: + regression: + - group__re: tast + name__ne: tast + data.arch: x86_64 + # Get only the regressions from results with no runtime errors + data.error_code: null + +# tast tests regressions for x86_64 targets caused by runtime errors +tast-regressions-x86_64__runtime-errors: + metadata: + action: summary + title: "'Regressions' found on Tast tests for x86_64 due to runtime errors" + template: "generic-regressions.html.jinja2" + output_file: "tast-regressions-x86_64__runtime-errors.html" + preset: + regression: + - group__re: tast + name__ne: tast + data.arch: x86_64 + data.error_code__ne: null + +# All active kunit regressions +active-kunit-regressions: + metadata: + action: summary + title: "Active regressions found on kunit tests" + template: "generic-regressions.html.jinja2" + output_file: "kunit-regressions.html" + preset: + regression: + - group__re: kunit + result: fail + data.error_code: null + +# All kunit test failures +all-kunit-failures: + metadata: + action: summary + title: "All kunit test failures" + template: "generic-test-results.html.jinja2" + output_file: "kunit-failures.html" + preset: + test: + - group__re: kunit + result: fail + data.error_code: null + +# All android build results +all-android-builds: + metadata: + action: summary + title: "Test results for Android branches" + template: "generic-test-results.html.jinja2" + output_file: "all-android-builds.html" + preset: + kbuild: + - data.error_code: null + repos: + - tree: android + +#### Failures and regressions in v4l2-decoder-conformance tests + +monitor-v4l2-decoder-conformance-regressions: + metadata: + action: monitor + title: "KernelCI v4l2-decoder-conformance regressions" + template: "generic-regression-report.html.jinja2" + output_file: "v4l2-decoder-conformance-regressions.html" + preset: + regression: + - data.error_code: null + group__re: v4l2-decoder-conformance + repos: + - tree: mainline + - tree: next + - tree: collabora-chromeos-kernel + - tree: media + +monitor-v4l2-decoder-conformance-failures__runtime-errors: + metadata: + action: monitor + title: "KernelCI v4l2-decoder-conformance failures due to runtime errors" + template: "generic-test-results.html.jinja2" + output_file: "v4l2-decoder-conformance-failures__runtime-errors.html" + preset: + job: + - data.error_code__ne: null + result: fail + name__re: v4l2-decoder-conformance + repos: + - tree: mainline + - tree: next + - tree: collabora-chromeos-kernel + - tree: media + +summary-v4l2-decoder-conformance-regressions: + metadata: + action: summary + title: "KernelCI v4l2-decoder-conformance regressions" + template: "generic-regressions.html.jinja2" + output_file: "v4l2-decoder-conformance-regressions.html" + 
preset: + regression: + - data.error_code: null + group__re: v4l2-decoder-conformance + repos: + - tree: mainline + - tree: next + - tree: collabora-chromeos-kernel + - tree: media + +summary-v4l2-decoder-conformance-failures__runtime-errors: + metadata: + action: summary + title: "KernelCI v4l2-decoder-conformance failures due to runtime errors" + template: "generic-test-results.html.jinja2" + output_file: "v4l2-decoder-conformance-failures__runtime-errors.html" + preset: + job: + - data.error_code__ne: null + result: fail + name__re: v4l2-decoder-conformance + repos: + - tree: mainline + - tree: next + - tree: collabora-chromeos-kernel + - tree: media + +summary-v4l2-decoder-conformance-failures: + metadata: + action: summary + title: "KernelCI v4l2-decoder-conformance failures" + template: "generic-test-results.html.jinja2" + output_file: "v4l2-decoder-conformance-failures.html" + preset: + test: + - data.error_code: null + result: fail + group__re: v4l2-decoder-conformance + repos: + - tree: mainline + - tree: next + - tree: collabora-chromeos-kernel + - tree: media + +#### Failures and regressions in watchdog reset test + +monitor-watchdog-reset-regressions-mainline-next: + metadata: + action: monitor + title: "KernelCI watchdog reset test regressions on mainline and next" + template: "generic-regression-report.html.jinja2" + output_file: "watchdog-reset-regressions-mainline-next.html" + preset: + regression: + - data.error_code: null + group__re: watchdog-reset + repos: + - tree: mainline + - tree: next + +monitor-watchdog-reset-failures-mainline-next: + metadata: + action: monitor + title: "KernelCI watchdog reset test failures on mainline and next" + template: "generic-test-failure-report.html.jinja2" + output_file: "watchdog-reset-failures-mainline-next.html" + preset: + test: + - result: fail + group__re: watchdog-reset + repos: + - tree: mainline + - tree: next + +summary-watchdog-reset-regressions-mainline-next: + metadata: + action: summary + title: "KernelCI watchdog reset regressions on mainline and next" + template: "generic-regressions.html.jinja2" + output_file: "watchdog-reset-regressions-mainline-next.html" + preset: + regression: + - data.error_code: null + group__re: watchdog-reset + repos: + - tree: mainline + - tree: next + +summary-watchdog-reset-failures-mainline-next: + metadata: + action: summary + title: "KernelCI watchdog reset test failures on mainline and next" + template: "generic-test-results.html.jinja2" + output_file: "watchdog-reset-failures-mainline-next.html" + preset: + test: + - result: fail + group__re: watchdog-reset + repos: + - tree: mainline + - tree: next diff --git a/config/result_summary_templates/base.html b/config/result_summary_templates/base.html new file mode 100644 index 000000000..db88a079d --- /dev/null +++ b/config/result_summary_templates/base.html @@ -0,0 +1,26 @@ + + + + + + + + {% block title %}{% endblock %} + + + +
+ {% block content %} + {% endblock %} +
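+ {# Report templates such as generic-regressions.html.jinja2 extend this file and fill in the "title" and "content" blocks defined above. #}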
+ + + diff --git a/config/result_summary_templates/generic-regression-report.html.jinja2 b/config/result_summary_templates/generic-regression-report.html.jinja2 new file mode 100644 index 000000000..123488797 --- /dev/null +++ b/config/result_summary_templates/generic-regression-report.html.jinja2 @@ -0,0 +1,109 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} + +{# +Template to generate a generic html single regression report. It expects +the following input parameters: + - metadata: preset metadata + - node: the node to report +#} + +{% extends "base.html" %} +{% set title = metadata['title'] if 'title' in metadata else 'Regression report' %} + +{% block title %} + {{ title | striptags }} +{% endblock %} + +{% block content %} +

+  <h1>{{ title }}</h1>
+
+  {% set kernel_version = node['data']['failed_kernel_version'] %}
+  <h2>
+    {{ node['name'] }} ({{ node['group'] }})
+  </h2>
+  <ul>
+    <li>KernelCI node: {{ node['id'] }}</li>
+    <li>Status:
+      {% if node['result'] == 'fail' %}
+      ACTIVE
+      {% elif node['result'] == 'pass' %}
+      INACTIVE | Passed run: {{ node['data']['node_sequence'][-1] }}
+      {% else %}
+      Unknown
+      {% endif %}
+    </li>
+    <li>Introduced in: {{ node['created'] }}</li>
+    <li>Previous successful run: {{ node['data']['pass_node'] }}</li>
+    {% if node['result'] == 'fail' and node['data']['node_sequence'] %}
+    <li>Failed runs after detection:
+      <ul>
+        {% for run in node['data']['node_sequence'] %}
+        <li>{{ run }}</li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% endif %}
+    <li>Tree: {{ kernel_version['tree'] }}</li>
+    <li>Branch: {{ kernel_version['branch'] }}</li>
+    <li>Commit: {{ kernel_version['commit'] }} ({{ kernel_version['describe'] }})</li>
+    {% if node['data']['arch'] %}
+    <li>Arch: {{ node['data']['arch'] }}</li>
+    {% endif %}
+    {% if node['data']['platform'] %}
+    <li>Platform: {{ node['data']['platform'] }}</li>
+    {% endif %}
+    {% if node['data']['device'] %}
+    <li>Device: {{ node['data']['device'] }}</li>
+    {% endif %}
+    {% if node['data']['config_full'] %}
+    <li>Config: {{ node['data']['config_full'] }}</li>
+    {% endif %}
+    {% if node['data']['compiler'] %}
+    <li>Compiler: {{ node['data']['compiler'] }}</li>
+    {% endif %}
+    {% if node['data']['error_code'] -%}
+    <li>Error code: {{ node['data']['error_code'] }}</li>
+    {% endif -%}
+    {% if node['data']['error_msg'] -%}
+    <li>Error message: {{ node['data']['error_msg'] }}</li>
+    {% endif -%}
+    {% if node['logs'] | count > 0 -%}
+    <li>Logs:
+      <ul>
+        {% for log in node['logs'] -%}
+        <li>{{ log }}
+          <pre>
+{{ node['logs'][log]['text'] | e }}
+          </pre>
+        </li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% else -%}
+    <li>No logs available</li>
+    {% endif %}
+  </ul>
+{% endblock %}
diff --git a/config/result_summary_templates/generic-regression-report.jinja2 b/config/result_summary_templates/generic-regression-report.jinja2
new file mode 100644
index 000000000..177a83ade
--- /dev/null
+++ b/config/result_summary_templates/generic-regression-report.jinja2
@@ -0,0 +1,61 @@
+{# SPDX-License-Identifier: LGPL-2.1-or-later -#}
+{#
+Template to generate a generic text-based regression report. It expects
+the following input parameters:
+  - metadata: preset metadata
+  - node: the node to report
+#}
+{{- metadata['title'] if 'title' in metadata else 'Regression report: ' }}
+========================================================================
+{% set kernel_version = node['data']['failed_kernel_version'] %}
+KernelCI node id: {{ node['id'] }}
+Test name: {{ node['name'] }} ({{ node['group'] }})
+{% if node['result'] == 'fail' -%}
+Status: Active
+{% elif node['result'] == 'pass' -%}
+Status: Inactive | Passed run: {{ node['data']['node_sequence'][-1] }}
+{% else -%}
+Status: Unknown
+{% endif -%}
+Introduced in: {{ node['created'] }} ({{ node['data']['fail_node'] }})
+Previous successful run: {{ node['data']['pass_node'] }}
+{% if node['result'] == 'fail' and node['data']['node_sequence'] -%}
+Failed runs after detection:
+  {% for run in node['data']['node_sequence'] -%}
+  - {{ run }}
+  {% endfor -%}
+{% endif -%}
+Tree: {{ kernel_version['tree'] }} ({{ kernel_version['url'] }})
+Branch: {{ kernel_version['branch'] }}
+Kernel version: {{ kernel_version['describe'] }}
+Commit: {{ kernel_version['commit'] }}
+{% if node['data']['arch'] -%}
+Arch : {{ node['data']['arch'] }}
+{% endif -%}
+{% if node['data']['platform'] -%}
+Platform : {{ node['data']['platform'] }}
+{% endif -%}
+{% if node['data']['device'] -%}
+Device : {{ node['data']['device'] }}
+{% endif -%}
+{% if node['data']['config_full'] -%}
+Config: {{ node['data']['config_full'] }}
+{% endif -%}
+{% if node['data']['compiler'] -%}
+Compiler: {{ node['data']['compiler'] }}
+{% endif -%}
+{% if node['data']['error_code'] -%}
+  Error code: {{ node['data']['error_code'] }}
+{% endif -%}
+{% if node['data']['error_msg'] -%}
+  Error message: {{ node['data']['error_msg'] }}
+{% endif -%}
+{% if node['logs'] | count > 0 -%}
+Logs:
+  {% for log in node['logs'] -%}
+  - {{ log }}: {{ node['logs'][log]['url'] }}
+  {% endfor %}
+{% else -%}
+No logs available
+{% endif %}
+
+Tested-by: kernelci.org bot
diff --git a/config/result_summary_templates/generic-regressions.html.jinja2 b/config/result_summary_templates/generic-regressions.html.jinja2
new file mode 100644
index 000000000..23d9a8baa
--- /dev/null
+++ b/config/result_summary_templates/generic-regressions.html.jinja2
@@ -0,0 +1,168 @@
+{# SPDX-License-Identifier: LGPL-2.1-or-later -#}
+
+{#
+Template to generate a generic html regression summary. It
+expects the following input parameters:
+  - metadata: summary preset metadata
+  - from_date: start date of the results query
+  - to_date: end date of the results query
+  - results_per_branch: a dict containing the regression nodes
+    grouped by tree and branch like this:
+
+    results_per_branch = {
+        <tree_1>: {
+            <branch_1>: [
+                regression_1,
+                ...
+                regression_n
+            ],
+            ...,
+            <branch_m>: ...
+        },
+        ...,
+        <tree_n>: ...
+ } +#} + +{% extends "base.html" %} +{% set title = metadata['title'] if 'title' in metadata else 'Regression summary: ' %} + +{% block title %} + {{ title | striptags }} +{% endblock %} + +{% block content %} + {% if created_from and created_to %} + {% set created_string = 'Created between ' + created_from + ' and ' + created_to %} + {% elif created_from %} + {% set created_string = 'Created after ' + created_from %} + {% elif created_to %} + {% set created_string = 'Created before ' + created_to %} + {% endif %} + {% if last_updated_from and last_updated_to %} + {% set last_updated_string = 'Last updated between ' + last_updated_from + ' and ' + last_updated_to %} + {% elif last_updated_from %} + {% set last_updated_string = 'Last updated after ' + last_updated_from %} + {% elif last_updated_to %} + {% set last_updated_string = 'Last updated before ' + last_updated_to %} + {% endif %} + +

+  <h1>{{ title }}</h1>
+  <ul>
+    {% if created_string %}
+    <li>{{ created_string }}</li>
+    {% endif %}
+    {% if last_updated_string %}
+    <li>{{ last_updated_string }}</li>
+    {% endif %}
+  </ul>
+
+  {% if results_per_branch | count == 0 %}
+  No regressions found.
+  {% else -%}
+
+  {% for tree in results_per_branch %}
+  {% for branch in results_per_branch[tree] %}
+  <h2>
+    Regressions found in {{ tree }}/{{ branch }}:
+  </h2>
+  {% for regression in results_per_branch[tree][branch] -%}
+  {% set kernel_version = regression['data']['failed_kernel_version'] %}
+  <h3>
+    {{ regression['name'] }} ({{ regression['group'] }})
+  </h3>
+  <ul>
+    <li>KernelCI node: {{ regression['id'] }}</li>
+    <li>Status:
+      {% if regression['result'] == 'fail' %}
+      ACTIVE
+      {% elif regression['result'] == 'pass' %}
+      INACTIVE | Passed run: {{ regression['data']['node_sequence'][-1] }}
+      {% else %}
+      Unknown
+      {% endif %}
+    </li>
+    <li>Introduced in: {{ regression['created'] }}</li>
+    <li>Previous successful run: {{ regression['data']['pass_node'] }}</li>
+    {% if regression['result'] == 'fail' and regression['data']['node_sequence'] %}
+    <li>Failed runs after detection:
+      <ul>
+        {% for run in regression['data']['node_sequence'] %}
+        <li>{{ run }}</li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% endif %}
+    <li>Tree: {{ kernel_version['tree'] }}</li>
+    <li>Branch: {{ kernel_version['branch'] }}</li>
+    <li>Commit: {{ kernel_version['commit'] }} ({{ kernel_version['describe'] }})</li>
+    {% if regression['data']['arch'] %}
+    <li>Arch: {{ regression['data']['arch'] }}</li>
+    {% endif %}
+    {% if regression['data']['platform'] %}
+    <li>Platform: {{ regression['data']['platform'] }}</li>
+    {% endif %}
+    {% if regression['data']['device'] %}
+    <li>Device: {{ regression['data']['device'] }}</li>
+    {% endif %}
+    {% if regression['data']['config_full'] %}
+    <li>Config: {{ regression['data']['config_full'] }}</li>
+    {% endif %}
+    {% if regression['data']['compiler'] %}
+    <li>Compiler: {{ regression['data']['compiler'] }}</li>
+    {% endif %}
+    {% if regression['data']['error_code'] -%}
+    <li>Error code: {{ regression['data']['error_code'] }}</li>
+    {% endif -%}
+    {% if regression['data']['error_msg'] -%}
+    <li>Error message: {{ regression['data']['error_msg'] }}</li>
+    {% endif -%}
+    {% if regression['category'] -%}
+    <li>Error category: {{ regression['category']['tag'] }}: {{ regression['category']['name'] }}</li>
+    {% endif -%}
+    {% if regression['logs'] | count > 0 -%}
+    <li>Logs:
+      <ul>
+        {% for log in regression['logs'] -%}
+        <li>{{ log }}
+          <pre>
+{{ regression['logs'][log]['text'] | e }}
+          </pre>
+        </li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% else -%}
+    <li>No logs available</li>
+    {% endif %}
+  </ul>
+ {%- endfor %} +
+ {%- endfor %} + {%- endfor %} + {%- endif %} +{% endblock %} diff --git a/config/result_summary_templates/generic-regressions.jinja2 b/config/result_summary_templates/generic-regressions.jinja2 new file mode 100644 index 000000000..12d4bd1f8 --- /dev/null +++ b/config/result_summary_templates/generic-regressions.jinja2 @@ -0,0 +1,100 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} + +{# +Template to generate a generic text-based regression summary. It expects +the following input parameters: + - metadata: summary preset metadata + - from_date: start date of the results query + - to_date: end date of the results query + - results_per_branch: a dict containing the regression nodes + grouped by tree and branch like this: + + results_per_branch = { + : { + : [ + regression_1, + ... + regression_n + ], + ..., + : ... + }, + ..., + : ... + } +#} + +{% if created_from and created_to -%} + {% set created_string = 'Created between ' + created_from + ' and ' + created_to -%} +{% elif created_from -%} + {% set created_string = 'Created after ' + created_from -%} +{% elif created_to %} + {% set created_string = 'Created before ' + created_to -%} +{% endif -%} +{% if last_updated_from and last_updated_to -%} + {% set last_updated_string = 'Last updated between ' + last_updated_from + ' and ' + last_updated_to -%} +{% elif last_updated_from -%} + {% set last_updated_string = 'Last updated after ' + last_updated_from -%} +{% elif last_updated_to -%} + {% set last_updated_string = 'Last updated before ' + last_updated_to -%} +{% endif -%} +{{ metadata['title'] if 'title' in metadata else 'Regression summary: ' }} +{% if created_string -%} + - {{ created_string }} +{% endif -%} +{% if last_updated_string -%} + - {{ last_updated_string }} +{% endif -%} +{% if results_per_branch | count == 0 %} +No regressions found. 
+{% else -%} + {% for tree in results_per_branch %} + {% for branch in results_per_branch[tree] %} +## Regressions found in {{ tree }}/{{ branch }}: + {% for regression in results_per_branch[tree][branch] -%} + {% set kernel_version = + regression['data']['failed_kernel_version'] %} + KernelCI node id: {{ regression['id'] }} + Test name: {{ regression['name'] }} ({{ regression['group'] }}) + {% if regression['result'] == 'fail' -%} + Status: Active + {% elif regression['result'] == 'pass' -%} + Status: Inactive | Passed run: {{ regression['data']['node_sequence'][-1] }} + {% else -%} + Status: Unknown + {% endif -%} + Introduced in: {{ regression['created'] }} ({{ regression['data']['fail_node'] }}) + Previous successful run: {{ regression['data']['pass_node'] }} + {% if regression['result'] == 'fail' and regression['data']['node_sequence'] -%} + Failed runs after detection: + {% for run in regression['data']['node_sequence'] -%} + - {{ run }} + {% endfor -%} + {% endif -%} + Tree: {{ kernel_version['tree'] }} ({{ kernel_version['url'] }}) + Branch: {{ kernel_version['branch'] }} + Kernel version: {{ kernel_version['describe'] }} + Commit: {{ kernel_version['commit'] }} + Arch : {{ regression['data']['arch'] }} + Config: {{ regression['data']['config_full'] }} + Compiler: {{ regression['data']['compiler'] }} + {% if regression['data']['error_code'] -%} + Error code: {{ regression['data']['error_code'] }} + {% endif -%} + {% if regression['data']['error_msg'] -%} + Error message: {{ regression['data']['error_msg'] }} + {% endif -%} + {% if regression['logs'] | count > 0 -%} + Logs: + {% for log in regression['logs'] -%} + - {{ log }}: {{ regression['logs'][log]['url'] }} + {% endfor %} + {% else -%} + No logs available + {% endif %} + {%- endfor %} + {%- endfor %} + {%- endfor %} +{%- endif %} + +Tested-by: kernelci.org bot diff --git a/config/result_summary_templates/generic-test-failure-report.html.jinja2 b/config/result_summary_templates/generic-test-failure-report.html.jinja2 new file mode 100644 index 000000000..f85453b46 --- /dev/null +++ b/config/result_summary_templates/generic-test-failure-report.html.jinja2 @@ -0,0 +1,101 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} + +{# +Template to generate a generic html single test failure report. It expects +the following input parameters: + - metadata: preset metadata + - node: the node to report +#} + +{% extends "base.html" %} +{% set title = metadata['title'] if 'title' in metadata else 'Failure report' %} + +{% block title %} + {{ title | striptags }} +{% endblock %} + +{% block content %} +

+  <h1>{{ title }}</h1>
+
+  {% set kernel_version = node['data']['kernel_revision'] %}
+  <h2>
+    {{ node['name'] }} ({{ node['group'] }})
+  </h2>
+  <ul>
+    <li>KernelCI node: {{ node['id'] }}</li>
+    <li>Result:
+      {% if node['result'] == 'fail' %}
+      FAIL
+      {% elif node['result'] == 'pass' %}
+      PASS
+      {% else %}
+      {{ node['result'] }}
+      {% endif %}
+    </li>
+    {% if node['data']['regression'] %}
+    <li>Related to regression: {{ node['data']['regression'] }}</li>
+    {% endif %}
+    <li>Date: {{ node['created'] }}</li>
+    <li>Tree: {{ kernel_version['tree'] }}</li>
+    <li>Branch: {{ kernel_version['branch'] }}</li>
+    <li>Kernel version: {{ kernel_version['describe'] }}</li>
+    <li>Commit: {{ kernel_version['commit'] }} ({{ kernel_version['url'] }})</li>
+    {% if node['data']['arch'] %}
+    <li>Arch: {{ node['data']['arch'] }}</li>
+    {% endif %}
+    {% if node['data']['platform'] %}
+    <li>Platform: {{ node['data']['platform'] }}</li>
+    {% endif %}
+    {% if node['data']['device'] %}
+    <li>Device: {{ node['data']['device'] }}</li>
+    {% endif %}
+    {% if node['data']['config_full'] %}
+    <li>Config: {{ node['data']['config_full'] }}</li>
+    {% endif %}
+    {% if node['data']['compiler'] %}
+    <li>Compiler: {{ node['data']['compiler'] }}</li>
+    {% endif %}
+    {% if node['data']['runtime'] %}
+    <li>Runtime: {{ node['data']['runtime'] }}</li>
+    {% endif %}
+    {% if node['data']['job_id'] %}
+    <li>Job ID: {{ node['data']['job_id'] }}</li>
+    {% endif %}
+    {% if node['data']['error_code'] -%}
+    <li>Error code: {{ node['data']['error_code'] }}</li>
+    {% endif -%}
+    {% if node['data']['error_msg'] -%}
+    <li>Error message: {{ node['data']['error_msg'] }}</li>
+    {% endif -%}
+    {% if node['logs'] | count > 0 -%}
+    <li>Logs:
+      <ul>
+        {% for log in node['logs'] -%}
+        <li>{{ log }}
+          <pre>
+{{ node['logs'][log]['text'] | e }}
+          </pre>
+        </li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% else -%}
+    <li>No logs available</li>
+    {% endif %}
+  </ul>
+{% endblock %} diff --git a/config/result_summary_templates/generic-test-failure-report.jinja2 b/config/result_summary_templates/generic-test-failure-report.jinja2 new file mode 100644 index 000000000..bcf83d789 --- /dev/null +++ b/config/result_summary_templates/generic-test-failure-report.jinja2 @@ -0,0 +1,54 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} +{# +Template to generate a generic text-based test failure report. It +expects the following input parameters: + - metadata: preset metadata + - node: the node to report +#} +{{- metadata['title'] if 'title' in metadata else 'Test report: ' }} +======================================================================== +{% set kernel_version = node['data']['kernel_revision'] %} +KernelCI node id: {{ node['id'] }} +Test name: {{ node['name'] }} ({{ node['group'] }}) +Date: {{ node['created'] }} +Tree: {{ kernel_version['tree'] }} +Branch: {{ kernel_version['branch'] }} +Kernel version: {{ kernel_version['describe'] }} +Commit: {{ kernel_version['commit'] }} ({{ kernel_version['url'] }}) +{% if node['data']['arch'] -%} +Arch : {{ node['data']['arch'] }} +{% endif -%} +{% if node['data']['platform'] -%} +Platform : {{ node['data']['platform'] }} +{% endif -%} +{% if node['data']['device'] -%} +Device : {{ node['data']['device'] }} +{% endif -%} +{% if node['data']['config_full'] -%} +Config: {{ node['data']['config_full'] }} +{% endif -%} +{% if node['data']['compiler'] -%} +Compiler: {{ node['data']['compiler'] }} +{% endif -%} +{% if node['data']['runtime'] -%} +Runtime : {{ node['data']['runtime'] }} +{% endif -%} +{% if node['data']['job_id'] -%} +Job ID : {{ node['data']['job_id'] }} +{% endif -%} +{% if node['data']['error_code'] -%} + Error code: {{ node['data']['error_code'] }} +{% endif -%} +{% if node['data']['error_msg'] -%} + Error message: {{ node['data']['error_msg'] }} +{% endif -%} +{% if node['logs'] | count > 0 -%} +Logs: + {% for log in node['logs'] -%} + - {{ log }}: {{ node['logs'][log]['url'] }} + {% endfor %} +{% else -%} +No logs available +{% endif %} + +Tested-by: kernelci.org bot diff --git a/config/result_summary_templates/generic-test-results.html.jinja2 b/config/result_summary_templates/generic-test-results.html.jinja2 new file mode 100644 index 000000000..066f8b4a9 --- /dev/null +++ b/config/result_summary_templates/generic-test-results.html.jinja2 @@ -0,0 +1,160 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} + +{# +Template to generate a generic html test summary. It +expects the following input parameters: + - metadata: summary preset metadata + - from_date: start date of the results query + - to_date: end date of the results query + - results_per_branch: a dict containing the test nodes + grouped by tree and branch like this: + + results_per_branch = { + : { + : [ + failure_1, + ... + failure_n + ], + ..., + : ... + }, + ..., + : ... 
+ } +#} + +{% extends "base.html" %} +{% set title = metadata['title'] if 'title' in metadata else 'Test results: ' %} + +{% block title %} + {{ title | striptags }} +{% endblock %} + +{% block content %} + {% if created_from and created_to %} + {% set created_string = 'Created between ' + created_from + ' and ' + created_to %} + {% elif created_from %} + {% set created_string = 'Created after ' + created_from %} + {% elif created_to %} + {% set created_string = 'Created before ' + created_to %} + {% endif %} + {% if last_updated_from and last_updated_to %} + {% set last_updated_string = 'Last updated between ' + last_updated_from + ' and ' + last_updated_to %} + {% elif last_updated_from %} + {% set last_updated_string = 'Last updated after ' + last_updated_from %} + {% elif last_updated_to %} + {% set last_updated_string = 'Last updated before ' + last_updated_to %} + {% endif %} + +

+  <h1>{{ title }}</h1>
+  <ul>
+    {% if created_string %}
+    <li>{{ created_string }}</li>
+    {% endif %}
+    {% if last_updated_string %}
+    <li>{{ last_updated_string }}</li>
+    {% endif %}
+  </ul>
+
+  {% if results_per_branch | count == 0 %}
+  No results found.
+  {% else -%}
+
+  {% for tree in results_per_branch %}
+  {% for branch in results_per_branch[tree] %}
+  <h2>
+    Test results found in {{ tree }}/{{ branch }}:
+  </h2>
+  {% for test in results_per_branch[tree][branch] -%}
+  {% set kernel_version = test['data']['kernel_revision'] %}
+  <h3>
+    {{ test['name'] }} ({{ test['group'] }})
+  </h3>
+  <ul>
+    <li>KernelCI node: {{ test['id'] }}</li>
+    <li>Result:
+      {% if test['result'] == 'fail' %}
+      FAIL
+      {% elif test['result'] == 'pass' %}
+      PASS
+      {% else %}
+      {{ test['result'] }}
+      {% endif %}
+    </li>
+    {% if test['data']['regression'] %}
+    <li>Related to regression: {{ test['data']['regression'] }}</li>
+    {% endif %}
+    <li>Date: {{ test['created'] }}</li>
+    <li>Tree: {{ kernel_version['tree'] }}</li>
+    <li>Branch: {{ kernel_version['branch'] }}</li>
+    <li>Kernel version: {{ kernel_version['describe'] }}</li>
+    <li>Commit: {{ kernel_version['commit'] }} ({{ kernel_version['url'] }})</li>
+    {% if test['data']['arch'] %}
+    <li>Arch: {{ test['data']['arch'] }}</li>
+    {% endif %}
+    {% if test['data']['platform'] %}
+    <li>Platform: {{ test['data']['platform'] }}</li>
+    {% endif %}
+    {% if test['data']['device'] %}
+    <li>Device: {{ test['data']['device'] }}</li>
+    {% endif %}
+    {% if test['data']['config_full'] %}
+    <li>Config: {{ test['data']['config_full'] }}</li>
+    {% endif %}
+    {% if test['data']['compiler'] %}
+    <li>Compiler: {{ test['data']['compiler'] }}</li>
+    {% endif %}
+    {% if test['data']['runtime'] %}
+    <li>Runtime: {{ test['data']['runtime'] }}</li>
+    {% endif %}
+    {% if test['data']['job_id'] %}
+    <li>Job ID: {{ test['data']['job_id'] }}</li>
+    {% endif %}
+    {% if test['data']['error_code'] -%}
+    <li>Error code: {{ test['data']['error_code'] }}</li>
+    {% endif -%}
+    {% if test['data']['error_msg'] -%}
+    <li>Error message: {{ test['data']['error_msg'] }}</li>
+    {% endif -%}
+    {% if test['category'] -%}
+    <li>Error category: {{ test['category']['tag'] }}: {{ test['category']['name'] }}</li>
+    {% endif -%}
+    {% if test['logs'] | count > 0 -%}
+    <li>Logs:
+      <ul>
+        {% for log in test['logs'] -%}
+        <li>{{ log }}
+          <pre>
+{{ test['logs'][log]['text'] | e }}
+          </pre>
+        </li>
+        {% endfor %}
+      </ul>
+    </li>
+    {% else -%}
+    <li>No logs available</li>
+    {% endif %}
+  </ul>
+ {%- endfor %} +
+ {%- endfor %} + {%- endfor %} + {%- endif %} +{% endblock %} diff --git a/config/result_summary_templates/generic-test-results.jinja2 b/config/result_summary_templates/generic-test-results.jinja2 new file mode 100644 index 000000000..b10ced7e7 --- /dev/null +++ b/config/result_summary_templates/generic-test-results.jinja2 @@ -0,0 +1,85 @@ +{# SPDX-License-Identifier: LGPL-2.1-or-later -#} + +{# +Template to generate a generic text-based test results summary. It +expects the following input parameters: + - metadata: summary preset metadata + - from_date: start date of the results query + - to_date: end date of the results query + - results_per_branch: a dict containing the test nodes + grouped by tree and branch like this: + + results_per_branch = { + : { + : [ + failure_1, + ... + failure_n + ], + ..., + : ... + }, + ..., + : ... + } +#} + +{% if created_from and created_to -%} + {% set created_string = 'Created between ' + created_from + ' and ' + created_to -%} +{% elif created_from -%} + {% set created_string = 'Created after ' + created_from -%} +{% elif created_to %} + {% set created_string = 'Created before ' + created_to -%} +{% endif -%} +{% if last_updated_from and last_updated_to -%} + {% set last_updated_string = 'Last updated between ' + last_updated_from + ' and ' + last_updated_to -%} +{% elif last_updated_from -%} + {% set last_updated_string = 'Last updated after ' + last_updated_from -%} +{% elif last_updated_to -%} + {% set last_updated_string = 'Last updated before ' + last_updated_to -%} +{% endif -%} +{{ metadata['title'] if 'title' in metadata else 'Test results: ' }} +{% if created_string -%} + - {{ created_string }} +{% endif -%} +{% if last_updated_string -%} + - {{ last_updated_string }} +{% endif -%} +{% if results_per_branch | count == 0 %} +No results found. 
+{% else -%} + {% for tree in results_per_branch %} + {% for branch in results_per_branch[tree] %} +## Results found in {{ tree }}/{{ branch }}: + {% for test in results_per_branch[tree][branch] -%} + {% set kernel_version = test['data']['kernel_revision'] %} + KernelCI node id: {{ test['id'] }} + Test name: {{ test['name'] }} ({{ test['group'] }}) + Date: {{ test['created'] }} + Tree: {{ kernel_version['tree'] }} + Branch: {{ kernel_version['branch'] }} + Kernel version: {{ kernel_version['describe'] }} + Commit: {{ kernel_version['commit'] }} ({{ kernel_version['url'] }}) + Arch : {{ test['data']['arch'] }} + Config: {{ test['data']['config_full'] }} + Compiler: {{ test['data']['compiler'] }} + {% if test['data']['error_code'] -%} + Error code: {{ test['data']['error_code'] }} + {% endif -%} + {% if test['data']['error_msg'] -%} + Error message: {{ test['data']['error_msg'] }} + {% endif -%} + {% if test['logs'] | count > 0 -%} + Logs: + {% for log in test['logs'] -%} + - {{ log }}: {{ test['logs'][log]['url'] }} + {% endfor %} + {% else -%} + No logs available + {% endif %} + {%- endfor %} + {%- endfor %} + {%- endfor %} +{%- endif %} + +Tested-by: kernelci.org bot diff --git a/config/result_summary_templates/main.css b/config/result_summary_templates/main.css new file mode 100644 index 000000000..c5f7dc9fd --- /dev/null +++ b/config/result_summary_templates/main.css @@ -0,0 +1,53 @@ +@import url("https://fonts.googleapis.com/css?family=Karla"); +body { + background-color: white; + font-family: "Karla", sans-serif; + min-height: 100vh; + display: flex; + flex-direction: column; +} + +th { + text-align: center; +} + +a { + color: #5c3dcc; + text-decoration: none; +} + +a:hover { + color: #25188E; + text-decoration: none; +} + +a.light { + color: white; +} + +a.light:hover { + color: rgb(174, 174, 174); +} + +.number { + text-align: center; +} +.copy-button { + cursor:pointer; +} + +.btn { + --bs-btn-padding-y: .1rem; + --bs-btn-padding-x: .5rem; + --bs-btn-border-radius: 0.4rem; +} + +#footer { + padding-top: 1em; + margin-top: auto; +} + +#result-list { + padding-top: 1em; + padding-left: 1em; +} diff --git a/config/runtime/baseline.jinja2 b/config/runtime/baseline.jinja2 index d6491835c..391daa6f7 100644 --- a/config/runtime/baseline.jinja2 +++ b/config/runtime/baseline.jinja2 @@ -1,3 +1,4 @@ +{% set test_method = 'baseline' %} {% set base_template = 'base/' + runtime + '.jinja2' %} {%- extends base_template %} diff --git a/config/runtime/kbuild.jinja2 b/config/runtime/kbuild.jinja2 index 2be4c2fef..409d8efa5 100644 --- a/config/runtime/kbuild.jinja2 +++ b/config/runtime/kbuild.jinja2 @@ -5,12 +5,13 @@ {%- block python_imports %} {{ super() }} -import subprocess {%- endblock %} {%- block python_local_imports %} {{ super() }} -import kernelci.api.helper +from kernelci.kbuild import KBuild +import os +import sys {%- endblock %} {%- block python_globals %} @@ -18,117 +19,57 @@ import kernelci.api.helper KBUILD_PARAMS = { 'arch': '{{ arch }}', 'compiler': '{{ compiler }}', - 'defconfig': '{{ defconfig }}', + 'defconfig': +{%- if defconfig is string %} + '{{ defconfig }}' +{%- elif defconfig %} + [ + {%- for item in defconfig %} + '{{ item }}' + {%- if not loop.last %}, {% endif %} + {%- endfor %} + ] +{%- endif %}, +{%- if fragments %} + 'fragments': {{ fragments }}, +{%- else %} 'fragments': [], +{%- endif %} +{%- if cross_compile %} + 'cross_compile': '{{ cross_compile }}', +{%- endif %} +{%- if cross_compile_compat %} + 'cross_compile_compat': '{{ cross_compile_compat }}' +{%- endif %} 
+{%- if disable_modules %} + 'disable_modules': {{ disable_modules }} +{%- endif %} +{%- if dtbs_check %} + 'dtbs_check': '{{ dtbs_check }}' +{%- endif %} } {%- endblock %} {% block python_job -%} -class Job(BaseJob): - def _run_kbuild(self, src_path, command, job_log): - cmd = f"""(\ -set -e -cd {src_path} -echo '# {command}' | tee -a {job_log} -{command} >> {job_log} 2>&1 -)""" - ret = subprocess.run(cmd, shell=True).returncode - return ret == 0 +WORKSPACE = '/tmp/kci' - def _upload_artifacts(self, local_artifacts): - artifacts = {} - storage = self._get_storage() - if storage and NODE: - root_path = '-'.join([JOB_NAME, NODE['id']]) - print(f"Uploading artifacts to {root_path}") - for file_name, file_path in local_artifacts.items(): - if os.path.exists(file_path): - file_url = storage.upload_single( - (file_path, file_name), root_path - ) - print(file_url) - artifacts[file_name] = file_url - return artifacts +def main(args): + build = KBuild(node=NODE, jobname=JOB_NAME, params=KBUILD_PARAMS, apiconfig=API_CONFIG_YAML) + build.set_workspace(WORKSPACE) + build.set_storage_config(STORAGE_CONFIG_YAML) + build.write_script("build.sh") + build.serialize("_build.json") + r = os.system("bash -e build.sh") + build2 = KBuild.from_json("_build.json") + build2.verify_build() + results = build2.submit(r) + return results - def _run(self, src_path): - job_log = 'job.txt' - job_log_path = os.path.join(src_path, job_log) - local_artifacts = { - job_log: job_log_path, - 'config': os.path.join(src_path, '.config'), - 'bzImage': os.path.join(src_path, 'arch/x86/boot/bzImage'), - 'modules.tar.gz': os.path.join(src_path, 'modules.tar.gz'), - } - - if os.path.exists(job_log_path): - os.remove(job_log_path) - - steps = { - 'config': f"make ARCH=x86_64 {KBUILD_PARAMS['defconfig']}", - 'kernel': "make ARCH=x86_64 bzImage --jobs=$(nproc)", - 'modules': "make ARCH=x86_64 modules --jobs=$(nproc)", - 'modules_install': ' '.join([ - "make", - "ARCH=x86_64", - "INSTALL_MOD_PATH=_modules_", - "INSTALL_MOD_STRIP=1", - "modules_install", - ]), - 'modules_tarball': "tar -C _modules_ -czf modules.tar.gz .", - } - step_results = {name: (None, []) for name in steps.keys()} - - for name, command in steps.items(): - res = self._run_kbuild(src_path, command, job_log) - res_str = 'pass' if res is True else 'fail' - step_results[name] = (res_str, []) - if res is False: - break - - artifacts = self._upload_artifacts(local_artifacts) - - if os.path.exists(job_log_path): - with open(job_log_path, encoding='utf-8') as job_log_file: - print("--------------------------------------------------") - print(job_log_file.read()) - print("--------------------------------------------------") - - job_result = 'pass' if all( - res == 'pass' for res in ( - step_res for (name, (step_res, _)) in step_results.items() - ) - ) else 'fail' - - results = { - 'node': { - 'result': job_result, - 'artifacts': artifacts, - }, - 'child_nodes': [ - { - 'node': { - 'name': name, - 'result': result, - }, - 'child_nodes': child_nodes, - } for name, (result, child_nodes) in step_results.items() - ] - } - - return results - - def _submit(self, result, node, api): - node = node.copy() - node['data'] = { - key: KBUILD_PARAMS[key] for key in [ - 'arch', 'defconfig', 'compiler', 'fragments', - ] - } - - # Ensure top-level name is kept the same - result['node']['name'] = node['name'] - api_helper = kernelci.api.helper.APIHelper(api) - api_helper.submit_results(result, node) - return node {% endblock %} + +{%- block python_main %} +if __name__ == '__main__': + 
main(sys.argv) + sys.exit(0) +{%- endblock %} diff --git a/config/runtime/kselftest.jinja2 b/config/runtime/kselftest.jinja2 new file mode 100644 index 000000000..75221fbfb --- /dev/null +++ b/config/runtime/kselftest.jinja2 @@ -0,0 +1,4 @@ +{%- set boot_commands = 'nfs' %} +{%- set test_method = 'kselftest' %} +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} diff --git a/config/runtime/kunit.jinja2 b/config/runtime/kunit.jinja2 index 2e11381ef..8716e43a0 100644 --- a/config/runtime/kunit.jinja2 +++ b/config/runtime/kunit.jinja2 @@ -12,6 +12,7 @@ import subprocess {%- block python_local_imports %} {{ super() }} import kernelci.api.helper +import kernelci.runtime {%- endblock %} {%- block python_globals %} @@ -19,7 +20,7 @@ import kernelci.api.helper RESULT_MAP = { 'PASS': 'pass', 'FAIL': 'fail', - 'SKIP': None, + 'SKIP': 'skip', } ARCH = '{{ arch }}' {% endblock %} @@ -36,11 +37,17 @@ class Job(BaseJob): 'node': { 'name': test_case['name'], 'result': RESULT_MAP[test_case['status']], + 'kind': 'test' }, 'child_nodes': [], }) for sub_group in group.get('sub_groups', []): child_nodes.append(self._parse_results(sub_group)) + + node['kind'] = 'job' if child_nodes else 'test' + if node['kind'] == 'job': + node['result'] = kernelci.runtime.evaluate_test_suite_result(child_nodes) + return { 'node': node, 'child_nodes': child_nodes, @@ -64,10 +71,12 @@ cd {src_path} def _upload_artifacts(self, local_artifacts): artifacts = {} storage = self._get_storage() - if storage and NODE: - root_path = '-'.join([JOB_NAME, NODE['id']]) + if storage and self._node: + root_path = '-'.join([JOB_NAME, self._node['id']]) print(f"Uploading artifacts to {root_path}") for file_name, file_path in local_artifacts.items(): + # Normalize field names + file_name = file_name.replace('.', '_') if os.path.exists(file_path): file_url = storage.upload_single( (file_path, file_name), root_path @@ -123,10 +132,12 @@ cd {src_path} 'node': { 'result': step_results['exec'][0] or 'fail', 'artifacts': artifacts, + 'data': {'arch': ARCH if ARCH else 'um'} }, 'child_nodes': [ { 'node': { + 'kind': 'job' if child_nodes else 'test' , 'name': name, 'result': result, }, @@ -137,12 +148,11 @@ cd {src_path} return results - def _submit(self, result, node, api): + def _submit(self, result): # Ensure top-level name is kept the same result = result.copy() - result['node']['name'] = node['name'] - api_helper = kernelci.api.helper.APIHelper(api) - api_helper.submit_results(result, node) + result['node']['name'] = self._node['name'] + api_helper = kernelci.api.helper.APIHelper(self._api) + api_helper.submit_results(result, self._node) - return node {% endblock %} diff --git a/config/runtime/kver.jinja2 b/config/runtime/kver.jinja2 index 8f66b49c1..b1b534f02 100644 --- a/config/runtime/kver.jinja2 +++ b/config/runtime/kver.jinja2 @@ -5,7 +5,7 @@ {%- block python_globals %} {{ super() }} -REVISION = {{ node.revision }} +REVISION = {{ node.data.kernel_revision }} {% endblock %} {% block python_job_constr -%} diff --git a/config/runtime/rt-tests.jinja2 b/config/runtime/rt-tests.jinja2 new file mode 100644 index 000000000..c089fb2b3 --- /dev/null +++ b/config/runtime/rt-tests.jinja2 @@ -0,0 +1,4 @@ +{%- set boot_commands = 'nfs' %} +{%- set test_method = 'rt-tests' %} +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} diff --git a/config/runtime/sleep.jinja2 b/config/runtime/sleep.jinja2 new file mode 100644 index 000000000..3a67bb950 --- /dev/null +++ 
b/config/runtime/sleep.jinja2 @@ -0,0 +1,3 @@ +{%- set test_method = 'sleep' %} +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} diff --git a/config/runtime/tast.jinja2 b/config/runtime/tast.jinja2 new file mode 100644 index 000000000..bee77c48a --- /dev/null +++ b/config/runtime/tast.jinja2 @@ -0,0 +1,21 @@ +{%- set base_kurl = platform_config.params.flash_kernel.url %} +{%- set boot_commands = 'nfs' %} +{%- set boot_namespace = 'modules' %} +{%- set flash_modules = 'modules.tar.xz' %} +{%- if platform_config.params.flash_kernel.modules %} +{%- set flash_modules = platform_config.params.flash_kernel.modules %} +{%- endif %} + +{%- set kernel_url = base_kurl ~ '/' ~ platform_config.params.flash_kernel.image %} +{%- set modules_url = base_kurl ~ '/' ~ flash_modules %} +{%- if platform_config.params.flash_kernel.dtb %} +{%- set dtb_url = base_kurl ~ '/' ~ platform_config.params.flash_kernel.dtb %} +{%- elif device_dtb %} +{%- set dtb_url = base_kurl ~ '/' ~ device_dtb %} +{%- endif %} +{%- set nfsroot = platform_config.params.nfsroot %} + +{%- set test_method = 'tast' %} + +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} diff --git a/config/runtime/v4l2-decoder-conformance.jinja2 b/config/runtime/v4l2-decoder-conformance.jinja2 new file mode 100644 index 000000000..f196854e4 --- /dev/null +++ b/config/runtime/v4l2-decoder-conformance.jinja2 @@ -0,0 +1,4 @@ +{%- set boot_commands = 'nfs' %} +{%- set test_method = 'v4l2-decoder-conformance' %} +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} diff --git a/config/runtime/watchdog-reset.jinja2 b/config/runtime/watchdog-reset.jinja2 new file mode 100644 index 000000000..01f5a1a8c --- /dev/null +++ b/config/runtime/watchdog-reset.jinja2 @@ -0,0 +1,3 @@ +{%- set test_method = 'watchdog-reset' %} +{%- set base_template = 'base/' + runtime + '.jinja2' %} +{%- extends base_template %} \ No newline at end of file diff --git a/config/scheduler-chromeos.yaml b/config/scheduler-chromeos.yaml new file mode 100644 index 000000000..5802dab6b --- /dev/null +++ b/config/scheduler-chromeos.yaml @@ -0,0 +1,548 @@ +_anchors: + + amd-platforms: &amd-platforms + - acer-R721T-grunt + - acer-cp514-3wh-r0qs-guybrush + - asus-CM1400CXA-dalboz + - dell-latitude-3445-7520c-skyrim + - hp-14-db0003na-grunt + - hp-11A-G6-EE-grunt + - hp-14b-na0052xx-zork + - hp-x360-14a-cb0001xx-zork + - lenovo-TPad-C13-Yoga-zork + + intel-platforms: &intel-platforms + - acer-cb317-1h-c3z6-dedede + - acer-cbv514-1h-34uz-brya + - acer-chromebox-cxi4-puff + - acer-cp514-2h-1130g7-volteer + - acer-cp514-2h-1160g7-volteer + - asus-C433TA-AJ0005-rammus + - asus-C436FA-Flip-hatch + - asus-C523NA-A20057-coral + - dell-latitude-5300-8145U-arcada + - dell-latitude-5400-4305U-sarien + - dell-latitude-5400-8665U-sarien + - hp-x360-14-G1-sona + - hp-x360-12b-ca0010nr-n4020-octopus + + mediatek-platforms: &mediatek-platforms + - mt8183-kukui-jacuzzi-juniper-sku16 + - mt8186-corsola-steelix-sku131072 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + qualcomm-platforms: &qualcomm-platforms + - sc7180-trogdor-kingoftown + - sc7180-trogdor-lazor-limozeen + + build-k8s-all: &build-k8s-all + event: + channel: node + name: checkout + state: available + runtime: + name: k8s-all + + lava-job-collabora: &lava-job-collabora + runtime: + type: lava + name: lava-collabora + + test-job-arm64-mediatek: &test-job-arm64-mediatek + <<: *lava-job-collabora + event: + channel: node + name: 
kbuild-gcc-12-arm64-chromebook + result: pass + platforms: *mediatek-platforms + + test-job-arm64-qualcomm: &test-job-arm64-qualcomm + <<: *test-job-arm64-mediatek + platforms: *qualcomm-platforms + + test-job-chromeos-amd: &test-job-chromeos-amd + <<: *lava-job-collabora + event: + channel: node + name: kbuild-gcc-12-x86-chromeos-amd + result: pass + platforms: *amd-platforms + + test-job-chromeos-intel: &test-job-chromeos-intel + <<: *lava-job-collabora + event: + channel: node + name: kbuild-gcc-12-x86-chromeos-intel + result: pass + platforms: *intel-platforms + + test-job-chromeos-mediatek: &test-job-chromeos-mediatek + <<: *test-job-arm64-mediatek + event: + channel: node + name: kbuild-gcc-12-arm64-chromeos-mediatek + result: pass + + test-job-chromeos-qualcomm: &test-job-chromeos-qualcomm + <<: *test-job-arm64-qualcomm + event: + channel: node + name: kbuild-gcc-12-arm64-chromeos-qualcomm + result: pass + + test-job-x86-amd: &test-job-x86-amd + <<: *lava-job-collabora + event: + channel: node + name: kbuild-gcc-12-x86 + result: pass + platforms: *amd-platforms + + test-job-x86-intel: &test-job-x86-intel + <<: *test-job-x86-amd + platforms: *intel-platforms + +scheduler: + + - job: baseline-arm64-mediatek + <<: *test-job-arm64-mediatek + + - job: baseline-arm64-qualcomm + <<: *test-job-arm64-qualcomm + + - job: baseline-arm64-mediatek + <<: *test-job-chromeos-mediatek + + - job: baseline-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: baseline-nfs-arm64-mediatek + <<: *test-job-arm64-mediatek + + - job: baseline-nfs-arm64-mediatek + <<: *test-job-chromeos-mediatek + + - job: baseline-nfs-arm64-qualcomm + <<: *test-job-arm64-qualcomm + + - job: baseline-nfs-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: baseline-nfs-x86-amd + <<: *test-job-chromeos-amd + + - job: baseline-nfs-x86-amd + <<: *test-job-x86-amd + + - job: baseline-nfs-x86-intel + <<: *test-job-chromeos-intel + + - job: baseline-nfs-x86-intel + <<: *test-job-x86-intel + + - job: baseline-x86-amd + <<: *test-job-chromeos-amd + + - job: baseline-x86-amd-staging + <<: *test-job-chromeos-amd + runtime: + type: lava + name: lava-collabora-staging + platforms: + - dell-latitude-3445-7520c-skyrim + + - job: baseline-x86-amd + <<: *test-job-x86-amd + + - job: baseline-x86-intel + <<: *test-job-chromeos-intel + + - job: baseline-x86-intel + <<: *test-job-x86-intel + + - job: kbuild-gcc-12-arm64-chromebook + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-chromeos-mediatek + <<: *build-k8s-all + + - job: kbuild-gcc-12-arm64-chromeos-qualcomm + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-chromeos-amd + <<: *build-k8s-all + + - job: kbuild-gcc-12-x86-chromeos-intel + <<: *build-k8s-all + + - job: kselftest-acpi + <<: *test-job-x86-intel + + - job: kselftest-dt + <<: *lava-job-collabora + event: + channel: node + name: kbuild-gcc-12-arm64-chromebook + result: pass + platforms: + - mt8183-kukui-jacuzzi-juniper-sku16 + - mt8186-corsola-steelix-sku131072 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + - sc7180-trogdor-kingoftown + - sc7180-trogdor-lazor-limozeen + + - job: kselftest-device-error-logs + <<: *lava-job-collabora + event: + channel: node + name: kbuild-gcc-12-arm64-chromebook + result: pass + platforms: *mediatek-platforms + + - job: kselftest-cpufreq + <<: *test-job-x86-intel + + - job: kselftest-cpufreq + <<: *test-job-x86-amd + + - job: kselftest-cpufreq + <<: *test-job-arm64-qualcomm + + - job: kselftest-cpufreq + <<: *test-job-arm64-mediatek + + - job: kselftest-dmabuf-heaps + 
<<: *test-job-x86-intel + + - job: kselftest-dmabuf-heaps + <<: *test-job-x86-amd + + - job: kselftest-dmabuf-heaps + <<: *test-job-arm64-qualcomm + + - job: kselftest-dmabuf-heaps + <<: *test-job-arm64-mediatek + + - job: kselftest-exec + <<: *test-job-x86-intel + + - job: kselftest-exec + <<: *test-job-x86-amd + + - job: kselftest-exec + <<: *test-job-arm64-qualcomm + + - job: kselftest-exec + <<: *test-job-arm64-mediatek + + - job: kselftest-iommu + <<: *test-job-x86-intel + + - job: kselftest-iommu + <<: *test-job-x86-amd + + - job: kselftest-iommu + <<: *test-job-arm64-qualcomm + + - job: kselftest-iommu + <<: *test-job-arm64-mediatek + +# - job: tast-basic-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-basic-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-basic-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-basic-x86-intel +# <<: *test-job-chromeos-intel + + - job: tast-decoder-chromestack-arm64-mediatek + <<: *test-job-chromeos-mediatek + + - job: tast-decoder-chromestack-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-chromestack-arm64-qualcomm-pre6_7 + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-chromestack-x86-amd + <<: *test-job-chromeos-amd + + - job: tast-decoder-chromestack-x86-intel + <<: *test-job-chromeos-intel + + - job: tast-decoder-v4l2-sl-av1-arm64-mediatek + <<: *test-job-chromeos-mediatek + platforms: + - mt8195-cherry-tomato-r2 + + - job: tast-decoder-v4l2-sl-h264-arm64-mediatek + <<: *test-job-chromeos-mediatek + platforms: + - mt8183-kukui-jacuzzi-juniper-sku16 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + - job: tast-decoder-v4l2-sl-vp8-arm64-mediatek + <<: *test-job-chromeos-mediatek + platforms: + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + - job: tast-decoder-v4l2-sl-vp9-arm64-mediatek + <<: *test-job-chromeos-mediatek + platforms: + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + - job: tast-decoder-v4l2-sf-h264-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-v4l2-sf-vp8-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-v4l2-sf-vp9-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-v4l2-sf-vp9-arm64-qualcomm-pre6_7 + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-v4l2-sf-vp9-extra-arm64-qualcomm + <<: *test-job-chromeos-qualcomm + + - job: tast-decoder-v4l2-sf-vp9-extra-arm64-qualcomm-pre6_7 + <<: *test-job-chromeos-qualcomm + +# - job: tast-hardware-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-hardware-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-hardware-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-hardware-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-kernel-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-kernel-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-kernel-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-kernel-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-mm-decode-arm64-mediatek +# <<: *test-job-chromeos-mediatek +# platforms: +# - mt8192-asurada-spherion-r0 +# - mt8195-cherry-tomato-r2 + +# - job: tast-mm-decode-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-mm-misc-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-mm-misc-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-mm-misc-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-mm-misc-x86-intel +# <<: 
*test-job-chromeos-intel + +# - job: tast-perf-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-perf-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-perf-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-perf-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-perf-long-duration-arm64-mediatek +# <<: *test-job-chromeos-mediatek + + # - job: tast-perf-long-duration-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-perf-long-duration-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-perf-long-duration-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-platform-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-platform-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-platform-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-platform-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-power-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-power-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-power-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-power-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-sound-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-sound-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-sound-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-sound-x86-intel +# <<: *test-job-chromeos-intel + +# - job: tast-ui-arm64-mediatek +# <<: *test-job-chromeos-mediatek + +# - job: tast-ui-arm64-qualcomm +# <<: *test-job-chromeos-qualcomm + +# - job: tast-ui-x86-amd +# <<: *test-job-chromeos-amd + +# - job: tast-ui-x86-intel +# <<: *test-job-chromeos-intel + + - job: rt-tests-cyclicdeadline + <<: *test-job-x86-intel + + - job: rt-tests-cyclicdeadline + <<: *test-job-x86-amd + + - job: rt-tests-cyclicdeadline + <<: *test-job-arm64-qualcomm + + - job: rt-tests-cyclicdeadline + <<: *test-job-arm64-mediatek + + - job: rt-tests-cyclictest + <<: *test-job-x86-intel + + - job: rt-tests-cyclictest + <<: *test-job-x86-amd + + - job: rt-tests-cyclictest + <<: *test-job-arm64-qualcomm + + - job: rt-tests-cyclictest + <<: *test-job-arm64-mediatek + + - job: rt-tests-rtla-osnoise + <<: *test-job-x86-intel + + - job: rt-tests-rtla-osnoise + <<: *test-job-x86-amd + + - job: rt-tests-rtla-osnoise + <<: *test-job-arm64-qualcomm + + - job: rt-tests-rtla-osnoise + <<: *test-job-arm64-mediatek + + - job: rt-tests-rtla-timerlat + <<: *test-job-x86-intel + + - job: rt-tests-rtla-timerlat + <<: *test-job-x86-amd + + - job: rt-tests-rtla-timerlat + <<: *test-job-arm64-qualcomm + + - job: rt-tests-rtla-timerlat + <<: *test-job-arm64-mediatek + + - job: v4l2-decoder-conformance-av1 + <<: *test-job-arm64-mediatek + platforms: + - mt8195-cherry-tomato-r2 + + - job: v4l2-decoder-conformance-av1-chromium-10bit + <<: *test-job-arm64-mediatek + platforms: + - mt8195-cherry-tomato-r2 + + - job: v4l2-decoder-conformance-h264 + <<: *test-job-arm64-mediatek + + - job: v4l2-decoder-conformance-h264-frext + <<: *test-job-arm64-mediatek + + - job: v4l2-decoder-conformance-h265 + <<: *test-job-arm64-mediatek + platforms: + - mt8195-cherry-tomato-r2 + + - job: v4l2-decoder-conformance-vp8 + <<: *test-job-arm64-mediatek + platforms: + - mt8186-corsola-steelix-sku131072 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + - job: v4l2-decoder-conformance-vp9 + <<: *test-job-arm64-mediatek + platforms: + - mt8186-corsola-steelix-sku131072 + - mt8192-asurada-spherion-r0 + - mt8195-cherry-tomato-r2 + + - 
job: v4l2-decoder-conformance-h264 + <<: *test-job-arm64-qualcomm + + - job: v4l2-decoder-conformance-h264-frext + <<: *test-job-arm64-qualcomm + + - job: v4l2-decoder-conformance-h265 + <<: *test-job-arm64-qualcomm + + - job: v4l2-decoder-conformance-vp8 + <<: *test-job-arm64-qualcomm + + - job: v4l2-decoder-conformance-vp9 + <<: *test-job-arm64-qualcomm + + - job: watchdog-reset-arm64-mediatek + <<: *test-job-arm64-mediatek + + - job: watchdog-reset-arm64-qualcomm + <<: *test-job-arm64-qualcomm + + - job: watchdog-reset-x86-amd + <<: *test-job-x86-amd + + - job: watchdog-reset-x86-intel + <<: *test-job-x86-intel + platforms: + - hp-x360-12b-ca0010nr-n4020-octopus \ No newline at end of file diff --git a/config/traces_config.yaml b/config/traces_config.yaml new file mode 100644 index 000000000..4bb50fc80 --- /dev/null +++ b/config/traces_config.yaml @@ -0,0 +1,37 @@ +categories: + +- name: Test finished + tag: test + patterns: + - LAVA_SIGNAL_TESTCASE.*fail + +- name: Server error + tag: infra + patterns: + - .*Fault 404.* + - .*503 Service Temporarily Unavailable.* + - .*504 Gateway.* + +- name: Network + tag: infra + patterns: + - Job failed.*registry.* + - "ERROR:.*Docker.*" + - "ERROR:.*failed to pull image.*" + - ssh.*No route to host + - sync failed, giving up + +- name: LAVA Bug + tag: infra-lava + patterns: + - "LAVABug:" + +- name: Test finished + tag: test + patterns: + - 'make.* Error' + +- name: Unknown + tag: unknown + patterns: + - ".*" diff --git a/data/output/.gitkeep b/data/output/.gitkeep old mode 100644 new mode 100755 diff --git a/doc/_index.md b/doc/_index.md new file mode 100644 index 000000000..6e484c08a --- /dev/null +++ b/doc/_index.md @@ -0,0 +1,10 @@ +--- +title: "Pipeline" +date: 2024-05-29 +description: "KernelCI Pipeline" +weight: 1 +--- + +This section explains modular pipeline services including setup and +developer manual. +Github repository can be found at [`kernelci-pipeline`](https://github.com/kernelci/kernelci-pipeline). diff --git a/doc/connecting-lab.md b/doc/connecting-lab.md new file mode 100644 index 000000000..27dad46ac --- /dev/null +++ b/doc/connecting-lab.md @@ -0,0 +1,133 @@ +--- +title: "Connecting LAVA Lab to the pipeline instance" +date: 2024-05-29 +description: "Connecting a LAVA lab to the KernelCI pipeline" +weight: 3 +--- + +As we are moving towards the new KernelCI API and pipeline, we need to make sure +all the existing LAVA labs are connected to the new pipeline instance. This +document explains how to do this. + +## Token setup + +The first step is to generate a token for the lab. This is done by the lab admin, +and the token is used to submit jobs from pipeline to the lab and to authenticate +LAVA lab callbacks to the pipeline. + +Requirements for the token: +- `Description`: a string matching the regular expression `[a-zA-Z0-9\-]+`, for example "kernelci-new-api-callback" +- `Value`: arbitrary, kept secret + +*IMPORTANT!* You need to have both fields, as that's how LAVA works: +- You submit the job with the token description in job definition +- LAVA lab sends the result back to the pipeline with the token value (retrieved by that token-description) in the header + +More details in [LAVA documentation](https://docs.lavasoftware.org/lava/user-notifications.html#notification-callbacks). + + +## Pipeline configuration + +### Update pipeline configuration + +The first step is to add the lab configuration to [pipeline configuration](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml) file. 
+
+Please add a new entry to the `runtimes` section of the configuration file
+as follows:
+
+```yaml
+
+  lava-broonie:
+    lab_type: lava
+    url: 'https://lava.sirena.org.uk/'
+    priority_min: 10
+    priority_max: 40
+    notify:
+      callback:
+        token: kernelci-new-api-callback
+        url: https://staging.kernelci.org:9100
+
+```
+Here `lava-broonie` is the name of the lab, `lab_type` indicates that the lab is of the `lava` type, `url` is the URL of the lab, `priority_min` and `priority_max` define the priority range allowed for jobs (assigned by the lab owner), and `notify` is the notification configuration for the lab. The `callback` section contains the token description that you received from the above [step](#token-setup) and the URL of the pipeline instance's LAVA callback endpoint.
+More details on how the LAVA callback and token work can be found in the [LAVA documentation](https://docs.lavasoftware.org/lava/user-notifications.html#notification-callbacks).
+
+Please submit a pull request to the [`kernelci-pipeline`](https://github.com/kernelci/kernelci-pipeline) repository to add the lab configuration. See this
+[pull request](https://github.com/kernelci/kernelci-pipeline/pull/426) for reference.
+
+### KernelCI configuration (TOML) file
+
+The next step is to add the token to the pipeline services configuration file, i.e. the [`config/kernelci.toml`](https://github.com/kernelci/kernelci-pipeline/blob/main/config/kernelci.toml) file. Every lab/runtime should have a section `runtime.<lab-name>` in the TOML file. The lab token should be stored in a key named `runtime_token` inside the section.
+For example,
+
+```toml
+[runtime.<lab-name>]
+runtime_token="<lab-token-value>"
+```
+
+The section name `<lab-name>` should be replaced with the actual lab name, **matching the name of the lab in the pipeline configuration, i.e. `config/pipeline.yaml`**.
+`<lab-token-value>` should be replaced with the actual token value that you received in the [`Token setup`](#token-setup) step. Usually, it is a long string of random characters.
+For example, in our documentation we used `lava-broonie` as the lab name, so the section will look like this:
+```toml
+[runtime.lava-broonie]
+runtime_token="N0tAS3creTT0k3n"
+```
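+
+For reference, the two pieces meet in the LAVA job definition that the pipeline generates: the job carries the token *description*, and the lab resolves it to the secret *value* when sending the callback. The resulting notify block looks roughly like the following sketch (for illustration only; the exact structure is produced by the pipeline's job templates):
+
+```yaml
+notify:
+  criteria:
+    status: finished
+  callbacks:
+    - url: https://staging.kernelci.org:9100
+      method: POST
+      token: kernelci-new-api-callback
+      content-type: json
+```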
+### `docker-compose` file
+
+We are running all the pipeline services as docker containers.
+You need to pass the lab name to the `--runtimes` argument of the [`scheduler-lava`](https://github.com/kernelci/kernelci-pipeline/blob/main/docker-compose.yaml#L80)
+service in the `docker-compose.yaml` file to enable the lab.
+For example, the following configuration adds the `lava-broonie` lab along with other labs:
+
+```yml
+scheduler-lava:
+    <<: *scheduler
+    container_name: 'kernelci-pipeline-scheduler-lava'
+    command:
+      - './pipeline/scheduler.py'
+      - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}'
+      - 'loop'
+      - '--runtimes'
+      - 'lava-collabora'
+      - 'lava-collabora-staging'
+      - 'lava-broonie'
+```
+
+### Jobs and devices specific to the lab
+
+The last step is to add some jobs that you want KernelCI to submit to the lab.
+You also need to add the platforms that the jobs will run on in the lab.
+For example, the following adds a job and a device type for the `lava-broonie` lab:
+
+```yaml
+jobs:
+  baseline-arm64-broonie:
+    template: baseline.jinja2
+    kind: test
+
+platforms:
+  sun50i-h5-libretech-all-h3-cc:
+    <<: *arm64-device
+    mach: allwinner
+    dtb: dtbs/allwinner/sun50i-h5-libretech-all-h3-cc.dtb
+
+scheduler:
+  - job: baseline-arm64-broonie
+    event:
+      channel: node
+      name: kbuild-gcc-10-arm64
+      result: pass
+    runtime:
+      type: lava
+      name: lava-broonie
+    platforms:
+      - sun50i-h5-libretech-all-h3-cc
+```
+
+Jobs usually define tasks to be run, such as a kernel build or a test suite running on a particular device (platform).
+The device is defined in the `platforms` section, and the job is defined in the `jobs` section. The conditions for the job to be run are defined in the `scheduler` section.
+More details about the pipeline configuration can be found in the pipeline configuration documentation (TBD).
+
+> **Note** We have a [`lava-callback`](https://github.com/kernelci/kernelci-pipeline/blob/main/docker-compose-lava.yaml#L10) service that receives job results from the lab and sends them to the API.
+
+And here you go! You have successfully connected your lab to KernelCI.
diff --git a/doc/developer-documentation.md b/doc/developer-documentation.md
new file mode 100644
index 000000000..3619ee1c5
--- /dev/null
+++ b/doc/developer-documentation.md
@@ -0,0 +1,165 @@
+---
+title: "Developer Documentation"
+date: 2024-06-18
+description: "KernelCI Pipeline developer manual"
+weight: 4
+---
+
+## Enabling new Kernel trees, builds, and tests
+
+We can monitor different kernel trees in KernelCI. The builds and test jobs are triggered whenever the specified branches are updated.
+This manual describes how to enable trees in [`kernelci-pipeline`](https://github.com/kernelci/kernelci-pipeline.git).
+
+
+### Pipeline configuration
+The pipeline [configuration](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml) file has a `trees` section.
+In order to enable a new tree, we need to add an entry there.
+
+```yaml
+trees:
+  <tree-name>:
+    url: "<tree-url>"
+```
+For example,
+```yaml
+trees:
+  kernelci:
+    url: "https://github.com/kernelci/linux.git"
+```
+
+The `<tree-name>` will be used in the other sections to refer to the newly added tree.
+After adding a `trees` entry, we need to define build and test configurations for it. In the same [configuration](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml) file, the `jobs` section specifies them. ChromeOS-specific job definitions are located in the [config/jobs-chromeos.yaml](https://github.com/kernelci/kernelci-pipeline/blob/main/config/jobs-chromeos.yaml) file. Depending upon the type of the job, such as a build or test job, different parameters are specified:
+
+For instance,
+```yaml
+jobs:
+
+  <build-job-name>:
+    template: <jinja2-template>
+    kind: kbuild
+    image: <docker-image>
+    params:
+      arch: <architecture>
+      compiler: <compiler>
+      cross_compile: <cross-compile-prefix>
+      dtbs_check: <dtbs-check-flag>
+      defconfig: <defconfig>
+      fragments:
+        - <config-fragment>
+    rules:
+      min_version:
+        version: <kernel-version>
+        patchlevel: <kernel-patchlevel>
+      tree:
+      - <tree-name1>
+      - !<tree-name2>
+
+  <test-job-name>:
+    template: <jinja2-template>
+    kind: <job-or-test>
+    params:
+      nfsroot: <nfs-root-url>
+      collections: <collections>
+      job_timeout: <timeout>
+    kcidb_test_suite: <kcidb-test-suite-name>
+    rules:
+      min_version:
+        version: <kernel-version>
+        patchlevel: <kernel-patchlevel>
+      tree:
+      - <tree-name1>
+      - !<tree-name2>
+```
+Here is the description of each field:
+- **`template`**: A `jinja2` template should be added to the [`config/runtime`](https://github.com/kernelci/kernelci-pipeline/tree/main/config/runtime) directory. This template will be used to generate the test definition.
+- **`kind`**: The `kind` field specifies the type of job.
It should be `kbuild` for build jobs, `job` for a test suite, and `test` for a single test case.
+- **`image`**: The `image` field specifies the Docker image used for building and running the test. This field is optional. For example, LAVA test jobs use an image defined in the test definition template instead.
+- **`params`**: The `params` field includes parameters for building the kernel (for `kbuild` jobs) or running the test. These parameters can include architecture, compiler, defconfig options, job timeout, etc.
+- **`rules`**: The `rules` field defines job rules. If a test should be scheduled for a specific kernel tree, branch, or version, these rules can be specified here. The rules prefixed with `!` exclude the specified condition from job scheduling. For example, in the given scenario, the scheduler does not schedule a job if an event is received for the kernel tree `<tree-name2>`.
+- **`kcidb_test_suite`**: The `kcidb_test_suite` field maps the KernelCI test suite name with the KCIDB test. This field is not required for build jobs (`kind: kbuild`). When adding new tests, ensure their definition is present in the `tests.yaml` file in [KCIDB](https://github.com/kernelci/kcidb/blob/main/tests.yaml).
+
+Common patterns are often defined using YAML anchors and aliases. This approach allows for concise job definitions by reusing existing configurations. For example, a kbuild job can be defined as follows:
+```yaml
+  kbuild-gcc-12-arm64-preempt_rt_chromebook:
+    <<: *kbuild-gcc-12-arm64-job
+    params:
+      <<: *kbuild-gcc-12-arm64-params
+      fragments:
+        - 'preempt_rt'
+        - 'arm64-chromebook'
+      defconfig: defconfig
+    rules:
+      tree:
+      - 'stable-rt'
+```
+The test job example is:
+```yaml
+  kselftest-exec:
+    template: kselftest.jinja2
+    kind: job
+    params:
+      nfsroot: 'http://storage.kernelci.org/images/rootfs/debian/bookworm-kselftest/20240313.0/{debarch}'
+      collections: exec
+      job_timeout: 10
+    kcidb_test_suite: kselftest.exec
+```
+Please have a look at the [config/pipeline.yaml](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml) and [config/jobs-chromeos.yaml](https://github.com/kernelci/kernelci-pipeline/blob/main/config/jobs-chromeos.yaml) files to check the currently added job definitions for reference.
+
+We need to specify which branch of a particular tree to monitor for triggering jobs in `build_configs`:
+
+```yaml
+build_configs:
+  <build-config-name>:
+    tree: <tree-name>
+    branch: <branch-name>
+
+  <build-config-name2>:
+    tree: <tree-name2>
+    branch: <branch-name2>
+```
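+
+As a concrete sketch, enabling a hypothetical `myvendor` tree and monitoring its `main` branch would combine the two sections like this (the tree, URL, and branch names are made up for illustration):
+
+```yaml
+trees:
+  myvendor:
+    url: "https://github.com/myvendor/linux.git"
+
+build_configs:
+  myvendor-main:
+    tree: myvendor
+    branch: 'main'
+```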
+
+That's it! The tree is enabled now. All the jobs defined under the `jobs` section of the [config file](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml) will run on the specified branches for this tree.
+
+### Schedule the job
+
+We also need a `scheduler` entry for the newly added job to specify the pre-conditions for scheduling, and to define the runtime and platforms for job submissions.
+
+For example,
+```yaml
+scheduler:
+
+  - job: <job-name>
+    event: <event>
+    runtime:
+      name: <runtime-name>
+
+  - job: <job-name2>
+    event: <event2>
+    runtime:
+      type: <runtime-type>
+      name: <runtime-name2>
+    platforms:
+      - <platform-name>
+```
+
+Here is the description of each field:
+- **`job`**: Specifies the job name, which must match the name used in the `jobs` section.
+- **`event`**: Specifies the API PubSub event triggering the test scheduling.
+For example, to trigger the `kbuild` job when new source code is published, the event is specified as:
+```yaml
+  event:
+    channel: node
+    name: checkout
+    state: available
+```
+For a test that requires the successful completion of a build job such as `kbuild-gcc-10-arm64`, specify the event as follows:
+```yaml
+  event:
+    channel: node
+    name: kbuild-gcc-10-arm64
+    result: pass
+```
+Here, `node` refers to the name of the API PubSub channel where node events are published.
+- **`runtime`**: Select a runtime for scheduling and running the job. Supported runtimes include `shell`, `docker`, `lava`, and `kubernetes`. Specify the runtime type from the `runtimes` section. Note that the `name` property is required for `lava` and `kubernetes` runtimes to specify which lab or Kubernetes context should execute the test. Several LAVA labs (such as BayLibre, Collabora, Qualcomm) and Kubernetes contexts have been enabled in KernelCI.
+- **`platforms`**: Includes a list of device types on which the test should run. These should match entries defined in the `platforms` section, such as `qemu-x86`, `bcm2711-rpi-4-b`, and others. A complete example entry is sketched below.
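+
+Putting these fields together, a complete entry for the `kselftest-exec` job shown earlier might look like this (a sketch; the build job, lab, and platform names are examples picked from elsewhere in this document):
+
+```yaml
+scheduler:
+
+  - job: kselftest-exec
+    event:
+      channel: node
+      name: kbuild-gcc-10-arm64
+      result: pass
+    runtime:
+      type: lava
+      name: lava-collabora
+    platforms:
+      - sc7180-trogdor-kingoftown
+```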
+
+After following these steps, run your pipeline instance to activate your newly added test configuration.
diff --git a/doc/pipeline-details.md b/doc/pipeline-details.md
new file mode 100644
index 000000000..88ccb0b79
--- /dev/null
+++ b/doc/pipeline-details.md
@@ -0,0 +1,84 @@
+---
+title: "Pipeline details"
+date: 2023-09-27
+description: "KernelCI Pipeline design details"
+weight: 2
+---
+
+GitHub repository:
+[`kernelci-pipeline`](https://github.com/kernelci/kernelci-pipeline.git)
+
+Below is the detailed pipeline flow diagram with the associated nodes and pub/sub events:
+
+```mermaid
+flowchart
+    start([Start]) --> trigger_service
+    subgraph trigger_service[Trigger Service]
+        kernel_revision{New kernel<br />revision ?} --> |No| sleep[sleep]
+        sleep --> kernel_revision
+        kernel_revision --> |Yes| checkout[Create 'checkout' node<br />state=running, result=None]
+    end
+    subgraph tarball_service[Tarball Service]
+        upload_tarball[Create and upload tarball to the storage]
+        checkout --> |event:<br />checkout created, state=running| upload_tarball
+        upload_tarball --> update_checkout_node[Update 'checkout' node<br />state=available, set holdoff<br />update describe and artifacts]
+    end
+    subgraph runner_service[Runner Service]
+        update_checkout_node --> |event:<br />checkout updated<br />state=available| runner_node[Create build/test node<br />state=running, result=None, holdoff=None]
+    end
+    subgraph Run Builds/Tests
+        runner_node --> runtime[Runtime Environment]
+        runtime --> set_available[Update node<br />state=available, result=None, set holdoff]
+        set_available --> run_job[Run build/test job]
+        run_job --> job_done{Job done?}
+        job_done --> |Yes| pass_runner_node[Update node<br />state=done, result=pass/fail/skip]
+        job_done --> |No| run_job
+    end
+    subgraph timeout_service[Timeout Service]
+        get_nodes[Get nodes<br />with state=running/available/closing] --> node_timedout{Node timed out?}
+        verify_available_nodes{Node state is available?} --> |Yes| hold_off_reached{Hold off reached?}
+        hold_off_reached --> |Yes| child_nodes_completed{All child<br />nodes completed ?}
+        child_nodes_completed --> |Yes| set_done[Set parent and child nodes<br />state=done]
+        child_nodes_completed --> |No| set_closing[Set node<br />state=closing]
+        node_timedout --> |Yes| set_done
+        node_timedout --> |No| verify_available_nodes
+    end
+    subgraph test_report_service[Test Report Service]
+        received_tarball{Received checkout node?} --> |Yes| email_report[Generate and<br />email test report]
+    end
+    set_done --> |event:<br />updated<br />state=done| received_tarball
+    test_report_service --> stop([Stop])
+```
+
+Here's a description of each client script:
+
+### Trigger
+
+The pipeline starts with the trigger script.
+The Trigger periodically checks whether a new kernel revision has appeared
+on a git branch. If so, it first checks whether the API has already created a node
+for that revision. If not, it then pushes one node named "checkout". The node's state will be "available" and the result is not defined yet. This generates a pub/sub event for the node creation.
+
+### Tarball
+
+When the trigger pushes a new revision node (checkout), the tarball service receives a pub/sub event. The tarball service then updates a local git checkout of the full kernel source tree. Then it makes a tarball with the source code and pushes it to the API storage. The state of the checkout node is updated to "available" and the holdoff time is set. The URL of the tarball is also added to the artifacts of the revision node.
+
+### Runner
+
+The Runner step listens for pub/sub events about available checkout nodes. It will then schedule some jobs (of any kind, including builds and tests) to be run in various runtime environments, as defined in the pipeline YAML configuration from the Core tools. A node is pushed to the API with "available" state, e.g. a "kunit" node. This generates a pub/sub event for the build or test node creation.
+
+### Runtime Environment
+
+The jobs added by the runner will be run in the specified runtime environment, i.e. shell, Kubernetes, or a LAVA lab.
+Each environment needs to have its own API token set up locally to be able to submit results to the API. It updates the node with state "done" and the result (pass, fail, or skip). This generates a pub/sub event for the node update.
+
+### Timeout
+
+The timeout service periodically checks all nodes' states. If a node is not in "done" state, it checks whether the maximum wait time (timeout) is over. If so, it sets the node and all its child nodes to "done" state.
+If the node is in "available" state and not timed out, it checks the holdoff time. If the holdoff time is reached and all its child nodes are completed, the node state is moved to "done"; otherwise the state is set to "closing".
+A parent node in "closing" state cannot have any new child nodes.
+This generates a pub/sub event for the node update.
+
+### Test Report
+
+The Test Report service in its current state listens for completed checkout nodes. It then generates a test report along with the child nodes' details and sends the report over email.
diff --git a/doc/result-summary-CHANGELOG b/doc/result-summary-CHANGELOG
new file mode 100644
index 000000000..9824e31b3
--- /dev/null
+++ b/doc/result-summary-CHANGELOG
@@ -0,0 +1,78 @@
+12 April 2024
+    Node post-processing code moved to utils.py and applied to both
+    "summary" and "monitor" modes
+
+9 April 2024
+    Implement two working modes:
+      * "summary": single-shot run that queries the API and generates
+        a result summary (what we've been doing so far)
+      * "monitor": permanent process that waits for API events of a
+        certain type and characteristics and generates an individual
+        report for every one of them.
+    Each preset defines in which mode it'll work in the
+    metadata['action'] parameter.
+
+    HTML output files now embed the css code and the original main.css
+    file is no longer deployed individually in the output dir.
+
+    New command-line option added: --config-dir.
+
+    --config option renamed as --config-file.
+
+3 April 2024
+    Rework the command-line options to specify the date parameters in
+    queries:
+      * --created-from: results created since a specific date and time
+      * --created-to: results created until a specific date and time
+      * --last-updated-from: results last updated since a specific
+        date and time
+      * --last-updated-to: results last updated until a specific date
+        and time
+    They all take a timestamp with format: YYYY-mm-DDTHH:MM:SS. Time
+    is optional.
+
+    New command-line option to allow the specification of extra query
+    parameters: --query-params. The parameters must be given in a
+    string formatted like this: "<parameter>=<value>,<parameter>=<value>..."
+    These parameters may override base parameters defined in the
+    presets.
+
+    This CHANGELOG moved to doc/result-summary-CHANGELOG
+
+2 April 2024
+    Improve performance by fetching logs from each job in parallel with
+    ThreadPoolExecutor.
+
+    A regression can now be "active" (when the test hasn't passed
+    since the regression was created) or "inactive" (when a test run
+    has passed since it was created). This is encoded in the "result"
+    field of a regression: result="fail" for "active" regressions and
+    "pass" for "inactive" ones.
+
+    Each regression now stores a list of test runs that ran after the
+    regression was detected. For inactive regressions, it stores up to
+    (and including) the first test run that passed after the
+    regression was created.
+
+    Generic regression templates updated to print this new
+    information.
+
+25 March 2024
+    Add support for a "metadata" section in the preset definitions. Each
+    preset now contains a "metadata" section and a "preset" section,
+    which contains the query parameters.
+
+    All current examples now use the same set of generic templates.
+
+    Preset-specific customizations, such as the report title, are now
+    defined in the "metadata" sections.
+
+    Preset-template association is no longer done via soft-links. Each
+    preset defines the template it uses in its "metadata.template"
+    field.
+
+    Each preset may now define its default output file name in the
+    "metadata.output" field. This can be overridden with the --output
+    command line option.
+
+    Documentation updated and this CHANGELOG file added.
diff --git a/docker-compose-production.yaml b/docker-compose-production.yaml
index 21281c33a..5cd0bc350 100644
--- a/docker-compose-production.yaml
+++ b/docker-compose-production.yaml
@@ -9,13 +9,16 @@ services:
   lava-callback:
     # With uWSGI in socket mode, to use with reverse proxy e.g.
Nginx and SSL - command: - - '/usr/local/bin/uwsgi' - - '--socket=:8000' - - '--buffer-size=32768' - - '-p${NPROC:-4}' - - '--wsgi-file=/home/kernelci/pipeline/lava_callback.py' - - '--callable=app' + command: uvicorn lava_callback:app --host 0.0.0.0 --port 8000 --app-dir /home/kernelci/pipeline/ + #command: + # - '/usr/local/bin/uwsgi' + # - '--master' + # - '--socket=:8000' + # - '--buffer-size=32768' + # - '-p${NPROC:-4}' + # - '--enable-threads' + # - '--wsgi-file=/home/kernelci/pipeline/lava_callback.py' + # - '--callable=app' # With uWSGI HTTP server, suitable for a public instance but no SSL # command: diff --git a/docker-compose.yaml b/docker-compose.yaml index 68a6d0af5..f0a4082ab 100644 --- a/docker-compose.yaml +++ b/docker-compose.yaml @@ -20,6 +20,23 @@ services: volumes: &base-volumes - './src:/home/kernelci/pipeline' - './config:/home/kernelci/config' + extra_hosts: + - "host.docker.internal:host-gateway" + + result_summary: + container_name: 'kernelci-pipeline-result-summary' + image: 'kernelci/staging-kernelci' + env_file: ['.env'] + stop_signal: 'SIGINT' + entrypoint: + - './pipeline/result_summary.py' + - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' + - 'run' + - '--config=${CONFIG:-/home/kernelci/config/result-summary.yaml}' + volumes: + - './src:/home/kernelci/pipeline' + - './config:/home/kernelci/config' + - './data/output:/home/kernelci/data/output' scheduler: &scheduler container_name: 'kernelci-pipeline-scheduler' @@ -34,10 +51,12 @@ services: volumes: - './src:/home/kernelci/pipeline' - './config:/home/kernelci/config' - - './data/output:/home/kernelci/output' + - './data/output:/home/kernelci/data/output' - './data/k8s-credentials/.kube:/home/kernelci/.kube' - './data/k8s-credentials/.config/gcloud:/home/kernelci/.config/gcloud' - './data/k8s-credentials/.azure:/home/kernelci/.azure' + extra_hosts: + - "host.docker.internal:host-gateway" scheduler-docker: <<: *scheduler @@ -55,15 +74,25 @@ services: - './data/output:/home/kernelci/data/output' - './.docker-env:/home/kernelci/.docker-env' - '/var/run/docker.sock:/var/run/docker.sock' # Docker-in-Docker + extra_hosts: + - "host.docker.internal:host-gateway" scheduler-lava: <<: *scheduler container_name: 'kernelci-pipeline-scheduler-lava' command: - './pipeline/scheduler.py' - - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.conf}' + - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'loop' - - '--runtimes=lava-collabora' + - '--runtimes' + - 'lava-collabora' + - 'lava-collabora-staging' + - 'lava-broonie' + - 'lava-baylibre' + - 'lava-qualcomm' + - 'lava-cip' + extra_hosts: + - "host.docker.internal:host-gateway" scheduler-k8s: <<: *scheduler @@ -73,7 +102,11 @@ services: - './pipeline/scheduler.py' - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'loop' - - '--runtimes=k8s-gke-eu-west4' + - '--runtimes' + - 'k8s-gke-eu-west4' + - 'k8s-all' + extra_hosts: + - "host.docker.internal:host-gateway" tarball: <<: *base-service @@ -88,6 +121,8 @@ services: - './data/ssh:/home/kernelci/data/ssh' - './data/src:/home/kernelci/data/src' - './data/output:/home/kernelci/data/output' + extra_hosts: + - "host.docker.internal:host-gateway" trigger: <<: *base-service @@ -96,6 +131,8 @@ services: - './pipeline/trigger.py' - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'run' + extra_hosts: + - "host.docker.internal:host-gateway" regression_tracker: <<: *base-service @@ -106,6 +143,8 @@ services: - 
'/home/kernelci/pipeline/regression_tracker.py' - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'run' + extra_hosts: + - "host.docker.internal:host-gateway" test_report: <<: *base-service @@ -116,6 +155,8 @@ services: - '/home/kernelci/pipeline/test_report.py' - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'loop' + extra_hosts: + - "host.docker.internal:host-gateway" timeout: <<: *base-service @@ -127,6 +168,8 @@ services: - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'run' - '--mode=timeout' + extra_hosts: + - "host.docker.internal:host-gateway" timeout-closing: <<: *base-service @@ -138,6 +181,8 @@ services: - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'run' - '--mode=closing' + extra_hosts: + - "host.docker.internal:host-gateway" timeout-holdoff: <<: *base-service @@ -149,3 +194,20 @@ services: - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' - 'run' - '--mode=holdoff' + extra_hosts: + - "host.docker.internal:host-gateway" + + patchset: + <<: *base-service + container_name: 'kernelci-pipeline-patchset' + command: + - './pipeline/patchset.py' + - '--settings=${KCI_SETTINGS:-/home/kernelci/config/kernelci.toml}' + - 'run' + volumes: + - './src:/home/kernelci/pipeline' + - './config:/home/kernelci/config' + - './data/ssh:/home/kernelci/data/ssh' + - './data/src:/home/kernelci/data/src' + - './data/output:/home/kernelci/data/output' + diff --git a/docker/lava-callback/requirements.txt b/docker/lava-callback/requirements.txt index bec13f5f9..a6f83e155 100644 --- a/docker/lava-callback/requirements.txt +++ b/docker/lava-callback/requirements.txt @@ -1,2 +1,3 @@ -flask==2.3.2 -uwsgi==2.0.21 +uwsgi==2.0.22 +uvicorn==0.30.1 +fastapi==0.111.0 diff --git a/kube/aks/README.md b/kube/aks/README.md new file mode 100644 index 000000000..eb6d9f1f0 --- /dev/null +++ b/kube/aks/README.md @@ -0,0 +1,6 @@ +# Pipeline Kubernetes manifest files + +## Usage + +These files are designed to be used by api-pipeline-deploy.sh script from [kernelci-deploy](https://github.com/kernelci/kernelci-deploy) repository. +Additional documentation can be found in [kernelci-deploy README](https://github.com/kernelci/kernelci-deploy/kubernetes/README.md). 
diff --git a/kube/aks/ingress.yaml b/kube/aks/ingress.yaml new file mode 100644 index 000000000..18889095e --- /dev/null +++ b/kube/aks/ingress.yaml @@ -0,0 +1,32 @@ +--- + +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2023 Collabora Limited +# Author: Guillaume Tucker +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + labels: + app: ingressclass-pipeline + name: pipeline-ingress + namespace: kernelci-pipeline + annotations: + cert-manager.io/cluster-issuer: all-issuer +spec: + ingressClassName: ingressclass-pipeline + tls: + - hosts: + - kernelci-pipeline.westus3.cloudapp.azure.com + secretName: pipeline-tls + rules: + - host: kernelci-pipeline.westus3.cloudapp.azure.com + http: + paths: + - backend: + service: + name: lava-callback + port: + number: 8000 + path: / + pathType: Prefix diff --git a/kube/aks/kernelci-secrets.toml.example b/kube/aks/kernelci-secrets.toml.example new file mode 100644 index 000000000..233148064 --- /dev/null +++ b/kube/aks/kernelci-secrets.toml.example @@ -0,0 +1,49 @@ +[DEFAULT] +api_config = "staging" +storage_config = "staging-azure" +verbose = true + +[trigger] +poll_period = 3600 +startup_delay = 3 +timeout = 60 + +[tarball] +kdir = "/home/kernelci/data/src/linux" +output = "/home/kernelci/data/output" + +[scheduler] +output = "/home/kernelci/data/output" +runtime_config = "k8s-gke-eu-west4" + +[monitor] + +[send_kcidb] +kcidb_topic_name = "playground_kcidb_new" +kcidb_project_id = "kernelci-production" +origin = "kernelci_api" + +[test_report] +smtp_host = "smtp.gmail.com" +smtp_port = 465 +email_sender = "bot@kernelci.org" +email_recipient = "kernelci-results-staging@groups.io" + +[timeout] + +[regression_tracker] + +[storage.staging] +storage_cred = "/home/kernelci/data/ssh/id_rsa_tarball" + +[storage.staging-azure] +storage_cred = "" + +[storage.early-access-azure] +storage_cred = "" + +[runtime.lava-collabora] +runtime_token = "" + +[runtime.lava-collabora-early-access] +runtime_token = "" diff --git a/kube/aks/kernelci.toml b/kube/aks/kernelci.toml deleted file mode 100644 index 07204365f..000000000 --- a/kube/aks/kernelci.toml +++ /dev/null @@ -1,16 +0,0 @@ -[DEFAULT] -api_config = "early-access" -verbose = true -storage_config = "early-access-azure" - -[trigger] -poll_period = 3600 -startup_delay = 3 -build_configs = "mainline" - -[tarball] -kdir = "/home/kernelci/pipeline/data/src/linux" -output = "/home/kernelci/pipeline/data/output" - -[scheduler] -output = "/home/kernelci/pipeline/data/output" diff --git a/kube/aks/lava-callback.yaml b/kube/aks/lava-callback.yaml new file mode 100644 index 000000000..bf7614181 --- /dev/null +++ b/kube/aks/lava-callback.yaml @@ -0,0 +1,65 @@ +--- +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2023 Collabora Limited +# Author: Guillaume Tucker + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: lava-callback + namespace: kernelci-pipeline +spec: + replicas: 1 + selector: + matchLabels: + app: lava-callback + template: + metadata: + labels: + app: lava-callback + spec: + containers: + - name: lava-callback + image: kernelci/kernelci:lava-callback@sha256:0037ee6df605a49938f61e11e071a9d730d1702a042dec4c3baa36beaa9b3262 + imagePullPolicy: Always + command: + - 'uvicorn' + args: + - 'src.lava_callback:app' + - '--host=0.0.0.0' + - '--port=8000' + - '--proxy-headers' + - '--forwarded-allow-ips=*' + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + - name: KCI_SETTINGS + value: /secrets/kernelci.toml + 
volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap +--- +apiVersion: v1 +kind: Service +metadata: + name: lava-callback + namespace: kernelci-pipeline +spec: + ports: + - port: 80 + targetPort: 8000 + selector: + app: lava-callback diff --git a/kube/aks/monitor.yaml b/kube/aks/monitor.yaml index 83d520fcf..8b8806874 100644 --- a/kube/aks/monitor.yaml +++ b/kube/aks/monitor.yaml @@ -1,25 +1,48 @@ +--- # SPDX-License-Identifier: LGPL-2.1-or-later # # Copyright (C) 2023 Collabora Limited # Author: Guillaume Tucker -apiVersion: v1 -kind: Pod +apiVersion: apps/v1 +kind: Deployment metadata: name: monitor namespace: kernelci-pipeline spec: - containers: - - name: monitor - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/monitor.py - - --settings=/home/kernelci/pipeline/kube/aks/kernelci.toml - - run - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token + replicas: 1 + selector: + matchLabels: + app: monitor + template: + metadata: + labels: + app: monitor + spec: + containers: + - name: monitor + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/monitor.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - run + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap diff --git a/kube/aks/nodehandlers.yaml b/kube/aks/nodehandlers.yaml new file mode 100644 index 000000000..f8f228794 --- /dev/null +++ b/kube/aks/nodehandlers.yaml @@ -0,0 +1,137 @@ +--- +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2023 Collabora Limited +# Author: Guillaume Tucker + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: timeout + namespace: kernelci-pipeline +spec: + replicas: 1 + selector: + matchLabels: + app: timeout + template: + metadata: + labels: + app: timeout + spec: + containers: + - name: timeout + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/timeout.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - run + - --mode=timeout + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: closing + namespace: kernelci-pipeline +spec: + replicas: 1 + selector: + matchLabels: + app: closing + template: + metadata: + labels: + app: closing + spec: + containers: + - name: timeout + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/timeout.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - run + - --mode=closing + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: 
kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: holdoff + namespace: kernelci-pipeline +spec: + replicas: 1 + selector: + matchLabels: + app: holdoff + template: + metadata: + labels: + app: holdoff + spec: + containers: + - name: timeout + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/timeout.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - run + - --mode=holdoff + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap diff --git a/kube/aks/pipeline-kcidb.yaml b/kube/aks/pipeline-kcidb.yaml new file mode 100644 index 000000000..ffa00893a --- /dev/null +++ b/kube/aks/pipeline-kcidb.yaml @@ -0,0 +1,45 @@ +--- + +apiVersion: apps/v1 +kind: Deployment +metadata: + name: pipeline-kcidb + namespace: kernelci-pipeline +spec: + replicas: 1 + selector: + matchLabels: + app: pipeline-kcidb + template: + metadata: + labels: + app: pipeline-kcidb + spec: + containers: + - name: pipeline-kcidb + image: kernelci/kernelci:pipeline@sha256:0c4a5ca55a7de6788dda0ac869210f8adfc169f1a0509b4c8e44335ac71488e2 + imagePullPolicy: Always + command: + - ./src/send_kcidb.py + - --settings=/secrets/kernelci.toml + - run + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + - name: GOOGLE_APPLICATION_CREDENTIALS + value: /secrets/kcidb-credentials.json + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap diff --git a/kube/aks/scheduler-k8s.yaml b/kube/aks/scheduler-k8s.yaml index 54b0dd7da..8655cbc81 100644 --- a/kube/aks/scheduler-k8s.yaml +++ b/kube/aks/scheduler-k8s.yaml @@ -1,82 +1,83 @@ +--- # SPDX-License-Identifier: LGPL-2.1-or-later # # Copyright (C) 2023 Collabora Limited # Author: Guillaume Tucker -apiVersion: v1 -kind: Pod +apiVersion: apps/v1 +kind: Deployment metadata: name: scheduler-k8s namespace: kernelci-pipeline spec: - containers: - - name: scheduler - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/scheduler.py - - --settings=/home/kernelci/secrets/kernelci.toml - - loop - - --runtimes=k8s-gke-eu-west4 - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token - volumeMounts: - - name: secrets - mountPath: /home/kernelci/secrets - - name: secrets - mountPath: /home/kernelci/.kube - subPath: k8s-credentials/.kube - - name: secrets - mountPath: /home/kernelci/.config/gcloud - subPath: k8s-credentials/.config/gcloud - - name: secrets - mountPath: /home/kernelci/.azure - subPath: k8s-credentials/.azure - initContainers: - - name: settings - image: kernelci/pipeline - imagePullPolicy: Always - env: - - name: AZURE_FILES_TOKEN - valueFrom: - secretKeyRef: - name: azure-files-token - key: token - volumeMounts: - - name: secrets - mountPath: 
/tmp/secrets - command: - - /bin/bash - - -e - - -c - - "\ -cp /home/kernelci/pipeline/kube/aks/kernelci.toml /tmp/secrets/; \ -echo -e \"\ -\\n\ -[storage.early-access-azure]\\n\ -storage_cred = \\\"$AZURE_FILES_TOKEN\\\"\ -\" >> /tmp/secrets/kernelci.toml;" - - name: credentials - image: kernelci/pipeline - imagePullPolicy: Always - volumeMounts: - - name: secrets - mountPath: /tmp/secrets - - name: credentials - mountPath: /tmp/credentials - command: - - tar - - xzf - - /tmp/credentials/k8s-credentials.tar.gz - - -C - - /tmp/secrets - volumes: - - name: secrets - emptyDir: {} - - name: credentials - secret: - secretName: k8s-credentials + replicas: 1 + selector: + matchLabels: + app: scheduler-k8s + template: + metadata: + labels: + app: scheduler-k8s + spec: + containers: + - name: scheduler + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/scheduler.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - loop + - --runtimes + - k8s-gke-eu-west4 + - k8s-all + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + - name: tmpsecrets + mountPath: /home/kernelci/secrets + - name: tmpsecrets + mountPath: /home/kernelci/.kube + subPath: k8s-credentials/.kube + - name: tmpsecrets + mountPath: /home/kernelci/.config/gcloud + subPath: k8s-credentials/.config/gcloud + - name: tmpsecrets + mountPath: /home/kernelci/.azure + subPath: k8s-credentials/.azure + initContainers: + - name: credentials + image: denysfcollabora/pipeline + imagePullPolicy: Always + volumeMounts: + - name: tmpsecrets + mountPath: /tmp/secrets + - name: k8scredentials + mountPath: /tmp/k8s + readOnly: true + command: + - tar + - xzf + - /tmp/k8s/k8s-credentials.tgz + - -C + - /tmp/secrets + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap + - name: k8scredentials + secret: + secretName: k8scredentials + - name: tmpsecrets + emptyDir: {} diff --git a/kube/aks/scheduler-lava.yaml b/kube/aks/scheduler-lava.yaml index cacd8f958..2ec552cc4 100644 --- a/kube/aks/scheduler-lava.yaml +++ b/kube/aks/scheduler-lava.yaml @@ -1,59 +1,56 @@ +--- # SPDX-License-Identifier: LGPL-2.1-or-later # # Copyright (C) 2023 Collabora Limited # Author: Guillaume Tucker -apiVersion: v1 -kind: Pod +apiVersion: apps/v1 +kind: Deployment metadata: name: scheduler-lava namespace: kernelci-pipeline spec: - containers: - - name: scheduler - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/scheduler.py - - --settings=/home/kernelci/secrets/kernelci.toml - - loop - # Note: This sould be lava-collabora but the callback token name is - # different depending on the API instance (staging vs early-access). So - # for now we have 2 configs for the same runtime. 
- - --runtimes=lava-collabora-early-access - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token - volumeMounts: - - name: secrets - mountPath: /home/kernelci/secrets - initContainers: - - name: settings - image: kernelci/pipeline - imagePullPolicy: Always - env: - - name: LAVA_COLLABORA_TOKEN - valueFrom: - secretKeyRef: - name: lava-collabora-token - key: token - volumeMounts: - - name: secrets - mountPath: /tmp/secrets - command: - - /bin/bash - - -e - - -c - - "\ -cp /home/kernelci/pipeline/kube/aks/kernelci.toml /tmp/secrets/; \ -echo -e \"\ -\\n\ -[runtime.lava-collabora-early-access]\\n\ -runtime_token = \\\"$LAVA_COLLABORA_TOKEN\\\"\ -\" >> /tmp/secrets/kernelci.toml;" - volumes: - - name: secrets - emptyDir: {} + replicas: 1 + selector: + matchLabels: + app: scheduler-lava + template: + metadata: + labels: + app: scheduler-lava + spec: + containers: + - name: scheduler + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + command: + - ./src/scheduler.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - loop + - --runtimes + - lava-collabora + - lava-broonie + - lava-baylibre + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + - name: KCI_INSTANCE + value: prod + - name: KCI_INSTANCE_CALLBACK + value: https://kernelci-pipeline.westus3.cloudapp.azure.com + volumeMounts: + - name: secrets + mountPath: /secrets + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: config-volume + configMap: + name: pipeline-configmap diff --git a/kube/aks/tarball.yaml b/kube/aks/tarball.yaml index ab29c2f7f..7ba80e690 100644 --- a/kube/aks/tarball.yaml +++ b/kube/aks/tarball.yaml @@ -1,93 +1,59 @@ +--- # SPDX-License-Identifier: LGPL-2.1-or-later # # Copyright (C) 2023 Collabora Limited # Author: Guillaume Tucker -apiVersion: v1 -kind: Pod +apiVersion: apps/v1 +kind: Deployment metadata: name: tarball namespace: kernelci-pipeline spec: - containers: - - name: tarball - image: kernelci/pipeline - imagePullPolicy: Always - resources: - requests: - memory: 1Gi - cpu: 500m - limits: - memory: 4Gi - cpu: 2 - command: - - ./src/tarball.py - - --settings=/home/kernelci/secrets/kernelci.toml - - run - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token - volumeMounts: - - name: secrets - mountPath: /home/kernelci/secrets - - name: src - mountPath: /home/kernelci/pipeline/data/src - initContainers: - - name: secrets - image: kernelci/pipeline - imagePullPolicy: Always - env: - - name: AZURE_FILES_TOKEN - valueFrom: - secretKeyRef: - name: azure-files-token - key: token - volumeMounts: - - name: secrets - mountPath: /tmp/secrets - command: - - /bin/bash - - -e - - -c - - "\ -cp /home/kernelci/pipeline/kube/aks/kernelci.toml /tmp/secrets/; \ -echo -e \"\ -\\n\ -[storage.early-access-azure]\\n\ -storage_cred = \\\"$AZURE_FILES_TOKEN\\\"\ -\" >> /tmp/secrets/kernelci.toml;" - # Until we have a mirror on persistent storage, pre-populate a linux kernel - # checkout with some amount of git history to speed things up a bit - # https://github.com/kernelci/kernelci-pipeline/issues/310 - - name: git-clone - image: kernelci/pipeline - imagePullPolicy: Always - volumeMounts: - - name: src - mountPath: /tmp/src - command: - - git - - clone - - --depth=100 - - 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git - - /tmp/src/linux - - name: git-tags - image: kernelci/pipeline - imagePullPolicy: Always - volumeMounts: - - name: src - mountPath: /tmp/src - workingDir: /tmp/src/linux - command: - - git - - fetch - - --tags - - origin - volumes: - - name: src - emptyDir: {} - - name: secrets - emptyDir: {} + replicas: 1 + selector: + matchLabels: + app: tarball + template: + metadata: + labels: + app: tarball + spec: + containers: + - name: tarball + image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb + imagePullPolicy: Always + resources: + requests: + memory: 3Gi + cpu: 500m + limits: + memory: 4Gi + cpu: 2 + command: + - ./src/tarball.py + - --settings=/secrets/kernelci.toml + - --yaml-config=/config + - run + env: + - name: KCI_API_TOKEN + valueFrom: + secretKeyRef: + name: kernelci-api-token + key: token + volumeMounts: + - name: secrets + mountPath: /secrets + - name: src + mountPath: /home/kernelci/pipeline/data/src + - name: config-volume + mountPath: /config + volumes: + - name: secrets + secret: + secretName: pipeline-secrets + - name: src + emptyDir: {} + - name: config-volume + configMap: + name: pipeline-configmap diff --git a/kube/aks/timeout.yaml b/kube/aks/timeout.yaml deleted file mode 100644 index 14190ecd6..000000000 --- a/kube/aks/timeout.yaml +++ /dev/null @@ -1,70 +0,0 @@ -# SPDX-License-Identifier: LGPL-2.1-or-later -# -# Copyright (C) 2023 Collabora Limited -# Author: Guillaume Tucker - -apiVersion: v1 -kind: Pod -metadata: - name: timeout - namespace: kernelci-pipeline -spec: - containers: - - name: timeout - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/timeout.py - - --settings=/home/kernelci/pipeline/kube/aks/kernelci.toml - - run - - --mode=timeout - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token ---- -apiVersion: v1 -kind: Pod -metadata: - name: closing - namespace: kernelci-pipeline -spec: - containers: - - name: timeout - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/timeout.py - - --settings=/home/kernelci/pipeline/kube/aks/kernelci.toml - - run - - --mode=closing - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token ---- -apiVersion: v1 -kind: Pod -metadata: - name: holdoff - namespace: kernelci-pipeline -spec: - containers: - - name: timeout - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/timeout.py - - --settings=/home/kernelci/pipeline/kube/aks/kernelci.toml - - run - - --mode=holdoff - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token diff --git a/kube/aks/trigger.yaml b/kube/aks/trigger.yaml index e6ec6dbb6..037c8871a 100644 --- a/kube/aks/trigger.yaml +++ b/kube/aks/trigger.yaml @@ -1,25 +1,50 @@ +--- # SPDX-License-Identifier: LGPL-2.1-or-later # # Copyright (C) 2023 Collabora Limited # Author: Guillaume Tucker -apiVersion: v1 -kind: Pod +apiVersion: apps/v1 +kind: Deployment metadata: name: trigger namespace: kernelci-pipeline spec: - containers: - - name: trigger - image: kernelci/pipeline - imagePullPolicy: Always - command: - - ./src/trigger.py - - --settings=/home/kernelci/pipeline/kube/aks/kernelci.toml - - run - env: - - name: KCI_API_TOKEN - valueFrom: - secretKeyRef: - name: kernelci-api-token - key: token + replicas: 1 + selector: + matchLabels: + app: trigger + template: + metadata: + labels: + app: trigger + 
spec:
+      containers:
+      - name: trigger
+        image: kernelci/kernelci:pipeline@sha256:bb01424c4dedcd2ffa87cef225b09116cf874bc2b91fc63ed6d993d6fc5c43cb
+        imagePullPolicy: Always
+        command:
+        - ./src/trigger.py
+        - --settings=/secrets/kernelci.toml
+        - --yaml-config=/config
+        - run
+        - --trees=!kernelci
+        # - --force
+        env:
+        - name: KCI_API_TOKEN
+          valueFrom:
+            secretKeyRef:
+              name: kernelci-api-token
+              key: token
+        volumeMounts:
+        - name: secrets
+          mountPath: /secrets
+        - name: config-volume
+          mountPath: /config
+      volumes:
+      - name: secrets
+        secret:
+          secretName: pipeline-secrets
+      - name: config-volume
+        configMap:
+          name: pipeline-configmap
diff --git a/restart_services.sh b/restart_services.sh
new file mode 100755
index 000000000..12a253a15
--- /dev/null
+++ b/restart_services.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+FILE=".env"
+inotifywait -m -e close_write $FILE | while read EVENT;
+do
+  echo $EVENT
+  echo ".env file changes detected. Restarting pipeline services..."
+  docker-compose down
+  docker-compose build --no-cache && docker-compose up
+done
diff --git a/setup.cfg b/setup.cfg
new file mode 100644
index 000000000..62e73811c
--- /dev/null
+++ b/setup.cfg
@@ -0,0 +1,2 @@
+[pycodestyle]
+max-line-length = 100
diff --git a/src/base.py b/src/base.py
index 7ad1edf6a..02fe79049 100644
--- a/src/base.py
+++ b/src/base.py
@@ -20,7 +20,7 @@ class Service:
     def __init__(self, configs, args, name):
         self._name = name
         self._logger = Logger("config/logger.conf", name)
-        self._api_config = configs['api_configs'][args.api_config]
+        self._api_config = configs['api'][args.api_config]
         api_token = os.getenv('KCI_API_TOKEN')
         self._api = kernelci.api.get_api(self._api_config, api_token)
         self._api_helper = APIHelper(self._api)
diff --git a/src/fstests/runner.py b/src/fstests/runner.py
index f072e65a2..899e2871a 100644
--- a/src/fstests/runner.py
+++ b/src/fstests/runner.py
@@ -15,7 +15,7 @@
 import kernelci.config
 import kernelci.db
 import kernelci.lab
-from kernelci.cli import Args, Command, parse_opts
+from kernelci.legacy.cli import Args, Command, parse_opts
 
 TEMPLATES_PATHS = ['config/runtime',
                    '/etc/kernelci/runtime',
@@ -26,7 +26,7 @@ def __init__(self, configs, args):
         api_token = os.getenv('API_TOKEN')
         self._db_config = configs['db_configs'][args.db_config]
         self._db = kernelci.db.get_db(self._db_config, api_token)
-        self._device_configs = configs['device_types']
+        self._device_configs = configs['platforms']
         self._gce = args.gce
         self._gce_project = args.gce_project
         self._gce_zone = args.gce_zone
@@ -182,6 +182,6 @@ def __call__(self, configs, args):
 
 if __name__ == '__main__':
     opts = parse_opts('fstests_runner', globals())
-    configs = kernelci.config.load('config/pipeline.yaml')
+    configs = kernelci.config.load('config')
     status = opts.command(configs, opts)
     sys.exit(0 if status is True else 1)
diff --git a/src/lava_callback.py b/src/lava_callback.py
old mode 100644
new mode 100755
index 0aa3791e2..d87ca14ad
--- a/src/lava_callback.py
+++ b/src/lava_callback.py
@@ -6,25 +6,33 @@
 import os
 import tempfile
+import gzip
+import json
 import requests
-from flask import Flask, request
 import toml
+import threading
+import uvicorn
+from fastapi import FastAPI, HTTPException, Request
 
 import kernelci.api.helper
 import kernelci.config
 import kernelci.runtime.lava
 import kernelci.storage
 
+from concurrent.futures import ThreadPoolExecutor
+
 SETTINGS = toml.load(os.getenv('KCI_SETTINGS', 'config/kernelci.toml'))
 CONFIGS = kernelci.config.load(
-    SETTINGS.get('DEFAULT', {}).get('yaml_config', 'config/pipeline.yaml')
+
SETTINGS.get('DEFAULT', {}).get('yaml_config', 'config') ) +SETTINGS_PREFIX = 'runtime' -app = Flask(__name__) +app = FastAPI() +executor = ThreadPoolExecutor(max_workers=16) def _get_api_helper(api_config_name, api_token): - api_config = CONFIGS['api_configs'][api_config_name] + api_config = CONFIGS['api'][api_config_name] api = kernelci.api.get_api(api_config, api_token) return kernelci.api.helper.APIHelper(api) @@ -35,46 +43,153 @@ def _get_storage(storage_config_name): return kernelci.storage.get_storage(storage_config, storage_cred) -def _upload_log(log_parser, job_node, storage): - with tempfile.NamedTemporaryFile(mode='w') as log_txt: - log_parser.get_text_log(log_txt) - os.chmod(log_txt.name, 0o644) - log_dir = '-'.join((job_node['name'], job_node['id'])) - return storage.upload_single((log_txt.name, 'log.txt'), log_dir) - - -@app.errorhandler(requests.exceptions.HTTPError) -def handle_http_error(ex): - detail = ex.response.json().get('detail') or str(ex) - return detail, ex.response.status_code +def _upload_file(storage, job_node, source_name, destination_name=None): + if not destination_name: + destination_name = source_name + upload_dir = '-'.join((job_node['name'], job_node['id'])) + # remove GET parameters from destination_name + return storage.upload_single((source_name, destination_name), upload_dir) -@app.route('/') -def hello(): - return "KernelCI API & Pipeline LAVA callback handler" +def _upload_callback_data(data, job_node, storage): + filename = 'lava_callback.json.gz' + # Temporarily we dont remove log field + # data.pop('log', None) + # Ensure we don't leak secrets + data.pop('token', None) + # Create temporary file to store callback data as gzip'ed JSON + with tempfile.TemporaryDirectory() as tmp_dir: + # open gzip in explicit text mode to avoid platform-dependent line endings + with gzip.open(os.path.join(tmp_dir, filename), 'wt') as f: + serjson = json.dumps(data, indent=4) + f.write(serjson) + src = os.path.join(tmp_dir, filename) + return _upload_file(storage, job_node, src, filename) -@app.post('/node/') -def callback(node_id): - data = request.get_json() - job_callback = kernelci.runtime.lava.Callback(data) - - api_config_name = job_callback.get_meta('api_config_name') - api_token = request.headers.get('Authorization') - api_helper = _get_api_helper(api_config_name, api_token) +def _upload_log(log_parser, job_node, storage): + # create temporary file to store log with gzip + id = job_node['id'] + with tempfile.TemporaryDirectory(suffix=id) as tmp_dir: + # open gzip in explicit text mode to avoid platform-dependent line endings + with gzip.open(os.path.join(tmp_dir, 'lava_log.txt.gz'), 'wt') as f: + data = log_parser.get_text() + if not data or len(data) == 0: + return None + # Delete NULL characters from log data + data = data.replace('\x00', '') + # Sanitize log data from non-printable characters (except newline) + # replace them with '?', original still exists in cb data + data = ''.join([c if c.isprintable() or c == '\n' else + '?' for c in data]) + f.write(data) + src = os.path.join(tmp_dir, 'lava_log.txt.gz') + return _upload_file(storage, job_node, src, 'log.txt.gz') + + +@app.get('/') +async def read_root(): + page = ''' + + + KernelCI Pipeline Callback + + +

+    <h1>KernelCI Pipeline Callback</h1>
+    <p>This is a callback endpoint for the KernelCI pipeline.</p>
+ + + ''' + return page + + +def async_job_submit(api_helper, node_id, job_callback): + ''' + Heavy lifting is done in a separate thread to avoid blocking the callback + handler. This is not ideal as we don't have a way to report errors back to + the caller, but it's OK as LAVA don't care about the response. + ''' results = job_callback.get_results() - job_node = api_helper.api.get_node(node_id) - + job_node = api_helper.api.node.get(node_id) + if not job_node: + print(f'Node {node_id} not found') + return + # TODO: Verify lab_name matches job node lab name + # Also extract job_id and compare with node job_id (future) + # Or at least first record job_id in node metadata + + callback_data = job_callback.get_data() log_parser = job_callback.get_log_parser() + job_result = job_callback.get_job_status() + device_id = job_callback.get_device_id() storage_config_name = job_callback.get_meta('storage_config_name') storage = _get_storage(storage_config_name) log_txt_url = _upload_log(log_parser, job_node, storage) - job_node['artifacts']['log.txt'] = log_txt_url - + if log_txt_url: + job_node['artifacts']['lava_log'] = log_txt_url + print(f"Log uploaded to {log_txt_url}") + callback_json_url = _upload_callback_data(callback_data, job_node, storage) + if callback_json_url: + job_node['artifacts']['callback_data'] = callback_json_url + print(f"Callback data uploaded to {callback_json_url}") + # failed LAVA job should have result set to 'incomplete' + job_node['result'] = job_result + job_node['state'] = 'done' + if job_node.get('error_code') == 'node_timeout': + job_node['error_code'] = None + job_node['error_msg'] = None + if device_id: + job_node['data']['device'] = device_id hierarchy = job_callback.get_hierarchy(results, job_node) - return api_helper.submit_results(hierarchy, job_node) + api_helper.submit_results(hierarchy, job_node) + + +def submit_job(api_helper, node_id, job_callback): + ''' + Spawn a thread to do the job submission without blocking + the callback + ''' + executor.submit(async_job_submit, api_helper, node_id, job_callback) + + +# POST /node/ +@app.post('/node/{node_id}') +async def callback(node_id: str, request: Request): + tokens = SETTINGS.get(SETTINGS_PREFIX) + if not tokens: + return 'Unauthorized', 401 + lab_token = request.headers.get('Authorization') + # return 401 if no token + if not lab_token: + return 'Unauthorized', 401 + + # iterate over tokens and check if value of one matches + # we might have runtime_token and callback_token + lab_name = None + for lab, tokens in tokens.items(): + if tokens.get('runtime_token') == lab_token: + lab_name = lab + break + if tokens.get('callback_token') == lab_token: + lab_name = lab + break + if not lab_name: + return 'Unauthorized', 401 + + data = await request.json() + job_callback = kernelci.runtime.lava.Callback(data) + api_config_name = job_callback.get_meta('api_config_name') + api_token = os.getenv('KCI_API_TOKEN') + api_helper = _get_api_helper(api_config_name, api_token) + + submit_job(api_helper, node_id, job_callback) + + return 'OK', 202 # Default built-in development server, not suitable for production if __name__ == '__main__': - app.run(host='0.0.0.0', port=8000) + tokens = SETTINGS.get(SETTINGS_PREFIX) + if not tokens: + print('No tokens configured in toml file') + uvicorn.run(app, host='0.0.0.0', port=8000) diff --git a/src/monitor.py b/src/monitor.py index f5a0df60d..0f1796956 100755 --- a/src/monitor.py +++ b/src/monitor.py @@ -13,21 +13,20 @@ import kernelci import kernelci.config -from kernelci.cli import 
Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts from base import Service class Monitor(Service): - LOG_FMT = \ - "{time:26s} {commit:12s} {id:24} {state:9s} {result:8s} {name}" + LOG_FMT = ("{time:26s} {kind:15s} {commit:12s} {id:24s} " + "{state:9s} {result:8s} {name}") def __init__(self, configs, args): super().__init__(configs, args, 'monitor') self._log_titles = self.LOG_FMT.format( - time="Time", commit="Commit", id="Node Id", state="State", - result="Result", name="Name" - ) + time="Time", kind="Kind", commit="Commit", id="Node Id", + state="State", result="Result", name="Name") def _setup(self, args): return self._api.subscribe('node') @@ -61,12 +60,18 @@ def _run(self, sub_id): event = self._api.receive_event(sub_id) obj = event.data dt = datetime.datetime.fromisoformat(event['time']) + try: + commit = obj['data']['kernel_revision']['commit'][:12] + except (KeyError, TypeError): + commit = str(None) + result = result_map[obj['result']] if obj['result'] else str(None) print(self.LOG_FMT.format( time=dt.strftime('%Y-%m-%d %H:%M:%S.%f'), - commit=obj['revision']['commit'][:12], + kind=obj['kind'], + commit=commit, id=obj['id'], state=state_map[obj['state']], - result=result_map[obj['result']], + result=result, name=obj['name'] ), flush=True) @@ -83,6 +88,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('monitor', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/patchset.py b/src/patchset.py new file mode 100755 index 000000000..c37ce22a2 --- /dev/null +++ b/src/patchset.py @@ -0,0 +1,329 @@ +#!/usr/bin/env python3 +# +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (c) Meta Platforms, Inc. and affiliates. 
+# Author: Nikolay Yurin + +import os +import sys +import json +import requests +import time +import tempfile +import hashlib +from datetime import datetime, timedelta +from urllib.parse import urlparse +from urllib.request import urlopen + +import kernelci +import kernelci.build +import kernelci.config +from kernelci.legacy.cli import Args, Command, parse_opts +import kernelci.storage + +from base import Service +from tarball import Tarball + + +class Patchset(Tarball): + TAR_CREATE_CMD = """\ +set -e +cd {target_dir} +tar --create --transform "s/^/{prefix}\\//" * | gzip > {tarball_path} +""" + + APPLY_PATCH_SHELL_CMD = """\ +set -e +cd {checkout_path} +patch -p1 < {patch_file} +""" + + # FIXME: I really don"t have a good idea what I"m doing here + # This code probably needs rework and put into kernelci.patch + def _hash_patch(self, patch_name, patch_file): + allowed_prefixes = { + b"old mode", # Old file permissions + b"new mode", # New file permissions + b"-", # This convers both removed lines and source file + b"+", # This convers both added lines and target file + # "@" I don"t know how we should handle hunks yet + } + hashable_patch_lines = [] + for line in patch_file.readlines(): + if not line: + continue + + for prefix in allowed_prefixes: + if line.startswith(prefix): + hashable_patch_lines.append(line) + break + + hashable_content = b"/n".join(hashable_patch_lines) + self.log.debug( + "Hashable content:\n" + + hashable_content.decode("utf-8") + ) + patch_hash_digest = hashlib.sha256(hashable_content).hexdigest() + self.log.debug(f"Patch {patch_name} hash: {patch_hash_digest}") + return patch_hash_digest + + # FIXME: move into kernelci.patch + def _apply_patch(self, checkout_path, patch_name, patch_url): + self.log.info( + f"Applying patch {patch_name}, url: {patch_url}", + ) + try: + encoding = urlopen(patch_url).headers.get_charsets()[0] + except Exception as e: + self.log.warn( + "Failed to fetch encoding from patch " + f"{patch_name} headers: {e}" + ) + self.log.warn("Falling back to utf-8 encoding") + encoding = "utf-8" + + with tempfile.NamedTemporaryFile( + prefix="{}-{}-".format( + self._service_config.patchset_tmp_file_prefix, + patch_name + ), + encoding=encoding + ) as tmp_f: + if not kernelci.build._download_file(patch_url, tmp_f.name): + raise FileNotFoundError( + f"Error downloading patch from {patch_url}" + ) + + kernelci.shell_cmd(self.APPLY_PATCH_SHELL_CMD.format( + checkout_path=checkout_path, + patch_file=tmp_f.name, + )) + + return self._hash_patch(patch_name, tmp_f) + + # FIXME: move into kernelci.patch + def _apply_patches(self, checkout_path, patch_artifacts): + patchset_hash = hashlib.sha256() + for patch_name, patch_url in patch_artifacts.items(): + patch_hash = self._apply_patch(checkout_path, patch_name, patch_url) + patchset_hash.update(patch_hash.encode("utf-8")) + + patchset_hash_digest = patchset_hash.hexdigest() + self.log.debug(f"Patchset hash: {patchset_hash_digest}") + return patchset_hash_digest + + def _download_checkout_archive(self, download_path, tarball_url, retries=3): + self.log.info(f"Downloading checkout tarball, url: {tarball_url}") + tar_filename = os.path.basename(urlparse(tarball_url).path) + kernelci.build.pull_tarball( + kdir=download_path, + url=tarball_url, + dest_filename=tar_filename, + retries=retries, + delete=True + ) + + def _update_node( + self, + patchset_node, + checkout_node, + tarball_url, + patchset_hash + ): + patchset_data = checkout_node.get("data", {}).copy() + patchset_data["kernel_revision"]["patchset"] = 
patchset_hash + + updated_node = patchset_node.copy() + updated_node["artifacts"]["tarball"] = tarball_url + updated_node["state"] = "available" + updated_node["data"] = patchset_data + updated_node["holdoff"] = str( + datetime.utcnow() + timedelta(minutes=10) + ) + + try: + self._api.node.update(updated_node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + + def _setup(self, *args): + return self._api_helper.subscribe_filters({ + "op": "created", + "name": "patchset", + "state": "running", + }) + + def _has_allowed_domain(self, url): + domain = urlparse(url).hostname + if domain not in self._service_config.allowed_domains: + raise RuntimeError( + "Forbidden mbox domain %s, allowed domains: %s", + domain, + self._service_config.allowed_domains, + ) + + def _get_patch_artifacts(self, patchset_node): + node_artifacts = patchset_node.get("artifacts") + if not node_artifacts: + raise ValueError( + "Patchset node %s has no artifacts", + patchset_node["id"], + ) + + for patch_mbox_url in node_artifacts.values(): + self._has_allowed_domain(patch_mbox_url) + + return node_artifacts + + def _gen_checkout_name(self, checkout_node): + revision = checkout_node["data"]["kernel_revision"] + return "-".join([ + "linux", + revision["tree"], + revision["branch"], + revision["describe"], + ]) + + def _process_patchset(self, checkout_node, patchset_node): + patch_artifacts = self._get_patch_artifacts(patchset_node) + + # Tarball download implicitely removes destination dir + # there's no need to cleanup this directory + self._download_checkout_archive( + download_path=self._service_config.kdir, + tarball_url=checkout_node["artifacts"]["tarball"] + ) + + checkout_name = self._gen_checkout_name(checkout_node) + checkout_path = os.path.join(self._service_config.kdir, checkout_name) + + patchset_hash = self._apply_patches(checkout_path, patch_artifacts) + patchset_hash_short = patchset_hash[ + :self._service_config.patchset_short_hash_len + ] + + tarball_path = self._make_tarball( + target_dir=checkout_path, + tarball_name=f"{checkout_name}-{patchset_hash_short}" + ) + tarball_url = self._push_tarball(tarball_path) + + self._update_node( + patchset_node=patchset_node, + checkout_node=checkout_node, + tarball_url=tarball_url, + patchset_hash=patchset_hash + ) + + def _mark_failed(self, patchset_node): + node = patchset_node.copy() + node.update({ + "state": "done", + "result": "fail", + }) + try: + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + + def _mark_failed_if_no_parent(self, patchset_node): + if not patchset_node["parent"]: + self.log.error( + f"Patchset node {patchset_node['id']} as has no parent" + "checkout node , marking node as failed", + ) + self._mark_failed(patchset_node) + return True + + return False + + def _mark_failed_if_parent_failed(self, patchset_node, checkout_node): + if ( + checkout_node["state"] == "done" and + checkout_node["result"] == "fail" + ): + self.log.error( + f"Parent checkout node {checkout_node['id']} failed, " + f"marking patchset node {patchset_node['id']} as failed", + ) + self._mark_failed(patchset_node) + return True + + return False + + def _run(self, _sub_id): + self.log.info("Listening for new trigger events") + self.log.info("Press Ctrl-C to stop.") + + while True: + patchset_nodes = self._api.node.find({ + "name": "patchset", + "state": "running", + }) + + if 
patchset_nodes: + self.log.debug(f"Found patchset nodes: {patchset_nodes}") + + for patchset_node in patchset_nodes: + if self._mark_failed_if_no_parent(patchset_node): + continue + + checkout_node = self._api.node.get(patchset_node["parent"]) + + if self._mark_failed_if_parent_failed( + patchset_node, + checkout_node + ): + continue + + if checkout_node["state"] == "running": + self.log.info( + f"Patchset node {patchset_node['id']} is waiting " + f"for checkout node {checkout_node['id']} to complete", + ) + continue + + try: + self.log.info( + f"Processing patchset node: {patchset_node['id']}", + ) + self._process_patchset(checkout_node, patchset_node) + except Exception as e: + self.log.error( + f"Patchset node {patchset_node['id']} " + f"processing failed: {e}", + ) + self.log.traceback() + self._mark_failed(patchset_node) + + self.log.info( + "Waiting %d seconds for a new nodes..." % + self._service_config.polling_delay_secs, + ) + time.sleep(self._service_config.polling_delay_secs) + + +class cmd_run(Command): + help = ( + "Wait for a checkout node to be available " + "and push a source+patchset tarball" + ) + args = [ + Args.kdir, Args.output, Args.api_config, Args.storage_config, + ] + opt_args = [ + Args.verbose, Args.storage_cred, + ] + + def __call__(self, configs, args): + return Patchset(configs, args).run(args) + + +if __name__ == "__main__": + opts = parse_opts("patchset", globals()) + configs = kernelci.config.load("config") + status = opts.command(configs, opts) + sys.exit(0 if status is True else 1) diff --git a/src/regression_tracker.py b/src/regression_tracker.py index b62b2a64c..376439e61 100755 --- a/src/regression_tracker.py +++ b/src/regression_tracker.py @@ -7,10 +7,12 @@ import sys +import json + import kernelci import kernelci.config import kernelci.db -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts from base import Service @@ -19,10 +21,6 @@ class RegressionTracker(Service): def __init__(self, configs, args): super().__init__(configs, args, 'regression_tracker') - self._regression_fields = [ - 'artifacts', 'group', 'name', 'path', 'revision', 'result', - 'state', - ] def _setup(self, args): return self._api_helper.subscribe_filters({ @@ -33,66 +31,218 @@ def _stop(self, sub_id): if sub_id: self._api_helper.unsubscribe_filters(sub_id) - def _create_regression(self, failed_node, last_successful_node): + def _collect_logs(self, node): + """Returns a dict containing the log artifacts of . Log + artifacts are those named 'log' or whose name contains the + '_log' suffix. If doesn't have any artifacts, the + search will continue upwards through parent nodes until reaching + a node that has them. + """ + logs = {} + if node.get('artifacts'): + for artifact, value in node['artifacts'].items(): + if artifact == 'log' or '_log' in artifact: + logs[artifact] = value + elif node.get('parent'): + parent = self._api.node.get(node['parent']) + if parent: + logs = self._collect_logs(parent) + return logs + + def _collect_errors(self, node): + """Returns a dict containing the 'error_code' and 'error_msg' + data fields of . If doesn't have any info in them, + it searches upwards through parent nodes until it reaches a node + that has them. 
+ """ + if node['data'].get('error_code'): + return { + 'error_code': node['data']['error_code'], + 'error_msg': node['data']['error_msg'] + } + elif node.get('parent'): + parent = self._api.node.get(node['parent']) + return self._collect_errors(parent) + return { + 'error_code': None, + 'error_msg': None + } + + def _create_regression(self, failed_node, last_pass_node): """Method to create a regression""" regression = {} - for field in self._regression_fields: - regression[field] = failed_node[field] - regression['parent'] = failed_node['id'] - regression['regression_data'] = [last_successful_node, failed_node] - self._api_helper.submit_regression(regression) - - def _detect_regression(self, node): - """Method to check and detect regression""" - previous_nodes = self._api.get_nodes({ + regression['kind'] = 'regression' + # Make regression "active" by default. + # TODO: 'result' is currently optional in the model, so we set + # it here. Remove this line if the field is set as mandatory in + # the future. + regression['result'] = 'fail' + regression['name'] = failed_node['name'] + regression['path'] = failed_node['path'] + regression['group'] = failed_node['group'] + regression['state'] = 'done' + error = self._collect_errors(failed_node) + regression['data'] = { + 'fail_node': failed_node['id'], + 'pass_node': last_pass_node['id'], + 'arch': failed_node['data'].get('arch'), + 'defconfig': failed_node['data'].get('defconfig'), + 'config_full': failed_node['data'].get('config_full'), + 'compiler': failed_node['data'].get('compiler'), + 'platform': failed_node['data'].get('platform'), + 'device': failed_node['data'].get('device'), + 'failed_kernel_version': failed_node['data'].get('kernel_revision'), # noqa + 'error_code': error['error_code'], + 'error_msg': error['error_msg'], + 'node_sequence': [], + } + regression['artifacts'] = self._collect_logs(failed_node) + return regression + + def _get_last_matching_node(self, search_params): + """Returns the last (by creation date) occurrence of a node + matching a set of search parameters, or None if no nodes were + found. + + TODO: Move this to core helpers. + + """ + # Workaround: Don't use 'path' as a search parameter (we can't + # use lists as query parameter values). Instead, do the + # filtering in python code + path = search_params.pop('path') + nodes = self._api.node.find(search_params) + nodes = [node for node in nodes if node['path'] == path] + if not nodes: + return None + node = sorted( + nodes, + key=lambda node: node['created'], + reverse=True + )[0] + return node + + def _get_related_regression(self, node): + """Returns the last active regression that points to the same job + run instance. Returns None if no active regression was found. 
+ + """ + search_params = { + 'kind': 'regression', + 'result': 'fail', 'name': node['name'], 'group': node['group'], 'path': node['path'], - 'revision.tree': node['revision']['tree'], - 'revision.branch': node['revision']['branch'], - 'revision.url': node['revision']['url'], + 'data.failed_kernel_version.tree': node['data']['kernel_revision']['tree'], + 'data.failed_kernel_version.branch': node['data']['kernel_revision']['branch'], + 'data.failed_kernel_version.url': node['data']['kernel_revision']['url'], 'created__lt': node['created'], - }) + # Parameters that may be null in some test nodes + 'data.arch': node['data'].get('arch', 'null'), + 'data.defconfig': node['data'].get('defconfig', 'null'), + 'data.config_full': node['data'].get('config_full', 'null'), + 'data.compiler': node['data'].get('compiler', 'null'), + 'data.platform': node['data'].get('platform', 'null') + } + return self._get_last_matching_node(search_params) - if previous_nodes: - previous_nodes = sorted( - previous_nodes, - key=lambda node: node['created'], - reverse=True - ) - - if previous_nodes[0]['result'] == 'pass': - self.log.info(f"Detected regression for node id: \ -{node['id']}") - self._create_regression(node, previous_nodes[0]) - - def _get_all_failed_child_nodes(self, failures, root_node): - """Method to get all failed nodes recursively from top-level node""" - child_nodes = self._api.get_nodes({'parent': root_node['id']}) - for node in child_nodes: - if node['result'] == 'fail': - failures.append(node) - self._get_all_failed_child_nodes(failures, node) + def _get_previous_job_instance(self, node): + """Returns the previous job run instance of , or None if + no one was found. + + """ + search_params = { + 'kind': node['kind'], + 'name': node['name'], + 'group': node['group'], + 'path': node['path'], + 'data.kernel_revision.tree': node['data']['kernel_revision']['tree'], + 'data.kernel_revision.branch': node['data']['kernel_revision']['branch'], + 'data.kernel_revision.url': node['data']['kernel_revision']['url'], + 'created__lt': node['created'], + 'state': 'done', + # Parameters that may be null in some test nodes + 'data.arch': node['data'].get('arch', 'null'), + 'data.defconfig': node['data'].get('defconfig', 'null'), + 'data.config_full': node['data'].get('config_full', 'null'), + 'data.compiler': node['data'].get('compiler', 'null'), + 'data.platform': node['data'].get('platform', 'null'), + } + return self._get_last_matching_node(search_params) + + def _process_node(self, node): + if node['result'] == 'pass': + # Find existing active regression + regression = self._get_related_regression(node) + if regression: + # Set regression as inactive + regression['data']['node_sequence'].append(node['id']) + regression['result'] = 'pass' + self._api.node.update(regression) + elif node['result'] == 'fail': + previous = self._get_previous_job_instance(node) + if not previous: + # Not a regression, since there's no previous test run + pass + elif previous['result'] == 'pass': + self.log.info(f"Detected regression for node id: {node['id']}") + # Skip the regression generation if it was already in the + # DB. This may happen if a job was detected to generate a + # regression when it failed and then the same job was + # checked again after its parent job finished running and + # was updated. 
+ existing_regression = self._api.node.find({ + 'kind': 'regression', + 'result': 'fail', + 'data.fail_node': node['id'], + 'data.pass_node': previous['id'] + }) + if not existing_regression: + regression = self._create_regression(node, previous) + resp = self._api_helper.submit_regression(regression) + reg = json.loads(resp.text) + self.log.info(f"Regression submitted: {reg['id']}") + # Update node + node['data']['regression'] = reg['id'] + self._api.node.update(node) + else: + self.log.info(f"Skipping regression: already exists") + elif previous['result'] == 'fail': + # Find existing active regression + regression = self._get_related_regression(node) + if regression: + if node['id'] in regression['data']['node_sequence']: + # The node is already in an active + # regression. This may happen if the job was + # processed right after it finished and then + # again after its parent job finished and was + # updated + return + # Update active regression + regression['data']['node_sequence'].append(node['id']) + self._api.node.update(regression) + # Update node + node['data']['regression'] = regression['id'] + self._api.node.update(node) + # Process children recursively: + # When a node hierarchy is submitted on a single operation, + # an event is generated only for the root node. Walk the + # children node tree to check for event-less finished jobs + child_nodes = self._api.node.find({'parent': node['id']}) + if child_nodes: + for child_node in child_nodes: + self._process_node(child_node) def _run(self, sub_id): - """Method to run regression tracking""" + """Method to run regression detection and generation""" self.log.info("Tracking regressions... ") self.log.info("Press Ctrl-C to stop.") sys.stdout.flush() - while True: - node = self._api_helper.receive_event_node(sub_id) - - if node['name'] == 'checkout': + node, _ = self._api_helper.receive_event_node(sub_id) + if node['kind'] == 'checkout' or node['kind'] == 'regression': continue - - failures = [] - self._get_all_failed_child_nodes(failures, node) - - for node in failures: - self._detect_regression(node) - - sys.stdout.flush() + self._process_node(node) return True @@ -106,6 +256,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('regression_tracker', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/result_summary.py b/src/result_summary.py new file mode 100755 index 000000000..8e87a2c49 --- /dev/null +++ b/src/result_summary.py @@ -0,0 +1,224 @@ +#!/usr/bin/env python3 +# +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2024 Collabora Limited +# Author: Ricardo Cañuelo Navarro + +# KernelCI client code to retrieve and summarize job (test, regressions) +# results +# +# How to use this (for now): +# +# docker-compose run result_summary --preset= +# +# where is defined as a query preset definition in +# config/result-summary.yaml. + +# You can specify a date range for the searh using the --date-from +# (default: yesterday) and --date-to (default: now) options, formatted +# as YYYY-MM-DD or YYYY-MM-DDTHH:mm:SS (UTC) +# +# Each preset may define the name and the directory of the output file +# generated (in data/output). This can be overriden with the +# --output-dir and --output-file options. If no output file is defined, +# the output will be printed to stdout. 
+# +# For current status info, see the development changelog in +# doc/result-summary-CHANGELOG + +# TODO: +# - Refactor liberally +# - Send email reports +# - Do we want to focus on regressions only or on any kind of result? +# If we want test results as well: +# - Provide logs for test leaf nodes +# - Tweak output and templates according to user needs +# - Other suggested improvements + +import sys +import logging + +import jinja2 +import yaml + +import kernelci +from kernelci.legacy.cli import Args, Command, parse_opts +from base import Service +from kernelci_pipeline.email_sender import EmailSender +import result_summary +import result_summary.summary as summary +import result_summary.monitor as monitor +import result_summary.utils as utils + + +class ResultSummary(Service): + def __init__(self, configs, args): + super().__init__(configs, args, result_summary.SERVICE_NAME) + if args.verbose: + self.log._logger.setLevel(logging.DEBUG) + self._template_env = jinja2.Environment( + loader=jinja2.FileSystemLoader(result_summary.TEMPLATES_DIR) + ) + result_summary.logger = self._logger + + def _setup(self, args): + # Load and sanity check command line parameters + # config: the complete config file contents + # preset_name: name of the selected preset to use + # preset: loaded config for the selected preset + with open(args.config, 'r') as config_file: + config = yaml.safe_load(config_file) + if args.preset: + preset_name = args.preset + else: + preset_name = 'default' + if preset_name not in config: + self.log.error(f"No {preset_name} preset found in {args.config}") + sys.exit(1) + preset = config[preset_name] + # Additional query parameters + extra_query_params = {} + if args.query_params: + extra_query_params = utils.split_query_params(args.query_params) + output_dir = None + if args.output_dir: + output_dir = args.output_dir + output_file = None + if args.output_file: + output_file = args.output_file + self._email_sender = EmailSender( + args.smtp_host, args.smtp_port, + email_sender=args.email_sender, + email_recipient=args.email_recipient, + ) if args.smtp_host and args.smtp_port else None + # End of command line argument loading and sanity checks + + # Load presets and template + metadata = {} + preset_params = [] + if 'metadata' in preset: + metadata = preset['metadata'] + for block_name, body in preset['preset'].items(): + preset_params.extend(utils.parse_block_config(body, block_name, 'done')) + if 'template' not in metadata: + self.log.error(f"No template defined for preset {preset_name}") + sys.exit(1) + template = self._template_env.get_template(metadata['template']) + + context = { + 'metadata': metadata, + 'preset_params': preset_params, + 'extra_query_params': extra_query_params, + 'template': template, + 'output_file': output_file, + 'output_dir': output_dir, + } + # Action-specific setup + if metadata.get('action') == 'summary': + extra_context = summary.setup(self, args, context) + elif metadata.get('action') == 'monitor': + extra_context = monitor.setup(self, args, context) + else: + raise Exception("Undefined or unsupported preset action: " + f"{metadata.get('action')}") + return {**context, **extra_context} + + def _stop(self, context): + if not context or 'metadata' not in context: + return + if context['metadata']['action'] == 'summary': + summary.stop(self, context) + elif context['metadata']['action'] == 'monitor': + monitor.stop(self, context) + else: + raise Exception("Undefined or unsupported preset action: " + f"{metadata.get('action')}") + + def _run(self, 
context): + if context['metadata']['action'] == 'summary': + summary.run(self, context) + elif context['metadata']['action'] == 'monitor': + monitor.run(self, context) + else: + raise Exception("Undefined or unsupported preset action: " + f"{metadata.get('action')}") + + +class cmd_run(Command): + help = ("Checks for test results in a specific date range " + "and generates summary reports (single shot)") + args = [ + { + 'name': '--config', + 'help': "Path to service-specific config yaml file", + }, + ] + opt_args = [ + { + 'name': '--preset', + 'help': "Configuration preset to load ('default' if none)", + }, + { + 'name': '--created-from', + 'help': ("Collect results created since this date and time" + "(YYYY-mm-DDTHH:MM:SS). Default: since last 24 hours"), + }, + { + 'name': '--created-to', + 'help': ("Collect results created up to this date and time " + "(YYYY-mm-DDTHH:MM:SS). Default: until now"), + }, + { + 'name': '--last-updated-from', + 'help': ("Collect results that were last updated since this date and time" + "(YYYY-mm-DDTHH:MM:SS). Default: since last 24 hours"), + }, + { + 'name': '--last-updated-to', + 'help': ("Collect results that were last updated up to this date and time " + "(YYYY-mm-DDTHH:MM:SS). Default: until now"), + }, + { + 'name': '--output-dir', + 'help': "Override the 'output_dir' preset parameter" + }, + { + 'name': '--output-file', + 'help': "Override the 'output' preset parameter" + }, + { + 'name': '--query-params', + 'help': ("Additional query parameters: " + "'=,='") + }, + { + 'name': '--smtp-host', + 'help': "SMTP server host name. If omitted, emails won't be sent", + }, + { + 'name': '--smtp-port', + 'help': "SMTP server port number", + 'type': int, + }, + { + 'name': '--email-sender', + 'help': "Email address of test report sender", + }, + { + 'name': '--email-recipient', + 'help': "Email address of test report recipient", + }, + Args.verbose, + ] + + def __call__(self, configs, args): + return ResultSummary(configs, args).run(args) + + +if __name__ == '__main__': + opts = parse_opts(result_summary.SERVICE_NAME, globals()) + yaml_configs = opts.get_yaml_configs() or 'config/pipeline.yaml' + configs = kernelci.config.load(yaml_configs) + status = opts.command(configs, opts) + sys.exit(0 if status is True else 1) diff --git a/src/result_summary/__init__.py b/src/result_summary/__init__.py new file mode 100644 index 000000000..f1b11264d --- /dev/null +++ b/src/result_summary/__init__.py @@ -0,0 +1,4 @@ +SERVICE_NAME = 'result_summary' +TEMPLATES_DIR = './config/result_summary_templates/' +BASE_OUTPUT_DIR = '/home/kernelci/data/output/' +logger = None diff --git a/src/result_summary/monitor.py b/src/result_summary/monitor.py new file mode 100644 index 000000000..8569614a3 --- /dev/null +++ b/src/result_summary/monitor.py @@ -0,0 +1,148 @@ +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2024 Collabora Limited +# Author: Ricardo Cañuelo Navarro +# +# monitor-mode-specifc code for result-summary. 
+ +from datetime import datetime, timezone +import os +import re + +import result_summary +import result_summary.utils as utils + + +def setup(service, args, context): + base_filter = context['preset_params'][0] + sub_id = service._api_helper.subscribe_filters({ + 'kind': base_filter['kind'], + 'state': base_filter['state'], + }) + if not sub_id: + raise Exception("Error subscribing to event") + return {'sub_id': sub_id} + + +def stop(service, context): + if context and context.get('sub_id'): + service._api_helper.unsubscribe_filters(context['sub_id']) + + +def get_item(data, item, default=None): + """General form of dict.get() that supports the retrieval of + dot-separated fields in nested dicts. + """ + if not data: + return default + items = item.split('.') + if len(items) == 1: + return data.get(items[0], default) + return get_item(data.get(items[0], default), '.'.join(items[1:]), default) + + +def filter_node(node, params): + """Checks a node against the constraints defined in the params + dict, where each parameter is defined like: + + node_field : value + + with an optional operator suffix (ne, gt, lt, re): + + node_field__op : value + + The value matching is done differently depending on the + operator (equal, not equal, greater than, lesser than, + regex). + + Returns a (match, message) tuple: (True, "Ok") if the node + matches all the parameter constraints, or (False, reason) + describing the first constraint that wasn't met. + """ + match = True + for param_name, value in params.items(): + if value == 'null': + value = None + field, _, cmd = param_name.partition('__') + node_value = get_item(node, field) + if cmd == 'ne': + if node_value == value: + match = False + break + elif cmd == 'gt': + if node_value <= value: + match = False + break + elif cmd == 'lt': + if node_value >= value: + match = False + break + elif cmd == 're' and node_value: + if not re.search(value, node_value): + match = False + break + else: + if node_value != value: + match = False + break + if not match: + return False, f"<{field} = {node_value}> doesn't match constraint '{param_name}: {value}'" + return True, "Ok" + + +def send_email_report(service, context, report_text): + if not service._email_sender: + return + if 'title' in context['metadata']: + title = context['metadata']['title'] + else: + title = "KernelCI report" + service._email_sender.create_and_send_email(title, report_text) + + +def run(service, context): + while True: + node, _ = service._api_helper.receive_event_node(context['sub_id']) + service.log.debug(f"Node event received: {node['id']}") + for param_set in context['preset_params']: + service.log.debug(f"Match check. param_set: {param_set}") + match, msg = filter_node(node, {**param_set, **context['extra_query_params']}) + if match: + service.log.info(f"Result received: {node['id']}") + template_params = { + 'metadata': context['metadata'], + 'node': node, + } + # Post-process node + utils.post_process_node(node, service._api) + + output_text = context['template'].render(template_params) + # Setup output dir from base path and user-specified + # parameter (in preset metadata or cmdline) + output_dir = result_summary.BASE_OUTPUT_DIR + if context.get('output_dir'): + output_dir = os.path.join(output_dir, context['output_dir']) + elif 'output_dir' in context['metadata']: + output_dir = os.path.join(output_dir, context['metadata']['output_dir']) + os.makedirs(output_dir, exist_ok=True) + # Generate and dump output + # output_file specified in cmdline: + output_file = context['output_file'] + if not output_file: + # Check if output_file is specified as a preset + # parameter. Since we expect many reports to be + # generated, prepend them with a timestamp + if 'output_file' in context['metadata']: + now_str = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S") + output_file = now_str + '__' + context['metadata']['output_file'] + if output_file: + output_file = os.path.join(output_dir, output_file) + with open(output_file, 'w') as outfile: + outfile.write(output_text) + service.log.info(f"Report generated in {output_file}\n") + else: + result_summary.logger.info(output_text) + send_email_report(service, context, output_text) + else: + service.log.debug(f"Result received but filtered: {node['id']}. {msg}\n") + return True diff --git a/src/result_summary/summary.py b/src/result_summary/summary.py new file mode 100644 index 000000000..66c02f174 --- /dev/null +++ b/src/result_summary/summary.py @@ -0,0 +1,155 @@ +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2024 Collabora Limited +# Author: Ricardo Cañuelo Navarro +# +# summary-mode-specific code for result-summary.
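+#
+# Editorial sketch (illustrative values, not from the original patch):
+# the date options handled in setup() below map command-line arguments
+# to API query operators, so
+#
+#   --created-from=2024-07-01T00:00:00 --created-to=2024-07-02T00:00:00
+#
+# becomes the query parameters
+#
+#   {'created__gt': '2024-07-01T00:00:00',
+#    'created__lt': '2024-07-02T00:00:00'}
+#
+# which are then merged into every preset parameter set in run().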
+ +import concurrent.futures +from datetime import datetime, timedelta, timezone +import os + +import result_summary +import result_summary.utils as utils + +_date_params = { + 'created_from': 'created__gt', + 'created_to': 'created__lt', + 'last_updated_from': 'updated__gt', + 'last_updated_to': 'updated__lt' +} + + +def setup(service, args, context): + # Additional date parameters + date_params = {} + if args.created_from: + date_params[_date_params['created_from']] = args.created_from + if args.created_to: + date_params[_date_params['created_to']] = args.created_to + if args.last_updated_from: + date_params[_date_params['last_updated_from']] = args.last_updated_from + if args.last_updated_to: + date_params[_date_params['last_updated_to']] = args.last_updated_to + # Default if no dates are specified: created since yesterday + yesterday = (datetime.now(timezone.utc) - timedelta(days=1)) + now_str = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S") + if not any([args.created_from, + args.created_to, + args.last_updated_from, + args.last_updated_to]): + date_params[_date_params['created_from']] = yesterday.strftime("%Y-%m-%dT%H:%M:%S") + if not args.created_to and not args.last_updated_to: + if args.last_updated_from: + date_params[_date_params['last_updated_to']] = now_str + else: + date_params[_date_params['created_to']] = now_str + return {'date_params': date_params} + + +def stop(service, context): + pass + + +def run(service, context): + # Run queries and collect results + nodes = [] + context['metadata']['queries'] = [] + for params_set in context['preset_params']: + # Apply date range parameters, if defined + params_set.update(context['date_params']) + # Apply extra query parameters from command line, if any + params_set.update(context['extra_query_params']) + result_summary.logger.debug(f"Query: {params_set}") + context['metadata']['queries'].append(params_set) + query_results = utils.iterate_node_find(service, params_set) + result_summary.logger.debug(f"Query matches found: {len(query_results)}") + nodes.extend(query_results) + result_summary.logger.info(f"Total nodes found: {len(nodes)}") + + # Post-process nodes + # Filter log files + # - remove empty files + # - collect log files in a 'logs' field + result_summary.logger.info(f"Post-processing nodes ...") + progress_total = len(nodes) + progress = 0 + with concurrent.futures.ThreadPoolExecutor() as executor: + futures = {executor.submit(utils.post_process_node, node, service._api) for node in nodes} + for future in concurrent.futures.as_completed(futures): + result = future.result() + progress += 1 + if progress >= progress_total / 10: + print('.', end='', flush=True) + progress = 0 + print('', flush=True) + + # Group results by tree/branch + results_per_branch = {} + for node in nodes: + if node['data'].get('failed_kernel_version'): + tree = node['data']['failed_kernel_version']['tree'] + branch = node['data']['failed_kernel_version']['branch'] + else: + tree = node['data']['kernel_revision']['tree'] + branch = node['data']['kernel_revision']['branch'] + if tree not in results_per_branch: + results_per_branch[tree] = {branch: [node]} + else: + if branch not in results_per_branch[tree]: + results_per_branch[tree][branch] = [node] + else: + results_per_branch[tree][branch].append(node) + + # Data provided to the templates: + # - metadata: preset-specific metadata + # - query date specifications and ranges: + # created_to, created_from, last_updated_to, last_updated_from + # - results_per_branch: a dict containing the 
result nodes + # grouped by tree and branch like this: + # + # results_per_branch = { + # <tree_1>: { + # <branch_1>: [ + # node_1, + # ... + # node_n + # ], + # ..., + # <branch_n>: ... + # }, + # ..., + # <tree_n>: ... + # } + template_params = { + 'metadata': context['metadata'], + 'results_per_branch': results_per_branch, + # Optional parameters + 'created_from': context['date_params'].get(_date_params['created_from']), + 'created_to': context['date_params'].get(_date_params['created_to']), + 'last_updated_from': context['date_params'].get(_date_params['last_updated_from']), + 'last_updated_to': context['date_params'].get(_date_params['last_updated_to']), + } + output_text = context['template'].render(template_params) + # Setup output dir from base path and user-specified + # parameter (in preset metadata or cmdline) + output_dir = result_summary.BASE_OUTPUT_DIR + if context.get('output_dir'): + output_dir = os.path.join(output_dir, context['output_dir']) + elif 'output_dir' in context['metadata']: + output_dir = os.path.join(output_dir, context['metadata']['output_dir']) + os.makedirs(output_dir, exist_ok=True) + # Generate and dump output + # output_file specified in cmdline: + output_file = context['output_file'] + if not output_file: + # Check if output_file is specified as a preset parameter + if 'output_file' in context['metadata']: + output_file = context['metadata']['output_file'] + if output_file: + output_file = os.path.join(output_dir, output_file) + with open(output_file, 'w') as outfile: + outfile.write(output_text) + else: + result_summary.logger.info(output_text) + return True diff --git a/src/result_summary/utils.py b/src/result_summary/utils.py new file mode 100644 index 000000000..6454e0b70 --- /dev/null +++ b/src/result_summary/utils.py @@ -0,0 +1,259 @@ +# SPDX-License-Identifier: LGPL-2.1-or-later +# +# Copyright (C) 2024 Collabora Limited +# Author: Ricardo Cañuelo Navarro +# +# Common utils for result-summary. + +import gzip +import re +import requests +import yaml +from typing import Any, Dict + +import result_summary + + +CONFIG_TRACES_FILE_PATH = './config/traces_config.yaml' + + +def split_query_params(query_string): + """Given a string input formatted like this: + parameter1=value1,parameter2=value2,...,parameterN=valueN + return a dict containing the string information where the + parameters are the dict keys: + {'parameter1': 'value1', + 'parameter2': 'value2', + ..., + 'parameterN': 'valueN' + } + """ + query_dict = {} + matches = re.findall('([^ =,]+)\s*=\s*([^ =,]+)', query_string) # noqa: W605 + for parameter, value in matches: + query_dict[parameter] = value + return query_dict + + +def parse_block_config(block, kind, state): + """Parse a config block. Every block may define a set of + parameters, including a list of 'repos' (trees/branches). For + every 'repos' item, this function will generate a query parameter + set. All the query parameter sets will be based on the same base + params. + + If the block doesn't define any repos, there'll be only one + query parameter set created. + + If the block definition is empty, that is, if there aren't any + specific query parameters, just return the base query parameter + list. + + Returns a list of query parameter sets.
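+
+    Illustrative example (the block shape is an assumption based on
+    the code below): with kind='test' and state='done', a block like
+
+        - group: baseline
+          repos:
+            - tree: mainline
+              branch: master
+
+    would produce:
+
+        [{'kind': 'test', 'state': 'done', 'group': 'baseline',
+          'data.kernel_revision.tree': 'mainline',
+          'data.kernel_revision.branch': 'master'}]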
+ + """ + # Base query parameters include the node kind and state and the + # date ranges if defined + base_params = { + 'kind': kind, + 'state': state, + } + kernel_revision_field = 'data.kernel_revision' + if kind == 'regression': + kernel_revision_field = 'data.failed_kernel_version' + if not block: + return [{**base_params}] + query_params = [] + for item in block: + item_base_params = base_params.copy() + repos = [] + if 'repos' in item: + for repo in item.pop('repos'): + new_repo = {} + for key, value in repo.items(): + new_repo[f'{kernel_revision_field}.{key}'] = value + repos.append(new_repo) + for key, value in item.items(): + item_base_params[key] = value if value else 'null' + if repos: + for repo in repos: + query_params.append({**item_base_params, **repo}) + else: + query_params.append(item_base_params) + return query_params + + +def iterate_node_find(service, params): + """Request a node search to the KernelCI API based on a set of + search parameters (a dict). The search is split into iterative + limited searches. + + Returns the list of nodes found, or an empty list if the search + didn't find any. + """ + nodes = [] + limit = 100 + offset = 0 + result_summary.logger.info("Searching") + while True: + search = service._api.node.find(params, limit=limit, offset=offset) + print(".", end='', flush=True) + if not search: + break + nodes.extend(search) + offset += limit + print("", flush=True) + return nodes + + +def get_err_category(trace: str, traces_config: Dict) -> Dict[str, Any]: + """Given a trace and a traces config, return its category""" + # sourcery skip: raise-specific-error + for category in traces_config["categories"]: + p = "|".join(category["patterns"]) + if re.findall(p, trace or ""): + return category + raise Exception(f"No category found") + + +def get_log(url, snippet_lines=0): + """Fetches a text log given its url. + + Returns: + If the log file couldn't be retrieved by any reason: None + Otherwise: + If snippet_lines == 0: the full log + If snippet_lines > 0: the first snippet_lines log lines + If snippet_lines < 0: the last snippet_lines log lines + """ + try: + response = requests.get(url) + except: + # Bail out if there was any error fetching the log + return None + if not len(response.content): + return None + try: + raw_bytes = gzip.decompress(response.content) + text = raw_bytes.decode('utf-8') + except gzip.BadGzipFile: + text = response.text + if snippet_lines > 0: + lines = text.splitlines() + return '\n'.join(lines[:snippet_lines]) + elif snippet_lines < 0: + lines = text.splitlines() + return '\n'.join(lines[snippet_lines:]) + return text + + +def artifact_is_log(artifact_name): + """Returns True if artifact_name looks like a log artifact, False + otherwise""" + possible_log_names = [ + 'job_txt', + ] + if (artifact_name == 'log' or + artifact_name.endswith('_log') or + artifact_name in possible_log_names): + return True + return False + + +def get_logs(node): + """Retrieves and processes logs from a specified node. + + This method iterates over a node's 'artifacts', if present, to find + log files. For each identified log file, it obtains the content by + calling the `_get_log` method. + If the content is not empty, it then stores this log data in a + dictionary, which includes both the URL of the log and its text + content. + + If the node log points to an empty file, the dict will contain an + entry for the log with an empty value. 
+ + Args: + node (dict): A dictionary representing a node, which should contain + an 'artifacts' key with log information. + + Returns: + A dict with an entry per node log. If the node log points to an + empty file, the entry will have an empty value. Otherwise, the + value will be a dict containing the 'url' and collected 'text' + of the log (may be an excerpt) + + None if no logs were found. + """ + if node.get('artifacts'): + logs = {} + log_fields = {} + for artifact, value in node['artifacts'].items(): + if artifact_is_log(artifact): + log_fields[artifact] = value + for log_name, url in log_fields.items(): + text = get_log(url) + if text: + logs[log_name] = {'url': url, 'text': text} + else: + logs[log_name] = None + return logs + return None + + +def post_process_node(node, api): + """Runs a set of operations to post-process a node and extract + additional information for it: + + - Find/complete/process node logs + + Modifies: + The input `node` dictionary is modified in-place by adding a new + key 'logs', which contains a dictionary of processed log + data (see get_logs()). + """ + + def find_node_logs(node, api): + """For an input node, use get_logs() to retrieve its log + artifacts. If no log artifacts were found in the node, search + upwards through parent links until finding one node in the chain + that contains logs. + + Returns: + A dict as returned by get_logs, but without empty log entries. + """ + logs = get_logs(node) + if not logs: + if node.get('parent'): + parent = api.node.get(node['parent']) + if parent: + logs = find_node_logs(parent, api) + if not logs: + return {} + # Remove empty logs + return {k: v for k, v in logs.items() if v} + + def log_snippets_only(logs, snippet_lines): + for log in logs: + lines = logs[log]['text'].splitlines() + logs[log]['text'] = '\n'.join(lines[-snippet_lines:]) + return logs + + def concatenate_logs(logs): + concatenated_logs = '' + for log in logs: + concatenated_logs += logs[log]['text'] + return concatenated_logs + + node['logs'] = find_node_logs(node, api) + + if node['result'] != 'pass': + concatenated_logs = concatenate_logs(node['logs']) + + with open(CONFIG_TRACES_FILE_PATH) as f: + traces_config: Dict[str, Any] = yaml.load(f, Loader=yaml.FullLoader) + node['category'] = get_err_category(concatenated_logs, traces_config) + + # Only get the last 10 lines of the log + snippet_lines = 10 + node['logs'] = log_snippets_only(node['logs'], snippet_lines) diff --git a/src/scheduler.py b/src/scheduler.py index 19eb690fb..a673b890a 100755 --- a/src/scheduler.py +++ b/src/scheduler.py @@ -6,18 +6,19 @@ # Author: Guillaume Tucker # Author: Jeny Sadadia -import logging import os import sys import tempfile +import json import yaml +import requests import kernelci import kernelci.config import kernelci.runtime import kernelci.scheduler import kernelci.storage -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts from base import Service @@ -72,21 +73,110 @@ def _stop(self, sub_id): self._cleanup_paths() def _run_job(self, job_config, runtime, platform, input_node): - node = self._api_helper.create_job_node(job_config, input_node) - job = kernelci.runtime.Job(node, job_config) + node = self._api_helper.create_job_node(job_config, input_node, + runtime, platform) + if not node: + return + # Most of the time, the artifacts we need originate from the parent + # node. 
Import those into the current node, working on a copy so the + # original node doesn't get "polluted" with useless artifacts when we + # update it with the results + job_node = node.copy() + if job_node.get('parent'): + parent_node = self._api.node.get(job_node['parent']) + if job_node.get('artifacts'): + job_node['artifacts'].update(parent_node['artifacts']) + else: + job_node['artifacts'] = parent_node['artifacts'] + job = kernelci.runtime.Job(job_node, job_config) job.platform_config = platform job.storage_config = self._storage_config params = runtime.get_params(job, self._api.config) + if not params: + self.log.error(' '.join([ + node['id'], + runtime.config.name, + platform.name, + job_config.name, + "Invalid job parameters, aborting...", + ])) + node['state'] = 'done' + node['result'] = 'incomplete' + node['data']['error_code'] = 'invalid_job_params' + try: + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + return + # Process potential f-strings in `params` with configured job params + # and platform attributes + kernel_revision = job_node['data']['kernel_revision']['version'] + extra_args = { + 'krev': f"{kernel_revision['version']}.{kernel_revision['patchlevel']}" + } + extra_args.update(job.config.params) + params = job.platform_config.format_params(params, extra_args) data = runtime.generate(job, params) + if not data: + self.log.error(' '.join([ + node['id'], + runtime.config.name, + platform.name, + job_config.name, + "Failed to generate job definition, aborting...", + ])) + node['state'] = 'done' + node['result'] = 'fail' + node['data']['error_code'] = 'job_generation_error' + try: + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + return tmp = tempfile.TemporaryDirectory(dir=self._output) output_file = runtime.save_file(data, tmp.name, params) - running_job = runtime.submit(output_file) + try: + running_job = runtime.submit(output_file) + except Exception as e: + self.log.error(' '.join([ + node['id'], + runtime.config.name, + platform.name, + job_config.name, + str(e), + ])) + node['state'] = 'done' + node['result'] = 'incomplete' + node['data']['error_code'] = 'submit_error' + node['data']['error_msg'] = str(e) + try: + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + return + + job_id = str(runtime.get_job_id(running_job)) + node['data']['job_id'] = job_id + + if platform.name == "kubernetes": + context = runtime.get_context() + node['data']['job_context'] = context + + try: + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + self.log.info(' '.join([ node['id'], runtime.config.name, platform.name, job_config.name, - str(runtime.get_job_id(running_job)), + job_id, ])) if runtime.config.lab_type in ['shell', 'docker']: self._job_tmp_dirs[running_job] = tmp @@ -97,9 +187,14 @@ def _run(self, sub_id): while True: event = self._api_helper.receive_event_data(sub_id) - for job, runtime, platform in self._sched.get_schedule(event): - input_node = self._api.get_node(event['id']) - self._run_job(job, runtime, platform, input_node) + for job, runtime, platform, rules in self._sched.get_schedule(event): + input_node = 
self._api.node.get(event['id']) + jobfilter = event.get('jobfilter') + # Add to node data the jobfilter if it exists in event + if jobfilter and isinstance(jobfilter, list): + input_node['jobfilter'] = jobfilter + if self._api_helper.should_create_node(rules, input_node): + self._run_job(job, runtime, platform, input_node) return True @@ -122,6 +217,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('scheduler', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/send_kcidb.py b/src/send_kcidb.py index de0986994..73eda9b8f 100755 --- a/src/send_kcidb.py +++ b/src/send_kcidb.py @@ -12,29 +12,54 @@ import datetime import sys +import re +import io +import gzip +import requests import kernelci import kernelci.config -from kernelci.cli import Args, Command, parse_opts -from kcidb import Client +from kernelci.legacy.cli import Args, Command, parse_opts import kcidb from base import Service +MISSED_TEST_CODES = ( + 'Bug', + 'Configuration', + 'Infrastructure', + 'invalid_job_params', + 'Job', + 'job_generation_error', + 'ObjectNotPersisted', + 'RequestBodyTooLarge', + 'submit_error', + 'Unexisting permission codename.', +) + +ERRORED_TEST_CODES = ( + 'Canceled', + 'LAVATimeout', + 'MultinodeTimeout', + 'node_timeout', + 'Test', +) + + class KCIDBBridge(Service): def __init__(self, configs, args, name): super().__init__(configs, args, name) + self._jobs = configs['jobs'] def _setup(self, args): return { - 'client': Client( + 'client': kcidb.Client( project_id=args.kcidb_project_id, topic_name=args.kcidb_topic_name ), 'sub_id': self._api_helper.subscribe_filters({ - 'name': 'checkout', - 'state': 'done', + 'state': ('done', 'available'), }), 'origin': args.origin, } @@ -43,10 +68,27 @@ def _stop(self, context): if context['sub_id']: self._api_helper.unsubscribe_filters(context['sub_id']) + def _remove_none_fields(self, data): + """Remove all keys with `None` values as KCIDB doesn't allow it""" + if isinstance(data, dict): + return {key: self._remove_none_fields(val) + for key, val in data.items() if val is not None} + if isinstance(data, list): + return [self._remove_none_fields(item) for item in data] + return data + def _send_revision(self, client, revision): - if kcidb.io.SCHEMA.is_valid(revision): - return client.submit(revision) - self.log.error("Aborting, invalid data") + revision = self._remove_none_fields(revision) + if any(value for key, value in revision.items() if key != 'version'): + self.log.debug(f"DEBUG: sending revision: {revision}") + if kcidb.io.SCHEMA.is_valid(revision): + client.submit(revision) + else: + self.log.error("Aborting, invalid data") + try: + kcidb.io.SCHEMA.validate(revision) + except Exception as exc: + self.log.error(f"Validation error: {str(exc)}") @staticmethod def _set_timezone(created_timestamp): @@ -57,35 +99,373 @@ def _set_timezone(created_timestamp): created_time.timestamp(), tz=tz_utc) return created_time.isoformat() + def _parse_checkout_node(self, origin, checkout_node): + result = checkout_node.get('result') + result_map = { + 'pass': True, + 'fail': False, + 'incomplete': False, + } + valid = result_map[result] if result else None + return [{ + 'id': f"{origin}:{checkout_node['id']}", + 'origin': origin, + 'tree_name': checkout_node['data']['kernel_revision']['tree'], + 'git_repository_url': + 
checkout_node['data']['kernel_revision']['url'], + 'git_commit_hash': + checkout_node['data']['kernel_revision']['commit'], + 'git_commit_name': + checkout_node['data']['kernel_revision'].get('describe'), + 'git_repository_branch': + checkout_node['data']['kernel_revision']['branch'], + 'start_time': self._set_timezone(checkout_node['created']), + 'patchset_hash': '', + 'misc': { + 'submitted_by': 'kernelci-pipeline' + }, + 'valid': valid, + }] + + def _get_output_files(self, artifacts: dict, exclude_properties=None): + output_files = [] + for name, url in artifacts.items(): + if exclude_properties and name in exclude_properties: + continue + # Replace "/" with "_" to match with the allowed pattern + # for "name" property of "output_files" i.e. '^[^/]+$' + name = name.replace("/", "_") + output_files.append( + { + 'name': name, + 'url': url + } + ) + return output_files + + def _get_log_excerpt(self, log_url): + """Parse compressed(gzip) or text log file and return last 16*1024 characters as it's + the maximum allowed length for KCIDB `log_excerpt` field""" + try: + res = requests.get(log_url, timeout=60) + if res.status_code != 200: + return None + except requests.exceptions.ConnectionError as exc: + self.log.error(f"{str(exc)}") + return None + + try: + # parse compressed file such as lava log files + buffer_data = io.BytesIO(res.content) + with gzip.open(buffer_data, mode='rt') as fp: + data = fp.read() + return data[-(16*1024):] + except gzip.BadGzipFile: + # parse text file such as kunit log file `test_log` + data = res.content.decode("utf-8") + return data[-(16*1024):] + + def _parse_build_node(self, origin, node): + parsed_build_node = { + 'checkout_id': f"{origin}:{node['parent']}", + 'id': f"{origin}:{node['id']}", + 'origin': origin, + 'comment': node['data']['kernel_revision'].get('describe'), + 'start_time': self._set_timezone(node['created']), + 'architecture': node['data'].get('arch'), + 'compiler': node['data'].get('compiler'), + 'config_name': node['data'].get('defconfig'), + 'valid': node['result'] == 'pass', + 'misc': { + 'platform': node['data'].get('platform'), + 'runtime': node['data'].get('runtime'), + 'job_id': node['data'].get('job_id'), + 'job_context': node['data'].get('job_context'), + 'kernel_type': node['data'].get('kernel_type'), + 'error_code': node['data'].get('error_code'), + 'error_msg': node['data'].get('error_msg'), + } + } + artifacts = node.get('artifacts') + if artifacts: + parsed_build_node['output_files'] = self._get_output_files( + artifacts=artifacts, + exclude_properties=('build_log', '_config') + ) + parsed_build_node['config_url'] = artifacts.get('_config') + parsed_build_node['log_url'] = artifacts.get('build_log') + log_url = parsed_build_node['log_url'] + if log_url: + parsed_build_node['log_excerpt'] = self._get_log_excerpt( + log_url) + + return [parsed_build_node] + + def _replace_restricted_chars(self, path, pattern, replace_char='_'): + # Replace restricted characters with "_" to match the allowed pattern + new_path = "" + for char in path: + if not re.match(pattern, char): + new_path += replace_char + else: + new_path += char + return new_path + + def _parse_node_path(self, path, is_checkout_child): + """Parse and create KCIDB schema compatible node path + Convert node path list to dot-separated string. Use unified + test suite name to exclude build and runtime information + from the test path. 
+ For example, test path ['checkout', 'kbuild-gcc-10-x86', 'baseline-x86'] + would be converted to "boot" + """ + if isinstance(path, list): + if is_checkout_child: + # nodes with path such as ['checkout', 'kver'] + parsed_path = path[1:] + else: + # nodes with path such as ['checkout', 'kbuild-gcc-10-x86', 'baseline-x86'] + parsed_path = path[2:] + # Handle node with path ['checkout', 'kbuild-gcc-10-x86', 'sleep', 'sleep'] + if len(parsed_path) >= 2: + if parsed_path[0] == parsed_path[1]: + parsed_path = parsed_path[1:] + new_path = [] + for sub_path in parsed_path: + if sub_path in self._jobs: + suite_name = self._jobs[sub_path].kcidb_test_suite + if suite_name: + new_path.append(suite_name) + else: + self.log.error(f"KCIDB test suite mapping not found for \ +the test: {sub_path}") + return None + else: + new_path.append(sub_path) + # Handle path such as ['tast-ui-x86-intel', 'tast', 'os-release'] converted + # to ['tast', 'tast', 'os-release'] + if len(new_path) >= 2: + if new_path[0] == new_path[1]: + new_path = new_path[1:] + path_str = '.'.join(new_path) + # Allowed pattern for test path is '^[.a-zA-Z0-9_-]*$' + formatted_path_str = self._replace_restricted_chars(path_str, r'^[.a-zA-Z0-9_-]*$') + return formatted_path_str if formatted_path_str else None + return None + + def _parse_node_result(self, test_node): + if test_node['result'] == 'incomplete': + error_code = test_node['data'].get('error_code') + if error_code in ERRORED_TEST_CODES: + return 'ERROR' + if error_code in MISSED_TEST_CODES: + return 'MISS' + return test_node['result'].upper() + + def _get_parent_build_node(self, node): + node = self._api.node.get(node['parent']) + if node['kind'] == 'kbuild' or node['kind'] == 'checkout': + return node + return self._get_parent_build_node(node) + + def _create_dummy_build_node(self, origin, checkout_node, arch): + return { + 'id': f"{origin}:dummy_{checkout_node['id']}_{arch}" if arch + else f"{origin}:dummy_{checkout_node['id']}", + 'checkout_id': f"{origin}:{checkout_node['id']}", + 'comment': 'Dummy build for tests hanging from checkout', + 'origin': origin, + 'start_time': self._set_timezone(checkout_node['created']), + 'valid': True, + 'architecture': arch, + } + + def _get_artifacts(self, node): + """Retrieve artifacts + Get node artifacts. If the node doesn't have the artifacts, + it will search through parent nodes recursively until + it's found. + """ + artifacts = node.get('artifacts') + if not artifacts: + if node.get('parent'): + parent = self._api.node.get(node['parent']) + if parent: + artifacts = self._get_artifacts(parent) + return artifacts + + def _get_job_metadata(self, node): + """Retrieve job metadata + Get job metadata such as job ID and context. If the node doesn't + have the metadata, it will search through parent nodes recursively + until it's found. + """ + data = node.get('data') + if not data.get('job_id'): + if node.get('parent'): + parent = self._api.node.get(node['parent']) + if parent: + data = self._get_job_metadata(parent) + return data + + def _get_error_metadata(self, node): + """Retrieve error metadata for failed tests + Get error metadata such as error code and message for failed jobs. + If the node doesn't have the metadata, it will search through parent + nodes recursively until it's found. 
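+
+        For example (illustrative, error codes taken from elsewhere in
+        this patch), a failed job may yield:
+        {'error_code': 'submit_error', 'error_msg': '<submit failure>'}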
+ """ + data = node.get('data') + if not data.get('error_code'): + if node.get('parent'): + parent = self._api.node.get(node['parent']) + if parent: + data = self._get_error_metadata(parent) + return data + + def _parse_test_node(self, origin, test_node): + dummy_build = {} + is_checkout_child = False + build_node = self._get_parent_build_node(test_node) + # Create dummy build node if test is hanging directly from checkout + if build_node['kind'] == 'checkout': + is_checkout_child = True + dummy_build = self._create_dummy_build_node(origin, build_node, + test_node['data'].get('arch')) + build_id = dummy_build['id'] + else: + build_id = f"{origin}:{build_node['id']}" + + parsed_test_node = { + 'build_id': build_id, + 'id': f"{origin}:{test_node['id']}", + 'origin': origin, + 'comment': f"{test_node['name']} on {test_node['data'].get('platform')} \ +in {test_node['data'].get('runtime')}", + 'start_time': self._set_timezone(test_node['created']), + 'environment': { + 'comment': f"Runtime: {test_node['data'].get('runtime')}", + 'misc': { + 'platform': test_node['data'].get('platform'), + } + }, + 'waived': False, + 'path': self._parse_node_path(test_node['path'], is_checkout_child), + 'misc': { + 'test_source': test_node['data'].get('test_source'), + 'test_revision': test_node['data'].get('test_revision'), + 'compiler': test_node['data'].get('compiler'), + 'kernel_type': test_node['data'].get('kernel_type'), + 'arch': test_node['data'].get('arch'), + } + } + + if test_node['result']: + parsed_test_node['status'] = self._parse_node_result(test_node) + if parsed_test_node['status'] == 'SKIP': + # No artifacts and metadata will be available for skipped tests + return parsed_test_node, dummy_build + + job_metadata = self._get_job_metadata(test_node) + if job_metadata: + parsed_test_node['environment']['misc']['job_id'] = job_metadata.get( + 'job_id') + parsed_test_node['environment']['misc']['job_context'] = job_metadata.get( + 'job_context') + + artifacts = self._get_artifacts(test_node) + if artifacts: + parsed_test_node['output_files'] = self._get_output_files( + artifacts=artifacts, + exclude_properties=('lava_log', 'test_log') + ) + if artifacts.get('lava_log'): + parsed_test_node['log_url'] = artifacts.get('lava_log') + else: + parsed_test_node['log_url'] = artifacts.get('test_log') + + log_url = parsed_test_node['log_url'] + if log_url: + parsed_test_node['log_excerpt'] = self._get_log_excerpt( + log_url) + + if test_node['result'] != 'pass': + error_metadata = self._get_error_metadata(test_node) + if error_metadata: + parsed_test_node['misc']['error_code'] = error_metadata.get( + 'error_code') + parsed_test_node['misc']['error_msg'] = error_metadata.get( + 'error_msg') + + return parsed_test_node, dummy_build + + def _get_test_data(self, node, origin, + parsed_test_node, parsed_build_node): + test_node, build_node = self._parse_test_node( + origin, node + ) + if not test_node['path']: + self.log.info(f"Not sending test as path information is missing: {test_node['id']}") + return + + if 'setup' in test_node.get('path'): + # do not send setup tests + return + + parsed_test_node.append(test_node) + if build_node: + parsed_build_node.append(build_node) + + def _get_test_data_recursively(self, node, origin, parsed_test_node, parsed_build_node): + child_nodes = self._api.node.find({'parent': node['id']}) + if not child_nodes: + self._get_test_data(node, origin, parsed_test_node, + parsed_build_node) + else: + for child in child_nodes: + self._get_test_data_recursively(child, origin, 
parsed_test_node, + parsed_build_node) + def _run(self, context): self.log.info("Listening for events... ") self.log.info("Press Ctrl-C to stop.") while True: - node = self._api_helper.receive_event_node(context['sub_id']) - self.log.info(f"Submitting node to KCIDB: {node['id']}") + node, is_hierarchy = self._api_helper.receive_event_node(context['sub_id']) + self.log.info(f"Received an event for node: {node['id']}") + + parsed_checkout_node = [] + parsed_build_node = [] + parsed_test_node = [] + + if node['kind'] == 'checkout': + parsed_checkout_node = self._parse_checkout_node( + context['origin'], node) + + elif node['kind'] == 'kbuild': + parsed_build_node = self._parse_build_node( + context['origin'], node + ) + + elif node['kind'] == 'test': + self._get_test_data(node, context['origin'], + parsed_test_node, parsed_build_node) + + elif node['kind'] == 'job': + # Send only failed/incomplete job nodes + if node['result'] != 'pass': + self._get_test_data(node, context['origin'], + parsed_test_node, parsed_build_node) + if is_hierarchy: + self._get_test_data_recursively(node, context['origin'], + parsed_test_node, parsed_build_node) revision = { - 'builds': [], - 'checkouts': [ - { - 'id': f"{context['origin']}:{node['id']}", - 'origin': context['origin'], - 'tree_name': node['revision']['tree'], - 'git_repository_url': node['revision']['url'], - 'git_commit_hash': node['revision']['commit'], - 'git_repository_branch': node['revision']['branch'], - 'start_time': self._set_timezone(node['created']), - 'patchset_hash': '', - 'misc': { - 'submitted_by': 'kernelci-pipeline' - }, - } - ], - 'tests': [], + 'checkouts': parsed_checkout_node, + 'builds': parsed_build_node, + 'tests': parsed_test_node, 'version': { 'major': 4, - 'minor': 0 + 'minor': 3 } } self._send_revision(context['client'], revision) @@ -116,6 +496,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('send_kcidb', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/tarball.py b/src/tarball.py index 1604e169e..859050e0c 100755 --- a/src/tarball.py +++ b/src/tarball.py @@ -5,20 +5,19 @@ # Copyright (C) 2022 Collabora Limited # Author: Guillaume Tucker # Author: Jeny Sadadia +# Author: Nikolay Yurin from datetime import datetime, timedelta -import logging import os import re import sys -import urllib.parse import json import requests import kernelci import kernelci.build import kernelci.config -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts import kernelci.storage from base import Service @@ -32,51 +31,90 @@ class Tarball(Service): - - def __init__(self, configs, args): - super().__init__(configs, args, 'tarball') - self._build_configs = configs['build_configs'] - self._kdir = args.kdir - self._output = args.output - if not os.path.exists(self._output): - os.makedirs(self._output) - self._verbose = args.verbose - self._storage_config = configs['storage_configs'][args.storage_config] + TAR_CREATE_CMD = """\ +set -e +cd {target_dir} +git archive --format=tar --prefix={prefix}/ HEAD | gzip > {tarball_path} +""" + + def __init__(self, global_configs, service_config): + super().__init__(global_configs, service_config, 'tarball') + self._service_config = service_config + self._build_configs = global_configs['build_configs'] + if not 
os.path.exists(self._service_config.output): + os.makedirs(self._service_config.output) + storage_config = global_configs['storage_configs'][ + service_config.storage_config + ] self._storage = kernelci.storage.get_storage( - self._storage_config, args.storage_cred + storage_config, service_config.storage_cred ) def _find_build_config(self, node): - revision = node['revision'] + revision = node['data']['kernel_revision'] tree = revision['tree'] branch = revision['branch'] - for name, config in self._build_configs.items(): + for config in self._build_configs.values(): if config.tree.name == tree and config.branch == branch: return config + def _find_build_commit(self, node): + revision = node['data'].get('kernel_revision') + commit = revision.get('commit') + return commit + + def _checkout_commitid(self, commitid): + self.log.info(f"Checking out commit {commitid}") + # i might need something from kernelci.build + # but i prefer to implement it myself + cwd = os.getcwd() + os.chdir(self._service_config.kdir) + kernelci.shell_cmd(f"git checkout {commitid}", self._service_config.kdir) + os.chdir(cwd) + self.log.info("Commit checked out") + def _update_repo(self, config): + ''' + Return True - if failed to update repo and need to retry + Return False - if repo updated successfully + ''' self.log.info(f"Updating repo for {config.name}") - kernelci.build.update_repo(config, self._kdir) + try: + kernelci.build.update_repo(config, self._service_config.kdir) + except Exception as err: + self.log.error(f"Failed to update: {err}, cleaning stale repo") + # safeguard, make sure it is git repo + if not os.path.exists( + os.path.join(self._service_config.kdir, '.git') + ): + err_msg = f"{self._service_config.kdir} is not a git repo" + self.log.error(err_msg) + raise Exception(err_msg) + # cleanup the repo and return True, so we try again + kernelci.shell_cmd(f"rm -rf {self._service_config.kdir}") + return True + self.log.info("Repo updated") + return False - def _make_tarball(self, config, describe): - name = '-'.join(['linux', config.tree.name, config.branch, describe]) - tarball = f"{name}.tar.gz" - self.log.info(f"Making tarball {tarball}") - output_path = os.path.relpath(self._output, self._kdir) - cmd = """\ -set -e -cd {kdir} -git archive --format=tar --prefix={name}/ HEAD | gzip > {output}/{tarball} -""".format(kdir=self._kdir, name=name, output=output_path, tarball=tarball) + def _make_tarball(self, target_dir, tarball_name): + self.log.info(f"Making tarball {tarball_name}") + tarball_path = os.path.join( + self._service_config.output, + f"{tarball_name}.tar.gz" + ) + cmd = self.TAR_CREATE_CMD.format( + target_dir=target_dir, + prefix=tarball_name, + tarball_path=tarball_path + ) self.log.info(cmd) kernelci.shell_cmd(cmd) self.log.info("Tarball created") - return tarball + return tarball_path - def _push_tarball(self, config, describe): - tarball_name = self._make_tarball(config, describe) - tarball_path = os.path.join(self._output, tarball_name) + def _push_tarball(self, tarball_path): + tarball_name = os.path.basename(tarball_path) self.log.info(f"Uploading {tarball_path}") tarball_url = self._storage.upload_single((tarball_path, tarball_name)) self.log.info(f"Upload complete: {tarball_url}") @@ -84,7 +122,9 @@ def _push_tarball(self, config, describe): return tarball_url def _get_version_from_describe(self): - describe_v = kernelci.build.git_describe_verbose(self._kdir) + describe_v = kernelci.build.git_describe_verbose( + self._service_config.kdir + ) version = 
KVER_RE.match(describe_v).groupdict() return { key: value @@ -94,7 +134,7 @@ def _get_version_from_describe(self): def _update_node(self, checkout_node, describe, version, tarball_url): node = checkout_node.copy() - node['revision'].update({ + node['data']['kernel_revision'].update({ 'describe': describe, 'version': version, }) @@ -106,7 +146,23 @@ def _update_node(self, checkout_node, describe, version, tarball_url): 'holdoff': str(datetime.utcnow() + timedelta(minutes=10)) }) try: - self._api.update_node(node) + self._api.node.update(node) + except requests.exceptions.HTTPError as err: + err_msg = json.loads(err.response.content).get("detail", []) + self.log.error(err_msg) + + def _update_failed_checkout_node(self, checkout_node, error_code, error_msg): + node = checkout_node.copy() + node.update({ + 'state': 'done', + 'result': 'fail', + }) + if 'data' not in node: + node['data'] = {} + node['data']['error_code'] = error_code + node['data']['error_msg'] = error_msg + try: + self._api.node.update(node) except requests.exceptions.HTTPError as err: err_msg = json.loads(err.response.content).get("detail", []) self.log.error(err_msg) @@ -114,7 +170,7 @@ def _update_node(self, checkout_node, describe, version, tarball_url): def _setup(self, args): return self._api_helper.subscribe_filters({ 'op': 'created', - 'name': 'checkout', + 'kind': 'checkout', 'state': 'running', }) @@ -127,22 +183,49 @@ def _run(self, sub_id): self.log.info("Press Ctrl-C to stop.") while True: - checkout_node = self._api_helper.receive_event_node(sub_id) + checkout_node, _ = self._api_helper.receive_event_node(sub_id) build_config = self._find_build_config(checkout_node) if build_config is None: continue - self._update_repo(build_config) + if self._update_repo(build_config): + self.log.error("Failed to update repo, retrying") + if self._update_repo(build_config): + # critical failure, something wrong with git + self.log.error("Failed to update repo again, exit") + # Set checkout node result to fail + self._update_failed_checkout_node(checkout_node, + 'git_checkout_failure', + 'Failed to init/update git repo') + os._exit(1) + + commitid = self._find_build_commit(checkout_node) + if commitid is None: + self.log.error("Failed to find commit id") + self._update_failed_checkout_node(checkout_node, + 'git_checkout_failure', + 'Failed to find commit id') + os._exit(1) + self._checkout_commitid(commitid) + describe = kernelci.build.git_describe( - build_config.tree.name, self._kdir + build_config.tree.name, self._service_config.kdir ) version = self._get_version_from_describe() - tarball_url = self._push_tarball(build_config, describe) + tarball_name = '-'.join([ + 'linux', + build_config.tree.name, + build_config.branch, + describe + ]) + tarball_path = self._make_tarball( + self._service_config.kdir, + tarball_name + ) + tarball_url = self._push_tarball(tarball_path) self._update_node(checkout_node, describe, version, tarball_url) - return True - class cmd_run(Command): help = "Wait for a new revision event and push a source tarball" @@ -159,6 +242,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('tarball', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/test_report.py b/src/test_report.py index b15608c31..2173317de 100755 --- a/src/test_report.py +++ b/src/test_report.py @@ -17,7 
+17,7 @@ import kernelci.config import kernelci.db -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts import jinja2 from kernelci_pipeline.email_sender import EmailSender @@ -49,24 +49,24 @@ def _get_job_stats(self, jobs_data): } def _get_job_data(self, checkout_node, job): - revision = checkout_node['revision'] + revision = checkout_node['data']['kernel_revision'] - root_node = self._api.get_nodes({ - 'revision.commit': revision['commit'], - 'revision.tree': revision['tree'], - 'revision.branch': revision['branch'], + root_node = self._api.node.find({ + 'data.kernel_revision.commit': revision['commit'], + 'data.kernel_revision.tree': revision['tree'], + 'data.kernel_revision.branch': revision['branch'], 'name': job, })[0] - job_nodes = self._api.count_nodes({ - 'revision.commit': revision['commit'], - 'revision.tree': revision['tree'], - 'revision.branch': revision['branch'], + job_nodes = self._api.node.count({ + 'data.kernel_revision.commit': revision['commit'], + 'data.kernel_revision.tree': revision['tree'], + 'data.kernel_revision.branch': revision['branch'], 'group': job, }) - failures = self._api.get_nodes({ - 'revision.commit': revision['commit'], - 'revision.tree': revision['tree'], - 'revision.branch': revision['branch'], + failures = self._api.node.find({ + 'data.kernel_revision.commit': revision['commit'], + 'data.kernel_revision.tree': revision['tree'], + 'data.kernel_revision.branch': revision['branch'], 'group': job, 'result': 'fail', }) @@ -83,11 +83,11 @@ def _get_job_data(self, checkout_node, job): def _get_jobs(self, root_node): jobs = [] - revision = root_node['revision'] - nodes = self._api.get_nodes({ - 'revision.commit': revision['commit'], - 'revision.tree': revision['tree'], - 'revision.branch': revision['branch'] + revision = root_node['data']['kernel_revision'] + nodes = self._api.node.find({ + 'data.kernel_revision.commit': revision['commit'], + 'data.kernel_revision.tree': revision['tree'], + 'data.kernel_revision.branch': revision['branch'] }) for node in nodes: if node['group'] and node['group'] not in jobs: @@ -112,13 +112,15 @@ def _get_report(self, root_node): loader=jinja2.FileSystemLoader("./config/reports/") ) template = template_env.get_template("test-report.jinja2") - revision = root_node['revision'] + revision = root_node['data']['kernel_revision'] results = self._get_results_data(root_node) stats = results['stats'] jobs = results['jobs'] - subject = f"\ -[STAGING] {revision['tree']}/{revision['branch']} {revision['describe']}: \ -{stats['total']} runs {stats['failures']} failures" + # TODO: Sanity-check all referenced values, handle corner cases + # properly + subject = (f"[STAGING] {revision['tree']}/{revision['branch']} " + f"{revision.get('describe', '')}: " + f"{stats['total']} runs {stats['failures']} failures") content = template.render( subject=subject, root=root_node, jobs=jobs ) @@ -136,7 +138,7 @@ class TestReportLoop(TestReport): def _setup(self, args): return self._api_helper.subscribe_filters({ - 'name': 'checkout', + 'kind': 'checkout', 'state': 'done', }) @@ -149,7 +151,7 @@ def _run(self, sub_id): self.log.info("Press Ctrl-C to stop.") while True: - root_node = self._api_helper.receive_event_node(sub_id) + root_node, _ = self._api_helper.receive_event_node(sub_id) content, subject = self._get_report(root_node) self._dump_report(content) self._send_report(subject, content) @@ -162,7 +164,7 @@ class TestReportSingle(TestReport): def _setup(self, args): return { - 'root_node': 
self._api.get_node(args.node_id), + 'root_node': self._api.node.find(args.node_id), 'dump': args.dump, 'send': args.send, } @@ -232,6 +234,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('test_report', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/src/timeout.py b/src/timeout.py index 449f09434..f99542414 100755 --- a/src/timeout.py +++ b/src/timeout.py @@ -14,7 +14,7 @@ import kernelci import kernelci.config import kernelci.db -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts from base import Service @@ -24,11 +24,11 @@ class TimeoutService(Service): def __init__(self, configs, args, name): super().__init__(configs, args, name) self._pending_states = [ - state.value for state in self._api.node_states + state.value for state in self._api.node.states if state != state.DONE ] - self._user = self._api.whoami() - self._username = self._user['profile']['username'] + self._user = self._api.user.whoami() + self._username = self._user['username'] def _setup(self, args): return { @@ -40,7 +40,7 @@ def _get_pending_nodes(self, filters=None): node_filters = filters.copy() if filters else {} for state in self._pending_states: node_filters['state'] = state - for node in self._api.get_nodes(node_filters): + for node in self._api.node.find(node_filters): # Until permissions for the timeout service are fixed: if node['owner'] == self._username: nodes[node['id']] = node @@ -50,29 +50,53 @@ def _count_running_child_nodes(self, parent_id): nodes_count = 0 for state in self._pending_states: - nodes_count += self._api.count_nodes({ + nodes_count += self._api.node.count({ 'parent': parent_id, 'state': state }) return nodes_count - def _get_child_nodes_recursive(self, node, state_filter=None): - recursive = {} + def _count_running_build_child_nodes(self, checkout_id): + nodes_count = 0 + build_nodes = self._api.node.find({ + 'parent': checkout_id, + 'kind': 'kbuild' + }) + for build in build_nodes: + for state in self._pending_states: + nodes_count += self._api.node.count({ + 'parent': build['id'], 'state': state + }) + return nodes_count + + def _get_child_nodes_recursive(self, node, recursive, state_filter=None): child_nodes = self._get_pending_nodes({'parent': node['id']}) for child_id, child in child_nodes.items(): if state_filter is None or child['state'] == state_filter: - recursive.update(self._get_child_nodes_recursive( - child, state_filter - )) - return recursive + recursive.update({child_id: child}) + self._get_child_nodes_recursive( + child, recursive, state_filter + ) - def _submit_lapsed_nodes(self, lapsed_nodes, state, log=None): + def _submit_lapsed_nodes(self, lapsed_nodes, state, mode): for node_id, node in lapsed_nodes.items(): node_update = node.copy() node_update['state'] = state - if log: - self.log.debug(f"{node_id} {log}") + self.log.debug(f"{node_id} {mode}") + if mode == 'TIMEOUT': + if node['kind'] == 'checkout' and node['state'] != 'running': + node_update['result'] = 'pass' + else: + if 'data' not in node_update: + node_update['data'] = {} + node_update['result'] = 'incomplete' + node_update['data']['error_code'] = 'node_timeout' + node_update['data']['error_msg'] = 'Node timed-out' + + if node['kind'] == 'checkout' and mode == 'DONE': + node_update['result'] = 'pass' + try: - 
self._api.update_node(node_update) + self._api.node.update(node_update) except requests.exceptions.HTTPError as err: err_msg = json.loads(err.response.content).get("detail", []) self.log.error(err_msg) @@ -87,7 +111,7 @@ def _check_pending_nodes(self, pending_nodes): timeout_nodes = {} for node_id, node in pending_nodes.items(): timeout_nodes[node_id] = node - timeout_nodes.update(self._get_child_nodes_recursive(node)) + self._get_child_nodes_recursive(node, timeout_nodes) self._submit_lapsed_nodes(timeout_nodes, 'done', 'TIMEOUT') def _run(self, ctx): @@ -111,7 +135,7 @@ def __init__(self, configs, args): super().__init__(configs, args, 'timeout-holdoff') def _get_available_nodes(self): - nodes = self._api.get_nodes({ + nodes = self._api.node.find({ 'state': 'available', 'holdoff__lt': datetime.isoformat(datetime.utcnow()), }) @@ -123,15 +147,18 @@ def _check_available_nodes(self, available_nodes): for node_id, node in available_nodes.items(): running = self._count_running_child_nodes(node_id) if running: - closing_nodes.update( - self._get_child_nodes_recursive(node, 'available') - ) closing_nodes[node_id] = node + self._get_child_nodes_recursive(node, closing_nodes, 'available') else: - timeout_nodes.update( - self._get_child_nodes_recursive(node) - ) - timeout_nodes[node_id] = node + if node['kind'] == 'checkout': + running = self._count_running_build_child_nodes(node_id) + self.log.debug(f"{node_id} RUNNING build child nodes: {running}") + if not running: + timeout_nodes[node_id] = node + self._get_child_nodes_recursive(node, timeout_nodes) + else: + timeout_nodes[node_id] = node + self._get_child_nodes_recursive(node, timeout_nodes) self._submit_lapsed_nodes(closing_nodes, 'closing', 'HOLDOFF') self._submit_lapsed_nodes(timeout_nodes, 'done', 'DONE') @@ -153,7 +180,7 @@ def __init__(self, configs, args): super().__init__(configs, args, 'timeout-closing') def _get_closing_nodes(self): - nodes = self._api.get_nodes({'state': 'closing'}) + nodes = self._api.node.find({'state': 'closing'}) return {node['id']: node for node in nodes} def _check_closing_nodes(self, closing_nodes): @@ -162,7 +189,13 @@ def _check_closing_nodes(self, closing_nodes): running = self._count_running_child_nodes(node_id) self.log.debug(f"{node_id} RUNNING: {running}") if not running: - done_nodes[node_id] = node + if node['kind'] == 'checkout': + running = self._count_running_build_child_nodes(node['id']) + self.log.debug(f"{node_id} RUNNING build child nodes: {running}") + if not running: + done_nodes[node_id] = node + else: + done_nodes[node_id] = node self._submit_lapsed_nodes(done_nodes, 'done', 'DONE') def _run(self, ctx): @@ -206,6 +239,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('timeout', globals()) - pipeline = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + pipeline = kernelci.config.load(yaml_configs) status = opts.command(pipeline, opts) sys.exit(0 if status is True else 1) diff --git a/src/trigger.py b/src/trigger.py index 2062ced68..f2d7b44bb 100755 --- a/src/trigger.py +++ b/src/trigger.py @@ -16,7 +16,7 @@ import kernelci.build import kernelci.config import kernelci.db -from kernelci.cli import Args, Command, parse_opts +from kernelci.legacy.cli import Args, Command, parse_opts import urllib import requests @@ -28,14 +28,25 @@ class Trigger(Service): def __init__(self, configs, args): super().__init__(configs, args, 'trigger') self._build_configs = configs['build_configs'] + self._current_user = 
self._api.user.whoami() def _log_revision(self, message, build_config, head_commit): self.log.info(f"{message:32s} {build_config.name:32s} {head_commit}") - def _run_trigger(self, build_config, force, timeout): + def _run_trigger(self, build_config, force, timeout, trees): + if trees and len(trees) > 1: + tree_condition = "not" if trees.startswith("!") else "only" + trees_list = trees.strip("!").split(",") # Remove leading '!', split by comma + tree_in_list = build_config.tree.name in trees_list + if (tree_in_list and tree_condition == "not") or \ + (not tree_in_list and tree_condition == "only"): + return + head_commit = kernelci.build.get_branch_head(build_config) - node_count = self._api.count_nodes({ - "revision.commit": head_commit, + node_count = self._api.node.count({ + "kind": "checkout", + "data.kernel_revision.commit": head_commit, + "owner": self._current_user['username'], }) if node_count > 0: @@ -63,11 +74,14 @@ def _run_trigger(self, build_config, force, timeout): node = { 'name': 'checkout', 'path': ['checkout'], - 'revision': revision, + 'kind': 'checkout', + 'data': { + 'kernel_revision': revision, + }, 'timeout': checkout_timeout.isoformat(), } try: - self._api.create_node(node) + self._api.node.add(node) except requests.exceptions.HTTPError as ex: detail = ex.response.json().get('detail') if detail: @@ -76,10 +90,10 @@ def _run_trigger(self, build_config, force, timeout): self.traceback(ex) def _iterate_build_configs(self, force, build_configs_list, - timeout): + timeout, trees): for name, config in self._build_configs.items(): if not build_configs_list or name in build_configs_list: - self._run_trigger(config, force, timeout) + self._run_trigger(config, force, timeout, trees) def _setup(self, args): return { @@ -88,13 +102,14 @@ def _setup(self, args): 'build_configs_list': (args.build_configs or '').split(), 'startup_delay': int(args.startup_delay or 0), 'timeout': args.timeout, + 'trees': args.trees, } def _run(self, ctx): - poll_period, force, build_configs_list, startup_delay, timeout = ( + poll_period, force, build_configs_list, startup_delay, timeout, trees = ( ctx[key] for key in ( 'poll_period', 'force', 'build_configs_list', 'startup_delay', - 'timeout' + 'timeout', 'trees' ) ) @@ -104,7 +119,7 @@ def _run(self, ctx): while True: self._iterate_build_configs(force, build_configs_list, - timeout) + timeout, trees) if poll_period: self.log.info(f"Sleeping for {poll_period}s") time.sleep(poll_period) @@ -145,6 +160,13 @@ class cmd_run(Command): 'type': float, 'help': "Timeout minutes for checkout node", }, + { + 'name': '--trees', + 'help': "Exclude or include certain trees (default: all): " + + "'!kernelci' for all except kernelci, " + + "'kernelci' for only kernelci, " + + "'!kernelci,linux' for all except kernelci and linux", + }, ] def __call__(self, configs, args): @@ -153,6 +175,7 @@ def __call__(self, configs, args): if __name__ == '__main__': opts = parse_opts('trigger', globals()) - configs = kernelci.config.load('config/pipeline.yaml') + yaml_configs = opts.get_yaml_configs() or 'config' + configs = kernelci.config.load(yaml_configs) status = opts.command(configs, opts) sys.exit(0 if status is True else 1) diff --git a/tests/validate_yaml.py b/tests/validate_yaml.py new file mode 100755 index 000000000..5a34bc978 --- /dev/null +++ b/tests/validate_yaml.py @@ -0,0 +1,130 @@ +#!/usr/bin/env python3 +''' +Validate all yaml files in the config/ directory +''' + +import os +import yaml +import sys + +def recursive_merge(d1, d2, detect_same_keys=False): + ''' + Recursively 
merge two dictionaries, which might contain lists. + Dict values are merged recursively and lists are concatenated; + with detect_same_keys set, duplicate keys with conflicting values + raise an error. + ''' + for k, v in d2.items(): + if detect_same_keys and k in d1: + if d1[k] != v: + raise ValueError(f"Key {k} has different values in both dictionaries") +# We have entries duplication in the yaml files, we need to deal with it later +# so previous verification is very important +# else: +# print(f"Warning: Key {k} has same values in both dictionaries") + if k in d1: + if isinstance(v, dict): + d1[k] = recursive_merge(d1[k], v, detect_same_keys=True) + elif isinstance(v, list): + d1[k] += v + else: + d1[k] = v + else: + d1[k] = v + return d1 + +def validate_jobs(jobs): + ''' + Validate jobs, they must have a kcidb_test_suite mapping + ''' + for name, definition in jobs.items(): + if not definition.get('kind'): + raise yaml.YAMLError( + f"Kind not found for job: {name}" + ) + if definition.get('kind') in ("test", "job"): + if not definition.get('kcidb_test_suite'): + raise yaml.YAMLError( + f"KCIDB test suite mapping not found for job: {name}" + ) + if definition.get('kind') == "job": + if not definition.get('template'): + raise yaml.YAMLError( + f"Template not found for job: {name}" + ) + +def validate_scheduler_jobs(data): + ''' + Each entry in scheduler has a job, which should be defined in jobs + ''' + schedules = data.get('scheduler') + jobs = data.get('jobs') + for entry in schedules: + if entry.get('job') not in jobs.keys(): + raise yaml.YAMLError( + f"Job {entry.get('job')} not found in jobs" + ) + +def validate_unused_jobs(data): + ''' + Check if all jobs are used in scheduler + ''' + schedules = data.get('scheduler') + jobs = data.get('jobs') + sch_jobs = [entry.get('job') for entry in schedules] + for job in jobs.keys(): + if job not in sch_jobs: + print(f"Warning: Job {job} is not used in scheduler") + +def validate_build_configs(data): + ''' + Each entry in build_configs has a tree parameter + This tree should exist in the trees: section + ''' + build_configs = data.get('build_configs') + trees = data.get('trees') + for entry in build_configs: + if build_configs[entry].get('tree') not in trees.keys(): + raise yaml.YAMLError( + f"Tree {build_configs[entry].get('tree')} not found in trees" + ) + +def validate_unused_trees(data): + ''' + Check if all trees are used in build_configs + ''' + build_configs = data.get('build_configs') + trees = data.get('trees') + build_trees = [build_configs[entry].get('tree') for entry in build_configs] + for tree in trees.keys(): + if tree not in build_trees: + print(f"Warning: Tree {tree} is not used in build_configs") + +def validate_yaml(dir='config'): + ''' + Validate all yaml files in the config/ directory + ''' + merged_data = {} + for file in os.listdir(dir): + if file.endswith('.yaml'): + print(f"Validating {file}") + fpath = os.path.join(dir, file) + with open(fpath, 'r') as stream: + try: + data = yaml.safe_load(stream) + merged_data = recursive_merge(merged_data, data) + jobs = data.get('jobs') + if jobs: + validate_jobs(jobs) + except yaml.YAMLError as exc: + print(f'Error in {file}: {exc}') + sys.exit(1) + print("Validating scheduler entries against jobs") + validate_scheduler_jobs(merged_data) + validate_unused_jobs(merged_data) + validate_build_configs(merged_data) + validate_unused_trees(merged_data) + +if __name__ == '__main__': + if len(sys.argv) > 1: + validate_yaml(sys.argv[1]) + else: + validate_yaml()
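+
+# Editorial sketch (illustrative, not part of the original script):
+# recursive_merge() semantics in isolation: nested dicts are merged key
+# by key and lists are concatenated, e.g.
+#
+#   d1 = {'jobs': {'a': {'kind': 'job'}}, 'extra': ['x']}
+#   d2 = {'jobs': {'b': {'kind': 'test'}}, 'extra': ['y']}
+#   recursive_merge(d1, d2)
+#   # -> {'jobs': {'a': {'kind': 'job'}, 'b': {'kind': 'test'}},
+#   #     'extra': ['x', 'y']}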