Simplify Thoth part of the toolchain #233
/triage accepted
worth checking the impact on ODH

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale

/remove-lifecycle stale
This might be easy to achieve and can reduce maintenance cost.

/help

@goern: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/sig devsecops
After working on thoth-station/adviser#2411 (comment) I'm starting to wonder whether we should involve Thoth directly as part of the container build at all, and instead use it as a "lockfile updater" rather than an installer. Some reasons:
I might be missing some context though on why integrating Thoth advise inside the s2i process is more valuable. I see Thoth more in the role of a @goern @codificat @harshad16 @KPostOffice @Gkrumbach07 wdyt?
The initial thought was to provide as many integration points as possible (given that a user trusts the system), and the following were derived:
- with the s2i image, the idea was that trusting Thoth advice would fix package issues during build time, though this wasn't used much.
- it was also meant to cover cases like GPU-enabled images: the developer build might not have required them, but they are required for cluster images, so Thoth advice could be helpful in scenarios where the environment dictates the package installation.
In cases of a GPU (or any accelerator / custom resource) in a cluster, the image would not be built on the same node it's used on, would it?
I think the match would occur more at the scheduling stage (taking a k8s cluster as a model) using limits on custom resources (check out https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/).
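For illustration, the scheduling-stage match described in the linked docs comes down to a pod spec that requests the accelerator via resource limits. A minimal sketch (names are hypothetical; the `nvidia.com/gpu` resource name assumes the NVIDIA device plugin is deployed on the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload-example   # hypothetical name
spec:
  containers:
  - name: trainer
    image: example.com/training-image:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1   # scheduler places the pod only on a node exposing a GPU
```

The point being: the image itself stays generic, and the GPU requirement is expressed at scheduling time rather than baked in at build time.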
Do some Python AI/ML packages **require** a GPU/accelerator during build time?
Suggestion:
**Is your feature request related to a problem? Please describe.**
As suggested by @frenzymadness, our tooling could be simplified. Instead of creating a patch, we could maintain our own assemble script and propagate it to containers.
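A minimal sketch of what the dependency-install step of such a maintained assemble script could look like (hypothetical: the real Thoth s2i scripts differ, and the `micropipenv`/`pip` commands in the comments are assumptions, not the actual implementation):

```shell
#!/bin/bash
# Sketch of the dependency step of a custom s2i "assemble" script.
# It prefers a pre-resolved lockfile committed to the repo (the
# "lockfile updater" model) over resolving at build time.
install_deps() {
  if [ -f "$1/Pipfile.lock" ]; then
    # Lockfile present: a real script would install exactly the pinned
    # set, e.g.: micropipenv install --deploy
    echo "pinned"
  elif [ -f "$1/requirements.txt" ]; then
    # Fall back to plain pip for unpinned projects, e.g.:
    #   pip install -r requirements.txt
    echo "unpinned"
  else
    echo "none"
  fi
}
```

In a real assemble script this would run against the application source directory that s2i copies into the builder image.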
**Describe the solution you'd like**