Uninstall/upgrade functionality #4
The options to install, uninstall, upgrade, and run components might all work differently for different sources (Singularity Hub, Singularity recipe, from-source installation, binary download, etc.). Maybe it would be easier to switch from JSON configuration files to bash scripts. What I have in mind is a setup like this:

Anyway, I think both approaches could work; I just wanted to discuss this as an alternative option. If you are interested, I can create a pull request with my suggestion and we can see how it would look before deciding.
Why not both? I entirely understand that some things will have a complicated installation, best left to a script, but for a few common patterns we could have just a simple config. I'm thinking of Singularity images, binary files, and pip packages in particular. Each would have a standard behaviour for install/uninstall/upgrade, which mostly involves just getting the right file. If you notice in the JSON, there is a
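The simple-config idea for the common patterns could look something like the sketch below. This is purely illustrative: the `install`, `type`, and `url` keys are hypothetical names for the sake of discussion, not anything already defined in the project, and the URL is a placeholder.

```json
{
  "name": "Fast Downward",
  "shortname": "fast-downward",
  "install": {
    "type": "singularity",
    "url": "https://example.org/fast-downward.sif"
  }
}
```

Each known `type` (e.g. `singularity`, `binary`, `pip`) would map to a standard install/uninstall/upgrade behaviour, while packages that don't fit any pattern could fall back to a script.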
The advantage I see in having only the scripts is that it would make the main script easier to maintain, and also keep all the logic related to one component in one place, while all the logic for managing the components would be somewhere else. With a mix of both approaches (like

The downside is the possible duplication in the scripts. We could handle that by including utility scripts that offer methods for commonly used patterns, but that would weaken the separation again.

Overall, I'm not completely convinced by either approach. Essentially, what all of this functionality (install, uninstall, upgrade) amounts to is a package manager for planning tools. Maybe it is worth looking into how some of them are designed?
I think this is spot on the money. Who was it that recommended the SMT one? I think that was similarly a package manager of sorts. Can't seem to find it again...
Guillem mentions pySMT, and that indeed looks very related. The focus of the project is a common Python interface to all solvers (which would also be cool, but I think is currently out of scope). Installing pySMT also installs a tool
Oy...some things of note:
Would it make sense to keep track of what is currently installed and which packages were installed manually? Then, if one package is removed, compute the graph of all installed packages with edges for dependencies, and prune everything that was not installed manually and is no longer required.
Maybe just asking the user to run autoremove (or an equivalent command) after uninstall is executed is enough, or always executing this command automatically and letting the user choose [yes] or [no] in the removal dialog.
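The prune-after-uninstall idea above could be sketched roughly like this. This is a hypothetical helper, assuming the manager records which packages were installed manually and each installed package's dependency list (both names and the data layout are illustrative):

```python
def autoremove(installed, manual):
    """Return the set of packages that can be pruned.

    installed: dict mapping package name -> list of dependency names
    manual:    set of package names the user installed explicitly
    """
    # Everything reachable from a manually installed package is still needed.
    needed = set()
    stack = [pkg for pkg in manual if pkg in installed]
    while stack:
        pkg = stack.pop()
        if pkg in needed:
            continue
        needed.add(pkg)
        stack.extend(dep for dep in installed.get(pkg, []) if dep not in needed)
    # Anything not reachable is a candidate for removal.
    return set(installed) - needed
```

For example, with `{"lama": ["fast-downward"], "fast-downward": [], "orphan": []}` installed and only `lama` installed manually, `autoremove` would report `{"orphan"}` as prunable.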
Well, there is the simple

If the script for installing is arbitrary, then how do we keep track of dependencies through that?
I'd argue that the package metadata (the JSON file) is responsible for defining dependencies, and that the package manager is responsible for installing the necessary dependencies. That would mean the script for installing a package can assume its dependencies are already met. I also like the idea of automatically going through the dependency tree of an uninstalled package and asking the user about each one that could be removed.
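A hedged sketch of the manager side of this split: read the `dependencies` list from each package's `manifest.json` and yield packages in an order where dependencies come first, so each install script can rely on its dependencies being met. The function name and layout are illustrative, not existing project code:

```python
import json
import pathlib


def install_order(pkg, packages_dir="packages", seen=None):
    """Yield pkg and its transitive dependencies, dependencies first."""
    seen = set() if seen is None else seen
    if pkg in seen:          # already scheduled (also guards against cycles)
        return
    seen.add(pkg)
    manifest = json.loads(
        (pathlib.Path(packages_dir) / pkg / "manifest.json").read_text())
    for dep in manifest.get("dependencies", []):
        yield from install_order(dep, packages_dir, seen)
    yield pkg
```

With the lama/fast-downward example below, `list(install_order("lama"))` would come out as `["fast-downward", "lama"]`.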
Now we're back to the finer details of privileged access. Should root-required dependencies be allowed in the JSON configuration?
I'm not sure I understand the issue. Are you talking about a package A requiring root access to install and a package B depending on it? If so, here is how I imagine this working:
So then dependencies in the metadata will roughly be like...
Already we have a
I was thinking more like this (assuming, for the sake of example, that Fast Downward needs to be built from a Singularity image, which would require root access):

```shell
$ ls packages
lama
fast-downward
$ ls packages/lama
install
uninstall
run
manifest.json
$ cat packages/lama/manifest.json
{
  "name": "Lama First",
  "shortname": "lama",
  "description": "Coming soon...",
  "dependencies": ["fast-downward"]
}
$ cat packages/lama/install
#!/bin/bash
# nothing to install here (the actual planner comes with the dependency fast-downward)
$ cat packages/fast-downward/install
#!/bin/bash
if [[ -f "$INSTALLED_PACKAGES/fast-downward/fast-downward.sif" ]]; then
    echo "Already installed"
    exit 0
fi
# re-run this script under sudo if we are not root
if [[ $EUID -ne 0 ]]; then
    echo "installation requires root access"
    exec sudo "$0" "$@"
fi
cd "$INSTALLED_PACKAGES"
mkdir fast-downward
cd fast-downward
wget fast-downward.org/get/Singularity
singularity build fast-downward.sif Singularity
```

Alternatively, the same with Python scripts instead of bash scripts, or maybe with common code (like building a Singularity image) factored out into shared Python/bash scripts.
To be clear, the main difference I see to your suggestion is that it would not be necessary to specify the type of the package, or whether it requires root access, in the metadata as part of the list of dependencies. Each package should know how to install/uninstall itself, assuming all dependencies are already installed.
Aye, makes sense. Why build the singularity image instead of fetching?
I just did this to have an example where one component that doesn't need root to install depends on another that does. It makes no sense to actually install Fast Downward this way.
Slowly getting there... #11
`planutils --remove planner`