
Added None check before logging statement in JobManager #195

Merged
merged 1 commit · Dec 15, 2020

Conversation

mojojojo99
Contributor

Addresses some underlying issues that were hinted at in #182:

Changes proposed in this PR:

  • I added a check for whether the pre-allocated VM in jobManager.py is None before attempting the logging. When there are not enough VMs available in the pool, None can be returned; previously this caused an exception to be thrown and the job to be marked Dead.
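The guard described above can be sketched roughly as follows. The names (`assign_preallocated_vm`, `alloc_vm`) are illustrative stand-ins, not the actual jobManager.py API:

```python
import logging

log = logging.getLogger("jobManager")

def assign_preallocated_vm(job_id, alloc_vm):
    """Try to grab a pre-allocated VM, tolerating an empty pool."""
    pre_vm = alloc_vm()  # returns None when no VM is free in the pool
    if pre_vm is not None:
        # Guarded: without the check, logging a None VM's attributes
        # raised an exception and the job was marked Dead.
        log.info("Assigned job %s to pre-allocated VM %s", job_id, pre_vm["id"])
    else:
        log.info("No pre-allocated VM available for job %s", job_id)
    return pre_vm
```

Either way the (possibly None) VM is returned, so the caller can still fall through to the worker-side handling.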

Before merging, please make sure that the changes below are what you intend:

  • In the current setup, an exception is thrown because preVM is None, and as a result several branches of code in the Worker are never exercised. Specifically, the currently unused branch appears to be the case when a worker is started with preVM set to None here. With this fix, the worker's behavior is to initialize a new VM whenever it does not already have a pre-allocated one. This might cause issues and (perhaps) unexpected behavior when Config.REUSE_VMMS is set to true: during a flood of jobs to the autograder, many such VMs might be created by the worker.
  • In addition, because the worker calls initializeVM to immediately create a new VM instance, that VM is not added to the pool and is not tracked anywhere else. Based on my limited understanding, it seems this VM will never be terminated. In the case where ec2ssh is used, for example, this might be undesirable.
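The worker-side behavior described in the two bullets above can be sketched as follows. The class and method names here (`Worker`, `initializeVM`, the dict-based VMs) are illustrative, not the actual worker.py source:

```python
class Worker:
    """Sketch of the previously-unreached branch: a worker started
    without a pre-allocated VM spins up a fresh one on demand."""

    def __init__(self, job_id, vmms, pre_vm, pool):
        self.pool = pool
        if pre_vm is not None:
            self.vm = pre_vm                  # tracked: came from the pool
        else:
            # Fresh VM created directly by the worker. Note that it is
            # NOT added to self.pool and is not recorded anywhere else,
            # so nothing ever reclaims or terminates it.
            self.vm = vmms.initializeVM(job_id)
```

The concern is visible in the else-branch: the new VM never enters the pool, so pool-based cleanup cannot find it.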

Is this the desired behavior of the autograder when there are not enough VMs? The current approach could create many more VMs during peak usage; you might instead consider blocking the job when no VMs are available and waiting for another VM to be freed, rather than always creating new ones.
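The blocking alternative suggested above could be sketched with a thread-safe queue as the VM pool (a sketch under assumed names, not proposed code for the repo):

```python
import queue

def acquire_vm_blocking(pool, timeout=None):
    """Block until a VM is returned to the pool instead of creating a
    new one; raises queue.Empty if the timeout elapses."""
    return pool.get(timeout=timeout)

def release_vm(pool, vm):
    """Return a VM to the pool, waking one blocked acquirer."""
    pool.put(vm)
```

With this shape, a burst of jobs queues up on `pool.get()` and drains as VMs are freed, keeping the total VM count bounded by the pool size.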

To show this problem, I wrote a short test in #191, which might be good as a reference.

@mojojojo99 mojojojo99 changed the title added none check before logging statement Added None check before logging statement in JobManager Dec 15, 2020
@fanpu
Contributor

fanpu commented Dec 15, 2020

Thanks for your PR yet again!

Yes, I agree that the exception gets thrown if preVM is None in jobManager.py, which is possible. However, as you noted, this case is subsequently handled within worker.py.

@fanpu fanpu merged commit db2cef3 into autolab:master Dec 15, 2020
@fanpu fanpu mentioned this pull request Dec 15, 2020