
virsh_setvcpus: Update test #1218

Closed

wants to merge 1 commit

Conversation

jferlan
Contributor

jferlan commented Dec 11, 2013

Update the config/test to follow more "modern" practices, especially
with respect to _domain vs. _vm_ref variables in the cfg file. Found instances
where _vm_ref was being used instead of _options - fixed that too. Added
tests for add and remove when the guest is on (hotswap). This code has
undergone quite a bit of change, and this test was always trying to remove
a vcpu, which has its issues.

Adjust the test flow to use try/finally for the test conditions. Rather than
relying on /etc/libvirt/qemu/%s.xml for running/defined domain output,
use the dumpxml() function with and without --inactive based on whether
the test is adjusting "--config" or not. Adjust the usage of
temporary xml files to use the tmp directory. Use the more commonly
used new_from_inactive_dumpxml() to save/restore the original configuration.
Finally, rather than relying on re.search against the generated xml format
to find the current value, grab that directly from the XML. The issue with the
current code is that a new attribute "placement=" was added and,
as a result, the re.search would never succeed the way it was written.
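
For reference, a minimal sketch of that parsing approach (illustrative only, using plain ElementTree and a direct virsh call rather than the virt-test helpers the test actually uses):

# Pull count/current from the <vcpu> element of "virsh dumpxml" output
# instead of using re.search, so extra attributes such as placement=
# cannot break the match.
import subprocess
import xml.etree.ElementTree as ET

def get_vcpu_values(vm_name, inactive=False):
    cmd = ["virsh", "dumpxml", vm_name]
    if inactive:
        cmd.append("--inactive")
    xml_str = subprocess.check_output(cmd)
    vcpu = ET.fromstring(xml_str).find("vcpu")
    count = int(vcpu.text)
    # "current" defaults to the full count when the attribute is absent
    current = int(vcpu.get("current", count))
    return count, current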

Use TestNAError for remote tests when not configured properly, and also
for the libvirt/qemu error string check. The hotswap/remove of cpus is the
most difficult option.

Issue #1016 is the impetus behind this pile of changes.

# If the orig_current is set, then we are adding CPUs, thus we must
# set the 'current_vcpu' value to something lower than our count in
# order to test that if we start with a current=1 and a count=2 we
# can set our current up to our count
Member

Good explanation! Thanks for your research!

@jferlan
Contributor Author

jferlan commented Dec 11, 2013

I updated the branch, adjusting the code to remove the find/remove of the tmp file and to use .sync().

Didn't even know sync() existed - perhaps it'd be a worthwhile endeavor to update all the tests that use the destroy, undefine, define sequence...
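
For context, a minimal sketch of that sync() pattern (assuming the virttest.libvirt_xml helpers; the domain name is illustrative):

# Save the inactive definition up front, then restore it with one call
# instead of the destroy/undefine/define sequence.
from virttest.libvirt_xml import vm_xml

vm_name = "virt-tests-vm1"  # example domain name
backup_xml = vm_xml.VMXML.new_from_inactive_dumpxml(vm_name)
try:
    pass  # run the test, possibly modifying the domain definition
finally:
    backup_xml.sync()  # redefine the domain from the saved XML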

@jferlan
Contributor Author

jferlan commented Dec 19, 2013

NOTE: Updated 12/19 to merge in latest HEAD

@yumingfei
Member

Hey, @jferlan, I ran the setvcpus testcases, but something went wrong with the clean-up:

17:43:33 ERROR| Traceback (most recent call last):
17:43:33 ERROR|   File "/home/autotest_work/fyu_autotest/client/tests/gren_virt/
virttest/standalone_test.py", line 196, in run_once
17:43:33 ERROR|     run_func(self, params, env)
17:43:33 ERROR|   File "/home/autotest_work/fyu_autotest/client/tests/gren_virt/
libvirt/tests/src/virsh_cmd/domain/virsh_setvcpus.py", line 206, in run
17:43:33 ERROR|     (vcpus_current, vcpus_set, count))
17:43:33 ERROR| TestFail: Failed to remove current=0, set=1, count=1
17:43:33 ERROR| 
17:43:33 ERROR| FAIL type_specific.virsh.setvcpus.normal_test.guest_on.name_opti
on.del -> TestFail: Failed to remove current=0, set=1, count=1

Could you find out the reason?

@yumingfei
Member

BTW, almost all of the failed cases were caused by the above problem:
TOTAL TIME: 357.64 s (05:57)
TESTS PASSED: 15
TESTS FAILED: 11
TESTS SKIPPED: 7
SUCCESS RATE: 57.69 %

@jferlan
Contributor Author

jferlan commented Jan 10, 2014

@yumingfei,

Been busy dealing with libvirt core issues lately - I hope to take a more detailed look next week. I have a feeling the "issues" could be related to the OS and OS version on which the test is run... I ran on Fedora 19 using a recent top-of-tree libvirt and whatever qemu is provided with F19... Of course, since then I've upgraded to F20 with a newer qemu, so I'm half expecting different results...

While the traceback does provide good information - I'm curious what's in the logs before the traceback for any sort of debug information. Maybe there's a nugget there that'll help.

@jferlan
Contributor Author

jferlan commented Jan 15, 2014

@yumingfei - I've looked at the code a bit deeper - beyond what I already documented above, it turns out the qemu code will only support hotplug for machine types "pc-i440fx-1.5" or "pc-q35-1.5" and later. I've added that to the description and added a check for SKIP... I've modified the code that gets the vcpu count/current values to also get the <type... machine=''/type> value. It'll be logged in various errors to help find any other odd pairings. I suspect this test will have some churn.
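
Roughly, the machine-type gate looks like this (a sketch only; the exact regex, helper name, and supported-type list in the test may differ):

# Skip vcpu hotplug sub-tests on machine types qemu cannot hotplug on.
import re
from autotest.client.shared import error

def check_hotplug_machine_type(machine_type):
    # qemu supports vcpu hotplug only on pc-i440fx-1.5 / pc-q35-1.5 and later
    if not re.match(r"pc-(i440fx|q35)-(1\.[5-9]|[2-9]\.\d+)", machine_type):
        raise error.TestNAError("vcpu hotplug unsupported on machine type %s"
                                % machine_type)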

You did not indicate how you ran the tests or had things configured, but I think I've figured out what happened. As noted in my comment above, the normal processing of the test is to have a VM with 2 vcpus and then modify it to have a 'current' of 1 so that the add can succeed. Well, if the configuration of the VM has 1 vcpu originally, then attempting to 'del' from a running VM down to 0 vcpus will fail for what I hope are obvious reasons.

I've put together an update and have pushed it. The update will cause a FAIL if you have 1 vCPU configured on the guest and try to hotplug another. Previously we'd just "change" the guest configuration, which I don't think is the right approach. FAILing because of the configuration in this case is something I can live with.

The error message would be:

19:08:30 ERROR| TestFail: Run failed with right command mtype=pc-i440fx-1.5 stderr=error: invalid argument: requested vcpus is greater than max allowable vcpus for the domain: 2 > 1
19:08:30 ERROR|
19:08:30 ERROR| FAIL type_specific.virsh.setvcpus.normal_test.guest_on.id_option.option_live.add -> TestFail: Run failed with right command mtype=pc-i440fx-1.5 stderr=error: invalid argument: requested vcpus is greater than max allowable vcpus for the domain: 2 > 1
19:08:30 INFO |

Conversely to the "add"/hotplug of a vCPU, for the remove it seems the qemu 1.6 code emits a failure similar to attempting to run the setvcpus command on a 1.4 or earlier qemu, such as:

19:12:48 DEBUG| orig_set=2 orig_current=0 mtype=pc-i440fx-1.5
19:12:48 DEBUG| Running virsh command: setvcpus 481 1  --live
19:12:48 DEBUG| Running '/usr/bin/virsh setvcpus 481 1  --live'
19:12:48 DEBUG| status: 1
19:12:48 DEBUG| stdout:
19:12:48 DEBUG| stderr: error: internal error: cannot change vcpu count of this domain

This type of error I chose to SKIP since it's functionality that's missing. Hopefully this update works for you, and if not, please be sure to leave a few more bread crumbs from the output.
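
Something along these lines (a sketch; the exact stderr strings and messages checked in the real test may differ):

# After running setvcpus, decide between FAIL and SKIP based on stderr.
from autotest.client.shared import error

def check_setvcpus_result(result, mtype):
    if result.exit_status == 0:
        return
    stderr = result.stderr.strip()
    if "cannot change vcpu count of this domain" in stderr:
        # Missing libvirt/qemu functionality, not a test bug -> SKIP
        raise error.TestNAError("setvcpus unsupported: mtype=%s stderr=%s"
                                % (mtype, stderr))
    raise error.TestFail("Run failed with right command mtype=%s stderr=%s"
                         % (mtype, stderr))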

@jferlan
Contributor Author

jferlan commented Jan 23, 2014

Updated/merged with latest sources... including the removal of the remote_* params in setvcpus.cfg.

Would like to see this merged soon...

@yumingfei
Member

Thanks very much for your clear explanation!
The original vcpu count of my VM is 1, as you guessed.

Skipping the live update of vcpus for older qemu is fine with me.
What I wonder about is the way you deal with 1 vs. 2 vcpus on the VM.
In my view, you could specify the VM's vcpu count according to your needs before the testcase runs.
Even if you need to test how it performs under different vcpu counts, expanding the testcase is ok:

variants:
    - test...
variants:
    - vcpu1:
        (only/no xxx)
    - vcpu2:
        (only/no xxx)

I'm afraid people will be confused by the different results for the two conditions.
What do you think about it?

P.S. Just as a check, testcase type_specific.virsh.setvcpus.error_test.invalid_vcpu_count_is_0 always FAILs,
and when I ran the command directly, it failed the same way virt-test did. So it looks like a bug; how does it work on your side?

12:14:46 DEBUG| Running '/usr/bin/virsh setvcpus virt-tests-vm1 0  '
12:14:46 DEBUG| status: 0
12:14:46 DEBUG| stdout: 
12:14:46 DEBUG| stderr: error: Invalid number of virtual CPUs

BTW, since @lmr has started to split virt-test, I'm afraid it shouldn't be merged until he is done. :(

@jferlan
Contributor Author

jferlan commented Jan 28, 2014

AFAIK, virt-test creates a 2 vCPU VM for testing. At least when using "./run -t libvirt --keep-image-between-tests --tests virsh.setvcpus --install --remove". That sets the XML element:

<vcpu placement='static'>2</vcpu>

Which means the VM is supposed to have at most 2 vCPUs. This test seems to have been designed to ensure that if we set the "current" value to 1, e.g.:

  <vcpu current='1' placement='static'>2</vcpu>

then we can dynamically add a vCPU to the guest.

So if I'm reading what you wrote correctly, you want to set the count to 1 and then expect to add? That won't work because the XML would be:

  <vcpu placement='static'>1</vcpu>

And when the VM is running it's not possible to "add" another one since the "max"/"set" value is 1.
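
To make the two cases concrete, a sketch of the live add (assuming the virttest virsh wrapper; the VM name and arguments are illustrative):

# With <vcpu current='1' placement='static'>2</vcpu> the live add succeeds;
# with <vcpu placement='static'>1</vcpu> it fails because the requested
# count exceeds the domain maximum.
from virttest import virsh

result = virsh.setvcpus("virt-tests-vm1", 2, extra="--live",
                        ignore_status=True, debug=True)
if result.exit_status != 0:
    print("live add failed: %s" % result.stderr.strip())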

Note that the number of vCPUs assigned as a maximum for the guest can be greater than the number of physical CPUs on the host. Your performance will eventually suffer as each vCPU is a thread... There is a maximum count, but offhand I don't recall it.

I'm not even sure what you're attempting with the "0" test other than confirming that libvirt doesn't allow a 0. It seems you're trying to make changes to the module that I am making changes to, in order to test "different" functionality or error paths. I'm just trying to fix the existing code...

As for splitting virt-test - I wasn't really aware of what's going on. I haven't been able to keep up lately, and I don't follow the email list for virt-test. If there's too much churn, I'll probably just stop using it and have Pavel handle all the testing aspects...

@lmr
Member

lmr commented Jan 28, 2014

@jferlan, I need you to re-create this PR against

https://github.com/autotest/tp-libvirt

Sorry for the inconvenience.

lmr closed this Jan 28, 2014
jferlan added a commit to jferlan/tp-libvirt that referenced this pull request Feb 11, 2014
This pull request replaces PR 1218 sent to virt-test prior to the split:

autotest/virt-test#1218

There have been no changes made since the last comments in that PR.
jferlan added a commit to jferlan/tp-libvirt that referenced this pull request Feb 17, 2014
This pull request replaces PR 1218 sent to virt-test prior to the split:

autotest/virt-test#1218

There have been no changes made since the last comments in that PR.
jferlan added a commit to jferlan/tp-libvirt that referenced this pull request Feb 19, 2014
This pull request replaces PR 1218 sent to virt-test prior to the split:

autotest/virt-test#1218

There have been no changes made since the last comments in that PR.
@jferlan jferlan deleted the virsh_setvcpus branch February 21, 2014 12:55