Using multiple VM instances with pfring_zc #582
More details for the above issue.

Working instance:

[root@guest4 ~]# ls -l /dev/uio*
[root@guest4 bin]# ./zcount_ipc -i 1 -c 103 -u
Absolute Stats: 0 pkts (0 drops) - 0 bytes
Versions used:
Please run zbalance_ipc with -E (debug mode) and provide the output.
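For reference, a debug run would look something like the following (the interface, cluster id, queue count, and socket paths are simply the ones from the original report, used here as an illustration rather than a recommendation):

./zbalance_ipc -i eth0 -c 100 -n 6 -m 0 -Q /tmp/qmp1,/tmp/qmp2 -E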
Hi Cardigliano,
The zbalance_ipc I have does not have a -E switch, so I collected other information from the pfring driver instead. Thank you.
[Attached outputs, not reproduced in this transcript: driver (pfring.ko) messages on the host after starting zbalance_ipc; details from both the Guest 1 and Guest 2 VMs; Guest 1 running the zcount_ipc program.]
@tsivakca67 sorry I need debug info from zbalance_ipc, it seems you are using an old version, please update.
Thank you for the information. The version I was using was 7.3.0. Let me upgrade to stable-7.6.0 from GitHub and update you on the results.
The output with the -E option:

./zbalance_ipc -i psdeth2,psdeth3 -c 100 -n 4 -E -Q /tmp/qmp1,/tmp/qmp2
[PF_RING-ZC][DEBUG] 98376 1518-byte (1600-byte) buffers requested
23/May/2020 23:07:05 [zbalance_ipc.c:268] =========================
23/May/2020 23:07:31 [zbalance_ipc.c:268] =========================
[the rest of the debug output is not reproduced in this transcript]
I have marked in bold the messages that appeared when zcount_ipc was executed on guest 1 and guest 2.
Hi Alfredo Cardigliano, |
@tsivakca67 it seems that after both VMs have been initialized (notice the "VM initialized successfully" msg), the second VM disconnects the shared memory (notice the "SHM disconnection" msg). The reason is not clear to me atm, I need to debug this (I will try to reproduce this). |
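One way to narrow this down (a sketch, not something prescribed in this thread; it simply follows the pfring.ko host logging mentioned earlier) is to watch the host kernel log while each guest attaches, so the SHM connect/disconnect events can be matched to a specific VM:

dmesg -w | grep -i pf_ring    # follow driver messages on the host while each guest starts zcount_ipc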
Hi Alfredo, |
I had trouble receiving packets on multiple VM instances, so to create a test case I am trying out the sample programs found in examples_zc.
I have spun up 2 identical VMs on the host machine, with the monitor sockets being qmp1 and qmp2 (and the uio and uio_ivshmem.ko modules inserted).
On the host machine I am executing the following command:
./zbalance_ipc -i eth0 -n 6 -m 0 -c 100 -Q /tmp/qmp1,/tmp/qmp2
In the guest VMs I am executing the following commands:
./zcount_ipc -i 0 -c 100 in one VM and
./zcount_ipc -i 1 -c 100 in another VM
I am only able to receive incoming packets in one of the VMs;
on the other VM I get a pfring_zc_ipc_attach_queue error ([No buffer space available] Please check that cluster 100 is running).
I have tried multiple setups with more VMs but find that only one VM instance is working.
The following is what I have observed on the VMs (quick checks are sketched below):
on a working VM instance I find both /dev/uio0 and /dev/uio1 being dynamically created,
whereas on all other VM instances I find only /dev/uio0 being created.
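As a quick comparison between a working and a non-working guest (a sketch assuming a standard uio/ivshmem setup; only the ls check comes from this thread, the other two are generic checks and the exact device/module names may differ):

ls -l /dev/uio*                      # the working guest shows uio0 and uio1, the others only uio0
lspci | grep -i 'shared memory'      # the ivshmem PCI device(s) exposed to the guest
dmesg | grep -i -E 'uio|ivshmem'     # uio / uio_ivshmem probe messages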