Hi
I am able to run mTCP + DPDK fine on my home KVM guest with virtio, but when I run the same mTCP + DPDK app on the minimum ($5/month) DigitalOcean droplet, with the private network interface attached to the DPDK virtio PMD driver, the mTCP app gets killed.
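For anyone trying to reproduce the setup: attaching a virtio NIC to the DPDK PMD generally looks like the sketch below. This assumes a DPDK 2.x tree (where the bind script lives at tools/dpdk_nic_bind.py); the PCI address 0000:00:04.0, the interface name eth1, and the hugepage count are illustrative and need to match your droplet:

# reserve 2 MB hugepages and mount hugetlbfs (keep the count small on a 512 MB droplet)
echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# load the UIO modules and rebind the private interface to igb_uio
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
ifconfig eth1 down
tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0
tools/dpdk_nic_bind.py --status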
Here is my app output:
root@ubuntu-512mb-nyc2-01:~/mtcp# ./apps/example/epwget 10.128.32.188 1
Application configuration:
URL: /
Initializing port 0…
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b11c000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da11c000
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b117000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da117000
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done:
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
Port 0, MAC address: 04:01:DF:1E:D5:02
Checking link status
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
CPU 0: initialization finished.
[mtcp_create_context:1173] CPU 0 is now the master thread.
[CPU 0] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
Killed    <==================== KILLED HERE
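One note on the failure mode: a bare "Killed" with no DPDK error message usually means the kernel OOM killer rather than a crash inside the app, and a 512 MB droplet leaves very little headroom once hugepages are reserved. If that is what happened here, the kernel log will record it; these are generic checks, not anything mTCP-specific:

dmesg | grep -i -E 'out of memory|killed process'
grep Huge /proc/meminfo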
Does DO support binding the private interface to the DPDK virtio PMD driver?
Hi @vli,
Unfortunately at this time we don’t support DPDK. That’s not to say that we’re not investigating it, but for the foreseeable future we won’t have this support exposed in our droplets. Sorry!