Question

Droplet Intel DPDK support

Hi

I can run mTCP + DPDK fine on my home KVM guest with virtio, but when I run the same mTCP + DPDK app on a minimum DO ($5/month) droplet, with the private network interface attached to the DPDK virtio PMD driver, the mTCP app gets killed.
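For reference, this is roughly how I bound the private interface (PCI address 0000:00:04.0, as shown in the log below) before the run. This is a sketch from memory, assuming DPDK 2.x where the helper script is tools/dpdk_nic_bind.py (later releases renamed it dpdk-devbind.py), the igb_uio module built alongside DPDK, and that the private interface shows up as eth1:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko        # UIO-based kernel module shipped with DPDK
ip link set eth1 down                              # interface must be down before rebinding
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py --status           # device should now be listed under DPDK-compatible drivers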

Here is my app output:

root@ubuntu-512mb-nyc2-01:~/mtcp# ./apps/example/epwget 10.128.32.188 1
Application configuration:
URL: /
# of total_flows: 1
# of cores: 1
Concurrency: 0

Loading mtcp configuration from : /etc/mtcp/config/epwget.conf
Loading interface setting
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 1 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x600000 bytes
EAL: Virtual area found at 0x7f88efc00000 (size = 0x600000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f88ef800000 (size = 0x200000)
EAL: Ask a virtual area of 0x13e00000 bytes
EAL: Virtual area found at 0x7f88db800000 (size = 0x13e00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f88db200000 (size = 0x400000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7f88da600000 (size = 0xa00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f88da000000 (size = 0x400000)
EAL: Ask a virtual area of 0xe00000 bytes
EAL: Virtual area found at 0x7f88d9000000 (size = 0xe00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f88d8c00000 (size = 0x200000)
EAL: Requesting 180 pages of size 2MB from socket 0
EAL: TSC frequency is ~2400027 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: open shared lib /root/mtcp/librte_pmd_virtio.so
EAL: Master lcore 0 is ready (tid=f1c8a940;cpuset=[0])
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7f88f0200000
PMD: virtio_read_caps(): [40] skipping non VNDR cap id: 11
PMD: virtio_read_caps(): no modern virtio pci device found.
PMD: vtpci_init(): trying with legacy virtio pci.
EAL: PCI Port IO found start=0xc0c0
PMD: virtio_negotiate_features(): guest_features before negotiate = 100cf8020
PMD: virtio_negotiate_features(): host_features before negotiate = 719fffe7
PMD: virtio_negotiate_features(): features after negotiate = 8f8020
PMD: eth_virtio_dev_init(): PORT MAC: 04:01:DF:1E:D5:02
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b142000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da142000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=1
PMD: eth_virtio_dev_init(): PORT MAC: 04:01:DF:1E:D5:02
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
PMD: virtio_dev_vring_start(): >>
Total number of attached devices: 1
Interface name: dpdk0
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Number of source ip to use: 32
Maximum number of concurrency per core: 10000
Maximum number of preallocated buffers per core: 1000
Receive buffer size: 128
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0

Interfaces:
name: dpdk0, ifindex: 0, hwaddr: 04:01:DF:1E:D5:02, ipaddr: 10.128.6.216, netmask: 255.255.0.0
Number of NIC queues: 1
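For context, the settings above are read from /etc/mtcp/config/epwget.conf. Here is roughly what my file looks like for this run; a sketch whose key names follow the mTCP sample configs (treat the exact keys as assumptions, the values match the output above):

# port and core count
port = dpdk0
num_cores = 1
# maximum concurrency per core
max_concurrency = 10000
# maximum number of preallocated buffers per core
max_num_buffers = 1000
# receive/send buffer sizes of sockets (bytes)
rcvbuf = 128
sndbuf = 1024
# TCP timers (seconds)
tcp_timeout = 30
tcp_timewait = 0
# interface to print statistics for
stat_print = dpdk0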

Loading routing configurations from : /etc/mtcp/config/route.conf
Routes:
Destination: 10.128.0.0/16, Mask: 255.255.0.0, Masked: 10.128.0.0, Route: ifdx-0
Destination: 10.128.0.0/16, Mask: 255.255.0.0, Masked: 10.128.0.0, Route: ifdx-0
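Those routes come from /etc/mtcp/config/route.conf. A sketch of what my file contains, following the format of mTCP's sample route config (the ROUTES count header is from memory and may differ by version):

ROUTES 2
10.128.0.0/16 dpdk0
10.128.0.0/16 dpdk0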

Loading ARP table from : /etc/mtcp/config/arp.conf
ARP Table:
IP addr: 10.128.32.188, dst_hwaddr: 04:01:E1:2F:04:02
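And the static ARP entry is from /etc/mtcp/config/arp.conf. A sketch following mTCP's sample ARP config format (again, the ARP_ENTRY header keyword is an assumption from memory):

ARP_ENTRY 1
10.128.32.188/32 04:01:E1:2F:04:02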

Initializing port 0...
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b11c000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da11c000
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b117000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da117000
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done:
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64 vq->vq_desc_head_idx=0
Port 0, MAC address: 04:01:DF:1E:D5:02

Checking link status
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
CPU 0: initialization finished.
[mtcp_create_context:1173] CPU 0 is now the master thread.
[CPU 0] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
Killed <====================KILLED HERE
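For what it's worth, "Killed" like this usually points at the kernel OOM killer: the run reserves 180 x 2 MB hugepages (~360 MB) on a 512 MB droplet, which leaves very little memory for the process itself. A quick way to confirm on my side (standard Linux commands, nothing mTCP/DPDK specific):

dmesg | grep -i -E 'out of memory|killed process'   # OOM killer log entries, if any
grep Huge /proc/meminfo                             # how much memory the hugepage reservation is holding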

Does DO support binding the private interface to the DPDK virtio PMD driver?


Hi @vli,

Unfortunately at this time we don’t support DPDK. That’s not to say that we’re not investigating it, but for the foreseeable future we won’t have this support exposed in our droplets. Sorry!