Droplet Intel DPDK support

By: vli
May 5, 2016
Networking

Hi

I am able to run mTCP + DPDK on my home KVM guest fine with virtio, but if I run the same mTCP + DPDK app on the minimum DigitalOcean ($5/month) droplet, with the private network interface attached to the DPDK virtio PMD driver, the mTCP app gets killed.
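For context, the hugepage setup and the binding of the private interface (PCI address 0000:00:04.0 in the log below) were done roughly along the lines of the standard DPDK 2.x workflow. This is a sketch rather than the exact commands; the interface name eth1 and the igb_uio module are assumptions that depend on the droplet image and the DPDK build:

# reserve 2MB hugepages (the EAL output below asks for 180 of them) and mount hugetlbfs
echo 180 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# take the private interface down (assumed to be eth1 here) and bind it to a UIO driver
ifconfig eth1 down
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py --status   # 0000:00:04.0 (virtio 1af4:1000) should now show under the DPDK-compatible driver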

Here is my app output:

root@ubuntu-512mb-nyc2-01:~/mtcp# ./apps/example/epwget 10.128.32.188 1
Application configuration:
URL: /
# of total_flows: 1
# of cores: 1
Concurrency: 0

Loading mtcp configuration from : /etc/mtcp/config/epwget.conf
Loading interface setting
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 1 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x600000 bytes
EAL: Virtual area found at 0x7f88efc00000 (size = 0x600000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f88ef800000 (size = 0x200000)
EAL: Ask a virtual area of 0x13e00000 bytes
EAL: Virtual area found at 0x7f88db800000 (size = 0x13e00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f88db200000 (size = 0x400000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7f88da600000 (size = 0xa00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f88da000000 (size = 0x400000)
EAL: Ask a virtual area of 0xe00000 bytes
EAL: Virtual area found at 0x7f88d9000000 (size = 0xe00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f88d8c00000 (size = 0x200000)
EAL: Requesting 180 pages of size 2MB from socket 0
EAL: TSC frequency is ~2400027 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: open shared lib /root/mtcp/librte_pmd_virtio.so
EAL: Master lcore 0 is ready (tid=f1c8a940;cpuset=[0])
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7f88f0200000
PMD: virtio_read_caps(): [40] skipping non VNDR cap id: 11
PMD: virtio_read_caps(): no modern virtio pci device found.
PMD: vtpci_init(): trying with legacy virtio pci.
EAL: PCI Port IO found start=0xc0c0
PMD: virtio_negotiate_features(): guest_features before negotiate = 100cf8020
PMD: virtio_negotiate_features(): host_features before negotiate = 719fffe7
PMD: virtio_negotiate_features(): features after negotiate = 8f8020
PMD: eth_virtio_dev_init(): PORT MAC: 04:01:DF:1E:D5:02
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b142000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da142000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=1
PMD: eth_virtio_dev_init(): PORT MAC: 04:01:DF:1E:D5:02
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
PMD: virtio_dev_vring_start(): >>
Total number of attached devices: 1
Interface name: dpdk0
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Number of source ip to use: 32
Maximum number of concurrency per core: 10000
Maximum number of preallocated buffers per core: 1000
Receive buffer size: 128
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0

NICs to print statistics: dpdk0

Interfaces:
name: dpdk0, ifindex: 0, hwaddr: 04:01:DF:1E:D5:02, ipaddr: 10.128.6.216, netmask: 255.255.0.0

Number of NIC queues: 1

Loading routing configurations from : /etc/mtcp/config/route.conf
Routes:
Destination: 10.128.0.0/16, Mask: 255.255.0.0, Masked: 10.128.0.0, Route: ifdx-0

Destination: 10.128.0.0/16, Mask: 255.255.0.0, Masked: 10.128.0.0, Route: ifdx-0

Loading ARP table from : /etc/mtcp/config/arp.conf
ARP Table:

IP addr: 10.128.32.188, dst_hwaddr: 04:01:E1:2F:04:02

Initializing port 0... PMD: virtio_dev_configure(): configure
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b11c000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da11c000
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): setting up queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x1b117000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f88da117000
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done:
PMD: virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x7f88da144c40 vq = 0x7f88da144c40
PMD: virtio_send_command(): vq->vq_queue_index = 2
PMD: virtio_send_command(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
Port 0, MAC address: 04:01:DF:1E:D5:02

Checking link status PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
done
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is up
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
CPU 0: initialization finished.
[mtcp_create_context:1173] CPU 0 is now the master thread.
[CPU 0] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 0, RX: 0(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps)
Killed <====================KILLED HERE
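One thing I have not ruled out yet is the kernel OOM killer: the EAL output above reserves 180 hugepages of 2MB (360MB) on a 512MB droplet, so a bare "Killed" could simply be memory pressure rather than a virtio problem. A quick check along these lines (standard Linux tools, nothing mTCP-specific) would confirm it either way:

# did the OOM killer terminate epwget? (run right after the app dies)
dmesg | grep -iE 'out of memory|oom|killed process'

# how much memory is left once the hugepages are reserved
grep Huge /proc/meminfo
free -m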

Does DO support binding the private interface to the DPDK virtio PMD driver?

1 Answer

Hi @vli,

Unfortunately at this time we don't support DPDK. That's not to say that we're not investigating it, but for the foreseeable future we won't have this support exposed in our droplets. Sorry!
