Question

Poor performance on FreeBSD

  • Posted February 3, 2015

Hi! I recently created a droplet running FreeBSD, but it seems like the network performance is really, really poor compared to another droplet running Debian (tested by downloading a 1000M file, with both droplets in the same location). It might be an I/O problem and not the network…

Does anyone know what the problem is? I also know that I am not the only one having this problem. Ask if you need more info, and I will provide it.


I wonder, did you ever get this resolved?


This problem still hasn't been solved!

See here… https://www.digitalocean.com/community/tutorials/how-to-configure-and-connect-to-a-private-openvpn-server-on-freebsd-10-1?comment=51349

I’ve had all the same problems.

DO support told me that this was fixed in the ZFS version, but it absolutely wasn't. I've had to go back to Ubuntu now, which is terrible as I'd much rather be running the low-resource-friendly FreeBSD.

No, I did not, but I got a lot of help from the DO engineering team. Just tell me if you want the answers I got from them and I will post them here :)

Please open a ticket with our support team so they can investigate this and help you get it resolved.



To check if it really is a bandwidth issue between the different locations, use a tool like iperf. iperf is a simple client/server tool that shoves data between two hosts while recording the speed.
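For example, a minimal round trip looks something like this (a rough sketch; <other-droplet-ip> is just a placeholder for the second droplet's address):

iperf -s                            # on the droplet acting as server

iperf -c <other-droplet-ip> -t 30   # on the client, pushing data for 30 seconds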

To test the disk I/O, run

diskinfo -t /the-disk-you-want-to-check

on the disk.
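On a DigitalOcean FreeBSD droplet the root disk is usually the virtio block device vtbd0 (an assumption; geom disk list will confirm the name), so in practice that looks like:

diskinfo -t /dev/vtbd0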

Let me know what you find.


Does DigitalOcean plan to do something about this issue?

Hate to post a “me too” message, but same problem here. It seems that the solution must come from the Digital Ocean folks, as I do not believe we can do anything about the virtual hardware and its drivers.

DigitalOcean needs to replace the virtio driver and enable an Intel NIC.

To test this issue I created a couple of droplets in different locations.

I then installed iperf on all nodes.
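On the FreeBSD droplets that is typically just the binary package (an assumption; the port lives under benchmarks/iperf):

pkg install iperf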

Server:

iperf -s -f g

Client:

iperf -i 1 -t 30 -f g -c <server address>

Checking the link media on both droplets:

ifconfig | grep media
	media: Ethernet 10Gbase-T <full-duplex>
	media: Ethernet 10Gbase-T <full-duplex>

Below are the results:

                Interval        Transfer      Bandwidth
IPv4 Private    0.0-30.0 sec    3.18 GBytes   0.91 Gbits/sec
IPv4 Public     0.0-30.0 sec    0.44 GBytes   0.13 Gbits/sec

Notice the speed difference from adding one hop out to the public (WAN) address.
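To repeat that comparison, the same client command is simply pointed at each address in turn (both addresses here are placeholders for the droplet's private and public IPs):

iperf -i 1 -t 30 -f g -c 10.x.x.x       # private interface

iperf -i 1 -t 30 -f g -c 203.0.113.x    # public interface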

Observation tools used: netstat, systat, and systat -ifstat.
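While a test is running, interface throughput can be watched live with something like the following (vtnet0 is an assumption; substitute the droplet's actual interface name):

systat -ifstat 1

netstat -w 1 -I vtnet0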

Standard configuration follows for reference:

root@FBSD2:~ # sysctl hw.model
hw.model: Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
virtio_pci0@pci0:0:3:0:	class=0x020000 card=0x00011af4 chip=0x10001af4 rev=0x00 hdr=0x00
    vendor     = 'Red Hat, Inc'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet
    cap 11[40] = MSI-X supports 3 messages, enabled
                 Table in map 0x14[0x0], PBA in map 0x14[0x800]
root@FBSD1:~ # limits
Resource limits (current):
  cputime              infinity secs
  filesize             infinity kB
  datasize             33554432 kB
  stacksize              524288 kB
  coredumpsize         infinity kB
  memoryuse            infinity kB
  memorylocked         infinity kB
  maxprocesses             3531
  openfiles               14049
  sbsize               infinity bytes
  vmemoryuse           infinity kB
  pseudo-terminals     infinity
  swapuse              infinity kB
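For anyone reproducing the configuration dump above: the sysctl and limits blocks show their commands inline, and the PCI block in the middle is the virtio NIC entry from a full pciconf listing, along these lines:

pciconf -lvc    # -l lists devices, -v adds vendor/device strings, -c adds the capability lines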