dymk 20 hours ago

This is a weird article. It’s titled “how to use vsock”, but 95% of it is about setting up Bazel, gRPC, and a C++ build. The remaining 5% is a link to an off-site Twitter thread of screenshots for setting up a Linux VM image and running it in qemu.

This should have been a VM with a basic server and socat’ing the vsock socket. I don’t know why so much space was dedicated to unrelated topics. Also, there are zero qualifications or benchmarks behind “fast” compared to TCP over virtio.

Author says “no ssh keys” when ssh is an orthogonal concept: sshd can listen on a vsock interface; it’s not specific to TCP/IP.
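
(For reference, one way to actually put sshd on vsock is systemd socket activation, which can bind an AF_VSOCK address and hand each connection to sshd -i. A hypothetical unit pair, assuming a systemd-based guest and systemd’s vsock: socket syntax; the unit names and port are example choices, not from the article:)

```
# sshd-vsock.socket
[Socket]
ListenStream=vsock::22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd-vsock@.service
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
```

(On the host side, ssh then needs a ProxyCommand that speaks vsock, e.g. socat’s VSOCK-CONNECT address type.)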

From the “Under the hood” section, which should be the part actually about vsock:

> I haven’t delved into the low-level system API for vsocks, as frameworks typically abstract this away.
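
For anyone who does want the low-level view: it’s ordinary Berkeley sockets with AF_VSOCK, where addresses are (CID, port) pairs and CID 2 always means the host. A minimal guest-side sketch in Python (port 5000 is an arbitrary choice, not from the article):

```python
import socket

# AF_VSOCK addresses are (CID, port) tuples; CID 2 (VMADDR_CID_HOST)
# always refers to the hypervisor/host from inside a guest.
HOST_CID = socket.VMADDR_CID_HOST
PORT = 5000  # arbitrary example port

def connect_to_host(cid=HOST_CID, port=PORT):
    """Open a stream vsock connection from a guest to the host."""
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.settimeout(2.0)
    s.connect((cid, port))
    return s

if __name__ == "__main__":
    try:
        with connect_to_host() as s:
            s.sendall(b"ping\n")
            print(s.recv(4096))
    except OSError as e:
        # Fails outside a VM, or where the vsock module isn't loaded.
        print("vsock unavailable:", e)
```

There is no name resolution and no routing: the (CID, port) pair is the whole address.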

Veserv a day ago

Says it is fast, but presents zero benchmarks to demonstrate it is actually fast or even “faster”. It is shameful to make up adjectives just to sound cool.

  • rwmj a day ago

    vsock is pretty widely used, and if you're using virtio-vsock it should be reasonably fast. Anyway if you want to do some quick benchmarks and have an existing Linux VM on a libvirt host:

    (1) 'virsh edit' the guest and check it has '<vsock/>' in the <devices> section of the XML.

    (2) On the host:

      $ nbdkit memory 1G --vsock -f
    
    (3) Inside the guest:

      $ nbdinfo 'nbd+vsock://2'
    
    (You should see the size reported as 1G.)

    And then you can try using commands like nbdcopy to copy data into and out of the host RAM disk over vsock, e.g.:

      $ time nbdcopy /dev/urandom 'nbd+vsock://2' -p
      $ time nbdcopy 'nbd+vsock://2' null: -p
    
    On my machine that's copying at a fairly consistent 20 Gbps, but it's going to depend on your hardware.

    To compare it to regular TCP:

      host $ nbdkit memory 1G -f -p 10809
      vm $ time nbdcopy /dev/urandom 'nbd://host' -p
      vm $ time nbdcopy 'nbd://host' null: -p
    
    TCP is about 2.5x faster for me.

    I had to kill the firewall on my host to do the TCP test (trying to reconfigure nft/firewalld was beyond me), which actually points to one advantage of vsock: it bypasses the firewall. It's therefore convenient for things like guest agents where you want them to "just work" without reconfiguration hassle.
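
    For step (1), the device stanza libvirt expects looks like this; you can also pin a fixed CID with <cid auto='no' address='3'/> instead of auto-assignment:

```xml
<devices>
  <vsock model='virtio'>
    <cid auto='yes'/>
  </vsock>
</devices>
```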

    • tuetuopay a day ago

      > It's therefore convenient for things like guest agents where you want them to "just work" without reconfiguration hassle.

      This. The point of vsock is not performance, it's the zero-configuration aspect of them. No IP address plan. No firewall. No DHCP. No nothing. Just a network-like API for guest-host communication for guest agents and configuration agents. Especially useful to fetch a configuration without having a configuration.

      IMHO the "fast" in the original article should be read as "quick to setup", not as "high bandwidth".

    • Veserv a day ago

      Thank you for benchmarking.

      2.5x slower than what they were replacing. Demanding evidence for claims strikes again.

      • rwmj a day ago

        vsock isn't a replacement for TCP, because you can't assume that IP exists or is routable / not firewalled between the guest and the host.

        Having said that, yes, it really ought to be faster too. It's a decent, modern protocol, so there's no inherent reason for the gap; with a bit of tuning somewhere it should be possible to close it.

        • happyPersonR 10 hours ago

          Couldn’t you just use a broadcast address and get the same result?

          • rwmj 5 hours ago

            VMs might not have a network connection at all, or (in a more normal secure configuration) have all their network traffic trunked onto a VLAN that avoids touching the host. Vsock is designed so it can only be used for traffic between the hypervisor/host and guests (or between guests on the same host). It's more akin to virtio or hypercalls than a traditional network.

    • nly a day ago

      Is that a typo? TCP was 2.5x faster?

      I presume this is down to much larger buffers in the TCP stack.

      • rwmj a day ago

        Not a typo & yes quite likely. I haven't tuned nbd/vsock at all.

        Edit: I patched both ends to change SO_SNDBUF and SO_RCVBUF from the default (both 212992) to 4194304, and that made no difference.

    • gpderetta a day ago

      Is nbdcopy actually touching the data on the consumer side, or is it splicing to /dev/null?

      • rwmj a day ago

        It's actually copying the data. Splicing wouldn't be possible (maybe?), since NBD is a client/server protocol.

        The difference between nbdcopy ... /dev/null and nbdcopy ... null: is that in the second case we avoid writing the data anywhere and just throw it away inside nbdcopy.

    • imiric a day ago

      Ah, thanks. That is a much better example than the one in the article.

  • wyldfire 20 hours ago

    It's probably faster than, say, an emulated UART port.

    But likely no faster than a TCP socket across a virtio-net device.

tosti 19 hours ago

My standard solution for doing stuff in a VM is to prepare some kind of storage with downloads, batch jobs, etc. Control it over serial and have the last step be a shutdown. Artifacts can then be found in one of the storage units.

nly a day ago

Given how slow protobufs and gRPC are, I wonder if the socket transport would ever be the bottleneck to throughput here.

Changing transports means that if you want to move your gRPC server process to a different box, you now have new runtime configuration to implement/support and new performance characteristics to test.

I can see some of the security benefits if you are running on one host, but I also don't buy the advantages highlighted at the end of the article about using many different OSes and language environments on a single host. Seems like enabling and micro-optimising chaos instead of trying to tame it.

Particularly in the ops demo: statically linking a C++ gRPC binary and standardising on a host OS and gcc-toolset doesn't seem that hard. On the other hand, if you're using e.g. a Python RPC server, are you even going to be able to feel the impact of switching to vsock?

  • PunchyHamster a day ago

    > Given how slow protobufs and grpc is, I wonder if the socket transport would ever be the bottleneck to throughput here.

    I think this is supposed to be an option for when you want to pass stuff to the host quickly, without writing another device driver or using some other interface, rather than a replacement for arbitrary RPC between VMs. “Being fast” is just a bonus.

    For example, at our job we use a serial port for communication with the VM agent (it just passes some host info about where the VM is running, so our automation system can pick it up); vsock would be an ideal replacement for that.

    And as it is “just a socket”, stuff like this is pretty easy to set up: https://libvirt.org/ssh-proxy.html
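
    A guest-side agent along those lines is only a few lines over vsock; a Python sketch (not our actual agent; the port and payload are made up):

```python
import socket

def serve_once(port=5000):
    """Accept one vsock connection (e.g. from the host) and answer it."""
    srv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    srv.bind((socket.VMADDR_CID_ANY, port))  # reachable from the host side
    srv.listen(1)
    conn, (peer_cid, peer_port) = srv.accept()
    with conn:
        conn.recv(4096)                      # request; contents ignored here
        conn.sendall(b"hypervisor=hv01\n")   # placeholder host-info payload
    srv.close()
```

    No IP address, no DHCP lease, no firewall rule: the guest just listens on a port and the host dials the guest’s CID.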

webdevver 20 hours ago

disappointed to find out "nbd" doesn't mean "No Big Deal"