[GTALUG] Setting up a VM host

Giles Orr gilesorr at gmail.com
Sat Aug 27 08:11:16 EDT 2016


On 26 August 2016 at 21:33, William Park via talk <talk at gtalug.org> wrote:
> On Fri, Aug 26, 2016 at 10:37:37AM -0400, Giles Orr via talk wrote:
>> If I wanted to set up a host for a bunch of headless VMs, what's the
>> OS/Hypervisor to run these days?  I'm doing this out of curiosity and
>> for testing purposes.  I don't exactly have appropriate hardware - an
>> i5 with 16GB of memory - but it should be sufficient to run 5-10 VMs
>> for my very limited purposes (private network, none of the VMs will be
>> public-facing).  QEMU/KVM looks like the best choice for a FOSS
>> advocate?  Other recommendations?  I could particularly use a good
>> HOWTO or tutorial if anyone knows of one.  Thanks.
>
> - QEMU and VirtualBox.  They both use KVM.
> - VirtualBox practically needs no manual.  It's all mouse clicks.  The
>   only time I actually had to read something, was to convert VMDK to VDI
>   format (using VBoxManage on command line in Windows)
> - QEMU requires manpage and shell script to store all the options you
>   discovered. :-)
>
> I'm not sure about "headless".  From memory, it seems to have closer
> association with VirtualBox than with QEMU.
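
(As an aside, for anyone else who hits the VMDK-to-VDI conversion you
mention: it's a one-liner with VBoxManage.  The file names below are
placeholders, and older VirtualBox releases spell the subcommand
"clonehd" rather than "clonemedium" - check your version's manual.)

```shell
#!/bin/sh
# Hypothetical example - file names are placeholders, not real disks.
SRC=old-disk.vmdk
DST=new-disk.vdi
CMD="VBoxManage clonemedium disk $SRC $DST --format VDI"
# Printed rather than run, so you can sanity-check it first:
echo "conversion command: $CMD"
```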

I like VirtualBox and use it heavily.  But it's a Type 2 hypervisor (
https://en.wikipedia.org/wiki/Hypervisor ) and in many circumstances
I've seen it suck up significant amounts of processor time even though
its hosted machine(s) are idle.  So I was hoping for a Type 1
hypervisor.

I had a previous, unsatisfactory experience with KVM - although I
admit I was at the time trying to run fully graphical VMs.  Lennart's
suggestion to skip libvirt and just run shell scripts for QEMU agrees
with my general sentiment and behaviour, but when I attempted that
previously I discovered the string of options needed for QEMU to run
one of the machines I wanted stretched to fill most of an 80x24
terminal.  So that didn't seem like a practical option to me: too high
a barrier to entry.
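
(For anyone considering that route anyway: a wrapper script does tame
the long command line.  This is only a sketch - the image name, memory
size, and user-mode networking below are placeholders, not the machine
I was actually trying to run.)

```shell
#!/bin/sh
# Hypothetical headless-VM wrapper.  Keeping the options in one script
# means you only ever discover them once.  All values are placeholders.
QEMU=qemu-system-x86_64
CMD="$QEMU -enable-kvm -m 2048 -smp 2 \
 -drive file=debian.qcow2,format=qcow2,if=virtio \
 -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
 -display none -serial mon:stdio"
# Printed rather than executed here; uncomment the exec once the
# disk image exists and qemu is installed:
echo "launch command: $CMD"
# exec $CMD
```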

This time out I'm trying Proxmox (thanks Dave!).  It claims client
performance degradation over bare metal is only 1-3% ...  And I had a
client machine up and running in about three quarters of an hour (that
includes the time to install both Proxmox and the Debian client),
almost without reference to their documentation.  That suggests to me
that it's amazingly intuitive for such a high-level piece of software
- although most of the terms and behaviour made sense to me only
because I'm so familiar with VirtualBox.

My primary complaint about Proxmox is that its install disk says "all
your disk is belong to me."  It won't install to a partition, it
simply takes over the entire drive.  I get that it's meant for VMs and
needs space, but it's still unimpressive behaviour.

An important difference I'm noticing between Proxmox and VirtualBox:
while you can see and interact with the VM within the web interface
that Proxmox provides, the assumption appears to be that they'll
mostly be treated as headless.  And to go with this, when you set up a
VM and run a (Debian, in this case) installer, the VM by default has
its own outward-facing (virtual) NIC.  There are options for other
arrangements, but that's the default.  VB's default is that each VM
has a NATed NIC that's not publicly available.  You can change VB's
behaviour, but Proxmox is looking like it has good defaults and good
basic behaviour for what I want now.
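
(If you do want VB to behave more like Proxmox here, switching a VM's
first NIC from NAT to bridged is one VBoxManage call.  The VM name and
host interface below are placeholders.)

```shell
#!/bin/sh
# Hypothetical: "testvm" and "eth0" stand in for your VM name and host
# interface.  Bridging gives the guest its own address on the local
# network instead of hiding it behind NAT.
VM=testvm
IFACE=eth0
CMD="VBoxManage modifyvm $VM --nic1 bridged --bridgeadapter1 $IFACE"
# Printed rather than run - do this with the VM powered off:
echo "to switch to a bridged NIC, run: $CMD"
```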

Thanks everyone.

-- 
Giles
http://www.gilesorr.com/
gilesorr at gmail.com