Xen koans

Ian Petersen ispeters-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Thu Apr 3 15:42:03 UTC 2008


I only had Xen up for a short time, and that was a while ago; then I
decided I wanted the full utility of my nVidia video card, so I got
rid of it.  However, I did go to see a presentation at NewTLUG from
the guy who always organizes them (I can't remember his name, but I
think he's an IBMer--great presenter, too).  Anyway, with those
caveats in mind, here are my answers.

On Thu, Apr 3, 2008 at 10:39 AM, William O'Higgins Witteman
<william.ohiggins-H217xnMUJC0sA/PxXw9srA at public.gmane.org> wrote:
>  How do you direct network traffic to the virtual machine?

Same way you would direct it to any other machine on your network.
IIRC, Xen can configure its guests in two ways: behind a virtual NAT
using the host machine's IP, or as a bunch of peers on the network,
each with its own IP.  In the NAT case, the VMs sit in their own
subnet and the host is their gateway.  In the bridged case, the host
and all the guests have IPs on the host's local network, and the
host's physical network card handles all the traffic.  I think the
default is bridged mode.
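If memory serves, the mode is picked in xend's config file.  Roughly
like this--treat it as a sketch from memory, since the exact paths,
the bridge name (xenbr0), and the guest name are distro-dependent:

```shell
# /etc/xen/xend-config.sxp -- enable ONE of the two pairs below.
# Bridged (the default, I believe): guests appear as peers on the LAN
(network-script network-bridge)
(vif-script vif-bridge)
# NAT: guests sit in a private subnet behind the host's IP
# (network-script network-nat)
# (vif-script vif-nat)

# Then in the guest's config (e.g. /etc/xen/myguest), attach its
# virtual interface to the bridge:
# vif = [ 'bridge=xenbr0' ]
```

Restart xend (or reboot the host) after changing it.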

>  Can the VM share data with the host OS?

When you say "share data", I assume you mean share files in a
filesystem.  If you mean directly, then I _think_ the answer is no,
but you _can_ share data over any of the usual networked filesystems
(Samba, NFS, etc.).
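For instance, exporting a directory from the host over NFS and
mounting it in a guest would look roughly like this (the paths and
addresses are made up for illustration):

```shell
# On the host (dom0): add an export line to /etc/exports, e.g.
#   /srv/shared  192.168.1.0/24(rw,sync,no_subtree_check)
# then tell the NFS server to re-read the export table:
exportfs -ra

# In the guest (assuming the host is 192.168.1.1 on the guests'
# network):
mount -t nfs 192.168.1.1:/srv/shared /mnt/shared
```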

>  How does crash recovery work?

Dunno.

>  Is running virtual machines in Xen (or other) worth the hassle?

That's one of those questions where the answer depends a lot on what
you're doing and why.

I found Xen to be a bigger hassle than, say, VMware Server, because
it was still so raw and, relative to VMware Server, had a very
unpolished UI.  VMware Server is remarkably easy to use (and usually
to install, too), so the hassle there is quite small.  The biggest
benefit I've personally experienced from a virtualized environment
came when a server I was partially responsible for administering was
cracked (I had foolishly left sshd running with an account that had a
weak password, and it was compromised by a dictionary attack).  The
other administrator of that VM also looked after all the other
machines on that network, and he regularly made backups of the
various VM images they had running.  Recovering from the break-in was
a simple matter of restoring the VM image and "rebooting".  It took
about 30 minutes because I wanted to check some things before going
live again, but I think that's faster than a full system recovery
would have been if I had to do it the usual way.

I think people running datacenters usually find virtualization to be
"worth it" because it usually
 - reduces costs for things like hardware, electricity, physical space, etc.
 - allows you to administer an entire server farm from a single
console (including tasks like rebooting, re-installing the OS, etc.)
regardless of guest OS
 - uses the resources of your servers more effectively
 - makes it easier to provide a hot spare for a critical server (if
you have two hosts connected to a SAN, you can usually transfer a VM
from one host to the other with as little as zero down time)
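In Xen's case, if the guest's disk lives on the shared storage and
relocation is enabled on the destination, the transfer is basically a
one-liner (names here are made up; check the xm man page):

```shell
# Live-migrate the guest "myguest" from this host to host2.
# Requires (xend-relocation-server yes) in xend-config.sxp on host2,
# and the guest's disk on storage both hosts can reach (e.g. the SAN).
xm migrate --live myguest host2
```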

On the other hand, virtualization introduces a new variety of
potential performance problems.  I've heard it said that your database
server(s) should be on physical machines with access to real,
physical, fast disks because the IO contention in a virtualized
environment can apparently play havoc with the DB's performance.  I
don't have any personal experience with this matter, but the argument
seems plausible.

I guess my answer to this last question is "it depends".

Ian

-- 
Tired of pop-ups, security holes, and spyware?
Try Firefox: http://www.getfirefox.com
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists