OT: Rogers and Home-based Business Networks?

Jamon Camisso jamon.camisso-H217xnMUJC0sA/PxXw9srA at public.gmane.org
Tue Apr 14 16:47:24 UTC 2009


Marc Lanctot wrote:
> On 14/04/09 12:13 PM, Robert Brockway wrote:
>> On Tue, 14 Apr 2009, Marc Lanctot wrote:
>>
>>> I have never liked the idea of running a server on a virtual OS. It
>>> seems to be the popular thing these days but to me it's always been
>>> just a waste of perfectly good hardware.
>>
>> That's interesting. One of the reasons I'm such a fan of virtualisation
>> is that it makes such good use of perfectly good hardware :)
>>
>> Running a box mostly idle is a waste of hardware and energy, IMHO.
>> Running 50 or 100 virtual boxes provides much better use of the hardware.
>>
>> Advantages include lower purchase costs, lower running costs, and
>> task separation, which is so important in production networks.
> 
> Haha, so I should have post-scripted my previous message with this: I
> can see how it works well for web site serving, because given the
> nature of the usage over time of a web server, yes, everything you
> have up there makes sense. And given that probably 90% of servers are
> doing simple web serving with maybe some DB back-end, I can see why
> virtualization is so hot these days. Maybe it's the smartest way to
> host web sites.

Absolutely. But even for individual high-traffic sites, it still makes 
a lot of sense, given that the performance penalty is about 3-5% (at 
least with Xen, which is what we use at work).

Think fast LVM snapshots for backups. Think moving VPS instances between 
machines, where the only change required is a different IP address in 
the guest.
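
For example, a nightly snapshot backup can be as simple as this (the
volume group, volume, and mount point names are made up for
illustration):

    # take a copy-on-write snapshot of the guest's volume
    lvcreate --snapshot --size 1G --name vps101-snap /dev/vg0/vps101
    # mount it read-only and archive it while the guest keeps running
    mount -o ro /dev/vg0/vps101-snap /mnt/snap
    tar -czf /backup/vps101-$(date +%F).tar.gz -C /mnt/snap .
    umount /mnt/snap
    lvremove -f /dev/vg0/vps101-snap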

The company I work for has customers who started out with 96MB VPSes 
that now run on dedicated dual quad-core machines with 16GB of memory. 
It would be really hard to upgrade so seamlessly across physical 
machines, but with a self-contained snapshot it is as fast as your 
Ethernet connection between hosts.
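
As a rough sketch of that kind of move (hostnames and volume names are
hypothetical, and the guest is shut down first so the volume is
consistent):

    # stream the guest's volume to an identically sized LV on the
    # new host, compressing on the wire
    dd if=/dev/vg0/vps101 bs=4M | gzip -c | \
        ssh newhost 'gunzip -c | dd of=/dev/vg0/vps101 bs=4M'
    # then boot the guest on newhost and change its IP address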

> However, I can assure you that what I plan to do will thrash the
> server's CPU. And there will be a lot of inter-networking between my
> servers going on. So in my case, performance per unit of time is
> quite important. I'm not sure how the networking between servers
> would be affected by virtualization... it depends on the setup,
> probably.

What are you planning? If you're going to be using a lot of CPU, e.g. 
database-related activity, it really makes sense to spend the extra 
money on a dedicated DB machine.

> So my concern is if I only get a percentage of the CPU. Say there are
> 10 other OSes on the same physical machine. Will my processes ever be
> allowed to get more than 1/10th of the CPU power? If so, let's say my
> processes are hogs and take up 80% of the physical machine's CPU at
> all times... will this ever be a problem?

Most companies will offer a semi-dedicated option where you share a 
host with a few other customers, e.g. as many as there are CPUs. That 
way each customer gets at least a single dedicated core. With 
octo-core machines, my company puts 8 customers on a machine with 16GB 
(or more) of memory.
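
With Xen, giving a guest its own core is just a couple of lines in the
domU config; something like this (the file name, core number, and
memory size are illustrative):

    # /etc/xen/vps101.cfg -- one vCPU, pinned to physical core 3
    vcpus  = 1
    cpus   = "3"
    memory = 2048

You can also repin at runtime with "xm vcpu-pin vps101 0 3".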

>>> - Have you ever done any performance tests and compared the results
>>> to an equivalent non-virtual OS on the same hardware? Since I'm
>>> doing more than web
>>
>> Lots of testing has been done as this is a key question. Virtualisation
>> covers many different products with different characteristics so
>> performance testing is very much specific to a particular virtualisation
>> app. You can find a lot about virtualisation performance online.
>>
>> Physical/virtual comparisons also vary a lot based on the workload.
>> Some forms of virtualisation run at close to 100% for CPU but are slow
>> for disk I/O for example.
>>
>> I set up OpenVZ[1] at work and it typically performs at upwards of
>> 97% of the performance of the physical hardware. OpenVZ writes
>> directly to real filesystems, so it doesn't suffer any I/O
>> performance problems.
>>
>> [1] Technically this isn't virtualisation at all, it is jailing, but
>> these two concepts often solve the same problems.
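
(As an aside, an OpenVZ container is roughly this simple to create and
cap; the container ID and template name below are just examples:)

    # create a container from an OS template, cap it at 90% of one
    # CPU, and start it
    vzctl create 101 --ostemplate centos-5-i386-default
    vzctl set 101 --cpulimit 90 --save
    vzctl start 101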
> 
> Do you know what virtualization setup they have at Linode? In 
> particular, will I *ever* be allowed to use more than 1/Xth of the 
> physical CPU?

They use Xen, as do many virtualization providers (at least the ones 
who give customers full root access anyway). The overhead is low, the 
community and support are great, and it allows for static resource 
allocation like fixed memory and CPU caps. Some companies advertise 
2048MB of memory and lots of resources, but you'll find that those are 
burstable amounts and not guaranteed: what happens if two guests both 
get hit with lots of traffic and fight each other for burst memory?
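
To make the difference concrete, in a Xen guest config the guaranteed
and the "burstable" cases look roughly like this (the numbers are
illustrative):

    # guaranteed: the guest always owns the full 2048MB
    memory = 2048
    maxmem = 2048

    # burstable: only 512MB is guaranteed; the balloon driver can
    # grow the guest up to 2048MB when the host has memory free
    memory = 512
    maxmem = 2048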

Like Robert said, there is a ton of information out there about 
virtualization, but I highly recommend going with a company (like mine 
;) that uses Xen.

Jamon