Linux Kernel Network Subsystem Patching

Aruna Hewapathirane aruna.hewapathirane-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Thu Jan 23 04:16:30 UTC 2014


Hi Everybodeee,

I just spent over four hours waiting for a kernel to compile, and from this
thread I am starting to understand that my P4 and 2 GB of RAM may not be the
best hardware to build kernels on.

So this is an urgent cry for help: can anyone please recommend a
motherboard+CPU combo under $100 that will speed things up? My maximum
available budget is $125.00. If anyone has an old motherboard+CPU combo they
no longer need and that has more processing power than mine, please let me
know. I can pick it up.

Question: for reasons unknown, I am unable to get ccache to kick in. ccache
is installed okay and checks out okay, but it never kicks in when I run make.
Any and all advice or guidance is very welcome. What else can one do to cut
down compilation time? And I am still waiting for this to compile ( aaargh ! )
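
For reference, the usual ways of hooking ccache into a kernel build look
roughly like the lines below. This is only a sketch: the /usr/lib/ccache
symlink directory is a Debian/Ubuntu convention, so adjust the path (and the
-j count) for your own setup.

  # confirm ccache is installed and watch the hit counters
  ccache -s

  # option 1: put the ccache compiler symlinks ahead of the real gcc
  export PATH=/usr/lib/ccache:$PATH
  make -j2

  # option 2: tell make explicitly to wrap gcc with ccache
  make CC="ccache gcc" -j2

Either way, a second run of ccache -s should show the hit count going up once
it is actually being used.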

Thanks!

Aruna






On Wed, Jan 22, 2014 at 10:49 PM, D. Hugh Redelmeier <hugh-pmF8o41NoarQT0dZR+AlfA at public.gmane.org>wrote:

> | From: Lennart Sorensen <lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org>
>
> | A thread is a CPU context and stack running within a process's memory
> | space.
> | A software process is one or more threads sharing a memory space.
> | If you avoid things like thread local storage (TLS), then memory is
> | shared between all threads in a process.
>
> True, but in my opinion that isn't the clearest way to explain it to
> people who don't already understand it.
>
> BTW, it is sad that TLS isn't a standard part of POSIX.
>
> Threads are given their own stack space -- otherwise things get REALLY
> crazy.
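
To make the shared-versus-private split concrete, here is a tiny C sketch of
my own (not from the original post): a plain global is visible to every
thread, while a __thread variable (the GCC/clang TLS extension) and anything
on the stack belong to one thread only. Build with: gcc -pthread tls_demo.c

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;        /* one copy, seen by every thread        */
__thread int tls_counter = 0;  /* one copy per thread (compiler TLS)    */

static void *worker(void *arg)
{
    (void)arg;
    int on_my_stack = 0;       /* lives on this thread's own stack      */
    shared_counter++;          /* unsynchronized on purpose: threads can
                                  trip over each other right here       */
    tls_counter++;             /* always prints 1: private to the thread */
    printf("shared=%d  tls=%d  stack var at %p\n",
           shared_counter, tls_counter, (void *)&on_my_stack);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}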
>
> | > Unix Process: essentially a running program.  Remember, you can have
> | > multiple instances of the running program, assuming it is written to
> | > not have multiple instances trip over each other.
> |
> | I would almost have thought that you had to write your program to trip
> | over itself.  Most things should automatically not do so, but then again
> | I am probably thinking of the standard utilities like cat and such which
> | tend to work with files specified on the command line and pipes and such,
> | and hence really can't interfere with other copies unless the user asks
> | for it.
>
> Unix conventions carefully make it easy to write programs that don't
> trip over themselves.  Not so much other systems.
>
> Consider, for example, systems with co-operative multi-tasking:
> everything trips over everything else.
>
> You remember being taught how to create temp files?  Some of that was
> to make sure your program instance's tempfile didn't have the same
> name as another.
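
A quick sketch of the safe way to do that today (my illustration; the file
name is made up): mkstemp() picks a unique name and opens it atomically, so
two instances of the same program cannot end up sharing a temp file.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/demo-XXXXXX";  /* trailing Xs get replaced      */
    int fd = mkstemp(path);            /* unique name, created O_EXCL   */
    if (fd == -1) {
        perror("mkstemp");
        return 1;
    }
    printf("my own private temp file: %s\n", path);
    /* ... write to fd ... */
    close(fd);
    unlink(path);                      /* clean up after ourselves      */
    return 0;
}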
>
> File locking?  All about not tripping over other program instances.
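
And the same idea with a lock file (again just an illustrative sketch, lock
path made up): flock() lets only one instance hold the exclusive lock, so a
second copy started by mistake backs off instead of tripping over the first.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

int main(void)
{
    int fd = open("/tmp/demo.lock", O_CREAT | O_RDWR, 0644);
    if (fd == -1) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {  /* try, don't block      */
        fprintf(stderr, "another instance already holds the lock\n");
        return 1;
    }
    puts("got the lock, doing the real work...");
    sleep(10);      /* pretend to work; the lock drops when we exit     */
    return 0;
}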
>
> Trying to run multiple X display managers?  An unknown amount of
> danger explored in a current (discussion) thread.  Notice the
> suggestion that different users be logged in to each DM (otherwise
> there will be tripping).
>
> Security race conditions: protect yourself from a bad guy tripping
> you!
>
> | > Simultaneous Multi-Threaded: implement multi-core, but with a lot of
> | > shared hardware resources.
>
> I should clarify that: implement it LIKE multi-core...
> This isn't multi-core but the idea is logically similar.
>
> Multi-core shares less than SMT.  But multi-core generally
> does share things: typically some level of cache, perhaps the access
> to the memory bus, sometimes the clock.  Lately, AMD has started
> sharing FPUs, making their multi-core part way along the spectrum
> towards SMT.
>
> | Well, the P4 did SMT with just one core, as do the Atom chips.
>
> Sorry that I didn't make it clear that I was talking of SMT as like
> multi-core.
>
> | SMT takes no time to switch because each thread has its own set of
> | registers.
>
> There are registers and there are registers.  I think SMT switching
> can take time on some implementations (certainly some I can dream up).
> Just like register windows can take time.
>
> | Certainly some interesting research into having compilers generate
> | parallel code automatically from loops and other things in the code.
>
> I think that the normal programming models don't lend themselves to
> fine-grained parallelism, at least not much past what our current
> compilers and out-of-order CPUs manage already.
>
> New languages + new hardware all at once are hard.  You can only
> hope for at most one miracle at a time.
> --
> The Toronto Linux Users Group.      Meetings: http://gtalug.org/
> TLUG requests: Linux topics, No HTML, wrap text below 80 columns
> How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists
>



-- 
*Aruna Hewapathirane*
Consultant/Trainer
Phone : 647-709-9269
Website: Open Source Solutions <http://sahanaya.net/aruna/>




