Linux Kernel Network Subsystem Patching

D. Hugh Redelmeier hugh-pmF8o41NoarQT0dZR+AlfA at public.gmane.org
Thu Jan 23 03:49:09 UTC 2014


| From: Lennart Sorensen <lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org>

| A thread is a CPU context and stack running within a process's memory
| space.
| A software process is one or more threads sharing a memory space.
| If you avoid things like thread local storage (TLS), then memory is
| shared between all threads in a process.

True, but in my opinion that isn't the clearest way to explain it to
people who don't already understand it.

BTW, it is sad that TLS isn't a standard part of POSIX.

Threads are given their own stack space -- otherwise things get REALLY
crazy.
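
To make that concrete, here is a minimal sketch (mine, not Lennart's)
using POSIX threads.  The variable names are just for illustration;
__thread is the compiler-extension TLS that POSIX itself doesn't give
you.  Build with: cc -pthread demo.c

  #include <pthread.h>
  #include <stdio.h>

  static int shared_counter = 0;        /* visible to every thread   */
  static __thread int tls_counter = 0;  /* one copy per thread (TLS) */

  static void *worker(void *arg)
  {
      int on_my_stack = 0;           /* each thread gets its own stack */

      (void)arg;
      shared_counter++;              /* shared: real code wants a mutex */
      tls_counter++;                 /* private: always ends up 1 here  */
      on_my_stack++;
      printf("shared=%d tls=%d stack=%d\n",
             shared_counter, tls_counter, on_my_stack);
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;

      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return 0;
  }

Both threads see the same shared_counter (and would need a mutex to
update it safely), but each one gets its own tls_counter and its own
on_my_stack.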

| > Unix Process: essentially a running program.  Remember, you can have
| > multiple instances of the running program, assuming it is written to
| > not have multiple instances trip over each other.
| 
| I would almost have thought that you had to write your program to trip
| over itself.  Most things should automatically not do so, but then again
| I am probably thinking of the standard utilities like cat and such which
| tend to work with files specified on the command line and pipes and such,
| and hence really can't interfere with other copies unless the user asks
| for it.

Unix conventions carefully make it easy to write programs that don't
trip over themselves.  Other systems, not so much.

Consider, for example, systems with co-operative multi-tasking:
everything trips over everything else.

You remember being taught how to create temp files?  Some of that was
to make sure your program instance's tempfile didn't end up with the
same name as another instance's.
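
For anyone who missed that lesson: the usual answer nowadays is
mkstemp(3), which picks a unique name and creates the file atomically
(O_EXCL, mode 0600) for you.  A quick sketch; the path prefix is just
an example:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      char name[] = "/tmp/demo-XXXXXX";  /* mkstemp fills in the XXXXXX */
      int fd = mkstemp(name);

      if (fd == -1) {
          perror("mkstemp");
          return 1;
      }
      printf("got a unique temp file: %s\n", name);
      /* ... use fd ... */
      unlink(name);                      /* clean up when done */
      close(fd);
      return 0;
  }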

File locking?  All about not tripping over other program instances.
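
A sketch of the usual advisory-lock idiom with flock(2); the lock file
name is just an example, and fcntl(2) locks are the other common way
to do the same thing:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/var/tmp/demo.lock", O_CREAT | O_RDWR, 0644);

      if (fd == -1) { perror("open"); return 1; }

      if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
          fprintf(stderr, "another instance holds the lock; giving up\n");
          return 1;
      }
      /* ... do the work that must not run in two instances at once ... */
      flock(fd, LOCK_UN);
      close(fd);
      return 0;
  }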

Trying to run multiple X display managers?  That's an unknown amount
of danger, explored in a current (discussion) thread.  Notice the
suggestion that a different user be logged in to each DM (otherwise
there will be tripping).

Security race conditions: protect yourself from a bad guy tripping
you!
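
The classic example is the tempfile symlink race: check-then-create
loses if the bad guy slips a symlink in between your check and your
create.  The defence (which is what mkstemp does internally) is
O_CREAT|O_EXCL, so a pre-planted name makes the open fail instead of
being written through.  A sketch, with an illustrative filename:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* O_EXCL: fail with EEXIST if the name already exists -- even as
         a symlink planted by an attacker -- rather than writing
         through it. */
      int fd = open("/tmp/example-output",
                    O_CREAT | O_EXCL | O_WRONLY, 0600);

      if (fd == -1) {
          perror("open");
          return 1;
      }
      /* ... write safely ... */
      close(fd);
      return 0;
  }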

| > Simultaneous Multi-Threaded: implement multi-core, but with a lot of
| > shared hardware resources.

I should clarify that: implement it LIKE multi-core.  SMT isn't
multi-core, but the idea is logically similar.

Multi-core shares less than SMT does.  But multi-core generally does
share things: typically some level of cache, perhaps access to the
memory bus, sometimes the clock.  Lately, AMD has started sharing
FPUs, which puts their multi-core designs part way along the spectrum
towards SMT.
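
Linux exposes that sharing in sysfs, so you can see which logical CPUs
are SMT siblings of one another.  A sketch; the path below is what I
see on kernels of this vintage, but check your own box:

  #include <stdio.h>

  int main(void)
  {
      char buf[256];
      FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/"
                      "thread_siblings_list", "r");

      if (f == NULL) {
          perror("fopen");
          return 1;
      }
      if (fgets(buf, sizeof buf, f) != NULL)
          /* e.g. "0,4" means cpu0 and cpu4 are SMT threads of one core */
          printf("cpu0 shares its core with: %s", buf);
      fclose(f);
      return 0;
  }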

| Well the P4 did SMT with just one core, as do the Atom chips.

Sorry that I didn't make it clear that I was talking of SMT as like
multi-core.

| SMT takes no time to switch because each thread has its own set of
| registers.

There are registers and there are registers.  I think SMT switching
can take time on some implementations (certainly some I can dream up).
Just like register windows can take time.

| Certainly some interesting research into having compilers generate
| parallel code automatically from loops and other things in the code.

I think that the normal programming models don't lend themselves to
fine-grained parallelism, at least not much past what our current
compilers and out-of-order CPUs manage already.
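
To be fair to the compiler people, the loops that auto-parallelization
(or an explicit OpenMP pragma) handles well are the ones with no
cross-iteration dependences.  A sketch of the explicit version; build
with gcc -fopenmp (without the flag the pragma is simply ignored), and
the array names are arbitrary:

  #include <stdio.h>

  #define N 1000000

  static double a[N], b[N], c[N];

  int main(void)
  {
      long i;

      for (i = 0; i < N; i++) {
          a[i] = i;
          b[i] = 2.0 * i;
      }

      /* No iteration depends on another, so the compiler/runtime is
         free to split the range across threads, cores, or SMT
         siblings. */
      #pragma omp parallel for
      for (i = 0; i < N; i++)
          c[i] = a[i] + b[i];

      printf("c[N-1] = %g\n", c[N - 1]);
      return 0;
  }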

New languages + new hardware all at once are hard.  You can only
hope for at most one miracle at a time.
--
The Toronto Linux Users Group.      Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists