PS3-XBOX 360 as Linux Graphics workstations
ted leslie
tleslie-RBVUpeUoHUc at public.gmane.org
Mon Oct 24 23:02:22 UTC 2005
I did some research for profs when I was getting my B.Sc. in CompSci at McMaster
and my M.Math at Waterloo.
Back in the late 80s they thought concurrent processing was important,
because they thought it was impossible for processors to reach the GHz speeds
they do today.
Much software and many techniques were built for concurrency, but essentially they will
not come into their own until CPU speeds really hit a brick wall, which they are
kind of starting to now.
For graphics, ray tracing is a very concurrent process; it's actually child's play
to parallelize, and of course it's important for the graphics workstations of the future.
With the dual cores coming out now, programmers have to be more aware of writing
multi-threaded apps. In the ray-tracing example, one could have a separate
thread for each "ray", but they would all have to access the graphics primitives
in the environment list (but first their bounding boxes).
This means millions of threads trying to grab access to data at once,
and that is a very big bottleneck. Because RAM will be local to each "cell",
it will allow independent local processing. What would be "neat" would be if all
cells could stop and synchronously latch the environment of the 3D objects to RAM
(i.e. one set of transfers, not millions of them), and each cell could then go on to
process its ray via lighting models and geometry bounces.
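That "latch the environment once, then work locally" idea can be sketched with plain
Java threads (a toy sketch, not Cell/SPE code; the scene contents, ray layout, and
hit test below are all invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RaySnapshotDemo {
    // A "primitive" reduced to its bounding sphere: centre height and radius.
    record Sphere(double y, double r) {}

    // The shared environment list that millions of rays would otherwise
    // hammer concurrently.
    static final List<Sphere> scene =
        new ArrayList<>(List.of(new Sphere(0.0, 1.0), new Sphere(5.0, 0.5)));

    static long render(int nThreads, int raysPerThread) throws InterruptedException {
        AtomicInteger hits = new AtomicInteger();
        List<Thread> workers = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers.add(new Thread(() -> {
                // "Latch" the environment ONCE per worker: one transfer into
                // local memory, instead of one shared access per ray.
                List<Sphere> local = List.copyOf(scene);
                for (int i = 0; i < raysPerThread; i++) {
                    double rayY = id + i * 0.25;  // deterministic ray origin
                    for (Sphere s : local) {
                        if (Math.abs(rayY - s.y) <= s.r) {
                            hits.incrementAndGet();
                            break;  // bounding test passed; a real tracer
                                    // would now shade and bounce the ray
                        }
                    }
                }
            }));
        }
        for (Thread w : workers) w.start();
        for (Thread w : workers) w.join();
        return hits.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("hits=" + render(4, 8));  // prints hits=8
    }
}
```

Each worker touches the shared list exactly once, so the contention grows with the
number of workers, not the number of rays.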
But for making use of multi-processors, as I said there are languages out there;
in fact Microsoft has an add-on to C# that it is spec'ing out, with a very early
beta, I believe. Mono, I think, is totally on top of this, and might be just as far along.
You will probably see a good bit more movement in concurrent processing on the
MS .NET C# platform and on the Mono platform in the next few years.
One such project is Comega, an extension of C# for wide-area concurrency;
below is a blurb about it:
============================================
Cω is a research programming language. It is pronounced "C omega".
It can be written (and searched for) as Cw or the "Comega language".
A Cω compiler preview is now available for download. [ The latest release no longer requires Visual Studio .NET 2003 to be installed on the target machine!] You can also browse the documentation online.
Read about Cω on MSDN!
Read about Cω in Infoworld!
Cω is an extension of C# in two areas:
- A control flow extension for asynchronous wide-area concurrency (formerly known as Polyphonic C#):
Modern Concurrency Abstractions for C#. Nick Benton, Luca Cardelli, Cedric Fournet.
©2004 ACM [ PDF] (Revised version.) To appear in TOPLAS.
©2002 Springer [ PDF] In: Boris Magnusson, Editor: ECOOP 2002 - Object-Oriented Programming, 16th European Conference, Malaga, Spain, June 10-14 2002, Proceedings. Lecture Notes in Computer Science 2374, Springer, 2002. ISBN 3-540-43759-2. pp. 415-440.
- A data type extension for XML and table manipulation (formerly known as Xen and as X#):
The essence of data access in Cω. Gavin Bierman, Erik Meijer, and Wolfram Schulte.
Accepted for publication at ECOOP 2005
Programming with Rectangles, Triangles, and Circles. Gavin Bierman, Erik Meijer, and Wolfram Schulte.
©2004 XMLconference [ HTML] In Proc. XML 2003.
Unifying Tables, Objects and Documents. Erik Meijer, Wolfram Schulte and Gavin Bierman.
[ PDF] Updated version to appear.
In Proc. DP-COOL 2003.
Reasons why these kinds of extensions (and possibly more) are related are described in this talk:
Transitions in Programming Models. Luca Cardelli.
[ PDF] New University of Lisbon, November 13, 2003.
Project Members
Nick Benton
Gavin Bierman
Luca Cardelli
Erik Meijer
Claudio Russo
Wolfram Schulte
=============================================================================================
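For a taste of what the Polyphonic C# half of Cω is about: a "chord" body runs only
once a message has arrived on every channel it joins. Here is a rough Java sketch of
that idea using blocking queues (the channel names and payloads are invented, and this
is not Cω syntax; Cω declares chords directly instead of using explicit queues):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ChordSketch {
    // Two asynchronous "channels". In Cω these would be async methods;
    // here each is just a queue of pending messages.
    private final BlockingQueue<Integer> nums = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> tags = new LinkedBlockingQueue<>();

    // Asynchronous sends: enqueue the message and return immediately.
    public void num(int x)    { nums.add(x); }
    public void tag(String s) { tags.add(s); }

    // The "chord": it can only complete once BOTH channels hold a pending
    // message, consuming one message from each.
    public String join() throws InterruptedException {
        int x = nums.take();
        String s = tags.take();
        return s + "=" + x;
    }

    public static void main(String[] args) throws InterruptedException {
        ChordSketch c = new ChordSketch();
        c.num(7);       // nothing can run yet: the chord is incomplete
        c.tag("rays");  // now both halves of the pattern are present
        System.out.println(c.join());  // prints rays=7
    }
}
```

One honest limitation of the sketch: a real chord fires its body automatically when
the pattern completes, whereas here a consumer has to call join() and block, which
is about as close as plain Java gets without a scheduler.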
On Mon, 24 Oct 2005 23:58:23 +0200 (IST)
Peter <plp-ysDPMY98cNQDDBjDh4tngg at public.gmane.org> wrote:
>
> On Mon, 24 Oct 2005, Lennart Sorensen wrote:
>
> >> What would be very cool would be to generate libraries or such that
> >> can automate the offloading of computational effort.
> >
> > If you could come up with a generic automated way to make programs
> > multithreaded, you would probably become very rich.
>
> PI calculus is said to be a formal model for representing such programs.
> It can be very hairy, and I know very little about it, but I know that
> f.ex. MUD role-playing games can be split across several CPUs relatively
> easily because each player/node only sees a small part of the world at
> any one time, and because the tasks to be done at each cpu are clearly
> defined beforehand.
>
> I have some experience with remotely synchronized state machines
> (hardware and software) and I can say that it gets hairy very fast. The
> telecom industry has a lot of experience with such networks and I think
> that they have funded a large part of the research for parallel
> computing efforts. The other large contributor was transputer research.
>
> The act of parsing a program for possibly parallel tasks and splitting
> them across available nodes for execution is very much like printed
> circuit layout algorithms (like a travelling salesman algorithm but with
> more interesting constraints and in N dimensions). The act of parsing
> such a program can be harder than actually running it afterwards, and
> proving that the split/spread is optimal is even worse.
>
> Peter
--
The Toronto Linux Users Group. Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml