Clarke Nattress's CIS 101 Page - First Draft
Term Paper

 

OSCAR meets Beowulf

Yes, I know, it sounds like a very bad "B" picture that might possibly be a remake of an early horror flick - or worse yet, a Saturday morning cartoon of superheroes and old Norse gods. Actually, while trying (desperately) to find a topic for my term paper, I decided to follow up on one of my pet "geek" projects - building a supercomputer without needing millions of dollars, or even a few million extra brain cells to learn how to run it. Throughout this whole course I have tried to find a topic that REALLY interests me within this subject matter, yet is not too far afield or too vague to tie back in to what we have been learning. The one thread (now there is a computing term) that can bring this all together is most likely Moore's Law - very roughly stating that about every 18 to 24 months the number of transistors on a chip, and with it computing power, doubles.
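To put rough numbers to that claim, here is a small, purely illustrative C sketch (my own, not from any of the sources below; it assumes one doubling per 18-month period, which is only one common reading of the law) that compounds the doublings:

/* moore.c - hypothetical illustration of compounding Moore's Law doublings.
 * Assumes one doubling every 18 months; the exact period is debatable. */
#include <stdio.h>

int main(void)
{
    double relative_power = 1.0;            /* today's power, normalized to 1 */
    int months;

    for (months = 18; months <= 180; months += 18) {
        relative_power *= 2.0;              /* one doubling per period */
        printf("after %3d months: %5.0fx today's power\n", months, relative_power);
    }
    return 0;
}

Ten such periods (15 years) works out to roughly a thousandfold increase - the kind of curve the rest of this paper argues you no longer have to wait for.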

My contention, however, is that since the advent of 32-bit processing - mostly here I am referring to the x86 architecture - the actual "on-die" improvements in processors are really irrelevant, at least if raw power is what you are after. Parallel processing has already shown that even with junk like old 386 and 486 desktop computers (the Stone SouperComputer being a case in point), anyone with the room, time, and patience can build a supercomputer. Five or more years ago this might have been only for government agencies like NASA, which had the expertise - or could hire it - but now, with software such as OSCAR, PVM, MPI, and Beowulf (one of the first projects to implement this), even high school science students are setting up clusters with existing hardware.

Probably the best way to understand what these "massively parallel" machines are is bound up in that name Beowulf. The concept is that you hook a bunch (a cluster) of computers together - a network interface card (NIC) or two in each machine, all connected to a switch - with every machine running the same operating system (in this case, either Unix or Linux). Then you take one of these systems - or one similar to them - and elect it to be the head node, the god of them all. Load the Beowulf software on that system, and it tells all the other computers in the cluster what to do. There is obviously a little more to it than this, but that is the gist of the situation. Scyld, which develops Beowulf clustering software commercially, will write proprietary software for any type of processing, but also sells (for less than $6 with shipping) a demo disk with which you can set up a running cluster and try some distributed computing. The real difficulty with parallel computing is getting programs written and compiled so that they divide their computing tasks up amongst many different CPUs - on a cluster this means passing messages between nodes, not just multi-threading on one machine - rather than attempting the whole job on a single processor. Here again, though, I feel that as time goes on there will not only be inexpensive or free clustering software, but also, through the Open Source community, more and more distributed computing programs for many applications.
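To give a flavor of what "dividing the work up" actually looks like, here is a minimal, hypothetical MPI sketch in C (my own illustration, not Scyld's or OSCAR's code): every process sums its own slice of the numbers 1 through N, and the head node collects the partial results.

/* sum_mpi.c - a minimal sketch of message-passing parallelism on a cluster.
 * Each process (ideally one per CPU) sums its own share of 1..N, and
 * MPI_Reduce combines the partial sums on rank 0 (the head node). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const long N = 100000000L;              /* total amount of "work"      */
    long i;
    double local_sum = 0.0, total = 0.0;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in all?  */

    /* Each rank takes every size-th number, so the work is split evenly. */
    for (i = rank + 1; i <= N; i += size)
        local_sum += (double)i;

    /* Send the partial sums back to the head node and add them up there. */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%ld = %.0f (computed by %d processes)\n", N, total, size);

    MPI_Finalize();
    return 0;
}

The same binary runs on every node in the cluster; nothing in the code cares whether there are 2 CPUs or 200.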

While it is true that most of the population doesn't need - or desire - this type of computational speed (especially just to surf the Web or play FreeCell), for those who might want to simulate a celestial event, or maybe the sinking of the Titanic (which is reportedly how it was done for the movie, on a cluster of around 150 machines), this technology may be for you. Another idea, for those who have the room and a few old monitors - actually QUITE a few - is a video wall to present your new 3D animation.

So as not to get away from my original point: I believe that if someone wants or needs much greater computing power now, there is no reason to wait for another "cycle" of growth in processing speed, or for "superconducting" chips to overcome heat build-up, and so on. Right now - today - you can take "old" technology that in many cases is piled high in a storeroom waiting for the junk man, and with the right software, network cards, cabling, and switches, you have a supercomputer. If that still isn't enough, you add another 10 or 100 nodes to your Parallel Virtual Machine. I suspect it will be years before a desktop PC is as powerful as 24 Pentium IIs running as one virtual machine. Better yet, just peruse the supply of single- and multi-processor DEC Alpha, Sun SPARC, and Silicon Graphics servers that are no longer top-of-the-line minicomputers and are going for cents on the dollar on eBay. Many of them are cheaper than an entry-level PC, and much more powerful.
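As a rough sketch of how little "adding another 10 or 100 nodes" involves - the host names below are made up, and the exact launcher flags vary between MPI implementations such as MPICH and LAM - growing the cluster is mostly a matter of listing more machines and asking for more processes:

# hosts - one line per machine in the cluster (hypothetical names)
node01
node02
node03
node04
# ...add a line here for every box you rescue from the storeroom

# Run the sum_mpi sketch from earlier across 24 processes spread over
# the machines listed in the hosts file (flag spelling differs by MPI
# implementation; this is the MPICH-style form).
mpirun -np 24 -machinefile hosts ./sum_mpi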

In conclusion, my belief is that Moore, if anything, understated things - with clustering, you don't wait for the next doubling in chip technology; you multiply your computing power today simply by adding nodes. And, much like that "Stone Soup" story, with just a rock (or "boat anchor," as some people term old PCs) and a little help from "donations" of other old equipment, you too can build a "souper" computer.

 

Links:
http://www.cs.rit.edu/~ncs/parallel.html
http://wotug.ukc.ac.uk/parallel/
http://newsforge.com/article.pl?sid=01/01/25/2330211&mode=nocomment

History of parallel computing:
http://ei.cs.vt.edu/~history/Parallel.html
http://www.cacr.caltech.edu/Contact/other_sites.html
http://www-2.cs.cmu.edu/~scandal/research-groups.html

Commercial clusters:
http://www.pssclabs.com/
http://www.aspsys.com/clusters/

How to do it (PDF file):
http://www.rocksclusters.org/rocks-documentation/2.3/papers/clusters2002-rocks.pdf

Print Sources:

Cluster Computing: Linux Taken to the Extreme. F. M. Hoffman and W. W. Hargrove in Linux Magazine, Vol. 1, No. 1, pages 56–59; Spring 1999.

Using Multivariate Clustering to Characterize Ecoregion Borders. W. W. Hargrove and F. M. Hoffman in Computers in Science and Engineering, Vol. 1, No. 4, pages 18–25; July/August 1999.

How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. Edited by T. Sterling, J. Salmon, D. J. Becker and D. F. Savarese. MIT Press, 1999.
