> Tancredi, when will they be available?
Vitaly, the buds are available now, provided your login environment is
set up as in cvsdir/setup.csh. You now have access to the raw data from
the 24+8 machines. You can run Monte Carlo from any such machine. I have
not compiled libBlast on the buds, but you do not need that.
I am running both nsed and a Monte Carlo simulation on the buds as we
speak. Both work just fine.
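For reference, a minimal pre-flight check along the lines of the above
-- the cvsdir location under $HOME is an assumption here, so substitute
wherever your checkout actually lives:

```shell
# Hedged sketch: verify the setup script is reachable before launching
# jobs on a bud node. $HOME/cvsdir is an assumed location.
SETUP="$HOME/cvsdir/setup.csh"
if [ -r "$SETUP" ]; then
    STATUS=ok
else
    STATUS=missing
fi
echo "$STATUS: $SETUP"
```

In a csh/tcsh login shell you would then simply `source` the script,
e.g. from your .login, so every interactive and batch session picks up
the same environment.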
> Also, how do we tell a scratch disk on spud1 from a scratch disk on spud5?
They would be called /net/data/1/scratch and /net/data/5/scratch, as
they have been for the last 3 years. So I really do not understand your
question.
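To spell the naming scheme out (a trivial sketch; spud1-spud8 per the
cluster description quoted below):

```shell
# Each spudN exports its scratch area under the same automount path,
# so telling them apart is just a matter of the node number:
paths=""
for n in 1 2 3 4 5 6 7 8; do
    p="/net/data/$n/scratch"
    paths="$paths $p"
    echo "$p"
done
```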
> Also, there is an important issue of backups. Are the DVD writers
> functional? If so we can back up our own stuff occasionally.
1) The purpose of backups is not to save generated or crunched data.
   That you will do yourself, see below.
2) You back up home directories, input files, and source code, not the
   mere product of CPU time. This is of course the daily backup of
   /home/blast/blast by Ernie B.
3) The next priority is to back up raw data. We have kept two copies of
   each run on the spud RAID arrays. We are now in the process of doing
   a tape backup (to the tune of 1.5 TB) to free up more space.
4) There is no practical way to back up ~10 user directories (10s to
   100s of GB of data) on a periodic basis. Obviously we can't back up
   only your stuff either, right?
5) I have no simple instructions/advice to give about DVD writing at
   this time other than "man dvdrecord" and the web.
6) Ernie B and I are working on DVD writing, with the goal of
   automating it and managing the data flow. BTW, I am trying to import
   the K3b utility and merge in some decent labelling system.
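For the impatient, here is a hedged sketch of a one-off burn along the
lines of "man dvdrecord". The run label, source path, device address,
and speed below are all assumptions, and the two commands are echoed
rather than executed so nothing gets written by accident:

```shell
# Hypothetical run label and source directory -- substitute your own.
RUN=run1234
SRC="/net/data/1/scratch/$RUN"
ISO="/tmp/$RUN.iso"

# 1) build an ISO9660 image, using the run label as the volume ID so
#    the disc can be identified later (a poor man's labelling system)
echo mkisofs -r -J -V "$RUN" -o "$ISO" "$SRC"
# 2) burn the image disc-at-once; get dev= from `dvdrecord -scanbus`
echo dvdrecord -dao dev=0,0,0 speed=2 "$ISO"
```

Drop the leading `echo`s to actually run the commands once you have
confirmed the device address on your machine.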
Note: at this rate the spud disks will be just enough (note, with two
copies of each run). Clearly, making the raw data safe is a priority.
>
>
>
> Tancredi Botto wrote:
>
> >Hello a third time,
> >I just wanted to remind you that the bud/spud cluster is ready
> >for intensive usage, including extensive Monte Carlo simulations,
> >and is configured as follows (see attachments)
> >
> >
> >_ From each node bud02-bud24 you can access all the spud1-8 disks
> >  that typically contain raw data. In addition, the bud24 scratch is
> >  also cross-mounted for those cases where you need to share an
> >  output file.
> >
> >_ Each node has a 3 GHz CPU and 3 GB total available memory. They
> >  should be much faster both for crunching and ROOT sessions. Note
> >  the spuds have a 1.5 GHz CPU and 2 GB memory total.
> >
> >_ All students get the privilege of having one such machine
> >  *exclusively* available to them. That also means ~100-120 GB of
> >  scratch space which can be used for analysis results.
> >
> >_ All other machines are to be shared as is, but it would be
> >  courteous if you did not use more than 6 CPUs at any given time
> >  without negotiating.
> >
> >_ MOVE *ALL* YOUR DATA TO YOUR SCRATCH AREA !
> >
> >_ The spuds are to be used primarily for online data analysis and
> >  by the remaining users.
> >
> >regards,
> >-- tancredi
> >
> >________________________________________________________________________________
> >Tancredi Botto, phone: +1-617-253-9204 mobile: +1-978-490-4124
> >research scientist MIT/Bates, 21 Manning Av Middleton MA, 01949
> >^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> >
> >
> >------------------------------------------------------------------------
> >
> >
> >Bud node   person
> >
> >01 vitaly z
> >02 nikolas m
> >03 aaron m
> >04 peter k
> >05 adrian s
> >07 tavi f
> >08 eugene g
> >09 yuan x
> >10 chi z
> >11 chris c
> >12 ben c
> >13 jason s
> >
> >
> >sorry, I did not know what your favourite
> >numbers are. The list is non-negotiable...
> >
> >
> >
>
>
This archive was generated by hypermail 2.1.2 : Mon Feb 24 2014 - 14:07:30 EST