Re: [BLAST_ANAWARE] charge file feature moving out of v3 library

From: zhangchi (zhangchi@general.lns.mit.edu)
Date: Sun Dec 21 2003 - 16:06:27 EST


On Sun, 21 Dec 2003, Tancredi Botto wrote:

> > I want to change it so that one can start crunching right after the run is
> > started.
>
> But how do you know a run is good before it has ended? I have no
> objections however. And I'll provide a script for that. I think it'd be
> more interesting to see how to make the most of the coda data file, which was
> probably good before a run crash.

Em, after Chris showed me how to save the ntuples before the whole run is
crunched, the ideal situation I have in mind is that the crunch does not have
to wait for the run to end (we can do that right now with 2.96), and analysis
does not have to wait for the crunch to end (the new version does that). In
that case, analysis (including some basic asymmetry analysis) could be almost
real time.

>
> et_crash: should be no data anywhere, so should be easy
> roc_disconnect: if charge scaler stays inhibited should be fine
>
> but I have not poked into those runs. hv crash? other crashes? I also
> don't know what to do there.

>
> I can imagine. Maybe you keep the charge files and charge.C will just
> update a database. Fine with me. There will be two programs to update, but
> hey, we love to work harder. But isn't the advantage of the dst that all the
> info you need is in it, and you don't have to walk away with 4000 or so chg
> files? Maybe one day I'll understand.....

That is the idea I am trying to realize. But it is also true that "full
statistics" will come only after hours of number crunching. I want to make
the change also because it could work with the futuristic crunch server
Tim and Chris mentioned. With enough computing power, it would make
cooking real time, although I do not see that happening within a short period
of time.

>
> more importantly, for the sake of clarity: of course you can't select data
> based on physics event numbers, because we do not measure the charge
> between any two physics events. We only measure the charge over one scaler
> read/clear cycle (typically 1 sec), and you can't break it up into smaller
> intervals than that.
>
> step1 is to throw away fills, and play it safe. Most of the work is done
> there.
> step2 is to throw away scaler cycles (and the one before/after it), but
> then of course you are synchronization dependent, and sure the roc's
> have ntp running, but... it may be a lot of overhead we don't want.

The scaler cycle is the unit, I think. Synchronization is done in the new
lib, largely following the scheme Doug proposed. The good news is that it does
not incur perceptible overhead.

Each physics event in the DST has an epics event number and a scaler event
number. These numbers point to the last epics/scaler events that precede the
physics event in time.
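
In code terms the association could look roughly like this (a minimal sketch
with made-up names, not the actual DST layout):

  // one scaler read/clear cycle (~1 s): the smallest unit over which
  // the charge is known
  struct ScalerEvent  { int index; double time; double charge; };

  // one epics readout
  struct EpicsEvent   { int index; double time; };

  // a physics event remembers the last epics/scaler events before it
  struct PhysicsEvent {
    double time;
    int    epicsIndex;
    int    scalerIndex;
  };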

Checking whether the time difference between a physics event and its
associated epics/scaler event is larger than 1 second will tell us whether the
epics/scaler servers are running as they are supposed to.
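
The check itself is trivial; a minimal sketch, assuming we already have the
three timestamps at hand (variable names are made up):

  #include <cmath>

  // true if both the epics and the scaler event associated with a physics
  // event are no more than ~1 scaler cycle (1 s) older than the event
  bool serversAlive(double tPhysics, double tEpics, double tScaler,
                    double maxGap = 1.0 /* seconds */)
  {
    return std::fabs(tPhysics - tEpics)  <= maxGap &&
           std::fabs(tPhysics - tScaler) <= maxGap;
  }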

>
> I'll be more than satisfied if we can do step1. I think we will never have
> to do step2; if we have a problem, it is likely to last several seconds anyway.
> The cure there is to take "good data" instead of taking "any data" and trying
> to patch it up later.
> And I think that it'd be easier to cut on flags and indicators (instead of
> (fill>23&&fill<50)||(fill==51)||(fill>53...), do beam.quality>1).

When you open a dst and try to select events and get the right charge
for those selected events, I do not see any way other than looping through it
event by event. Once Draw() is ruled out, something like

  bool cut() { return (fill>23 && fill<50) || (fill==51) || (fill>53...); }

is the choice.
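
To make that concrete, the loop I have in mind would look roughly like this
(container and branch names are invented; scaler cycles of a selected fill
that happen to contain no physics event would still need separate handling):

  #include <cstddef>
  #include <set>
  #include <vector>

  // hypothetical flat arrays filled from the dst, one entry per physics event
  struct DstArrays {
    std::vector<int> fill;         // fill number of each physics event
    std::vector<int> scalerIndex;  // associated scaler cycle of each event
  };

  bool cut(int fill) { return (fill > 23 && fill < 50) || (fill == 51); }

  // total charge of the scaler cycles touched by the selected events
  double selectedCharge(const DstArrays& dst,
                        const std::vector<double>& cycleCharge)
  {
    std::set<int> cycles;
    for (std::size_t i = 0; i < dst.fill.size(); ++i)
      if (cut(dst.fill[i]))
        cycles.insert(dst.scalerIndex[i]);

    double q = 0.0;
    for (std::set<int>::const_iterator it = cycles.begin();
         it != cycles.end(); ++it)
      q += cycleCharge[*it];
    return q;
  }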

I can experiment a little with TTreePlayer, so that it parses TStrings and
generates such functions.
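
One way this could go (not necessarily through TTreePlayer itself) is ROOT's
TTreeFormula, the class Draw() uses internally to parse selection strings.
A rough sketch, where the file and tree names are only placeholders:

  #include "TFile.h"
  #include "TTree.h"
  #include "TTreeFormula.h"

  void selectWithStringCut(const char* fname, const char* cutExpr)
  {
    TFile f(fname);
    TTree* t = (TTree*)f.Get("dst");       // placeholder tree name
    TTreeFormula cut("cut", cutExpr, t);   // e.g. "(fill>23&&fill<50)||fill==51"

    Long64_t n = t->GetEntries();
    for (Long64_t i = 0; i < n; ++i) {
      t->GetEntry(i);
      if (cut.EvalInstance() != 0) {
        // event passes the cut: pick up its scaler cycle, fill histograms, ...
      }
    }
  }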

Chi


