Using Multiple RISC CPUs in Parallel to Study Charm Quarks
Authors: C. Stoughton (Fermilab) and D. J. Summers (University of Mississippi-Oxford)
Abstract: We have integrated a system of 16 RISC CPUs to help reconstruct and analyze a 1.3 Terabyte data set of 400 million high energy physics interactions. These new CPUs provided an affordable means of processing a very large data set. The data were generated using a hadron beam and a fixed target at Fermilab Experiment 769. Signals were recorded on tape from particles created in or decaying near the target and passing through a magnetic spectrometer. Because all the interactions were independent, each CPU could completely reconstruct any interaction without reference to other CPUs. Problems of this sort are ideal for multiple processors. In the offline reconstruction system, we used Exabyte 8mm video tape drives with an I/O capacity of 7 Terabytes per year and a storage capacity of 2.3 Gigabytes per tape. This reduced tape mounts to one or two per day rather than one or two per hour as would be the case with 9-track tapes. The Ethernet network used to link the CPUs has an I/O capacity of 15 Terabytes per year. The RISC CPUs came in the form of commercially supported workstations with little memory and no graphics to minimize cost. Each 25 MHz MIPS R3000 RISC CPU processed data 20 times faster than the 16 MHz Motorola 68020 CPUs that were also used. About 8000 hours of processing was needed to reconstruct the data set. A sample of thousands of fully reconstructed particles containing a charm quark has been produced.
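The key architectural point of the abstract is event-level parallelism: because every interaction is independent, each CPU can reconstruct whole events with no communication between processors, so throughput scales linearly with the number of CPUs. The sketch below illustrates that pattern in modern Python; the file name, the per-event record size, and the read_events/reconstruct functions are hypothetical stand-ins for illustration, not the experiment's actual reconstruction code.

    # Minimal sketch of an event-parallel farm, assuming events arrive as
    # fixed-size raw records in a single input file. On the real system,
    # events streamed from 8mm Exabyte tape to 16 RISC workstations.
    from multiprocessing import Pool

    def reconstruct(event_bytes):
        """Placeholder for full per-event reconstruction; each worker
        processes a complete event with no reference to other workers."""
        return len(event_bytes)  # stand-in result

    def read_events(path):
        """Hypothetical reader yielding one raw event record at a time.
        ~3.3 kB/event follows from 1.3 TB over 400 million interactions."""
        with open(path, "rb") as f:
            while chunk := f.read(3300):
                yield chunk

    if __name__ == "__main__":
        # 16 workers mirror the 16-CPU farm; events are farmed out and
        # reconstructed completely independently of one another.
        with Pool(processes=16) as pool:
            for result in pool.imap_unordered(reconstruct, read_events("run.dat")):
                pass  # write reconstructed output here

For scale, the abstract's numbers imply roughly 1.3 TB / 400 million interactions, or about 3.3 kB per event, and 8000 CPU-hours spread over 16 CPUs is about 500 hours (three weeks) of wall-clock time, which is why high-capacity tape drives and adequate network bandwidth mattered as much as CPU speed.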