Quasi-model-independent Search for New Physics at Large Transverse Momentum
Physical Review D (2001)
Abstract
We apply a quasi-model-independent strategy ("Sleuth") to search for new high-p_T physics in approximately 100 pb^-1 of ppbar collisions at sqrt(s) = 1.8 TeV collected by the DZero experiment during 1992-1996 at the Fermilab Tevatron. Over thirty-two e mu X, W+jets-like, Z+jets-like, and 3(lepton/photon)X exclusive final states are systematically analyzed for hints of physics beyond the standard model. Simultaneous sensitivity to a variety of models predicting new phenomena at the electroweak scale is demonstrated by testing the method on a particular signature in each set of final states. No evidence of new high-p_T physics is observed in the course of this search, and we find that 89% of an ensemble of hypothetical similar experimental runs would have produced a final state with a candidate signal more interesting than the most interesting candidate observed in these data.
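The final sentence of the abstract refers to an ensemble test: the observed data are compared against pseudo-experiments drawn from the standard-model prediction alone. The following is a minimal toy sketch of that idea, not the DZero Sleuth implementation; the expected and observed counts are invented, and the simple Poisson-tail "interest" measure stands in for Sleuth's actual region-finding figure of merit over kinematic distributions of each exclusive final state.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def most_interesting_p(expected_counts, observed_counts):
    """Toy 'interest' measure: the smallest Poisson tail probability
    P(n >= observed) over all analyzed final states (smaller = more interesting)."""
    tails = poisson.sf(observed_counts - 1, expected_counts)
    return tails.min()

# Hypothetical standard-model expectations and observed yields
# in a few exclusive final states (illustrative numbers only).
expected = np.array([4.2, 11.0, 2.5, 7.3])
observed = np.array([6, 12, 5, 8])

p_data = most_interesting_p(expected, observed)

# Ensemble of hypothetical similar experimental runs generated from
# the standard-model expectation alone.
n_trials = 100_000
more_interesting = 0
for _ in range(n_trials):
    pseudo = rng.poisson(expected)
    if most_interesting_p(expected, pseudo) <= p_data:
        more_interesting += 1

frac = more_interesting / n_trials
print(f"Fraction of hypothetical runs more interesting than the data: {frac:.2f}")
```

A large fraction (such as the 89% quoted in the abstract) indicates that the most interesting candidate signal in the data is unremarkable under the standard-model-only hypothesis.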
Keywords
Supersymmetry