RoadmapDataAnaTopNewPhysics



[TopNewPhysics] Roadmap for data analysis


0) Preliminary work to be done before data-taking

  • Framework and tools ready (see slides ...)
  • The TopTreeAnalysis package should be cleaned, documented, and reworked
  • An analysis should be run at 7 TeV to estimate the number of events expected for a given luminosity and selection
  • Set up a weekly meeting for everybody (Michael & Stijn included) to exchange information (short reports of meetings, HyperNews ...)
  • The TopTreeAnalysis package has to be extended according to the tests we want to perform on data

(relates to sections 1)b) and 2)c) below)

  • Tests need to be performed on 7 TeV samples

(relates to sections 1)c) and 2)c) below)

In parallel, the analysis for the inclusive search for new physics has to be developed on MC (study of observables, goodness-of-fit tests ...)



1) With the first data (~1 pb⁻¹)

-> Major goals:
  a) testing that our tools (from RECO to TopTree + TopTreeAnalysis) are working
  b) "validating" the reconstructed objects from data
  c) checking the validity of our hypotheses for the ABCD method


a) testing that our tools (from RECO to TopTree + TopTreeAnalysis) are working

  • From day one, run over the available data and check that we retrieve outputs. Check for failures, inefficiencies, inconsistencies ...
  • Monitor the efficiency of producing a final TopTree from the available datasets
  • Make sure that all the tools are working well; modifications/developments can be made in case of missing items

b) "Validate" the reconstructed objects from data

[This section needs to have tools developed in advance]

  • From the events available in the SD (secondary dataset), produce the breakdown of all selection cuts applied (histograms, efficiency plots, tables); a bookkeeping sketch is given below
  • More generally, check the stability of all cuts over the runs
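
As an illustration of how this breakdown could be bookkept, here is a minimal Python sketch; the cut names and the passes() helper are hypothetical placeholders, not the actual TopTreeAnalysis selection:

 # Minimal cutflow bookkeeping sketch; the cut names are illustrative.
 cuts = ["trigger", "one isolated muon", ">= 4 jets"]
 
 def passes(event, cut):
     # Placeholder: the real logic would query the TopTree event content.
     return True
 
 def cutflow(events):
     """Count how many events survive each cut, applied in order."""
     counts = dict.fromkeys(cuts, 0)
     for event in events:
         for cut in cuts:
             if not passes(event, cut):
                 break
             counts[cut] += 1
     return counts
 
 print(cutflow(events=[object() for _ in range(100)]))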

- Focus on isolated muons in a jetty environment (involve Michael)

  • Trigger rate? (cannot be computed with the SD ... but possible when going from the PD (primary dataset) to the SD)
  • Study of the quality cuts applied (from trigger muon to selected muon)
  • Compare the RelIso variable (data vs MC), with a breakdown per sub-variable (TrackIso, ECALIso, HCALIso) and per #jets (histograms, efficiency plots, tables); see the sketch after this list
  • Check the muon ECAL deposit and the DR(mu, jet) cut (histograms, efficiencies)
  • d0Sign distribution, efficiencies as a function of the variable
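
To fix the notation used above, a minimal sketch of the RelIso variable, assuming the convention of summed isolation components divided by the muon pT (the cut value in the comment is a placeholder):

 # Relative isolation: (TrackIso + ECALIso + HCALIso) / pT(muon).
 def rel_iso(track_iso, ecal_iso, hcal_iso, muon_pt):
     """Summed isolation energy divided by the muon pT."""
     return (track_iso + ecal_iso + hcal_iso) / muon_pt
 
 muon = dict(track_iso=1.2, ecal_iso=0.8, hcal_iso=0.5, muon_pt=35.0)
 print(f"RelIso = {rel_iso(**muon):.3f}")  # e.g. isolated if < 0.1 (placeholder cut)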

- Focus on jet multiplicity (involve Stijn)

  • Study of the jet-ID cuts
  • Eta-phi distribution of jets after the pT selection (look for hot cells)
  • Jet multiplicity after 2-3 different jet selections (see the sketch after this list)
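
A minimal sketch of the jet-multiplicity comparison for a few jet selections; the pT/eta thresholds are placeholder values, not the analysis cuts:

 # Count jets passing a given (pT, |eta|) selection; thresholds are placeholders.
 import numpy as np
 
 def n_jets(pts, etas, pt_min, abs_eta_max):
     pts, etas = np.asarray(pts), np.asarray(etas)
     return int(np.count_nonzero((pts > pt_min) & (np.abs(etas) < abs_eta_max)))
 
 event = dict(pts=[55.0, 42.0, 28.0, 22.0], etas=[0.3, -1.1, 2.1, 0.8])
 for name, (pt_min, eta_max) in {"loose": (20.0, 2.5),
                                 "medium": (25.0, 2.4),
                                 "tight": (30.0, 2.4)}.items():
     print(name, n_jets(event["pts"], event["etas"], pt_min, eta_max))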


c) checking the validity of our hypotheses for the ABCD method

[need tests performed on MC first] To test our methods, we need enough statistics in data to be able to distinguish between statistical fluctuations and possible biases. To enrich our signal region:

  • Lower the pT cut on jets: 30-40 ↘ 20-25 GeV
  • If that is not enough, maybe lower the required #jets: 4 ↘ 3
  • Relax the RelIso cut a bit

- Test of the ABCD method [need tests performed on MC first] (a minimal sketch follows this list)

  • As there is not much statistics in the SR, the RelIso cut could be relaxed at the beginning
  • Check the correlation between RelIso & d0Sign
  • Put a MET cut (MET < ~20-30 GeV) to remove EWK-top events from the SR - see the impact on the RelIso & d0Sign variables - check whether the prediction equals what we observe in data (true if there is no EWK-top contribution)
  • Check the stability of the prediction while changing the ranges of the regions
  • Divide the B and D regions in 4 and check the consistency (ABCD method - closure test)
  • While removing the MET cut, check that the estimated difference is compatible with the EWK-top expectation
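
A minimal sketch of the ABCD prediction, assuming RelIso and d0Sign define the four regions (A the signal region, B, C, D the control regions) and are uncorrelated; the counts below are made up:

 # ABCD background prediction: if the two variables are uncorrelated,
 # N_A = N_B * N_C / N_D.
 import math
 
 def abcd_prediction(n_b, n_c, n_d):
     """Predicted yield in region A, with naive Poisson error propagation."""
     pred = n_b * n_c / n_d
     rel_err = math.sqrt(1.0 / n_b + 1.0 / n_c + 1.0 / n_d)
     return pred, pred * rel_err
 
 pred, err = abcd_prediction(n_b=120, n_c=95, n_d=310)  # made-up counts
 print(f"Predicted N_A = {pred:.1f} +- {err:.1f}")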

- Test of the QCD shape estimation [need tests performed on MC first]

  • Check the correlation between RelIso and the observables we consider (focus on only a few distributions - to be defined)
  • Divide the CR into 2 or more bins depending on the number of events (~1000 per histogram) and compare the templates obtained (moments, Chi2, Kolmogorov-Smirnov); see the sketch after this list
  • Compare those templates to the ones predicted by the MC; how far are we from them?
  • If there are already enough events in the SR (low pT cuts, relaxed RelIso), compare the estimation with data
  • If the method fails, find an extrapolation procedure
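
A minimal sketch of the template comparison between two control-region bins, using a two-sample Kolmogorov-Smirnov test from SciPy; the arrays are toy stand-ins for the observable measured in each bin:

 # Compare the QCD shape in two control-region bins (~1000 events each).
 import numpy as np
 from scipy.stats import ks_2samp
 
 rng = np.random.default_rng(1)
 bin1 = rng.exponential(scale=50.0, size=1000)  # toy observable, bin 1
 bin2 = rng.exponential(scale=52.0, size=1000)  # toy observable, bin 2
 
 stat, p_value = ks_2samp(bin1, bin2)
 print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
 # A small p-value would signal that the QCD shape is not stable
 # across the control region.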

Tighten the selection criteria as the luminosity increases.

2) With a luminosity of ~10-50 pb⁻¹

-> Major goal: participate in a cross-section measurement
  a) study of b-tagging
  b) study of the V+j estimation
  c) participate in the measurement

a) study of b-tagging

  • Follow the alignment of the tracker
  • Follow the results obtained in the b-tag group
  • Focus on the most robust b-tag algorithm and make distributions with data (compared to MC): #b-jets (as a function of #jets and pT/eta of the jet)

b) study of W+j estimation

[need tests performed on MC first] [This section needs to have tools developed in advance]

In order to increase the statistics:

  • From 3 #jets categories (4, 5, >=6), go to 1 category: >=4
  • To increase the 3-b-jets category, use a high-efficiency WP ("loose")

In order to reject top events:

  • Cut on a Chi2 (jet combination) and/or HT (so that Nttbar ~ 0)

-> Make the Chi2 distribution (and the reconstructed masses), compare to MC - efficiency as a function of the cut - find a safe cut. A sketch of such a Chi2 is given below.
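
One possible form of such a jet-combination Chi2, built from the reconstructed W and top masses; the central values and resolutions below are placeholder numbers, not fitted ones:

 # Chi2 of one jet assignment against the W and top mass hypotheses.
 def jet_combination_chi2(m_w_reco, m_top_reco,
                          m_w=80.4, sigma_w=10.0,
                          m_top=172.5, sigma_top=15.0):
     return ((m_w_reco - m_w) / sigma_w) ** 2 + ((m_top_reco - m_top) / sigma_top) ** 2
 
 # Keep the best (lowest-Chi2) jet assignment per event; a high cut on
 # that best Chi2 then rejects top-like events.
 print(jet_combination_chi2(m_w_reco=83.0, m_top_reco=168.0))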

Test of the method:

  • Nttbar should be ~0 after the Chi2 rejection (a safe/high cut so as not to depend on the JES)
  • Check the stability of the prediction while changing the b-tagging algorithm & WP (maybe not too much freedom ... loose - semi-loose WP for the trackCounting algorithm)
  • Eb & Eudsc could be compared to MC or to an estimation (from the b-tag group, after MC correction factors)
  • Possibility to probe the Vbb content (if there is no ttbar contamination). Compare to MC; this gives a k-factor which could be applied after lowering the Chi2 cut
  • Control the Nttbar estimation while lowering the Chi2 cut (#Ntt as a function of the Chi2 cut)

The shape of W+j could be obtained with an "integer" (discrete) variable like the charge of the lepton:

  • Look at the charge of the lepton: the template for V+j should be asymmetric while the one for tt+j shouldn't be
  • Compare the template obtained to the MC prediction (charge asymmetry) - roughly a closure test (should give #N(W+j)); a sketch of the charge-asymmetry estimate follows this list
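
A minimal sketch of the charge-asymmetry estimate: W+j production is charge-asymmetric in pp collisions while tt+j is symmetric, so the symmetric processes cancel in the difference N(+) - N(-); r_mc is a placeholder for the MC ratio N(W+ + jets)/N(W- + jets):

 # N(W+jets) from the lepton-charge counts; symmetric backgrounds cancel.
 def wjets_from_charge(n_plus, n_minus, r_mc=1.4):
     """N_W = (N+ - N-) * (r + 1) / (r - 1), with r = N(W+)/N(W-) from MC."""
     return (n_plus - n_minus) * (r_mc + 1.0) / (r_mc - 1.0)
 
 print(f"N(W+jets) ~ {wjets_from_charge(n_plus=520, n_minus=410):.0f}")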

c) cross-section measurement

  • Keep in contact with the other groups working on the l+jets channel
  • Follow the work on event selection
  • Provide our background estimation results
  • Or plug in external results (trigger efficiency, muon selection efficiency ...) to obtain the cross-section; a sketch of the extraction is given below
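
A minimal sketch of the extraction, sigma = (N_obs - N_bkg) / (eff x L); every number below is a placeholder, not a measured input:

 # Cross-section in pb from observed yield, background estimate,
 # total (trigger x selection) efficiency, and luminosity in pb^-1.
 def cross_section(n_obs, n_bkg, efficiency, lumi):
     return (n_obs - n_bkg) / (efficiency * lumi)
 
 print(f"sigma ~ {cross_section(n_obs=800, n_bkg=250, efficiency=0.04, lumi=50.0):.0f} pb")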


3) With a luminosity of ~100 pb⁻¹

-> Major goal: look at differential distributions
  a) study the shape estimation methods
  b) obtain the first differential distributions
  c) perform a goodness-of-fit test distribution by distribution -> MC tuning?!
  d) combine observables in a goodness-of-fit test (a sketch is given below)


a) study the shape estimation methods

  • Extension of the previous work for QCD
  • Estimation of W+j
  • Estimation of tt+j
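
For goals c) and d), a minimal sketch of a per-distribution goodness-of-fit test: a binned Pearson Chi2 between the data histogram and the sum of the estimated templates (QCD + W+j + tt+j); all bin contents below are placeholders:

 # Binned Chi2 goodness-of-fit between data and summed templates.
 import numpy as np
 from scipy.stats import chi2
 
 data = np.array([120, 95, 60, 33, 18, 9])                 # toy data histogram
 model = np.array([115.0, 100.0, 58.0, 35.0, 16.0, 10.0])  # summed templates
 
 chi2_val = float(np.sum((data - model) ** 2 / model))
 ndof = len(data)
 print(f"Chi2/ndof = {chi2_val:.1f}/{ndof}, p-value = {chi2.sf(chi2_val, ndof):.3f}")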

.. for next ... we still have time ...

