Research Experience

2008 –
CMS Data Acquisition

I joined the CMS data-acquisition (DAQ) group a few months before the first LHC beam and took over responsibility for the CMS event-builder application. The event builder assembles events at a rate of 100 kHz, transporting event data originating from about 740 custom read-out boards to the high-level trigger (HLT) farm at an aggregate throughput of O(100 GB/s).

The event-builder software is written in C++ and uses the XDAQ framework developed for the CMS online system. During the first LHC run I added missing features and improved the code to address issues arising from operations. I was also asked to redesign an under-performing C++ software component for CMS run 1: the component responsible for collecting events accepted by the HLT and transferring them to Tier 0 or to online consumers. It had to cater to both the online software framework (XDAQ) and the offline framework CMSSW. We developed and implemented a new multi-threaded design, completing the task within 6 months, just in time for the first real data taking of CMS.

The CMS DAQ was redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet is used together with a reduced TCP/IP protocol implemented in FPGAs for reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR Clos network was chosen for the event builder. The new technologies made it possible to shrink the DAQ system by an order of magnitude while doubling the throughput.

In order to exploit the much more capable hardware, the event-building software had to be rewritten. I defined a slimmed-down protocol and designed and implemented the software, following a test-driven approach. The new code makes heavy use of threads, lock-free inter-thread communication, and C++ templates. Extensive code optimizations and careful allocation of CPU resources (cores, memory, interrupts, and PCI lanes) were necessary to exploit the capabilities of state-of-the-art computer architectures. I also developed Python scripts to measure the event-builder performance in small-scale test systems as well as in the production system, and carried out the measurements to assess and optimize the performance of the DAQ system.
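The lock-free inter-thread communication mentioned above can be illustrated with a single-producer/single-consumer ring buffer, in which each thread only ever writes its own index. This is an illustrative Python sketch, not the actual C++ event-builder code; the class name and capacity are invented:

```python
# Illustrative single-producer/single-consumer (SPSC) ring buffer.
# "Lock-free" here means: the producer writes only `tail`, the
# consumer writes only `head`, so no mutex is needed between them.

class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # read position, written only by the consumer
        self.tail = 0   # write position, written only by the producer

    def push(self, item):
        """Producer side: returns False if the buffer is full."""
        nxt = (self.tail + 1) % self.capacity
        if nxt == self.head:           # one slot kept free to detect "full"
            return False
        self.buf[self.tail] = item
        self.tail = nxt                # publishing the item to the consumer
        return True

    def pop(self):
        """Consumer side: returns None if the buffer is empty."""
        if self.head == self.tail:
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return item
```

In real C++ code the index updates would be atomic stores with appropriate memory ordering; the Python version only conveys the structure of the design.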

I contribute my expertise from operations and software development to many activities in the CMS DAQ group: monitoring and diagnostic tools using Java, Python, Perl, or shell scripts; the run-control and configuration framework written in Java; and the DAQ-specific code of the HLT processes using the CMS event-processing framework CMSSW, based on modern C++. As technical coordinator I contribute to the rewrite of the online monitoring system, which collects data from various online data sources, stores them in an Oracle database, and displays data-taking conditions and issues through web services.

Since the beginning of CMS operations, I have served as an on-call expert for the DAQ system. I contribute to the planning of CMS data taking and to the coordination of interventions in the DAQ system. I also train DAQ shifters and assist other on-call experts in troubleshooting difficult problems.

I am responsible for the US group within DAQ and recently became deputy to the DAQ project manager. I was part of a group responsible for overseeing the CMS detector upgrades during EYETS 2017. I am currently involved in the planning of the DAQ upgrades for LHC run 3 and for phase 2.

2005 – 2008
Higgs Searches

The search for the Higgs boson is one of the most interesting topics in high-energy physics today, and the Tevatron has a good chance of finding evidence for the Higgs before the LHC. I am analyzing the ZH → ννbb channel, one of the most sensitive channels accessible at the Tevatron and also one of the most challenging: it relies on excellent b-jet identification and on a correct missing-energy measurement. In order to maximize the sensitivity, dedicated triggers have been developed, and we will use sophisticated multivariate techniques to extract a signal or set limits on the Higgs mass. This analysis is very sensitive to events affected by detector or software problems; understanding and fixing these problems benefits the whole collaboration. Besides the analysis work, I made significant contributions to the common analysis tools by improving them and verifying their correct behavior.

I represent the Higgs group at the D0 Trigger Board, the committee responsible for determining the trigger strategy and approving modifications to the D0 trigger list. Trigger design in the high-luminosity environment of the Tevatron is very challenging: clever approaches are needed to maintain a competitive physics program without overloading the online system and the reconstruction resources.

Central Track Trigger

I lead the group responsible for the maintenance and operation of the Central Track Trigger (CTT) at D0. The CTT finds tracks at the first trigger level using the Central Fiber Tracker (CFT), which consists of 8 concentric double layers of scintillating fibers. Combinatorial logic implemented in FPGAs identifies track candidates by comparing fiber hits to predefined hit patterns.
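The hit-pattern comparison described above can be sketched as a simple lookup: each predefined pattern is the set of fiber channels a track of a given curvature would light up, and a track candidate is found when all channels of a pattern fired. A toy Python illustration follows; the pattern names and channel numbers are invented, and the real CTT performs this in FPGA logic rather than software:

```python
# Toy version of hit-pattern track finding. Each entry maps a
# hypothetical track hypothesis to the set of fiber channels it
# would fire; real patterns are generated from track simulation.

PATTERNS = {
    "track_A": frozenset({3, 12, 21, 30}),
    "track_B": frozenset({5, 14, 23, 32}),
}

def find_tracks(fired_channels):
    """Return the patterns fully contained in the set of fired channels."""
    fired = set(fired_channels)
    return [name for name, pattern in PATTERNS.items() if pattern <= fired]
```

The FPGA implementation evaluates all patterns in parallel each bunch crossing, which is what makes the approach viable at first-trigger-level latencies.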

The CTT provides track, pre-shower, and occupancy information to the first-level trigger framework and to other first- and second-level trigger systems. Since the CTT plays an essential role in DØ trigger strategies, high reliability of this trigger is mandatory; it has been achieved thanks to continuing improvements to the installed hardware and firmware.

The first major task under my CTT group leadership was the successful upgrade of the track-finding hardware. This upgrade became necessary because the original hardware installed for Tevatron run II limited the number of hit patterns that could be considered. In spring 2006 the track-finding hardware was therefore replaced to take advantage of the full granularity of the CFT. I led the mechanically challenging installation of 40 new boards and the rerouting of 600 cables in a confined space, and was directly involved in testing the newly installed components and commissioning the new system.

My tasks as CTT group convener included the oversight and coordination of daily operations, the development of online and offline monitoring tools, the data-quality assessment of the CTT system, and plans and studies for further improvements of the trigger performance. I led and coordinated the activities of students, postdocs, faculty, and engineers. Together with the DØ management, I planned and identified the manpower needed to maintain the CTT as a component central to the success of DØ.

In addition, I trained and assisted the shifters monitoring the detector performance in real time. Beyond my leadership role, I was active in implementing and improving the software tools, mostly written in Python, used to control and monitor the CTT. This made it possible to merge the two formerly separate tracking shift positions into a single one, thereby reducing the manpower needed to operate the detector.

2002 – 2005
BaBar

Radiative-Penguin Analyses

My primary interest is in radiative-penguin processes. I chose this topic for its excellent potential to reveal deviations from the Standard Model and to constrain new theories.

I picked the rare decay B → γγ as a first analysis topic and developed the analysis in a common framework with other radiative-penguin analyses. The main challenge in searching for these rare radiative decay modes is attaining sufficient background rejection. I used a 2-dimensional fitting technique for the background estimation and a simultaneous optimization of several selection variables to obtain the best upper limit, which will improve the current best upper limit set by BaBar by approximately a factor of 5. The results of this study are currently in internal review and will be published shortly. I also contributed to code development and ntuple production for the measurement of branching fractions and of CP and isospin asymmetries in the B → K*γ decay.

Central Data Processing

I joined the data-processing team when the centralized skim production needed to be ramped up within a new computing model. The skimming splits the reconstructed data into 123 different physics streams, making the huge data sample of 250 fb⁻¹ collected by BaBar, as well as the associated Monte Carlo samples, accessible to the analysis groups. I debugged the skimming tools and developed scripts to automate the skimming and to monitor thousands of skimming jobs at SLAC, which use up to 1200 CPUs and 10 TB of disk space. Various problems needed to be identified and solved on the stringent time scale imposed by ICHEP'04. I achieved this in close collaboration with members in the areas of data quality, reconstruction, computing, and physics coordination, enabling many exciting results such as the discovery of direct CP violation in B⁰ → K⁺π⁻. I was appointed skimming manager for the upcoming round of skimming, a prerequisite for many interesting results for LeptonPhoton'05.
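The stream splitting behind the skimming can be sketched as routing each reconstructed event into every stream whose selection it passes, so that analysis groups only read the streams they need. A toy Python illustration with invented stream names and selection criteria (the real skimming ran inside the BaBar software framework):

```python
# Toy skimming: each physics stream is defined by a selection
# predicate; an event may be written to several streams at once.
# Stream names and selections here are purely illustrative.

STREAMS = {
    "two-photon": lambda ev: ev.get("n_photons", 0) >= 2,
    "dilepton":   lambda ev: ev.get("n_leptons", 0) >= 2,
}

def skim(events):
    """Distribute events into all streams whose selection they pass."""
    out = {name: [] for name in STREAMS}
    for ev in events:
        for name, keep in STREAMS.items():
            if keep(ev):
                out[name].append(ev)
    return out
```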

Detector Operation

I used the opportunity of taking shifts to learn as much as possible about detector operation. The experience gained from increasingly challenging shifts was rewarded by my becoming an expert shift leader, supervising the detector operation during critical periods of data taking.

PEP-II

Interested in learning more about the beams of the PEP-II accelerator, I started a project to investigate beam-induced heating in components of the electron-positron storage ring PEP-II. Steadily increasing beam currents caused components to fail due to temperature-related problems. Using existing temperature data from the beam-control system, I tried to predict likely candidates for failure. Unfortunately, the available data were influenced by many other factors that were only fragmentarily documented, so no sensible prediction was possible within the scope of this project.

1995 – 2003
ATLAS DAQ

I designed and commissioned a new prototype SubFarm Input (SFI) application within a common application framework shared with all other DataCollection (DC) applications in the ATLAS experiment at CERN.

DataCollection is the subsystem responsible for moving event data from the Readout Subsystem (ROS) to the high-level triggers. I took responsibility for the SFI, a key component of the event-building system: it receives event fragments from the ROS via a switched network, assembles them into complete events, and makes each event available to the last stage of the online event selection. The main challenge was the high event-fragment rate of 4 kHz originating from about 200 network connections, with input and output rates of 40 MB/s each. The design therefore had to meet high performance requirements and maximize the efficiency of the memory management while being as fail-safe as possible. I planned and coordinated the vital and successful integration and testing of various DataCollection components and applications. This required integrating the configuration, control, and monitoring of data-flow components and applications with one another, and with the infrastructure and services provided by the ATLAS Online Software. I also developed an automatic set-up procedure for the database describing the system, which enabled deployment on up to 220 computing nodes.
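The fragment-assembly logic of an event builder like the SFI can be sketched as grouping fragments by event number until every expected source has contributed. The following is an illustrative Python sketch, not the real SFI (which is a high-performance C++ application); the source identifiers are invented:

```python
# Sketch of event building: fragments arriving from many read-out
# sources are collected per event number; an event is complete once
# every expected source has delivered its fragment.

from collections import defaultdict

class EventBuilder:
    def __init__(self, expected_sources):
        self.expected = set(expected_sources)
        self.partial = defaultdict(dict)   # event_id -> {source: data}

    def add_fragment(self, event_id, source, data):
        """Store a fragment; return the assembled event once complete."""
        self.partial[event_id][source] = data
        if set(self.partial[event_id]) == self.expected:
            return self.partial.pop(event_id)
        return None
```

A production event builder additionally has to bound the memory held by incomplete events and handle lost or duplicated fragments, which is where most of the fail-safety effort goes.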

Trigger

For my Ph.D. thesis I worked on the ATLAS High Level Triggers, focusing on the performance of the Event Filter, the last stage of the online event selection. I studied the vital electron trigger, where very high background rejection is mandatory to enable the search for physics beyond the Standard Model, using the H → 4e and Z → 2e channels for benchmarking. This work yielded firmer estimates of the electron trigger rate and of the efficiency of all three trigger levels. In addition, I investigated where the execution times of offline algorithms could be reduced to meet the stringent time constraints of the online event selection. These measurements contributed to a better understanding of the performance and size of the trigger system and were presented in the Trigger/DAQ/DCS Technical Proposal published in 2000.

SCT

For my Master's thesis I worked on the optical readout of the SemiConductor Tracker (SCT) of the ATLAS detector. I took a leading role in building a machine to automatically scan up to 144 light-emitting diodes (LEDs) or vertical-cavity surface-emitting laser diodes (VCSELs). We used this machine to investigate the annealing behavior of 250 LEDs from two different manufacturers and of about 200 VCSELs after proton and/or neutron irradiation with doses comparable to or above those expected during 10 years of LHC running. We also performed an accelerated lifetime test of these devices.
I wrote a large part of the object-oriented software (C++) to steer the scanning machine and execute the measurements, made most of the measurements of the irradiated devices, and was heavily involved in the data analysis.
I instructed a group from the University of Birmingham, UK, in using the hardware and software of the scanning machine, which they subsequently used for testing irradiated PIN diodes.

1995 – 2001
NA52  

I participated in the NA52 experiment at the CERN SPS, which searched for long-lived strange quark matter in Pb+Pb collisions at 158 A GeV/c and also measured particle and anti-particle yields in heavy-ion collisions. The experiment used the H6 beam line in the CERN North Area as a focusing spectrometer; the beam line was equipped with multi-wire proportional chambers, time-of-flight hodoscopes, and Cherenkov counters spread over more than 500 meters.
I contributed to the data acquisition and, during the data-taking periods, was involved in monitoring the experiment as well as setting it up for several data-taking and test-beam runs. I participated in many discussions about heavy-ion physics and analysis issues.

1994
PSI Beam Optics

I spent three months as a summer student in the low-energy research section of the Paul Scherrer Institute, Villigen, Switzerland. The Philips cyclotron was built in 1973 as a proton source and later also used as a heavy-ion accelerator, which damaged the original electrostatic deflection mirror of the injection system. I implemented a numerical simulation of the reflector, using successive over-relaxation for the field calculation and a Runge-Kutta algorithm for the particle tracking, and estimated the influence of the observed mirror damage on the ion tracks. I proposed an improved mirror geometry which was later successfully implemented.
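The successive over-relaxation technique used for the field calculation can be sketched as repeated in-place sweeps over a potential grid, over-correcting each interior point toward the average of its four neighbours. A minimal Python illustration for Laplace's equation follows; the grid size and relaxation factor are arbitrary choices for illustration, not those of the original simulation:

```python
# Minimal successive over-relaxation (SOR) solver for Laplace's
# equation on a square 2-D grid. Boundary values are held fixed;
# interior points relax toward their neighbour average, with the
# correction amplified by the over-relaxation factor omega.

def sor_laplace(grid, omega=1.5, sweeps=500):
    n = len(grid)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                avg = 0.25 * (grid[i + 1][j] + grid[i - 1][j]
                              + grid[i][j + 1] + grid[i][j - 1])
                grid[i][j] += omega * (avg - grid[i][j])
    return grid
```

With 1 < omega < 2 the iteration converges considerably faster than plain Gauss-Seidel, which is why SOR was attractive for field calculations of this kind.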
