
RWTH Aachen Particle Physics Theory

Looking for Dark Matter: Dark(ness) at the end of a tunnel

April 24th, 2019

In the last post, we introduced the “Shake It, Make It, Break It” approach for dark matter detection and talked about shaking dark matter to deduce its properties (what is usually called Direct Detection). Of course, the ideal way to study dark matter would be to create it in our laboratories, which brings us to the second approach: Make It. 

Attempts to make dark matter are carried out in particle colliders. To make dark matter, we destroy light (aka visible matter). But before we go down that road, let’s have a quick look at the Standard Model of particle physics. This model describes the physics of everything visible around us. It tells us the structure of atoms, the workings of three of the four forces governing all interactions in nature, the mechanism by which particles gain mass, and the mechanism by which they decay. Essentially, it is a summary of our knowledge of particle physics. And although the Standard Model has been an incredible success, it appears to be somewhat incomplete. For example, we don’t yet have a particle that mediates gravity, the fourth force. We don’t know conclusively why neutrinos have mass, or why the Higgs mass appears fine-tuned. The incompleteness of the Standard Model provides a strong motivation for additional particles. If dark matter is composed of particles, it could be one of the missing puzzle pieces. And much like in an actual puzzle, we can deduce the properties of the missing piece from the space its absence has created. We can figure out which Standard Model particles are likely to talk to dark matter and build robust models around these interactions. With this basis for dark-visible interactions, we can look for them at colliders.

Since dark matter is invisible to our detectors, its presence after a collision can be deduced from the absence of the energy which it carries away.

A particle collider accelerates particles to high energies, smashes them together, and studies the resulting debris to understand the physics of nature at small (length) scales. One can look for dark matter at colliders by figuring out whether this debris matches our expectation from the Standard Model (which doesn’t include dark matter). A simple way to do this (in principle) is by using the law of conservation of energy: in any physical process, the total energy of the system is conserved. The initial energy of the particle beams is something we know from an experiment’s design. The total energy after a collision can be reconstructed from the energy of the particles we detect. If these two numbers don’t match, we know that some of the energy has been carried away by “invisible” particles, which could be potential dark matter candidates. (In practice, this is much harder to do, which is one of the reasons we have an entire subgroup of physicists, theorists and experimentalists, devoted to working out the intricacies of collider physics.)
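To make this concrete, here is a toy sketch in Python (with made-up numbers, not a real analysis) of the bookkeeping involved: add up what the detector sees and check whether anything is missing. Real analyses use only the momentum components transverse to the beam, since particles escaping along the beam pipe are never measured.

```python
import math

# Hypothetical reconstructed particles from one collision event:
# (px, py) components of transverse momentum, in GeV.
visible_particles = [
    (55.2, 12.1),    # e.g. a jet
    (-20.7, 30.4),   # e.g. a lepton
    (-10.1, -15.3),  # e.g. another jet
]

# The incoming beams carry no net transverse momentum, so the visible
# particles should balance. Whatever fails to balance may have been
# carried away by invisible particles.
sum_px = sum(px for px, py in visible_particles)
sum_py = sum(py for px, py in visible_particles)
missing_pt = math.hypot(sum_px, sum_py)

print(f"Missing transverse momentum: {missing_pt:.1f} GeV")
# A large value (here ~37 GeV) flags an event worth a closer look:
# neutrinos, mismeasurement, or perhaps dark matter.
```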

Another way to look for dark matter at colliders is by studying how the Standard Model particles produced in a collision decay. Consider the Z-boson. In the Standard Model, it can decay into quarks, leptons or neutrinos. We know the total decay width of the Z-boson, which characterizes the probability that a Z-boson decays. We can also measure the individual probabilities of a Z-boson decaying into quarks, leptons, and neutrinos. A mismatch between these probabilities would be a hint that the Z-boson decays into something else that is invisible to our detectors. Once again, we can deduce the presence of dark matter by its absence.
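Here is what that bookkeeping looks like in practice, with widths close to the measured (PDG) values; a back-of-the-envelope sketch, not an actual analysis:

```python
# Decay widths of the Z-boson in GeV (approximate PDG values).
gamma_total   = 2.4952        # total width, from the shape of the Z resonance
gamma_hadrons = 1.7444        # Z -> quarks (hadrons)
gamma_leptons = 3 * 0.0840    # Z -> e+e-, mu+mu-, tau+tau-

# Whatever is left over must go into particles we cannot see.
gamma_invisible = gamma_total - gamma_hadrons - gamma_leptons
print(f"Invisible width: {gamma_invisible:.3f} GeV")   # ~0.499 GeV

# The Standard Model predicts ~0.1676 GeV per neutrino species, so:
n_nu = gamma_invisible / 0.1676
print(f"Effective number of invisible species: {n_nu:.2f}")  # ~2.98
# Close to 3: the invisible width is saturated by the three known
# neutrinos, which tightly constrains any extra invisible decay,
# e.g. into dark matter.
```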

We’ve known for quite some time that there is more to the Universe than meets the eye. To understand it, we must exhaust all avenues available to us. Crashing particles together is one of them.

Looking for Dark Matter: Trembles beneath the surface

April 2nd, 2019

The ever-elusive nature of dark matter is of interest not only from a theoretical point of view –we are, after all, missing 85% of the matter in the Universe– but also from an experimental one. How exactly does one study something invisible? We can point our telescopes at the sky and study how light bends around clusters of apparent emptiness, hinting that there must be something there. We can do this with ancient light (commonly known as the CMB) and figure out exactly how much of this invisible substance is present in our Universe. We can study the structure of galaxies and galaxy clusters, and the motion of the stars bound to them. Everywhere we look, it seems, there is evidence for something massive lurking in the shadows. But none of these things tell us anything about what a dark matter particle* actually looks like. We cannot “trap” a dark matter particle, for it can, quite literally, slip through walls. We don’t have access to it in the way we have access to, say, electrons. But as humans, we are tenacious. And as scientists, we are creative. The minor problem of dark matter’s invisibility won’t stop us in our quest for answers.

In this series of (hopefully) short posts, we’ll cover the three conventional avenues of dark matter detection. The elevator pitch for these techniques? Shake It, Make It or Break It. ‘It’ being dark matter.

Shaking dark matter is as fun as it sounds. In the simplest of terms, it involves waiting for a dark matter particle to strike a target atom placed inside a detector. As a result of the collision, the state of the target atom changes. It could either get ionized, resulting in the production of free electrons which we can ‘see’ in the detector. It could absorb the energy of the dark particle and then release it in the form of a photon. Or it could release this energy as heat. In all of these cases, the collision results in a visible/detectable signature.

In a direct detection experiment, we study the effects of a dark particle colliding with a target atom like Xenon.

There are two important questions to ask now:
1. Where does the initial dark matter particle come from?
2. How can we be sure that the electrons/photons/heat we detect are actually caused by such a collision?


The answer to the first question is fairly straightforward. From various other measurements, we know that there is a constant(ish) dark matter density at Earth’s location. At any given instant, we are being bombarded with dark matter particles. To get an idea of how strong this bombardment is, consider these numbers. The local dark matter density (total dark matter mass contained in one cubic centimeter) is approximately 0.3 GeV/cm^3. (GeV is the standard unit of mass in particle physics; 1 GeV is approximately 10^-27 kg.) So if we assume a dark matter mass of 1 GeV, this amounts to roughly 0.3 dark matter particles per cubic centimeter. The average volume of a human body is 66,400 cubic centimeters, which means that at any given moment, about 20,000 dark matter particles are passing through every person on Earth. Given that dark matter is weakly interacting, it is no surprise that we don’t notice this constant bombardment. But given that it is substantial, we can hope to detect these particles by building detectors that are large enough. A larger detector volume means more dark particles passing through and hence a higher probability of collision.
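If you want to check the arithmetic behind that estimate, here is a small sketch using the approximate numbers quoted above:

```python
# All values are rough, order-of-magnitude numbers.
rho_dm  = 0.3      # local dark matter density, GeV per cm^3
m_dm    = 1.0      # assumed dark matter particle mass, GeV
v_human = 66400.0  # average volume of a human body, cm^3

number_density = rho_dm / m_dm        # ~0.3 particles per cm^3
n_inside = number_density * v_human   # particles inside a person
print(f"Dark matter particles inside you right now: {n_inside:.0f}")  # ~20,000

# For heavier candidates the count drops, since it is the *mass*
# density that is fixed: a 100 GeV particle gives only ~200.
print(f"For m = 100 GeV: {rho_dm / 100.0 * v_human:.0f}")
```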

Which automatically brings us to question two: how can we be sure that the cause of a signal in our detector is a dark matter–target atom collision? The short answer is: we’re extremely meticulous. For every experiment, there is a ‘background’ that we need to be aware of. A background is basically “noise” which makes it hard to see the actual signal. In this case, any other particle colliding with the target would give rise to a background signal. A major part of these experiments is to account for all possible backgrounds and reduce this noise as much as possible. The first step is to build the detector underground. The Earth’s crust provides natural shielding against most of the stray particles hanging around in our corner of the universe. Beneath that are layers of concrete to further stop any particles from getting inside the detector. Note that since dark matter particles are incredibly weakly interacting, they have no problem shuttling through the Earth’s crust or any of our other protective layers. Another important factor is the choice of the target atom itself. Since we want to avoid spurious signals, an atom that decays radioactively or is otherwise reactive would be a poor choice for the target. Some of the best target atoms are inert gases such as Xenon or Argon.

The XENON1T detector, located in Italy. The tank (left) is filled with liquid Xenon. For scale, a three-level office complex on the right.

This is (very briefly) how a conventional direct detection experiment works**. We hide out in the depths of the Earth, waiting. The obvious follow-up question is: what happens if we don’t see a signal? Does it mean the dark matter paradigm is dead in the water (or liquid Xenon)? Not quite. The lack of a signal is valuable information as well. For example, it can be used to infer that the dark matter–target interaction is weaker than we thought (meaning the probability of a collision is even smaller, resulting in the absence of a signal). In this way, we can use the “null results” of an experiment to set limits on the strength of the dark matter–Standard Model interaction. So even though we don’t “detect” dark matter, we end up with more knowledge than we started with. And that, in the end, is the true spirit of science.
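As a rough illustration of how a null result becomes a limit, consider a toy counting experiment: if you expected zero background events and saw zero events, Poisson statistics caps the number of signal events you could have missed, and with it the interaction cross section. All numbers below are hypothetical.

```python
import math

def poisson_upper_limit(n_observed, background, cl=0.90):
    """Crude upper limit on the signal mean s at confidence level cl:
    the largest s with P(N <= n_observed | background + s) >= 1 - cl."""
    s = 0.0
    while True:
        mu = background + s
        p = sum(math.exp(-mu) * mu**k / math.factorial(k)
                for k in range(n_observed + 1))
        if p < 1 - cl:
            return s
        s += 0.01

# Zero events seen, zero background expected -> s < ~2.3 events at 90% CL.
s_max = poisson_upper_limit(n_observed=0, background=0.0)
print(f"Signal upper limit: {s_max:.2f} events")

# If (hypothetically) a cross section of 1e-45 cm^2 would have produced
# 10 events in this exposure, the limit scales down linearly:
sigma_limit = 1e-45 * s_max / 10.0
print(f"Cross-section limit: {sigma_limit:.2e} cm^2")
```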

Up next: Can we make dark matter at colliders?

*The assumption that dark matter is composed of particles is well-motivated but it might very well be that dark matter is something more exotic such as primordial black holes. 
**Detectors operate on the same principle (some kind of dark matter — particle collision) but can be experimentally realised in different ways. 

A Beginner’s Guide to the CMB

September 11th, 2018

CMB as seen by Planck

Ever since its accidental discovery in 1964, the Cosmic Microwave Background, or the CMB for short, has been touted as the holy grail of modern cosmology. In the simplest of terms, the CMB is a snapshot of the universe in its infancy. It tells us what the universe looked like a few hundred thousand years after the Big Bang, and since we understand how the universe evolves (or is supposed to evolve), it gives us testable predictions regarding the structure of the universe we observe today. It has become an incredible resource for verifying our cosmological models at scales varying from the incredibly tiny to the mind-bogglingly large. The amount of information which can be gleaned from a map which quite literally looks like a painting accident involving a horde of unruly pigeons is astounding. But before we get into that, what exactly is the CMB? Where does it come from?

Before everything else, there was the Big Bang, which ended up creating a universe populated with a more-or-less homogeneous hot particle soup. (If only it had remained that way.) The fact that these particles hadn’t yet coalesced into atoms meant that they interacted strongly with radiation (or light). Imagine a photon moving in this mess of charged particles. It would be deflected quite often, seeing as how it would be surrounded by electrons and protons. In science-speak, one would say that the free-streaming length of the photon would be very small. And since the photon can’t travel long distances without being scattered, the universe would be opaque. As time passes and the universe expands, its temperature drops. With this cooling, at some point the formation of atoms becomes favorable and instead of charged particles joy-riding around the universe, we suddenly have electrically neutral atoms. This means that photons are now free to travel without undergoing repeated scattering. Essentially, the universe becomes transparent, and these photons form the relic radiation which we call the CMB. That is the ‘Cosmic’ part. ‘Microwave’ simply means that we observe these photons, as a result of the expansion of the universe, not in the visible spectrum but in the microwave part of the electromagnetic spectrum. ‘Background’ is also straightforward: the CMB permeates everything. It is a constant presence in our lives, much like the foreboding inevitability of death that we all grow up with.
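For a feel for the numbers: the universe became transparent at a temperature of roughly 3000 K, and the expansion has since stretched every photon’s wavelength by a factor of about 1100. A quick sketch with approximate textbook values:

```python
# Approximate textbook values.
z_recombination = 1100     # redshift at which atoms formed
T_recombination = 3000.0   # temperature back then, in K

# Temperature scales like 1/(1+z), wavelengths like (1+z).
T_today = T_recombination / (1 + z_recombination)
print(f"CMB temperature today: {T_today:.1f} K")   # ~2.7 K, as observed

# Peak wavelength from Wien's displacement law (b = 2.898e-3 m*K).
wien_b = 2.898e-3
print(f"Peak wavelength then:  {wien_b / T_recombination * 1e9:.0f} nm")  # ~1000 nm, near-infrared
print(f"Peak wavelength today: {wien_b / T_today * 1e3:.1f} mm")          # ~1 mm: microwaves
```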

All that is well and good, but how helpful is having a picture of the baby universe? Since Cosmology is the study of the history and evolution of the universe, the answer, unsurprisingly, is pretty helpful.

The CMB photons carry with them information about the early universe, for instance, its temperature. When the CMB was first discovered, it looked fairly homogeneous, and so the temperature of the universe appeared to be constant. This was, and continues to be, strong evidence in favor of the Big Bang theory, which predicts this homogeneity on account of the early universe being extremely hot and dense. As measurements improved, it was found that there are “fluctuations” in the CMB spectrum. These fluctuations are tiny (of order 10^-5 K), but far from being something that can be glossed over, they form the basis of all precision studies and tests undertaken in cosmology today. Called CMB anisotropies, they correspond to the differently colored regions seen in the CMB map. Some parts of the early universe were slightly hotter and some parts were slightly colder. The fact that these anisotropies exist and are not entirely uncorrelated with each other provides deep insights into the content and structure of our universe.

For a taste of how all of this works, a simple example is to study how fluctuations in the CMB give rise to the galaxies and galaxy clusters we see today. In essence, can we predict the location of galaxies or galaxy clusters in our mostly empty universe today starting from a map of the temperature fluctuations which existed billions of years ago? The answer is yes. You see, fluctuations in temperature can be mapped onto fluctuations in the density of the universe. The temperature of a photon can tell us whether it came from a region packed with particles (over-dense) or one which was fairly empty (under-dense). An overdensity in the universe implies a gravitational potential well. More particles mean a stronger gravitational force. The photons escaping from such regions would have to expend energy in order to ‘climb out’ of the potential well. By the time they make it out, they have significantly less energy than what they started out with and hence are colder. In the opposite scenario, underdensities would mean weaker potential wells. The energy lost by the photons would be smaller and so they would be hotter. In short, the blue spots on the CMB map correspond to regions of over-density and the red spots correspond to regions of under-density.

But what do these density fluctuations imply for the future of our universe? Simple. They act as seeds. Overdensities and the resulting gravitational potential well will promote the clustering of matter, eventually leading to the formation of stars, galaxies, galaxy clusters and so forth. So, studying CMB anisotropies gives us direct predictions of the structure of the universe today!

This is only one of the many cool things we can do with the CMB. More than fifty years after its discovery, the Cosmic Microwave Background continues to be a rich resource for theorists and experimentalists alike. It has become a touchstone for cosmological models and is responsible for shaping our understanding of the universe. And now you know why.

TTK Outreach: Special Relativity in a Nutshell

February 27th, 2018

Einstein’s theory of relativity has seeped into popular culture like no other. But what is relativity? And why is it important to our day-to-day life? Today, we look at Special Relativity (SR): the imagine-the-cow-to-be-a-sphere case of the complete, or general, theory of relativity.

The beauty of SR, and probably one of the reasons for its ubiquity in popular science, is its elegance and simplicity. An added benefit is that it’s possible to get quite far with an intuitive understanding of SR and no complicated mathematics. At its heart, special relativity has two basic principles. Once we understand these two ideas, we basically understand all of special relativity and the ‘paradoxes’ that come with it. These two ideas are as follows:

 

1. The laws of physics are invariant (identical) in all inertial reference frames.

There is just one jargon-y term here, which is ‘inertial reference frame’. A reference frame is a system of coordinates that you use when you perform an experiment. This system fixes the location and orientation of your experiment. An inertial reference frame is one that is not accelerating, i.e., it is either stationary or moving with a constant velocity. So, a car going in a straight line at 50 km/h is an inertial frame of reference. So is a physicist sitting at her desk. The Falcon Heavy during its trip to outer space is not: it accelerates. Neither, strictly speaking, is the Earth.

The first principle of SR states that physics should look the same in all inertial frames. In essence, if you perform your experiment on your way to work (provided you drive at a constant speed) you’ll get the same results as when you repeat it in your lab.

This also means that there is no ‘absolute’ frame of reference. Say you perform your experiment in a bleak, windowless container. Unbeknownst to you, the container is actually on a moving belt. This moving belt is on a ship on its way to the New World. Do you consider the ship to be your reference frame? Or the belt? Or just the container? It’s kind of an inverse Russian doll situation. But we don’t care. As long as the reference frames are inertial, the physics would remain the same and we get the same results either way.

2. The speed of light in vacuum is the same for all inertial frames.

This one is slightly tricky because it’s counterintuitive. For any object, the speed you measure depends on the reference frame you’re in. For example, say you’re in a car going at 50 km/h. On the seat next to you are six boxes of pizza. For you, the pizzas are stationary. For a hungry person at a bus stop, the pizzas are moving away from them at 50 km/h. For another car coming towards you at 25 km/h, the pizzas are approaching at 75 km/h. So, in general, speed is relative. But for light, we always measure the same speed, irrespective of our motion with respect to the light source.
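To see the difference between everyday intuition and relativity in numbers, here is a small sketch of the relativistic velocity-addition formula, (u + v)/(1 + uv/c²), which replaces the naive u + v:

```python
C = 299792.458  # speed of light in km/s

def add_velocities(u, v):
    """Relativistic velocity addition; u and v in km/s."""
    return (u + v) / (1 + u * v / C**2)

# Everyday speeds: the relativistic correction is utterly invisible.
print(add_velocities(50 / 3600, 25 / 3600) * 3600)  # ~75 km/h, as intuition says

# Light emitted from a rocket moving at half the speed of light:
print(add_velocities(0.5 * C, C) / C)  # exactly 1.0: still c
```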

The constancy of the speed of light gives rise to a host of interesting results. The one most used in science-fiction is time dilation. And as it turns out, it’s pretty easy to understand time dilation if you understand these two principles of SR. So, let’s give this a shot!

Time Dilation:

A quick side-note before we begin: in special relativity, we assume that gravity plays no role (hence equating a cow to a sphere). Here, time dilation is a result of the velocity difference between two observers. If we consider the full picture, i.e., General Relativity, time dilation can also be caused by gravity. If you’ve seen Interstellar, this is why time runs slower closer to the black hole. It is also the reason that clocks on Earth tick slower than clocks in outer space (if their relative motion is zero).

                  Fig. 1

A simple thought experiment to understand time dilation is as follows. Consider two scientists Alice and Bob inside their respective spaceships. Both have light-clocks. A light-clock consists of two mirrors at opposite ends of a cylinder. One end also has a light source. The way this clock measures time is by shooting a photon from one end of the cylinder and ‘ticking’ when the photon returns to the same end.

Now back to Alice and Bob. Alice gets tired of trying to convince Bob of the superiority of Firefly and flies away in her spaceship. For Alice, one tick on her light-clock corresponds to the process depicted in Fig 1 and with some middle-school math, we can calculate the time between ticks.

For Bob, staring dejectedly at Alice’s ship and realizing that he was wrong, the path that the photon takes is given in Fig. 2. Again, employing some simple middle-school math, we can calculate the time between ticks from Bob’s perspective.

                                        Fig. 2

After a bit of algebra, we find that from Bob’s perspective/frame of reference, time appears to be running slower for Alice.

Δt_B = γ Δt_A

where

γ = (1 − u²/c²)^(-1/2), so that γ > 1,

and u is Alice’s speed relative to Bob.

So, when Alice sends a passive-aggressive email to Bob with the one season of Firefly (such injustice), her clock would be a little behind Bob’s. By extension, she would have aged slightly less than Bob (in Bob’s frame of reference)**.

And that, in principle, is how time dilation works. Keep in mind that this is not just an abstract thought experiment. We have actually sent high-precision atomic clocks on plane rides around the Earth and compared their time to clocks on the ground. The lag matched the prediction of relativity.
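For a rough feel for the size of the effect, here is a back-of-the-envelope version of the airplane experiment, keeping only the velocity effect (the real analysis of such flights must also include gravitational time dilation, which works in the opposite direction at altitude):

```python
import math

c = 299792458.0   # speed of light, m/s
v = 250.0         # m/s, roughly a jet airliner
t = 45 * 3600     # s, ~45 hours in the air

gamma = 1 / math.sqrt(1 - v**2 / c**2)
lag = t * (gamma - 1)   # how far the flying clock falls behind

print(f"gamma - 1 = {gamma - 1:.2e}")                       # ~3.5e-13
print(f"Lag after the trip: {lag * 1e9:.0f} nanoseconds")   # a few tens of ns
```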

Of course, you can’t mention time dilation without talking about the Twin Paradox. But this post has already exceeded its word limit. So, I’ll leave that for the next one.

**For now, we’ve chosen to completely ignore Alice’s frame of reference. If we delve deeper, we’d find that for Alice, Bob would be the one aging more slowly. This is what eventually leads to the twin paradox. More on this in the next post!

TTK Outreach: A Universe of Possibilities Probabilities

January 30th, 2018

The universe may not be full of possibilities –most of it is dark and fatal– but what it does have in abundance are probabilities. Most of us know about Newton’s three laws of motion, especially the third which, taken out of context, apparently makes for a good argument justifying revenge. For centuries, Newton’s laws made perfect sense: an object’s position and velocity, specified at a certain time, give us complete knowledge of its future position and velocity, aka its trajectory. Everything was neat and simple and well-defined. So imagine our surprise when we found out that Newton’s laws, valid as they are on large scales, completely break down on smaller ones. We cannot predict with 100% certainty the motion of an atom in the same way that we can predict the motion of a car or a rocket or a planet. And at the heart of this disagreement is quantum mechanics. So today, let’s talk about two of the main principles of quantum mechanics: duality and uncertainty.

Duality:

We begin with light. For a long time, no one seemed to be quite sure what light is. More specifically, we didn’t know if light was a bunch of particles or a wave. Experiments verified both notions. We could see light interfering and diffracting much like two water waves would. At the same time, we had phenomena such as the photoelectric effect which could only be explained if light was assumed to be made of particles. It is important to dwell on this dichotomy for a bit. Waves and particles lie on opposite ends of a spectrum. At any given instant of time, a wave is spread out. It has a momentum, but by its very definition it makes no sense to talk of a single, definite position of a wave. A particle, on the other hand, is localized. So the statement ‘light behaves as a wave and a particle’ is inherently non-trivial. It is equivalent to saying, ‘I love and hate pineapple on my pizza’, or ‘I love science fiction and hate Doctor Who.’

But nature is weird. And light is both a particle and a wave, no matter how counter-intuitive this idea is to our tiny human brains. This is duality. And it doesn’t stop at light. In 1924, de Broglie proposed that everything exhibits wave-like behavior. Only, as things grow bigger and bigger, their wavelengths get smaller and smaller and hence unobservable. For instance, the wavelength of a cricket ball traveling at a speed of 50 km/h is approximately 10^-34 m.
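You can check this number yourself with de Broglie’s formula, λ = h/(mv). A quick sketch with illustrative numbers:

```python
h = 6.626e-34  # Planck's constant, J*s

def de_broglie(mass_kg, speed_ms):
    """de Broglie wavelength in meters."""
    return h / (mass_kg * speed_ms)

ball = de_broglie(0.16, 50 / 3.6)     # ~160 g cricket ball at 50 km/h
electron = de_broglie(9.11e-31, 1e6)  # electron at 10^6 m/s

print(f"Cricket ball: {ball:.1e} m")     # ~3e-34 m: hopelessly unobservable
print(f"Electron:     {electron:.1e} m") # ~7e-10 m: atomic scale, measurable
```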

And it is duality which leads us directly to the second principle of quantum mechanics.

Uncertainty:

The idea of uncertainty, or Heisenberg’s Uncertainty Principle, is simple: you can’t know the exact position and momentum of an object simultaneously. In popular science, this is often confused with something called the observer effect: the idea that you can’t make a measurement without disturbing the system in some unknowable way. But uncertainty is not a product of measurement, nor a limitation imposed by experimental inadequacy. It is a property of nature, derived directly from duality.

From our very short discussion about waves and particles above, we know that a wave has a definite momentum and a particle has a definite position. Let’s try to create a ‘particle’ out of a wave; in other words, let’s try to localize a wave. It’s not that difficult, actually. We take two waves of differing wavelengths (and hence differing momenta) and superimpose them. At certain places, the amplitudes of the waves add up, and at others, they cancel out. If we keep adding more and more waves with slightly differing momenta, we end up with a ‘wave-packet’, which is the closest we can get to a localized particle.

A wave-packet built from superposed waves (image from lecture notes).

Even now, there is a small, non-zero ‘spread’ in the amplitude of the wave-packet. We can say that the ‘particle’ exists somewhere in this spread, but we can’t say exactly where. Moreover, we’ve already lost information on the exact momentum of the wave, so there is an uncertainty there as well. If we want to minimize the position uncertainty, we have to add more waves, implying a larger momentum uncertainty. If we want a smaller momentum uncertainty, we need a larger wave-packet and hence automatically increase the position uncertainty. This is what Heisenberg quantified in his famous equation:

Δx Δp ≥ h/4π
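The wave-packet argument is easy to reproduce numerically. The sketch below (arbitrary units, illustrative only) superposes plane waves with different momentum spreads and shows the trade-off directly:

```python
import numpy as np

x = np.linspace(-50, 50, 2000)
k0 = 2.0  # central wavenumber (momentum, up to a factor of hbar)

for dk in (0.05, 0.2, 1.0):  # momentum spread of the packet
    ks = np.linspace(k0 - dk, k0 + dk, 200)
    # Superpose waves with slightly different momenta, as described above.
    packet = np.sum([np.cos(k * x) for k in ks], axis=0) / len(ks)
    # Crude estimate of spatial extent: where the packet stays above half max.
    width = x[np.abs(packet) > 0.5].ptp()
    print(f"momentum spread {dk:.2f} -> position spread ~{width:.1f}")

# The larger the momentum spread, the narrower the packet, and vice
# versa: exactly the trade-off Heisenberg's inequality quantifies.
```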

And so we come to probabilities. At micro-scales, statements such as ‘the particle is in the box’ are meaningless. What we can say is, ‘the particle has a 99% probability of being in the box’. From Newton’s deterministic universe (which is still valid at large scales) we transition to quantum mechanics’ probabilistic one, where impossible-sounding ideas become reality.

The Doctor once said, “The universe is big, it’s vast and complicated, and ridiculous. And sometimes, very rarely, impossible things just happen and we call them miracles.” Or you know, at small enough scales, a manifestation of quantum mechanics. And that is fantastic.

TTK Outreach: A Beginner’s Guide to Dark Matter

January 17th, 2018

In the post-truth society that we live in, it is easy to fall down the rabbit hole of doubting every scientifically held belief. To wonder if NASA is hiding proof of intelligent extra-terrestrial life (they’re not), or if people at CERN are rubbing their hands plotting something nefarious (nope), or whether the Big Bang theory is a Big Bad Lie (it really…isn’t). But don’t worry, we at TTK have got you covered. Every Wednesday we answer your questions live on Twitter, and every whenever-this-author-stops-procrastinating-day we give you a more elaborate explanation of some of the most frequently asked questions.

Today on the agenda: Dark Matter — what it is and why you should be reasonably sure of its existence.

Simply stated, dark matter is a kind of matter that doesn’t interact with light. This means we can’t “see” it in the conventional sense. As you would expect, this makes studying dark matter a bit difficult. But if there is one redeeming quality in humankind, it is that we don’t shy away from the seemingly impossible. Of course, the question remains, if we can’t see dark matter and if it doesn’t interact all that much with other things, how do we know that it exists in the first place? The answer comes to you in four parts.

1. Galaxy Rotation Curves

Some of the earliest indirect evidence of dark matter comes from galaxy rotation curves. A rotation curve is a plot of the orbital speed of the stars or visible gas in a galaxy as a function of their distance from the galactic center. If we assume that the total mass of a galaxy is composed only of normal or ‘visible’ matter, the farther we move from the center (where most of this mass is concentrated), the lower the orbital speeds should get. This is what happens in the Solar System: since the Sun accounts for most of the mass, the planets farthest from it revolve more slowly than the ones close by.

However, measurements of galactic rotation curves don’t agree with this prediction at all. Instead of decreasing with distance, the orbital speeds of outlying stars appear to either settle at a constant value or increase. This points towards the possibility of an additional contribution to the mass of a galaxy from something we can’t see. Maybe something dark?
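A toy calculation shows the mismatch. If all the visible mass sits near the center, Newtonian gravity predicts orbital speeds falling like 1/√r; adding a crude dark halo whose enclosed mass grows with radius flattens the curve at large distances. All numbers below are purely illustrative:

```python
import numpy as np

G = 4.30e-6               # Newton's constant in kpc * (km/s)^2 / M_sun
M_visible = 1e11          # visible mass in solar masses, near the center
radii = np.array([2.0, 5.0, 10.0, 20.0, 40.0])  # kpc

# Visible matter alone: Keplerian fall-off, v = sqrt(G M / r).
v_visible = np.sqrt(G * M_visible / radii)

# Crude isothermal halo: enclosed mass growing linearly with radius,
# which is exactly what a flat rotation curve requires.
a = 1e10  # halo mass per kpc, M_sun/kpc (illustrative)
v_with_halo = np.sqrt(G * (M_visible + a * radii) / radii)

for r, v1, v2 in zip(radii, v_visible, v_with_halo):
    print(f"r = {r:5.1f} kpc: visible only {v1:6.1f} km/s, with halo {v2:6.1f} km/s")
# The visible-only speeds keep dropping like 1/sqrt(r); with the halo
# they level off, which is what the measured curves actually look like.
```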

2. The Bullet Cluster

Bullet Cluster

Another smoking gun for dark matter is the Bullet Cluster. It is composed of two colliding galaxy clusters, the smaller of which looks like a bullet. Galaxy clusters are busy places, and when they collide, chaos ensues. The stars, far apart as they are, mostly survive the collision without a story to tell (aka they pass through). The particles in the galactic plasma, however, smash and ricochet and radiate a lot of energy.

Galactic plasma makes up most of the baryonic (visible) mass of a cluster, so we can derive a mass profile for the cluster from this radiated energy. We can also model the mass profile by studying the lensing effects of the clusters. Because massive objects bend light, we can figure out their mass distribution by studying how they distort light from background galaxies. If the entire mass of a cluster were just the baryonic mass, these two mass profiles should coincide. What we find instead is that they are clearly separated. In the image above, the pink regions are where the baryonic mass is present; the blue regions show where the total mass of each cluster is concentrated. The near-zero overlap between the two implies the presence of a non-baryonic, invisible source of mass. Moreover, it implies that most of the mass of a cluster is non-baryonic, or dark. (In fact, roughly 80% of the universe’s matter content is dark!)

Quick Side Note: Keep in mind that the colors are for purely representative purposes! The radiation emitted by the galactic plasma doesn’t fall within the visible spectrum. Similarly, the blue is where the experiments tell us dark matter is concentrated.

3. Large Scale Structure Formation

Sloan Digital Sky Survey

An interesting question to ask cosmologists is: why does the universe have structure? How do we go from a more or less homogeneous particle soup to well-defined clusters of galaxies, and even to clusters of clusters of galaxies? The simple answer to this question is fluctuations. Tiny fluctuations right after the Big Bang led to overdensities and underdensities of matter. As the universe expands, these fluctuations grow on account of gravity, and we end up with clumps of matter which would eventually form stars, galaxies, galaxy clusters, etc. There is one small problem with this line of reasoning, though. We know that the early universe was dominated by radiation (or light). And light exerts pressure. So even as the fluctuations would cause matter to clump, radiation would cause it to homogenize. In the end, the fluctuations would be nearly wiped out and we wouldn’t have the kind of structure we see today.

Dark matter solves this problem. It is massive and it doesn’t interact with light. Formation of dark matter lumps would aid the ‘clumping’ of normal, baryonic matter and give rise to structure despite the homogenizing effect of radiation.

4. Cosmic Microwave Background

CMB

The CMB can be regarded as a picture of the baby universe. And though at first glance it might look like random splotches of paint, it provides deep insights into what the universe looked like billions of years ago. Any cosmological model that we create has to be in agreement with this map. By specifying initial conditions — for instance, the percentages of matter, dark matter and radiation — we should end up with the density fluctuations observed here. The best model we currently have is ΛCDM. As you might have guessed, the CDM here stands for cold dark matter. It is only when we include dark matter in the model that our predictions line up with the data.

 

These are just a few of the reasons we believe that dark matter exists. And even though we haven’t detected anything like a dark matter particle (yet), everywhere we look the universe seems to suggest that it must be there. If you still don’t understand why you should believe in it, (and as a reward for reading these 1000+ words), here’s a (dark) analogy:

The Higgs Boson in 2015

October 23rd, 2015

The Higgs boson was discovered in 2012 at CERN’s Large Hadron Collider (LHC), or more precisely by the LHC experiments ATLAS and CMS. The discovery of the Higgs resonance is definitely a milestone in particle physics, and two of the fathers of the Higgs mechanism, Peter Higgs and Francois Englert, were awarded the Nobel Prize in Physics in 2013.

After the discovery three years ago, there was immediately a decisive question to be answered: does the discovered particle have all the properties predicted by the Standard Model (SM) of particle physics? Or, turning the question around in more scientific terms: are there any statistically significant deviations from the SM predictions which can be identified with the recorded proton-proton collisions? Any such deviation would of course call for physics beyond the SM. Until the end of LHC run 1 at the beginning of 2013, there were great efforts to collect as many Higgs events as possible. And similar efforts have been invested in recent years to extract as much information as possible from the recorded data.

Higgs coupling strength measurements

Signal strength measurements for different Higgs-production channels, where a signal strength of 1 is the SM expectation (taken from ATLAS CONF-2015-044)

Only recently has the Higgs legacy of run 1 been finalized, with the combination of the ATLAS and CMS data. The so-called signal strength for different production channels, as well as the global signal strength, is shown in the diagram on the right relative to the SM prediction. So far, the discovered particle does not give any hints of new physics beyond the SM. It simply looks more or less as predicted decades ago.
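For intuition on what a ‘combination’ means in its simplest possible form, here is a naive inverse-variance average of hypothetical signal-strength measurements. The real ATLAS+CMS combination is far more involved (correlated systematic uncertainties, full likelihoods), so this is only a sketch:

```python
# Hypothetical signal strengths mu with Gaussian uncertainties,
# e.g. from different production channels.
measurements = [(1.15, 0.25), (0.95, 0.20), (1.10, 0.30)]

# Inverse-variance weighting: more precise channels count more.
weights = [1 / err**2 for mu, err in measurements]
mu_comb = sum(w * mu for (mu, err), w in zip(measurements, weights)) / sum(weights)
err_comb = sum(weights) ** -0.5

print(f"combined mu = {mu_comb:.2f} +/- {err_comb:.2f}")
# A combined value compatible with mu = 1 means: consistent with the SM.
```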

For these and similar measurements, the interplay between theory predictions and the experimental analysis is crucial. In mid-October, experimentalists and theorists working on Higgs physics met at the conference “Higgs Couplings 2015”, which was hosted by the IPPP in Durham and took place in the beautiful medieval Lumley Castle close to Durham. The latest run 1 measurements were presented and discussed. But run 1 is already part of the past. Everybody is looking forward to seeing the measurements from run 2 and gearing up for the upcoming analyses.

Run 2 has already started this year with the record-breaking proton-proton energy of 13 TeV (run 1 provided 7 and 8 TeV collisions). In 2015, a year to learn how to operate at this new energy and with collisions every 25 nanoseconds, not enough data will be recorded to make a major step forward in the precision of Higgs measurements (this is very different for other new-physics searches, e.g. for multi-TeV resonances). However, the coming years of run 2 will certainly be exciting for Higgs physics.

So far, measurements are still statistically limited, i.e. by the relatively small number of recorded Higgs bosons. However, residual uncertainties within the theoretical predictions will soon become a major player in the quest for ultimate measurements of Higgs properties, and therefore also in the quest for physics beyond the SM in the Higgs sector. Hence, improving theory predictions and making them available for the analysis of the data is more important than ever in the field of Higgs physics, and it is one of the research topics at our institute. Conferences like “Higgs Couplings 2015” provide an important forum for discussions on these topics between experimentalists and theorists.

So, let’s see what nature will teach us about the Higgs in the coming years.

Exploring dark matter with IceCube and the LHC

October 2nd, 2015

Various astrophysical and cosmological observations point towards the existence of dark matter, possibly a novel kind of fundamental particle, which does not emit or reflect light, and which only interacts weakly with ordinary matter.

The three ways of searching for dark matter.

If such a dark matter particle exists, it can be searched for in different ways: direct detection looks for the elastic scattering of dark matter with nuclei in highly sensitive underground experiments, as Earth passes through our galaxy’s dark matter halo. Indirect detection experiments on Earth or in space look for cosmic rays (neutrinos, photons, or antiparticles) from the annihilation of dark matter particles in the centre of the Galaxy or in the Sun. And last but not least, if dark matter interacts with ordinary matter, it may be produced in high-energy proton collisions at the LHC.

To explore the nature of dark matter, and to be able to combine results from direct, indirect and collider searches, one can follow a more model-independent approach and describe dark matter and its experimental signatures with a minimal number of new particles, interactions and model parameters. Such simplified or minimal models allow us to explore the landscape of dark matter theories, and serve as mediators between the experimental searches and more complete theories of dark matter, like supersymmetry.

 

A mono-jet event in the ATLAS detector.

About 10^27 dark matter particles per second may pass through the Sun. They can lose some energy through scattering off protons and eventually be captured in the core of the Sun by the Sun’s gravitational pull. Dark matter particles in the Sun would annihilate with each other and produce ordinary particles, some of which decay into neutrinos. Neutrinos interact weakly with matter, can thus escape the Sun, and could be observed by the neutrino telescope IceCube near the South Pole. Neutrinos therefore provide a way to search for dark matter in the core of the Sun.

At the LHC, dark matter may be produced in high-energy proton collisions. As dark matter particles interact at most weakly with ordinary matter, they would leave no trace in the LHC detectors. However, dark matter (and other novel weakly interacting particles) can be detected by looking for exotic signatures, where a single spray of ordinary particles is seen without the momentum and energy balance characteristic of standard particle collisions (so-called mono-jet events, see right figure).

 

Limits on the masses of the dark matter particle and the mediator.
We have recently joined forces with members of the RWTH IceCube team to explore dark matter searches through neutrinos from the Sun and through dark matter production at the LHC, see http://arxiv.org/abs/1411.5917 and http://arxiv.org/abs/1509.07867. We have considered a minimal dark matter model where we add only two new particles to the Standard Model: a dark matter fermion, and a new force particle which mediates the interaction between the dark matter fermion and ordinary matter. As no signal for dark matter has been observed, we can place limits on the masses of the dark matter particle and the new force particle, see figure to the left. We find a striking complementarity of the different experimental approaches, which probe particular and often distinct regions of the model parameter space.

Thus only the combination of future collider, indirect and direct searches for dark matter will allow a comprehensive test of minimal dark matter models.

Anticipating Discoveries: LHC14 and Beyond

July 17th, 2015

PhD students Leila Ali Cavasonza and Mathieu Pellen report from the workshop “Anticipating Discoveries: LHC14 and Beyond”

A few months ago, the Large Hadron Collider (LHC) in Geneva woke up from a long shut-down phase. It is now operating at a centre-of-mass energy of 13 TeV (it might reach 14 TeV in the upcoming phases). This is the first time in the history of humanity that particles have been collided at such high energy in a machine built by humans.
Run II of the LHC is thus just starting, and it is once again raising the excitement in the particle physics community. It is the right time to discuss what particles or theories could be discovered by the ATLAS and CMS detectors. In this spirit, a topical workshop organised by the Munich Institute for Astro- and Particle Physics (MIAPP) was held in Munich: “Anticipating Discoveries: LHC14 and Beyond”, from the 13th to the 15th of July.

 

In the last few days, the so-called pentaquark has been discovered by the LHCb collaboration. This is an extraordinary discovery, but particle physics theorists are after another kind of particle. Indeed, this pentaquark (a composite object made of 5 quarks, see picture to the right) was predicted many years ago by quantum chromodynamics (QCD) but had never been observed until now. What theorists are looking for are theories beyond the Standard Model. These are introduced to explain experimental and theoretical problems. In general, they predict new resonances or effects that can be traced by the experimental collaborations.

 

An ATLAS event with two jets.

During this workshop many theories, or extensions of previous ones, have been proposed. In particular, since the discovery of the Higgs boson, extensions of the Higgs sector are under high scrutiny. The beautiful theory of supersymmetry, which predicts a special relation between bosons and fermions, is still widely discussed.
In particular, extensions of its minimal version have been proposed. Finally, since we know there is a huge amount of unexplained, invisible matter in our Universe, the so-called dark matter, it is justified to propose myriads of models that could explain various anomalies. During these three days, several theories involving a non-abelian structure of the dark sector were presented. These have a distinctive phenomenology at very different scales and are currently being tested against observations.
Many theories have thus been discussed at this workshop, and all theorists are craving to find signs of their favourite theory in the next LHC run. The kind of signs they are looking for is similar to the one reported by the ATLAS and CMS collaborations: the experimental collaborations have made public an excess in the Z/W channels (see picture on the left), especially in the one where the gauge bosons decay into two jets. The future will tell us whether this is a sign of hope and the beginning of a new exciting era.

 

Leila and Mathieu

Axions, WIMPs or WISPs? Searching for dark matter

July 3rd, 2015

PhD student Mathieu Pellen reports from a dark matter workshop in Zaragoza.

The quest for the understanding of dark matter is certainly one of the greatest challenges of the 21st century. It is thus an extremely hot topic in the particle physics community. 

The conference photo.

The 11th Patras Workshop on Axions, WIMPs and WISPs was held in the beautiful and hot city of Zaragoza, Spain (21-26 June 2015). As the title indicates, the focus was on dark matter, and more particularly on axions.

Axions were originally proposed to solve the strong CP problem. They are light particles (with masses of the order of an electron-volt or even lighter). They can be detected in light-shining-through-walls experiments or in low-background underground laboratories like the one at Canfranc (which was one of the highlights of the conference). During the conference, several innovative experiments looking for axions, axion-like particles or dark photons were presented. New mechanisms predicting the existence of light particles were also proposed.

In addition to light particles, Weakly Interacting Massive Particles (WIMPs) are the best-motivated solution to account for the dark matter observed in our Universe. WIMPs are studied in three different ways. The first is their production at collider experiments such as the Large Hadron Collider (LHC, Geneva). The second is the detection of nuclear recoils produced by dark matter particles scattering off heavy nuclei in underground facilities such as the Gran Sasso laboratory in Italy. Finally, when two dark matter particles annihilate in the galaxy, they produce cosmic rays of Standard Model particles. These can be detected in satellite-based experiments such as the Alpha Magnetic Spectrometer (AMS-02, partly built at RWTH Aachen University) on the International Space Station (ISS).

My contribution to the conference focused on the last possibility. I reported exciting results from a project carried out with Leila Ali Cavasonza and Michael Krämer. AMS-02 has reported an excess in the measurement of the positron flux (red data points, left figure) compared to standard expectations from astrophysical sources (green curve, left figure). This has triggered a lot of interest recently. The reason is that anti-particles are an extremely interesting observable when searching for dark matter: they are rarely produced by standard astrophysical sources, so the discovery of excesses in anti-particle fluxes could already be a smoking gun for the existence of dark matter. Nowadays, the dark matter contribution is believed to be sub-dominant in the AMS-02 observations. However, the absence of a “bump” – as expected from a dark matter signal – in the very smooth AMS-02 spectrum is a great opportunity to set limits on dark matter annihilation cross sections.

We have derived new upper limits on the annihilation cross section using a new method that allows us to study dark matter with masses ranging from several TeV down to 1 GeV. In particular, we have focused on the impact of massive electroweak gauge bosons on these limits. Even if their contributions are limited, they are of prime importance, as they produce all Standard Model particles when decaying. I have thus shown that there is a promising complementarity between the different fluxes of anti-particles. This opens up new ways to exclude or find dark matter in the next few years using indirect detection.