At the time, the Tevatron was the most powerful particle accelerator in the world, slamming together beams of protons and antimatter protons at energies equivalent to more than 2,000 times the mass of a proton.

The Higgs boson gives particles their mass: the more strongly it interacts with a particle, the more massive that particle is. This stronger interaction has another consequence. If the boson interacts with certain particles more strongly, it will also decay into them more often. So scientists looked for events in which very heavy particles were created.

The heaviest particles known at the time (and still today) are top quarks, which are about 180 times heavier than protons. Next come the Z bosons and then the W bosons, both in the ballpark of 90 times heavier than protons, and then the bottom quarks, which are comparatively light, just shy of five times heavier than protons.

Top quarks are just super heavy, too heavy for the Tevatron to see in conjunction with Higgs bosons, but W and Z bosons are more reasonable. So, scientists looked for Higgs bosons decaying into those particles but didn’t see any. This meant they could rule out the possibility that the Higgs boson had a mass between about 156 and 190 times that of a proton.

And that was sort of the end of the road for the Tevatron. That’s because a much more powerful accelerator called the Large Hadron Collider, or LHC, had begun operations.

This article comes directly from content in the video series The Evidence for Modern Physics: How We Know What We Know. Watch it now, on Wondrium.

The LHC is located at the CERN laboratory, just like the LEP accelerator. In fact, those two accelerators used the same tunnel. Technicians pulled out the LEP accelerator and put in the LHC one.

The LHC is designed to be seven times more powerful than the Tevatron and, although it wasn’t yet running at full potential in 2011, it was still far more powerful. Once it turned on, the Tevatron was permanently outclassed.

In the LHC, scientists smashed together beams of protons and looked for the Higgs boson in much the same way as had been done at the Tevatron. They saw even more definitively that the mass of the Higgs wasn’t super high. LEP had said the Higgs had to be heavier than 122 times the mass of a proton, and the Tevatron had said that it was probably below 156 times the proton’s mass.

Using the first hints of LHC data, scientists quickly ruled out much the same range that had been excluded using Tevatron data: between 166 and 187 times the mass of the proton. That was expected, but it was a nice independent confirmation.

Things really heated up in 2012, when there was a lot more data. And, on July 4, 2012, the two big experiments stopped saying what masses the Higgs boson didn’t have and said definitively that the Higgs boson had been found and that its mass was about 133 times the mass of a proton.
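
For readers who want to check the arithmetic, here is a short Python sketch. It assumes the commonly quoted measured Higgs mass of roughly 125 GeV and the standard proton rest-mass energy; those reference values are assumptions, not figures stated in the article.

```python
# Sanity check of the "about 133 times the mass of a proton" figure.
# Both masses below are standard reference values (assumed, not from the article).
PROTON_MASS_GEV = 0.9383   # proton rest-mass energy, GeV
HIGGS_MASS_GEV = 125.0     # measured Higgs boson mass, GeV

ratio = HIGGS_MASS_GEV / PROTON_MASS_GEV
print(round(ratio))        # about 133
```

The ratio comes out to roughly 133, matching the figure quoted above.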

Both experiments got the same answer, which was also nice.

In 2013, Peter Higgs and François Englert shared the Nobel Prize for their predictions of the Higgs field and Higgs boson. Englert had collaborated with Robert Brout, but Brout died in 2011 and didn’t live to see the discovery. And since the Nobel Prize can go to a maximum of three people, Guralnik, Hagen, and Kibble lost out.

So, where do we stand in the search for the Higgs? Technically, what the scientists announced in 2012 was that they found a particle that was consistent with being a Higgs boson. It wasn’t definitive.

But, in the ensuing years, they have done a lot of tests, verifying that the new particle has zero subatomic spin, which is what the Higgs boson has to have. They have also looked at its rate of decay into a host of subatomic particles, for example the tau lepton, bottom quark, W and Z bosons, and even top quarks. It’s a tricky business talking about the last three, since they are too heavy to be daughters of the Higgs boson according to classical physics. But quantum mechanics allows things to happen that are classically impossible, and the measurements are in excellent agreement with Higgs theory.

With these additional measurements, the world’s scientific community has concluded that we have indeed found the Higgs boson predicted back in 1964. It took half a century to accomplish that, but that’s simply the nature of research at the frontier of knowledge.

In 1981, a particle accelerator called the S-p-pbar-S began operations at the CERN laboratory in Switzerland. The accelerator only accelerated the particles; researchers needed detectors to inspect the collisions for the signature that W and Z bosons were created. They actually built two, called UA1 and UA2.

The two experiments started collecting data in 1981 and, for the first few months, the accelerator underwent teething pains and delivered small amounts of beam. But after a shakedown period, things picked up. More and more beam was being delivered and the two experiments were furiously analyzing the data.

In January of 1983, the UA1 experiment announced that they had unambiguously discovered the W boson. It had a mass of 85 times that of the proton; very nearly the same mass as a rubidium atom. In June of 1983, the two experiments announced that they had discovered the Z boson, with a mass 96 times that of a proton, or the mass of a molybdenum atom. Both rubidium and molybdenum are extremely heavy.
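
The rubidium and molybdenum comparisons can be checked with a few lines of Python. The boson masses and atomic masses below are standard reference values in GeV and atomic mass units (u); they are assumed for the sketch, not taken from the article.

```python
# Rough check of the W- and Z-boson mass comparisons quoted above.
GEV_PER_U = 0.9315        # energy equivalent of one atomic mass unit, GeV
W_MASS_GEV = 80.4         # W boson mass, GeV (reference value)
Z_MASS_GEV = 91.2         # Z boson mass, GeV (reference value)
RUBIDIUM_U = 85.47        # rubidium atomic mass, u
MOLYBDENUM_U = 95.95      # molybdenum atomic mass, u

# Fractional difference between the W boson and a rubidium atom: about 1%.
print(abs(W_MASS_GEV - RUBIDIUM_U * GEV_PER_U) / W_MASS_GEV)
# Fractional difference between the Z boson and a molybdenum atom: about 2%.
print(abs(Z_MASS_GEV - MOLYBDENUM_U * GEV_PER_U) / Z_MASS_GEV)
```

Both differences come out at the percent level, which is why “very nearly the same mass” is a fair description.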

With the discovery, a good amount of electroweak theory had been confirmed. Scientists had the photon and the W and Z bosons under their belt. The final missing piece was the Higgs boson.

The search for the Higgs boson was very hard. And the reason was that the Higgs theory didn’t really nail down the expected range for its mass. The theory predicted that the mass of the Higgs boson was somewhere between 10 and 1,000 times heavier than a proton.

So, in the early years, not much progress was made. There were some simple limits from experiments and measurements in the 1970s and 1980s, and they determined that if the Higgs boson existed (which wasn’t guaranteed in those days), its mass was over 20 times heavier than a proton.

The first real chance to look for the Higgs boson didn’t really begin until about 1991 when the CERN laboratory turned on a new and much bigger accelerator, called LEP. LEP stands for Large Electron Positron, which of course means that the accelerator was large and collided electrons and positrons. Positrons, of course, are antimatter electrons.

Initially, the LEP accelerator ran at exactly the right energy to make tons and tons of Z bosons. They studied the Z bosons closely, and it will be a long time before anyone surpasses their measurements. But the experiments also had something to say about the Higgs boson. Because there was no indication of the Z boson decaying into a Higgs boson, scientists knew that the Higgs boson—if it existed—had to have a mass greater than that of the Z boson, or about 96 times that of a proton. We knew this in the early 1990s.

The CERN accelerator scientists made a series of upgrades to the LEP accelerator, eventually more than doubling its operating energy. In the end, the accelerator ran at an energy a smidge over 220 times what it would take to make a proton.

Since the mass of the Higgs boson was unknown, several different ways to look for it were attempted by LEP scientists. They looked for the electron and positron to simply make a single Higgs boson. They also looked for cases where the Higgs boson was made in conjunction with a W or Z boson. Depending on the final mass of the Higgs boson, any one of those processes could be the most common way to make one.

The LEP accelerator ran until the year 2000. There was a bit of excitement in the last few weeks, when researchers thought they saw hints of a Higgs boson with a mass of about 122 times that of a proton, and they even got the accelerator’s run extended by a month or two. But that hint evaporated in light of more data, as so many hints do.

When the LEP accelerator finished running, its scientists announced that they could have found the Higgs boson if it had a mass of up to 122 times that of a proton, but they found nothing. Accordingly, they concluded that if the Higgs boson existed, it would have to be heavier than that. And that was the end of the LEP era.

It was 1964 when Peter Higgs and his five compadres published three influential papers on the subject. Actually, Higgs published two and it was his second paper in which he predicted what we now call the Higgs boson.

The story is a little complicated, as several other scientists contributed to the development of modern electroweak theory, which, in the abstract, unifies electromagnetism and the weak force with massless force-carrying particles and then, with the addition of the Higgs field, transforms into the modern picture, in which the weak nuclear force and electromagnetism act very differently.

In the 1960s, the whole thing was simply theoretical. There was no experimental evidence that confirmed the theory, beyond the simple fact that the weak nuclear force was very weak and had a very short range. But the predictions of theory were quite clear. There should be a massless photon, which scientists have known about for over a century, but there should also be a massive neutral particle called the Z boson, two massive electrically charged particles called the W bosons, and finally a massive and neutral particle called the Higgs boson.

The exact properties of these particles weren’t completely known. Their electrical charge was known, as was their subatomic spin, but the masses weren’t known, at least not precisely. However, the range and strength of the weak force gave a hint of the mass of the W and Z bosons. They should be about 100 times heavier than the proton. And that’s kind of a crazy thing when one thinks about it. The W and Z bosons shouldn’t have any internal structure and each of them should have the mass in the ballpark of an entire silver atom, which has 47 protons and 60 neutrons.
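
The silver-atom comparison is easy to check. The sketch below treats every nucleon as having roughly the proton’s mass (a simplifying assumption, since neutrons are slightly heavier and binding energy is ignored), using the standard proton rest-mass energy as a reference value.

```python
# Rough check: a W or Z boson weighs about as much as a silver atom
# with 47 protons and 60 neutrons.
PROTON_MASS_GEV = 0.9383          # proton rest-mass energy, GeV (reference value)

silver_nucleons = 47 + 60         # protons plus neutrons in the silver atom
# Simplifying assumption: every nucleon weighs about one proton mass.
silver_mass_gev = silver_nucleons * PROTON_MASS_GEV

print(round(silver_mass_gev))     # about 100 GeV, in the W/Z ballpark
```

So a single structureless boson carries roughly the mass-energy of an entire 107-nucleon atom, which is the “crazy thing” the paragraph describes.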

Naturally, scientists went looking for them, and it took a long time; the technology wasn’t ready. It took quite a few years until a particle accelerator could collide beams of particles with enough energy to have a shot at making W and Z bosons.

It was in 1981 that a particle accelerator began operations at the CERN laboratory in Switzerland. It was called the S-p-pbar-S. The name tells us something about the accelerator.

It collided a beam of protons and antimatter protons together at a very high energy of 540 GeV. That number is equivalent to the mass energy of 575 protons, or a fair bit more than the energy it would take to make two uranium atoms. For the era, it was a staggering achievement.
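
Both comparisons follow from simple division. The sketch below uses standard reference values for the proton mass and the uranium-238 atomic mass; those numbers are assumptions brought in for the check, not figures from the article.

```python
# Back-of-the-envelope check of the S-p-pbar-S collision energy.
PROTON_MASS_GEV = 0.9383            # proton rest-mass energy, GeV
URANIUM_238_GEV = 238.05 * 0.9315   # U-238 atomic mass (u) times GeV per u

beam_energy_gev = 540.0             # total collision energy quoted above

protons_equivalent = beam_energy_gev / PROTON_MASS_GEV
uranium_pairs = beam_energy_gev / (2 * URANIUM_238_GEV)

print(int(protons_equivalent))      # 575 proton masses
print(uranium_pairs > 1)            # True: more than two uranium atoms' worth
```

Two uranium-238 atoms come to roughly 443 GeV of mass energy, so 540 GeV is indeed “a fair bit more.”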

The S-p-pbar-S accelerator only accelerated the particles. Researchers needed detectors to inspect the collisions for the signature that W and Z bosons were created. And they actually built two, called UA1 and UA2. UA is just an acronym meaning ‘underground area’.

In particle physics, there are often two experiments built at an accelerator. There are several reasons. First, the two detectors are made by different groups, using different technologies. Having such different detectors protects against the weaknesses of any specific design. Second, having the two experiments taking data at the same time pushes them to work fast and smart. In high stakes science, there is first and not-first. Second never wins the glory and so nobody wants to be second.

A third reason to have two detectors is confirmation: if one experiment sees something, the second can confirm what the first saw. Science requires confirmation. And finally, the two experiments are there for safety. If one experiment has some sort of accident (say, a fire), the accelerator can continue to run and make discoveries.

So, UA1 and UA2 were competitors, hot on the trail of the W and Z boson.

Both experiments wanted the other experiment to do well, but not quite as well as they did. They both were looking for specific signatures of the W and Z bosons.

W bosons should decay into an electron and an electron neutrino, or a muon and a muon neutrino. This decay pattern is because the W bosons have electrical charge, so the daughters must also have electrical charge. And the W boson transmits the weak force, so it can make neutrinos. There are other ways in which it can decay, but those are the easiest ones to see and understand.

The neutrino doesn’t interact very often, so the neutrino escapes the detector undetected. Thus, the experimental signature of a W boson is an electron or muon on one side of the detector and nothing on the other. That nothing results in an energy imbalance, which is easy to see.

In contrast, the Z boson would decay into an electron and positron, or a muon and antimatter muon, or again, a number of other ways that are harder to see and aren’t part of the discovery story. The Z boson is electrically neutral, so it has to decay into a positive and negative particle and the electron and muon decay chains are just two that are easiest to detect. Note that in this case there is no neutrino involved and therefore no missing energy.

The story of the Higgs boson and the Higgs field begins in the early 1960s. The scientists of the time knew about four forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force.

We know what gravity is. Electromagnetism is responsible for electricity and magnetism, and also chemistry and how light works. The strong nuclear force ties the protons and neutrons together in the center of atoms, as well as a few other things, and the weak nuclear force causes some sorts of radioactive decay, most notably the emission of neutrinos.

In the early 1960s, researchers discovered that they could come up with equations that unified the weak nuclear force and electromagnetism. Unification has a specific meaning in physics. It basically means that two things that seemed to be different came from a single cause.

A historical example was when Isaac Newton realized that the motion of planets across the sky and the reason that Cheerios fall when a baby drops them from a highchair come from a single principle called gravity. Even his name for it, the theory of universal gravitation, makes it clear that he had unified two ideas. A more modern unification came in the 1870s, when James Clerk Maxwell unified electricity and magnetism into a combined theory called electromagnetism.

And, in the 1960s, scientists found a way to unify electromagnetism and the weak nuclear force into a combined force called the electroweak force.

However, there was a problem with the electroweak unification. It predicted four particles that carried the electroweak force. Furthermore, it predicted that the four particles all had zero mass. That last one is a big deal because a force-carrying particle with zero mass means the force has an infinite range.

The most familiar such particle is the photon, which transmits the force of electromagnetism, and because we can see distant stars, we know that the range of the photon is infinite.

And yet the weak force certainly doesn’t have infinite range. In fact, even at the time, researchers knew that the weak force only extended over a distance about one one-thousandth the size of a proton.

So, this could well have been the death knell of electroweak theory. It predicted that the weak force should have infinite range, just like electromagnetism. In contrast, experiments showed that the weak force had a very short range. And (this is important) if the weak force had a short range, the force-carrying particle for the weak force couldn’t be massless. It had to be massive.

And that’s where the Higgs field and boson come in.

In 1964, Peter Higgs wrote a paper that hoped to solve the problem. He proposed that an energy field permeates the entire universe; it is now called the Higgs field. This field would interact with some particles and give them mass; other particles would ignore the field, and those particles would be massless.

When this Higgs field idea was applied to the electroweak theory, the outcome was that one particle didn’t interact with the Higgs field and that was the massless photon. Another outcome was that there was a heavy and electrically neutral particle that we now call the Z boson. That particle transmitted the weak nuclear force. There were also two massive electrically charged particles, one negative and one positive, that also transmitted the weak force. These are now called the W-plus and W-minus bosons. We often just clump them together and call them the W boson and ignore the fact that there are two particles with opposite charge.

Another prediction of the Higgs theory was that there exists yet another particle called the Higgs boson. That was the particle that was discovered in 2012.

Besides those of Peter Higgs, two more papers with different facets of the same idea were published in the same year. Robert Brout and François Englert wrote one, while Gerald Guralnik, Carl Hagen, and Tom Kibble wrote another.

However, mostly as a historical accident, scientists combine the ideas of all six physicists and call it the Higgs field. If one really needs a reason why Higgs was singled out, it was he who noted that, if the Higgs energy field idea was right, there should also be a then-undiscovered particle, now called the Higgs boson.

While the announcement in 2012 was about the Higgs boson, it’s the Higgs field that gives mass to particles. Some particles interact with the field and get mass. The Higgs boson is just a vibration of the field, like a wave on a guitar string. In that analogy, the string is the Higgs field and the vibration is the boson.

One of the mysteries of physics is just why gravity is so weak. We don’t know why it is so. However, an idea has been put forth that there are additional dimensions of space beyond the familiar three of left/right, up/down, and backward/forward. Now, that sounds like a silly statement, because we quite clearly see only three dimensions. But there’s an explanation: maybe the extra dimensions are very small. That’s an odd idea, but we can sketch it by analogy.

Let’s imagine a tightrope walker. They can go in one dimension: forward or backward. That’s it, for a human anyway. For an ant, things are different. An ant can walk along the rope as we would, but it can also walk around the rope. For ants, a rope is two dimensional, not one dimensional. There is the same long dimension that we inhabit, but there is also a much smaller dimension wrapped around the rope. And that’s the basic idea of extra dimensions: at each familiar point in space there could be one, two, three, or more tiny dimensions, too small for us to see, but maybe something that a subatomic ant, so to speak, could travel in.

The Evidence for Modern Physics: How We Know What We Know. Watch it now, on Wondrium.

Thus, extra dimensions are the first hypothesis. The second hypothesis is that maybe gravity can enter the small dimensions, but the other forces can’t. And if those two conjectures are true, then we have a possible explanation for why gravity is so weak.

Let’s take a moment to compare something familiar: one dimension versus two, both of them big. Imagine a big, empty, flat plane with a long, straight road, and suppose we bring helicopter load after helicopter load of people and have them disembark at the same place. If the people are forced to walk along the road and we’re standing alongside it, we’ll see lots of people go by. There’s just no other place for them to go.

But suppose the people can walk off in whatever direction they want. If we stand in the same spot by the side of the road, we might not see many people pass us at all. Those people will wander off this way or that way, and none of those paths might come near us. In both cases, the same number of people were brought in by helicopter, but the people simply have more places to go if they can move in two dimensions.

That’s a case of comparing one versus two dimensions, and it works roughly the same way for gravity and extra dimensions. If gravity can access more dimensions than the other forces, then gravity won’t have to pass our way, so to speak, and for us, gravity will seem weak, even though it isn’t.

If gravity is really strong but just seems weak, it is because it can go into more dimensions. So then, what happens if we were small enough to see the other dimensions? Well, then, we’d see gravity’s true nature, which is to be strong; and, if we see strong gravity, we could see a microscopic manifestation of the strongest gravity we know: a subatomic black hole.

And that’s what scares people. After all, they have heard that black holes suck up the matter around them and they grow. Thus, they imagine that if we make a subatomic black hole, it will eat nearby matter and eventually consume the entire Earth. And that, of course, would be terrifying.

Fortunately, there are counterarguments to these worries. For instance, Stephen Hawking realized that black holes can radiate via what we now call Hawking radiation. The bottom line is that small black holes evaporate very quickly through Hawking radiation, while big black holes evaporate much more slowly. So, according to Hawking, even if subatomic black holes are real, they evaporate away before becoming dangerous.
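
The “small black holes evaporate quickly” claim can be made quantitative. Hawking’s standard order-of-magnitude result is that the evaporation time grows as the cube of the mass, t = 5120 π G² M³ / (ħ c⁴). The sketch below applies that formula to a 1-kilogram black hole; the constants are standard reference values, and the formula itself is the textbook estimate rather than anything derived in the article.

```python
import math

# Hawking evaporation time: t = 5120 * pi * G^2 * M^3 / (hbar * c^4).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s

def evaporation_time(mass_kg):
    """Approximate Hawking evaporation time in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# Even a 1-kilogram black hole is gone almost instantly.
print(evaporation_time(1.0) < 1e-15)   # True: well under a femtosecond
```

A subatomic black hole would be vastly lighter than a kilogram, and the cubic dependence on mass makes its lifetime shorter still, which is why it would vanish long before it could eat anything.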

Thus, in conclusion, it’s worth remembering that the idea of subatomic black holes depends on a couple of hugely speculative conjectures, and these are that:

- extra dimensions exist
- only gravity can enter the extra dimensions

These are both pretty unlikely. Not impossible, of course, but unlikely. So, subatomic black holes are probably not real. But if they’re possible, we do need to think about whether there is any danger at all.

The real origin of this fear is just a vague and nebulous concern, arising from uneasiness with the unknown and from a suspicion of authority. It springs from distrust of government and large corporations and it sits squarely in the shadow of things like Watergate and lies about the dangers of tobacco. On the more fantastical side, it is born from beliefs about Roswell and Area 51, chemtrails and the antivax movement.

And, of course, the problem is that some of those things are real and some aren’t, and it is very difficult for most people to know which is which, especially when one is talking about things as esoteric and unfamiliar as gigantic particle accelerators. How would ordinary people know if this is something to worry about, or not?

To understand this better, we need to dive into the history of this concern and some actual scientific reasons why it is perfectly reasonable to ask the question.

So, the first public discussion of these fears arose in 1999, in an exchange of letters in *Scientific American* between Walter Wagner and Frank Wilczek. Wagner was concerned about the danger, and Wilczek was the voice of science. Wilczek, a Nobel Prize winner, was a very smart and well-educated physicist. Wagner held a doctorate in law, with a BS in biology and a minor in physics, so he did have some scientific training, but not the mastery of frontier science that Wilczek had.

There were specific worries put forth as potentially scientifically reputable concerns about the safety of particle accelerators. The first was regarding strangelets. The strangelet idea was independently proposed in 1971 by Arnold Bodmer and again in 1984 by Edward Witten. It centers on a particular thought: ordinary matter consists of atoms, which have protons and neutrons at their center, and inside protons and neutrons there are particles called quarks.

There are six types of quarks, but only two are generally found inside atoms, called up and down quarks. However, those other kinds of quarks exist. Usually the other four kinds are unstable and disappear in the wink of an eye—well, actually much faster than that. One of those unstable quarks is called the strange quark.

Now, scientists have certainly made subatomic particles with strange quarks in them. They’ve done that since the 1940s, but, they are unstable. However, what Bodmer and Witten both conjectured is that if we had enough strange quarks salted in with ordinary nuclear matter, then maybe this new matter would be stable, including the strange quarks.

That can’t happen in ordinary nuclei, mind you. But maybe if we had a ton of strange quarks, we might make some sort of super nucleus, consisting of up, down, and strange quarks. Nobody has ever seen anything like this, but Witten posited that it could be true.

Under the right conditions, like smashing nuclei together, we actually do make strange quarks. That’s real. So the theory goes that, under those conditions in large accelerators, we might make these subatomic particles, which go by the name ‘strangelets’.

Now, depending on the properties of strangelets, which, it is worth noting, are completely theoretical and have never been seen, it might be that bigger strangelets are more stable than smaller ones. And, if that’s so, then if a chunk of strangelet matter touched a chunk of ordinary matter, that ordinary matter would slowly convert over to strangelet matter.

In this way, a small bit of strangelet matter would grow bigger and bigger and, taken to the extreme, would eventually convert the entire Earth into strange matter. It’s kind of like some sort of subatomic zombie apocalypse, where one chunk infects others and pretty soon everyone is a zombie.

Thus, it’s worth underlining the fact that strangelets have never been observed. It would not be a stretch to say that they probably are not even real. After all, if they were, they would have been made during the big bang and they’d still be around. Since we don’t see them anywhere, this is probably an idea that just isn’t true.

The first public discussion of these fears arose in 1999 in an exchange of letters in *Scientific American* between Walter Wagner and Frank Wilczek. Wagner was concerned about the danger, and Wilczek was the voice of science.

Nuclear fusion and nuclear fission are two opposite ways of releasing nuclear energy. With nuclear fission, a larger atom is split into two or more smaller atoms. This occurs when a neutron collides with the larger atom, causing it to become excited and split. Nuclear fission powers nuclear reactors. Nuclear fusion, on the other hand, occurs when smaller atoms are combined into larger ones, in a process that scientists are still trying to crack.

Recently, scientists caused a fusion reaction which, for the first time, produced more energy than it took to start the reaction. In other words, it was the first net gain of energy caused by nuclear fusion. So how does nuclear fusion happen? In his video series *Nuclear Physics Explained*, Dr. Lawrence Weinstein, Professor of Physics at Old Dominion University, walks viewers through this complicated process.

The best way to explain nuclear fusion is by looking at the Sun—not literally. Nuclear fusion provides the Sun with its energy, and the Sun is mostly made of hydrogen and some helium.

“The curve of binding energy shows that if we fuse lighter elements to form heavier ones, it releases energy,” Dr. Weinstein said. “The biggest single gain comes from fusing four protons into helium-4, and the energy gained is the difference in the mass times the speed of light squared. So, four times the mass of the proton, minus the mass of the helium-4 nucleus, is about 28 million electron volts.”

Twenty-eight million electron volts is written as 28 MeV. How big is 28 MeV? Dividing 28 MeV by the rest energy of four protons gives about 0.7%, meaning that almost 1% of the mass of the hydrogen gets converted to energy when it fuses to helium. This figure will be more important later, but for now, which conditions must be met for this to happen?
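The arithmetic above can be sketched in a few lines of Python. The proton rest energy (about 938.3 MeV) is a standard reference value assumed here, not a number from the article:

```python
# Check of the mass-to-energy fraction quoted above, taking the 28 MeV
# figure at face value. The proton rest energy (~938.3 MeV) is a standard
# reference value, assumed here rather than taken from the article.
PROTON_REST_ENERGY_MEV = 938.3

energy_released_mev = 28.0   # quoted energy from fusing four protons into helium-4
fraction = energy_released_mev / (4 * PROTON_REST_ENERGY_MEV)

print(f"Fraction of mass converted to energy: {fraction:.2%}")  # ~0.75%
```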

“We need a high enough temperature so the protons can fuse, and we need enough density so there are enough protons so that they can fuse,” Dr. Weinstein said. “Nuclear fusion then only happens in the core of the star, where the temperatures and densities are high enough.”

In order to measure how much mass that nuclear fusion in the Sun converts to energy, we start with Einstein: *E=mc*^{2}. The mass consumed, then, is the energy output divided by *c*^{2}.

“The Sun puts out 4 x 10^{26} joules per second, [and] the speed of light is 3 x 10^{8} meters per second, so squared, that’s 10^{17},” Dr. Weinstein said. “When I divide the two, we find the Sun has to consume 4 x 10^{9} kilograms a second, which is 4 megatons of mass converted to energy every second.”

When it comes to the amount of hydrogen the Sun needs to convert to energy, our 0.7% figure from earlier returns. Since 0.7% is the same as 0.007, we take that 4 megatons (or 4 billion kilograms) per second, divide by 0.007 and get a total of 500 megatons per second of hydrogen converted to helium.

The Sun puts out 4 x 10^{26} watts of power, but its power density is just 2 x 10^{-4} watts per kilogram of its mass. Humans put out about one watt for every kilogram of our mass. In other words, we put out 10^{4} times more power per kilogram than the Sun, because the Sun burns its fuel much more slowly.

However, the Sun gets the last laugh here. Our energy comes from chemical reactions that release about 1 eV each, meaning the Sun puts out 10^{7} times more energy *per reaction*, and we’re burning ourselves out 100 billion times faster than the Sun does. Luckily, we refuel three times a day.
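The whole solar energy budget above can be reproduced with a short back-of-the-envelope script. The solar mass (~2 x 10^{30} kg) is a standard value assumed here for the power-density check; the other numbers come from the text:

```python
# Back-of-the-envelope solar energy budget, following the steps above.
L_SUN = 4e26        # solar power output, W (from the text)
C = 3e8             # speed of light, m/s
FRACTION = 0.007    # fraction of hydrogen mass converted to energy
M_SUN = 2e30        # mass of the Sun, kg (standard value, assumed)

mass_to_energy = L_SUN / C**2                 # kg of mass converted per second
hydrogen_burned = mass_to_energy / FRACTION   # kg of hydrogen fused per second
power_density = L_SUN / M_SUN                 # W per kg of solar mass

print(f"Mass converted: {mass_to_energy:.1e} kg/s")   # ~4e9 kg/s, i.e. 4 megatons
print(f"Hydrogen fused: {hydrogen_burned:.1e} kg/s")  # ~6e11 kg/s, roughly the text's 500 megatons
print(f"Power density:  {power_density:.0e} W/kg")    # ~2e-4 W/kg
```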

*Nuclear Physics Explained* is now available to stream on Wondrium.

Let’s think of a chart of the distribution of molecular speeds, choosing the case of nitrogen molecules at 300 Kelvin; basically, air at room temperature. The curve rises from zero to a peak at around 400 meters per second, then falls off again. If we turn up the temperature, the peak spreads out and moves to higher velocities, and if we turn it down, the peak moves to the left, to lower velocities. In general, the most probable speed is the square root of 2*kT* over *m*.
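That most-probable-speed formula can be checked directly for nitrogen at room temperature; the molecular mass and Boltzmann constant below are standard reference values, not from the text:

```python
import math

# Most probable speed v_p = sqrt(2kT/m) for nitrogen at room temperature.
K_B = 1.38e-23          # Boltzmann constant, J/K (standard value)
M_N2 = 28 * 1.66e-27    # mass of an N2 molecule, kg (28 atomic mass units)
T = 300                 # temperature, K

v_peak = math.sqrt(2 * K_B * T / M_N2)
print(f"Most probable speed: {v_peak:.0f} m/s")  # ~422 m/s, the ~400 m/s peak above
```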

Now, let’s switch to photons. Let’s imagine trapping some photons from the Sun, in a reflecting box, and measuring the wavelength of each and every one. We’ll draw a wavelength scale and divide it up into bins, like ticks on a ruler, and then keep a tally of how many photons have a wavelength within each bin. Once we collect enough photons, we see there’s a peak at around 0.6 microns. That is the most popular wavelength to have in sunlight.

The shape of the function looks like the Maxwell-Boltzmann distribution, but it’s different in detail because photons are not our everyday particles. This is called a Planck spectrum. This curve is for 5800 Kelvin, approximately the temperature of the Sun’s outer layers. The bright star Vega is hotter than the Sun—it’s closer to 9500 Kelvin—so its spectrum is shifted toward higher energies, which means shorter wavelengths. And the faint, nearby star Proxima Centauri is only about 3000 Kelvin, so its photons generally have lower energies, and longer wavelengths.

This article comes directly from content in the video series *Introduction to Astrophysics*. Watch it now, on Wondrium.

The Planck spectrum is usually expressed as the flux per unit wavelength, the so-called flux density. Flux density is power per unit area per unit wavelength.

When we measure the flux density of the Sun, as a function of wavelength, we find it’s a pretty good fit to a theoretical Planck spectrum. It peaks at around 2 kilowatts per square meter per micron, at a wavelength of half a micron.

Why doesn’t it fit exactly? The Planck spectrum describes the radiation we get from particles that have been knocking around long enough to reach a constant temperature: they’re in thermodynamic equilibrium. It’s often called a ‘blackbody’ spectrum, because technically, the derivation relies on the material being a perfect absorber of photons, and therefore ‘black’.

The Sun, or any other real object, does not meet those criteria exactly. The Sun is not all at one temperature; it gets hotter as we go deeper. And the Sun’s material is not perfectly absorbing. But the spectrum of the Sun and other stars are nevertheless reasonably well described by the Planck function.

Let’s look at the Planck spectrum on logarithmic axes. That way, we can let the wavelength scale range over a factor of 1000, from ultraviolet to infrared, and we can let the flux density scale over a factor of a trillion.

As we increase the temperature, the curve lifts up vertically. Hotter sources produce more radiation at all wavelengths. The area under each curve—the integral of flux density over wavelength—is the total flux, which is equal to *sigma* times *T* to the 4th. That’s the Stefan-Boltzmann law. We double the temperature, and the flux rises by a factor of 16.

Also, as we increase the temperature, the peak of the spectrum shifts to shorter wavelengths. At room temperature, 300 Kelvin, almost all the energy comes out in the infrared; it peaks at around 10 microns. As we dial up the heat, the peak shifts to shorter wavelengths. That makes sense because we expect the typical photon energy, *hc* over *lambda*, to be on the order of *kT*; that implies *lambda* should be of order *hc* over *kT*: it should be inversely proportional to temperature.

When we do the math exactly, we find that the peak of the spectrum occurs when *lambda* is about one-fifth of *hc* over *kT*. That’s called Wien’s law. We can also write it as a scaling relation: *lambda*-peak equals 10 microns times (*T* over 300 Kelvin) to the minus one power.
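Wien’s law, in the one-fifth of *hc* over *kT* form, is easy to evaluate for the temperatures mentioned here. The constants are standard SI values, assumed rather than quoted from the text:

```python
# Wien's law, written as lambda_peak ~ (1/5) * hc / (kT), evaluated for a
# few temperatures from the text. Constants are standard SI values.
H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
K_B = 1.38e-23   # Boltzmann constant, J/K

def wien_peak(temperature_k):
    """Peak wavelength (in meters) of a blackbody at the given temperature."""
    return 0.2 * H * C / (K_B * temperature_k)

for name, t in [("Sun", 5800), ("Vega", 9500), ("Room temp", 300), ("CMB", 2.7)]:
    print(f"{name}: {wien_peak(t) * 1e6:.3g} microns")
# Sun lands near 0.5 microns, room temperature near 10 microns,
# and 2.7 K near 1 millimeter, matching the figures in the text.
```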

We are constantly bathed by photons whose spectrum follows the Planck function with an accuracy better than one part in 10,000, and a temperature of 2.7 Kelvin. According to Wien’s law, that corresponds to a wavelength of 1 millimeter, in the microwave band of the spectrum.

Why is the universe permeated with this microwave blackbody radiation? It’s a clue that at some point in the past, the universe itself was a ‘gas’ of particles at a single temperature in thermodynamic equilibrium, long before it became the place we know today, with tiny pockets of extreme heat and vast expanses of frigid cold. This so-called cosmic microwave background radiation is some of the best evidence we have for the Big Bang.

The first thing to know is that photons, unlike material particles, don’t collide with each other. They sail right through each other. The only things photons interact with are charged particles. So, in order to randomize the positions and energies of photons, you need to have charged particles around, which are themselves in thermodynamic equilibrium. So, let’s assume we fill a box with charged particles that are colliding all the time, producing momentary accelerations, and thereby producing and absorbing photons.

Let’s start our comparison with the average energy per particle. For an ideal gas, it’s 3/2 *kT*. For the photons, it turns out to be 2.7 *kT*. That’s not so weird. They’re both proportional to temperature; there is just a different numerical constant in front.

Things get weird, though, with the number density. It goes without saying that for an ideal gas in a closed box, *n* is constant; the particles don’t just spontaneously pop out of nowhere, or vanish. Even if we add energy, speeding up the particles, their number stays the same.

But photons do just pop out of nowhere whenever a charged particle accelerates. The particle flings away some of its own energy in the form of photons. Likewise, a photon vanishes when its energy is absorbed by a charged particle. So, for photons, we shouldn’t expect *n* to be a constant. If we inject more energy into the gas, speeding up the particles, the magnitude of their accelerations will rise, and they’ll produce more photons.

It turns out that the number density of photons rises as temperature to the 3rd power. The number density *n* is the cube of 3.9 *kT* over *hc*.

Next, let’s compare energy density. For the ideal gas, the energy density *u* equals 3/2 *nkT*, so it’s proportional to *nT*. For photons, *n* varies as *T*-cubed, and so if energy density varies as *nT*, we might expect the energy density of photons to vary as *T* to the 4th power. And it does. The constant of proportionality is traditionally written 4 *sigma* over *c*, where *sigma* is the Stefan-Boltzmann constant.

Then comes pressure. For the gas, pressure equals *nkT*, the ideal gas law. For photons, again, *n* itself goes like *T*-cubed, so we might expect pressure to go like *T* to the 4th power, and it does. In this case, the proportionality constant is 4-*sigma* over 3*c*.
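The three photon-gas formulas above can be evaluated for the 2.7 Kelvin cosmic microwave background as a sanity check. Constants are standard SI values, assumed here:

```python
# Photon-gas quantities from the text, evaluated at T = 2.7 K (the CMB).
H = 6.626e-34     # Planck constant, J*s
C = 3.0e8         # speed of light, m/s
K_B = 1.38e-23    # Boltzmann constant, J/K
SIGMA = 5.7e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

T = 2.7
n = (3.9 * K_B * T / (H * C)) ** 3   # number density: the cube of 3.9 kT/hc
u = 4 * SIGMA / C * T ** 4           # energy density: (4 sigma / c) T^4
p = 4 * SIGMA / (3 * C) * T ** 4     # pressure: (4 sigma / 3c) T^4

print(f"n ~ {n:.1e} photons/m^3")    # ~4e8, the familiar CMB photon density
print(f"u ~ {u:.1e} J/m^3")
print(f"p ~ {p:.1e} Pa")
```

Note that the pressure comes out to exactly one-third of the energy density, the standard relation for radiation.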

Finally, let’s consider the flux: the power per unit area that would emerge from a tiny hole in the box. For the gas, it’s *n* times the average of *v-epsilon*, which was proportional to *T* to the 3/2 power. For photons, the number density *n* scales with *T*-cubed, *v* is always *c*, and *epsilon* is proportional to *kT*, so we might guess that flux is proportional to *T* to the 4th power, and it is.

That’s a very important result: the flux of electromagnetic radiation from a body at temperature *T* is proportional to *T* to the 4th power. That’s important enough to deserve its own name: it’s the Stefan-Boltzmann law. The constant of proportionality is *sigma*; that’s the one that also appeared in the equations for energy density and pressure. Sigma isn’t a new fundamental constant; it’s a certain combination of *h*, *c*, and *k*, but it occurs so frequently that the abbreviation is helpful. Numerically, sigma is 5.7 times 10 to the minus 8 watts per square meter per Kelvin to the 4th power.

The last comparison I want to make between particles and photons is in their distribution of energies. For the case of the gas, the average energy is 3/2 *kT*. If we pick a particle at random, we expect its energy to be about 3/2 *kT*, but not exactly. Sometimes it’ll be a little higher, sometimes lower; it depends on its recent history of collisions.

Likewise, the speed of a given particle is always fluctuating. A fundamental rule that emerges from classical statistical physics is that the probability to find a particle in a state with energy of *epsilon* is proportional to *e* to the minus *epsilon* over *kT*. It’s an exponential function, and it’s called the Boltzmann factor. It means that the energy will almost always be on the order of *kT*. Much larger energies are vanishingly rare, because of that exponential fall-off. The particles tend to share the energy equally. There’s very little chance that one particle is going to end up with a disproportionate share of the total energy.

From that basic rule, it’s possible—although not easy—to derive the probability distribution for the energy, or the speed, of a particle in a gas. What makes it difficult is that there are a lot of different states with the same energy. If we change the direction of a particle’s velocity, but not the speed, the particle is in a different state, but it has the same energy. That means, to calculate the probability of having a certain speed, we need to multiply the Boltzmann factor by the number of possible states with that speed, and so there’s a lot of bookkeeping associated with counting all of those states.

But if you do go through all that, you can derive the Maxwell-Boltzmann distribution: that’s the probability distribution for particle speed, in an ideal gas. It’s the product of the Boltzmann factor, and a factor of *v*-squared which comes from all that state-counting. The horizontal axis is speed, in meters per second, and the vertical axis is the relative probability that you’ll find a particle to have that speed or, another way to think of it, it’s the fraction of particles that have that speed, at any given time. The function depends on the particle mass, and the temperature.
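A minimal sketch of that distribution, using its standard closed form (the Boltzmann factor times the *v*-squared state-counting factor, with the usual normalization), confirms numerically that the peak sits at the square root of 2*kT* over *m*. Nitrogen at room temperature is used as an illustrative case:

```python
import math

# Maxwell-Boltzmann speed distribution:
# f(v) = 4*pi * (m/(2*pi*k*T))**1.5 * v**2 * exp(-m*v**2 / (2*k*T))
K_B = 1.38e-23          # Boltzmann constant, J/K (standard value)
M_N2 = 28 * 1.66e-27    # mass of an N2 molecule, kg (illustrative choice)
T = 300                 # temperature, K

def maxwell_boltzmann(v, m=M_N2, t=T):
    """Probability density (per m/s) of finding a particle at speed v."""
    a = m / (2 * math.pi * K_B * t)
    return 4 * math.pi * a ** 1.5 * v ** 2 * math.exp(-m * v ** 2 / (2 * K_B * t))

# The numerical peak lands at sqrt(2kT/m), as the text states.
v_peak = max(range(1, 2000), key=maxwell_boltzmann)
print(v_peak)  # ~422 m/s for nitrogen at 300 K
```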

The total energy, *E*, will be the sum of the kinetic energies of all the particles, which is constant in time, because the box is sealed up tight. If we let the particles knock around for a long time, their positions and velocities become randomized. A particle could turn up anywhere in the box, with equal probability. And the particles will come to share the total energy more-or-less equally.

That’s the conceptual basis of temperature. The temperature, *T*, is defined to be proportional to the average energy per particle. For an ideal gas, the proportionality constant turns out to be 3/2*k*, where *k* is Boltzmann’s constant. Whenever we see little *k* in an equation, we know we’re doing thermodynamics. The numerical value is 1.4 times 10 to the minus-23 Joules per Kelvin.

The general rule is that the average energy is 1/2 *kT* times the number of independent ways a particle can store or exhibit energy. The technical term is the number of degrees of freedom. Our billiard balls can move in 3 dimensions, so the kinetic energy has 3 terms, 1/2 *m* times (*vx* squared plus *vy *squared plus *vz* squared), and each one counts as a degree of freedom, so the average energy per particle is 3/2*kT*, which means the energy density, *u*, the total energy per unit volume, is equal to 3/2 times the number density times *kT*. So, the temperature of a gas is a scale for the energy associated with the random motions of the particles.

In addition to energy, the particles have momentum. And the scale for that is pressure. The particles are constantly knocking into the walls of the box, or any surface that we might insert in the gas. Those collisions exert a force on the surface: that’s pressure.

To simplify the math, we’re going to imagine a universe in which the particles can only move in one direction. They can move back and forth in, say, the *X* direction, but not any other. When a particle with speed *v* hits the wall, it reflects back with speed *v* in the opposite direction. Its momentum changes from plus *mv* to minus *mv*, a change of minus 2*mv*. Since momentum is conserved, the wall must have absorbed a momentum of plus 2*mv*. It feels a push. And that keeps happening, as more particles hit the wall. In time *Delta t*, how much momentum does the wall absorb?

Let’s say all the particles have the same speed, *v*, and half are moving to the right, and half are moving to the left. The particles that hit the wall are the ones moving to the right that start within a distance of *v-Delta-t* from the wall. If we focus on an area *Delta-A* of the wall, that singles out a box of volume *v-Delta-t* times *Delta-A*. So, the total momentum absorbed by the wall will be the momentum from each collision, 2*mv*, times the number of collisions, which is equal to *n* over 2, the number density of particles moving to the right, times *v-Delta-t Delta-A*, the volume of the box.

Force is momentum per unit time, and pressure is the force per unit area. So, to get the pressure, we divide our equation by *Delta-t* and *Delta-A*. That gives *nmv* squared. And since *mv* squared is twice the kinetic energy per particle, *epsilon*, we can also write it as 2*n-epsilon*.

We’ve been assuming all the particles have the same speed, *v*. But, that’s not true. There’s a whole range of speeds. So, we should replace *epsilon* by its average value, which would be 1/2 *kT*, in a one-dimensional universe. So, in the end, the pressure is 2*n* times 1/2 *kT*, or simply, *nkT*.

We just derived the ideal gas law. Pressure is proportional to number density and temperature. We did it for a one-dimensional gas, but in 3-D, we end up getting the same equation.
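The end result of the derivation can be checked against a familiar number. Plugging in an illustrative number density for air at room conditions (~2.5 x 10^{25} molecules per cubic meter, a standard value assumed here, not from the text) gives roughly atmospheric pressure:

```python
# Check of the ideal gas law derived above: pressure = 2 * n * epsilon,
# with epsilon = (1/2) k T, which collapses to p = n k T.
K_B = 1.38e-23   # Boltzmann constant, J/K (standard value)
n = 2.5e25       # number density, m^-3 (roughly air at room conditions; assumed)
T = 300          # temperature, K

epsilon = 0.5 * K_B * T     # average kinetic energy per particle in 1-D
pressure = 2 * n * epsilon  # equals n * k * T

print(f"p = {pressure:.3e} Pa")  # ~1e5 Pa, about one atmosphere
```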

Now, suppose we pop a tiny hole in the wall, with an area *A*. Gas particles will start leaking out, and the gas will lose energy. So, what’s the rate of energy loss? In our 1-D universe, the number of particles that leak out in time *Delta-t* is equal to the number density of right-moving particles—that’s *n* over 2—times the volume of that same box, *v-Delta-t Delta-A*. Each one has an energy of epsilon. That gives *Delta-E* equals *epsilon* times *n* over 2 times *v-Delta-t Delta-A*.

Let’s divide by *Delta-A* and *Delta-t* to give the power per unit area—that’s the flux—of escaping energy. And, again, since there’s a range of speeds and energies, we should take the average. The important thing is that it’s proportional to *n v epsilon*, which also turns out to be proportional to the temperature to the 3/2 power.
