Let’s plot U-effective as a function of r. For small r, the one-over-r-squared term dominates, and it’s positive. So, U-effective shoots up to infinity at small r. For large r, the one-over-r term dominates, and it’s negative. So, as r grows, the potential dives down to negative values, then rises back toward zero as r goes to infinity. So, it makes a bowl shape.

The trajectory of the particle depends on E, how much total energy we give it. First, let’s consider the case in which E is negative, meaning the negative potential energy dominates over the positive kinetic energy. Since the difference between E and U-effective equals 1/2 mv_r-squared, which is always a positive number, the particle’s radius r must be confined to the region where E is bigger than U-effective.

And furthermore, at locations where E minus U-effective is large, that means v_r is large, too, so the particle is moving quickly in the radial direction. Whenever U-effective gets close to E, the particle must be slowing down. When the lines cross, v_r is zero, and r is momentarily standing still.

All this means that the particle’s radial motion can be understood qualitatively by imagining that we drop a marble in a bowl, starting at one of the intersection points. The marble starts at rest, rolls to the bottom and speeds up, rolls up to the same height on the other side, stops briefly, then drops down again, and keeps oscillating. Likewise, the r-value of our particle will grow, then shrink, then grow again, as it’s whirling around.

That makes sense. We already know the particle will follow an ellipse, with a distance to the origin, r, that gets bigger and smaller as it goes around. And if we happen to put the particle right at the lowest point in the bowl, it will just stay there. That corresponds to a circular orbit, with an unchanging radius.
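The bowl and its low point can be checked numerically. Here’s a minimal sketch, in made-up units where G times M, the mass m, and the angular momentum L are all set to 1 (so the math stays readable; nothing here is a real planetary value):

```python
L, m, GM = 1.0, 1.0, 1.0   # illustrative units, not real values

def u_eff(r):
    # effective potential: L^2/(2 m r^2) - G M m / r
    return L**2 / (2 * m * r**2) - GM * m / r

# Scan the "bowl" numerically for its lowest point.
rs = [0.1 + 0.0001 * i for i in range(100000)]
r_min_numeric = min(rs, key=u_eff)

# Analytic minimum: r_c = L^2 / (G M m^2), the circular-orbit radius
r_c = L**2 / (GM * m**2)
print(r_min_numeric, r_c)   # both ~1.0 in these units
print(u_eff(r_c))           # ~ -0.5, the lowest allowed energy
```

Any total energy below u_eff(r_c) is forbidden, which is exactly the statement that a circular orbit has the minimum energy for a given angular momentum.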

We’ve just seen that for a given angular momentum, a circular orbit has the minimum possible energy; it’s the low point in the bowl. Whenever you drain energy out of an orbit, with friction or some other process that leaves angular momentum alone, the orbit will circularize.

It’s impossible for the particle to ever reach r equals zero. That’s because of the first term in the effective potential, L-squared over 2mr-squared, which makes an infinitely high barrier, guarding the origin. The only exception would be if L, the angular momentum, is exactly zero.

Then there’s no barrier. In plain language, to make a direct hit on the origin, you need to be dropped straight in, with no sideways motion. If you have any angular momentum at all, you’ll orbit the attractor, you won’t hit it.

This article comes directly from content in the video series *Introduction to Astrophysics*. Watch it now, on Wondrium.

What if E is positive? Then the particle approaches the origin, turns around at the barrier, and flies away, slowing down but never returning. That is an unbounded trajectory. More generally, you can define an effective one-dimensional potential for any central force law, whether the force goes like one over *r*-cubed, or the square root of *r*, or whatever. And Kepler’s second law holds for any central force, too, because it’s really just conservation of angular momentum.

In general, what happens is the particle whirls around, going from the minimum to the maximum radius and back again, in accordance with Kepler’s second law. The trajectory makes a beautiful pattern that fills in the space between the minimum and maximum distance. They’re called rosette orbits.

But for the inverse square law, there’s a remarkable coincidence: the trajectory comes around and repeats exactly, making an ellipse. That is a very special case. Just about any other force law, any other power of *r*, leads to infinitely looping rosettes, not a fixed geometric shape.

In fact, there’s only one other exception. If the particle is attached to the origin with an ideal spring, with force proportional to *r*, then its trajectory is also an ellipse, but in that case, the origin is the center of the ellipse instead of the focus.

In advanced classical mechanics, we learn that whenever there is a conserved quantity, like energy, or angular momentum, there’s a corresponding symmetry in nature, a sense in which nature is mathematically simpler than it could have been. This is called Noether’s theorem, after Emmy Noether who published it in 1918.

For example, energy is conserved because the laws of physics don’t change with time: *F* equals *ma* forever and always. What we say is that “the equations have time-translational symmetry.” Angular momentum is conserved whenever the situation has rotational symmetry, when things only depend on *r*, but not *theta*.

So, what about this third conserved quantity, the eccentricity vector? What’s the corresponding symmetry? It’s very subtle and weird. It turns out the equations governing the motion of a particle under the force of gravity from another particle are mathematically equivalent—through a complicated change of variables—to the equations for a particle moving freely, without any force, on the surface of a 4-dimensional sphere. And it’s the perfect symmetry of that 4-dimensional sphere that leads to the conservation law for the eccentricity vector.

The inverse square law of gravity explains big physics demonstrations in the sky: the motions of the planets. Achieving this understanding was a pivotal development in human history: it was our first real awareness of one of the 4 fundamental forces of nature. It’s also a beautiful connection between mathematics, geometry, and physics.



Both *L* and *E* remain constant throughout a planet’s elliptical orbit, even while the planet is moving and changing speed. So, we should be able to derive expressions for *L* and *E* purely in terms of constants: *G*, big-*M*, little-*m*, *a*, and *e*. First, let’s do it for angular momentum. In general, *L* equals *m* times *r* times *v_theta*. Remember, only the angular component of the velocity matters. And since *v_theta* equals *r* times *d-theta/dt*, that means *L* equals *m* times *r*-squared times *d-theta/dt*.

If we consolidate Kepler’s second and third laws into one equation, *r*-squared *d-theta/dt* equals the square root of *Ka* times one minus *e*-squared. If we multiply this by little-*m*, and substitute *GM* for *K*, we arrive at a new formula for angular momentum: little-*m* times the square root of *G* times big-*M* times *a* times one minus *e*-squared.
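As a numerical sanity check, we can compare this new formula against Kepler’s second law directly, using rounded real values for the Sun and the Earth (so agreement is only good to a fraction of a percent):

```python
import math

# Approximate real-world values
GM_sun = 1.327e20      # Sun's G*M, m^3/s^2
a = 1.496e11           # Earth's semimajor axis, m
e = 0.0167             # Earth's eccentricity
P = 365.25 * 86400.0   # orbital period, s

# New formula: L per unit mass = sqrt(G * M * a * (1 - e^2))
L_per_m = math.sqrt(GM_sun * a * (1 - e**2))

# Kepler's 2nd law: r^2 dtheta/dt = 2 * (area of ellipse) / P, also L per unit mass
area_rate_times_2 = 2 * math.pi * a**2 * math.sqrt(1 - e**2) / P

print(L_per_m, area_rate_times_2)   # both ~4.46e15 m^2/s
```

The two routes to L/m agree, which is exactly the consistency the derivation above demands.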

As an immediate application of this formula, we can prove Kepler’s third law for the general case of an elliptical orbit. We start with Kepler’s second law: 1/2 *r*-squared *d-theta/dt* equals *pi* *a*-squared times the square root of one minus *e*-squared (that’s the area of the ellipse), divided by *P*, the orbital period. Notice the left side of this equation is the angular momentum, *L*, divided by 2*m*.

Now, let’s use our nifty new formula for *L*. When we insert that, the little-*m*’s cancel out, as do the one minus *e*-squared’s, and if we solve for *P*, we find that it’s 2*pi* over the square root of *G* times big-*M*, times *a* to the 3/2, which is, of course, Kepler’s third law. Thus, endeth the proof.
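The result is easy to check with rounded real numbers for the Earth; plugging them in should give back one year, give or take the rounding:

```python
import math

GM_sun = 1.327e20   # Sun's G*M, m^3/s^2, approximate
a = 1.496e11        # Earth's semimajor axis, m, approximate

# Kepler's third law: P = 2*pi / sqrt(G*M) * a^(3/2)
P = 2 * math.pi / math.sqrt(GM_sun) * a**1.5
print(P / 86400)    # ~365 days
```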


That’s enough playing around with angular momentum. How about energy, the other conserved quantity? Energy has two parts: kinetic, 1/2 *mv*-squared, and potential, minus *G* times big-*M* times little-*m* over *r*. Their sum must be equal to some combination of the constants *G*, big-*M*, little-*m*, *a*, and *e*, and let’s try to figure out what it is.

Since energy is constant, we can calculate it at any point we want in the planet’s orbit, and we’ll get the same answer, so let’s make life simple by choosing *theta* equals zero. That’s when the planet makes its closest approach to the Sun, and *r* equals *a* times one minus *e*. What about the velocity? Well, we can figure that out with another application of our new angular momentum formula.

In general, *L* is equal to *m* times *r* times *v_theta*. Here, at *theta* equals zero, *v_theta* is simply *v*, because at that point the velocity vector is totally perpendicular to the radius vector: *r* is in the *x*-direction and *v* is in the *y*-direction. So, at *theta* equals zero, *L* equals *m* times *a* times one minus *e*, times *v*. We solve for *v*, and plug in our new expression for *L*. Then, we insert that expression for *v* into the energy equation, and we simplify. The algebra leads to a cascade of cancellations, and a result that’s refreshingly simple. The energy is minus *G* times the product of the masses divided by 2*a*.

All the terms related to eccentricity ended up canceling out. It turns out that energy depends only on the semimajor axis of the ellipse, not its eccentricity. If you have a nearly circular orbit with radius 1 AU, like the Earth, and you compare it to a planet on a highly elliptical orbit, with *a* equals 1 AU and an eccentricity of 0.9, they both have the same energy.

They’ll also have the same orbital period, one year, because Kepler’s third law says *P* depends on *a*, but not on *e*. The planet in the elliptical orbit whips around the Sun near its closest approach and moves more slowly when it’s far away, and the 2 effects cancel each other exactly to give the same period as the Earth. It’s an interesting coincidence.
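The cascade of cancellations can be replayed numerically. A sketch in made-up units (G times M and a both 1, energy per unit planet mass), computing the energy at perihelion from the angular momentum formula, for a circular orbit and a very eccentric one:

```python
GM, a = 1.0, 1.0   # illustrative units, energy per unit planet mass

def energy_at_perihelion(e):
    r_p = a * (1 - e)                          # closest approach
    v_p = (GM * a * (1 - e**2)) ** 0.5 / r_p   # v = (L/m) / r at perihelion
    return 0.5 * v_p**2 - GM / r_p             # kinetic + potential

print(energy_at_perihelion(0.0))   # circular orbit
print(energy_at_perihelion(0.9))   # highly elliptical orbit
# both come out to -GM/(2a) = -0.5 in these units
```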



First, let’s write Kepler’s laws in equation form. The first law says the orbits are ellipses with the Sun at one focus, so if we use a polar coordinate system with the Sun at the origin, the path of the planet, *r* of *theta* is equal to *a* times one minus *e*-squared over one plus *e* cos-*theta*. That’s the equation for an ellipse.

Kepler’s second law says the line from the Sun to the planet sweeps out area at a steady rate. This implies 1/2 *r*-squared *d-theta/dt* is a constant, a certain area per unit time, that is specific to each planet. For the Earth, the numerical value is *pi* AU-squared per year, since the Earth’s orbit is approximately a circle of radius one, which has a total area of *pi*.

More generally, 1/2 *r*-squared *d-theta/dt* is equal to the area of the ellipse, *pi a*-squared times the square root of one minus *e*-squared, divided by the orbital period, *P*. That’s Kepler’s second law. And, Kepler’s third law says that *P* is proportional to *a* to the 3/2 power. So, that’s our trio of equations. Now, let’s get to work.

Pretend we already know from laboratory experiments that force equals mass times acceleration. But, we don’t yet know the equation for the force of gravity. To obtain a clue, we need to calculate the acceleration of a planet that obeys Kepler’s laws. To calculate acceleration, first, we need to know the planet’s position as a function of time. Then we’ll take the time derivative to get the velocity, and a second derivative to get the acceleration.

Well, Kepler’s first law tells us the position, but not as a function of time; it’s a function of angle, *theta*. All the time information is in the second and third laws. So, we need to combine the equations, somehow.

Let’s convert to Cartesian coordinates. In general, when the polar coordinates are *r* and *theta*, the *x* coordinate is *r* times cosine of *theta*, and *y* equals *r* sine-*theta*. So, for our planet, *x* is *a* times one minus *e*-squared times cos-*theta* over one plus *e* cos-*theta*. And we get a similar equation for *y*. We can do the same thing with unit vectors.

Now, let’s calculate the velocity by taking the time derivatives of *x* and *y*. Since they’re written as functions of *theta*, and not time, we need to use the chain rule: *v_x*, the *x*-component of velocity, is *dx/dt*, which we can write as *dx/d-theta* times *d-theta/dt*. Since *x* has functions of *theta* in the top and bottom of the expression, we use the quotient rule.

What about *d-theta/dt*? For that we need Kepler’s second and third laws, the ones relating to time. Let’s consolidate them, by writing the *P* in the second law in terms of *a*, using the third law. The third law says *P* equals some constant times *a* to the 3/2 power. We can label that constant however we want. But instead, let’s be clever.

The second law has a 1/2 on the left side, and a *pi* and a square root on the right side. So, to make the result as simple as possible, let’s write the third law as *P* equals 2*pi* over root-*K* times *a* to the 3/2, where *K* is a constant. That way, the 1/2 and the *pi* cancel out, and the *K* will fit nicely under the square root, so what we’re left with is *r*-squared *d-theta/dt* is equal to the square root of *Ka* times one minus *e*-squared.


We plug in the expressions we just derived, which leads to an equation in terms of *r* and *theta*. To put everything in terms of just one variable, *theta*, we insert the ellipse equation for *r* of *theta*, and simplify.

That gives *v_x* is equal to minus the square root of *K* over *a* times one minus *e*-squared, times sine-*theta*. That factor in front of sine-*theta* is a constant. It doesn’t depend on *r* or *theta* or time, and it has units of velocity. To make the equation look even simpler, let’s name that constant *v*_naught. That way, *v_x* is simply minus *v*_naught times sine-*theta*.

That leaves the other component of velocity, *v_y*, which we calculate as *dy/d-theta* times *d-theta/dt*. Let’s just jump to the answer: *v_y* is equal to *v*_naught cos-*theta* plus *v*_naught times *e*.

What does all this mean? Let’s find out, by tracking the planet’s velocity vector over a full orbit. We’ll plot *v_x* on the horizontal axis, and *v_y* on the vertical axis. That kind of chart is called velocity space; each point in the chart specifies a velocity, rather than a position. At *theta* equals zero, the equations tell us that *v_x* starts at zero and *v_y* starts at *v*_naught times the quantity one plus *e*.

Then as *theta* increases, *v_x* goes negative and *v_y* shrinks. When we keep going, what we find is amazing. The tip of the velocity vector moves in a circle! You can prove it algebraically, too, by showing that our equations imply *v_x*-squared plus the quantity *v_y* minus *e* times *v*_naught, squared, equals *v*_naught-squared. That’s the equation for a circle in velocity space, with radius *v*_naught, centered at the point zero, *e* times *v*_naught.
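A quick numerical check of the circle, using an illustrative orbit with v_naught set to 1 and an eccentricity of 0.3 (both values made up for the sketch):

```python
import math

v0, e = 1.0, 0.3   # illustrative values

for k in range(360):
    theta = math.radians(k)
    vx = -v0 * math.sin(theta)            # v_x = -v0 sin(theta)
    vy = v0 * math.cos(theta) + v0 * e    # v_y = v0 cos(theta) + v0 e
    # distance from the center (0, e*v0) should always equal v0
    radius = math.hypot(vx, vy - e * v0)
    assert abs(radius - v0) < 1e-12

print("the velocity vector stays on a circle of radius v0")
```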

Now, this is ironic. Ancient astronomers were sure the planets moved in circles, because the circle was just such a perfect shape. And when the data got good enough to rule out uniform circular motion, they added more circles, to make epicycles. And it took astronomers a long time to ditch the circles and arrive at the truth that the planets move on ellipses.

The ancient astronomers were right, after all. Planetary motion does involve perfect circles; it’s just that the circles are in velocity space. While the planet moves in an ellipse, its velocity vector traces out a circle. Well, that was an unexpected treat.



Democritus believed that every known substance was composed of tiny, indivisible elements, like how grains of sand make up a beach. Democritus coined the word atomos, Greek for uncuttable, from which we get our modern word atom.

Democritus’s ideas about atoms differed greatly from our modern understanding and were quite wrong in detail. Yet, he used his overall guiding principle to infer that light was made of individual, discrete, particles. He didn’t know about electrons, of course, but, if he had, he’d have claimed that they were particles too.

A more modern conversation began in the 17th century, when French polymath René Descartes published his book *The World*, or *Le Monde*. This book, written in 1630, devised a theory of light where light was a wave that propagated through a substance called the luminiferous aether, or just the aether for short. Aether was thought to be a substance that permeated the entire universe. Its purpose was to conduct light, much as water carries water waves and air carries sound. Since then, we have learned that Descartes’s aether isn’t real, but that the wave idea has considerable merit.

A few decades after Descartes’s heyday, Sir Isaac Newton proposed a fairly well-developed theory of light, which was more along the Democritus line of thinking, with particles of light that he called corpuscles.

Newton was one of the greatest scientists of all time, and his reputation carried considerable weight. His work on motion, light, gravitation, and calculus was extraordinary, and we accept it all even today, though he was also known to dabble in alchemy, numerology, and mysticism. Still, for all of his well-deserved reputation, not all of Newton’s contemporaries accepted his ideas. He was right about so many things, but nobody is right all the time.

Robert Hooke, Newton’s competitor, developed the wave idea, along with Dutch scientist Christiaan Huygens and French physicist Augustin-Jean Fresnel. This debate went on for years, decades really, because there wasn’t definitive data. Researchers of the day could use mirrors and lenses and even prisms to break light into its constituent colors and then combine them again. So it wasn’t as if they knew nothing, but none of these experiments were definitive.

In fact, the key point, when it comes to learning how we know what we know about modern physics is always, always, data. Models and theories and conjectures are just ideas. Ideas are powerful things to be sure, and they help us understand the world around us. But ideas are easy. Walk into the fiction section of any library and one can see that. All of those books are ideas and none of them are true. The thing that distinguishes between a flight of fancy and a solid explanation is data. Data is what tells us if a hypothesis is true.

This article comes directly from content in the video series *The Evidence for Modern Physics: How We Know What We Know*. Watch it now, on Wondrium.

In order to understand the nuances of this debate, we first need to talk about waves. What are they? Well, waves are not discrete objects. They exist over a large area and oscillate up and down. The most familiar form of waves is, of course, water waves, where the height of water rises and falls in a rhythmic pattern, over and over again. The distance between two adjacent peaks in the pattern is called the wavelength. And, if the wave is passing by us, the amount of time it takes for consecutive peaks to pass is called the period. The number of peaks that pass per second is called the frequency, which is one over the period. The height of the wave is called the amplitude.

There is another, more subtle, feature of waves that is only apparent when we compare two of them and that’s their offset, what physicists call the phase. If two waves are otherwise identical but shifted from one another so that the peaks and troughs don’t line up exactly, the two waves have a relative phase.

Thus, those are the terms that define a wave: wavelength, period, frequency, amplitude, and phase.
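These definitions can be made concrete with a small sketch; every number below is made up purely for illustration:

```python
import math

# Illustrative wave: amplitude 2, wavelength 0.5 m, period 0.1 s, zero phase
A, lam, T, phi = 2.0, 0.5, 0.1, 0.0

def wave(x, t):
    # displacement of the wave at position x and time t
    return A * math.sin(2 * math.pi * (x / lam - t / T) + phi)

# The pattern repeats every one wavelength in space...
assert abs(wave(0.125, 0) - wave(0.125 + lam, 0)) < 1e-12
# ...and every one period in time.
assert abs(wave(0.125, 0) - wave(0.125, T)) < 1e-12

frequency = 1 / T   # cycles per second, the inverse of the period
print(frequency)    # 10 cycles per second
```

Changing phi slides the whole pattern sideways, which is exactly the phase offset described above.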

Now exactly what is oscillating depends on the specific wave. Obviously, in water waves, it’s the height of water. In sound waves, it’s the air pressure that gets higher or lower, which, of course, means louder or quieter. Light, on the other hand, is made of oscillating electric and magnetic fields and what is changing is the strength of the fields and the direction they are pointing.

It’s much less obvious than a water wave, but when we see light falling on a picturesque meadow, what’s going on at a deep physical level is the whole field is bathed in oscillating electromagnetic fields. That image isn’t as easy for an artist to paint, but it’s the deeper reality.



The question that bothered early quantum physicists was, how can we explain Thomas Young’s interference observation if light is made of particles? To understand how, let’s take an example. Suppose we turned down the brightness of the light source (a laser, in the modern world) so that one photon is emitted at a time. If light were a classical particle, then each photon would travel to the double slits, pick a slit to go through, and then appear on the distant wall. If light were a classical wave, it would pass through both slits, interfere, and then appear as a very faint interference pattern of bright and dark spots on the distant wall.

It was in 1989 that Japanese scientist Akira Tonomura and his co-workers at Hitachi did this experiment. They actually used electrons, not photons. They shot one particle at a time through the double slits by simply turning the intensity down very low. Each one appeared on the distant wall in a single location, just as a particle would.

Nonetheless, an interesting result occurred when they repeated the experiment time and time again and built up a pattern of where the particles appeared. After thousands and thousands of individual particles, what they found was that the pattern looked just like what Thomas Young observed in 1801. The electrons arrived and interacted with the distant wall like particles, but their motion and the places they were found acted like waves.

This is why we say that photons and electrons, and indeed all atomic and subatomic particles, act like waves. In fact, everything acts like a wave, even a thrown baseball. However, for big things, such as baseballs, the wavelength is so incredibly tiny that it’s not possible to see their wave behavior.

Tonomura and his colleagues used electrons rather than photons because it’s easier to do, but does that tell us something about how light works? Well, sort of, at least indirectly. It was in 1924 that French physicist Louis de Broglie submitted his PhD thesis. In it, he made the most amazing conjecture. He hypothesized that if photons could be both waves and particles, why couldn’t electrons have the same property? He even proposed a way to determine the wavelength of an electron: the wavelength is proportional to one over the momentum of the electron. So, a very high momentum electron has a very short wavelength, and vice versa.
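De Broglie’s relation, wavelength equals Planck’s constant over momentum, makes the baseball remark quantitative. A sketch (the electron speed and baseball figures are illustrative round numbers):

```python
h = 6.626e-34   # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    # de Broglie: wavelength = h / momentum
    return h / (mass_kg * speed_m_s)

lam_electron = de_broglie_wavelength(9.11e-31, 1.0e6)   # an electron at ~10^6 m/s
lam_baseball = de_broglie_wavelength(0.145, 40.0)       # a thrown baseball

print(lam_electron)   # ~7e-10 m, around the size of an atom
print(lam_baseball)   # ~1e-34 m, hopelessly too small to ever observe
```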

De Broglie turned out to be right and, a year later, Austrian physicist Erwin Schrödinger expanded on the idea to invent modern quantum mechanics. This is how we know that photons and, by extension, electrons have both a wave and a particle nature.


From 1923 through 1927, American physicists Clinton Davisson and Lester Germer were working at Western Electric, shooting electrons at a block of nickel.

Now, this was still early in the history of modern physics. People were casting around in the dark, and progress came in fits and starts. Davisson and Germer expected to see the electrons reflected in a willy-nilly manner, because the surface of the sample wasn’t particularly smooth. To do the experiment properly, they needed to invent a low-intensity electron source that shot off only a few electrons at once, and they needed the electron beam to be in a vacuum.

A familiar form of electron source is an old-style TV. Those sets used heaters to boil electrons off a component called a cathode, and shot the electrons toward the screen using high voltages and electric fields. Davisson and Germer’s source was basically the same. They moved an electron detector around the nickel sample, looking for where the electrons bounced, and found that the electrons appeared in lots of places. But then they got lucky, or unlucky, but then lucky. Here’s what happened.

The vacuum in their equipment started leaking, and air got into the apparatus. The oxygen from the air reacted with the nickel, forming nickel oxide. That ruined their measurement, since they were studying nickel, not nickel oxide. To get rid of the nickel oxide, they heated their sample, expecting the oxide to boil off and leave a pure nickel sample. What they didn’t realize was that the way they heated and cooled the sample changed the arrangement of its atoms: instead of an amorphous block of nickel, they now had crystalline nickel.

Crystals have repeating, regular structures. This is akin to the slits of the Thomas Young experiment, but instead of two slits, it’s as if there were many, many slits. So, they repeated their experiment and found something odd. The electrons bouncing off the nickel would appear in some places and not at all in others. It looked much like the pattern seen by Thomas Young, although this pattern was two dimensional.

Thus, with the Davisson-Germer experiment, de Broglie’s hypothesis was confirmed: Electrons, like photons, are both particles and waves.

Since then, many other experiments have confirmed these early conclusions. Subatomic particles are both particles and waves. That’s pretty bizarre, but even more bizarre is the fact that particles separated by great distances can be correlated with one another in ways that seem to defy the speed of light, even though no usable signal ever travels faster than light.



In order to understand Einstein’s work, we need to take a very small step backward and talk about another mystery of the late 1800s, and that one deals with the color emitted by hot objects.

There are indeed lots of hot things: hot gases, hot metals, and so on. The light emitted by each has a lot to do with its chemical makeup. But physicists of the time were interested in learning what color light hot objects emit when none of their chemistry matters. For that, they were interested in what are called black bodies, so named because they absorb all radiation that hits them. However, when they are hot enough, black bodies also emit light.

Ideal black bodies are somewhat hypothetical, but the easiest one to visualize is a steel foundry. In a steel foundry, huge heaters heat metal until it melts and eventually glows. Workers can look into the oven by opening a little door to see the glowing metal. Even light from the outside that goes through the door tends to bounce around inside and never come back out. The only light that comes out is the light from the hot steel.

According to the understanding of light from the 1800s, all wavelengths of light should carry equal energy, irrespective of whether they are long or short wavelengths. And there simply are more short wavelengths available. After all, the longest wavelength possible in an oven is a wavelength the size of the oven. Bigger ones won’t fit. But an infinite number of shorter and shorter wavelengths will. So, it stands to reason that:

- with many more short wavelengths, and
- all wavelengths carrying an equal amount of energy,

the light coming out of the oven should mostly be short wavelengths.

Except that it wasn’t true.


In the visible spectrum, red wavelengths are longer and blue wavelengths are shorter. And shorter still is a form of light called ultraviolet, the kind of light that causes sunburns and skin cancers. According to the theories of the 1800s, light emitted from steel mills should be very blue; in fact, there should be lots of ultraviolet light. But that didn’t agree with the measurements. The energy was mostly found in the longer wavelengths, not the shorter ones. This conundrum was known in the late 1800s as the ultraviolet catastrophe.

The problem was solved in 1900 by the German physicist, Max Planck, who hypothesized that the energy held by light waves was not equal for all wavelengths. He hypothesized that the energy was proportional to the frequency, which is inversely proportional to wavelength. This means that short wavelengths have high frequency, and long wavelengths have low frequency. Planck asserted that shorter wavelength light carried more energy than long wavelength light.

Crucially, this explained the ultraviolet catastrophe. If there was only a certain amount of energy in the oven and individual beams of short wavelength light carried more energy, then there had to be fewer examples of short wavelength light. And this is exactly what was observed. Planck took his conjecture, which was that the energy carried by a beam of light was equal to a constant times the wave’s frequency, applied it to the black body problem, and got perfect agreement with the data.
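We can see Planck’s fix quantitatively. The classical 1800s prediction is now called the Rayleigh-Jeans law; a sketch comparing it to Planck’s formula at an illustrative oven temperature shows the agreement at low frequency and the suppression at high frequency:

```python
import math

h, kB, c = 6.626e-34, 1.381e-23, 3.0e8   # Planck, Boltzmann, speed of light
T = 5000.0                                # illustrative oven temperature, kelvin

def rayleigh_jeans(f):
    # classical spectral energy density: every mode gets equal energy kB*T
    return 8 * math.pi * f**2 * kB * T / c**3

def planck(f):
    # Planck's law: high-frequency modes are exponentially suppressed
    x = h * f / (kB * T)
    return 8 * math.pi * h * f**3 / c**3 / math.expm1(x)

low, high = 1e11, 3e15   # a radio-ish and an ultraviolet frequency, Hz
print(planck(low) / rayleigh_jeans(low))    # ~1: the two agree at low frequency
print(planck(high) / rayleigh_jeans(high))  # ~0: Planck tames the catastrophe
```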

Now, Planck was thinking in wave terms, with light being emitted by vibrating atoms in the side of the oven. Einstein, on the other hand, had other ideas. He knew about Heinrich Hertz’s mysterious finding that red, orange, and yellow light wouldn’t cause a spark, while blues, greens, and purples would. And it was he who combined it with Planck’s explanation for the ultraviolet catastrophe.

In 1905—which was the same year Einstein invented special relativity—he wrote a paper on what is now called the photoelectric effect. He hypothesized that a beam of light was actually a stream of particles of light, which we now call photons. Einstein didn’t coin the term; it was made popular, in 1928, by Arthur Compton, although he didn’t coin it either.

In any event, Einstein was able to explain Hertz’s observations in the following way. He proposed that the atoms in the electrodes held on to their electrons with a certain amount of force or, equivalently, energy. In order to make a spark, a photon of light would need to hit the atom and give it enough energy to knock an electron out. Since red, yellow, and orange photons had long wavelengths, they therefore had low frequencies.

By Planck’s hypothesis, these photons didn’t have a lot of energy. In contrast, the green, blue, and purple photons had short wavelengths, high frequencies, and, therefore, enough energy to knock electrons out of atoms. So, according to Einstein’s paper, everything could be explained if light consisted of photons, which were individual particles. Furthermore, the energy of individual photons was proportional to their frequency.
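To make the comparison concrete, we can put rough numbers into Planck’s relation. This is an illustrative sketch; the wavelengths chosen below (650 nm for red, 450 nm for blue) are typical textbook values, not figures from the article:

```python
# Photon energy from Planck's relation, E = h*c / wavelength.
# The wavelengths used are illustrative assumptions, not values from the article.
h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, meters per second
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Energy of a single photon of the given wavelength, in electronvolts."""
    return h * c / wavelength_m / eV

red = photon_energy_eV(650e-9)   # red light, roughly 1.9 eV
blue = photon_energy_eV(450e-9)  # blue light, roughly 2.8 eV
print(f"red  (650 nm): {red:.2f} eV")
print(f"blue (450 nm): {blue:.2f} eV")
```

A blue photon carries roughly 45% more energy than a red one, which is consistent with blue light being able to knock electrons loose where red light cannot, no matter how bright the red beam is.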

Interestingly, frequency is a property of waves. Yet, Einstein said that light consisted of a series of particles with energy proportional to their frequency. That right there should already blow our minds!

Black bodies are called so because they absorb all radiation that hits them. However, when they are hot enough, black bodies also emit light.



We can clearly observe wave interference at the beach. Waves come in toward the shore and sometimes cross. When the peaks of two waves cross, the water level is lifted unusually high. When a peak of one wave crosses the trough of another, the two cancel each other out, and the level of the water doesn’t change at all. And if waves hit each other randomly, with neither peaks hitting peaks nor peaks hitting troughs, something in between happens, with the two waves partly enhancing and partly cancelling each other.

This adding and cancelling thing works for all waves. But does it work for light, too?

In 1801, British scientist Thomas Young set up an experiment that, once and for all, showed that light was a wave. A modern version of Young’s experiment can be performed by taking a laser and pointing it at two very narrow slits cut in aluminum foil. To work, the two slits have to be very close to one another. What we then see coming out of the two slits is not one, but many beams of light. On a distant wall, a series of bright dots is visible, separated by dark spaces.

This is exactly the same phenomenon as the interference of water waves. The bright spots are caused by the waves from one slit enhancing the waves from the other slit, and the dark spots are caused when the waves cancel each other out. Covering up one of the slits makes the extra beams disappear, further proving that the pattern is caused by the interference of waves.
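The geometry of the bright dots can even be predicted. As a hedged sketch, the standard small-angle formula for the fringe spacing, spacing = wavelength × L / d, is applied below with illustrative numbers that don’t come from the article:

```python
# Bright-fringe spacing in a double-slit setup (small-angle approximation).
# All numbers are illustrative assumptions, not values from the article.
wavelength = 650e-9  # red laser light, meters
L = 2.0              # distance from the slits to the wall, meters
d = 0.1e-3           # slit separation, meters (0.1 mm: "very close together")

spacing = wavelength * L / d
print(f"distance between bright dots: {spacing * 1000:.0f} mm")  # prints 13 mm
```

With slits a tenth of a millimeter apart, the dots land about 13 mm apart on a wall two meters away. This is why the slits must be so narrow and so close together: a wider slit separation would squeeze the dots together until they blur into one.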

Young’s original paper was published in 1803. He didn’t have lasers, or even the ability to easily make such slits. He had to work really hard to show that light was a wave. But prove it, he did. Thomas Young’s double slit experiments, and others that followed, showed quite clearly that light was a wave. The 18th-century arguments over whether light was a particle or a wave were over. Or were they?


In the mid-1800s, scientists invented an electric eye: by shining light on certain materials, they could cause electricity to flow in a circuit. The same kinds of techniques are used nowadays in cameras and other electronics to detect how bright the environment is.

But, it was in 1887 that the plot thickened. German physicist, Heinrich Hertz, did a series of very interesting experiments. He basically took two metal electrodes, separated by a small distance, and placed both of them in a vacuum surrounded by a glass container. He then charged the electrodes up with a very high voltage and as expected, nothing happened. But things changed when he shined a light on the electrodes. By doing so, he could cause a spark to occur. However, when he started changing both—the color and brightness of the light—a mystery unfolded.

When Hertz shined blue light, he got a spark, just as with purple and green. However, yellow light didn’t produce one. Neither did orange or red. Hence, bluish colors caused a spark, but reddish ones didn’t.

Moreover, the brightness didn’t matter, or rather, it mattered only in odd ways. If he increased the brightness of the bluish colors, he got much bigger sparks and more electricity flowing. Hertz knew that because, by this time, he had added some extra equipment to measure the current flow. But when he took the red, yellow, and orange light and absolutely maxed out how bright they were, he got nothing. No sparks and no electricity flow.

This clearly didn’t make sense if light was a wave. For waves, the amount of energy carried by the wave is determined by its amplitude, that is, its height. This makes complete sense for water waves. If we’re standing in a lake, little ripples have nearly no effect as they pass over us. Big waves, three or four feet high, can knock us off our feet. And, of course, tidal waves 30 or 40 feet tall are completely deadly; they can destroy buildings and scour a shoreline clean. For water waves, like many things, size matters.

Additionally, under the wave theory of light, the brightness of a light is also proportional to its amplitude. A dim blue beam could well have much less energy than a super bright red beam. Yet, in Hertz’s experiments, there was simply nothing he could do to make a spark occur with red light.

This was 1887, and the electron hadn’t been discovered yet. The discovery of radioactivity was about to occur, and, in 1897, physicist J. J. Thomson discovered the electron. Once it had been discovered, it was possible to look at Hertz’s experiments differently.

A spark was caused when electrons were knocked out of atoms. Under that explanatory framework, it seemed that red, orange, and yellow light simply didn’t have the power to knock electrons out of atoms, while the bluer colors had no problem, although nobody knew why. This observation was completely and totally inconsistent with the wave model of light.



One general feature of bones is the contrast between the axial part of the skeleton and the appendicular skeleton.

The axial bones are those that make up the main longitudinal axis of the skeleton. These include the 22 bones of the skull; the 6 tiny ossicles in the ears; the single hyoid bone in the neck; the 26 bones of the vertebral column—the 24 individual vertebrae, plus the sacrum and coccyx, or tailbone as it’s called; the 24 ribs; and the sternum. That’s a total of 80 bones.

The appendicular skeleton, on the other hand, includes the 30 bones in each upper limb, the 30 bones in each lower limb, as well as the two clavicles and two scapulae of each pectoral girdle, and the pair of pelvic bones that attach our lower limbs to the axial skeleton, so the appendicular total is 126 bones. That’s 206, when the appendicular and axial totals are combined.
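The tallies above can be checked with a quick sum; the groupings below mirror the counts given in the text:

```python
# Bone counts as given in the text, grouped by skeleton division.
axial = {
    "skull": 22, "ear ossicles": 6, "hyoid": 1,
    "vertebral column": 26, "ribs": 24, "sternum": 1,
}
appendicular = {
    "upper limbs": 2 * 30, "lower limbs": 2 * 30,
    "clavicles": 2, "scapulae": 2, "pelvic bones": 2,
}
axial_total = sum(axial.values())                # 80
appendicular_total = sum(appendicular.values())  # 126
print(axial_total + appendicular_total)          # prints 206
```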

There are some standard terms used for bone features—such as foramen, tubercle, condyle—and then adjectives are used to specify those on a given bone. An overarching scheme of bony landmarks would boil down to three main categories: projections, depressions, and openings.

Depressions and projections are typical of muscle attachment sites and joint surfaces, while openings will transmit arteries, nerves, and veins. Learning the general bony landmark terms in advance will help prevent confusion, since numerous bones have tubercles, notches, heads, facets, and so forth.

It must be kept in mind that some of the specific landmarks also use anatomical directional terms like lateral, medial, superior, and inferior.

This article comes directly from content in the video series How We Move: The Gross Anatomy of Motion. Watch it now, on Wondrium.

Here’s the infraorbital foramen—it’s a hole inferior to the eye socket, which is anatomically known as the orbit. Conveniently, the infraorbital nerve, artery and vein all pass through it. And by the rule of adjectives—if we have an infraorbital foramen, we also have a supraorbital foramen. Sometimes that is an incomplete opening, so many of us have a supraorbital notch, rather than a supraorbital foramen.

There is also the obturator foramen, the largest foramen in the body; despite its large size, very little passes through it.

And then there is the external auditory meatus—that’s a tube-like passageway, lined with skin, through which sound waves pass to reach the eardrum. Inside the skull, there’s an internal auditory meatus, which satisfies the rule of adjectives.

In terms of depressions, we have the intertubercular groove or sulcus of the humerus—you’ll hear both used. It’s called that because it’s between the greater tubercle of the humerus and the lesser tubercle of the humerus—so, since “inter” means between, and it’s a groove between those tubercles, the intertubercular groove is a good name for it!

Another depression is the glenoid fossa; that is the name for the shallow socket of the shoulder joint which the humerus fits into.

And another shallow depression is the iliac fossa, named for the part of the pelvis it’s on—the ilium. The muscle that originates here is called iliacus. As the skeleton is the framework of the body, the names of the bones are the basis for the names of many other structures. That’s why bones are typically covered first in musculoskeletal study—or really when you study any form of anatomy.

As for projections, we mentioned the greater tubercle and the lesser tubercle of the humerus—there’s that rule of adjectives.

The scapula has a glenoid fossa, and it also has a supraglenoid tubercle above that socket—or better yet, superior to that socket—so what else do you suppose it has inferior to that socket? If you said infraglenoid tubercle, you’re right!

These large projections on either side of the elbow are the medial and lateral epicondyles—sticking out on either side of the humerus above the joint region of the elbow. So you see medial and lateral being used.

These examples should give you an idea that learning the types of bony landmarks—condyles, foramen, and facets—will make a good foundation for learning the names of specific bone features later.

So, where do these bony landmarks come from? A usual answer is that the landmarks are there so muscles can attach to them. But that’s not exactly the case.

To some extent, these projections—trochanters, tubercles, processes—they are there because muscles attach to them. That might sound like double-talk, or a fine point, but really, it’s not. The landmarks don’t develop to facilitate the muscle attachment; they grow because bone’s reaction to stress is to build more bone matrix.

It must be noted that the matrix is composed of the collagen fibers surrounded by calcium phosphate salts. That addition of more bone matrix in response to stress strengthens the bone. Essentially, this relates to a concept in anatomy known as Wolff’s law, which states that bone will adapt to the loads under which it’s placed.

This is why stress like weight-bearing exercise is healthy for bones—within reason, of course. It is also why inactivity causes bone atrophy—whether it’s astronauts on the space station, a bedridden patient, or a person in a cast for six weeks.



There’s a general pattern of growth and development in which the skeleton starts as other types of connective tissue, including cartilage. Then, throughout our prenatal and postnatal development, some of that cartilage begins to ossify, which means it transitions to bone.

And a typical long bone—like the humerus of the upper arm or femur of the thigh—doesn’t just ossify in a single bony element. First, the shaft, called the diaphysis in anatomy, starts turning into bone, becoming what’s known as the bone’s primary center of ossification.

Then, the two ends, called epiphyses, turn to bone, becoming secondary ossification centers. This process leaves bridges of cartilage between the shaft and its two ends. These are the bone’s growth plates; they are the regions where the bone continues to grow in length during childhood.

Eventually, when the bone has achieved its genetically programmed length—provided good health and nutrition—the growth plates close up, fusing the diaphysis to the epiphyses and resulting in a single bone. Long bones of the body grow that way; the skull and other types of bones form from other soft tissues in different patterns of ossification.

When these long bones are developing and still composed of multiple different bony elements, technically we actually have far more than 206 bones.

In the third trimester of fetal life, if all ossification centers are counted, at the highest point the number is about 800 tiny “bones”—they would be better described as 800 separate ossification centers, or bits of bone in formation. Some of these bony elements fuse together in utero, resulting in about 450 “bones” at birth—but sources vary on the exact numbers.

In fact, over anatomical history, sources have varied significantly even in the total bone count of the adult skeleton. The total attributed to Galen’s work was 248, whereas Vesalius counted 307, and the modern number is 206. It wasn’t that the number changed over the past few thousand years; it depended on how anatomists counted—like counting the sternum as one bone or two.

But even after the adult skeleton is formed, and the last growth region at the medial end of the clavicle closes, the number of bones can still change. For instance, arthritis can fuse vertebrae, the 22 bones in the skull often fuse together in old age, and arthritis or injury can fuse small elements like finger and toe bones. So, technically once that happens, an individual doesn’t really have 206 separate bones anymore.


Still, aside from the changes that occur with age, 206 is the average number of bones in a normal adult. Individuals can be born with extra bones or fewer bones, and it’s not even all that rare. This comes up in forensic work, because unusual skeletal features can help get a person identified. For example, we may suspect who the person is from other clues—like an ID card, or simply the location where they’re found.

And if we can find an x-ray of that suspected person that includes the unusual feature seen in the morgue, we can match those up and get a head start on identification, long before the DNA results are back.

In fact, for any given anatomical feature, there’s what is called the 70% rule: It’s estimated that only 70% of us have the by-the-book pattern for a given structure, meaning that 30% of people have anomalies. So, if 100 people are watching this course now, that means that around 70 of them have 206 bones, and 30 of them probably have some variation in bone number—and not just from age or maybe an injury that fused bones, but congenitally, from their own genetics and development.

And when you consider how many different anatomical structures we each have, and that approximately 30% of us have anomalies for each one of those, it’s no wonder we see so much diversity in ourselves and in each other. Indeed an exciting and amazing fact—both to think about and to see!

For one thing, there are a lot of repeating themes in the body, like 24 individual vertebrae within the spine, which have many similar features, and 12 pairs of ribs, that likewise have much in common with each other. And there are 56 bones called phalanges that make up our fingers and toes and have great similarities, and those 56 are over a fourth of the typical 206 bones right there.

We also have bilateral symmetry in our favor: because the right and left sides of the body are basically mirror images, once you know about one side, you know the other side, too. Finally, due to our four-limbed, tetrapod ancestry, the upper limb and lower limb have much in common, as we will see.



Bone has widely scattered living cells that are surrounded by and embedded within a connective tissue matrix. For the most part, the cells produce this matrix bed, and then they lie in it. The matrix is mostly composed of a variety of calcium phosphate salts, collectively called hydroxyapatite. These minerals are crystallized around strands of the protein collagen.

Depending on age and health, the overall chemical breakdown of bone is generally about 50% inorganic minerals, around 25% organic compounds—mostly the collagen—and about 25% water—much of which is inside the bone cells.

A good analogy is that the makeup of bone is similar to fiberglass. Bone’s collagen fibers, like the fibers in fiberglass, give bone some flexibility and resiliency.

The mineral part of the matrix, on the other hand, gives bone its strength—just like the glass in fiberglass—but it’s brittle. It’s the combination of the flexible collagen and strong minerals that give bone its remarkable properties.

To understand bone’s makeup and resulting properties, we can conduct an experiment. Get a pair of chicken leg or thigh bones; both bones should be from the same animal and be about the same size. Remove the meat and then put one of the bones in a container of vinegar, for maybe a week or so. Cover the container with a lid to avoid the pungent smell.

After a while, when you take it out, the bone will have the same shape, but it will be flexible—you might even be able to tie it in a knot! This is because the acids in the vinegar demineralize the bone—in other words, the vinegar leaches out the calcium in the bone matrix, leaving all the flexibility of bone, but without the strength.

Take the other bone of the pair and put it in a 400° oven for an hour or so. When it cools off, you’ll find that the bone is really brittle, and might even scratch easily with your fingernail or a fork.

This is because the heat denatured the collagen proteins and cooked off all the water, leaving only the brittle matrix behind. Remember, the minerals give bone its strength, while the collagen provides some flexibility—and it’s the combination of the two that gives support with resiliency.


Bones are the body’s major store of calcium. If we go without dairy products, without sardines or canned salmon with their small edible bones, or if we eat no dark, leafy greens, then blood calcium levels fall.

In this case, hormones tell bone cells, called osteoclasts, to take some of the calcium out of bone storage and put it back into the blood. The osteoclasts release just what is needed to keep blood calcium levels in their optimal range for all of the other body processes that need it.

This cycling of minerals is called bone remodeling, and it’s an important part of bone maintenance. About 10% of bone mineral is turned over each year—so your skeleton is constantly renewed by the process of remodeling.

If we don’t take in enough dietary calcium to offset our body’s daily needs, then more calcium is taken from our bones than we replace through our diet. Hence, bones steadily lose their mineral content.

And while we can limit or slow bone loss to some degree with weight-bearing exercise and a good diet, a certain amount of bone loss is inevitable, especially since the ability to build new bone starts a steady decline past around age 40.

There are factors and habits that can negatively impact bone strength, and even accelerate bone loss. These include smoking, excessive alcohol consumption, poor diet, and a sedentary lifestyle. But even without any of those negative factors, at some point, bone mineral loss is inevitable.

Women, in particular, have accelerated bone loss after they hit menopause, which on a worldwide average happens at about age 50—occurring at slightly older ages in developed countries, and a bit earlier in less-developed parts of the world.

The postmenopausal bone loss is because the female hormone estrogen has protective effects that keep bone strong, but estrogen is no longer produced after menopause. At that point, women begin to lose bone minerals at an increased rate compared to before menopause, or when compared to men of the same age.

Given that women can spend about a third of their lives in a postmenopausal state, osteoporosis is a global health concern for them. Men also have bone loss, and by about age 65 or 70, their rates of skeletal demineralization are about the same as in women, leading to osteoporosis.

Since women started their bone loss sooner, however, they are at greater risk of more advanced osteoporosis. And when bone is demineralized by osteoporosis, someone’s own body weight can be enough to cause a fracture.

This is called a pathological fracture, because the bone was already weakened by another condition, as opposed to a fracture caused by trauma. When bones are that weak, they have difficulty healing, especially if the blood supply to the bone is disrupted by the fracture.

