Fearless Science: A Half-Century of Bold Research

A realistic rendering of a diamondback terrapin hauling itself up to the top of a cliff high above the ground, so high there are clouds visible over the turquoise sea below.
Credit: Léandre Hounnaké.
Every great scientific advance begins with an "aha" moment. Discovery is driven by a desire to push boundaries, answer questions and pursue fearless ideas.

At the University of Maryland's College of Computer, Mathematical, and Natural Sciences, we have always been driven by fearless science. Reaching back more than a half-century, the stories that follow highlight some of the most exciting scientific discoveries made by faculty members in the college. Some may be familiar to you. If you were on campus in the mid-2000s, you probably remember UMD's lead role in NASA's history-making Deep Impact mission to comet Tempel 1. You may also know that the first researcher to apply the term "chaos" to mathematical problems resides in our Department of Mathematics.

But did you know that a UMD computer scientist was the first to develop high-precision touch screens for mobile devices? Did you know that a UMD chemist synthesized a molecule that would later form the basis of a popular smoking cessation drug?

Read on for these and more stories of fearless science—one from each of our departments. Along the way, we'll also highlight two faculty members and two alumni who were awarded the Nobel Prize. All of these discoveries made a lasting impact on the researchers' chosen fields, with many having profound societal implications.

 

My Fearless Idea Found a Broken Rule in Biology

2010: Leslie Pick

 

A microscopy image of an insect embryo, which looks something like a fluorescent green and orange grain of rice.
An insect embryo. Credit: W. Ray Anderson.
Every animal begins life as a single cell, with a complete DNA "blueprint" of the animal it will become—be it an elephant, a tortoise or a fruit fly. This single cell will first divide into two, then four and then eight identical cells. At some point, as the cells continue dividing, this ball of identical cells will begin to take on a distinct shape. Each new cell assumes a more specific role as the animal begins to develop tissues, organs and limbs.

How does each cell "read" the DNA blueprint to get its instructions? What signals does the DNA send in some cells and not others? Developmental biologists puzzled over these questions until the early 1980s, when several different research groups independently discovered two key groups of genes, known as segmentation genes and homeobox genes (Hox genes). These discoveries eventually earned those researchers the 1995 Nobel Prize in physiology or medicine.

Around this time, UMD Entomology Professor Leslie Pick and colleagues at other institutions began studying segmentation genes, which promote the formation of a series of body segments in the early insect embryo. Diagrams of these segments look a lot like thin stripes painted on a jellybean.

Once these segments are clearly determined, Hox genes specify which features each segment will develop. In the case of a fruit fly, one segment might grow a pair of legs, while others will sprout wings, eyes, antennae and other parts.

Developmental biologists soon learned that Hox genes are common across a wide variety of animals and have probably changed very little since complex animals first evolved. Biologists use the term "evolutionarily conserved" to describe genes that fit this description.

"Hox genes were first identified in fruit flies, through mutations that resulted in a lot of crazy body shapes. A leg might grow where an antenna should be, for example," Pick said. "At first it was thought to be a weird thing that only happened in insects. So it was a total surprise to learn that all animals have Hox genes. This raised the question of what makes animals different from each other." In 2010, Pick and her team discovered an unexpected exception to this rule of evolutionary conservation. In a paper published in the Proceedings of the National Academy of Sciences, they confirmed that a well-known gene named fushi tarazu, first characterized in fruit flies as a segmentation gene, once functioned more like a Hox gene at an earlier point in evolutionary history.

The truly remarkable part of their discovery, however, was that this change didn't only happen once. The fushi tarazu gene changed multiple times across various species of arthropods—the group that includes all insects, crustaceans and arachnids. This was particularly surprising because most changes to Hox and segmentation genes are lethal when researchers make them experimentally in the lab.

"We found that in nature, animals have been able to thrive despite these major changes. We helped challenge the classic idea that if animals share a characteristic, they get there through the same genetic pathways," Pick said. "There's so much more genetic variation than we expect. Our challenge now is to understand how this type of genetic variation occurs in nature without threatening species' survival."

 

Our Fearless Ideas Opened Eyes to Fish Vision

2000s: Karen Carleton and William Jeffery

 

A trio of brightly colored cichlid fish. One is blue, one is blue and yellow with black stripes, and one is lavender with a yellow belly and fins.
A trio of cichlid fish. Credit: Karen Carleton.
Vision has a profound effect on the way many animals engage with the world, helping them to find shelter, search for food and assess potential mates. 

In some cases, color preference plays such a strong role in animal mating that it drives the evolution of new species. Biologists suspected that this effect, known as "sensory drive," may have spurred the evolution of the dazzling color palettes seen in some families of fishes, birds and other colorful animals.

UMD Biology Professor Karen Carleton was among the first researchers to provide empirical evidence for evolution via sensory drive, by studying cichlid fish that hail from the freshwater lakes of Africa.

In a study published in 2008 in the journal Nature, Carleton and her colleagues demonstrated that female cichlids living at different depths preferred different coloration in their male suitors. The researchers also tracked genetic changes to demonstrate that differences in female visual sensitivity drove the evolution of new species.

"Fish near the surface experience a broader spectrum of color, while deeper down, the light shifts toward red," said Carleton. "We showed that female fish living up high prefer to mate with blue males, while females that live deeper want to mate with red fish." But for some fish that make their homes deep in caves, eyes can be a liability. Eyes consume a lot of energy—even when not in use—and occupy a lot of space. With no sun to light their way, cavefishes' eyes have degenerated into vestigial structures, much like the human tailbone or the leg bones found in some pythons and boa constrictors.

UMD Biology Professor William Jeffery has been studying Astyanax mexicanus, a species of blind cavefish found in northeastern Mexico, for more than 20 years. He has identified genes responsible for various aspects of normal eye development and characterized how these genes create differences between cavefish and surface-dwelling members of the same species with functional eyes.

In 2000, Jeffery co-authored a study published in the journal Science in which the researchers restored the eyes of cave-dwelling Astyanax simply by transplanting a normal lens from a surface-dwelling relative. The result demonstrated that the evolutionary loss of eyes may be flexible—and perhaps even reversible.

"All cave vertebrates start to develop eyes, but then the eyes stop growing and degenerate. Right when the eye needs the most oxygen, it's cut off, suggesting that cavefish purposely destroy their eyes. I never expected to find this," Jeffery said. "Our work showed that the lens plays a role in development by inducing the formation of other structures in the eye, including the retina."

Together, Carleton and Jeffery's work reveals that animal visual systems have taken some surprising twists and turns through the course of evolution.

"Nonhuman vision is almost always more complex and elaborate than we realize. Many fishes see colors we don't," Carleton said. "Now with genomic techniques, we can rapidly survey how much diversity there is and how rapidly sensory systems can evolve."

 

My Fearless Idea Impacted a Comet

2005: Michael A'Hearn

 

Comets have captured people's attention for centuries. As comets approach the warmth of the sun, they begin to release gases and debris, creating a large extended atmosphere and a long tail that can stretch for millions of miles.

A 3D illustration of Deep Impact hitting a comet.
An illustration of Deep Impact striking comet Tempel 1. Credit: NASA Jet Propulsion Laboratory/Pat Rawlings.
Although many comets become visible to the naked eye as they pass by Earth, scientists had few opportunities to look closely at a comet's nucleus—the icy, dusty center of solid matter. They certainly had never seen a comet's interior. That changed in 2005, when a groundbreaking NASA mission named Deep Impact, led by the late Distinguished University Professor of Astronomy Mike A'Hearn (1940-2017), flew a spacecraft near the comet Tempel 1 and launched an impactor module about the size of a washing machine into the surface of the comet's nucleus.

The impact, which happened on July 4, 2005, instantly excavated a crater on Tempel 1's surface while the spacecraft collected data from the ejected material. The observations revealed that comets are porous and fluffy, with only about 20 percent of their volume taken up by dust, debris and ice—the rest is all empty space. The team also learned that comet surfaces are variable and dynamic, with much of their frozen water buried deep beneath the surface.

"The major surprise was the opacity of the plume the impactor created and the light it gave off," A'Hearn said just days after the impact. "That suggests the dust excavated from the comet's surface was extremely fine, more like talcum powder than beach sand. And the surface is definitely not what most people think of when they think of comets—an ice cube."

Now, more than a decade later, UMD's Deep Impact team looks back on the mission as a rare opportunity for high-risk, high-reward research.

"We had contingency plans, but it was not entirely clear that any of them would work. We didn't even know much about the comet's shape. We could have hit a crevasse or glanced off of the edge," said Jessica Sunshine, a UMD astronomy professor who served as a Deep Impact mission scientist. "We ran lots of simulations, but we couldn't control nature. Nobody had attempted anything on this scale before." After Deep Impact's first encounter with Tempel 1, the team received NASA's approval to use the spacecraft to investigate three more comets: Hartley 2 (2010), Garradd (2012) and ISON (2013). The mission also inspired follow-up missions, including Stardust-NExT and Rosetta, the latter of which wrapped up in 2016.

"When we got to the other comets, we quickly learned that they are each completely distinct," said Lori Feaga, a UMD associate research scientist in astronomy who joined Deep Impact as a postdoc. "Comets have geological features such as layering, pits and flows. We could watch the comets evolve as surface processes and jets reshaped them. The idea of a dirty snowball was turned upside down."

 

My Fearless Idea Created Precise Touch Screens

2000s: Ben Shneiderman

 

A pair of hands holding and interacting with an Android phone. The person is writing an email and has attached a picture of Ben Shneiderman's treemap art.
Ben Shneiderman's work made touch screens like the one on this phone possible. Credit: Faye Levine/Ben Shneiderman.
Today's personal computing devices are incredibly easy to use, featuring intuitive graphic user interfaces, widely accepted design standards and precise trackpads or touch screens. Given this, it can be hard to imagine just how challenging the earliest home computers were to use.

To run programs and perform operations, a user needed to type code on a command line. A working knowledge of at least one computer language was a must. In the early 1980s, while companies like Apple and IBM raced to market powerful new models at more affordable prices, personal computers remained an expensive indulgence for tech-savvy consumers.

Recognizing an unmet need, Ben Shneiderman, currently a Distinguished University Professor of Computer Science, began working to make computers more accessible to a wider variety of people. He is credited as a founder of the discipline known as human-computer interaction, which borrows lessons from cognitive psychology to better understand how humans interact with technology.

"As a research field, human-computer interaction was seen as edgy. It took some work to make it stick with computer scientists," Shneiderman said. "Even today, it's still a bit controversial. But the great success of this field is that 6 billion people now have a device in their pocket that they can use to keep in touch. By making designs that worked for anyone, we also made it better for everyone."

Shneiderman was an early proponent of a design philosophy that he named "direct manipulation." Rather than typing commands, direct manipulation gave users the ability to click on links, drag and drop files, resize or zoom in on images, and perform other actions that have an instantaneously observable result. He also developed the design for hyperlinks—clickable words in a sentence or buttons in an image.

"Early web interfaces relied on numbered menus. To access a new piece of information, a user would need to consult the menu and type in the right number. I just knew this wasn't workable," Shneiderman said. "With direct manipulation designs, if you see a word you want to know more about, you just click on it. You can drag and drop items. Word processors represent a document as it would appear when you print it. These once-novel ideas have now become standard."

Beginning in the late 1980s, Shneiderman made key refinements to touch-screen interfaces. Prior to his work, touch screens required big buttons—at least an inch square—and still had trouble precisely tracking finger presses. Thanks to innovations by Shneiderman and his teams, smartphones are now equipped with miniature, highly precise touch screens that enable people to use their devices instantly and intuitively.

"My focus has been on providing power to the user," Shneiderman explained. "Give them the tools they need to do what they need to do. It's not about what the computers can do, it's about what people can do."

 

My Fearless Idea Illuminated Cell Membrane Repair

2001: Norma Andrews

 

A microscopy image of a T. cruzi parasite, which looks like a worm with a pointy tail, invading a cell.
A T. cruzi parasite invading a cell. Credit: Norma Andrews.
Wound healing makes life possible. While the healing process is well understood for many tissues and organs, biologists have only recently learned how individual cells repair wounds to their cell membranes.

Biologists long assumed that the fatty molecules that make up cell membranes passively spread out to fill in a wound. Beginning in the 1990s, researchers observed small fluid-filled sacs within cells, called vesicles, participating in the healing process. This suggested that cells have an active role in sealing breaks in their cell membrane. But the details of the repair process were still unknown.

Then, Norma Andrews, currently a UMD cell biology and molecular genetics professor, and her colleagues made a surprising discovery while studying Trypanosoma cruzi. This single-celled parasite causes Chagas disease, which infects more than 6 million people throughout Latin America. The researchers wanted to learn how T. cruzi invades the cells of its animal host without killing the cells in the process.

Andrews' team suspected a role for lysosomes—specialized vesicles that contain digestive enzymes that cells use to break down waste products. But, instead of seeing lysosomes gather around the parasite after it had entered the cell, as they had expected, Andrews and her colleagues noticed that lysosomes were instead gathering at the cell surface. There, the lysosomes were fusing with the membrane near other parasites that were wounding the cell from the outside.

"With this observation, we began to wonder if the mystery vesicles observed near cell membrane injury sites in the 1990s might actually have been lysosomes," Andrews said. "But this didn't make sense. Why would cells use a vesicle filled with dangerous enzymes for a repair process? Then we found that lysosomes were gathering at the cell surface in response to calcium entering the cell. That's when we started looking closer."

Cells work to keep a higher concentration of positively charged calcium ions outside their membranes. This forms a gradient that is the basis for important functions like the conduction of nerve impulses and the contraction of muscle cells.

"Whenever there's a puncture in the cell membrane, a rush of calcium inside the cell is a signal that something is wrong," Andrews explained. "We reasoned that, if lysosome fusion to the cell membrane is a calcium-dependent process, we should be able to make it happen without T. cruzi being involved. Our experiments showed that this was indeed the case."

In 2001, Andrews and several colleagues published their results in the journal Cell, suggesting that lysosomes have a widespread and previously unexpected role in sealing damaged cell membranes. In subsequent experiments, Andrews' group showed that this mechanism is common to all mammalian cells and may be widespread among all animals, plants, fungi and some single-celled organisms.

The researchers are investigating how a sac full of dangerous enzymes could be useful for a repair process. According to Andrews, it is possible that the enzymes in the lysosomes modify surface wounds to facilitate repair, while helping defend against other harmful invaders, such as bacteria. As for T. cruzi, which seems to survive the enzyme attack with no issues, Andrews sees a Trojan horse-style strategy at work.

"The parasite is mimicking a cell repair mechanism to get into the host cell," Andrews said. "The parasite's initial contact with the membrane recruits the vesicles, and this response is what ends up bringing the parasite in. The cell has no way to distinguish between normal repair and a parasite invasion." A better understanding of the process could one day help to treat deadly parasitic infections, such as Chagas disease. But Andrews sees even more possibilities.

"We believe this repair happens quite often. For example, it is thought that muscular dystrophy is caused by defects in muscle cell repair," Andrews said. "Cell repair is also relevant in certain immune responses and interactions between bacteria and mammalian cells. The parasites made us notice something that has much wider implications."

 

Our Fearless Ideas Explained Earth's Atmosphere

2000: James Farquhar
1998: Alan Kaufman

 

A photo of Dale's Gorge in Australia, showing banded iron rock formations that document the Great Oxygenation Event.
Dale's Gorge, showing banded iron formations that document the Great Oxygenation Event. Credit: Alan Kaufman.
More than 2.4 billion years ago, Earth was a harsh and unwelcoming place, with a hazy methane-rich atmosphere all but devoid of oxygen. Without oxygen, there was no protective ozone layer, permitting ultraviolet light to bombard the planet's surface. Early microbial life remained deep in the ocean, beyond the reach of this unchecked radiation.

Within a few hundred million years—the blink of an eye in geologic time—everything changed. The atmosphere filled with oxygen, the ozone layer formed and life came bursting toward the ocean's surface.

Why this happened is uncertain, but the process probably started around the time when blue-green bacteria—Earth's first organisms capable of oxygen-producing photosynthesis—began releasing oxygen. Several other lines of evidence provided clues to changes in early Earth's atmospheric composition as well. Still, geologists struggled to connect the dots until 2000, when UMD Geology Professor James Farquhar and his colleagues charted the rise of atmospheric oxygen in a paper published in the journal Science.

The researchers found a sharp change in the proportion of rare sulfur species that could only be explained by a rapid rise in atmospheric oxygen and, importantly, development of the ozone layer.
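
For readers who want the underlying measurement, the anomaly Farquhar's team tracked is conventionally reported as Δ³³S, the deviation of the rare isotope sulfur-33 from the ratio expected under ordinary mass-dependent chemistry. The expression below is the standard geochemical definition, included here as background rather than quoted from the paper.

```latex
% Standard definition of the mass-independent sulfur anomaly (background
% notation, not quoted from Farquhar's paper). The delta values are measured
% isotope ratios relative to a reference standard, in parts per thousand.
\[
  \Delta^{33}\mathrm{S} \;=\; \delta^{33}\mathrm{S}
  \;-\; 1000\left[\left(1 + \frac{\delta^{34}\mathrm{S}}{1000}\right)^{0.515} - 1\right]
\]
% Ordinary chemistry keeps this quantity near zero; rocks older than roughly
% 2.4 billion years show large nonzero values, the sharp change in rare
% sulfur isotopes described above.
```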

"When I first saw this sulfur signature, I wasn't really looking for it. I started out looking for evidence of the first appearance of sulfate-reducing bacteria," Farquhar said. "But the variation was 20 times larger than I had expected. At first I thought something had gone wrong with the experiment."

According to Farquhar, the shift from an oxygen-poor atmosphere to one with enough oxygen to develop the ozone layer was extraordinarily rapid.

"It would have been like someone flipped a switch," Farquhar said. "And this was like a wall switch, not a dimmer switch. There was no turning back."

This sudden shift in atmospheric chemistry, known as the "Great Oxygenation Event," had profound implications for life on Earth. Oxygen drove the evolution of cells with a nucleus—a drastic departure from the bacteria that ruled early Earth. The ozone layer blocked UV radiation, enabling life to occupy the upper reaches of the ocean for the first time in Earth's history.

But the rise of atmospheric oxygen also destroyed methane, which is roughly 25 times more efficient than carbon dioxide at capturing heat. This large-scale loss of methane—and thus heat—likely plunged the planet into a series of widespread ice ages.

One such series, which occurred more than a billion years after the Great Oxygenation Event, during the Neoproterozoic Era, may have resulted in "snowball Earths." During these times, glaciers and sea ice potentially blanketed the planet's surface from the poles to the equator for millions of years.

In 1998, UMD Geology Professor Alan Kaufman co-authored a paper on the snowball Earths in the journal Science. Based on evidence that pointed to profound changes in the global carbon cycle during the Neoproterozoic Era, Kaufman and his colleagues suggested that Earth underwent drastic and unprecedented swings in surface temperatures.

As the last of the great ice sheets melted, oxygen in the atmosphere began to accumulate rapidly again. Researchers named this second major shift the "Neoproterozoic Oxidation Event."

"The rise of oxygen during the Great Oxygenation Event might have taken us from insignificant amounts to about 1 or 2 percent oxygen in the atmosphere," Kaufman said. "But during the Neoproterozoic, we may have gone from 1 to 2 percent to the 20 percent we see today."

 

Our Fearless Idea Controlled Chaos

1990: Edward Ott, Celso Grebogi and James A. Yorke

 

Can the flap of a butterfly's wings in Brazil set off a tornado in Texas? It may sound far-fetched, but according to chaos theory, not only are such connections possible—they're much more likely than we may realize. 

A computer visualization of chaos, which appears as ribbons and swirls of red on a dark background.
A computer visualization of chaos. Credit: James A. Yorke.
Chaos theory is a field of mathematics that describes the behavior of complex, unpredictable systems. Mathematically speaking, a defining characteristic of a chaotic system is sensitivity to the system's initial conditions. Using the butterfly analogy, that first wing flap would create the initial conditions that define the entire chain of chaotic events to follow. Did the butterfly lead with one wing over the other? Was it a strong flap or a weak one? Did its wings have any scars or other blemishes that affected air flow?
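
To make that sensitivity concrete, here is a minimal sketch (our illustration, not taken from Yorke's work) using the logistic map, a textbook chaotic system; the parameter and starting values are arbitrary choices. Two starting points that differ by one part in a billion soon follow completely different trajectories.

```python
# A minimal sketch of sensitivity to initial conditions, using the logistic
# map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime.
# (Illustration only; values are arbitrary, not from the 1975 paper.)

r = 3.9
x_a, x_b = 0.200000000, 0.200000001  # two nearly identical starting points

for n in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if n % 10 == 0:
        print(f"step {n:2d}: {x_a:.6f} vs {x_b:.6f}  (gap {abs(x_a - x_b):.1e})")
```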

Distinguished University Research Professor of Mathematics and Physics James A. Yorke, Ph.D. '66, mathematics, was the first to apply the term "chaos" in a mathematical context, in a paper he co-authored in 1975 in the journal American Mathematical Monthly. Since then, chaos theory has been applied in some form to nearly every discipline of science and has even found its way into popular culture—featured in films such as "Jurassic Park" and television shows such as "The Simpsons."

"I like to say that scientists were the last to learn about chaos. But other people are intrinsically aware of it," said Yorke, who illustrated his point with a story about a married couple who met purely by chance, when a woman hailed a taxi cab driven by her future husband. "This couple soon had a child, who eventually became a colleague of mine. Imagine if his mother had ridden in a different cab that day. Everyone knows, at some level, that chaos can produce huge impacts on their lives." In 1979, Yorke met Edward Ott, then a new addition to UMD's faculty, who is now a Distinguished University Professor. An electrical engineer and a physicist by training, Ott had previously studied chaos in the behavior of highly ionized gases.

"People would always ask why I was interested in chaos. Isn't it something you always want to avoid in science? It's nasty and complicated," Ott said. "As soon as I started wondering whether there is a use for chaos, I started thinking about controlling chaos. Chaos has an interesting attribute, in that the effect of a small change grows exponentially and can have a huge effect on the total system. It was natural to ask if we could control a chaotic system with only a very tiny change." In 1990, Yorke, Ott and Celso Grebogi, M.S. '75, Ph.D. '78, physics—then a UMD mathematics professor—answered that question for the scientific community with a resounding "Yes." The trio published a paper in the journal Physical Review Letters titled "Controlling Chaos," which, true to its name, described a method for stabilizing a chaotic system. By adjusting a carefully chosen parameter, the researchers showed that they could prod the system toward a desired outcome. The paper and method both are now better known by initials of the three authors' last names: OGY.

"First, we presented the general theory and then followed with numerical experiments," Ott said. "Almost immediately, others followed up with a large number of lab experiments and physical realizations. For the first time, if you understood chaos you could use it to your benefit. We showed that chaos isn't always an annoyance and a hindrance to gaining information."

The OGY method has since aided studies of turbulent fluids such as airplane jetwash and boat wakes, oscillating chemical reactions, the arrhythmic beating of cardiac tissues, and more.

 

My Fearless Idea Confirmed Storms Spread Pollution

1987: Russell Dickerson

 

A photo of a supercell storm with a rain shaft looming above a field at sunset.
A supercell storm with a rain shaft. Credit: Ken Engquist/NOAA.
In the 1980s, severe pollution choked the skies above most major cities in the United States. But these pollution problems were widely accepted as local or regional issues. One could travel a few dozen miles outside of smog-stifled Los Angeles, for example, and reasonably expect to enjoy clear skies and a breath of fresh air.

Reality is a bit more complicated, however. In 1987, Russell Dickerson, currently a professor of atmospheric and oceanic science at UMD, led a study on air pollution published in the journal Science. The team was the first to demonstrate that large thunderstorms can launch pollution molecules nearly 9 miles above Earth's surface. At this altitude—above most clouds and all but the largest of storms—pollution survives for much longer and can travel miles away from its source.

"The bottom line is that storms can transform local pollution problems into regional—or even global—atmospheric chemistry and climate issues," Dickerson said. "The higher you go, the more stable the chemistry becomes. Because of this, pollutants stay around much longer, resulting in a bigger impact on radiative forcing and climate."

Prior to Dickerson's study, the prevailing wisdom among atmospheric scientists held that thunderstorms could actually help clean up pollution. As raindrops fall to the ground, they attract and remove pollutant particles from the air. But this assumed a simpler model in which pollutants remain less than a mile above the ground.

Then, some numerical models began to suggest that storms could launch pollutants into the upper reaches of the troposphere—the lowest 10 miles of the atmosphere where all weather takes place. Dickerson wanted to verify these predictions with observations. So his team set about doing this the only way possible: by flying three instrumented research planes very close to a giant thunderstorm.

"We did the work in Tornado Alley, in Oklahoma and Arkansas," Dickerson said. "But big storms happen nearly everywhere. Our group has since done lots of work in other places, such as Costa Rica, Guam, Canada and China."

Looking back, Dickerson is most proud of where his colleagues and students have taken the work since the 1987 study. Together, their body of research has contributed to a new understanding of how pollution is transported at the global scale.

"The 1987 paper lit a fire and opened up a whole new field of exploration into the interaction between chemistry and meteorology," Dickerson said. "We now know that pollution lofted from the U.S. can end up in Europe, and Europe's pollution ends up in Asia. And pollution from Asia—China especially—travels across the Pacific to North America."

Despite such widespread transport of pollutants, Dickerson noted that the air has become much cleaner in some key places over the course of his career.

"The wonderful story here is that air quality over North America and Europe has improved dramatically," Dickerson explained. "In places like Beijing and Mumbai, people often can't see across the street. People forget that New York City and Chicago used to look like this."

 

My Fearless Idea Helped People Quit Smoking

1975: Paul Mazzocchi

 

A 3D model of the modified benzazepine molecule on which the smoking cessation drug Chantix is based.
The modified benzazepine molecule on which the smoking cessation drug Chantix is based. Credit: ScienceSource Images.
Throughout the latter half of the 20th century, scientists were working to develop more effective pharmaceuticals to treat a wide range of ailments, including pain. Morphine—long a staple of wartime trauma centers and domestic surgical wards—had proven itself a highly effective painkiller, but with one major downside: a high potential for addiction.

In the 1970s, against this backdrop, Paul Mazzocchi, currently a UMD professor emeritus of chemistry and biochemistry, joined the race to design a morphine-like molecule, also known as a synthetic opioid. At the time, chemists believed that with a little engineering, they could develop an opioid that would offer all the pain-relieving benefits of morphine without snaring patients in a web of addiction.

"Back then, this was an ongoing research area funded by the National Institutes of Health. We thought we could generate a molecule that was like morphine but not addictive. This has since been recognized as nonsense," said Mazzocchi, acknowledging the well-documented problems with synthetic opioids like fentanyl and its derivatives.

Mazzocchi's efforts were not in vain, however. In a paper he co-authored in the Journal of Medicinal Chemistry in 1979, Mazzocchi described a molecule that lacked the painkilling powers of morphine, but was nonetheless able to bind to morphine receptors in the brain. Importantly, the molecule—a modified benzazepine—was later found to bind to another very similar group of receptors: those responsible for nicotine addiction.

A quarter-century later, researchers at Pfizer's research and development labs came across Mazzocchi's publication. With some modifications, the researchers transformed Mazzocchi's benzazepine molecule into a related compound called varenicline. Many former smokers may recognize varenicline by its brand name: the smoking cessation drug Chantix.

"I never knew our molecule would be useful for something else. We were only looking at the morphine receptor site," Mazzocchi said. "But there are now well-established links between the morphine receptor site and the nicotine receptor site. Pfizer was looking for a compound that had certain properties at the related nicotine site. That's how they found our paper, and that's how varenicline came about."

Chantix is now a highly profitable product—although not totally devoid of side effects. Because Mazzocchi published his findings on the modified benzazepine molecule in the public domain, as per the terms of his National Institutes of Health grant, his recognition came in the bibliography of Pfizer's original publication on varenicline. All the same, he is grateful that his efforts contributed to a positive outcome.

"The danger in any science is that you might make a discovery that's not beneficial but harmful," said Mazzocchi, referring to danger presented by other successful attempts to create synthetic opioids. "You can make the same comparison with nuclear science and the atom bomb. But as a scientist, you focus on your goal and hope that there aren't any alternative uses that are less than beneficial."

 

My Fearless Idea Gave "Color" to Quarks

1964: O.W. "Wally" Greenberg

 

An artist's conception of the 'color' of quarks. This appears as four balls, two gold and two reflective silver. Two of the balls are split in half, showing that inside are three smaller balls--red, green and blue--connected by a zig-zagging yellow line.
An artist's conception of the 'color' of quarks. Credit: Festa/Shutterstock.
In the 1950s and 1960s, particle physicists found themselves in the midst of a conundrum. They knew that atoms could be split into protons, neutrons and electrons. But the development of ever-more powerful atom-smashers revealed a veritable zoo of new particles never before observed by scientists. Where did these particles come from, and how did they fit into physicists' understanding of matter?

Today, we know that protons and neutrons can be further split into smaller particles called quarks. First proposed in 1964, quarks have unusual properties that initially made it hard for the scientific community to accept their existence. For example, protons contain three quarks—two with an identical electric charge and one with a different charge. This appears to violate the Pauli exclusion principle, which forbids two identical particles from occupying the same quantum state. Only a property with three possible values could allow three quarks to coexist while satisfying this principle.

In 1964—the same year quarks were first proposed—UMD Physics Professor O.W. (Wally) Greenberg published a paper in the journal Physical Review Letters. He was the first to suggest that quarks exhibit a property called "color," which, despite the name, has nothing to do with the color we see with our eyes. Rather, it's a metaphor, rooted in the idea that the three primary colors—red, green and blue—combine to make white light.

Color provides three distinct quantum states in which a given quark can exist, while explaining the strong interactions that bind quarks together. Quarks and color were experimentally verified in 1973, and the "standard model" of particle physics was officially born. Finally, physicists had a basic understanding of what matter is and how it appears in the universe.

"With the discovery of quarks and color, our view of the fundamental particles of nature changed," Greenberg said. "Fifty years ago, we thought protons and neutrons were the most basic particles. Today, point-like quarks are the most fundamental particles ever seen even with the most advanced high-energy accelerators available.

Just as a mix of red, green and blue light yields white light, a combination of all three color charges yields a color-neutral proton or neutron. Quarks can change color when they exchange particles known as gluons with other quarks, but these changes balance each other out such that the proton or neutron remains color-neutral overall.
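
In the standard notation of particle physics (our addition, not notation used in the article), this balancing act can be written as the totally antisymmetric, color-neutral "singlet" combination of red (r), green (g) and blue (b) quarks:

```latex
% Color part of a baryon wavefunction in standard notation (background
% sketch, not from the article).
\[
  |\,\text{baryon}\,\rangle_{\text{color}}
  = \frac{1}{\sqrt{6}}\bigl(
      |rgb\rangle - |rbg\rangle + |gbr\rangle
    - |grb\rangle + |brg\rangle - |bgr\rangle \bigr)
\]
% Swapping any two quarks flips the sign of this combination, so three quarks
% that are otherwise alike still satisfy the exclusion principle; and because
% all three colors enter equally, the proton or neutron as a whole is
% color-neutral ("white").
```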

The force between color-charged particles, called the strong force, is very strong indeed—more than 100 times stronger than the electrical force that holds atoms together. The strong force is also largely responsible for the immense power of nuclear explosives, so it's a good thing that under normal circumstances, it only acts over distances about the size of an atomic nucleus.

"Basic science research led to the discovery of quarks and color," Greenberg said. "This type of research is extremely valuable, and continued basic research in particle physics will ultimately lead to a deeper understanding of the universe, and if history is our guide, also to practical applications."

Written by Matthew Wright

See also:
Our Fearless Ideas Made Us Nobel Laureates: Alumni and faculty members who won the world-renowned scientific prize

This article was published in the Winter 2019 issue of Odyssey magazine. To read other stories from that issue, please visit go.umd.edu/odyssey.

About the College of Computer, Mathematical, and Natural Sciences

The College of Computer, Mathematical, and Natural Sciences at the University of Maryland educates more than 10,000 future scientific leaders in its undergraduate and graduate programs each year. The college's 10 departments and nine interdisciplinary research centers foster scientific discovery with annual sponsored research funding exceeding $250 million.