Reading Material

Chickenpox is a highly contagious infectious disease caused by the Varicella zoster virus; sufferers develop an itchy rash that can spread throughout the body. The disease can last for up to 14 days and can occur in both children and adults, though the young are particularly vulnerable. Individuals infected with chickenpox can expect to experience a high but tolerable level of discomfort and a fever as the disease works its way through the system. The ailment was once considered a “rite of passage” by parents in the U.S. and thought to provide children with greater immunity to other forms of sickness later in life. This view, however, changed after additional research by scientists demonstrated unexpected dangers associated with the virus. Over time, the fruits of this research have transformed attitudes toward the disease and toward the utility of taking preemptive measures against it.

A vaccine against chickenpox was originally developed by Michiaki Takahashi, a Japanese doctor and research scientist, beginning in the mid-1960s. Dr. Takahashi began his work to isolate and grow the virus in 1965 and in 1972 started clinical trials with a live but weakened form of the virus that prompted the human body to create antibodies. Japan and several other countries began widespread chickenpox vaccination programs in 1974. However, it took over 20 years for the chickenpox vaccine to be approved by the U.S. Food & Drug Administration (FDA), which finally granted its seal of approval for widespread use in 1995. Yet even though the chickenpox vaccine was available and recommended by the FDA, parents did not immediately choose to vaccinate their children against this disease. Mothers and fathers typically cited the notion that chickenpox was not a serious enough disease to warrant vaccination.

Belief in that view eroded when scientists discovered the link between Varicella zoster, the virus that causes chickenpox, and shingles, a far more serious, harmful, and longer-lasting disease in older adults that affects the nervous system. They concluded that Varicella zoster remains dormant inside the body after a chickenpox infection, making it significantly more likely that the person will develop shingles later in life. As a result, the medical community in the U.S. promoted the development, adoption, and use of a vaccine against chickenpox. Although the appearance of chickenpox and shingles within one person can be many years apart—generally many decades—the increased risk of developing shingles as a younger adult (30-40 years old rather than 60-70 years old) proved to be enough to convince the medical community that immunization should be preferred to the traditional alternative.

Another reason that the chickenpox vaccine was not immediately accepted and used by parents in the U.S. centered on scientists’ observations that the vaccine simply did not last long enough and did not confer a lifetime of immunity. In other words, scientists considered the benefits of the vaccine to be temporary when given to young children. They also feared that it increased the odds that a person could become infected with chickenpox later as a young adult, when the rash is more painful and widespread and can last up to three or four weeks. Hence, allowing young children to develop chickenpox rather than receive a vaccine against it was believed to be the “lesser of two evils.” This idea changed over time as booster shots of the vaccine extended immunity and countered the perceived limits on the strength of the vaccine itself.

Today, use of the chickenpox vaccine is common throughout the world. Pediatricians suggest an initial vaccination shot after a child turns one year old, with booster shots recommended after the child turns eight. The vaccine is estimated to be up to 90% effective and has reduced worldwide cases of chickenpox infection to 400,000 per year from over 4,000,000 before vaccination became widespread. ■ (A) In light of such statistics, most doctors insist that the potential risks of developing shingles outweigh the benefits of avoiding rare complications associated with inoculations. ■ (B) Of course, many parents continue to think of the disease as an innocuous ailment, refusing to take preemptive steps against it. ■ (C) As increasing numbers of children are vaccinated and the virus becomes increasingly rare, however, even this trend among parents has failed to halt the decline of chickenpox among the most vulnerable populations. ■ (D)

The primary threat to the genetic health of a population is loss of genetic diversity, a term that refers to both the combination of different genes and the pattern of variation found within a single species. The potential for loss of genetic diversity is high among endangered species as well as among captive animals housed in places like zoos. If every individual in a particular population is genetically related, sudden catastrophes such as a new disease, a prolonged famine, or even severe weather could wipe out the entire group. Although it is difficult to address such issues among wild animals in their natural habitats, researchers have successfully implemented genetic management plans and breeding programs to address this problem in zoos. Critics of such plans are often outspoken, however.

When devising a plan to manage a captive population, researchers consider both the genes and alleles of each individual. Genes carry the information that determines an individual’s characteristics, while alleles are the alternative forms of genes. Variations occur in individuals not only because of the large number of traits that exist within a species, but also because of the random mixing of alleles that occurs during sexual reproduction. At the most rudimentary level, genetic inheritance occurs when an individual receives one allele from the mother and one allele from the father. The two alleles could be the same (homozygous) or different (heterozygous), but the resulting combination of all sets of alleles, called the genotype, represents an individual’s complete and unique genetic makeup.
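To make these terms concrete, the following minimal sketch in Python enumerates the genotypes that random mixing can produce when both parents are heterozygous for a single gene; the gene and the allele labels “A” and “a” are invented for the example, not drawn from the passage:

```python
from collections import Counter
from itertools import product

# Hypothetical gene with two alleles, "A" and "a" (labels invented for illustration).
# Each parent passes one of its two alleles to the offspring at random.
mother = ("A", "a")  # heterozygous parent
father = ("A", "a")  # heterozygous parent

# Enumerate every equally likely pairing of one maternal and one paternal allele.
genotypes = Counter("".join(sorted(pair)) for pair in product(mother, father))

for genotype, count in sorted(genotypes.items()):
    kind = "homozygous" if genotype[0] == genotype[1] else "heterozygous"
    print(f"{genotype} ({kind}): {count} of 4 equally likely outcomes")
```

Multiplied across the many genes that make up a genotype, this chance assortment is what produces the variation described above.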

Genetic management programs at zoos reduce the possibility of unintentional inbreeding—mating among closely related individuals with similar genotypes—which adversely affects both the health of individuals and the diversity of populations. ■ (A) To avoid this problem, researchers measure genetic variation among individuals by collecting DNA samples using invasive methods (sampling blood and tissue) and noninvasive methods (collecting hair, feathers, bones, and feces). ■ (B) They then calculate the frequency of alleles in each individual using scientific techniques such as gene sequencing. ■ (C) To decide which animals should breed with whom, and how often, zoos also keep accurate records of the pedigrees, or family relationships, of all animals in their care. ■ (D)

Some animal activists argue that genetic management programs cannot succeed because most zoos are not able to keep a large enough number of individuals to provide a suitable variety of genotypes. They contend that, even with careful planning, small zoo populations remain at risk of losing genetic diversity since each parent has only a 50 percent chance of contributing a particular allele for a specific trait to its offspring, and individuals receive only a sample of possible trait variations from their parents. Over time, the genetic diversity of a particularly small population may not reflect all the possible variations available in the parents’ genotypes, and certain alleles could disappear altogether. Opponents of genetic management programs allege that such plans can even lead to the abuse of individual animals that do not meet the management plan criteria. In one case, the Copenhagen Zoo put down a healthy young giraffe named Marius because, zoo officials said, his genotype was already well represented in zoos throughout Europe, and they needed to make room for a giraffe that had a more desirable genotype.

In the United States, however, the Association of Zoos and Aquariums (AZA) recommends contraception or sterilization 1 rather than euthanasia of animals with superfluous genotypes. The AZA has developed a Species Survival Plan (SSP) Program to help manage populations at zoos and aquariums across the country, from large metropolitan zoos such as the National Zoo in Washington, DC, and the Bronx Zoo in New York City to smaller zoos like the Brandywine Zoo in Wilmington, Delaware. The organization’s position is that properly regulated zoos can contribute to the biodiversity of species by ensuring the health of individual animals as well as the genetic diversity of the entire population. Working with the more than 500 SSP programs currently in place, the AZA develops comprehensive pedigrees as well as breeding and transfer plans to ensure that zoos can sustain healthy individuals within genetically diverse populations. Not only are AZA researchers working with zoo staffs in the United States, but through newly developed software and database tools, they are also helping zoos around the world devise breeding plans that map the genetic makeup of individual animals and strategic plans that predict the long-term development of species populations.


1Sterilization: a process by which animals are made incapable of reproduction

Although the term “young adult” did not come into common use until the 1960s, many scholars contend that the genre of young adult literature began after World War II when the age group from twelve through nineteen gained widespread acceptance as a discrete developmental stage. In contrast to children’s literature, which reaffirms the child’s place in the world, young adult literature helps adolescents make sense of their world, discover they are not alone, and find their place in society. Although young adult literature focuses on teenaged characters, its dominant themes echo many of the same themes explored in global literature—love, good versus evil, personal morality, and the individual versus society. Whether the story takes place in a realistic setting, a fantasy world, or a dystopian society, young adult literature explores these themes in ways that help readers navigate the difficult passage from childhood to adulthood and encourages them to identify with the characters and imagine how they would meet the same challenges. Roberta S. Trites, a professor of English and specialist in children’s and young adult literature, posits that young adult literature ultimately reflects the struggle of adolescents to answer the question raised in T.S. Eliot’s poem “The Love Song of J. Alfred Prufrock”: “Do I dare disturb the universe?”

■ (A) Like the young people who typify its audience, young adult literature has undergone a tumultuous evolution in a relatively short time span. ■ (B) From the mid-1940s into the early twenty-first century, it sometimes reflected and sometimes challenged social norms. ■ (C) The earliest example may be Seventeenth Summer by Maureen Daly, a novel published in 1942 that explored first love and adolescent rites of passage. ■ (D) That particular work spawned similar novels focusing on the day-to-day concerns of teenagers as they developed their own music, language, dress, and attitudes, seeking to distance themselves from their parents and other adults.

In the 1950s and throughout the unruly 1960s, young adult literature took on the character of contemporary society, and its treatment of the dominant themes darkened. Two groundbreaking young adult novels—The Catcher in the Rye by J. D. Salinger, published in 1951, and The Outsiders by S. E. Hinton, published in 1967—offered mature and realistic looks at troubled adolescents. Michael Cart, an expert in young adult literature, noted that the focus on culture and serious themes in these two novels, among others, made it more acceptable for the next generation of young adult authors to write candidly about teen issues. During the 1970s, young adult readers learned about sexual development in Judy Blume’s Deenie and suffered with the protagonist as he took a stand against authority in Robert Cormier’s novel The Chocolate War; later, they would dive into the mysterious society at the heart of Lois Lowry’s novel The Giver.

After a brief lull in which novels featured lighter content, focusing on relatively innocent teen social drama or Hollywood-style horror, young adult fiction experienced a resurgence of popularity at the start of the twenty-first century when authors shifted to unreal topics such as fantasy, the paranormal, and dystopia. Although these novels are set in strange worlds, the characters exhibit emotions and undergo transformative experiences that contemporary teens share and understand. “Teens are caught between two worlds, childhood and adulthood, and in young adult literature, they can navigate those two worlds and sometimes dualities of other worlds,” said Jennifer Lynn Barnes, a young adult author and scientist who studies human behavior.

Yet young adult authors have not completely abandoned realism in favor of fantasy. Novelists such as John Green and Sarah Dessen explore the same themes and adversities in realistic modern-day settings. Although they and other contemporary young adult authors do not write about worldwide cataclysms such as those in Harry Potter, The Hunger Games, and Divergent, they do not shrink from the harsh realities of life that individuals must face. Protagonists come up against decisions about self-identity, race, gender, sex, and sexuality, which are universal concerns that people of all ages grapple with. Critic Michael Cart notes that all young adult literature tackles difficult themes and “equips readers for dealing with the realities of impending adulthood and for assuming the rights and responsibilities of citizenship.”

The psychologist Carl Jung posited that people make decisions in two distinct ways: by taking in a great deal of information and over time rationally making a choice, or by making an intuitive decision quickly. However, these categories do not necessarily reflect the full complexity of decision-making, particularly when it comes to purchases. In general, purchasing goods or services involves five steps: problem recognition, information search, evaluation of alternatives, purchase decision, and post-purchase behavior. These steps can happen in an instant, and although they are seemingly only affected by taste and available resources, what looks like an intuitive process is actually more intricate and involves many decision points, both conscious and subconscious.

All purchases, from small to large, are affected on the most fundamental level by subconscious motivations—a set of factors that cannot be easily simplified. Psychologist Abraham Maslow proposed a hierarchy of needs to explain human motivation, in which necessities such as food and shelter must first be met in order for humans to seek fulfillment of higher-order needs, such as acceptance and love. Maslow’s hierarchy is usually shown as a pyramid, with fundamental physiological needs at the base, underpinning needs concerning safety, such as financial security and physical health. After those first two tiers have been satisfied, an individual can focus on needs for love and belonging. The penultimate tier consists of the need for esteem and self-respect. Only once someone has met the four more basic needs can he or she strive for the peak, self-actualization. If this final need is met, the individual has reached his or her true potential. Where one stands in that hierarchy may subtly affect what one will concentrate on in a purchasing decision. For instance, someone who aspires to be accepted by the members of a community may subconsciously start buying clothing that mimics what is worn by that group.

In terms of conscious decisions, psychologists have divided the process into three different styles: the single feature model, the additive feature model, and the elimination by aspects model. The single feature model means that the decision maker focuses on one aspect of a product. Here one might look at cost over all else, since it might be the most important factor to someone who is not quite secure economically. For this person, buying a set of plastic plates is a better decision than investing in fine porcelain dishware. This model works best for simple and quick decisions.

The additive feature model works better for more complex decisions, such as buying a computer. Here one would look at the types of computers and their range of features. A consumer might weigh the mobility of a laptop against the power of a desktop. This is all compounded, of course, by where the consumer is in Maslow’s hierarchy. ◙ (A) If the person has a good job and is using the computer to develop community or find a relationship, that may affect what he or she is looking for. ◙ (B)

The elimination by aspects model is similar to the additive feature model but works in reverse. ◙ (C) Here the consumer evaluates the options one feature at a time, eliminating any option that lacks the feature under consideration, until only one option is left. ◙ (D)
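As an illustration of how the elimination by aspects model narrows a choice, here is a minimal sketch in Python; the product names, their features, and the order in which features are considered are all invented for the example, not taken from the passage:

```python
# Each option is described by the set of features it offers (all names are hypothetical).
options = {
    "Laptop A": {"portable", "backlit keyboard"},
    "Laptop B": {"portable", "backlit keyboard", "long battery life"},
    "Desktop C": {"powerful processor", "easy to upgrade"},
}

# Features ranked by how much this (hypothetical) buyer cares about them.
features_in_order = ["portable", "long battery life"]

remaining = dict(options)
for feature in features_in_order:
    # Eliminate every option that lacks the feature currently under consideration.
    remaining = {name: feats for name, feats in remaining.items() if feature in feats}
    if len(remaining) <= 1:
        break  # stop once a single option (or none) is left

print(list(remaining))  # ['Laptop B'] -- the one option surviving elimination
```

By contrast, the additive feature model would score every option on each feature and keep the highest total, which is why it suits decisions where no single feature is disqualifying.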

Clearly, explaining purchasing behavior is a complex endeavor. In fact, beyond the subconscious factors and conscious decision models are mental shortcuts that help consumers reduce the effort of making decisions. Psychologists have identified a number of these shortcuts, or heuristics, which are used frequently and help with difficult choices in particular. For example, the availability heuristic comes into play when a consumer has a previous experience with a product or brand and then decides either to buy that brand or to avoid it the next time. Similarly, marketers frequently capitalize on the representative heuristic, in which a consumer presented with two products will often choose the more visually familiar option; this explains why the branding of many products looks so similar. And even more easily understood is the price heuristic, in which a product is perceived to be of higher or lower quality based on cost, as was shown in a recent study: consumers presented with the exact same wine at two price points preferred the taste of the “more expensive” sample.

The practice of riding waves with a wooden board dates back over three thousand years, likely originating as a means by which fishermen hauled their daily yield back to shore. Although it is unclear precisely where the practice began, it certainly took hold among the populations of Sumatra, Fiji, Tahiti, and Hawaii, and the later colonization of the Pacific Islands had a great effect on its spread. In the late 1700s, British explorer Captain James Cook became one of the first Westerners to document surfing when he witnessed a native Tahitian riding waves on a board. Cook wrote of the boldness required to face the great crests and concluded that the man he witnessed “felt the most supreme pleasure while he was driven on so fast and smoothly by the sea.”

As surfing evolved into sport, the Hawaiian Islands became its epicenter. This is unsurprising, as surfing prowess had long been a determinant of status for royal and common classes alike within ancient Hawaiian culture. Surfing was practiced under the Kapu system of old Hawaii, a strict code of conduct regarding class and social order, under which only chiefs were permitted to own fine surfboards. Surfboards for the ruling class were large, between fourteen and sixteen feet long for optimal wave riding, and carved out of a lightweight, buoyant wood from the native wiliwili tree. The common Hawaiians, meanwhile, were restricted to shorter boards, ten to twelve feet long, made of the heavier wood of the koa tree, which was more difficult to keep afloat. Yet some conventions were consistent between classes. The rituals of the board craftsmen, for instance, reflected the significance surfing held for all ancient Hawaiians. Once the wood had been selected, a ceremonial fish called “kumu” was buried near the roots of the tree as an offering to the gods. Subsequently, the tree was cut down and roughly hewn with an adze 1 made of rock or bone. The incipient surfboard was then hauled to the village canoe house near the ocean, where its shape was further honed. Next, a black stain made from tree root, banana buds, or the ashes of charred nuts was applied. Finally, kukui oil 2 was applied as a varnish. After each use, the surfboards were treated with coconut oil and swathed in cloth as a customary means of preservation.

However deeply ingrained the culture of surfing had been in Hawaiian society, it faced strong opposition in the 1800s, after Europeans began using the islands as a trading hub and Christian missionaries arrived. The missionaries’ strict ideologies caused them to frown upon the Hawaiian reverence for surfing, which they viewed as flippant or even self-indulgent. Eventually, surfing was largely banned, and by the end of the century, the sport and the Hawaiian board-building rituals had nearly died out.

But early in the twentieth century, a group of young men who surfed at Waikiki Beach, on the island of Oahu, garnered attention. Two in particular, George Freeth and Duke Kahanamoku, helped bring about surfing’s rebirth. By that point, the number of missionaries on the Hawaiian Islands had declined greatly. In 1907, Freeth was invited to give a surfing demonstration in California, where the sport would eventually become a cultural sensation. Soon after, Kahanamoku, an accomplished swimmer who won several gold medals for the United States in the 1912 Olympics, traveled the world and introduced surfing to regions that would become crucial to the sport, such as Australia and New Zealand.

The surfboard evolved as the century progressed. The invention of polystyrene (commonly known by the brand name “Styrofoam”) led to a much lighter board, as the foam is composed of ninety-eight percent air, making surfboards far more buoyant. ■ (A) Around the same time, fins were added for stability and control, and a shorter design allowed for greater freedom of movement, which brought about revolutions in wave-riding techniques. ■ (B) Surfing was no longer a sacred ritual, but rather a sport attractive to thrill-seekers. ■ (C) The activity received nationwide attention in the 1950s with the inception of “surf music” and numerous surfing films popular with teenagers. ■ (D) Finally, with the creation of the wetsuit, it became feasible to surf in waters around the globe year-round. By the 1970s, competitive surfing events made it possible for talented surfers to win handsome prize money, and world tours and contests attracted corporate sponsors, supporting newly professional surfers. Nowadays, surfing is an international big-market sport with celebrity athletes as much as it is the quiet leisure of riding the waves on a board.


1Adze: a tool similar to an ax used for cutting or shaping large pieces of wood

2Kukui: a type of nut native to the Hawaiian islands

A tsunami is a series of extremely long oceanic waves that result from the sudden displacement of large quantities of water. The catalyst for a tsunami is often an underwater earthquake or volcanic eruption. Less often, a tsunami is generated by the collapse of great amounts of oceanic sediment or by landslides at the coastline. Rarely, a tsunami is created by the impact of a meteor striking the sea. The displacement of massive volumes of water, either in the ocean or very close to it, initiates several gargantuan waves that can be quite catastrophic. Major tsunamis qualify as natural disasters, resulting in great destruction of coastal areas throughout the world. Tsunamis are also referred to as seismic sea waves.

Tsunamis are unlike the routine waves at the coastline, which regularly roll in as a result of wind blowing across the ocean’s surface. After the triggering event of a tsunami, a sequence of simple, growing waves begins traveling large distances across the ocean. The classic comparison is the proverbial stone thrown into a pool of water. The displacement generates the same effect, yet the ripple of a tsunami is long and large. A seismic impulse that occurs in deep water may create a tsunami that travels up to five hundred miles per hour, at wavelengths of sixty to one hundred and twenty miles. While the tsunami is extremely long as it makes its way to the coast, its amplitude is only one to two feet. A ship rarely registers a tsunami passing beneath its hull. As the tsunami approaches the shoreline, though, it slows due to friction against the shallow ocean floor. The wavelength decreases and the energy of the wave must redistribute, causing the tsunami to grow in height. This is where the tsunami is similar to a regular ocean wave. Both reach their greatest height just at the coastline, though only the most massive tsunamis break. Most resemble a large and fast surge, hence the term “tidal wave.”
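The speeds and the slowdown described above are consistent with the standard shallow-water approximation for wave speed, which is not given in the passage but is added here as a rough check; the depths of 5,000 meters (open ocean) and 10 meters (near shore) are assumed purely for illustration:

```latex
% Shallow-water wave speed: v ~ sqrt(g * d), where g is gravitational acceleration
% and d is the water depth (both depths below are illustrative assumptions).
v \approx \sqrt{g\,d}, \qquad
v_{\text{open ocean}} \approx \sqrt{9.8 \times 5000} \approx 221\ \text{m/s} \approx 500\ \text{mph}, \qquad
v_{\text{near shore}} \approx \sqrt{9.8 \times 10} \approx 10\ \text{m/s} \approx 22\ \text{mph}.
```

Because the wave carries roughly the same energy as it slows, its height must grow, which is the steepening near shore that the passage describes.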

Because of widely varying coastal shapes and differing seafloor and shoreline configurations throughout the world, the effects of tsunamis have varied greatly as they have made landfall. Areas that lie beside deep and open water tend to experience the tsunami in its steepest form, as the space allows the shaping of the wave into a very high crest. Often, the first sign of an impending tsunami is the receding of the water at the shoreline. This is the trough, or bottom, of the first tsunami wave, drastically pulling back the sea and exposing large areas normally submerged in water. Unsuspecting inhabitants may have their curiosity piqued by the bare seafloor and venture into this most dangerous zone. Only a few minutes later, the crest of the first wave will bear down, either breaking or sweeping in as a fast tidal current. People and objects are quickly pulled into the wave. The process repeats. Most tsunamis consist of three or four massive waves that occur about fifteen minutes apart, though some series can last for hours. The intensity of the impulse event and the topography of the coastline are the biggest factors in how large, and how destructive, each tsunami will be. A tsunami may reach several hundred meters inland and is capable of crushing homes.

Though tsunamis have caused devastation the world over, throughout history shorelines on the Pacific Ocean have been the most affected. This is due to the high level of volcanic and seismic activity on that ocean’s floor. Elsewhere in the world, ancient civilizations such as the Minoan are believed to have suffered a sharp population decline as the result of a major tsunami. Tsunami destruction has been documented regularly in both ancient and modern times. With intense development and settlement at the world’s coastlines, a single tsunami can kill many thousands of people. After a tsunami destroyed the city of Hilo, Hawaii, in 1946, scientists began to take serious steps toward an effective system of seismic wave prediction.

■ (A) Today, international geographic societies work in conjunction with meteorological agencies to forecast conditions that could lead to a tsunami. ■ (B) For example, if seismic instruments register a high-magnitude earthquake in the Pacific, meteorologists closely monitor any drastic changes in sea level and movement. ■ (C) All relevant data, such as the depth and topography of the ocean floor, are scrutinized in order to estimate the tsunami’s path and magnitude. ■ (D) Time is paramount when tsunami warnings are issued and coastal communities must quickly evacuate.

Elvis Presley is arguably the most famous rock-and-roll musician in history. While it was his untimely death that crystallized his legacy in the American psyche, his musical output was prodigious and ultimately changed the narrative of American popular music. However, many rock-and-roll artists recorded contemporaneously with Elvis, including Little Richard, Chuck Berry, and Buddy Holly, all of whom helped steer the course of popular music’s evolution. Elvis may be more popular, with thousands visiting his home each year, but each of these other singers could also be considered a father of rock and roll. The even less widely known godmother of rock and roll, however, is an African-American gospel singer named Sister Rosetta Tharpe. Although she came before rock and roll, her unique style, which featured the blending of musical genres and virtuosic guitar picking, influenced many of the better-known early rock-and-roll musicians.

Tharpe was born in 1915 in Cotton Plant, Arkansas, a town in the southern U.S. Her parents, both cotton pickers, were members of the local Church of God in Christ (COGIC). The COGIC denomination was Pentecostal, a Christian movement popular among African Americans at the time. Pentecostalism emphasized the influence of the Holy Spirit, an entity believed to take over parishioners, causing them to shout and dance, an experience commonly called “feeling the spirit.” This emotional aspect and release of control allowed more lively musical expression to become part of Pentecostal services. Tharpe’s mother, herself a musician, encouraged her daughter to sing and play the guitar. A musical prodigy, Tharpe was singing and playing in area churches by the age of four as Little Rosetta Nubin, Nubin being her family name. When Tharpe was six, she and her mother moved to Chicago.

In Chicago, Rosetta was exposed to the blues, although gospel continued to exert a greater musical influence on her. She and her mother joined a local COGIC church and performed there. From that base, they also traveled around the country performing at Pentecostal tent revivals, large affairs with a mix of entertainment and worship. Little Rosetta Nubin learned the art of entertainment at these revivals, soon becoming a headliner for the shows, and her fame grew.

After a short marriage to Reverend Tommy Thorpe, a COGIC preacher who had traveled with her on the tent revival circuit, Rosetta and her mother moved to New York City. There, Tharpe began a conscious process of appealing to a broader public. Jazz was popular in the New York nightlife scene, and Tharpe secured an engagement singing at a well-known nightclub, where management would give her secular songs to sing; when she sang Christian songs, she would subtly change the lyrics to remove spiritual references. This shift from being purely a gospel singer did not sit well with many of her old fans, who felt that by crossing over, Tharpe had abandoned them.

◙ (A) Tharpe’s first hit recording was “Rock Me,” sung with the Lucky Millinder jazz band. The song was based on an old spiritual, a type of religious song that had been sung by African-American slaves. ◙ (B) However, when Tharpe sang the song, she changed the lyric “Jesus hear me praying” to “Won’t you hear me praying,” making the song less overtly religious and more open to interpretation. ◙ (C) Many of Tharpe’s church fans eventually forgave her and came back. ◙ (D) By the age of twenty-five, Tharpe was gospel’s first genuine superstar.

Later on, Tharpe left the jazz scene and returned to gospel full time. She worked with multiple smaller groups. One of the groups she collaborated with was the Jordanaires, a white gospel and country quartet that also performed with Elvis. Through this collaboration, Tharpe was able to add country influences to her music.

So great was her popularity that in 1951 Tharpe was able to sell 25,000 tickets to her third wedding, filling the baseball stadium in Washington, D.C. Throughout the 1950s and 1960s Tharpe’s fame continued to grow, allowing her to record music and sing on the radio, culminating in a televised tour of Europe. However, due to health issues and a changing cultural landscape, Tharpe eventually dropped out of the spotlight. Even so, her legacy of mixing uniquely American musical genres and her expert guitar playing continued to influence rock-and-roll legends.


Cindy M. Sherman is a famous American artist and photographer. Born in 1954, Sherman began her career in Buffalo, NY, and emerged as one of the most celebrated modern artists of the late 20th century. She is best known for photographing herself in nearly all of her pictures, using herself as the focal point of her pieces. Many art experts and historians consider Sherman’s style to be a groundbreaking approach to photography. They also believe she helped redefine what constitutes a “self-portrait.” In addition to her contributions to photography as an artistic medium, Sherman helped to promote women in the arts, encouraging young female artists to pursue art professionally. Sherman became known for producing many controversial images, mostly centered on the portrayal of women, and she received both heavy criticism and praise for the provocative nature of her work.

During the 1970s, Cindy Sherman experimented with several forms of art and initially concentrated on oil painting. As a student at the State University College at Buffalo, NY, Sherman took courses in painting and composition but did not find the medium satisfying. As a result, she started to use her camera to take photos and found that this tool was the best way to express her ideas as an artist. After graduating from college in 1976, Sherman moved to New York City and lived in Manhattan. She founded a studio through which to sell her work and embarked upon a prolific career. Rather than hiring models to serve as the subjects of her pieces, Sherman took the revolutionary step of pointing the camera at herself. She took photos of herself wearing different wigs and costumes, depending on what type of image she wanted to portray. Perhaps her most memorable series of photographs from this period consists of images in which she emulates the looks of Hollywood actresses starring in low-budget “B” movies.

Cindy Sherman is best known for the work she presented to the art world during the 1980s. Sherman created very large prints for her exhibitions that showcased photos of herself as a centerfold, fashion icon, or figure from history. She completed photographic series that embodied her style, titled “Rear Screen Projections,” “Fairy Tales & Disasters,” and “Centerfolds.” Her work in her “Rear Screen Projections” series showcased her use of lighting to emphasize facial expressions. In “Fairy Tales & Disasters,” Sherman made herself look like characters from children’s stories, portraying acts of violence that many people considered quite grotesque and shocking. Sherman’s “Centerfolds” series of photographs explores modern female beauty by mimicking the style used by famous national magazines like Vogue and Cosmopolitan. In all of her work during this period, Sherman used herself as the primary subject of her photographs, with self-portraiture becoming her signature style.

Cindy Sherman’s impact on modern art has been lasting and strong. As a photographer, Sherman took the innovative approach of using herself as the primary subject of her work, which led to interesting discussions in the art world. The shocking and intentionally scandalous nature of many of her images added to Sherman’s impact and captured the attention of influential art critics and historians, who made her even more famous. Moreover, in the wake of Sherman’s unforgettable use of fantastical imagery and striking compositions to enhance her chosen genre, other artists began to take their own self-portraits, providing innovations of their own but still clearly inspired by Cindy Sherman’s example.

As a female artist, Sherman challenged societal limits on the type of artwork critics deemed acceptable from women. She also stretched the boundaries of how the female figure could be portrayed in photographs through her emphasis on fantastical, outrageous, and shocking imagery. ■ (A) Indeed, critics and art historians the world over think of Sherman as a trailblazer of modern art and one of the most important female artists of her time. ■ (B) They applaud how her originality, creativity, and imagination led her to produce some of the most memorable photographs in modern art. ■ (C) To this day, Cindy Sherman remains an active and contributing artist, though she has added film as another preferred medium. ■ (D) She still uses a camera quite extensively, however, to compose imaginative self-portraits that delight, shock, and educate her audience. From relatively simple beginnings as an oil painter very early in her career, Cindy Sherman has developed into one of the most prominent figures in the history of modern art.

It is sometimes suggested that large multinational companies have a moral obligation to “give back” to the communities that they serve in the form of charitable donations. Bottom-line profits regularly run into the billions of dollars even during periods of increasing and widespread unemployment; this seeming contradiction amplifies the need for the largest companies to take a more active role in contributing to the economic welfare of the people living in the markets in which they operate. Proponents of charitable giving argue that companies should reach out and create strong corporate social responsibility (CSR) programs as a matter of policy. CSR is a term that became popular in the 1960s and is still used today to describe the legal and moral code governing the types of activities in which companies engage.

CSR has evolved over the last twenty years into a formal business function inside many corporations. At least part of the reason for this phenomenon has to do with pragmatic considerations. People who believe that companies should give to charity often point to the intangible but very real practical benefits that result when a corporation takes actions that promote the greater good. Some of these intangible benefits include increased brand value, positive publicity, and general popularity. Believers in the importance of corporate charitable giving often go a step further, however. They insist that beyond mere pragmatics, a company is morally bound to act charitably because communities at large allow a company to operate and exist in the first place. Scarce community resources, including tax revenue and real estate, are allocated to the construction of roads and bridges that enable a company to make money. Arguably, companies benefit disproportionately from the use of these roads and bridges over time. Because of this inherently unequal relationship between a company and the community in which it operates, they argue, a corporation should prioritize charitable giving to show its commitment to the welfare of the people who allow it to flourish.

People who do not believe that a company should give to charity as official company policy typically point to purely economic arguments for why a company should not engage in “non-core” activities. They assert that from a purely capitalistic perspective, a company should only engage in activities through which it can make money; hence, for them, conducting a formal CSR program is economically inefficient. The return on investment from running a CSR department or division, these opponents would argue, is therefore negative. To counter this line of thinking, staunch proponents of CSR point out that, historically speaking, socially responsible companies—especially those that donate regularly to charity—typically generate greater value over time for their shareholders than those that do not.

One example of a socially responsible company that other firms sometimes emulate is the McDonald’s Corporation. With a global footprint and an instantly recognizable brand, McDonald’s serves millions of people the world over with fast food at relatively affordable prices. The company has contributed a material portion of its profits to numerous charities that direct that money back to the people being served. McDonald’s also runs a formal charity, The Ronald McDonald House, which provides close to $300 million per year in benefits to hospitalized children and needy families. McDonald’s clearly recognizes its moral obligation to the communities it serves and at the same time continually ranks as one of the most economically successful brands in the world.

CSR continues to be a contentious topic among economists, businessmen, and government leaders. The fine line between what a company should and should not do can move with subtle changes in public discourse, the political climate, and market swings. In an economy driven by bottom-line profits, opponents of charitable giving are often able to shape corporate policy, particularly during times of financial hardship. However, given companies’ reliance on customers in the community and the over-allocation of common resources that tends to benefit companies disproportionately, others continue to insist that corporations have a moral obligation to uphold strong and sustainable CSR programs regardless of the economic climate. ■ (A) Indeed, economic downturns that impact markets around the world every few years lead to rising unemployment and leave many communities stumbling from one financial crisis to another. ■ (B) At the same time, the largest corporations still post record profits totaling billions of dollars, demonstrating how unequal the relationship between a company and its community can be. ■ (C) For this reason, the possible moral obligation of companies to help the communities that allowed them to thrive in the first place becomes even more pronounced. ■ (D) According to its supporters, CSR at its core is about fairness, and companies should be treated no differently than anyone else when it comes to issues of justice.

The discovery of Pompeii’s ruins in 1599 profoundly affected the art world by eventually kindling significant interest in the classical Roman aesthetic, which became popular throughout Europe, particularly in France. Pompeii was an ancient Roman city that was famously destroyed and buried by the volcanic eruption of Mt. Vesuvius in 79 CE. According to researchers and historians, ash and pumice rained down on the city and residents of Pompeii for over six hours, blanketing city streets and homes with up to 25 meters of sediment. Temperatures in the city during the eruption reached 250 degrees Celsius (480 degrees Fahrenheit), and many residents died due to exposure to the extreme heat. With Pompeii effectively preserved under a literal mountain of volcanic ash, many everyday items were kept intact, including several of the city’s mural paintings. The rediscovery of these paintings in Pompeii provided audiences in Europe with a genuine glimpse into ancient Roman art. These artifacts inspired many artists in France, England, and other countries who idealized and romanticized ancient Rome, prompting them to produce the 18th-century art that would become known as Neoclassicism, an imitation of classic Roman art.

Art historians have categorized the discovered art of Pompeii into four distinct styles. The first style, which prevailed from 200 to 80 BCE, is characterized by the way large plaster walls were painted to look like colorful, elegant stones; it is known as the “structural” or “masonry” style. The second style, which dates from 100 BCE to the start of the Common Era, is characterized by “illusionist” imagery, with murals featuring three-dimensional images and landscapes seen through painted windows that conveyed a sense of depth. The third style, popular from 20-10 BCE, is known as the “ornate” style, and is characterized by two-dimensional, fantastical perspectives, rather than the realistic, three-dimensional vista-like views associated with the illusionist style. Murals painted in the ornate style focused less on realism and instead were created to depict whimsical scenes in highly structured arrangements. The fourth Pompeian style, which dates from 60-79 CE, combined the strict structures and complexity of the ornate style with the illusionist methods of the second style and the stonework of the first style; the fourth style was essentially a hybrid of its predecessors.

The art of Pompeii was first excavated in 1748 when archeologists began the painstaking work of identifying, removing, and collecting artistic artifacts from the ash and soil. As knowledge of the art of Pompeii spread across Europe in the 1760s, interest in Greco-Roman art increased and captured the imagination of a new generation of artists in countries like England, Germany, and France, prompting them to emulate a “classical” style. The art of Pompeii most notably influenced an artist in Paris named Jacques-Louis David (1748-1825), who would become one of the most successful and dominant artists of his time. David worked through the lens of Pompeii’s illusionist style, with a sense of depth and realism generated in a number of his more famous works, echoing the three-dimensional landscape views typified by Pompeian art’s second style. A number of works put forth by other painters in England, Germany, and France would also contain elements of the four styles of the art of Pompeii.

The influence of Jacques-Louis David on his contemporaries and future artists only expanded the popularity of Roman art and of Pompeii’s four artistic styles for most of the 1780s and 1790s. Neoclassical art proved to be wildly popular with art collectors and enthusiasts in Europe who commissioned more and more paintings from David and his contemporaries. David’s most famous piece, Oath of the Horatii (1784), contains elements from at least three of the four styles of Pompeian art. ■ (A) In this particular work, one can see the first style in the colored slabs of stone on the ground, the three-dimensional perspective of the second style in the dimmed space behind the arches in the background, and the realistic yet fantastical look of the fourth style in the hero figure in the middle of the painting. ■ (B) David serves as just one example of the 18th-century artists inspired by the classical Roman works exemplified in the four art styles of Pompeii; indeed, David would pass along his inspiration from Pompeian art to his students. ■ (C) English architect Robert Adam (1728-1792) would create stuccos with elements very similar to the first Pompeian style; he would become known as the leader of the revival of “classical” art in England. ■ (D) The extraction of the art of Pompeii took 32 years to complete, but once rediscovered and integrated into the work of 18th-century artists such as David, its impact proved to be significant and abiding.

There are approximately 6,500 languages spoken throughout the world. However, the majority of these are spoken by only a few people each. Papua New Guinea alone, which has fewer than four million citizens, is home to the speakers of an estimated 832 languages, meaning that each language is spoken by an average of only 4,500 people. Around the world there are approximately 2,000 languages that are each spoken by fewer than 1,000 people. The danger of language extinction is, therefore, imminent and widespread. In fact, over the last one hundred years, about four hundred languages have gone extinct. Many more are endangered, and linguists estimate that by the end of the twenty-first century, over half of the existing languages will be no more. Many linguists and anthropologists, finding the situation dire, have worked to develop ways of preserving the languages at the greatest risk before they are lost to history.

The most well-known example of a dead language is, perhaps, Latin. However, there is debate about whether Latin is truly dead, given that it evolved into several other languages and, technically speaking, is still learned and spoken by some, particularly scholars and clergy. The reasons Latin is still used are varied. Some Christian churches, for instance, still utilize Latin texts, with the Roman Catholic Church in particular having used Latin as a common language across multiple countries and cultures. Latin is also one of the languages of science and so is commonly used to christen new discoveries. But perhaps most importantly, Latin in its written form is studied so historians can read ancient texts to better understand Western cultural development from literary, scientific, and political perspectives. Thus, Latin may not be a living language, but it is perhaps the best documented of those that have died. Currently endangered languages with fewer interested parties may not be so well understood after their demise.

Culture and history are two of the most important things preserved along with a language. Each culture has a unique view of the world. When that view is learned by others, it expands humanity’s understanding of itself. When the last living native speaker of a language dies, we lose “literally hundreds of generations of traditional knowledge encoded in these ancestral tongues,” according to the Living Tongues Institute for Endangered Languages, one of the premier organizations working on language preservation. For example, the Native American Cherokee have many concepts that are expressed in their language that have no equivalents in other languages. “Oo-kah-huh-sdee” is a word used for the excitement felt when seeing a particularly cute human baby or baby animal. Likewise, they have no word similar to goodbye, only one conveying the meaning of seeing the other person later. ◙ (A) And beyond cultural concepts and ways of being, stories, histories, and scientific knowledge are all lost each time a language dies. ◙ (B) An Amazonian tribe’s language can contain centuries of passed-down knowledge about the plants and animals where they live. ◙ (C) That information dies when the language does. ◙ (D)

A number of organizations besides the Living Tongues Institute work to preserve endangered languages. The primary ones are the Endangered Languages Programme run by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and Google’s Endangered Languages Project. These two organizations provide complementary services. UNESCO contributes tools to monitor and assess trends in language diversity and delivers training, policy advice, and technical assistance, while the Endangered Languages Project, which Google formed in collaboration with a number of organizations and language groups interested in preservation, supplies a technological platform to enable language recording, communication between far-flung speakers of an endangered language, and language instruction.

These efforts are not in vain. There have been several cases of languages that were on the brink of extinction that have been revitalized. The Eastern Band of the Cherokee Nation had only four hundred speakers of their native language when Tom Belt, part of the Oklahoma Band of the nation, arrived on their land. A fluent speaker of Cherokee, he realized that this part of his nation would soon be deprived of its last native speakers. He began a project teaching children how to speak Cherokee. Eventually, a Cherokee language immersion program was implemented in the local schools, with core classes such as science and math taught in Cherokee. This practice continues today, and now Cherokee is even taught at the university level.

Wind has been used as a source of power for millennia. In the past, wind was used to assist in agricultural activities, and even today, some small communities continue to use wind power to pump water and grind grain. Wind was also harnessed by early civilizations to power their boats, which greatly accelerated the growth of human civilization by expanding trading opportunities. More recently, the application of wind power to energy generation has been touted as a potential “clean” alternative to other forms of energy generation. However, there is debate as to whether wind-based energy will become a viable alternative to current methods of generating energy.

The turbines that make up wind farms have a simple design relative to other forms of energy generation. These turbines convert the wind’s kinetic energy into mechanical energy using three blades attached to a rotor, which rotates the magnets of a generator to produce electricity. The resulting electricity can then be transmitted through cables to an electric grid that distributes the power to users. One reason for the appeal of wind farms is that after they are set up, the cost of running and maintaining them is minimal, owing to their simple design.
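For a rough sense of the scale involved, the kinetic power that the wind delivers to a rotor can be estimated with the standard wind-power formula; the formula and all of the numbers below (air density, blade length, wind speed, and capture efficiency) are textbook physics assumptions added for illustration, not figures from the passage:

```latex
% Power captured by a turbine: P = (1/2) * rho * A * v^3 * C_p, where rho is air density,
% A is the area swept by the blades, v is the wind speed, and C_p is the fraction of the
% wind's power the rotor actually captures (all values below are illustrative).
P = \tfrac{1}{2}\,\rho A v^{3} C_p
  \approx \tfrac{1}{2} \times 1.2 \times \pi (50)^{2} \times (10)^{3} \times 0.4
  \approx 1.9 \times 10^{6}\ \text{W} \approx 1.9\ \text{MW}.
```

The cubic dependence on wind speed is one reason siting matters so much: halving the wind speed cuts the available power by a factor of eight.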

Debates on the use of wind power can roughly be divided into “global” and “local” viewpoints. The global viewpoint is primarily related to the potential of wind-based energy generation on an international scale. For advocates, climate change is an important issue; they believe that wind power is one component of a holistic approach to battling climate change. Technological innovations, moreover, tend to see increases in efficiency and decreases in cost after their introduction and gradual adoption. Proponents expect a similar trend to apply to wind turbines and argue that this justifies the further large-scale development and promotion of the technology. Finally, advocates on the global scale emphasize that in comparison to the high cost of installation, the cost of running wind farms afterward is negligible.

However, arguments from the global viewpoint have also been made against the use of wind power. The strongest of these oppose adoption on economic grounds. Critics question whether having governments subsidize what is currently a relatively inefficient and more expensive alternative energy source is sound policy. They argue that this money could better be spent on expanding existing energy sources that boast higher efficiency and lower costs. Examples of such alternative energy sources include nuclear power, while examples of higher-efficiency, lower-cost traditional energy sources include coal and shale oil. In addition, they argue that in the interest of environmental conservation, there are other potential alternative energy sources that deserve equal attention, such as solar and geothermal power.

In contrast to the global viewpoint, the local viewpoint is focused on the wind turbines’ effect on the immediate surroundings. Here, supporters argue that the installation of wind turbines can benefit communities. For instance, municipalities in some parts of the United States have been able to receive stipends for allowing wind turbines to be built on their land. Some of these communities also benefit by paying a reduced cost for electricity.

Those who argue from the local viewpoint against the adoption of wind power usually focus on the potential unintended environmental consequences of wind farms and the social burden placed on rural communities. ■ (A) For instance, species of birds and bats have been negatively impacted by the installation of wind turbines, which can kill creatures that venture too near the turning blades. ■ (B) Recent arguments have also been made that wind farms can affect crop yields by changing local temperatures. ■ (C) If true, critics argue, this is likely to become a pressing issue as more wind power is adopted. ■ (D) Additionally, research has found that citizens living in communities where wind turbines have been installed complain about the intrusiveness of the turbines’ appearance. Those in these communities who rely on tourism could see their livelihoods affected as the large turbines change the local scenery. Finally, some have highlighted that residents in large cities benefit from the installation of wind farms while being insulated geographically from the downsides. Given the importance of the local community’s cooperation for large-scale construction of new technology, this resistance may prevent advocates of wind power from seeing their dreams realized in the near future.

Melanin and Its Uses

Human skin color is controlled by melanin, a pigment present in most animals with the possible exception of arachnids1. ◙ (A) It is found not just in the skin but also in hair and eyes, and it has a wide range of functions. ◙ (B) Perhaps the most curious place that scientists have discovered melanin is the brain, since it would not seem necessary to have pigment in a place that cannot be seen. ◙ (C) Only recently has research begun to elucidate some of melanin’s functions in that region of the body, separate from the better-established purposes of more visible pigment, of which there are two main types. ◙ (D)

Eumelanin, the most common type, is either black or brown in color and is primarily responsible for the various shades of skin. Individuals whose ancestry traces to regions near the equator generally have higher concentrations of eumelanin and consequently darker skin. Eumelanin is also responsible for hair color, with higher concentrations producing black and brown shades and smaller quantities resulting in blonde hair. Pheomelanin, which is slightly less abundant in nature, is present in all humans; it is reddish in color and responsible for red hair and freckles. Differing levels of eumelanin and pheomelanin are among the primary factors giving humans their myriad expressions of eye color.

Both eumelanin and pheomelanin act as protection against the broad spectrum of ultraviolet rays from the sun and are produced in greater quantities when skin is exposed to sunlight, as ultraviolet rays are a primary risk factor in certain forms of skin cancer. Melanin in the eye similarly seems to protect against eye cancer and vision loss, partially explaining the evolution of different eye colors in humans. However, direct sun exposure is important to human survival because the body needs ultraviolet B rays in order to produce vitamin D, a substance crucial to the absorption of calcium, iron, magnesium, zinc, and phosphates. Various skin tones may have evolved to strike a balance between vitamin D production and cancer protection. Where sunlight is more direct, more melanin would have been produced to protect against skin cancers, while the intense light still ensured adequate production of vitamin D. Further from the equator, however, light enters the atmosphere at an angle. Because ultraviolet rays are filtered out along this longer, oblique path through the atmosphere, exposure to ultraviolet B rays would have decreased, and the production of critical vitamin D would have been curbed as well. As a result, humans living further north and south evolved to produce lower amounts of melanin.

Meanwhile, the type of melanin primarily found in the brain is called neuromelanin; it is similarly dark in color to eumelanin but structurally distinct. Until fairly recently, scientists did not understand its function, and many thought it an inert substance. However, neuromelanin has since been linked to Parkinson’s disease, leading scientists to discover that it may provide some unique protective functions. Parkinson’s disease causes individuals to experience loss of motor control, a result of the death of neural cells in the substantia nigra, a region of the brain whose name, translated from Latin, means “black substance,” owing to its abundance of dark-colored neuromelanin. The discovery that patients with Parkinson’s disease have fifty percent less neuromelanin than individuals of similar age has led scientists to believe that neuromelanin plays a crucial role in the prevention of brain cell death. In fact, neuromelanin concentrations increase with age, which in turn correlates with brain cell degeneration and, therefore, an increased need for protection. Further studies have shown that neuromelanin may also be involved in removing toxic metals throughout the body.

All melanin is produced by cells called melanocytes. In the brain, these pigment factories produce melanin that then remains within them, a process similar to how melanin accumulates in the eyes. In the skin and hair, however, melanin is transferred to other cells. In the skin, it is passed to the primary skin cells as pigmented granules, which then cluster around the DNA of their new home to protect it from harmful ultraviolet light. All humans have, in general, the same proportion of melanocytes in the skin, but the amount of melanin produced varies, conferring the range of skin color.

_______________________

1Arachnid: A class of 8-legged animal including spiders and scorpions.

The Decline of the Maya

The Maya of the Yucatan Peninsula were one of history’s most advanced civilizations. Their cities and pyramids were architectural marvels, and their mathematics was advanced enough to develop the concept of zero centuries before it came into common usage, independently of other mathematically advanced cultures (such as the Sumerians). They were also consummate astronomers, having devised an accurate 365-day calendar, and, perhaps most importantly for historians, they developed a written language that was used to record their advancements. But for all that is known about the Maya, the precise cause of their demise remains a mystery.

The Preclassic period of Mayan civilization began around 1800 B.C., with some historians dating the era as far back as 2000 B.C., and lasted until about 250 A.D. During this time, the Mayans formed the basis of their culture through the development of agriculture, the formation of cities, and the construction of their first stone pyramids. While there is no conclusive evidence as to where the Mayan people came from, archeological findings suggest that in these early stages they borrowed, and later refined, their calendar, religion, and numbering system from the earlier Olmec civilization; their earliest pyramids were also most likely built upon Olmec precursors that had been fashioned out of mud. The first Mayan cities were established in the Preclassic period, around 750 B.C., with the lost city of Mirador believed to have been the largest. It is estimated to have held between 100,000 and 250,000 people, its size dwarfing even that of the largest Classic-period city, Tikal.

The Classic period, which historians date from around 250 A.D. until approximately 900 A.D., saw the establishment of various city-states. There were some 40 cities with populations ranging from 5,000 to 90,000 people each, connected by a complex trade system that formed the economic engine of Mayan society. Conservative estimates place Tikal’s population at 10,000 people in the main city and 50,000 in the outskirts, though estimates range as high as 90,000 people living within the city center at its height and another 200,000 in the greater metropolitan area.

However, between 800 and 950 A.D., Mayan civilization began a steep decline, and many cities were abandoned. Mayans began migrating further north, into present-day Mexico. The last remaining Mayan city was defeated in 1697 by the Spanish, who had only arrived in the Americas in 1492.

The most popular theory ascribes the decline to fighting from within. There is indeed some evidence of internecine warfare, but other possible catalysts exist as well. ◙ (A) Another theory holds that overdependence on just a few crops inevitably depleted the soil of its nutrients. ◙ (B) The effects of soil overuse would have been compounded by the fact that farmers had to strip the forest of trees to make room for their ever-growing fields of corn. ◙ (C) These unsustainable farming practices may have led to famine and conflict. ◙ (D) Overpopulation is another possible culprit, as Mayan cities were exceptionally dense, with some estimates as high as 2,600 people per square mile. In comparison, Los Angeles County in the year 2000 had a population density of only about 2,400 people per square mile. Given the limits of Mayan technology, such density would not have been sustainable for long periods. The most credible theory, though, has only recently been posited.

The Yucatan rain forests receive rainfall only once per year, and the Maya planned around this seasonality of water. Scientists have recently discovered, though, that the 9th century brought a sustained drought to the Yucatan, with the initial evidence coming, ironically, from the examination of tree rings in Sweden: severe cold spells in northern Europe happen to be closely correlated with drought in the Yucatan. Further evidence has been collected from mud deep beneath the Blue Hole of Belize, where core samples provide solid evidence of drought at the time. Some recent theorists have suggested that up to 90% of the Mayan population died as a result, with the remainder fleeing northward. Most historians, though, believe that while the drought was the precipitating factor, the other circumstances all played a hand in the collapse of the once great Maya.