It is personal when it happens to you. As much as we talk about changes in older age, they remain at a distance until they happen to you. Most of the time the loss of function happens fast, and we are unprepared. While we might recover from an initial loss, we soon have to face another, different one. Little pieces of you are taken away. And our mind does not deal well with these losses. You did not plan for it, and even if you thought of this eventuality, when it happens to you it is different. It is personal and real.
We have a model of the world in our brain. Within this perfect heaven there is our avatar, an image of us, who we think we are. As we get older and frailer—usually these come together—reality conflicts with the avatar we have built. This model is important to us. Most of the time the model of the world, and the avatar representing us within it, functions well. We go about our daily lives without needing to be aware of this model; we behave in automatic mode most of the time. Until something goes wrong and the avatar can no longer do what it is supposed to do. The mental narrative that we have taken so long to build up suddenly needs to be rearranged and remodeled.
In aging, not long after the first such redefinition of our model—perhaps we realize that we can no longer read small print without prescription glasses—comes another onslaught of loss. The constant change and attrition require us to repeatedly modify our model and our avatar. Aging is an existential danger to our model because it threatens how that model is supposed to function. Making these changes is difficult for everyone, since our model resists change; it has been a faithful portrayal of our reality for so long. The older we get, the more entrenched this model becomes. It is also doubly difficult in older age because there is so much variance among our peers. We delude ourselves that perhaps these attritions are only temporary and that we therefore do not need to change our avatar just yet. There is always a lag between how old we are in reality and how old we see ourselves—a subjective age bias. Of course we are biased to see ourselves as younger.
Many theories exist for why we underestimate our age: we overestimate our abilities, our looks, and how satisfied we are in life, and we align our personality, attitudes, behavior, and interests with those of a much younger person. Some theories also suggest that there is an internal bias toward being young. But these theories assume a conscious, if not willful, desire to stay young. Although all these theories have merit, there could be a simpler answer. There could be a lag, a time difference, between reality and how our model represents it. It takes time for us to reconcile our model with reality. The process is dynamic, and we are continuously fighting this change. This dynamic process has not gone unnoticed.
In psychology, by the 1950s Erik Erikson had developed the first personality theory that included older adults; before then, most theories stopped at young adulthood. Erikson’s eight stages of development come closest to explaining this constant fight we experience in older age. The final stage, likely written by his wife Joan Erikson, emerges late, after age 65. This stage contends that there is a fork in the road. At this fork, which Erikson called a “crisis,” we either go toward ego integrity or we go headlong into despair. As dramatic as this crisis seems, it is emerging that such depictions are very close to the experience of aging.
By ego integrity Erikson means that we come to accept who we are: that we have only this life to live, and that we need to resolve our issues in order to be comfortable with where we are. Although seemingly diametrically opposed (ego versus non-ego), Lawrence Kohlberg’s 1973 theory of moral development, later expanded to address older adults, included a stage of self-transcendence, a “...contemplative experience of the nonegoistic or nonindividual variety” (pp. 500-501). Ego integrity and non-ego seem to refer to the same concept: humility. The only salvation for older adults is becoming humble. John Cottingham in 2009 defined humility as “...a lack of anxious concern to insist on matters of status, a recognition that one is but one among many others, and that one’s gifts, if such they be, are not ultimately of one’s own making” (p. 153).
The alternative to humility is pride, when we are constantly fighting unresolved issues that continue to fester and create discord in our life. Later on, Joan Erikson formulated a ninth stage of very old age, starting in the eighties, when physical health begins to deteriorate and death becomes more real. She recognized that at this stage society similarly ups the ante: “aged individuals are often ostracized, neglected, and overlooked; elders are seen no longer as bearers of wisdom but as embodiments of shame” (p. 144). It seems that unless we subjugate ourselves to humility, the alternative is humiliation.
That is why it is personal. It’s not just about accepting aging; it’s that we have no choice. We either suck it up and become humble, or fight it and face certain humiliation. By sucking it up we acknowledge our mortality and therefore our impermanence—our humility. If we fight it, we rally our pride and confront these changes with a certain outcome: failure and humiliation. Science tends to support this view. Neal Krause and David Hayward at the University of Michigan wrote that when it comes to humility, the people who live the longest are the ones who accept where they are in life. By becoming less of ourselves (ego-less), nature rewards us with more of ourselves (long life).
Someone has a dark sense of humor, and I hope that I live long enough to learn to appreciate it.
In a 2014 Pew Research Center study, nine out of ten adults in the United States reported believing in God, and more than half were “absolutely certain” God exists. One in five Americans prays every day, attends religious services regularly, and considers religion to be very important in their lives. Although these proportions have declined since an earlier 2007 study, today religion still plays an important role in the lives of older people.
As adults get older they become more spiritual, and some become more religious. It is not only that religious or spiritual people tend to live longer (they do, for many reasons other than spirituality itself), but that older people become more spiritual and religious as they age.
There is a great attraction to arguing for a spiritual interpretation of aging. Two religious gerontologists, Jane Marie Thibault and Richard Lyon Morgan, did just that in 2012 when they made themselves their own subject matter and wrote a book about their aging experiences. In a self-described pilgrimage into their third age, they interpret aging through religion. While we are growing up, God shows us how much he loves us by making us healthy, giving us pleasure through our bodies and through nature, perhaps letting us experience the miracle of having children. As we age, it is time for us to show God how much we love him in return. God stops showing us how great he made us, and now it is our turn to reciprocate. In one example, by using “dedicated suffering,” we acknowledge our pain and dedicate it to the benefit of others. And it works: when people dedicate their suffering, they report a reduction in pain. This spiritual switch—as older adults we are now responsible for the expression of gratitude—has some surprising support in the scientific field.
The Swedish sociologist Lars Tornstam in 1989 developed a theory arguing that older age brings about spiritual growth. His Gerotranscendence Theory suggests that older individuals—perhaps because of ill health—tend to experience a redefinition of the self and of their relationships with others. By redefining ourselves we become more spiritually aware. More recently, in 2009, the American Pamela Reed, in developing her own Theory of Self-Transcendence, stated that individuals who face human vulnerability have an increased awareness of events greater than themselves. So is spirituality the answer to this increasing loss of control that we experience as we age?
Research tends to support this interpretation. In one review, the Portuguese researcher Lia Araújo and her colleagues report numerous studies showing that religion, spirituality, and personal meaning bring a broad range of mental and physical health benefits, greater satisfaction with life, and better coping with stress. In older age, existential issues—contemplating life and death—appear to gain increasing importance. There seems to be a growing preference for acquiring meaning from faith, and the greater the challenge, the greater the religious or spiritual meaning we gain from the experience. By gaining a positive meaning of life through purpose, religion, and spirituality, individuals also gain a higher level of life satisfaction. Regardless of physical health, developing a positive attitude toward life has positive outcomes. It is only when religion becomes an ineffective tool for explaining dramatic challenges that people start renouncing their religious convictions.
Christopher Ellison at the University of Texas at Austin and others have referred to this area of research as the “dark side of religion.” Doubt in our beliefs can have very negative consequences. Doubt erodes one of the major functions of religion, which is to provide an explanation for why we are aging—such religious explanations are generally referred to as theodicies.
But we are always looking for a reason, a model of the world that is just, logical, and predictable. Religion has that extra facet of immortality—life in the afterworld—a comfort to those who have to confront the imminence of death. Whether we get this view of the world from religion, from science, or from intellectualizing, the overarching observation is that we need to have such a view. Everyone has an opinion on things that matter to them. Some simply don't call it religion, but having an explanation comes with the territory of being human.
Aging is defined by time. Even though our bodies are in a constant process of change, some cells remain with us from conception. Our bodies have 37 trillion cells that are constantly duplicating, updating, maintaining, and replacing themselves. Each cell contributes to a specific organ in the body. Jonas Frisen, a stem cell biologist at the Karolinska Institute in Stockholm, developed a method for determining the age of each organ. Although some cells remain with us for the duration of our life—the neurons of the cerebral cortex, the cells of the inner lens of the eye, the muscle and valve cells of the heart—the rest of the body is in a constant frenzy of change and rejuvenation, so that with time we get to replace whole organs:
1. Intestines: replaced every 2-3 days.
2. Taste buds: replenished every ten days.
3. Skin and lungs: every 2-4 weeks.
4. Liver: every 5 months.
5. Nails: every 6-10 months.
6. Red blood cells: every four months—after travelling over 300 miles and passing through the heart 170,000 times, about 60 times per hour, they are given respite and renewed.
7. Hair: every 3-6 years, if the follicles have not died.
8. Bones: every 10 years. And lastly,
9. Heart: most of it, every 20 years.
Despite this newness, we measure our age by our chronology—how much measured time has elapsed. On average our body is only eleven years old. However, with each replication a slight imperfection results. We see these imperfections and assign them to the “aging” of our bodies. We resign ourselves to accepting our aging as an indication of our chronological time, but it isn’t. Physical aging is the accumulation of mistakes that happen. Yet we mesh the two together. Aging and time are glued together, and only when we look closer do we see that each is unique and separate.
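The idea of an “average body age” is essentially a weighted average of the turnover times listed above. A minimal sketch: the turnover times come from the list, but the cell-count weights below are made-up illustrative assumptions (real shares are dominated by red blood cells), so the exact number it prints is only indicative, not the eleven-year figure itself.

```python
# Back-of-the-envelope sketch: average tissue age as a weighted mean of
# turnover times. The shares are illustrative guesses, not measured values.

tissues = [
    # (name, turnover time in years, assumed share of body cells)
    ("red blood cells", 4 / 12, 0.60),
    ("skin and lungs", 3 / 52, 0.10),
    ("intestinal lining", 2.5 / 365, 0.05),
    ("liver", 5 / 12, 0.05),
    ("bones", 10.0, 0.05),
    ("heart (most of it)", 20.0, 0.05),
    ("lifelong cells (neurons, lens)", 40.0, 0.10),  # assumes a 40-year-old
]

average_age = sum(years * share for _, years, share in tissues)
print(f"rough average tissue age: {average_age:.1f} years")
```

Shifting the weights shifts the estimate, but under any plausible weighting the average comes out far younger than the person's chronological age, which is the point of the paragraph above.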
We have a story, a narrative arc, playing in the background of our life. Time is a special dimension, an unrelenting linear and absolute progression. Although time seems intuitive, we have great difficulty even conceptualizing what time is, let alone explaining it. We have ways of measuring the sequences and flow of events that we call time, but time itself remains elusive to explain.
A quick dive into quantum physics dispels any illusion that time is stable or linear. In quantum entanglement, for example, two electrons remain connected in synchrony no matter how far apart they are. The electrons remain attached in time but not in space. In this quantum universe, time doesn't exist at all. The double-slit experiment—where electrons interfere with each other after going through two slits, but only when they are not being recorded—seems to suggest that electrons can go back in time, or at best do not conform to our linear time. Whatever our linear time means. Einstein called time a “stubbornly persistent illusion.” He was wrong: time is our reality, one that fails to find evidence outside of our consciousness.
Time is something that we create for ourselves and we do this by measuring it. And we measure time with great relish. Other than external means of measuring time—an impressive and historical array of clocks and watches, celestial movement, temples and seasonal rituals—our mental representation of time is fundamentally linked to our body. Our internal time is determined by our own biological, neurological and emotional reality. Many theories attempt to explain how time emanates from our mind and our body. But the biggest contributor to our sense of time is our own sense of aging—time speeds up with age.
Our bodies are sophisticated watches—chronographs—that seem to run faster with age. The psychologist William James, at the turn of the 20th century, observed that years seem to pass more rapidly as we grow older. Many have attempted to prove this observation, with variable success. Then the French biophysicist Lecomte du Nouy in 1937 associated this phenomenon of racing time with the slowing of cellular activity in aging bodies. He connected time with our physiological processes. To this day, although there is much evidence supporting this theory, the relationship between our physiological processes and our estimates of time remains contested. Studies do not show clear-cut outcomes. We have not found all of the mechanisms that control our sense of time. But in our explorations, we are learning more about the variability of how we judge time.
For example, in 1958 Sanford Goldstone, William Boardman, and William Lhamon at Baylor University in Houston, Texas asked institutionalized older adults to count 30 seconds at a rate of one count per second. Older adults (average age 69 years) tended to report a shorter time interval than younger adults (average age 24 years). But the evidence goes back and forth. In 2005 Marc Wittman and Sandra Lehnhoff at Ludwig-Maximilian University Munich agreed that despite the widespread belief that the subjective speed of the passage of time increases with age, results are inconsistent. They support the widespread belief that the passage of time speeds up with age, although they point out that such incremental changes are subtle. And despite the stereotype, while older people see the passage of time speeding up, younger participants anticipated that time would pass more slowly when they got older. The authors also concede that other factors remain that conflict with a purely age-based interpretation of the speeding of time.
Older adults switch from "time lived since birth" to "time left to death.” One span (since birth) seems long, while the other (left to death) seems short and is getting shorter. Perhaps it is this sense of urgency, and our attempt to catch up with our legacy, that makes us see time as going too fast. In a 1961 experiment, Michael Wallach and Leonard Green at MIT found that both the type and quality of activity and the perceived time remaining make time speed up. This sense of urgency is what influences our impression of time accelerating. Our activity and our sense of urgency determine time. Older adults who are dying and fearing death feel more pressured by the passage of time. Similarly, those who are busy also see time passing faster. In contrast, Steve Baum at Sunnybrook Medical Center, Toronto and his colleagues report that time also moved more slowly for many institutionalized elders. People in institutions who engage in few daily activities see time as going by more slowly. Older adults report both extremes: some see time getting faster while others see it going slower.
This does not make sense—unless we remember the first principle of gerontology: heteroscedasticity. Older adults become more varied the older the group becomes.
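Heteroscedasticity is easy to see in simulation. The sketch below uses illustrative numbers, not real data: it generates scores whose average stays flat while the spread widens with age, which is exactly why a group average misleads us about older adults.

```python
import random

random.seed(42)

def simulated_scores(age, n=2000):
    # Assumed toy model: the mean stays at 100 at every age,
    # but the spread (standard deviation) grows with age.
    spread = 2 + age * 0.3
    return [random.gauss(100, spread) for _ in range(n)]

def stdev(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

for age in (25, 50, 75):
    print(f"age {age}: spread of scores ~ {stdev(simulated_scores(age)):.1f}")
```

Two groups with the same average can tell completely different stories once the spread is this different, which is the sense in which the 75-year-olds contain both the fastest and the slowest perceivers of time.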
We have older adults who are catatonic in nursing homes while others remain in the community, active, engaged, and at the peak of their capacity. Jacob Tuckman uncovered this fact in 1965 when he reported that although there is a slight increase in the cadence of time among older adults (60 and over), they were both the group that saw time pass most quickly and the group that saw it pass most slowly. Older adults were simply more aware of time and reacted to the perception of time in “both directions.”
And we know that time is flexible and malleable in our mind. The elaboration came when Richard Block replicated a study finding that time intervals filled with many events are experienced as longer than intervals filled with fewer events. In uneventful situations, such as in a typical nursing home where a period of time is not filled with distracting events, time seems to pass more slowly. For those adults who are engaged and active, there is never enough time to complete their activities, and therefore time goes by too fast. We might be measuring time on the basis of the events that happen. Our physiology not only dictates time; we also look to the environment to tell us how fast or slow to move time. The environment might provide a metronome: we look for events that happen in order to synchronize our internal clocks. A related phenomenon is known as the Kappa Effect.
We intuitively measure time by the space between events—in this case, blinking lights. The experiment is simple. Imagine a reference light that blinks for a split second; a few inches to its right a second light blinks; and twice as far again to the right a third light blinks. Even though the time lag between the second blink and the third is the same as between the first and the second, we always assume that the third blink is delayed, because it is further away. Our internal clock is sensitive to how objects appear in space. Events bunched together are seen as occupying a shorter period of time, while events that are spread out are seen as taking longer. But it is not just distance. Numerous factors influence our timing.
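The Kappa effect can be caricatured with a toy model. The linear form and the constant below are my own illustrative assumptions, not a fitted psychophysical law; the point is simply that the judged interval grows with the spatial gap even when the true interval is fixed.

```python
def judged_interval_ms(true_ms, distance_cm, k=8.0):
    # k is an illustrative sensitivity constant, not an empirical value:
    # each extra centimeter of separation adds k milliseconds of felt delay.
    return true_ms + k * distance_cm

# Same true 500 ms gap between blinks, but the farther pair feels longer.
near = judged_interval_ms(500, 5)    # lights a few inches apart
far = judged_interval_ms(500, 10)    # lights twice as far apart
print(near, far)
```

In the three-light setup above, the gap between the second and third lights is spatially twice the first gap, so under this model it is judged as the longer interval even though the clock says otherwise.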
These factors include the type of stimuli (visual, auditory, tactile); their intensity, size, or strength; their complexity and uniqueness, including background and contrast; and their speed and variance in speed—all influence whether we perceive time as slowing or speeding. Most importantly, we attach emotional meaning to events. In 2007 Sylvie Droit-Volet and Warren Meck reported how our sense of time is moderated by how we feel, so that time seems short when we are having fun and stretches when we are bored.
It could be that time does not get faster with age, but seems to because we have an urgency to do things before we die. We speed up time in order to make coherent sense of our urgency. We try to accomplish too many things despite perhaps not having the energy to accomplish them. And it is not our perception that slows down or speeds up, but our memory of it.
Similar to the experience of fear, where time seems to slow down, what changes is our memory, not our attention. David Eagleman at Baylor College of Medicine in Houston, Texas designed a clever experiment showing that fear does not actually increase how fast we are at noticing events, and thereby slow time. He found that what happens instead is that we form an improved memory that packs that time unit with many details and events. Knowing this, however, does not explain neurological conditions that result in time speeding up, as in the “zeitraffer” phenomenon, or the opposite experience, called “akinetopsia,” when motion slows or stops altogether.
The fact that time perception can reflect neurological problems indicates that something “mechanical” is happening in the brain. It seems that motion and time are related neurologically. This is not only how we think or memorize; it is how we are built. The only other place this happens is in cinema: a movie is controlled by the timing of projecting individual frames. Likewise, our brain records individual frames—many more than we are aware of, and perhaps with many different layers: emotional, visual, auditory—and then, like a film reel, plays them out for us on the basis of an internal time. The brain plays these memory frames at speeds that make the story coherent. So if more detail is needed, it slows the film down (fast time), and when the story is uneventful, the brain speeds it up (slow time). All of this is done in the visual cortex.
We are learning that time is a complex psychological phenomenon. It is not an illusion but a reality that sits at the center of our consciousness. With time there are variances in context (busy vs. bored), differences in individual experience (older vs. younger), and complexities in time itself (neurological vs. external measures). Understanding that we have memories that are snapshots (some of which remain in our subconscious) rather than a movie elevates time to the master conductor of our memories. Time orchestrates our memories. But this still does not explain why older adults are more prone to speeding time up.
Logarithmic Time
Aging is like a logarithm: the older we get, the smaller the percentage of our life that any given interval represents. It’s just mathematics. This was first estimated by Paul Janet (1823-1899), who proposed that the apparent length of an interval is proportional to the age of the observer. For a ten-year-old, a year adds 10% to her life, but only half that value (5%) for a twenty-year-old. For a 90-year-old, 10 years is a ninth of their life, while for a twenty-year-old, 10 years is half their life—hence the perceived shortness of time as we get older. James Kenney wrote an interesting blog on this function, estimating that time is perceived logarithmically, meaning that intervals feel shorter as we age. He referred to this function as Logtime. In estimating the length of a year, we compare it to our age. We see time proportionally, so that the older we are chronologically, the smaller the proportion a given time unit represents. We are predisposed to see time going faster, regardless of all other factors. This observation is further supported by an earlier understanding of time by the German physician Karl von Vierordt (1868). Vierordt‘s Law states that short events are perceived as longer than they are, and longer events as shorter: there is a convergence. This applies to historical events as well, where we estimate long-past events as more recent than they were, which gives the impression that time is speeding up. For older adults, events that happened thirty years ago seem more recent. And we do this to help our memory.
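Janet's proportional idea and Kenney's Logtime can both be written in a few lines. Under the logarithmic model, the subjective length of the span from age a to age b is proportional to ln(b/a), so every doubling of age feels equally long. This is a sketch of the model only, not a claim about any individual.

```python
import math

def janet_fraction(age_years):
    """One more year as a fraction of the life already lived (Janet's idea)."""
    return 1 / age_years

def felt_span(start_age, end_age):
    """Subjective duration under the Logtime model: ln(end / start)."""
    return math.log(end_age / start_age)

# A year is twice as "large" to a ten-year-old as to a twenty-year-old.
print(f"{janet_fraction(10):.0%} of life at 10, {janet_fraction(20):.0%} at 20")

# Under Logtime the spans 10->20, 20->40 and 40->80 all feel equally long.
print(felt_span(10, 20), felt_span(20, 40), felt_span(40, 80))
```

The second print makes the compression vivid: sixty calendar years (20 to 80) feel like only two of the “decades” of childhood, which is why the later chapters of life seem to rush past.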
Two to five seconds seems to be the window in which we are truly present, and within this short period we keep fairly accurate time. Memory and anticipation form the majority of our awareness. It helps, therefore, to have a retrievable memory that assigns more importance to recent events (which are more likely to be pertinent) and bunches experiences into more manageable time limits.
Again, Steve Baum and his colleagues report that among 296 institutionalized and community-dwelling elderly (average age 75.4 years), faster time perception was associated with being healthier—less clinical depression, an enhanced sense of purpose and control, and a “younger” perceived age—while the opposite held true for older adults who were more frail and saw themselves as “older”: for them, time went more slowly.
If time orchestrates our memory, dictating the speed and therefore the length of our life’s story, then it determines, or at least indicates, our expected life span. Logtime dictates that this period of perceived remaining time is experienced as shorter the older we get. That is the mathematical basis of our perceived shortening of time. If our Logtime is determined by how much time we believe we have remaining, then the healthier we are, the more we want to accomplish, and the faster time seems to pass. The more things we want to accomplish, the greater the urgency, and therefore the shorter we feel our remaining time to be. Time is faster.
We dictate the speed of time through our urgency and our age. In return, our time metronome selects memories to make the story—our narrative arc—coherent. The counter-intuitive prediction is that the faster you think time is going, the longer you are likely to live. How we see time is an indication of our life story. We might be accessing cues from both our body and the environment that tell us when the final curtain is likely to fall.
Envy is a repulsive feeling. Not only do you wish for something that someone else has, you sometimes wish them ill so that they no longer have it. Richard Smith succinctly defined envy as a Jekyll-and-Hyde emotion: either benign—I wish I had what you have—or malicious—I wish you did not have what you have. As such, envy is a complex emotion. The benign state of envy is related to inferiority, resentment, and admiration, while its malicious component is related to feelings of hostility, injustice, and ill will. Not surprisingly, envy also correlates with depression, unhappiness, and low self-esteem. With such complexity, perhaps envy harbors other clues to our emotional life.
Everyone suffers envy, but it is a special experience for older adults. With mounting frailty, older adults find it increasingly difficult to look up to others who seem untouched by the daily onslaught of attrition. Why, we ask, do we have to accept “aging” while others seem to be immune?
This comparison for older adults is primarily physical. We can see physical changes in ourselves, and we can observe others. Susan Fiske asks why we compare ourselves to others. In fact, we never stop. Our mind develops a model of the world by seeking out information from others. By the time we become adults, we have a good, stable, and predictive model…then comes aging. We find that we need to change our model quickly as we experience increasing frailty. Each incident in our aging journey is an opportunity for our brain to modify its model to accept the new reality.
Coming across peers who seem undeterred by physical frailties, we start questioning our model. There is a difficulty in accepting this new concept of “old age” when others seem to have escaped it. Is it just me? The difficulty we experience in reconciling our personal model with the general model of aging is the expression of envy. Experiencing this inequity creates dissonance—it does not fit well with our view of the world—and promotes one conclusion: if not all older adults are like me, then I must be responsible for my condition. And we envy those who make us come to this conclusion.
No wonder, then, that when some disaster befalls the person we envy, we feel a taboo pleasure. Our models of aging—personal vs. general—are back in harmony. Schadenfreude—joy in another’s misfortune—is one idiosyncratic expression of envy. There seems to be a desire to balance out the world: the world is seen as just if an envied person is brought down by some calamity. Our general experience of aging is in line with our personal model of aging—we are all suffering.
Older adults are more prone to envy because we become increasingly diverse as we age. Economists call this spread of points “heteroscedasticity.” Whenever we compare younger against older adults, we need to be aware of this variance. Unfortunately, in our day-to-day world this variance creates a context in which it is difficult to accept aging when we lack conformity. You have the 70-year-old running marathons and the 70-year-old in a nursing home. Envy reflects this dissonance: a complex emotion that reveals our difficulty in reconciling our predictive model of aging.
Pila, E., Brunet, J., Crocker, P. R., Kowalski, K. C., & Sabiston, C. M. (2016). Intrapersonal characteristics of body-related guilt, shame, pride, and envy in Canadian adults. Body Image, 16, 100-106.
Fiske, S. T. (2010). Envy up, scorn down: How comparison divides us. American Psychologist, 65(8), 698.
Smith, R. H. (Ed.). (2008). Envy: Theory and research. New York, NY: Oxford University Press.
A cursory literature search turns up 16 different papers with exactly the same title: “What is Culture?”. Much more has been written about culture in general. Culture attracts such interest for obvious reasons: the concept seems to determine how we humans behave. Yet we are still not sure what “culture” means. The level of confusion led Merriam-Webster to announce in 2014 that “culture” was its Word of the Year—everyone was querying the meaning of the concept. For a concept so important and so intuitive, it eludes concrete definition. Culture seems to have different meanings, covering the broad range of social influences we are trying to describe.
As early as 1952, while attempting to define culture, the American anthropologists Alfred Kroeber and Clyde Kluckhohn ended up with 164 different definitions. That was then. Nowadays everyone seems to enjoy the liberty of coining their own unique meaning of culture, and it would be a daunting task to catalogue all the different definitions today. Most are unique, while others are amnesiac plagiarism.
Some of the differences in definitions emerge from the different uses of the word. Providing a historical perspective, Kevin Avruch came up with three basic classes of definitions.
1. There is the culture that defines the ambitions of mankind, “high culture,” in contrast to “popular culture”—popular culture being seen as a failed and inferior culture that emerges from the people, as opposed to high culture, a set of shared behaviors dictated by historical protocol. This has its roots in Matthew Arnold’s Culture and Anarchy (1867). Having contrasting cultures inevitably leads to conflict, as captured in Antonio Gramsci’s concept of hegemony between dominant and subordinate cultures. Hegemony refers to how one set of cultural rules is imposed on and accepted by another group, usually to that group’s detriment. This gave rise to the concept of “sub”-culture, as defined early on by the Chicago School, which interpreted subcultures as forms of deviance and delinquency. And this leads back to the earlier interpretation of society by Emile Durkheim, the French sociologist who in the late 1800s described how different parts of a society have different functions—cultures—but argued that society was more than the sum of its parts.
2. In this same vein of thought, culture can also be seen as a gauge of how civilized a community is along a continuum—one measure of civilization. The American anthropologist Lewis Henry Morgan’s influential scheme provided evidence for monogenesis, the theory that all human beings descend from a common source—as opposed to polygenism, with multiple and equally valid lines of development. In monogenesis, cultures evolve along one criterion only: from “savagery” through “barbarism” to “civilization.” Such a simplistic scheme was, of course, very popular. Similarly, Edward Tylor in Primitive Culture (1871) referred to a quality possessed by all people in all social groups, who nevertheless could be arrayed on a developmental and evolutionary continuum that assumes humankind is heading toward an ultimately sophisticated culture—a linear progression with Western culture at the pinnacle.
3. The third use of culture reacts to this monogenesis and is best exemplified by Franz Boas. Influenced by the eighteenth-century writings of Johann von Herder, Boas emphasized the uniqueness of the many and varied cultures of different peoples or societies. Boas also interjected relativism—we can only judge another culture from our own. Becoming a champion of what would later be called post-modernism, which argues for the relativism of how we perceive everything, Boas undermined the idea of a linear definition of culture. We are not heading toward an ultimate goal of the "best" culture. Since cultures emerge from the uniqueness of their environment, one cannot differentiate high from low culture. Moralizing about cultures—from “savagery” through “barbarism” to “civilization”—remains only one perspective from “our” culture and not an inherent feature of cultures.
Thomas Stearns (T.S.) Eliot is primarily known for his poetry, but he devoted a significant amount of time to defining culture. As early as 1948 he mused that culture is attached to religion and, as a superorganic concept, evolves naturally from a community.
Others have attempted similar categorizations of the use of culture. In 1976, the critic Raymond Williams reported that "Culture is one of the two or three most complicated words in the English language." His definition of culture in Keywords is similarly based on three uses of the word: educating oneself, becoming “cultured”; culture as a group’s shared way of living; and culture as an activity, as in doing something cultural. Again, the diversity of definitions is primarily based on the utility of the word. How the word culture is applied determines its definition.
All of these uses of the term culture refer to a common theme: culture is a way of living. There are certain values and traditions that determine religion, beliefs, shared ideas, habits, attitudes, expectations, norms, art, law, morals and customs, all passed along from one generation to the next. Culture can be as broad as a language, and as specific as a dialect or an inside joke. As a result, culture remains intricately tied to our place of residence. All humans express multiple cultures.
Sometimes the place we reside and the community we share are distinct enough for a sub-culture to emerge. Culture helps community members avoid misunderstanding and minimizes conflicts within that particular community. Conflict occurs when expectations are not met, as when people move into the community from outside and are not aware of the expectations dictated by the culture.
Culture is learned in a social setting. It is not inherited; it derives from one’s social environment. The enigma with culture emerged when researchers attempted to segregate it as a distinct body of expectations that we can adopt. Acculturation takes a long time, if it can ever be fully complete. Only after acculturation can there be an understanding and acceptance of a different culture. Accepting and following a dominant culture allows for smoother social engagement. This is why acculturation is associated with numerous measures of psychological and physical wellbeing (e.g. increased life expectancy), but we also inherit the negative aspects of the culture we adopt (e.g. obesity in the US).
But culture cannot be distinguished from human nature or an individual’s personality. Culture is what makes us human, and a particular kind of human. Feral children found living wild have all the mechanics of being human but none of the essence. Andrei Mihai reports that such children never fully integrate into society and have difficulty with basic language and civic protocol. Culture and its socialization are what make us quintessentially human, including our morals, language and aspirations.
In a culture that honors individuality, our personality will reflect that (e.g. on a continuum of extrovert vs. introvert). In a culture that honors the common good such a continuum does not make sense and instead people lie along a different continuum (e.g. collectivism vs. individualism). In personality research based on the five dimensions (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism), cross-cultural variations exist. Our culture can and does determine our personality. Human nature is not purely biological but social.
How we behave is acquired through learning and interacting with other members of our culture. Even what we eat, how often, how much and with whom is dictated by a set of rules enshrined in our culture. One extreme example is cannibalism. When Marvin Harris wrote Cannibals and Kings in 1977 we had a view that culture was somehow independent of the environment. Harris made that connection with food. For example, the development of pork as a taboo food in ancient Egypt comes from the fact that pigs are poor grazers, destroy plants and compete with humans for grain. Cattle, sheep and many other domesticated animals, by contrast, consume grass without digging up the roots, and they also provide milk, transport, and labor. Harris notes that pigs were taboo first in ancient Egypt, then among the Israelites, and pork continues to be forbidden in Islam. The culture that dictates what food to eat emerges from environmental considerations; making a food taboo enables a community to maximize its food production. Culture therefore resides as a bridge between environmental pressures and personal preferences. Culture also influences aspects of our behavior other than food. The emerging understanding is that culture allows a way of moderating environmental demands and community needs across time: a historic template for future generations on how to behave. Culture succinctly encapsulates a protocol, a set of rules, transmitted to future generations in order to increase their chances of survival. These protocols are not suggestions; they dictate behavior. They are socially formed, socially acquiesced to, and socially transmitted.
Gary Ferraro in 1998 exposed the different levels of culture: national, regional, gender, generational, role, social class, employment, ethnicity and many other spheres. Then there are the cultures among families, tribes or clans; those cultures distinguished by language, ethnicity, or religion; by social classes; by political interest groups; and by elected membership (clubs). No person has a single culture. Cultures are the flippers in a pinball machine, paddles (norms) that direct the ball (behavior) into a desired place (conventional behavior). The volume and depth of these different cultures makes the concept unwieldy. The conclusion is that no two individuals share the same cultures. Such insight necessitates that instead of addressing cultures as distinct, we need to see all these different cultures as sharing a common heritage.
Let's assume that our distinction between an “I” and anything else outside of me is contrived. Yet a natural force pushes me to think in terms of “I.” And even when I try to identify “me,” I need a social context.
In 1982, John Turner argued that “individuals define themselves in terms of their social group memberships and that group-defined self-perception produces psychologically distinctive effects in social behavior.” This socialization is what makes us distinct. If I need the social context to define “me,” then culture—being the social arena where we define our norms of behavior—must be an integral aspect of who I am. My culture is both deterministic—it controls what I do—and an expression of who I am within a given environment.
Such analysis is not new. As early as the 1950s, Harry Stack Sullivan argued that “…human beings are human animals that have been filled with culture—socialized…” (p. 323), arguing that culture is how we define ourselves as individuals. We seem to have a dual aspect of ourselves: both a social aspect and a personal self hold together my sense of self. Émile Durkheim proposed that humans are “homo duplex,” with one existence rooted in biology and one in the social world of our culture. What is surprising is that our biology is also designed to integrate our social environment: there are specialized areas in our brain that “mirror” our environment.
In the 1980s, the Italian neurophysiologist Giacomo Rizzolatti and his colleagues at the University of Parma first observed mirror neurons in monkeys. Although mirror neurons exist in most animals, in humans as much as 10 percent of neural cells are devoted to mirroring. A mirror neuron fires both when a person acts and when that person observes the same action performed by another. Such mirror neurons respond directly to what is observed outside: our brain responds to and mimics the activation behind another person’s behavior and activity. Culture is automatically transferred through our brain.
The accumulating evidence suggests that the body is a meeting place of interaction, a venue where the outside world—the geography, the community and significant others—interacts with the idea of self. Culture is how we explain this interaction—social influence—to ourselves.
Psychologists have long known this, especially developmental psychologists looking at how children develop and learn.
The Russian Lev Semyonovich Vygotsky (1896-1934) founded cultural-historical psychology. He believed that children learn through play and through interacting with their environment. At the time there were three theories of how we learn: Constructivism, Behaviorism and Gestaltism.
Constructivism: We need to mature first to be able to learn; development always precedes learning. Championed by Jean Piaget, who referred to this as genetic epistemology, the theory proposes that we cannot learn unless we are developmentally ready to learn.
Behaviorism: Learning and development go hand in hand and occur simultaneously; learning is development.
Gestaltism: A symbiotic relationship between learning and development, where development influences learning and learning promotes development.
Vygotsky argued the opposite of Piaget’s genetic epistemology: learning precedes development. In this sense he is closer to a Behaviorist. He argued that "We do not learn because we develop, we develop because we learn." Vygotsky's "Zone of proximal development" (ZPD) describes the interaction that a child has with their culture. By interacting with their culture in the ZPD, a child learns skills that go beyond the child’s actual developmental or maturational level.
Learning in the ZPD is accomplished through both informal conversations and formal schooling. Adults pass on to children the ways to interpret the world. As children and adults interact with each other—a process later defined as scaffolding, supporting children while they learn and then removing the scaffold so they exercise the skills by themselves—adults share meanings about objects, events and human experiences. Adults are able to mediate and transmit meanings through language, math, art, music, and behavior (e.g. religion).
In The Ecology of Human Development, another Russian-born developmental psychologist, Urie Bronfenbrenner, transformed Vygotsky's views on culture into views based on the environment. Whereas Vygotsky's ZPD sphere is cultural, Bronfenbrenner calls his spheres environmental and extends their influence: his ecological model expands Vygotsky's single zone into four spheres of influence on the child's development, taking in the child's global environment. Bronfenbrenner was the co-founder of the Head Start program, a social program based on this ecological model that provides comprehensive early childhood education, health, nutrition, and parent involvement services to low-income children and their families.
These range from the microsystem, which defines the family and school; through the mesosystem, which describes the interaction of the family with social structures; to the exosystem, which involves interaction with less frequent visitors such as relatives, friends, parents' work colleagues, religious leaders, and neighbors; and lastly the macrosystem, which defines the broader culture of economy, customs and bodies of knowledge.
Bronfenbrenner argues that: “No society can long sustain itself unless its members have learned the sensitivities, motivations, and skills involved in assisting and caring for other human beings.” (p. 53) Both Vygotsky's and Bronfenbrenner's theories talk about spheres of influence that are equally important. Culture in this context both defines us and determines how and what we learn. We, in turn, pass on this body of knowledge, this culture, to younger cohorts. There is a symbiotic relationship, and the central theme that makes culture humanistic is a caring curriculum.
Accepting that there is not just a "me" inside us but also a "we" gives a more concise understanding of how culture determines behavior and outcomes. My individuality is no longer solely about me but about my culture. Émile Durkheim argued that there will be conflict between the biological and the cultural aspects of the homo duplex: the cultural aspect of “me” will conflict with my own impression of “self.”
This is all esoteric stuff, but it leads to some very practical conclusions. We can never know the culture of another person. The culture of a group of people is tied to a time and a place; we can never know that culture unless we have also experienced it ourselves. Becoming culturally attuned remains elusive. A more radical awareness is that we learn through "scaffolding," a network of cultural interaction that might no longer be evident in the present day.
Studying culture as a psychological feature might result in a better understanding of ourselves as a product of our environment. Sometimes culture is a visible expression of that relationship, but mostly it is hidden, a historical process that cannot easily be traced back.
Adler, N. (1997) International Dimensions of Organizational Behavior. 3rd ed. Ohio: South-Western College Publishing.
Apte, M. (1994) Language in sociocultural context. In: R. E. Asher (Ed.), The Encyclopedia of Language and Linguistics. Vol.4 (pp. 2000-2010). Oxford: Pergamon Press.
Avruch, K. (1998) Culture and Conflict Resolution. Washington DC: United States Institute of Peace Press.
Baumeister, R. F. (1998). The self. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed., pp. 680-740). New York: McGraw-Hill.
Bell M.G. (2010). Agent Human: Consciousness At The Service Of The Group. Kindle edition.
Berg, R. (1996). "The indigenous gastrointestinal microflora". Trends in Microbiology 4 (11): 430–5. doi:10.1016/0966-842X(96)10057-3. PMID 8950812.
Bianconi, E., Piovesan, A., Facchin, F., Beraudi, A., Casadei, R., Frabetti, F., ... & Canaider, S. (2013). An estimation of the number of cells in the human body. Annals of human biology, 40(6), 463-471.
Bronfenbrenner, U. (2009). The ecology of human development. Harvard University Press.
Eap, S., DeGarmo, D. S., Kawakami, A., Hara, S. N., Hall, G. C., & Teten, A. L. (2008). Culture and personality among European American and Asian American men. Journal of Cross-Cultural Psychology, 39(5), 630-643.
Ferraro, G. (1998) The Cultural Dimension of International Business. 3rd Edition. New Jersey: Prentice Hall.
Hofstede, G. (1991/1994) Cultures and Organizations: Software of the Mind. London: HarperCollinsBusiness.
Hofstede, G. (2001) Culture's Consequences. Comparing Values, Behaviors, Institutions, and Organizations across Nations. 2nd ed. London: Sage.
Lustig, M. W., & Koester, J. (1999) Intercultural Competence. Interpersonal Communication across Cultures. 3rd ed. New York: Longman.
Matsumoto, D. (1996) Culture and Psychology. Pacific Grove, CA: Brooks/Cole.
Saville-Troike, M. (1997) The ethnographic analysis of communicative events. In: N. Coupland and A. Jaworski (eds) Sociolinguistics. A Reader and Coursebook. pp.126–144. Basingstoke: Macmillan.
Schein, E. (1984) Coming to a new awareness of organizational culture. Sloan Management Review 25(2): 3–16.
Schein, E. (1990) Organizational culture. American Psychologist 45(2): 109–119.
Smith, P. B., & Bond, M. H. (1998) Social Psychology across Cultures. London: Prentice Hall Europe.
Spencer-Oatey, H. (2008) Culturally Speaking. Culture, Communication and Politeness Theory. 2nd edition. London: Continuum.
Triandis, H. C. (1994) Culture and Social Behavior. New York: McGraw Hill.
Trompenaars, F., & Hampden-Turner, C. (1997) Riding the Waves of Culture. Understanding Cultural Diversity in Business. 2nd ed. London: Nicholas Brealey.
Vygotsky, L. S. (1980). Mind in society: The development of higher psychological processes. Harvard University Press.
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of child psychology and psychiatry, 17(2), 89-100.
Žegarac, V. (2007). A cognitive pragmatic perspective on communication and culture. In H. Kotthoff & H. Spencer-Oatey (Eds.), Handbook of Intercultural Communication. Berlin: Walter de Gruyter, 31–53.
On 10 April 1901, Duncan MacDougall, together with four other physicians, was waiting for six people to die. In a hospital in Dorchester, Massachusetts, each patient's entire bed was placed on an industrial-sized Fairbanks scale that was sensitive to within two tenths of an ounce (5.6 grams). After a few hours of waiting, the patients died and something strange happened.
As soon as they died, the scales dropped: they lost weight. The conclusion was that a human soul had left the body, registering a loss of 21 grams, the weight of a mouse. Repeating the experiment with dogs resulted in no loss of weight, indicating to MacDougall that dogs have no soul to lose.
Since the soul was material, Duncan MacDougall reasoned that we should be able to measure it. Four years later the New York Times reported in a front-page story that MacDougall tried to take X-rays of the soul escaping the body at the moment of death. Then MacDougall died in 1920, at the young age of 54, leaving behind many questions and many charlatans to capitalize on his scientific legacy.
Following the publication of these experiments—both in the popular media as well as in academic journals—his colleague, the physician Augustus Clarke, criticized the experiments. Clarke argued that the loss of 21 grams could be accounted for by expiration. He noted that at the time of death, as the lungs no longer cool the blood, there is a sudden rise in body temperature, causing a subsequent rise in evaporative sweating. Since dogs do not have sweat glands, and therefore cannot lose weight in this manner, Clarke argued that the experiments were flawed. There was evidence to suggest that MacDougall knew of this alternate interpretation of his experiments beforehand.
Measuring is at the heart of the scientific method. The medical historian Mirko Dražen Grmek wrote about the scientist Santorio Santorio (1561-1636), who diligently weighed and measured everything. In particular, Santorio weighed all the food and drink that he ingested. He also measured all that came out the other end—feces and urine. For every eight pounds consumed, Santorio found that he only excreted three pounds. Five pounds of food and drink could not be accounted for; after allowing for his own change in weight, the remaining loss had to be due to something else.
It was not until 1862 that the famous hygienist Max von Pettenkofer constructed an insulated room designed to measure the exact amount of evaporative sweat and heat the body generated. A hygienist promoting good sewage and a public-health approach to health, von Pettenkofer designed a machine—a respiration calorimeter—for measuring the heat given off by the body's chemical reactions and physical changes in a person at rest, standing and walking. He measured the weight of this metabolic energy use.
All the evidence was already there to suggest that our metabolism—the energy expended in maintaining bodily functions—generates evaporative loss of weight. And MacDougall knew this. In his original paper he reports that: “He [dying patient] lost weight slowly at the rate of one ounce per hour due to evaporation of moisture in respiration and evaporation of sweat.” But he also addressed this loss as an explanation for the loss of weight when the patients died: “This loss of weight could not be due to evaporation of respiratory moisture and sweat, because…this loss was sudden and large…” To MacDougall, it was undeniable that something else was taking place.
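MacDougall's reasoning here can be checked with rough arithmetic. The sketch below uses the two figures from his 1907 paper (an evaporation rate of one ounce per hour and a 21-gram drop); the few-second window for the "sudden" loss is an assumption for illustration:

```python
# Back-of-envelope check of MacDougall's own argument. The 1 oz/hour
# evaporation rate and the 21 g drop are from his 1907 paper; the
# ~5-second window for the "sudden" drop is an assumed value.

OUNCE_G = 28.35                       # grams per avoirdupois ounce
evap_rate_g_per_s = OUNCE_G / 3600    # ~0.0079 g/s of evaporative loss

window_s = 5                          # assume the sudden drop spans ~5 s
evap_loss = evap_rate_g_per_s * window_s

print(f"evaporation over {window_s}s: {evap_loss:.3f} g vs observed 21 g")
```

Evaporation accounts for well under a tenth of a gram in such a window, several hundred times less than the observed drop, which is why MacDougall insisted the 21 grams needed another explanation.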
True science can only be conducted through experimentation. MacDougall’s theory—that there had to be “continuity” in life after death, a soul—was the incentive for his experimentation. The theory assumes that we know when people die. As strange as that might seem, there is no easy definition.
Our definition of death is a legal rather than a biological one. In medicine it is a prognosis—predicting—rather than a diagnosis—confirming. Having no brain or heart activity indicates that the patient is unlikely to come back alive, but it is by no means indicative that the body's tissues are dead. Organs can still be harvested while the patient is legally dead. The legal definition of death protects surgeons from liability when they harvest organs for transplantation.
In 1968—a year after the South African surgeon Christiaan Barnard performed the world's first human heart transplant—Stanford University surgeon Norman Shumway performed the first USA heart transplant from a brain-dead donor. These were nearly identical surgical procedures, except that whereas Barnard’s surgery was received with adulation, in the United States Shumway nearly ended up being prosecuted for conducting the operation. John Hauser, the Santa Clara County coroner, met Shumway with a threat of prosecution. The alleged infringement was that the donor had not had an autopsy performed to confirm that he was dead, since performing an autopsy would have ruined the organs for transplantation. Surgeons were being accused of being killers. As a result of this threat of prosecution, organ donations stopped or slowed dramatically. It was like an old Perry Mason TV episode, with the prosecutor standing in front of the jury, pointing a right index finger at the transplant surgeon while declaring, “Ladies and gentlemen of the jury, there is your killer. That surgeon killed my patient.”
If we are to use the Pope's language, that death needs to involve “decomposition,” “disintegration,” and “separation,” then it will truly stop all organ transplantation. Without the legal criterion of brain death, where the organs remain viable, there will be a dramatic deterioration in the quality of organs that can be harvested and transplanted. According to the World Health Organization, in 2014 some 120,000 solid organs were transplanted—more than 80,000 kidney, 26,000 liver and 6,500 heart transplants in 93 countries. After Austria, the United States has the highest per capita rate of transplants. Organ transplantation extends lives for a significant number of people. But we cannot escape the fact that this is made possible by a legal definition of death and not a biological one. If organs are truly dead, they cannot be harvested and brought back to life again. However, the reliance on a legal definition of death hinders a more scientific study of the biology of death. It is surprising to find how little we know about death.
The British researcher Sam Parnia argues that many people who can be classified as legally dead from heart attacks or blood loss could be resuscitated up to 24 hours after they "die". Parnia has been studying those who have no heartbeat and no detectable brain activity for periods of time. While in this state the "dead" patients are read names of cities, and when—sometimes if—they recover, the patients are asked to ‘randomly’ name cities. Parnia's team found that patients are more likely to choose the same cities that they were exposed to while unconscious—legally dead. It seems that when we are dead we are still aware, although not conscious.
As with the MacDougall experiments, these studies suffer from small samples. But such problems can eventually be overcome with better research design.
Weighing the soul might be complicated if we do not know when we actually die and the soul departs. There is increasing interest both in defining death and in capturing the process. But evidence is scant and the methods used to examine death leave room for many errors and misinterpretations. Many unsubstantiated reports of souls departing the body exist—by Konstantin Korotkov, Eugenyus Kugis, Vitaliy Khromov and others—that purport to replicate MacDougall’s findings, including photographic evidence. But none are published in scientific journals.
We have a great interest in “proving” things. The problem with science is that it is necessarily finicky with details, and the problem with belief is that it is necessarily not. Science is just a method, without an answer. We are always refining the answer and the answer can never be completely correct. Belief, on the other hand, is an answer without a method. It is always correct because we cannot test it and improve upon the answer.
Whenever we mix the two together—science and belief—both sides get muddled. But this space is where real science resides. In that uncomfortable area where we do not know what the outcome might be. Within this muddled space, soul searching might attain a new meaning.
Grmek, M. D. (1952). Santorio Santorio i njegovi aparati i instrumenti. Jugoslavenska akademija znanosti i umjetnosti.
Kuriyama, S. (2008). The forgotten fear of excrement. Journal of Medieval and Early Modern Studies, 38(3), 413-442.
MacDougall, D. (1907). Hypothesis concerning soul substance together with experimental evidence of the existence of such substance. American Medicine, April 1907.
Parnia, S., Waller, D. G., Yeates, R., & Fenwick, P. (2001). A qualitative and quantitative study of the incidence, features and aetiology of near death experiences in cardiac arrest survivors. Resuscitation, 48(2), 149-156.
The new tax bill Congress is passing will increase the deficit. Although this might seem antithetical to Republican doctrine, behind the obvious spin there lurks a clever ploy to trigger an automatic program that reduces funding to most social programs, including Medicare. Known euphemistically as PAYGO, the Statutory Pay-As-You-Go Act of 2010 is a rule that requires any federal deficit to be paid for with spending cuts to social programs. With the exception of Social Security, unemployment benefits and food stamps, most mandatory spending programs—some 228 programs—will be cut or eliminated. Specifically, Medicare will be cut by 4 percent a year. Medicare represents the most important program for older people after Social Security.
We got here because people, including some gerontologists, are ignorant of what really helps older adults and of how we achieved a modicum of support for them. Without civic engagement and social protest, such laws breeze through without even a mention that Medicare is about to be cut.
Gerontology is full of experts. It is one of the richest disciplines, with academicians and researchers studying the whole spectrum from genetics to policy, from biology to geography, from architecture to neurology. They are all gerontologists. So it is common to find disagreements, but we live happily in our own silos. How do we improve aging? We try to communicate the problems associated with aging in order to bring about change.
Most communication techniques are embellishments of Schramm's 1954 model of communication. Wilbur Schramm defined communication as a two-way street where both sender and receiver take turns sending (encoding) and receiving (decoding) a message. We need messages that can be understood (decoded easily). And this is what eight national aging-focused organizations—AARP, American Federation for Aging Research, American Geriatrics Society, American Society on Aging, Gerontological Society of America, Grantmakers in Aging, National Council on Aging, and the National Hispanic Council on Aging—tried to do when they banded together and hired FrameWorks to create a strategy for helping the public understand aging issues. The result was a bible for an aging future. Like all bibles, it is populated by don’ts:
Don’t lead with the story of demographic shifts.
Don’t talk about aging as a “civil rights issue.”
Don’t use language that refers to older people as “other.”
Don’t overdo the positivity.
Don’t cross-contaminate efforts to build public will with “news you can use.”
FrameWorks simplifies scientific and societal messages to a point that the general public can understand, in order for them to act positively on it. The problem with simplification is that it is false. Changing attitudes does not necessarily change behavior. We believe that communicating a good message changes attitudes and brings about concrete changes. We therefore also believe that laws are enacted as acts of benevolence. But this is misguided, as we are witnessing right now with PAYGO. "Reframing Aging" and "Disrupting Aging" are a ruse because they simplify a process that is messy and volatile and exclude the participation of individuals in civil disobedience. Worse still, these approaches deny social activists their true worth in our political world. What changes and improves conditions for older adults are laws that are enacted, implemented and enforced. And these steps are accomplished by civic engagement (or lack thereof).
A livable income remains the lynchpin of wellbeing among older adults. Income, especially in the United States, increases access to affordable health care, housing, transportation and food at a minimum. And we got here through the single enactment of the 1935 Social Security Act. The act was not some kind of reframing or disrupting of aging. The act was passed because there was civil unrest and a swell of support for alternative provisions. By focusing solely on ageism and seeing the problem as a public relations issue, FrameWorks misses one of the tenets of the aging reality: heterogeneity. As we get older, we as a group become more varied and different from each other: a schism as wide as that between Donald Trump and Noam Chomsky. FrameWorks remains at a loss in representing these two extremes.
Reframing, disrupting, renewing, or any public relations exercise cannot address aging, understand the changes and needs, develop effective responses and tackle problems associated with aging—at an individual or at a community level. That thinking is nonsense. Neither Trump nor Chomsky complains of ageism. The obvious reason is that they are at their zenith. Their basic civic needs seem to be provided for. Their other very vocal issues—however grave and important—have nothing to do with age. Aging becomes a policy issue ONLY when individuals are at their lowest—their nadir.
That nadir for older adults looks much as it does at other ages: it is reached when provision for shelter, health, food and income fails. You cannot have other ambitions before meeting these basic requirements. Right now those basic requirements are unmet for an increasingly large minority of the older adult population. Social gerontologists focus on this vulnerable and abused group. The answer to how to help them is not a reframing of issues, but blue-collar provision of services. And services are created through policy.
We have been here before. During economic failures, older adults are worst hit. The Great Depression of the 1930s followed previous economic collapses in the 1840s and again in the 1890s. Poverty among older adults grew dramatically, so that by 1934 over half of older adults in America lacked sufficient income to be self-supporting. They needed charity to survive. State welfare pensions were non-existent before 1930, and in those states that later developed them, pensions provided only 65 cents a day for about 3% of older adults. Millions of older people were homeless, hungry and desperate. Millions more were unemployed. By some estimates more than two million adult men—referred to as hobos, travelling workers, the word likely derived from the term hoe-boy meaning "farmhand"—wandered aimlessly around the country. Banks and businesses failed. From this morass of civil deprivation rose one of the most important pieces of legislation: the 1935 Social Security Act, which in 1965 spawned Medicaid and Medicare, and which is the bedrock of services for older adults. No single act has improved older adults' wellbeing as much before or since.
Social Security Act
The Social Security Act—passed under the administration of President Franklin D. Roosevelt (FDR) in 1935—created a right to a pension in old age and insurance against unemployment. This legislation was not passed because of the benevolence of Congress, or that of FDR (who won in 1932 and 1936). The act was passed because there was civil unrest and a threat of further social upheaval.
Workers rose up, and although individual uprisings were ineffective, en masse they led even the oligarchs of the time and the Supreme Court judges to back down. There are other interpretations of history. But a strong case can be made that civil uprising forced dramatic political choices at the time. The period was characterized by worldwide turmoil that gave rise to communism, anarchism, fascism, and National Socialism—Hitler, Mussolini, Gandhi, Lenin/Trotsky/Stalin. Here in the United States it was Federalism, as expressed through the many "alphabet agencies" created under the New Deal. Federalism emerged not in response to civic unrest but in competition with it, and it managed to subdue it.
Even before the Great Depression, the poor had established a precedent of marching to Washington, D.C. to express their ire. In 1894 Coxey's Army marched on Congress after the industrialist Jacob Coxey organized tens of thousands of the unemployed. Although this movement fizzled, Coxey later became an advocate of public works as a remedy for unemployment. But it was the Great Depression that awakened the masses. The story remains scattered across the literature. Six social movements have been etched in history and defined the New Deal, whether by competing with it or by promoting it.
1. Every Man a King. Governor and later Senator Huey Long wanted the Federal government to guarantee that everyone over age 60 would receive an old-age pension, while every family would be guaranteed an annual income of $5,000. He proposed to do this by limiting private fortunes to $50 million, legacies to $5 million, and annual incomes to $1 million. By 1935 the movement had 27,000 local clubs with 7.7 million members.
2. The Long Beach physician Francis E. Townsend started the Townsend Movement. Long Beach in California was considered the “geriatric capital” of the United States at the time with over a third of its residents being elderly. After finding himself unemployed at age 67 with no savings and no prospects, Townsend proposed that the government should provide a pension of $200 per month to every citizen age 60 and older. The pensions would be funded by a 2% national sales tax. By 1933 there were 7,000 Townsend Clubs around the country with more than 2.2 million members.
3. The Fire & Brimstone movement takes its name from the radio preacher Father Charles E. Coughlin, who railed against the Social Security Act as he did against FDR, international bankers, communists, and labor unions. In 1936 Coughlin, along with Townsend and the remnants of Huey Long's Share the Wealth movement, joined to form a third party to contest the presidential election in the hope of preventing President Roosevelt from being re-elected. They failed, but the preacher had some 35–40 million listeners.
4. Upton Sinclair, a Californian novelist and social crusader, drafted a program called End Poverty in California (EPIC). His 12-point program included a proposal to give $50-a-month pensions to all needy persons over 60 who had lived in California for at least three years. Running on EPIC as his mandate, Sinclair was the Democratic nominee for governor in the 1934 election, which he lost.
5. By 1938 there were approximately eighty different old-age welfare schemes competing for political support in California. The culmination of these economic propositions was the Ham & Eggs movement. Named in response to a flippant put-down that the movement was merely about a common meal, Ham & Eggs was started by the radio personality Robert Noble. Based on the writings of the Yale professor Irving Fisher, the movement demanded that the state issue $25 warrants each Monday morning to every unemployed Californian over the age of fifty. With more than 300,000 members and many more supporters, it quickly grew into a movement. Although the organization was later co-opted by two brothers advocating $30 every Thursday morning, support for the social program remained resilient. Even after the passage of the Social Security Act, in 1938 the successful Democratic candidate for governor, Culbert Olson, openly supported the plan, and an initiative to adopt the Ham & Eggs plan as California state policy was placed on the ballot twice (1938 and 1939). Both propositions failed.
6. In Ohio the Bigelow Plan, named after the Reverend Herbert S. Bigelow, proposed a State amendment to guarantee an income of $50 a month ($80 for married couples living together) to the unemployed over sixty years of age. He proposed that funding would come partly from an increased tax on real estate (a 2% increase on land valued at more than $20,000 an acre) and partly from an income tax equal to one-fourth of the federal income tax paid by individuals and corporations. The plan garnered nearly half a million votes before it was defeated.
These movements at times competed against the New Deal that FDR was pushing. There remains some resilient misunderstanding of the benefits of the New Deal. Most picture it as a battle between good and evil, the benevolent against the greedy, the globalist against the small business. We have been here before. The true story was messier then, as it is now.
When Kim Phillips-Fein wrote Invisible Hands: The Businessmen's Crusade Against the New Deal, the impression was that the New Deal was somehow transformative for the good. But at the time, the New Deal was anything but positive. Phillips-Fein has shown that unemployment during the New Deal remained high, at around 17% (1934–40), especially among African Americans and especially in the South; the economy was still depressed; federal income taxes were tripled, alongside higher liquor taxes and new payroll taxes; farm foreclosures were high (mainly among African American farmers); and with more than 3,728 Executive Orders, the New Deal has been argued to have delayed recovery. It seems that the Social Security Act kept us lingering longer in depression. Only after the Second World War did the economy and public welfare improve. Despite this background, the 1935 Social Security Act, for the first time, provided a national safety net for older adults and transformed how we think about aging in ways that still reverberate today.
The Social Security Act became a vehicle for social programs. In 1965, with the addition of Medicaid—health care for the poor and disabled—and Medicare—health care for older adults—the social package was complete. Although Social Security is neither exclusively a social program nor an insurance program, so far it has resisted change. Until now.
What will protect and improve these services for older adults is not a reframing exercise, but a swell of civic protest and civic engagement that exposes and shames the architects of policies that would happily sell the future of our children (deficit increases) and hit the poorest and most vulnerable members of our society (Medicare recipients), with only a murmur of protest from aging-focused organizations. Without protests to halt the cuts to Medicare, no amount of reframing will ever reverse the damage that will begin over the next few months.
In the United States there are more older adult drivers on the road, and as a result many will end up in hospitals.
In 2015 there were more than 47.8 million licensed drivers ages 65 and older in the United States—the fastest growing driving population. With this increase we are also seeing more accidents. That same year 6,800 older adults were killed—compared to 2,333 teens ages 16–19—and more than 260,000 were treated in emergency departments for motor vehicle crash injuries.
A quick review of the National Institute on Aging website on older drivers provides a simplistic answer. The page that addresses older adults and driving includes such enlightened subheadings as: Stiff Joints and Muscles; Trouble Seeing; Trouble Hearing; Dementia; Slower Reaction Time and Reflexes; Medications. It is not surprising, therefore, to see that fatal crashes, per mile traveled, increase the older the driver is—particularly among males. It seems that these diminished physical capacities have direct negative consequences for driving.
Despite this obvious conclusion—that diminished physiology results in more accidents—the evidence is not so clear-cut.
A 2015 report by the Insurance Institute for Highway Safety suggests that these increased fatalities are more likely due to greater susceptibility to injury and medical complications than to an increased risk of crashing. Older people are more likely to be killed when in an accident. Frail bodies, as well as driving older and less safe cars, are to blame. There are also many older pedestrian deaths, which do not involve driving at all.
Older drivers might have impaired capabilities but they are not all impaired drivers. In fact they are safer than some younger groups. In general older drivers are more likely to use seat belts, tend to drive when conditions are safest and are less likely to drive while under the influence of alcohol. In comparison, teen drivers—at the zenith of their physiological prowess—have a higher rate of fatal crashes, mainly because of their immaturity, lack of skills, and lack of experience. It’s not all about biology.
Teenagers have taught us that driving a car requires more than just physical attributes. Even if we just focus on the most obvious, vision, the results are surprising.
Cynthia Owsley and her colleagues in the Department of Ophthalmology at the University of Alabama found that the best predictor of accidents was not visual acuity but a combination of early visual attention and mental status. Drivers with these problems had 3–4 times more accidents (of any type) and 15 times more intersection accidents than those without them. Driving, it seems, primarily requires a sense of spatial awareness—knowing what is around you and predicting how objects and people are moving. This perceptual capacity is known as the "useful field of view"—the area from which you can take in information with a single glance.
The psychologist Karlene Ball and her colleagues at Western Kentucky University reported that older adults with substantial shrinkage in the useful field of view were six times more likely to have a crash. What was surprising was that eye health, visual sensory function, cognitive status, and age—although these all correlated with crashes—were poorer at predicting crash-prone older drivers. How we perceive and predict the immediate environment matters more than having excellent vision.
Our useful field of view narrows with age. We take in less of the visual field in front of us, resulting in greater susceptibility to accidents. This is not a deficit in itself, although it has negative consequences. It is the result of years of excellent driving, during which we trained our brain that we no longer need to concern ourselves with peripheral events. We are such good drivers. As a result our peripheral view has become unimportant, and we have erroneously discarded that aspect of driving just when it becomes important, as we start losing other sensory sharpness.
But luckily there are ways to enhance our perception. There are effective computer-based tools for improving the useful field of view and retraining our brain to drive more safely. These studies have shown that, after training, drivers make a third fewer dangerous driving maneuvers, can stop sooner when they have to, and feel greater mastery of driving in difficult conditions—such as at night, in bad weather, or in new places. All of which translates into a reduction of at-fault crash risk by nearly half. This is good news that can help older drivers keep their license longer and, more importantly, drive more safely despite diminished physiological capacities.
Ball, K. K., Roenker, D. L., Wadley, V. G., Edwards, J. D., Roth, D. L., McGwin, G., ... & Dube, T. (2006). Can High‐Risk Older Drivers Be Identified Through Performance‐Based Measures in a Department of Motor Vehicles Setting?. Journal of the American Geriatrics Society, 54(1), 77-84.
Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Web-based Injury Statistics Query and Reporting System (WISQARS). Atlanta, GA: CDC; 2017 [cited 2017 Nov 29]. Available from URL: https://www.cdc.gov/injury/wisqars/index.html
Insurance Institute for Highway Safety (IIHS). Fatality facts 2015, Older people. Arlington (VA): IIHS; November 2016. [cited 2016 Dec 21]. Available from URL: http://www.iihs.org/iihs/topics/t/older-drivers/fatalityfacts/older-people/2015
Owsley, C., Ball, K., Sloane, M. E., Roenker, D. L., & Bruni, J. R. (1991). Visual/cognitive correlates of vehicle accidents in older drivers. Psychology and Aging, 6(3), 403-415.
After a century and an immeasurable amount of resources pumped into dementia research—particularly into the all-encompassing Alzheimer’s disease—the breakthroughs we are witnessing are not in cure but in care. Care—long ignored in funded research—is emerging as the innovator in dementia research.
It is telling that Auguste Deter, the first woman to die of Alzheimer’s disease, did not die of Alzheimer’s disease but of bedsores. Despite this painful death, it is the disease that gained prominence, and has now gained dominance. For a hundred years the mantra among clinicians has been that the disease follows a set course. Even though we still neither understand the disease nor have any way of stopping it, we continue to hold the belief that we can perhaps stop the disease and make it go away.
Unlike in childhood, diseases in older age tend to stay. Some are companions until death (e.g., prostate cancer) while others will likely cause our death (e.g., heart disease). Few diseases in older age will be cured, and Alzheimer’s disease (or dementia in general) is one of these incurable diseases. Although dementia is now the fifth or sixth leading cause of death, in fact, as with Auguste Deter, it is always something other than dementia that kills you. It is therefore important to treat the whole person.
The idea that the expression of dementia is purely biological has been shown to be false. For Thomas Kitwood, for example, people with dementia were disadvantaged not only by the disease itself, which hinders thinking and behaving, but also by the attitudes and actions of those around them. Agitation is a case in point, caused by a combination of the incapacities of the disease and the rigidity of caregivers. Kitwood, for all of his theoretical flaws, revolutionized care for people with dementia. He both named and framed person-centered care: an approach to caring for someone with dementia that allows the individual to dictate what is best for them. This approach was already well understood in the field of disability.
Instead of people with dementia being warehoused until death released them from their misery, as Auguste Deter endured, the person-centered approach ensured a focus on the person’s well-being. Personhood remains a caring philosophy. But it was not enough.
In a world where we see our cognition as the ultimate representation of who we are, we need a stronger system to protect people with dementia. That system came from the disability field and pushed dementia research into the political arena through the concept of citizenship—the idea that all individuals have rights—which goes beyond personhood. In 2007 the British Ruth Bartlett and the Canadian Deborah O'Connor argued that although “the idea that people with dementia have rights has long been recognized,” the idea of citizenship, where those rights are enforced, has rarely, if ever, been explicitly applied to people with dementia.
Citizenship can be applied to promote the status of discriminated groups. However the concept of citizenship assumes that the individual has the capacity to exercise their rights and to honor their responsibilities. Such assumptions are not obvious among people with severe dementia. And there is the rub.
To get around this conundrum, the concept of ‘intimate citizenship’ has been put forward, which focuses on citizenship moderated and mediated by family and caregivers. But such membership does not address institutional discrimination. Clive Baldwin of the Bradford Dementia Group would argue that people with dementia still have a story to tell. More importantly, they may influence the stories of those who interact with them. In lieu of independent advocacy organizations that lobby on behalf of people with dementia, reliance on caregivers remains. And that can be an issue if there is discord.
We discriminate against people with dementia when it comes to costly treatment for other medical problems they might have. For example, we deny them hip replacements or surgeries for non-life-threatening issues. We have laws that restrict the ability of people with dementia to drive and to conduct business. Legal status depends on whether an individual has mental capacity, and this status determines what rights a person has. Although these laws are justified because they protect others in society, other discriminations remain inherent in society: discriminations based on our power to make decisions on behalf of someone. We have inherited “cognitive citizenship.” In her 2004 PhD thesis, Petula Mary Brannelly reported that it is not policy or legislation but clinicians’ personal values that resulted in one in ten people with dementia being detained against their will and receiving the most restrictive care outcomes.
Again, Ruth Bartlett, who has devoted much of her research to defining citizenship in dementia care, followed sixteen dementia activists campaigning for social change. She revealed that although campaigning can be energizing and reaffirming, there were also drawbacks. Other than fatigue due to their disease, the activists reported oppression related to how they were expected to behave. The struggle for citizenship has only just begun for people with dementia, and there is still a missing piece. Bartlett recently examined ‘dementia friendly communities,’ where citizenship is perhaps most clearly enacted. But, as in disability, the concept of separate but equal remains an issue. Citizenship needs to occur in public social spaces. It is about a redistribution of power.