
by Paresh Kathrani

Robot rights! Shutterstock


Amazon has recently developed an option whereby Alexa will only activate if people address it with a “please”. This suggests that we are starting to recognise some intelligent machines in a way that was previously reserved only for humans. In fact, this could very well be the first step towards recognising the rights of machines.

Machines are becoming a part of the fabric of everyday life. Whether it be the complex technology that we are embedding inside of us, or the machines on the outside, the line between what it means to be human and machine is softening. As machines get more and more intelligent, it is imperative that we begin discussing whether it will soon be time to recognise the rights of robots, as much for our sake as for theirs.

When someone says that they have a “right” to something, they are usually saying that they have a claim or an expectation that something should be a certain way. But what is just as important as rights are the foundations on which they are based. Rights rely on various intricate frameworks, such as law and morality. Sometimes, the frameworks may not be clear cut. For instance, in human rights law, strong moral values such as dignity and equality inform legal rights.

So rights are often founded upon human principles. This helps partially explain why we have recognised the rights of animals. We recognise that it is ethically wrong to torture or starve animals, so we create laws against it. As intelligent machines weave further into our lives, there is a good chance that our human principles will also force us to recognise that they too deserve rights.

But you might argue that animals differ from machines in that they have some sort of conscious experience. And it is true that consciousness and subjective experience are important, particularly to human rights. Article 1 of the Universal Declaration of Human Rights 1948, for example, says all human beings “are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.

However, consciousness is not the only possible basis for rights. In New Zealand and Ecuador, rivers have been granted rights because humans deemed their very existence to be important. So rights don’t emerge only from consciousness; they can rest on other criteria too. There is no one correct type or form of rights, and human rights are not the only rights.

As machines become even more complex and intelligent, just discarding or destroying them without asking any questions at all about their moral and physical integrity seems ethically wrong. Just like rivers, they too should receive rights because of their meaning to us.

The Whanganui river in New Zealand has been granted the same rights as humans. Duane Wilkins, CC BY-SA

Imagine a complex, independent machine providing health care to a patient over a long period of time. The machine resembles a person and applies its intelligence through natural speech, and over time, it and the patient build up a close relationship. Then, after years of service, the company that created the machine decides it is time to turn off and discard this perfectly working device. It seems ethically wrong simply to discard an intelligent machine that has kept that patient alive and built a relationship with them, without even entertaining its right to integrity and other rights.

This might seem absurd, but imagine for a second that it is you who has built a deep and meaningful relationship with this intelligent machine. Wouldn’t you be desperately searching for a way to stop it being turned off and your relationship being lost? It is as much for our own human sake as for the sake of intelligent machines that we ought to recognise their rights.

Sexbots are a good example. The UK’s sexual offences law exists to protect the sexual autonomy of the human victim. But it also exists to ensure that people respect sexual autonomy, the right of a person to control their own body and their own sexual activity, as a value.

But the definition of consent in section 74 of the Sexual Offences Act 2003 in the UK specifically refers to “persons” and not machines. So right now a person can do whatever they wish to a sexbot, including torture. There is something troubling about this. And it is not because we believe sexbots to have consciousness. Instead, it is probably because by allowing people to torture robots, the law stops ensuring that people respect the values of personal and sexual autonomy, that we consider important.

These examples very much show that there is a discussion to be had over the rights of intelligent machines. And as we rapidly enter an age where these examples will no longer be hypothetical, the law must keep up.

Matter of respect

We are already recognising complex machines in a manner that was previously reserved for humans and animals. We feel that our children must be polite to Alexa because, if they are not, it will damage our own notions of respect and dignity. Unconsciously, we are already recognising that how we communicate with and respect intelligent machines will affect how we communicate with and respect humans. If we don’t extend recognition to intelligent machines, it will affect how we treat and consider each other.

Machines are integrating their way into our world. Google’s recent experiment with natural language assistants, where an AI sounded eerily like a human, gave us an insight into this future. One day, it may become impossible to tell whether we are interacting with machines or with humans. When that day comes, rights may have to change to include them as well; as we change, rights may naturally have to adapt too.






Paresh Kathrani is a Senior Lecturer in Law at the University of Westminster.

This article was originally published on The Conversation, a content partner of Sci Fi Generation.


Amputees often experience the sensation of a “phantom limb” – a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

"This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team found that the test subject and the prosthesis produced a natural, reflexive reaction: pain when touching a pointed object, and no pain when touching a round one.

The e-dermis is not sensitive to temperature – for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be used to expand or extend to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.


A storm of tiny dust particles has engulfed much of Mars over the last two weeks and prompted NASA’s Opportunity rover to suspend science operations. But across the planet, NASA’s Curiosity rover, which has been studying Martian soil at Gale Crater, is expected to remain largely unaffected by the dust. While Opportunity is powered by sunlight, which is blotted out by dust at its current location, Curiosity has a nuclear-powered battery that runs day and night.

The Martian dust storm has grown in size and is now officially a “planet-encircling” (or “global”) dust event.

Though Curiosity is on the other side of Mars from Opportunity, dust has steadily increased over it, more than doubling over the weekend. The atmospheric haze blocking sunlight, called “tau,” is now above 8.0 at Gale Crater – the highest tau the mission has ever recorded. Tau was last measured near 11 over Opportunity, thick enough that accurate measurements are no longer possible for Mars’ oldest active rover.
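To give a rough sense of what those tau values mean: tau is an optical depth, and under the simple Beer–Lambert assumption for the direct solar beam (a simplification for illustration only; the mission’s actual radiative transfer modelling is more involved), the fraction of direct sunlight that survives is exp(−tau).

```python
import math

def direct_sunlight_fraction(tau: float) -> float:
    """Fraction of the direct solar beam reaching the surface,
    using the Beer-Lambert relation I/I0 = exp(-tau)."""
    return math.exp(-tau)

# Tau above 8 at Gale Crater: well under 0.1% of the direct beam gets through.
print(f"tau=8:  {direct_sunlight_fraction(8.0):.5%}")
# Tau near 11 over Opportunity: an order of magnitude dimmer still.
print(f"tau=11: {direct_sunlight_fraction(11.0):.5%}")
```

On this simplified reading, the jump from tau 8 to tau 11 is not a small increment: each additional unit of tau cuts the remaining direct sunlight by another factor of e, which is why the solar-powered Opportunity was in far more trouble than the nuclear-powered Curiosity.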

For NASA’s human scientists watching from the ground, Curiosity offers an unprecedented window to answer some questions. One of the biggest: Why do some Martian dust storms last for months and grow massive, while others stay small and last only a week?

“We don’t have any good idea,” said Scott D. Guzewich, an atmospheric scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, leading Curiosity’s dust storm investigation.

Curiosity, he points out, plus a fleet of spacecraft in the orbit of Mars, will allow scientists for the first time to collect a wealth of dust information both from the surface and from space. The last storm of global magnitude that enveloped Mars was in 2007, five years before Curiosity landed there.

In a sequence of daily photos captured by its Mast Camera, or Mastcam, Curiosity faces the crater rim, about 18.6 miles (30 kilometers) from where it stands inside the crater, and the sky can be seen growing hazier. This sun-obstructing wall of haze is about six to eight times thicker than normal for this time of season.

Curiosity’s engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, have studied the potential for the growing dust storm to affect the rover’s instruments, and say it poses little risk. The largest impact is to the rover’s cameras, which require extra exposure time due to the low lighting. The rover already routinely points its Mastcam down at the ground after each use to reduce the amount of dust blowing at its optics. JPL leads the Mars Science Laboratory/Curiosity mission.

Martian dust storms are common, especially during southern hemisphere spring and summer, when the planet is closest to the Sun. As the atmosphere warms, winds generated by larger contrasts in surface temperature at different locations mobilize dust particles the size of individual talcum powder grains. Carbon dioxide frozen on the winter polar cap evaporates, thickening the atmosphere and increasing the surface pressure. This enhances the process by helping suspend the dust particles in the air. In some cases, the dust clouds reach up to 40 miles (60 kilometers) or more in elevation.

Though they are common, Martian dust storms typically stay contained to a local area. By contrast, the current storm, if it were happening on Earth, is bigger than North America and Russia combined, said Guzewich.

The dust storm may seem exotic to some Earthlings, but it’s not unique to Mars. Earth has dust storms, too, in desert regions such as North Africa, the Middle East and the southwest United States.

But conditions here prevent them from spreading globally, said Ralph A. Kahn, a Goddard senior research scientist who studies the atmospheres of Earth and Mars. These include the structure of our thicker atmosphere and stronger gravity that helps settle dust. Earth also has vegetation cover on land that binds the soil with its roots and helps block the wind and rain that wash the particles out of the atmosphere.


by Rachel A. Ankeny and Lisa M. Given

Making science for people is a series of articles that explore how humanities, arts and social sciences expertise is applied to problems typically corralled into the science and technology space.

Actually yes, science and the arts do work together. from www.shutterstock.com


Science and technology are valued highly in many societies.

Globally, discussions of research priorities by governments, universities, and many researchers position science, technology, engineering, mathematics and medicine (STEMM) disciplines at the forefront of innovation and industry engagement.

However, for industries to adopt innovations and for research to have an impact, there must be significant shifts in people’s behaviours, their perceptions, and the ways communities engage with research. These activities are the research focus of the humanities, arts, and social sciences (HASS) disciplines.

Unfortunately, in Australia, we continue to see reductions in research funding in these areas.

Here’s why we think that’s a mistake.

More than window dressing

There are few strategies to link STEMM and HASS disciplines in productive ways, and to treat HASS approaches as more than window dressing. For example, the Australian Science and Research Priorities focus on areas that appear to be STEMM-driven, such as soil and water, cybersecurity, environmental change, and health.

HASS approaches are only mentioned in passing. However, there was clear recognition (particularly by industry representatives) of the importance of HASS approaches when the priorities were discussed in early 2015. For instance, one of us (Rachel) heard one participant in the expert working group sessions note that:

understanding cybersecurity is not primarily a technological issue; the difficulties lie in human and social behaviours associated with cyber activities.

The national Industry Growth Centres also emphasise STEMM, leaving little room for HASS disciplines to use their competitive strengths within the national priorities.

Multiple actors

In contrast, the 2016 report on Technology and Australia’s Future noted that:

Technology creation and use requires multiple actors: designers, makers, users, scientists, marketers, policymakers and enablers. Australia’s education systems must both encourage high levels of scientific and technological literacy and inculcate creativity. Creativity encourages experimentation, giving people, communities and companies the necessary confidence to innovate.

Without HASS, how can Australia position itself for economic, cultural, business, and social successes in the technologically advanced future our children will inherit? It is in the HASS disciplines that designers, makers, policymakers, and enablers sit.

Without a strong interdisciplinary approach to tomorrow’s problems, the STEMM disciplines risk not realising their full potential for research translation and adoption, and for change for the betterment of society.

Mobilising value

In comparison to Australia, Canada will invest C$925 million over the next five years not only in science and health, but also in HASS research. The Canadian budget also includes C$275 million for interdisciplinary and high-risk research to be administered by the Social Sciences and Humanities Research Council (SSHRC).

Along with Canada’s health and science-based funding agencies, SSHRC provides special funding schemes to support STEMM and HASS interdisciplinary work. These initiatives not only provide strategic funding to support top researchers but attest to the value of the HASS disciplines in full partnership with STEMM.

These initiatives are part of Canada’s focus on mobilising the value of science and technology, which the government recognises cannot succeed without a simultaneous and clear focus on the human, cultural, and creative aspects of modern society.

Projects in Australia

Exciting STEMM-HASS interdisciplinary projects are already being developed here in Australia. One of us (Lisa) has a current project exploring research adoption in the wine industry, which brings wine scientists and industry partners together with information science researchers.

The focus is on qualitative, social research to understand how scientists can best communicate with industry. This will ensure the newest innovations in wine science can be more easily adopted, and that winemakers can share their research needs with scientists.

Wine brings huge value to the Australian economy. Ensuring that winemakers have access to the latest research innovations, and that wine scientists can help the industry adopt changes, are critical issues requiring social science methods if those innovations are to be taken up.

Another of us (Rachel) has a project that brings social scientists, humanities scholars and animal welfare scientists together with industry partners, to explore public and producer values related to animal welfare in the red meat industry.

Given current debates over meat production and threats to agriculture’s social licence to operate, determining what underlying values are shared across various sectors of society is critical both to policymaking and self-regulation, and to future directions in the industry.

Read more: What comes first: the free-range chicken or the free-range egg?

From Shakespeare to diseases

Research applications come out of the most unusual places.

In 2013, a collaboration between a linguist and a bioinformatician resulted in supercomputing techniques to determine whether an unknown play was written by Shakespeare. The findings of this work were redeployed to diagnose cancer using biological markers to pinpoint a molecular signature for particular diseases. This research approach has been used in various biomedical, literary, linguistic, and social behavioural studies, including one that produced a tree showing relationships between 84 Indo-European languages, and the classification of several different cancer cell lines.

The so-called “wicked problems” related to health, the environment and climate change, among others, continue to plague our world.

Yet, we are failing to fully integrate the scientific aspects of these issues with how people actually operate in and think about the real world: HASS can help.







Rachel A. Ankeny is Professor of History and Associate Dean Research (Faculty of Arts) at the University of Adelaide and Lisa M. Given is Associate Dean, Research and Development at Swinburne University of Technology.

This article was originally published on The Conversation


by Joshua Krook

Making science for people is a series that explores how humanities, arts and social sciences expertise is applied to problems typically corralled into the science and technology space.

Today’s article involves technology, but in a legal sphere. It looks at how training lawyers in the “human” capabilities of creativity, empathy, compassion and emotional intelligence could provide them with skills that aren’t at risk of automation.

Despite changes to legal technology, Australian law schools remain wedded to an old-fashioned teaching model. Shutterstock


Around 40% of all law jobs are at risk of automation, according to a 2016 Deloitte report.

Traditional skills expected of law graduates are increasingly going to be undertaken by new AI-driven software. Basic skills such as database research and document drafting are already being automated by large Australian law firms.

The race is on to go beyond basic skills to automate higher-order thinking itself. Law firms want a piece of software that can find the relevant law and apply it to a client’s unique set of factual circumstances. In some cases this already exists.

The application of law to facts has been a basic skill taught to law students in Australia for more than 100 years. So if this skill is automated, what does that mean for the future of law schools, law firms, and law graduates?

A new kind of lawyer

According to Richard Susskind, a prolific author on the future of law and legal technology, it is no longer enough to teach the old basic skills of lawyering in the new, AI-driven, automated economy.

We need a new kind of lawyer who is trained not only in the use of coding and legal technology, but also in the skills that AI will not be capable of automating. These include the very “human” capabilities of creativity, empathy, compassion, and emotional intelligence.

Despite changes to legal technology, Australian law schools remain wedded to an old-fashioned teaching model. The core law subjects taught in all Australian law schools, known as the Priestley Eleven, have not been updated in almost 30 years.

No move is underway to change these subjects in response to changes in the legal sector. There is little discussion about implementing new, compulsory legal technology subjects, creative subjects, critical thinking, or subjects that build a law student’s “emotional intelligence quotient” (EQ).

Read more: Don’t fear robo-justice. Algorithms could help more people access legal advice

This is despite the fact that prominent critics – including former Chief Justice Robert French – have suggested that the Priestley Eleven are a “dead hand” on curriculum reform and need urgent revision.

A fresh curriculum

Australian law schools seem to be ignoring the risks of failing to innovate. The case method, a standard system of teaching at all Australian law schools, was invented in the 1800s at Harvard Law School. It teaches students to apply the law to a set of facts. This is precisely the skill that is currently at risk of automation.

More than this, the case method excludes student discussion on morality, emotion and empathy – the exact “human” skills that are now required.

A recent LexisNexis report argues that the time has come to move away from old lawyering skills and towards new skills education.

For this to happen, Australian law schools will need to modify or abandon the Priestley Eleven and the case method of instruction.

It is important to acknowledge here that some law schools are proactively responding to new technology. A handful have gone as far as to create new elective courses or “majors” in either coding or legal technology. However, as long as these courses are only electives, they remain sidelined in a curriculum dominated by the classical and old-fashioned teaching of law as a “black letter” subject.

Demand for new skills from law firms

Australian law firms are beginning to demand that law schools teach students new skills for the new AI economy. Gilbert and Tobin, one of the largest law firms in Australia, wants law students to be taught about legal technology and to gain more skills in creativity and coding.

Some prominent academics are calling for a return to a more critical legal education that empowers students not only to learn the law, but also to critique its flaws.

A new law school, built on the principles of technology and the liberal arts, might teach students to critique the law that they are currently learning, learn to code, engage in law reform, and develop the essential skills of creativity, empathy and compassion.

This innovative, modern approach to legal education could empower students to face the changing legal industry with confidence and certainty, giving them hope in an otherwise uncertain and grim job market.






Joshua Krook is a Doctoral Candidate in Law at the University of Adelaide

This article was originally published on The Conversation


by James Johnson

When Google’s AlphaGo defeated the Chinese grandmaster Ke Jie at a game of Go in 2017, China was confronted with its own “Sputnik moment”: a prompt to up its game on the development of artificial intelligence (AI). Sure enough, Beijing is now pursuing a national-level AI innovation agenda for “civil-military fusion”. It’s part of China’s ambitious quest to become a “science and technology superpower” – but also a new front in an increasingly worrisome arms race.

In 2017, the Chinese president, Xi Jinping, explicitly called for the acceleration of military AI research to better prepare China for future warfare against a major adversary such as the US. China’s approach to AI has been heavily influenced by its assessment of US military initiatives, in particular the Pentagon’s Third Offset Strategy, an Obama-era plan that gave the Pentagon a mandate to experiment with cutting-edge weapons technologies, AI among them.

Beijing still hasn’t formally articulated a coherent strategic framework or operational concepts but, like Russia, it continues to pursue a range of military-use AI technologies as part of a broader effort to exploit vulnerabilities in US military assets.

While the US (for now) retains the upper hand in AI innovation across the board, China is catching up. It is a strong competitor in all kinds of military innovation, and is expected to overtake the US in AI development in the not-so-distant future. And, as the US and China race to innovate in AI, the uncertainties surrounding their respective advances (and setbacks) will have profound and potentially destabilising implications for the strategic balance of the world order.

Among all the risks that this entails, top of the list is that as the US and China’s approaches to military innovation diverge, new prejudices and preferences – for instance, criteria to decide when the use of lethal force is appropriate and ethically defensible – will be “baked into” their respective AI weapons systems, resulting in intelligent weapons that act on the basis of flawed human logic or assumptions (as has already been observed in algorithms developed to assess criminals’ propensity to re-offend). The resulting cognitive biases could exacerbate the two countries’ mutual mistrust, suspicion and misperceptions, and possibly nudge them closer to a major conflict.

Hot on the heels

China has several structural, political and societal advantages in the AI arms race. Its national strategic planning is far more coherent than that of the US, and its national datasets are unparalleled in size. Xi’s sprawling “One Belt, One Road” initiative, a plan to build a vast international network of trade links and infrastructure, has a nascent virtual counterpart: the so-called “Digital Silk Road”, which encompasses not just AI but also quantum computing, nanotechnology, big data, and cloud storage.

The situation in the US is far messier. The Trump administration and Silicon Valley share an increasingly strained relationship, meaning they will struggle to work together on AI technologies the US military can use. If American commercial AI innovation continues to rapidly outpace the Pentagon’s far more sluggish approach to AI procurement and development, the two won’t complement each other as they should – leaving China a major opportunity to get the upper hand.

The US seems to be taking steps to address these problems. Despite a brief pause in the development of the US’s AI strategic road map, the White House recently announced the creation of a new committee of AI experts to advise it on policy choices. And in 2017, Donald Trump blocked a Chinese firm from acquiring Lattice Semiconductor, a US company that manufactures chips critical to the operation of AI applications.

These steps reflect a deepening concern that China’s strategy of fusing civil and military technological innovation could allow American technology, expertise and intellectual property shared with Chinese commercial entities to be transferred to China’s military.

The Terminator conundrum

It seems that China – like Russia – has relatively few moral, legal or ethical qualms about deploying lethal autonomous weapons. Recent reports suggest that China has already begun to incorporate AI technologies into its next-generation conventional missiles and its missile-defence intelligence, surveillance and reconnaissance systems to enhance their precision and lethality.

The US will likely be much more constrained in developing these technologies. The Pentagon’s reluctance to incorporate AI into existing weaponry is grounded in liberal democratic norms governing the use of military force, and in a concern to avoid what the Pentagon has called the Terminator conundrum: the prospect that military robots could one day decide independently whether or not to take a human life.

That said, propelled by the rapid pace of technological trends in AI – and the aggressive pursuit of these capabilities by rival powers – the US’s current commitment to keeping humans in charge could waver. If the present trajectory holds, China will soon challenge the US’s lead in several emerging military-technological strategic fields. That is likely to accelerate the Pentagon’s efforts to innovate offsetting initiatives and concepts – and in turn, make it harder to keep this disruptive high-tech arms race in check.

James Johnson is a Postdoctoral honorary visiting fellow at the University of Leicester

This article was originally published on The Conversation, a content partner of Sci Fi Generation.


by Karen Douglas

The Illuminati are a common subject of online conspiracy theories. Wikimedia, CC BY

Conspiracy theories are popular, and there is no doubt that the internet has fuelled them. From the theory that 9/11 was an inside job to the idea that reptilian humanoids rule the world, conspiracy theories have found a natural home online.

But the extent to which we can actually attribute their popularity to the internet is a question that has concerned scholars for many years. And the answer is not very straightforward. While some argue that conspiracy theories flourish on the internet and social media, there is not yet any evidence that this is true. Conspiracy theories have always been with us. But today the internet fuels them in new ways and enables the deepening of conspiracy theorising in some online communities.

A long history

Conspiracy theories feature prominently in current political contexts, but they have a long history. Antisemitic conspiracy theories concerning the supposedly evil and controlling acts of Jewish people date back to antiquity, and still exist today. There is even good evidence that conspiracy theories were common in ancient Rome. So we know that conspiracy theories flourished perfectly well without the internet.

Contrary to what you might think, there is no evidence that people are more prone to believing in conspiracy theories now than they were before the internet. An analysis of published letters to the editor of the New York Times showed that conspiracy theorising did not increase between 1897 and 2010, apart from a couple of peaks: during the global depression of the late 1800s and the fear of communism during the 1950s. People appear to have always found conspiracy theories interesting and worth entertaining.

But there is strong evidence that some people adopt conspiracy theories more than others and that belief in conspiracy theories seems particularly strong among people with unsatisfied psychological needs.

All people need to feel that they know the truth. They also need to feel safe and secure. And people need to feel good about themselves and the groups they belong to. For people who don’t have these needs met, conspiracy theories become particularly appealing. It is for these people – who may be more inclined toward conspiracy theorising in the first place – that we see the greatest impact of the internet.

How the internet fuels conspiracy theories

Conspiracy theories do not bounce indiscriminately from person to person over the internet. Not everyone reads them, and they are certainly not adopted and shared by everyone. Instead, conspiracy theories tend to be shared within communities that already agree with them. For example, a person who strongly believes that 9/11 was an inside job is likely to join an internet group and communicate with others who share that belief. A person who does not already believe in this conspiracy theory is unlikely to join such a group, or to share its material.

So, rather than increasing belief in conspiracy theories generally, the internet plays a crucial role in fostering distinct and polarised online communities among conspiracy believers. Believers share their opinions and “evidence” with other believers but are less willing to share with people who are critical of conspiracy theories. So with the internet, conspiracy groups become more homogeneous and their beliefs become even stronger over time.

Conspiracy theorists communicate in online echo chambers. flickr/michaelirving

To illustrate this effect, one study showed that if internet users shared conspiracy-related information, they tended to ignore information that ran contrary to the conspiracy theory. In other words, they filtered out information that was not consistent with their pre-existing views. These people also tended to share the conspiracy-related information with other conspiracy believers rather than non-believers. This style of communication creates echo chambers where information is only consumed and shared if it reinforces what people already think. In closed communication such as this, beliefs in conspiracy theories can become stronger and more separated from the opinions of non-believers.

A 2015 study showed that believers in one conspiracy theory are also more likely to share completely new, unrelated and made-up conspiracy theories. Users who believed more traditional conspiracy theories were likely to share new, clearly false and easily debunked conspiracy theories, such as the idea that infinite energy had been discovered. The study demonstrated that internet users in conspiracy communities uncritically distribute and endorse even deliberately false, extremely implausible material.

Why is this dangerous? Well, some conspiracy theories are dangerous. Consider anti-vaccine conspiracy theories proposing that vaccines are harmful and that the harms are covered up by pharmaceutical companies and governments. Even though they are false, these conspiracy theories discourage people from having their children vaccinated. Or, consider conspiracy theories that climate change is a hoax created by climate scientists to secure more research funding. Despite abundant evidence that climate change is not a hoax, these conspiracy theories discourage people from taking action to reduce their carbon footprint.

Conspiracy theories can have powerful consequences, but we are still learning about when and how people communicate conspiracy theories and why people adopt them in preference to more conventional explanations. Understanding more about how conspiracy theories move about on the internet and social networks will play a crucial part in developing the best ways to respond to them.

Karen Douglas is Professor of Social Psychology at the University of Kent.

This article was originally published on The Conversation, a content partner of Sci Fi Generation.


by Lindsay Grace

The iconic shooting game in its original stand-up arcade form. Jordiferrer, CC BY-SA

The “Space Invaders” arcade video game, now celebrating its 40th anniversary, is an iconic piece of software, credited as one of the earliest digital shooting games. Like many early games, it and the myths surrounding it showcase the cultural collisions and issues of the era in which Japanese game designer Tomohiro Nishikado created it.

As a game designer and teacher of games, I know how meaning is carried from designer to the mechanics of play. As a game studies researcher, I also know how games reveal myth, meaning and culture.

An analysis of “Pac-Man,” for instance, shows how that game embodies many values of its day – including consumerism, drug use and gender politics. The message in “Space Invaders” is as basic as the graphics: When faced with conflict, players have no option except to blast it away.

Avoiding an enemy only delays the inevitable; players cannot move forward or back, but can only defend their space. There’s not even a reason why the invasion is occurring. Players know only that the invaders must be destroyed. It’s a distinct cultural perspective, emphasizing shooting over everything else.

Defense is the only option against a never-ending onslaught. BagoGames, CC BY

A historical pioneer

The history of many shooting games can be traced to “Space Invaders.” It’s not the first – MIT’s “Spacewar!” came earlier, in 1962 – but “Space Invaders” is among the most copied. Even people who never played the original “Space Invaders” have likely played one of the more than 100 clones of it – including the first advertising game, “Pepsi Invaders.”

Spacewar! (MIT 1962) - YouTube
‘Spacewar!’

The release of “Space Invaders” foreshadowed the growth of the Japanese games industry, which itself was seen as a fearsome cultural invasion of the U.S. by Japan. The tension was expressed in popular media, such as the comedy film “Gung Ho,” as a defense of American individualism against the power and efficacy of Japanese collectivism and corporate culture, a combined Japanophobia and Japanophilia.

“Space Invaders” also highlighted how tenuous some elements of Western identity were. The U.S. had built its sense of self on being the greatest, but was being challenged by Japanese economic growth. But it was complicated: As Japanese automakers won customers from the American car companies, they began to build their cars in the U.S. – so were they Japanese or American cars?

Space Invaders (Arcade) gameplay - YouTube
A game of ‘Space Invaders.’

In the same way, if the American game maker Atari’s biggest hit was a Japanese-made game, how American was Atari’s success? In any case, millions of U.S. consumers bought the Atari 2600 game system to be able to play the hit arcade game “Space Invaders” at home. Five years later, in 1983, the games industry crumbled, in large part because American-made games were uninteresting and too similar to one another.

In 1985 the Japanese-made Nintendo Entertainment System ushered in a new era of home console play. That continued the challenge to the American identity: U.S. companies failed to innovate and lead, and a Japanese company filled the innovation void.

The myths of (space) invasion

“Space Invaders” also has collected some myths around it, which reveal more about society than about the game itself. The most notable legend is that “Space Invaders” was so popular that the Japanese economy ran out of the coins needed to play it in arcades. It’s not true, but like many myths about games, both positive and negative, it sounds so good it’s easy to champion anyway.

That fable is a prequel to larger popular fictions about games. People blame games for the decline of economies and for joblessness. The innovations created in games underpin technologies that change society and the way people socialize, yet people are also eager to blame large systemic issues, like gun violence in schools, on video games.

Another tale is that “Space Invaders” demand was so strong that even with multiple game machines installed, there were lines to play. Whether or not they were queuing up for their own turns, it’s definitely true that people watched others play. That helped set the stage for the growth of arcades and LAN-gaming parties, precursors to professional players and today’s multi-billion-dollar industry of e-sports.

An arena packed with people watching others play video games. Sam Churchill, CC BY

History shows that games change society, pointing it toward play and creating new economies. The advent of arcades was as novel as the contemporary use of the micro-transactions common in mobile games now. Their incubation of community and spectator play spawned countless YouTube gaming channels.

Like the space invaders that descend on the player, unknown but always threatening, games scare some people. They seem to be unrelentingly approaching, different and hard to keep pace with. Games challenge players to adapt and dismantle the conventions under which people hide.

But, like playing “Space Invaders” itself, the joy comes in interacting with that change, mastering it and moving on to the next level.

Lindsay Grace is Associate Professor of Communication and Director, American University Game Lab and Studio at the American University School of Communication.

This article was originally published on The Conversation, a content partner of Sci Fi Generation.


by Peter Ellerton

A scene from the short film KCLOC. Screenshot/Vimeo

One of the wonderful things about science is that it makes us think about what we value, or what is meaningful to us. It’s not just an objective, dispassionate inquiry into the material world, it’s also a large part of the story about what it means to be human.

That dimension is often missing in science education, but it is brought home beautifully within the collection of films that represent the winners of the 2018 SCINEMA International Science Film Festival, to be screened at various capital cities across Australia this month.

I managed to catch up with the festival in Brisbane this week, so I’ll mention some films that do this exceptionally well.

The meaning of time

In a playful way, Ninaad Kulkarni gorgeously animates a variety of interviews with people who are asked what time means to them.

KCLOC Official Trailer #1 (2017) | 4K - Vimeo
KCLOC official trailer.

The film KCLOC brings out an amazing thing about time — it’s a physical quantity, sure, but it just doesn’t seem as interesting to ask people what they think about mass or distance. It’s a fascinating point that suggests something profound about the human experience with and in time.

More dramatically, in Timelapse writer and director Aleix Castro explores the idea of an implanted chip that blocks mundane work tasks from our consciousness, promising to free us from the daily grind.

Timelapse - Short Film - Vimeo
Timelapse official trailer.

Workers move around zombie-like until their work period (perhaps a month) ends. They then return to awareness and enjoy an extended leisure period. A month’s work seemingly passes in a second, followed by a week off, and they are well paid for it!

It sounds like a win-win for employers and workers, but what about those periods during which we are working but our loved ones are not? And would we be happy losing large chunks of our lives from memory, even if those memories are boring? How would we learn perseverance, or recognise the exciting times if we have no contrasting experiences?

Maybe we’d want to extend those down times to household chores as well. How much of our lives would we end up living?

One of the most powerful impressions on me was made by iRony, Radheyda Jegatheva‘s adaptation of his poem Seven Billion.

iRony - Trailer 1 - YouTube
iRony official trailer.

Hand-animated (which adds to the film’s intensity) and extraordinarily creative, iRony explores our relationship with the technology that is meant to connect us, but can rob us of joy and purpose. It’s another call to focus on how we spend our time.

If you have a child or friend who seems to have lost themself in social media, iRony would be a superbly targeted intervention.

Why fungi matters

Among the shorter films, the significantly longer The Kingdom – How Fungi Made Our World sounded, to the uninitiated like me, like a good opportunity to order another wine.

The Kingdom: How Fungi Made Our World - YouTube
The Kingdom: How Fungi Made Our World trailer.

I just had no idea. Is there really fungus on everything we touch? Do we really breathe in fungi with every breath? (Yes, in case you were wondering.) And is the “g” hard or soft?

Fungi, it turns out, are the unifying evolutionary thread for all complex life on land. No fungi, no us.

Not only do they enable the proliferation of vascular plants over the world, creeping between and within their cells to provide essential nutrients, but they also connect them through giant networks of filaments (hyphae) to form — wait for it — the Wood Wide Web.

Unlike plants and cold-blooded animals, mammals seem to be generally fungus-free zones, as fungi do not like the higher core temperatures of our bodies. That exclusion gave mammals the advantage over other animals immediately after the Cretaceous–Tertiary mass extinction about 66 million years ago, resulting ultimately in the appearance of humans millions of years later.

But it’s not that we don’t get on with fungi at all. Not only did they gift us penicillin (and fungi might also be a solution to new resistant strains of bacteria - superbugs), but consider the yeast that gives us bread… and beer.

For the religiously inclined, it would not seem untoward to imagine that God at some stage said “Let there be fungi”, and all was good.

Grassroots, written and produced by Tegan Nock and directed by Frank Oly, is an outstanding example of how citizens can put scientific findings to use. They don’t always have to wait for a technology to be processed, packaged and delivered to their door by others.

Grassroots Trailer - Vimeo
Grassroots trailer.

Farmer Guy Webb became convinced of the ability of fungi (is there nothing they can’t do?) to sequester carbon in very significant amounts when added to crops, potentially reducing the impact of global warming.

Rather than wishing “they” would do something about it, one of the agriculturalists puts it straightforwardly: “we are the they”. It’s an exciting story of one person’s belief, growing into a group’s belief, that individual action based on existing science can have huge consequences.

Production values

There was enormous aesthetic value evident in the preview of Planet Earth II – Grasslands, produced and directed by Chadden Hunter. It is a beautifully filmed and written piece, breathtaking in how it uses the newest camera technology to make the audience feel they are moving along with the organisms rather than watching them from afar.

Widow birds bounce for attention - Planet Earth II: Grasslands - BBC One - YouTube
Widow birds bounce for attention, from episode Planet Earth II: Grasslands.

Also visually stunning was the modelling shown in a Spanish production called Virtual Humans.

Virtual Humans new version - YouTube
Virtual Humans.

A supercomputer creates a virtual copy of a body that could allow medical treatments and physical therapies to be tailored to an individual’s exact requirements. All done in the safe confines of cyberspace.

It was also thought-provoking to realise how far away we are from being able to model a mind.

The fun of it all

The mind-bogglingly mathematical Lily Serna takes us through a Catalyst episode, produced and directed by David Symonds and Nicholas Searle, about using mathematics in decision-making.

Serna’s lack of pretension about her abilities and her joy in sharing them are admirable, and she explains some very sophisticated mathematics with simplicity and clarity.

I wonder if we all have the confidence in mathematics to live by the decision-making algorithm she provides. It’s well worth a try, I reckon.

The festival is a great expression of why science matters, not just in the utility it provides but in the values and meaning it highlights. Facts and values don’t just make good entertainment; together they make good science.

Peter Ellerton is a Lecturer in Critical Thinking, Curriculum Director–UQ Critical Thinking Project at The University of Queensland

This article was originally published on The Conversation


by Bonnie J. Dunbar

On June 18, 1983, 35 years ago, Sally Ride became the first American woman to launch into space, flying on the Space Shuttle’s STS-7 mission with four other crew members. Only five years earlier, in 1978, she had been selected for the first class of 35 astronauts – including six women – who would fly on the Space Shuttle.

Sally’s first ride, with her STS-7 crewmates. In addition to launching America’s first female astronaut, it was also the first mission with a five-member crew. Front row, left to right: Ride, commander Bob Crippen, pilot Frederick Hauck. Back row, left to right: John Fabian, Norm Thagard. NASA

Much has happened in the intervening years. During the span of three decades, the shuttles flew 135 times carrying hundreds of American and international astronauts into space before they were retired in 2011. The International Space Station began to fly in 1998 and has been continuously occupied since 2001, orbiting the Earth once every 90 minutes. More than 50 women have now flown into space, most of them Americans. One of these women, Dr. Peggy Whitson, became chief of the Astronaut Office and holds the American record for number of hours in space.

The Space Shuttle democratized spaceflight

The Space Shuttle was an amazing flight vehicle: It launched like a rocket into Low Earth Orbit in only eight minutes, and landed softly like a glider after its mission. What is not well known is that the Space Shuttle was an equalizer and enabler, opening up space exploration to a wider population of people from planet Earth.

STS-50 Crew photo with commander Richard N. Richards and pilot Kenneth D. Bowersox, mission specialists Bonnie J. Dunbar, Ellen S. Baker and Carl J. Meade, and payload specialists Lawrence J. DeLucas and Eugene H. Trinh. The photo was taken in front of the Columbia Shuttle, which Dunbar helped to build. NASA

This inclusive approach began in 1972 when Congress and the president approved the Space Shuttle budget and contract. Spacesuits, seats and all crew equipment were initially designed for a larger range of sizes to fit all body types, and the waste management system was modified for females. Unlike earlier vehicles, the Space Shuttle could carry up to eight astronauts at a time. It had a design more similar to an airplane than a small capsule, with two decks, sleeping berths, large laboratories and a galley. It also provided privacy.

I graduated with an engineering degree from the University of Washington in 1971 and, by 1976, I was a young engineer working on the first Space Shuttle, Columbia, with Rockwell International at Edwards Air Force Base, in California. I helped to design and produce the thermal protection system – those heat resistant ceramic tiles – which allowed the shuttle to re-enter the Earth’s atmosphere for up to 100 flights.

Mike Anderson and Bonnie Dunbar flew together on STS-89 in 1998. They both graduated from University of Washington. Anderson was killed in the Columbia accident, in 2003. NASA

It was a heady time; the new space vehicle could carry large crews and “cargo,” including space laboratories and the Hubble Space Telescope. The Shuttle also had a robotic arm and an “airlock” for space walks, both of which were critical to the assembly of the International Space Station.

I knew from my first day at Rockwell that this vehicle had been designed for both men and women. A NASA engineer at the Langley Research Center gave me a very early “heads up” in 1973 that they would eventually select women astronauts for the Space Shuttle. In the 1970s there were visionary men and women in NASA, government and in the general public, who saw a future for more women in science and engineering, and for flying into space. Women were not beating down the door to be included in the Space Shuttle program, we were being invited to be an integral part of a larger grand design for exploring space.

1978: Becoming an astronaut

The selection process for the first class of Space Shuttle astronauts, to include women, opened in 1977. NASA approached the recruitment process with a large and innovative publicity campaign encouraging men and women of all ethnic backgrounds to apply. One of NASA’s recruiters was actress Nichelle Nichols, who played Lt. Uhura on the “Star Trek” series, which was popular at the time. Sally learned about NASA’s astronaut recruitment drive through an announcement, possibly on a job bulletin board, somewhere at Stanford University. Sally had been a talented, nationally ranked tennis player, but her passion was physics. The opportunity to fly into space intrigued her and looked like a challenge and a rewarding career she could embrace.

Sally and I arrived at NASA at the same time in 1978 – she as part of the “TFNG” (“Thirty-Five New Guys”) astronaut class and I as a newly minted mission controller, training to support the Space Shuttle. I had already been in the aerospace industry for several years and had made my choice for “space” at the age of 9 on a cattle ranch in Washington state. I also applied for the 1978 astronaut class, but was not selected until 1980.

Sally and I connected on the Flight Crew Operations co-ed softball team. We both played softball from an early age and were both private pilots, flying our small planes together around southeast Texas. We also often discussed our perspectives on career selection, and how fortunate we were to have teachers and parents and other mentors who encouraged us to study math and science in school – the enabling subjects for becoming an astronaut.

STS-7: June 18, 1983

In January 1978, NASA selected six women into the class of 35 new astronauts to fly on the Space Shuttle. From left to right are Shannon W. Lucid, Ph.D., Margaret Rhea Seddon, M.D., Kathryn D. Sullivan, Ph.D., Judith A. Resnik, Ph.D., Anna L. Fisher, M.D., and Sally K. Ride, Ph.D. NASA

Although Sally was one of six women in the 1978 class, she preferred to be considered one of 35 new astronauts – and to be judged by merit, not gender. It was important to all the women that the bar be as high as it was for the men. From an operational and safety point of view, that was equally important: in an emergency, there are no special allowances for gender or ethnicity, and everyone has to pull their own weight. In fact, it has been said that those first six women were not just qualified, they were more than qualified.

While Sally was honored to be picked as the first woman from her class to fly, she shied away from the limelight. She believed that she flew for all Americans, regardless of gender, but she also understood the expectations placed on her for being selected “first.” As she flew on STS-7, she paid tribute to those who made it possible for her to be there: to her family and teachers, to those who made and operated the Space Shuttle, to her crewmates, and to all of her astronaut classmates, including Dr. Kathy Sullivan, Dr. Rhea Seddon, Dr. Anna Fisher, Dr. Shannon Lucid, and Dr. Judy Resnik (who lost her life on Challenger). With all of the attention, Sally was a gracious “first.” And the launch of STS-7 had a unique celebratory flair: signs around Kennedy Space Center said “Fly Sally Fly,” and John Denver gave a special concert the night before the launch, not far from the launch pad.

Continuing the momentum

One of the topics that Sally and I discussed frequently was why so few young girls were entering into math, technology, science and engineering – which became known as STEM careers in the late 1990s. Both of us had been encouraged and pushed by male and female mentors and “cheerleaders.” By 1972, companies with federal contracts were actively recruiting women engineers. NASA had opened up spaceflight to women in 1978, and was proud of the fact that they were recruiting and training women as astronauts and employing them in engineering and the sciences.

National needs for STEM talent and supportive employment laws were creating an environment in which a young woman who wished to become an aerospace engineer, a physicist, a chemist, a medical doctor, an astronomer or an astrophysicist could do so. One might have thought that Sally’s legendary flight, and those of other women astronauts over the last 35 years, would have inspired a wave of young women (and men) into STEM careers. For example, a girl who was 12 years old and in middle school when Sally flew into space in 1983 would now be 47. If she had a daughter, that daughter might be 25. After two generations, we might have expected a large bow wave of young, energized women entering STEM careers. But this hasn’t happened.

Rather, we have a growing national shortage of engineers and research scientists, which threatens our prosperity and national security. The share of women among engineering graduates grew from 1 percent in 1971 to about 20 percent over 35 years. But women make up 50 percent of the population, so there is room for growth. So what are the “root causes” of this lack of growth?

K-12 STEM education

Many reports have cited deficient K-12 math and science education as contributing to the relatively stagnant graduation rates in STEM careers.

Completing four years of math in high school, along with physics, chemistry and biology, is correlated with later success in science, mathematics and engineering in college. Without this preparation, career options are reduced significantly. Even though I graduated from a small school in rural Washington state, I was able to study algebra, geometry, trigonometry, math analysis, biology, chemistry and physics by the time I graduated. Those were all prerequisites for entry into the University of Washington College of Engineering. Sally had the same preparation before she entered physics.

As part of NASA’s commitment to the next generation of explorers, NASA Ames collaborated with Sally Ride Science to sponsor and host the Sally Ride Science Festival at the NASA Research Park. Hundreds of San Francisco Bay Area girls, their teachers and parents enjoy a fun-filled interactive exploration of science, technology, engineering and mathematics on Sept. 27, 2008. NASA Ames Research Center / Dominic Hart

Although we have many great K-12 schools in the nation, too many schools now struggle to find qualified mathematics and physics teachers. Inspiring an interest in these topics is also key to retention and success. Being excited about a particular subject matter can keep a student engaged even through the tough times. Participation in “informal science education” at museums and camps is becoming instrumental for recruiting students into STEM careers, especially as teachers struggle to find the time in a cramped curriculum to teach math and science.

Research has shown that middle school is a critical period for young boys and girls to establish their attitudes toward math and science, to acquire fundamental skills that form the basis for progression into algebra, geometry and trigonometry, and to develop positive attitudes toward the pursuit of STEM careers. When Dr. Sally Ride retired from NASA, she understood this, and founded Imaginary Lines and, later, Sally Ride Science, to influence career aspirations for middle school girls. She hosted science camps throughout the nation, exposing young women and their parents to a variety of STEM career options. Sally Ride Science continues its outreach through the University of California at San Diego.

Challenging old stereotypes and honoring Sally’s legacy

Sally Ride and Bonnie Dunbar are fighting outdated stereotypes that women are not good at STEM subjects. Creativa Images/shutterstock.com

However, there are still challenges, especially in this social media-steeped society. I and other practicing women engineers have observed that young girls are often influenced by what they perceive “society thinks” of them.

In a recent discussion with an all-girl robotics team competing at NASA, I asked the high school girls if they had support from teachers and parents, and they all said “yes.” But then they asked, “Why doesn’t society support us?” I was puzzled and asked them what they meant. They directed me to the internet, where searches on engineering careers returned story after story describing “hostile work environments.”

Sadly, most of these stories are very old and are often drawn from studies with very small populations. The positive news, from companies, government, universities and such organizations as the National Academy of Engineering, Physics Girl and the Society of Women Engineers, rarely rises to the top of the search results. Currently, companies and laboratories in the U.S. are eager to employ STEM-qualified and inspired women. But many of our young women continue to “opt out.”

Young women are influenced by the media images they see every day. We continue to see decades-old negative stereotypes and poor images of engineers and scientists on television programs and in the movies.

Popular TV celebrities continue to boast on air that they either didn’t like math or struggled with it. Sally Ride Science helps to combat misconceptions and dispel myths by bringing practicing scientists and engineers directly to the students. However, in order to make a more substantial difference, this program and others like it require help from media organizations. The nation depends upon the technology and science produced by our scientists and engineers, but social media, TV hosts, writers and movie script developers rarely reflect this reality. So it may be that, in addition to K-12 challenges in our educational system, the outdated stereotypes portrayed in the media are also discouraging our young women from entering science and engineering careers.

Unlimited opportunities in science and engineering

The reality? More companies than ever are creating family-friendly work environments and competing for female talent. In fact, there is a higher demand from business, government and graduate schools in the U.S. for women engineers and scientists than can be met by the universities.

Both Sally and I had wonderful careers supported by both men and women. NASA was a great work environment and continues to be – the last two astronaut classes have been about 50 percent female.

I think that Sally would be proud of how far the nation has come with respect to women in space, but would also want us to focus on the future challenges for recruiting more women into science and engineering, and to reignite the passion for exploring space.

Bonnie J. Dunbar is a retired NASA astronaut and a TEES Distinguished Research Professor of Aerospace Engineering at Texas A&M University.

This article was originally published on The Conversation, a content partner of Sci Fi Generation.
