In September you are organizing the conference “Divergent values in sustainability assessments: love them, leave them, or change them?”. A core theme of the conference is “ethics in scientific policy advice”. Why do you think the topic of ethics in scientific policy advice is important now?

Martin Kowarsch, expert on Scientific Assessments, Ethics and Public Policy

Scientific expertise is important for informing politics about sustainable development, climate change, environmental problems and solutions. These are challenges that many countries in the world face, which makes it necessary to address them on a global scale. Obviously, a lot of value judgements are involved, based on different cultural viewpoints. At this level, you need to reconcile very different perspectives and take into account divergent normative considerations, which are inherently and inevitably involved in research around sustainability and climate change. Bringing together scientific expertise and different values in order to inform politics in a legitimate manner is a huge challenge. We hope to come up with new ideas, new approaches and new inspiration for how to respond to these divergent values.

As you mentioned, political and ethical values are not the same in every country, but problems like climate change are. How is it possible to reconcile different viewpoints and values and to offer global solutions to global problems?

I’d like to mention three aspects in this respect. First, it is impossible to neatly separate science from its value dimension. This is a standpoint in the philosophy of science that is by now widely shared. So there is no such thing as value-free science; at the very least, epistemic value judgements are involved. Furthermore, many concepts in the social sciences, such as efficiency in economic terms, development or growth, have normative connotations. They are not purely descriptive and certainly have a normative dimension; there are a lot of assumptions in economic, sociological and political models, and these assumptions influence the way we describe the world.

Second, there are interesting practical experiments in which very diverse stakeholder groups are brought together, as in the IPBES process (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services). IPBES focuses particularly on the diversity of values and normative viewpoints and is currently working on an assessment of such divergent values. This is a truly exciting case and experiment. They try to explicitly acknowledge and show diverse perspectives, for example concerning the value of nature in different cultures, religions and economic schools of thought.

Third, just because facts are interpreted differently according to diverse values does not mean that they can never be reconciled or that it is impossible to arrive at objective knowledge. This point is very hard to grasp, not only for politicians and lay people, but also for most researchers. So again: knowledge can still be objective despite being accompanied by ethical and normative values.

Here, I’d like to draw attention to the researcher Schwartz, who conducted dozens of studies in different places around the world and compared different value sets. What he found is that yes, there is diversity, but it is limited to a handful of categories that pop up again and again. There are always the same tensions: between universalism and the protection of one’s own peer group, or between favouring adventure and favouring stability and conservation. These are poles of different dimensions, but they are universal categories that appear to be relevant across very different cultures. This means, at the least, that there are not a million different value categories to reconcile, but far fewer. There are even some things we might share: some basic needs, some ideas about flourishing lives.

These ideas might not be identical, but there are overlaps. Finding these overlaps could be a promising approach to the challenge of globally diverse values, by showing that people can learn from each other and thereby widen the set of values they accept as reasonable. For example, we can expect that people who emphasize tradition and conservative values might over time learn that other sets of values also matter. To make it a bit more tangible: in US politics we have the two big camps of Republican and Democratic voters, the conservative and liberal camps, fighting about scientific credibility around climate change. Cultural aspects and the historical context play into this debate. In fact, the discussion is no longer about science, but about values and about how people interpret facts and knowledge. Obviously, conservative values are very different from liberal ones. Typically, when justifying ambitious global climate change goals, people refer to liberal values such as global equality or protection from harm. One usually forgets to emphasize other values, such as purity and sanctity, which are important from a conservative viewpoint and which also relate to environmental protection, for instance in terms of health issues in the context of air pollution. So there are opportunities to reframe the debate around environmental policies in light of these values and to broaden the scope of the values we take into account.

How should the process of reconciling divergent viewpoints be designed?

I have described this process in a number of publications: the way forward is to co-produce alternative pathways, and this requires collaboration between researchers from the social sciences, the humanities and the natural sciences, in a very interdisciplinary manner, together with different stakeholders from society, such as policy makers and the citizenry. These groups need to collaborate in order to develop a better understanding of future, viable policy alternatives that represent these divergent values and perspectives. It is important to move away from abstract ideological debates and start learning about the practical implications of alternative pathways that represent different viewpoints. This is the only way to facilitate learning: even if people do not change much at the level of fundamental values, they are more likely to acknowledge that there is a variety of values, and this, in turn, facilitates the identification of practical overlaps in terms of concrete political pathways. It might turn out that ideological conflicts do not really matter at the level of concrete policy measures, because you might justify a particular policy not only in terms of liberal values, but also in terms of Republican values.

What makes scientific policy consulting “scientific” and distinguishes it from other forms of consultancy?

A lot of pressure is being put on institutions like the IPCC by the scientific community and by policy makers, who demand credible evidence and credible claims about climate change. When speaking about quality, peer review is still mostly referred to as the core criterion for ensuring it.

What the IPCC did was to focus strongly on peer-reviewed literature and on the peer-review process as the core criterion of scientific quality, and also on bringing together the best experts available from different countries, who collaborate with each other for years; in the case of the IPCC, six or seven years. So quality rests on two things: the focus on peer-reviewed literature, and the credibility of the individual experts, who also control each other through checks and balances. It is a very strict peer-review process within the IPCC, involving both the academic and the policy community.

But there is a crucial problem with this. Many studies are not available as peer-reviewed publications, for example up-to-date data on energy systems or economic data. A lot of important information is published as grey literature. The IPCC is allowed to take this into account, but it has to qualify the level of credibility of these non-peer-reviewed studies. This literature is necessary, but it is of course a trade-off, and it has led to huge problems in the past. In 2010 the IPCC was heavily criticised because of some, in my opinion, minor mistakes based on grey literature. Afterwards, the IPCC made its guidelines stricter and focused even more on peer-reviewed literature.

To overcome this disparity between relevance and “grey” availability, it would be possible to publish such material in a peer-reviewed manner, basically providing peer review for grey literature. This is difficult to do, however, because technical and human resources are lacking on a global scale.

What has to be done in order to address this disparity of resources around the globe?

There should be more capacity building, and Western countries should definitely put more emphasis on it. In terms of human resources, you often hear criticism like: “look at your author list, it’s not balanced, because 70% of your authors come from Western countries”. The IPCC, for example, is trying really hard to be diverse, but this is difficult to realize: you may have hundreds of experts in Germany, the US or the UK, but only two or three in some African countries, and those are usually highly over-committed. They are not only working on climate change but also on water management, biodiversity, chemical pollution and so on; because experts are rare, they end up being experts for everything. Often they don’t have enough time to really engage in the process, so it is again dominated by others from Western countries.

An additional problem is the technical infrastructure. Global scientific assessments involve a lot of online virtual meetings, but in some parts of the world there is no stable internet connection, and such technical obstacles hinder researchers from engaging in the same way as their colleagues from Europe and the US. In addition, there is the problem of access to research literature.

Would an infrastructure for “grey literature”, where these sources would be openly reviewed by peers around the world, be helpful in terms of securing the quality of scientific policy advice?

This definitely sounds helpful; the question is whether we should focus so much on the separation between research literature and “grey literature”. Maybe we should rather envisage a revolution in the way we ensure scientific quality in the academic community and focus much more on online platforms with an ongoing, transparent, open and international review process. Why not think about a process where you publish an article and over time it accumulates more and more reviews; if it doesn’t attract reviews, it might be deemed irrelevant. This way you already have a measure, a proxy for relevance, and reviews can be added over time, so a paper might gather 20 or 200 reviews, with new aspects added later; a year on, you might find new data or discover that some assumptions were flawed, which you couldn’t have known before.
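
As a toy illustration of how such a rolling-review record might be modelled (the class and field names here are invented for illustration, not any existing platform’s API), a minimal Python sketch:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Review:
    reviewer: str
    comment: str
    submitted: date

@dataclass
class Article:
    title: str
    published: date
    reviews: list[Review] = field(default_factory=list)

    def add_review(self, review: Review) -> None:
        # Reviews accumulate after publication instead of gating it.
        self.reviews.append(review)

    def relevance_proxy(self) -> int:
        # The number of reviews serves as a crude proxy for relevance;
        # an article that attracts none scores 0 and might be deemed irrelevant.
        return len(self.reviews)

paper = Article("Carbon pricing case studies", date(2019, 3, 1))
# A year later, a reviewer flags an assumption that turned out to be flawed.
paper.add_review(Review("reviewer-a", "The 2018 energy data is outdated.", date(2020, 3, 1)))
print(paper.relevance_proxy())  # -> 1
```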

Another thing that is extremely important in communication at the science-policy interface is meta-studies and meta-assessments, such as systematic reviews of the available knowledge corpus. In climate change there are around 300 to 500 thousand publications, depending on the estimate, so the body of knowledge is exploding. No single researcher has read even one percent of these publications, so how can we ensure a proper assessment of existing knowledge given such numbers? For example, if you are trying to answer the question “Is carbon pricing a good idea or not?”, you cannot deal with it on a theoretical level only; you also need case studies, policy learning and practical examples from different countries. There are studies and tools available to do this, but hardly anybody is really aggregating them so far. When preparing an assessment, experts just select a few studies, picking sources based on their own information and intuition. So the question of quality assurance is relevant not only at the level of primary scientific publication and knowledge production, but also at the meta-level of aggregating and synthesising knowledge.


Between 2016 and 2021, the UK government is channelling £1.5 billion of aid funds through the Global Challenges Research Fund (GCRF), the beneficiaries of which are primarily UK researchers. They are in turn expected to work in partnership with their counterparts and other stakeholders in the ‘global south’ and, in so doing, contribute to improvements in development outcomes. But from my experience accompanying UK and Southern researchers during a five-year natural hazard risk reduction project, I argue that genuine partnerships, especially between researchers from the UK and those from developing countries (which applies to many countries in the global south), just aren’t possible. Here are four reasons why.

Reason 1: Collaborating for instrumental rather than substantive reasons

Southern counterparts, when approached by their UK partners to work together, are unlikely to refuse. Given that UK research funds are largely accessed through UK research institutions, they simply can’t afford not to. Even if Southern researchers don’t get on with their UK counterparts, are not logistically ready to collaborate with a UK-based organisation or have too much other work to do, the chances are they will still agree to ‘partner’. This creates a problem in the relationship: if, as a UK researcher, you’ve never heard a Southern counterpart say ‘no’, how can you trust them when they say ‘yes’?

Often these relationships are ‘marriages of convenience’. UK researchers need to work with Southern counterparts to legitimate their use of aid funds and to access data, whilst Southern counterparts need the cash as well as external validation. Moreover, a measure of excellence for a Southern research institute might be the number of collaborations it has with North Atlantic (not necessarily international) research institutes. Relationships in which partners genuinely want to explore working together, consider partnering a good thing in itself or believe that doing so will help both partners to be creative are rare.

Reason 2. A lack of genuine deliberation

Key parameters, such as budget and time allocations, are usually set during the proposal development stage. This stage usually takes place under significant time pressure, with decisions made by a small group of researchers in the UK (often geographically close to one another). Southern counterparts rarely have a say during this process. UK researchers usually tell their Southern partners what sort of time allocations they are likely to receive only after the proposal has been submitted. Moreover, few panels convened by research councils to review proposals genuinely care about how UK researchers will approach partnership during the research process.

UK researchers who are in charge of research processes tend to exert a high degree of control. This stems from the anxiety UK researchers feel about the possibility that Southern counterparts may make suggestions that steer the research away from what the former are interested in (and from what was agreed in the funding proposal). This often means holding launch meetings in the UK and unilaterally deciding on the agenda, agreeing on research questions and establishing analytical frameworks without consulting partners, asking for comments only once these have neared completion. When challenged about the lack of participation, research leaders might cite reputational risk, saying they need to look as if they know what they are talking about in their engagement with partners and avoid presenting ‘half-baked’ solutions. Under pressure to comment, collaborators are forced into suggesting changes around the margins, even if they disagree with the underlying fundamentals.

This has a number of knock-on effects on the rest of the process. UK researchers often take an extractive and/or contractual approach: Southern counterparts introduce them to key informants, organise interviews and/or collect and provide data, while UK researchers write up the data, or lead on doing so, which is the most interesting part of the process. Southern counterparts may well agree to this in order to pay salaries and cover overheads. UK researchers usually assume lead authorship of final outputs.

Reason 3. Power asymmetries

Even when Southern counterparts have the opportunity to provide critique or ‘speak up’, they may choose not to, or be conditioned not to, for fear of the consequences: having their role in the research process reduced, or not being approached by UK counterparts in future. A lack of self-confidence, stemming from limited capacities (real or imagined), a fear of making mistakes and a feeling that they might be inherently inferior to their UK counterparts, might also prevent Southern counterparts from speaking up and from taking up, and living up to, responsibilities when offered. Further, Southern counterparts may not be able to converse frankly with UK researchers in English, which may be a second or third language for them.

But if Southern counterparts do speak up, UK counterparts may not be willing to listen and act accordingly – stemming from a feeling of superiority (however subtle), a need to conform to UK academic standards, as well as pressures to complete research projects in short timeframes and publish in leading journals (more on this below). Race, gender and age all play into these dynamics.

On the occasions when Southern counterparts are asked to collect data and write it up into reports, these are usually sent back to UK researchers for comment. But the comments are often numerous and sometimes make little sense to Southern counterparts. Those providing the feedback often have little understanding of the context in which the data was collected or of the constraints the local research team were under. And so they review with an eye to the UK research community, privileging UK quality standards and frameworks over Southern alternatives.

Moreover, UK researchers often believe that their Southern counterparts are in need of ‘capacity development’ (in terms of training, the development of institutions, etc). This may well be the case. But there is very little acknowledgement that UK researchers themselves may be in need of capacity (to understand Southern contexts, develop more inclusive funding processes, etc).

Reason 4. Different notions of impact

Finally, with regard to communication processes, research teams focus on writing papers that can be published in high-impact journals originating in Northern Europe and the US, rather than producing outputs that might be taken up by actors in the Southern counterpart’s society, economy and/or industry. Very little of the research finds its way back to the people and communities where the data was collected. Societal engagement is positioned as being in opposition to “objective” and “international” measures of “excellence”.

In sum, I think we need to be honest about the type of relationship that UK researchers have with their Southern counterparts. And in many cases, partnership is not the word I would use to describe them. However, if UK researchers are genuinely interested in embracing partnership with their Southern collaborators they could start by considering these eight suggestions.


Why do we need scientific advice for cities based on urban research? Why is it not only a local, but also a global matter?

Scientific advice for cities cannot be seen as a purely local issue, particularly because sustainable development is itself a global matter and needs a common approach. There is quite a long list of grand challenges of the United Nations and respective initiatives to tackle them: the framework on natural disaster reduction, the UN Cape Town plan on data, the New Urban Agenda and all of the SDGs. In all of these it is very clearly communicated that there is a strong demand for effective and plausible scientific evidence upon which to act; and it has to be applicable in different parts of the world. Naturally, global-scale agendas require knowledge from the local level: from city administrations, local governments and local actors. For example, if we aim to reduce the number of deaths from car accidents, we need to bring together knowledge about the situation at the local level in extremely diverse cities. In practice, there are hundreds of formalised networks of cities that ensure such cooperation, producing evidence about cities and then trying to place it in global affairs.

These networks work pretty well within their domains. Sometimes they serve as a resource for addressing global problems that some cities were slow to address themselves, or that hadn’t previously been on their agenda. In a sense, these networks are proof that cities can work on things collaboratively at an international scale. And this is true for scientific collaboration as well, especially when it comes to getting people together and acting upon evidence. A good example is the C40 network, which is committed to addressing climate change.

Nearly 30 percent of city networks (29%, to be precise; Acuto, 2016) work on environmental questions, primarily climate change, an issue that has been strongly connected to scientific consensus and scientific processes. The IPCC (Intergovernmental Panel on Climate Change), for example, shows that the whole conversation about keeping global warming below 1.5 degrees is a scientifically infused one.

You are an expert on urban planning. Based on your experience, what are the main problems/impediments that you encounter when working on a global scope (with international partners)?

Michele Acuto, expert on urban politics and international urban planning.

We just finished an international expert panel on “Science and the Future of Cities”, supported by Nature Sustainability, on this theme. There, a selected group of 30 eminent scientists from the Global North and South discussed the role of science in implementing global commitments like the SDGs and the New Urban Agenda, and you could clearly hear in the panel discussions (Acuto et al., 2018) how policymakers struggle to rely on the sciences, and how scientists struggle to be policy relevant while maintaining some degree of independence, without starting to act as consultants and lobbyists. We organized this panel to define mechanisms of cooperation at the science-policy interface and to find out how we could actually work together.

Now, the key thing that came out is that the availability of science is not the problem; rather, the distribution of scientific output is unbalanced, so there is a massive and quite substantial capacity difference between the Global South and the Global North. In many cases relevant data exists, but it is owned or produced by the private sector. Then some countries have world-class universities that are not connected to their city governments, which are on average situated four or five kilometres down the road. So the biggest problem in linking urban research to politics is the uneven development of the sciences, but also the machinery of putting science into policy and the interplay between public and private.

In one of your articles, you speak about the importance of data-driven decisions in urban planning and of data exchange between countries. Are the necessary facilities – e.g. a suitable infrastructure – in place to enable such cooperation?

It is quite clear that there is an appetite for data. We call it “the informed cities paradigm” (Acuto, 2018), as in the classic line from Michael Bloomberg: “If you can’t measure it, you can’t manage it”. Now, the question is: do cities have the capacity to a) produce data and b) do something with it, meaning analyze and mobilize the data, or c) are we still very dependent on the private sector providing consultancy information? There are some great examples proving that it is possible to produce and work with data without commercial actors: Chicago, Seoul and Melbourne have opened up their data. But in reality most cities don’t have that data capacity, so they rely on something else, and that something else tends to be consultancy-driven, which brings a market perspective into the game.

The logical solution to this dilemma is building better data capacities within cities, but that is not enough. Another solution is introducing better science-policy mechanisms or structures. For instance, we talked about the possibility of appointing chief scientific advisers or setting up urban observatories. A great example is UN-Habitat’s Global Urban Observatory (GUO); many urban observatories have been set up in cities around the world, such as Mexico City’s Laboratorio para la Ciudad or Johannesburg’s Gauteng City-Region Observatory. It is a costly option, but very effective, especially when a formalised partnership is also organized with the closest university, which relies on the city for data collection and so on. The trick behind this endeavour is that if you connect to one university, it will most likely be linked to other universities, so you are tapping into a network of networks.

Your activity goes beyond academic research: you are quite engaged in third-mission activities that connect science with other societal groups (e.g. political decision makers). Which stakeholders from the private sector, civil society and politics are involved in this process? What are the main challenges for scientific policy advice on the international scale when dealing with global issues like climate change?

It’s a mix of academics, private sector initiatives and even philanthropic actors. Often you don’t notice the academics explicitly and right away, because researchers in such cases start playing a bit of a consultancy role. There are some very tangible examples. For instance, the C40 Cities 2015 reports Climate Action in Megacities and Powering Climate Action, developed ahead of the Paris conference that led to the latest agreement on climate change, involved such a university–private sector–cities collaborative element. It was led by Arup as a consultancy firm, managed by the C40 secretariat as an internationally funded NGO, with research from my laboratory and Arup’s Climate and Energy group, reaching out to cities and businesses on the ground. I would argue the report itself was still academically sound, although other non-academic stakeholders were involved, and most importantly it put emphasis on evidence-based decision-making in cities. The best way to provide politics with expertise is to organize such multi-stakeholder platforms, in which representatives of the private sector, philanthropies like the Ford or Bloomberg foundations, government officials, scientists and other relevant parties are involved. The trick is finding a suitable format to put them all together and start working towards a shared objective without betraying principles of scientific accountability and social values.

‘Hello, I am here to help!’

For several afternoons as an undergraduate student in New Delhi, I would take a bus to a bustling market and settle into a small office of the Childline counseling service. I was trained as a volunteer responder and counselor for children in distress. All afternoon, the phone would ring. It would usually be a child suffering from exam stress or anxiety, and a few tips would go a long way. Occasionally a child would report being in danger, and we followed a protocoled plan of action to speedily ensure they were brought to safety. That was in 1999, and each incident left me wondering what more I could do.

Twenty years later, I am recounting this at my university lab in the United States. I came to this country several years ago to study human neuroscience: to understand how our brains help us interpret the world around us, and in turn how the world around us influences our brains. Along the way, while studying the basic sciences, I got impatient. My experiences had trained me to say, ‘Hello! I am here to help!’

So how would I transform my newfound knowledge to help others?

A few years into my scientific training, I discovered my answer in the pursuit of the translational sciences. Translation is the bridge from bench to bedside. It is the exciting intersection at which we can improve the state of the art in the clinical sciences by using new discoveries in the basic sciences. There are several steps in the translational pathway: testing new assays, drugs, technologies and interventions in animal models, testing in humans, testing for safety and feasibility, then testing against placebo controls, testing against standard of care, and testing in large-sample data, all to convincingly demonstrate efficacy. Once efficacy is established in translational research, implementation science kicks in to integrate a new method or practice within real-world care and systematically determine its effectiveness.

Each step of science is challenging. The systematic progress from basic science to translation to implementation can take several years. Yet it is important to consider that advances in each domain can be accelerated if they are not carried out in silos. For instance, a translational study may make it evident that a new basic approach has low feasibility in the clinical setting, and so the scientist must go back to the drawing board in pursuit of a new basic invention. As the basic scientist or engineer develops a new approach, he or she would benefit from keeping communication open with the clinical researcher. There is a notion that innovators working alone are very good at creating inventions that no one needs! Instead, scientists should be innovators who solve real-world problems, constantly recalling ‘I am here to help!’

What does all this have to do with global science?

Global science is where the boundaries of basic, translational and implementation science blur the most. In global science, researchers collaborate across continents to generate new basic and applied scientific understanding for global citizens. While most science must build upon the science that came before it, many times global scientists have to solve problems at entirely new frontiers. For instance, human studies in the Western world are predominantly WEIRD: the study populations are primarily people from Western, educated, industrialized, rich and democratic societies (Henrich et al., Behavioral & Brain Sciences, 2010). The humans in these studies represent only 12 percent of the world’s population; they are not only unrepresentative of humans as a species, but on many measures they may be outliers. Hence, the assumptions of prior basic science must be re-confirmed within a new global setting.

With regard to translation, a new method or approach may not be feasible in the global setting, not only because it is being tested within a new human cultural context, but also because it may not be cost-effective. For instance, a single brain scan using magnetic resonance imaging (MRI) can cost hundreds of dollars; using this method in a large-scale brain study in a low-income country is not feasible. Yet there are other brain monitoring methods, such as EEG (electroencephalography), that cost one-thousandth as much as an MRI scanner. EEG technology may have utility, as well as potential for scalability, in the global setting for brain assessments and therapies. Some of my own research investigates this, and I have learned that we need to make hand-in-hand research advances at both the basic and the translational level. For instance, we need basic advances in engineering algorithms for processing EEG-derived brain data in real-world settings, as well as translational applications for use in the context of real-world problems faced by global humans.

Global science is also local

At this point, it is important to emphasize that global science also applies to our local communities. By this, we understand that globalizing science does not just mean unifying scientific inquiry across the far corners of the world. In any case, that is hard to accomplish without a strong network of motivated collaborators. Yet within our own country we find several pockets of socio-economic disparity. The same problems and challenges that plague the developing world are also in our own backyard. So whether they are global citizens of a country far away or our local neighbors, we need to ensure that our scientific methods are inclusive of the diversity around us. This means actively recruiting participants from under-represented backgrounds in our studies and ensuring it is practical for them to participate. This again has implications for engineering new technologies that are scalable for greater community inclusion: if the community cannot come to the science lab, the science lab must go and immerse itself in the community. In building community-science relations, it is also important to bring in community leaders as partners and to consider what the community is willing to participate in and what may be an undue burden. For instance, a community that has suffered specific stressors or past trauma may be sensitive to how trauma-related questions are asked. Thus, the research design of community science is best shaped by a mutual understanding between academic scientists and community representatives. Notably, the implications of successful community science research are far-reaching, in that insights from a study conducted in the global context can be used to solve local problems, and vice versa. Indeed, community science can be powerful in providing both local and global solutions.

A big hurdle

Finally, I want to touch upon one of the biggest hurdles to globalizing science: dedicated funding. Thanks to funding agencies such as the National Institutes of Health’s Fogarty International Center, I was able to carry out translational research in a global mental health setting. I contributed to the design of novel digital technologies that can enhance brain function and cognition, and successfully demonstrated their utility and efficacy in children and adolescents in urban New Delhi. These studies have systematically included children from all socio-economic strata, even those with a history of neglect and homelessness. I consider making a sustainable difference in these lives my biggest research accomplishment to date. Global research funding enabled this work and allowed the collaborations across multiple international institutions to function like a well-oiled, integrated machine.

Now I seek funding to scale up this initial research in both global and local settings. This means thoughtfully tailoring the research design to the community in question. It further involves breaking down some of the barriers between basic, translational and implementation science: basic technology needs to be refined and translated to the outcomes most relevant to the communities in the new studies. Garnering this funding support has been a challenging endeavor. This is the case for all scientific funding, yet global science especially suffers from pilotitis, i.e. the inability to scale up beyond the initial study. The funding agency is often looking for a clear-cut and simply implementable solution to a global problem, when the reality is that each successful global science project, no matter the scale, is a complex yet systematically definable blend of the basic, translational and implementation sciences. To globalize science, we need to break down traditional research silos, and scientists, community representatives and funding agencies must align. While global science funding remains a roller-coaster challenge, to stay inspired and persistent I always remind myself, ‘I am here to help!’


The Web was created as a coordination and cooperation tool for scientists. Subsequently, it had a revolutionary impact on almost all aspects of our lives. The rise of a “network society” did, however, in the end have only a minor effect on the forms of organising within the scientific community. Its paradigm of scientific communication and cooperation between a scholar and a publisher dates back to the early 17th century.
This essay is based on a lecture (Etzrodt, 2019a) held during the “Future of Science” session of the Marie Curie Alumni Association’s annual conference at the University of Vienna in February 2019. It highlights the historical roots of today’s challenges in scholarly communication and how open access movements have begun to address them. It also proposes how distributed Web technologies (such as IPFS and Ethereum) are poised to enable an entirely new way of communication and cooperation among scientists and citizens. This may lead to the long-sought cultural change within the scientific community, which may finally furnish us with the tools required to tackle the world’s most pressing challenges.

The Web for Scientists? – It does not yet exist!

In his initial proposal, Tim Berners-Lee envisaged the Web as a tool to enable improved communication through “computer conferencing”. At CERN, the world’s largest particle physics laboratory near Geneva, Switzerland, the “Information Management system based on a distributed hypertext system” was intended to help improve the coordination and operation of the site’s complex research projects.
The Web, like the Internet, was rapidly adopted by academia. It is “a medium for collaboration and interaction between individuals and their computers without regard for geographic location” (Leiner et al., 1997), but it has not yet unfolded its true potential for science: to become “a pool of information to develop which could grow and evolve”, as Tim Berners-Lee proposed (Berners-Lee, 1990). For science, this shortfall is rooted in the strong interdependency between scholars, publishers and funders.
More generally, the Web lacks ways for individuals to ‘own’ a strong, ‘self-sovereign’ identity. Consequently, we continue to rely on third parties for attribution (of scientific work) and for the transfer of value (banks and, for science, public funding agencies). The network effects, along with the massive cost reductions the Web created, have today made third parties such as publishers even stronger.

The Latin inscription “non solus” of the Dutch publisher Elsevier, introduced in the 16th century, rings even more true today. The need for scientists to be accountable to funding agencies, especially since the 1970s, led to the introduction of performance metrics for scientific output and has created a complex dependency among scholars, publishers and the public institutions that fund most research, as recently outlined by Melinda Baldwin (2018).
Today this leads to gridlock: it hinders the cooperation of scientists and limits the scaling of the number of participants in research projects. The requirement in many fields to be “first author” of at least a few studies does not allow larger, collaborative teams to evolve. Yet the increasingly complex problems we face today clearly call for advanced technologies that allow the coordination and scaling of scientific organising.

The triangle of money – and dependencies in science.

The emergence of “decentralised science”

Notable exceptions to small-scale efforts in science are areas where funding constraints, as in astronomy, space exploration and experimental physics, “force” the scientific community to work more closely together. This often goes along with alliances of scientists and the military, outlined for instance as early as Vannevar Bush’s 1945 essay “As We May Think”, where he writes: “For the biologists, and particularly for the medical scientists, there can be little indecision, for their war has hardly required them to leave the old paths. Many indeed have been able to carry on their war research in their familiar peacetime laboratories. Their objectives remain much the same. It is the physicists who have been thrown most violently off stride, who have left academic pursuits for the making of strange destructive gadgets, who have had to devise new methods for their unanticipated assignments. They have done their part on the devices that made it possible to turn back the enemy, have worked in combined effort with the physicists of our allies. They have felt within themselves the stir of achievement. They have been part of a great team. Now, as peace approaches, one asks where they will find objectives worthy of their best.”

The former Low Energy Antiproton Ring at CERN. Experimental Physics is one of the fields in science that has a long tradition of collective collaborative scientific activities.

Putting the negative connotations of a military-scientific complex aside, it is especially unfortunate and surprising that we do not yet find massive, globally acting consortia that, for instance, approach the search for cancer cures in an open and collaborative way, especially given the large and, for decades, increasing amounts of funding in this area. For instance, not a single accepted, reliable resource exists for cutting-edge updates on research into specific types of cancer. Even if divergent in approaches and opinions, I believe such common resources could and should already have emerged on the Web.

Instead, we find that scientific fields remain “united in fragmentation”, organising their research efforts in mostly independent, small-scale teams that risk reinventing the wheel for lack of efficient, state-of-the-art means of coordination. In countless university hospitals around the world, comparably small laboratories are tackling almost the same challenging problems but can share results only with delay, and may sometimes even feel tempted to keep essential details of a method secret, for exploitation in patents or to publish yet another paper.

I argue that this is both an organisational and a cultural phenomenon. Historically, there has been a desire to be “first to discover”. On the latter, Thomas Kuhn describes nicely in his seminal work The Structure of Scientific Revolutions how Antoine Lavoisier sent a sealed note reporting his discovery of oxygen to the French Academy of Sciences, earning him time to work on a more detailed elaboration of his finding.
It is certainly desirable to be able, when the need arises, to trace a discovery back to the individual who made it. Unfortunately, scientific journals have exacerbated the secrecy around, and the delayed full disclosure of, new findings because they demand publication of “original work”. “Novelty” then becomes synonymous with “high impact”, and this impact is the metric funders use to justify their allocation of funds. A misunderstood concept of “intellectual property” and the perceived need to “patent”, counterintuitively incentivised not seldom by publicly funded academic institutions, lead scientists to engage in lengthy patenting processes.
Finally, all this has an impact on which scholars get selected at our institutions today: those prevail who are “loud and first”, not those who may be “diligent and right”.

Remarkable counter-examples, on the other hand, are research institutes such as the Allen Institutes. Here “team science” is celebrated, and collaborative, interdisciplinary teams tackle specific, well-defined problems. Even though it is undoubtedly important to permit scientists to independently choose their area of research and to allow solitary, independent work, I believe team science will rapidly outpace many traditional laboratory setups that continue to operate haphazardly on hierarchical and feudalistic organisational paradigms.
In the future, forms of decentralised science will emerge (Etzrodt, 2018). Decentrally operating organizations, as described below, will make it possible to attract a much larger number of minds to a specific scientific challenge, so that each individual can work on the problem for which they are best suited, no matter where they are on the planet.
The blame for the current situation does not lie solely with the publishers; it also lies in the trained operational and cultural norms of scientists. I am a strong advocate for a cultural as much as a technological revolution. Cryptonetworking technologies, as I will highlight in the next part, can do things the Web has been unable to do for us to date, allowing us to move beyond the traditional paper and enabling new forms of attribution, and thus of collaboration and funding, in science.

From “Denkkollektiv” (thought collective) to Collective Intelligence

Scientific data and knowledge are “anti-rival goods”: according to the definition of Lawrence Lessig and Steven Weber, their “free dissemination increases their value and exceeds the profit gained by few through copyright or IP”. Today, artificial limitations often transform scientific results into competing products. Publishers do so in three ways: by restricting the amount of space granted for publication, by charging publishing fees and, in many cases, by charging access fees that are still too high. This excludes a large number of people from contributing to and benefiting from the body of scientific knowledge.

When scientists search for a solution to a long-standing question, shouldn’t they be able to communicate their most recent breakthroughs in a direct and rapid fashion? Shouldn’t they be able to approach fellow scientists, or perhaps be pointed to a matching collaboration partner who has already found a solution that could advance their own efforts? I would argue that for several pressing challenges it is an ethical necessity to share information and cooperate as effectively as possible.
The science theorist and physician Ludwik Fleck described in his seminal 1935 work “Entstehung und Entwicklung einer wissenschaftlichen Tatsache” (Genesis and Development of a Scientific Fact) that scientific research is collective work: not a formal construct, but strongly dependent on the context within scientific communities. Such “thought collectives” change scientific ideas over time. Scientists need to be in constant communication, as highlighted by Michael Polanyi (1962) in his essay “The Republic of Science”, where he states that the “activities of scientists are coordinated and rely on constant communication. Adjustment […] occurs in response to the results obtained by others”.
Today, we define collective intelligence as the ability of a group of people to extend their individual capabilities through cooperation and to reach a consensus on a task that can be more complex than the sum of the individuals’ contributions. The Web has greatly facilitated and vastly accelerated the emergence of collective intelligence. It permits effective ad hoc communication through computer networks in real time, at a global scale and at minimal cost. Manuel Castells points out that “new information technologies are not simply tools to be applied, but processes to be developed” (Castells, 2000). Thirty years after the Web’s creation, we face a crisis in the “social” Web as much as in the “Web for science”. To evolve towards the “pool of information to develop which could grow and evolve” proposed by Tim Berners-Lee, we need to introduce the technological improvements the distributed Web offers. This will include the ability to create decentralized identifiers (DIDs) and to prove ownership of a scientific contribution through cryptographic signatures.
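
As a rough sketch of the signature half of that idea (not a full DID implementation; it assumes the third-party `ecdsa` Python package), a researcher signs the hash of a contribution with a private key, and anyone holding the matching public key can verify the attribution:

```python
import hashlib
from ecdsa import SigningKey, SECP256k1

# The private key never leaves the researcher; the public key acts as
# a shareable, self-sovereign identifier.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

contribution = b"Dataset, analysis and interpretation of experiment 42"
digest = hashlib.sha256(contribution).digest()

signature = private_key.sign(digest)

# Any third party can check that the holder of the private key signed
# exactly this content: attribution without an intermediary.
assert public_key.verify(signature, digest)
```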

Building the new home of science: Self-sovereign publishing

The recent advent of blockchain technology first solved the double-spending problem, Bitcoin’s fundamental achievement. Now cryptocurrencies enable direct, peer-to-peer transactions. Notably, such “value” does not need to be monetary or “fungible”. It is not just banks as intermediaries that are threatened: the same technology allows us to “own” a cryptographically provable “address”, a form of proto-identity on the Web that lets us claim ownership and create transactions, including transactions of “knowledge”. This is the birth of a self-sovereign and collective publication medium. In his work on the economy of insight, the cyberneticist Paul Pangaro (2011) formulated the idea of “conversations as transactions”, meaning that “wealth creation has shifted from prior knowledge to the ability to gain knowledge-in-action”.

Distributed ledger technologies (DLTs) and peer-to-peer file storage networks such as IPFS now enable the immediate, permanent publication of any of our results and ensure direct attribution of the work. The stored data, timestamps and ID information are largely interoperable on the distributed Web, and we can create binding agreements, “smart contracts”, that execute as specified once previously defined conditions are met. This greatly reduces administrative overhead and the need for institutional intermediaries. For example, it may become possible to make peer-to-peer fund allocations among individual scientists: we could release a grant payment according to previously defined conditions, or incentivise the creation of a prototype for which funds are released following proof of a functional implementation. DLTs are the missing link that many previous proposals for scientific publishing on the Web lacked (for instance Wasserman, 2012 and Noll, 2009). This approach is also well compatible with preprint servers and even extends their use.
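
To make the conditional-payment idea concrete, here is a plain-Python sketch of the escrow logic such a smart contract could encode; this is illustrative logic only, not Solidity and not any deployed contract, and all names are invented:

```python
class GrantEscrow:
    """Toy model of an on-chain escrow: the funder locks a payment that is
    released only once the agreed deliverable, identified by its content
    hash, is submitted."""

    def __init__(self, funder: str, scientist: str, amount: int, expected_hash: str):
        self.funder = funder
        self.scientist = scientist
        self.amount = amount
        self.expected_hash = expected_hash  # condition fixed when funds are locked
        self.released = False

    def submit_deliverable(self, content_hash: str) -> bool:
        # Funds move if and only if the submitted work matches the agreed condition.
        if not self.released and content_hash == self.expected_hash:
            self.released = True
        return self.released

escrow = GrantEscrow("funding-agency", "lab-42", 50_000, expected_hash="ab12cd")
print(escrow.submit_deliverable("ffff"))    # False: wrong deliverable, nothing released
print(escrow.submit_deliverable("ab12cd"))  # True: condition met, payment released
```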

In 2017, Dr. Sebastian Bürgel and I performed a proof-of-concept experiment using scientific microscopy data I had obtained and analysed for my research at the Biosystems Science and Engineering Department of ETH Zurich. We published the data on the InterPlanetary File System (IPFS), while storing the hashes of the work in an Ethereum blockchain transaction to obtain a timestamp and attributability.

Raw data, analysis scripts and the interpretation of a scientific experiment can be made immutable, attributable and shareable.
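
A minimal sketch of the core of such an experiment, assuming only local hashing; the IPFS upload and the Ethereum transaction are indicated in comments, since the exact client calls depend on the tooling used, and the file names here are hypothetical:

```python
import hashlib
import json

def content_hash(path: str) -> str:
    # SHA-256 fingerprint of the raw bytes: identical content yields an
    # identical hash, so the record below pins the exact files used.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

record = {
    "raw_data": content_hash("microscopy_raw.tif"),
    "analysis_script": content_hash("analysis.py"),
    "interpretation": content_hash("results.pdf"),
}

# Step 1: add the files to IPFS, which addresses them by content hash.
# Step 2: embed json.dumps(record) in the data field of an Ethereum
#         transaction; the block timestamp then proves the work existed
#         at that moment, and the sending address provides attribution.
print(json.dumps(record, indent=2))
```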

A few months later I discovered that the AKASHA project has been using the very same components, Ethereum and IPFS, since 2015 to create a much more advanced and elaborate self-sovereign, censorship-resistant “social media” publishing platform. It is this fundamentally new technology that I believe will revolutionise the entire, more than 300-year-old approach to creating and sharing knowledge.
Self-sovereign publishing will enable what Polanyi (1962) proposed when he said that “The pursuit of science by independent, self-coordinated initiatives assures the most efficient possible organisation of scientific progress”.
Scientific publishers and universities have coordinated these processes for some centuries, but we are now able to hand this authority back to individual scientists. This does not invalidate the operational status of a university, nor of a publisher; both will likely be able to reform themselves. But it will change the way scientists interact with each other, and the value that scientists and publishers will deliver, or that the public can actually demand.

From journals to distributed autonomous organisations (DAOs), token curated registries (TCRs) & bonding curves

Distributed ledger technologies, by helping to democratize access to science and by making science funding, communication and translation more efficient, may, like the Web, find their first key applications in science and knowledge creation. To date, it is not clear which of the economic models currently being explored will persist; many applications are only “games”. But this should not diminish their potential future impact, because “Play made the Modern World”.

In the early days (2016/17), “initial coin offerings” (ICOs) were proposed and implemented as a means of crowdfunding research and peer review of scientific projects (Bartling et al., 2017), for instance through the Mindfire Foundation or the sequencing of the Cannabis DNA.

We may need to understand science funding, communication and governance as a single complex, though, through which new types of very interesting organisations can be formed. Perhaps we can finally rebuild the “independent and self-coordinated initiatives” that Polanyi (1962) defined as the most efficient form of organization for scientific progress.

We may understand scientific journals as nothing other than curated lists of content. This notion is well known in crypto-economics, which aims to embrace content curation through so-called token curated registries. Based on a universal “Open Access Platform” for scientific data, outlined by Etzrodt (2019b) and also proposed by several authors in this living document, a carefully maintained list of new scientific content relevant to a particular area could easily be created. Good content curators may gain followers and thus a certain reputation. Scientific associations, laboratories and so on could all publish their own curated lists, which potentially overlap and differ in stringency depending on their origin’s trustworthiness. Given that we can obtain additional metrics on each scientific work, such as the number of times it was accessed, downloaded, cited or, even better, reproduced, we could create increasingly accurate lists, as the sketch below illustrates. Subscribers to such lists could then choose to filter and compare different outlets created on a common, open data stream.
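
A minimal sketch of how such metric-weighted curation could rank entries on an open list (the weights and field names are invented for illustration):

```python
# Hypothetical usage metrics attached to each work on a shared open platform.
works = [
    {"title": "Paper A", "accessed": 900, "cited": 12, "reproduced": 0},
    {"title": "Paper B", "accessed": 150, "cited": 4, "reproduced": 3},
]

def curation_score(work: dict) -> float:
    # Reproductions weigh far more than accesses or citations, since an
    # independent reproduction is the strongest quality signal available.
    return 0.01 * work["accessed"] + 1.0 * work["cited"] + 25.0 * work["reproduced"]

# A curated list is then just a filtered, ranked view over the common data
# stream; different curators can apply different weights or thresholds.
for work in sorted(works, key=curation_score, reverse=True):
    print(work["title"], round(curation_score(work), 1))
```

Paper B, with three independent reproductions, outranks Paper A despite far fewer accesses; a different curator could of course weight the same open metrics differently.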

Platforms like AKASHA that provide means for self-sovereign publication may be used to add comments, insights or critical results to a growing body of truly peer-to-peer created scientific knowledge. Any addition, even a small “micro-contribution”, may be valuable for the discovery process. Today, such information often gets lost, and both good ideas and reports on failed experiments go uncommunicated. While this sounds overwhelming at first, Michael Nielsen points out that even with little effort and a classical Web 2.0 approach, surprising results can be obtained. One example is Timothy Gowers’s Polymath blog, which to this date remains a simple WordPress-powered blog. Anyone can propose a mathematical problem on this blog, and numerous problems have so far been resolved in a collaborative fashion.

Efficient funding mechanisms, combined with a trusted communication system that allows different new means of curating content, may ultimately shape entirely new forms of scientific organisation. Scientific distributed autonomous organisations (Science DAOs), first proposed by Soenke Bartling (2018), may emerge. These organisations will likely form around a specific scientific challenge and facilitate coordination among scholars and the public. Private and governmental funders as well as individuals may invest in Science DAOs and foster the creation of great research collaborations that can scale globally. These massive research organisations may not be operated by a single hierarchical administrative layer, but through a distributed network of financial and intellectual contributors.


New digital research infrastructures and the advent of online distribution channels are changing the realities of scientific knowledge creation and dissemination. Yet the measurement of scientific impact that funders, policy makers and research organizations perpetuate fails to sufficiently recognize these developments. This situation leaves many researchers stranded, as evaluation criteria are often at odds with the reality of knowledge creation and good scientific practice. We argue that a debate is urgently needed to redefine what constitutes scientific impact in light of open scholarship, that is, scholarship that makes the best use of digital technology to make research more efficient, reproducible and accessible. To this end, we present four ongoing systemic shifts in scholarly practice that common impact measures fail to recognize.

Shift one: Impact originates from collaboration

The increasing number of co-authors in almost every scientific field and the rising incidence of hyperauthorship (articles with several hundred authors) (Cronin 2001) suggest that meaningful insights can often only be generated by a complex combination of expertise. This is supported by the fact that interdisciplinary collaborations are associated with higher impact (Chen et al. 2015, Yegros-Yegros et al. 2015). Research is increasingly becoming a collaborative enterprise.

The authorship of scientific articles, which is the conceptual basis for most common impact metrics, fails to convey the qualitative contribution of a researcher to complex research insights. A long list of authors tells a reader little about the contribution of an individual researcher to a project. This becomes apparent even in small groups: last year the journal Forum of Mathematics, Pi, published a formal proof of the 400-year-old Kepler conjecture (Hales et al., 2017). The paper lists 22 authors. While we can attribute the original problem to Kepler, it is impossible to tell what each of the 22 authors actually contributed. In the experimental and empirical sciences, papers with more than 1,000 authors are not uncommon. In these instances, the published article is an inadequate form to capture complex forms of collaboration and to distill the individual contribution and impact of a researcher.

At the same time, there are projects that are conducted by a small number of researchers or even a single author, and the single-authored article or book remains commonplace in many fields, especially the humanities. It is plainly nonsensical to assess the contribution of an author with dozens or hundreds of co-authors the same way we assess the work of a single author. But that is exactly what Google Scholar does when it shows lifetime citation numbers, which are not discounted by the number of co-authors, or the h-index, which does not differentiate whether a paper is single-authored or has 1,000 authors. By subsuming different levels of contribution and forms of expertise under the umbrella concept of authorship, we compare apples with oranges.
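One frequently discussed corrective is fractional counting, which divides each paper's citations by its number of co-authors. The sketch below uses made-up numbers to show how differently three papers with identical raw citation counts would be credited to one of their authors; it illustrates the general idea, not any metric actually used by Google Scholar.

```python
# Sketch of fractional citation counting with invented publication records.
# Raw totals treat a single-authored book and a hyperauthored article alike;
# fractional counting divides credit among co-authors.

papers = [
    {"citations": 300, "n_authors": 1},     # single-authored book
    {"citations": 300, "n_authors": 22},    # e.g. a large formal proof
    {"citations": 300, "n_authors": 1000},  # hyperauthored article
]

raw_total = sum(p["citations"] for p in papers)
fractional_total = sum(p["citations"] / p["n_authors"] for p in papers)

print(f"raw total: {raw_total}")                    # 900
print(f"fractional total: {fractional_total:.1f}")  # 313.9
```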

Our understanding of impact dilutes the idea of authorship and fails to capture meaningful collaborations.

Shift two: Impact comes in different shapes

Researchers increasingly produce results that come in forms other than articles or books. They produce scientific software that allows others to do their research, they publish datasets that lay the foundation for entire research projects, or they develop online resources like platforms, methodological resources, or explanatory videos that can play a considerable role in their respective fields. In other words: Research outputs are becoming increasingly diverse.

While many researchers have considerable impact with outputs other than research articles or books, our conventional understanding of impact fails to record and reward this. Take Max Roser as an example, the economist behind the platform Our World in Data, which shows how living conditions are changing over time. The platform is a popular resource for researchers and journalists alike. Roser has an avid Twitter following and is a sought-after expert on global developments. His academic work clearly has societal impact. Judged by conventional impact metrics, however, his impact is relatively small. Another example is the programming language R, which benefits from the work academics put into it. The versatility of the available packages has contributed to R's popularity among data analysts, in and outside of the academic system. However, the undeniable value that a researcher creates when programming a popular piece of software (or, more generally, contributing to the development of critical research infrastructure) is not captured by our understanding of impact. Scholars who invest time and effort in alternative products or even public goods (as in the case of R) face disadvantages when it comes to the assessment of their work and, ultimately, career progression.

For this reason, researchers are compelled to produce scientific outputs that are in line with mainstream measures of impact, for example, the number of articles published in specific outlets or the number of citations, despite the fact that many peer-reviewed articles receive marginal attention. Larivière and colleagues found that 82% of articles from the humanities, 27% of natural science articles, and 32% of social science articles remain uncited, even five years after publication (Larivière et al., 2009). At the same time, researchers are deterred from other meaningful activities and motivated to withhold potentially useful research products in order to maximize the number of articles they can publish (Fecher et al., 2017).

Our understanding of impact perpetuates an analogue mainstream, neglects the diverse forms of impact and scientific work, and demotivates innovation.

Shift three: Impact is dynamic

We live in a world in which our biggest encyclopedia is updated seconds after important news breaks. Research products are basically information goods and are therefore likewise prone to constant change (e.g., tables and graphs that are updated with live data, blog posts that are revised). Even a conventional article, a seemingly static product, changes in the publication process as reviewers and editors ask for clarifications, additional data, or a different methodological approach.

Traditional impact measures fail to capture the dynamic nature of novel scholarly products; many of them are not even considered citable. For example, the radiologist Sönke Bartling maintained a living document that covered the opportunities blockchain technology holds for the scientific community. With the attention the technology received, Bartling's frequently updated document attracted considerable interest from researchers and policymakers. His work certainly had impact, as he maintained a key resource on a novel technology. However, Bartling stopped updating the document when he came across several instances in which authors had copied aspects of his document without referencing it.

The web allows researchers to produce and maintain dynamic products that can be updated and changed regularly. The traditional measurement of scientific impact, however, expects academic outputs to remain static: once an article is accepted for publication, it becomes a fixed object. If a change is needed, it is published separately in a specific section of a journal, the “errata”, Latin for “errors”. Thus, the only way to update a traditional journal publication is by publicly admitting to an error (see Rohrer 2018).

Our understanding of impact neglects the dynamic nature of research and research outputs.

Open Scholarship as a framework for impact assessment

While it seems impossible to capture the full picture of research impact, it is absurd that we are neglecting valid and important pathways to scientific and societal impact. Impact is not monolithic; it comes in different shapes, differs across disciplines, and is subject to change in part due to modern communication technology. In an academic world that is increasingly adopting open scholarship, bibliometric impact measures assess a shrinking section of the actual impact that is happening.

Here, we see significant room for improvement. Impact assessment needs to capture the bigger picture of scholarship, including new research practices (data sharing), alternative research products (software), and different forms of expertise (conceptual, empirical, technical, managerial). We believe that open scholarship is a suitable framework for assessing research.

In this respect, impact arises if an output is not only accessible but reusable, if a collaboration is not only inclusive but leads to greater efficiency and effectiveness, and if a result is not only transparent but reproducible (or at least comprehensible). This entails adapting our quality assurance mechanisms to the reality of academic work, allowing for modular review techniques (open peer review) for different research outputs (data and code). In many respects, the hyper-quantification we experience in the quest to identify scientific impact would be better suited to safeguarding scientific quality and integrity.

Change is therefore necessary to motivate academics to focus on actual impact, instead of the outdated assumptions behind the measurement of impact. Now is the time to renegotiate academic impact in light of open scholarship.


The scientific community in some African countries is seeking closer ties to the Western European and North American communities. What are the main challenges in this context?

I think there are many challenges, and many of them are felt by the scientific community everywhere around the world, but they become more pronounced in many African institutions, where there are fewer resources and also less knowledge about Open Access and its benefits among university management. From the perspective of an academic, I know that one of the key challenges for Open Access is the incentive system in research. For example, if a university values the research success of its academics according to particular criteria and demands publications in closed-access journals, Open Access is going to start out on a very difficult footing.

These are the basic challenges that often get overlooked by the Open Access community. As activists, we try to connect with one another across regions, but we don't think about the institutional challenges around incentives for publishing that are being imposed on academics who have less time to conduct research and fewer of the resources that play a crucial role when it comes to publishing. You can publish your work in a closed-access, highly ranked journal, in which case the institution you work for is going to reward you heavily. Or you can publish it Open Access, where it won't bring much of that kind of success. Those institutional incentives are the biggest challenges; if you don't have a university system that rewards publishing in Open Access journals, and thereby making science more accessible, you are not going to have many researchers participating in Open Access initiatives.

In the African context the problem is simply greater. Institutions and university management have even fewer resources and less knowledge about the benefits of Open Access, and the academic system does not reward Open Access in the way that universities in Europe and North America do.

Why is it important to support Open Access in the African region? What role can it play?

First of all, it is important for science generally, because it removes the high rents that authors and readers pay to access the knowledge that is being produced; it thus facilitates the distribution of new knowledge effectively. And in the African context, getting science out there and distributing it is even more difficult because of the low-resourced environment, where paying for access to scientific research is critically expensive relative to the resources that are available. Scientists from many countries have a big problem with getting their research out there and making it visible, so Open Access can do good things in terms of that kind of distribution. Of course, just because your work is available doesn't mean that people are actually willing to read it. There are definitely benefits if science is accessible, but not all problems are solved by that.

What kind of work have Wikipedians been trying to do in order to foster OA?

One of the initiatives we are working on at the moment is trying to find out whether, if scientific research is included in a Wikipedia article, citations to that research improve over time. I think the trend is that including references to research in Wikipedia has a positive effect on readership; and if those scientific records are available as Open Access, and not behind paywalls, that increases the chance that the articles will be included in Wikipedia articles and the extent to which they are cited. In that sense, it is relevant to say to academics: it makes sense to participate in summarizing recent research by including citations in Wikipedia articles, because it's likely that your citations are going to increase and more people are going to read your work generally.

There is also one model that I became interested in while doing some work for Wikipedia in South Africa. It has been used by Nature and is essentially a Wikipedia-style scientific journal: a few scientists write an article together according to the Wikipedian structure, that is, an encyclopedia structure. Then they publish it under their author names, with a DOI, so it is indexed, has a version number, and so on. The benefit of this is that the scientists get academic credit for the article in Nature or whatever journal they are publishing in, and thousands of people are potentially going to read it; at the same time, there is the Wikipedia version that can be edited, translated and changed over time.

Not so long ago Africa was hardly on the radar of the big international journal publishers. Recently, Elsevier launched the “Scientific African” – a new megajournal to boost African research and collaboration. How do you evaluate this development?

I read it with absolute scepticism. And I think everyone should regard these attempts by large publishing corporations to incorporate or use Open Access in some way with scepticism, because Open Access is about more than money, and it is also not only a licence. Especially when it comes to research in African countries or other regions that are not well represented in the scientific system, you have to do much more than just launch a journal. You have to invest in overcoming cultural and institutional challenges, and the challenge of a systemic bias that cannot actually be overcome just by licences or publishing. It's about attention, and lots of things that are not covered just by making something freely available.

How can the contribution of African universities to regional and global knowledge production be improved? What role could open access play?

Open Access has a positive effect on the quality of research, because researchers in very poorly resourced institutions can get access to high-quality research. At the same time, another crucial problem remains: the recognition of knowledge coming from countries in Africa. This will be overcome to some extent by Open Access, but again not entirely.

What can be done to improve the recognition of African science? That depends on the environment. From the Wikipedia point of view, I'm trying, for example, to improve the coverage of topics related to Africa in the Wikipedia context. By making research outputs visible, through Wikipedia or through other channels, we can show that there is good research happening on African topics and that it is as worthwhile as research on subjects relevant to the rest of the world. What else can be done? Policies have to be changed in order to improve the recognition and acceptance of, and attention towards, African research. Only part of this can be achieved by providing Open Access. It's not as easy as just slapping on a licence so that your work is technically available. It's a harder question to solve, but a very important one.


Global rankings have decisively shifted the nature of the conversation around higher education to emphasise universities' performance in knowledge economies. How did that happen and why are rankings becoming increasingly important?

The emergence of global rankings coincided with the acceleration of globalisation at the turn of the millennium. This is because higher education is a global game. Its success (or failure) is integral to, and a powerful indicator of, the knowledge-producing and talent-attracting capacity of nations. But the landscape in which HE operates today has become extremely complex; there are many more demands and many constituencies which have an impact on and a voice in shaping higher education's role and purpose. While rankings have been around since the early 1900s, global rankings represented a significant transformation in their range and influence.

The arrival of the Shanghai Academic Ranking of World Universities (ARWU) in 2003 set off an immediate chain reaction that speaks to pent-up demand in the political system, and arguably more widely, for more meaningful and internationally comparative measures of quality and performance. In other words, once the number of people participating in and served by higher education expands, so as to comprise and affect the whole of society rather than a narrow elite, matters of higher education governance and management, and of performance and productivity, necessarily come to the fore.

Over recent years, rankings have become a significant actor and influencer on the higher education landscape and in society more broadly, used around the world by policymakers and decision-makers at government and higher education institution (HEI) level, as well as by students and parents, investors, local communities, the media, and others. There are over 18,000 university-level institutions worldwide; those ranked within the top 500 are within the top 3% worldwide. Yet, by a perverse logic, rankings have generated a perception amongst the public, policymakers and stakeholders that only those within the top 20, 50 or 100 are worthy of being called excellent. Rankings are seen to be driving a resource-intensive competition worldwide. But it is also apparent that putting the spotlight on the top universities, usually referred to as world-class, has limited benefit for society overall or for all students. Despite much criticism of their methodology, data sources and the very concept of rankings, their significance and influence extend far beyond higher education, and are linked to what rankings tell us (or are perceived to tell us) about national and institutional competitiveness and the changing geopolitical and knowledge landscape. Their persistence is tied to their celebration of “elites” in an environment where the pursuit of mobile talent, finance and business is a critical success factor for individuals and societies.

How accurate are university rankings, in your opinion? In what sense do indicators vary among different university rankings?

Rankings' popularity is largely related to their simplicity, but this is also the main source of criticism. The choice of indicators and weightings reflects the priorities or value judgements of the producers. There is no such thing as an objective ranking, nor a reason why indicators should be weighted (or given particular weights) or aggregated at all. Although rankings purport to measure higher education quality, they focus on a limited set of attributes for which (internationally) comparable data is available. They are handicapped especially by the absence of internationally comparable data for teaching and learning, student and societal engagement, the third mission, etc. This means that most global rankings focus unduly on research and reputation.
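How far weighting choices alone drive the outcome can be shown with a toy example; the two institutions, their indicator scores and both weighting schemes below are invented, but the mechanism is the one at work in any aggregated ranking: swap the weights and the ranking flips, with no change in the underlying data.

```python
# Toy illustration: the same indicator data yields opposite rankings
# under two different (equally defensible) weighting schemes.

universities = {
    "University A": {"research": 90, "teaching": 60},
    "University B": {"research": 70, "teaching": 95},
}

def rank(weights: dict) -> list:
    totals = {name: sum(weights[k] * score for k, score in indicators.items())
              for name, indicators in universities.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(rank({"research": 0.7, "teaching": 0.3}))  # A first (81.0 vs 77.5)
print(rank({"research": 0.3, "teaching": 0.7}))  # B first (87.5 vs 69.0)
```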

Many of the issues being measured are important for institutional strategic planning and public policy but annual comparisons are misguided because institutions do not and cannot change significantly from year to year. In addition, many of the indicators or their proxies have at best an indirect relationship to faculty or educational quality and could actually be counter-productive. Their influence derives from the appearance of scientific objectivity.

Rankings are both consistent and inconsistent. Despite a common nomenclature, they differ considerably from each other. “Which university is best” is asked differently depending upon which ranking is asking the question; there are methodological differences in how similar aspects are counted (e.g. definitions of staff and students, or of international staff and students). And because much of the data relies on institutional submissions, there is room for error and “malpractice”. This presents a problem for users who perceive the final results as directly comparable. Nonetheless, the same institutions tend to appear at or near the top; differences are greatest beyond the top 100 or so places.

Over the years, ranking organisations have become very adept at expanding what has become a profitable business model. Recent years have seen increasingly close corporate relationships, including mergers, between publishing, data/data-analytics and rankings companies.

There is a proliferation of different types of rankings for different types of institutions, world regions, and aspects of higher education. Most recently, global rankings have sought to focus on teaching, engagement and the UN's SDGs. Despite these “improvements”, the same problems which afflict earlier formats and methodologies remain. Many of the indicators focus on inputs that are strongly correlated with wealth (e.g. institutional age, tuition fees or endowments/philanthropy) as a proxy for educational quality. Both QS and THE use reputational surveys as a means of assessing how an institution is valued by its faculty peers and key stakeholders. This methodology is widely criticized as overly subjective, self-referential and self-perpetuating in circumstances where a respondent's knowledge is limited to what they already know, and reputation is conflated with quality or institutional age.

What is the difference between global and national rankings? What are the strengths and weaknesses of these two different assessments?

Global rankings rely on internationally comparable information. However, there are greater differences in context, data accuracy and reliability, and data definitions. The ranking organisations do not audit the data, and even if they did, context would remain important.

Despite claims to “compare the world's top universities” (Quacquarelli Symonds World University Rankings, 2019) or to “provide the definitive list of the world's best universities evaluated across teaching, research, international outlook, reputation and more” (Times Higher Education, 2018), in truth global rankings measure only a very small subset of the roughly 18,000 higher education institutions worldwide.

National rankings, in contrast, have access to a wider array of data, as well as the capacity to quality-control it, albeit problems still persist. Today, there are national rankings in more than 70 countries; they can be commercial or government rankings. They may be a manifestation of other evaluation processes, such as research assessment, whereby the results are put into a ranking format either by media organisations or by the government itself. National rankings are used by students and parents, but also by governments to drive policy outcomes such as classification, quality assurance, improving research and driving performance more broadly.

It is claimed that universities learn how to apply resources in exactly the right place and submit data in exactly the right way, and are thus able to manipulate rankings. A prominent example is the Trinity College Dublin data scandal, where researchers were allegedly instructed in how to answer ranking questionnaires. How easy is it to manipulate such rankings?

Higher education leaders and policymakers both claim that rankings are not a significant factor in decision-making, but few are unaware of the rank of their own universities or that of national and international peers. The increasing hype surrounding the now multi-annual publication of rankings is treated with a mixture of growing alarm, scepticism and, in an increasing number of instances, meaningful engagement with ranking organisations around the process of collecting the necessary data and responding to the results.

This reaction is widespread. There is a strong belief that rankings help maintain and build institutional position and reputation, that good students use rankings to shortlist university choices, especially at the postgraduate level, and that governments and stakeholders use rankings to inform their own decisions about funding, sponsorship and employee recruitment.

As such, rankings often form an explicit institutional goal, are incorporated implicitly into strategic objectives, are used to set actual targets, or are used as a measure of achievement or success. In a survey conducted by this author, more than half of international HE leaders said they had taken strategic, organizational, managerial or faculty action in response to rankings; only 8% had taken no action. Many universities maintain vigilance about their peers' performance, nationally and internationally.

In global rankings, the most common approach is to increase the selectivity index, i.e. the proportion of high-performing students. It is no coincidence that many of the indicators of quality are associated with student outcomes such as employment, salary, etc., which are in turn strongly correlated with these characteristics. The effect is to reinforce elite student entry.

The practice of managing student recruitment has received considerable attention in the USA but is not confined to that country; similar patterns are found elsewhere, even where equity and open recruitment are the norm. The practice of urging “supporters” to respond positively to reputational surveys is used by many universities.

Given the significance and potential impact of rankings, and the fact that high-achieving students, especially international students, are heavily influenced by them, these responses are both understandable and rational. However, the really big changes in ranking performance derive from the methodological changes that the ranking organisations themselves make. Some of these changes are introduced to improve the rankings; however, one cannot entirely dismiss the cynical view that many of them are aimed at generating publicity, and consultancy business, for the rankings themselves!

What steps are necessary in order to improve global university rankings?

Rankings are not an appropriate method for assessing or comparing quality, nor a sound basis for strategic decisions by countries or universities. If a country or institution wishes to improve performance, there are alternative methodologies and processes. Beware the unintended consequences of simplistic approaches.

These are some suggested Dos and Don'ts:

Don’t:

  • Change your institution’s mission to conform with rankings;
  • Use rankings as the only/primary set of indicators to frame goals or assess performance;
  • Use rankings to inform policy or resource allocation decisions;
  • Manipulate public information and data in order to rise in the rankings.

Do:

  • Ensure your university has an appropriate/realistic strategy and performance framework;
  • Use rankings only as part of an overall quality assurance, assessment or benchmarking system;
  • Be accountable and provide good quality public information about learning outcomes, impact and benefit to students and society;
  • Engage in an information campaign to broaden media and public understanding of the limitations of rankings.

Museums, libraries and archives play a pivotal role in the preservation of human knowledge. They also see themselves as custodians of cultural and natural heritage. The task of natural history collections is twofold: on the one hand, they preserve our knowledge about nature; on the other, they hold records of the history of human exploration and conquest of the earth. With digitisation, natural history museums have new opportunities to make their collections accessible to wider audiences, and to interconnect the huge amounts of data and knowledge that are stored within their collections. Digitisation not only changes the way natural history collections are organised, but also the research process, as it enables scientists to cooperate without the constraints of time and location, and to generate and analyse large amounts of data that can be exchanged and reused internationally in a wide range of new contexts. These opportunities were recognised early. The Berlin Declaration on Open Access of 2003 states: “For the first time ever, the Internet now offers the chance to constitute a global and interactive representation of human knowledge, including cultural heritage and the guarantee of worldwide access.” (Berlin Declaration). Libraries, art galleries and archives agreed in their OpenGLAM principles that their institutions would benefit from making their data and collections digitally available to everyone: “The internet presents cultural heritage institutions with an unprecedented opportunity to engage global audiences and make their collections more discoverable and connected than ever, allowing users not only to enjoy the riches of the world's memory institutions, but also to contribute, participate and share.”

In natural history museums in general, and at the Museum für Naturkunde Berlin specifically, three distinct dimensions of open science offer potential strategies for reacting to digitisation: 1) inner-scientific openness through data sharing, 2) openness to society through various forms of public engagement, and 3) citizen science, which can be understood as a hybrid of inner-scientific openness and public engagement. After all, sharing knowledge openly is not simply a matter of communicating research results, but also of entering into dialogue with a wide range of target audiences beyond the scientific community. New forms of participation are established between science and society, as in citizen science projects where citizens become involved in the research process itself. Citizen science even has the potential to produce knowledge that academic research cannot provide. For example, the dramatic decline in insects, which led to an intense and ongoing public debate, was discovered by the Entomological Society Krefeld, a local, more than century-old volunteer association studying insects.

The Museum für Naturkunde Berlin (MfN) has been engaged for a while in this entire range of Open Science activities:

1) Inner-scientific openness. There are several projects at the Museum für Naturkunde Berlin that are engaged in creating workflows and infrastructures for open biodiversity data. The MfN's vast natural history collections alone contain 30 million specimens. Following the Open Definition – “Knowledge is open if anyone is free to access, use, modify, and share it — subject, at most, to measures that preserve provenance and openness” – the museum is opening up its research processes and its collections to find new answers to scientific and societal problems, such as novel pharmaceuticals, securing the world's food supplies or coping with the consequences of global warming. The vast archive of knowledge on biodiversity stored in the collections can be used far more efficiently when opened up and interconnected digitally. One example of this is the MfN-led German-Indonesian project Indobiosys, which aims at standardising a sustainable workflow from taking samples, through the identification and storage of organisms from unexplored biodiverse habitats, to their description. An online platform offers open access to the data on specific species and geography, to the research community as well as to the general public. Openness has the potential to speed up scientific progress through new forms of collaboration (e.g., data sharing) and to safeguard scientific integrity through new forms of transparency (e.g., open methods, pre-registration, data-driven replications). Although at first glance this creates high hopes in the sense of “citius, altius, fortius” (faster, higher, stronger), it comes with a few challenges to overcome, such as how best to archive and use research data, and when, how and with whom research results should be shared.
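At the level of a single record, “opened up and interconnected digitally” might look like the sketch below, which uses Darwin Core-style field names (the de facto standard vocabulary for biodiversity data); the concrete values, and the assumption that Indobiosys records resemble this, are illustrative only.

```python
# Sketch of an openly shared specimen record with Darwin Core-style fields
# (see dwc.tdwg.org). Values are invented; real schemas may differ.

specimen = {
    "occurrenceID": "urn:example:mfn:0001",   # hypothetical identifier
    "scientificName": "Papilio memnon",       # example taxon
    "country": "Indonesia",
    "decimalLatitude": -0.79,
    "decimalLongitude": 113.92,
    "basisOfRecord": "PreservedSpecimen",
    "license": "CC-BY 4.0",                   # openness stated per record
}

def matches(record: dict, **criteria) -> bool:
    """Filter open records by any field, e.g. taxon or geography."""
    return all(record.get(key) == value for key, value in criteria.items())

print(matches(specimen, country="Indonesia"))  # True
```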

2) Openness to society. While sharing research data affects the inner-scientific process itself, opening up the research process also applies to the areas of knowledge transfer and public engagement, in other words: the relationship between science and society. The museum offers a variety of event formats, making it a forum for public and political discussion. In the current exhibition ARTEFACTS, for example, scientists from the MfN (and from other research institutions) talk to museum visitors and engage in a dialogue about current environmental issues. The exchange aims at communicating research processes, but also at gaining new perspectives and ideas for the researchers themselves. In the event series “Wissenschaft, natürlich!” (“Science, of course!”), the Museum and the Berlin Social Science Center (WZB) join forces to publicly discuss some of the most pressing issues of our time across disciplinary boundaries. They invite the public to discuss social cohesion in times of political and social division, the ecological crisis, and the role of science itself. Another example is the Museum's public engagement project GenomELECTION, which explores how genome-editing methods such as CRISPR/Cas9 shape the way we will feed ourselves in the future, how we define health and illness and, in more general terms, our society's relation to nature.

3) Citizen science. The current boom in citizen science can be interpreted as a sign of a transformation in the way our society produces and evaluates knowledge, a process that is largely welcomed and promoted by the Museum für Naturkunde Berlin. Hosting the headquarters of the European Citizen Science Association (ECSA), the European research network COST Action Citizen Science and the German national platform for citizen science “Bürger schaffen Wissen”, the MfN has become a competence center for citizen science. It also runs its own citizen science projects, like the project on urban nightingales, “Forschungsfall Nachtigall”. During the 2018 and 2019 nightingale seasons, nature-loving citizens, including clubbers and other night owls, are asked to collect recordings of nightingale songs from April to early July using the Museum's free Naturblick app. The recordings and their locations are freely accessible on a map on the Museum's website.

Over the past years, these and similar ideas and ways to open up science to society have been widely discussed as part of the ongoing Open Science debate. The notion of Open Science is complex and multi-faceted. It helps to distinguish various aspects of Open Science – there is the aspect of societal participation in science and free and equal access to knowledge for all, but there is also the aspect of increasing efficiency through open communication and the exchange of data within the science community, including the establishment of new digital infrastructures. Research museums like the Museum für Naturkunde Berlin are ideal places to open up science in both directions, inner-scientifically and societally, and they can be understood as essentially open institutions, with the mission of advancing research and making it available to society at the same time.


How can a research organization systematically spark innovation in science? The Ludwig Boltzmann Gesellschaft (LBG) borrowed its name from the famous Austrian physicist, mathematician, and natural philosopher Ludwig Boltzmann. We are greatly indebted to him for his way of looking at the world and at science itself. At the close of the 19th century, Boltzmann wrote in a letter to his colleagues in Graz, Austria: “I believe that they could have achieved even more had they been less isolated”, reflecting on how science could benefit from being open. Fast-forward to the digital age: we are now confronted with a variety of opportunities and challenges to purposefully make science permeable, to become more open and less isolated, to “achieve more”. How do we define “more” and how do we manage this process? The LBG now works on being open by default by applying Open Innovation methods and principles, allowing “the use of purposive inflows and outflows of knowledge to accelerate internal innovation…” (Chesbrough, 2006). What we have learned so far: there is a way to do it, but you have to do it right (cf. Fecher, Leimüller & Blümel, 2018).

From Mode 2 to Applying Open Innovation in Science

In the early 90s, Gibbons et al. (1994) referred to Mode 2 as a new way to co-produce (scientific) knowledge in interdisciplinary and transdisciplinary settings in order to solve societal issues. Their motivation came from participatory action research practices, which from the early 80s onwards demonstrated a different, less isolated way of conducting research that creates a new route to impact (e.g.: in public health settings: Baum, 2006). In contrast to Mode 1 knowledge production, in which fundamental scientific research is at the core of the epistemological approach, Mode 2 tackles a different question: how to co-create science to increase the impact for and with society (Gibbons et al., 1994).

As a research organization, we strive to spark innovation and to invest public money in research that creates value for society. In our opinion, the best way to do so is to foster society's involvement and engagement in research, which is in line with the current H2020 program. The LBG research ecosystem covers a variety of research areas, ranging from medicine and the life sciences to the humanities and the social and cultural sciences, specifically targeting novel research topics in Austria. Together with academic and implementing partners, the LBG is currently running 21 research units to develop and experiment with new forms of collaboration between science and non-scientific actors such as companies, the public sector and civil society. With this approach the LBG aims to address socially relevant challenges to which research can contribute, and to provide useful support and guidance for others by strategically opening up research.

What does open innovation in science actually mean and how can the scientific system become more innovative in order to tackle societal issues? At LBG, we are convinced that, while theoretical frameworks and models definitely help to guide our work, we need to test and revise them in practice, experiment with unusual knowledge providers (e.g.: citizens, patients), develop new ways of asking the right questions (e.g.: crowdsourcing research questions) and approach new forms of collaboration between science and society. In order to do so, the LBG established the Open Innovation in Science Center in 2016 as a unique in-house unit to apply Open Innovation in science.

Ludwig Boltzmann Gesellschaft –
Open Innovation in Science Center

We call our approach Open Innovation in Science because we apply and adapt Open Innovation methods and principles originating in a business context to a scientific context. We are aware that buzzwords come and go, but the underlying principle of co-producing research will transform science in the long run, as it leads to a different kind of knowledge (more user-centric) and to novel ways in which research projects are conceived, supported and conducted (ownership of research) and research artefacts are disseminated (Nature, 2018). All this holds the potential for less isolation and more openness, which is a main driver of innovation. Outside of science there are many examples showing how openness can lead to innovation, for instance via crowdsourcing (Wikipedia as the largest worldwide crowdsourcing experiment ever conducted), openly distributing data (open collaboration by open-software user communities, von Hippel, 2001) and disrupting the health sector (e.g.: Open Source Pharma).

Within economics, Open Innovation refers to making R&D processes within companies more permeable to the outside world, allowing knowledge to flow across company boundaries inside-out, but also outside-in (Bogers, Chesbrough & West, 2016). However, we know that the scientific system works quite differently from the business units of companies. For instance, knowledge that is produced within science is usually publicly funded and should therefore be a public good. Also, incentive systems in science do not measure revenue or sales numbers, but rather the impact individual researchers or research teams create with what they do. We are convinced that science communication will become increasingly important to fulfill another inherent purpose: educating the public about how science works. What we learned: Open Innovation practices in science need to be tailored specifically to the scientific system. This is why we experiment with Open Innovation methods to explore this scope within our research units, and adapt and guide them in a way that is suitable for the scientific system. One important part is to simultaneously evaluate, draw conclusions and provide learnings that feed back into all our approaches.

Experimenting with Open Innovation Practices

We have been experimenting with a variety of different methods and approaches depending on the need of each project. For instance, we use:

  • Crowdsourcing to generate new research questions in the health context by asking a variety of crowds (e.g.: crowdsourcing research questions in traumatology research by involving clinical experts and patients)
  • Lead user workshops to define and narrow down research topics with lead experts, early adopters and affected patients (e.g.: defining the research topics for digital health institutes; Topic 1: Patient Participation during Diagnosis, Acute and Life-Long Therapy; Topic 2: Securing and Enhancing the Quality of Health Services and Patient Safety)
  • Ideas lab approaches to use innovative settings to form inter- and transdisciplinary research groups (e.g.: to form new groups aiming to support children of mentally ill parents)
  • Experts by experience, to involve patients in steering the research agenda of our research groups to better address the needs of patients (e.g.: experts by experience, children of mentally ill parents, advising and reflecting together with the research group “Village”)
  • Lab for open innovation in science to train researchers as well as the public on how to co-produce knowledge and learn open innovation principles and methods (e.g.: with our one-year training program for researchers to teach open innovation methods and develop projects based on these principles)
  • SCIENCE4YOUTH program to train the next generation of researchers in using open innovation methods and principles (e.g.: with our Science4Youth training program for adolescents from the age of 16)

Learnings from Applying Open Innovation Methods and Principles in Science

Drawing from our own experiences, there have been a variety of important learnings that we are very happy to share:

1) Knowledge transfer and capacity building: learn quickly. Before founding the start-up-like Open Innovation in Science Center in 2016, the LBG initiated a pilot crowdsourcing project and a training program for scientists called the Lab for Open Innovation in Science. This helped us to experiment with new methods, disciplines and target groups and to learn from first-hand experience. When conducting such projects, capturing the most important learnings is crucial for moving forward, for instance tailoring the training programs towards different target groups and experimenting with different format lengths and content. This is why we started an education program to deepen our understanding of open innovation practices (e.g.: the SCIENCE4YOUTH program for adolescents, to educate the next generation of scientists). It also allows us to initiate new projects and build systematic structures and models for our organization. For instance, our second “Tell us!” crowdsourcing project was developed by one of our LBG researchers in a Lab for Open Innovation in Science training. Also, the way we set up the governance structure of our research group “Village” reflects our approach of implementing learnings in terms of steering research projects.

2) Get out of your research bubble right from the start. We involve people from outside academia from the very beginning, with the goal of inspiring research, identifying topics that have not yet been addressed by science, and starting a dialogue between scientists, those potentially affected by their research, and the public. In our second „Tell us!“ crowdsourcing project (www.tell-us.online), patients and experts submitted the research questions they considered most important for science in traumatology. These questions do not stem from scientists themselves: 80% come from patients, and 20% from experts working with patients. In order to reach out to these communities, a variety of communication measures are necessary; a prerequisite for success is an open-minded attitude and a willingness to engage with the public. Another example is our recent successful crowdfunding campaign on torture prevention, together with our Ludwig Boltzmann Institute for Human Rights. The most interesting outcome was not the funding we managed to collect, but the new cooperations that were initiated for the Atlas of Torture project, which would probably never have happened otherwise.

3) Nudge organizational change. When applying these Open Innovation methods, organizations need the absorptive capacity to cope with changes in workflows, administrative processes, and new forms of collaboration and teamwork. Introducing Open Innovation in science can lead to friction. For this reason, we systematically analyze the effect of each of our approaches on the larger research organization and aim to initiate organizational change, in order to build a research organization that embraces Open Innovation and Open Science.

It takes some time to see the effects of our actions and whether we achieve the kind of impact we are aiming for. To a certain degree, we see our own activities as a testbed for Open Innovation in science. We strive to identify the premises for opening up, which includes identifying individual drivers (e.g., motivational factors: “How can we motivate researchers to work openly?”), overcoming structural barriers (e.g., “How can we build a rewarding system for researchers who work openly?”, and “How can we build organisations or research groups with enough absorptive capacity to work openly?”) and anticipating pitfalls (e.g., “How do we deal with IP rights or data protection?”). In every project and every discipline we work with in our LBG research ecosystem, we identify new needs and approaches to open innovation. After all, no one size fits all.

In the tradition of Ludwig Boltzmann, we are convinced that we can achieve more with less isolation. This is not an idealistic or dogmatic approach, but rather a tangible strategy for research to deal with digitization and societal needs. We still have a long way to go and need to provide substantial long-term proof of our approach. In the meantime, we are thrilled to share our experience with others on the way to becoming open by default.
