Overleaf is happy to announce that The University of New South Wales (UNSW Sydney) and Overleaf have partnered to provide all UNSW students, faculty and staff with Overleaf Pro+Teach accounts.
The UNSW Library is providing the UNSW community with innovative scholarly authorship technology via premium Overleaf accounts and a custom UNSW authoring portal on Overleaf. UNSW students, researchers, faculty and staff benefit from easy sign-up for their premium accounts, a collaborative authoring platform, easy-to-use writing templates, and LaTeX authoring support.
A few of the features that influenced UNSW Sydney to partner with Overleaf include:
easy-to-use templates with custom submission links for publishing researchers and faculty;
research & authoring collaboration with multiple co-authors made easier via a cloud-based platform that can be accessed by desktop, laptop, and even mobile devices;
full-project history view with version control, commenting, and easy revert options;
assignment templates that help professors distribute and collect homework assignments.
Dr. Damien Mannion, of the UNSW School of Psychology, says:
“I’ve been a user of Overleaf for a few years and am happy that UNSW has invested in the Pro+ version. I am particularly enjoying the ability to collaboratively write documents with students, who are finding Overleaf to be an accessible and intuitive way to learn and use LaTeX. The expanded functionality in the Pro+ version, such as the project history, is very useful for such collaborative writing.”
John Hammersley, co-founder and CEO of Overleaf says:
“We’re delighted to be working with UNSW Sydney—Overleaf has seen very strong adoption within student and researcher communities, and through this partnership all the staff and students at UNSW can now make full use of Overleaf in their classes and research projects. Overleaf is committed to making LaTeX easier to use and more accessible to the wider authoring community, and it is exciting to see UNSW proactively supporting this on campus.”
For questions on this new Overleaf partnership and integration, please contact Mary Anne Baynes, CMO, Overleaf.
Artificial intelligence (AI) and robots continue to creep into our working lives. Not only are truck drivers, janitors, and bricklayers facing obsolescence in the near future, but AI is also influencing knowledge work, including paralegal, medical diagnostic, and other professions that require less physical manipulation and more interpretation of data. Job automation has also extended to journalism and the creation of news and other content. It is possible that robots might infiltrate the scientific communication ecosystem in the same way.
Robotics has been employed in newsrooms for several years, starting small but becoming much more sophisticated lately. The automation of news had initial success in the business and sports sections with the creation of brief game synopses and earnings reports. These 2-3 sentence capsules that many of us are familiar with are increasingly written by software that takes machine readable data such as a box score for a game or earnings data from corporations and generates summaries for a general readership.
Sports briefs might include not only the winning team and the score but perhaps a mention of the player who scored the game-winner and any exceptional effort from one or more other players, the resulting league or division standings, the number of consecutive victories (or losses) or the total runs/goals/hits, etc. for a player who is vying for (or has moved into) the league lead. All of this information is gleaned from the official tabular score of the game and other sources which can be read by software and used to assemble a coherent distillation of the game.
Similarly, business stories are generated from machine-readable data and include a company’s revenues, expenses (including one-time costs), quarterly sales increase or decrease, earnings per share as well as the stock price close and gain or loss. This information can be collected by software from data available online to create a couple of sentences summarizing the quarterly or annual performance of a company. For more on automated journalism, see these articles on two of the more popular products: Narrative Science and Wordsmith.
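The basic mechanics of this kind of data-to-text generation are simpler than they sound: structured fields go in, templated sentences come out. Here is a minimal, hypothetical sketch in Python; the box-score field names and template wording are invented for illustration and are not any vendor's actual format.

```python
# Hypothetical sketch of template-based "robot journalism": turning a
# machine-readable box score into a two-sentence game brief.

def game_brief(box):
    winner, loser = box["winner"], box["loser"]
    # First sentence: the result, straight from the tabular score.
    sentence1 = (
        f"{winner['name']} beat {loser['name']} "
        f"{winner['score']}-{loser['score']} on {box['date']}."
    )
    # Pick the standout performer by points scored, as a stand-in for
    # the "exceptional effort" a human editor would highlight.
    star = max(box["players"], key=lambda p: p["points"])
    sentence2 = (
        f"{star['name']} led all scorers with {star['points']} points, "
        f"extending {winner['name']}'s winning streak to "
        f"{winner['streak']} games."
    )
    return f"{sentence1} {sentence2}"

box_score = {
    "date": "3 March",
    "winner": {"name": "Rovers", "score": 4, "streak": 3},
    "loser": {"name": "United", "score": 2},
    "players": [
        {"name": "A. Jones", "points": 2},
        {"name": "B. Smith", "points": 1},
    ],
}
print(game_brief(box_score))
```

Production systems add richer templates, varied phrasing, and rules for which facts are newsworthy, but the principle is the same: every sentence is grounded in a field of the structured record.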
For unstructured texts, the NVivo software offers some promise of generating prose that can be worked into an intelligible narrative. It won’t write a publishable piece (yet) but can inform and jump-start the creation of content that can be proofread, formatted and otherwise reviewed prior to release.
Since the early days, robot-generated articles have become a lot more sophisticated. A machine-generated obituary for (appropriately enough) AI pioneer Marvin Minsky in 2016 was widely noted, although an article in Wired marking the milestone stated that human review of these stories was still needed.
The need for oversight may not be going away, but the progress of this kind of software marches on, now touching scientific communication. A recent article in Research Information features an electronic lab notebook (sciNote) which includes a component to generate manuscripts based on laboratory or other research data, presumably the same way it is done with box scores for sporting news. The author selects one or more experiments for which they have used the ELN and supplies some background information including keywords describing their work and any DOIs for related papers. According to their website, the software will then produce “an introduction, materials & methods, results and references”. The scientist can then begin improving and editing the text.
Anyone with intellectual curiosity who works in the science research enterprise can't help but wonder what the effects of automated text generation from research data will be on scientific publishing. One thought is that if manuscripts become easier to generate, it may signal a shift in the assignment of academic merit from how many peer-reviewed articles a scientist has written, or how often they were cited, to how many articles were generated by a dataset they collected. At the very least, I suspect we may see a greater recognition of data collection as the heavy lifting in science and less emphasis on the number of articles written by a given scientist or cited by others.
I believe Digital Science’s own Daniel Hook may have captured the essence of this change when he said,
“It is really only a matter of time before having a highly-cited dataset is as important in some fields as a paper in Nature, Science, or Cell.”
That may turn out to be an understatement.
Author Bio: Alvin Hutchinson is the Information Services Librarian at the Smithsonian Libraries’ Digital Programs and Initiatives division. He manages the Smithsonian Research Online program which describes, collects and archives research output of Smithsonian scholars. Although he does not have any formal science training, Alvin was a subject specialist in zoology at the Smithsonian’s National Museum of Natural History and its National Zoological Park for 15 years and has a personal interest in scientific research and publishing in the digital era.
A new tool for authors – the IEEE LaTeX Analyzer, powered by Overleaf – helps speed up the publishing process by allowing authors to upload articles and validate their article’s LaTeX files prior to submission to the IEEE.
Authors simply upload their articles via an IEEE web interface which connects to the Overleaf LaTeX compilers to build the manuscript. By making use of Overleaf’s robust and proven compile servers – which have compiled over 15 billion pages since 2013 – the manuscript can be automatically compiled in just a few seconds.
The tool helps authors and editorial offices avoid submission delays or inaccuracies which are often encountered when authors and editors have different versions of LaTeX installed; submitted manuscripts contain missing files; or out-of-date LaTeX packages cause conflicts in the build process. When the author uploads their LaTeX files they receive detailed analysis results, along with the option for correcting any LaTeX issues via the Overleaf platform or a LaTeX editor of their choosing.
“IEEE is committed to creating a simple and intuitive publishing experience. The new IEEE LaTeX Analyzer tool is a natural extension to the suite of existing IEEE Author Tools. It helps us serve our authors and editors through potential pain-points, providing insight to LaTeX-related errors for faster compilation and submission times. We are excited to collaborate with Overleaf on this effort and to make it available to our community of authors and editors.”
John Hammersley, Co-founder and CEO of Overleaf, says:
“It has been fantastic to work with the IEEE, who have provided very helpful and focused feedback throughout this project, and we’re all excited to see the IEEE LaTeX Analyzer now live and available to the community. It’s a natural extension of Overleaf to make our LaTeX technology accessible via an API, to help power services such as the IEEE LaTeX Analyzer. We hope that this will continue to lower the barriers to getting started with LaTeX for new authors and will help reduce the frustrations experienced authors can sometimes face when transferring files from system to system. We very much look forward to feedback from the IEEE community on this new service, and are excited to be broadening the use of Overleaf in this way.”
The IEEE LaTeX Analyzer tool is available now for use by authors and editors across all IEEE journals and can be found via the IEEE Author Center webpage.
The validation service accepts LaTeX and supporting files for compilation. It then compiles the manuscript and returns the resulting PDF if the compile was successful. If compilation was unsuccessful, the tool returns the status of the compilation and, if there was a compilation error, the error messages. It also provides an "Open in Overleaf" button so that the author or editor can use the user-friendly Overleaf interface to immediately find and fix errors in the document and resubmit through the validation service.
Our friend and colleague, Prof. Jonathan Adams, will be leaving us this month to follow the late Eugene Garfield as a Director of the Institute for Scientific Information (ISI). His appointment to this academically-focused position is a testimony to Jonathan’s stature in the industry, and a fine tribute to his impact on the bibliometric space before and during his tenure as Chief Scientist at Digital Science.
While we will certainly miss Jonathan’s talents – not to mention his special sense of humour and finely-tuned wit – we nonetheless wish him every success in this important, influential, Clarivate-supported academic position. After many happy years of co-working at Digital Science, we look forward to working with him collaboratively on moving the field of modern research management and analytics forward.
Today, we are pleased to announce a partnership between one of our portfolio companies, Peerwith, and IWA Publishing, a leading international publisher of water, wastewater, and environmental publications.
Peerwith and IWA Publishing have joined forces to develop a branded marketplace solution as part of the Peerwith Publisher Solution. The partnership allows IWA Publishing to offer their authors an extensive range of author services via a transparent peer-to-peer marketplace model. Designed with the IWA Publishing corporate identity in mind, this secure marketplace environment is part of the platform for author services. Visit the branded marketplace at https://iwap.peerwith.com/.
Ivo Verbeek, Director of Peerwith, states:
“We are thrilled to team up with IWA Publishing. In our model, authors connect directly with peers who are experts in author services. We foster collaboration by connecting experts to improve the quality of scientific output. Our model works best in strong communities which makes the International Water Association an excellent fit.”
Rod Cookson, Managing Director IWA Publishing, comments:
“Peerwith allows us to offer an extensive range of academic researcher services to our authors quickly. We like the aspect of experts charging our authors fair fees for high-quality services tailored to individual requirements. On top of that, the peer-to-peer marketplace model allows us to engage our own community as experts.”
Delivering Powerful Insights and Business Intelligence Across the Scholarly Research Lifecycle
Developed over a two-year period by the team at Digital Science, in partnership with over 100 leading research-intensive organizations around the world, Dimensions brings together over 124 million records from across the research lifecycle for the very first time, enabling you to gather timely, actionable insights from across the research landscape.
Join us for this one hour webinar to learn how Dimensions can deliver real business and editorial benefits to publishers while increasing the discoverability and usage of their content.
In this Question and Answer session, Simran Shinh of Cornell University (Operations Research Engineering ’20) tells us why Cornell Rocketry chose ShareLaTeX and how it helped them win an award for their technical documentation.
Can you tell us a bit about Cornell Rocketry?
Cornell Rocketry is an engineering project team dedicated to learning about how to design, assemble, and launch rockets. Each year, the team participates in the NASA Student Launch competition, which typically involves launching a high-powered rocket to 5,280 ft with a specific payload.
Although we are a young project team, we have won the NASA Centennial Challenge (2016) and placed 3rd overall last year in the NASA Student Launch Competition. In 2017, we won 2 out of 5 technical awards, including the Safety Award and Project Review Award (which includes technical documentation and presentations).
This year (2018) we are excited to challenge ourselves by building a rover as our payload, which deploys from our rocket at landing through remote activation. After deployment, the rover will autonomously move 5 feet and unfold a set of solar panels. Besides the deployable rover system, our team also works on creating a communication system for the rocket to allow the tracking of the rocket throughout launch. We are committed to challenging ourselves and look forward to achieving new heights at competition this year.
Why, and when, did you start to use ShareLaTeX?
We started using it at the beginning of 2017, after a recommendation from the previous business team lead, a computer science major. NASA requires many technical documents throughout the year (Proposal, Preliminary Design Review, Critical Design Review, Flight Readiness Review, Launch Readiness Review, and Post Launch Assessment Review), many of which run to 200 pages or longer.
What were the challenges in preparing your documentation?
The documents contain many different features, from CAD drawings and schematics to tables, Gantt charts, and mathematical calculations, along with many cross-references to other sections throughout. Before, we would try to produce these documents in Word or Google Docs. We wanted the documents to look more professional, we wanted more control over how they looked, and we wanted our team of 39 students to be able to work on them at the same time.
How did ShareLaTeX help with document production?
ShareLaTeX was a good solution to the challenges we faced in producing complex documentation because we were able to share the responsibility as opposed to having one person be in charge of compiling all information and formatting. There is a lot less room for error this way.
How easy was the transition to using ShareLaTeX?
We believe very few, if any, teams that compete in the Student Launch use LaTeX for their documentation, and only a few people on our team knew LaTeX beforehand. For the very first document, we submitted our parts to the business lead, and he then translated it to LaTeX for us. However, after that, everyone caught on. We all learnt LaTeX quickly and began to write in LaTeX ourselves for the documents. Now, we are all pretty well-versed at it, and we are using different packages and formats to suit our needs.
We are grateful to ShareLaTeX for sponsoring us in 2017 by providing free access to Professional features; that was a big help to us. We are excited to have won last year’s documentation award, and we hope to win it—and more—again this year.
More about Cornell Rocketry
Cornell Rocketry Team (CRT) consists of 39 members drawn from all undergraduate school years. CRT members come from a variety of backgrounds, such as Applied Engineering Physics, Mechanical and Aerospace Engineering, Electrical and Computer Engineering, Computer Science, Operations Research and Information Engineering, Mathematics, and Physics.
2017 was a busy year for the U.S. Patent and Trademark office (USPTO), which issued 320,003 utility grants, according to IFI Claims, up 5.2 percent from the previous year and more than double the 157,284 granted in 2007.
IBM marks 25th anniversary as list leader
U.S. grants hit record high and double from a decade ago
Trending Technologies: e-cigarettes, 3D printing, machine learning & autonomous vehicles
Mike Baycroft, CEO of IFI CLAIMS Patent Services, said:
“2017 was an impressive year for U.S. patents. We’re seeing twice as many patents generated today as there were a decade ago. We’re seeing IBM nearly tripling its annual patent counts by going from 3,000 plus in 2007 to breaking the 9,000 mark this past year.”
Many leading companies from 2016 continued to dominate in 2017. Facebook and BOE Technology Group made large gains.
IBM received the most utility grants in 2017. Their total of 9,045 is a 12% increase over their 8,088 total in 2016.
Among companies that received the most patents, Intel Corp and LG Electronics both moved up two places in the rankings – putting them at #4 and #5 in IFI Claims’ rankings.
BOE Technology Group made a big jump from #40 in 2016 to #21 in 2017. Based in Beijing, BOE produces display and sensing devices.
Facebook Inc jumped into the Top 50. Their 660 utility grants give them a ranking of #50 – an impressive increase over their #86 ranking in 2016.
The analysis also included a report on the fastest-growing technologies. The new high-growth areas include e-cigarettes, 3D printing, machine learning, autonomous vehicles, moulding materials, hybrid vehicles, aerial drones, and food. These are not the largest patent classifications, but the ones that have shown the fastest growth over the last five years.
The automotive sector is rapidly developing new technologies, including patents in areas such as unmanned vehicle navigation and advanced manufacturing technology.
Companies showing impressive gains include Japanese automobile parts manufacturer Denso Corp (#35, with a 23% increase in grants from 2016), Honeywell International (#39, up 27%), Halliburton Energy Services (#44, up 35%), and China's Shenzhen China Star Optoelectronics (#45, up 44%).
Toyota, Ford, and Hyundai also showed sizable growth.
The computing, telecommunications, and medical patent classifications continue to be strong.
Computers (G06F) and telecommunications (H04L) remained in the top two spots.
2017 saw a big jump in specific types of data processing systems (patent class G06Q). This business methods patent class has been volatile over the last few years due to recent court decisions regarding patentability.
Pharmaceuticals (A61K) and Diagnostics/Surgical (A61B) showed healthy gains.
I was honoured to be invited to the first future labs meeting at Charleston Library Conference last November. If you’ve never been to a future lab, it’s a brainstorming session where a group of experienced professionals (in this case librarians with a few publishers and vendors sprinkled in) come up with their predictions of how their industry is going to change, often with a technology focus.
I promise I did not bring the topic up. I think it was Michael Levine-Clark, the Associate Dean for scholarly communication and collections services from the University of Denver and the first person asked for a word, who said ‘discovery’. In the back of my mind, I thought, ‘funny you should mention that’.
This has been a big week for Digital Science. Last Monday saw the launch of a major new product: Digital Science Dimensions. By now, you may have heard a few things about our newest addition to the portfolio. We had a launch party at the Wellcome Trust building in London, and the launch has been covered in a variety of outlets, including Inside Higher Ed, The Bookseller and the Scholarly Kitchen.
In some ways, Dimensions is different to the other products in our portfolio. To be clear, the product does not mark a change in direction. We maintain our customer-centric, community-driven ethos and commitment to supporting the research lifecycle at every stage. We also continue to work with all stakeholders including publishers, research managers, librarians, funders, and of course, researchers because we recognize the unique value that each facet of the broader community brings. What’s different about Dimensions is that it represents our first major collaborative product built on the deep relationships and commonality of purpose between our existing portfolio of companies and products.
Dimensions is a new type of knowledge discovery product that links publications, grants, patents, and clinical trials with a highly curated and strongly normalized data frame. If you take a look at the free version of the application here, at first glance, it looks like an abstract and indexing service with almost 90 million publications in it. That’s just the beginning, however.
The metadata that we’ve used to create the database has come from a number of sources including but not limited to Crossref, PubMed, a variety of open metadata and abstract services, and our relationships with scholarly publishers. If you look at the left-hand panel, you’ll see that we’ve integrated, curated and cleaned the data in such a way that you can slice and dice in sophisticated ways. For example, affiliation data (which often comes from free text fields in submission systems) has been normalised using GRID (which we made available originally under CC-BY licence in 2015 and later under CC0, so you can download it and use it yourself). We’ve deduplicated authors to enable effective search by researcher, which will be extended with ORCID integration in the next few weeks. We’ve also applied the Australian and New Zealand Standard Research Classification (ANZSRC) Fields of Research (FoR) taxonomy non-exclusively to enable filtering by discipline. Those are just three examples of how we’ve cleaned the data.
There’s a metrics and analytic component to Dimensions. You can see that we’ve added Altmetric and citation data to the search results and if you click on a document that has either citations or altmetric mentions, you can follow those links. In addition, the expandable panel on the right shows the free components of the extensive analytical suite that we’ve developed.
I’ll resist the temptation to give you a run-through of the premium versions of Dimensions; there’s a lot of information on the product page here, including reports on what’s in the data and on some of its use cases.
Having said that, just to give you a sense of the scope of what we’ve built, here are a few examples. We’ve curated and cross-linked awarded grants from the original Dimensions for Funders product by ÜberResearch with patent data from IFI Claims, and the publications database I described above is obviously in there too. We also have a much larger suite of powerful analytics that enables users to truly understand what’s happening in a research field and to compare institutions, researchers, funders, and disciplines in a variety of ways.
So what’s in it for me?
I’m so glad that you asked.
If you’re a researcher, we hope it’s fairly obvious. Dimensions is designed to create a discovery platform that is both incredibly simple to use, with as small a barrier to entry as Google or, dare I type, sci-hub, and yet allow for sophisticated literature search strategies such as faceted search, and citation/reference walking. The system aims not only to show you a research result (although it does do that, connecting you to the content in the fewest clicks), it is intended to help you contextualise your search results so that you can see a bigger picture.
While we’re on the subject of big pictures: along with those ~90 million abstracts in the free version, there are almost 3.7 million awarded grants, more than 34 million patents, and over 380 thousand clinical trials, should your institution choose to subscribe to the premium version of Dimensions. You’ll also notice that the pipeline from discovery to download is less convoluted than in other discovery solutions. There will be far fewer instances of having to search in one place and then go to another website to download.
For Institutions, not only is Dimensions a new breed of discovery tool for libraries, students and researchers, it’s also a powerful analytics tool to inform strategic decision making. In our talks with research administrators at many institutions, including our development partners, the use cases that have come up have been around identifying strategic partnership possibilities, comparison and benchmarking. Dimensions can help identify emerging fields and leaders in those fields to either collaborate with or even recruit.
Publishers are a critical part of the scholarly communication landscape, and much of the infrastructure on which Dimensions relies was funded or built by publishers. Publishers stand to benefit from this in two ways. Firstly, Dimensions’ powerful analytics suite can help publishers identify rising stars in academia who are the authors of today and the editorial board members and editors-in-chief of tomorrow. It also gives valuable insight not only into what research has been done (publications) and has had impact (citations and Altmetric), but also into what is being done (grants). The well-funded research of today becomes the exciting articles of the next couple of years and the citations of five to eight years’ time. This unique look into which fields, researchers, and institutions will produce the articles, journals and impact of the future is incredibly valuable.
So what’s the second way? Many publishers have been partnering with Digital Science for years to ensure that metadata is complete and accurate. These collaborations aid researcher discovery of their content in the ReadCube environment. Now with Dimensions, we’re taking that discoverability partnership to the next level with the faceted and contextual search approaches in both the free and premium versions. We not only provide researchers with a search result, but we also populate citation and reference links, related articles, grants, clinical trials and patents.
Perhaps most importantly for publishers, we also provide direct links to the version of record. In other words, do you remember that thing I wrote above about a lower barrier to entry for discovery and download than sci-hub? Well, that’s one thing that Dimensions does: it helps researchers discover your content quickly and then get it directly from you.
What will Dimensions achieve?
Dimensions launched just last week and there’s an awful lot of excitement about it. We’re receiving huge amounts of feedback from both new users and our existing development partners. We’re committed to being responsive and will continue to add data, refine use cases and support both paying clients and community users.
Earlier this week, Daniel Hook and Christian Herzog wrote an excellent blog post that explored our broader aims for Dimensions as an organization. I won’t reiterate what they wrote, instead, I’m going to mention a personal hope I have for the product.
I’d like Dimensions to reinvigorate the discussion around scholarly content discovery. At some point along the way, we’ve lost our connection with how researchers find and acquire content. As a result, workflows have become clunky and even broken down entirely, causing many researchers to look outside of the publisher-library ecosystem for their needs.
I think that’s at least part of what Michael Levine-Clark was alluding to back in Charleston. There’s a real need to reconnect our end-users to scholarly content in ways that add the maximum value and minimise the friction. It’s vitally important to do so, or we’ll continue to see researchers circumventing the services we provide in order to find content through easier ways.
On January 15th 2018, Digital Science launched Dimensions, a global analytical platform that breaks down barriers to discovery and innovation by making over 860 million academic citations freely available, and delivers one-click access to over 9 million Open Access articles.
Dimensions integrates funded grants, publications and citations, altmetric data, clinical trials, and patents so that a complete picture of the research landscape emerges: from resources entering the system, to research outputs and recognition, to patents reflecting the commercial trajectory and the translation of medical research into treatments.
The release of Dimensions provided an excellent talking point for a session at this year’s Gaidar Forum in Moscow focused on big data as a source for strategic decision making in science and innovation policy.
The event was organized by the Department of Scientific Information and Development of the RANEPA, a key player in the field of open science in Russia & CIS.
A group of Russian and international experts discussed a number of key points and recommendations for open science communication and also focussed on how best to utilize big data produced by research for public and private institutions.
The session was moderated by Oxana Medvedeva, Organizer of the Session, Head, Department of Scientific and Informational Development, RANEPA, and Ivan Zasursky, Chairman of the Department of New Media and Theory of Communication at the Faculty of Journalism of Moscow State University.
Experts that presented included:
Sergey Matveev (Ministry of Science & Education)
Mike Taylor (Altmetric)
Igor Osipov (Digital Science)
Martijn Roelandse (SpringerNature)
Gregg Gordon (SSRN – Elsevier)
Alexey Gusev (Russian Venture Company)
Dmitry Malkov (ITMO University)
and several other speakers
Altmetric’s Director of Metrics, Mike Taylor, talked about the development of scientific communication and its inseparability from the development of society:
“Today, just as society develops, scientists are getting and using new ways of communication and interaction … We have never before had the opportunity to see how the results of research spread through broader society – we can see publications, studies, quotations, discussions in open, public media.”
Sergey Matveev, Director of the S&T Department, Ministry of Education and Science of Russia, noted that big data by itself has no particular value and does not have any influence on strategic decision making. At the same time, the results of big data processing and smart visualization are important if they help to utilize information quickly and intuitively.
“With the increasing pace of decision-making, the price of a mistake is getting higher, but windows of opportunity are opening more often. Big data can not only help you understand where the world is moving but also give you an outlook on national-level, internal systems in the scientific and technological sphere. When we put these two pieces together, we have an understandable, rational, and truly strategic decision, one that is shared by society.”
During his presentation, Igor Osipov, VP Academic & Government at Digital Science, spoke of Dimensions’ project philosophy:
“We invited the world’s leading experts and thought leaders, about a tenth of whom came from the Russian scientific community, to share opinions and knowledge and take part in the development of the Dimensions platform. In Dimensions, for the first time, all parts of the very large puzzle we call ‘scientific information’ are united: grants, patents, clinical studies, metadata of different publications, measures of mentions and so on. This is a unique project and it is obvious that the possibilities of Dimensions data usage will reach far beyond science and education.”
Alexey Gusev, Director of information ecosystem development at the Russian Venture Company, said that the demand for mechanisms like Dimensions emerged a few years ago.
“Dimensions is an example of a tool in which big data can be integrated into management mechanisms both in science and, for example, in venture capital. I’m sure that in many funds, decisions could be made more confidently if there was a database of all grants available, not only in Russia but also throughout the world. The effectiveness of investments in technologies could increase.”
Dmitry Malkov, Director of the Centre of Scientific Communication at ITMO University, mentioned that, according to joint Altmetric research by ITMO and the RVC communication laboratory, universities that work on communication more carefully, with good press services and active communication departments, have greater reputational influence, which can in turn be converted into attracting talented students and teachers, as well as obtaining new funding.
A video of the discussion session (in Russian) and reports of all participants can be found here: http://gaidarforum.ru/live/ (starting at 5:51)
For more information about Dimensions and its uses, download A Guide to the Dimensions Data Approach and read our latest Digital Research Report, A Collaborative Approach to Enhancing Research Discovery.
If you are based in Russia or the Commonwealth of Independent States and you would like to know more about how your organization can benefit from Dimensions or if you have any questions, email email@example.com (specify “Dimensions” in the subject).
Digital Science recently launched Dimensions in London to an enthused and engaged crowd. Our next destination on our global launch tour is Australia during our Digital Science Showcase Week, register here if you would like to attend!