Oil & Gas industry observers are challenging the way the industry is currently operating its plants and facilities. The global shock of low oil prices has, in recent years, prompted a self-assessment within the industry, and has increased the level of urgency for finding more innovative ways of cutting costs in order to sustain profitability.
One approach to enhancing profitability is to modernize asset management so that operational data acquisition and analysis, and hence decision-making, can improve. However, some obstacles stand in the way. First, large quantities of operational data available today are not being used. Current data, for the most part, is collected through traditional historians, and in many cases these tools are underutilized, with only a limited set of their functionality deployed. Second, not enough has yet been invested in new ways to gather data (above and beyond historians), or in digitized, cloud-based tools that can quickly and accurately analyze that data.
Asset management has traditionally focused on monitoring the behavior of a particular piece of equipment or product, with each asset being treated as an individual. However, a discrete product that is experiencing issues may not affect the overall operation of a plant. Today’s plant managers are placing emphasis on greater efficiency across the plant, and want to view assets as integrated systems instead of as individual assets.
This is where digitization comes into play. As systems across facilities are upgraded over time, the intelligent devices that make up those systems become capable of capturing data and forwarding it to the cloud at an affordable price. Operators just need to determine the best ways of consolidating, analyzing and managing that asset information. The goal is to convert asset management data into actionable plans that improve productivity and lower costs.
Openness – an enabler of more powerful asset management
The lack of openness in existing systems can also place limits on the benefits of optimized asset management. Most of the Oil & Gas industry's installed asset base consists of proprietary hardware and software systems, typically sourced from a multitude of vendors. However, operators can no longer afford the inefficiency of having to go back to these separate technology providers each time they want to add or change parameters within their systems. The data gathering and analytics have to work across assets, regardless of the brand of hardware deployed.
Organizations such as the Open Process Automation Forum (OPAF) and the User Association of Automation Technology in Process Industries (NAMUR) are currently driving new standards around what “open” means. Their work emphasizes the standardization of system hardware components and looks to software as the principal change management agent. Oil & Gas industry stakeholders aim to support this shift away from completely proprietary vendor systems, opening them up so that multiple providers and different value-added solutions can be utilized and leveraged.
New tools that optimize the new data
As digitization opens the door to new data access, operators will be receiving information that most don’t currently have. As a result, they will be able to perform deeper analytics around energy efficiency and plant productivity which will, in turn, drive higher profitability.
Companies like Schneider Electric are enabling such developments by introducing open solutions and architectures that enrich the value of data. EcoStruxure Profit Advisor software, for example, collects real-time data on a regular basis from assets across a facility and links that data to an ERP system. The analytics are fed back to the operators, who can see the impact of their decisions from a historical view, and can judge how their current decisions are impacting plant profitability. Likewise, the software, EcoStruxure Asset Advisor, provides operators with the ability to anticipate and address asset performance issues before they become critical incidents, thereby mitigating safety risks, avoiding unplanned downtime, and reducing expensive maintenance interventions.
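The software itself is proprietary, but the underlying idea of tying operational data to financial data can be sketched. Everything below (asset names, fields, prices) is hypothetical and is not the product's actual data model; it only illustrates how real-time throughput and energy data might be joined with ERP cost and price figures to estimate an asset's profit contribution:

```python
# Hypothetical sketch: join real-time asset data with ERP
# cost/price figures to estimate a unit's profit contribution.
asset_data = {"unit_a": {"throughput_t": 120.0, "energy_kwh": 9500.0}}
erp_data = {"price_per_t": 410.0, "energy_cost_per_kwh": 0.11}

def profit_contribution(asset: dict, erp: dict) -> float:
    """Revenue from throughput minus energy cost, per reporting period."""
    revenue = asset["throughput_t"] * erp["price_per_t"]
    energy_cost = asset["energy_kwh"] * erp["energy_cost_per_kwh"]
    return revenue - energy_cost

# Revenue minus energy cost for the unit, in the ERP's currency.
print(profit_contribution(asset_data["unit_a"], erp_data))
```

Running such a calculation on every cycle of fresh data is what lets operators see, historically and in near real time, how their decisions move plant profitability.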
To learn more about how digitized asset management can drive plant profitability, click here
Asset Performance Management is a discipline that covers a wide range of technology and processes. When executed effectively, it can enable dramatic improvements in a company’s ability to achieve overall corporate objectives. A well-thought-out asset performance management strategy could include technology like predictive maintenance or augmented and virtual reality. Or it may involve in-depth risk analysis to understand the criticality of assets and to draw correlations between reduced unscheduled downtime and improved plant throughput. The key to driving a results-oriented and effective asset performance management program is to first develop a well-thought-out strategy.
Before technology and processes are evaluated and adopted, an asset performance management strategy must be developed and refined to meet overall business imperatives. Otherwise, companies may run into long, drawn-out pilot projects that require significant investment over several years yet deliver no tangible results.
Where is the return on investment? This is a critical concern a lot of companies are facing today when evaluating new technology. An asset performance management strategy should combine information and data with people, processes and technology to achieve maximum return on asset investment. Developing that strategy may require the involvement of external experts to analyse and evaluate the current state of the enterprise. It’s important that these analysts and consultants understand topics that are critical to asset performance management such as:
Asset Master Data Management
Asset Performance Objectives
Planning & Preparation
Inventory & Procurement
Monitoring & KPIs
Cost Management & Value Realization
Structural Failure Analysis
Skill & Knowledge Management
IT & OT systems
Continuous Improvement & Audit
With a deep background in these areas, expert asset performance management consultants can marry asset strategy to business strategy to achieve overall corporate objectives. The key is understanding how these topics come together so that availability, cost, capacity, quality and risk are all balanced in a way that meets key business imperatives. A well-thought-out asset performance management strategy goes far beyond traditional maintenance practices such as calendar-based maintenance and MRO management. By leveraging digital transformation technology, an asset performance management strategy can be developed that drives meaningful impact on the enterprise’s bottom line.
Digital transformation in asset performance management has paved the way for businesses to further integrate their people, processes and technology. By leveraging technology that addresses the complete asset lifecycle, from design, build and procure, to operate and optimise, an asset performance management strategy can form the foundation for exceeding overall corporate goals and objectives. Learn more about digital transformation and how it’s enabling new insights and opportunities with Asset Performance Management 4.0.
Matt Newton, Senior Portfolio Marketing Manager
Matt Newton is a Senior Portfolio Marketing Manager at AVEVA. With over 15 years of experience in the technology sector as an applications and systems engineer, Matt has extensive experience in supporting embedded platforms, automation systems, wired and wireless networking, network security technologies, and the Industrial Internet of Things.
According to Forbes, “Digital Transformation isn’t a buzzword anymore. It’s the way.” In today’s increasingly competitive marketplaces, every business is looking for an opportunity to exploit a new competitive advantage that helps combat their industry and market pressures. Numerous cutting-edge technologies promise to help overcome these pressures. With the adoption of digital transformation increasing across every industrial vertical, companies are looking to technologies like predictive analytics, mobility, and augmented and virtual reality to uncover those advantages. The question then becomes – how do companies implement these technologies with minimal risk? The answer – Pilot Programs.
A lot of the technology being introduced in industrial applications today may seem like magical black boxes that can only truly be understood and leveraged by data scientists. Take predictive analytics, for example. While that term is thrown around quite a bit in industry publications, many companies may be struggling with how this technology can be implemented in a way that proves its value with minimal upfront investment. This is the perfect case for a pilot project. One pitfall companies and vendors need to watch out for, however, is what is now being referred to as pilot purgatory.
Pilot purgatory may sound comical at first. But with all the hype and noise around digital transformation technologies like predictive analytics, machine learning and big data, pilots are often implemented at a snail’s pace as users get up to speed on how the technology works and explore how it applies to their specific use case. This can result in pilot programs that run for years instead of weeks or months. In fact, a recent McKinsey report notes that for Industrie 4.0 pilot projects:
Only 30 percent of the pilots end up reaching scale across the entire organization with companies failing to capture value from 70 percent of their pilots
Some 85 percent of the companies surveyed spend more than one year in pilot mode, while 28 percent spend more than two years
To avoid pilot purgatory when it comes to predictive analytics software, here are some things to consider.
Qualify Vendors Prior to Pilot Phase
The core competencies of predictive maintenance software vendors can be identified before the pilot process kicks off by evaluating the vendor’s:
After numerous successful pilot programs involving AVEVA’s PRiSM Predictive Asset Analytics solution, we’ve found that the most successful pilots do not attempt to uncover a business case but instead validate the business case for investing in the software in the first place. Throughout the pilot, the vendor and customer should work as closely together as possible.
Tips for a Successful Predictive Analytics Pilot Program
While a number of factors go into ensuring a successful pilot project when evaluating new technology, here are a few tips to help guide your path.
Recruit Executive Leadership Support. With any technology project it’s important to get buy-in from the executive leadership team. Some leading companies such as BASF have even developed specific teams focused on digital transformation.
Define scope and success up front. Work with the vendor to identify specific and measurable outcomes that will result from implementing the technology. Again, the focus of the pilot should not be to define a business case but to instead prove a business case. For example, proving that the software can predict equipment failures that avoid $30+ million in costs.
People are the key. At the end of the day, people need to use the technology for it to be of any benefit. That means putting a training program in place is mandatory, and if the pilot is successful, so is continuing to invest in training on a regular basis.
Create a timeline. Pilot projects should not continue indefinitely. It’s critical that the pilot program has a timeline that specifically states a go/no-go point for officially adopting the technology. If the software can’t prove its worth within that timeline, alternate solutions should be investigated.
Formal Project Management. Pilot programs need the same level of attention as a production level implementation. Resources to conduct the pilot should be assigned accordingly with a documented project plan in place before the pilot kicks off.
For additional tips on how to avoid potential failures when it comes to digital transformation and other technologies like predictive analytics, check out the Harvard Business Review article Why So Many High-Profile Digital Transformations Fail. And remember that when the pilot program is successful, a plan must be defined to scale and rollout the technology as part of an overall deployment strategy.
We live in a constantly changing, dynamic environment, which makes innovation a key driver of success. Innovation means more than research and development; it refers to changing, and creating more effective, processes, products and ideas. For almost a century, designers of electrical systems have depended on Schneider Electric for demanding operations. From switching and protection to monitoring and control, customers trust the TeSys innovation legacy.
Innovation is the key
Advanced settings and pre-alarming make everything easier. EverLink guarantees a lasting connection; in other words, yearly periodic re-tightening is no longer needed. This patented creep-compensating technology provides many benefits: its spring-based system ensures a long-lasting power connection while reducing overheating and ensuring the correct tightening torque each time the product is installed. In addition, circuit breakers can be mounted on a DIN rail in one click, without any accessory. Can it be easier?
Century of Digitization
Nowadays we try to digitize everything as much as possible; we would not leave the house without checking at least one application (app) on our devices. Digitization has now reached industry. An NFC (near-field communication) app with a user-friendly interface eases maintenance, helps with diagnosis and lets you check the fault history, making these products among the most accessible. Because the product itself is compact, the panel becomes more cost competitive, with fewer columns. Moreover, the same spare drawer can be used for several motors. Thanks to the wide range of overload settings provided, the GV4 divides the number of references to manage by five. Did you know that TeSys GV4 can have a toggle or direct rotary handle, or that a toggled version can be equipped with a direct, front or side handle? A few years ago customers had to page through catalogues to find the right product and accessory; now everything can be done in a few minutes. The new Product Selector tool helps you find the right product, save time, and reduce the chance of making a mistake.
Easy but safe
TeSys motor controls come with all the isolation, protection and emergency handling you need to comply with international codes, standards and regulations (CE, UL, SA, CCC, EAC). High-contrast covers identify safety-critical devices to prevent inadvertent manual operation. Every TeSys contactor is both mechanically linked and equipped with mirror contacts for safety applications – and wherever auxiliary contact state reliability is critical.
The most flexible system on the market
TeSys motor controls are easy to choose and easy to use. For instance, a single configurable control unit can provide a wide range of current settings and control voltages. In addition, TeSys motor controls offer the plug-and-play convenience and flexibility you need to optimize your panel designs, and the wide selection of common TeSys installation accessories helps you keep inventories and costs down.
To learn more about the TeSys offer, please visit our website.
According to a recent ARC survey, 93% of industrial stakeholders agree that both edge and cloud processing will form the basis of their industrial automation infrastructure. Market observers and analyst firms are projecting that the cloud computing market will reach $411 billion by 2020 and are forecasting that 50% of data will be processed at “the edge” by 2022.
These major trends will require industrial stakeholders to revisit how they are modernizing their operations in order to drive the new IT-influenced productivity benefits. A first step in achieving these greater productivity gains is to understand how concepts such as “cloud” and “edge” work within an industrial context.
In the IT view, “edge computing” implies data processing that occurs on-premise (i.e., processing not occurring in the cloud, typically occurring in local data centers). Another popular term, “industrial edge,” implies computing that is close to sensors and actuators in the manufacturing area, as close as possible to the production assets (typically occurring in industrial PCs and controllers). When both of these are applied together, they form the basis of the ongoing information technology (IT) and operations technology (OT) convergence in the industrial space.
Understanding edge and cloud drivers
There are multiple reasons why edge applications are growing in influence across industries. First, because of the sheer volume of data being generated by the new wave of connected devices, it is too costly to send all of that data up to the cloud; edge computing offers a less expensive alternative. Second, within certain applications, use of the cloud can disrupt performance because of latency (a round-trip delay that makes data coming back from the cloud arrive too late to be useful). For example, a blockage in a particular pump needs to be addressed as quickly as possible to avoid delays or disruption to production; the pump needs to be taken offline and repaired before it breaks. Besides latency, dependence on the cloud also runs the risk of a loss of connectivity, and the time to reconnect could be long enough to result in the failure of a critical manufacturing asset.
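The latency and bandwidth arguments above can be made concrete with a minimal sketch of edge-side logic. Everything here (the device names, the pressure threshold, the field names) is hypothetical and not tied to any vendor's product; the point is only that latency-critical decisions happen locally, while routine samples never leave the plant:

```python
# Hypothetical edge-side filter: act locally on latency-critical
# conditions and forward only anomalies to the cloud, rather than
# streaming every sample.

PRESSURE_LIMIT = 8.5  # bar; hypothetical trip threshold for the pump

def process_reading(reading: dict, cloud_buffer: list):
    """Handle one sensor reading at the edge.

    Returns an immediate local action (or None). Only anomalies are
    queued for the cloud, saving bandwidth on normal samples.
    """
    if reading["pressure_bar"] > PRESSURE_LIMIT:
        # Latency-critical: take the pump offline locally, without
        # waiting for a cloud round trip.
        cloud_buffer.append({"event": "pump_trip", **reading})
        return "shutdown_pump"
    if reading["pressure_bar"] > 0.9 * PRESSURE_LIMIT:
        # Near the limit: worth analyzing in the cloud, no local action.
        cloud_buffer.append({"event": "pressure_warning", **reading})
    return None  # normal sample: not forwarded

buffer = []
action = process_reading({"pump_id": "P-101", "pressure_bar": 9.1}, buffer)
# action == "shutdown_pump"; one trip event is queued for the cloud
```

The design choice is the one the paragraph describes: the shutdown decision never depends on cloud connectivity, and the cloud receives a compact event stream instead of every raw reading.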
In this new world of digital transformation, industrial stakeholders will succeed in maximizing digitization benefits by achieving the proper balance of cloud and edge, resulting in a cloud-edge continuum in terms of software and hardware management. Such a balance will require an analysis of the cost of each option and an understanding of the degree to which data will need to remain close to the production asset. For example, a cloud option could prove cost effective when piloting initiatives such as predictive maintenance. By starting a proof of concept in the cloud without incurring any CapEx, stakeholders can make an early determination as to whether such an investment will reduce maintenance costs over time.
On the other hand, a cloud option would be less effective for an application that manages rapid production line changeovers. Reprogramming and reconfiguring a manufacturing line, or loading the ingredients of a new recipe (in a Food and Beverage industry scenario), can be improved by managing such changeovers with the assistance of edge computing. In edge-versus-cloud field tests, it has been demonstrated that up to 30 minutes of changeover time can be saved when edge applications are deployed. In a scenario where an average of 10 changeovers occurs per day, the time savings and productivity gains become significant.
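The arithmetic behind that claim is simple. Using the figures cited above (30 minutes saved, 10 changeovers per day) and an assumed operating calendar of 300 days per year, which is a hypothetical figure for illustration only:

```python
# Back-of-the-envelope savings from faster changeovers,
# using the figures from the example above.
minutes_saved_per_changeover = 30
changeovers_per_day = 10
operating_days_per_year = 300  # hypothetical plant calendar

hours_saved_per_day = minutes_saved_per_changeover * changeovers_per_day / 60
hours_saved_per_year = hours_saved_per_day * operating_days_per_year

print(hours_saved_per_day)   # 5.0 hours of production time per day
print(hours_saved_per_year)  # 1500.0 hours per year
```

Five extra production hours per day is why even modest per-changeover savings compound into a significant productivity gain.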
Finally, the cloud-edge continuum in industrial domains requires specific cybersecurity practices to comply with regulations addressing critical infrastructure and to ensure business continuity. Processing data at the edge allows business-critical functions to be carried out regardless of connectivity to the cloud, minimizing the attack surface and reducing the possible impact of cyber threats.
An end-to-end framework for optimizing the productivity gains
Access to an open framework of connected devices, edge control and application analytics can help to simplify the task of having edge and cloud implementations work in a complementary manner. Architectures such as Schneider Electric EcoStruxure allow for high productivity activities such as predictive maintenance, remote management of edge assets, and real-time optimization of process control to be enabled in a mix of cloud and edge environments.
We are going through a once-in-a-lifetime technology shift that has the potential to transform every industry and every business. It is evolving at a pace very few of us have ever experienced.
Instead of asking “What is the IIoT?” the question now is “What can IIoT do for me?” The first step in this remarkable transformation is fundamentally changing the way that we think about business and automation assets. It involves shifting investments from older technologies and business strategies to new, innovative business models based on the latest technologies.
This transformation is a necessity to stay competitive, and you need to address these opportunities before your competitors do, or you’ll be in hot water…or worse. Digital transformation will have a big impact on your business. You’ll potentially make money, recruit the best people, beat your competition… and create happier customers.
For many years operational risk management (ORM) practices have been used to manage safety and environmental hazards with the sole aim of protecting people, production and profits. Effective ORM processes have proven useful in preventing adverse high consequence events and the impact on operational and profitable performance.
Safety Instrumented Systems (SIS) for Emergency Shutdown (ESD), Fire and Gas Detection (F&G), Burner Management (BMS), High Integrity Pressure Protection (HIPPS) etc. are vital elements of a successful ORM strategy.
New IIoT based safety system approaches are available that unlock potential productivity, performance and ultimately profitability gains. For example, the concept of using a single integrated safety and process control solution using common controllers, input / output and networks while still maintaining the risk reduction levels mandated by good design practice.
Opportunity to transform our EHS performance and profitability
Fundamental to the digital transformation is the use of IIoT technologies (connectivity, digitization, big data, analytics, visual cues, digital twins, artificial intelligence) which, when combined, amplify each other, creating a perfect storm of change: not just single improvements, but complete transformations.
Acting to realize the IIoT value
Smart connected operations leverage IIoT techniques to capture large quantities of diverse (structured and unstructured) data on a scale not previously possible, and convert it into actionable insight for enhanced collaboration and decision making.
Although many businesses are at an early stage in the adoption of IIoT technologies, that’s rapidly changing. According to the LNS Research spotlight on IIoT, 40% of companies have started an IIoT initiative, and a further 24% plan to start one within a year. Companies are clearly moving from investigating the impact of IIoT to a clear recognition of its potential business value.
Good safety performance directly correlates to good business
The traditional view of safety is that it is a necessary cost to the business at the expense of profitability: safety measures are required to gain and maintain a licence to operate, while the procedures and processes required for compliance reduce productivity and increase costs. Fortunately, pioneering business leaders are realizing that good safety performance directly correlates to good business. As companies evolve to a “profitable safety” way of thinking, there is a shift away from the traditional view of safety as a cost, to safety as a profit centre.
LNS Research spotlight data strongly suggests that the adoption of safety and risk management best practices leads to better operational performance across safety, reliability and efficiency:
7% higher overall equipment effectiveness (OEE) using a lifecycle approach to risk management
10% lower incident rates when safety systems are designed to both mitigate risk and improve productivity and performance
25% lower incident rates when IIoT technology is used to holistically manage safety and operational performance
The double-edged sword of IIoT: opportunity and risk
The proliferation of new IIoT technologies brings downside risk as well as upside opportunity.
One of the biggest barriers facing industrial operators is extracting rich data from aging systems that were not designed with openness, or with today’s quantity and structure of data, in mind. No matter how hard you may try, sometimes it’s just not possible to capitalize on the opportunity IIoT presents with existing infrastructure.
The IIoT is enabling new information and data structures. Technologies such as IIoT gateways, edge controllers, cloud computing and native Ethernet backbones are increasingly enhancing and displacing existing systems. Two-thirds of manufacturers already deploying IIoT technology are using it to break down the traditional hierarchical approach to manufacturing systems and data silos.
But as we solve one problem, we run the risk of creating new, different problems. Introducing any new technology brings new risks: adding more connectivity increases the potential cybersecurity attack surface, and bridging the formerly independent IT and OT domains may bring fundamental changes to the business model.
A systematic risk approach is required
Following a systematic lifecycle risk approach is vital for effective operational risk management. International standards for Safety (IEC 61508, IEC 61511), Cybersecurity (IEC 62443), Information Security Risk Management (ISO 27005), Occupational Health and Safety (OHSAS 18001) and Risk Management (ISO 31000) provide a systematic methodology and framework. These standards share a common requirement for closed loop risk management throughout the operating life of the asset.
Room for improvement in safety performance
Operating companies should regularly revisit the risk assessments to address new / emerging threats such as cybersecurity, learn from near-misses as well as identify areas of improvement based on operational / maintenance knowledge and experience gained from a running plant.
Untapped potential of IIoT technology to mitigate risk and improve performance
Safety system management processes and performance data often exist in silos, making it difficult to “connect the dots” and understand the interdependencies between discrete data points, and so to get a consolidated view of dynamic risk performance for effective decision making, especially under time pressure, when the clock is ticking.
For example, when an unscheduled outage has stopped production, time is of the essence. Establishing exactly what happened, when, and in what order is often labour-intensive, time-consuming and error-prone.
So, using IIoT concepts for data collection, consolidation, analysis and reporting can significantly reduce downtime, getting the plant restarted sooner and returning operations back to profit generation faster!
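The core of that consolidation exercise is merging timestamped records from separate systems into one ordered sequence. A minimal sketch follows; the log sources, tags, and events are all hypothetical and stand in for whatever alarm, SIS, and DCS historians a real plant would expose:

```python
from datetime import datetime

# Hypothetical event records from three siloed sources.
alarm_log = [("2023-05-01T10:02:07", "High vibration alarm, compressor K-201")]
sis_log   = [("2023-05-01T10:02:09", "SIS trip: K-201 emergency shutdown")]
dcs_log   = [("2023-05-01T10:01:55", "Discharge pressure rising on K-201")]

def consolidated_timeline(*logs):
    """Merge timestamped records from separate systems into one
    chronologically ordered sequence of events."""
    merged = [record for log in logs for record in log]
    return sorted(merged, key=lambda record: datetime.fromisoformat(record[0]))

for timestamp, event in consolidated_timeline(alarm_log, sis_log, dcs_log):
    print(timestamp, event)
# prints the DCS pressure rise first, then the alarm, then the SIS trip
```

Even this trivial merge shows why consolidation matters: the root-cause candidate (the pressure rise) sat in a different silo from the trip record, and only the combined timeline reveals the sequence at a glance.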
An integrated approach to Operational Risk Management for an IIoT world
IT/OT convergence, empowered by IIoT technologies, is changing the relationship between business systems (IT) and control/safety systems (OT). The scope of information has greatly expanded: vastly greater volumes and variety of data now require advanced analytics capabilities, while smart connected assets and operations introduce information security threats.
Historically this was accomplished with a raft of disparate equipment: separate controllers for control and safety, firewalls for cybersecurity, communications gateways, data servers and so on. Now IIoT-enabled common safety controllers provide a single, integrated safety and process control solution with inherent cybersecurity, all in one package, reducing cost, risk and time to value.
The world is moving towards a cleaner energy future amid rising investments in renewable energy, which over the long term will replace demand for coal and has the potential to impact global demand for oil. This phenomenon influences the mid- to long-term strategy and vision of the Oil & Gas Majors, driven by elements including, but not limited to, the following:
Pressure from shareholders, investors, governments and other stakeholders;
New emissions regulations and energy policies such as Paris climate accord;
Tackling energy security concerns in the context of sustainable development;
Talent attraction and retention, given the willingness of the younger generation to work for more environmentally friendly companies with longer-term growth potential;
Evolution of urban mobility and rise of fuel-efficient electric vehicles (EVs).
To address these external elements and be aligned with the energy market trends, Oil & Gas Majors have started to increase their exposure to cleaner sources of energy. They have also launched diversification strategies to go beyond their traditional oil and gas businesses.
TOTAL is probably the most active Major in diversification towards clean energies. Low-carbon businesses will account for 20% of TOTAL’s activities by 2035. An important step for TOTAL to reach this goal is to develop 10 GW of solar energy in 10 years.
In 2010, the company entered the biofuels production market by investing in “Amyris”. TOTAL also benefits from expertise in solar power through its majority share in “SunPower”, a US solar company. In 2016, TOTAL acquired “Lampiris” and “Saft”, providers of energy storage solutions, and a year later it created TOTAL Solar to further develop its solar business. It also took stakes in solar and wind energy producer “EREN RE” and energy efficiency firm “GreenFlex”. Furthermore, in April 2018, TOTAL purchased “Direct Energie”, a French utility, to accelerate its presence in gas and electricity generation and distribution. In September 2018, the company also moved into the EV charging market by acquiring France’s “G2mobility”.
Today, TOTAL runs a $6 billion renewables business. They have invested €150 million in 20+ renewable energy start-ups since 2008 and allocated $900 million to R&D projects related to low-carbon technologies in 2017.
The Norwegian oil company Statoil recently dropped “oil” from its name and rebranded itself as “Equinor” to illustrate its vision beyond oil and gas. The company currently manages a $2.3 billion offshore wind business and is developing carbon capture and storage (CCS) projects. Equinor also has plans to enlarge its renewable footprint through expansion into onshore wind, energy storage, smart grids and energy efficiency.
Equinor also holds stakes in solar assets in Argentina and Brazil. Additionally, in February 2016, the Major created a venture capital fund to invest up to $200 million in renewable energy companies over the next 4 to 7 years. The company states that by 2020, 25% of its R&D budget is expected to go into renewables, compared to 17% ($52 million) today. Equinor also aims to direct 15-20% of its capital investment to renewables by 2030, compared to the current 5-10% ($500-750 million).
Royal Dutch Shell created a “New Energies” division in early 2016 to bring together its existing hydrogen, biofuels and electrical activities, but the company is also using the new organization as a base for a new drive into the solar and wind power business. In fact, Shell intends to invest $1-2 billion in the New Energies division each year up to 2020. In 2018, the Major paid $217 million to become a shareholder of Silicon Ranch Corporation, a US solar company, 12 years after exiting the sector, in one of its latest moves to grow beyond its core oil and gas business. Still, it is worth mentioning that “New Energies” spending remains only a small portion of Shell’s total investment program.
BP was the first Oil & Gas Major to diversify into renewables in the early 2000s, when it rebranded itself as “Beyond Petroleum”. Today, the company’s focus is on biofuels, biopower and wind energy. In late 2017, BP also returned to the solar business, 6 years after quitting the sector: it acquired a stake in the biggest solar developer in the UK, “Lightsource”, and committed to even bigger solar investments. Moreover, in June 2018, BP purchased “Chargemaster”, the UK’s largest EV charging company. It is worth noting that BP’s investment plan aims at spending $500 million a year on low-carbon energy.
It is worth mentioning that BP does not have the same cash flexibility as the other Majors, due to the 2010 Macondo accident – the oil spill in the Gulf of Mexico – which created significant financial commitments. Its investments and acquisitions in renewables are therefore rather limited compared to those of its peers.
Unlike their European rivals, US-based Majors, Chevron and ExxonMobil, have not yet made large-scale investments in solar, wind, electric cars or energy storage. They have pursued a more cautious approach to new energies, and they are still capitalizing on domestic shale gas production to reduce their carbon footprint.
Chevron has limited expertise in solar, wind, geothermal and biofuels. The company is taking practical and cost-effective actions to address potential climate change issues, including investing about $1.1 billion in CCS projects in Australia and Canada, as well as launching a $100 million “Future Energy Fund” to invest in breakthrough technologies. Additionally, Chevron partners with universities to understand and evaluate the economic viability of different green energy sources.
ExxonMobil, for its part, defends its stance on renewables by partnering with leading universities and developers. The company has invested more than $9 billion in lower-carbon technologies since 2000, including CCS. Much of Exxon’s research in recent years has been dedicated to algae biofuels, where technological breakthroughs were achieved by applying advanced cell engineering.
On another level, Chevron and ExxonMobil have recently become the first US Majors to participate in the Oil and Gas Climate Initiative (OGCI), a group aiming to reduce “manmade greenhouse gas emissions”. Launched in 2014, the OGCI is now made up of 13 members: BP, Chevron, CNPC, ENI, Equinor, ExxonMobil, Occidental Petroleum, Pemex, Petrobras, Repsol, Saudi Aramco, Shell and TOTAL.
European vs. American Majors
European Majors are ahead of their American counterparts in diversifying into renewables. This can be explained by differences in the mindset and priorities of their societies, governments, shareholders and top management. European society is, on average, more concerned about sustainable development, which puts more pressure on governments, investors and energy companies.
On the other hand, social acceptance of the oil and gas business is much higher in the US than in Europe, which reduces the pressure to diversify. Additionally, US Majors’ top managers believe that by improving currently employed technology, increasing the share of gas as a transition bridge and investing in carbon capture and storage, they can improve their carbon footprint and act as responsible energy producers while remaining oil and gas companies. Lower returns on investment in renewable markets could be another factor keeping American oil companies from going green.
There is no doubt that corporate sustainability and growing attractiveness of renewables are pushing Oil & Gas Majors to invest in clean energy solutions. They now dedicate budgets for alternative energy sources, energy efficiency and clean mobility projects. Having said that, it should also be noted that only 4-5% of some Majors’ total annual capital expenditures is allocated to clean energies. Oil and gas activities will remain their core business for years to come, even as they diversify to renewables.
Further, although we are at an early stage of a major energy transition, its pace remains uncertain. In fact, the International Energy Agency believes that 70% of the spending on new energy supply to 2040 will be driven by governments. Two conclusions follow: first, the pace of the transition lies with state energy policies, not with the Oil & Gas Majors; second, state energy policies and priorities can shift over time, changing that pace.
Finally, even as renewables’ share in the energy mix grows continuously, oil and gas will not be materially displaced. Therefore, investment made in new energies by Majors is to complement their existing value chains and not to substitute their oil and gas businesses. In fact, it is to build an integrated portfolio, one which consists of both oil and gas, and renewable sources.
To conclude, we should keep in mind that diversification from oil and gas activities will require patience from investors and shareholders, as it is a long-term journey. The good news is that some of the world’s largest oil and gas producers are already investing billions in clean energies, from offshore wind farms to solar energy and energy storage solutions.
This is the third post in our series on motor management, which covers various aspects of motor integration in an electrical network and the industrial process.
The first post presented in detail the electrical, thermal and mechanical constraints to consider for large motor starting. The second post introduced the typical application constraints.
In this post we will try to answer the question: how do you select the right motor starting solution for a specific application?
There are several motor starting solutions, which can be compared at a glance in the following table:
Looking at the table, one might conclude that a variable speed drive is the right solution in every case – and in principle that would be correct. However, the final decision depends on several installation criteria:
Economic interest – the cost of the solution
Engineering simplicity – the effort required to evaluate and commission the solution
Ease of maintenance – which is also related to the cost of maintenance
Energy saving – the possibility to reduce energy consumption for some applications, for example when operation includes idle cycles or reduced flow rates
Flexibility of adaptation – for varying operational conditions of flow, pressure and operating cycle
The next table provides a relative comparison of the above solutions against these installation criteria. It shows that the basic, contactor-based starting methods are also the most cost-effective and the easiest to maintain; but, as the previous table shows, they also place the most stress on the network and the load:
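As a rough illustration of how such criteria can be traded off, the sketch below scores each starting method against the five installation criteria and ranks them by a weighted sum. The scores and weights are purely hypothetical placeholders for illustration – they are not taken from the tables above, and a real selection should use the actual comparison data:

```python
# Hypothetical relative scores (1 = worst, 5 = best) per criterion, in order:
# [cost, engineering simplicity, ease of maintenance, energy saving, flexibility]
SOLUTIONS = {
    "direct_on_line":       [5, 5, 5, 1, 1],
    "star_delta":           [4, 4, 4, 1, 1],
    "soft_starter":         [3, 3, 3, 2, 2],
    "variable_speed_drive": [2, 2, 2, 5, 5],
}

def rank(weights):
    """Rank the starting solutions by a weighted sum of criterion scores."""
    totals = {name: sum(w * s for w, s in zip(weights, scores))
              for name, scores in SOLUTIONS.items()}
    return sorted(totals, key=totals.get, reverse=True)

# A pumping application with long idle cycles: weight energy saving heavily.
print(rank([1, 1, 1, 3, 2]))
```

With energy saving weighted heavily, the variable speed drive comes out on top; with only cost and simplicity weighted, the contactor-based methods win – mirroring the trade-off described above.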
You can find more information about large motor starting in the PCIC Conference tutorial here.
For many years, the engineering and operation disciplines within industrial settings were totally detached. In very simple terms, designing, building and operating a plant involved a series of handovers from engineers to engineers: you would design the processes, hand over to an engineering company that would design and build the plant, then to the control vendor that would design and program the control system, and in the end hand everything over physically to the end user to operate the plant. Under such a fractured model, accessing and maintaining the latest documentation for optimizing the operational efficiency and profitability of the plant was, and continues to be, a major issue if not driven by the owner and operator.
With today’s process design software, there is no longer the same handover to someone who builds the plant. The design software interfaces with the tools used to build the plant and automatically programs the control system. Furthermore, the design tool’s output serves as a “digital representation” of the plant. Once the plant is online, this digital representation interacts with the real-time data generated by the physical plant. At that point, the software transforms the “digital representation” into a “digital twin” that reflects exactly what is happening in the physical plant.
Software empowered employees change the workflow dynamic
Up until recently, workers who operated key plant systems would often utilize only a narrow fraction of the wide functionality of their software. Visualization was complicated and functions were difficult to learn. Now, with more advanced and intuitive operational software, tasks can easily be accomplished without having to understand all the complex background details.
Bring these two aspects (engineering and operations) together, and you can imagine someone sitting in front of a screen seeing a 3D picture of the plant: where the pipes are, where the equipment is. Previously, when an alarm popped up, it simply reported an alarm on pump xyz with a location and coordinates, and someone would have to figure out from printed drawings where that pump was located.
Now, instead, the 3D model comes up, reports a pump alarm, and shows the pump on screen in the 3D model so that you can “virtually” make your way to it. You can virtually inspect the pump: what kinds of pipes are connected, whether it is a redundant pump, what kinds of valves you need to operate to isolate the pump from the rest of the system. By the time this data is sent to a service engineer, he knows exactly what to do; he makes his way over to the pump and follows the instructions. All the pre-work has already been done and simulated virtually, with the final step being the execution of the physical work.
Looking a little further into the future, the individual within the plant who is notified of the alarm is no longer sitting in front of a screen but is equipped with a set of virtual reality goggles that display the 3D representation, or digital twin, of the plant. The service engineer also has his own set of goggles. In his case, he sees the real plant, but digital data enters his field of vision and blends in on top of the reality he is seeing. He is experiencing augmented or “mixed” reality, which helps him accomplish his repair in a safer and more expeditious manner.
Knowledge sharing and efficiency savings
The implications of such technologies are multifaceted. First, you can make full use, in a more intuitive way, of very sophisticated capabilities that existed before but may not have been usable due to their complexity and the degree of training required. Users with less training can now access those functionalities.
Second, software development tools have become much more sophisticated in terms of graphical capabilities, in respect to processing capabilities, and in terms of the flexibility of where the software runs.
Because of all of this, tools are better used, and people can work with equipment more intuitively, without risking harm to the real world and without needing to understand all of the details happening in the background.
Find out more information on the industry-leading portfolio of engineering and industrial software by clicking here.
Executives responsible for managing Oil & Gas downstream operations are being tasked to come up with new ways to either cut costs or generate higher revenues over the life of their process plants. Although many rely on rising crude oil prices to drive revenues, this is a short-term strategy that should be supplemented with longer-term efficiency improvement initiatives. Many are investing in new technologies to help streamline operations but are having difficulty assessing the business value of that technology. Measurements in revenue gains or cost savings should be scrutinized not only by assessing the value of individual technology components, but as an integrated whole. In this way a metric such as Return on Capital Employed (ROCE) can be used to identify how much the new technology is contributing to the value of the operation.
ROCE represents the percentage return that a company makes over its invested capital, a measure of the profitability and value-creating potential after taking into account the amount of initial capital invested. The ROCE ratio is expressed as Earnings Before Interest and Tax (EBIT) / Capital Employed. Thus, ROCE can serve as a useful metric for calculating efficiency and profitability and can help determine how to squeeze a higher return out of corporate capital investments.
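As a minimal numeric sketch of the ratio (the EBIT and capital figures below are hypothetical, chosen only to illustrate the calculation):

```python
def roce(ebit, capital_employed):
    """Return on Capital Employed = EBIT / Capital Employed."""
    return ebit / capital_employed

# Hypothetical refinery: $500M EBIT on $4,000M of capital employed.
print(f"ROCE = {roce(500, 4000):.1%}")  # ROCE = 12.5%
```

Because the denominator is the capital invested, technology that raises EBIT without requiring a proportional increase in capital employed directly lifts this ratio.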
This blog is the third in a three-part series that assesses the impact of technology modernization on CapEx, OpEx and revenue generation. These three financial variables present an overall picture of how ROCE metric elements can be combined to determine the business value of automation technology investments. All three blogs reflect real-life downstream Oil & Gas industry case studies in which Schneider Electric played a key role as technology consultant and provider.
Case study: Increased refinery revenues
Technology deployments that generate healthy ROCE also contribute to revenue growth aspects of refinery operations. Schneider Electric field studies have shown that annual revenue gains of 1.77% can be realized within refineries from improved responses to changes in market conditions, reduced material losses, fewer operator errors, increased yield of more valuable products, increased volume, increased market share, and increased average sales price.
Our studies analyzed some of the ways technologies can be deployed to increase refinery throughput, improve the yield structure and optimize utilization rates so that revenues, efficiencies, maintenance and turnaround times are improved.
Below is a summary of some of the more significant revenue generation methods, and their impact on the earnings of a medium-size petroleum refining facility:
The biggest contributor to revenue gain ($21.83 Million) was the reduction in material losses. These reductions were achieved through new generation daily refinery yield accounting systems. Sources of loss were quickly identified and reduced or eliminated. The loss elimination increased the volume of fuel products available for sale, resulting in higher refinery margins.
Revenue gains of $14.55 Million were generated through the increased yield of more valuable refinery products. Online, real-time optimization systems and high-performance advanced process control systems were able to maintain target values of refinery operations outputs resulting in these higher yields.
Another source of revenue gain (totaling $7.97 Million) was driven through fewer operator errors. By consistently enforcing best operating practices, and by minimizing incorrect decisions through simulation-based training, error frequency was reduced, in some cases by more than half.
The ability to respond better to changes in market conditions also positively impacted revenue (with gains totaling $7.28 Million). Better designed refinery supply chain management systems enabled planners to identify the most profitable crude oils to purchase, and allowed refinery schedulers to respond quickly to changes in feedstock properties, energy costs, product demand and product prices.
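The four contributions reported above can be tallied to gauge their combined effect on the facility’s annual earnings (the dollar figures are those quoted in the study; the script itself is just a simple illustration):

```python
# Annual revenue gain contributions from the case study ($ million).
gains = {
    "reduced material losses": 21.83,
    "increased yield of more valuable products": 14.55,
    "fewer operator errors": 7.97,
    "better response to market conditions": 7.28,
}

total = sum(gains.values())
print(f"Combined annual revenue gain: ${total:.2f}M")  # $51.63M
```

Summing the individual contributions in this way reflects the point made earlier: the business value of the technology shows up as an integrated whole, not just in any single component.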