Daniela George
Consider adopting these best practices in your business.
Baker Hughes

Over the last 12 months, an energy technology company conducted numerous blind interviews with users. The interviews were done blind for a simple reason: to allow these individuals to speak freely, without the self-censoring that often occurs when a person says what they think the interviewer wants to hear rather than what the interviewer needs to hear. The professionals interviewed were given the space to talk at length—up to 60 minutes in most cases—about the problems they were having managing their assets with status quo approaches and what they were already doing (or planning to do) differently.

While the assets discussed varied widely, reflecting the diversity of industries, the one asset type that stood out time and again was pumps. Surprisingly, it was often not the most critical pumps in the process that were having a cumulative impact but rather the garden variety: the ones found by the hundreds or even thousands in many plants. The spotlight is on pumps because there are simply so many of them, and small gains made with each one sum to a substantial cumulative impact—whether good or bad.

Below are ten of the most important insights obtained from these interviews.

1. Turn attention to less critical assets.

The reality is the critical equipment in most plants has been properly instrumented for decades with sophisticated online systems that both protect the asset and allow monitoring of its condition. Gains are not coming through dramatic improvements on this class of assets because most operators are already achieving high levels of reliability. Instead, these operators have turned their attention to less critical assets, namely pumps, realizing there are often thousands of such assets and “many drops create an ocean.” A key step for these operators is to look at how much is being spent on maintenance and inspections of less critical assets, examine how many failures are still occurring in spite of route-based portable data collection (PDC) approaches, and go hard after such assets with a different approach. That won’t necessarily mean every asset, but it will mean a certain percentage of bad actors and mid-criticality assets.

2. Use—don’t lose—your people by deploying them to more fulfilling work.

Strategies that seek to justify expenditures through job elimination rarely succeed. A better strategy is to ask how personnel can be used differently and more efficiently, often through more rewarding tasks. Manually collecting data from every asset in a plant is almost never at the top of anyone’s list. Few analysts have the luxury of only examining data, either. They must routinely both collect the data and then interpret it. Many will in turn say the most time-consuming part of the job (aside from collecting the data) is sifting through all of the alarms and constantly optimizing threshold-based alarms for hundreds or thousands of assets. These are areas ripe for digitization and artificial intelligence (AI) to assist people, not replace them. This frees analysts to act on identified issues and focus on legitimate problems, isolating root causes rather than chasing alarms.
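The tedium of tuning static alarm limits, described above, is exactly the kind of task automation can absorb. The sketch below is illustrative only (the data and the three-sigma rule are assumptions, not anything from the interviews): it contrasts a fixed threshold with a simple self-adjusting band derived from each machine's own recent readings, so limits no longer need constant manual adjustment.

```python
from statistics import mean, stdev

def adaptive_alarm(readings, window=20, sigmas=3.0):
    """Flag readings exceeding a rolling mean + N-sigma band.

    Unlike a fixed threshold, the band adapts to each machine's
    own recent behavior, reducing manual threshold tuning.
    Returns (index, reading, limit) tuples for each alarm.
    """
    alarms = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        limit = mean(baseline) + sigmas * stdev(baseline)
        if readings[i] > limit:
            alarms.append((i, readings[i], round(limit, 3)))
    return alarms

# Hypothetical overall-vibration trend (mm/s): steady running,
# then a step change such as a developing bearing fault
trend = [2.0 + 0.05 * (i % 5) for i in range(40)] + [4.5, 4.6, 4.7]
print(adaptive_alarm(trend))
```

In practice a real system would work on spectra and multiple features, but the principle is the same: let the software maintain the limits so the analyst only sees genuine anomalies.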

3. Don’t get rid of that portable data collector.

The best companies are using a combination of online and offline monitoring for their less critical assets, just as they are using a combination of condition monitoring technologies. Operators are looking across all of their less critical assets and asking which ones can (and should) be converted to online. They are then using the newly freed-up hours to do other things, whether it is collecting data from assets that were previously unaddressed or simply optimizing the condition monitoring program as a whole. The portable data collector can be used to go after assets that were previously too low in priority, to balance and align machines, and to help diagnose especially difficult problems where supplementary data might be useful. Many of today’s providers have platforms that can seamlessly integrate data from multiple sources, including online and offline vibration, while also including things like process data.

4. Use all available technologies.

Vibration is undoubtedly important, but so is lubrication analysis, thermography, motor current analysis, operating deflection shape analysis and motion amplification videography. The best practitioners are embracing all these tools and optimizing their programs to match the technologies to the failure modes and needs of each asset.

5. Leverage digital transformation initiatives.

Converting a program based purely on manual data collection to a program that automates those tasks for a percentage of assets falls squarely into the realm of digital transformation. It is also highly scalable, a key element of digital transformation initiatives looking for enterprise-wide impact.

6. Don’t reject the important role of AI.

AI has become a polarizing issue. It is rare to find someone who hasn’t had at least one disappointing experience where AI overpromised and underdelivered. Many have a view of AI that assumes it must be extensively trained on mountains of historical data and presided over by data scientists and specialists who must constantly tune and adjust it. While such AI undoubtedly exists, highly effective AI that is forward looking, requires almost no training and is 95% accurate is now the norm when the right provider is chosen.

7. Use AI to empower people, not replace them.

The hype surrounding AI has not helped bring clarity and has instead injected a sense of foreboding, with connotations that jobs will be eliminated and humans rendered almost obsolete. The reality is that AI is simply a tool to relieve CM practitioners of tedious tasks, freeing them to focus on higher-value activities. The best practitioners do not rely exclusively on AI to make machinery decisions. They allow AI to do the grunt work of collecting and analyzing data, flagging anomalies and then delivering those anomalies along with suspected causes and recommended remedies to human specialists who further vet and validate the findings. Then—and only then—do they act on those findings.

8. Convert capital expenditures (capex) to operating expenses (opex) by leveraging subscription-based approaches.

The conventional approach to condition monitoring required the user to own and operate the infrastructure and bear all costs associated with it. As a result, this almost always entailed one or more capex projects. The large investments in infrastructure were borne directly by the user, along with substantial risk.

A subscription-based approach circumvents this, and almost everyone interviewed was either actively exploring this or had already done at least a proof-of-concept. This model is highly attractive because it shifts the responsibility for the infrastructure to the provider. It turns large capital outlays into bite-sized opex outlays. It also provides a granularity of expense that is machine-based rather than infrastructure-based, meaning one can think in terms of simply adding another machine at $X per machine—not in terms of sensors, monitors, cables, networks and servers.
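The per-machine granularity described above can be made concrete with a toy calculation. All figures below are hypothetical and for illustration only (the article deliberately leaves the per-machine price as "$X"); the point is that under a subscription, adding a machine changes cost by a known, fixed increment rather than triggering a new infrastructure project.

```python
def subscription_opex(machines, fee_per_machine_per_month, months):
    """Total opex for a service priced per machine per month."""
    return machines * fee_per_machine_per_month * months

# Hypothetical figures: 100 machines at $50/machine/month over 3 years
base = subscription_opex(machines=100,
                         fee_per_machine_per_month=50,
                         months=36)
print(base)  # total spread across 36 monthly opex payments

# Expanding coverage is a fixed, predictable increment per machine:
increment = subscription_opex(101, 50, 36) - base
print(increment)
```

Contrast this with the capex model, where the marginal cost of one more machine depends on whether existing sensors, monitors, networks and servers have spare capacity.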

9. Use collaborative platforms that enable workflow, not just communication.

If the only goal was communication, email would suffice. But when issues arise, the ability to actually manage workflow becomes important. Is an issue awaiting action from someone? What were the findings? Where is the underlying data? Is the issue urgent or routine? Who has reviewed it and weighed in? Workflow tools provide a proper solution to all of these things, including communication and collaboration across functional boundaries and departments.
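The questions in the paragraph above map naturally onto a structured issue record. The sketch below is a minimal data-structure illustration (the field names, states and example values are assumptions, not any particular product's schema): each issue carries its urgency, status, a pointer to the underlying data, and the reviewers who have weighed in, so the workflow itself answers "who, what, where, how urgent."

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A condition monitoring issue tracked through a workflow."""
    machine: str
    finding: str
    urgency: str = "routine"        # or "urgent"
    status: str = "open"            # open -> in_review -> closed
    data_ref: str = ""              # link to the underlying data
    reviewers: list = field(default_factory=list)
    resolution: str = ""

    def review(self, who, comment):
        """Record a reviewer's input and move the issue forward."""
        self.reviewers.append((who, comment))
        self.status = "in_review"

    def close(self, resolution):
        """Close out the issue with its resolution on record."""
        self.resolution = resolution
        self.status = "closed"

# Hypothetical example: a pump with a developing bearing issue
issue = Issue("P-101A", "Rising outboard bearing vibration",
              urgency="urgent", data_ref="trend/P-101A/2024-06")
issue.review("analyst_a", "Spectrum consistent with outer race defect")
issue.close("Bearing replaced at next planned opportunity")
print(issue.status, len(issue.reviewers))
```

A real platform adds notifications, attachments and cross-department visibility on top of a record like this; email alone preserves none of that state.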

10. Keep score to validate initial and ongoing value.

A major issue with many condition monitoring programs is so much time is spent collecting, analyzing and acting that no time is available to keep track of the wins. While it sounds obvious, it is often overlooked, and being unable to show value can jeopardize a program’s ongoing viability.
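Keeping score can be as simple as logging each avoided failure and computing program ROI. The figures below are hypothetical, used only to show the arithmetic: ROI is the value returned beyond each dollar spent, expressed as a fraction of program cost.

```python
def program_roi(avoided_costs, program_cost):
    """ROI as a fraction: (value delivered - cost) / cost."""
    return (avoided_costs - program_cost) / program_cost

# Hypothetical year of logged "saves" (avoided repair/downtime costs)
saves = [40_000, 15_000, 90_000]
annual_cost = 100_000          # subscription fees plus internal effort
roi = program_roi(sum(saves), annual_cost)
print(f"{roi:.0%}")            # (145k - 100k) / 100k = 45%
```

A running tally like this is cheap to maintain and gives management the evidence needed to defend and expand the program.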

Bringing It All Together: The Outcome-Based Model

An outcome-based model embodies many of the insights these user interviews revealed:

  • It is a subscription-based approach and turns capex into opex, making the previously unaffordable affordable.
  • It puts the burden of infrastructure on the provider, not the user.
  • It leverages AI by combining it with human expertise to vet/validate findings before delivering to users to take action.
  • It uses online technology to collect data.
  • It provides a true workflow environment for notification, communication, collaboration and tracking issues to resolution/closure.
  • It shifts the burden of people and skills to the provider.
  • It improves the ratio of assets to people.
  • It keeps score so management always sees key performance indicators like program return on investment (ROI), number of saves, etc.
  • It is highly scalable and lends itself to enterprise-wide implementations, not just single-plant implementations.
  • It consistently delivers ROI in excess of 30%, with time to value frequently seven days or less.

Outcome-based models are currently the fastest growing segment of the market. In addition to all of the reasons mentioned, they have the advantage of being both low risk and relatively unobtrusive. Most require nothing more of the user’s IT environment than a browser and an internet connection. Sensor installation, network configuration and all other aspects rest with the provider, who is responsible for delivering validated and vetted machinery insights to the user rather than raw data or even monthly reports. An asset health insight is designed to be actionable by people on site, who take the findings, further validate where necessary and work closely with the provider’s machinery analyst to address issues. It is not appropriate to think of such offerings as merely condition monitoring as a service, but rather machine health as a service, where the deliverable is machine health outcomes—and thus the designation as an outcome-based model.