Making Legacy Engineering Data Sweat

By Geoffrey Cann

A pioneering application of artificial intelligence at Woodside Energy is finally ready for wider deployment in oil and gas.

I learned about this use case back in 2016, at APPEA’s annual conference in Perth, where Woodside’s data science team presented their work. Surprisingly, few companies bothered to replicate this innovation, even though it was both proven and easy to execute.

The problem

Many oil and gas facilities have been in production for decades, and their owners intend to keep them producing for decades more.

Not only do these assets handily outlast their designers, but they are now outlasting their maintenance engineers, operators, logistics managers, and key suppliers. In short, the entire original workforce.

But the oil and gas industry has long relied on the memory of its people to recall critical information about its assets, information beyond the kinds of data easily found in modern systems. Answers to questions like “why did we design it this way” and “have we encountered this problem before” depend on the memories of workers.

An executive at Irving Oil, a major oil refining and distribution company, once told me that the human workforce managing complex industrial assets involuntarily commits those assets to memory over time. It just happens. Wetware memory has worked reliably for decades, although it breaks down in times of high turnover. Suncor, a large oil company accounting for fully 1% of global oil production, has told me that when turnover exceeds 7% annually, corporate memory falters.

One might ask why an industrial enterprise would run its business with mission-critical information about its operations tucked away in documents. Probably because that was the only known solution.

Well, several trends are at play that are forcing the adoption of new ways to approach this situation.

Changing workforce composition

The teams of people who look after these plants are at risk. The average age of workers in the oil industry is 56, and over half of oil and gas engineers are expected to retire in the next decade. There will be more contractors and outsourced service providers.

Large and growing accumulated intellectual property

Oil and gas facilities accumulate lots of studies over the years. Reports, charts, diagrams, emails, meeting notes, investigations, spreadsheets, analyses: it’s a large pile, and it keeps growing as the business matures and as new data types are adopted (chats, wikis, videos, recordings, audio, time series).

As time marches on, the content changes too. It’s becoming more comprehensive, more complex, lengthier, richer. Better tools, techniques and technologies mean that studies can cover more ground and contain far more substance. Instead of a single computation carried out by slide rule in 1960, a cloud-enabled digital twin can run millions of simulations under different assumption sets, all captured in the studies, in minutes.

Technology obsolescence that strands content

Even the technology used to create content can be a hindrance. Tools obsolesce and are abandoned as better ones come along, potentially stranding the analysis and work products over time.

The risks

This situation creates a number of risks that, in times of high oil prices, facility owners comfortably address by retaining large numbers of highly paid employees.

Wasted resources

How much time do time-stressed, highly valuable engineers spend just locating old but still relevant studies?

The presenters in Perth estimated that some 80% of engineering time was typically occupied with finding documents and reading them to discover what, if anything, could be useful. That’s a lot of engineering cycles that could otherwise be devoted to more valuable activities, like actual engineering.

Aside from low productivity and the high cost of using engineers as a search service, this approach cannot easily speed up or scale up. People can’t simply read faster, and it’s not always feasible to throw more engineers at the search task.

I can imagine that at certain times, such as when something unexpected happens, the pressure to find the right prior analysis becomes critical (think Deepwater Horizon).

High operational risk

A process that depends heavily on people’s memory is risky when the outcome hinges on finding the right document or collection of documents, and avoiding the wrong ones. Those people might not be there, and memory can be faulty. When that documentation matters for in-the-moment operations, the operational risk is higher.

Redundant spend

What happens when some piece of analysis can’t be found? Oil companies often find themselves purchasing the same analysis or data over and over simply because they couldn’t find it the first time and presumed it was lost.

A solution

If there’s one thing that new digital innovations are already very good at, it’s sifting through mountains of documentation to quickly find things that match a set of criteria. Google has mastered the search problem, certainly for text. Generative artificial intelligence tools (generative AI) are built by swallowing the internet whole and can instantly generate high-quality responses to a query.

If there’s something that generative AI is really getting good at, it’s interpreting spoken language and figuring out what we mean. ChatGPT shows us just how extraordinarily capable these natural language tools can be.

What if we combine the very best in content capture, search and language processing?

Imagine being able to ask, with your voice rather than your fingers, a complex engineering question, with all its jargon, of a system that contains all of the company’s prior engineering content, and having the system return, within a couple of seconds, a comprehensive and accurate summary of every reference that matches the question. Imagine asking for the supporting evidence behind its reasoning. Imagine being able to teach the system over time, so that it gets smarter at interpreting questions and at identifying the most reliable answer sources.

This is like having the memories of every former and current engineer, every former and current contractor, every former and current specialist, and all of their accumulated expertise, rolled into one super-quick engineer who can do 80% of the job practically instantly, gets smarter with every question, never sleeps, and never takes leave. What’s that worth? Squillions.

Years ago, companies would have had to do all the legwork to create the searchable database. Now, you need only point a large language model at your corporate data repositories (which builds the searchable content) and add a language processor (to handle the queries and search the content), and voila: you have created the virtual engineering assistant of the future.
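To make that two-part pattern concrete, here is a minimal sketch of the approach now commonly called retrieval-augmented generation: index the document repository so it can be searched, retrieve the passages most relevant to an engineer’s question, and hand them to a language model to summarize with references. The folder name, the example question and the answer_with_llm placeholder are illustrative assumptions, not any particular company’s implementation.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# Assumes plain-text exports of engineering documents sit in ./engineering_docs;
# answer_with_llm() is a placeholder for a privately hosted language model.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOC_DIR = Path("engineering_docs")  # hypothetical repository export
doc_paths = sorted(DOC_DIR.glob("*.txt"))
texts = [p.read_text(errors="ignore") for p in doc_paths]

# Step 1: build the searchable content by vectorizing every document once.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(texts)

def retrieve(question: str, top_k: int = 5):
    """Return the top_k documents most similar to the question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_matrix).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [(doc_paths[i].name, texts[i][:2000]) for i in best]

def answer_with_llm(question: str, passages) -> str:
    """Placeholder: send the question plus retrieved passages to a private
    language model and return its cited summary."""
    context = "\n\n".join(f"[{name}]\n{excerpt}" for name, excerpt in passages)
    prompt = (
        "Answer the engineering question using only the excerpts below, "
        f"citing document names.\n\nQuestion: {question}\n\n{context}"
    )
    raise NotImplementedError("wire this to your company's LLM endpoint")

# Step 2: handle a query end to end (the question itself is made up).
question = "Why was the Train 2 flare header resized in the debottlenecking study?"
sources = retrieve(question)
print([name for name, _ in sources])
# summary = answer_with_llm(question, sources)
```

In practice the keyword scoring above would likely be replaced with embedding-based semantic search, but the division of labour is the same one described in the paragraph above: indexed content plus a language layer to handle the questions.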

Where does this apply?

The use case for accessing engineering documentation works perfectly well in many settings: oil sands plants, mines, refineries, petrochemical plants. Here are several more applications.

Reservoir analysis

Aspects of reservoir analysis would be good candidates for a dose of cognitive computing and artificial intelligence. Finding and sifting through well logs, drilling records, land documents and reservoir studies surely consumes a considerable amount of petroleum engineers’ time, and they would probably prefer to spend that time on analysis.

Mergers

Frequently in mergers, companies rationalize workforces to take advantage of scale economies, but sacrifice corporate memory in the process. Pointing a language model at the acquired company’s data is like keeping a portion of every prior employee on in the new organisation, and not just the engineers, but also the people in the parts of the business most affected by mergers (IT, supply chain, HR, finance, corporate).

Contract analysis

Oil and gas businesses strangle themselves with contracts, and keeping tabs on them is painful. Contract managers would likely value a rich conversation with their contracts database, where they could ask which contract is most current, what terms are in force, and which contracts are going to expire and when.
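As a rough illustration of what that conversation could rest on, the sketch below asks a language model to pull basic terms (counterparty, effective and expiry dates, renewal notice period) out of a contract’s text into a structured record that can then be filtered and sorted. The field names and the call_private_llm placeholder are assumptions made for illustration, not any specific vendor’s API.

```python
# Illustrative sketch: reduce a contract to structured, queryable terms.
# call_private_llm() is a placeholder for an internally hosted language model.
import json
from dataclasses import dataclass

@dataclass
class ContractTerms:
    counterparty: str
    effective_date: str       # ISO date string, e.g. "2021-04-01"
    expiry_date: str
    renewal_notice_days: int

EXTRACTION_PROMPT = """Read the contract text below and return JSON with the
fields: counterparty, effective_date, expiry_date, renewal_notice_days.

Contract:
{contract_text}
"""

def call_private_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a private model and return raw text."""
    raise NotImplementedError("wire this to your company's LLM endpoint")

def extract_terms(contract_text: str) -> ContractTerms:
    raw = call_private_llm(EXTRACTION_PROMPT.format(contract_text=contract_text))
    return ContractTerms(**json.loads(raw))

# Once every contract is reduced to a ContractTerms record, questions like
# "which contracts expire and when" become ordinary queries over a table.
```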

General Q&A

There are lots of other random questions in everyday life at an oil and gas facility. “What time does the next vessel arrive?”, “When was that valve last repacked?”, “Is there anyone on crew with a certificate in high voltage?”, “How many valves by this manufacturer are on site, including in inventory?” These are the kinds of questions that generative AI is able to answer.

Conclusions

Oil and gas companies cannot reliably use the public ChatGPT, as it was trained on the whole of the internet, which is a mix of fact and fiction, science and religion, truth and lies, and faulty logic. However, training a private version on the company’s own content unlocks a huge use case that was proven many years ago.


Artwork is by Geoffrey Cann, cranked out on an iPad using Procreate.
