Real Talk: Cenosco's AI Machine Vision Framework for Asset Integrity Inspections

Moving from data chaos to actionable RBI insights is one of the biggest challenges inspection teams face today, and AI Machine Vision is a big part of how we see that changing. Cenosco's CEO, Rahul Kejriwal, breaks it down really well across three practical levels.

26 February '26


In my previous article, I described the shift in Asset Integrity Management from a series of static snapshots to a continuous live stream through “Connected Intelligence.” The response from the industry was overwhelmingly positive and one conclusion was very clear: there is a profound hunger for this evolution, but there is also a growing sense of exhaustion.

Today, the energy and process industries find themselves in a sobering paradox: we are becoming data-rich but insight-poor.

Path to Data Paradox in Asset Integrity Inspections

How did we reach this point?

For the last decade, the process industry has aimed to transition from manual, labour-intensive inspections to autonomous inspection systems capable of continuously monitoring or assessing asset condition at scale. Even at Cenosco, one of our key 2030 goals is to support our customers in achieving an Autonomous Inspection future. The business case for this autonomous future is compelling:

  • Massive risk reduction by removing humans from hazards,
  • Overall cost reduction,
  • A way to mitigate a shrinking skilled workforce.

Data Bottleneck in the AI Machine Vision Era

To achieve this future, our customers have invested heavily in the hardware of the future. Robotics, autonomous drones, and IoT sensors are becoming more ubiquitous, as asset owners capture more information than ever before.

Yet the number of qualified, experienced engineers available to analyze that data remains a critical bottleneck. Instead of performing the high-impact engineering they were trained for—predicting failure modes and optimizing asset life—our best people are being turned into Data Entry Analysts, dedicating most of their effort to manual, low-value, repetitive tasks that could be fully automated.

They spend their days drowning in a sea of folders and spreadsheets. They are frustrated. They are balancing a high-stress field job with an ever-expanding mountain of digital noise generated by management’s latest transformation initiatives.

Engineers don’t need more data; they need a filter. They need a platform like the AI Machine Vision co‑pilot that goes beyond simply detecting defects or anomalies—one that provides insights, learns continuously, and guides engineers toward the most effective mitigation actions. This allows them to focus their time and expertise on taking decisive, high‑impact action.

The Bridge Between Volume and Value

Well trained AI Machine Vision (MV) models are the essential bridge between the high-volume world of robotics and the high-value world of human engineering.

To visualize the problem, imagine raw data as a massive reservoir behind a floodgate. Without an intelligence layer, the engineer is standing downstream with a glass, trying to catch a single drop of meaning as the gate opens. It becomes a chaotic, overwhelming process where 99% of the value is lost in the noise. This mirrors the feedback we consistently receive after drone inspections, video surveys, and laser scans used to assess asset condition at scale: reviewing thousands of images and hours of footage is extremely labor‑intensive, and operators gain almost no automated, actionable insights from it.

Machine Vision acts as the precision filtration system. It siphons that vast reservoir of raw data, removes the “sediment” of healthy or irrelevant information, and delivers a concentrated stream of high‑value inspection findings directly to the engineering expert.
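The filtration idea above can be sketched in a few lines of code. This is a minimal illustration, not Cenosco's actual pipeline; the `Finding` structure, labels, and thresholds are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset_id: str      # e.g. a piping circuit tag
    label: str         # defect class predicted by the MV model
    confidence: float  # model confidence, 0.0 to 1.0

def filter_findings(raw, min_confidence=0.8, ignore_labels=("healthy_surface",)):
    """Keep only high-confidence, non-trivial findings for engineer review."""
    return [f for f in raw
            if f.confidence >= min_confidence and f.label not in ignore_labels]

# Thousands of raw detections go in; a short review queue comes out.
raw = [
    Finding("CIRC-A", "healthy_surface", 0.97),      # "sediment": filtered out
    Finding("CIRC-A", "coating_breakdown", 0.91),    # kept for review
    Finding("CIRC-B", "substrate_corrosion", 0.55),  # too uncertain: filtered out
]
review_queue = filter_findings(raw)
```

The key design point is that the filter's job is subtraction: the engineer only ever sees the concentrated stream that survives it.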

Its role shouldn’t stop at detection. The real value emerges when that filtered knowledge becomes prescriptive – learning from patterns, understanding risk context, and recommending the right mitigation actions.

This is how sheer quantity becomes quality, and quality becomes decisive action.

Cenosco’s Three-Level Model for Machine Vision

At Cenosco, we see effective AI Machine Vision built in three incremental and distinct levels. We believe that this tiered approach is the best way to move from “captured data” to “actionable intelligence” while solving the security concerns that have traditionally slowed AI adoption.


Level 1: The Foundation of Generic Intelligence (20-30% accuracy)

The journey starts with Level 1 Generic Models, the “broad‑spectrum” neural networks, often open‑source or commercial foundation models. However, a generic model lacks industrial context; the intent is there, but the specialized “vocabulary” of defects is missing. As a result, coating deterioration, substrate corrosion, or insulation damage may go unrecognized or be flagged incorrectly. Without deeper, domain‑specific training, these models tend to produce too many false positives or false negatives, adding even more “noise” for the engineer to filter.

At Cenosco, we use two flavours of a best-in-class open-source machine vision model as our base (Level 1 models). The larger model, with 9.4M parameters, powers our Cloud-Based Machine Vision technology. Its larger size gives us more power and accuracy but also makes it more expensive to run (higher token cost per analysis).

Our ‘offline’ Machine Vision technology is designed to run on mobile devices or local machines, without an internet connection. It uses a smaller, 2.6M-parameter Level 1 model. It is quicker and cheaper to run than our cloud model but less accurate.

You can think of these two Level 1 models as the Fast and Thinking modes in Gemini or ChatGPT.
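The trade-off between the two flavours can be sketched as a simple dispatch. The model identifiers and selection logic below are hypothetical; only the parameter counts come from the description above:

```python
# Hypothetical dispatch between the two Level 1 model flavours: a larger
# cloud model (9.4M parameters) vs. a smaller on-device model (2.6M
# parameters). The actual model identifiers are not public.
LEVEL1_MODELS = {
    "cloud":   {"params": 9_400_000, "runs_offline": False},  # more accurate, costlier
    "offline": {"params": 2_600_000, "runs_offline": True},   # faster, cheaper, less accurate
}

def select_model(has_internet: bool, prefer_accuracy: bool) -> str:
    """Pick a model flavour based on connectivity and accuracy needs."""
    if not has_internet:
        return "offline"  # the on-device model is the only option in the field
    return "cloud" if prefer_accuracy else "offline"
```

In the field, connectivity usually decides for you; when both options are available, the choice is an accuracy-versus-cost call.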

Using open-source models as our Level 1 base model increases resiliency and flexibility. Flexibility is important in an end-market that is moving incredibly fast – we are constantly re-evaluating our model choices for the latest AI developments. However, open source also carries a much higher bar to ensure security.

At Level 1, we conduct regular vulnerability scans of our open-source libraries and offer auditability of the results. At Level 2, we take great care to make our model deployments secure: we always deploy our Level 2 models in an Azure “Digital Environment” that follows customer IT&S and Automation Systems Security Standards.

Level 2: Cenosco Machine Vision – The Power of Pre-training (40-60% accuracy)

This is where Cenosco’s unique technology takes over. To move beyond generic sight, we apply a heavy‑lift computational process known as Pre‑training. We take the foundational model and essentially “re‑wire” its intuition by exposing it to millions of curated industrial images: various metallurgies, complex piping circuits including deadlegs, structures, storage tanks, heat exchangers, and more.

From a technical standpoint, pre‑training involves adjusting the neural network’s internal “weights” so it begins to prioritize industrial features over everyday objects. In essence, we are teaching the AI the visual physics of degradation. The beauty of our Level 2 Machine Vision model lies in this developed “industry intuition.” When applied to a new data stream at a new site, the Level 2 model is immediately capable of identifying common and broader defect / anomaly types out of the box.
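What “adjusting the weights” means can be shown with a toy single-neuron classifier. The features and labels below are invented purely for illustration; real pre-training adjusts millions of weights across a deep network, but the mechanism per weight is the same gradient step:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, x, y, lr=0.5):
    """One gradient step of logistic loss: w <- w - lr * (p - y) * x."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]

# weights: [everyday-object feature, industrial-degradation feature]
w = [1.0, 0.0]  # a generic model: ignores the industrial feature entirely
for _ in range(200):
    # a curated "industrial image": degradation feature active, label = defect
    w = sgd_step(w, x=[0.0, 1.0], y=1.0)

# After repeated exposure, the industrial-degradation weight dominates:
# the network now "prioritizes industrial features over everyday objects".
```

Each curated image nudges the weights a little; millions of them re-wire the model's intuition.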

This is also where multiple data sources are fused to provide richer context. For example, the severity of corrosion under insulation (CUI) on a safety‑critical piping circuit can be identified by combining temperature monitoring from process data historians with AI Machine Vision detection of damaged insulation, open seals, and other anomalies – alongside historical condition data used to assess coating conditions.

Today, our Level 2 model delivers 40–60% accuracy. It becomes truly enterprise‑ready only through customer‑data‑led post‑training, which elevates it to Level 3 performance – where the model learns your assets, your environment, and your degradation patterns. 

Level 3: The Private Vault – Sovereignty through Post-training (75-90% accuracy)

A high accuracy Level 3 MV model is achieved through a process called Post-training (or fine-tuning). We take our Level 2 model and perform Transfer Learning.

In this phase, we freeze the general industrial knowledge—the AI’s understanding of what a pipe, structural fireproofing, clamp or a valve is—and only adjust the final layers of the network using a customer’s specific data. This technical distinction is essential for overcoming two of the industry’s biggest hurdles:

  • Hyper‑Accuracy: A Level 3 model learns the unique “normal” of your facility. It learns to ignore site‑specific noise that a generalist model might misinterpret as an anomaly, bringing the external environment and operating context into the decision‑making process.
  • Sovereign Intelligence: Perhaps most importantly, Level 3 solves the data‑security challenge. Because post‑training requires only a small, highly focused dataset (unlike the massive datasets needed for pre‑training), it can be performed in isolated, secure environments. The insights gained from your Level 3 model never “leak” back into a shared pool; they remain your private, sovereign intellectual property. Your model becomes a proprietary asset that grows more valuable with every inspection cycle.
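The freeze-then-fine-tune mechanic described above can be sketched conceptually. The layer structure and numbers here are illustrative stand-ins, not Cenosco's implementation; in a real framework the `trainable` flag corresponds to disabling gradient updates on the frozen layers:

```python
class Layer:
    def __init__(self, name, weights, trainable=True):
        self.name = name
        self.weights = list(weights)
        self.trainable = trainable

def freeze_backbone(layers):
    """Freeze everything except the final (head) layer."""
    for layer in layers[:-1]:
        layer.trainable = False

def training_step(layers, gradients, lr=0.1):
    """Apply gradients only to trainable layers."""
    for layer in layers:
        if layer.trainable:
            g = gradients[layer.name]
            layer.weights = [w - lr * gi for w, gi in zip(layer.weights, g)]

model = [
    Layer("backbone_1", [0.2, -0.5]),  # general knowledge: pipes, valves, clamps
    Layer("backbone_2", [0.7, 0.1]),
    Layer("head",       [0.0, 0.0]),   # learns the site-specific "normal"
]
freeze_backbone(model)
training_step(model, {
    "backbone_1": [1.0, 1.0], "backbone_2": [1.0, 1.0], "head": [1.0, -1.0],
})
# Only the head weights change; the backbone's knowledge is preserved.
```

Because only the small head is updated, the customer's data shapes a thin, private layer on top of the shared industrial backbone, which is exactly why it can be done in an isolated environment.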

From Data Deluge to the Integrity Graph

In the future, when we combine this Level 3 MV model with the Cenosco Integrity Graph, the engineer’s role will be fundamentally augmented. In an autonomous workflow, the Level 3 model will perform real‑time inference, filtering out 99% of the “normal” data that currently consumes an engineer’s week. And it won’t stop at issuing a notification; it will cross‑reference each visual detection or anomaly with the historical intelligence already stored in the Integrity Graph.
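The cross-referencing step can be sketched as a join between a fresh detection and historical records. Here a plain dict stands in for the Integrity Graph, and all field names and the severity threshold are invented for illustration:

```python
# Stand-in for historical intelligence stored per circuit in the graph.
integrity_graph = {
    "CIRC-A": {"temp_excursions": 12, "last_inspection": "2024-06-01"},
}

def build_decision_package(anomaly, graph):
    """Join a visual anomaly with its circuit's history to curate a package."""
    history = graph.get(anomaly["circuit"], {})
    return {
        "circuit": anomaly["circuit"],
        "finding": anomaly["label"],
        "severity": anomaly["severity"],
        "history": history,                         # context from the graph
        "action_required": anomaly["severity"] >= 0.7,  # illustrative threshold
    }

package = build_decision_package(
    {"circuit": "CIRC-A", "label": "external_degradation", "severity": 0.85},
    integrity_graph,
)
```

The point is that the notification alone is not the product; the anomaly plus its history is what lets the co-pilot recommend an action rather than just raise a flag.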

Your agentic IMS AI co-pilot will then present the engineer with a curated Decision Package:

“I have identified a 15% increase in external degradation on Circuit A. This correlates with the consistent temperature excursions recorded in the DCS. I have filtered out 4,200 redundant images; here are the three high‑fidelity angles you need to review. Based on your RBI strategy, I recommend removing the scabs to quantify corrosion at the next turnaround in six months. If corrosion is within 50% of the remaining corrosion allowance (RCA) of 3 mm (90% likely in this case), a spot‑coating repair will be the most effective mitigation. Would you like to trigger the work order?”

The Return of the Engineer

The autonomous inspection future is not about replacing the inspector or the engineer; it is about unburdening them. We are moving toward a world where the integrity engineer steps out of the clerical trap and into the role of a Strategic Supervisor.

By structuring our AI as a bridge—and by using a tiered 3-level model—we provide the best of both worlds: the massive, pre-trained power of a global industrial dataset (Level 2) and the secure, bespoke accuracy of a private site model (Level 3). This is how we move from being data-rich to being truly insight-driven.

We ensure our plants stay safe, our costs stay low, and our engineers stay focused on the high-impact work they were born to do. The future of inspection is autonomous, but it is—more than ever—led by human expertise.

If you are an asset owner or operator and would like to partner with Cenosco to build your own Level 3 Sovereign Machine Vision Model, please reach out to us at Machine.Vision@cenosco.com.


Rahul Kejriwal, CEO

Rahul Kejriwal, CEO of Cenosco, is a seasoned leader in Enterprise SaaS with 10+ years in tech and finance. He has led teams in engineering software and invested in high-growth tech firms globally.