Modern Industrial Data Architecture Strategies & Best Practices for Life Sciences 4.0

By Ryan Martin | 3Q 2023 | IN-6995

The Industrial DataOps Maturity Journey

NEWS


Varying customer demand and the rapid pace of new product introductions require modern manufacturing production tools and techniques. Digital transformation projects of all types are the visible face of the answer, but on the back end are key data architecture strategies that must evolve in lockstep. The latest change is a systems approach to data management that streamlines the process of scaling new applications, referred to as Industrial DataOps.

The Industrial DataOps maturity journey starts with getting raw data out of assets and Operational Technology (OT) systems, culminating in an architecture that orchestrates fully contextualized edge-to-cloud data from across the enterprise. The maturity journey is as much about analytics (descriptive to prescriptive) as it is about how to roll out and manage data across machines and plant networks at varying degrees of fidelity and scale. It focuses on delivering data for business use and underpins Life Sciences 4.0.

Context, Problems, and Scale

IMPACT


When many people think of Industrial DataOps, they think of protocol translation and normalization. Getting machines to speak the same language is a big part of the picture, but it is not the only part. Another key step is speaking the same vocabulary. For example, a truck in the United States may be called a lorry in the United Kingdom, where English is also spoken. Even once data are translated into information using a common protocol (the language), the terms (the vocabulary) still need to be standardized so they can be universally understood.
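To make the vocabulary point concrete, the minimal sketch below maps vendor- or site-specific tag names onto a single canonical vocabulary so every downstream consumer sees the same terms. The tag names and canonical fields are illustrative assumptions, not terms from any particular OT system:

```python
# Minimal sketch of vocabulary normalization: renaming vendor-specific
# tags to one canonical vocabulary. All names here are hypothetical.

CANONICAL_VOCAB = {
    # vendor/site-specific term -> enterprise-wide term
    "MTR_SPD": "motor_speed_rpm",
    "MotorSpeed": "motor_speed_rpm",
    "TEMP_C": "temperature_celsius",
    "ProcTemp": "temperature_celsius",
}

def normalize_payload(raw: dict) -> dict:
    """Rename known tags to canonical names; flag unknown tags for review."""
    normalized = {}
    for tag, value in raw.items():
        canonical = CANONICAL_VOCAB.get(tag)
        if canonical:
            normalized[canonical] = value
        else:
            normalized[f"unmapped.{tag}"] = value  # surface vocabulary gaps
    return normalized

print(normalize_payload({"MTR_SPD": 1750, "ProcTemp": 64.2}))
# {'motor_speed_rpm': 1750, 'temperature_celsius': 64.2}
```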

Context is also critical. Context often lives in transactional systems like Enterprise Resource Planning (ERP) software (e.g., from SAP), Manufacturing Execution Systems (MESs) (e.g., from 3DS, Plex, and Siemens), and Computerized Maintenance Management Systems (CMMSs) (e.g., from IBM), and these data need to merge with shop floor data to contextualize the operational picture. Information Technology (IT) teams can fall prey to moving raw data to the cloud, which often does not solve the problem, only relocates it. Data must be normalized and prepared for contextualization at the edge, so they are available from edge to cloud. Contextualizing data at the edge also means that data are efficiently packaged and distributed by the domain experts who know how to maintain the contextualization.
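As an illustration of edge contextualization, the hypothetical sketch below merges a raw shop floor reading with the kind of batch and work order context that lives in MES/ERP records, so the payload is meaningful before it ever leaves the plant. All record layouts and field names are assumptions for illustration:

```python
# Hypothetical edge contextualization: wrapping a raw OT value with
# transactional context so it travels as information, not just data.

from datetime import datetime, timezone

# Context that would normally come from MES/ERP lookups (illustrative)
mes_context = {"line_07": {"batch_id": "B-2023-0412", "product": "lens_A90",
                           "work_order": "WO-88231"}}

def contextualize(line: str, tag: str, value: float) -> dict:
    """Attach the line's batch, product, and work order to a raw reading."""
    ctx = mes_context.get(line, {})
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "line": line,
        "tag": tag,
        "value": value,
        **ctx,  # transactional context travels with the data point
    }

print(contextualize("line_07", "temperature_celsius", 64.2))
```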

A modern approach to data normalization and contextualization is to use a Unified Namespace (UNS), which acts like a data marketplace where all business systems can exchange data. In practice, this is often an MQTT broker, which is open and lightweight, with a topic structure based on the ISA-95 hierarchy, a standard designed for manufacturing environments. UNS is also a mindset and strategy for getting data from multiple systems into a common hub where data can be easily exchanged, rather than relying on point-to-point connections. The issue of moving data is exacerbated in life sciences by the data challenges of batch manufacturing. For this use case, a UNS can help manage data and put them into a common format that is ready to consume, contextualize, and scale for the customer.
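As a rough sketch of how a payload might enter an MQTT-based UNS, the example below publishes a contextualized data point to an ISA-95-style topic (enterprise/site/area/line/cell) using the paho-mqtt client library. The broker hostname, topic segments, and payload fields are illustrative assumptions:

```python
# Sketch of publishing into a UNS hosted on an MQTT broker with an
# ISA-95-style topic hierarchy. Topic and hostname are hypothetical.

import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

ISA95_TOPIC = "acme/dublin/packaging/line_07/filler"  # enterprise/site/area/line/cell

payload = json.dumps({
    "batch_id": "B-2023-0412",
    "tag": "temperature_celsius",
    "value": 64.2,
})

# A retained message keeps the broker holding the latest state per topic,
# so any subscriber joining the UNS immediately sees current values.
publish.single(ISA95_TOPIC, payload, retain=True,
               hostname="broker.example.local")
```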

Best Practices from the Field

RECOMMENDATIONS


Life Sciences 4.0 is unique because its products have a direct impact on human life, regulation is extensive, tolerances are tight, and the cost of failure is high. These conditions add to an already complex manufacturing environment that needs to establish economies of scale to bring down the cost per deployment. Industrial DataOps fits into this picture in several ways:

  • A Systems Approach to Data Management: Three years ago, Alcon, a US$8 billion pharmaceutical and medical device company specializing in eye care products like contact lenses, was using a homegrown solution to pass data between business applications. The company saw several encouraging early project wins as it deployed new applications, but found that fully implementing each new project took several years. Rather than be limited to one or two projects per year, the company decided to leverage an edge-based DataOps solution, HighByte Intelligence Hub, to systematize and scale the data mapping process for new applications. At one facility, Alcon has thousands of motors and pneumatic cylinders across 17 manufacturing lines. Templatizing endpoints, rather than mapping each one individually, saves the Alcon team months of development work that would otherwise be spent manually mapping data tags for every new application.
  • Templatize for Reusability and Scale: Catalent, a 14,000-employee contract development and manufacturing partner for personalized medicine, drugs, and consumer brands, has a lab with 48 lines (bioreactors), each with 100 data tags. Modeling all of these data one by one for each new application is the traditional approach and would take an inordinate amount of time. Instead, a data engineer can use a tool like HighByte Intelligence Hub to create a model of the bioreactors that can be deployed to any number of sites in about an hour (a simplified sketch of this templatizing pattern follows this list). This model is essentially a UNS strategy: data are contextualized and made available for analytics, value-added time is maximized, manual tasks like transcribing digital HMI data are minimized, and reusability across tools is promoted.
  • Only Send the Right Data to the Cloud: Sending data to the cloud without discerning which data are going and why not only wastes money, but also adds unnecessary complexity. Cloud data must be defined in terms of their fidelity and intended use. The biggest general industry trend is to move data to the cloud. At the same time, however, the big cloud vendors, Amazon Web Services (AWS) and Microsoft (Azure), are moving to the edge by developing and building out their edge solutions and partner ecosystems because, as much as it may make sense to perform orchestration in the cloud, many manufacturing workloads, analytics, and visualizations are still executed at the edge (e.g., robotic automation). The goal should be finding the right balance of edge and cloud (a sketch of edge-side filtering follows this list).
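To illustrate the templatizing pattern from the Alcon and Catalent examples, the hypothetical sketch below defines one asset model and instantiates it across 48 lines instead of hand-mapping each tag. The tag names, tag count, and node addresses are simplified assumptions; HighByte Intelligence Hub expresses the same idea through its own modeling interface, not this code:

```python
# Hypothetical "model once, deploy everywhere" pattern: a single template
# is instantiated per asset instead of hand-mapping every tag.

from dataclasses import dataclass

@dataclass
class BioreactorModel:
    """Template describing the tags every bioreactor exposes."""
    asset_id: str
    tag_names: tuple = ("ph", "dissolved_oxygen", "agitation_rpm", "temp_c")

    def bind(self, opc_prefix: str) -> dict:
        """Map canonical tag names to this asset's node addresses."""
        return {tag: f"{opc_prefix}/{tag}" for tag in self.tag_names}

# Instantiate the one template across all 48 lines in seconds
lines = [BioreactorModel(f"bioreactor_{i:02d}") for i in range(1, 49)]
bindings = {m.asset_id: m.bind(f"opc.tcp://plc/{m.asset_id}") for m in lines}
print(bindings["bioreactor_01"]["ph"])  # opc.tcp://plc/bioreactor_01/ph
```

The value of the template is that a change made once (a new tag, a renamed field) propagates to every instance, which is what collapses per-application mapping work from months to hours.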
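And to illustrate edge-side filtering, the sketch below forwards a value to the cloud only when it moves beyond a deadband, keeping high-frequency raw readings local. The deadband threshold and tag names are assumptions for illustration, not a specific vendor's mechanism:

```python
# Illustrative "only send the right data" filter: forward a reading to
# the cloud only when it changes beyond a deadband since the last send.

last_sent: dict = {}

def should_forward(tag: str, value: float, deadband: float = 0.5) -> bool:
    """Return True only if the value moved more than the deadband."""
    prev = last_sent.get(tag)
    if prev is None or abs(value - prev) > deadband:
        last_sent[tag] = value
        return True
    return False

for reading in [64.2, 64.3, 64.1, 65.5, 65.6]:
    if should_forward("temperature_celsius", reading):
        print("forward to cloud:", reading)  # forwards 64.2 and 65.5 only
```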

Every company wants to be more agile, use information more effectively, and improve its business for customers. Industrial DataOps is the dominant framework for mastering 4.0 data transformation projects, and it is imperative to leverage solutions that enable a systems approach to maintaining and scaling deployments, templatizing data movement, and delivering data to users and consumers for a real-time view of the enterprise.