Essential Industry 4.0

In today’s manufacturing landscape, unplanned downtime is one of the leading causes of lost productivity, resulting in delays, dissatisfied customers, and substantial revenue losses. Recent studies estimate that this issue alone costs industrial manufacturers a staggering $50 billion annually. However, the solution lies in embracing Industry 4.0, the digital transformation of manufacturing, which leverages data analytics, artificial intelligence, machine learning, and other advanced technologies to enhance productivity, agility, customer satisfaction, and sustainability¹.

Despite the immense potential of Industry 4.0, many manufacturers still struggle to scale up their efforts and fully realize the value of their digital transformations². Financial hurdles, organizational challenges, and technology roadblocks are among the obstacles they face².

The cost of not adopting Industry 4.0 can be substantial, as evidenced by the average cost of an hour of downtime for a factory, estimated to be $260,000⁴. However, implementing Industry 4.0 solutions, such as predictive maintenance, can drastically reduce these costs³. Moreover, failing to embrace Industry 4.0 technologies means missed opportunities for improving customer service, delivery lead times, employee satisfaction, and environmental impact¹.

Industry 4.0 goes beyond addressing downtime and offers transformative benefits for manufacturers. It represents the current era of connectivity, advanced analytics, automation, and advanced manufacturing technology that has been revolutionizing global business for years². While small and medium-sized enterprises (SMEs) may face challenges in adopting Industry 4.0 due to limited resources and knowledge, there are also advantageous trends for them. These include new business models, value-added services, networking, collaboration, increased flexibility, and enhanced quality¹.

SMEs should not underestimate the potential of Industry 4.0. By investing in research and development related to Industry 4.0, they can tap into a market with an estimated value creation potential of $3.7 trillion for manufacturers and suppliers by 2025². This represents an unprecedented opportunity for SMEs to innovate and compete globally.

In conclusion, Industry 4.0 is not a mere buzzword but a necessity for manufacturers aiming to remain competitive and drive growth. With the significant costs associated with unplanned downtime and the tremendous potential of Industry 4.0, overcoming the challenges and embracing this digital transformation is essential. By adopting Industry 4.0 technologies, businesses can unlock increased productivity, customer satisfaction, and sustainability. SMEs, in particular, should recognize the beneficial trends and seize the opportunity to innovate and thrive in the global market. The future belongs to those who adapt and evolve with Industry 4.0.


How to plan for a successful POC

Companies involved in a proof-of-concept (POC) project or phased adoption approach typically begin by connecting a single system to the network and enrolling it into a manufacturing operations management system. The success of such a project relies heavily on following the right steps for deploying and commissioning the machine, along with establishing a clear framework of objectives.

For large-scale information systems, network topology plays a crucial role. The topology spans layers 0, 1, and 2 of the plant network and determines the system’s performance, security posture, error-detection capabilities, and resource utilization. To be effective, it should be assessed and designed carefully, with attention to performance, security, maintenance, scalability, and management.
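
As a rough illustration of what assessing the topology can look like in practice, here is a minimal Python sketch that models plant-network layers 0–2 as plain data and runs a basic sanity check. The layer numbering follows the common Purdue/ISA-95 convention, and the device names and the check itself are hypothetical, not taken from any particular deployment.

```python
from dataclasses import dataclass

# Minimal model of a plant network across layers 0-2
# (0 = sensors/actuators, 1 = controllers, 2 = supervisory systems).
# Device names and the uplink check below are hypothetical examples.

@dataclass
class Device:
    name: str
    layer: int                 # 0, 1, or 2
    uplink: str | None = None  # device this one reports to

devices = [
    Device("spindle-load-sensor", 0, uplink="cnc-plc-01"),
    Device("part-count-switch", 0, uplink="cnc-plc-01"),
    Device("cnc-plc-01", 1, uplink="cell-scada-01"),
    Device("cell-scada-01", 2),
]
by_name = {d.name: d for d in devices}

# Sanity check: every device below layer 2 must report to a device one
# layer up, so its data has a defined path toward the operations system.
for d in devices:
    if d.layer < 2:
        parent = by_name.get(d.uplink or "")
        assert parent and parent.layer == d.layer + 1, f"{d.name} has no valid uplink"

print(f"Topology check passed for {len(devices)} devices")
```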

Choosing the right machine asset(s) for a POC or phased plan requires a clear understanding of the desired outcomes. The objectives may include automatically capturing operational events and specific process data, enabling operator interaction based on operations, and even auto-creating jobs, identifying error reason codes, and looking up operator responses. It is also important to consider machine-specific capabilities, such as multi-spindle functionality, pallets, tombstones, multi-part count, high-speed part count, and the machine’s position in the value stream. Is it a finishing machine that determines the throughput for a group of operations? Or is it a constraint machine that collects data to assist in resolving constraints and adopting a continuous improvement methodology?
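
One lightweight way to make these trade-offs explicit is a weighted scoring exercise across candidate machines. The sketch below is hypothetical: the criteria, weights, candidates, and scores are placeholders to be replaced with your own objectives.

```python
# Hypothetical weighted scoring of candidate machines for a POC.
# Criteria, weights, and scores should reflect your own desired outcomes.

criteria_weights = {
    "constraint_in_value_stream": 3.0,  # is it the bottleneck?
    "data_richness": 2.0,               # events/process data it can expose
    "operator_interaction_value": 2.0,  # benefit of reason codes, job tracking
    "connection_effort": 1.0,           # higher score = easier to connect
}

candidates = {
    "5-axis mill (finishing)": {
        "constraint_in_value_stream": 4, "data_richness": 5,
        "operator_interaction_value": 4, "connection_effort": 3,
    },
    "multi-spindle CNC": {
        "constraint_in_value_stream": 5, "data_richness": 3,
        "operator_interaction_value": 3, "connection_effort": 2,
    },
}

for name, scores in candidates.items():
    total = sum(w * scores[c] for c, w in criteria_weights.items())
    print(f"{name}: {total:.1f}")
```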

Selecting the appropriate information system for manufacturing operations involves evaluating various options, from legacy systems like SCADA and process mapping to MES, batch-run systems, and emerging operations and monitoring systems. Stakeholders must prioritize their desired functions and features. The challenge lies in identifying deliverables and quantifying the unexpected, especially with numerous products making similar claims. Factors to consider include the system’s services, distribution capabilities, scalability, expandability, and productization; the breadth of intellectual property (IP) across manufacturing, IT, system integration, and engineering; and vendor stability and longevity.

An essential aspect of successful adoption is the collection methodology. A system that preprocesses data at the point of collection, normalizing it and storing it as events of a single source type, proves highly efficient and reduces the need for extensive post-processing. By contrast, systems that collect raw data as states often face efficiency and performance issues because analytics and metrics must be calculated after collection.
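
To make that distinction concrete, here is a minimal sketch of normalizing raw machine signals into a single event type at collection time. The signal names and category mappings are hypothetical, not taken from any specific controller or product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# One normalized event type for every asset, produced at collection time,
# so downstream analytics never need to re-interpret raw vendor signals.
# Signal names and category mappings below are hypothetical.

@dataclass
class OperationalEvent:
    asset: str
    category: str        # e.g. RUNNING, IDLE, SETUP, FAULT
    timestamp: datetime
    detail: dict

SIGNAL_MAP = {
    ("mode", "AUTO"): "RUNNING",
    ("mode", "MANUAL"): "SETUP",
    ("alarm", "ON"): "FAULT",
}

def normalize(asset, signal, value, extra=None):
    """Map a raw (signal, value) pair to a normalized event, or None."""
    category = SIGNAL_MAP.get((signal, value))
    if category is None:
        return None  # not a condition we track
    return OperationalEvent(asset, category, datetime.now(timezone.utc), extra or {})

print(normalize("cnc-07", "mode", "AUTO"))
```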

The chosen data storage methodology determines the flexibility of the information system. Storing data with an event sourcing pattern enables answering both the “what happened” and the “why” questions, tracking jobs through operations, and recording multiple events over time. Additionally, calculating metrics globally, rather than individually for each object in the system, reduces complexity and ensures congruent results.
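
Under an event-sourced store, a metric such as utilization and its root causes are both derived by replaying the stored events. The sketch below is a hypothetical illustration of that replay; the events and categories are invented.

```python
# Hypothetical event-sourcing sketch: metrics are derived by replaying the
# stored event stream rather than being pre-aggregated per machine.

events = [
    {"asset": "cnc-07", "category": "RUNNING", "minutes": 210, "reason": None},
    {"asset": "cnc-07", "category": "IDLE",    "minutes": 45,  "reason": "awaiting material"},
    {"asset": "cnc-07", "category": "SETUP",   "minutes": 60,  "reason": "change-over"},
    {"asset": "cnc-07", "category": "RUNNING", "minutes": 165, "reason": None},
]

total_minutes = sum(e["minutes"] for e in events)
running_minutes = sum(e["minutes"] for e in events if e["category"] == "RUNNING")

print(f"Utilization: {running_minutes / total_minutes:.1%}")   # what happened
for e in events:
    if e["reason"]:                                             # and why
        print(f'{e["category"]}: {e["minutes"]} min ({e["reason"]})')
```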

Job management is a critical feature that should be supported by the system, covering sales orders, work orders, part numbers, product standards, and operation steps. These elements provide granular, job-specific information, metrics, and operational states. A structured product management feature should be considered for effective job tracking.
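
A structured model of that kind can be as small as a hierarchy from sales order down to operation steps with their product standards. The sketch below shows one hypothetical shape for such a model; it is not any particular system’s schema.

```python
from dataclasses import dataclass, field

# Hypothetical job-management hierarchy: sales order -> work order ->
# part number with ordered operation steps and their product standards.

@dataclass
class OpStep:
    sequence: int
    description: str
    std_minutes_per_part: float  # product standard for this step

@dataclass
class WorkOrder:
    number: str
    part_number: str
    quantity: int
    steps: list = field(default_factory=list)

@dataclass
class SalesOrder:
    number: str
    customer: str
    work_orders: list = field(default_factory=list)

so = SalesOrder("SO-1001", "Acme Aerospace", [
    WorkOrder("WO-5002", "PN-774-A", 250, [
        OpStep(10, "Rough mill", 3.5),
        OpStep(20, "Finish mill", 2.0),
        OpStep(30, "Deburr and inspect", 1.0),
    ]),
])

# Granular, job-specific standard time for the whole sales order:
total_std_minutes = sum(s.std_minutes_per_part * wo.quantity
                        for wo in so.work_orders for s in wo.steps)
print(f"Standard minutes for {so.number}: {total_std_minutes:,.0f}")
```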

Adaptability is another crucial aspect of a manufacturing information system, enabling the expansion of event categorization and the addition of operational and process states as needed. This flexibility is essential for continuous improvement efforts and tracking new constraint sources or non-conformity reasons.
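
In practice, that adaptability can be as simple as treating event categories and reason codes as data rather than a fixed schema, so a new constraint source or non-conformity reason can be added without redesign. A minimal, hypothetical sketch:

```python
# Hypothetical sketch: reason codes kept as data per machine type,
# so new categories can be added without changing schema or code.

reason_codes = {
    "laser": ["lens cleaning", "assist-gas change", "nest loading"],
    "press": ["die change", "coil change", "sensor fault"],
}

def add_reason(machine_type, reason):
    """Register a new downtime reason for a machine type."""
    reason_codes.setdefault(machine_type, []).append(reason)

# A new non-conformity source discovered during continuous improvement:
add_reason("press", "slug pulling")
print(reason_codes["press"])
```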

Once the topology is established, the machines are selected, and the information system is chosen, it’s time to plan and execute the roadmap for the POC or phased plan. The roadmap should consist of specific, measurable, and quantifiable objectives that can then be applied successfully to the rest of the plant. Internal champions should be selected to allocate the necessary production, engineering, and IT resources. Finally, a kickoff meeting with the vendor should be arranged to assess their action plan and determine the distribution of responsibilities.



Why do you need a monitoring system?

Why do you need a monitoring system? The most obvious, and usually the initial, reason is real-time visibility and utilization measurement. That is what most monitoring systems espouse doing. Let me qualify that statement. In the most basic configuration, with machine-only reporting, the results are long and/or frequent IDLE conditions, basic utilization as a percentage, and real-time “uptime/downtime” visuals. This approach falls short because there is no easy way to identify the root cause of IDLE events. Furthermore, serious false-positive events are likely, because the underlying circumstances or conditions, if tracked, would often turn out to be scheduled breaks, meetings, setup time, change-over, material conveyance, missing personnel, and so on. These lightweight, cheap monitoring systems are simple to employ but lose value once they have been running for a while. Most are also hosted in the cloud, introducing latency and data security issues.

The next level of monitoring supports an operator interface for interaction with the system. Most interfaces allow an operator to classify an unknown downtime event, which adds granularity to downtime data and better assists root cause analysis. However, this feature has no consistency of options across the host of monitoring packages. Some offer a fixed or limited set of reason codes, while others are extremely flexible, providing custom reason selection for each machine to match the operation type: issues arising on a laser are far different from issues arising in welding, on a press, or on a multi-spindle CNC. Other, more sophisticated HMIs provide advanced features such as ticket generation, checklists, work instructions, operator-guided e-alerts, operational triggers, barcode support, a history editor, real-time commenting, and so on (MERLIN).

The ability to manage jobs or OpSteps further separates the field. Recording activity against a job or OpStep is critical for any facility, especially for job shops, mould shops, prototyping shops, or high-value, low-volume shops. If a shop collects shift utilization without job information, it is unlikely that job- and shop-related events are correctly recorded and classified as negatively impacting utilization. Adding jobs or OpSteps to the monitoring system introduces the performance metric, which is key to throughput measurement. Just because a shop minimizes downtime doesn’t mean every aspect of the operation automatically improves; how fast a machine runs and whether it is producing quality parts are just as essential. Does the system support true OEE based on product standards, cut time, and material conveyance time rather than just “cycle” time? Does it categorize both good parts and rejects? Does it support operator login and logout, non-machine-related activity, and operator-centric metrics? This level of event collection is not found in cheaper, low-end products.

Ultimately, the system should be more of an operations management system: one that leverages custom event types, such as machine overrides and follow-on automation and robotics, and inferred machine events, such as starved or blocked. It should employ triggers to infer conditions, manage output, alert teams, send or retrieve data from other systems, and be future-proofed to expand its functions, features, and integration across the enterprise. Finally, a system must be easy to roll out: simple to connect assets, and flexible enough to represent information unique to the operation. It must provide tools for production, management, supervision, engineering, Quality Control, and the operator on the floor. The latest recommendation from cyber-security firms is that such systems should reside within the organization’s four walls and not expose the company or its data in the cloud.
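
As a rough illustration of the difference between cycle-time-only utilization and OEE computed from product standards with good and reject part categorization, here is a minimal sketch. The shift times, standard, and counts are hypothetical, and the formulas are the conventional availability × performance × quality decomposition, not necessarily how any particular product computes them.

```python
# Minimal OEE sketch: availability x performance x quality, using a
# product standard (ideal cycle time) and good/reject part counts.
# All numbers below are hypothetical, for illustration only.

planned_time_min = 480        # planned production time for the shift
downtime_min = 95             # recorded setup, change-over, idle, etc.
ideal_cycle_time_min = 1.2    # product standard: minutes per part
total_parts = 290             # parts produced (good + rejects)
good_parts = 275              # parts that passed inspection

run_time_min = planned_time_min - downtime_min

availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * total_parts) / run_time_min
quality = good_parts / total_parts
oee = availability * performance * quality

print(f"Availability: {availability:.1%}")
print(f"Performance:  {performance:.1%}")
print(f"Quality:      {quality:.1%}")
print(f"OEE:          {oee:.1%}")
```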

The emerging consensus is that entry-level, SaaS-based monitoring systems lack the flexibility, accuracy, and issue-resolution tools that operations need. A good operations management system connects the floor quickly and effectively and provides an accurate, real-time data flow to the corporate monolithic enterprise tools through real-time subscriptions to integrated OPC UA servers. Such a system is the perfect production tool and middleware for an organization.
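
For readers unfamiliar with what a real-time subscription to an OPC UA server looks like, here is a minimal sketch using the open-source asyncua Python client. The endpoint URL and node ID are placeholders, and the sketch shows the general subscription pattern rather than any specific vendor’s integration.

```python
import asyncio
from asyncua import Client

# Placeholder endpoint and node id; substitute the values exposed by
# the OPC UA server integrated with your operations management system.
ENDPOINT = "opc.tcp://ops-server.local:4840"
NODE_ID = "ns=2;s=Machine1.PartCount"

class DataChangeHandler:
    """Called by the client whenever a subscribed value changes."""
    def datachange_notification(self, node, val, data):
        print(f"{node} changed to {val}")

async def main():
    async with Client(url=ENDPOINT) as client:
        node = client.get_node(NODE_ID)
        # 500 ms publishing interval; the server pushes changes to us.
        subscription = await client.create_subscription(500, DataChangeHandler())
        await subscription.subscribe_data_change(node)
        await asyncio.sleep(60)  # receive updates for one minute

if __name__ == "__main__":
    asyncio.run(main())
```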

You do not want to be the person who selected a solution based on price or perceived ease of use; as discussed, these usually fail to address an organization’s issues. Nor do you want to be the person who followed the marketing speak and bought into a large toolkit approach that requires 5x services, custom programmers, and months of integration to produce even the simplest feature set. Nor do you want to be the author of a skunkworks internal development that drains resources while reinventing the wheel for the nth time, only to be orphaned once again as resources move on.

MERLIN Tempus can deliver all of the features and functions described. It can be rolled out as a fixed-price project with rapid deliverables, and it can connect to ANY machine or operation. MERLIN can save stalled or failed projects from monolithic systems by providing simple connectivity to all machines and operations and serving them up to the enterprise system.

 

Ultimately, you are the one who can articulate what your expectations are for any system. But any system worth considering must provide information for the following:

  1. Accurate costing for estimates (SETUP, change over and material conveyance deviations)
  2. Meeting delivery dates (Utilization vs Capacity; see the sketch after this list)
  3. Monitoring key operations critical to the delivery of the product (Real-time events, inferred events, WIP)
  4. Coordinating critical equipment to meet manufacturing deliverables (Machine availability, Human resource availability)
  5. Continuous Improvement (Root cause analysis, A3 reporting, ticketing, CI tracking tools)
  6. Quality Improvement and traceability (Red Tag part tracking and disposition, Reject grading and classification)
  7. Maintenance (Accumulative runtime reporting, custom event threshold reporting, maintenance ticketing and event scheduling)
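
As a simple illustration of item 2, the sketch below turns measured utilization and a product standard into the number of production days needed for an order, which can then be compared against the due date. All figures are hypothetical.

```python
# Rough delivery-date check from utilization vs capacity (item 2 above).
# All inputs are hypothetical, for illustration only.

remaining_parts = 1200          # parts still to produce on the order
std_minutes_per_part = 2.5      # product standard
shift_minutes_per_day = 960     # two 8-hour shifts
utilization = 0.62              # measured machine utilization

effective_minutes_per_day = shift_minutes_per_day * utilization
days_needed = (remaining_parts * std_minutes_per_part) / effective_minutes_per_day

print(f"Production days needed at current utilization: {days_needed:.1f}")
# If days_needed exceeds the working days left before the due date,
# utilization (or capacity) must improve to hit the delivery date.
```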

 

If you keep these points in mind, you will not stray far from an excellent Manufacturing Operations Management (MOM) system.

The Sad State Of SaaS

Weighing the options is not always easy.

Centralized hosting of business applications dates back to the 1960s. That was back when users would pay for computing time on mainframes.

In the nineties, we saw the introduction of ASPs, Application Service Providers, which maintained a separate instance of the application for each business. About a decade ago, mainstream SaaS providers appeared with multi-tenant deployments in which users “shared” the computing environment. A key driver of SaaS growth is vendors’ ability to offer a price competitive with on-premises software, consistent with the traditional rationale for outsourcing IT systems: applying economies of scale to application operations. Unfortunately, in retrospect, outsourced IT and SaaS have yet to fulfill their promises.

Almost everyone can attest to the frustration of outsourced IT, especially when the technical group is not even on the same continent, and the faceless outsourced teams rarely relate to the culture of the users. Similarly, SaaS providers offer barebones support to accompany their service. Because the application is hosted centrally, the provider decides when updates happen and executes them: SaaS providers upgrade software on a regular basis, often to address one user group at the expense of impacting all users.

Since the service is in the cloud, users who want to work offline with their data face potentially expensive egress charges. If the data accumulated by the users grows beyond the typical SaaS allowance, additional data storage costs are applied. If the scope of the application grows, such as incorporating more connected devices and more data per device, costs can escalate well beyond the initial estimate. The application vendor has access to all customer data, and in notable recent cases, vendors have been selling metadata collected from their customer base. The application has only a single configuration, limiting its ability to fit the unique conditions found in almost every organization.

When SaaS is considered as the route for connecting machine assets and workstations on the shop floor to collect real-time streams of critical data, potential issues around latency, bandwidth, and saturation all come into play. An organization needs a redundant internet connection to ensure critical data and functionality can be recovered when internet service is interrupted. If operator interfaces are being used, typical delays of one to ten seconds can occur between machine-generated and operator-generated data.

The most apparent failure of a SaaS approach is the ongoing service cost year over year. Whereas most perpetual licensing costs are absorbed over a short period, SaaS fees accumulate year after year and ultimately cost the organization far more than an on-premises deployment would have.
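
As a purely hypothetical illustration of how those costs compound (every figure below is a placeholder to be replaced with real quotes), compare cumulative spend over several years:

```python
# Hypothetical cost comparison: SaaS subscription vs. perpetual license
# plus annual maintenance. All figures are illustrative placeholders.

years = 7
saas_annual_fee = 40_000

perpetual_license = 120_000
annual_maintenance = 18_000   # support/updates for the on-premises system

for y in range(1, years + 1):
    saas_total = saas_annual_fee * y
    onprem_total = perpetual_license + annual_maintenance * y
    print(f"Year {y}: SaaS ${saas_total:,}  On-premises ${onprem_total:,}")
```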

Some vendors, like Memex, can provide their onsite deployment as a CapEx purchase, lease, subscription, or convertible subscription, so an organization can get the best of both worlds. One major challenge with SaaS solutions is the loss of service and data if the provider becomes insolvent and goes out of business: all of the investment, procedural changes, time, and data, not to mention the cost, vaporize. At least with an on-premises solution, the system stays intact and running. In manufacturing, fifty such vendors have disappeared over the last five years.

SaaS has attractive benefits and may be suited for ERP and other low-transaction applications. Still, there may be better options than SaaS when real-time critical data is essential. The disadvantages of SaaS (such as lack of control) are considerable and should not be ignored.