AI-enabled manufacturing is raising the data security stakes

Smarter factories need smarter security to unlock the benefits and avoid the risks.
Oct. 21, 2025
7 min read

Key Highlights

The integration of artificial intelligence (AI) in manufacturing has been nothing short of revolutionary — enabling manufacturers to see impressive gains in efficiency, productivity, knowledge sharing and cost reductions. But for Serge Thibault, VP Information Security at Poka, yes, the promise of AI in manufacturing is huge — but without enterprise-grade security measures, that promise can quickly turn into operational and reputational risk.

These powerful AI tools bring critical security and compliance issues, leaving manufacturers with the dilemma of how to reap the benefits of AI without compromising data security or operational integrity.

He highlights the critical role of connected worker platforms in taking the risk out of implementing AI to empower workforces — from customer data handling, to transparency and protection strategies.

Here is the risk and reward dilemma: Artificial intelligence (AI) investment in the U.S. manufacturing sector is only growing. Deloitte’s 2024 Future of the Digital Customer Experience survey found that 55% of surveyed industrial product manufacturers are already leveraging gen AI tools in their operations, and over 40% plan to increase investment in AI and machine learning over the next three years. But as the manufacturing sector continues to invest in AI technology, its vulnerability also increases.

Balance this with the stats that show globally, industrial organizations are among the most targeted for ransomware attacks, experiencing an 87% rise in 2024 over the previous year. With 50% of all observed ransomware victims in 2024 in the manufacturing sector, and 57% of all cyberattacks happening in North America, the industry is top of the hitlist. As the sector underpins a number of other markets – automotive, aerospace, and food and beverage, to name just a few, any cyber incidents involving manufacturing organizations will have broad effects on other sectors and only exacerbate disruption and supply chain integrity.

Cyberattacks cost millions to resolve and almost without fail, have huge implications on brand reputation, stakeholder and consumer trust, and entire supply chains. As manufacturers race to adopt AI, they must also prioritize robust cybersecurity strategies to protect their systems, ensure operational continuity and maintain trust.

Smarter factory floors mean bigger attack surfaces — cybersecurity must protect AI vulnerability

With today’s manufacturing facilities being more complex than ever, legacy systems are not advanced enough to fight today’s modern hacker. To make matters worse, the introduction of AI tools makes manufacturing companies more dispersed and raises a raft of new threats. AI tools have begun to touch many facets of the manufacturing process. Whether it is for workforce training, safety monitoring, data collection or even AI robots on production lines down on the factory floor, the inner workings of manufacturing organizations may have become more connected and intelligent — but have also become more vulnerable.

Now, as AI-powered workforce operations rely heavily on data, sensors and networks, the attack surface for cyber hackers and threats has only given them more opportunities. Hundreds or thousands of connected devices serve as potential entry points for hackers and sometimes, the rush to integrate AI tools have outpaced the security action plans. It is more crucial than ever to tighten the grip on governance, compliance and overall security in manufacturing.

Take deploying connected worker technology, for example. While AI-driven applications streamline access to crucial information, enhance global communications and accelerate time to value with automated digital content conversion — there are key security considerations that must be addressed to protect the data that feeds these systems.

Protecting proprietary manufacturing data means keeping AI processing secure, isolated and compliant

Manufacturing data is highly sensitive, involving trade secrets, detailed production information and masses of consumer data. A critical concern when implementing AI technologies is whether manufacturing data is ever shared with external AI providers.

Again the stats tell an important story. In 2024 over 40% of hacking claims were because of a third-party vendor.

Customer data should not be used to train AI models and should only be processed by the SaaS provider — and never sent to external AI model providers. All inputs, outputs and embeddings must remain sealed within secure infrastructure — operated, monitored and audited by the SaaS provider to guarantee full data sovereignty, privacy and compliance. Advanced connected worker platforms address this by processing all data within secure environments such as AWS and complying with strict data residency laws. With prompts and responses also processed entirely within the AWS environment, it enables manufacturers to tap into powerful AI functionalities on the factory floor, while maintaining strict privacy, control and compliance.

AI error-mitigation and safety measures for manufacturing

Safety and accuracy of AI outputs are paramount in manufacturing settings, where errors can lead to real-world hazards. Manufacturers should confirm that AI responses are validated for safety and correctness with outputs professionally phrased and align with customer-specific context. To minimize the risk of unsafe or incorrect AI outputs in manufacturing settings, organizations should implement a layered set of guardrails and validation controls:

  • Content filtering at ingress: AI guardrail filters to block unsafe inputs before they reach the model.
  • Prompt injection and adversarial input detection: Inputs are pre-assessed to identify malicious intent or system prompt leaks.
  • Few-shot prompting: Prompts include examples of acceptable/unacceptable queries to guide safe behavior.
  • Secure prompt and response handling: Process all AI interactions within a secure, customer-dedicated environment; encrypt logs at rest and in transit; and enforce strict access controls so that prompts, responses and telemetry can be audited but never exfiltrated for model training.
  • Retrieval-augmented generation (RAG) for output grounding: Anchor every AI response in verified, customer-specific source content. When no relevant context exists, configure the model to return “No answer” rather than risk hallucinations.
  • Bias, profanity and scope-drift prevention: Include output-screening mechanisms that check for inappropriate or biased language, ensure responses remain scoped to the customer’s own data, and enforce professional phrasing.
  • Human-in-the-loop (HITL) verification: For the most critical outputs, such as safety protocols or complex work instructions, implement a workflow where a qualified human expert must review and approve the AI-generated content before it is finalized. This provides a final layer of verification, serving as the ultimate safety net to catch subtle errors or contextual nuances that automated systems might miss.
  • Multilingual and cultural safety: Automatically match the response language to the input, and apply localization or translation when contexts differ, preserving clarity and cultural relevance.
  • Purple teaming and internal testing: Dedicated adversarial test suites are regularly executed to evaluate and improve prompt injection protections.

AI must take on corporate responsibilities — transparency, fairness and policy-aligned responses

In the era of embedded AI, the burden of governance falls squarely on the SaaS provider. Customers in high-stakes environments such as manufacturing expect more than powerful features. They demand safe, compliant and trustworthy AI. This responsibility begins with a provable foundation of security and data integrity, validated through rigorous, independent audits and adherence to industry-best practices.

However, true AI governance extends deep into the product itself. It is the provider's duty to build in the technical guardrails that ensure transparency, fairness and alignment with established operational and safety standards. For example, systems that use RAG ground AI responses exclusively in a client's verified knowledge base, prevent dangerous "hallucinations" and ensure all outputs are contextually accurate.

For a provider, embracing this responsibility is a strategic mandate. Proactively embedding ethical controls and robust governance transforms a product from a simple tool into a trusted, strategic asset. By doing so, SaaS providers not only mitigate their customers' legal and reputational risks but also build the essential trust needed to drive safe, sustainable adoption and long-term operational excellence.

Building a safer, smarter future for AI in manufacturing


The rewards from the integration of AI in manufacturing are immense — from optimizing operations to empowering entire workforces — but with that promise comes increased risk.

As factories and manufacturing processes continue to become more connected and increasingly more intelligent, it is up to manufacturers and their solution providers to ensure the correct processes are in place to mitigate cyber threats and data privacy risks, and to respond effectively to ethical challenges.

 

By adopting advanced connected worker technologies that prioritize data security, enforce robust cybersecurity protocols and validate AI responses for safety and fairness, manufacturers can safely tap in to the growing use cases for AI in manufacturing. That means AI must reflect corporate responsibility requirements.

About the Author

Serge Thibault

VP Information Security at Poka

Sign up for our eNewsletters
Get the latest news and updates