CYBERSECURITY AND FOOD DEFENSE

By Robert Norton, Ph.D., Professor of Veterinary Infectious Diseases and Coordinator, National Security and Defense Projects, Office of the Senior Vice President of Research and Economic Development, Auburn University; Marcus Sachs, P.E., Deputy Director for Research, McCrary Institute for Cyber and Critical Infrastructure Security, Auburn University; and Cris A. Young, D.V.M., M.P.H., Diplomate A.C.V.P.M., Professor of Practice, College of Veterinary Medicine, Auburn University and Adjunct Professor, College of Veterinary Medicine, Department of Pathology, University of Georgia

Malevolent AI: Navigating the Shadows of Technology Advancement in the Food Industry

AI's integration into the food industry has been largely beneficial, streamlining processes from production to distribution; however, this integration also opens doors for malevolent use.


Image credit: LumerB/iStock/Getty Images Plus via Getty Images


Artificial Intelligence (AI), while offering remarkable advancements in various fields, also presents a paradoxical threat, especially when misused. In our last article, "Beneficial AI: Safe, Secure, and Trustworthy Artificial Intelligence for Food Safety,"1 we discussed how AI can help ensure food safety by analyzing patterns and trends that might not be readily apparent to human analysts. This article delves into the darker side of AI, focusing on its potential malevolent applications against food companies and the safety of their products, and explores solutions to mitigate these risks.

The Dark Side of AI in the Food Industry

AI's integration into the food industry has been largely beneficial, streamlining processes from production to distribution; however, this integration also opens doors for malevolent use.

One primary concern is the manipulation of food safety data. AI systems, which are increasingly used to monitor and analyze food safety parameters, could be targeted by malicious entities. By tampering with AI algorithms, poisoning their training datasets, or feeding them falsified sensor data, these entities could mask contamination or spoilage, leading to widespread foodborne illnesses. For example, manipulating AI systems that monitor the temperature of perishable goods could result in the distribution of spoiled products, posing serious health risks to consumers.

Consider the case of a large dairy processor relying on AI-driven systems to monitor bacterial levels in pasteurized milk. A targeted cyberattack injects false data, indicating safe levels when, in fact, the milk is contaminated with Escherichia coli. The compromised data leads to the distribution of unsafe products, resulting in a public health crisis. This scenario underscores the vulnerability of AI systems to data manipulation, with dire consequences for food safety and public trust.

Another area of concern is the sabotage of AI-driven supply chains. Food companies rely heavily on AI for inventory management and logistics. A well-orchestrated attack on these systems could lead to severe disruptions. Malicious actors could alter demand forecasts, redirect shipments, or even halt the supply of critical ingredients, leading to economic losses and potential food shortages. This form of attack could not only affect a company's bottom line, but also shake consumer confidence in the brand.

The 2021 ransomware attack on JBS S.A., one of the world's largest meat processing companies, serves as an example. Hackers compromised the company's computer systems, halting operations in North America and Australia. While the attack did not directly target JBS' AI-driven supply chain systems, it caused significant disruptions in the distribution networks that depend on them. The incident raised further concerns about the vulnerability of critical food supply chains to ransomware and other online attacks.

AI also poses a threat in the realm of competitive espionage. Competitors or bad actors could use generative AI tools to craft believable fake emails or phone messages that lead to the theft of proprietary information, such as recipes or production techniques. AI can also be used to analyze beverages and other food products and reconstruct close approximations of their formulations. This intellectual property theft not only undermines a company's competitive edge, but can also lead to the proliferation of unsafe or substandard copycat products in the market.

AI technologies can apply machine learning to vast datasets to infer proprietary manufacturing processes or formulations. The idea of a competitor using AI to reverse-engineer a popular soft drink formula is not far-fetched: Facebook has already demonstrated that AI can produce food ingredient lists from photos of prepared dishes.2 This type of research highlights the potential for AI-driven industrial espionage using only a photograph as a starting point.

Taking it one step further, researchers at OpenAI (the company that developed ChatGPT, a popular generative AI large language model, or LLM) recently warned that LLMs could be used to develop new biological threats.3 While current LLMs probably cannot provide the exact steps for building a working biothreat, the concern is that this capability will grow as the models improve.

“Given that many AI-related threats stem from data breaches or system infiltrations, strengthening cybersecurity protocols is crucial.”

Deepfakes, which are artificial images or videos generated by AI, pose the newest threat to all industries, not just the food sector. While deepfakes are often discussed in the context of politics or celebrity impersonations, their implications for businesses, including food companies, are profound and multifaceted. For example, a convincingly altered video could falsely show a food company's product causing illness or include fabricated statements from company executives making harmful claims about their own or competitors' products.

In the food industry, where sourcing and authenticity are crucial, deepfakes could be used to create fraudulent documentation or videos purporting to show food safety practices, sustainable sourcing, or ethical treatment of animals that are not actually followed by the company. In the event of a food safety crisis, deepfakes could exacerbate the situation by spreading false information, making it more difficult for companies to communicate effectively and regain public trust. This could not only deceive consumers and regulators, but also unfairly advantage dishonest businesses over those genuinely investing in ethical practices, disrupting market fairness.

Mitigating the Risks

To counter these threats, it is imperative for food companies to adopt a multi-faceted approach. The first line of defense is enhancing cybersecurity measures. Given that many AI-related threats stem from data breaches or system infiltrations, strengthening cybersecurity protocols is crucial. This involves conducting regular audits of AI systems, employing advanced encryption methods for data storage and transmission, and training employees on cybersecurity best practices. For example, adopting blockchain technology to ensure data integrity and using encryption to maintain data confidentiality can significantly reduce the risk of data manipulation or theft.
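To make the data-integrity point concrete, the short Python sketch below chains a keyed hash (HMAC) across successive cold-chain sensor readings so that later alteration, insertion, or deletion of a record breaks the chain and becomes detectable. The record fields, key handling, and temperature values are illustrative assumptions only, not drawn from any particular vendor system; blockchain-based traceability rests on the same underlying idea of each record committing to the one before it.

```python
import hmac
import hashlib
import json

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: in practice, kept in an HSM or secrets manager

def sign_record(record: dict, previous_mac: str) -> str:
    """Compute an HMAC over this record plus the previous record's MAC,
    forming a tamper-evident chain of sensor readings."""
    payload = json.dumps(record, sort_keys=True).encode() + previous_mac.encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Recompute the chain; any altered, inserted, or deleted record breaks it."""
    previous_mac = ""
    for entry in records:
        expected = sign_record(entry["reading"], previous_mac)
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        previous_mac = entry["mac"]
    return True

# Hypothetical cold-chain log: two temperature readings from one cooler
log = []
prev = ""
for reading in ({"sensor": "cooler-7", "temp_c": 3.9, "ts": "2024-04-01T08:00Z"},
                {"sensor": "cooler-7", "temp_c": 4.1, "ts": "2024-04-01T08:05Z"}):
    mac = sign_record(reading, prev)
    log.append({"reading": reading, "mac": mac})
    prev = mac

print(verify_chain(log))            # True for an untouched log
log[0]["reading"]["temp_c"] = 2.0   # simulate after-the-fact tampering
print(verify_chain(log))            # False: the chain no longer verifies
```

In a real deployment, verification would run continuously rather than on demand, and the signing key would never reside on the sensor gateway itself.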

Another solution lies in the development of robust AI systems with built-in safeguards against tampering and manipulation. This can be achieved through the implementation of secure coding practices and regular updates to AI algorithms to address vulnerabilities. Moreover, AI systems can be designed to detect anomalies in data patterns that could indicate tampering attempts and to alert operators. Employing AI to combat AI threats, in a sense, creates a self-regulating system.
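As a minimal sketch of the anomaly-detection idea described above, the Python example below flags readings that fall far outside a rolling baseline of recent cold-chain temperatures. The window size, threshold, and data are illustrative assumptions rather than recommended production settings.

```python
from statistics import mean, stdev

def flag_anomalies(readings: list[float], window: int = 12, z_threshold: float = 3.0) -> list[int]:
    """Flag readings that sit far outside the rolling baseline.
    Returns indices of suspect readings; thresholds here are illustrative."""
    suspects = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # a perfectly flat baseline gives no spread to compare against
        if abs(readings[i] - mu) / sigma > z_threshold:
            suspects.append(i)
    return suspects

# Hypothetical cooler temperatures (°C): a sudden jump may indicate spoilage risk
# or injected false data, and either way warrants investigation.
temps = [3.8, 4.0, 3.9, 4.1, 4.0, 3.9, 4.2, 4.0, 3.9, 4.1, 4.0, 3.9, 9.5]
print(flag_anomalies(temps))  # [12] -> the 9.5 °C reading is flagged for review
```

A production system would use richer models and corroborate suspect values against independent sensors, but the operating principle is the same: data that cannot be corroborated should trigger review, not be silently accepted.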

The interconnected nature of the food supply chain means that vulnerabilities in one area can have cascading effects, potentially compromising multiple entities within the network. This interconnectedness necessitates a collective approach to cybersecurity, where best practices and threat intelligence are shared among stakeholders. However, the implementation of such collaborative measures is often hampered by a lack of standardization and the reluctance of companies to share sensitive security information.

Collaboration must play a pivotal role in mitigating these risks. Food companies should actively engage with regulatory bodies, cybersecurity experts, and other industry players to stay abreast of emerging threats and best practices. The authors previously wrote several articles for Food Safety Magazine about the need for a Food and Agriculture Information Sharing and Analysis Center (FA-ISAC).4 This type of collaboration can extend to sharing threat intelligence and developing industry-wide standards for AI applications in food safety. For example, creating a centralized database of AI threats and responses can aid companies in quickly adapting to new challenges.

Challenges Faced by Food Growers and Producers

The cybersecurity landscape in the food industry, especially among many food growers and producers, is marked by significant vulnerabilities that expose them to the risk of sophisticated, AI-driven attacks. This vulnerability is particularly pronounced in small to medium-sized enterprises (SMEs) that form the backbone of the industry. These entities often operate with limited budgets and may prioritize immediate operational needs over long-term cybersecurity investments, leaving their digital infrastructure susceptible to advanced threats.

The lack of robust cyber defenses extends beyond physical hardware and software to inadequate cybersecurity protocols, employee training, and incident response strategies. This situation is compounded by the sophisticated nature of AI-driven cyber threats, which can adapt and evolve in response to defensive measures, making them particularly challenging to detect and mitigate with conventional cybersecurity solutions.

As AI applications in food production advance, regulatory frameworks struggle to keep pace. Food growers and producers often find themselves navigating a complex and sometimes unclear regulatory landscape, unsure of their compliance obligations related to AI usage and data protection.

The use of AI in food production also raises ethical questions, particularly regarding transparency. Consumers are increasingly concerned about how their food is grown, processed, and brought to market. The opaque nature of some AI algorithms can lead to skepticism and distrust among consumers, who demand transparency about the technologies used in food production and their implications for safety and quality.

In light of these challenges, there is a pressing need for targeted support and resources to bolster the cybersecurity posture of food growers and producers, particularly SMEs. This could take the form of government-led initiatives, industry-wide cybersecurity frameworks, and increased access to affordable cybersecurity tools and services. Without such support, the vulnerability of these critical players to AI-driven cyber threats will remain a significant concern, with potential implications not just for individual businesses, but also for the food industry as a whole and the consumers it serves.

“The future of food safety in the AI era hinges not just on technological advancements, but also on proactive measures to safeguard against the darker possibilities of this transformative technology.”

Going Forward

The integration of AI into the food sector has brought about transformative changes, offering numerous benefits that enhance efficiency, productivity, and innovation. AI-driven technologies have enabled precise agriculture practices, optimized supply chains, and improved food safety monitoring, contributing to increased yields and reduced waste. These advancements have not only streamlined operations, but have also facilitated the creation of more sustainable and consumer-responsive food systems. The ability to analyze vast datasets allows for better decision-making and forecasting, helping meet growing global demand for food.

However, the adoption of AI in the food sector also presents significant challenges and risks, particularly in the realm of cybersecurity. Many food growers and producers, especially small to medium enterprises, find themselves ill-equipped to defend against sophisticated, AI-driven cyber threats. The lack of robust cybersecurity infrastructure exposes these entities to data breaches, system infiltrations, and other cyberattacks that can have devastating effects on operations, consumer trust, and market competitiveness. The sophistication of AI-driven threats, which can adapt and evolve to bypass conventional security measures, further exacerbates this vulnerability, making it a critical concern for the industry.

Balancing the benefits and risks of AI in the food sector requires a nuanced approach that acknowledges the transformative potential of AI while actively addressing the associated cybersecurity challenges. This requires not only enhancing the digital defenses of individual food producers and growers, but also fostering a collaborative, industry-wide effort to establish standardized cybersecurity practices and protocols. In addition, targeted support and resources from government and industry bodies can play a crucial role in bolstering the cybersecurity posture of smaller entities, ensuring that the food sector can fully leverage the advantages of AI without being unduly exposed to its potential perils.

The dark side of AI in the food industry presents significant challenges, from the potential for data manipulation and supply chain disruptions to competitive espionage and ethical concerns, to the use of deepfakes that undermine consumer and regulator confidence. The future of food safety in the AI era hinges not just on technological advancements, but also on proactive measures to safeguard against the darker possibilities of this transformative technology.

Addressing these issues requires a concerted effort from all stakeholders, underpinned by robust policy changes at the national level. By strengthening cybersecurity frameworks, enhancing AI literacy, fostering public-private partnerships and collaboration, providing regulatory clarity, and promoting ethical AI use, we can mitigate the risks and ensure that AI serves as a force for good in the food industry, safeguarding the integrity of our food systems and the health and trust of consumers.

References

  1. Norton, R., M. Sachs, and C.A. Young. "Beneficial AI: Safe, Secure, and Trustworthy Artificial Intelligence for Food Safety." Food Safety Magazine February/March 2024. https://www.food-safety.com/articles/9250-beneficial-ai-safe-secure-and-trustworthy-artificial-intelligence-for-food-safety.
  2. Meta. "Food for thought: AI researchers develop new way to 'reverse engineer' recipes from photos." December 16, 2019. https://tech.facebook.com/artificial-intelligence/2019/12/food-for-thought-ai-researchers-develop-new-way-to-reverse-engineer-recipes-from-photos/.
  3. OpenAI. "Building an early warning system for LLM-aided biological threat creation." January 31, 2024. https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation.
  4. Norton, R., M. Sachs, and C.A. Young. "Cybersecurity and Food Defense: Establishing an ISAC for the Food and Agriculture Sector." Food Safety Magazine April/May 2023. https://www.food-safety.com/articles/8488-cybersecurity-and-food-defense-establishing-an-isac-for-the-food-and-agriculture-sector.

Robert A. Norton, Ph.D. is a Professor and Coordinator of National Security and Defense Projects in the Office of the Senior Vice President of Research and Economic Development at Auburn University. He specializes in national security matters and open-source intelligence, and coordinates research efforts related to food, agriculture, and veterinary defense.

Marcus H. Sachs, P.E. is the Deputy Director for Research at Auburn University's McCrary Institute for Cyber and Critical Infrastructure Security. He has deep experience in establishing and operating sharing and analysis centers including the Defense Department's Joint Task Force for Computer Network Defense, the SANS Institute's Internet Storm Center, the Communications ISAC, and the Electricity ISAC.

Cris A. Young, D.V.M., M.P.H., Diplomate A.C.V.P.M. is a Professor of Practice at Auburn University's College of Veterinary Medicine and an Adjunct Professor in the Department of Pathology, College of Veterinary Medicine, University of Georgia. He received his D.V.M. from Auburn University's College of Veterinary Medicine in 1994. He completed his M.P.H. at Western Kentucky University in 2005 and is a Diplomate of the American College of Veterinary Preventive Medicine.

APRIL/MAY 2024
