
How to Protect Organizations from Cyberattacks in the Age of AI

September 6, 2023

by Sandeep Kampa

The rapid expansion and adoption of artificial intelligence (AI) technologies bring myriad benefits to organizations, including increased speed and productivity. At the same time, AI introduces new vulnerabilities that malicious actors can exploit. Data privacy and security are common concerns with AI systems, which can leak sensitive information if not properly monitored, potentially leading to security breaches. AI and machine learning (ML) systems are also vulnerable to adversarial attacks, in which inputs are deliberately engineered to cause system malfunctions. To complicate matters, AI provides cybercriminals with new tools, including bots that enable more efficient input manipulation. Keeping AI systems safe from attack requires a multi-level approach, including awareness and monitoring, data anonymization and compliance, and the use of security tools where appropriate.

Cybersecurity threats and AI

The growing proliferation of AI technologies makes systems more vulnerable to cybersecurity issues. Because generative AI cannot itself identify copyrighted or sensitive data, its use inherently raises concerns about intellectual property and data privacy, and guidelines must be in place to protect copyrighted material and sensitive information. This matters because AI tools, especially commercially available systems such as ChatGPT, can leak data, either inadvertently or when deliberately prompted to do so by malicious actors.
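To make this concrete, below is a minimal sketch of the kind of guardrail such guidelines might mandate: redacting obvious identifiers and secrets from a prompt before it leaves the organization. The patterns and the scrub_prompt helper are illustrative assumptions, not a vetted solution; a production system would use dedicated PII- and secrets-detection tooling.

```python
import re

# Illustrative patterns only; real guardrails would use vetted
# PII/secrets-detection libraries and organization-specific rules.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt leaves the organization's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(scrub_prompt("User jane.doe@corp.com reported key sk_live1234567890abcdef"))
# -> User [REDACTED_EMAIL] reported key [REDACTED_API_KEY]
```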

Neglecting these concerns is likely to have severe consequences for businesses. For example, Samsung has established more stringent guidelines around the use of large language models in the workplace after three incidents in which engineers shared sensitive information while using ChatGPT to help with routine tasks. ChatGPT later leaked the data, compromising private customer information. Because of these security concerns, other major companies, including Google and Apple, have added strong restrictions on employee use of generative AI tools.

Adversarial attacks

Adversarial attacks on ML systems are a growing and frequently underestimated problem. In an adversarial attack, hackers feed crafted data to a model with the intention of either causing it to malfunction or extracting information about its training data and parameters. Even some of the most advanced systems have shown vulnerability to adversarial attacks. A growing body of research addresses ways to increase robustness against these attacks, including an open framework released by Microsoft and the nonprofit MITRE Corporation (the Adversarial ML Threat Matrix, since evolved into MITRE ATLAS). It is vital for organizations and leaders to stay up to date on both the potential for adversarial attacks and the current best practices for protecting against them.
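As an illustration of how little effort such an attack can take, the sketch below implements the classic Fast Gradient Sign Method (FGSM), one of the simplest published evasion attacks, using PyTorch. The toy model and epsilon value are illustrative assumptions, not drawn from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: push each input feature a small
    step in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The result looks almost identical to x but can flip the prediction.
    return (x + epsilon * x.grad.sign()).detach()

# Illustrative usage with a toy linear classifier (not a real target system).
model = torch.nn.Linear(4, 2)
x, y = torch.randn(1, 4), torch.tensor([1])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # every feature moved by at most epsilon
```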

AI-powered cyberattack concerns

Cyberattackers can harness AI to manipulate inputs and strike more efficiently. The generative ability of AI makes it ideal for carrying out injection attacks, including structured query language (SQL) injection and cross-site scripting, at greater speed and larger scale than a human could manage alone. AI-powered bots can bombard websites with traffic in distributed denial-of-service (DDoS) attacks, and there has been a recent rise in AI-powered ransomware and phishing attacks, many of which evade traditional detection tools.
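For SQL injection specifically, the standard defense is the same whether the attacker is a human or an AI-powered bot: never splice untrusted input into a query string. Below is a minimal sketch using Python's built-in sqlite3 module; the table and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: attacker-controlled text is parsed as part of the SQL itself.
#   f"SELECT role FROM users WHERE name = '{user_input}'"

# Safe: the driver binds the value separately; it is never parsed as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user instead of dumping the table
```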

Best practices for AI and cybersecurity

Addressing cybersecurity concerns requires organizations to be mindful of how they use AI systems and ML models. Training models within the organization, rather than relying on commercial or open-source solutions, can help prevent leaks and often yields better results. Because developing and training models is costly, however, some businesses will prefer pre-existing models. In those situations, techniques such as data anonymization, in which sensitive data is stripped of identifying details, and differential privacy, in which calibrated noise is added to data or query results to mask individual records, are crucial to avoid leaking sensitive data. Equally important is ensuring that data is stored and shared only in accordance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). Regular, organization-wide training that familiarizes employees with these best practices and shows them how to use AI securely is also important.
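As a rough illustration of those two techniques, the sketch below pseudonymizes direct identifiers with salted hashes and releases a count through the Laplace mechanism, the textbook differential-privacy primitive. The field names, salt, and epsilon value are illustrative assumptions.

```python
import hashlib
import numpy as np

def pseudonymize(record: dict, salt: str = "org-secret") -> dict:
    """Strip direct identifiers by replacing them with salted hashes.
    (The salt here is a placeholder; real salts belong in a secrets manager.)"""
    out = dict(record)
    for field in ("name", "email"):
        if field in out:
            digest = hashlib.sha256((salt + out[field]).encode()).hexdigest()
            out[field] = digest[:12]
    return out

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count via the Laplace mechanism: noise with scale
    1/epsilon masks any single individual's presence in the data."""
    return true_count + float(np.random.laplace(scale=1.0 / epsilon))

print(pseudonymize({"name": "Jane Doe", "email": "jane@corp.com", "age": 41}))
print(dp_count(128))  # e.g. 127.4 -- accurate in aggregate, private per person
```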

A robust, multi-layered approach to cybersecurity is necessary to protect against the growing wave of AI-powered cyberattacks. Secure development and testing environments, continuous monitoring, vulnerability assessments, robust access controls, and close attention to compliance are all crucial components of security. A variety of tools and monitoring systems also exist to help organizations keep their systems and data safe in the age of AI, including DDoS protection tools, encryption mechanisms, and anomaly detection systems. Organizations can protect themselves against malicious AI-powered bots by monitoring bot traffic, categorizing it into known and unknown categories, and, where appropriate, investing in “defensive AI” systems to counter AI attackers.
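As one example of what such anomaly detection might look like, the sketch below fits scikit-learn's IsolationForest to per-client traffic features and flags bot-like outliers. The features and parameters are invented for illustration; a real deployment would engineer features from actual server logs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-client features: [requests per minute, mean inter-request gap (s)].
rng = np.random.default_rng(0)
human_traffic = rng.normal(loc=[30, 2.0], scale=[10, 0.5], size=(500, 2))
bot_traffic = rng.normal(loc=[600, 0.01], scale=[50, 0.005], size=(5, 2))

# Train on traffic assumed to be mostly legitimate, then score new clients.
detector = IsolationForest(contamination=0.01, random_state=0).fit(human_traffic)
labels = detector.predict(np.vstack([human_traffic[:3], bot_traffic]))
print(labels)  # 1 = looks human, -1 = anomalous; the bot rows come back as -1
```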

Building a more secure future

The rapid proliferation of AI has created an environment of new, diverse, and rapidly changing cyber threats. Organizations can combat these threats by being conscientious about their use of AI and by investing in employee training and awareness. Training custom models within the organization is an effective safeguard; when commercial or open-source models are used instead, attention to data anonymization and compliance is essential. Additionally, strong access controls, encryption mechanisms, and anomaly detection systems help protect AI systems and the data they process. By staying aware of the threat environment and employing best practices to address cybersecurity vulnerabilities, organizations can stay ahead of emerging threats and ensure a safe and trustworthy AI environment.

About the Author:

Sandeep Kampa is a senior DevSecOps engineer. He is a subject matter expert in DevOps, SecOps and cloud computing. Sandeep has made significant contributions to numerous high-profile projects, enhancing and empowering organizations to achieve higher levels of efficiency, scalability, security, and reliability. He holds a Bachelor of Science degree in engineering and received his master’s degree in computer software engineering from Stratford University. For more information, contact sandeepkampa5@gmail.com.
