The AI Episode
Transcript
What is generative AI?: Generative AI is a subset of artificial intelligence focused on creating new content based on existing content. The core technology behind it is the large language model (LLM), a sophisticated software system that predicts a sequence of words in response to a given prompt. These models are not magic but complex tools that require proper management.
How do you keep up with evolving AI?: It’s a significant challenge, as security teams often lag behind business units in adopting new technologies. A common but ineffective reaction is to ban AI tools like ChatGPT outright. Bans often lead to “shadow IT,” where employees use unmanaged tools anyway, putting company data at risk. The better approach is to establish a consolidated, logical framework for using these technologies safely.
Pros and cons of AI for businesses?:
- Benefits: The primary advantage is the ability to process large volumes of unstructured data (such as emails, Slack messages, and documents) and automate previously manual, time-consuming tasks, yielding significant productivity gains.
- Disadvantages (old challenges): Using third-party AI tools (from OpenAI, Google, Microsoft, and others) introduces supply chain security risk. Companies must rely on these vendors to protect their data, which requires proper vetting, understanding each vendor’s data retention policies, and confirming that adequate security measures are in place.
- Disadvantages (new challenges): By default, many AI services use the data they receive for model training. The model can then learn, and potentially expose, sensitive company information or intellectual property, including internal processes such as interview questions, as was reportedly the case at Amazon.
What is StackAware’s AI protocol?:
- The protocol is a comprehensive checklist and standard operating procedure (SOP) that gives companies a structured approach to managing AI risk.
- It emphasizes writing a specific, detailed policy for generative AI rather than issuing vague guidelines.
- It includes steps for vendor management, risk registration, business analysis (cost-effectiveness), and privacy reviews.
- Aggregation risk: the protocol specifically calls for analyzing “aggregation risk,” where an AI tool combines multiple, seemingly innocuous pieces of public information to infer sensitive, non-public details about a company.
How in-depth should your AI policy be?: Policies need to be more than surface-level; they must be specific and actionable. Instead of vague instructions like “be smart,” a policy should clearly define what types of data are authorized for use in which specific AI tools. This process should be automated as much as possible to make it easy for employees to follow and reduce the cognitive load of remembering complex rules.
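An automated, data-aware policy like the one described above could be sketched as a simple lookup that maps a piece of data’s classification level to the AI tools authorized to process it. The classification labels and tool names below are illustrative assumptions, not part of any actual policy discussed in the episode:

```python
# Illustrative sketch: map data classification levels to the AI tools
# authorized to process them. Labels and tool names are hypothetical.
AUTHORIZED_TOOLS = {
    "public":       {"ChatGPT", "Claude", "internal-llm"},
    "internal":     {"internal-llm"},
    "confidential": set(),   # no generative AI tools authorized
}

def is_authorized(classification: str, tool: str) -> bool:
    """Return True if the given tool may process data at this level.
    Unknown classification levels fail closed (deny by default)."""
    return tool in AUTHORIZED_TOOLS.get(classification, set())

print(is_authorized("public", "ChatGPT"))        # True
print(is_authorized("confidential", "ChatGPT"))  # False
```

Failing closed on unknown classifications is the key design choice: an employee never has to remember the rules, because anything not explicitly authorized is denied.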
Has generative AI gotten businesses in trouble?: Yes. Several large companies, including Samsung, Apple, and JPMorgan Chase, have banned or restricted ChatGPT over concerns about employees entering proprietary source code, confidential meeting notes, and other sensitive information into the tool. Attackers have also begun publishing malicious software libraries under legitimate-sounding names, exploiting the fact that AI models can “hallucinate” references to plausible-sounding tools that do not actually exist; a developer who installs the hallucinated name gets the attacker’s code instead.
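One defense against such hallucination-inspired or typosquatted packages is to screen every dependency an AI assistant suggests against a vetted allowlist before installing it. This is a minimal sketch, not a complete supply chain control; the allowlist contents and package names are assumptions for illustration:

```python
# Minimal sketch: screen AI-suggested dependencies against a vetted
# allowlist before installation. Package names here are illustrative.
VETTED_PACKAGES = {"requests", "numpy", "pandas"}

def screen_dependencies(suggested: list[str]) -> list[str]:
    """Return suggested packages NOT on the vetted allowlist,
    so a human can review them before running `pip install`."""
    return [pkg for pkg in suggested if pkg.lower() not in VETTED_PACKAGES]

# An AI model might hallucinate a plausible-sounding package name:
flagged = screen_dependencies(["requests", "reqeusts-pro"])
print(flagged)  # ['reqeusts-pro']
```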
What do companies need from you?:
- Work with approved AI tools: A policy should define which AI tools are approved for use, preventing employees from adopting random, unvetted tools.
- Don’t put sensitive data into an AI tool: In conjunction with a data classification policy, employees must be trained to recognize and avoid entering sensitive data such as PHI (protected health information) or PII (personally identifiable information).
- Keep AI limitations in mind: AI tools are not infallible. A human must verify that generated output is accurate and free of bias or error.
- Have a protocol for investigating AI behavior: Employees should have a clear channel for reporting unusual or unexpected AI behavior to the CISO or head of security for investigation.
- Factor generative AI into risk assessments: As an emerging technology, AI and its associated risks (such as data exposure and bias) must be formally incorporated into the company’s regular, at least annual, risk assessment activities.
- Make humans accountable for AI use: An AI system cannot be held accountable. Human beings must ultimately be responsible for how AI is used and for decisions made based on its output; a human trail of verification and accountability is essential.
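The “don’t put sensitive data into an AI tool” training above can be reinforced in code: a pre-submission filter that flags prompts containing obvious PII patterns before they reach a generative AI tool. The two regexes below (US SSN and email address) are an illustrative assumption, not a complete data loss prevention solution:

```python
import re

# Illustrative pre-submission filter: flag prompts containing obvious
# PII patterns before they reach a generative AI tool. These two
# patterns (US SSN, email address) are examples only, not full DLP.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the names of the PII patterns detected in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

hits = find_pii("Customer SSN is 123-45-6789, email jane@example.com")
print(hits)  # ['ssn', 'email']
```

A real deployment would pair such a filter with the human accountability trail described above: a flagged prompt should be blocked or routed for review, not silently logged.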
Notes
Our Cybersecurity Mission is here to elevate the standards for cybersecurity and compliance. In this episode, Walter Haydock, the founder of StackAware, shares insights on the risks and benefits of generative AI, including practical guidance on how to leverage AI in your business. Join Our Cybersecurity Mission: https://www.linkedin.com/showcase/our-cybersecurity-mission
Walter Haydock: https://www.linkedin.com/in/walter-haydock/
Learn more about StackAware: https://stackaware.com
StackAware Resources: http://products.stackaware.com/
Generative AI Webinar recap: https://kirkpatrickprice.com/blog/webinars-events/data-security-for-generative-ai-a-webinar-recap/
Partner with KirkpatrickPrice to Securely Incorporate AI Into Your Compliance Programs: We know that change can be uncomfortable, especially when it comes to your organization’s security and compliance, but you don’t have to face it alone. Our experts stay up to date on the latest cybersecurity developments, so you don’t have to feel overwhelmed. If you have questions about anything that was mentioned in the webinar or are wondering if a certain AI tool is right for your organization, connect with a KirkpatrickPrice expert today.
Send a Question
Do you have a question for our podcast? Send it to us here.