Data Security for Generative AI: A Webinar Recap

by Tori Thurmond / August 1st, 2023

Can AI and security coexist?  

This month, KirkpatrickPrice joined forces with Walter Haydock, the Founder and CEO of StackAware, to have a conversation about that question. 

What’s the deal with AI?

There’s no denying that AI is a hot topic right now. You’ve probably heard about some really great things AI tools can do and maybe a few risks or concerns as well. 

Artificial Intelligence has been around for a while, but in the past six months, there’s been an explosion in popularity since tools like OpenAI’s ChatGPT came onto the scene. Large language models (LLMs) like these predict a series of words based on the information a user inputs into the tool. The technology is designed to learn from the information users enter, and because of their popularity, these models have developed and learned rapidly since their launch late last year.

These tools are amazing, but Haydock warned us that they aren’t magic. Users should still consider security and privacy like they would with any other software. The security principles we normally apply, like data confidentiality and privacy, don’t just go away when new advancements are created.

However, security teams are struggling to keep up. With the speed at which these large language models are developing, it can feel overwhelming to adjust security procedures to accommodate the new technology. Some organizations are going so far as to ban tools like ChatGPT altogether. Haydock noted that this may not be the best solution: these companies will most likely get left behind by competitors and miss out on new opportunities and advantages. Not to mention the high probability that members of the organization will find ways around these bans and use the tools without proper security measures. A better approach is a consolidated, consistent, logical framework that helps organizations manage the use of these tools.

Advantages of Implementing AI into Your Organization

There can be many advantages to implementing AI technology in your organization. One specific advantage Haydock mentioned was allowing AI to process unstructured data, such as emails, Slack messages, and images. For a long time, many people’s jobs involved moving data from one platform to another, which can consume a lot of an organization’s time and resources. Generative AI makes those tasks trivial, completing them faster and even helping with data visualization.

Disadvantages of Implementing AI into Your Organization

Generative AI comes with old and new challenges. Organizations still have to focus on security and compliance with these new tools, just as they do in every other aspect of the company. Because users are inputting data into large language models, data is being stored with a third party, so third-party risk management comes into play with generative AI tools. Some generative AI platforms have better data retention policies than others. When deciding whether a tool or platform is right for your organization, make sure its policies align with your risk tolerance and allow you to remain compliant.

On the other hand, one new disadvantage that has developed during the rise of generative AI involves confidentiality. Tools like ChatGPT learn from the data they’re provided. The question that arises is: how will the output to your input be available to other people? There have been several cases where organizations’ competitors have gleaned valuable information from tools like ChatGPT by knowing what information to put into the tool.

Think about it this way: your company has figured out a new way of solving an age-old problem. This will be great for your business, your customers, and the overall market. But… you decide to input this new solution into ChatGPT to assist with messaging or work out some of the details. Now your competitors could potentially gain access to your innovation before you’ve even had the chance to implement it.

Some platforms don’t share data this way, though. You can also pre-process your data to make sure no sensitive information is given to the tool. In fact, Haydock developed a tool called GPT-Guard that helps remove sensitive information from the text you want to input into ChatGPT to keep your data safe.
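To make the pre-processing idea concrete, here is a minimal sketch of regex-based redaction in Python. The patterns and placeholder labels are illustrative assumptions, not GPT-Guard’s actual implementation; a real deployment would lean on a purpose-built tool like GPT-Guard or a dedicated PII-detection library rather than hand-rolled patterns.

```python
import re

# Hypothetical patterns for a few common PII formats (illustrative only).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholders before the text
    leaves your environment for a third-party AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a follow-up email to jane.doe@example.com, phone 555-867-5309."
print(redact(prompt))
# -> Draft a follow-up email to [EMAIL REDACTED], phone [PHONE REDACTED].
```

The design point is simply that redaction happens on your side, before any data reaches the model, so the third party never stores the sensitive values.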

To learn more about this tool and how you can integrate it into your environment, be sure to listen to the full webinar recording.  

Incorporating Generative AI in Your Security Policies

To combat some of the issues with AI models and confidentiality, Haydock suggested that organizations create generative AI policies. While adding information about generative AI into existing policies is still a good step, it may not be enough. That’s why StackAware has developed a Generative AI Security Policy template to help organizations get started with this process.  

An important step in building out your security program is for the decision makers in your organization to come together and create a catalogue of approved tools and when they can be used. Certifying certain tools for certain purposes is also important. You don’t want to pour sensitive information into just any chatbot, but certain tools might be worth looking into if they will help your employees and the organization as a whole. Establishing generative AI policies requires organizations to think about these tools and establish rules before the tools are actually in use.
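As a rough illustration of what such a catalogue might look like in practice, here is a hedged Python sketch. The tool names, data classifications, and approved uses are hypothetical placeholders; your own catalogue would reflect the tools and data classes your decision makers actually certify.

```python
# Hypothetical catalogue: each approved tool lists its certified uses and
# the most sensitive data classification it may receive.
APPROVED_TOOLS = {
    "ChatGPT (enterprise tier)": {
        "approved_uses": ["drafting marketing copy", "summarizing public docs"],
        "max_data_classification": "public",
    },
    "Internal LLM sandbox": {
        "approved_uses": ["code review assistance", "internal reports"],
        "max_data_classification": "internal",
    },
}

LEVELS = ["public", "internal", "confidential"]  # least to most sensitive

def is_permitted(tool: str, use: str, data_class: str) -> bool:
    """Check a proposed use against the catalogue before any data is entered."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None or data_class not in LEVELS:
        return False  # unlisted tools and unknown data classes are denied by default
    return (use in entry["approved_uses"]
            and LEVELS.index(data_class) <= LEVELS.index(entry["max_data_classification"]))

print(is_permitted("ChatGPT (enterprise tier)", "drafting marketing copy", "public"))       # True
print(is_permitted("ChatGPT (enterprise tier)", "summarizing public docs", "confidential"))  # False
```

Even a simple deny-by-default check like this encodes the two decisions the webinar calls out: which tools are certified, and for which purposes and data.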

New technology can be scary, especially when you’re thinking about incorporating it into your organization’s environment. While we understand the urge to shut out generative AI altogether, Haydock brought up the very real possibility of getting left behind by competitors who are more willing to embrace these advancements. AI technology can be a game-changer in the way we do work. The key is balancing the protection of your employees and environment with taking advantage of new technology. Put controls in place that help you manage your risk and leverage new technology to further your business goals.

What’s the best way to get the word out to the team?

When implementing new policies like this, a mass email is not enough. Security teams need to enable other business units to incorporate these changes securely. Spend time on security training for all members of your organization to minimize internal risk as much as possible. Employees need clear boundaries when it comes to complicated tools, so take the time to help everyone understand the new technology, any changes to the environment, and their role in using it securely.

Creating a standard operating procedure (SOP) is another great way to ensure that new technology is incorporated securely. Haydock provided an AI Risk Management SOP template to help jump-start this process for organizations looking to create a similar document. When thinking about adopting generative AI tools, you need a structured approach, and this SOP template can help.  

Having a plan to protect your company data is essential. A security program that claims a 100% success rate probably isn’t doing its job properly: there are going to be security incidents, and your organization needs to be prepared to face them effectively and efficiently.

How often should your AI policies and procedures be updated?

It’s best to update your generative AI policies at least quarterly to account for any new developments in the technology, and procedures around using generative AI tools should be assessed at least once a year to make sure the controls you have in place are still adequate for the type of data you’re processing.
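A simple way to keep that cadence honest is to track review dates programmatically. The sketch below assumes a hypothetical review schedule; the policy names, dates, and intervals are placeholders for whatever your organization sets.

```python
from datetime import date, timedelta

# Hypothetical tracker: policy name -> (last review date, review interval).
REVIEW_SCHEDULE = {
    "Generative AI Security Policy": (date(2023, 5, 1), timedelta(days=90)),    # quarterly
    "AI Tool Usage Procedures":      (date(2022, 9, 15), timedelta(days=365)),  # annually
}

def overdue_reviews(today: date | None = None) -> list[str]:
    """Return the policies whose next scheduled review date has passed."""
    today = today or date.today()
    return [
        name
        for name, (last_review, interval) in REVIEW_SCHEDULE.items()
        if last_review + interval < today
    ]

for name in overdue_reviews():
    print(f"Review overdue: {name}")
```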

Having a consistent cadence for checking your policies is a good practice to get used to in your organization. You need to make sure that your policies and procedures are actually doing what they say they are doing to minimize any vulnerabilities or risk.  

So, is generative AI really secure?

Any technology is just a tool, but it can be a double-edged sword. Along with the amazing capabilities that help our businesses function more efficiently than ever has come a rise in intelligent hacking schemes. Over the past year, there has been a large increase in phishing attempts that could be linked to AI tools. Another issue is large language models hallucinating software packages that don’t exist; threat actors then publish malicious libraries under those hallucinated names to take advantage of users who install them.
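One hedged defense against that hallucinated-package risk, assuming a Python/PyPI workflow, is to verify that an AI-suggested package actually exists and has a release history before installing it. This sketch uses PyPI’s public JSON API; a stricter review would also examine maintainers, release dates, and download counts before trusting the package.

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Check PyPI's JSON API before trusting an AI-suggested `pip install`."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.URLError:
        # 404 (package does not exist) or network failure: treat as untrusted.
        return False
    # A bare existence check; packages with no releases are also suspect.
    return bool(data.get("releases"))

print(package_exists_on_pypi("requests"))  # True: long-established package
print(package_exists_on_pypi("some-hallucinated-package-name"))  # almost certainly False
```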

Since generative AI is newer on the scene, there are still a lot of grey areas when it comes to compliance. Data privacy laws like GDPR have strict regulations on how personal data is processed. AI can predict things about people, like where they live or other personal details, based on very little information. What are the implications of knowing personal information about someone if that information isn’t even in the organization’s database? Does the model need to be altered to produce inaccurate information to protect client data? We don’t necessarily have answers to these questions yet.

For now, what we can do is continue to implement risk management strategies and security policies just like we have been for other new technology.  

Watch the Full Webinar Recording

To learn more about data security for generative AI, make sure you watch the full webinar recording below. Don’t miss out on Walter Haydock’s expertise, StackAware’s product demos, and more.

Partner with KirkpatrickPrice to Securely Incorporate AI Into Your Compliance Programs

We know that change can be uncomfortable, especially when it comes to your organization’s security and compliance, but you don’t have to face it alone. Our experts stay up to date on the latest cybersecurity developments, so you don’t have to feel overwhelmed. If you have questions about anything that was mentioned in the webinar or are wondering if a certain AI tool is right for your organization, connect with a KirkpatrickPrice expert today.  


StackAware Resources

Don’t forget to sign up for Walter Haydock’s cohort-based course, “Securing AI-powered transformation: managing risk & supercharging productivity,” to learn even more about generative AI and data security.

Generative AI Security Policy

AI Risk Management SOP

GPT-Guard  

Securing AI-powered transformation: managing risk & supercharging productivity

About the Author

Tori Thurmond

Tori Thurmond has degrees in both professional and creative writing. She has over five years of copywriting experience and enjoys making difficult topics, like cybersecurity compliance, accessible to all. Since starting at KirkpatrickPrice in 2022, she’s earned her CC certification from (ISC)², which has aided her ability to contribute to the company culture of educating, empowering, and inspiring KirkpatrickPrice’s clients and team members.