A Brief Overview of AI from RSA
Everyone’s talking about AI and machine learning these days, and that was definitely the case at the 2023 RSA Conference at the end of April. We heard about problems with AI, how we can make it better, and how it might affect different industries.
The KirkpatrickPrice team members who attended the conference came home with a lot to think about, so we wanted to share a few highlights from the AI-focused sessions while the topic is top of mind.
Why AI Won’t Be Taking Your Job Anytime Soon
Some of the bigger headlines about AI have been asking whether people will be replaced by improving AI technology. The short answer: not anytime soon.
AI is actually increasing our need to think critically, because we have to weigh what each output really means, which is ironic given that AI is meant to make things easier. If you’ve played around with ChatGPT at all, you’ve probably noticed some of its flaws, some more serious than others.
One of the speakers at the conference, Dr. Rumman Chowdhury, shared an example in which she asked an AI to determine her occupation based on her social media accounts. The AI decided she was a social media influencer rather than a data scientist and visiting fellow at Harvard University, two professions that are not very similar.
While that example isn’t a life-changing mistake on AI’s part, there have been cases where AI has been wrong about far more serious topics. During the same session, a speaker recounted an instance in which an AI hallucinated sexual harassment claims against a law professor. Think about the damage claims like that could do if they aren’t fact-checked.
As obvious as it sounds, people still have to think when they use AI. We need to verify that the output is factual instead of taking it at face value.
One of our experts here at KirkpatrickPrice likes to use the analogy of autopilot on a plane to describe how AI should be treated. Planes today fly on autopilot for most of the flight, but a trained pilot is always watching and takes over the moment there’s a problem. While we may not want to give AI any major responsibilities yet, the tasks we do hand off to it need to be carefully monitored to make sure everything functions as expected.
One question asked during the session stuck with us: how do we pull back AI advancements if they pick up misinformation and integrate it into their programming?
The unfortunate answer is that there isn’t a way to do this currently. Errors are being addressed on a case-by-case basis, but with AI advancing as quickly as it is, will this be enough?
Certain industries, like the music industry, are starting to push back against AI as much as they can. AI can be scary for younger performers who are still trying to gain their footing. People will always come to see artists who are already popular, but what about the songwriters now competing with AI to write the next big hit? The songwriting and music businesses are competitive enough already; with the addition of AI, will new artists stand a chance? And what happens when AI generates a song credited to an artist who doesn’t even exist, and that song competes with real up-and-coming artists? We don’t have answers to these questions yet, but they are certainly on the minds of people within the industry.
Some labels are asking streaming platforms not to let their catalogs be used as machine learning training data. The thinking is that this kind of generative AI use will be treated as copyright infringement within the music industry.
How can AI be improved?
There’s no stopping the advancement of AI and the various chatbots, but there are ways it can get better.
- Controlling Access
One speaker suggested that AI should be subject to access control restrictions just like a normal human user: determine what AI actually needs access to, and restrict everything else. Think of AI as your organization’s lowest-level employee, or as an outside helper assisting with a task. You could ask that person how to write an Excel formula, but you wouldn’t ask them to review your information security policy and trust that your policy is as effective as it needs to be. A minimal sketch of that least-privilege mindset appears below.
The largest AI models are overwhelming and costly, but narrowing a model’s scope could make it more effective. An example of a narrower scope would be a model built specifically to support a Security Operations Center (SOC) rather than a public-facing model like ChatGPT. While your audit shouldn’t be fully automated, once the technology matures there are ways AI could help you get ready for an audit and stay organized.
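To make the access control idea concrete, here’s a minimal sketch of what least-privilege permissions for an AI assistant could look like. This isn’t any particular product’s API; the `Role`, `PERMISSIONS`, and `can_access` names are hypothetical, purely for illustration.

```python
# A minimal sketch: treat an AI assistant like any other low-privilege
# account, with an explicit allow-list of the resources it may touch.
# Role, PERMISSIONS, and can_access are hypothetical names.
from enum import Enum

class Role(Enum):
    AI_ASSISTANT = "ai_assistant"  # lowest privilege, like a brand-new hire
    ANALYST = "analyst"
    ADMIN = "admin"

# Deny by default: each role is granted only what it needs.
PERMISSIONS = {
    Role.AI_ASSISTANT: {"public_docs", "spreadsheet_help"},
    Role.ANALYST: {"public_docs", "spreadsheet_help", "soc_alerts"},
    Role.ADMIN: {"public_docs", "spreadsheet_help", "soc_alerts",
                 "security_policies"},
}

def can_access(role: Role, resource: str) -> bool:
    """Return True only if the role's allow-list names the resource."""
    return resource in PERMISSIONS.get(role, set())

# The assistant can help with an Excel formula...
assert can_access(Role.AI_ASSISTANT, "spreadsheet_help")
# ...but it never sees the information security policy.
assert not can_access(Role.AI_ASSISTANT, "security_policies")
```

The same deny-by-default posture applies whether the “user” is a person, a service account, or a model: grant access explicitly, and review those grants as the AI’s responsibilities change.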
- Sensitive Information Training
From a prevention standpoint, we should train the language models on how to treat personally identifiable information (PII), protected health information (PHI), and other sensitive data when they encounter it.
At KirkpatrickPrice, we tell our clients never to input personal information into a chatbot like ChatGPT, because the information is not kept private and does not go away once you close the session. Hopefully, future versions of the technology will be able to recognize sensitive information and avoid using it; in the meantime, scrubbing prompts before they leave your environment can reduce the risk, as sketched below.
Just like in any field, training takes time. AI’s issues will not all be resolved overnight, so make sure you have a solid security team working to keep your organization secure against the new threats and vulnerabilities this technology will introduce.
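Since there’s no guarantee today’s chatbots handle sensitive data safely, one practical stopgap is to scrub obvious identifiers from a prompt before it ever leaves your environment. Below is a rough, hypothetical sketch of that idea; the patterns and the `redact()` helper are ours for illustration, and real PII detection requires far more than a few regexes.

```python
# A rough sketch of pre-filtering prompts for obvious PII before they
# are sent to a third-party chatbot. Patterns are illustrative only;
# production PII detection needs much more than a few regexes.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize this note: Jane's SSN is 123-45-6789, phone 555-867-5309."
print(redact(prompt))
# Summarize this note: Jane's SSN is [REDACTED SSN], phone [REDACTED PHONE].
```

Note what the sketch misses: “Jane” is still in the prompt. Names, addresses, and free-text health details are exactly why a human still has to think before hitting send.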
What should you be doing in the meantime?
You may be wondering what your organization can do to prepare for the continued growth of AI and the risk that will come along with it.
- Make sure you have a solid risk management process in place.
We know that regardless of where AI takes us, you want to keep serving your customers and carrying out your organization’s mission. The only way you can do these things confidently is to prepare for anything and everything.
Even before AI and machine learning gained so much popularity, the threat landscape was rapidly evolving. Organizations that aren’t prepared to face today’s threats can end up paying thousands, if not millions, of dollars in damages and fees, on top of the harm to their reputation.
Don’t put your organization at further risk. Make sure you’re having conversations with members of every department about new threats and how you can protect against them.
- Don’t forget about vendor management.
Ask your vendors how they use AI. Are they aware of the risks that come with implementing AI? What are they doing to protect your organization’s sensitive information?
It can feel overwhelming to have to think about AI risk from a vendor management perspective in addition to identifying your own risk, but some of the biggest data breaches in recent years have been due to vendor error. Make sure you understand how your vendors are actually operating.
- Stay up to date on reviews and scans.
One of the best things you can do to prepare for the added risk that AI contributes to the threat landscape is to perform risk assessment reviews, policy reviews, and cloud scans. These measures will help you identify any vulnerabilities you weren’t aware of and prepare for inevitable threats.
Partner with KirkpatrickPrice to Get Ready
We get it: new technology and rapid advancement bring new concerns and a lot of questions. KirkpatrickPrice is here to help. Let us answer any questions you have about how AI could affect your business and what you can be doing to make your business unstoppable.
Connect with an expert today to stop worrying and to make sure your organization is as secure as possible.