Chief information security officers are grappling with large technology firms' soaring investments in artificial intelligence and chatbots, even as those same firms carry out massive layoffs amid slowing growth.
With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard, and Elon Musk’s plan for his own chatbot making headlines, generative AI is infiltrating the workplace, and chief information security officers must approach this technology with caution and take the necessary security precautions.
The technology, known as GPT (generative pretrained transformer), is driven by large language models (LLMs), algorithms that produce human-like chatbot conversations. But because not every company has its own GPT, companies must monitor how employees use this technology.
Michael Chui, a partner at the McKinsey Global Institute, says that people will use generative AI if they find it beneficial for their work, comparing it to how employees use personal computers and mobile phones.
“Even when it’s not sanctioned or blessed by IT, people are finding [chatbots] useful,” Chui said.
“Throughout history, we’ve found technologies that are so compelling that individuals are willing to pay for them,” he said. “People were buying mobile phones long before businesses said, ‘I will supply this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”
Consequently, corporations must “catch up” in terms of how they will approach security measures, according to Chui.
Experts believe there are certain areas where CISOs and companies should begin, such as monitoring what information is shared on an AI platform or integrating a company-sanctioned GPT in the workplace.
Start with the basics of information security
In addition to dealing with fatigue and stress, CISOs must also contend with potential cybersecurity attacks and rising automation requirements. As AI and GPT enter the workplace, CISOs can begin with the fundamentals of security.
Companies can license the use of an existing AI platform, according to Chui, so they can monitor what employees say to a chatbot and ensure that shared information is secure.

“If you’re a corporation, you don’t want your employees prompting a publicly available chatbot with confidential information,” Chui said. “So, you could put technical means in place where you can license the software and have an enforceable legal agreement about where your data goes or doesn’t go.”

Chui stated that licensing software involves additional checks and balances. Protection of confidential information, regulation of where the information is stored, and guidelines for how employees can use the software are all standard procedures when businesses license software, whether or not it is AI.
“If you have an agreement, you can audit the software, so you can see if they’re protecting the data in the ways that you want it to be protected,” Chui said.
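The kind of monitoring Chui describes is often implemented as a data-loss-prevention check on outbound prompts. A minimal illustrative sketch of that idea follows; the patterns, placeholders, and function name are hypothetical examples, not taken from any specific vendor or DLP product:

```python
import re

# Hypothetical patterns a company might treat as confidential. A real
# deployment would rely on a proper DLP tool, not a handful of regexes.
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # SSN-style IDs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
    (re.compile(r"\bPROJ-\d{4}\b"), "[REDACTED-PROJECT]"),             # internal project codes
]

def redact_prompt(prompt: str) -> str:
    """Replace confidential-looking substrings before a prompt leaves the company."""
    for pattern, placeholder in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact_prompt("Status of PROJ-1234 for jane.doe@corp.example"))
# Status of [REDACTED-PROJECT] for [REDACTED-EMAIL]
```

A filter like this would typically sit in a proxy between employees and the licensed chatbot, so every prompt is screened (and can be logged for the audits Chui mentions) before it reaches the external service.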
According to Chui, the majority of businesses that store data in cloud-based software already do this, so a business that offers its employees a company-approved AI platform is already in line with prevalent industry practices.
How to create or integrate a customized GPT
According to Sameer Penakalapati, CEO of Ceipal, an AI-driven talent acquisition platform, one security option for businesses is to develop their own GPT or hire a company that develops this technology to create a customized version.
For certain functions, such as human resources, multiple platforms already exist, including Ceipal and Beamery’s TalentGPT, and businesses may also want to consider Microsoft’s plan to offer customizable GPT. Despite the higher cost, some businesses may still prefer to develop their own technology.
If a company develops its own GPT, the software will contain the precise data that the company wishes its employees to have access to. Penakalapati stated that a company can also protect the information that its employees input, but he added that employing an AI company to create this platform will enable companies to supply and store information securely.
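Penakalapati's point that a custom GPT should contain only the data a company wants employees to access can be illustrated with a simple allowlist gate on the documents fed into the model's context. The document store, identifiers, and function below are hypothetical, sketched purely to show the idea:

```python
# Hypothetical store of documents the company has approved for its assistant.
APPROVED_DOCS = {
    "benefits_faq": "Employees accrue 20 vacation days per year.",
    "it_policy": "Company laptops must use full-disk encryption.",
}

def build_context(doc_ids):
    """Assemble model context strictly from the approved-document store,
    refusing any request that references an unapproved document."""
    unapproved = [d for d in doc_ids if d not in APPROVED_DOCS]
    if unapproved:
        raise PermissionError(f"Not approved for model context: {unapproved}")
    return "\n".join(APPROVED_DOCS[d] for d in doc_ids)
```

In a real system this gate would sit in front of whatever retrieval layer supplies the custom GPT, so the model can only ever see material the company has deliberately chosen to expose.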
Penakalapati advised CISOs to keep in mind that, whichever path a company chooses, these models perform according to how they have been trained. It is essential to be deliberate about the data you feed the technology.
“I always tell people to make sure they have technology that provides information based on unbiased and accurate data,” Penakalapati said. “Because this technology was not created by accident.”