James Coker of Infosecurity Magazine has published an examination of the recent RSA 2023 conference, which included a discussion of AI as a tool for predicting possible cybersecurity weaknesses. At the conference, Diana Kelley, the CSO at Cyberize, argued that AI's capabilities have been broadly overhyped in the media: when she asked ChatGPT which cybersecurity books she had authored, it attributed five books to her that she had not contributed to.
Because ChatGPT has been trained on text from across the internet, it will make many mistakes, "as there is a lot wrong on the internet." There are also significant variations in how different generative AI models operate and in their uses. It is therefore important that organizations understand these issues and ask appropriate questions of cybersecurity vendors offering AI-based solutions, including how the AI is trained, e.g., "what data sets are used" and "why are they supervised or unsupervised."
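The supervised-versus-unsupervised distinction that organizations are urged to ask vendors about can be sketched in a few lines. The toy data, labels, and functions below are purely illustrative assumptions for this sketch, not anything presented at the conference: a supervised learner uses known labels (here, per-class means over "anomaly scores"), while an unsupervised learner (here, a tiny 2-means clustering) must discover structure in the same data without ever seeing labels.

```python
# Illustrative toy data: 1-D "anomaly scores" for six events.
data = [0.1, 0.2, 0.15, 0.9, 0.95, 0.85]
labels = ["benign", "benign", "benign",          # known ground truth:
          "malicious", "malicious", "malicious"] # only supervised training sees this

def supervised_centroids(xs, ys):
    """Supervised: compute one centroid per labeled class."""
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def two_means(xs, iters=10):
    """Unsupervised: 2-means clustering that never sees the labels."""
    c1, c2 = min(xs), max(xs)  # simple initialization at the extremes
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

print(supervised_centroids(data, labels))  # per-class means, from labels
print(two_means(data))                     # cluster centers, discovered from data alone
```

On this well-separated toy data the two approaches converge on the same centers, but the vendor question matters because real security data is rarely this clean: labels may be scarce or wrong, and unlabeled clustering may find groupings that do not correspond to benign/malicious at all.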