Security, privacy, and generative AI

Since the proliferation of large language models (LLMs), like OpenAI’s GPT-4, Meta’s Llama 2, and Google’s PaLM 2, we have seen an explosion of generative AI applications in almost every industry, cybersecurity included. However, for the majority of LLM applications, privacy and data residency are major concerns that limit the applicability of these technologies. In the worst cases, employees are unknowingly sending personally identifiable information (PII) to services like ChatGPT, outside of their organization’s controls, without understanding the associated security risks.
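One common mitigation for the PII leakage described above is to scrub prompts before they leave the organization. The sketch below is a minimal, hypothetical example (not from the article) that masks email addresses and US-style phone numbers with regular expressions; production systems would typically rely on a dedicated PII-detection service rather than hand-written patterns.

```python
import re

# Hypothetical, minimal PII redaction pass. Masks two common PII types
# in a prompt before it is sent to an external LLM service.
# Real deployments would use a purpose-built PII detector with far
# broader coverage (names, addresses, national IDs, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched PII span with a placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
```

Running the sketch prints `Contact Jane at [EMAIL] or [PHONE].`, showing how the raw identifiers never reach the external service.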


This article has been indexed from InfoWorld Security
