How to weaponize LLMs to auto-hijack websites

We speak to the professor who, with colleagues, tooled up OpenAI’s GPT-4 and other neural nets

AI models, already the subject of ongoing safety concerns over harmful and biased output, pose a risk that goes beyond bad content. When wedded with tools that let them interact with other systems automatically, they can act on their own as malicious agents.…
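The pattern the article describes is an agent loop: a model observes, picks a tool, and acts on the result, repeating without human input. The sketch below is purely illustrative and uses a stubbed-out model and a fake fetch tool (no real LLM API or network calls); names like `run_agent` and the URL are invented for the example.

```python
# Illustrative sketch of an LLM-plus-tools agent loop, as the article
# describes. The "model" is a stub standing in for a real LLM call.

def model(observation):
    # Stub policy: a real agent would send the observation to an LLM
    # and parse a tool invocation out of its reply.
    if "login form" in observation:
        return ("report", "found a login form")
    return ("fetch", "http://example.test/")

def fetch(url):
    # Stand-in for an HTTP tool; returns a canned page for the demo.
    return f"page at {url} containing a login form"

def run_agent(max_steps=5):
    # Observe -> decide -> act loop, with a step cap for safety.
    observation = "start"
    for _ in range(max_steps):
        tool, arg = model(observation)
        if tool == "report":
            return arg
        observation = fetch(arg)
    return "gave up"

print(run_agent())
```

The point of the sketch is the loop itself: once tool calls are wired to the model's output, the system acts autonomously until it decides to stop or hits its step budget.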

This article has been indexed from The Register – Security
