As the world comes to terms with AI assistants, we’re learning all the interesting things they can do, for better and for worse. One example that lands firmly in the latter camp is a theoretical attack that forces ChatGPT to DDoS a chosen website, though it’s not a “real” threat just yet.
ChatGPT Can Open an Unlimited Number of Hyperlink Connections in a Single Request
As reported by Silicon Angle, a researcher named Benjamin Flesch discovered that ChatGPT doesn’t limit how many links it accesses when generating a response. Not only that, but the service doesn’t check whether a URL duplicates one it has already visited. The end result is a theoretical attack where a bad actor gets ChatGPT to connect to the same website thousands of times per query:
The vulnerability can be exploited to overwhelm any website a malicious user wants to target. By including thousands of hyperlinks in a single request, an attacker can cause the OpenAI servers to generate a massive volume of HTTP requests to the victim’s website. The simultaneous connections can strain or even disable the targeted site’s infrastructure, effectively enacting a DDoS attack.
Benjamin Flesch believes the flaw came about due to “poor programming practices,” and that if OpenAI added some restrictions on how ChatGPT crawls the internet, it wouldn’t have the potential to perform an “accidental” DDoS attack on servers.
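The kind of restriction Flesch is describing is straightforward to sketch. The snippet below is a minimal illustration of the two missing safeguards (deduplicating URLs and capping fetches per request); the function name, the cap value, and the normalization logic are all hypothetical examples, not OpenAI’s actual code.

```python
# Illustrative only: the safeguards ChatGPT's crawler reportedly lacked.
# MAX_FETCHES_PER_REQUEST and filter_urls are hypothetical names.

MAX_FETCHES_PER_REQUEST = 20  # assumed per-request cap for the sketch


def filter_urls(urls, max_fetches=MAX_FETCHES_PER_REQUEST):
    """Return a deduplicated list of URLs, capped at max_fetches."""
    seen = set()
    allowed = []
    for url in urls:
        # Crude normalization so trivial variants count as duplicates
        normalized = url.strip().rstrip("/").lower()
        if normalized in seen:
            continue  # skip sites already checked in this request
        seen.add(normalized)
        allowed.append(url)
        if len(allowed) >= max_fetches:
            break  # refuse to fan out beyond the cap
    return allowed


# A request stuffed with the same victim URL thousands of times
# collapses to a single outbound fetch:
attack_payload = ["https://victim.example/"] * 5000
print(len(filter_urls(attack_payload)))  # → 1
```

With either check in place, the attack described above degrades from thousands of simultaneous connections per query to at most a handful, which is presumably why Flesch characterizes the omission as a programming-practice issue rather than a deep design flaw.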
Elad Schulman, founder and chief executive of generative AI security company Lasso Security Inc., agreed with Flesch’s conclusion while adding another potential exploit for these attacks. Schulman believes that if a hacker managed to compromise someone’s OpenAI account, they could “easily spend a monthly budget of a large language model-based chatbot in just a day,” which would do real financial damage if no guardrails protect against such abuse.
Hopefully, as AI evolves, companies will add restrictions to prevent bad actors from abusing their services. For instance, there are already plenty of ways hackers use generative AI in their attacks, and there has been a nasty rise in AI video scam calls as the technology improves in quality.