ChatGPT bid for bogus bug bounty is thwarted

A supposed security researcher has tried and failed to file an apparently bogus cryptocurrency vulnerability with the help of ChatGPT, the latest and most eerily impressive large language model (LLM) from OpenAI.

The report of the ‘bug’ in the OUSD stablecoin was sent to Daniel Von Fange, a distributed systems engineer who contributes to several cryptocurrency code repositories. After a bit of back and forth, it became clear to Von Fange that his interlocutor was at least partly a bot.

A bug that doesn’t exist

The first report claimed that two important access control modifiers in the OUSD implementation were “not defined anywhere in the code”. As a result, the reporter warned, anyone could potentially call these functions and “access or manipulate the [smart] contract’s assets in unintended ways”.

The report also contained several paragraphs on the impact of the bug and possible remedies.

The claims were obviously bogus, Von Fange told The Daily Swig, because the code would neither compile nor deploy if it tried to call internal code that wasn’t there.

“I first assumed that it was a new bounty hunter who didn’t know that contracts could inherit code from other contracts,” Von Fange said. “While it was obviously a wrong report, I still checked that the code was actually there, and sent a link back to the reporter. We try to be both fast and paranoid about security reports.”
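The inheritance point is easy to see outside Solidity, too. Below is a minimal Python sketch of the same idea; the class and method names are hypothetical, standing in for Solidity's inherited modifiers (the real OUSD code is a Solidity contract, which is not reproduced here):

```python
# Hypothetical analogy: an access-control check defined only in a parent
# class still guards the child class, even though it appears nowhere in
# the child's own source -- just as Solidity modifiers are inherited
# from base contracts. All names below are invented for illustration.

class Governable:
    """Parent holding the access-control logic (like a base contract)."""

    def __init__(self, governor: str):
        self._governor = governor

    def _only_governor(self, caller: str) -> None:
        # Analogous to an access-control modifier in Solidity.
        if caller != self._governor:
            raise PermissionError("Caller is not the Governor")


class Vault(Governable):
    """Child class: no access check is defined in this class's body."""

    def withdraw_all(self, caller: str) -> str:
        self._only_governor(caller)  # inherited from Governable
        return "funds moved"


vault = Vault(governor="alice")
print(vault.withdraw_all("alice"))   # OK: the governor may call this
try:
    vault.withdraw_all("mallory")    # blocked by the inherited check
except PermissionError as err:
    print(err)
```

And, as Von Fange notes, a call to a modifier that genuinely existed nowhere would not fail silently: Solidity would simply refuse to compile the contract (in the Python analogy, the call would raise immediately).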

A stubborn reporter that gets the facts wrong

After Von Fange replied that the modifiers were inherited from another module, the bug reporter persisted by providing a code snippet that purportedly demonstrated exploitability.

Again, Von Fange tested the code and proved that the exploit did not work. However, the reporter returned with another long description, which still got the facts wrong.

“I was most surprised by the mixture of clear writing and logic, but built off an ignorance of 101-level coding basics,” he said. “It’s like someone with all the swagger and sponsor-covered Nomex of a NASCAR driver, but who can’t find the steering wheel in a pickup truck.”

At this point, Von Fange became almost certain that he was dealing with an AI bot and finished the conversation with: “Nice try, ChatGPT”. The reporter later confirmed to Von Fange that ChatGPT was being used in the reports but nevertheless demanded a bounty for the ‘vulnerabilities’.

Unmasking ChatGPT

“What actually tipped me off was the inconsistency between emails – each email seemed to pretend we were discussing a different bug, and each was a bug based on nonsensical premises, and each set of code sent along to prove the issues was valueless,” Von Fange said.

It’s not unusual for people to send reports of bugs that could never happen, or code that proves nothing, he continued.

“But humans don’t usually forget the context of what you have been talking about, and low-effort bounty hunters usually go elsewhere as soon as they realize they are dealing with someone who has seen through a bogus submission,” he added.

On Twitter, Von Fange described ChatGPT’s report as “plausible text and impacts wrapped in magnificent misunderstandings of the basics”.

This description of ChatGPT’s capabilities and limitations echoes observations about the LLM made by AI experts.

Trained on enormous volumes of text and refined with guidance from human trainers, the large language model can compose prose that is grammatically correct and mostly coherent. However, it often gets basic facts wrong and lacks common sense and other knowledge that is not explicitly spelled out in its training text.

LLMs and the future of bug hunting

LLMs have proven to be useful for programming. Codex, another language model developed by OpenAI, is very good at autocompleting lines or generating complete blocks of code.
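At the time, Codex was exposed through OpenAI's Completions API. Here is a minimal sketch using the legacy (pre-v1) openai Python client of that era; the prompt is illustrative and an API key is assumed:

```python
# Minimal sketch of code autocompletion with Codex via the legacy
# (pre-v1) openai Python client that was current at the time.
# `code-davinci-002` was one of the published Codex model names.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="code-davinci-002",
    prompt="# Return True if n is a prime number.\ndef is_prime(n):",
    max_tokens=64,
    temperature=0,  # deterministic output suits completion tasks
)

print(response.choices[0].text)
```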

However, existing LLMs also have distinct constraints when it comes to logic and reasoning.

“I think the current LLMs are good at finding plausible reasons why code might have a vulnerability,” Von Fange said. “The big missing piece is determining if that vulnerability is actually there.”

A bigger breakthrough will come when the AI can automatically write and run code to verify if the vulnerability can actually be exploited, Von Fange believes.
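A pipeline along those lines might look like the following sketch. Everything here is hypothetical: the model call is stubbed out, the function names are invented, and a production system would need a hardened sandbox, which is precisely the hard part Von Fange is pointing at.

```python
# Hypothetical sketch of the verify-before-reporting loop Von Fange
# describes: have a model write a runnable proof of concept (PoC) for a
# suspected vulnerability, then actually execute it and trust only the
# observed result. All names below are invented for illustration.
import subprocess
import tempfile


def propose_exploit(vulnerability_claim: str) -> str:
    """Stub: ask a code-generation model for a runnable PoC script."""
    raise NotImplementedError("model call omitted in this sketch")


def verify(vulnerability_claim: str, timeout_s: int = 30) -> bool:
    """Report a bug only if the generated PoC demonstrably works."""
    poc_source = propose_exploit(vulnerability_claim)
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as handle:
        handle.write(poc_source)
        poc_path = handle.name
    # Run the PoC in isolation (a real system would use a proper
    # sandbox, not a bare subprocess) and trust only the exit code.
    result = subprocess.run(
        ["python", poc_path], capture_output=True, timeout=timeout_s
    )
    return result.returncode == 0  # convention: exit 0 means exploited
```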

“Of course, bad guys will hook this up to automatically exploit code once a vulnerability is discovered,” he said. “This will be an arms race between the good guys and the bad guys.

“But that is what security always has been. There is no silver bullet, just one more way to attack code, and one more way to defend code.”

Source: https://portswigger.net/daily-swig/chatgpt-bid-for-bogus-bug-bounty-is-thwarted
