OpenAI has launched its more advanced AI model, ChatGPT-4, and market experts are testing its capabilities as it dominates discussions on social media. The crypto industry is also testing the new ChatGPT-4's ability to audit smart contract code.
In a March 15 tweet, Yu Xian, founder of blockchain security firm SlowMist, revealed that the team had tested the new ChatGPT-4 on simple "Tugou" smart contract code. While the artificial intelligence (AI) model gave correct security recommendations for that code, ChatGPT-4 could not handle complex smart contracts.
“We simply tested it. I believe that GPT-4 can give correct security recommendations with a high probability. However, GPT-4 can’t handle complex smart contract code, especially code with deliberate human deception or the kind of loopholes that depend on other scenarios (or larger contexts) — but it can be used as an audit aid (if used well).”
Blockchain security firms found that ChatGPT-4 is not effective at detecting errors and loopholes in complex smart contract code. Moreover, the AI model can be deceived in some scenarios that require changes to smart contracts on a blockchain.
However, the SlowMist founder agrees that GPT-4 can be used as an audit tool to check for flaws. He claims that security audit companies can not only make use of GPT models in the future but also audit whether AI such as ChatGPT-4 is misusing information against humans.
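To make the "audit aid" idea concrete, here is a minimal, illustrative sketch of how a reviewer might hand a contract to an LLM: a Solidity snippet with a classic reentrancy bug (the kind of simple flaw GPT-4 reportedly flags well) and a helper that assembles a review prompt. The contract, the prompt wording, and the helper name are all hypothetical examples, not SlowMist's or Coinbase's actual methodology.

```python
# Illustrative sketch only: an LLM as an audit *aid*, not a replacement
# for a human auditor. The contract below is a hypothetical example
# containing a classic reentrancy bug.
VULNERABLE_CONTRACT = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        // BUG: the external call happens before the balance is zeroed,
        // so a malicious fallback function can re-enter withdraw().
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;
    }
}
"""

def build_audit_prompt(source: str) -> str:
    """Assemble a security-review prompt for a chat model.

    The wording is a guess at a reasonable audit prompt; in practice the
    prompt (and any follow-up questions) would be tuned by the auditor.
    """
    return (
        "You are a smart-contract security reviewer. List any "
        "vulnerabilities in the following Solidity code and name the "
        "function where each one occurs:\n\n" + source
    )

prompt = build_audit_prompt(VULNERABLE_CONTRACT)
print(prompt.splitlines()[0])
```

The prompt string would then be sent to a chat-completion API; per the reporting above, the model's answer is a starting point for a human auditor, not a verdict, since complex or deliberately deceptive code can slip past it.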
Coinbase Tested ChatGPT-4 on Ethereum Smart Contract
Coinbase director Conor Grogan on Tuesday said the team tested OpenAI’s ChatGPT-4 to identify security vulnerabilities in a live Ethereum smart contract. The AI model found security flaws and ways the contract could be exploited.
CoinGape earlier reported various capabilities of the new GPT-4. Moreover, Microsoft disclosed that its AI chatbot Bing Chat, co-created with OpenAI, was already running on GPT-4.
The post ChatGPT-4 Can’t Handle Complex Smart Contracts, Says Blockchain Security Firm appeared first on CoinGape.