Almost one in two fleshbags who have dabbled with generative AI believe its responses are always bang on the money, and some are using it at work despite knowing their employer frowns upon it.
OpenAI’s ChatGPT launched as a web interface last winter and reached 100 million monthly users earlier this year, making it the fastest-growing application in history. That is, until Meta released Twitter rival Threads last week, which reportedly hit 100 million users within days.
Microsoft has already integrated ChatGPT into search engine Bing and other products; Google made its equivalent chatbot, Bard, available to more users last week. The tech is being hyped to extremes, and many outside technology circles are making huge logical leaps about its power, including its supposed ability to wipe out humanity.
Consulting giant Deloitte surveyed 4,150 UK adults aged between 16 and 75 for the 2023 local edition of its Digital Consumer Trends report, building a picture of how generative LLMs are perceived and the extent to which they are used for work.
Some 52 percent had heard of the technology, and 26 percent said they had used it, with 43 percent of those “mistakenly” assuming “that it always produces factually accurate answers.”
“Within just a few months of the launch of the most popular Generative AI tools, one in four people in the UK have already tried out the technology,” said Deloitte’s Paul Lee, partner and head of technology, media and telecommunications research.
“As a comparison, it took five years for voice-assisted speakers to achieve the same adoption levels. It is incredibly rare for any emerging technology to achieve these levels of adoption and frequency of usage so rapidly.”
Of those who have used generative LLMs, 30 percent tried it once or twice, 28 percent use it weekly, 9 percent use it daily, and 8 percent use it for work – clearly they don’t work at Apple, Samsung, Amazon, Accenture, or other companies that have banned it.
Google has itself told staff not to reveal confidential information to Bard, something GCHQ warned about months ago. In June, threat intelligence outfit Group-IB found ChatGPT credentials in some 100,000 stealer logs being traded on the dark web. By default, ChatGPT stores user query history and AI responses, making compromised accounts potentially rich pickings for criminals.
“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” said Group-IB head of threat intelligence Dmitry Shestakov at the time.
OpenAI said it was investigating the claims, but insisted the exposed credentials were the result of “commodity malware on people’s devices and not an OpenAI breach.”
The tech is, added Deloitte’s Lee, “still relatively nascent, with user interfaces, regulatory environment, legal status and accuracy still a work in progress. Over the coming months, we are likely to see more investment and development that will address many of these challenges, which could drive further adoption of Generative AI tools.”
Deloitte found staff are aware of the potential for using generative LLMs at work, yet just 23 percent said they’d been given the green light to do so. As such, employers and their resident techies need to set guardrails and guidelines to manage its use.
“People need to understand the risk and inaccuracies associated with content generated purely from AI, and where possible be informed when content, such as text, images or audio, is AI-generated,” said Costi Perricos, partner and global AI and data lead at Deloitte. ®