AI search tools confidently spit out incorrect answers at a high rate, a new study has found.
The Columbia Journalism Review (CJR) conducted a study in which researchers fed eight AI search tools an excerpt of a news article and asked the chatbots to identify that article's headline, original publisher, publication date, and URL. Collectively, the study found, the chatbots "provided incorrect answers to more than 60 percent of queries."
The mistakes were varied. Sometimes the search tools reportedly speculated or offered incorrect answers to questions they couldn't answer. Sometimes they fabricated links or sources. Sometimes they cited plagiarized versions of the real article.
CJR wrote: "Most of the tools we tested presented inaccurate answers with alarming confidence, rarely using qualifying phrases such as 'it appears,' 'it's possible,' 'might,' etc., or acknowledging knowledge gaps with statements like 'I couldn't locate the exact article.'"
The full study is worth reading, but it seems fair to be skeptical of AI search tools. The problem is that people aren't. CJR noted that 25 percent of Americans said they use AI to search instead of traditional search engines.
Google, the search giant, is aggressively pushing AI on consumers. This month, it announced that it would expand AI Overviews and begin testing AI-only search results.
CJR's study is just another data point showing AI's inaccuracy. These tools have demonstrated, repeatedly, that they will answer confidently even when they're wrong. And tech giants are forcing AI into just about every product. So be careful what you believe out there.