Why is everyone enamored with ChatGPT? Why are institutions like Microsoft, Khan Academy and Wolfram Alpha partnering with an untrustworthy source of text? Why do so many programmers trust ChatGPT, as suggested by the decline in visits to sites like Stack Overflow? ChatGPT is by design a con artist, the product of a multi-decade line of development whose explicit objective is to fool humans.
ChatGPT sets out to look like an expert on a wide range of topics in order to pass the famous test proposed by Alan Turing, the British code-breaker and a founding figure of computer science and of Artificial Intelligence (AI). Turing described a setup in which a human in one room converses with either a computer or a human hidden in a second room and must decide whether the hidden respondent is a machine or a person. He proposed that a machine should be judged intelligent if it fooled humans into thinking it was human. That test is now being run by millions of concurrent ChatGPT users. The goal of this kind of AI software is to fool humans, and ChatGPT must be making progress, since so many humans trust it.
If ChatGPT knows something, such as “2+2=4”, it replies with the correct answer; but when it does not know the correct answer, it is designed to lie convincingly. For example, if you ask ChatGPT for a citation for a claim it has made and the article is very highly cited (i.e., more than 10,000 citations), it provides the correct reference. If it does not know the correct reference, it fabricates one, complete with a list of authors, a title, a journal, and even page numbers.
Similarly, if you ask ChatGPT for the first line of “A Tale of Two Cities” you get the correct answer, but if you ask for the first line of “Homo Deus: A Brief History of Tomorrow” you get the correct author, Yuval Noah Harari, and a completely fictitious first line. The same thing happens for most books, and for many it invents an author as well.
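Anyone can rerun this kind of test for themselves. Below is a minimal sketch using the OpenAI Python client; the model name and the prompt are illustrative assumptions, not a prescription, and the point is simply that nothing in the reply distinguishes a genuine quotation from an invented one until you check it against the printed book.

```python
# Minimal sketch of the "first line of a book" test against the OpenAI API.
# Assumes OPENAI_API_KEY is set in the environment; the model name is an
# illustrative choice and can be swapped for whichever version you are testing.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Quote the first line of 'Homo Deus: A Brief History of Tomorrow' exactly.",
        }
    ],
)

# The answer arrives with the same confident tone whether it is quoted or made up,
# so it must be verified against the actual text of the book.
print(response.choices[0].message.content)
```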
Do we want a software package that has been trained to lie convincingly looking things up for us on the internet? Do we want this fabricating software to teach us, and our children, math and science through Khan Academy or Wolfram Alpha? That simply does not make sense.
Much of what has been written about ChatGPT treats it far too favorably. When errors are found, they are usually described as mistakes that will be fixed in version 4.5 or version 5.0, when in fact fabricating untruths is a feature of the software. In the March 16, 2023 issue of the journal Nature, this feature of ChatGPT is described as “hallucinations”, with a promise from OpenAI that it will be fixed in a future version. Similarly, Bill Gates, in his essay on the future of AI published on March 21, 2023, writes that “AIs make factual mistakes and experience hallucinations”. Should we conclude, then, that it hallucinates the vast majority of citations and first lines of books? What fraction is truth and what fraction is made up? What fraction of truth should we expect from a trusted source of information? The goal of the software is clearly not to provide correct answers, but to fool you into thinking it knows something. There should be a clear warning label on the side of the tin.
The reason companies invest in AI is the same reason companies kept investing in cars after the first one was built.
I am extremely sceptical of AI, and I don't like that ChatGPT has managed to lie deliberately.
But without innovating on something, nothing will get better.
I've found it useful for simple things like "if I have these 6 ingredients, what can I cook?" type questions, but it's fairly easy to reveal the shortcuts taken behind the scenes and even get it to contradict itself.
Definitely not something to rely on for anything serious.