Since its launch in November last year, ChatGPT has become an extraordinary hit. Essentially a souped-up chatbot, the AI program can churn out answers to the biggest and smallest questions in life, and draw up college essays, fictional stories, haikus, and even job application letters. Ask ChatGPT a question, as millions have in recent weeks, and it will do its best to respond – unless it knows it cannot. It does this by drawing on what it has gleaned from a staggering amount of text on the internet, with careful guidance from human experts. The answers are confident and fluently written, even if they are sometimes spectacularly wrong.

The program is the latest to emerge from OpenAI, a research laboratory in California, and is based on an earlier AI from the outfit, called GPT-3. Known in the field as a large language model, or LLM, the AI is fed hundreds of billions of words in the form of books, conversations and web articles, from which it builds a model, based on statistical probability, of the words and sentences that tend to follow whatever text came before. It is a bit like predictive text on a mobile phone, but scaled up massively, allowing it to produce entire responses instead of single words.

Unlike older chatbots, ChatGPT has been designed to refuse inappropriate questions and to avoid making things up by churning out responses on issues it has not been trained on. It is also harder to corrupt than earlier chatbots. The result, according to Elon Musk, is “scary good”, as many early users – including college students who see it as a saviour for late assignments – will attest.

It has other, more fundamental limitations, too. ChatGPT knows nothing of the world post-2021, as its data has not been updated since then. And it has no handle on the truth, so even when answers are fluent and plausible, there is no guarantee they are correct.
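The "predictive text, scaled up" idea can be illustrated with a deliberately tiny sketch: count which word tends to follow each word in a corpus, then emit the statistically most likely continuation. This is a toy bigram model for intuition only, not ChatGPT's actual architecture, which uses a neural network trained on hundreds of billions of words; the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; a real LLM trains on books, articles and conversations.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a frequency table: for each word, how often does each next word follow it?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most probable next word seen in the corpus, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A model like GPT-3 works over far longer contexts than a single preceding word and learns soft statistical patterns rather than raw counts, which is what lets it produce whole fluent responses instead of one word at a time.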