ChatGPT: All You Need To Know About The New Program Giving Google Search Real Competition After 20 Years


Are computers truly taking over? If you take a look at the new artificial intelligence-based chatbot called ChatGPT, you will begin to wonder.

Created by San Francisco-based OpenAI, which was co-founded by Elon Musk, ChatGPT has been creating quite a stir across the internet with its writing ability and responses to requests.

Although it has impressed many, not least Mr Musk, who described it as “scary good”, it has also raised concerns, particularly in the education sector.

Could it be about to knock Google off its perch as the go-to place for internet answers?

What is OpenAI?

It is a research company that says its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

It describes AGI as “highly autonomous systems that outperform humans at most economically valuable work”.

Mr Musk, the owner of Twitter, chief executive of electric car maker Tesla and co-founder of neurotechnology company Neuralink, left OpenAI in 2018 after disagreements over its direction.

“We have trained a model called ChatGPT, which interacts in a conversational way,” the company said.

“The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.”

What can ChatGPT be used for?

People have been trying it out on a range of tasks, from writing essays and poetry to explaining scientific concepts and drafting job applications, with the results being posted on social media.

It can even offer possible solutions to errors in computer code.
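A hypothetical illustration of the kind of mistake such a tool can diagnose — here, an off-by-one error in a short Python function, followed by the sort of correction a chatbot might suggest (the snippet and its fix are invented for this example):

```python
# Buggy version: range(1, len(prices)) skips the first item,
# so the first price is never added to the total.
def total_prices(prices):
    total = 0
    for i in range(1, len(prices)):
        total += prices[i]
    return total

# Corrected version, as a chatbot might suggest: sum every item.
def total_prices_fixed(prices):
    return sum(prices)
```

Pasting the buggy function into the chat with a question such as “why is my total wrong?” typically produces both an explanation of the off-by-one error and a rewritten function.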

“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” Claude de Loupy, head of Syllabus, a French company specialising in automatic text generation, told AFP.

“When you start asking very specific questions, ChatGPT’s response can be off the mark”, but its overall performance remains “really impressive”, with a “high linguistic level”, he said.

Some users have asked whether it could be used for journalism.

I asked it to write a generic article on Dubai and it immediately generated about 250 words of text, which ended with: “Overall, Dubai is a fascinating destination that offers something for everyone, from the thrill-seekers to the shopaholics to those seeking a taste of Middle Eastern culture.”

However, it can pick up misinformation and present it as fact, and there are concerns that it lacks nuance and may be used for harmful requests.

“While we have made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour,” OpenAI said.

Has it been banned in New York City schools?

Yes, amid concerns about the safety and accuracy of the content produced.

New York City schools said the technology would be banned across the district, although individual schools or sites will be able to request access for lessons on cutting-edge technology.

“Due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices,” Jenna Lyle, a spokeswoman for the city’s Department of Education, said.

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

Teachers around the world are naturally concerned that pupils will copy and paste from ChatGPT and produce auto-generated work.

Pupils can, however, still access it from their homes or on their phones.

OpenAI officials say they are working on ways to identify text generated by the bot.

“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the company said.

How does ChatGPT work?

When I asked ChatGPT itself how it functions, it wasn’t able to explain.

“I’m sorry, but I am not familiar with ChatGPT”, it responded. “I am a language model trained by OpenAI, and I don’t have the ability to browse the internet or learn about other AI models. I am only able to provide information based on what I have been trained on and what I can generate from that information.”

In practice, it is a large language model trained on a massive sample of text from the internet, which it uses to predict the most relevant response to a query, one word at a time.
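The next-word-prediction idea can be sketched with a toy example — a simple bigram model built from a tiny text sample. This is only an illustration of the principle: ChatGPT itself is a neural network trained on vastly more data, not a word-count table.

```python
from collections import Counter, defaultdict

# A tiny text sample standing in for the model's training data.
sample = "the cat sat on the mat and the cat ate".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(sample, sample[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

Scaled up from a dozen words to hundreds of billions, and from simple counts to a neural network, this prediction-from-examples approach is what lets the system produce fluent answers without ever “looking up” anything on the live internet.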

OpenAI co-founder and chief executive Sam Altman said on Twitter that this was an “early demo of what is possible”.

“Soon you will be able to have helpful assistants that talk to you, answer questions and give advice,” he tweeted.

“Later, you can have something that goes off and does tasks for you. Eventually, you can have something that goes off and discovers new knowledge for you.”

What has Elon Musk said?

He described ChatGPT as “scary good” in a tweet and said, “we are not far from dangerously strong AI”.

He then tweeted on December 4 that he had learnt “that OpenAI had access to the Twitter database for training. I put that on pause for now. Need to understand more about governance structure & revenue plans going forward. OpenAI was started as an open-source & non-profit. Neither is still true”.

How much is OpenAI worth?

The company is in talks to sell shares in a tender offer valuing it at about $29 billion, The Wall Street Journal reported last week.

Venture capital firms Thrive Capital and Founders Fund are in discussions to invest in the deal, which would include the sale of at least $300 million of shares from existing investors such as employees, the WSJ report said.

The transaction would almost double the company’s valuation from a tender offer in 2021, and would make it one of the most valuable US start-ups on paper despite having little revenue, it added.

The company makes money by charging developers to license its technology.

Chatbots making headlines

Google fired senior software engineer Blake Lemoine in July 2022 after he claimed that the company’s conversational chatbot had become sentient.

He claimed that Google’s Language Model for Dialogue Applications (LaMDA), a system for building chatbots, had come to life and was able to perceive or feel things.

Google said Mr Lemoine had breached company policy regarding confidential matters and described his claims as “wholly unfounded”.
