ARTIFICIAL INTELLIGENCE AND CHATGPT 101

Find out, in layman's terms, what this new wave of technology, Artificial Intelligence (AI) and Large Language Models (LLMs) like ChatGPT, is all about. Learn the basics of how it works and what it means for you and me.

8/30/2023 · 3 min read

The Buzz Words Explained: Artificial Intelligence, Machine Learning, Large Language Models and ChatGPT

Artificial Intelligence (AI) is like giving a computer a brain that can learn, reason, and solve problems. It's not simply about following fixed programming instructions, but about making decisions based on the information it's fed.

Machine Learning (ML), a subfield of AI, is like teaching this computer brain to learn from experience. Just as humans improve with practice, ML algorithms improve as they process more data. For example, a machine learning model could learn to identify pictures of cats by analyzing thousands of cat photos.
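
To make that concrete, here is a toy sketch in Python using the scikit-learn library. The two "measurements" and the handful of example animals are invented purely for illustration; the point is that the model is never given a rule for spotting a cat, it works one out from labelled examples.

```python
# A toy illustration of machine learning: the model infers a rule from
# labelled examples rather than being programmed with one. The two
# "features" (ear pointiness, whisker length) and all numbers below are
# made up purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is one animal photo reduced to two made-up measurements.
examples = [
    [0.9, 0.8],  # pointy ears, long whiskers
    [0.8, 0.9],
    [0.2, 0.1],  # floppy ears, short whiskers
    [0.1, 0.2],
]
labels = ["cat", "cat", "not cat", "not cat"]

model = DecisionTreeClassifier()
model.fit(examples, labels)           # "practice" on the labelled examples

print(model.predict([[0.85, 0.75]]))  # -> ['cat'] for a new, unseen animal
```

The more (and more varied) examples you feed it, the better its guesses tend to get, which is exactly the "learning from experience" described above.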

Language models, like GPT (Generative Pre-trained Transformer), are a type of machine learning model. They learn language by examining huge amounts of text. It's as if you had read every book in the world: you'd be pretty good at predicting the next word in a sentence or answering questions about what you've read.

ChatGPT is one such model. It's been trained on a large variety of internet text, learning to generate human-like text based on what it's seen. It doesn't truly understand or have opinions, but it mimics these things by predicting what a human might say next in a conversation.

Large Language Models (LLMs) like GPT-4 are more advanced versions of this idea. They've been trained on even more data and have more 'neurons' (mathematical functions loosely inspired by human brain cells), so they're even better at generating realistic, nuanced text. It's important to note that while they're powerful, they don't have awareness or consciousness. They're impressive tools, but they're still just tools.
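
For the curious, each of those 'neurons' really is just a small piece of arithmetic. The Python sketch below (with made-up weights) shows a single one: multiply each input by a weight, add them up, and squash the result into a fixed range. A large language model chains billions of these together and tunes the weights automatically during training.

```python
# A single artificial "neuron": a weighted sum of its inputs, squashed
# into the range 0-1. The inputs, weights, and bias here are invented for
# illustration; in a real LLM they are learned from data, not hand-picked.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "squash" to a 0-1 value

print(neuron([0.5, 0.2], [1.5, -0.7], 0.1))  # one neuron's output
```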

How does ChatGPT work?

Language models like GPT make their magic happen through statistical prediction. Imagine playing a game where you guess the next word in a sentence. If the sentence is "I like to eat...", you might guess "pizza" or "apples". The AI does something similar, but based on analyzing billions of sentences: it assigns a probability to each possible next word and picks a likely one. And it doesn't just look at the last word; it considers the context from a whole sequence of previous words, sometimes hundreds of words long. This is a bit like you being more likely to guess "I like to eat... cake" if the previous sentence was about a birthday party.

In GPT models, the vocabulary can be surprisingly large, often containing 50,000 unique words or more, so the model is predicting the next word out of tens of thousands of possibilities each time. The depth of these predictions is what allows it to generate complex and diverse responses.
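
Here is a deliberately tiny, hand-rolled version of that guessing game in Python. It simply counts which word follows each two-word context in a made-up handful of sentences, whereas a real GPT model uses a neural network trained on billions of sentences and tens of thousands of vocabulary items. Still, the core idea is similar: assign a probability to each possible next word and pick a likely one.

```python
# A miniature "guess the next word" model built from raw counts. The tiny
# corpus below is invented for illustration only.
from collections import Counter, defaultdict

corpus = [
    "i like to eat pizza",
    "i like to eat apples",
    "i like to eat pizza",
    "happy birthday i like to eat cake",
]

# Count which word follows each pair of words (a "context" two words long).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        next_word_counts[context][words[i + 2]] += 1

def predict_next(context):
    """Turn raw counts for this context into probabilities, highest first."""
    counts = next_word_counts[context]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

print(predict_next(("to", "eat")))
# -> [('pizza', 0.5), ('apples', 0.25), ('cake', 0.25)]
```

Swap the counting step for a neural network with billions of tuned weights, and a context of hundreds of words rather than two, and you have the rough shape of how ChatGPT picks its next word.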

How Does a Large Language Model Compare to a Human Being?

GPT models, while impressive, are fundamentally different from humans. When we humans converse, our responses are informed by a rich tapestry of experiences, emotions, and beliefs. However, GPT models don't experience or feel anything; they just generate text based on patterns they've learned from data. They're like a clever parrot that can mimic human conversation convincingly, but without understanding the meaning behind the words. A human might draw upon personal experiences or emotions to inform a response, while GPT only generates what it estimates to be a likely response based on the context it has been given. Additionally, GPT's knowledge is static, meaning it can't learn or remember new information after its training period, unlike humans who continually learn and adapt throughout their lives.

What Does Our Future Hold?

As we continue to embrace AI in our everyday lives, it's essential to understand its capabilities and limitations. While GPT models and other AI systems can mimic human-like conversation and learning, they aren't sentient or capable of genuine understanding. Their 'intelligence' is a result of intricate statistical predictions, built on massive amounts of data. As tools, they hold immense potential to revolutionize industries and improve lives, but they're not without challenges. Issues around job displacement, privacy, and ethical dilemmas necessitate thoughtful design, use, and regulation. The promise of AI lies in our hands, in how we shape and guide its future use.