- Natural Language Processing: techniques used by large language models to understand and generate human language, including text classification and sentiment analysis. These methods often use a combination of machine learning algorithms, statistical models and linguistic rules.
- Neural Network: a mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: the first layer receives the input data, and the last layer outputs the results.
- Parameters: numerical values that define a large language model's structure and behavior, like clues that help it guess what words come next. Systems like GPT-4 are thought to have hundreds of billions of parameters.
- Reinforcement Learning: a technique that teaches an AI model to find the best result by trial and error, receiving rewards or punishments from an algorithm based on its results. This system can be enhanced by humans giving feedback on its performance, in the form of ratings, corrections and suggestions, an approach known as reinforcement learning from human feedback.
- Transformer Model: a neural network architecture useful for understanding language that does not have to analyze words one at a time but can look at an entire sentence at once. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important in understanding the meaning of a sentence.
(from Artificial Intelligence Glossary: Neural Networks and Other Terms Explained, by Adam Pasick)
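The sentiment analysis mentioned under Natural Language Processing can be sketched in miniature with linguistic rules alone. This is a toy sketch, not how large language models do it: the word lists below are made-up assumptions, and real systems learn such associations statistically from data rather than from hand-written lists.

```python
# Toy rule-based sentiment analyzer. The POSITIVE and NEGATIVE word
# lists are illustrative assumptions, not from the glossary; real NLP
# systems learn these associations from data.
POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "terrible", "awful", "sad", "hate"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    # score = count of positive words minus count of negative words
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("what a great and happy day"))  # positive
print(sentiment("this was a terrible movie"))   # negative
```

Statistical and machine-learning approaches replace the fixed word lists with weights learned from labeled examples, but the input-to-label shape of the task is the same.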
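The layered structure described under Neural Network, and the parameters counted in the Parameters entry, can be shown together in a small sketch. This is a minimal illustration in plain Python with made-up layer sizes and random weights, not a trained model: the first layer receives the input, the last layer outputs the results, and the model's parameters are all of its weights and biases.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def make_layer(n_in, n_out):
    # each artificial neuron holds one weight per input plus a bias term
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(layer, inputs):
    # weighted sum of inputs plus bias, squashed by a sigmoid activation
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(neuron[:-1], inputs))
                                + neuron[-1])))
            for neuron in layer]

# a tiny network: 3 inputs -> 4 hidden neurons -> 2 outputs (sizes are arbitrary)
layers = [make_layer(3, 4), make_layer(4, 2)]

x = [0.5, -0.2, 0.1]      # the first layer receives the input data
for layer in layers:
    x = forward(layer, x)
print(x)                   # the last layer outputs the results

# the network's "parameters" are all its weights and biases:
n_params = sum(len(neuron) for layer in layers for neuron in layer)
print(n_params)            # 4*(3+1) + 2*(4+1) = 26
```

A system like GPT-4 differs from this sketch mainly in scale: the same count-every-weight-and-bias arithmetic applied to its layers is what yields a figure in the hundreds of billions.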
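The trial-and-error loop described under Reinforcement Learning can be sketched with a classic two-choice "bandit" problem. Everything here is a made-up toy (the two actions, their hidden reward rates, the 10% exploration rate): the agent repeatedly tries actions, receives a reward of 1 or 0 from the environment, and updates its estimates until it settles on the better choice.

```python
import random

random.seed(42)  # fixed seed for reproducibility

# two actions with hidden average rewards; the agent never sees these directly
true_reward = {"A": 0.3, "B": 0.8}

def pull(action):
    # the environment returns a reward of 1 or 0, more often 1 for "B"
    return 1 if random.random() < true_reward[action] else 0

estimates = {"A": 0.0, "B": 0.0}  # the agent's learned value of each action
counts = {"A": 0, "B": 0}

for step in range(1000):
    # explore a random action 10% of the time, otherwise exploit the
    # action with the best estimate so far (trial and error)
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = pull(action)
    counts[action] += 1
    # running average of the rewards observed for this action
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # B's estimate should end up near 0.8, A's near 0.3
```

Human feedback enters this picture when the reward signal comes from people's ratings, corrections and suggestions instead of a fixed rule like `pull` above.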
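The self-attention idea described under Transformer Model can be sketched in a few lines. This is a simplified sketch with tiny made-up "word vectors": each word's score against every other word comes from a dot product, the scores become weights via softmax, and each output is a weighted average of the whole sentence. Real transformers additionally apply learned query, key and value projections, which are omitted here.

```python
import math

def softmax(xs):
    # turn raw scores into positive weights that sum to 1
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    # every word looks at the entire sentence at once: attention weights
    # come from dot products between word vectors, and each word's output
    # is a weighted average of all the vectors in the sentence
    d = len(vectors[0])
    outputs = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    return outputs

# made-up 2-dimensional vectors for a three-"word" sentence; the first
# two words are similar to each other, the third is different
sentence = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(sentence)
print(out)  # the two similar words attend mostly to each other
```

Because the scores are computed for all word pairs in one pass, the model never has to walk through the sentence one word at a time, which is the property the glossary entry highlights.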