An application programming interface (API) is a way of programmatically accessing (usually external) models, datasets, or other software tools.
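To make this concrete, here is a minimal sketch of what calling a hosted model through an HTTP API typically looks like. The endpoint URL, authentication header, and JSON fields below are hypothetical placeholders, not any specific provider's schema.

```python
import requests

# Hypothetical example of calling a hosted model over an HTTP API.
# The URL, header, and JSON fields are illustrative placeholders.
response = requests.post(
    "https://api.example.com/v1/generate",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"prompt": "Summarize this paragraph...", "max_tokens": 100},
    timeout=30,
)
response.raise_for_status()
print(response.json())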
Artificial intelligence (AI) is the ability of software to perform tasks that traditionally require human intelligence.
Deep learning is a subset of machine learning that uses deep neural networks: layers of connected neurons whose connections have parameters, or weights, that can be trained. It is particularly effective for learning from unstructured data such as images, text, and audio.
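As a small sketch of what "layers of connected neurons with trainable weights" means in practice, here is a tiny feed-forward network in PyTorch (assuming PyTorch is installed; the layer sizes are arbitrary):

```python
import torch.nn as nn

# A tiny feed-forward network: each Linear layer holds trainable
# weights, i.e., the "connections" between neurons.
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),   # hidden layer -> output layer
)

# Every parameter tensor listed below gets adjusted during training.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))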
Fine-tuning is the process of adapting a pre-trained foundation model to perform better on a particular task. It entails a relatively short period of training on a labeled dataset that is much smaller than the dataset the model was originally trained on. This additional training allows the model to learn the nuances, terminology, and specific patterns found in the smaller dataset.
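One common variant of this idea is to freeze the pre-trained weights and train only a small task-specific head on the labeled data. The sketch below illustrates that variant in PyTorch; the pre-trained body and the dataset are random stand-ins, purely for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for a real pre-trained model body (illustrative only).
pretrained_body = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
for param in pretrained_body.parameters():
    param.requires_grad = False          # keep pre-trained weights fixed

head = nn.Linear(64, 2)                  # new layer for the target task
model = nn.Sequential(pretrained_body, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Small labeled dataset (random here, purely for illustration).
x, y = torch.randn(100, 32), torch.randint(0, 2, (100,))
for _ in range(10):                      # short additional training run
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()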
Foundation models (FMs) are deep learning models trained on vast amounts of unstructured, unlabeled data that can be used for a wide range of tasks out of the box or adapted to specific tasks through fine-tuning. Examples of these models are GPT-4, PaLM, DALL·E 2, and Stable Diffusion.
Generative AI is AI that is typically built using foundation models and has capabilities that earlier AI did not have, such as the ability to generate content. Foundation models can also be used for non-generative purposes (e.g., classifying user sentiment as negative or positive based on call transcripts), often with a significant improvement over earlier models. For simplicity, when we refer to generative AI in this article, we include all variants of the underlying models.
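As an illustration of such a non-generative use, here is a minimal prompting sketch for sentiment classification. `call_model` is a hypothetical placeholder for whatever model API you actually use, not a real library function.

```python
# Using a foundation model for a non-generative task: sentiment
# classification via prompting. call_model() is a hypothetical stand-in.
def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

transcript = "The agent resolved my issue quickly. Great service!"
prompt = (
    "Classify the sentiment of this call transcript as "
    f"'positive' or 'negative':\n\n{transcript}\n\nSentiment:"
)
# sentiment = call_model(prompt)  # expected output: "positive"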
Graphics processing units (GPUs) are computer chips that were originally designed to render computer graphics (e.g., for video games) and are also useful for deep learning applications. In contrast, traditional machine learning and other analyses are typically run on central processing units (CPUs), usually referred to simply as a computer's processor.
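In practice, deep learning frameworks can run the same computation on either kind of chip. A small sketch, assuming PyTorch and (optionally) an NVIDIA GPU with CUDA:

```python
import torch

# PyTorch picks a GPU if one is available, otherwise falls back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x   # the matrix multiply runs on whichever device holds the tensors
print(f"Ran on: {y.device}")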
Large language models (LLMs) are a class of foundation models that can process huge amounts of unstructured text and learn the relationships between words or parts of words, known as tokens. This allows LLMs to generate natural-language text and perform tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and LaMDA (the model behind Bard) are examples of LLMs.
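To see what tokens look like, the sketch below runs a real tokenizer (the GPT-2 tokenizer from the Hugging Face `transformers` library, which downloads on first use); other models split text differently.

```python
from transformers import AutoTokenizer

# Tokens are the word/sub-word units an LLM actually operates on.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.tokenize("Tokenization splits words into sub-word units.")
# "Tokenization" comes back as sub-word pieces such as 'Token' and 'ization'
# ("Ġ" marks a preceding space in GPT-2's vocabulary).
print(tokens)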
Machine learning (ML) is a subset of AI in which a model gains capabilities after it is trained on, or shown, many example data points. Machine learning algorithms detect patterns and learn to make predictions and recommendations by processing data and experience rather than by following explicit programming instructions. The algorithms also adapt and can become more effective in response to new data and experience.
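A minimal sketch of "learning from examples rather than explicit rules," using scikit-learn with a toy dataset invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: [hours studied, hours slept] -> passed the exam (1) or not (0).
X = [[1, 4], [2, 8], [6, 5], [8, 7], [3, 3], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)   # learn patterns from examples
print(model.predict([[7, 6]]))               # predict for an unseen case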
MLOps (machine learning operations) refers to the engineering practices for building, deploying, and maintaining machine learning models reliably and at scale, spanning the full ML life cycle from data management through development, deployment, and live operations.
Structured data is tabular data (e.g., data organized into tables, databases, or spreadsheets) that can be used to train some machine learning models effectively.
Transformers are a key component of foundation models. They are artificial neural networks that use special mechanisms called attention heads to understand context in sequential data, such as how words are used in a sentence.
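At the core of each attention head is scaled dot-product attention: every position's output is a weighted mix of all positions, with the weights derived from query-key similarity. A minimal NumPy sketch, with random vectors standing in for real token representations:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a sequence of vectors."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V                               # context-weighted mix

seq_len, d = 4, 8                                    # 4 tokens, 8-dim vectors
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8)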
Unstructured data is data that lacks a consistent format or structure (e.g., text, images, and audio files) and typically requires more advanced techniques to extract insights.