Glossary

Issue #1 | Summer 2022

Foundation Models

A research group at Stanford University coined the term foundation models in response to a paradigm shift underway in the field of artificial intelligence. AI models for language processing have been growing larger and more powerful for some time; recent examples include OpenAI's GPT-3 and Google's BERT.

In an often extremely complicated process, AI models traditionally learn pre-defined categories by processing data that has been annotated in advance. This approach, called supervised learning, is used in areas such as recognizing objects within images. In what is known as self-supervised learning, by contrast, foundation models such as GPT-3 and BERT first attempt to identify and learn general patterns in the data. Because the data does not have to be manually annotated for this purpose, the models can draw on significantly larger data sets and recognize more complex relationships.

Foundation models can therefore be used far more flexibly and serve as a basis for a wide variety of applications. These applications, however, also inherit the characteristics and possible distortions (biases) built into the underlying models. As the complexity of such AI systems increases, so do the costs of developing them. As a result, the number of players capable of building these large models is shrinking, and AI development is becoming increasingly concentrated.
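
To make the distinction concrete, the sketch below contrasts how training examples are constructed under the two approaches: supervised learning requires a human-provided label for every example, whereas self-supervised learning (shown here as masked-word prediction in the style of BERT) derives its training targets from the raw data itself. This is a minimal illustration, not code from any actual model; the function name and sample data are purely hypothetical.

```python
# Minimal sketch (illustrative only): how training examples are built
# in supervised vs. self-supervised learning.

# Supervised learning: every example needs a label provided by a human annotator.
supervised_examples = [
    ("photo_001.jpg", "cat"),   # a person labeled this image as "cat"
    ("photo_002.jpg", "dog"),
]

# Self-supervised learning (masked-word prediction): the training target is
# derived from the raw text itself, so no manual annotation is required.
def make_masked_examples(sentence: str, mask_token: str = "[MASK]"):
    """Turn one raw sentence into many (input, target) pairs by hiding
    one word at a time; the hidden word becomes the training target."""
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words.copy()
        masked[i] = mask_token
        examples.append((" ".join(masked), word))
    return examples

if __name__ == "__main__":
    for inp, target in make_masked_examples("foundation models learn general patterns"):
        print(f"input: {inp!r:55} target: {target!r}")
```

Because the "labels" in the second case come for free from the data, such a model can be trained on vastly larger corpora than any manually annotated data set would allow, which is the key property that makes foundation models possible.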