Considering how enormous the field of AI is, this list is far from exhaustive. It is a starting point for understanding basic AI functions, the terms in use today, and some of the effects that result from AI.

AI (Microsoft) – Artificial intelligence is the capability of a computer system to mimic human cognitive functions such as learning and problem-solving. Through AI, a computer system uses math and logic to simulate the reasoning that people use to learn from new information and make decisions.

AI (Google) – Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more. AI is the backbone of innovation in modern computing.

Algorithm – An algorithm is simply a set of steps used to complete a specific task. Algorithms are the building blocks of programming; they allow things like computers, smartphones, and websites to function and make decisions.
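As a concrete illustration of "a set of steps used to complete a specific task," here is a classic algorithm, binary search, which finds a value in a sorted list by repeatedly halving the search range:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # step 1: look at the middle element
        if items[mid] == target:
            return mid              # step 2: found it
        if items[mid] < target:
            lo = mid + 1            # step 3a: discard the lower half
        else:
            hi = mid - 1            # step 3b: discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Each pass through the loop is one "step," and the fixed rules for discarding half the list are what make it an algorithm rather than guesswork.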

Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms.

Adaptive computation refers to the ability of a machine learning system to adjust its behaviour in response to changes in the environment.

AGI – Artificial General Intelligence – Closely tied to the concept of the singularity is artificial general intelligence. AGI refers to artificial intelligence that is able to perform any task as well as a human can. AGI is often considered a prerequisite for the singularity to occur. Current AI technology, by contrast, is trained on existing datasets generated by humans.

Alignment – An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues some objectives, but not the intended ones.

API – Application Programming Interface – APIs are used to exchange information between different programs. An API is one way for programs to communicate with each other without having to share the same code base or even run on the same server.
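The "exchange information between different programs" idea can be sketched without any network at all: one side agrees to accept a JSON request and return a JSON response, and the other side only needs to know that message format. This is a minimal, hypothetical sketch, not a real service:

```python
import json

# A toy "server-side" handler: accepts a JSON request, returns a JSON response.
def handle_request(request_json):
    request = json.loads(request_json)          # parse the incoming message
    result = {"sum": sum(request["numbers"])}   # do the actual work
    return json.dumps(result)                   # serialize the reply

# The "client" program never sees the server's code, only the agreed format.
reply = handle_request(json.dumps({"numbers": [1, 2, 3]}))
print(json.loads(reply)["sum"])  # 6
```

Real APIs add a transport layer (usually HTTP), but the contract, structured request in, structured response out, is the same.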

Chatbot – A chatbot is a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.
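Modern chatbots use NLP models to understand questions, but the basic loop, match the question, return a response, can be sketched with simple keyword rules (the rules and answers below are invented for illustration):

```python
# A minimal rule-based chatbot: real systems replace keyword matching
# with NLP models, but the request -> match -> response loop is the same.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("What are your hours?"))
```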

Cryptocurrency – A cryptocurrency is an encrypted data string that denotes a unit of currency. It is monitored and organized by a peer-to-peer network called a blockchain, which also serves as a secure ledger of transactions, e.g., buying, selling, and transferring. Unlike physical money, cryptocurrencies are decentralized, which means they are not issued by governments or other financial institutions.  Cryptocurrencies are created (and secured) through cryptographic algorithms that are maintained and confirmed in a process called mining, where a network of computers or specialized hardware such as application-specific integrated circuits (ASICs) process and validate the transactions. The process incentivizes the miners who run the network with the cryptocurrency. Bitcoin, Ether, Litecoin, and Monero are popular cryptocurrencies. 
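The "secure ledger" property rests on cryptographic hashing: each block records the hash of the previous block, so altering any past transaction changes every later hash. A toy sketch of that chaining (not a real blockchain, just the hashing idea):

```python
import hashlib

# Each block's identity depends on the previous block's hash, so tampering
# with history invalidates every block that follows.
def block_hash(prev_hash, transactions):
    data = prev_hash + "|" + ";".join(transactions)
    return hashlib.sha256(data.encode()).hexdigest()

genesis = block_hash("0" * 64, ["alice pays bob 5"])
block2 = block_hash(genesis, ["bob pays carol 2"])
print(block2[:16], "...")  # a 64-character hex digest, linked to genesis
```

Changing "bob pays carol 2" to any other string yields a completely different digest, which is what the mining network checks for.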


Deep Learning – Deep learning is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain. Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions.

Doom Scrolling – the practice of obsessively checking online news for updates, especially on social media feeds, with the expectation that the news will be bad, such that the feeling of dread from this negative expectation fuels a compulsion to continue looking for updates in a self-perpetuating cycle.

AI Compute – AI compute refers to the computational resources required for artificial intelligence systems to perform tasks, such as processing data, training machine learning models, and making predictions.

Frequency bias – the continual reinforcement of a claim, true or not, whereby hearing it repeatedly strengthens the belief that it is true.

“Frequency illusion, also known as the Baader–Meinhof phenomenon or frequency bias, is a cognitive bias referring to the tendency to notice something more often after noticing it for the first time, leading to the belief that it has an increased frequency of occurrence. The illusion is a result of increased awareness of a phrase, idea, or object – for example, hearing a song more often or seeing red cars everywhere.” – Wikipedia

Free market – In a purely free market economy, the law of supply and demand, rather than a central planner, regulates production and labor. Companies sell goods and services at the highest price consumers are willing to pay while workers earn the highest wages companies are willing to pay for their services.

Normalcy bias – Normalcy bias is the tendency to underestimate the likelihood or impact of a negative event. Normalcy bias prevents us from understanding the possibility or the seriousness of a crisis or a natural disaster. Because normalcy bias can lead us to believe that nothing serious is going to happen, we may not take appropriate or adequate preparations for a crisis and might put ourselves at risk.

Generative AI – AI systems that create new content such as text, images, or audio. The term generative AI is closely connected with LLMs (Large Language Models), which are, in fact, a type of generative AI that has been specifically architected to help generate text-based content.

GitHub is an online software development platform. It’s used for storing, tracking, and collaborating on software projects. It makes it easy for developers to share code files and collaborate with fellow developers on open-source projects.

GPT – Generative Pre-trained Transformers, commonly known as GPT, are a family of neural network models that use the transformer architecture. They are a key advancement in artificial intelligence (AI), powering generative AI applications such as ChatGPT.

GPU – Graphics Processing Unit – the specialized computer chips used to run AI applications.

Inference in AI refers to the process of reasoning and making decisions based on available information or data. It involves deriving new knowledge or conclusions from existing knowledge or data.
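The "deriving new knowledge from existing knowledge" idea can be sketched with classic rule-based (forward-chaining) inference; modern AI systems instead run data through a trained model, but the principle of producing conclusions from inputs is the same. The facts and rules below are invented for illustration:

```python
# Forward-chaining inference: keep applying if-then rules until no new
# facts can be derived.
facts = {"rainy"}
rules = [
    ({"rainy"}, "wet_ground"),      # if rainy then wet ground
    ({"wet_ground"}, "slippery"),   # if wet ground then slippery
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # new knowledge derived from old
            changed = True

print(sorted(facts))  # 'slippery' was never stated, only inferred
```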

LLM – Large Language Model – A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate, and predict new content.
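A drastically simplified sketch of "predicting new content": count which word follows which in some training text, then predict the most frequent follower. Real LLMs use deep neural networks with billions of parameters rather than a lookup table, but the next-token training objective is similar in spirit:

```python
from collections import Counter, defaultdict

# Train a toy "language model": tally which word follows each word.
text = "the cat sat on the mat the cat ran".split()
following = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Predict the most common word to follow the given word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  ("cat" follows "the" twice, "mat" once)
```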

Machine Learning – Machine learning is an application of AI. It’s the process of using mathematical models of data to help a computer learn without direct instruction. This enables a computer system to continue learning and improving on its own, based on experience.
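"Using mathematical models of data to help a computer learn without direct instruction" can be seen in miniature in least-squares line fitting: nobody tells the program the rule, it recovers the rule from examples. This sketch uses a closed-form solution; real systems use richer models and iterative training:

```python
# Fit y = w*x + b to example data by simple least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # secretly generated by y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(w, b)  # the program "learned" w=2.0, b=1.0 from the examples alone
```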

Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken and written — referred to as natural language. It is a component of artificial intelligence (AI). NLP has existed for more than 50 years and has roots in the field of linguistics.

Natural language processing (NLP) is the discipline of building machines that can manipulate human language — or data that resembles human language — in the way that it is written, spoken, and organized. It evolved from computational linguistics, which uses computer science to understand the principles of language, but rather than developing theoretical frameworks, NLP is an engineering discipline that seeks to build technology to accomplish useful tasks. NLP can be divided into two overlapping subfields: natural language understanding (NLU), which focuses on semantic analysis or determining the intended meaning of text, and natural language generation (NLG), which focuses on text generation by a machine. NLP is separate from — but often used in conjunction with — speech recognition, which seeks to parse spoken language into words, turning sound into text and vice versa.
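One of the first steps in almost any NLP pipeline is tokenization, turning raw text into units a program can work with. A minimal sketch (real systems use far more sophisticated tokenizers, e.g. subword tokenizers in LLMs):

```python
import re

# Tokenize by lowercasing and extracting runs of letters/apostrophes.
def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

print(tokenize("Natural Language Processing isn't magic!"))
```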

Neural Network – One way to train a computer to mimic human reasoning is to use a neural network, which is a series of algorithms that are modelled after the human brain. The neural network helps the computer system achieve AI through deep learning. This close connection is why the idea of AI vs. machine learning is really about the ways that AI and machine learning work together.
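The building block of a neural network is a single artificial neuron: weighted inputs plus a bias, passed through a non-linear activation function. A minimal sketch, using the sigmoid activation for illustration:

```python
import math

# One artificial neuron: weighted sum of inputs + bias, squashed by sigmoid.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid maps any number into (0, 1)

# A network stacks many such neurons into layers; "training" means
# adjusting the weights and biases so the outputs become useful.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```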

Open Source is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product. The open-source model is a decentralized software development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code.

Parameters – In machine learning, parameters are the internal variables of a model whose values are set during training and then used to infer or generate new content.
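The parameter counts quoted for large models (e.g. "billions of parameters") come from summing the learned variables across all layers. For a simple fully connected layer, the count is easy to compute:

```python
# A fully connected layer mapping n_in inputs to n_out outputs learns
# one weight per input-output pair plus one bias per output.
def layer_param_count(n_in, n_out):
    return n_in * n_out + n_out

print(layer_param_count(3, 2))        # 8: a toy layer (3*2 weights + 2 biases)
print(layer_param_count(4096, 4096))  # one large layer already has millions
```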

Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.
