- Artificial General Intelligence (AGI) – The theoretical concept of creating machines that have the ability to perform any intellectual task that a human can.
- Artificial Intelligence (AI) – The branch of computer science that involves creating intelligent machines that can perform tasks that typically require human intelligence.
- AI Ethics – The study of moral and ethical issues related to the development and use of AI technologies.
- Bayesian Networks – A type of probabilistic graphical model that uses a directed acyclic graph to represent a set of variables and their conditional dependencies (see the inference sketch after this list).
- Bias in AI – The tendency of AI systems to reflect and amplify the biases present in the data used to train them.
- Big Data – Extremely large and complex datasets that can be analyzed to reveal patterns, trends, and associations.
- Cognitive Computing – The use of AI technologies to simulate human thought processes, such as perception, reasoning, and decision-making.
- Computer Vision – The ability of machines to interpret and understand visual information, such as images and videos.
- Data Mining – The process of extracting patterns and knowledge from large datasets using statistical and computational methods.
- Deep Learning – A type of machine learning that uses artificial neural networks with multiple layers to perform complex tasks like image recognition, natural language processing, and speech recognition.
- Expert Systems – AI systems that use a knowledge base and reasoning algorithms to provide advice and make decisions in a specific domain.
- GANs (Generative Adversarial Networks) – A type of deep learning model consisting of two neural networks, a generator and a discriminator, that compete against each other to produce increasingly realistic synthetic data.
- Human-in-the-Loop (HITL) – A process of combining AI and human decision-making to improve the accuracy and reliability of AI systems.
- Internet of Things (IoT) – The connection of physical devices and objects to the internet, allowing them to collect and share data.
- Machine Learning (ML) – A subset of AI that involves training algorithms to make predictions or decisions by learning from data without being explicitly programmed.
- Natural Language Processing (NLP) – The ability of machines to understand, interpret, and generate human language.
- Neural Network – A type of machine learning model, loosely modeled on the structure of the human brain, that processes information through layers of interconnected nodes (a minimal forward-pass sketch appears after this list).
- Predictive Analytics – The use of statistical and machine learning algorithms to analyze data and make predictions about future outcomes.
- Reinforcement Learning – A type of machine learning in which an agent learns by trial and error, interacting with an environment and adjusting its behavior based on the rewards or penalties it receives (see the bandit sketch after this list).
- Robotics – The branch of engineering and AI that involves the design, development, and operation of robots.
- Supervised Learning – A type of machine learning algorithm that involves training a model on labeled data to make predictions or decisions (see the regression sketch after this list).
- Synthetic Data – Artificially generated data that can be used to train ML models in situations where real data is unavailable or limited.
- Transfer Learning – The ability of a model to apply knowledge learned from one task to another related task.
- Unsupervised Learning – A type of machine learning algorithm that involves training a model on unlabeled data to discover patterns and associations (see the clustering sketch after this list).
- XAI (Explainable AI) – AI systems that can provide clear and understandable explanations of how they arrived at a decision or recommendation.
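
To make the Bayesian network entry concrete, here is a minimal sketch of exact inference by enumeration over the classic rain/sprinkler/wet-grass network. The graph structure is the textbook example, but the probability values are illustrative numbers chosen for the sketch, not drawn from any real data.

```python
# A tiny Bayesian network: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
# The probability tables below are made-up illustrative values.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler_given_rain = {True: {True: 0.01, False: 0.99},
                          False: {True: 0.4, False: 0.6}}
P_wet_given = {(True, True): 0.99, (True, False): 0.8,
               (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet), factored along the network's edges."""
    p = P_rain[rain] * P_sprinkler_given_rain[rain][sprinkler]
    p_wet = P_wet_given[(rain, sprinkler)]
    return p * (p_wet if wet else 1 - p_wet)

# P(Rain | WetGrass = True): sum the joint over the hidden variable (Sprinkler).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet grass) = {num / den:.3f}")
```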
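
The neural network entry can be illustrated with a tiny forward pass: each layer is a weighted sum of its inputs followed by a nonlinearity. The layer sizes and random weights below are arbitrary choices made for the sketch; a real network would learn its weights from data.

```python
# A minimal forward pass through a two-layer neural network.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4 input features -> 3 hidden units -> 1 output unit (weights are random here).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    hidden = sigmoid(x @ W1 + b1)    # each hidden node combines all inputs
    return sigmoid(hidden @ W2 + b2)  # output node combines the hidden nodes

x = np.array([0.5, -1.2, 3.0, 0.0])
print(forward(x))  # a value in (0, 1), e.g. a predicted probability
```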
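
A minimal sketch of the trial-and-error idea behind reinforcement learning is a multi-armed bandit with an epsilon-greedy agent: it mostly picks the action it currently believes is best, occasionally explores, and updates its value estimates from the rewards it observes. The reward probabilities are invented for the example.

```python
# Epsilon-greedy learning on a 3-armed bandit.
import random

true_reward_prob = [0.3, 0.5, 0.7]   # unknown to the agent
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1

random.seed(0)
for _ in range(10_000):
    # Explore occasionally, otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: no need to store past rewards.
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])  # estimates roughly track the true probabilities
```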
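
Supervised learning in its simplest form: fit a model to labeled examples, then predict labels for new inputs. The sketch below uses ordinary least squares on synthetic (x, y) pairs generated just for this illustration.

```python
# Fit a line to labeled data, then predict an unseen input.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)  # noisy labels around y = 2x + 1

# Design matrix with a bias column; solve for slope and intercept.
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"learned: y = {slope:.2f} * x + {intercept:.2f}")
print(f"prediction at x = 4: {slope * 4 + intercept:.2f}")
```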
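
Unsupervised learning, by contrast, works on inputs with no labels at all. The sketch below runs a bare-bones k-means loop on two synthetic blobs of 2-D points; the algorithm recovers the two groups without ever being told they exist.

```python
# K-means clustering on unlabeled 2-D points (two synthetic blobs).
import numpy as np

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
                    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))])

# Start from two randomly chosen data points as cluster centers.
centers = points[rng.choice(len(points), size=2, replace=False)]
for _ in range(10):
    # Assign each point to its nearest center, then move each center
    # to the mean of the points assigned to it.
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centers, 2))  # typically two centers near the original blobs
```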