---
tags:
- AI/Algorithms
aliases:
- Artificial Intelligence Algorithms
- AI Algorithms
---

Considerations in selecting the right algorithm:

- Understand your data, i.e.
    - Types of feature variables (for Data Transformation and Data Preparation)
    - Types of features (e.g. Categorical, Numerical)
    - Number of features (for Dimensionality Reduction)
    - Existence of labeled data (for selecting the Machine Learning type, e.g. Supervised Machine Learning or Unsupervised Machine Learning)
- Define the goal of the algorithm and the problem it solves.
- Consider practical aspects.
- Make good use of Model Evaluation.

Generally, the following are good rules of thumb for selecting the right algorithm.

With simple data structures:

- Pattern recognition for predictions can be done via Regression Models.
- For simple classification, Support Vector Machine (SVM) is often a good option.
- k-Nearest Neighbors (KNN) can handle large amounts of data for classification tasks.
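
To make the simple-classification case concrete, here is a minimal sketch of a linear SVM with scikit-learn; the 2-D points and labels are made-up illustrative data, not from this note:

```python
# Illustrative sketch: a linear SVM separating two toy 2-D clusters.
from sklearn.svm import SVC

# Two well-separated groups of points (assumed toy data).
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.5, 0.5], [5.5, 5.5]]))  # [0 1]
```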

With complex data structures:

- For Unsupervised Machine Learning tasks on data with some complexity, Boltzmann Machines (BM) and Autoencoders (AE) are suitable.
- For Supervised Machine Learning, there are several options depending on the task. For example, for Text Classification tasks, LLMs & RNNs; for Machine Vision tasks, DBNs, CNNs & RNTNs; and for Speech Recognition, RNNs are suitable.

- Regression
    - Linear Regression
    - Logistic Regression
    - Lasso Regression
    - Ridge Regression
    - Stepwise Regression
    - Elastic-Net
    - Polynomial Regression
    - Ordinary Least Squares Regression (OLSR)
    - Multivariate Adaptive Regression Splines (MARS)
    - Locally Estimated Scatterplot Smoothing (LOESS)
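
As a worked example of the simplest entry above, Ordinary Least Squares can be solved directly with NumPy's `lstsq`; the synthetic data (y = 2x + 1) is an illustrative assumption:

```python
import numpy as np

# Ordinary least squares on synthetic data following y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Design matrix with an intercept column; solve min ||A w - y||².
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(slope, 3), round(intercept, 3))  # 2.0 1.0
```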

- Support Vector Machine (SVM)
    - SVM Regression
- Ensemble Model

- Clustering: Algorithms for creating meaningful groups or collections from a set of unlabeled data.
    - K-Means
    - Gaussian Mixture Model (GMM)
    - Hierarchical Clustering
    - DBSCAN
    - OPTICS
    - Mean-Shift
    - BIRCH
    - k-Medians
    - Fuzzy c-Means
    - Fuzzy k-Modes
    - Fuzzy Clustering
    - Expectation Maximization
    - Minimum Spanning Tree
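
A minimal K-Means sketch with scikit-learn, grouping made-up unlabeled 2-D points into two clusters (the data and `n_clusters=2` are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious blobs of unlabeled points (assumed toy data).
X = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 10], [10, 9]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Points in the same blob should receive the same cluster label.
print(km.labels_)
```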

- Outlier Detection
    - Support Vector Machine (SVM)
    - k-Nearest Neighbors (KNN)
    - Robust Regression (RANSAC)
    - Local Outlier Factor
    - Isolation Forest
- Association Rule Learning
    - Apriori Algorithm
    - Eclat Algorithm
    - FP-Growth
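
A small Local Outlier Factor sketch with scikit-learn: four tightly clustered values plus one far-away point (the data and `n_neighbors=2` are illustrative assumptions):

```python
from sklearn.neighbors import LocalOutlierFactor

# Four clustered values and one extreme value (assumed toy data).
X = [[0.0], [0.1], [-0.1], [0.05], [10.0]]

lof = LocalOutlierFactor(n_neighbors=2)
pred = lof.fit_predict(X)  # -1 marks outliers, 1 marks inliers
print(pred)
```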

- Rule-Based Machine Learning (RBML): Rule-based machine learning applies a learning algorithm to automatically identify useful rules, rather than requiring a human to apply prior domain knowledge to manually construct and curate a rule set.
    - RIPPER (Repeated Incremental Pruning to Produce Error Reduction)
    - Cubist
    - OneR
    - ZeroR
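
ZeroR is the simplest rule learner and a useful baseline: it ignores all features and always predicts the majority class. A minimal sketch with made-up labels:

```python
from collections import Counter

# ZeroR baseline: always predict the majority class (assumed toy labels).
labels = ["spam", "ham", "ham", "ham", "spam"]
majority = Counter(labels).most_common(1)[0][0]
print(majority)  # ham
```

OneR extends the same idea one step further, picking the single feature whose one-level rule gives the lowest training error.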

- Dimensionality Reduction
    - Principal Component Analysis (PCA)
    - Principal Component Regression (PCR)
    - Independent Component Analysis (ICA)
    - Linear Discriminant Analysis (LDA)
    - Partial Least Squares Regression (PLSR)
    - Sammon Mapping
    - Multidimensional Scaling (MDS)
    - Projection Pursuit
    - Non-negative Matrix Factorization (NMF)
    - Regularized Discriminant Analysis (RDA)
    - Mixture Discriminant Analysis (MDA)
    - Partial Least Squares Discriminant Analysis (PLS-DA)
    - Quadratic Discriminant Analysis (QDA)
    - Canonical Correlation Analysis (CCA)
    - Flexible Discriminant Analysis (FDA)
    - Diffusion Map
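
PCA, the first entry above, can be sketched in a few lines of NumPy via the SVD of centered data; the 2-D points lying roughly along y = x are an illustrative assumption:

```python
import numpy as np

# PCA via SVD on centered data: toy 2-D points roughly along y = x.
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]])
Xc = X - X.mean(axis=0)  # center each feature

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)  # variance ratio per component
print(round(explained[0], 2))  # first component captures nearly all variance
```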

- Regularization
    - Ridge Regression
    - Lasso Regression
    - Elastic-Net
    - Stepwise
    - Least-Angle Regression (LARS)
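
Ridge regression has a closed form, w = (XᵀX + αI)⁻¹ Xᵀy, which makes the shrinkage effect easy to see; the data (y = 2x, no intercept) and α = 1 are illustrative assumptions:

```python
import numpy as np

# Ridge regression closed form: w = (XᵀX + αI)⁻¹ Xᵀy (no intercept).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])  # exactly y = 2x

alpha = 1.0  # assumed regularization strength
w = np.linalg.solve(X.T @ X + alpha * np.eye(1), X.T @ y)
print(round(float(w[0]), 3))  # shrunk below the OLS slope of 2
```

With α = 0 this reduces to OLS (slope 2); the penalty biases the coefficient toward zero in exchange for lower variance.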

- Artificial Neural Networks (ANN): modeled after the way neurons interact in the human brain to interpret information and solve problems.
    - Perceptron
    - Multilayer Perceptron (MLP)
    - Recurrent Neural Networks (RNN)
    - Feed-Forward Neural Networks (FFNN)
    - Convolutional Neural Networks (CNN)
    - Generative Adversarial Networks (GAN)
    - Long Short-Term Memory (LSTM)
    - Hopfield Networks (HN)
    - Boltzmann Machines (BM)
    - Restricted Boltzmann Machines (RBM)
    - Autoencoders (AE)
    - Variational Autoencoders (VAE)
    - Stacked Autoencoders (SAE)
    - Deep Belief Networks (DBN)
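
The perceptron, the simplest entry above, can be trained with its classic update rule w ← w + (y − ŷ)x in pure Python; the AND-gate data is an illustrative assumption:

```python
import numpy as np

# The classic perceptron learning rule on a linearly separable AND gate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)
b = 0.0
for _ in range(10):  # a few epochs suffice for AND
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi  # update only on mistakes
        b += (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # [0, 0, 0, 1]
```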

- Tree-based Models: use a series of “if-then” rules to generate predictions from one or more decision trees.
- Bayesian Models
- Reinforcement Learning Algorithms
    - State-Action-Reward-State-Action (SARSA)
    - Q-Learning
    - Deep Q-Network (DQN)
    - Learning Automata
    - Deep Deterministic Policy Gradient (DDPG)
    - Normalized Advantage Function (NAF)
    - Asynchronous Advantage Actor-Critic (A3C)
    - Trust Region Policy Optimization (TRPO)
    - Proximal Policy Optimization (PPO)
    - Constructing Skill Trees
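
Tabular Q-learning can be sketched on a tiny chain environment: states 0–3, a reward of 1 for reaching state 3. The environment, hyperparameters, and episode count below are illustrative assumptions:

```python
import random

# Tabular Q-learning on a 1-D chain: states 0..3, reward 1 at state 3.
random.seed(0)
n_states, n_actions = 4, 2  # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # assumed hyperparameters

for _ in range(500):
    s = 0
    while s != 3:  # episode ends at the goal state
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda k: Q[s][k])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 3 else 0.0
        # Q-learning update: Q(s,a) += α (r + γ max_a' Q(s',a') − Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should step right in every non-terminal state.
print([max(range(n_actions), key=lambda k: Q[s][k]) for s in range(3)])
```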

- Instance-Based Algorithms: These supervised machine learning algorithms make predictions by comparing new instances against training instances stored in memory; they are called instance-based because the model consists of the training instances themselves rather than an explicit abstraction.
    - k-Nearest Neighbors (KNN)
    - Support Vector Machine (SVM)
    - Self-Organizing Map (SOM)
    - Locally Weighted Learning (LWL)
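
KNN shows the instance-based idea in its purest form: prediction is just a distance-sorted vote over stored training points. A minimal pure-Python sketch on made-up 2-D data (k = 3 is an assumed choice):

```python
import math
from collections import Counter

# A minimal k-nearest-neighbors classifier on toy 2-D points.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]

def knn_predict(x, k=3):
    # Sort stored instances by Euclidean distance, vote among the k nearest.
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((0.5, 0.5)), knn_predict((5.5, 5.5)))  # a b
```

Note that all "training" work is deferred to prediction time, which is exactly what the description above means by operating on stored instances.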

- Transformers
- Other Algorithms
    - Logic Learning Machine
    - Markov Chain Monte Carlo (MCMC)
    - t-Distributed Stochastic Neighbor Embedding (t-SNE)
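
MCMC can be illustrated with a bare-bones Metropolis sampler targeting a standard normal; the proposal width, sample count, and seed are illustrative assumptions:

```python
import math
import random

# A minimal Metropolis sampler targeting a standard normal density.
random.seed(0)

def log_target(x):
    return -0.5 * x * x  # log of N(0, 1), up to a constant

samples = []
x = 0.0
for _ in range(5000):
    proposal = x + random.uniform(-1.0, 1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    if random.random() < min(1.0, math.exp(log_target(proposal) - log_target(x))):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))  # sample mean should be near 0
```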
