```
tags: AI/Tasks/Classification, AI/ML/SupervisedLearning, AI/Regression, AI/Tasks/Classification/DiscriminativeModel
aliases: Logistic Regression Algorithm
```

Logistic regression is used for Classification problems: it predicts the probability of a binary (dichotomous) outcome. It finds a linear decision boundary between two classes by fitting the best curve to the data, and it predicts a categorical dependent variable from the independent variables, producing outputs that always lie between 0 and 1.
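A minimal sketch of the workflow described above, using scikit-learn's `LogisticRegression` on synthetic data (the two-blob dataset here is an illustrative assumption, not from the original note):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two Gaussian blobs, one per class, so a linear boundary separates them
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
probs = clf.predict_proba(X)[:, 1]   # probabilities, always in (0, 1)
preds = clf.predict(X)               # hard 0/1 class labels
```

Note that `predict_proba` returns the probability of each class, while `predict` thresholds that probability at 0.5 to produce a label.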

Intuitively, a classifier gives a lower generalization error if its decision boundary has the largest distance to the nearest training data points of any class. This max-margin principle is exactly what SVM implements; logistic regression instead chooses the boundary that maximizes the likelihood of the training data.

Logistic Regression applies the mathematical logistic function to the output of a linear equation, mapping it into the range (0, 1), which results in an S-shaped curve rather than a straight line.
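The logistic (sigmoid) function that produces this S-shaped curve can be written in plain NumPy; the sample points below are arbitrary illustrative inputs:

```python
import numpy as np

def sigmoid(z):
    # Squashes a linear score z = w.x + b into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-6.0, 0.0, 6.0])
print(sigmoid(z))  # values near 0, exactly 0.5, and near 1
```

Large negative scores map close to 0, large positive scores close to 1, and a score of exactly 0 maps to 0.5, which is why 0.5 is the natural classification threshold.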

Notes:

- Logistic regression is a generalized Linear Model because the outcome always depends on a weighted sum of the inputs and parameters.
- Logistic regression is a Discriminative Model.
- Logistic regression can only solve classification problems; it performs best when the relationships in the data are simple and poorly when they are complex.
- Logistic regression uses the sigmoid function to map its output into the range 0 to 1, and it is typically trained with the Cross-Entropy Cost Function rather than mean squared error.
- Logistic regression uses Maximum Likelihood Estimation (MLE) to find the best-fitted curve.
- Binary Cross-Entropy and Categorical Cross-Entropy are used as Loss Functions for classification tasks with Logistic Regression.
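The MLE fitting mentioned above can be sketched as gradient descent on the binary cross-entropy loss (minimizing cross-entropy is equivalent to maximizing the likelihood); the learning rate, iteration count, and tiny 1-D dataset are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)        # predicted P(y = 1 | x)
        grad_w = X.T @ (p - y) / n    # gradient of mean cross-entropy w.r.t. w
        grad_b = np.mean(p - y)       # gradient w.r.t. the bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Tiny linearly separable 1-D example
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
```

The gradient `X.T @ (p - y) / n` has the same form as in linear regression, but with the sigmoid-transformed prediction `p`, which is what makes logistic regression a generalized linear model.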

Assumptions in Logistic Regression:

- The output label has the appropriate structure: binary for standard logistic regression (categorical or ordinal for its multinomial and ordinal variants).
- All observations are independent of each other.
- There is little to no Multicollinearity among the features.
- The number of observations must be larger than the number of features.
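One crude way to screen for the multicollinearity assumption is to flag feature pairs with a high absolute Pearson correlation; the 0.9 cutoff and the synthetic data below are illustrative assumptions (a variance inflation factor analysis would be more thorough):

```python
import numpy as np

def high_corr_pairs(X, threshold=0.9):
    # Pairwise Pearson correlations between feature columns
    corr = np.corrcoef(X, rowvar=False)
    d = corr.shape[0]
    return [(i, j) for i in range(d) for j in range(i + 1, d)
            if abs(corr[i, j]) > threshold]

rng = np.random.default_rng(1)
a = rng.normal(size=100)
# Column 1 is a near-copy of column 0; column 2 is independent noise
X = np.column_stack([a, a + 0.01 * rng.normal(size=100), rng.normal(size=100)])
print(high_corr_pairs(X))  # flags the (0, 1) pair
```

Highly correlated features make the fitted coefficients unstable and hard to interpret, even when predictive accuracy is unaffected.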

Advantages:

- It's a fast and efficient classifier that is easy to understand and implement.
- It is highly interpretable, easy to train, and performs very well on linearly separable data.

Disadvantages:

- The data can have complex relationships that are not easy to capture using this model.
- A Linear Relationship must exist between the features and the log-odds of the target variable.
- It's prone to Overfitting on high-dimensional data and is highly affected by outliers.
