You may have heard the terms Machine Learning, Artificial Intelligence, and Artificial Neural Networks in recent times. All of these are different ways of answering the good old question of whether we can create a new type of intelligence that can compute natural functions. By natural function, I mean a real-life problem.

Computers have high processing power and memory and can solve complex numerical problems quickly and easily. But real-world activities involve vision, speech, pattern recognition, and natural language, and computers struggle with these tasks. This is because computers, with or without Artificial Intelligence, require an…

If you are not familiar with logistic regression, feel free to check out Understanding Logistic Regression

Logistic regression is very similar to ordinary linear models like linear regression, but the big difference is that logistic regression models the log odds of the outcome. In logistic regression, we make sure that the fitted curve keeps the response variable y between 0 and 1.
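A minimal sketch of how this works: the linear part of the model produces log odds, which can be any real number, and the sigmoid function squeezes them into the (0, 1) range. The values below are illustrative, not from the article.

```python
import numpy as np

def sigmoid(z):
    # Map log odds z (any real number) to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# The linear combination of inputs (the log odds) is unbounded...
log_odds = np.array([-4.0, 0.0, 4.0])

# ...but the sigmoid maps it to values strictly between 0 and 1
probs = sigmoid(log_odds)
print(probs)
```

At log odds of 0 the probability is exactly 0.5; large negative or positive log odds approach 0 or 1 without ever reaching them.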

If you are not familiar with linear regression, feel free to check out Linear Regression in a Nutshell

As we know, in linear regression, to find the best fit we start with some…

So far, we have looked at estimating the conditional expectations of continuous variables (as in regression). However, there are many situations where we are interested in input-output relationships, as in regression, but the output variable is discrete rather than continuous.

In particular, there are many situations where we have binary outcomes (there are only two possible outcomes to a certain situation). In addition to the binary outcome, we have some input variables, which may or may not be continuous.

**How could we model and analyze such data?** We could try to come up with a rule which guesses the binary…

Generally, the output of a model can be affected by multiple features. When the number of features increases, the model becomes complicated.

An overfitting model tends to take all the features into consideration, even though some of them have a very limited effect on the final output. Or even worse, some of them are noise that is meaningless to the output.

We need to limit the effect of these useless features. However, we do not always know which features are useless, so we try to limit them all by adding a penalty to the cost function our model minimizes. …

In Machine Learning, the fact that a model fits the training data well doesn’t mean it will perform well on test data. This disparity between the performance on the training and test data is called the Generalization Gap. It is common in machine learning problems to observe a gap between the training and testing performance.

Usually, increasing model complexity helps reduce training error, but it can also increase the risk of overfitting, leading to a larger generalization gap. *So what is overfitting?*

Overfitting is a fundamental issue in supervised machine learning which prevents us from perfectly generalizing the models to…

While developing machine learning models, machine learning practitioners usually encounter the problem of selecting the best model.

Generally, the best model is the one with the least possible prediction error. So in order to find the best model, we need to minimize the prediction error. The prediction error for any machine learning algorithm can be broken down into:

- Irreducible Error
- Bias
- Variance

Irreducible error is the error that cannot be reduced, regardless of which algorithm you use. It is caused by unknown variables and inherent noise that have a direct influence on the output variable. …
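These three components can be seen in a small simulation. The sketch below (my own toy setup, not from the article) repeatedly draws noisy samples of a fixed true value and "predicts" with the sample mean, then measures the bias and variance of those predictions alongside the irreducible noise:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 2.0
noise_sd = 0.5  # the noise is the source of the irreducible error

# Repeatedly draw small samples and predict with the sample mean
predictions = []
for _ in range(2000):
    sample = true_value + rng.normal(scale=noise_sd, size=5)
    predictions.append(sample.mean())
predictions = np.array(predictions)

bias = predictions.mean() - true_value  # systematic offset (near 0 here)
variance = predictions.var()            # spread of predictions across resamples
irreducible = noise_sd ** 2             # noise variance no model can remove
print(bias, variance, irreducible)
```

The sample mean is an unbiased estimator, so the bias comes out near zero; the variance shrinks as the sample size grows; the irreducible term stays fixed no matter what estimator we pick.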

This article is a section of Linear Regression in a Nutshell

Ordinary Least Squares (OLS) regression, more commonly known as the linear regression algorithm, is a type of linear least-squares method for estimating the unknown parameters in a linear regression model.

In the case of a model with ’n’ explanatory variables, the OLS regression equation is given as:
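The equation itself did not survive extraction here; in the standard textbook form (my reconstruction, with coefficients β and error term ε), it reads:

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \varepsilon
```

where β₀ is the intercept, β₁ through βₙ are the coefficients of the n explanatory variables, and ε is the error term.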


There are some metrics you need to understand to determine whether regression models are accurate or misleading.

Following a flawed model is a bad idea, so it is important that you can quantify how accurate your model is. One of the metrics is **Variance.**

Other concepts like bias and the bias-variance tradeoff will be covered in upcoming articles. Follow to get notified.

In terms of linear regression, **variance** can be defined as a measure of how far observed values differ from the average of predicted values. …
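Taking the definition above literally, this measure can be computed in a few lines. The observed and predicted values below are made-up illustrative numbers:

```python
import numpy as np

# Hypothetical observed test values and model predictions
observed = np.array([3.1, 4.2, 5.0, 6.3])
predicted = np.array([3.0, 4.0, 5.2, 6.1])

# Variance as described above: the average squared distance of the
# observed values from the mean of the predicted values
variance = np.mean((observed - predicted.mean()) ** 2)
print(variance)
```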


Regression algorithms have been proven effective for making predictions in many sectors. One of the key phases in machine learning is the evaluation of the model. The purpose of the evaluation is to compare the trained model predictions with the actual (observed) data from the testing data set.

A metric is a quantifiable measure (something that can be measured or counted scientifically) that is used to track and assess the status of a specific process, in our case the performance of the model.

They give you accurate measurements of how well the model…
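Two of the most common regression metrics, mean squared error and the coefficient of determination (R²), compare predictions against observed test values in exactly the way described above. The numbers below are illustrative, not from the article:

```python
import numpy as np

# Hypothetical observed test values and model predictions
y_test = np.array([2.0, 3.5, 5.0, 7.0])
y_pred = np.array([2.2, 3.4, 4.6, 7.3])

# Mean squared error: average squared difference between observed and predicted
mse = np.mean((y_test - y_pred) ** 2)

# R^2: fraction of the variance in the observed data explained by the model
ss_res = np.sum((y_test - y_pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(mse, r2)
```

An R² near 1 means the predictions track the observed data closely; an R² near 0 means the model does no better than predicting the mean.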

Regression analysis is one of the most widely used predictive methods. Linear regression is probably the most important method in machine learning, and it is the starting point for the higher-level analytical methods that data scientists use.

Linear regression is a linear approximation of a causal relationship between two or more variables.

Regression models are highly valuable, as they are one of the most common ways to make inferences and predictions. Apart from this, regression analysis is also employed to determine and assess factors that affect a certain outcome in a meaningful way. …

I am a Third-year Computer Engineering undergraduate student with an interest in Data Science, Deep Learning, and Computer Networking.