# Accuracy: A performance measure of a model

By performance measure of a model, I mean knowing how well our model (classification model or regression model) performs on test data or live data.

A performance measure is also called a performance metric.
The performance of a model is always measured on test data, not on training data or validation data.

Among all the available performance measures, accuracy is the easiest metric to understand.

So, let's dive straight into the point…

# What is Accuracy

The accuracy of a model is defined as the number of points classified correctly divided by the total number of points.

Let's compare this with the marks you get in school. Suppose you scored 89 out of a total of 100 marks; this means that out of 100 questions, you answered 89 correctly. So your accuracy in this case is 89/100.

From the above definition we can draw a trivial conclusion: the number of points correctly classified can be at most the total number of points. So if a model classifies all the points correctly, we say it is 100% accurate, or the accuracy is 1, which is a rare thing to happen. If it does happen, you should definitely check for overfitting or some other problem with your model 😀; if there is nothing like that, then kudos…

NOTE: The accuracy of a model always lies between 0 and 1. 0 means worst, 1 means awesome.

Let's consider a model (for simplicity, think of a KNN (K-Nearest Neighbors) model) which takes values of X as input and gives Y as output, where Y is +1 or −1. Assume that we have already trained our model and are now testing it on test data.

Consider the above data set, where X is the input given to our model, Y is the actual value, and Y^ is our predicted value. From the table it is clear that we have a total of 4 points. Out of the 4 points, points 1, 2, and 4 have been predicted correctly and point 3 is predicted wrong. So when we check the accuracy:

Total number of points = 4
Number of points classified (predicted) correctly = 3
From equation 1:
Accuracy = 3 / 4 = 0.75
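To make the arithmetic concrete, here is a minimal Python sketch. The actual labels are assumptions (the table only tells us that points 1, 2, and 4 are correct and point 3 is wrong, so any labels matching that pattern will do):

```python
# Hypothetical labels matching the 4-point example above:
# points 1, 2 and 4 are predicted correctly, point 3 is not.
y_actual    = [+1, -1, +1, -1]
y_predicted = [+1, -1, -1, -1]  # point 3 is misclassified

# Count matches between actual and predicted labels
correct = sum(1 for a, p in zip(y_actual, y_predicted) if a == p)
accuracy = correct / len(y_actual)
print(accuracy)  # 3 / 4 = 0.75
```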

Hurray! We have finally learnt how to find the accuracy of our model. 0.75 is a decently good accuracy score, but it is not great. Training the model with more data may further improve its performance. Improving the accuracy is out of the scope of this blog…

So now you have understood the mathematical part of how the accuracy metric works; the coding part is very easy. Sklearn already provides us with a function named accuracy_score. Refer this.
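With sklearn, the same calculation is a single call to `accuracy_score`. The labels below are the same assumed ones from the 4-point example:

```python
from sklearn.metrics import accuracy_score

y_actual    = [+1, -1, +1, -1]  # assumed true labels
y_predicted = [+1, -1, -1, -1]  # assumed predictions; point 3 is wrong

# accuracy_score compares the two label lists element-wise
print(accuracy_score(y_actual, y_predicted))  # 0.75
```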

Accuracy applies to classification models, not regression models: a regression model outputs continuous values, so counting exact matches is meaningless there.

Now that we have learnt our first performance measure, we might ask one question: is this metric applicable in all cases?
Well, the answer is NO.

There are some cases where applying accuracy might mislead us:

• When we have imbalanced data.
• When our model returns probability scores rather than class labels.

# When we have imbalanced data.

Consider the above example where we have 100 values of X. Out of the 100 values, 90 values of Y are `+1` and the remaining 10 are `-1`. Such data is called imbalanced data, where one class (here +1) is present in a much higher number than the other class (here -1).
Let's suppose our model is so weakly trained that it returns the value `+1` irrespective of the input.

So now when we calculate its accuracy, 90 of the 100 points are "correct" simply because they happen to be the majority class, giving Accuracy = 90/100 = 0.9.

So even though our model is not actually able to predict anything, we still get an accuracy of 0.9, which looks very good but really is not. So using accuracy as your measure for such an imbalanced dataset might mislead you.
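This pitfall is easy to reproduce in code. Below is a sketch with the same 90/10 split, where a dummy "model" ignores its input and always predicts the majority class `+1`:

```python
from sklearn.metrics import accuracy_score

# 90 positives and 10 negatives -- an imbalanced dataset
y_actual = [+1] * 90 + [-1] * 10

# A weak "model" that always predicts the majority class
y_predicted = [+1] * 100

# Accuracy looks great, yet the model has learnt nothing
print(accuracy_score(y_actual, y_predicted))  # 0.9
```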

# When our model returns probability scores rather than class labels.

In short, a probability score tells us the probability that x belongs to class A. To get an exact class label, we set a certain probability as a threshold (say, 0.5). Any score greater than or equal to the threshold is assigned to class A; otherwise, to the other class.

Let's consider two models, M1 and M2, which return probability scores as output.

In the above table, X is the input value, Y is the actual value, M1 and M2 are the probability scores predicted by model 1 and model 2, and y^M1 and y^M2 are the class labels predicted by model 1 and model 2 using those probability scores.

We have set a threshold of 0.5. Anything equal to or above 0.5 belongs to `+1` and anything below belongs to `-1`.
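Thresholding probability scores into class labels can be sketched like this. The score values are made up for illustration, following the pattern described below (M1 confident, M2 hugging the threshold):

```python
def to_label(score, threshold=0.5):
    """Return +1 if the probability score clears the threshold, else -1."""
    return +1 if score >= threshold else -1

# Made-up probability scores: M1 is confident, M2 stays near 0.5
m1_scores = [0.95, 0.05, 0.90, 0.10]
m2_scores = [0.55, 0.45, 0.52, 0.48]

labels_m1 = [to_label(s) for s in m1_scores]
labels_m2 = [to_label(s) for s in m2_scores]

print(labels_m1)  # [1, -1, 1, -1]
print(labels_m2)  # [1, -1, 1, -1]  -- identical labels, so identical accuracy
```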

Looking at the class labels predicted by the two models, y^M1 and y^M2 are identical, so the accuracy of both models is the same. But if we look at the probability scores of M1 and M2,

M1 seems to predict better than M2. M1 predicts with much more confidence, whereas the probability scores of M2 are so close to 0.5 (the threshold value) that it is barely able to separate the two classes +1 and -1.

NOTE: Accuracy does not take probability scores as input; it only takes the predicted class labels.

So for models which give probability scores as output, we use other performance metrics like log-loss, which we will see in my next blog.

Thank you for reading till here. I hope you have understood accuracy. If you found this worthy, please leave a clap, which motivates me to write more articles; share it with your friends and spread the knowledge. If you have any questions, feel free to leave a comment.