Accuracy Formula


The accuracy formula helps quantify the error in a measured value. If the measured value equals the actual value, the measurement is said to be highly accurate, with low error. Accuracy and error rate are inversely related: high accuracy means a low error rate, and a high error rate means low accuracy. The accuracy formula expresses accuracy as a percentage, and the sum of accuracy and error rate equals 100 percent.

What is the Accuracy Formula?

The accuracy formula gives accuracy as the difference between 100% and the error rate. To find accuracy, we first calculate the error rate: the absolute difference between the observed value and the actual value, divided by the actual value and expressed as a percentage.

Accuracy = 100% – Error Rate 

Error Rate  = |Observed Value – Actual Value|/Actual Value × 100 
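As a quick illustration, here is a minimal Python sketch of the two formulas above; the observed and actual values in the example are made up for demonstration.

```python
def accuracy_from_measurement(observed, actual):
    """Accuracy (in percent) of a measured value, per the formulas above."""
    error_rate = abs(observed - actual) / actual * 100  # Error Rate in percent
    return 100 - error_rate                             # Accuracy = 100% - Error Rate

# Example (made-up values): the actual value is 50 units and we measure 48 units.
print(accuracy_from_measurement(observed=48, actual=50))  # 96.0 -> 96% accuracy, 4% error rate
```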

Solved Example: Accuracy of a Tumor Classifier

 

Consider a binary tumor classifier evaluated on 100 examples, producing 1 true positive (TP), 90 true negatives (TN), 1 false positive (FP), and 8 false negatives (FN). For classification, accuracy is the fraction of predictions the model got right: (TP + TN) / (TP + TN + FP + FN) = (1 + 90) / 100. Accuracy comes out to 0.91, or 91% (91 correct predictions out of 100 total examples). That means our tumor classifier is doing a great job of identifying malignancies, right?
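As a small sketch, the same calculation in Python, using the confusion-matrix counts from this example:

```python
# Confusion-matrix counts for the tumor classifier in this example
tp, tn, fp, fn = 1, 90, 1, 8

# Classification accuracy: correct predictions over all predictions
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.91
```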

Actually, let’s do a closer analysis of positives and negatives to gain more insight into our model’s performance.

Of the 100 tumor examples, 91 are benign (90 TNs and 1 FP) and 9 are malignant (1 TP and 8 FNs).

Of the 91 benign tumors, the model correctly identifies 90 as benign. That’s good. However, of the 9 malignant tumors, the model only correctly identifies 1 as malignant—a terrible outcome, as 8 out of 9 malignancies go undiagnosed!

While 91% accuracy may seem good at first glance, another tumor-classifier model that always predicts benign would achieve the exact same accuracy (91/100 correct predictions) on our examples. In other words, our model is no better than one that has zero predictive ability to distinguish malignant tumors from benign tumors.
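To see this concretely, here is a minimal sketch comparing our model against an always-benign baseline; the label lists are reconstructed from the counts above (1 = malignant, 0 = benign).

```python
# True labels reconstructed from the example: 9 malignant (1), 91 benign (0)
y_true = [1] * 9 + [0] * 91

# Our model's predictions: 1 TP and 8 FN on the malignant cases,
# then 1 FP and 90 TN on the benign cases
y_model = [1] + [0] * 8 + [1] + [0] * 90

# A baseline "model" that always predicts benign
y_baseline = [0] * 100

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy(y_true, y_model))     # 0.91
print(accuracy(y_true, y_baseline))  # 0.91 -- identical, despite catching zero malignancies
```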

Accuracy alone doesn’t tell the full story when you’re working with a class-imbalanced data set, like this one, where there is a significant disparity between the number of positive and negative labels.

In the next section, we’ll look at two better metrics for evaluating class-imbalanced problems: precision and recall.

 
