Basic Scoring Methods of Machine Learning Classification Models
Machine learning model evaluation requires a standard set of "scores" that can be compared across models with different parameter sets and tell us how well a model is performing. Most of these scores can be calculated from the counts of prediction results on the training and test cases. The scores used in our article on training a machine learning SVM classification model to predict the next day's return as positive or negative are Accuracy, Precision, Recall, and the F1 Score. We will go through each score, explain how to interpret it, and describe the situations in which it is important.
Accuracy measures the proportion of correctly classified instances out of the total number of instances. It is calculated as the ratio of the number of correct predictions to the total number of predictions. Basically, out of all instances, how many did we get right? Accuracy is most informative when the classes in the dataset are balanced, meaning they have roughly equal representation (close to a 50/50 split). In certain fields, accuracy alone is not sufficient, because the costs of false positives or false negatives may matter more to the problem at hand than overall correctness.
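As a concrete illustration, accuracy can be computed in a few lines of plain Python. The labels below are made up for the example (1 = positive next-day return, 0 = negative), not results from the article's SVM model:

```python
def accuracy(y_true, y_pred):
    """Proportion of predictions that match the actual labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Hypothetical labels for ten trading days: 1 = positive return, 0 = negative.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(accuracy(y_true, y_pred))  # 8 of 10 predictions correct -> 0.8
```

Note that this balanced toy set (five positives, five negatives) is exactly the situation where accuracy is most trustworthy.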
Precision is a metric used to assess the accuracy of positive predictions made by a classification model. It measures the proportion of true positive predictions (correctly predicted positive instances) out of all predicted positive instances. Precision focuses on the model's ability to avoid false positives, making it particularly important in scenarios where false positives have significant consequences or where the cost of misclassifying a negative instance as positive is high (as in the medical field or fraud detection). High precision means the model is minimizing false positives, so when it predicts positive, that prediction is more likely to be correct.
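Continuing the same plain-Python sketch, precision counts true positives among everything the model flagged as positive (the example labels are hypothetical):

```python
def precision(y_true, y_pred):
    """True positives divided by all predicted positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical labels: the model flagged 5 days as positive, 4 correctly.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

print(precision(y_true, y_pred))  # 4 true positives / 5 predicted positives -> 0.8
```

The guard clause returns 0.0 when the model predicts no positives at all, a common edge case on skewed data.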
Recall, also known as sensitivity or the true positive rate, is a metric used to evaluate a classification model's ability to correctly identify positive instances. It measures the proportion of true positive predictions (correctly predicted positive instances) out of all actual positive instances. Recall is particularly important in situations where the cost of missing positive instances (false negatives) is high or when comprehensive detection of positive cases is critical. In the medical field, a doctor would not want to miss a diagnosis, and therefore recall would be extremely important (even more so than precision). Low recall may indicate that the model is having trouble identifying true positives. Recall and precision are often inversely related, meaning that improving one metric may come at the expense of the other. Increasing recall typically involves lowering the threshold for positive predictions, which may result in an increase in false positives. When to give more credence to one versus the other always depends on the situation. High recall is also important in areas like information retrieval (where it is important to surface all relevant results) and security screening (where you do not want to miss a threat).
In practice, there is often a trade-off between recall and precision, where one is higher at the expense of the other. To strike a balance between the two, the F1 Score was developed; it combines precision and recall into a single metric. It provides a balanced measure of a model's performance by taking into account both the ability to minimize false positives (precision) and the ability to minimize false negatives (recall). The F1 score is particularly useful when there is an uneven class distribution or when both precision and recall need to be considered simultaneously. Because it accounts for both metrics, it makes comparisons across classification models easier. When the underlying dataset is unbalanced (many instances of one class and few of the other), the F1 score is robust and more informative than the accuracy metric alone. Because of these characteristics, the F1 Score has become a benchmark score for model comparison.
Notice: Information contained herein is not and should not be construed as an offer, solicitation, or recommendation to buy or sell securities. The information has been obtained from sources we believe to be reliable; however, no guarantee is made or implied with respect to its accuracy, timeliness, or completeness. The author does not own any cryptocurrency discussed. The information and content are subject to change without notice. CryptoDataDownload and its affiliates do not provide investment, tax, legal, or accounting advice. This material has been prepared for informational purposes only, is the opinion of the author, and is not intended to provide, and should not be relied on for, investment, tax, legal, or accounting advice. You should consult your own investment, tax, legal, and accounting advisors before engaging in any transaction. All content published by CryptoDataDownload is not an