101 Concepts for the Level I Exam

Essential Concept 11: Model Training


Model training consists of three major tasks: method selection, performance evaluation, and model tuning.

Method selection: This decision is based on the following factors:

  • Whether the data project involves labeled data (supervised learning) or unlabeled data (unsupervised learning)
  • Type of data: numerical (continuous or categorical), text, image, speech, etc.
  • Size of the dataset

Performance evaluation: Commonly used techniques are:

  • Error analysis using a confusion matrix: A confusion matrix is created with four categories – true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).
  • The following metrics are used to evaluate a confusion matrix:
    Precision (P) = TP/(TP + FP)
    Recall (R) = TP/(TP + FN)
    Accuracy = (TP + TN)/(TP + FP + TN + FN)
    F1 score = (2 * P * R)/(P + R)

    The higher the accuracy and the F1 score, the better the model performance.

  • Receiver operating characteristic (ROC): ROC curves and the area under the curve (AUC) of candidate models are calculated and compared. An AUC close to 1 indicates a near-perfect model, whereas an AUC of 0.5 indicates random guessing. In other words, a more convex ROC curve indicates better model performance.
  • Calculating root mean squared error (RMSE): The RMSE is computed by taking the square root of the mean of the squared differences between the actual values and the model’s predicted values (the errors). The model with the smallest RMSE is the most accurate.
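The metrics above can be sketched in a few lines of Python. The counts and the actual/predicted values below are illustrative, not from any real model:

```python
# Hypothetical confusion-matrix counts for a classifier
TP, FP, TN, FN = 40, 10, 35, 15

precision = TP / (TP + FP)                    # how many predicted positives were correct
recall    = TP / (TP + FN)                    # how many actual positives were found
accuracy  = (TP + TN) / (TP + FP + TN + FN)   # share of all predictions that were correct
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R

# RMSE for a regression model (toy actual vs. predicted values)
actual    = [3.0, 5.0, 2.5, 7.0]
predicted = [2.8, 5.4, 2.0, 7.3]
rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5
```

Note that accuracy can be misleading on imbalanced data (e.g., when true negatives dominate), which is why precision, recall, and F1 are reported alongside it.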

Model tuning

  • Involves managing the trade-off between model bias error (which is associated with underfitting) and model variance error (which is associated with overfitting).
  • ‘Grid search’ is a method of systematically training an ML model with different combinations of hyperparameter values to determine which values lead to the best model performance.
  • A fitting curve of in-sample error and out-of-sample error on the y-axis versus model complexity on the x-axis is useful for managing the trade-off between bias and variance errors.
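A minimal grid-search sketch follows. The hyperparameter names and the stand-in scoring function are hypothetical; in practice the scoring step would train the model and measure its out-of-sample error on a validation set:

```python
from itertools import product

# Hypothetical hyperparameter grid (names and values are illustrative)
grid = {"max_depth": [2, 4, 6], "learning_rate": [0.01, 0.1]}

def validation_error(max_depth, learning_rate):
    # Stand-in for training the model and computing out-of-sample error;
    # this toy function is minimized at max_depth=4, learning_rate=0.1.
    return (max_depth - 4) ** 2 + (learning_rate - 0.1) ** 2

# Try every combination of hyperparameter values and keep the best one
best_params, best_error = None, float("inf")
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    err = validation_error(**params)
    if err < best_error:
        best_params, best_error = params, err

# best_params -> {'max_depth': 4, 'learning_rate': 0.1}
```

Because grid search evaluates every combination, its cost grows multiplicatively with the number of hyperparameters and candidate values, which is why the grid is usually kept coarse at first and refined around promising regions.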