Understanding the mathematical foundation of model optimization
Cost and loss functions are mathematical measures that quantify how far off our model's predictions are from the actual values. They guide the learning process by providing a single number that represents the model's performance.
**Mean Squared Error (MSE)**
Use Case: Regression problems
Characteristics: Penalizes large errors quadratically; differentiable everywhere
Pros: Smooth gradient, widely used
Cons: Sensitive to outliers
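A minimal NumPy sketch of MSE and its gradient (function names are illustrative, not from a specific library):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared residuals."""
    return np.mean((y_true - y_pred) ** 2)

def mse_gradient(y_true, y_pred):
    """Gradient of MSE w.r.t. predictions: (2/n) * (pred - true)."""
    return 2 * (y_pred - y_true) / len(y_true)

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])
print(mse(y_true, y_pred))  # (0.25 + 0 + 4) / 3 ≈ 1.4167
```

The squaring is what makes the penalty grow quickly for large residuals: the single error of 2 contributes sixteen times as much as the error of 0.5.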
**Mean Absolute Error (MAE)**
Use Case: Regression problems
Characteristics: Linear penalty for errors
Pros: Robust to outliers
Cons: Not differentiable at zero
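MAE can be sketched the same way; comparing it with a squared penalty on the same data shows the robustness the slide refers to (names are illustrative):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute residuals."""
    return np.mean(np.abs(y_true - y_pred))

# Same residuals as before: 0.5, 0, and 2 (an outlier-sized error).
y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])
print(mae(y_true, y_pred))  # (0.5 + 0 + 2) / 3 ≈ 0.8333
```

The outlier residual of 2 contributes only linearly here (2 vs. 4 under a squared penalty), which is why MAE is less dominated by extreme points. The kink of |r| at r = 0 is where the derivative is undefined; optimizers typically use a subgradient (the sign of the residual) there.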
**Cross-Entropy Loss**
Use Case: Classification problems
Characteristics: Measures the difference between predicted and true probability distributions
Pros: Good gradient properties, probabilistic interpretation
Cons: Can be numerically unstable with extreme probabilities
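A sketch of binary cross-entropy; the clipping step addresses the instability with extreme probabilities mentioned above (the `eps` value is an illustrative choice):

```python
import numpy as np

def binary_cross_entropy(y_true, p, eps=1e-12):
    """Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))."""
    # Clip predictions away from 0 and 1 so log() never receives 0 —
    # this guards against the numerical instability at extreme probabilities.
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
print(binary_cross_entropy(y_true, p))  # ≈ 0.1446
```

Confident, correct predictions (0.9 for class 1) incur a small loss, while a confident wrong prediction (say p = 0.01 for a true 1) would be penalized by -log(0.01) ≈ 4.6, which is what gives the loss its useful gradients.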
**Hinge Loss**
Use Case: Support Vector Machines
Characteristics: Linear loss for misclassified or low-margin examples
Pros: Sparse solutions, margin-based
Cons: Not differentiable at the margin boundary
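A sketch of the hinge loss with labels in {-1, +1} and raw (unsquashed) model scores:

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Hinge loss: mean(max(0, 1 - y * score)), y in {-1, +1}."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y = np.array([1.0, -1.0, 1.0])
scores = np.array([0.8, -2.0, 1.5])
# Margins y*score: 0.8, 2.0, 1.5 → losses: 0.2, 0, 0
print(hinge_loss(y, scores))  # ≈ 0.0667
```

Examples with margin y·score ≥ 1 contribute exactly zero loss (and zero gradient), which is the source of the sparse, margin-based solutions: only points on or inside the margin influence the model. The kink at margin = 1 is where the loss is not differentiable.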
| Loss Function | Problem Type | Sensitivity to Outliers | Differentiability | Computational Cost |
|---|---|---|---|---|
Mean Squared Error | Regression | High | Smooth everywhere | Low |
Mean Absolute Error | Regression | Low | Not at zero | Low |
Cross-Entropy | Classification | Medium | Smooth | Medium |
Hinge Loss | Classification (SVM) | Medium | Not at margin | Low |
Huber Loss | Regression | Medium | Smooth | Medium |
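The table's last row, Huber loss, blends the two regression losses above: quadratic for small residuals, linear beyond a threshold. A minimal sketch (the `delta` parameter is the standard transition point, here an illustrative default):

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, else delta*(|r| - 0.5*delta)."""
    r = y_true - y_pred
    quadratic = 0.5 * r ** 2                      # MSE-like near zero: smooth
    linear = delta * (np.abs(r) - 0.5 * delta)    # MAE-like tails: outlier-robust
    return np.mean(np.where(np.abs(r) <= delta, quadratic, linear))

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])
print(huber(y_true, y_pred))  # (0.125 + 0 + 1.5) / 3 ≈ 0.5417
```

The two branches meet with matching value and slope at |r| = delta, which is why Huber is smooth everywhere (as the table notes) while still capping the influence of outliers.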
© 2025 Machine Learning for Health Research Course | Prof. Gennady Roshchupkin