Machine Learning & Signals Learning
7 Learning Systems
7.1 Basic Workflow
The basic ML/DL workflow is presented in Fig. 7.1. The workflow parts are:
• Goal definition
• Data: available data
• Pre-processing: preliminary dataset exploration and validation of dataset integrity (e.g., the same physical units for all values of the same feature).
• Model: basic assumptions about the hidden pattern within the data
• Model training: minimization of the loss function to derive the most appropriate parameters.
• Hyper-parameter optimization: tuning the model sub-type.
• Performance assessment according to predefined metrics.
Baseline The basic end-to-end workflow implementation is called the baseline.
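Such a baseline can be sketched in a few lines. The snippet below uses a synthetic dataset and NumPy only, so every name and value is illustrative, not prescriptive:

```python
import numpy as np

# Data: a synthetic dataset standing in for real measurements
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=100)   # hidden pattern + noise

# Pre-processing: assemble a design matrix with a bias column
X = np.column_stack([np.ones_like(x), x])

# Model + training: least-squares fit of a linear model y = w0 + w1*x
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Performance assessment: mean squared error as the predefined metric
y_hat = X @ w
mse = float(np.mean((y - y_hat) ** 2))
```

Even this toy pipeline exercises every workflow part except hyper-parameter optimization, which a linear model of fixed form does not need.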
7.1.1 Goal definition
Typical related goals:
• Prediction or regression, \(\by \) is quantitative (Fig. 7.2a).
• Classification, \(\by \) is categorical (Fig. 7.2b).
• Clustering, no \(\by \) is provided; it is learned from the dataset (Fig. 7.2c).
• Semi-supervised learning, a combination of labeled and unlabeled data (Fig. 7.2d).
• Anomaly detection, somewhere between classification and clustering (Fig. 7.2e).
• Segmentation
• Simulation
• Signal processing tasks: noise removal, smoothing (filling missing values), event/condition detection.
Note that it is possible to have two or more goals for the same dataset.
7.1.2 Model
We assume that the underlying problem (e.g., regression or classification) has a formulation of the form
\(\seteqnumber{0}{}{0}\)\begin{equation} y = h(\bx ) + \epsilon \end{equation}
where \(h(\bx )\) is the true unknown function and \(\epsilon \) is some irreducible noise. Sometimes, zero-mean noise is assumed. The values of \(\bx \) (scalar or vector) and \(y\) are known; together they constitute the dataset.
The goal is to find a function \(f(\cdot ;\bw )\) that approximates \(h(\bx )\). The way of defining \(f(\cdot ;\bw )\) is termed the model, which depends on a vector of model parameters \(\bw \). The process of finding the parameters \(\bw \) is called learning, such that the resulting model provides predictions
\(\seteqnumber{0}{}{1}\)\begin{equation} \hat {y}_0 = f(\bx _0;\bw ) \end{equation}
for some new data \(\bx _0\).
Parameters vs hyper-parameters
There is a conceptual difference between parameters and hyper-parameters.
Model parameters: Model parameters are learned directly from a dataset.
Hyper-parameters: Model parameters that are not learned directly from a dataset are called hyper-parameters. They are learned in an indirect way, e.g., during the cross-validation process described in the following.
Hyper-parameter optimization Selecting the most appropriate hyper-parameter values is called hyper-parameter optimization.
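A minimal illustration of the distinction: in the sketch below, a k-NN regressor's "parameters" are the stored training points themselves, while the neighbour count k is a hyper-parameter chosen indirectly on a held-out validation split. All names, data, and candidate values are illustrative:

```python
import numpy as np

def knn_predict(x_train, y_train, x_query, k):
    """Plain k-NN regression: average the targets of the k nearest points."""
    d = np.abs(x_query[:, None] - x_train[None, :])   # pairwise distances (1-D inputs)
    idx = np.argsort(d, axis=1)[:, :k]                # k nearest neighbours per query
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=300)

# Train/validation split: hyper-parameters are tuned on held-out data,
# never on the training set itself
x_tr, y_tr = x[:200], y[:200]
x_val, y_val = x[200:], y[200:]

# Hyper-parameter optimization: pick k with the lowest validation MSE
candidates = [1, 3, 5, 10, 30, 100]
val_mse = {k: np.mean((knn_predict(x_tr, y_tr, x_val, k) - y_val) ** 2)
           for k in candidates}
best_k = min(val_mse, key=val_mse.get)
```

The very large candidate k=100 averages over half the training set and smooths the sine away, so the validation loss steers the choice toward a moderate k.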
Parametric vs non-parametric models
There are two main classes of models: parametric and non-parametric, summarized in Table 7.1.
| Aspect | Parametric | Non-parametric |
| Dependence of the number of parameters on dataset size | Fixed | Flexible |
| Interpretability | Yes | No |
| Underlying data assumptions | Yes | No |
| Risk | Underfitting due to rigid structure | Overfitting due to high flexibility |
| Dataset size | Smaller | Best for larger |
| Complexity | Often fast | Often complex |
| Examples | Linear regression | k-NN, trees |
The modern trend is to bridge the gap between interpretable and non-parametric modeling.
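The contrast in Table 7.1 can be seen on a toy dataset: a straight line (parametric, rigid) underfits a sine wave, while 1-NN (non-parametric) drives the training error to zero, hinting at its overfitting risk. The data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=100)

# Parametric: a straight line has 2 parameters regardless of dataset size.
# Its rigid structure underfits the sine wave.
w = np.polyfit(x, y, deg=1)
mse_linear = np.mean((np.polyval(w, x) - y) ** 2)

# Non-parametric: a 1-nearest-neighbour "model" memorises all 100 points.
# On the training set each point's nearest neighbour is itself, so the
# training error is exactly zero, which illustrates the overfitting risk.
pred_1nn = y.copy()
mse_1nn = np.mean((pred_1nn - y) ** 2)
```

The zero training error of 1-NN says nothing about its error on new data, which is why validation data (Sect. 7.1.2) is needed.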
7.1.3 Loss Function
A loss (or cost) function is a function that relates the dataset outputs \(\by \) to the model outputs \(\hat {\by }\). The parameters \(\bw \) are found at its minimum,
\(\seteqnumber{0}{}{2}\)\begin{equation} \hat {\bw } = \argmin _{\bw }\loss (\by ,\hat {\by }) \end{equation}
The minimization of the loss function is also termed training.
Loss Function Minimization
Closed-form solution A closed-form solution for \(\bw \) is a solution expressed in terms of basic mathematical operations. For example, the "normal equation" provides such a solution for linear regression/classification.
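As a sketch, the normal equation \(\hat {\bw } = (\bX ^T\bX )^{-1}\bX ^T\by \) for linear regression can be implemented directly; the data below are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])   # bias + one feature
y = X @ np.array([0.5, -2.0]) + 0.01 * rng.normal(size=50)   # known true w

# Normal equation: solve (X^T X) w = X^T y for w in closed form
w = np.linalg.solve(X.T @ X, X.T @ y)
```

Solving the linear system is preferred over forming the explicit inverse for numerical stability.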
Local-minimum gradient-based iterative algorithms This family of algorithms converges to a local minimum, which is guaranteed to be the global one only for convex (preferably strictly convex) loss functions. For example, gradient descent (GD) and its modifications (e.g., stochastic GD) are used to evaluate NN parameters. Another example is the Newton-Raphson algorithm.
• Some advanced algorithms in this category also employ (require) the second-order derivative \(\frac {\partial ^2 \loss }{\partial \bw ^2}\) for faster convergence.
• If either derivative is not available as a closed-form expression, it is evaluated numerically.
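A sketch of plain gradient descent on the MSE loss of a linear model, where the first derivative is available in closed form; the learning rate and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
y = X @ np.array([1.0, 3.0]) + 0.05 * rng.normal(size=100)   # known true w

# Gradient descent: w <- w - lr * dL/dw, starting from zeros
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2 / len(y) * X.T @ (X @ w - y)   # closed-form gradient of the MSE
    w -= lr * grad
```

For this convex quadratic loss the iterations approach the same solution as the normal equation; for non-convex losses (e.g., NN training) only a local minimum is reached.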
Global optimizers The goal of global optimizers is to find a global minimum of a non-convex function. These algorithms may be gradient-free, first-order, or second-order. Their complexity is significantly higher than that of local optimizers and can be prohibitive for more than a few hundred variables in \(\bw \).
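A crude gradient-free global strategy (dense random sampling followed by local refinement around the best sample) applied to a non-convex one-dimensional loss; the function and sample counts are illustrative:

```python
import numpy as np

# Non-convex loss with many local minima; w = 0 is the global minimum
def loss(w):
    return w**2 + 2.0 * np.sin(5.0 * w)**2

rng = np.random.default_rng(6)

# Stage 1: dense random sampling of the search interval
samples = rng.uniform(-5, 5, 10000)
w_best = samples[np.argmin(loss(samples))]

# Stage 2: local refinement around the best sample found so far
fine = w_best + rng.uniform(-0.01, 0.01, 1000)
w_best = fine[np.argmin(loss(fine))]
```

A purely local gradient method started at, say, w = 2 would be trapped in one of the shallow wells created by the sine term; the global search escapes them at the cost of many more loss evaluations, which is exactly the complexity penalty noted above.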
7.1.4 Metrics
Metrics are quantitative performance indicators of the model that relate \(\by \) to \(\hat {\by }\). Sometimes, the loss function being minimized also serves as a metric, e.g., mean squared error (MSE).
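Common regression metrics implemented directly from their definitions; the small arrays are illustrative:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: also usable as a loss function."""
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    """Mean absolute error: less sensitive to outliers than MSE."""
    return np.mean(np.abs(y - y_hat))

def r2(y, y_hat):
    """Coefficient of determination: 1 is a perfect fit, 0 is no
    better than predicting the mean of y."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
```

For classification goals, different metrics (accuracy, precision/recall) would take the place of these regression metrics.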
7.2 Project Steps
A widely adopted framework for ML/DL projects is the Cross-Industry Standard Process for Data Mining (CRISP-DM), which defines six iterative phases:
1. Business understanding – define the problem, success criteria, and project plan.
2. Data understanding – collect initial data and perform exploratory analysis.
3. Data preparation – clean, transform, and construct the final dataset.
4. Modeling – select and train candidate models.
5. Evaluation – assess model performance against business objectives.
6. Deployment – integrate the model into production.
The process is iterative: findings in any phase may require revisiting earlier phases.
• Understanding the problem: domain knowledge is essential.
  – Define performance metric and performance goal.
  – Evaluate human-level performance, if relevant.
• Data collection and preparation: typically the most time-consuming step.
  – Ensure representative distribution.
  – Identify outliers.
  – Compute basic statistics and create visualizations.
  – Verify sufficient dataset size.
• Model engineering:
  – Start with a standard, pre-implemented model.
  – Avoid overly complex models for the baseline.
• Baseline implementation:
  – Build an end-to-end pipeline: data \(\rightarrow \) model \(\rightarrow \) loss \(\rightarrow \) metrics.
  – Debug and verify a sufficient level of performance.
• Performance evaluation:
  – Identify overfitting and underfitting (bias/variance analysis).
  – Investigate errors and validate dataset integrity.
• Performance improvement:
  – Hyper-parameter optimization.
  – Iteratively refine: revisit the problem, add data, or improve the model until sufficient performance is achieved.
• Model deployment:
  – Deploy the model for the target application.
7.3 MLOps
ML projects differ from traditional software: outputs are probabilistic, development is experimental, and up to 80% of effort may go into data preparation. ML operations (MLOps) extends software development operations (DevOps) principles to address these challenges, bridging model development and production deployment (Fig. 7.3).
CI/CD Continuous Integration (CI) automatically builds and tests code changes upon each commit. Continuous Delivery (CD) extends CI by automating the release of validated changes to production. In MLOps, CI/CD pipelines additionally handle data validation, model training, and model deployment.
Core MLOps practices include:
• Experiment tracking: logging parameters, metrics, and artifacts for reproducibility.
• Data and model versioning: tracking dataset and model changes over time.
• Automated testing: data validation and model validation before deployment.
• ML pipeline automation: automating the training workflow via CI/CD pipelines.
• Model serving and inference: deploying trained models for real-time or batch predictions.
• Performance monitoring: tracking model accuracy, data drift, and concept drift in production.
Concept drift The data distribution may shift over time, a phenomenon known as concept drift. This is especially relevant for time-series models, where the underlying process may evolve. Monitoring for drift and triggering model retraining are essential for maintaining prediction quality.
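A minimal drift monitor can compare summary statistics of a recent production window against the training-time reference. The fixed threshold below is an illustrative choice; production systems typically use statistical tests instead:

```python
import numpy as np

rng = np.random.default_rng(7)

# Reference (training-time) sample of a feature
reference = rng.normal(0.0, 1.0, 1000)

# Production stream whose distribution shifts half-way through
stream = rng.normal(0.0, 1.0, 2000)
stream[1000:] += 1.5            # simulated concept drift

def drifted(window, ref, threshold=0.5):
    """Flag drift when the window mean strays from the reference mean."""
    return abs(window.mean() - ref.mean()) > threshold

early = stream[800:1000]        # window before the drift
late = stream[1800:2000]        # window after the drift
```

When the monitor fires on a recent window, the MLOps pipeline would trigger retraining on fresh data, closing the feedback loop described above.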
MLOps maturity is commonly described in three levels:
• Level 0 – Manual: all steps (data preparation, training, deployment) are performed manually.
• Level 1 – ML pipeline automation: model training and validation are automated via pipelines.
• Level 2 – CI/CD automation: automated testing, deployment, and monitoring close the feedback loop.