# The Lasso in High Dimensions, Part 2: Oracle Inequalities

*2026-05-11 · statistics, machine-learning, lasso*

Having introduced the Lasso estimator in Part 1, we now turn to its theoretical guarantees [1]. The central question is: how close is $\hat\beta_{\text{lasso}}$ to the true $\beta^*$?

## Prediction Error Bound

Under suitable conditions on the design matrix $X$, the Lasso satisfies

$$\frac{1}{n}\,\|X(\hat\beta_{\text{lasso}} - \beta^*)\|_2^2 \;\le\; C\,\sigma^2\,\frac{s \log p}{n}$$

with high probability, where $s = \|\beta^*\|_0$ is the sparsity level.

## The Role of $\lambda$

The regularization parameter is typically chosen as

$$\lambda \;\asymp\; \sigma\sqrt{\frac{\log p}{n}}.$$

This choice balances the bias-variance tradeoff: large enough to control the noise, small enough to avoid over-shrinking the true signal.

## Estimation Error

For the $\ell_2$ estimation error, we obtain

$$\|\hat\beta_{\text{lasso}} - \beta^*\|_2 \;\le\; C\,\sigma\sqrt{\frac{s \log p}{n}}.$$

This rate is minimax optimal up to logarithmic factors over the class of $s$-sparse vectors.

In the next part, we will discuss the restricted eigenvalue condition that makes these bounds possible.

## Bibliography

[1] P. Bühlmann and S. van de Geer, *Statistics for High-Dimensional Data: Methods, Theory and Applications*. Springer, 2011.
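As a numerical sanity check on the rates above, here is a minimal sketch (not code from this series: the coordinate-descent solver, the problem sizes, and the constant $2$ inside $\lambda = \sigma\sqrt{2\log p / n}$ are illustrative assumptions). It fits the Lasso on a synthetic $s$-sparse problem and compares the $\ell_2$ error to the theoretical rate $\sigma\sqrt{s\log p / n}$.

```python
import numpy as np


def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/(2n))||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # per-coordinate curvature X_j^T X_j / n
    resid = y - X @ beta                     # running residual y - X beta
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]       # remove coordinate j's contribution
            z = X[:, j] @ resid / n
            beta[j] = soft_threshold(z, lam) / col_sq[j]
            resid -= X[:, j] * beta[j]       # restore with the updated value
    return beta


rng = np.random.default_rng(0)
n, p, s, sigma = 200, 500, 5, 0.5           # illustrative sizes: n samples, p features
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0                         # s-sparse ground truth
y = X @ beta_star + sigma * rng.standard_normal(n)

lam = sigma * np.sqrt(2 * np.log(p) / n)    # lambda of order sigma * sqrt(log p / n)
beta_hat = lasso_cd(X, y, lam)

err = np.linalg.norm(beta_hat - beta_star)
rate = sigma * np.sqrt(s * np.log(p) / n)
print(f"l2 estimation error: {err:.3f}")
print(f"rate sigma*sqrt(s log p / n): {rate:.3f}")
```

The error should come out within a modest constant factor of the rate, and far below the trivial baseline $\|\beta^*\|_2 = \sqrt{s}$ achieved by estimating zero.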