High-dimensional variable selection must control false discoveries — variables incorrectly identified as important. The standard target is the false discovery rate (FDR): the expected proportion of false discoveries among all discoveries. But FDR control is an average guarantee. In any given analysis, the realized false discovery proportion (FDP) can land far above the target.
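To make the gap concrete, here is a small simulation sketch (all parameters illustrative, not from the paper): the Benjamini-Hochberg procedure holds the average FDP near its target, yet the realized FDP of an individual run frequently lands above it.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, m1, mu, q = 1000, 50, 3.0, 0.1  # hypotheses, signals, effect size, FDR target

def benjamini_hochberg(pvals, q):
    """BH step-up: reject the j smallest p-values, where j is the largest
    index with p_(j) <= j * q / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    j = int(np.nonzero(passed)[0].max()) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:j]] = True
    return reject

fdps = []
for _ in range(2000):
    is_signal = np.arange(m) < m1
    z = rng.standard_normal(m) + mu * is_signal
    reject = benjamini_hochberg(2 * norm.sf(np.abs(z)), q)  # two-sided p-values
    v = np.sum(reject & ~is_signal)                          # false discoveries
    fdps.append(v / max(reject.sum(), 1))

fdps = np.array(fdps)
print(f"mean FDP (approx. FDR): {fdps.mean():.3f}")  # near q * (m - m1) / m
print(f"share of runs with FDP > q: {(fdps > q).mean():.3f}")
```

The first number is what FDR control promises; the second is the failure mode it says nothing about.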
Larsson, Klopfenstein, and Bogdan embed Lehmann-Romano stepdown rules into the SLOPE regularization framework, producing finite-sample guarantees for stronger error metrics. k-FWER control bounds the probability of making k or more false discoveries. FDP-threshold control bounds the probability that the FDP exceeds a specified level. Both are tail guarantees: they constrain the distribution of the error, not just its average.
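For reference, the Lehmann-Romano stepdown rule itself is short. A minimal sketch of the k-FWER version (the function name and interface are mine, not the paper's); setting k = 1 recovers Holm's procedure.

```python
import numpy as np

def lr_kfwer_stepdown(pvals, k=1, alpha=0.05):
    """Lehmann-Romano stepdown for k-FWER control: walk down the sorted
    p-values, rejecting as long as p_(i) <= alpha_i, where
        alpha_i = k * alpha / (m + k - max(i, k)).
    Returns a boolean rejection mask. The FDP-threshold variant replaces k
    in these constants with floor(gamma * i) + 1."""
    m = len(pvals)
    i = np.arange(1, m + 1)
    crit = k * alpha / (m + k - np.maximum(i, k))
    order = np.argsort(pvals)
    exceed = pvals[order] > crit                 # first failure ends rejection
    n_reject = m if not exceed.any() else int(np.argmax(exceed))
    reject = np.zeros(m, dtype=bool)
    reject[order[:n_reject]] = True
    return reject
```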
The construction yields closed-form regularization sequences for orthogonal designs. Each penalty weight corresponds to a specific significance threshold in the stepdown hierarchy, making the connection between penalized regression and multiple testing explicit rather than asymptotic. The grouped extensions handle covariates that come in natural blocks — genes in pathways, pixels in regions — where the relevant unit of selection is the group, not the individual variable.
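The published SLOPE sequence for orthogonal designs maps BH significance thresholds through the Gaussian quantile function, λ_i = σ·Φ⁻¹(1 − iq/(2m)) (Bogdan et al., 2015). A plausible reading of the construction here, sketched below, applies the same map to the Lehmann-Romano constants instead; the second function is my conjecture for illustration, not necessarily the paper's exact sequence.

```python
import numpy as np
from scipy.stats import norm

def slope_lambda_bh(m, q, sigma=1.0):
    """Classic BH-based SLOPE sequence: lambda_i = sigma * Phi^{-1}(1 - i*q/(2m))."""
    i = np.arange(1, m + 1)
    return sigma * norm.ppf(1 - i * q / (2 * m))

def slope_lambda_lr(m, k, alpha, sigma=1.0):
    """Hypothetical k-FWER analogue: push the Lehmann-Romano critical
    constants alpha_i through the same quantile map. Illustrative only."""
    i = np.arange(1, m + 1)
    alpha_i = k * alpha / (m + k - np.maximum(i, k))
    return sigma * norm.ppf(1 - alpha_i / 2)
```

Both sequences are non-increasing, as SLOPE requires; they differ only in which testing procedure supplies the thresholds.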
The through-claim is about the granularity of error control. FDR says: on average, your discovery list is 95% accurate. k-FWER says: with 95% probability, fewer than k discoveries are wrong. The FDP threshold says: with 95% probability, at most 5% of your discoveries are wrong. These are different promises about different failure modes; the tail bounds constrain the distribution of the false discovery proportion, while FDR constrains only its mean. Yet the method that achieves the stronger guarantee uses the same optimization framework; only the penalty sequence changes. The error metric shapes the regularization, not the algorithm.
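One way to see "only the penalty sequence changes" concretely: the SLOPE proximal operator, the workhorse inside any proximal-gradient solver, never looks at where its λ sequence came from. Below is a minimal stack-based sketch of the sorted-L1 prox, following the standard pool-adjacent-violators construction (function name mine; not the paper's reference implementation).

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """argmin_b 0.5*||b - y||^2 + sum_i lam_i * |b|_(i),
    where |b|_(1) >= ... >= |b|_(m) and lam is non-increasing, non-negative."""
    sign = np.sign(y)
    a = np.abs(y)
    order = np.argsort(a)[::-1]       # sort magnitudes in decreasing order
    z = a[order] - lam
    blocks = []                        # (mean, length) pairs kept non-increasing
    for v in z:
        blocks.append((float(v), 1))
        while len(blocks) > 1 and blocks[-2][0] <= blocks[-1][0]:
            v2, n2 = blocks.pop()      # pool adjacent violators
            v1, n1 = blocks.pop()
            blocks.append(((n1 * v1 + n2 * v2) / (n1 + n2), n1 + n2))
    x = np.concatenate([np.full(n, max(v, 0.0)) for v, n in blocks])
    out = np.zeros_like(y, dtype=float)
    out[order] = x                     # undo the sort, restore signs
    return sign * out
```

Handing this operator the BH-based sequence gives classic SLOPE; handing it an LR-based sequence targets the tail guarantee instead. The solver loop around it is identical either way.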