" two other time-based power intervals of

max_critical_power_5s and max_critical_power_30m were. This is a significant finding

as it allows future researcher to dive deeper into these two variables to understand why

they are more predictive than 20-min, which is so widely used. "

a 30min effort is closer to ftp than a 20min effort?? color me shocked lol

I haven't actually read the article yet, but I'm betting that the correlation with 5 s power is because they didn't adjust for body mass. Bigger people = higher absolute power across the board.

The fact that their "machine learning" algorithm was nothing more than multiple regression suggests there's really nothing to see here. The innovation is they got a bigger than usual dataset by downloading GoldenCheetah data (which is apparently open access now?)

Machine Learning != SPSS

I feel a bit mean writing this, as it's easy to get discouraged as a novice, but slapping "Machine Learning" into a title does a disservice to the arena.

The sample sizes mentioned (n=28 and n=40) are just too small for anything other than linear regression to be used, and the uncertainties and confidence intervals will be massive. Where is the hold-out test set? Where are the bootstrap confidence intervals?
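For anyone unfamiliar with the bootstrap, here's a quick sketch of what that check would look like at n=28. The data below is simulated (the slope, power range, and noise level are made up, not from the paper); the point is the procedure: resample riders with replacement, refit, and read the spread of the refitted coefficients.

```python
import random
import statistics

random.seed(0)

def ols(xs, ys):
    """Ordinary least squares slope and intercept for a single predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Simulated cohort: n=28 riders, FTP roughly linear in peak power plus noise
# (all numbers here are invented for illustration)
n = 28
peak_power = [random.uniform(600, 1200) for _ in range(n)]
ftp = [0.3 * p + random.gauss(0, 30) for p in peak_power]

# Bootstrap: resample rider indices with replacement and refit each time
slopes = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    s, _ = ols([peak_power[i] for i in idx], [ftp[i] for i in idx])
    slopes.append(s)

# Percentile 95% confidence interval for the slope
slopes.sort()
lo, hi = slopes[round(0.025 * 2000)], slopes[round(0.975 * 2000)]
print(f"95% bootstrap CI for slope: [{lo:.3f}, {hi:.3f}]")
```

With only 28 points the interval from a run like this is the honest thing to report alongside any headline coefficient.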

Given their final equation, something like `FTP = max_power * 0.865 - 56`, and the fact that FTP is only ever used as a reference afterwards, why not just centre everything off `max_power` in the first place and discard FTP altogether?
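To spell out that point: the equation is a fixed affine map, so anything downstream that is defined as a fraction of FTP (zones, sweet spot, etc.) can be written directly in terms of `max_power`. A toy sketch (the 0.865/-56 coefficients are the ones quoted above; the 90% sweet-spot factor is just a common convention, not from the paper):

```python
def ftp_from_max_power(max_power_watts: float) -> float:
    # Affine equation as quoted in the thread (coefficients assumed)
    return max_power_watts * 0.865 - 56

def sweet_spot_from_max_power(max_power_watts: float) -> float:
    # "Sweet spot" is conventionally ~90% of FTP; composing the two maps
    # shows FTP is just an intermediate constant that could be skipped
    return 0.90 * ftp_from_max_power(max_power_watts)

print(ftp_from_max_power(300))         # 300 W max power -> 203.5 W FTP
print(sweet_spot_from_max_power(300))  # -> 183.15 W
```

Since the composition is still affine, nothing is gained by routing through FTP except familiarity.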

From a statistical perspective, unless variance is constant and not proportional to power, they should be studying log power in order to deal with heteroscedasticity.
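The log-transform point can be illustrated in a few lines: if the noise is multiplicative (error grows with power), fitting on the log scale turns it into additive noise with constant variance, which is what OLS assumes. Simulated data again, with invented numbers:

```python
import math
import random
import statistics

random.seed(1)

def ols(xs, ys):
    """OLS slope and intercept for a single predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Multiplicative-noise model: spread of FTP grows with power
# (heteroscedastic on the raw scale, homoscedastic on the log scale)
n = 40
peak = [random.uniform(600, 1200) for _ in range(n)]
ftp = [0.3 * p * math.exp(random.gauss(0, 0.08)) for p in peak]

# Fit log(ftp) = a + b * log(peak); multiplicative noise is now additive
b, a = ols([math.log(p) for p in peak], [math.log(f) for f in ftp])
print(f"log-scale fit: ftp ~ {math.exp(a):.3f} * peak^{b:.3f}")
```

On data generated this way the exponent comes back close to 1, i.e. the log fit recovers the proportional relationship without the variance blow-up at high power biasing the residual checks.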

It would be far more interesting to get hold of the Trainer Road dataset.