Hi, bit of a general testing protocol viability question.
Background: I’ve always ridden with the PM controlling the trainer. I recently purchased a new trainer with a higher claimed accuracy (±0.5%) than the PM and have started using the trainer as the device to measure and control power. I have since had two rides on the new trainer, with the crank-based PM showing figures ~10% below the trainer’s values. I thought that with drivetrain loss the trainer would/should report lower?
The idea would be to ride two 20-minute programs (each comprising 4x4-minute blocks at stepped percentages of my FTP plus two 20-second sprints) back to back in erg mode on the trainer, using two different PM ‘cranks’ (Quarq and Power2max). I would need to swap bikes between the two rides to test on the different power meters. I could also test with a one-sided power meter (4iiii).
I was wondering whether this would be sufficient to test the trainer. If both crank-based PMs are aligned, would that establish the accuracy of the trainer, or would not having results from exactly the same ride invalidate the comparison? I am looking for 2 out of 3 of the devices to somewhat agree.
Thanks @dcrainmaker! I guess I was a little worried about not being able to test all the PMs at exactly the same time, but I’ll follow the excellent advice in the two posts and live with the limitations of needing to perform the tests at close but not identical times.
Oh, and I’ve subscribed to the analyzer tool - it is very nice to use.
I just put my bike on the trainer. The bike had a Quarq recording to a Garmin, and the Hammer H2 recorded to TrainerRoad. I then compared my data. They were both within the margin of error. Made me happy to know that my outdoor bike matched my indoor trainer.
I completed 3 back-to-back tests of the same ride today and used the DC Rainmaker analyzer (aligned using heart rate, as per GPLama’s method). For each ride the trainer recorded the same average power (209.9 W) and weighted power (231.77 W). I used a Garmin 520 to capture the data from the non-trainer PM and HR; TrainerRoad was used to capture the data from the trainer and HR. The difference between the trainer and the 3 power meters for each ride (in its entirety) was as follows:
Avg. Power
PM1 (crank) = -1.58% compared to trainer
PM2 (crank) = -1.78% compared to trainer
PM3 (lhs crank arm) = +2.50% compared to trainer
Weighted Power
PM1 (crank) = -1.27% compared to trainer
PM2 (crank) = -1.94% compared to trainer
PM3 (lhs crank arm) = +3.83% compared to trainer
Ideally it would have been nice to test all the devices at the same time, or to have another PM (say, a set of PM pedals) to help establish the accuracy of the measurements. I also only calibrated each power meter at the start of each ride. The PM3 results (reading above the trainer) were what I was expecting to see for all the PMs, due to drivetrain power loss; having said that, during the highest interval set it was approaching a +5% difference.
Overall, I am happy with the results for the trainer, PM1 and PM2 (slight caveat on PM3 in comparison though).
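For anyone wanting to sanity-check the numbers, the percentages above are just signed differences relative to the trainer’s average. A minimal sketch (the 209.9 W figure is from my rides; the helper name and the 206.58 W example input are my own):

```python
# Signed percent difference of a PM's average power vs the trainer's.
trainer_avg = 209.9  # W, trainer-reported average from the rides above

def pct_diff(pm_avg, ref_avg):
    """Positive = PM reads above the reference, negative = below."""
    return (pm_avg - ref_avg) / ref_avg * 100.0

# Example: a PM averaging 206.58 W against the trainer's 209.9 W.
print(f"{pct_diff(206.58, trainer_avg):+.2f}%")  # -1.58%
```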
My $0.02 is that Average and Weighted Power analysis don’t really tell much of a story when it comes to comparing power meters.
Theoretically (and taking it to the extreme) you could have one PM measure 100W for half the ride and 300W for the other half, while another PM measures a steady 200W, and they’d both have the same average power of 200W.
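To make that concrete, here’s a quick simulation (my own sketch, assuming 1 Hz samples and the common 30-second rolling, fourth-power weighting used for normalized/weighted power):

```python
# Two rides with identical average power but very different weighted
# (normalized-style) power: 30 s rolling mean, 4th power, mean, 4th root.

def weighted_power(watts):
    window = 30
    rolled = [sum(watts[i:i + window]) / window
              for i in range(len(watts) - window + 1)]
    return (sum(p ** 4 for p in rolled) / len(rolled)) ** 0.25

steady = [200.0] * 1200                  # 20 min at a constant 200 W
split = [100.0] * 600 + [300.0] * 600    # half at 100 W, half at 300 W

avg = lambda w: sum(w) / len(w)
print(avg(steady), avg(split))   # both 200.0
print(weighted_power(steady))    # ≈ 200 W
print(weighted_power(split))     # ~252 W, despite the identical average
```

So two power sources could match perfectly on average power while disagreeing badly second to second, which is why the shape of the power trace matters more than the summary numbers.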
I would suggest taking a closer look at DCR’s or GPLama’s testing analysis and having another shot at the analysis.
Try using one of the workouts DCR uses, like “Jon’s Mix” (Trainer Day - Workout: Jon's Short Mix - Zwift), which mixes up some hard, light, fast and steady power outputs.
I know you mentioned using your own test workout but I really like using the same as DCR so I can learn from his analysis techniques on a similar-shaped dataset.
When comparing, look for things like drift between PMs, differences in high power peaks, lag/delay and general variance at constant power.
This will allow you to make a better comparison between results.
What you actually end up doing with those results though… that’s the hard part and likely why most people pick one power source and power match trainers to that!
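Of those checks, drift is the easiest one to automate once the two recordings are time-aligned. A minimal sketch with synthetic data (the helper name and the 0.005 W/s drift rate are made up for illustration; real data would come from the two head units):

```python
# Sketch of a drift check between two time-aligned 1 Hz power streams.

def segment_offsets(pm_a, pm_b, segment=300):
    """Mean (pm_a - pm_b) per 5-minute segment.
    A steadily growing offset across segments suggests drift."""
    offsets = []
    for start in range(0, len(pm_a) - segment + 1, segment):
        a = pm_a[start:start + segment]
        b = pm_b[start:start + segment]
        offsets.append(sum(x - y for x, y in zip(a, b)) / segment)
    return offsets

# Synthetic 30-minute ride: PM B slowly drifts 0.005 W/s below PM A.
pm_a = [200.0] * 1800
pm_b = [200.0 - 0.005 * t for t in range(1800)]
print(segment_offsets(pm_a, pm_b))  # offsets grow segment by segment
```

A stable offset between segments points to a calibration difference; a growing one points to drift (often temperature-related).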
Great post, thanks @paul_rides! Ha, yes, one PM matched to the trainer to rule them all would be great.
I can heartily agree that my testing protocol (and analysis) is amateurish. The workout was partly chosen so that I could repeat it 3 times in quick succession. I can see a number of the items you mentioned reflected in the power graph. I think (hope, perhaps) it may have served its purpose: to get a semblance of a baseline for each PM’s performance.
I endeavoured to follow GPLama’s method for the analysis, but it is certainly not backed by the benefit of his experience.
Based on the ‘testing’, I’d adjust the PM that is reading above the trainer to be more in line with the other two PMs (both reading low) - I’d rather each PM read a tiny bit under the trainer (my main training PM).