The striking thing, having searched this topic, is that the term “Fourier” yields nothing across 362 posts…
I just cannot believe that, with 1 ms sampling of the H10 data, it would be particularly hard to transform it into the frequency domain and characterise a heart according to the distribution of frequencies present.
It seems like almost none of the heart rate straps give very good technical specifications; by that I mean the sampling rate of the strap and other metrics that show what data it generates:
If you Google “Polar H10 validation”, lots of good info comes up.
Polar’s H10 Whitepaper:
A validation study:
I’ve had good experience with the H7, H10, and Garmin Premium HRM (pre-running dynamics). I’ve collected lots of data and compared it in Kubios, ARTiiFact, and Golden Cheetah. Connection over Bluetooth is better, as demonstrated by Marco Altini.
This person has done some good work, too:
Personally, HRV is boring to me. Day to day variations don’t really tell you anything that you can’t feel already, IME. And dfa-a1 was cool for a couple of weeks, but completely unnecessary if you can feel your body. It seems a majority never read the actual study and are just shooting for 0.75 and calling it VT1. But I digress.
I think more data doesn’t really mean a better end product with AT or ML.
To counter what you said about HRV being boring: I was ill last Sunday and Monday and had vaguely recovered by Tuesday. I got back to riding very easily and used the Fatmaxxer app to monitor DFA a1, and despite feeling relatively normal, the power at which I stayed above 0.75 was massively reduced, suggesting to me that although I felt OK, my autonomic system was still depressed.
I’ve continued with easy rides and seen the power at around 0.75 steadily increase back to normal. For me this tool is invaluable if you’re determined to do an easy ride.
Oh, and regarding TR implementing HRV etc.: as has already been said, you really only get good results with the Polar H9 or H10, as these are the straps used in the scientific validation (well, the H10 is, but I believe it only differs from the H9 in being dual-channel for BT). Not having those straps would rule out a lot of TR users, I’d imagine.
Looks to me like the sampling interval of the H10 is 1 ms, and that it reports the ECG reading, not the RR interval.
The way to really work out what’s possible is to get an H10 and then open up the data with the Polar SDK.
The Fourier analysis is really simple. I would imagine it would be possible to break a training session into 5-minute chunks and then generate Fourier-related stats on each chunk.
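As a rough sketch of what that chunking could look like (the function and stat names here are my own, and it assumes the 130 Hz ECG stream that Polar documents for the H10):

```python
import numpy as np

FS = 130        # H10 ECG sample rate in Hz (per Polar's spec)
CHUNK_S = 300   # 5-minute chunks

def chunk_spectra(ecg, fs=FS, chunk_s=CHUNK_S):
    """Split an ECG trace into fixed-length chunks and return simple
    frequency-domain summary stats for each chunk."""
    n = fs * chunk_s
    stats = []
    for start in range(0, len(ecg) - n + 1, n):
        chunk = np.asarray(ecg[start:start + n], dtype=float)
        chunk -= chunk.mean()                       # remove DC offset
        spectrum = np.abs(np.fft.rfft(chunk)) ** 2  # power spectrum
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # Peak frequency, skipping the 0 Hz bin
        dominant = freqs[np.argmax(spectrum[1:]) + 1]
        # Spectral centroid: power-weighted mean frequency
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
        stats.append({"dominant_hz": dominant, "centroid_hz": centroid})
    return stats
```

With 5-minute chunks at 130 Hz, each FFT has a frequency resolution of about 0.003 Hz, so even slow modulations of the signal are resolvable.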
Particularly interesting, though, would be analysis of data recorded at rest, which is outside TrainerRoad’s mandate.
ECG data is read 130 times per second, so the sampling interval is about 7.7 ms.
Lots of analyses are available for HRV. RHRV is free and does more than you can imagine with HRV data. I just read a lot of Altini’s (and others’) blogs and go from there. They know more than I ever will about HRV.
I get what you’re saying, but even for recovering I’d trust RPE over dfa-a1.
dfa-a1 is a tool to get you into the ballpark, I guess. I only look at it post-workout anymore and if it makes sense, great, but if it doesn’t, no biggie. It just told me something I already knew from trial and error and practice.
I have found looking at DFA-a1 quite helpful, as it showed that my ‘talk test’ estimate of VT1 was likely quite low. I have since increased the power level of my easy workouts, which I think has helped my training, as they were previously borderline too easy.
I think it really is very early days for this, but I think it can already be very helpful.
I think your comment regarding 0.75 not being a well-established breakpoint is spot on. Until we have good evidence about what the breakpoint really is (or, more likely, a range), it is way, way too early for TR to display this during a workout. If I had to guess, it will be 5+ years before the evidence comes close to justifying a display in something like TR. I do think there is enough evidence to justify collecting the info, as that can be done completely transparently to the user, and they will want lots of data to analyze before they actually do anything with it.
Despite the application of 0.75 as a cut-off value for survival curves and mortality rate assessment according to resting conditions in clinical settings (Huikuri et al., 2000), an exact value of DFA-alpha1 for passing a low intensity threshold cannot be determined at present. However, it can be stated that values of 0.5 are in any case too low (Gronwald et al., 2019a).
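For anyone wondering what DFA-alpha1 actually computes, here is a minimal sketch in Python (my own illustration, not the Fatmaxxer or Kubios implementation): integrate the mean-centred RR series, detrend it in boxes of 4–16 beats (the conventional short-term range), and take the slope of log fluctuation versus log box size.

```python
import numpy as np

def dfa_alpha1(rr_ms, scales=range(4, 17)):
    """Short-term detrended fluctuation analysis exponent (alpha-1).

    rr_ms: sequence of RR intervals in milliseconds.
    Uses box sizes of 4..16 beats, the usual range for alpha-1.
    """
    rr = np.asarray(rr_ms, dtype=float)
    # Integrate the mean-centred series
    y = np.cumsum(rr - rr.mean())
    fluctuations = []
    for n in scales:
        n_boxes = len(y) // n
        if n_boxes < 2:
            continue
        boxes = y[: n_boxes * n].reshape(n_boxes, n)
        x = np.arange(n)
        # Remove a least-squares linear trend from each box,
        # keep the mean squared residual
        resid_sq = []
        for box in boxes:
            coeffs = np.polyfit(x, box, 1)
            resid_sq.append(np.mean((box - np.polyval(coeffs, x)) ** 2))
        fluctuations.append((n, np.sqrt(np.mean(resid_sq))))
    ns, f_ns = zip(*fluctuations)
    # alpha-1 is the slope of log F(n) versus log n
    slope, _ = np.polyfit(np.log(ns), np.log(f_ns), 1)
    return slope
```

Uncorrelated (white-noise) RR data gives alpha-1 near 0.5, while strongly correlated data pushes it above 1.0, which is why falling values during exercise are read as rising intensity.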
There are people who can go by how they feel. I’d argue most people don’t really know how they feel. That’s the whole point of having TR tell us what to do. The advantage of this metric is that it works off data you’re already collecting. Sure, there is effort required from the developers to implement it, but no change is needed on the end user’s part.
The point is how we should move forward and be platform agnostic. Yes, Polar is great, but it’s not really great business practice to tell people to use Polar. Notice how TR tries to treat all trainers equally? And looking towards the future, it would be nice to have more information about an HR strap so we know how good the data it sends is. Power meters and trainers state an accuracy of ±x%. Why shouldn’t heart rate straps state their sampling rate? It would be great if there were some standard certifying that a strap gives good HRV data, but that would be complicated to do.
First, there are some other straps like Garmin that produce good HRV data over BLE.
Second, the point is not to have a function that only cares about existing hardware. If TR implements this functionality, it would be a generic BLE implementation that could work with any strap. The software (especially on desktop, which has lots of spare CPU time) can look at that data and see whether the strap you use gives good data or not (look at the artifacts). They can communicate this privately to the different heart rate strap vendors. If the vendors don’t care to create hardware that gives good data, TR could tell users directly which HR straps they recommend.
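As a sketch of the kind of quality check meant here: a common rule of thumb flags an RR interval as an artifact when it jumps more than about 20% from the previous beat. The threshold and function name below are my own illustrative choices, not any vendor's algorithm.

```python
import numpy as np

def artifact_fraction(rr_ms, threshold=0.20):
    """Fraction of beats whose RR interval changes by more than
    `threshold` (relative to the previous beat).

    A rough proxy for strap signal quality: clean chest-strap data
    should score near 0, while dropped or doubled beats show up as
    large successive jumps.
    """
    rr = np.asarray(rr_ms, dtype=float)
    rel_change = np.abs(np.diff(rr)) / rr[:-1]
    return float(np.mean(rel_change > threshold))
```

A single dropped beat appears as one interval roughly twice its neighbours, producing two large successive jumps, so even sparse dropouts register clearly in this fraction.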
This is a long-term project with lots of waiting. It’s not like BLE or HRV is going to go away, and the data quality from different heart rate straps should only increase over time. (If you notice, sampling frequency and how the strap interprets the signal seem to be the main limits; as the CPUs on HR straps get faster, the signal interpretation should get better.)
If TR is able to be one of the first to implement this, it could be a big competitive advantage to have a data set no one else has (alpha-1 data). And if it isn’t displayed to the user or influencing the workouts in the beginning, it could be easy to implement (if the calculations are buggy, no one will notice, and they can iterate through different versions).
I get the curiosity of many users who want to check this stuff out, but it’s not TR’s responsibility to provide an implementation. It would likely provide more headaches than benefit.
Well, having tried this with a Tickr (and a Tickr 2), all you get is worthless data, with so many artifacts that you don’t get any valid readings from it. I bought an H10 even though the Tickr works fine for plain HR. Based on other reports, this is a common problem with Tickrs. The state of HRMs with regard to accurate HRV readings is not at a place where TR could enable this and have it ‘just work’ for most people. There is no way they are going to open this customer-support can of worms based on the current state of the evidence. What number should people aim for?
To show the number to the user, maybe, but to do the work of collecting the data? Why not implement it? This isn’t a customer support issue; users won’t know to complain. TR can identify, based on artifacts, the straps that give good data and the ones that give bad data. They will need this data to train their machine learning before they can do anything with it.
Garmin straps have been known to give good data over BLE.
Well, if I’d done that earlier this week, I’d have been at risk of overdoing it and possibly causing damage, who knows. I felt relatively normal but certainly wasn’t. I can’t think of many better ways of ensuring you do a ride truly in the easy zone than using a metric over which you have zero control and which requires no calibration.
And this is precisely the point regarding which strap you use. It’s such a time-sensitive metric that artifacts can skew the data enormously. As someone else has already said, this is a really complicated subject; people like Bruce Morgan and Marco Altini are the ones to follow on this.
Once you are using a Python script to pull the data, why would you need to stick it into any third-party app? I imagine this is something we could do as a community. One standard HRV metric is the root mean square of successive differences (RMSSD).
I guess the formula is something like:
RMSSD = mean(diff(RR)**2)**0.5
where RR is simply a vector of the time intervals between beats. This looks like free money to me; I cannot see how companies can build a business model around charging for this level of calculation. It also throws away masses of the ECG information. There is quite likely valuable information within the ECG readings between beats. This signal can be broken into constituent frequencies. The dominant frequency is the one the RR is calculated from (i.e. 150 bpm, or whatever the heart rate is), but there will be smaller-amplitude frequencies present, as can be seen in the heart trace in that Medium article. This is also valuable information, and it is just being thrown away, since most heart rate monitors only try to measure the peak-to-peak interval time. The H10, by contrast, appears to record the raw voltages at, say, 1 ms timestamp resolution. Tying this all up with power data from a power meter would be interesting. This could be framed as self-supervised learning, as there is no need to hand-label a target for the ML model.
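In Python with NumPy, that RMSSD calculation is only a couple of lines (a sketch, assuming RR intervals in milliseconds):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    # Square the beat-to-beat differences, average, take the root
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))
```

For example, the series [800, 810, 790, 800] has successive differences of 10, -20, and 10 ms, giving an RMSSD of sqrt(200), about 14.1 ms.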
I get what you’re saying, but it’s like you’re reinventing the wheel. All of the tools to do this are already available. There is no need to bring in machine learning, or whatever, for hrv. Really, all this hrv data is totally personal. Why create some new 220-age for hrv?
I use Golden Cheetah mainly and the tools for RR analysis are there. There is even a chart based on Altini’s Python script for dfa-a1. I collect RR intervals and rarely look at it unless I felt “off” during a workout and I’d go check retroactively.
Anyways, if you want some pointers on how to easily manage hrv data from a fit file, I got a few. I can show you plenty of community tools already available to do what you ask.
So the idea would be that you train a model to identify patterns in your ECG data from, for example, ramp tests. Your X would be something like summary stats of the Fourier transform of the ECG, and your y would be the power readings from your ramp test. From there you have a model which, for a given X, predicts y, and a residual (y_actual - y_predicted). This would then tell you, for example, whether you are on form or not. A pretty obvious implementation, given that we now have both the high-quality H10 data and high-frequency readings from things like the Rotor InPower.