Should you take your mountain bike with aero bars to a gravel race? I tested the aerodynamics of it!

The animation is still rendering, but I was pretty excited so I wanted to post about it anyway (the animation shows the videos, the derived point models, and the “watertight” polygonal models suitable for testing).

My twin brother, who is also my frontend guy, is going through some rough stuff… specifically… he REALLY wants to bring his dual-suspension Cervelo ZFS-5 XO with aero bars to a gravel race. He can’t decide between a gravel bike, a gravel bike with aero bars, or that MTB; as a comparison I even tested my hardtail with him on it, and now aero bars as well.

So, for the curious out there, here’s how things shake out if you’re facing a similar conundrum and want to use a service like our https://wind-tunnel.ai to make similar comparisons:

XC dual sus MTB with aero bars:

Gravel bike with no aero bars:

(my) Hardtail no aero bars:

Gravel bike WITH aero bars:

I hope you enjoyed the tests. I’m working on building a cool Insta reel out of the finished animation and tests; I’ll likely post that to the project’s Instagram when complete: WindTunnel AI (@windtunnel_ai).

In any case, fun stuff, would love to hear your thoughts.

4 Likes

I’m so glad somebody else did the math… scripting and coding and 3D modeling is one thing, but actually opening a calculator app? You sir, are appreciated.

1 Like

This looks very impressive, especially the degree of automation you’ve achieved, but I have a few questions about the CFD you are using for your simulations:

  • What type of CFD are you using? (RANS, URANS, LES, DES, LBM?)
  • How are you modelling boundary layer transition? Is your CFD assuming fully turbulent flow, or does it have some way of modelling the extent of laminar flow and the position of laminar-to-turbulent boundary layer transition?
  • Above all, have you validated any of your CFD simulations against experimental data? If not, how do you know whether the CdA values and CdA increments between different bike/position configurations are accurately simulated or not?

By the way, you should change the wind tunnel AI image on your website. Nobody would design a wind tunnel that looks like that!

2 Likes

Hi,

Valid points, let’s dive in.

I’m using OpenFOAM’s pimpleFoam solver; it’s transient, pressure-based, and combines aspects of both the PISO and SIMPLE algorithms (hence the “pimple” in the name). It’s generally considered a URANS approach.

The simulation assumes fully turbulent flow; it would be useful to capture the laminar-to-turbulent transition region, but I’m more interested in simple CdA analysis at this time.
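
For the curious, the relationship between drag force and CdA is just the standard one; here’s a minimal sketch with illustrative numbers (not values from these runs):

```python
# Minimal sketch (illustrative, not project code): drag force -> CdA.
def cda_from_drag(f_drag_n, v_ms, rho=1.225):
    """CdA in m^2 from drag force in N at freestream speed v in m/s."""
    return f_drag_n / (0.5 * rho * v_ms ** 2)

# Example: ~20 N of drag at 36 km/h (10 m/s) is a CdA of roughly 0.33 m^2.
print(round(cda_from_drag(20.0, 10.0), 3))  # 0.327
```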

Validation-wise, we’re working towards wind tunnel access. The CFD software itself is well validated, and this approach has been well studied with LIDAR-derived models; my models are quite similar to the LIDAR variety (and beat lower-quality ones in many cases). We still want to quantify the exact difference in accuracy, but every test so far has played out within expectation: make the tires slightly larger in diameter and CdA goes up; cut away a couple of wires and CdA goes down; take the same video multiple times or use different images of the same user and the CdA is practically identical; give the rims slightly more depth and CdA goes down a bit; etc. FYI, the models are scaled to a known 700c (622 mm) rim size.

Regarding the Wind Tunnel, it’s an illustration that actually mimics the McLaren wind tunnel:

I don’t have rights to use their imagery, so I quickly modeled my own; I even have some new ones floating around (I like 3D modeling).

(I keep playing with it, I’m using Blender for this if you’re curious).

1 Like

I was at Iceman when the winner pulled in on a fatbike with drop bars. Some were furious and demanded that the rider be disqualified. Looking at the rules of the event, there was nothing requiring a certain type of handlebar, so the result stood.

I didn’t race the next year, but several riders were running the same bars.

The rider won by a large margin, so it would be hard to say those bars didn’t help, but the rules are the rules, so if they fit, do it… (I put drop bars on my fixie, which I have to say does seem a little silly unless I swap the hub to a freewheel :person_shrugging: :cowboy_hat_face:)

I am super close to drop-barring my fat bike. I’ve been upgrading the heck out of it; it’s an Ari Kings Peak with a top-end Fox dropper, a Lauf fork, 29er carbon rims with 45 mm internal width, and some super fast 3-inch tires.

Just ordered a RaceFace Era crankset/chainring and a special spindle/BB to make it compatible.

Next step is the cockpit, and it would be sooo fast with a drop bar + aero bar setup.

1 Like

Thanks for the info. I’m familiar with OpenFOAM but haven’t used it myself. If your simulations are URANS, I guess you are averaging the oscillating CdA values over a period of time to get the CdA results, which is fine.

Modelling the flow as fully turbulent is quite common for CFD codes, but it’s worth bearing in mind that at the Reynolds numbers applicable to cycling, the aerodynamics will in most cases be determined by flow separation of laminar boundary layers. You’ll get some sensible results by modelling fully turbulent flow, because that will still capture some of the effects you mention. However, there are some effects that won’t be modelled properly. For example, the benefits achieved with aero fabrics won’t be seen if modelling fully turbulent flow. Tube design is another example: if you designed a bike tube assuming fully turbulent flow, it would drive you towards a classic aerofoil shape. However, at normal bike speeds, where the low Reynolds numbers cause flow separation of the laminar boundary layer, an aerofoil shape doesn’t work because the flow separates from the thickest part of the aerofoil. The optimal tube shape is something like what you see on modern aero bikes, which have tube profiles that are more D-shaped.
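
To put a rough number on “the Reynolds numbers applicable to cycling” (a back-of-the-envelope illustration, not anything from these simulations):

```python
# Rough Reynolds number for a bike frame tube: Re = v * L / nu,
# with nu the kinematic viscosity of air (~1.5e-5 m^2/s).
def reynolds(v_ms, chord_m, nu=1.5e-5):
    return v_ms * chord_m / nu

# A ~50 mm deep tube at 36 km/h (10 m/s):
print(f"{reynolds(10.0, 0.05):,.0f}")  # ~33,000 -- far below the ~5e5
# flat-plate transition range, so laminar separation dominates.
```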

That’s not to say CFD is not useful; it could be. It’s just important to understand what the limitations are and in what way it’s modelling the flow incorrectly. For the bike firms that do use CFD in their design process, I suspect they use it in a supporting role, with a heavy dose of calibration/correction against wind tunnel data.

Your final wind tunnel picture is better. The other ones have large ridges that would disturb the flow. The energy consumption of a wind tunnel (which is significant) depends entirely on the amount of friction generated by the tunnel walls and turning vanes around the circuit, so all wind tunnels are designed to minimise those losses and have walls that are as smooth as possible. I suspect the rippling visible in the walls of that McLaren wind tunnel is more subtle than it looks, enhanced by the shallow angle of the light.

1 Like

Do what works for you. My plan for the fixie was to use it for head-down sprint intervals, but last year was such a disaster; maybe this year. :folded_hands:

I ride a cyclocross bike for CX and I wouldn’t think of riding anything but a gravel bike for a gravel race. You do you, though. No dropper posts for CX (the essence of the discipline is to use your skills and be “underbiked”), and no aero bars for gravel.

For the original poster: I’m curious, can you do a bike-only CdA value for this with, say, a common road bike on the market now and a common gravel bike? Thx

1 Like

I guess you are averaging the oscillating CdA values over a period of time to get the CdA results, which is fine.

Yes, once the run is clearly converged, this is precisely what I do.
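
In case it’s useful to anyone, the averaging itself is nothing fancy; a minimal sketch (the file name and two-column layout here are assumptions for illustration, not the actual solver output):

```python
import numpy as np

# Average an oscillating CdA signal over the converged tail of a transient run.
def mean_cda(history_file, discard_fraction=0.5):
    data = np.loadtxt(history_file)                     # columns: time [s], CdA [m^2]
    t, cda = data[:, 0], data[:, 1]
    t_cut = t[0] + discard_fraction * (t[-1] - t[0])    # drop the start-up transient
    return float(cda[t >= t_cut].mean())

# e.g. mean_cda("cda_history.dat") averages the second half of the signal.
```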

For example, the benefits achieved with aero fabrics won’t be seen if modelling fully turbulent flow. Tube design is another example: if you designed a bike tube assuming fully turbulent flow, it would drive you towards a classic aerofoil shape. However, at normal bike speeds, where the low Reynolds numbers cause flow separation of the laminar boundary layer, an aerofoil shape doesn’t work because the flow separates from the thickest part of the aerofoil. The optimal tube shape is something like what you see on modern aero bikes, which have tube profiles that are more D-shaped.

The current approach has no problem showcasing aerodynamic improvements from, say, deeper rims and many other aerofoil shapes; however, I agree that it isn’t currently appropriate for testing subtle texture/surface differences.

Clearly you have great expertise in this area. I admit my specialty is programming anything and everything to get what I want, and I’ve built many projects in the space, including a competitor to RideWithGPS/Komoot with AI-classified road surface types (from satellite imagery) and custom high-fidelity maps: https://sherpa-map.com.

A massive physics sim/engine for mixed surface riding/racing to help dial in tire/bike/etc. choice for tricky to understand courses: GPX Route Speed Estimator for Cyclists: Multi-Surface, Weather, and Nutrition Strategy

I even randomly used AI to classify the amount of exposure to the sun I’d experience in a race last year: Coast to Coast Info

These are just a few; I spend entirely too much time programming outside of work, and unfortunately it occasionally impacts cycling time, but it is what it is.

So, regarding the CFD, I went with a balance between robustness/accuracy and compute cost so I could run 33 simulations at the same time (3 speeds × 11 yaw angles; I love my AMD Ryzen Threadripper 7970X). I modeled the yaw angles and testing after Trek’s CFD testing protocol after studying their whitepapers, but I’m still adapting and refining it.
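
For context, the sweep itself is just a small matrix of cases; a sketch of the layout (the exact speeds and yaw spacing below are placeholders, not the real protocol):

```python
from itertools import product

# Illustrative sweep: 3 speeds x 11 yaw angles = 33 independent cases.
speeds_kmh = [36, 43, 50]                           # placeholder speeds
yaws_deg = [-12.5 + 2.5 * i for i in range(11)]     # -12.5 ... +12.5 in 2.5 deg steps

cases = list(product(speeds_kmh, yaws_deg))
print(len(cases))  # 33 -- one case per parallel run
```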

If you have any suggestions or methodology to improve accuracy or create more meaningful data, I’m all ears and happy to incorporate them.

Also, regarding the wind tunnel illustration: I may create a new one, but just in case it’s not obvious (with your expertise I’m assuming you’re aware of this), I’m not using that “tunnel” in the tests. I’m using blockMesh to generate a large cuboid (rectangular prism) domain with a rotated inlet/outlet velocity to achieve the desired yaw (which works better in parallel than rotating a bunch of models programmatically), with no-slip on the ground and slip on the walls/ceiling.
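
The yaw trick is just a rotation of the inlet velocity vector rather than the geometry; a minimal sketch of the decomposition (nothing here is taken from the actual case files):

```python
import math

# Achieve yaw by rotating the freestream at the inlet instead of the model:
# the mesh stays fixed and the velocity gains a crosswind component.
def inlet_velocity(speed_ms, yaw_deg):
    yaw = math.radians(yaw_deg)
    return (speed_ms * math.cos(yaw),   # along the tunnel axis
            speed_ms * math.sin(yaw),   # crosswind component
            0.0)

print(inlet_velocity(10.0, 5.0))  # ~(9.96, 0.87, 0.0) m/s for 5 deg of yaw
```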

Before that, I use custom 3D software I built to snap a forward orthographic image of the model and calculate the reference frontal area based on pixel values; before that step, the model is scaled off the known rim size against a 622 mm control cylinder in a 3D program.
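
Roughly speaking, that frontal-area step boils down to pixel counting at a known scale; a sketch of the idea (the array names and black-silhouette convention are mine, for illustration only):

```python
import numpy as np

# Frontal area from an orthographic silhouette: the metres-per-pixel scale
# comes from the known 622 mm rim diameter measured in the same image.
def frontal_area_m2(silhouette, rim_diameter_px, rim_diameter_m=0.622):
    m_per_px = rim_diameter_m / rim_diameter_px      # orthographic => uniform scale
    dark_px = np.count_nonzero(silhouette == 0)      # black pixels = bike + rider
    return dark_px * m_per_px ** 2

# The result is used as the reference area when turning forces into CdA.
```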

Again, any suggestions/thoughts/advice are welcome, I would love to offer a variety of tests.

1 Like

Thanks for sharing. I’ve been experimenting with running my XC bike at a couple of gravel events, and aero bars are the next step in my testing. In my experience, hands centered near the stem on the MTB is faster than my gravel bike on the hoods, but aero bars on the gravel bike are still measurably faster. Another rider and I chased the lead group for about 45 minutes at a gravel race yesterday (no aero bars allowed at the event). It was a good opportunity to get a feel for the difference pushing into the wind at speed. There are places where the MTB is an advantage and places where it’s costing watts. Whether an MTB makes sense is really a function of the course and pack dynamics, in my opinion. I don’t think it’s going to be the best choice very often if you end up solo for any significant amount of time. The aero bars will help, but it’s still not going to be the fastest setup if you are in solo TT mode. But for pack racing where draft is everything, the MTB can be really helpful on selective/technical sections to avoid getting gapped. I’m leaning toward using the MTB at Unbound this year, but I need to do some testing with the aero bars before making the call. Unbound almost guarantees some solo TT time at some point during the race (in my experience).

It sounds like you’re doing everything right. Some people may say that the wheels should be rotating in the CFD, since that’s how bikes are wind tunnel tested. However, that’s a lot of additional complexity for the CFD and I’m not sure it’s really necessary. That’s the thing with CFD: it’s fundamentally ‘wrong’, just a simplified model of real air flow, and usually the geometry simulated in CFD is simplified too. The saying goes that “all models are wrong but some models are useful”, and that applies to CFD as well. It’s good that you’ve already shown it can correctly simulate many of the effects you mentioned, wheel depth etc., and that’s positive. With any modelling, though, I think it needs some validation against experimental data. That’s something I do in my job, by the way: proving how accurate and reliable our CFD methods are for aerospace applications.

That validation could be wind tunnel testing, which you mentioned you hope to do. Wind tunnel testing is rather expensive, though, so if cost is an issue you could consider getting experimental data from outdoor testing instead, using a method such as the Chung method.
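
If it helps, the core of the Chung (virtual elevation) method fits in a few lines: guess CdA and Crr, back out the implied road slope from power and speed at each sample, and integrate it into a “virtual elevation” profile; the guess that makes the profile physically sensible (e.g. a loop returning to its start height) is your estimate. A bare-bones sketch, ignoring wind and drivetrain losses, with illustrative variable names:

```python
# Bare-bones virtual elevation (Chung method) sketch: solve the power balance
# P = v * (m*a + m*g*Crr + m*g*slope + 0.5*rho*CdA*v^2) for the slope at each
# sample, then integrate slope * distance into a virtual elevation profile.
def virtual_elevation(power_w, speed_ms, dt_s, mass_kg, cda, crr,
                      rho=1.225, g=9.81):
    elev = [0.0]
    h = 0.0
    for i in range(1, len(speed_ms)):
        v = speed_ms[i]
        if v < 0.5:                        # skip near-stationary samples
            elev.append(h)
            continue
        a = (speed_ms[i] - speed_ms[i - 1]) / dt_s
        drag = 0.5 * rho * cda * v ** 2    # still air assumed
        slope = (power_w[i] / v - mass_kg * a
                 - mass_kg * g * crr - drag) / (mass_kg * g)
        h += slope * v * dt_s              # slope * distance step
        elev.append(h)
    return elev
```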

2 Likes

This might even be a way of partially validating the model before actual wind tunnel access happens: if you could source one or two frames/bikes with good available wind tunnel data, enter those into the windtunnel.ai system and see how the results compare.

Some of the sweep data, especially for the drop bar results, looks very funky: not only pretty skewed to one side, but also oscillating between sweep angles.

While such test data typically isn’t perfectly symmetrical for various reasons, even if you are just testing wheels (forks, brakes, etc.), the difference between left and right looks pretty stark, especially for the drop bar scenarios.

Even more peculiar: for one scenario, those oscillations shift between the tested speeds. Since I assume it’s the same picture, and thus the same model mesh, with just the speed increased, I wonder what to make of -2.5 deg being in a trough at 36 km/h and suddenly at a peak at 50…

Anyone have a theory why those results look that way?

To me it looks like the bike stand used is a bit tilted to one side, or the position isn’t perfectly ‘aligned’ (symmetrical).

Another potential problem could be the camera lens used. With wide-angle lenses and a close-up perspective you’ll get perspective distortion (think big nose or long forehead in selfies), which can skew the data. I don’t know how they controlled for that; they should use a tele lens or digitally “zoom in” on their smartphone.

It has to do with scaling: each graph is scaled to its own max/min, and there’s often very slight deviation. If I scaled them all to an identical range, the graphs would generally look far more uniform, but they might be harder to read.

I’ll experiment with this, thanks for the feedback.

I undistort the images as much as possible and filmed in portrait. Also, I personally, manually align the model in 3D space to ensure it’s not tilted or misaligned. Regarding the graph-to-graph comparison, I scale each graph to its own data range, which can make minor differences look like large fluctuations; I’ll experiment with custom, static ranges and see if that presents more useful data.

1 Like

That was just me guessing, and one hobby bleeding into another.

Those two images look a bit tilted (the first) and distorted (the second).

Those two images aren’t produced by a physical camera. I’m exploiting something called a Z-depth buffer in OpenGL to off-screen render an orthographic frontal view of the model after it’s scaled to a known rim size. I use that image (rendered as a depth gradient) just to show the frontal view, but in reality, I’m using this:

I count the black pixels, each one representing a portion of the forward frontal reference area for use in the CdA calculation.

Any apparent distortion is likely from reading this as a perspective image when in reality it’s not; an orthographic view like this isn’t achievable in the real world without introducing a lot of distortion.

Tilting-wise, that could affect the yaw graphs, but they look pretty straight to me… I’ll look into this and run some tests with various minor tilts. Thanks for the analysis.