So, loooong story short.
I created a system that takes a 20-second video of a cyclist on a bike and turns it into a full-on wind tunnel (CFD) test.
(Feel free to skip, this is TL;DR background.) After falling in love with gravel cycling, I developed a cycling routing website/map, then a custom routing engine, then a physics engine for "best bike for said gravel race"… then, realizing I needed the *skillz*, mountain biking, so I created a giant pipeline that let you take Insta360 video of a mountain bike course and turn it into miles-long 3D representations… but I couldn't host them…
Then I remembered my love of going fast (I've loved everything from Ironman to road racing to epic graveling and beyond), and I thought… could I turn a quick video of a cyclist on a bike into a CFD-able model?
(If you aren't an aero bike nerd: the aero-testing alternative to a wind tunnel is "computational fluid dynamics," which gets a bad name in some circles because it isn't a "true wind tunnel test"… but do you know the advantages it has over wind tunnels? You can test with particles (like a dirt cloud in a gravel race), rain, clearly see your aerodynamics in a peloton, etc.)
So, it’s suuper late, and I have soooo much coming from this project, but I created this:
From ~500 frames of a 20-second phone video of my twin on his gravel bike (braving 28°F Wisconsin weather to capture his summer aero kit):
I basically went from that quick video to a cloud of points representing him in space, using bleeding-edge technology (something called SfM + NeRF, i.e. structure-from-motion plus a neural radiance field):
Then I destroyed myself programming for over a week… because these points aren't derived from lidar, so all the current methods fail to produce something I can use in aero testing. (These are tiny circular colored points that always face the camera. It looks good, but to make polygons, the points need "direction", i.e. surface normals.)
I went to the ends of the earth trying to solve this. I even forked the code behind the visual effects in Star Wars (OpenVDB), built a custom C++ + OpenGL program in that vein, and it failed to produce something good enough.
So I developed a custom solution for "points without an inherent direction → aero-testable / 3D-printable mesh." That resulted in this:
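(My mesher itself isn't shown here, but for the curious: the standard baseline for giving direction-less points the "direction" that meshing methods like Poisson reconstruction require is local PCA. Fit a plane to each point's nearest neighbors; the plane's normal is the estimated surface normal. A minimal NumPy sketch, with a toy sphere as the test cloud:)

```python
import numpy as np

def estimate_normals(points, k=8):
    """Assign a surface normal to each point via PCA over its k nearest
    neighbors: the eigenvector with the smallest eigenvalue of the local
    covariance matrix approximates the surface normal."""
    n = len(points)
    normals = np.empty_like(points)
    # Brute-force pairwise distances (fine for toy clouds; use a k-d tree at scale).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]              # smallest-eigenvalue direction
    # Crude global orientation: flip normals to point away from the centroid.
    outward = points - points.mean(0)
    flip = (normals * outward).sum(1) < 0
    normals[flip] *= -1
    return normals

# Points sampled on a unit sphere should get roughly radial normals.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
nrm = estimate_normals(pts, k=12)
radial = np.abs((nrm * pts).sum(1))
print(radial.mean())  # near 1.0 → normals are ≈ radial, as expected
```

(Real scans are messier than a sphere: consistent orientation across thin structures like spokes and brake levers is where this simple flip heuristic falls apart, which is part of why off-the-shelf methods struggle on camera-derived clouds.)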
Sure, you need a pretty extreme computer, but that took one minute to generate, and yes, I smoothed it with some basic software:
Here's the frontal view for the "A" of "CdA":
In 3D, I calculated scale off a 700c rim, if you were curious.
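(The rim trick works because a 700c rim has a known bead-seat diameter of 622 mm per ISO 5775, so one reference length in the frame fixes the scale of everything. Here's a hedged sketch of how scale plus a binary silhouette mask yields the frontal area; the function name and toy mask are mine, not from the actual pipeline:)

```python
import numpy as np

RIM_DIAMETER_MM = 622.0  # 700c bead-seat diameter (ISO 5775)

def frontal_area_m2(silhouette, rim_diameter_px):
    """Projected frontal area from a binary silhouette mask, scaled by a
    known reference length (the 700c rim) visible in the same view."""
    mm_per_px = RIM_DIAMETER_MM / rim_diameter_px
    area_mm2 = silhouette.sum() * mm_per_px ** 2
    return area_mm2 / 1e6  # mm² → m²

# Toy check: a 100 px-diameter disc standing in for the rim itself should
# come out at roughly the area of a 622 mm circle (≈0.304 m²).
yy, xx = np.mgrid[:120, :120]
disc = ((xx - 60) ** 2 + (yy - 60) ** 2) <= 50 ** 2
print(frontal_area_m2(disc, rim_diameter_px=100))
```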
To put things in perspective: even if you have an extremely powerful computer… like a "costs as much as a Trek Speed Concept SLR" computer, it takes 30 minutes using the most bleeding-edge tech (Nerfstudio → TSDF) to create this:
The funny thing is, the algorithm I built to derive a mesh from points is still in its infancy, so there's plenty of room to grow, yet it already reproduced my bro's hands on his aero bars on his gravel bike well enough that you can easily pick out the fingers/knuckles:
I even found that a group already thought of this: lidar gun → CFD test, Staczero. Judging by their front page, with actual lidar, their models look like:
And they claim to test within 2% of real wind-tunnel results?
Which… from my experimenting so far, I'd believe. I've been experimenting in so many ways: different yaw angles, water-bottle configurations; heck, I have a few tests queued up of wheels with tires of different tread patterns. And yes, body position matters so much!
Every test has checked out… I mean, the actual CFD software I'm using, OpenFOAM, can account for the structural compression of an aircraft above Mach 0.3 and use the computed deformed shapes as input for aero testing in that state… so 20 mph incoming wind at different yaw angles? Easy.
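(For anyone wondering what "different yaw angles" means in practice: you don't rotate the bike, you decompose the apparent wind into headwind and crosswind components at the inlet. A quick sketch of that arithmetic; this is just the trig, not my actual OpenFOAM case setup:)

```python
import math

def inlet_velocity(speed_mph, yaw_deg):
    """Decompose apparent wind into (headwind, crosswind) components in m/s,
    the sort of vector you'd hand to a CFD inlet boundary at a given yaw."""
    v = speed_mph * 0.44704  # mph → m/s
    yaw = math.radians(yaw_deg)
    return v * math.cos(yaw), v * math.sin(yaw)

for yaw in (0, 5, 10, 15):
    ux, uy = inlet_velocity(20.0, yaw)
    print(f"yaw {yaw:>2} deg: Ux={ux:.2f} m/s, Uy={uy:.2f} m/s")
```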
I just wanted to "Dylan Johnson" my gravel setup and happened to go too far with the programming, but I'm going to have fun testing everything.
So, this is a quick, off-the-cuff story. Honestly, I thought the fact that I could replicate entire mountain bike courses from a ride-through with an Insta360 strapped to my helmet would be the ticket to filling the Fatmap-shaped vacuum Strava created:
It would be interesting, and it still may be! But the concept is proving challenging to host, so… might as well make aerodynamic testing accessible? (Also, I have a killer replacement for Strava's "segments" with 5 m resolution in the works; as in, a single segment could be 5 m, and it's impossible to cheat.)
So, I can use a single video to generate a model for aero testing. Limitations? If a subject in the video is highly reflective or transparent, that's problematic…
Can my models look better? Yessss I have massive expertise in AI, and am working on incorporating some powerful ensembles to get to even higher levels of fidelity.
Where am I going with this? IDK. I'm going to throw together a website and gauge interest, and I'm probably going to aero test everything I can get my hands on or find a suitable video of. But I'm curious what you guys think: I can take a video of anything that isn't transparent or highly reflective and, given something for scale, get pretty darn accurate; easily computing the difference between water-bottle placements, for example. Check out how disc brakes affect the flow of air here:
For context, that's a 10 m/s (22.4 mph) wind velocity magnitude; here's the scale:
That's the "speed of the wind"; I mapped this color scale onto the bike/model:
As you can see, the air slams into the disc brake, creates a low-pressure region behind it, and sheds vortices downstream, recouping some of that energy… interesting.
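(The numbers behind that picture are small but real. Where the flow stagnates against the brake, the pressure rise is the dynamic pressure q = ½ρv², and the same q is the denominator when you turn a CFD drag force into a CdA. A hedged sketch of that arithmetic for the 10 m/s case; the 20 N example drag is illustrative, not one of my results:)

```python
RHO = 1.225  # sea-level air density, kg/m³

def dynamic_pressure(v_ms, rho=RHO):
    """q = 0.5·rho·v²: the pressure rise where flow stagnates (e.g. on the
    leading face of a disc-brake caliper)."""
    return 0.5 * rho * v_ms ** 2

def cda_from_drag(drag_n, v_ms, rho=RHO):
    """CdA from an integrated drag force, the number a CFD run reports."""
    return drag_n / dynamic_pressure(v_ms, rho)

q = dynamic_pressure(10.0)      # the 10 m/s case shown above
print(f"q = {q:.1f} Pa")        # 61.2 Pa
# Illustrative: 20 N of drag at 10 m/s → CdA ≈ 0.33 m², a plausible road-position value.
print(f"CdA = {cda_from_drag(20.0, 10.0):.3f} m^2")
```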
So, I'm working on this for the foreseeable future, and I'm curious about everyone's input.