This is the exact problem I’ve been thinking about, but I don’t have anything besides some rough ideas and rough proof-of-concept code, mostly due to a lack of time and focus on my part.
Blender tracks (motion-tracking data) are just x,y coordinates for every tracked frame. The hard part is coming up with an algorithm that translates those into a single value and extracts the proper keyframes.
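For the "x,y into a single value" step, one simple heuristic (my own assumption here, not something from the post) is to project each point onto whichever axis the motion mostly happens along:

```python
def to_single_value(points):
    """Collapse (x, y) tracking points into one value per frame by
    keeping whichever axis spans the larger range. A rough heuristic,
    assumed for illustration; a real solution might project onto an
    arbitrary motion axis instead."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if max(xs) - min(xs) >= max(ys) - min(ys):
        return xs
    return ys
```

This obviously breaks down for diagonal or rotating motion, which is where the two-track idea below would come in.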
So I’ve taken a step back to think about how to extract the Funscript keyframes from just a series of raw data. (Also because I’m not having as much fun writing the Python that Blender addons are written in.)
My current theory on how the algorithm should work:
- Smooth out the raw data using something like an x-point moving average filter
- Convert all raw values into percentages (0-100)
- Find the indexes of all the local extrema
- Adjust/remove local extrema that do not meet the desired interval (e.g. 150 ms for Launch Funscripts)
- For each extrema pair separated by more than the minimum interval:
- Search for pauses / equal points, keeping the minimum interval in mind (still trying to figure out a good way to do that)
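The first four steps above can be sketched in plain Python. All names, the window size, and the frame-based interval check are assumptions for illustration, not a worked-out implementation (and the pause-detection step is left out, since that part is still open):

```python
def moving_average(values, window=5):
    """Smooth raw values with a simple x-point moving average."""
    half = window // 2
    return [
        sum(values[max(0, i - half):i + half + 1])
        / len(values[max(0, i - half):i + half + 1])
        for i in range(len(values))
    ]

def normalize(values):
    """Map raw values onto the 0-100 range a Funscript expects."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return [round((v - lo) * 100 / span) for v in values]

def local_extrema(values):
    """Indexes where the direction of movement flips (peaks/valleys).
    Flat plateaus are ignored in this sketch."""
    idx = [0]
    for i in range(1, len(values) - 1):
        if (values[i] - values[i - 1]) * (values[i + 1] - values[i]) < 0:
            idx.append(i)
    idx.append(len(values) - 1)
    return idx

def enforce_interval(indexes, fps, min_interval_ms=150):
    """Greedily drop extrema closer together than the device's
    minimum interval (150 ms for the Launch)."""
    min_frames = min_interval_ms * fps / 1000.0
    kept = [indexes[0]]
    for i in indexes[1:]:
        if i - kept[-1] >= min_frames:
            kept.append(i)
    return kept
```

Chained together it would look something like `enforce_interval(local_extrema(normalize(moving_average(raw))), fps=30)`. The greedy drop in `enforce_interval` is crude; the "adjust" part of that step probably wants something smarter, like merging nearby extrema instead of discarding them.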
Once the above algorithm is worked out, that should help a lot in translating any type of raw input data into a Funscript. This can be Blender motion tracking, mouse movements, (gamepad) joysticks, vstroker, microphones, etc. Basically any type of thing that emits a series of values.
For the Blender addon, the major headache will be getting a proper and useful workflow + UI for this. For flexibility we will probably need two tracks so that movements in all directions, changing camera angles, and varying distances can be translated. Or we could even go the route where the entire 3D space has to be tracked, but I’m guessing scripting a movie by hand is easier than that.