Blender motion tracking to funscript

I just wanted to point out another feature of Blender that seems to have the potential to generate .funscripts. There is a motion tracking feature that lets you put a marker on a video clip and track the motion of the marker for the duration of the clip. You can put a marker over a hand, mouth, or other part of the body and track its motion for the length of the clip.

I am still playing with it, but I have managed to get clean tracks that look like they would convert well to a script. The conversion from tracking data to 0-100 values and time data would have to be worked out.
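A first cut at that conversion might look something like this. This is only a sketch under some assumptions: `ys` is a hypothetical list of per-frame vertical positions from a track, `fps` is the clip frame rate, and the position range of the track is simply stretched to 0-100.

```python
# Sketch: map raw vertical track positions to (time_ms, pos) pairs.
# `ys` and `fps` are assumed inputs; the track's min/max define 0 and 100.

def to_funscript_points(ys, fps):
    lo, hi = min(ys), max(ys)
    span = hi - lo or 1  # avoid division by zero on a perfectly flat track
    points = []
    for frame, y in enumerate(ys):
        at_ms = round(frame * 1000 / fps)        # frame index -> milliseconds
        pos = round((y - lo) / span * 100)       # scale into 0-100
        points.append((at_ms, pos))
    return points
```

Whether min/max of the whole clip is the right normalization (vs. per-stroke range) is exactly the kind of thing that would still have to be worked out.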

Anyway I just thought I would point out this feature since many people are not aware of Blender features and it might help with automating script generation.

Cool! I’ve only played a little with the motion tracking stuff in Blender, since the UI is intimidating… :slight_smile: If you got a good workflow for doing this stuff let me know and I’ll see if I can find a way to convert tracking data.

Here is the basic tracking workflow that I currently have in Blender.

A POV-type video seems to be the best choice for tracking to get data for a .funscript. The main point for good tracking is to make sure the video format is a series of images, like avi, and not mp4 or other formats that compress video by adding info between image frames. A ring on a hand or one of the eyes works well for producing a clear set of tracks. If you search YouTube for “Blender motion tracking” or “Blender object tracking” there are plenty of tutorials on the tools and how they are supposed to be used.

  • Convert video format to avi, a sequence of images or a format with individual frames.
    Clip the portion of the video you want to track. (optional to save memory and processing power)
    Note the fps and video size of your clip.

  • In Blender choose the motion tracking default layout.
    Open your clip and set your fps, render size, and start/stop frames.

  • Marker settings to try first: Motion Model: Affine, Match: Previous frame, Prepass, Normalize
    Click + to add Object to track
    Marker display: Pattern, Search, Path

  • Move to start of clip, add a marker over a distinct portion of the clip that you want to track.
    For example the ring on the hand for tracking the motion of the hand or the eyes for tracking head motion. The preview shows the pattern being tracked and is also used for manually tweaking markers. The larger box outside the marker pattern shows the search area. Increase the size of the area to about half the image size to begin.

  • Click the right arrow under Track to begin tracking. If Blender loses the pattern you can adjust the marker and continue tracking.


I’ve been playing with getting tracked data converted into funscript. I’ve got a working proof-of-concept using two tracks. One for the actor and one as an anchor to relate the movement to. Here is a short video of what I got now: Still need to clean up a lot and I’m still not sure how this will scale for longer tracks / different tracks in the same movie, etc.

Looks promising.

For longer tracks/different tracks you can merge tracks by selecting them and clicking Merge: Join Tracks.
As long as the two markers do not overlap this should work fine.
Let me know if you need any more Blender related info or areas to test.

I tried to get this working, however I don’t seem to have the import motion capture data button in my movie editor?

This is very much a proof-of-concept; I haven’t spent much time on it after posting this… I had to refactor the addon first anyway before attempting to make anything release-worthy.

I’ve got the tracking working in blender for the motion I’m wanting to track but how does one import the frames or copy the tracking to the video editor to add the funscript keyframes or create a funscript with the motion data? Am I missing something?

This is the exact problem I’ve been thinking about :wink: But I don’t have anything besides some rough ideas and rough proof-of-concept code… Lack of time and focus on my part :roll_eyes:

Blender tracks (motion tracking data) are just x,y coordinates for each tracked frame. The hard part is coming up with an algorithm that translates those into a single value and extracts the proper keyframes.
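To make that concrete, here is a rough sketch of the two-track (actor + anchor) idea from the proof-of-concept above: reduce each frame's pair of x,y coordinates to one value by taking the actor marker's distance from the anchor marker. The clip and track names in the comment are hypothetical, and the reduction itself is just one possible choice.

```python
# In Blender the marker data could be pulled out roughly like this
# (clip/track names are placeholders):
#   clip = bpy.data.movieclips["clip.avi"]
#   actor = {m.frame: tuple(m.co) for m in clip.tracking.tracks["actor"].markers}
#   anchor = {m.frame: tuple(m.co) for m in clip.tracking.tracks["anchor"].markers}

def relative_distance(actor, anchor):
    """One value per frame: actor's distance to the anchor marker,
    for the frames both tracks cover."""
    frames = sorted(set(actor) & set(anchor))
    out = []
    for f in frames:
        ax, ay = actor[f]
        bx, by = anchor[f]
        out.append((f, ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5))
    return out
```

Using a distance to an anchor (instead of raw y) is what should make the value hold up when the camera moves a little.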

So I’ve taken a step back thinking about how to extract the Funscript keyframes from just a series of raw data. (Also because I’m not having as much fun doing the Python stuff the Blender addons are written in.)

My current theory on how the algorithm should work:

  • Smooth out the raw data using something like an x-point moving average filter
  • Convert all raw values into percentages (0-100)
  • Find the indexes of all the local extrema
  • Adjust/remove local extrema that do not meet the desired interval (e.g. 150ms for Launch Funscripts)
  • For each extrema pair that is longer than the minimum interval:
    • Search for pauses / equal points, keeping the minimum interval in mind (still trying to figure out a good way to do that.)
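The first four steps above can be sketched in plain Python. This is a minimal, assumption-laden version: `raw` is any series of per-frame values, `fps` the clip frame rate, and 150 ms is the Launch minimum interval from the text; the pause-detection step is left out since that part is still open.

```python
def moving_average(raw, window=3):
    # Simple centered moving average; the window size is a tunable guess.
    half = window // 2
    return [sum(raw[max(0, i - half):i + half + 1]) /
            len(raw[max(0, i - half):i + half + 1]) for i in range(len(raw))]

def normalize(values):
    # Stretch the series into 0-100 percentages.
    lo, hi = min(values), max(values)
    span = hi - lo or 1
    return [round((v - lo) / span * 100) for v in values]

def local_extrema(values):
    # Indexes where the series changes direction, plus the endpoints.
    idx = [0]
    for i in range(1, len(values) - 1):
        if (values[i] - values[i - 1]) * (values[i + 1] - values[i]) < 0:
            idx.append(i)
    idx.append(len(values) - 1)
    return idx

def enforce_interval(indexes, fps, min_ms=150):
    # Drop extrema that follow the previously kept one too quickly.
    kept = [indexes[0]]
    for i in indexes[1:]:
        if (i - kept[-1]) * 1000 / fps >= min_ms:
            kept.append(i)
    return kept
```

Greedily dropping too-close extrema (as `enforce_interval` does) is only one policy; merging or shifting them might give better-feeling scripts.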

Once the above algorithm is worked out, that should help a lot in translating any type of raw input data into a Funscript. This can be Blender motion tracking, mouse movements, (gamepad) joysticks, vstroker, microphones, etc. Basically any type of thing that emits a series of values.
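The last step, writing the extracted keyframes out, is the easy part: a .funscript is just JSON with an "actions" list of "at" (milliseconds) / "pos" (0-100) entries. A minimal sketch, with the surrounding fields filled in the way common .funscript files have them:

```python
import json

def to_funscript(points):
    """Serialize (time_ms, position) pairs into .funscript JSON text."""
    return json.dumps({
        "version": "1.0",
        "inverted": False,
        "range": 100,
        "actions": [{"at": at, "pos": pos} for at, pos in points],
    })
```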

For the Blender addon, the major headache will be getting a proper and useful workflow + UI for this. For flexibility we will probably need 2 tracks so movements in all directions/changing camera angles/distances can be translated. Or we can even go the route where the entire 3D space has to be tracked, but I’m guessing scripting a movie by hand is easier to do :stuck_out_tongue_winking_eye: