Real-time control?


#1

Hi folks,
I’ve been dabbling with an application that tries to recognize on-screen action and then send that info on to, perhaps, a device of some sort. In thinking over how this interaction might work, I ran into an unknown.

I know pre-made scripts are typically the way of things, so I don’t know how devices react to a command arriving that asks them to go further in the direction they were already commanded. Example device interactions: a Launch is on its way from 100 to 50, but it becomes clear the motion is continuing beyond that point; or a Vorze going clockwise at 50% and changing only speed, not direction.
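
To make the Launch case concrete, here’s roughly what I mean in Python; `move_to` is a made-up placeholder, not any real device API:

```python
# Hypothetical placeholder command -- not a real device API.
def move_to(position, speed):
    """Launch-style: move to an absolute position (0-100) at a given speed."""
    print(f"move to {position} at speed {speed}")

# The device was told to move from 100 down to 50...
move_to(50, speed=40)
# ...but the on-screen motion keeps going past 50. Can I just fire
# another command in the same direction while it's still moving?
move_to(20, speed=40)
```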

I guess the question boils down to: would you need to have your detection running ahead of the visuals so you can issue complete directional movement commands, and only notify the device when the direction changes?


#2

For toys with “infinite” movement range, like the Vorze A10 Cyclone or RealTouch, this doesn’t seem like it’d really be a problem? You can go, stop, then just go more in the same direction without really having to worry about some hard mechanical stop.
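
A sketch of what I mean, with a made-up `rotate` placeholder (not the actual Vorze protocol):

```python
# Hypothetical placeholder -- not the actual Vorze protocol.
def rotate(speed, clockwise=True):
    """Rotary-style: set speed (0-100) and direction; no position to track."""
    print(f"rotate {'cw' if clockwise else 'ccw'} at {speed}%")

rotate(50)  # go
rotate(0)   # stop
rotate(80)  # go more in the same direction -- no mechanical stop to hit
```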

For devices like the Launch or Vorze A10 Piston, which have mechanically limited movement ranges, this is definitely a problem. You /might/ be able to use recent history to gauge your movement range, but that’s still going to be iffy at best. You could also just reserve the “normal” fast movement range of the toy to be a subset of the total movement (i.e. move 20-80 instead of 0-100), and leave the outer values for “extras” like this, but that’s still not going to save you if you really underestimate the movement range.
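
Something like this is what I mean by reserving headroom (the numbers are just illustrative):

```python
def to_device_range(pos, lo=20, hi=80):
    """Map a detected stroke position (0-100) into a reserved subrange,
    leaving the outer values (0-20, 80-100) as headroom for when the
    tracked motion keeps going past where you expected it to stop."""
    return lo + (hi - lo) * pos / 100.0

print(to_device_range(0))    # 20.0 -- "bottom" of normal movement
print(to_device_range(100))  # 80.0 -- "top", with 20 units of headroom left
```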

If you don’t mind me asking, what kind of on-screen action are you trying to track? Limiting the problem space a bit might let me give better advice.


#4

Ah, ok, neat! I’d been planning on poking at some CV-related encoding for movies, but starting with static movies and using some very human-centered interfaces at first, i.e. find edit points, have a human set the area to watch, then just run simple optical flow from that point, etc…
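
For reference, the kind of thing I had in mind is roughly this, using OpenCV’s dense Farneback flow on a human-selected region (the file name and region coordinates are placeholders):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")   # placeholder file
x, y, w, h = 100, 100, 200, 200      # region a human picked to watch

ok, frame = cap.read()
prev = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    # Dense optical flow over just the watched region.
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Mean vertical motion as a crude proxy for stroke direction/speed.
    print("mean vertical flow:", np.mean(flow[..., 1]))
    prev = gray

cap.release()
```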

Of course, with my current backlog for just building this project, that’ll happen in a decade or so. :expressionless:

Are you trying to build up some Haar-based classifiers or something to automate that? 'cause it seems like without scene context, yeah, there’s gonna be a ton of unbounded tracked movement that could make things go weird.
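
In case it helps, the barebones OpenCV version of running a trained Haar cascade looks like this (the cascade file is a placeholder; you’d have to train your own for this kind of content):

```python
import cv2

# Placeholder: a cascade you'd train yourself with opencv_traincascade.
cascade = cv2.CascadeClassifier("my_trained_cascade.xml")

cap = cv2.VideoCapture("clip.mp4")  # placeholder file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect candidate regions at multiple scales.
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```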


#6

Ah, ok, most of my ideas are old-style OpenCV stuff 'cause I haven’t caught up on all the ML/TensorFlow stuff yet. Definitely interested to hear how this works out for you.