D-Dynamics – Discriminative Dynamic Model Learning

Traditionally, dynamic models are learned to best represent the “observations”, such as the silhouettes of an object moving through a sequence of video frames. The “true” object state (its position, pose, velocity) is missing and not available to the learning algorithm.

However, advances in measurement methods in recent years have changed this traditional setting. For instance, motion capture tools allow us to supplement the video observations of a person performing an action (e.g., walking) with “true” estimates of her pose. It now becomes possible to use both the observations and the targets to learn such dynamic models. Yet, the modeling and algorithmic methods that would let one do so are still in their infancy.

In this work we explore the space of Discriminative Dynamic Models: dynamic models that are specifically learned to make accurate predictions of an object’s state (pose, position, velocity, …) from image and video measurements. We focus on efficient and scalable learning methods that make use of possibly small sources of labeled dynamic data. To do so, we draw an analogy between this family of models, whose states are multivariate real-valued vectors, and the now-famous discrete-state models such as HMMs and CRFs.
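The core idea above, learning a dynamic model to optimize state-prediction accuracy from labeled (observation, state) pairs rather than to explain how the observations were generated, can be illustrated with a minimal sketch. The example below uses synthetic data as a hypothetical stand-in for motion-capture ground truth, and fits a linear predictor x_t ≈ A x_{t-1} + B y_t by least squares; the data dimensions, dynamics, and noise levels are all assumptions for illustration, not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled data (hypothetical stand-in for mocap ground truth):
# x_t is the true 2-D object state, y_t a noisy 4-D observation of it.
T = 500
A_true = np.array([[0.95, 0.1], [-0.1, 0.95]])   # true (stable) dynamics
H_true = rng.normal(size=(4, 2))                 # true observation map
x = np.zeros((T, 2))
x[0] = rng.normal(size=2)
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.05 * rng.normal(size=2)
y = x @ H_true.T + 0.1 * rng.normal(size=(T, 4))

# Discriminative learning: fit W = [A | B] by least squares so that
# x_t ≈ A x_{t-1} + B y_t, i.e. directly optimize state-prediction
# accuracy instead of modeling how observations were generated.
Z = np.hstack([x[:-1], y[1:]])        # inputs: previous state + current obs
X_next = x[1:]                        # targets: current true state
W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

# Filter a sequence: predict states from observations alone,
# reusing the model's own previous state estimate.
x_hat = np.zeros_like(x)
x_hat[0] = x[0]
for t in range(1, T):
    x_hat[t] = A_hat @ x_hat[t - 1] + B_hat @ y[t]

err = np.mean(np.linalg.norm(x_hat - x, axis=1))
print(f"mean state error: {err:.3f}")
```

In this linear-Gaussian toy case the discriminative fit effectively learns how to blend the dynamics prediction with the current observation; the appeal of the discriminative view is that the same recipe applies when the observation-to-state mapping is too complex to model generatively.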
