seqamlab

Sequence Analysis and Modeling Lab


Posts

Posted on September 22, 2019 (updated October 2, 2019)

Task-Discriminative Domain Alignment for Unsupervised Domain Adaptation

This is joint work by Behnam Gholami, Pritish Sahu, Minyoung Kim, and Vladimir Pavlovic.

Our new paper on domain adaptation was accepted at the Multi-Discipline Approach for Learning Concepts – Zero-Shot, One-Shot, Few-Shot, and Beyond workshop, held in conjunction with ICCV 2019.

Abstract

… Read the rest
Posted on September 22, 2019 (updated October 2, 2019)

Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach

This is joint work by Behnam Gholami, Pritish Sahu, Ognjen Rudovic, Konstantinos Bousmalis, and Vladimir Pavlovic.

Abstract

Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings with a single labeled source domain and a single target domain. However, in many real-world settings … Read the rest

Posted on September 22, 2019 (updated October 7, 2019)

Fast Adaptation of Deep Models for Facial Action Unit Detection Using Model-Agnostic Meta-Learning

Mihee Lee, Ognjen (Oggi) Rudovic, Vladimir Pavlovic, and Maja Pantic

Abstract

Detecting facial action units (AUs) is one of the fundamental steps in automatic recognition of facial expressions of emotion and cognitive states. Though there have been a variety of … Read the rest
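The core idea named in the title, model-agnostic meta-learning (MAML), can be illustrated with a toy sketch. This is a first-order approximation on synthetic linear-regression tasks, not the paper's AU-detection model; the tasks, slopes, and learning rates below are illustrative assumptions:

```python
import numpy as np

def task_grad(theta, x, y):
    # analytic gradient of the MSE of a linear model y ~ theta * x
    return np.mean(2 * (theta * x - y) * x)

def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.1):
    """One first-order MAML meta-update: adapt to each task with a
    single inner gradient step, then move theta along the average
    post-adaptation gradient across tasks."""
    meta_grad = 0.0
    for x, y in tasks:
        theta_adapted = theta - inner_lr * task_grad(theta, x, y)
        meta_grad += task_grad(theta_adapted, x, y)
    return theta - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
# toy "tasks": linear regressions with slopes clustered near 2.0
tasks = []
for slope in (1.8, 2.0, 2.2):
    x = rng.normal(size=50)
    tasks.append((x, slope * x))

theta = 0.0
for _ in range(200):
    theta = maml_step(theta, tasks)
print(theta)  # meta-learned initialization, close to the task-family center
```

The meta-learned `theta` ends up near the center of the task family (slope 2.0), so one inner gradient step adapts it quickly to any individual task — the property the paper exploits for fast per-subject adaptation.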

Posted on September 21, 2019 (updated October 2, 2019)

Relevance Factor VAE: Learning and Identifying Disentangled Factors

Minyoung Kim, Yuting Wang, Pritish Sahu, and Vladimir Pavlovic.

Abstract

We propose a novel VAE-based deep auto-encoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all … Read the rest
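A basic ingredient behind identifying relevant latent factors in VAE-style models is the per-dimension KL term of a diagonal-Gaussian encoder: dimensions whose KL to the prior stays near zero carry no information. A minimal numpy sketch (the toy encoder outputs and the 0.1 threshold are illustrative assumptions, not the paper's relevance mechanism):

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """Per-dimension KL( N(mu, exp(logvar)) || N(0, 1) ) for a
    diagonal-Gaussian encoder, averaged over a batch."""
    kl = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return kl.mean(axis=0)

# toy encoder outputs: 4 latent dims, two collapsed to the prior
mu = np.array([[1.2, 0.0, -0.8, 0.0],
               [0.9, 0.0,  1.1, 0.0]])
logvar = np.array([[-1.0, 0.0, -1.5, 0.0],
                   [-0.5, 0.0, -1.0, 0.0]])

kl = kl_per_dim(mu, logvar)
relevant = kl > 0.1   # simple threshold: dims 0 and 2 are informative
print(relevant)
```

Dimensions 1 and 3 match the prior exactly (zero KL), so they would be flagged as irrelevant nuisance factors under this simple criterion.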

Posted on September 21, 2019 (updated October 2, 2019)

Generative Adversarial Talking Head: Bringing Portraits to Life with a Weakly Supervised Neural Network

Hai X. Pham, Yuting Wang, and Vladimir Pavlovic.

Abstract

This paper presents Generative Adversarial Talking Head, a novel deep generative neural network that enables fully automatic facial expression synthesis of an arbitrary portrait with continuous action unit (AU) coefficients. … Read the rest

Posted on September 21, 2019 (updated October 2, 2019)

End-to-end Learning for 3D Facial Animation from Speech, ICMI 2018

This is joint work by Hai Xuan Pham, Yuting Wang, and Vladimir Pavlovic. The paper was accepted at the 20th ACM International Conference on Multimodal Interaction.

Abstract

We present a deep learning framework for real-time speech-driven 3D facial animation … Read the rest

Posted on September 21, 2019 (updated October 2, 2019)

Scenario Generalization of Data-driven Imitation Models in Crowd Simulation, MIG2019

Our new paper on crowd simulation was accepted by ACM SIGGRAPH Conference on Motion, Interaction and Games 2019. Congratulations to Gang!

Abstract

Crowd simulation, the study of the movement of multiple agents in complex environments, presents a unique application domain for … Read the rest

Posted on September 6, 2019 (updated October 7, 2019)

Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement – ICCV’19 Oral

Our new paper on Bayesian representation learning was accepted as Oral at ICCV 2019. Congratulations to Minyoung, Yuting, and Pritish!

Abstract

We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in … Read the rest

Posted on August 11, 2019 (updated February 24, 2020)

The Art of Food: Meal Image Synthesis from Ingredients

The task is to generate a meal image given a set of ingredients.

Fangda Han
2019-08-11

Abstract

In this work we propose a new computational framework, based on generative deep models, for synthesis of photo-realistic food meal images from textual … Read the rest

Posted on July 9, 2019 (updated October 2, 2019)

Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach – Oral Paper at CVPR 2019

This is joint work by Minyoung Kim, Pritish Sahu, Behnam Gholami, and Vladimir Pavlovic.

Abstract

For unsupervised domain adaptation, the target domain error can be provably reduced by having a shared input representation that makes the source and target domains … Read the rest
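The shared-representation idea the abstract mentions is usually driven by a discrepancy penalty between source and target features. As a generic illustration — not the paper's max-margin Gaussian process — here is a linear-kernel maximum mean discrepancy (MMD), a common stand-in alignment objective; the synthetic feature batches are illustrative assumptions:

```python
import numpy as np

def mmd_linear(source, target):
    """Linear-kernel MMD between two feature batches: squared
    Euclidean distance between their feature means. Minimizing it
    pushes source and target representations toward each other."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, size=(200, 8))       # source features
shifted = rng.normal(loc=1.0, size=(200, 8))      # covariate-shifted target
aligned = shifted - shifted.mean(axis=0)          # crude mean alignment

# alignment shrinks the discrepancy between the two feature batches
print(mmd_linear(source, shifted), mmd_linear(source, aligned))
```

Even this crude mean-centering drives the discrepancy close to zero, which is the mechanism by which a shared representation can provably reduce target-domain error.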


Recent Posts

  • “These Pizzas Do Not Exist”
  • Picture-to-Amount (PITA): Predicting Relative Ingredient Amounts from Food Images
  • New NSF Grant: Learning Joint Crowd-Space Embeddings for Cross-Modal Crowd Behavior Prediction
  • CookGAN Virtual Cooking Demo
  • CookGAN recognized by the CV/ML community


    Contact us

    SEQAM Lab
    CBIM Center
    Rutgers University
    617 Bowser Road
    Piscataway, New Jersey 08854
    United States of America
    Phone: +1 (848) 445-8846
Fax: +1 (732) 445-0537
