Any-Shot Learning From Multimodal Observations (ALMO)
Blog Article
In this paper, we propose a framework (ALMO) for any-shot learning from multimodal observations. Using training data containing both objects (inputs) and class attributes (side information) from multiple modalities, ALMO embeds the high-dimensional data into a common stochastic latent space with modality-specific encoders. A non-parametric classifier is then trained to predict the class labels of the objects. We perform probabilistic data fusion to combine the modalities in the stochastic latent space and learn class-conditional distributions for improved generalization and scalability.
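To make the pipeline concrete, here is a minimal PyTorch sketch of the two components named above: modality-specific encoders that map each modality to a distribution in a shared stochastic latent space, and a probabilistic fusion step that combines them. The diagonal-Gaussian posteriors, the precision-weighted (product-of-experts) fusion rule, and all names such as `ModalityEncoder` and `fuse_gaussians` are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: modality-specific encoders + probabilistic fusion in a shared
# stochastic latent space. Gaussian posteriors and product-of-experts fusion
# are assumptions for illustration, not taken from the ALMO paper.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Maps one modality (e.g. image features or class attributes) to a
    diagonal Gaussian over the shared latent space."""

    def __init__(self, input_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.mu_head(h), self.logvar_head(h)


def fuse_gaussians(mus, logvars):
    """Probabilistic fusion of per-modality Gaussians via a product of
    experts: precisions add, means are precision-weighted averages."""
    precisions = [torch.exp(-lv) for lv in logvars]
    total_precision = torch.stack(precisions).sum(dim=0)
    fused_mu = torch.stack(
        [m * p for m, p in zip(mus, precisions)]
    ).sum(dim=0) / total_precision
    fused_logvar = -torch.log(total_precision)  # variance is inverse precision
    return fused_mu, fused_logvar


# Example: fuse an image modality with an attribute (side-information) modality.
latent_dim = 64
image_encoder = ModalityEncoder(input_dim=2048, latent_dim=latent_dim)
attr_encoder = ModalityEncoder(input_dim=312, latent_dim=latent_dim)

images = torch.randn(8, 2048)      # e.g. pre-extracted CNN features
attributes = torch.randn(8, 312)   # e.g. class-attribute vectors

mu_img, lv_img = image_encoder(images)
mu_att, lv_att = attr_encoder(attributes)
mu_fused, lv_fused = fuse_gaussians([mu_img, mu_att], [lv_img, lv_att])
z = mu_fused + torch.randn_like(mu_fused) * torch.exp(0.5 * lv_fused)  # latent sample
```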
We formulate ALMO for both few-shot and zero-shot classification tasks, demonstrating significant improvements in recognition performance on the Omniglot and CUB-200 datasets compared with state-of-the-art baselines.
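As a rough illustration of how a non-parametric classifier in the fused latent space could cover both settings, the sketch below builds class prototypes from labelled support embeddings in the few-shot case, while in the zero-shot case prototypes could instead come from the attribute encoder alone. This is a generic prototypical-style classifier under those assumptions, not the paper's exact procedure; all names are hypothetical.

```python
# Hypothetical non-parametric classification in the fused latent space:
# nearest-prototype prediction, with few-shot prototypes from support
# embeddings and zero-shot prototypes from attribute-side latent means.
import torch


def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Mean latent embedding per class (non-parametric class representation)."""
    return torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in range(num_classes)]
    )


def classify(query: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = torch.cdist(query, prototypes)  # (num_query, num_classes)
    return dists.argmin(dim=1)


# Few-shot episode: 5-way, 1-shot support set embedded in the latent space.
support_z = torch.randn(5, 64)
support_y = torch.arange(5)
query_z = torch.randn(10, 64)
protos = class_prototypes(support_z, support_y, num_classes=5)
predictions = classify(query_z, protos)

# Zero-shot: prototypes would be taken directly from attribute-side latent
# means (e.g. mu_att above), so unseen classes need no labelled images.
```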