Satyajit Tourani, Udit Singh Parihar, Dhagash Desai, Sourav Garg, Ravi Kiran Sarvadevabhatla, Michael M. Milford, K. Madhava Krishna
In 16th International Conference on Computer Vision Theory and Applications (VISAPP), 2021
Enabling loop closure for robotic navigation via an approach which recognizes places from 180-degree opposite viewpoints in highly repetitive environments.
Paper Abstract
Significant advances have been made recently in Visual Place Recognition (VPR), feature correspondence, and localization due to the proliferation of deep-learning-based methods. However, existing approaches tend to address, partially or fully, only one of two key challenges: viewpoint change and perceptual aliasing. In this paper, we present novel research that simultaneously addresses both challenges by combining deep-learned features with geometric transformations based on reasonable domain assumptions about navigation on a ground plane, whilst also removing the requirement for specialized hardware setups (e.g. lighting, downward-facing cameras). In particular, our integration of VPR with SLAM by leveraging the robustness of deep-learned features and our homography-based extreme viewpoint invariance significantly boosts the performance of the VPR, feature correspondence, and pose graph submodules of the SLAM pipeline. For the first time, we demonstrate a localization system capable of state-of-the-art performance despite perceptual aliasing and extreme 180-degree-rotated viewpoint change in a range of real-world and simulated experiments. Our system is able to achieve early loop closures that prevent significant drift in SLAM trajectories. We also extensively compare several deep architectures for VPR and descriptor matching, and show that superior place recognition and descriptor matching across opposite views result in a similar performance gain in back-end pose graph optimization.
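To make the homography-based viewpoint handling above concrete, here is a minimal Python/OpenCV sketch, under the assumption that keypoints have already been matched (e.g. via deep local features); the function name, RANSAC threshold and inlier-count acceptance rule are illustrative, not the authors' implementation.

# Hypothetical sketch: robust homography estimation between two views from
# pre-matched keypoint coordinates. Illustrative only; not the paper's code.
import numpy as np
import cv2

def estimate_homography(pts_query, pts_reference, ransac_thresh=5.0):
    """pts_query, pts_reference: (N, 2) arrays of matched pixel coordinates."""
    pts_query = np.asarray(pts_query, dtype=np.float32).reshape(-1, 1, 2)
    pts_reference = np.asarray(pts_reference, dtype=np.float32).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences caused by perceptual aliasing.
    H, inlier_mask = cv2.findHomography(pts_query, pts_reference,
                                        cv2.RANSAC, ransac_thresh)
    inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    return H, inliers

A loop-closure candidate could then be accepted only if the inlier count exceeds a chosen threshold before its constraint is added to the pose graph.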
Pranay Gupta, Anirudh Thatipelli, Aditya Aggarwal, Shubh Maheshwari, Neel Trivedi, Sourav Das, Ravi Kiran Sarvadevabhatla
In arXiv, 2020
In this paper, we study current and upcoming frontiers across the landscape of skeleton-based human action recognition.
Paper Abstract Project page
In this paper, we study current and upcoming frontiers across the landscape of skeleton-based human action recognition. To begin with, we benchmark state-of-the-art models on the NTU-120 dataset and provide a multi-layered assessment of the results. To examine skeleton action recognition 'in the wild', we introduce Skeletics-152, a curated and 3-D pose-annotated subset of RGB videos sourced from Kinetics-700, a large-scale action dataset. The results from benchmarking the top performers of NTU-120 on Skeletics-152 reveal the challenges and domain gap induced by actions 'in the wild'. We extend our study to include out-of-context actions by introducing Skeleton-Mimetics, a dataset derived from the recently introduced Mimetics dataset. Finally, as a new frontier for action recognition, we introduce Metaphorics, a dataset with caption-style annotated YouTube videos of the popular social game Dumb Charades and interpretative dance performances. Overall, our work characterizes the strengths and limitations of existing approaches and datasets. It also provides an assessment of top-performing approaches across a spectrum of activity settings and, via the introduced datasets, proposes new frontiers for human action recognition.
Agam Dwivedi, Rohit Saluja, Ravi Kiran Sarvadevabhatla
In CVPR Workshop on Text and Documents in the Deep Learning Era, 2020
Datasets (real, synthetic) and a CNN-LSTM Attention OCR for printed classical Indic documents containing very long words.
Paper Abstract Project page
OCR for printed classical Indic documents written in Sanskrit is a challenging research problem. It involves complexities such as image degradation, lack of datasets and long-length words. Due to these challenges, the word accuracy of available OCR systems, both academic and industrial, is not very high for such documents. To address these shortcomings, we develop a Sanskrit-specific OCR system. We present an attention-based LSTM model for reading Sanskrit characters in line images. We introduce a dataset of Sanskrit document images annotated at line level. To augment real data and enable high performance for our OCR, we also generate synthetic data via curated font selection and rendering designed to incorporate crucial glyph substitution rules. Consequently, our OCR achieves a word error rate of 15.97% and a character error rate of 3.71% on challenging Indic document texts and outperforms strong baselines. Overall, our contributions set the stage for application of OCRs on large corpora of classic Sanskrit texts containing arbitrarily long and highly conjoined words.
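For readers unfamiliar with the reported metrics, the self-contained Python sketch below shows how character and word error rates can be computed from edit distance; it is a generic illustration, not the paper's evaluation script.

# Character error rate (CER) and word error rate (WER) via Levenshtein distance.
def edit_distance(a, b):
    """Dynamic-programming Levenshtein distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[-1]

def cer(hypothesis, reference):
    return edit_distance(hypothesis, reference) / max(len(reference), 1)

def wer(hypothesis, reference):
    hyp_words, ref_words = hypothesis.split(), reference.split()
    return edit_distance(hyp_words, ref_words) / max(len(ref_words), 1)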
Rishabh Baghel, Ravi Kiran Sarvadevabhatla
In arXiv, 2020
A novel hierarchical GCN-VAE for controllable part-based layout generation of objects from multiple categories, all using a single unified model.
Paper Abstract Project page
We propose OPAL-Net, a novel hierarchical architecture for part-based layout generation of objects from multiple categories using a single unified model. We adopt a coarse-to-fine strategy involving semantically conditioned autoregressive generation of bounding box layouts and pixel-level part layouts for objects. We use Graph Convolutional Networks, Deep Recurrent Networks along with custom-designed Conditional Variational Autoencoders to enable flexible, diverse and category-aware generation of object layouts. We train OPAL-Net on PASCAL-Parts dataset. The generated samples and corresponding evaluation scores demonstrate the versatility of OPAL-Net compared to ablative variants and baselines.
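As a rough sketch of the conditional-VAE machinery mentioned above, the snippet below shows a generic reconstruction-plus-KL objective and the reparameterization step in PyTorch; the graph-convolutional and recurrent components of OPAL-Net are omitted, and all names and weights are illustrative assumptions.

# Generic conditional-VAE loss: reconstruction term plus KL divergence to N(0, I).
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def cvae_loss(recon_logits, target, mu, logvar, kl_weight=1.0):
    """recon_logits/target: predicted and ground-truth layout maps."""
    recon = F.binary_cross_entropy_with_logits(recon_logits, target, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl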
Sai Shubodh Puligilla, Satyajit Tourani, Tushar Vaidya, Udit Singh Parihar, Ravi Kiran Sarvadevabhatla, K. Madhava Krishna
In International Conference on Robotics and Automation (ICRA), 2020
This paper explores the role of topological understanding and the benefits such an understanding brings to a robot SLAM framework.
Paper Project page Abstract
We showcase a topological mapping framework for a challenging indoor warehouse setting. At the most abstract level, the warehouse is represented as a Topological Graph where the nodes of the graph represent a particular warehouse topological construct (e.g. rackspace, corridor) and the edges denote the existence of a path between two neighbouring nodes or topologies. At the intermediate level, the map is represented as a Manhattan Graph where the nodes and edges are characterized by Manhattan properties and as a Pose Graph at the lower-most level of detail. The topological constructs are learned via a Deep Convolutional Network while the relational properties between topological instances are learned via a Siamese-style Neural Network. In the paper, we show that maintaining abstractions such as the Topological Graph and Manhattan Graph helps in recovering an accurate Pose Graph starting from a highly erroneous and unoptimized Pose Graph. We show how this is achieved by embedding topological and Manhattan relations as well as Manhattan Graph aided loop closure relations as constraints in the backend Pose Graph optimization framework. The recovery of a near ground-truth Pose Graph on real-world indoor warehouse scenes vindicates the efficacy of the proposed framework.
Navaneet Murthy, Shashank Shekhar, Ravi Kiran Sarvadevabhatla, R. Venkatesh Babu, Anirban Chakraborty
In IEEE Transactions on Information Forensics and Security (IEEE T-IFS), 2019
An intelligent sequential fusion technique for multi-camera person re-identification (re-id). The approach is designed not only to improve re-id accuracy but also to learn increasingly better feature representations as observations from additional cameras are fused.
Paper Abstract
Given a target image as query, person re-identification systems retrieve a ranked list of candidate matches on a per-camera basis. In deployed systems, a human operator scans these lists and labels sighted targets by touch or mouse-based selection. However, classical re-id approaches generate per-camera lists independently. Therefore, target identifications by the operator in a subset of cameras cannot be utilized to improve the ranking of the target in the remaining cameras of the network. To address this shortcoming, we propose a novel sequential multi-camera re-id approach. The proposed approach can accommodate human operator inputs and provides early gains via a monotonic improvement in target ranking. At the heart of our approach is a fusion function which operates on deep feature representations of the query and candidate matches. We formulate an optimization procedure custom-designed to incrementally improve the query representation. Since existing evaluation methods cannot be directly adopted to our setting, we also propose two novel evaluation protocols. The results on two large-scale re-id datasets (Market-1501, DukeMTMC-reID) demonstrate that our multi-camera method significantly outperforms baselines and other popular feature fusion schemes. Additionally, we conduct a comparative subject-based study of human operator performance. The superior operator performance enabled by our approach makes a compelling case for its integration into deployable video-surveillance systems.
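A minimal Python sketch of the sequential-fusion idea is given below; it uses a simple convex combination of L2-normalized features and cosine-similarity ranking as stand-ins for the learned fusion function and optimization procedure described in the paper.

# Illustrative fusion of a query feature with an operator-confirmed match,
# followed by re-ranking of a gallery. Not the paper's exact fusion function.
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x) + eps)

def fuse(query_feat, confirmed_feat, alpha=0.5):
    """Convex-combination fusion; alpha is an illustrative hyperparameter."""
    return l2_normalize(alpha * l2_normalize(query_feat)
                        + (1.0 - alpha) * l2_normalize(confirmed_feat))

def rank_gallery(query_feat, gallery_feats):
    """gallery_feats: (N, D) array; returns gallery indices sorted by similarity."""
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ l2_normalize(query_feat)
    return np.argsort(-sims)

After each operator confirmation in one camera, the fused feature would replace the query representation and the remaining cameras' galleries would be re-ranked, mirroring in simplified form the incremental improvement described above.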
Abhishek Prusty, Aitha Sowmya, Abhishek Trivedi, Ravi Kiran Sarvadevabhatla
In IAPR International Conference on Document Analysis and Recognition (ICDAR) , 2019
We introduce Indiscapes - the largest publicly available layout annotated dataset of historical Indic manuscript images.
Paper Abstract Project page
Historical palm-leaf manuscripts and early paper documents from the Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first-ever dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenge of large diversity in scripts and presence of dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a Fully Convolutional Deep Neural Network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of the proposed architecture on images from the Indiscapes dataset. For annotation flexibility and keeping the non-technical nature of domain experts in mind, we also contribute a custom, web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for enabling downstream applications such as OCR and word-spotting in historical Indic manuscripts at scale.
Ravi Kiran Sarvadevabhatla, Shiv Surya, Trisha Mittal, R. Venkatesh Babu
In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018
Journal version of the AAAI-18 paper
Paper (Pre-print) Abstract Project page
The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. In our work, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, a guessing task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Sketch-QA involves asking a fixed question ("What object is being drawn?") and gathering open-ended guess-words from human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we propose a deep neural model which generates guess-words in response to temporally evolving human-drawn object sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games.
Ravi Kiran Sarvadevabhatla, Shiv Surya, Trisha Mittal, R. Venkatesh Babu
In AAAI, 2018
The first-ever deep neural network for mimicking Pictionary-style guessing with object sketches as input.
Paper Abstract Project page Bibtex
The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. Similarly, performance on multi-disciplinary tasks such as Visual Question Answering (VQA) is considered a marker for gauging progress in Computer Vision. In our work, we bring games and VQA together. Specifically, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Notably, Sketch-QA involves asking a fixed question ("What object is being drawn?") and gathering open-ended guess-words from human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we subsequently propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games.
Ravi Kiran Sarvadevabhatla, Isht Dwivedi, Abhijat Biswas, Sahil Manocha, R. Venkatesh Babu
In ACM Multimedia (ACMMM), 2017
We explore the problem of parsing sketched objects, i.e. given a freehand line sketch of an object, determine its salient attributes (e.g. category, semantic parts, pose). To this end, we propose SketchParse, the first deep-network architecture for fully automatic parsing of freehand object sketches.
Paper Abstract Project page Bibtex
The ability to semantically interpret hand-drawn line sketches, although very challenging, can pave the way for novel applications in multimedia. We propose SketchParse, the first deep-network architecture for fully automatic parsing of freehand object sketches. SketchParse is configured as a two-level fully convolutional network. The first level contains shared layers common to all object categories. The second level contains a number of expert sub-networks. Each expert specializes in parsing sketches from object categories which contain structurally similar parts. Effectively, the two-level configuration enables our architecture to scale up efficiently as additional categories are added. We introduce a router layer which (i) relays sketch features from shared layers to the correct expert and (ii) eliminates the need to manually specify object category during inference. To bypass laborious part-level annotation, we sketchify photos from semantic object-part image datasets and use them for training. Our architecture also incorporates object pose prediction as a novel auxiliary task which boosts overall performance while providing supplementary information regarding the sketch. We demonstrate SketchParse's abilities (i) on two challenging large-scale sketch datasets, (ii) in parsing unseen, semantically related object categories and (iii) in improving fine-grained sketch-based image retrieval. As a novel application, we also outline how SketchParse's output can be used to generate caption-style descriptions for hand-drawn sketches.
Swaminathan Gurumurthy, Ravi Kiran Sarvadevabhatla, R. Venkatesh Babu
In Computer Vision and Pattern Recognition (CVPR), 2017
We propose DeLiGAN -- a novel image generative model for diverse and limited training data scenarios. Across a number of image modalities including hand-drawn sketches, we show that DeLiGAN generates diverse samples. To quantitatively characterize intra-class diversity of generated samples, we also introduce a modified version of "inception-score", a measure found to correlate well with human assessment of generated samples.
Paper Abstract Project page Bibtex
@InProceedings{Gurumurthy_2017_CVPR,
author = {Gurumurthy, Swaminathan and Kiran Sarvadevabhatla, Ravi and Venkatesh Babu, R.},
title = {DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}
A class of recent approaches for generating images, called Generative Adversarial Networks (GAN), have been used to generate impressively realistic images of objects, bedrooms, handwritten digits and a variety of other image modalities. However, typical GAN-based approaches require large amounts of training data to capture the diversity across the image modality. In this paper, we propose DeLiGAN -- a novel GAN-based architecture for diverse and limited training data scenarios. In our approach, we reparameterize the latent generative space as a mixture model and learn the mixture model's parameters along with those of GAN. This seemingly simple modification to the GAN framework is surprisingly effective and results in models which enable diversity in generated samples although trained with limited data. In our work, we show that DeLiGAN can generate images of handwritten digits, objects and hand-drawn sketches, all using limited amounts of data. To quantitatively characterize intra-class diversity of generated samples, we also introduce a modified version of "inception-score", a measure which has been found to correlate well with human assessment of generated samples.
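The core latent-space reparameterization described above can be sketched in a few lines of PyTorch; the component count, initialization and uniform component sampling here are illustrative assumptions rather than the authors' original implementation.

# Learnable mixture-of-Gaussians latent space: z = mu_c + sigma_c * eps.
import torch
import torch.nn as nn

class MixtureLatent(nn.Module):
    def __init__(self, num_components=50, latent_dim=100):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_components, latent_dim) * 0.1)
        self.log_sigma = nn.Parameter(torch.zeros(num_components, latent_dim))

    def forward(self, batch_size):
        idx = torch.randint(0, self.mu.size(0), (batch_size,))   # pick components
        eps = torch.randn(batch_size, self.mu.size(1))
        # mu_c and sigma_c are trained jointly with the generator parameters.
        return self.mu[idx] + torch.exp(self.log_sigma[idx]) * eps

Samples from MixtureLatent would be fed to the GAN generator in place of draws from a single N(0, I) prior.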
Ravi Kiran Sarvadevabhatla, Sudharshan Suresh, R. Venkatesh Babu
In IEEE Transactions on Image Processing (TIP), 2017
We analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. In our paper, we show that this multi-level consistency in the fixation data can be exploited to (a) predict a test sketch's category given only its fixation sequence and (b) build a computational model which predicts part-labels underlying fixations on objects.
Paper Abstract Project page Bibtex
@article{sarvadevabhatla2017object,
title={Object category understanding via eye fixations on freehand sketches},
author={Sarvadevabhatla, Ravi Kiran and Suresh, Sudharshan and Babu, R Venkatesh},
journal={IEEE Transactions on Image Processing},
volume={26},
number={5},
pages={2508--2518},
year={2017},
publisher={IEEE}
}
The study of eye gaze fixations on photographic images is an active research area. In contrast, the image subcategory of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In our paper, we show that the multi-level consistency in the fixation data can be exploited to (a) predict a test sketch's category given only its fixation sequence and (b) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images.
Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu
In Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), Guwahati, India, 2016
Visualizing category-level features using color-coding is impractical when the number of categories is large. This paper presents an approach which utilizes the geometrical attributes of per-category feature collections to order the categories. Our approach enables a novel viewpoint for exploring large-scale object category collections.
Paper Abstract Bibtex
@inproceedings{Sarvadevabhatla:2016:AOC:3009977.3010037,
author = {Sarvadevabhatla, Ravi Kiran and Meesala, Raviteja and Hegde, Manjunath and R., Venkatesh Babu},
title = {Analyzing Object Categories via Novel Category Ranking Measures Defined on Visual Feature Embeddings},
booktitle = {Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing},
series = {ICVGIP '16},
year = {2016},
isbn = {978-1-4503-4753-2},
location = {Guwahati, Assam, India},
pages = {79:1--79:6},
articleno = {79},
numpages = {6},
url = {http://doi.acm.org/10.1145/3009977.3010037},
doi = {10.1145/3009977.3010037},
acmid = {3010037},
publisher = {ACM},
address = {New York, NY, USA},
}
Visualizing 2-D/3-D embeddings of image features can help gain an intuitive understanding of the image category landscape. However, popular methods for visualizing such embeddings (e.g. color-coding by category) are impractical when the number of categories is large. To address this and other shortcomings, we propose novel quantitative measures defined on image feature embeddings. Each measure produces a ranked ordering of the categories and provides an intuitive vantage point from which to view the entire set of categories. As an experimental testbed, we use deep features obtained from category-epitomes, a recently introduced minimalist visual representation, across 160 object categories. We embed the features in a visualization-friendly yet similarity-preserving 2-D manifold and analyze the inter/intra-category distributions of these embeddings using the proposed measures. Our analysis demonstrates that the category ordering methods enable new insights for the domain of large-category object representations. Moreover, our ordering measure approach is general in nature and can be applied to any feature-based representation of categories.
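To convey the flavour of such ordering measures, the sketch below ranks categories of a 2-D embedding by a simple compactness-versus-separation score computed with NumPy; the actual measures proposed in the paper differ, so treat this as a stand-in.

# Rank categories by intra-category spread and distance to the nearest other centroid.
import numpy as np

def rank_categories(points_2d, labels):
    """points_2d: (N, 2) embedded features; labels: (N,) category ids."""
    cats = np.unique(labels)
    centroids = np.stack([points_2d[labels == c].mean(axis=0) for c in cats])
    spread = np.array([np.linalg.norm(points_2d[labels == c] - centroids[i],
                                      axis=1).mean()
                       for i, c in enumerate(cats)])
    dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nearest_other = dists.min(axis=1)
    # Compact, well-separated categories rank first under this toy score.
    return cats[np.argsort(spread - nearest_other)]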
Ravi Kiran Sarvadevabhatla, Shanthakumar Venkatraman, R. Venkatesh Babu
In Asian Conference on Computer Vision (ACCV), Taipei, Taiwan ROC, 2016
Top-1/Top-5 error based benchmarking results for large-scale object recognition datasets do not reveal which aspects of the recognition problem (robustness to occlusion, loss of global detail) the classifiers are good at. Moreover, the overall approach provides a falsely optimistic picture due to dataset bias. In this paper, we propose a novel semantic-part based dataset and benchmarking approach which overcomes the shortcomings mentioned above.
Paper Abstract Project page Bibtex
@article{DBLP:journals/corr/Sarvadevabhatla16b,
author = {Ravi Kiran Sarvadevabhatla and Shanthakumar Venkatraman and R. Venkatesh Babu},
title = {'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems},
journal = {CoRR},
volume = {abs/1611.07703},
year = {2016},
url = {http://arxiv.org/abs/1611.07703},
timestamp = {Thu, 01 Dec 2016 19:32:08 +0100},
biburl = {http://dblp.uni-trier.de/rec/bib/journals/corr/Sarvadevabhatla16b},
bibsource = {dblp computer science bibliography, http://dblp.org}
}
An examination of object recognition challenge leaderboards (ILSVRC, PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small differences amongst themselves in terms of error rate/mAP. To better differentiate the top performers, additional criteria are required. Moreover, the (test) images, on which the performance scores are based, predominantly contain fully visible objects. Therefore, `harder' test images, mimicking the challenging conditions (e.g. occlusion) in which humans routinely recognize objects, need to be utilized for benchmarking. To address the concerns mentioned above, we make two contributions. First, we systematically vary the level of local object-part content, global detail and spatial context in images from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12. Second, we propose an object-part based benchmarking procedure which quantifies classifiers' robustness to a range of visibility and contextual settings. The benchmarking procedure relies on a semantic similarity measure that naturally addresses potential semantic granularity differences between the category labels in training and test datasets, thus eliminating manual mapping. We use our procedure on the PPSS-12 dataset to benchmark top-performing classifiers trained on the ILSVRC-2012 dataset. Our results show that the proposed benchmarking procedure enables additional differentiation among state-of-the-art object classifiers in terms of their ability to handle missing content and insufficient object detail. Given this capability for additional differentiation, our approach can potentially supplement existing benchmarking procedures used in object recognition challenge leaderboards.
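One concrete way to realize such a semantic similarity measure, offered here only as an assumed illustration and not necessarily the measure used in the paper, is WordNet path similarity between label synsets (requires the NLTK WordNet corpus, e.g. via nltk.download('wordnet')).

# Maximum WordNet path similarity between noun senses of two category labels.
from nltk.corpus import wordnet as wn

def label_similarity(label_a, label_b):
    syns_a = wn.synsets(label_a, pos=wn.NOUN)
    syns_b = wn.synsets(label_b, pos=wn.NOUN)
    scores = [a.path_similarity(b) for a in syns_a for b in syns_b]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

# e.g. label_similarity('tabby', 'cat') exceeds label_similarity('tabby', 'bus'),
# letting fine-grained training labels be credited against coarser test labels.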
Ravi Kiran Sarvadevabhatla, Jogendra Nath Kundu, R. Venkatesh Babu
In ACM Multimedia Conference (ACMMM), Amsterdam, The Netherlands 2016
We propose a Recurrent Neural Network architecture which exploits the long-term sequential and structural regularities in sketch stroke data for large-scale recognition of hand-drawn object sketches.
Paper Abstract Bibtex
@inproceedings{Sarvadevabhatla:2016:EMR:2964284.2967220,
author = {Sarvadevabhatla, Ravi Kiran and Kundu, Jogendra and R, Venkatesh Babu},
title = {Enabling My Robot To Play Pictionary: Recurrent Neural Networks For Sketch Recognition},
booktitle = {Proceedings of the 2016 ACM Conference on Multimedia},
year = {2016},
location = {Amsterdam, The Netherlands},
pages = {247--251},
url = {http://doi.acm.org/10.1145/2964284.2967220},
publisher = {ACM},
address = {New York, NY, USA},
}
Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects.
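A schematic PyTorch rendering of the described setup, a GRU over per-timestep sketch features with a classification loss applied at every timestep and weighted toward later (more complete) strokes, is shown below; the dimensions and the linear weighting scheme are illustrative assumptions.

# GRU sketch classifier with weighted per-timestep cross-entropy loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchGRUClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=160):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                    # x: (B, T, feat_dim)
        h, _ = self.gru(x)                   # h: (B, T, hidden_dim)
        return self.classifier(h)            # logits: (B, T, num_classes)

def weighted_timestep_loss(logits, labels):
    """labels: (B,) category ids shared across all T timesteps of each sketch."""
    B, T, C = logits.shape
    weights = torch.linspace(0.1, 1.0, T, device=logits.device)  # later steps weigh more
    per_t = F.cross_entropy(logits.reshape(B * T, C),
                            labels.repeat_interleave(T), reduction='none')
    return (per_t.reshape(B, T) * weights).mean()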
Ravi Kiran Sarvadevabhatla, Shiv Surya, Srinivas SS Kruthiventi, R. Venkatesh Babu
In ACM Multimedia Conference (ACMMM), Amsterdam, The Netherlands 2016
In this paper, we present SwiDeN: our Convolutional Neural Network (CNN) architecture which recognizes objects regardless of how they are visually depicted (line drawing, realistic shaded drawing, photograph etc.)
Paper Code Abstract Bibtex
@inproceedings{Sarvadevabhatla:2016:SCN:2964284.2967208,
author = {Sarvadevabhatla, Ravi Kiran and Surya, Shiv and Kruthiventi, Srinivas S S and R., Venkatesh Babu},
title = {SwiDeN: Convolutional Neural Networks For Depiction Invariant Object Recognition},
booktitle = {Proceedings of the 2016 ACM on Multimedia Conference},
series = {MM '16},
year = {2016},
isbn = {978-1-4503-3603-1},
location = {Amsterdam, The Netherlands},
pages = {187--191},
numpages = {5},
url = {http://doi.acm.org/10.1145/2964284.2967208},
doi = {10.1145/2964284.2967208},
acmid = {2967208},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {convolutional neural networks, deep learning, depiction-invariance, object category recognition},
}
Current state of the art object recognition architectures achieve impressive performance but are typically specialized for a single depictive style (e.g. photos only, sketches only). In this paper, we present SwiDeN: our Convolutional Neural Network (CNN) architecture which recognizes objects regardless of how they are visually depicted (line drawing, realistic shaded drawing, photograph etc.). In SwiDeN, we utilize a novel `deep' depictive style-based switching mechanism which appropriately addresses the depiction-specific and depiction-invariant aspects of the problem. We compare SwiDeN with alternative architectures and prior work on a 50-category Photo-Art dataset containing objects depicted in multiple styles. Experimental results show that SwiDeN outperforms other approaches for the depiction-invariant object recognition problem.
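The depiction-based switching idea can be caricatured as routing shared features through a style-specific branch chosen by a predicted depiction label; the PyTorch sketch below is a deliberate simplification for illustration, not the SwiDeN architecture itself.

# Toy depiction switch: a style head selects one of several style-specific branches.
import torch
import torch.nn as nn

class DepictionSwitch(nn.Module):
    def __init__(self, feat_dim=256, num_styles=2, num_classes=50):
        super().__init__()
        self.style_head = nn.Linear(feat_dim, num_styles)   # predicts depiction style
        self.branches = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_styles)])
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, shared_feat):                          # (B, feat_dim)
        style = self.style_head(shared_feat).argmax(dim=1)   # hard routing decision
        routed = torch.stack([self.branches[s](f)
                              for s, f in zip(style.tolist(), shared_feat)])
        return self.classifier(torch.relu(routed))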
Ravi Kiran Sarvadevabhatla, R. Venkatesh Babu
In ACM Multimedia Conference (ACMMM), Amsterdam, The Netherlands 2016
To analyze intra/inter-category variations of object appearance, we present an approach which represents the relative frequency of object part presence as a category-level word cloud. In this paper, we explore such word cloud style visualizations to characterize category-epitomes, a novel visual representation for objects introduced in our previous work.
Paper Abstract Project page Bibtex
@inproceedings{Sarvadevabhatla:2016:ASC:2964284.2967190,
author = {Sarvadevabhatla, Ravi Kiran and R, Venkatesh Babu},
title = {Analyzing Structural Characteristics of Object Category Representations From Their Semantic-part Distributions},
booktitle = {Proceedings of the 2016 ACM on Multimedia Conference},
series = {MM '16},
year = {2016},
isbn = {978-1-4503-3603-1},
location = {Amsterdam, The Netherlands},
pages = {97--101},
numpages = {5},
url = {http://doi.acm.org/10.1145/2964284.2967190},
doi = {10.1145/2964284.2967190},
acmid = {2967190},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {freehand sketch, object category representation, semantic part, visualization},
}
Studies from neuroscience show that part-mapping computations are employed by the human visual system in the process of object recognition. In this paper, we present an approach for analyzing semantic-part characteristics of object category representations. For our experiments, we use category-epitome, a recently proposed sketch-based spatial representation for objects. To enable part-importance analysis, we first obtain semantic-part annotations of the hand-drawn sketches originally used to construct the epitomes. We then examine the extent to which the semantic parts are present in the epitomes of a category and visualize the relative importance of parts as a word cloud. Finally, we show how such word cloud visualizations provide an intuitive understanding of category-level structural trends that exist in the category-epitome object representations. Our method is general in applicability and can also be used to analyze part-based visual object representations for other depiction methods such as photographic images.
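A toy Python sketch of the visualization step, turning per-category semantic-part frequencies into a word cloud with the wordcloud package, is shown below; the part counts are hypothetical and the rendering parameters arbitrary.

# Word cloud where word size reflects the relative frequency of each semantic part.
from collections import Counter
from wordcloud import WordCloud

# Hypothetical counts of parts found in the epitomes of one category.
part_counts = Counter({'head': 92, 'ear': 80, 'body': 75, 'tail': 40, 'leg': 35})
total = sum(part_counts.values())
relative_freq = {part: count / total for part, count in part_counts.items()}

cloud = WordCloud(width=400, height=300, background_color='white')
cloud.generate_from_frequencies(relative_freq)
cloud.to_file('part_cloud.png')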
Ravi Kiran Sarvadevabhatla, R. Venkatesh Babu
In ACM Multimedia Conference (ACMMM), Brisbane, Australia 2015
In this paper, we introduce a novel visual representation derived from freehand sketches of objects. This representation, called category-epitome, is designed to be a sparsified yet recognizable version of the original sketch. We examine various interesting properties of category-epitomes.
Paper Abstract Bibtex
@inproceedings{Sarvadevabhatla:2015:EDE:2733373.2806230,
author = {Sarvadevabhatla, Ravi Kiran and R, Venkatesh Babu},
title = {Eye of the Dragon: Exploring Discriminatively Minimalist Sketch-based Abstractions for Object Categories},
booktitle = {Proceedings of the 23rd ACM International Conference on Multimedia},
series = {MM '15},
year = {2015},
isbn = {978-1-4503-3459-4},
location = {Brisbane, Australia},
pages = {271--280},
numpages = {10},
url = {http://doi.acm.org/10.1145/2733373.2806230},
doi = {10.1145/2733373.2806230},
acmid = {2806230},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {deep learning, freehand sketch, object category recognition},
}
As a form of visual representation, freehand line sketches are typically studied as an end product of the sketching process. However, from a recognition point of view, one can also study various orderings and properties of the primitive strokes that compose the sketch. Studying sketches in this manner has enabled us to create novel sparse yet discriminative sketch-based representations for object categories which we term category-epitomes. Concurrently, the epitome construction provides a natural measure for quantifying the sparseness underlying the original sketch, which we term epitome-score. We analyze category-epitomes and epitome-scores for hand-drawn sketches from a sketch dataset of 160 object categories commonly encountered in daily life. Our analysis provides a novel viewpoint for examining the complexity of representation for visual object categories.
Sandra Okita, Victor Ng-Thow-Hing, Ravi Kiran Sarvadevabhatla
In ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, USA 2012
This paper examines how the interaction distance between humans and robots varies with factors such as age, initiator, gesture style, and movement announcement.
Paper Abstract Bibtex
@inproceedings{Okita:2012:CMI:2157689.2157756,
author = {Okita, Sandra Y. and Ng-Thow-Hing, Victor and Sarvadevabhatla, Ravi Kiran},
title = {Captain May I?: Proxemics Study Examining Factors That Influence Distance Between Humanoid Robots, Children, and Adults, During Human-robot Interaction},
booktitle = {Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction},
series = {HRI '12},
year = {2012},
isbn = {978-1-4503-1063-5},
location = {Boston, Massachusetts, USA},
pages = {203--204},
numpages = {2},
url = {http://doi.acm.org/10.1145/2157689.2157756},
doi = {10.1145/2157689.2157756},
acmid = {2157756},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {human robot interaction, proxemics study, young children},
}
This proxemics study examines whether the physical distance between robots and humans differ based on the following factors: 1) age: children vs. adults, 2) who initiates the approach: humans approaching the robot vs. robot approaching humans, 3) prompting: verbal invitation vs. non-verbal gesture (e.g., beckoning), and 4) informing: announcement vs. permission vs. nothing. Results showed that both verbal and non-verbal prompting had significant influence on physical distance. Physiological data is also used to detect the appropriate timing of approach for a more natural and comfortable interaction.
Ravi Kiran Sarvadevabhatla, Victor Ng-Thow-Hing, Mitchel Benovoy, Sam Musallam
In International Conference on Multimodal Interaction (ICMI), Alicante, Spain 2011
This paper describes an approach for facial expression recognition which takes the effect of other concurrently active modalities (e.g. talking while emoting the expression) into account.
Paper Abstract Bibtex
@inproceedings{Sarvadevabhatla:2011:AFE:2070481.2070488,
author = {Sarvadevabhatla, Ravi Kiran and Benovoy, Mitchel and Musallam, Sam and Ng-Thow-Hing, Victor},
title = {Adaptive Facial Expression Recognition Using Inter-modal Top-down Context},
booktitle = {Proceedings of the 13th International Conference on Multimodal Interfaces},
series = {ICMI '11},
year = {2011},
isbn = {978-1-4503-0641-6},
location = {Alicante, Spain},
pages = {27--34},
numpages = {8},
url = {http://doi.acm.org/10.1145/2070481.2070488},
doi = {10.1145/2070481.2070488},
acmid = {2070488},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {context, facial expression recognition, human-computer interaction, mask, multi-modal},
}
The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance) is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.
Ravi Kiran Sarvadevabhatla, Victor Ng-Thow-Hing, Sandra Okita
In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Viareggio, Italy 2010
Paper Abstract Bibtex
@inproceedings{sarvadevabhatla2010extended,
title={Extended duration human-robot interaction: tools and analysis},
author={Sarvadevabhatla, Ravi Kiran and Ng-Thow-Hing, Victor and Okita, Sandra},
booktitle={19th International Symposium in Robot and Human Interactive Communication},
pages={7--14},
year={2010},
organization={IEEE}
}
Extended human-robot interactions possess unique aspects which are not exhibited in short-term interactions spanning a few minutes or extremely long-term interactions spanning days. In order to comprehensively monitor such interactions, we need special recording mechanisms which ensure the interaction is captured at multiple spatio-temporal scales, viewpoints and modalities (audio, video, physiological). To minimize cognitive burden, we need tools which can automate the process of annotating and analyzing the resulting data. In addition, we also require these tools to be able to provide a unified, multi-scale view of the data and help discover patterns in the interaction process. In this paper, we describe recording and analysis tools which are helping us analyze extended human-robot interactions with children as subjects. We also provide some experimental results which highlight the utility of such tools.
Ravi Kiran Sarvadevabhatla, Victor Ng-Thow-Hing
In IEEE-RAS International Conference on Humanoid Robots (Humanoids) Paris, France 2009
Paper Abstract Bibtex
@inproceedings{sarvadevabhatla2009panoramic,
title={Panoramic attention for humanoid robots},
author={Sarvadevabhatla, Ravi Kiran and Ng-Thow-Hing, Victor},
booktitle={2009 9th IEEE-RAS International Conference on Humanoid Robots},
pages={215--222},
year={2009},
organization={IEEE}
}
In this paper, we present a novel three-layer model of panoramic attention for our humanoid robot. In contrast to similar architectures employing coarse discretizations of the panoramic field, saliencies are maintained only for cognitively prominent entities (e.g. faces). In the absence of attention triggers, an idle-policy makes the humanoid span the visual field of the panorama, imparting a human-like idle gaze while simultaneously registering attention-worthy entities. We also describe a model of cognitive panoramic habituation which maintains entity-specific persistence models, thus imparting lifetimes to entities registered across the panorama. This mechanism enables the memories of entities in the panorama to fade away, creating a human-like attentional effect. We describe scenarios demonstrating the aforementioned aspects. In addition, we present experimental results which demonstrate how the cognitive filtering aspect of our model reduces processing time and false-positive rates for standard entity-related modules such as face detection and recognition.
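The habituation mechanism can be illustrated with a small persistence model in which each registered entity's saliency decays exponentially unless the entity is re-observed; the class, half-life and threshold below are illustrative assumptions, not the values used on the robot.

# Entity memories fade over time and are forgotten below a saliency threshold.
import time

class EntityMemory:
    def __init__(self, half_life_s=30.0, forget_below=0.05):
        self.half_life_s = half_life_s
        self.forget_below = forget_below
        self.entities = {}                    # entity id -> (saliency, last_seen)

    def observe(self, entity_id, saliency=1.0):
        self.entities[entity_id] = (saliency, time.time())

    def current_saliency(self, entity_id):
        s, t = self.entities[entity_id]
        return s * 0.5 ** ((time.time() - t) / self.half_life_s)

    def prune(self):
        for eid in list(self.entities):
            if self.current_saliency(eid) < self.forget_below:
                del self.entities[eid]        # the memory of the entity fades away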
Sandra Okita, Victor Ng-Thow-Hing, Ravi Kiran Sarvadevabhatla
In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Toyama, Japan 2009
Paper Abstract Bibtex
@inproceedings{okita2009learning,
title={Learning together: ASIMO developing an interactive learning partnership with children},
author={Okita, Sandra Y and Ng-Thow-Hing, Victor and Sarvadevabhatla, Ravi},
booktitle={RO-MAN 2009-The 18th IEEE International Symposium on Robot and Human Interactive Communication},
pages={1125--1130},
year={2009},
organization={IEEE}
}
Humanoid robots consist of biologically inspired features, human-like appearance, and intelligent behavior that naturally elicit social responses. Complex interactions are now possible, where children interact with and learn from robots. A pilot study attempted to determine which features in robots led to changes in learning and behavior. Three common learning styles, lecture, cooperative, and self-directed, were implemented into ASIMO to see if children can learn from robots. General features such as monotone robot-like voice and human-like voice were compared. Thirty-seven children between the ages of 4 and 10 years participated in the study. Each child engaged in a table-setting task with ASIMO that exhibited different learning styles and general features. Children answered questions in relation to a table-setting task with a learning measure. Promissory evidence shows that learning styles and general features matter especially for younger children.
Victor Ng-Thow-Hing, Jongwoo Lim, Joel Wormer, Ravi Kiran Sarvadevabhatla, Carlos Rocha, Kikuo Fujimura, Yoshiaki Sakagami
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Nice, France 2008
Paper Abstract Bibtex
@inproceedings{ng2008memory,
title={The memory game: Creating a human-robot interactive scenario for ASIMO},
author={Ng-Thow-Hing, Victor and Lim, Jongwoo and Wormer, Joel and Sarvadevabhatla, Ravi Kiran and Rocha, Carlos and Fujimura, Kikuo and Sakagami, Yoshiaki},
booktitle={2008 IEEE/RSJ International Conference on Intelligent Robots and Systems},
pages={779--786},
year={2008},
organization={IEEE}
}
We present a human-robot interactive scenario consisting of a memory card game between Honda's humanoid robot ASIMO and a human player. The game features perception exclusively through ASIMO's on-board cameras and both reactive and proactive behaviors specific to different situational contexts in the memory game. ASIMO is able to build a dynamic environmental map of relevant objects in the game such as the table and card layout as well as understand activities from the player such as pointing at cards, flipping cards and removing them from the table. Our system architecture, called the Cognitive Map, treats the memory game as a multi-agent system, with modules acting independently and communicating with each other via messages through a shared blackboard system. The game behavior module can model game state and contextual information to make decisions based on different pattern recognition modules. Behavior is then sent through high-level command interfaces to be resolved into actual physical actions by the robot via a multi-modal communication module. The experience gained in modeling this interactive scenario will allow us to reuse the architecture to create new scenarios and explore new research directions in learning how to respond to new interactive situations.
Ravi Kiran Sarvadevabhatla, Karteek Alahari, C.V. Jawahar
In National Conference on Communications (NCC), Kharagpur, India 2005
Paper Abstract Bibtex
@inproceedings{activity05,
title={Recognizing Human Activities from Constituent Actions},
author={Ravi Kiran Sarvadevabhatla and Karteek Alahari and C.V. Jawahar},
booktitle={National Conference on Communications},
pages={351--355},
year={2005},
organization={IEEE}
}
Many human activities, such as jumping and squatting, have a correlated spatiotemporal structure. They are composed of homogeneous units. These units, which we refer to as actions, are often common to more than one activity. Therefore, it is essential to have a representation which can capture these activities effectively. To develop this, we model the frames of activities as a mixture model of actions and employ a probabilistic approach to learn their low-dimensional representation. We present recognition results on seven activities performed by various individuals. The results demonstrate the versatility and the ability of the model to capture the ensemble of human activities.
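A hedged modern analogue of the described modelling step, fitting a mixture of shared "actions" over frame descriptors and summarizing each activity clip by its average component responsibilities, is sketched below with scikit-learn; it illustrates the idea rather than reproducing the paper's method.

# Frames as a mixture of shared actions; a clip is summarized by mean responsibilities.
from sklearn.mixture import GaussianMixture

def fit_action_model(all_frames, n_actions=8, seed=0):
    """all_frames: (N, D) frame descriptors pooled over all training clips."""
    return GaussianMixture(n_components=n_actions, random_state=seed).fit(all_frames)

def activity_signature(action_model, clip_frames):
    """Average per-frame action responsibilities -> low-dimensional clip descriptor."""
    return action_model.predict_proba(clip_frames).mean(axis=0)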
C.V. Jawahar, MNSSK Pavan Kumar, Ravi Kiran Sarvadevabhatla
In Seventh International Conference on Document Analysis and Recognition (ICDAR), Edinburgh, UK 2003
Paper Abstract Bibtex
@inproceedings{Jawahar:2003:BOH:938979.939243,
author = {Jawahar, C. V. and Kumar, M. N. S. S. K. Pavan and Kiran, S. S. Ravi},
title = {A Bilingual OCR for Hindi-Telugu Documents and Its Applications},
booktitle = {Proceedings of the Seventh International Conference on Document Analysis and Recognition - Volume 1},
series = {ICDAR '03},
year = {2003},
isbn = {0-7695-1960-1},
pages = {408--},
url = {http://dl.acm.org/citation.cfm?id=938979.939243},
acmid = {939243},
publisher = {IEEE Computer Society},
address = {Washington, DC, USA},
}
This paper describes the character recognition process from printed documents containing Hindi and Telugu text. Hindi and Telugu are among the most popular languages in India. The bilingual recognizer is based on Principal Component Analysis followed by support vector classification. This attains an overall accuracy of approximately 96.7%. Extensive experimentation is carried out on an independent test set of approximately 200000 characters. Applications based on this OCR are sketched.
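A minimal scikit-learn analogue of the described recognition pipeline, PCA for dimensionality reduction followed by an SVM classifier, is sketched below purely for illustration; it is not the original implementation.

# PCA + SVM character classifier over flattened glyph images.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_character_classifier(n_components=64):
    return make_pipeline(PCA(n_components=n_components), SVC(kernel='rbf'))

# clf = build_character_classifier()
# clf.fit(X_train, y_train)            # X_train: (N, D) glyph vectors, y_train: labels
# accuracy = clf.score(X_test, y_test)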