Publications

2018 Submitted [45]

Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation

Zhang, T., Lin, G., Cai, J., Shen, T., Shen, C., & Kot, A. C. (2018). Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation. Retrieved from http://arxiv.org/abs/1803.02563

Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., & Reid, I. (2018). Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. Retrieved from http://arxiv.org/abs/1803.03893

An end-to-end TextSpotter with Explicit Alignment and Attention

He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., & Sun, C. (2018). An end-to-end TextSpotter with Explicit Alignment and Attention. Retrieved from https://arxiv.org/abs/1803.03474

Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks

Rezatofighi, S. H., Kaskman, R., Motlagh, F. T., Shi, Q., Cremers, D., Leal-Taixé, L., & Reid, I. (2018). Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks. Retrieved from https://arxiv.org/abs/1805.00613

Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells

Nekrasov, V., Chen, H., Shen, C., & Reid, I. (2018). Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells. Retrieved from https://arxiv.org/pdf/1810.10804.pdf

Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition

Zaffar, M., Ehsan, S., Milford, M., & McDonald-Maier, K. (2018). Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition. Retrieved from https://arxiv.org/pdf/1811.03529.pdf

Component-based Attention for Large-scale Trademark Retrieval

Tursun, O., Denman, S., Sivipalan, S., Sridharan, S., Fookes, C., & Mau, S. (2018). Component-based Attention for Large-scale Trademark Retrieval. Retrieved from http://arxiv.org/abs/1811.02746

Learning Free-Form Deformations for 3D Object Reconstruction

Jack, D., Pontes, J. K., Sridharan, S., Fookes, C., Shirazi, S., Maire, F., & Eriksson, A. (2018). Learning Free-Form Deformations for 3D Object Reconstruction. Retrieved from http://arxiv.org/abs/1803.10932

An Orientation Factor for Object-Oriented SLAM

Jablonsky, N., Milford, M., & Sünderhauf, N. (2018). An Orientation Factor for Object-Oriented SLAM. Retrieved from http://arxiv.org/abs/1809.06977

Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

Morrison, D., Corke, P., & Leitner, J. (2018). Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter. Retrieved from http://arxiv.org/abs/1809.08564

Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection

Miller, D., Dayoub, F., Milford, M., & Sünderhauf, N. (2018). Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection. Retrieved from http://arxiv.org/abs/1809.06006

An adaptive localization system for image storage and localization latency requirements

Mao, J., Hu, X., & Milford, M. (2018). An adaptive localization system for image storage and localization latency requirements. Robotics and Autonomous Systems, 107, 246–261. http://doi.org/10.1016/J.ROBOT.2018.06.007

A Binary Optimization Approach for Constrained K-Means Clustering

Le, H., Eriksson, A., Do, T.-T., & Milford, M. (2018). A Binary Optimization Approach for Constrained K-Means Clustering. Retrieved from http://arxiv.org/abs/1810.10134

Large scale visual place recognition with sub-linear storage growth

Le, H., & Milford, M. (2018). Large scale visual place recognition with sub-linear storage growth. Retrieved from http://arxiv.org/abs/1810.09660

A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes

Khaliq, A., Ehsan, S., Milford, M., & McDonald-Maier, K. (2018). A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes. Retrieved from https://www.mapillary.com/

Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration

Hausler, S., Jacobson, A., & Milford, M. (2018). Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration. Retrieved from http://arxiv.org/abs/1810.12465

3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation

Lehnert, C., Tsai, D., Eriksson, A., & McCool, C. (2018). 3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation. Retrieved from http://arxiv.org/abs/1809.07896

Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition

Li, H., Wang, P., Shen, C., & Zhang, G. (2018). Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition. Retrieved from https://arxiv.org/pdf/1811.00751

Scalable Deep k-Subspace Clustering

Zhang, T., Ji, P., Harandi, M., Hartley, R., & Reid, I. (2018). Scalable Deep k-Subspace Clustering. Retrieved from https://arxiv.org/pdf/1811.01045

An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest

Ahn, H. S., Dayoub, F., Popovic, M., MacDonald, B., Siegwart, R., & Sa, I. (2018). An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest. Retrieved from http://arxiv.org/abs/1807.03124

Zero-shot Sim-to-Real Transfer with Modular Priors

Lee, R., Mou, S., Dasagi, V., Bruce, J., Leitner, J., & Sünderhauf, N. (2018). Zero-shot Sim-to-Real Transfer with Modular Priors. Retrieved from http://arxiv.org/abs/1809.07480

Quantifying the Reality Gap in Robotic Manipulation Tasks

Collins, J., Howard, D., & Leitner, J. (2018). Quantifying the Reality Gap in Robotic Manipulation Tasks. Retrieved from https://arxiv.org/abs/1811.01484

Coordinated Heterogeneous Distributed Perception based on Latent Space Representation

Korthals, T., Leitner, J., & Rückert, U. (2018). Coordinated Heterogeneous Distributed Perception based on Latent Space Representation. Retrieved from https://arxiv.org/abs/1809.04558

Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection

James, J., Ford, J. J., & Molloy, T. L. (2018). Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection. Retrieved from https://arxiv.org/pdf/1804.09846

Model-free and learning-free grasping by Local Contact Moment matching

Marturi, N., Ortenzi, V., Rajasekaran, V., Adjigble, M., Corke, P., & Stolkin, R. (2018). Model-free and learning-free grasping by Local Contact Moment matching. Retrieved from https://www.researchgate.net/publication/327118653

On Encoding Temporal Evolution for Real-time Action Prediction

Rezazadegan, F., Shirazi, S., Baktashmotlagh, M., & Davis, L. S. (2018). On Encoding Temporal Evolution for Real-time Action Prediction. Retrieved from http://arxiv.org/abs/1709.07894

Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. Retrieved from https://arxiv.org/abs/1801.05078

Action Anticipation By Predicting Future Dynamic Images

Rodriguez, C., Fernando, B., & Li, H. (2018). Action Anticipation By Predicting Future Dynamic Images. Retrieved from https://arxiv.org/pdf/1808.00141.pdf

Stereo Computation for a Single Mixture Image

Zhong, Y., Dai, Y., & Li, H. (2018). Stereo Computation for a Single Mixture Image. Retrieved from https://arxiv.org/pdf/1808.08690.pdf

VIENA²: A Driving Anticipation Dataset

Aliakbarian, M. S., Saleh, F. S., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2018). VIENA²: A Driving Anticipation Dataset. Retrieved from https://arxiv.org/pdf/1810.09044.pdf

Continuous-time Intensity Estimation Using Event Cameras

Scheerlinck, C., Barnes, N., & Mahony, R. (2018). Continuous-time Intensity Estimation Using Event Cameras. Retrieved from https://arxiv.org/pdf/1811.00386.pdf

Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion

Tsai, D., Dansereau, D. G., Peynot, T., & Corke, P. (2018). Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion. Retrieved from http://arxiv.org/abs/1806.07375

Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective

Kumar, S., Cherian, A., Dai, Y., & Li, H. (2018). Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. Retrieved from https://arxiv.org/pdf/1803.00233.pdf

Neural Algebra of Classifiers

Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. Retrieved from https://arxiv.org/pdf/1801.08676.pdf

Identity-preserving Face Recovery from Portraits

Shiri, F., Yu, X., Porikli, F., Hartley, R., & Koniusz, P. (2018). Identity-preserving Face Recovery from Portraits. Retrieved from https://arxiv.org/pdf/1801.02279.pdf

Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory

Suddrey, G., Jacobson, A., & Ward, B. (2018). Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory. Retrieved from http://arxiv.org/abs/1804.03288

Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

Training Deep Neural Networks for Visual Servoing

Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. Retrieved from https://hal.inria.fr/hal-01716679/

Towards vision-based manipulation of plastic materials

Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. Retrieved from https://hal.archives-ouvertes.fr/hal-01731230

Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles

McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles. Retrieved from http://arxiv.org/abs/1804.02154

OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. Retrieved from http://arxiv.org/abs/1804.02156

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

Towards Semantic SLAM: Points, Planes and Objects

Hosseinzadeh, M., Latif, Y., Pham, T., Suenderhauf, N., & Reid, I. (2018). Towards Semantic SLAM: Points, Planes and Objects. Retrieved from http://arxiv.org/abs/1804.09111

Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549