Scientific Publications

Spatial-Temporal Union of Subspaces for Multi-body NRSFM: Supplementary Material

Suryansh Kumar, Yuchao Dai, & Hongdong Li. (2018). Spatial-Temporal Union of Subspaces for Multi-body NRSFM: Supplementary Material.

Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective

Kumar, S., Cherian, A., Dai, Y., & Li, H. (2018). Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. Retrieved from https://arxiv.org/pdf/1803.00233.pdf

Neural Algebra of Classifiers

Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. Retrieved from https://arxiv.org/pdf/1801.08676.pdf

Identity-preserving Face Recovery from Portraits

Shiri, F., Yu, X., Porikli, F., Hartley, R., & Koniusz, P. (2018). Identity-preserving Face Recovery from Portraits. Retrieved from https://arxiv.org/pdf/1801.02279.pdf

Output regulation for systems on matrix Lie-groups

de Marco, S., Marconi, L., Mahony, R., & Hamel, T. (2018). Output regulation for systems on matrix Lie-groups. Automatica, 87, 8–16. https://doi.org/10.1016/j.automatica.2017.08.006

Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection

James, J., Ford, J. J., & Molloy, T. L. (2018). Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection. Retrieved from http://arxiv.org/abs/1804.09846

Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory

Suddrey, G., Jacobson, A., & Ward, B. (2018). Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory. Retrieved from http://arxiv.org/abs/1804.03288

Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework

Jacobson, A., Chen, Z., & Milford, M. (2018). Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework. Biological Cybernetics, 1–17. https://doi.org/10.1007/s00422-017-0745-7

Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost

Yu, L., Jacobson, A., & Milford, M. (2018). Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost. IEEE Robotics and Automation Letters, 3(2), 811–818. https://doi.org/10.1109/LRA.2018.2792144

Training Deep Neural Networks for Visual Servoing

Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. Retrieved from https://hal.inria.fr/hal-01716679/

Towards vision-based manipulation of plastic materials

Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. Retrieved from https://hal.archives-ouvertes.fr/hal-01731230

Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles

McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles. Retrieved from http://arxiv.org/abs/1804.02154

OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. Retrieved from http://arxiv.org/abs/1804.02156

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. Retrieved from http://arxiv.org/abs/1801.05078

Multimodal Trip Hazard Affordance Detection on Construction Sites

McMahon, S., Sunderhauf, N., Upcroft, B., & Milford, M. (2018). Multimodal Trip Hazard Affordance Detection on Construction Sites. IEEE Robotics and Automation Letters, 3(1), 1–8. https://doi.org/10.1109/LRA.2017.2719763

Towards Semantic SLAM: Points, Planes and Objects

Hosseinzadeh, M., Latif, Y., Pham, T., Suenderhauf, N., & Reid, I. (2018). Towards Semantic SLAM: Points, Planes and Objects. Retrieved from http://arxiv.org/abs/1804.09111

Special issue on deep learning in robotics

Sünderhauf, N., Leitner, J., Upcroft, B., & Roy, N. (2018, April 27). Special issue on deep learning in robotics. The International Journal of Robotics Research. London, England: SAGE Publications. https://doi.org/10.1177/0278364918769189

Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors

Weerasekera, C., Dharmasiri, T., Garg, R., Drummond, T., & Reid, I. (2017). Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors.

SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes

Pham, T., Do, T.-T., Sünderhauf, N., & Reid, I. (2017). SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes. Retrieved from http://arxiv.org/abs/1709.07158

Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

Park, C., Moghadam, P., Kim, S., Elfes, A., Fookes, C., & Sridharan, S. (2017). Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. Retrieved from http://arxiv.org/abs/1711.01691

Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge

Morrison, D., Tow, A. W., McTaggart, M., Smith, R., Kelly-Boxall, N., Wade-McCue, S., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., & Leitner, J. (2017). Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. Retrieved from https://arxiv.org/abs/1709.06283

Dropout Sampling for Robust Object Detection in Open-Set Conditions

Miller, D., Nicholson, L., Dayoub, F., & Sünderhauf, N. (2017). Dropout Sampling for Robust Object Detection in Open-Set Conditions. Retrieved from http://arxiv.org/abs/1710.06677

Semantic Segmentation from Limited Training Data

Milan, A., Pham, T., Vijay, K., Morrison, D., Tow, A. W., Liu, L., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Kelly-Boxall, N., Lee, D., McTaggart, M., Rallos, G., Razjigaev, A., Rowntree, T., Shen, T., Smith, R., Wade-McCue, S., Zhuang, Z., Lehnert, C., Lin, G., Reid, I., Corke, P., & Leitner, J. (2017). Semantic Segmentation from Limited Training Data. Retrieved from http://arxiv.org/abs/1709.07665

Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics

McCool, C. S., Beattie, J., Firn, J., Lehnert, C., Kulk, J., Bawden, O., Russell, R., & Perez, T. (2018). Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics. IEEE Robotics and Automation Letters, 1–1. https://doi.org/10.1109/LRA.2018.2794619

Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks

Latif, Y., Garg, R., Milford, M., & Reid, I. (2017). Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks. Retrieved from http://arxiv.org/abs/1709.08810

Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549