Publications

2018 Scientific Publications [77]

Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation

Zhang, T., Lin, G., Cai, J., Shen, T., Shen, C., & Kot, A. C. (2018). Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation. Retrieved from http://arxiv.org/abs/1803.02563

An Extended Filtered Channel Framework for Pedestrian Detection

You, M., Zhang, Y., Shen, C., & Zhang, X. (2018). An Extended Filtered Channel Framework for Pedestrian Detection. IEEE Transactions on Intelligent Transportation Systems, 19(5), 1640–1651. https://doi.org/10.1109/TITS.2018.2807199

Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., & Reid, I. (2018). Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. Retrieved from http://arxiv.org/abs/1803.03893

An Embarrassingly Simple Approach to Visual Domain Adaptation

Lu, H., Shen, C., Cao, Z., Xiao, Y., & van den Hengel, A. (2018). An Embarrassingly Simple Approach to Visual Domain Adaptation. IEEE Transactions on Image Processing, 27(7), 3403–3417. https://doi.org/10.1109/TIP.2018.2819503

Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction

Zhang, L., Wei, W., Zhang, Y., Shen, C., van den Hengel, A., & Shi, Q. (2018). Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction. International Journal of Computer Vision, 126(8), 797–821. https://doi.org/10.1007/s11263-018-1080-8

Multi-label learning based deep transfer neural network for facial attribute classification

Zhuang, N., Yan, Y., Chen, S., Wang, H., & Shen, C. (2018). Multi-label learning based deep transfer neural network for facial attribute classification. Pattern Recognition, 80, 225–240. https://doi.org/10.1016/J.PATCOG.2018.03.018

An end-to-end TextSpotter with Explicit Alignment and Attention

He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., & Sun, C. (2018). An end-to-end TextSpotter with Explicit Alignment and Attention. Retrieved from https://arxiv.org/abs/1803.03474

Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking

Li, X., Zhao, L., Ji, W., Wu, Y., Wu, F., Yang, M.-H., Tao, D., & Reid, I. (2018). Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2018.2818132 (in press)

Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks

Rezatofighi, S. H., Kaskman, R., Motlagh, F. T., Shi, Q., Cremers, D., Leal-Taixé, L., & Reid, I. (2018). Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks. Retrieved from https://arxiv.org/abs/1805.00613

Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells

Nekrasov, V., Chen, H., Shen, C., & Reid, I. (2018). Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells. Retrieved from https://arxiv.org/pdf/1810.10804.pdf

Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition

Zaffar, M., Ehsan, S., Milford, M., & McDonald-Maier, K. (2018). Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition. Retrieved from https://arxiv.org/pdf/1811.03529.pdf

Component-based Attention for Large-scale Trademark Retrieval

Tursun, O., Denman, S., Sivapalan, S., Sridharan, S., Fookes, C., & Mau, S. (2018). Component-based Attention for Large-scale Trademark Retrieval. Retrieved from http://arxiv.org/abs/1811.02746

The limits and potentials of deep learning for robotics

Sünderhauf, N., Brock, O., Scheirer, W., Hadsell, R., Fox, D., Leitner, J., Upcroft, B., Abbeel, P., Burgard, W., Milford, M., & Corke, P. (2018). The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(4–5), 405–420. http://doi.org/10.1177/0278364918770733

Learning Free-Form Deformations for 3D Object Reconstruction

Jack, D., Pontes, J. K., Sridharan, S., Fookes, C., Shirazi, S., Maire, F., & Eriksson, A. (2018). Learning Free-Form Deformations for 3D Object Reconstruction. Retrieved from http://arxiv.org/abs/1803.10932

An Orientation Factor for Object-Oriented SLAM

Jablonsky, N., Milford, M., & Sünderhauf, N. (2018). An Orientation Factor for Object-Oriented SLAM. Retrieved from http://arxiv.org/abs/1809.06977

Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

Morrison, D., Corke, P., & Leitner, J. (2018). Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter. Retrieved from http://arxiv.org/abs/1809.08564

Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection

Miller, D., Dayoub, F., Milford, M., & Sünderhauf, N. (2018). Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection. Retrieved from http://arxiv.org/abs/1809.06006

An adaptive localization system for image storage and localization latency requirements

Mao, J., Hu, X., & Milford, M. (2018). An adaptive localization system for image storage and localization latency requirements. Robotics and Autonomous Systems, 107, 246–261. http://doi.org/10.1016/J.ROBOT.2018.06.007

A Binary Optimization Approach for Constrained K-Means Clustering

Le, H., Eriksson, A., Do, T.-T., & Milford, M. (2018). A Binary Optimization Approach for Constrained K-Means Clustering. Retrieved from http://arxiv.org/abs/1810.10134

Large scale visual place recognition with sub-linear storage growth

Le, H., & Milford, M. (2018). Large scale visual place recognition with sub-linear storage growth. Retrieved from http://arxiv.org/abs/1810.09660

A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes

Khaliq, A., Ehsan, S., Milford, M., & McDonald-Maier, K. (2018). A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes. Retrieved from https://www.mapillary.com/

Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration

Hausler, S., Jacobson, A., & Milford, M. (2018). Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration. Retrieved from http://arxiv.org/abs/1810.12465

3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation

Lehnert, C., Tsai, D., Eriksson, A., & McCool, C. (2018). 3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation. Retrieved from http://arxiv.org/abs/1809.07896

Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition

Li, H., Wang, P., Shen, C., & Zhang, G. (2018). Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition. Retrieved from https://arxiv.org/pdf/1811.00751

Scalable Deep k-Subspace Clustering

Zhang, T., Ji, P., Harandi, M., Hartley, R., & Reid, I. (2018). Scalable Deep k-Subspace Clustering. Retrieved from https://arxiv.org/pdf/1811.01045

Automating analysis of vegetation with computer vision: Cover estimates and classification

McCool, C., Beattie, J., Milford, M., Bakker, J. D., Moore, J. L., & Firn, J. (2018). Automating analysis of vegetation with computer vision: Cover estimates and classification. Ecology and Evolution, 8(12), 6005–6015. http://doi.org/10.1002/ece3.4135

A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration

Abbas, A., Maire, F., Shirazi, S., Dayoub, F., & Eich, M. (2018). A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration. Wellington, New Zealand: Springer. Retrieved from https://eprints.qut.edu.au/121640/

Dropout Sampling for Robust Object Detection in Open-Set Conditions

Miller, D., Nicholson, L., Dayoub, F., & Sunderhauf, N. (2018). Dropout Sampling for Robust Object Detection in Open-Set Conditions. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–7). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8460700

A rapidly deployable classification system using visual data for the application of precision weed management

Hall, D., Dayoub, F., Perez, T., & McCool, C. (2018). A rapidly deployable classification system using visual data for the application of precision weed management. Computers and Electronics in Agriculture, 148, 107–120. http://doi.org/10.1016/J.COMPAG.2018.02.023

An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest

Ahn, H. S., Dayoub, F., Popovic, M., MacDonald, B., Siegwart, R., & Sa, I. (2018). An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest. Retrieved from http://arxiv.org/abs/1807.03124

Zero-shot Sim-to-Real Transfer with Modular Priors

Lee, R., Mou, S., Dasagi, V., Bruce, J., Leitner, J., & Sünderhauf, N. (2018). Zero-shot Sim-to-Real Transfer with Modular Priors. Retrieved from http://arxiv.org/abs/1809.07480

SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes

Pham, T. T., Do, T.-T., Sunderhauf, N., & Reid, I. (2018). SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8461108

Quantifying the Reality Gap in Robotic Manipulation Tasks

Collins, J., Howard, D., & Leitner, J. (2018). Quantifying the Reality Gap in Robotic Manipulation Tasks. Retrieved from https://arxiv.org/abs/1811.01484

Coordinated Heterogeneous Distributed Perception based on Latent Space Representation

Korthals, T., Leitner, J., & Rückert, U. (2018). Coordinated Heterogeneous Distributed Perception based on Latent Space Representation. Retrieved from https://arxiv.org/abs/1809.04558

Measures of incentives and confidence in using a social robot

Robinson, N. L., Connolly, J., Johnson, G. M., Kim, Y., Hides, L., & Kavanagh, D. J. (2018). Measures of incentives and confidence in using a social robot. Science Robotics, 3(21), eaat6963. http://doi.org/10.1126/scirobotics.aat6963

Glare-free retinal imaging using a portable light field fundus camera

Palmer, D. W., Coppin, T., Rana, K., Dansereau, D. G., Suheimat, M., Maynard, M., Atchison, D. A., Roberts, J., Crawford, R., & Jaiprakash, A. (2018). Glare-free retinal imaging using a portable light field fundus camera. Biomedical Optics Express, 9(7), 3178. http://doi.org/10.1364/BOE.9.003178

Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems

James, J., Ford, J. J., & Molloy, T. L. (2018). Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems. IEEE Robotics and Automation Letters, 3(4), 4383–4390. http://doi.org/10.1109/LRA.2018.2867237

Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection

James, J., Ford, J. J., & Molloy, T. L. (2018). Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection. Retrieved from https://arxiv.org/pdf/1804.09846

Model-free and learning-free grasping by Local Contact Moment matching

Marturi, N., Ortenzi, V., Rajasekaran, V., Adjigble, M., Corke, P., & Stolkin, R. (2018). Model-free and learning-free grasping by Local Contact Moment matching. Retrieved from https://www.researchgate.net/publication/327118653

On Encoding Temporal Evolution for Real-time Action Prediction

Rezazadegan, F., Shirazi, S., Baktashmotlagh, M., & Davis, L. S. (2018). On Encoding Temporal Evolution for Real-time Action Prediction. Retrieved from http://arxiv.org/abs/1709.07894

Multimodal Trip Hazard Affordance Detection on Construction Sites

McMahon, S., Sunderhauf, N., Upcroft, B., & Milford, M. (2018). Multimodal Trip Hazard Affordance Detection on Construction Sites. IEEE Robotics and Automation Letters, 3(1), 1–8. http://doi.org/10.1109/LRA.2017.2719763

Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework

Jacobson, A., Chen, Z., & Milford, M. (2018). Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework. Biological Cybernetics, 1–17. http://doi.org/10.1007/s00422-017-0745-7

Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost

Yu, L., Jacobson, A., & Milford, M. (2018). Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost. IEEE Robotics and Automation Letters, 3(2), 811–818. http://doi.org/10.1109/LRA.2018.2792144

Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. Retrieved from https://arxiv.org/abs/1801.05078

Action Anticipation By Predicting Future Dynamic Images

Rodriguez, C., Fernando, B., & Li, H. (2018). Action Anticipation By Predicting Future Dynamic Images. Retrieved from https://arxiv.org/pdf/1808.00141.pdf

Stereo Computation for a Single Mixture Image

Zhong, Y., Dai, Y., & Li, H. (2018). Stereo Computation for a Single Mixture Image. Retrieved from https://arxiv.org/pdf/1808.08690.pdf

VIENA²: A Driving Anticipation Dataset

Aliakbarian, M. S., Saleh, F. S., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2018). VIENA²: A Driving Anticipation Dataset. Retrieved from https://arxiv.org/pdf/1810.09044.pdf

Continuous-time Intensity Estimation Using Event Cameras

Scheerlinck, C., Barnes, N., & Mahony, R. (2018). Continuous-time Intensity Estimation Using Event Cameras. Retrieved from https://arxiv.org/pdf/1811.00386.pdf

Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion

Tsai, D., Dansereau, D. G., Peynot, T., & Corke, P. (2018). Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion. Retrieved from http://arxiv.org/abs/1806.07375

Bootstrapping the Performance of Webly Supervised Semantic Segmentation

Shen, T., Lin, G., Shen, C., & Reid, I. (2018). Bootstrapping the Performance of Webly Supervised Semantic Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Utah, United States. Retrieved from http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/1401.pdf

Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective

Kumar, S., Cherian, A., Dai, Y., & Li, H. (2018). Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. Retrieved from https://arxiv.org/pdf/1803.00233.pdf

Neural Algebra of Classifiers

Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. Retrieved from https://arxiv.org/pdf/1801.08676.pdf

Identity-preserving Face Recovery from Portraits

Shiri, F., Yu, X., Porikli, F., Hartley, R., & Koniusz, P. (2018). Identity-preserving Face Recovery from Portraits. Retrieved from https://arxiv.org/pdf/1801.02279.pdf

Output regulation for systems on matrix Lie-groups

de Marco, S., Marconi, L., Mahony, R., & Hamel, T. (2018). Output regulation for systems on matrix Lie-groups. Automatica, 87, 8–16. https://doi.org/10.1016/J.AUTOMATICA.2017.08.006

Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory

Suddrey, G., Jacobson, A., & Ward, B. (2018). Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory. Retrieved from http://arxiv.org/abs/1804.03288

Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

Training Deep Neural Networks for Visual Servoing

Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. Retrieved from https://hal.inria.fr/hal-01716679/

Towards vision-based manipulation of plastic materials

Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. Retrieved from https://hal.archives-ouvertes.fr/hal-01731230

Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles

McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles. Retrieved from http://arxiv.org/abs/1804.02154

OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. Retrieved from http://arxiv.org/abs/1804.02156

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

Towards Semantic SLAM: Points, Planes and Objects

Hosseinzadeh, M., Latif, Y., Pham, T., Suenderhauf, N., & Reid, I. (2018). Towards Semantic SLAM: Points, Planes and Objects. Retrieved from http://arxiv.org/abs/1804.09111

Special issue on deep learning in robotics

Sünderhauf, N., Leitner, J., Upcroft, B., & Roy, N. (2018, April 27). Special issue on deep learning in robotics. The International Journal of Robotics Research. London, England: SAGE Publications. http://doi.org/10.1177/0278364918769189

Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors

Weerasekera, C., Dharmasiri, T., Garg, R., Drummond, T., & Reid, I. (2017). Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors.

Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

Park, C., Moghadam, P., Kim, S., Elfes, A., Fookes, C., & Sridharan, S. (2017). Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. Retrieved from http://arxiv.org/abs/1711.01691

Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge

Morrison, D., Tow, A. W., McTaggart, M., Smith, R., Kelly-Boxall, N., Wade-McCue, S., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., and Leitner, J. (2017). Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. Retrieved from https://arxiv.org/abs/1709.06283

Semantic Segmentation from Limited Training Data

Milan, A., Pham, T., Vijay, K., Morrison, D., Tow, A. W., Liu, L., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Kelly-Boxall, N., Lee, D., McTaggart, M., Rallos, G., Razjigaev, A., Rowntree, T., Shen, T., Smith, R., Wade-McCue, S., Zhuang, Z., Lehnert, C., Lin, G., Reid, I., Corke, P., & Leitner, J. (2017). Semantic Segmentation from Limited Training Data. Retrieved from http://arxiv.org/abs/1709.07665

Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics

McCool, C. S., Beattie, J., Firn, J., Lehnert, C., Kulk, J., Bawden, O., Russell, R., & Perez, T. (2018). Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics. IEEE Robotics and Automation Letters, 1–1. http://doi.org/10.1109/LRA.2018.2794619

Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks

Latif, Y., Garg, R., Milford, M., & Reid, I. (2017). Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks. Retrieved from http://arxiv.org/abs/1709.08810

Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549