2018 Publications


Scientific Publications

  • Learning to Predict Crisp Boundaries

    Deng R., Shen C., Liu S., Wang H., Liu X. (2018) Learning to Predict Crisp Boundaries. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11210. Springer, Cham. https://doi.org/10.1007/978-3-030-01231-1_35

  • Robust Fitting in Computer Vision: Easy or Hard?

    Chin TJ., Cai Z., Neumann F. (2018) Robust Fitting in Computer Vision: Easy or Hard?. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_43

  • Deterministic Consensus Maximization with Biconvex Programming

    Cai Z., Chin TJ., Le H., Suter D. (2018) Deterministic Consensus Maximization with Biconvex Programming. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_42

  • A Binary Optimization Approach for Constrained K-Means Clustering

    Le H.M., Eriksson A., Do TT., Milford M. (2019) A Binary Optimization Approach for Constrained K-Means Clustering. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11364. Springer, Cham. https://doi.org/10.1007/978-3-030-20870-7_24

  • Traversing Latent Space using Decision Ferns

    Zuo Y., Avraham G., Drummond T. (2019) Traversing Latent Space Using Decision Ferns. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11361. Springer, Cham. https://doi.org/10.1007/978-3-030-20887-5_37

  • Stereo Computation for a Single Mixture Image

    Zhong Y., Dai Y., Li H. (2018) Stereo Computation for a Single Mixture Image. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11213. Springer, Cham. https://doi.org/10.1007/978-3-030-01240-3_2

  • Learning Free-Form Deformations for 3D Object Reconstruction

    Jack, D., Pontes, J. K., Sridharan, S., Fookes, C., Shirazi, S., Maire, F., & Eriksson, A. (2019). Learning Free-Form Deformations for 3D Object Reconstruction. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11362 LNCS, 317–333. https://doi.org/10.1007/978-3-030-20890-5_21

  • Monocular Depth Estimation with Augmented Ordinal Depth Relationships

    Cao, Y., Zhao, T., Xian, K., Shen, C., Cao, Z., & Xu, S. (2018). Monocular Depth Estimation with Augmented Ordinal Depth Relationships. IEEE Transactions on Image Processing. https://doi.org/10.1109/TIP.2018.2877944

  • Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

    Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., & Reid, I. (2018). Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 340–349). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00043

  • Discrimination-aware channel pruning for deep neural networks

    Zhuang, Z., Tan, M., Zhuang, B., Liu, J., Guo, Y., Wu, Q., Huang, J., & Zhu, J. (2018). Discrimination-aware Channel Pruning for Deep Neural Networks. Advances in Neural Information Processing Systems, 2018-December, 875–886.

  • OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

    Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 7758–7765). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593761

  • Scalable Deep k-Subspace Clustering

    Zhang T., Ji P., Harandi M., Hartley R., Reid I. (2019) Scalable Deep k-Subspace Clustering. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_30

  • Continuous-Time Intensity Estimation Using Event Cameras

    Scheerlinck C., Barnes N., Mahony R. (2019) Continuous-Time Intensity Estimation Using Event Cameras. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_20

  • Action Anticipation by Predicting Future Dynamic Images

    Rodriguez C., Fernando B., Li H. (2019) Action Anticipation by Predicting Future Dynamic Images. In: Leal-Taixé L., Roth S. (eds) Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11131. Springer, Cham. https://doi.org/10.1007/978-3-030-11015-4_10

  • An adaptive localization system for image storage and localization latency requirements

    Mao, J., Hu, X., & Milford, M. (2018). An adaptive localization system for image storage and localization latency requirements. Robotics and Autonomous Systems, 107, 246–261. http://doi.org/10.1016/J.ROBOT.2018.06.007

  • Assisted Control for Semi-Autonomous Power Infrastructure Inspection Using Aerial Vehicles

    *McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection Using Aerial Vehicles. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5719–5726). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593529

  • Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

    Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

  • Efficient Subpixel Refinement with Symbolic Linear Predictors

    Lui, V., Geeves, J., Yii, W., & Drummond, T. (2018). Efficient Subpixel Refinement with Symbolic Linear Predictors. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8165–8173). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00852

  • Quickest Detection of Intermittent Signals With Application to Vision-Based Aircraft Detection

    James, J., Ford, J. J., & Molloy, T. L. (2018). Quickest Detection of Intermittent Signals With Application to Vision-Based Aircraft Detection. IEEE Transactions on Control Systems Technology, 1–8. http://doi.org/10.1109/TCST.2018.2872468

  • Structure Aware SLAM Using Quadrics and Planes

    Hosseinzadeh M., Latif Y., Pham T., Suenderhauf N., Reid I. (2019) Structure Aware SLAM Using Quadrics and Planes. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11363. Springer, Cham. https://doi.org/10.1007/978-3-030-20893-6_26

  • Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration

    Hausler, S., Jacobson, A., & Milford, M. (2018). Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration. Retrieved from http://arxiv.org/abs/1810.12465

  • LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

    Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

  • An End-to-End TextSpotter with Explicit Alignment and Attention

    He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., & Sun, C. (2018). An End-to-End TextSpotter with Explicit Alignment and Attention. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5020–5029). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00527

  • Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

    Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3645–3652). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8461051

  • ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs

    Dharmasiri T., Spek A., Drummond T. (2019) ENG: End-to-End Neural Geometry for Robust Depth and Pose Estimation Using CNNs. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11361. Springer, Cham. https://doi.org/10.1007/978-3-030-20887-5_39

  • Neural Algebra of Classifiers

    *Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 729–737). IEEE. https://doi.org/10.1109/WACV.2018.00085

  • Towards vision-based manipulation of plastic materials

    *Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 485–490). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594108

  • Globally-Optimal Inlier Set Maximisation for Camera Pose and Correspondence Estimation

    Campbell, D. J., Petersson, L., Kneip, L., & Li, H. (2018). Globally-Optimal Inlier Set Maximisation for Camera Pose and Correspondence Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2018.2848650

  • Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Bruce, J., Sünderhauf, N., Mirowski, P., Hadsell, R., & Milford, M. (2018). Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal. Retrieved from http://arxiv.org/abs/1807.05211

  • Training Deep Neural Networks for Visual Servoing

    *Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8461068

  • VIENA2: A Driving Anticipation Dataset

    Aliakbarian, M. S., Saleh, F. S., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2019). VIENA2: A Driving Anticipation Dataset. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11361 LNCS, 449–466. https://doi.org/10.1007/978-3-030-20887-5_28

  • Model-free and learning-free grasping by Local Contact Moment matching

    *Adjigble, M., Marturi, N., Ortenzi, V., Rajasekaran, V., Corke, P., & Stolkin, R. (2018). Model-free and learning-free grasping by Local Contact Moment matching. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2933–2940). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594226

  • Dense 3D Face Correspondence

    Gilani, S. Z., Mian, A., Shafait, F., & Reid, I. (2018). Dense 3D Face Correspondence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(7), 1584–1598. https://doi.org/10.1109/TPAMI.2017.2725279

  • Coresets for Triangulation

    Zhang, Q., & Chin, T.-J. (2018). Coresets for Triangulation. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • The Role of Symmetry in Rigidity Analysis: A Tool for Network Localisation and Formation Control

    Stacey, G., & Mahony, R. (2018). The Role of Symmetry in Rigidity Analysis: A Tool for Network Localization and Formation Control. IEEE Transactions on Automatic Control, 63(5), 1313–1328. https://doi.org/10.1109/TAC.2017.2747760

  • Incorporating Network Built-in Priors in Weakly-supervised Semantic Segmentation

    Saleh, F. S., Aliakbarian, M. S., Salzmann, M., Petersson, L., Alvarez, J. M., & Gould, S. (2018). Incorporating Network Built-in Priors in Weakly-Supervised Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6), 1382–1396. https://doi.org/10.1109/TPAMI.2017.2713785

  • Guaranteed Outlier Removal for Point Cloud Registration with Correspondences

    Bustos, A. P., & Chin, T.-J. (2018). Guaranteed Outlier Removal for Point Cloud Registration with Correspondences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 2868–2882. https://doi.org/10.1109/TPAMI.2017.2773482

  • Structured Learning of Tree Potentials in CRF for Image Segmentation

    Liu, F., Lin, G., Qiao, R., & Shen, C. (2017). Structured Learning of Tree Potentials in CRF for Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems, PP(99), 1–7. http://doi.org/10.1109/TNNLS.2017.2690453 *In Press

  • Déjà vu: Scalable Place Recognition Using Mutually Supportive Feature Frequencies

    Jacobson, A., Scheirer, W., & Milford, M. (2017). Déjà vu: Scalable place recognition using mutually supportive feature frequencies. IEEE International Conference on Intelligent Robots and Systems, 2017-September, 6654–6661. https://doi.org/10.1109/IROS.2017.8206580

  • Design of a multi-modal end-effector and grasping system – How integrated design helped win the Amazon Robotics Challenge

    Kelly-Boxall, N., Morrison, D., Wade-McCue, S., Corke, P., & Leitner, J. (2018). Design of a multi-modal end-effector and grasping system – How integrated design helped win the Amazon Robotics Challenge. Australasian Conference on Robotics and Automation, ACRA, 2018-December.

  • Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods

    Harandi, M., Salzmann, M., & Hartley, R. (2018). Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(1), 48–62. https://doi.org/10.1109/TPAMI.2017.2655048

  • On the structure of kinematic systems with complete symmetry

    Trumpf, J., Mahony, R., & Hamel, T. (2019). On the structure of kinematic systems with complete symmetry. Proceedings of the IEEE Conference on Decision and Control, 2018-December, 1276–1280. https://doi.org/10.1109/CDC.2018.8619718

  • Seeing Deeply and Bidirectionally: A Deep Learning Approach for Single Image Reflection Removal

    Yang J., Gong D., Liu L., Shi Q. (2018) Seeing Deeply and Bidirectionally: A Deep Learning Approach for Single Image Reflection Removal. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11207. Springer, Cham. https://doi.org/10.1007/978-3-030-01219-9_40

  • Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge

    Teney, D., Anderson, P., He, X., & Hengel, A. van den. (2018). Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4223–4232). IEEE. http://doi.org/10.1109/CVPR.2018.00444

  • Not All Negatives Are Equal: Learning to Track With Multiple Background Clusters

    Zhu, G., Porikli, F., & Li, H. (2018). Not All Negatives Are Equal: Learning to Track With Multiple Background Clusters. IEEE Transactions on Circuits and Systems for Video Technology, 28(2), 314–326. http://doi.org/10.1109/TCSVT.2016.2615518

  • Deblurring Natural Image Using Super-Gaussian Fields

    Liu Y., Dong W., Gong D., Zhang L., Shi Q. (2018) Deblurring Natural Image Using Super-Gaussian Fields. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11205. Springer, Cham. https://doi.org/10.1007/978-3-030-01246-5_28

  • Adversarial Training of Variational Auto-Encoders for High Fidelity Image Generation

    Khan, S. H., Hayat, M., & Barnes, N. (2018). Adversarial Training of Variational Auto-Encoders for High Fidelity Image Generation. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1312–1320). Lake Tahoe, United States: IEEE. https://doi.org/10.1109/WACV.2018.00148

  • Semi-dense 3D Reconstruction with a Stereo Event Camera

    Zhou Y., Gallego G., Rebecq H., Kneip L., Li H., Scaramuzza D. (2018) Semi-dense 3D Reconstruction with a Stereo Event Camera. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11205. Springer, Cham. https://doi.org/10.1007/978-3-030-01246-5_15

  • 3D Geometry-Aware Semantic Labeling of Outdoor Street Scenes

    *Zhong, Y., Dai, Y., & Li, H. (2018). 3D Geometry-Aware Semantic Labeling of Outdoor Street Scenes. In 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 2343–2349). IEEE. http://doi.org/10.1109/ICPR.2018.8545378

  • Open-World Stereo Video Matching with Deep RNN

    Zhong Y., Li H., Dai Y. (2018) Open-World Stereo Video Matching with Deep RNN. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11206. Springer, Cham. https://doi.org/10.1007/978-3-030-01216-8_7

  • Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective

    *Zhang, J., Zhang, T., Dai, Y., Harandi, M., & Hartley, R. (2018). Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9029–9038). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00941

  • Deep Auto-Set: A Deep Auto-Encoder-Set Network for Activity Recognition Using Wearables

    Varamin, A. A., Abbasnejad, E., Shi, Q., Ranasinghe, D. C., & Rezatofighi, H. (2018). Deep Auto-Set: A Deep Auto-Encoder-Set Network for Activity Recognition Using Wearables.

  • Robust Visual Odometry in Underwater Environment

    *Zhang, J., Ila, V., & Kneip, L. (2018). Robust Visual Odometry in Underwater Environment. In 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO) (pp. 1–9). Kobe, Japan: IEEE. http://doi.org/10.1109/OCEANSKOBE.2018.8559452

  • Goal-Oriented Visual Question Generation via Intermediate Rewards

    Zhang, J., Wu, Q., Shen, C., Zhang, J., Lu, J., & van den Hengel, A. (2018). Goal-Oriented Visual Question Generation via Intermediate Rewards. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11209 LNCS, 189–204. https://doi.org/10.1007/978-3-030-01228-1_12

  • Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes

    *Yu, X., Fernando, B., Hartley, R., & Porikli, F. (2018). Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 908–917). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00101

  • Face Super-Resolution Guided by Facial Component Heatmaps

    Yu X., Fernando B., Ghanem B., Porikli F., Hartley R. (2018) Face Super-Resolution Guided by Facial Component Heatmaps. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11213. Springer, Cham. https://doi.org/10.1007/978-3-030-01240-3_14

  • Learning Discriminative Video Representations Using Adversarial Perturbations

    Wang J., Cherian A. (2018) Learning Discriminative Video Representations Using Adversarial Perturbations. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11208. Springer, Cham. https://doi.org/10.1007/978-3-030-01225-0_42

  • Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine

    Saha, S. K., Fernando, B., Cuadros, J., Xiao, D., & Kanagasingam, Y. (2018). Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine. Journal of Digital Imaging, 31(6), 869–878. http://doi.org/10.1007/s10278-018-0084-9

  • Embedding Bilateral Filter in Least Squares for Efficient Edge-preserving Image Smoothing

    Liu, W., Zhang, P., Chen, X., Shen, C., Huang, X., & Yang, J. (2018). Embedding Bilateral Filter in Least Squares for Efficient Edge-preserving Image Smoothing. IEEE Transactions on Circuits and Systems for Video Technology, 1–1. http://doi.org/10.1109/TCSVT.2018.2890202

  • Robust and Efficient Relative Pose With a Multi-Camera System for Autonomous Driving in Highly Dynamic Environments

    Liu, L., Li, H., Dai, Y., & Pan, Q. (2018). Robust and efficient relative pose with a Multi-Camera system for autonomous driving in highly dynamic environments. IEEE Transactions on Intelligent Transportation Systems, 19(8), 2432–2444. https://doi.org/10.1109/TITS.2017.2749409

  • Reading car license plates using deep neural networks

    Li, H., Wang, P., You, M., & Shen, C. (2018). Reading car license plates using deep neural networks. Image and Vision Computing, 72, 14–23. http://doi.org/10.1016/J.IMAVIS.2018.02.002

  • Structure from Recurrent Motion: From Rigidity to Recurrency

    *Li, X., Li, H., Joo, H., Liu, Y., & Sheikh, Y. (2018). Structure from Recurrent Motion: From Rigidity to Recurrency. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3032–3040). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00320

  • Kernel Support Vector Machines and Convolutional Neural Networks

    Jiang, S., Hartley, R., & Fernando, B. (2018). Kernel Support Vector Machines and Convolutional Neural Networks. In 2018 Digital Image Computing: Techniques and Applications (DICTA) (pp. 1–7). Canberra, Australia: IEEE. http://doi.org/10.1109/DICTA.2018.8615840

  • Semi-Supervised SLAM: Leveraging Low-Cost Sensors on Underground Autonomous Vehicles for Position Tracking

    Jacobson, A., Zeng, F., Smith, D., Boswell, N., Peynot, T., & Milford, M. (2018). Semi-Supervised SLAM: Leveraging Low-Cost Sensors on Underground Autonomous Vehicles for Position Tracking. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3970–3977). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593750

  • Parallel Attention: A Unified Framework for Visual Object Discovery Through Dialogs and Queries

    Zhuang, B., Wu, Q., Shen, C., Reid, I., & Hengel, A. van den. (2018). Parallel Attention: A Unified Framework for Visual Object Discovery Through Dialogs and Queries. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4252–4261). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00447

  • Towards Effective Low-Bitwidth Convolutional Neural Networks

    Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. (2018). Towards Effective Low-Bitwidth Convolutional Neural Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7920–7928). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00826

  • Are You Talking to Me? Reasoned Visual Dialog Generation Through Adversarial Learning

    Wu, Q., Wang, P., Shen, C., Reid, I., & Hengel, A. van den. (2018). Are You Talking to Me? Reasoned Visual Dialog Generation Through Adversarial Learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6106–6115). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00639

  • Bayesian Semantic Instance Segmentation in Open Set World

    Pham, T., Vijay Kumar, B. G., Do, T. T., Carneiro, G., & Reid, I. (2018). Bayesian semantic instance segmentation in open set world. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11214 LNCS, 3–18. https://doi.org/10.1007/978-3-030-01249-6_1

  • Training Medical Image Analysis Systems like Radiologists

    Maicas G., Bradley A.P., Nascimento J.C., Reid I., Carneiro G. (2018) Training Medical Image Analysis Systems like Radiologists. In: Frangi A., Schnabel J., Davatzikos C., Alberola-López C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11070. Springer, Cham. https://doi.org/10.1007/978-3-030-00928-1_62

  • Visual Question Answering with Memory-Augmented Networks

    Ma, C., Shen, C., Dick, A., Wu, Q., Wang, P., Hengel, A. van den, & Reid, I. (2018). Visual Question Answering with Memory-Augmented Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6975–6984). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00729

  • Deep Regression Tracking with Shrinkage Loss

    Lu X., Ma C., Ni B., Yang X., Reid I., Yang MH. (2018) Deep Regression Tracking with Shrinkage Loss. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11218. Springer, Cham. https://doi.org/10.1007/978-3-030-01264-9_22

  • Exploring Context with Deep Structured Models for Semantic Segmentation

    Lin, G., Shen, C., van den Hengel, A., & Reid, I. (2018). Exploring Context with Deep Structured Models for Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6), 1352–1366. http://doi.org/10.1109/TPAMI.2017.2708714

  • Efficient Dense Point Cloud Object Reconstruction Using Deformation Vector Fields

    Li K., Pham T., Zhan H., Reid I. (2018) Efficient Dense Point Cloud Object Reconstruction Using Deformation Vector Fields. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_31

  • Drones count wildlife more accurately and precisely than humans

    Hodgson, J. C., Mott, R., Baylis, S. M., Pham, T. T., Wotherspoon, S., Kilpatrick, A. D., Ramesh, R.S., Reid, I., Terauds, A., & Koh, L. P. (2018). Drones count wildlife more accurately and precisely than humans. Methods in Ecology and Evolution, 9(5), 1160–1167. http://doi.org/10.1111/2041-210X.12974

  • Multi-modal Cycle-Consistent Generalized Zero-Shot Learning

    Felix R., Vijay Kumar B.G., Reid I., Carneiro G. (2018) Multi-modal Cycle-Consistent Generalized Zero-Shot Learning. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11210. Springer, Cham. https://doi.org/10.1007/978-3-030-01231-1_2

  • AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection

    Do, T.-T., Nguyen, A., & Reid, I. (2018). AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–5). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460902

  • Semisupervised and Weakly Supervised Road Detection Based on Generative Adversarial Networks

    Han, X., Lu, J., Zhao, C., You, S., & Li, H. (2018). Semisupervised and Weakly Supervised Road Detection Based on Generative Adversarial Networks. IEEE Signal Processing Letters, 25(4), 551–555. http://doi.org/10.1109/LSP.2018.2809685

  • Automatic Image Cropping for Visual Aesthetic Enhancement Using Deep Neural Networks and Cascaded Regression

    Guo, G., Wang, H., Shen, C., Yan, Y., & Liao, H.-Y. M. (2018). Automatic Image Cropping for Visual Aesthetic Enhancement Using Deep Neural Networks and Cascaded Regression. IEEE Transactions on Multimedia, 20(8), 2073–2085. http://doi.org/10.1109/TMM.2018.2794262

  • Visual Grounding via Accumulated Attention

    Deng, C., Wu, Q., Wu, Q., Hu, F., Lyu, F., & Tan, M. (2018). Visual Grounding via Accumulated Attention. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7746–7755). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00808

  • Vision Based Forward Sensitive Reactive Control for a Quadrotor VTOL

    Stevens, J.-L., & Mahony, R. (2018). Vision Based Forward Sensitive Reactive Control for a Quadrotor VTOL. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5232–5238). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593606

  • Calibrating Light-Field Cameras Using Plenoptic Disc Features

    O’Brien, S., Trumpf, J., Ila, V., & Mahony, R. (2018). Calibrating Light-Field Cameras Using Plenoptic Disc Features. In 2018 International Conference on 3D Vision (3DV) (pp. 286–294). Verona, Italy: IEEE. http://doi.org/10.1109/3DV.2018.00041

  • A Geometric Observer for Scene Reconstruction Using Plenoptic Cameras

    O’Brien, S. G. P., Trumpf, J., Ila, V., & Mahony, R. (2018). A Geometric Observer for Scene Reconstruction Using Plenoptic Cameras. In 2018 IEEE Conference on Decision and Control (CDC) (pp. 557–564). Florida, United States: IEEE. http://doi.org/10.1109/CDC.2018.8618954

  • Homography estimation of a moving planar scene from direct point correspondence

    De Marco, S., Hua, M. D., Mahony, R., & Hamel, T. (2019). Homography estimation of a moving planar scene from direct point correspondence. Proceedings of the IEEE Conference on Decision and Control, 2018-December, 565–570. https://doi.org/10.1109/CDC.2018.8619386

  • Video Representation Learning Using Discriminative Pooling

    Wang, J., Cherian, A., Porikli, F., & Gould, S. (2018). Video Representation Learning Using Discriminative Pooling. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1149–1158). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00126

  • Non-linear Temporal Subspace Representations for Activity Recognition

    Cherian, A., Sra, S., Gould, S., & Hartley, R. (2018). Non-linear Temporal Subspace Representations for Activity Recognition. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2197–2206). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00234

  • One-class Gaussian process regressor for quality assessment of transperineal ultrasound images

    Camps, S. M., Houben, T., Fontanarosa, D., Edwards, C., Antico, M., Dunnhofer, M., Martens, E. G. H. J., Baeza, J. A., Vanneste, B. G. L., van Limbergen, E. J., de With, P. H. N., Verhaegen, F., & Carneiro, G. (2018). One-class Gaussian process regressor for quality assessment of transperineal ultrasound images. In International Conference on Medical Imaging with Deep Learning (MIDL). Amsterdam. Retrieved from https://eprints.qut.edu.au/120113/

  • Action Recognition with Dynamic Image Networks

    Bilen, H., Fernando, B., Gavves, E., & Vedaldi, A. (2018). Action Recognition with Dynamic Image Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 2799–2813. http://doi.org/10.1109/TPAMI.2017.2769085

  • Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments

    Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sunderhauf, N., Reid, I., Gould, S., & van den Hengel, A. (2018). Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3674–3683). IEEE. http://doi.org/10.1109/CVPR.2018.00387

  • Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering

    Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6077–6086). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00636

  • Searching for Representative Modes on Hypergraphs for Robust Geometric Model Fitting

    Wang, H., Xiao, G., Yan, Y., & Suter, D. (2019). Searching for Representative Modes on Hypergraphs for Robust Geometric Model Fitting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(3), 697–711. https://doi.org/10.1109/TPAMI.2018.2803173

  • Semantics-Aware Visual Object Tracking

    Yao, R., Lin, G., Shen, C., Zhang, Y., & Shi, Q. (2019). Semantics-aware visual object tracking. IEEE Transactions on Circuits and Systems for Video Technology, 29(6), 1687–1700. https://doi.org/10.1109/TCSVT.2018.2848358

  • Learning Context Flexible Attention Model for Long-Term Visual Place Recognition

    Chen, Z., Liu, L., Sa, I., Ge, Z., & Chli, M. (2018). Learning Context Flexible Attention Model for Long-Term Visual Place Recognition. IEEE Robotics and Automation Letters, 3(4), 4015–4022. http://doi.org/10.1109/LRA.2018.2859916

  • Unsupervised Domain Adaptation Using Robust Class-Wise Matching

    Zhang, L., Wang, P., Wei, W., Lu, H., Shen, C., Van Den Hengel, A., & Zhang, Y. (2019). Unsupervised Domain Adaptation Using Robust Class-Wise Matching. IEEE Transactions on Circuits and Systems for Video Technology, 29(5), 1339–1349. https://doi.org/10.1109/TCSVT.2018.2842206

  • Practical Motion Segmentation for Urban Street View Scenes

    Rubino, C., Del Bue, A., & Chin, T.-J. (2018). Practical Motion Segmentation for Urban Street View Scenes. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1879–1886). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460993

  • VITAL: VIsual Tracking via Adversarial Learning

    Song, Y., Ma, C., Wu, X., Gong, L., Bao, L., Zuo, W., Shen, C., Lau, Rynson W.H., & Yang, M.-H. (2018). VITAL: VIsual Tracking via Adversarial Learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8990–8999). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00937

  • A Fast Resection-Intersection Method for the Known Rotation Problem

    Zhang, Q., Chin, T.-J., & Le, H. M. (2018). A Fast Resection-Intersection Method for the Known Rotation Problem. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3012–3021). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00318

  • Rotation Averaging and Strong Duality

    Eriksson, A., Olsson, C., Kahl, F., & Chin, T.-J. (2018). Rotation Averaging and Strong Duality. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 127–135). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00021

  • ArthroSLAM: Multi-Sensor Robust Visual Localization for Minimally Invasive Orthopedic Surgery

    Marmol, A., Corke, P., & Peynot, T. (2018). ArthroSLAM: Multi-Sensor Robust Visual Localization for Minimally Invasive Orthopedic Surgery. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3882–3889). Madrid, Spain: IEEE. https://doi.org/10.1109/IROS.2018.8593501

  • QuadricSLAM: Dual Quadrics From Object Detections as Landmarks in Object-Oriented SLAM

    Nicholson, L., Milford, M., & Sunderhauf, N. (2019). QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM. IEEE Robotics and Automation Letters, 4(1), 1–8. https://doi.org/10.1109/LRA.2018.2866205

  • Collaborative Planning for Mixed-Autonomy Lane Merging

    Bansal, S., Cosgun, A., Nakhaei, A., & Fujimura, K. (2018). Collaborative Planning for Mixed-Autonomy Lane Merging. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4449–4455). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594197

  • Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks

    Wang, X., Şekercioğlu, Y., Drummond, T., Frémont, V., Natalizio, E., & Fantoni, I. (2018). Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks. Sensors, 18(8), 2430. http://doi.org/10.3390/s18082430

  • CReaM: Condensed Real-time Models for Depth Prediction using Convolutional Neural Networks

    Spek, A., Dharmasiri, T., & Drummond, T. (2018). CReaM: Condensed Real-time Models for Depth Prediction using Convolutional Neural Networks. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 540–547). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594243

  • Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels

    Meyer, B. J., Harwood, B., & Drummond, T. (2018). Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels. In IEEE International Conference on Image Processing (ICIP) (pp. 151–155). Athens, Greece: IEEE. http://doi.org/10.1109/ICIP.2018.8451297

  • Approximate Fisher Information Matrix to Characterise the Training of Deep Neural Networks

    Liao, Z., Drummond, T., Reid, I., & Carneiro, G. (2018). Approximate Fisher Information Matrix to Characterise the Training of Deep Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. http://doi.org/10.1109/TPAMI.2018.2876413

  • A review of deep learning in the study of materials degradation

    Nash, W., Drummond, T., & Birbilis, N. (2018). A review of deep learning in the study of materials degradation. Npj Materials Degradation, 2(1), 37. http://doi.org/10.1038/s41529-018-0058-x

  • An Extended Filtered Channel Framework for Pedestrian Detection

    You, M., Zhang, Y., Shen, C., & Zhang, X. (2018). An Extended Filtered Channel Framework for Pedestrian Detection. IEEE Transactions on Intelligent Transportation Systems, 19(5), 1640–1651. https://doi.org/10.1109/TITS.2018.2807199

  • An Embarrassingly Simple Approach to Visual Domain Adaptation

    Lu, H., Shen, C., Cao, Z., Xiao, Y., & van den Hengel, A. (2018). An Embarrassingly Simple Approach to Visual Domain Adaptation. IEEE Transactions on Image Processing, 27(7), 3403–3417. https://doi.org/10.1109/TIP.2018.2819503

  • Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction

    Zhang, L., Wei, W., Zhang, Y., Shen, C., van den Hengel, A., & Shi, Q. (2018). Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction. International Journal of Computer Vision, 126(8), 797–821. https://doi.org/10.1007/s11263-018-1080-8

  • Multi-label learning based deep transfer neural network for facial attribute classification

    Zhuang, N., Yan, Y., Chen, S., Wang, H., & Shen, C. (2018). Multi-label learning based deep transfer neural network for facial attribute classification. Pattern Recognition, 80, 225–240. https://doi.org/10.1016/J.PATCOG.2018.03.018

  • Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking

    Li, X., Zhao, L., Ji, W., Wu, Y., Wu, F., Yang, M.-H., Tao, D., & Reid, I. (2018). Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. https://doi.org/10.1109/TPAMI.2018.2818132 *In Press

  • The limits and potentials of deep learning for robotics

    Sünderhauf, N., Brock, O., Scheirer, W., Hadsell, R., Fox, D., Leitner, J., Upcroft, B., Abbeel, P., Burgard, W., Milford, M., & Corke, P. (2018). The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(4–5), 405–420. http://doi.org/10.1177/0278364918770733

  • Automating analysis of vegetation with computer vision: Cover estimates and classification

    McCool, C., Beattie, J., Milford, M., Bakker, J. D., Moore, J. L., & Firn, J. (2018). Automating analysis of vegetation with computer vision: Cover estimates and classification. Ecology and Evolution, 8(12), 6005–6015. http://doi.org/10.1002/ece3.4135

  • A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration

    Abbas, A., Maire, F., Shirazi, S., Dayoub, F., & Eich, M. (2018). A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11320 LNAI, 759–765. https://doi.org/10.1007/978-3-030-03991-2_68

  • A rapidly deployable classification system using visual data for the application of precision weed management

    Hall, D., Dayoub, F., Perez, T., & McCool, C. (2018). A rapidly deployable classification system using visual data for the application of precision weed management. Computers and Electronics in Agriculture, 148, 107–120. http://doi.org/10.1016/J.COMPAG.2018.02.023

  • SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes

    Pham, T. T., Do, T.-T., Sunderhauf, N., & Reid, I. (2018). SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8461108

  • Measures of incentives and confidence in using a social robot

    Robinson, N. L., Connolly, J., Johnson, G. M., Kim, Y., Hides, L., & Kavanagh, D. J. (2018). Measures of incentives and confidence in using a social robot. Science Robotics, 3(21), eaat6963. http://doi.org/10.1126/scirobotics.aat6963

  • Glare-free retinal imaging using a portable light field fundus camera

    Palmer, D. W., Coppin, T., Rana, K., Dansereau, D. G., Suheimat, M., Maynard, M., Atchison, D. A., Roberts, J., Crawford, R., & Jaiprakash, A. (2018). Glare-free retinal imaging using a portable light field fundus camera. Biomedical Optics Express, 9(7), 3178. http://doi.org/10.1364/BOE.9.003178

  • Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems

    James, J., Ford, J. J., & Molloy, T. L. (2018). Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems. IEEE Robotics and Automation Letters, 3(4), 4383–4390. http://doi.org/10.1109/LRA.2018.2867237

  • Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework

    Jacobson, A., Chen, Z., & Milford, M. (2018). Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework. Biological Cybernetics, 1–17. http://doi.org/10.1007/s00422-017-0745-7

  • Bootstrapping the Performance of Webly Supervised Semantic Segmentation

    Shen, T., Lin, G., Shen, C., & Reid, I. (2018). Bootstrapping the Performance of Webly Supervised Semantic Segmentation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1363–1371). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00148

  • Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective

    Kumar, S., Cherian, A., Dai, Y., & Li, H. (2018). Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 254–263). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00034

  • Output regulation for systems on matrix Lie-groups

    de Marco, S., Marconi, L., Mahony, R., & Hamel, T. (2018). Output regulation for systems on matrix Lie-groups. Automatica, 87, 8–16. https://doi.org/10.1016/J.AUTOMATICA.2017.08.006

  • Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost.

    Yu, L., Jacobson, A., & Milford, M. (2018). Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost. IEEE Robotics and Automation Letters, 3(2), 811–818. http://doi.org/10.1109/LRA.2018.2792144

  • Multimodal Trip Hazard Affordance Detection on Construction Sites

    McMahon, S., Sunderhauf, N., Upcroft, B., & Milford, M. (2018). Multimodal Trip Hazard Affordance Detection on Construction Sites. IEEE Robotics and Automation Letters, 3(1), 1–8. http://doi.org/10.1109/LRA.2017.2719763

  • Special issue on deep learning in robotics

    Sünderhauf, N., Leitner, J., Upcroft, B., & Roy, N. (2018, April 27). Special issue on deep learning in robotics. The International Journal of Robotics Research. SAGE Publications: London, England. http://doi.org/10.1177/0278364918769189

  • Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors

    *Weerasekera, C. S., Dharmasiri, T., Garg, R., Drummond, T., & Reid, I. (2018). Just-in-Time Reconstruction: Inpainting Sparse Maps Using Single View Depth Predictors as Priors. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460549

  • Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

    Park, C., Moghadam, P., Kim, S., Elfes, A., Fookes, C., & Sridharan, S. (2017). Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. Retrieved from http://arxiv.org/abs/1711.01691

  • Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge

    Morrison, D., Tow, A. W., McTaggart, M., Smith, R., Kelly-Boxall, N., Wade-McCue, S., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., and Leitner, J. (2018). Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7757–7764). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8463191

  • Dropout Sampling for Robust Object Detection in Open-Set Conditions

    Miller, D., Nicholson, L., Dayoub, F., & Sunderhauf, N. (2018). Dropout Sampling for Robust Object Detection in Open-Set Conditions. Proceedings - IEEE International Conference on Robotics and Automation, 3243–3249. https://doi.org/10.1109/ICRA.2018.8460700

  • Semantic Segmentation from Limited Training Data

    Milan, A., Pham, T., Vijay, K., Morrison, D., Tow, A. W., Liu, L., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Kelly-Boxall, N., Lee, D., McTaggart, M., Rallos, G., Razjigaev, A., Rowntree, T., Shen, T., Smith, R., Wade-McCue, S., Zhuang, Z., Lehnert, C., Lin, G., Reid, I., Corke, P., and Leitner, J. (2018). Semantic Segmentation from Limited Training Data. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1908–1915). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8461082

  • Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics

    McCool, C. S., Beattie, J., Firn, J., Lehnert, C., Kulk, J., Bawden, O., Russell, R., & Perez, T. (2018). Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics. IEEE Robotics and Automation Letters, 1–1. http://doi.org/10.1109/LRA.2018.2794619

  • Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks

    Latif, Y., Garg, R., Milford, M., & Reid, I. (2018). Addressing challenging place recognition tasks using generative adversarial networks. Proceedings - IEEE International Conference on Robotics and Automation, 2349–2355. https://doi.org/10.1109/ICRA.2018.8461081

  • Image Captioning and Visual Question Answering Based on Attributes and External Knowledge

    Wu, Q., Shen, C., Wang, P., Dick, A., & Van Den Hengel, A. (2018). Image Captioning and Visual Question Answering Based on Attributes and External Knowledge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6), 1367–1381. https://doi.org/10.1109/TPAMI.2017.2708709


    View More
  • An Embarrassingly Simple Approach to Visual Domain Adaptation

    Lu, H., Shen, C., Cao, Z., Xiao, Y., & van den Hengel, A. (2018). An Embarrassingly Simple Approach to Visual Domain Adaptation. IEEE Transactions on Image Processing, 27(7), 3403–3417. https://doi.org/10.1109/TIP.2018.2819503

    View More
  • Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction

    Zhang, L., Wei, W., Zhang, Y., Shen, C., van den Hengel, A., & Shi, Q. (2018). Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction. International Journal of Computer Vision, 126(8), 797–821. https://doi.org/10.1007/s11263-018-1080-8

    View More
  • Multi-label learning based deep transfer neural network for facial attribute classification

    Zhuang, N., Yan, Y., Chen, S., Wang, H., & Shen, C. (2018). Multi-label learning based deep transfer neural network for facial attribute classification. Pattern Recognition, 80, 225–240. https://doi.org/10.1016/J.PATCOG.2018.03.018

    View More
  • Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking

    Li, X., Zhao, L., Ji, W., Wu, Y., Wu, F., Yang, M.-H., Tao, D., & Reid, I. (2018). Multi-Task Structure-aware Context Modeling for Robust Keypoint-based Object Tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. https://doi.org/10.1109/TPAMI.2018.2818132 *In Press

    View More
  • The limits and potentials of deep learning for robotics

    Sünderhauf, N., Brock, O., Scheirer, W., Hadsell, R., Fox, D., Leitner, J., Upcroft, B., Abbeel, P., Burgard, W., Milford, M., & Corke, P. (2018). The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(4–5), 405–420. http://doi.org/10.1177/0278364918770733

    View More
  • Automating analysis of vegetation with computer vision: Cover estimates and classification

    McCool, C., Beattie, J., Milford, M., Bakker, J. D., Moore, J. L., & Firn, J. (2018). Automating analysis of vegetation with computer vision: Cover estimates and classification. Ecology and Evolution, 8(12), 6005–6015. http://doi.org/10.1002/ece3.4135

    View More
  • A rapidly deployable classification system using visual data for the application of precision weed management

    Hall, D., Dayoub, F., Perez, T., & McCool, C. (2018). A rapidly deployable classification system using visual data for the application of precision weed management. Computers and Electronics in Agriculture, 148, 107–120. http://doi.org/10.1016/J.COMPAG.2018.02.023

    View More
  • Measures of incentives and confidence in using a social robot

    Robinson, N. L., Connolly, J., Johnson, G. M., Kim, Y., Hides, L., & Kavanagh, D. J. (2018). Measures of incentives and confidence in using a social robot. Science Robotics, 3(21), eaat6963. http://doi.org/10.1126/scirobotics.aat6963

    View More
  • Glare-free retinal imaging using a portable light field fundus camera

    Palmer, D. W., Coppin, T., Rana, K., Dansereau, D. G., Suheimat, M., Maynard, M., Atchison, D. A., Roberts, J., Crawford, R., & Jaiprakash, A. (2018). Glare-free retinal imaging using a portable light field fundus camera. Biomedical Optics Express, 9(7), 3178. http://doi.org/10.1364/BOE.9.003178

    View More
  • Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems

    James, J., Ford, J. J., & Molloy, T. L. (2018). Learning to Detect Aircraft for Long-Range Vision-Based Sense-and-Avoid Systems. IEEE Robotics and Automation Letters, 3(4), 4383–4390. http://doi.org/10.1109/LRA.2018.2867237

    View More
  • Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework

    Jacobson, A., Chen, Z., & Milford, M. (2018). Leveraging variable sensor spatial acuity with a homogeneous, multi-scale place recognition framework. Biological Cybernetics, 1–17. http://doi.org/10.1007/s00422-017-0745-7

    View More
  • Output regulation for systems on matrix Lie-groups

    de Marco, S., Marconi, L., Mahony, R., & Hamel, T. (2018). Output regulation for systems on matrix Lie-groups. Automatica, 87, 8–16. https://doi.org/10.1016/J.AUTOMATICA.2017.08.006

    View More
  • Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost

    Yu, L., Jacobson, A., & Milford, M. (2018). Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sublinear Storage Cost. IEEE Robotics and Automation Letters, 3(2), 811–818. http://doi.org/10.1109/LRA.2018.2792144

    View More
  • Multimodal Trip Hazard Affordance Detection on Construction Sites

    McMahon, S., Sunderhauf, N., Upcroft, B., & Milford, M. (2018). Multimodal Trip Hazard Affordance Detection on Construction Sites. IEEE Robotics and Automation Letters, 3(1), 1–8. http://doi.org/10.1109/LRA.2017.2719763

    View More
  • Special issue on deep learning in robotics

    Sünderhauf, N., Leitner, J., Upcroft, B., & Roy, N. (2018, April 27). Special issue on deep learning in robotics. The International Journal of Robotics Research. London, England: SAGE Publications. http://doi.org/10.1177/0278364918769189

    View More
  • Image Captioning and Visual Question Answering Based on Attributes and External Knowledge

    Wu, Q., Shen, C., Wang, P., Dick, A., & Van Den Hengel, A. (2018). Image Captioning and Visual Question Answering Based on Attributes and External Knowledge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6), 1367–1381. https://doi.org/10.1109/TPAMI.2017.2708709

    View More

Conference Papers

  • Learning to Predict Crisp Boundaries

    Deng R., Shen C., Liu S., Wang H., Liu X. (2018) Learning to Predict Crisp Boundaries. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11210. Springer, Cham. https://doi.org/10.1007/978-3-030-01231-1_35

    View More
  • Robust Fitting in Computer Vision: Easy or Hard?

    Chin TJ., Cai Z., Neumann F. (2018) Robust Fitting in Computer Vision: Easy or Hard?. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_43

    View More
  • Deterministic Consensus Maximization with Biconvex Programming

    Cai Z., Chin TJ., Le H., Suter D. (2018) Deterministic Consensus Maximization with Biconvex Programming. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_42

    View More
  • A Binary Optimization Approach for Constrained K-Means Clustering

    Le H.M., Eriksson A., Do TT., Milford M. (2019) A Binary Optimization Approach for Constrained K-Means Clustering. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11364. Springer, Cham. https://doi.org/10.1007/978-3-030-20870-7_24

    View More
  • Traversing Latent Space using Decision Ferns

    Zuo Y., Avraham G., Drummond T. (2019) Traversing Latent Space Using Decision Ferns. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11361. Springer, Cham. https://doi.org/10.1007/978-3-030-20887-5_37

    View More
  • Stereo Computation for a Single Mixture Image

    Zhong Y., Dai Y., Li H. (2018) Stereo Computation for a Single Mixture Image. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11213. Springer, Cham. https://doi.org/10.1007/978-3-030-01240-3_2

    View More
  • Learning Free-Form Deformations for 3D Object Reconstruction

    Jack, D., Pontes, J. K., Sridharan, S., Fookes, C., Shirazi, S., Maire, F., & Eriksson, A. (2019). Learning Free-Form Deformations for 3D Object Reconstruction. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11362 LNCS, 317–333. https://doi.org/10.1007/978-3-030-20890-5_21

    View More
  • Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

    Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., & Reid, I. M. (2018). Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 340–349). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00043

    View More
  • Discrimination-aware channel pruning for deep neural networks

    Zhuang, Z., Tan, M., Zhuang, B., Liu, J., Guo, Y., Wu, Q., Huang, J., & Zhu, J. (2018). Discrimination-aware Channel Pruning for Deep Neural Networks. Advances in Neural Information Processing Systems, 2018-December, 875–886.

    View More
  • OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

    Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 7758–7765). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593761

    View More
  • Scalable Deep k-Subspace Clustering

    Zhang T., Ji P., Harandi M., Hartley R., Reid I. (2019) Scalable Deep k-Subspace Clustering. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_30

    View More
  • Continuous-Time Intensity Estimation Using Event Cameras

    Scheerlinck C., Barnes N., Mahony R. (2019) Continuous-Time Intensity Estimation Using Event Cameras. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_20

    View More
  • Action Anticipation by Predicting Future Dynamic Images

    Rodriguez C., Fernando B., Li H. (2019) Action Anticipation by Predicting Future Dynamic Images. In: Leal-Taixé L., Roth S. (eds) Computer Vision – ECCV 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11131. Springer, Cham. https://doi.org/10.1007/978-3-030-11015-4_10

    View More
  • Assisted Control for Semi-Autonomous Power Infrastructure Inspection Using Aerial Vehicles

    *McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection Using Aerial Vehicles. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5719–5726). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593529

    View More
  • Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

    Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

    View More
  • Efficient Subpixel Refinement with Symbolic Linear Predictors

    Lui, V., Geeves, J., Yii, W., & Drummond, T. (2018). Efficient Subpixel Refinement with Symbolic Linear Predictors. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8165–8173). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00852

    View More
  • Structure Aware SLAM Using Quadrics and Planes

    Hosseinzadeh M., Latif Y., Pham T., Suenderhauf N., Reid I. (2019) Structure Aware SLAM Using Quadrics and Planes. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11363. Springer, Cham. https://doi.org/10.1007/978-3-030-20893-6_26

    View More
  • Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration

    Hausler, S., Jacobson, A., & Milford, M. (2018). Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration. Retrieved from http://arxiv.org/abs/1810.12465

    View More
  • LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

    Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

    View More
  • An End-to-End TextSpotter with Explicit Alignment and Attention

    He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., & Sun, C. (2018). An End-to-End TextSpotter with Explicit Alignment and Attention. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5020–5029). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00527

    View More
  • Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

    Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3645–3652). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8461051

    View More
  • ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs

    Dharmasiri T., Spek A., Drummond T. (2019) ENG: End-to-End Neural Geometry for Robust Depth and Pose Estimation Using CNNs. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11361. Springer, Cham. https://doi.org/10.1007/978-3-030-20887-5_39

    View More
  • Neural Algebra of Classifiers

    *Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 729–737). IEEE. https://doi.org/10.1109/WACV.2018.00085

    View More
  • Towards vision-based manipulation of plastic materials

    *Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 485–490). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594108

    View More
  • Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Bruce, J., Sünderhauf, N., Mirowski, P., Hadsell, R., & Milford, M. (2018). Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal. Retrieved from http://arxiv.org/abs/1807.05211

    View More
  • Training Deep Neural Networks for Visual Servoing

    *Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8461068

    View More
  • VIENA2: A Driving Anticipation Dataset

    Aliakbarian, M. S., Saleh, F. S., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2019). VIENA2: A Driving Anticipation Dataset. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11361 LNCS, 449–466. https://doi.org/10.1007/978-3-030-20887-5_28

    View More
  • Model-free and learning-free grasping by Local Contact Moment matching

    *Adjigble, M., Marturi, N., Ortenzi, V., Rajasekaran, V., Corke, P., & Stolkin, R. (2018). Model-free and learning-free grasping by Local Contact Moment matching. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2933–2940). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594226

    View More
  • Déjà Vu: Scalable Place Recognition Using Mutually Supportive Feature Frequencies

    Jacobson, A., Scheirer, W., & Milford, M. (2017). Déjà vu: Scalable place recognition using mutually supportive feature frequencies. IEEE International Conference on Intelligent Robots and Systems, 2017-September, 6654–6661. https://doi.org/10.1109/IROS.2017.8206580

    View More
  • Design of a multi-modal end-effector and grasping system: How integrated design helped win the Amazon Robotics Challenge

    Kelly-Boxall, N., Morrison, D., Wade-McCue, S., Corke, P., & Leitner, J. (2018). Design of a multi-modal end-effector and grasping system: How integrated design helped win the Amazon Robotics Challenge. Australasian Conference on Robotics and Automation, ACRA, 2018-December.

    View More
  • On the structure of kinematic systems with complete symmetry

    Trumpf, J., Mahony, R., & Hamel, T. (2019). On the structure of kinematic systems with complete symmetry. Proceedings of the IEEE Conference on Decision and Control, 2018-December, 1276–1280. https://doi.org/10.1109/CDC.2018.8619718

    View More
  • Seeing Deeply and Bidirectionally: A Deep Learning Approach for Single Image Reflection Removal

    Yang J., Gong D., Liu L., Shi Q. (2018) Seeing Deeply and Bidirectionally: A Deep Learning Approach for Single Image Reflection Removal. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11207. Springer, Cham. https://doi.org/10.1007/978-3-030-01219-9_40

    View More
  • Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge

    Teney, D., Anderson, P., He, X., & Hengel, A. van den. (2018). Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4223–4232). IEEE. http://doi.org/10.1109/CVPR.2018.00444

    View More
  • Deblurring Natural Image Using Super-Gaussian Fields

    Liu Y., Dong W., Gong D., Zhang L., Shi Q. (2018) Deblurring Natural Image Using Super-Gaussian Fields. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11205. Springer, Cham. https://doi.org/10.1007/978-3-030-01246-5_28

    View More
  • Adversarial Training of Variational Auto-Encoders for High Fidelity Image Generation

    Khan, S. H., Hayat, M., & Barnes, N. (2018). Adversarial Training of Variational Auto-Encoders for High Fidelity Image Generation. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1312–1320). Lake Tahoe, United States: IEEE. https://doi.org/10.1109/WACV.2018.00148

    View More
  • Semi-dense 3D Reconstruction with a Stereo Event Camera

    Zhou Y., Gallego G., Rebecq H., Kneip L., Li H., Scaramuzza D. (2018) Semi-dense 3D Reconstruction with a Stereo Event Camera. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11205. Springer, Cham. https://doi.org/10.1007/978-3-030-01246-5_15

    View More
  • 3D Geometry-Aware Semantic Labeling of Outdoor Street Scenes

    *Zhong, Y., Dai, Y., & Li, H. (2018). 3D Geometry-Aware Semantic Labeling of Outdoor Street Scenes. In 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 2343–2349). IEEE. http://doi.org/10.1109/ICPR.2018.8545378

    View More
  • Open-World Stereo Video Matching with Deep RNN

    Zhong Y., Li H., Dai Y. (2018) Open-World Stereo Video Matching with Deep RNN. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11206. Springer, Cham. https://doi.org/10.1007/978-3-030-01216-8_7

    View More
  • Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective

    *Zhang, J., Zhang, T., Dai, Y., Harandi, M., & Hartley, R. (2018). Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9029–9038). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00941

    View More
  • Deep Auto-Set: A Deep Auto-Encoder-Set Network for Activity Recognition Using Wearables

    Varamin, A. A., Abbasnejad, E., Shi, Q., Ranasinghe, D. C., & Rezatofighi, H. (2018). Deep Auto-Set: A Deep Auto-Encoder-Set Network for Activity Recognition Using Wearables (Vol. 18).

    View More
  • Robust Visual Odometry in Underwater Environment

    *Zhang, J., Ila, V., & Kneip, L. (2018). Robust Visual Odometry in Underwater Environment. In 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO) (pp. 1–9). Kobe, Japan: IEEE. http://doi.org/10.1109/OCEANSKOBE.2018.8559452

    View More
  • Goal-Oriented Visual Question Generation via Intermediate Rewards

    Zhang, J., Wu, Q., Shen, C., Zhang, J., Lu, J., & van den Hengel, A. (2018). Goal-Oriented Visual Question Generation via Intermediate Rewards. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11209 LNCS, 189–204. https://doi.org/10.1007/978-3-030-01228-1_12

    View More
  • Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes

    *Yu, X., Fernando, B., Hartley, R., & Porikli, F. (2018). Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 908–917). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00101

    View More
  • Face Super-Resolution Guided by Facial Component Heatmaps

    Yu X., Fernando B., Ghanem B., Porikli F., Hartley R. (2018) Face Super-Resolution Guided by Facial Component Heatmaps. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11213. Springer, Cham. https://doi.org/10.1007/978-3-030-01240-3_14

    View More
  • Learning Discriminative Video Representations Using Adversarial Perturbations

    Wang J., Cherian A. (2018) Learning Discriminative Video Representations Using Adversarial Perturbations. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11208. Springer, Cham. https://doi.org/10.1007/978-3-030-01225-0_42

    View More
  • Structure from Recurrent Motion: From Rigidity to Recurrency

    *Li, X., Li, H., Joo, H., Liu, Y., & Sheikh, Y. (2018). Structure from Recurrent Motion: From Rigidity to Recurrency. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3032–3040). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00320

    View More
  • Kernel Support Vector Machines and Convolutional Neural Networks

    Jiang, S., Hartley, R., & Fernando, B. (2018). Kernel Support Vector Machines and Convolutional Neural Networks. In 2018 Digital Image Computing: Techniques and Applications (DICTA) (pp. 1–7). Canberra, Australia: IEEE. http://doi.org/10.1109/DICTA.2018.8615840

    View More
  • Semi-Supervised SLAM: Leveraging Low-Cost Sensors on Underground Autonomous Vehicles for Position Tracking

    Jacobson, A., Zeng, F., Smith, D., Boswell, N., Peynot, T., & Milford, M. (2018). Semi-Supervised SLAM: Leveraging Low-Cost Sensors on Underground Autonomous Vehicles for Position Tracking. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3970–3977). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593750

    View More
  • Parallel Attention: A Unified Framework for Visual Object Discovery Through Dialogs and Queries

    Zhuang, B., Wu, Q., Shen, C., Reid, I., & Hengel, A. van den. (2018). Parallel Attention: A Unified Framework for Visual Object Discovery Through Dialogs and Queries. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4252–4261). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00447

    View More
  • Towards Effective Low-Bitwidth Convolutional Neural Networks

    Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. (2018). Towards Effective Low-Bitwidth Convolutional Neural Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7920–7928). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00826

    View More
  • Are You Talking to Me? Reasoned Visual Dialog Generation Through Adversarial Learning

    Wu, Q., Wang, P., Shen, C., Reid, I., & Hengel, A. van den. (2018). Are You Talking to Me? Reasoned Visual Dialog Generation Through Adversarial Learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6106–6115). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00639

    View More
  • Bayesian Semantic Instance Segmentation in Open Set World

    Pham, T., Vijay Kumar, B. G., Do, T. T., Carneiro, G., & Reid, I. (2018). Bayesian semantic instance segmentation in open set world. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11214 LNCS, 3–18. https://doi.org/10.1007/978-3-030-01249-6_1

    View More
  • Training Medical Image Analysis Systems like Radiologists

    Maicas G., Bradley A.P., Nascimento J.C., Reid I., Carneiro G. (2018) Training Medical Image Analysis Systems like Radiologists. In: Frangi A., Schnabel J., Davatzikos C., Alberola-López C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11070. Springer, Cham. https://doi.org/10.1007/978-3-030-00928-1_62

    View More
  • Visual Question Answering with Memory-Augmented Networks

    Ma, C., Shen, C., Dick, A., Wu, Q., Wang, P., Hengel, A. van den, & Reid, I. (2018). Visual Question Answering with Memory-Augmented Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6975–6984). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00729

    View More
  • Deep Regression Tracking with Shrinkage Loss

    Lu X., Ma C., Ni B., Yang X., Reid I., Yang MH. (2018) Deep Regression Tracking with Shrinkage Loss. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11218. Springer, Cham. https://doi.org/10.1007/978-3-030-01264-9_22

    View More
  • Efficient Dense Point Cloud Object Reconstruction Using Deformation Vector Fields

    Li K., Pham T., Zhan H., Reid I. (2018) Efficient Dense Point Cloud Object Reconstruction Using Deformation Vector Fields. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11216. Springer, Cham. https://doi.org/10.1007/978-3-030-01258-8_31

    View More
  • Multi-modal Cycle-Consistent Generalized Zero-Shot Learning

    Felix R., Vijay Kumar B.G., Reid I., Carneiro G. (2018) Multi-modal Cycle-Consistent Generalized Zero-Shot Learning. In: Ferrari V., Hebert M., Sminchisescu C., Weiss Y. (eds) Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science, vol 11210. Springer, Cham. https://doi.org/10.1007/978-3-030-01231-1_2

    View More
  • AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection

    Do, T.-T., Nguyen, A., & Reid, I. (2018). AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–5). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460902

    View More
  • Visual Grounding via Accumulated Attention

    Deng, C., Wu, Q., Wu, Q., Hu, F., Lyu, F., & Tan, M. (2018). Visual Grounding via Accumulated Attention. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7746–7755). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00808

    View More
  • Vision Based Forward Sensitive Reactive Control for a Quadrotor VTOL

    Stevens, J.-L., & Mahony, R. (2018). Vision Based Forward Sensitive Reactive Control for a Quadrotor VTOL. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5232–5238). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8593606

    View More
  • Calibrating Light-Field Cameras Using Plenoptic Disc Features

    O’Brien, S., Trumpf, J., Ila, V., & Mahony, R. (2018). Calibrating Light-Field Cameras Using Plenoptic Disc Features. In 2018 International Conference on 3D Vision (3DV) (pp. 286–294). Verona, Italy: IEEE. http://doi.org/10.1109/3DV.2018.00041

    View More
  • A Geometric Observer for Scene Reconstruction Using Plenoptic Cameras

    O’Brien, S. G. P., Trumpf, J., Ila, V., & Mahony, R. (2018). A Geometric Observer for Scene Reconstruction Using Plenoptic Cameras. In 2018 IEEE Conference on Decision and Control (CDC) (pp. 557–564). Florida, United States: IEEE. http://doi.org/10.1109/CDC.2018.8618954

    View More
  • Homography estimation of a moving planar scene from direct point correspondence

    De Marco, S., Hua, M. D., Mahony, R., & Hamel, T. (2019). Homography estimation of a moving planar scene from direct point correspondence. Proceedings of the IEEE Conference on Decision and Control, 2018-December, 565–570. https://doi.org/10.1109/CDC.2018.8619386

    View More
  • Video Representation Learning Using Discriminative Pooling

    Wang, J., Cherian, A., Porikli, F., & Gould, S. (2018). Video Representation Learning Using Discriminative Pooling. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1149–1158). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00126

    View More
  • Non-linear Temporal Subspace Representations for Activity Recognition

    Cherian, A., Sra, S., Gould, S., & Hartley, R. (2018). Non-linear Temporal Subspace Representations for Activity Recognition. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2197–2206). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00234

    View More
  • One-class Gaussian process regressor for quality assessment of transperineal ultrasound images

    Camps, S. M., Houben, T., Fontanarosa, D., Edwards, C., Antico, M., Dunnhofer, M., Martens, E.G.H.J., Baeza, J.A., Vanneste, B.G.L., van Limbergen, E.J., de With, P.H.N., Verhaegen, F., & Carneiro, G. (2018). One-class Gaussian process regressor for quality assessment of transperineal ultrasound images. In International Conference on Medical Imaging with Deep Learning (MIDL). Amsterdam. Retrieved from https://eprints.qut.edu.au/120113/

    View More
  • Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments

    Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sunderhauf, N., Reid, I., Gould, S., & van den Hengel, A. (2018). Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3674–3683). IEEE. http://doi.org/10.1109/CVPR.2018.00387

    View More
  • Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering

    Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6077–6086). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00636

    View More
  • Practical Motion Segmentation for Urban Street View Scenes

    Rubino, C., Del Bue, A., & Chin, T.-J. (2018). Practical Motion Segmentation for Urban Street View Scenes. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1879–1886). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460993

    View More
  • VITAL: VIsual Tracking via Adversarial Learning

    Song, Y., Ma, C., Wu, X., Gong, L., Bao, L., Zuo, W., Shen, C., Lau, R. W. H., & Yang, M.-H. (2018). VITAL: VIsual Tracking via Adversarial Learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8990–8999). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00937

    View More
  • A Fast Resection-Intersection Method for the Known Rotation Problem

    Zhang, Q., Chin, T.-J., & Le, H. M. (2018). A Fast Resection-Intersection Method for the Known Rotation Problem. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3012–3021). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00318

    View More
  • Rotation Averaging and Strong Duality

    Eriksson, A., Olsson, C., Kahl, F., & Chin, T.-J. (2018). Rotation Averaging and Strong Duality. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 127–135). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00021

    View More
  • ArthroSLAM: Multi-Sensor Robust Visual Localization for Minimally Invasive Orthopedic Surgery

    Marmol, A., Corke, P., & Peynot, T. (2018). ArthroSLAM: Multi-Sensor Robust Visual Localization for Minimally Invasive Orthopedic Surgery. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3882–3889). Madrid, Spain: IEEE. https://doi.org/10.1109/IROS.2018.8593501

    View More
  • Collaborative Planning for Mixed-Autonomy Lane Merging

    Bansal, S., Cosgun, A., Nakhaei, A., & Fujimura, K. (2018). Collaborative Planning for Mixed-Autonomy Lane Merging. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4449–4455). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594197

    View More
  • CReaM: Condensed Real-time Models for Depth Prediction using Convolutional Neural Networks

    Spek, A., Dharmasiri, T., & Drummond, T. (2018). CReaM: Condensed Real-time Models for Depth Prediction using Convolutional Neural Networks. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 540–547). Madrid, Spain: IEEE. http://doi.org/10.1109/IROS.2018.8594243

    View More
  • Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels

    Meyer, B. J., Harwood, B., & Drummond, T. (2018). Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels. In IEEE International Conference on Image Processing (ICIP) (pp. 151–155). Athens, Greece: IEEE. http://doi.org/10.1109/ICIP.2018.8451297

    View More
  • A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration

    Abbas, A., Maire, F., Shirazi, S., Dayoub, F., & Eich, M. (2018). A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11320 LNAI, 759–765. https://doi.org/10.1007/978-3-030-03991-2_68

    View More
  • SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes

    Pham, T. T., Do, T.-T., Sunderhauf, N., & Reid, I. (2018). SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8461108

    View More
  • Bootstrapping the Performance of Webly Supervised Semantic Segmentation

    Shen, T., Lin, G., Shen, C., & Reid, I. (2018). Bootstrapping the Performance of Webly Supervised Semantic Segmentation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1363–1371). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00148

    View More
  • Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective

    Kumar, S., Cherian, A., Dai, Y., & Li, H. (2018). Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 254–263). Salt Lake City, United States: IEEE. http://doi.org/10.1109/CVPR.2018.00034

    View More
  • Just-In-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors

    *Weerasekera, C. S., Dharmasiri, T., Garg, R., Drummond, T., & Reid, I. (2018). Just-in-Time Reconstruction: Inpainting Sparse Maps Using Single View Depth Predictors as Priors. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–9). Brisbane, Australia: IEEE. http://doi.org/10.1109/ICRA.2018.8460549

    View More
  • Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

    Park, C., Moghadam, P., Kim, S., Elfes, A., Fookes, C., & Sridharan, S. (2017). Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. Retrieved from http://arxiv.org/abs/1711.01691

    View More
  • Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge

    Morrison, D., Tow, A. W., McTaggart, M., Smith, R., Kelly-Boxall, N., Wade-McCue, S., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., & Leitner, J. (2018). Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7757–7764). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8463191

    View More
  • Dropout Sampling for Robust Object Detection in Open-Set Conditions

    Miller, D., Nicholson, L., Dayoub, F., & Sunderhauf, N. (2018). Dropout Sampling for Robust Object Detection in Open-Set Conditions. Proceedings - IEEE International Conference on Robotics and Automation, 3243–3249. https://doi.org/10.1109/ICRA.2018.8460700

    View More
  • Semantic Segmentation from Limited Training Data

    Milan, A., Pham, T., Vijay, K., Morrison, D., Tow, A. W., Liu, L., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Kelly-Boxall, N., Lee, D., McTaggart, M., Rallos, G., Razjigaev, A., Rowntree, T., Shen, T., Smith, R., Wade-McCue, S., Zhuang, Z., Lehnert, C., Lin, G., Reid, I., Corke, P., & Leitner, J. (2018). Semantic Segmentation from Limited Training Data. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1908–1915). Brisbane: IEEE. http://doi.org/10.1109/ICRA.2018.8461082

    View More
  • Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics

    McCool, C. S., Beattie, J., Firn, J., Lehnert, C., Kulk, J., Bawden, O., Russell, R., & Perez, T. (2018). Efficacy of Mechanical Weeding Tools: a study into alternative weed management strategies enabled by robotics. IEEE Robotics and Automation Letters, 1–1. http://doi.org/10.1109/LRA.2018.2794619

    View More
  • Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks

    Latif, Y., Garg, R., Milford, M., & Reid, I. (2018). Addressing challenging place recognition tasks using generative adversarial networks. Proceedings - IEEE International Conference on Robotics and Automation, 2349–2355. https://doi.org/10.1109/ICRA.2018.8461081

    View More

Edited Collection

  • Special issue on deep learning in robotics

    Sünderhauf, N., Leitner, J., Upcroft, B., & Roy, N. (2018, April 27). Special issue on deep learning in robotics. The International Journal of Robotics Research. London, England: SAGE Publications. http://doi.org/10.1177/0278364918769189

    View More