Publications

2018 Submitted [94]

Agile Amulet: Real-Time Salient Object Detection with Contextual Attention

Zhang, P., Wang, L., Wang, D., Lu, H., & Shen, C. (2018). Agile Amulet: Real-Time Salient Object Detection with Contextual Attention. Retrieved from http://arxiv.org/abs/1802.06960

HyperFusion-Net: Densely Reflective Fusion for Salient Object Detection

Zhang, P., Lu, H., & Shen, C. (2018). HyperFusion-Net: Densely Reflective Fusion for Salient Object Detection. Retrieved from https://arxiv.org/abs/1804.05142

Salient Object Detection by Lossless Feature Reflection

Zhang, P., Liu, W., Lu, H., & Shen, C. (2018). Salient Object Detection by Lossless Feature Reflection. Retrieved from https://arxiv.org/pdf/1802.06527

Adaptive Importance Learning for Improving Lightweight Image Super-resolution Network

Zhang, L., Wang, P., Shen, C., Liu, L., Wei, W., Zhang, Y., & Hengel, A. van den. (2018). Adaptive Importance Learning for Improving Lightweight Image Super-resolution Network. Retrieved from http://arxiv.org/abs/1806.01576

End-to-End Diagnosis and Segmentation Learning from Cardiac Magnetic Resonance Imaging

Snaauw, G., Gong, D., Maicas, G., Hengel, A. van den, Niessen, W. J., Verjans, J., & Carneiro, G. (2018). End-to-End Diagnosis and Segmentation Learning from Cardiac Magnetic Resonance Imaging. Retrieved from https://arxiv.org/abs/1810.10117

Online UAV Path Planning for Joint Detection and Tracking of Multiple Radio-tagged Objects

Nguyen, H. Van, Rezatofighi, S. H., Vo, B.-N., & Ranasinghe, D. C. (2018). Online UAV Path Planning for Joint Detection and Tracking of Multiple Radio-tagged Objects. Retrieved from https://arxiv.org/pdf/1808.04445

Edge-Preserving Piecewise Linear Image Smoothing Using Piecewise Constant Filters

Liu, W., Xu, W., Chen, X., Huang, X., Shen, C., & Yang, J. (2018). Edge-Preserving Piecewise Linear Image Smoothing Using Piecewise Constant Filters. Retrieved from http://arxiv.org/abs/1801.06928

Deep attention-based classification network for robust depth prediction

Li, R., Xian, K., Shen, C., Cao, Z., Lu, H., & Hang, L. (2018). Deep attention-based classification network for robust depth prediction. Retrieved from https://arxiv.org/abs/1807.03959

Mask-aware networks for crowd counting

Jiang, S., Lu, X., Lei, Y., & Liu, L. (2018). Mask-aware networks for crowd counting. Retrieved from http://arxiv.org/abs/1901.00039

Producing radiologist-quality reports for interpretable artificial intelligence

Gale, W., Oakden-Rayner, L., Carneiro, G., Bradley, A. P., & Palmer, L. J. (2018). Producing radiologist-quality reports for interpretable artificial intelligence. Retrieved from http://arxiv.org/abs/1806.00340

ApolloCar3D: A Large 3D Car Instance Understanding Benchmark for Autonomous Driving

Song, X., Wang, P., Zhou, D., Zhu, R., Guan, C., Dai, Y., Su, H., Li, H., & Yang, R. (2018). ApolloCar3D: A Large 3D Car Instance Understanding Benchmark for Autonomous Driving. Retrieved from http://arxiv.org/abs/1811.12222

Phase-only Image Based Kernel Estimation for Single-image Blind Deblurring

Pan, L., Hartley, R., Liu, M., & Dai, Y. (2018). Phase-only Image Based Kernel Estimation for Single-image Blind Deblurring. Retrieved from https://arxiv.org/pdf/1811.10185

Block Mean Approximation for Efficient Second Order Optimization

Lu, Y., Harandi, M., Hartley, R., & Pascanu, R. (2018). Block Mean Approximation for Efficient Second Order Optimization. Retrieved from http://arxiv.org/abs/1804.05484

Deep Stochastic Attraction and Repulsion Embedding for Image Based Localization

Liu, L., Li, H., & Dai, Y. (2018). Deep Stochastic Attraction and Repulsion Embedding for Image Based Localization. Retrieved from https://arxiv.org/pdf/1808.08779

Light-Weight RefineNet for Real-Time Semantic Segmentation

Nekrasov, V., Shen, C., & Reid, I. (2018). Light-Weight RefineNet for Real-Time Semantic Segmentation. Retrieved from http://arxiv.org/abs/1810.03272

Pre and Post-hoc Diagnosis and Interpretation of Malignancy from Breast DCE-MRI

Maicas, G., Bradley, A. P., Nascimento, J. C., Reid, I., & Carneiro, G. (2018). Pre and Post-hoc Diagnosis and Interpretation of Malignancy from Breast DCE-MRI. Retrieved from https://arxiv.org/pdf/1809.09404.pdf

Learning an Optimizer for Image Deconvolution

Gong, D., Zhang, Z., Shi, Q., Hengel, A. van den, Shen, C., & Zhang, Y. (2018). Learning an Optimizer for Image Deconvolution. Retrieved from https://arxiv.org/abs/1804.03368

Simultaneous Localization and Mapping with Dynamic Rigid Objects

Henein, M., Kennedy, G., Ila, V., & Mahony, R. (2018). Simultaneous Localization and Mapping with Dynamic Rigid Objects. Retrieved from http://arxiv.org/abs/1805.03800

Partially-Supervised Image Captioning

Anderson, P., Gould, S., & Johnson, M. (2018). Partially-Supervised Image Captioning. Retrieved from http://arxiv.org/abs/1806.06004

Towards Effective Deep Embedding for Zero-Shot Learning

Zhang, L., Wang, P., Liu, L., Shen, C., Wei, W., Zhang, Y., & Van Den Hengel, A. (2018). Towards Effective Deep Embedding for Zero-Shot Learning. Retrieved from https://arxiv.org/pdf/1808.10075

RGB-D Based Action Recognition with Light-weight 3D Convolutional Networks

Zhang, H., Li, Y., Wang, P., Liu, Y., & Shen, C. (2018). RGB-D Based Action Recognition with Light-weight 3D Convolutional Networks. Retrieved from https://arxiv.org/pdf/1811.09908

Neighbourhood Watch: Referring Expression Comprehension via Language-guided Graph Attention Networks

Wang, P., Wu, Q., Cao, J., Shen, C., Gao, L., & Van Den Hengel, A. (2018). Neighbourhood Watch: Referring Expression Comprehension via Language-guided Graph Attention Networks. Retrieved from https://arxiv.org/pdf/1812.04794.pdf

Object Captioning and Retrieval with Natural Language

Nguyen, A., Do, T.-T., Reid, I., Caldwell, D. G., & Tsagarakis, N. G. (2018). Object Captioning and Retrieval with Natural Language. Retrieved from https://arxiv.org/abs/1803.06152

Diagnostics in Semantic Segmentation

Nekrasov, V., Shen, C., & Reid, I. (2018). Diagnostics in Semantic Segmentation. Retrieved from https://arxiv.org/pdf/1809.10328

Correlation Propagation Networks for Scene Text Detection

Liu, Z., Lin, G., Ling Goh, W., Liu, F., Shen, C., & Yang, X. (2018). Correlation Propagation Networks for Scene Text Detection. Retrieved from https://arxiv.org/pdf/1810.00304

Optimizable Object Reconstruction from a Single View

Li, K., Garg, R., Cai, M., & Reid, I. (2018). Optimizable Object Reconstruction from a Single View. Retrieved from https://arxiv.org/pdf/1811.11921

Visual Question Answering as Reading Comprehension

Li, H., Wang, P., Shen, C., & Van Den Hengel, A. (2018). Visual Question Answering as Reading Comprehension. Retrieved from https://arxiv.org/pdf/1811.11903

Real-Time Monocular Object-Model Aware Sparse SLAM

Hosseinzadeh, M., Li, K., Latif, Y., & Reid, I. (2018). Real-Time Monocular Object-Model Aware Sparse SLAM. Retrieved from https://arxiv.org/pdf/1809.09149

Simultaneous Compression and Quantization: A Joint Approach for Efficient Unsupervised Hashing

Hoang, T., Do, T.-T., Le-Tan, D.-K., & Cheung, N.-M. (2018). Simultaneous Compression and Quantization: A Joint Approach for Efficient Unsupervised Hashing. Retrieved from http://arxiv.org/abs/1802.06645

G2D: from GTA to Data

Doan, A.-D., Jawaid, A. M., Do, T.-T., & Chin, T.-J. (2018). G2D: from GTA to Data. Retrieved from http://arxiv.org/abs/1806.07381

Practical Visual Localization for Autonomous Driving: Why Not Filter?

Doan, A.-D., Do, T.-T., Latif, Y., Chin, T.-J., & Reid, I. (2018). Practical Visual Localization for Autonomous Driving: Why Not Filter? Retrieved from https://arxiv.org/pdf/1811.08063

Binary Constrained Deep Hashing Network for Image Retrieval without Human Annotation

Do, T.-T., Tan, D.-K. Le, Pham, T., Hoang, T., Le, H., Cheung, N.-M., & Reid, I. (2018). Binary Constrained Deep Hashing Network for Image Retrieval without Human Annotation. Retrieved from http://arxiv.org/abs/1802.07437

From Selective Deep Convolutional Features to Compact Binary Representations for Image Retrieval

Do, T.-T., Hoang, T., Tan, D.-K. Le, & Cheung, N.-M. (2018). From Selective Deep Convolutional Features to Compact Binary Representations for Image Retrieval. Retrieved from http://arxiv.org/abs/1802.02899

Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation

Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. (2018). Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation. Retrieved from https://arxiv.org/pdf/1811.10413

Training Compact Neural Networks with Binary Weights and Low Precision Activations

Zhuang, B., Shen, C., & Reid, I. (2018). Training Compact Neural Networks with Binary Weights and Low Precision Activations. Retrieved from https://arxiv.org/pdf/1808.02631

Discrimination-aware Channel Pruning for Deep Neural Networks

Zhuang, Z., Tan, M., Zhuang, B., Liu, J., Guo, Y., Wu, Q., Huang, J., & Zhu, J. (2018). Discrimination-aware Channel Pruning for Deep Neural Networks. Retrieved from https://arxiv.org/pdf/1810.11809

Robust Fitting in Computer Vision: Easy or Hard?

Chin, T.-J., Cai, Z., & Neumann, F. (2018). Robust Fitting in Computer Vision: Easy or Hard? Retrieved from http://arxiv.org/abs/1802.06464

Star Tracking using an Event Camera

Chin, T.-J., Bagchi, S., & Eriksson, A. (2018). Star Tracking using an Event Camera. Retrieved from https://arxiv.org/pdf/1812.02895

Monocular Depth Estimation with Augmented Ordinal Depth Relationships

Cao, Y., Zhao, T., Xian, K., Shen, C., & Cao, Z. (2018). Monocular Depth Estimation with Augmented Ordinal Depth Relationships. Retrieved from http://arxiv.org/abs/1806.00585

Adversarial Learning with Local Coordinate Coding

Cao, J., Guo, Y., Wu, Q., Shen, C., Huang, J., & Tan, M. (2018). Adversarial Learning with Local Coordinate Coding. Retrieved from http://arxiv.org/abs/1806.04895

Deterministic Consensus Maximization with Biconvex Programming

Cai, Z., Chin, T.-J., Le, H., & Suter, D. (2018). Deterministic Consensus Maximization with Biconvex Programming. Retrieved from https://arxiv.org/pdf/1807.09436.pdf

What’s to know? Uncertainty as a Guide to Asking Goal-oriented Questions

Abbasnejad, E., Wu, Q., Shi, J., & Van Den Hengel, A. (2018). What’s to know? Uncertainty as a Guide to Asking Goal-oriented Questions. Retrieved from https://arxiv.org/pdf/1812.06401.pdf

An Active Information Seeking Model for Goal-oriented Vision-and-Language Tasks

Abbasnejad, E., Wu, Q., Abbasnejad, I., Shi, J., & Van Den Hengel, A. (2018). An Active Information Seeking Model for Goal-oriented Vision-and-Language Tasks. Retrieved from https://arxiv.org/pdf/1812.06398.pdf

Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

Bruce, J., Sünderhauf, N., Mirowski, P., Hadsell, R., & Milford, M. (2018). Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal. Retrieved from http://arxiv.org/abs/1807.05211

Probability-based Detection Quality (PDQ): A Probabilistic Approach to Detection Evaluation

Hall, D., Dayoub, F., Skinner, J., Corke, P., Carneiro, G., & Sünderhauf, N. (2018). Probability-based Detection Quality (PDQ): A Probabilistic Approach to Detection Evaluation. Retrieved from http://arxiv.org/abs/1811.10800

Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor

Stoffregen, T., & Kleeman, L. (2018). Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor. Retrieved from http://arxiv.org/abs/1805.12326

Efficient Subpixel Refinement with Symbolic Linear Predictors

Lui, V., Geeves, J., Yii, W., & Drummond, T. (2018). Efficient Subpixel Refinement with Symbolic Linear Predictors. Retrieved from http://arxiv.org/abs/1804.10750

Generative Adversarial Forests for Better Conditioned Adversarial Learning

Zuo, Y., Avraham, G., & Drummond, T. (2018). Generative Adversarial Forests for Better Conditioned Adversarial Learning. Retrieved from http://arxiv.org/abs/1805.05185

Learning Factorized Representations for Open-set Domain Adaptation

Baktashmotlagh, M., Faraki, M., Drummond, T., & Salzmann, M. (2018). Learning Factorized Representations for Open-set Domain Adaptation. Retrieved from http://arxiv.org/abs/1805.12277

Quantity beats quality for semantic segmentation of corrosion in images

Nash, W., Drummond, T., & Birbilis, N. (2018). Quantity beats quality for semantic segmentation of corrosion in images. Retrieved from http://arxiv.org/abs/1807.03138

ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs

Dharmasiri, T., Spek, A., & Drummond, T. (2018). ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs. Retrieved from http://arxiv.org/abs/1807.05705

Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations

Nekrasov, V., Dharmasiri, T., Spek, A., Drummond, T., Shen, C., & Reid, I. (2018). Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations. Retrieved from http://arxiv.org/abs/1809.04766

Traversing Latent Space using Decision Ferns

Zuo, Y., Avraham, G., & Drummond, T. (2018). Traversing Latent Space using Decision Ferns. Retrieved from http://arxiv.org/abs/1812.02636

Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation

Zhang, T., Lin, G., Cai, J., Shen, T., Shen, C., & Kot, A. C. (2018). Decoupled Spatial Neural Attention for Weakly Supervised Semantic Segmentation. Retrieved from http://arxiv.org/abs/1803.02563

Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., & Reid, I. (2018). Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction. Retrieved from http://arxiv.org/abs/1803.03893

An end-to-end TextSpotter with Explicit Alignment and Attention

He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., & Sun, C. (2018). An end-to-end TextSpotter with Explicit Alignment and Attention. Retrieved from https://arxiv.org/abs/1803.03474

Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks

Rezatofighi, S. H., Kaskman, R., Motlagh, F. T., Shi, Q., Cremers, D., Leal-Taixé, L., & Reid, I. (2018). Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks. Retrieved from https://arxiv.org/abs/1805.00613

Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells

Nekrasov, V., Chen, H., Shen, C., & Reid, I. (2018). Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells. Retrieved from https://arxiv.org/pdf/1810.10804.pdf

Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition

Zaffar, M., Ehsan, S., Milford, M., & Maier, K. M. (2018). Memorable Maps: A Framework for Re-defining Places in Visual Place Recognition. Retrieved from https://arxiv.org/pdf/1811.03529.pdf

Component-based Attention for Large-scale Trademark Retrieval

Tursun, O., Denman, S., Sivipalan, S., Sridharan, S., Fookes, C., & Mau, S. (2018). Component-based Attention for Large-scale Trademark Retrieval. Retrieved from http://arxiv.org/abs/1811.02746

Learning Free-Form Deformations for 3D Object Reconstruction

Jack, D., Pontes, J. K., Sridharan, S., Fookes, C., Shirazi, S., Maire, F., & Eriksson, A. (2018). Learning Free-Form Deformations for 3D Object Reconstruction. Retrieved from http://arxiv.org/abs/1803.10932

An Orientation Factor for Object-Oriented SLAM

Jablonsky, N., Milford, M., & Sünderhauf, N. (2018). An Orientation Factor for Object-Oriented SLAM. Retrieved from http://arxiv.org/abs/1809.06977

Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

Morrison, D., Corke, P., & Leitner, J. (2018). Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter. Retrieved from http://arxiv.org/abs/1809.08564

Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection

Miller, D., Dayoub, F., Milford, M., & Sünderhauf, N. (2018). Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection. Retrieved from http://arxiv.org/abs/1809.06006

An adaptive localization system for image storage and localization latency requirements

Mao, J., Hu, X., & Milford, M. (2018). An adaptive localization system for image storage and localization latency requirements. Robotics and Autonomous Systems, 107, 246–261. http://doi.org/10.1016/J.ROBOT.2018.06.007

A Binary Optimization Approach for Constrained K-Means Clustering

Le, H., Eriksson, A., Do, T.-T., & Milford, M. (2018). A Binary Optimization Approach for Constrained K-Means Clustering. Retrieved from http://arxiv.org/abs/1810.10134

Large scale visual place recognition with sub-linear storage growth

Le, H., & Milford, M. (2018). Large scale visual place recognition with sub-linear storage growth. Retrieved from http://arxiv.org/abs/1810.09660

A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes

Khaliq, A., Ehsan, S., Milford, M., & Mcdonald-Maier, K. (2018). A Holistic Visual Place Recognition Approach using Lightweight CNNs for Severe ViewPoint and Appearance Changes. Retrieved from https://www.mapillary.com/

Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration

Hausler, S., Jacobson, A., & Milford, M. (2018). Feature Map Filtering: Improving Visual Place Recognition with Convolutional Calibration. Retrieved from http://arxiv.org/abs/1810.12465

3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation

Lehnert, C., Tsai, D., Eriksson, A., & McCool, C. (2018). 3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation. Retrieved from http://arxiv.org/abs/1809.07896

Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition

Li, H., Wang, P., Shen, C., & Zhang, G. (2018). Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition. Retrieved from https://arxiv.org/pdf/1811.00751

Scalable Deep k-Subspace Clustering

Zhang, T., Ji, P., Harandi, M., Hartley, R., & Reid, I. (2018). Scalable Deep k-Subspace Clustering. Retrieved from https://arxiv.org/pdf/1811.01045

An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest

Ahn, H. S., Dayoub, F., Popovic, M., MacDonald, B., Siegwart, R., & Sa, I. (2018). An Overview of Perception Methods for Horticultural Robots: From Pollination to Harvest. Retrieved from http://arxiv.org/abs/1807.03124

Zero-shot Sim-to-Real Transfer with Modular Priors

Lee, R., Mou, S., Dasagi, V., Bruce, J., Leitner, J., & Sünderhauf, N. (2018). Zero-shot Sim-to-Real Transfer with Modular Priors. Retrieved from http://arxiv.org/abs/1809.07480

Quantifying the Reality Gap in Robotic Manipulation Tasks

Collins, J., Howard, D., & Leitner, J. (2018). Quantifying the Reality Gap in Robotic Manipulation Tasks. Retrieved from https://arxiv.org/abs/1811.01484

Coordinated Heterogeneous Distributed Perception based on Latent Space Representation

Korthals, T., Leitner, J., & Rückert, U. (2018). Coordinated Heterogeneous Distributed Perception based on Latent Space Representation. Retrieved from https://arxiv.org/abs/1809.04558

Model-free and learning-free grasping by Local Contact Moment matching

Marturi, N., Ortenzi, V., Rajasekaran, V., Adjigble, M., Corke, P., & Stolkin, R. (2018). Model-free and learning-free grasping by Local Contact Moment matching. Retrieved from https://www.researchgate.net/publication/327118653

On Encoding Temporal Evolution for Real-time Action Prediction

Rezazadegan, F., Shirazi, S., Baktashmotlagh, M., & Davis, L. S. (2018). On Encoding Temporal Evolution for Real-time Action Prediction. Retrieved from http://arxiv.org/abs/1709.07894

Action Anticipation By Predicting Future Dynamic Images

Rodriguez, C., Fernando, B., & Li, H. (2018). Action Anticipation By Predicting Future Dynamic Images. Retrieved from https://arxiv.org/pdf/1808.00141.pdf

Stereo Computation for a Single Mixture Image

Zhong, Y., Dai, Y., & Li, H. (2018). Stereo Computation for a Single Mixture Image. Retrieved from https://arxiv.org/pdf/1808.08690.pdf

VIENA²: A Driving Anticipation Dataset

Aliakbarian, M. S., Saleh, F. S., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2018). VIENA²: A Driving Anticipation Dataset. Retrieved from https://arxiv.org/pdf/1810.09044.pdf

Continuous-time Intensity Estimation Using Event Cameras

Scheerlinck, C., Barnes, N., & Mahony, R. (2018). Continuous-time Intensity Estimation Using Event Cameras. Retrieved from https://arxiv.org/pdf/1811.00386.pdf

Neural Algebra of Classifiers

Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2018). Neural Algebra of Classifiers. Retrieved from https://arxiv.org/pdf/1801.08676.pdf

Identity-preserving Face Recovery from Portraits

Shiri, F., Yu, X., Porikli, F., Hartley, R., & Koniusz, P. (2018). Identity-preserving Face Recovery from Portraits. Retrieved from https://arxiv.org/pdf/1801.02279.pdf

Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection

James, J., Ford, J. J., & Molloy, T. L. (2018). Quickest Detection of Intermittent Signals With Application to Vision Based Aircraft Detection. Retrieved from http://arxiv.org/abs/1804.09846

Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory

Suddrey, G., Jacobson, A., & Ward, B. (2018). Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory. Retrieved from http://arxiv.org/abs/1804.03288

Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

Morrison, D., Corke, P., & Leitner, J. (2018). Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. Retrieved from http://arxiv.org/abs/1804.05172

Training Deep Neural Networks for Visual Servoing

Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., & Corke, P. (2018). Training Deep Neural Networks for Visual Servoing. Retrieved from https://hal.inria.fr/hal-01716679/

Towards vision-based manipulation of plastic materials

Cherubini, A., Leitner, J., Ortenzi, V., & Corke, P. (2018). Towards vision-based manipulation of plastic materials. Retrieved from https://hal.archives-ouvertes.fr/hal-01731230

Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles

McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection using Aerial Vehicles. Retrieved from http://arxiv.org/abs/1804.02154

OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions

Talbot, B., Garg, S., & Milford, M. (2018). OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions. Retrieved from http://arxiv.org/abs/1804.02156

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

Garg, S., Suenderhauf, N., & Milford, M. (2018). LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Retrieved from http://arxiv.org/abs/1804.05526

Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

Garg, S., Suenderhauf, N., & Milford, M. (2018). Don’t Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition. Retrieved from http://arxiv.org/abs/1801.05078

Towards Semantic SLAM: Points, Planes and Objects

Hosseinzadeh, M., Latif, Y., Pham, T., Suenderhauf, N., & Reid, I. (2018). Towards Semantic SLAM: Points, Planes and Objects. Retrieved from http://arxiv.org/abs/1804.09111
