*Corke, P. I. (2017). Robotics, Vision and Control: Fundamental Algorithms in MATLAB (2nd ed.). Springer International Publishing.

*Adarve, J. D., & Mahony, R. (2017). Spherepix: A Data Structure for Spherical Image Processing. IEEE Robotics and Automation Letters, 2(2), 483–490. http://doi.org/10.1109/LRA.2016.26

Anderson, B. D. O., Shi, G., & Trumpf, J. (2017). Convergence and State Reconstruction of Time-Varying Multi-Agent Systems From Complete Observability Theory. IEEE Transactions on Automatic Control, 62(5), 2519–2523. http://doi.org/10.1109/TAC.2016.259

Andert, F., Ammann, N., Krause, S., Lorenz, S., Bratanov, D., & Mejias, L. (2017). Optical-Aided Aircraft Navigation using Decoupled Visual SLAM with Range Sensor Augmentation. Journal of Intelligent & Robotic Systems, 1–19. http://doi.org/10.1007/s10846-016-0457-6

Armin, M. A., Barnes, N., Alvarez, J., Li, H., Grimpen, F., & Salvado, O. (2017, September 14). Learning Camera Pose from Optical Colonoscopy Frames Through Deep Convolutional Neural Network (CNN). Springer. https://doi.org/10.1007/978-3-319-67543-5_5

Ball, D., Ross, P., English, A., Milani, P., Richards, D., Bate, A., Upcroft, B., Wyeth, G., & Corke, P. (2017). Farm Workers of the Future: Vision-Based Robotics for Broad-Acre Agriculture. IEEE Robotics & Automation Magazine, 1–1. http://doi.org/10.1109/MRA.2016.261

*Bangura, M., & Mahony, R. (2017). Thrust Control for Multirotor Aerial Vehicles. IEEE Transactions on Robotics, 33(2), 390–405. http://doi.org/10.1109/TRO.2016.263

Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., Dayoub, F., Lehnert, C., & Perez, T. (2017). Robot for weed species plant-specific management. Journal of Field Robotics, 34(6), 1179–1199. http://doi.org/10.1002/rob.21727

Bewley, A., & Upcroft, B. (2017). Background Appearance Modeling with Applications to Visual Object Detection in an Open-Pit Mine. Journal of Field Robotics, 34(1), 53–73. http://doi.org/10.1002/rob.21667

Bongiorno, D. L., Bryson, M., Bridge, T. C. L., Dansereau, D. G., & Williams, S. B. (2017). Coregistered Hyperspectral and Stereo Image Seafloor Mapping from an Autonomous Underwater Vehicle. Journal of Field Robotics. http://doi.org/10.1002/rob.21713

Bruce, J., Jacobson, A., & Milford, M. (2017). Look No Further: Adapting the Localization Sensory Window to the Temporal Characteristics of the Environment. IEEE Robotics and Automation Letters, 2(4), 2209–2216. http://doi.org/10.1109/LRA.2017.272

*Drory, A., Li, H., & Hartley, R. (2017). A learning-based markerless approach for full-body kinematics estimation in-natura from a single image. Journal of Biomechanics, 55, 1–10. http://doi.org/10.1016/j.jbiomech.2017

*Drory, A., Li, H., & Hartley, R. (2017). Estimating the projected frontal surface area of cyclists from images using a variational framework and statistical shape and appearance models. Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology. https://doi.org/10.1177/1754337117

*Drory, A., Zhu, G., Li, H., & Hartley, R. (2017). Automated detection and tracking of slalom paddlers from broadcast image sequences using cascade classifiers and discriminative correlation filters. Computer Vision and Image Understanding, 159(June 2017), 116–127. http://doi.org/10.1016/J.CVIU.2016.1

Fan, C., Chen, Z., Jacobson, A., Hu, X., & Milford, M. (2017). Biologically-inspired visual place recognition with adaptive multiple scales. Robotics and Autonomous Systems, 96, 224–237. http://doi.org/10.1016/J.ROBOT.2017

*Fernando, B., & Gould, S. (2017). Discriminatively Learned Hierarchical Rank Pooling Networks. International Journal of Computer Vision, 124(3), 335–355. http://doi.org/10.1007/s11263-017-1030-x

*Fernando, B., Gavves, E., Oramas M., J. O., Ghodrati, A., & Tuytelaars, T. (2017). Rank Pooling for Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4), 773–787. http://doi.org/10.1109/TPAMI.2016

Hinas, A., Roberts, J., & Gonzalez, F. (2017). Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System. Sensors, 17(12), 2929. http://doi.org/10.3390/s17122929

Hoang, T., Do, T.-T., Le Tan, D.-K., & Cheung, N.-M. (2017). Enhance Feature Discrimination for Unsupervised Hashing. Retrieved from http://arxiv.org/abs/1704.01754

*Ila, V., Polok, L., Solony, M., & Svoboda, P. (2017). SLAM++ -A highly efficient and temporally scalable incremental SLAM framework. The International Journal of Robotics Research, 36(2), 210–230. http://doi.org/10.1177/027836491

Irons, J. L., Gradden, T., Zhang, A., He, X., Barnes, N., Scott, A. F., & McKone, E. (2017). Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing. Vision Research, 137, 61–79. https://doi.org/10.1016/j.visres.2017

Jaiprakash, A., O’Callaghan, W. B., Whitehouse, S. L., Pandey, A., Wu, L., Roberts, J., & Crawford, R. W. (2017). Orthopaedic surgeon attitudes towards current limitations and the potential for robotic and technological innovation in arthroscopic surgery. Journal of Orthopaedic Surgery, 25(1), 230949901668499. http://doi.org/10.1177/2309499016

*James, J., Ford, J. J., & Molloy, T. L. (2017). Change Detection for Undermodelled Processes Using Mismatched Hidden Markov Model Test Filters. IEEE Control Systems Letters, 1(2), 238–243. http://doi.org/10.1109/LCSYS.2017

Kim, J., Cheng, J., Guivant, J., & Nieto, J. (2017). Compressed fusion of GNSS and inertial navigation with simultaneous localization and mapping. IEEE Aerospace and Electronic Systems Magazine, 32(8), 22–36. https://doi.org/10.1109/MAES.2017

*Kumar, S., Dai, Y., & Li, H. (2017). Spatio-temporal union of subspaces for multi-body non-rigid structure-from-motion. Pattern Recognition, 71, 428–443. http://doi.org/10.1016/J.PATCOG.2017

Lai, T., Wang, H., Yan, Y., Xiao, G., & Suter, D. (2017). Efficient guided hypothesis generation for multi-structure epipolar geometry estimation. Computer Vision and Image Understanding, 154, 152–165. http://doi.org/10.1016/J.CVIU.2016

Latif, Y., Huang, G., Leonard, J., & Neira, J. (2017). Sparse optimization for robust and efficient loop closing. Robotics and Autonomous Systems, 93, 13–26. http://doi.org/10.1016/J.ROBOT.2017

*Lehnert, C., English, A., McCool, C., Tow, A. W., & Perez, T. (2017). Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robotics and Automation Letters, 2(2), 872–879. http://doi.org/10.1109/LRA.2017

Li, Z., Wu, L., Ren, H., & Yu, H. (2017). Kinematic comparison of surgical tendon-driven manipulators and concentric tube manipulators. Mechanism and Machine Theory, 107, 148–165. http://doi.org/10.1016/j.mechmach

*Lin, G., Shen, C., van den Hengel, A., & Reid, I. (2017). Exploring Context with Deep Structured Models for Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. http://doi.org/10.1109/TPAMI.2017.2708714 *In Press

*Liu, F., Lin, G., Qiao, R., & Shen, C. (2017). Structured Learning of Tree Potentials in CRF for Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems, PP(99), 1–7. http://doi.org/10.1109/TNNLS.2017.2690453 *In Press

*Liu, L., Li, H., Dai, Y., & Pan, Q. (2017). Robust and Efficient Relative Pose With a Multi-Camera System for Autonomous Driving in Highly Dynamic Environments. IEEE Transactions on Intelligent Transportation Systems, 1–13. https://doi.org/10.1109/TITS.2017.2749409 *In Press

*Liu, L., Shen, C., & Hengel, A. van den. (2017). Cross-Convolutional-Layer Pooling for Image Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(11), 2305–2313. http://doi.org/10.1109/TPAMI.2016

Lu, H., Cao, Z., Xiao, Y., Zhuang, B., & Shen, C. (2017). TasselNet: counting maize tassels in the wild via local counts regression network. Plant Methods, 13(1), 79. http://doi.org/10.1186/s13007-017-0224-0

Ma, C., Yang, C.-Y., Yang, X., & Yang, M.-H. (2017). Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding, 158, 1–16. http://doi.org/10.1016/J.CVIU.2016

*Marmol, A., Peynot, T., Eriksson, A., Jaiprakash, A., Roberts, J., & Crawford, R. (2017). Evaluation of Keypoint Detectors and Descriptors in Arthroscopic Images for Feature-Based Matching Applications. IEEE Robotics and Automation Letters, 2(4), 2135–2142. http://doi.org/10.1109/LRA.2017

*McCool, C., Perez, T., & Upcroft, B. (2017). Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. IEEE Robotics and Automation Letters, 2(3), 1344–1351. http://doi.org/10.1109/LRA.2017

*McFadyen, A., Jabeur, M., & Corke, P. (2017). Image-Based Visual Servoing With Unknown Point Feature Correspondence. IEEE Robotics and Automation Letters, 2(2), 601–607. http://doi.org/10.1109/LRA.2016

Molloy, T. L., Ford, J. J., & Mejias, L. (2017). Detection of Aircraft below the Horizon for Vision-Based Detect and Avoid in Unmanned Aircraft Systems. Journal of Field Robotics. http://doi.org/10.1002/rob.21719

*Nascimento, J. C., & Carneiro, G. (2017). Deep Learning on Sparse Manifolds for Faster Object Segmentation. IEEE Transactions on Image Processing, 26(10), 4978–4990. http://doi.org/10.1109/TIP.2017

Nguyen, K., Fookes, C., Jillela, R., Sridharan, S., & Ross, A. (2017). Long Range Iris Recognition: A Survey. Pattern Recognition. http://doi.org/10.1016/j.patcog.2017

Oakden-Rayner, L., Carneiro, G., Bessen, T., Nascimento, J. C., Bradley, A. P., & Palmer, L. J. (2017). Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework. Scientific Reports, 7(1), 1648. http://doi.org/10.1038/s41598-017-01931-w

Parra Bustos, A., & Chin, T.-J. (2017). Guaranteed Outlier Removal for Point Cloud Registration with Correspondences. IEEE Transactions on Pattern Analysis and Machine Intelligence. http://doi.org/10.1109/TPAMI.2017.2773482 *In Press

Petoe, M. A., McCarthy, C. D., Shivdasani, M. N., Sinclair, N. C., Scott, A. F., Ayton, L. N., … Blamey, P. J. (2017). Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis. Investigative Opthalmology & Visual Science, 58(7), 3231. https://doi.org/10.1167/iovs.16-21041

Ren, C. Y., Prisacariu, V. A., Kähler, O., Reid, I. D., & Murray, D. W. (2017). Real-Time Tracking of Single and Multiple Objects from Depth-Colour Imagery Using 3D Signed Distance Functions. International Journal of Computer Vision, 124, 80–95. https://doi.org/10.1007/s11263-016-0978-2

*Sa, I., Lehnert, C., English, A., McCool, C., Dayoub, F., Upcroft, B., & Perez, T. (2017). Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information. IEEE Robotics and Automation Letters, 2(2), 765–772. http://doi.org/10.1109/LRA.2017

*Saleh, F., Aliakbarian, M. S., Salzmann, M., Petersson, L., Alvarez, J. M., & Gould, S. (2017). Incorporating Network Built-in Priors in Weakly-supervised Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(0), 1–1. http://doi.org/10.1109/TPAMI.2017.2713785 *In Press

Sandino, J., Wooler, A., & Gonzalez, F. (2017). Towards the Automatic Detection of Pre-Existing Termite Mounds through UAS and Hyperspectral Imagery. Sensors, 17(10), 2196. http://doi.org/10.3390/s17102196

Stacey, G., & Mahony, R. (2017). The Role of Symmetry in Rigidity Analysis: A Tool for Network Localisation and Formation Control. IEEE Transactions on Automatic Control. Retrieved from http://ieeexplore.ieee.org/document/8023875/ *In Press

Stronks, H. C., Walker, J., Parker, D. J., & Barnes, N. (2017). Training Improves Vibrotactile Spatial Acuity and Intensity Discrimination on the Lower Back Using Coin Motors. Artificial Organs. https://doi.org/10.1111/aor.12882

*Teney, D., Wu, Q., & van den Hengel, A. (2017). Visual Question Answering: A Tutorial. IEEE Signal Processing Magazine, 34(6), 63–75. https://doi.org/10.1109/MSP.2017

Trumpf, J., & Trentelman, H. L. (2017). A converse to the deterministic separation principle. Systems & Control Letters, 101, 2–9. http://doi.org/10.1016/j.sysconle.2016

*Tsai, D., Dansereau, D. G., Peynot, T., & Corke, P. (2017). Image-Based Visual Servoing With Light Field Cameras. IEEE Robotics and Automation Letters, 2(2), 912–919. http://doi.org/10.1109/LRA.2017

Villa, T. F., Jayaratne, E. R., Gonzalez, L. F., & Morawska, L. (2017). Determination of the vertical profile of particle number concentration adjacent to a motorway using an unmanned aerial vehicle. Environmental Pollution, 230, 134–142. http://doi.org/10.1016/j.envpol.2017

Wu, L., & Ren, H. (2017). Finding the Kinematic Base Frame of a Robot by Hand-Eye Calibration Using 3D Position Data. IEEE Transactions on Automation Science and Engineering, 14(1), 314–324. http://doi.org/10.1109/TASE.2016

*Zhang, Q., & Chin, T.-J. (2017). Coresets for Triangulation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. http://doi.org/10.1109/TPAMI.2017.2750672 *In Press

Zulqarnain Gilani, S., Mian, A., Shafait, F., & Reid, I. (2017). Dense 3D Face Correspondence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. http://doi.org/10.1109/TPAMI.2017.2725279 *In Press

(2017). Infinite Variational Autoencoder for Semi-Supervised Learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 781–790). Honolulu, USA: IEEE. http://doi.org/10.1109/CVPR.2017.90

Bista, S. R., Giordano, P. R., & Chaumette, F. (2017). Combining Line Segments and Points for Appearance- based Indoor Navigation by Image Based Visual Servoing. In IROS 2017 – IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2960–2967). Vancouver. Retrieved from https://hal.inria.fr/hal-01572353/

Bratanov, D., Mejias, L., & Ford, J. J. (2017). A vision-based sense-and-avoid system tested on a ScanEagle UAV. In 2017 International Conference on Unmanned Aircraft Systems (ICUAS). Retrieved from https://eprints.qut.edu.au/108459/

*Campbell, D., Petersson, L., Kneip, L., & Li, H. (2017). Globally-Optimal Inlier Set Maximisation for Simultaneous Camera Pose and Feature Correspondence. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 1–10). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.10

*Chen, Z., Jacobson, A., Sünderhauf, N., Upcroft, B., Liu, L., Shen, C., Reid, I., & Milford, M. (2017). Deep learning features at scale for visual place recognition. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3223–3230). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Cherian, A., Koniusz, P., & Gould, S. (2017). Higher-Order Pooling of CNN Features via Kernel Linearization for Action Recognition. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 130–138). Santa Rosa, CA: IEEE. http://doi.org/10.1109/WACV.2017.22

*Cherian, A., Fernando, B., Harandi, M., & Gould, S. (2017). Generalized Rank Pooling for Activity Recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1581–1590). Honolulu, HI, USA: IEEE. http://doi.org/10.1109/CVPR.2017.172

Chirikjian, G. S., Mahony, R., Ruan, S., & Trumpf, J. (2017). Pose Changes From a Different Point of View. In ASME 2017 41st Mechanisms and Robotics Conference. Cleveland, Ohio: ASME. http://doi.org/10.1115/DETC2017-67725

*Dharmasiri, T., Spek, A., & Drummond, T. (2017). Joint prediction of depths, normals and surface curvature from RGB images using CNNs. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1505–1512). Vancouver, Canada: IEEE. http://doi.org/10.1109/IROS.2017

Fernando, T., Denman, S., Sridharan, S., & Fookes, C. (2017). Going deeper: Autonomous steering with neural memory networks. In IEEE Conference on Computer Vision and Pattern Recognition. Hawaii. Retrieved from https://eprints.qut.edu.au/114117/

*Garg, S., Jacobson, A., Kumar, S., & Milford, M. (2017). Improving Condition- and Environment-Invariant Place Recognition with Semantic Place Categorization. Retrieved from http://arxiv.org/abs/1706.07144

Gong, D., Tan, M., Zhang, Y., Hengel, A. van den, & Shi, Q. (2017). Self-Paced Kernel Estimation for Robust Blind Image Deblurring. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 1670–1679). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.184

Gonzalez, F., & Johnson, S. (2017). Standard operating procedures for UAV or drone based monitoring of wildlife. In Proceedings of Unmanned Aircraft Systems for Remote Sensing (UAS4RS) 2017. Hobart, Tasmania. Retrieved from https://eprints.qut.edu.au/108859/

Hall, D., Dayoub, F., Perez, T., & McCool, C. (2017). A Transplantable System for Weed Classification by Agricultural Robotics. Retrieved from http://www.ferasdayoub.com/wp-content/uploads/2014/12/IROS17

*Harandi, M., Salzmann, M., & Hartley, R. (2017). Joint Dimensionality Reduction and Metric Learning: A Geometric Take. In Proceedings of the 34th International Conference on Machine Learning (ICML). Sydney, Australia.

*Harwood, B., G, V. K. B., Carneiro, G., Reid, I., & Drummond, T. (2017). Smart Mining for Deep Metric Learning. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 2840–2848). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.307

*Henein, M., Abello, M., Ila, V., & Mahony, R. (2017). Exploring The Effect of Meta-Structural Information on the Global Consistency of SLAM. Retrieved from http://viorelaila.net/wp-content/uploads/2016/06/Henein17

*Hua, M.-D., Hamel, T., Mahony, R., & Allibert, G. (2017). Explicit Complementary Observer Design on Special Linear Group SL(3) for Homography Estimation using Conic Correspondences. In 56th IEEE Conference on Decision and Control (CDC). Melbourne, Australia. Retrieved from https://hal.archives-ouvertes.fr/hal-01628177/

*Hua, M.-D., Hamel, T., Mahony, R., & Allibert, G. (2017). SL(3) for Homography Estimation using Conic Correspondences. Retrieved from https://hal.archives-ouvertes.fr/hal-01628177/document

*Hua, M.-D., Trumpf, J., Hamel, T., Mahony, R., & Morin, P. (2017). Point and line feature-based observer design on SL(3) for Homography estimation and its application to image stabilization. In International Conference on Robotics and Automation (ICRA). Singapore. Retrieved from https://hal.archives-ouvertes.fr/hal-01628175/

*Ila, V., Polok, L., Solony, M., & Istenic, K. (2017). Fast Incremental Bundle Adjustment with Covariance Recovery. In International Conference on 3D Vision (3DV).

*Istenic, K., Ila, V., Polok, L., Gracias, N., & Garcia, R. (2017). Mission-time 3D reconstruction with quality estimation. In OCEANS 2017 – Aberdeen (pp. 1–9). Aberdeen, UK: IEEE. http://doi.org/10.1109/OCEANSE.2017

*James, J., Ford, J. J., & Molloy, T. L. (2017). Quickest detection of intermittent signals with estimated anomaly times. In 2017 Asian Control Conference – ASCC 2017. Gold Coast. Retrieved from https://eprints.qut.edu.au/112160/

*Ji, P., Li, H., Dai, Y., & Reid, I. (2017). “Maximizing Rigidity” Revisited: A Convex Programming Approach for Generic 3D Shape Reconstruction from Multiple Perspective Views. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 929–937). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.106

*Ji, P., Zhang, T., Li, H., Salzmann, M., & Reid, I. (2017). Deep Subspace Clustering Networks. In 31st Conference on Neural Information Processing Systems (NIPS 2017). Long Beach, CA, USA. Retrieved from http://papers.nips.cc/paper/6608-deep-subspace-clustering-networks.pdf

*Johnston, A., Garg, R., Carneiro, G., Reid, I., & van den Hengel, A. (2017). Scaling CNNs for High Resolution Volumetric Reconstruction From a Single Image. In IEEE International Conference of Computer Vision (ICCV) (pp. 939–948). Venice. Retrieved from http://openaccess.thecvf.com/content

*Khosravian, A., Chin, T.-J., Reid, I., & Mahony, R. (2017). A discrete-time attitude observer on SO(3) for vision and GPS fusion. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5688–5695). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Kiani, K. A., & Drummond, T. (2017). Solving Robust Regularization Problems Using Iteratively Re-weighted Least Squares. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 483–492). Santa Rosa, CA: IEEE. http://doi.org/10.1109/WACV.2017.60

*Kim, J.-H., Latif, Y., & Reid, I. (2017). RRD-SLAM: Radial-distorted rolling-shutter direct SLAM. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5148–5154). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Kumar, S., Dai, Y., & Li, H. (2017). Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 4659–4667). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.498

*Le, H., Chin, T.-J., & Suter, D. (2017). An Exact Penalty Method for Locally Convergent Maximum Consensus. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 379–387). Honolulu, USA: IEEE. http://doi.org/10.1109/CVPR.2017.48

*Le, H., Chin, T.-J., & Suter, D. (2017). RATSAC – Random Tree Sampling for Maximum Consensus Estimation. In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA) (pp. 1–8). Sydney, Australia: IEEE. http://doi.org/10.1109/DICTA.2017

*Leitner, J., Tow, A. W., Sünderhauf, N., Dean, J. E., Durham, J. W., Cooper, M., Eich, M., Lehnert, C., Mangels, R., McCool, C., Kujala, P. T., Nicholson, L., Pham, T., Sergeant, J., Wu, L., Zhang, F., Upcroft, B., & Corke, P. (2017). The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 4705–4712). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Li, H., Wang, P., & Shen, C. (2017). Towards End-to-End Text Spotting with Convolutional Recurrent Neural Networks. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 5248–5256). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.560

*Liu, L., Li, H., & Dai, Y. (2017). Efficient Global 2D-3D Matching for Camera Localization in a Large-Scale 3D Map. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 2391–2400). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.260

Lu, H., Zhang, L., Cao, Z., Wei, W., Xian, K., Shen, C., & Hengel, A. van den. (2017). When Unsupervised Domain Adaptation Meets Tensor Representations. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 599–608). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.72

*Maicas, G., Carneiro, G., & Bradley, A. P. (2017). Globally optimal breast mass segmentation from DCE-MRI using deep semantic segmentation as shape prior. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) (pp. 305–309). Melbourne, Australia: IEEE. http://doi.org/10.1109/ISBI.2017.795

*Menikdiwela, M., Li, H., Nguyen, C., & Shaw, S. (2017). CNN-based small object detection and visualization with feature activation mapping. In International Conference on Image and Vision Computing New Zealand (IVCNZ 2017). Canterbury, New Zealand.

*Meyer, B. J., & Drummond, T. (2017). Improved semantic segmentation for robotic applications with hierarchical conditional random fields. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5258–5265). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017.7989617

*Nguyen, C. V., Milford, M., & Mahony, R. (2017). 3D tracking of water hazards with polarized stereo cameras. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 5251–5257). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Rezatofighi, S. H., G, V. K. B., Milan, A., Abbasnejad, E., Dick, A., & Reid, I. (2017). DeepSetNet: Predicting Sets with Deep Neural Networks. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 5257–5266). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.561

*Rezazadegan, F., Shirazi, S., Upcroft, B., & Milford, M. (2017). Action recognition: From static datasets to moving robots. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3185–3191). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Sadegh Aliakbarian, M., Sadat Saleh, F., Salzmann, M., Fernando, B., Petersson, L., & Andersson, L. (2017). Encouraging LSTMs to Anticipate Actions Very Early. In The IEEE International Conference on Computer Vision (ICCV), 2017 (pp. 280–289). Venice, Italy. Retrieved from http://openaccess.thecvf.com/content

*Song, Y., Ma, C., Gong, L., Zhang, J., Lau, R. W. H., & Yang, M.-H. (2017). CREST: Convolutional Residual Learning for Visual Tracking. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 2574–2583). Venice, Italy: IEEE. http://doi.org/10.1109/ICCV.2017.279

*Spek, A., & Drummond, T. (2017). A compact parametric solution to depth sensor calibration. In 28th British Machine Vision Conference (BMVC). London. Retrieved from https://bmvc2017.london/proceedings/

*Spek, A., & Drummond, T. (2017). Joint pose and principal curvature refinement using quadrics. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3968–3975). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Sünderhauf, N., Pham, T. T., Latif, Y., Milford, M., & Reid, I. (2017). Meaningful maps with object-oriented semantic mapping. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5079–5085). Vancouver, BC, Canada: IEEE. http://doi.org/10.1109/IROS.2017

*Wang, J., Cherian, A., & Porikli, F. (2017). Ordered Pooling of Optical Flow Sequences for Action Recognition. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 168–176). Santa Rosa, CA: IEEE. http://doi.org/10.1109/WACV.2017.26

*Wang, P., Liu, L., Shen, C., Huang, Z., Hengel, A. van den, & Shen, H. T. (2017). Multi-attention Network for One Shot Learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6212–6220). Honolulu, HI, USA: IEEE. http://doi.org/10.1109/CVPR.2017.658

*Weberruss, J., Kleeman, L., Boland, D., & Drummond, T. (2017). FPGA acceleration of multilevel ORB feature extraction for computer vision. In 2017 27th International Conference on Field Programmable Logic and Applications (FPL) (pp. 1–8). Ghent, Belgium: IEEE. http://doi.org/10.23919/FPL.2017

*Weerasekera, C. S., Latif, Y., Garg, R., & Reid, I. (2017). Dense monocular reconstruction using surface normals. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2524–2531). IEEE. https://doi.org/10.1109/ICRA.2017

*Xiao, Z., Li, H., Zhou, D., Dai, Y., & Dai, B. (2017). Accurate extrinsic calibration between monocular camera and sparse 3D Lidar points without markers. In 2017 IEEE Intelligent Vehicles Symposium (IV) (pp. 424–429). Los Angeles, CA, USA: IEEE. http://doi.org/10.1109/IVS.2017

*Yang, J., Ren, P., Zhang, D., Chen, D., Wen, F., Li, H., & Hua, G. (2017). Neural Aggregation Network for Video Face Recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 5216–5225). Honolulu, HI, USA: IEEE. http://doi.org/10.1109/CVPR.2017.554

*Zhou, Y., Kneip, L., & Li, H. (2017). Semi-dense visual odometry for RGB-D cameras using approximate nearest neighbour fields. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 6261–6268). Singapore: IEEE. http://doi.org/10.1109/ICRA.2017

*Zhuang, B., Liu, L., Shen, C., & Reid, I. (2017). Towards Context-Aware Interaction Recognition for Visual Relationship Detection. In The IEEE International Conference on Computer Vision (ICCV) (pp. 589–598). Venice, Italy. Retrieved from http://openaccess.thecvf.com/content_

*Zuo, Y., & Drummond, T. (2017). Fast Residual Forests: Rapid Ensemble Learning for Semantic Segmentation. In Proceedings of the 1st Annual Conference on Robot Learning, in PMLR 78 (pp. 27–36). Retrieved from http://proceedings.mlr.press/v78/

*Abbasnejad, M. E., Shi, Q., Abbasnejad, I., Hengel, A. van den, & Dick, A. (2017). Bayesian Conditional Generative Adverserial Networks. Retrieved from http://arxiv.org/abs/1706.05477

*Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2017). Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. Retrieved from http://arxiv.org/abs/1707.07998

*Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., & van den Hengel, A. (2017). Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments. Retrieved from http://arxiv.org/abs/1711.07280

*Bruce, J., Sünderhauf, N., Mirowski, P., Hadsell, R., & Milford, M. (2017). One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay. Retrieved from http://arxiv.org/abs/1711.10137

Chen, Y., Shen, C., Wei, X.-S., Liu, L., & Yang, J. (2017). Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization. Retrieved from http://arxiv.org/abs/1711.00253

Chen, Y., Tai, Y., Liu, X., Shen, C., & Yang, J. (2017). FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors. Retrieved from http://arxiv.org/abs/1711.10703

*Chen, Z., Jacobson, A., Sünderhauf, N., Upcroft, B., Liu, L., Shen, C., Reid, I., Milford, M. (2017). Deep Learning Features at Scale for Visual Place Recognition. Retrieved from http://arxiv.org/abs/1701.05105

*Cherian, A., & Gould, S. (2017). Second-order Temporal Pooling for Action Recognition. Retrieved from http://arxiv.org/abs/1704.06925

*Cherian, A., Sra, S., & Hartley, R. (2017). Sequence Summarization Using Order-constrained Kernelized Feature Subspaces. Retrieved from https://arxiv.org/pdf/1705.08583.pdf

Cherian, A., Stanitsas, P., Harandi, M., Morellas, V., & Papanikolopoulos, N. (2017). Learning Discriminative αβ-divergence for Positive Definite Matrices (Extended Version). Retrieved from https://arxiv.org/pdf/1708.01741.pdf

*Cruz, R. S., Fernando, B., Cherian, A., & Gould, S. (2017). DeepPermNet: Visual Permutation Learning. Retrieved from https://arxiv.org/abs/1704.02729

*Dayoub, F., Sünderhauf, N., & Corke, P. (2017). Episode-Based Active Learning with Bayesian Neural Networks. Retrieved from http://arxiv.org/abs/1703.07473

*Deng, R., Zhao, T., Shen, C., & Liu, S. (2017). Relative Depth Order Estimation Using Multi-scale Densely Connected Convolutional Networks. Retrieved from http://arxiv.org/abs/1707.08063

*Do, T.-T., Nguyen, A., Reid, I., Caldwell, D. G., & Tsagarakis, N. G. (2017). AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection. Retrieved from http://arxiv.org/abs/1709.07326

Do, T.-T., Le Tan, D.-K., Hoang, T., & Cheung, N.-M. (2017). Compact Hash Code Learning with Binary Deep Neural Network. Retrieved from http://arxiv.org/abs/1712.02956

Do, T.-T., Le Tan, D.-K., Pham, T. T., & Cheung, N.-M. (2017). Simultaneous Feature Aggregating and Hashing for Large-scale Image Search. Retrieved from http://arxiv.org/abs/1704.00860

*Eriksson, A., Olsson, C., Kahl, F., & Chin, T.-J. (2017). Rotation Averaging and Strong Duality. Retrieved from https://arxiv.org/abs/1705.01362

*Gale, W., Carneiro, G., Oakden-Rayner, L., Palmer, L. J., & Bradley, A. P. (2017). Detecting hip fractures with radiologist-level performance using deep neural networks. Retrieved from https://arxiv.org/

Guo, G., Wang, H., Shen, C., Yan, Y., & Liao, H.-Y. M. (2017). Automatic Image Cropping for Visual Aesthetic Enhancement Using Deep Neural Networks and Cascaded Regression. Retrieved from https://arxiv.org/abs/1712.09048

Hall, D., Dayoub, F., Kulk, J., & McCool, C. (2017). Towards Unsupervised Weed Scouting for Agricultural Robotics. Retrieved from http://arxiv.org/abs/1702.01247

*Han, T., Wang, J., Cherian, A., & Gould, S. (2017). Human Action Forecasting by Learning Task Grammars. Retrieved from https://arxiv.org/pdf/1709.06391.pdf

Jacobson, A., Scheirer, W., & Milford, M. (2017). Deja vu: Scalable Place Recognition Using Mutually Supportive Feature Frequencies. Retrieved from http://arxiv.org/abs/1707.06393

*Ji, P., Reid, I., Garg, R., Li, H., & Salzmann, M. (2017). Non-Linear Subspace Clustering with Learned Low-Rank Kernels. Retrieved from http://arxiv.org/abs/1707.04974

*Khosravian, A., Chin, T.-J., & Reid, I. (2017). A Branch-and-Bound Algorithm for Checkerboard Extraction in Camera-Laser Calibration. Retrieved from http://arxiv.org/abs/1704.00887

*Latif, Y., Garg, R., Milford, M., & Reid, I. (2017). Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks. Retrieved from http://arxiv.org/abs/1709.08810

*Leal-Taixé, L., Milan, A., Schindler, K., Cremers, D., Reid, I., & Roth, S. (2017). Tracking the Trackers: An Analysis of the State of the Art in Multiple Object Tracking. Retrieved from http://arxiv.org/abs/1704.02781

*Li, H., Wang, P., & Shen, C. (2017). Towards End-to-End Car License Plates Detection and Recognition with Deep Neural Networks. Retrieved from http://arxiv.org/abs/1709.08828

Liu, W., Chen, X., Shen, C., Yu, J., Wu, Q., & Yang, J. (2017). Robust Guided Image Filtering. Retrieved from https://arxiv.org/pdf/1703.09379.pdf

Ma, C., Huang, J.-B., Yang, X., & Yang, M.-H. (2017). Adaptive Correlation Filters with Long-Term and Short-Term Memory for Object Tracking. Retrieved from http://arxiv.org/abs/1707.02309

Ma, C., Huang, J.-B., Yang, X., & Yang, M.-H. (2017). Robust Visual Tracking via Hierarchical Convolutional Features. Retrieved from https://arxiv.org/pdf/1707.03816.pdf

*Ma, C., Shen, C., Dick, A., & Van Den Hengel, A. (2017). Visual Question Answering with Memory-Augmented Networks. Retrieved from https://arxiv.org/pdf/1707.04968.pdf

*McTaggart, M., Morrison, D., Tow, A. W., Smith, R., Kelly-Boxall, N., Milan, A., Pham, T., Zhuang, Z., Leitner, J., Reid, I., Corke, P., and Lehnert, C. (2017). Cartman: Cartesian Manipulator for Warehouse Automation in Cluttered Environments. Retrieved from http://arxiv.org/abs/1710.00967

*Meyer, B. J., Harwood, B., & Drummond, T. (2017). Nearest Neighbour Radial Basis Function Solvers for Deep Neural Networks. Retrieved from http://arxiv.org/abs/1705.09780

*Milan, A., Pham, T., Kumar, V., Morrison, D., Tow, A. W., Liu, L., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Kelly-Boxall, N., Lee, D., McTaggart, M., Rallos, G., Razjigaev, A., Rowntree, T., Shen, T., Smith, R., Wade-McCue, S., Zhuang, Z., Lehnert, C., Lin, G., Reid, I., Corke, P., and Leitner, J. (2017). Semantic Segmentation from Limited Training Data. Retrieved from https://arxiv.org/abs/1709.07665

*Miller, D., Nicholson, L., Dayoub, F., & Sünderhauf, N. (2017). Dropout Sampling for Robust Object Detection in Open-Set Conditions. Retrieved from http://arxiv.org/abs/1710.06677

*Morrison, D., Tow, A. W., McTaggart, M., Smith, R., Kelly-Boxall, N., Wade-McCue, S., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., and Leitner, J. (2017). Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics Challenge. Retrieved from https://arxiv.org/abs/1709.06283

*Nguyen, A., Do, T.-T., Caldwell, D. G., & Tsagarakis, N. G. (2017). Real-Time Pose Estimation for Event Cameras with Stacked Spatial LSTM Networks. Retrieved from https://arxiv.org/pdf/1708.09011.pdf

Nguyen, H. Van, Chesser, M., Rezatofighi, S. H., & Ranasinghe, D. C. (2017). Real-Time Localization and Tracking of Multiple Radio-Tagged Animals with an Autonomous Aerial Vehicle System. Retrieved from https://arxiv.org/pdf/1712.01491.pdf

*Pan, L., Dai, Y., Liu, M., & Porikli, F. (2017). Depth Map Completion by Jointly Exploiting Blurry Color Images and Sparse Depth Maps. Retrieved from http://arxiv.org/abs/1711.09501

*Pham, T. T., Do, T.-T., Sünderhauf, N., & Reid, I. (2017). SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes. Retrieved from https://arxiv.org/pdf/1709.07158.pdf

*Qiao, R., Liu, L., Shen, C., & Van Den Hengel, A. (2017). Visually Aligned Word Embeddings for Improving Zero-shot Learning. Retrieved from https://arxiv.org/pdf/1707.05427.pdf

*Rezatofighi, S. H., Milan, A., Shi, Q., Dick, A., & Reid, I. (2017). Joint Learning of Set Cardinality and State Distribution. Retrieved from https://arxiv.org/pdf/1709.04093.pdf

*Rezazadegan, F., Shirazi, S., & Davis, L. S. (2017). A Real-time Action Prediction Framework by Encoding Temporal Evolution. Retrieved from https://arxiv.org/abs/1709.07894

*Saha, S. K., Fernando, B., Cuadros, J., Xiao, D., & Kanagasingam, Y. (2017). Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening. Retrieved from http://arxiv.org/abs/1703.02511

*Shen, T., Lin, G., Liu, L., Shen, C., & Reid, I. (2017). Weakly Supervised Semantic Segmentation Based on Co-segmentation. Retrieved from http://arxiv.org/abs/1705.09052

*Shen, T., Lin, G., Shen, C., & Reid, I. (2017). Learning Multi-level Region Consistency with Dense Multi-label Networks for Semantic Segmentation. Retrieved from http://arxiv.org/abs/1701.07122

*Shigematsu, R., Feng, D., You, S., & Barnes, N. (2017). Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features. Retrieved from https://arxiv.org/pdf/1705.03607.pdf

*Spek, A., Li, W. H., & Drummond, T. (2017). A Fast Method for Computing Principal Curvatures from Range Images. Retrieved from http://arxiv.org/abs/1707.00385

*Sünderhauf, N., & Milford, M. (2017). Dual Quadrics from Object Detection Bounding Boxes as Landmark Representations in SLAM. Retrieved from http://arxiv.org/abs/1708.00965

Tan, D.-K. Le, Do, T.-T., & Cheung, N.-M. (2017). Supervised Hashing with End-to-End Binary Deep Neural Network. Retrieved from http://arxiv.org/abs/1711.08901

*Teney, D., Anderson, P., He, X., & Van Den Hengel, A. (2017). Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge. Retrieved from http://arxiv.org/abs/1708.02711

*Tow, A., Sünderhauf, N., Shirazi, S., Milford, M., & Leitner, J. (2017). What Would You Do? Acting by Learning to Predict. Retrieved from http://arxiv.org/abs/1703.02658

*Toyer, S., Cherian, A., Han, T., & Gould, S. (2017). Human Pose Forecasting via Deep Markov Models. Retrieved from https://arxiv.org/pdf/1707.09240.pdf

*Wade-McCue, S., Kelly-Boxall, N., McTaggart, M., Morrison, D., Tow, A. W., Erskine, J., Grinover, R., Gurman, A., Hunn, T., Lee, D., Milan, A., Pham, T., Rallos, G., Razjigaev, A., Rowntree, T., Smith, R., Kumar, V., Zhuang, Z., Lehnert, C., Reid, I., Corke, P., and Leitner, J. (2017). Design of a Multi-Modal End-Effector and Grasping System: How Integrated Design helped win the Amazon Robotics Challenge. Retrieved from http://arxiv.org/abs/1710.01439

*Wang, J., Cherian, A., Porikli, F., & Gould, S. (2017). Action Representation Using Classifier Decision Boundaries. Retrieved from http://arxiv.org/abs/1704.01716

*Wang, X., Sekercioglu, Y. A., Drummond, T., Fremont, V., Natalizio, E., & Fantoni, I. (2017). Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks. Retrieved from http://arxiv.org/abs/1707.05978

*Wang, X., You, M., & Shen, C. (2017). Adversarial Generation of Training Examples for Vehicle License Plate Recognition. Retrieved from http://arxiv.org/abs/1707.03124

Wei, X.-S., Zhang, C.-L., Li, Y., Xie, C.-W., Wu, J., Shen, C., & Zhou, Z.-H. (2017). Deep Descriptor Transforming for Image Co-Localization. Retrieved from http://arxiv.org/abs/1705.02758

Wei, X.-S., Zhang, C.-L., Wu, J., Shen, C., & Zhou, Z.-H. (2017). Unsupervised Object Discovery and Co-Localization by Deep Descriptor Transforming. Retrieved from https://arxiv.org/pdf/1707.06397.pdf

*Wu, Q., Wang, P., Shen, C., Reid, I., & Van Den Hengel, A. (2017). Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning. Retrieved from https://arxiv.org/pdf/1711.07613.pdf

*Wu, Z., Shen, C., & Van Den Hengel, A. (2017). Real-time Semantic Image Segmentation via Spatial Sparsity. Retrieved from https://arxiv.org/pdf/1712.00213.pdf

Yu, L., Jacobson, A., & Milford, M. (2017). Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sub-Linear Storage Cost. Retrieved from http://arxiv.org/abs/1712.07315

*Zhang, F., Leitner, J., Milford, M., & Corke, P. (2017). Sim-to-real Transfer of Visuo-motor Policies for Reaching in Clutter: Domain Randomization and Adaptation with Modular Networks. Retrieved from https://arxiv.org/abs/1709.05746

*Zhang, F., Leitner, J., Milford, M., & Corke, P. I. (2017). Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination. Retrieved from http://arxiv.org/abs/1705.05116

*Zhang, J., Dai, Y., Porikli, F., & He, M. (2017). Multi-Scale Salient Object Detection with Pyramid Spatial Pooling. APSIPA ASC 2017.

*Zhang, J., Wu, Q., Shen, C., Zhang, J., Lu, J., & Van Den Hengel, A. (2017). Asking the Difficult Questions: Goal-Oriented Visual Question Generation via Intermediate Rewards. Retrieved from https://arxiv.org/pdf/1711.07614.pdf

Zhang, L., Wei, W., Shi, Q., Shen, C., Van Den Hengel, A., & Zhang, Y. (2017). Beyond Low Rank: A Data-Adaptive Tensor Completion Method. Retrieved from http://arxiv.org/abs/1708.01008

*Zhong, Y., Dai, Y., & Li, H. (2017). Self-Supervised Learning for Stereo Matching with Self-Improving Ability. Retrieved from https://arxiv.org/pdf/1709.00930.pdf

*Zhuang, B., Wu, Q., Shen, C., Reid, I., & Van Den Hengel, A. (2017). Care about you: towards large-scale human-centric visual relationship detection. Retrieved from http://arxiv.org/abs/1705.0989

* denotes Core Centre Research Output


3DV – International Conference on 3D Vision

ACRA – Australasian Conference on Robotics and Automation (run by the Australian Robotics and Automation Association)

AI – Associate Investigator

ANU – Australian National University

APRS – Australian Pattern Recognition Society

ARAA – Australian Robotics and Automation Association

ARC – Australian Research Council

CAC – Centre Advisory Committee

CEC – Centre Executive Committee

CI – Chief Investigator

COTSbot – Crown Of Thorns Starfish robot

CVPR – IEEE Conference on Computer Vision and Pattern Recognition

DICTA – International Conference on Digital Image Computing: Techniques and Applications (premier conference of the Australian Pattern Recognition Society)

DPhil – Doctor of Philosophy (Oxford abbreviation)

DST – Defence Science and Technology

EOI – Expression of Interest

EUAB – End-User Advisory Board

FPGA – Field-Programmable Gate Array

IBVS – Image-Based Visual Servo

ICIP – IEEE International Conference on Image Processing

ICRA – IEEE International Conference on Robotics and Automation

ICT – Information and Communications Technology

IEEE – Institute of Electrical and Electronics Engineers

IET – Institution of Engineering and Technology

IROS – International Conference on Intelligent Robots and Systems

ISWC – International Semantic Web Conference

KPIs – Key Performance Indicators

MOOC – Massive Open Online Course

MVG – Multi-View Geometry

NRP – National Research Priority

PhD – Doctor of Philosophy (international abbreviation)

PI – Partner Investigator

QUT – Queensland University of Technology

RHD – Research Higher Degree

RF – Research Fellow

SLAM – Simultaneous Localisation and Mapping

SRPs – Science and Research Priorities

VOS – Vision Operating System

Key Terms

algorithm  //  A procedure or formula for solving a problem, typically implemented by computer software. For example, there are algorithms to help robots determine their location in the world, to navigate safely, to process images or recognise objects.

artificial intelligence  //  Intelligent behaviour demonstrated in machines.

autonomous  //  Operating without human intervention.

Bayesian (Bayes) nets (networks)  //  Graphical representations for probabilistic relationships among a set of random variables.

computer vision  //  Methods for acquiring, processing, analysing and understanding images using a computer.

deep learning  //  A method of machine learning that uses neural networks with many layers to form representations of data, learned from large amounts of training data.

haptic controls  //  Controls that incorporate tactile sensors to measure the forces exerted by the user, mimicking human actions to control a machine.

homography  //  The relationship between any two images of the same planar surface in space.

machine learning  //  A type of artificial intelligence that gives computers the ability to learn from large amounts of training data without being explicitly programmed.

neural network  //  A computer system very loosely modelled on neurons and synaptic connections found in biological brains.

semantics  //  Automatically applying meaningful human terms like ‘kitchen’ or ‘coffee cup’ to places or objects in the robotic vision system’s environment. Semantics are important to help robots understand their environment by recognising different features and labelling or classifying them.

servo  //  A system that uses negative feedback to automatically correct its error.

SLAM (Simultaneous Localisation and Mapping)  //  A robotics algorithm that allows a robot to determine its position in an environment while at the same time constructing a map of its surrounds.

support vector machine (SVM)  //  Classifies data by finding the best hyperplane that separates all data points of one class from those of another class.
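The SVM entry above describes classification as finding the best separating hyperplane. Purely as an illustrative sketch (not drawn from any Centre output), a minimal linear SVM can be trained by sub-gradient descent on the hinge loss; all names and parameters below are invented for the example:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Fit a hyperplane w.x + b = 0 separating labels y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:
                # Point violates the margin: move the hyperplane toward it,
                # with weight decay lam encouraging a wide margin.
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w += lr * (-lam * w)
    return w, b

# Two well-separated synthetic clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
print((preds == y).mean())  # fraction of points on the correct side
```

On clearly separable data like this, the learned hyperplane classifies essentially all points correctly; real SVM libraries additionally solve the margin-maximisation problem exactly and support kernels for non-linear boundaries.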