Publications

2017 Journal Articles [33]

Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing

*Irons, J. L., Gradden, T., Zhang, A., He, X., Barnes, N., Scott, A. F., & McKone, E. (2017). Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing. Vision Research, 137, 61–79. https://doi.org/10.1016/j.visres.2017.06.002

Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis

*Petoe, M. A., McCarthy, C. D., Shivdasani, M. N., Sinclair, N. C., Scott, A. F., Ayton, L. N., … Blamey, P. J. (2017). Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis. Investigative Ophthalmology & Visual Science, 58(7), 3231. https://doi.org/10.1167/iovs.16-21041

Training Improves Vibrotactile Spatial Acuity and Intensity Discrimination on the Lower Back Using Coin Motors

*Stronks, H. C., Walker, J., Parker, D. J., & Barnes, N. (2017). Training Improves Vibrotactile Spatial Acuity and Intensity Discrimination on the Lower Back Using Coin Motors. Artificial Organs. https://doi.org/10.1111/aor.12882

SLAM++ - A highly efficient and temporally scalable incremental SLAM framework

Ila, V., Polok, L., Solony, M., & Svoboda, P. (2017). SLAM++ - A highly efficient and temporally scalable incremental SLAM framework. The International Journal of Robotics Research, 36(2), 210–230. http://doi.org/10.1177/0278364917691110

A learning-based markerless approach for full-body kinematics estimation in-natura from a single image

Drory, A., Li, H., & Hartley, R. (2017). A learning-based markerless approach for full-body kinematics estimation in-natura from a single image. Journal of Biomechanics, 55, 1–10. http://doi.org/10.1016/j.jbiomech.2017.01.028

Convergence and State Reconstruction of Time-Varying Multi-Agent Systems From Complete Observability Theory

*Anderson, B. D. O., Shi, G., & Trumpf, J. (2017). Convergence and State Reconstruction of Time-Varying Multi-Agent Systems From Complete Observability Theory. IEEE Transactions on Automatic Control, 62(5), 2519–2523. http://doi.org/10.1109/TAC.2016.2599274

A converse to the deterministic separation principle

*Trumpf, J., & Trentelman, H. L. (2017). A converse to the deterministic separation principle. Systems & Control Letters, 101, 2–9. http://doi.org/10.1016/j.sysconle.2016.02.021

An Analytic Approach to Converting POE Parameters Into D–H Parameters for Serial-Link Robots

Wu, L., Crawford, R., & Roberts, J. (2017). An Analytic Approach to Converting POE Parameters Into D–H Parameters for Serial-Link Robots. IEEE Robotics and Automation Letters, 2(4), 2174–2179. http://doi.org/10.1109/LRA.2017.2723470

Look No Further: Adapting the Localization Sensory Window to the Temporal Characteristics of the Environment

Bruce, J., Jacobson, A., & Milford, M. (2017). Look No Further: Adapting the Localization Sensory Window to the Temporal Characteristics of the Environment. IEEE Robotics and Automation Letters, 2(4), 2209–2216. http://doi.org/10.1109/LRA.2017.2724146

Minimax Robust Quickest Change Detection With Exponential Delay Penalties

Molloy, T. L., Kennedy, J. M., & Ford, J. J. (2017). Minimax Robust Quickest Change Detection With Exponential Delay Penalties. IEEE Control Systems Letters, 1(2), 280–285. http://doi.org/10.1109/LCSYS.2017.2714262

Long Range Iris Recognition: A Survey

Nguyen, K., Fookes, C., Jillela, R., Sridharan, S., & Ross, A. (2017). Long Range Iris Recognition: A Survey. Pattern Recognition. http://doi.org/10.1016/j.patcog.2017.05.021 *In Press

Rank Pooling for Action Recognition

Fernando, B., Gavves, E., Oramas M., J. O., Ghodrati, A., & Tuytelaars, T. (2017). Rank Pooling for Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4), 773–787. http://doi.org/10.1109/TPAMI.2016.2558148

Estimating the projected frontal surface area of cyclists from images using a variational framework and statistical shape and appearance models

Drory, A., Li, H., & Hartley, R. (2017). Estimating the projected frontal surface area of cyclists from images using a variational framework and statistical shape and appearance models. Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology. https://doi.org/10.1177/1754337117705489

Spatio-temporal union of subspaces for multi-body non-rigid structure-from-motion

Kumar, S., Dai, Y., & Li, H. (2017). Spatio-temporal union of subspaces for multi-body non-rigid structure-from-motion. Pattern Recognition. http://doi.org/10.1016/j.patcog.2017.05.014 *In Press

A Deep Convolutional Neural Network Module that Promotes Competition of Multiple-size Filters

Liao, Z., & Carneiro, G. (2017). A Deep Convolutional Neural Network Module that Promotes Competition of Multiple-size Filters. Pattern Recognition. http://doi.org/10.1016/j.patcog.2017.05.024 *In Press

Kinematic comparison of surgical tendon-driven manipulators and concentric tube manipulators

Li, Z., Wu, L., Ren, H., & Yu, H. (2017). Kinematic comparison of surgical tendon-driven manipulators and concentric tube manipulators. Mechanism and Machine Theory, 107, 148–165. http://doi.org/10.1016/j.mechmachtheory.2016.09.018

Finding the Kinematic Base Frame of a Robot by Hand-Eye Calibration Using 3D Position Data

Wu, L., & Ren, H. (2017). Finding the Kinematic Base Frame of a Robot by Hand-Eye Calibration Using 3D Position Data. IEEE Transactions on Automation Science and Engineering, 14(1), 314–324. http://doi.org/10.1109/TASE.2016.2517674

Trajectory tracking passivity-based control for marine vehicles subject to disturbances

Donaire, A., Romero, J. G., & Perez, T. (2017). Trajectory tracking passivity-based control for marine vehicles subject to disturbances. Journal of the Franklin Institute, 354(5), 2167–2182. http://doi.org/10.1016/j.jfranklin.2017.01.012

Autonomous Sweet Pepper Harvesting for Protected Cropping Systems

Lehnert, C., English, A., McCool, C., Tow, A. W., & Perez, T. (2017). Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robotics and Automation Letters, 2(2), 872–879. http://doi.org/10.1109/LRA.2017.2655622

Optical-Aided Aircraft Navigation using Decoupled Visual SLAM with Range Sensor Augmentation

Andert, F., Ammann, N., Krause, S., Lorenz, S., Bratanov, D., & Mejias, L. (2017). Optical-Aided Aircraft Navigation using Decoupled Visual SLAM with Range Sensor Augmentation. Journal of Intelligent & Robotic Systems, 1–19. http://doi.org/10.1007/s10846-016-0457-6

Coregistered Hyperspectral and Stereo Image Seafloor Mapping from an Autonomous Underwater Vehicle

Bongiorno, D. L., Bryson, M., Bridge, T. C. L., Dansereau, D. G., & Williams, S. B. (2017). Coregistered Hyperspectral and Stereo Image Seafloor Mapping from an Autonomous Underwater Vehicle. Journal of Field Robotics. http://doi.org/10.1002/rob.21713

Background Appearance Modeling with Applications to Visual Object Detection in an Open-Pit Mine

Bewley, A., & Upcroft, B. (2017). Background Appearance Modeling with Applications to Visual Object Detection in an Open-Pit Mine. Journal of Field Robotics, 34(1), 53–73. http://doi.org/10.1002/rob.21667

Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics

McCool, C., Perez, T., & Upcroft, B. (2017). Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural Robotics. IEEE Robotics and Automation Letters, 2(3), 1344–1351. http://doi.org/10.1109/LRA.2017.2667039

Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information

Sa, I., Lehnert, C., English, A., McCool, C., Dayoub, F., Upcroft, B., & Perez, T. (2017). Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting—Combined Color and 3-D Information. IEEE Robotics and Automation Letters, 2(2), 765–772. http://doi.org/10.1109/LRA.2017.2651952

Quantifying Spatiotemporal Greenhouse Gas Emissions Using Autonomous Surface Vehicles

Dunbabin, M., & Grinham, A. (2017). Quantifying Spatiotemporal Greenhouse Gas Emissions Using Autonomous Surface Vehicles. Journal of Field Robotics, 34(1), 151–169. http://doi.org/10.1002/rob.21665

Teaching Robots Generalisable Hierarchical Tasks Through Natural Language Instruction

Suddrey, G., Lehnert, C., Eich, M., Maire, F., & Roberts, J. (2017). Teaching Robots Generalisable Hierarchical Tasks Through Natural Language Instruction. IEEE Robotics and Automation Letters, 2(1), 201–208. http://doi.org/10.1109/LRA.2016.2588584

Dexterity Analysis of Three 6-DOF Continuum Robots Combining Concentric Tube Mechanisms and Cable-Driven Mechanisms

Wu, L., Crawford, R., & Roberts, J. (2017). Dexterity Analysis of Three 6-DOF Continuum Robots Combining Concentric Tube Mechanisms and Cable-Driven Mechanisms. IEEE Robotics and Automation Letters, 2(2), 514–521. http://doi.org/10.1109/LRA.2016.2645519

Orthopaedic surgeon attitudes towards current limitations and the potential for robotic and technological innovation in arthroscopic surgery

Jaiprakash, A., O’Callaghan, W. B., Whitehouse, S. L., Pandey, A., Wu, L., Roberts, J., & Crawford, R. W. (2017). Orthopaedic surgeon attitudes towards current limitations and the potential for robotic and technological innovation in arthroscopic surgery. Journal of Orthopaedic Surgery, 25(1), 2309499016684993. http://doi.org/10.1177/2309499016684993

Farm Workers of the Future: Vision-Based Robotics for Broad-Acre Agriculture

Ball, D., Ross, P., English, A., Milani, P., Richards, D., Bate, A., Upcroft, B., Wyeth, G., & Corke, P. (2017). Farm Workers of the Future: Vision-Based Robotics for Broad-Acre Agriculture. IEEE Robotics & Automation Magazine, 1–1. http://doi.org/10.1109/MRA.2016.2616541

Image-Based Visual Servoing With Unknown Point Feature Correspondence

McFadyen, A., Jabeur, M., & Corke, P. (2017). Image-Based Visual Servoing With Unknown Point Feature Correspondence. IEEE Robotics and Automation Letters, 2(2), 601–607. http://doi.org/10.1109/LRA.2016.2645886

Image-Based Visual Servoing With Light Field Cameras

Tsai, D., Dansereau, D. G., Peynot, T., & Corke, P. (2017). Image-Based Visual Servoing With Light Field Cameras. IEEE Robotics and Automation Letters, 2(2), 912–919. http://doi.org/10.1109/LRA.2017.2654544
