Visualisation and interactive media
Our research focuses on visualisation, interactive media and educational technology. This includes:
- information visualisation
- human-computer interaction
- virtual and augmented reality
- interaction design
- computer graphics
- computer vision
- image analysis.
We combine information visualisation and analysis with AI and design-thinking approaches. Our projects involve:
- visualisation of complex data
- virtual reality interaction
- biomedical image and interactive analysis
- multimodal user interface design
- computer-supported collaborative learning.
We've published work in top-tier conferences and journals such as:
- IEEE VR
- IEEE ISMAR
- IEEE TVCG
- ACM CHI
- ACM TOCHI.
We work with national and international research partners, including:
- University of New South Wales
- University of Sydney
- Hong Kong University of Science and Technology
- Chinese Academy of Sciences
- Tsinghua University, China
- Google, USA.
Our group members have served as steering committee members, conference chairs and program committee members in top venues such as:
- IEEE ISMAR
- IEEE VR
- ACM VRST
- ACM CHI.
Meet the team
- Professor Mark Billinghurst, Computer Science, University of Auckland and University of South Australia
- Associate Professor Tomasz Bednarz, Art & Design, University of New South Wales, Australia
- Dr Philippe Chouinard, La Trobe University, Australia
- Associate Professor Joe Gabbard, Computer Science, Virginia Tech, USA
- Professor Andy Herries, Archaeology, La Trobe University, Australia
- Professor Xiangshi Ren, Computer Science, Kochi University of Technology, Japan
- Professor Feng Tian, Institute of Software, Chinese Academy of Sciences, China
- Associate Professor Xiuying Wang, Computer Science, University of Sydney, Australia
- Associate Professor Sai-Kit Yeung, Computer Science, Hong Kong University of Science and Technology
- Associate Professor Lap-Fai Yu, Computer Science, George Mason University, USA
- Dr Shumin Zhai, Google, USA
Semantic segmentation of biomedical images
Detection and segmentation of lesions in medical images are routinely performed in radiology centres for patient diagnosis and treatment planning.
This process is time-consuming and prone to inter- and intra-observer variation. Computerised methods have been developed to assist precision diagnosis and efficient treatment planning, but there is still much scope for improvement.
In this project, we will investigate advanced machine learning models, including deep learning algorithms, for semantic segmentation of organs of interest, such as the liver and heart, from biomedical images including PET, CT and MRI.
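Segmentation quality is commonly reported with the Dice similarity coefficient, which measures the overlap between a predicted mask and the ground truth. The sketch below is illustrative (the toy masks are invented), not part of the project pipeline:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks (1 = organ, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks: the prediction overlaps the ground truth on 2 of 3 voxels.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) ≈ 0.667
```

A Dice score of 1.0 indicates a perfect match; segmentation models are typically trained and compared against this kind of overlap metric.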
Mixed reality applications in education
Mixed reality (MR) is an emerging technology that could shape the future of education. This project develops MR technologies to help pupils and educators learn STEAM subjects and explore new pedagogical and learning styles.
Urban zoning using graph convolutional networks
Urban zoning divides the land in a city into zones where specific land uses are permitted, and serves as the basis for land use analysis and urban planning. As a city develops, its zoning map must be updated periodically to reflect changes in urban patterns.
Unlike most approaches to land use classification, which apply machine learning to satellite images, this project aims to use recent deep learning methods integrated with graph theory to synthesise an urban zoning map from a collection of geo-tagged photos of a city gathered from social networking services (SNS).
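A standard building block for such graph-based learning is the graph-convolution propagation rule of Kipf and Welling, H' = ReLU(D̂⁻¹ᐟ² Â D̂⁻¹ᐟ² H W), where Â adds self-loops to the adjacency matrix. A minimal NumPy sketch, with a toy graph standing in for a real photo-location graph (the data and dimensions below are invented):

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer (Kipf & Welling propagation rule)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Toy graph: 3 photo locations, edges between nearby ones.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                                 # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 4))  # random layer weights
Z = gcn_layer(A, H, W)
print(Z.shape)  # (3, 4)
```

Stacking such layers lets each location's representation absorb information from its neighbours, which is the property a zoning classifier would exploit.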
Digital cultural heritage and arts
This project explores how computing technologies can help practitioners archive, preserve and promote cultural heritage. Several studies have used AI algorithms to identify and generate new art patterns.
New forms of artistic expression have also been explored by interpreting the context of voices and behaviours and linking them to visual elements.
Optimisation of user interface design using AI approaches
User interface (UI) design is an important step in developing software and computing products, and numerous studies have approached it in different ways.
This project explores how machine learning approaches can be applied to UI design by optimising information flows and minimising mental workload.
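As a toy illustration of this kind of optimisation, one can search for the menu ordering that minimises expected selection time. The command frequencies and the linear selection-time model below are invented for the example, not the project's actual workload model:

```python
import itertools

# Hypothetical usage frequencies for four menu commands.
freq = {"copy": 0.4, "paste": 0.35, "print": 0.2, "about": 0.05}

def slot_time(i: int) -> float:
    """Simplified selection-time model: lower slots take longer to reach."""
    return 1.0 + 0.5 * i

def expected_time(layout) -> float:
    """Frequency-weighted average time to select a command in this layout."""
    return sum(freq[cmd] * slot_time(i) for i, cmd in enumerate(layout))

# Exhaustive search over all orderings (fine for a handful of items;
# larger interfaces would need stochastic or learned optimisation instead).
best = min(itertools.permutations(freq), key=expected_time)
print(best)  # the most frequent commands land in the fastest slots
```

Even this trivial formulation shows the shape of the problem: a design space, a cost model of user effort, and a search procedure, with machine learning replacing the hand-built cost model at scale.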
Road network synthesis using generative adversarial networks
A road network constitutes the layout of a city where numerous people reside and interact with each other. There is considerable demand for modelling realistic and functional urban road networks in city planning, game development, and training. Computer graphics researchers have proposed approaches that let users focus on high-level design goals rather than laborious low-level operations better handled by computer systems.
This project aims to propose a novel approach that synthesises road networks efficiently from common urban data using the latest deep learning methods.
Automatic arrangement of objects in virtual environments
Virtual reality now benefits learning and entertainment by providing immersive virtual environments with interactive digital media content, e.g. virtual museums.
This project aims to propose an automatic approach that intelligently arranges the positions and appearances of objects in a virtual environment and provides a satisfactory user experience. Participants will investigate how to integrate stochastic optimisation and deep learning to meet a set of design goals and metrics.
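A common stochastic-optimisation choice for such arrangement problems is simulated annealing: propose small random changes, always accept improvements, and occasionally accept worse layouts to escape local minima. The sketch below is a deliberately simplified 1D stand-in, spreading exhibits along a corridor to satisfy a minimum-gap design metric (the cost function, move size and cooling schedule are illustrative assumptions, not the project's method):

```python
import math
import random

random.seed(42)

def cost(xs, min_gap=1.0):
    """Design metric: penalise exhibits placed closer than min_gap apart."""
    penalty = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            gap = abs(xs[i] - xs[j])
            if gap < min_gap:
                penalty += (min_gap - gap) ** 2
    return penalty

xs = [0.0] * 5           # start with all exhibits clustered at the entrance
temp = 1.0
for step in range(5000):
    i = random.randrange(len(xs))
    candidate = xs[:]
    candidate[i] += random.uniform(-0.5, 0.5)   # perturb one exhibit
    delta = cost(candidate) - cost(xs)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        xs = candidate
    temp *= 0.999                               # geometric cooling
print(round(cost(xs), 4))
```

In a full 3D scene the same loop would perturb object positions and appearances, with the cost aggregating several design goals (visibility, spacing, style), possibly learned by a deep network rather than hand-written.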
Image-centric multimodal data fusion and collaborative learning in biomedical applications
Multimodal medical imaging and diverse clinical data are the basis for precision biomedicine and personalised healthcare, yet how to use and interpret the interactions among these data sources remains under-investigated.
This research investigates how computer algorithms can analyse and understand images and relevant health data for enhanced decision support systems. In this project, we will investigate image pattern recognition and machine learning models to improve the modelling of visual features, and develop new methodologies for data fusion and interpretation across various medical applications.
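As a simple illustration of feature-level fusion (the feature vectors below are hypothetical, not from any real dataset), each modality can be normalised separately before concatenation so that no single source dominates the fused representation:

```python
import numpy as np

def fuse_features(image_feat, clinical_feat) -> np.ndarray:
    """Feature-level fusion: z-normalise each modality, then concatenate."""
    def z(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.concatenate([z(image_feat), z(clinical_feat)])

# Hypothetical example: an image descriptor plus a tabular clinical record.
image_feat = [0.2, 0.9, 0.4, 0.7]     # e.g. pooled network activations
clinical_feat = [54.0, 1.0, 37.2]     # e.g. age, sex, temperature
fused = fuse_features(image_feat, clinical_feat)
print(fused.shape)  # (7,)
```

The fused vector can then feed a downstream classifier; richer schemes learn the fusion itself, but per-modality normalisation followed by concatenation is the usual baseline.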
Immersive visualisation and analytics of biomedical images
Multimodal and multidimensional biomedical images are indispensable data resources in current biology and medical science. A long-standing challenge is how to explore complicated 3D image content effectively and input user-specified information directly in 3D space, based on observation and understanding of the images. Solving this holds great potential for student-focused teaching and clinical use in biomedicine and medical imaging.
Virtual, augmented and mixed reality technologies have attracted great interest in entertainment and gamification, and advances in immersive virtual and augmented reality also make it possible to communicate and explore the content and patterns preserved in biomedical images in new ways. This research aims to develop virtual and augmented reality frameworks and platforms to bridge these gaps, particularly for visualisation, interaction and analytics of biomedical images, with potential applications in novel education and intelligent healthcare.
Gesture input for mobile interaction design
Gesture interaction has become a popular style on mobile devices. This popularity raises many research questions, such as gesture learning, modelling, recognition and design. This project focuses on gesture design for mobile interaction from three aspects: input modalities, physical activities and holding postures. It investigates the effects of these aspects on gesture input, with the aim of designing more efficient gesture-driven interfaces for mobile interaction.
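To illustrate the recognition side, here is a minimal template-matching recogniser in the spirit of $1-style recognisers (the gesture names and strokes are invented examples): a stroke is resampled to a fixed number of evenly spaced points, then matched to the closest stored template by average point-to-point distance.

```python
import math

def resample(points, n=16):
    """Resample a stroke to n points evenly spaced along its path length."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1                       # find the segment containing target
        t = (target - dists[j - 1]) / (dists[j] - dists[j - 1] + 1e-12)
        out.append((points[j - 1][0] + t * (points[j][0] - points[j - 1][0]),
                    points[j - 1][1] + t * (points[j][1] - points[j - 1][1])))
    return out

def distance(a, b):
    """Average Euclidean distance between corresponding resampled points."""
    return sum(math.hypot(x0 - x1, y0 - y1)
               for (x0, y0), (x1, y1) in zip(a, b)) / len(a)

templates = {
    "swipe-right": resample([(0, 0), (1, 0)]),
    "swipe-up": resample([(0, 0), (0, 1)]),
}
stroke = resample([(0, 0), (0.5, 0.05), (1, 0)])   # a wobbly rightward swipe
best = min(templates, key=lambda name: distance(templates[name], stroke))
print(best)  # swipe-right
```

Production recognisers add scale, rotation and translation normalisation, but the resample-and-match core is the same.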
Gesture interaction design for older users
Older adults enjoy using smartphones but can also find them frustrating. On one hand, phones help them access online information and keep in touch with others; on the other, tasks such as target selection and function search are demanding. Touch gesture interaction (e.g. Google Gesture Search) is a potential solution because it is eyes-free and button-free. This project investigates two fundamental aspects of gesture interaction for older users: 1) motivation – are older users willing to use gestures? 2) memorisation and articulation – how well can older users memorise and reproduce gestures? The aim is to design user-friendly gesture interfaces for older people.
Investigating cognitive illusions in virtual environments
It is believed that immersion (the technical qualities of a system) and coherence (the degree to which the experience matches up with user expectations) lead to Place Illusion (the user's feeling that they are in another place) and Plausibility Illusion (the feeling that the events a user is seeing are actually happening). However, there are many questions yet to be answered, including how to measure the feeling of Plausibility Illusion, whether Place Illusion or Plausibility Illusion is more important in various types of applications, and how best to induce these feelings in users. This project investigates these questions and others.
Immersive analytics for archaeological fieldwork data
Archaeologists collect rich and complex spatiotemporal data in the course of their fieldwork. This data, however, frequently lies unused after collection because it is difficult to analyse, and archaeology domain experts often lack the time and computational expertise needed to extract maximum value from it. This project is developing and evaluating immersive tools to enable new archaeological research.