Engineering

Publication Search Results

  • (2022) Senanayake, Upul
    Thesis
    Decline in cognitive functions, including memory, processing speed and executive processes, has long been associated with ageing. It is understood that every human will go through this process, but some will go through it faster, and for some it starts earlier. Differentiating between cognitive decline due to a pathological process and normal ageing is an ongoing research challenge. According to the World Health Organization (WHO), dementia is an umbrella term for a number of diseases affecting memory, other cognitive abilities and behaviour that interfere significantly with the ability to maintain daily living activities. Although a cure for dementia has not yet been found, it is often stressed that early identification of individuals at risk of dementia can be instrumental in treatment and management. Mild Cognitive Impairment (MCI) is considered a prodromal condition to dementia, and patients with MCI have a higher probability of progressing to certain types of dementia, the most common being Alzheimer's Disease (AD). Epidemiological studies suggest that the progression rate from MCI to dementia is around 10-12% annually, much higher than in the general elderly population. Therefore, accurate and early diagnosis of MCI may be useful, as those patients can be closely monitored for progression to dementia. Traditionally, clinicians use a number of neuropsychological tests (also called NM features) to evaluate and diagnose cognitive decline in individuals. In contrast, computer-aided diagnostic techniques often focus on medical imaging modalities such as magnetic resonance imaging (MRI) and positron emission tomography (PET). This thesis utilises machine learning and deep learning techniques to leverage both of these data modalities in a single end-to-end pipeline that is robust to missing information. A number of techniques have been designed, implemented and validated to diagnose different types of cognitive impairment, including mild cognitive impairment and its subtypes as well as dementia, initially directly from NM features and then in fusion with medical imaging features. The novel techniques proposed in this thesis build end-to-end deep learning pipelines that learn to extract features and engineer combinations of features to yield the best performance. The proposed deep fusion pipeline can seamlessly fuse data from multiple disparate modalities of vastly different dimensions. Survival analysis techniques are often used to understand progression and the time until an event of interest. In this thesis, the proposed deep survival analysis techniques are used to better understand progression to dementia; they also enable imaging data to be used seamlessly alongside NM features, which, as far as is known, is the first such approach. The techniques are designed, implemented and validated across two datasets, an in-house dataset and a publicly available one, adding an extra layer of cross-validation. The proposed techniques can be used to differentiate between cognitively impaired and cognitively normal individuals and to gain better insights into their subsequent progression to dementia.
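
    As an illustration of the deep survival component described above, the sketch below pairs a two-branch fusion network with the standard negative Cox partial-likelihood loss. This is a minimal sketch under generic assumptions; the branch structure, layer sizes and loss are common choices in deep survival analysis, not the thesis's actual architecture.

    ```python
    import torch
    import torch.nn as nn

    class FusionSurvivalNet(nn.Module):
        """Two-branch fusion of neuropsychological (NM) and imaging features (illustrative)."""
        def __init__(self, nm_dim: int, img_dim: int, hidden: int = 64):
            super().__init__()
            self.nm_branch = nn.Sequential(nn.Linear(nm_dim, hidden), nn.ReLU())
            self.img_branch = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
            self.risk = nn.Linear(2 * hidden, 1)  # one log-risk score per subject

        def forward(self, nm, img):
            fused = torch.cat([self.nm_branch(nm), self.img_branch(img)], dim=1)
            return self.risk(fused).squeeze(-1)

    def cox_ph_loss(log_risk, time, event):
        """Negative Cox partial likelihood; event = 1.0 for progression, 0.0 if censored."""
        order = torch.argsort(time, descending=True)      # longest survivors first
        log_risk, event = log_risk[order], event[order]
        log_cumsum = torch.logcumsumexp(log_risk, dim=0)  # log-sum over each risk set
        return -((log_risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)
    ```

    Training would then minimise cox_ph_loss(model(nm, img), time, event) over mini-batches; how missing modalities are handled is the part the thesis's pipeline addresses and is omitted here.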

  • (2022) Flanagan, Colm
    Thesis
    Elements from cognitive psychology have been applied in a variety of ways to artificial intelligence. One of the lesser studied areas is how episodic memory can assist learning in cognitive robots. In this dissertation, we investigate how episodic memories can assist a cognitive robot in learning which behaviours are suited to different contexts. We demonstrate the learning system in a domestic robot designed to assist human occupants of a house. People are generally good at anticipating the intentions of others: around people we are familiar with, we can predict what they are likely to do next, based on what we have observed them doing before. Our ability to record different types of events, and to recall those we know are relevant to the situation at hand, is one reason our cognition is so powerful. For a robot to assist rather than hinder a person, artificial agents require this functionality too. This work makes three main contributions. Since episodic memory requires context, we first propose a novel approach to segmenting a metric map into a collection of rooms and corridors. Our approach is based on identifying critical points on a Generalised Voronoi Diagram and creating regions around these critical points. Our results show state-of-the-art accuracy, with 98% precision and 96% recall. Our second contribution is our approach to event recall in episodic memory. We take a novel approach in which events in memory are typed and a unique recall policy is learned for each type of event. These policies are learned incrementally, using only information presented to the agent and without any need to take the agent offline. Ripple Down Rules provide a suitable learning mechanism, and our results show that, when trained appropriately, we achieve near-perfect recall of episodes that match an observation. Finally, we propose a novel approach to how recall policies are trained. Commonly, an RDR policy is trained by a human guide, where the instructor has the option to discard information that is irrelevant to the situation. However, we show that by using Inductive Logic Programming it is possible to train a recall policy for a given type of event after only a few observations of that type of event.
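
    Since Ripple Down Rules carry most of the recall mechanism above, a minimal single-classification RDR sketch may help: each rule has an exception branch tried when it fires and a fallback branch tried when it does not. The conditions and attribute names are invented for illustration, not taken from the dissertation.

    ```python
    class RDRNode:
        """One rule in a single-classification Ripple Down Rules tree."""
        def __init__(self, condition, conclusion, if_true=None, if_false=None):
            self.condition, self.conclusion = condition, conclusion
            self.if_true, self.if_false = if_true, if_false  # exception / fallback

        def classify(self, case, default=None):
            if self.condition(case):
                # Rule fires: an exception node may refine the conclusion.
                refined = self.if_true.classify(case) if self.if_true else None
                return refined if refined is not None else self.conclusion
            return self.if_false.classify(case, default) if self.if_false else default

    # Hypothetical recall policy: kitchen episodes, refined for evenings.
    root = RDRNode(lambda c: c["room"] == "kitchen", "recall_kitchen_episodes",
                   if_true=RDRNode(lambda c: c["time"] == "evening",
                                   "recall_dinner_episodes"))
    print(root.classify({"room": "kitchen", "time": "evening"}))  # recall_dinner_episodes
    print(root.classify({"room": "kitchen", "time": "morning"}))  # recall_kitchen_episodes
    ```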

  • (2022) Zhao, Runqing
    Thesis
    Emerging modes of air transport, such as the autonomous airport shuttle and the air taxi, are potentially efficient alternatives to current transport practices such as bus and train. This thesis examines bus shuttle service within an airport and air metro as two examples of service network design. Within an airport, the bus shuttle serves passengers between terminals, train stations, parking lots, hotels and shopping areas. Air metro is a type of pre-planned service in urban air mobility that accommodates passengers for intra- or inter-city trips. The problem in each case is to optimise the service, with outputs including the optimal fleet size, dispatch pattern and schedule. Based on the proposed time-space networks, the service network design problems are formulated as mixed integer linear programs. The airport shuttle case is extended to a heterogeneous multi-type bus fleet and to stochastic demand, while a rolling horizon optimisation is adopted for the air metro case. In the autonomous airport inter-terminal bus shuttle case, a Monte Carlo simulation-based approach is proposed to solve the case with demand stochasticity, which is then embedded into an "effective" passenger demand framework, where the "effective" demand is the sum of the mean demand and a safety margin. Comparing the proposed airport shuttle service to the current one shows that the proposed service can save approximately 27% of the total system cost. The results for the stochastic problem suggest that setting the safety margin to 0.3675 times the standard deviation gives the best performance. For the second case, the service network design is extended with a pilot scheduling layer, and simulation is undertaken to compare the autonomous (pilot-less) and piloted service designs. The results suggest that an autonomous air metro service would be preferable if the price of an autonomous aircraft is less than 1.6 times that of a human-piloted one. The results for the rolling horizon optimisation suggest confirming the actual demand at least 45 minutes prior to departure. Based on data from the Sydney (Australia) region, the thesis provides information directly relevant to the service network design of emerging modes of air transport in the city.
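
    The kind of dispatch decision the time-space formulation makes can be pictured with a toy mixed integer linear program. The sketch below, using the open-source PuLP library, chooses integer numbers of shuttle trips on a few hypothetical timetabled arcs; all data values are invented, and the real formulations are far richer (flow balance, fleet size, scheduling).

    ```python
    import pulp

    arcs = ["t1_t2_0800", "t1_t2_0815", "t2_t1_0800"]   # hypothetical timetabled trips
    demand = {"t1_t2_0800": 35, "t1_t2_0815": 20, "t2_t1_0800": 15}
    capacity, trip_cost = 30, 50.0                      # seats per shuttle, cost per trip

    prob = pulp.LpProblem("shuttle_dispatch", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("trips", arcs, lowBound=0, cat="Integer")
    prob += pulp.lpSum(trip_cost * x[a] for a in arcs)  # minimise operating cost
    for a in arcs:
        prob += capacity * x[a] >= demand[a]            # cover demand on each arc

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({a: int(x[a].value()) for a in arcs})         # e.g. 2, 1, 1 trips
    ```

    A stochastic-demand variant in the spirit of the abstract would simply replace demand[a] with its mean plus a safety margin proportional to the standard deviation.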

  • (2022) Wang, Bozhi
    Thesis
    Blockchain has recently become a disruptive technology for building distributed applications (DApps). Many researchers and institutions have devoted their resources to developing more effective blockchain technologies and innovative applications. However, with limited computing power and financial resources, it is hard for researchers to deploy and test their blockchain innovations in a large-scale physical network. Hence, in this dissertation we propose a peer-to-peer (P2P) networking simulation framework that allows a large-scale blockchain system with thousands of nodes to be deployed and tested (simulated) on a single computer. We systematically review existing research and techniques for blockchain simulators and evaluate their advantages and disadvantages. To achieve generality and flexibility, our simulation framework lays the foundation for simulating blockchain networks with different scales and protocols. We verified the framework by deploying three of the most famous blockchain systems (Bitcoin, Ethereum and IOTA) in it, and demonstrated its effectiveness with three case studies: (a) improving the performance of a blockchain by changing key parameters or deploying a new directed acyclic graph (DAG) structure protocol; (b) testing and analysing the attack response of a Tangle-based blockchain (IOTA); and (c) establishing and deploying a new demand-side smart grid bidding system in the simulation framework. The dissertation also points out a series of open issues for future research.
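
    At its core, a simulator of this kind must model message propagation over a P2P overlay. The sketch below is a minimal discrete-event loop gossiping a single block across a random topology; the peer count and link latencies are invented, and a real framework would add protocol logic (validation, forks, mining) on top.

    ```python
    import heapq
    import random

    random.seed(42)
    n_nodes = 200
    # Invented topology: each node gossips to 8 randomly chosen peers.
    peers = {i: random.sample([j for j in range(n_nodes) if j != i], 8)
             for i in range(n_nodes)}

    received = {0: 0.0}            # node 0 creates the block at t = 0
    events = [(0.0, 0)]            # priority queue of (arrival time, node)
    while events:
        t, node = heapq.heappop(events)
        if t > received[node]:
            continue               # stale entry; a faster path was already handled
        for peer in peers[node]:
            arrival = t + random.uniform(0.01, 0.5)   # random link latency (seconds)
            if peer not in received or arrival < received[peer]:
                received[peer] = arrival
                heapq.heappush(events, (arrival, peer))

    delays = sorted(received.values())
    print(f"reached {len(received)}/{n_nodes} nodes, "
          f"90th-percentile delay = {delays[int(0.9 * len(delays))]:.2f}s")
    ```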

  • (2022) Lo, Sin Kuang
    Thesis
    Blockchain-based systems must be reliable and have high integrity because they are used for recording economically-critical and safety-critical data. Incorrect data recorded on a blockchain, whether from faulty external components or inaccurate input data, can undermine the integrity or reliability of these systems. The importance of reliability and integrity for blockchains is known, but approaches to assessing them in blockchain-based systems have not previously been investigated. In a blockchain-based system, the data recorded must be accurate because it will be used to verify the correctness of external states, so systems that rely on that data are also impacted by its quality. Nonetheless, the general relationship between system quality and data quality has not previously been conceptualised systematically in the software architecture literature. This thesis aims to improve our understanding of the design of blockchain-based systems, considering both system quality and data quality. We first report studies we have conducted on using architecture-level modelling approaches to analyse the reliability of blockchain oracles. Our analysis shows that decentralised oracles have higher reliability than centralised oracles, and that data sources can affect system reliability. The study demonstrates the feasibility of using architecture modelling and established techniques to model and analyse the reliability of off-chain components. We have also proposed and evaluated a series of schemes to model and analyse data integrity in blockchain-based systems. Blockchains using Nakamoto Consensus give transactions a probability of being reversed or reordered, which makes it difficult to model and analyse how blockchains affect the integrity of data in blockchain-based systems. We report on a study conducted using the design of a real-world system and evaluate our proposed approach; the findings show how different types of integrity threats and their designed mitigations can be identified and analysed. Finally, motivated by the work on reliability and integrity, we propose a theoretical model of the general relationship between system quality and data quality in software systems, linking it to established views about the relationship between software architecture and system quality. The general nature of the relationship between system quality and data quality has not previously been examined. We use examples from the literature to establish the face validity of our model, and discuss its possible implications for research and practice.
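
    The centralised-versus-decentralised oracle finding has a simple analytical intuition that can be sketched as k-out-of-n reliability: a decentralised oracle answers correctly when a quorum of independent feeds does. The probabilities below are illustrative, not figures from the thesis, and real architecture-level models also account for data-source reliability.

    ```python
    from math import comb

    def at_least_k_of_n(n: int, k: int, p: float) -> float:
        """P(at least k of n independent components work, each with probability p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    single_oracle = 0.95                          # one centralised oracle feed
    majority_of_7 = at_least_k_of_n(7, 4, 0.95)   # decentralised: 4-of-7 quorum
    print(f"centralised: {single_oracle:.4f}, decentralised 4-of-7: {majority_of_7:.6f}")
    ```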

  • (2022) Altulyan, May
    Thesis
    With the rapid growth in the number of things that can be connected to the internet, Recommendation Systems for the IoT (RSIoT) have become more significant in helping a variety of applications meet user preferences; such applications include smart home, smart tourism, smart parking and m-health. In this thesis, we propose a unified recommendation framework for data-driven, people-centric smart home applications. The framework involves three main stages: complex activity detection, constructing recommendations in a timely manner, and ensuring data integrity. First, we review the latest state-of-the-art recommendation methods and applications of recommender systems in the IoT, so as to form an overview of current research progress. Challenges of using the IoT for recommendation systems are introduced and explained, and a reference framework to compare the existing studies and guide future research and practice is provided. Complex activity detection helps our system understand, at a relatively high level, what activity or activities the user is undertaking; to meet its requirements, we provide adequate resources to fit the recommender system. Furthermore, we consider two inherent challenges of RSIoT: capturing the dynamic patterns of human activities, and updating the system without relying on user feedback. Based on these, we design a Reminder Care System (RCS) which harnesses the advantages of deep reinforcement learning (a deep Q-network, DQN) to address these challenges. We then utilize a contextual bandit approach to improve the quality of recommendations by taking the context as an input, aiming not only to address the two previous challenges of RSIoT but also to learn the best action in different scenarios and treat each state independently. Last but not least, we utilize blockchain technology to ensure safe data storage in addition to its decentralized nature. In the last part, we discuss a few open issues and provide some insights for future directions.
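
    The contextual bandit stage can be pictured with LinUCB, a standard algorithm from that family (the thesis's exact algorithm may differ). In the sketch below, each candidate action keeps a ridge-regression estimate of reward and is chosen by an upper confidence bound on the current context; the dimensions and simulated feedback are invented.

    ```python
    import numpy as np

    class LinUCBArm:
        def __init__(self, d: int, alpha: float = 1.0):
            self.A = np.eye(d)      # ridge-regression Gram matrix
            self.b = np.zeros(d)    # reward-weighted context sum
            self.alpha = alpha      # exploration strength

        def ucb(self, x: np.ndarray) -> float:
            A_inv = np.linalg.inv(self.A)
            theta = A_inv @ self.b                      # current reward estimate
            return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

        def update(self, x: np.ndarray, reward: float) -> None:
            self.A += np.outer(x, x)
            self.b += reward * x

    d, arms = 5, [LinUCBArm(5) for _ in range(3)]       # 3 candidate reminders
    rng = np.random.default_rng(0)
    for _ in range(200):
        x = rng.normal(size=d)                          # context: time, sensor state, ...
        chosen = max(range(len(arms)), key=lambda a: arms[a].ucb(x))
        reward = float(rng.random() < 0.5)              # stand-in for user feedback
        arms[chosen].update(x, reward)
    ```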

  • (2022) Huang, Chenji
    Thesis
    Node representation learning (NRL) has shown incredible success in recent years. It compresses nodes into low-dimensional vectors that accurately represent their characteristics. While many researchers have applied NRL to heterogeneous information networks (HINs), most focus only on the quality of the node embedding itself or on basic downstream tasks such as node classification and link prediction. In this thesis, we study the following three problems to explore the power of graph representation learning on different HIN mining tasks. Firstly, we investigate the meta-path prediction problem: given an HIN H, a head node h, a meta-path P and a tail node t, meta-path prediction aims to predict whether h can be linked to t by an instance of P. Most existing solutions either require predefined meta-paths, which limits their scalability to schema-rich HINs and long meta-paths, or do not aim to predict the existence of an instance of P. To address these issues, we propose a novel prediction model, called ABLE, which exploits an Attention mechanism and a BiLSTM for Embedding. We conduct extensive experiments on four real datasets, and the empirical results show that ABLE outperforms the state-of-the-art methods by up to 20% and 22% in AUC and AP scores, respectively. Secondly, we focus on the node importance value estimation problem. Node importance estimation is a fundamental task in graph data analysis; extensive studies have focused on it, and various downstream applications have benefited from it, such as recommendation, resource allocation optimization, and missing value completion. However, existing works either focus on homogeneous networks or study only importance-based ranking. We are the first to consider node importance values as heterogeneous values in HINs. A typical HIN is built of several distinct node types, each with its own measure of importance value, which makes the problem more challenging than computing node importance in conventional homogeneous networks. In this thesis, we formally introduce the problem of node importance value estimation in HINs: given the importance values of a subset of nodes in an HIN, we aim to estimate the importance values of the remaining nodes. To solve this problem, we propose an effective graph neural network (GNN) model, called the HIN Importance Value Estimation Network (HIVEN). Extensive experiments on real-world HIN datasets demonstrate that HIVEN clearly outperforms the baseline methods. Thirdly, we study the node importance estimation problem in dynamic HINs. Node importance in an HIN is highly correlated with the HIN topology, while node importance can in turn influence changes in the HIN structure. All existing works assume that the HIN is static and ignore this co-evolutionary nature. In addition, historical node importance information is always available, which can further help to obtain accurate node importance estimates. Thus, we propose a novel temporal GNN model, CoGNN. We experiment with real-world dynamic HIN datasets and show that the proposed model outperforms the state of the art.
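
    A rough sketch of the Attention-plus-BiLSTM idea behind a model like ABLE is given below: the node-type sequence of a meta-path is encoded by a BiLSTM, pooled with attention, and combined with head and tail node embeddings to produce an existence score. The sizes and the scoring head are generic assumptions, not ABLE's published design.

    ```python
    import torch
    import torch.nn as nn

    class MetaPathScorer(nn.Module):
        def __init__(self, n_types: int, n_nodes: int, dim: int = 32):
            super().__init__()
            self.type_emb = nn.Embedding(n_types, dim)
            self.node_emb = nn.Embedding(n_nodes, dim)
            self.bilstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * dim, 1)
            self.out = nn.Linear(4 * dim, 1)   # concat(path, head, tail) -> logit

        def forward(self, path_types, head, tail):
            h, _ = self.bilstm(self.type_emb(path_types))   # (B, L, 2*dim)
            w = torch.softmax(self.attn(h), dim=1)          # attention over path steps
            path_vec = (w * h).sum(dim=1)                   # (B, 2*dim)
            pair = torch.cat([self.node_emb(head), self.node_emb(tail)], dim=-1)
            return self.out(torch.cat([path_vec, pair], dim=-1)).squeeze(-1)

    scorer = MetaPathScorer(n_types=4, n_nodes=100)
    logit = scorer(torch.tensor([[0, 2, 1]]), torch.tensor([5]), torch.tensor([17]))
    print(logit.shape)   # torch.Size([1]): one existence logit for the (h, P, t) query
    ```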

  • (2022) Putra, Guntur Dharma
    Thesis
    The Internet of Things (IoT) brings connectivity to a large number of heterogeneous devices, many of which may not be trustworthy. Classical authorisation schemes can protect the network from adversaries. However, these schemes cannot ascertain the in situ reliability and trustworthiness of authorised nodes, as they do not monitor nodes' behaviour over the operational period; IoT nodes can be compromised post-authentication, which could impede the resiliency of the network. Trust and Reputation Management (TRM) has the potential to overcome these issues, but conventional centralised TRM has poor transparency and suffers from single points of failure. In recent years, blockchains have shown promise in addressing these issues due to their salient features, such as decentralisation, auditability and transparency. This thesis presents decentralised TRM frameworks that address specific trust issues and challenges in three core IoT functionalities. First, a TRM framework for IoT access control is proposed to address issues in conventional authorisation schemes, in which static predefined access policies are continuously enforced. The enforcement of static access policies assumes that access requestors always exhibit benign behaviour; in practice, some requestors may be malicious and attempt to deceive the access policies, which raises the urgency of building adaptive access control. In this framework, nodes' behaviour is progressively evaluated based on their adherence to the access control policies and quantified into trust and reputation scores, which are then incorporated into the access control to achieve dynamic access control policies. The framework is implemented on a public Ethereum test network interconnected with a private lab-scale network of Raspberry Pi computers. The experimental results show that the framework achieves consistent processing latencies and is feasible for implementing effective access control in decentralised IoT networks. Second, a TRM framework for blockchain-based Collaborative Intrusion Detection Systems (CIDS) is presented, with an emphasis on the importance of building end-to-end trust between CIDS nodes. In a CIDS, each node contributes detection rules aiming to build collective knowledge of new attacks. Here, the TRM framework assigns trust scores to each contribution from various nodes, from which the trustworthiness of each node is determined. These scores help protect the CIDS network from invalid detection rules, which may degrade the accuracy of attack detection. A proof-of-concept implementation of the framework is developed on a private lab-scale Ethereum network. The experimental results show that the solution is feasible and performs within the expected benchmarks of the Ethereum platform. Third, a TRM framework for decentralised resource sharing in 6G-enabled IoT networks is proposed, aiming to remove the inherent risks of sharing scarce resources, especially when most nodes in the network are unknown or untrusted. The proposed TRM framework helps manage the matching of resource supply and demand, and evaluates the trustworthiness of each node after the completion of the resource sharing task. The experimental results on a lab-scale proof-of-concept implementation demonstrate the feasibility of the framework, as it incurs only insignificant overheads in gas consumption and overall latency.
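
    A common building block in such TRM frameworks is an incremental trust-score update after each observed interaction. The sketch below uses an exponentially weighted average of policy-adherence outcomes with a threshold gating access; the weight and threshold are illustrative assumptions, not the thesis's actual scoring rules, which run in smart contracts.

    ```python
    def update_trust(trust: float, compliant: bool, weight: float = 0.2) -> float:
        """Exponentially weighted update toward 1 (compliant) or 0 (violation)."""
        return (1 - weight) * trust + weight * (1.0 if compliant else 0.0)

    THRESHOLD = 0.4                  # illustrative access-granting threshold
    trust = 0.5                      # neutral prior for a newly authorised node
    for compliant in [True, True, False, True, False, False]:
        trust = update_trust(trust, compliant)
        print(f"trust = {trust:.3f} -> access "
              f"{'granted' if trust >= THRESHOLD else 'denied'}")
    ```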

  • (2022) Zhang, Yuting
    Thesis
    In many real-world applications, bipartite graphs naturally model relationships between two types of entities. Community discovery over bipartite graphs is a fundamental problem and has attracted much attention recently. However, all existing studies overlook the weight (e.g., influence or importance) of vertices in forming the community, thus missing useful properties of the community. In this thesis, we propose a novel cohesive subgraph model named the Pareto-optimal (α, β)-community, which is the first to consider both structural cohesiveness and vertex weight on bipartite graphs. The proposed Pareto-optimal (α, β)-community model follows the concept of the (α, β)-core by imposing degree constraints for each type of vertex, and integrates Pareto-optimality in modeling the weight information from the two different types of vertices. An online query algorithm is developed to retrieve Pareto-optimal (α, β)-communities with time complexity O(p · m), where p is the number of resulting communities and m is the number of edges in the bipartite graph G. To support efficient query processing over large graphs, we also develop index-based approaches. A complete index I is proposed, and the query algorithm based on I achieves query processing time linear in the result size (i.e., the algorithm is optimal). Nevertheless, this index incurs prohibitively expensive space complexity. To strike a balance between query efficiency and space complexity, a space-efficient compact index is proposed, and computation-sharing strategies are devised to improve the efficiency of its construction. Extensive experiments on 9 real-world graphs validate both the effectiveness and the efficiency of our query processing algorithms and indexing techniques.
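
    The structural half of the model, the (α, β)-core, is easy to sketch: repeatedly delete upper-side vertices with degree below α and lower-side vertices with degree below β until none remain. The toy graph below is invented; the thesis's contribution layers Pareto-optimal weight dominance and indexing on top of this primitive.

    ```python
    def ab_core(edges, alpha, beta):
        """Peel a bipartite graph down to its (alpha, beta)-core."""
        adj = {}
        for u, v in edges:                       # u is upper-side, v is lower-side
            adj.setdefault(("U", u), set()).add(("L", v))
            adj.setdefault(("L", v), set()).add(("U", u))
        changed = True
        while changed:
            changed = False
            for node in list(adj):
                if node not in adj:
                    continue                     # already removed this pass
                need = alpha if node[0] == "U" else beta
                if len(adj[node]) < need:        # violates its degree constraint
                    for nbr in adj.pop(node):
                        adj[nbr].discard(node)
                    changed = True
        return adj

    core = ab_core([(1, "a"), (1, "b"), (2, "a"), (2, "b"), (3, "c")], alpha=2, beta=2)
    print(sorted(core))   # vertices 1, 2 and a, b survive the (2, 2)-core
    ```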

  • (2022) Salama, Usama
    Thesis
    The Internet of Things (IoT) has recently become an important research topic because it revolutionises our everyday life through integrating various sensors and objects to communicate directly without human intervention. IoT technology is expected to offer very promising solutions for many areas. In this thesis we focus on crime investigation and crime prevention, which may significantly contribute to human well-being and safety. Our primary goals are to reduce the time of crime investigation, minimise the time of incident response, and prevent future crimes using data collected from smart devices. This PhD thesis consists of three distinct but related projects to reach the research goal. The main contributions can be summarised as:
    • A multi-level access control framework, presented in Chapter 3, which could be used to secure any collected and shared data (a minimal policy-evaluation sketch follows this list). We chose this as our first contribution because it is not realistic to use data that could have been altered in our prediction model or as evidence. As an example for our framework we chose healthcare data collected from ambient sensors and uploaded to cloud storage, as this data is collected from multiple sources and used by different parties. The access control system regulates access to data by defining policy attributes over healthcare professional groups and data class classifications. The proposed access control system contains a policy model, an architecture model and a methodology to classify data classes and healthcare professional groups.
    • An investigative framework, discussed in Chapter 4, which contains a multi-phased process flow that coordinates different roles and tasks in IoT-related crime investigation. The framework identifies digital information sources and captures all potential evidence from smart devices in a way that guarantees the potential evidence is not altered, so that it can be admissible in a court of law.
    • A deep learning multi-view model, demonstrated in Chapter 5, that explores the relationship between tweets, weather (a type of sensory data) and crime rate for effective crime prediction. This contribution is motivated by the need to deploy police forces correctly so that they are present at the right times.
    Both the proposed investigative framework and the predictive model were evaluated and tested, and the results of these evaluations are presented in the thesis. The proposed framework and model contribute significantly to the fields of crime investigation and crime prediction. We believe their application would provide evidence with higher admissibility, more efficient investigations, and optimal law enforcement deployment based on crime rate prediction using collected sensory data.
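
    As referenced in the first contribution above, here is a minimal sketch of attribute-based policy evaluation: policies map healthcare professional groups and data classes to permitted actions, with deny-by-default semantics. The group, class and action names are invented for illustration, not taken from the thesis.

    ```python
    # Illustrative policy table: (professional group, data class) -> allowed actions.
    POLICIES = {
        ("physician", "clinical_record"): {"read", "write"},
        ("nurse", "clinical_record"): {"read"},
        ("researcher", "anonymised_record"): {"read"},
    }

    def is_permitted(group: str, data_class: str, action: str) -> bool:
        """Deny by default; allow only actions explicitly granted to the group."""
        return action in POLICIES.get((group, data_class), set())

    print(is_permitted("nurse", "clinical_record", "read"))       # True
    print(is_permitted("nurse", "clinical_record", "write"))      # False
    print(is_permitted("researcher", "clinical_record", "read"))  # False
    ```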