Engineering

Publication Search Results

  • (2022) Chen, Yuhui
    Thesis
    Ride-sourcing services are rapidly spreading around the world. A ride-sourcing service is a point-to-point on-demand ride service operated by companies that organize and coordinate drivers using their own vehicles to provide passengers with rides. How ride-sourcing services and public transport interact with each other, and the system-wide impacts of that interaction, have not received sufficient attention. This thesis extends the literature by proposing multi-class, multi-modal traffic assignment models to optimize the transport system in the presence of ride-sourcing and public transport services. The first part of the thesis develops a stylized model on a simple network with a single origin-destination pair in order to analytically examine the mode choice behavior of travelers and the operation strategies of a public transport operator and a ride-sourcing operator. In such a multi-modal system, users may travel by bus, train, or ride-sourcing service. In particular, we develop a tractable bi-level model: the lower level quantifies user-equilibrium travel choices, formulated as a variational inequality problem, while the upper level optimizes the operation strategies of the public transport operator, which aims to minimize total system cost, and the ride-sourcing operator, which aims to maximize its profit. The existence and uniqueness of the multi-modal travel choice equilibrium are also analyzed. How the operation decision variables affect users' mode choices and system performance is investigated both analytically and numerically. The second part of the thesis extends the stylized model to a general network model, which also includes solo-driving and multiple OD pairs to depict a more realistic problem setting. The general network model is applied to a case study in the context of Sydney, and existence and uniqueness of equilibrium are again investigated. The Frank-Wolfe method combined with diagonalization is applied to generate numerical solutions, illustrate the analytical observations, and develop further insight; a sketch of this scheme is given below. The results show that, under appropriate operating strategies of the public transport operator and the ride-sourcing operator, the total system cost can be reduced while the profit of the ride-sourcing company is increased.
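    As an illustration of the numerical scheme mentioned above, here is a minimal sketch of Frank-Wolfe for a single-class user equilibrium with BPR link costs. The network, demands, and parameter values are placeholders, and the diagonalization step that handles the asymmetric multi-modal interactions in the thesis is omitted for brevity.

    ```python
    import networkx as nx

    def bpr_cost(flow, fftt, cap, a=0.15, b=4.0):
        """Standard BPR link travel-time function (illustrative parameters)."""
        return fftt * (1.0 + a * (flow / cap) ** b)

    def all_or_nothing(G, demand, costs):
        """Load each OD pair's demand onto its current shortest path."""
        nx.set_edge_attributes(G, costs, "w")
        aon = dict.fromkeys(G.edges, 0.0)
        for (o, d), q in demand.items():
            path = nx.shortest_path(G, o, d, weight="w")
            for e in zip(path[:-1], path[1:]):
                aon[e] += q
        return aon

    def frank_wolfe(G, demand, fftt, cap, iters=100):
        """User-equilibrium link flows via Frank-Wolfe with the 2/(k+2) step."""
        flow = dict.fromkeys(G.edges, 0.0)
        for k in range(iters):
            costs = {e: bpr_cost(flow[e], fftt[e], cap[e]) for e in G.edges}
            target = all_or_nothing(G, demand, costs)  # linearised subproblem
            step = 2.0 / (k + 2.0)
            flow = {e: flow[e] + step * (target[e] - flow[e]) for e in G.edges}
        return flow
    ```

    In the diagonalized variant, the asymmetric cross-mode cost interactions are frozen at each outer iteration so that each inner problem is a separable assignment solvable exactly as above.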

  • (2023) Gu, Xinyi
    Thesis
    As the first family of error correction codes theoretically proven to achieve channel capacity under the successive cancellation (SC) decoder as the code length tends to infinity, polar codes are regarded as a milestone in information and coding theory. However, at finite code lengths, the SC decoder does not provide satisfactory error correction performance compared to other codes such as low-density parity-check (LDPC) codes. To overcome this weakness, various polar code constructions and decoding algorithms have been proposed. In this thesis, we first survey the significant developments in the field of polar coding, covering 1) major signal-to-noise ratio (SNR)-dependent code constructions as well as universal reliability sequences, and 2) decoding algorithms including SC, SC list (SCL), cyclic redundancy check-aided SCL (CA-SCL), belief propagation (BP), and soft cancellation (SCAN) decoders. This study analyzes the performance and computational complexity of the available approaches to improving polar codes. We then turn our focus to the recently proposed polarization-adjusted convolutional (PAC) codes. These promising codes can further enhance performance under (near) maximum likelihood (ML) decoders such as sequential decoders and sphere decoders. Convolutional precoding enables PAC codes to reduce the number of minimum-weight codewords of polar codes. However, this convolutional precoding cannot be employed with sphere decoding directly, because sphere decoding proceeds in the direction that the lower-triangular shape of the polar transform demands. We therefore propose a selective reverse convolutional precoding scheme that reduces the error coefficient while avoiding the reduction in minimum distance due to concatenation. The numerical results show that the proposed scheme can reduce the code's error coefficient significantly, improving the block error rate of polar codes under sphere decoding by up to 0.5 dB. Moreover, the hardware design of a decoding algorithm is considered. The memory required for intermediate information in decoding algorithms occupies a large silicon area, in particular for BP decoding. As an alternative to uniform quantization, we suggest a non-uniform quantization scheme that reduces the decoder's memory requirement and improves its performance. To evaluate this approach, we design a field programmable gate array (FPGA)-based hardware architecture for the BP decoder, with a lookup table-based architecture for the non-uniform quantization scheme to preserve throughput. The design is verified on a development board, and the numerical results confirm the expected performance improvement alongside the reduced memory requirement.
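    For background, the following is a minimal sketch of the polar transform (without the bit-reversal permutation) and the min-sum check/variable-node updates at the heart of SC-style decoders; frozen-set selection and the full SC recursion are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def polar_encode(u):
        """Polar transform x = u F^(tensor n) over GF(2), no bit-reversal.
        u: array of 0/1 bits whose length is a power of two."""
        x = np.array(u, dtype=np.uint8)
        step = 1
        while step < len(x):
            for i in range(0, len(x), 2 * step):
                # butterfly: upper half absorbs the lower half (XOR)
                x[i:i + step] ^= x[i + step:i + 2 * step]
            step *= 2
        return x

    def f_minsum(a, b):
        """Check-node LLR update (min-sum approximation)."""
        return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

    def g_update(a, b, u):
        """Variable-node LLR update given the decoded partial-sum bit u."""
        return b + (1 - 2 * u) * a
    ```

    An SC decoder applies f_minsum and g_update recursively across the log2(N) stages of this butterfly structure, making hard decisions only on the non-frozen positions.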

  • (2022) Wang, Zishan
    Thesis
    Eye movement detection, separating eye positions into distinct oculomotor events such as saccades and fixations, has been associated with cognitive load classification, the process of estimating the mental effort involved in a given task. However, three questions remain to be answered for wearable applications: (i) will algorithms originally developed for fixation and saccade detection from gaze positions give similar accuracy from pupil center positions, particularly when the head is not fixed?; (ii) how much can cognitive load classification performance be improved by separating fixations and saccades?; and (iii) will fixation- and saccade-related measures be affected by differing cognitive load processes arising from diverse task designs? For the first research question, three representative saccade detection algorithms are applied to both pupil center positions and gaze positions collected with and without head movement, and their performance is evaluated against a stimulus-based ground truth under different measures. Results from a novel dataset recorded using wearable infrared cameras indicate that saccade/fixation detection using pupil center positions generally provides better performance than using gaze positions, with an 8.6% improvement in Cohen's Kappa. For the second and third research questions, statistical tests of several pupil-related measures extracted from all samples, fixation-only samples, and saccade-only samples are evaluated across cognitive load levels; these indicate that pupil-related measures from fixation-only samples can substitute for those from all samples in distinguishing different levels of cognitive load. Statistical tests of several fixation- and saccade-related measures across two task types show that both the ability of such measures to distinguish cognitive load levels and their trends across those levels differ under different cognitive load processes. Furthermore, for cognitive load classification systems trained with and without fixation- and saccade-related features, accuracy can be improved by 14.0%-23.4% for a random forest classifier across two task types by including such features. In general, this thesis contributes to fixation- and saccade-based cognitive load classification research by demonstrating that pupil center positions can be used as an alternative to gaze positions for fixation and saccade detection in a wearable context, and that fixation/saccade separation can improve cognitive load classification performance.
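    The thesis evaluates three representative detection algorithms; as a flavour of the simplest family, here is a minimal velocity-threshold (I-VT) sketch that works identically on gaze or pupil-centre coordinates. The default threshold and input units are assumptions for illustration, not the thesis's settings.

    ```python
    import numpy as np

    def ivt_labels(positions, timestamps, vel_thresh=100.0):
        """Label each sample saccade (1) or fixation (0) by point-to-point speed.
        positions: (N, 2) gaze or pupil-centre coordinates; timestamps: (N,) s.
        vel_thresh is in position units per second (illustrative default)."""
        disp = np.linalg.norm(np.diff(positions, axis=0), axis=1)
        vel = disp / np.diff(timestamps)           # per-sample speed
        labels = (vel > vel_thresh).astype(int)
        return np.concatenate(([labels[0]], labels))  # pad to original length
    ```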

  • (2022) Zhao, Gengda
    Thesis
    Bipartite graphs are extensively used to model relationships between two different types of entities. In many real-world bipartite graphs, relationships are naturally uncertain due to reasons such as data noise, measurement error, and imprecision of data, leading to uncertain bipartite graphs. In this thesis, we propose the (α, β, η)-core model, the first cohesive subgraph model on uncertain bipartite graphs. To capture the uncertainty of relationships/edges, the η-degree is adopted to measure a vertex's engagement level: it is the largest integer k such that the probability of the vertex having at least k neighbors is no less than η (a short sketch of this computation follows below). Given degree constraints α and β and a probability threshold η, the (α, β, η)-core requires each vertex on the upper or lower level to have η-degree no less than α or β, respectively. An (α, β, η)-core can be derived by iteratively removing a vertex whose η-degree is below the degree constraint and updating the η-degrees of its neighbors; this incurs prohibitively high cost due to the η-degree computation and updating, and it does not scale to large bipartite graphs. This motivates us to develop index-based approaches. We propose a basic full index that stores the (α, β, η)-core for all possible combinations of α, β, and η, thus supporting optimal retrieval of the vertices in any (α, β, η)-core. Due to its long construction time and high space complexity, we further propose a probability-aware index that balances time and space costs. To build the probability-aware index efficiently, we design a bottom-up index construction algorithm and a top-down index construction algorithm. Extensive experiments on real-world datasets with edge probabilities generated under different distributions show that (1) the (α, β, η)-core is an effective model, and (2) the proposed techniques significantly speed up index construction and query processing.
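    A minimal sketch of computing one vertex's η-degree under the independent-edge assumption the model implies, using the standard Poisson-binomial dynamic program; function and variable names are illustrative.

    ```python
    def eta_degree(edge_probs, eta):
        """Largest k such that P(at least k incident edges exist) >= eta,
        assuming independent edge existence (Poisson-binomial tail via DP)."""
        n = len(edge_probs)
        dp = [1.0] + [0.0] * n            # dp[j] = P(exactly j edges exist so far)
        for p in edge_probs:
            for j in range(n, 0, -1):     # update in place, highest count first
                dp[j] = dp[j] * (1.0 - p) + dp[j - 1] * p
            dp[0] *= (1.0 - p)
        tail = 0.0
        for k in range(n, -1, -1):        # scan tail probabilities top-down
            tail += dp[k]
            if tail >= eta:
                return k                  # first hit is the largest such k
        return 0

    # e.g. eta_degree([0.9, 0.8, 0.5], 0.6) == 2, since P(>=2 edges) = 0.85
    ```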

  • (2023) Lee, Minwoo
    Thesis
    Due to their unique photovoltaic properties and ease of fabrication, organic-inorganic halide perovskites have generated considerable research interest. By tuning the bandgap, perovskite solar cells can serve many applications; Internet of Things (IoT) devices and tandem solar cells, in particular, require wide-bandgap perovskite solar cells. However, wide-bandgap perovskite solar cells suffer from band-alignment mismatch, which leads to charge recombination at the perovskite interface, lowering device performance and degrading device stability. The first part of this thesis reviews the structure and working mechanism of perovskite solar cells and explains how the majority of defects in perovskites form. The defect tolerance of perovskites is attributed to shallow defect energies within the bandgap, a low density of deep traps, and small capture cross-sections for the interaction between traps and charges. The thesis then explains why wide-bandgap perovskites suit indoor applications: previous work on tuning the band alignment between the perovskite and the hole transport layer improved hole-transfer efficiency, resulting in high device performance under low light intensity. Lastly, the experimental work of the thesis addresses the band-alignment mismatch by adding a two-dimensional (2D) BA2PbBr4 perovskite layer between the electron transport layer (ETL) and the perovskite layer to exploit the tunnelling effect. The 2D tunnelling layer improved the 3D perovskite crystal quality and the charge transport from the 3D perovskite to the ETL. As a result, the power conversion efficiency under 200 lux white light-emitting diode (LED) illumination, relevant to IoT devices, reached 43.70% with an open-circuit voltage of around 1 V, and device stability was maintained under 1000 lux white LED illumination for up to 1200 hours.

  • (2023) Selvadoss, Samuel
    Thesis
    Hollow fibre (HF) membrane modules implemented in submerged membrane bioreactors (MBR) and pressurised applications have been widely accepted both for wastewater treatment and for polishing wastewater treatment plant (WWTP) effluents. Further innovation in membrane technologies and wastewater treatment market competitiveness, however, is restricted by high manufacturing and operational costs, with a trade-off between membrane system design and filtration performance. In the current work, the effects of HF length, physical characteristics, and system fouling mitigation techniques were investigated to further optimize filtration performance. Three experimental approaches were considered: (1) small-scale filtration experiments with various HF membrane lengths and fibre dimensions; (2) the development of theoretical filtration models and the assessment of filtration simulations; and (3) pilot-scale filtration performance of prototype large-scale membrane modules in wastewater. Two mathematical models for constant transmembrane pressure (TMP) filtration using dead-end HF membranes were developed, based on the Darcy friction factor and on the Hagen–Poiseuille model, respectively; a sketch of the latter appears below. The models allowed the overall theoretical lumen pressure drop, local flux distributions, and overall filtration performance to be studied extensively. Laboratory-scale filtration experiments using HF membranes of different lengths (0.5–2.0 m) were undertaken to demonstrate the influence of lumen pressure drop on overall filtration performance. Though greater permeate volumes were obtained from modules prepared with longer HF membranes, such systems experienced greater lumen pressure loss. These losses reduced the effectiveness of the operating TMP, resulting in greater non-uniformity of local fluxes along the length of the HF membranes. The magnitude of the losses and the degree of non-uniformity in such longer systems were studied extensively, allowing the identification of effective loss reduction techniques, such as incorporating HF membranes with larger inner diameters (ID) in the membrane modules. Pilot-scale investigations were undertaken to evaluate the influence of HF length on overall performance with real wastewater feeds. Prototype full-scale modules were prepared with HF membranes of different lengths (1.6–2.0 m) and IDs. Longer modules demonstrated greater filtration performance, as the increased lumen pressure drop due to longer fibres was effectively offset by the enhanced fibre dimensions. Overall, the results presented in this study reveal a significant interplay between module design (including length, packing density, slack, and fibre size), filtration process design (feedwater quality, biomass concentration, aeration rate, aeration/shear efficiency), and the critical flux (or threshold flux) conditions. In conclusion, the incorporation of longer HF membranes in pressurised and submerged MBR modules is a promising innovation that offers enhanced filtration capabilities.
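    A minimal sketch of the Hagen–Poiseuille-type calculation referenced above: the lumen pressure profile of a dead-end fibre is found by fixed-point iteration between the local wall flux and the pressure drop of the accumulating lumen flow. The geometry simplifications (ID-based wall area, constant viscosity) and all parameter names are assumptions for illustration.

    ```python
    import numpy as np

    def lumen_profile(L, d, Lp, mu, tmp, n=200, iters=50):
        """Lumen pressure along a dead-end hollow fibre (sealed at z=0,
        open at z=L, permeate side taken as 0 gauge).
        Units: L, d [m]; Lp [m/(s*Pa)]; mu [Pa*s]; tmp [Pa]."""
        dz = L / n
        p = np.zeros(n + 1)                      # node pressures, p[-1] = 0
        for _ in range(iters):
            p_mid = 0.5 * (p[:-1] + p[1:])
            # permeate gained per segment: wall area * permeability * driving force
            j_seg = Lp * np.maximum(tmp - p_mid, 0.0) * np.pi * d * dz
            q = np.cumsum(j_seg)                 # lumen flow toward the open end
            dp = 128.0 * mu * q * dz / (np.pi * d ** 4)  # Hagen-Poiseuille drop
            p = np.concatenate([np.cumsum(dp[::-1])[::-1], [0.0]])
        return p, q[-1]                          # profile and total permeate flow
    ```

    The profile makes the non-uniformity discussed above explicit: lumen back-pressure is highest at the sealed end, so the local driving force and flux decay with distance from the open end, and more so for longer or narrower fibres.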

  • (2024) Hu, Xin
    Thesis
    With the convergence of vast amounts of data and advanced computing resources, machine learning has become a key tool in helping designers make informed decisions in early-stage design. However, the opacity of high-performance machine learning models, often called “black boxes”, makes it hard to understand their workings. Moreover, these models make predictions without explanations, which undermines designers' trust in and understanding of the predictions. Explainable AI is a growing field of research focused on creating human-understandable explanations that improve data, model, and post-hoc explainability. This dissertation proposes incorporating explainable AI into early-stage design. Specifically, the application of machine learning in early-stage design is investigated, highlighting the challenges posed by black-box models, particularly regarding data explainability, model transparency, and post-hoc explainability. By situating these challenges within the context of early-stage design, it is demonstrated how the lack of explainability undermines designers' trust in AI. This erosion of trust affects both the inherent and the perceived trustworthiness of AI, impeding effective collaboration. To enhance inherent trustworthiness, an explainable-AI-centric design framework that harnesses feature-based and data-based explainable AI is introduced. Through a customer segmentation case study, the capacity of explainable AI to boost AI efficacy and transparency is demonstrated: feature-based explanations aid in selecting features and understanding the model mechanism, while data-based explanations identify valuable datasets. Furthermore, to improve post-hoc explainability and thereby boost perceived trustworthiness, a comprehensive framework that synergizes knowledge graphs with ChatGPT is introduced. The customer segmentation case study shows how this explainable AI method, created by combining knowledge graphs with ChatGPT, generates more meaningful and contextually rich explanations. These explanations, which include domain-specific knowledge and model-specific information, enable designers to understand the underlying predictions more clearly and make well-informed decisions. In conclusion, this research advances the field by bringing explainable AI into design contexts and developing methods that enhance collaboration between designers and AI in early-stage design.
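    As a concrete instance of the feature-based explanations discussed above, here is a minimal sketch using scikit-learn's permutation importance on a stand-in classifier; the synthetic dataset and model are illustrative placeholders, not the thesis's customer-segmentation setup.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Stand-in data and model (the thesis uses a customer segmentation dataset)
    X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Importance = drop in held-out score when a feature's values are shuffled
    result = permutation_importance(model, X_te, y_te, n_repeats=20,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f}")
    ```

    Explanations of this kind support the feature-selection and model-mechanism use cases the framework describes; the data-based and knowledge-graph components are separate machinery.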

  • (2023) Bahman Rokh, Shahram
    Thesis
    The stochastic nature of microgrid (MG) elements makes optimal energy scheduling a nonlinear and complex problem in which system dynamics and operational constraints must be accurately modelled to capture the full characteristics of the system. The optimality of mathematical model-based optimisation methods is guaranteed only through continuous model observation and validation, and any modification to the network architecture or system dynamics requires re-modelling the network with updated operational constraints. The speed and efficiency of the decision-making process are therefore limited by ongoing intervention from experts. This research proposes a data-driven, model-free energy management system (EMS) optimisation technique based on reinforcement learning (RL). In the proposed method, the operational cost minimisation problem is formulated as a Markov Decision Process in which the EMS agent, without prior knowledge of the system model, learns through rewards and penalties an optimal policy that maximises the total reward (a tabular sketch of this learning loop is given below). The performance of the proposed model is benchmarked against a model-based mixed integer nonlinear programming method, and the results indicate that the RL approach provides near-optimal solutions across various case studies, with more than 93% accuracy relative to the mathematical benchmark. A two-stage decision-making framework is proposed next, in which “forecasting” and “optimisation” stages work in parallel to provide an autonomous and intelligent solution to the MG optimal energy scheduling problem. A long short-term memory (LSTM) network provides an accurate projection of the future stochastic state variables used in the RL decision-making process, which in turn reduces the complexity of the RL state space and improves learning efficiency and convergence time. The simulations indicate that the LSTM-based forecasting effectively predicts the uncertain network parameters, with mean absolute error ranging between 2.34 and 3.06 for renewable generation and load predictions, and 10.14 for volatile grid transaction prices. By implementing the proposed framework, this self-driving, real-time, data-driven EMS optimisation technique can reduce the computational burden on the central EMS, the dependency on an explicit model, and the need for domain expertise, whilst minimising the operational costs of the network through an autonomous and intelligent solution.
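    A minimal tabular Q-learning sketch of the learning loop described above. The environment interface, discretisation, and hyperparameters are assumptions for illustration; the thesis's EMS agent and state space are more elaborate.

    ```python
    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=500,
                   alpha=0.1, gamma=0.95, eps=0.1):
        """Tabular Q-learning for a discretised scheduling MDP.
        env is a hypothetical object exposing reset() -> state and
        step(action) -> (next_state, reward, done)."""
        Q = np.zeros((n_states, n_actions))
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # epsilon-greedy exploration over the action space
                a = (np.random.randint(n_actions) if np.random.rand() < eps
                     else int(Q[s].argmax()))
                s2, r, done = env.step(a)
                # TD update toward reward plus discounted best next-state value
                Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
                s = s2
        return Q
    ```

    In an EMS setting the reward would typically be the negative operational cost of the chosen dispatch, so maximising total reward minimises total cost.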

  • (2023) Xue, Ashley
    Thesis
    With increasing environmental awareness and demand for sustainable energy, green and renewable energy derived from inexhaustible natural sources, such as solar and wind, has attracted much interest. Wind energy, a key participant in the green energy market, is embraced by many countries as a replacement for traditional energy sources. Owing to its robust system structure and the economies of scale of mass production, wind power generation has a lower cost in the ever-growing market than most energy production based on other technical routes. Wind power facilities also offer benefits for land use and ecological protection, because they are mostly three-dimensional constructions with little impact on the local environment. Since wind turbines need to be installed in areas with unabated wind flow to maximise energy production, they are often placed on land with high annual average wind speed and long periods of usable wind, such as plateaus, mountains, and coasts. However, this also means the turbines operate in a harsher environment for considerable lengths of time; it is therefore crucial to maintain their operational safety, and the study of turbine components through numerical simulation is essential to balance cost and safety. In the past decades, the direct-drive wind turbine has been one of the most widely used designs. In recent years, however, the problem of ageing parts has surfaced in the earlier batches of direct-drive wind turbines, and the bearing has been identified as one of the most vulnerable parts. In a direct-drive wind turbine, the shaft temperature plays a significant role and should be considered a design parameter. The operation of the rotating shaft, fixed axle, and bearing is governed by the presence of lubrication and the proper alignment of the assembly. Nevertheless, as the bearing fatigues under long-term stress and strain, friction begins to increase, and the operating temperature of the bearing tends to rise. A rotating shaft at relatively high temperature heats the lubricating fluid and changes the physical properties of the lubricant. This creates a positive feedback loop in which the friction between shaft and bearing keeps increasing until the bearing eventually breaks down, causing failure of the direct-drive wind turbine system. Hence, it is imperative that the shaft temperature be kept as low as possible to prevent further degradation of the bearing. This thesis studies heat build-up in a wind turbine bearing, focusing primarily on the bearing structure, and investigates the use of fins to increase the surface area for heat removal. A fin is a typical structure that can be attached to the original component as a minor protrusion to enhance heat transfer from the rotating shaft to the environment (a classical stationary-fin estimate is sketched below). The mechanism of fin-enhanced heat transfer in a rotating structure is examined, and the feasibility of using fins to reduce the bearing temperature, as measured by the maximum temperature in the assembly, is assessed for design purposes. Moreover, an optimisation study of fin spacing and height under rotation is carried out, and the surface pressure on the fins, as well as the shear stress and maximum deformation caused by their rotation, are analysed.
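    For orientation, the classical adiabatic-tip estimate for a straight rectangular fin gives a first-order sense of the heat removal fins can provide; it assumes a stationary fin with uniform convection, so it ignores the rotation effects the thesis actually simulates, and all parameter names are illustrative.

    ```python
    import numpy as np

    def fin_heat_rate(h, k, t, w, L, theta_b):
        """Heat rate [W] from a straight rectangular fin with an adiabatic tip.
        h convection coeff [W/m^2 K], k conductivity [W/m K], t thickness [m],
        w width [m], L fin length [m], theta_b base excess temperature [K]."""
        P = 2.0 * (w + t)                 # fin perimeter
        A = w * t                         # cross-sectional area
        m = np.sqrt(h * P / (k * A))      # fin parameter
        return np.sqrt(h * P * k * A) * theta_b * np.tanh(m * L)
    ```

    The tanh(mL) factor captures the diminishing return of longer fins, which is why the thesis optimises fin height and spacing rather than simply maximising them.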

  • (2022) Al-Farsi, Mo
    Thesis
    Multijunction solar cells based on silicon are predicted to achieve an efficiency of 40-45% for a top cell with a band gap of 1.6-1.9 eV. However, there are currently no known materials with suitable band gaps able to deliver high efficiencies. Two classes of materials that have been proposed for top cells are alloys of CuGaSe2 and alloyed oxide perovskites. CuGaSe2 has a suitable band gap (1.68 eV) for a top cell on silicon, but the maximum efficiency achieved is only 11%, while that of the closely related CuInGaSe2 (band gap 1.14 eV) is 23.35%. The low efficiency of CuGaSe2 has been attributed to anti-site defects, so suppressing the formation of these defects is critical to achieving higher efficiencies. Most oxide perovskites, on the other hand, have band gaps that are too high (>2 eV) to be used as top cells on silicon, so strategies such as alloying are required to lower their band gaps. In this work, the effects of alloying CuGaSe2 with Ag, Na, K, Al, In, La, and S were investigated using Density Functional Theory (DFT) calculations. The band gaps of the alloyed compounds and the formation energies of anti-site defects were calculated to find alloying elements that increase the defect formation energy while maintaining the band gap (the standard supercell expression for such formation energies is sketched below). CuGaSe2 alloyed with Al at 50 at% showed the largest increase in defect formation energy relative to unalloyed CuGaSe2 (~0.20 eV), followed by Na (~0.15 eV) and S (~0.10 eV), both at 50 at%. However, the band gap of the Al alloy (~2.15 eV) is too high for a top cell, while those of the Na (~1.95 eV) and S (~1.91 eV) alloys are slightly above the upper limit. Thus, alloying with these elements is not an ideal route to significantly increasing the formation energy of anti-site defects while maintaining the band gap of CuGaSe2. Nevertheless, some of the factors that influence the defect formation energy were identified, potentially leading to design rules for future work: defect formation energies were found to be higher in structures with more positively charged Ga and more negatively charged Se atoms, and analysis of bond lengths revealed that shorter Ga and Se bonds correlate with higher defect formation energies. Band gaps of various alloyed oxide perovskites were also calculated using DFT: BiFeO3 was alloyed with Y and Sb; LaFeO3 with Cr and Sb; and YFeO3 with Bi and Sb. YFeO3 alloyed with Sb at 50 at% was found to have a band gap of 1.4-2.1 eV (depending on the basis set used), which is in the suitable range for a top cell.
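    For context, neutral defect formation energies of the kind compared above are typically computed with the textbook supercell expression (a standard form, not necessarily the exact setup used in the thesis):

    ```latex
    % Formation energy of a neutral defect D in a supercell calculation
    E_f[\mathrm{D}] \;=\; E_{\mathrm{tot}}[\mathrm{D}]
        \;-\; E_{\mathrm{tot}}[\mathrm{pristine}]
        \;-\; \sum_i n_i \mu_i
    ```

    Here n_i is the number of atoms of species i added (n_i > 0) or removed (n_i < 0) and mu_i the corresponding chemical potential; for a stoichiometry-preserving anti-site pair such as Cu_Ga + Ga_Cu, the chemical-potential term cancels because no atoms are exchanged with a reservoir.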