Engineering

Publication Search Results

  • (2001) Corkish, Richard; Altermatt, Pietro P.; Heiser, Gernot
    Journal Article
    Three-dimensional numerical simulations of electron-beam-induced current (EBIC) near a vertical silicon grain boundary are presented. They are compared with an analytical model which neglects carrier generation outside the bulk base region of a solar cell structure. We demonstrate that, in a wide range of solar cell structures, recombination in the space charge region (SCR) significantly affects the EBIC results and hence needs to be included in the data evaluation. In addition, simulations of a realistic silicon solar cell structure (thick emitter, field-dependent mobility, etc.) are presented.

  • (1998) Bradley, Peter; Rozenfeld, Anatoly; Lee, Kevin; Jamieson, Dana; Heiser, Gernot; Satoh, S
    Journal Article
    The first results obtained using an SOI device for microdosimetry applications are presented. Microbeam and broadbeam spectroscopy methods are used to determine minority carrier lifetime and radiation damage constants. A spectroscopy model is presented which includes the majority of effects that impact spectral resolution. Charge collection statistics were found to substantially affect spectral resolution, and lateral diffusion effects significantly complicate charge collection.

  • (1995) Heiser, Gernot; Altermatt, Peter; Williams, Angela-Margaret; Sproul, Alistair; Green, Martin
    Conference Paper
    This paper describes the use of three-dimensional (3D) device modelling for the optimisation of the rear contact geometry of high-efficiency silicon solar cells. We describe the techniques and models used as well as their limitations, and contrast our approach with previously published 3D studies of high-efficiency silicon solar cells. Results show that the optimum spacing is about 2/3 of that predicted by 2D simulations, with a much stronger dependence on contact spacing than 2D modelling suggests. The optimal value found is about 60% of that of the present UNSW PERL cells; however, the possible efficiency gain is only about 0.1% absolute.

  • (2000) Cotera, Angela; Simpson, John; Erickson, E; Colgan, Sean; Burton, Michael; Allen, David
    Journal Article

  • (2007) Zhu, Liming; Ali Babar, Muhammad; Staples, Mark; Nonaka, Makoto
    Book Chapter
    The possible variability of project delay is useful information for understanding and mitigating project delay risk. However, it is not sufficiently considered in the literature on effort estimation and simulation in software product line development. In this paper, we propose a project delay simulation model that introduces a random variable to represent the variability of adaptive rework. The model has been validated through stochastic simulations, by comparing the generated adaptive rework against an actual change effort distribution and by sensitivity analysis. The results show that the proposed model is capable of producing reasonable variability of adaptive rework and, consequently, variability of project delay. Analysis of our model indicates that, for the studied simulation settings, the strength of dependency has a larger impact than the number of residual defects. However, high levels of adaptive rework variability did not have a great impact on overall project delay.
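    A minimal Monte Carlo sketch of this idea, under illustrative assumptions (an exponential distribution for the size of each piece of adaptive rework, and hypothetical parameter names such as dependency_strength and residual_defects; the paper's actual model and distributions are not reproduced here): each residual defect triggers adaptive rework of random size, so repeated simulation runs yield a distribution of project delay rather than a single point estimate.

        import random

        def simulate_delay(n_tasks=100, base_effort=5.0, dependency_strength=0.3,
                           residual_defects=20, runs=1000, seed=42):
            """Return the median and 95th-percentile project delay (as % of planned
            effort) over many stochastic runs. All parameters are illustrative
            assumptions, not values from the paper."""
            rng = random.Random(seed)
            delays = []
            for _ in range(runs):
                rework = 0.0
                for _ in range(residual_defects):
                    # Adaptive rework per defect is a random variable; stronger
                    # dependencies amplify the ripple effect of each change.
                    rework += rng.expovariate(1.0 / base_effort) * (1.0 + dependency_strength)
                planned = n_tasks * base_effort
                delays.append(100.0 * rework / planned)
            delays.sort()
            return delays[len(delays) // 2], delays[int(len(delays) * 0.95)]

        median, p95 = simulate_delay()
        print(f"median delay {median:.1f}% of planned effort, 95th percentile {p95:.1f}%")

    Varying dependency_strength and residual_defects in such a sketch is the kind of sensitivity analysis the abstract refers to.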

  • (2021) Gnanasambandapillai, Vikkitharan
    Thesis
    “The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition” [56]. Genomics (the study of an organism's entire DNA) provides such a standard of health for people with rare diseases and helps control the spread of pandemics. Still, millions of human beings are unable to access genomics because of its cost and lack of portability. In genomics, DNA sequencers digitise DNA information, and computers analyse the digitised information. Desktop and thumb-sized DNA sequencers exist that digitise DNA data rapidly, but the computations necessary for analysing this data are inevitably performed on high-performance computers (HPCs) and cloud computers. These computations not only require powerful computers but also necessitate high-speed networks, since the data generated run to hundreds of gigabytes. Relying on HPCs and high-speed networks denies the benefits of genomics to the many who live in remote areas and in poorer nations. A low-cost and portable genomics computation platform would enable personalised treatment based on an individual's DNA and help identify the source of fast-spreading epidemics in remote areas and in areas without HPC or network infrastructure. However, developing such a low-cost and portable genome analysis platform is a challenging task. This thesis develops novel computer architecture solutions to assemble the whole human genome and the COVID-19 virus RNA on a low-cost and portable platform. The first phase of the solution describes a ring-pipelined processor architecture for a key genome assembly algorithm. The human genome is partitioned to fit into the small memory footprint of embedded processors. These techniques allow an entire human genome to be assembled using highly portable and low-cost embedded processor cores that can be housed within a single chip. Each processor occupies only 0.08 mm², consumes just 37.5 mW, and has only 2 GB of memory, a 32-bit instruction width, and a 1 GHz clock. The second phase of the solution describes how application-specific instruction-set processors can be sped up to execute a key genome assembly algorithm. A fully automated design system is presented which improves the performance of large applications (such as the genome assembly algorithm) and generates application-specific instructions for a commercial processor design tool (Xtensa). The tool enhances the base processor used in the ring-pipelined architecture; as a result, the alignment algorithms execute 2.1 times faster with only 11% additional hardware, and the energy-delay product is reduced by 7.3× compared to the base processor. This tool is the only one of its type that can handle such large applications. The third phase of the solution designs a portable low-cost genome assembly computer (PGA). PGA enhances the ring-pipelined architecture with the customised processor from the second phase and with improved inter-processor communication. The results show that the COVID-19 virus RNA can be assembled in under 10 minutes and the whole human genome in 11 days on a portable platform at 30× coverage (HPCs take around two days). PGA has an area footprint of just 5.68 mm² in a 28 nm technology node, far smaller than a high-performance computer processor chip. The PGA consumes only 4 W of power, which is lower than the power requirement of a high-performance processor chip, and its manufacturing cost would also be much lower than that of a high-performance system when produced in volume. The developed solution can be powered by a laptop's USB port. This thesis is the first of its type to show the design of a single-chip solution able to process a complex genomic problem, and it contributes to attaining one of the fundamental rights of every human being wherever they may live.
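    The partitioning step described above can be illustrated with a small sketch (not the thesis's actual scheme): k-mers extracted from the reads are hashed into bins sized to fit a per-core memory budget, so each bin can be processed independently by one embedded core. The 2 GB budget matches the figure quoted above; k, bytes_per_kmer, and the hashing rule are assumptions.

        import math

        def partition_kmers(reads, k=31, mem_budget_bytes=2 * 1024**3, bytes_per_kmer=16):
            """Hash k-mers into bins that each fit one core's memory budget."""
            max_kmers_per_bin = mem_budget_bytes // bytes_per_kmer
            # A rough estimate of the total number of k-mers decides the bin count.
            total_kmers = sum(max(len(r) - k + 1, 0) for r in reads)
            n_bins = max(1, math.ceil(total_kmers / max_kmers_per_bin))
            bins = [[] for _ in range(n_bins)]
            for read in reads:
                for i in range(len(read) - k + 1):
                    # Identical k-mers always hash to the same bin, so each bin
                    # can be assembled independently on its own processor core.
                    kmer = read[i:i + k]
                    bins[hash(kmer) % n_bins].append(kmer)
            return bins

        toy_bins = partition_kmers(["ACGTACGTACGTACGTACGTACGTACGTACGTACGT"], k=31)
        print(len(toy_bins), len(toy_bins[0]))  # one bin, six k-mers for this toy read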

  • (2021) Badami, Maisie
    Thesis
    This research explores strategies for automating the systematic literature review (SLR) process. An SLR is a valuable research method that follows a comprehensive, transparent, and reproducible methodology. SLRs are at the heart of evidence-based research in domains ranging from healthcare to software engineering. They allow researchers to systematically collect and integrate empirical evidence in response to a focused research question, setting the foundation for future research. SLRs also help researchers learn about the state of the art and enrich their knowledge of a research topic. Given their demonstrated value, SLRs are becoming an increasingly popular type of publication in different disciplines. Despite their valuable contributions to science, performing timely, reliable, comprehensive, and unbiased SLRs is a challenging endeavour. With the rapid growth in primary research published every year, SLRs risk providing incomplete coverage of existing evidence and may even be outdated by the time of publication. These challenges have sparked motivation and discussion in research communities to explore automation techniques that support the SLR process. In investigating automatic methods for supporting the systematic review process, this thesis develops three main areas. First, by conducting a systematic literature review, we established the state of the art in automation techniques employed to facilitate the systematic review process. Second, through an empirical study, we identified the real challenges researchers face when conducting SLRs, the solutions that help them overcome these challenges, and their concerns about adopting automation techniques in SLR practice. Finally, in the third study, we leveraged the findings of the previous studies to investigate a solution that facilitates the SLR search process, and we evaluated the proposed method through experiments.

  • (2022) Long, Alexander
    Thesis
    Deep Neural Networks are limited in their ability to access and manipulate external knowledge after training. This capability is desirable: information access can be localized for interpretability, the external information itself can be modified to improve editability, and external systems can be used for retrieval and storage, freeing up internal parameters that would otherwise be required to memorize knowledge. This dissertation presents three such approaches that augment deep neural networks with various forms of external memory, achieving state-of-the-art results across multiple benchmarks and sub-fields. First, we examine the limits of retrieval alone in the sample-efficient Reinforcement Learning (RL) setting. We propose a method, NAIT, that is purely memory-based yet achieves performance comparable with the best neural models on the ATARI100k benchmark. Because NAIT does not make use of parametric function approximation and instead approximates only locally, it is extremely computationally efficient, reducing the run-time of a full sweep over ATARI100k from days to minutes. NAIT provides a strong counterpoint to the prevailing notion that retrieval-based lazy learning approaches are too slow to be practically useful in RL. Next, we combine the promising non-parametric retrieval approach of NAIT with large image and text encoders for the task of Long-Tail Visual Recognition. This method, Retrieval Augmented Classification (RAC), achieves state-of-the-art performance on the highly competitive long-tail datasets iNaturalist2018 and Places365-LT. This work is one of the first systems to effectively combine parametric and non-parametric approaches in Computer Vision. Most promisingly, we observe that RAC's retrieval component achieves its highest per-class accuracies on sparse, infrequent classes, indicating that non-parametric memory is an effective mechanism for modelling the 'long tail' of world knowledge. Finally, we move beyond standard single-step retrieval and investigate multi-step retrieval over graphs of sentences for the task of Reading Comprehension. We first propose a mechanism to effectively construct such graphs from collections of documents, and then learn a general traversal policy over such graphs, conditioned on the query. We demonstrate that combining this retriever with existing models both consistently boosts accuracy and reduces training time by 2-3x.
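    A rough sketch of the parametric plus non-parametric fusion that RAC-style systems use, under assumed shapes and a simple similarity-weighted k-NN vote (the encoders, fusion rule, and hyperparameters of the actual method are not specified here):

        import numpy as np

        def fused_scores(query_emb, memory_embs, memory_labels, parametric_logits,
                         k=10, n_classes=1000, alpha=0.5):
            """Blend a parametric classifier head with a k-NN retrieval head."""
            # Cosine similarity between the query and every stored embedding.
            sims = memory_embs @ query_emb / (
                np.linalg.norm(memory_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
            top = np.argsort(-sims)[:k]
            # Retrieval head: similarity-weighted votes for the neighbours' labels.
            retrieval = np.zeros(n_classes)
            for idx in top:
                retrieval[memory_labels[idx]] += sims[idx]
            # In practice the two heads would be normalised/calibrated before blending.
            return alpha * parametric_logits + (1 - alpha) * retrieval

        rng = np.random.default_rng(0)
        scores = fused_scores(rng.normal(size=128), rng.normal(size=(500, 128)),
                              rng.integers(0, 1000, size=500), rng.normal(size=1000))
        print(int(scores.argmax()))

    The retrieval head needs no training, which is why such a memory can cover sparse, infrequent classes that a parametric head sees too rarely to fit well.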

  • (2023) Iyer, Sankaran
    Thesis
    Vertebral compression fractures (VCF) often go undetected in radiology images, potentially leading to secondary fractures, permanent disability, or even death. The objective of this thesis is to develop a fully automated method for detecting VCF in incidental CT images acquired for other purposes, thereby facilitating better follow-up and treatment. The proposed approach is based on 3D localisation in CT images, followed by VCF detection in the localised regions. The 3D localisation algorithm combines deep reinforcement learning (DRL) with imitation learning (IL) to extract thoracic/lumbar spine regions from chest/abdomen CT scans. The algorithm generates six bounding boxes as Regions of Interest (ROI) using three different CNN models, with an average Jaccard Index (JI)/Dice Coefficient (DC) of 74.21%/84.71%. The extracted ROIs were then divided into slices, and the slices into patches, to train four convolutional neural network (CNN) models for VCF detection at the patch level. The predictions from the patches were aggregated at bounding-box level, and majority voting was performed to decide on the presence or absence of VCF for a patient. The best performing model was a six-layered CNN, which together with majority voting achieved a threefold cross-validation accuracy/F1 score of 85.95%/85.94% on 308 chest scans. The same model also achieved a fivefold cross-validation accuracy/F1 score of 86.67%/87.04% on 168 abdomen scans. Because of the success of the 3D localisation algorithm, it was also trained on other abdominal organs, namely the spleen and the left and right kidneys, with promising results. The 3D localisation algorithm was enhanced to work with fused bounding boxes and also in semi-supervised mode to reduce the annotation time required of radiologists. Experiments using three different proportions of labelled and unlabelled data achieved fairly good performance, although not as good as the fully supervised equivalents. Finally, VCF detection in a weakly supervised multiple instance learning (MIL) setting was performed to further reduce radiologists' annotation time, together with majority voting on the six bounding boxes. The best performing model was the six-layered CNN, which achieved a threefold cross-validation accuracy/F1 score of 81.05%/80.74% on 308 thoracic scans, and a fivefold cross-validation accuracy/F1 score of 85.45%/86.61% on 168 abdomen scans. Overall, the results are comparable to the state of the art, which used an order of magnitude more scans.
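    The patch-to-box-to-patient aggregation described above can be sketched as nested majority votes; the thresholds, patch counts, and exact voting rule below are illustrative assumptions rather than the thesis's reported configuration.

        from collections import Counter

        def patient_has_vcf(patch_probs_per_box, patch_threshold=0.5):
            """Aggregate patch-level CNN probabilities to box-level decisions,
            then majority-vote across the (six) boxes for the patient call."""
            box_votes = []
            for patch_probs in patch_probs_per_box:          # one list per bounding box
                positives = sum(p >= patch_threshold for p in patch_probs)
                box_votes.append(positives > len(patch_probs) / 2)   # box-level majority
            counts = Counter(box_votes)
            return counts[True] > counts[False]               # patient-level majority

        # Six bounding boxes, each with patch-level fracture probabilities.
        boxes = [[0.9, 0.8, 0.4], [0.2, 0.1, 0.3], [0.7, 0.6, 0.9],
                 [0.8, 0.9, 0.7], [0.1, 0.2, 0.4], [0.6, 0.7, 0.8]]
        print(patient_has_vcf(boxes))  # True: four of the six boxes vote "fracture"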

  • (2021) Jia, Hong
    Thesis
    Enabling mobile devices to perform in-the-wild sports analytics, particularly swing tracking, remains an open problem. A crucial challenge is to develop robust methods that can operate across various sports (e.g., golf and tennis), different sensors (cameras and IMUs), and diverse human users. Traditional approaches typically rely on vision-based or IMU-based methods to extract key points from subjects in order to estimate trajectory predictions. However, these methods struggle to produce accurate swing tracking, as vision-based techniques are susceptible to occlusion and IMU sensors are notorious for accumulated errors. In this thesis, we propose several innovative solutions that leverage AIoT: IoT in the form of ubiquitous wearable devices such as smartphones and smart wristbands, combined with the power of AI such as deep neural networks, to achieve ubiquitous sports analytics. We make three main technical contributions: a tailored deep neural network design, automatic network model search, and model domain adaptation, to address the heterogeneity of devices, human subjects, and sports in ubiquitous sports analytics. In Chapter 2, we begin with the design of a prototype that combines IMU and depth sensor fusion, along with a tailored deep neural network, to address the occlusion problems faced by depth sensors during swings. To recover swing trajectories with fine-grained detail, we propose a CNN-LSTM architecture that learns multiple modalities from the fused depth and IMU data. In Chapter 3, we develop a framework to reduce the overhead of model design for new devices, sports, and human users. By designing a regression-based stochastic NAS method, we improve swing-tracking algorithms through automatic model generation. We also extend our studies to unseen human users, sensor devices, and sports: leveraging a domain adaptation method, we propose a framework that eliminates the need for tedious training data collection and labelling for new users, devices, and sports via adversarial learning. In Chapter 4, we present a framework that alleviates the model parameter selection process in the NAS approach introduced in Chapter 3. By employing zero-cost proxies, we search for the optimal swing-tracking architecture without training, over a significantly larger pool of candidate models. We demonstrate that the proposed method outperforms state-of-the-art approaches in swing tracking, as well as in adapting to different subjects, sports, and devices. Overall, this thesis develops a series of innovative machine learning algorithms that enable ubiquitous IoT wearable devices to perform accurate swing analytics (e.g., tracking, analysis, and assessment) in real-world conditions.
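    A minimal PyTorch sketch of the kind of CNN-LSTM fusion described for Chapter 2: a small CNN encodes each depth frame, the per-frame features are concatenated with the synchronised IMU readings, and an LSTM regresses a 3D trajectory point per time step. Layer sizes, input shapes, and the output dimensionality are assumptions for illustration, not the thesis's architecture.

        import torch
        import torch.nn as nn

        class SwingFusionNet(nn.Module):
            """Hypothetical CNN-LSTM fusing depth frames with IMU readings."""
            def __init__(self, imu_dim=6, hidden=128, out_dim=3):
                super().__init__()
                # Small CNN encoder applied to each depth frame independently.
                self.depth_cnn = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # LSTM consumes concatenated depth + IMU features over the swing.
                self.lstm = nn.LSTM(32 + imu_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, out_dim)  # per-step 3D trajectory point

            def forward(self, depth_seq, imu_seq):
                # depth_seq: (B, T, 1, H, W); imu_seq: (B, T, imu_dim)
                b, t = depth_seq.shape[:2]
                feats = self.depth_cnn(depth_seq.flatten(0, 1)).view(b, t, -1)
                fused, _ = self.lstm(torch.cat([feats, imu_seq], dim=-1))
                return self.head(fused)  # (B, T, 3) predicted trajectory

        model = SwingFusionNet()
        traj = model(torch.randn(2, 50, 1, 64, 64), torch.randn(2, 50, 6))
        print(traj.shape)  # torch.Size([2, 50, 3])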