UNSW Canberra

Publication Search Results

Now showing 1 - 2 of 2
  • (2023) Mohamed, Ebrahim
    Models are becoming invaluable instruments for comprehending and resolving the problems that originate from the interactions between humans, mainly their social and economic systems, and the environment. These interactions between the three systems, i.e. the socio-economic-natural systems, lead to evolving systems that are notorious for being extremely complex, having potentially conflicting goals, and carrying considerable uncertainty over how to characterize and manage them. Because models are inextricably linked to the system they attempt to represent, models geared towards addressing complex systems not only need to be functional in terms of their use and expected results; the modeling process in its entirety also needs to be credible, practically feasible, and transparent. To realize the full potential of models, the modeling workflow needs to be seen as an integral part of the model itself. Poor modeling practices at any stage of the model-building process, from conceptualization to implementation, can lead to adverse consequences when the model is in operation. This can undermine the role of models as enablers for tackling complex problems and lead to skepticism about their effectiveness. To be effective enablers for dealing with complex systems and the issues associated with them, models need to possess a number of qualities. These include being constructed in a way that supports model reuse and interoperability, the ability to integrate data, scales, and algorithms across multiple disciplines, and the ability to handle high degrees of uncertainty.
Building models that fulfill these requirements is not an easy endeavor, as it usually entails performing problem description and requirement analysis tasks, assimilating knowledge from different domains, and choosing and integrating appropriate techniques, among other tasks that demand a significant amount of time and resources. This study aims to improve the efficiency and rigor of the model-building process by presenting an artifact that facilitates the development of probabilistic models targeting complex socio-economic-environmental systems. This goal is accomplished in three stages. The first stage deconstructs models that attempt to address complex systems. We use the Sustainable Development Goals (SDGs) as a model problem that includes economic, social, and environmental systems. The SDG models are classified and mapped against the desirable characteristics that need to be present in models addressing such a complex issue. The results of stage one are utilized in the second stage to create an Object-Oriented Bayesian Network (OOBN) model that attempts to represent the complexity of the relationships between the SDGs, long-term sustainability, and the resilience of nations. The OOBN model development process is guided by existing modeling best practices, and the model's utility is demonstrated by applying it to three case studies, each relevant to a different policy analysis context. The final stage of this study proposes a Pattern Language (PL) for developing OOBN models. The proposed PL consolidates cross-domain knowledge into a set of patterns with a hierarchical structure, allowing its prospective users to develop complex models. In addition to the OOBN PL, stage three presents a comprehensive PL validation framework that is used to validate the proposed PL. Finally, the OOBN PL is used to rebuild the OOBN model presented in stage two and address its limitations.
The proposed OOBN PL resulted in a more fit-for-purpose OOBN model, indicating the adequacy and usefulness of such an artifact for enabling modelers to build more effective models.
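The probabilistic machinery underlying an OOBN is the ordinary Bayesian network. As a rough illustration of the kind of inference such a model supports, the sketch below builds a two-node network in plain Python; the variables ("Resilience", "Sustainability") and all probabilities are invented for illustration and are not the thesis's actual SDG variables or conditional probability tables. Inference is done by exact enumeration.

```python
# Minimal flat Bayesian network sketch (not a full OOBN): two hypothetical
# binary variables, Resilience -> Sustainability, with invented CPTs.

# Prior: P(Resilience)
p_r = {"high": 0.6, "low": 0.4}

# Conditional: P(Sustainability | Resilience)
p_s_given_r = {
    "high": {"good": 0.8, "poor": 0.2},
    "low":  {"good": 0.3, "poor": 0.7},
}

def marginal_sustainability(value):
    """P(Sustainability = value), marginalizing over Resilience."""
    return sum(p_r[r] * p_s_given_r[r][value] for r in p_r)

def posterior_resilience(r_value, s_value):
    """P(Resilience = r_value | Sustainability = s_value) via Bayes' rule."""
    joint = p_r[r_value] * p_s_given_r[r_value][s_value]
    return joint / marginal_sustainability(s_value)

print(marginal_sustainability("good"))       # = 0.6*0.8 + 0.4*0.3 = 0.60
print(posterior_resilience("high", "good"))  # = 0.48 / 0.60 = 0.80
```

In an OOBN, fragments like this would be encapsulated as reusable objects with input and output nodes and composed into larger networks; the enumeration above only shows the underlying probability calculus.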

  • (2022) Ali, Muhammad
    This thesis addresses a key challenge in creating synthetic distribution networks and open-source datasets by combining public databases and data synthesis algorithms. Novel techniques for the creation of synthetic networks and open-source datasets are developed that enable model validation and demonstration without the need for private data. The developed algorithms are thoroughly benchmarked against existing approaches and validated on industry servers to highlight their usefulness in solving real-world problems. A literature review using novel techniques is conducted to identify research gaps and provide unique insights into the literature. Based on this review, three contributions are made in this thesis. The first contribution is the development of a data protection framework for anonymizing sensitive network data. A novel approach based on maximum likelihood estimation is proposed for estimating the parameters that represent the actual data. A data anonymization algorithm that uses the estimated parameters to generate realistic anonymized datasets is developed, with a Kolmogorov-Smirnov test criterion used to ensure the anonymized datasets remain realistic. Validation is carried out by collecting actual network data from an energy company and comparing it to anonymized datasets created using the methods developed in this thesis. The application of this method is shown by performing simulation studies on the IEEE 123-node test feeder. The second contribution is a practical approach for creating synthetic networks and datasets by integrating open-source data platforms and synthesis methods. New data synthesis algorithms are proposed to obtain the network datasets for electricity systems in a chosen geographical area.
The proposed algorithms include a topology for designing power lines from road infrastructure, a method for computing the lengths of power lines, a hub-line algorithm for determining the number of consumers connected to a single transformer, a virtual layer approach based on FromNode and ToNode for establishing electrical connectivity, and a technique for ingesting raw data from the developed network into industrial data platforms. The practical feasibility of the proposed solutions is shown by creating a synthetic test network and datasets for a distribution feeder in the Colac region of Australia. The datasets are then validated by deploying them on industry servers. The results are compared with actual datasets using geo-based visualizations and feedback from industry experts familiar with the analysis. The third contribution of this thesis addresses the problem of electric load profile classification in the context of buildings. This classification is essential for effectively managing energy resources across power distribution networks. Two new methods, based on sparse autoencoders (SAEs) and multi-stage transfer learning (MSTL), are proposed for load profile classification. Unlike conventional hand-crafted feature representations, SAEs can learn useful features from vast amounts of building data in an unsupervised, automatic way. The problems of missing data and class imbalance in building datasets are addressed by proposing a minority over-sampling algorithm that equalizes minority and majority samples for fair comparisons. The practical feasibility of the methodology is shown using two case studies that include both public benchmark and real-world datasets of buildings. An empirical comparison is conducted between the proposed methods and state-of-the-art methods in the literature.
The results indicate that the proposed methods are superior to traditional methods, with performance improvements of 1 to 10 percent.
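The anonymization pipeline in the first contribution above (MLE parameter fit, synthetic generation, Kolmogorov-Smirnov acceptance check) can be sketched generically. The snippet below is a minimal illustration, not the thesis's algorithm: it assumes Gaussian-distributed load data and uses SciPy's two-sample KS test, and all distributions and numbers are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "actual" network data (e.g. feeder loads in kW), standing in
# for the private measurements that must not be released.
actual = rng.normal(loc=50.0, scale=8.0, size=2000)

# Step 1: maximum likelihood estimate of the parameters that represent the
# data. For a Gaussian, the MLE is the sample mean and sample std.
mu_hat, sigma_hat = actual.mean(), actual.std()

# Step 2: generate an anonymized dataset by sampling from the fitted
# distribution, so no real record appears in the released data.
anonymized = rng.normal(loc=mu_hat, scale=sigma_hat, size=2000)

# Step 3: Kolmogorov-Smirnov criterion -- the anonymized dataset is realistic
# only if it is statistically indistinguishable from the actual data
# (small KS statistic / large p-value).
ks_stat, p_value = stats.ks_2samp(actual, anonymized)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

A dataset rejected by the criterion would simply be regenerated; in practice the distribution family would be chosen per variable rather than assumed Gaussian.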
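The class-balancing step in the third contribution above can be illustrated with random minority over-sampling, a common baseline for equalizing minority and majority samples. This is a simplified sketch, not the thesis's algorithm; the function and variable names are invented.

```python
import numpy as np

def oversample_minority(X, y, rng=None):
    """Randomly duplicate minority-class rows until both classes are equal.

    Minimal sketch of random minority over-sampling for a binary label
    vector y; a stand-in for more sophisticated balancing schemes.
    """
    rng = rng or np.random.default_rng(0)
    labels, counts = np.unique(y, return_counts=True)
    minority = labels[counts.argmin()]           # under-represented class
    deficit = counts.max() - counts.min()        # samples needed to balance
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    X_bal = np.vstack([X, X[extra]])
    y_bal = np.concatenate([y, y[extra]])
    return X_bal, y_bal

# Imbalanced toy data: 90 "typical" load profiles vs 10 "anomalous" ones,
# each profile a vector of 24 hourly readings (invented numbers).
X = np.random.default_rng(1).normal(size=(100, 24))
y = np.array([0] * 90 + [1] * 10)
X_bal, y_bal = oversample_minority(X, y)
print(np.bincount(y_bal))  # both classes now have 90 samples
```

Plain duplication can encourage overfitting to repeated minority rows, which is why interpolation-based variants are often preferred; the sketch only shows the balancing idea itself.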