UNSW Canberra

Publication Search Results

Now showing 1 - 4 of 4
  • (2022) Maranan, Noahlyn
    The 2016 vice-presidential election in the Philippines was contested on Facebook, the nation’s most prominent social media platform. Among the contenders was Ferdinand ‘Bongbong’ Marcos, son of former president Ferdinand Marcos Sr, who ruled between 1965 and 1986. Memes played a significant role in the election. They potentially enriched participatory engagement and information dissemination to a broader public. Through them, opposing camps worked through different versions of the Philippines’ past, present, and future. This case presents a novel opportunity to contribute to the growing scholarly debate about the relationship between social media and democratic politics. This study asks, “Can social media contribute to strengthening democracy in the Philippines?” It approaches this question through a conceptual framework that integrates work on democracy and political memory while also taking seriously the propensity of social media to be enlisted in information campaigns of a propagandist nature. Having analysed a sample of Facebook memes for their form and content, the study comes to an ambivalent conclusion. The thesis finds that memes, as immensely pliable and flexible texts created and circulated with ease, play a dual role in democratic politics. In the 2016 Philippine election, they (a) allowed for the inclusion of competing perspectives, narratives, and voices about Marcos Sr’s past regime and his son’s electoral bid. Rational and passionate voices, as one would expect from models of deliberative and agonistic democracy, were visible in this study. Enabled by digital platforms, memes became an important medium for the creative, potentially deliberative, and agonistic (if not outwardly antagonistic) articulation of sidelined memories about the regime of Marcos Sr. At the same time, (b) memes served as instruments for persuasive networked influence.
While this may seem contrary to democratic communication, such propagandistic communication carries the potential to enrich reasoned argumentation in the broader public sphere when viewed through the lens of the wider literature on deliberative democracy. This potential, however, also depends on other factors, including the techno-discursive platform in which propagandistic content circulates and the characteristics of the electorate.

  • (2023) Mohamed, Ebrahim
    Models are becoming invaluable instruments for comprehending and resolving the problems originating from the interactions between humans, mainly their social and economic systems, and the environment. These interactions between the three systems, i.e. the socio-economic-natural systems, lead to evolving systems that are infamous for being extremely complex, having potentially conflicting goals, and involving considerable uncertainty over how to characterize and manage them. Because models are inextricably linked to the system they attempt to represent, models geared towards addressing complex systems not only need to be functional in terms of their use and expected results; the modeling process in its entirety also needs to be credible, practically feasible, and transparent. In order to realize the full potential of models, the modeling workflow needs to be seen as an integral part of the model itself. Poor modeling practices at any stage of the model-building process, from conceptualization to implementation, can lead to adverse consequences when the model is in operation. This can undermine the role of models as enablers for tackling complex problems and lead to skepticism about their effectiveness. Models need to possess a number of qualities in order to be effective enablers for dealing with complex systems and addressing the issues that are associated with them. These qualities include being constructed in a way that supports model reuse and interoperability, having the ability to integrate data, scales, and algorithms across multiple disciplines, and having the ability to handle high degrees of uncertainty.
Building models that fulfill these requirements is not an easy endeavor, as it usually entails performing problem description and requirement analysis tasks, assimilating knowledge from different domains, and choosing and integrating appropriate techniques, among other tasks that require significant time and resources. This study aims to improve the efficiency and rigor of the model-building process by presenting an artifact that facilitates the development of probabilistic models targeting complex socioeconomic-environmental systems. This goal is accomplished in three stages. The first stage deconstructs models that attempt to address complex systems. We use the Sustainable Development Goals (SDGs) as a model problem that includes economic, social, and environmental systems. The SDG models are classified and mapped against the desirable characteristics that need to be present in models addressing such a complex issue. The results of stage one are utilized in the second stage to create an Object-Oriented Bayesian Network (OOBN) model that attempts to represent the complexity of the relationships between the SDGs, long-term sustainability, and the resilience of nations. The OOBN model development process is guided by existing modeling best practices, and the model's utility is demonstrated by applying it to three case studies, each relevant to a different policy analysis context. The third stage proposes a Pattern Language (PL) for developing OOBN models. The proposed PL consolidates cross-domain knowledge into a set of patterns with a hierarchical structure, allowing its prospective user to develop complex models. In addition to the OOBN PL, stage three presents a comprehensive PL validation framework that is used to validate the proposed PL. Finally, the OOBN PL is used to rebuild the OOBN model presented in stage two and address its limitations.
The proposed OOBN PL resulted in a more fit-for-purpose OOBN model, indicating the adequacy and usefulness of such an artifact for enabling modelers to build more effective models.
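The conditional reasoning that a Bayesian network (and, by composition, an OOBN) supports can be illustrated with a toy example. The sketch below is not the thesis's SDG model: it is a minimal hand-built three-node chain (Investment, SDGProgress, Resilience) with invented probability tables, queried by summing out the intermediate variable.

```python
# Minimal Bayesian-network sketch (assumed structure and invented CPTs,
# not the thesis's actual OOBN). All variables are binary: True/False.
P_progress_given_invest = {True: {True: 0.8, False: 0.2},
                           False: {True: 0.3, False: 0.7}}
P_resil_given_progress = {True: {True: 0.9, False: 0.1},
                          False: {True: 0.4, False: 0.6}}

def p_resilience_given_investment(invest: bool) -> float:
    """P(Resilience=True | Investment=invest), marginalizing SDGProgress."""
    return sum(
        P_progress_given_invest[invest][prog] * P_resil_given_progress[prog][True]
        for prog in (True, False)
    )

print(round(p_resilience_given_investment(True), 3))   # 0.8*0.9 + 0.2*0.4 = 0.8
print(round(p_resilience_given_investment(False), 3))  # 0.3*0.9 + 0.7*0.4 = 0.55
```

An OOBN extends this idea by packaging such sub-networks as reusable objects with input and output nodes, so larger models are assembled from validated components rather than built flat.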

  • (2023) Shindi, Omar
    Optimal and robust quantum control methods are essential for developing quantum technology. This thesis proposes and examines the implementation of reinforcement learning algorithms for three quantum control tasks. First, a modified tabular Q-learning (TQL) algorithm is proposed for optimal quantum state preparation. This algorithm is compared with the standard TQL method and other methods, such as the stochastic gradient descent and Krotov algorithms, in the context of quantum state preparation for a two-qubit system. The results indicate that the modified TQL algorithm outperforms standard TQL methods in generating high-fidelity control protocols that guide the quantum state closer to the target state. Moreover, the modified TQL algorithm is stable in discovering high-fidelity control protocols regardless of changes in the length of the control protocol. The modifications to standard TQL, including a modified action selection procedure, a delayed n-step reward function, and a dynamic ε-greedy method, improve stability and, in some cases, enhance performance in discovering globally optimal solutions. Subsequently, a modified Deep Q-Learning (DQL) method is proposed for optimal quantum state preparation under constraints such as limited control resources and a fixed pulse duration. The modified DQL algorithm outperforms the standard DQL in discovering high-fidelity control protocols and shows better convergence to a more effective control policy. Additionally, the improved experience replay memory, delayed n-step reward function, and modified action selection method boost the exploration-exploitation ability of the DQL agent in discovering high-fidelity solutions for longer protocols. For optimal quantum gate design, this thesis introduces a modified dueling DQL method. This method demonstrates superiority in constructing high-fidelity controls that mimic target gates and discovers globally optimal or near-optimal control protocols.
Furthermore, the modified dueling DQL method converges more rapidly to a better control policy than the standard dueling DQL method. The second part of this thesis focuses on robust quantum gate design, introducing a modified dueling Deep Q-Learning (DQL) method for the design of single-qubit gates. The proposed method outperforms the standard dueling DQL in discovering robust high-fidelity control protocols for single-qubit gates. However, robust gate design for multi-qubit systems poses more significant challenges than for single-qubit systems. To address this, this thesis introduces Trust Region Policy Optimization (TRPO), an on-policy reinforcement learning method, for the design of robust gates for two-qubit and three-qubit systems. Additionally, this thesis proposes an enhanced Krotov method for robust gate design. The effectiveness of these proposed methods is demonstrated through numerical examples of robust gate design for CNOT and Toffoli gates. Both TRPO and the improved Krotov method successfully construct robust, high-fidelity protocols capable of executing CNOT gates within a specified uncertainty range. For the Toffoli gate, TRPO manages to construct a robust control protocol applicable to varying parameters, while the improved Krotov method succeeds only with a longer control protocol. Increasing the number of control protocols increases the complexity of the problem and thus the challenge for the improved Krotov method. However, the Hamiltonian gradient with respect to the control pulse used in the updating procedure of the improved Krotov method makes it suitable for longer control protocols. In contrast, TRPO demonstrates stable performance in discovering robust control protocols regardless of increases in the complexity of the control problem, whether through a larger number of control protocols or a longer control. Third, this thesis explores model-free quantum gate design and calibration.
Constructing a quantum gate design is hard when a model of the quantum system is unavailable, owing to the challenges of mathematically characterizing quantum systems and capturing all relevant factors in the mathematical model. A modified RL framework based on the DQL procedure is proposed for model-free quantum gate design and calibration. This framework relies only on measurements at the end of the evolution process to identify the optimal control strategy, without requiring access to the quantum system. The efficacy of the proposed approach is established numerically, demonstrating its application to model-free quantum gate design and calibration using off-policy reinforcement learning algorithms. In summary, this thesis presents innovative RL methods for optimal and robust quantum control, contributing to the development of more resilient and efficient quantum systems.
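The tabular Q-learning baseline underlying the first contribution can be sketched in a few lines. The example below is a generic TQL loop with ε-greedy exploration and optimistic initialization on a toy chain environment; it is a stand-in, not the thesis's quantum control task or its modified algorithm with delayed n-step rewards. The agent learns a policy that drives the state toward a target, loosely analogous to steering a quantum state with a pulse sequence.

```python
import random

# Toy analogue of optimal-protocol search: states 0..4 on a line, actions
# move left or right, and the "target state" is state 4. Generic tabular
# Q-learning (epsilon-greedy exploration, optimistic initial values).
N_STATES, TARGET = 5, 4
ACTIONS = (-1, +1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == TARGET else 0.0
    return nxt, reward, nxt == TARGET

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Optimistic initial values encourage systematic early exploration.
    Q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):  # cap on episode length
            a = (rng.choice(ACTIONS) if rng.random() < EPS
                 else max(ACTIONS, key=lambda act: Q[(s, act)]))
            nxt, r, done = step(s, a)
            # No bootstrapping past a terminal state.
            target = r if done else r + GAMMA * max(Q[(nxt, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = nxt
            if done:
                break
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # the greedy policy moves right (+1) from every non-target state
```

The thesis's modifications (delayed n-step rewards, a dynamic ε schedule, and a changed action-selection procedure) slot into exactly the exploration and update steps marked above.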

  • (2022) Ali, Muhammad
    This thesis addresses a key challenge in creating synthetic distribution networks and open-source datasets by combining public databases and data synthesis algorithms. Novel techniques for the creation of synthetic networks and open-source datasets that enable model validation and demonstration without the need for private data are developed. The developed algorithms are thoroughly benchmarked against existing approaches and validated on industry servers to highlight their usefulness in solving real-world problems. A literature review employing novel techniques is conducted to identify research gaps. Based on this review, three contributions are made in this thesis. The first contribution is the development of a data protection framework for anonymizing sensitive network data. A novel approach based on maximum likelihood estimation is proposed for estimating the parameters that represent the actual data. A data anonymization algorithm that uses the estimated parameters to generate realistic anonymized datasets is developed, with a Kolmogorov-Smirnov test criterion used to ensure the anonymized datasets remain realistic. Validation is carried out by collecting actual network data from an energy company and comparing it to anonymized datasets created using the methods developed in this thesis. The application of this method is shown by performing simulation studies on the IEEE 123-node test feeder. The second contribution is a practical approach for creating synthetic networks and datasets by integrating open-source data platforms and synthesis methods. New data synthesis algorithms are proposed to obtain network datasets for electricity systems in a chosen geographical area.
The proposed algorithms include a topology for designing power lines from road infrastructure, a method for computing the lengths of power lines, a hub-line algorithm for determining the number of consumers connected to a single transformer, a virtual layer approach based on FromNode and ToNode for establishing electrical connectivity, and a technique for ingesting raw data from the developed network into industrial data platforms. The practical feasibility of the proposed solutions is shown by creating a synthetic test network and datasets for a distribution feeder in the Colac region of Australia. The datasets are then validated by deploying them on industry servers. The results are compared with actual datasets using geo-based visualizations and feedback from industry experts familiar with the analysis. The third contribution of this thesis addresses the problem of electric load profile classification in the context of buildings. This classification is essential for effectively managing energy resources across power distribution networks. Two new methods based on sparse autoencoders (SAEs) and multi-stage transfer learning (MSTL) are proposed for load profile classification. Unlike conventional hand-crafted feature representations, SAEs can learn useful features from vast amounts of building data in an unsupervised, automatic way. The problems of missing data and class imbalance in building datasets are addressed with a minority over-sampling algorithm that balances missing or imbalanced data by equalizing minority and majority samples for fair comparisons. The practical feasibility of the methodology is shown using two case studies that include both public benchmark and real-world building datasets. An empirical comparison is conducted between the proposed methods and state-of-the-art methods from the literature.
The results indicate that the proposed methods are superior to traditional methods, with a performance improvement of 1 to 10 percent.
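The anonymization idea in the first contribution, fitting parameters by maximum likelihood, synthesizing data from the fit, and checking realism with a Kolmogorov-Smirnov criterion, can be sketched as follows. All distributions and values here are invented for illustration; the thesis's actual estimator, data, and test setup may differ.

```python
import bisect
import random
import statistics

# Hedged sketch (invented Gaussian example, not the thesis's algorithm):
# fit a Gaussian to sensitive readings by maximum likelihood, draw an
# anonymized sample from the fitted model, then compare distributions
# with a two-sample Kolmogorov-Smirnov statistic.

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: largest gap between the empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sorted_sample, x):
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in a + b)

rng = random.Random(42)
real = [rng.gauss(5.0, 1.5) for _ in range(1000)]  # stand-in for sensitive data

# Gaussian MLE: sample mean and population standard deviation.
mu, sigma = statistics.fmean(real), statistics.pstdev(real)
anonymized = [rng.gauss(mu, sigma) for _ in range(1000)]

d = ks_statistic(real, anonymized)
print(f"KS statistic: {d:.3f}")  # small values mean the distributions agree
```

The anonymized sample carries no individual readings from the original data, yet a small KS statistic indicates it preserves the distributional properties that downstream simulation studies depend on.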