Engineering

Publication Search Results

  • (2006) Wang, Jerry Chun-Ping
    Thesis
    Recent widespread use of computer and wireless communication technologies has increased the demand for data services over wireless channels. However, providing high data rates in wireless systems is expensive due to many technical and physical limitations. Unlike voice services, data services can tolerate delays and allow burst transfer of information, so an alternative approach was formulated, known as the “Infostation.” An infostation is an inexpensive, high-speed wireless disseminator that provides discontinuous coverage at high radio transmission rates by deploying many short-range, high-bandwidth local wireless stations across a large terrain. Unlike ubiquitous networks, each infostation provides independent wireless connectivity over a much shorter range than a traditional cellular network. Because of the discontinuous nature of an infostation network, however, no data service is available between stations, and clients become completely disconnected from the outside world. During these disconnected periods, clients must access information locally, so a good wireless caching scheme is needed. In this dissertation, we explore the use of the infostation model for disseminating and caching data. Our initial approach focuses on large datasets that exhibit hierarchical structure. To facilitate information delivery, we exploit the hierarchical nature of the file structure and propose generic content scheduling and cache management strategies for infostations. We examine the performance of the proposed strategies with the QualNet network simulator. Our simulation results demonstrate an improved rate of successful data access, alleviating excessive waiting during disconnected periods. Moreover, our technique allows infostations to be combined with traditional cellular networks, avoiding data access over scarce and expensive cellular channels and thereby reducing cost.
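    As a rough, hypothetical illustration of the hierarchy-aware scheduling and caching described above (the thesis defines its own strategies; every name and the scoring heuristic below are invented for the sketch), a greedy Python heuristic might rank items by popularity discounted by depth and size, so that indexes near the root of the file hierarchy are broadcast and cached first:

    ```python
    class Item:
        """A node in the hierarchical dataset (hypothetical structure)."""
        def __init__(self, name, depth, popularity, size):
            self.name = name
            self.depth = depth            # distance from the hierarchy root
            self.popularity = popularity  # estimated request rate
            self.size = size              # bytes

    def schedule(items, cache_capacity):
        """Greedy scheduling/caching sketch: favour popular items close to
        the root, since ancestors are needed to navigate to descendants."""
        def score(it):
            # Assumed heuristic: popularity discounted by depth and size.
            return it.popularity / ((1 + it.depth) * it.size)
        ranked = sorted(items, key=score, reverse=True)
        cached, used = [], 0
        for it in ranked:
            if used + it.size <= cache_capacity:
                cached.append(it)
                used += it.size
        return ranked, cached  # broadcast order and cache contents

    items = [
        Item("/", 0, 0.9, 1),
        Item("/maps", 1, 0.6, 4),
        Item("/maps/campus.png", 2, 0.5, 64),
        Item("/news/archive.zip", 2, 0.1, 256),
    ]
    order, cache = schedule(items, cache_capacity=80)
    print([i.name for i in order], [i.name for i in cache])
    ```

    The depth discount reflects that ancestors must be delivered before descendants can even be requested; the thesis's actual strategies are evaluated in QualNet rather than in-process.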

  • (2006) Ahsan, Nasir
    Thesis
    In this thesis we present a new model for identifying dependencies within a gene regulatory cycle. The model incorporates both probabilistic and temporal aspects, but is kept deliberately simple to make it amenable to learning from the gene expression data of microarray experiments. A key simplifying feature of our model is the use of a compression function that collapses multiple causes of gene expression into a single cause. This allows us to introduce a learning algorithm that avoids the over-fitting tendencies of models with many parameters. We have validated the learning algorithm on simulated data and carried out experiments on real microarray data. In doing so, we have discovered novel, yet plausible, biological relationships.
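    The abstract does not specify the form of the compression function; one plausible reading, sketched below with hypothetical names and a noisy-OR-style collapse, reduces several candidate regulators to a single aggregate cause so that only a handful of parameters per gene must be learned:

    ```python
    import numpy as np

    def compress(parent_expr, weights):
        """Collapse multiple candidate causes into one scalar cause.
        Noisy-OR-style (an assumed form, not the thesis's definition):
        the gene escapes activation only if every parent independently
        fails to activate it."""
        parent_expr = np.clip(parent_expr, 0.0, 1.0)
        return 1.0 - np.prod(1.0 - weights * parent_expr, axis=-1)

    def predict_next(parent_expr_t, weights, leak=0.05):
        """P(gene on at t+1) from parent expression at t, via the
        compressed cause. One compressed value per gene keeps the
        parameter count low, which limits over-fitting on the small
        sample sizes typical of microarray data."""
        cause = compress(parent_expr_t, weights)
        return leak + (1.0 - leak) * cause

    # Toy example: three regulators, one target gene.
    parents = np.array([0.8, 0.1, 0.6])   # expression levels at time t
    w = np.array([0.7, 0.2, 0.5])         # per-parent influence (learned)
    print(predict_next(parents, w))
    ```

    Whatever the exact functional form, collapsing the parents into one cause before fitting is what bounds the parameter count, which is the over-fitting safeguard the abstract refers to.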

  • (2006) Jha, Anju
    Thesis
    Web Services can be seen from two views: one, that they are a purely technological advance; the other, that they are a capability an organisation can deploy to meet a business objective. Much has been said about the first view, but little about the second. The underlying premise of this research is that, in an ever more competitive environment, an organisation needs to consider these questions: What is the business strategy of the organisation that adopts Web Services? Does its IT align with that business strategy? The aim of this research is to capture and describe business-IT problems in the context of strategic requirements and Web Services. As a means to align a Web Services initiative with business strategy, we propose a Requirements Engineering framework that captures the business objectives of an organisation from strategy to implementation. The proposed methodology provides a roadmap from business strategy to strategic objectives to implementation along four dimensions: innovation, customer relationship management, infrastructure management, and financials. The proposed framework extends the e-Business Modelling Ontology (eBMO) of Pigneur and Osterwalder by applying Bleistein et al.'s Progression of Problems to understand the strategic objectives and the business context. We present two examples as proof of concept, experimenting with our methodology on Amazon.com and Dell.com (cases developed from the literature), as these organisations are aggressively pursuing Web Services as part of their IT and business strategy. We use the Problem Frames approach to capture the business objectives and problem context of an organisation deploying Web Services, and to create strategic alignment between the business strategy and the information technology. The approach presented in this thesis is used to understand Amazon's and Dell's strategy and strategic objectives. It was possible to capture strategic objectives and the strategic context through the combination of the eBMO and the Progression of Problems, and to trace these to Web Services requirement descriptions through the application of Problem Frames. The framework combines Bleistein et al.'s Progression of Problems at the strategic level with Problem Frames at the operational level. It takes a problem-oriented view of the whole process, but does not apply Problem Frames throughout, at least not in their original formulation by Jackson.

  • (2006) Greenfield, Daniel Leo
    Thesis
    It is a dream of systems biology to efficiently simulate an entire cell on a computer. The potential medical and biological applications of such a tool are immense, and so are the challenges of accomplishing it. At the level of a cell, the number of reacting molecules is so low that stochastic effects can be crucial in deciding the system-level behaviour of the cell. Despite the recent development of many new and hybrid stochastic approaches, exact stochastic simulation algorithms are still needed, and are widely employed in most current biochemical simulation packages. Unfortunately, the performance of these algorithms scales badly with the number of reactions. It is shown that this is especially the case for hubs and scale-free networks. This is worrying because hubs are an important component of biochemical systems, and it is widely suspected that biochemical networks are scale-free. It is shown that the scalability issue in these algorithms is due to the high interdependency between reactions. A general method for arbitrarily reducing this interdependency is introduced, and it is shown how it can be used for many classes of simulation processes. This is applied to one of the fastest current algorithms, the Next Reaction Method. The resulting algorithm, the Reactant-Margin Method, is tested on a wide range of hub sizes and shown to be asymptotically faster than the current best algorithms. Hybrid versions of the Reactant-Margin Method and the Next Reaction Method are also compared on a real biological model, the Lambda-Phage virus, and the new algorithm is again shown to perform better. The problems inherent in the hybridization are also shown to be handled more exactly and efficiently in the Reactant-Margin framework than in the Next Reaction Method framework. Finally, a software tool called GeNIV is introduced. This GUI-based biochemical modelling and simulation tool embodies a mechanistic-representation philosophy. It implements the Reactant-Margin and Next Reaction hybrid algorithms, and has a simple representation system for gene-state occupancy and the subsequent biochemical reactions. It is also novel in that it translates the graphical model into Java code, which is compiled and executed for simulation.
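    The Reactant-Margin Method itself is not reproduced here, but the baseline it improves on, the Next Reaction Method of Gibson and Bruck, can be sketched briefly. The Python below is a simplified variant: fresh exponentials are redrawn for affected reactions instead of being rescaled, and a heap with lazy invalidation stands in for an indexed priority queue. The toy reaction system is hypothetical:

    ```python
    import heapq, math, random

    def next_reaction(state, reactions, t_end):
        """Simplified Next Reaction Method sketch.
        reactions: list of (propensity_fn, apply_fn, dependents), where
        dependents are indices of reactions whose propensity changes
        when this one fires."""
        t, version, heap = 0.0, [0] * len(reactions), []

        def push(i):
            # Schedule reaction i an exponential waiting time from now.
            a = reactions[i][0](state)
            ti = t + random.expovariate(a) if a > 0 else math.inf
            version[i] += 1
            heapq.heappush(heap, (ti, i, version[i]))

        for i in range(len(reactions)):
            push(i)
        while heap:
            ti, i, v = heapq.heappop(heap)
            if v != version[i]:        # stale entry: lazy invalidation
                continue
            if ti > t_end:
                break
            t = ti
            reactions[i][1](state)     # fire the reaction
            for j in {i, *reactions[i][2]}:  # reschedule affected reactions
                push(j)
        return state

    # Toy dimerisation: A + A -> B and B -> A + A.
    state = {"A": 100, "B": 0}
    rx = [
        (lambda s: 0.01 * s["A"] * (s["A"] - 1),
         lambda s: s.update(A=s["A"] - 2, B=s["B"] + 1), [1]),
        (lambda s: 0.1 * s["B"],
         lambda s: s.update(A=s["A"] + 2, B=s["B"] - 1), [0]),
    ]
    print(next_reaction(state, rx, t_end=1.0))
    ```

    The cost of each fired reaction is dominated by the rescheduling loop over dependent reactions; that interdependency is exactly what grows with hub size, and reducing it is the point of the Reactant-Margin Method.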

  • (2006) Al-Jaljouli, Raja
    Thesis
    We address the security of the data that mobile agents gather as they traverse the Internet. Our goal is to devise a security protocol that truly secures the data mobile agents gather. Several cryptographic protocols have been presented in the literature asserting the security of gathered data. Formal verification of these protocols reveals unforeseen security flaws, such as truncation or alteration of the collected data, breaches of the privacy of the gathered data, sending others' data under the private key of a malicious host, and replacing the collected data with the data of similar agents; so the existing protocols are not truly secure. We present a security protocol which aims to assert strong integrity, authenticity, and confidentiality of the gathered data. The proposed protocol is derived from the Multi-hops protocol, which suffers from security flaws: for example, an adversary might truncate or replace collected data, or sign others' data with its own private key, without being detected. The proposed protocol refines the Multi-hops protocol with the following security techniques: utilization of co-operating agents; scrambling the gathered offers; requesting a visited host to clear its memory of any data acquired while executing the agent before it dispatches the agent to the succeeding host in the agent's itinerary; and verifying the identity of the genuine initiator early in the agent's execution at visited hosts, in addition to the verifications upon the agent's return to the initiator. The proposed protocol also implements common security techniques such as public-key encryption and digital signatures. The implemented security techniques rectify the security flaws revealed in the existing protocols. We use STA, an infinite-state exploration tool, to verify the security properties of a reasonably small instance of the proposed protocol in key configurations; the analysis using STA reports no attack. Moreover, we carefully reason about the correctness of the security protocol for a general model and show that the protocol is capable of preventing, or at least detecting, the attacks revealed in the existing protocols.
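    The full protocol (co-operating agents, offer scrambling, early identity checks) is beyond a short sketch, but the chaining idea behind detecting truncation or alteration of gathered offers can be illustrated as follows. The Python below uses a bare hash chain with hypothetical names; the actual protocol uses public-key encryption and digital signatures, and here a final digest reported through a second channel stands in for the role the co-operating agent plays:

    ```python
    import hashlib

    def chain_entry(offer: bytes, host_id: bytes, prev_digest: bytes) -> bytes:
        """Each visited host binds its offer to everything gathered so far.
        A real protocol would sign this digest with the host's private key
        and encrypt the offer under the initiator's public key."""
        return hashlib.sha256(offer + host_id + prev_digest).digest()

    def collect(offers):
        """Simulate an agent's itinerary: hosts append (offer, digest) pairs."""
        digest = hashlib.sha256(b"initiator-nonce").digest()  # anchors the chain
        record = []
        for host_id, offer in offers:
            digest = chain_entry(offer, host_id, digest)
            record.append((host_id, offer, digest))
        return record

    def verify(record, reported_final):
        """The initiator recomputes the chain. Altering or reordering any
        entry breaks every digest after it; truncation is caught by the
        final digest delivered over a second channel (the job done by the
        co-operating agent in the protocol above)."""
        digest = hashlib.sha256(b"initiator-nonce").digest()
        for host_id, offer, claimed in record:
            digest = chain_entry(offer, host_id, digest)
            if digest != claimed:
                return False
        return digest == reported_final

    rec = collect([(b"hostA", b"price=10"), (b"hostB", b"price=9")])
    final = rec[-1][2]                 # reported via the co-operating agent
    assert verify(rec, final)
    assert not verify(rec[:1], final)  # truncation detected
    ```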