Engineering

Publication Search Results

Now showing 1 - 10 of 20
  • (2001) Diessel, Oliver; Milne, George
    Journal Article
    Reconfigurable computers based on field programmable gate array technology allow applications to be realised directly in digital logic. The inherent concurrency of hardware distinguishes such computers from microprocessor-based machines in which the concurrency of the underlying hardware is fixed and abstracted from the programmer by the software model. However, reconfigurable logic provides us with the potential to exploit "real" concurrency. It is therefore interesting to know how to exploit this concurrency, how to model concurrent computations, and which languages allow this dynamic hardware to be programmed most effectively. The purpose of this work is to describe an FPGA compiler for the Circal process algebra. In so doing, the authors demonstrate that behavioural descriptions expressed in a process algebraic language can be readily and intuitively compiled to reconfigurable logic and that this contributes to the goal of discovering appropriate high-level languages for run-time reconfiguration.

  • (2007) Heiser, Gernot; Elphinstone, Kevin; Kuz, Ihor; Klein, Gerwin; Petters, Stefan
    Journal Article
    As computer systems become increasingly mission-critical, used in life-critical situations, and relied upon to protect intellectual property, operating-system reliability is becoming an ever growing concern. In the past, mission- and life-critical embedded systems consisted of simple microcontrollers running a small amount of software that could be validated using traditional and informal techniques. However, with the growth of software complexity, traditional techniques for ensuring software reliability have not been able to keep up, leading to an overall degradation of reliability. This paper argues that microkernels are the best approach for delivering truly trustworthy computer systems in the foreseeable future. It presents the NICTA operating-systems research vision, centred around the L4 microkernel and based on four core projects: the seL4 project is designing an improved API for a secure microkernel; L4.verified will produce a full formal verification of the microkernel; Potoroo combines execution-time measurements with static analysis to determine the worst-case execution profiles of the kernel; and CAmkES provides a component architecture for building systems that use the microkernel. Through close collaboration with Open Kernel Labs (a NICTA spinoff), the research output of these projects will make its way into products over the next few years.

  • (2007) Kuz, Ihor; Liu, Yan; Gorton, Ian; Heiser, Gernot
    Journal Article
    Component-based software engineering promises to provide structure and reusability to embedded-systems software. At the same time, microkernel-based operating systems are being used to increase the reliability and trustworthiness of embedded systems. Since the microkernel approach to designing systems is partially based on the componentisation of system services, component-based software engineering is a particularly attractive approach to developing microkernel-based systems. While a number of widely used component architectures already exist, they are generally targeted at enterprise computing rather than embedded systems. Due to the unique characteristics of embedded systems, a component architecture for embedded systems must have low overhead, be able to address relevant non-functional issues, and be flexible enough to accommodate application-specific requirements. In this paper we introduce a component architecture aimed at the development of microkernel-based embedded systems. The key characteristics of the architecture are that it has a minimal, low-overhead core but is highly modular and therefore flexible and extensible. We have implemented a prototype of this architecture and confirm that it has very low overhead and is suitable for implementing both system-level and application-level services.

  • (2007) Zhu, Liming; Bui, N; Liu, Yan; Gorton, Ian
    Journal Article
    This paper describes an approach for generating customized benchmark suites from a software architecture description following a Model Driven Architecture (MDA) approach. The benchmark generation and performance data capture tool implementation (MDABench) is based on widely used open source MDA frameworks. The benchmark application is modeled in UML and generated by taking advantage of the existing community-maintained code generation "cartridges" so that current component technology can be exploited. We have also tailored the UML 2.0 Testing Profile so architects can model the performance testing and data collection architecture in a standards-compatible way. We then extended the MDA framework to generate a load testing suite and automatic performance measurement infrastructure. This greatly reduces the effort and expertise needed for benchmarking with complex component and Web service technologies while remaining fully compatible with the MDA standard. The approach complements current model-based performance prediction and analysis methods by generating the benchmark application from the same application architecture that the performance models are derived from. We illustrate the approach using two case studies based on Enterprise JavaBean component technology and Web services.

  • (2005) Zhu, Liming; Aurum, Aybuke; Jeffery, David; Gorton, Ian
    Journal Article
    Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality attributes. These attributes typically have intrinsic conflicts and must be considered simultaneously in order to reach a final design decision. AHP (Analytic Hierarchy Process), an important decision making technique, has been leveraged to resolve such conflicts. AHP can help provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be sensitive: the smallest change in intermediate priority weights can alter the final order of design alternatives. In this paper, we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the decision process. We apply our method to an example of a real-world distributed architecture presented in the literature. The results are promising in that they make important decision consequences explicit in terms of key design tradeoffs and the architecture's capability to handle future quality attribute changes. These expose critical decisions which are otherwise too subtle to be detected in standard AHP results.
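The AHP ranking step the abstract refers to can be sketched in a few lines. This is an illustrative example, not the paper's method: it derives priority weights from a pairwise comparison matrix using the row geometric-mean approximation of the principal eigenvector, then ranks two hypothetical architecture alternatives by weighted score. All matrix values and alternative names are made up for illustration.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3x3 pairwise comparison of quality attributes
# (e.g. performance vs. modifiability vs. security).
pairwise = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)

# Hypothetical per-attribute scores for two design alternatives.
alternatives = {"A": [0.7, 0.2, 0.1], "B": [0.3, 0.5, 0.2]}
ranking = sorted(
    alternatives,
    key=lambda a: -sum(w * s for w, s in zip(weights, alternatives[a])),
)
```

The paper's contribution is the analysis on top of such a ranking (tradeoff and sensitivity analysis), which this sketch does not attempt.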

  • (2007) Gaeta, Bruno; Malming, Harald R.; Jackson, Katherine J.L.; Bain, Michael E.; Wilson, Patrick; Collins, Andrew M.
    Journal Article
    Motivation: Immunoglobulin heavy chain (IGH) genes in mature B lymphocytes are the result of recombination of IGHV, IGHD and IGHJ germline genes, followed by somatic mutation. The correct identification of the germline genes that make up a variable VH domain is essential to our understanding of the process of antibody diversity generation as well as to clinical investigations of some leukaemias and lymphomas. Results: We have developed iHMMune-align, an alignment program that uses a hidden Markov model (HMM) to model the processes involved in human IGH gene rearrangement and maturation. The performance of iHMMune-align was compared to that of other immunoglobulin gene alignment utilities using both clonally related and randomly selected IGH sequences. This evaluation suggests that iHMMune-align provides a more accurate identification of component germline genes than other currently available IGH gene characterization programs.
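The hidden Markov model alignment described above typically relies on a Viterbi-style decoding of the most likely state path. The following is a toy sketch, not iHMMune-align itself: a generic Viterbi decoder over a made-up two-state model ("V" and "D" gene regions) with invented transition and emission probabilities, just to show the mechanics of HMM-based sequence labelling.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, state path) of the most likely hidden path."""
    # Each column maps state -> (best probability so far, best path so far).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][p][0] * trans_p[p][s] * emit_p[s][o], V[-2][p][1] + [s])
                for p in states
            )
            V[-1][s] = (prob, path)
    return max(V[-1].values())

# Toy two-state model over a DNA alphabet; all probabilities are invented.
states = ("V", "D")
start_p = {"V": 0.6, "D": 0.4}
trans_p = {"V": {"V": 0.7, "D": 0.3}, "D": {"V": 0.1, "D": 0.9}}
emit_p = {"V": {"A": 0.4, "C": 0.1, "G": 0.4, "T": 0.1},
          "D": {"A": 0.1, "C": 0.4, "G": 0.1, "T": 0.4}}
prob, path = viterbi("AGCT", states, start_p, trans_p, emit_p)
```

A real IGH model would have many more states (germline gene segments, insertions, mutations) and use log probabilities to avoid underflow on long sequences.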

  • (2007) Nonaka, M.; Zhu, Liming; Ali Babar, Muhammad; Staples, Mark
    Journal Article
    The cost of a Software Product Line (SPL) development project sometimes exceeds the initially planned cost, because of requirements volatility and poor quality. In this paper, we propose a cost overrun simulation model for time-boxed SPL development. The model is an enhancement of a previous model, specifically now including: consideration of requirements volatility, consideration of unplanned work for defect correction during product projects, and nominal project cost overrun estimation. The model has been validated through stochastic simulations with fictional SPL project data, by comparing generated unplanned work effort to actual change effort, and by sensitivity analysis. The result shows that the proposed model has reasonable validity for estimating nominal project cost overruns and their variability. Analysis indicates that poor management of requirements and of quality significantly increases cost overruns.
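The stochastic simulation the abstract mentions can be illustrated with a tiny Monte Carlo sketch. This is not the paper's model: the distributions, parameters, and effort units below are all invented, purely to show how unplanned work from requirements volatility and defect correction can be sampled and turned into an overrun estimate with variability.

```python
import random
import statistics

def simulate_overrun(planned=100.0, runs=10_000, seed=42):
    """Estimate mean and spread of relative cost overrun (hypothetical model)."""
    rng = random.Random(seed)
    overruns = []
    for _ in range(runs):
        # Invented distributions for unplanned work, in effort units.
        req_volatility = rng.gauss(8.0, 3.0)   # unplanned requirement changes
        defect_fixing = rng.gauss(5.0, 2.0)    # unplanned defect correction
        actual = planned + max(0.0, req_volatility) + max(0.0, defect_fixing)
        overruns.append((actual - planned) / planned)
    return statistics.mean(overruns), statistics.stdev(overruns)

mean_overrun, sd_overrun = simulate_overrun()
```

The paper's model additionally ties these quantities to time-boxed SPL project structure and validates against change-effort data; this sketch only shows the sampling skeleton.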

  • (2008) Guo, Jun; Wong, Eric; Chan, Sammy; Taylor, Peter; Zukerman, Moshe; Tang, Kit-Sang
    Journal Article
    The designers of a large scale video-on-demand system face an optimization problem of deciding how to assign movies to multiple disks (servers) such that the request blocking probability is minimized subject to capacity constraints. To solve this problem, it is essential to develop scalable and accurate analytical means to evaluate the blocking performance of the system for a given file assignment. The performance analysis is made more complicated by the fact that the request blocking probability depends also on how disks are selected to serve user requests for multicopy movies. In this paper, we analyze several efficient resource selection schemes. Numerical results demonstrate that our analysis is scalable and sufficiently accurate to support the task of file assignment optimization in such a system.
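As a point of reference for the blocking analysis discussed above, the classic Erlang B formula gives the blocking probability of a single server group with n concurrent stream slots and offered load a (in Erlangs). This is an illustrative building block, not the paper's multi-disk model, computed here with the standard numerically stable recurrence.

```python
def erlang_b(n, a):
    """Erlang B blocking probability for n channels and offered load a Erlangs.

    Uses the recurrence B(k) = a*B(k-1) / (k + a*B(k-1)) with B(0) = 1,
    which avoids the overflow of the direct factorial formula.
    """
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b
```

For example, one channel offered one Erlang blocks half of all requests. The paper's analysis has to go further, capturing disk selection policies and resource sharing across movie copies.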

  • (2008) Guo, Jun; Wong, Eric; Chan, Sammy; Taylor, Peter; Zukerman, Moshe; Tang, Kit-Sang
    Journal Article
    We observe that disk resource sharing of multi-copy movie traffic has a great impact on the blocking performance of a video-on-demand system. This observation leads us to establish a conjecture on how to balance the movie traffic load among combination groups of disks to maximize the level of disk resource sharing. For a given file replication instance, the conjecture predicts in general an effective lower bound on the blocking performance of the system. It motivates the design of a numerical index that measures quantitatively the goodness of disk resource sharing in the allocation of multi-copy movie files. It also motivates the design of a greedy file allocation method that computes a good-quality heuristic solution for each feasible file replication instance. We further develop analytical formulas to obtain approximate results for the bound quickly and accurately. These techniques can be utilized by an optimization program to find near-optimal file assignment solutions for the system in a computationally efficient way.

  • (2008) Guo, Jun; Wang, Yi; Tang, Kit-Sang; Chan, Sammy; Wong, Eric; Taylor, Peter; Zukerman, Moshe
    Journal Article
    We present a genetic algorithm to tackle a file assignment problem for a large scale video-on-demand system. The file assignment problem is to find the optimal replication and allocation of movie files to disks, so that the request blocking probability is minimized subject to capacity constraints. We adopt a divide-and-conquer strategy, where the entire solution space of file assignments is divided into subspaces. Each subspace is an exclusive set of solutions sharing a common file replication instance. This allows us to utilize a greedy file allocation method to find a sufficiently good quality heuristic solution within each subspace. Two performance indices are further designed to measure the quality of the heuristic solution on 1) its assignment of multi-copy movies and 2) its assignment of single-copy movies. We demonstrate that these techniques together with ad hoc population handling methods enable genetic algorithms to operate in a significantly reduced search space, and achieve good quality file assignments in a computationally efficient way.
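The genetic-algorithm machinery described above (selection, crossover, mutation over a population of candidate assignments) can be sketched generically. This is an illustrative toy, not the paper's algorithm: it evolves bit strings toward a made-up fitness function using tournament selection, one-point crossover, and bit-flip mutation, standing in for the paper's file-assignment encoding and blocking-probability objective.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, seed=1):
    """Evolve bit strings maximizing `fitness` (toy GA, hypothetical parameters)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):             # bit-flip mutation
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # toy fitness: number of ones in the string
```

The paper's divide-and-conquer refinement restricts each GA individual to a fixed replication instance and delegates allocation to a greedy heuristic, which this generic skeleton does not capture.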