Engineering

Publication Search Results

Now showing 1 - 10 of 27
  • (2010) Ji, Philip Nan
    Thesis
    As the backbone of the global communication network, optical dense wavelength division multiplexed (DWDM) systems face challenges in capacity, flexibility, reliability and cost effectiveness. In my thesis research I developed five novel optical devices or subsystems to address these technical challenges. Each device or subsystem is described in an individual chapter covering background survey, proposal of new features, theoretical analysis, hardware design, prototype fabrication and characterization, and experimental verification in DWDM systems. The first is a novel tunable asymmetric interleaver that allows the interleaving ratio to be adjusted dynamically. Two design methods were proposed and implemented. Spectral usage optimisation and overall system performance improvement in 10G/40G and 40G/100G systems were successfully demonstrated through simulations and experiments. The second is a colourless intra-channel optical equalizer: a passive periodic filter that restores the overall filter passband to a raised-cosine profile to suppress the filter-narrowing effect and mitigate inter-symbol interference. A 20% passband widening and a 40% increase in eye opening were experimentally achieved. The third is a flexible band tunable filter that allows simultaneous tuning of centre frequency and passband width. Based mainly on this filter, a low-cost expandable reconfigurable optical add/drop multiplexer (ROADM) node was developed. Its flexible switching features were experimentally demonstrated in a two-ring, four-node network testbed. The fourth is a transponder aggregator subsystem for colourless and directionless multi-degree ROADM nodes. Using the unique characteristics of the coherent receiver, this technology eliminates the need for a wavelength selector, thus reducing power consumption, size and cost. I experimentally demonstrated that it can achieve a < 0.5 dB penalty between receiving a single channel and 96 channels. The last is a real-time feedforward all-order polarization mode dispersion (PMD) compensator. It first analyses the spectral interference pattern to retrieve phase information and calculate the PMD; it then uses a pulse shaper to restore the pulse shape and thus compensate for the PMD. These functions were demonstrated through experiments and simulations. All of these novel devices and subsystems deliver new functional features and are suitable for next-generation DWDM systems, improving capacity, flexibility, and reliability while reducing cost.
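    For reference, the raised-cosine profile mentioned above is the standard textbook one; writing it with roll-off factor β and symbol period T (notation assumed here, not taken from the abstract), the target passband is

        |H(f)| =
        \begin{cases}
          1, & |f| \le \tfrac{1-\beta}{2T}, \\
          \tfrac{1}{2}\Bigl[1 + \cos\Bigl(\tfrac{\pi T}{\beta}\bigl(|f| - \tfrac{1-\beta}{2T}\bigr)\Bigr)\Bigr], & \tfrac{1-\beta}{2T} < |f| \le \tfrac{1+\beta}{2T}, \\
          0, & \text{otherwise,}
        \end{cases}

    whose gently rolled-off edges counteract the passband narrowing that accumulates as a channel traverses cascaded filters.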

  • (2010) Willems van Beveren, Laurens; Huebl, H.; Starrett, Robert; Morello, Andrea
    Conference Paper
    We demonstrate radio frequency (RF) readout of electrically detected magnetic resonance (EDMR) in phosphorus-doped silicon metal-oxide-semiconductor field-effect transistors (MOSFETs), operated at liquid helium temperatures. For the first time, the Si:P hyperfine lines have been observed using radio frequency reflectometry, which is promising for high-bandwidth operation and possibly time-resolved detection of spin resonance in donor-based semiconductor devices. Here we present the effect of microwave (MW) power and MOSFET biasing conditions on the EDMR signals.

  • (2010) Savkovic, Borislav
    Thesis
    This thesis is concerned with the problem of robust model predictive control (MPC) of an input- and state-constrained linear system, for the case that descriptions of uncertainty present within the control loop are time-variant. Such descriptions are referred to as time-varying uncertainty descriptions in this thesis. The study of such problems is motivated by control applications for which the sets within which the uncertainty is known to evolve are time-variant, and for which future predictions of such uncertainty description sets might need to be revised at subsequent times, owing to additional information becoming available. Within the framework of robust MPC subject to time-varying uncertainty descriptions, the contribution of this thesis is twofold. Firstly, a robust MPC controller is proposed which ensures robustness, performance, constraint satisfaction and stability, all in a suitably defined sense, under time-varying uncertainty descriptions that are assumed to satisfy a mild consistency condition. The key idea behind the proposed controller is to regulate the system state around an "optimized" average state, such that the system state is ensured to evolve "tightly" around that average. The obtained results are shown to extend work from two related streams of the robust MPC literature (tube-based and constraint-restriction-based robust MPC). It is shown through simulations that the proposed controller, even for the case of time-invariant uncertainty descriptions, outperforms previous robust MPC methods, owing to a decreased degree of conservatism, in a suitably defined sense, and to its ability to ensure improved uncertainty rejection through "tighter" regulation of the actual state around the optimized average state via a "feedback corrective term" in the proposed control law. The second contribution relates to the analysis of the computational complexity of the proposed control algorithm. It is shown that the computational burden encountered in previous robust MPC methods, associated with computing Minkowski sums and Minkowski differences of sets, may be overcome by employing the theory of support functions and support vectors, which have been studied extensively in the related field of control theory dealing with the construction of reachable sets. In particular, it is shown that the computation may either be broken down into simpler subproblems, or simplified further if additional structure, motivated by practically relevant control problems, is assumed on the time-varying uncertainty descriptions. The developed theory is compared against previous methods of relevance, where it is shown that, owing to the particular formulation of the proposed control law, improved performance is achieved even when time-invariant uncertainty descriptions are assumed. The theory is furthermore applied to two practical problems, both of which illustrate its validity and utility. Firstly, it is applied to the problem of robust MPC when an observer must be employed to estimate the system state. Subsequently, it is applied to the problem of control over a communication channel of limited capacity and with data losses.
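    The computational point can be stated in one line using only standard definitions (not specific to this thesis): for a compact convex set X ⊆ R^n, the support function and its behaviour under Minkowski sums are

        h_X(v) = \sup_{x \in X} v^{\top} x, \qquad h_{X \oplus Y}(v) = h_X(v) + h_Y(v),

    so a Minkowski sum, which is expensive to compute as an explicit set, costs a single scalar addition per direction of interest.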

  • (2010) Khan, Nazeer
    Thesis
    In a traditional infrastructure wireless local area network (WLAN), the mobile node (MN) makes the decision of choosing an access point (AP). The MN creates a list of APs in range along with their received signal strength indicator (RSSI). It then uses the RSSI alone as the decision metric for selecting an AP to connect to, as this is the only information available at the MN. We argue that an MN is not the correct entity in a WLAN for choosing an AP when many APs are available, as it does not have a complete view of the environment. Secondly, choosing an AP solely on the basis of RSSI is not an efficient algorithm: it can lead to a concentration of MNs at a single AP, decreasing the average throughput of every MN associated with that AP. In this thesis, we propose to transfer this decision to the AP in a transparent manner, as sketched below. While our solution for a single administrative domain uses a centralized controller, we also propose a completely distributed architecture for personal WLANs in which APs decide which MNs to serve based on MN concentration, network load, throughput, and the effect of serving a new MN on the network. The APs in different networks collaborate with one another in a completely decentralized manner to provide unified network access to MNs, with a focus on maximizing system capacity. The broadcast nature of the wireless medium is used intelligently to select dynamic uplink and downlink paths between APs and MNs.
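    To make the load-aware selection concrete, here is a hypothetical sketch (all names and the airtime-sharing model are illustrative assumptions, not taken from the thesis) of an AP-side policy that maximizes expected per-client throughput after admission rather than raw RSSI:

        # Hypothetical sketch of load-aware AP-side admission: pick the AP that
        # maximizes expected per-client throughput after admitting the new MN,
        # instead of the MN picking the strongest RSSI. The rate model and all
        # names are illustrative assumptions, not the thesis's.
        from dataclasses import dataclass

        @dataclass
        class AccessPoint:
            name: str
            phy_rate_mbps: float   # rate the new MN would get (RSSI-dependent)
            n_clients: int         # MNs already associated

        def post_admission_throughput(ap: AccessPoint) -> float:
            # Naive model: airtime is shared equally, so admitting one more MN
            # divides the channel n_clients + 1 ways.
            return ap.phy_rate_mbps / (ap.n_clients + 1)

        def choose_ap(aps: list[AccessPoint]) -> AccessPoint:
            return max(aps, key=post_admission_throughput)

        aps = [
            AccessPoint("AP-1", phy_rate_mbps=54.0, n_clients=9),  # strong signal, crowded
            AccessPoint("AP-2", phy_rate_mbps=36.0, n_clients=1),  # weaker signal, idle
        ]
        print(choose_ap(aps).name)  # AP-2: 18 Mb/s per client beats 5.4 Mb/s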

  • (2010) Shen, Zhenliang
    Thesis
    As the amount of information in modern life increases, the need for more efficient methods to represent this information grows as well. This information can take a variety of forms, such as speech, still images, text, and video. The aim of data compression is to provide efficient methods to represent information; it is very important in the storage and transmission of digital images and video. Depending on the application, data compression can be classified into two areas: lossy compression and lossless compression. Lossy compression can provide higher compression ratios than lossless compression, but at the cost of some loss of information. Lossless compression is applied in applications for which the decompressed data must be identical to the original. This thesis presents new approaches to lossless image compression, applied to binary image compression (i.e. binary shape coding) for object-based video compression, and to grey-scale image compression. In a binary shape video sequence, each pixel has value '1' within an object and value '0' in the background; each pixel is either completely inside the object or completely outside it, i.e. there is no blending of pixels at object boundaries. The main purpose of binary shape coding is to use fewer bits to describe the object boundary information. Many coding methods exist for binary shape coding, most of which exploit the high correlation between adjacent pixels to provide high coding efficiency. In this thesis, we propose a new approach to binary shape coding based on quad-tree block-based context-based arithmetic encoding (CAE). The original image is decomposed into new images at different resolutions. Coding is then performed on 3-symbol quad-tree blocks instead of the original binary pixels. The quad-tree structure removes spatial redundancy efficiently: the low-resolution image costs fewer bits to locate the boundary blocks, while the higher-resolution images give more detail around them. This technique can be used in either an Intra mode (i.e. no information from other frames is used) or an Inter mode (in which information from the previous frame is used to further improve coding efficiency). The efficiency of the quad-tree structure depends strongly on the complexity of the shape and motion of the object. An optimally pruned quad-tree removes the inefficient branches of the quad-tree structure and yields the most efficient representation for any binary shape. A fast, simplified algorithm for optimal quad-tree pruning is also proposed in this thesis. In comparison with MPEG-4 CAE, the proposed approach gives significant improvement in both Intra and Inter coding. Lossless image compression typically comprises two parts: transformation and coding. For reasons of computational efficiency, the transformation must be linear and integer-valued. In natural images, the content of a small region is highly correlated, which means much redundancy can be removed. Predictive models tailored to exploit redundancy in a particular direction in each block are applied to remove this spatial redundancy, so that pixel values in a small block can be re-mapped to a narrower range than [0, 255]. Variable-length coding automatically assigns short code words to high-probability values. According to the frequency distribution within each local block, small blocks with different patterns are designed to achieve high coding efficiency. Adaptive context-based arithmetic coding can then be added for a high-performance compression algorithm, applied to the output of the preceding low-complexity coder; this gives a minor further improvement compared with other compression methods.
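    As an illustration of the 3-symbol quad-tree idea described above, the following minimal sketch recursively labels blocks of a binary image as all-background, all-object, or mixed; it is a toy decomposition only, and omits the CAE coding and optimal pruning that the thesis adds on top:

        # Minimal sketch of the 3-symbol quad-tree idea: recursively split a
        # binary image block; emit 'W' (all 0s), 'B' (all 1s), or 'M' (mixed,
        # then recurse into four quadrants). Illustrative only.
        import numpy as np

        def quadtree(block: np.ndarray, out: list) -> None:
            if not block.any():
                out.append('W')          # all background
            elif block.all():
                out.append('B')          # all object
            else:
                out.append('M')          # mixed: split into four quadrants
                h, w = block.shape[0] // 2, block.shape[1] // 2
                for sub in (block[:h, :w], block[:h, w:],
                            block[h:, :w], block[h:, w:]):
                    quadtree(sub, out)

        shape = np.zeros((8, 8), dtype=np.uint8)
        shape[2:6, 3:7] = 1              # a small rectangular "object"
        symbols: list = []
        quadtree(shape, symbols)
        print(''.join(symbols))          # far fewer symbols than 64 raw pixels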

  • (2010) Naman, Aous Thabit
    Thesis
    Video is one of the main applications on today's Internet. Despite its importance, the interactivity available from current implementations is limited to pause and random access to a set of predetermined access points. In this work, we propose a novel approach that provides considerably better interactivity, and we coin the term JPEG2000-Based Scalable Interactive Video (JSIV) for it. JSIV relies on three main concepts: storing the video sequence as independent JPEG2000 frames to provide quality and spatial-resolution scalability, as well as temporal and spatial accessibility; prediction and conditional replenishment of precincts to exploit inter-frame redundancy; and loosely coupled server and client policies. The concept of loosely coupled client and server policies is central to JSIV. With these policies, the server optimally selects the number of quality layers for each precinct it transmits and decides on any side information that needs to be transmitted, while the client attempts to make the most of the received (distorted) frames. In particular, the client decides which precincts are predicted and which are decoded from received data (or possibly filled with zeros in the absence of received data). Thus, in JSIV, a predicted frame typically has some of its precincts predicted from nearby frames while others are decoded from received intra-coded precincts; JSIV never uses frame differences or prediction residues. The philosophy behind these policies is that neither the server nor the client drives the video streaming interaction; rather, the server dynamically selects and sends the pieces that, it thinks, best serve the client's needs and, in turn, the client makes the most of the pieces of information it has. The JSIV paradigm postulates that if both the client and server policies are intelligent enough and make reasonable decisions, then the decisions made by the server are likely to have the expected impact on the client's decisions. We solve the general JSIV optimization problem by employing Lagrange-style rate-distortion optimization in a two-pass iterative approach. We show that this approach converges under workable conditions, and we also show that the optimal solution for a given rate is not necessarily embedded in the optimal solution for a higher rate. The flexibility of the JSIV paradigm enables us to use it in a variety of frame prediction arrangements. In this work, we focus only on JSIV with a sequential prediction arrangement (similar to IPPP...) and a hierarchical B-frame prediction arrangement. We show that JSIV can provide the sought-after quality and spatial scalability in addition to temporal and spatial accessibility. We also demonstrate a novel way in which a JSIV client can use its cache to improve the quality of reconstructed video. In general, JSIV can serve a wide range of usage scenarios, but we expect that real-time and interactive applications, such as teleconferencing and surveillance, would benefit most from it. Experimental results show that JSIV's performance is slightly inferior to that of existing predictive coding standards in conventional streaming applications; however, JSIV produces significant improvements when its scalability and accessibility features, such as region-of-interest access, are employed.
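    The server-side selection can be illustrated with generic Lagrange-style rate-distortion optimization, the tool the abstract names; the per-precinct rate-distortion table and λ values below are invented for illustration:

        # Generic Lagrangian rate-distortion selection: for one precinct, pick
        # the number of quality layers q minimizing J = D(q) + lambda * R(q).
        # The toy R-D table and lambda values are made up for illustration.

        def pick_layers(rate, dist, lam):
            """rate[q], dist[q]: cumulative rate/distortion after q layers."""
            costs = [d + lam * r for r, d in zip(rate, dist)]
            return min(range(len(costs)), key=costs.__getitem__)

        # Convex toy table: each extra layer costs more bits and removes less
        # distortion.
        rate = [0, 100, 250, 500]      # bytes after 0..3 quality layers
        dist = [900, 400, 200, 120]    # squared-error distortion after 0..3 layers

        for lam in (0.5, 2.0, 8.0):    # larger lambda => scarcer rate => fewer layers
            print(lam, pick_layers(rate, dist, lam))   # -> 2, 1, 0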

  • (2010) Aravind Surapura, Chakrapani
    Thesis
    This thesis is concerned with the development of energy-efficient cooperative communication protocols for 4G networks. A feature that permeates our work is the use of location information in the cooperative environment in order to minimize the power consumed in the network. Importantly, we consider the effect of location errors in our analysis. We make six contributions in the thesis. In the first part we consider multi-hop relay networks and make the following contributions. First, we study the energies required for a message to reach its destination within a given latency bound for two well-known location-based routing protocols, highlighting the scenarios under which each protocol offers better energy efficiency. Second, we show how energy efficiency decreases dramatically in most location-based routing protocols once location error is accounted for; we propose a new Location-Error Aware Routing (LEAR) protocol which minimizes the negative impact of location error on energy efficiency. Third, we develop a new MAC-layer Location-based Cooperative Relay Forwarding (LCRF) protocol, which exploits both location information and cooperation among relays in order to minimize the power consumed in the network. We show how LCRF can yield factor-of-two energy savings relative to current MAC protocols that use only location information. In the second part of this thesis, we consider an M-user, two-hop, single-relay system (an M-1-1 system) and make the following additional contributions. Fourth, we develop a power allocation scheme at the relay that obtains near-optimal throughput for an M-1-1 system where instantaneous channel state information (CSI) is available; its complexity is of order O(M log M) and it is free of logarithm computations. Fifth, we develop a new power allocation scheme at the relay that obtains near-optimal throughput for an M-1-1 system where CSI is not available. This scheme is more difficult to develop because instantaneous CSI is unavailable, yet its complexity remains of order O(M log M). Sixth, we show how our work results in a trade-off between the lifetime and throughput of an M-1-1 system, and we examine how the system throughput is influenced by the introduction of location errors and of user mobility.
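    For a feel of where an O(M log M) power allocation can come from, here is the classical sorting-based water-filling allocation; this is a standard textbook method shown purely for illustration, not the thesis's specific scheme:

        # Classic water-filling power allocation across M user channels. Its
        # cost is dominated by the O(M log M) sort. Standard method, shown only
        # to illustrate the complexity class; not the thesis's scheme.

        def waterfill(gains, total_power):
            """Allocate total_power over channels with gains g_i, maximizing
            sum log(1 + g_i * p_i) subject to sum p_i = total_power."""
            inv = sorted(1.0 / g for g in gains)          # O(M log M)
            k, prefix = len(inv), sum(inv)
            # Shrink the active set until the water level clears every noise floor.
            while k > 1 and (total_power + prefix) / k <= inv[k - 1]:
                k -= 1
                prefix -= inv[k]
            level = (total_power + prefix) / k            # the "water level"
            return [max(level - 1.0 / g, 0.0) for g in gains]

        print(waterfill([2.0, 1.0, 0.1], total_power=1.0))  # -> [0.75, 0.25, 0.0]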

  • (2010) Maher, Phillip Stephen
    Thesis
    Mobile location methods that employ signal fingerprints are becoming increasingly popular in a number of wireless positioning solutions. A fingerprint is a spatial database of the radio environment, created either from recorded measurements or by simulation. It assigns signal characteristics, such as received signal strength or power delay profiles, to actual locations. Measurements made by either the handset or the network are then matched against the fingerprint in order to determine a location. Creating the fingerprint through an a priori measurement stage is costly and time consuming. Virtual fingerprints, created by a ray-tracing radio propagation prediction tool, normally require a lengthy off-line simulation that must be repeated each time changes are made to the network or the built environment. An open research question is whether a virtual fingerprint could be created dynamically, for positioning purposes, by a ray-trace model embedded on a mobile handset. The key aim of this thesis is to investigate the trade-off between the complexity of the physics required by ray-tracing models and the accuracy of the virtual fingerprints they produce. The most computationally demanding phase of a ray-trace simulation is the ray-path finding stage, in which a distribution of rays cast from a source point, interacting with walls and edges through reflection and diffraction, is traced to a set of receive points. For this reason, we develop a new technique that reduces the computation of the ray-path finding stage. The new technique utilises a modified method of images rather than brute-force ray casting. It leads to the creation of virtual fingerprints requiring significantly less computational effort than ray-casting techniques, with only small decreases in accuracy. Our new technique for virtual fingerprint creation was then applied to the development of a signal strength fingerprint for a 3G UMTS network covering the Sydney central business district. Our main goal was to determine whether, on current mobile handsets, sub-50 m location accuracy could be achieved on a timescale of a few seconds using our system. The results show that this was indeed achievable. We also show how virtual fingerprinting can lead to more accurate solutions. Based on these results, we claim that user-embedded fingerprinting is now a viable alternative to a priori measurement schemes.
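    A minimal sketch of the method of images that the technique builds on: mirroring the transmitter across a wall turns a single-bounce reflection into a straight-line computation. The geometry below is illustrative only; a real engine adds visibility checks, higher reflection orders, and diffraction:

        # 2D method-of-images sketch: mirror the transmitter across a wall; the
        # straight-line distance from that image to the receiver equals the
        # length of the single-bounce reflected path. Geometry only.
        import math

        def mirror_across_wall(p, a, b):
            """Reflect point p across the infinite line through wall endpoints a, b."""
            dx, dy = b[0] - a[0], b[1] - a[1]
            t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
            foot = (a[0] + t * dx, a[1] + t * dy)      # foot of the perpendicular
            return (2 * foot[0] - p[0], 2 * foot[1] - p[1])

        tx, rx = (1.0, 2.0), (6.0, 3.0)
        wall_a, wall_b = (0.0, 0.0), (10.0, 0.0)       # wall along the x-axis
        img = mirror_across_wall(tx, wall_a, wall_b)   # -> (1.0, -2.0)
        path_len = math.dist(img, rx)                  # single-bounce path length
        print(img, path_len)                           # ~7.07, same as TX->wall->RX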

  • (2010) Fleming, Richard F.
    Thesis
    The combination of national and corporate focus on critical infrastructure protection, and the increasing complexity of the information systems underlying business processes, make resilience a high priority in information systems planning and management. No approach exists which provides a practicable basis for understanding the resilience (also known as survivability) of complex information environments (IEs), sufficient to warrant confidence in their support for critical enterprise missions or adequate for understanding the resilience implications of proposed changes. My research objectives are: to design a process for analysing IE resilience in large enterprises that can take into account the complexity and volatility of their operations; to establish a case for an emulative approach to simulating the behaviour of a complex enterprise IE in a way which supports analysis of how the effects of potentially disruptive events on the infrastructure propagate to critical business processes; and to design a tool to explore the practicability of emulative approaches to modelling and analysing complex IE behaviour. A major part of this research is the development of a simulation (SEALS) of a complex distributed IE, able to emulate the running and failure of simple business processes. This development catalysed many of the research observations. Its evaluation suggests that IE emulation is technically feasible but requires much more work to handle scalability and to provide behavioural fidelity and, like all other approaches, cannot yet encompass the human behaviours to which resilience is most sensitive. Novel aspects of my approach include: an IE resilience analysis process focused on real (vs. normative) enterprise behaviour, including complex infrastructures, volatile missions and disparate stakeholders; the use of emulation, as a surrogate for the real IE, on which resilience experiments can be conducted; and the use of a Java-based generic simulation engine to build the emulation.

  • (2010) Tang, Howe Hing
    Thesis
    In this PhD research project, frequency spectral analysis of non-invasively measured photoplethysmography (PPG) variability and heart rate variability (HRV) signals was applied to monitor the pathophysiologic response of cardiovascular system control in sepsis, and to derive potentially clinically useful markers for early sepsis diagnosis. In the animal sepsis model study, all sympathetic-related spectral powers in toe- and ear-PPG variability and in HRV were significantly altered (p < 0.05) after the onset of endotoxin-induced hypotension. Toe-PPG variability, in particular, displayed a substantial but transient rise in sympathetic-related spectral power at the onset of hypotension, which might be related to the activation and subsequent withdrawal of some non-baroreflex sympathetic drive. Significantly reduced coherence between HRV and systolic BP variability (p < 0.05), on the other hand, may be regarded as evidence of severe impairment in baroreflex heart rate control. The potential of these non-invasively measured PPG variability and HRV indices was further confirmed in a clinical study of sepsis patients, whose outcomes were in good agreement with the animal sepsis model: the normalized MF power of ear-PPG variability was significantly reduced in severe sepsis patients (p < 0.05). These cardiovascular spectral indices were then used in a nonlinear support vector machine (SVM) classifier to discriminate sepsis patients into two distinct pathological groups (systemic inflammatory response syndrome and severe sepsis), showing good classification performance. The combination of frequency spectral analysis and SVM classification in identifying stages along the sepsis continuum has produced significant outcomes, and in future more effort should be devoted to this kind of research to facilitate early diagnosis of sepsis progression.
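    As a hedged sketch of the two-stage pipeline described above (band-limited spectral power features followed by a nonlinear SVM), the following uses Welch's method and an RBF-kernel SVM; the band edges, sampling rate, and data are illustrative assumptions, not the thesis's values:

        # Two-stage pipeline sketch: band-power features from a variability
        # signal (Welch's method), then a nonlinear SVM. Band edges, sampling
        # rate, and the random data are illustrative assumptions only.
        import numpy as np
        from scipy.signal import welch
        from sklearn.svm import SVC

        def band_power(x, fs, lo, hi):
            f, pxx = welch(x, fs=fs, nperseg=256)
            band = (f >= lo) & (f < hi)
            return pxx[band].sum() * (f[1] - f[0])   # rectangle-rule integral

        rng = np.random.default_rng(0)
        fs = 4.0                                 # Hz (assumed resampling rate)
        X, y = [], []
        for label in (0, 1):                     # toy labels: 0 = SIRS, 1 = severe sepsis
            for _ in range(20):
                sig = rng.standard_normal(1024)  # placeholder for a PPG-variability trace
                mf = band_power(sig, fs, 0.08, 0.15)  # "mid-frequency" band (assumed edges)
                hf = band_power(sig, fs, 0.15, 0.50)
                X.append([mf, hf]); y.append(label)

        clf = SVC(kernel='rbf').fit(X, y)        # nonlinear SVM classifier
        print(clf.predict(X[:2]))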