Abstract
Efficient scheduling is crucial to extracting maximum performance from MPSoCs (Multiprocessor Systems-on-Chip). In embedded systems, minimizing energy consumption is a major design goal. Intelligent task scheduling approaches can guide energy reduction while at the
same time guaranteeing timing constraints.
In this thesis, we focus on three distinct problems in the domain of energy-aware scheduling of tasks
with conditional precedence and timing constraints on MPSoCs.
The first problem is scheduling a set of non-preemptive tasks with individual deadlines and
conditional precedence constraints on MPSoCs consisting of homogeneous processors and shared
memory, such that the total processor energy consumption of all the tasks in each scenario is minimized.
We propose a unified two-phase approach that consists of an NLP (Non-Linear Programming)-based
offline scheduler and an online task scheduler that performs task reallocation, rescheduling, and speed
assignment at runtime.
In the second problem, we investigate scheduling a set of tasks with individual deadlines
and conditional precedence constraints on a heterogeneous NoC (Network-on-Chip)-based MPSoC
such that the total expected energy consumption of all the tasks is minimized. Our approach consists
of a scheduling heuristic that constructs a single unified schedule for all the tasks and assigns
a frequency to each task and each communication assuming continuous frequencies, followed by an ILP (Integer
Linear Programming)-based algorithm and a polynomial-time heuristic for assigning discrete frequencies
and voltages to tasks and communications.
Finally, we solve the problem of scheduling and optimizing the energy consumption of a set of tasks with
conditional precedence constraints, individual deadlines, and a common period on a NoC-based MPSoC.
We achieve this goal by integrating DVFS (Dynamic Voltage and Frequency Scaling) with coarse-grained task-level software pipelining. Our
approach not only optimizes energy consumption but also ensures that the memory overhead incurred by
task-level software pipelining satisfies the memory capacity bounds.