Big Data Processing Architectures for Genomics
(2016-to date) We work on efficient architectures, encompassing both hardware and software, for bioinformatics and genomics applications such as next-generation sequencing, using Apache Spark and Apache Storm as well as special-purpose FPGA-based and HW-SW co-designed architectures.
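To make the flavour of these workloads concrete, here is a minimal, self-contained k-mer counting sketch in plain Python; the map/reduce structure is the same one a Spark job would use (e.g. flatMap followed by countByValue), but the function names and data are illustrative only, not taken from any project code.

```python
from collections import Counter
from typing import Iterable

def kmers(read: str, k: int) -> Iterable[str]:
    """Map step: emit every length-k substring (k-mer) of one read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

def count_kmers(reads: Iterable[str], k: int = 3) -> Counter:
    """Reduce step: merge per-read k-mer counts into one table.
    On Spark this would be reads.flatMap(lambda r: kmers(r, k)).countByValue()."""
    counts = Counter()
    for read in reads:
        counts.update(kmers(read, k))
    return counts

reads = ["GATTACA", "TACAGAT"]
print(count_kmers(reads, k=3)["GAT"])  # "GAT" occurs once in each read -> 2
```

Because each k-mer count is an independent associative aggregation, the same sketch parallelises trivially across partitions of a read set, which is what makes NGS pipelines a natural fit for these frameworks.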
Co-Design of Big Data Processing Architectures in the Cloud
(2014-to date) Frameworks such as Apache Storm, Spark, and Hadoop relieve users of the intricate details of the underlying systems when developing big data applications, letting them focus on functionality instead. We optimise the implementation of these frameworks through system-level optimisations and co-design techniques such as smart allocation, binding, and scheduling, as well as by developing HW-SW co-designed and other special-purpose architectures for the frameworks' units of execution. We target implementation not only in classic data centers, but also in virtualised data centers and clouds.
- HadoopCloud: (2014-to date) Focuses on energy-efficient cloud implementation of Hadoop jobs.
- HadoopDataBalancing: (2016-to date) Balances the distribution of intermediate as well as initial data among servers to reduce communication time on heterogeneous clusters.
- StormBoost: (2016-to date) Works on the instantiation, allocation, and rate-tuning of Storm bolts on heterogeneous clusters for higher throughput.
- FPGA-in-Storm: (2016-to date) Designs hardware architectures that ease the implementation of Storm bolts on FPGA boards attached via PCIe to the servers of a heterogeneous cluster, enabling a HW-SW co-designed Storm implementation.
- DAMP (Data-Aware Multi-stage Progressive big data processing): (2016-to date) Recognises that data blocks of the same size can influence the final outcome of a computation to different degrees. DAMP processes the more influential blocks first, improving efficiency and the quality of intermediate approximations in a multi-stage progressive computation.
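The idea behind DAMP can be illustrated with a toy progressive computation: given per-block importance scores (how such scores are obtained is the data-aware part and is not modelled here; the scores below are invented), more important blocks are consumed first and an approximate result is refined stage by stage.

```python
def progressive_mean(blocks, importance, stages):
    """Toy multi-stage progressive mean: consume blocks in decreasing
    importance and emit a refined estimate after each stage."""
    order = sorted(range(len(blocks)), key=lambda i: importance[i], reverse=True)
    per_stage = max(1, len(order) // stages)
    total, count, estimates = 0.0, 0, []
    for s in range(0, len(order), per_stage):
        for i in order[s:s + per_stage]:
            total += sum(blocks[i])
            count += len(blocks[i])
        estimates.append(total / count)  # approximation after this stage
    return estimates

# Hypothetical blocks and importance scores, for illustration only.
blocks = [[1, 1], [9, 9], [5, 5]]
importance = [0.1, 0.9, 0.5]
print(progressive_mean(blocks, importance, stages=3))  # [9.0, 7.0, 5.0]
```

An application can stop after any stage, trading accuracy for time, which is the efficiency/approximation trade-off the project targets.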
Renewable Energies in Geo-Distributed Data Centers
(2013-2017) The availability of renewable energy varies in both time and space. We work on smart allocation, scheduling, and migration of tasks and virtual machines to make the best use of the renewable energy available at each of a set of geographically distributed data centers at any given time.
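As a highly simplified sketch of the allocation side of this problem, the greedy routine below places each virtual machine at whichever site currently has the most unused renewable capacity; the site names, loads, and capacities are invented for illustration, and a realistic policy would also model time variation, migration cost, and brown-energy fallback.

```python
def place_vms(vm_loads, renewable_kw):
    """Greedy sketch: assign each VM (largest load first) to the data
    center with the most remaining renewable capacity right now."""
    remaining = dict(renewable_kw)
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        site = max(remaining, key=remaining.get)
        placement[vm] = site
        remaining[site] -= load  # may go negative: the rest comes from the grid
    return placement

# Hypothetical snapshot of one scheduling interval.
placement = place_vms({"web": 3, "db": 2, "batch": 2}, {"EU": 4, "US": 3})
print(placement)  # {'web': 'EU', 'db': 'US', 'batch': 'EU'}
```

Re-running such a placement as renewable supply shifts over the day is what motivates the migration aspect of the project.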
Variability-Aware MPSoC System-Level Design
(2009-2013) Process variation in nanometer-scale technologies results in statistical changes of transistor and chip parameters such as Vth and Leff. The net effect is variation in the power and performance of processors across instances of the same MPSoC design. The VAM project uses statistical static system-level techniques, such as task and communication scheduling and configuration selection, for power and performance optimisation of MPSoC systems.
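A deterministic toy version of variability-aware task mapping can convey the intuition (VAM's actual techniques are statistical, treating core speeds as distributions, which this sketch ignores): cores of the same design end up with different post-fabrication speeds, so the scheduler assigns the longest tasks first to whichever core would finish them earliest. All numbers below are illustrative.

```python
def greedy_schedule(task_cycles, core_speeds):
    """Longest-task-first sketch on variability-affected cores: assign
    each task to the core that finishes it earliest, given per-core
    measured speeds, to reduce the overall makespan."""
    finish = [0.0] * len(core_speeds)
    assignment = {}
    for task, cycles in sorted(enumerate(task_cycles), key=lambda kv: -kv[1]):
        core = min(range(len(core_speeds)),
                   key=lambda c: finish[c] + cycles / core_speeds[c])
        assignment[task] = core
        finish[core] += cycles / core_speeds[core]
    return assignment, max(finish)

# Two same-design cores whose measured speeds differ due to variation.
assignment, makespan = greedy_schedule([4, 4, 2], [2.0, 1.0])
print(assignment, makespan)  # {0: 0, 1: 0, 2: 1} 4.0
```

A variability-unaware mapping that assumed identical core speeds could place both long tasks on the slow core; accounting for the measured speeds avoids that.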