Deep Learning (DL) algorithms are a highly promising instrument in artificial intelligence. To foster their adoption in new applications and markets, a step forward is needed towards the implementation of DL inference on low-power embedded systems, enabling a shift to the edge computing paradigm. The main goal of ALOHA is to facilitate the implementation of DL algorithms on heterogeneous low-energy computing platforms, providing automation for optimal algorithm selection, resource allocation, and deployment.


The features of the architecture that will execute the inference are taken into account during the whole development process, starting from the early stages, such as pre-training hyperparameter optimization and algorithm configuration.


The tool flow supports agile development methodologies, so that it can be easily adopted by SMEs and midcaps.


The development process accounts for systems that must adapt to different operating modes at runtime.


The development process is designed to support novel processing platforms, so that it remains exploitable beyond the end of the project.


The development process automates the introduction of algorithm features and programming techniques that improve the resilience of the system to attacks.

Parsimonious inference

In DL, good precision levels can also be obtained using algorithms with reduced complexity. All the optimization utilities in the ALOHA tool flow consider the effects of algorithm simplification on precision, execution time, energy, and power.
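As a minimal illustration of this complexity/precision trade-off (not part of the ALOHA tool flow itself), the sketch below uniformly quantizes a set of synthetic weights to progressively narrower bit widths: memory shrinks with the bit width, while the approximation error grows.

```python
import random

def quantize(weights, bits):
    # Symmetric uniform quantization to the given bit width
    levels = 2 ** (bits - 1) - 1          # e.g. 127 levels for 8-bit signed
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

random.seed(0)
# Synthetic stand-in for a layer's weights
weights = [random.gauss(0.0, 0.5) for _ in range(1000)]

for bits in (8, 4, 2):
    q = quantize(weights, bits)
    print(f"{bits}-bit: memory x{32 // bits} smaller vs float32, "
          f"mean abs error = {mean_abs_error(weights, q):.4f}")
```

Running it shows the error rising as the bit width drops; a tool flow like ALOHA's would weigh such simplifications against the resulting execution-time and energy gains on the target platform.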

Project Coordinator
Giuseppe Desoli - STMicroelectronics

Scientific Coordinator
Paolo Meloni - University of Cagliari, EOLAB

Dissemination Manager
Francesca Palumbo - University of Sassari, IDEA Lab