Deep Learning (DL) algorithms are among the most promising instruments in artificial intelligence. To foster their adoption in new applications and markets, DL inference must be brought to low-power embedded systems, enabling a shift to the edge-computing paradigm. The main goal of ALOHA is to ease the implementation of DL algorithms on heterogeneous low-energy computing platforms by automating optimal algorithm selection, resource allocation and deployment.

Architecture-awareness

The features of the architecture that will execute the inference are taken into account throughout the development process, starting from early stages such as pre-training hyperparameter optimization and algorithm configuration.

Productivity

The tool flow supports agile development methodologies, so that it can be easily adopted by SMEs and mid-caps.

Adaptivity

The development process accounts for the system's need to adapt to different operating modes at runtime.



Extensibility

The development process is designed to support novel processing platforms, so that the tool flow remains exploitable beyond the end of the project.

Security

The development process automates the introduction of algorithm features and programming techniques that improve the resilience of the system to attacks.


Parsimonious inference

In DL, good precision levels can often be obtained with algorithms of reduced complexity. All optimization utilities in the ALOHA tool flow consider the effects of algorithm simplification on precision, execution time, energy and power.
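As a minimal illustration of this kind of complexity/precision trade-off (this is a generic sketch, not ALOHA's actual utilities), the snippet below quantizes a float32 weight matrix to 8-bit integers, a 4x memory reduction, and measures the precision lost:

```python
import numpy as np

def quantize_int8(w):
    """Uniformly quantize an array to int8; return (quantized, scale)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to float32 for comparison with the original."""
    return q.astype(np.float32) * scale

# Synthetic "weights" standing in for a trained layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The relative reconstruction error quantifies the precision cost of the
# simplification; a real tool flow would also weigh time, energy and power.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error: {rel_err:.4f}")
```

The same measure-the-cost pattern applies to other simplifications, such as pruning or reduced-depth models.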


Contacts

Project Coordinator
Giuseppe Desoli - STMicroelectronics
giuseppe.desoli@st.com

Scientific Coordinator
Paolo Meloni - University of Cagliari, EOLAB
paolo.meloni@diee.unica.it

Dissemination Manager
Francesca Palumbo - University of Sassari, PolComing
fpalumbo@uniss.it
