
Imagimob today announced that the new release of its tinyML platform is now available to its customers. The new release focuses on making it easier and faster for developers to build and deploy performance-ready tinyML applications on edge devices.
New features include:
Starter Projects
Starter projects save developers time: they increase the quality of the AI application and reduce the time needed to get up and running. A developer can select a starter project from a list of pre-defined projects and build deep learning AI models in minutes. Each starter project includes labelled datasets, pre-processing blocks and a pre-trained AI model. Everything needed to get started is included, and all the content is quality assured by Imagimob (a sketch of the kind of model such a project might train appears after the list below). In this release, the following starter projects are included and integrated in the platform:

  • Acconeer Radar Gesture Project using the Acconeer A1 sensor
  • Texas Instruments Radar Gesture Project using the Texas Instruments mmWave Radar Sensor IWR6843AOP
  • Keyword Spotter Project (audio) using data from a microphone
  • Human Activity Recognition Project using data from a 3-axis accelerometer
  • Indoor/Outdoor Detection Project using environmental data from the Nordic Thingy:91

Many more starter projects are in development and will be announced soon.
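
To make this concrete, here is a minimal sketch, written in plain Keras rather than with Imagimob's own tooling, of the kind of small deep learning model a Human Activity Recognition starter project might train on windows of 3-axis accelerometer data. The window length, class count and random stand-in dataset are illustrative assumptions, not actual project content.

```python
# A minimal sketch, not Imagimob's actual project format: a small model
# for classifying windows of 3-axis accelerometer data. All shapes and
# labels here are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 50   # assumed samples per classification window
CHANNELS = 3  # x, y, z accelerometer axes
CLASSES = 4   # e.g. still / walking / running / jumping (assumed labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random data standing in for the labelled dataset a starter project ships.
x = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, CLASSES, size=256)
model.fit(x, y, epochs=2, batch_size=32)
```

In a real starter project, the labelled dataset and pre-trained weights would take the place of the random placeholders above.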
Improved AutoML
The AutoML function has been further improved over previous releases of the platform. It takes the labelled datasets, the pre-processing blocks and the neural network architectures, and gives the developer a list of good candidate model architectures. The candidates are then trained in the cloud training service, so the developer can focus on evaluating them and finding the best model.
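
As an illustration of the idea rather than Imagimob's implementation, the sketch below enumerates a small grid of candidate architectures, trains each one briefly and ranks them by validation accuracy. The build_candidate helper, the hyperparameter grid and the data shapes are all hypothetical, and in the platform the training step runs in the cloud service rather than locally.

```python
# A minimal sketch of an AutoML-style candidate search, not Imagimob's
# implementation: enumerate architecture variants, train each briefly,
# and rank them so the developer only evaluates the best candidates.
import numpy as np
import tensorflow as tf

def build_candidate(filters, layers, window=50, channels=3, classes=4):
    """Build one candidate 1D-CNN; all hyperparameters are assumptions."""
    m = tf.keras.Sequential([tf.keras.layers.Input(shape=(window, channels))])
    for _ in range(layers):
        m.add(tf.keras.layers.Conv1D(filters, 3, activation="relu"))
        m.add(tf.keras.layers.MaxPooling1D(2))
    m.add(tf.keras.layers.GlobalAveragePooling1D())
    m.add(tf.keras.layers.Dense(classes, activation="softmax"))
    m.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

# Placeholder labelled data standing in for the real dataset.
x = np.random.randn(256, 50, 3).astype("float32")
y = np.random.randint(0, 4, size=256)

results = []
for filters in (8, 16):
    for layers in (1, 2):
        model = build_candidate(filters, layers)
        hist = model.fit(x, y, validation_split=0.2, epochs=2, verbose=0)
        results.append((hist.history["val_accuracy"][-1], filters, layers))

# Best candidates first; this ranking is what guides model selection.
for acc, filters, layers in sorted(results, reverse=True):
    print(f"val_acc={acc:.3f}  filters={filters}  layers={layers}")
```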
Performance Visualisation
Comparing summary metrics when selecting a model is easy, but numbers alone do not give the in-depth understanding needed to know whether a model is production-ready. In this latest release, improvements have been made in how developers can visualise model output alongside their data. This gives a thorough understanding of the model’s reaction to different events within the datasets and lets the developer see the strengths and weaknesses of different models.
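
A minimal sketch of the underlying idea, using matplotlib rather than the platform's built-in views: plotting per-timestep model output on the same time axis as the raw signal makes the model's reaction to individual events directly inspectable. The synthetic signal and probability trace below are placeholders.

```python
# A minimal sketch of the visualisation idea, not the platform's UI:
# model output and raw sensor data share one time axis for inspection.
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(500)
# Stand-in sensor signal and per-timestep model output (both synthetic).
signal = np.sin(t / 20.0) + 0.1 * np.random.randn(500)
prob = 1.0 / (1.0 + np.exp(-5 * np.sin(t / 20.0)))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, signal)
ax1.set_ylabel("sensor signal")
ax2.plot(t, prob)
ax2.set_ylabel("model output")
ax2.set_xlabel("time (samples)")
plt.show()
```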