Videantis has launched a new visual processing architecture, the v-MP6000UDX, which it says boosts deep learning performance by up to three orders of magnitude.
The company continues to see strong demand for smart sensing systems that combine deep learning with other computer vision and video processing techniques, such as SLAM, structure from motion, wide-angle lens correction and video compression.
Videantis can run all these tasks on a single processing architecture – the aim being to simplify SoC design and integration, ease software design, reduce unused dark silicon and provide additional flexibility to address a wide variety of use cases.
“We’ve quietly been working on our deep learning solution together with a few select customers for quite some time and are now ready to announce this exciting new technology to the broader market,” said Hans-Joachim Stolberg, CEO at Videantis.
“To efficiently run deep convolutional nets in real time requires new performance levels and careful optimisation, which we’ve addressed with both a new processor architecture and a new optimisation tool.
“Compared to other solutions on the market, we took great care to create an architecture that truly processes all layers of CNNs on a single architecture rather than adding standalone accelerators where the performance breaks on the data transfers in between.
“This compatibility ensures a seamless upgrade path for our customers towards adding deep learning capabilities to their systems, without having to rewrite the computer vision software they’ve already developed for our architecture.”
“Embedded vision is enabling a wide range of new applications such as automotive ADAS, autonomous drones, new AR/VR experiences and self-driving cars,” said Mike Demler, senior analyst at The Linley Group.
“Videantis is providing an architecture that can run all the visual computing tasks that a typical embedded vision system needs, while meeting stringent power, performance and cost requirements.”
The v-MP6000UDX architecture includes an extended instruction set optimised for running convolutional neural nets, increases multiply-accumulate throughput eightfold to 64 MACs per core, and extends the core count from a typical eight to up to 256.
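At full configuration, those figures imply a substantial peak MAC rate. A back-of-the-envelope calculation makes the scale concrete – note that the announcement does not state a clock frequency, so the 1 GHz figure below is purely an illustrative assumption:

```python
# Peak MAC throughput implied by the v-MP6000UDX figures quoted above.
# The clock frequency is a hypothetical value for illustration only;
# the announcement does not state one.

MACS_PER_CORE_PER_CYCLE = 64   # from the announcement (an eightfold increase)
MAX_CORES = 256                # maximum core count quoted
CLOCK_HZ = 1_000_000_000       # assumed 1 GHz clock (illustrative)

macs_per_cycle = MACS_PER_CORE_PER_CYCLE * MAX_CORES
peak_macs_per_second = macs_per_cycle * CLOCK_HZ
# One MAC is a multiply plus an add, so the "ops" rate is twice the MAC rate.
peak_tops = 2 * peak_macs_per_second / 1e12

print(macs_per_cycle)  # 16384 MACs per clock cycle
print(peak_tops)       # 32.768 TOPS at the assumed 1 GHz clock
```

At the assumed clock, a maximally configured device would sustain 16,384 MACs per cycle, or roughly 33 TOPS; the real figure scales linearly with whatever frequency the silicon actually runs at.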
Videantis also unveiled v-CNNDesigner, a tool designed to enable easy porting of neural networks that have been designed and trained using frameworks such as TensorFlow or Caffe.
The company says v-CNNDesigner analyses, optimises and parallelises trained neural networks for efficient processing on the v-MP6000UDX architecture – the task of implementing a neural network is fully automated.
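The announcement gives no detail on how v-CNNDesigner actually maps a network onto the cores, but partitioning a layer's output channels across cores is one common strategy for this kind of parallelisation. The sketch below is a conceptual illustration only – the function and the channel-wise scheme are assumptions, not Videantis' published method:

```python
# Conceptual sketch: one way a porting tool might partition a convolution
# layer's work across many cores. This is NOT the actual v-CNNDesigner
# algorithm (which is unpublished); it only illustrates the general idea
# of parallelising a trained layer over a fixed number of cores.

def partition_output_channels(num_channels: int, num_cores: int) -> list:
    """Split output channels into near-equal contiguous chunks, one per core."""
    base, extra = divmod(num_channels, num_cores)
    chunks, start = [], 0
    for core in range(num_cores):
        size = base + (1 if core < extra else 0)  # spread any remainder evenly
        chunks.append(range(start, start + size))
        start += size
    return chunks

# Example: a 512-channel convolution layer mapped onto 256 cores.
plan = partition_output_channels(512, 256)
print(len(plan), len(plan[0]))  # 256 cores, 2 output channels each
```

Automating decisions like this for every layer – along with memory layout and data-transfer scheduling – is the kind of work the tool is said to take off the developer's hands.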
January 26, 2018