PLAY PODCASTS
Compressing deep learning models: distillation (Ep.104)

Data Science at Home

May 20, 2020 · 22m 19s

Audio is streamed directly from the publisher (mcdn.podbean.com) as published in their RSS feed. Play Podcasts does not host this file. Rights-holders can request removal through the copyright & takedown page.

Show Notes

Running large deep learning models on limited hardware or edge devices is often prohibitive. Fortunately, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
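As a companion to the episode, here is a minimal NumPy sketch of the standard (Hinton-style) distillation loss: a small student network is trained to match the temperature-softened output distribution of a large teacher, mixed with the usual cross-entropy on the true labels. The temperature `T` and mixing weight `alpha` below are illustrative assumptions, not values from the episode.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between the teacher's and the
    # student's temperature-softened distributions.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft_loss = np.sum(
        p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1
    ).mean()
    # Hard-target term: ordinary cross-entropy against the true labels.
    p_hard = softmax(student_logits)
    hard_loss = -np.log(p_hard[np.arange(len(labels)), labels]).mean()
    # The soft term's gradients scale as 1/T^2, so it is rescaled by T^2
    # to keep the two terms comparable.
    return alpha * (T ** 2) * soft_loss + (1 - alpha) * hard_loss
```

In practice this loss replaces the standard cross-entropy when training the smaller student model; when student and teacher produce identical logits, the soft term vanishes.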

Come join us on Slack
