Accelerate your models with 🤗 Optimum Intel and OpenVINO

📰 Hugging Face Blog

Hugging Face's Optimum Intel now integrates Intel OpenVINO for accelerated model inference and quantization on Intel hardware

Intermediate · Published 2 Nov 2022
Action Steps
  1. Install Optimum Intel and OpenVINO
  2. Use OVModels for inference on Intel processors
  3. Apply post-training static quantization or quantization aware training to encoder models
  4. Host and deploy models on the Hugging Face Hub or locally
Who Needs to Know This

Data scientists and machine learning engineers can use this integration to speed up Transformer inference on Intel processors and reduce model size and latency through quantization

Key Insight

💡 Optimum Intel's integration with OpenVINO enables easy inference and quantization of Transformer models on Intel processors
