Improving language understanding with unsupervised learning

📰 OpenAI News

OpenAI achieves state-of-the-art results on language-understanding benchmarks with a scalable, task-agnostic system that combines transformers with unsupervised pre-training

Published 11 Jun 2018
Action Steps
  1. Combine transformers with unsupervised pre-training to improve language understanding
  2. Experiment with scalable task-agnostic systems to achieve state-of-the-art results
  3. Explore supervised fine-tuning on top of unsupervised pre-training across a variety of language tasks
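The recipe in steps 1 and 3 can be sketched in two stages: pre-train a small transformer as a language model on unlabeled tokens, then fine-tune the same body with a supervised task head. This is an illustrative toy, not OpenAI's actual training code; the model sizes, toy data, and the hypothetical 2-class task are all assumptions.

```python
# Two-stage sketch: (1) unsupervised pre-training of a tiny Transformer
# as a next-token language model, (2) supervised fine-tuning of a
# classification head on top of the same body. All sizes are toy values.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, D_MODEL, SEQ_LEN = 50, 32, 8

class TinyTransformerLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=4, dim_feedforward=64, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_MODEL, VOCAB)  # next-token prediction

    def forward(self, tokens, mask=None):
        h = self.body(self.embed(tokens), mask=mask)  # (batch, seq, d_model)
        return self.lm_head(h), h

model = TinyTransformerLM()
tokens = torch.randint(0, VOCAB, (16, SEQ_LEN))

# --- Stage 1: unsupervised pre-training (predict the next token) ---
causal = nn.Transformer.generate_square_subsequent_mask(SEQ_LEN - 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):
    logits, _ = model(tokens[:, :-1], mask=causal)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: supervised fine-tuning (task head on the shared body) ---
clf_head = nn.Linear(D_MODEL, 2)          # hypothetical 2-class task
labels = torch.randint(0, 2, (16,))
opt2 = torch.optim.Adam(
    list(model.parameters()) + list(clf_head.parameters()), lr=1e-3)
_, h = model(tokens)
clf_logits = clf_head(h.mean(dim=1))      # mean-pool over the sequence
clf_loss = nn.functional.cross_entropy(clf_logits, labels)
clf_loss.backward(); opt2.step()

print(clf_logits.shape)
```

Note one simplification: the original system is a decoder-style causal transformer trained on real text, whereas this sketch reuses `nn.TransformerEncoder` with a causal mask on random tokens purely to show the two-stage structure.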
Who Needs to Know This

NLP researchers and AI engineers can use this approach to improve language-understanding models; product managers can leverage these advances to build more accurate language-based products.

Key Insight

💡 Pairing supervised learning methods with unsupervised pre-training can significantly improve language understanding

Share This
💡 OpenAI's new approach achieves state-of-the-art results in language understanding with transformers & unsupervised pre-training!