StableLM: An Open-Source Large Language Model by the Stability AI Team
StableLM is a fully open-source large language model from the Stability AI team, released in 3B- and 7B-parameter versions. It was trained on The Pile dataset and fine-tuned on five conversational datasets: Stanford's Alpaca, Nomic AI's GPT4All, RyokoAI's ShareGPT52K, Databricks' Dolly, and Anthropic's HH.
== Video Timeline ==
(00:00) Content Intro
(00:35) Introducing StableLM
(01:19) StableLM Model Intro
(03:03) Quick Demo at Hugging Face Space
(04:10) 3B and 7B Parameter Models
(05:27) Model Training Info
(08:30) Model Context Length
(09:20) Coding Walkthrough
(12:09) Conclusion
=== Resources ===
- https://…
DeepCamp AI