GenAI with the TensorRT-LLM Inference Engine: Simplifying Documentation Review on Atlassian Confluence
The application is powered by NVIDIA's TensorRT-LLM inference engine, a weight-only-quantized Llama-2 13B model, LlamaIndex, LangChain, and Streamlit, and runs locally on a Windows machine with an RTX 4090 GPU.
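The stack above follows a standard retrieval-augmented generation (RAG) flow: Confluence pages are chunked and indexed, the chunks most relevant to a question are retrieved, and a prompt is assembled for the local Llama-2 model. A minimal sketch of that flow is below; the keyword-overlap scorer is a hypothetical stand-in for the LlamaIndex/LangChain retrieval layer, so the logic is runnable without a GPU or the TensorRT-LLM runtime.

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Split a page into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question (toy stand-in
    for the vector-similarity retrieval the real app performs)."""
    q = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(context: list[str], question: str) -> str:
    """Assemble the grounded prompt sent to the local LLM."""
    ctx = "\n---\n".join(context)
    return (f"Answer using only this documentation:\n{ctx}\n\n"
            f"Question: {question}")

if __name__ == "__main__":
    pages = [
        "The deployment pipeline runs nightly and publishes to staging.",
        "Access requests for Confluence spaces go through the IT portal.",
    ]
    chunks = [c for page in pages for c in chunk(page)]
    question = "How do I request Confluence access?"
    print(build_prompt(retrieve(chunks, question), question))
```

In the actual application these steps are delegated to LlamaIndex (indexing and retrieval) and TensorRT-LLM (generation); the sketch only illustrates the data flow between them.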
Motivation:
Confluence by Atlassian has improved project documentation, addressing timeline discrepancies and scope creep. However, documentation overload remains a challenge, leaving new hires to navigate extensive documentation at the risk of burnout.
Chloe, a new hire, has been invited to a project on Atlassian Confluence. She feels nearly overwhelmed by the l…
Watch on YouTube ↗