Building AI productivity assistants with LLMs and Love.
Presented by: SanjayAnoobOnAJourney
2 months ago

This session will focus on building an AI assistant from scratch using local large language models. We will explore different approaches and the simplest concepts needed to build one.
We will also dive into the complexities of integrating these systems and the inference stack used to run them, and touch on the basic techniques of fine-tuning on datasets. We will do this using simple tools that make it easy for anyone to get started, such as LM Studio, Transformer Lab, and so on.
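As a taste of how simple the tooling has become: LM Studio can expose a local HTTP server that speaks the OpenAI chat-completions protocol (by default at `http://localhost:1234/v1`). The sketch below builds such a request and sends it to the local server; the function names, the system prompt, and the default model placeholder are illustrative assumptions, not part of the session material.

```python
import json
from urllib import request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# The default base URL is http://localhost:1234/v1 (configurable in the app).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Assemble an OpenAI-style chat-completion payload for a local model."""
    return {
        "model": model,  # LM Studio routes to whichever model is loaded
        "messages": [
            # Hypothetical system prompt for a productivity assistant.
            {"role": "system", "content": "You are a helpful productivity assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

def ask_local_llm(prompt):
    """Send the request to the local server; requires LM Studio to be
    running with a model loaded."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the LM Studio server running, a call like `ask_local_llm("Summarise my top three tasks for today.")` returns the model's reply as a plain string, with no cloud API key involved.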

We will also dive deep into the ethical considerations and the future of products built on top of these large language models, including the ability to run them on the edge. Finally, we will look at potential business models and close with a live demo of the current software.

  • Deep Dive
