•   almost 2 years ago

RAG using Snowflake Arctic and Langchain

Hi all, are there any guides to implement RAG using Snowflake Arctic + Langchain?

  • 9 comments

  • Manager   •   almost 2 years ago

    Hey Karthick! One of the organizers here :)

    Great question!! You can grab the Arctic embed models with LangChain using the Hugging Face embeddings connector. Check out this guide for a code sample: https://python.langchain.com/docs/integrations/providers/snowflake/
    You can then follow this guide for implementing RAG with Streamlit: https://blog.streamlit.io/langchain-tutorial-4-build-an-ask-the-doc-app/

    Excited to see what you build!
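    In case a compact picture of the retrieve step helps, here is a rough sketch. The toy_embed function below is a hypothetical stand-in so the snippet runs without downloading a model; in a real app you would replace it with embed_query from HuggingFaceEmbeddings (as in the LangChain guide above) and use a proper vector store.

    ```python
    import math
    from collections import Counter

    # Hypothetical stand-in for a real embedding model (e.g. Arctic embed
    # via LangChain's HuggingFaceEmbeddings). It maps text to a sparse
    # bag-of-words vector just so the example is self-contained.
    def toy_embed(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    docs = [
        "Snowflake Arctic is an open LLM for enterprise workloads.",
        "Streamlit lets you build data apps in pure Python.",
        "LangChain chains together LLM calls and retrieval steps.",
    ]

    def retrieve(query, k=1):
        # Embed the query, rank documents by similarity, return the top k.
        q = toy_embed(query)
        ranked = sorted(docs, key=lambda d: cosine(q, toy_embed(d)), reverse=True)
        return ranked[:k]

    print(retrieve("How do I build a Streamlit app?"))
    ```

    The retrieved passages then get stuffed into the prompt you send to the Arctic LLM, which is the generate half of RAG covered in the Streamlit tutorial.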

  •   •   almost 2 years ago

    Thank you so much, Anna. Appreciate the prompt reply.

  •   •   almost 2 years ago

    The embedding endpoint appears a bit slow. Any tips?

  • Manager   •   almost 2 years ago

    Maxwell, thanks for flagging! Will investigate why this is slow for you.

    In the meantime, you could try pulling the embed model and running it locally to see if that speeds it up.
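    One more speed tip, sketched with a hypothetical FakeEmbedder standing in for the real model: embed your documents in batches with embed_documents rather than calling embed_query once per document, since batched calls amortize the per-request overhead.

    ```python
    # Hypothetical embedder that mimics the LangChain Embeddings interface
    # (embed_query / embed_documents); swap in HuggingFaceEmbeddings here.
    class FakeEmbedder:
        calls = 0

        def embed_query(self, text):
            FakeEmbedder.calls += 1   # one round trip per text
            return [float(len(text))]

        def embed_documents(self, texts):
            FakeEmbedder.calls += 1   # one round trip for the whole batch
            return [[float(len(t))] for t in texts]

    docs = [f"document {i}" for i in range(100)]
    emb = FakeEmbedder()

    # Slow pattern: one call (round trip) per document.
    FakeEmbedder.calls = 0
    _ = [emb.embed_query(d) for d in docs]
    per_doc_calls = FakeEmbedder.calls

    # Fast pattern: one batched call for all documents.
    FakeEmbedder.calls = 0
    _ = emb.embed_documents(docs)
    batched_calls = FakeEmbedder.calls

    print(per_doc_calls, batched_calls)
    ```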

  • Manager   •   almost 2 years ago

    Another thing you could try: using streaming to cut down on response lag while you wait for the result (e.g. https://www.youtube.com/watch?v=AM77pbogh5s&t=301s)
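    To illustrate the streaming idea with a pure-Python sketch (the fake_llm_stream generator is a hypothetical stand-in for a real streaming LLM call): instead of blocking on the full answer, you consume tokens as they arrive, and in a Streamlit app you can pass such a generator straight to st.write_stream.

    ```python
    # Hypothetical generator standing in for a streaming LLM response;
    # real LangChain models expose a similar .stream(...) iterator.
    def fake_llm_stream(prompt):
        for token in ["Arctic ", "is ", "an ", "open ", "LLM."]:
            # time.sleep(0.05)  # uncomment to simulate network latency
            yield token

    # Display tokens as they arrive instead of waiting for the whole answer.
    # In a Streamlit app this loop becomes: st.write_stream(fake_llm_stream(prompt))
    chunks = []
    for tok in fake_llm_stream("What is Arctic?"):
        chunks.append(tok)

    print("".join(chunks))
    ```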

  •   •   almost 2 years ago

    Hi, how do I resolve the error below?
    ValidationError: 2 validation errors for LLMChain llm instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable) llm instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)

    from langchain.embeddings import HuggingFaceEmbeddings
    llm = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-l")

  • Manager   •   almost 2 years ago

    Hi Netravati!

    Thanks for sharing the error message.

    I tried to reproduce the error but wasn't able to with the example code below. Does it run for you?

    from langchain_community.embeddings import HuggingFaceEmbeddings
    import streamlit as st
    llm = HuggingFaceEmbeddings(model_name="snowflake/arctic-embed-l")
    st.write(llm.embed_query("This is a test."))

    If the above works for you, the problem is likely in how the pieces are wired together: the llm parameter of LLMChain expects an actual LLM (a Runnable), but HuggingFaceEmbeddings is an embeddings object, so passing it in as llm produces exactly this validation error. Use the embeddings to build your vector store for retrieval, and pass a separate LLM to the chain. If the snippet doesn't run either, I would suggest doing a fresh install of all the dependencies in a new Python environment to make sure you aren't running into version issues.
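    To make the type mismatch concrete, here is a tiny sketch (both classes are hypothetical stand-ins, just mimicking the shapes involved, not the real LangChain types): the chain constructor validates that llm is a Runnable, and an embeddings object isn't one, which is what the ValidationError is reporting.

    ```python
    # Hypothetical minimal stand-ins for the real LangChain types, only to
    # show why passing an embeddings object as `llm` fails validation.
    class Runnable:                  # the shape LLMChain expects for `llm`
        def invoke(self, prompt):
            return "some completion"

    class Embeddings:                # the shape HuggingFaceEmbeddings has
        def embed_query(self, text):
            return [0.1, 0.2]

    def make_chain(llm):
        # Mirrors the pydantic check behind "instance of Runnable expected".
        if not isinstance(llm, Runnable):
            raise TypeError("llm: instance of Runnable expected")
        return llm

    err = None
    try:
        make_chain(Embeddings())     # reproduces the reported error
    except TypeError as e:
        err = str(e)

    print(err)
    print(make_chain(Runnable()).invoke("hello"))
    ```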

  •   •   almost 2 years ago

    Thanks, Anna, for the reply to my original question. To clarify: we are allowed to use other LLMs, as long as we use at least one Snowflake model (embed or Arctic)?

  • Manager   •   almost 2 years ago

    Hey Maxwell!

    The question of using other LLMs is addressed here: https://arctic-streamlit-hackathon.devpost.com/forum_topics/38712-are-we-allowed-to-use-a-non-snowflake-ai-model-for-tasks-which-snowflake-arctic-cannot-handle

    The core of your solution should be focused on showcasing the Arctic LLM -- but if you need to use another library as a helper function for a subset of your project, you may do so, so long as the focus remains on Arctic as the base. Hope this helps!

Comments are closed.