Check our .env.example file to see the possible environment variables you can configure. Quivr supports APIs from Anthropic, OpenAI, and Mistral. It also supports local models using Ollama.
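For example, here is a minimal sketch of loading a provider key from a `.env` file with python-dotenv before creating a Brain. The variable name below follows the usual OpenAI convention; check `.env.example` for the exact names Quivr expects:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Load variables from a .env file in the current directory.
load_dotenv()

# OPENAI_API_KEY is the conventional name for the OpenAI key; swap in the
# variable for your provider (Anthropic, Mistral, Ollama, ...) as listed
# in .env.example.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY (or your provider's key) in .env")
```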
Create a Brain with the default Quivr configuration

```python
from quivr_core import Brain

brain = Brain.from_files(
    name="my smart brain",
    file_paths=["/my_smart_doc.pdf", "/my_intelligent_doc.txt"],
)
```
Launch a Chat
```python
brain.print_info()

from rich.console import Console
from rich.panel import Panel
from rich.prompt import Prompt

console = Console()
console.print(Panel.fit("Ask your brain !", style="bold magenta"))

while True:
    # Get user input
    question = Prompt.ask("[bold cyan]Question[/bold cyan]")

    # Check if user wants to exit
    if question.lower() == "exit":
        console.print(Panel("Goodbye!", style="bold yellow"))
        break

    answer = brain.ask(question)
    # Print the answer
    console.print(f"[bold green]Quivr Assistant[/bold green]: {answer.answer}")
    console.print("-" * console.width)

brain.print_info()
```
And now you are all set up to talk with your brain!
If you want to change the language model or the embeddings model, you can modify the parameters of the brain. Let's say you want to use an LLM from Mistral and a specific embedding model:
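Here is a minimal sketch of what that could look like. It assumes `Brain.from_files` accepts `llm` and `embedder` parameters and that `LLMEndpoint`/`LLMEndpointConfig` are importable from the module paths shown; check the quivr_core API reference for the exact signatures:

```python
from quivr_core import Brain
from quivr_core.llm.llm_endpoint import LLMEndpoint  # assumed module path
from quivr_core.config import LLMEndpointConfig  # assumed module path
from langchain_mistralai import MistralAIEmbeddings  # pip install langchain-mistralai

brain = Brain.from_files(
    name="my smart brain",
    file_paths=["/my_smart_doc.pdf", "/my_intelligent_doc.txt"],
    # Use a Mistral chat model instead of the default LLM.
    llm=LLMEndpoint.from_config(LLMEndpointConfig(model="mistral-small-latest")),
    # Use Mistral's hosted embedding model instead of the default embedder.
    embedder=MistralAIEmbeddings(model="mistral-embed"),
)
```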
Note: The Embeddings class from LangChain lets you choose from a large variety of embedding models. You can configure different models like OpenAI, Cohere, or HuggingFace embeddings based on your needs.
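For instance, here is a sketch of swapping in a local HuggingFace embedding model. The model name below is just one common choice, and passing it via the `embedder` parameter assumes the signature shown above:

```python
from langchain_huggingface import HuggingFaceEmbeddings  # pip install langchain-huggingface

# A small, widely used sentence-transformers model that runs locally,
# so no API key is required for embeddings.
embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Then pass it to the brain, e.g.:
# brain = Brain.from_files(name="my smart brain", file_paths=[...], embedder=embedder)
```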