RAG fundamentals
Forget about Ctrl+F! 🤯 With RAG, your documents will answer your questions directly. 😎 Step-by-step tutorial with Hugging Face and ChromaDB. Unleash the power of AI (and show off to your friends)! 💪
Install system requirements
sudo apt update
sudo apt install -y python3 python3-pip git
From here, if you have a Debian-based system you can use the alfred.deb installer.
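As a rough sketch, assuming the downloaded package is named alfred.deb (the actual file name depends on the release you download), it could be installed with dpkg:
# Assumed file name; use the .deb you actually downloaded
sudo dpkg -i alfred.deb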
Install Python requirements
pip install halo
pip install --upgrade openai
Create source folder
sudo rm -r /usr/src/alfred
cd /usr/src
git clone -b branch_v1.3 https://github.com/maximofn/alfred.git
cd /usr/src/alfred
sudo find . -depth -not -name '*.py' -delete
Create an alias so you can run alfred from anywhere
echo 'alias alfred="/usr/src/alfred/alfred.py"' >> ~/.bashrc
Reload your bash configuration
source ~/.bashrc
Log in to OpenAI and get your OpenAI API key
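A common pattern, sketched here as an assumption (check the alfred repository for the exact mechanism it expects), is to expose the key through the standard OPENAI_API_KEY environment variable:
# Assumption: the tool reads the standard OPENAI_API_KEY environment variable
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
source ~/.bashrc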
You can ask alfred specific questions by typing alfred followed by your question.
Or type alfred, press Enter, and keep asking it questions. To finish, type exit.
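For illustration (the questions here are made up; answers will vary):
# One-off question
alfred "How do I uncompress a .tar.gz file?"
# Interactive mode: start alfred, ask as many questions as you like, then type exit
alfred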
If you like it consider giving the repository a star ⭐, but if you really like it consider buying me a coffee ☕.
😠 Are your commits written in an alien language? 👽 Join the club! 😅 Learn Conventional Commits in Python and stop torturing your team with cryptic messages. git-changelog and commitizen will be your new best friends. 🤝
Have you ever talked to an LLM and gotten an answer that sounds like it's been drinking vending-machine coffee all night? 😂 That's what we call a hallucination in the LLM world! But don't worry, it's not that your language model is crazy (although it can sometimes seem that way 🤪). The truth is that LLMs can be a bit... creative when it comes to generating text. But thanks to DoLa, a decoding method that contrasts layers to improve the factuality of LLMs, we can keep our language models from turning into science fiction writers 😂. In this post, I'll explain how DoLa works and show you a code example so you can better understand how to make your LLMs more reliable and less prone to making up stories. Let's save our LLMs from insanity and make them more useful! 🚀
Is your colleague Patric writing code that is hard to read? Share with him the code formatter I show you in this post! Come in and learn how to format code to make it more understandable. We are not going to solve Patric's problems, but at least you won't suffer when reading his code.
Declare neural networks clearly in PyTorch
Get data from dictionaries in Python safely
Hugging Face Spaces let us run models with very simple demos, but what if the demo breaks? Or if the user deletes it? That's why I've created Docker containers for some interesting Spaces, so they can be used locally no matter what happens. In fact, if you click on any project's view button, it may take you to a Space that no longer works.
Dataset with jokes in English
Dataset with translations from English to Spanish
Dataset with Netflix movies and series