Are there any Free Software diffusion LLMs on Hugging Face yet?
Jean Louis
JLouisBiz
AI & ML interests
- LLM for sales, marketing, promotion
- LLM for Website Revision System
- increasing quality of communication with customers
- helping clients access information faster
- saving people from financial troubles
Recent Activity
- New activity about 18 hours ago in eaddario/Watt-Tool-8B-GGUF: Problem with the license, this is not really free software
- New activity about 23 hours ago in meditsolutions/medit-one-140M-9B-tokens-checkpoint: Question on meaning of parameter of this model
JLouisBiz's activity
- Problem with the license, this is not really free software (2) · #1 opened 5 days ago by JLouisBiz

- Question on meaning of parameter of this model (2) · #2 opened 1 day ago by JLouisBiz

- Can't install (2) · #1 opened 3 days ago by JLouisBiz


- Reacted to as-cle-bert's post 3 days ago:
I just released a fully automated evaluation framework for your RAG applications!
GitHub: https://github.com/AstraBert/diRAGnosis
PyPI: https://pypi.org/project/diragnosis/
It's called diRAGnosis and is a lightweight framework that helps you diagnose the performance of LLMs and retrieval models in RAG applications.
You can launch it as an application locally (it's Docker-ready!) or, if you want more flexibility, you can integrate it into your code as a Python package.
The workflow is simple:
- You choose your favorite LLM provider and model (supported, for now, are Mistral AI, Groq, Anthropic, OpenAI and Cohere)
- You pick the embedding model provider and the embedding model you prefer (supported, for now, are Mistral AI, Hugging Face, Cohere and OpenAI)
- You prepare and provide your documents
- Documents are ingested into a Qdrant vector database and transformed into a synthetic question dataset with the help of LlamaIndex
- The LLM is evaluated for the faithfulness and relevancy of its retrieval-augmented answers to the questions
- The embedding model is evaluated for hit rate and mean reciprocal rank (MRR) of the retrieved documents (see the metric sketch below)
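For reference, hit rate and MRR (the retrieval metrics in the last step) can be computed as in the sketch below. This is a minimal illustration of the metrics themselves, with made-up document IDs; it is not diRAGnosis's own implementation.

    # Illustration of hit rate and MRR for a retrieval evaluation like the one
    # described above; NOT diRAGnosis's code, and the document IDs are made up.
    from typing import List

    def hit_rate(ranked: List[List[str]], relevant: List[str], k: int = 5) -> float:
        """Fraction of queries whose relevant document appears in the top-k results."""
        hits = sum(1 for docs, rel in zip(ranked, relevant) if rel in docs[:k])
        return hits / len(relevant)

    def mean_reciprocal_rank(ranked: List[List[str]], relevant: List[str]) -> float:
        """Average of 1/rank of the first relevant document per query (0 if absent)."""
        total = 0.0
        for docs, rel in zip(ranked, relevant):
            for rank, doc_id in enumerate(docs, start=1):
                if doc_id == rel:
                    total += 1.0 / rank
                    break
        return total / len(relevant)

    # Two synthetic questions, each with one known source document.
    retrieved = [["doc3", "doc1", "doc7"], ["doc2", "doc5", "doc4"]]
    expected = ["doc1", "doc9"]
    print(hit_rate(retrieved, expected, k=3))         # 0.5
    print(mean_reciprocal_rank(retrieved, expected))  # 0.25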
And the cool thing is that all of this is intuitive and completely automated: you plug it in, and it works!
Even cooler? This is all built on top of LlamaIndex and its integrations: no need for tons of dependencies or fancy workarounds.
And if you're a UI lover, Gradio and FastAPI are there to provide you with a seamless backend-to-frontend experience.
So now it's your turn: you can either get diRAGnosis from GitHub (https://github.com/AstraBert/diRAGnosis)
or just run a quick and painless:
    uv pip install diragnosis
to get the package installed (lightning-fast) in your environment.
Have fun, and feel free to leave feedback and feature/integration requests in GitHub issues.
- You got a serious licensing problem · #8 opened 3 days ago by JLouisBiz

- Can you publish it under a free software license like MIT, Apache 2.0, or some other? · #2 opened 3 days ago by JLouisBiz

- For anyone who is wondering what is going on here with all the "reports" (31) · #168 opened 16 days ago by ufwd1984

Disable "gated access", it is Apache 2
4
#6 opened 3 months ago
by
kno10

- Replied to ZennyKenny's post 4 days ago:
Meta's LLaMa 2 license is not Open Source (Open Source Initiative)
https://opensource.org/blog/metas-llama-2-license-is-not-open-source
Not free, so I can't use it. Can you use some free-as-in-freedom model and make such a model?
- USA/West Propaganda hugging face of huggingface (19) · #230 opened 13 days ago by devops724
That is a long time, 18 hours, to get an answer.

- Reacted to AdinaY's post 5 days ago:
CogView-4 is out! The SoTA open text-to-image model by ZhipuAI.
Model: THUDM/CogView4-6B
Demo: THUDM-HF-SPACE/CogView4
- 6B parameters, Apache 2.0 license
- Supports Chinese & English prompts of any length
- Generates Chinese characters within images
- Creates images at any resolution within a given range (see the usage sketch below)
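A minimal generation sketch follows, assuming the diffusers CogView4Pipeline integration referenced on the THUDM/CogView4-6B model card; the prompt, resolution, and sampling settings are illustrative rather than official defaults.

    # Sketch only: assumes diffusers ships a CogView4Pipeline for THUDM/CogView4-6B.
    # Prompt and settings are illustrative.
    import torch
    from diffusers import CogView4Pipeline

    pipe = CogView4Pipeline.from_pretrained(
        "THUDM/CogView4-6B", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")  # or pipe.enable_model_cpu_offload() on smaller GPUs

    image = pipe(
        prompt="A red paper lantern with the characters 新年快乐 painted on it",
        width=1024,               # any supported resolution within the model's range
        height=1024,
        num_inference_steps=50,
        guidance_scale=3.5,
    ).images[0]
    image.save("cogview4_sample.png")

Since CogView-4 is a text-to-image diffusion model, it is normally run through an image-generation pipeline like the one above rather than a text-generation runtime.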
Thanks, how can I reproduce it? I am using llama.cpp, do you maybe have a recipe?