lora concepts library

AI & ML interests

None defined yet.

Recent Activity

Yardenfren  updated a model 5 months ago
lora-library/B-LoRA-toy_storee
Yardenfren  updated a model 7 months ago
lora-library/B-LoRA-drawing2
Yardenfren  updated a model 7 months ago
lora-library/B-LoRA-painting

lora-library's activity

ehristoforu 
posted an update 3 days ago
✒️ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training large language models (LLMs) with supervised fine-tuning (SFT). It contains over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem of choosing an appropriate dataset for LLM training: it combines the various types of data needed to strengthen a model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 For effective use, it is recommended to train on only the "instruction," "input," and "output" columns for 1-3 epochs. The dataset does not include DPO or Instruct-style data, making it suitable for training many kinds of LLMs.

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
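A minimal sketch of that recipe, assuming the dataset exposes a "train" split and the three recommended columns; the Alpaca prompt template below is the usual convention, not something specified in the post:

from datasets import load_dataset

# load Ultraset and keep only the three recommended columns
ds = load_dataset("fluently-sets/ultraset", split="train")
ds = ds.select_columns(["instruction", "input", "output"])

def to_alpaca_text(example):
    # standard Alpaca template: instruction, optional input, expected response
    if example["input"]:
        prompt = (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n### Response:\n"
        )
    else:
        prompt = f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
    return {"text": prompt + example["output"]}

ds = ds.map(to_alpaca_text)
print(ds[0]["text"][:300])  # the "text" column is what an SFT trainer would consume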
akhaliq 
posted an update 5 days ago
Google drops Gemini 2.0 Flash Thinking

A new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans with its thoughts visible, solves complex problems at Flash speeds, and more.

now available in anychat, try it out: akhaliq/anychat
akhaliq 
posted an update 27 days ago
QwQ-32B-Preview is now available in anychat

A reasoning model that is competitive with OpenAI o1-mini and o1-preview

try it out: akhaliq/anychat
akhaliq 
posted an update 27 days ago
New model drop in anychat

allenai/Llama-3.1-Tulu-3-8B is now available

try it here: akhaliq/anychat
xiaozaa 
posted an update 28 days ago
Hey everyone! 👋 Just launched a cool virtual try-on demo on Hugging Face Spaces! 🚀
Try on any upper body garment with just 3 simple steps:
📸 Upload your photo
✏️ Draw a quick mask
👕 Add the garment image
Super accurate results and really easy to use! Give it a spin and let me know what you think 🤗

Find it here:
xiaozaa/catvton-flux-try-on

akhaliq 
posted an update about 1 month ago
anychat

Supports ChatGPT, Gemini, Perplexity, Claude, Meta Llama, and Grok, all in one app.

Try it out here: akhaliq/anychat
not-lain 
posted an update about 1 month ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load_img accepts a local path, URL, PIL image, or numpy array
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

# stream the chat completion from the Inference API
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
zamal 
posted an update 2 months ago
🚀 Announcement for the Lovely community! 🚀

Just launched the zamal/DeepSeek-VL-1.3B-Chat Space on Hugging Face, and it's ready for YOU to explore! 💬🖼️

This full-fledged model is perfect for advanced image and text interactions, with no local GPU required. DeepSeek VL-1.3B Chat typically needs around 8 GB of VRAM and almost 4 GB of storage, but now you can experience it hassle-free right in our Space!

Want something lighter? We've also uploaded a 4-bit quantized version (just around 1 GB!), available on my profile. Perfect for those with limited hardware. 🌍🔍

Come try it now and see what this model can do! 🚀✨

zamal 
posted an update 2 months ago
Hello, lovely community! 🌟

Thrilled to announce that the Molmo 7B 4-bit Space (zamal/Molmo-4bit) is now live! 🚀 The model size has been reduced six-fold with almost no performance loss, and the results will leave you amazed!

It runs on zero GPU, making it incredibly accessible for everyone!

Check it out here and start exploring today!

Happy experimenting! 🎉
zamal 
posted an update 3 months ago
🚀 New Model Release: zamal/Molmo-7B-GPTQ-4bit 🚀

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available for all! It has been heavily quantized, reducing its size by almost six times, so it occupies significantly less disk space and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

Now we get:
Efficient performance: maintains high accuracy while being heavily quantized.
Reduced size: the model is nearly six times smaller, optimizing storage and memory usage.
Versatile application: ideal for integrating a powerful visual language model into various projects, particularly multi-RAG chains.
Check it out!
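
As a rough idea of integration, here is a minimal sketch of loading the checkpoint with transformers, assuming the repo keeps the Molmo trust_remote_code interface from the original allenai release; the image path and generation settings are placeholders, not values from the model card:

from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo_id = "zamal/Molmo-7B-GPTQ-4bit"

# assumes the repo exposes Molmo's custom processor/model classes via trust_remote_code
processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# preprocess one image and a prompt with Molmo's .process() helper
inputs = processor.process(images=[Image.open("example.jpg")], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=128, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))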

ehristoforu 
posted an update 5 months ago
😏 Hello from Project Fluently Team!

✨ We can finally share some details about Supple Diffusion. We have worked on it for a long time and there is little left to do; we apologize that the work has taken longer than planned.

🛠️ Some technical information. The first version will be the Small version (there will also be Medium, Large, Huge, and possibly Tiny). It will be based on the SD1 architecture, that is, one text encoder, a U-Net, and a VAE. About each component: the text encoder will be a CLIP model (perhaps not CLIP-L-patch14), which we specially retrained so that the model understands completely different styles and so that prompts can be kept as simple as possible. The U-Net was built in a rather involved way: first we trained separate U-Nets on different types of data, then merged them using different methods, then trained with DPO and SPO, and finally addressed the remaining shortcomings with further training; details will come later. The VAE is left the same as in the SD1 architecture.

🙌 Compatibility. Another goal of the Supple series is full compatibility with Auto1111 and ComfyUI from the moment of release: the model is fully supported by these interfaces and by the diffusers library and does not require adaptation. Your usual sampling methods, such as DPM++ 2M Karras and DPM++ SDE, are also compatible.
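
To illustrate what that compatibility means for diffusers users, here is a minimal sketch for an SD1-architecture checkpoint with a DPM++ 2M Karras scheduler; the repo id is hypothetical, since the model has not been released yet:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "fluently/supple-diffusion-small",  # hypothetical repo id, model not released yet
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M Karras, one of the samplers the post says will work out of the box
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("a watercolor painting of a lighthouse at dawn", num_inference_steps=25).images[0]
image.save("supple_small_demo.png")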

🧐 No demo images today (there wasn't much time). Final work on the model is underway and we are already preparing to develop the Medium version; the release of the Small version will most likely be in mid-August or earlier.

😻 Feel free to ask your questions in the comments below the post, we will be happy to answer them, have a nice day!
not-lain 
posted an update 5 months ago
I am now a huggingface fellow 🥳
not-lain 
posted an update 6 months ago
I have finished writing a blog post about building an image-based retrieval system. This is one of the first-ever approaches to building such a pipeline using only open-source models/libraries 🤗

You can check out the blog post at https://huggingface.co./blog/not-lain/image-retriever and the associated space at not-lain/image-retriever.

✨ If you want to request another blog post, let me know down below, or reach out to me through any of my social media.

📖 Happy reading !
ehristoforu 
posted an update 6 months ago
🤗 Hello from the Project Fluently team!

🥏 We are ready to announce a new series of Supple Diffusion models: new-generation diffusion models (about 1-2 weeks left before release).

🦾 The new series aims to take diffusion models to the next level, with performance and versatility as the main goal.

🧐 How will our models be better than others? Firstly, we worked on the CLIP models: they now understand your requests better, making prompts easier to process. Secondly, we trained the models to a higher quality, even better than all our previous ones. Thirdly, you won't have to keep 20 models on your disk; 4-6 will be enough.

🗺️ Roadmap:
1. Create Supple Diffusion Small
2. Create Supple Diffusion Medium
3. Create Supple Diffusion Large

🎆 Our models are universal: they work for realism, cartoons, anime, and caricatures.

💖 The project really needs your support, recommendations, and reviews; please do not hesitate to leave comments under this post. Thank you!

🖼️ Below are demo images made with the pre-release version of Supple Diffusion Small.
not-lain 
posted an update 6 months ago
Hello beautiful people.
I wanted to thank everyone who read my blog post, and I am glad to share that we have reached 11,000 readers 🥳
I couldn't have done this without you, so once again thanks a lot everyone for the support 💖
If you haven't already, you can read my blog post at: https://huggingface.co./blog/not-lain/rag-chatbot-using-llama3
ehristoforu 
posted an update 7 months ago
🦾 Hello, I present Visionix Alpha, a new hyper-realistic model based on SDXL. The main difference from all existing realism models is the attention to detail: I improved not only hyperrealism but also the overall aesthetics, anatomy, and the beauty of nature, and the model also produces the widest variety of faces. It is suitable not only for realistic photos but also for generating 2.5D anime, realistic cartoons, and more.

🤗 Model on HF: ehristoforu/Visionix-alpha
🥏 Model on CivitAI: https://civitai.com/models/505719
🪄 Playground (with base and inpaint model): ehristoforu/Visionix-Playground

✏️ Inpaint version on HF: ehristoforu/Visionix-alpha-inpainting
🖋️ Inpaint version on CivitAI: https://civitai.com/models/505719?modelVersionId=563519
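
For reference, a minimal diffusers sketch for the HF checkpoint, assuming ehristoforu/Visionix-alpha ships in the standard SDXL diffusers layout (a single-file checkpoint would use StableDiffusionXLPipeline.from_single_file instead); the prompt and settings are illustrative only:

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "ehristoforu/Visionix-alpha",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "ultra-realistic portrait photo, natural light, detailed skin texture",
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("visionix_alpha_demo.png")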