CyberHarem

community

AI & ML interests

Anime Bishojo. This organization is only for waifu datasets and LoRAs.

CyberHarem's activity

ameerazam08 posted an update 29 days ago
not-lain posted an update 30 days ago
not-lain posted an update about 1 month ago
We now have more than 2000 public AI models using ModelHubMixin 🤗
not-lain posted an update about 2 months ago
Published a new blogpost 📖
In this blogpost I walk through the transformer architecture, emphasizing how tensor shapes propagate through each layer.
🔗 https://huggingface.co./blog/not-lain/tensor-dims
Some interesting takeaways:
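As an illustrative sketch (not from the blogpost itself), here is how shapes flow through a single simplified self-attention block; all dimension sizes are arbitrary choices for demonstration:

```python
import numpy as np

# Arbitrary demo dimensions: 2 sequences, 8 tokens each, model width 16.
batch, seq, d_model = 2, 8, 16
x = np.random.randn(batch, seq, d_model)          # token embeddings

# One fused projection producing queries, keys, and values.
W_qkv = np.random.randn(d_model, 3 * d_model)
qkv = x @ W_qkv                                   # (batch, seq, 3 * d_model)
q, k, v = np.split(qkv, 3, axis=-1)               # each (batch, seq, d_model)

# Scaled dot-product attention: scores compare every token pair.
scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_model)   # (batch, seq, seq)
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)

# Weighted sum over values restores the input shape.
out = weights @ v                                 # (batch, seq, d_model)
print(out.shape)
```

Note how attention mixes information across the sequence while leaving the `(batch, seq, d_model)` shape unchanged, which is what lets blocks stack.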
Lewdiculous posted an update 2 months ago
s3nh posted an update 2 months ago
Welcome back,

Small Language Model enthusiasts and GPU-poor OSS enjoyers, let's connect.
I just created an organization whose main goal is to have fun with smaller models, tunable on consumer-range GPUs. Feel free to join and let's have some fun, much love ;3

https://huggingface.co./SmolTuners
lunarflu posted an update 3 months ago
not-lain posted an update 4 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL 👀

You can do that by converting your local image to base64 and sending it to the API.
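As a minimal sketch of that conversion using only the standard library (the filename `cat.png` and its contents are stand-ins for a real local image):

```python
import base64

# Write a tiny placeholder file so this sketch is self-contained;
# in practice this would be an existing local image.
with open("cat.png", "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n")  # PNG magic bytes as stand-in content

# Read the raw bytes and encode them as base64 text.
with open("cat.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Vision APIs that accept base64 typically expect a data URL:
data_url = f"data:image/png;base64,{encoded}"
print(data_url)
```

The data URL can then be sent wherever an `image_url` is expected, with no public hosting required.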

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local image, URL, Pillow image, or numpy array as base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
anoha updated a Space 5 months ago
lunarflu posted an update 6 months ago
not-lain posted an update 7 months ago