Gradio-Blocks-Party

Activity Feed

AI & ML interests

None defined yet.

Recent Activity

not-lain 
posted an update 9 days ago
We now have more than 2,000 public AI models using ModelHubMixin 🤗
meg 
posted an update 12 days ago
💫...And we're live!💫 Seasonal newsletter from ethicsy folks at Hugging Face, exploring the ethics of "AI Agents"
https://huggingface.co./blog/ethics-soc-7
Our analyses found:
- There's a spectrum of "agent"-ness
- *Safety* is a key issue, leading to many other value-based concerns
Read for details & what to do next!
With @evijit, @giadap, and @sasha
not-lain 
posted an update 13 days ago
Published a new blog post 📖
In this blog post I walk through the transformer architecture, emphasizing how tensor shapes propagate through each layer.
🔗 https://huggingface.co./blog/not-lain/tensor-dims
Some interesting takeaways:
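The linked post traces tensor shapes layer by layer; as a rough illustration of that bookkeeping (all sizes here are invented, not taken from the post), the shapes through a single multi-head self-attention block look like:

```python
# invented sizes for illustration
batch, seq_len, d_model, n_heads = 2, 10, 64, 8
d_head = d_model // n_heads  # 8

# input hidden states: (batch, seq_len, d_model)
x = (batch, seq_len, d_model)

# Q, K, V projections keep d_model, then split it across heads:
# (batch, seq_len, d_model) -> (batch, n_heads, seq_len, d_head)
q = k = v = (batch, n_heads, seq_len, d_head)

# attention scores: Q @ K^T contracts d_head
# -> (batch, n_heads, seq_len, seq_len)
scores = (batch, n_heads, seq_len, seq_len)

# scores @ V contracts the second seq_len
# -> (batch, n_heads, seq_len, d_head)
context = (batch, n_heads, seq_len, d_head)

# heads merged back: (batch, seq_len, n_heads * d_head)
out = (batch, seq_len, n_heads * d_head)
assert out == x  # the block preserves the input shape end to end
```

This shape-preservation property is what lets transformer blocks be stacked arbitrarily deep.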
not-lain 
updated a Space about 1 month ago
yangheng 
updated a Space about 2 months ago
not-lain 
posted an update 2 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.
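As a rough sketch of what that conversion involves using only the standard library (the stand-in image bytes here are invented; normally you would read them from a file):

```python
import base64

# stand-in payload; in practice: image_bytes = open("photo.png", "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\nfake-image-data"

# base64-encode the raw bytes into an ASCII string
b64 = base64.b64encode(image_bytes).decode("utf-8")

# APIs that accept base64 images typically expect a data URL
data_url = f"data:image/png;base64,{b64}"
print(data_url[:30])
```

The encoding is lossless, so the server can decode the exact original bytes with `base64.b64decode`.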

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load an image from a local path, URL, PIL image, or numpy array,
# and convert it to a base64 string
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:  # the final streamed chunk may carry no content
        print(content, end="")
Blane187 
posted an update 5 months ago
not-lain 
posted an update 6 months ago
Blane187 
posted an update 6 months ago
Hello everyone! Today I have been working on Blane187/rvc-demo, a demo of RVC installable via pip. This project is still a demo, though (I don't have a beta tester lol).
not-lain 
posted an update 6 months ago
I am now a Hugging Face Fellow 🥳