Demo Corp

AI & ML interests

Making AI decisions understandable and transparent to users

Recent Activity

andrewrreed
posted an update about 1 month ago
Trace LLM calls with Arize AI's Phoenix observability dashboards on Hugging Face Spaces! 🚀

✨ I just added a new recipe to the Open-Source AI Cookbook that shows you how to:
1️⃣ Deploy Phoenix on HF Spaces with persistent storage in a few clicks
2️⃣ Configure LLM tracing with the Serverless Inference API
3️⃣ Observe multi-agent application runs with the CrewAI integration

Observability is crucial for building robust LLM apps.

Phoenix makes it easy to visualize trace data, evaluate performance, and track down issues. Give it a try!

🔗 Cookbook recipe: https://huggingface.co./learn/cookbook/en/phoenix_observability_on_hf_spaces
🔗 Phoenix docs: https://docs.arize.com/phoenix
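The recipe handles the real wiring, but the core idea of a trace span (the unit Phoenix visualizes) fits in a few lines of standard-library Python. A minimal sketch, assuming nothing beyond the stdlib; the `trace_span` helper and `fake_llm_call` below are hypothetical stand-ins for illustration, not the Phoenix SDK:

```python
import time
from contextlib import contextmanager

# Collected spans; a real tracer would export these to a backend
# like Phoenix instead of keeping them in an in-memory list.
SPANS = []

@contextmanager
def trace_span(name, **attributes):
    """Record a named span with wall-clock latency and attributes."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "latency_s": time.perf_counter() - start,
            "attributes": attributes,
        })

def fake_llm_call(prompt):
    # Stand-in for a real model call.
    return f"echo: {prompt}"

with trace_span("llm.chat", model="tgi", prompt_tokens=12):
    reply = fake_llm_call("Why trace LLM calls?")

print(SPANS[0]["name"])  # llm.chat
```

Each span carries a name, a latency, and arbitrary attributes; the dashboards simply aggregate and visualize many of these.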
jeffboudier
posted an update about 1 month ago
nbroadย 
posted an update about 2 months ago
Hi Florent and livestream!
jeffboudier
posted an update 2 months ago
jeffboudier
posted an update 3 months ago
Inference Endpoints got a bunch of cool updates yesterday; here are my top 3.
jeffboudier
posted an update 3 months ago
Pro Tip: if you're a Firefox user, you can set up Hugging Chat as an integrated AI assistant, with contextual links to summarize or simplify any text. Handy!

In this short video I show how to set it up.
derek-thomas
posted an update 4 months ago
Here is an AI Puzzle!
When you solve it just use a 😎 emoji.
NO SPOILERS
A similar puzzle might have each picture that has a hidden meaning of summer, winter, fall, spring, and the answer would be seasons.

It's a little dated now (almost a year old), so the bottom right might be tough.

Thanks to @johko for the encouragement to post!
andrewrreed
posted an update 8 months ago
🔬 Open LLM Progress Tracker 🔬

Inspired by the awesome work from @mlabonne, I created a Space to monitor the narrowing gap between open and proprietary LLMs as scored by the LMSYS Chatbot Arena ELO ratings 🤗

The goal is to have a continuously updated place to easily visualize these rapidly evolving industry trends 🚀

🔗 Open LLM Progress Tracker: andrewrreed/closed-vs-open-arena-elo
🔗 Source of Inspiration: https://www.linkedin.com/posts/maxime-labonne_arena-elo-graph-updated-with-new-models-activity-7187062633735368705-u2jB/
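The headline metric behind a tracker like this reduces to a simple comparison: take the best-scoring open model and the best-scoring proprietary model and subtract. A minimal sketch; the model names and ELO values below are made up for the example, not current leaderboard data:

```python
# Hypothetical sample of Arena ELO scores; real values come from
# the LMSYS Chatbot Arena leaderboard and change over time.
arena_elo = {
    "gpt-4-turbo": 1260,           # proprietary
    "claude-3-opus": 1250,         # proprietary
    "llama-3-70b-instruct": 1210,  # open
    "mixtral-8x7b": 1120,          # open
}
open_models = {"llama-3-70b-instruct", "mixtral-8x7b"}

best_open = max(v for k, v in arena_elo.items() if k in open_models)
best_closed = max(v for k, v in arena_elo.items() if k not in open_models)
gap = best_closed - best_open

print(f"best open: {best_open}, best closed: {best_closed}, gap: {gap}")
# best open: 1210, best closed: 1260, gap: 50
```

Re-running this over leaderboard snapshots across time gives exactly the narrowing-gap curve the Space plots.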
jeffboudier
posted an update 8 months ago
andrewrreed
posted an update 8 months ago
IMO, the "grounded generation" feature from Cohere's CommandR+ has flown under the radar...

For RAG use cases, responses directly include inline citations, making source attribution an inherent part of generation rather than an afterthought 😎

Who's working on an open dataset with this for the HF community to fine-tune with?

🔗 CommandR+ Docs: https://docs.cohere.com/docs/retrieval-augmented-generation-rag

🔗 Model on the 🤗 Hub: CohereForAI/c4ai-command-r-plus
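Citations of this kind make rendering attributions mechanical: each one ties a character span of the response back to source documents. A rough sketch, assuming a citation shape of start/end offsets plus document ids (loosely modeled on the grounded-generation docs; `render_citations` is a hypothetical helper, not part of Cohere's SDK):

```python
def render_citations(text, citations):
    """Insert [doc_id] markers after each cited span.

    `citations` holds dicts with start/end character offsets into
    `text` and the ids of the source documents backing that span.
    """
    out = []
    cursor = 0
    for c in sorted(citations, key=lambda c: c["start"]):
        out.append(text[cursor:c["end"]])                    # text up to span end
        out.append("[" + ",".join(c["document_ids"]) + "]")  # citation marker
        cursor = c["end"]
    out.append(text[cursor:])                                # trailing text
    return "".join(out)

text = "The eiffel tower is 330m tall and was finished in 1889."
citations = [
    {"start": 20, "end": 29, "document_ids": ["doc_0"]},  # "330m tall"
    {"start": 50, "end": 54, "document_ids": ["doc_1"]},  # "1889"
]
print(render_citations(text, citations))
# The eiffel tower is 330m tall[doc_0] and was finished in 1889[doc_1].
```

An open fine-tuning dataset would pair prompts and source documents with responses annotated in exactly this span format.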
jeffboudier
posted an update 9 months ago
derek-thomas
posted an update 10 months ago
andrewrreed
posted an update 11 months ago
🚀 It's now easier than ever to switch from OpenAI to open LLMs

Hugging Face's TGI now supports an OpenAI-compatible Chat Completion API

This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗

⭐ Here's how:
from openai import OpenAI

# initialize the client but point it to TGI
client = OpenAI(
    base_url="<ENDPOINT_URL>" + "/v1/",  # replace with your endpoint url
    api_key="<HF_API_TOKEN>",  # replace with your token
)
chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why is open-source software important?"},
    ],
    stream=True,
    max_tokens=500
)

# iterate and print the stream (the final chunk's delta may have no content)
for message in chat_completion:
    print(message.choices[0].delta.content or "", end="")


🔗 Blog post ➡ https://huggingface.co./blog/tgi-messages-api
🔗 TGI docs ➡ https://huggingface.co./docs/text-generation-inference/en/messages_api
derek-thomas
updated a Space almost 2 years ago