Nice Demo

#1
by awacke1 - opened

Is it possible to make Gradio Blocks API calls across multiple Gradio Spaces using requests, queue the calls, and have the results delivered in parallel to separate outputs, the way gr.Parallel does?

e.g.

import os
import gradio as gr

HF_TOKEN = os.environ.get("HF_TOKEN")  # token stored in this repo's secrets; copy the value from your profile's Access Tokens
generator1 = gr.Interface.load("huggingface/gpt2-large", api_key=HF_TOKEN)
generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B", api_key=HF_TOKEN)  # api_key=HF_TOKEN gets past the quota error
generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", api_key=HF_TOKEN)
gr.Parallel(generator1, generator2, generator3,
            inputs=gr.Textbox(lines=5, label="Enter a sentence to get another sentence."),
            title=title, examples=examples).launch(share=False)  # title and examples defined elsewhere in the app

only instead of generator1, generator2, and generator3 being Interface.load calls, have them be local HF API calls.
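Roughly what I have in mind is something like the sketch below (untested; the thread-pool fan-out, the Inference API endpoint URL, and the model ids are just my assumptions about how it could be wired up):

import os
from concurrent.futures import ThreadPoolExecutor

import gradio as gr
import requests

HF_TOKEN = os.environ.get("HF_TOKEN")
HEADERS = {"Authorization": f"Bearer {HF_TOKEN}"}
MODELS = ["gpt2-large", "EleutherAI/gpt-neo-2.7B", "EleutherAI/gpt-j-6B"]  # same models as above

def query_model(model_id, prompt):
    # One plain requests call against the hosted Inference API for a single model.
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    resp = requests.post(url, headers=HEADERS, json={"inputs": prompt}, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    # text-generation models return a list of dicts with "generated_text"
    return data[0]["generated_text"] if isinstance(data, list) else str(data)

def query_all(prompt):
    # Fan the same prompt out to every model at once and wait for all results.
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        return tuple(pool.map(lambda m: query_model(m, prompt), MODELS))

with gr.Blocks() as demo:
    inp = gr.Textbox(lines=5, label="Enter a sentence to get another sentence.")
    outs = [gr.Textbox(label=m) for m in MODELS]
    gr.Button("Generate").click(query_all, inputs=inp, outputs=outs)

demo.queue().launch(share=False)

Calling other Spaces' prediction endpoints with requests instead of the Inference API should fit the same pattern, since each call is just an independent HTTP request.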
