Exposing carbon / energy figures for my use of HuggingChat
Hi folks,
Given there's a growing body of information about the environmental impact of AI, and that Hugging Face employs its own climate lead, I'm curious: has anyone here experimented with surfacing carbon emissions figures from the use of AI tooling at the point of use on the platform?
I understand that HF supports filtering models based on the carbon emitted during the training phase (see the CO2 emissions docs and the HF Hub blog post below), but I'm less clear on what information is exposed at the inference phase.
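For context, this is roughly what I mean by filtering on training carbon. The snippet below is a sketch based on the "CO2 Emissions and the 🤗 Hub" blog post linked further down; the `emissions_thresholds` argument and the `co2_eq_emissions` card field come from that post, though the exact `huggingface_hub` arguments may have shifted in newer releases, so treat it as illustrative rather than definitive:

```python
from huggingface_hub import HfApi

api = HfApi()

# Sketch based on the "CO2 Emissions and the 🤗 Hub" blog post: list models whose
# model cards report up to 100 g of CO2eq emitted during training. The
# emissions_thresholds / cardData arguments may differ in newer huggingface_hub
# releases, so check the current docs before relying on this.
low_carbon_models = api.list_models(emissions_thresholds=(None, 100), cardData=True)

for model in list(low_carbon_models)[:5]:
    # co2_eq_emissions is the model card metadata field that stores training emissions
    emissions = model.cardData.get("co2_eq_emissions") if model.cardData else None
    print(model.modelId, emissions)
```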
Right now I'm more of a consumer of LLM-based services than a maker of them, so my thinking is that I'd probably need to spin up a dedicated Space. Even then, I'm not clear on what numbers would be exposed to me, or what the billing would look like, because, well... I can't spend any money with HF yet.
If you're further along than me, I'd love to hear about it in the comments below.
Some helpful links I found when looking around:
Displaying carbon emissions for your model (the training phase)
https://huggingface.co./docs/hub/model-cards-co2
CO2 Emissions and the 🤗 Hub: Leading the Charge
https://huggingface.co./blog/carbon-emissions-on-the-hub
Exploring the Carbon Footprint of Hugging Face's ML Models: A Repository Mining Study
https://arxiv.org/pdf/2305.11164v3.pdf
Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning
https://arxiv.org/pdf/2302.08476v1.pdf
In particular, thanks for linking to https://arxiv.org/abs/2305.11164v3, which I had not seen and which looks quite interesting.
(@silveriomf seems to be an author.)
Very interesting subject, indeed! With ML being deployed at scale, the inference phase is becoming more and more critical. I would add to the list this timely paper by @sasha et al.:
Power Hungry Processing: Watts Driving the Cost of AI Deployment?
https://arxiv.org/abs/2311.16863
Thanks for your kind comments on our ESEM paper (https://arxiv.org/abs/2305.11164v3)! ☺️ cc'ing @joelcf001
As part of our GAISSA project ( https://gaissa.upc.edu/en/publications ), we are working on energy efficiency labels for ML training and inference. In case you are interested, we share an initial proof-of-concept based on HF data: https://energy-label.streamlit.app/Efficiency_Label
We would appreciate any thoughts on these energy labels!
Oh neat! I vaguely remember it being mentioned in Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning, but I hadn't realised there was a whole explorable Space that the authors had set up.
It's linked below:
https://huggingface.co./spaces/sasha/CO2_inference
I know I have a bunch of reading ahead of me anyway, but as I find other relevant material I'll try to add it here.
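In the meantime, here's the rough shape of the point-of-use measurement I have in mind if I do end up spinning up my own Space: run the model inside the Space and wrap each request with the open-source codecarbon tracker. The model name below is just a placeholder, and the figure is codecarbon's estimate of emissions for the tracked block rather than anything HF reports or bills:

```python
# pip install codecarbon transformers
from codecarbon import EmissionsTracker
from transformers import pipeline

# Placeholder model for illustration; swap in whatever the Space actually serves.
generator = pipeline("text-generation", model="distilgpt2")

tracker = EmissionsTracker(project_name="point-of-use-inference")
tracker.start()
try:
    output = generator("What is the carbon cost of this answer?", max_new_tokens=50)
finally:
    # stop() returns codecarbon's estimated emissions for the tracked block, in kg CO2eq
    emissions_kg = tracker.stop()

print(output[0]["generated_text"])
print(f"Estimated emissions for this request: {emissions_kg:.6f} kg CO2eq")
```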