Apply for community grant: Academic project (gpu)
Hi! I would like to apply for a GPU grant to support our online demo. Thanks!
Paper Link: https://arxiv.org/abs/2401.10229
Hmm, not sure what is causing the error.
@cbensimon
Could you take a look at this?
By the way, I added you to https://huggingface.co./zero-gpu-explorers so you can create ZeroGPU Spaces freely, as well as move this one back and forth between Zero Nvidia A10G and other hardware.
Hi! I still hit the runtime error: we found that our dependency (mmengine) cannot run on Zero Nvidia A10G.
Hi @cbensimon ,
I have switched to PyTorch 2.0.1. However, I encountered some errors (here). It looks like the function I'm using cannot be compiled. Is there any way to bypass compilation?
Thanks for your help.
The error log is this:
===== Application Startup at 2024-01-23 03:53:10 =====
Traceback (most recent call last):
  File "/home/user/app/main.py", line 12, in <module>
    from mmdet.visualization import DetLocalVisualizer
  File "/home/user/.local/lib/python3.10/site-packages/mmdet/visualization/__init__.py", line 2, in <module>
    from .local_visualizer import DetLocalVisualizer, TrackLocalVisualizer
  File "/home/user/.local/lib/python3.10/site-packages/mmdet/visualization/local_visualizer.py", line 15, in <module>
    from mmengine.visualization import Visualizer
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/visualization/__init__.py", line 2, in <module>
    from .vis_backend import (AimVisBackend, BaseVisBackend, ClearMLVisBackend,
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/visualization/vis_backend.py", line 19, in <module>
    from mmengine.hooks.logger_hook import SUFFIX_TYPE
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/hooks/__init__.py", line 4, in <module>
    from .ema_hook import EMAHook
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/hooks/ema_hook.py", line 8, in <module>
    from mmengine.model import is_model_wrapper
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/model/__init__.py", line 6, in <module>
    from .base_model import BaseDataPreprocessor, BaseModel, ImgDataPreprocessor
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/model/base_model/__init__.py", line 2, in <module>
    from .base_model import BaseModel
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 9, in <module>
    from mmengine.optim import OptimWrapper
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/optim/__init__.py", line 2, in <module>
    from .optimizer import (OPTIM_WRAPPER_CONSTRUCTORS, OPTIMIZERS,
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/optim/optimizer/__init__.py", line 10, in <module>
    from .zero_optimizer import ZeroRedundancyOptimizer
  File "/home/user/.local/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py", line 11, in <module>
    from torch.distributed.optim import \
  File "/home/user/.local/lib/python3.10/site-packages/torch/distributed/optim/__init__.py", line 11, in <module>
    from .functional_adadelta import _FunctionalAdadelta
  File "/home/user/.local/lib/python3.10/site-packages/torch/distributed/optim/functional_adadelta.py", line 20, in <module>
    class _FunctionalAdadelta:
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 1321, in script
    _compile_and_register_class(obj, _rcb, qualified_name)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/_recursive.py", line 51, in _compile_and_register_class
    script_class = torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/_recursive.py", line 867, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/_script.py", line 1338, in script
    ast = get_jit_def(obj, obj.__name__)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/frontend.py", line 297, in get_jit_def
    return build_def(parsed_def.ctx, fn_def, type_line, def_name, self_name=self_name, pdt_arg_types=pdt_arg_types)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/frontend.py", line 335, in build_def
    param_list = build_param_list(ctx, py_def.args, self_name, pdt_arg_types)
  File "/home/user/.local/lib/python3.10/site-packages/torch/jit/frontend.py", line 359, in build_param_list
    raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/home/user/.local/lib/python3.10/site-packages/spaces/zero/torch.py", line 74
    def _tensor_register(*args: Any, **kwargs: Any):
                         ~~~~~~~ <--- HERE
        try:
            device = torch.device(kwargs.get('device', "cpu"))
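The bottom of the log shows the root cause: importing mmengine pulls in torch.distributed.optim, which compiles classes with torch.jit.script, and the function patched in by the spaces package takes *args/**kwargs, which TorchScript does not support. A minimal sketch of that constraint (the wrapper below is a hypothetical stand-in, not the actual spaces/zero/torch.py code):

```python
import torch

# TorchScript rejects any function signature with *args / **kwargs,
# which is the shape of the patched wrapper seen in the log above.
# (Hypothetical minimal function, not the real spaces code.)
def varargs_wrapper(*args, **kwargs):
    return len(args) + len(kwargs)

try:
    torch.jit.script(varargs_wrapper)
except torch.jit.frontend.NotSupportedError as err:
    # Message reads roughly: "Compiled functions can't take variable
    # number of arguments or use keyword-only arguments with defaults"
    print(type(err).__name__)
```

So the failure is not specific to mmengine's own code; any scripting pass that reaches the patched function will trip over the variadic signature.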
@LXT @HarborYuan Due to the time zone difference, it might take some time for @cbensimon to respond, so in the meantime I'll just change the hardware to a10g-small, and we'll see if this Space can run on ZeroGPU later.
Hi @hysts ,
Thanks for your help. It works well now.
@cbensimon Thanks for checking! I haven't looked into the details, but so torch.compile is used by the visualizer DetLocalVisualizer? Not 100% sure, but I feel that a visualizer shouldn't block the use of ZeroGPU, and it would be nice to be able to support torch.compile / torch.jit. Anyway, I think we should keep using a10g-small for this Space, then.
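For anyone hitting the same NotSupportedError, one possible way to bypass TorchScript compilation (a sketch using PyTorch's documented PYTORCH_JIT debug switch; I haven't verified it on ZeroGPU or with this Space's dependencies) is to disable the JIT before torch is first imported, which turns torch.jit.script into a pass-through:

```python
import os

# PYTORCH_JIT=0 disables TorchScript globally: torch.jit.script then
# returns the original Python function unchanged instead of compiling it.
# It must be set before the first `import torch` anywhere in the process.
os.environ["PYTORCH_JIT"] = "0"

import torch

def varargs_wrapper(*args, **kwargs):
    # Would normally be rejected by TorchScript because of *args/**kwargs.
    return len(args)

same_fn = torch.jit.script(varargs_wrapper)  # no-op under PYTORCH_JIT=0
print(same_fn(1, 2, 3))
```

Note this disables scripting for everything in the process, so it's a blunt instrument; it may be acceptable for an inference-only demo, but it is worth testing that the rest of the app still behaves as expected.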