---
library_name: UniDepth
tags:
- monocular-metric-depth-estimation
- pytorch_model_hub_mixin
- model_hub_mixin
repo_url: https://github.com/lpiccinelli-eth/UniDepth
---

This model has been pushed to the Hub using **UniDepth**:

- Repo: https://github.com/lpiccinelli-eth/UniDepth

## Installation

First, install the UniDepth package as follows:

```bash
git clone -b add_hf https://github.com/NielsRogge/UniDepth.git
cd UniDepth
pip install -r requirements.txt
```
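
If you plan to run inference on GPU, you can optionally check that the installed PyTorch build sees your device (this assumes the requirements above pull in `torch`):

```python
import torch

# Quick sanity check: prints the PyTorch version and whether CUDA is available
print(torch.__version__, torch.cuda.is_available())
```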

## Usage

Next, one can load the model and perform inference as follows:

```python
import numpy as np
import torch
from PIL import Image

from unidepth.models import UniDepthV1HF

model = UniDepthV1HF.from_pretrained("nielsr/unidepth-v1-convnext-large")

# Move the model to GPU, if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Load the RGB image as a (C, H, W) tensor; normalization is handled by the model
image_path = "path/to/image.png"  # placeholder: set this to your input image
rgb = torch.from_numpy(np.array(Image.open(image_path))).permute(2, 0, 1)  # C, H, W

predictions = model.infer(rgb)

# Metric depth estimation
depth = predictions["depth"]

# Point cloud in camera coordinates
xyz = predictions["points"]

# Intrinsics prediction
intrinsics = predictions["intrinsics"]
```
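
As a follow-up, here is a minimal sketch (not part of the UniDepth API) for saving a quick visualization of the predicted depth map. It assumes `depth` from the snippet above is a `(1, 1, H, W)` tensor of metric depth in meters and that `matplotlib` is installed:

```python
import matplotlib.pyplot as plt

# Assumes `depth` is a (1, 1, H, W) tensor of metric depth in meters
depth_map = depth.squeeze().detach().cpu().numpy()

plt.imshow(depth_map, cmap="magma")
plt.colorbar(label="depth (m)")
plt.axis("off")
plt.savefig("depth_prediction.png", bbox_inches="tight")
```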