---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: conditioning_image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 111989279184.95
    num_examples: 507050
  download_size: 112032639870
  dataset_size: 111989279184.95
---
# Dataset Card for "hagrid-mediapipe-hands"

This dataset is designed for training a ControlNet on human hands. It pairs each source image with hand landmarks detected by MediaPipe (for more information, refer to the [Hand Landmarker documentation](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker)).
The source image data is from the [HaGRID dataset](https://github.com/hukenovs/hagrid); we use a [modified version from Kaggle](https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. There are 507,050 samples in total, each at a resolution of 512x512.
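The dataset can be loaded with the `datasets` library. A minimal sketch; the repo id below is a placeholder for this dataset's actual Hub path, and streaming is used because the full download is roughly 112 GB:

```python
from datasets import load_dataset

# "user/hagrid-mediapipe-hands" is a placeholder repo id; substitute the
# actual Hub path of this dataset. Streaming avoids downloading ~112 GB.
ds = load_dataset("user/hagrid-mediapipe-hands", split="train", streaming=True)

sample = next(iter(ds))
sample["image"].save("sample_image.png")                   # 512x512 source photo
sample["conditioning_image"].save("sample_landmarks.png")  # MediaPipe landmark drawing
print(sample["text"])                                      # caption string
```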

### Generate MediaPipe annotations
We use the script below to generate the hand-landmark conditioning images. You need to download the `hand_landmarker.task` model file first; for more information, please refer to [the MediaPipe Hand Landmarker documentation](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
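One way to fetch the model file, assuming the URL listed in the MediaPipe documentation at the time of writing:

```python
import urllib.request

# Model URL as listed in the MediaPipe Hand Landmarker documentation;
# check the docs if it has moved.
MODEL_URL = (
    "https://storage.googleapis.com/mediapipe-models/hand_landmarker/"
    "hand_landmarker/float16/1/hand_landmarker.task"
)
urllib.request.urlretrieve(MODEL_URL, "hand_landmarker.task")
```

With the model file in place, the annotation script is: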
```python
import mediapipe as mp
from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from PIL import Image
import cv2
import numpy as np

def draw_landmarks_on_image(rgb_image, detection_result):
  hand_landmarks_list = detection_result.hand_landmarks
  # Draw on a black canvas with the same shape as the input image.
  annotated_image = np.zeros_like(rgb_image)

  # Loop through the detected hands to visualize.
  for hand_landmarks in hand_landmarks_list:

    # Draw the hand landmarks.
    hand_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
    hand_landmarks_proto.landmark.extend([
      landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z) for landmark in hand_landmarks
    ])
    solutions.drawing_utils.draw_landmarks(
      annotated_image,
      hand_landmarks_proto,
      solutions.hands.HAND_CONNECTIONS,
      solutions.drawing_styles.get_default_hand_landmarks_style(),
      solutions.drawing_styles.get_default_hand_connections_style())

  return annotated_image

# Create a HandLandmarker object.
base_options = python.BaseOptions(model_asset_path='hand_landmarker.task')
options = vision.HandLandmarkerOptions(base_options=base_options,
                                       num_hands=2)
detector = vision.HandLandmarker.create_from_options(options)

# Load the input image (ensure 3-channel RGB for mp.ImageFormat.SRGB).
image = np.asarray(Image.open("./test.png").convert("RGB"))
image = mp.Image(
    image_format=mp.ImageFormat.SRGB, data=image
)

# Detect hand landmarks from the input image.
detection_result = detector.detect(image)

# Draw the detected landmarks and save the result.
annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
cv2.imwrite("ann.png", cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
```
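To generate conditioning images for a whole folder rather than a single file, the same `detector` and `draw_landmarks_on_image` from the script above can be reused. A minimal sketch under that assumption (directory names are placeholders):

```python
import os

src_dir = "images"               # placeholder: folder of source images
out_dir = "conditioning_images"  # placeholder: output folder
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(src_dir)):
    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    # Reuse the detector created above; it can be called on many images.
    rgb = np.asarray(Image.open(os.path.join(src_dir, name)).convert("RGB"))
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
    result = detector.detect(mp_image)
    annotated = draw_landmarks_on_image(mp_image.numpy_view(), result)
    cv2.imwrite(os.path.join(out_dir, os.path.splitext(name)[0] + ".png"),
                cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
```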