metadata
license: apache-2.0
language:
  - zh
tags:
  - game
  - hoyoverse
  - video
  - audio
  - multimodal
  - vision-language
  - text

Game Playthrough

See honkai_impact_3rd_chinese_dialogue_corpus for the final parsed corpus!

Description (English)

This is a collection of playthrough videos of Honkai Impact 3rd from HoYoverse, along with an effort to build a Chinese text corpus from them (via OCR and MLLM-based parsing).

The language setting is Chinese.

All credit goes to the source author on BiliBili.

The dataset contains the following:

  • Videos: The video-only files, corresponding to all videos in the source. Mostly 1280x720 resolution, HEVC-encoded.
  • Audios: The audio-only files, corresponding to all the videos. Mostly in M4A format at various bitrates.
  • OCR-Results (Raw): OCR results for one frame sampled from each video every second, produced with Paddle-OCR.
  • VLM-Parsed corpus: Given the OCR results and the image frames, the goal is to parse the raw information into structured story narrations and dialogues (with associated speaker and content) using strong vision-language models.

Up to date as of: 2024.08.08

Latest video: [P186]主线第二部03间章:一个梦游者的苦痛-02[720P 高清]

Description (Chinese)

This repo collects the CG and story dialogue videos of Honkai Impact 3rd, and builds a corresponding Chinese Honkai Impact 3rd story corpus based on OCR and multimodal large language models.

Thanks to the BiliBili uploader of the source videos.

The dataset includes the following parts:

  • Videos: video-only files from the source. Mostly 1280x720 resolution, HEVC-encoded.
  • Audios: audio-only files, all in M4A format at various bitrates.
  • OCR results (no post-processing): one frame is sampled from each video every second, and Paddle-OCR is run on every frame to capture any recognizable on-screen text.
  • MLLM-parsed results: for all OCR results plus the image information, a multimodal large model is called to parse them into structured story data, including narration, speaker, and utterance content.

Data cutoff: 2024.08.08

Latest video: [P186]主线第二部03间章:一个梦游者的苦痛-02[720P 高清]

Illustration of the text corpus construction pipeline

Here we show how text information is parsed from raw videos.

  1. Extracting Video Frames

Sample one frame per second from the video and save each frame as an image.

Example frame: frame_130.jpg
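A minimal sketch of this sampling step, assuming OpenCV is used to grab one frame per second; the actual extraction tool and parameters are not stated in this repo, so the function and file names here are illustrative.

```python
import os

import cv2  # pip install opencv-python; tool choice is an assumption, not stated in the repo


def extract_frames(video_path: str, out_dir: str) -> None:
    """Sample roughly one frame per second and save each as frame_<sec>.jpg."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(int(round(fps)), 1)           # number of frames between samples
    idx = sec = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                  # keep one frame per second of video
            cv2.imwrite(os.path.join(out_dir, f"frame_{sec}.jpg"), frame)
            sec += 1
        idx += 1
    cap.release()


extract_frames("videos/P186.mp4", "frames")  # hypothetical paths
```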

  2. OCR on video frames

Apply an OCR model to recognize the text that appears in each frame.

[
  {"box": [[1161.0, 17.0], [1250.0, 20.0], [1249.0, 49.0], [1160.0, 46.0]], "text": "跳过I", "score": 0.8165686130523682},
  {"box": [[539.0, 154.0], [724.0, 136.0], [726.0, 158.0], [542.0, 177.0]], "text": "SOURCEUNKNOWN", "score": 0.9888437986373901},
  {"box": [[541.0, 475.0], [645.0, 475.0], [645.0, 499.0], [541.0, 499.0]], "text": "不明通讯", "score": 0.9979484677314758},
  {"box": [[807.0, 476.0], [976.0, 481.0], [976.0, 508.0], [806.0, 504.0]], "text": "无量塔姬子", "score": 0.9982650876045227},
  {"box": [[544.0, 509.0], [1107.0, 534.0], [1106.0, 567.0], [542.0, 542.0]], "text": "防御系统已经解除,我们暂时安全了。但还是", "score": 0.9949256777763367},
  {"box": [[548.0, 545.0], [786.0, 558.0], [784.0, 585.0], [546.0, 573.0]], "text": "不知道琪亚娜在哪里。", "score": 0.9898449182510376}
]
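A minimal sketch of this step using the PaddleOCR Python package; the repo only names Paddle-OCR, so the exact version and settings below are assumptions.

```python
import json

from paddleocr import PaddleOCR  # pip install paddleocr; PaddleOCR 2.x-style API assumed

# Chinese detection + recognition models, with angle classification for tilted text.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

result = ocr.ocr("frames/frame_130.jpg", cls=True)

# PaddleOCR returns one list per input image; each detection is [box, (text, score)].
# Flatten that into the box/text/score records shown in the example above.
records = [
    {"box": [[float(x), float(y)] for x, y in box], "text": text, "score": float(score)}
    for page in result
    for box, (text, score) in (page or [])
]
print(json.dumps(records, ensure_ascii=False))
```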
  3. Vision-Language Understanding

Prompt a capable VLM to interpret the frame image together with the OCR result (to prevent hallucinations), and output structured information as follows:

{
    "role": "无量塔姬子",
    "content": "防御系统已经解除,我们暂时安全了。但还是不知道琪亚娜在哪里。"
}
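The repo does not say which vision-language model or API is used, so the sketch below only illustrates the idea with an OpenAI-compatible vision endpoint; the model name, prompt, and helper function are assumptions, not the authors' actual setup.

```python
import base64
import json

from openai import OpenAI  # any OpenAI-compatible VLM endpoint; an assumption

client = OpenAI()  # reads OPENAI_API_KEY (and optionally a custom base_url) from the environment

PROMPT = (
    "You are given a game screenshot and the raw OCR results extracted from it. "
    "Using both, return the dialogue as JSON with keys 'role' (speaker) and "
    "'content' (utterance). Ignore UI text such as '跳过' (skip)."
)


def parse_frame(image_path: str, ocr_records: list, model: str = "gpt-4o-mini") -> dict:
    """Ask a VLM to turn one frame plus its OCR records into a {role, content} record."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},  # request strictly-JSON output
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT + "\nOCR: " + json.dumps(ocr_records, ensure_ascii=False)},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)
```

Passing the OCR strings alongside the image keeps the model anchored to the exact on-screen text, which is the hallucination guard mentioned above.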