MingComplex committed · Commit 0fe41f7 · 1 Parent(s): 58730a8

update README.md

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -17,11 +17,12 @@ library_name: transformers
 UI-TARS is a next-generation native GUI agent model designed to interact seamlessly with graphical user interfaces (GUIs) using human-like perception, reasoning, and action capabilities. Unlike traditional modular frameworks, UI-TARS integrates all key components—perception, reasoning, grounding, and memory—within a single vision-language model (VLM), enabling end-to-end task automation without predefined workflows or manual rules.
 <!-- ![Local Image](figures/UI-TARS.png) -->
 <p align="center">
-<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS.png?raw=true" width="80%"/>
+<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS-vs-Previous-SOTA.png?raw=true" width="90%"/>
 <p>
 <p align="center">
-<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS-vs-Previous-SOTA.png?raw=true" width="80%"/>
+<img src="https://github.com/bytedance/UI-TARS/blob/main/figures/UI-TARS.png?raw=true" width="90%"/>
 <p>
+
 <!-- ![Local Image](figures/UI-TARS-vs-Previous-SOTA.png) -->
 
 ## Core Features
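
The hunk context preserves `library_name: transformers` from the card's front matter, so the checkpoint is meant to load through the Hugging Face `transformers` API. Below is a minimal sketch of that, assuming a Qwen2-VL-style architecture and a hypothetical repo ID (neither is stated in this commit); the model card's own usage section is authoritative.

```python
# Minimal sketch only. Assumptions not stated in this commit:
# the repo ID below is hypothetical, and the checkpoint is assumed
# to be a Qwen2-VL-style vision-language model; consult the model
# card for the exact class, prompt format, and action schema.
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "bytedance-research/UI-TARS-7B-SFT"  # hypothetical repo ID

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype="auto",  # pick a dtype appropriate to the hardware
    device_map="auto",   # place weights across available devices
)
```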