---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: response
    dtype: string
  splits:
  - name: train
    num_bytes: 916486
    num_examples: 10000
  download_size: 164700
  dataset_size: 916486
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
tags:
- code
pretty_name: relative-positioning
size_categories:
- 10K<n<100K
---
# Dataset Card for relative-positioning

This dataset aims to teach LLMs relative positioning (e.g. above, left of, below),
which in my testing most LLMs, even SOTA models, were not able to produce reliably under all circumstances.
I will be pushing a mixtral-7x8B fine-tuned on this dataset.

## Dataset Details

### Dataset Description

Contains data for relative positioning on a 256x256 grid.
Assumes the origin [0, 0] is in the bottom left.
Two objects (Object 1, Object 2) are placed at random coordinates.
The answer is their relative position to one another.
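
The exact prompt and response wording is fixed by the generation script; purely as an illustration, the relative-position label for two grid points could be derived with a rule like the following (the function name and phrasing are assumptions, not taken from the actual script):

```python
def relative_position(obj1: tuple[int, int], obj2: tuple[int, int]) -> str:
    """Describe where obj1 lies relative to obj2 on a grid whose origin
    [0, 0] is in the bottom left (larger y means further up)."""
    x1, y1 = obj1
    x2, y2 = obj2
    parts = []
    if y1 > y2:
        parts.append("above")
    elif y1 < y2:
        parts.append("below")
    if x1 < x2:
        parts.append("left of")
    elif x1 > x2:
        parts.append("right of")
    return " and ".join(parts) if parts else "at the same position as"
```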

- **Curated by:** Antoine Angert
- **Language(s) (NLP):** English
- **License:** apache-2.0

## Uses

### Direct Use

Can be used to fine-tune language models.
(Although this has not been tested so far; will update.)

## Dataset Structure

Features:
- `prompt` (string)
- `response` (string)
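
A minimal loading sketch with the `datasets` library (the repo id below is a placeholder for this dataset's actual Hugging Face path):

```python
from datasets import load_dataset

# Placeholder repo id; replace with the actual Hugging Face path of this dataset.
ds = load_dataset("your-username/relative-positioning", split="train")

print(ds[0]["prompt"])    # natural-language question about two objects' positions
print(ds[0]["response"])  # the relative-position answer as a string
```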

## Dataset Creation

### Curation Rationale

I did some testing to see how well LLMs are able to handle positional data (2D, 3D).
I found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding.
Most bigger models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have fairly good positional understanding, as long as no other context is provided.
When I combined positional reasoning with unrelated context, the performance of these bigger models dropped immensely.
This is my first attempt at embedding this understanding directly into the models rather than providing it through context.

#### Data Collection and Processing

The dataset was generated using a Python script; a rough sketch of the approach is shown below.
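
The script itself is not included in the repository; this is a minimal sketch of how such prompt/response pairs could be generated, assuming the grid and labeling rule described above (the prompt wording, helper names, and output file are illustrative, not the original code):

```python
import json
import random

GRID = 256  # 256x256 grid, origin [0, 0] in the bottom left

def describe(a: tuple[int, int], b: tuple[int, int]) -> str:
    """Relative position of a with respect to b (same rule as sketched above)."""
    parts = []
    if a[1] > b[1]:
        parts.append("above")
    elif a[1] < b[1]:
        parts.append("below")
    if a[0] < b[0]:
        parts.append("left of")
    elif a[0] > b[0]:
        parts.append("right of")
    return " and ".join(parts) if parts else "at the same position as"

def make_example() -> dict:
    # Randomly place the two objects on the grid.
    obj1 = (random.randrange(GRID), random.randrange(GRID))
    obj2 = (random.randrange(GRID), random.randrange(GRID))
    prompt = (
        f"Object 1 is at {list(obj1)} and Object 2 is at {list(obj2)} on a "
        f"{GRID}x{GRID} grid with the origin in the bottom left. "
        "Where is Object 1 relative to Object 2?"
    )
    response = f"Object 1 is {describe(obj1, obj2)} Object 2."
    return {"prompt": prompt, "response": response}

# Write 10,000 examples as JSON Lines, one prompt/response pair per line.
with open("train.jsonl", "w") as f:
    for _ in range(10_000):
        f.write(json.dumps(make_example()) + "\n")
```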

## Dataset Card Authors

Antoine Angert

## Dataset Card Contact

Contact:
[email protected]