---
license: mit
---

# AlphaNum Dataset

## Dataset Summary

The AlphaNum dataset, curated by Louis Rädisch, is a collection of grayscale handwritten characters and digits, each 28x28 pixels in size. Its primary aim is to support Optical Character Recognition (OCR) tasks.

Labels range from 0-35: labels 0-25 correspond to the letters of the English alphabet (A-Z) and labels 26-35 correspond to the digits (0-9). Images taken from the MNIST dataset have been color inverted to keep them consistent with the rest of the data.

To harmonize the data from its diverse sources, fine-tuned Vision Transformer models were used to improve label accuracy. For instance, the 'A-Z handwritten alphabets' source originally did not differentiate between upper- and lower-case letters, an issue rectified in this compilation.

## Sources

1) [Handwriting Characters Database](https://github.com/sueiras/handwritting_characters_database)
2) [MNIST](https://huggingface.co./datasets/mnist)
3) [AZ Handwritten Alphabets in CSV format](https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format)

The images from all sources have been scaled to a uniform 28x28 pixels and recolored from white-on-black to black-on-white to ensure uniformity.

## Dataset Structure

### Data Instances

A single data instance comprises an image of a handwritten character or digit together with its corresponding label.

### Data Fields

1) 'image': the image of the handwritten character or digit.
2) 'label': the integer label of the character or digit in the image (0-25 for A-Z, 26-35 for 0-9).

### Data Splits

The dataset is split into training and test subsets to support model building and evaluation.

## Dataset Use

The AlphaNum dataset is suited to tasks involving text recognition, document processing, and machine learning. It is particularly useful for building, fine-tuning, and evaluating OCR models.
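Below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and decoding the label scheme described under "Data Fields". The repository id and the `"train"` split name are assumptions, not confirmed by this card, and may need to be adjusted.

```python
# Sketch: load AlphaNum and decode its integer labels.
# The repository id below is a placeholder; substitute the dataset's actual Hub id.
from datasets import load_dataset

# 0-25 -> A-Z, 26-35 -> 0-9, per the "Data Fields" section above.
LABEL_NAMES = [chr(ord("A") + i) for i in range(26)] + [str(d) for d in range(10)]

def decode_label(label: int) -> str:
    """Map an integer label (0-35) to its character."""
    return LABEL_NAMES[label]

if __name__ == "__main__":
    ds = load_dataset("lraedisch/AlphaNum")  # assumed repository id
    example = ds["train"][0]                 # assumes a "train" split exists
    print(example["image"].size, decode_label(example["label"]))
```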