|
--- |
|
datasets: |
|
- royweiss1/GPT_Keylogger_Dataset |
|
language: |
|
- en |
|
license: mit |
|
--- |
|
|
|
This is the model used in the USENIX Security '24 paper: "What Was Your Prompt? A Remote Keylogging Attack on AI Assistants".
|
It is a fine-tuned T5-Large model trained to decipher ChatGPT's encrypted responses based only on the responses' token lengths.
|
This is the first-sentences model, meaning it was trained to decipher only the first sentence of each response.
|
It was trained on the UltraChat dataset (questions about the world), using only the first answer of each dialog.
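The attack works because stream encryption preserves plaintext length: each packet in ChatGPT's token-by-token streamed response reveals the length of the token it carries, and the sequence of lengths is what the model deciphers. The helper below is a minimal, hypothetical sketch of that preprocessing step; the constant per-packet overhead (2 bytes) and the space-separated input format are illustrative assumptions, not taken from the paper's code (see the GitHub repository for the actual pipeline).

```python
# Hypothetical sketch of the side-channel preprocessing: encryption
# preserves plaintext length, so each streamed packet's payload size
# leaks the length of the token it carries.
# ASSUMPTIONS: a fixed per-packet framing overhead (2 bytes here) and a
# space-separated-lengths input format for the T5 model -- both are
# illustrative, not the paper's exact implementation.

def token_lengths_from_packets(payload_sizes, overhead=2):
    """Subtract the assumed fixed framing overhead from each payload size."""
    return [size - overhead for size in payload_sizes]

def to_model_input(token_lengths):
    """Format the length sequence as the model's (assumed) text input."""
    return " ".join(str(n) for n in token_lengths)

sizes = [7, 5, 9, 4]            # observed encrypted payload sizes
lengths = token_lengths_from_packets(sizes)
print(to_model_input(lengths))  # -> "5 3 7 2"
```

The resulting string would then be fed to the fine-tuned T5 model, which generates a guess for the first sentence of the response.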
|
|
|
|
|
The dataset splits can be found here: https://huggingface.co./datasets/royweiss1/GPT_Keylogger_Dataset
|
|
|
The GitHub repository of the paper (which also contains the training code): https://github.com/royweiss1/GPT_Keylogger
|
|
|
|
|
## Citation ## |
|
If you find this model helpful, please cite our paper:
|
|
|
``` |
|
@inproceedings{weissLLMSideChannel,
  title={What Was Your Prompt? A Remote Keylogging Attack on AI Assistants},
  author={Weiss, Roy and Ayzenshteyn, Daniel and Amit, Guy and Mirsky, Yisroel},
  booktitle={USENIX Security},
  year={2024}
}
|
``` |