---
configs:
  - config_name: docs
    data_files:
      - split: train
        path:
          - dataForTraining/db_pdfs.jsonl
  - config_name: videos
    data_files:
      - split: train
        path:
          - dataForTraining/db_videos_clean.jsonl
  - config_name: raw_pdfpapers
    data_files:
      - split: train
        path:
          - sources/pdfpapers.json
  - config_name: raw_videos
    data_files:
      - split: train
        path:
          - sources/videos.json
  - config_name: raw_markdown
    data_files:
      - split: train
        path:
          - sources/webcontent.json
license: mit
language:
  - en
tags:
  - cybersecurity
  - video transcripts
  - arxiv papers
pretty_name: >-
  A cybersecurity LLM training dataset with latest research extracted from a
  collection of docs and youtube transcripts.
---

# CyberGuardian-dataset

A dataset for training LLM agents on the latest research in the cybersecurity domain.

Purpose: fine-tune existing LLMs to incorporate recent developments and a general language understanding of the field.

## How to use

There is one config for each supported data type:

```python
from datasets import load_dataset

data_from_documents = load_dataset("unibuc-cs/CyberGuardian-dataset", "docs")
data_from_videos = load_dataset("unibuc-cs/CyberGuardian-dataset", "videos")
data_from_markdown = load_dataset("unibuc-cs/CyberGuardian-dataset", "raw_markdown")
```

Note that there is only one split, `train`. It is up to you to split the data as you wish.

Community: we hope to keep this dataset updated over time.

## Raw sources of data

The `sources` folder contains the JSON files with the resources for each type: videos, docs, and markdown content. Custom folders can also be added as shown.

## Export/update from raw sources

Input your credentials in `projsecrets.py`, then run `source/DatasetUtils.py`. Please read the script's parameter descriptions for more details. Optionally, the script creates a FAISS index, e.g. for RAG purposes.

This is how the `dataForTraining` folder was obtained. You can add your own raw data files using links to papers, YouTube courses or presentations, etc., then just run the script. A MongoDB instance is used internally (see `projsecrets.py`) to persist the results, if you want.

Please cite our work if you find this dataset useful:

```bibtex
@conference{CyberGuardian,
  author={Ciprian Paduraru and Catalina Patilea and Alin Stefanescu},
  title={CyberGuardian: An Interactive Assistant for Cybersecurity Specialists Using Large Language Models},
  booktitle={Proceedings of the 19th International Conference on Software Technologies - ICSOFT},
  year={2024},
  pages={442-449},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0012811700003753},
  isbn={978-989-758-706-1},
  issn={2184-2833},
}
```