# Question Complexity Rating Dataset

## Dataset Description

This dataset contains pairs of questions and their respective complexity ratings. The complexity ratings are provided on a scale from 0.0 to 1.0, where 0.0 indicates the simplest questions and 1.0 indicates the most complex. The dataset is intended for training and evaluating models that classify or rank questions by their complexity.

## Dataset Contents

The dataset is provided as a CSV file with the following columns:

- `instruction`: The question or prompt.
- `label`: The complexity rating of the question, a float between 0.0 and 1.0.

### Example

| instruction                                    | label |
| ---------------------------------------------- | ----- |
| When did Virgin Australia start operating?     | 0.2   |
| Which is a species of fish? Tope or Rope       | 0.8   |
| Why can camels survive for long without water? | 0.5   |

## Usage

The dataset can be used for various natural language processing (NLP) tasks, such as:

- Complexity prediction
- Question ranking
- Model evaluation
- Transfer learning for related tasks

## Use Cases

### 1. Complexity Prediction

Train a model to predict the complexity of new questions based on the provided ratings. This can be useful in educational technology, customer support, and other domains where question complexity is a relevant factor.

### 2. Question Ranking

Rank a set of questions by their complexity to prioritize easier questions for beginners and more complex questions for advanced users.

### 3. Model Evaluation

Use the dataset to evaluate the performance of models designed to assess or rank question complexity.

## How to Use the Dataset

To use the dataset, load the CSV file with a library like pandas and preprocess it as needed for your specific use case.
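The question-ranking use case above can be sketched with pandas, using the example rows from the table. Note that the tier boundaries at 1/3 and 2/3 are an illustrative choice for this sketch, not part of the dataset:

```python
import pandas as pd

# Example rows taken from the table above; in practice, load the full CSV.
df = pd.DataFrame({
    "instruction": [
        "When did Virgin Australia start operating?",
        "Which is a species of fish? Tope or Rope",
        "Why can camels survive for long without water?",
    ],
    "label": [0.2, 0.8, 0.5],
})

# Rank questions from simplest to most complex.
ranked = df.sort_values("label").reset_index(drop=True)

# Bucket the continuous ratings into coarse difficulty tiers
# (boundaries are arbitrary and can be tuned for your application).
ranked["tier"] = pd.cut(
    ranked["label"],
    bins=[0.0, 1 / 3, 2 / 3, 1.0],
    labels=["easy", "medium", "hard"],
    include_lowest=True,
)
print(ranked[["instruction", "label", "tier"]])
```

A tiered view like this makes it easy to route beginners to "easy" questions while reserving "hard" ones for advanced users.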
### Loading the Dataset

```python
import pandas as pd

# Load the dataset
df = pd.read_csv('path_to_your_dataset.csv')

# Display the first few rows of the dataset
print(df.head())
```

## Training a Model

Here is a simple example of how to train a model using the dataset:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the dataset
df = pd.read_csv('path_to_your_dataset.csv')

# Split the dataset into training and test sets
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# Extract features and labels
X_train = train_df['instruction']
y_train = train_df['label']
X_test = test_df['instruction']
y_test = test_df['label']

# Convert text data to numerical features (example using TF-IDF)
vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)

# Train a simple model (example using Linear Regression)
model = LinearRegression()
model.fit(X_train_tfidf, y_train)

# Evaluate the model on the held-out test set
predictions = model.predict(X_test_tfidf)
print(f"Test MSE: {mean_squared_error(y_test, predictions):.4f}")
```

## Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@dataset{question_complexity_rating_2024,
  title       = {Question Complexity Rating Dataset},
  author      = {Wes Lagarde},
  year        = {2024},
  url         = {https://huggingface.co./wesley7137/question_complexity_classification},
  description = {A dataset of question and complexity rating pairs for natural language processing tasks.}
}
```

## License

This dataset is licensed under the Apache 2.0 License.

## Acknowledgements

We would like to thank all contributors and the community for their valuable feedback and support.