Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type
struct<id: string, type: string, timestamp: string, timestampEdited: null, callEndedTimestamp: null, isPinned: bool, content: string, author: struct<id: string, name: string, discriminator: string, nickname: string, color: null, isBot: bool, roles: list<item: null>, avatarUrl: string>, attachments: list<item: null>, embeds: list<item: null>, stickers: list<item: null>, reactions: list<item: null>, mentions: list<item: null>, reference: struct<messageId: string, channelId: string, guildId: string>, inlineEmojis: list<item: null>>
to
{'id': Value(dtype='string', id=None), 'type': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'timestampEdited': Value(dtype='null', id=None), 'callEndedTimestamp': Value(dtype='null', id=None), 'isPinned': Value(dtype='bool', id=None), 'content': Value(dtype='string', id=None), 'author': {'id': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'discriminator': Value(dtype='string', id=None), 'nickname': Value(dtype='string', id=None), 'color': Value(dtype='string', id=None), 'isBot': Value(dtype='bool', id=None), 'roles': [{'id': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'color': Value(dtype='string', id=None), 'position': Value(dtype='int64', id=None)}], 'avatarUrl': Value(dtype='string', id=None)}, 'attachments': [{'id': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'fileName': Value(dtype='string', id=None), 'fileSizeBytes': Value(dtype='int64', id=None)}], 'embeds': [{'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'timestamp': Value(dtype='null', id=None), 'description': Value(dtype='string', id=None), 'color': Value(dtype='string', id=None), 'thumbnail': {'url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}, 'images': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'fields': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'inlineEmojis': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)}], 'stickers': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'reactions': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'mentions': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'inlineEmojis': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)}
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema
                  arrays = [
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp>
                  cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2013, in cast_array_to_feature
                  casted_array_values = _c(array.values, feature[0])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2108, in cast_array_to_feature
                  raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
              TypeError: Couldn't cast array of type ... (same source and target schemas as in the message above)
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 989, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
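The two schema dumps above show the root cause: in some exports the nested lists (`roles`, `attachments`, `embeds`, ...) are always empty, so Arrow infers them as `list<item: null>`, and `author.color` is null in some rows but a `"#RRGGBB"` string in others. Arrow cannot cast those null-typed fields to the richer schema inferred from other files. One workaround is to pre-normalize the exports so every row shares a single schema before handing them to `datasets`. The helper below is a minimal sketch, not part of any library; field names mirror the schema dump above.

```python
import json

def normalize_message(msg):
    """Coerce variant-typed fields so every message shares one schema."""
    out = dict(msg)
    author = dict(out.get("author", {}))
    # `color` is null in some rows and a "#RRGGBB" string in others.
    author["color"] = author.get("color") or ""
    # `roles` is an empty list (inferred as list<item: null>) in some rows
    # and a list of structs in others; serialize it so the element type
    # never varies across rows.
    author["roles"] = json.dumps(author.get("roles") or [])
    out["author"] = author
    # The other nested lists have the same problem: empty lists carry no
    # element type. Stringify them uniformly.
    for key in ("attachments", "embeds", "stickers", "reactions",
                "mentions", "inlineEmojis"):
        out[key] = json.dumps(out.get(key) or [])
    # `reference` exists only on replies; substitute an empty struct so the
    # field is present (and struct-typed) on every row.
    out["reference"] = out.get("reference") or {
        "messageId": "", "channelId": "", "guildId": ""}
    return out
```

Applying this to every message before building the dataset (e.g. with `datasets.Dataset.from_list`) gives Arrow one consistent schema; the stringified lists can be decoded with `json.loads` on the consumer side.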


| Column | Type |
| --- | --- |
| guild | dict |
| channel | dict |
| dateRange | dict |
| exportedAt | string |
| messages | list |
| messageCount | int64 |
Row 1
guild: { "id": "1315258998995550258", "name": "I-Made-This", "iconUrl": "https://cdn.discordapp.com/icons/1315258998995550258/23fe2042f723a18db28def13def27366.png?size=512" }
channel: { "id": "1337617511285395496", "type": "GuildPublicThread", "categoryId": "1320792773506498610", "category": "papers", "name": "Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural network", "topic": null }
dateRange: { "after": null, "before": null }
exportedAt: 2025-02-26T20:39:35.641906-05:00
messages: [ { "id": "1337617511285395496", "type": "Default", "timestamp": "2025-02-07T21:54:26.422-05:00", "timestampEdited": null, "callEndedTimestamp": null, "isPinned": false, "content": "https://arxiv.org/abs/2102.00554\nThe growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. 
We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.", "author": { "id": "330939073491369985", "name": "technosourceressextraordinaire", "discriminator": "0000", "nickname": "p 3 n G u 1 n Z z", "color": "#FFFC00", "isBot": false, "roles": [ { "id": "1317853950287941652", "name": "A D M I N", "color": "#FFFC00", "position": 5 }, { "id": "1317851980869402744", "name": "M O D", "color": "#93FF00", "position": 4 }, { "id": "1317855596208324678", "name": "C R E A T O R", "color": "#00C6FF", "position": 3 }, { "id": "1320806044724629616", "name": "p u b l i c", "color": "#607D8B", "position": 1 } ], "avatarUrl": "https://cdn.discordapp.com/avatars/330939073491369985/2c754a981572fae0aac9a7cc225e9669.png?size=512" }, "attachments": [ { "id": "1337617511641780234", "url": "https://cdn.discordapp.com/attachments/1337617511285395496/1337617511641780234/2102.00554v1.pdf?ex=67c07ca2&is=67bf2b22&hm=ee386671cbc6e9e1b759129b0bf4c025c408e0bdacc4a409c0806cb6099c88a9&", "fileName": "2102.00554v1.pdf", "fileSizeBytes": 4573871 } ], "embeds": [ { "title": "Sparsity in Deep Learning: Pruning and growth for efficient inferen...", "url": "https://arxiv.org/abs/2102.00554", "timestamp": null, "description": "The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular netw...", "color": "#FFFFFF", "thumbnail": { "url": "https://images-ext-1.discordapp.net/external/YXb6sZ-DbghIhLdEwCCP_7HvgAz754qtRYvd1kOTXyE/https/arxiv.org/static/browse/0.3.4/images/arxiv-logo-fb.png", "width": 1200, "height": 700 }, "images": [], "fields": [], "inlineEmojis": [] } ], "stickers": [], "reactions": [], "mentions": [], "inlineEmojis": [] } ]
messageCount: 1

Row 2
guild: { "id": "1109943800132010065", "name": "🌟Tonic's Better Prompt🍻", "iconUrl": "https://cdn.discordapp.com/icons/1109943800132010065/3d7a784d3f737163795c5defd23c27fc.png?size=512" }
channel: { "id": "1177256199054446634", "type": "GuildPublicThread", "categoryId": "1109943800840851589", "category": "🫂general", "name": "tell me more about Tulu v2 and DPO training for fine tuning LLama models :", "topic": null }
dateRange: { "after": null, "before": null }
exportedAt: 2025-02-26T20:51:00.947619-05:00
messages: [ { "id": "1177256201118023730", "type": "21", "timestamp": "2023-11-23T09:35:50.709-05:00", "timestampEdited": null, "callEndedTimestamp": null, "isPinned": false, "content": "", "author": { "id": "1176628808212828231", "name": "🌷Tulu", "discriminator": "9769", "nickname": "🌷Tulu", "color": null, "isBot": true, "roles": [], "avatarUrl": "https://cdn.discordapp.com/avatars/1176628808212828231/18e58cb8dd3bf81ee8dfbc8416217a0f.png?size=512" }, "attachments": [], "embeds": [], "stickers": [], "reactions": [], "mentions": [], "reference": { "messageId": "1177256199054446634", "channelId": "1109943800840851589", "guildId": "1109943800132010065" }, "inlineEmojis": [] } ]
messageCount: 1
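Each row carries one channel export, with the `messages` column holding the full list of message objects. As a minimal sketch of consuming a row, the snippet below parses a hypothetical, heavily abbreviated `messages` cell (based on the first preview row above) with the standard `json` module:

```python
import json

# Hypothetical cell value, abbreviated from the first preview row above;
# in the raw export, `messages` is a list of message objects.
row_messages = (
    '[{"id": "1337617511285395496", "type": "Default", '
    '"content": "https://arxiv.org/abs/2102.00554", '
    '"author": {"name": "technosourceressextraordinaire"}}]'
)

# Decode the cell and pull out the non-empty message texts.
messages = json.loads(row_messages)
contents = [m["content"] for m in messages if m["content"]]
```

When loading the actual dataset, the same pattern applies per row; only the parsing step is needed if `messages` arrives as a string rather than an already-decoded list.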

Dataset Card for Dataset Name

This dataset contains message content exported from multiple Discord servers.

Dataset Details

Dataset Description

  • Curated by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]

Dataset Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

[More Information Needed]

Out-of-Scope Use

[More Information Needed]

Dataset Structure

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Data Collection and Processing

[More Information Needed]

Who are the source data producers?

[More Information Needed]

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

[More Information Needed]

Dataset Card Contact

[More Information Needed]
