LangChainDatasets
community
AI & ML interests
None defined yet.
Recent Activity
LangChainDatasets's activity
valeriaWong authored 2 papers • 13 days ago
ImranzamanML posted an update • about 1 month ago
Post
507
Deep understanding of the Concordance Index (C-index) evaluation measure for better models
Let's start with three patient groups:
Group A
Group B
Group C
For each patient, we will predict a risk score (a higher score means a higher risk of an early event).
Step 1: Understanding Concordance Index
The Concordance Index (C-index) evaluates how well the model ranks survival times.
Understand with sample data:
Group A has 3 patients with actual survival times and predicted risk scores:
Patient | Actual Survival Time | Predicted Risk Score
P1 | 5 months | 0.8
P2 | 3 months | 0.9
P3 | 10 months | 0.2
Comparable pairs:
(P1, P2): P2 has a shorter survival time and a higher risk score → Concordant ✅
(P1, P3): P3 has a longer survival time and a lower risk score → Concordant ✅
(P2, P3): P3 has a longer survival time and a lower risk score → Concordant ✅
Total pairs = 3
Total concordant pairs = 3
C-index for Group A = Concordant pairs / Total pairs = 3/3 = 1.0
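Here is a minimal Python sketch of this pairwise check, using the Group A table above (variable names are illustrative):
# Group A: (actual survival time in months, predicted risk score)
patients = [(5, 0.8), (3, 0.9), (10, 0.2)]
concordant, total = 0, 0
for i in range(len(patients)):
    for j in range(i + 1, len(patients)):
        (t_i, r_i), (t_j, r_j) = patients[i], patients[j]
        if t_i == t_j:
            continue  # this simple sketch skips ties in survival time
        total += 1
        # Concordant: the patient with the shorter survival time has the higher risk score
        if (t_i < t_j and r_i > r_j) or (t_j < t_i and r_j > r_i):
            concordant += 1
print(concordant / total)  # 3/3 = 1.0 for Group A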
Step 2: Calculate C-index for All Groups
Repeat the process for all groups. For now we can assume:
Group A: C-index = 1.0
Group B: C-index = 0.8
Group C: C-index = 0.6
Step 3: Stratified Concordance Index
The Stratified Concordance Index combines the C-index scores of all groups, focusing on the following:
Average performance across groups (mean of C-indices).
Consistency across groups (low standard deviation of C-indices).
Formula:
Stratified C-index = Mean(C-index scores) - Standard Deviation(C-index scores)
Calculate the mean:
Mean = (1.0 + 0.8 + 0.6)/3 = 0.8
Calculate the standard deviation:
Standard Deviation = sqrt(((1.0 - 0.8)^2 + (0.8 - 0.8)^2 + (0.6 - 0.8)^2)/3) ≈ 0.16
Stratified C-index:
Stratified C-index = 0.8 - 0.16 = 0.64
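A short Python sketch of the same calculation, using the population standard deviation as in the numbers above:
import math
c_indices = [1.0, 0.8, 0.6]  # Groups A, B, C
mean = sum(c_indices) / len(c_indices)  # 0.8
std = math.sqrt(sum((c - mean) ** 2 for c in c_indices) / len(c_indices))  # ~0.163
print(round(mean - std, 2))  # Stratified C-index ≈ 0.64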
Step 4: Interpret the Results
A high Stratified C-index means:
The model predicts well overall (high mean C-index).
The model performs consistently across groups (low standard deviation of C-indices).
Post
3525
🙋🏻♂️ hey there folks,
periodic reminder: if you are experiencing ⚠️ 500 errors ⚠️ or ⚠️ abnormal spaces behavior on load or launch ⚠️ we have a thread 👉🏻 https://discord.com/channels/879548962464493619/1295847667515129877
if you can record the problem and share it there, or on the forums in your own post, please don't be shy because i'm not sure but i do think it helps 🤗🤗🤗
Alignment-Lab-AI posted an update • 2 months ago
Post
1080
remember boys and girls, always keep all your data, it's never a waste of time!
ImranzamanML posted an update • 3 months ago
Post
689
Easy steps for an effective RAG pipeline with LLM models!
1. Document Embedding & Indexing
We can start by using embedding models to vectorize documents and store them in vector databases (Elasticsearch, Pinecone, Weaviate) for efficient retrieval.
2. Smart Querying
Then we can generate query embeddings, retrieve the top-K relevant chunks, and apply hybrid search if needed for better precision.
3. Context Management
We can concatenate the retrieved chunks, optimize chunk order, and stay within token limits to preserve response coherence.
4. Prompt Engineering
Then we can instruct the LLM to leverage retrieved context, using clear instructions to prioritize the provided information.
5. Post-Processing
Finally, we can implement response verification and fact-checking, and integrate feedback loops to refine the responses.
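Here is a minimal, self-contained sketch of steps 1-4 using sentence-transformers and a plain in-memory index in place of a vector database (the example documents, the retrieve helper and the prompt template are illustrative assumptions, not a fixed API):
import numpy as np
from sentence_transformers import SentenceTransformer

# 1. Document embedding & indexing (in-memory index instead of a vector database)
embedder = SentenceTransformer("all-MiniLM-L6-v2")
documents = [
    "LoRA adds low-rank matrices to reduce trainable parameters.",
    "FP16 halves memory use compared to FP32.",
    "The C-index measures how well a model ranks survival times.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

# 2. Smart querying: embed the query and retrieve the top-K chunks by cosine similarity
def retrieve(query: str, k: int = 2) -> list[str]:
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    top_k = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top_k]

# 3. Context management: concatenate retrieved chunks (keep within the LLM's token limit)
query = "How does LoRA make fine-tuning cheaper?"
context = "\n".join(retrieve(query))

# 4. Prompt engineering: instruct the LLM to prioritize the retrieved context
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # pass this prompt to the LLM of your choice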
Happy to connect :)
Post
831
🙋🏻♂️ hey there folks,
really enjoying sharing cool genomics and protein datasets on the hub these days, check out our cool new org: https://huggingface.co./seq-to-pheno
scroll down for the datasets. still figuring out how to optimize for discoverability, but i do think on that front it will be better than zenodo.org. it would be nice to write a tutorial about that and compare: we already have more downloads than most zenodo datasets from famous researchers!
ImranzamanML posted an update • 3 months ago
Post
1705
Are you a professional Python developer? Here is why logging is important for debugging, tracking and monitoring your code
Logging
Logging is a very important part of any project you start. It helps you track the execution of a program, debug issues, monitor system performance, and keep an audit trail of events.
Basic Logging Setup
The basic way to add logging to Python code is by using the logging.basicConfig() function. This function sets up a basic configuration for logging messages to either the console or a file.
Here is how we can use basic console logging:
# Call the built-in logging library
import logging
# Configure the library and start logging
logging.basicConfig(level=logging.DEBUG)  # you can add more format specifiers via the format parameter
# Messages will show on the console since we did not add a filename to save the logs
logging.debug('Here we go for debug message')
logging.info('Here we go for info message')
logging.warning('Here we go for warning message')
logging.error('Here we go for error message')
logging.critical('Here we go for critical message')
# Note:
# If you want to add values to a log message, pass them as arguments
records = 100
logging.debug('There are total %s number of records.', records)
# Works like string formatting with multiple values
lost = 20
logging.debug('There are total %s number of records from which %s are lost', records, lost)
Logging to a File
We can also save the logs to a file instead of the console. For this, we can add the filename parameter to logging.basicConfig().
import logging
# Saving the log to a file. The logs will be written to app.log
logging.basicConfig(filename='app.log', level=logging.DEBUG)
logging.debug('Here we go for debug message')
logging.info('Here we go for info message')
logging.warning('Here we go for warning message')
logging.error('Here we go for error message')
logging.critical('Here we go for critical message')
You can read more on my medium blog https://medium.com/@imranzaman-5202/are-you-a-professional-python-developer-8596e2b2edaa
1024m authored a paper • 3 months ago
ImranzamanML posted an update • 3 months ago
Post
1388
LoRA with code 🚀 using PEFT (parameter-efficient fine-tuning)
LoRA (Low-Rank Adaptation)
LoRA adds low-rank matrices to specific layers and reduces the number of trainable parameters for efficient fine-tuning.
Code:
Please install these libraries first:
pip install peft
pip install datasets
pip install transformers
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from datasets import load_dataset
# Loading the pre-trained BERT model and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# Configuring the LoRA parameters
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="SEQ_CLS"
)
# Applying LoRA to the model
model = get_peft_model(model, lora_config)
# Loading the dataset for classification and tokenizing it
dataset = load_dataset("glue", "sst2")
train_dataset = dataset["train"].map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)
# Setting the training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    logging_dir="./logs",
)
# Creating a Trainer instance for fine-tuning
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
# Finally we can fine-tune the model
trainer.train()
LoRA adds low-rank matrices to fine-tune only a small portion of the model and reduces training overhead by training fewer parameters.
We can perform efficient fine-tuning with minimal impact on accuracy, and it's suitable for large models where full fine-tuning is not feasible.
1024m authored 3 papers • 3 months ago
Large Language Models for Cross-lingual Emotion Detection
Paper • 2410.15974 • Published • 1
1024m at SMM4H 2024: Tasks 3, 5 & 6 -- Ensembles of Transformers and Large Language Models for Medical Text Classification
Paper • 2410.15998 • Published • 1
Augmenting Legal Decision Support Systems with LLM-based NLI for Analyzing Social Media Evidence
Paper • 2410.15990 • Published • 1
Post
1455
hey there folks,
twitter is awful isn't it? just getting into the habit of using hf/posts for shares 🦙🦙
Tonic/on-device-granite-3.0-1b-a400m-instruct
new granite on-device instruct model demo, hope you like it 🚀🚀
ImranzamanML posted an update • 3 months ago
Post
1721
Today let's discuss 32-bit (FP32) and 16-bit (FP16) floating-point numbers!
Floating-point numbers are used to represent real numbers (like decimals) and they consist of three parts:
Sign bit:
Indicates whether the number is positive (0) or negative (1).
Exponent:
Determines the scale of the number (i.e., how large or small it is by shifting the decimal point).
Mantissa (or fraction):
Represents the actual digits of the number.
32-bit Floating Point (FP32)
Total bits: 32 bits
Sign bit: 1 bit
Exponent: 8 bits
Mantissa: 23 bits
For example:
A number like -15.375 would be represented as:
Sign bit: 1 (negative number)
Exponent: Stored after being adjusted by a bias (127 in FP32).
Mantissa: The significant digits after converting the number to binary.
16-bit Floating Point (FP16)
Total bits: 16 bits
Sign bit: 1 bit
Exponent: 5 bits
Mantissa: 10 bits
Example:
A number like -15.375 would be stored similarly:
Sign bit: 1 (negative number)
Exponent: Uses 5 bits, limiting the range compared to FP32.
Mantissa: Only 10 bits for precision.
Precision and Range
FP32: Higher precision and larger range, with about 7 decimal places of accuracy.
FP16: Less precision (around 3-4 decimal places), smaller range but faster computations and less memory use.
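To see these trade-offs in practice, here is a minimal NumPy sketch (assuming NumPy is installed; the printed values are what NumPy actually produces):
import numpy as np

# The same value stored at both precisions (exactly representable in each)
print(np.float32(-15.375), np.float16(-15.375))  # -15.375 -15.375

# Precision limits show up with more significant digits
print(np.float32(np.pi))  # 3.1415927  (~7 decimal digits)
print(np.float16(np.pi))  # 3.140625   (~3-4 decimal digits)

# Memory footprint per value
print(np.float32(0).nbytes, np.float16(0).nbytes)  # 4 bytes vs 2 bytes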
ImranzamanML posted an update • 3 months ago
Post
1288
Last Thursday at KaggleX, organized by Google, I presented a workshop on "Unlocking the Power of Large Language Models (LLMs) for Business Applications", where I explained how we can reduce the size of LLM models to make them more suitable for business use and address common resource limitations.
https://drive.google.com/file/d/1p5sT4_DeyBuwCqmYt4dCJKZOgLMpESzR/view
Post
991
if you're encountering 500 errors on spaces that seem to work otherwise, kindly consider screenshotting and sharing the link here: https://discord.com/channels/879548962464493619/1295847667515129877
ImranzamanML posted an update • 3 months ago
Post
1699
Here is how we can calculate the size of any LLM model:
Each parameter in LLM models is typically stored as a floating-point number. The size of each parameter in bytes depends on the precision.
32-bit precision: Each parameter takes 4 bytes.
16-bit precision: Each parameter takes 2 bytes
To calculate the total memory usage of the model:
Memory usage (in bytes) = No. of Parameters × Size of Each Parameter
For example:
32-bit Precision (FP32)
In 32-bit floating-point precision, each parameter takes 4 bytes.
Memory usage in bytes = 1 billion parameters × 4 bytes
1,000,000,000 × 4 = 4,000,000,000 bytes
In gigabytes: 4,000,000,000 / 1024³ ≈ 3.73 GB
16-bit Precision (FP16)
In 16-bit floating-point precision, each parameter takes 2 bytes.
Memory usage in bytes = 1 billion parameters × 2 bytes
1,000,000,000 × 2 = 2,000,000,000 bytes
In gigabytes: 2,000,000,000 / 1024³ ≈ 1.86 GB
Depending on whether you use 32-bit or 16-bit precision, a model with 1 billion parameters would use approximately 3.73 GB or 1.86 GB of memory, respectively.
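A tiny Python sketch of this arithmetic (the helper name is just illustrative):
def model_memory_gb(num_parameters: int, bytes_per_parameter: int) -> float:
    # Weight memory only, using 1 GB = 1024**3 bytes
    return num_parameters * bytes_per_parameter / 1024**3

print(round(model_memory_gb(1_000_000_000, 4), 2))  # FP32: ~3.73 GB
print(round(model_memory_gb(1_000_000_000, 2), 2))  # FP16: ~1.86 GB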