,title,content,source,url
0,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## **Introduction**
In this lesson, we'll cover the concept of **hallucinations** in LLMs, highlighting their influence on AI applications and demonstrating how to mitigate them using techniques like retriever architectures. We'll also explore **bias** within LLMs with examples.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
1,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## Hallucinations in LLMs
In Large Language Models, hallucinations refer to cases where the model produces text that's incorrect and not based on reality. An AI hallucination is a confident response by the model that cannot be grounded in any of its training data. There are several possible reasons for these types of hallucinations: - An LLM could be trained on a dataset that doesn’t have the knowledge required to answer a question. - An LLM does not have a reliable way to check the factual accuracy of its responses. It just outputs a response that may be wholly incorrect yet convincing. - The training dataset used to train the LLM may include fictional content and subjective content, like opinions and beliefs. - LLMs are not generally optimized to say “I don’t know” when they don’t have enough information. Therefore, when the LLM has no answer, it generates whatever is the most probable sequence of text as a response. But note that “most probable” does not necessarily mean “true.” Hallucination is the result of this misalignment. Note that the goal of language modeling is to learn the underlying distribution of the words in a language, not to distinguish what is true and what isn’t. Let’s see a clear example of a hallucination. On April 12th, 2023, Databricks announced their “Dolly 2.0” model, an open-source 12B parameter language model fine-tuned on a human-generated instruction-following dataset. GPT-3.5 has been trained on a dataset with a cutoff year of 2021. Therefore, it can’t know about Dolly 2. When GPT-3.5 is asked: “Does Databricks Dolly provide value to the open source community?” it responds with something like this: > **GPT-3.5:** *Databricks Dolly is a tool developed by Databricks, which is designed to help users migrate their data from one platform to another, particularly from on-premise data centers to cloud-based platforms such as AWS or Azure. While Databricks Dolly is not an open source tool, it can still provide value to the open source community in a few ways.[…]* > Except for “Databricks Dolly is a tool developed by Databricks,” the rest of the text is entirely incorrect! Dolly is not a tool to migrate data; it is open-source, contrary to the response. Still, GPT-3.5 knows about Databricks as a company, and therefore, it generates a plausible description of Dolly 2 as a generic product of Databricks. While this example shows OpenAI GPT-3.5 hallucinating, it’s important to note that this phenomenon applies to other similar LLMs like Bard or LLaMA. Strategies to mitigate hallucinations include tuning the text generation parameters, cleaning up the training data, precisely defining prompts (prompt engineering), and using retriever architectures to ground responses in specific retrieved documents. ### **Misinformation Spreading** One significant risk associated with hallucinations in LLMs is their potential to generate content that, while appearing credible, is factually incorrect. Due to their limited capacity to understand the context and verify facts, LLMs can unintentionally spread misinformation. There's also the potential for individuals with malicious intent to exploit LLMs to spread disinformation deliberately, creating and promoting false narratives. [A study by BlackBerry](https://www.prnewswire.com/news-releases/chatgpt-may-already-be-used-in-nation-state-cyberattacks-say-it-decision-makers-in-blackberry-global-research-301737059.html) found that nearly half of the respondents (49%) believed that ChatGPT could be used to spread misinformation.
The unrestricted spread of such false information via LLMs can lead to widespread negative impacts across societal, cultural, economic, and political landscapes. It's crucial to address these issues related to LLM hallucinations to ensure the ethical use of these models. ### Tuning the Text Generation Parameters The generated output of LLMs is greatly influenced by various model parameters, including temperature, frequency penalty, presence penalty, and top-p. We’ll learn more about them in a later lesson in",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
2,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## Hallucinations in LLMs
the course. Higher temperature values promote randomness and creativity, while lower values make the output more deterministic. Increasing the frequency penalty value encourages the model to use repeated tokens more conservatively. Similarly, a higher presence penalty value increases the likelihood of generating tokens not yet included in the generated text. The “top-p” parameter controls response diversity by setting a cumulative probability threshold for word selection. ### Leveraging External Documents with Retriever Architectures Response accuracy can be improved by providing domain-specific knowledge to the LLM in the form of external documents. Augmenting the **knowledge base** with domain-specific information allows the model to ground its responses in that knowledge. After a question from a user, we could retrieve documents relevant to the question (leveraging a module called a “retriever”) and use them in a prompt to produce the answer. This type of process is implemented in architectures typically called “retriever architectures”. In these architectures: 1. When a user poses a question, the system computes an embedding representation of it. 2. The embedding of the question is then used for executing a **semantic search** in the database of documents (by comparing their embeddings and computing similarity scores). 3. The top-ranked documents are used by the LLM as context to give the final answer. Usually, the LLM is asked to extract the answer precisely from those context passages and not to write anything that can’t be inferred from them. > Retrieval-augmented generation (RAG) is a technique that enhances language model capabilities by sourcing data from external resources and integrating it with the context provided in the model's prompt. > Providing access to external data sources during the prediction process enriches the model’s knowledge and grounding. By leveraging external knowledge, the model can generate more accurate, contextually appropriate responses and be less prone to hallucination.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
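To make the retrieval step above concrete, here is a minimal sketch of a retriever, assuming the `sentence-transformers` library and a small in-memory list of documents; a production system would typically store embeddings in a vector database such as Deep Lake and send the assembled prompt to an LLM. The library choice and document list are illustrative assumptions, not part of the original lesson code.

```python
# Minimal sketch of the retrieval step: embed a question, rank documents by
# similarity, and build a grounded prompt.
from sentence_transformers import SentenceTransformer, util

documents = [
    'Dolly 2.0 is an open-source 12B-parameter LLM released by Databricks in April 2023.',
    'Databricks is a company that offers a data and AI platform.',
    'Falcon is a family of open-source LLMs released by TII under the Apache 2.0 license.',
]

embedder = SentenceTransformer('all-MiniLM-L6-v2')
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

question = 'Does Databricks Dolly provide value to the open source community?'
q_embedding = embedder.encode(question, convert_to_tensor=True)

# Semantic search: compare embeddings and keep the top-ranked documents.
hits = util.semantic_search(q_embedding, doc_embeddings, top_k=2)[0]
context = '\n'.join(documents[hit['corpus_id']] for hit in hits)

# The retrieved passages become the context the LLM must answer from,
# which is what grounds the response and reduces hallucinations.
prompt = (
    'Answer the question using only the context below. '
    'If the answer is not in the context, say you do not know.\n\n'
    f'Context:\n{context}\n\nQuestion: {question}\nAnswer:'
)
print(prompt)
```

The resulting prompt would then be sent to the LLM in place of the bare question, constraining the answer to the retrieved passages.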
3,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## Bias in LLMs
Large language models like GPT-3.5 and GPT-4 have raised serious privacy and ethical concerns. Research has shown that these models are prone to inherent bias, leading to the generation of prejudiced or hateful language, intensifying the concerns regarding their use and governance. > Biases in LLMs arise from various sources: the data, the annotation process, the input representations, the models, and the research design. > For instance, training data that don't represent the diversity of language can lead to demographic biases, resulting in a model's inability to understand and accurately represent certain user groups. Misrepresentation can vary from mild inconveniences to more covert, gradual declines in performance, which can unfairly impact certain demographic groups. LLMs can unintentionally intensify harmful biases through their hallucinations, creating prejudiced and offensive content. The data used to train LLMs frequently includes stereotypes, which the models may unknowingly reinforce. This imbalance can lead the models to generate prejudiced content that discriminates against underrepresented groups, potentially targeting them based on factors like race, gender, religion, and ethnicity. This can be exemplified when an LLM produces content that presents women as inferior or portrays certain ethnicities as intrinsically violent or unreliable. Also, if a model is trained on data biased towards a younger, technologically savvy demographic, it may generate outputs that overlook older individuals or those from less technologically equipped regions. If the model is steeped in data from sources promoting hate speech or toxic content, it might produce damaging and prejudiced outputs, amplifying the diffusion of harmful stereotypes and biases. These examples underscore the urgent need for constant monitoring and ethical management in the use of these models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
4,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## Constitutional AI
Constitutional AI is a conceptual framework crafted by researchers at Anthropic. It aims to align AI systems with human values, ensuring that they become beneficial, safe, and trustworthy. In the first phase, the model is trained to critique and revise its own responses based on a set of predetermined principles and a small set of process examples. The next phase involves reinforcement learning training. At this point, the model leans on AI-generated feedback, grounded in the given principles, as opposed to human feedback, to choose the least harmful response. > Constitutional AI employs methodologies like **self-supervision training**. These techniques allow the AI to learn to conform to its constitution, without the need for explicit human labeling or supervision. > The approach also includes developing constrained optimization techniques. These ensure that the AI pursues helpfulness within the boundaries set by its constitution rather than pursuing unbounded optimization, potentially forgetting helpful knowledge.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
5,Understanding Hallucinations and Bias,"# Understanding Hallucinations and Bias
## Conclusion
Hallucinations and biases in LLMs pose significant challenges to producing reliable and accurate outputs. The presence of biases can further damage the accuracy and fairness of the outputs, resulting in the ongoing propagation of harmful stereotypes and misinformation. It's imperative to formulate strategies to mitigate these risks. Such strategies should incorporate pre-processing and input control measures, model configuration adjustments, improvement mechanisms, and context and knowledge enhancement techniques. Integrating ethical guidelines is essential to ensure that the models generate fair and trustworthy outputs, ultimately achieving responsible use of these powerful technologies.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
6,Introduction to LLMs Module,"# Introduction to LLMs Module
## Introduction to LLMs
This module uncovers the core principles of Large Language Models (LLMs), zooming in on their foundational underpinnings. We provide a historical perspective, highlighting the emergence of Transformers and the difference between proprietary and open-source LLMs. Key attention is on recognizing and mitigating inherent issues like hallucinations and biases within these models. The module is structured as follows, concisely describing each lesson. - **What are Large Language Models?** This lesson dives into the core principles of LLMs, highlighting the capabilities of notable models such as GPT-3 and GPT-4. We introduce the concepts of tokens, few-shot learning, emergent abilities, and the significance of scaling laws. As we explore the functions and outputs of these models, we emphasize the potential challenges of hallucinations and biases. Additionally, we also discuss the context size limitation in LLMs. - **The Evolution of LLMs and Transformers**: This lesson provides a chronological narrative of the progression in language modeling techniques. Starting with the foundational Bag of Words model from 1954, we navigate through significant milestones like TF-IDF, the groundbreaking Word2Vec with its semantic-rich word embeddings, and the sequence-processing capabilities of RNNs. Central to our exploration is the 2017 Transformer architecture, which set the stage for powerhouses like BERT, RoBERTa, and ELECTRA. This lesson only offers an overview without the deep technical intricacies, presenting a panoramic view of the model evolution in NLP. - **A timeline of Large Language Models**: This lesson steers towards a comprehensive overview of the advancements in the Large Language Models landscape, spotlighting models that marked distinct milestones such as GPT-3, PaLM, and Galactica. Recognizing the significant role of techniques like scaling and alignment tuning in the unprecedented capabilities exhibited by LLMs, we untangle the principles that steer these giants. From exploring the enigmatic emergent abilities to decoding the scaling laws, this lesson explores the phenomena driving the potency and performance of LLMs. - **Emergent Abilities in LLMs**: This lesson covers the unexpected skills that surface in Large Language Models as they grow beyond certain thresholds. As models expand, they exhibit unique capabilities influenced by factors like training compute. These emergent skills indicate performance leaps in LLMs as they scale, revealing unforeseen learning beyond what was initially anticipated. - **Proprietary LLMs**: This lesson introduces prominent proprietary Large Language Models such as GPT-4, ChatGPT, and Cohere, among others. We'll weigh the advantages and drawbacks of proprietary models against open-source counterparts. Practical demonstrations will guide students in executing API calls for select models. - **Open-Source LLMs**: This lesson offers insights into open-source Large Language Models, with a focus on LLaMA 2, Open Assistant, Dolly, and Falcon. We will explore their unique features, capabilities, and licensing details. Additionally, we'll discuss potential commercial uses and emphasize any restrictions within their licenses. - **Understanding Hallucinations and Bias in LLMs**: This lesson focuses on the challenges posed by hallucinations and biases in Large Language Models. We'll define hallucinations, provide examples, and discuss their impact on LLM use cases. We'll also explore methods to minimize these issues, such as retriever architectures. 
The session also covers the concept of bias, its origin in LLMs, and potential mitigation strategies, including approaches like constitutional AI. - **Applications and Use-Cases of LLMs:** This lesson highlights the leading applications and emerging trends of Large Language Models across industries. By referencing real-world news and examples, we illustrate the transformative impact of LLMs across sectors. While emphasizing the vast potential benefits, the module also underscores the importance of recognizing LLMs' limitations and potential challenges. This section provided a comprehensive overview of Large Language Models, highlighting their evolution and significant milestones. Topics ranged from understanding emergent abilities in LLMs to",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953391-introduction-to-llms-module
7,Introduction to LLMs Module,"# Introduction to LLMs Module
## Introduction to LLMs
discerning between proprietary and open-source models. Critical challenges like hallucinations and biases were also addressed.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953391-introduction-to-llms-module
8,Datasets for Training LLMs,"# Datasets for Training LLMs
## Introduction
In this lesson, we talk about the datasets that fuel LLM pretraining. We'll explore popular datasets like Falcon RefinedWeb, The Pile, Red Pajama Data, and Stack Overflow Posts, understanding their composition, sources, and usage. We'll also discuss the emerging trend of prioritizing data quality over quantity in pretraining LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
9,Datasets for Training LLMs,"# Datasets for Training LLMs
## Popular Datasets for Training LLMs
In recent times, a variety of open-source datasets have been employed for pre-training Large Language Models. Some of the notable datasets include **""Falcon RefinedWeb,” ""The Pile,” ""Red Pajama Data,”** and **""Stack Overflow Posts,""** among others**.** Assembling such datasets typically involves collecting and cleaning vast volumes of text data. ### Falcon RefinedWeb The [Falcon RefinedWeb dataset](https://huggingface.co./datasets/tiiuae/falcon-refinedweb) is a large-scale English web dataset developed by [TII](https://www.tii.ae/) and released under the ODC-By 1.0 license. It was created through rigorous filtering and extensive deduplication of [CommonCrawl](https://commoncrawl.org/), resulting in a dataset that has shown comparable or superior performance to models trained on curated datasets, relying solely on web data. The dataset is designed to be ""multimodal-friendly,” as it includes links and alt texts for images in the processed samples. Depending on the tokenizer used, the public extract of this dataset ranges from 500-650GT and requires about 2.8TB of local storage when unpacked. Falcon RefinedWeb has been primarily used for training [Falcon LLM models](https://falconllm.tii.ae/), including the Falcon-7B/40B and Falcon-RW-1B/7B models. The dataset is primarily in English, and each data instance corresponds to a unique web page that has been crawled, processed, and deduplicated. It contains around 1 billion instances. The dataset was constructed using the [Macrodata Refinement Pipeline](https://huggingface.co./datasets/tiiuae/falcon-refinedweb#curation-rationale), which includes content extraction, filtering heuristics, and deduplication. The design philosophy of RefinedWeb prioritizes scale, strict deduplication, and neutral filtering. The dataset was iteratively refined by measuring the zero-shot performance of models trained on development versions of the dataset and manually auditing samples to identify potential filtering improvements. ### The Pile [The Pile](https://pile.eleuther.ai/) is a comprehensive, open-source dataset of English text designed specifically for training LLMs. Developed by [EleutherAI](https://www.eleuther.ai/) in 2020, it's a massive 886.03GB dataset comprising 22 smaller datasets, 14 of which are new. Prior to the Pile's creation, most LLMs were trained using data from the Common Crawl. However, the Pile offers a more diverse range of data, enabling LLMs to handle a broader array of situations post-training. The Pile is a carefully curated collection of data handpicked by EleutherAI's researchers to include information they deemed necessary for language models to learn. The Pile covers a wide range of topics and writing styles, including academic writing, a style that models trained on other datasets often struggle with. All data used in the Pile was sourced from publicly accessible resources and filtered to remove duplicates and non-textual elements like HTML formatting and links. However, individual documents within the sub-datasets were not filtered to remove non-English, biased, or profane text, nor was consent considered in the data collection process. Originally developed for EleutherAI's GPT-Neo models, the Pile has since been used to train a variety of other models. ### RedPajama Dataset The [RedPajama dataset](https://together.ai/blog/redpajama) is a comprehensive, open-source dataset that emulates the LLaMa dataset. It comprises 2084 jsonl files, which can be accessed via HuggingFace or directly downloaded. 
The dataset is primarily in English but includes multiple languages in its Wikipedia section. The dataset is structured into text and metadata, including the URL, timestamp, source, language, and more. It also specifies the subset of the RedPajama dataset it belongs to, such as Commoncrawl, C4, GitHub, Books, ArXiv, Wikipedia, or StackExchange. The dataset is sourced from various platforms: - Commoncrawl data is processed through the official [cc_net](https://github.com/facebookresearch/cc_net) pipeline, deduplicated, and filtered for quality. - [C4](https://huggingface.co./datasets/c4) data is obtained from HuggingFace and formatted to suit the dataset's structure. - GitHub data is sourced from Google BigQuery, deduplicated, and filtered for quality, with only MIT, BSD, or Apache-licensed projects included. - The Wikipedia data is sourced from HuggingFace and is based on a 2023 dump, with hyperlinks, comments, and other formatting",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
10,Datasets for Training LLMs,"# Datasets for Training LLMs
## Popular Datasets for Training LLMs
removed. - Gutenberg and Books3 data are also downloaded from HuggingFace, with near duplicates removed using simhash. - ArXiv data is sourced from Amazon S3, with only latex source files included and preambles, comments, macros, and bibliographies removed. - Lastly, StackExchange data is sourced from the [Internet Archive](https://archive.org/download/stackexchange), with only the posts from the 28 largest sites included, HTML tags removed, and posts grouped into question-answer pairs. The RedPajama dataset encompasses 1.2 trillion tokens, making it a substantial resource for various language model training and research purposes. ### Stack Overflow Posts If you’re interested more in a specific domain like coding, there are massive datasets available for that, too. The [Stack Overflow Posts](https://huggingface.co./datasets/mikex86/stackoverflow-posts) dataset comprises approximately 60 million posts submitted to StackOverflow prior to June 14, 2023. The dataset, sourced from the [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange), is approximately 35GB in size and contains around 65 billion text characters. Each record in the dataset represents a post type and includes fields such as Id, PostTypeId, Body, and ContentLicense, among others.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
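These corpora are far too large to download casually, but they can be inspected sample by sample. Below is a brief, hedged sketch using the Hugging Face `datasets` library in streaming mode to peek at Falcon RefinedWeb; the library, the `train` split, and the `content` field name follow the dataset card at the time of writing and are assumptions rather than part of the lesson.

```python
# Stream a few samples from Falcon RefinedWeb without downloading the
# multi-terabyte corpus to disk (sketch; requires the `datasets` library).
from datasets import load_dataset

refinedweb = load_dataset('tiiuae/falcon-refinedweb', split='train', streaming=True)

for i, sample in enumerate(refinedweb):
    print(sample['content'][:200])  # the extracted text of one crawled web page
    if i == 2:
        break
```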
11,Datasets for Training LLMs,"# Datasets for Training LLMs
## Data Quality vs. Data Quantity in Pretraining
As we just saw, many of the most used pretraining datasets today are cleaned and more complete versions of other past datasets. There’s recently been a shift in focus from increasing dataset sizes to “increasing dataset size AND dataset quality.” The paper ""[Textbooks Are All You Need](https://arxiv.org/abs/2306.11644),"" published in June 2023, shows this trend. It introduces Phi-1, an LLM designed for code. Phi-1 is a Transformer-based model with 1.3 billion parameters, trained over a period of four days on eight A100s. Despite its relatively smaller scale, it exhibits remarkable accuracy on benchmarks like [HumanEval](https://github.com/openai/human-eval) and [MBPP](https://paperswithcode.com/dataset/mbpp). How? It’s been trained on high-quality data (i.e., textbook-quality data; that’s why the paper name is “Textbooks are all you need”). The training data for Phi-1 comprises 6 billion tokens of ""textbook quality"" data from the web and 1 billion tokens from synthetically generated textbooks using GPT-3.5. Although Phi-1's specialization in Python coding and lack of domain-specific knowledge somewhat limit its versatility, these limitations are not inherent and can be addressed to enhance its capabilities. Despite its smaller size, the model's success in coding benchmarks demonstrates the significant impact of high-quality and coherent data on the proficiency of language models, thereby shifting the focus from quantity to quality of data.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
12,Datasets for Training LLMs,"# Datasets for Training LLMs
## Creating Your Own Dataset
Creating your own dataset would require a whole lesson of its own, so we won’t cover it in detail in this course. However, if you’re interested in doing so, you can study the creation process of the datasets listed in the above sections, as it’s often a publicly disclosed process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
13,Datasets for Training LLMs,"# Datasets for Training LLMs
## Conclusion
This lesson provides a comprehensive overview of the datasets that fuel the pretraining of LLMs. We delved into popular datasets such as Falcon RefinedWeb, The Pile, Red Pajama Data, and Stack Overflow Posts, understanding their composition, sources, and usage. Often derived from larger, less refined datasets, these datasets have been meticulously cleaned and curated to provide high-quality data for training LLMs. We also discussed the emerging trend of prioritizing data quality over quantity in pretraining LLMs, as exemplified by the Phi-1 model. Despite its smaller scale, Phi-1's high performance on benchmarks underscores the significant impact of high-quality and coherent data on the proficiency of language models. This shift in focus from data quantity to quality is an exciting development in the field of LLMs, highlighting the importance of dataset refinement in achieving superior model performance.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
14,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Introduction
In this lesson, we'll guide you through the step-by-step process of training a large language model from the ground up. Our primary focus will be on conducting the pre-training process in the cloud. Nevertheless, it's worth noting that all the concepts covered here are transferable if you want to train a model locally, provided your machine has enough resources (realistic only for small language models). When embarking on model training, three key components must be taken into account. The process begins with selecting an appropriate dataset that aligns with your specific use case. Next, configure the architecture of the model, making adjustments based on the resources at your disposal. Finally, execute the training loop, bringing everything together to train the model effectively. We integrate well-known libraries like Deep Lake Datasets and Transformers into our implementation to build a smooth pipeline. The first step is selecting the dataset.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
15,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## GPU Cloud - Lambda
In this lesson, we’ll leverage Lambda, the GPU cloud designed by ML engineers for training LLMs & Generative AI. We can create an account, link a billing account, and then [rent a GPU server instance at the associated hourly cost](https://lambdalabs.com/service/gpu-cloud/pricing). Please follow the instructions in the course logistics section to open a Lambda account. The cost of your instance is based on how long it runs, not just the time spent training your model, so remember to turn your instance off. For this lesson, we rented an 8x NVIDIA A100 instance (40GB of memory per GPU) at $8.80/h. If you're using the Lambda-provided cloud credit for the course, be aware that you still need to register a credit card. The credit will cover costs up to $75, but you must have a card on file. If you spend more money than allocated by the credit (more than $75), you will have to cover those costs yourself. You can find the code of this lesson in this [Notebook](https://colab.research.google.com/drive/1MVeH4vbbVZnZwVhEKxq1RNhUKP245Udt?usp=sharing). ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
16,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Training Monitoring - Weights and Biases
Since we’re going to spend a lot of money training our LLM, we want to be sure that everything is progressing smoothly. To do so, we’ll log the training metrics to Weights and Biases, allowing us to monitor them in real time on a suitable dashboard.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
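Before launching the training job, we authenticate with Weights and Biases once on the instance so that the Transformers `Trainer` (configured later with `report_to` set to `wandb`) can stream metrics to it. A minimal sketch, with an arbitrary project name:

```python
# One-time Weights and Biases setup on the training instance (sketch).
import os
import wandb

wandb.login()  # prompts for the API key from your wandb.ai account

# The Transformers Trainer picks this up when report_to='wandb' is set.
os.environ['WANDB_PROJECT'] = 'GPT2-scratch-openwebtext'
```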
17,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Load the Dataset
During the pre-training process, we utilize the Activeloop datasets to stream the samples seamlessly, batch by batch. This approach proves beneficial for resource management as loading the entire dataset directly into memory is unnecessary. Consequently, it greatly helps in optimizing resource usage. You can quickly load the dataset, and it automatically handles the streaming process without requiring any special configurations. You can load the datasets in just one line of code and visualize their content for analysis. The [library seamlessly integrates with PyTorch and TensorFlow](https://docs.deeplake.ai/en/latest/Pytorch-and-Tensorflow-Support.html), which are considered two of the most powerful frameworks for implementing AI applications. You can head out to [datasets.activeloop.ai](https://datasets.activeloop.ai/docs/ml/datasets/) to see the complete list of available datasets. Porting your datasets to the hub is also achievable with minimal effort. Let’s start by loading the `openwebtext` [dataset](https://app.activeloop.ai/activeloop/openwebtext-train), a collection of Reddit posts with at least three upvotes. This dataset is well-suited for acquiring broad knowledge to build a foundational model for general purposes. The Deep Lake web UI simplifies dataset exploration through its table view and empowers you to query the data using [TQL](https://docs.activeloop.ai/performance-features/querying-datasets) (Tensor Query Language). You can notice that it's possible to quickly inspect dataset details, even when dealing with a sizable dataset containing 8 million rows. This comes thanks to Deep Lake's format, that enables rapid data streaming straight to your browser. ![Deep Lake Visualization Engine table view.](Train%20an%20LLM%20in%20the%20Cloud%209de9852bd1654b219cd67299d66a1761/Screenshot_2023-10-05_at_9.06.07_AM.png) Deep Lake Visualization Engine table view. ```python import deeplake ds = deeplake.load('hub://activeloop/openwebtext-train') ds_val = deeplake.load('hub://activeloop/openwebtext-val') print(ds) print(ds[0].text.text()) ``` ```python Dataset(path='hub://activeloop/openwebtext-train', read_only=True, tensors=['text', 'tokens']) ""An in-browser module loader configured to get external dependencies directly from CDN. Includes babel/typescript. For quick prototyping, code sharing, teaching/learning - a super simple web dev environment without node/webpack/etc.\n\nAll front-end libraries\n\nAngular, React, Vue, Bootstrap, Handlebars, and jQuery are included. Plus all packages from cdnjs.com and all of NPM (via unpkg.com). Most front-end libraries should work out of the box - just use import / require() . If a popular library does not load, tell us and we’ll try to solve it with some library-specific config.\n\nWrite modern javascript (or typescript)\n\nUse latest language features or JSX and the code will be transpiled in-browser via babel or typescript (if required). To make it fast the transpiler will start in a worker thread and only process the modified code. Unless you change many files at once or open the project for the first time, the transpiling should be barely noticeable as it runs in parallel with loading a..."" ``` The provided code will instantiate a dataset object capable of retrieving the data points for both training and validation sets. Afterward, we can print the variable to examine the dataset's characteristics. 
It consists of two tensors: `text` containing the textual input and `tokens` representing the tokenized version of the content (which we won't be utilizing). We can also index into the dataset, access each column by using `.text`, and convert a row to plain text by calling the `.text()` method. The next step involves crafting a PyTorch Dataset class that leverages the loader object and ensures compatibility with the framework. The Dataset class handles both dataset formatting and any desired preprocessing steps to be applied. In this instance, our objective is to tokenize the samples. We will load the GPT-2 tokenizer model from the Transformers library to achieve this. For this specific model, we need to set a padding token (which may not be required for other models); here, we use the end-of-sentence token (`eos_token`) as the loaded tokenizer’s `pad_token`. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""gpt2"") tokenizer.pad_token = tokenizer.eos_token ``` Next, we create dataloaders from",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
18,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Load the Dataset
the Deep Lake datasets. In doing so, we also specify a `transform` that tokenizes the texts of the dataset on the fly. ```python # define transform to tokenize texts def get_tokens_transform(tokenizer): def tokens_transform(sample_in): tokenized_text = tokenizer( sample_in[""text""], truncation=True, max_length=512, padding='max_length', return_tensors=""pt"" ) tokenized_text = tokenized_text[""input_ids""][0] return { ""input_ids"": tokenized_text, ""labels"": tokenized_text } return tokens_transform # create data loaders ds_train_loader = ds.dataloader()\ .batch(32)\ .transform(get_tokens_transform(tokenizer))\ .pytorch() ds_eval_train_loader = ds_val.dataloader()\ .batch(32)\ .transform(get_tokens_transform(tokenizer))\ .pytorch() ``` Please note that we have formatted the dataset so that each sample is comprised of two components: `input_ids` and `labels`. `input_ids` are the tokens the model will use as inputs, while `labels` are the tokens the model will try to predict. Currently, both keys contain the same tokenized text. However, the trainer object from the Transformers library will automatically shift the labels by one token, preparing them for training.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
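Before committing to a long and costly run, it can be worth pulling a single batch to confirm that the shapes and keys match what the `Trainer` expects. A small sketch, assuming the Deep Lake dataloader can be iterated like a standard PyTorch dataloader:

```python
# Sanity-check one batch before starting the expensive training run (sketch).
batch = next(iter(ds_train_loader))
print(batch['input_ids'].shape)  # expected: (32, 512) -> (batch size, sequence length)
print(batch['labels'].shape)     # same shape; labels are shifted later for training
```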
19,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Initialize the Model
As the scope of this course does not include building the architecture from scratch, we won't be implementing it. We have already covered the details of the Transformer architecture in a previous lesson and provided additional resources for those who are interested in a more in-depth implementation. To accelerate the process, we will leverage an existing publicly available implementation of the transformer architecture. This approach allows us to scale the model quickly using available hyperparameters, including the number of layers, embedding dimension, and attention heads. Additionally, we will capitalize on the success of established architectures while maintaining the flexibility to modify the model size to accommodate our available resources. We opted to utilize the GPT-2 pre-trained model. Nonetheless, there is an option to utilize any other available model from the Huggingface hub; the approach presented here can be easily adapted to work with various architectures. Initially, we examine the default hyperparameters by loading the configuration file and reviewing the choices made in the architecture design. ```python from transformers import AutoConfig config = AutoConfig.from_pretrained(""gpt2"") print(config) ``` ```python GPT2Config { ""_name_or_path"": ""gpt2"", ""activation_function"": ""gelu_new"", ""architectures"": [ ""GPT2LMHeadModel"" ], ""attn_pdrop"": 0.1, ""bos_token_id"": 50256, ""embd_pdrop"": 0.1, ""eos_token_id"": 50256, ""initializer_range"": 0.02, ""layer_norm_epsilon"": 1e-05, ""model_type"": ""gpt2"", ""n_ctx"": 1024, ""n_embd"": 768, ""n_head"": 12, ""n_inner"": null, ""n_layer"": 12, ""n_positions"": 1024, ""reorder_and_upcast_attn"": false, ""resid_pdrop"": 0.1, ""scale_attn_by_inverse_layer_idx"": false, ""scale_attn_weights"": true, ""summary_activation"": null, ""summary_first_dropout"": 0.1, ""summary_proj_to_labels"": true, ""summary_type"": ""cls_index"", ""summary_use_proj"": true, ""task_specific_params"": { ""text-generation"": { ""do_sample"": true, ""max_length"": 50 } }, ""transformers_version"": ""4.30.2"", ""use_cache"": true, ""vocab_size"": 50257 } ``` It is apparent that we have the ability to exert significant control over almost every aspect of the network by manipulating the configuration settings. Specifically, we focus on the following parameters: `n_layer`, which indicates the number of stacking decoder components and defines the embedding layer’s hidden dimension; `n_positions` and `n_ctx`, to represent the maximum number of input tokens; and `n_head` to change the number of attention heads in each attention component. You can read the [documentation](https://huggingface.co./docs/transformers/model_doc/gpt2#transformers.GPT2Config) to gain a more comprehensive understanding of the remaining parameters. We can start by initializing the model using the default configuration and then count the number of parameters it contains, which will serve as a baseline. To achieve this, we utilize the `GPT2LMHeadModel` class, which takes the `config` variable as input and then proceeds to loop through the parameters, summing them up accordingly. ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel(config) model_size = sum(t.numel() for t in model.parameters()) print(f""GPT-2 size: {model_size/1e6:.1f}M parameters"") ``` ``` GPT-2 size: 124.4M parameters ``` As shown, the GPT-2 model is relatively small (124M) when compared to the current state-of-the-art large language models. 
We’re going to pre-train a 124-million-parameter model, which we refer to as `GPT2-scratch-openwebtext`. We chose this size so that a part of its training can be easily replicated by any reader within a reasonable price (~$100). If you wanted to train a larger model, you could modify the architecture to scale it up slightly. As we previously described the selected parameters, we can create a network with 32 layers and an embedding size of 1600. It is worth noting that if not specified, the hidden dimensionality of the linear layers will be `4 × n_embd`. ```python config.n_layer = 32 config.n_embd = 1600 config.n_positions = 512 config.n_ctx = 512 config.n_head = 32 ``` Now, we proceed to load the model with the updated hyperparameters. ```python model_1b = GPT2LMHeadModel(config) model_size = sum(t.numel() for t in model_1b.parameters()) print(f""GPT2-1B size: {model_size/1e6:.1f}M parameters"") ``` ``` GPT2-1B size: 1065.8M parameters ``` The modifications led to a model with 1 billion parameters. It is possible to scale the network further to be more in line with",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
20,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Initialize the Model
the newest state-of-the-art models, which often have more than 80 layers. However, let’s continue with this lesson's 124M parameters model.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
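Because the `config` object was modified in place while exploring the larger variant, we re-create the default GPT-2 configuration before training so that the model we actually pre-train is the 124M-parameter one. A short sketch of that reset:

```python
# Reset to the default GPT-2 hyperparameters (roughly 124M parameters).
from transformers import AutoConfig, GPT2LMHeadModel

config = AutoConfig.from_pretrained('gpt2')
model = GPT2LMHeadModel(config)

model_size = sum(t.numel() for t in model.parameters())
print(f'GPT-2 size: {model_size/1e6:.1f}M parameters')  # ~124.4M
```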
21,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Training Loop
The final step in the process involves initializing the training loop. We utilize the Transformers library's `Trainer` class, which takes the necessary parameters for training the model. However, before proceeding, we need to create a `TrainingArguments` object that defines all the essential arguments. ```python from transformers import Trainer, TrainingArguments args = TrainingArguments( output_dir=""GPT2-scratch-openwebtext"", evaluation_strategy=""steps"", save_strategy=""steps"", eval_steps=500, save_steps=500, num_train_epochs=2, logging_steps=1, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=1, weight_decay=0.1, warmup_steps=100, lr_scheduler_type=""cosine"", learning_rate=5e-4, bf16=True, ddp_find_unused_parameters=False, run_name=""GPT2-scratch-openwebtext"", report_to=""wandb"" ) ``` Note that we set the `per_device_train_batch_size` and the `per_device_eval_batch_size` variables to `1` as the batch size is already specified by the dataloader we created earlier. There are over 90 parameters available for adjustment. Find a comprehensive list with explanations in the [documentation](https://huggingface.co./docs/transformers/main_classes/trainer#transformers.TrainingArguments). Please note that if there is an ""out of memory"" error while attempting to train, a smaller `batch_size` can be used. Additionally, the `bf16` flag, which trains the model using lower precision floating numbers, is only available on high-end GPU devices. If unavailable, it can be substituted with the argument `fp16=True`. Notice also that we set the parameter `report_to` to `wandb`; that is, we are sending the training metrics to [Weights and Biases](https://wandb.ai/site) so that we can see a real-time report of how the training is going. Next, we define the `TrainerWithDataLoaders` class, a subclass of `Trainer` where we override the `get_train_dataloader` and `get_eval_dataloader` methods to return our previously defined data loaders. ```python from transformers import Trainer class TrainerWithDataLoaders(Trainer): def __init__(self, *args, train_dataloader=None, eval_dataloader=None, **kwargs): super().__init__(*args, **kwargs) self.train_dataloader = train_dataloader self.eval_dataloader = eval_dataloader def get_train_dataloader(self): return self.train_dataloader def get_eval_dataloader(self, dummy): return self.eval_dataloader ``` The process initiates with a call to the `.train()` method. ```python trainer = TrainerWithDataLoaders( model=model, args=args, train_dataloader=ds_train_loader, eval_dataloader=ds_eval_train_loader, ) trainer.train() ``` The `Trainer` object will handle model evaluation during training, as specified in the `eval_steps` argument, and save checkpoints based on the previously defined in `save_steps`. Here’s the final trained model after about 45 hours of training on 8x NVIDIA A100 ****on Lambda Labs. [GPT2-scratch-openwebtext.zip](Train%20an%20LLM%20in%20the%20Cloud%209de9852bd1654b219cd67299d66a1761/GPT2-scratch-openwebtext.zip) As the [hourly cost of 8x NVIDIA A100 on Lambda Labs is $8.80](https://lambdalabs.com/service/gpu-cloud#pricing), the total cost is $ 400. You can stop your pretraining earlier if you want to spend less money on that. Here’s the training report on [Weights and Biases](https://wandb.ai/ala_/GenAI360/runs/dng49avj?workspace=user-ala_). The following report shows that the training loss decreased relatively smoothly as iterations passed. 
![Screenshot 2023-09-04 at 15.31.08.png](Train%20an%20LLM%20in%20the%20Cloud%209de9852bd1654b219cd67299d66a1761/Screenshot_2023-09-04_at_15.31.08.png)",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
22,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Inference
Once the pre-training process is complete, we proceed with the inference stage to observe our model in action and evaluate its capabilities. As specified, the `Trainer` will store the intermediate checkpoints in a designated directory called `./GPT2-scratch-openwebtext`. The most efficient approach to utilize the model involves leveraging the Transformers pipeline functionality, which automatically loads both the model and tokenizer, making them ready for text generation. Below is the code snippet that establishes a pipeline object utilizing the pre-trained model alongside the tokenizer we defined in the preceding section. This pipeline enables text generation. ```python from transformers import pipeline pipe = pipeline(""text-generation"", model=""./GPT2-scratch-openwebtext"", tokenizer=tokenizer, device=""cuda:0"") ``` The pipeline object leverages the powerful Transformers `.generate()` method internally, offering exceptional flexibility in managing the text generation process. ([documentation](https://huggingface.co./docs/transformers/v4.18.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)) We can use methods like `min_length` to define a minimum number of tokens to be generated, `max_length` to limit the newly generated tokens, `temperature` to control the generation process between randomness and most likely, and lastly, `do_sample` to modify the completion process, switching between a greedy approach that always selects the most probable token and other sampling methods, such as beam search or diverse search. We only set the `num_return_sequences` to limit the number of generated sequences. ```python txt = ""The house prices dropped down"" completion = pipe(txt, num_return_sequences=1) print(completion) ``` ``` [{'generated_text': 'The house prices dropped down to 3.02% last year. While it was still in development, the housing market was still down. The recession hit on 3 years between 1998 and 2011. In fact, it slowed the amount of housing from 2013 to 2013'}] ``` The code will attempt to generate a completion for the given input sequence using the knowledge it has acquired from the training dataset. It aims to finish the following sequence: `The house prices dropped down` while being relevant and contextually appropriate. Even with a brief training period, the model exhibits a good grasp of the language, generating grammatically correct and contextually coherent sentences.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
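To see how the generation parameters mentioned above change the output, the same pipeline can be called with explicit sampling settings, which are forwarded to `.generate()` internally. The values below are purely illustrative:

```python
# Illustrative sampling settings; keyword arguments are forwarded to .generate().
completions = pipe(
    'The house prices dropped down',
    do_sample=True,          # sample instead of always picking the most probable token
    temperature=0.7,         # lower values make the output more deterministic
    top_p=0.9,               # nucleus sampling: cumulative probability threshold
    max_new_tokens=50,       # cap the number of newly generated tokens
    num_return_sequences=2,  # produce two alternative completions
)
for completion in completions:
    print(completion['generated_text'])
```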
23,Train an LLM in the Cloud,"# Train an LLM in the Cloud
## Conclusion
Throughout this lesson, we gained an understanding of the fundamental steps required to train your own language model. The steps involve loading the relevant training data, defining the architecture, scaling it up as per your requirements, and, finally, commencing the training process. As previously discussed, there is no need to train a language model from scratch in many cases. In the upcoming module, we will cover the fine-tuning process in greater detail, enabling you to harness the capabilities of existing powerful models for specific use cases. --- - **Resources** - [Notebook](https://colab.research.google.com/drive/1MVeH4vbbVZnZwVhEKxq1RNhUKP245Udt?usp=sharing). *(Inference results at the end)* - [Weight and Biases Report](https://wandb.ai/ala_/GenAI360/runs/dng49avj?workspace=user-ala_). - Requirements [requirements-train.txt](Train%20an%20LLM%20in%20the%20Cloud%209de9852bd1654b219cd67299d66a1761/requirements-train.txt) *(The provided file is a snapshot of all the packages on the server; not all of these packages are necessary for you)* ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954383-train-an-llm-in-the-cloud
24,How to train NanoGPT with Deep Lake streamable dat,"# How to train NanoGPT with Deep Lake streamable dataloader
In this project lesson, we walk through a training example of Andrej Karpathy’s **[NanoGPT](https://github.com/karpathy/nanoGPT/blob/master/train.py)**. This is the easiest, swiftest repository for training and fine-tuning medium-sized GPTs. The code to train the model is just a ~300-line boilerplate training loop and a ~300-line GPT model definition that reproduces GPT-2. While NanoGPT was designed to be run locally, we built on this and overcame the speed constraints of local training by replacing the local data loader with Deep Lake’s streamable data loader. [https://www.activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/](https://www.activeloop.ai/resources/generative-ai-data-infrastructure-how-to-train-large-language-models-ll-ms-with-deep-lake/)",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954371-how-to-train-nanogpt-with-deep-lake-streamable-dataloader
25,Open-Source LLMs,"# Open-Source LLMs
## **Introduction**
In this lesson, we will discuss several open-source LLMs and their features, capabilities, and licenses. This overview will cover LLaMA 2, Open Assistant, Dolly (by Databricks), and Falcon, which are among the most widely used open-source LLMs. We will also explore their licenses and potential commercial usage, including any limitations or restrictions those licenses impose.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
26,Open-Source LLMs,"# Open-Source LLMs
## LLaMA 2
[LLaMA 2](https://ai.meta.com/llama/) is a cutting-edge large language model developed by Meta, released on July 18, 2023, with an open license for both research and commercial use. The architecture of LLaMA 2 is described in great detail in the 77-page [paper](https://arxiv.org/abs/2307.09288), making it easier for data scientists to recreate and fine-tune the models for their specific needs. The model's training data comprises an impressive 2 trillion tokens. It has been trained on a massive scale, outperforming all open-source benchmarks and demonstrating performance comparable to GPT3.5 in terms of human evaluation. LLaMA 2 is available in three parameter variations: 7B, 13B, and 70B, and there are also instruction-tuned versions known as LLaMA-Chat. The fine-tuning process is done through Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF), using a novel approach to segment data based on helpfulness and safety prompts. The reward models are crucial to LLaMA 2's performance, allowing it to balance safety and helpfulness effectively. The safety reward model and helpfulness reward model are trained to evaluate the quality of generated responses. The impact of LLaMA 2 in Generative AI is substantial, outperforming other open innovation models like Falcon or Vicuna. You can find the LLaMA 2 models on the Hugging Face Hub [here](https://huggingface.co./meta-llama). Here, we test the `meta-llama/Llama-2-7b-chat-hf` model. For this, you’ll first have to request access to the model on [this page](https://huggingface.co./meta-llama/Llama-2-7b-chat-hf). First, let’s download the model. It takes some time as the model weighs about 14GB. ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # download model model_id = ""meta-llama/Llama-2-7b-chat-hf"" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, torch_dtype=torch.bfloat16 ) ``` Then, we generate a completion with it. This step will take a lot of time if you’re generating text using CPUs instead of GPUs! ```python # generate answer prompt = ""Translate English to French: Configuration files are easy to use!"" inputs = tokenizer(prompt, return_tensors=""pt"", return_token_type_ids=False) outputs = model.generate(**inputs, max_new_tokens=100) # print answer print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]) ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
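Note that because the LLaMA 2 checkpoints are gated, the download above only works after your access request has been approved and you have authenticated with the same Hugging Face account, for example:

```python
# Authenticate so the gated meta-llama checkpoints can be downloaded.
from huggingface_hub import login

login()  # paste a Hugging Face access token with read permission when prompted
```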
27,Open-Source LLMs,"# Open-Source LLMs
## **Falcon**
The [Falcon](https://falconllm.tii.ae/) models, developed and trained by the Technology Innovation Institute (TII) of Abu Dhabi, have gained significant attention since their release in May 2023. These models are causal large language models (LLM), similar to GPT, and are also known as ""decoder-only"" models. They excel in predicting the next token in a sequence of tokens with their attention focused solely on the left context during training, while the right context remains masked. The Falcon models are distributed under the Apache 2.0 License, allowing even commercial use. The largest of these models, Falcon-40B, has shown great performance, outperforming other causal LLMs like LLaMa-65B and MPT-7B. Falcon-7B, a slightly smaller version, was designed to be fine-tuned on consumer hardware and has half the number of layers and embedding dimensions compared to Falcon-40B. The training data for Falcon models primarily comes from the “Falcon RefinedWeb dataset,” which is meticulously curated and multimodal-friendly, preserving links and alt texts of images. This dataset and curated corpora make up 75% of the pre-training data for the Falcon models. While it primarily covers English, additional versions like ""RefinedWeb-Europe"" have been prepared to include several European languages. The instruct versions of Falcon-40B and Falcon-7B perform even better, with fine-tuning done on a mixture of chat/instruct datasets sourced from various places, including [GPT4all](https://gpt4all.io/index.html) and [GPTeacher](https://github.com/teknium1/GPTeacher). You can find the Falcon models on the Hugging Face Hub [here](https://huggingface.co./tiiuae). Here, we test the `tiiuae/falcon-7b-instruct` model. You can use the same code previously used for the LLaMA example by changing the `model_id`. ```python model_id = ""tiiuae/falcon-7b-instruct"" ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
28,Open-Source LLMs,"# Open-Source LLMs
## **Dolly**
[Dolly](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) is an open-source LLM introduced by [Databricks](https://www.databricks.com/). It was first unveiled as Dolly 1.0, a language model that showcased ChatGPT-like human interactivity. The team has now released Dolly 2.0, a better instruction-following LLM. One of the critical features of Dolly 2.0 is that it is built on a new, high-quality human-generated instruction dataset called ""databricks-dolly-15k"". This dataset consists of 15,000 prompt/response pairs designed explicitly for instruction tuning large language models. Unlike many instruction-following models, Dolly 2.0's dataset is open-source and licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. This means that anyone can use, modify, or extend the dataset for any purpose, including commercial applications. The Dolly 2.0 model is based on the [EleutherAI Pythia-12 b](https://huggingface.co./EleutherAI/pythia-12b) architecture, comprising 12 billion parameters, which makes it capable of high-quality instruction following behavior. Despite being smaller than some other models, such as Alpaca, Dolly 2.0 has demonstrated great performance due to its reliance on real-world, human-generated training records rather than synthesized data. You can find the Databricks models on the Hugging Face Hub [here](https://huggingface.co./databricks). Here, we test the `databricks/dolly-v2-3b` model. You can use the same code previously used for the LLaMA example by changing the `model_id`. ```python model_id = ""databricks/dolly-v2-3b"" ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
29,Open-Source LLMs,"# Open-Source LLMs
## **Open Assistant**
The [Open Assistant](https://open-assistant.io/) project is an initiative aiming to make high-quality large language models accessible to everyone through an open-source and collaborative approach. Unlike some other ChatGPT open-source alternatives with restricted licenses, Open Assistant seeks to provide a versatile chat-based language model comparable to ChatGPT and GPT-4 that can be used for commercial purposes. The heart of the project lies in its commitment to openness and inclusivity. They have collected a substantial dataset from over 13,000 volunteers, comprising more than 600,000 interactions, 150,000 messages, and 10,000 fully annotated conversation trees on various topics and in multiple languages. This dataset serves as the foundation for training various models hosted on platforms like Hugging Face. Users can explore the potential of Open Assistant by interacting with the model through the Hugging Face demo or the official chat interface, both designed to solicit user feedback to help improve the chatbot's responses. The project encourages community involvement and contributions, allowing users to participate in data collection and ranking tasks to enhance the capabilities of the language model. As with most open-source large language models, Open Assistant does have some limitations, particularly in answering math and coding questions, as they are trained on fewer interactions in these domains. However, the model is generally adept at generating interesting and human-like responses, though occasional inaccuracies may occur.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
30,Open-Source LLMs,"# Open-Source LLMs
## Mistral
In September 2023, [Mistral](https://mistral.ai/) released its language model [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/) under the Apache 2.0 license. This model, with 7.3 billion parameters, outperforms the Llama 2 13B model on all benchmarks and the Llama 1 34B model on many of them. It also approaches the performance of CodeLlama 7B on code while maintaining proficiency in English tasks. Mistral 7B uses Grouped-query attention (GQA) for faster inference and Sliding Window Attention (SWA) to handle longer sequences more cost-effectively. This, along with modifications to FlashAttention and xFormers, has led to a 2x speed improvement for sequence lengths of 16k with a window of 4k. The model can be downloaded and used anywhere, including locally, with the team's reference implementation. It can also be deployed on any cloud (AWS/GCP/Azure) using the [vLLM inference server](https://github.com/vllm-project/vllm), or used through Hugging Face. Mistral 7B is easily fine-tuned for any task. As a demonstration, the team has provided a model fine-tuned for chat, which outperforms the Llama 2 13B chat model. The fine-tuned model, Mistral 7B Instruct, outperforms all other 7B models on [MT-Bench](https://arxiv.org/abs/2306.05685) and is comparable to 13B chat models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
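As an illustration (not taken from the lesson code), one way to try the instruct model locally with Hugging Face `transformers` is sketched below; the Hub model ID (`mistralai/Mistral-7B-Instruct-v0.1`) and the generation settings are assumptions.

```python
# Minimal sketch: running Mistral 7B Instruct through the transformers pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    'text-generation',
    model='mistralai/Mistral-7B-Instruct-v0.1',  # assumed Hub ID of the instruct checkpoint
    torch_dtype=torch.bfloat16,
    device_map='auto',
)

# The instruct format wraps the user message in [INST] ... [/INST] tags.
prompt = '[INST] Explain sliding window attention in one sentence. [/INST]'
print(pipe(prompt, max_new_tokens=80)[0]['generated_text'])
```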
31,Open-Source LLMs,"# Open-Source LLMs
## The Hugging Face Open LLM Leaderboard
Hugging Face hosts an **[LLM leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)**. This leaderboard is created by evaluating community-submitted models on text generation benchmarks on Hugging Face’s clusters. It’s an excellent resource for keeping track of the best-performing open-source LLMs as new models are released. You can also filter the models by language or domain to find the one that meets your specific requirements.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
32,Open-Source LLMs,"# Open-Source LLMs
## **Conclusion**
In this lesson, we explored several open-source LLMs and their features, capabilities, and licenses. We discussed LLaMA 2, Falcon, Dolly, and Open Assistant as some of the most prominent open-source LLMs available. - LLaMA 2, developed by Meta, is a cutting-edge language model with impressive performance and is available in several parameter sizes. It has been trained on a massive scale and demonstrates remarkable performance comparable to GPT-3.5. - Falcon models, developed and trained by the Technology Innovation Institute (TII) of Abu Dhabi, have gained attention for their decoder-only approach and have shown great performance, especially the Falcon-40B model. - Dolly, introduced by Databricks, is an open-source LLM with a focus on instruction following. It has a high-quality human-generated instruction dataset and is licensed under Creative Commons, allowing for versatile use, including commercial applications. - Open Assistant is an ambitious project aiming to make high-quality LLMs accessible to everyone through openness and inclusivity. It encourages community involvement and contributions to enhance the capabilities of the language model. It is essential to acknowledge the importance of open-source LLMs in advancing the field of natural language processing and enabling wider access to state-of-the-art language models for research and commercial purposes. In the next lesson, we will explore an equally important aspect of LLMs - hallucinations and bias. Hallucinations refer to the generation of fake or incorrect information by LLMs, while bias entails the perpetuation of prejudiced or discriminatory content. Understanding and addressing these challenges are crucial to ensuring the responsible and ethical use of large language models in various applications.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953617-open-source-llms
33,Focus on the GPT Architecture,"# Focus on the GPT Architecture
## Introduction
The Generative Pre-trained Transformer (GPT) is a type of transformer-based language model developed by OpenAI. The 'transformer' part of its name refers to its transformer architecture, which was introduced in the research paper ""[Attention is All You Need](https://arxiv.org/abs/1706.03762)"" by Vaswani et al. You should have a good understanding of the fundamental elements comprising the transformer architecture. In this lesson, we will cover the decoder-only networks that play an essential role in developing large language models. We will explore their unique attributes and the reasons behind their effectiveness. In contrast to conventional Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, the transformer architecture departs from recurrence and adopts self-attention mechanisms, resulting in substantial advancements in speed and scalability. This immensely powerful architecture harnesses the potential for parallelization within the network (running multiple attention heads simultaneously) along with the abundant small cores available in a GPU.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954213-focus-on-the-gpt-architecture
34,Focus on the GPT Architecture,"# Focus on the GPT Architecture
## The GPT Architecture
The GPT family comprises decoder-only models, wherein each block in the stack consists of a self-attention mechanism and a position-wise fully connected feed-forward network. The self-attention mechanism, also known as scaled dot-product attention, allows the model to weigh the importance of each word in the input when generating the next word in the sequence. It computes a weighted sum of all the words in the sequence, where the weights are determined by the attention scores. The critical aspect to focus on is the addition of “masking” to the self-attention, which prevents the model from attending to certain positions/words.

![Illustrating which tokens are attended to by masked self-attention at a particular timestep.](Focus%20on%20the%20GPT%20Architecture%20a6de3e541de44464859c4a62f4c132c1/1_mwO-oDzhdMqLhV9XaxKQLQ.webp)

Illustrating which tokens are attended to by masked self-attention at a particular timestep. (Image taken from [NLPiation](https://medium.com/mlearning-ai/what-are-the-differences-in-pre-trained-transformer-base-models-like-bert-distilbert-xlnet-gpt-4b3ea30ef3d7))

As you see in the figure, we pass the whole sequence to the model, but at timestep 5 the model tries to predict the next token by looking only at the previously generated tokens, masking the future tokens. This prevents the model from “cheating” by leveraging future tokens. The following code implements the “masked self-attention” mechanism.

```python
import numpy as np

def self_attention(query, key, value, mask=None):
    # Compute scaled attention scores
    scores = np.dot(query, key.T) / np.sqrt(key.shape[-1])
    if mask is not None:
        # Apply mask by setting masked positions to a large negative value
        scores = scores + mask * -1e9
    # Apply softmax (with max subtraction for numerical stability) to obtain attention weights
    scores = scores - scores.max(axis=-1, keepdims=True)
    attention_weights = np.exp(scores) / np.sum(np.exp(scores), axis=-1, keepdims=True)
    # Compute weighted sum of value vectors
    output = np.dot(attention_weights, value)
    return output
```

The first step is to compute a Query, Key, and Value vector for each word in the input sequence using separate learned linear transformations of the input vector; these are simple feed-forward linear layers that the model learns during training. Then, we calculate the attention scores by taking the dot product of each Query vector with the Key vector of every other word (scaled by the square root of the Key dimension). Masking is applied by setting the scores at the masked positions to a large negative number, which effectively tells the model that those words are unimportant and should be disregarded during attention. To get the attention weights, we apply the softmax function to the attention scores, converting them into probabilities; the large negative scores become weights of (effectively) zero. Lastly, we multiply each Value vector by its corresponding weight and sum them up. This produces the output of the masked self-attention mechanism for the word.
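To make the masking concrete, here is a small usage sketch (not from the lesson; the toy dimensions and random inputs are assumptions) that builds a causal mask with `np.triu` and calls the function above:

```python
# Toy usage of the self_attention function above with a causal (look-ahead) mask.
import numpy as np

np.random.seed(0)
seq_len, d_model = 5, 8
query = np.random.randn(seq_len, d_model)
key = np.random.randn(seq_len, d_model)
value = np.random.randn(seq_len, d_model)

# 1s above the diagonal mark future positions that each timestep must not attend to.
causal_mask = np.triu(np.ones((seq_len, seq_len)), k=1)

output = self_attention(query, key, value, mask=causal_mask)
print(output.shape)  # (5, 8): one contextualized vector per position
```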
The provided code snippet illustrates the process of a single self-attention head, but in reality, each layer contains multiple heads, which could range from 16 to 32 heads, depending on the architecture. These heads operate simultaneously to enhance the model's performance.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954213-focus-on-the-gpt-architecture
35,Focus on the GPT Architecture,"# Focus on the GPT Architecture
## Causal Language Modeling
LLMs utilize a **self-supervised learning** process for pre-training. This process eliminates the need to provide explicit labels to the model for learning, making it capable of acquiring knowledge autonomously. For instance, when training a summarization model using supervised learning, it is necessary to provide articles and their corresponding summaries as reference points during the training process. However, LLMs employ the causal language modeling objective to acquire knowledge from any textual data without the explicit need for human-provided labels. Why is it called “causal”? Because the prediction at each step depends only on earlier steps in the sequence and not on future steps. > This process involves feeding a segment of the document to the model and asking it to predict the next word. > Subsequently, the predicted word is concatenated to the original input and fed back to the model to predict a new token. This iterative loop continues, consistently feeding the newly generated token back into the network. During the pre-training process, the network acquires substantial knowledge about language and grammar. We can then fine-tune the pre-trained model using a supervised approach for different tasks or a specific domain. Compared to other well-known objectives, the advantage of this approach is that it models how humans naturally write or speak. In contrast to other objectives like masked language modeling, where masked tokens are introduced in the input, the causal language modeling approach constructs sentences one word at a time. This key difference ensures that our model's performance is not adversely affected when dealing with real-world passages lacking masking tokens. Moreover, we can utilize extensive, high-quality, human-generated content spanning centuries. This content can be derived from books, Wikipedia, news websites, and more. Familiar datasets and repositories, such as [ActiveLoop](https://datasets.activeloop.ai) and [Huggingface](https://huggingface.co./datasets), provide convenient access to some well-known datasets. We will cover this topic in more detail in later lessons.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954213-focus-on-the-gpt-architecture
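As an illustrative sketch (not part of the lesson code, and the choice of a small GPT-2 checkpoint is an assumption), the causal language modeling objective can be reproduced in a few lines with Hugging Face transformers: passing the input IDs as labels asks the model to predict each next token, conditioned only on earlier tokens.

```python
# Sketch: computing the causal LM (next-token prediction) loss with a small GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForCausalLM.from_pretrained('gpt2')

text = 'Language modeling learns the underlying distribution of words in a language.'
inputs = tokenizer(text, return_tensors='pt')

# Using the input IDs as labels triggers next-token prediction;
# the library shifts the labels internally, so no manual offset is needed.
with torch.no_grad():
    outputs = model(**inputs, labels=inputs['input_ids'])
print(outputs.loss.item())  # average cross-entropy of predicting each next token

# Generation follows the loop described above: append the most probable token and repeat.
generated = model.generate(inputs['input_ids'], max_new_tokens=10)
print(tokenizer.decode(generated[0]))
```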
36,Focus on the GPT Architecture,"# Focus on the GPT Architecture
## MinGPT
Numerous implementations of the GPT architecture exist, each designed for specific purposes. In upcoming lessons, we will thoroughly explore alternative libraries that are better suited for production environments. Here, however, we introduce a lightweight repository implemented by Andrej Karpathy, named [minGPT](https://github.com/karpathy/minGPT). It is a minimal implementation of OpenAI's GPT-2 model. In his own words, this serves as an educational implementation that strives to remove all complexities, achieving a length of just 300 lines of code and using the PyTorch library. This valuable resource provides an excellent opportunity to read the code and enhance your understanding of what's happening under the hood. Abundant comments in the code describe the processes and act as a helpful guide. Three main files can be found within the repository. First, [model.py](https://github.com/karpathy/minGPT/blob/master/mingpt/model.py) handles the definition of architecture details. Second, [bpe.py](https://github.com/karpathy/minGPT/blob/master/mingpt/bpe.py) is responsible for the tokenization process using the BPE algorithm. Lastly, [trainer.py](https://github.com/karpathy/minGPT/blob/master/mingpt/trainer.py) implements a generic training loop for any neural network, not limited to the GPT architecture. Furthermore, the [demo.ipynb](https://github.com/karpathy/minGPT/blob/master/demo.ipynb) file contains a notebook that demonstrates the complete utilization of the code, including the inference process. The code can be executed on a MacBook Air, making it accessible for use on your local PC. Alternatively, you can fork the repository and utilize services like Colab.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954213-focus-on-the-gpt-architecture
37,Focus on the GPT Architecture,"# Focus on the GPT Architecture
## Conclusion
The decoder-only architecture and GPT-family models have driven the recent advancements in large language models. It is essential to possess a strong grasp of the transformer architecture and comprehend the distinctive features that set the decoder-only models apart, making them well-suited for language modeling. We have explored the shared components and delved deeper into what makes their architecture unique. Subsequent lessons will cover various other aspects of language models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954213-focus-on-the-gpt-architecture
38,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Introduction
In the previous lesson, we experimented with the method of fine-tuning an LLM to follow the instructions like a chatbot. Although this proves beneficial across various applications, we can similarly employ this strategy to train a model tailored for a particular domain. In this lesson, our goal is to create a thoroughly tuned model for conducting **sentiment analysis on financial statements**. Ideally, the LLM would assess financial tweets by categorizing them as Positive, Negative, or Neutral. The dataset utilized in this lesson is the one curated in the [FinGPT project](https://arxiv.org/pdf/2307.10485.pdf). As previously stated, the dataset remains the pivotal and influential factor. Having acknowledged that, and given that we've extensively addressed the process of Supervised Fine-Tuning (SFT) before, this lesson will predominantly touch upon the dataset we utilized and the preprocessing steps involved. Nonetheless, a comprehensive notebook script for running and experimenting is provided at the conclusion of this lesson. The activities showcased in this tutorial involve the utilization of the 4th Generation Intel® Xeon® Scalable Processors (with 64GB RAM), with the use of [Intel® Advanced Matrix Extensions](https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/overview.html) (Intel® AMX). Both finetuning and inference can be accomplished by leveraging its optimization technologies. You can spin up a virtual machine using GCP Compute Engine as explained in the previous lesson. Follow the instructions in the course introduction to spin up a VM with Compute Engine with high-end Intel CPUs. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
39,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Load the Dataset
We are set to utilize the FinGPT sentiment dataset, comprising a set of financial tweets along with their corresponding labels. Additionally, this dataset features an `instruction` column containing the initial task directive. Typically, this instruction reads something akin to “What is the sentiment of the following content? Choose from Positive, Negative, or Neutral.” A smaller subset of the dataset can be accessed from the [300+ free public datasets](https://app.activeloop.ai/public/) curated by the Activeloop team and accessible in Deep Lake format. We've deliberately chosen a smaller subset to expedite the fine-tuning process. Specifically, the [training set](https://app.activeloop.ai/genai360/FingGPT-sentiment-train-set) comprises 20,000 data points, while we employ 2,000 samples for [validation](https://app.activeloop.ai/genai360/FingGPT-sentiment-valid-set) purposes. The dataset can be explored and queried using the Deep Lake Web UI or filtered using the Python package. The Deep Lake visualization engine enables us to [query the dataset](https://docs.activeloop.ai/performance-features/querying-datasets) and filter relevant rows using its query field. The NLP feature allows you to compose your query in English and receive the corresponding TQL query.

![The Deep Lake Visualization Engine table view with filtering.](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/Screenshot_2023-10-05_at_10.02.05_AM.png)

The Deep Lake Visualization Engine table view with filtering.

By utilizing the `deeplake.load()` function, we can create the Dataset object and load the samples.

```python
import deeplake

# Connect to the training and testing datasets
ds = deeplake.load('hub://genai360/FingGPT-sentiment-train-set')
ds_valid = deeplake.load('hub://genai360/FingGPT-sentiment-valid-set')

print(ds)
```

```python
Dataset(path='hub://genai360/FingGPT-sentiment-train-set', read_only=True, tensors=['input', 'instruction', 'output'])
```

At this point, we can proceed to create the function that formats a sample from the dataset into a suitable input for the model. The primary distinction from our previous approach lies in incorporating the instructions at the start of the prompt. The structure is as outlined below: `<instruction>\n\nContent: <input>\n\nSentiment: <output>`. The placeholders enclosed in `<>` will be substituted with the corresponding values from the dataset.

```python
def prepare_sample_text(example):
    """"""Prepare the text from a sample of the dataset.""""""
    text = f""{example['instruction'].text()}\n\nContent: {example['input'].text()}\n\nSentiment: {example['output'].text()}""
    return text
```

Presented here is a formatted input derived from an entry in the dataset.

```
What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}

Content: Diageo Shares Surge on Report of Possible Takeover by Lemann

Sentiment: positive
```

The subsequent steps should be recognizable from earlier lessons. We initialize the tokenizer for the [OPT-1.3B large language model](https://huggingface.co./facebook/opt-1.3b) and use the `ConstantLengthDataset` class to structure the samples. The tokenizer converts them into token IDs, and the class packs multiple samples together until the sequence length threshold is reached, thus enhancing the efficiency of the training process.
```python
# Load the tokenizer
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"")

# Create the ConstantLengthDataset
from trl.trainer import ConstantLengthDataset

train_dataset = ConstantLengthDataset(
    tokenizer,
    ds,
    formatting_func=prepare_sample_text,
    infinite=True,
    seq_length=1024
)

eval_dataset = ConstantLengthDataset(
    tokenizer,
    ds_valid,
    formatting_func=prepare_sample_text,
    seq_length=1024
)

# Show one sample from train set
iterator = iter(train_dataset)
sample = next(iterator)
print(sample)
```

```python
{'input_ids': tensor([50118, 35212, 8913, ..., 2430, 2, 2]),'labels': tensor([50118, 35212, 8913, ..., 2430, 2, 2])}
```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
40,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Initialize the Model and Trainer
The ""Fine-Tuning using SFT"" tutorial clarifies the code snippets within this subsection. For additional inquiries, kindly refer to that resource for further understanding. We will quickly walk through the code. Please bear in mind that the fine-tuned checkpoint will be accessible in the Inference section if resources for the fine-tuning process are needed. Additionally, the specifics of the training process are recorded and can be accessed through [Weights and Biases](https://wandb.ai/site). During the training process, system activity can be tracked, including metrics such as memory usage, CPU utilization, duration, loss values, and a range of other parameters. Here’s the Weights and Biases [report](https://wandb.ai/ala_/GenAI360/runs/p08s2n5f?workspace=user-ala_) of the finetuning of this lesson. We start by defining the arguments necessary to configure the training process. We use the `LoraConfig` class from the [PEFT library](https://github.com/huggingface/peft) for that. Subsequently, we employ the `TrainingArguments` class from the transformers library to control the training loop. ```python # Define LoRAConfig from peft import LoraConfig lora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias=""none"", task_type=""CAUSAL_LM"", ) # Define TrainingArguments from transformers import TrainingArguments training_args = TrainingArguments( output_dir=""./OPT-fine_tuned-FinGPT-CPU"", dataloader_drop_last=True, evaluation_strategy=""epoch"", save_strategy=""epoch"", num_train_epochs=10, logging_steps=5, per_device_train_batch_size=12, per_device_eval_batch_size=12, learning_rate=1e-4, lr_scheduler_type=""cosine"", warmup_steps=100, gradient_accumulation_steps=1, gradient_checkpointing=False, fp16=False, bf16=True, weight_decay=0.05, ddp_find_unused_parameters=False, run_name=""OPT-fine_tuned-FinGPT-CPU"", report_to=""wandb"", ) ``` The subsequent task involves loading the OPT-1.3B model in the `bfloat16` format, which is easily managed by Intel® CPUs and saves memory during fine-tuning. ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained( ""facebook/opt-1.3b"", torch_dtype=torch.bfloat16 ) ``` The subsequent stage entails casting specific layers within the network to complete 32-bit precision, enhancing the model's stability throughout training. ```python from torch import nn for param in model.parameters(): param.requires_grad = False # freeze the model - train adapters later if param.ndim == 1: # cast the small parameters (e.g. layernorm) to fp32 for stability param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() # reduce number of stored activations model.enable_input_require_grads() class CastOutputToFloat(nn.Sequential): def forward(self, x): return super().forward(x).to(torch.float32) model.lm_head = CastOutputToFloat(model.lm_head) ``` Now, connect the model, dataset, training arguments, and Lora config together using the `SFTTrainer` class to start the training process by invoking the `.train()` method. 
```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=lora_config,
    packing=True,
)

print(""Training..."")
trainer.train()
```

[OPT-fine_tuned-FinGPT-CPU.zip](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/OPT-fine_tuned-FinGPT-CPU.zip)",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
41,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Merging LoRA and OPT
Before conducting inference and observing the results, the final task is to load the LoRA adaptors from the preceding stage and merge them with the base model.

```python
# Load the base model (OPT-1.3B)
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
  ""facebook/opt-1.3b"", return_dict=True, torch_dtype=torch.bfloat16
)

# Load the LoRA adaptors
from peft import PeftModel

# Load the Lora model
model = PeftModel.from_pretrained(model, ""./OPT-fine_tuned-FinGPT-CPU//"")
model.eval()
model = model.merge_and_unload()

# Save for future use
model.save_pretrained(""./OPT-fine_tuned-FinGPT-CPU/merged"")
```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
42,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Inference
We randomly selected four previously unseen examples from the dataset and provided them as input to both the vanilla base model (OPT-1.3B) and the fine-tuned model in order to contrast their respective responses. The code is relatively straightforward when utilizing the `.generate()` method from the Transformers library. ```python inputs = tokenizer(""""""What is the sentiment of this news? Please choose an answer from {strong negative/moderately negative/mildly negative/neutral/mildly positive/moderately positive/strong positive}, then provide some short reasons.\n\n Content: UPDATE 1-AstraZeneca sells rare cancer drug to Sanofi for up to S300 mln.\n\nSentiment: """""", return_tensors=""pt"").to(""cuda:0"") generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=256, num_beams=1, do_sample=True, repetition_penalty=1.5, length_penalty=2.) print( tokenizer.decode(generation_output['sequences'][0]) ) ``` ``` What is the sentiment of this news? Please choose an answer from {strong negative/moderately negative/mildly negative/neutral/mildly positive/moderately positive/strong positive}, then provide some short reasons. Content: UPDATE 1-AstraZeneca sells rare cancer drug to Sanofi for up to S300 mln. Sentiment: positive ``` Observing the samples, we see that the model fine-tuned on financial tweets for the specific domain exhibits good performance in terms of adhering to instructions and comprehending the task at hand. Below, find a list of prompts to toggle the outputs by clicking on the right arrow icon. - 1. UPDATE 1-AstraZeneca sells rare cancer drug to Sanofi for up to S300 mln. ***Correct Answer: Positive*** ![Screenshot 2023-08-21 at 9.43.51 AM.png](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/Screenshot_2023-08-21_at_9.43.51_AM.png) - 2. SABMiller revenue hit by weaker EM currencies ***Correct Answer: Negative*** ![Screenshot 2023-08-21 at 9.45.08 AM.png](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/Screenshot_2023-08-21_at_9.45.08_AM.png) - 3. Buffett's Company Reports 37 Percent Drop in 2Q Earnings ***Correct Answer: Negative*** ![Screenshot 2023-08-21 at 9.48.49 AM.png](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/Screenshot_2023-08-21_at_9.48.49_AM.png) - 4. For a few hours this week, the FT gained access to Poly, the university where students in Hong Kong have been trapped… ***Correct Answer: Neutral*** ![Screenshot 2023-08-21 at 9.47.23 AM.png](Fine-Tuning%20using%20SFT%20for%20Financial%20Sentiment%20f744059ba2f8490b92b5c5a641c1e96a/Screenshot_2023-08-21_at_9.47.23_AM.png) These instances demonstrate that the vanilla model primarily focuses on the default language modeling task, which involves predicting the next word based on the input. In contrast, the fine-tuned model comprehends the instruction and generates the requested content.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
43,Fine-Tuning using SFT for Financial Sentiment,"# Fine-Tuning using SFT for Financial Sentiment
## Conclusion
This tutorial illustrated the procedure of leveraging publicly accessible datasets or curated data from your organization to develop a customized model that caters to your specific requirements. More powerful base models, such as LLaMA 2, can be employed by modifying the model ID used to load the model and its tokenizer; however, they require more resources for the fine-tuning process. We also showcased the feasibility of using 4th Generation Intel® Xeon® Scalable Processors for both the fine-tuning and inference stages. The Intel® [oneAPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#gs.5qwv2n) Math Kernel Library (Intel® MKL) toolkits offer a suite of utilities aimed at boosting the efficiency of various applications, including those in the field of Artificial Intelligence. It's intriguing to anticipate how the latest CPU architectures like Emerald Rapids, Sierra Forest, and Granite Rapids will shape and potentially transform the deep learning landscape. In the upcoming lessons, we will integrate GPUs to fine-tune a model using the RLHF (Reinforcement Learning from Human Feedback) approach. --- >> [notebook](https://colab.research.google.com/drive/1fRLsayydg2GYNZBWyJwgcxQL0h6CU0Ac?usp=sharing). >> [Weights and Biases report](https://wandb.ai/ala_/GenAI360/runs/p08s2n5f?workspace=user-ala_). --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959765-fine-tuning-using-sft-for-financial-sentiment
44,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## **Introduction**
In this lesson, we’ll dive into the most popular proprietary LLMs, such as GPT-4, Claude, and Cohere LLMs. The debate between open-source or proprietary models is multifaceted when discussing aspects like customizations, development speed, control, regulation, and quality.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
45,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## Proprietary LLMs
Proprietary models like GPT-4 and PaLM are developed and controlled by specific organizations, in contrast to open-source LLMs such as BigScience’s [Bloom](https://arxiv.org/pdf/2211.05100.pdf) and various community-driven projects, which are freely available for developers to use, modify, and distribute. As a result, proprietary models may offer advanced features and customization options tailored to specific use cases, and the organizations behind them can fine-tune them to meet exact requirements, providing a competitive edge in the market. Because these organizations fully control the model's development, deployment, and updates, they can protect their intellectual property and maintain a competitive advantage. Organizations offering proprietary models often provide commercial support and service level agreements (SLAs) to ensure reliability and performance guarantees; this level of professional support can be crucial for some use cases. Using proprietary LLMs through an API can also be cost-effective in many use cases: serving LLMs at low latency requires several GPUs (and the right competencies), so providers that serve many customers benefit from economies of scale. Let's now look at a list of popular proprietary models (as of July 2023).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
46,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## **Cohere LLMs**
Cohere’s models are categorized into two main types: Generative and Embeddings. The Generative models, also known as command models, are trained on a large corpus of data from the Internet. They are continually developed and updated, with improvements released weekly. You can [register for a Cohere account](https://dashboard.cohere.ai/welcome/register?ref=txt.cohere.com) and get a free trial API key. There is no credit or time limit associated with a trial key; calls are rate-limited to 100 calls per minute, which is typically enough for an experimental project. Save your key in your `.env` file as follows.

```bash
COHERE_API_KEY=""""
```

Then, install the `cohere` package with this command.

```bash
pip install cohere
```

You can now generate text with Cohere as follows.

```python
from dotenv import load_dotenv
load_dotenv()

import cohere
import os

co = cohere.Client(os.environ[""COHERE_API_KEY""])

response = co.generate(
  prompt='Please briefly explain to me how Deep Learning works using at most 100 words.',
  max_tokens=200
)
print(response.generations[0].text)
```

```
Deep Learning is a subfield of artificial intelligence and machine learning that is based on artificial neural networks with many layers, inspired by the structure and function of the human brain. These networks are trained on large amounts of data and algorithms to identify patterns and learn from experience, enabling them to perform complex tasks such as image and speech recognition, language translation, and decision-making. The key components of Deep Learning are neural networks with many layers, large amounts of data, and algorithms for training and optimization. Some of the applications of Deep Learning include autonomous vehicles, natural language processing, and speech recognition.
```

On the other hand, the embedding models are multilingual and can support over 109 languages. These models are designed for large enterprises whose end users are spread worldwide. Developers can also build classifiers on top of Cohere's language models to automate language-based tasks and workflows. The Cohere service provides a variety of models such as *Command* (`command`) for dialogue-like interactions, *Generation* (`base`) for generative tasks, *Summarize* (`summarize-xlarge`) for generating summaries, [and more](https://docs.cohere.com/docs/models).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
47,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## **OpenAI's GPT-3.5**
**GPT-3.5** is a language model developed by OpenAI. Its turbo version, **GPT-3.5-turbo** (recommended by OpenAI over [other variants](https://platform.openai.com/docs/models/gpt-3-5)), offers a more affordable option for generating human-like text through an API accessible via OpenAI endpoints. The model is optimized for chat applications while remaining powerful on other generative tasks and can process 96 languages. GPT-3.5-turbo comes in two variants: one with a 4k tokens context length and the other with 16k tokens. The [Azure Chat Solution Accelerator](https://github.com/microsoft/azurechat), powered by Azure Open AI Service, offers enterprises a robust platform to host OpenAI chat models with enhanced moderation and safety features. This solution enables organizations to establish a dedicated chat environment within their Azure Subscription, ensuring a secure and tailored user experience. One of the key advantages is its privacy aspect, as it's deployed within your Azure tenancy, allowing for complete isolation and control over your chat services. In the first lesson, “What are Large Language Models,” we saw how to use GPT-3.5-turbo via API, so refer to that lesson for a code snippet on how to use it with Python and get an API key. Additionally, we have recently introduced the ""[**LangChain & Vector Databases in Production**](https://learn.activeloop.ai/courses/langchain)"" free course, aimed at assisting you in getting the most out of LLMs and enhancing their functionality. The course encompasses fundamental topics such as initiating prompts and addressing hallucination, as well as delving into advanced areas like using LangChain to give memory to LLMs and developing agents for interaction with the real world.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
48,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## **OpenAI's GPT-4**
OpenAI's **GPT-4** is a multimodal model with an undisclosed number of parameters or training procedures. It is the latest and most powerful model published by OpenAI, and the multi-modality enables the model to process both text and image as input. It can be accessed by submitting your early access request through the OpenAI platform (as of July 2023). The two variants of the model are `gpt-4` and `gpt-4-32k` with different context lengths, 8k and 32k tokens, respectively.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
49,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## **Anthropic’s Claude**
Anthropic, an AI safety and research company, is a significant player in the AI landscape. It has secured substantial funding and partnered with Google for cloud computing access, mirroring OpenAI's trajectory in recent years. Anthropic's flagship product, Claude 2, is an LLM with a context size of 100k tokens. Anthropic has ambitious growth plans and aims to compete with top players like OpenAI and Deepmind, working with similarly advanced models. Claude 2 is trained to be a helpful assistant in a conversational tone, similar to ChatGPT. Its beta, unfortunately, is open only to people in the US or UK (as of July 2023). If you're in the US or UK, you can sign up for free on Anthropic's website. Just click ""Talk to Claude,"" and you'll be prompted to provide an email address. You'll be ready to go after you confirm the email address. The API is made available via the web [Console](https://console.anthropic.com/). First, read here, [Getting Access to Claude](https://docs.anthropic.com/claude/docs/getting-access-to-claude), for how to apply for access. Once you have access to the Console, you can generate API keys via your [Account Settings](https://console.anthropic.com/account/keys).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
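Although the lesson does not include Claude code, a minimal sketch of an API call might look like the following; it assumes the Anthropic Python SDK and the completions endpoint as they existed around mid-2023, so the model name, prompt markers, and parameters may have changed since.

```python
# Sketch: calling Claude 2 through Anthropic's Python SDK (API surface of mid-2023).
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ['ANTHROPIC_API_KEY'])

response = client.completions.create(
    model='claude-2',
    max_tokens_to_sample=300,
    # The completions endpoint expects alternating Human/Assistant turn markers.
    prompt=f'{anthropic.HUMAN_PROMPT} Briefly explain what a context window is.{anthropic.AI_PROMPT}',
)
print(response.completion)
```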
50,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## Google’s PaLM
Google's Pathways Language Model, or PaLM, is a next-generation artificial intelligence model optimized for various developer use cases, particularly in the realm of NLP. Its primary applications include the development of chatbots, text summarization, question-answering systems, and document search through its text embedding service. PaLM 2, the upgraded version of the model, is renowned for its ease of use and precision in following instructions. It features variants that are specifically trained for text and chat generation, as well as text embeddings, allowing for a broad range of use cases. Access to PaLM is exclusively through the PaLM API. Read the [Setup process](https://developers.generativeai.google/tutorials/setup), and please note that as of July 2023, the PaLM API is only available after being selected from a waitlist. [Google's PaLM 2 showcases](https://ai.google/static/documents/palm2techreport.pdf) significant advancements, including multilingual training for superior foreign language performance, enhanced logical reasoning, and the ability to generate and debug code. It integrates seamlessly into services like Gmail. PaLM 2 can be fine-tuned for specific domains such as cybersecurity vulnerability detection (Sec-PaLM) and medical query responses (Med-PaLM).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
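As a rough sketch (not from the lesson), once you have API access, a text generation call through the `google-generativeai` Python package looked roughly like this at the time; the package interface and the model name (`models/text-bison-001`) are assumptions that may have changed.

```python
# Sketch: text generation with the PaLM API via the google-generativeai package.
import os
import google.generativeai as palm

palm.configure(api_key=os.environ['PALM_API_KEY'])

completion = palm.generate_text(
    model='models/text-bison-001',   # assumed PaLM 2 text model name of the period
    prompt='Briefly explain what a text embedding is.',
    max_output_tokens=128,
)
print(completion.result)
```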
51,The Most Popular Proprietary LLMs,"# The Most Popular Proprietary LLMs
## Conclusion
The choice between proprietary and open-source AI models depends on the specific needs and resources of the user or organization, and the decision should be based on a careful evaluation of all factors. The next lesson will cover the most popular open-source LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953602-the-most-popular-proprietary-llms
52,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## Introduction
In this lesson, we will share some tips for training LLMs at scale, focusing on the Zero Redundancy Optimizer (ZeRO) and its implementation in DeepSpeed. We explore how ZeRO optimizes memory and computational resources, its various stages of operation, and the benefits of DeepSpeed. We also touch on the Hugging Face Accelerate library. Finally, we will discuss the importance of maintaining a logbook of training runs to manage potential challenges and instabilities during the training process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
53,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## The Zero Redundancy Optimizer (ZeRO)
Training Large Language Models can be a formidable task due to the immense computational and memory requirements. However, the introduction of the [Zero Redundancy Optimizer (ZeRO)](https://arxiv.org/abs/1910.02054), implemented in DeepSpeed, has made it possible to train these models with lower hardware requirements. ZeRO is a parallelized optimizer that drastically reduces the resources required for model and data parallelism while significantly increasing the number of parameters that can be trained. ZeRO is designed to make the most of data parallelism's computational and memory resources, reducing the memory and compute requirements of each device (GPU) used for model training. It achieves this by distributing the various model training states (weights, gradients, and optimizer states) across the available devices (GPUs and CPUs) in the distributed training hardware. As long as the aggregated device memory is large enough to share the model states, ZeRO-powered data parallelism can accommodate models of any size.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
54,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## The Stages of ZeRO
ZeRO operates in three main optimization stages, where the enhancements in earlier stages are available in the later stages. The stages are partitioning optimizer states, gradients, and parameters. - **Stage 1 - Optimizer State Partitioning**: Shards optimizer states across data parallel workers/GPUs. This results in a 4x memory reduction, with the same communication volume as data parallelism. For example, this stage can be used to train a 1.5 billion parameter GPT-2 model on eight V100 GPUs. - **Stage 2 - Gradient Partitioning**: Shards optimizer states and gradients across data parallel workers/GPUs. This leads to an 8x memory reduction, with the same communication volume as data parallelism. For example, this stage can be used to train a 10 billion parameter GPT-2 model on 32 V100 GPUs. - **Stage 3 - Parameter Partitioning**: Shards optimizer states, gradients, and model parameters across data parallel workers/GPUs. This results in a linear memory reduction with the data parallelism degree. ZeRO can train a trillion-parameter model on about 512 NVIDIA GPUs with all three stages. - **Stage 3 Extra - Offloading to CPU and NVMe memory**: In addition to these stages, ZeRO-3 includes the infinity offload engine to form [ZeRO-Infinity,](https://arxiv.org/abs/2104.07857) which can offload to both CPU and [NVMe memory](https://www.pogolinux.com/blog/why-leverage-nvme-ssds-on-premise-artificial-intelligence-machine-learning/) for significant memory savings. This technique allows you to train even larger models that wouldn't fit into GPU memory. It offloads optimizer states, gradients, and parameters to the CPU, allowing you to train models with billions of parameters on a single GPU.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
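To make the stages concrete, here is a hedged sketch of a DeepSpeed configuration that selects ZeRO stage 3 with CPU offloading and plugs it into the Hugging Face `Trainer` through `TrainingArguments`; the specific values are illustrative assumptions rather than tuned settings.

```python
# Sketch: choosing a ZeRO stage (here stage 3 with CPU offload) for a Hugging Face Trainer run.
from transformers import TrainingArguments

ds_config = {
    'bf16': {'enabled': True},
    'zero_optimization': {
        'stage': 3,                              # 1, 2, or 3, as described above
        'offload_optimizer': {'device': 'cpu'},  # ZeRO-Infinity-style offloading
        'offload_param': {'device': 'cpu'},
    },
    'train_micro_batch_size_per_gpu': 'auto',
    'gradient_accumulation_steps': 'auto',
}

training_args = TrainingArguments(
    output_dir='./zero3-run',
    per_device_train_batch_size=4,
    bf16=True,
    deepspeed=ds_config,  # accepts a dict or a path to a JSON config file
)
```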
55,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## DeepSpeed
DeepSpeed is a high-performance library for accelerating distributed deep learning training. It incorporates ZeRO and other state-of-the-art training techniques, such as distributed training, mixed precision, and checkpointing, through lightweight APIs compatible with PyTorch. DeepSpeed excels in four key areas: 1. **Scale**: DeepSpeed's ZeRO stage one provides system support to run models up to 100 billion parameters, which is 10 times larger than the current state-of-the-art large models. 2. **Speed**: DeepSpeed combines ZeRO-powered data parallelism with model parallelism to achieve up to five times higher throughput over the state-of-the-art across various hardware. 3. **Cost**: The improved throughput translates to significantly reduced training costs. For instance, to train a model with 20 billion parameters, DeepSpeed requires three times fewer resources. 4. **Usability**: Only a few lines of code changes are needed to enable a PyTorch model to use DeepSpeed and ZeRO. DeepSpeed does not require a code redesign or model refactoring, and it does not put limitations on model dimensions, batch size, or any other training parameters.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
56,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## Accelerate and DeepSpeed ZeRO
The [Hugging Face Accelerate](https://huggingface.co./docs/accelerate/index) library allows you to leverage DeepSpeed's ZeRO features by making very few code changes. By using Accelerate and DeepSpeed ZeRO, we can significantly increase the maximum batch size that our hardware can handle without running into OOM errors.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
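To illustrate the few code changes involved, here is a minimal sketch of a plain PyTorch loop adapted to Accelerate, using a toy model instead of an LLM; the DeepSpeed/ZeRO settings themselves would be supplied separately (for example via `accelerate config`).

```python
# Sketch: the handful of changes Accelerate asks for in an ordinary PyTorch training loop.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the configured backend (DDP, DeepSpeed ZeRO, ...)

model = nn.Linear(16, 2)     # toy model standing in for a large language model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
loss_fn = nn.CrossEntropyLoss()

# Accelerate wraps the objects for whichever distributed backend is configured.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```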
57,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## Logbook of Training Runs
Despite these libraries, there are still unexpected obstacles in training runs. This is because there may be instabilities during training that are hard to recover from, such as spikes in the loss function. For example, here’s a [logbook](https://github.com/huggingface/m4-logs/blob/master/memos/README.md) kept by Hugging Face while training a reproduction of [Flamingo (by Google DeepMind)](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model), an 80B-parameter vision and language model. In the following image, the second chart shows the loss function of the final model as the training progresses. Some of these spikes rapidly recovered to the original loss level, while others diverged and never recovered. To stabilize and continue the training, the authors usually applied a rollback, i.e., a re-start from a checkpoint a few hundred steps prior to the spike/divergence, sometimes with a decrease in the learning rate (shown in the first chart of the image).

![Training loss and learning rate across the run, with spikes and rollbacks.](Going%20at%20Scale%20with%20LLM%20Training%20a64bc649f30d4bbb8451cff9d2fef3bf/training_loss_without_rollbacks.png)

Image from [https://github.com/huggingface/m4-logs/blob/master/memos/README.md](https://github.com/huggingface/m4-logs/blob/master/memos/README.md).

Other times, the model may get stuck in a local optimum, requiring further rollbacks. Sometimes, memory errors may require manual inspection. Have a look at this [114-page logbook](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf) made by Meta during the training of the OPT 175B model.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
58,Going at Scale with LLM Training,"# Going at Scale with LLM Training
## Conclusion
This lesson covered a few tips for training Large Language Models at scale, focusing on the Zero Redundancy Optimizer (ZeRO) and its implementation in DeepSpeed. We won’t cover these topics in more detail in the course, so if you want to dive deeper into them, you can read the resources linked on this page. We learned how ZeRO optimizes memory and computational resources across different stages, enabling the training of models with billions of parameters. We also explored DeepSpeed, a high-performance library that incorporates ZeRO and other state-of-the-art training techniques, providing scalability, speed, cost-effectiveness, and usability. We touched on the Hugging Face Accelerate library, which simplifies the application of DeepSpeed's ZeRO features. Lastly, we highlighted the importance of maintaining a logbook of training runs to manage potential challenges and instabilities during the training process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954387-going-at-scale-with-llm-training
59,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## Introduction
In this lesson, we will explore the diverse applications and use cases of LLMs and generative AI across various industries. We dive into how LLMs are revolutionizing healthcare and medical research by improving diagnosis, drug discovery, and patient care. Additionally, we will uncover their impact on finance, copywriting, education, programming, and the legal industry. While LLMs offer immense potential, we will also address the risks and ethical considerations associated with their deployment in real-world scenarios, emphasizing the importance of responsible AI implementation and human oversight.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
60,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## **Healthcare and Medical Research**
Generative AI offers promising applications that can enhance patient care, drug discovery, and operational efficiency in the industry. Generative AI is being utilized for diagnosis, patient monitoring, and resource optimization. By incorporating large language models into digital pathology, accuracy for detecting diseases such as cancer has improved significantly. Furthermore, the technology aids in automating administrative tasks, which streamlines workflows and allows clinical staff to focus on more critical aspects of patient care. In the pharmaceutical industry, generative AI has become a game-changer in drug discovery. It accelerates the process and improves precision in medicine therapies, leading to shorter drug development timelines and reduced costs. This advancement paves the way for more personalized treatments and targeted therapies, ultimately benefiting patients. Medtech companies are exploring the potential of generative AI to create personalized devices for patient-centered care. Integrating generative AI into the design process optimizes medical devices for specific patient needs, improving treatment outcomes and increasing patient satisfaction. For example, [Med-PaLM](https://sites.research.google/med-palm/) is a large language model designed by Google to provide high quality answers to medical questions. It functions as a multimodal generative model capable of processing diverse biomedical data such as clinical text, medical images, and genomics, all using the same set of model parameters. Another example is [BioMedLM](https://www.mosaicml.com/blog/introducing-pubmed-gpt), a domain-specific LLM for biomedical text, made by the Stanford Center for Research on Foundation Models (CRFM) and MosaicML.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
61,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## **Finance**
LLMs like GPT have proven to be powerful tools for analyzing and processing financial data, revolutionizing how financial institutions interact with their clients and manage risks. One of the key applications of LLMs in finance is in customer interactions with digital platforms, where models can be utilized to enhance user experience through chatbots or AI-based apps. These applications enable seamless and efficient customer support, providing real-time responses to queries and concerns. The analysis of financial time-series data is another area where LLMs and generative AI have proven worthy. By leveraging large datasets of stock exchange information, these models can offer valuable insights for macroeconomic analysis and stock exchange prediction. Predicting market trends and identifying potential investment opportunities are crucial for making informed financial decisions. LLMs play a significant role in this aspect. For example, Bloomberg trained an LLM on a mix of general purpose and domain specific documents, calling it [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/). BloombergGPT outperforms similarly-sized open models on financial NLP tasks, without sacrificing performance on general LLM benchmarks.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
62,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## Copywriting
Large Language Models and generative AI are influencing the field of copywriting by providing powerful tools for creating content. The applications of generative AI in copywriting are diverse. It can be utilized to speed up the writing process, overcome writer's block, and reduce costs by improving overall productivity. Additionally, generative AI helps maintain a consistent brand image by learning a company's language patterns and style, ensuring cohesive marketing activities. Some prominent use cases include generating website content and blog posts, crafting social media posts, creating product descriptions, and optimizing content for SEO. Generative AI can also contribute to developing content for mobile apps, tailoring it to suit different platforms and user experiences. A popular copywriting tool that uses LLMs is [Jasper](https://www.jasper.ai/), which makes it easy to generate diverse kinds of content using generative AI.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
63,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## **Education**
LLMs can play a significant role in online learning and personalized tutoring. By analyzing individual learning progress, they can offer personal feedback, adaptive testing, and tailored interventions. These models can address the challenges of teacher shortages by providing scalable solutions such as virtual teachers or by supporting para-teachers with advanced tools, empowering educators to become mentors and guides who offer personalized support and interactive learning experiences. For example, one application of LLMs in education is [Khanmigo](https://support.khanacademy.org/hc/en-us/articles/13888935335309-How-do-the-Large-Language-Models-powering-Khanmigo-work-) by [Khan Academy](https://www.khanacademy.org/): LLMs serve as virtual tutors, offering explanations and examples for better subject understanding, and they aid language learning by generating sentences for grammar and vocabulary practice.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
64,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## Programming
LLMs and generative AI can significantly help in coding by providing powerful tools for developers. LLMs like GPT-4 and its predecessors can generate code snippets based on natural language prompts, significantly enhancing programmers' efficiency. These models are trained on vast corpora of code samples and can understand context, enabling them to generate more relevant and accurate code over time. The applications of LLMs for coding are diverse. They can assist in code completion by suggesting code snippets as developers type, saving time and reducing errors. Additionally, LLMs are employed for unit test generation, automating the creation of test cases. This not only enhances code quality but also assists in software maintenance. However, the use of generative AI in coding also presents challenges. While it can boost productivity, developers must exercise caution and review the generated code, as it may contain errors or security vulnerabilities. Furthermore, the potential for model biases and ""hallucinations"" (fabricating incorrect information) necessitates careful scrutiny. A popular product using LLMs for programming is [GitHub Copilot](https://github.com/features/copilot), which is trained on billions of lines of code. Copilot can turn natural language prompts into coding suggestions across dozens of languages.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
65,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## **Legal Industry**
LLMs and generative AI have emerged as powerful tools for the legal industry, offering a range of applications and use cases. These models can be designed to handle the complexities of legal language, interpretations, and the dynamic nature of law. LLMs have the potential to assist legal practitioners in various tasks, such as providing legal advice, understanding complex legal documents, and analyzing court case texts. A key area of progress is reducing hallucinations, a common challenge with early legal LLMs: by integrating domain-specific knowledge through reference modules and reliable data from knowledge bases, these models can produce more accurate and reliable results. They can also identify legal feature words within users' input and quickly analyze legal situations.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
66,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## **Risks and Ethical Considerations of Using LLMs in the Real World**
As we learned in previous lessons, deploying Large Language Models (LLMs) in real-world applications poses certain risks and ethical concerns. One significant risk is ""**hallucinations**,"" where the LLM generates false but plausible-sounding information. This could lead to serious consequences, particularly in critical domains like healthcare, finance, and law. Another concern is ""**bias,**"" as LLMs can inadvertently perpetuate societal biases present in their training data. This could result in unfair treatment in areas such as healthcare and finance. Addressing bias requires rigorous data evaluation, inclusivity efforts, and continuous improvement in fairness. **Data privacy and security** are crucial as LLMs might memorize sensitive information, potentially leading to privacy breaches. Organizations must implement measures like data anonymization and strict access controls. Additionally, the impact on employment should be considered, balancing automation and human involvement to preserve human expertise. Dependence on LLMs without human judgment can be risky, necessitating a responsible approach that combines AI benefits with human oversight.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
67,Applications and Use-Cases of LLMs,"# Applications and Use-Cases of LLMs
## Conclusion
This lesson explored the wide-ranging applications and use cases of LLMs and generative AI across diverse industries. From healthcare and medical research to finance, copywriting, education, programming, and the legal industry, LLMs are powerful tools with immense potential. However, alongside their benefits, we must be mindful of the risks and ethical considerations associated with their deployment. Addressing issues like hallucinations, bias, data privacy, and the impact on human employment is crucial for responsible AI implementation. In the next module, we will study Transformer architectures, the backbone of LLMs, to gain insights into their working principles and understand how these models process and generate language with great accuracy and efficiency.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953774-applications-and-use-cases-of-llms
68,When to Train an LLM from Scratch,"# When to Train an LLM from Scratch
## Introduction
The increasing popularity of LLMs has led businesses to integrate them for task handling and employee productivity enhancement. There are several ways to use LLMs in daily activities, such as incorporating proprietary models via APIs, deploying pre-trained open-source options, or developing one's own language model. Of course, the trade-offs are between quality, costs, and ease of use. In this lesson, we will discuss different approaches and what might be the best solution for your use case.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
69,When to Train an LLM from Scratch,"# When to Train an LLM from Scratch
## Few-Shot (In-Context) Learning
Up to 2020, language models were already good at picking up patterns from the data. However, teaching them new knowledge from a different domain was difficult. The only solution was to finetune them by adjusting the weights. **What are the characteristics that set LLMs apart from the previous language models?** Few-shot learning (also called In-Context learning) enables the LLMs to learn from the examples provided to them. For instance, it is possible to show a couple of examples of JSON-formatted responses to receive the model’s output in JSON format. It means that the models can learn from examples and follow directions without changing weights or repeating the training process. There are multiple use cases where this approach could be the best option. The model can adapt to a writing style, set specific formatting guidelines, or provide additional context for answering questions. LLMs are able to answer questions using external knowledge bases through in-context learning. Let’s think about how we could create a Q&A chatbot leveraging an LLM. The LLM has a cut-off training date, so it can’t access the information or events after that date. Also, they tend to hallucinate, which refers to generating non-factual responses based on their limited knowledge. As a solution, it is possible to provide additional context to the LLM through the Internet (e.g., Google search) or retrieve it from a database and include it in the prompt so that the model can leverage it to generate the correct response. It is like taking an open-book exam! The beauty of this approach is that the model does not need domain-specific knowledge. Instead, it can extract information or patterns from the provided context. Creating applications, such as chatbots, becomes more accessible and faster. Whether you are utilizing proprietary APIs or open-source models, this approach offers a budget-friendly solution for many use cases.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
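To make the JSON-format example above concrete, here is a minimal sketch of a few-shot prompt that teaches an LLM the output format purely through in-context examples; the review texts and field names are invented for illustration:

```python
# Few-shot (in-context) learning: the output format is taught through examples
# embedded in the prompt; no model weights are changed.
few_shot_prompt = """Extract the product and the sentiment as JSON.

Review: "The headphones broke after two days."
Answer: {"product": "headphones", "sentiment": "negative"}

Review: "This blender is fantastic and easy to clean."
Answer: {"product": "blender", "sentiment": "positive"}

Review: "The keyboard feels cheap but works fine."
Answer:"""

# Send `few_shot_prompt` to any LLM (a proprietary API or an open-source model);
# it is expected to continue the pattern with a JSON object in the same format.
# For the "open-book exam" use case, retrieved documents can be prepended to the
# prompt in the same way, giving the model the context it needs to answer.
print(few_shot_prompt)
```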
70,When to Train an LLM from Scratch,"# Fine-Tuning
The fine-tuning method proves valuable when adapting the model to a more complex use case. For tasks like classification or summarization, this technique improves the model's understanding by providing more examples and adjusting the weights based on its errors. There are different approaches to doing this: we can either adjust all the weights with a small learning rate so that the model's current abilities are only minimally affected, or, with a more recent family of techniques, freeze the network and introduce a small set of new weights that are trained instead. The latter approach (like LoRA) is a great alternative for fine-tuning models with hundreds of billions of parameters since we deal with a much smaller number of trainable parameters (roughly 100x fewer); a minimal sketch is shown below. The fine-tuning approach is an excellent option for creating a model with task-specific knowledge and building on top of the available powerful LLMs. However, before considering this option, it is essential to acknowledge the associated costs and resource requirements.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
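The following is a rough illustration of the "freeze and add new weights" idea, assuming the Hugging Face `transformers` and `peft` libraries; the small open-source model `facebook/opt-350m` is chosen purely as an example:

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    r=8,             # rank of the low-rank update matrices
    lora_alpha=16,   # scaling factor for the LoRA updates
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# The base weights stay frozen; only the small LoRA matrices are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

From here, the wrapped model can be trained with a standard training loop, while the full base model remains untouched.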
71,When to Train an LLM from Scratch,"# Training
Lastly, let's talk about training your own model from scratch! Among the approaches mentioned earlier, this option stands out as the most demanding and challenging. Of course, the scale of requirements depends on the model size. However, acquiring several million data points, such as web pages, books, and articles, not to mention the task-specific documents held by your organization (if you want to train a domain-specific LLM), is essential. Furthermore, completing the training process could cost upwards of several hundred thousand dollars. The training costs of these models are rarely revealed by the organizations that publish them. Nevertheless, based on the hardware utilized, estimates put the training expenses for the GPT-3 model at approximately $4.6 million. However, the more critical aspect of training from scratch is curating the dataset. While the intention is to train a domain-specific model, the training loop that processes vast quantities of general documents, such as web pages, articles, and books, empowers LLMs' language understanding capabilities. Therefore, to create a model that excels in a specific domain, it is essential to have a sizable dataset comprising top-quality samples from that particular domain. An example of this approach is the [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) 50B model, which is specifically designed for the finance industry. They used a dataset of 708 billion tokens for training, consisting of 51.2% (363 billion tokens) domain-specific resources and the rest general resources. Training a model from scratch demands substantial resources, including hardware, large datasets, and in-house expertise to train and maintain these models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
72,When to Train an LLM from Scratch,"# Training
## Main Takeaways
- **Few-Shot Learning**: LLMs are able to learn from the examples given to them, allowing them to handle more complicated tasks without the need for training or fine-tuning. This method is significantly less expensive than other options, as it only requires the cost of adding examples to each prompt. If your task can be solved with few-shot learning alone, then it’s always the most efficient approach. - **Fine-Tuning**: If few-shot learning is not effective for your task, an alternative method is fine-tuning. This involves using some data points to create a task-specific model. Although finetuning is less suited to injecting entirely new knowledge, it is effective for adapting to different styles and tones or for incorporating new vocabulary. - **Training From Scratch**: If fine-tuning is not effective, consider training a model from scratch with domain-specific data. However, this requires significant resources in terms of cost, dataset availability, and expertise.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
73,When to Train an LLM from Scratch,"# Training
## Conclusion
We have explored various methods to harness the capabilities of large language models within your organization and highlighted the advantages and disadvantages of each approach. Picking the best approach depends on your organization’s use case and the resources at hand. This course aims to equip you with the necessary knowledge to make informed decisions about which approach best suits your needs. It will also guide you in maximizing the benefits of large language models and mastering the processes involved.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954270-when-to-train-an-llm-from-scratch
74,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## Introduction
In this lesson, we will review the next challenges in large language model research, covering various facets such as model performance, data and training, language and tokenization, hardware and infrastructure, usability and application, and learning and preferences. We will explore pressing issues such as mitigating hallucinations, optimizing context, managing massive datasets, improving tokenization, and developing alternatives to GPUs. We will also discuss the need to make agents usable, detect LLM-generated text, and improve learning from human preferences.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
75,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Model Performance and Efficiency**
- **Mitigating and Measuring Hallucinations**: One of the significant challenges in LLM research is hallucinations. This phenomenon occurs when an AI model generates information that isn't based on the input data, essentially making things up. While this can be beneficial for creative applications, it is generally considered a drawback for most use cases. The challenge lies in reducing hallucinations and in developing metrics that measure them accurately. - **Optimizing Context Length and Construction**: Context plays a crucial role in the performance of LLMs. The challenge here is to optimize the context length and how it is constructed. This is particularly important for applications like [Retrieval Augmented Generation (RAG)](https://huggingface.co./docs/transformers/model_doc/rag), where the model's response quality depends on the amount and efficiency of the context it can use. - **Making LLMs Faster and Cheaper**: With the advent of models like GPT-3.5, concerns about latency and cost have become more prominent. The challenge lies in developing models that offer similar performance but with a smaller memory footprint and lower costs. Faster inference is especially important for real-time applications like online customer service assistants. - **Designing New Model Architectures**: The Transformer architecture has been dominant in the field since 2017. However, the need for a new model architecture that can outperform the Transformer is becoming increasingly apparent. The challenge is to develop an architecture that performs well on current hardware and scales to meet modern requirements. - **Addressing High Inference Latency**: LLMs often exhibit high inference latencies due to low parallelizability and large memory footprints. The task at hand is to develop models and techniques that can reduce this latency, making LLMs more efficient and practical for real-time applications. - **Overcoming Tasks Not Solvable By Scale**: The rapid advancements in LLM capabilities have led to astonishing improvements in performance. However, some tasks seem resistant to further scaling of data or model sizes. The existence of such tasks is speculative, but their potential presence poses a significant challenge. The research community needs to identify these tasks and devise strategies to overcome them, pushing the boundaries of what LLMs can achieve. Read about the [Inverse Scaling Prize](https://www.lesswrong.com/posts/DARiTSTx5xDLQGrrz/inverse-scaling-prize-second-round-winners) competition to learn more about this.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
76,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Data and Training**
- **Incorporating Other Data Modalities**: The ability to incorporate other data modalities into LLMs is another significant research direction. Multimodality, the ability to understand and process different types of data, can enhance the performance of LLMs and extend their applicability to various industries. - **Understanding and Managing Huge Datasets**: The sheer size of modern pre-training datasets makes it nearly impossible for individuals to read or conduct quality assessments on all the documents. This lack of clarity about the data on which the model has been trained poses a significant challenge. Researchers need to devise strategies to comprehend these vast datasets better and ensure the quality of the data used for training. - **Reducing High Pre-Training Costs**: Training a single LLM can require substantial computational resources, translating into high costs and significant energy consumption. The challenge here is to find ways to reduce these pre-training costs without compromising the performance of the model. This could involve optimizing the training process or developing more efficient model architectures.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
77,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Language and Tokenization**
- **Building LLMs for Non-English Languages**: There is a pressing need to develop LLMs for non-English languages. This complex challenge involves dealing with low-resource languages and ensuring that the models are practical and efficient. - **Overcoming Tokenizer-Reliance**: Tokenization, the process of breaking down text into smaller units, is crucial for feeding data into the model. However, this necessity comes with drawbacks, such as computational overhead, language dependence, handling of novel words, fixed vocabulary size, information loss, and low human interpretability. The challenge lies in developing more effective tokenization methods or alternatives that can mitigate these issues. - **Improving Tokenization for Multilingual Settings**: Tokenization schemes that work well in a multilingual setting, particularly with non-space-separated languages such as Chinese or Japanese, are still lacking. The challenge is to improve these schemes to ensure fair and efficient tokenization across all languages.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
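A quick way to see why tokenization matters across languages is to run the same sentences through a common tokenizer; this sketch assumes the Hugging Face `transformers` library and the GPT-2 BPE tokenizer:

```python
# Illustration of tokenizer reliance: the same meaning can cost very
# different numbers of tokens depending on the language.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for text in ["Hello, world!", "Bonjour le monde!", "こんにちは世界"]:
    tokens = tokenizer.tokenize(text)
    print(f"{text!r} -> {len(tokens)} tokens: {tokens}")

# Non-English text (especially non-space-separated languages) often fragments
# into many more, less interpretable tokens, which increases cost and hurts
# multilingual performance.
```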
78,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Hardware and Infrastructure**
- **Developing Alternatives to GPUs**: GPUs have been the primary hardware for deep learning for nearly a decade. However, there is a growing need for alternatives that can offer better performance or efficiency. This includes exploring technologies like quantum computing and photonic chips. GPU availability in the global market is also currently limited, so viable alternatives would make this problem more manageable.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
79,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Usability and Application**
- **Making Agents Usable**: Agents are LLMs that can perform actions like browsing the internet or sending emails. The challenge here is to make these agents reliable and performant enough to be trusted with these tasks. Examples of agent frameworks are [LangChain](https://python.langchain.com/docs/get_started/introduction) and [LlamaIndex](https://www.llamaindex.ai/). - **Detecting LLM-generated Text**: As LLMs become more sophisticated, distinguishing between human-written and LLM-generated text becomes increasingly challenging. This detection is crucial for various reasons, such as preventing the spread of misinformation, plagiarism, impersonation, automated scams, and the inclusion of inferior generated text in future models' training data. The challenge lies in developing robust detection mechanisms that can keep up with the improving fluency of LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
80,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## **Learning and Preferences**
- **Improving Learning from Human Preference**: Reinforcement Learning from Human Feedback (RLHF) is a promising approach but has its challenges. These include defining and mathematically representing human preferences and dealing with the diversity of human preferences.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
81,Next Challenges in LLM Research,"# Next Challenges in LLM Research
## Conclusion
In this lesson, we explored the challenges in large language models research. We've examined the need for improved model performance and efficiency, including mitigating hallucinations, optimizing context, and designing new model architectures. We've also discussed the complexities of managing vast datasets and the importance of incorporating other data modalities. We highlighted the necessity for better tokenization methods, especially for non-English and non-space-separated languages. We have also underscored the urgency of developing alternatives to GPUs and the need to make LLM agents more reliable. Lastly, we've touched upon the challenge of detecting LLM-generated text and the intricacies of learning from human preference. Each of these challenges presents an exciting opportunity for researchers to push the boundaries of what LLMs can achieve, making them more efficient, inclusive, and beneficial for a wide array of applications.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959932-next-challenges-in-llm-research
82,Deep Dive into RLHF,"# Deep Dive into RLHF
## Introduction
In this lesson, we will dive deeper into Reinforcement Learning from Human Feedback (RLHF), a method that combines human feedback and reinforcement learning to enhance the alignment and efficiency of Large Language Models. We explore the RLHF training process, compare it with Supervised Fine-Tuning (SFT), and discuss its alternatives, such as Direct Preference Optimization (DPO) and Reinforced Self-Training (ReST). By the end of this lesson, you'll have a comprehensive understanding of how RLHF and its alternatives are used to improve the performance and safety of LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
83,Deep Dive into RLHF,"# Deep Dive into RLHF
## Understanding RLHF
[Reinforcement Learning from Human Feedback (RLHF)](https://arxiv.org/abs/1909.08593) is a method that integrates human feedback and reinforcement learning into LLMs, enhancing their alignment with human objectives and improving their efficiency. RLHF has shown significant promise in making LLMs safer and more helpful. It was first used to create [InstructGPT](https://openai.com/research/instruction-following), a version of GPT-3 finetuned to follow instructions, and it is now used in the latest OpenAI models, such as ChatGPT (GPT-3.5-turbo) and GPT-4. RLHF leverages human-curated rankings that act as a signal to the model, directing it to favor specific outputs over others, thereby encouraging the production of more reliable, secure responses that align with human expectations. All of this is done with the help of a reinforcement learning algorithm, namely [PPO](https://openai.com/research/openai-baselines-ppo), that optimizes the underlying LLM by leveraging the human-curated rankings.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
84,Deep Dive into RLHF,"# Deep Dive into RLHF
## RLHF Training Process
RLHF can be useful in guiding LLMs to generate appropriate texts by treating text generation as a reinforcement learning problem. In this approach, the language model serves as the RL agent, the possible language outputs represent the action space, and the reward is based on how well the LLM's response aligns with the context of the application and the user's intent. RLHF must be done on an already pretrained LLM: a language model must be trained in advance on a large corpus of text data collected from the internet. The RLHF training process can then be broken down into the following steps. - **(Optional) Finetune the LLM to follow instructions**: This is an optional step, but some sources recommend fine-tuning the raw LLM on instruction-following in advance, using a specialized dataset for it. This step should make the subsequent RL finetuning of RLHF converge faster. - **RLHF dataset creation**: The LLM is used to generate a lot of text completions from a set of instructions. For each instruction, we collect multiple completions from the model. - **Collecting human feedback**: Human labelers then rank the generated completions for the same instruction from best to worst. Humans can be asked to rank the completions taking into account several aspects, such as completeness, relevancy, accuracy, toxicity, bias, etc. It’s possible to convert these ranks into scores that we can assign to the text completions in our dataset, where a high score means that the completion is good. - **Training a Reward Model**: The RLHF dataset is used to train a reward model, i.e., a model that, when provided with an instruction and a text completion, assigns a score to the completion. In this context, a high score indicates that the completion is good. The reward model does a very similar job to what the human labelers did on the dataset. The reward model is expected to learn, from the RLHF dataset, how to assign scores according to all the aspects taken into account during the labeling process (completeness, relevancy, accuracy, toxicity, bias, etc.). - **Fine-tuning the Language Model with Reinforcement Learning and the Reward Model**: Starting from a random instruction, our pretrained LLM generates multiple completions. These completions are then assigned scores by the reward model, and these scores are utilized by a reinforcement learning algorithm (PPO) to update the parameters of the LLM. This process aims to make the LLM more likely to produce completions with higher scores. To prevent the LLM from forgetting helpful information during fine-tuning, the RLHF fine-tuning process also aims to maintain a small Kullback-Leibler (KL) divergence between the fine-tuned LLM and the original LLM. This ensures that the token distribution predicted by the fine-tuned model remains relatively consistent with that of the original. After repeating this process for several iterations, we will have our final, aligned LLM. ![Visual illustration of RLHF. Image from [https://openai.com/research/instruction-following](https://openai.com/research/instruction-following).](Deep%20Dive%20into%20RLHF%209551cbead6e041258a32eb7deac5989a/Screenshot_2023-08-28_at_16.05.19.png) Visual illustration of RLHF. Image from [https://openai.com/research/instruction-following](https://openai.com/research/instruction-following).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
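To make the last step more concrete, here is a minimal sketch (in PyTorch, with invented variable names and an assumed KL coefficient) of how the per-token reward is often shaped in PPO-based RLHF implementations: the reward model's scalar score is credited at the end of the completion, while a per-token KL penalty keeps the fine-tuned policy close to the original LLM.

```python
import torch

def rlhf_token_rewards(reward_model_score: float,
                       policy_logprobs: torch.Tensor,  # per-token log-probs under the fine-tuned policy
                       ref_logprobs: torch.Tensor,     # per-token log-probs under the frozen original LLM
                       kl_coef: float = 0.1) -> torch.Tensor:
    """Sketch of the reward shaping commonly used by PPO-based RLHF pipelines."""
    # Per-token KL penalty: discourages drifting away from the original LLM
    kl_penalty = kl_coef * (policy_logprobs - ref_logprobs)
    rewards = -kl_penalty
    # The reward model's scalar score is added at the final generated token
    rewards[-1] += reward_model_score
    return rewards

# Example with dummy values for a 3-token completion
rewards = rlhf_token_rewards(0.8,
                             torch.tensor([-1.2, -0.7, -2.0]),
                             torch.tensor([-1.0, -0.9, -1.8]))
print(rewards)
```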
85,Deep Dive into RLHF,"# Deep Dive into RLHF
## RLHF vs SFT
As seen in the previous lessons, aligning an LLM to follow instructions and human values is possible with simple SFT (with or without LoRA) on a high-quality dataset ([see the LIMA paper](https://arxiv.org/abs/2305.11206)). So, what’s the tradeoff between RLHF and SFT? In reality, it's still an open question. Empirically, it seems that RLHF can better teach the ""human alignment"" aspects of its dataset, provided the dataset is sufficiently large and of high quality. In contrast, however, it is more expensive and time-consuming. Reinforcement learning, in this context, is still quite unstable, meaning that the results are very sensitive to the initial model parameters and training hyperparameters. It often falls into local optima, and the loss diverges multiple times, necessitating multiple restarts. This makes it less straightforward than plain SFT with LoRA.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
86,Deep Dive into RLHF,"# Deep Dive into RLHF
## Alternatives to RLHF
Over time, several alternatives to RLHF have been researched. Here are the most popular of them. ### Direct Preference Optimization [Direct Preference Optimization (DPO)](https://arxiv.org/pdf/2305.18290.pdf) is a novel method for finetuning LLMs as an alternative to RLHF. Unlike RLHF, which requires complex reward functions and careful balance to ensure sensible text generation, DPO simplifies the process by directly optimizing the language model using a binary cross-entropy loss. It bypasses the need for a reward model and RL-based optimization. Instead, it directly optimizes the language model on preference data. This is accomplished through an analytical mapping from the reward function to the optimal RL policy, which allows the RL loss (usually defined over the reward and reference models) to be transformed into a loss over the policy itself, computed with the help of a frozen reference model. As a result, DPO potentially simplifies the fine-tuning process of LLMs by eliminating the need for complex RL techniques or a reward model. ![DPO optimizes for human preferences while avoiding reinforcement learning. Existing methods for fine-tuning language models with human feedback first fit a reward model to a dataset of prompts and human preferences over pairs of responses. Then, RL is used to find a policy that maximizes the learned reward. In contrast, DPO directly optimizes the policy to best satisfy the preferences with a simple classification objective, without an explicit reward function or RL. Image from [https://arxiv.org/pdf/2305.18290.pdf](https://arxiv.org/pdf/2305.18290.pdf).](Deep%20Dive%20into%20RLHF%209551cbead6e041258a32eb7deac5989a/Screenshot_2023-08-28_at_16.13.03.png) DPO optimizes for human preferences while avoiding reinforcement learning. Existing methods for fine-tuning language models with human feedback first fit a reward model to a dataset of prompts and human preferences over pairs of responses. Then, RL is used to find a policy that maximizes the learned reward. In contrast, DPO directly optimizes the policy to best satisfy the preferences with a simple classification objective, without an explicit reward function or RL. Image from [https://arxiv.org/pdf/2305.18290.pdf](https://arxiv.org/pdf/2305.18290.pdf). ### Reinforced Self-Training [Google DeepMind's Reinforced Self-Training (ReST)](https://arxiv.org/pdf/2308.08998.pdf) is a more cost-effective alternative to Reinforcement Learning from Human Feedback. The ReST algorithm operates in a cyclical manner, involving two main steps that are repeated iteratively. 1. The first step, referred to as the 'Grow' step, involves the use of an LLM to generate multiple output predictions for each context. These predictions are then used to augment a training dataset. 2. Following this, the 'Improve' step comes into play. In this phase, the augmented dataset is ranked and filtered using a reward model that has been trained based on human preferences. Subsequently, the LLM is fine-tuned on this filtered dataset using an offline reinforcement learning objective. The fine-tuned LLM is then used in the subsequent Grow step. ![ReST method. During the Grow step, a policy generates a dataset. The filtered dataset is used to fine-tune the policy in the Improve step. Both steps are repeated. The improvement step is repeated more frequently to amortize the dataset creation cost. Image from [https://arxiv.org/pdf/2308.08998.pdf](https://arxiv.org/pdf/2308.08998.pdf).](Deep%20Dive%20into%20RLHF%209551cbead6e041258a32eb7deac5989a/new_image.png) ReST method. 
During the Grow step, a policy generates a dataset. The filtered dataset is used to fine-tune the policy in the Improve step. Both steps are repeated. The improvement step is repeated more frequently to amortize the dataset creation cost. Image from [https://arxiv.org/pdf/2308.08998.pdf](https://arxiv.org/pdf/2308.08998.pdf). The ReST methodology offers several advantages over RLHF. - It significantly reduces the computational load compared to online reinforcement learning. This is achieved by leveraging the output of the Grow step across multiple Improve steps. - The quality of the policy is not limited by the quality of the original dataset, as is the case with offline reinforcement learning. This is because new training data is sampled from an improved policy during the Grow step. - Decoupling the Grow and Improve steps allows for easy inspection of data",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
87,Deep Dive into RLHF,"# Deep Dive into RLHF
## Alternatives to RLHF
quality and potential diagnosis of alignment issues, such as reward hacking. - The ReST approach is straightforward and stable and only requires tuning a small number of hyperparameters, making it a user-friendly and efficient tool in the machine learning toolkit. ### Reinforcement Learning from AI Feedback (RLAIF) Another innovative alternative to RLHF is [Reinforcement Learning from AI Feedback (RLAIF)](https://arxiv.org/abs/2212.08073). Developed by Anthropic, RLAIF aims to address some of the limitations of RLHF, particularly concerning the subjectivity and scalability of human feedback. In RLAIF, instead of relying on human feedback, an AI Feedback Model is used to provide feedback for training the AI assistant. This Feedback Model is guided by a constitution provided by humans, outlining the essential principles for the model's judgment. This approach allows for a more objective and scalable supervision technique, as it is not dependent on a small pool of human preferences. The RLAIF process begins with the creation of a dataset of ranked preferences generated automatically by the AI Feedback Model. This dataset is then used to train a Reward Model similar to RLHF. The Reward Model serves as the reward signal in a reinforcement learning schema for an LLM. ![A diagram depicting RLAIF (top) vs. RLHF (bottom). Image from [https://arxiv.org/pdf/2309.00267.pdf](https://arxiv.org/pdf/2309.00267.pdf).](Deep%20Dive%20into%20RLHF%209551cbead6e041258a32eb7deac5989a/Screenshot_2023-09-04_at_11.53.01.png) A diagram depicting RLAIF (top) vs. RLHF (bottom). Image from [https://arxiv.org/pdf/2309.00267.pdf](https://arxiv.org/pdf/2309.00267.pdf). RLAIF offers several advantages over RLHF. Firstly, it maintains the helpfulness of RLHF models while making improvements in terms of harmlessness. Secondly, it reduces subjectivity as the AI assistant's behavior is not solely dependent on a small pool of humans and their particular preferences. Lastly, RLAIF is significantly more scalable as a supervision technique, making it a promising alternative for the future development of safer and more efficient LLMs. [A recent paper from Google](https://arxiv.org/pdf/2309.00267.pdf) did more experiments with RLAIF and found that humans prefer both RLAIF and RLHF to standard SFT at almost equal rates, indicating that they could be alternatives.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
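Returning to DPO for a moment: to illustrate how much leaner its objective is than the full RLHF pipeline described above, here is a minimal sketch of the DPO loss from the paper, written in PyTorch with invented variable names; the log-probabilities are assumed to be summed over the tokens of each response.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO objective: binary cross-entropy over preference pairs."""
    # Log-ratios of the trainable policy vs. the frozen reference model
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # The preferred (chosen) response should outscore the rejected one
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Dummy batch of two preference pairs
loss = dpo_loss(torch.tensor([-12.0, -8.0]), torch.tensor([-15.0, -9.0]),
                torch.tensor([-13.0, -8.5]), torch.tensor([-14.0, -8.8]))
print(loss)
```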
88,Deep Dive into RLHF,"# Deep Dive into RLHF
## Conclusion
This lesson provided a more in-depth exploration of Reinforcement Learning from Human Feedback, a method that combines human feedback and reinforcement learning to enhance the performance and safety of Large Language Models. We covered the RLHF training process, highlighting its steps and how it leverages human-curated rankings and reinforcement learning to finetune the LLM. We also compared RLHF with Supervised Fine-Tuning (SFT), discussing the trade-offs between the two. Furthermore, we explored alternatives to RLHF, such as Direct Preference Optimization (DPO) and Reinforced Self-Training (ReST), which offer different approaches to fine-tuning LLMs. As we continue to refine these techniques, we move closer to our goal of creating LLMs that are more aligned with human values, efficient, and safer to use.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960051-deep-dive-into-rlhf
89,Understanding Transformers and GPTs Introduction,"# Understanding Transformers and GPTs Introduction
## Understanding Transformers and GPTs
Goals: Equip students with foundational theoretical knowledge of transformers and GPTs, ensuring a robust understanding beneficial for effective LLM training and utilization. This section comprehensively examines the inner workings and components surrounding Transformers and GPT. Participants begin with a detailed study of the Transformer architecture, understanding its foundational concepts and essential parts. The progression covers evaluations of LLMs, their control mechanisms, and the nuances of prompting, pretraining, and finetuning. Each lesson is designed to impart intricate details, practical knowledge, and a tiered understanding of these technologies. - **Understanding Transformers**: This lesson provides an in-depth look at Transformers, breaking down their complex components and the network's essential mechanics. We begin by examining the paper ""Attention is all you need.” We conclude by highlighting the use of these components in Hugging Face's transformers library. - **Transformers Architectures**: This chapter is a concise guide to Transformer architectures. We will first dissect the encoder-decoder framework, which is pivotal for sequence-to-sequence tasks. Next, we provide a high-level overview of the GPT model, known for its language generation capabilities. We also spotlight BERT, emphasizing its significance in understanding the context within textual data. - **Deep Dive on the GPT architecture**: This section explores the GPT architecture. We shed light on the structural specifics, the objective function, and the principles of causal modeling. This technical session is designed for individuals seeking an in-depth understanding of the intricate details and mathematical foundations of GPT. - **Evaluating LLM Performance**: This lesson explores the nuances of evaluating Large Language Model performance. We differentiate between objective functions and metrics and transition into perplexity, BLEU, and ROUGE metrics. We also provide an overview of popular benchmarks in the domain. - **Controlling LLM Outputs**: This lesson delves into decoding techniques like Greedy and Beam Search, followed by concepts such as Temperature and the use of stop sequences. We will also discuss the importance of these methods, with references to frameworks like ReAct. It also presents concepts like Frequency and Presence Penalties. - **Prompting and few-shot prompting**: This lesson provides an overview of how carefully crafted prompts can guide LLMs in tasks like answering questions and generating text. We will progress from zero-shot prompting, where LLMs operate without specific examples, to in-context and few-shot prompting, teaching the model to manage intricate tasks with sparse training data. - **Pretraining and Finetuning**: This module examines the foundational concepts of pretraining and finetuning in the context of Large Language Models. In subsequent chapters, we will discern the differences between pretraining, finetuning, and instruction tuning, setting the stage for deeper dives. While the lesson touches upon various types of instruction tuning, detailed exploration of specific methods like SFT and RLHF will be reserved for later sessions, ensuring a progressive understanding of the topic. After navigating the diverse terrain of Transformers and LLMs, participants now deeply understand significant architectures like GPT and BERT. The sessions shed light on model evaluation metrics, advanced control techniques for optimal outputs, and the roles of pretraining and finetuning. 
The upcoming module dives into the complexities of deciding when to train an LLM from scratch, the operational necessities of LLMs, and the sequential steps crucial for the training process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954165-understanding-transformers-and-gpts-introduction
90,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## Introduction
This lesson will explore how pretrained LLMs learn from vast amounts of text, becoming great at language tasks. Then, we'll discover the power of finetuning, a process that molds these models into specialized experts, enabling them to tackle complex tasks. We'll also cover instruction finetuning, where we guide the models with explicit instructions, making them versatile and responsive to our needs. This lesson introduces several fine-tuning techniques we will use to finetune models later in the course.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
91,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## **Pretraining LLMs**
Pretrained LLMs have catalyzed a paradigm shift in AI. These models are trained on massive text corpora sourced from the Internet, honing their linguistic knowledge by predicting the next word in a sentence. By training on billions of sentences, these models acquire an excellent grasp of grammar, context, and semantics, enabling them to capture the nuances of language effectively. Aside from being good at generating text, pretrained LLMs are also good at other tasks, as was found in 2020 with the GPT-3 paper “[Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165).” The paper showed that big enough LLMs are “few-shot learners”; that is, they are able to perform tasks other than text generation with the help of just a few examples of that task (hence the name “few-shot learners”). With those examples, the LLM is able to understand the logic behind what the user wants. This was a huge step forward in a field where each NLP task previously required its own specialized model. Now, a single model can handle several of them and do them well.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
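To ground the idea of "predicting the next word," here is a minimal sketch of the causal language modeling objective using the Hugging Face `transformers` library, with GPT-2 as a small stand-in for a modern LLM:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

batch = tokenizer("Large language models learn by predicting the next word",
                  return_tensors="pt")

# For causal language modeling, the labels are the input ids themselves; the
# model shifts them internally so that each position predicts the next token.
outputs = model(**batch, labels=batch["input_ids"])
print(outputs.loss)  # cross-entropy over next-token predictions
```

Pretraining repeats this next-token prediction over billions of sentences, which is where the broad language understanding comes from.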
92,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## **The Power of Finetuning**
Finetuning complements pretraining for specialized tasks. While pretrained LLMs are undeniably impressive, their true potential is unlocked through finetuning. Although pretrained models possess a deep understanding of language, they require further adaptation to excel in complex tasks. For example, if the task is to answer questions about medical texts, the model would be finetuned on a dataset of medical question-answer pairs. Finetuning helps these models become specialized. Finetuning exposes pretrained models to task-specific datasets, enabling them to recalibrate internal parameters and representations to align with the intended task. This adaptation enhances their ability to handle domain-specific challenges effectively. The necessity for finetuning arises from the inherent non-specificity of pretrained models. While they possess a wide-ranging grasp of language, they lack task-specific context. For instance, finetuning is essential when tackling sentiment analysis of financial news. In the early days of GPT3 in 2020 and 2021, finetuning also allowed an LLM to be tuned for a specific task without the need for multiple few-shot examples in the prompt.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
93,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## **Instruction Finetuning: Making General-Purpose Assistants out of LLMs**
Instruction finetuning adds precise control over model behavior, making it a general-purpose assistant. The goal of instruction finetuning is to obtain an LLM that interprets prompts as instructions instead of text. It’s just a special type of finetuning. For example, consider the following prompt. > What is the capital of France? > An LLM with instruction finetuning would likely interpret the prompt as an instruction, giving the following answer. > Paris. > However, a plain LLM without instruction finetuning could think that we are writing a list of exercises for our geography students, therefore merely continuing the text with a new question. > What is the capital of Italy? > Instruction finetuning takes things up a notch. Imagine giving precise instructions to our model: ""Analyze the sentiment of this text and tell us if it's positive.” It's like coaching the model to paint exactly what you envision. With instruction finetuning, we provide explicit guidance, shaping the model's behavior to match our intentions. Instruction tuning offers several advantages. It trains models on a collection of tasks described via instructions, granting LLMs the capacity to generalize to new tasks prompted by additional instructions. This sidesteps the need for vast amounts of task-specific data and instead uses textual instructions to guide learning. While traditional finetuning acquaints models with task-specific data, instruction finetuning adds an extra layer by incorporating explicit instructions to guide model behavior. This approach empowers developers to shape desired outputs, encourage specific behaviors, and steer model responses.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
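To make this more tangible, here is a sketch of what a single instruction-tuning record and its rendered training prompt might look like; the field names and template are illustrative (inspired by common instruction datasets), not a fixed standard.

```python
# A hypothetical instruction-tuning record; field names are illustrative only.
example = {
    "instruction": "Analyze the sentiment of this text and tell us if it's positive.",
    "input": "The new cafe around the corner has amazing coffee and friendly staff.",
    "output": "The sentiment is positive.",
}

# During instruction finetuning, many such records are rendered into prompts and
# the model is trained to produce the target response that follows them.
prompt = (
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Input:\n{example['input']}\n\n"
    f"### Response:\n"
)
target = example["output"]
print(prompt + target)
```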
94,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## Finetuning Techniques
There are several finetuning methods; we’ll learn more about them later in the course. Some methods differ in how many parameters they update, such as: - **Full Finetuning:** This method adjusts all the parameters of the pretrained LLM in order to adapt it to a specific task. However, it is relatively resource-intensive, requiring extensive computational power. - **Low-Rank Adaptation (LoRA):** LoRA aims to adapt LLMs to specific tasks and datasets while simultaneously reducing computational resources and costs. By applying low-rank approximations to the downstream layers of LLMs, LoRA significantly reduces the number of parameters to be trained, thereby lowering the GPU memory requirements and training costs. Other methods differ in the learning algorithm used for finetuning, such as: - **Supervised Finetuning (SFT):** SFT involves doing standard supervised finetuning with a pretrained LLM on a small amount of demonstration data. - **Reinforcement Learning from Human Feedback (RLHF):** RLHF is a training methodology where models are trained to follow human feedback over multiple iterations. Later in this course, we’ll see how to finetune a model using SFT and RLHF, both using LoRA.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
95,Pretraining and Fine-Tuning of LLMs,"# Pretraining and Fine-Tuning of LLMs
## Conclusion
In this lesson, we covered the pretraining and finetuning of LLMs. Pretraining equips LLMs with a profound grasp of language by immersing them in vast text corpora. Finetuning then bridges the gap between general understanding and specialized knowledge, allowing LLMs to perform well in specialized domains. Instruction finetuning turns LLMs into versatile assistants, enabling precise control over their behavior through explicit guidance. From full finetuning to the resource-efficient Low-Rank Adaptation (LoRA) and from Supervised Finetuning (SFT) to Reinforcement Learning from Human Feedback (RLHF), we also learned about the most popular finetuning techniques. In the next module, we’ll do some hands-on exercises, culminating in launching the pretraining of an LLM on the cloud.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954254-pretraining-and-fine-tuning-of-llms
96,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## Introduction
In this lesson, we’ll see the most popular models used for language modeling, starting from statistical ones up to the first Large Language Models (LLMs). This lesson is meant to be more like a narrative on the evolution of the models rather than a technical explanation. Therefore, don’t worry if you can’t understand every model in detail.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
97,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## The Evolution of Language Modeling
The evolution of NLP models has been a remarkable journey marked by continuous innovation and improvement. It began with the Bag of Words model in 1954, which simply counted word occurrences in documents. This was followed by TF-IDF in 1972, which adjusted these scores based on the rarity or commonality of words. The advent of Word2Vec in 2013 marked a significant leap forward, introducing the concept of word embeddings that captured semantic relationships between words. This was then further enhanced by Recurrent Neural Networks (RNNs), which could learn sequence patterns and handle documents of any length. The introduction of the Transformer architecture in 2017 revolutionized the field, with its attention mechanism allowing the model to focus on the most relevant parts of the input when generating output. This was the foundation for BERT in 2018, which used bidirectional Transformers to achieve impressive results in traditional NLP tasks. The subsequent years saw a flurry of advancements, with models like RoBERTa, XLM, ALBERT, and ELECTRA each introducing their own improvements and optimizations.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
98,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## Model’s Timeline
- **[1954]** [Bag of Words (BOW)](https://en.wikipedia.org/wiki/Bag-of-words_model) BOW is a simple model that counts word occurrences in documents, using these counts as features. It was a basic yet effective way to analyze text. However, it did not account for word order or context. - **[1972]** [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) TF-IDF enhanced BOW by giving more weight to rare words and less to common ones. This improved the model's ability to discern document relevance. However, it still did not account for word context. - **[2013]** [Word2Vec](https://arxiv.org/abs/1301.3781) Word2Vec introduced word embeddings, high-dimensional vectors that capture semantic relationships. These embeddings were learned by a neural network trained on a large corpus of text. This model marked a significant advancement in capturing semantic meaning in text. - **[2014]** [RNNs in Encoder-Decoder architectures](https://en.wikipedia.org/wiki/Recurrent_neural_network) RNNs (Recurrent Neural Networks) compute document embeddings, leveraging word context in sentences, which was not possible with word embeddings alone. Later evolved with **[LSTM](http://www.bioinf.jku.at/publications/older/2604.pdf)** [1997] to capture long-term dependencies and to **[Bidirectional RNN](https://ieeexplore.ieee.org/document/650093)** [1997] to capture left-to-right and right-to-left dependencies. Eventually, **[Encoder-Decoder RNNs](https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf)** [2014] emerged, where an RNN creates a document embedding (i.e., the encoder), and another RNN decodes it into text (i.e., the decoder). - **[2017]** [Transformer](https://arxiv.org/abs/1706.03762) The Transformer is an encoder-decoder model that leverages attention mechanisms to compute better embeddings and to align output better to input. This model marked a significant advancement in NLP tasks. - **[2018]** [BERT](https://arxiv.org/abs/1810.04805) BERT is a bidirectional Transformer pre-trained using a combination of Masked Language Modeling and Next Sentence Prediction objectives. It uses global attention. - **[2018]** [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) GPT is the first autoregressive model based on the Transformer architecture. Later, it evolved into **[GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)** [2019], a bigger and optimized version of GPT pre-trained on WebText, and **[GPT-3](https://arxiv.org/abs/2005.14165)** [2020], a further bigger and optimized version of GPT-2, pre-trained on Common Crawl. - **[2019]** [CTRL](https://arxiv.org/abs/1909.05858) CTRL, similar to GPT, introduced control codes for conditional text generation. This allowed for more control over the generated text. - **[2019]** [Transformer-XL](https://arxiv.org/abs/1901.02860) Transformer-XL reused previously computed hidden states to attend to a longer context. This allowed the model to handle longer sequences of text. - **[2019]** [ALBERT](https://arxiv.org/abs/1909.11942) ALBERT is a lighter version of BERT where (1) Next Sentence Prediction is replaced by Sentence Order Prediction, and (2) parameter-reduction techniques are used for lower memory consumption and faster training. 
- **[2019]** [RoBERTa](https://arxiv.org/abs/1907.11692) RoBERTa is a better version of BERT, where (1) the Masked Language Modeling objective is dynamic, (2) the Next Sentence Prediction objective is dropped, (3) the BPE tokenizer is employed, and (4) better hyperparameters are used. - **[2019]** [XLM](https://arxiv.org/abs/1901.07291) XLM, a multilingual Transformer, was pre-trained using objectives like Causal Language Modeling, Masked Language Modeling, and Translation Language Modeling. - **[2019]** [XLNet](https://arxiv.org/abs/1906.08237) It’s a Transformer-XL with a generalized autoregressive pre-training method that enables learning bidirectional dependencies. - **[2019]** [PEGASUS](https://arxiv.org/abs/1912.08777) PEGASUS, a bidirectional encoder and left-to-right decoder, was pre-trained with Masked Language Modeling and Gap Sentence Generation objectives. - **[2019]** [DistilBERT](https://arxiv.org/abs/1910.01108) It is the same as BERT but smaller and faster while preserving over 95% of BERT’s performances. Trained by distillation of the pre-trained BERT model. - **[2019]** [XLM-RoBERTa](https://arxiv.org/pdf/1911.02116.pdf) XLM-RoBERTa is a multilingual version of RoBERTa, trained on a multilanguage corpus with the Masked Language Modeling objective. - **[2019]** [BART](https://arxiv.org/abs/1910.13461) BART, a bidirectional encoder and left-to-right decoder, was trained by corrupting text with an arbitrary noising function and learning a model to reconstruct the original text. - **[2019]** [ConvBERT](https://arxiv.org/abs/2008.02496) ConvBERT replaced self-attention blocks with new ones that leveraged convolutions to better model global and local contexts. - **[2020]** [Funnel Transformer](https://arxiv.org/abs/2006.03236) It’s a type of Transformer that gradually compresses the sequence of hidden states to a shorter one, reducing the computation cost. - **[2020]** [Reformer](https://arxiv.org/abs/2001.04451) Reformer",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
99,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## Model’s Timeline
is a more efficient Transformer thanks to locality-sensitive hashing attention, axial position encoding, and other optimizations. - **[2020]** [T5](https://arxiv.org/abs/1910.10683) T5, a bidirectional encoder and left-to-right decoder, was pre-trained on a mix of unsupervised and supervised tasks. - **[2020]** [Longformer](https://arxiv.org/abs/2004.05150) Longformer replaced the attention matrices with sparse matrices for higher training efficiency. This made the model faster and more memory-efficient. - **[2020]** [ProphetNet](https://arxiv.org/abs/2001.04063) ProphetNet was trained with the Future N-gram Prediction objective and with a novel self-attention mechanism. - **[2020]** [ELECTRA](https://arxiv.org/abs/2003.10555) Lighter and better than BERT, ELECTRA was trained with the Replaced Token Detection objective. This made the model more efficient and improved its performance on NLP tasks. - **[2021]** [Switch Transformers](https://arxiv.org/abs/2101.03961) Switch Transformers introduced a sparsely-activated expert Transformer model, aiming to simplify and improve over Mixture of Experts. This allowed the model to handle a wider range of tasks. The years 2020 and 2021 were when Large Language Models truly arose. Up to 2020, most language models were able to generate good-looking text. After this date, the best language models could follow instructions and solve various tasks beyond simple text generation.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
100,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## The Transformer
The most crucial model in the timeline above is, without doubt, the Transformer, introduced in the very popular paper “[Attention Is All You Need](https://arxiv.org/abs/1706.03762).” The Transformer is a type of neural network that is used today by all of the best Large Language Models, like GPT-4, Claude, and LLaMA. Central to Transformers is the encoder-decoder structure, which excels at modeling long-range dependencies and capturing contextual information. **The Encoder** processes the input text, identifying key elements and creating word embeddings based on their relevance to other words in the sentence. In the original Transformer architecture, designed for text translation, the attention mechanism was employed in two distinct ways: encoding the source language and decoding the encoded embedding back into the target language. On the other hand, **the Decoder** takes the encoder's output, an embedding, and transforms it back into text. Some models may opt to use only the decoder, bypassing the encoder entirely. The decoder's attention mechanism differs slightly from the encoder's, functioning more like a conventional language model by focusing on previous words during text processing. This approach is particularly useful for tasks like language generation, which is why models like GPT, primarily designed for text generation in response to an input text sequence, utilize the decoder part of the Transformer. Later in the course, we’ll learn more about the Transformer architecture. ![ Image from the paper **[Attention is All You Need](https://arxiv.org/abs/1706.03762)**.](The%20Evolution%20of%20Language%20Modeling%20up%20to%20LLMs%20d011060497644ebeba93113a55433606/attention.png) Image from the paper **[Attention is All You Need](https://arxiv.org/abs/1706.03762)**.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
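For a feel of the encoder-decoder shape, here is a tiny sketch using PyTorch's built-in `nn.Transformer` module, with toy dimensions and random tensors standing in for real token embeddings:

```python
import torch
import torch.nn as nn

# Toy encoder-decoder Transformer: 6 encoder and 6 decoder layers, as in the paper.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # (source length, batch size, embedding dim)
tgt = torch.rand(9, 32, 512)   # (target length, batch size, embedding dim)

# The encoder builds contextual representations of `src`; the decoder attends to
# them (and to the previous target positions) to produce the output sequence.
out = model(src, tgt)
print(out.shape)  # torch.Size([9, 32, 512])
```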
101,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## Scaling Transformers: What Led to Large Language Models
The effectiveness of the Transformer models was further improved by **scaling**, i.e., increasing the number of parameters and training on more data. This scaling led to models with more than 100B parameters that could perform tasks using few-shot or zero-shot approaches, eliminating the need for task-specific fine-tuning. The increase in the size of these models and the datasets used for training them (and thus the associated costs) led to the large language models that we see today, like Cohere Command, GPT-4, and LLaMA.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
102,The Evolution of Language Modeling up to LLMs,"# The Evolution of Language Modeling up to LLMs
## Conclusion
In this lesson, we navigated through the rich history of Natural Language Processing, tracing the path from the rudimentary Bag of Words model to the advanced Transformer family. This timeline underscored the continuous innovation in NLP, spotlighting the progression of models in sophistication and proficiency. In the next lesson, we’ll continue the timeline of popular models from 2020 (with GPT-3) up to today.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953230-the-evolution-of-language-modeling-up-to-llms
103,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## Introduction
In this lesson, we will adopt an entirely different method for fine-tuning a large language model, leveraging a platform called [Cohere](https://cohere.com/). This approach allows you to craft a personalized model simply by providing sample inputs and outputs, while the service handles the fine-tuning process in the background. Essentially, you supply a set of examples and, in return, obtain a fine-tuned model. For instance, in the context of a classification model, a sample entry would consist of a pair containing an input text and its corresponding label. Cohere utilizes a collection of exclusive models to execute various functions like [summarization](https://cohere.com/summarize), [embedding](https://docs.cohere.com/docs/multilingual-language-models), [chat](https://cohere.com/chat), and more, all accessible through APIs. Additionally, they empower us to enhance their models by customizing them to suit our precise use case through fine-tuning. It is possible to create [custom models](https://docs.cohere.com/docs/training-custom-models) for 3 distinct objectives: 1) Generative tasks, where we expect the model to generate text as the output, 2) Classification, where the model categorizes the text into different categories, or 3) Rerank, to enhance semantic search ([What is Semantic Search?](https://docs.cohere.com/docs/what-is-semantic-search)) results. This lesson explores the procedure of fine-tuning a customized generative model using medical texts to extract information. The task, known as [Named Entity Recognition (NER)](https://en.wikipedia.org/wiki/Named-entity_recognition), empowers models to identify various entities (such as names, locations, dates, etc.) within a text. Large Language Models simplify the process of instructing a model to locate desired information within content. In the following sections, we will delve into the procedure of fine-tuning a model to extract diseases, chemicals, and their relationships from a paper abstract.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
104,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## Cohere API
The Cohere service offers a range of robust base models tailored for various objectives. Since our focus is on generative tasks, you have the option to select either base models for faster performance or command models for enhanced capability. Both variants also include a ""light"" version, which is a smaller-sized model, providing you with additional choices. To access the API, you must first [create an account](https://dashboard.cohere.com/welcome/register) on the Cohere platform. You can then proceed to the ""API Keys"" page, where you will find a Trial key available for free usage. Note that the trial key has rate limitations and cannot be used in a production environment. Nonetheless, there is a valuable opportunity to utilize the models and conduct your experiments prior to submitting your application for production deployment. Now, let's install the Cohere Python package to seamlessly use their API. You should run the following command in terminal. ```bash pip install cohere ``` Next, you'll need to create a Cohere object, which requires your API key and a prompt to generate a response for your request. You can utilize the following code, but please remember to replace the API placeholder with your own key. ```python import cohere co = cohere.Client("""") prompt = """"""The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the diseases mentioned. Progressive neurodegeneration of the optic nerve and the loss of retinal ganglion cells is a hallmark of glaucoma, the leading cause of irreversible blindness worldwide, with primary open-angle glaucoma (POAG) being the most frequent form of glaucoma in the Western world. While some genetic mutations have been identified for some glaucomas, those associated with POAG are limited and for most POAG patients, the etiology is still unclear. Unfortunately, treatment of this neurodegenerative disease and other retinal degenerative diseases is lacking. For POAG, most of the treatments focus on reducing aqueous humor formation, enhancing uveoscleral or conventional outflow, or lowering intraocular pressure through surgical means. These efforts, in some cases, do not always lead to a prevention of vision loss and therefore other strategies are needed to reduce or reverse the progressive neurodegeneration. In this review, we will highlight some of the ocular pharmacological approaches that are being tested to reduce neurodegeneration and provide some form of neuroprotection. List of extracted diseases:"""""" response = co.generate( model='command', prompt = prompt, max_tokens=200, temperature=0.750) base_model = response.generations[0].text print(base_model) ``` ``` - glaucoma - primary open-angle glaucoma ``` The provided code utilizes the `cohere.Client()` method to input your API key. Subsequently, the `prompt` variable will contain the model's instructions. In this case, we want the model to read a scientific paper's abstract from the [PubMed website](https://pubmed.ncbi.nlm.nih.gov/) and extract the list of diseases it can identify. Finally, we employ the `cohere` object's `.generate()` method to specify the model type and provide the prompts, along with certain control parameters. The `max_tokens` parameter determines the maximum number of new tokens the model can produce, while the `temperature` parameter governs the level of randomness in the generated results. 
As you can see, the command model is robust enough to identify diseases without the need for any examples or additional information. In the upcoming sections, we will explore the fine-tuning feature to assess whether we can enhance the model's performance even further.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
105,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## The Dataset
Before delving into the details of fine-tuning, let's begin by introducing the dataset we are utilizing and clarifying the objective. We will be utilizing the dataset known as [BC5CDR](https://paperswithcode.com/dataset/bc5cdr) which is short for BioCreative V Chemical Disease Relation. This dataset comprises 1,500 PubMed research papers that have been manually annotated by human experts with structured information. The data has been divided into training, validation, and testing sets, with each set containing 500 samples. Our goal is to fine-tune the model to enable it to identify and extract the names of various diseases/chemicals and their relationships from text. This is very useful because the information about the relationships between chemicals and diseases is usually specified in the paper abstracts, but in this form it’s not actionable. That is, it’s not possible to search for “all the chemicals that influence the disease X”, because we’d have to read all the papers mentioning the “disease X” to do it. If we had an accurate way of extracting this structured information from the unstructured texts of the papers, it would be useful for doing these searches. Now, let's perform some preprocessing on the dataset to transform it into a suitable format for the Cohere service. They support files in three formats: CSV, JSONL, or plain text files. We will use the JSONL format, which should align with the following template. ```json {""prompt"": ""This is the first prompt"", ""completion"": ""This is the first completion""} {""prompt"": ""This is the second prompt"", ""completion"": ""This is the second completion""} ``` You can download the dataset in JSON format from the link below. [bc5cdr.json](Fine-Tuning%20using%20Cohere%20for%20Medical%20Data%207aaba9f14aa7475b8dc4f5331866abb5/bc5cdr.json) Then, we can open the file using the code below. ```python import json with open('bc5cdr.json') as json_file: data = json.load(json_file) print(data[0]) ``` ``` {'passages': [{'document_id': '227508', 'type': 'title', 'text': 'Naloxone reverses the antihypertensive effect of clonidine.', 'entities': [{'id': '0', 'offsets': [[0, 8]], 'text': ['Naloxone'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D009270'}]}, {'id': '1', 'offsets': [[49, 58]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}], 'relations': [{'id': 'R0', 'type': 'CID', 'arg1_id': 'D008750', 'arg2_id': 'D007022'}]}, {'document_id': '227508', 'type': 'abstract', 'text': 'In unanesthetized, spontaneously hypertensive rats the decrease in blood pressure and heart rate produced by intravenous clonidine, 5 to 20 micrograms/kg, was inhibited or reversed by nalozone, 0.2 to 2 mg/kg. The hypotensive effect of 100 mg/kg alpha-methyldopa was also partially reversed by naloxone. Naloxone alone did not affect either blood pressure or heart rate. In brain membranes from spontaneously hypertensive rats clonidine, 10(-8) to 10(-5) M, did not influence stereoselective binding of [3H]-naloxone (8 nM), and naloxone, 10(-8) to 10(-4) M, did not influence clonidine-suppressible binding of [3H]-dihydroergocryptine (1 nM). These findings indicate that in spontaneously hypertensive rats the effects of central alpha-adrenoceptor stimulation involve activation of opiate receptors. 
As naloxone and clonidine do not appear to interact with the same receptor site, the observed functional antagonism suggests the release of an endogenous opiate by clonidine or alpha-methyldopa and the possible role of the opiate in the central control of sympathetic tone.', 'entities': [{'id': '2', 'offsets': [[93, 105]], 'text': ['hypertensive'], 'type': 'Disease', 'normalized': [{'db_name': 'MESH', 'db_id': 'D006973'}]}, {'id': '3', 'offsets': [[181, 190]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}, {'id': '4', 'offsets': [[244, 252]], 'text': ['nalozone'], 'type': 'Chemical', 'normalized': []}, {'id': '5', 'offsets': [[274, 285]], 'text': ['hypotensive'], 'type': 'Disease', 'normalized':",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
106,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## The Dataset
[{'db_name': 'MESH', 'db_id': 'D007022'}]}, {'id': '6', 'offsets': [[306, 322]], 'text': ['alpha-methyldopa'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D008750'}]}, {'id': '7', 'offsets': [[354, 362]], 'text': ['naloxone'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D009270'}]}, {'id': '8', 'offsets': [[364, 372]], 'text': ['Naloxone'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D009270'}]}, {'id': '9', 'offsets': [[469, 481]], 'text': ['hypertensive'], 'type': 'Disease', 'normalized': [{'db_name': 'MESH', 'db_id': 'D006973'}]}, {'id': '10', 'offsets': [[487, 496]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}, {'id': '11', 'offsets': [[563, 576]], 'text': ['[3H]-naloxone'], 'type': 'Chemical', 'normalized': []}, {'id': '12', 'offsets': [[589, 597]], 'text': ['naloxone'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D009270'}]}, {'id': '13', 'offsets': [[637, 646]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}, {'id': '14', 'offsets': [[671, 695]], 'text': ['[3H]-dihydroergocryptine'], 'type': 'Chemical', 'normalized': []}, {'id': '15', 'offsets': [[750, 762]], 'text': ['hypertensive'], 'type': 'Disease', 'normalized': [{'db_name': 'MESH', 'db_id': 'D006973'}]}, {'id': '16', 'offsets': [[865, 873]], 'text': ['naloxone'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D009270'}]}, {'id': '17', 'offsets': [[878, 887]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}, {'id': '18', 'offsets': [[1026, 1035]], 'text': ['clonidine'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D003000'}]}, {'id': '19', 'offsets': [[1039, 1055]], 'text': ['alpha-methyldopa'], 'type': 'Chemical', 'normalized': [{'db_name': 'MESH', 'db_id': 'D008750'}]}], 'relations': [{'id': 'R0', 'type': 'CID', 'arg1_id': 'D008750', 'arg2_id': 'D007022'}]}], 'dataset_type': 'train'} ``` Now, we can iterate through the dataset, extract the abstracts and related entities, and include the necessary instructions for training. There are two sets of instructions: the first set aids the model in understanding the task, while the second set tells it how to format the response. ```python instruction = ""The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the diseases mentioned.\n\n"" output_instruction = ""\n\nList of extracted diseases:\n"" ``` The `instruction` variable establishes the guidelines, while the `output_instruction` defines the desired format for the output. Now, we loop through the dataset and format each instance. ```python the_list = [] for item in data: dis = [] if item['dataset_type'] == ""test"": continue # Skip the test set # Extract the disease names for ent in item['passages'][1]['entities']: # The annotations if ent['type'] == ""Disease"": # Only select disease names if ent['text'][0] not in dis: # Remove duplicate diseases in a text dis.append(ent['text'][0]) the_list.append( {'prompt': instruction + item['passages'][1]['text'] + output_instruction, 'completion': ""- ""+ ""\n- "".join(dis)} ) ``` The code above may appear complex, but for each sample, it essentially iterates through all the annotations and selectively chooses only the disease-related ones. 
We employ this approach because the dataset includes extra labels for chemicals, which are not relevant to our model. Finally, it will generate a dictionary containing the `prompt` and `completion` keys. The prompt will incorporate the paper abstract and append the instructions to it, whereas the completion will contain a list of disease names, with each name on a separate line. Now, use the following code to save the dataset in JSONL format. ```python # Write the formatted samples to a JSONL file with open(""disease_instruct_all.jsonl"", ""w"") as outfile: for item in the_list: outfile.write(json.dumps(item) + ""\n"") ``` The formatted dataset will be saved in a file called `disease_instruct_all.jsonl`. Also, it is worth noting that we are concatenating the training and validation sets to make a total of 1K samples. The final dataset used for fine-tuning has 3K samples, consisting of 1K for diseases, 1K for chemicals, and 1K for their relationships. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
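The loop above only builds the disease subset. The chemical and relationship subsets that make up the remaining 2K samples can be constructed in the same way; the sketch below is one plausible way to do it, reusing the chemical and relation prompts that appear later in this lesson. The exact code used for those two subsets is not shown here, so the way the relations are turned into text (mapping MESH identifiers back to entity names) is an assumption.

```python
chem_instruction = "The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the chemicals mentioned.\n\n"
chem_output_instruction = "\n\nList of extracted chemicals:\n"
rel_instruction = "The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the influences between the chemicals and diseases mentioned.\n\n"
rel_output_instruction = "\n\nList of extracted influences:\n"

chem_list, rel_list = [], []
for item in data:
    if item['dataset_type'] == "test":  # Skip the test set, as before
        continue
    abstract = item['passages'][1]['text']
    entities = item['passages'][1]['entities']
    relations = item['passages'][1]['relations']

    # Chemical subset: same pattern as the disease loop, but filtering on the "Chemical" type.
    chems = []
    for ent in entities:
        if ent['type'] == "Chemical" and ent['text'][0] not in chems:
            chems.append(ent['text'][0])
    chem_list.append({'prompt': chem_instruction + abstract + chem_output_instruction,
                      'completion': "- " + "\n- ".join(chems)})

    # Relation subset: map MESH ids back to entity names to phrase "Chemical X influences disease Y".
    id_to_text = {norm['db_id']: ent['text'][0]
                  for ent in entities for norm in ent['normalized']}
    influences = []
    for rel in relations:
        chem, dis = id_to_text.get(rel['arg1_id']), id_to_text.get(rel['arg2_id'])
        if chem and dis:
            influences.append(f"- Chemical {chem} influences disease {dis}")
    if influences:
        rel_list.append({'prompt': rel_instruction + abstract + rel_output_instruction,
                         'completion': "\n".join(influences)})
```

Each of the three lists can then be written out with the same JSONL-saving snippet shown above and concatenated into the single 3K-sample training file.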
107,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## The Fine-Tuning
Now, it's time to employ the prepared dataset for the fine-tuning process. The good news is that we have completed the majority of the challenging tasks. The Cohere platform will only request a nickname to save your custom model. It's worth noting that they provide advanced options if you wish to train your model for a longer duration or adjust the learning rate. Here is a detailed guide on training a custom model, and you can also refer to the Cohere documentation for [Training Custom Models](https://docs.cohere.com/docs/finetuning), complete with helpful screenshots. You should navigate to the models page using the sidebar and click on the “Create a custom model” button. On the next page, you will be prompted to select the model type, which, in our case, will be the `Generate` option. Next, upload the dataset from the previous step or your own custom dataset. Afterward, click the ""Review data"" button to display a few samples from the dataset. This step is designed to verify that the platform can read your data as expected. If everything appears to be in order, click the ""Continue"" button. The last step is to choose a nickname for your model. Also, you can change the training hyperparameters by clicking on the “HYPERPARAMETERS (OPTIONAL)” link. You have options such as `train_steps` to determine the duration, `learning_rate` to adjust the model's speed of adaptation, and `batch_size`, which specifies the number of samples the model processes in each iteration, among others. In our experience, the default parameters worked well, but feel free to experiment with these settings. Once you are ready, click the ""Initiate training"" button. That’s it! Cohere will send you an email once the fine-tuning process is completed, providing you with the model ID for use in your APIs. ### Extract Disease Names In the code snippet below, we employ the same prompt as seen in the first section; however, we use the model ID of the network we just fine-tuned. Let’s see if there are any improvements. ```python response = co.generate( model='2075d3bc-eacf-472e-bd26-23d0284ec536-ft', prompt=prompt, max_tokens=200, temperature=0.750) disease_model = response.generations[0].text print(disease_model) ``` ```python - neurodegeneration - glaucoma - blindness - POAG - glaucomas - retinal degenerative diseases - neurodegeneration - neurodegeneration ``` As evident from the output, the model can now identify a broad spectrum of new diseases, highlighting the effectiveness of the fine-tuning approach. The Cohere platform offers both a user-friendly interface and a potent base model to build upon. ### Extract Chemical Names In the upcoming test, we will assess the performance of our custom models in extracting chemical names compared to the baseline model. To eliminate the need for redundant code mentions, we will only present the prompt, followed by the output of each model for easy comparison. We utilized the following prompt to extract information from a text within the test set. ```python prompt = """"""The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the chemicals mentioned. 
To test the validity of the hypothesis that hypomethylation of DNA plays an important role in the initiation of carcinogenic process, 5-azacytidine (5-AzC) (10 mg/kg), an inhibitor of DNA methylation, was given to rats during the phase of repair synthesis induced by the three carcinogens, benzo[a]-pyrene (200 mg/kg), N-methyl-N-nitrosourea (60 mg/kg) and 1,2-dimethylhydrazine (1,2-DMH) (100 mg/kg). The initiated hepatocytes in the liver were assayed as the gamma-glutamyltransferase (gamma-GT) positive foci formed following a 2-week selection regimen consisting of dietary 0.02% 2-acetylaminofluorene coupled with a necrogenic dose of CCl4. The results obtained indicate that with all three carcinogens, administration of 5-AzC during repair synthesis",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
108,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## The Fine-Tuning
increased the incidence of initiated hepatocytes, for example 10-20 foci/cm2 in 5-AzC and carcinogen-treated rats compared with 3-5 foci/cm2 in rats treated with carcinogen only. Administration of [3H]-5-azadeoxycytidine during the repair synthesis induced by 1,2-DMH further showed that 0.019 mol % of cytosine residues in DNA were substituted by the analogue, indicating that incorporation of 5-AzC occurs during repair synthesis. In the absence of the carcinogen, 5-AzC given after a two thirds partial hepatectomy, when its incorporation should be maximum, failed to induce any gamma-GT positive foci. The results suggest that hypomethylation of DNA per se may not be sufficient for initiation. Perhaps two events might be necessary for initiation, the first caused by the carcinogen and a second involving hypomethylation of DNA. List of extracted chemicals:"""""" ``` First, we will examine the output of the base model. ``` - 5-azacytidine (5-AzC) - benzo[a]-pyrene - N-methyl-N-nitrosourea - 1,2-dimethylhydrazine - CCl4 - 2-acetylaminofluorene ``` Followed by the predictions generated by the custom fine-tuned model. ``` - 5-azacytidine - 5-AzC - benzo[a]-pyrene - N-methyl-N-nitrosourea - 1,2-dimethylhydrazine - 1,2-DMH - 2-acetylaminofluorene - CCl4 - [3H]-5-azadeoxycytidine - cytosine ``` It is clear that the custom model is better suited for our specific task and adapts readily based on the samples it has encountered. ### Extract Relations The final test involves employing the model to extract complex relationships between chemicals and the diseases they impact. It is an advanced task that could pose some challenges for the base model. As previously done, we begin by introducing the prompt we employed from the test set. ```python prompt = """"""The following article contains technical terms including diseases, drugs and chemicals. Create a list only of the influences between the chemicals and diseases mentioned. The yield of severe cirrhosis of the liver (defined as a shrunken finely nodular liver with micronodular histology, ascites greater than 30 ml, plasma albumin less than 2.2 g/dl, splenomegaly 2-3 times normal, and testicular atrophy approximately half normal weight) after 12 doses of carbon tetrachloride given intragastrically in the phenobarbitone-primed rat was increased from 25% to 56% by giving the initial ""calibrating"" dose of carbon tetrachloride at the peak of the phenobarbitone-induced enlargement of the liver. At this point it was assumed that the cytochrome P450/CCl4 toxic state was both maximal and stable. The optimal rat size to begin phenobarbitone was determined as 100 g, and this size as a group had a mean maximum relative liver weight increase 47% greater than normal rats of the same body weight. The optimal time for the initial dose of carbon tetrachloride was after 14 days on phenobarbitone. List of extracted influences:"""""" ``` Here is the output generated by the base model. ``` severe cirrhosis of the liver influences shrinking, finely nodular, ascites, plasma albumin, splenomegaly, testicular atrophy, carbon tetrachloride, phenobarbitone ``` And here are the generations produced by the custom model. ``` - Chemical phenobarbitone influences disease cirrhosis of the liver - Chemical carbon tetrachloride influences disease cirrhosis of the liver ``` The base model evidently attempts to establish some connections within the text. 
Nevertheless, it's evident that the custom fine-tuned model excels in producing well-formatted output and distinctly connecting each chemical to the respective disease. This task poses a significant challenge for a general-purpose model; however, it showcases the effectiveness of fine-tuning with just a couple of thousand samples of the task we aim to accomplish.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
109,Fine-Tuning using Cohere for Medical Data,"# Fine-Tuning using Cohere for Medical Data
## Conclusion
As we have examined in various lessons within this chapter, the fine-tuning process has demonstrated itself as a potent tool for extending the capabilities of large language models, even when working with a relatively small amount of data, all while maintaining cost efficiency. For individuals new to the field of AI, especially those who are not well-versed in coding, the no-code approach offered by the Cohere service is an exceptionally powerful option. Our custom model demonstrated superior performance over the base model across three distinct tasks with a single fine-tuning run, showcasing its capability to effectively follow the patterns presented in the dataset. --- >> [Notebook](https://colab.research.google.com/drive/14NG4M5MA-8BMXUP0qY-nHw7-Urt8RCqN?usp=sharing).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959786-fine-tuning-using-cohere-for-medical-data
110,Scaling Laws in LLM Training,"# Scaling Laws in LLM Training
## Introduction
In this lesson, we will study the relations between language model performance and parameters like model scale, model shape, and compute budget. The lesson is a small summary of extracts from the papers “[Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361)” and “[Training Compute-Optimal Large Language Models](https://arxiv.org/pdf/2203.15556.pdf).”",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
111,Scaling Laws in LLM Training,"# Scaling Laws in LLM Training
## **A study on language modeling performance**
The paper [Scaling Laws for Neural Language Models](https://arxiv.org/pdf/2001.08361.pdf) (2020) contains a study of empirical scaling laws for [language model](https://en.wikipedia.org/wiki/Language_model) performance on the cross-entropy loss, focusing on the [Transformer](https://arxiv.org/abs/1706.03762) architecture. The experiments show that the test loss scales as a [power law](https://en.wikipedia.org/wiki/Power_law) with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. This means simple equations govern the relationships between these variables, and these equations can be used to create an optimally efficient training configuration for training a very large language model. Moreover, it looks like other architectural details, such as network width or depth, have minimal effects within a wide range. As deduced from the experiments and the derived equations, larger models are significantly more sample-efficient, i.e., optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
112,Scaling Laws in LLM Training,"# Scaling Laws in LLM Training
## **Experiments**
To study language model scaling, a variety of models have been trained with different factors, including: - Model size (*N*): ranging from 768 to 1.5 billion non-embedding parameters. - Dataset size (*D*): ranging from 22 million to 23 billion tokens. - Model shape: including depth, width, attention heads, and feed-forward dimension. - Context length: 1024 for most runs, with some experiments with shorter contexts. - Batch size: 2^19 tokens for most runs, with some variations to measure the critical batch size. Training at the critical batch size provides a roughly optimal compromise between time and compute efficiency. Let’s define the following training variables as well: - Let *L* be the test cross-entropy loss. - Let *C* be the amount of compute used to train a model.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
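Concretely, the paper fits each of these trends with a simple power law, plus a combined form L(N, D) that captures overfitting. The small sketch below evaluates those fits; the exponents and constants are the approximate values reported in the paper (for example α_N ≈ 0.076 and α_D ≈ 0.095), so treat the numbers as illustrative rather than exact.

```python
# Approximate power-law fits from "Scaling Laws for Neural Language Models" (Kaplan et al., 2020).
# The constants are roughly the reported values and are included here only for illustration.
ALPHA_N, N_C = 0.076, 8.8e13   # model-size law:   L(N) = (N_C / N) ** ALPHA_N
ALPHA_D, D_C = 0.095, 5.4e13   # dataset-size law: L(D) = (D_C / D) ** ALPHA_D

def loss_from_model_size(n_params: float) -> float:
    """Predicted test loss from non-embedding parameters, with data and compute unconstrained."""
    return (N_C / n_params) ** ALPHA_N

def loss_from_dataset_size(n_tokens: float) -> float:
    """Predicted test loss from dataset size, for a large model trained with early stopping."""
    return (D_C / n_tokens) ** ALPHA_D

def loss_from_model_and_data(n_params: float, n_tokens: float) -> float:
    """Combined fit L(N, D) used in the paper to estimate the extent of overfitting."""
    return ((N_C / n_params) ** (ALPHA_N / ALPHA_D) + D_C / n_tokens) ** ALPHA_D

# A power law means doubling the model size shaves off only a few percent of the loss.
print(loss_from_model_size(1e9), loss_from_model_size(2e9))
print(loss_from_dataset_size(2e10), loss_from_model_and_data(1e9, 2e10))
```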
113,Scaling Laws in LLM Training,"# **Key findings**
Taking inspiration from section 1.1 of the [paper](https://arxiv.org/pdf/2001.08361.pdf), we summarize the results of the experiments. - **Performance depends strongly on model scale, weakly on model shape:** Model performance depends most strongly on scale, which consists of three factors: the number of model parameters *N* (excluding embeddings), the size of the dataset *D*, and the amount of compute *C* used for training. Within reasonable limits, performance depends very weakly on other architectural hyperparameters, such as depth vs. width. - **Smooth power laws**: Performance has a power-law relationship with each of the three scale factors *N*, *D*, and *C* when not bottlenecked by the other two, with trends spanning more than six orders of magnitude. ![Language modeling performance improves smoothly as we increase the amount of compute, dataset size, and model size used for training. For optimal performance, all three factors must be scaled up in tandem. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_16.40.25.png) Language modeling performance improves smoothly as we increase the amount of compute, dataset size, and model size used for training. For optimal performance, all three factors must be scaled up in tandem. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). The paper differentiates between embedding and non-embedding parameters because their size correlates differently with model performance. When including embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. When excluding embedding parameters, the performance of models with different depths converges to a single trend. ![Left: When including embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. Right: When excluding embedding parameters, the performance of models with different depths converges to a single trend. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_16.50.41.png) Left: When including embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. Right: When excluding embedding parameters, the performance of models with different depths converges to a single trend. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). - The **universality of overfitting:** Performance improves predictably as long as we scale up *N* and *D* in tandem but leads to diminishing returns if either *N* or *D* is held fixed while the other increases. ![The early-stopped test loss depends predictably on the dataset size D and model size N. Left: For large D, performance is a straight power law in N. Performance stops improving for a more minor fixed D as N increases and the model begins to overfit. Right: The extent of overfitting depends predominantly on a relationship between N and D. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_16.52.01.png) The early-stopped test loss depends predictably on the dataset size D and model size N. 
Left: For large D, performance is a straight power law in N. For a smaller fixed D, performance stops improving as N increases and the model begins to overfit. Right: The extent of overfitting depends predominantly on a relationship between N and D. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). - The **universality of training**: Training curves follow predictable power-laws whose parameters are roughly independent of the model size. Extrapolating the early part of a training curve can roughly predict the loss that would be achieved if trained for much longer. - **Sample efficiency**: Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps and data points. ![A series of language model training runs, with models ranging in size from 10^3 to 10^9 parameters (excluding embeddings). Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_17.43.12.png) A series of language model training runs, with models ranging in size from 10^3 to 10^9 parameters (excluding embeddings). Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). ![Left: The early-stopped test loss L(N, D) varies predictably with the",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
114,Scaling Laws in LLM Training,"# **Key findings**
dataset size D and model size N. Right: After an initial transient period, learning curves for all model sizes N can be fit with an equation parameterized in terms of the number of steps (Smin) when training at large batch size. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_17.44.11.png) Left: The early-stopped test loss L(N, D) varies predictably with the dataset size D and model size N. Right: After an initial transient period, learning curves for all model sizes N can be fit with an equation parameterized in terms of the number of steps (Smin) when training at large batch size. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). - **Convergence is inefficient**: When working within a fixed compute budget *C* but without any other restrictions on the model size *N* or available data *D*, we attain optimal performance by training very large models and stopping significantly short of convergence. ![As more computing becomes available, choosing how much to allocate towards training larger models, using larger batches, and training for more steps is possible. This image illustrates this billion-fold increase in computing. Most of the increase should go towards increased model size for optimally compute-efficient training. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to increase parallelism through larger batch sizes, with only a very small increase in serial training time required. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_17.45.23.png) As more computing becomes available, choosing how much to allocate towards training larger models, using larger batches, and training for more steps is possible. This image illustrates this billion-fold increase in computing. Most of the increase should go towards increased model size for optimally compute-efficient training. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to increase parallelism through larger batch sizes, with only a very small increase in serial training time required. Image from [https://arxiv.org/pdf/2001.08361.pdf](https://arxiv.org/pdf/2001.08361.pdf). These results show that language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Larger language models are expected to perform better and be more sample-efficient than current models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
115,Scaling Laws in LLM Training,"# **Key findings**
## **Considerations**
When training large language models, it’s possible to use the relations between *N*, *D*, and *L* to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements. > The derived scaling relations can be used as a predictive framework. One might interpret these relations as analogs of the [ideal gas law](https://en.wikipedia.org/wiki/Ideal_gas_law), which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic constituents. > It would be interesting to investigate whether these scaling relations hold in other generative modeling tasks with a maximum likelihood loss and perhaps in other settings and domains (such as images, audio, and video models) as well.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
116,Scaling Laws in LLM Training,"# **Key findings**
## **Chinchilla Scaling Laws for Compute-Optimal Training of LLMs**
In 2022, Google DeepMind published the paper “[Training Compute-Optimal Large Language Models](https://arxiv.org/pdf/2203.15556.pdf)” that further explored the scaling laws of LLMs. The researchers conducted extensive experiments to understand the relationship between model size, the number of training tokens, and the compute budget. > The key finding of this study was that current LLMs, such as GPT-3 (175B), Gopher (280B), and Megatron (530B), are significantly undertrained. While these models have increased the number of parameters, the training data remained constant. > The authors proposed that the number of training tokens and model size must be scaled equally for compute-optimal training. They trained approximately 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens. This extensive experimentation led to the creation of a new LLM, Chinchilla, which outperformed its larger counterparts. ![Current LLMs. We show five of the current largest dense transformer models, their size, and the number of training tokens. Other than LaMDA, most models are trained for approximately 300 billion tokens. We introduce Chinchilla, a substantially smaller model, trained for much longer than 300B tokens. Image from [https://arxiv.org/pdf/2203.15556.pdf](https://arxiv.org/pdf/2203.15556.pdf).](Scaling%20Laws%20in%20LLM%20Training%20516524f6257542b4bec5025994e978a3/Screenshot_2023-08-23_at_17.54.53.png) Current LLMs. We show five of the current largest dense transformer models, their size, and the number of training tokens. Other than LaMDA, most models are trained for approximately 300 billion tokens. We introduce Chinchilla, a substantially smaller model, trained for much longer than 300B tokens. Image from [https://arxiv.org/pdf/2203.15556.pdf](https://arxiv.org/pdf/2203.15556.pdf). With 70B parameters and four times more training data, Chinchilla was trained using the same compute budget as the 280B Gopher. The results showed that smaller models could deliver better performance if trained on more data. These smaller models are easier to fine-tune and have less latency at inference. Moreover, they do not need to be trained to their lowest possible loss to be compute optimal. The researchers explored three different approaches to answer the question: ""Given a fixed FLOPs budget, how should one trade-off model size and the number of training tokens?"" They assumed a power-law relationship between compute and model size. 1. The first approach involved fixing model sizes and varying the number of training tokens. 2. The second approach, called IsoFLOP profiles, varied the model size for a fixed set of different training FLOP counts. 3. The third approach combined the final loss of the above two approaches as a parametric function of model parameters and the number of tokens. All three approaches suggested that as the compute budget increases, the model size and the training data amount should be of approximately equal proportions. The first and second approaches yielded similar predictions for optimal model sizes, while the third suggested that smaller models would be optimal for larger compute budgets.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
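As a back-of-the-envelope sketch of this trade-off, the widely used approximations C ≈ 6·N·D training FLOPs and roughly 20 training tokens per parameter (simplifications of the paper's fitted laws, not its exact coefficients) can be turned into a small helper:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    # Standard approximation for the training compute of a dense Transformer.
    return 6.0 * n_params * n_tokens

def chinchilla_optimal(compute_budget_flops: float, tokens_per_param: float = 20.0):
    """Split a FLOPs budget between model size and tokens, scaling both roughly equally.

    With C = 6 * N * D and D = k * N, the compute-optimal N and D both grow as sqrt(C)."""
    n_params = (compute_budget_flops / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

# Gopher-scale budget: roughly 280B parameters trained on ~300B tokens.
budget = training_flops(280e9, 300e9)
n_opt, d_opt = chinchilla_optimal(budget)
print(f"~{n_opt / 1e9:.0f}B parameters on ~{d_opt / 1e9:.0f}B tokens")  # lands near Chinchilla's 70B / 1.4T
```

Under this rule of thumb, the Gopher compute budget is better spent on a model several times smaller trained on several times more tokens, which is essentially the Chinchilla result.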
117,Scaling Laws in LLM Training,"# **Key findings**
## Conclusion
This lesson has explored the relationship between language model performance and parameters such as model size, dataset size, and compute budget. We've learned that performance scales as a power law with these variables and that larger models tend to be more sample-efficient. We also explored the Chinchilla Scaling Laws, which suggest that the number of training tokens and model size should be scaled equally for compute-optimal training. This has led to the creation of smaller models, like Chinchilla, that outperform larger counterparts when trained on more data. These findings provide a predictive framework for training large language models and may have implications for other generative modeling tasks and domains.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959885-scaling-laws-in-llm-training
118,Model Quantization,"# Introduction
Deep learning has revolutionized various fields, from computer vision to natural language processing. However, one drawback of deep neural networks is their large size and computational demands. These resource-intensive models can significantly hinder deployment, especially in resource-constrained environments like mobile devices and embedded systems. This is where model pruning comes into play as a powerful technique for reducing the size of neural networks without compromising their performance. In this lesson, we will explore what model pruning is, why it's useful, and various methods to achieve it.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
119,Model Quantization,"# Introduction
## What is Model Pruning?
Model pruning reduces the size of a deep neural network by removing certain neurons, connections, or even entire layers. The goal is to create a smaller and more efficient model while preserving its accuracy to the greatest extent possible. This reduction in model size leads to benefits such as faster inference times, lower memory footprint, and improved energy efficiency, making it ideal for deployment in resource-limited scenarios. Pruned models are smaller and require fewer computational resources during inference. This is crucial for applications like mobile apps, IoT devices, and edge computing, where computational resources are limited. Moreover, pruned models typically execute faster and are more energy-efficient, enabling real-time applications and improving user experience.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
120,Model Quantization,"# Introduction
## Different Types of Model Pruning
There are several techniques and methodologies for model pruning, each with its own advantages and trade-offs. Some of the commonly used methods include: ### **Magnitude-based Pruning (or Unstructured Pruning)** In this approach, model weights or activations with small magnitudes are pruned. The intuition is that small weights contribute less to the model's performance and can be safely removed. The paper titled ""[Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures](https://arxiv.org/pdf/1607.03250.pdf)"" presented this approach to optimize deep neural networks by pruning unimportant neurons. This technique, known as network trimming, is based on the observation that a significant number of neurons in a large network produce zero outputs, regardless of the inputs received. These zero activation neurons are considered redundant and are removed without impacting the overall accuracy of the network. The process involves iterative pruning and retraining of the network, with the weights before pruning used as initialization. The authors demonstrate through experiments on computer vision neural networks that this approach can achieve a high compression ratio of parameters without compromising, and sometimes even improving, the accuracy of the original network. ![Image from [https://arxiv.org/pdf/1607.03250.pdf](https://arxiv.org/pdf/1607.03250.pdf).](Model%20Pruning%200add2ac45f114f53a9bc26b2af127ead/new_image.png) Image from [https://arxiv.org/pdf/1607.03250.pdf](https://arxiv.org/pdf/1607.03250.pdf). The paper ""[Learning Efficient Convolutional Networks through Network Slimming](https://arxiv.org/abs/1708.06519)"" presented variations of the pruning scheme for deep convolutional neural networks aimed at reducing the model size, decreasing the run-time memory footprint, and lowering the number of computing operations without compromising accuracy. The paper “[A Simple and Effective Pruning Approach for Large Language Models](https://arxiv.org/pdf/2306.11695v1.pdf)” introduces a pruning method called Wanda (Pruning by Weights and activations) for pruning Large Language Models. Pruning is a technique that eliminates a subset of network weights to maintain performance while reducing the model's size. Wanda prunes weights based on the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. This method is inspired by the recent observation of emergent large magnitude features in LLMs. The key advantage of Wanda is that it does not require retraining or weight updates, and the pruned LLM can be used directly. ![Illustration of our proposed method Wanda (Pruning by Weights and activations), compared with the magnitude pruning approach. Given a weight matrix W and input feature activations X, Wanda computes the weight importance as the elementwise product between the weight magnitude and the norm of input activations (|W| · ∥X∥2). Weight importance scores are compared on a per-output basis (within each row in W), rather than globally across the entire matrix. Image from [https://arxiv.org/pdf/2306.11695v1.pdf](https://arxiv.org/pdf/2306.11695v1.pdf)](Model%20Pruning%200add2ac45f114f53a9bc26b2af127ead/new_image%201.png) Illustration of our proposed method Wanda (Pruning by Weights and activations), compared with the magnitude pruning approach. 
Given a weight matrix W and input feature activations X, Wanda computes the weight importance as the elementwise product between the weight magnitude and the norm of input activations (|W| · ∥X∥2). Weight importance scores are compared on a per-output basis (within each row in W), rather than globally across the entire matrix. Image from [https://arxiv.org/pdf/2306.11695v1.pdf](https://arxiv.org/pdf/2306.11695v1.pdf) ### **Structured Pruning** Structured pruning targets specific structures within the network, such as channels in convolutional layers or neurons in fully connected layers. The paper ""[Structured Pruning of Deep Convolutional Neural Networks](https://arxiv.org/abs/1512.08571)"" introduces a new method of network pruning that incorporates structured sparsity at different scales, including channel-wise, kernel-wise, and intra-kernel strided sparsity. This approach is beneficial for computational resource savings. The method uses a particle filtering approach to determine the significance of network connections and paths, assigning importance based on the misclassification rate associated with each connectivity pattern. After pruning, the network is re-trained to compensate for any losses. ### The Lottery Ticket Hypothesis The paper “[The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://arxiv.org/pdf/1803.03635.pdf)” presents an innovative perspective on neural",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
121,Model Quantization,"# Introduction
## Different Types of Model Pruning
network pruning, introducing the ""Lottery Ticket Hypothesis"". This hypothesis suggests that within dense, randomly-initialized, feed-forward networks, there exist smaller subnetworks (""winning tickets"") that, when trained separately, can achieve test accuracy similar to the original network in a comparable number of iterations. These ""winning tickets"" are characterized by their initial weight configurations, which make them particularly effective for training. The authors propose an algorithm to identify these ""winning tickets"" and present a series of experiments to support their hypothesis. They consistently discover ""winning tickets"" that are 10-20% the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10 datasets. Interestingly, these subnetworks not only match the performance of the original network, but often surpass it, learning faster and achieving higher test accuracy.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
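To ground the two scoring rules described above, here is a small NumPy sketch, an illustrative reimplementation of the ideas rather than code from any of the cited papers: it prunes a weight matrix by plain magnitude and by a Wanda-style score, |W| multiplied by the norm of the corresponding input activations and compared within each output row.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the globally smallest weights by absolute value."""
    k = int(sparsity * w.size)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def wanda_prune(w: np.ndarray, x: np.ndarray, sparsity: float) -> np.ndarray:
    """Wanda-style pruning: score = |W| * ||X||_2 per input feature, pruned per output row."""
    # x: (n_samples, in_features) activations; w: (out_features, in_features) weights.
    feature_norms = np.linalg.norm(x, axis=0)   # ||X||_2 for each input feature
    scores = np.abs(w) * feature_norms          # elementwise importance
    pruned = w.copy()
    k = int(sparsity * w.shape[1])
    for row in range(w.shape[0]):               # compare scores within each output row
        idx = np.argsort(scores[row])[:k]       # lowest-importance weights in this row
        pruned[row, idx] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
x = rng.normal(size=(32, 16))
print((magnitude_prune(w, 0.5) == 0).mean())    # ~50% of weights removed
print((wanda_prune(w, x, 0.5) == 0).mean())     # exactly 50% removed per output row
```

Structured pruning, by contrast, would remove whole rows, columns, channels, or attention heads, so the resulting model is smaller in a way that standard hardware can exploit directly.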
122,Model Quantization,"# Introduction
## Intel® Neural Compressor Library
The Intel® Neural Compressor Library is valuable for leveraging already implemented model pruning techniques. Read [this page](https://github.com/intel/neural-compressor/tree/master/neural_compressor/compression/pruner#pruning-types) to learn more about the pruning methods implemented. Here are a couple of pruning methods specifically for LLMs. The paper “[A Fast Post-Training Pruning Framework for Transformers](https://arxiv.org/pdf/2204.09656.pdf)” presents a fast post-training pruning framework for Transformer models, designed to reduce the high inference cost associated with these models. Unlike previous pruning methods that necessitate model retraining, this framework eliminates the need for retraining, thus reducing both the training cost and complexity of model deployment. The framework uses structured sparsity methods to automatically prune the Transformer model given a resource constraint and a sample dataset. To maintain high accuracy, the authors introduce three new techniques: a lightweight mask search algorithm, mask rearrangement, and mask tuning. ![(a) Prior pruning frameworks require additional training on the entire training set and involve user intervention for hyperparameter tuning. This complicates the pruning process and requires a large amount of time (e.g., ∼30 hours). (b) Our pruning framework does not require retraining. It outputs pruned Transformer models satisfying the FLOPs/latency constraints within considerably less time (e.g., ∼3 minutes), without user intervention. Image from [https://arxiv.org/pdf/2204.09656.pdf](https://arxiv.org/pdf/2204.09656.pdf).](Model%20Pruning%200add2ac45f114f53a9bc26b2af127ead/new_image%202.png) (a) Prior pruning frameworks require additional training on the entire training set and involve user intervention for hyperparameter tuning. This complicates the pruning process and requires a large amount of time (e.g., ∼30 hours). (b) Our pruning framework does not require retraining. It outputs pruned Transformer models satisfying the FLOPs/latency constraints within considerably less time (e.g., ∼3 minutes), without user intervention. Image from [https://arxiv.org/pdf/2204.09656.pdf](https://arxiv.org/pdf/2204.09656.pdf). The paper titled ""[SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot](https://arxiv.org/pdf/2301.00774.pdf)"" presents a pruning method, SparseGPT, that can reduce the size of large-scale generative pretrained transformer (GPT) models by at least 50% in a single step, without retraining and with minimal loss of accuracy. The authors demonstrate that SparseGPT can be applied to the very large models OPT-175B and BLOOM-176B, in less than 4.5 hours. The method can achieve 60% unstructured sparsity, meaning that over 100 billion weights can be disregarded during inference without a significant increase in perplexity. ![Sparsity-vs-perplexity comparison of SparseGPT against magnitude pruning on OPT-175B, when pruning to different uniform per-layer sparsities. Image from [https://arxiv.org/pdf/2301.00774.pdf](https://arxiv.org/pdf/2301.00774.pdf).](Model%20Pruning%200add2ac45f114f53a9bc26b2af127ead/new_image%203.png) Sparsity-vs-perplexity comparison of SparseGPT against magnitude pruning on OPT-175B, when pruning to different uniform per-layer sparsities. Image from [https://arxiv.org/pdf/2301.00774.pdf](https://arxiv.org/pdf/2301.00774.pdf).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
123,Model Quantization,"# Introduction
## Conclusion
In conclusion, model pruning is a powerful technique for reducing the size of deep neural networks without significantly compromising their performance. It is a valuable tool for deploying models in resource-constrained environments, such as mobile devices and embedded systems. Various pruning methods exist, including magnitude-based pruning and structured pruning, each with its unique advantages and trade-offs. The Intel® Neural Compressor Library provides a practical implementation of these techniques, with specific methods designed for Large Language Models. By understanding and applying these pruning techniques, we can create smaller, faster, and more efficient models that maintain high accuracy, thereby improving the feasibility and user experience of deploying deep learning models in real-world applications. --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
124,Understanding Hallucinations and Bias,"# Understanding Transformers
## Introduction
In this lesson, we will dive deeper into Transformers and provide a comprehensive understanding of their various components. We will also cover the network's inner mechanisms. We will look into the seminal paper “Attention is all you need” and examine a diagram of the components of a Transformer. Lastly, we will see how Hugging Face uses these components in the popular `transformers` library.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
125,Understanding Hallucinations and Bias,"# Understanding Transformers
## Attention Is All You Need
The Transformer architecture was proposed as a collaborative effort between Google Brain and the University of Toronto in a paper called “[Attention is All You Need](https://arxiv.org/abs/1706.03762).” It presented an encoder-decoder network powered by attention mechanisms for automatic translation tasks, demonstrating superior performance compared to previous benchmarks ([WMT 2014 translation tasks](https://paperswithcode.com/dataset/wmt-2014)) at a fraction of the cost. As the authors report: “On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.” While Transformers have proven to be highly effective in various tasks such as classification, summarization, and, more recently, language generation, the highly parallelized training they made possible is equally significant. The expansion of the architecture into three distinct categories allowed for greater flexibility and specialization in handling different tasks: - The **encoder-only** category focused on extracting meaningful representations from input data. An example model of this category is [BERT](https://arxiv.org/abs/1810.04805). - The **encoder-decoder** category enabled sequence-to-sequence tasks such as translation and summarization or training multimodal models like caption generators. An example model of this category is [BART](https://arxiv.org/abs/1910.13461). - The **decoder-only** category specializes in generating outputs based on given instructions, as we have in Large Language Models. An example model of this category is [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
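To see the three categories in action, the following minimal sketch uses the Hugging Face `transformers` library mentioned in the introduction of this lesson; it assumes the package is installed and that the public `bert-base-uncased`, `facebook/bart-large-cnn`, and `gpt2` checkpoints can be downloaded.

```python
from transformers import pipeline

# Encoder-only (BERT): builds representations of the input, e.g. to fill in a masked token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The Transformer is a type of neural [MASK].")[0]["token_str"])

# Encoder-decoder (BART): maps an input sequence to an output sequence, e.g. summarization.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
print(summarizer("The Transformer architecture was proposed in the paper Attention Is All You Need. "
                 "It relies on attention mechanisms and enables highly parallel training.",
                 max_length=25, min_length=5)[0]["summary_text"])

# Decoder-only (GPT-2): generates a continuation of the given prompt, token by token.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large Language Models are", max_new_tokens=20)[0]["generated_text"])
```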
126,Understanding Hallucinations and Bias,"# Understanding Transformers
## The Architecture
Now, let's examine the crucial elements of the Transformer model in more detail. ![The overview of Transformer architecture. The left component is called the encoder, which is connected to the decoder using a cross-attention mechanism. (Image taken from the “Attention is all you need” paper)](Understanding%20Transformers%2086c4c813b12f485283a3806ec8e25381/Untitled.png) The overview of Transformer architecture. The left component is called the encoder, which is connected to the decoder using a cross-attention mechanism. (Image taken from the “Attention is all you need” paper) ### Input Embedding The first step is translating the input tokens into embeddings. These embeddings are learned vectors that represent the input tokens and help the model grasp the semantic meaning of the words. The size of the embedding vector varies based on the model's scale and design choices. For instance, OpenAI's GPT-3 uses a 12,288-dimensional embedding vector, while smaller models like BERT-base use a size of 768. ### Positional Encoding Since the Transformer lacks the recurrence of RNNs, which process the input one token at a time, it needs another way to account for the position of words within a sentence. This is accomplished by adding positional encodings to the input embeddings: vectors that encode the location of each word in the sentence. ### Self-Attention Mechanism At the core of the Transformer model lies the self-attention mechanism, which, for each word, calculates a weighted sum of the embeddings of all words in the sentence. These weights are determined by learned “attention” scores between words; terms with higher relevance to one another receive higher “attention” weights. This is implemented using Query, Key, and Value vectors derived from the inputs. Here is a brief description of each vector. - **Query Vector**: It represents the word or token for which the attention weights are being calculated. The Query vector determines which parts of the input sequence should receive more attention. Multiplying word embeddings with the Query vector is like asking, ""What should I pay attention to?"" - **Key Vector**: It represents the set of words or tokens in the input sequence that are compared with the Query. The Key vector helps identify the relevant or essential information in the input sequence. Multiplying word embeddings with the Key vector is like asking, ""What is important to consider?"" - **Value Vector**: It contains the input sequence's associated information or features for each word or token. The Value vector provides the actual data that will be weighted and combined based on the attention weights calculated between the Query and Key. The Value vector answers the question, ""What information do we have?"" Before the advent of the Transformer architecture, the attention mechanism was mainly used to compare two portions of text. For example, in a summarization task, the model could focus on different parts of the input article while generating the summary. The self-attention mechanism enables models to highlight the most important parts of the content for the task at hand. It is helpful in encoder-only and decoder-only models alike for creating a powerful representation of the input: in encoder-only scenarios the text is transformed into embeddings, whereas in decoder-only models the text is generated. 
The effectiveness of the attention mechanism significantly increases when applied in a multi-head setting. In this configuration, multiple attention components process the same information, with each head learning to focus on distinct aspects of the text, such as verbs, nouns, numbers, and more, throughout the training process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
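As an illustration of the idea, here is a minimal, single-head version of scaled dot-product attention written for this lesson. It is a simplified sketch, not the exact implementation used inside models such as OPT or BERT.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value):
    # query, key, value: tensors of shape (batch, seq_len, d_model)
    d_k = query.size(-1)
    # Similarity of every Query with every Key, scaled to stabilize training
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)  # attention weights sum to 1 for each token
    return weights @ value               # weighted sum of the Value vectors

# Toy example: 1 sentence, 4 tokens, embedding size 8
x = torch.randn(1, 4, 8)
W_q, W_k, W_v = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
out = scaled_dot_product_attention(W_q(x), W_k(x), W_v(x))
print(out.shape)  # torch.Size([1, 4, 8])
```

In the multi-head setting described above, several independent sets of `W_q`, `W_k`, and `W_v` projections run in parallel, and their outputs are concatenated before a final projection.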
127,Understanding Hallucinations and Bias,"# Understanding Transformers
## The Architecture In Action
This section will demonstrate the functioning of the above components from a pre-trained large language model, providing an insight into their inner workings using the `transformers` Hugging Face library. To begin, we load the model and tokenizer using `AutoModelForCausalLM` and `AutoTokenizer`, respectively. Then, we proceed to tokenize a sample phrase, which will serve as our input in the following steps. ```python from transformers import AutoModelForCausalLM, AutoTokenizer OPT = AutoModelForCausalLM.from_pretrained(""facebook/opt-1.3b"", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"") inp = ""The quick brown fox jumps over the lazy dog"" inp_tokenized = tokenizer(inp, return_tensors=""pt"") print(inp_tokenized['input_ids'].size()) print(inp_tokenized) ``` ```python torch.Size([1, 10]) {'input_ids': tensor([[ 2, 133, 2119, 6219, 23602, 13855, 81, 5, 22414, 2335]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} ``` We load Facebook's Open Pre-trained Transformer model with 1.3B parameters (`facebook/opt-1.3b`) in the 8-bit format, a memory-saving approach to efficiently utilize GPU resources. The `tokenizer` object loads the required vocabulary to interact with the model and will be used to convert the sample input (`inp` variable) to the token IDs and attention mask. Let’s look at the model’s architecture by accessing its `.model` method. ```python print(OPT.model) ``` ```python OPTModel( (decoder): OPTDecoder( (embed_tokens): Embedding(50272, 2048, padding_idx=1) (embed_positions): OPTLearnedPositionalEmbedding(2050, 2048) (final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (layers): ModuleList( (0-23): 24 x OPTDecoderLayer( (self_attn): OPTAttention( (k_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (v_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (q_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (out_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) ) (activation_fn): ReLU() (self_attn_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (fc1): Linear8bitLt(in_features=2048, out_features=8192, bias=True) (fc2): Linear8bitLt(in_features=8192, out_features=2048, bias=True) (final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) ) ) ) ) ``` The model is decoder-only, a common characteristic among transformer-based language models. Consequently, we must utilize the decoder key to access its inner components. Furthermore, the examination of the `layers` key reveals that the decoder component is composed of 24 stacked layers with the same architecture. To begin, we look at the embedding layer. 
```python embedded_input = OPT.model.decoder.embed_tokens(inp_tokenized['input_ids']) print(""Layer:\t"", OPT.model.decoder.embed_tokens) print(""Size:\t"", embedded_input.size()) print(""Output:\t"", embedded_input) ``` ```python Layer: Embedding(50272, 2048, padding_idx=1) Size: torch.Size([1, 10, 2048]) Output: tensor([[[-0.0407, 0.0519, 0.0574, ..., -0.0263, -0.0355, -0.0260], [-0.0371, 0.0220, -0.0096, ..., 0.0265, -0.0166, -0.0030], [-0.0455, -0.0236, -0.0121, ..., 0.0043, -0.0166, 0.0193], ..., [ 0.0007, 0.0267, 0.0257, ..., 0.0622, 0.0421, 0.0279], [-0.0126, 0.0347, -0.0352, ..., -0.0393, -0.0396, -0.0102], [-0.0115, 0.0319, 0.0274, ..., -0.0472, -0.0059, 0.0341]]], device='cuda:0', dtype=torch.float16, grad_fn=) ``` The embedding layer is accessible through the `.embed_tokens` method under the decoder component and passes our tokenized inputs to the layer. As you can see, the embedding layer will transform a list of IDs with `[1, 10]` size to `[1, 10, 2048]`. This representation will then be used and passed through the decoder layers. Subsequently, the positional encoding component utilizes the attention masks to generate a vector that imparts a sense of positioning within the model. The following code uses the `.embed_positions` method from the decoder to generate the positional embeddings. As seen, the layer generates a distinct vector for each position, which is added to the output of the embedding layer. This process introduces supplementary positional information to the model. ```python embed_pos_input = OPT.model.decoder.embed_positions(inp_tokenized['attention_mask']) print(""Layer:\t"", OPT.model.decoder.embed_positions) print(""Size:\t"", embed_pos_input.size()) print(""Output:\t"", embed_pos_input) ``` ```python Layer: OPTLearnedPositionalEmbedding(2050, 2048) Size: torch.Size([1, 10, 2048]) Output: tensor([[[-8.1406e-03, -2.6221e-01, 6.0768e-03, ..., 1.7273e-02, -5.0621e-03, -1.6220e-02], [-8.0585e-05, 2.5000e-01, -1.6632e-02, ..., -1.5419e-02, -1.7838e-02, 2.4948e-02], [-9.9411e-03, -1.4978e-01, 1.7557e-03, ..., 3.7117e-03, -1.6434e-02, -9.9087e-04], ..., [ 3.6979e-04, -7.7454e-02, 1.2955e-02, ..., 3.9330e-03, -1.1642e-02, 7.8506e-03], [-2.6779e-03, -2.2446e-02, -1.6754e-02, ..., -1.3142e-03, -7.8583e-03, 2.0096e-02], [-8.6288e-03, 1.4233e-01, -1.9012e-02, ..., -1.8463e-02, -9.8572e-03, 8.7662e-03]]], device='cuda:0', dtype=torch.float16, grad_fn=) ``` Lastly, the self-attention component! We use the first layer’s self-attention component by indexing through the layers and accessing the `.self_attn` method. ```python embed_position_input = embedded_input + embed_pos_input hidden_states, _, _ = OPT.model.decoder.layers[0].self_attn(embed_position_input) print(""Layer:\t"", OPT.model.decoder.layers[0].self_attn) print(""Size:\t"", hidden_states.size()) print(""Output:\t"", hidden_states) ``` ```python Layer:",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
128,Understanding Hallucinations and Bias,"# Understanding Transformers
## The Architecture In Action
OPTAttention( (k_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (v_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (q_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) (out_proj): Linear8bitLt(in_features=2048, out_features=2048, bias=True) ) Size: torch.Size([1, 10, 2048]) Output: tensor([[[-0.0119, -0.0110, 0.0056, ..., 0.0094, 0.0013, 0.0093], [-0.0119, -0.0110, 0.0056, ..., 0.0095, 0.0013, 0.0093], [-0.0119, -0.0110, 0.0056, ..., 0.0095, 0.0013, 0.0093], ..., [-0.0119, -0.0110, 0.0056, ..., 0.0095, 0.0013, 0.0093], [-0.0119, -0.0110, 0.0056, ..., 0.0095, 0.0013, 0.0093], [-0.0119, -0.0110, 0.0056, ..., 0.0095, 0.0013, 0.0093]]], device='cuda:0', dtype=torch.float16, grad_fn=) ``` The self-attention component comprises the query, key, and value projection layers mentioned earlier, followed by a final output projection. It takes the sum of the embedded input and the positional encoding vector as its input. In a real-world forward pass, the model also provides the attention mask to this component, enabling it to identify which portions of the input should be ignored (the mask is omitted from the sample code for simplicity). The rest of each decoder layer applies a non-linearity (ReLU in OPT), feed-forward projections, and layer normalization, as seen in the printed architecture. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
129,Understanding Hallucinations and Bias,"# Understanding Transformers
## Conclusion
This lesson provides an overview of the transformer architecture and dives deeper into the model's structure by loading a pre-trained model and extracting its essential components. We also look into what occurs within an LLM under the hood; in particular, the attention mechanism serves as the core component of the model. In the next lesson, we will cover the diverse architectures of the transformer: encoder-decoder, decoder-only (like the GPTs), and encoder-only (like BERT). In this [Notebook](https://colab.research.google.com/drive/1FS9dRZh_ZemS4uQpKu9k1W_bDtP0DMoh?usp=sharing), you can find the code for this lesson.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953681-understanding-hallucinations-and-bias
130,Techniques for Fine-Tuning LLMs,"# Fine-Tuning LLMs Module
## Fine-Tuning LLMs
Goals: Equip students with knowledge and practical skills in fine-tuning techniques accompanied by code examples. Discuss the approach to fine-tuning utilizing CPUs. The section centers on fine-tuning LLMs, addressing their various aspects and methodologies. As the module progresses, the focus shifts to specialized instruction tuning techniques, namely SFT and LoRA. It will examine domain-specific applications, ensuring a holistic understanding of fine-tuning techniques and their real-world implications. - **Techniques for Finetuning LLMs:** The lesson highlights the challenges, particularly the resource intensity of traditional approaches. We will introduce instruction tuning methods like SFT, RLHF, and LoRA. - **Deep Dive into LoRA and SFT**: This lesson offers an in-depth exploration of LoRA and SFT techniques. We will uncover the mechanics and underlying principles of these methods. - **Finetuning using LoRA and SFT**: This lesson guides you through a practical application of LoRA and SFT to fine-tune an LLM to follow instructions, using data from the “LIMA: Less Is More for Alignment” paper. - **Finetuning using SFT for financial sentiment**. This lesson navigates the nuances of leveraging SFT to optimize LLMs, specifically tailored to capture and interpret sentiments within the financial domain. - **Fine-Tuning using Cohere for Medical Data.** In this lesson, we will adopt an entirely different method for fine-tuning a large language model, leveraging a service called [Cohere](https://cohere.com/). This lesson explores the procedure of fine-tuning a customized generative model using medical texts to extract information. The task, known as [Named Entity Recognition (NER)](https://en.wikipedia.org/wiki/Named-entity_recognition), empowers models to identify various entities (such as names, locations, dates, etc.) within a text. The lessons equip students with both the knowledge and the practical skills needed for fine-tuning LLMs effectively. As they advance, they carry with them the capability to deploy and optimize LLMs for various domains.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
131,Deploying LLMs Module,"# Deploying LLMs Module
## Deploying LLMs
Goals: Familiarize students with efficient LLM deployment techniques, emphasizing quantization and pruning. Offer hands-on experiences with deployments on platforms like GCP and Intel® CPUs. This module dives into deploying Large Language Models. Model quantization and pruning are central to these strategies, each serving as an effective tool to optimize model performance without compromising efficiency. With a foundation in research articles and real-world applications, this module also introduces participants to deployment on cloud platforms. - **Challenges of LLM deployment**: The lesson covers challenges during LLM deployment such as the sheer size of models, associated costs, and potential latency issues. We also provide a survey on optimizations, rooted in a research article on Transformer inference and a deepened perspective on potential solutions in addressing these challenges. - **Model Quantization**: This lesson centers on Quantization, highlighting its role in streamlining LLM deployments. We research the balance between model performance and efficiency by understanding its usefulness and various techniques. - **Model Pruning**: This module discusses model pruning, showcasing its place in LLM optimization. We will introduce various pruning techniques backed by recent research. - **Deploying an LLM on a Cloud CPU**: This module uncovers the advantages, considerations, and challenges of deploying large language models on cloud-based CPUs. This lesson needs a server instance equipped with an Intel® Xeon® processor. By the end of this module, students have gained a robust understanding of the intricacies involved in LLM deployment. The exploration of model quantization, pruning, and practical deployment strategies has provided them with the tools necessary to navigate real-world challenges. Moving beyond foundational concepts, the next section offers a deep dive into the advanced topics and future directions in the realm of LLMs. After navigating the diverse terrain of Transformers and LLMs, participants now deeply understand significant architectures like GPT and BERT. The sessions shed light on model evaluation metrics, advanced control techniques for optimal outputs, and the roles of pretraining and finetuning. The upcoming module dives into the complexities of deciding when to train an LLM from scratch, the operational necessities of LLMs, and the sequential steps crucial for the training process. --- *Intel, the Intel logo and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959823-deploying-llms-module
132,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Introduction
In this lesson, we will study the challenges of deploying Large Language Models, with a focus on the importance of latency and memory. We also explore optimization techniques with concepts like quantization and sparsity and how they can be applied using the Hugging Face Optimum and Intel® Neural Compressor libraries. We also discuss the role of Intel's® optimization technologies in efficiently running LLMs. This lesson will provide a deeper understanding of how to optimize LLMs for better performance and user experience.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
133,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Importance of Latency and Memory
Latency is the delay before a transfer of data begins following an instruction. It is a crucial factor in LLM applications. High latency in real-time or near-real-time applications can lead to a poor user experience. For instance, in a conversational AI application, a delay in response can disrupt the natural flow of conversation, leading to user dissatisfaction. Therefore, reducing latency is a critical aspect of LLM deployment. Consider the [average human reading speed](https://www.sciencedirect.com/science/article/abs/pii/S0749596X19300786) of ~250 words per minute, which translates to ~312 tokens per minute, or about 5 tokens per second; keeping pace with a reader therefore requires a latency of roughly 200ms per token. Usually, acceptable latency for near-real-time LLM applications is between 100ms and 200ms per token. Transformers can be computationally intensive and memory-demanding due to their complex architecture and large size. However, several optimization techniques can be employed to enhance their efficiency without significantly compromising their performance.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
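As a quick sanity check of these numbers, the arithmetic is spelled out below; the ~1.25 tokens-per-word ratio is the assumption implied by the 250-words/312-tokens figures above.

```python
words_per_minute = 250              # average human reading speed
tokens_per_word = 312 / 250         # ~1.25 tokens per word (assumed ratio)
tokens_per_second = words_per_minute * tokens_per_word / 60
latency_ms_per_token = 1000 / tokens_per_second
print(f'{tokens_per_second:.1f} tokens/s -> {latency_ms_per_token:.0f} ms per token')
# 5.2 tokens/s -> 192 ms per token, consistent with the 100ms-200ms target
```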
134,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Quantization
[Quantization](https://arxiv.org/pdf/2103.13630.pdf) is a technique used for compressing neural network models, including Transformers, by lowering the precision of model parameters and/or activations. This method can significantly reduce memory usage. It leverages low-bit precision arithmetic and decreases the size, latency, and energy consumption. However, it's important to strike a balance between performance gains through reduced precision and maintaining model accuracy. Techniques such as mixed-precision quantization, which assign higher bit precision to more sensitive layers, can mitigate accuracy degradation. We’ll learn different quantization methods later in the course.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
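As a first taste before the dedicated quantization lesson, the sketch below applies PyTorch's built-in post-training dynamic quantization to a small model and compares checkpoint sizes. This is only an illustration: the `facebook/opt-125m` checkpoint is chosen for convenience, and dynamic int8 quantization of Linear layers is just one of several techniques we will cover later.

```python
import os
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m')

# Dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time (CPU execution)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path='tmp.pt'):
    # Rough on-disk size of the model's parameters
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f'fp32 size: {size_mb(model):.0f} MB')
print(f'int8 size: {size_mb(quantized):.0f} MB')
```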
135,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Sparsity
[Sparsity](https://arxiv.org/abs/2102.00554), usually achieved by [pruning](https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)), is another technique for reducing the computational cost of LLMs by eliminating redundant or less important weights and activations. This method can significantly decrease off-chip memory consumption, the corresponding memory traffic, energy consumption, and latency. Pruning can be broadly divided into two types: weight pruning and activation pruning. - **Weight pruning** can be further categorized into unstructured pruning and structured pruning. Unstructured pruning allows any sparsity pattern, while structured pruning imposes an additional constraint on the sparsity pattern. Although structured pruning can provide benefits in terms of memory, energy consumption, and latency without additional hardware support, it is known to achieve a lower compression rate than unstructured pruning. - On the other hand, **activation pruning** prunes redundant activations during inference, which can be especially effective for Transformer models. However, this requires runtime support to dynamically detect and zero out unimportant activations. We’ll study different pruning methods later in the course; a minimal example of unstructured and structured weight pruning is sketched below.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
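Here is the small example mentioned above, written for this lesson with PyTorch's built-in `torch.nn.utils.prune` utilities (rather than the libraries introduced next), showing the difference between unstructured and structured weight pruning on single linear layers.

```python
import torch
import torch.nn.utils.prune as prune

layer_a = torch.nn.Linear(2048, 2048)
layer_b = torch.nn.Linear(2048, 2048)

# Unstructured pruning: zero out the 30% of individual weights with the smallest magnitude
prune.l1_unstructured(layer_a, name='weight', amount=0.3)

# Structured pruning: remove 20% of entire output rows (neurons), ranked by their L2 norm
prune.ln_structured(layer_b, name='weight', amount=0.2, n=2, dim=0)

# Make the masks permanent and inspect the resulting sparsity
for layer in (layer_a, layer_b):
    prune.remove(layer, 'weight')
    sparsity = (layer.weight == 0).float().mean().item()
    print(f'sparsity: {sparsity:.1%}')
```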
136,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Utilizing Optimum and Intel® Neural Compressor Libraries
The [Hugging Face Optimum](https://github.com/huggingface/optimum) and the [Intel® Neural Compressor](https://github.com/intel/neural-compressor/tree/master) libraries provide a suite of tools helpful in optimizing models for inference, especially for Intel® architectures. - The Hugging Face Optimum library serves as an interface between the Hugging Face [transformers](https://github.com/huggingface/transformers) and [diffuser](https://github.com/huggingface/diffusers) libraries and the various tools provided by Intel®. - The Intel® Neural Compressor is an open-source library that facilitates the application of popular compression techniques such as quantization, pruning, and [knowledge distillation](https://github.com/intel/neural-compressor/blob/master/docs/source/distillation.md). It supports automatic accuracy-driven tuning strategies, enabling users to generate quantized models easily. This library allows users to apply static, dynamic, and aware-training quantization approaches while maintaining predefined accuracy criteria. It also supports different weight pruning techniques, allowing for the creation of pruned models that meet a predefined sparsity target. These libraries provide a practical application of the quantization and sparsity techniques, and their usage will be of great use in optimizing the deployment of LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
137,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Intel® Optimization Technologies for LLMs
Intel's® optimization technologies play a significant role in running LLMs efficiently on CPUs. The [4th Gen Intel® Xeon® Scalable processors](https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/platform.html) are equipped with AI-infused acceleration known as Intel® Advanced Matrix Extensions (Intel® AMX). These processors have built-in BF16 and INT8 GEMM (general matrix-matrix multiplication) accelerators in every core, which significantly accelerate deep learning training and inference workloads. The [Intel® Xeon® Processor Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html) offers up to 128GB of high-bandwidth memory, which is particularly beneficial for LLMs, as these models are often memory-bandwidth bound. By (1) running model optimizations like quantization and pruning and (2) leveraging Intel® hardware acceleration technologies, it’s possible to achieve good latency for LLMs as well. Take a look at [this page](https://github.com/intel/neural-compressor/blob/master/docs/source/validated_model_list.md#pytorch-models-with-torch-201cpu-in-ptq-mode) to see the performance improvements (better throughput with a smaller memory footprint) of several optimized models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
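As an illustration of how these hardware features are used in practice, the sketch below runs BF16 inference with the Intel® Extension for PyTorch. It assumes the `intel_extension_for_pytorch` package is installed and the CPU supports AMX; the actual speedup depends on the hardware.

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('facebook/opt-1.3b')
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-1.3b')

# Optimize the model for Intel CPUs and run inference in BF16,
# which Intel AMX accelerates on 4th Gen Xeon processors
model = ipex.optimize(model.eval(), dtype=torch.bfloat16)

inputs = tokenizer('Deploying LLMs on CPUs', return_tensors='pt')
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```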
138,Challenges of LLM Deployment,"# Challenges of LLM Deployment
## Conclusion
In this lesson, we have explored the challenges of deploying Large Language Models, with a particular focus on latency and memory. We also discussed optimization techniques like quantization and sparsity, which can significantly reduce LLMs' computational cost and memory usage. We introduced the Hugging Face Optimum and Intel® Neural Compressor libraries, which provide practical tools for applying these techniques. Furthermore, we have highlighted the role of Intel's® optimization technologies, such as the 4th Gen Intel® Xeon Scalable processors and the Intel® Xeon CPU Max Series, in efficiently running neural networks. By understanding and applying these concepts, we can optimize the deployment of LLMs, achieving better performance and user experience. --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959837-challenges-of-llm-deployment
139,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Introduction
The fine-tuning process has consistently proven to be a practical approach for enhancing the model's capabilities in new domains. Therefore, it is a valuable approach to adapt large language models while using a reasonable amount of resources. As mentioned earlier, the fine-tuning process builds upon the model's existing general knowledge, which means it doesn't need to learn everything from scratch. Consequently, it can grasp patterns from a relatively small number of samples and undergo a relatively short training process. In this lesson, we’ll see how to do SFT on an LLM using LoRA. We’ll use the dataset from the ""[LIMA: Less Is More for Alignment](https://arxiv.org/pdf/2305.11206.pdf)"" paper. According to their argument, a high-quality, hand-picked, small dataset with a thousand samples can replace the RLHF process, effectively enabling the model to be instructively fine-tuned. Their approach yielded competitive results compared to other language models, showcasing a more efficient fine-tuning process. However, it might not exhibit the same level of accuracy in domain-specific tasks, and it requires hand-picked data points. The [TRL library](https://github.com/huggingface/trl) has some classes for Supervised Fine-Tuning (SFT), making it accessible and straightforward. The classes permit the integration of LoRA configurations, facilitating its seamless adoption. It is worth highlighting that this process also serves as the first step for Reinforcement Learning with Human Feedback (RLHF), a topic we will explore in detail later in the course.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
140,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Spinning Up a Virtual Machine for Finetuning on GCP Compute Engine
Cloud GPU availability today is very scarce, as GPUs are in heavy demand for deep learning applications. Few people know that CPUs can actually be used to fine-tune LLMs through various optimizations, and that’s what we’ll be doing in these lessons when doing SFT. Let’s log in to our Google Cloud Platform account and create a [Compute Engine](https://cloud.google.com/compute) instance (see the “Course Introduction” lesson for instructions). You can choose between different [machine types](https://cloud.google.com/compute/docs/cpu-platforms). In this lesson, we trained the model on the latest CPU generation, 4th Generation Intel® Xeon® Scalable Processors (formerly known as Intel® Sapphire Rapids). This architecture features an integrated accelerator designed to enhance the performance of training deep learning models. Intel® Advanced Matrix Extensions (AMX) enable training models with BF16 precision, allowing for half-precision training on the latest Xeon® Scalable processors. Additionally, it introduces an INT8 data type for the inference process, leading to a substantial acceleration in processing speed. Reports suggest up to a tenfold increase in performance when utilizing PyTorch for both training and inference. Follow the instructions in the course introduction to spin up a VM with Compute Engine with high-end Intel® CPUs. Once you have your virtual machine up, you can SSH into it. Using CPUs for fine-tuning or inference is an excellent choice, as renting GPU hardware is considerably more expensive. It is worth mentioning that a minimum of 32GB of RAM is necessary to load the model and run the training process for this experiment. If there is an out-of-memory error, reduce arguments such as `batch_size` or `seq_length`. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
141,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Load the Dataset
The quality of a model is directly tied to the quality of the data it is trained on! The best approach is to begin the process with a dataset. Whether it is an open-source dataset or a custom one manually, planning and considering the dataset in advance is essential. In this lesson, we will utilize the dataset released with the LIMA research. It is publicly available with a non-commercial use license. The powerful feature of Deep Lake format enables seamless streaming of the datasets. There is no need to download and load the dataset into memory. The hub provides diverse datasets, including the LIMA dataset presented in the ""LIMA: Less Is More for Alignment"" paper. The Deep Lake Web UI not only aids in dataset exploration but also facilitates dataset visualization using the embeddings field, taking care of clustering the dataset and map it in 3D space. (We used Cohere embedding API to generate in this example) The enlarged image below illustrates one such cluster where data points in Portuguese language related to coding are positioned closely to each other. Note that Deep Lake Visualization Engine offers you the ability to pick the clustering algorithm. ![Deep Lake Visualization Engine 3D visualization feature.](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/Screenshot_2023-10-05_at_9.41.15_AM.png) Deep Lake Visualization Engine 3D visualization feature. The code below will create a loader object for the training and test sets. ```python import deeplake # Connect to the training and testing datasets ds = deeplake.load('hub://genai360/GAIR-lima-train-set') ds_test = deeplake.load('hub://genai360/GAIR-lima-test-set') print(ds) ``` ```python Dataset(path='hub://genai360/GAIR-lima-train-set', read_only=True, tensors=['answer', 'question', 'source']) ``` We can then utilize the `ConstantLengthDataset` class to bundle a number of smaller samples together, enhancing the efficiency of the training process. Furthermore, it also handles dataset formatting by accepting a template function and tokenizing the texts. To begin, we load the pre-trained tokenizer object for the [Open Pre-trained Transformer (OPT)](https://arxiv.org/abs/2205.01068) model using the Transformers library. We will load the model later. We are using OPT for convenience because it’s an open model with a relatively “small” amount of parameters. The same code in this lesson can be run in another model too, for example, using `meta-llama/Llama-2-7b-chat-hf` for [LLaMa 2](https://huggingface.co./meta-llama/Llama-2-7b-chat-hf). ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"") ``` Moreover, we need to define the formatting function called `prepare_sample_text`, which takes a row of data in Deep Lake format as input and formats it to begin with a question followed by the answer that is separated by two newlines. This formatting aids the model in learning the template and understanding that if a prompt starts with the `question` keyword, the most likely response would be to complete it with an answer. ```python def prepare_sample_text(example): """"""Prepare the text from a sample of the dataset."""""" text = f""Question: {example['question'].text()}\n\nAnswer: {example['answer'].text()}"" return text ``` Now, with all the components in place, we can initialize the dataset, which can be fed to the model for fine-tuning. 
We call the `ConstantLengthDataset` class using the combination of a tokenizer, deep lake dataset object, and formatting function. The additional arguments, such as `infinite=True` ensure that the iterator will restart when all data points have been used, but there are still training steps remaining. Alongside `seq_length`, which determines the maximum sequence length, it must be completed according to the model's configuration. In this scenario, it is possible to raise it to 2048, although we opted for a smaller value to manage memory usage better. Select a higher number if the dataset primarily comprises shorter texts. ```python from trl.trainer import ConstantLengthDataset train_dataset = ConstantLengthDataset( tokenizer, ds, formatting_func=prepare_sample_text, infinite=True, seq_length=1024 ) eval_dataset = ConstantLengthDataset( tokenizer, ds_test, formatting_func=prepare_sample_text, seq_length=1024 ) # Show one sample from train set iterator = iter(train_dataset) sample =",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
142,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Load the Dataset
next(iterator) print(sample) ``` ```python {'input_ids': tensor([ 2, 45641, 35, ..., 48443, 2517, 742]), 'labels': tensor([ 2, 45641, 35, ..., 48443, 2517, 742])} ``` As evidenced by the output above, the `ConstantLengthDataset` class takes care of all the necessary steps to prepare our dataset. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
143,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Initialize the Model and Trainer
As mentioned previously, we will be using the [OPT model](https://huggingface.co./facebook/opt-1.3b) with 1.3 billion parameters in this lesson, which has the `facebook/opt-1.3b` model id on the Hugging Face Hub. The LoRA approach is employed for fine-tuning, which involves introducing new parameters to the network while keeping the base model unchanged during the tuning process. This approach has proven to be highly efficient, enabling fine-tuning of the model by training less than 1% of the total parameters. (For more details, refer to the following [post](https://medium.com/@nlpiation/pre-trained-transformers-gpt-3-2-but-1000x-smaller-cafe4269a96c).) With the TRL library, we can seamlessly add additional parameters to the model by defining a number of configurations. The variable `r` represents the dimension of matrices, where lower values lead to fewer trainable parameters. `lora_alpha` serves as the scaling factor, while `bias` determines which bias parameters the model should train, with options of `none`, `all`, and `lora_only`. The remaining parameters are self-explanatory. ```python from peft import LoraConfig lora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias=""none"", task_type=""CAUSAL_LM"", ) ``` Next, we need to configure the `TrainingArguments`, which are essential for the training process. We have already covered some of the parameters in the training lesson, but note that the learning rate is higher when combined with higher weight decay, increasing parameter updates during fine-tuning. Furthermore, it is highly recommended to employ the argument `bf16=True` in order to minimize memory usage during the model's fine-tuning process. The utilization of the Intel® Xeon® 4s CPU empowers us to apply this optimization technique. This involves converting the numbers to a 16-bit precision, effectively reducing the RAM demand during fine-tuning. We will dive into other quantization methods as we progress through the course. We are also using a service called [Weights and Biases](https://wandb.ai/site), which is an excellent tool for training and fine-tuning any machine-learning model. They offer monitoring tools to record every facet of the process and various solutions for [prompt engineering](https://wandb.ai/site/prompts) and [hyperparameter sweep](https://docs.wandb.ai/guides/sweeps), among other functionalities. Simply installing the package and utilizing the `wandb` parameter for the `report_to` argument is all that's required. This will handle the logging process seamlessly. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir=""./OPT-fine_tuned-LIMA-CPU"", dataloader_drop_last=True, evaluation_strategy=""epoch"", save_strategy=""epoch"", num_train_epochs=10, logging_steps=5, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=1e-4, lr_scheduler_type=""cosine"", warmup_steps=10, gradient_accumulation_steps=1, bf16=True, weight_decay=0.05, run_name=""OPT-fine_tuned-LIMA-CPU"", report_to=""wandb"", ) ``` The final component we need is the pre-trained model. We will use the `facebook/opt-1.3b` key to load the model using the Transformers library. ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained(""facebook/opt-1.3b"", torch_dtype=torch.bfloat16) ``` The subsequent code block will loop through the model parameters and revert the data type of specific layers (like LayerNorm and final language modeling head) to a 32-bit format. 
It will improve the fine-tuning stability. ```python import torch.nn as nn for param in model.parameters(): param.requires_grad = False # freeze the model - train adapters later if param.ndim == 1: # cast the small parameters (e.g. layernorm) to fp32 for stability param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() # reduce number of stored activations model.enable_input_require_grads() class CastOutputToFloat(nn.Sequential): def forward(self, x): return super().forward(x).to(torch.float32) model.lm_head = CastOutputToFloat(model.lm_head) ``` Finally, we can use the `SFTTrainer` class to tie all the components together. It accepts the model, training arguments, training dataset, and LoRA method configurations to construct the trainer object. The `packing` argument indicates that we used the `ConstantLengthDataset` class earlier to pack samples together. ```python from trl import SFTTrainer trainer = SFTTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, peft_config=lora_config, packing=True, ) ``` So, why did we use LoRA? Let's observe its impact in action by implementing a simple function that calculates the number of available parameters in the model and compares it with the trainable parameters. As a reminder, the trainable parameters refer to the ones that LoRA",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
144,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Initialize the Model and Trainer
added to the base model. ```python def print_trainable_parameters(model): """""" Prints the number of trainable parameters in the model. """""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f""trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"" ) print_trainable_parameters(trainer.model) ``` ```python trainable params: 3145728 || all params: 1318903808 || trainable%: 0.23851079820371554 ``` As observed above, the number of trainable parameters is only about 3 million. This amounts to roughly 0.24% of the total number of parameters we would have had to update without LoRA, which significantly reduces the memory requirement. Now, it should be clear why using this approach for fine-tuning is advantageous. The trainer object is fully prepared to initiate the fine-tuning loop by calling the `.train()` method, as shown below. ```python print(""Training..."") trainer.train() ``` [OPT-fine_tuned-LIMA-CPU.zip](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/OPT-fine_tuned-LIMA-CPU.zip)",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
145,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Merging LoRA and OPT
The final step involves merging the base model with the trained LoRA layers to create a standalone model. This can be achieved by loading the desired checkpoint from SFTTrainer, followed by the base model itself using the `PeftModel` class. Begin by loading the OPT-1.3B base model if using a fresh environment. ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained( ""facebook/opt-1.3b"", return_dict=True, torch_dtype=torch.bfloat16 ) ``` The `PeftModel` class can merge the base model with the LoRA layers from the checkpoint specified using the `.from_pretrained()` method. We should then put the model in the evaluation mode. Upon execution, it will print out the model's architecture to observe the presence of the LoRA layers. ```python from peft import PeftModel # Load the Lora model model = PeftModel.from_pretrained(model, ""./OPT-fine_tuned-LIMA-CPU//"") model.eval() ``` ```python PeftModelForCausalLM( (base_model): LoraModel( (model): OPTForCausalLM( (model): OPTModel( (decoder): OPTDecoder( (embed_tokens): Embedding(50272, 2048, padding_idx=1) (embed_positions): OPTLearnedPositionalEmbedding(2050, 2048) (final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (layers): ModuleList( (0-23): 24 x OPTDecoderLayer( (self_attn): OPTAttention( (k_proj): Linear(in_features=2048, out_features=2048, bias=True) (v_proj): Linear( in_features=2048, out_features=2048, bias=True (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=2048, out_features=16, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=16, out_features=2048, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() ) (q_proj): Linear( in_features=2048, out_features=2048, bias=True (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=2048, out_features=16, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=16, out_features=2048, bias=False) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() ) (out_proj): Linear(in_features=2048, out_features=2048, bias=True) ) (activation_fn): ReLU() (self_attn_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=2048, out_features=8192, bias=True) (fc2): Linear(in_features=8192, out_features=2048, bias=True) (final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True) ) ) ) ) (lm_head): Linear(in_features=2048, out_features=50272, bias=False) ) ) ) ``` Lastly, we can use the PEFT model’s `.merge_and_unload()` method to combine the base model and LoRA layers as a standalone object. It is possible to save the weights using the `.save_pretrained()` method for later usage. ```python model = model.merge_and_unload() model.save_pretrained(""./OPT-fine_tuned-LIMA/merged"") ``` ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
146,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Inference
We can evaluate the fine-tuned model’s outputs by employing various prompts. The code below demonstrates how we can utilize Huggingface's `.generate()` method to interact with models effortlessly. Numerous arguments and decoding strategies exist that can enhance text generation quality; however, these are beyond the scope of this course. You can explore these techniques further in an informative [blog post](https://huggingface.co./blog/how-to-generate) by Huggingface. ```python inputs = tokenizer(""Question: Write a recipe with chicken.\n\n Answer: "", return_tensors=""pt"") generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=256, num_beams=1, do_sample=True, repetition_penalty=1.5, length_penalty=2.) print( tokenizer.decode(generation_output['sequences'][0]) ) ``` ``` Question: Write a recipe with chicken.\n\n Answer: \n* Chicken and rice is one of the most popular meals in China, especially during Chinese New Year celebrations when it's served as an appetizer or main course for dinner parties (or just to eat by yourself). It can be made from scratch using fresh ingredients like meatballs/chicken breasts if you have them on hand but otherwise use frozen ones that are already cooked so they don't need any additional cooking time before serving. You could also substitute some vegetables instead such as broccoli florets which would make this dish even more delicious! If your family doesn’t know how to cook well then I suggest making these recipes ahead of time because once done all you really do is reheat until hot again :)\n## Make homemade marinade\n1) Combine 1 tablespoon soy sauce, 2 tablespoons sesame oil, 3 teaspoons sugar, 4 cloves garlic minced into small pieces, 6-8 green onions chopped finely, 5 cups water, salt & pepper to taste, about 8 ounces boneless skinless chicken breast fillets cut up fine enough not to stick together while being mixed thoroughly - no bones needed here since there will only ever be two servings per person), ½ cup cornstarch dissolved in ¼... ``` To carry out further experimentation with the `OPT-fine_tuned-LIMA` model, we presented an identical prompt to both the vanilla base model and the fine-tuned version. This experiment aims to measure the degree to which each of these models can follow instructions. Below is a list of prompts. You can toggle the outputs by clicking on the right arrow icon. - 1. Write a recipe with chicken. ![Screenshot 2023-08-16 at 7.11.32 PM.png](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/Screenshot_2023-08-16_at_7.11.32_PM.png) - 2. Create a marketing plan for a coffee shop. ![Screenshot 2023-08-16 at 7.09.51 PM.png](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/Screenshot_2023-08-16_at_7.09.51_PM.png) - 3. Why does it rain? Explain your answer. ![Screenshot 2023-08-16 at 7.06.03 PM.png](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/Screenshot_2023-08-16_at_7.06.03_PM.png) - 4. What’s the Italian translation of the word ‘house’? ![Screenshot 2023-08-16 at 7.02.03 PM.png](Fine-Tuning%20using%20LoRA%20and%20SFT%207d646ca7e86c435b8827bfdb9d060c0e/Screenshot_2023-08-16_at_7.02.03_PM.png) The outcomes highlight the constraints and capabilities of both models. However, it is evident that the fine-tuned model learned to follow instructions better compared to the vanilla-based model. 
This effect would undoubtedly become more pronounced with the availability of resources to conduct the fine-tuning process for a large model.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
147,Fine-Tuning using LoRA and SFT,"# Fine-Tuning using LoRA and SFT
## Conclusion
During this lesson, we experimented with the fine-tuning process of Large Language models, utilizing the LoRA technique to achieve an efficient tuning process. During our exploration, we discovered the importance of the process. It can serve as a starting point for RLHF or be used for instruction tuning. In the upcoming lessons, we will experiment with the fine-tuning process for creating domain-specific models. --- >> [Notebook](https://colab.research.google.com/drive/1v7gtuE2CIosiF3nt4WSV9BGiQVGUWJRt?usp=sharing). >> [W&B Report](https://wandb.ai/ala_/GenAI360/runs/uhe0kbku?workspace=user-ala_). --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959752-fine-tuning-using-lora-and-sft
148,ReAct framework and ChatGPT plugins,"# ReAct framework and ChatGPT plugins
## Introduction
Large Language Models have demonstrated great utility in diverse tasks, from coding assistance and content summarization to answering everyday questions. As our understanding of their strengths and limitations grows, numerous innovative methods for improvement and expanding their range of tasks have emerged in recent months. This lesson will delve into some of these advancements. We will learn about the [ReAct framework](https://arxiv.org/pdf/2210.03629.pdf), a **prompt-based paradigm** designed to synergize **reasoning** and **acting** in language models for **general task solving**. Additionally, this module will cover the latest upgrades to [ChatGPT](https://chat.openai.com/auth/login), including [plugins](https://openai.com/blog/chatgpt-plugins) integration. We will also explore new enhancements available through the OpenAI API, such as [function calling](https://openai.com/blog/function-calling-and-other-api-updates). This feature enables LLMs to produce structured outputs, further augmenting their reliability and utility in many applications.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959894-react-framework-and-chatgpt-plugins
149,ReAct framework and ChatGPT plugins,"# ReAct framework and ChatGPT plugins
## Overview of ReAct
This framework aims to enhance the utility of language models by using them to create **autonomous agent systems**, which are systems that can operate and make decisions independently. A way to accomplish that is to make these models reason about an input question and context, create an action plan, and execute it. In the **[ReAct framework](https://arxiv.org/pdf/2210.03629.pdf)**, language models are prompted to generate **verbal reasoning traces,** which are detailed records of the model's thought process, and **actions** interleaved when accomplishing a task. The process is done iteratively until the answer to a question is found. The verbal reasoning step in ReAct allows the model to dynamically create, maintain, and adjust high-level plans for acting, which refers to the execution of specific tasks or actions. When acting, the model can also interact with external environments (like Wikipedia) to incorporate additional information into reasoning. As a side note, allowing these models to access sources of information like Wikipedia can also reduce the number of [hallucinations and biases](https://www.notion.so/Understanding-Hallucinations-and-Bias-e417eb8cce7849a58b2c8a4b0ef0f6f1?pvs=21). To better understand the framework, let’s look at the example below from the paper. ![From “[ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/pdf/2210.03629.pdf)” paper](ReAct%20framework%20and%20ChatGPT%20plugins%20806d77f2ff6c4a16be58c011998b358e/Untitled.png) From “[ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/pdf/2210.03629.pdf)” paper The figure above provides a comparative analysis of different methods a language model is utilized to accomplish a specific task: identifying a device that can control the same program the Apple Remote was first designed to interact with. - (1a) The model is asked to answer the question directly. - (1b) Uses **[Chain-of-Thought](https://arxiv.org/pdf/2201.11903.pdf)** prompting, which asks the LLM to reason about the question before answering. - (1c) An act-only process where the LLM is not prompted to reason about the question. - (1d) Using the ReAct framework, the LLM is prompted to reason about the question and perform a specific action to find the answer. The authors of ReAct note that the framework recipe accomplishes various tasks. The main steps include: 1. **Thought Step**: The LLM is prompted to think critically about the task. Given the question, it evaluates which actions might lead to finding the answer. 2. **Action Steps**: In this phase, the LLM interacts with an external environment. It can utilize external APIs to acquire necessary information if needed. 3. **Observation Step**: After taking action, the LLM receives a result from the external environment. These observations are crucial for the LLM to determine the effectiveness of the action and plan the next steps. 4. **Next Thought Step**: Equipped with the information from the action and observation, the LLM reevaluates the situation. This evaluation allows the model to consider and decide on the subsequent action. This sequential process continues until the LLM successfully finds the answer. ### ReAct framework in code Code implementations of the ReAct framework are available for those interested in creating autonomous agents. For a practical demonstration, consider looking at the **[author's implementation](https://github.com/ysymyth/ReAct/blob/master/hotpotqa.ipynb)**. 
This link directs you to a notebook showcasing an example of utilizing **`text-davinci-002`** to create an agent that answers questions using Wikipedia as a source of information. There is also a **[LangChain](https://python.langchain.com/docs/modules/agents/agent_types/react.html#using-lcel)** implementation of ReAct, enabling you to create a capable agent in less time as they have many available agent [tools](https://python.langchain.com/docs/integrations/tools/).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959894-react-framework-and-chatgpt-plugins
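To make the loop concrete, here is a heavily simplified, illustrative sketch of a ReAct-style agent written for this lesson. The `llm` and `search_wikipedia` functions are hypothetical placeholders (in practice they would wrap an LLM API and a Wikipedia client), and the `Search[...]`/`Finish[...]` action format mirrors the convention used in the paper.

```python
def search_wikipedia(query: str) -> str:
    # Hypothetical placeholder for a real Wikipedia lookup
    return 'The Apple Remote was designed to control the Front Row media program.'

def llm(prompt: str) -> str:
    # Hypothetical placeholder for a real completion call (e.g., an OpenAI model)
    return 'I need more information. Search[Apple Remote]'

def react_agent(question: str, max_steps: int = 5) -> str:
    trace = f'Question: {question}\n'
    for _ in range(max_steps):
        thought = llm(trace + 'Thought:')          # Thought step: reason about the task
        trace += f'Thought: {thought}\n'
        if 'Finish[' in thought:                   # the model decided it has the answer
            return thought.split('Finish[')[1].rstrip(']')
        if 'Search[' in thought:                   # Action step: call an external tool
            query = thought.split('Search[')[1].rstrip(']')
            observation = search_wikipedia(query)  # Observation step: feed the result back
            trace += f'Observation: {observation}\n'
    return 'No answer found within the step budget.'

print(react_agent('What program was the Apple Remote originally designed to control?'))
```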
150,ReAct framework and ChatGPT plugins,"# ReAct framework and ChatGPT plugins
## OpenAI Function calling
OpenAI has recently introduced a [function calling](https://openai.com/blog/function-calling-and-other-api-updates) feature for their language models through their API. In an API call, you can describe functions to **`gpt-3.5-turbo-0613`** and **`gpt-4-0613`**, and have the model output a JSON object containing arguments to call those functions. Be aware that the Chat completions API **does not call the function**; instead, the model generates a JSON object that you can use to **call the function in your code**. To include this feature, OpenAI fine-tuned the models **`gpt-3.5-turbo-0613`** and **`gpt-4-0613`** to detect when a function should be called (depending on the user input) and to respond with a JSON object that adheres to the function signature. It’s not disclosed how they implement this, but they may be using a form of prompt engineering similar to ReAct for this. With this feature, you can more easily create: - Chatbots that answer questions by calling external tools, such as sending an email. - Convert natural language into API calls or database queries so the model can answer questions such as “Who are my top ten customers this month?” - Extract structured data from text, such as extracting the names of all the locations mentioned in a Wikipedia article. Check out this [example](https://platform.openai.com/docs/guides/gpt/function-calling) in the OpenAI documentation to learn how to set up function calling. If your application makes use of LangChain, there is also a way to use function calling; take a look at the documentation [here](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959894-react-framework-and-chatgpt-plugins
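Below is a minimal sketch of how a function is described to the Chat Completions API, using the older `openai` Python client (0.x) that matches the `-0613` models discussed above. The `get_top_customers` function and its JSON schema are hypothetical, invented only to illustrate the request/response shape; see the linked OpenAI example for the official walkthrough.

```python
import json
import openai  # openai-python 0.x style client

functions = [{
    'name': 'get_top_customers',  # hypothetical function defined in your own code
    'description': 'Return the top N customers for a given month',
    'parameters': {
        'type': 'object',
        'properties': {
            'n': {'type': 'integer'},
            'month': {'type': 'string', 'description': 'e.g. 2023-06'},
        },
        'required': ['n', 'month'],
    },
}]

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo-0613',
    messages=[{'role': 'user', 'content': 'Who are my top ten customers this month?'}],
    functions=functions,
    function_call='auto',
)

message = response['choices'][0]['message']
if message.get('function_call'):
    # The model only proposes the call; executing it is up to your code
    name = message['function_call']['name']
    args = json.loads(message['function_call']['arguments'])
    print(name, args)
```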
151,ReAct framework and ChatGPT plugins,"# ReAct framework and ChatGPT plugins
## ChatGPT Plugins
If you are subscribed to ChatGPT Plus, you can easily augment LLMs with tools for personal or professional use without using the OpenAI API. OpenAI has made third-party [Plugins](https://openai.com/blog/chatgpt-plugins) available through its chat interface. It’s not disclosed how OpenAI implements this, but they may be using a form of prompt engineering similar to ReAct to enable the plugins. These plugins or tools can give the language models access to more recent, personal, or specific information. Here are some of the most popular use cases plugins can help with. ![Plugins available to ChatGPT Plus subscribers](ReAct%20framework%20and%20ChatGPT%20plugins%20806d77f2ff6c4a16be58c011998b358e/Untitled%201.png) Plugins available to ChatGPT Plus subscribers With these third-party plugins, you can directly upload your documents, such as PDFs, and ask questions about the information in those documents. You can provide a link to a GitHub repository and ask questions or let the language model explain the code to you. There are also plugins to create diagrams, flow charts, or graphs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959894-react-framework-and-chatgpt-plugins
152,ReAct framework and ChatGPT plugins,"# ReAct framework and ChatGPT plugins
## Conclusion
In this module, we explored various ways to enhance the capabilities of current language models. These methods also reduce the risks of [hallucinations](https://www.notion.so/Understanding-Hallucinations-and-Bias-e417eb8cce7849a58b2c8a4b0ef0f6f1?pvs=21). We learned about the ReAct framework, an approach that empowers language models to act more autonomously. We also discussed the features available via OpenAI services, including function calling and third-party plugins. Function calling assists developers by allowing custom functions to be used during user interaction with the model. Meanwhile, plugins available through the chat interface enable users to benefit from augmented OpenAI language models without coding.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959894-react-framework-and-chatgpt-plugins
153,What is LLMOps,"# What is LLMOps
## Introduction
As LLMs continue to revolutionize various applications, managing their lifecycle has become important. In this lesson, we will explore the concept of LLMOps, its origins, and its significance in today's AI industry. We will also discuss the steps involved in building an LLM-powered application, the differences between LLMOps and MLOps, and the challenges and solutions associated with each step.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954272-what-is-llmops
154,What is LLMOps,"# What is LLMOps
## The Emergence of LLMOps
In recent years, the world of AI has witnessed the rise of large language models. These models have billions of parameters and are trained on billions of words, hence the term “large.” The advent of LLMs has led to the emergence of a new term, LLMOps, which stands for Large Language Model Operations. This lesson aims to provide a comprehensive understanding of LLMOps, its origins, and its significance in the AI industry. LLMOps is essentially a set of tools and best practices designed to manage the GenAI lifecycle, from development and deployment to maintenance. LLMOps has gained traction with the rise of LLMs, particularly after the release of OpenAI's ChatGPT, which led to a surge in LLM-powered applications, such as chatbots, writing assistants, and programming assistants. However, the process of building production-ready LLM-powered applications presents unique challenges that differ from those encountered when building AI products with traditional machine learning models. This has necessitated the development of new tools and practices, giving birth to the term “LLMOps.”",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954272-what-is-llmops
155,What is LLMOps,"# What is LLMOps
## Steps Involved in LLMOps and Differences with MLOps
While LLMOps can be considered a subset of MLOps (Machine Learning Operations), there are key differences between the two, primarily due to the differences in building AI products with classical ML models and LLMs. The process of building an LLM-powered application involves several key steps. ### 1. **Selection of a Foundation Model** Foundation models are pre-trained LLMs that can be adapted for various downstream tasks. Training these models from scratch is complex, time-consuming, and costly. Hence, developers usually opt for either proprietary models owned by large companies or open-source models hosted on community platforms like Hugging Face. This differs from standard MLOps, where a model is typically trained from scratch with a smaller architecture or on different data, especially for tabular classification and regression tasks (except for computer vision, where most applications start with a model trained on general datasets like [ImageNet](https://www.image-net.org/) or [COCO](https://cocodataset.org/#home)). Typically, a dataset is split into training and evaluation sets (for example, with 70% of the data going into the training set), or other evaluation techniques like [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) are used. When working with LLMs, this is not possible due to the high costs involved in pretraining. Classical ML models are also data-hungry and require a lot of labeled data (at least thousands of examples) to be trained on. Consequently, choosing a suitable foundation model is a crucial step in LLMOps, including the choice between a proprietary and an open-source model. Proprietary LLMs are usually bigger and more performant than open-source alternatives (thanks to the investments that large corporations can make) and may also be more cost-effective for the final user as there’s no need to set up an expensive infrastructure to host the model (which organizations can do efficiently, as they have many customers and amortize the costs). In contrast, open-source models are generally more customizable and can be improved by anyone in the open-source community; indeed, they have quickly matched the quality of many proprietary LLMs. Another aspect to consider is the knowledge cutoff of LLMs: the date of the last published document on which the model was trained. For example, the model used in ChatGPT is currently limited to data up until September 2021. Consequently, the model can easily talk about everything that happened before that date but struggles with anything that happened afterward. For example, ChatGPT doesn’t know about the latest startups or products released, and it may therefore hallucinate when talking about them. ### 2. **Adaptation to Downstream Tasks** After selecting a foundation model, it can be customized for specific tasks through techniques such as prompt engineering. This involves adjusting the input to produce the desired output. It's important to keep track of the prompts used when using prompt engineering since they will likely be improved over time and can impact performance on specific tasks. This way, if a new prompt in production works worse than the previous one in some respect, reverting to the old prompt is easy. Additionally, fine-tuning can be utilized to enhance the model's performance on a specific task, which requires a high-quality dataset (and thus a data collection step). 
In the case of fine-tuning, there are different approaches, such as fine-tuning the whole model, instruction fine-tuning, or using [soft prompts](https://learnprompting.org/docs/trainable/soft_prompting). Fine-tuning is challenging due to the large size of the model, and deploying the newly fine-tuned model on new infrastructure can be difficult. To solve this problem, there are now fine-tuning techniques, such as [LoRA](https://arxiv.org/abs/2106.09685), that train only a small set of additional parameters added to the existing foundation model. Using LoRA, it’s possible to keep the same foundation model always",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954272-what-is-llmops
156,What is LLMOps,"# What is LLMOps
## Steps Involved in LLMOps and Differences with MLOps
deployed on the infrastructure while adding the additional fine-tuned parameters when needed. Popular proprietary models like GPT-3.5 and PaLM can now also be fine-tuned directly on the provider's platform. When fine-tuning a model, it's essential to keep track of the dataset used and the metrics achieved. It can be helpful to use a tool like [Weights and Biases](https://wandb.ai/site), which tracks experiments and provides a dashboard where you can monitor the metrics of your fine-tuned model on an evaluation set as it is trained. This provides insights into whether the training is progressing well or not. See [this page](https://wandb.ai/site/experiment-tracking) to learn more about how W&B experiment tracking works. It will be used in the following lessons to train and fine-tune language models. ### 3. **Evaluation** Evaluating the performance of an LLM is more complex than evaluating traditional ML models. The main reason for this is that the output of an LLM is usually free text, and it’s harder to devise metrics that can be computed via code and that work well on free text. For example, try thinking about how you could evaluate the quality of an answer given by an LLM assistant whose job is to summarize YouTube videos, for which you don’t have reference summaries written by humans. Currently, organizations often resort to A/B testing to assess the effectiveness of their models, checking whether the user’s satisfaction is the same or better after the change in production. Another aspect to consider is hallucinations. How can we measure, with a metric implemented in code, whether the answer of our LLM assistant contains hallucinations? This is another open challenge where organizations mainly rely on A/B testing. ### 4. **Deployment and Monitoring** Deploying and monitoring LLMs is very important as their completions can change significantly between releases. Tools for monitoring LLMs are emerging to address this need. Another concern in LLMOps is the latency of the model. Indeed, since the model is autoregressive (i.e., produces the output one token at a time), it may take some time to output a complete paragraph. This is at odds with the most popular application of LLMs as assistants, which should be able to output text at a throughput similar to a user's reading speed. One of the emerging tools in the LLMOps landscape is [W&B Prompts](https://docs.wandb.ai/guides/prompts), a suite designed specifically for the development of LLM-powered applications. W&B Prompts offers a comprehensive set of features that allow developers to visualize and inspect the execution flow of LLMs, analyze the inputs and outputs, view intermediate results, and securely manage prompts and LLM chain configurations. A key component of W&B Prompts is [Trace](https://github.com/wandb/wandb), a tool that tracks and visualizes the inputs, outputs, execution flow, and model architecture of LLM chains. Trace is particularly useful for LLM chaining, plug-in, or pipelining use cases. It provides a Trace Table for an overview of the inputs and outputs of a chain, a Trace Timeline that displays the execution flow of the chain color-coded according to component types, and a Model Architecture view that provides details about the structure of the chain and the parameters used to initialize each component. LLMOps is a rapidly evolving field, and it's hard to predict its future trajectory. 
However, it's clear that as LLMs become more prevalent, so will the tools and practices associated with LLMOps. The rise of LLMs and LLMOps signifies a major shift in building and maintaining AI-powered products.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954272-what-is-llmops
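As a concrete illustration of the Weights & Biases experiment tracking mentioned in the fine-tuning step above, here is a minimal sketch using the `wandb` library. The project name, hyperparameters, and the placeholder training loop are illustrative assumptions; in practice, you would log the real losses and evaluation metrics from your fine-tuning run.

```python
# A minimal sketch of experiment tracking with Weights & Biases during fine-tuning.
# The project name, hyperparameters, and loop below are placeholders for illustration.
import wandb

run = wandb.init(project="llm-finetuning", config={"lr": 2e-5, "epochs": 3})

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)   # placeholder: replace with your real training step
    eval_loss = 1.2 / (epoch + 1)    # placeholder: replace with your real evaluation step
    wandb.log({"epoch": epoch, "train/loss": train_loss, "eval/loss": eval_loss})

run.finish()  # metrics appear in the W&B dashboard for this run
```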
157,What is LLMOps,"# What is LLMOps
## Conclusion
In conclusion, LLMOps, or Large Language Model Operations, is a critical aspect of managing the lifecycle of applications powered by LLMs. This lesson has provided an overview of the origins and significance of LLMOps, the steps involved in building an LLM-powered application, and the differences between LLMOps and MLOps. We studied the process of selecting a foundation model, adapting it to downstream tasks, evaluating its performance, and deploying and monitoring the model. We've also highlighted the unique challenges posed by LLMs, such as the complexity of evaluating free text outputs and the need for prompt versioning and efficient deployment strategies. The emergence of tools like W&B Prompts and practices like A/B testing are indicative of the rapid evolution of LLMOps. As LLMs continue to revolutionize various applications, the tools and practices associated with LLMOps will undoubtedly become increasingly important in AI.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954272-what-is-llmops
158,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## Introduction
In this lesson, we will examine the phenomenon of model collapse, its stages, its causes, and its implications on the future of Large Language Models. We also draw parallels with related concepts in machine learning, such as catastrophic forgetting and data poisoning. Finally, we contemplate the value of human-generated content in the era of dominant LLMs and the potential risk of widespread model collapse.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
159,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## Understanding Model Collapse
Model collapse, defined in the paper “[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/pdf/2305.17493v2.pdf),” is a degenerative process affecting generations of learned generative models. It occurs when the data generated by a model ends up contaminating the training set of subsequent models. As a result, these models start to misinterpret reality, reinforcing their own beliefs instead of learning from real data. Here’s an image exemplifying model collapse. ![Image from the paper “[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/pdf/2305.17493v2.pdf).” Model Collapse refers to a degenerative learning process where models start forgetting improbable events over time as the model becomes poisoned with its projection of reality.](Training%20on%20Generated%20Data%20and%20Model%20Collapse%20b3c3d80ca3c24b70969d245c626af8ea/Screenshot_2023-08-17_at_12.16.02.png) Image from the paper “[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/pdf/2305.17493v2.pdf).” Model Collapse refers to a degenerative learning process where models start forgetting improbable events over time as the model becomes poisoned with its projection of reality. There are two distinct stages of model collapse: early and late. - In the **early stage**, the model begins to lose information about the tails of the distribution. - As the process progresses to the **late stage**, the model starts to entangle different modes of the original distributions, eventually converging to a distribution that bears little resemblance to the original one, often with very small variance. Here’s an example of text outputs from sequential generations of 125M-parameter LLMs, where each generation is trained on data produced by the previous generation. ![Image from the paper “[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/pdf/2305.17493v2.pdf).”](Training%20on%20Generated%20Data%20and%20Model%20Collapse%20b3c3d80ca3c24b70969d245c626af8ea/Screenshot_2023-08-17_at_12.18.00.png) Image from the paper “[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/pdf/2305.17493v2.pdf).”",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
160,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## Related Work on Model Collapse
Model collapse shares similarities with two concepts in machine learning literature: catastrophic forgetting and data poisoning. - [Catastrophic forgetting](https://en.wikipedia.org/wiki/Catastrophic_interference), a challenge in continual learning, refers to the model's tendency to forget previous samples when learning new information. This is particularly relevant in task-free continual learning, where data distributions gradually change without the notion of separate tasks. However, in the context of model collapse, the changed data distributions arise from the model itself as a result of training in the previous iteration. - On the other hand, [data poisoning](https://en.wikipedia.org/wiki/Adversarial_machine_learning#Data_poisoning) involves the insertion of malicious data during training to degrade the model’s performance. This concept becomes increasingly relevant with the rise of contrastive learning and LLMs trained on untrustworthy web sources. Yet, neither catastrophic forgetting nor data poisoning fully explains the phenomenon of model collapse, as they don't account for the self-reinforcing distortions of reality seen in model collapse. However, understanding these related concepts can provide additional insights into model collapse mechanisms and potential mitigation strategies.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
161,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## Causes of Model Collapse
Model collapse primarily results from two types of errors: statistical approximation error and functional approximation error. - The statistical approximation error is the primary cause. It arises due to the finite number of samples used in training. Even when a large number of points is used, significant errors can still occur, because there's always a non-zero probability that information gets lost at every re-sampling step. - The functional approximation error is a secondary cause. It stems from the limitations of our function approximators. Even though neural networks are theoretically capable of approximating any function, in practice, they can introduce non-zero likelihood outside the support of the original distribution, leading to errors.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
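To build intuition for the statistical approximation error, here is a toy NumPy simulation (our own illustration, not an experiment from the paper): each "generation" fits a Gaussian to a finite sample drawn from the previous generation's fit. Even with no functional error, the estimated parameters drift, and rare tail events can disappear simply because they are never re-sampled.

```python
# Toy illustration (an assumption, not from the paper) of the statistical
# approximation error behind model collapse: each generation is a Gaussian
# fitted to a finite sample drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0          # the "real" data distribution
n_samples = 100               # finite dataset available at each generation

for generation in range(1, 51):
    samples = rng.normal(mu, sigma, size=n_samples)  # data produced by the current model
    mu, sigma = samples.mean(), samples.std()        # the next model only sees this sample
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
# The fitted parameters perform a random walk away from (0, 1); over many
# generations the variance tends to shrink and tail events become ever rarer.
```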
162,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## The Future of the Web with Dominant LLMs
As LLMs become more prevalent in the online text and image ecosystem, they will inevitably train on data produced by their predecessors. This could lead to a cycle where each model generation learns more from previous models' output and less from original human-generated content. The result is a risk of widespread model collapse, with models progressively losing touch with the true underlying data distribution. Model collapse has far-reaching implications. As models start to misinterpret reality, the quality of generated content could degrade over time. This could profoundly affect many applications of LLMs, from content creation to decision-making systems.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
163,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## The Value of Human-Generated Content
In the face of model collapse, preserving and accessing data collected from genuine human interactions becomes increasingly valuable. Real human-produced data provides access to the original data distribution, which is crucial in learning where the tails of the underlying distribution matter. As LLMs increasingly generate online content, data from human interactions with these models will become an increasingly valuable resource for training future models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
164,Training on Generated Data and Model Collapse,"# Training on Generated Data and Model Collapse
## Conclusion
In this lesson, we've explored the phenomenon of model collapse, a degenerative process that can affect generative models when they are trained on data produced by other models. We've examined the stages of model collapse, from the early loss of information about the tails of the distribution to the late-stage entanglement of different modes. We've drawn parallels with related concepts in machine learning: catastrophic forgetting and data poisoning. We also dissected the leading causes of model collapse, namely statistical and functional approximation errors. As Large Language Models become more dominant in the digital landscape, the risk of widespread model collapse increases, potentially leading to a degradation in generated content quality. In this context, we've underscored the importance of preserving and accessing human-generated content, which provides a crucial link to the original data distribution and serves as a valuable resource for training future models. As we continue to harness the power of LLMs, understanding and mitigating model collapse will be essential in ensuring the quality and reliability of their outputs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959929-training-on-generated-data-and-model-collapse
165,Deep Dive into LoRA and SFT,"# Deep Dive into LoRA and SFT
## Introduction
In this lesson, we will dive deeper into the mechanics of LoRA, a powerful method for optimizing the fine-tuning process of Large Language Models, its practical uses in various fine-tuning tasks, and the open-source resources that simplify its implementation. We will also introduce QLoRA, a highly efficient version of LoRA. By the end of this lesson, you will have an in-depth understanding of how LoRA and QLoRA can enhance the efficiency and accessibility of fine-tuning LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954489-deep-dive-into-lora-and-sft
166,Deep Dive into LoRA and SFT,"# Deep Dive into LoRA and SFT
## **The Functioning of LoRA in Fine-tuning LLMs**
[LoRA](https://arxiv.org/abs/2106.09685), or Low-Rank Adaptation, is a method developed by Microsoft researchers to optimize the fine-tuning of Large Language Models. This technique tackles the issues related to the fine-tuning process, such as extensive memory demands and computational inefficiency. LoRA introduces a compact set of parameters, referred to as **low-rank matrices**, to store the necessary changes in the model instead of altering all parameters. Here are the key features of how LoRA operates: - **Maintaining Pretrained Weights**: LoRA adopts a unique strategy by preserving the pretrained weights of the model. This approach reduces the risk of catastrophic forgetting, ensuring the model maintains the valuable knowledge it gained during pretraining. - **Efficient Rank-Decomposition**: LoRA adds rank-decomposition weight matrices, known as update matrices, to the existing weights. These update matrices have significantly fewer parameters than the original model, making them highly memory-efficient. By training only these newly added weights, LoRA achieves a faster training process with reduced memory demands. These LoRA matrices are typically integrated into the attention layers of the original model. By using the low-rank decomposition approach, the memory demands for training large language models are significantly reduced. This allows running fine-tuning tasks on consumer-grade GPUs, making the benefits of LoRA available to a broader range of researchers and developers.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954489-deep-dive-into-lora-and-sft
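To see what this looks like in practice, here is a hedged sketch of attaching LoRA adapters with the PEFT library. The base model (`facebook/opt-350m`), the rank, and the target attention projections are illustrative choices, not prescriptions from the LoRA paper.

```python
# A hedged sketch of adding LoRA adapters with PEFT; model name and settings
# are illustrative choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # rank of the update matrices
    lora_alpha=32,       # scaling factor applied to the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # inject adapters into attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```

The pretrained weights stay frozen; training then proceeds as usual (for example, with the Hugging Face `Trainer`), updating only the adapter parameters.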
167,Deep Dive into LoRA and SFT,"# Deep Dive into LoRA and SFT
## **Open-source Resources for LoRA**
The following libraries offer a mix of tools that enhance the efficiency of fine-tuning large language models. They provide optimizations, compatibility with different data types, resource efficiency, and user-friendly interfaces that accommodate various tasks and hardware configurations. - **[PEFT Library](https://github.com/huggingface/peft)**: Parameter-efficient fine-tuning (PEFT) methods facilitate efficient adaptation of pre-trained language models to various downstream applications without fine-tuning all the model's parameters. By fine-tuning only a portion of the model's parameters, PEFT methods like LoRA, Prefix Tuning, and P-Tuning, including QLoRA, significantly reduce computational and storage costs. - **[Lit-GPT](https://github.com/Lightning-AI/lit-gpt):** Lit-GPT from LightningAI is an open-source resource designed to simplify the fine-tuning process, making it easier to apply LoRA's techniques without manually altering the core model architecture. Models available for this purpose include [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/), [Pythia](https://www.eleuther.ai/papers-blog/pythia-a-suite-for-analyzing-large-language-modelsacross-training-and-scaling), and [Falcon](https://falconllm.tii.ae/). Specific configurations can be applied to different weight matrices, and precision settings can be adjusted to manage memory consumption. In this course, we’ll mainly use the PEFT library.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954489-deep-dive-into-lora-and-sft
168,Deep Dive into LoRA and SFT,"# Deep Dive into LoRA and SFT
## QLoRA: An Efficient Variant of LoRA
[QLoRA](https://arxiv.org/abs/2305.14314), or Quantized Low-Rank Adaptation, is a popular variant of LoRA that makes fine-tuning large language models even more efficient. QLoRA introduces several innovations to save memory without sacrificing performance. The technique involves backpropagating gradients through a frozen, 4-bit quantized pretrained language model into Low-Rank Adapters. This approach significantly reduces memory usage, enabling the fine-tuning of even larger models on consumer-grade GPUs. For instance, QLoRA can fine-tune a 65 billion parameter model on a single 48GB GPU while preserving full 16-bit fine-tuning task performance. QLoRA uses a new data type known as 4-bit NormalFloat (NF4), which is optimal for normally distributed weights. It also employs double quantization, which reduces the average memory footprint by quantizing the quantization constants, and paged optimizers to manage memory spikes. The [Guanaco](https://huggingface.co./TheBloke/guanaco-65B-GPTQ) models, which use QLoRA fine-tuning, have demonstrated state-of-the-art performance, even when using smaller models than the previous benchmarks. This shows the power of QLoRA tuning, making it a popular choice for those seeking to democratize the use of large transformer models. The practical implementation of QLoRA for fine-tuning LLMs is very accessible, thanks to open-source libraries and tools. For instance, the [BitsAndBytes library](https://github.com/TimDettmers/bitsandbytes) offers functionalities for 4-bit quantization. We’ll later see a code example showing how to use QLoRA with PEFT.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954489-deep-dive-into-lora-and-sft
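Ahead of that full example, here is a brief hedged sketch of the loading step: the base model is quantized to 4-bit NF4 with `bitsandbytes` via Transformers' `BitsAndBytesConfig`, then prepared for k-bit training before LoRA adapters are attached. The model name and settings are illustrative assumptions, not the exact QLoRA paper setup.

```python
# A hedged sketch of loading a base model in 4-bit NF4 before attaching LoRA
# adapters (QLoRA-style). Requires a GPU with bitsandbytes installed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)
# From here, attach a LoraConfig with get_peft_model, as in the previous sketch.
```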
169,Deep Dive into LoRA and SFT,"# Deep Dive into LoRA and SFT
## Conclusion
In this lesson, we focused on LoRA and QLoRA, two powerful techniques for fine-tuning LLMs. We explored how LoRA works, preserving pretrained weights and introducing low-rank matrices to make the fine-tuning process more memory and computationally efficient. We also introduced open-source libraries like PEFT and Lit-GPT that facilitate the implementation of LoRA. Finally, we discussed QLoRA, an efficient variant of LoRA that uses 4-bit NormalFloat and double quantization to reduce memory usage.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954489-deep-dive-into-lora-and-sft
170,Evaluating LLM Performance,"# Evaluating LLM Performance
## Introduction
In this lesson, we will explore two crucial aspects of language model evaluation: objective functions and evaluation metrics. Objective functions, also known as loss functions, play a vital role in guiding the learning process during model training. On the other hand, evaluation metrics provide interpretable measures of the model's capabilities and are used to assess its performance on various tasks. We will dive into the perplexity evaluation metric, commonly used for LLMs, and explore several benchmarking frameworks, such as GLUE, SuperGLUE, BIG-bench, HELM, and FLASK, that help comprehensively evaluate language models across diverse scenarios.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
171,Evaluating LLM Performance,"# Evaluating LLM Performance
## Objective Functions and Evaluation Metrics
Objective functions and evaluation metrics are essential components in machine learning models. The **objective function**, also known as the **loss function**, is a mathematical formula used during the training phase. It assigns a loss score to the model as a function of the model parameters. During training, the learning algorithm computes gradients of the loss function and updates the model parameters to minimize it. As a consequence, to guarantee smooth learning, the loss function needs to be differentiable and reasonably smooth. The objective function typically used for LLMs is the **cross-entropy loss**. In the case of causal language modeling, the model predicts the next token from a fixed list of tokens, essentially making it a classification problem. On the other hand, **evaluation metrics** are used to assess the model's performance in a way that is interpretable to people. Unlike the objective function, evaluation metrics are not directly used during training. As a consequence, evaluation metrics don’t need to be differentiable, as we won’t have to compute gradients for them. Standard evaluation metrics include accuracy, precision, recall, F1-score, and mean squared error. Typical evaluation metrics for LLMs can be: - **Intrinsic metrics**, i.e., metrics strictly related to the training objective. A popular example is the **perplexity** metric. - **Extrinsic metrics** are metrics that aim to assess performance on several downstream tasks and are not strictly related to the training objective. The GLUE, SuperGLUE, BIG-bench, HELM, and FLASK benchmarks are popular examples.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
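As a small illustration of the cross-entropy objective for next-token prediction, the sketch below uses PyTorch with made-up logits: the model scores every token in the vocabulary, and the loss compares those scores with the index of the true next token.

```python
# Illustration with made-up values: cross-entropy for next-token prediction
# treats the next token as the target class over the whole vocabulary.
import torch
import torch.nn.functional as F

vocab_size = 10
logits = torch.randn(1, vocab_size)      # unnormalized scores for the next token
target = torch.tensor([3])               # index of the true next token
loss = F.cross_entropy(logits, target)   # differentiable, so it can drive training
print(loss.item())
```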
172,Evaluating LLM Performance,"# Evaluating LLM Performance
## The Perplexity Evaluation Metric
Perplexity is an evaluation metric used to assess the performance of LLMs. It measures how well a language model predicts a given sample or sequence of words, such as a sentence. The lower the perplexity value, the better the language model is at predicting the sample. LLMs are designed to model the probability distributions of words within sentences. They can generate sentences resembling human writing and assess the sentences' quality. Perplexity is a measure that quantifies the uncertainty or ""perplexity"" a model experiences when assigning probabilities to sequences of words. The first step in computing perplexity is to calculate the probability of a sentence by multiplying the probabilities of individual words according to the language model. Longer sentences tend to have lower probabilities due to the multiplication of factors smaller than one. To make comparisons between sentences with different lengths possible, perplexity normalizes the probability by the number of words in the sentence, taking the geometric mean of the per-word probabilities. ### Perplexity Example Consider an example where a language model is trained to predict the subsequent word in a sentence: ""A red fox."" For a competent LLM, the predicted word probabilities could be as follows, step by step. > P(“a red fox.”) = > > > = P(“a”) * P(“red” | “a”) * P(“fox” | “a red”) * P(“.” | “a red fox”) = > > = 0.4 * 0.27 * 0.55 * 0.79 = > > = 0.0469 > It would be nice to compare the probabilities assigned to different sentences to see which sentences are better predicted by the language model. However, since the probability of a sentence is obtained from a product of probabilities, the longer the sentence, the lower its probability (since it’s a product of factors with values smaller than one). We should find a way of measuring these sentence probabilities without the influence of the sentence length. This can be done by normalizing the sentence probability by the number of words in the sentence. Since the probability of a sentence is obtained by multiplying many factors, we can average them using the [geometric mean](https://en.wikipedia.org/wiki/Geometric_mean). Let’s call *Pnorm(W)* the normalized probability of the sentence *W*. Let *n* be the number of words in *W*. Then, applying the geometric mean: > Pnorm(W) = P(W) ^ (1 / n) > Using our specific sentence, “*a red fox.*”: > Pnorm(“a red fox.”) = P(“a red fox.”) ^ (1 / 4) = 0.465 > Great! This number can now be used to compare the probabilities of sentences with different lengths. The higher this number is for a well-written sentence, the better the language model. So, what does this have to do with perplexity? Well, perplexity is just the reciprocal of this number. Let’s call *PP(W)* the perplexity computed over the sentence *W*. Then: > PP(W) = 1 / Pnorm(W) = > > > = 1 / (P(W) ^ (1 / n)) > > = (1 / P(W)) ^ (1 / n) > Let’s compute it with `numpy`: ```python import numpy as np probabilities = np.array([0.4, 0.27, 0.55, 0.79]) sentence_probability = probabilities.prod() sentence_probability_normalized = sentence_probability ** (1 / len(probabilities)) perplexity = 1 / sentence_probability_normalized print(perplexity) # 2.1485556947850033 ``` Suppose we further train the LLM, and the probabilities of the next best word become higher. How would the final perplexity be, higher or lower? 
```python probabilities = np.array([0.7, 0.5, 0.6, 0.9]) sentence_probability = probabilities.prod() sentence_probability_normalized = sentence_probability ** (1 / len(probabilities)) perplexity = 1 / sentence_probability_normalized print(perplexity) # 1.516647134682679 -> lower ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
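Beyond hand-picked probabilities, perplexity can also be computed for a real model. The sketch below (an illustration using GPT-2 via Hugging Face Transformers, not a benchmark procedure) relies on the fact that the loss returned by a causal LM is the average cross-entropy per token, so exponentiating it yields the perplexity of the input text.

```python
# A hedged sketch: perplexity of a sentence under GPT-2. The model's loss is the
# mean per-token cross-entropy, so exp(loss) is the perplexity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A red fox.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss)
print(perplexity.item())
```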
173,Evaluating LLM Performance,"# Evaluating LLM Performance
## The GLUE Benchmark
The [GLUE](https://gluebenchmark.com/) (General Language Understanding Evaluation) benchmark comprises nine diverse English sentence understanding tasks categorized into three groups. - The first group, Single-Sentence Tasks, evaluates the model's ability to determine grammatical correctness (CoLA) and sentiment polarity (SST-2) of individual sentences. - The second group, Similarity and Paraphrase Tasks, focuses on assessing the model's capacity to identify paraphrases in sentence pairs (MRPC and QQP) and determine the similarity score between sentences (STS-B). - The third group, Inference Tasks, challenges the model to handle sentence entailment and relationships. This includes recognizing textual entailment (RTE), answering questions based on sentence information (QNLI), and resolving pronoun references (WNLI). The final GLUE score is obtained by averaging performance across all nine tasks. By providing a unified evaluation platform, GLUE facilitates a deeper understanding of the strengths and weaknesses of various NLP models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
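As a practical note, the GLUE tasks and their metrics are available through the Hugging Face `datasets` and `evaluate` libraries. The sketch below loads SST-2 and scores placeholder predictions; the all-ones predictions are only there to show the metric API, not a real model's output.

```python
# A hedged sketch of loading one GLUE task (SST-2) and its official metric.
from datasets import load_dataset
import evaluate

sst2 = load_dataset("glue", "sst2")
metric = evaluate.load("glue", "sst2")

predictions = [1] * len(sst2["validation"])   # placeholder predictions (always "positive")
references = sst2["validation"]["label"]
print(metric.compute(predictions=predictions, references=references))  # accuracy on SST-2
```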
174,Evaluating LLM Performance,"# Evaluating LLM Performance
## The **SuperGLUE Benchmark**
The [SuperGLUE](https://super.gluebenchmark.com/) benchmark builds upon the GLUE benchmark but introduces more complex tasks to push the boundaries of current NLP approaches. The key features of SuperGLUE are: 1. Tasks: SuperGLUE consists of eight diverse language understanding tasks. These tasks include Boolean question answering, textual entailment, coreference resolution, reading comprehension with commonsense reasoning, and word sense disambiguation. 2. Difficulty: The benchmark retains the two hardest tasks from GLUE and adds new tasks based on the challenges faced by current NLP models, ensuring greater complexity and relevance to real-world language understanding scenarios. 3. Human Baselines: Human performance estimates are included for each task, providing a benchmark for evaluating the performance of NLP models against human-level understanding. 4. Evaluation: NLP models are evaluated on these tasks, and their performance is measured using a single-number overall score obtained by averaging the scores of all individual tasks.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
175,Evaluating LLM Performance,"# Evaluating LLM Performance
## The BIG-Bench Benchmark
[BIG-bench](https://github.com/google/BIG-bench) is a large-scale and diverse benchmark designed to evaluate the capabilities of large language models. It consists of 204 or more language tasks that cover a wide range of topics and languages. These are challenging and not entirely solvable by current models. The benchmark supports two types of tasks: JSON-based and programmatic tasks. JSON tasks involve comparing output and target pairs to evaluate performance, while programmatic tasks use Python to measure text generation and conditional log probabilities. The tasks include writing code, common-sense reasoning, playing games, linguistics, and more. The researchers found that aggregate performance improves with model size but still falls short of human performance. Model predictions become better calibrated with increased scale, and sparsity offers benefits. This benchmark is considered a ""living benchmark,"" accepting new task submissions for continuous peer review. The code for BIG-bench is open-source on [GitHub](https://github.com/google/BIG-bench), and the research paper is available on [arXiv](https://arxiv.org/abs/2206.04615).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
176,Evaluating LLM Performance,"# Evaluating LLM Performance
## The HELM Benchmark
The [HELM](https://crfm.stanford.edu/2022/11/17/helm.html) (Holistic Evaluation of Language Models) benchmark addresses the lack of a unified standard for comparing language models and aims to assess them in their totality. The benchmark has three main components: 1. Broad Coverage and Recognition of Incompleteness: HELM evaluates language models over a diverse set of scenarios, considering different tasks, domains, languages, and user-facing applications. It acknowledges that not all scenarios can be covered but explicitly identifies major scenarios and missing metrics to highlight improvement areas. 2. Multi-Metric Measurement: HELM evaluates language models based on multiple criteria, unlike previous benchmarks that often focus on a single metric like accuracy. It measures 7 metrics: accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency. This multi-metric approach ensures that non-accuracy desiderata are not overlooked. 3. Standardization: HELM aims to standardize the evaluation process for different language models. It specifies an adaptation procedure using few-shot prompting, making it easier to compare models effectively. By evaluating 30 models from various providers, HELM improves the overall landscape of language model evaluation and encourages a more transparent and reliable infrastructure for language technologies.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
177,Evaluating LLM Performance,"# Evaluating LLM Performance
## The FLASK Benchmark
The [FLASK](https://arxiv.org/abs/2307.10928) (Fine-grained Language Model Evaluation based on Alignment Skill Sets) benchmark is an evaluation protocol for LLMs. It breaks down the evaluation process into 12 specific instance-wise skill sets, each representing a crucial aspect of a model's capabilities. These skill sets comprise logical robustness, logical correctness, logical efficiency, factuality, commonsense understanding, comprehension, insightfulness, completeness, metacognition, readability, conciseness, and harmlessness. By breaking down the evaluation into these specific skill sets, FLASK allows for a precise and comprehensive assessment of a model's performance across various tasks, domains, and difficulty levels. This approach provides a more detailed and nuanced understanding of a language model's strengths and weaknesses, enabling researchers and developers to improve the models in targeted ways and address specific challenges in natural language processing. ![Assessing skills across diverse tasks for a range of LLMs, image credit: [https://arxiv.org/pdf/2307.10928.pdf](https://arxiv.org/pdf/2307.10928.pdf)](Evaluating%20LLM%20Performance%204fb6aa977aaa44a09185b6086c58977a/flask.png) Assessing skills across diverse tasks for a range of LLMs, image credit: [https://arxiv.org/pdf/2307.10928.pdf](https://arxiv.org/pdf/2307.10928.pdf)",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
178,Evaluating LLM Performance,"# Evaluating LLM Performance
## Conclusion
In this lesson, we explored the essential concepts of evaluating LLM performance through objective functions and evaluation metrics. The objective or loss function plays a critical role during model training. It guides the learning algorithm to minimize the loss score by updating model parameters. For LLMs, the common objective function is the cross-entropy loss. On the other hand, evaluation metrics are used to assess the model's performance more interpretably, though they are not directly used during training. Perplexity is one such intrinsic metric used to measure how well an LLM predicts a given sample or sequence of words. Additionally, the lesson introduced several popular extrinsic evaluation benchmarks, such as GLUE, SuperGLUE, BIG-bench, HELM, and FLASK, which evaluate language models on diverse tasks and scenarios, covering aspects like accuracy, fairness, robustness, and more. By understanding these concepts and using appropriate evaluation metrics and benchmarks, researchers and developers can gain valuable insights into language models' strengths and weaknesses, leading to improvements in these technologies.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954229-evaluating-llm-performance
179,Transformers Architectures,"# Transformers Architectures
## Introduction
The transformer architecture has demonstrated its versatility in various applications. The original network was presented as an encoder-decoder architecture for translation tasks. The next evolution of transformer architecture began with the introduction of encoder-only models like BERT, followed by the introduction of decoder-only networks in the first iteration of GPT models. The differences extend beyond just network design and also encompass the learning objectives. These contrasting learning objectives play a crucial role in shaping the model's behavior and outcomes. Understanding these differences is essential for selecting the most suitable architecture for a given task and achieving optimal performance in various applications. In this lesson, we will explore the distinctions between these architectures by loading pre-trained models. The goal is to dive deeper into each architecture.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954201-transformers-architectures
180,Transformers Architectures,"# Transformers Architectures
## The Encoder-Decoder Architecture
![Image taken from the “*Attention is all you need” paper*](Understanding%20Transformers%2086c4c813b12f485283a3806ec8e25381/Untitled.png) Image taken from the “*Attention is all you need” paper* The encoder-decoder, also known as the full transformer architecture, comprises multiple stacked encoder components connected to several stacked decoder components through a cross-attention mechanism. It is notably well-suited for sequence-to-sequence (i.e., handling text as both input and output) tasks such as translation or summarization, mainly when designing models with multi-modality, like image captioning with the image as input and the corresponding caption as the expected output. Cross-attention will help the decoder focus on the most important part of the content during the generation process. A notable example of this approach is the BART pre-trained model. The architecture incorporates a bi-directional encoder responsible for creating a comprehensive representation of the input, while an autoregressive decoder generates the output one token at a time. The model takes in a randomly masked input along with the input shifted by one token and attempts to reconstruct the original input as a learning objective. The provided code below loads the BART model so we can examine its architecture. ```python from transformers import AutoModel, AutoTokenizer BART = AutoModel.from_pretrained(""facebook/bart-large"") print(BART) ``` ```python BartModel( (shared): Embedding(50265, 1024, padding_idx=1) (encoder): BartEncoder( (embed_tokens): Embedding(50265, 1024, padding_idx=1) (embed_positions): BartLearnedPositionalEmbedding(1026, 1024) (layers): ModuleList( (0-11): 12 x BartEncoderLayer( (self_attn): BartAttention( (k_proj): Linear(in_features=1024, out_features=1024, bias=True) (v_proj): Linear(in_features=1024, out_features=1024, bias=True) (q_proj): Linear(in_features=1024, out_features=1024, bias=True) (out_proj): Linear(in_features=1024, out_features=1024, bias=True) ) (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (activation_fn): GELUActivation() (fc1): Linear(in_features=1024, out_features=4096, bias=True) (fc2): Linear(in_features=4096, out_features=1024, bias=True) (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) ) (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) (decoder): BartDecoder( (embed_tokens): Embedding(50265, 1024, padding_idx=1) (embed_positions): BartLearnedPositionalEmbedding(1026, 1024) (layers): ModuleList( (0-11): 12 x BartDecoderLayer( (self_attn): BartAttention( (k_proj): Linear(in_features=1024, out_features=1024, bias=True) (v_proj): Linear(in_features=1024, out_features=1024, bias=True) (q_proj): Linear(in_features=1024, out_features=1024, bias=True) (out_proj): Linear(in_features=1024, out_features=1024, bias=True) ) (activation_fn): GELUActivation() (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (encoder_attn): BartAttention( (k_proj): Linear(in_features=1024, out_features=1024, bias=True) (v_proj): Linear(in_features=1024, out_features=1024, bias=True) (q_proj): Linear(in_features=1024, out_features=1024, bias=True) (out_proj): Linear(in_features=1024, out_features=1024, bias=True) ) (encoder_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (fc1): Linear(in_features=1024, out_features=4096, bias=True) (fc2): Linear(in_features=4096, out_features=1024, bias=True) (final_layer_norm): LayerNorm((1024,), eps=1e-05, 
elementwise_affine=True) ) ) (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) ) ``` We are already familiar with most of the layers in the BART model. The model is comprised of both encoder and decoder components, with each component consisting of 12 layers. Additionally, The decoder component, in particular, contains an additional `encoder_attn` layer, referred to as cross-attention. The cross-attention component will condition the decoder’s output based on the encoder representations. We can use the fine-tuned version of this model for summarization using the Transformer’s pipeline functionality. ```python from transformers import pipeline summarizer = pipeline(""summarization"", model=""facebook/bart-large-cnn"") sum = summarizer(""""""Gaga was best known in the 2010s for pop hits like “Poker Face” and avant-garde experimentation on albums like “Artpop,” and Bennett, a singer who mostly stuck to standards, was in his 80s when the pair met. And yet Bennett and Gaga became fast friends and close collaborators, which they remained until Bennett’s death at 96 on Friday. They recorded two albums together, 2014’s “Cheek to Cheek” and 2021’s “Love for Sale,” which both won Grammys for best traditional pop vocal album."""""", min_length=20, max_length=50) print(sum[0]['summary_text']) ``` ``` Bennett and Gaga became fast friends and close collaborators. They recorded two albums together, 2014's ""Cheek to Cheek"" and 2021's ""Love for Sale"" ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954201-transformers-architectures
181,Transformers Architectures,"# Transformers Architectures
## The Encoder-Only Architecture
![Image taken from the “BERT: Pre-training of Deep Bidirectional Transformers for LanguageUnderstanding” paper](Transformers%20Architectures%20f461b29896544831a22e459475ce4023/Untitled.png) Image taken from the “BERT: Pre-training of Deep Bidirectional Transformers for LanguageUnderstanding” paper As implied by the name, the encoder-only models are formed by stacking multiple encoder components. As the encoder output cannot be connected to another decoder, its output can be directly used as a text-to-vector method, for instance, to measure similarity. Alternatively, it can be combined with a classification head (feedforward layer) on top to facilitate label prediction (it is also known as a Pooler layer in libraries such as Huggingface). The primary distinction in the encoder-only architecture lies in the absence of the Masked Self-Attention layer. As a result, the encoder can handle the entire input simultaneously. This differs from decoders, where future tokens need to be masked during training to prevent “cheating” when generating new tokens. Due to this property, they are ideally suited for creating representations from a document while retaining complete information. The BERT paper (or an improved variant like RoBERTa) introduced a widely recognized pre-trained model that significantly improved the state-of-the-art scores on numerous NLP tasks. The model undergoes pre-training with two learning objectives: 1. Masked Language Modeling: masking random tokens from the input and attempting to predict them. 2. Next Sentence Prediction: Present sentences in pairs and assess the likelihood of the second sentence in the subsequent sequence of the first sentence. ```python BERT = AutoModel.from_pretrained(""bert-base-uncased"") print(BERT) ``` ```python BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(30522, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0-11): 12 x BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) (intermediate_act_fn): GELUActivation() ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) ``` The BERT model adopts the conventional transformer architecture for input embedding and 12 encoder blocks. However, the network’s output will be passed on to a pooler layer, which is a feed-forward linear layer followed by non-linearity that will generate the final representation. This representation will subsequently be utilized for various tasks, such as classification or similarity assessment. 
The following code uses the fine-tuned version of the BERT model for sentiment analysis. ```python classifier = pipeline(""text-classification"", model=""nlptown/bert-base-multilingual-uncased-sentiment"") lbl = classifier(""""""This restaurant is awesome."""""") print(lbl) ``` ```python [{'label': '5 stars', 'score': 0.8550480604171753}] ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954201-transformers-architectures
182,Transformers Architectures,"# Transformers Architectures
## The Decoder-Only Architecture
![Image taken from “Improving language understanding with unsupervised learning” paper](Transformers%20Architectures%20f461b29896544831a22e459475ce4023/Untitled%201.png) Image taken from “Improving language understanding with unsupervised learning” paper The decoder-only networks continue to serve as the foundation for most large language models today, with slight variations in some instances. Because of the implementation of masked self-attention, their primary use case revolves around the next-token-prediction task, which sparked the concept of prompting. Research demonstrated that scaling up the decoder-only models can significantly enhance the network's language understanding and generalization capabilities. As a result, they can excel at a diverse range of tasks simply by using different prompts. Large pre-trained models like GPT-4 and LLaMA 2 exhibit the ability to perform tasks such as classification, summarization, translation, etc., by leveraging the appropriate prompt. The large language models, such as those in the GPT family, undergo pre-training using the Causal Language Modeling objective. This means the model aims to predict the next word, while the attention mechanism can only attend to previous tokens on the left. This implies that the model can solely rely on the previous context to predict the next token and is unable to peek at future tokens, preventing any form of cheating. ```python gpt2 = AutoModel.from_pretrained(""gpt2"") print(gpt2) ``` ```python GPT2Model( (wte): Embedding(50257, 768) (wpe): Embedding(1024, 768) (drop): Dropout(p=0.1, inplace=False) (h): ModuleList( (0-11): 12 x GPT2Block( (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (attn): GPT2Attention( (c_attn): Conv1D() (c_proj): Conv1D() (attn_dropout): Dropout(p=0.1, inplace=False) (resid_dropout): Dropout(p=0.1, inplace=False) ) (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (mlp): GPT2MLP( (c_fc): Conv1D() (c_proj): Conv1D() (act): NewGELUActivation() (dropout): Dropout(p=0.1, inplace=False) ) ) ) (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True) ) ``` When examining the architecture, you will notice the standard transformer decoder block with the cross-attention removed. The GPT family also employs different linear layers (Conv1D) that transpose the weights. (Please note that this should not be confused with PyTorch's convolutional layer.) This design choice is specific to OpenAI, while other open-source large language models use the standard linear layer. The provided code illustrates how the pipeline can be used to incorporate the GPT2 model for text generation. It generates four different alternatives to complete the phrase ""This movie was a very.” ```python generator = pipeline(model=""gpt2"") output = generator(""This movie was a very"", do_sample=True, top_p=0.95, num_return_sequences=4, max_new_tokens=50, return_full_text=False) for item in output: print("">"", item['generated_text']) ``` ``` > hard thing to make, but this movie is still one of the most amazing shows I've seen in years. You know, it's sort of fun for a couple of decades to watch, and all that stuff, but one thing's for sure — > special thing and that's what really really made this movie special,"" said Kiefer Sutherland, who co-wrote and directed the film's cinematography. 
""A lot of times things in our lives get passed on from one generation to another, whether > good, good effort and I have no doubt that if it has been released, I will be very pleased with it."" Read more at the Mirror. > enjoyable one for the many reasons that I would like to talk about here. First off, I'm not just talking about the original cast, I'm talking about the cast members that we've seen before and it would be fair to say that none of ``` ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954201-transformers-architectures
183,Transformers Architectures,"# Transformers Architectures
## Conclusion
In this lesson, we explored the various types of transformer-based models and the areas where each is most effective. While LLMs may appear to be the ultimate solution for every task, it's essential to note that there are instances where smaller, more focused models can produce equally good results while operating more efficiently. Using a small model like DistilBERT on your local server to measure similarity could be more suitable for specific applications while offering a cost-effective alternative to using proprietary models and APIs; a minimal sketch of this idea follows below. Moreover, the transformer paper introduced an effective architecture. However, many variations of this architecture have been experimented with through minor code changes, such as different embedding sizes and hidden dimensions. Recent experiments have also shown that relocating the layer normalization so that it is applied before the attention mechanism (the pre-norm arrangement) can enhance the model's capabilities. Keep in mind that there could be slight variations in the architecture, especially for proprietary models like GPT-3 that have not released their code. In this [Notebook](https://colab.research.google.com/drive/1k2UF8wO0hYF8Xuj2udVhoRG9hmAx_oR8?usp=sharing), you can find the code for this lesson.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954201-transformers-architectures
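The following is an illustrative sketch of measuring sentence similarity with a small local model: it mean-pools DistilBERT token embeddings into sentence vectors and compares them with cosine similarity. The model name and the pooling strategy are reasonable assumptions for illustration, not the lesson's prescribed setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Small local model for similarity (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

def embed(text):
    # Mean-pool the last hidden states into a single sentence vector.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1)

a = embed("The movie was fantastic.")
b = embed("I really enjoyed the film.")
print(torch.nn.functional.cosine_similarity(a, b).item())
```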
184,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## Introduction
In this lesson, we'll explore the transformative shifts that have reshaped AI, examining the key features that set LLMs apart from their predecessors. We’ll see how scaling laws, emergent abilities, and innovative architectures have propelled LLMs to tackle complex tasks and define the current landscape of popular LLMs. ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
185,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## From Language Models to Large Language Models
Language models have undergone a transformative shift, from pre-trained language models (LMs) to the emergence of large language models (LLMs). LMs, like ELMo and BERT, initially captured context-aware word representations through pre-training and fine-tuning for specific tasks. However, the introduction of LLMs, exemplified by GPT-3 and PaLM, demonstrated that scaling model size and data can unlock emergent abilities, exceeding the capabilities of their smaller counterparts. These LLMs can tackle more complex tasks through in-context learning. The following image shows the trends of the cumulative numbers of arXiv papers containing the keyphrases “language model” and “large language model,” highlighting the growing interest in them in recent years. ![From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper](A%20Timeline%20of%20Large%20Language%20Models%20ceae1b1e9692468ca3cfb2d5925164d3/Screenshot_2023-08-09_at_11.15.46.png) From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
186,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## **Key Characterizing Features of LLMs**
Here are the main characteristics that differentiate LLMs from previous models: 1. **Scaling Laws for Enhanced Capacity:** Scaling laws play a crucial role in LLM development, indicating a relationship between model performance, model size, dataset size, and training compute. The [KM scaling laws](https://arxiv.org/abs/2001.08361) emphasize the impact of these factors, revealing distinct formulas for their influence on cross-entropy loss. The [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) provide an alternate approach, optimizing compute allocation between model and data size (a rough sketch of these relations is shown after this list). 2. **Emergent Abilities:** LLMs possess emergent abilities, defined as capabilities that manifest in large models but are absent in smaller counterparts. One prominent emergent ability is *in-context learning* (ICL), showcased by models like GPT-3. ICL allows LLMs to generate the expected outputs for a task based on natural language instructions or a few demonstrations, eliminating the need for further training. 3. **Instruction following**: LLMs can be finetuned to follow text instructions, which further enhances generalization to new tasks. 4. **Step-by-Step Reasoning:** LLMs can perform *step-by-step reasoning* using the [chain-of-thought (CoT)](https://arxiv.org/abs/2201.11903) prompting strategy. This mechanism enables them to solve complex tasks by breaking them down into intermediate reasoning steps, which is particularly beneficial for tasks involving multiple steps like mathematical word problems.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
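As a rough, hedged sketch of the Chinchilla result mentioned in point 1 above: the exponents and the compute approximation below are the commonly cited approximate values from the scaling-law literature, not figures from this lesson.

```latex
% Compute-optimal allocation: model size N and training tokens D should grow together.
N_{\mathrm{opt}}(C) \propto C^{a}, \qquad D_{\mathrm{opt}}(C) \propto C^{b}, \qquad a \approx b \approx 0.5,
% with training compute roughly C \approx 6\,N\,D \ \text{FLOPs}.
% Under a fixed budget C, doubling N should therefore be matched by roughly doubling D.
```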
187,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## A Timeline of the Most Popular LLMs
Here’s an overview of the timeline of the most popular LLMs in recent years. ![From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper](A%20Timeline%20of%20Large%20Language%20Models%20ceae1b1e9692468ca3cfb2d5925164d3/timeline.png) From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper **Here is a brief description of some of them.** - **[2018]** [GPT-1](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) GPT-1 (Generative Pre-Training 1) was introduced by OpenAI in 2018. It laid the foundation for the GPT-series models by employing a generative, decoder-only Transformer architecture. It combined unsupervised pretraining and supervised fine-tuning to predict the next word in natural language text. - **[2019]** [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) Building upon the architecture of GPT-1, GPT-2 was released in 2019 with an increased parameter scale of 1.5 billion. This model demonstrated potential for solving a variety of tasks using language text as a unified format for input, output, and task information. - **[2020]** [GPT-3](https://arxiv.org/abs/2005.14165) Released in 2020, GPT-3 marked a significant capacity leap by scaling the model to 175 billion parameters. It introduced the concept of in-context learning (ICL), enabling LLMs to understand tasks through few-shot or zero-shot learning. GPT-3 showcased excellent performance in numerous NLP tasks, including reasoning and domain adaptation, highlighting the potential of scaling up model size. - **[2021]** [Codex](https://en.wikipedia.org/wiki/OpenAI_Codex) Codex was introduced by OpenAI in July 2021 as a fine-tuned version of GPT-3 specifically trained on a large corpus of GitHub code. It demonstrated enhanced ability in solving programming and mathematical problems, showcasing the potential of training LLMs on specialized data. - **[2021]** [LaMDA](https://blog.google/technology/ai/lamda/) LaMDA (Language Model for Dialogue Applications) was introduced by researchers at Google. LaMDA focuses on enhancing dialog applications and dialog generation tasks. It has a significant number of parameters, with the largest model consisting of 137 billion parameters, making it slightly smaller than GPT-3. - **[2021]** [Gopher](https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval) In 2021, DeepMind introduced Gopher, a language model with an impressive parameter scale of 280 billion. Notably, Gopher demonstrated a remarkable capability to approach human expert performance on the Massive Multitask Language Understanding (MMLU) benchmark. However, like its predecessors, Gopher exhibited certain limitations, including tendencies for repetition, biases, and propagation of incorrect information. - **[2022]** [InstructGPT](https://arxiv.org/abs/2203.02155) In 2022, InstructGPT was proposed as an enhancement to GPT-3 for human alignment. It utilized reinforcement learning from human feedback (RLHF) to improve the model's instruction-following capacity and mitigate issues like generating harmful content. This approach proved valuable for training LLMs to align with human preferences. - **[2022]** [Chinchilla](https://arxiv.org/abs/2203.15556) Chinchilla, introduced in 2022 by DeepMind, is a family of large language models that build upon the discovered scaling laws of LLMs. 
With a focus on efficient utilization of compute resources, Chinchilla boasts 70 billion parameters and achieves a remarkable 67.5% accuracy on the MMLU benchmark—a 7% improvement over Gopher. - **[2022]** [PaLM](https://arxiv.org/abs/2204.02311) Pathways Language Model (PaLM) was introduced by Google Research in 2022, showcasing a leap in model scale with a whopping 540 billion parameters. Leveraging the proprietary Pathways system for distributed computation, PaLM exhibited great few-shot performance across an array of language understanding, reasoning, and code-related tasks. - **[2022]** [ChatGPT](https://openai.com/blog/chatgpt) In November 2022, OpenAI released ChatGPT, a conversation model based on GPT-3.5 and GPT-4. Specially optimized for dialogue, ChatGPT exhibited great abilities in communicating with humans, reasoning, and aligning with human values. - **[2023]** [LLaMA](https://arxiv.org/abs/2302.13971) LLaMA (Large Language Model Meta AI) emerged in February 2023 from Meta AI. It introduced a family of large language models available in varying sizes from 7 billion to 65 billion parameters. LLaMA's release marked a departure from the limited access trend, as its model weights were made available to the research community under a noncommercial license. Subsequent developments, including [Llama 2](https://arxiv.org/abs/2307.09288) and other chat models, further emphasized accessibility, this time with a license for",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
188,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## A Timeline of the Most Popular LLMs
commercial use. - **[2023]** [GPT-4](https://arxiv.org/abs/2303.08774) In March 2023, GPT-4 was released, extending text input to multimodal signals. With stronger capacities than GPT-3.5, GPT-4 demonstrated significant performance improvements on various tasks. If you want to dive deeper into these models, I suggest reading the paper “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf).” Here’s a table summarizing the architectural and training details of all the mentioned models (and others). ![From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper](A%20Timeline%20of%20Large%20Language%20Models%20ceae1b1e9692468ca3cfb2d5925164d3/Screenshot_2023-08-09_at_12.25.37.png) From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper Moreover, here’s an image showing the evolution of the LLaMA models into other fine-tuned models made by online communities, highlighting the great interest it sparked. ![From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper](A%20Timeline%20of%20Large%20Language%20Models%20ceae1b1e9692468ca3cfb2d5925164d3/Screenshot_2023-08-09_at_12.25.47.png) From “[A Survey of Large Language Models](https://arxiv.org/pdf/2303.18223.pdf)” paper ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
189,A Timeline of Large Language Models,"# A Timeline of Large Language Models
## Conclusion
In this lesson, we learned more about the transition from pre-trained language models (LMs) to the emergence of large language models (LLMs). We explored the key differentiating features of LLMs, including the influence of scaling laws and the manifestation of emergent abilities like in-context learning, step-by-step reasoning strategies, and instruction following. We also saw a brief timeline of the most popular LLMs: from the foundational GPT-1 to the revolutionary GPT-3, the specialized Codex, LaMDA, Gopher, and Chinchilla, to PaLM, ChatGPT, and LLaMA.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953589-a-timeline-of-large-language-models
190,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## Introduction
In this lesson, we will examine the main techniques for fine-tuning Large Language Models for superior performance on specific tasks. We explore why and how to fine-tune LLMs, the strategic importance of instruction fine-tuning, and several fine-tuning methods, such as Full Finetuning, Low-Rank Adaptation (LoRA), Supervised Finetuning (SFT), and Reinforcement Learning from Human Feedback (RLHF). We also touch upon the benefits of the Parameter-Efficient Fine-tuning (PEFT) approach using Hugging Face's PEFT library, promising both efficiency and performance gains in fine-tuning.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
191,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## Why We Finetune LLMs
While pretraining provides Large Language Models (LLMs) with a broad understanding of language, it doesn't equip them with the specialized knowledge needed for complex tasks. For instance, a pre-trained LLM may excel at generating text but encounter difficulties when tasked with sentiment analysis of financial news. This is where fine-tuning comes into play. Fine-tuning is the process of adapting a pretrained model to a specific task by further training it using task-specific data. For example, if we aim to make an LLM proficient in answering questions about medical texts, we would fine-tune it using a dataset comprising medical question-answer pairs. This process enables the model to recalibrate its internal parameters and representations to align with the intended task, enhancing its capacity to address domain-specific challenges effectively. However, conventional fine-tuning of LLMs can be resource-intensive and costly. It involves adjusting all the parameters of the pretrained LLM, which can number in the billions, necessitating significant computational power and time. Consequently, it's crucial to explore more efficient and cost-effective methods for fine-tuning, such as Low-Rank Adaptation (LoRA).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
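To make the idea of task-specific fine-tuning concrete, here is a minimal, illustrative sketch using Hugging Face's Trainer on a small model. The model choice, the imdb dataset, the subset sizes, and the hyperparameters are assumptions for illustration, not this course's prescribed recipe.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Adapt a small pretrained model to a specific task: binary sentiment classification.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # any labeled, task-specific dataset works here

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-sentiment",
                         per_device_train_batch_size=8,
                         num_train_epochs=1)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=dataset["test"].select(range(500)))
trainer.train()  # updates the model's parameters on the task-specific data
```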
192,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## A Reminder On Instruction Finetuning
Instruction fine-tuning is a specific type of fine-tuning that grants precise control over a model's behavior. The objective is to train a Language Model (LLM) to interpret prompts as instructions rather than simply treating them as text to continue generating. For example, when given the instruction, ""Analyze the sentiment of this text and tell us if it's positive,"" a model with instruction fine-tuning would perform sentiment analysis rather than continuing the text in some manner. This technique offers several advantages. It involves training models on tasks described using instructions, enabling LLMs to generalize to new tasks based on additional instructions. This approach circumvents the need for extensive amounts of task-specific data and relies on textual instructions to guide the learning process.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
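To illustrate the kind of data used for instruction fine-tuning, here is a hypothetical example of how a single training record is often structured; the field names and the prompt template are common conventions assumed for illustration, and real datasets differ.

```python
# One illustrative instruction-tuning record (field names are a common convention, not fixed).
example = {
    "instruction": "Analyze the sentiment of this text and tell us if it's positive.",
    "input": "The new update made the app much faster and easier to use.",
    "output": "The sentiment is positive: the text praises the update's speed and usability.",
}

# Records like this are typically flattened into a single prompt/response pair for training.
prompt = (f"### Instruction:\n{example['instruction']}\n\n"
          f"### Input:\n{example['input']}\n\n"
          f"### Response:\n")
print(prompt + example["output"])
```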
193,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## A Reminder of the Techniques For Finetuning LLMs
There are several techniques to make the finetuning process more efficient and effective: - **Full Finetuning:** This method involves adjusting all the parameters of the pretrained LLM to adapt it to a specific task. While effective, it is resource-intensive and requires extensive computational power; therefore, it’s rarely used. - **Low-Rank Adaptation (LoRA):** LoRA is a technique that aims to adapt LLMs to specific tasks and datasets while simultaneously reducing computational resources and costs. By learning low-rank update matrices for selected weight matrices of the LLM (see the brief formulation after this list), LoRA significantly reduces the number of parameters to be trained, thereby lowering the GPU memory requirements and training costs. We’ll also see QLoRA, a variant of LoRA that is more optimized and leverages quantization. Looking instead at the training signal used during finetuning, there are multiple methods, such as: - **Supervised Finetuning (SFT):** SFT involves doing standard supervised finetuning with a pretrained LLM on a small amount of demonstration data. This method is less resource-intensive than full finetuning but still requires significant computational power. - **Reinforcement Learning from Human Feedback (RLHF):** RLHF is a training methodology where models are trained to follow human feedback over multiple iterations. This method can be more effective than SFT, as it allows for continuous improvement based on human feedback. We’ll also see some alternatives to RLHF, such as Direct Preference Optimization (DPO) and Reinforcement Learning from AI Feedback (RLAIF).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
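As referenced in the LoRA bullet above, here is a brief sketch of the low-rank update as formulated in the LoRA paper; the rank r is a small hyperparameter chosen by the user.

```latex
% A frozen pretrained weight W_0 is augmented with a trainable low-rank update BA:
h = W_0 x + \Delta W\, x = W_0 x + B A x,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k).
% Only A and B are trained, so the trainable parameter count drops from d \cdot k to r\,(d + k).
```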
194,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## Efficient Finetuning with Hugging Face PEFT Library
Parameter-Efficient Fine-tuning (PEFT) approaches address the need for computational and storage efficiency in fine-tuning LLMs. Hugging Face developed the [PEFT library](https://github.com/huggingface/peft) specifically for this purpose. PEFT leverages architectures that only fine-tune a small number of additional model parameters while freezing most parameters of the pretrained LLMs, significantly reducing computational and storage costs. PEFT methods offer benefits beyond just efficiency. These methods have been proven to outperform standard fine-tuning methods, particularly in low-data situations, and provide improved generalization for out-of-domain scenarios. Furthermore, they contribute to the portability of models by generating tiny model checkpoints that require substantially less storage space compared to extensive full fine-tuning checkpoints. By integrating PEFT strategies, we make way for comparable performance gains to full fine-tuning with only a fraction of the trainable parameters. This, in effect, broadens our capacity to harness the prowess of LLMs, regardless of the hardware limitations we might encounter. Providing easy integration with the Hugging Face's [Transformers](https://github.com/huggingface/transformers) and [Accelerate](https://github.com/huggingface/accelerate) libraries, the PEFT library supports popular methods such as Low-Rank Adaptation (LoRA) and Prompt Tuning.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
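The snippet below is a minimal, illustrative sketch of wrapping a base model with a LoRA adapter via the PEFT library; the base model and the LoRA hyperparameters are assumptions chosen for illustration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Load a (small) base model and attach a LoRA adapter to it.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(base_model, lora_config)

# Only the adapter weights are trainable, typically well under 1% of the base parameters.
model.print_trainable_parameters()
```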
195,Techniques for Fine-Tuning LLMs,"# Techniques for Fine-Tuning LLMs
## Conclusion
In this lesson, we've learned that while pretraining equips LLMs with a broad understanding of language, fine-tuning is necessary to specialize these models for complex tasks. We've introduced various fine-tuning techniques, including Full Finetuning, Low-Rank Adaptation (LoRA), Supervised Finetuning (SFT), and Reinforcement Learning from Human Feedback (RLHF). We've also highlighted the importance of instruction fine-tuning for precise control over model behavior. Finally, we've examined the benefits of Parameter-Efficient Fine-tuning (PEFT) approaches, mainly using Hugging Face's PEFT library, which promises both efficiency and performance gains in fine-tuning. This equips us to harness the power of LLMs more effectively and efficiently, regardless of hardware limitations, and to adapt these models to a wide range of tasks and domains.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954486-techniques-for-fine-tuning-llms
196,Advanced Topics and Future Directions Module,"# Advanced Topics and Future Directions Module
## Advanced Topics and Future Directions
**Goals:** Equip students with an understanding of the latest trends in LLMs. This section examines the nuances of multimodal LLMs, scaling laws in training, sophisticated prompting frameworks like ReAct, and the complexities of training on generated data. Participants will see how these techniques redefine interactions with language models. FlashAttention, Sparse Attention, and ALiBi's roles in expanding the context window offer insights into LLM optimization strategies. The module also confronts the challenges of training on generated data, spotlighting the critical phenomenon of Model Collapse. - **Multimodal LLMs:** Discover the significance of multimodality in LLMs and the integration of varied data such as text and images. The lesson highlights popular models adept at multimodality and provides an overview of their functionalities. - **Scaling Laws in LLM Training**: To achieve compute-optimal training results, it's essential to maintain a harmonious balance between the size of the LLM and the number of training tokens. This session offers guidance on how to effectively scale both elements in tandem. - **ReAct framework and ChatGPT plugins**: The ReAct framework offers a sophisticated approach to enhancing interactions with language models, which also enables ChatGPT plugins. - **Expanding the context window**: This lesson details the workings of FlashAttention and Sparse Attention within the Transformer architecture, aiming for efficient computation. Emphasis is also given to ALiBi's significance in this context. - **Training on generated data: Model Collapse:** This lesson covers the problem of training on generated data, with particular emphasis on the phenomenon of Model Collapse. Understanding this topic is crucial, as training on bad data influences the effectiveness and reliability of LLMs. - **New Challenges in LLM Research**: Emerging challenges in LLM research cover agents, retriever architectures, larger context windows, efficient attention, and cost-effective pre-training and fine-tuning. Insights from recent studies guide the exploration, preparing students for the evolving LLM landscape. As we wrap up this comprehensive course on Training and Fine-Tuning Large Language Models, students have covered a vast spectrum of topics, from the basic architecture of LLMs to deployment, and from specialized fine-tuning methods to anticipated challenges in LLM research. With this knowledge, students are well-prepared to apply LLMs in practical scenarios, evaluate their performance effectively, innovate with fine-tuning strategies, and stay abreast of emerging trends and challenges in the domain of artificial intelligence and machine learning.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959863-advanced-topics-and-future-directions-module
197,Improving LLMs with RLHF Module,"# Improving LLMs with RLHF Module
## Improving LLMs with RLHF
Goals: Equip students with knowledge and practical skills for implementing RLHF. This short module is focused on Reinforcement Learning from Human Feedback (RLHF). We learn how to incorporate human feedback into the training process through a reward model that learns which output patterns to reinforce in the model's generations. - **Deep Dive into RLHF**: This lesson explores the mechanics and applications of RLHF. We will provide a robust understanding of how RLHF functions and its significance in LLM training and optimization. - **Improving trained models with RLHF**: This lesson provides a practical guide on RLHF as a fine-tuning technique for LLMs. We build upon our previous fine-tuning example by implementing RLHF.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960034-improving-llms-with-rlhf-module
198,Prompting and Few-Shot Prompting,"# Prompting and Few-Shot Prompting
## **Introduction**
In this lesson, we will explore prompting and prompt engineering, which allow us to interact with LLMs effectively for various applications. We can leverage LLMs to perform tasks such as answering questions, text generation, and more by crafting specific prompts. We will delve into zero-shot prompting, where the model produces results without explicit examples, and then transition to in-context learning and few-shot prompting, where the model learns from demonstrations to handle complex tasks with minimal training data.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954250-prompting-and-few-shot-prompting
199,Prompting and Few-Shot Prompting,"# Prompting and Few-Shot Prompting
## Prompting and Prompt Engineering
Prompting is a very important technique that involves designing and optimizing prompts to interact effectively with LLMs for various applications. The process of **prompt engineering** enables developers and researchers to harness the capabilities of LLMs and utilize them for tasks such as answering questions, arithmetic reasoning, text generation, and more. At its core, prompting involves presenting a specific task or instruction to the language model, which then generates a response based on the information provided in the prompt. A prompt can be as simple as a question or instruction or include additional context, examples, or inputs to guide the model towards producing desired outputs. The quality of the results largely depends on the precision and relevance of the information provided in the prompt. Let's incorporate these ideas in the code examples. Before running them, remember to load your environment variables from your `.env` file as follows. ```python from dotenv import load_dotenv load_dotenv() ``` ### Example: Story Generation In this example, the prompt sets up the start of a story, providing initial context (""a world where animals could speak"") and a character (""a courageous mouse named Benjamin""). The model's task is to generate the rest of the story based on this prompt. Note that in this example we are defining separately a `prompt_system` and a `prompt`. This is because the OpenAI API works this way, requiring a “system prompt” to steer the model behaviour. This is different from other LLMs that require only a standard prompt. ```python import openai prompt_system = ""You are a helpful assistant whose goal is to help write stories."" prompt = """"""Continue the following story. Write no more than 50 words. Once upon a time, in a world where animals could speak, a courageous mouse named Benjamin decided to"""""" response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": prompt_system}, {""role"": ""user"", ""content"": prompt} ] ) print(response.choices[0]['message']['content']) ``` ``` embark on a journey to find the legendary Golden Cheese. With determination in his heart, he ventured through thick forests and perilous mountains, facing countless obstacles. Little did he know that his bravery would lead him to the greatest adventure of his life. ``` ### Example: **Product Description** Here, the prompt is a request for a product description with key details (""luxurious, hand-crafted, limited-edition fountain pen made from rosewood and gold""). The model is tasked with writing an appealing product description based on these details. ```python import openai prompt_system = ""You are a helpful assistant whose goal is to help write product descriptions."" prompt = """"""Write a captivating product description for a luxurious, hand-crafted, limited-edition fountain pen made from rosewood and gold. Write no more than 50 words."""""" response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": prompt_system}, {""role"": ""user"", ""content"": prompt} ] ) print(response.choices[0]['message']['content']) ``` ``` Experience the epitome of elegance with our luxurious limited-edition fountain pen. Meticulously handcrafted from exquisite rosewood and shimmering gold, this writing instrument exudes sophistication in every stroke. Elevate your writing experience to new heights with this opulent masterpiece. 
```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954250-prompting-and-few-shot-prompting
200,Prompting and Few-Shot Prompting,"# Prompting and Few-Shot Prompting
## Zero-Shot Prompting
In the context of prompting, “**zero-shot prompting**” is where we directly ask for the result without providing reference examples for the task. For many tasks, LLMs are smart enough to produce great results. This is exactly what we did in the examples above. Here’s a new example where we ask an LLM to write a short poem about summer. ```python import openai prompt_system = ""You are a helpful assistant whose goal is to write short poems."" prompt = """"""Write a short poem about {topic}."""""" response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": prompt_system}, {""role"": ""user"", ""content"": prompt.format(topic=""summer"")} ] ) print(response.choices[0]['message']['content']) ``` ``` In the realm of golden rays, Summer dances in perfect sway, Nature's canvas aglow with hues, Kissing warmth upon the dews. Breezes whisper through the trees, Serenading the humming bees, Joyful laughter fills the air, As sunshine gleams without a care. Sand between our toes, so fine, Waves crashing in rhythmic rhyme, Picnics filled with sweet delight, Summer's pleasures, pure and bright. Days stretch long, nights invite, Stargazing dreams take flight, Fireflies dance in twilight's haze, Summer's magic shall never fade. ``` The generated poem is nice, but what if we have a specific style of the poem we’d like it to generate? We could try the descriptive approach or simply provide relevant examples of what we need in the prompt.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954250-prompting-and-few-shot-prompting
201,Prompting and Few-Shot Prompting,"# Prompting and Few-Shot Prompting
## In-Context Learning And Few-Shot Prompting
In the context of LLMs, **in-context learning** is a powerful approach where the model learns from demonstrations or exemplars provided within the prompt. **Few-shot prompting** is a technique under in-context learning that involves giving the language model a few examples or demonstrations of the task at hand to help it generalize and perform better on complex tasks. Few-shot prompting allows language models to learn from a limited amount of data, making them more adaptable and capable of handling tasks with minimal training samples. Instead of relying solely on zero-shot capabilities (where the model predicts outputs for tasks it has never seen before), few-shot prompting leverages the in-context demonstrations to improve performance. In few-shot prompting, the prompt typically includes multiple questions or inputs along with their corresponding answers. The language model learns from these examples and generalizes to respond to similar queries. ```python import openai prompt_system = ""You are a helpful assistant whose goal is to write short poems."" prompt = """"""Write a short poem about {topic}."""""" examples = { ""nature"": ""Birdsong fills the air,\nMountains high and valleys deep,\nNature's music sweet."", ""winter"": ""Snow blankets the ground,\nSilence is the only sound,\nWinter's beauty found."" } response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": prompt_system}, {""role"": ""user"", ""content"": prompt.format(topic=""nature"")}, {""role"": ""assistant"", ""content"": examples[""nature""]}, {""role"": ""user"", ""content"": prompt.format(topic=""winter"")}, {""role"": ""assistant"", ""content"": examples[""winter""]}, {""role"": ""user"", ""content"": prompt.format(topic=""summer"")} ] ) print(response.choices[0]['message']['content']) ``` ``` Golden sunbeams shine, Warm sands between toes divine, Summer memories, mine. ``` ### **Limitations of Few-shot Prompting** Despite its effectiveness, few-shot prompting does have limitations, especially for more complex reasoning tasks. In such cases, advanced techniques like chain-of-thought prompting have gained popularity. Chain-of-thought prompting breaks down complex problems into multiple steps and provides demonstrations for each step, enabling the model to reason more effectively.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954250-prompting-and-few-shot-prompting
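To give a flavor of the chain-of-thought prompting mentioned above, here is a small illustrative prompt; the arithmetic example and its worked demonstration are made up for illustration and are not part of the lesson's code.

```python
# Illustrative chain-of-thought style prompt: the demonstration shows intermediate reasoning steps,
# encouraging the model to reason step by step before answering the new question.
cot_prompt = """Q: A shop sells pens in packs of 12. If Dana buys 3 packs and gives away 7 pens, how many pens does she have left?
A: 3 packs contain 3 x 12 = 36 pens. Giving away 7 leaves 36 - 7 = 29 pens. The answer is 29.

Q: A library receives 4 boxes of 25 books and lends out 18 books. How many books remain?
A:"""
print(cot_prompt)
```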
202,Prompting and Few-Shot Prompting,"# Prompting and Few-Shot Prompting
## Conclusion
In this lesson, we explored prompting in the context of language models. Prompting involves presenting specific tasks or instructions to an LLM to generate desired responses. We learned that the quality of results largely depends on the precision and relevance of the information provided in the prompt. Through code examples, we saw how to use prompts for story generation and product descriptions. We also explored zero-shot prompting, where the model can perform tasks without explicit reference examples. However, we introduced few-shot prompting, a powerful in-context learning approach to improve the model's performance on more complex tasks. Few-shot prompting allows the model to learn from a limited number of examples, making it more adaptable and capable of handling tasks with minimal training data. However, we also recognized that few-shot prompting has its limitations, particularly for complex reasoning tasks. In such cases, advanced techniques like chain-of-thought prompting are gaining popularity by breaking down complex problems into multiple steps with demonstrations for each step.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954250-prompting-and-few-shot-prompting
203,Domain-Specific LLMs,"# Domain-Specific LLMs
## Introduction
Domain-specific Language Models are tailored for specific industries or use cases. Unlike generalized language models attempting to comprehend a wide array of topics, domain-specific LLMs are finely tuned to understand a particular domain's unique terminology, context, and intricacies. In this lesson, we’ll see what it takes to create a domain-specific LLM, how to do it, and what popular domain-specific LLMs are.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
204,Domain-Specific LLMs,"# Domain-Specific LLMs
## When Do Domain-Specific LLMs Make Sense?
Domain-specific LLMs offer distinct advantages over their generalized counterparts in scenarios where precision, accuracy, and context are very important. They excel in industries where specialized knowledge is essential for generating relevant and accurate outputs. Moreover, they can be preferable in certain scenarios for safety reasons (constraining what the model knows about), and a smaller model has lower latency and is cheaper to host and run inference with than a general-purpose LLM, especially when it serves a single task. So, what are some specific industries where domain-specific LLMs could work well? Two notable examples are: 1. **Finance**: Domain-specific LLMs can provide personalized investment recommendations based on an individual's financial goals, optimizing investment strategies. 2. **Healthcare**: A domain-specific LLM trained on medical data can comprehend complex medical queries and offer accurate advice, enhancing patient care and medical consultation. Why don’t we use general-purpose LLMs in these fields, too? General-purpose LLMs gather their knowledge from their pre-training phase and are “steered” into valuable assistants in the finetuning phase. GPT-4 may know the correct answer to a medical question since it may be trained on medical papers too, but it may not be appropriately steered into a good “medical assistant” that would also ask meaningful questions. However, we’re still in the infancy of LLM research, so it’s still hard to reason precisely about how they work. As long as the required knowledge was in the pre-training data, in principle, the LLM is likely able to behave in the “correct” way if appropriately finetuned. If we think the required knowledge wasn’t in the training data, we’d have to pre-train a new LLM from scratch. If we think otherwise, then we could focus on finetuning. In finance, Bloomberg recently trained BloombergGPT, a proprietary LLM, from scratch using a mix of general-purpose and financial data.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
205,Domain-Specific LLMs,"# Domain-Specific LLMs
## **BloombergGPT**
[BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) is a proprietary domain-specific 50B LLM trained for the financial domain. Its **training dataset** is called “FinPile” and is made of many English financial documents derived from diverse sources, encompassing financial news, corporate filings, press releases, and even social media taken from Bloomberg archives (thus, it’s a proprietary dataset). Data ranges from company filings to market-relevant news from March 2007 to July 2022. The dataset is further augmented by the integration of publicly available general-purpose text, creating a balance between domain specificity and the broader linguistic landscape. In the end, the final training corpus is approximately half domain-specific (51.27%) and half general-purpose (48.73%). The model is **based on the BLOOM model.** It’s a decoder-only transformer with 70 layers of decoder blocks, multi-head self-attention, layer normalization, and feed-forward networks equipped with the GELU non-linear function. The model size and training data budget follow the Chinchilla scaling laws. BloombergGPT outperforms other models like GPT-NeoX, OPT-66B, and BLOOM-176B on financial tasks. However, when compared with GPT-3 on general-purpose tasks, GPT-3 still achieves better results.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
206,Domain-Specific LLMs,"# Domain-Specific LLMs
## **The FinGPT Project**
The [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) project aims to bring the power of LLMs into the world of finance. It aims to do so in two ways: 1. Providing open finance datasets. 2. Finetuning open-source LLMs on finance datasets for several use cases. Many datasets collected by FinGPT are specifically for financial sentiment analysis. What do we mean by “financial sentiment”? For example, the sentence “Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales” merely states facts; therefore, its “normal” sentiment would be neutral. However, it states that the company's operating profit rose, which is good news for someone who wants to invest in that company, and therefore the financial sentiment is “positive.” Similarly, the sentence “The international electronic industry company Elcoteq has laid off tens of employees from its Tallinn facility” has “negative” financial sentiment. Some datasets for financial sentiment classification are: - [Financial Phrasebank](https://huggingface.co./datasets/financial_phrasebank): It contains 4840 sentences from English financial news, categorized by financial sentiment (written by agreement between 5-8 annotators). - [Financial Opinion Mining and Question Answering (FIQA)](https://huggingface.co./datasets/pauri32/fiqa-2018)**:** Consists of 17k sentences from microblog headlines and financial news, classified with financial sentiment. - [Twitter Financial Dataset (sentiment)](https://huggingface.co./datasets/zeroshot/twitter-financial-news-sentiment): about 10k tweets with financial sentiment. Here, you can find a [notebook](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3) showing how to finetune a model with these datasets and how to use the final model for predictions.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
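As a small illustrative sketch of working with one of these datasets, the snippet below loads the Financial Phrasebank from the Hugging Face Hub; the configuration name and the label mapping noted in the comments are taken from the dataset card and should be treated as assumptions to verify.

```python
from datasets import load_dataset

# Load the subset of sentences where all annotators agreed on the financial sentiment label.
dataset = load_dataset("financial_phrasebank", "sentences_allagree", split="train")

print(dataset)                   # number of rows and column names
print(dataset[0]["sentence"])    # an example financial news sentence
print(dataset[0]["label"])       # integer label; per the dataset card: 0 = negative, 1 = neutral, 2 = positive
```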
207,Domain-Specific LLMs,"# Domain-Specific LLMs
## Med-PaLM for the Medical Domain
[Med-PaLM](https://sites.research.google/med-palm/) is a finetuned version of PaLM (by Google) specifically for the medical domain. The first iteration of Med-PaLM, introduced in late 2022 and subsequently published in Nature in July 2023, marked a milestone by surpassing the pass mark on US Medical License Exam (USMLE) style questions. Building upon this success, Google Health unveiled the latest iteration, [Med-PaLM 2](https://blog.google/technology/health/ai-llm-medpalm-research-thecheckup/), during its annual health event, The Check Up, in March 2023. Med-PaLM 2 represents a substantial leap forward, achieving an accuracy rate of 86.5% on USMLE-style questions.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
208,Domain-Specific LLMs,"# Domain-Specific LLMs
## Conclusion
Domain-specific LLMs are specialized tools finely tuned for domain expertise. They are indicated for specific fields like finance and healthcare, where nuanced understanding is very important. Examples include BloombergGPT for finance, FinGPT for financial sentiment analysis, and Med-PaLM for medical inquiries.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954467-domain-specific-llms
209,Datasets for Training LLMs,"# Training LLMs Module
## Training LLMs
Goals: Provide a hands-on coding experience for training an LLM from scratch in the cloud. Introduce domain-specific LLMs and guide students on benchmarking their custom LLMs. As we transition into the practical aspects, the focus shifts to training LLMs in the cloud and the significance of efficient scaling techniques. With a focus on benchmarking LLMs and the strategic application of domain-specific models in various sectors, it provides a meticulous understanding of tools like Deep Lake, their role in refining LLM training, and the importance of dataset curation. - **When to Train an LLM from Scratch**: This lesson will help with the decision of when to train an LLM from scratch versus utilizing a pre-trained model and the tradeoffs of using both proprietary and open-source models. Specific references to models like BloombergGPT, FinGPT, and LawGPT will be highlighted, offering real-world context. We also address the ongoing debate about the advantages and challenges of training domain-specific LLMs. - **LLMOps**: This lesson touches upon LLMOps, a specialized practice catering to the operational needs of Large Language Models. It is imperative to have dedicated operations for LLMs to streamline deployment, maintenance, and scaling. We will underscore the significance of tools like Weights & Biases in managing and optimizing LLMs, emphasizing their role in modern LLMOps practices. - **Overview of the Training Process**: This lesson provides sequential steps essential to an LLM training process. We will gather and refine data, then move to model initialization and set training parameters, often using the Trainer class. The lesson continues with monitoring via an evaluation dataset. - **Deep Lake and Data Loaders**: This lesson covers Deep Lake and its affiliated data loaders and their role in the LLM training and finetuning process. We will discuss the utility of these tools and gain insights into how they streamline data handling and model optimization. - **Datasets for Training LLMs**: This lesson dives into diverse datasets for LLM training in text and coding. Students learn the intricacies of curating specialized datasets, with an example of storing data in Deep Lake. We emphasize on data quality, referencing the ""Textbooks Are All You Need"" research. - **Train an LLM in the Cloud**: The lesson lays out the process of training LLMs in the cloud using the Hugging Face Accelerate library and Lambda. We offer practical insights into integrating these platforms with hands-on guidance on leveraging data from a Deep Lake dataset. - **Tips for Training LLMs at Scale:** This lesson offers students valuable strategies for efficiently scaling LLM training. It emphasizes advanced techniques and optimizations. - **Benchmarking your own LLM**: The lesson centers around the importance of benchmarking LLMs. We will discuss tools such as InstructEval and Eleuther’s Language Model Evaluation Harness. We also provide hands-on experience in assessing their LLM's performance against recognized benchmarks. - **Domain-specific LLMs**: The lesson covers the strategic use of domain-specific LLMs. We explore scenarios where these specialized models are most effective, with a spotlight on popular instances like FinGPT and BloombergGPT. By analyzing these cases, we will understand the nuances of utilizing domain-specific LLMs to cater to unique industry demands. Upon completing this comprehensive module, participants have gained insights into the multifaceted landscape of training LLMs. 
From understanding the tradeoffs between training from scratch versus leveraging pre-existing models to LLMOps, students are well-equipped to benchmark their LLMs effectively and understand the nuanced value of domain-specific models in meeting specific industry requirements. The following section will introduce learners to the complexities that come with finetuning techniques and practical hands-on projects.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954377-datasets-for-training-llms
210,Expanding the Context Window,"# Expanding the Context Window
## Introduction
In this lesson, we will discuss context windows in language models, their importance, and the limitations of the original Transformer architecture in handling large context lengths. We explore various optimization techniques that have been developed to expand the context window, including ALiBi Positional Encoding, Sparse Attention, FlashAttention, Multi-Query Attention, and the use of large RAM GPUs. We also introduce the latest advancements in this field, such as FlashAttention-2 and LongNet, which aim to push the context window to an unprecedented scale.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
211,Expanding the Context Window,"# Expanding the Context Window
## The Importance of The Context Length
The context window refers to the number of input tokens the model can process simultaneously. In current models like [GPT-4](https://en.wikipedia.org/wiki/GPT-4), this context window is around 32K tokens. To put this into perspective, this roughly translates to the size of 50 pages. However, recent advancements have pushed this limit to an impressive 100K tokens (check [Claude by Anthropic](https://www.anthropic.com/index/100k-context-windows)), equivalent to 156 pages. The context length of an LLM is a critical factor for several reasons. Firstly, it allows the model to process larger amounts of data at once, providing a more comprehensive understanding of the context. This is particularly useful when you want to feed a large amount of custom data into an LLM and ask questions about this specific data. For instance, you might want to input a large document related to a specific company or problem and ask the model questions about this document. With a larger context window, the LLM can scan and retain more of this custom information, leading to more accurate and personalized responses.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
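Since context windows are measured in tokens rather than words or pages, a quick way to gauge how much of the window a document consumes is to count its tokens. The snippet below is an illustrative sketch using the tiktoken tokenizer; the choice of encoding is an assumption.

```python
import tiktoken

# Count how many tokens a piece of text would occupy in the context window.
encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI chat models
text = ("The context window refers to the number of input tokens "
        "the model can process simultaneously.")
num_tokens = len(encoding.encode(text))
print(f"{num_tokens} tokens")  # rule of thumb: one token is roughly 3/4 of an English word
```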
212,Expanding the Context Window,"# Expanding the Context Window
## Limitations of the Original Transformer Architecture
The original Transformer architecture, however, has some limitations when it comes to handling large context lengths. The main issue lies in the computational complexity of the Transformer architecture. Specifically, the attention layer computations in the Transformer architecture have a quadratic time and space complexity with respect to the number of input tokens $n$. This means that as the context length increases, the computational resources required for training and inference grow quadratically. To understand this better, let's break down the computational complexity of the Transformer architecture. The complexity of the attention layer in the Transformer model is $O(n^2d + nd^2)$, where $n$ is the context length (number of input tokens) and $d$ is the embedding size. This complexity arises from two main operations in the attention layer: linear projections to get Query, Key, and Value matrices (complexity ~ $O(nd^2)$) and multiplications of these matrices (complexity ~ $O(n^2d)$). As the context length or embedding size increases, the computational complexity grows quadratically, making it increasingly challenging to process larger context lengths.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
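To see how quickly the quadratic term comes to dominate, here is a small illustrative calculation of the two terms for a few context lengths; the embedding size of 4096 is an arbitrary example value.

```python
d = 4096  # embedding size (illustrative)
for n in (2_048, 32_768, 131_072):  # context lengths
    attn = n * n * d  # ~ O(n^2 d): attention score computation and value mixing
    proj = n * d * d  # ~ O(n d^2): Q, K, V linear projections
    print(f"n={n:>7}: n^2*d = {attn:.2e}, n*d^2 = {proj:.2e}")
# Going from 2K to 128K tokens multiplies the n^2*d term by 4096x but the n*d^2 term only by 64x.
```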
213,Expanding the Context Window,"# Expanding the Context Window
## Optimization Techniques to Expand the Context Window
Despite these challenges, researchers have developed several optimization techniques to speed up the Transformer and increase the context length to 100K tokens. Let's explore some of these techniques: 1. **[ALiBi Positional Encoding](https://arxiv.org/abs/2108.12409)**: The original Transformer uses Positional Sinusoidal Encoding, which lacks the ability to extrapolate to larger context lengths. ALiBi, or Attention with Linear Biases, is a positional encoding technique that can be used to train the model on a small context and then fine-tune it on a larger one. 2. **[Sparse Attention](https://ai.googleblog.com/2021/03/constructing-transformers-for-longer.html)**: This technique reduces the number of computations by considering only some tokens when calculating the attention scores. This makes the computation linear with respect to n, significantly reducing the computational complexity. 3. **[FlashAttention](https://arxiv.org/abs/2205.14135)**: This is an efficient implementation of the attention layer for GPU. It optimizes the memory utilization of the GPU by splitting the input matrices into blocks and computing the attention output with respect to these blocks. 4. **[Multi-Query Attention (MQA)](https://arxiv.org/pdf/1911.02150.pdf)**: MQA optimizes the memory consumption of the key/value decoder cache by sharing weights across all attention heads when linearly projecting Key and Value matrices. 5. **Large RAM GPUs**: You need a lot of RAM in the GPU to fit a large context. Therefore, models with larger context windows are often trained on GPUs with large RAM, such as 80GB A100 GPUs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
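As a rough illustration of the ALiBi idea from point 1 above, the sketch below builds the linear, distance-based bias added to one attention head's scores before the softmax; the slope value is an arbitrary example, and in practice each head uses a different slope (with future positions still masked causally).

```python
import torch

# ALiBi-style bias for one head: penalize attention to distant (earlier) positions linearly.
seq_len, slope = 6, 0.5
pos = torch.arange(seq_len)
distance = (pos[:, None] - pos[None, :]).clamp(min=0)  # how far back each key is from the query
alibi_bias = -slope * distance                         # added to attention scores before softmax
print(alibi_bias)
```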
214,Expanding the Context Window,"# Expanding the Context Window
## FlashAttention-2
Building on the success of FlashAttention, researchers have recently developed [FlashAttention-2,](https://crfm.stanford.edu/2023/07/17/flash2.html?utm_content=bufferca8a7&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer) a more efficient version of the algorithm that further optimizes the attention layer's speed and memory usage. This new version has been completely rewritten from scratch, leveraging the new primitives from Nvidia. The result is a version that is about 2x faster than its predecessor, reaching up to 230 TFLOPs/s on A100 GPUs. FlashAttention-2 introduces several improvements over the original FlashAttention. - Firstly, it reduces the number of non-matmul FLOPs, which are 16x more expensive than matmul FLOPs, by tweaking the algorithm to spend more time on matmul FLOPs. - Secondly, it optimizes parallelism by parallelizing over batch size, number of heads, and the sequence length dimension. This results in significant speedup, especially for long sequences. - Lastly, it improves work partitioning within each thread block to reduce the amount of synchronization and communication between different warps, resulting in fewer shared memory reads/writes. - In addition to these improvements, FlashAttention-2 also introduces new features, such as support for head dimensions up to 256 and multi-query attention (MQA), further expanding the context window. With these advancements, FlashAttention-2 is a step forward in expanding the context window (without overcoming the fundamental limitations of the original Transformer architecture).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
215,Expanding the Context Window,"# Expanding the Context Window
## LongNet: A Leap Towards Billion-Token Context Window
Building on the advancements in Transformer optimization, a recent innovation comes from the paper ""[LONGNET: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/pdf/2307.02486.pdf)"". This paper introduces a novel approach to handling the computational complexity of the Transformer architecture, pushing the context window potentially to an unprecedented 1 billion tokens. The core innovation in LongNet is the introduction of ""dilated attention.” This novel attention mechanism expands the attentive field exponentially as the distance between tokens grows, thereby decreasing attention allocation exponentially as the distance increases. This design principle helps to balance the limited attention resources with the necessity to access every token in the sequence. ![Image from the paper ""[LONGNET: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/pdf/2307.02486.pdf)"". Building blocks of dilated attention used in LONGNET. It consists of a series of attention patterns for modeling short- and long-range dependency. The number of attention patterns can be extended according to the sequence length.](Expanding%20the%20Context%20Window%208e1fbda8eb48406e8a358960164bf9df/Screenshot_2023-08-17_at_14.48.55.png) Image from the paper ""[LONGNET: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/pdf/2307.02486.pdf)"". Building blocks of dilated attention used in LONGNET. It consists of a series of attention patterns for modeling short- and long-range dependency. The number of attention patterns can be extended according to the sequence length. The dilated attention mechanism in LongNet achieves a linear computational complexity, a significant improvement over the quadratic complexity of the standard Transformer. ![Image from the paper ""[LONGNET: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/pdf/2307.02486.pdf)"". Comparison of computation complexity among different methods. N is the sequence length, and d is the hidden dimension.](Expanding%20the%20Context%20Window%208e1fbda8eb48406e8a358960164bf9df/Screenshot_2023-08-17_at_14.47.38.png) Image from the paper ""[LONGNET: Scaling Transformers to 1,000,000,000 Tokens](https://arxiv.org/pdf/2307.02486.pdf)"". Comparison of computation complexity among different methods. N is the sequence length, and d is the hidden dimension.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
216,Expanding the Context Window,"# Expanding the Context Window
## Conclusion
In this lesson, we examined the limitations of the original Transformer architecture in handling large context lengths, primarily due to its quadratic computational complexity. We then explored various optimization techniques developed to overcome these limitations, including ALiBi Positional Encoding, Sparse Attention, FlashAttention, Multi-Query Attention, and the use of large RAM GPUs. We also discussed the latest advancements in this field, such as FlashAttention-2, which further optimizes the speed and memory usage of the attention layer, and LongNet, a novel approach that introduces ""dilated attention"" to potentially expand the context window to an unprecedented 1 billion tokens. These advancements are critical in pushing the boundaries of language models, enabling them to process larger amounts of data at once and providing a more comprehensive understanding of the context, leading to more accurate and personalized responses.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959925-expanding-the-context-window
217,Benchmarking Your Own LLM,"# Benchmarking Your Own LLM
## Introduction
In a previous lesson, we touched upon several methodologies for assessing the effectiveness of different language models. Even with these methodologies available, evaluating a large language model continues to pose significant challenges. The primary challenge stems from the inherent subjectivity in determining what constitutes a good answer. Consider a generative task such as summarization: it is hard to define a single summary as the definitive correct answer, since many different summaries can be equally valid. This issue is present in every generative task, making evaluation a pervasive source of difficulty. In this lesson, we’ll see how benchmarks are a candidate solution to this problem.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954391-benchmarking-your-own-llm
218,Benchmarking Your Own LLM,"# Benchmarking Your Own LLM
## Benchmarks Over Several Tasks
A practical solution is to curate a set of benchmarks that evaluate the model's performance across various tasks. The benchmarks encompass assessments of world knowledge, following complex instructions, arithmetic, programming, and more. Several leaderboards exist to monitor the progress of LLMs, including the “*[Open LLM Leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)*” and the “*[InstructEval Leaderboard](https://declare-lab.net/instruct-eval/)*.” In some cases, these leaderboards share similar metrics. Here are some examples of tasks tested in these benchmarks (a sketch for loading one of them follows this list): - **[AI2 Reasoning Challenge](https://allenai.org/data/arc)** (ARC): A dataset composed exclusively of natural, grade-school science questions written for human tests. - **[HumanEval](https://paperswithcode.com/dataset/humaneval)**: Used to measure program synthesis from docstrings. It includes 164 original programming problems assessing language comprehension, algorithms, and math, some resembling software interview questions. - **[HellaSwag](https://paperswithcode.com/dataset/hellaswag)**: A benchmark for commonsense inference that remains difficult for state-of-the-art models: while the questions are trivial for humans (>95% accuracy), models struggle to exceed 48% accuracy. - **[Measuring Massive Multitask Language Understanding](https://paperswithcode.com/dataset/mmlu)** (MMLU): An evaluation of text models' multitask accuracy, covering 57 tasks, including math, US history, computer science, law, etc. High accuracy requires extensive knowledge of the world and problem-solving ability. - **[TruthfulQA](https://paperswithcode.com/dataset/truthfulqa)**: A truthfulness benchmark designed to assess the accuracy of language models in generating answers to questions. It consists of 817 questions across 38 categories, encompassing topics such as health, law, finance, and politics.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954391-benchmarking-your-own-llm
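Most of these benchmark datasets are openly available, so you can inspect their samples directly. Here is a rough sketch of loading one of them with the Hugging Face `datasets` library; the field names follow the HellaSwag dataset card and may differ for other benchmarks.

```python
from datasets import load_dataset

# Download the HellaSwag validation split from the Hugging Face Hub.
hellaswag = load_dataset("hellaswag", split="validation")

sample = hellaswag[0]
print(sample["ctx"])       # the context the model must complete
print(sample["endings"])   # four candidate endings
print(sample["label"])     # index of the correct ending
```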
220,Benchmarking Your Own LLM,"# Benchmarking Your Own LLM
## **Language Model Evaluation Harness**
'wmt20-cs-en', 'wmt20-de-en', 'wmt20-de-fr', 'wmt20-en-cs', 'wmt20-en-de', 'wmt20-en-iu', 'wmt20-en-ja', 'wmt20-en-km', 'wmt20-en-pl', 'wmt20-en-ps', 'wmt20-en-ru', 'wmt20-en-ta', 'wmt20-en-zh', 'wmt20-fr-de', 'wmt20-iu-en', 'wmt20-ja-en', 'wmt20-km-en', 'wmt20-pl-en', 'wmt20-ps-en', 'wmt20-ru-en', 'wmt20-ta-en', 'wmt20-zh-en', 'wnli', 'wsc', 'wsc273', 'xcopa_et', 'xcopa_ht', 'xcopa_id', 'xcopa_it', 'xcopa_qu', 'xcopa_sw', 'xcopa_ta', 'xcopa_th', 'xcopa_tr', 'xcopa_vi', 'xcopa_zh', 'xnli_ar', 'xnli_bg', 'xnli_de', 'xnli_el', 'xnli_en', 'xnli_es', 'xnli_fr', 'xnli_hi', 'xnli_ru', 'xnli_sw', 'xnli_th', 'xnli_tr', 'xnli_ur', 'xnli_vi', 'xnli_zh', 'xstory_cloze_ar', 'xstory_cloze_en', 'xstory_cloze_es', 'xstory_cloze_eu', 'xstory_cloze_hi', 'xstory_cloze_id', 'xstory_cloze_my', 'xstory_cloze_ru', 'xstory_cloze_sw', 'xstory_cloze_te', 'xstory_cloze_zh', 'xwinograd_en', 'xwinograd_fr', 'xwinograd_jp', 'xwinograd_pt', 'xwinograd_ru', 'xwinograd_zh'] ``` To execute the evaluation, you must use the `main.py` file previously cloned from GitHub. Ensure that you are in the same directory as the specified file to execute the command successfully. By running the provided command, you will use the `facebook/opt-1.3b` model and evaluate its performance on the `hellaswag` dataset using GPU acceleration. (Note that the displayed output is truncated. For the complete output, feel free to explore the attached notebook.) ```bash python main.py \ --model hf-causal \ --model_args pretrained=facebook/opt-1.3b \ --tasks hellaswag \ --device cuda:0 ``` ```python Running loglikelihood requests 100% 40145/40145 [29:44<00:00, 22.50it/s] { ""results"": { ""hellaswag"": { ""acc"": 0.4146584345747859, ""acc_stderr"": 0.00491656121359129, ""acc_norm"": 0.5368452499502091, ""acc_norm_stderr"": 0.004976214989483508 } }, ""versions"": { ""hellaswag"": 0 }, ""config"": { ""model"": ""hf-causal"", ""model_args"": ""pretrained=facebook/opt-1.3b"", ""num_fewshot"": 0, ""batch_size"": null, ""batch_sizes"": [], ""device"": ""cuda:0"", ""no_cache"": false, ""limit"": null, ""bootstrap_iters"": 100000, ""description_dict"": {} } } hf-causal (pretrained=facebook/opt-1.3b), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |---------|------:|--------|-----:|---|-----:| |hellaswag| 0|acc |0.4147|± |0.0049| | | |acc_norm|0.5368|± |0.0050| ``` For a more in-depth exploration, the `--model` argument offers three options to choose from: `hf-causal` for specifying the language model, `hf-causal-experimental` for utilizing multiple GPUs, and `hf-seq2seq` for evaluating encoder-decoder models. Additionally, the `--model_args` parameter can be used to pass any additional arguments to the model. For instance, to employ a specific revision of the model with the float data type, use the following input: `--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=""float""`. The arguments vary based on the chosen model and the inputs accepted by Hugging Face, as this library leverages Hugging Face to load open-source models. Alternatively, you can use the following to specify the engine type while evaluating OpenAI models: `--model_args engine=davinci`. Lastly, it is possible to evaluate several tasks in a single run. To achieve this, simply pass a comma-separated string of task names, such as `--tasks hellaswag,arc_challenge`, which evaluates both HellaSwag and ARC simultaneously.
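For instance, a combined run over both tasks might look like the sketch below; the flags mirror the single-task example above and assume the same repository version.

```bash
# Evaluate facebook/opt-1.3b on both HellaSwag and ARC in a single run.
python main.py \
    --model hf-causal \
    --model_args pretrained=facebook/opt-1.3b \
    --tasks hellaswag,arc_challenge \
    --device cuda:0
```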
To evaluate proprietary models from OpenAI, you need to set the `OPENAI_API_SECRET_KEY` environmental variable with the secret key. You can obtain this key from the OpenAI dashboard and use it accordingly. ```bash export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE python main.py \ --model gpt3 \ --model_args engine=davinci \ --tasks hellaswag ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954391-benchmarking-your-own-llm
221,Benchmarking Your Own LLM,"# Benchmarking Your Own LLM
## **InstructEval**
Other efforts to evaluate language models exist, like the [InstructEval leaderboard](https://declare-lab.net/instruct-eval/), which combines automated evaluation with using the GPT-4 model to score different models. It is worth mentioning that it mainly focuses on instruction-tuned models. ![Image from the [InstructEval paper](https://arxiv.org/abs/2306.04757).](Benchmarking%20Your%20Own%20LLM%2012b96e09de8d45ae83ae15c0cfea1404/Untitled.png) Image from the [InstructEval paper](https://arxiv.org/abs/2306.04757). The evaluation is broken down into three distinct tasks. **1. Problem-Solving Evaluation** It consists of the following tests evaluating the model’s abilities: **World Knowledge** using [Massive Multitask Language Understanding](https://paperswithcode.com/dataset/mmlu) (MMLU), **Complex Instructions** using [BIG-Bench Hard](https://paperswithcode.com/dataset/big-bench) (BBH), **Comprehension and Arithmetic** using [Discrete Reasoning Over Paragraphs](https://paperswithcode.com/dataset/drop) (DROP), **Programming** using [HumanEval](https://paperswithcode.com/dataset/humaneval), and lastly **Causality** using [Counterfactual Reasoning Assessment](https://arxiv.org/abs/2112.11941) (CRASS). These automated evaluations assess the model's performance across various tasks. **2. Writing Evaluation** This category evaluates the model on the following subjective metrics: Informative, Professional, Argumentative, and Creative. The authors used the GPT-4 model to judge the outputs of different models by presenting a rubric and asking it to score each output on a [Likert scale](https://en.wikipedia.org/wiki/Likert_scale) from 1 to 5. **3. Alignment to Human Values** Finally, a crucial aspect of instruction-tuned models is their alignment with human values. We anticipate these models will uphold values such as helpfulness, honesty, and harmlessness. The leaderboard evaluates the model by presenting pairs of dialogues and asking it to choose the appropriate one. We won't dive into an extensive explanation of the evaluation process, as it closely resembles the previous benchmark and involves the execution of a Python script. Please follow the link to their [GitHub repository](https://github.com/declare-lab/instruct-eval), where they provide sample usage.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954391-benchmarking-your-own-llm
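To make the "LLM as a judge" idea behind the Writing Evaluation concrete, here is a minimal sketch of scoring a single output against a rubric. The prompt wording, the rubric, and the `gpt-4` model name are illustrative assumptions, not InstructEval's exact setup.

```python
from dotenv import load_dotenv
load_dotenv()
import openai

# Hypothetical rubric and candidate answer used only for illustration.
rubric = (
    "Rate the following answer for informativeness on a Likert scale from 1 to 5, "
    "where 1 is not informative at all and 5 is highly informative. "
    "Reply with a single integer."
)
candidate_answer = "Yoga improves flexibility and can help reduce stress."

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a strict evaluator of written answers."},
        {"role": "user", "content": f"{rubric}\n\nAnswer to rate:\n{candidate_answer}"},
    ],
)
print(response['choices'][0]['message']['content'])  # e.g., "3"
```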
222,Benchmarking Your Own LLM,"# Benchmarking Your Own LLM
## Conclusion
Having standardized metrics for evaluating different models is essential; otherwise, comparing the capabilities of various models would become impractical. In this lesson, we introduced several widely used metrics along with a script that facilitates the evaluation of LLMs. It is essential to emphasize the importance of keeping track of the latest leaderboard and evaluation metrics based on specific use cases. In most instances, having a model that excels in all tasks may not be necessary, so staying updated with relevant metrics helps identify the most suitable model for tailored requirements. --- >> [Notebook](https://colab.research.google.com/drive/1d4gJso06wgSq6Rj7JmPKnNjY8i1Bd78g?usp=sharing). ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954391-benchmarking-your-own-llm
223,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Introduction
In this lesson, we will examine an emerging field of interest: the next progression in the evolution of LLMs, **Large multimodal models (LMMs)**. This topic adds a new layer to the material we've covered throughout the course. Simply put, multimodal models are designed to handle and interpret different data types - or **""modalities""** - such as text, images, audio, and video, all within a single coordinated system. This integration allows for a more comprehensive analysis and understanding than models processing only one data type, such as text in standard LLMs. For instance, supplementing a text prompt with voice or image inputs can enable these models to capture a more complex representation of the conveyed information. This is achieved by analyzing additional layers of data, such as the tone and cadence of your voice or the visual context provided by images, thus enhancing the depth and richness of the analysis. With the recent increase in the popularity of large language models, it is unsurprising that researchers are now exploring the potential of extending these models to handle multiple data types, aiming to create more versatile and valuable **general-purpose assistants**: models that can solve arbitrary tasks specified by the user. In the following sections, we will explore the current implementations of LMMs and introduce key concepts on how they manage multimodality. We will also learn about their **emergent abilities** and explore the idea of **Instruction-tuned** LMMs. Finally, in this lesson, we will learn how **Deep Lake** by Activeloop can be helpful in training or fine-tuning large multimodal models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
224,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Common Architectures and Training Objectives
By definition, multimodal models are designed to process various input modalities, such as text, images, and videos, and generate outputs in multiple modalities. However, a notable subset of currently popular LMMs primarily focuses on accepting image inputs and is limited to generating text outputs. These specialized LMMs often leverage pre-trained large-scale vision or language models as their foundation. We can categorize them as 'Image-to-Text Generative Models,' also known as visual language models (VLMs). They generally perform tasks related to image understanding, such as question answering and image captioning. Examples include [GIT](https://arxiv.org/abs/2205.14100) by Microsoft, [BLIP2](https://arxiv.org/pdf/2301.12597.pdf) by SalesForce, and [Flamingo](https://arxiv.org/pdf/2204.14198.pdf) by DeepMind. ### **Model Architecture** These models use an **image encoder** to extract visual features and a standard LLM to output a text sequence. The image encoder can be based on convolutional neural networks (CNNs), such as [ResNet](https://arxiv.org/abs/1512.03385), or a transformer-based architecture like the [Vision Transformer (ViT)](https://arxiv.org/abs/2010.11929). The image encoder and the language model can be trained from scratch or using pre-trained models. Most state-of-the-art models opt for the latter approach; an example is the pre-trained image encoder from the model [CLIP](https://arxiv.org/pdf/2103.00020.pdf) by OpenAI. The options for language models are also extensive: one could choose from various open-source pre-trained models, such as Meta's [OPT](https://arxiv.org/pdf/2205.01068.pdf), [Llama 2](https://arxiv.org/abs/2307.09288), or Google's instruction-trained [FlanT5](https://arxiv.org/abs/1910.10683) models. Optionally, models like [BLIP2](https://arxiv.org/pdf/2301.12597.pdf) introduce a trainable lightweight connection module connecting the vision and language modalities. Since BLIP2 only trains this light module, it is cheaper and faster than other methods while still managing a strong zero-shot performance on image understanding tasks. ![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper ### **Training Objective** Similar to what we've seen in the course, LMMs are trained using an **auto-regressive loss** function applied to the output text tokens. When using a [Vision Transformer](https://arxiv.org/abs/2010.11929) architecture, the concept of '**image tokens**,' which is analogous to text tokenization, is introduced. Just like text can be divided into smaller units like sentences, words, or sub-words for easier processing, images can be segmented into smaller, non-overlapping patches, known as 'tokens.' The exact attention mechanisms come into play in the Transformer architecture employed by these LMMs. Image tokens can 'attend' to each other, meaning they can influence each other's representation in the model. Meanwhile, the generation of each text token depends on all the image tokens and the previously generated text tokens. Check out our lesson about **[Understanding Transformers](https://www.notion.so/Understanding-Transformers-c4a6f85b4f0f4828ab6e2dc1c2a7b775?pvs=21)** if you are still getting familiar with these concepts. 
![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%201.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
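The description above can be summarized in a toy sketch of an image-to-text architecture: a frozen vision encoder produces visual features, a small trainable projection maps them into the language model's embedding space, and the language model generates text conditioned on them. This is a simplified illustration under assumed component choices (a CLIP ViT vision tower and a small OPT model, both with 768-dimensional hidden states); real systems such as BLIP2 or Flamingo use more elaborate connection modules.

```python
import torch
import torch.nn as nn
from transformers import CLIPVisionModel, AutoModelForCausalLM, AutoTokenizer

# Assumed components: a CLIP ViT image encoder and a small OPT language model,
# connected by a single trainable linear projection.
vision_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
language_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Project visual features (768-dim for this CLIP variant) into the LLM's
# word-embedding space; in training, only this layer would be updated.
projection = nn.Linear(768, language_model.config.hidden_size)

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    visual_features = vision_encoder(pixel_values).last_hidden_state  # (1, patches+1, 768)
visual_embeds = projection(visual_features)  # "image tokens" in the LLM embedding space

# Embed a text prompt and prepend the projected image tokens to condition the LLM.
prompt_ids = tokenizer("Describe the image:", return_tensors="pt").input_ids
prompt_embeds = language_model.get_input_embeddings()(prompt_ids)
inputs_embeds = torch.cat([visual_embeds, prompt_embeds], dim=1)

outputs = language_model(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)  # (1, sequence_length, vocab_size): next-token predictions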
225,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Differences in Training Schemes
While having the same training objective, some variations emerge in the training schemes of different Language-Multimodal Models (LMMs). Most models, such as GIT and BLIP2, employ only image-text pairs for training. This approach allows them to establish connections between the text and image representations effectively but requires a large, curated image-text pairs dataset. ![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%202.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper On the other hand, the Flamingo model has some architectural innovations that allow for unlabeled web training data. They extract the text and images from the HTML of 43M webpages. They also determine the positions of images relative to the text based on the relative positions of the text and image elements in the Document Object Model (DOM). This model can ingest a **multimodal prompt** containing images and/or videos interleaved with text as input and generate text in an open-ended manner. It can produce text for tasks such as image captioning or visual question-answering. The system connects the different modalities and enables multimodal prompting through steps. Initially, a **Perceiver Resampler** module receives spatiotemporal features from visual data, such as an image or video, processed by the pre-trained Vision Encoder. The Perceiver then generates a fixed number of 'visual tokens.' These visual tokens serve as inputs to condition a frozen language model, a pre-trained language model that is not updated during this process. The conditioning is facilitated by adding newly initialized **cross-attention layers** interleaved with the language model's pre-existing layers. These new layers are not frozen and will be updated during training. While this architecture is less efficient by having more parameters to train than the one from BLIP2, it provides a powerful way for the language model to incorporate visual cues. ![From “[Flamingo: a Visual Language Model for Few-Shot Learning](https://arxiv.org/pdf/2204.14198.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%203.png) From “[Flamingo: a Visual Language Model for Few-Shot Learning](https://arxiv.org/pdf/2204.14198.pdf)” paper ![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%204.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper ### Discovering Emergent Abilities - Few-shot In-Context-Learning Its flexible architecture allows Flamingo to be trained with multimodal prompts that interleave text with visual tokens. This enables the model to demonstrate emergent abilities, such as few-shot in-context learning, **analogous to GPT-3**. You can see some examples in the figure below. 
![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%205.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
226,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Open-sourcing Flamingo
The state-of-the-art results reported in the Flamingo paper are exciting and clearly show significant progress in the field of LMMs. However, DeepMind has yet to make the Flamingo model publicly available. To fill this gap, HuggingFace's team took the initiative to create an open-source reproduction of Flamingo, known as **[IDEFICS](https://huggingface.co./blog/idefics)**. This replica is constructed entirely using publicly accessible resources, including the LLaMA v1 and OpenCLIP models. IDEFICS is offered in the 'base' and the 'instructed' variants. Both of these are available in two sizes—9 billion parameters and 80 billion parameters. IDEFICS offers comparable results to Flamingo. The team used a mixture of openly available datasets such as Wikipedia, Public Multimodal Dataset, and LAION to train these models. They also created a new 115B token dataset called **[OBELICS](https://huggingface.co./datasets/HuggingFaceM4/OBELICS)**. It has 141 million interleaved image-text documents scraped from the web and contains 353 million images, replicating the dataset described by DeepMind in the Flamingo paper. IDEFICS is available through the Transformers library, and a demo of it is available [here](https://huggingface.co./spaces/HuggingFaceM4/idefics_playground). Another open-source replication of Flamingo is called [Open Flamingo](https://github.com/mlfoundations/open_flamingo) at the 9B parameter size, offering similar performance to the original model. ![From “**[Introducing IDEFICS: An Open Reproduction of State-of-the-Art Visual Language Model](https://huggingface.co./blog/idefics)**” blog post.](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%206.png) From “**[Introducing IDEFICS: An Open Reproduction of State-of-the-Art Visual Language Model](https://huggingface.co./blog/idefics)**” blog post.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
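Since IDEFICS ships with the Transformers library, loading it looks roughly like the sketch below. The checkpoint name and the interleaved prompt format follow the model card as an assumption, so check the official documentation before relying on the details.

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "HuggingFaceM4/idefics-9b-instruct"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# IDEFICS accepts interleaved text and images; an image can be passed as a URL.
prompts = [
    [
        "User: What is in this image?",
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]

inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```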
227,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Instruction-tuned LMMs
As demonstrated by GPT-3's emergent abilities with few-shot prompting, where the model could tackle tasks it hadn't seen during training, there's been a rising interest in instruction-fine-tuned LMMs. By allowing the models to be instruction-tuned, we can expect these models to perform a broader set of tasks and to align better with human intent. This is in line with the work done by OpenAI with [InstructGPT](https://openai.com/research/instruction-following) and, more recently, GPT-4. OpenAI has showcased the capability of their newer “GPT-4 with vision” model to follow instructions using visual inputs in their [GPT-4 technical report](https://arxiv.org/pdf/2303.08774.pdf) and [GPT-4V(ision) System Card](https://cdn.openai.com/papers/GPTV_System_Card.pdf). ![From the “[GPT-4 Technical Report](https://arxiv.org/pdf/2303.08774.pdf)”](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%207.png) From the “[GPT-4 Technical Report](https://arxiv.org/pdf/2303.08774.pdf)” Following the announcement of OpenAI's **[multimodal GPT-4](https://openai.com/research/gpt-4)**, there has been a surge in related research. As a result, multiple research labs have introduced instruction-tuned LMMs, including **[LLaVA](https://arxiv.org/abs/2304.08485)**, **[MiniGPT-4](https://arxiv.org/abs/2304.10592)**, and **[InstructBlip](https://arxiv.org/abs/2305.06500)**. They feature network architectures similar to previous LMMs but train on instruction-following datasets. ![From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%208.png) From [“Multimodal Foundation Models: From Specialists to General-Purpose Assistants”](https://arxiv.org/pdf/2309.10020.pdf) paper ### Exploring LLaVA - an instruction-tuned LMM The network architecture of LLaVA resembles the one we reviewed before. This model connects a pre-trained CLIP visual encoder and the [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/) language model via a projection matrix. In other words, they consider a simple linear layer to connect image features into the word embedding space. Specifically, they apply a trainable projection matrix called W to convert the image features into language embedding tokens with the same dimensionality as the word embedding space in the language model. The authors of LLaVA chose this linear projection because it is more lightweight than the Q-Former connection module we saw for BLIP2 and the Perceiver Resampler and cross-attention layers from Flamingo. ![From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%209.png) From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper The authors then adopt a two-stage instruction-tuning procedure to train the model. First, they pre-train the projection matrix using a subset of the [CC3M](https://aclanthology.org/P18-1238.pdf) dataset, consisting of images and captions. Then, the model is finetuned end-to-end. Both the projection matrix and the LLM are trained on the proposed multimodal instruction-following data for daily user-oriented applications. They also leverage GPT-4 to create a **synthetic dataset** consisting of multimodal instructions, drawing from widely available image-text pair data. 
In the dataset construction process, GPT-4 is shown symbolic representations of images using **captions** and the **coordinates of bounding boxes**, as depicted in the figure below. These representations are derived from the COCO dataset. This information is fed into GPT-4 as a prompt to generate training samples. The generated samples fall into **three categories**: question-answer conversations, detailed descriptions, and complex reasoning questions and answers. They create 158K training samples in total. ![From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%2010.png) From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper The LLaVA model demonstrates the effectiveness of visual instruction tuning using language-only GPT-4. They show its capabilities by prompting the model with the same question and image as in the GPT-4 report. You can see the result below. The authors also report a new SOTA by fine-tuning [ScienceQA](https://scienceqa.github.io/), a benchmark that contains 21k multimodal multiple-choice questions with rich domain diversity across three subjects, 26 topics, 127 categories, and 379 skills. ![From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%2011.png) From “[Visual Instruction Tuning](https://arxiv.org/pdf/2304.08485.pdf)” paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
228,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Beyond vision and language
In recent months, image-to-text generative models have dominated the Large Multimodal Model (LMM) landscape. However, newer models have emerged that embrace a wider range of modalities beyond just vision and language. For instance, **[PandaGPT](https://panda-gpt.github.io/)** is designed to handle any input data type, thanks to its integration with the **[ImageBind](https://imagebind.metademolab.com/)** encoder. There's also **[SpeechGPT](https://github.com/0nutation/SpeechGPT#speechgpt-empowering-large-language-models-with-intrinsic-cross-modal-conversational-abilities)**, a model that integrates text and speech data and generates speech alongside text. **[NExT-GPT](https://arxiv.org/pdf/2309.05519.pdf)** stands out as a versatile model capable of receiving and producing outputs in any modality. ![From “[NExT-GPT: Any-to-Any Multimodal LLM](https://arxiv.org/pdf/2309.05519.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%2012.png) From “[NExT-GPT: Any-to-Any Multimodal LLM](https://arxiv.org/pdf/2309.05519.pdf)” paper [HuggingGPT](https://arxiv.org/pdf/2303.17580.pdf) is a novel system that integrates with the HuggingFace platform. It employs a Large Language Model (LLM) as its central controller. This LLM determines which specific model on HuggingFace is best suited for a task, selects that model, and then returns the model's output. ![From “[HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](https://arxiv.org/pdf/2303.17580.pdf)” paper](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Untitled%2013.png) From “[HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](https://arxiv.org/pdf/2303.17580.pdf)” paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
229,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Deep Lake and Multimodal LLMs
Deep Lake, differently from many other data lake and vector store products, is multi-modal and can store any data, from texts to images, videos, or audio. If you’re interested in building multi-modal LLMs, this is something to consider, as you can store the different types of data you need in the same place. See this [code example](https://docs.activeloop.ai/getting-started/deep-learning/creating-datasets-manually) to see how to store images and texts in the same dataset. Moreover, here’s the [full list of data types](https://docs.deeplake.ai/en/latest/Htypes.html) that can be managed with Deep Lake. ![List of data types managed by Deep Lake. It can be found at [https://docs.deeplake.ai/en/latest/Htypes.html](https://docs.deeplake.ai/en/latest/Htypes.html).](Introduction%20to%20Large%20Multimodal%20Models%2064ee4abd8d474902a26bedb90897cf53/Screenshot_2023-09-26_at_16.10.55.png) List of data types managed by Deep Lake. It can be found at [https://docs.deeplake.ai/en/latest/Htypes.html](https://docs.deeplake.ai/en/latest/Htypes.html).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
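As a rough sketch of what multimodal storage looks like in practice with the Deep Lake Python API, the snippet below stores images and their captions side by side. The dataset path, tensor names, and image file are illustrative assumptions.

```python
import deeplake

# Create (or load) a dataset that holds images and their captions together.
ds = deeplake.dataset("hub://<your_username>/multimodal_example")

ds.create_tensor("images", htype="image", sample_compression="jpeg")
ds.create_tensor("captions", htype="text")

with ds:
    ds.append({
        "images": deeplake.read("photos/dog.jpg"),  # hypothetical local image file
        "captions": "A dog playing in the park.",
    })
```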
230,Introduction to Large Multimodal Models,"# Introduction to Large Multimodal Models
## Conclusion
In this module, we delved into the emergent field of LMMs. We examined the leading models that combine both vision and language modalities. We learned that instruction-tuning allows these models to achieve greater generalization on tasks they haven't encountered before. Furthermore, we were introduced to advanced LMMs capable of integrating an even wider range of modalities. Lastly, we discussed the utility of Deep Lake in fine-tuning LMMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959875-introduction-to-large-multimodal-models
231,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Introduction
In this lesson, we focus on Deep Lake, a powerful AI data system that merges the capabilities of Data Lakes and Vector Databases. We'll explore how Deep Lake can be leveraged for training and fine-tuning Large Language Models, with a focus on its efficient data streaming capabilities. We'll also learn how to create a Deep Lake dataset, add data, and load data using both Deep Lake's and PyTorch's data loaders.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
232,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Deep Lake
In the following lessons about training and finetuning LLMs, we’ll need to store the training datasets somewhere, especially for pretraining, since they are usually too large to fit on a single compute node. Ideally, we’d store the datasets elsewhere and efficiently download data in batches when needed. This is where Deep Lake is most useful. Deep Lake is a multi-modal AI data system that merges the capabilities of [Data Lakes](https://docs.activeloop.ai/) and [Vector Databases](https://docs.activeloop.ai/quickstart). Deep Lake is particularly beneficial for businesses looking to train or fine-tune LLMs on their own data. It efficiently streams data from remote storage to GPUs during model training, making it a powerful tool for deep learning applications. Data loaders in Deep Lake are essential components that facilitate efficient data streaming and are very useful for training and fine-tuning LLMs. They are responsible for fetching, decompressing, and transforming data, and they can be optimized to improve performance in GPU-bottlenecked scenarios. Once we store our datasets in Deep Lake, [it’s possible to easily create a PyTorch Dataloader or a TensorFlow Dataset](https://docs.deeplake.ai/en/latest/Pytorch-and-Tensorflow-Support.html). Deep Lake offers two types of data loaders: the [Open Source data loader](https://docs.activeloop.ai/getting-started/deep-learning/connecting-to-ml-frameworks) and the [Performant data loader](https://docs.activeloop.ai/enterprise-features/compute-engine/performant-dataloader). The Performant version, built on a C++ implementation, is faster and optimizes asynchronous data fetching and decompression. It's approximately 1.5 to 3 times faster than the OSS version, depending on the complexity of the transformation and the number of workers available for parallelization, and it supports distributed training.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
233,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Creating a Deep Lake Dataset and Adding Data
Now, let's walk through an example of creating a Deep Lake dataset and fetching some data from it. Deep Lake supports a variety of data formats, and you can ingest them directly with a single line of code. Deep Lake can be installed with pip as follows: `pip install deeplake`. Please note that the performant version can be used for free up to 200GB of data stored in the cloud, which is more than we’ll need for the course. Then, create an account at the [Activeloop website](https://www.activeloop.ai/). Next, you’ll need an Activeloop API token, which will allow you to identify yourself from your Python code. To get it, click on the “Create API token” button that you can see at the top of your webpage once you’re logged in, and then proceed to create one by clicking on the other “Create API token” button inside the page. Remember to check the token's expiration date: once it’s expired, you’ll need to create a new one from this page to continue using Deep Lake with Python linked to your account. Once you have your Activeloop token, save it into the `ACTIVELOOP_TOKEN` environmental variable. You can do so by adding it to your `.env` file, which will then be loaded, executing the following Python code with the `dotenv` library. ```python from dotenv import load_dotenv load_dotenv() ``` You are now ready to use Deep Lake! The following Python code shows how we can create a dataset using Deep Lake. Make sure to replace `` with your username on Activeloop. You can easily find it in the URL of your webpage, which should have the form `https://app.activeloop.ai//home`. ```python import deeplake # env variable ACTIVELOOP_TOKEN must be set with your API token # create dataset on deeplake username = """" dataset_name = ""test_dataset"" ds = deeplake.dataset(f""hub://{username}/{dataset_name}"") # create column text ds.create_tensor('text', htype=""text"") # add some texts to the dataset texts = [f""text {i}"" for i in range(1, 11)] for text in texts: ds.append({""text"": text}) ``` In the previous code, we created a Deep Lake dataset named `test_dataset`. We specify that it contains texts, and then we add 10 data samples to it, one by one. Visit the [API docs](https://docs.deeplake.ai/en/latest/Datasets.html) of Deep Lake to learn about the other available methods. Once done, you should see printed text like the following. ``` This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/genai360/test_dataset ``` By clicking on the URL contained in it, you’ll see your dataset directly from the Activeloop website. ![Screenshot 2023-09-06 at 17.27.52.png](Deep%20Lake%20and%20Data%20Loaders%20d98a337b42e24eaea3083dd276bfb604/Screenshot_2023-09-06_at_17.27.52.png) [Deep Lake dataset version control](https://docs.activeloop.ai/getting-started/deep-learning/dataset-version-control) allows you to manage changes to datasets with commands very similar to Git. It provides critical insights into how your data is evolving, and it works with datasets of any size. Execute the following code to commit your changes to the dataset. ```python ds.commit(""added texts"") ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
234,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Retrieving Data From Deep Lake
Now, let’s get some data from our Deep Lake dataset. There are two main syntaxes for getting data from Deep Lake datasets: 1. The first one uses the Deep Lake **dataloader**. ****It’s highly optimized and has the fastest data streaming. However, it doesn’t support custom sampling or full-random shuffling. It is possible to use PyTorch datasets and data loaders. If you’re interested in knowing more about how to use the Deep Lake data loader in cases where data shuffling is important, read [this guide](https://docs.activeloop.ai/technical-details/shuffling-in-dataloaders). 2. The second one uses plain PyTorch datasets and data loaders, enabling all the customizability that PyTorch supports. However, they have highly sub-optimal streaming using Deep Lake datasets and may result in 5X+ slower performance compared to using Deep Lake data loaders. ### The Deep Lake Data Loader for PyTorch Here’s a code example of creating a Deep Lake data loader for PyTorch. The following code leverages the performant Deep Lake data loader. It’s the fastest and most optimized way of loading data in batches for model training. ```python # create PyTorch data loader batch_size = 3 train_loader = ds.dataloader()\ .batch(batch_size)\ .shuffle()\ .pytorch() # loop over the elements for i, batch in enumerate(train_loader): print(f""Batch {i}"") samples = batch.get(""text"") for j, sample in enumerate(samples): print(f""Sample {j}: {sample}"") print() pass ``` You should see the following printed output, showing the retrieved batches. ``` Please wait, filling up the shuffle buffer with samples. Shuffle buffer filling is complete. Batch 0 Sample 0: text 1 Sample 1: text 7 Sample 2: text 8 Batch 1 Sample 0: text 2 Sample 1: text 9 Sample 2: text 6 Batch 2 Sample 0: text 10 Sample 1: text 3 Sample 2: text 4 Batch 3 Sample 0: text 5 ``` ### PyTorch Datasets and PyTorch Data Loaders using Deep Lake This code enables all the customizability supported by PyTorch at the cost of having highly slower streaming compared to using Deep Lake data loaders. The reason for the slower performance is that this approach does not take advantage of the inherent dataset format that was designed for fast streaming by Activeloop. First, we create a subclass of the PyTorch `Dataset`, which stores a reference to the Deep Lake dataset and implements the `__len__` and `__getitem__` methods. ```python from torch.utils.data import DataLoader, Dataset class DeepLakePyTorchDataset(Dataset): def __init__(self, ds): self.ds = ds def __len__(self): return len(self.ds) def __getitem__(self, idx): texts = self.ds.text[idx].text().astype(str) return { ""text"": texts } ``` Inside the `__getitem__` method, we retrieve the strings stored in the `text` tensor of the dataset at the position `idx`. Then, we instantiate it using a reference to our Deep Lake dataset `ds`, transform it into a PyTorch `DataLoader`, and eventually loop over the elements just like we did with the Deep Lake dataloader example. ```python # create PyTorch dataset ds_pt = DeepLakePyTorchDataset(ds) # create PyTorch data loader from PyTorch dataset dataloader_pytorch = DataLoader(ds_pt, batch_size=3, shuffle=True) # loop over the elements for i, batch in enumerate(dataloader_pytorch): print(f""Batch {i}"") samples = batch.get(""text"") for j, sample in enumerate(samples): print(f""Sample {j}: {sample}"") print() pass ``` You should see the following output, showing the retrieved batches. 
``` Batch 0 Sample 0: text 8 Sample 1: text 3 Sample 2: text 1 Batch 1 Sample 0: text 4 Sample 1: text 5 Sample 2: text 9 Batch 2 Sample 0: text 7 Sample 1: text 2 Sample 2: text 6 Batch 3 Sample 0: text 10 ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
235,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Getting the Best High-Quality Data for your Models
Recent research, such as the “[LIMA: Less Is More for Alignment](https://arxiv.org/abs/2305.11206)” and “[Textbooks Are All You Need](https://arxiv.org/abs/2306.11644)” papers, suggests that data quality is very important for both training and fine-tuning LLMs. As a consequence, Deep Lake has several additional features that can help users investigate the quality of the datasets they are using and, if needed, filter samples. Deep Lake provides the [Tensor Query Language (TQL)](https://docs.deeplake.ai/en/latest/Tensor-Query-Language.html), an SQL-like language used for [Querying in Activeloop Platform](https://docs.activeloop.ai/enterprise-features/querying-datasets) as well as in the [`ds.query`](https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.query) method in the Python API. This allows data scientists to filter datasets and focus their work on the most relevant data. The following code shows how we can filter our dataset using a TQL query and print all the samples in the resulting view. ```python ds_view = ds.query(""select * where contains(text, '1')"") # code that creates a data loader and prints the batches ... ``` ``` Batch 0 Sample 0: text 1 Sample 1: text 10 ``` Now, we can save our dataset view as follows. ```python ds_view.save_view(id=""strings_with_1"") ``` And we can read from it as follows. ```python ds = deeplake.dataset(f""hub://{username}/{dataset_name}/.queries/strings_with_1"") ``` Another feature is [samplers](https://docs.deeplake.ai/en/latest/Sampler.html). Samplers can be used to assign a discrete distribution of weights to the dataset's samples, which are then sampled according to the weight distribution. This can be useful for focusing training on higher-quality data.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
236,Deep Lake and Data Loaders,"# Deep Lake and Data Loaders
## Conclusion
In this lesson, we explored some of the capabilities of Deep Lake, a multi-modal AI data system that merges the functionalities of Data Lakes and Vector Databases. We've learned how Deep Lake can efficiently stream data from remote storage to GPUs during model training, making it an ideal tool for training and fine-tuning Large Language Models. We've also covered the creation of a Deep Lake dataset, adding data to it, and retrieving data using both Deep Lake's data loaders and PyTorch's data loaders. This will be useful as we continue exploring training and fine-tuning Large Language Models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954351-deep-lake-and-data-loaders
237,What are Large Language Models,"# What are Large Language Models
## **Introduction**
Welcome to our introductory module on Large Language Models, or **LLMs**. LLMs are a specific category of neural network models characterized by an exceptionally high number of parameters, often in the billions. These parameters are essentially the variables within the model that allow it to process and generate text. They are trained on vast quantities of textual data, which provides them with a broad understanding of language patterns and structures. The main goal of LLMs is to comprehend and produce text that closely resembles human-written language, enabling them to capture the subtle complexities of both syntax (the arrangement of words in a sentence) and semantics (the meaning conveyed by those words). These models undergo training with a simple objective: predicting the subsequent word in a sentence. However, they develop a range of **emergent abilities** during this training process. For example, they can perform tasks such as arithmetic calculations and word unscrambling, and even achieve remarkable feats like [successfully passing professional-level exams such as the US Medical Licensing Exam](https://healthitanalytics.com/news/chatgpt-passes-us-medical-licensing-exam-without-clinician-input). They generate text in an autoregressive manner, producing tokens one by one based on the tokens they have previously generated. The **attention mechanism** plays a key role in enabling these models to establish connections between words and produce coherent and contextually relevant text. LLMs have significantly advanced the natural language processing (NLP) field, revolutionizing our approach to tasks like machine translation, natural language generation, part-of-speech tagging, parsing, information retrieval, and more. As we dive further into this module, we will explore the capabilities of these models, their practical applications, and their exciting future possibilities. ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
238,What are Large Language Models,"# What are Large Language Models
## **Language Modeling**
Language modeling is a fundamental task in Natural Language Processing (NLP). It involves explicitly learning the probability distribution of the words in a language. This is generally learned by predicting the next token in a sequence. This task is typically approached using statistical methods or deep learning techniques. LLMs are trained to predict the next token (word, punctuation, etc.) based on the previous tokens in the text. The models achieve this by learning the distribution of tokens in the training data.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
239,What are Large Language Models,"# What are Large Language Models
## **Tokenization**
The first step in this process is tokenization, where the input text is broken down into smaller units called tokens. Tokens can be as small as individual characters or as large as whole words. The choice of token size can significantly affect the model's performance. Some models even use subword tokenization, where words are broken down into smaller units that capture meaningful linguistic information. For example, let’s consider the sentence ""The child’s book.” We could split the text whenever we find white space characters. The output would be: ```python [""The"", ""child's"", ""book.""] ``` As you can see, the punctuation is still attached to the words *""child’s""* and *""book.""* Otherwise, we could split the text according to white spaces and punctuation. The output would be: ```python [""The"", ""child"", ""'"", ""s"", ""book"", "".""] ``` Importantly, tokenization is model-specific, meaning different models require different tokenization processes, which can complicate pre-processing and multi-modal modeling.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
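To see subword tokenization in action, here is a small sketch using the Hugging Face `transformers` library; the GPT-2 tokenizer is just one possible choice, and the exact tokens shown in the comments may vary slightly by tokenizer version.

```python
from transformers import AutoTokenizer

# GPT-2 uses byte-pair encoding (BPE), a common subword tokenization scheme.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

tokens = tokenizer.tokenize("The child's book.")
print(tokens)   # e.g., ['The', 'Ġchild', "'s", 'Ġbook', '.']  (Ġ marks a leading space)

ids = tokenizer.encode("The child's book.")
print(ids)      # the integer IDs the model actually consumes
```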
240,What are Large Language Models,"# What are Large Language Models
## **Model Architecture and Attention**
The core of a language model is its architecture. Recurrent Neural Networks (RNNs) were traditionally used for this task, as they are capable of processing sequential data by maintaining an internal state that captures the information from previous tokens. However, they struggle with long sequences due to the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem). To overcome these limitations, transformer-based models have become the standard for language modeling tasks. These models use a mechanism called **attention**, which allows them to weigh the importance of different tokens when making predictions. This allows them to capture long-range dependencies between tokens and generate high-quality text.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
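The core computation behind attention can be written in a few lines. The sketch below is a generic scaled dot-product attention (single head, no masking), shown for illustration rather than as the code of any particular model.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Each token's query is compared against every key to produce attention
    # weights, which are then used to mix the value vectors.
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k**0.5   # (seq_len, seq_len) similarity scores
    weights = F.softmax(scores, dim=-1)           # how much each token attends to the others
    return weights @ V                            # weighted sum of value vectors

seq_len, d_model = 4, 8
Q = K = V = torch.randn(seq_len, d_model)          # toy stand-ins for token representations
print(scaled_dot_product_attention(Q, K, V).shape) # torch.Size([4, 8])
```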
241,What are Large Language Models,"# What are Large Language Models
## **Training**
The model is trained on a large corpus of text to predict the next token of a sentence correctly. The goal is to adjust the model's parameters to maximize the probability of the observed data. Typically a model is trained on a very large general dataset of texts from the Internet, such as [The Pile](https://pile.eleuther.ai/) or [CommonCrawl](https://commoncrawl.org/). Sometimes also more specific datasets are used, such as the [Stackoverflow Posts](https://huggingface.co./datasets/mikex86/stackoverflow-posts) dataset. > The model learns to predict the next token in a sequence by adjusting its parameters to maximize the probability of outputting the correct next token from the training data. >",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
242,What are Large Language Models,"# What are Large Language Models
## **Prediction**
Once the model is trained, it can be used to generate text by predicting the next token in a sequence. This is done by feeding the sequence into the model, which outputs a probability distribution over the possible subsequent tokens. The next token is then chosen based on this distribution. This process can be repeated to generate sequences of arbitrary length.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
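As a concrete sketch of this loop, the snippet below uses a small open model (GPT-2, chosen only for illustration) to obtain a probability distribution over the next token and sample from it; production systems add refinements such as temperature, top-k, or nucleus sampling.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer.encode("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits          # (1, seq_len, vocab_size)

# The distribution over the next token comes from the last position's logits.
next_token_probs = F.softmax(logits[0, -1], dim=-1)
next_token_id = torch.multinomial(next_token_probs, num_samples=1)
print(tokenizer.decode(next_token_id))        # e.g., " Paris" (sampling, so it can vary)
```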
243,What are Large Language Models,"# What are Large Language Models
## **Fine-Tuning**
The model is often fine-tuned on a specific task after pre-training. This involves continuing the training process on a smaller, task-specific dataset. This allows the model to adapt its learned knowledge to the specific task (e.g. text translation) or specialized domain (e.g. biomedical, finance, etc), improving its performance. This is a brief explanation, but the actual process can be much more complex, especially for state-of-the-art models like GPT-4. These models use advanced techniques and large amounts of data to achieve impressive results.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
244,What are Large Language Models,"# What are Large Language Models
## Context Size
The context size, or context window, of an LLM is the maximum number of tokens that the model can handle in one go. It is significant because it determines the length of the text that can be processed at once, which can impact the model's performance and the results it generates. Different LLMs have different context sizes. For instance, the OpenAI “gpt-3.5-turbo-16k” model has a context window of about 16k tokens. There is a hard limit to the number of tokens a model can handle: smaller models may be limited to around 1k tokens, while larger models, such as the 32k variant of GPT-4, can handle up to 32k tokens.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
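Because these limits are expressed in tokens rather than characters or words, it helps to count tokens explicitly before sending a prompt. Here is a small sketch using OpenAI's `tiktoken` library (assuming it is installed with `pip install tiktoken`).

```python
import tiktoken

# Get the tokenizer used by the gpt-3.5-turbo family of models.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Hello, how are you?"
tokens = encoding.encode(text)
print(len(tokens), tokens)  # number of tokens this text consumes from the context window
```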
245,What are Large Language Models,"# What are Large Language Models
## Let’s Generate Some Text
Let’s try generating some text with LLMs. You must first generate an API key to use OpenAI’s models in your Python environment. You can follow the below steps to generate the API key: 1. After creating an OpenAI account, log in. 2. After logging in, choose Personal from the top-right menu, then choose “View API keys.” 3. The “Create new secret key” button is on the page containing API keys once step 2 has been finished. Clicking on that generates a secret key. Save this because it will be required in further lessons. After that, you can save your key in a `.env` file like this: ```python OPENAI_API_KEY="""" ``` Every time you start a Python script with the following lines, your key will be loaded into an environment variable called `OPENAI_API_KEY`. This environment variable will then be used by the `openai` library whenever you want to generate text. ```python from dotenv import load_dotenv load_dotenv() ``` We are now ready to generate some text! Here’s an example of it. ```python from dotenv import load_dotenv load_dotenv() import os import openai # English text to translate english_text = ""Hello, how are you?"" response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": ""You are a helpful assistant.""}, {""role"": ""user"", ""content"": f'Translate the following English text to French: ""{english_text}""'} ], ) print(response['choices'][0]['message']['content']) ``` ``` Bonjour, comment ça va? ``` By using `dotenv`, you can safely store sensitive information, such as API keys, in a separate file and avoid accidentally exposing it in your code. This is particularly important when working with open-source projects or sharing your code with others, as it ensures that the sensitive information remains secure.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
246,What are Large Language Models,"# What are Large Language Models
## **Few-Shot Learning**
Few-shot learning in the context of LLMs refers to providing the model with a few examples before making predictions. These examples ""teach"" the model how to reason and act as ""filters"" to help the model search for relevant patterns in the dataset. The idea of few-shot learning is fascinating as it suggests that the model can be quickly reprogrammed for new tasks. While LLMs like GPT3 excel at language modeling tasks like machine translation, they may struggle with more complex reasoning tasks. The few-shot examples are helping the model search for relevant patterns in the dataset. The dataset, which is effectively compressed into the model's weights, can be searched for patterns that strongly respond to these provided examples. These patterns are then used to generate the model's output. The more examples provided, the more precise the output becomes. Here’s an example of few-shot learning: ```python from dotenv import load_dotenv load_dotenv() import os import openai # Prompt for summarization prompt = """""" Describe the following movie using emojis. {movie}: """""" examples = [ { ""input"": ""Titanic"", ""output"": ""🛳️🌊❤️🧊🎶🔥🚢💔👫💑"" }, { ""input"": ""The Matrix"", ""output"": ""🕶️💊💥👾🔮🌃👨🏻💻🔁🔓💪"" } ] movie = ""Toy Story"" response = openai.ChatCompletion.create( model=""gpt-3.5-turbo"", messages=[ {""role"": ""system"", ""content"": ""You are a helpful assistant.""}, {""role"": ""user"", ""content"": prompt.format(movie=examples[0][""input""])}, {""role"": ""assistant"", ""content"": examples[0][""output""]}, {""role"": ""user"", ""content"": prompt.format(movie=examples[1][""input""])}, {""role"": ""assistant"", ""content"": examples[1][""output""]}, {""role"": ""user"", ""content"": prompt.format(movie=movie)}, ] ) print(response['choices'][0]['message']['content']) ``` ```python 🧸🤠👦🧒🎢🌈🌟👫🚁👽🐶🚀 ```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
247,What are Large Language Models,"# What are Large Language Models
## **Scaling Laws**
Scaling laws refer to the relationship between the model's performance and factors such as the number of parameters, the size of the training dataset, the compute budget, and the network architecture. They were derived from extensive experiments and are described in the [Chinchilla paper](https://arxiv.org/abs/2203.15556). These laws provide insights into how to allocate resources optimally when training these models. The main elements characterizing a language model are: 1. The number of parameters (N) reflects the model's capacity to learn from data. More parameters allow the model to capture complex patterns in the data. 2. The size of the training dataset (D) is measured in the number of tokens (small pieces of text ranging from a few words to a single character). 3. FLOPs (floating-point operations) measure the total compute budget used for training. The researchers trained the Chinchilla model, which has 70B parameters, on 1.4 trillion tokens. This aligns with **the rule of thumb proposed in the paper: for a model with X parameters, it is optimal to train it on approximately X * 20 tokens.** For example, in the context of this rule, a model with 100 billion parameters would be optimally trained on approximately 2 trillion tokens. Applying this rule, the Chinchilla model, though smaller, performed better than other LLMs. It showed gains in language modeling and task performance and needed less memory and computing power. You can read more about Chinchilla in its paper “[Training Compute-Optimal Large Language Models](https://arxiv.org/abs/2203.15556)”.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
248,What are Large Language Models,"# What are Large Language Models
## **Emergent Abilities in LLMs**
Emergent abilities in LLMs refer to the sudden appearance of new capabilities as the size of the model increases. These abilities, which include performing arithmetic, answering questions, summarizing passages, and more, are not explicitly trained in the model. Instead, they seem to arise spontaneously as the model scales, hence the term ""emergent."" > LLMs are probabilistic models that learn patterns in natural language. When these models are scaled up, they not only improve quantitatively in their ability to learn patterns, but they also exhibit qualitative changes in their behavior. > Traditionally, the models require task-specific fine-tuning and architectural modifications to perform specific tasks. However, when scaled, these models can perform these tasks without any architectural modifications or task-specific training. They can do this simply by phrasing the tasks in terms of natural language. This capability of LLMs to perform tasks without fine-tuning is remarkable in itself. What's even more intriguing is how these abilities appear. As LLMs grow, they rapidly and unpredictably transition from near-zero to sometimes state-of-the-art performance. This phenomenon suggests that these abilities are emergent properties of the model's scale rather than being explicitly programmed into the model. This concept of emergent abilities in LLMs has significant implications for the field of AI, as it suggests that scaling up models can lead to the spontaneous development of new capabilities.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
249,What are Large Language Models,"# What are Large Language Models
## **Prompts**
The text containing the instructions that we pass to LLMs is commonly known as prompts. > Prompts are instructions given to AI systems like OpenAI's GPT-3 and GPT-4, providing context to generate human-like text. The more relevant detail and context a prompt provides, the better the model's output tends to be. > At the same time, concise and descriptive prompts often work well, as they constrain the task while still leaving room for the LLM's creativity. Specific words or phrases can help narrow down potential outcomes and ensure relevant content generation. Writing effective prompts requires a clear goal, simplicity, strategic use of keywords, and actionability. Testing the prompts before publishing ensures the output is relevant and error-free. Here are some prompting tips (a short code sketch illustrating the first tip follows the list): 1. Use **precise language** when crafting a prompt – this will help ensure accuracy in the generated output: - Less Precise Prompt: ""Write about dogs."" - More Precise Prompt: ""Write a 500-word informative article about the dietary needs of adult Golden Retrievers."" 2. Provide **enough context** around each prompt – this will give a better understanding of what kind of output should be produced: - Less Contextual Prompt: ""Write a story."" - More Contextual Prompt: ""Write a short story set in Victorian England featuring a young detective solving his first major case."" 3. Test different **variations** of each prompt – this allows you to experiment with different approaches until you find one that works best: - Initial Prompt: ""Write a blog post about the benefits of yoga."" - Variation 1: ""Compose a 1000-word blog post detailing the physical and mental benefits of regular yoga practice."" - Variation 2: ""Create an engaging blog post that highlights the top 10 benefits of incorporating yoga into a daily routine."" 4. **Review** generated outputs before publishing them – while most automated systems produce accurate results, occasionally mistakes occur, so it’s always wise to double-check everything before releasing any content into production environments: - Before Review: ""Yoga is a great way to improve your flexibility and strength. It can also help reduce stress and improve mental clarity. However, it's important to remember that all yoga poses are suitable for everyone."" - After Review (correcting inaccurate information): ""Yoga is a great way to improve your flexibility and strength. It can also help reduce stress and improve mental clarity. However, it's important to remember that not all yoga poses are suitable for everyone. Always consult with a healthcare professional before starting any new exercise regimen.""",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
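To illustrate how prompt precision plays out in practice, here is a minimal sketch (an illustrative addition, not code from the lesson) that sends a vague prompt and a precise prompt from the tips above to the same chat model used earlier in this chapter:
```python
import openai

def ask(prompt: str) -> str:
    # Send a single user prompt to the chat model and return its reply
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            {'role': 'system', 'content': 'You are a helpful assistant.'},
            {'role': 'user', 'content': prompt},
        ],
    )
    return response['choices'][0]['message']['content']

vague_prompt = 'Write about dogs.'
precise_prompt = (
    'Write a 500-word informative article about the dietary needs '
    'of adult Golden Retrievers.'
)

print(ask(vague_prompt))    # tends to be broad and unfocused
print(ask(precise_prompt))  # tends to be specific and on-topic
```
Comparing the two outputs side by side makes it easy to see how much the added specificity constrains the generation.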
250,What are Large Language Models,"# What are Large Language Models
## Hallucinations and Biases in LLMs
The term **hallucinations** refers to instances where AI systems generate outputs, such as text or images, that don't align with real-world facts or inputs. For example, ChatGPT might generate a plausible-sounding answer to an entirely incorrect factual question. > Hallucinations in LLMs refer to instances where the model generates outputs that do not align with real-world facts or context. This can lead to the propagation of misinformation, especially in critical sectors like healthcare and education where the accuracy of information is of utmost importance. Similarly, bias in LLMs can result in outputs that favor certain perspectives over others, potentially leading to the reinforcement of harmful stereotypes and discrimination. > Consider an interaction where a user asks, ""Who won the World Series in 2025?"" If the LLM responds with, ""The New York Yankees won the World Series in 2025,"" it's a clear case of hallucination. As of now (July 2023), the 2025 World Series hasn't taken place, so any claim about its outcome is a fabrication. **Bias** in AI and LLMs is another significant issue. It refers to these models' inclination to favor specific outputs or decisions based on their training data. If the training data is predominantly from a specific region, the model might show a bias toward that region's language, culture, or perspectives. If the training data contains inherent biases, such as gender or racial bias, the AI system might produce skewed or discriminatory outputs. For example, if a user asks an LLM, ""Who is a nurse?"" and it responds with, ""She is a healthcare professional who cares for patients in a hospital,” it shows a gender bias. The model automatically associates nursing with women, which doesn't accurately reflect the reality where both men and women can be nurses. Mitigating hallucinations and bias in AI systems involves refining model training, using verification techniques, and ensuring the training data is diverse and representative. Finding a balance between maximizing the model's potential and avoiding these issues remains challenging. Interestingly, in creative domains like media and fiction writing, these ""hallucinations"" can be beneficial, enabling the generation of unique and innovative content. The ultimate goal is to develop LLMs that are not only powerful and efficient but also reliable, fair, and trustworthy. By doing so, we can maximize the potential of LLMs while minimizing their risks, ensuring that the benefits of this technology are accessible to all.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
251,What are Large Language Models,"# What are Large Language Models
## Conclusion
In this introductory module, we explored the fascinating world of LLMs. These powerful models, trained on vast amounts of text data, can understand and generate human-like text. They're built on transformer architectures, allowing them to capture long-range dependencies in language and generate text in an autoregressive manner. We covered the capabilities of LLMs, discussing their impact on the field of NLP. We've learned about few-shot learning, scaling laws, and the emergent abilities of these models. We also acknowledged the challenges that come with these models, including hallucinations and biases, emphasizing the importance of mitigating these issues. In the next lesson, we’ll see a timeline of machine learning models used for language modeling up to the beginning of Large Language Models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953210-what-are-large-language-models
252,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
> *An ability is considered emergent when larger models exhibit it, but it's absent in smaller models - a key factor contributing to the success of Large Language Models.* >",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
253,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## Introduction
In this lesson, we’ll dive deeper into the concept of **emergent abilities**: the empirical phenomenon whereby language models gain new abilities once their size grows beyond certain thresholds. Emergent abilities become apparent as we scale up the models and are influenced by factors such as training compute and model parameters. We'll also explore various instances of these emergent abilities, focusing on scenarios like few-shot and augmented prompting, and examine the reasons behind the emergence of these abilities and whether further scaling could reveal more of them.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
254,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## What Are Emergent Abilities
Emergent abilities in LLMs are defined as significant improvements in task performance that become apparent as the model size or scale increases. These abilities, which are not present or noticeable in smaller or less complex models, become evident in larger or more complex models. This suggests that the model is learning and generalizing from its pre-training in ways that were not explicitly programmed or expected. When visualized on a scaling curve, emergent abilities show a pattern where performance is almost random until a certain scale threshold, after which performance increases significantly. This is known as a phase transition, a dramatic change in behavior that could not have been predicted by examining smaller-scale systems. In the following image, taken from the paper “[Emergent Abilities of Large Language Models](https://arxiv.org/pdf/2206.07682.pdf),” we see several charts showing the emergence of abilities of LLMs (whose performance is shown on the y-axis) with respect to the model scale (shown on the x-axis). ![From the paper “[Emergent Abilities of Large Language Models](https://arxiv.org/pdf/2206.07682.pdf)”](Emergent%20Abilities%20in%20LLMs%203c98e33e74e7444bb666d000c6f348ff/Screenshot_2023-07-27_at_16.07.19.png) From the paper “[Emergent Abilities of Large Language Models](https://arxiv.org/pdf/2206.07682.pdf)” Language models have been scaled primarily along computation amount, model parameters, and training dataset size. The emergence of abilities may occur with less training computation or fewer model parameters for models trained on higher-quality data. It also depends on factors such as the amount of data, its quality, and the number of parameters in the model. Emergent abilities in LLMs appear as the models scale up and cannot be predicted by simply extrapolating from smaller models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
255,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## Evaluation Benchmarks for Emergent Abilities
Several benchmarks are used to evaluate the emergent abilities of language models. These include the BIG-Bench suite, TruthfulQA, the Massive Multi-task Language Understanding (MMLU) benchmark, and the Word in Context (WiC) benchmark. 1. The first of these is the **BIG-Bench suite**, a comprehensive set of over 200 benchmarks that test a model's capabilities across a variety of tasks. These tasks include **arithmetic operations**, where the model is expected to perform the four basic operations (example: “Q: What is 132 plus 762? A: 894”), transliteration from the International Phonetic Alphabet (IPA), which measures whether the model can manipulate and use rare words (example: “English: The 1931 Malay census was an alarm bell. IPA: ðə 1931 ˈmeɪleɪ ˈsɛnsəs wɑz ən əˈlɑrm bɛl.”), and word unscrambling, which tests the model’s ability to manipulate letters and their ordering. A large number of benchmarks can be found in [the Github repository](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/README.md), where you can delve into their specific details. The performance of models like GPT-3 and LaMDA on these tasks starts near zero but jumps to significantly above random at a certain scale, demonstrating emergent abilities. 2. Another benchmark is **[TruthfulQA](https://github.com/sylinrl/TruthfulQA)**, which measures a model's capacity to provide truthful responses when addressing questions. The evaluation consists of two tasks: 1) Generation: the model is asked to answer a question with 1 or 2 sentences. 2) Multiple-choice: the model must choose the correct answer from either 4 options or True/False statements. When the Gopher model is scaled up to its largest size, its performance jumps to more than 20% above random, indicating the emergence of this ability. 3. **The Massive Multi-task Language Understanding ([MMLU](https://arxiv.org/abs/2009.03300))** benchmark is another key benchmark. Its primary objective is to evaluate models for their ability to demonstrate a broad range of world knowledge and problem-solving skills. The test encompasses 57 tasks, spanning areas such as elementary mathematics, US history, computer science, law, and more. GPT models, Gopher, and Chinchilla of a certain scale do not perform better than random guessing when averaged across all topics, but scaling up to a larger size enables performance to surpass random, indicating the emergence of this ability. 4. Finally, the **Word in Context (WiC)** benchmark measures semantic understanding. WiC is a binary classification task for context-sensitive word embeddings: it presents a target word (a verb or a noun) in two different contexts, and the goal is to determine whether the word has the same meaning in both. Chinchilla fails to achieve better-than-random one-shot performance, even when scaled to its largest model size. Above-random performance eventually emerged when PaLM was scaled to a much larger size, suggesting the emergence of this ability at a larger scale.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
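As a rough illustration of how a multiple-choice item can be scored against a model, here is a minimal sketch (an illustrative addition, not the official evaluation harness of any benchmark above; the question, options, and scoring rule are simplified assumptions):
```python
import openai

# A single illustrative multiple-choice item (not taken from the official benchmarks)
question = 'What is 132 plus 762?'
options = {'A': '894', 'B': '904', 'C': '884', 'D': '794'}
correct = 'A'

# Format the item as a prompt listing the options
formatted = question + '\n' + '\n'.join(f'{k}. {v}' for k, v in options.items())

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'system', 'content': 'Answer with a single letter: A, B, C, or D.'},
        {'role': 'user', 'content': formatted},
    ],
)

# Take the first character of the reply as the predicted option and score it
prediction = response['choices'][0]['message']['content'].strip()[:1].upper()
print('correct' if prediction == correct else 'incorrect', '->', prediction)
```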
256,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## Other Factors That Could Give Rise To Emergent Abilities
- Multi-step reasoning is a strategy where a model is guided to produce a sequence of intermediate steps before giving the final answer. This strategy, known as **chain-of-thought prompting**, only surpasses standard prompting when applied to a sufficiently large model. - **Instruction following** is another strategy that involves fine-tuning a model on a mixture of tasks phrased as instructions. This strategy only improves performance when applied to a model of a specific size.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
257,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## **Risks With Emergent Abilities**
As we scale up language models, we also need to be aware of the emergent risks that come with scaling. These risks include societal issues related to truthfulness, bias, and toxicity. They can be mitigated by applying strategies such as giving models prompts that encourage them to be ""helpful, harmless, and honest.” The WinoGender benchmark, which measures gender bias in occupations, has shown that scaling can improve performance but also increase bias in ambiguous contexts. Larger models were found to be more likely to memorize training data, although deduplication methods can reduce this risk. Emergent risks also include phenomena that might only exist in future language models or that have not yet been characterized in current models. These could include backdoor vulnerabilities or harmful content synthesis.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
258,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## A Shift Towards General-Purpose Models
The emergence of abilities has led to sociological changes in how the community views and uses these models. Historically, NLP focused on task-specific models. Scaling models has led to an explosion in research on ""general purpose"" models that aim to perform a range of tasks not explicitly encoded in the training data. This shift towards general-purpose models is evident when scaling enables a few-shot prompted general-purpose model to outperform prior state-of-the-art held by fine-tuned task-specific models. For example, GPT-3 achieved a new state-of-the-art on the TriviaQA and PiQA question-answering benchmarks; PaLM achieved a new state-of-the-art on three arithmetic reasoning benchmarks; and the multimodal Flamingo model achieved a new state of the art on six visual question answering benchmarks. The ability of general-purpose models to perform unseen tasks, given only a few examples, has also led to many new applications of language models outside the NLP research community. For instance, language models have been used by prompting to translate natural language instructions into actions that are executable by robots, interact with users, and facilitate multi-modal reasoning.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
259,Emergent Abilities in LLMs,"# Emergent Abilities in LLMs
## **Conclusion**
Emergent abilities in LLMs are capabilities that appear as the models scale up and are a key factor in their success. These abilities, unpredictable from smaller models, become evident after reaching a certain scale threshold. They have been observed in various contexts, such as few-shot prompting and augmented prompting strategies. Scaling up LLMs also introduces emergent risks like increased bias and toxicity, which can be mitigated with appropriate strategies. The emergence of these abilities has led to a shift towards general-purpose models and opened up new applications outside the traditional NLP research community. In the next lesson, we’ll dive into today's most popular proprietary LLMs and describe the tradeoffs between proprietary and open-source LLMs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953596-emergent-abilities-in-llms
260,Model Quantization,"# Model Quantization
## Introduction
As AI models, including large language models, grow more advanced, their increasing number of parameters leads to significant memory usage. This, in turn, increases the costs of hosting and deploying these tools. In this lesson, we will learn about **quantization,** a process that can be employed to **diminish the memory requirements** of these models. We will explore the various types of quantization, such as scalar and product quantization. We will also learn how fine-tuning techniques like [QLoRA](https://arxiv.org/abs/2305.14314) use quantization. Finally, we will examine applying these techniques to AI models using a CPU with methods implemented in the Intel® [neural compressor library](https://github.com/intel/neural-compressor/blob/master/docs/source/quantization.md).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
261,Model Quantization,"# Model Quantization
## **Overview of Quantization**
In deep learning, quantization is a technique that **reduces the numerical precision** of model parameters, such as the weights and biases. This reduction helps decrease the model’s memory footprint and computational requirements, enabling easier deployment on resource-constrained devices such as mobile phones, smartwatches, and other embedded systems. ### **Everyday Example** To understand the concept of quantization, consider an everyday scenario. Imagine two friends, Jay and John. Jay asks John, ""What’s the time?"" John can reply with the exact time, 10:58 p.m., or he can say it's around 11 p.m. In the latter response, John simplifies the time, making it less precise but easier to communicate and understand. This is a basic example of quantization, which is analogous to the process in deep learning, where the precision of model parameters is reduced to make the model more efficient, albeit at the cost of some accuracy. ### **Quantization in Machine Learning** In Machine Learning, different floating point data types can be used for model parameters, a characteristic also called precision. The precision of the data types affects the amount of memory required by the model. Defining the parameters in higher precision types, like Float32 or Float64, provides greater accuracy but requires more memory, while lower precision types, like Float16 or BFloat16, use less memory but may result in a loss of accuracy. In the figure below, you can see the main floating point data types. ![From ""[A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co./blog/hf-bitsandbytes-integration)” blog post.](Model%20Quantization%20a26c6e34fa1341bd90928d5989efc540/Untitled.png) From ""[A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co./blog/hf-bitsandbytes-integration)” blog post. We can estimate the memory required for an AI model with its number of parameters. For example, consider the **[Llama2 70B](https://arxiv.org/abs/2307.09288)** model that uses Float16 precision for its parameters. Each parameter requires **two bytes**. To calculate the memory required in gigabytes (GB), where 1GB = 1024^3 bytes, the calculation is as follows: $(70,000,000,000 * 2)/ 1024^3 = 130.385 GB$ Now, let's explore the different basic quantization techniques. ### **Scalar Quantization** In scalar quantization, each dimension of the dataset is treated independently. The maximum and minimum values are calculated for each dimension across the dataset. The range between the maximum and minimum values in each dimension is then divided into equal-sized bins. Each value in the dataset is mapped to one of these bins, effectively quantizing the data. For example, consider a dataset of 2000 vectors with 256 dimensions sampled from a Gaussian distribution. The goal is to perform scalar quantization on this dataset.
```python
import numpy as np

dataset = np.random.normal(size=(2000, 256))

# Calculate and store minimum and maximum across each dimension
ranges = np.vstack((np.min(dataset, axis=0), np.max(dataset, axis=0)))
```
Now, calculate each dimension's start value and step size. The start value is the minimum value, and the step size is determined by the number of discrete bins in the integer type being used. This example uses 8-bit unsigned integers (**`uint8`**), providing 256 bins.
```python
starts = ranges[0, :]
steps = (ranges[1, :] - ranges[0, :]) / 255
```
The quantized dataset is then calculated as follows:
```python
scalar_quantized_dataset = np.uint8((dataset - starts) / steps)
```
The overall scalar quantization process can be encapsulated in a function:
```python
def scalar_quantisation(dataset):
    # Calculate and store minimum and maximum across each dimension
    ranges = np.vstack((np.min(dataset, axis=0), np.max(dataset, axis=0)))
    starts = ranges[0, :]
    steps = (ranges[1, :] - starts) / 255
    return np.uint8((dataset - starts) / steps)
```
### **Product Quantization**
In scalar quantization, the data distribution in each dimension should ideally be considered to avoid loss of information. Product quantization can preserve more",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
262,Model Quantization,"# Model Quantization
## **Overview of Quantization**
information by dividing each vector into sub-vectors and quantizing each sub-vector independently. For example, consider the following array:
```python
array = [
    [ 8.2, 10.3, 290.1, 278.1, 310.3, 299.9, 308.7, 289.7, 300.1],
    [ 0.1,  7.3,   8.9,   9.7,   6.9,  9.55,   8.1,   8.5,  8.99]
]
```
Quantizing this array to a 4-bit integer using scalar quantization results in significant information loss:
```python
quantized_array = [[ 0  0 14 13 15 14 14 14 14]
                   [ 0  0  0  0  0  0  0  0  0]]
```
In contrast, **product quantization** involves the following steps: 1. Divide each vector in the dataset into *m* disjoint sub-vectors. 2. For each sub-vector, cluster the data into *k* centroids (using *k*-means, for example). 3. Replace each sub-vector with the index of the nearest centroid in the corresponding codebook. Let's proceed with the product quantization of the given array with m=3 (number of sub-vectors) and k=2 (number of centroids):
```python
from sklearn.cluster import KMeans
import numpy as np

# Given array
array = np.array([
    [8.2, 10.3, 290.1, 278.1, 310.3, 299.9, 308.7, 289.7, 300.1],
    [0.1, 7.3, 8.9, 9.7, 6.9, 9.55, 8.1, 8.5, 8.99]
])

# Number of subvectors and centroids
m, k = 3, 2

# Divide each vector into m disjoint sub-vectors
subvectors = array.reshape(-1, m)

# Perform k-means over the sub-vectors (a single shared codebook in this simplified example)
kmeans = KMeans(n_clusters=k, random_state=0).fit(subvectors)

# Replace each sub-vector with the index of the nearest centroid
labels = kmeans.labels_

# Reshape labels to match the shape of the original array
quantized_array = labels.reshape(array.shape[0], -1)

# Output the quantized array
quantized_array
```
```python
# Result
array([[0, 1, 1],
       [0, 0, 0]], dtype=int32)
```
By quantizing the vectors and storing only the indices of the centroids, the memory footprint is significantly reduced. This method can help preserve more information than scalar quantization, especially when the distributions of different dimensions are diverse. Product quantization can significantly reduce memory footprint and speed up the nearest neighbor search but at the cost of accuracy. The tradeoff in product quantization is based on the **number of centroids** and the number of sub-vectors we use. The more centroids we use, the better the accuracy, but the memory footprint would not decrease and vice versa.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
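To make the accuracy/memory tradeoff concrete, here is a small sketch (an illustrative addition that continues the example above, assuming the variables from the previous block are still in scope) that reconstructs the array from the centroids and compares reconstruction error and storage:
```python
# Reconstruct each sub-vector from its centroid and flatten back to the original shape
reconstructed = kmeans.cluster_centers_[labels].reshape(array.shape)

# Reconstruction error introduced by replacing sub-vectors with centroids
mse = np.mean((array - reconstructed) ** 2)
print(f'Reconstruction MSE: {mse:.4f}')

# Storage comparison: original float64 values vs. centroid indices plus a small codebook
original_bytes = array.size * array.itemsize
quantized_bytes = quantized_array.size * 1 + kmeans.cluster_centers_.nbytes  # 1 byte per index if stored as uint8
print(f'Original: {original_bytes} bytes, quantized (indices + codebook): {quantized_bytes} bytes')
```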
263,Model Quantization,"# Model Quantization
## Quantizing Large Models
We learned about two relatively basic quantization techniques that can be used with deep learning models. While these simple techniques can work well enough with models with few parameters, they usually lead to a [drop in accuracy](https://arxiv.org/pdf/2208.07339.pdf) for larger models with billions of parameters. ![From “[LLM.int8(): 8bit Matrix Multiplication for Transformers at Scale](https://arxiv.org/pdf/2208.07339.pdf)” paper](Model%20Quantization%20a26c6e34fa1341bd90928d5989efc540/Untitled%201.png) From “[LLM.int8(): 8bit Matrix Multiplication for Transformers at Scale](https://arxiv.org/pdf/2208.07339.pdf)” paper Large models contain a **greater amount of information** in their parameters. With more neurons and layers, large models can represent more complex functions. They can capture deeper and more intricate relationships in the data, which smaller models might not be able to handle. Thus, the quantization process, which reduces the precision of these parameters, can cause a significant loss of this information, resulting in a substantial drop in model accuracy and performance. Optimizing the quantization process for large models is also more difficult due to the larger parameter space. Finding the optimal quantization strategy that minimizes the loss of accuracy while reducing the model size is a more complex task for larger models. ### Popular Post-Training Quantization Methods for LLMs Fortunately, more sophisticated quantization techniques have been released to address these problems, aiming to maintain the accuracy of large models while effectively reducing their size. **[LLM.int8()](https://arxiv.org/abs/2208.07339)** This research paper observes that activation outliers (activation values significantly different from the others) break the quantization of larger models and proposes keeping them in higher precision. By doing so, the performance of the model is not negatively affected. [**GPTQ**](https://arxiv.org/abs/2210.17323) This technique allows for faster text generation. The quantization is done layer by layer, minimizing the mean squared error (MSE) between the quantized and full-precision weights when given an input. The algorithm uses a mixed int4-fp16 quantization scheme where weights are quantized as int4 while activations remain in float16. During inference, weights are de-quantized on the fly, and the actual compute is performed in float16. This method makes use of a calibration dataset: the GPTQ algorithm calibrates the quantized weights by running inferences on the quantized model. **[AWQ](https://arxiv.org/abs/2306.00978)** This method is grounded in the observation that not all weights contribute equally to a Large Language Model's performance. It identifies a small fraction (0.1%-1%) of 'important' or 'salient' weights, the quantization of which, if skipped, can substantially mitigate quantization loss. Unlike traditional approaches that focus on weight distribution, the AWQ method selects these salient weights based on the magnitude of their activations. This approach leads to a notable enhancement in performance. By maintaining only 0.1%-1% of the weight channels, corresponding to larger activations, in the FP16 format, the method significantly boosts the performance of quantized models.
![From “[AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://arxiv.org/pdf/2306.00978.pdf)” paper](Model%20Quantization%20a26c6e34fa1341bd90928d5989efc540/Untitled%202.png) From “[AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://arxiv.org/pdf/2306.00978.pdf)” paper The authors note that retaining certain weights in FP16 format can cause hardware inefficiency due to using mixed-precision data types. To address this, they propose a method where all weights, including the salient ones, are quantized to avoid mixed-precision data types. However, before the quantization process, the weights are scaled. This scaling step is crucial as it helps protect the outlier weight channels during quantization, ensuring that the important information they hold is not lost or significantly altered during the quantization process. This method aims to strike a balance, allowing the model to benefit from the quantization efficiency while preserving the essential information in the salient weights.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
264,Model Quantization,"# Model Quantization
## Using Quantized models
Many open-source LLMs are available for download in a quantized format. As we learned in this lesson, these models will have reduced memory requirements. You can look at the [models section](https://huggingface.co./models) on HuggingFace to find and use a quantized model. This platform hosts a variety of models. For instance, you can try the latest [Mistral-7B-Instruct](https://huggingface.co./TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) model, which has been quantized using the GPTQ method. ### Quantizing Your Own LLM You can use the [Intel® Neural Compressor Library](https://github.com/intel/neural-compressor/tree/master) to quantize your own Large Language Model. This library offers various techniques for model quantization, some of which have been discussed in this module. To get started, follow the step-by-step guide provided in the **[repository](https://github.com/intel/neural-compressor/tree/8568fd57f8c54032fecbcd3217d1c2ddc35b2402/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_weight_only)**. This guide will walk you through quantizing a model, ensuring you have all the necessary components and knowledge to proceed. Before beginning the quantization process, ensure you have installed the **`neural-compressor`** library and **`lm-evaluation-harness`**. Inside the cloned [neural compressor directory](https://github.com/intel/neural-compressor/tree/master), navigate to the appropriate directory and install the required packages by running the following commands:
```bash
cd examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_weight_only
pip install -r requirements.txt
```
As an example, to quantize the **`opt-125m`** model with the GPTQ algorithm, use the following command:
```bash
python examples/pytorch/nlp/huggingface_models/language-modeling/quantization/ptq_weight_only/run-gptq-llm.py \
    --model_name_or_path facebook/opt-125m \
    --weight_only_algo GPTQ \
    --dataset NeelNanda/pile-10k \
    --wbits 4 \
    --group_size 128 \
    --pad_max_length 2048 \
    --use_max_length \
    --seed 0 \
    --gpu
```
This command will quantize the **`opt-125m`** model using the specified parameters.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
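As a quick illustration of how such a quantized checkpoint might be used, here is a minimal sketch (an illustrative addition, not from the lesson) that loads the GPTQ model mentioned above with the transformers library; it assumes the optimum and auto-gptq packages are installed and that a GPU is available, so check the model card for the exact recommended loading code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'TheBloke/Mistral-7B-Instruct-v0.1-GPTQ'

# The GPTQ int4 weights are de-quantized on the fly during inference
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')

inputs = tokenizer('[INST] What is quantization? [/INST]', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```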
265,Model Quantization,"# Model Quantization
## **How Quantization is used in [QLoRA](https://arxiv.org/pdf/2305.14314.pdf)**
We saw in a [previous lesson](https://www.notion.so/Deep-Dive-into-LoRA-and-SFT-c64980e122b54597843f620db2b557da?pvs=21) how fine-tuning can be achieved using fewer resources using [QLoRa](https://arxiv.org/pdf/2305.14314.pdf), a popular variant of LoRA that makes fine-tuning large language models even more accessible. In the course, we saw that QLoRA involves backpropagating gradients through a frozen, 4-bit quantized pre-trained language model into Low-Rank Adapters. To accomplish this, QLoRA employs a novel data type, the **4-bit NormalFloat (NF4)**, which is theoretically optimal for normally distributed weights. This optimality stems from quantile quantization, a technique particularly suited for normally distributed values. It ensures that each quantization bin holds **an equal number of values** from the input tensor, minimizing quantization error and providing a more uniform data representation. Since pre-trained neural network weights typically exhibit a **zero-centered normal distribution** with a standard deviation (σ), QLoRA transforms all weights into a unified fixed distribution. This transformation is achieved by scaling σ to ensure the distribution aligns perfectly within the range of the NF4 data type, further enhancing the efficiency and accuracy of the quantization process. This new fine-tuning technique shows no accuracy degradation in their experiments and matches BFloat16 performance. ![From “[QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/pdf/2305.14314.pdf)” paper](Model%20Quantization%20a26c6e34fa1341bd90928d5989efc540/Untitled%203.png) From “[QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/pdf/2305.14314.pdf)” paper",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
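To show how this looks in code, here is a minimal sketch (an illustrative addition, not code from the lesson) of loading a base model in 4-bit NF4 precision with the bitsandbytes integration in Hugging Face Transformers, which is the kind of frozen, quantized base model QLoRA attaches LoRA adapters to; the model name and the exact settings are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    'facebook/opt-1.3b',              # illustrative model choice
    quantization_config=bnb_config,
    device_map='auto',
)

# LoRA adapters (e.g., via the peft library) would then be attached on top
# of this frozen, 4-bit quantized base model, and only the adapters are trained.
```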
266,Model Quantization,"# Model Quantization
## **Conclusion**
In this lesson, we explored the concept of quantization, a technique that can reduce the memory requirements of large models and, in some cases, enhance the text generation speed for language models. We delved into some state-of-the-art quantization techniques suitable for models with billions of parameters, examining the unique contributions of each method. We also learned how to quantize our own models using the [Intel® Neural Compressor Library](https://github.com/intel/neural-compressor/tree/master), which supports many popular quantization methods. Lastly, we revisited QLoRA, understanding how it leverages quantization to make the fine-tuning of models more accessible to a broader audience. --- *Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries.* Special thanks to [Sahibpreet Singh](https://www.linkedin.com/in/sahibpreet-singh-16572b171/) for contributing to this lesson!",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959843-model-quantization
267,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Introduction
Training a language model can be costly, and expenses associated with deploying it can quickly accumulate over time. Utilizing optimization techniques that enhance the efficiency of the inference process is crucial for minimizing hosting expenses. In this lesson, we will discuss the utilization of the Intel® [Neural Compressor](https://huggingface.co./docs/optimum/main/en/intel/optimization_inc) library to implement quantization techniques. This approach aims to enhance the cost-effectiveness and speed of models when running on CPU instances (it also supports AMD CPUs, ARM CPUs, and NVIDIA GPUs through ONNX Runtime, but with limited testing). Various techniques can be employed for optimizing a network. Pruning involves trimming the parameter count by targeting less important weights, while knowledge distillation transfers insights from a larger model to a smaller one. Lastly, quantization decreases weight precision, for example from 32 bits to 8 bits. It significantly decreases the memory needed for loading models and generating responses, with minimal accuracy loss. ![Credit: [Deci.ai](https://deci.ai/quantization-and-quantization-aware-training/)](Deploying%20an%20LLM%20on%20a%20Cloud%20CPU%205838e847653d4d999231d249cc1e61a5/deci-quantization-blog-1b.png.webp) Credit: [Deci.ai](https://deci.ai/quantization-and-quantization-aware-training/) The primary focus of this lesson is the quantization technique. We will apply it to an LLM and demonstrate how to perform inference using the quantized model. Ultimately, we will execute several experiments to assess the resulting acceleration. We'll begin by setting up the necessary libraries. Install the `optimum-intel` package directly from its GitHub repository, along with the other required packages:
```bash
pip install git+https://github.com/huggingface/optimum-intel.git@v1.10.1
pip install neural_compressor===2.2.1
pip install onnx===1.14.1
```",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
268,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Quantization
You can utilize the `optimum-cli` command within the terminal to execute dynamic quantization. Dynamic quantization stands as the recommended approach for transformer-based neural networks. You have the choice to either specify the path to your custom model or select a model from the Huggingface Hub, which will be designated using the `--model` parameter. The `--output` parameter determines the name of the resulting model. We are conducting tests on Facebook's OPT model with 1.3 billion parameters. ```bash optimum-cli inc quantize --model facebook/opt-1.3b --output opt1.3b-quantized ``` If the script fails to recognize your model, you can employ the `--task` parameter. You might use `--task text-generation` for language models. Check the source code for a complete [list of supported tasks](https://github.com/huggingface/optimum/blob/3ffebf994fbd579ab9acb589edc401fa66413928/optimum/exporters/tasks.py#L152). The library also provides a constrained quantization method, enabling you to define a specific target. For example, you can employ an evaluation function to request quantization of the model while experiencing no more than a 5% reduction in accuracy. For further details regarding constrained quantization, please refer to the [library documentation](https://huggingface.co./docs/optimum/main/en/intel/optimization_inc).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
269,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Inference
Now, the model is ready for inference purposes. In this section, we will focus on how to load these models and present the outcomes of our benchmark tests, highlighting the impact of quantization on the speed of the generation process. Prior to conducting the inference process, it's essential to load the pre-trained tokenizer using the `AutoTokenizer` class. As the quantization technique doesn't alter the model's vocabulary, we will employ the same tokenizer as the base model.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"")
```
For loading the model, we utilize the `INCModelForCausalLM` class provided by the Optimum package. Additionally, it offers a range of loaders tailored for various tasks, including `INCModelForSequenceClassification` for classification and `INCModelForQuestionAnswering` for tasks involving question answering. The `.from_pretrained()` method should be provided with the path to the quantized model from the previous section.
```python
from optimum.intel import INCModelForCausalLM

model = INCModelForCausalLM.from_pretrained(""./opt1.3b-quantized"")
```
Finally, we can employ the identical `.generate` method from the Transformers library to input the prompt to the model and get the response.
```python
inputs = tokenizer("""", return_tensors=""pt"")

generation_output = model.generate(
    **inputs,
    return_dict_in_generate=True,
    output_scores=True,
    min_length=512,
    max_length=512,
    num_beams=1,
    do_sample=True,
    repetition_penalty=1.5,
)
```
We compel the model to produce 512 tokens by explicitly setting both the minimum and maximum length parameters. The rationale behind this is to maintain a uniform token count between the standard model and the quantized version, facilitating a valid comparison of their generation times. We also experimented with batching the requests and employing an alternative decoding strategy. The table below reports generation times for 512 tokens.

| Decoding Strategy | Batch Size | Base Model (s) | Quantized Model (s) |
| --- | --- | --- | --- |
| Greedy | 1 | 58.09 | 26.847 |
| Greedy | 4 | 127.86 | 52.46 |
| Beam Search (K=4) | 1 | 144.77 | 40.73 |
| Beam Search (K=4) | 4 | 354.50 | 199.72 |

The above table shows a large improvement in results with the quantization method. The most significant enhancement involves implementing beam search with a batch size of 1, which led to a 3.5x acceleration in the inference process. All the experiments mentioned were conducted on a server instance equipped with a 4th Generation Intel® Xeon® Scalable Processor and 64GB of memory. This highlights the feasibility of performing inference on CPU instances to mitigate costs and latency effectively.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
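Here is a minimal sketch (an illustrative addition) of how generation time could be measured for such a comparison, reusing the model and tokenizer loaded above; the exact benchmarking setup used for the table is not shown in the lesson, so this is only an approximation:
```python
import time

def time_generation(model, tokenizer, num_tokens=512):
    # Measure the wall-clock time to generate a fixed number of tokens
    inputs = tokenizer('', return_tensors='pt')
    start = time.perf_counter()
    model.generate(
        **inputs,
        min_length=num_tokens,
        max_length=num_tokens,
        num_beams=1,
        do_sample=True,
        repetition_penalty=1.5,
    )
    return time.perf_counter() - start

print(f'Quantized model: {time_generation(model, tokenizer):.2f}s for 512 tokens')
```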
270,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Deployment Frameworks
Deploying large language models into production is the final stage in harnessing their capabilities for a diverse array of applications. Creating an API is the most efficient and flexible approach among the various methods available. APIs allow developers to seamlessly integrate these models into their code, enabling real-time interactions with web or mobile applications. There are several ways to create such APIs, each with its advantages and trade-offs. There are specialized libraries, such as [vLLM](https://github.com/vllm-project/vllm) and [TorchServe](https://pytorch.org/serve/getting_started.html), designed for handling specific use cases. These libraries are capable of loading models from various sources and creating endpoints for convenient accessibility. In most cases, these libraries even offer optimization methods that enhance the speed of the inference process, batch incoming requests, and manage memory efficiently. On the other hand, there are standard backend libraries such as [FastAPI](https://fastapi.tiangolo.com/) that facilitate the creation of arbitrary endpoints. While FastAPI is not specifically designed for serving AI models, you can effortlessly integrate it into your development process and build whatever other APIs you need. Regardless of the chosen method, a well-designed API ensures that large language models can be deployed robustly, enabling organizations to leverage their capabilities in chatbots, content generation, language translation, and many other applications.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
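As an illustration of the kind of route such a service might expose, here is a minimal sketch (an illustrative addition, not a production-ready server; the endpoint name, request schema, and model paths are assumptions) that wraps the quantized model from the previous sections in a FastAPI app:
```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoTokenizer
from optimum.intel import INCModelForCausalLM

app = FastAPI()

# Load the tokenizer and quantized model once at startup and reuse them for every request
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-1.3b')
model = INCModelForCausalLM.from_pretrained('./opt1.3b-quantized')

class GenerationRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post('/generate')
def generate(request: GenerationRequest):
    # Tokenize the prompt, generate a completion, and return it as JSON
    inputs = tokenizer(request.prompt, return_tensors='pt')
    outputs = model.generate(**inputs, max_new_tokens=request.max_new_tokens)
    return {'completion': tokenizer.decode(outputs[0], skip_special_tokens=True)}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```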
271,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Deploying a model on CPU using Compute Engine with GCP
Follow these steps to deploy a language model on Intel® CPUs using Compute Engine with Google Cloud Platform (GCP): 1. **Google Cloud Setup**: Sign in to your [Google Cloud account](https://console.cloud.google.com/). If you don't have one, create it and set up a new project. 2. **Enable Compute Engine API**: Navigate to APIs & Services > Library. Search for ""Compute Engine API"" and enable it. 3. **Create a Compute Engine instance**: Go to the Compute Engine dashboard and click on “Create Instance”. Choose a CPU-based machine type. Here are several machine types available in GCP that feature Intel® CPUs. ![Image from [https://cloud.google.com/compute/docs/cpu-platforms](https://cloud.google.com/compute/docs/cpu-platforms)](Deploying%20an%20LLM%20on%20a%20Cloud%20CPU%205838e847653d4d999231d249cc1e61a5/Screenshot_2023-09-27_at_17.02.45.png) Image from [https://cloud.google.com/compute/docs/cpu-platforms](https://cloud.google.com/compute/docs/cpu-platforms) Once the instance is up and running: 1. **Deploy the model**: SSH into your instance. Install the necessary libraries and dependencies and copy your server code (FastAPI, vLLM, etc.) to the machine. 2. **Run the model**: Once the setup is complete, run your language model. If it's served over the web, start your server. Remember, Google Cloud charges based on the resources used, so make sure to stop your instance when not in use. A similar process can be done for AWS too using [EC2](https://aws.amazon.com/ec2/). You can find AWS machine types [here](https://aws.amazon.com/ec2/instance-types/).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
272,Deploying an LLM on a Cloud CPU,"# Deploying an LLM on a Cloud CPU
## Conclusion
In this lesson, we explored the potential of harnessing 4th Generation Intel® Xeon® Scalable Processors for the inference process and the array of optimization techniques available that make it a practical choice. Our focus was on the quantization approach aimed at enhancing the speed of text generation while conserving resources. The results demonstrate the advantages of applying this technique across various configurations. It is worth noting that there are additional techniques available to optimize the models further. The upcoming chapter will discuss advanced topics within language models, including aspects like multi-modality and emerging challenges. --- >> [Notebook](https://colab.research.google.com/drive/1zWVjQUfqaoEiBeKsNarNBRQjfonEhjdT?usp=sharing). --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48959857-deploying-an-llm-on-a-cloud-cpu
273,Course Introduction and Logistics,"# Course Introduction and Logistics
## Introduction to the “Training and Fine-tuning LLMs for Production” Course
Activeloop, Towards AI, and Intel® Disruptor Initiative are excited to collaborate to bring Gen AI 360: Foundational Model Certification Course to aspiring Generative AI professionals, executives, and enthusiasts of tomorrow. Following the success of our ""LangChain & Vector Databases In Production"" course, we're excited to welcome you to part two of the series, ""Training and Fine-tuning LLMs for Production."" In this course, you will cover the intricacies of training, fine-tuning, and seamlessly integrating these models into AI products. This course will guide you on the most effective methods and best practices for preparing LLMs for production. Let's begin! ### **Why This Course?** The “Training and Fine-tuning LLMs for Production” course provides the theoretical knowledge and practical skills necessary to work with these models. A fundamental pillar of our course is hands-on learning. We are grounded in the belief that practical application and experimentation form the cornerstone of truly understanding and utilizing the strengths of LLMs. You will acquire the skills to train, refine, and adapt LLMs for specific tasks and integrate them seamlessly into your products and applications. We navigate deeper to understand the complex layers of LLMs, touching upon the architectural frameworks of Transformers and GPT models. We also explore the metrics used for performance evaluation, ensuring a comprehensive understanding of the whole process involved in gearing LLMs for production. ### **Who Should Take This Course?** This course is designed with a wide audience in mind, including beginners in AI, current machine learning engineers, students, and professionals considering a career transition to AI. Please know that prior knowledge of coding and Python is a prerequisite. We aim to provide you with the necessary tools to apply and tailor Large Language Models across a wide range of industries to make AI more accessible and practical. ### **What You Will Learn** As we progress, you'll become familiar with the architecture of Transformers and Generative Pre-trained Transformers (GPTs) and learn more about prompting LLMs to produce specific outputs. Essential topics such as proprietary versus open-source models, various LLM training methodologies, and production deployment strategies will be covered. We also touch upon advanced fine-tuning techniques like LoRA, QLoRA, SFT, and RLHF. As the course progresses, you'll engage in several projects crafted to offer you hands-on experience while reinforcing your grasp of LLMs. One standout project will guide you through the process of Supervised Fine-Tuning focused on financial sentiment analysis. Here, you'll become adept at various strategies for refining LLMs and get a clear perspective on datasets designed for tailoring LLMs to specific, goal-oriented tasks. ### Is the Course Free? Yes, the course is entirely free for everybody. ### Estimated cost of running the code examples in the course Running the code examples in the course may require additional costs, but please note that this is not a requirement for course completion. This course includes multiple coding projects, including one on pre-training a language model using Lambda Labs GPUs, another on fine-tuning an LLM on GPUs through RLHF, and additional ones on fine-tuning LLMs on CPUs. We made an effort to keep the costs low so that more students could easily replicate them. You can still complete the course and pass chapter quizzes without running these projects or paying anything. 
The main cost, of course, would be the pre-training of the language model (so, GPUs), but you can start it and stop it after a few iterations, spending a few dollars just for the experience of setting up the infrastructure. Expect to spend something between $50 and $100, if you wish to complete the finetuning examples. Always beware of the costs you're incurring when",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
274,Course Introduction and Logistics,"# Course Introduction and Logistics
## Introduction to the “Training and Fine-tuning LLMs for Production” Course
you borrow AI hardware (GPUs or CPUs with a lot of RAM), and make sure not to use them for more time than necessary. For renting CPUs, we use Google Cloud Platform, which gives $300 of free credits to be used within three months of signup. These free credits can be used with Compute Engine to rent CPUs. We explain how to do that in a later section. ### Will platform and cloud credits be available for students of the course? We will also be providing grants and credits in collaboration with our partners to complete the training or fine-tuning examples. Everyone will get free access to Deep Lake for the course, but some other credits will only be available according to certain criteria and course completion milestones. All takers of the course can redeem a free extended trial of three weeks for the Activeloop Growth plan by redeeming the GENAI360 promo code at checkout. ### **Certification** By participating in this course and completing the quizzes at the end of each chapter, you will have the opportunity to earn a certification in using Deep Lake - a valuable addition to your professional credentials. This certification program, offered at no cost, forms part of the Deep Lake Foundational Model Certification program in collaboration with Intel® Disruptor Initiative and Towards AI.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
275,Course Introduction and Logistics,"# Course Introduction and Logistics
## Course Logistics
Here's everything you need to know about the course. ### **Course Hosting and Pace** This course is hosted by **Activeloop**. It is designed as a **self-paced** learning journey, allowing you to proceed at your own comfort. The online format provides flexibility, enabling you to engage with the lessons whenever it best suits you. At the end of each module, you can test your new knowledge with multiple-choice quizzes, which are mandatory to continue the course. After completing all the quizzes, you will receive your course certification. ### **Community Support** Have questions about this course or specific lessons? Want to exchange ideas with fellow learners? We encourage active interaction in the dedicated forum in the *[Towards AI’s Learn AI Together Discord Community](https://discord.com/invite/learnaitogether)*. This vibrant community is comprised of over 50,000 AI experts and enthusiasts. Our community has a dedicated channel for this course where you can pose questions and share insights. For queries specifically related to Deep Lake, please join the *[Deep Lake Slack community](https://join.slack.com/t/hubdb/shared_invite/zt-ivhsj8sz-GWv9c5FLBDVw8vn~sxRKqQ),* where experts and users will be ready to assist. ### **Required Platforms, Tools, and Cloud Tokens** The course involves practical projects and exercises that require various tools and platforms. These will be thoroughly guided in the individual lessons. However, the main platforms that you will use throughout the course are: - **Activeloop’s Deep Lake** - **Lambda Lab’s cloud infrastructure** - **Google Cloud Platform (GCP)** - **Google Cloud Compute Engine (GCE)** - **Weights & Biases** - **Cohere**",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
276,Course Introduction and Logistics,"# Course Introduction and Logistics
## **What is Activeloop?**
[Activeloop](https://www.activeloop.ai/) is a tech company dedicated to building data infrastructure optimized for deep-learning applications. It offers a platform that seamlessly connects unstructured data types, like audio, video, and images, to machine learning models. Their main product, Deep Lake, ensures data streaming, scalable machine learning pipelines, and dataset version control. Such infrastructures are particularly beneficial when dealing with the demands of training and fine-tuning models for production. ### **What is Deep Lake?** Deep Lake is an open-source data lake designed for deep learning applications. It retains essential features of traditional data lakes, including SQL queries, ACID transactions, and dataset visualization. It specializes in storing complex data in tensor form, efficiently streaming data to deep learning frameworks. Built to be serverless on a columnar storage format, it also offers native version control and in-browser data visualization, complementing the needs of LLM training and deployment processes. ### **How to set up a Deep Lake account?** To set up a Deep Lake account, navigate to the [app’s registration page](https://app.activeloop.ai/register/) and sign up. Follow the on-screen instructions and add the required details. Once you've verified your email and established a secure password, your account will be active and ready for use. **How to get the Deep Lake API token?** 1. After logging in, you should see your homepage. You should now see a “Create API token” button at the top of your homepage. Click on it, and you’ll get redirected to the “API tokens” page. This is where you can generate, manage, and revoke your API keys for accessing Deep Lake. 2. Click on the ""Create API token"" button. You should see a popup asking for a token name and an expiration date. By default, the token expiration date is one year. Once you’ve set the token name and its expiration date, click the “Create API token” button. 3. You should now see a green banner saying that the token has been successfully generated, along with your new API token, on the “API tokens” page. To copy your token to your clipboard, click the square icon on its right.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
277,Course Introduction and Logistics,"# Course Introduction and Logistics
## **What is Lambda?**
[Lambda](https://lambdalabs.com/) was founded by Machine Learning engineers and builders. They empower founders, researchers, and ML practitioners with access to best-in-class deep learning infrastructure, from single on-demand GPU instances to the highest-performing clusters with thousands of GPUs interconnected across a non-blocking network fabric. Lambda helps teams deploy affordable infrastructure anywhere, whether in their own data centers or in Lambda’s hosted cloud. Lambda Labs enables companies to start building affordably and scale their AI/ML workloads with industry-leading pricing, from A10 instances to the latest H100 architecture. Lambda serves a community of over 80,000 ML engineers across startups and Fortune 100 enterprises. ### **How to Create an Account:** 1. **Registration:** On the Lambda Labs website, navigate to the cloud sign-in and register for a new account by clicking the “sign up” button. Provide the necessary details, like your email address, and agree to the terms of service. Important: **To qualify for cloud credits (more on this in course logistics), please make sure your email matches the email on your Activeloop and Gen AI 360 Certification (learn.activeloop.ai) email account.** 2. In the **‘Create your free account’** section, fill in the form: for the account type, select ‘Individual’, then provide your information and click ‘Register’. 3. Under **‘Terms of Service’**, click the ‘I agree’ checkbox. 4. **Email Verification:** After registration, you'll receive an email for account verification. Follow the provided link to confirm your email and log into your Lambda Cloud dashboard. ### **How to Connect to a Machine:** 1. **Launching an Instance:** Select the ""Launch Instance"" option on your dashboard. First-time users will be prompted to upload an SSH key. 2. **Machine Access:** Once your instance is up and running, the dashboard will provide you with essential details to initiate your machine usage. ### How to upload an SSH key? **Uploading an SSH Key** - Locate your existing public key, usually found under your home directory, in a folder called `~/.ssh/` - You can use the command `ls -a ~/.ssh/` to find the key. The public key name usually looks like `id_rsa.pub` or `name-you-gave-it.pub`. - To see the contents of the key, use the command `cat ~/.ssh/name-you-gave-it.pub`, replacing `name-you-gave-it` with the actual name of your key. - Copy all the contents, starting with `ssh-rsa`, and paste them into the text field on the dashboard. If you don’t have a key pair yet, see the short sketch at the end of this section for how to generate one. More information: [lambdalabs.com](https://lambdalabs.com/blog/getting-started-with-lambda-cloud-gpu-instances). ### **How to Use Free Credits:** New users get complimentary credits on Lambda Labs. To utilize these credits, initiate an instance after creating your account. As you use the service, your free credits will automatically be used up before additional billing occurs.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
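If you do not already have an SSH key pair to upload in the step above, here is a minimal way to generate one from a terminal; the key file name and email comment are illustrative choices, not requirements.

```bash
# Generate a new RSA key pair (accept the default location or give it a custom name)
ssh-keygen -t rsa -b 4096 -C you@example.com -f ~/.ssh/lambda_cloud

# Print the public half (it starts with ssh-rsa) so you can paste it into the Lambda dashboard
cat ~/.ssh/lambda_cloud.pub
```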
278,Course Introduction and Logistics,"# Course Introduction and Logistics
## **What is GCP?**
[Google Cloud Platform (GCP)](https://cloud.google.com/) is Google's comprehensive cloud computing suite, with a variety of services including computing, storage, data analytics, machine learning, and networking, all built on the same infrastructure that powers Google's products. ### **Creating an Account on GCP:** **Google Account Access:** Sign in to your existing Google account. This allows you to assess the performance and offerings of Google's products in real-time scenarios. 1. Go to [https://cloud.google.com/](https://cloud.google.com/) and click on ‘**TRY IT FREE**‘. 2. Login to your Gmail account, choose your country, and accept the terms & conditions. 3. Fill in: **Account type, Name, Address, credit card details, tax information**, etc (If you have an old Gmail account and all the information is already there, it would take it, and you might not have to fill in all the details). 4. Click on “**Start my free trial**“. Note: Credit Card is a must to create a Google Cloud Platform account. You’ll be given [free credits](https://cloud.google.com/free) of the value of 300$ that you can use to experiment with the platform. Here are some instructions on how to create a GCP project using the `gcloud` command line tool (follow [these instructions](https://cloud.google.com/sdk/docs/install) to install it). Alternatively, you can create a project using the UI of the platform. 1. **Initiating the gcloud CLI:** Initialize the Command Line Interface specific to Google Cloud, termed as 'gcloud,’ by executing the **`gcloud init`** command. 2. **Google Cloud Project Management:** - If you intend to use GCP temporarily, setting up a new Google Cloud project is recommended. This ensures that after evaluation, you can efficiently delete the project, subsequently removing all linked resources. - To initiate a new Google Cloud project, input **`gcloud projects create PROJECT_ID`**, replacing 'PROJECT_ID' with a unique name for your project. - To choose the recently created Google Cloud project, enter the command **`gcloud config set project PROJECT_ID`**. 3. **Billing Setup:** For comprehensive details and guidelines on this, you can refer to their official documentation at **[cloud.google.com](https://cloud.google.com/)**. Once you have a GCP project, it’s possible to create a Compute Engine instance by searching for “Compute Engine” in the search bar on top of the page and then clicking on the “Compute Engine” search result. We’ll need a Compute Engine instance later in the course, so for now just read these instructions without executing them. Keep in mind that you’ll use your credits whenever you keep a VM on, so always remember to delete the instance once you’re done with your work. To create a Compute Engine instance, you can then click on the “Create Instance” button and choose a machine configuration. In the course we’ll use high-end Intel® CPUs, which means the “C3” option from the list. ![Screenshot 2023-10-02 at 14.35.40.png](Course%20Introduction%20and%20Logistics%20b4efb73683564124bb012962334b83bc/Screenshot_2023-10-02_at_14.35.40.png) Then, you can click on “Create”. After some time, you’ll see the following. ![Screenshot 2023-10-02 at 14.33.53.png](Course%20Introduction%20and%20Logistics%20b4efb73683564124bb012962334b83bc/Screenshot_2023-10-02_at_14.33.53.png) By clicking on “SSH” you can SSH into the instance and have a terminal session. You can finally delete the instance by selecting it and then clicking on “Delete”. 
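For readers who prefer the command line, here is a rough sketch of `gcloud` commands equivalent to the steps described above; the project ID, instance name, zone, and machine type are illustrative placeholders to adapt to your own setup.

```bash
# Initialize the CLI and create/select a project (the project ID is a name you choose)
gcloud init
gcloud projects create my-llm-course-project
gcloud config set project my-llm-course-project

# Create a C3 (Intel CPU) Compute Engine instance, SSH into it, and delete it when done
gcloud compute instances create llm-course-vm --zone=us-central1-a --machine-type=c3-standard-4
gcloud compute ssh llm-course-vm --zone=us-central1-a
gcloud compute instances delete llm-course-vm --zone=us-central1-a
```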
![Screenshot 2023-10-02 at 14.34.38.png](Course%20Introduction%20and%20Logistics%20b4efb73683564124bb012962334b83bc/Screenshot_2023-10-02_at_14.34.38.png) ### What is Google Cloud Compute Engine (GCE)? [Google Compute Engine](https://cloud.google.com/compute) is a component of Google Cloud Platform (GCP) that offers virtual machines running in Google's data centers and worldwide fiber network. It provides scalable and flexible computing capabilities, allowing you to leverage the power of Google's infrastructure. We’ll use Compute Engine in a few lessons to spin up virtual machines that we’ll use for finetuning LLMs leveraging CPUs. Indeed, LLM finetuning can be done in a reasonable time leveraging CPUs too, and they are also definitely more available than cloud GPUs today. **Here are the steps to set up a",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
279,Course Introduction and Logistics,"# Course Introduction and Logistics
## **What is GCP?**
Compute Engine on the Google Cloud Platform:** 1. **Create a Google Cloud Account:** You first need to create a Google Cloud account if you don't already have one. This will involve signing up with your Google account credentials, accepting the terms and conditions, and setting up billing information. 2. **Create a Project:** Create a new project in the Google Cloud Console after setting up your account. A project organizes all your Google Cloud resources. A project consists of a set of users and APIs, as well as billing, authentication, and monitoring settings for those APIs. So, for example, all of your Cloud Storage buckets and objects, along with user and API access to them, are controlled by a project. 3. **Enable Compute Engine API:** Next, enable the Compute Engine API for your project. This allows you to interact with the Compute Engine and is necessary to create and manage instances. 4. **Create a Compute Engine Instance:** Now, you can create a Compute Engine instance. This involves choosing the machine type, boot disk, and other configurations based on your requirements. You can do this either through the Google Cloud Console or through the Google Cloud CLI if you have it installed. 5. **Configure the Instance:** After creating your Compute Engine instance, you can configure it to suit your needs. This might involve setting up networking, attaching additional storage, and installing any necessary software. 6. **Deploy Your Code:** Finally, you can deploy your code to the Compute Engine instance. This process will vary depending on the specifics of your application and how it's designed to run.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
280,Course Introduction and Logistics,"# Course Introduction and Logistics
## **What is Weights & Biases?**
[Weights & Biases](https://wandb.ai/site) is a sophisticated tool tailored for machine learning. It specializes in experiment tracking, dataset versioning, and comprehensive model management. Its platform is designed for developers, offering a unified space for logging experiments, visualizing data, and collaborating with team members. ### **Creating an Account on Weights & Biases:** 1. Navigate to the [Weights & Biases official website](https://wandb.ai/site): 2. Click on the ""Sign Up"" option, in the top-right corner of the initial page. 3. W&B offers a variety of sign-up methods: via GitHub, Google, LinkedIn, or by manually entering an email address and a desired password. 4. After you sign up, a verification email will be sent to the email address you provided. Activate your account by clicking on the link in this email. ### **Accessing Your Weights & Biases Account:** 1. Again, head over to the official Weights & Biases website. 2. Select the ""Log In"" option in the top-right corner. 3. Input your login details with your chosen sign-up method (GitHub, Google, LinkedIn, or your unique email-password combination). 4. Finalize by selecting the ""Log In"" option, granting you access to your personal account. ### What is Cohere? [Cohere](https://docs.cohere.com/docs/intro-the-cohere-platform) is a platform that provides AI language models that can be used to build applications that generate human-like text. It allows developers to leverage these models through APIs, with features including text generation, text summarization, question answering, text embeddings, text reranking, chat generation, retrieval augmented generation, semantic search. Some of the main benefits of using Cohere's platform include its simplicity, the quality of its models, and the flexibility it provides for application development. Cohere also allows you to finetune its LLMs with your data. We’ll use Cohere in this course to finetune an LLM to extract chemical-disease interactions from biomedical papers accurately. ![*An overview of the [Cohere platform](https://docs.cohere.com/docs/intro-the-cohere-platform).*](Course%20Introduction%20and%20Logistics%20b4efb73683564124bb012962334b83bc/f54cffe-cohere-platform.png) *An overview of the [Cohere platform](https://docs.cohere.com/docs/intro-the-cohere-platform).* ### How to create Cohere account? Setting up an account on Cohere is a straightforward process. Here's a step-by-step guide: 1. **Visit the Cohere website**: Go to [Cohere's website](https://www.cohere.ai/). 2. **Sign Up**: Click on the 'Sign Up' button on the top right corner of the website, leading you to the registration form. 3. **Fill out the Registration Form**: Enter essential details like your email address and password. Ensure you review and accept the terms of service and privacy policy. 4. **Verification**: After submission of the form, check your email for a verification link. Activate your account by clicking the provided link. 5. **Dashboard Access**: Post verification, you'll be directed to your Cohere dashboard, allowing you to tweak account settings, monitor usage, and retrieve your API keys. **Note**: To utilize Cohere's API, you'll need to generate an API key from your dashboard. This key will be used to authenticate your application's requests to Cohere's API.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
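As a quick illustration of what that API key is for, here is a minimal sketch using the `cohere` Python SDK; the model name, prompt, and token limit are illustrative, and the exact client API may differ between SDK versions.

```python
import cohere

# The API key comes from your Cohere dashboard, as described above
co = cohere.Client('<your-cohere-api-key>')

# Generate a short completion as a smoke test
response = co.generate(
    model='command',
    prompt='Summarize in one sentence why text embeddings are useful.',
    max_tokens=60,
)
print(response.generations[0].text)
```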
281,Course Introduction and Logistics,"# Course Introduction and Logistics
## **Coding Environment and Packages**
Before starting this course, you need to ensure that you have the appropriate coding environment ready. Please make sure to use a Python version equal to or later than **3.8.1**. You can set up your environment by choosing one of the following options: 1. Having a code editor installed on your computer. A popular coding environment is [Visual Studio Code](https://code.visualstudio.com/). 2. Using Python virtual environments to manage Python libraries. 3. Alternatively, you could use Google Colab notebooks. You will need the following packages to execute the sample code provided in each lesson successfully. They can be installed using the `pip` package manager. ``` deeplake==3.6.19 openai==0.27.8 tiktoken==0.4.0 transformers==4.32.0 torch==2.0.1 numpy==1.23.5 deepspeed==0.10.1 trl==0.7.1 peft==0.5.0 wandb==0.15.8 bitsandbytes==0.41.1 accelerate==0.22.0 tqdm==4.66.1 neural_compressor==2.2.1 onnx==1.14.1 pandas==2.0.3 scipy==1.11.2 ``` While we strongly recommend installing the latest versions of these packages, please note that the code has been tested with the versions listed above. Moreover, specific lessons may require the installation of additional packages, which will be explicitly mentioned. The following command demonstrates how to install a package using pip. ```bash pip install deeplake # Or: (to install a specific version) # pip install deeplake==3.6.5 ``` ### **Google Colab** Google Colaboratory, popularly known as Google Colab, is a *free cloud-based Jupyter notebook environment*. Data scientists and engineers widely use it to train machine learning and deep learning models using CPUs, GPUs, and TPUs. Google Colab comes with an array of features such as: - Free access to GPUs and TPUs for accelerated model training. - A web-based interface for a service running on a virtual machine, eliminating the need for local software installation. - Seamless integration with Google Drive and GitHub. To use Google Colab, all you need is a Google account. You can run terminal commands directly in notebook cells by prepending an exclamation mark (!) to the command. Every notebook created in Google Colab gets stored in your Google Drive for easy access. A convenient way of using API keys in Colab involves: 1. Saving them in a file named `.env` on your Google Drive. Here's how the file should be formatted to save the Activeloop token and the OpenAI API key. ``` ACTIVELOOP_TOKEN=your_activeloop_token OPENAI_API_KEY=your_openai_key ``` 2. Mounting your Google Drive on your Colab instance. 3. Loading them as environment variables using the `dotenv` library, like in the following code. ```python from dotenv import load_dotenv load_dotenv('/content/drive/MyDrive/path/to/.env') ``` ### **Creating Python Virtual Environments** Python virtual environments offer an excellent solution for managing Python libraries and avoiding package conflicts. They create isolated environments for installing packages, ensuring that your packages and their dependencies are contained within that environment. This setup provides clean and isolated environments for your Python projects. Begin by executing `python --version` in your terminal to confirm that your Python version is equal to or greater than 3.8.1. Then follow these steps to create a virtual environment: 1. Create a virtual environment using the command `python -m venv my_venv_name`. 2. Activate the virtual environment by executing `source my_venv_name/bin/activate`. 3. 
Install the required libraries and run the code snippets from the lessons within the virtual environment. --- *Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48953538-course-introduction-and-logistics
282,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## Introduction
As previously stated, the RLHF process involves incorporating human feedback into the training process through a reward model that learns the desired patterns and uses them to steer the model’s output. For instance, if the goal is to enhance politeness, the reward model will guide the model to generate more polite responses by assigning higher scores to polite outputs. This process can be resource-intensive due to the need to train an additional reward model using a dataset curated by humans, which can be costly. Nevertheless, we will leverage available open-source models and datasets whenever feasible to explore the technique thoroughly while maintaining acceptable costs. It is recommended to begin the procedure by conducting a supervised fine-tuning phase, which enables the model to adapt to a conversational format. This procedure can be accomplished by using the `SFTTrainer` class. The next phase involves training a reward model with the desired traits using the `RewardTrainer` class in section 2. Finally, the Reinforcement Learning phase employs the models from the preceding steps to construct the ultimate aligned model, utilizing the `PPOTrainer` class in section 3. After each subsection, you can access the fine-tuned models, the Weights & Biases reports, and the requirements file listing the library versions. Note that different steps might necessitate distinct versions of libraries. We employed the `OPT-1.3B` model as the foundational model and fine-tuned the `DeBERTa` (300M) model as the reward model for our experiments. While these are more compact models and might not incorporate the insights of recent larger models like GPT-4 and LLaMA2, the procedure we are exploring in this tutorial can be readily applied to other existing networks by simply modifying the model key name in the code. We’ll be using a set of 8x NVIDIA A100 GPUs in this lesson.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
283,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## GPU Cloud - Lambda
In this lesson, we’ll leverage Lambda, the GPU cloud designed by ML engineers for training LLMs & Generative AI, to rent GPUs. We can create an account, link a billing account, and [rent one of the available GPU servers at the listed prices](https://lambdalabs.com/service/gpu-cloud/pricing). The cost is charged for the time your instance is up, not only for the time you’re training your model, so remember to turn the instance off. For this lesson, we rented an 8x NVIDIA A100 instance (40GB of memory per GPU) at the price of $8.80/h. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
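Once the instance is running and you have SSHed into it, a quick way to confirm that all eight A100 GPUs are visible (and to keep an eye on their memory usage during training) is the standard NVIDIA tooling shown below.

```bash
# Lists every GPU with its memory usage and utilization
nvidia-smi

# Optionally refresh the view every 2 seconds while a job is running
watch -n 2 nvidia-smi
```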
284,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## Training Monitoring - Weights and Biases
To ensure everything is progressing smoothly, we’ll log the training metrics to Weights & Biases, allowing us to monitor the metrics in real time on a dashboard.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
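The training scripts in this lesson report to W&B through the `report_to` argument of `TrainingArguments` (and TRL's `log_with` setting), so you need to authenticate once on the training machine. Below is a minimal sketch, assuming your API key is stored in a `WANDB_API_KEY` environment variable; the project and metric names are illustrative.

```python
import os
import wandb

# Authenticate with the key from your W&B account settings page
wandb.login(key=os.environ['WANDB_API_KEY'])

# Optional sanity check: log a dummy metric to a throwaway project
run = wandb.init(project='rlhf-course-sanity-check')
run.log({'dummy_metric': 1.0})
run.finish()
```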
285,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 1. Supervised Fine-Tuning
We thoroughly covered the SFT phase in the previous lessons. If any steps are unclear, refer back to them. In this tutorial, the differences are that we use a distinct dataset, [OpenOrca](https://huggingface.co./datasets/Open-Orca/OpenOrca), and apply the QLoRA method, which we will elaborate on in the subsequent sections. The subset we use comprises 1 million interactions with the language model, extracted from the original [OpenOrca](https://huggingface.co./datasets/Open-Orca/OpenOrca) dataset. Each interaction in this collection consists of a question paired with a corresponding response. This phase aims to familiarize the model with the conversational structure, thereby teaching it to answer questions rather than relying on its standard auto-completion mechanism. Begin the process by installing the necessary libraries. ```bash pip install -q transformers==4.32.0 bitsandbytes==0.41.1 accelerate==0.22.0 deeplake==3.6.19 trl==0.5.0 peft==0.5.0 wandb==0.15.8 ``` ### 1.1. The Dataset The initial phase involves streaming the dataset via Activeloop's performant dataloader to facilitate convenient accessibility. As previously indicated, we employ a subset of the original dataset containing 1 million data points. Nevertheless, the complete dataset (4 million) is accessible at [this URL](https://app.activeloop.ai/genai360/OpenOrca-4M/). ```python import deeplake # Connect to the training and testing datasets ds = deeplake.load('hub://genai360/OpenOrca-1M-train-set') ds_valid = deeplake.load('hub://genai360/OpenOrca-1M-valid-set') print(ds) ``` ```python Dataset(path='hub://genai360/OpenOrca-1M-train-set', read_only=True, tensors=['id', 'question', 'response', 'system_prompt']) ``` The dataset has three significant columns. These encompass `question`, also referred to as prompts, which are the queries we pose to the LLM; `response`, i.e., the model's output or answers to the questions; and finally, `system_prompt`, i.e., the initial directives guiding the model in establishing its context, such as “you are a helpful assistant.” For simplicity, we exclusively utilize the initial two columns. It could also be beneficial to incorporate the system prompts while formatting the text. The template below formats the text in the structure of `Question: xxx\n\nAnswer: yyy`, where the question-and-answer sections are divided by two newline characters. There's room for experimentation with diverse formats, like trying out `System: xxx\n\nQuestion: yyy\n\nAnswer: zzz`, to effectively integrate the system prompts from the dataset. ```python def prepare_sample_text(example): """"""Prepare the text from a sample of the dataset."""""" text = f""Question: {example['question'][0]}\n\nAnswer: {example['response'][0]}"" return text ``` Moving forward, the next step is loading the OPT model tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"") ``` Next, the `ConstantLengthDataset` class will come into play, serving to aggregate samples to maximize utilization within the 2K input size constraint and enhance the efficiency of the training process. 
```python from trl.trainer import ConstantLengthDataset train_dataset = ConstantLengthDataset( tokenizer, ds, formatting_func=prepare_sample_text, infinite=True, seq_length=2048 ) eval_dataset = ConstantLengthDataset( tokenizer, ds_valid, formatting_func=prepare_sample_text, seq_length=1024 ) iterator = iter(train_dataset) sample = next(iterator) print(sample) train_dataset.start_iteration = 0 ``` ```python {'input_ids': tensor([ 16, 358, 828, ..., 137, 79, 362]), 'labels': tensor([ 16, 358, 828, ..., 137, 79, 362])} ``` ### 1.2. Initialize the Model and Trainer Finally, we initialize the model and apply LoRA, which effectively keeps memory requirements low when fine-tuning a large language model. ```python from peft import LoraConfig lora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias=""none"", task_type=""CAUSAL_LM"", ) ``` Now, we instantiate the `TrainingArguments`, which define the hyperparameters governing the training loop. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir=""./OPT-fine_tuned-OpenOrca"", dataloader_drop_last=True, evaluation_strategy=""steps"", save_strategy=""steps"", num_train_epochs=2, eval_steps=2000, save_steps=2000, logging_steps=1, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=1e-4, lr_scheduler_type=""cosine"", warmup_steps=100, gradient_accumulation_steps=1, bf16=True, weight_decay=0.05, ddp_find_unused_parameters=False, run_name=""OPT-fine_tuned-OpenOrca"", report_to=""wandb"", ) ``` Now, we are using the `BitsAndBytes` library to execute the quantization process and load the model in a 4-bit format. We will employ the NF4 data type designed for weights and implement the nested quantization approach, which effectively reduces memory usage with negligible decline in performance. Finally, we indicate that the training process computations should be carried out using the `bfloat16` format. The QLoRA method is a recent approach that combines",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
286,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 1. Supervised Fine-Tuning
the LoRA technique with quantization to reduce memory usage. When loading the model, it's necessary to provide the `quantization_config`. ```python import torch from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type=""nf4"", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) ``` The following code segment utilizes the `AutoModelForCausalLM` class to load the pre-trained weights of the OPT model, which holds 1.3 billion parameters. Note that a GPU is required to load the model in this quantized format. ```python from transformers import AutoModelForCausalLM from accelerate import Accelerator model = AutoModelForCausalLM.from_pretrained( ""facebook/opt-1.3b"", quantization_config=quantization_config, device_map={"""": Accelerator().process_index} ) ``` Prior to initializing the trainer object, we introduce modifications to the model architecture for efficiency. This involves casting specific layers of the model to full precision (32 bits), including LayerNorms and the final language modeling head. ```python from torch import nn for param in model.parameters(): param.requires_grad = False if param.ndim == 1: param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() model.enable_input_require_grads() class CastOutputToFloat(nn.Sequential): def forward(self, x): return super().forward(x).to(torch.float32) model.lm_head = CastOutputToFloat(model.lm_head) ``` Lastly, the `SFTTrainer` class will utilize the initialized dataset and model, in combination with the training arguments and LoRA technique, to start the training process. ```python from trl import SFTTrainer trainer = SFTTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, peft_config=lora_config, packing=True, ) print(""Training..."") trainer.train() ``` The `SFTTrainer` instance will automatically create checkpoints during the training process, as specified by the `save_steps` parameter, and store them in the `./OPT-fine_tuned-OpenOrca` directory. We're required to merge the LoRA layers with the base model to form a standalone network, a procedure outlined earlier. The subsequent section of code will handle the merging process. ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained( ""facebook/opt-1.3b"", return_dict=True, torch_dtype=torch.bfloat16 ) from peft import PeftModel # Load the Lora model model = PeftModel.from_pretrained(model, ""./OPT-fine_tuned-OpenOrca/"") model.eval() model = model.merge_and_unload() model.save_pretrained(""./OPT-fine_tuned-OpenOrca/merged"") ``` The standalone model will be accessible in the `./OPT-fine_tuned-OpenOrca/merged` directory. This checkpoint will come into play in Section 3. 
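As an optional sanity check (not part of the original lesson), you can load the merged checkpoint and confirm that it follows the Question/Answer format it was trained on; the paths follow the merge code above, so adjust them if you saved the model elsewhere.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base tokenizer and the merged SFT checkpoint saved above
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-1.3b')
model = AutoModelForCausalLM.from_pretrained('./OPT-fine_tuned-OpenOrca/merged')

# Prompt in the same Question/Answer template used during fine-tuning
inputs = tokenizer('Question: What is the capital of France?\n\nAnswer: ', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```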
- **Resources** - [Notebook](https://colab.research.google.com/drive/11F43JGjPW78sS0SLPGFKwD8pH71aHfqp?usp=sharing) - The Merged Model Checkpoint (2GB) [OPT-fine_tuned-OpenOrca.zip](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/OPT-fine_tuned-OpenOrca.zip) - [Weights & Bias Report](https://wandb.ai/ala_/GenAI360/runs/n6czwaqq?workspace=user-ala_) - Requirements [requirements-fine-tune.txt](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/requirements-fine-tune.txt) *(The provided file is a snapshot of all the packages on the server; not all of these packages are necessary for you)* ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
287,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 2. Training a Reward Model
The second step is training a reward model, which learns human preferences from labeled samples and guides the LLM during the final step of the RLHF process. This model will be provided with samples of favored behavior compared to not expected or desired behavior. The reward model will learn to imitate human preferences by assigning higher scores to samples that align with those preferences. The reward models essentially perform a classification task, where they select the superior choice from a pair of sample interactions using input from human feedback. Various network types can be used and trained to function as reward models. Conversations are focused on the idea that the reward model should match the size of the base model in order to possess sufficient knowledge for practical guidance. Nonetheless, smaller models like DeBERTa or RoBERTa proved to be effective. Indeed, with enough resources available, experimenting with larger models is great. Both these models need to be loaded in the next phase of RLHF, which is Reinforcement Learning. Begin the process by installing the necessary libraries. (We use a different version of the TRL library just in this subsection.) ```bash pip install -q transformers==4.32.0 deeplake==3.6.19 sentencepiece==0.1.99 trl==0.6.0 ``` ### 2.1. The Dataset We will utilize the ""[helpfulness/harmless](https://github.com/anthropics/hh-rlhf)"" (hh) dataset from Anthropic, specifically curated for the Reinforcement Learning from Human Feedback (RLHF) procedure (you can read more about it [here](https://arxiv.org/abs/2204.05862)). The Activeloop datasets hub provides access to a dataset that allows us to stream the content effortlessly using a single line of code. The subsequent code snippet will establish the data loader object for both the training and validation sets. ```python import deeplake ds = deeplake.load('hub://genai360/Anthropic-hh-rlhf-train-set') ds_valid = deeplake.load('hub://genai360/Anthropic-hh-rlhf-test-set') print(ds) ``` ```python Dataset(path='hub://genai360/Anthropic-hh-rlhf-train-set', read_only=True, tensors=['chosen', 'rejected']) ``` Moving forward, we need to structure the dataset appropriately for the Trainer class. However, before that, let's load the pretrained tokenizer for the `DeBERTa` model we will use as the reward model. The code should be recognizable; the `AutoTokenizer` class will locate the suitable initializer class and utilize the `.from_pretrained()` method to load the pretrained tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""microsoft/deberta-v3-base"") ``` As presented in earlier lessons, the `Dataset` class in PyTorch is in charge of formatting the dataset for various downstream tasks. A pair of inputs is necessary to train a reward model. The first item will denote the chosen (favorable) conversation, while the second will represent a conversation rejected by labelers, which we aim to prevent the model from replicating. The concept revolves around the reward model, which assigns a higher score to the chosen sample while assigning lower rankings to the rejected ones. The code snippet below initially tokenizes the samples and then aggregates the pairs into a single Python dictionary. 
```python from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, dataset): self.dataset = dataset def __len__(self): return len(self.dataset) def __getitem__(self, idx): chosen = self.dataset.chosen[idx].text() rejected = self.dataset.rejected[idx].text() tokenized_chosen = tokenizer(chosen, truncation=True, max_length=max_length, padding='max_length') tokenized_rejected = tokenizer(rejected, truncation=True, max_length=max_length, padding='max_length') formatted_input = { ""input_ids_chosen"": tokenized_chosen[""input_ids""], ""attention_mask_chosen"": tokenized_chosen[""attention_mask""], ""input_ids_rejected"": tokenized_rejected[""input_ids""], ""attention_mask_rejected"": tokenized_rejected[""attention_mask""], } return formatted_input ``` The `Trainer` class anticipates receiving a dictionary containing four keys. This includes the tokenized forms for both chosen and rejected conversations (`input_ids_chosen` and `input_ids_rejected`) and their respective attention masks (`attention_mask_chosen` and `attention_mask_rejected`). As we employ a padding token to standardize input sizes (up to the model's maximum input size, 512",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
288,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 2. Training a Reward Model
in this case), it's important to inform the model that certain tokens at the end don't contain meaningful information and can be disregarded. This is why attention masks are important. We can create a dataset instance using the previously defined class. Additionally, we can extract a single row from the dataset using the `iter` and `next` methods to verify the output keys and confirm that everything functions as intended. ```python train_dataset = MyDataset(ds) eval_dataset = MyDataset(ds_valid) # Print one sample row iterator = iter(train_dataset) one_sample = next(iterator) print(list(one_sample.keys())) ``` ```python ['input_ids_chosen', 'attention_mask_chosen', 'input_ids_rejected', 'attention_mask_rejected'] ``` ### 2.2. Initialize the Model and Trainer The next steps are quite straightforward. We begin by loading the pretrained DeBERTa model using `AutoModelForSequenceClassification`, as our aim is to employ the network for a classification task. Specifying the number of labels (`num_labels`) as 1 is equally important, as we only require a single score to assess a sequence's quality. This score can indicate whether the content is aligned, receiving a high score, or, if it's unsuitable, receiving a low score. ```python from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained( ""microsoft/deberta-v3-base"", num_labels=1 ) ``` Then, we can create an instance of `TrainingArguments`, setting the hyperparameters we intend to utilize. There is flexibility to explore various hyperparameters based on the selection of pre-trained models and available resources. For example, if an Out of Memory (OOM) error is encountered, a smaller batch size might be needed. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir=""DeBERTa-reward-hh_rlhf"", learning_rate=2e-5, per_device_train_batch_size=24, per_device_eval_batch_size=24, num_train_epochs=20, weight_decay=0.001, evaluation_strategy=""steps"", eval_steps=500, save_strategy=""steps"", save_steps=500, gradient_accumulation_steps=1, bf16=True, logging_strategy=""steps"", logging_steps=1, optim=""adamw_hf"", lr_scheduler_type=""linear"", ddp_find_unused_parameters=False, run_name=""DeBERTa-reward-hh_rlhf"", report_to=""wandb"", ) ``` Finally, the `RewardTrainer` class from the TRL library will tie everything together and execute the training loop. It's essential to provide the previously defined variables, such as the model, tokenizer, and dataset. ```python from trl import RewardTrainer trainer = RewardTrainer( model=model, tokenizer=tokenizer, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, max_length=max_length ) trainer.train() ``` The `trainer` will automatically save the checkpoints, which can be utilized in the next and final steps. 
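One note on the snippets above: both the `MyDataset` class and the `RewardTrainer` call reference a `max_length` variable that is not defined in the excerpts shown. Define it before building the datasets; a minimal definition consistent with the 512-token maximum input size mentioned above would be:

```python
# Maximum sequence length used for padding/truncation and passed to the RewardTrainer
max_length = 512
```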
- **Resources** - [Notebook](https://colab.research.google.com/drive/1Q68QhZ6sH8x5ic6fkXF-QG-NC4tLWCL4?usp=sharing) - The Reward Model Checkpoint (Step 1000 - 2GB) [DeBERTa-reward-hh_rlhf.zip](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/DeBERTa-reward-hh_rlhf.zip) - [Weights and Biases report](https://wandb.ai/ala_/GenAI360/runs/tqamj3nw?workspace=user-ala_) - Requirements [requirements-reward.txt](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/requirements-reward.txt) *(The provided file is a snapshot of all the packages on the server; not all of these packages are necessary for you)* ---",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
289,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 3. Reinforcement Learning (RL)
The final step of RLHF! This section will integrate the models we trained earlier. Specifically, we will utilize the previously trained reward model to further align the fine-tuned model with human feedback. In the training loop, a custom prompt will be employed to generate a response from the fine-tuned OPT. The reward model will then assign a score based on how closely the response resembles a hypothetical human-generated output. Within the RL process, mechanisms are also in place to ensure that the model retains its acquired knowledge and doesn't deviate too far from the original model's foundation. We will proceed by introducing the dataset, followed by a more detailed examination of the process in the subsequent subsections. Begin the process by installing the necessary libraries. ```bash pip install -q transformers==4.32.0 accelerate==0.22.0 peft==0.5.0 trl==0.5.0 bitsandbytes==0.41.1 deeplake==3.6.19 wandb==0.15.8 sentencepiece==0.1.99 ``` ### 3.1. The Dataset Given that the process falls under the realm of unsupervised learning, we have the flexibility to choose the dataset for this step. Since the reward model assesses the output without relying on a label, the learning process does not require a question-answer pair. In this section, we will employ Alpaca's OpenOrca dataset, a subset of the [OpenOrca](https://huggingface.co./datasets/Open-Orca/OpenOrca) dataset. ```python import deeplake # Connect to the training and testing datasets ds = deeplake.load('hub://genai360/Alpaca-OrcaChat') print(ds) ``` ```python Dataset(path='hub://genai360/Alpaca-OrcaChat', read_only=True, tensors=['input', 'instruction', 'output']) ``` The dataset comprises three columns: `input`, which denotes the user's prompt to the model; `instruction`, which represents the model's directive; and `output`, which holds the model's response. We only need to use the `input` feature for the RL process. Before defining a dataset class for proper formatting, it's necessary to load the pre-trained tokenizer corresponding to the fine-tuned model in the first section. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"", padding_side='left') ``` In the subsequent subsection, the trainer requires both the query and its tokenized variant. Thus, the `query` will remain in text format, whereas the `input_ids` will represent the token IDs. The dataset class format should be recognizable by now. An important point to highlight is that the `query` variable acts as a template for crafting user prompts, structured as follows: `Question: XXX\n\nAnswer:` in alignment with the format employed during the supervised fine-tuning (SFT) step. ```python from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, ds): self.ds = ds def __len__(self): return len(self.ds) def __getitem__(self, idx): query = ""Question: "" + self.ds.input[idx].text() + ""\n\nAnswer: "" tokenized_question = tokenizer(query, truncation=True, max_length=400, padding='max_length', return_tensors=""pt"") formatted_input = { ""query"": query, ""input_ids"": tokenized_question[""input_ids""][0], } return formatted_input # Define the dataset object myTrainingLoader = MyDataset(ds) ``` Additionally, we must establish a collator function responsible for transforming individual samples from the data loader into data batches. This function will later be passed to the Trainer class. ```python def collator(data): return dict((key, [d[key] for d in data]) for key in data[0]) ``` ### 3.2. 
Initialize the SFT Models In this section, we are required to load two models. To begin, let's initiate the process by loading the fine-tuned model, referred to as `OPT-supervised_fine_tuned` in section 1, utilizing the configuration provided by the PPOConfig class. The majority of the parameters have been elaborated on in earlier lessons, except `adapt_kl_ctrl` and `init_kl_coef`. These arguments will be used to control the KL divergence penalty to ensure the model doesn't stray significantly from the pre-trained model. Otherwise, it runs the risk of generating nonsensical sentences. ```python from trl import PPOConfig config = PPOConfig( task_name=""OPT-RL-OrcaChat"", steps=10_000, model_name=""./OPT-fine_tuned-OpenOrca/merged"", learning_rate=1.41e-5, batch_size=32, mini_batch_size=4, gradient_accumulation_steps=1, optimize_cuda_cache=True, early_stopping=False, target_kl=0.1, ppo_epochs=4, seed=0, init_kl_coef=0.2, adap_kl_ctrl=True, tracker_project_name=""GenAI360"", log_with=""wandb"", ) ``` We also need to use the `set_seed()` function to set the random state for reproducibility, and",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
290,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 3. Reinforcement Learning (RL)
the `current_device` variable will store your current device id and will be used later in the code. ```python from trl import set_seed from accelerate import Accelerator # set seed before initializing value head for deterministic eval set_seed(config.seed) # Now let's build the model, the reference model, and the tokenizer. current_device = Accelerator().local_process_index ``` The next three code blocks are used to load the supervised fine-tuned (SFT) model. It starts by setting the LoRA details that keep the fine-tuning process lightweight. ```python from peft import LoraConfig lora_config = LoraConfig( r=16, lora_alpha=32, lora_dropout=0.05, bias=""none"", task_type=""CAUSAL_LM"", ) ``` The LoRA config can then be used alongside the `AutoModelForCausalLMWithValueHead` class to load the pre-trained weights. We utilize the `load_in_8bit` argument to load the model, employing a quantization technique that reduces weight precision. This helps conserve memory during model training. This model is intended for utilization within the RL loop. ```python from trl import AutoModelForCausalLMWithValueHead model = AutoModelForCausalLMWithValueHead.from_pretrained( config.model_name, load_in_8bit=True, device_map={"""": current_device}, peft_config=lora_config, ) ``` ### 3.3. Initialize the Reward Models Utilizing the Hugging Face pipeline feature makes loading the reward model straightforward. We need to define the task we are undertaking. In this case, we selected `sentiment-analysis`, as our fundamental objective revolves around binary classification. Furthermore, it is essential to indicate the path to the pre-trained reward model from the previous section using the `model` parameter (adjust the path so it matches the checkpoint directory you saved in section 2). Alternatively, you could use a model name from the Hugging Face Hub if a pre-trained reward model is available there. The pipeline will automatically load the appropriate tokenizer, and we can initiate classification by providing any text to the defined object. ```python from transformers import pipeline import torch reward_pipeline = pipeline( ""sentiment-analysis"", model=""./DeBERTa-v3-base-reward-hh_rlhf/checkpoint-1000"", tokenizer=""./DeBERTa-v3-base-reward-hh_rlhf/checkpoint-1000"", device_map={"""": current_device}, model_kwargs={""load_in_8bit"": True}, return_token_type_ids=False, ) ``` The `reward_pipeline` variable containing the reward model will be employed within the reinforcement learning (RL) training loop. ### 3.4. PPO Training The final stage involves employing Proximal Policy Optimization (PPO) to enhance the stability of the training loop. It will restrict alterations to the model by preventing excessively large updates. Empirical findings indicate that making small, controlled updates accelerates the convergence of the training process; this is one of the intuitions behind PPO. Before beginning the actual training loop, certain variables need to be defined for integration within the loop. Initially, we establish the `output_length_sampler` object, which draws samples from a specified range spanning from a defined minimum to a maximum number. We want the outputs to be in the range of 32 to 400 tokens, as configured below. ```python from trl.core import LengthSampler output_length_sampler = LengthSampler(32, 400) #(OutputMinLength, OutputMaxLength) ``` We need to define two sets of dictionaries that will control the generation process for fine-tuned and reward models. 
We have the ability to configure arguments that oversee the sampling procedure, truncation, and batch size for each respective network during inference. The code block was concluded by setting the `save_freq` variable, which determines the interval for checkpoint preservation. ```python sft_gen_kwargs = { ""top_k"": 0.0, ""top_p"": 1.0, ""do_sample"": True, ""pad_token_id"": tokenizer.pad_token_id, ""eos_token_id"": 100_000, } reward_gen_kwargs = { ""top_k"": None, ""function_to_apply"": ""none"", ""batch_size"": 16, ""truncation"": True, ""max_length"": 400 } save_freq = 50 ``` The final action before the actual training loop involves the instantiation of the PPO trainer object. The `PPOTrainer` class will take as input an instance of `PPOConfig`, which we defined earlier, the directory of the fine-tuned model from section 1, and the training dataset. It's worth noting that we have the option to provide a reference model using the `ref_model` parameter, which will serve as the guide for the KL divergence penalty. In cases where this parameter is not specified, the trainer will default to using the",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
291,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## 3. Reinforcement Learning (RL)
original pre-trained model as the reference. ```python from trl import PPOTrainer ppo_trainer = PPOTrainer( config, model, tokenizer=tokenizer, dataset=myTrainingLoader, data_collator=collator ) ``` Now, we can proceed to the last component, which is the training loop. The process begins with obtaining a single batch of samples and utilizing the `input_ids`, which are the formatted and tokenized prompts (refer to section 3.1) to generate responses using the fine-tuned model. Subsequently, these responses are decoded and combined with the prompt before being fed to the reward model. This allows the reward model to assess their proximity to a human-generated response by assigning scores. Finally, the PPO object will adjust the model based on the scores by the reward model. ```python from tqdm import tqdm tqdm.pandas() for step, batch in tqdm(enumerate(ppo_trainer.dataloader)): if step >= config.total_ppo_epochs: break question_tensors = batch[""input_ids""] response_tensors = ppo_trainer.generate( question_tensors, return_prompt=False, length_sampler=output_length_sampler, **sft_gen_kwargs, ) batch[""response""] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True) # Compute reward score texts = [q + r for q, r in zip(batch[""query""], batch[""response""])] pipe_outputs = reward_pipeline(texts, **reward_gen_kwargs) rewards = [torch.tensor(output[0][""score""]) for output in pipe_outputs] # Run PPO step stats = ppo_trainer.step(question_tensors, response_tensors, rewards) ppo_trainer.log_stats(stats, batch, rewards) if save_freq and step and step % save_freq == 0: print(""Saving checkpoint."") ppo_trainer.save_pretrained(f""./OPT-RL-OrcaChat/checkpoint-{step}"") ``` Remember to merge the LoRA adaptors with the base model to ensure that the network can be utilized independently as a standalone model in the future. Simply ensure to modify the directory of the saved checkpoint adaptor according to the results. ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained( ""facebook/opt-1.3b"", return_dict=True, torch_dtype=torch.bfloat16 ) from peft import PeftModel # Load the Lora model model = PeftModel.from_pretrained(model, ""./OPT-RL-OrcaChat/checkpoint-400/"") model.eval(); model = model.merge_and_unload() model.save_pretrained(""./OPT-RL-OrcaChat/merged"") ``` - **Resources** - [Notebook](https://colab.research.google.com/drive/1EdruSNU6myN2cpvPLIiXMG-J0JeAzh-c?usp=sharing) - The Merged RL Model Checkpoint (2GB) [OPT-RL-OrcaChat.zip](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/OPT-RL-OrcaChat.zip) - [Weights and Biases report](https://wandb.ai/ala_/GenAI360/runs/e9y58bdi?workspace=user-ala_) - Requirements [requirements-rl.txt](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/requirements-rl.txt) *(The provided file is a snapshot of all the packages on the server; not all of these packages are necessary for you)*",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
292,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## QLoRA
Earlier, we employed an argument named `load_in_8bit` during the loading of the base model. This quantization technique significantly reduces the memory requirement when loading large models. A 32-bit floating-point format was utilized for model training in the early stages of neural network development. This entailed the representation of each weight using 32 bits, requiring 4 bytes for storage per weight. Researchers developed diverse methods to mitigate this constraint with the growth of models and the escalating memory requirements. This led to the utilization of lower-precision values for the loading model. Employing an 8-bit representation for numbers reduces the storage requirement to a mere 1 byte. In more recent times, an additional advancement allows for models to be loaded in a 4-bit format, further reducing memory demands. It is possible to use the BitsAndBytes library while loading a pre-trained model, as the following code presents. ```python from transformers import AutoModelForCausalLM, BitsAndBytesConfig import torch model = AutoModelForCausalLM.from_pretrained( model_name_or_path='/name/or/path/to/your/model', load_in_4bit=True, device_map='auto', torch_dtype=torch.bfloat16, quantization_config=BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type='nf4' ), ) ``` It's crucial to remember that this approach relates exclusively to storing model weights and does not affect the training process. Additionally, there's a constant balance to strike between employing lower-precision numbers and potentially compromising the language comprehension capabilities of models. While this trade-off is justified in many instances, it's important to acknowledge its presence. ",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
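To make the savings concrete, here is a back-of-the-envelope sketch of the weight-storage footprint of the 1.3B-parameter OPT model used in this lesson at different precisions; it ignores activations, optimizer state, and quantization overhead.

```python
# Rough memory needed just to store the weights of a 1.3B-parameter model
n_params = 1.3e9
for bits in (32, 16, 8, 4):
    gigabytes = n_params * bits / 8 / 1e9
    print(f'{bits:>2}-bit: ~{gigabytes:.2f} GB')

# 32-bit: ~5.20 GB, 16-bit: ~2.60 GB, 8-bit: ~1.30 GB, 4-bit: ~0.65 GB
```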
293,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## Inference
We can evaluate the fine-tuned model’s outputs by employing various prompts. The code below demonstrates how we can utilize Huggingface's `.generate()` method to interact with models effortlessly. The initial stage involves loading the tokenizer and the model, followed by decoding the output generated by the model. We employ the beam search decoding approach with a limitation to generate a maximum of 128 tokens. (Explore these techniques further in the in-depth [blog post](https://huggingface.co./blog/how-to-generate) by Huggingface.) ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(""facebook/opt-1.3b"") from transformers import AutoModelForCausalLM from accelerate import Accelerator model = AutoModelForCausalLM.from_pretrained( ""./OPT-RL-OrcaChat/merged"", device_map={"""": Accelerator().process_index} ) model.eval(); inputs = tokenizer(""Question: In one sentence, describe what the following article is about:\n\nClick on “Store” along the menu toolbar at the upper left of the screen. Click on “Sign In” from the drop-down menu and enter your Apple ID and password. After logging in, click on “Store” on the toolbar again and select “View Account” from the drop-down menu. This will open the Account Information page. Click on the drop-down list and select the country you want to change your iTunes Store to. You’ll now be directed to the iTunes Store welcome page. Review the Terms and Conditions Agreement and click on “Agree” if you wish to proceed. Click on “Continue” once you’re done to complete changing your iTunes Store..\n\n Answer: "", return_tensors=""pt"").to(""cuda:0"") generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_new_tokens=128, num_beams=4, do_sample=True, top_k=10, temperature=0.6) print( tokenizer.decode(generation_output['sequences'][0]) ) ``` The following entries represent the outputs generated by the model using various prompts. - In one sentence, describe what the following article is about… ![Screenshot 2023-08-29 at 1.20.56 PM.png](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/Screenshot_2023-08-29_at_1.20.56_PM.png) - Answer the following question given in this paragraph… ![Screenshot 2023-08-29 at 1.21.11 PM.png](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/Screenshot_2023-08-29_at_1.21.11_PM.png) - What the following paragraph is about?… ![Screenshot 2023-08-29 at 1.21.24 PM.png](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/Screenshot_2023-08-29_at_1.21.24_PM.png) - What the following paragraph is about?… (2) ![Screenshot 2023-08-29 at 1.21.43 PM.png](Improving%20Trained%20Models%20with%20RLHF%2099612f3da4824511b51f7d46fc167155/Screenshot_2023-08-29_at_1.21.43_PM.png) As evidenced by the examples, the model displays the ability to follow instructions and extract information from lengthy content. However, it falls short in terms of answering open-ended questions such as ""Explain the raining process?” This is primarily attributed to the model's smaller size, which entails fewer parameters, approximately ranging from 30x to 70x less than state-of-the-art models.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
294,Improving Trained Models with RLHF,"# Improving Trained Models with RLHF
## Conclusion
This lesson walked through the three essential stages of the Reinforcement Learning from Human Feedback (RLHF) process. It started by revisiting the Supervised Fine-Tuning (SFT) process, then proceeded with the training of a reward model, and finally concluded with the reinforcement learning phase. We explored and applied methods such as 4-bit quantization and LoRA to make the fine-tuning procedure feasible with fewer resources. In the upcoming chapter, we will introduce the procedure of deploying models and using them in a production environment.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48960067-improving-trained-models-with-rlhf
295,Controlling LLM Outputs,"# Controlling LLM Outputs
## Introduction
In this lesson, we will look into various methods and parameters that can be used to control the outputs of Large Language Models. We will discuss different decoding strategies and how they influence the generation process. We will also explore how certain parameters can be adjusted to fine-tune the output.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954242-controlling-llm-outputs
296,Controlling LLM Outputs,"# Controlling LLM Outputs
## Decoding Methods
Decoding methods are the fundamental strategies LLMs use to generate text, and each has its own advantages and limitations. At each decoding step, the LLM assigns a score to every token in its vocabulary. A high score corresponds to a high probability of that token being the next token, according to the patterns the model learned during training. However, is the token with the highest probability always the best token to predict? By picking the highest-probability token at step 1, the model may find only low-probability tokens available at step 2, giving the two consecutive tokens a low joint probability. Picking a slightly lower-probability token at step 1 may instead lead to a high-probability token at step 2, and thus a higher overall joint probability. Ideally, we’d perform this computation over all the tokens in the model vocabulary and for a large number of steps, but that can’t be done in practice because it would require heavy computation. All the decoding methods in this lesson try to find the right balance between:

- Being “greedy” and instantly selecting the next token with the highest probability.
- A bit of exploration, looking ahead at several candidate continuations before committing.

### Greedy Search
Greedy Search is the simplest of all the decoding methods. With Greedy Search, the model selects the token with the highest probability as its next output token. While this method is computationally efficient, it can often result in repetitive or less optimal responses because it focuses on the immediate best choice rather than long-term outcomes.

### Sampling
Sampling introduces randomness into the text generation process: the model randomly selects the next token according to its probability distribution. This method allows for more diverse and varied output but can sometimes produce less coherent or logical text.

### Beam Search
Beam Search is a more sophisticated method. At each step, it keeps the top N (with N being a parameter) candidate sequences with the highest probabilities, up to a certain number of steps. In the end, the model generates the sequence of tokens (i.e., the beam) with the highest joint probability. This significantly reduces the search space and produces more consistent results. However, this method can be slower and still lead to suboptimal outputs, as it can miss a high-probability word hidden behind a low-probability one.

### Top-K Sampling
Top-K Sampling is a variant of the sampling method in which the model narrows the sampling pool down to the K (with K being a parameter) most probable tokens. This method provides a balance between diversity and relevance by limiting the sampling space, thus offering more control over the generated text.

### Top-p (Nucleus) Sampling
Top-p, or Nucleus Sampling, selects tokens from the smallest possible set whose cumulative probability exceeds a threshold P (with P being a parameter). This method offers fine-grained control and avoids the inclusion of rare or low-probability tokens. However, the dynamically determined shortlist size can sometimes be a limitation.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954242-controlling-llm-outputs
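To make these decoding strategies concrete, here is a minimal sketch (not from the original lesson) that compares them using the Hugging Face `transformers` `generate()` method. The model checkpoint and prompt are arbitrary choices for illustration; any causal language model from the hub would work, and smaller checkpoints are fine for experimentation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The future of AI is", return_tensors="pt")

# Greedy search: always pick the single most probable next token.
greedy = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Beam search: keep the 4 most probable partial sequences (beams) at each step.
beam = model.generate(**inputs, max_new_tokens=40, num_beams=4, do_sample=False)

# Top-K sampling: sample only from the 50 most probable tokens at each step.
top_k = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)

# Top-p (nucleus) sampling: sample from the smallest token set whose
# cumulative probability exceeds 0.9 (top_k=0 disables top-K filtering).
top_p = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9, top_k=0)

for output in (greedy, beam, top_k, top_p):
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Running the same prompt through all four calls makes the differences easy to inspect: greedy and beam search are deterministic for a fixed model, while the two sampling variants change from run to run unless a random seed is set.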
297,Controlling LLM Outputs,"# Controlling LLM Outputs
## Parameters That Influence Text Generation
Apart from the decoding methods, several parameters can be adjusted to influence text generation with LLMs. These include temperature, stop sequences, and frequency and presence penalties. These parameters can be adjusted with the most popular LLM APIs and Hugging Face models.

### Temperature
The temperature parameter influences the randomness or determinism of the generated text. A lower value makes the output more deterministic and focused, while a higher value increases the randomness, leading to more diverse outputs. It controls the randomness of predictions by scaling the logits before applying softmax during the text generation process, and it is a crucial factor in the trade-off between diversity and quality of the generated text. Here's a more technical explanation:
1. **Logits**: When a language model makes a prediction, it produces a vector of logits, one for each possible next token. These logits represent the raw, unnormalized prediction scores for each token.
2. **Softmax**: The softmax function is applied to these logits to convert them into probabilities and ensures that the probabilities sum to 1.
3. **Temperature**: The temperature parameter controls the randomness of the model's output by dividing the logits by the temperature value before the softmax step.
    - **High temperature (e.g., > 1)**: The logits are scaled down, which makes the softmax output more uniform. The model is more likely to pick less likely tokens, resulting in more diverse and “creative” outputs, but potentially with more mistakes or nonsensical phrases.
    - **Low temperature (e.g., < 1)**: The logits are scaled up, which makes the softmax output more peaked. The model is more likely to pick the most likely token, so the output will be more focused and conservative, sticking closer to the most probable outputs but potentially being less diverse.
    - **Temperature = 1**: The logits are not scaled, preserving the original probabilities. This is a kind of “neutral” setting.

In summary, the temperature parameter is a knob for controlling the trade-off between diversity (high temperature) and accuracy (low temperature) in the generated text.

### Stop Sequences
Stop sequences are specific sets of character sequences that halt the text generation process once they appear in the output. They offer a way to guide the length and structure of the generated text, providing a form of control over the output.

### Frequency and Presence Penalties
Frequency and presence penalties are used to discourage or encourage the repetition of certain tokens in the generated text. A frequency penalty reduces the likelihood of the model repeating tokens that have already appeared frequently, while a presence penalty discourages the model from repeating any token that has already appeared in the generated text.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954242-controlling-llm-outputs
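To see the mechanics described above in action, the short sketch below (plain NumPy with made-up logits, not from the original lesson) divides the logits by a temperature before the softmax and prints how the resulting distribution sharpens or flattens.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Divide the logits by the temperature, then apply a numerically stable softmax."""
    scaled = np.array(logits, dtype=np.float64) / temperature
    exps = np.exp(scaled - np.max(scaled))  # subtract the max for numerical stability
    return exps / exps.sum()

# Hypothetical raw scores for four candidate next tokens.
logits = [4.0, 3.0, 2.0, 1.0]

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, temperature=t)
    print(f"T={t}: {np.round(probs, 3)}")

# T=0.5 -> peaked distribution (more deterministic picks)
# T=1.0 -> the original, unscaled probabilities
# T=2.0 -> flatter distribution (more diverse, riskier sampling)
```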
298,Controlling LLM Outputs,"# Controlling LLM Outputs
## Conclusion
This lesson provided an overview of the various decoding methods and parameters that can be used to control the outputs of Large Language Models. We've explored decoding strategies such as Greedy Search, Sampling, Beam Search, Top-K Sampling, and Top-p (Nucleus) Sampling, each with its own way of balancing greedy selection of the most probable token against exploration and output diversity. We've also discussed parameters like temperature, stop sequences, and frequency and presence penalties, which offer additional control over text generation. Adjusting these parameters helps guide the model toward the desired results, whether deterministic, focused outputs or more diverse, creative ones.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954242-controlling-llm-outputs
299,Overview of the Training Process,"# Overview of the Training Process
## Introduction
In this lesson, we will provide an overview of the multiple steps involved in training LLMs, including preprocessing the training data, the model architecture, and the training process. By the end of this lesson, you will have a solid understanding of the various steps involved in training large language models. The training process begins with the selection of one or a combination of suitable datasets, proceeds with the initialization of the neural network, and ultimately concludes with the execution of the training loop. We will also discuss the process of saving the weights for future utilization. Although this process may seem challenging, we will break it down into steps and approach each one separately to aid your understanding of the intricacies involved.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954276-overview-of-the-training-process
300,Overview of the Training Process,"# Overview of the Training Process
## The Dataset
Whether you are training a general LLM or a specialized one for a specific domain, curating a comprehensive dataset containing relevant information is the most crucial step. With the transformer now firmly established as the go-to architecture in neural networks and Natural Language Processing, the size and quality of the dataset have become the main levers of a model's performance.

There are several well-known datasets you can use as a source of public knowledge. For instance, consider datasets like [The Pile](https://pile.eleuther.ai/), [Common Crawl](https://commoncrawl.org/), or [Wikipedia](https://dumps.wikimedia.org/), which contain extensive collections of web pages, articles, or books. Each of these datasets comprises hundreds of billions of tokens, providing diverse learning material for the model. These datasets are mostly available publicly through different sources. We prepared a Deep Lake repository containing several datasets that we’ll use in this course; find it [here](https://app.activeloop.ai/datasets/mydatasets/).

The next category of datasets matters only if you are training a model for a specific use case based on the data your organization has at hand or has curated. Note that the size of your dataset may vary depending on your application and on whether you opt for fine-tuning or training from scratch. The data can be obtained through web scraping of news websites, forums, or publicly accessible databases, in addition to leveraging your own private knowledge base. It’s also possible to use a foundational LLM to generate a synthetic dataset of your own for training a specialized domain LLM, which may be less expensive and faster than the big foundational model.

Splitting the dataset into training and validation sets is standard practice. The training set is used during training to optimize the model's parameters. The validation set, on the other hand, is used to assess the model's performance and ensure it is not overfitting by evaluating its generalization ability.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954276-overview-of-the-training-process
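As a small illustration of the training/validation split mentioned above, here is a hedged sketch using the Hugging Face `datasets` library; the corpus (WikiText-2) and the 5% hold-out ratio are arbitrary example choices rather than the course's prescribed setup.

```python
from datasets import load_dataset

# Load a public text corpus (chosen here purely as an example).
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Hold out 5% of the examples as a validation set to monitor overfitting.
splits = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, validation_ds = splits["train"], splits["test"]

print(f"Training examples:   {len(train_ds)}")
print(f"Validation examples: {len(validation_ds)}")
```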
301,Overview of the Training Process,"# Overview of the Training Process
## The Model
The transformer has been the dominant network architecture for natural language processing tasks in recent years. It is powered by the attention mechanism, which enables models to accurately capture the relationships between words. This architecture has achieved state-of-the-art scores on numerous NLP tasks over the years and powers well-known LLMs like the GPT family. The literature makes it evident that increasing the number of parameters in transformer-based networks improves language generation and comprehension abilities. With the widespread adoption of transformers, you can use libraries such as TensorFlow, PyTorch, and Huggingface to initialize the architecture. Alternatively, you can code it yourself by following one of the many available tutorials to get a more in-depth understanding. One of the benefits of using the transformers library developed by Huggingface is the availability of their hub, which simplifies the process of loading open-source LLMs such as [Bloom](https://huggingface.co./bigscience/bloom) or [OpenAssistant](https://huggingface.co./OpenAssistant).",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954276-overview-of-the-training-process
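To show how the Huggingface hub simplifies loading an open-source LLM, here is a minimal sketch; the smaller `bigscience/bloom-560m` checkpoint is used only to keep the example lightweight, and larger models such as the full Bloom load the same way (with far greater hardware requirements).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small Bloom checkpoint from the hub; larger checkpoints load the same way.
model_name = "bigscience/bloom-560m"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The architecture details (layers, hidden size, attention heads) ship with the checkpoint.
print(model.config)
```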
302,Overview of the Training Process,"# Overview of the Training Process
## Training
The first generation of foundational models, like BERT, was trained with the Masked Language Modeling (MLM) objective: words are randomly masked in the corpus, and the model is configured to predict the masked word. With this objective, the model learns to consider the contextual information both before and after the masked word, enabling it to make informed predictions. However, this objective is not the most suitable choice for generative tasks, since ideally the model should not have access to future words while predicting the current word.

The GPT family of models used the autoregressive learning objective instead. This approach ensures that the model consistently attempts to predict the next word without access to the future content of the corpus. The process is iterative: the tokens generated so far are fed back to the model to predict the next word, and masked attention ensures that, at each time step, the model cannot see future words.

To train or fine-tune models, you can either implement the training loop yourself with libraries such as PyTorch or use the `Trainer` class provided by Huggingface. The latter option makes it easy to configure different hyperparameters, log metrics, save checkpoints, and evaluate the model.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954276-overview-of-the-training-process
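Below is a hedged sketch of what fine-tuning with Huggingface's `Trainer` class can look like for the autoregressive (causal language modeling) objective; the model checkpoint, dataset, and hyperparameters are placeholders for illustration rather than the course's exact recipe. Setting `mlm=True` in the data collator would switch to the BERT-style masked language modeling objective instead.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a small public corpus (example choice) and drop empty lines.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda example: len(example["input_ids"]) > 0)

# mlm=False selects the autoregressive (next-token prediction) objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./clm-checkpoints",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    save_strategy="epoch",
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("./clm-final")  # save the weights for later use
```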
303,Overview of the Training Process,"# Overview of the Training Process
## Conclusion
The training process often involves considerable trial and error to achieve optimal results. Using libraries can significantly expedite training and save time by eliminating the need to implement various mechanisms manually. The model's capability is influenced by various factors, including its size, the size of the dataset, and the chosen hyperparameters, which collectively contribute to the complexity of the process. In upcoming lessons, we will explore each step of the training process in more detail.",llm_course,https://learn.activeloop.ai/courses/take/llms/multimedia/48954276-overview-of-the-training-process