{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: langchain in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.3.0)\n", "Requirement already satisfied: langchain-core in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.3.2)\n", "Requirement already satisfied: langchain-community in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.3.0)\n", "Requirement already satisfied: langchain-experimental in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.3.0)\n", "Requirement already satisfied: langchain-qdrant in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.1.4)\n", "Requirement already satisfied: qdrant-client in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (1.11.2)\n", "Requirement already satisfied: tiktoken in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (0.7.0)\n", "Requirement already satisfied: pymupdf in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (1.24.10)\n", "Requirement already satisfied: PyYAML>=5.3 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (6.0.1)\n", "Requirement already satisfied: SQLAlchemy<3,>=1.4 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (2.0.32)\n", "Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (3.9.5)\n", "Requirement already satisfied: langchain-text-splitters<0.4.0,>=0.3.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (0.3.0)\n", "Requirement already satisfied: langsmith<0.2.0,>=0.1.17 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (0.1.120)\n", "Requirement already satisfied: numpy<2,>=1 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (1.26.4)\n", "Requirement already satisfied: pydantic<3.0.0,>=2.7.4 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (2.8.2)\n", "Requirement already satisfied: requests<3,>=2 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (2.32.3)\n", "Requirement already satisfied: tenacity!=8.4.0,<9.0.0,>=8.1.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain) (8.5.0)\n", "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain-core) (1.33)\n", "Requirement already satisfied: packaging<25,>=23.2 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain-core) (23.2)\n", "Requirement already satisfied: typing-extensions>=4.7 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain-core) (4.11.0)\n", "Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain-community) (0.6.7)\n", "Requirement already satisfied: pydantic-settings<3.0.0,>=2.4.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langchain-community) (2.5.2)\n", "Requirement already satisfied: grpcio>=1.41.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from qdrant-client) (1.66.0)\n", "Requirement already satisfied: grpcio-tools>=1.41.0 in 
/opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from qdrant-client) (1.66.0)\n", "Requirement already satisfied: httpx>=0.20.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx[http2]>=0.20.0->qdrant-client) (0.27.2)\n", "Requirement already satisfied: portalocker<3.0.0,>=2.7.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from qdrant-client) (2.10.1)\n", "Requirement already satisfied: urllib3<3,>=1.26.14 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from qdrant-client) (2.2.1)\n", "Requirement already satisfied: regex>=2022.1.18 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from tiktoken) (2024.5.15)\n", "Requirement already satisfied: PyMuPDFb==1.24.10 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from pymupdf) (1.24.10)\n", "Requirement already satisfied: aiosignal>=1.1.2 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.3.1)\n", "Requirement already satisfied: attrs>=17.3.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (23.2.0)\n", "Requirement already satisfied: frozenlist>=1.1.1 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.4.1)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (6.0.5)\n", "Requirement already satisfied: yarl<2.0,>=1.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.9.4)\n", "Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain-community) (3.21.2)\n", "Requirement already satisfied: typing-inspect<1,>=0.4.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain-community) (0.9.0)\n", "Requirement already satisfied: protobuf<6.0dev,>=5.26.1 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from grpcio-tools>=1.41.0->qdrant-client) (5.27.3)\n", "Requirement already satisfied: setuptools in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from grpcio-tools>=1.41.0->qdrant-client) (75.1.0)\n", "Requirement already satisfied: anyio in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (3.7.1)\n", "Requirement already satisfied: certifi in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (2024.8.30)\n", "Requirement already satisfied: httpcore==1.* in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (1.0.5)\n", "Requirement already satisfied: idna in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (3.7)\n", "Requirement already satisfied: sniffio in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (1.3.1)\n", "Requirement already satisfied: h11<0.15,>=0.13 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpcore==1.*->httpx>=0.20.0->httpx[http2]>=0.20.0->qdrant-client) (0.14.0)\n", "Requirement 
already satisfied: h2<5,>=3 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from httpx[http2]>=0.20.0->qdrant-client) (4.1.0)\n", "Requirement already satisfied: jsonpointer>=1.9 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from jsonpatch<2.0,>=1.33->langchain-core) (2.4)\n", "Requirement already satisfied: orjson<4.0.0,>=3.9.14 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from langsmith<0.2.0,>=0.1.17->langchain) (3.10.7)\n", "Requirement already satisfied: annotated-types>=0.4.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from pydantic<3.0.0,>=2.7.4->langchain) (0.7.0)\n", "Requirement already satisfied: pydantic-core==2.20.1 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from pydantic<3.0.0,>=2.7.4->langchain) (2.20.1)\n", "Requirement already satisfied: python-dotenv>=0.21.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from pydantic-settings<3.0.0,>=2.4.0->langchain-community) (1.0.1)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from requests<3,>=2->langchain) (3.3.2)\n", "Requirement already satisfied: hyperframe<7,>=6.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from h2<5,>=3->httpx[http2]>=0.20.0->qdrant-client) (6.0.1)\n", "Requirement already satisfied: hpack<5,>=4.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from h2<5,>=3->httpx[http2]>=0.20.0->qdrant-client) (4.0.0)\n", "Requirement already satisfied: mypy-extensions>=0.3.0 in /opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain-community) (1.0.0)\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install \\\n", " langchain \\\n", " langchain-core\\\n", " langchain-community \\\n", " langchain-experimental \\\n", " langchain-qdrant \\\n", " qdrant-client \\\n", " tiktoken \\\n", " pymupdf \n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import os\n", "import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from uuid import uuid4\n", "\n", "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_PROJECT\"] = f\"AIE4 - LangGraph - {uuid4().hex[0:8]}\"\n", "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"LangSmith API Key: \")" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages/pypdfium2/_helpers/textpage.py:80: UserWarning: get_text_range() call with default params will be implicitly redirected to get_text_bounded()\n", " warnings.warn(\"get_text_range() call with default params will be implicitly redirected to get_text_bounded()\")\n" ] } ], "source": [ "from langchain_community.document_loaders import PyMuPDFLoader\n", "from langchain.document_loaders import PyPDFium2Loader\n", "# Additional content \n", "# https://arxiv.org/pdf/2306.12001\n", "BOR_FILE_PATH = \"https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf\"\n", "bor_documents = PyMuPDFLoader(file_path=BOR_FILE_PATH).load()\n", "\n", "NIST_FILE_PATH = 
\"https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf\"\n", "nist_documents = PyMuPDFLoader(file_path=NIST_FILE_PATH).load()\n", "\n", "# PyPDFium2Loader_bor_documents = PyPDFium2Loader(file_path=BOR_FILE_PATH).load()\n", "# PyPDFium2Loader_nist_documents = PyPDFium2Loader(file_path=NIST_FILE_PATH).load()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Blueprint for an AI Bill of Rights'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bor_title = str(bor_documents[0].metadata.get(\"title\"))\n", "bor_title" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile'" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "nist_title = str(nist_documents[0].metadata.get(\"title\"))\n", "nist_title" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 1, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': \"D:20220920133035-04'00'\", 'modDate': \"D:20221003104118-04'00'\", 'trapped': ''}, page_content=' \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\nAbout this Document \\nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was \\npublished by the White House Office of Science and Technology Policy in October 2022. This framework was \\nreleased one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered \\nworld.” Its release follows a year of public engagement to inform this initiative. The framework is available \\nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights \\nAbout the Office of Science and Technology Policy \\nThe Office of Science and Technology Policy (OSTP) was established by the National Science and Technology \\nPolicy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office \\nof the President with advice on the scientific, engineering, and technological aspects of the economy, national \\nsecurity, health, foreign relations, the environment, and the technological recovery and use of resources, among \\nother topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of \\nManagement and Budget (OMB) with an annual review and analysis of Federal research and development in \\nbudgets, and serves as a source of scientific and technological analysis and judgment for the President with \\nrespect to major policies, plans, and programs of the Federal Government. \\nLegal Disclaimer \\nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper \\npublished by the White House Office of Science and Technology Policy. 
It is intended to support the \\ndevelopment of policies and practices that protect civil rights and promote democratic values in the building, \\ndeployment, and governance of automated systems. \\nThe Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It \\ndoes not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \\ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \\ntherefore does not require compliance with the principles described herein. It also is not determinative of what \\nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \\nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \\nrequirements of the Federal agencies that enforce them. These principles are not intended to, and do not, \\nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \\nintelligence activities. \\nThe appropriate application of the principles set forth in this white paper depends significantly on the \\ncontext in which automated systems are being utilized. In some circumstances, application of these principles \\nin whole or in part may not be appropriate given the intended use of automated systems to achieve government \\nagency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of \\nautomated systems in certain settings such as AI systems used as part of school building security or automated \\nhealth diagnostic systems. \\nThe Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of \\nequities, for example, between the protection of sensitive law enforcement information and the principle of \\nnotice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and \\nother law enforcement equities. Even in contexts where these principles may not apply in whole or in part, \\nfederal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as \\nexisting policies and safeguards that govern automated systems, including, for example, Executive Order 13960, \\nPromoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). \\nThis white paper recognizes that national security (which includes certain law enforcement and \\nhomeland security activities) and defense activities are of increased sensitivity and interest to our nation’s \\nadversaries and are often subject to special requirements, such as those governing classified information and \\nother protected data. Such activities require alternative, compatible safeguards through existing policies that \\ngovern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and \\nResponsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and \\nFramework. The implementation of these policies to national security and defense activities can be informed by \\nthe Blueprint for an AI Bill of Rights where feasible. 
\\nThe Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or \\ndefense, substantive or procedural, enforceable at law or in equity by any party against the United States, its \\ndepartments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a \\nwaiver of sovereign immunity. \\nCopyright Information \\nThis document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105). \\n2\\n')" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bor_documents[1]" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 1}, page_content='About this Document\\r\\nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was \\r\\npublished by the White House Office of Science and Technology Policy in October 2022. This framework was \\r\\nreleased one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered \\r\\nworld.” Its release follows a year of public engagement to inform this initiative. The framework is available \\r\\nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights\\r\\nAbout the Office of Science and Technology Policy\\r\\nThe Office of Science and Technology Policy (OSTP) was established by the National Science and Technology\\r\\nPolicy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office \\r\\nof the President with advice on the scientific, engineering, and technological aspects of the economy, national \\r\\nsecurity, health, foreign relations, the environment, and the technological recovery and use of resources, among \\r\\nother topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of \\r\\nManagement and Budget (OMB) with an annual review and analysis of Federal research and development in \\r\\nbudgets, and serves as a source of scientific and technological analysis and judgment for the President with \\r\\nrespect to major policies, plans, and programs of the Federal Government.\\r\\nLegal Disclaimer\\r\\nThe Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper\\r\\npublished by the White House Office of Science and Technology Policy. It is intended to support the \\r\\ndevelopment of policies and practices that protect civil rights and promote democratic values in the building, \\r\\ndeployment, and governance of automated systems.\\r\\nThe Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It \\r\\ndoes not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or \\r\\ninternational instrument. It does not constitute binding guidance for the public or Federal agencies and \\r\\ntherefore does not require compliance with the principles described herein. It also is not determinative of what \\r\\nthe U.S. government’s position will be in any international negotiation. Adoption of these principles may not \\r\\nmeet the requirements of existing statutes, regulations, policies, or international instruments, or the \\r\\nrequirements of the Federal agencies that enforce them. 
These principles are not intended to, and do not, \\r\\nprohibit or limit any lawful activity of a government agency, including law enforcement, national security, or \\r\\nintelligence activities.\\r\\nThe appropriate application of the principles set forth in this white paper depends significantly on the \\r\\ncontext in which automated systems are being utilized. In some circumstances, application of these principles \\r\\nin whole or in part may not be appropriate given the intended use of automated systems to achieve government \\r\\nagency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of \\r\\nautomated systems in certain settings such as AI systems used as part of school building security or automated \\r\\nhealth diagnostic systems.\\r\\nThe Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of \\r\\nequities, for example, between the protection of sensitive law enforcement information and the principle of \\r\\nnotice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and \\r\\nother law enforcement equities. Even in contexts where these principles may not apply in whole or in part, \\r\\nfederal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as \\r\\nexisting policies and safeguards that govern automated systems, including, for example, Executive Order 13960, \\r\\nPromoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).\\r\\nThis white paper recognizes that national security (which includes certain law enforcement and \\r\\nhomeland security activities) and defense activities are of increased sensitivity and interest to our nation’s \\r\\nadversaries and are often subject to special requirements, such as those governing classified information and \\r\\nother protected data. Such activities require alternative, compatible safeguards through existing policies that \\r\\ngovern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and \\r\\nResponsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and \\r\\nFramework. The implementation of these policies to national security and defense activities can be informed by \\r\\nthe Blueprint for an AI Bill of Rights where feasible.\\r\\nThe Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or \\r\\ndefense, substantive or procedural, enforceable at law or in equity by any party against the United States, its \\r\\ndepartments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a \\r\\nwaiver of sovereign immunity.\\r\\nCopyright Information\\r\\nThis document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105). 
\\r\\n2\\n')" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "PyPDFium2Loader_bor_documents[1]" ] }, { "cell_type": "code", "execution_count": 123, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages/pydantic/_internal/_fields.py:161: UserWarning: Field \"model_name\" has conflict with protected namespace \"model_\".\n", "\n", "You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.\n", " warnings.warn(\n", "/opt/miniconda3/envs/llmops-course/lib/python3.11/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:11: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n", " from tqdm.autonotebook import tqdm, trange\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "676e2dea406c411e970629d869df7495", "version_major": 2, "version_minor": 0 }, "text/plain": [ "modules.json: 0%| | 0.00/349 [00:00 str:\n", " \"\"\"\n", " Enrich the document context by adding the surrounding text (context window).\n", " Args:\n", " - document: The retrieved document.\n", " - window_size: Number of adjacent passages to include as context (before and after).\n", " \"\"\"\n", " doc_text = document.page_content.split(\"\\n\")\n", " enriched_text = []\n", " for i, passage in enumerate(doc_text):\n", " context = doc_text[max(0, i - window_size):min(len(doc_text), i + window_size + 1)]\n", " enriched_text.append(\"\\n\".join(context))\n", " return \"\\n\".join(enriched_text)\n", "\n", " def _get_relevant_documents(self, query: str) -> List[Document]:\n", " \"\"\"\n", " Retrieve documents and apply context enrichment.\n", " Args:\n", " - query: The query string.\n", " Returns:\n", " - List of enriched documents.\n", " \"\"\"\n", " documents = self.retriever.get_relevant_documents(query)\n", " enriched_documents = []\n", " for doc in documents:\n", " enriched_content = self.enrich_context(doc, self.window_size)\n", " enriched_doc = Document(page_content=enriched_content, metadata=doc.metadata)\n", " enriched_documents.append(enriched_doc)\n", " return enriched_documents\n", "\n", " async def _aget_relevant_documents(self, query: str) -> List[Document]:\n", " \"\"\"\n", " Async version of the document retrieval and enrichment.\n", " Args:\n", " - query: The query string.\n", " Returns:\n", " - List of enriched documents.\n", " \"\"\"\n", " documents = await self.retriever.aget_relevant_documents(query)\n", " enriched_documents = []\n", " for doc in documents:\n", " enriched_content = self.enrich_context(doc, self.window_size)\n", " enriched_doc = Document(page_content=enriched_content, metadata=doc.metadata)\n", " enriched_documents.append(enriched_doc)\n", " return enriched_documents\n" ] }, { "cell_type": "code", "execution_count": 111, "metadata": {}, "outputs": [], "source": [ "# todo: For some reason this creates duplicate need to debug this later.\n", "context_enriched_retriever = ContextEnrichedRetriever(retriever=retriver, window_size=2)" ] }, { "cell_type": "code", "execution_count": 128, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0: Blueprint for an AI Bill of Rights\n", "In some cases, exceptions to \n", "the principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, \n", "conform to the practicalities of a 
specific use case, or balance competing public interests. In particular, law \n", "enforcement, and other regulatory contexts may require government actors to protect civil rights, civil liberties, \n", "and privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in \n", "this framework. The Blueprint for an AI Bill of Rights is meant to assist governments and the private sector in \n", "moving principles into practice. The expectations given in the Technical Companion are meant to serve as a blueprint for the development of \n", "additional technical standards and practices that should be tailored for particular sectors and contexts. While \n", "existing laws informed the development of the Blueprint for an AI Bill of Rights, this framework does not detail \n", "those laws beyond providing them as examples, where appropriate, of existing protective measures. This \n", "framework instead shares a broad, forward-leaning vision of recommended principles for automated system \n", "development and use to inform private and public involvement with these systems where they have the poten­\n", "tial to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or \n", "take a position on legislative and regulatory proposals in municipal, state, and federal government, or those in \n", "other countries. We have seen modest progress in recent years, with some state and local governments responding to these prob­\n", "lems with legislation, and some courts extending longstanding statutory protections to new and emerging tech­\n", "nologies. There are companies working to incorporate additional protections in their design and use of auto­\n", "mated systems, and researchers developing innovative guardrails. Advocates, researchers, and government \n", "organizations have proposed principles for the ethical use of AI and other automated systems. These include \n", "the Organization for Economic Co-operation and Development’s (OECD’s) 2019 Recommendation on Artificial \n", "Intelligence, which includes principles for responsible stewardship of trustworthy AI and which the United \n", "States adopted, and Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the \n", "Federal Government, which sets out principles that govern the federal government’s use of AI. The Blueprint \n", "for an AI Bill of Rights is fully consistent with these principles and with the direction in Executive Order 13985 \n", "on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. These principles find kinship in the Fair Information Practice Principles (FIPPs), derived from the 1973 report \n", "of an advisory committee to the U.S. Department of Health, Education, and Welfare, Records, Computers, \n", "and the Rights of Citizens.4 While there is no single, universal articulation of the FIPPs, these core \n", "principles for managing information about individuals have been incorporated into data privacy laws and \n", "policies across the globe.5 The Blueprint for an AI Bill of Rights embraces elements of the FIPPs that are \n", "particularly relevant to automated systems, without articulating a specific set of FIPPs or scoping \n", "applicability or the interests served to a single particular domain, like privacy, civil rights and civil liberties, \n", "ethics, or risk management. 
The Technical Companion builds on this prior work to provide practical next \n", "steps to move these principles into practice and promote common approaches that allow technological \n", "innovation to flourish while protecting people from harm.\n", "1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile\n", " \n", "26 \n", "MAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or \n", "software – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other \n", "rights. Action ID \n", "Suggested Action \n", "GAI Risks \n", "MP-4.1-001 Conduct periodic monitoring of AI-generated content for privacy risks; address any \n", "possible instances of PII or sensitive data exposure. Data Privacy \n", "MP-4.1-002 Implement processes for responding to potential intellectual property infringement \n", "claims or other rights. Intellectual Property \n", "MP-4.1-003 \n", "Connect new GAI policies, procedures, and processes to existing model, data, \n", "software development, and IT governance and to legal, compliance, and risk \n", "management activities. Information Security; Data Privacy \n", "MP-4.1-004 Document training data curation policies, to the extent possible and according to \n", "applicable laws and policies. Intellectual Property; Data Privacy; \n", "Obscene, Degrading, and/or \n", "Abusive Content \n", "MP-4.1-005 \n", "Establish policies for collection, retention, and minimum quality of data, in \n", "consideration of the following risks: Disclosure of inappropriate CBRN information; \n", "Use of Illegal or dangerous content; Offensive cyber capabilities; Training data \n", "imbalances that could give rise to harmful biases; Leak of personally identifiable \n", "information, including facial likenesses of individuals. CBRN Information or Capabilities; \n", "Intellectual Property; Information \n", "Security; Harmful Bias and \n", "Homogenization; Dangerous, \n", "Violent, or Hateful Content; Data \n", "Privacy \n", "MP-4.1-006 Implement policies and practices defining how third-party intellectual property and \n", "training data will be used, stored, and protected. Intellectual Property; Value Chain \n", "and Component Integration \n", "MP-4.1-007 Re-evaluate models that were fine-tuned or enhanced on top of third-party \n", "models.\n", "2: Blueprint for an AI Bill of Rights\n", "Data should \n", "only be collected or used for the purposes of training or testing machine learning models if such collection and \n", "use is legal and consistent with the expectations of the people whose data is collected. User experience \n", "research should be conducted to confirm that people understand what data is being collected about them and \n", "how it will be used, and that this collection matches their expectations and desires. Data collection and use-case scope limits. Data collection should be limited in scope, with specific, \n", "narrow identified goals, to avoid \"mission creep.\" Anticipated data collection should be determined to be \n", "strictly necessary to the identified goals and should be minimized as much as possible. Data collected based on \n", "these identified goals and for a specific context should not be used in a different context without assessing for \n", "new privacy risks and implementing appropriate mitigation measures, which may include express consent. 
Clear timelines for data retention should be established, with data deleted as soon as possible in accordance \n", "with legal or policy-based limitations. Determined data retention timelines should be documented and justi­\n", "fied. Risk identification and mitigation. Entities that collect, use, share, or store sensitive data should \n", "attempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropri­\n", "ately to identified risks. Appropriate responses include determining not to process data when the privacy risks \n", "outweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not \n", "include sharing or transferring the privacy risks to users via notice or consent requests where users could not \n", "reasonably be expected to understand the risks without further support. Privacy-preserving security. Entities creating, using, or governing automated systems should follow \n", "privacy and security best practices designed to ensure data and metadata do not leak beyond the specific \n", "consented use case. Best practices could include using privacy-enhancing cryptography or other types of \n", "privacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with \n", "conventional system security protocols.\n", "3: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile\n", " \n", "27 \n", "MP-4.1-010 \n", "Conduct appropriate diligence on training data use to assess intellectual property, \n", "and privacy, risks, including to examine whether use of proprietary or sensitive \n", "training data is consistent with applicable laws. Intellectual Property; Data Privacy \n", "AI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n", " \n", "MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \n", "uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \n", "the AI system, or other data are identified and documented. Action ID \n", "Suggested Action \n", "GAI Risks \n", "MP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \n", "data generation capabilities for potential misuse or vulnerabilities. Information Integrity; Information \n", "Security \n", "MP-5.1-002 \n", "Identify potential content provenance harms of GAI, such as misinformation or \n", "disinformation, deepfakes, including NCII, or tampered content. Enumerate and \n", "rank risks based on their likelihood and potential impact, and determine how well \n", "provenance solutions address specific risks and/or harms. Information Integrity; Dangerous, \n", "Violent, or Hateful Content; \n", "Obscene, Degrading, and/or \n", "Abusive Content \n", "MP-5.1-003 \n", "Consider disclosing use of GAI to end users in relevant contexts, while considering \n", "the objective of disclosure, the context of use, the likelihood and magnitude of the \n", "risk posed, the audience of the disclosure, as well as the frequency of the \n", "disclosures. Human-AI Configuration \n", "MP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \n", "estimates. 
Information Integrity; CBRN \n", "Information or Capabilities; \n", "Dangerous, Violent, or Hateful \n", "Content; Harmful Bias and \n", "Homogenization \n", "MP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \n", "identify anomalous or unforeseen failure modes. Information Security \n", "MP-5.1-006 \n", "Profile threats and negative impacts arising from GAI systems interacting with, \n", "manipulating, or generating content, and outlining known and potential \n", "vulnerabilities and the likelihood of their occurrence.\n", "4: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile\n", " \n", "35 \n", "MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as \n", "identified in the MAP function – to inform responsible use and governance. Action ID \n", "Suggested Action \n", "GAI Risks \n", "MS-2.9-001 \n", "Apply and document ML explanation results such as: Analysis of embeddings, \n", "Counterfactual prompts, Gradient-based attributions, Model \n", "compression/surrogate models, Occlusion/term reduction. Confabulation \n", "MS-2.9-002 \n", "Document GAI model details including: Proposed use and organizational value; \n", "Assumptions and limitations, Data collection methodologies; Data provenance; \n", "Data quality; Model architecture (e.g., convolutional neural network, \n", "transformers, etc.); Optimization objectives; Training algorithms; RLHF \n", "approaches; Fine-tuning or retrieval-augmented generation approaches; \n", "Evaluation data; Ethical considerations; Legal and regulatory requirements. Information Integrity; Harmful Bias \n", "and Homogenization \n", "AI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV \n", " \n", "MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. Action ID \n", "Suggested Action \n", "GAI Risks \n", "MS-2.10-001 \n", "Conduct AI red-teaming to assess issues such as: Outputting of training data \n", "samples, and subsequent reverse engineering, model extraction, and \n", "membership inference risks; Revealing biometric, confidential, copyrighted, \n", "licensed, patented, personal, proprietary, sensitive, or trade-marked information; \n", "Tracking or revealing location information of users or members of training \n", "datasets. Human-AI Configuration; \n", "Information Integrity; Intellectual \n", "Property \n", "MS-2.10-002 \n", "Engage directly with end-users and other stakeholders to understand their \n", "expectations and concerns regarding content provenance.\n", "5: Blueprint for an AI Bill of Rights\n", " \n", " \n", " \n", " \n", " \n", "SECTION TITLE\n", "DATA PRIVACY\n", "You should be protected from abusive data practices via built-in protections and you \n", "should have agency over how data about you is used. You should be protected from violations of \n", "privacy through design choices that ensure such protections are included by default, including ensuring that \n", "data collection conforms to reasonable expectations and that only data strictly necessary for the specific \n", "context is collected. 
Designers, developers, and deployers of automated systems should seek your permission \n", "and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \n", "ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \n", "used. Systems should not employ user experience and design decisions that obfuscate user choice or burden \n", "users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \n", "where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable \n", "in plain language, and give you agency over data collection and the specific context of use; current hard-to­\n", "understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \n", "restrictions for data and inferences related to sensitive domains, including health, work, education, criminal \n", "justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \n", "related inferences should only be used for necessary functions, and you should be protected by ethical review \n", "and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \n", "technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \n", "potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \n", "should not be used in education, work, housing, or in other contexts where the use of such surveillance \n", "technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \n", "reporting that confirms your data decisions have been respected and provides an assessment of the \n", "potential impact of surveillance technologies on your rights, opportunities, or access. NOTICE AND EXPLANATION\n", "You should know that an automated system is being used and understand how and why it \n", "contributes to outcomes that impact you.\n", "6: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile\n", " \n", "48 \n", "• Data protection \n", "• Data retention \n", "• Consistency in use of defining key terms \n", "• Decommissioning \n", "• Discouraging anonymous use \n", "• Education \n", "• Impact assessments \n", "• Incident response \n", "• Monitoring \n", "• Opt-outs \n", "• Risk-based controls \n", "• Risk mapping and measurement \n", "• Science-backed TEVV practices \n", "• Secure software development practices \n", "• Stakeholder engagement \n", "• Synthetic content detection and \n", "labeling tools and techniques \n", "• Whistleblower protections \n", "• Workforce diversity and \n", "interdisciplinary teams\n", "Establishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \n", "as well as different levels of human-AI configurations can help to decrease risks arising from misuse, \n", "abuse, inappropriate repurpose, and misalignment between systems and users. 
These practices are just \n", "one example of adapting existing governance protocols for GAI contexts.\n", "7: Blueprint for an AI Bill of Rights\n", "Federal government surveillance and other collection and \n", "use of data is governed by legal protections that help to protect civil liberties and provide for limits on data retention \n", "in some cases. Many states have also enacted consumer data privacy protection regimes to address some of these \n", "harms. However, these are not yet standard practices, and the United States lacks a comprehensive statutory or regulatory \n", "framework governing the rights of the public when it comes to personal data. While a patchwork of laws exists to \n", "guide the collection and use of personal data in specific contexts, including health, employment, education, and credit, \n", "it can be unclear how these laws apply in other contexts and in an increasingly automated society. Additional protec­\n", "tions would assure the American public that the automated systems they use are not monitoring their activities, \n", "collecting information on their lives, or otherwise surveilling them without context-specific consent or legal authori­\n", "ty. 31\n", "\n", "8: Blueprint for an AI Bill of Rights\n", " \n", " \n", " \n", "SECTION TITLE\n", "Applying The Blueprint for an AI Bill of Rights \n", "While many of the concerns addressed in this framework derive from the use of AI, the technical \n", "capabilities and specific definitions of such systems change with the speed of innovation, and the potential \n", "harms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-\n", "part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) \n", "have the potential to meaningfully impact the American public’s rights, opportunities, or access to \n", "critical resources or services. These rights, opportunities, and access to critical resources of services should \n", "be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in \n", "our lives. This framework describes protections that should be applied with respect to all automated systems that \n", "have the potential to meaningfully impact individuals' or communities' exercise of: \n", "RIGHTS, OPPORTUNITIES, OR ACCESS\n", "Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimi­\n", "nation, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both \n", "public and private sector contexts; \n", "Equal opportunities, including equitable access to education, housing, credit, employment, and other \n", "programs; or, \n", "Access to critical resources or services, such as healthcare, financial services, safety, social services, \n", "non-deceptive information about goods and services, and government benefits. A list of examples of automated systems for which these principles should be considered is provided in the \n", "Appendix. The Technical Companion, which follows, offers supportive guidance for any person or entity that \n", "creates, deploys, or oversees automated systems. Considered together, the five principles and associated practices of the Blueprint for an AI Bill of \n", "Rights form an overlapping set of backstops against potential harms. This purposefully overlapping \n", "framework, when taken as a whole, forms a blueprint to help protect the public from harm. 
The measures taken to realize the vision set forward in this framework should be proportionate \n", "with the extent and nature of the harm, or risk of harm, to people's rights, opportunities, and \n", "access. RELATIONSHIP TO EXISTING LAW AND POLICY\n", "The Blueprint for an AI Bill of Rights is an exercise in envisioning a future where the American public is \n", "protected from the potential harms, and can fully enjoy the benefits, of automated systems. It describes princi­\n", "ples that can help ensure these protections.\n", "9: Blueprint for an AI Bill of Rights\n", "In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. Automated systems sometimes rely on data from other systems, including historical data, allowing irrelevant informa­\n", "tion from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposeful­\n", "ly designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended \n", "or unintended uses lead to unintended harms. Many of the harms resulting from these technologies are preventable, and actions are already being taken to protect \n", "the public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that \n", "key development decisions are vetted by an ethics review; others have identified and mitigated harms found through \n", "pre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consulta­\n", "tion processes that may be applied when considering the use of new automated systems, and existing product develop­\n", "ment and testing practices already protect the American public from many potential harms. Still, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on \n", "these existing practices, increase confidence in the use of automated systems, and protect the American public. Inno­\n", "vators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections \n", "from unsafe outcomes. 
All can benefit from assurances that automated systems will be designed, tested, and consis­\n", "tently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harm­\n", "ful outcomes.\n" ] } ], "source": [ "docs = retriver.invoke(\"How can companies ensure AI does not violate data privacy laws?\")\n", "for i, doc in enumerate(docs):\n", "    print(f\"{i}: {doc.metadata.get('title')}\")\n", "    print(doc.page_content)" ] }, { "cell_type": "code", "execution_count": 129, "metadata": {}, "outputs": [], "source": [ "# Trying a contextual compression retriever\n", "from langchain.retrievers import ContextualCompressionRetriever\n", "from langchain.retrievers.document_compressors import LLMChainExtractor\n", "from langchain_openai import ChatOpenAI\n", "\n", "base_retriever = retriver\n", "\n", "# Create a contextual compressor that extracts only query-relevant passages\n", "compressor_llm = ChatOpenAI(temperature=0, model=\"gpt-4o\", max_tokens=4000)\n", "compressor = LLMChainExtractor.from_llm(compressor_llm)\n", "\n", "# Combine the retriever with the compressor\n", "compression_retriever = ContextualCompressionRetriever(\n", "    base_compressor=compressor,\n", "    base_retriever=base_retriever\n", ")\n" ] }, { "cell_type": "code", "execution_count": 130, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 29, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '1ba549ad3b9b488ab15014c90b909ad1', '_collection_name': 'ai-safety'}, page_content='Conduct periodic monitoring of AI-generated content for privacy risks; address any possible instances of PII or sensitive data exposure. \\n\\nConnect new GAI policies, procedures, and processes to existing model, data, software development, and IT governance and to legal, compliance, and risk management activities. \\n\\nDocument training data curation policies, to the extent possible and according to applicable laws and policies. 
\\n\\nEstablish policies for collection, retention, and minimum quality of data, in consideration of the following risks: Disclosure of inappropriate CBRN information; Use of Illegal or dangerous content; Offensive cyber capabilities; Training data imbalances that could give rise to harmful biases; Leak of personally identifiable information, including facial likenesses of individuals.'),\n", " Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 32, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': \"D:20220920133035-04'00'\", 'modDate': \"D:20221003104118-04'00'\", 'trapped': '', '_id': '04acfe508f5c428caeeb90944c1d7386', '_collection_name': 'ai-safety'}, page_content='Data should \\nonly be collected or used for the purposes of training or testing machine learning models if such collection and \\nuse is legal and consistent with the expectations of the people whose data is collected. User experience \\nresearch should be conducted to confirm that people understand what data is being collected about them and \\nhow it will be used, and that this collection matches their expectations and desires. Data collection and use-case scope limits. Data collection should be limited in scope, with specific, \\nnarrow identified goals, to avoid \"mission creep.\" Anticipated data collection should be determined to be \\nstrictly necessary to the identified goals and should be minimized as much as possible. Data collected based on \\nthese identified goals and for a specific context should not be used in a different context without assessing for \\nnew privacy risks and implementing appropriate mitigation measures, which may include express consent. Clear timelines for data retention should be established, with data deleted as soon as possible in accordance \\nwith legal or policy-based limitations. Determined data retention timelines should be documented and justi\\xad\\nfied. Risk identification and mitigation. Entities that collect, use, share, or store sensitive data should \\nattempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropri\\xad\\nately to identified risks. Appropriate responses include determining not to process data when the privacy risks \\noutweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not \\ninclude sharing or transferring the privacy risks to users via notice or consent requests where users could not \\nreasonably be expected to understand the risks without further support. Privacy-preserving security. Entities creating, using, or governing automated systems should follow \\nprivacy and security best practices designed to ensure data and metadata do not leak beyond the specific \\nconsented use case. 
Best practices could include using privacy-enhancing cryptography or other types of \\nprivacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with \\nconventional system security protocols.'),\n", " Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '7e66f3997e2243ad97a378ad1f2d9cd0', '_collection_name': 'ai-safety'}, page_content='Conduct appropriate diligence on training data use to assess intellectual property, \\nand privacy, risks, including to examine whether use of proprietary or sensitive \\ntraining data is consistent with applicable laws. Intellectual Property; Data Privacy \\nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities'),\n", " Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 38, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '907f59c504d44945ae865021b6e9c713', '_collection_name': 'ai-safety'}, page_content='MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. Action ID \\nSuggested Action \\nGAI Risks \\nMS-2.10-001 \\nConduct AI red-teaming to assess issues such as: Outputting of training data \\nsamples, and subsequent reverse engineering, model extraction, and \\nmembership inference risks; Revealing biometric, confidential, copyrighted, \\nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \\nTracking or revealing location information of users or members of training \\ndatasets. 
Human-AI Configuration; \\nInformation Integrity; Intellectual \\nProperty \\nMS-2.10-002 \\nEngage directly with end-users and other stakeholders to understand their \\nexpectations and concerns regarding content provenance.'),\n", " Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': \"D:20220920133035-04'00'\", 'modDate': \"D:20221003104118-04'00'\", 'trapped': '', '_id': '498d23d9347f49ac9c61216ea042aefd', '_collection_name': 'ai-safety'}, page_content='SECTION TITLE\\nDATA PRIVACY\\nYou should be protected from abusive data practices via built-in protections and you \\nshould have agency over how data about you is used. You should be protected from violations of \\nprivacy through design choices that ensure such protections are included by default, including ensuring that \\ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \\ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \\nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \\nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \\nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \\nusers with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \\nwhere it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable \\nin plain language, and give you agency over data collection and the specific context of use; current hard-to\\xad\\nunderstand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \\nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \\njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \\nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \\nand use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \\ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \\npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \\nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \\ntechnologies is likely to limit rights, opportunities, or access. 
Whenever possible, you should have access to \\nreporting that confirms your data decisions have been respected and provides an assessment of the \\npotential impact of surveillance technologies on your rights, opportunities, or access.')]" ] }, "execution_count": 130, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compression_retriever.invoke(\"How can companies ensure AI does not violate data privacy laws?\")" ] }, { "cell_type": "code", "execution_count": 131, "metadata": {}, "outputs": [], "source": [ "from langchain.prompts import ChatPromptTemplate\n", "\n", "base_rag_prompt_template = \"\"\"\\\n", "You are a helpful assistant that answers questions using the provided context. If the answer is not in the context, respond with \"I don't have that information.\"\n", "\n", "Context:\n", "{context}\n", "\n", "Question:\n", "{question}\n", "\"\"\"\n", "\n", "base_rag_prompt = ChatPromptTemplate.from_template(base_rag_prompt_template)" ] }, { "cell_type": "code", "execution_count": 132, "metadata": {}, "outputs": [], "source": [ "from langchain_openai.chat_models import ChatOpenAI\n", "\n", "base_llm = ChatOpenAI(model=\"gpt-4o\", tags=[\"base_llm\"])" ] }, { "cell_type": "code", "execution_count": 133, "metadata": {}, "outputs": [], "source": [ "from operator import itemgetter\n", "from langchain.schema.output_parser import StrOutputParser\n", "from langchain.schema.runnable import RunnablePassthrough\n", "\n", "# Fetch context for the question, pass both keys through, then generate the\n", "# response while also returning the retrieved context for later inspection.\n", "retrieval_augmented_qa_chain = (\n", " {\"context\": itemgetter(\"question\") | compression_retriever, \"question\": itemgetter(\"question\")}\n", " | RunnablePassthrough.assign(context=itemgetter(\"context\"))\n", " | {\"response\": base_rag_prompt | base_llm, \"context\": itemgetter(\"context\")}\n", ")" ] },
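{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The chain above stuffs the raw Document list into the prompt, so each\n", "# document's metadata is serialized into the context along with its text.\n", "# A common refinement -- shown here only as a sketch, where `format_docs`\n", "# is our own helper rather than a LangChain API -- is to join just the\n", "# page contents before they reach the prompt:\n", "def format_docs(docs):\n", "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n", "\n", "formatted_context_qa_chain = (\n", "    {\"context\": itemgetter(\"question\") | compression_retriever | format_docs,\n", "     \"question\": itemgetter(\"question\")}\n", "    | base_rag_prompt\n", "    | base_llm\n", "    | StrOutputParser()\n", ")" ] }, { "cell_type": "code", "execution_count": 134, "metadata": {}, "outputs": [], "source": [ "result = retrieval_augmented_qa_chain.invoke({\"question\" : \"How can companies ensure AI does not violate data privacy laws?\"})" ] }, { "cell_type": "code", "execution_count": 135, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Companies can ensure that AI does not violate data privacy laws by implementing several strategies and practices as mentioned in the provided context:\\n\\n1. **Periodic Monitoring**: Conduct regular monitoring of AI-generated content to identify and address any potential instances of personally identifiable information (PII) or sensitive data exposure. \\n\\n2. **Integration with Existing Policies**: Connect new AI policies, procedures, and processes with existing model, data, software development, and IT governance, as well as legal, compliance, and risk management activities.\\n\\n3. **Training Data Curation Policies**: Document training data curation policies in accordance with applicable laws and policies. This includes policies for the collection, retention, and minimum quality of data to mitigate risks such as the disclosure of inappropriate information, use of illegal or dangerous content, offensive cyber capabilities, data imbalances leading to harmful biases, and leaks of PII.\\n\\n4. **Diligence on Training Data**: Conduct appropriate diligence on the use of training data to assess intellectual property and privacy risks, ensuring that the use of proprietary or sensitive data is consistent with applicable laws.\\n\\n5. 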
**User Experience Research**: Conduct user experience research to confirm that individuals understand what data is being collected about them and how it will be used, ensuring that this collection matches their expectations and desires.\\n\\n6. **Scope Limits on Data Collection**: Limit data collection to specific, narrow goals to avoid \"mission creep.\" Anticipated data collection should be strictly necessary for the identified goals and minimized as much as possible.\\n\\n7. **Risk Identification and Mitigation**: Proactively identify and manage privacy risks to avoid, mitigate, and respond appropriately to identified risks. This includes determining not to process data when privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks.\\n\\n8. **Privacy-Preserving Security**: Follow privacy and security best practices to ensure that data and metadata do not leak beyond the specific consented use case. This can include using privacy-enhancing cryptography, privacy-enhancing technologies, fine-grained permissions, and access control mechanisms.\\n\\n9. **Consent and Privacy by Design**: Seek user permission and respect user decisions regarding data collection, use, access, transfer, and deletion to the greatest extent possible. Implement privacy by design safeguards where consent is not feasible, ensuring that systems do not employ user experience and design decisions that obfuscate user choice or burden users with privacy-invasive defaults.\\n\\n10. **Enhanced Protections for Sensitive Data**: Implement enhanced protections and restrictions for data and inferences related to sensitive domains such as health, work, education, criminal justice, and finance. Ensure that data pertaining to youth is protected, and any use in sensitive domains is subject to ethical review and use prohibitions.\\n\\n11. **Surveillance and Monitoring**: Ensure that surveillance technologies are subject to heightened oversight, including pre-deployment assessment of potential harms and scope limits to protect privacy and civil liberties. Avoid continuous surveillance and monitoring in contexts where it could limit rights, opportunities, or access.\\n\\nBy adopting these measures, companies can better ensure that their AI systems comply with data privacy laws and protect the privacy of individuals.'" ] }, "execution_count": 135, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result.get('response').content" ] }, { "cell_type": "code", "execution_count": 136, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'response': AIMessage(content='Companies can ensure that AI does not violate data privacy laws by implementing several strategies and practices as mentioned in the provided context:\\n\\n1. **Periodic Monitoring**: Conduct regular monitoring of AI-generated content to identify and address any potential instances of personally identifiable information (PII) or sensitive data exposure. \\n\\n2. **Integration with Existing Policies**: Connect new AI policies, procedures, and processes with existing model, data, software development, and IT governance, as well as legal, compliance, and risk management activities.\\n\\n3. **Training Data Curation Policies**: Document training data curation policies in accordance with applicable laws and policies. 
This includes policies for the collection, retention, and minimum quality of data to mitigate risks such as the disclosure of inappropriate information, use of illegal or dangerous content, offensive cyber capabilities, data imbalances leading to harmful biases, and leaks of PII.\\n\\n4. **Diligence on Training Data**: Conduct appropriate diligence on the use of training data to assess intellectual property and privacy risks, ensuring that the use of proprietary or sensitive data is consistent with applicable laws.\\n\\n5. **User Experience Research**: Conduct user experience research to confirm that individuals understand what data is being collected about them and how it will be used, ensuring that this collection matches their expectations and desires.\\n\\n6. **Scope Limits on Data Collection**: Limit data collection to specific, narrow goals to avoid \"mission creep.\" Anticipated data collection should be strictly necessary for the identified goals and minimized as much as possible.\\n\\n7. **Risk Identification and Mitigation**: Proactively identify and manage privacy risks to avoid, mitigate, and respond appropriately to identified risks. This includes determining not to process data when privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks.\\n\\n8. **Privacy-Preserving Security**: Follow privacy and security best practices to ensure that data and metadata do not leak beyond the specific consented use case. This can include using privacy-enhancing cryptography, privacy-enhancing technologies, fine-grained permissions, and access control mechanisms.\\n\\n9. **Consent and Privacy by Design**: Seek user permission and respect user decisions regarding data collection, use, access, transfer, and deletion to the greatest extent possible. Implement privacy by design safeguards where consent is not feasible, ensuring that systems do not employ user experience and design decisions that obfuscate user choice or burden users with privacy-invasive defaults.\\n\\n10. **Enhanced Protections for Sensitive Data**: Implement enhanced protections and restrictions for data and inferences related to sensitive domains such as health, work, education, criminal justice, and finance. Ensure that data pertaining to youth is protected, and any use in sensitive domains is subject to ethical review and use prohibitions.\\n\\n11. **Surveillance and Monitoring**: Ensure that surveillance technologies are subject to heightened oversight, including pre-deployment assessment of potential harms and scope limits to protect privacy and civil liberties. 
Avoid continuous surveillance and monitoring in contexts where it could limit rights, opportunities, or access.\\n\\nBy adopting these measures, companies can better ensure that their AI systems comply with data privacy laws and protect the privacy of individuals.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 627, 'prompt_tokens': 2398, 'total_tokens': 3025, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3537616b13', 'finish_reason': 'stop', 'logprobs': None}, id='run-8739df9f-0dc0-4aea-a089-5fa12ac6189e-0', usage_metadata={'input_tokens': 2398, 'output_tokens': 627, 'total_tokens': 3025}),\n", " 'context': [Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 29, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '1ba549ad3b9b488ab15014c90b909ad1', '_collection_name': 'ai-safety'}, page_content='Conduct periodic monitoring of AI-generated content for privacy risks; address any possible instances of PII or sensitive data exposure. \\n\\nConnect new GAI policies, procedures, and processes to existing model, data, software development, and IT governance and to legal, compliance, and risk management activities. \\n\\nDocument training data curation policies, to the extent possible and according to applicable laws and policies. \\n\\nEstablish policies for collection, retention, and minimum quality of data, in consideration of the following risks: Disclosure of inappropriate CBRN information; Use of Illegal or dangerous content; Offensive cyber capabilities; Training data imbalances that could give rise to harmful biases; Leak of personally identifiable information, including facial likenesses of individuals.'),\n", " Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 32, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': \"D:20220920133035-04'00'\", 'modDate': \"D:20221003104118-04'00'\", 'trapped': '', '_id': '04acfe508f5c428caeeb90944c1d7386', '_collection_name': 'ai-safety'}, page_content='Data should \\nonly be collected or used for the purposes of training or testing machine learning models if such collection and \\nuse is legal and consistent with the expectations of the people whose data is collected. User experience \\nresearch should be conducted to confirm that people understand what data is being collected about them and \\nhow it will be used, and that this collection matches their expectations and desires. Data collection and use-case scope limits. 
Data collection should be limited in scope, with specific, \\nnarrow identified goals, to avoid \"mission creep.\" Anticipated data collection should be determined to be \\nstrictly necessary to the identified goals and should be minimized as much as possible. Data collected based on \\nthese identified goals and for a specific context should not be used in a different context without assessing for \\nnew privacy risks and implementing appropriate mitigation measures, which may include express consent. Clear timelines for data retention should be established, with data deleted as soon as possible in accordance \\nwith legal or policy-based limitations. Determined data retention timelines should be documented and justi\\xad\\nfied. Risk identification and mitigation. Entities that collect, use, share, or store sensitive data should \\nattempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropri\\xad\\nately to identified risks. Appropriate responses include determining not to process data when the privacy risks \\noutweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not \\ninclude sharing or transferring the privacy risks to users via notice or consent requests where users could not \\nreasonably be expected to understand the risks without further support. Privacy-preserving security. Entities creating, using, or governing automated systems should follow \\nprivacy and security best practices designed to ensure data and metadata do not leak beyond the specific \\nconsented use case. Best practices could include using privacy-enhancing cryptography or other types of \\nprivacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with \\nconventional system security protocols.'),\n", " Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 30, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '7e66f3997e2243ad97a378ad1f2d9cd0', '_collection_name': 'ai-safety'}, page_content='Conduct appropriate diligence on training data use to assess intellectual property, \\nand privacy, risks, including to examine whether use of proprietary or sensitive \\ntraining data is consistent with applicable laws. 
Intellectual Property; Data Privacy \\nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities'),\n", " Document(metadata={'source': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'file_path': 'https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf', 'page': 38, 'total_pages': 64, 'format': 'PDF 1.6', 'title': 'Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile', 'author': 'National Institute of Standards and Technology', 'subject': '', 'keywords': '', 'creator': 'Acrobat PDFMaker 24 for Word', 'producer': 'Adobe PDF Library 24.2.159', 'creationDate': \"D:20240805141702-04'00'\", 'modDate': \"D:20240805143048-04'00'\", 'trapped': '', '_id': '907f59c504d44945ae865021b6e9c713', '_collection_name': 'ai-safety'}, page_content='>>>\\nMEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented. Action ID \\nSuggested Action \\nGAI Risks \\nMS-2.10-001 \\nConduct AI red-teaming to assess issues such as: Outputting of training data \\nsamples, and subsequent reverse engineering, model extraction, and \\nmembership inference risks; Revealing biometric, confidential, copyrighted, \\nlicensed, patented, personal, proprietary, sensitive, or trade-marked information; \\nTracking or revealing location information of users or members of training \\ndatasets. Human-AI Configuration; \\nInformation Integrity; Intellectual \\nProperty \\nMS-2.10-002 \\nEngage directly with end-users and other stakeholders to understand their \\nexpectations and concerns regarding content provenance.\\n>>>'),\n", " Document(metadata={'source': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'file_path': 'https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf', 'page': 5, 'total_pages': 73, 'format': 'PDF 1.6', 'title': 'Blueprint for an AI Bill of Rights', 'author': '', 'subject': '', 'keywords': '', 'creator': 'Adobe Illustrator 26.3 (Macintosh)', 'producer': 'iLovePDF', 'creationDate': \"D:20220920133035-04'00'\", 'modDate': \"D:20221003104118-04'00'\", 'trapped': '', '_id': '498d23d9347f49ac9c61216ea042aefd', '_collection_name': 'ai-safety'}, page_content='SECTION TITLE\\nDATA PRIVACY\\nYou should be protected from abusive data practices via built-in protections and you \\nshould have agency over how data about you is used. You should be protected from violations of \\nprivacy through design choices that ensure such protections are included by default, including ensuring that \\ndata collection conforms to reasonable expectations and that only data strictly necessary for the specific \\ncontext is collected. Designers, developers, and deployers of automated systems should seek your permission \\nand respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate \\nways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be \\nused. Systems should not employ user experience and design decisions that obfuscate user choice or burden \\nusers with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases \\nwhere it can be appropriately and meaningfully given. 
Any consent requests should be brief, be understandable \\nin plain language, and give you agency over data collection and the specific context of use; current hard-to\\xad\\nunderstand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and \\nrestrictions for data and inferences related to sensitive domains, including health, work, education, criminal \\njustice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and \\nrelated inferences should only be used for necessary functions, and you should be protected by ethical review \\nand use prohibitions. You and your communities should be free from unchecked surveillance; surveillance \\ntechnologies should be subject to heightened oversight that includes at least pre-deployment assessment of their \\npotential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring \\nshould not be used in education, work, housing, or in other contexts where the use of such surveillance \\ntechnologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to \\nreporting that confirms your data decisions have been respected and provides an assessment of the \\npotential impact of surveillance technologies on your rights, opportunities, or access.')]}" ] }, "execution_count": 136, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result" ] },
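{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Wrap-up: package the chain call so the answer comes back together with\n", "# the source pages it was grounded in. This is a sketch -- the helper\n", "# `answer_with_sources` is our own, not a LangChain API; it relies only on\n", "# the 'response' and 'context' keys the chain above already returns.\n", "def answer_with_sources(question):\n", "    result = retrieval_augmented_qa_chain.invoke({\"question\": question})\n", "    pages = sorted({(doc.metadata.get(\"title\", \"untitled\"), doc.metadata.get(\"page\")) for doc in result[\"context\"]}, key=str)\n", "    citations = \"\\n\".join(f\"- {title} (page {page})\" for title, page in pages)\n", "    return result[\"response\"].content + \"\\n\\nSources:\\n\" + citations\n", "\n", "# Example (uncomment to run):\n", "# print(answer_with_sources(\"How can companies ensure AI does not violate data privacy laws?\"))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 2 }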