rray": [[1,2,3,4]] } }'
Using a Service Connector
To set up the Seldon Core Model Deployer to authenticate to a remote Kubernetes cluster, it is recommended to leverage the features provided by Service Connectors, such as auto-configuration, local client login, best security practices around long-lived credentials, fine-grained access control, and reuse of the same credentials across multiple stack components.
Depending on where your target Kubernetes cluster is running, you can use one of the following Service Connectors:
the AWS Service Connector, if you are using an AWS EKS cluster.
the GCP Service Connector, if you are using a GKE cluster.
the Azure Service Connector, if you are using an AKS cluster.
the generic Kubernetes Service Connector for any other Kubernetes cluster.
If you don't already have a Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a Service Connector that can be used to access more than one Kubernetes cluster or even more than one type of cloud resource:
zenml service-connector register -i
A non-interactive CLI example that leverages the AWS CLI configuration on your local machine to auto-configure an AWS Service Connector targeting a single EKS cluster is:
zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type kubernetes-cluster --resource-name <EKS_CLUSTER_NAME> --auto-configure
Example Command Output
$ zenml service-connector register eks-zenhacks --type aws --resource-type kubernetes-cluster --resource-id zenhacks-cluster --auto-configure
Successfully registered service connector `eks-zenhacks` with access to the following resources:

| RESOURCE TYPE | RESOURCE NAMES |
|---|---|
| 🌀 kubernetes-cluster | zenhacks-cluster |

Source: https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
| UPDATED_AT | 2023-06-16 10:15:26.393772 |

Configuration

| PROPERTY | VALUE |
|---|---|
| region | us-east-1 |
| aws_access_key_id | [HIDDEN] |
| aws_secret_access_key | [HIDDEN] |
Next, we show the temporary credentials that are issued to clients. Note the expiration time on the Kubernetes API token:
zenml service-connector describe eks-zenhacks-cluster --client
Example Command Output
Service connector 'eks-zenhacks-cluster (kubernetes-cluster | zenhacks-cluster client)' of type 'kubernetes' with id 'be53166a-b39c-4e39-8e31-84658e50eec4' is owned by user 'default' and is 'private'.
'eks-zenhacks-cluster (kubernetes-cluster | zenhacks-cluster client)' kubernetes Service Connector Details
| PROPERTY | VALUE |
|---|---|
| ID | be53166a-b39c-4e39-8e31-84658e50eec4 |
| NAME | eks-zenhacks-cluster (kubernetes-cluster \| zenhacks-cluster client) |
| TYPE | 🌀 kubernetes |
| AUTH METHOD | token |

Source: https://docs.zenml.io/how-to/auth-management/best-security-practices
zenml service-connector describe aws-federation-token
Example Command Output
Service connector 'aws-federation-token' of type 'aws' with id '868b17d4-b950-4d89-a6c4-12e520e66610' is owned by user 'default' and is 'private'.
'aws-federation-token' aws Service Connector Details
| PROPERTY | VALUE |
|---|---|
| ID | e28c403e-8503-4cce-9226-8a7cd7934763 |
| NAME | aws-federation-token |
| TYPE | 🔶 aws |
| AUTH METHOD | federation-token |
| RESOURCE TYPES | 🔶 aws-generic, 📦 s3-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry |
| RESOURCE NAME | <multiple> |
| SECRET ID | 958b840d-2a27-4f6b-808b-c94830babd99 |
| SESSION DURATION | 43200s |

Source: https://docs.zenml.io/how-to/auth-management/aws-service-connector
Associate a pipeline with a Model
The most common use-case for a Model is to associate it with a pipeline.
from zenml import pipeline
from zenml.model.model import Model
@pipeline(
    model=Model(
        name="ClassificationModel",  # Give your models unique names
        tags=["MVP", "Tabular"],  # Use tags for future filtering
    )
)
def my_pipeline():
    ...
This will associate this pipeline with the model specified. In case the model already exists, this will create a new version of that model.
In case you want to attach the pipeline to an existing model version, specify this as well.
from zenml import pipeline
from zenml.model.model import Model
from zenml.enums import ModelStages
@pipeline(
    model=Model(
        name="ClassificationModel",  # Give your models unique names
        tags=["MVP", "Tabular"],  # Use tags for future filtering
        version=ModelStages.LATEST,  # Alternatively use a stage: STAGING, PRODUCTION
    )
)
def my_pipeline():
    ...
Feel free to also move the Model configuration into your configuration files:
...
model:
name: text_classifier
description: A breast cancer classifier
tags: ["classifier","sgd"]
...
Source: https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/associate-a-pipeline-with-a-model
Disable colorful logging
How to disable colorful logging in ZenML.
By default, ZenML uses colorful logging to make it easier to read logs. However, if you wish to disable this feature, you can do so by setting the following environment variable:
ZENML_LOGGING_COLORS_DISABLED=true
Note that setting this on the client environment (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote pipeline runs. If you wish to only disable it locally, but turn on for remote pipeline runs, you can set the ZENML_LOGGING_COLORS_DISABLED environment variable in your pipeline runs environment as follows:
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"})

# Either add it to the decorator
@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    my_step()

# Or configure the pipeline's options
my_pipeline = my_pipeline.with_options(
    settings={"docker": docker_settings}
)
Source: https://docs.zenml.io/how-to/control-logging/disable-colorful-logging
To register an S3 Artifact Store and add it to your stack, first install the S3 ZenML integration:
zenml integration install s3 -y
The only configuration parameter mandatory for registering an S3 Artifact Store is the root path URI, which needs to point to an S3 bucket and take the form s3://bucket-name. Please read the documentation relevant to the S3 service that you are using on how to create an S3 bucket. For example, the AWS S3 documentation is available here.
With the URI to your S3 bucket known, registering an S3 Artifact Store and using it in a stack can be done as follows:
# Register the S3 artifact-store
zenml artifact-store register s3_store -f s3 --path=s3://bucket-name
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a s3_store ... --set
Depending on your use case, however, you may also need to provide additional configuration parameters pertaining to authentication or pass advanced configuration parameters to match your S3-compatible service or deployment scenario.
Infrastructure Deployment
An S3 Artifact Store can be deployed directly from the ZenML CLI:
zenml artifact-store deploy s3-artifact-store --flavor=s3 --provider=aws ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
Authentication Methods
Integrating and using an S3-compatible Artifact Store in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Implicit Authentication method. However, the recommended way to authenticate to the AWS cloud platform is through an AWS Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the S3 Artifact Store with other remote stack components also running in AWS.

Source: https://docs.zenml.io/v/docs/stack-components/artifact-stores/s3
zenml code-repository register <NAME> --type=gitlab \
    --url=<GITLAB_URL> --group=<GROUP> --project=<PROJECT> \
    --token=<GITLAB_TOKEN>

where <NAME> is the name of the code repository you are registering, <GROUP> is the group of the project, <PROJECT> is the name of the project, <GITLAB_TOKEN> is your GitLab Personal Access Token, and <GITLAB_URL> is the URL of the GitLab instance, which defaults to https://gitlab.com. You will need to set a URL if you have a self-hosted GitLab instance.
After registering the GitLab code repository, ZenML will automatically detect if your source files are being tracked by GitLab and store the commit hash for each pipeline run.
To create a GitLab Personal Access Token:
Go to your GitLab account settings and click on Access Tokens.
Name the token and select the scopes that you need (e.g. read_repository, read_user, read_api).
Click on "Create personal access token" and copy the token to a safe place.
Developing a custom code repository
If you're using some other platform to store your code, and you still want to use a code repository in ZenML, you can implement and register a custom code repository.
First, you'll need to subclass and implement the abstract methods of the zenml.code_repositories.BaseCodeRepository class:
from abc import ABC, abstractmethod
from typing import Optional

from zenml.code_repositories import LocalRepositoryContext


class BaseCodeRepository(ABC):
    """Base class for code repositories."""

    @abstractmethod
    def login(self) -> None:
        """Logs into the code repository."""

    @abstractmethod
    def download_files(
        self, commit: str, directory: str, repo_sub_directory: Optional[str]
    ) -> None:
        """Downloads files from the code repository to a local directory.

        Args:
            commit: The commit hash to download files from.
            directory: The directory to download files to.
            repo_sub_directory: The subdirectory in the repository to
                download files from.
        """

    @abstractmethod
    def get_local_context(
        self, path: str
    ) -> Optional["LocalRepositoryContext"]:
        """Gets a local repository context from a path.

        Args:
            path: The path to the local repository.

        Returns:
            The local repository context object.
        """
After you're finished implementing this, you can register it as follows:

Source: https://docs.zenml.io/v/docs/how-to/setting-up-a-project-repository/connect-your-git-repository
asked to access:
from zenml.client import Client

client = Client()

# Get a Service Connector client for a particular S3 bucket
connector_client = client.get_service_connector_client(
    name_id_or_prefix="aws-federation-multi",
    resource_type="s3-bucket",
    resource_id="s3://zenfiles",
)

# Get the S3 boto3 python client pre-configured and pre-authenticated
# from the Service Connector client
s3_client = connector_client.connect()
# Verify access to the chosen S3 bucket using the temporary token that
# was issued to the client.
s3_client.head_bucket(Bucket="zenfiles")
# Try to access another S3 bucket that the original AWS long-lived credentials can access.
# An error will be thrown indicating that the bucket is not accessible.
s3_client.head_bucket(Bucket="zenml-demos")
Example Output
>>> from zenml.client import Client
>>>
>>> client = Client()
Unable to find ZenML repository in your current working directory (/home/stefan/aspyre/src/zenml) or any parent directories. If you want to use an existing repository which is in a different location, set the environment variable 'ZENML_REPOSITORY_PATH'. If you want to create a new repository, run zenml init.
Running without an active repository root.
>>>
>>> # Get a Service Connector client for a particular S3 bucket
>>> connector_client = client.get_service_connector_client(
... name_id_or_prefix="aws-federation-multi",
... resource_type="s3-bucket",
... resource_id="s3://zenfiles"
... )
>>>
>>> # Get the S3 boto3 python client pre-configured and pre-authenticated
>>> # from the Service Connector client
>>> s3_client = connector_client.connect()
>>>
>>> # Verify access to the chosen S3 bucket using the temporary token that
>>> # was issued to the client.
>>> s3_client.head_bucket(Bucket="zenfiles")

Source: https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices
Storing embeddings in a vector database
Store embeddings in a vector database for efficient retrieval.
The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query.
For the purposes of this guide, we'll use PostgreSQL as our vector database. This is a popular choice for storing embeddings, as it provides a scalable and efficient way to store and retrieve high-dimensional vectors. However, you can use any vector database that supports high-dimensional vectors. If you want to explore a list of possible options, this is a good website to compare different options.
For more information on how to set up a PostgreSQL database to follow along with this guide, please see the instructions in the repository which show how to set up a PostgreSQL database using Supabase.
Since PostgreSQL is a well-known and battle-tested database, we can use known and minimal packages to connect and to interact with it. We can use the psycopg2 package to connect and then raw SQL statements to interact with the database.
The code for the step is fairly simple:
from typing import List

from pgvector.psycopg2 import register_vector
from zenml import step


@step
def index_generator(
    documents: List[Document],
) -> None:
    try:
        conn = get_db_conn()

        with conn.cursor() as cur:
            # Install pgvector if not already installed
            cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
            conn.commit()

            # Create the embeddings table if it doesn't exist
            table_create_command = f"""
            CREATE TABLE IF NOT EXISTS embeddings (
                id SERIAL PRIMARY KEY,
                content TEXT,
                token_count INTEGER,
                embedding VECTOR({EMBEDDING_DIMENSIONALITY}),
                filename TEXT,
                parent_section TEXT,
                url TEXT
            );
            """
            cur.execute(table_create_command)
            conn.commit()

            register_vector(conn)

Source: https://docs.zenml.io/v/docs/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database
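Once the embeddings are stored, retrieving the most relevant chunks is a single SQL query away. The sketch below assumes the `embeddings` table created above and a `psycopg2` connection like the one returned by `get_db_conn()`; it uses pgvector's `<=>` operator, which computes cosine distance:

```python
def build_similarity_query(table: str = "embeddings", limit: int = 5) -> str:
    # pgvector's `<=>` operator returns the cosine distance between two
    # vectors, so ordering by it ascending returns the closest chunks first.
    return (
        f"SELECT content, url, embedding <=> %s::vector AS distance "
        f"FROM {table} ORDER BY distance ASC LIMIT {limit}"
    )


def get_similar_docs(conn, query_embedding, limit: int = 5):
    """Return the `limit` stored chunks most similar to `query_embedding`."""
    with conn.cursor() as cur:
        cur.execute(build_similarity_query(limit=limit), (query_embedding,))
        return cur.fetchall()
```

The query embedding must be produced with the same embedding model used at indexing time, otherwise the distances are meaningless.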
Controlling Model versions
Each model can have many versions. Model versions are a way for you to track different iterations of your training process, complete with some extra dashboard and API functionality to support the full ML lifecycle.
For example, based on your business rules during training, you can associate model versions with stages and promote them to production. You have an interface that allows you to link these versions with non-technical artifacts and data, e.g. business data, datasets, or even stages in your process and workflow.
Model versions are created implicitly as you are running your machine learning training, so you don't have to immediately think about this. If you want more control over versions, our API has you covered, with an option to explicitly name your versions.
Explicitly name your model version
If you want to explicitly name your model version, you can do so by passing in the version argument to the Model object. If you don't do this, ZenML will automatically generate a version number for you.
from zenml import Model, step, pipeline

model = Model(
    name="my_model",
    version="1.0.5",
)

# The step configuration will take precedence over the pipeline
@step(model=model)
def svc_trainer(...) -> ...:
    ...

# This configures it for all steps within the pipeline
@pipeline(model=model)
def training_pipeline(...):
    # training happens here
    ...
Here we are specifically setting the model configuration for a particular step or for the pipeline as a whole.
Please note that in the above example, if the model version exists, it is automatically associated with the pipeline and becomes active in the pipeline context. Therefore, be careful and intentional about whether you want to create a new model version or fetch an existing one. See below for an example of fetching a model from an existing version/stage.
Fetching model versions by stage

Source: https://docs.zenml.io/how-to/use-the-model-control-plane/model-versions
the "Federation Token" and "IAM Role" authentication methods. It's not easy to showcase this without using some ZenML Python Client code, but here is an example that proves that the AWS client token issued to an S3 client can only access the S3 bucket resource it was issued for, even if the originating AWS Service Connector is able to access multiple S3 buckets with the corresponding long-lived credentials:
zenml service-connector register aws-federation-multi --type aws --auth-method=federation-token --auto-configure
Example Command Output
Successfully registered service connector `aws-federation-multi` with access to the following resources:
| RESOURCE TYPE | RESOURCE NAMES |
|---|---|
| 🔶 aws-generic | us-east-1 |
| 📦 s3-bucket | s3://aws-ia-mwaa-715803424590, s3://zenfiles, s3://zenml-demos, s3://zenml-generative-chat, s3://zenml-public-datasets, s3://zenml-public-swagger-spec |
| 🌀 kubernetes-cluster | zenhacks-cluster |
| 🐳 docker-registry | 715803424590.dkr.ecr.us-east-1.amazonaws.com |
The next part involves running some ZenML Python code to showcase that the downscoped credentials issued to a client are indeed restricted to the S3 bucket that the client asked to access:
from zenml.client import Client

Source: https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices
from typing import Dict, cast

from zenml.entrypoints import StepEntrypointConfiguration
from zenml.models import PipelineDeploymentResponseModel
from zenml.orchestrators import ContainerizedOrchestrator
from zenml.stack import Stack


class MyOrchestrator(ContainerizedOrchestrator):

    def get_orchestrator_run_id(self) -> str:
        # Return an ID that is different each time a pipeline is run, but the
        # same for all steps being executed as part of the same pipeline run.
        # If you're using some external orchestration tool like Kubeflow, you
        # can usually use the run ID of that tool here.
        ...

    def prepare_or_run_pipeline(
        self,
        deployment: "PipelineDeploymentResponseModel",
        stack: "Stack",
        environment: Dict[str, str],
    ) -> None:
        # If your orchestrator supports scheduling, you should handle the
        # schedule configured by the user. Otherwise you might raise an
        # exception or log a warning that the orchestrator doesn't support
        # scheduling.
        if deployment.schedule:
            ...

        for step_name, step in deployment.step_configurations.items():
            image = self.get_image(deployment=deployment, step_name=step_name)
            command = StepEntrypointConfiguration.get_entrypoint_command()
            arguments = StepEntrypointConfiguration.get_entrypoint_arguments(
                step_name=step_name, deployment_id=deployment.id
            )

            # Your orchestration tool should run this command and arguments
            # in the Docker image fetched above. Additionally, the container
            # which is running the command must contain the environment
            # variables specified in the `environment` dictionary.

            # If your orchestrator supports parallel execution of steps, make
            # sure each step only runs after all its upstream steps finished.
            upstream_steps = step.spec.upstream_steps

            # You can get the settings for your orchestrator like so.
            # The settings are the "dynamic" part of your orchestrator's
            # config, optionally defined when you register your orchestrator
            # but can be overridden at runtime.
            # In contrast, the "static" part of your orchestrator's config is
            # always defined when you register the orchestrator and can be
            # accessed via `self.config`.
            step_settings = cast(

Source: https://docs.zenml.io/v/docs/stack-components/orchestrators/custom
Prerequisites
The GCP Service Connector is part of the GCP ZenML integration. You can either install the entire integration or use a PyPI extra to install it independently of the integration:
pip install "zenml[connectors-gcp]" installs only prerequisites for the GCP Service Connector Type
zenml integration install gcp installs the entire GCP ZenML integration
It is not required to install and set up the GCP CLI on your local machine to use the GCP Service Connector to link Stack Components to GCP resources and services. However, it is recommended to do so if you are looking for a quick setup that includes using the auto-configuration Service Connector features.
The auto-configuration examples in this page rely on the GCP CLI being installed and already configured with valid credentials of one type or another. If you want to avoid installing the GCP CLI, we recommend using the interactive mode of the ZenML CLI to register Service Connectors:
zenml service-connector register -i --type gcp
Resource Types
Generic GCP resource
This resource type allows Stack Components to use the GCP Service Connector to connect to any GCP service or resource. When used by Stack Components, they are provided a Python google-auth credentials object populated with a GCP OAuth 2.0 token. This credentials object can then be used to create GCP Python clients for any particular GCP service.

Source: https://docs.zenml.io/how-to/auth-management/gcp-service-connector
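As an illustration of that last point, here is a hedged sketch of turning the credentials object into a GCS client. The connector name and project are hypothetical, and the pattern mirrors the Service Connector client usage shown elsewhere in these docs:

```python
def make_gcs_client(connector_name: str, project: str):
    """Build a GCS client from a GCP Service Connector (illustrative sketch)."""
    # Imports are kept local so the sketch stays self-contained.
    from google.cloud import storage
    from zenml.client import Client

    connector_client = Client().get_service_connector_client(
        name_id_or_prefix=connector_name,
        resource_type="gcp-generic",
    )
    # For the generic GCP resource type, connect() yields a google-auth
    # credentials object populated with a GCP OAuth 2.0 token.
    credentials = connector_client.connect()
    return storage.Client(project=project, credentials=credentials)
```

The same credentials object could just as well be handed to any other GCP Python client (BigQuery, Pub/Sub, etc.).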
from zenml import step, Model, link_artifact_to_model, save_artifact
from zenml.client import Client


@step
def f_() -> None:
    # produce new artifact
    new_artifact = save_artifact(data="Hello, World!", name="manual_artifact")
    # and link it inside a step
    link_artifact_to_model(
        artifact_version_id=new_artifact.id,
        model=Model(name="MyModel", version="0.0.42"),
    )


# use existing artifact
existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_artifact")
# and link it even outside a step
link_artifact_to_model(
    artifact_version_id=existing_artifact.id,
    model=Model(name="MyModel", version="0.2.42"),
)
Source: https://docs.zenml.io/how-to/use-the-model-control-plane/linking-model-binaries-data-to-models
ZenML, namely an orchestrator and an artifact store. Keep in mind that each one of these components is built on top of base abstractions and is completely extensible.
Orchestrator
An Orchestrator is a workhorse that coordinates all the steps to run in a pipeline. Since pipelines can be set up with complex combinations of steps with various asynchronous dependencies between them, the orchestrator acts as the component that decides what steps to run and when to run them.
ZenML comes with a default local orchestrator designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to rent a cloud instance just to try out basic things.
Artifact Store
An Artifact Store is a component that houses all data that pass through the pipeline as inputs and outputs. Each artifact that gets stored in the artifact store is tracked and versioned and this allows for extremely useful features like data caching which speeds up your workflows.
Similar to the orchestrator, ZenML comes with a default local artifact store designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to set up a cloud storage system to try out basic things.
Flavor
ZenML provides a dedicated base abstraction for each stack component type. These abstractions are used to develop solutions, called Flavors, tailored to specific use cases/tools. With ZenML installed, you get access to a variety of built-in and integrated Flavors for each component type, but users can also leverage the base abstractions to create their own custom flavors.
Stack Switching
When it comes to production-grade solutions, it is rarely enough to just run your workflow locally without including any cloud infrastructure.

Source: https://docs.zenml.io/getting-started/core-concepts
Configuration

| PROPERTY | VALUE |
|---|---|
| server | https://A5F8F4142FB12DDCDE9F21F6E9B07A18.gr7.us-east-1.eks.amazonaws.com |
| insecure | False |
| cluster_name | arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster |
| token | [HIDDEN] |
| certificate_authority | [HIDDEN] |
Issuing downscoped credentials: in addition to the above, some authentication methods also support restricting the generated temporary API tokens to the minimum set of permissions required to access the target resource or set of resources. This is currently available for the AWS Service Connector's "Federation Token" and "IAM Role" authentication methods.

Source: https://docs.zenml.io/how-to/auth-management/best-security-practices
| PROPERTY | VALUE |
|---|---|
| ID | 9a810521-ef41-4e45-bb48-8569c5943dc6 |
| NAME | aws-implicit (s3-bucket \| s3://sagemaker-studio-d8a14tvjsmb client) |
| TYPE | 🔶 aws |
| AUTH METHOD | secret-key |
| RESOURCE TYPES | 📦 s3-bucket |
| RESOURCE NAME | s3://sagemaker-studio-d8a14tvjsmb |
| SECRET ID | |
| SESSION DURATION | N/A |
| EXPIRES IN | N/A |
| OWNER | default |
| WORKSPACE | default |

Source: https://docs.zenml.io/how-to/auth-management/aws-service-connector
Check out this 3-minute video for more information. You can keep changing the Config and Settings of your flavor after registration. ZenML will pick up these "live" changes when running pipelines.
Note that changing the config in a breaking way requires an update of the component (not the flavor). For example, adding a mandatory field to flavor X will break a registered component of that flavor. This may lead to a completely broken state where one should delete the component and re-register it.
Always test your flavor thoroughly before using it in production. Make sure it works as expected and handles errors gracefully.
Keep your flavor code clean and well-documented. This will make it easier for others to use and contribute to your flavor.
Follow best practices for the language and libraries you're using. This will help ensure your flavor is efficient, reliable, and easy to maintain.
We recommend you develop new flavors by using existing flavors as a reference. A good starting point is the flavors defined in the official ZenML integrations.
Extending Specific Stack Components
If you would like to learn more about how to build a custom stack component flavor for a specific stack component type, check out the links below:
| Type of Stack Component | Description |
|---|---|
| Orchestrator | Orchestrating the runs of your pipeline |
| Artifact Store | Storage for the artifacts created by your pipelines |
| Container Registry | Store for your containers |
| Step Operator | Execution of individual steps in specialized runtime environments |
| Model Deployer | Services/platforms responsible for online model serving |
| Feature Store | Management of your data/features |
| Experiment Tracker | Tracking your ML experiments |
| Alerter | Sending alerts through specified channels |
| Annotator | Annotating and labeling data |
| Data Validator | Validating and monitoring your data |
Source: https://docs.zenml.io/how-to/stack-deployment/implement-a-custom-stack-component
Authentication Methods
Implicit authentication
Implicit authentication to AWS services using environment variables, local configuration files or IAM roles.
This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
This authentication method doesn't require any credentials to be explicitly configured. It automatically discovers and uses credentials from one of the following sources:
environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_DEFAULT_REGION)
local configuration files set up through the AWS CLI (~/.aws/credentials, ~/.aws/config)
IAM roles for Amazon EC2, ECS, EKS, Lambda, etc. Only works when running the ZenML server on an AWS resource with an IAM role attached to it.
This is the quickest and easiest way to authenticate to AWS services. However, the results depend on how ZenML is deployed and the environment where it is used, and are thus not fully reproducible:
when used with the default local ZenML deployment or a local ZenML server, the credentials are the same as those used by the AWS CLI or extracted from local environment variables
when connected to a ZenML server, this method only works if the ZenML server is deployed in AWS and will use the IAM role attached to the AWS resource where the ZenML server is running (e.g. an EKS cluster). The IAM role permissions may need to be adjusted to allow listing and accessing/describing the AWS resources that the connector is configured to access. | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 356 |
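The lookup order above can be sketched in plain Python. This is a simplified illustration only — the real resolution is performed by the AWS SDK, and the function name and return values here are hypothetical:

```python
import os

def implicit_credential_source(env=None, home=None):
    """Illustrative sketch of the implicit credential lookup order
    (simplified; the actual chain is implemented by the AWS SDK)."""
    env = os.environ if env is None else env
    home = os.path.expanduser("~") if home is None else home
    # 1. Environment variables take precedence.
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "environment"
    # 2. Local AWS CLI configuration files.
    for candidate in (".aws/credentials", ".aws/config"):
        if os.path.exists(os.path.join(home, candidate)):
            return "config-file"
    # 3. Fall back to the IAM role attached to the AWS resource
    #    (EC2, ECS, EKS, Lambda, ...), resolved via the metadata endpoint.
    return "iam-role"
```

This also makes the reproducibility caveat concrete: the same connector configuration resolves to different credentials depending on the environment it runs in.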
HyperAI Orchestrator
Orchestrating your pipelines to run on HyperAI.ai instances.
HyperAI is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an orchestrator flavor that allows you to easily deploy your pipelines on HyperAI instances.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the HyperAI orchestrator if:
you're looking for a managed solution for running your pipelines.
you're a HyperAI customer.
Prerequisites
You will need to do the following to start using the HyperAI orchestrator:
Have a running HyperAI instance. It must be accessible from the internet (or at least from the IP addresses of your ZenML users) and allow SSH key based access (passwords are not supported).
Ensure that a recent version of Docker is installed. This version must include Docker Compose, meaning that the command docker compose works.
Ensure that the appropriate NVIDIA Driver is installed on the HyperAI instance (if not already installed by the HyperAI team).
Ensure that the NVIDIA Container Toolkit is installed and configured on the HyperAI instance.
Note that it is possible to omit installing the NVIDIA Driver and NVIDIA Container Toolkit. However, you will then be unable to use the GPU from within your ZenML pipeline. Additionally, you will then need to disable GPU access within the container when configuring the Orchestrator component, or the pipeline will not start correctly.
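A quick way to sanity-check these prerequisites on an instance is to look for the relevant executables on the PATH. This is a hypothetical helper, not part of ZenML or HyperAI tooling:

```python
import shutil

def hyperai_prereq_report():
    """Best-effort local check for the tooling listed above (a sketch)."""
    return {
        # Recent Docker installs bundle the `docker compose` subcommand.
        "docker": shutil.which("docker") is not None,
        # The NVIDIA Driver exposes nvidia-smi ...
        "nvidia_driver": shutil.which("nvidia-smi") is not None,
        # ... and the NVIDIA Container Toolkit ships nvidia-ctk.
        "nvidia_container_toolkit": shutil.which("nvidia-ctk") is not None,
    }
```

Run it on the HyperAI instance itself; `False` entries point at the missing prerequisite.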
How it works | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/hyperai | 318 |
:
zenml model-registry flavor list
How to use it

Model registries are an optional component in the ZenML stack that is tied to the experiment tracker. This means that a model registry can only be used if you are also using an experiment tracker. If you're not using an experiment tracker, you can still store your models in ZenML, but you will need to manually retrieve model artifacts from the artifact store. More information on this can be found in the documentation on fetching pipeline runs.
To use model registries, you first need to register a model registry in your stack with the same flavor as your experiment tracker. Then, you can register your trained model in the model registry using one of three methods:
(1) using the built-in step in the pipeline.
(2) using the ZenML CLI to register the model from the command line.
(3) registering the model from the model registry UI. Finally, you can use the model registry to retrieve and load your models for deployment or further experimentation.
PreviousDevelop a Custom Annotator
NextMLflow Model Registry
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/model-registries | 225 |
bute: Optional[str]
module: str
type: SourceType

When you want to configure your pipeline with a certain stack in mind, you can do so as well:
...write_run_configuration_template(stack=<Insert_stack_here>)
PreviousFind out which configuration was used for a run
NextCustomize Docker builds
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/use-configuration-files/autogenerate-a-template-yaml-file | 71 |
nment or the Stack Component.
End-to-end examples

This is an example of an end-to-end workflow in which a single multi-type AWS Service Connector gives multiple Stack Components access to multiple resources. A complete ZenML Stack is registered and composed of the following Stack Components, all connected through the same Service Connector:
a Kubernetes Orchestrator connected to an EKS Kubernetes cluster
an S3 Artifact Store connected to an S3 bucket
an ECR Container Registry stack component connected to an ECR container registry
a local Image Builder
As a last step, a simple pipeline is run on the resulting Stack.
Configure the local AWS CLI with valid IAM user account credentials with a wide range of permissions (i.e. by running aws configure) and install ZenML integration prerequisites:

zenml integration install -y aws s3
aws configure --profile connectors
Example Command Output
```text
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
```
Make sure the AWS Service Connector Type is available:

zenml service-connector list-types --type aws
Example Command Output
```text
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━┓
┃ NAME                  ┃ TYPE   ┃ RESOURCE TYPES        ┃ AUTH METHODS ┃ LOCAL ┃ REMOTE ┃
┠───────────────────────╂────────╂───────────────────────╂──────────────╂───────╂────────┨
┃ AWS Service Connector ┃ 🔶 aws ┃ 🔶 aws-generic        ┃ implicit     ┃ ✅    ┃ ✅     ┃
┃                       ┃        ┃ 📦 s3-bucket          ┃ secret-key   ┃       ┃        ┃
┃                       ┃        ┃ 🌀 kubernetes-cluster ┃ sts-token    ┃       ┃        ┃
┃                       ┃        ┃ 🐳 docker-registry    ┃ iam-role     ┃       ┃        ┃
ipelines/8267b0bc-9cbd-42ac-9b56-4d18275bdbb4/runs

This example is just a simple demonstration of how to use Service Connectors to connect ZenML Stack Components to your infrastructure. The range of features and possibilities is much larger. ZenML ships with built-in Service Connectors able to connect and authenticate to AWS, GCP, and Azure and offers many different authentication methods and security best practices. Follow the resources below for more information.
🪄 The complete guide to Service Connectors: Everything you need to know to unlock the power of Service Connectors in your project.
🔐 Security Best Practices: Best practices concerning the various authentication methods implemented by Service Connectors.
🐳 Docker Service Connector: Use the Docker Service Connector to connect ZenML to a generic Docker container registry.
🌀 Kubernetes Service Connector: Use the Kubernetes Service Connector to connect ZenML to a generic Kubernetes cluster.
🔶 AWS Service Connector: Use the AWS Service Connector to connect ZenML to AWS cloud resources.
🔵 GCP Service Connector: Use the GCP Service Connector to connect ZenML to GCP cloud resources.
🅰️ Azure Service Connector: Use the Azure Service Connector to connect ZenML to Azure cloud resources.
PreviousSkypilot
NextService Connectors guide
Last updated 18 days ago | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 270 |
e by passing the page argument to the list method.You can further restrict your search by passing additional arguments that will be used to filter the results. E.g., most resources have a user_id associated with them that can be set to only list resources created by that specific user. The available filter argument options are different for each list method; check out the method declaration in the Client SDK documentation to find out which exact arguments are supported or have a look at the fields of the corresponding filter model class.
Except for pipeline runs, all other resources will by default be ordered by creation time ascending. E.g., client.list_artifacts() would return the first 50 artifacts ever created. You can change the ordering by specifying the sort_by argument when calling list methods.
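The paging and ordering behavior described above can be illustrated with a small stand-alone sketch (the function and argument names are made up for illustration; the real Client methods differ):

```python
def list_page(items, page=1, size=50, sort_key=lambda i: i["created"], descending=False):
    """Illustrative sketch: sort by creation time ascending by default,
    then slice out the requested page."""
    ordered = sorted(items, key=sort_key, reverse=descending)
    start = (page - 1) * size
    return ordered[start:start + size]

# 120 fake artifacts, with `created` standing in for a creation timestamp.
artifacts = [{"id": n, "created": n} for n in range(120)]
first_page = list_page(artifacts, page=1)  # the 50 oldest artifacts
third_page = list_page(artifacts, page=3)  # the remaining 20 items
```

Passing `descending=True` models flipping the sort order, analogous to changing `sort_by` on a real list method.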
Get Methods
Fetch a specific instance of a resource by either resource ID, name, or name prefix, e.g.:
client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # ID
client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # Name
client.get_pipeline_run("first_pipeline-2023_06_20-16") # Name prefix
Create, Update, and Delete Methods
Methods for creating / updating / deleting resources are only available for some of the resources and the required arguments are different for each resource. Checkout the Client SDK Documentation to find out whether a specific resource supports write operations through the Client and which arguments are required.
Active User and Active Stack
For some use cases you might need to know information about the user that you are authenticated as or the stack that you have currently set as active. You can fetch this information via the client.active_user and client.active_stack_model properties respectively, e.g.:
my_runs_on_current_stack = client.list_pipeline_runs(
    stack_id=client.active_stack_model.id,  # on current stack
    user_id=client.active_user.id,  # ran by you
)
Resource Models | reference | https://docs.zenml.io/v/docs/reference/python-client | 416 |
Amazon Elastic Container Registry (ECR)
Storing container images in Amazon ECR.
The AWS container registry is a container registry flavor provided with the ZenML aws integration and uses Amazon ECR to store container images.
When to use it
You should use the AWS container registry if:
one or more components of your stack need to pull or push container images.
you have access to AWS ECR. If you're not using AWS, take a look at the other container registry flavors.
How to deploy it
The ECR registry is automatically activated once you create an AWS account. However, you'll need to create a Repository in order to push container images to it:
Go to the ECR website.
Make sure the correct region is selected on the top right.
Click on Create repository.
Create a private repository. The name of the repository depends on the [orchestrator](../orchestrators/orchestrators.md) or step operator you're using in your stack.
URI format
The AWS container registry URI should have the following format:
<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
# Examples:
123456789.dkr.ecr.eu-west-2.amazonaws.com
987654321.dkr.ecr.ap-south-1.amazonaws.com
135792468.dkr.ecr.af-south-1.amazonaws.com
To figure out the URI for your registry:
Go to the AWS console and click on your user account in the top right to see the Account ID.
Go here and choose the region in which you would like to store your container images. Make sure to choose a nearby region for faster access.
Once you have both these values, fill in the values in this template <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com to get your container registry URI.
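Filling in the template is a simple string operation; for example (a throwaway helper, not part of any AWS or ZenML API):

```python
def ecr_uri(account_id: str, region: str) -> str:
    """Assemble the ECR registry URI from the two values gathered above."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com"

ecr_uri("123456789", "eu-west-2")  # '123456789.dkr.ecr.eu-west-2.amazonaws.com'
```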
Infrastructure Deployment
An AWS ECR Container Registry can be deployed directly from the ZenML CLI:
zenml container-registry deploy ecr_container_registry --flavor=aws --provider=aws ... | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/aws | 404 |
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES    ┃ 🌀 kubernetes-cluster                                                ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME     ┃ arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster          ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ SECRET ID         ┃                                                                      ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ SESSION DURATION  ┃ N/A                                                                  ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ EXPIRES IN        ┃ 11h59m57s                                                            ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ OWNER             ┃ default                                                              ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ WORKSPACE         ┃ default                                                              ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ SHARED            ┃ ➖                                                                   ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ CREATED_AT        ┃ 2023-06-16 10:17:46.931091                                           ┃
┠───────────────────╂──────────────────────────────────────────────────────────────────────┨
┃ UPDATED_AT        ┃ 2023-06-16 10:17:46.931094                                           ┃
┗━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Configuration | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices | 429 |
┃ MODEL_URI          ┃ s3://zenprojects/seldon_model_deployer_step/output/884/seldon                          ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ PIPELINE_NAME      ┃ seldon_deployment_pipeline                                                             ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ RUN_NAME           ┃ seldon_deployment_pipeline-11_Apr_22-09_39_27_648527                                   ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ PIPELINE_STEP_NAME ┃ seldon_model_deployer_step                                                             ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ PREDICTION_URL     ┃ http://abb84c444c7804aa98fc8c097896479d-377673393.us-east-1.elb.amazonaws.com/seldon/… ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ SELDON_DEPLOYMENT  ┃ zenml-8cbe671b-9fce-4394-a051-68e001f92765                                             ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ STATUS             ┃ ✅                                                                                     ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ STATUS_MESSAGE     ┃ Seldon Core deployment 'zenml-8cbe671b-9fce-4394-a051-68e001f92765' is available       ┃
┠────────────────────╂────────────────────────────────────────────────────────────────────────────────────────┨
┃ UUID               ┃ 8cbe671b-9fce-4394-a051-68e001f92765                                                   ┃
ββββββββββ·βββββββββ
Local and remote availability

You only need to be aware of local and remote availability for Service Connector Types if you are explicitly looking to use a Service Connector Type without installing its package prerequisites or if you are implementing or using a custom Service Connector Type implementation with your ZenML deployment. In all other cases, you may safely ignore this section.
The LOCAL and REMOTE flags in the zenml service-connector list-types output indicate if the Service Connector implementation is available locally (i.e. where the ZenML client and pipelines are running) and remotely (i.e. where the ZenML server is running).
All built-in Service Connector Types are by default available on the ZenML server, but some built-in Service Connector Types require additional Python packages to be installed to be available in your local environment. See the section documenting each Service Connector Type to find what these prerequisites are and how to install them.
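A quick way to check whether the Python prerequisites for a given Service Connector Type are installed in your local environment is to probe for the relevant modules. The module names below are illustrative assumptions; consult the documentation of each Service Connector Type for its actual prerequisites:

```python
import importlib.util

def prereqs_installed(modules):
    """Report which of the given Python modules are importable locally."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

# Hypothetical prerequisite modules for the AWS and Kubernetes connectors.
prereqs_installed(["boto3", "kubernetes"])
```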
The local/remote availability determines the possible actions and operations that can be performed with a Service Connector. The following are possible with a Service Connector Type that is available either locally or remotely:
Service Connector registration, update, and discovery (i.e. the zenml service-connector register, zenml service-connector update, zenml service-connector list and zenml service-connector describe CLI commands).
Service Connector verification: checking whether its configuration and credentials are valid and can be actively used to access the remote resources (i.e. the zenml service-connector verify CLI commands).
Listing the resources that can be accessed through a Service Connector (i.e. the zenml service-connector verify and zenml service-connector list-resources CLI commands)
Connecting a Stack Component to a remote resource via a Service Connector | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 352 |
f"{input_one} {input_two}"
print(combined_str)

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

if __name__ == "__main__":
    my_pipeline()

Saving that to a run.py file and running it gives us:
Example Command Output
```text
$ python run.py
Reusing registered pipeline simple_pipeline (version: 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image gcr.io/zenml-core/zenml:simple_pipeline-orchestrator.
Including integration requirements: gcsfs, google-cloud-aiplatform>=1.11.0, google-cloud-build>=3.11.0, google-cloud-container>=2.21.0, google-cloud-functions>=1.8.3, google-cloud-scheduler>=2.7.3, google-cloud-secret-manager, google-cloud-storage>=2.9.0, kfp==1.8.16, kubernetes==18.20.0, shapely<2.0
No .dockerignore found, including all files inside build context.
Step 1/8 : FROM zenmldocker/zenml:0.39.1-py3.8
Step 2/8 : WORKDIR /app
Step 3/8 : COPY .zenml_integration_requirements .
Step 4/8 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements
Step 5/8 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False
Step 6/8 : ENV ZENML_CONFIG_PATH=/app/.zenconfig
Step 7/8 : COPY . .
Step 8/8 : RUN chmod -R a+rw .
Pushing Docker image gcr.io/zenml-core/zenml:simple_pipeline-orchestrator.
Finished pushing Docker image.
Finished building Docker image(s).
Running pipeline simple_pipeline on stack gcp-demo (caching disabled)
Waiting for Kubernetes orchestrator pod...
Kubernetes orchestrator pod started.
Waiting for pod of step step_1 to start...
Step step_1 has started.
Step step_1 has finished in 1.357s.
Pod of step step_1 completed.
Waiting for pod of step simple_step_two to start...
Step step_2 has started.
Hello World!
Step step_2 has finished in 3.136s.
Pod of step step_2 completed.
Orchestration pod completed.
Dashboard URL: http://34.148.132.191/workspaces/default/pipelines/cec118d1-d90a-44ec-8bd7-d978f726b7aa/runs
``` | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 567 |
setting the VAULT_NAMESPACE environment variable.

ZENML_SECRETS_STORE_MAX_VERSIONS: The maximum number of secret versions to keep for each Vault secret. If not set, the default value of 1 will be used (only the latest version will be kept).
These configuration options are only relevant if you're using a custom secrets store backend implementation. For this to work, you must have a custom implementation of the secrets store API in the form of a class derived from zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore. This class must be importable from within the ZenML server container, which means you most likely need to mount the directory containing the class into the container or build a custom container image that contains the class.
The following configuration option is required:
ZENML_SECRETS_STORE_TYPE: Set this to custom in order to set this type of secret store.
ZENML_SECRETS_STORE_CLASS_PATH: The fully qualified path to the class that implements the custom secrets store API (e.g. my_package.my_module.MySecretsStore).
If your custom secrets store implementation requires additional configuration options, you can pass them as environment variables using the following naming convention:
ZENML_SECRETS_STORE_<OPTION_NAME>: The name of the option to pass to the custom secrets store class. The option name must be in uppercase and any hyphens (-) must be replaced with underscores (_). ZenML will automatically convert the environment variable name to the corresponding option name by removing the prefix and converting the remaining characters to lowercase. For example, the environment variable ZENML_SECRETS_STORE_MY_OPTION will be converted to the option name my_option and passed to the custom secrets store class configuration.
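The conversion described above can be sketched as follows (a hypothetical helper mirroring the documented convention, not ZenML's internal code):

```python
def env_to_option(env_var: str) -> str:
    """Convert a ZENML_SECRETS_STORE_* environment variable name to the
    option name passed to the custom secrets store class."""
    prefix = "ZENML_SECRETS_STORE_"
    if not env_var.startswith(prefix):
        raise ValueError(f"{env_var!r} does not use the expected prefix")
    # Remove the prefix and convert the remaining characters to lowercase.
    return env_var[len(prefix):].lower()

env_to_option("ZENML_SECRETS_STORE_MY_OPTION")  # 'my_option'
```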
ZENML_SECRETS_STORE_TYPE: Set this variable to none to disable the secrets store functionality altogether.
Backup secrets store | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 376 |
Specify pip dependencies and apt packages
The configuration for specifying pip and apt dependencies only works in the remote pipeline case, and is disregarded for local pipelines (i.e. pipelines that run locally without having to build a Docker image).
When a pipeline is run with a remote orchestrator a Dockerfile is dynamically generated at runtime. It is then used to build the Docker image using the image builder component of your stack.
By default, ZenML automatically installs all packages required by your active ZenML stack. However, you can specify additional packages to be installed in various ways:
Install all the packages in your local Python environment (This will use the pip or poetry package manager to get a list of your local packages):
# or use "poetry_export"
docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze")
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
If required, a custom command can be provided. This command must output a list of requirements following the format of the requirements file:
docker_settings = DockerSettings(replicate_local_python_environment=[
    "poetry",
    "export",
    "--extras=train",
    "--format=requirements.txt",
])
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
Specify a list of requirements in code:

docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"])

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
Specify a requirements file:

docker_settings = DockerSettings(requirements="/path/to/requirements.txt")

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
Specify a list of ZenML integrations that you're using in your pipeline:
from zenml.integrations.constants import PYTORCH, EVIDENTLY
docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY])
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
... | how-to | https://docs.zenml.io/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages | 399 |
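Conceptually, replicate_local_python_environment="pip_freeze" amounts to collecting the output of pip freeze for the local interpreter. A rough stand-alone sketch of that collection step (not ZenML's actual implementation):

```python
import subprocess
import sys

def local_requirements():
    """Collect the pinned packages of the local Python environment, in the
    format of a requirements file; returns an empty list if pip is missing."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return []
    return [line for line in result.stdout.splitlines() if line.strip()]
```

A custom command, as shown above, simply has to produce the same line-per-requirement format.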
Seldon
Deploying models to Kubernetes with Seldon Core.
Seldon Core is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
Seldon Core also comes equipped with a set of built-in model server implementations designed to work with standard formats for packaging ML models that greatly simplify the process of serving models for real-time inference.
The Seldon Core model deployer integration is currently not supported under MacOS.
When to use it?
Seldon Core is a production-grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors, and various continuous deployment strategies such as A/B testing, canary deployments, and more.
Seldon Core also comes equipped with a set of built-in model server implementations designed to work with standard formats for packaging ML models that greatly simplify the process of serving models for real-time inference.
You should use the Seldon Core Model Deployer:
If you are looking to deploy your model on a more advanced infrastructure like Kubernetes.
If you want to handle the lifecycle of the deployed model with no downtime, including updating the runtime graph, scaling, monitoring, and security.
Looking for more advanced API endpoints to interact with the deployed model, including REST and GRPC endpoints.
If you want more advanced deployment strategies like A/B testing, canary deployments, and more.
if you have a need for a more complex deployment process that can be customized by the advanced inference graph that includes custom TRANSFORMER and ROUTER. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon | 355 |
thod instead, we would not need to make this copy. It is worth noting that copying the artifact to a local path may not always be necessary and can potentially be a performance bottleneck.
import os
from typing import Any, ClassVar, Dict, Optional, Tuple, Type, Union
import pandas as pd
from zenml.artifact_stores.base_artifact_store import BaseArtifactStore
from zenml.enums import ArtifactType, VisualizationType
from zenml.logger import get_logger
from zenml.materializers.base_materializer import BaseMaterializer
from zenml.metadata.metadata_types import DType, MetadataType
logger = get_logger(__name__)
PARQUET_FILENAME = "df.parquet.gzip"
COMPRESSION_TYPE = "gzip"
CSV_FILENAME = "df.csv"
class PandasMaterializer(BaseMaterializer):
    """Materializer to read data to and from pandas."""

    ASSOCIATED_TYPES: ClassVar[Tuple[Type[Any], ...]] = (
        pd.DataFrame,
        pd.Series,
    )
    ASSOCIATED_ARTIFACT_TYPE: ClassVar[ArtifactType] = ArtifactType.DATA

    def __init__(
        self, uri: str, artifact_store: Optional[BaseArtifactStore] = None
    ):
        """Define `self.data_path`.

        Args:
            uri: The URI where the artifact data is stored.
            artifact_store: The artifact store where the artifact data is stored.
        """
        super().__init__(uri, artifact_store)
        try:
            import pyarrow  # type: ignore # noqa

            self.pyarrow_exists = True
        except ImportError:
            self.pyarrow_exists = False
            logger.warning(
                "By default, the `PandasMaterializer` stores data as a "
                "`.csv` file. If you want to store data more efficiently, "
                "you can install `pyarrow` by running "
                "'`pip install pyarrow`'. This will allow `PandasMaterializer` "
                "to automatically store the data as a `.parquet` file instead."
            )
        finally:
            self.parquet_path = os.path.join(self.uri, PARQUET_FILENAME)
            self.csv_path = os.path.join(self.uri, CSV_FILENAME)

    def load(self, data_type: Type[Any]) -> Union[pd.DataFrame, pd.Series]:
        """Reads `pd.DataFrame` or `pd.Series` from a `.parquet` or `.csv` file.

        Args:
            data_type: The type of the data to read.

        Raises:
Understanding stacks
Learning how to switch the infrastructure backend of your code.
Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running your first pipelines, you might have already noticed the term stack in the logs and on the dashboard.
A stack is the configuration of tools and infrastructure that your pipelines can run on. When you run ZenML code without configuring a stack, the pipeline will run on the so-called default stack.
Separation of code from configuration and infrastructure
As visualized in the diagram above, there are two separate domains that are connected through ZenML. The left side shows the code domain. The user's Python code is translated into a ZenML pipeline. On the right side, you can see the infrastructure domain, in this case, an instance of the default stack. By separating these two domains, it is easy to switch the environment that the pipeline runs on without making any changes in the code. It also allows domain experts to write code/configure infrastructure without worrying about the other domain.
The default stack
zenml stack describe lets you find out details about your active stack:
...
Stack Configuration
ββββββββββββββββββ―βββββββββββββββββ
β COMPONENT_TYPE β COMPONENT_NAME β
β βββββββββββββββββΌβββββββββββββββββ¨
β ARTIFACT_STORE β default β
β βββββββββββββββββΌβββββββββββββββββ¨
β ORCHESTRATOR β default β
ββββββββββββββββββ·βββββββββββββββββ
'default' stack (ACTIVE)
Stack 'default' with id '...' is owned by user default and is 'private'.
...
zenml stack list lets you see all stacks that are registered in your zenml deployment.
...
ββββββββββ―βββββββββββββ―ββββββββββββ―βββββββββ―ββββββββββ―βββββββββββββββββ―βββββββββββββββ
β ACTIVE β STACK NAME β STACK ID β SHARED β OWNER β ARTIFACT_STORE β ORCHESTRATOR β
β βββββββββΌβββββββββββββΌββββββββββββΌβββββββββΌββββββββββΌβββββββββββββββββΌβββββββββββββββ¨ | user-guide | https://docs.zenml.io/user-guide/production-guide/understand-stacks | 505 |
ttings={
    "orchestrator.vm_gcp": skypilot_settings

Code Example:
from zenml.integrations.skypilot_azure.flavors.skypilot_orchestrator_azure_vm_flavor import SkypilotAzureOrchestratorSettings

skypilot_settings = SkypilotAzureOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
    accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"},
    use_spot=True,
    spot_recovery="recovery_strategy",
    region="West Europe",
    image_id="Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest",
    disk_size=100,
    disk_tier="high",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
)

@pipeline(
    settings={
        "orchestrator.vm_azure": skypilot_settings
    }
)
Code Example:
from zenml.integrations.skypilot_lambda import SkypilotLambdaOrchestratorSettings

skypilot_settings = SkypilotLambdaOrchestratorSettings(
    instance_type="gpu_1x_h100_pcie",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
    docker_run_args=["--gpus=all"],
)

@pipeline(
    settings={
        "orchestrator.vm_lambda": skypilot_settings
    }
)
One of the key features of the SkyPilot VM Orchestrator is the ability to run each step of a pipeline on a separate VM with its own specific settings. This allows for fine-grained control over the resources allocated to each step, ensuring that each part of your pipeline has the necessary compute power while optimizing for cost and efficiency.
Configuring Step-Specific Resources
The SkyPilot VM Orchestrator allows you to configure resources for each step individually. This means you can specify different VM types, CPU and memory requirements, and even use spot instances for certain steps while using on-demand instances for others. | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/skypilot-vm | 436 |
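Conceptually, step-level settings override the pipeline-level defaults key by key; a simplified sketch of that merge (hypothetical helper names, not the orchestrator's actual code):

```python
def resolve_step_resources(pipeline_defaults, step_overrides, step_name):
    """Merge per-step resource overrides on top of pipeline-wide defaults."""
    resources = dict(pipeline_defaults)
    resources.update(step_overrides.get(step_name, {}))
    return resources

# Cheap spot VMs by default; the training step gets a GPU on-demand VM.
defaults = {"cpus": "2", "memory": "16", "use_spot": True}
overrides = {"train": {"accelerators": "V100:2", "use_spot": False}}

resolve_step_resources(defaults, overrides, "train")
```

Steps without an override (e.g. a preprocessing step) simply inherit the defaults unchanged.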
View logs on the dashboard
By default, ZenML uses a logging handler to capture the logs that occur during the execution of a step. Users are free to use the default python logging module or print statements, and ZenML's logging handler will catch these logs and store them.
import logging
from zenml import step
@step
def my_step() -> None:
logging.warning("`Hello`") # You can use the regular `logging` module.
print("World.") # You can utilize `print` statements as well.
These logs are stored within the respective artifact store of your stack. This means that you can only view these logs in the dashboard if the deployed ZenML server has direct access to the underlying artifact store. There are two cases in which this will be true:
In case of a local ZenML server (via zenml up), both local and remote artifact stores may be accessible, depending on configuration of the client.
In case of a deployed ZenML server, logs for runs on a local artifact store will not be accessible. Logs for runs using a remote artifact store may be accessible, if the artifact store has been configured with a service connector. Please read this chapter of the production guide to learn how to configure a remote artifact store with a service connector.
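The two cases above can be condensed into a small predicate (a simplification of the rules described here, not actual ZenML logic):

```python
def logs_viewable_in_dashboard(server_is_local, store_is_local, store_has_connector):
    """Rough predicate: can the ZenML server reach the artifact store
    holding the step logs?"""
    if server_is_local:
        # A local server can typically reach both local and remote stores,
        # depending on the client configuration.
        return True
    if store_is_local:
        # A deployed server cannot read files on your local machine.
        return False
    # A deployed server reaches a remote store only via a configured
    # service connector.
    return store_has_connector
```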
If configured correctly, the logs are displayed in the dashboard as follows:
If you do not want to store the logs for your pipeline (for example due to performance reduction or storage limits), you can follow these instructions.
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/control-logging/view-logs-on-the-dasbhoard | 319 |
zenml container-registry connect aws-us-east-1 --connector aws-us-east-1

Successfully connected container registry `aws-us-east-1` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β d400e0c6-a8e7-4b95-ab34-0359229c5d36 β aws-us-east-1 β πΆ aws β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
As a final step, you can use the AWS Container Registry in a ZenML Stack:
# Register and set a stack with the new container registry
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
Linking the AWS Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry:
zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
Example Command Output
$ zenml service-connector login aws-us-east-1 --resource-type docker-registry
β Ό Attempting to configure local client using service connector 'aws-us-east-1'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/aws | 532 |
Manage artifacts
Understand and adjust how ZenML versions your data.
Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact, be it data, models, or evaluations, is automatically tracked and versioned upon pipeline execution.
This guide will delve into artifact versioning and management, showing you how to efficiently name, organize, and utilize your data with the ZenML framework.
Managing artifacts produced by ZenML pipelines
Artifacts, the outputs of your steps and pipelines, are automatically versioned and stored in the artifact store. Configuring these artifacts is pivotal for transparent and efficient pipeline development.
Giving names to your artifacts
Assigning custom names to your artifacts can greatly enhance their discoverability and manageability. As best practice, utilize the Annotated object within your steps to give precise, human-readable names to outputs:
from typing_extensions import Annotated
import pandas as pd
from sklearn.datasets import load_iris
from zenml import pipeline, step
# Using Annotated to name our dataset
@step
def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]:
"""Load the iris dataset as pandas dataframe."""
iris = load_iris(as_frame=True)
return iris.get("frame")
@pipeline
def feature_engineering_pipeline():
training_data_loader()
if __name__ == "__main__":
feature_engineering_pipeline()
Unspecified artifact outputs default to a naming pattern of {pipeline_name}::{step_name}::output. For visual exploration in the ZenML dashboard, it's best practice to give significant outputs clear custom names.
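For illustration, the fallback pattern can be written as a tiny helper (hypothetical; ZenML generates these names internally):

```python
def default_artifact_name(
    pipeline_name: str, step_name: str, output_name: str = "output"
) -> str:
    """Mirror the fallback naming pattern for unnamed step outputs."""
    return f"{pipeline_name}::{step_name}::{output_name}"

print(default_artifact_name("feature_engineering_pipeline", "training_data_loader"))
# feature_engineering_pipeline::training_data_loader::output
```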
Artifacts named iris_dataset can then be found swiftly using various ZenML interfaces:
To list artifacts: zenml artifact list | user-guide | https://docs.zenml.io/user-guide/starter-guide/manage-artifacts | 372 |
Deploy with custom images
Deploying ZenML with custom Docker images.
In most cases, deploying ZenML with the default zenmldocker/zenml-server Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:
You have implemented a custom artifact store for which you want to enable artifact visualizations or step logs in your dashboard.
You have forked the ZenML repository and want to deploy a ZenML server based on your own fork because you made changes to the server / database logic.
Deploying ZenML with custom Docker images is only possible for Docker or Helm deployments.
Build and Push Custom ZenML Server Docker Image
Here is how you can build a custom ZenML server Docker image:
Set up a container registry of your choice. E.g., as an individual developer you could create a free Docker Hub account and then set up a free Docker Hub repository.
Clone ZenML (or your ZenML fork) and check out the branch that you want to deploy. E.g., if you want to deploy ZenML version 0.41.0, run: `git checkout release/0.41.0`
Copy the ZenML base.Dockerfile, e.g.: `cp docker/base.Dockerfile docker/custom.Dockerfile`
Modify the copied Dockerfile: add additional dependencies with `RUN pip install <my_package>`. (Forks only) Install local files instead of the official ZenML package with `RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure]`
Build and push an image based on your Dockerfile: `docker build -f docker/custom.Dockerfile . -t <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> --platform linux/amd64`
docker push <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>
If you want to verify your custom image locally, you can follow the Deploy a custom ZenML image via Docker section below to deploy the ZenML server locally first.
Deploy ZenML with your custom image | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-custom-image | 447 |
kdownString .csv / .html / .md (depending on type)ZenML provides a built-in CloudpickleMaterializer that can handle any object by saving it with cloudpickle. However, this is not production-ready because the resulting artifacts cannot be loaded when running with a different Python version. In such cases, you should consider building a custom Materializer to save your objects in a more robust and efficient format.
Moreover, using the CloudpickleMaterializer could allow users to upload any kind of object. This could be exploited to upload a malicious file, which could execute arbitrary code on the vulnerable system.
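This risk is inherent to pickle-style serialization: deserializing untrusted bytes can execute arbitrary callables. A minimal stdlib demonstration using `pickle` (which `cloudpickle` builds on):

```python
import pickle

log = []

def side_effect(msg):
    # Stands in for arbitrary attacker-controlled code.
    log.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call side_effect(...)".
        return (side_effect, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
assert log == []    # nothing has happened yet
pickle.loads(blob)  # merely loading the bytes triggers the call
assert log == ["code ran during deserialization"]
```

This is why deserializing pickled artifacts from untrusted sources should never be allowed.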
Integration Materializers
In addition to the built-in materializers, ZenML also provides several integration-specific materializers that can be activated by installing the respective integration: | how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/handle-custom-data-types | 155 |
ββ βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β 19edc05b-92db-49de-bc84-aa9b3fb8261a β aws-s3-zenfiles β πΆ aws β π¦ s3-bucket β s3://zenfiles β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β c732c768-3992-4cbd-8738-d02cd7b6b340 β kubernetes-auto β π kubernetes β π kubernetes-cluster β π₯ error: connector 'kubernetes-auto' authorization failure: failed to verify Kubernetes cluster β
β β β β β access: (401) β
β β β β β Reason: Unauthorized β
β β β β β HTTP response headers: HTTPHeaderDict({'Audit-Id': '20c96e65-3e3e-4e08-bae3-bcb72c527fbf', β
β β β β β 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 09 Jun 2023 β
β β β β β 18:52:56 GMT', 'Content-Length': '129'}) β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 376 |
Understanding reranking
Understand how reranking works.
What is reranking?
Reranking is the process of refining the initial ranking of documents retrieved by a retrieval system. In the context of Retrieval-Augmented Generation (RAG), reranking plays a crucial role in improving the relevance and quality of the retrieved documents that are used to generate the final output.
The initial retrieval step in RAG typically uses a sparse retrieval method, such as BM25 or TF-IDF, to quickly find a set of potentially relevant documents based on the input query. However, these methods rely on lexical matching and may not capture the semantic meaning or context of the query effectively.
Rerankers, on the other hand, are designed to reorder the retrieved documents by considering additional features, such as semantic similarity, relevance scores, or domain-specific knowledge. They aim to push the most relevant and informative documents to the top of the list, ensuring that the LLM has access to the best possible context for generating accurate and coherent responses.
Types of Rerankers
There are different types of rerankers that can be used in RAG, each with its own strengths and trade-offs:
Cross-Encoders: Cross-encoders are a popular choice for reranking in RAG. They take the concatenated query and document as input and output a relevance score. Examples include BERT-based models fine-tuned for passage ranking tasks. Cross-encoders can capture the interaction between the query and document effectively but are computationally expensive.
Bi-Encoders: Bi-encoders, also known as dual encoders, use separate encoders for the query and document. They generate embeddings for the query and document independently and then compute the similarity between them. Bi-encoders are more efficient than cross-encoders but may not capture the query-document interaction as effectively. | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/reranking/understanding-reranking | 372 |
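The mechanics of reranking can be sketched in a few lines. Here a toy lexical-overlap scorer stands in for a real cross-encoder model, purely to illustrate the reorder-by-score step:

```python
def toy_relevance_score(query: str, document: str) -> float:
    """Stand-in for a cross-encoder: score the (query, document) pair jointly.
    A real reranker would feed the concatenated pair through a model instead."""
    query_terms = set(query.lower().split())
    doc_terms = set(document.lower().split())
    if not query_terms:
        return 0.0
    return len(query_terms & doc_terms) / len(query_terms)

def rerank(query: str, documents: list, top_k: int = 3) -> list:
    """Reorder an initial retrieval result by pairwise relevance score."""
    scored = [(toy_relevance_score(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

retrieved = [
    "ZenML pipelines orchestrate machine learning workflows",
    "reranking reorders retrieved documents by relevance",
    "bananas are rich in potassium",
]
print(rerank("how does reranking order documents", retrieved, top_k=2))
```

Swapping `toy_relevance_score` for a learned cross-encoder yields exactly the reranking pipeline described above.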
s used for each run.
Creating a GitHub Repository

While ZenML supports many different flavors of git repositories, this guide will focus on GitHub. To create a repository on GitHub:
Sign in to GitHub.
Click the "+" icon and select "New repository."
Name your repository, set its visibility, and add a README or .gitignore if needed.
Click "Create repository."
We can now push our local code (from the previous chapters) to GitHub with these commands:
# Initialize a Git repository
git init
# Add files to the repository
git add .
# Commit the files
git commit -m "Initial commit"
# Add the GitHub remote
git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git
# Push to GitHub
git push -u origin master
Replace YOUR_USERNAME and YOUR_REPOSITORY_NAME with your GitHub information.
Linking to ZenML
To connect your GitHub repository to ZenML, you'll need a GitHub Personal Access Token (PAT).
Go to your GitHub account settings and click on Developer settings.
Select "Personal access tokens" and click on "Generate new token".
Give your token a name and a description.
We recommend scoping the token to the specific repository and giving it read-only access to the repository contents.
Click on "Generate token" and copy the token to a safe place.
Now, we can install the GitHub integration and register your repository:
zenml integration install github
zenml code-repository register <REPO_NAME> --type=github \
--url=https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git \
--owner=YOUR_USERNAME --repository=YOUR_REPOSITORY_NAME \
--token=YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
Fill in <REPO_NAME>, YOUR_USERNAME, YOUR_REPOSITORY_NAME, and YOUR_GITHUB_PERSONAL_ACCESS_TOKEN with your details.
Your code is now connected to your ZenML server. ZenML will automatically detect if your source files are being tracked by GitHub and store the commit hash for each subsequent pipeline run.
You can try this out by running our training pipeline again:
# This will build the Docker image the first time | user-guide | https://docs.zenml.io/user-guide/production-guide/connect-code-repository | 424 |
from zenml import pipeline
from zenml.client import Client
from zenml.enums import ModelStages

@pipeline
def do_predictions():
    # model name and version are directly passed into the client method
    model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION)
    inference_data = load_data()
    predict(
        # Here, we load in the `trained_model` from a trainer step
        model=model.get_model_artifact("trained_model"),
        data=inference_data,
    )
In this case, the evaluation of the actual artifact will happen only when the step is actually running.
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/load-artifacts-from-model | 121 |
of your models at different stages of development. If you have pipelines that regularly ingest new data, you should use data validation to run regular data integrity checks to signal problems before they are propagated downstream.
in continuous training pipelines, you should use data validation techniques to compare new training data against a data reference and to compare the performance of newly trained models against previous ones.
when you have pipelines that automate batch inference or if you regularly collect data used as input in online inference, you should use data validation to run data drift analyses and detect training-serving skew, data drift and model drift.
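As a mental model, the core of a drift check is comparing statistics of current data against a reference. A toy z-test sketch follows (illustrative only; real Data Validators such as Evidently or whylogs use far more robust statistical tests):

```python
from statistics import mean, stdev

def drift_detected(reference: list, current: list, threshold: float = 3.0) -> bool:
    """Flag drift when the current mean deviates from the reference mean
    by more than `threshold` standard errors (a toy z-test)."""
    ref_mean = mean(reference)
    ref_std = stdev(reference)
    if ref_std == 0:
        return mean(current) != ref_mean
    standard_error = ref_std / (len(current) ** 0.5)
    z = abs(mean(current) - ref_mean) / standard_error
    return z > threshold

reference = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
print(drift_detected(reference, [10.1, 9.9, 10.4, 10.0]))   # False: same distribution
print(drift_detected(reference, [15.2, 16.1, 15.8, 15.5]))  # True: mean has shifted
```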
Data Validator Flavors
Data Validators are optional stack components provided by integrations. The following table lists the currently available Data Validators and summarizes their features and the data types and model types that they can be used with in ZenML pipelines:

| Data Validator | Validation Features | Data Types | Model Types | Notes | Flavor/Integration |
|---|---|---|---|---|---|
| Deepchecks | data quality, data drift, model drift, model performance | tabular: pandas.DataFrame; CV: torch.utils.data.dataloader.DataLoader | tabular: sklearn.base.ClassifierMixin; CV: torch.nn.Module | Add Deepchecks data and model validation tests to your pipelines | deepchecks |
| Evidently | data quality, data drift, model drift, model performance | tabular: pandas.DataFrame | N/A | Use Evidently to generate a variety of data quality and data/model drift reports and visualizations | evidently |
| Great Expectations | data profiling, data quality | tabular: pandas.DataFrame | N/A | Perform data testing, documentation and profiling with Great Expectations | great_expectations |
| Whylogs/WhyLabs | data drift | tabular: pandas.DataFrame | N/A | Generate data profiles with whylogs and upload them to WhyLabs | whylogs |
If you would like to see the available flavors of Data Validator, you can use the command:
zenml data-validator flavor list
How to use it | stack-components | https://docs.zenml.io/stack-components/data-validators | 357 |
this, run the following command:
zenml downgrade

Note that downgrading the ZenML version may cause unexpected behavior, such as model schema validation failures or even data loss. In such cases, you may need to purge the local database and re-initialize the global configuration to bring it back to its default factory state. To do this, run the following command:
zenml clean
Last updated 19 days ago | reference | https://docs.zenml.io/v/docs/reference/global-settings | 90 |
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.steps import (
    EvidentlyColumnMapping,
    evidently_report_step,
)

text_data_report = evidently_report_step.with_options(
parameters=dict(
column_mapping=EvidentlyColumnMapping(
target="Rating",
numerical_features=["Age", "Positive_Feedback_Count"],
categorical_features=[
"Division_Name",
"Department_Name",
"Class_Name",
],
text_features=["Review_Text", "Title"],
),
metrics=[
EvidentlyMetricConfig.metric("DataQualityPreset"),
EvidentlyMetricConfig.metric(
"TextOverviewPreset", column_name="Review_Text"
),
EvidentlyMetricConfig.metric_generator(
"ColumnRegExpMetric",
columns=["Review_Text", "Title"],
reg_exp=r"[A-Z][A-Za-z0-9 ]*",
),
],
# We need to download the NLTK data for the TextOverviewPreset
download_nltk_data=True,
    ),
)
The configuration shown in the example is the equivalent of running the following Evidently code inside the step:
from evidently.metrics import ColumnRegExpMetric
from evidently.metric_preset import DataQualityPreset, TextOverviewPreset
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics.base_metric import generate_column_metrics
import nltk
nltk.download("words")
nltk.download("wordnet")
nltk.download("omw-1.4")
column_mapping = ColumnMapping(
target="Rating",
numerical_features=["Age", "Positive_Feedback_Count"],
categorical_features=[
"Division_Name",
"Department_Name",
"Class_Name",
],
    text_features=["Review_Text", "Title"],
)

report = Report(
metrics=[
DataQualityPreset(),
TextOverviewPreset(column_name="Review_Text"),
generate_column_metrics(
ColumnRegExpMetric,
columns=["Review_Text", "Title"],
        parameters={"reg_exp": r"[A-Z][A-Za-z0-9 ]*"},
        ),
    ],
)

# The datasets are those that are passed to the Evidently step
# as input artifacts
report.run(
current_data=current_dataset,
reference_data=reference_dataset,
    column_mapping=column_mapping,
)
Let's break this down... | stack-components | https://docs.zenml.io/stack-components/data-validators/evidently | 428 |
ator(dataset=df_train)
data_validation_pipeline()

As can be seen from the step definition, the step takes in a dataset and returns a Deepchecks SuiteResult object that contains the test results:
@step
def deepchecks_data_integrity_check_step(
dataset: pd.DataFrame,
check_list: Optional[Sequence[DeepchecksDataIntegrityCheck]] = None,
dataset_kwargs: Optional[Dict[str, Any]] = None,
check_kwargs: Optional[Dict[str, Any]] = None,
run_kwargs: Optional[Dict[str, Any]] = None,
) -> SuiteResult:
...
If needed, you can specify a custom list of data integrity Deepchecks tests to be executed by supplying a check_list argument:
from zenml.integrations.deepchecks.validation_checks import DeepchecksDataIntegrityCheck
from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step
@pipeline
def validation_pipeline():
deepchecks_data_integrity_check_step(
check_list=[
DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES,
DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES,
DeepchecksDataIntegrityCheck.TABULAR_CONFLICTING_LABELS,
],
dataset=...
You should consult the official Deepchecks documentation for more information on what each test is useful for.
For more customization, the data integrity step also allows for additional keyword arguments to be supplied to be passed transparently to the Deepchecks library:
dataset_kwargs: Additional keyword arguments to be passed to the Deepchecks tabular.Dataset or vision.VisionData constructor. This is used to pass additional information about how the data is structured, e.g.:

deepchecks_data_integrity_check_step(
dataset_kwargs=dict(label='class', cat_features=['country', 'state']),
...
) | stack-components | https://docs.zenml.io/stack-components/data-validators/deepchecks | 360 |
ser. For more information, see this documentation.

For more information on user federation tokens, session policies, and the GetFederationToken AWS API, see the official AWS documentation on the subject.
For more information about the difference between this method and the AWS IAM Role authentication method, consult this AWS documentation page.
The following assumes the local AWS CLI has a connectors AWS CLI profile already configured with an AWS Secret Key:
AWS_PROFILE=connectors zenml service-connector register aws-federation-token --type aws --auth-method federation-token --auto-configure
Example Command Output
β Έ Registering service connector 'aws-federation-token'...
Successfully registered service connector `aws-federation-token` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
The Service Connector configuration shows long-lived credentials have been picked up from the local AWS CLI configuration:
zenml service-connector describe aws-federation-token
Example Command Output | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 465 |
for federated users by impersonating another user.

The connector needs to be configured with an AWS secret key associated with an IAM user or AWS account root user (not recommended). The IAM user must have permission to call the GetFederationToken STS API (i.e. allow the sts:GetFederationToken action on the * IAM resource). The connector will generate temporary STS tokens upon request by calling the GetFederationToken STS API.
These STS tokens have an expiration period longer than those issued through the AWS IAM Role authentication method and are more suitable for long-running processes that cannot automatically re-generate credentials upon expiration.
An AWS region is required and the connector may only be used to access AWS resources in the specified region.
One or more optional IAM session policies may also be configured to further restrict the permissions of the generated STS tokens. If not specified, IAM session policies are automatically configured for the generated STS tokens to restrict them to the minimum set of permissions required to access the target resource. Refer to the documentation for each supported Resource Type for the complete list of AWS permissions automatically granted to the generated STS tokens.
If this authentication method is used with the generic AWS resource type, a session policy MUST be explicitly specified, otherwise, the generated STS tokens will not have any permissions.
The default expiration period for generated STS tokens is 12 hours with a minimum of 15 minutes and a maximum of 36 hours. Temporary credentials obtained by using the AWS account root user credentials (not recommended) have a maximum duration of 1 hour.
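These expiration bounds amount to a simple clamp, which can be illustrated as follows (hypothetical helper, not part of the connector's API; root-user handling per the 1-hour cap above):

```python
from typing import Optional

MIN_SECONDS = 15 * 60            # 15-minute minimum
MAX_SECONDS = 36 * 60 * 60       # 36-hour maximum
DEFAULT_SECONDS = 12 * 60 * 60   # 12-hour default
ROOT_MAX_SECONDS = 60 * 60       # 1-hour cap for account root user credentials

def effective_token_duration(requested: Optional[int] = None, is_root_user: bool = False) -> int:
    """Clamp a requested STS token duration (seconds) to the allowed range."""
    duration = DEFAULT_SECONDS if requested is None else requested
    upper = ROOT_MAX_SECONDS if is_root_user else MAX_SECONDS
    return max(MIN_SECONDS, min(duration, upper))

print(effective_token_duration())                   # 43200 (12 h default)
print(effective_token_duration(5 * 60))             # 900 (raised to the 15 min floor)
print(effective_token_duration(48 * 3600))          # 129600 (capped at 36 h)
print(effective_token_duration(is_root_user=True))  # 3600 (root capped at 1 h)
```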
If you need to access an EKS Kubernetes cluster with this authentication method, please be advised that the EKS cluster's aws-auth ConfigMap may need to be manually configured to allow authentication with the federated user. For more information, see this documentation. | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 364 |
Deleting a Model
Learn how to delete models.
Deleting a model or a specific model version means removing all links between the Model entity and its artifacts and pipeline runs, and will also delete all metadata associated with that Model.
Deleting all versions of a model
zenml model delete <MODEL_NAME>
from zenml.client import Client
Client().delete_model(<MODEL_NAME>)
Delete a specific version of a model
zenml model version delete <MODEL_VERSION_NAME>
from zenml.client import Client
Client().delete_model_version(<MODEL_VERSION_ID>)
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/delete-a-model | 130 |
def implementation_class(self) -> Type[BaseStepOperator]:
    """Returns the implementation class for this flavor."""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the SDK docs .
Build your own custom step operator
If you want to create your own custom flavor for a step operator, you can follow the following steps:
Create a class that inherits from the BaseStepOperator class and implement the abstract launch method. This method has two main responsibilities:

Preparing a suitable execution environment (e.g. a Docker image): The general environment is highly dependent on the concrete step operator implementation, but for ZenML to be able to run the step it requires you to install some pip dependencies. The list of requirements needed to successfully execute the step can be found via the Docker settings info.pipeline.docker_settings passed to the launch() method. Additionally, you'll have to make sure that all the source code of your ZenML step and pipeline is available within this execution environment.

Running the entrypoint command: Actually running a single step of a pipeline requires knowledge of many ZenML internals and is implemented in the zenml.step_operators.step_operator_entrypoint_configuration module. As long as your environment was set up correctly (see the previous bullet point), you can run the step using the command provided via the entrypoint_command argument of the launch() method.
If your step operator allows the specification of per-step resources, make sure to handle the resources defined on the step (info.config.resource_settings) that was passed to the launch() method.
If you need to provide any configuration, create a class that inherits from the BaseStepOperatorConfig class and adds your configuration parameters. | stack-components | https://docs.zenml.io/stack-components/step-operators/custom | 352 |
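The overall shape of such a flavor can be sketched with plain Python. Note that `BaseStepOperator` below is a toy stand-in defined inline for illustration; the real base class lives in ZenML and does much more:

```python
from abc import ABC, abstractmethod
from typing import List

class BaseStepOperator(ABC):
    """Toy stand-in for ZenML's step operator base class."""

    @abstractmethod
    def launch(self, info: dict, entrypoint_command: List[str]) -> None:
        """Prepare an execution environment and run the step entrypoint."""

class LoggingStepOperator(BaseStepOperator):
    """A do-nothing operator that only records what it would have executed."""

    def __init__(self) -> None:
        self.launched: List[List[str]] = []

    def launch(self, info: dict, entrypoint_command: List[str]) -> None:
        # A real implementation would 1) build or pull an image satisfying the
        # requirements from the Docker settings in `info`, and 2) run the
        # entrypoint command inside that environment.
        self.launched.append(entrypoint_command)

operator = LoggingStepOperator()
operator.launch({"docker_settings": {}}, ["python", "-m", "zenml.entrypoint"])
print(operator.launched)
```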
nment or the Stack Component.
End-to-end examples

This is an example of an end-to-end workflow involving Service Connectors that use a single multi-type AWS Service Connector to give access to multiple resources for multiple Stack Components. A complete ZenML Stack is registered and composed of the following Stack Components, all connected through the same Service Connector:
a Kubernetes Orchestrator connected to an EKS Kubernetes cluster
an S3 Artifact Store connected to an S3 bucket
an ECR Container Registry stack component connected to an ECR container registry
a local Image Builder
As a last step, a simple pipeline is run on the resulting Stack.
Configure the local AWS CLI with valid IAM user account credentials with a wide range of permissions (i.e. by running aws configure) and install the ZenML integration prerequisites:

zenml integration install -y aws s3
aws configure --profile connectors
Example Command Output
```text
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
```
Make sure the AWS Service Connector Type is available:

zenml service-connector list-types --type aws
Example Command Output
```text
βββββββββββββββββββββββββ―βββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββ―ββββββββ―βββββββββ
β NAME β TYPE β RESOURCE TYPES β AUTH METHODS β LOCAL β REMOTE β
β ββββββββββββββββββββββββΌβββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββΌββββββββΌβββββββββ¨
β AWS Service Connector β πΆ aws β πΆ aws-generic β implicit β β
β β
β
β β β π¦ s3-bucket β secret-key β β β
β β β π kubernetes-cluster β sts-token β β β
β β β π³ docker-registry β iam-role β β β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 495 |
9/10 : COPY . .
Step 10/10 : RUN chmod -R a+rw .

Amazon ECR requires you to create a repository before you can push an image to it. ZenML is trying to push the image 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml:simple_pipeline-orchestrator but could only detect the following repositories: []. We will try to push anyway, but in case it fails you need to create a repository named zenml.
Pushing Docker image 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml:simple_pipeline-orchestrator.
Finished pushing Docker image.
Finished building Docker image(s).
Running pipeline simple_pipeline on stack aws-demo (caching disabled)
Waiting for Kubernetes orchestrator pod...
Kubernetes orchestrator pod started.
Waiting for pod of step step_1 to start...
Step step_1 has started.
Step step_1 has finished in 0.390s.
Pod of step step_1 completed.
Waiting for pod of step step_2 to start...
Step step_2 has started.
Hello World!
Step step_2 has finished in 2.364s.
Pod of step step_2 completed.
Orchestration pod completed.
Dashboard URL: https://stefan.develaws.zenml.io/workspaces/default/pipelines/be5adfe9-45af-4709-a8eb-9522c01640ce/runs
```
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 323 |
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
The following is an example of configuring a multi-instance AWS S3 Service Connector instance capable of accessing multiple AWS S3 buckets:
zenml service-connector register aws-s3-multi-instance --type aws --auto-configure --resource-type s3-bucket
Example Command Output
β Έ Registering service connector 'aws-s3-multi-instance'...
Successfully registered service connector `aws-s3-multi-instance` with access to the following resources:
βββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://aws-ia-mwaa-715803424590 β
β β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β β s3://zenml-public-datasets β
β β s3://zenml-public-swagger-spec β
βββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββ
The following is an example of configuring a single-instance AWS S3 Service Connector instance capable of accessing a single AWS S3 bucket:
zenml service-connector register aws-s3-zenfiles --type aws --auto-configure --resource-type s3-bucket --resource-id s3://zenfiles
Example Command Output
β Ό Registering service connector 'aws-s3-zenfiles'...
Successfully registered service connector `aws-s3-zenfiles` with access to the following resources:
βββββββββββββββββ―βββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌβββββββββββββββββ¨
β π¦ s3-bucket β s3://zenfiles β
βββββββββββββββββ·βββββββββββββββββ
Explore Service Connector Types | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 585 |
to the active stack
zenml stack update -s <NAME>

Once you have added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows:
from zenml import step
@step(step_operator= <NAME>)
def trainer(...) -> ...:
"""Train a model."""
# This step will be executed in Vertex.
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Additional configuration
You can specify the service account, network and reserved IP ranges to use for the VertexAI CustomJob by passing the service_account, network and reserved_ip_ranges parameters to the step-operator register command:
zenml step-operator register <STEP_OPERATOR_NAME> \
    --flavor=vertex \
    --project=<GCP_PROJECT> \
    --region=<REGION> \
    --service_account=<SERVICE_ACCOUNT> \
    --network=<NETWORK> \
    --reserved_ip_ranges=<RESERVED_IP_RANGES>

The service_account, network, and reserved_ip_ranges flags are optional and may be omitted.
For additional configuration of the Vertex step operator, you can pass VertexStepOperatorSettings when defining or running your pipeline.
from zenml import step
from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings
@step(step_operator=<NAME>, settings={"step_operator.vertex": VertexStepOperatorSettings(
    accelerator_type="NVIDIA_TESLA_T4",  # see https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#AcceleratorType
    accelerator_count=1,
    machine_type="n1-standard-2",  # see https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/vertex | 435 |
$ zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-iam-multi-us
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected orchestrator `<ORCHESTRATOR_NAME>` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββ¨
β ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β aws-iam-multi-us β πΆ aws β π kubernetes-cluster β zenhacks-cluster β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββ
# Register and activate a stack with the new orchestrator
$ zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
If you don't have a Service Connector on hand and you don't want to register one, the local Kubernetes kubectl client needs to be configured with a configuration context pointing to the remote cluster. The kubernetes_context stack component must also be configured with the value of that context:
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=kubernetes \
--kubernetes_context=<KUBERNETES_CONTEXT>
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Kubernetes. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
You can now run any ZenML pipeline using the Kubernetes orchestrator:
python file_that_runs_a_zenml_pipeline.py | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubernetes | 581 |
zation or use to authenticate automated workloads. In fact, cloud platforms don't even allow using user account passwords directly as a credential when authenticating to the cloud platform APIs. There is always a process in place that allows exchanging the account/password credential for another form of long-lived credential.
Even when passwords are mentioned as credentials, some services (e.g. DockerHub) also allow using an API access key in place of the user account password.
Implicit authentication
The key takeaway here is that implicit authentication gives you immediate access to some cloud resources and requires no configuration, but it may take some extra effort to expand the range of resources that you're initially allowed to access with it. This is not an authentication method you want to use if you're interested in portability and enabling others to reproduce your results.
This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
Implicit authentication is just a fancy way of saying that the Service Connector will use locally stored credentials, configuration files, environment variables, and basically any form of authentication available in the environment where it is running, either locally or in the cloud.
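For example, in the AWS case, the implicit method can pick up credentials exported as environment variables. The variable names below are the standard AWS ones, and the values are the placeholder keys used throughout the AWS documentation, not real credentials:

```shell
# Standard AWS environment variables (placeholder values from the AWS docs,
# not real credentials).
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"

# Any process started from this shell -- including a Service Connector
# using implicit authentication -- can discover these from its environment.
env | grep '^AWS_'
```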
Most cloud providers and their associated Service Connector Types include some form of implicit authentication that is able to automatically discover and use the following forms of authentication in the environment where they are running:
configuration and credentials set up and stored locally through the cloud platform CLI
configuration and credentials passed as environment variables | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 347 |
his flavor."""
    return BaseContainerRegistryConfig

@property
def implementation_class(self) -> Type[BaseContainerRegistry]:
    """Implementation class."""
    return BaseContainerRegistry
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the SDK docs .
Building your own container registry
If you want to create your own custom flavor for a container registry, you can follow the following steps:
Create a class that inherits from the BaseContainerRegistry class and if you need to execute any checks/validation before the image gets pushed, you can define these operations in the prepare_image_push method. As an example, you can check the AWSContainerRegistry.
If you need further configuration, you can create a class which inherits from the BaseContainerRegistryConfig class.
Bring both the implementation and the configuration together by inheriting from the BaseContainerRegistryFlavor class.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml container-registry flavor register <path.to.MyContainerRegistryFlavor>
For example, if your flavor class MyContainerRegistryFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually it's better to not have to rely on this mechanism, and initialize zenml at the root.
Afterward, you should see the new flavor in the list of available flavors: | stack-components | https://docs.zenml.io/stack-components/container-registries/custom | 387 |
s within a unique directory in the artifact store:
Materializers are designed to be extensible and customizable, allowing you to define your own serialization and deserialization logic for specific data types or storage systems. By default, ZenML provides built-in materializers for common data types and uses cloudpickle to pickle objects where there is no default materializer. If you want direct control over how objects are serialized, you can easily create custom materializers by extending the BaseMaterializer class and implementing the required methods for your specific use case. Read more about materializers here.
ZenML provides a built-in CloudpickleMaterializer that can handle any object by saving it with cloudpickle. However, this is not production-ready because the resulting artifacts cannot be loaded when running with a different Python version. In such cases, you should consider building a custom Materializer to save your objects in a more robust and efficient format.
Moreover, using the CloudpickleMaterializer could allow users to upload any kind of object. This could be exploited to upload a malicious file, which could execute arbitrary code on the vulnerable system.
When a pipeline runs, ZenML uses the appropriate materializers to save and load artifacts using the ZenML fileio system (built to work across multiple artifact stores). This not only simplifies the process of working with different data formats and storage systems but also enables artifact caching and lineage tracking. You can see an example of a default materializer (the numpy materializer) in action here.
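To illustrate the save/load contract that materializers implement, here is a schematic, standard-library-only sketch. It deliberately does not use the real zenml.materializers.BaseMaterializer API; the class name and methods are hypothetical stand-ins showing the idea of persisting an artifact inside a unique directory:

```python
import json
import os
import tempfile

class JSONMaterializerSketch:
    """Toy stand-in for a materializer: persists dicts as JSON inside
    a unique directory, mimicking how materializers write artifacts
    into the artifact store."""

    def __init__(self, uri: str) -> None:
        self.uri = uri  # directory assigned to this artifact version

    def save(self, data: dict) -> None:
        with open(os.path.join(self.uri, "data.json"), "w") as f:
            json.dump(data, f)

    def load(self) -> dict:
        with open(os.path.join(self.uri, "data.json")) as f:
            return json.load(f)

artifact_dir = tempfile.mkdtemp()
materializer = JSONMaterializerSketch(artifact_dir)
materializer.save({"accuracy": 0.95})
print(materializer.load())  # {'accuracy': 0.95}
```

A real custom materializer would additionally declare the data types it handles and use the ZenML fileio system so it works across artifact stores.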
PreviousHandle Data/Artifacts
NextReturn multiple outputs from a step
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/artifact-versioning | 318 |
Troubleshoot the deployed server
Troubleshooting tips for your ZenML deployment
In this document, we will go over some common issues that you might face when deploying ZenML and how to solve them.
Viewing logs
Analyzing logs is a great way to debug issues. Depending on whether you have a Kubernetes (using Helm or zenml deploy) or a Docker deployment, you can view the logs in different ways.
If you are using Kubernetes, you can view the logs of the ZenML server using the following method:
Check all pods that are running your ZenML deployment.
kubectl -n <KUBERNETES_NAMESPACE> get pods
If you see that the pods aren't running, you can use the command below to get the logs for all pods at once.
kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml
Note that the error can either be from the zenml-db-init container that connects to the MySQL database or from the zenml container that runs the server code. If the get pods command shows that the pod is failing in the Init state then use zenml-db-init as the container name, otherwise use zenml.
kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME>
You can also use the --tail flag to limit the number of lines to show or the --follow flag to follow the logs in real-time.
If you are using Docker, you can view the logs of the ZenML server using the following method:
If you used the zenml up --docker CLI command to deploy the Docker ZenML server, you can check the logs with the command:
zenml logs -f
If you used the docker run command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker logs zenml -f
If you used the docker compose command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker compose -p zenml logs -f
Fixing database connection problems | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-your-deployed-server | 432 |
""
train_dataloader, test_dataloader = importer()
model = trainer(train_dataloader)
accuracy = evaluator(test_dataloader=test_dataloader, model=model)
bento = bento_builder(model=model)
@pipeline
def local_deploy_pipeline(
bento_loader,
deployer,
):
"""Link all the steps and artifacts together"""
bento = bento_loader()
deployer(deploy_decision=decision, bento=bento)
Predicting with the local deployed model
Once the model has been deployed, we can use the BentoML client to send requests to it. ZenML will automatically create a BentoML client for you, and you can use it to send requests by simply calling the service's predict method, passing the input data and the API function name.
The following example shows how to use the BentoML client to send requests to the deployed model.
@step
def predictor(
inference_data: Dict[str, List],
service: BentoMLDeploymentService,
) -> None:
"""Run an inference request against the BentoML prediction service.
Args:
service: The BentoML service.
inference_data: The data to predict.
"""
service.start(timeout=10) # should be a NOP if already started
for img, data in inference_data.items():
prediction = service.predict("predict_ndarray", np.array(data))
result = to_labels(prediction[0])
rich_print(f"Prediction for {img} is {result}")
Deploying and testing locally is a great way to get started and test your model. However, a real-world scenario will most likely require you to deploy your model to a remote environment. The next section will show you how to deploy the Bento you built with ZenML pipelines to a cloud environment using the bentoctl CLI.
From Local to Cloud with bentoctl
Bentoctl helps deploy any machine learning models as production-ready API endpoints into the cloud. It is a command line tool that provides a simple interface to manage your BentoML bundles.
The bentoctl CLI provides a list of operators which are plugins that interact with cloud services, some of these operators are:
AWS Lambda | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/bentoml | 436 |
ser. For more information, see this documentation. For more information on user federation tokens, session policies, and the GetFederationToken AWS API, see the official AWS documentation on the subject.
For more information about the difference between this method and the AWS IAM Role authentication method, consult this AWS documentation page.
The following assumes the local AWS CLI has a connectors AWS CLI profile already configured with an AWS Secret Key:
AWS_PROFILE=connectors zenml service-connector register aws-federation-token --type aws --auth-method federation-token --auto-configure
Example Command Output
β Έ Registering service connector 'aws-federation-token'...
Successfully registered service connector `aws-federation-token` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
The Service Connector configuration shows long-lived credentials have been picked up from the local AWS CLI configuration:
zenml service-connector describe aws-federation-token
Example Command Output | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 465 |
ckets).
s3:ListBucket
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:ListAllMyBuckets
If set, the resource name must identify an S3 bucket using one of the following
formats:
S3 bucket URI (canonical resource name): s3://{bucket-name}
S3 bucket ARN: arn:aws:s3:::{bucket-name}
S3 bucket name: {bucket-name}
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
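For illustration, the three accepted resource-name formats can be normalized to the plain bucket name with a small helper. This function is hypothetical, not part of ZenML:

```python
def s3_bucket_name(resource_name: str) -> str:
    """Accept an S3 bucket URI, an S3 bucket ARN, or a bare bucket
    name, and return the plain bucket name."""
    if resource_name.startswith("s3://"):
        return resource_name[len("s3://"):].split("/")[0]
    if resource_name.startswith("arn:aws:s3:::"):
        return resource_name[len("arn:aws:s3:::"):]
    return resource_name

assert s3_bucket_name("s3://zenfiles") == "zenfiles"
assert s3_bucket_name("arn:aws:s3:::zenfiles") == "zenfiles"
assert s3_bucket_name("zenfiles") == "zenfiles"
```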
Dashboard equivalent:
Displaying information about the AWS Session Token authentication method:
zenml service-connector describe-type aws --auth-method session-token
Example Command Output
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β π AWS Session Token (auth method: session-token) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Supports issuing temporary credentials: True
Generates temporary session STS tokens for IAM users. The connector needs to be
configured with an AWS secret key associated with an IAM user or AWS account
root user (not recommended). The connector will generate temporary STS tokens
upon request by calling the GetSessionToken STS API.
These STS tokens have an expiration period longer that those issued through the
AWS IAM Role authentication method and are more suitable for long-running
processes that cannot automatically re-generate credentials upon expiration.
An AWS region is required and the connector may only be used to access AWS
resources in the specified region.
The default expiration period for generated STS tokens is 12 hours with a
minimum of 15 minutes and a maximum of 36 hours. Temporary credentials obtained
by using the AWS account root user credentials (not recommended) have a maximum
duration of 1 hour.
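The expiration bounds described above can be sketched as a simple clamping rule. This is illustrative only; the actual enforcement happens on the AWS side when the STS token is issued:

```python
def clamp_sts_duration(requested_seconds: int, root_user: bool = False) -> int:
    """Clamp a requested GetSessionToken duration to the documented
    bounds: 15 minutes minimum, 36 hours maximum (1 hour maximum when
    using AWS account root user credentials)."""
    minimum = 15 * 60
    maximum = 1 * 3600 if root_user else 36 * 3600
    return max(minimum, min(requested_seconds, maximum))

assert clamp_sts_duration(12 * 3600) == 12 * 3600   # default of 12 hours is in range
assert clamp_sts_duration(48 * 3600) == 36 * 3600   # capped at 36 hours
assert clamp_sts_duration(60) == 15 * 60            # floored at 15 minutes
assert clamp_sts_duration(2 * 3600, root_user=True) == 3600  # root user cap
```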
As a precaution, when long-lived credentials (i.e. AWS Secret Keys) are detected
on your environment by the Service Connector during auto-configuration, this
authentication method is automatically chosen instead of the AWS Secret Key
authentication method alternative. | how-to | https://docs.zenml.io/how-to/auth-management | 474 |
Improve retrieval by finetuning embeddings
Finetune embeddings to improve retrieval performance.
π§ This guide is a work in progress. Please check back soon for updates.
Coming soon!
PreviousEvaluating reranking performance
NextFinetuning LLMs with ZenML
Last updated 2 months ago | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/finetuning-embeddings | 64 |
with same pipeline name, step name and model name
existing_services = model_deployer.find_model_server(
    pipeline_name=pipeline_name,
    pipeline_step_name=pipeline_step_name,
    model_name=model_name,
)
if not existing_services:
    raise RuntimeError(
        f"No MLflow prediction service deployed by step "
        f"'{pipeline_step_name}' in pipeline '{pipeline_name}' with name "
        f"'{model_name}' is currently running."
    )
service = existing_services[0]

# Let's try to run an inference request against the prediction service
payload = json.dumps(
    {
        "inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]},
        "params": {
            "temperature": 0.5,
            "max_tokens": 20,
        },
    }
)
response = requests.post(
    url=service.get_prediction_url(),
    data=payload,
    headers={"Content-Type": "application/json"},
)
response.json()
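For illustration, the request payload used above can be constructed and inspected with only the standard library (no running prediction service required):

```python
import json

# Build the same body the MLflow scoring request above sends:
payload = json.dumps(
    {
        "inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]},
        "params": {"temperature": 0.5, "max_tokens": 20},
    }
)

# Round-trip to confirm the structure before posting it anywhere:
body = json.loads(payload)
assert set(body) == {"inputs", "params"}
assert body["params"]["max_tokens"] == 20
```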
Within the same pipeline, you can use the service from the previous step to run inference, this time using the pre-built predict method:
from typing_extensions import Annotated
import numpy as np
from zenml import step
from zenml.integrations.mlflow.services import MLFlowDeploymentService
# Use the service for inference
@step
def predictor(
service: MLFlowDeploymentService,
data: np.ndarray,
) -> Annotated[np.ndarray, "predictions"]:
"""Run an inference request against a prediction service"""
prediction = service.predict(data)
prediction = prediction.argmax(axis=-1)
return prediction
For more information and a full list of configurable attributes of the MLflow Model Deployer, check out the SDK Docs .
PreviousModel Deployers
NextSeldon
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/mlflow | 336 |
ds until the pipeline shows up in the Airflow UI). The ability to provision resources using the zenml stack up command is deprecated and will be removed in a future release. While it is still available for the Airflow orchestrator, we recommend following the steps to set up a local Airflow server manually.
Install the apache-airflow package in your Python environment where ZenML is installed.
The Airflow environment variables are used to configure the behavior of the Airflow server. The following variables are particularly important to set:
AIRFLOW_HOME: This variable defines the location where the Airflow server stores its database and configuration files. The default value is ~/airflow.
AIRFLOW__CORE__DAGS_FOLDER: This variable defines the location where the Airflow server looks for DAG files. The default value is <AIRFLOW_HOME>/dags.
AIRFLOW__CORE__LOAD_EXAMPLES: This variable controls whether the Airflow server should load the default set of example DAGs. The default value is false, which means that the example DAGs will not be loaded.
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: This variable controls how often the Airflow scheduler checks for new or updated DAGs. By default, the scheduler will check for new DAGs every 30 seconds. This variable can be used to increase or decrease the frequency of the checks, depending on the specific needs of your pipeline.
export AIRFLOW_HOME=...
export AIRFLOW__CORE__DAGS_FOLDER=...
export AIRFLOW__CORE__LOAD_EXAMPLES=false
export AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL=10
# Prevent crashes during forking on MacOS
# https://github.com/apache/airflow/issues/28487
export no_proxy=*
Run airflow standalone to initialize the database, create a user, and start all components for you.
When using the Airflow orchestrator with a remote deployment, you'll additionally need:
A remote ZenML server deployed to the cloud. See the deployment guide for more information.
A deployed Airflow server. See the deployment section for more information. | stack-components | https://docs.zenml.io/stack-components/orchestrators/airflow | 428 |
β                        β s3://zenml-demos                              β
β                        β s3://zenml-generative-chat                    β
β β s3://zenml-public-datasets β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
No credentials are stored with the Service Connector:
zenml service-connector describe aws-implicit
Example Command Output
Service connector 'aws-implicit' of type 'aws' with id 'e3853748-34a0-4d78-8006-00422ad32884' is owned by user 'default' and is 'private'.
'aws-implicit' aws Service Connector Details
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β 9a810521-ef41-4e45-bb48-8569c5943dc6 β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-implicit β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β implicit β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β πΆ aws-generic, π¦ s3-bucket, π kubernetes-cluster, π³ docker-registry β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 514 |
ow the recommendations from the Project templates. In steps/alerts/notify_on.py, you will find a step to notify the user about success and a function used to notify the user about step failure using the Alerter from the active stack.
We use @step for success notification to only notify the user about a fully successful pipeline run and not about every successful step.
Inside this code file, you can see how developers can work with the Alerter component to send notification messages across configured channels:
from zenml.client import Client
from zenml import get_step_context
alerter = Client().active_stack.alerter
def notify_on_failure() -> None:
"""Notifies user on step failure. Used in Hook."""
step_context = get_step_context()
if alerter and step_context.pipeline_run.config.extra["notify_on_failure"]:
alerter.post(message=build_message(status="failed"))
If the Alerter component is not present in the stack, we suppress the notification, but you can also dump it to the log as an error using:
from zenml.client import Client
from zenml.logger import get_logger
from zenml import get_step_context
logger = get_logger(__name__)
alerter = Client().active_stack.alerter
def notify_on_failure() -> None:
"""Notifies user on step failure. Used in Hook."""
step_context = get_step_context()
if step_context.pipeline_run.config.extra["notify_on_failure"]:
if alerter:
alerter.post(message=build_message(status="failed"))
else:
logger.error(build_message(status="failed"))
Using the OpenAI ChatGPT failure hook
The OpenAI ChatGPT failure hook is a hook that uses the OpenAI integration to generate a possible fix for whatever exception caused the step to fail. It is quite easy to use. (You will need a valid OpenAI API key that has correctly set up billing for this.)
Note that using this integration will incur charges on your OpenAI account.
First, ensure that you have the OpenAI integration installed and have stored your API key within a ZenML secret:
zenml integration install openai | how-to | https://docs.zenml.io/how-to/build-pipelines/use-failure-success-hooks | 422 |
ggingFaceModelDeployer.get_active_model_deployer()
# fetch existing services with same pipeline name, step name and model name
existing_services = model_deployer.find_model_server(
    pipeline_name=pipeline_name,
    pipeline_step_name=pipeline_step_name,
    model_name=model_name,
    running=running,
)
if not existing_services:
    raise RuntimeError(
        f"No Hugging Face inference endpoint deployed by step "
        f"'{pipeline_step_name}' in pipeline '{pipeline_name}' with name "
        f"'{model_name}' is currently running."
    )
return existing_services[0]
# Use the service for inference
@step
def predictor(
service: HuggingFaceDeploymentService,
data: str
) -> Annotated[str, "predictions"]:
"""Run an inference request against a prediction service"""
prediction = service.predict(data)
return prediction
@pipeline
def huggingface_deployment_inference_pipeline(
pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step",
):
inference_data = ...
model_deployment_service = prediction_service_loader(
    pipeline_name=pipeline_name,
    pipeline_step_name=pipeline_step_name,
)
predictions = predictor(model_deployment_service, inference_data)
For more information and a full list of configurable attributes of the Hugging Face Model Deployer, check out the SDK Docs.
PreviousBentoML
NextDevelop a Custom Model Deployer
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/huggingface | 282 |
b.com/your-username/your-template.git your-project
Replace https://github.com/your-username/your-template.git with the URL of your template repository, and your-project with the name of the new project you want to create.
Use your template with ZenML. Once your template is ready, you can use it with the zenml init command:
zenml init --template https://github.com/your-username/your-template.git
Replace https://github.com/your-username/your-template.git with the URL of your template repository.
If you want to use a specific version of your template, you can use the --template-tag option to specify the git tag of the version you want to use:
zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0
Replace v1.0.0 with the git tag of the version you want to use.
That's it! Now you have your own ZenML project template that you can use to quickly set up new ML projects. Remember to keep your template up-to-date with the latest best practices and changes in your ML workflows.
Our Production Guide documentation is built around the E2E Batch project template code. Most examples will be based on it, so we highly recommend you install the e2e_batch template with the --template-with-defaults flag before diving deeper into this section, so you can follow along using your own local environment.
mkdir e2e_batch
cd e2e_batch
zenml init --template e2e_batch --template-with-defaults
PreviousConnect your git repository
NextBest practices
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/setting-up-a-project-repository/using-project-templates | 343 |
igure the local Generic Azure resource client/SDK.
Stack Components use
The Azure Artifact Store Stack Component can be connected to a remote Azure blob storage container through an Azure Service Connector.
The Azure Service Connector can also be used with any Orchestrator or Model Deployer stack component flavor that relies on a Kubernetes clusters to manage workloads. This allows AKS Kubernetes container workloads to be managed without the need to configure and maintain explicit Azure or Kubernetes kubectl configuration contexts and credentials in the target environment or in the Stack Component itself.
Similarly, Container Registry Stack Components can be connected to a ACR Container Registry through an Azure Service Connector. This allows container images to be built and published to private ACR container registries without the need to configure explicit Azure credentials in the target environment or the Stack Component.
End-to-end examples
This is an example of an end-to-end workflow involving Service Connectors that uses a single multi-type Azure Service Connector to give access to multiple resources for multiple Stack Components. A complete ZenML Stack is registered composed of the following Stack Components, all connected through the same Service Connector:
a Kubernetes Orchestrator connected to an AKS Kubernetes cluster
an Azure Blob Storage Artifact Store connected to an Azure blob storage container
an Azure Container Registry connected to an ACR container registry
a local Image Builder
As a last step, a simple pipeline is run on the resulting Stack.
This example needs to use a remote ZenML Server that is reachable from Azure.
Configure an Azure service principal with a client secret and give it permissions to access an Azure blob storage container, an AKS Kubernetes cluster and an ACR container registry. Also make sure you have the Azure ZenML integration installed:
zenml integration install -y azure
to your stack:
zenml integration install wandb -y
The Weights & Biases Experiment Tracker needs to be configured with the credentials required to connect to the Weights & Biases platform using one of the available authentication methods.
Authentication Methods
You need to configure the following credentials for authentication to the Weights & Biases platform:
api_key: Mandatory API key token of your Weights & Biases account.
project_name: The name of the project where you're sending the new run. If the project is not specified, the run is put in an "Uncategorized" project.
entity: An entity is a username or team name where you're sending runs. This entity must exist before you can send runs there, so make sure to create your account or team in the UI before starting to log runs. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
This option configures the credentials for the Weights & Biases platform directly as stack component attributes.
This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.
# Register the Weights & Biases experiment tracker
zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \
--entity=<entity> --project_name=<project_name> --api_key=<key>
# Register and set a stack with the new experiment tracker
zenml stack register custom_stack -e wandb_experiment_tracker ... --set
This method requires you to configure a ZenML secret to store the Weights & Biases tracking service credentials securely.
You can create the secret using the zenml secret create command:
zenml secret create wandb_secret \
--entity=<ENTITY> \
--project_name=<PROJECT_NAME> \
--api_key=<API_KEY>
Once the secret is created, you can use it to configure the wandb Experiment Tracker:
# Reference the entity, project and api-key in our experiment tracker component
zenml experiment-tracker register wandb_tracker \
--flavor=wandb \ | stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers/wandb | 420 |
| stacks-and-components | https://docs.zenml.io/stacks-and-components/component-guide/annotators/label-studio | 319 |
import gc
import torch
def cleanup_memory() -> None:
    while gc.collect():
        torch.cuda.empty_cache()
You can then call this function at the beginning of your GPU-enabled steps:
from zenml import step
@step
def training_step(...):
cleanup_memory()
# train a model
Note that resetting the memory cache will potentially affect others using the same GPU, so use this judiciously.
Train across multiple GPUs
ZenML supports training your models with multiple GPUs on a single node. This is useful if you have a large dataset and want to train your model in parallel. The most important thing that you'll have to handle is preventing multiple ZenML instances from being spawned as you split the work among multiple GPUs.
In practice this will probably involve:
creating a script / Python function that contains the logic of training your model (with the specification that this should run in parallel across multiple GPUs)
calling that script / external function from within the step, possibly with some wrapper or helper code to dynamically configure or update the external script function
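The pattern described above can be sketched as follows. This is only an illustration of the shape of the code (the names and the dummy "training" logic are invented for this example, and threads stand in for what would typically be per-GPU worker processes launched via something like `torch.multiprocessing`):

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(gpu_id: int, shard: list) -> float:
    # Placeholder for the per-GPU training logic; here we just
    # average the shard to keep the example self-contained.
    return sum(shard) / len(shard)

def train_across_gpus(data: list, num_gpus: int) -> list:
    # Split the data into one shard per GPU and run each shard in
    # its own worker, so the ZenML step itself is only entered once.
    shards = [data[i::num_gpus] for i in range(num_gpus)]
    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        return list(pool.map(train_shard, range(num_gpus), shards))
```

The wrapper function would then be called from inside your ZenML step, keeping the parallel work contained in the external workers.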
We're aware that this is not the most elegant solution and we're working on a better option with built-in support for this task. If this is something you're struggling with and need support getting the step code working, please do connect with us on Slack and we'll do our best to help you out.
PreviousDefine where an image is built
NextControl logging
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/training-with-gpus | 281 |
The output is something like the following:
Question: What are Plasma Phoenixes?
Answer: Plasma Phoenixes are majestic creatures made of pure energy that soar above the chromatic canyons of ZenML World. They leave fiery trails behind them, painting the sky with dazzling displays of colors.
Question: What kinds of creatures live on the prismatic shores of ZenML World?
Answer: On the prismatic shores of ZenML World, you can find crystalline crabs scuttling and burrowing with their transparent exoskeletons, which refract light into a kaleidoscope of hues.
Question: What is the capital of Panglossia?
Answer: The capital of Panglossia is not mentioned in the provided context.
The implementation above is by no means sophisticated or performant, but it's simple enough that you can see all the moving parts. Our tokenization process consists of splitting the text into individual words.
The way we check for similarity between the question / query and the chunks of text is extremely naive and inefficient. The similarity between the query and the current chunk is calculated using the Jaccard similarity coefficient. This coefficient measures the similarity between two sets and is defined as the size of the intersection divided by the size of the union of the two sets. So we count the number of words that are common between the query and the chunk and divide it by the total number of unique words in both the query and the chunk. There are much better ways of measuring the similarity between two pieces of text, such as using embeddings or other more sophisticated techniques, but this example is kept simple for illustrative purposes.
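The Jaccard computation described above fits in a few lines of plain Python. The function name here is illustrative, not necessarily the one used in the example code:

```python
def jaccard_similarity(query: str, chunk: str) -> float:
    # Tokenize by splitting on whitespace, as in the naive example.
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    if not query_words or not chunk_words:
        return 0.0
    # Size of the intersection divided by the size of the union.
    intersection = query_words & chunk_words
    union = query_words | chunk_words
    return len(intersection) / len(union)
```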
The rest of this guide will showcase a more performant and scalable way of performing the same task using ZenML. If you ever are unsure why we're doing something, feel free to return to this example for the high-level overview.
Develop a custom data validator
How to develop a custom data validator
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base abstraction in progress!
We are actively working on the base abstraction for the Data Validators, which will be available soon. As a result, their extension is not recommended at the moment. When you are selecting a data validator for your stack, you can use one of the existing flavors.
If you need to implement your own Data Validator flavor, you can still do so, but keep in mind that you may have to refactor it when the base abstraction is updated.
ZenML comes equipped with Data Validator implementations that integrate a variety of data logging and validation libraries, frameworks and platforms. However, if you need to use a different library or service as a backend for your ZenML Data Validator, you can extend ZenML to provide your own custom Data Validator implementation.
Build your own custom data validator
If you want to implement your own custom Data Validator, you can follow the following steps:
Create a class which inherits from the BaseDataValidator class and override one or more of the abstract methods, depending on the capabilities of the underlying library/service that you want to integrate.
If you need any configuration, you can create a class which inherits from the BaseDataValidatorConfig class.
Bring both of these classes together by inheriting from the BaseDataValidatorFlavor.
(Optional) You should also provide some standard steps that others can easily insert into their pipelines for instant access to data validation features.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation: | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/custom | 361 |
Use pipeline/step parameters
Steps and pipelines can be parameterized just like any other python function that you are familiar with.
Parameters for your steps
When calling a step in a pipeline, the inputs provided to the step function can either be an artifact or a parameter. An artifact represents the output of another step that was executed as part of the same pipeline and serves as a means to share data between steps. Parameters, on the other hand, are values provided explicitly when invoking a step. They are not dependent on the output of other steps and allow you to parameterize the behavior of your steps.
In order to allow the configuration of your steps using a configuration file, only values that can be serialized to JSON using Pydantic can be passed as parameters. If you want to pass other non-JSON-serializable objects such as NumPy arrays to your steps, use External Artifacts instead.
from zenml import step, pipeline
@step
def my_step(input_1: int, input_2: int) -> None:
pass
@pipeline
def my_pipeline():
int_artifact = some_other_step()
# We supply the value of `input_1` as an artifact and
# `input_2` as a parameter
my_step(input_1=int_artifact, input_2=42)
# We could also call the step with two artifacts or two
# parameters instead:
# my_step(input_1=int_artifact, input_2=int_artifact)
# my_step(input_1=1, input_2=2)
Parameters of steps and pipelines can also be passed in using YAML configuration files. The following configuration file and Python code can work together and give you the flexibility to update configuration only in YAML file, once needed:
# config.yaml
# these are parameters of the pipeline
parameters:
environment: production
steps:
my_step:
# these are parameters of the step `my_step`
parameters:
input_2: 42
from zenml import step, pipeline
@step
def my_step(input_1: int, input_2: int) -> None:
...
# input `environment` will come from the configuration file,
# and it is evaluated to `production` | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/use-pipeline-step-parameters | 451 |
entials JSON to clients instead (not recommended). A GCP project is required and the connector may only be used to access GCP resources in the specified project. This project must be the same as the one for which the external account was configured.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable configured to point to an external account key JSON file, it will be automatically picked up when auto-configuration is used.
The following assumes the following prerequisites are met, as covered in the GCP documentation on how to configure workload identity federation with AWS:
the ZenML server is deployed in AWS in an EKS cluster (or any other AWS compute environment)
the ZenML server EKS pods are associated with an AWS IAM role by means of an IAM OIDC provider, as covered in the AWS documentation on how to associate an IAM role with a service account. Alternatively, the IAM role associated with the EKS/EC2 nodes can be used instead. This AWS IAM role provides the implicit AWS IAM identity and credentials that will be used to authenticate to GCP services.
a GCP workload identity pool and AWS provider are configured for the GCP project where the target resources are located, as covered in the GCP documentation on how to configure workload identity federation with AWS.
a GCP service account is configured with permissions to access the target resources and granted the roles/iam.workloadIdentityUser role for the workload identity pool and AWS provider
a GCP external account JSON file is generated for the GCP service account. This is used to configure the GCP connector.
zenml service-connector register gcp-workload-identity --type gcp \
--auth-method external-account --project_id=zenml-core \
[email protected]
Example Command Output
Successfully registered service connector `gcp-workload-identity` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 427 |
ant feedback of actual contact with the raw data.)Samples generated for inference: Your model will be making predictions on real-world data being passed in. If you store and label this data, youβll gain a valuable set of data that you can use to compare your labels with what the model was predicting, another possible way to flag drifts of various kinds. This data can then (subject to privacy/user consent) be used in retraining or fine-tuning your model.
Other ad hoc interventions: You will probably have some kind of process to identify bad labels, or to find the kinds of examples that your model finds really difficult to make correct predictions. For these, and for areas where you have clear class imbalances, you might want to do ad hoc annotation to supplement the raw materials your model has to learn from.
ZenML currently offers standard steps that help you tackle the above use cases, but the stack component and abstraction will continue to be developed to make it easier to use.
When to use it
The annotator is an optional stack component in the ZenML Stack. We designed our abstraction to fit into the larger ML use cases, particularly the training and deployment parts of the lifecycle.
The core parts of the annotation workflow include:
using labels or annotations in your training steps in a seamless way
handling the versioning of annotation data
allowing for the conversion of annotation data to and from custom formats
handling annotator-specific tasks, for example, the generation of UI config files that Label Studio requires for the web annotation interface
List of available annotators
For production use cases, some more flavors can be found in specific integrations modules. In terms of annotators, ZenML features integrations with label_studio and pigeon. | stack-components | https://docs.zenml.io/v/docs/stack-components/annotators | 348 |
eral means of accessing any AWS service by issuing pre-authenticated boto3 sessions to clients. Additionally, the connector can
handle specialized authentication for S3, Docker and Kubernetes Python clients.
It also allows for the configuration of local Docker and Kubernetes CLIs.
The AWS Service Connector is part of the AWS ZenML integration. You can either
install the entire integration or use a pypi extra to install it independently
of the integration:
pip install "zenml[connectors-aws]" installs only prerequisites for the AWS
Service Connector Type
zenml integration install aws installs the entire AWS ZenML integration
It is not required to install and set up the AWS CLI on your local machine to
use the AWS Service Connector to link Stack Components to AWS resources and
services. However, it is recommended to do so if you are looking for a quick
setup that includes using the auto-configuration Service Connector features.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
zenml service-connector describe-type aws --resource-type kubernetes-cluster
Example Command Output
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β π AWS EKS Kubernetes cluster (resource type: kubernetes-cluster) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Authentication methods: implicit, secret-key, sts-token, iam-role,
session-token, federation-token
Supports resource instances: True
Authentication methods:
π implicit
π secret-key
π sts-token
π iam-role
π session-token
π federation-token
Allows users to access an EKS cluster as a standard Kubernetes cluster resource.
When used by Stack Components, they are provided a pre-authenticated
python-kubernetes client instance.
The configured credentials must have at least the following AWS IAM permissions
associated with the ARNs of EKS clusters that the connector will be allowed to | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 450 |
llowing credentials for authentication to Neptune:
api_token: API key token of your Neptune account. You can create a free Neptune account here. If left blank, Neptune will attempt to retrieve the token from your environment variables.
project: The name of the project where you're sending the new run, in the form "workspace-name/project-name". If the project is not specified, Neptune will attempt to retrieve it from your environment variables.
This option configures the credentials for neptune.ai directly as stack component attributes.
This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.
# Register the Neptune experiment tracker
zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \
--project=<project_name> --api_token=<token>
# Register and set a stack with the new experiment tracker
zenml stack register custom_stack -e neptune_experiment_tracker ... --set
This method requires you to configure a ZenML secret to store the Neptune tracking service credentials securely.
You can create the secret using the zenml secret create command:
zenml secret create neptune_secret \
--project=<PROJECT> \
--api_token=<API_TOKEN>
Once the secret is created, you can use it to configure the neptune Experiment Tracker:
# Reference the project and api-token in our experiment tracker component
zenml experiment-tracker register neptune_experiment_tracker \
--flavor=neptune \
--project={{neptune_secret.project}} \
--api_token={{neptune_secret.api_token}}
...
Read more about ZenML Secrets in the ZenML documentation.
For more, up-to-date information on the Neptune Experiment Tracker implementation and its configuration, you can have a look at the SDK docs .
How do you use it? | stack-components | https://docs.zenml.io/stack-components/experiment-trackers/neptune | 356 |
from zenml.pipelines import pipeline
from zenml.steps import step
@step
def my_step() -> None:
    ...
@pipeline
def my_pipeline(my_step):
my_step()
# Old: Create an instance of the pipeline and then call `pipeline_instance.configure(...)`
pipeline_instance = my_pipeline(my_step=my_step())
pipeline_instance.configure(enable_cache=False)
from zenml import pipeline, step
@step
def my_step() -> None:
...
@pipeline
def my_pipeline():
my_step()
# New: Call the `with_options(...)` method on the pipeline
my_pipeline = my_pipeline.with_options(enable_cache=False)
Running pipelines
from zenml.pipelines import pipeline
from zenml.steps import step
@step
def my_step() -> None:
...
@pipeline
def my_pipeline(my_step):
my_step()
# Old: Create an instance of the pipeline and then call `pipeline_instance.run(...)`
pipeline_instance = my_pipeline(my_step=my_step())
pipeline_instance.run(...)
from zenml import pipeline, step
@step
def my_step() -> None:
...
@pipeline
def my_pipeline():
my_step()
my_pipeline() # New: Call the pipeline
Scheduling pipelines
from zenml.pipelines import pipeline, Schedule
from zenml.steps import step
@step
def my_step() -> None:
...
@pipeline
def my_pipeline(my_step):
my_step()
# Old: Create an instance of the pipeline and then call `pipeline_instance.run(schedule=...)`
schedule = Schedule(...)
pipeline_instance = my_pipeline(my_step=my_step())
pipeline_instance.run(schedule=schedule)
from zenml.pipelines import Schedule
from zenml import pipeline, step
@step
def my_step() -> None:
...
@pipeline
def my_pipeline():
my_step()
# New: Set the schedule using the `pipeline.with_options(...)` method and then run it
schedule = Schedule(...)
my_pipeline = my_pipeline.with_options(schedule=schedule)
my_pipeline()
Check out this page for more information on how to schedule your pipelines.
Fetching pipelines after execution
pipeline: PipelineView = zenml.post_execution.get_pipeline("first_pipeline")
last_run: PipelineRunView = pipeline.runs[0]
# OR: last_run = my_pipeline.get_runs()[0] | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-forty | 453 |
_settings})
def my_pipeline() -> None:
    my_step()
# Or configure the pipeline options
my_pipeline = my_pipeline.with_options(
    settings={"docker": docker_settings}
)
Configuring them on a step gives you more fine-grained control and enables you to build separate specialized Docker images for different steps of your pipelines:
docker_settings = DockerSettings()
# Either add it to the decorator
@step(settings={"docker": docker_settings})
def my_step() -> None:
pass
# Or configure the step options
my_step = my_step.with_options(
settings={"docker": docker_settings}
Using a YAML configuration file as described here:
settings:
docker:
...
steps:
step_name:
settings:
docker:
...
Check out this page for more information on the hierarchy and precedence of the various ways in which you can supply the settings.
Using a custom parent image
By default, ZenML performs all the steps described above on top of the official ZenML image for the Python and ZenML version in the active Python environment. To have more control over the entire environment used to execute your pipelines, you can either specify a custom pre-built parent image or a Dockerfile that ZenML uses to build a parent image for you.
If you're going to use a custom parent image (either pre-built or by specifying a Dockerfile), you need to make sure that it has Python, pip, and ZenML installed for it to work. If you need a starting point, you can take a look at the Dockerfile that ZenML uses here.
Using a pre-built parent image
To use a static parent image (e.g., with internal dependencies installed) that doesn't need to be rebuilt on every pipeline run, specify it in the Docker settings for your pipeline:
docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag")
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
To use this image directly to run your steps without including any code or installing any requirements on top of it, skip the Docker builds by specifying it in the Docker settings: | how-to | https://docs.zenml.io/how-to/customize-docker-builds/docker-settings-on-a-pipeline | 416 |
Reranking for better retrieval
Add reranking to your RAG inference for better retrieval performance.
Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a reranker to your RAG inference pipeline in ZenML.
In previous sections, we set up the overall workflow, from data ingestion and preprocessing to embeddings generation and retrieval. We then set up some basic evaluation metrics to assess the performance of our retrieval system. A reranker is a way to squeeze a bit of extra performance out of the system by reordering the retrieved documents based on additional features or scores.
As you can see, reranking is an optional addition we make to what we've already set up. It's not strictly necessary, but it can help improve the relevance and quality of the retrieved documents, which in turn can lead to better responses from the LLM. Let's dive in!
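As a toy illustration of the core idea (this is not the ZenML implementation; the names and the word-overlap scorer below are invented for the example), a reranker simply reorders the retrieved documents by a fresh relevance score:

```python
def rerank(query: str, documents: list, scorer) -> list:
    # Score every (query, document) pair with the reranking model,
    # then return the documents sorted by that score, best first.
    scored = [(scorer(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]

def overlap_score(query: str, doc: str) -> int:
    # Dummy scorer: number of query words that appear in the document.
    return len(set(query.split()) & set(doc.split()))
```

In practice the scorer would be a cross-encoder or dedicated reranking model that examines the query and document together, which is what makes reranking more accurate than the initial retrieval scores alone.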
cluster name. This is what we call Resource Names. Resource Names make it generally easy to identify a particular resource instance accessible through a Service Connector, especially when used together with the Service Connector name and the Resource Type. The following ZenML CLI command output shows a few examples featuring Resource Names for S3 buckets, EKS clusters, ECR registries and general Kubernetes clusters. As you can see, the way we name resources varies from implementation to implementation and resource type to resource type:
zenml service-connector list-resources
Example Command Output
The following resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β 8d307b98-f125-4d7a-b5d5-924c07ba04bb β aws-session-docker β πΆ aws β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β d1e5ecf5-1531-4507-bbf5-be0a114907a5 β aws-session-s3 β πΆ aws β π¦ s3-bucket β s3://public-flavor-logos β
β β β β β s3://sagemaker-us-east-1-715803424590 β | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 477 |
Promote a Model
Stages and Promotion
Model stages are a way to model the progress that different versions take through the various stages in their lifecycle. A ZenML Model version can be promoted to a different stage through the Dashboard, the ZenML CLI or code.
This is a way to signify the progression of your model version through the ML lifecycle and is an extra layer of metadata to identify the state of a particular model version. Possible options for stages are:
staging: This version is staged for production.
production: This version is running in a production setting.
latest: The latest version of the model. This is a virtual stage to retrieve the latest version only - versions cannot be promoted to latest.
archived: This is archived and no longer relevant. This stage occurs when a model moves out of any other stage.
Your own particular business or use case logic will determine which model version you choose to promote, and you can do this in the following ways:
Promotion via CLI
This is probably the least common way that you'll use, but it's still possible and perhaps might be useful for some use cases or within a CI system, for example. You simply use the following CLI subcommand:
zenml model version update iris_logistic_regression --stage=...
Promotion via Cloud Dashboard
This feature is not yet available, but soon you will be able to promote your model versions directly from the ZenML Cloud dashboard.
Promotion via Python SDK
This is the most common way that you'll use to promote your models. You can see how you would do this here:
from zenml import Model
MODEL_NAME = "iris_logistic_regression"
from zenml.enums import ModelStages
model = Model(name=MODEL_NAME, version="1.2.3")
model.set_stage(stage=ModelStages.PRODUCTION)
# get latest model and set it as Staging
# (if there is current Staging version it will get Archived)
latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST)
latest_model.set_stage(stage=ModelStages.STAGING) | how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/promote-a-model | 423 |
or.local_docker": LocalDockerOrchestratorSettings(run_args={"cpu_count": 3})
}
@pipeline(settings=settings)
def simple_pipeline():
return_one()
Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential to enable CUDA so the GPU can deliver its full acceleration.
in the From Local to Cloud with bentoctl section. The bentoctl integration implementation is still in progress and will be available soon. The integration will allow you to deploy your models to a specific cloud provider with just a few lines of code using ZenML built-in steps.
How do you deploy it?
Within ZenML you can quickly get started with BentoML by simply creating Model Deployer Stack Component with the BentoML flavor. To do so you'll need to install the required Python packages on your local machine to be able to deploy your models:
zenml integration install bentoml -y
To register the BentoML model deployer with ZenML you need to run the following command:
zenml model-deployer register bentoml_deployer --flavor=bentoml
The ZenML integration will provision a local HTTP deployment server as a daemon process that will continue to run in the background to serve the latest models and Bentos.
How do you use it?
The recommended flow to use the BentoML model deployer is to first create a BentoML Service, then use the bento_builder_step to build the model and service into a bento bundle, and finally deploy the bundle with the bentoml_model_deployer_step.
BentoML Service and Runner
The first step to being able to deploy your models and use BentoML is to create a bento service which is the main logic that defines how your model will be served, and a bento runner which represents a unit of execution for your model on a remote Python worker.
The following example shows how to create a basic bento service and runner that will be used to serve a basic scikit-learn model.
import numpy as np
import bentoml
from bentoml.io import NumpyNdarray
iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])
@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
result = iris_clf_runner.predict.run(input_series)
return result
ZenML Bento Builder step | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/bentoml | 453 |
-grade deployments.
Installing the mlstacks extra
To install mlstacks, either run pip install mlstacks or pip install "zenml[mlstacks]" to install it along with ZenML.
MLStacks uses Terraform on the backend to manage infrastructure. You will need to have Terraform installed. Please visit the Terraform docs for installation instructions.
MLStacks also uses Helm to deploy Kubernetes resources. You will need to have Helm installed. Please visit the Helm docs for installation instructions.
Deploying a stack component
The ZenML CLI allows you to deploy individual stack components using the deploy subcommand which is implemented for all supported stack components. You can find the list of supported stack components here.
Deploying a stack
For deploying a full stack, use the zenml stack deploy command. See the stack deployment page for more details of which cloud providers and stack components are supported.
How does mlstacks work?
MLStacks is built around the concept of a stack specification. A stack specification is a YAML file that describes the stack and includes references to component specification files. A component specification is a YAML file that describes a component. (Currently all deployments of components (in various combinations) must be defined within the context of a stack.)
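To make the concept concrete, a stack specification is plain YAML referencing component specification files. The sketch below is illustrative only; the exact field names and values may differ from the schema mlstacks expects:

```yaml
# stack_spec.yaml -- illustrative sketch, not a canonical example
spec_version: 1
spec_type: stack
name: my-gcp-stack
provider: gcp
default_region: europe-west3
components:
  # each entry points to a component specification file
  - components/artifact_store.yaml
  - components/orchestrator.yaml
```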
ZenML handles the creation of stack specifications for you when you run one of the deploy subcommands using the CLI. A valid specification is generated and used by mlstacks to deploy your stack using Terraform. The Terraform definitions and state are stored in your global configuration directory along with any state files generated while deploying your stack.
Your configuration directory could be in a number of different places depending on your operating system, but read more about it in the Click docs to see which location applies to your situation.
Deploy stack components individually
Individually deploying different stack components.
take precedence over pipeline-level defined hooks.
To set up the local environment used below, follow the recommendations from the Project templates.
In steps/alerts/notify_on.py, you will find a step to notify the user about success and a function used to notify the user about step failure using the Alerter from the active stack.
We use @step for success notification to only notify the user about a fully successful pipeline run and not about every successful step.
In pipelines/training.py, you can find the usage of a notification step and a function. We will attach a notify_on_failure function directly to the pipeline definition like this:
from zenml import pipeline
@pipeline(
...
on_failure=notify_on_failure,
...
At the very end of the training pipeline, we will execute the notify_on_success step, but only after all other steps have finished - we control this with the after statement as follows:
...
last_step_name = "promote_metric_compare_promoter"
notify_on_success(after=[last_step_name])
...
Accessing step information inside a hook
Similar as for regular ZenML steps, you can use the StepContext to access information about the current pipeline run or step inside your hook function:
from zenml import step, get_step_context
def on_failure(exception: BaseException):
context = get_step_context()
print(context.step_run.name) # Output will be `my_step`
print(context.step_run.config.parameters) # Print parameters of the step
print(type(exception)) # Of type value error
print("Step failed!")
@step(on_failure=on_failure)
def my_step(some_parameter: int = 1):
raise ValueError("My exception")
-client-id","client_secret": "my-client-secret"}).Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the ZENML_SECRETS_STORE_AUTH_METHOD and ZENML_SECRETS_STORE_AUTH_CONFIG variables to use the Azure Service Connector authentication method.
ZENML_SECRETS_STORE_AZURE_CLIENT_ID: The Azure application service principal client ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET: The Azure application service principal client secret to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_TENANT_ID: The Azure application service principal tenant ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
These configuration options are only relevant if you're using HashiCorp Vault as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to hashicorp in order to set this type of secret store.
ZENML_SECRETS_STORE_VAULT_ADDR: The URL of the HashiCorp Vault server to connect to. NOTE: this is the same as setting the VAULT_ADDR environment variable.
ZENML_SECRETS_STORE_VAULT_TOKEN: The token to use to authenticate with the HashiCorp Vault server. NOTE: this is the same as setting the VAULT_TOKEN environment variable.
ZENML_SECRETS_STORE_VAULT_NAMESPACE: The Vault Enterprise namespace. Not required for Vault OSS. NOTE: this is the same as setting the VAULT_NAMESPACE environment variable. | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker | 416 |
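To show how these variables fit together, the sketch below collects the ZENML_SECRETS_STORE_* variables for the HashiCorp backend and checks the required ones. The variable names come from the documentation above; the validation function itself is a made-up illustration, not ZenML's actual configuration loader.

```python
from typing import Dict

# Required and optional variables for the hashicorp secrets store backend.
REQUIRED = ("ZENML_SECRETS_STORE_VAULT_ADDR", "ZENML_SECRETS_STORE_VAULT_TOKEN")
OPTIONAL = ("ZENML_SECRETS_STORE_VAULT_NAMESPACE",)  # Vault Enterprise only

def load_vault_config(env: Dict[str, str]) -> Dict[str, str]:
    """Collect and validate the Vault secrets-store settings from env vars."""
    if env.get("ZENML_SECRETS_STORE_TYPE") != "hashicorp":
        raise ValueError("ZENML_SECRETS_STORE_TYPE must be set to 'hashicorp'")
    config = {var: env[var] for var in OPTIONAL if var in env}
    for var in REQUIRED:
        if var not in env:
            raise ValueError(f"Missing required variable: {var}")
        config[var] = env[var]
    return config

if __name__ == "__main__":
    env = {
        "ZENML_SECRETS_STORE_TYPE": "hashicorp",
        "ZENML_SECRETS_STORE_VAULT_ADDR": "https://vault.example.com:8200",
        "ZENML_SECRETS_STORE_VAULT_TOKEN": "s.example-token",
    }
    print(sorted(load_vault_config(env)))
```

In a real deployment you would read these values from os.environ when the server container starts, rather than passing a dictionary.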
ZenML SaaS
Your one-stop MLOps control plane.
One of the most straightforward paths to start with a deployed ZenML server is to use ZenML Cloud. ZenML Cloud eliminates the need for you to dedicate time and resources to deploy and manage a ZenML server, allowing you to focus primarily on your MLOps workflows.
If you're interested in assessing ZenML Cloud, you can simply create a free account. Learn more about ZenML Cloud on the ZenML Website.
Key features
ZenML Cloud is a Software-as-a-Service (SaaS) platform that enhances the functionalities of the open-source ZenML product. It equips you with a centralized interface to seamlessly launch and manage ZenML server instances. While it remains rooted in the robust open-source offering, ZenML Cloud offers extra features designed to optimize your machine learning workflow.
Managed ZenML Server (Multi-tenancy)
ZenML Cloud simplifies your machine learning workflows, enabling you to deploy a managed instance of ZenML servers with just one click. This eradicates the need to handle infrastructure complexities, making the set-up and management of your machine learning pipelines a breeze. We handle all pertinent system updates and backups, thus ensuring your system stays current and robust, allowing you to zero in on your essential MLOps tasks. As a ZenML Cloud user, you'll also have priority support, giving you the necessary aid to fully utilize the platform.
Maximum data security | getting-started | https://docs.zenml.io/getting-started/zenml-pro/zenml-cloud | 291 |
┠─────────────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ OWNER                   │ default                                                                   ┃
┠─────────────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ WORKSPACE               │ default                                                                   ┃
┠─────────────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ CREATED_AT              │ 2024-01-30 20:44:14.020514                                                ┃
┠─────────────────────────┼───────────────────────────────────────────────────────────────────────────┨
┃ UPDATED_AT              │ 2024-01-30 20:44:14.020516                                                ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Configuration
┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY               │ VALUE                                                                          ┃
┠────────────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ project_id             │ zenml-core                                                                     ┃
┠────────────────────────┼────────────────────────────────────────────────────────────────────────────────┨
┃ external_account_json  │ {                                                                              ┃
┃                        │   "type": "external_account",                                                  ┃
┃                        │   "audience":                                                                  ┃
┃                        │   "//iam.googleapis.com/projects/30267569827/locations/global/workloadIdentityP ┃ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 407 |
Argilla
Annotating data using Argilla.
Argilla is an open-source data curation platform designed to enhance the development of both small and large language models (LLMs) and NLP tasks in general. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
Argilla distinguishes itself through its focus on specific use cases and human-in-the-loop approaches. While it does offer programmatic features, Argilla's core value lies in actively involving human experts in the tool-building process, setting it apart from its competitors.
When would you want to use it?
If you need to label textual data as part of your ML workflow, consider adding the Argilla annotator stack component to your ZenML stack.
We currently support the use of annotation at the various stages described in the main annotators docs page. The Argilla integration is currently built to support annotation using a local (Docker-backed) instance of Argilla as well as a deployed instance of Argilla. There is an easy way to deploy Argilla as a Hugging Face Space, for instance, which is documented in the Argilla documentation.
How to deploy it?
The Argilla Annotator flavor is provided by the Argilla ZenML integration. You need to install it to be able to register it as an Annotator and add it to your stack:
zenml integration install argilla
You can either pass the api_key directly into the zenml annotator register command, or you can register it as a secret and pass the secret name into the command. We recommend the latter approach for security reasons. If you take the latter approach, be sure to register a secret for whichever artifact store you choose, and then pass the name of that secret into the annotator as the --authentication_secret. For example, you'd run: | stack-components | https://docs.zenml.io/stack-components/annotators/argilla | 407 |
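An illustrative command sequence for the secret-based approach might look like the following. The secret name argilla_secrets is just a placeholder, and the exact flag names should be verified against the CLI help for your ZenML version:

```shell
# Store the Argilla API key as a ZenML secret (placeholder values):
zenml secret create argilla_secrets --api_key="<your_argilla_api_key>"

# Register the annotator, referencing the secret by name:
zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets
```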
User Management
In ZenML Pro, there is a slightly different entity hierarchy as compared to the open-source ZenML framework. This document walks you through the key differences and new concepts that are pro-only.
Organizations, Tenants, and Roles
ZenML Cloud arranges various aspects of your work experience around the concept of an Organization. This is the top-most level structure within the ZenML Cloud environment. Generally, an organization contains a group of users and one or more tenants. Tenants are individual, isolated deployments of the ZenML server.
Every user in an organization has a distinct role. Each role configures what they can view, modify, and their level of involvement in collaborative tasks. A role thus helps determine the level of access that a user has within an organization.
The admin has all permissions on an organization. They are allowed to add members, adjust the billing information, and assign roles. The editor can still fully manage tenants and members but is not allowed to access the subscription information or delete the organization. The viewer role gives users view-only access to the tenants within the organization.
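The three roles can be read as nested permission sets. A rough sketch of that relationship (the permission names here are invented for the example; the actual permission model lives inside ZenML Pro):

```python
# Illustrative mapping of ZenML Pro organization roles to permissions.
ROLE_PERMISSIONS = {
    "viewer": {"view_tenants"},
    "editor": {"view_tenants", "manage_tenants", "manage_members"},
    "admin": {"view_tenants", "manage_tenants", "manage_members",
              "manage_billing", "assign_roles", "delete_organization"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role grants a given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(can("editor", "manage_tenants"))  # editors manage tenants
    print(can("editor", "manage_billing"))  # but cannot touch billing
```

Each role strictly extends the one below it, which is why assigning the lowest role that covers a user's tasks is a safe default.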
Inviting Team Members
Inviting users to your organization to work on the organization's tenants is easy. Simply click Add Member in the Organization settings, and give them an initial Role. The User will be sent an invitation email. If a user is part of an organization, they can utilize their login on all tenants they have authority to access.
| getting-started | https://docs.zenml.io/getting-started/zenml-pro/user-management | 310 |
Develop a Custom Model Deployer
Learning how to develop a custom model deployer.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
To deploy and manage your trained machine-learning models, ZenML provides a stack component called Model Deployer. This component is responsible for interacting with the deployment tool, framework, or platform.
When present in a stack, the model deployer can also act as a registry for models that are served with ZenML. You can use the model deployer to list all models that are currently deployed for online inference or filtered according to a particular pipeline run or step, or to suspend, resume or delete an external model server managed through ZenML.
Base Abstraction
In ZenML, the base abstraction of the model deployer is built on top of three major criteria:
It needs to ensure efficient deployment and management of models in accordance with the specific requirements of the serving infrastructure, by holding all the stack-related configuration attributes required to interact with the remote model serving tool, service, or platform.
It needs to implement the continuous deployment logic necessary to deploy models in a way that updates an existing model server that is already serving a previous version of the same model instead of creating a new model server for every new model version (see the deploy_model abstract method). This functionality can be consumed directly from ZenML pipeline steps, but it can also be used outside the pipeline to deploy ad-hoc models. It is also usually coupled with a standard model deployer step, implemented by each integration, that hides the details of the deployment process from the user. | stack-components | https://docs.zenml.io/stack-components/model-deployers/custom | 347 |
πFeature Stores
Managing data in feature stores.
Feature stores allow data teams to serve data via an offline store and an online low-latency store where data is kept in sync between the two. It also offers a centralized registry where features (and feature schemas) are stored for use within a team or wider organization.
As a data scientist working on training your model, your requirements for how you access your batch / 'offline' data will almost certainly be different from how you access that data as part of a real-time or online inference setting. Feast solves the problem of train-serve skew, where those two sources of data diverge from each other.
Feature stores are a relatively recent addition to commonly-used machine learning stacks.
When to use it
The feature store is an optional stack component in the ZenML Stack. A feature store, as a technology, is used to store features and inject them into the process on the server side. This includes:
Productionalize new features
Reuse existing features across multiple pipelines and models
Achieve consistency between training and serving data (avoiding training-serving skew)
Provide a central registry of features and feature schemas
List of available feature stores
For production use cases, more flavors can be found in specific integration modules. In terms of feature stores, ZenML features an integration with Feast.
Feature Store | Flavor | Integration | Notes
FeastFeatureStore | feast | feast | Connect ZenML with an already existing Feast deployment
Custom Implementation | custom | | Extend the feature store abstraction and provide your own implementation
If you would like to see the available flavors for feature stores, you can use the command:
zenml feature-store flavor list
How to use it
The available implementation of the feature store is built on top of the feast integration, which means that using a feature store is no different from what's described on the feast page: How to use it?. | stack-components | https://docs.zenml.io/v/docs/stack-components/feature-stores | 370 |
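To make the training-serving consistency point concrete, here is a toy in-memory feature store (standard library only; nothing resembling Feast's real API) where offline and online reads are backed by the same registry, so the two access paths cannot drift apart:

```python
from typing import Dict, List

class ToyFeatureStore:
    """Toy store: offline (batch) and online reads share one source of truth."""

    def __init__(self) -> None:
        self._rows: Dict[str, Dict[str, float]] = {}

    def ingest(self, entity_id: str, features: Dict[str, float]) -> None:
        self._rows[entity_id] = dict(features)

    def get_offline(self, entity_ids: List[str]) -> List[Dict[str, float]]:
        """Batch retrieval, as used when building a training dataset."""
        return [self._rows[e] for e in entity_ids]

    def get_online(self, entity_id: str) -> Dict[str, float]:
        """Low-latency single-entity retrieval, as used at inference time."""
        return self._rows[entity_id]

store = ToyFeatureStore()
store.ingest("user_1", {"avg_order_value": 42.0, "n_orders": 7.0})

# Training and serving see identical feature values -> no train-serve skew.
assert store.get_offline(["user_1"])[0] == store.get_online("user_1")
```

A real feature store such as Feast adds persistence, point-in-time correctness, and a sync mechanism between the offline and online stores, but the core guarantee it provides is the one this sketch asserts.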
"""Base Materializer to realize artifact data."""ASSOCIATED_ARTIFACT_TYPE = ArtifactType.BASE
ASSOCIATED_TYPES = ()
def __init__(
self, uri: str, artifact_store: Optional[BaseArtifactStore] = None
):
"""Initializes a materializer with the given URI.
Args:
uri: The URI where the artifact data will be stored.
artifact_store: The artifact store used to store this artifact.
"""
self.uri = uri
self._artifact_store = artifact_store
def load(self, data_type: Type[Any]) -> Any:
"""Write logic here to load the data of an artifact.
Args:
data_type: The type of data that the artifact should be loaded as.
Returns:
The data of the artifact.
"""
# read from a location inside self.uri
# Example:
# data_path = os.path.join(self.uri, "abc.json")
# with self.artifact_store.open(data_path, "r") as fid:
# return json.load(fid)
...
def save(self, data: Any) -> None:
"""Write logic here to save the data of an artifact.
Args:
data: The data of the artifact to save.
"""
# write `data` into self.uri
# Example:
# data_path = os.path.join(self.uri, "abc.json")
# with self.artifact_store.open(data_path, "w") as fid:
# json.dump(data, fid)
...
def save_visualizations(self, data: Any) -> Dict[str, VisualizationType]:
"""Save visualizations of the given data.
Args:
data: The data of the artifact to visualize.
Returns:
A dictionary of visualization URIs and their types.
"""
# Optionally, define some visualizations for your artifact
# E.g.:
# visualization_uri = os.path.join(self.uri, "visualization.html")
# with self.artifact_store.open(visualization_uri, "w") as f:
# f.write("<html><body>data</body></html>")
# visualization_uri_2 = os.path.join(self.uri, "visualization.png")
# data.save_as_png(visualization_uri_2)
# return {
# visualization_uri: VisualizationType.HTML,
# visualization_uri_2: VisualizationType.IMAGE
# }
...
def extract_metadata(self, data: Any) -> Dict[str, "MetadataType"]:
"""Extract metadata from the given data. | how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/handle-custom-data-types | 484 |