First TPU Setup on Google Cloud
This guide walks you through setting up and accessing your first TPU instance on Google Cloud Platform (GCP).
Prerequisites
Before you begin, ensure you have:
- A Google Cloud account
- Billing enabled on your account
- Basic familiarity with cloud consoles
Step 1: Enable TPU Access
Navigate to the TPU dashboard at: https://console.cloud.google.com/compute/tpus
- Note: You will need to enable the TPU API if you haven’t already
- A valid billing account must be linked to your project
If prompted, enable the TPU API for your project
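If you prefer the command line, the TPU API can also be enabled with the gcloud CLI. This is a minimal sketch, assuming gcloud is installed and authenticated, and that `my-gcp-project` is a placeholder for your project ID:

```bash
# Enable the Cloud TPU API on the currently configured project
gcloud services enable tpu.googleapis.com

# Or target a specific project explicitly (placeholder project ID)
gcloud services enable tpu.googleapis.com --project=my-gcp-project
```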
Step 2: Create your TPU Instance
Click the “Create” button to set up your TPU instance.
Region Selection
- Review available regions and zones for TPUs: https://cloud.google.com/tpu/docs/regions-zones
- For this example, we will use the `us-west4-a` zone
- Important: TPU availability may vary by region
- Tip: Choose a region close to your primary usage location
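If you prefer to check availability from a terminal, the gcloud CLI can list the TPU locations and accelerator types visible to your project. A sketch, assuming gcloud is installed and authenticated, using the zone from this example:

```bash
# List the zones in which the Cloud TPU API is available to your project
gcloud compute tpus locations list

# List the TPU accelerator types offered in the example zone
gcloud compute tpus accelerator-types list --zone=us-west4-a
```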
TPU Configuration
- Select TPU type:
  - We will use a TPU `v5e-8` (corresponding to the `v5litepod-8` accelerator type). This is a TPU node containing 8 v5e TPU chips
  - For detailed specifications about TPU types, refer to our TPU hardware types documentation
- Choose a runtime:
  - Select the `v2-alpha-tpuv5-lite` runtime
  - This runtime is optimized for TPU v5e
  - More information on runtimes can be found in the recommended runtime for TPU section of our TPU hardware page
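The same configuration can be created from a terminal instead of the console. A minimal sketch, assuming the TPU VM architecture (the default for v5e), an installed and authenticated gcloud CLI, and `my-tpu-v5e` as a placeholder instance name:

```bash
# Create a TPU VM with 8 v5e chips and the v5e-optimized runtime
gcloud compute tpus tpu-vm create my-tpu-v5e \
  --zone=us-west4-a \
  --accelerator-type=v5litepod-8 \
  --version=v2-alpha-tpuv5-lite

# Verify that the instance reaches the READY state
gcloud compute tpus tpu-vm list --zone=us-west4-a
```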
Step 3: Access Your TPU
After creation, your TPU instance should be accessible via SSH.
Access your TPU:
- Click the SSH button in the console for immediate terminal access
For permanent SSH access:
- Add your SSH keys following the guide at: https://cloud.google.com/compute/docs/connect/add-ssh-keys
- This enables more convenient access for future sessions
- You can also refer to the SSH section of our gcloud CLI guide
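If you use the gcloud CLI, you can also open an SSH session directly and let gcloud manage the keys for you. A sketch, reusing the placeholder instance name `my-tpu-v5e` from the creation step:

```bash
# Open an interactive SSH session on the TPU VM
gcloud compute tpus tpu-vm ssh my-tpu-v5e --zone=us-west4-a

# Or run a one-off command without staying logged in
gcloud compute tpus tpu-vm ssh my-tpu-v5e --zone=us-west4-a --command="hostname"
```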
Next Steps
Now that you have a working TPU environment, you can start using it for AI workloads. We offer two main paths depending on your use case:
AI Inference and Training Tutorials
Model Serving on TPU
- Follow our serving tutorial: First Model Serving on TPU
- Learn how to deploy and serve ML models efficiently on TPU
Model Training on TPU
- Start with our training guide: First Model Training on TPU
- Learn how to start training ML models on TPU
Choose the tutorial that best matches your immediate needs:
- For deploying existing models, start with our model serving tutorial
- For training new models, begin with our model training tutorial