markdown | code | path | repo_name | license | hash
---|---|---|---|---|---
33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 31c375928a706fe101d440bc9f1028e8 |
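Each of these ES-DOC cells follows the same fill-in pattern. A minimal sketch of completing one, assuming `DOC` is the notebook's pre-initialized document object (the chosen value is a hypothetical illustration):

```python
# Hypothetical completion of the albedo property (illustration only).
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("diagnostic")  # one of the valid choices listed above
```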
33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 69401bd96e4ac4d46d507d64beb0ecd9 |
33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 3e835911ccffeb1bf95eed5e1a13d73b |
33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | a0b9809699b96bb510f25fb0eeb122f3 |
34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | ef5b667def4ae07e3205dae61363c9f3 |
Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 6c74f2b1725649f65569415cf9975f8d |
1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 6998d422db3daabd49bc843a6f53c089 |
1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | f7aa54d0f0ec0052529ab6c6500159a7 |
1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 7b059ee9ebe9c76106d9936be8888ddd |
1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 717f45ebd6668272c81b65605def1af7 |
1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 5480edd10b674cebb4a616c5d9f6ff32 |
2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 98c03cb90ae6c9d609578f8e8c6626cb |
2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | ce95560670f285b2e9acf0d23ff22ba3 |
2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s). | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | f7a70b1ca1092e0ed757a823cdf71056 |
3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 85f9e21d2b6bbe32ec4a9daa345f19bf |
3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 4bb2653a68dd9931f861cdf431d4476f |
3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 3e529e6125b6e850e54c99fc3b99f746 |
3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | af27533241671caa04eced7f3ee74b37 |
3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 97d459236f81bee31ee7f072dc91ab80 |
4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 9668cb6621945a2605a843d28a1e17ef |
4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 357a8c838d2bd94c115823c4750c0de4 |
4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | f8b294f1d08fecb862916aa88ef57fb2 |
5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | a1d709bbf7424c2d12691985c35cf2af |
5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 5021978bcf9ebde384657fff80ef8c87 |
5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | ff346aec55a1511a769c5b84e3ef3eaf |
5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 532b7bc081324b6d9429d533f7b6087c |
6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency with the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 61bbf93931d4bfcd865d59d4786cc237 |
7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 1c3e4a59874fd1ba8a16b95ecd2aaf2b |
7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | d51809ae7be2f31e4437a3378ccf0649 |
8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 3b4ffb82a7a2478075d92369dfdc0002 |
8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | 97fa00881c6983a65a0e82174b4dbb8b |
9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | b2b5e5d7cb4a2d8e01ab8d7d11195bec |
9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | e133fcdbcdc8820e23afbdfb16e69af2 |
9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | c6abb65ad5af90f469ff22f546717e67 |
9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 | dd319197cfb1fa7bdfd4e1032d65bdee |
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Vertex AI: Track parameters and metrics for custom training jobs
Overview
This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.
Dataset
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
Objective
In this notebook, you will learn how to use the Vertex AI SDK for Python to:
* Track training parameters and prediction metrics for a custom training job.
* Extract and perform analysis for all parameters and metrics within an Experiment.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Vertex AI Workbench, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment. | import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install -U tensorflow $USER_FLAG
! python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
! pip3 install scikit-learn {USER_FLAG}
| notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 23f00c1416dcfbeb95182aa1b66d0b15 |
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages. | # Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | dabe15e6866cdba5ba1659355933ecb3 |
Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime > Change runtime type > GPU".
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
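For example, a hedged illustration of this Jupyter behavior (the variable name is hypothetical):

```python
# In a notebook cell: `!` runs a shell command, and `$FILE`
# interpolates the Python variable FILE into that command.
FILE = "requirements.txt"
!cat $FILE
```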
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud. | import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 8f5f368c556747397159d18dc00c3cc7 |
Otherwise, set your project ID here. | if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"} | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | c2f2c16a28431031aa07a9239d62cc31 |
Set gcloud config to your project ID. | !gcloud config set project $PROJECT_ID | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 8338875acaf31988d7afc3691115c817 |
Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. | from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | a98c97c41311e8b05d1b7f15c8c58192 |
Authenticate your Google Cloud account
If you are using Vertex AI Workbench, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. | import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS '' | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | cb819ebf91207321f8b7febcf0f514bf |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI. | BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1" | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 5c449b42ce208231ea52e4d0e5e0ee75 |
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. | ! gsutil mb -l $REGION $BUCKET_URI | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 49f518297b121f1b6b62d4a1554adb81 |
Finally, validate access to your Cloud Storage bucket by examining its contents: | ! gsutil ls -al $BUCKET_URI | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 3377e24fae1ef04c9931f5c5c355b8f5 |
Import libraries and define constants
Import required libraries. | import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 3eef346f960c68e99eed3a6715df5625 |
Initialize Vertex AI and set an experiment
Define experiment name. | EXPERIMENT_NAME = "" # @param {type:"string"} | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 55627b2c7dba6c0f1976dfc09c9ce8a1 |
If EXPERIMENT_NAME is not set, set a default one below: | if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 9ae54e917b3a62c9a0fbc5f9b28163b5 |
Initialize the client for Vertex AI. | aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_URI,
experiment=EXPERIMENT_NAME,
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 441972c525438b2bcdfbf113dd4e5254 |
Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone | !wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_URI}/data/
gcs_csv_path = f"{BUCKET_URI}/data/abalone_train.csv" | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | bcd83b8b8e0cf7dae7f0747a8af81612 |
Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model. | ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 2cf239010dc3809c4018e083a7eb9014 |
Write the training script
Run the following cell to create the training script that is used in the sample custom training job. | %%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--num_units', dest='num_units',
default=64, type=int,
help='Number of unit for first layer.')
args = parser.parse_args()
# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_strategy(strategy)
col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"]
target = "Age"
def aip_data_to_dataframe(wild_card_path):
return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)
for fp in tf.data.Dataset.list_files([wild_card_path])])
def get_features_and_labels(df):
return df.drop(target, axis=1).values, df[target].values
def data_prep(wild_card_path):
return get_features_and_labels(aip_data_to_dataframe(wild_card_path))
model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]),
epochs=args.epochs ,
validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"]))
print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"])))
# save as Vertex AI Managed model
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"]) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 0a4e3e1f58d18bfc84ccbee53db8bb7d |
Launch a custom training job and track its training parameters in Vertex AI ML Metadata | job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 83492ca6e073e397d199808a59bc810f |
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins. | aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
model = job.run(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | b7c1718f176a2d04fc7166e39c145b9e |
Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins. | endpoint = model.deploy(machine_type="n1-standard-4") | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 33fe1ddb859ca24bca3025fa8fcbeec5 |
Once the model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset. | def read_data(uri):
dataset_path = data_utils.get_file("abalone_test.data", uri)
col_names = [
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Age",
]
dataset = pd.read_csv(
dataset_path,
names=col_names,
na_values="?",
comment="\t",
sep=",",
skipinitialspace=True,
)
return dataset
def get_features_and_labels(df):
target = "Age"
return df.drop(target, axis=1).values, df[target].values
test_dataset, test_labels = get_features_and_labels(
read_data(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv"
)
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 647adbddd266f8d2900c09221c7a9304 |
Perform online prediction. | prediction = endpoint.predict(test_dataset.tolist())
prediction | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 5b56364587d4e944c9bfb32c3f67f4c8 |
Calculate and track prediction evaluation metrics. | mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae}) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | f3428d50e65871c11e26eae73bb31231 |
Extract all parameters and metrics created during this experiment. | aiplatform.get_experiment_df() | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | af95f620851a66866ae46e369d31348d |
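The returned object is a pandas DataFrame, so standard analysis applies. A minimal sketch, assuming the SDK's `param.`/`metric.` column-name prefixes, a `run_name` column, and the `mse` metric logged above:

```python
df = aiplatform.get_experiment_df()
# Rank experiment runs by mean squared error, best first.
print(df.sort_values("metric.mse")[["run_name", "param.epochs", "metric.mse"]])
```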
View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console. | print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | 03dcf4fc9e06833730f4fd67fdd8ee67 |
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Vertex AI Dataset
Training Job
Model
Endpoint
Cloud Storage Bucket | # Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete dataset
ds.delete()
# Delete the training job
job.delete()
# Undeploy model from endpoint
endpoint.undeploy_all()
# Delete the endpoint
endpoint.delete()
# Delete the model
model.delete()
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_URI | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 | d0cb9f1faeb2966876ea816a59ae416f |
Read the list of all sections
A section is the content inside the square brackets []. | s = cf.sections()
print '[Output]'
print s | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | e36eb77d1fd1cdcdc07597d2548f688e |
Read the list of option keys under a given section
Options are the keys of the key-value pairs under a section. | opt = cf.options('concurrent')
print '[Output]'
print opt | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | de5061b46137652017c94e9af6c1a778 |
Get the list of key-value pairs under a given section | items = cf.items('concurrent')
print '[Output]'
print items | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | 1dea2e03db58b7b8a8f89e0c60896465 |
Read configuration values as specific data types
The cf object provides four methods, get(), getint(), getboolean() and getfloat(), for reading configuration items of different data types. | db_host = cf.get('db','db_host')
db_port = cf.getint('db','db_port')
thread = cf.getint('concurrent','thread')
print '[Output]'
print db_host,db_port,thread | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | 70da194582830955aacb6f71d7a59b3e |
Modify the value of a configuration item
For example, to change the database password: | cf.set('db','db_pass','newpass')
# The change must be written back to the file to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | 0e4b17189c3edc97306388464e01ab2d |
Add a section | cf.add_section('log')
cf.set('log','name','mylog.log')
cf.set('log','num',100)
cf.set('log','size',10.55)
cf.set('log','auto_save',True)
cf.set('log','info','%(bar)s is %(baz)s!')
# Again, the change must be written back to the file to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | 13c287bcb038197a9eed25c81fe25f85 |
After running the code above, sys.conf gains a new section with the following content:
```bash
[log]
name = mylog.log
num = 100
size = 10.55
auto_save = True
info = %(bar)s is %(baz)s!
```
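A quick check of the new values (an added illustration, not part of the original tutorial): the typed getters return the options with the requested types:

```python
# Read back two of the options just written, using the typed getters.
print cf.getfloat('log', 'size')         # 10.55
print cf.getboolean('log', 'auto_save')  # True
```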
Remove a section | cf.remove_section('log')
# Again, the change must be written back to the file to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | b31b3cb938c3e95ab197e287597b631e |
Remove an option | cf.remove_option('db','db_pass')
# Again, the change must be written back to the file to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit | a8e0c0eec812c8f9503f9fcaae4751e7 |
Setup | !pip install floq_client --quiet
# Imports
import numpy as np
import sympy
import cirq
import floq.client | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | 020751107ce04406b6257f794eb6c7e1 |
Floq simulation | nrows = 10
ncols = 2
qubits = cirq.GridQubit.rect(nrows, ncols) # 20 qubits
parameters = sympy.symbols([f'a{idx}' for idx in range(nrows * ncols)])
circuit = cirq.Circuit(cirq.HPowGate(exponent=p).on(q) for p, q in zip(parameters, qubits)) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | b6529e301de382f9ec2beceb2ecc96b8 |
New observable compatible with Floq
Floq only accepts observables of type cirq.ops.linear_combinations.PauliSum. | observables = []
for i in range(nrows):
for j in range(ncols):
if i < nrows - 1:
observables.append(cirq.Z(qubits[i*ncols + j]) * cirq.Z(qubits[(i + 1)*ncols + j]))
# Z[i * ncols + j] * Z[(i + 1) * ncols + j]
if j < ncols - 1:
observables.append(cirq.Z(qubits[i*ncols + j]) * cirq.Z(qubits[i*ncols + j+1]))
# Z[i * ncols + j] * Z[i * ncols + (j + 1)]
len(observables)
import copy
def sum_pauli_strings(obs):
m = copy.deepcopy(obs[0])
for o in obs[1:]:
m += o
return m
def split_observables(obs):
# hack: split observables into many buckets with at most 26 terms
obs_buckets = [obs[s:s+25] for s in range(0, len(obs), 25)]
measure = []
for obs in obs_buckets:
measure.append(sum_pauli_strings(obs))
return measure
measure = split_observables(observables)
[len(m) for m in measure]
# These two results should have the same number of Pauli string terms
assert sum_pauli_strings(observables) == sum_pauli_strings(measure) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | 17caf022fdb0afc7aa9a1d694a333ed4 |
Padding qubits
Because Floq's minimum number of qubits is 26, we need to pad it. This will be changed in the future. | def pad_circuit(circ, qubits):
return circ + cirq.Circuit([cirq.I(q) for q in qubits])
def get_pad_qubits(circ):
num = len(circ.all_qubits())
return [cirq.GridQubit(num, pad) for pad in range(26 - num)]
pad_qubits = get_pad_qubits(circuit)
padded_circuit = pad_circuit(circuit, pad_qubits)
padded_circuit
values = np.random.random(len(parameters))
resolver = {s: v for s, v in zip(parameters, values)}
print(resolver) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | a90d674715cf90aa10014a7859388ff0 |
Using Floq simulator
Before going further, please FORK THIS COLAB NOTEBOOK and DO NOT SHARE YOUR API KEY with others.
Create & start a Floq instance | # Please specify your API_KEY
API_KEY = "" #@param {type:"string"}
!floq-client "$API_KEY" worker start
client = floq.client.CirqClient(API_KEY) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | 17f0d8060325aa17436cecef7c773746 |
Expectation values from the circuit and measurements | energy = client.simulator.simulate_expectation_values(padded_circuit, measure, resolver)
# energy shows expectation values on each Pauli sum in measure.
energy
# Here is the total energy
sum(energy) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | 9eb1ef371609070ed1534f15ccd7cf67 |
Samples from the circuit | niter = 100
samples = client.simulator.run(padded_circuit, resolver, niter)
samples | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | 0e8aea78c852e1c91f9decc015083757 |
Stop the Floq instance | !floq-client "$API_KEY" worker stop | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 | eebdf4402971dcc63db18577fadfc0bf |
We will mostly use TensorFlow functions to open and process images: | import tensorflow as tf

def open_image(filename, target_shape = (256, 256)):
""" Load the specified file as a JPEG image, preprocess it and
resize it to the target shape.
"""
image_string = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, target_shape)
return image
import tensorflow as tf
# Be careful to sort the image folders so that the anchor and positive images correspond.
anchor_images = sorted([str(anchor_images_path / f) for f in os.listdir(anchor_images_path)])
positive_images = sorted([str(positive_images_path / f) for f in os.listdir(positive_images_path)])
anchor_count = len(anchor_images)
positive_count = len(positive_images)
print(f"number of anchors: {anchor_count}, positive: {positive_count}")
anchor_dataset_files = tf.data.Dataset.from_tensor_slices(anchor_images)
anchor_dataset = anchor_dataset_files.map(open_image)
positive_dataset_files = tf.data.Dataset.from_tensor_slices(positive_images)
positive_dataset = positive_dataset_files.map(open_image)
import matplotlib.pyplot as plt
def visualize(img_list):
"""Visualize a list of images"""
def show(ax, image):
ax.imshow(image)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig = plt.figure(figsize=(6, 18))
num_imgs = len(img_list)
axs = fig.subplots(1, num_imgs)
for i in range(num_imgs):
show(axs[i], img_list[i])
# display the first element of our dataset
anc = next(iter(anchor_dataset))
pos = next(iter(positive_dataset))
visualize([anc, pos])
from tensorflow.keras import layers
# data augmentations
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal"),
# layers.RandomRotation(0.15), # you may add random rotations
layers.RandomCrop(224, 224)
]) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 8f4ee4144d76f2a07521c81dc0f68c6b |
To generate the list of negative images, let's randomize the list of available images (anchors and positives) and concatenate them together. | import numpy as np
rng = np.random.RandomState(seed=42)
rng.shuffle(anchor_images)
rng.shuffle(positive_images)
negative_images = anchor_images + positive_images
np.random.RandomState(seed=32).shuffle(negative_images)
negative_dataset_files = tf.data.Dataset.from_tensor_slices(negative_images)
negative_dataset_files = negative_dataset_files.shuffle(buffer_size=4096)
# Build final triplet dataset
dataset = tf.data.Dataset.zip((anchor_dataset_files, positive_dataset_files, negative_dataset_files))
dataset = dataset.shuffle(buffer_size=1024)
# preprocess function
def preprocess_triplets(anchor, positive, negative):
return (
data_augmentation(open_image(anchor)),
data_augmentation(open_image(positive)),
data_augmentation(open_image(negative)),
)
# The map function is lazy, it is not evaluated on the spot,
# but each time a batch is sampled.
dataset = dataset.map(preprocess_triplets)
# Let's now split our dataset in train and validation.
train_dataset = dataset.take(round(anchor_count * 0.8))
val_dataset = dataset.skip(round(anchor_count * 0.8))
# define the batch size
train_dataset = train_dataset.batch(32, drop_remainder=False)
train_dataset = train_dataset.prefetch(8)
val_dataset = val_dataset.batch(32, drop_remainder=False)
val_dataset = val_dataset.prefetch(8) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 34ff4e6758e7b7aa95ee459fcf710797 |
We can visualize a triplet and display its shape: | anc_batch, pos_batch, neg_batch = next(train_dataset.take(1).as_numpy_iterator())
print(anc_batch.shape, pos_batch.shape, neg_batch.shape)
idx = np.random.randint(0, 32)
visualize([anc_batch[idx], pos_batch[idx], neg_batch[idx]]) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 98a16f4aa6cc0e5afaa1e9236ae20ac3 |
Exercise
Build the embedding network, starting from a ResNet and adding a few layers. The output should have a dimension $d=128$ or $d=256$. Edit the following code; you may use the next cell to test it.
Bonus: Try to freeze the weights of the ResNet. | from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
output = input_img # change that line and edit this code!
embedding = Model(input_img, output, name="Embedding")
output = embedding(np.random.randn(1,224,224,3))
output.shape | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 29feda899b0fa3844552347394a415d3 |
Run the following cell to get the same architecture as ours:
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
base_cnn = resnet.ResNet50(weights="imagenet", input_shape=(224,224,3), include_top=False)
resnet_output = base_cnn(input_img)
flatten = layers.Flatten()(resnet_output)
dense1 = layers.Dense(512, activation="relu")(flatten)
# The batch normalization layer enables to normalize the activations
# over the batch
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)
embedding = Model(input_img, output, name="Embedding")
trainable = False
for layer in base_cnn.layers:
if layer.name == "conv5_block1_out":
trainable = True
layer.trainable = trainable
def preprocess(x):
""" we'll need to preprocess the input before passing them
to the resnet for better results. This is the same preprocessing
that was used during the training of ResNet on ImageNet.
"""
return resnet.preprocess_input(x * 255.) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 3889f40d0fd6d1766001ae6d32ca9146 |
Exercise
Our goal is now to compute the positive and negative distances from the three input images (anchor, positive and negative): $‖f(A) - f(P)‖²$ and $‖f(A) - f(N)‖²$. You may define a custom Layer using the Keras subclassing API, or use any other method.
You will need to run the embedding model defined previously; don't forget to apply the preprocessing function defined above! | anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = [anchor_input, positive_input] # TODO: Change this code to actually compute the distances
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 68f4d6395fa7544c5012374152850648 |
Solution: run the following cell to get exactly the same method as ours. | class DistanceLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def call(self, anchor, positive, negative):
ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)
an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)
return (ap_distance, an_distance)
anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = DistanceLayer()(
embedding(preprocess(anchor_input)),
embedding(preprocess(positive_input)),
embedding(preprocess(negative_input)),
)
siamese_network = Model(
inputs=[anchor_input, positive_input, negative_input], outputs=distances
) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | b8a78e42db8a2ff13e4da1be698152cc |
The final triplet model
Once we are able to produce the distances, we can wrap them in a new Keras Model that includes the loss computation. The following implementation subclasses the Model class, redefining a few functions used internally during model.fit: call, train_step and test_step. | class TripletModel(Model):
"""The Final Keras Model with a custom training and testing loops.
Computes the triplet loss using the three embeddings produced by the
Siamese Network.
The triplet loss is defined as:
L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)
"""
def __init__(self, siamese_network, margin=0.5):
super(TripletModel, self).__init__()
self.siamese_network = siamese_network
self.margin = margin
self.loss_tracker = metrics.Mean(name="loss")
def call(self, inputs):
return self.siamese_network(inputs)
def train_step(self, data):
# GradientTape is a context manager that records every operation that
# you do inside. We are using it here to compute the loss so we can get
# the gradients and apply them using the optimizer specified in
# `compile()`.
with tf.GradientTape() as tape:
loss = self._compute_loss(data)
# Storing the gradients of the loss function with respect to the
# weights/parameters.
gradients = tape.gradient(loss, self.siamese_network.trainable_weights)
# Applying the gradients on the model using the specified optimizer
self.optimizer.apply_gradients(
zip(gradients, self.siamese_network.trainable_weights)
)
# Let's update and return the training loss metric.
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def test_step(self, data):
loss = self._compute_loss(data)
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
def _compute_loss(self, data):
# The output of the network is a tuple containing the distances
# between the anchor and the positive example, and the anchor and
# the negative example.
ap_distance, an_distance = self.siamese_network(data)
loss = ap_distance - an_distance
loss = tf.maximum(loss + self.margin, 0.0)
return loss
@property
def metrics(self):
# We need to list our metrics here so the `reset_states()` can be
# called automatically.
return [self.loss_tracker]
siamese_model = TripletModel(siamese_network)
siamese_model.compile(optimizer=optimizers.Adam(0.0001))
siamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)
embedding.save('best_model.h5')
# uncomment to get a pretrained model
url_pretrained = "https://github.com/m2dsupsdlclass/lectures-labs/releases/download/totallylookslike/best_model.h5"
urlretrieve(url_pretrained, "best_model.h5")
loaded_model = tf.keras.models.load_model('best_model.h5') | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 9704f49a3a407070ddeaa06212bd4976 |
Find most similar images in test dataset
The negative_images list was built by concatenating all possible images, both anchors and positives. We can reuse these to form a bank of possible images to query from.
We will first compute all embeddings of these images. To do so, we build a tf.Dataset and apply two functions: open_img and preprocess. | from functools import partial
open_img = partial(open_image, target_shape=(224,224))
all_img_files = tf.data.Dataset.from_tensor_slices(negative_images)
dataset = all_img_files.map(open_img).map(preprocess).take(1024).batch(32, drop_remainder=False).prefetch(8)
all_embeddings = loaded_model.predict(dataset)
all_embeddings.shape | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | b85772f396776b5f1c58e51bad437cae |
We can build a most_similar function which takes an image path as input and returns the topn most similar images according to the embedding representation. Another metric, such as cosine similarity, could be used here instead (see the sketch after the code below). | random_img = np.random.choice(negative_images)
def most_similar(img, topn=5):
img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
new_emb = loaded_model.predict(preprocess(img_batch))
dists = tf.sqrt(tf.reduce_sum((all_embeddings - new_emb)**2, -1)).numpy()
idxs = np.argsort(dists)[:topn]
return [(negative_images[idx], dists[idx]) for idx in idxs]
print(random_img)
most_similar(random_img)
random_img = np.random.choice(negative_images)
visualize([open_image(im) for im, _ in most_similar(random_img)]) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit | 2a6ea5e1c9ee7966b510468a5ac70690 |
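As mentioned above, cosine similarity is a drop-in alternative to the Euclidean distance. A minimal sketch reusing the same `all_embeddings`, `loaded_model` and helper functions (an added illustration, not part of the original lab):

```python
def most_similar_cosine(img, topn=5):
    img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
    new_emb = loaded_model.predict(preprocess(img_batch))
    # L2-normalize both sides, then rank by descending cosine similarity.
    bank = all_embeddings / np.linalg.norm(all_embeddings, axis=-1, keepdims=True)
    query = new_emb / np.linalg.norm(new_emb, axis=-1, keepdims=True)
    sims = (bank @ query.T).ravel()
    idxs = np.argsort(-sims)[:topn]
    return [(negative_images[idx], sims[idx]) for idx in idxs]
```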
Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory: | import os
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import mne
from mne.preprocessing import find_bad_channels_maxwell
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60) | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | 63629869f7876f6883bdd1715b471fb9 |
Background on SSS and Maxwell filtering
Signal-space separation (SSS) :footcite:TauluKajola2005,TauluSimola2006
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal components), and components attributable to sources outside
the measurement volume (the external components). The internal and external
components are linearly independent, so it is possible to simply discard the
external components to reduce environmental noise. Maxwell filtering is a
related procedure that omits the higher-order components of the internal
subspace, which are dominated by sensor noise. Typically, Maxwell filtering
and SSS are performed together (in MNE-Python they are implemented together
in a single function).
Like SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP
empirically determines a noise subspace based on data (empty-room recordings,
EOG or ECG activity, etc) and projects the measurements onto a subspace
orthogonal to the noise, SSS mathematically constructs the external and
internal subspaces from spherical harmonics_ and reconstructs the sensor
signals using only the internal subspace (i.e., does an oblique projection).
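In code, the whole separation-and-reconstruction step is a single call. A minimal sketch with default settings (a hedged illustration; the full tutorial applies it with the recommended calibration and crosstalk files introduced below):

```python
# Sketch: project out the external SSS components and reconstruct the
# sensor signals from the internal subspace (default arguments shown).
raw_sss = mne.preprocessing.maxwell_filter(raw, verbose=True)
```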
<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,
and should be considered *experimental* for non-Neuromag data. See the
Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring
for details.</p></div>
The MNE-Python implementation of SSS / Maxwell filtering currently provides
the following features:
- Basic bad channel detection
  (:func:~mne.preprocessing.find_bad_channels_maxwell)
- Bad channel reconstruction
- Cross-talk cancellation
- Fine calibration correction
- tSSS
- Coordinate frame translation
- Regularization of internal components using information theory
- Raw movement compensation (using head positions estimated by MaxFilter)
- cHPI subtraction (see :func:mne.chpi.filter_chpi)
- Handling of 3D (in addition to 1D) fine calibration files
- Epoch-based movement compensation as described in
  :footcite:TauluKajola2005 through :func:mne.epochs.average_movements
- Experimental processing of data from (un-compensated) non-Elekta
  systems
Using SSS and Maxwell filtering in MNE-Python
For optimal use of SSS with data from Elekta Neuromag® systems, you should
provide the path to the fine calibration file (which encodes site-specific
information about sensor orientation and calibration) as well as a crosstalk
compensation file (which reduces interference between Elekta's co-located
magnetometer and paired gradiometer sensor units). | fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif') | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | 525ed5e4b8ca552eb0a9740734ea27f2 |
Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent
bad channel noise from spreading.</p></div>
Let's see if we can automatically detect it. | raw.info['bads'] = []
raw_check = raw.copy()
auto_noisy_chs, auto_flat_chs, auto_scores = find_bad_channels_maxwell(
    raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
    return_scores=True, verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | 59a3a84e943442148cda23bc7464fc45 |
<div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.find_bad_channels_maxwell` needs to operate on
a signal without line noise or cHPI signals. By default, it simply
applies a low-pass filter with a cutoff frequency of 40 Hz to the
data, which should remove these artifacts. You may also specify a
different cutoff by passing the ``h_freq`` keyword argument. If you
set ``h_freq=None``, no filtering will be applied. This can be
useful if your data has already been preconditioned, for example
using :func:`mne.chpi.filter_chpi`,
:func:`mne.io.Raw.notch_filter`, or :meth:`mne.io.Raw.filter`.</p></div>
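Following the note above, a short hedged example of disabling the built-in low-pass for data that have already been preconditioned (the variable names on the left are illustrative):

# Hypothetical: skip the default 40 Hz low-pass when the data were already
# cleaned, e.g. with mne.chpi.filter_chpi:
noisy, flat = find_bad_channels_maxwell(raw_check, h_freq=None)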
Now we can update the list of bad channels in the dataset. | bads = raw.info['bads'] + auto_noisy_chs + auto_flat_chs
raw.info['bads'] = bads | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | aa6844f45307cd3a0c605012784d5551 |
We called ~mne.preprocessing.find_bad_channels_maxwell with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will generate such visualizations for
the automated detection of noisy gradiometer channels. | # Only select the data for gradiometer channels.
ch_type = 'grad'
ch_subset = auto_scores['ch_types'] == ch_type
ch_names = auto_scores['ch_names'][ch_subset]
scores = auto_scores['scores_noisy'][ch_subset]
limits = auto_scores['limits_noisy'][ch_subset]
bins = auto_scores['bins']  # The windows that were evaluated.
# We will label each segment by its start and stop time, with up to 3
# digits before and 3 digits after the decimal place (1 ms precision).
bin_labels = [f'{start:3.3f} – {stop:3.3f}'
              for start, stop in bins]
# We store the data in a Pandas DataFrame. The seaborn heatmap function
# we will call below will then be able to automatically assign the correct
# labels to all axes.
data_to_plot = pd.DataFrame(data=scores,
                            columns=pd.Index(bin_labels, name='Time (s)'),
                            index=pd.Index(ch_names, name='Channel'))
# First, plot the "raw" scores.
fig, ax = plt.subplots(1, 2, figsize=(12, 8))
fig.suptitle(f'Automated noisy channel detection: {ch_type}',
             fontsize=16, fontweight='bold')
sns.heatmap(data=data_to_plot, cmap='Reds', cbar_kws=dict(label='Score'),
            ax=ax[0])
[ax[0].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
 for x in range(1, len(bins))]
ax[0].set_title('All Scores', fontweight='bold')
# Now, adjust the color range to highlight segments that exceeded the limit.
sns.heatmap(data=data_to_plot,
            vmin=np.nanmin(limits),  # bads in input data have NaN limits
            cmap='Reds', cbar_kws=dict(label='Score'), ax=ax[1])
[ax[1].axvline(x, ls='dashed', lw=0.25, dashes=(25, 15), color='gray')
 for x in range(1, len(bins))]
ax[1].set_title('Scores > Limit', fontweight='bold')
# The figure title should not overlap with the subplots.
fig.tight_layout(rect=[0, 0.03, 1, 0.95]) | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | e24acfa10e55bfce150bb2eaeb625ee4 |
<div class="alert alert-info"><h4>Note</h4><p>You can use the very same code as above to produce figures for
*flat* channel detection. Simply replace the word "noisy" with
"flat", and replace ``vmin=np.nanmin(limits)`` with
``vmax=np.nanmax(limits)``.</p></div>
You can see the un-altered scores for each channel and time segment in the
left subplots, and thresholded scores – those which exceeded a certain limit
of noisiness – in the right subplots. While the right subplot is entirely
white for the magnetometers, we can see a horizontal line extending all the
way from left to right for the gradiometers. This line corresponds to channel
MEG 2443, which was reported as an auto-detected noisy channel in the step
above. But we can also see another channel exceeding the limits, apparently
in a more transient fashion. It was therefore not detected as bad, because
the number of segments in which it exceeded the limits was less than the
default of 5 that MNE-Python uses.
<div class="alert alert-info"><h4>Note</h4><p>You can request a different number of segments that must be
found to be problematic before
`~mne.preprocessing.find_bad_channels_maxwell` reports them as bad.
To do this, pass the keyword argument ``min_count`` to the
function.</p></div>
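A short hedged example of the min_count keyword mentioned in the note (the value 10 is illustrative):

noisy, flat = find_bad_channels_maxwell(
    raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
    min_count=10)  # require 10 offending segments instead of the default 5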
Obviously, this algorithm is not perfect. Specifically, on closer inspection
of the raw data after looking at the diagnostic plots above, it becomes clear
that the channel that exceeded the "noise" limits in some segments without
qualifying as "bad" does in fact contain some flux jumps. There were just not
enough flux jumps in the recording for our automated procedure to report
the channel as bad. So it can still be useful to manually inspect and mark
bad channels. The channel in question is MEG 2313. Let's mark it as bad: | raw.info['bads'] += ['MEG 2313'] # from manual inspection | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | a64a051686086e2fdc2549cfe9d18ace |
After that, performing SSS and Maxwell filtering is done with a
single call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk
and fine calibration filenames provided (if available): | raw_sss = mne.preprocessing.maxwell_filter(
    raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True)
To see the effect, we can plot the data before and after SSS / Maxwell
filtering. | raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True) | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | 9151c6e24ead83af4624cad4dad5761e |
Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation.
The heartbeat artifact has also been substantially reduced.
The :func:~mne.preprocessing.maxwell_filter function has parameters
int_order and ext_order for setting the order of the spherical
harmonic expansion of the interior and exterior components; the default
values are appropriate for most use cases. Additional parameters include
coord_frame and origin for controlling the coordinate frame ("head"
or "meg") and the origin of the sphere; the defaults are appropriate for most
studies that include digitization of the scalp surface / electrodes. See the
documentation of :func:~mne.preprocessing.maxwell_filter for details.
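For illustration only, here is what passing these parameters explicitly could look like; the values shown are the documented defaults plus an explicit head-frame origin, and most studies can simply omit all of them:

raw_sss_custom = mne.preprocessing.maxwell_filter(
    raw, int_order=8, ext_order=3,              # default expansion orders
    coord_frame='head', origin=(0., 0., 0.04))  # explicit origin in meters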
Spatiotemporal SSS (tSSS)
An assumption of SSS is that the measurement volume (the spherical shell
where the sensors are physically located) is free of electromagnetic sources.
The thickness of this source-free measurement shell should be 4-8 cm for SSS
to perform optimally. In practice, there may be sources falling within that
measurement volume; these can often be mitigated by using Spatiotemporal
Signal Space Separation (tSSS) :footcite:TauluSimola2006.
tSSS works by looking for temporal
correlation between components of the internal and external subspaces, and
projecting out any components that are common to the internal and external
subspaces. The projection is done in an analogous way to
SSP <tut-artifact-ssp>, except that the noise vector is computed
across time points instead of across sensors.
To use tSSS in MNE-Python, pass a time (in seconds) to the parameter
st_duration of :func:~mne.preprocessing.maxwell_filter. This will
determine the "chunk duration" over which to compute the temporal projection.
The chunk duration effectively acts as a high-pass filter with a cutoff
frequency of $\frac{1}{\mathtt{st\_duration}}~\mathrm{Hz}$; this
effective high-pass has an important consequence:
- In general, larger values of st_duration are better (provided that your
  computer has sufficient memory) because larger values of st_duration
  will have a smaller effect on the signal.
- If the chunk duration does not evenly divide your data length, the final
  (shorter) chunk will be added to the prior chunk before filtering, leading
  to slightly different effective filtering for the combined chunk (the
  effective cutoff frequency differing at most by a factor of 2). If you need
  to ensure identical processing of all analyzed chunks, either:
  - choose a chunk duration that evenly divides your data length (only
    recommended if analyzing a single subject or run), or
  - include at least 2 * st_duration of post-experiment recording time at
    the end of the :class:~mne.io.Raw object, so that the data you intend to
    further analyze is guaranteed not to be in the final or penultimate chunks.
Additional parameters affecting tSSS include st_correlation (to set the
correlation value above which correlated internal and external components
will be projected out) and st_only (to apply only the temporal projection
without also performing SSS and Maxwell filtering). See the docstring of
:func:~mne.preprocessing.maxwell_filter for details.
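For example, st_duration=10 yields an effective high-pass cutoff of 1/10 = 0.1 Hz. A hedged sketch of a tSSS call (the parameter values are illustrative; 0.98 is the documented default for st_correlation):

raw_tsss = mne.preprocessing.maxwell_filter(
    raw, cross_talk=crosstalk_file, calibration=fine_cal_file,
    st_duration=10., st_correlation=0.98)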
Movement compensation
If you have information about subject head position relative to the sensors
(i.e., continuous head position indicator coils, or :term:cHPI <HPI>), SSS
can take that into account when projecting sensor data onto the internal
subspace. Head position data can be computed using
:func:mne.chpi.compute_chpi_locs and :func:mne.chpi.compute_head_pos,
or loaded with the :func:mne.chpi.read_head_pos function. The
example data <sample-dataset> doesn't include cHPI, so here we'll
load a :file:.pos file used for testing, just to demonstrate: | head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
                             'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces') | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause | 54fab2dd84f22bb9678fa0ef8696a6ee |
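With head position data that actually matches the recording, movement compensation is enabled by passing it to maxwell_filter. A sketch (purely illustrative here, since the head positions above come from a different test file than raw):

raw_sss_mc = mne.preprocessing.maxwell_filter(
    raw, cross_talk=crosstalk_file, calibration=fine_cal_file,
    head_pos=head_pos)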
Comparing the time | import timeit

# Note: timeit.timeit() with no arguments times an empty statement a million
# times; it is not a clock. Use timeit.default_timer() for timestamps.
start = timeit.default_timer()
X = range(1000)
pySum = sum([n*n for n in X])
end = timeit.default_timer()
print("Total time taken: ", end - start) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 | a7d70f9b45778ea5df30fb1ad16c14a6 |
Learning Scipy | # reading the web data
import numpy as np
import scipy as sp  # note: older SciPy re-exported NumPy's genfromtxt;
# on recent SciPy versions use np.genfromtxt instead
import matplotlib.pyplot as plt

data = sp.genfromtxt("data/web_traffic.tsv", delimiter="\t")
print(data[:3])
print(len(data)) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 | 497f660925b83fc2550f79fc2d0c8644 |
Preprocessing and Cleaning the data | X = data[:, 0]
y = data[:, 1]
# checking for nan values
print(sum(np.isnan(X)))
print(sum(np.isnan(y))) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 | 4ed3a67c5a027491a91a0a979372e0c1 |
Filtering the nan data | X = X[~np.isnan(y)]
y = y[~np.isnan(y)]
# checking for nan values
print(sum(np.isnan(X)))
print(sum(np.isnan(y)))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(X, y, '.b')
ax.margins(0.2)
plt.xticks([w*24*7 for w in range(0, 6)], ["week %d" %w for w in range(0, 6)])
ax.set_xlabel("Week")
ax.set_ylabel("Hits / Week")
ax.set_title("Web Traffic over weeks") | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 | d4b7a5d8f215920fb264ee356078f343 |